Your API has no rate limiter. That means anyone can do this:

```javascript
for (let i = 0; i < 100000; i++) {
  fetch('/api/login', { method: 'POST', body: credentials });
}
```

Brute-force your login endpoint. Scrape all your data. Crash your server with a burst.

Rate limiting is not optional. Here's how I implement it in Express/Node.js:

```javascript
import rateLimit from 'express-rate-limit';

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10, // max 10 login attempts per window
  message: { error: 'Too many attempts. Try again in 15 minutes.' },
  standardHeaders: true,
  legacyHeaders: false,
});

app.post('/api/auth/login', loginLimiter, authController.login);
```

Different endpoints need different limits:

→ Login: strict (10/15min) — brute-force target
→ Public search: relaxed (100/min) — UX matters
→ File upload: very strict (5/hour) — resource cost

I added this to every project after forgetting it in my first one.

Security is not a feature you add at the end. It's a constraint you build around from day 1.

#Backend #Security #NodeJS #APIDesign #WebSecurity
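The fixed-window logic behind a limiter like this can be sketched as a tiny in-memory version — a toy illustration of what express-rate-limit does for you (plus headers, stores, and edge cases); `createLimiter` and `isAllowed` are hypothetical names for this sketch, not the library's API:

```javascript
// Minimal fixed-window rate limiter (illustration only).
// Tracks hits per key (e.g. client IP) inside a rolling window.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // Window expired (or first hit): start a fresh window.
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // reject once the window's budget is spent
  };
}

// Mirror the login limiter above: 10 attempts per 15 minutes.
const allow = createLimiter({ windowMs: 15 * 60 * 1000, max: 10 });
```

In real deployments the state lives in Redis (or the library's store abstraction) so limits hold across multiple server instances — an in-process Map resets on restart and isn't shared.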
Nguyễn Việt Thành’s Post
More Relevant Posts
Most developers treat auth as solved. It isn't. The mechanism you choose has deep consequences for security, scale, and architecture. Here's the real breakdown no one explains properly:

Cookies are exceptional — for browsers.

When a user logs in, the server sets a cookie. From that point, the browser handles everything: storage, transport, expiry. No client-side code needed.

Three flags make this genuinely secure:
→ HttpOnly — JS can't read the token. XSS can't steal it.
→ Secure — HTTPS only.
→ SameSite — neutralizes CSRF in modern browsers.

For a classic web app, cookies are still the cleanest auth solution that exists.

But browsers aren't the whole world. Cookies are browser-native. That's the ceiling. Mobile apps, CLI tools, microservices — none of them benefit from automatic cookie handling. You're fighting CORS when your frontend, API, and auth service live on different domains. And third-party cookies are actively dying — ITP and browser privacy features are accelerating that fast.

So what do you actually use? It depends entirely on what you're building:

→ Bearer tokens (JWT) — stateless, works across domains, ideal for APIs and SPAs. Tradeoff: storage is your problem. localStorage is an XSS risk. Memory is safer.
→ API keys — dead simple for server-to-server. No user identity, no session overhead. Don't use them where user-level auth matters.
→ OAuth / OIDC — "Login with Google" territory. Standardized, scoped, federated. Complex for a reason — don't add this complexity if you don't need it.
→ Hybrid (cookies + headers) — the right call when you're serving both browsers and external clients simultaneously.

The mistake most engineers make: picking one auth model and forcing it everywhere. Cookies in a distributed system = pain. JWTs in a simple session-based web app = unnecessary complexity.

Good auth design isn't about finding the "best" mechanism. It's about matching the mechanism to the client model. Know your clients. Then choose.
#BackendDevelopment #SystemDesign #Authentication #SoftwareArchitecture #CyberSecurity #WebSecurity
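The three cookie flags above end up as attributes on the Set-Cookie response header. A minimal sketch of composing that header — `buildSessionCookie` is a hypothetical helper for illustration (frameworks like Express do this via `res.cookie`), not a library API:

```javascript
// Sketch: a hardened session cookie with the three flags from the post.
function buildSessionCookie(name, value, { maxAgeSeconds }) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    `Max-Age=${maxAgeSeconds}`,
    'Path=/',
    'HttpOnly',     // JS can't read it -> XSS can't steal it
    'Secure',       // sent over HTTPS only
    'SameSite=Lax', // blocks most cross-site request forgery
  ].join('; ');
}

const header = buildSessionCookie('sid', 'abc123', { maxAgeSeconds: 3600 });
// e.g. "sid=abc123; Max-Age=3600; Path=/; HttpOnly; Secure; SameSite=Lax"
```

`SameSite=Strict` is even tighter but breaks top-level navigation from external links; `Lax` is the common default for session cookies.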
CORS is one of those things every developer hits… and most don't truly understand.

CORS (Cross-Origin Resource Sharing) is a browser security mechanism that controls how resources are requested from a different origin.

Let's be clear: CORS is not a backend problem. It's enforced by the browser.

If your frontend (React) runs on http://localhost:3000 and your backend (.NET API) runs on http://localhost:5000, that's a cross-origin request → CORS kicks in.

Two key scenarios:

1. Simple Request
GET/POST without custom headers → the browser sends the request directly.

2. Preflight Request
Triggered when using:
- PUT / DELETE
- Custom headers
→ The browser sends an OPTIONS request first to check permissions.

If the server doesn't allow it → the request is blocked.

Common mistake: developers think "the API is not working." Reality: the browser is blocking it.

In .NET, fixing CORS is straightforward:

```csharp
builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowFrontend", policy =>
    {
        policy.WithOrigins("http://localhost:3000")
              .AllowAnyHeader()
              .AllowAnyMethod();
    });
});

app.UseCors("AllowFrontend");
```

Hard truth: if you're blindly using AllowAnyOrigin() in production, you're creating a security risk. Correct approach: be explicit about allowed origins.

CORS is not something to memorize. It's something to understand at the request/response level. If you understand Request → Preflight → Response headers → Browser decision, you won't struggle with it again.
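The browser's final decision boils down to comparing the request's origin against the Access-Control-Allow-Origin header the server sent back. A sketch of that check — `allowsOrigin` is a hypothetical helper mirroring the browser's rule, not a real API:

```javascript
// Does the response's CORS header permit this requesting origin?
function allowsOrigin(responseHeaders, requestOrigin, withCredentials = false) {
  const allowed = responseHeaders['access-control-allow-origin'];
  if (allowed === '*') {
    // The wildcard is rejected for credentialed (cookie-bearing) requests.
    return !withCredentials;
  }
  return allowed === requestOrigin; // must match the origin exactly
}
```

This is also why AllowAnyOrigin() is risky: it tells every site on the internet "your JavaScript may read my responses."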
We use localStorage for everything because it's easy. But is it the right choice for a high-performance app?

I've seen localStorage used as a catch-all for state. While it's convenient, it comes with two massive risks: main-thread blocking and security vulnerabilities.

The problem: the synchronous "hitch."
localStorage is synchronous. When you call .getItem(), the JavaScript engine stops everything to read from the disk. On a slow mobile device (the "Remote Reality"), a large JSON string in localStorage can cause a 50ms–100ms "jank" in your UI.

Real-life scenario: you're building a remote-friendly dashboard. You store a 2MB user-preference object in localStorage. Every time the app boots, the UI freezes for a split second while that object is parsed. To the user, it feels "unpolished."

Solution: use the right tool for the job.

1. sessionStorage for temp state: Need to remember a scroll position or a partially filled form only while the tab is open? Use sessionStorage. It prevents "stale data" bugs when a user opens the app in a new tab.

2. IndexedDB for large data: If you have more than 5MB of data or need to perform searches, move to IndexedDB. It's asynchronous, meaning it won't block your animations or clicks.

3. The StorageEvent trick: Did you know you can sync state across two open tabs without a backend? Listening to the storage event allows Tab A to react instantly when Tab B updates localStorage.

Security warning: localStorage vs. XSS. Never, ever store JWTs or PII (Personally Identifiable Information) in localStorage. If an attacker succeeds in a single XSS attack, your user's account is gone. Use HttpOnly cookies for sensitive tokens.

I refactored a multi-tab dashboard that was "drifting" out of sync. By implementing a useStorageSync hook that listens to the storage event, I ensured that "Dark Mode" and "User Profile" updates reflected across all open tabs instantly, without a single API call.
#ReactJS #WebPerformance #WebSecurity #FrontendArchitecture #RemoteDeveloper
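The StorageEvent trick above can be sketched as a pure handler plus the browser wiring. Everything here is illustrative — `applyStorageEvent` is a hypothetical helper (the post's `useStorageSync` hook would wrap the same idea), and `'theme'` is an assumed key:

```javascript
// Pure handler: given current state and a storage event from another tab,
// return the next state. Keeping it pure makes the merge logic testable.
function applyStorageEvent(state, event) {
  if (event.key !== 'theme') return state; // only react to the key we own
  // event.newValue is null when the key was removed in the other tab.
  return { ...state, theme: event.newValue ?? 'light' };
}

// Browser-only wiring (fires in Tab A when Tab B writes localStorage):
// window.addEventListener('storage', (e) => {
//   state = applyStorageEvent(state, e);
// });
```

Note the storage event fires only in *other* tabs, never the one that did the write — the writing tab must update its own state directly.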
Ever noticed how a website still knows it's you even after a refresh? No magic. Just smart engineering working behind the scenes. Let's break it down in a simple way 👇

🔐 How the Web Remembers You
Every time you log in, the system needs a way to identify you on the next request. That's where three core concepts come in:

🍪 Cookies - Small but Powerful
Cookies live in your browser. They store tiny pieces of data and automatically travel with every request to the server.
✔ Used for preferences (theme, language)
✔ Can store session IDs
✔ Lightweight and fast
But: not ideal for sensitive data unless secured properly.

🗂️ Sessions - Server-Side Control
Sessions shift the responsibility to the server. Instead of storing your data in the browser, the server:
• Creates a session ID
• Stores your data internally
• Sends only the ID back to your browser (via cookie)
✔ More secure
✔ Full control on the server
❌ Can become heavy at scale

🪪 Tokens (JWT) - The Modern Approach
Tokens are like a self-contained identity card. The server gives you a signed token that includes your data. Every request carries this token, and the server simply verifies it.
✔ Stateless (no server storage needed)
✔ Scales easily across services
✔ Perfect for APIs & mobile apps
But: needs careful handling (expiry, storage, security).

⚖️ So, what should you use?
• Building a traditional web app? → Sessions + Cookies work well
• Building APIs / mobile apps / microservices? → Tokens (JWT) are a better fit

💡 Understanding this is key to building secure and scalable applications. It's not just about login - it's about trust between client and server.

#WebDevelopment #FullStackDeveloper #SoftwareEngineering #Programming #CodingLife #BackendDevelopment #FrontendDevelopment #TechExplained #CyberSecurity #APIDevelopment #MERNStack #DeveloperCommunity #LearnToCode #BuildInPublic
Most developers think cookies are just for storing small data. In reality, they are a core part of system design.

Think about logging into an app. You enter username + password once. Then you navigate across pages… You don't log in again every time. Why? Cookies.

What's actually happening after login:
• Server generates a session (or token)
• Sends it as a cookie to the browser
Now for every request:
• Browser → automatically sends the cookie
• Server → identifies the user

HTTP is stateless. Every request is independent. Without cookies:
• The server can't remember users
• Every request would need authentication
That's not scalable or practical.

Cookies solve this. They introduce state over a stateless protocol:
→ Enable sessions
→ Maintain user identity
→ Reduce repeated authentication

But there's a trade-off: where you store state matters.

1. Server-Side Sessions
Cookie stores a session ID; data is stored on the server.
+ More control
- Requires memory / storage
- Harder to scale horizontally

2. Client-Side (JWT in Cookies)
Cookie stores the token; no server session storage.
+ Scales easily
+ Stateless backend
- Harder to revoke
- Larger payloads

3. Security Considerations
Cookies can be risky if misused:
• Use HttpOnly → prevent JS access
• Use Secure → only HTTPS
• Use SameSite → prevent CSRF

Your cookie strategy affects:
• Scalability
• Security
• Performance
• Load balancing (sticky sessions vs stateless)

Cookies are not just a frontend detail. They define how your system handles identity at scale. Because in system design, state management is everything.

#SystemDesign #WebArchitecture #BackendEngineering #Security #Scalability
A single uncaught error crashed our dashboard for thousands of users.

Not a server outage. Not a DDoS attack. Not a deployment gone wrong. One API returned null instead of an array. The .map() call threw. React unmounted the entire component tree.

White screen. No error message. No fallback. Nothing. Thousands of users saw a blank page for hours.

The fix wasn't more try/catch blocks. We had try/catch everywhere already. In every API call. In every util function. In every handler. Dozens of catch blocks across dozens of files.

The problem? try/catch doesn't protect your UI. If a child component throws during rendering, try/catch can't catch it. React will unmount everything above it.

Here's what we replaced it with:

❌ Before (what we had):

```jsx
function Dashboard() {
  try {
    return (
      <>
        <UserStats />
        <RevenueChart />
        <ActivityFeed />
      </>
    );
  } catch (e) {
    return <p>Something went wrong</p>;
  }
}
// This NEVER catches render errors.
// The JSX only creates elements here; React renders them later,
// outside this try/catch.
```

✅ After (what fixed it):

```jsx
function Dashboard() {
  return (
    <>
      <ErrorBoundary fallback={<StatsError />}>
        <UserStats />
      </ErrorBoundary>
      <ErrorBoundary fallback={<ChartError />}>
        <RevenueChart />
      </ErrorBoundary>
      <ErrorBoundary fallback={<FeedError />}>
        <ActivityFeed />
      </ErrorBoundary>
    </>
  );
}
// One component fails → only that section shows its fallback.
// The rest of the dashboard keeps working.
```

The result:
→ try/catch blocks cut by more than half
→ Zero full-page crashes in months
→ Error recovery: instant (no page reload)

If one widget fails, only that widget shows a fallback. The rest of the page keeps working.

Your users don't care about your error handling strategy. They care that the app doesn't go blank. Error Boundaries aren't optional. They're infrastructure.

#React #TypeScript #WebDevelopment
𝐇𝐨𝐰 𝐭𝐨 𝐃𝐞𝐜𝐨𝐝𝐞 𝐉𝐖𝐓 𝐓𝐨𝐤𝐞𝐧𝐬: 𝐀 𝐒𝐭𝐞𝐩-𝐛𝐲-𝐒𝐭𝐞𝐩 𝐆𝐮𝐢𝐝𝐞

If you've worked with modern web apps or APIs, chances are you've heard of JWT (JSON Web Tokens). But how does it actually work?

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗮 𝗝𝗪𝗧?
JWT stands for JSON Web Token – an open standard (RFC 7519) for securely transmitting information as a JSON object. It's compact, URL-safe, and digitally signed, often used for authentication and authorization in stateless systems like REST APIs.

𝗝𝗪𝗧 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲
A JWT has three parts, separated by dots (.):
𝘹𝘹𝘹𝘹𝘹.𝘺𝘺𝘺𝘺𝘺.𝘻𝘻𝘻𝘻𝘻
• Header (specifies algorithm & type)
• Payload (the data — user info, roles, claims)
• Signature (verifies the token's integrity)
Each part is Base64URL encoded.

𝗛𝗼𝘄 𝗝𝗪𝗧 𝗪𝗼𝗿𝗸𝘀 (𝗦𝘁𝗲𝗽-𝗯𝘆-𝗦𝘁𝗲𝗽)
1. User Logs In
2. Token Issued
3. Token Sent to Client
4. Client Makes Requests
5. Server Validates Token

𝗞𝗲𝘆 𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀
• Stateless (no server-side sessions)
• Compact & Fast
• Easy to use in APIs and Microservices
• Works across domains (great for SPAs & mobile apps)

𝗖𝗼𝗺𝗺𝗼𝗻 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀
• API authentication
• Single Sign-On (SSO)
• Access control in distributed systems

𝗣𝗿𝗼 𝗧𝗶𝗽: Always use HTTPS and short-lived tokens. For added security, combine JWTs with refresh tokens and expiration checks.

𝗟𝗲𝗮𝗿𝗻 𝗺𝗼𝗿𝗲: https://lnkd.in/dSXWdJau

Have you implemented JWT in your project? What challenges did you face?

Follow me Kanaiya Katarmal and hit the 🔔 on my profile so you don't miss upcoming .NET tips and deep dives.
𝗦𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝘁𝗼 𝗺𝘆 𝗻𝗲𝘄𝘀𝗹𝗲𝘁𝘁𝗲𝗿: https://lnkd.in/dp7tz_3V

#JWT #JSONWebToken #Authentication #Authorization #WebSecurity #ASPNetCore #OAuth #TokenBasedAuth #Microservices #WebAPI #DeveloperTips #DevEx
We launched 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗗𝗲𝗲𝗽 𝗦𝗰𝗮𝗻, our next-generation vulnerability scanner for 𝘄𝗲𝗯, 𝗔𝗣𝗜𝘀, 𝗮𝗻𝗱 𝗺𝗼𝗯𝗶𝗹𝗲 𝗮𝗽𝗽𝘀 (𝗶𝗢𝗦, 𝗔𝗻𝗱𝗿𝗼𝗶𝗱, 𝘀𝗼𝗼𝗻 𝗛𝗮𝗿𝗺𝗼𝗻𝘆𝗢𝗦).

We are 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗮𝗻𝗱 𝗼𝗻𝗹𝘆 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗱𝗲𝗹𝗶𝘃𝗲𝗿𝗶𝗻𝗴 𝗱𝗲𝗲𝗽 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝘀𝗰𝗮𝗻𝗻𝗶𝗻𝗴 𝗳𝗼𝗿 𝗺𝗼𝗯𝗶𝗹𝗲, with full coverage for web and APIs, going beyond standard scanning to give security teams more confidence and control.

Agentic Deep Scan uncovers everything from logic flaws in authentication, onboarding, payments, and account workflows to runtime tampering, API abuse, broken authorization patterns (BOLA, BFLA, IDOR-style), and cross-component attack chains. It can be customized to focus on individual risks, accessed 𝘂𝘀𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗼𝘄𝗻 𝗔𝗜 𝗔𝗣𝗜 𝗸𝗲𝘆 (𝗕𝗬𝗢𝗞), and given extra context like documentation and source code to improve its testing.

How it works:
• 𝗔𝗱𝗱 𝗬𝗼𝘂𝗿 𝗔𝗜 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝗿 𝗞𝗲𝘆 (𝗕𝗬𝗢𝗞): connect your credentials so usage and spend align with internal policies
• 𝗥𝘂𝗻 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗗𝗲𝗲𝗽 𝗦𝗰𝗮𝗻 𝗼𝗻 𝗪𝗲𝗯, 𝗔𝗣𝗜, 𝗼𝗿 𝗠𝗼𝗯𝗶𝗹𝗲 𝗧𝗮𝗿𝗴𝗲𝘁𝘀: explore runtime behavior, workflow logic, authorization paths, and cross-component attack chains
• 𝗥𝗲𝗰𝗲𝗶𝘃𝗲 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆-𝗙𝗶𝗿𝘀𝘁 𝗢𝘂𝘁𝗽𝘂𝘁: validated findings with proof-grade evidence for confident triage
• 𝗥𝗲𝘁𝗲𝘀𝘁 𝘁𝗼 𝗩𝗲𝗿𝗶𝗳𝘆 𝗙𝗶𝘅𝗲𝘀: confirm that remediation resolves the underlying issue and reduces risk

A real vulnerability example is shared in the first comment.

Learn more ↓
𝗪𝗲𝗯: https://lnkd.in/egVJWqMB
𝗠𝗼𝗯𝗶𝗹𝗲: https://lnkd.in/eUqeGndk
Caching improves performance… until it starts breaking your app.

Most frontend bugs today aren't from missing caching — they're from 𝘄𝗿𝗼𝗻𝗴 𝗰𝗮𝗰𝗵𝗶𝗻𝗴 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀.

Here are some common mistakes developers make:

1️⃣ 𝗖𝗮𝗰𝗵𝗶𝗻𝗴 𝘀𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗲 𝗱𝗮𝘁𝗮
Storing tokens or private data in localStorage can expose it to security risks.
👉 Not everything should be cached.

2️⃣ 𝗡𝗲𝘃𝗲𝗿 𝗶𝗻𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗻𝗴 𝗰𝗮𝗰𝗵𝗲
Data gets cached once and never refreshed.
👉 Users end up seeing outdated information.

3️⃣ 𝗨𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝘄𝗿𝗼𝗻𝗴 𝗧𝗧𝗟
Caching everything for long durations without considering how often data changes.
👉 Leads to stale UI.

4️⃣ 𝗜𝗴𝗻𝗼𝗿𝗶𝗻𝗴 𝗯𝗮𝗰𝗸𝗴𝗿𝗼𝘂𝗻𝗱 𝗿𝗲𝗳𝗿𝗲𝘀𝗵
Only relying on cache without revalidating.
👉 Data becomes outdated until manual refresh.

5️⃣ 𝗙𝗲𝘁𝗰𝗵𝗶𝗻𝗴 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗔𝗣𝗜 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝘁𝗶𝗺𝗲𝘀
Different components calling the same endpoint again and again.
👉 Wastes performance and increases load.

𝗥𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗲𝘅𝗮𝗺𝗽𝗹𝗲:
A user updates their profile name. But your app:
• shows the old name in the Navbar (cached)
• shows the new name in Settings (fresh API)
Now your UI is inconsistent — and confusing.

Caching isn't just about storing data. It's about:
→ what to cache
→ how long to cache
→ when to refresh

Get this wrong, and caching becomes a bug — not an optimization.

#frontenddevelopment #webperformance #systemdesign #reactjs #softwareengineering
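Mistakes 2 and 3 come down to one rule: a cached entry is only valid while its TTL holds, and after that it must count as a miss. A minimal sketch — `cacheGet`/`cacheSet` are hypothetical names, and libraries like React Query or SWR implement this (plus background revalidation, which fixes mistake 4) for you:

```javascript
const cache = new Map(); // key -> { value, expiresAt }

function cacheSet(key, value, ttlMs, now = Date.now()) {
  cache.set(key, { value, expiresAt: now + ttlMs });
}

function cacheGet(key, now = Date.now()) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (now >= entry.expiresAt) {
    cache.delete(key); // TTL elapsed: evict and treat as a miss -> refetch
    return undefined;
  }
  return entry.value;
}
```

The Navbar/Settings inconsistency in the example is the same bug at a higher level: two components reading the same data with different freshness. A shared cache with explicit invalidation on write (delete the key when the profile is saved) keeps them in sync.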
Every web developer has seen this error.

"Access to fetch at 'http://api.myapp.com' from origin 'http://localhost:3000' has been blocked by CORS policy"

And every web developer has Googled "how to fix CORS" without understanding what it actually is. Here's the real explanation:

CORS = Cross-Origin Resource Sharing

First — what is an "origin"?
Origin = protocol + domain + port
http://localhost:3000 ← one origin
http://localhost:8000 ← different origin (different port)
https://myapp.com ← different origin (different protocol + domain)

The Same-Origin Policy:
Browsers have a built-in security rule: JavaScript on one origin cannot read responses from a different origin.

Why? Imagine you're logged into your bank. A malicious website runs this in the background:

fetch("https://lnkd.in/gm8X7yTt")

Without the Same-Origin Policy → your bank responds with your data → the hacker steals it. With the Same-Origin Policy → the browser blocks the response. You're safe.

So where does CORS fit in? CORS is how a server says: "It's okay, I trust this other origin." The server adds headers to its response:

Access-Control-Allow-Origin: https://myapp.com

The browser sees this → "The server said it's okay" → allows the response through.

How to fix CORS in FastAPI:

```python
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```

The mistake everyone makes:

allow_origins=["*"] ← allows ALL origins

Fine for development. Never do this in production. In production: explicitly list only the origins you trust.

CORS is not a backend bug. It's the browser protecting your users. The fix lives on the server. The protection lives in the browser.

How long did CORS confuse you before it clicked? 👇

#CORS #WebDevelopment #FastAPI #React #Frontend #Backend #SoftwareEngineering
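"Origin = protocol + domain + port" is exactly what the standard URL class computes. A tiny sketch of the same-origin comparison (the `sameOrigin` helper name is illustrative; `URL` is built into both browsers and Node):

```javascript
// The origin is the scheme + host + port; paths and query strings don't count.
function origin(urlString) {
  return new URL(urlString).origin; // e.g. "http://localhost:3000"
}

function sameOrigin(a, b) {
  return origin(a) === origin(b);
}
```

Note that default ports fold into the scheme: http://myapp.com and http://myapp.com:80 share an origin, while http://myapp.com and https://myapp.com do not.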