Koufi Mohamed’s Post
-
💡 **Database Optimization Tip**

If your queries are slow, check your **indexes**. Adding indexes on frequently searched columns can improve performance drastically. ⚡

Example: columns used in WHERE, JOIN, and ORDER BY clauses are good candidates for indexing.

🚀 Right indexing = Faster queries = Scalable app

#MySQL #Database #BackendDevelopment
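A minimal runnable sketch of the tip, using Python's stdlib sqlite3 (the idea carries over to MySQL). The table and column names are illustrative assumptions; `EXPLAIN QUERY PLAN` shows the query switching from a full scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO users (email, city) VALUES (?, ?)",
    [(f"user{i}@example.com", f"city{i % 50}") for i in range(1000)],
)

# Without an index, a WHERE lookup scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE city = 'city7'"
).fetchone()
print(plan[-1])  # detail column mentions a table SCAN

# Index the frequently searched column (WHERE / JOIN / ORDER BY candidates).
conn.execute("CREATE INDEX idx_users_city ON users (city)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE city = 'city7'"
).fetchone()
print(plan[-1])  # now a SEARCH using idx_users_city
```

The same before/after check works on MySQL with `EXPLAIN`, which is usually the quickest way to confirm an index is actually being used.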
-
More database connections do not always mean more performance. That sounds wrong at first, but in real systems it is often the opposite. I came across a useful video on this topic: https://lnkd.in/gvNdAxPT

When connection count keeps growing, the database starts spending more time on coordination than on useful work. A few hard truths:

• More connections can increase context switching and CPU overhead
• Memory usage grows, even when many sessions are idle
• Lock contention becomes more visible under concurrency
• Query latency can rise even when the database is “up”
• Application teams may mistake connection growth for scalability

The real goal is not maximum connections. The real goal is controlled concurrency. What usually helps:

• Proper connection pooling
• Fast query execution
• Reduced transaction time
• Better indexing
• Fewer idle or stuck sessions
• Capacity planning based on workload, not assumptions

This is one of the most common performance traps in database-backed applications: when the app slows down, people add more connections. But sometimes that is exactly what makes it slower.

Scalability is not about how many sessions you can open. It is about how efficiently the database can complete work under pressure.

Have you seen cases where increasing database connections actually made performance worse? Share your experience.

#DatabasePerformance #PostgreSQL #MySQL #Scalability #DevOps #Architecture
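The "controlled concurrency" point can be sketched in a few lines of Python stdlib: a bounded pool hands out a fixed number of connections and makes callers wait, instead of letting every request open its own session. The pool size and sqlite3 stand-in are illustrative assumptions, not from the post:

```python
import queue
import sqlite3

POOL_SIZE = 5  # assumption: cap concurrency instead of one connection per request

pool = queue.Queue(maxsize=POOL_SIZE)
for _ in range(POOL_SIZE):
    pool.put(sqlite3.connect(":memory:", check_same_thread=False))

def run_query(sql):
    conn = pool.get()   # blocks when all connections are busy,
    try:                # instead of piling more sessions onto the database
        return conn.execute(sql).fetchone()
    finally:
        pool.put(conn)  # always return the connection to the pool

result = run_query("SELECT 1 + 1")
print(result)  # (2,)
```

Real pools (PgBouncer, HikariCP, SQLAlchemy's pool) add timeouts, health checks, and recycling on top of this same bounded-queue idea.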
🔥 Why More Database Connections Make Your App Slower
-
Ever hit this weird issue with SQLite + EF Core? You run a bulk insert… it fails midway (fine, data issue). But then every insert after that keeps failing. Even valid ones. And the only “fix”? Restarting the app.

I recently went through this exact scenario, and it turned out not to be a SQLite limitation alone, but a combination of:

• Unfinished transaction state
• Reusing a broken DbContext
• EF Core not resetting after failure

The result? Your app gets stuck in a broken state without you realizing it.

In this article, I break down:

• What actually happens after a failed bulk insert
• Why your DbContext becomes unsafe to reuse
• Why a restart magically fixes everything
• The correct way to handle recovery in production

If you're working with .NET, EF Core, or building APIs that deal with bulk data, this is something you’ll definitely want to understand.

🔗 Read the full article here: https://lnkd.in/eWHSZT9d

Curious to know: have you faced similar “restart fixes everything” issues in your apps?

#dotnet #efcore #sqlite #backenddevelopment #softwareengineering #webapi #debugging #programming
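The article is about EF Core, but the underlying failure/recovery pattern can be sketched with Python's stdlib sqlite3 as an analogy (the data and table names are made up): a failed bulk insert leaves a broken transaction, and the fix is to reset it explicitly rather than restart the process:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

rows = [(1, "a@x.com"), (2, "b@x.com"), (2, "dup@x.com")]  # third row violates the PK
try:
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
except sqlite3.IntegrityError:
    conn.rollback()  # reset the broken transaction state explicitly
    # (the EF Core analogue: dispose the failed DbContext and create a fresh one)

# After an explicit rollback, valid inserts work again -- no app restart needed.
conn.execute("INSERT INTO users VALUES (3, 'c@x.com')")
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)
```

The rollback also discards the rows that did make it in before the failure, which is why "half the batch silently committed" is a second trap the article's transaction-per-batch approach avoids.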
-
Now let’s make the app persistent. So far, everything we’ve built resets every time the app restarts. That’s because we’ve been using in-memory storage. Today: adding a database.

Create a database in Render

Render makes this very simple. Create a new PostgreSQL database from your dashboard. Once created, you’ll get an internal database URL.

Add it to your environment variables

Go to your Web Service → Environment and add:

DATABASE_URL = your_internal_database_url

That’s it. Your app is now connected to a real database.

Get Claude to create your tables

Go back to Claude and prompt:

“Update the app to use a PostgreSQL database. Use the DATABASE_URL environment variable. Create tables to store:
• user inputs (URL + platform)
• generated social media content
Ensure data is saved and can be retrieved.”

Claude will:
• update your backend
• create the database structure
• handle the connection

Once Claude updates the code: push to GitHub → Render deploys automatically.

That’s it. You now have:
• a live app
• persistent data
• real backend infrastructure

This is the difference between a demo… and a real product.

If you want help building your own: 📩 info@recogitate.co.uk

#VibeCoding #AIDevelopment #BuildInPublic #FullStackAI #SaaSBuilder
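For the curious, here is roughly what the generated persistence code boils down to. This is a hedged sketch, not the actual generated code: the table and column names are assumptions based on the prompt, and stdlib sqlite3 stands in so it runs without a Postgres server (a real deployment would hand DATABASE_URL to a Postgres driver such as psycopg):

```python
import os
import sqlite3

database_url = os.environ.get("DATABASE_URL")  # set in Render's Environment tab
# Fallback for this sketch: in-memory SQLite instead of connecting to Postgres.
conn = sqlite3.connect(":memory:")

conn.executescript("""
CREATE TABLE IF NOT EXISTS user_inputs (
    id INTEGER PRIMARY KEY,
    url TEXT NOT NULL,
    platform TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS generated_content (
    id INTEGER PRIMARY KEY,
    input_id INTEGER REFERENCES user_inputs(id),
    content TEXT NOT NULL
);
""")

# Save an input and its generated content, then read the input back.
conn.execute("INSERT INTO user_inputs (url, platform) VALUES (?, ?)",
             ("https://example.com", "linkedin"))
conn.execute("INSERT INTO generated_content (input_id, content) VALUES (?, ?)",
             (1, "Generated post text"))
saved = conn.execute("SELECT url, platform FROM user_inputs").fetchone()
print(saved)
```

The key idea is the same either way: the schema lives in the database, the connection string lives in the environment, and the code never hard-codes credentials.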
-
Fetching was easy. Everything after that wasn’t!! 😖

For a long time, whenever I needed data in a React app, I just wrote a simple fetch call and moved on. 💻

const res = await fetch('/api/users');
const data = await res.json();

It works. It’s native. No dependency. But after a while you start noticing the friction. You write the same loading state again. You handle errors again. You manually refetch when something changes. Caching? Retries? That’s all extra logic you end up building yourself.

So naturally many teams move to Axios.

const res = await axios.get('/api/users');
const data = res.data;

It removes some of the pain: cleaner responses, interceptors, better error handling, request cancellation. The request layer becomes easier to manage.

But even with Axios, another problem still shows up. You’re not just fetching data. You’re managing server state. When should the data refetch? What if two components need the same data? What about caching? Background updates? Loading states everywhere?

That’s where TanStack Query changes the mental model.

const { data, isLoading } = useQuery({
  queryKey: ['users'],
  queryFn: () => axios.get('/api/users').then(res => res.data)
});

Now caching, retries, background refetching, and loading states stop being things you manually wire together.

The interesting part is this:
- Fetch and Axios help you make requests.
- TanStack Query helps you manage the data that comes back.

That’s why in many modern React apps the stack quietly becomes: TanStack Query + Axios (or Fetch).

The shift isn’t about replacing fetch. It’s about realizing that frontend apps today aren’t just requesting data; they’re managing server state. ⚙️

#frontend #webdevelopment #tanstackquery #fetch #axios
-
**Low-Code Options**

You want to build apps quickly without breaking the bank. Here are your best options:

- Backend-as-a-service: PocketBase
- Internal tool builder: Appsmith
- Full app platform: Budibase
- Spreadsheet → database: NocoDB

These self-hosted low-code platforms give you rapid development capabilities without per-user pricing or vendor lock-in.

PocketBase is a great alternative to Firebase. It provides a SQLite database, REST API, real-time subscriptions, file storage, and authentication in one 15 MB executable.

Appsmith lets you build internal tools by dragging UI widgets onto a canvas and connecting them to databases and APIs.

Budibase combines a database, UI builder, and automation engine in one platform.

NocoDB turns any MySQL, PostgreSQL, or SQLite database into a collaborative Airtable-like spreadsheet.

Source: https://lnkd.in/gBFMSZsj
-
🚀 Secure Your NestJS Apps with JWT & Prisma!

In this guide, I show how to set up JWT-based authentication in a modern NestJS application using Prisma v7+ with PostgreSQL (Supabase). You’ll learn how to:

i. Initialize Prisma and configure your database
ii. Create user registration and login APIs
iii. Hash passwords securely and generate JWT tokens
iv. Protect routes with a JWT guard
v. Easily test APIs using Postman

Whether you’re building a full-stack web app or just starting with NestJS and Prisma, this step-by-step tutorial will help you implement a robust and secure authentication system in no time.

💡 Perfect for developers looking to combine NestJS, Prisma, and Supabase for modern web apps.
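To show what step iii actually involves under the hood, here is a language-agnostic sketch of signing and verifying an HS256 JWT using only the Python stdlib (the guide itself uses NestJS; the secret and payload here are made up, and real apps should use a vetted JWT library rather than rolling their own):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # assumption: normally loaded from the environment

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign({"sub": 42, "email": "user@example.com"})
print(verify(token))        # True
print(verify(token + "x"))  # False: a tampered signature is rejected
```

This is exactly what a JWT guard checks on every protected route: recompute the signature over header.payload and compare it to the one the client sent.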
-
Introducing Database Traffic Control: a Postgres traffic management system built into PlanetScale. Enforce flexible budgets on your database traffic to protect against unexpected and dangerous workloads.

How it works:
1. Create budgets that target subsets of your query traffic.
2. Specify which queries fall in those budgets based on query patterns, app names, custom tags, or Postgres users.
3. Set the resource limits each budget can consume.

Read more: https://lnkd.in/gFQUb_sm
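The budget idea can be sketched generically as a token bucket per traffic class. To be clear, this is not PlanetScale's implementation; the class names, rates, and cost model below are assumptions purely for illustration of steps 1-3:

```python
import time

class Budget:
    """Token bucket: a traffic class may spend up to `capacity` units,
    refilled at `refill_per_sec`, before its queries get throttled."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost  # charge the query against its budget
            return True
        return False             # over budget: throttle or reject

# Step 1-2: budgets keyed by traffic class (pattern / app name / tag / user).
budgets = {
    "analytics": Budget(capacity=2, refill_per_sec=0.1),   # tight budget
    "checkout": Budget(capacity=100, refill_per_sec=50),   # generous budget
}

# Step 3 in action: the third heavy analytics query exceeds its budget.
results = [budgets["analytics"].allow() for _ in range(3)]
print(results)  # [True, True, False]
```

The appeal of budgets over a global rate limit is isolation: a runaway analytics job exhausts its own bucket without touching the checkout path's capacity.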
-
#Stop_the_Silent_Performance_Killer_in_Your_Rails_Apps 🏎️💨

As your Rails application grows, database performance becomes your biggest bottleneck. If you’ve ever noticed a page slowing down as you add more data, you’re likely hitting the N+1 Query Problem.

Here is a breakdown of how Rails handles data loading and how you can protect your app’s performance:

1. #Lazy_Loading (The Default)
Rails is "lazy" by design: it doesn't fetch associated data until you actually call it.
The Risk: If you loop through 50 users and call user.posts inside the loop, Rails will fire 51 separate database queries.
The Result: High latency and a stressed database.

2. #Eager_Loading (The Solution)
By using .includes, .preload, or .eager_load, you tell ActiveRecord to fetch all necessary data in just one or two queries.
The Fix: User.includes(:posts).all
The Result: Consistent performance, regardless of how many records you have.

3. #Strict_Loading (The Guard)
Introduced in Rails 6.1, this is a game-changer for team environments. It forces you to be intentional by raising an error if you attempt to lazy load an association.
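The 51-vs-2 arithmetic from the post can be demonstrated outside Rails too. This sketch uses Python's stdlib sqlite3 with a simple query counter (the schema is made up) to contrast the lazy pattern with the batched query that `.includes` generates behind the scenes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER);
""")
conn.executemany("INSERT INTO users VALUES (?)", [(i,) for i in range(1, 51)])
conn.executemany("INSERT INTO posts (user_id) VALUES (?)",
                 [(i,) for i in range(1, 51)])

query_count = 0
def run(sql, params=()):
    global query_count
    query_count += 1
    return conn.execute(sql, params).fetchall()

# Lazy loading: one query for users + one per user = 51 queries.
users = run("SELECT id FROM users")
for (uid,) in users:
    run("SELECT id FROM posts WHERE user_id = ?", (uid,))
n_plus_one = query_count

# Eager loading (the .includes idea): fetch all posts in one batched query.
query_count = 0
users = run("SELECT id FROM users")
ids = [uid for (uid,) in users]
placeholders = ",".join("?" * len(ids))
run(f"SELECT id, user_id FROM posts WHERE user_id IN ({placeholders})", ids)
eager = query_count

print(n_plus_one, eager)  # 51 vs 2
```

Note that the lazy version's cost grows linearly with the row count, which is exactly why the page "slows down as you add more data" while the eager version stays flat.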
-
CHAI aur SQL PART-II

Today we learnt about SQL joins, foreign keys, indexes, and transactions. This is the first time the ACID properties really clicked, and I was able to differentiate among atomicity, consistency, isolation, and durability. It was an in-depth session where we learnt how things work internally in the DB and how we can optimize them. We also designed the database for an Instagram-like app using an ER diagram.

#ChaiCode #sql
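A tiny sketch of the atomicity part of ACID, using Python's stdlib sqlite3 (the account names and amounts are made up): a money transfer either fully commits or fully rolls back, so a mid-transaction failure never leaves half the work applied:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts ("
             "name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(src, dst, amount):
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()  # atomicity: neither half of the transfer survives

transfer("alice", "bob", 500)  # overdraft -> CHECK fails -> rolled back
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # unchanged: {'alice': 100, 'bob': 50}
```

The CHECK constraint is also a small taste of consistency: the database itself refuses any transaction that would leave it in an invalid state.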
-