Day 36 of #90DaysOfDevOps 🚀

Dockerized a two-tier Flask application with MySQL.

• Wrote a Dockerfile for the Flask app
• Orchestrated services using Docker Compose
• Used a .env file for environment variables
• Added volumes for database persistence
• Pushed the image to Docker Hub and verified a fresh deployment

Tested the full workflow by removing all local images/containers and running the app directly from Docker Hub.

GitHub: https://lnkd.in/ej4bsEz8
Docker Hub: https://lnkd.in/eGPDWjk5

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham
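A setup like the one described above might look roughly like this minimal docker-compose.yml. This is a sketch, not the actual repo config: service names, the image tag, ports, and the volume name are all assumptions.

```yaml
version: "3.8"
services:
  app:
    build: .                            # Dockerfile for the Flask app
    image: youruser/flask-app:latest    # hypothetical tag pushed to Docker Hub
    env_file: .env                      # environment variables kept out of the compose file
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: mysql:8
    env_file: .env                      # MYSQL_ROOT_PASSWORD etc. live in .env
    volumes:
      - mysql_data:/var/lib/mysql       # named volume for database persistence
volumes:
  mysql_data:
```

With a named volume, `docker compose down` leaves the MySQL data intact; only `docker compose down -v` wipes it.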
Fahad Jaseem’s Post
Deploying a full-stack app shouldn't require a PhD, and this walkthrough nails it: ✔️ React frontend, ✔️ Node backend, ✔️ Postgres database, all live on Render in minutes. ❌ No Dockerfiles, ❌ no load balancer configs, ❌ no 47-tab console sessions. ✅ Just git push and go: https://lnkd.in/gGKc_p54
From GitHub to Production in Minutes with Render | React + Node + PostgreSQL
New on our channel: a practical jOOQ walkthrough for when your data layer gets complicated enough that an ORM stops being useful. Catherine Edelveis builds a Spring Boot 4 + PostgreSQL app and goes through the pieces that usually decide whether persistence stays manageable or turns into a mess: CTEs, MULTISET, dynamic queries, nested DTO fetching, row-level locking, and repository tests with Testcontainers. https://hubs.li/Q046wvPY0
Your Django app is probably opening and closing database connections on every single request. This is Django's default behavior: a connection is established on the first query and torn down when the request finishes.

For low-traffic apps, this is fine. But it becomes a serious bottleneck under load. The overhead of the TCP handshake and PostgreSQL authentication for every single request adds up quickly. It consumes CPU on both your app and database servers and can easily lead to connection exhaustion, where your database simply refuses new clients. Your app starts throwing errors, and users see failures.

The solution is connection pooling. A connection pooler like PgBouncer sits between your Django application and your database. It maintains a pool of persistent connections to Postgres. When your app needs a connection, PgBouncer hands it a ready-to-use one from the pool instantly, bypassing the expensive setup process. When the request is done, the connection is returned to the pool, not closed.

Setting Django's `CONN_MAX_AGE` for persistent connections is a good first step, but an external pooler gives you far more control and insight for serious production environments.

Have you ever been bitten by database connection exhaustion in production? What was the fix? Connect with me if you're navigating similar scaling challenges.

#Django #PostgreSQL #SystemDesign
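Concretely, the `CONN_MAX_AGE` step is a one-line settings change, and pointing Django at a local PgBouncer is mostly a host/port swap. A sketch only: the database name, credentials, and host here are placeholders, and 6432 is simply PgBouncer's conventional listen port.

```python
# settings.py fragment: persistent connections, fronted by PgBouncer
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "appdb",                # placeholder database name
        "USER": "appuser",              # placeholder credentials
        "PASSWORD": "change-me",
        "HOST": "127.0.0.1",            # PgBouncer listens here, not Postgres
        "PORT": "6432",                 # PgBouncer's conventional port
        # Reuse each connection for up to 60s instead of closing it per
        # request. With an external pooler you can also leave this at 0
        # and let PgBouncer handle all the reuse.
        "CONN_MAX_AGE": 60,
    }
}
```

One caveat worth knowing: in PgBouncer's transaction-pooling mode, session-level state (prepared statements, `SET` parameters) does not survive across transactions, so Django features that rely on it need checking.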
Steve Yegge's Beads, a memory system for coding agents, moved from SQLite+Git to Dolt. Solo users needed an external server just to use a single agent. This was the wrong tradeoff for a single-player workflow. Dustin Brown solved this by making Embedded Dolt the default backend. Now, Beads comes with a built-in version-controlled SQL database. You don't need to set up a server or change any settings. When you run bd init, you'll see "Mode: embedded" and everything works right away. Full writeup: https://lnkd.in/g9rCk3mb
HeavyDeets update - I'm converting the self-hosted repo to a production app. The database schema got rebuilt from scratch. SQLite became PostgreSQL. Nine SQLAlchemy models, Alembic migrations, and a test suite that rolls back at the connection level so tests don’t leak into each other. That last part sounds minor but it’s not. https://lnkd.in/gMszFHUs
Every query in AloDB follows a real-time protocol between the server and your desktop app. Here's what happens when you ask a question:

`MessageChat` (you) -> server receives your question
`EventThinking` (server) -> AI agent starts processing
`EventQueryRequest` (server) -> agent needs data, sends SQL with a unique RequestID
Your app executes the query locally -> sends `MessageQueryResult` back
`EventTextDelta` (server) -> AI streams the response
`EventResponseComplete` (server) -> done

The key: the server sends the SQL, your machine runs it. The server never connects to your database.

Each query request gets a UUID. The server holds a channel waiting for the matching result. When your client sends back the `MessageQueryResult` with the right RequestID, the result is delivered through that channel to the waiting agent. Async request-response over a persistent WebSocket. Each chat session has its own WebSocket connection, so parallel conversations work without blocking.

This is the architecture that keeps your credentials on your machine while still giving you AI-powered querying.

Source code: github.com/mololab/alodb

#WebSocket #GoLang #PostgreSQL #SoftwareArchitecture #DevTools
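The UUID-plus-waiting-channel mechanism described above is a classic correlation-map pattern. AloDB is written in Go; here is the same idea sketched in Python asyncio, with a Future standing in for the Go channel. Every name in this snippet is illustrative, not AloDB's actual code.

```python
import asyncio
import uuid

# Pending query requests keyed by RequestID: the analogue of the channel
# the server holds open for each EventQueryRequest.
pending: dict[str, asyncio.Future] = {}

async def send_query_request(sql: str) -> str:
    """Server side: emit a query request, then wait for the matching result."""
    request_id = str(uuid.uuid4())
    fut = asyncio.get_running_loop().create_future()
    pending[request_id] = fut
    # In the real system this message would go out over the WebSocket;
    # here we just schedule a fake client to answer it.
    asyncio.create_task(client_executes(request_id, sql))
    try:
        return await fut                 # resolves when the result arrives
    finally:
        del pending[request_id]

async def client_executes(request_id: str, sql: str) -> None:
    """Client side: run the SQL locally, send back a result message."""
    await asyncio.sleep(0)               # stand-in for local query execution
    deliver_result(request_id, f"rows for: {sql}")

def deliver_result(request_id: str, result: str) -> None:
    """Route an incoming MessageQueryResult to the waiting request."""
    fut = pending.get(request_id)
    if fut is not None and not fut.done():
        fut.set_result(result)

result = asyncio.run(send_query_request("SELECT 1"))
```

Because each in-flight request owns its own Future, many queries can be outstanding at once over the same connection, which is exactly what makes the parallel-conversation setup work.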
🎩 Hat Store - Full-Stack Inventory Management App

Built a complete CRUD application for managing a hat store inventory using Node.js, Express, PostgreSQL, and EJS.

🔗 Live Demo: https://lnkd.in/duP45keG

✨ Features:
- Full CRUD operations for items and categories
- PostgreSQL database with relational data modeling
- Deployed on Render with live database

🔧 Tech Stack: Node.js | Express | PostgreSQL | EJS | Render

This project helped me strengthen my understanding of backend development, database design, and deployment workflows.

💻 GitHub: https://lnkd.in/dR2_kKGF

Note: first load may take ~30s due to the free-tier cold start.

#WebDevelopment #FullStack #NodeJS #PostgreSQL #CodingJourney
I recently built a full-stack developer platform using FastAPI, Next.js, Firebase, and PostgreSQL. Some key things I learned:

- Backend architecture matters more than expected early on
- Authentication (Firebase + backend sync) is trickier than it looks
- API design decisions impact everything downstream
- Debugging integration issues takes a huge amount of time

This project helped me understand how real production systems are structured, beyond just writing code. Happy to share more details if anyone is interested.
🚀 Day 16/30 — Docker Compose Hands-On

🔹 App + Database Setup
• Use Docker Compose to run an application and database together
• Example: web app + MySQL/PostgreSQL

🔹 Services Concept
• Each container is defined as a service in docker-compose.yml
• Services communicate easily within the same network

🔹 Example Configuration

version: "3"
services:
  app:
    image: nginx
    ports:
      - "80:80"
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: password

🔹 Run Containers

docker-compose up

👉 Outcome: Learned how Docker Compose simplifies running multi-container applications like app + database together.

#Docker #DevOps #30DaysChallenge #Containerization
508: Building apps with GitHub Copilot Spark and Fabric SQL. App development is simplified: generate code and pages directly from requirements, saving time and cost. #AppDevelopment #GitHubCopilot #FabricSQL #LowCode #DeveloperTools