We ran the resource usage of every production database on Neon through the RDS rightsizing algorithm to see what instance size each would need:
• Neon uses 2.4× less compute
• Neon is 50% lower cost
• The same workloads running on provisioned instances sized per the AWS recommendation would hit 55 performance degradations per database per month
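The post doesn't publish the exact rightsizing algorithm, but the idea is easy to sketch: size for a high percentile of observed usage plus headroom. Everything below is an illustrative assumption — the instance table, the p99 choice, and the 20% headroom are placeholders, not the report's method.

```python
import math

# Illustrative vCPU counts for a few RDS instance classes (placeholder table,
# not the algorithm from the report).
INSTANCE_VCPUS = {
    "db.m5.large": 2,
    "db.m5.xlarge": 4,
    "db.m5.2xlarge": 8,
    "db.m5.4xlarge": 16,
}

def p99(samples):
    """99th-percentile of a list of vCPU-usage samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, math.ceil(0.99 * len(ordered)) - 1)
    return ordered[idx]

def rightsize(vcpu_samples, headroom=1.2):
    """Smallest instance class whose vCPUs cover p99 usage plus headroom."""
    needed = p99(vcpu_samples) * headroom
    for name, vcpus in sorted(INSTANCE_VCPUS.items(), key=lambda kv: kv[1]):
        if vcpus >= needed:
            return name
    # Nothing big enough in the table: fall back to the largest class.
    return max(INSTANCE_VCPUS, key=INSTANCE_VCPUS.get)
```

The gap this exposes is the one the post is about: a provisioned instance must be sized for the p99, while an autoscaling database only pays for the average.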
Postgres autoscaling saves money and enables branching databases! The secret sauce is true separation of (data lake) storage and compute
Serverless architectures are perfect for MCP servers. MCP servers are gateways, not storage systems—they route requests between AI clients and enterprise resources, making serverless a natural fit. For edge cases like legacy systems with connection pooling or heavy local state needs, use hybrid approaches that route stateless requests to Lambda and stateful ones to Fargate. Full architectural breakdown 👇 https://vendia.fyi/3Y0QKz1 #MCP #ServerlessArchitecture #AWS #Lambda #AIInfrastructure #DistributedSystems #CloudArchitecture #EnterpriseAI
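The hybrid approach above can be sketched as a tiny routing decision: stateless requests go to Lambda, anything session-bound goes to Fargate. The method names and the `STATEFUL_METHODS` set below are illustrative assumptions, not part of any MCP specification.

```python
# Hypothetical routing sketch for the Lambda/Fargate hybrid described above.
# Which methods count as stateful is an assumption for illustration.
STATEFUL_METHODS = {"session/open", "stream/subscribe"}

def pick_backend(request: dict) -> str:
    """Return which compute tier should serve this MCP-style request."""
    method = request.get("method", "")
    needs_state = method in STATEFUL_METHODS or bool(request.get("session_id"))
    return "fargate" if needs_state else "lambda"
```

A request like `{"method": "tools/list"}` would route to Lambda, while anything carrying a session lands on the long-lived Fargate tier.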
Reduced our AWS bill from $47,000 to $19,000 per month.

The findings:
- 23 unused EBS volumes: $2,100/month
- Oversized RDS instances (running at 8% CPU): $8,400/month
- NAT Gateway data transfer (internal traffic): $6,200/month
- Old snapshots nobody needed: $3,100/month
- Dev environments running 24/7: $4,800/month

The fixes:
- Deleted unused resources
- Right-sized RDS and bought Reserved Instances
- Moved internal traffic to VPC endpoints
- Automated the snapshot lifecycle
- A Lambda to stop dev environments at 7pm

Time spent: 3 weeks. Annual savings: $336,000.

Dev environments running constantly are a classic. Reserved Instances also cut costs far more than people realize. Every company has this waste; most don't look.

#AWS #CloudCostOptimization #FinOps #CloudArchitecture #CloudInfrastructure #DevOps #Infrastructure #CostOptimization #CloudEngineering #SRE #ITLeadership #OperationalExcellence
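The "stop dev environments at 7pm" Lambda is a few lines of boto3 if the dev instances are tagged. A minimal sketch, assuming an `env=dev` tag (the tag key is an assumption; `describe_instances` and `stop_instances` are standard EC2 API calls):

```python
def running_dev_instance_ids(pages):
    """Extract instance IDs from describe_instances response pages."""
    ids = []
    for page in pages:
        for reservation in page.get("Reservations", []):
            for inst in reservation.get("Instances", []):
                ids.append(inst["InstanceId"])
    return ids

def handler(event, context):
    import boto3  # imported lazily so the helper above is testable offline
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_instances").paginate(Filters=[
        {"Name": "tag:env", "Values": ["dev"]},               # assumed tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = running_dev_instance_ids(pages)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return {"stopped": ids}
```

Wire it to an EventBridge schedule (e.g. a 7pm cron rule) and the $4,800/month line item largely disappears.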
This is a good checklist to follow for every AWS cloud project: save resources, use what you need, and trim the fat. The NAT gateway is tough, though, since so many APIs go through it. But if your S3 traffic is unoptimized and routed through the NAT gateway, it gets expensive fast. It's good to know DB traffic is low enough in this example that you can reduce it. I wouldn't cut that piece too much, though: you usually have one DB plus a failover, and you want high availability and high performance on the DB.
Dedicated Hosts on AWS: when they make sense (and when they don't).

BYOL licensing for SQL Server, Oracle, or Windows often requires Dedicated Hosts.

The break-even calculation:
→ Dedicated Host: fixed cost, billed 24/7
→ Shared instances: hourly cost for actual usage

If utilization is above ~60%, Dedicated Hosts win. Below ~60%, shared instances with license-included pricing are cheaper.

An OLA (Optimization and Licensing Assessment) models your exact scenario with a 3-year TCO. 100% AWS-funded, 4-6 weeks to results.

Email: https://bit.ly/4rEHFJ7 | Subject: "Dedicated Host Analysis"

#AWS #CloudArchitecture #Licensing
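The break-even rule above is just the ratio of the fixed monthly host cost to what the shared instances would cost at full-time usage. A minimal sketch, with placeholder prices (not real AWS rates):

```python
HOURS_IN_MONTH = 730.0  # AWS's conventional hours-per-month figure

def breakeven_utilization(dedicated_monthly: float, shared_hourly: float) -> float:
    """Utilization fraction above which the fixed-cost Dedicated Host wins."""
    return dedicated_monthly / (shared_hourly * HOURS_IN_MONTH)

def cheaper_option(utilization: float, dedicated_monthly: float,
                   shared_hourly: float) -> str:
    """Compare the fixed host cost against pay-per-hour shared instances."""
    shared_cost = shared_hourly * HOURS_IN_MONTH * utilization
    return "dedicated" if shared_cost >= dedicated_monthly else "shared"
```

With a $1,000/month host versus $2/hour shared capacity, break-even lands near 68% utilization, which is why the post's ~60% rule of thumb is in the right neighborhood once license-included pricing is factored in.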
Here's how I would host a simple backend server for an MVP:
- Write Terraform for the infrastructure
- Set up a VPC with public and private subnets, RDS or DynamoDB if required (or a self-hosted DB on the EC2 instance), an internet gateway, and a NAT gateway
- Run the user-data script from Terraform, and it's ready

No load balancer required, no auto-scaling features required at this scale. We could add a CDN for better performance. That's it; no need to overthink it.
When your EKS pod needs persistent storage, it doesn't talk to AWS directly. That's the EBS CSI driver's job: it's the bridge between Kubernetes and EBS.

And here's the thing: it's not a single pod. It's a controller pod running 6 containers, each with a specific job:

🔧 1. ebs-plugin → The core engine. Makes the actual AWS API calls to create/delete EBS volumes.
👀 2. csi-provisioner → Watches for new PVCs in your cluster. The moment you create one, this kicks off volume creation automatically.
🔌 3. csi-attacher → Handles attaching/detaching volumes to the right worker node. Like a traffic cop for your volumes.
📏 4. csi-resizer → Need more storage? This one handles volume expansion without downtime.
📸 5. csi-snapshotter → Takes EBS snapshots on demand. Perfect for backups and cloning environments.
❤️ 6. liveness-probe → Keeps an eye on the health of the whole driver.

So the next time you write a StorageClass or a PVC, you know exactly what's working behind the scenes 🙌

#AWS #EKS #Kubernetes #EBS #CloudNative
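For reference, this is what the objects that trigger that machinery look like — a StorageClass pointing at the `ebs.csi.aws.com` provisioner and a PVC that the csi-provisioner picks up (names and the 10Gi size are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                          # illustrative name
provisioner: ebs.csi.aws.com            # served by the ebs-plugin container
volumeBindingMode: WaitForFirstConsumer # delay creation until a pod is scheduled
allowVolumeExpansion: true              # lets csi-resizer grow the volume later
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]        # EBS volumes attach to one node
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
```

Creating the PVC is the event the csi-provisioner watches for; the EBS volume itself appears once a pod consuming the claim gets scheduled.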
TASK: Migrate an application to EKS with private nodes, an ALB routing traffic to the cluster, and AWS RDS in the backend.

PROBLEM: Intermittent 5XX errors during peak hours. Application logs are OK, resource consumption is within limits, and DB parameters are stable, including CPU and connection limits. However, the network shows intermittent blips with connection timeouts between the EKS nodes and RDS.

ROOT CAUSE: Port exhaustion in the NAT gateway, since the EKS nodes were accessing other AWS services via the NAT gateway as well.

RESOLUTION:
- Change the EKS default behaviour: use VPC endpoints to access AWS services
- Enable EKS cluster private access for internal Kubernetes API server communication
- For the VPC, set enableDnsHostnames and enableDnsSupport to true so that pods resolve AWS service names to their private endpoint IPs instead of public ones
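The last resolution step is easy to verify programmatically. A small sketch using the documented EC2 `describe_vpc_attribute` call to confirm both DNS attributes are enabled (the VPC ID is a placeholder):

```python
def dns_ready(attrs: dict) -> bool:
    """attrs maps attribute name -> bool, as collected by check_vpc below."""
    return bool(attrs.get("enableDnsSupport")) and bool(attrs.get("enableDnsHostnames"))

def check_vpc(vpc_id: str) -> bool:
    import boto3  # lazy import so dns_ready stays testable offline
    ec2 = boto3.client("ec2")
    attrs = {}
    # describe_vpc_attribute accepts one attribute per call.
    for name, key in [("enableDnsSupport", "EnableDnsSupport"),
                      ("enableDnsHostnames", "EnableDnsHostnames")]:
        resp = ec2.describe_vpc_attribute(VpcId=vpc_id, Attribute=name)
        attrs[name] = resp[key]["Value"]
    return dns_ready(attrs)
```

Both attributes must be true for interface endpoints' private DNS names to work, which is what lets pods skip the NAT gateway entirely.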
Check out this video we created with AWS and Red Hat for a basic overview of how you can leverage AWS-native shared storage, based on NetApp ONTAP, to get the most performant and resilient experience available for OpenShift and OpenShift Virtualization running on AWS. Everything here is a first-party AWS service, meaning you get full support from AWS and you can retire committed spend. https://lnkd.in/eZVCQY9N
OpenShift Virt and shared storage options with Amazon FSx for NetApp ONTAP | Amazon Web Services
🚨 The moment IAM finally made sense to me: I stopped thinking about permissions as additions.

My dear friends, IAM is NOT about what you allow. It's about what survives the intersection.

Effective permissions = IAM ∩ Permission Boundary ∩ Session Policy ∩ SCP

Every layer removes something. If an action doesn't exist in one layer → it's gone.

This single idea changed how I design secure AWS architectures.

What IAM concept took you the longest to understand? 👇

#AWS #CloudSecurity #IAM #SolutionsArchitect #CloudArchitecture
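The intersection rule above can be modeled literally as set intersection. This toy sketch ignores explicit Deny statements and resource policies, and the action names are illustrative, but it captures the "what survives" intuition:

```python
def effective(*layers: set) -> set:
    """Effective permissions: only actions present in EVERY layer survive."""
    return set.intersection(*layers) if layers else set()

# Illustrative layers (not real policies):
identity = {"s3:GetObject", "s3:PutObject", "ec2:StartInstances"}
boundary = {"s3:GetObject", "s3:PutObject"}
session  = {"s3:GetObject"}
scp      = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}
```

Intersecting all four layers leaves only `s3:GetObject` — every other action was dropped by at least one layer, which is exactly the mental model the post describes.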
You can see how we arrived at these numbers, and more stats, in the full report here: https://neon.com/autoscaling-report