🚨 Ingress NGINX Is Retiring — What DevOps & Kubernetes Engineers Should Know

Today I came across an important update in the Kubernetes ecosystem, and I felt it’s something every DevOps/Cloud/K8s engineer should be aware of.

The community-maintained Ingress NGINX (the one most people install directly from the official Kubernetes docs) is officially moving into deprecation.

🔻 What this means
• No new releases after March 2026
• No bug fixes, security patches, or updates
• The project will be archived
• The only remaining option will be self-maintenance via a fork, which is not practical for most teams

🔧 Why is this happening?
Simply put: a lack of maintainers. A massively used project was being handled by just 1–2 contributors for years. This highlights a real challenge in open source: many people want to contribute, but very few can commit the time required to maintain large projects.

🟦 Important Clarification
There are two different NGINX ingress controllers (many people don’t realize this):
1️⃣ Ingress NGINX – maintained by the Kubernetes community (this one is deprecated)
2️⃣ NGINX Ingress Controller by NGINX/F5 – maintained by the vendor (❗ not deprecated)

If you installed your ingress controller via the Kubernetes docs, you are impacted. If you installed it via the official NGINX docs/Helm chart, you are not.

🛠️ What should you do next?
Depending on your setup, you have multiple options:
✔️ Migrate to the Gateway API (the future of Kubernetes traffic management)
✔️ Use the vendor-maintained NGINX Ingress Controller
✔️ Switch to alternatives like Traefik, Kong, HAProxy, etc.

This change will affect many Kubernetes environments, so it’s better to plan ahead than to scramble in 2026. If you’re working in DevOps or Cloud, or handling Kubernetes clusters — definitely keep this on your radar.

Happy & Fun Learning!
Suyash Kesharwani

#Kubernetes #DevOps #CloudComputing #NGINX #IngressController #GatewayAPI #Containers #SRE #K8sCommunity #OpenSource #TechUpdates
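A quick way to tell which of the two controllers a cluster is actually running is to look at the controller pod's image. Here is a minimal sketch; the registry prefixes below are the usual defaults for each project, so adjust them if you pull through a mirror:

```shell
#!/usr/bin/env sh
# Guess which NGINX ingress project a controller image belongs to.
# The registry prefixes are the common upstream defaults (an assumption);
# private mirrors will need extra patterns.
classify_controller() {
  case "$1" in
    *registry.k8s.io/ingress-nginx*) echo "community ingress-nginx (retiring)" ;;
    *nginx/nginx-ingress*)           echo "F5/NGINX vendor controller (not retiring)" ;;
    *)                               echo "unknown" ;;
  esac
}

# In a live cluster you would feed it the running images, e.g.:
#   kubectl get pods -A -o jsonpath='{..image}' | tr ' ' '\n' | sort -u | grep -i nginx
classify_controller "registry.k8s.io/ingress-nginx/controller:v1.11.2"
# -> community ingress-nginx (retiring)
```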
Suyash Kesharwani’s Post
More Relevant Posts
🚀 NGINX Ingress Is Being Retired — But Your Standalone NGINX Web Server Is 100% Safe!
(Important update for Kubernetes & DevOps engineers)

There has been a lot of confusion across the community regarding ingress-nginx vs the NGINX web server, so here’s a clear and accurate breakdown 👇

🔍 What’s Actually Changing?
The community-maintained ingress-nginx (the Kubernetes ingress controller) is officially planned for retirement in March 2026. This means:
• No new features
• Minimal maintenance until March 2026
• Teams should plan migration if using it in production
⚠️ This retirement impacts only the Kubernetes ingress-nginx project.

🟢 What About the Standalone NGINX Web Server?
Good news: nothing is changing. The NGINX web server you install on Linux using:

apt install nginx -y

or

yum install nginx -y

remains fully supported and continues to receive:
• Security updates
• Performance improvements
• Long-term maintenance

You can confidently keep using NGINX for:
• Reverse proxying
• Load balancing
• Static file hosting
• Website hosting
• API gateway patterns
• SSL/TLS termination

👉 The standalone NGINX web server is NOT affected by the Kubernetes ingress-nginx retirement.

🔵 If You Use ingress-nginx in Kubernetes
Since support ends in March 2026, now is the right time to evaluate alternatives.

🔥 Recommended Kubernetes ingress/gateway alternatives (2025–2026):
1️⃣ Traefik – lightweight, CRD-native, great for microservices
2️⃣ HAProxy Ingress – high-performance L4/L7 load balancing
3️⃣ Envoy-based controllers – Contour, Gloo Edge
4️⃣ Kubernetes Gateway API – the future of Kubernetes traffic routing
Popular implementations: Istio Gateway, Envoy Gateway, Cilium Gateway

💬 My View (as a DevOps Engineer):
The retirement of ingress-nginx is a great opportunity to modernize Kubernetes ingress architecture. Evaluate alternatives now and consider adopting the Gateway API standard.
And remember:
✔️ The standalone NGINX web server stays fully supported
✔️ Only the community Kubernetes ingress-nginx controller is being retired in March 2026

#DevOps #Kubernetes #CKA #CloudNative #K8s #NGINX #Linux #SRE #PlatformEngineering #Containers #Infrastructure #TechUpdate #DevOpsEngineer
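As a tiny illustration of the reverse-proxy and TLS-termination use cases above, a minimal standalone NGINX server block might look like this. The domain, certificate paths, and upstream port are all placeholders, not from any real deployment:

```nginx
# Minimal reverse proxy with TLS termination (all names and paths are placeholders)
server {
    listen 443 ssl;
    server_name example.com;                           # hypothetical domain

    ssl_certificate     /etc/nginx/tls/fullchain.pem;  # hypothetical cert paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;              # hypothetical upstream app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This is the same job an ingress controller does inside Kubernetes, which is exactly why the two keep getting confused.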
Big News for the Kubernetes Community: Ingress NGINX is Retiring 🚨

If you're running Kubernetes clusters, this announcement from the Kubernetes SIG Network team is critical for your infrastructure planning.

The Problem
After years of powering billions of requests worldwide, Ingress NGINX is being retired in March 2026. The project has struggled with insufficient maintainership, with only one or two volunteers maintaining it in their spare time. Despite its massive popularity, the Kubernetes community couldn't find enough maintainers to keep the project sustainable and secure.

What this means
🔸 Best-effort maintenance continues until March 2026
🔸 After that: no updates, no bug fixes, no security patches
🔸 Your existing deployments will keep running, but won't receive support
🔸 All installation artifacts remain available

The Solution: Time to Migrate
The Kubernetes SIG Network strongly recommends migrating to modern alternatives immediately. The good news? You have plenty of excellent options!

Top Ingress Controller Alternatives

Gateway API (recommended modern standard): the official next-generation replacement for Ingress, offering role-oriented design, typed routes, and support for both ingress controllers and service meshes. It provides more flexibility and scalability for managing traffic in modern Kubernetes environments.

Most Popular Controllers
• Traefik: cloud-native proxy with automatic service discovery, Let's Encrypt integration, and an excellent developer experience.
• HAProxy Ingress: enterprise-grade performance with advanced traffic management and highly efficient resource usage.
• Envoy-based controllers (Contour, Emissary-Ingress): modern, high-performance options with advanced observability and traffic-management features.

Complete list of alternative controllers: link in the comments

My Take
As someone working with Kubernetes infrastructure daily, this retirement underscores the importance of sustainable open-source maintenance.
While it's sad to see such a foundational project retire, the Kubernetes ecosystem has matured significantly with robust alternatives ready to fill the gap. Don't wait until March 2026. Start planning your migration now to ensure a smooth transition without security risks or operational disruption. News source: link in the comments What ingress controller is your team using? Drop a comment below! 👇 #Kubernetes #DevOps #CloudNative #IngressNGINX #GatewayAPI #CloudEngineering #SRE #OpenSource
"As someone working with Kubernetes infrastructure daily, this retirement underscores the importance of sustainable open-source maintenance."

This is exactly why enterprises buy software from enterprise-oriented companies. Open source is not reliable enough for business continuity.
🎯 DevOps isn’t just about tools… it’s about knowing which door to knock on (literally)! 😎

Ever seen a DevOps engineer panic because Jenkins wasn’t opening — and later realize it was just the wrong port? 😂 Yeah, we’ve all been there.

So here’s a quick reality check 👇 If you’re in DevOps, Cloud, or Infra, these ports are your lifeline:
🔹 HTTP → 80
🔹 HTTPS → 443
🔹 SSH → 22
🔹 Jenkins → 8080
🔹 Docker → 2375 / 2376
🔹 Kubernetes API → 6443
🔹 MySQL → 3306
🔹 Prometheus → 9090
🔹 Grafana → 3000
🔹 Redis → 6379
🔹 Kafka → 9092
…and the list goes on! 💥

💡 Why this matters: one wrong port and your pipeline sleeps like it’s Sunday 😴 If you’re setting up CI/CD, monitoring, or containers, knowing these ports means faster debugging, smoother automation, and fewer 3 AM production calls.

💬 I’m curious —
👉 How many of these ports can you recall without Googling?
👉 What’s the most unexpected port you had to open in your DevOps journey?

Drop it in the comments 👇 Let’s see who’s the real “Port-Guru” of DevOps 🔥

#DevOps #Docker #Kubernetes #Jenkins #CloudEngineering #CICD #Containers #CloudNative #Networking #DevOpsCommunity #Linux #SysAdmin #Automation #InfrastructureAsCode #PlatformEngineering #CloudComputing #DevOpsEngineer #CloudInfrastructure #SiteReliabilityEngineering #SRE #Observability #Grafana #Prometheus #Terraform #Ansible #AWS #Azure #GCP #TechCommunity #LearningInPublic #CloudOps #InfraAutomation #DevOpsTools #NetworkEngineering #DevSecOps #Microservices
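The table above can be encoded as a tiny lookup so scripts never hardcode magic numbers. This is a minimal sketch; the short service names and the bash-only /dev/tcp probe in the comment are my own choices, not a standard:

```shell
#!/usr/bin/env bash
# Default ports from the table above (service names are my own shorthand).
default_port() {
  case "$1" in
    http) echo 80 ;;        https) echo 443 ;;      ssh) echo 22 ;;
    jenkins) echo 8080 ;;   docker) echo 2375 ;;    k8s-api) echo 6443 ;;
    mysql) echo 3306 ;;     prometheus) echo 9090 ;; grafana) echo 3000 ;;
    redis) echo 6379 ;;     kafka) echo 9092 ;;
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# Before blaming the app, probe the port with bash's /dev/tcp, e.g.:
#   timeout 1 bash -c ">/dev/tcp/myhost/$(default_port jenkins)" && echo open || echo closed
default_port jenkins
# -> 8080
```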
🚀 Excited to share my latest open-source project: Sloth Kubernetes! 🦥☸️

After months of development, I'm proud to introduce a powerful Infrastructure-as-Code tool that makes Kubernetes cluster deployment incredibly simple and secure.

🎯 What is Sloth Kubernetes?
A Go-based CLI tool that automates the deployment and management of production-ready Kubernetes clusters across multiple cloud providers (DigitalOcean, Linode) using Pulumi under the hood.

✨ Key Features:
🔧 One-line installation – get started in seconds with our automated installer
☸️ K3s deployment – lightweight, production-grade Kubernetes clusters
🔐 Built-in WireGuard VPN – secure mesh networking out of the box
🌐 Automated DNS management – seamless domain configuration
🎮 GitOps ready – integrated ArgoCD support for continuous deployment
📦 Multi-cloud support – deploy to DigitalOcean, Linode, or both
🔄 State management – S3-compatible backend with secure credential storage
🧪 Highly tested – comprehensive test coverage for reliability

💡 Why I Built This:
Managing Kubernetes infrastructure shouldn't be complicated. I wanted to create a tool that combines the power of Pulumi's IaC approach with the simplicity of a single command-line interface. Whether you're running a startup or managing enterprise workloads, Sloth Kubernetes gets you from zero to production in minutes, not hours.

🛠️ Built With:
• Go for performance and cross-platform support
• Pulumi for infrastructure orchestration
• K3s for lightweight Kubernetes
• WireGuard for secure networking
• GitHub Actions for CI/CD

🎉 Recent Milestones:
✅ Production-ready release with full test coverage
✅ ArgoCD integration for GitOps workflows
✅ Enhanced DNS management via Pulumi
✅ Refresh command for state synchronization
✅ Comprehensive security scanning – no exposed tokens!

🔗 The project is open source and available on GitHub.
Whether you're interested in contributing, learning about IaC patterns, or just need a reliable way to deploy Kubernetes clusters, I'd love your feedback! 💬 What challenges have you faced with Kubernetes deployment? Drop a comment below! #Kubernetes #DevOps #CloudNative #InfrastructureAsCode #Pulumi #OpenSource #Go #Golang #K3s #GitOps #CloudComputing #SRE #TechInnovation #WireGuard --- 🌟 Star the repo if you find it useful! 👥 Contributions and feedback are always welcome! 📚 Full documentation and examples included! https://lnkd.in/dtKNCj5X
Code-Driven Velocity: AWS CI/CD vs. Jenkins Evolution

The world of software delivery is moving at cloud speed, and the engine driving this acceleration is Continuous Integration and Continuous Delivery (CI/CD). While Jenkins has long been the flexible, open-source champion, a shift is underway toward the fully managed, cloud-native power of the AWS CI/CD suite (CodeCommit, CodeBuild, CodeDeploy, and CodePipeline). This isn't just about switching tools; it's about escaping the "complexity tax."

💡 The Core Problem with Tradition
For organizations that are "all-in" on AWS, running Jenkins introduces a significant maintenance burden that slows down innovation:
• Maintenance overhead: running Jenkins means constant server setup, patching, scaling nodes, and debugging volatile plugins. Your engineers become accidental sysadmins.
• Integration friction: Jenkins relies on plugins, credentials, and complex AWS CLI scripts to connect to the cloud environment. Each dependency is a point of failure prone to version conflicts.
• Security gaps: security often relies on manually configured third-party plugins. Protecting credentials requires extra effort, as security isn't built in from the start.

🌟 The AWS Advantage: Managed Simplicity
The AWS CI/CD suite flips this script, prioritizing speed and simplicity over self-managed flexibility:
• Zero infrastructure management: services like CodeBuild run in temporary, isolated environments that spin up when needed and vanish when done. This means no servers to babysit and guaranteed on-demand scaling for every build.
• Native, seamless integration: the tools speak the cloud's native language. Deploying a new Lambda function or updating an ECS service is a single, integrated pipeline action, using the same IAM roles and VPC as your application. Everything just flows.
• Security by design: security is automated and native.
Every action runs with a temporary, least-privilege IAM role, eliminating the need for hardcoded keys and ensuring your pipeline is as secure as your AWS account itself.

In the end, while Jenkins gives you flexibility, the AWS suite gives you momentum. It forms a cohesive, self-healing delivery engine where your only job is to commit code — AWS handles the rest. The shift is clear: from self-managed complexity to cloud-managed velocity.

Read my full blog here: https://lnkd.in/gMaMYM7r

#AWSDevOps #CICD #CloudVelocity #NoOps #JenkinsAlternatives #ManagedServices #CodePipeline
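For context, the CodeBuild stage of such a pipeline is driven by a buildspec.yml checked into the repo. Here is a minimal sketch; the runtime, commands, and artifact paths are placeholders for whatever your project actually builds:

```yaml
# Minimal CodeBuild buildspec sketch (runtime, commands, and paths are placeholders)
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18            # assumed runtime; pick what your project needs
  build:
    commands:
      - npm ci
      - npm test
      - npm run build

artifacts:
  files:
    - 'dist/**/*'           # hypothetical build output
```

Each build runs this spec in a fresh, ephemeral environment, which is the "no servers to babysit" point above in practice.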
What a mid-level DevOps engineer really does day-to-day with Terraform (from someone actually in the trenches)

When people hear "Terraform", they imagine fancy IaC setups and cloud magic. But day-to-day? It's way more grounded and, honestly, way more human. Here's what it actually looks like 👇

1. You read more Terraform than you write. Most projects already exist. Your job is to understand the logic, trace modules, and figure out why something was done before changing anything. Terraform work is 40% detective, 60% careful editor.

2. You maintain and extend infra. You're adding EC2s, tweaking IAM roles, updating provider versions, and helping devs deploy apps. It's not glamorous, but it's real impact.

3. You plan before you apply. Running terraform apply straight to prod? Rookie mistake. Mid-level means testing in dev, reviewing plans, and pushing PRs for eyes to catch what you missed.

4. You automate everything. You wire Terraform into GitHub Actions or GitLab CI. Pipelines run plans, approvals happen, and infra updates roll out safely. You start thinking in YAML as much as HCL.

5. You modularize. You break big messy code into clean, reusable modules. You introduce naming conventions and tags that make life easier for everyone.

6. You troubleshoot state, drift, and weird provider issues. State locks. "Resource already exists." Manual AWS changes. You learn to stay calm, read the plan carefully, and fix it methodically.

7. You collaborate. With devs. Architects. Security. You explain Terraform changes in plain English, because trust matters more than code.

8. You document and improve. You leave clarity behind. You clean up folder structures, update READMEs, and make the next person's job easier. 🧭

9. You think beyond the code. You start asking: "Should this be a module?" "Can this be simplified?" "Is there a cheaper or cleaner way?" That's when you move from mid-level to thinking senior.

Terraform isn't just about provisioning resources.
It’s about thinking systematically, acting carefully, and leaving things cleaner than you found them. That’s real DevOps.

#Terraform #DevOps #CloudComputing #InfrastructureAsCode #AWS #Azure #GoogleCloud #Kubernetes #CloudEngineering #DevOpsEngineer #CloudCareers #TechLeadership #LearningInPublic #CareerGrowth #Automation #CI_CD #EngineeringCulture
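The modularization and tagging habits from point 5 can be sketched in HCL like this. The module source path, tag keys, and all names here are illustrative placeholders, not from any real repo:

```hcl
# Hypothetical reusable module call with a shared naming/tagging convention
locals {
  name_prefix = "acme-prod"            # placeholder org/env prefix
  common_tags = {
    Environment = "prod"
    ManagedBy   = "terraform"
    Owner       = "platform-team"      # placeholder team name
  }
}

module "app_service" {
  source = "./modules/ecs-service"     # hypothetical local module

  name = "${local.name_prefix}-app"
  tags = local.common_tags
}
```

And the plan-before-apply workflow from point 3 is typically `terraform plan -out=tfplan`, the plan reviewed in a PR, then `terraform apply tfplan` so exactly the reviewed changes are applied.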
💡 Before You Learn Kubernetes or Terraform — Master This First

Over the last 9 months as a DevOps engineer, I've worked with Terraform, Lambda, API Gateway, ECS, and CI/CD pipelines. But you know what truly tested me (and made me better)? 👉 Cloud networking.

Everyone wants to learn Kubernetes, Docker, or Terraform. But when production breaks, it's rarely a YAML issue — it's almost always a networking issue.

🧠 Real Talk: here's what I actually faced (and fixed) in real projects:
🔹 A Lambda in a private subnet couldn't reach RDS — NAT Gateway in the wrong subnet, route table entry missing.
🔹 API Gateway kept timing out — SG misconfigured; the Lambda's SG didn't allow API Gateway return traffic.
🔹 ECS task uploads to S3 failing — no S3 VPC endpoint; the NAT Gateway ran out of ports under load.
🔹 Cross-account VPC peering worked one way only — missing reverse routes plus overlapping CIDRs.
🔹 CloudFront → ALB → app giving 502s — wrong health-check path plus an SG blocking HTTPS traffic.
🔹 DNS resolving wrong IPs — a Route 53 private zone not linked to the right VPC.
🔹 VPN connection unstable — asymmetric routing between AZs through multiple NAT Gateways.

Every single issue looked like an app problem. Every single fix came down to understanding the network path end to end.

🔧 My Takeaway: before deep-diving into tools like:
• Python
• Terraform
• Kubernetes
• Jenkins / GitHub Actions / Bitbucket

spend time mastering:
✅ VPCs, subnets, IGWs, NATs
✅ Security Groups & NACLs
✅ Routing & peering
✅ DNS (Route 53)
✅ Flow Logs & Reachability Analyzer

Tools change — but networking fundamentals never do.

💬 My Message to DevOps Aspirants: you can't automate what you don't understand. When production breaks, the fastest DevOps engineer isn't the one who writes the best Terraform code — it's the one who knows where packets drop.

⚡ Final Thought: real DevOps maturity begins the day you stop blaming the tool and start tracing the route.
🕵️♂️ To get a better understanding, watch this video: https://lnkd.in/eE4enA-C

#DevOps #AWS #Networking #Terraform #Kubernetes #CloudComputing #CareerGrowth #CI_CD #Infrastructure #LearningJourney #Cloud #DevOpsEngineer
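As one concrete example, the "no S3 VPC endpoint" failure above is usually fixed with a gateway endpoint, so private-subnet traffic to S3 bypasses the NAT Gateway entirely. A Terraform sketch, where the VPC ID, route table ID, and region are placeholders:

```hcl
# Gateway VPC endpoint so private-subnet tasks reach S3 without a NAT Gateway
# (vpc_id, route_table_ids, and the region in service_name are placeholders)
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = "vpc-0123456789abcdef0"
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"

  route_table_ids = ["rtb-0123456789abcdef0"]
}
```

This also removes the NAT port-exhaustion failure mode for S3 traffic, since those flows never touch the NAT Gateway.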