What REALLY Happens When You Run kubectl apply? You can watch it here: https://lnkd.in/g3QbYt-E Ever feel like you're just copying kubectl commands without really getting what's going on under the hood? I created an 8-minute video that visually walks through the exact roles of the Master (Control Plane) and Worker Nodes in an on-premises Kubeadm cluster. No complex theory, just a straightforward look at how the API server, etcd, scheduler, kubelet, and kube-proxy all work together to make your deployments happen. #Kubernetes #DevOps #CloudNative #cloudcompute #softwaredevelopment
Dulam Murali Krishna’s Post
More Relevant Posts
BTS (1/n) Every time we run kubectl get pods, Kubernetes does much more work than it seems on the surface. ⚙️
- kubectl reads kubeconfig (cluster + credentials)
- Forms a REST API request
- Authenticates with the API Server
- API Server queries etcd for cluster state
- Returns JSON with pod details
- CLI formats & displays the list
A simple command, but it flows through multiple Kubernetes components — ensuring authentication, consistency, and reliability before you see the pods. #Kubernetes #DevOps #CloudNative #SystemDesign #Containers #Microservices #Infrastructure #CNCF #CloudComputing #SRE #PlatformEngineering #DevSecOps #Automation
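The first three steps above can be sketched in a few lines. This is an illustrative toy, not kubectl's actual code (real kubectl uses client-go and supports many auth methods beyond bearer tokens); it just shows how the current context in a kubeconfig resolves to a concrete REST request:

```python
# Minimal sketch: resolve the kubeconfig's current context into the
# GET request that "kubectl get pods" ultimately sends to the API server.
# (Hypothetical illustration -- real kubectl uses the client-go library.)

def build_pods_request(kubeconfig: dict, namespace: str = "default") -> dict:
    ctx_name = kubeconfig["current-context"]
    context = next(c["context"] for c in kubeconfig["contexts"] if c["name"] == ctx_name)
    cluster = next(c["cluster"] for c in kubeconfig["clusters"] if c["name"] == context["cluster"])
    user = next(u["user"] for u in kubeconfig["users"] if u["name"] == context["user"])
    return {
        "method": "GET",
        # Pods live under the core v1 API group, scoped by namespace.
        "url": f"{cluster['server']}/api/v1/namespaces/{namespace}/pods",
        "headers": {"Authorization": f"Bearer {user['token']}"},
    }

# Example kubeconfig data (names and token are made up):
kubeconfig = {
    "current-context": "dev",
    "contexts": [{"name": "dev", "context": {"cluster": "dev-cluster", "user": "dev-user"}}],
    "clusters": [{"name": "dev-cluster", "cluster": {"server": "https://10.0.0.1:6443"}}],
    "users": [{"name": "dev-user", "user": {"token": "abc123"}}],
}

req = build_pods_request(kubeconfig)
print(req["url"])  # https://10.0.0.1:6443/api/v1/namespaces/default/pods
```

Tip: you can watch the real version of this happen with `kubectl get pods -v=8`, which logs the actual HTTP requests and responses.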
🚀 Ever wondered how to spin up a Kubernetes cluster on Proxmox in just minutes? I just dropped a new Short showing how to do exactly that using Cluster API. From VM templates to clusterctl move, it's a fast, clean workflow that brings cloud-native automation to your home lab or edge setup. 🎥 Watch the 2min walkthrough 💡 Whether you're running a homelab or building production-grade clusters, this method scales beautifully. 👉 How do you provision your Kubernetes clusters today? Are you using kubeadm, Terraform, Talos, or something else entirely? Let’s share tips and learn from each other! #Kubernetes #Proxmox #ClusterAPI #CloudNative #HomeLab #DevOps #PlatformEngineering
🚀 Cluster API completely changed the way I work. I can now spin up Kubernetes clusters on Proxmox whenever I want, for as long as I need. Testing new Kubernetes releases? Enabling feature gates? Trying out edge cases? ✅ Easy. But running multiple clusters comes with challenges, like hitting Docker Hub rate limits. So I took it further:
🔹 I use #Harbor as a registry cache (bonus: it scans public images too).
🔹 I enabled #InfluxDB monitoring on my Proxmox cluster.
🔹 And to get those metrics into Dynatrace, I built a custom OpenTelemetry receiver: 👉 https://lnkd.in/d389Kw5u
It auto-discovers InfluxDB measurements and forwards them as OTEL metrics. If you're using InfluxDB and want to bridge it with OpenTelemetry, this receiver might help. Let me know what you think—or if you have similar setups!
🎥 I also just dropped a YouTube Short showing how to get started with Cluster API on Proxmox.
💬 How do you provision your Kubernetes clusters today? Let's share ideas and learn from each other.
🚀 “𝗜𝘀 OpenShift 𝗷𝘂𝘀𝘁 𝗮 𝗳𝗼𝗿𝗸 𝗼𝗳 Kubernetes?” This question comes up all the time. Here’s the real story 👇 OpenShift isn’t a Kubernetes fork. It’s an enterprise-grade platform built on top of upstream Kubernetes. So what does that actually mean? ✅ No code divergence ✅ No split communities ✅ 100% compatibility with vanilla Kubernetes But OpenShift adds what enterprises need most: 💡 Developer-friendly web console ⚙️ Integrated CI/CD 🔒 Enterprise-grade security 📊 Built-in monitoring & logging …and strong Red Hat support behind it. ✨ 𝗢𝗽𝗲𝗻𝗦𝗵𝗶𝗳𝘁 = 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗽𝗼𝘄𝗲𝗿 + 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗽𝗼𝗹𝗶𝘀𝗵 + 𝗥𝗲𝗱 𝗛𝗮𝘁 𝗯𝗮𝗰𝗸𝗶𝗻𝗴 𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀: If you already know Kubernetes, you’re halfway there. But if your organization cares about scale, compliance, and reliability, OpenShift takes that same Kubernetes foundation and supercharges it for production — without reinventing the wheel. 🔗 𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲: OpenShift doesn’t fork Kubernetes. It rides the Kubernetes wave, making it enterprise-ready, secure, and easier to manage at scale. Zero forking. All forward. 🚀 #Kubernetes #OpenShift #CloudNative #DevOps #Containers #RedHat #TechMyths #PlatformEngineering
Day - 05 Kubernetes Objects and Workloads
In Kubernetes, we don't manage containers directly. Instead, Kubernetes gives us special objects that help handle scaling, restarting, and managing applications easily. Here are the main Kubernetes objects that define and manage workloads in a cluster:
i) Pods
ii) ReplicaSets and ReplicationControllers
iii) Deployments
iv) StatefulSets
v) DaemonSets
vi) Jobs and CronJobs
vii) Services
viii) Volumes & PersistentVolumes
ix) Labels and Annotations
PODS
A Pod is the smallest deployable unit in Kubernetes and can contain one or more tightly coupled containers that work together. Containers in a Pod share the same network namespace, storage volumes, and environment. If a container fails, the kubelet restarts it according to the Pod's restart policy. Each Pod gets a unique IP address, and if the Pod is recreated, it gets a new one. All containers in a Pod run on the same node and are managed as a single unit. Usually, we don't manage Pods directly; instead, higher-level objects like Deployments and ReplicaSets handle scaling, updates, and recovery automatically. Pods are the building blocks of all applications in a Kubernetes cluster. #100DaysOfK8s #Kubernetes #DevOps
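A minimal two-container Pod manifest illustrates the shared network described above: because both containers share the Pod's network namespace, the sidecar can reach nginx on localhost. All names and image tags here are examples, not anything from a real cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # example name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.27           # example image tag
      ports:
        - containerPort: 80
    - name: log-sidecar
      image: busybox:1.36
      # Same network namespace as "web", so localhost:80 reaches nginx.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```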
Pods, nodes, clusters, namespaces—where do you start monitoring in Kubernetes? Our beginner-friendly guide shares practical best practices + how Site24x7 helps. Read more: https://lnkd.in/gwgND48s #K8s #SRE #Monitoring #DevOps
Think of Ztunnel like a bouncer: "Get through me before you get to the Pods." It proxies packets to the appropriate destination based on the policies that are configured. Because it's written in Rust and its xDS configuration is lighter, it makes for a much more efficient proxy in today's world. #kubernetes #devops #platformengineering
Kubernetes Ingress vs Gateway API — What's the Difference?
As Kubernetes evolves, the way we manage traffic into clusters is changing.
#Ingress is the traditional approach to expose HTTP/HTTPS services. It uses Ingress Controllers (like NGINX or Traefik) to route traffic into the cluster based on host and path rules.
- Pros: Simple, widely supported, good for basic routing.
- Cons: Limited features and inconsistent behavior between controllers.
#Gateway API is the modern, next-generation alternative to Ingress. It provides more flexibility, standardization, and advanced traffic management features like weighted routing, timeouts, and header-based matching. It also separates responsibilities — infrastructure teams manage gateways, while developers define routes.
- Pros: Standardized, scalable, supports advanced traffic control.
- Cons: Newer and comes with a steeper learning curve.
In summary: Ingress → simple, stable, limited but proven. Gateway API → modern, modular, and built for complex traffic management. #Kubernetes #Networking #DevOps #CloudComputing #GatewayAPI #Containers
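The same host-plus-path rule, expressed both ways, makes the difference concrete. This is a hedged sketch: controller-specific annotations are omitted, the Service names are examples, and the HTTPRoute assumes an infra-team-managed Gateway named `shared-gw` already exists. The weighted backends in the second resource are exactly the kind of feature plain Ingress cannot express portably:

```yaml
# Traditional Ingress: host + path routing to one backend Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc
                port:
                  number: 80
---
# Gateway API equivalent, plus weighted routing across two backends
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gw            # assumed Gateway owned by the infra team
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc
          port: 80
          weight: 90
        - name: app-svc-canary   # example second backend
          port: 80
          weight: 10
```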
Blue-Green vs Canary Deployments - choosing the right strategy Both patterns solve downtime differently: ➡️ Blue-Green → spin up a parallel production environment (“Green”), then switch traffic via load balancer once verified. Fast rollback, but infra-heavy. ➡️ Canary → release gradually to a subset of users or pods, monitor key metrics (latency, errors), and scale up if healthy. Ideal for microservices. In large distributed systems, I lean toward Canary - it gives observability, safer rollouts, and data-driven validation before full exposure. Which one do you use in your deployments, and why? #DevOps #DeploymentStrategies #Kubernetes #BlueGreenDeployment #CanaryRelease #Microservices #SRE
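In Kubernetes terms, the Blue-Green "switch" can be as simple as a Service selector flip: both Deployments run side by side, and changing one label cuts all traffic over at once (and flipping it back is the rollback). A minimal sketch with example names:

```yaml
# Blue-Green: Deployments "myapp-blue" and "myapp-green" both exist,
# each labeling its pods with app: myapp and version: blue|green.
# The Service selector is the traffic switch.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue        # flip to "green" after verification; flip back to roll back
  ports:
    - port: 80
      targetPort: 8080
```

Canary, by contrast, shifts a percentage of traffic rather than all of it at once, typically via weighted routing (e.g. Gateway API backend weights) or a progressive-delivery controller such as Argo Rollouts or Flagger that automates the metric checks.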