If you’re running Kubernetes in production, Kubernetes port security should be near the top of your priority list. Clusters expose a surprisingly large number of ports by default — the API server, etcd, kubelet, NodePort services, and more — and every one of them is a potential way in for an attacker. This article covers exactly which ports matter, what mistakes people actually make, and how to lock things down without breaking your workloads.
Why Kubernetes Clusters Are a Port Security Nightmare
A vanilla Kubernetes deployment can easily have a dozen or more ports open across the control plane and worker nodes. The API server listens on 6443, etcd on 2379-2380, kubelet on 10250, kube-scheduler on 10259, kube-controller-manager on 10257, and that’s before you add any actual workloads. Then NodePort services open random high ports (30000-32767 by default), and suddenly your attack surface is enormous.
The real problem is that most teams treat Kubernetes as an internal system. They focus on ingress controllers and load balancers for external traffic, while leaving control plane and node ports wide open to the internal network — or worse, to the internet. I’ve seen clusters where the kubelet API on port 10250 was reachable from the public internet with no authentication. That’s essentially handing someone root access to every pod on the node.
The Ports That Actually Matter
Let’s break down the critical ones.
Port 6443 — Kubernetes API Server. This is the front door to your entire cluster. If an attacker gets unauthenticated access here, it’s game over. Always require TLS client certificates or token-based auth, and restrict network access to known management IPs.
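One quick way to check your exposure is to probe the API server's /version endpoint with no credentials at all. The sketch below is a minimal, hypothetical check (the function name and approach are mine, not a standard tool): a hardened API server should answer 401 or 403 here, never 200.

```python
import ssl
import urllib.error
import urllib.request

def api_server_allows_anonymous(host: str, port: int = 6443, scheme: str = "https") -> bool:
    """Probe /version without credentials; True means anonymous access works.

    A locked-down API server should reject this request with 401/403.
    """
    # Skip certificate verification: we only care about the auth response,
    # and the API server typically presents a cluster-internal CA cert.
    ctx = ssl._create_unverified_context() if scheme == "https" else None
    url = f"{scheme}://{host}:{port}/version"
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # reachable, but auth is enforced (401/403) -- good
    except OSError:
        return False  # unreachable from here -- also good
```

Run this from outside your management network; if it returns True, fix that before reading any further.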
Ports 2379-2380 — etcd. This is where all cluster state lives. Secrets, configs, everything. etcd should never, under any circumstances, be reachable from outside the control plane nodes. If you can reach etcd from a worker node, your architecture is wrong.
Port 10250 — Kubelet API. This port allows executing commands inside pods. With anonymous auth enabled (which was the default in older versions), anyone who can reach this port can run arbitrary commands in your containers. Ensure --anonymous-auth=false is set and that the kubelet requires proper authorization.
Ports 30000-32767 — NodePort range. Every NodePort service opens a port on every node. Teams often lose track of which services are exposed this way, especially in clusters with dozens of namespaces and teams deploying independently.
The Myth: “My Cloud Firewall Handles It”
Here’s a misconception I see constantly: teams believe their cloud provider’s security groups or VPC firewalls are enough. They set up a few rules, forget about them, and assume everything is locked down.
The reality is that Kubernetes is dynamic. Services come and go. Someone deploys a NodePort service for debugging, forgets to delete it, and now there’s a new open port that your static firewall rules don’t account for. Security groups protect the perimeter, but they can’t track what Kubernetes is doing on the inside. That is exactly the gap continuous port monitoring closes: it catches the changes your firewall rules never anticipated.
Practical Steps to Lock Down Your Cluster
Step 1: Audit what’s actually open. Before changing anything, run an external port scan against every node in your cluster — control plane and workers. You’ll almost certainly find ports you didn’t expect. Seeing your cluster the way an attacker sees it from the outside is the only way to know your real exposure.
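A dedicated scanner like nmap gives richer results, but the idea fits in a few lines of Python. This is an illustrative sketch, assuming a port list I picked from the defaults discussed above; run it from a machine outside the cluster network to see what an attacker sees.

```python
import socket

# Default Kubernetes ports worth probing (illustrative list -- extend as needed).
CONTROL_PLANE_PORTS = [6443, 2379, 2380, 10250, 10255, 10257, 10259]

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return found
```

Point it at every node IP, not just the control plane: a worker node answering on 10250 or an unexpected port in the 30000-32767 range is exactly the kind of finding this step exists to surface.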
Step 2: Restrict control plane access. The API server (6443) and etcd (2379-2380) should only be accessible from a bastion host or VPN. On managed Kubernetes services like EKS, GKE, or AKS, enable the private endpoint option and disable public access. If you must allow public API access, restrict it to an IP allowlist at a minimum.
Step 3: Harden kubelet configuration. Set --anonymous-auth=false, use webhook authorization, and disable the kubelet’s read-only port by setting --read-only-port=0. The read-only port (10255) exposes pod lists and resource usage without any authentication.
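If you manage the kubelet through a configuration file rather than flags, the same hardening looks like this. A sketch using the upstream KubeletConfiguration schema; adapt it to however your nodes are provisioned:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # reject unauthenticated requests to port 10250
  webhook:
    enabled: true         # delegate authentication to the API server
authorization:
  mode: Webhook           # authorize each request via SubjectAccessReview
readOnlyPort: 0           # disable the unauthenticated read-only port (10255)
```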
Step 4: Control NodePort exposure. Either restrict the NodePort range in your firewall rules to only the ports you deliberately expose, or better yet, avoid NodePort services entirely in production. Use LoadBalancer services or an ingress controller instead, and funnel all external traffic through a controlled entry point.
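To find out what NodePort services you already have, you can parse the output of `kubectl get services --all-namespaces -o json`. A minimal sketch (the function name is mine) that turns that JSON into an inventory of exposed ports:

```python
import json

def nodeport_exposures(services_json: str):
    """List (namespace, name, nodePort, targetPort) for every NodePort service.

    `services_json` is the output of:
        kubectl get services --all-namespaces -o json
    """
    doc = json.loads(services_json)
    exposures = []
    for svc in doc.get("items", []):
        spec = svc.get("spec", {})
        if spec.get("type") != "NodePort":
            continue
        meta = svc["metadata"]
        for p in spec.get("ports", []):
            exposures.append(
                (meta["namespace"], meta["name"], p.get("nodePort"), p.get("targetPort"))
            )
    return exposures
```

Anything this turns up that you can’t name an owner for is a candidate for deletion.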
Step 5: Use NetworkPolicies. By default, every pod can talk to every other pod. Implement NetworkPolicies to restrict east-west traffic. If your database pod doesn’t need to accept connections from the frontend namespace, block it. This limits the blast radius when something does get compromised.
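In practice that usually means a default-deny policy per namespace, plus narrow allow rules. A sketch for a hypothetical database namespace (the namespace, label, and port values are placeholders for your own):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: database            # hypothetical namespace
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Ingress                    # no ingress rules listed = deny all inbound
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: postgres              # hypothetical label on the database pods
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: backend
      ports:
        - protocol: TCP
          port: 5432
  policyTypes:
    - Ingress
```

Note that NetworkPolicies only take effect if your CNI plugin enforces them; with a plugin that doesn’t, they are silently ignored.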
Step 6: Monitor continuously. A one-time audit tells you what’s open today. It tells you nothing about what someone deploys tomorrow. Automated port monitoring catches new open ports as they appear, before an attacker finds them.
A Scenario That Happens More Than You’d Think
A development team is debugging a connectivity issue between two services. They create a NodePort service to test from outside the cluster. It works, they fix the bug, and they move on. The NodePort service stays running. That port is now open on every worker node, exposing an internal microservice that was never designed for external access — no rate limiting, no authentication, maybe even with debug endpoints enabled.
Three months later, a routine scan picks it up. Or worse, an attacker’s scan picks it up first. The service has a known CVE in the framework it’s using, and now someone has a foothold. This is not a theoretical attack. It’s what actually happens when you don’t have a process for identifying unnecessary open ports.
FAQ
Which Kubernetes ports should be exposed to the internet?
Ideally, only ports 80 and 443 on your ingress controller or load balancer. Everything else — the API server, etcd, kubelet, NodePorts — should be behind a firewall, VPN, or private network. Even the API server should be private if your workflow allows it.
How do I detect unexpected open ports on Kubernetes nodes?
Run regular external port scans against all node IPs. Internal scans miss what’s actually reachable from the outside. Automated scanning on a schedule ensures you catch changes quickly rather than relying on manual checks that fall behind.
Are managed Kubernetes services (EKS, GKE, AKS) secure by default?
More secure than self-managed clusters, but not fully locked down. The API server is often publicly accessible by default, and NodePort behavior is identical to self-managed Kubernetes. You still need to enable private endpoints, configure security groups, and follow your provider’s port security guidance.
Final Thought
Kubernetes makes it trivially easy to expose new ports, and that’s exactly what makes it dangerous from a port security perspective. The clusters that stay secure are the ones where someone is watching — continuously — for changes to the external attack surface. Lock down your control plane, eliminate unnecessary NodePorts, harden your kubelet, and scan from the outside regularly. Your cluster is only as secure as the ports it exposes.
