If you manage servers exposed to the internet, minimizing your attack surface is the single most effective thing you can do for security. Every open port, every running service, and every piece of forgotten software is a potential way in for attackers. This guide covers the best practices for minimizing your server’s attack surface – practical steps you can apply today to reduce risk significantly.
The principle is simple: if something doesn’t need to be accessible, it shouldn’t be. Yet in practice, servers accumulate unnecessary exposure over time. Ports opened for testing stay open. Services installed for a one-off task keep running. And before you know it, your server is presenting a much larger target than it needs to.
Start With an External Port Audit
You can’t minimize what you don’t measure. The first step is seeing your server the way an attacker sees it – from the outside. Internal checks miss things. A service bound to all interfaces instead of localhost won’t show up as a problem until you scan externally.
Run a scan against your public IP and document every open port. For each one, answer three questions: what service is this, who needs it, and is it properly secured? If you can’t answer all three, that port is a liability.
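As a concrete sketch of that audit: run something like `nmap -p- -Pn <public-ip> -oN external-scan.txt` from a machine outside your network, then pull the open ports out of the saved report. The helper function and sample scan output below are illustrative, not a required toolchain:

```shell
# Hypothetical helper: extract open TCP port numbers from a saved nmap report
# (produced by e.g.: nmap -p- -Pn <public-ip> -oN external-scan.txt)
open_ports() {
    grep -E '^[0-9]+/tcp +open' "$1" | cut -d/ -f1
}

# Fabricated sample report, for demonstration only
cat > external-scan.txt <<'EOF'
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
3306/tcp open  mysql
EOF

# List every open port found -- each one needs an owner and a justification
open_ports external-scan.txt
```

In this sample, 3306 is exactly the kind of finding the three questions are for: a database port that almost certainly should not be reachable from the internet.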
I’ve seen servers with 15–20 open ports where the admin only expected five. The extras were usually leftover development tools, forgotten test databases, or services installed by other software as dependencies. Each one was a potential entry point that nobody was watching. For a structured approach, check out our guide on how to identify unnecessary open ports on your infrastructure.
Close Everything, Then Open What You Need
The most reliable approach is a default-deny firewall policy. Block all inbound traffic, then explicitly allow only the ports your server requires. For a typical web server, that means port 443 for HTTPS, port 80 for HTTP (if you still need redirect handling), and your SSH port for administration.
Common offenders include database ports – 3306 (MySQL), 5432 (PostgreSQL), 6379 (Redis), 27017 (MongoDB). These should never be internet-facing unless you have a documented, reviewed reason and multiple layers of authentication in place. Bind them to 127.0.0.1 or your private network interface.
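A quick way to spot services bound to all interfaces is to filter a `ss -tln` listing for wildcard addresses. The snapshot below is fabricated for illustration; on a live server you would pipe `ss -tln` straight into the grep:

```shell
# Fabricated "ss -tln" snapshot -- local address is the 4th column
cat > listeners.txt <<'EOF'
LISTEN 0 511 0.0.0.0:80      0.0.0.0:*
LISTEN 0 128 127.0.0.1:6379  0.0.0.0:*
LISTEN 0 128 0.0.0.0:3306    0.0.0.0:*
EOF

# Flag sockets listening on 0.0.0.0 or [::] -- reachable on every interface.
# Here the loopback-bound Redis on 6379 passes; 80 and 3306 are flagged.
grep -E '(0\.0\.0\.0|\[::\]):[0-9]+' listeners.txt
```

Port 80 being world-reachable is expected for a web server; MySQL on 3306 is the one to rebind to 127.0.0.1 or the private interface.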
On Debian, UFW makes default-deny straightforward. But closing ports carelessly can break things. If you’re unsure which services depend on what, read through how to close unused ports without breaking services before making changes in production.
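A minimal default-deny ruleset for the typical web server described above might look like this (assuming UFW and SSH on port 22; adjust the SSH port to match your configuration):

```shell
# Default-deny inbound, allow outbound, then open only what is required
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 443/tcp     # HTTPS
sudo ufw allow 80/tcp      # HTTP, redirect handling only
sudo ufw limit 22/tcp      # SSH, with UFW's built-in rate limiting
sudo ufw enable
```

Run `sudo ufw status verbose` afterwards to confirm the policy matches your intent before logging out of the session.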
Harden SSH – Your Most Critical Service
SSH is both your lifeline and one of the most targeted services on any server. Port 22 gets hammered by automated brute-force bots around the clock. Basic hardening is non-negotiable.
Disable password authentication and use key-based auth only. Disable root login. These two changes eliminate the vast majority of SSH-based attacks. Use AllowUsers or AllowGroups directives to restrict which accounts can log in remotely.
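In `/etc/ssh/sshd_config`, those changes amount to a few directives. The account names below are placeholders; validate with `sshd -t` before reloading the service:

```
# /etc/ssh/sshd_config -- key-only auth, no root, restricted accounts
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
AllowUsers deploy admin    # hypothetical account names -- use your own
```

Keep an existing SSH session open while you test a new connection, so a mistake in the config doesn't lock you out.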
Moving SSH to a non-standard port reduces noise from automated scanners, but don’t mistake that for real security. A determined attacker with a port scanner will find it in minutes. Real SSH hardening goes deeper – rate limiting, fail2ban, and monitoring for unusual login patterns. We cover this in detail in our article on SSH port security beyond just changing port 22.
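As one example of that deeper hardening, a minimal fail2ban jail for SSH might look like this in `/etc/fail2ban/jail.local` (the thresholds are illustrative; tune them to your environment):

```
# /etc/fail2ban/jail.local -- ban IPs after repeated SSH auth failures
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

This complements key-only auth: even failed connection attempts stop consuming resources and polluting your logs once the source is banned.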
Remove What You Don’t Use
Every installed package is a potential vulnerability, even if the service isn’t running. Outdated libraries sitting on disk can be exploited if an attacker gains any level of access. Audit your installed software regularly.
On Debian, list running services with systemctl list-units --type=service --state=running and compare against what you actually need. A web server typically requires your HTTP daemon, application runtime, and maybe a database. If you see avahi-daemon, cups-browsed, or rpcbind running on a production server, those should go.
Same logic applies to packages. If you installed something six months ago for a test and forgot about it, apt-get purge it. Less software means fewer things that need patching, fewer things that can break, and fewer things an attacker can leverage.
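One way to make this audit repeatable is to keep an approved-services baseline and diff the live list against it. The file contents below are invented for illustration; on a real host, running.txt would come from the systemctl command above:

```shell
# Approved baseline: services this server is supposed to run
cat > approved.txt <<'EOF'
nginx.service
postgresql.service
ssh.service
EOF

# Fabricated live list; in practice generate it with:
#   systemctl list-units --type=service --state=running --no-legend \
#       | awk '{print $1}' | sort
cat > running.txt <<'EOF'
avahi-daemon.service
nginx.service
postgresql.service
rpcbind.service
ssh.service
EOF

# Lines present only in running.txt: candidates for removal
# (comm requires both files to be sorted)
comm -13 approved.txt running.txt
```

Here the diff surfaces avahi-daemon and rpcbind -- exactly the kind of leftover services the paragraph above calls out.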
Busting the “Firewall Is Enough” Myth
One of the most dangerous misconceptions in server security is that a properly configured firewall makes port monitoring unnecessary. Firewalls are essential, but they’re static. They enforce rules that were correct at the time you wrote them.
What happens when a package update opens a new port? When a developer temporarily modifies iptables rules and forgets to revert them? When a container runtime maps a port you didn’t expect? Your firewall rules haven’t changed, but your actual exposure has.
This is exactly the gap that continuous external monitoring fills. It catches the drift between your intended configuration and your actual state. We wrote a dedicated piece on what firewall rules can’t catch – the port monitoring gap that covers this in depth.
Apply the Principle of Least Privilege Everywhere
Minimizing your attack surface isn’t just about ports. It extends to permissions, network segmentation, and access controls. Services should run as dedicated non-root users. Applications should only access the files and databases they require. Firewall rules should specify exact IPs rather than 0.0.0.0/0 where possible.
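For services managed by systemd, much of this can be enforced in the unit itself. A sketch of an override (applied via `systemctl edit`) for a hypothetical myapp.service:

```
# Hypothetical override for myapp.service -- dedicated user, read-only
# filesystem view except where the app genuinely needs to write
[Service]
User=myapp
Group=myapp
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/myapp
NoNewPrivileges=true
```

Even if the application is compromised, these directives limit what the attacker's process can touch.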
Network segmentation matters too. Your database server shouldn’t be on the same network segment as your public-facing web server. If someone compromises your web application, they shouldn’t automatically have a path to your data.
Use localhost bindings aggressively. If Redis or Memcached only serves your local application, there’s zero reason for it to listen on an external interface.
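For Redis, that comes down to two lines in its config file (the path varies by distro; commonly /etc/redis/redis.conf on Debian):

```
# /etc/redis/redis.conf -- listen on loopback only
bind 127.0.0.1 ::1
protected-mode yes
```

After restarting the service, confirm with `ss -tln` that 6379 now shows only a 127.0.0.1 binding.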
Make It Ongoing, Not One-Time
Attack surface management is a process, not a project. Servers change constantly – updates, new deployments, configuration changes, staff turnover. What was locked down last month might not be today.
Schedule external scans regularly and compare results over time. Investigate any new open port immediately. Document your legitimate port policy so you have a baseline to compare against. Automated monitoring that alerts you to changes is particularly valuable when you manage multiple servers.
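A sketch of that comparison, using invented file contents: in practice, ports-today.txt would be generated by your external scanner (for example `nmap -p- -Pn -oG -` piped through a filter) and the job run from cron:

```shell
# Approved baseline of internet-facing ports
cat > baseline-ports.txt <<'EOF'
22
80
443
EOF

# Fabricated result of today's external scan
cat > ports-today.txt <<'EOF'
22
80
443
8080
EOF

# Alert on any drift between intended and actual exposure
if diff -q baseline-ports.txt ports-today.txt >/dev/null; then
    echo "no drift"
else
    echo "PORT DRIFT: investigate new or changed ports"
    diff baseline-ports.txt ports-today.txt
fi
```

In this sample the check flags a new port 8080 -- the kind of silent change a static firewall review would miss until the next manual audit.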
Keep a changelog of what’s installed, what’s exposed, and why. When someone asks “why is port 8080 open?” six months from now, you want a clear answer – not a guessing game.
FAQ
How many open ports should a typical web server have?
Most production web servers need only two or three ports open to the internet: 443 (HTTPS), 80 (HTTP for redirects), and an SSH port for administration. Anything beyond that should have a documented justification. The fewer ports exposed, the smaller the target.
How often should I audit my server’s attack surface?
At minimum, audit after every significant change – new software installs, infrastructure updates, or team member changes. Ideally, run continuous external port monitoring so you’re alerted to any changes in real time rather than discovering them weeks later during a manual check.
Is changing default ports effective security?
It reduces noise from automated scanners, which is useful for keeping logs clean and blocking low-effort attacks. But it’s not a substitute for real hardening. Any attacker specifically targeting your server will find non-standard ports quickly. Treat port changes as a minor convenience, not a security measure.
