Port knocking is security through obscurity. Normally I'd say anything to do with security through obscurity is bad practice if it's the only thing you're relying on. The idea here is not that it will outright secure your network, but that it will drastically reduce the risk of an attack during an attacker's discovery phase. Hiding valuables in your vehicle when it's left unattended reduces the likelihood of a criminal passerby breaking into it. Likewise, keeping ports closed when they're not actively in use reduces the likelihood of a criminal port-scanning your network and finding your open SSH (Sometimes Secure Hell), RDP (Ransomware Deployment Protocol), or, god forbid, your HTTP or TELNET ports. To keep the analogy going, the open ports are still in the car; we're just throwing them in the glovebox.
How does Port Knocking Work?
Typically, port knocking is implemented as a daemon that sits on your edge device and watches incoming network traffic for a specific pattern; when a match is found, some action gets taken. This is where the term "knock" comes from: it's like a secret knock on the door to let you in. Usually the pattern is a sequence of TCP connection attempts from the same IP address to different ports, although that isn't a hard and fast rule. It can be any predetermined pattern of network activity from a single host (or, if you want to be hardcore, multiple hosts) - a secret dance at the network, data link, or even physical layer that serves no purpose to any upper-level application. Once matching activity is found, the typical action is to dynamically adjust the firewall rules so the source IP of the matched pattern can access some service, such as SSH - though again, you can make it do whatever you want at this point. Technically, you can implement this however you like; there is no standard.
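As a rough illustration of that daemon loop, here's a minimal sketch in Python. It assumes Scapy for packet sniffing and iptables for the firewall action; the port sequence, timeout window, and the exact iptables rule are placeholder choices for illustration, not a prescription.

```python
# Minimal port-knocking daemon sketch (assumes Scapy and iptables are available).
# The knock sequence, timeout, and firewall action are illustrative placeholders.
import subprocess
import time

from scapy.all import IP, TCP, sniff  # sniffing requires root privileges

KNOCK_SEQUENCE = [8032, 8397, 64824]   # expected TCP SYN ports, in order
KNOCK_WINDOW = 10                      # seconds allowed to complete the sequence
progress = {}                          # src_ip -> (next_index, first_seen_time)

def open_ssh_for(src_ip):
    """Example action: allow the knocking host to reach SSH."""
    subprocess.run(
        ["iptables", "-I", "INPUT", "-p", "tcp", "--dport", "22",
         "-s", src_ip, "-j", "ACCEPT"],
        check=True,
    )
    print(f"SSH opened for {src_ip}")

def handle(pkt):
    if not (pkt.haslayer(TCP) and pkt.haslayer(IP)):
        return
    if pkt[TCP].flags != "S":          # only bare SYNs count as knocks here
        return
    src, dport, now = pkt[IP].src, pkt[TCP].dport, time.time()
    idx, started = progress.get(src, (0, now))
    if now - started > KNOCK_WINDOW:   # took too long: start the sequence over
        idx, started = 0, now
    if dport == KNOCK_SEQUENCE[idx]:
        idx += 1
        if idx == len(KNOCK_SEQUENCE):
            open_ssh_for(src)
            progress.pop(src, None)
            return
        progress[src] = (idx, started)
    else:
        progress.pop(src, None)        # wrong port resets the sequence

sniff(filter="tcp", prn=handle, store=0)
```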
The most important consideration when implementing port knocking is that the knock itself is effectively unauthenticated, so whatever action it triggers must be something you're comfortable letting an unauthenticated user trigger - that is, do not rely on this to secure your service, only to prevent the service being discoverable on the open internet. You should ensure there's an underlying authentication mechanism on whatever service is being opened up - for instance, SSH still requires a username and a key (or password if you're dirty).
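To make the client side concrete, here's a minimal sketch of a client firing off a plain TCP knock before connecting over SSH as usual. The hostname, ports, and username are hypothetical, and it simply shells out to the system ssh client; the point is that the knock only opens the door - SSH's own authentication still happens afterwards.

```python
# Minimal knock client sketch: fire the knock sequence, then SSH as normal.
# Host, ports, and user are hypothetical examples.
import socket
import subprocess
import time

HOST = "vpn.example.com"
KNOCK_SEQUENCE = [8032, 8397, 64824]   # must match the daemon's expected sequence

def knock(host, ports, delay=0.2):
    """Send a TCP connection attempt (SYN) to each port in order."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)               # the ports are closed, so don't wait around
        try:
            s.connect((host, port))     # the SYN is the knock; a refusal is fine
        except OSError:
            pass
        finally:
            s.close()
        time.sleep(delay)

knock(HOST, KNOCK_SEQUENCE)
# The knock only makes the port reachable; SSH still demands key authentication.
subprocess.run(["ssh", f"admin@{HOST}"])
```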
Securing it Further
To prevent replay attacks (an observer on the network noticing this strange pattern being repeated and transmitting their own matching knock), we can make the knock sequence different each time. The way I've implemented this is as follows (a sketch of the sequence-rotation step follows the list):
- The client connecting in performs the matching knock sequence: TCP SYN 8032, TCP SYN 8397, TCP FIN ACK 64824, UDP 38664
- The server creates a firewall rule allowing SSH access from the client's IP.
- The server generates a new random sequence for the next knock.
- The knock client application SCPs the next-knock-sequence file, encrypted with a pre-shared key (PSK), down to itself.
- The knock client hashes the file to confirm it matches and informs the server that it has received the next knock sequence.
- The server then requires that new sequence the next time that client wants to connect.
- The server uses TCP keepalive and will time out the connection once it goes inactive.
- The server then removes the firewall rule that allowed the client to SSH to it.
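To illustrate the rotation step, here's a rough sketch of how the server side might generate and package the next sequence. The structure of each step, the file name, and the use of a SHA-256 digest for the client-side check are all assumptions for illustration; encrypting the file with the PSK and SCPing it out are assumed to happen separately.

```python
# Sketch of the server-side sequence rotation (illustrative assumptions throughout).
# Generates the next random knock sequence, serialises it, and computes the digest
# the client can use to confirm the file it pulled down matches.
import hashlib
import json
import secrets

def generate_next_sequence(length=4):
    """Pick random protocols/flags and high ports for the next knock."""
    steps = []
    for _ in range(length):
        proto = secrets.choice(["TCP SYN", "TCP FIN ACK", "UDP"])
        port = secrets.randbelow(65535 - 1024) + 1024   # avoid well-known ports
        steps.append({"proto": proto, "port": port})
    return steps

def write_sequence_file(path="next_knock.json"):
    """Serialise the next sequence and return the digest the client should see."""
    payload = json.dumps(generate_next_sequence(), indent=2).encode()
    with open(path, "wb") as f:
        f.write(payload)                 # encrypt with the PSK before SCPing it out
    return hashlib.sha256(payload).hexdigest()

digest = write_sequence_file()
print(f"Expect the client to confirm digest {digest}")
```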
This rotates the knock sequence randomly, so no two consecutive knock sequences are the same. I've thought about a time-based system similar to the One-Time Passcodes so commonly found as a supplementary authentication factor these days, but because the knock is transmitted publicly, the "token" becomes known to anyone on the wire and could be replayed for the rest of its validity window.
The Impact
While there is still some risk in programmatically exposing your services to the internet, this all but eliminates the risk from drive-by attacks. It would take a fairly sophisticated attacker to even discover that any services are present. All of this functionality could be implemented fairly low in the stack and be entirely transparent, aside from a slight increase in the time it takes to make the first connection to the server. There are of course many better ways to secure services, and those should be prioritized, but I came across this idea and thought it was kind of neat.
From a “why would you do this” perspective, it’s a good way to obscure an already secure service that you want to reach remotely over the internet, without the need to tunnel to some central location / concentrator.