TCP Starvation

Finjan TeamBlog, Cybersecurity

Since information has become the lifeblood and major currency of the digital economy, it's essential that measures be taken to ensure that data flows smoothly between those who create it and those who are meant to receive it. This is particularly true of the internet, which is the world's major transport medium for digital information and content of all kinds.

So the technology companies, developers, and standards institutions governing the architecture of the internet have over the years developed mechanisms and protocols for facilitating data transfer, and making it more secure. And these mechanisms have in turn become the targets for cyber-criminals and hackers, looking to manipulate data transfer streams for their own purposes.

What's known as TCP starvation is widely considered to be a "legacy" mode of attack – especially in light of the exploits and attack methods that have developed as internet data protocols have evolved. But there's still some mileage in it for cyber-attackers.

Data Transfer Using Transmission Control Protocol (TCP)

Developed as a means of getting information from one network device to another, the Transmission Control Protocol (TCP) uses a “retransmission” strategy, to ensure that data doesn’t get lost in transit.

Flow control and error recovery are at the heart of TCP, which provides facilities in areas such as basic data transfer, reliability, precedence and security, multiplexing (communicating two or more signals over a common channel), and the opening and closing of connections.

TCP is a connection-based protocol, and therefore requires a stable connection to be established before allowing any data to go through. It also dictates how a connection should be terminated, once a successful transfer has been completed.

When a TCP connection is being established, both the data sender (server) and receiver (client) agree on the data sequence and the acknowledgment (ACK) numbers to be communicated between the two, when confirming that information has been successfully received. At this stage, the client also notifies the server of its source port.

A TCP transmission sequence begins with a random initial sequence number, associated with the transfer. Each time a new data packet is sent, the sequence number is increased by the number of bytes transmitted in the previous segment of the TCP communication.

The acknowledgment or ACK number sent from the receiver's side follows a similar pattern, with the number being equal to the sender's sequence number, increased by the number of bytes received. The ACK segment is used to confirm that the client has received the transmitted data.
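The sequence/ACK arithmetic described above can be sketched in a toy model. The class and helper names here are illustrative, not part of any real TCP stack, and real stacks handle far more (retransmission timers, selective acknowledgment, and so on):

```python
import random

class TcpSide:
    """Toy model of one side's sequence-number bookkeeping (illustrative only)."""
    def __init__(self):
        # Each connection starts from a random initial sequence number (ISN).
        self.seq = random.randint(0, 2**32 - 1)

    def send(self, payload: bytes) -> dict:
        """Build a segment, then advance our sequence number by the bytes sent."""
        segment = {"seq": self.seq, "len": len(payload)}
        self.seq = (self.seq + len(payload)) % 2**32
        return segment

def ack_for(segment: dict) -> int:
    """Receiver's ACK: the sender's sequence number plus the bytes received."""
    return (segment["seq"] + segment["len"]) % 2**32

sender = TcpSide()
first = sender.send(b"x" * 100)   # a 100-byte segment
second = sender.send(b"y" * 50)   # the next segment starts 100 bytes later

# The receiver's ACK for the first segment names exactly the sequence
# number it expects next -- which is where the second segment begins.
assert ack_for(first) == second["seq"]
```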

The Transmission Control Protocol features extensive error-checking mechanisms, including its procedures for the acknowledgment of data, and mechanisms for information flow control. Because of the combined effect of these checks, TCP is a relatively slow protocol – especially if lost data packets need to be retransmitted.

The Problem With UDP

The User Datagram Protocol or UDP is a datagram or packet-based protocol which proves very efficient for multicast or broadcast type transmissions. Its only error-checking mechanism takes place through the use of checksums (numerical values associated with a particular piece of information).
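That checksum is the standard Internet checksum (RFC 1071): the one's complement of the one's complement sum of the data taken as 16-bit words. A minimal sketch, ignoring the pseudo-header that the real UDP checksum also covers:

```python
def internet_checksum(data: bytes) -> int:
    """One's complement sum of 16-bit words, then complemented (RFC 1071)."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"data"
csum = internet_checksum(payload)

# A receiver verifies by summing the data together with the received
# checksum: for an intact packet the result complements to zero.
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```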

UDP makes no allowances for the secure opening of a connection via 3-way handshake, maintaining a connection in progress, or terminating a connection when the data transfer is complete. It’s a “connectionless” protocol in which no sequencing of data occurs, and for which the successful delivery of data can’t be guaranteed.

Though some UDP applications have application-level windowing, flow control, and retransmission capabilities, the User Datagram Protocol is considered much less reliable and more trouble-prone than TCP.

Under UDP, there is no guarantee that data packets will arrive at their destination in the order they were sent. Given that UDP is typically used on wide-area networks like the internet (where information is routinely sent from servers along different pathways, often with delays), data may arrive at a client device in a jumbled order, which requires sorting before sense can be made out of it.
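UDP's connectionless nature is visible directly at the sockets API: a datagram can be fired off with no handshake at all, and nothing tells the sender whether it ever arrived. A minimal loopback example:

```python
import socket

# Receiver: bind to an ephemeral port on loopback; no listen/accept needed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2.0)
addr = receiver.getsockname()

# Sender: transmit a datagram immediately -- no connection is established,
# and the sender gets no acknowledgment of delivery.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"no handshake required", addr)

data, _ = receiver.recvfrom(1024)
print(data)   # the datagram, assuming it wasn't dropped in transit

sender.close()
receiver.close()
```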

Problems can therefore occur when both TCP and UDP transmissions are used on the same network.

TCP Starvation

One of the congestion control mechanisms implemented by the Transmission Control Protocol is "Slow Start". When a sender begins transmitting data under TCP, it uses a small congestion window as a sort of tester, to determine whether the recipient is able to successfully receive the information being sent (as confirmed by the client sending back its ACK number).

Once the sender receives an acknowledgment from the recipient, it can then increase the size of its congestion window for the next packet of data that it sends. Under TCP, a server may continue increasing this window size with each successful acknowledgment it gets from the receiver.

The sender's transmission rate will be increased until a loss is detected – at which point the TCP mechanism assumes that there's congestion on the line, and "throttles back" or reduces the volume of data it sends onto the network.
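The behaviour described above can be illustrated with a toy model along the lines of a simplified TCP Reno. Window sizes are in segments per round trip, and the threshold and loss timings here are arbitrary; real stacks are considerably more involved:

```python
def cwnd_trace(loss_rounds: set, rounds: int) -> list:
    """Toy congestion-window trace, in segments per round trip (illustrative).

    Slow start doubles the window each round until it reaches the threshold
    or a loss occurs; a loss halves the threshold and throttles the window
    back to it, after which growth is linear (congestion avoidance).
    """
    cwnd, ssthresh, trace = 1, 64, []
    for rnd in range(rounds):
        trace.append(cwnd)
        if rnd in loss_rounds:            # loss detected: assume congestion
            ssthresh = max(2, cwnd // 2)  # remember half the current window...
            cwnd = ssthresh               # ...and throttle back to it
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear growth
    return trace

print(cwnd_trace(loss_rounds={5}, rounds=10))
# → [1, 2, 4, 8, 16, 32, 16, 17, 18, 19]
```

The window doubles until the loss at round 5, then drops by half and probes upward one segment at a time.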

UDP has no such mechanisms in place. So on systems (like the internet) where the two protocols are used together, UDP will continue to transmit data – regardless of whether TCP has detected congestion in the data stream, or not. In this situation, UDP data streams will become the dominant force, and the choked-back TCP transmissions will be starved – hence the name, TCP starvation.
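A similar toy model shows the starvation effect on a shared link: the UDP stream keeps its fixed share regardless of loss, while the TCP flow backs off every time the link overflows. The capacity and rates below are arbitrary units, chosen only to make the squeeze visible:

```python
def share_link(capacity: int, udp_rate: int, rounds: int) -> list:
    """Toy model of TCP and UDP sharing one link (arbitrary units).

    UDP sends at a fixed rate with no congestion control; TCP grows its
    rate until the link overflows, then halves it. The returned trace of
    TCP's rate shows it squeezed toward whatever UDP leaves over.
    """
    tcp_rate, trace = 1, []
    for _ in range(rounds):
        if tcp_rate + udp_rate > capacity:    # overflow: only TCP reacts
            tcp_rate = max(1, tcp_rate // 2)  # throttle back
        else:
            tcp_rate += 1                     # probe for more bandwidth
        trace.append(tcp_rate)
    return trace

# With UDP claiming 9 of 10 units, TCP oscillates near starvation:
print(share_link(capacity=10, udp_rate=9, rounds=6))
# → [2, 1, 2, 1, 2, 1]
```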

It’s for this reason that TCP starvation is also referred to as UDP dominance.

Applications of TCP Starvation

TCP starvation or UDP dominance has been used by hackers in staging Denial of Service (DoS) attacks on mixed protocol networks. The technique here is to close a TCP session on the attacker's side, while leaving it open for the victim. Creating a loop of this effect (with UDP data traffic clogging the network) quickly exhausts the victim's session limit, denying other users access to the service.

In this way, the effectiveness of a more traditional DoS campaign can be ramped up by a considerable margin. Methods have been developed for “weaponizing” this effect, to create maximum damage.

Vulnerable Processes

Most services running on the Transmission Control Protocol will be vulnerable to TCP starvation. UDP dominance can also affect products that handle TCP sessions, such as load balancers, routers and firewalls with NAT tables, or firewalls configured with session-based policies.

In addition, telecommunications systems based on VoIP (Voice over Internet Protocol) technologies can suffer damaging service and Quality of Service (QoS) disruptions, due to data packet losses and TCP starvation effects. These effects are magnified on networks where (for example) TCP-based signaling and data traffic co-exist with UDP-based media streams such as voice or streaming video.


Remedies and Mitigation

Wherever possible, TCP-based data flows should be separated from UDP information streams.

On systems where mixed data flows can’t be avoided, administrators should tweak their network timeout and data retransmission settings, to minimize the effect of TCP starvation on users with lower bandwidth connections.

The CERT Coordination Center (CERT/CC) has published an advisory on these kinds of attacks, which they initially identified as a variant of the NAPTHA attack, CVE-2000-1039.
