
Introduction to HTTP/3
HTTP has evolved to meet the growing need for faster, more efficient, and more reliable web performance.
Since HTTP/1, one of the main issues has been managing the data flow between client and server efficiently. The goal has always been clear: speed up connections and boost performance.
HTTP/2 marked a big leap forward. Thanks to multiplexing, it became possible to send multiple requests at the same time over a single TCP connection. But that came with a major downside: because TCP enforces strict packet ordering, losing just one packet stalls every request sharing the connection until it is recovered. This is known as head-of-line blocking (HOLB), and it is especially visible on high-latency networks.
HTTP/3 offers a smart solution by using QUIC, which runs over UDP. Here, each request or response travels through its own stream, identified uniquely. Streams can be bidirectional (for request/response pairs) or unidirectional (for control and other purposes). The key here is independence: if one stream gets delayed, it doesn’t hold back the others. Each stream moves on its own, and the server processes them as soon as they arrive. There’s no more waiting on lost packets from unrelated streams.
Unlike HTTP/1.1, which needed tons of open connections to handle simultaneous requests, or HTTP/2, which struggled with internal HOL issues, HTTP/3 allows for multiple concurrent streams over a single connection. That’s all possible thanks to QUIC, where each stream is handled independently and unaffected by losses in others.
On top of that, TCP is known for being "heavy": it requires a three-way handshake followed by a separate TLS handshake, a real bottleneck for modern internet infrastructure. QUIC avoids that by embedding TLS 1.3 directly into its own handshake, so transport and encryption are negotiated in a single exchange. Add 0-RTT (Zero Round-Trip Time) resumption and you get even quicker reconnections, ideal for mobile devices and frequently changing networks.
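As a back-of-the-envelope illustration of those savings, here is a rough count of network round trips spent before the first request byte can leave the client. The numbers are the idealized cases only; real deployments vary with session resumption and features like TCP Fast Open.

```python
# Rough handshake-latency comparison (illustrative accounting only).

HANDSHAKES = {
    # protocol: round trips before the client can send a request
    "TCP + TLS 1.3":        1 + 1,  # TCP 3-way handshake, then TLS
    "QUIC (first visit)":   1,      # transport + TLS combined
    "QUIC 0-RTT (resumed)": 0,      # request rides in the first flight
}

def extra_latency_ms(protocol: str, rtt_ms: float) -> float:
    """Handshake delay before the request leaves, for a given RTT."""
    return HANDSHAKES[protocol] * rtt_ms

for name in HANDSHAKES:
    print(f"{name:>22}: {extra_latency_ms(name, 100):.0f} ms at 100 ms RTT")
```

On a 100 ms mobile link, that is the difference between waiting 200 ms before the request even leaves and waiting nothing at all.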
But this architecture also introduces new challenges. Even though HTTP/3 eliminates HOLB at the transport layer, each stream still requires the server to maintain internal structures: data buffers, flow control windows, unique IDs, and retransmission timers. All that eats up memory and CPU, especially with lots of active streams. That’s where a new attack vector appears: if someone opens a ton of streams and never closes them, they can force the server to hold onto those resources, eventually causing congestion or even a denial-of-service.
HTTP/3 brings performance gains, but it also requires new defense strategies to manage the threats introduced by its architecture.
HTTP/3: Efficiency for the User, Pressure on the Server
Let’s look at how multiplexing works and why it increases risk.
A stream is a logical sequence of data within a connection (whether TCP or QUIC) used to handle requests and responses. But it’s not just a bunch of loose bytes: in both HTTP/2 and HTTP/3, each stream is a complex structure that the server needs to manage while it remains active.
That involves several things: it needs buffers to store temporary data, flow control windows to prevent the receiver from being overwhelmed, unique identifiers to track where each stream belongs within the shared connection, and timers to detect stalls or losses.
Keeping all that running requires memory and processing power. And when multiple streams are active at the same time, the load on the server can grow significantly. While this system is efficient under normal conditions, it also opens the door to potential attacks: if a malicious actor creates many streams and doesn’t close them properly, the server may end up using all the resources until it crashes.
Even if the client stops sending data, the server must maintain stream state until it closes or times out.
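That bookkeeping can be sketched roughly as follows. All names, sizes, and the stream count here are illustrative, not taken from any real QUIC stack; the point is simply that every open stream pins state until it is closed or times out.

```python
# Hypothetical sketch of per-stream state held by a server.
from dataclasses import dataclass, field

@dataclass
class StreamState:
    stream_id: int
    recv_buffer: bytearray = field(default_factory=bytearray)
    flow_window: int = 65_536     # bytes the peer may still send us
    idle_timeout: float = 30.0    # seconds before the server gives up

# Even if the client never sends another byte, each open stream keeps
# its buffer, flow-control window, and timer alive on the server:
streams = {sid: StreamState(sid) for sid in range(0, 40_000, 4)}
print(len(streams))  # 10000 idle streams, all still consuming state
```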
Does HTTP/3 merely inherit the risks of HTTP/2, or does it introduce new ones?
Every new protocol introduces both opportunities and risks. HTTP/3 is no exception.
One of the most important changes in HTTP/3 (enabled by QUIC) is the introduction of bidirectional streams, which work similarly to full-duplex communication.
In HTTP/2, bidirectional traffic within a stream was already possible, but there was no explicit distinction between unidirectional and bidirectional streams: every HTTP/2 stream is multiplexed over a single ordered TCP byte stream, which has no notion of stream types. HTTP/3, on the other hand, makes it explicit: each stream carries an identifier that indicates whether it is unidirectional or bidirectional. Why does that matter? For two key reasons:
- Resource pre-allocation: The server can prepare in advance, for example, by allocating buffer sizes.
- Stronger validation and control: If a server receives a “HEADERS” frame on a stream marked as unidirectional, it can immediately discard it.
In addition, each endpoint can send or receive data independently, without needing to follow a strict order. This allows packets to flow more freely, improving service responsiveness.
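The identifier encoding itself is simple: per RFC 9000, the two low-order bits of a QUIC stream ID say who opened the stream and whether it is bidirectional, so a server can classify a stream the moment it sees the ID. A minimal decoder:

```python
# QUIC stream ID semantics (RFC 9000): bit 0 = initiator, bit 1 = direction.

def classify_stream(stream_id: int) -> str:
    initiator = "client" if stream_id & 0x1 == 0 else "server"
    direction = "bidirectional" if stream_id & 0x2 == 0 else "unidirectional"
    return f"{initiator}-initiated, {direction}"

print(classify_stream(0))  # client-initiated, bidirectional (a request)
print(classify_stream(3))  # server-initiated, unidirectional
```

This is what makes the validation above possible: a request frame arriving on a stream whose ID marks it unidirectional is immediately suspect.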
In the past, to overwhelm a server, an attacker had to open multiple TCP connections. Now, with QUIC, they can open thousands of streams within a single connection thanks to multiplexing, generating a massive load with far less effort. Since multiple connections are no longer necessary, the cost for the attacker drops significantly. On top of that, traditional protection mechanisms, such as detecting the number of headers, analyzing packet fragmentation, or counting connections from a single IP, become much less effective.
Add to this the fact that HTTP/3 supports ultrafast 0-RTT connections, and things get even more complicated. An attacker who has previously connected can reuse the TLS key and send requests without the server checking if they’re legitimate. What happens then? The server starts allocating buffers and assigning resources almost immediately, even for a fake request.
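One common countermeasure against 0-RTT replay, sketched here in simplified form (the nonce handling and time window are illustrative, not any specific stack's API), is for the server to remember recently seen early-data tickets and refuse duplicates, forcing replayed requests back through a full handshake:

```python
# Hedged sketch of server-side anti-replay protection for 0-RTT data.

class ReplayGuard:
    """Reject early data whose ticket/nonce was already seen within
    the window, so a captured 0-RTT packet cannot trigger work twice."""

    def __init__(self, window_seconds: float = 10.0):
        self.window = window_seconds
        self.seen: dict[bytes, float] = {}  # nonce -> first-seen time

    def accept_early_data(self, nonce: bytes, now: float) -> bool:
        # Expire old entries so memory stays bounded.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t < self.window}
        if nonce in self.seen:
            return False                    # replay: refuse the 0-RTT data
        self.seen[nonce] = now
        return True

guard = ReplayGuard()
print(guard.accept_early_data(b"ticket-123", now=0.0))  # True: first use
print(guard.accept_early_data(b"ticket-123", now=1.0))  # False: replayed
```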
And what about IP spoofing? In TCP, it’s not very effective, since the client needs to receive and respond to a SYN-ACK for the connection to be completed. With QUIC, if 0-RTT is enabled and the attacker has a valid session ticket, the server may process the first datagram without validation, making spoofing more viable. Although QUIC includes mechanisms like the Retry Token, in many services those validations only occur in the first datagram or for new IP addresses, leaving the door open for amplification attacks.
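The idea behind such an address-validation token can be sketched as an HMAC over the client's source address: the server only commits real resources once the client echoes the token back, proving it can actually receive packets at that address. The key and token layout below are made up for illustration; real QUIC stacks define their own formats.

```python
# Hedged sketch of Retry-style address validation via an HMAC token.
import hashlib
import hmac

SECRET = b"rotate-me-regularly"  # illustrative server-side key

def make_token(client_addr: str) -> bytes:
    """Token bound to the source address the server saw."""
    return hmac.new(SECRET, client_addr.encode(), hashlib.sha256).digest()

def validate_token(client_addr: str, token: bytes) -> bool:
    """Only valid if echoed back from the same address it was issued to."""
    return hmac.compare_digest(make_token(client_addr), token)

token = make_token("203.0.113.7:4433")
print(validate_token("203.0.113.7:4433", token))   # True: address confirmed
print(validate_token("198.51.100.9:4433", token))  # False: spoofed source
```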
Since QUIC runs over encrypted UDP packets, distinguishing between normal activity and potential threats is not straightforward. Identifying UDP-based floods often requires analyzing encrypted traffic, adding resource strain to the server and security systems.
Some have suggested using QUIC proxies to decrypt and examine traffic, and while that does help to a point, it’s not without drawbacks, especially when dealing with high-traffic systems or apps where low latency really matters. This can diminish the benefits of QUIC.
Real-world tests have shown that even though QUIC has plenty of advantages, it can be more susceptible to certain attacks than older protocols like TCP with SYN cookies. Take handshake flood attacks, for example. QUIC servers tend to hit their limits faster, particularly on the CPU side, since handling each initial connection request demands significantly more processing.
Of course, this isn’t the only threat on the internet, nor the only one related to HTTP/3. If you’re interested in learning about the latest threats, we invite you to check out our latest threat intelligence report.
Defending Against HTTP/3-QUIC DDoS Risks: Recommended Best Practices
- Take advantage of early detection methods that analyze behavior during the initial stages of the QUIC handshake—like the Client Hello or even the Encrypted Client Hello. These signals can help flag suspicious activity early on, without needing to decrypt the full traffic stream.
- Use HTTP/3 over QUIC selectively, and only where it makes real-world sense. It’s best suited for situations where performance gains—like lower latency or reduced head-of-line blocking—are truly critical, and where the risk of DDoS attacks is already low or well-managed.
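As an example of what such early-stage filtering can look like, here is a hedged sketch of a per-source token bucket applied to incoming Initial packets before any expensive handshake work is done. The rate and burst thresholds are illustrative; a real deployment would tune them and typically key on prefixes rather than single addresses.

```python
# Hedged sketch of rate-limiting new QUIC handshakes per source.

class HandshakeLimiter:
    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate, self.burst = rate, burst
        self.buckets: dict[str, tuple[float, float]] = {}  # src -> (tokens, last)

    def allow_initial(self, src: str, now: float) -> bool:
        tokens, last = self.buckets.get(src, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[src] = (tokens, now)
            return False  # silently drop: cheap for us, costly for the attacker
        self.buckets[src] = (tokens - 1.0, now)
        return True

lim = HandshakeLimiter(rate=1.0, burst=2.0)
print([lim.allow_initial("198.51.100.9", now=0.0) for _ in range(3)])
# the burst lets the first two through; the third waits for tokens to refill
```

The point is that the check runs on the very first datagram, so a handshake flood is shed before it reaches the CPU-heavy crypto path that the tests above identified as the bottleneck.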
References
- Teyssier, B., Joarder, Y. A., & Fung, C. (n.d.). An empirical approach to evaluate the resilience of QUIC protocol against handshake flood attacks. Concordia Institute for Information Systems Engineering, Concordia University.
- Chatzoglou, E., Kouliaridis, V., Kambourakis, G., Karopoulos, G., & Gritzalis, S. (2022). A hands-on gaze on HTTP/3 security through the lens of HTTP/2 and a public dataset. https://www.sciencedirect.com/science/article/pii/S0167404822004436
- Gahtan, B., Shahla, R. J., Cohen, R., & Bronstein, A. M. (n.d.). Estimating the number of HTTP/3 responses in QUIC using deep learning. Technion – Israel Institute of Technology.
- RFC 9002, QUIC Loss Detection and Congestion Control.
- RFC 9114, HTTP/3.