0 - Introduction
1 - The Slow Path: Traditional Scanning Methods
1.0 - Per-Connection Syscall Overhead
1.1 - Inefficient Packet Filtering with AF_PACKET
2 - Kernel Bypass and Fastpath Architectures
2.0 - Full Kernel Bypass: DPDK |
2.1 - The Kernel Fastpath: XDP |
2.2 - XDP Internals: Actions and Modes |
2.3 - AF_XDP: A Zero-Copy Bridge to Userspace |
3 - Building the Scanner |
3.0 - Core Design |
3.1 - The eBPF Filter Component |
3.2 - The Userspace Application |
3.2.0 - Setup and Initialization |
3.2.1 - The Packet Transmission Loop |
3.2.2 - The Packet Reception Loop |
4 - Performance Analysis |
4.0 - A Note on Benchmarking |
4.1 - Head-to-Head: AF_XDP vs. masscan |
5 - Extending the AF_XDP Framework |
5.0 - High-Speed HTTP/HTTPS Application Fuzzing and L7 DDoS |
5.1 - Stateless UDP Fuzzing and DDoS Amplification |
5.2 - High-Entropy SYN Flooding |
6 - Caveats and Considerations |
7 - Conclusion |
8 - References |
9 - Source Code |
--[ 0 - Introduction |
The network scanner has always been a fundamental tool in my arsenal. As |
network interface speeds have increased, I have found my tools constrained
by the overhead of the operating system's kernel network stack, a
bottleneck that becomes significant during internet-scale scans.
In this article, I describe the method I used to build a high-performance |
port scanner using the Linux kernel's eBPF and AF_XDP subsystems. This |
approach creates a kernel fastpath that bypasses the traditional network |
stack, allowing my application to interact more directly with the network |
driver for line-rate filtering and zero-copy data transfer. |
--[ 1 - The Slow Path: Traditional Scanning Methods |
---[ 1.0 - Per-Connection Syscall Overhead |
My work began by analyzing the conventional port scanning method, which |
uses the connect() syscall. For each port, the application creates a socket, |
initiates a TCP handshake, and waits for the kernel to report the outcome. |
Every socket() and connect() call incurs a context switch into the kernel,
consuming CPU cycles and introducing significant latency, which makes the
approach impractical for my purposes.
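To make the cost concrete, here is a minimal sketch of the naive
approach for an IPv4 target; the probe_port() helper is illustrative,
not taken from the scanner's source. Every probe pays for three
syscalls plus a full handshake round-trip:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper: one socket(), connect(), and close()
     * per port -- three context switches and at least one network
     * round-trip for every single probe. */
    int probe_port(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(port);
        inet_pton(AF_INET, ip, &dst.sin_addr);

        /* Blocks until a SYN-ACK, an RST, or a timeout. */
        int is_open = (connect(fd, (struct sockaddr *)&dst,
                               sizeof(dst)) == 0);
        close(fd);
        return is_open;    /* 1 = open, 0 = closed/filtered */
    }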
---[ 1.1 - Inefficient Packet Filtering with AF_PACKET |
I then examined raw sockets (AF_PACKET), which allow a userspace |
application to receive raw link-layer frames, bypassing the kernel's |
high-level network stack. While this is an improvement for SYN scanning, |
it does not approach the performance of a true kernel bypass. Packets are
still delivered via the standard kernel data path, paying for context
switches and a kernel-to-user memory copy for every packet the interface
receives. That inherent overhead was unacceptable for my goals.
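A minimal receive loop illustrates the problem; this is a generic
sketch (it needs CAP_NET_RAW), not the article's code. Each frame
costs one recvfrom() syscall and one kernel-to-user copy:

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* ETH_P_ALL: deliver every link-layer frame seen by the
         * host, still via the normal driver -> kernel data path. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0)
            return 1;

        unsigned char frame[2048];
        for (;;) {
            /* One syscall and one memory copy per frame. */
            ssize_t n = recvfrom(fd, frame, sizeof(frame), 0,
                                 NULL, NULL);
            if (n <= 0)
                break;
            /* ... parse Ethernet/IP/TCP headers here ... */
        }
        close(fd);
        return 0;
    }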
--[ 2 - Kernel Bypass and Fastpath Architectures |
---[ 2.0 - Full Kernel Bypass: DPDK |
To achieve maximum performance, some frameworks like the Data Plane |
Development Kit (DPDK) implement a full kernel bypass. They use custom |
Poll-Mode Drivers (PMDs) that unbind a network interface from the kernel's |
control, giving a userspace application exclusive access. While this is |
very fast, it comes with drawbacks: it depends on custom drivers, is
invasive to the system, and typically pins a dedicated CPU core at 100%
utilization for polling.
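The polling cost is visible in the shape of a typical DPDK receive
loop. The sketch below (EAL and port initialization omitted) is
illustrative only; the loop spins whether or not traffic arrives:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Poll-mode receive: rte_eth_rx_burst() never blocks, so the
     * calling core busy-spins at 100% even on an idle link. */
    void rx_loop(uint16_t port_id)
    {
        struct rte_mbuf *bufs[32];

        for (;;) {
            uint16_t n = rte_eth_rx_burst(port_id, 0, bufs, 32);
            for (uint16_t i = 0; i < n; i++) {
                /* ... process the packet ... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
    }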
---[ 2.1 - The Kernel Fastpath: XDP |
It is important to clarify that AF_XDP is not a kernel bypass in the same |
vein as DPDK. It is a highly efficient kernel fastpath that works in |
cooperation with existing kernel drivers. An XDP program is an eBPF
program attached to a low-level hook in the network driver, invoked for
every incoming packet at the earliest possible point.
---[ 2.2 - XDP Internals: Actions and Modes |
Once my eBPF program is running at the XDP hook, it can inspect the raw |
packet data and return a verdict that determines the packet's fate. The |
primary actions are XDP_PASS (hand the packet on to the normal kernel
stack), XDP_DROP (discard it at the driver), XDP_TX (bounce it back out
the same interface), and XDP_REDIRECT (forward it to another interface,
CPU, or socket). The XDP_REDIRECT action is what allows my program to
forward a packet to an AF_XDP socket in userspace.
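As a concrete illustration, a minimal XDP program that steers traffic
into AF_XDP sockets might look like the sketch below. The map name
xsks_map and the redirect-everything policy are assumptions for
illustration; the scanner's actual filter is described in section 3.1.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* An XSKMAP holds AF_XDP socket descriptors, keyed here by RX
     * queue index. Userspace populates it via libbpf/libxdp. */
    struct {
        __uint(type, BPF_MAP_TYPE_XSKMAP);
        __uint(max_entries, 64);
        __type(key, __u32);
        __type(value, __u32);
    } xsks_map SEC(".maps");

    SEC("xdp")
    int xdp_redirect_xsk(struct xdp_md *ctx)
    {
        /* Redirect to the socket bound to this RX queue; if no
         * socket is registered, fall back to XDP_PASS so the
         * frame continues into the normal kernel stack. */
        return bpf_redirect_map(&xsks_map, ctx->rx_queue_index,
                                XDP_PASS);
    }

    char LICENSE[] SEC("license") = "GPL";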
You can load XDP programs in one of three modes, which affects
performance; a minimal attach sketch follows the list below:
- Native XDP: The program is loaded directly by a supported network card |
driver, providing the highest performance. |
- Offloaded XDP: The program is offloaded to and executed directly on the |
NIC hardware, requiring specific SmartNICs. |
- Generic XDP: The program is hooked later in the kernel's network path, |
after an sk_buff has been allocated. This mode serves as a fallback for |
testing or for use on unsupported hardware. |
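Mode selection happens at attach time. Using libbpf's attach API (a
sketch; the interface name eth0 is illustrative and the program fd is
assumed to be set up elsewhere), the flag picks the mode, and the call
fails rather than silently degrading if the driver lacks native
support:

    #include <bpf/libbpf.h>
    #include <linux/if_link.h>
    #include <net/if.h>

    int attach_native_xdp(int prog_fd)
    {
        int ifindex = if_nametoindex("eth0");  /* example NIC */
        if (ifindex == 0)
            return -1;

        /* XDP_FLAGS_DRV_MODE requests native XDP; substitute
         * XDP_FLAGS_SKB_MODE (generic) or XDP_FLAGS_HW_MODE
         * (offload) to select the other modes. */
        return bpf_xdp_attach(ifindex, prog_fd,
                              XDP_FLAGS_DRV_MODE, NULL);
    }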