---[ 2.3 - AF_XDP: A Zero-Copy Bridge to Userspace

AF_XDP is the kernel feature I used to create a high-performance data path
between my XDP program and my userspace application. This is achieved
through a shared memory region called a UMEM, which I allocate in userspace
and register with the kernel. This UMEM is where all my packet data lives.
The communication is orchestrated by a set of four single-producer,
single-consumer rings:
- RX Ring: The kernel places descriptors here for incoming packets that my
  XDP program has redirected.
- TX Ring: I place descriptors here for packets I want to send. The kernel
  picks them up and transmits them.
- FILL Ring: I place descriptors for empty UMEM frames on this ring to give
  the buffers to the kernel for receiving new packets.
- COMPLETION Ring: After the kernel has sent a packet from my TX ring, it
  places the descriptor on this ring to signal that the UMEM frame can be
  reused.
This architecture allows me to shuttle packets back and forth with the NIC
driver while minimizing memory copies and context switches.
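All four rings share the same mechanics: free-running 32-bit producer and
consumer indices, with the slot selected by masking the index against a
power-of-two ring size. The Go sketch below models that scheme. It is a
simplified illustration of the descriptor flow, not the kernel's actual
mmap'd layout (the real rings also need memory barriers between the slot
writes and the index updates):

```go
package main

import "fmt"

// ring models one AF_XDP single-producer/single-consumer ring.
// prod and cons are free-running counters; the slot is index & mask.
type ring struct {
	prod, cons uint32
	mask       uint32
	desc       []uint64 // descriptor slots (UMEM frame offsets)
}

// newRing allocates a ring; size must be a power of two.
func newRing(size uint32) *ring {
	return &ring{mask: size - 1, desc: make([]uint64, size)}
}

// produce appends one descriptor; returns false when the ring is full.
func (r *ring) produce(addr uint64) bool {
	if r.prod-r.cons == uint32(len(r.desc)) {
		return false
	}
	r.desc[r.prod&r.mask] = addr
	r.prod++
	return true
}

// consume pops the oldest descriptor; ok is false when the ring is empty.
func (r *ring) consume() (addr uint64, ok bool) {
	if r.cons == r.prod {
		return 0, false
	}
	addr = r.desc[r.cons&r.mask]
	r.cons++
	return addr, true
}

func main() {
	r := newRing(4)
	// Hand four 2048-byte UMEM frames to the "kernel", FILL-ring style.
	for i := uint64(0); i < 4; i++ {
		r.produce(i * 2048)
	}
	fmt.Println(r.produce(8192)) // false: ring is full
	addr, _ := r.consume()
	fmt.Println(addr) // 0: frames come back in FIFO order
}
```

Because the counters are free-running, "full" is simply prod - cons ==
size, with unsigned wraparound doing the right thing; no separate count
field is needed.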
--[ 3 - Building the Scanner

---[ 3.0 - Core Design

My demonstration scanner is composed of two primary components: an eBPF+XDP
filter in C and a userspace packet sender in Go. The core design separates
the logic for efficiency. My eBPF filter is loaded onto the NIC to inspect
incoming TCP packets and redirect only the replies relevant to the scanner.
My Go application then manages the AF_XDP socket, populates the FILL ring,
sends SYN packets via the TX ring, and processes the replies from the RX
ring. This division of labor places the performance-critical filtering in
the kernel, while I handle the more complex state and I/O logic in
userspace.
---[ 3.1 - The eBPF Filter Component

My eBPF code is designed for efficiency and simplicity.
--------------------------------------------------------------------------
// file: bpf/xdp_filter.c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>

// This MUST match the -srcport flag in my Go program.
#define FILTER_PORT 54321

// Map to hold the file descriptor of my AF_XDP socket.
struct {
    __uint(type, BPF_MAP_TYPE_XSKMAP);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
    __uint(max_entries, 1);
} xsks_map SEC(".maps");
SEC("xdp")
int xdp_port_filter(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    struct ethhdr *eth = data;
    struct iphdr *ip;
    struct tcphdr *tcp;

    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_TCP)
        return XDP_PASS;

    // ihl is in 32-bit words; this honors IP options if present.
    tcp = (void *)ip + ip->ihl * 4;
    if ((void *)(tcp + 1) > data_end)
        return XDP_PASS;

    // tcp->dest is in network byte order, hence the bpf_htons().
    if (tcp->dest == bpf_htons(FILTER_PORT))
        return bpf_redirect_map(&xsks_map, 0, 0);
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
--------------------------------------------------------------------------
---[ 3.2 - The Userspace Application

The PoC Go application orchestrates the entire scanning process.

----[ 3.2.0 - Setup and Initialization

Before any packets fly, a sequence of setup steps must be performed.
First, I parse the arguments for the interface, targets, and ports. Then,
I load my compiled xdp_filter.o program and attach it to the specified
interface. The core setup involves creating the AF_XDP socket, then
allocating and registering the UMEM via the XDP_UMEM_REG setsockopt call.
Following that, I set the sizes of the four rings and mmap them into my
application's address space. With the socket ready, I register its file
descriptor into the eBPF map so the kernel knows where to redirect packets.
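In Go terms, the socket and UMEM portion of that sequence looks roughly
like the sketch below. The stdlib syscall package predates AF_XDP, so the
constants are spelled out from <linux/if_xdp.h>. This is an illustrative
outline, not the scanner's actual code: it needs root and a real interface
to succeed, and the remaining steps (XDP_MMAP_OFFSETS, the mmap calls, and
bind) are only noted in comments:

```go
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

// Constants from <linux/if_xdp.h> and <linux/socket.h>.
const (
	AF_XDP  = 44
	SOL_XDP = 283

	XDP_RX_RING              = 2
	XDP_TX_RING              = 3
	XDP_UMEM_REG             = 4
	XDP_UMEM_FILL_RING       = 5
	XDP_UMEM_COMPLETION_RING = 6
)

// xdpUmemReg mirrors struct xdp_umem_reg from the kernel UAPI.
type xdpUmemReg struct {
	Addr      uint64 // start of the userspace UMEM area
	Len       uint64 // length in bytes
	ChunkSize uint32 // per-frame size, e.g. 2048
	Headroom  uint32
	Flags     uint32
}

// setupSocket creates the AF_XDP socket, registers the UMEM, and sizes
// the four rings. It requires root privileges to actually succeed.
func setupSocket(umem []byte, frameSize uint32) (int, error) {
	fd, err := syscall.Socket(AF_XDP, syscall.SOCK_RAW, 0)
	if err != nil {
		return -1, err
	}
	reg := xdpUmemReg{
		Addr:      uint64(uintptr(unsafe.Pointer(&umem[0]))),
		Len:       uint64(len(umem)),
		ChunkSize: frameSize,
	}
	// XDP_UMEM_REG takes a struct, so the raw setsockopt syscall is used.
	_, _, errno := syscall.Syscall6(syscall.SYS_SETSOCKOPT, uintptr(fd),
		SOL_XDP, XDP_UMEM_REG,
		uintptr(unsafe.Pointer(&reg)), unsafe.Sizeof(reg), 0)
	if errno != 0 {
		return -1, errno
	}
	// The four ring sizes are plain integers (number of descriptors).
	for _, opt := range []int{XDP_UMEM_FILL_RING, XDP_UMEM_COMPLETION_RING,
		XDP_RX_RING, XDP_TX_RING} {
		if err := syscall.SetsockoptInt(fd, SOL_XDP, opt, 2048); err != nil {
			return -1, err
		}
	}
	// Remaining steps: getsockopt(XDP_MMAP_OFFSETS), mmap each ring,
	// bind a sockaddr_xdp to the interface, write fd into xsks_map.
	return fd, nil
}

// frameAddr is the UMEM offset for frame i -- what FILL/TX descriptors
// carry, assuming frames are packed back to back.
func frameAddr(i, frameSize uint64) uint64 { return i * frameSize }

func main() {
	fmt.Println(frameAddr(3, 2048)) // 6144
}
```

Note that the ring-size setsockopts take simple integers, while
XDP_UMEM_REG takes a struct; that asymmetry is why the sketch drops down
to the raw syscall for the latter.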