feat: add PCAP export with process attribution sidecar (#137)

- Add --pcap-export flag to stream packets to PCAP file
- Write connection metadata (PID, process, timestamps) to JSONL sidecar
- Include Python script for enriching PCAP with process info
- Update documentation with usage examples and workflow
Author: Marco Cadetg
Date: 2026-01-17 19:51:07 +01:00 (committed by GitHub)
Parent: 79677b900c
Commit: f3f192763a
10 changed files with 640 additions and 11 deletions

.gitignore

@@ -21,3 +21,4 @@ target/
/target
.aider*
/logs
.venv


@@ -52,6 +52,7 @@ Uses libpcap to capture raw packets from the network interface. This thread runs
- Open network interface for packet capture (non-promiscuous, read-only mode)
- Apply BPF filters if needed
- Capture raw packets
- Stream packets to PCAP file if `--pcap-export` is enabled (direct disk write, no memory buffering)
- Send packets to processing queue
### 2. Packet Processors
@@ -102,7 +103,7 @@ Creates consistent snapshots of connection data for the UI at regular intervals
### 5. Cleanup Thread
-Removes inactive connections using smart, protocol-aware timeouts. This prevents memory leaks and keeps the connection list relevant.
+Removes inactive connections using smart, protocol-aware timeouts. This prevents memory leaks and keeps the connection list relevant. When `--pcap-export` is enabled, also streams connection metadata (PID, process name, timestamps) to a JSONL sidecar file as connections close.
**Timeout Strategy:**
@@ -362,6 +363,7 @@ netstat iftop bandwhich RustNet tcpdump Wireshark
| **eBPF support** | Yes (Linux) | No | No | No | No | Yes | No |
| **Landlock sandboxing** | Yes (Linux) | No | No | No | No | No | No |
| **JSON event logging** | Yes | No | No | No | No | No | Yes |
| **PCAP export** | Yes (+ process sidecar) | No | Yes | No | No | No | Yes |
| **Packet capture** | libpcap | Raw sockets | libpcap | libpcap | Kernel | Kernel | libpcap |
### Tool Focus Areas
@@ -384,7 +386,8 @@ netstat iftop bandwhich RustNet tcpdump Wireshark
| Attribute network activity to specific applications | RustNet |
| Deep protocol dissection (3000+ protocols) | Wireshark |
| Quick terminal-based network overview | RustNet |
-| Save captures for later analysis | Wireshark/tcpdump |
+| Save captures with process attribution | RustNet (`--pcap-export`) |
+| Save captures for deep analysis | Wireshark/tcpdump |
### RustNet and Wireshark: Different Strengths
@@ -399,8 +402,35 @@ Wireshark operates at the packet capture layer (libpcap) - it sees raw network t
| Protocol dissectors | ~15 common protocols | 3000+ protocols |
| Packet-level inspection | Metadata only | Full payload |
| Interface | TUI (terminal) | GUI |
-| Capture to file | No | Yes (pcap) |
+| Capture to file | Yes (`--pcap-export`) | Yes (native) |
Both tools can run in real-time. Choose based on what you need to see:
- **"What is making this connection?"** → RustNet
- **"What's inside this packet?"** → Wireshark
### Bridging the Gap: PCAP Export with Process Attribution
RustNet can now export packet captures while preserving process attribution - something neither tcpdump nor Wireshark can do alone:
```bash
# Capture packets with RustNet (includes process tracking)
sudo rustnet -i eth0 --pcap-export capture.pcap
# Creates:
# capture.pcap - Standard PCAP file
# capture.pcap.connections.jsonl - Process attribution (PID, name, timestamps)
# Enrich PCAP with process info and create annotated PCAPNG
python scripts/pcap_enrich.py capture.pcap -o annotated.pcapng
# Open in Wireshark - packets now show process info in comments
wireshark annotated.pcapng
```
This workflow gives you the best of both worlds:
- **RustNet's process attribution**: Know which application generated each packet
- **Wireshark's deep analysis**: Full protocol dissection with 3000+ analyzers
The enrichment script correlates packets with their originating processes and embeds the information as PCAPNG packet comments, visible in Wireshark's packet details pane.
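The correlation itself is simple to sketch in Python. The following is a simplified illustration of what the enrichment step does (the real `scripts/pcap_enrich.py` additionally uses connection timestamps to disambiguate port reuse; `load_sidecar` is an illustrative name, not part of the repository):

```python
import json

def load_sidecar(path):
    """Build a lookup of (protocol, src, dst) -> process info from the
    JSONL sidecar. Both directions are stored so that request and reply
    packets of the same connection both match."""
    lookup = {}
    with open(path) as f:
        for line in f:
            c = json.loads(line)
            info = {"pid": c["pid"], "process": c["process_name"]}
            # Forward direction (local -> remote) and reverse direction
            lookup[(c["protocol"], c["local_addr"], c["remote_addr"])] = info
            lookup[(c["protocol"], c["remote_addr"], c["local_addr"])] = info
    return lookup
```

A packet's (protocol, source, destination) tuple is then looked up in this table to recover the owning PID and process name.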
See [USAGE.md - PCAP Export](USAGE.md#pcap-export) for detailed documentation.


@@ -24,6 +24,7 @@ path = "src/main.rs"
[dependencies]
anyhow = "1.0"
libc = "0.2"
arboard = { version = "3.6", features = ["wayland-data-control"] }
crossterm = "0.29"
crossbeam = "0.8"
@@ -44,13 +45,9 @@ serde_json = "1.0"
[target.'cfg(target_os = "linux")'.dependencies]
procfs = "0.18"
libbpf-rs = { version = "0.25", optional = true }
libc = { version = "0.2", optional = true }
landlock = { version = "0.4", optional = true }
caps = { version = "0.5", optional = true }
[target.'cfg(any(target_os = "macos", target_os = "freebsd"))'.dependencies]
libc = "0.2"
[target.'cfg(windows)'.dependencies]
windows = { version = "0.62", features = [
"Win32_Foundation",
@@ -95,7 +92,7 @@ libbpf-cargo = { version = "0.25", optional = true }
# Landlock provides security sandboxing on Linux 5.13+.
default = ["ebpf", "landlock"]
linux-default = ["ebpf"] # Deprecated: kept for backwards compatibility
-ebpf = ["libbpf-rs", "libc", "dep:libbpf-cargo"]
+ebpf = ["libbpf-rs", "dep:libbpf-cargo"]
landlock = ["dep:landlock", "dep:caps"]
# Minimal cross configuration to override dependency conflicts


@@ -37,7 +37,7 @@ RustNet fills the gap between simple connection tools (`netstat`, `ss`) and pack
- **Connection-centric view**: Track states, bandwidth, and protocols per connection in real-time
- **SSH-friendly**: TUI works over SSH so you can quickly see what's happening on a remote server without forwarding X11 or capturing traffic
-RustNet complements packet capture tools. Use RustNet to see *what's making connections*. For deep forensic analysis, capture with `tcpdump` and analyze in Wireshark. See [Comparison with Similar Tools](ARCHITECTURE.md#comparison-with-similar-tools) for details.
+RustNet complements packet capture tools. Use RustNet to see *what's making connections*. For deep forensic analysis, use `--pcap-export` to capture packets with process attribution, then enrich with `scripts/pcap_enrich.py` and analyze in Wireshark with full PID/process context. See [PCAP Export](USAGE.md#pcap-export) and [Comparison with Similar Tools](ARCHITECTURE.md#comparison-with-similar-tools) for details.
<details>
<summary><b>eBPF Enhanced Process Identification (Linux Default)</b></summary>


@@ -110,7 +110,16 @@ The experimental eBPF support provides efficient process identification but has
- [ ] **Internationalization (i18n)**: Support for multiple languages in the UI
- [ ] **Connection History**: Store and display historical connection data
- [ ] **Export Functionality**: On-demand snapshot export (`--json-log` provides streaming)
- [x] **PCAP Export**: Export packets to PCAP file with process attribution sidecar (`--pcap-export`)
- Standard PCAP format compatible with Wireshark/tcpdump
- Streaming JSONL sidecar with PID, process name, timestamps
- Python enrichment script to create annotated PCAPNG
- [ ] **Enhanced PCAP Metadata**: Richer process information in sidecar file
- Process executable full path (not just name)
- Command line arguments
- Working directory
- User/UID information
- Parent process information
- [ ] **Configuration File**: Support for persistent configuration (filters, UI preferences)
- [ ] **Connection Alerts**: Notifications for new connections or suspicious activity
- [ ] **GeoIP Integration**: Maybe add geographical location of remote IPs


@@ -85,6 +85,7 @@ Options:
--show-ptr-lookups Show PTR lookup connections (hidden by default with --resolve-dns)
-l, --log-level <LEVEL> Set the log level (if not provided, no logging will be enabled)
--json-log <FILE> Enable JSON logging of connection events to specified file
--pcap-export <FILE> Export captured packets to PCAP file for Wireshark analysis
-f, --bpf-filter <FILTER> BPF filter expression for packet capture
--no-sandbox Disable Landlock sandboxing (Linux only)
--sandbox-strict Require full sandbox enforcement or exit (Linux only)
@@ -950,3 +951,71 @@ cat /tmp/connections.json | jq 'select(.process_name == "firefox")'
# Count connections by destination
cat /tmp/connections.json | jq -s 'group_by(.destination_ip) | map({ip: .[0].destination_ip, count: length})'
```
### PCAP Export
The `--pcap-export` option captures raw packets to a standard PCAP file for analysis in Wireshark, tcpdump, or other tools.
```bash
# Export all captured packets
sudo rustnet -i eth0 --pcap-export capture.pcap
# Combine with BPF filter
sudo rustnet -i eth0 --bpf-filter "tcp port 443" --pcap-export https.pcap
```
**Output files:**
| File | Description |
|------|-------------|
| `capture.pcap` | Raw packet data in standard PCAP format |
| `capture.pcap.connections.jsonl` | Streaming connection metadata with process info |
**Sidecar JSONL format** (one JSON object per line, written as connections close):
```json
{"timestamp":"2026-01-17T10:30:00Z","protocol":"TCP","local_addr":"192.168.1.100:54321","remote_addr":"142.250.80.46:443","pid":1234,"process_name":"firefox","first_seen":"...","last_seen":"...","bytes_sent":1024,"bytes_received":8192,"state":"ESTABLISHED"}
```
| Field | Description |
|-------|-------------|
| `timestamp` | When the connection record was written |
| `protocol` | TCP, UDP, ICMP, etc. |
| `local_addr` / `remote_addr` | Connection endpoints |
| `pid` / `process_name` | Process info (if identified) |
| `first_seen` / `last_seen` | Connection timestamps |
| `bytes_sent` / `bytes_received` | Traffic totals |
| `state` | Final connection state |
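Because the sidecar is plain JSONL, it can also be post-processed with a few lines of Python instead of jq. A minimal sketch using the fields documented above (`connections_for` is an illustrative helper, not shipped with RustNet; the file path is an example):

```python
import json

def connections_for(path, process_name):
    """Yield sidecar records that belong to a given process."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if rec.get("process_name") == process_name:
                yield rec

# Example: total bytes sent by firefox across all recorded connections
# total = sum(r["bytes_sent"] for r in
#             connections_for("capture.pcap.connections.jsonl", "firefox"))
```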
#### Enriching PCAP with Process Information
Standard PCAP files don't include process information. Use the included `scripts/pcap_enrich.py` script to correlate packets with processes:
```bash
# Install scapy (required)
pip install scapy
# Show packets with process info
python scripts/pcap_enrich.py capture.pcap
# Output as TSV for further processing
python scripts/pcap_enrich.py capture.pcap --format tsv > report.tsv
# Create annotated PCAPNG with process comments (requires Wireshark's editcap)
python scripts/pcap_enrich.py capture.pcap -o annotated.pcapng
```
The annotated PCAPNG embeds process information as packet comments, visible in Wireshark's packet details.
**Manual correlation:**
```bash
# View packets
wireshark capture.pcap
# View process mappings
cat capture.pcap.connections.jsonl | jq -r '[.protocol, .local_addr, .remote_addr, .pid, .process_name] | @tsv'
# Filter in Wireshark by connection tuple
# ip.addr == 142.250.80.46 && tcp.port == 443
```
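The Wireshark display filter shown in the comment above can also be generated directly from sidecar records. A hypothetical helper, not part of the repository (assumes IPv4 `host:port` endpoints as in the documented JSONL format):

```python
import json

def wireshark_filter(record):
    """Build a Wireshark display filter for one sidecar record."""
    # rpartition handles the last ':' so the port splits off cleanly
    remote_ip, _, remote_port = record["remote_addr"].rpartition(":")
    proto = record["protocol"].lower()  # "tcp" or "udp"
    return f"ip.addr == {remote_ip} && {proto}.port == {remote_port}"

rec = json.loads('{"protocol":"TCP","local_addr":"192.168.1.100:54321",'
                 '"remote_addr":"142.250.80.46:443","pid":1234,"process_name":"firefox"}')
print(wireshark_filter(rec))  # ip.addr == 142.250.80.46 && tcp.port == 443
```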

scripts/pcap_enrich.py (new executable file)

@@ -0,0 +1,387 @@
#!/usr/bin/env python3
"""
Enrich RustNet PCAP captures with process information from sidecar JSONL.

This script correlates packets in a PCAP file with process information
from the accompanying .connections.jsonl file created by RustNet.

Usage:
    # Show packets with process info
    python pcap_enrich.py capture.pcap

    # Export to annotated PCAPNG (requires editcap from Wireshark)
    python pcap_enrich.py capture.pcap --output annotated.pcapng

    # Generate TSV report
    python pcap_enrich.py capture.pcap --format tsv > report.tsv

Requirements:
    pip install scapy
"""

import argparse
import json
import subprocess
import sys
import tempfile
from pathlib import Path

try:
    from scapy.all import rdpcap, IP, TCP, UDP, ICMP
except ImportError:
    print("Error: scapy is required. Install with: pip install scapy", file=sys.stderr)
    sys.exit(1)

def parse_systemtime(st) -> float | None:
    """Parse a SystemTime serialized as {secs_since_epoch, nanos_since_epoch}."""
    if st is None:
        return None
    if isinstance(st, dict):
        secs = st.get("secs_since_epoch", 0)
        nanos = st.get("nanos_since_epoch", 0)
        return secs + nanos / 1e9
    # Fallback for other formats
    return None

def load_connections(jsonl_path: Path) -> dict:
    """Load connection-to-process mappings from JSONL file.

    Returns a dict mapping (proto, local, remote) -> list of connection info dicts.
    Multiple connections can exist for the same tuple (port reuse over time).
    """
    lookup = {}
    if not jsonl_path.exists():
        print(f"Warning: Sidecar file not found: {jsonl_path}", file=sys.stderr)
        return lookup
    with open(jsonl_path) as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            try:
                c = json.loads(line)
                proto = c.get("protocol", "").upper()
                local = c.get("local_addr", "")
                remote = c.get("remote_addr", "")
                if proto and local and remote:
                    info = {
                        "pid": c.get("pid"),
                        "process_name": c.get("process_name"),
                        "first_seen": parse_systemtime(c.get("first_seen")),
                        "last_seen": parse_systemtime(c.get("last_seen")),
                        "bytes_sent": c.get("bytes_sent", 0),
                        "bytes_received": c.get("bytes_received", 0),
                    }
                    # Store both directions, as a list to handle port reuse
                    for key in [(proto, local, remote), (proto, remote, local)]:
                        if key not in lookup:
                            lookup[key] = []
                        lookup[key].append(info)
            except json.JSONDecodeError as e:
                print(f"Warning: Invalid JSON at line {line_num}: {e}", file=sys.stderr)
    return lookup

def find_matching_connection(lookup: dict, pkt_tuple: tuple, pkt_time: float, slack: float) -> dict | None:
    """Find the best matching connection for a packet based on tuple and timestamp.

    Args:
        lookup: Connection lookup dict
        pkt_tuple: (proto, src, dst) tuple from packet
        pkt_time: Packet timestamp (seconds since epoch)
        slack: Allowed time slack in seconds

    Returns:
        Best matching connection info dict, or None if no match
    """
    connections = lookup.get(pkt_tuple, [])
    if not connections:
        return None
    best_match = None
    best_score = float('inf')
    for conn in connections:
        first_seen = conn.get("first_seen")
        last_seen = conn.get("last_seen")
        # If no timestamps, fall back to simple match (first connection wins)
        if first_seen is None or last_seen is None:
            if best_match is None:
                best_match = conn
            continue
        # Check if packet falls within connection time range (with slack)
        if first_seen - slack <= pkt_time <= last_seen + slack:
            # Score by how close the packet is to the connection's time range
            # Prefer connections where the packet is well within the range
            if pkt_time < first_seen:
                score = first_seen - pkt_time
            elif pkt_time > last_seen:
                score = pkt_time - last_seen
            else:
                score = 0  # Perfect match (within range)
            if score < best_score:
                best_score = score
                best_match = conn
    return best_match

def get_packet_tuple(pkt) -> tuple | None:
    """Extract connection tuple from packet."""
    if not pkt.haslayer(IP):
        return None
    ip = pkt[IP]
    src_ip = ip.src
    dst_ip = ip.dst
    if pkt.haslayer(TCP):
        tcp = pkt[TCP]
        return ("TCP", f"{src_ip}:{tcp.sport}", f"{dst_ip}:{tcp.dport}")
    elif pkt.haslayer(UDP):
        udp = pkt[UDP]
        return ("UDP", f"{src_ip}:{udp.sport}", f"{dst_ip}:{udp.dport}")
    elif pkt.haslayer(ICMP):
        return ("ICMP", src_ip, dst_ip)
    return None

def enrich_packets(pcap_path: Path, lookup: dict, slack: float):
    """Yield enriched packet information."""
    packets = rdpcap(str(pcap_path))
    for frame_num, pkt in enumerate(packets, 1):
        pkt_tuple = get_packet_tuple(pkt)
        pkt_time = float(pkt.time)
        if not pkt_tuple:
            # Non-IP frame: emit a record with the same keys for consistency
            yield {
                "frame": frame_num,
                "time": pkt_time,
                "proto": "OTHER",
                "src": "",
                "dst": "",
                "pid": None,
                "process": None,
                "bytes_sent": None,
                "bytes_received": None,
            }
            continue
        proto, src, dst = pkt_tuple
        info = find_matching_connection(lookup, pkt_tuple, pkt_time, slack) or {}
        yield {
            "frame": frame_num,
            "time": pkt_time,
            "proto": proto,
            "src": src,
            "dst": dst,
            "pid": info.get("pid"),
            "process": info.get("process_name"),
            "bytes_sent": info.get("bytes_sent"),
            "bytes_received": info.get("bytes_received"),
        }

def print_table(packets: list):
    """Print enriched packets as a formatted table."""
    print(f"{'Frame':>6} {'Proto':<5} {'Source':<24} {'Destination':<24} {'PID':>7} {'Process':<20}")
    print("-" * 95)
    for p in packets:
        pid_str = str(p["pid"]) if p["pid"] else "-"
        proc_str = p["process"] or "-"
        if len(proc_str) > 20:
            proc_str = proc_str[:17] + "..."
        print(f"{p['frame']:>6} {p['proto']:<5} {p['src']:<24} {p['dst']:<24} {pid_str:>7} {proc_str:<20}")

def print_tsv(packets: list):
    """Print enriched packets as TSV."""
    print("frame\ttime\tproto\tsrc\tdst\tpid\tprocess")
    for p in packets:
        print(f"{p['frame']}\t{p['time']:.6f}\t{p['proto']}\t{p['src']}\t{p['dst']}\t{p['pid'] or ''}\t{p['process'] or ''}")

def print_json(packets: list):
    """Print enriched packets as JSON."""
    print(json.dumps(packets, indent=2))

def create_pcapng(pcap_path: Path, packets: list, output_path: Path):
    """Create annotated PCAPNG using editcap."""
    # Check if editcap is available
    try:
        subprocess.run(["editcap", "--version"], capture_output=True, check=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        print("Error: editcap not found. Install Wireshark to get editcap.", file=sys.stderr)
        sys.exit(1)
    # First convert to pcapng
    with tempfile.NamedTemporaryFile(suffix=".pcapng", delete=False) as tmp:
        tmp_path = Path(tmp.name)
    subprocess.run(["editcap", "-F", "pcapng", str(pcap_path), str(tmp_path)], check=True)
    # Build annotation commands
    # editcap -a "frame:comment" format
    annotations = []
    for p in packets:
        if p["pid"] or p["process"]:
            comment_parts = []
            if p["pid"]:
                comment_parts.append(f"PID:{p['pid']}")
            if p["process"]:
                comment_parts.append(f"Process:{p['process']}")
            comment = " ".join(comment_parts)
            annotations.append(f"{p['frame']}:{comment}")
    if not annotations:
        print("No process information found to annotate.", file=sys.stderr)
        # Just copy the pcapng as-is
        tmp_path.rename(output_path)
        return
    # Apply annotations in batches (editcap has command line limits)
    current_input = tmp_path
    batch_size = 100
    for i in range(0, len(annotations), batch_size):
        batch = annotations[i:i + batch_size]
        with tempfile.NamedTemporaryFile(suffix=".pcapng", delete=False) as tmp2:
            tmp2_path = Path(tmp2.name)
        cmd = ["editcap"]
        for ann in batch:
            cmd.extend(["-a", ann])
        cmd.extend([str(current_input), str(tmp2_path)])
        subprocess.run(cmd, check=True)
        if current_input != tmp_path:
            current_input.unlink()
        current_input = tmp2_path
    # Move final result to output
    current_input.rename(output_path)
    if tmp_path.exists():
        tmp_path.unlink()
    print(f"Created annotated PCAPNG: {output_path}")
    print(f"Annotated {len(annotations)} packets with process information.")

def count_unique_connections(lookup: dict) -> int:
    """Count unique connections (accounting for bidirectional storage)."""
    seen = set()
    count = 0
    for key, conns in lookup.items():
        for conn in conns:
            # Create a unique identifier for each connection
            conn_id = (key, conn.get("first_seen"), conn.get("pid"))
            if conn_id not in seen:
                seen.add(conn_id)
                count += 1
    return count // 2  # Divide by 2 because we store both directions

def print_summary(packets: list, lookup: dict):
    """Print a summary of process information found."""
    total = len(packets)
    with_pid = sum(1 for p in packets if p["pid"])
    pct = 100 * with_pid / total if total else 0.0  # Avoid division by zero on empty captures
    # Group by process
    by_process = {}
    for p in packets:
        proc = p["process"] or "<unknown>"
        if proc not in by_process:
            by_process[proc] = {"count": 0, "pid": p["pid"]}
        by_process[proc]["count"] += 1
    print("\nSummary:")
    print(f"  Total packets: {total}")
    print(f"  Packets with process info: {with_pid} ({pct:.1f}%)")
    print(f"  Unique connections in sidecar: {count_unique_connections(lookup)}")
    print("\nPackets by process:")
    for proc, info in sorted(by_process.items(), key=lambda x: -x[1]["count"]):
        pid_str = f" (PID {info['pid']})" if info["pid"] else ""
        print(f"  {proc}{pid_str}: {info['count']} packets")

def main():
    parser = argparse.ArgumentParser(
        description="Enrich RustNet PCAP captures with process information.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s capture.pcap                     # Show packets with process info
  %(prog)s capture.pcap --format tsv        # Output as TSV
  %(prog)s capture.pcap --format json       # Output as JSON
  %(prog)s capture.pcap -o annotated.pcapng # Create annotated PCAPNG
  %(prog)s capture.pcap --summary           # Show summary only
  %(prog)s capture.pcap --slack 5           # Use 5 second slack for timestamp matching
"""
    )
    parser.add_argument("pcap", type=Path, help="Path to PCAP file")
    parser.add_argument("-j", "--jsonl", type=Path,
                        help="Path to sidecar JSONL file (default: <pcap>.connections.jsonl)")
    parser.add_argument("-o", "--output", type=Path,
                        help="Output annotated PCAPNG file")
    parser.add_argument("-f", "--format", choices=["table", "tsv", "json"], default="table",
                        help="Output format (default: table)")
    parser.add_argument("-s", "--summary", action="store_true",
                        help="Show summary only")
    parser.add_argument("-l", "--limit", type=int, default=0,
                        help="Limit number of packets to process (0 = no limit)")
    parser.add_argument("--slack", type=float, default=2.0,
                        help="Timestamp matching slack in seconds (default: 2.0)")
    args = parser.parse_args()

    if not args.pcap.exists():
        print(f"Error: PCAP file not found: {args.pcap}", file=sys.stderr)
        sys.exit(1)

    # Default sidecar path
    jsonl_path = args.jsonl or Path(f"{args.pcap}.connections.jsonl")

    # Load connection mappings
    lookup = load_connections(jsonl_path)
    if lookup:
        print(f"Loaded {count_unique_connections(lookup)} connections from {jsonl_path}", file=sys.stderr)

    # Process packets
    packets = list(enrich_packets(args.pcap, lookup, args.slack))
    if args.limit > 0:
        packets = packets[:args.limit]

    if args.summary:
        print_summary(packets, lookup)
        return

    if args.output:
        create_pcapng(args.pcap, packets, args.output)
        print_summary(packets, lookup)
    else:
        if args.format == "table":
            print_table(packets)
            print_summary(packets, lookup)
        elif args.format == "tsv":
            print_tsv(packets)
        elif args.format == "json":
            print_json(packets)

if __name__ == "__main__":
    main()


@@ -201,6 +201,34 @@ fn log_connection_event(
}
}
/// Helper function to log connection info to PCAP sidecar file (JSONL format)
fn log_pcap_connection(pcap_path: &str, conn: &Connection) {
    let json_path = format!("{}.connections.jsonl", pcap_path);
    let event = json!({
        "timestamp": chrono::Utc::now().to_rfc3339(),
        "protocol": format!("{:?}", conn.protocol),
        "local_addr": conn.local_addr.to_string(),
        "remote_addr": conn.remote_addr.to_string(),
        "pid": conn.pid,
        "process_name": conn.process_name,
        "first_seen": conn.created_at,
        "last_seen": conn.last_activity,
        "bytes_sent": conn.bytes_sent,
        "bytes_received": conn.bytes_received,
        "state": conn.state(),
    });
    if let Ok(mut file) = OpenOptions::new()
        .create(true)
        .append(true)
        .open(&json_path)
        && let Ok(json_str) = serde_json::to_string(&event)
    {
        let _ = writeln!(file, "{}", json_str);
    }
}
/// Application configuration
#[derive(Debug, Clone)]
pub struct Config {
@@ -216,6 +244,8 @@ pub struct Config {
pub bpf_filter: Option<String>,
/// JSON log file path for connection events
pub json_log_file: Option<String>,
/// PCAP export file path for Wireshark analysis
pub pcap_export_file: Option<String>,
/// Enable reverse DNS resolution for IP addresses
pub resolve_dns: bool,
/// Show PTR lookup connections in UI (when DNS resolution is enabled)
@@ -231,6 +261,7 @@ impl Default for Config {
enable_dpi: true,
bpf_filter: None, // No filter by default to see all packets
json_log_file: None,
pcap_export_file: None,
resolve_dns: false,
show_ptr_lookups: false,
}
@@ -438,6 +469,7 @@ impl App {
let current_interface = Arc::clone(&self.current_interface);
let linktype_storage = Arc::clone(&self.linktype);
let _pktap_active = Arc::clone(&self.pktap_active);
let pcap_export_file = self.config.pcap_export_file.clone();
thread::spawn(move || {
match setup_packet_capture(capture_config) {
@@ -460,6 +492,23 @@ impl App {
"Packet capture started successfully on interface: {} (linktype: {})",
device_name, linktype
);
// Initialize PCAP export if configured (must be before PacketReader consumes capture)
let mut pcap_savefile = if let Some(ref pcap_path) = pcap_export_file {
match capture.savefile(pcap_path) {
Ok(savefile) => {
info!("PCAP export started: {}", pcap_path);
Some(savefile)
}
Err(e) => {
error!("Failed to create PCAP savefile: {}", e);
None
}
}
} else {
None
};
let mut reader = PacketReader::new(capture);
let mut packets_read = 0u64;
let mut last_log = Instant::now();
@@ -488,6 +537,33 @@ impl App {
last_log = Instant::now();
}
// Write to PCAP file if enabled
if let Some(ref mut savefile) = pcap_savefile {
use std::time::{SystemTime, UNIX_EPOCH};
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default();
#[cfg(unix)]
let ts = libc::timeval {
tv_sec: now.as_secs() as libc::time_t,
tv_usec: now.subsec_micros() as libc::suseconds_t,
};
#[cfg(windows)]
let ts = libc::timeval {
tv_sec: now.as_secs() as libc::c_long,
tv_usec: now.subsec_micros() as libc::c_long,
};
let header = pcap::PacketHeader {
ts,
caplen: packet.len() as u32,
len: packet.len() as u32,
};
savefile.write(&pcap::Packet {
header: &header,
data: &packet,
});
}
if packet_tx.send(packet).is_err() {
warn!("Packet channel closed");
break;
@@ -517,6 +593,15 @@ impl App {
}
}
// Flush PCAP savefile before exiting
if let Some(ref mut savefile) = pcap_savefile {
if let Err(e) = savefile.flush() {
error!("Failed to flush PCAP savefile: {}", e);
} else {
info!("PCAP export completed");
}
}
info!(
"Capture thread exiting, total packets read: {}",
packets_read
@@ -1071,6 +1156,7 @@ impl App {
fn start_cleanup_thread(&self, connections: Arc<DashMap<String, Connection>>) -> Result<()> {
let should_stop = Arc::clone(&self.should_stop);
let json_log_path = self.config.json_log_file.clone();
let pcap_export_path = self.config.pcap_export_file.clone();
let dns_resolver = self.dns_resolver.clone();
thread::spawn(move || {
@@ -1114,6 +1200,11 @@ impl App {
);
}
// Log to PCAP sidecar file if PCAP export is enabled
if let Some(pcap_path) = &pcap_export_path {
log_pcap_connection(pcap_path, conn);
}
// Log cleanup reason for debugging
let conn_timeout = conn.get_timeout();
let idle_time = now.duration_since(conn.last_activity).unwrap_or_default();
@@ -1364,6 +1455,24 @@ impl App {
pub fn stop(&self) {
info!("Stopping application");
self.should_stop.store(true, Ordering::Relaxed);
// Write remaining active connections to PCAP sidecar JSONL file
// (connections that haven't been cleaned up yet)
if let Some(ref pcap_path) = self.config.pcap_export_file
&& let Ok(connections) = self.connections_snapshot.read()
{
let count = connections.len();
let with_pids = connections.iter().filter(|c| c.pid.is_some()).count();
for conn in connections.iter() {
log_pcap_connection(pcap_path, conn);
}
info!(
"Wrote {} remaining connections ({} with PIDs) to JSONL",
count, with_pids
);
}
}
}


@@ -69,6 +69,13 @@ pub fn build_cli() -> Command {
.help("Enable JSON logging of connection events to specified file")
.required(false),
)
.arg(
Arg::new("pcap-export")
.long("pcap-export")
.value_name("FILE")
.help("Export captured packets to PCAP file for Wireshark analysis")
.required(false),
)
.arg(
Arg::new("bpf-filter")
.short('f')


@@ -1,6 +1,6 @@
use anyhow::Result;
use arboard::Clipboard;
-use log::{LevelFilter, debug, error, info};
+use log::{LevelFilter, debug, error, info, warn};
use ratatui::prelude::CrosstermBackend;
use simplelog::{Config as LogConfig, WriteLogger};
use std::fs::{self, File};
@@ -67,6 +67,11 @@ fn main() -> Result<()> {
info!("JSON logging enabled: {}", json_log_path);
}
if let Some(pcap_path) = matches.get_one::<String>("pcap-export") {
config.pcap_export_file = Some(pcap_path.to_string());
info!("PCAP export enabled: {}", pcap_path);
}
if let Some(bpf_filter) = matches.get_one::<String>("bpf-filter") {
let filter = bpf_filter.trim();
if !filter.is_empty() {
@@ -95,6 +100,15 @@ fn main() -> Result<()> {
app.start()?;
info!("Application started");
// Pre-create sidecar JSONL file for PCAP export (needed for Landlock permissions)
// This must be done BEFORE Landlock is applied so the file exists when adding rules
if let Some(ref pcap_path) = config.pcap_export_file {
let jsonl_path = format!("{}.connections.jsonl", pcap_path);
if let Err(e) = std::fs::File::create(&jsonl_path) {
warn!("Failed to pre-create sidecar JSONL file: {}", e);
}
}
// Apply Landlock sandbox (Linux only)
// This must be done AFTER app.start() because:
// - eBPF programs need to be loaded first (access to /sys/kernel/btf)
@@ -127,6 +141,12 @@ fn main() -> Result<()> {
write_paths.push(PathBuf::from(json_log_path));
}
// Add PCAP export paths if specified (both .pcap and .pcap.connections.jsonl)
if let Some(pcap_path) = &config.pcap_export_file {
write_paths.push(PathBuf::from(pcap_path));
write_paths.push(PathBuf::from(format!("{}.connections.jsonl", pcap_path)));
}
let sandbox_config = SandboxConfig {
mode: sandbox_mode,
block_network: true, // RustNet is passive, doesn't need TCP