rustnzbd

A Modern Usenet Binary Downloader Written in Rust

Fast, efficient, and built for self-hosters. Full NNTP pipeline with yEnc decoding, PAR2 verification & repair, archive extraction, and a clean web UI. Multi-server failover, connection pooling, and NNTP pipelining for maximum throughput.

Active Development: rustnzbd is under active development. A built-in benchmarking suite (benchnzb) is included for head-to-head performance testing against SABnzbd.

Features

Everything you need for Usenet downloads, built from the ground up in Rust

NNTP Pipelining

Send multiple ARTICLE commands per connection before reading responses. Configurable pipeline depth per server eliminates round-trip latency.
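The batching idea can be sketched in Rust (names illustrative, not rustnzbd's actual API): group message-IDs into batches of the configured depth and emit each batch as a single write, so every batch of ARTICLE commands shares one round trip.

```rust
/// Build pipelined NNTP command batches: each returned string is one
/// write containing up to `depth` ARTICLE commands.
fn pipeline_batches(message_ids: &[&str], depth: usize) -> Vec<String> {
    message_ids
        .chunks(depth.max(1))
        .map(|batch| {
            batch
                .iter()
                .map(|id| format!("ARTICLE <{id}>\r\n"))
                .collect::<String>()
        })
        .collect()
}

fn main() {
    // With depth 2, three articles cost two round trips instead of three.
    for batch in pipeline_batches(&["a@example", "b@example", "c@example"], 2) {
        print!("{batch}");
    }
}
```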

Multi-Server Failover

Priority-ordered server list with automatic failover. Articles not found on one server are retried on the next. Optional servers for fill providers.
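The failover logic reduces to a priority-sorted scan; a minimal sketch (types and names are illustrative, with a closure standing in for the real NNTP fetch):

```rust
/// A configured server entry; lower `priority` is tried first.
#[derive(Debug)]
struct Server {
    host: String,
    priority: u8,
}

/// Try servers in priority order and return the host that served the
/// article, if any did.
fn fetch_with_failover(
    servers: &mut [Server],
    has_article: impl Fn(&Server) -> bool, // stand-in for the NNTP fetch
) -> Option<String> {
    servers.sort_by_key(|s| s.priority);
    servers.iter().find(|s| has_article(s)).map(|s| s.host.clone())
}

fn main() {
    let mut servers = vec![
        Server { host: "fill.example".into(), priority: 1 },
        Server { host: "primary.example".into(), priority: 0 },
    ];
    // Article missing on the primary: the fill server answers instead.
    let served = fetch_with_failover(&mut servers, |s| s.host == "fill.example");
    println!("{served:?}");
}
```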

yEnc Decoding

Fast, correct yEnc decoder with CRC32 validation. Handles multi-part articles, escape sequences, and assembles files from decoded segments.
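The core transform is small enough to sketch (illustrative only, not rustnzbd's actual decoder): yEnc adds 42 to every byte, and critical bytes are escaped as '=' followed by the byte plus a further 64.

```rust
/// Decode a yEnc body: strip line breaks, undo the '=' escape, and
/// subtract the yEnc offset of 42 from each data byte.
fn ydecode(body: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(body.len());
    let mut bytes = body.iter().copied();
    while let Some(b) = bytes.next() {
        match b {
            b'\r' | b'\n' => {} // line breaks carry no data
            b'=' => {
                if let Some(esc) = bytes.next() {
                    out.push(esc.wrapping_sub(64).wrapping_sub(42));
                }
            }
            _ => out.push(b.wrapping_sub(42)),
        }
    }
    out
}

fn main() {
    // 'r' (114) decodes to 114 - 42 = 72, i.e. 'H'.
    // "=@" decodes to byte 214, which encoded to the critical byte 0x00.
    println!("{:?}", ydecode(b"r\r\n=@"));
}
```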

PAR2 Verify & Repair

Automatic PAR2 verification after download. Damaged files are repaired using recovery blocks before extraction. Full pipeline automation.
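Repair typically means shelling out to par2cmdline; a hedged sketch of building that invocation (flag names per par2cmdline, and rustnzbd's actual invocation may differ). The command is constructed but not spawned here, since running it requires par2 to be installed:

```rust
use std::process::Command;

/// Build (but do not run) a par2cmdline repair invocation for a
/// downloaded set: `par2 repair -q <file>.par2`.
fn par2_repair_cmd(par2_file: &str) -> Command {
    let mut cmd = Command::new("par2");
    cmd.arg("repair").arg("-q").arg(par2_file);
    cmd
}

fn main() {
    let cmd = par2_repair_cmd("release.par2");
    println!("{} {:?}", cmd.get_program().to_string_lossy(), cmd.get_args());
    // To actually repair: call `cmd.status()` and check `status.success()`.
}
```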

Archive Extraction

Automatic extraction of RAR, 7z, and ZIP archives after download and repair. Supports multi-part RAR (new and old naming) with cleanup.
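Telling the two RAR naming schemes apart is the fiddly part; a sketch of first-volume detection (helper name is illustrative). New style: name.part01.rar, name.part02.rar, and so on. Old style: name.rar is the first volume, followed by name.r00, name.r01, ...

```rust
/// Return true if `name` is the first volume of a multi-part RAR set,
/// under either naming scheme.
fn is_first_rar_volume(name: &str) -> bool {
    let lower = name.to_lowercase();
    let Some(stem) = lower.strip_suffix(".rar") else {
        return false; // .r00/.r01 continuation volumes, or not RAR at all
    };
    if let Some(idx) = stem.rfind(".part") {
        let digits = &stem[idx + ".part".len()..];
        if !digits.is_empty() && digits.bytes().all(|b| b.is_ascii_digit()) {
            return digits.parse::<u32>() == Ok(1); // part01, part001, ...
        }
    }
    true // plain .rar: old-style first volume
}

fn main() {
    for name in ["x.part01.rar", "x.part02.rar", "x.rar", "x.r00"] {
        println!("{name}: {}", is_first_rar_volume(name));
    }
}
```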

Clean Web UI

Responsive single-page web interface. Queue management, download history, server configuration, real-time logs, and drag-and-drop NZB upload.

REST API

Full HTTP API with Swagger/OpenAPI documentation. Queue, history, server management, status, and log endpoints. SABnzbd-compatible API layer.
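SABnzbd-style clients address the API through `mode` and `apikey` query parameters; a sketch of the URL shape such clients send (illustrative only; the endpoints rustnzbd actually exposes are in its Swagger docs):

```rust
/// Build a SABnzbd-style API URL: `mode` selects the operation,
/// `apikey` authenticates, and JSON output is requested.
fn sab_api_url(base: &str, mode: &str, apikey: &str) -> String {
    format!("{base}/api?mode={mode}&output=json&apikey={apikey}")
}

fn main() {
    println!("{}", sab_api_url("http://localhost:8080", "queue", "SECRET"));
}
```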

OpenTelemetry

Built-in tracing and metrics export via OpenTelemetry. Ship logs and metrics to Grafana, Jaeger, or any OTLP-compatible backend.

Architecture

A modular Rust workspace with clean separation of concerns

nzb-core: NZB Parser · Config · Database (SQLite) · Models
nzb-nntp: NNTP Protocol · Connection Pool · TLS (rustls) · Pipelining · Server Failover
nzb-decode: yEnc Decoder · CRC32 Validation · File Assembler · Caching
nzb-postproc: PAR2 Verify · PAR2 Repair · RAR / 7z / ZIP · Cleanup
nzb-web: Axum HTTP · REST API · Web UI · Queue Manager · Log Buffer
Infrastructure: Tokio Runtime · SQLite (WAL) · OpenTelemetry · Docker

Performance

Built-in benchmarking suite for head-to-head comparison with SABnzbd

benchnzb

rustnzbd ships with benchnzb, a comprehensive benchmarking harness that runs both rustnzbd and SABnzbd through identical scenarios using a mock NNTP server. It generates test data with yEnc-encoded articles, PAR2 recovery files, and 7z archives, then measures download speed, CPU usage, memory consumption, and post-processing time.

Benchmark Scenarios

Nine scenarios covering raw download, PAR2 repair, and archive extraction

5 GB · Raw Download Pure NNTP speed

Tests raw NNTP download throughput with 5 GB of yEnc-encoded articles. No post-processing. Measures connection pooling and pipeline efficiency.

10 GB · Raw Download Sustained throughput

Larger raw download to test sustained throughput and memory stability over extended transfers.

50 GB · Raw Download Large transfer

Stress test with 50 GB of raw data. Tests memory management and disk I/O at scale.

5 GB · PAR2 Repair Download + repair

5 GB download with 5% missing articles. Tests PAR2 verification and repair pipeline after download.

10 GB · PAR2 Repair Download + repair

Larger PAR2 repair scenario testing recovery performance at scale.

50 GB · PAR2 Repair Heavy repair

50 GB with 5% missing articles. Full end-to-end pipeline including download, verify, and repair.

5 GB · Unpack Download + extract

5 GB download followed by 7z extraction. Tests archive detection and extraction pipeline.

10 GB · Unpack Download + extract

Larger extraction scenario testing decompression speed and disk I/O coordination.

50 GB · Unpack Heavy extract

50 GB download and extraction. Tests the complete pipeline under heavy load.

All benchmarks run in Docker containers with a mock NNTP server generating yEnc-encoded articles on the fly. Metrics collected via Docker stats API. Full suite and scripts available on GitHub.

See It in Action

Interactive demo with simulated download queue

localhost:8080 · rustnzbd · Speed: 48.2 MB/s · Queue: 3 · Free: 1.2 TB
Tabs: Queue · History · Servers · Settings · Logs

Name                          Status       Progress  Speed      Size
Ubuntu.24.04.Desktop.x64      Downloading  67%       32.1 MB/s  4.7 GB
LibreOffice.7.6.Full.Pack     Verifying    45%       -          2.1 GB
Blender.4.0.Benchmark.Scenes  Downloading  23%       16.1 MB/s  8.3 GB
PostgreSQL.16.Docs.Pack       Completed    100%      -          892 MB

Getting Started

Terminal
$ git clone https://github.com/AusAgentSmith/rustnzbd.git
$ cd rustnzbd
$ cp config.example.toml config.toml
$ docker compose up --build -d
Building multi-stage Rust image...
Starting rustnzbd container
Web UI available at http://localhost:8080

Port: 8080 (Web UI + API)

Config: Edit config.toml to add your Usenet server(s) or configure via the web UI
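A minimal sketch of what a server entry might look like; the key names below are assumptions, so consult config.example.toml in the repository for the actual schema.

```toml
# Illustrative only: key names are assumptions, see config.example.toml
# for the real schema.
[[servers]]
host = "news.example.com"
port = 563
tls = true
username = "user"
password = "secret"
connections = 20
pipeline_depth = 8
priority = 0
```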

Terminal
$ git clone https://github.com/AusAgentSmith/rustnzbd.git
$ cd rustnzbd
$ cp config.example.toml config.toml
$ cargo build --release
$ ./target/release/rustnzbd
rustnzbd started
Web UI at http://localhost:8080

Requirements: Rust 1.85+ (2024 edition), par2 and p7zip for post-processing

Benchmarks: Run cd benchnzb && ./run.sh --scenarios quick to compare against SABnzbd

Download Pipeline

From NZB file to extracted content — fully automated

1

Parse NZB

XML parsing extracts article message IDs, file segments, groups, and metadata. Password support via <meta> tags.
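The parsed result can be pictured with a small data model (field names are assumptions, not rustnzbd's actual types): files, each split into numbered segments addressed by NNTP message-ID, plus an optional password from the NZB's meta tags.

```rust
/// Illustrative model of a parsed NZB.
#[derive(Debug)]
struct Nzb {
    password: Option<String>,
    files: Vec<NzbFile>,
}

#[derive(Debug)]
struct NzbFile {
    subject: String,
    groups: Vec<String>,
    segments: Vec<Segment>,
}

#[derive(Debug)]
struct Segment {
    number: u32,
    bytes: u64,
    message_id: String,
}

impl NzbFile {
    /// Expected on-wire size: the sum of all segment sizes.
    fn total_bytes(&self) -> u64 {
        self.segments.iter().map(|s| s.bytes).sum()
    }
}

fn main() {
    let file = NzbFile {
        subject: "example [1/2]".into(),
        groups: vec!["alt.binaries.test".into()],
        segments: vec![
            Segment { number: 1, bytes: 500_000, message_id: "a@example".into() },
            Segment { number: 2, bytes: 250_000, message_id: "b@example".into() },
        ],
    };
    let nzb = Nzb { password: None, files: vec![file] };
    println!("{} bytes expected", nzb.files[0].total_bytes());
}
```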

2

Download

NNTP connections fetch articles with pipelining. Multi-server failover retries missing articles on backup servers.

3

Decode

yEnc-encoded article bodies are decoded with CRC32 validation. Segments are assembled into complete files.
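The checksum in question is the standard reflected IEEE CRC32 that yEnc trailers carry; a bitwise reference version for illustration (a real decoder would use a table- or SIMD-based implementation):

```rust
/// Bitwise CRC32, IEEE polynomial 0xEDB88320 (reflected form), as used
/// in yEnc `pcrc32`/`crc32` trailer fields.
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFF_u32;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            let mask = (crc & 1).wrapping_neg(); // 0x0 or 0xFFFFFFFF
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

fn main() {
    // "123456789" is the standard check input for CRC-32.
    println!("{:08x}", crc32(b"123456789"));
}
```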

4

Verify & Repair

PAR2 checks file integrity. If blocks are missing, recovery data is used to reconstruct damaged files.

5

Extract & Clean

Archives (RAR, 7z, ZIP) are extracted. Temporary files (par2, rar volumes) are cleaned up. Output moved to final directory.