natsdotnet/CLAUDE.md
Joseph Doherty 0ea71ace79 Add CLAUDE.md and base server design document
Design covers the minimal NATS server port: pub/sub with wildcards
and queue groups over System.IO.Pipelines, targeting .NET 10.
2026-02-22 19:37:32 -05:00


CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

This project ports the NATS server from Go to .NET 10 / C#. The Go reference implementation lives in golang/nats-server/. The .NET port lives at the repository root.

NATS is a high-performance publish-subscribe messaging system. It supports wildcards (* single token, > multi-token), queue groups for load balancing, request-reply, clustering (full-mesh routes, gateways, leaf nodes), and persistent streaming via JetStream.
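To make the wildcard rules concrete, here is a minimal, illustrative subject matcher in Go (a linear token walk, not the server's trie-based Sublist; `matchSubject` is a hypothetical helper, not a function from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// matchSubject reports whether a subscription pattern matches a published
// subject under NATS wildcard rules: '*' matches exactly one token and
// '>' matches one or more trailing tokens. Tokens are '.'-separated.
func matchSubject(pattern, subject string) bool {
	p := strings.Split(pattern, ".")
	s := strings.Split(subject, ".")
	for i, tok := range p {
		if tok == ">" {
			return i < len(s) // '>' must cover at least one remaining token
		}
		if i >= len(s) {
			return false // pattern is longer than the subject
		}
		if tok != "*" && tok != s[i] {
			return false // literal token mismatch
		}
	}
	return len(p) == len(s) // no trailing unmatched subject tokens
}

func main() {
	fmt.Println(matchSubject("foo.*", "foo.bar"))     // true
	fmt.Println(matchSubject("foo.>", "foo.bar.baz")) // true
	fmt.Println(matchSubject("foo.*", "foo.bar.baz")) // false
}
```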

Build & Test Commands

# Build the solution
dotnet build

# Run all tests
dotnet test

# Run a single test project
dotnet test <path/to/TestProject.csproj>

# Run a specific test by name
dotnet test --filter "FullyQualifiedName~TestClassName.TestMethodName"

# Run tests with verbose output
dotnet test -v normal

# Clean and rebuild
dotnet clean && dotnet build

Go Reference Commands

# Build the Go reference server
cd golang/nats-server && go build

# Run Go tests for a specific area
cd golang/nats-server && go test -v -run TestName ./server/ -count=1 -timeout=30m

# Run all Go server tests (slow, ~30min)
cd golang/nats-server && go test -v ./server/ -count=1 -timeout=30m

Architecture: NATS Server (Reference)

The Go source in golang/nats-server/server/ is the authoritative reference. Key files by subsystem:

Core Message Path

  • server.go — Server struct, startup lifecycle (NewServer → Run → WaitForShutdown), listener management
  • client.go (6700 lines) — Connection handling, readLoop/writeLoop goroutines, per-client subscription tracking, dynamic buffer sizing (512→65536 bytes), client types: CLIENT, ROUTER, GATEWAY, LEAF, SYSTEM
  • parser.go — Protocol state machine. Text protocol: PUB, SUB, UNSUB, CONNECT, INFO, PING/PONG, MSG. Extended: HPUB/HMSG (headers), RPUB/RMSG (routes). Control line limit: 4096 bytes. Default max payload: 1MB.
  • sublist.go — Trie-based subject matcher with wildcard support. Nodes have psubs (plain), qsubs (queue groups), special pointers for * and > wildcards. Results are cached with atomic generation IDs for invalidation.
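To make the text protocol concrete, here is a hand-written sketch of a simple client session (each protocol line is terminated by \r\n on the wire; the annotations after → are commentary, not protocol):

```text
CONNECT {"verbose":false}   → client handshake, sent after the server's INFO
SUB foo.* 1                 → subscribe to foo.* with client-chosen sid 1
PUB foo.bar 5
hello                       → publish a 5-byte payload to foo.bar
MSG foo.bar 1 5
hello                       → server delivers to the matching sid
PING                        → heartbeat; the peer answers PONG
```

The MSG frame has the shape `MSG <subject> <sid> [reply-to] <#bytes>\r\n<payload>\r\n`, which is why the parser needs the payload length before it can consume the body.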

Authentication & Accounts

  • auth.go — Auth mechanisms: username/password, token, NKeys (Ed25519), JWT, external auth callout, LDAP
  • accounts.go (137KB) — Multi-tenant account isolation. Each account has its own Sublist, client set, and subject namespace. Supports exports/imports between accounts, service latency tracking.
  • jwt.go, nkey.go — JWT claims parsing and NKey validation

Clustering

  • route.go — Full-mesh cluster routes. Route pooling (default 3 connections per peer). Account-specific dedicated routes. Protocol: RS+/RS- for subscribe propagation, RMSG for routed messages.
  • gateway.go (103KB) — Inter-cluster bridges. Interest-only mode optimizes traffic. Reply subject mapping (_GR_. prefix) avoids cross-cluster conflicts.
  • leafnode.go — Hub-and-spoke topology for edge deployments. Only subscribed subjects shared with hub. Loop detection via $LDS. prefix.

JetStream (Persistence)

  • jetstream.go — Orchestration, API subject handlers ($JS.API.*)
  • stream.go (8000 lines) — Stream lifecycle, retention policies (Limits, Interest, WorkQueue), subject transforms, mirroring/sourcing
  • consumer.go — Stateful readers. Push vs pull delivery. Ack policies: None, All, Explicit. Redelivery tracking, priority groups.
  • filestore.go (337KB) — Block-based persistent storage with S2 compression, encryption (ChaCha20/AES-GCM), indexing
  • memstore.go — In-memory storage with hash-wheel TTL expiration
  • raft.go — RAFT consensus for clustered JetStream. Meta-cluster for metadata, per-stream/consumer RAFT groups.

Configuration & Monitoring

  • opts.go — CLI flags + config file loading. CLI overrides config. Supports hot reload on signal.
  • monitor.go — HTTP endpoints: /varz, /connz, /routez, /gatewayz, /jsz, /healthz
  • conf/ — Config file parser (custom format with includes)

Internal Data Structures

  • server/avl/ — AVL tree for sparse sequence sets (ack tracking)
  • server/stree/ — Subject tree for per-subject state in streams
  • server/gsl/ — Generic subject list, optimized trie
  • server/thw/ — Time hash wheel for efficient TTL expiration

Key Porting Considerations

Concurrency model: Go uses goroutines (one per connection readLoop + writeLoop). Map to async/await with Task-based I/O. Use Channel<T> or Pipe for producer-consumer patterns where Go uses channels.
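As a reference for the pattern being ported, here is a minimal Go sketch of the reader/writer pairing: a producer feeds outbound frames through a bounded channel that a writer goroutine drains. In .NET this would become two Tasks around a bounded Channel<T>. The `respond` helper is hypothetical; the real server handles the full command set.

```go
package main

import (
	"fmt"
	"sync"
)

// respond maps an inbound command to its outbound frame, if any.
func respond(cmd string) (string, bool) {
	if cmd == "PING" {
		return "PONG", true
	}
	return "", false // commands like SUB produce no immediate reply
}

func main() {
	out := make(chan string, 16) // bounded outbound queue (Channel<T> in .NET)
	var wg sync.WaitGroup

	wg.Add(1)
	go func() { // writeLoop: drain frames until the channel is closed
		defer wg.Done()
		for frame := range out {
			fmt.Println("write:", frame)
		}
	}()

	// readLoop (inlined): parsed commands enqueue outbound frames
	for _, cmd := range []string{"PING", "SUB foo 1", "PING"} {
		if frame, ok := respond(cmd); ok {
			out <- frame
		}
	}
	close(out) // connection teardown: let the writer finish and exit
	wg.Wait()
}
```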

Locking: Go sync.RWMutex maps to ReaderWriterLockSlim. Go sync.Map maps to ConcurrentDictionary. Go atomic operations map to Interlocked or volatile.

Subject matching: The Sublist trie is performance-critical. Every published message triggers a Match() call. Cache invalidation uses atomic generation counters.
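The invalidation pattern can be sketched as follows (illustrative Go, not the Sublist's actual layout; in .NET the counter would use Interlocked.Increment): cached results record the generation at which they were computed, and any subscription change bumps the counter, so stale entries are detected on lookup without walking the cache.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type cachedResult struct {
	gen  uint64
	subs []string // matched subscriptions (placeholder for real sub records)
}

type matcher struct {
	gen   atomic.Uint64 // bumped on every insert/remove
	mu    sync.RWMutex
	cache map[string]cachedResult
}

func newMatcher() *matcher {
	return &matcher{cache: make(map[string]cachedResult)}
}

// invalidate is called on every subscription insert or remove.
func (m *matcher) invalidate() { m.gen.Add(1) }

// lookup returns a cached result only if it is still current.
func (m *matcher) lookup(subject string) ([]string, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	r, ok := m.cache[subject]
	if !ok || r.gen != m.gen.Load() {
		return nil, false // stale or missing: caller must re-run the match
	}
	return r.subs, true
}

// store caches a freshly computed match result at the current generation.
func (m *matcher) store(subject string, subs []string) {
	m.mu.Lock()
	m.cache[subject] = cachedResult{gen: m.gen.Load(), subs: subs}
	m.mu.Unlock()
}

func main() {
	m := newMatcher()
	m.store("foo.bar", []string{"sub1"})
	_, hit := m.lookup("foo.bar")
	fmt.Println("hit before invalidate:", hit) // true
	m.invalidate()
	_, hit = m.lookup("foo.bar")
	fmt.Println("hit after invalidate:", hit) // false
}
```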

Protocol parsing: The parser is a byte-by-byte state machine. In .NET, use System.IO.Pipelines for zero-copy parsing with ReadOnlySequence<byte>.
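A toy version of that state machine, recognizing only PING\r\n, shows the key property: parser state lives in the struct, so a command split across two reads resumes where it left off. (Illustrative Go in the style of parser.go; the .NET version would walk a ReadOnlySequence<byte> instead of a []byte.)

```go
package main

import "fmt"

type state int

const (
	opStart state = iota
	opP
	opPI
	opPIN
	opPING
	opPINGCR
)

type parser struct{ s state }

// parse consumes buf one byte at a time, returning how many complete
// PING commands it saw and an error on any protocol violation.
func (p *parser) parse(buf []byte) (pings int, err error) {
	for _, b := range buf {
		var ok bool
		switch p.s {
		case opStart:
			ok, p.s = b == 'P', opP
		case opP:
			ok, p.s = b == 'I', opPI
		case opPI:
			ok, p.s = b == 'N', opPIN
		case opPIN:
			ok, p.s = b == 'G', opPING
		case opPING:
			ok, p.s = b == '\r', opPINGCR
		case opPINGCR:
			ok, p.s = b == '\n', opStart
			if ok {
				pings++ // complete command
			}
		}
		if !ok {
			return pings, fmt.Errorf("protocol error at byte %q", b)
		}
	}
	return pings, nil // state persists, so commands may span buffers
}

func main() {
	p := &parser{}
	n1, _ := p.parse([]byte("PI")) // partial command: no error, no count
	n2, _ := p.parse([]byte("NG\r\nPING\r\n"))
	fmt.Println(n1, n2) // 0 2
}
```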

Buffer management: Go uses []byte slices with pooling. Map to ArrayPool<byte> and Memory<T>/Span<T>.
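The Go side of that pattern looks like the sketch below: rent a fixed-size buffer, slice it to the payload, and return it for reuse. The .NET analogue is ArrayPool<byte>.Shared.Rent/Return with Span<byte> slicing the rented array (a sketch of the idea, not the server's actual pooling code).

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out 512-byte buffers, matching the small end of the
// server's dynamic read-buffer range.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 512) },
}

func main() {
	buf := bufPool.Get().([]byte) // rent (ArrayPool<byte>.Shared.Rent in .NET)
	n := copy(buf, "hello")
	fmt.Println(string(buf[:n])) // work with a slice of the rented buffer
	bufPool.Put(buf)             // return for reuse (ArrayPool Return)
}
```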

Compression: NATS uses S2 (Snappy variant) for route/gateway compression. Use an equivalent .NET S2 library or IronSnappy.

Ports: Client=4222, Cluster=6222, Monitoring=8222, Leaf=7422, Gateway=7222.
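For reference, the standard listeners can also be set explicitly in a server config file (same format the conf/ package parses; a minimal sketch using standard nats-server option names):

```
# client connections
port: 4222

# HTTP monitoring endpoints (/varz, /connz, ...)
http_port: 8222

# full-mesh cluster routes
cluster {
  port: 6222
}

# inter-cluster gateways (a name is required)
gateway {
  name: "A"
  port: 7222
}
```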

Message Flow Summary

Client PUB → parser → permission check → Sublist.Match() →
  ├─ Local subscribers: MSG to each (queue subs: pick one per group)
  ├─ Cluster routes: RMSG to peers (who deliver to their locals)
  ├─ Gateways: forward to interested remote clusters
  └─ JetStream: if subject matches a stream, store + deliver to consumers

Conventions

  • Reference the Go implementation file and line when porting a subsystem
  • Maintain protocol compatibility — the .NET server must interoperate with existing NATS clients and Go servers in a cluster
  • Use the same configuration file format as the Go server (parsed by conf/ package)
  • Match the Go server's monitoring JSON response shapes for tooling compatibility