docs: add Go-to-.NET gap analysis and test mapping

Research documents covering implementation gaps (structuregaps.md)
and full test function mapping (testmapping.md + CSV files) between
the Go NATS server and .NET port. 857/2937 Go tests mapped (29.2%),
2080 unmatched, 2966 .NET-only tests.
Author: Joseph Doherty
Date: 2026-02-24 10:07:29 -05:00
parent 1743cfc550
commit 0dc2b38415
4 changed files with 6522 additions and 0 deletions

docs/structuregaps.md (new file, 470 lines)

@@ -0,0 +1,470 @@
# Implementation Gaps: Go NATS Server vs .NET Port
## Overview
| Metric | Go Server | .NET Port | Ratio |
|--------|-----------|-----------|-------|
| Source lines | ~130K (109 files) | ~27K (192 files) | 4.8x |
| Test functions | 2,937 `func Test*` | 3,168 `[Fact]`/`[Theory]` | 0.93x |
| Tests passing | 2,937 | 3,501 (incl. parameterized) | 1.19x |
| Largest source file | filestore.go (12,593 lines) | NatsServer.cs (1,739 lines) | 7.2x |
The .NET port has more test methods than the Go server but covers less behavioral surface area. Many .NET tests are unit tests against stubs, mock coordinators, or simulated cluster fixtures, while Go tests are integration tests that start real servers, establish real TCP connections, and exercise full protocol paths.
The .NET codebase is split across more files (192 vs 109) following .NET conventions of one-class-per-file, but each file is substantially smaller. The 4.8x line count gap understates the functional gap because the Go code is denser (less boilerplate per line of logic than C#).
## Gap Severity Legend
- **CRITICAL**: Core functionality missing, blocks production use
- **HIGH**: Important feature incomplete, limits capabilities significantly
- **MEDIUM**: Feature gap exists but workarounds are possible or feature is less commonly used
---
## Gap 1: FileStore Block Management (CRITICAL)
**Go reference**: `filestore.go` -- 12,593 lines, ~300 functions
**.NET implementation**: `FileStore.cs` -- 607 lines
**Gap factor**: 20.8x
The Go FileStore is the most complex single file in the server. It implements a block-based persistent storage engine with S2 compression, AEAD encryption (ChaCha20-Poly1305 / AES-GCM), crash recovery, and TTL scheduling. The .NET FileStore is a basic JSONL append-only store that satisfies the `IStreamStore` interface but lacks the performance and durability characteristics required for production use.
**Missing features**:
- Block-based I/O: Go organizes messages into 65KB+ blocks with per-block indexes, allowing efficient range reads and compaction. The .NET store appends to a single file.
- Encryption key management: `genEncryptionKeys`, `recoverAEK`, per-block encryption with key rotation. The .NET `AeadEncryptor` exists (165 lines) but is not integrated into the file store's block lifecycle.
- S2 compression on write / decompression on read: The .NET `S2Codec` exists (111 lines) but is not wired into the storage path.
- Message block state recovery: Go recovers block state from corrupted or partially-written files on startup, rebuilding indexes from raw data. The .NET store has no recovery path.
- Tombstone and deletion tracking: Go tracks deleted messages as tombstones within blocks, allowing sparse sequence sets. The .NET store does not support sparse sequences.
- TTL and message scheduling recovery: Go uses time hash wheels to schedule message expiration and recovers pending expirations on restart.
- Checksum validation: Go computes checksums per message and per block for data integrity verification.
- Atomic file overwrites: Go uses rename-based atomic writes for crash safety. The .NET store does not.
- Multi-block write cache: Go maintains an in-memory write cache with configurable block count limits.
**Related .NET files**: `FileStore.cs` (607), `IStreamStore.cs` (172), `AeadEncryptor.cs` (165), `S2Codec.cs` (111), `StreamState.cs` (78), `MemStore.cs` (160)
**Total .NET lines in storage**: ~1,293
**Go equivalent**: `filestore.go` (12,593) + `memstore.go` (2,700) + `store.go` (791) = 16,084 lines
---
## Gap 2: JetStream Cluster Coordination (CRITICAL)
**Go reference**: `jetstream_cluster.go` -- 10,887 lines
**.NET implementation**: `JetStreamMetaGroup.cs` -- 51 lines
**Gap factor**: 213x
This is the largest single-file gap in the project. The Go `jetstream_cluster.go` implements the meta-controller that coordinates stream and consumer placement across a RAFT cluster. The .NET implementation is a 51-line stub.
**Missing features**:
- `streamAssignment` and `consumerAssignment` tracking with RAFT proposal workflow
- Inflight request deduplication: `inflightStreamInfo`, `inflightConsumerInfo` prevent duplicate operations during leader transitions
- Peer remove and stream move operations with data rebalancing
- Leader-based stream and consumer placement with topology awareness (unique nodes, tags, clusters)
- Assignment validation: checking resource limits, storage availability, and replica count constraints before accepting
- Replica group management: `StreamReplicaGroup.cs` exists (91 lines) but lacks the coordination logic
- Step-down and leadership transfer for maintenance operations
- Cross-cluster gateway awareness for super-cluster stream placement
**Related .NET files**: `JetStreamMetaGroup.cs` (51), `StreamReplicaGroup.cs` (91), `JetStreamService.cs` (148)
**Total .NET lines in cluster coordination**: ~290
---
## Gap 3: Stream Mirrors, Sources & Subject Transforms (HIGH)
**Go reference**: `stream.go` -- 8,072 lines, 80+ functions
**.NET implementation**: `StreamManager.cs` (436) + `MirrorCoordinator.cs` (22) + `SourceCoordinator.cs` (36)
**Gap factor**: 16.3x
The Go `stream.go` handles the full lifecycle of streams including mirror synchronization, source consumption, deduplication, subject transforms, and retention policy enforcement. The .NET `StreamManager` covers basic CRUD and message storage/retrieval but the mirror and source coordinators are stubs.
**Missing features**:
- `processMirrorMsgs`: continuous mirror synchronization loop that pulls messages from the source stream and applies them locally
- `setupMirrorConsumer` / `setupSourceConsumer`: creation of ephemeral internal consumers that track position in the source
- Source and mirror retry logic with exponential backoff and jitter
- Flow control reply handling for pull-based source consumption
- Subject transforms with regex matching (Go `subject_transform.go` is 688 lines; .NET `SubjectTransform.cs` is 708 lines -- this is actually well-ported)
- Deduplication window maintenance with `Nats-Msg-Id` header tracking
- Mirror/source error state tracking, health reporting, and automatic recovery
- Stream assignment awareness for determining whether a stream should be active on this node in cluster mode
- Purge operations with subject filtering and sequence-based purge
- Stream snapshot and restore for backup/migration
**Related .NET files**: `StreamManager.cs` (436), `MirrorCoordinator.cs` (22), `SourceCoordinator.cs` (36), `SubjectTransform.cs` (708), `StreamApiHandlers.cs` (351)
**Total .NET lines in stream management**: ~1,553
---
## Gap 4: Consumer Delivery Engines (HIGH)
**Go reference**: `consumer.go` -- 6,715 lines, 130+ functions
**.NET implementation**: `PushConsumerEngine.cs` (67) + `PullConsumerEngine.cs` (169) + `ConsumerManager.cs` (198) + `AckProcessor.cs` (72)
**Gap factor**: 13.3x
The Go consumer engine is a full state machine handling delivery, acknowledgment tracking, redelivery, rate limiting, and priority group management. The .NET engines are minimal wrappers that deliver messages but lack the stateful tracking required for production consumer behavior.
**Missing features**:
- Priority group pinning and flow: `setPinnedTimer`, `assignNewPinId` for sticky consumer assignment within priority groups
- Pending request queue with waiting consumer pool management for pull consumers
- NAK and redelivery tracking with configurable exponential backoff schedules
- Pause state management with advisory event publication
- Redelivery rate limiting per consumer to prevent thundering herd on redelivery
- Filter subject skip tracking for filtered consumers (skipping messages that do not match the filter without counting them against pending limits)
- Idle heartbeat generation and flow control acknowledgment
- Reset to sequence: allowing a consumer to rewind to a specific stream sequence
- Delivery interest tracking: distinguishing local vs gateway vs leaf node delivery targets
- Max-deliveries enforcement with configurable drop/reject/dead-letter policies
- Sample/observe mode for latency tracking (recording deliver-to-ack latency percentiles)
- Backoff schedules: per-message redelivery delay arrays
- Consumer pause/resume with coordinated cluster state
**Related .NET files**: `PushConsumerEngine.cs` (67), `PullConsumerEngine.cs` (169), `ConsumerManager.cs` (198), `AckProcessor.cs` (72), `ConsumerApiHandlers.cs` (307), `ConsumerState.cs` (41), `ConsumerConfig.cs` (37), `IConsumerStore.cs` (56)
**Total .NET lines in consumer subsystem**: ~947
---
## Gap 5: Client Protocol Handling (HIGH)
**Go reference**: `client.go` -- 6,716 lines, 162+ functions
**.NET implementation**: `NatsClient.cs` -- 924 lines
**Gap factor**: 7.3x
The Go client handler manages per-connection state including authentication, permissions, protocol parsing dispatch, write buffering, and trace logging. The .NET `NatsClient` handles the core read/write loops and subscription tracking but lacks the adaptive performance tuning and advanced protocol features.
**Missing features**:
- Adaptive read buffer tuning: Go dynamically resizes read buffers from 512 bytes to 65KB based on message throughput. The .NET port has `AdaptiveReadBuffer` tests but the implementation depth is unclear.
- Dynamic write buffer pooling: Go pools write buffers and uses flush coalescing to reduce syscalls under load
- Per-client trace level with contextual logging: Go supports enabling protocol-level tracing per client connection
- Message trace with direction (in/out) and byte-level logging for debugging
- Permission caching with LRU and TTL: Go caches permission check results to avoid re-evaluating on every publish. The .NET `PermissionLruCache` exists in tests but integration depth is limited.
- Client kind differentiation: Go distinguishes CLIENT, ROUTER, GATEWAY, LEAF, and SYSTEM client types with different protocol handling paths. The .NET port has `ClientKind` enum and routing tests but the protocol dispatch is not fully differentiated.
- Write timeout handling with partial flush recovery
- Nonce-based TLS authentication flows
- Slow consumer detection and eviction with configurable thresholds
- Maximum control line enforcement (4096 bytes in Go)
- MQTT and WebSocket protocol upgrade paths from the base client
**Related .NET files**: `NatsClient.cs` (924), `NatsParser.cs` (495), `NatsProtocol.cs` (estimated), `NatsServer.cs` (1,739 -- includes accept loop and server lifecycle)
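The adaptive read buffer heuristic described above (grow when reads fill the buffer, shrink when they underuse it, within fixed bounds) can be sketched as follows. Constants and the function name are illustrative; the Go server's actual tuning differs in detail.

```go
package main

import "fmt"

const (
	minBufSize = 512
	maxBufSize = 64 * 1024
)

// nextReadBufSize applies a doubling/halving heuristic to a per-connection
// read buffer: grow when the last read filled the buffer, shrink when it
// used less than half, and otherwise keep the current size.
func nextReadBufSize(cur, lastRead int) int {
	switch {
	case lastRead == cur && cur < maxBufSize:
		return cur * 2
	case lastRead < cur/2 && cur > minBufSize:
		return cur / 2
	default:
		return cur
	}
}

func main() {
	size := minBufSize
	for _, n := range []int{512, 1024, 2048, 100, 100} {
		size = nextReadBufSize(size, n)
		fmt.Println(size)
	}
	// prints 1024, 2048, 4096, 2048, 1024: ramps up under load, decays when idle
}
```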
---
## Gap 6: MQTT Protocol (HIGH)
**Go reference**: `mqtt.go` -- 5,882 lines
**.NET implementation**: `Mqtt/` folder -- 539 lines (5 files)
**Gap factor**: 10.9x
The Go MQTT implementation provides full MQTT v3.1.1 compatibility with JetStream-backed session persistence. The .NET implementation has a protocol parser skeleton and connection handler but lacks session management, QoS tracking, and JetStream integration.
**Missing features**:
- MQTT client session creation and restoration with persistent ClientID mapping
- Will message handling: delivering a pre-configured message when a client disconnects unexpectedly
- QoS 1 and QoS 2 tracking with packet ID mapping and retry logic
- Retained message storage backed by a JetStream stream (per-account)
- MQTT session stream persistence: each MQTT session's state is stored in a dedicated JetStream consumer
- MQTT subscription wildcard translation: converting MQTT `+` and `#` wildcards to NATS `*` and `>` equivalents
- Session flapper detection: identifying clients that connect/disconnect rapidly and applying backoff
- MaxAckPending enforcement for QoS 1 flow control
- CONNECT packet validation with protocol version negotiation
- MQTT-to-NATS subject mapping with topic separator translation (`/` to `.`)
**Related .NET files**: `MqttListener.cs` (166), `MqttProtocolParser.cs` (141), `MqttConnection.cs` (131), `MqttPacketReader.cs` (63), `MqttPacketWriter.cs` (38)
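The wildcard and separator translation listed above is mechanical: `/` becomes the `.` token separator, `+` becomes `*`, and `#` becomes `>`. A minimal sketch (the real server additionally has to escape literal `.` characters in topic levels, which this skips):

```go
package main

import (
	"fmt"
	"strings"
)

// mqttTopicToNatsSubject converts an MQTT topic filter to the equivalent
// NATS subject: '/' -> '.', '+' (single level) -> '*', '#' (multi level)
// -> '>'.
func mqttTopicToNatsSubject(topic string) string {
	levels := strings.Split(topic, "/")
	for i, l := range levels {
		switch l {
		case "+":
			levels[i] = "*"
		case "#":
			levels[i] = ">"
		}
	}
	return strings.Join(levels, ".")
}

func main() {
	fmt.Println(mqttTopicToNatsSubject("sensors/+/temp")) // sensors.*.temp
	fmt.Println(mqttTopicToNatsSubject("sensors/#"))      // sensors.>
}
```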
---
## Gap 7: JetStream API Layer (HIGH)
**Go reference**: `jetstream_api.go` -- 5,165 lines + `jetstream.go` -- 2,866 lines
**.NET implementation**: `JetStreamApiRouter.cs` (117) + `StreamApiHandlers.cs` (351) + `ConsumerApiHandlers.cs` (307) + `DirectApiHandlers.cs` (61) + `AccountApiHandlers.cs` (16) + `AccountControlApiHandlers.cs` (34) + `JetStreamService.cs` (148)
**Gap factor**: 7.7x
The Go JetStream API layer handles all `$JS.API.*` subject requests with request validation, leader forwarding, and response serialization. The .NET implementation covers the basic API surface but lacks leader forwarding, cluster-aware request routing, and several API endpoints.
**Missing features**:
- Leader forwarding: API requests that arrive at a non-leader node must be forwarded to the current leader
- Stream/consumer info caching with generation-based invalidation
- API rate limiting and request deduplication
- Snapshot and restore API endpoints
- Stream purge with subject filter and keep options
- Consumer pause/resume API
- Advisory event publication for API operations
- JetStream account resource tracking (storage used, streams count, consumers count)
**Related .NET total**: ~1,034 lines vs Go total: ~8,031 lines
---
## Gap 8: RAFT Consensus (MEDIUM)
**Go reference**: `raft.go` -- 5,037 lines
**.NET implementation**: `Raft/` folder -- 1,136 lines (13 files)
**Gap factor**: 4.4x
The .NET RAFT implementation covers the core protocol: leader election, log replication, and basic snapshotting. The gap is in operational features needed for production cluster management.
**Missing features**:
- `InstallSnapshot` with partial state application and streaming transfer
- `CreateSnapshotCheckpoint` for coordinated snapshots across multiple RAFT groups
- Membership change proposals: `ProposeAddPeer`, `ProposeRemovePeer` with joint consensus
- Campaign timeout management with randomized election delays
- Healthy node detection: classifying nodes as current, catching-up, or leaderless
- Applied/processed entry tracking for flow control between the RAFT log and the state machine
- Compaction: truncating the log after a snapshot is taken
- Pre-vote protocol to prevent disruptive elections from partitioned nodes
**Related .NET files**: `RaftWireFormat.cs` (430), `NatsRaftTransport.cs` (201), `RaftNode.cs` (189), `RaftTransport.cs` (64), `RaftLog.cs` (62), `RaftSubjects.cs` (53), `RaftSnapshotStore.cs` (41), `RaftReplicator.cs` (39), and 5 smaller files
---
## Gap 9: Account Management & Multi-Tenancy (MEDIUM)
**Go reference**: `accounts.go` -- 4,774 lines, 172+ functions
**.NET implementation**: `Account.cs` (189) + `AuthService.cs` (172) + related auth files
**Gap factor**: 13.2x (comparing `accounts.go` to `Account.cs` + `AuthService.cs`)
The Go account system provides full multi-tenant isolation where each account has its own `Sublist`, client set, and subject namespace. The .NET implementation has basic account configuration and authentication but lacks the runtime isolation and import/export machinery.
**Missing features**:
- Service and stream export whitelist enforcement
- Service import with weighted destination selection for load balancing
- Export authorization checks with account-level permissions
- Cycle detection for service import chains (preventing A imports from B imports from A)
- Response tracking for request-reply latency measurement
- Service latency metrics: p50, p90, p99 percentile tracking per service export
- Account-level JetStream resource limits: max storage, max streams, max consumers per account
- Leaf node cluster registration per account
- Client and connection tracking per account with eviction on limit exceeded
- Weighted subject mappings for traffic shaping
- System account with internal subscription handling
**Related .NET files**: `Account.cs` (189), `AuthService.cs` (172), `JwtAuthenticator.cs` (180), `AccountClaims.cs` (107), `AccountResolver.cs` (65), `ExportAuth.cs` (25), and 10+ smaller auth files
**Total .NET lines in auth/account**: ~1,266
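The cycle detection listed above (preventing A imports from B imports from A) is a depth-first walk over the account import graph, tracking the current path. An illustrative sketch, not the server's implementation:

```go
package main

import "fmt"

// importsCycle reports whether following service imports from start ever
// returns to an account already on the current path. imports maps an
// account to the accounts it imports from.
func importsCycle(imports map[string][]string, start string) bool {
	onPath := make(map[string]bool)
	var visit func(acc string) bool
	visit = func(acc string) bool {
		if onPath[acc] {
			return true // acc already on the current import chain: cycle
		}
		onPath[acc] = true
		for _, dep := range imports[acc] {
			if visit(dep) {
				return true
			}
		}
		onPath[acc] = false // backtrack
		return false
	}
	return visit(start)
}

func main() {
	cyclic := map[string][]string{"A": {"B"}, "B": {"C"}, "C": {"A"}}
	fmt.Println(importsCycle(cyclic, "A")) // true
	fmt.Println(importsCycle(map[string][]string{"A": {"B"}}, "A")) // false
}
```

The server would run this check when an import is added, rejecting the configuration before a request-reply loop can form at runtime.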
---
## Gap 10: Monitoring & Events (MEDIUM)
**Go reference**: `monitor.go` (4,240) + `events.go` (3,334) + `msgtrace.go` (799) = 8,373 lines
**.NET implementation**: `Monitoring/` (1,698) + `Events/` (686) + `MessageTraceContext.cs` (22) = 2,406 lines
**Gap factor**: 3.5x
The monitoring and events subsystem is one of the closer areas in terms of coverage. The .NET port implements `/varz`, `/connz`, `/routez`, `/subsz`, `/leafz`, `/gatewayz`, `/jsz`, `/accountz`, `/stacksz`, `/healthz`, and `/pprof` endpoints. The main gaps are in event detail and message tracing depth.
**Missing features**:
- Full system event payloads: Go publishes detailed JSON events for connect, disconnect, auth errors, slow consumers. The .NET events exist but payload fields may be incomplete.
- Message trace propagation: Go traces messages through the full delivery pipeline (publish, route, gateway, leaf, deliver) with byte-level detail. The .NET `MessageTraceContext` is 22 lines.
- Closed connection tracking: Go maintains a ring buffer of recently closed connections with disconnect reasons for `/connz` queries.
- Account-scoped monitoring: `/connz?acc=ACCOUNT` filtering
- Sort options for monitoring endpoints (by bytes, messages, subs, etc.)
---
## Gap 11: Gateway Bridging (MEDIUM)
**Go reference**: `gateway.go` -- 3,426 lines, 79+ functions
**.NET implementation**: `GatewayManager.cs` (225) + `GatewayConnection.cs` (242) + `ReplyMapper.cs` (39) + `GatewayOptions.cs` (9)
**Gap factor**: 6.7x
Gateways bridge separate NATS clusters. The .NET implementation handles basic gateway connections and message forwarding but lacks the optimization modes and cross-cluster isolation features.
**Missing features**:
- Interest-only mode optimization: after initial flooding, gateways switch to sending only messages for subjects that have known subscribers in the remote cluster
- Account-specific gateway routes: isolating gateway traffic per account
- Reply mapper (`_GR_.` prefix) for cross-cluster request-reply isolation (stub exists at 39 lines)
- Inbound gateway subscription interest tracking
- Outbound connection pooling (Go uses 3 connections per gateway peer by default)
- Gateway TLS handshake and mutual authentication
- Message trace propagation through gateways
- Gateway reconnection with exponential backoff
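The `_GR_.` reply mapping above exists so a response published in the remote cluster can be routed back to the exact origin. A simplified sketch of the wrap/unwrap pair (the real prefix encodes hashed cluster and server identifiers; plain names are used here for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// mapGatewayReply wraps a reply subject with a _GR_. routing prefix that
// identifies the origin cluster and server.
func mapGatewayReply(cluster, server, reply string) string {
	return "_GR_." + cluster + "." + server + "." + reply
}

// unmapGatewayReply strips the prefix on the return path, recovering the
// origin coordinates and the original reply subject.
func unmapGatewayReply(mapped string) (cluster, server, reply string, ok bool) {
	parts := strings.SplitN(mapped, ".", 4)
	if len(parts) != 4 || parts[0] != "_GR_" {
		return "", "", "", false
	}
	return parts[1], parts[2], parts[3], true
}

func main() {
	m := mapGatewayReply("east", "s1", "_INBOX.abc.1")
	fmt.Println(m) // _GR_.east.s1._INBOX.abc.1
	c, s, r, ok := unmapGatewayReply(m)
	fmt.Println(c, s, r, ok) // east s1 _INBOX.abc.1 true
}
```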
---
## Gap 12: Leaf Node Connections (MEDIUM)
**Go reference**: `leafnode.go` -- 3,470 lines
**.NET implementation**: `LeafNodeManager.cs` (213) + `LeafConnection.cs` (242) + `LeafLoopDetector.cs` (35) + `LeafHubSpokeMapper.cs` (30)
**Gap factor**: 6.7x
Leaf nodes provide hub-and-spoke topology for edge deployments. The .NET implementation handles basic leaf connections and subscription propagation but lacks the full feature set.
**Missing features**:
- Solicited leaf connection management with retry and reconnect logic
- Hub-spoke subject filtering: only propagating subjects that have active subscriptions
- Leaf node JetStream domain awareness
- Loop detection refinement: the basic detector exists (35 lines) but the Go implementation is more nuanced with `$LDS.` prefix handling
- Account-scoped leaf connections
- Leaf node compression negotiation (S2)
- Dynamic subscription interest updates after initial connect
**Related .NET files**: `LeafNodeManager.cs` (213), `LeafConnection.cs` (242), `LeafLoopDetector.cs` (35), `LeafHubSpokeMapper.cs` (30), `LeafNodeOptions.cs` (8), `LeafzHandler.cs` (21)
**Total .NET lines**: ~549
---
## Gap 13: Route Clustering (MEDIUM)
**Go reference**: `route.go` -- 3,314 lines
**.NET implementation**: `RouteManager.cs` (269) + `RouteConnection.cs` (289) + `RouteCompressionCodec.cs` (26)
**Gap factor**: 5.7x
Route clustering provides full-mesh connectivity between NATS servers. The .NET implementation handles basic route connections, subscription propagation, and message forwarding.
**Missing features**:
- Route pooling: Go uses configurable connection pools (default 3) per route peer. The .NET implementation uses single connections.
- Account-specific dedicated routes
- Route compression with S2 (codec stub exists at 26 lines)
- Solicited route connection management with discovery
- Route permission enforcement
- Dynamic route addition and removal without restart
- Gossip-based topology discovery
**Related .NET files**: `RouteManager.cs` (269), `RouteConnection.cs` (289), `RouteCompressionCodec.cs` (26), `RouteCompression.cs` (7), `RoutezHandler.cs` (21)
**Total .NET lines**: ~612
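The route pooling feature above spreads traffic across several parallel TCP connections to the same peer. The selection itself can be as simple as an atomic round-robin counter; a hedged sketch (the real server also pins certain accounts to dedicated connections, which this ignores):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// routePool picks one of N connections to a route peer in round-robin
// order, the basic scheme behind a fixed-size route connection pool
// (Go's default pool size is 3).
type routePool struct {
	conns []string // stand-ins for route connections
	next  atomic.Uint64
}

func (p *routePool) pick() string {
	n := p.next.Add(1) - 1 // atomic: safe to call from many goroutines
	return p.conns[n%uint64(len(p.conns))]
}

func main() {
	p := &routePool{conns: []string{"conn0", "conn1", "conn2"}}
	for i := 0; i < 4; i++ {
		fmt.Println(p.pick())
	}
	// conn0, conn1, conn2, conn0: wraps around the pool
}
```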
---
## Gap 14: Configuration & Hot Reload (MEDIUM)
**Go reference**: `opts.go` (6,435) + `reload.go` (2,653) = 9,088 lines
**.NET implementation**: `ConfigProcessor.cs` (1,023) + `NatsConfLexer.cs` (1,503) + `NatsConfParser.cs` (421) + `ConfigReloader.cs` (395) = 3,342 lines
**Gap factor**: 2.7x
Configuration parsing is one of the closer areas. The .NET config parser handles the NATS config file format including includes, variables, and nested blocks. The gap is primarily in hot reload scope and CLI option coverage.
**Missing features**:
- Signal handling (SIGHUP) for reload trigger on Unix systems
- Cluster permissions reloading without restart
- Auth change propagation to existing connections (disconnect clients that no longer have valid credentials)
- Logger reconfiguration without restart
- TLS certificate reloading (for certificate rotation)
- JetStream configuration changes (storage directory, limits)
- Route pool size changes at runtime
- Account list updates with connection cleanup for removed accounts
- Many CLI flags that Go supports are not parsed in the .NET host
---
## Gap 15: WebSocket Support (MEDIUM)
**Go reference**: `websocket.go` -- 1,550 lines
**.NET implementation**: `WebSocket/` folder -- 1,210 lines (7 files)
**Gap factor**: 1.3x
WebSocket support is one of the most complete areas in the .NET port. The basic WebSocket upgrade, frame reading/writing, and compression are implemented.
**Missing features**:
- WebSocket-specific TLS configuration
- Origin checking refinement (basic checker exists at 81 lines)
- WebSocket compression negotiation (permessage-deflate parameters)
- JWT authentication through WebSocket connect
---
## Test Coverage Gap Summary
The table below maps Go test files to their .NET equivalents. "Go Tests" counts `func Test*` functions. ".NET Tests" counts `[Fact]` and `[Theory]` attributes across all related .NET test files for that subsystem.
| Go Test Area | Go Tests | .NET Tests | Delta | Notes |
|---|---:|---:|---:|---|
| `jetstream_test.go` | 312 | ~350 | +38 | .NET tests are shallower; many test API routing not behavior |
| `filestore_test.go` | 232 | 199 | -33 | .NET lacks block management, recovery, encryption integration tests |
| `jetstream_consumer_test.go` | 160 | 116 | -44 | Missing priority groups, backoff, pause, rate limiting tests |
| `jetstream_cluster_1_test.go` | 151 | 542 (all cluster) | -- | .NET has more cluster tests but uses simulated meta-group, not real clustering |
| `jetstream_cluster_2_test.go` | 123 | (included above) | -- | |
| `jetstream_cluster_3_test.go` | 97 | (included above) | -- | |
| `jetstream_cluster_4_test.go` | 85 | (included above) | -- | |
| `jetstream_cluster_long_test.go` | 7 | (included above) | -- | |
| `jetstream_super_cluster_test.go` | 47 | ~20 | -27 | .NET cross-cluster tests exist but are thin |
| `mqtt_test.go` | 123 | 127 | +4 | .NET tests cover parsing and topic mapping but not session persistence |
| `leafnode_test.go` | 110 | 93 | -17 | .NET lacks solicited connection and JS domain tests |
| `raft_test.go` | 104 | 205 | +101 | .NET has extensive RAFT tests but against simulated transport |
| `monitor_test.go` | 100 | 136 | +36 | .NET monitoring is well-covered |
| `norace_1_test.go` | 100 | ~55 | -45 | Go norace tests are stress/concurrency; .NET stress tests are lighter |
| `norace_2_test.go` | 41 | (included above) | -- | |
| `jwt_test.go` | 88 | 88 | 0 | Good parity in JWT test count |
| `gateway_test.go` | 88 | 115 | +27 | .NET gateway tests cover config and basic forwarding well |
| `opts_test.go` | 86 | 225 | +139 | .NET has extensive config tests due to lexer/parser unit tests |
| `client_test.go` | 82 | 110 | +28 | .NET client tests are unit-level; Go tests are integration |
| `reload_test.go` | 73 | (included in config) | -- | |
| `routes_test.go` | 70 | 91 | +21 | .NET route tests cover handshake and subscription propagation |
| `sublist_test.go` | 65 | 40 | -25 | Go sublist tests include benchmarks and concurrent stress |
| `websocket_test.go` | 61 | 66 | +5 | Good parity |
| `events_test.go` | 51 | 53 | +2 | Good parity in count |
| `accounts_test.go` | 64 | 162 | +98 | .NET account tests cover auth mechanisms broadly but not import/export depth |
| `server_test.go` | 42 | 25 | -17 | Go server tests exercise full lifecycle |
| `msgtrace_test.go` | 33 | ~23 | -10 | .NET message trace tests exist but trace propagation is stubbed |
| `auth_callout_test.go` | 31 | ~15 | -16 | Partial coverage |
| `jetstream_batching_test.go` | 29 | ~10 | -19 | Batching semantics partially tested |
| `client_proxyproto_test.go` | 23 | ~5 | -18 | Proxy protocol minimally tested |
| `dirstore_test.go` | 19 | 0 | -19 | No directory store equivalent |
| `signal_test.go` | 19 | 0 | -19 | No signal handling in .NET |
| `jetstream_versioning_test.go` | 18 | ~5 | -13 | Version negotiation minimally tested |
| `jetstream_jwt_test.go` | 18 | ~10 | -8 | Partial JWT-JetStream integration |
| `parser_test.go` | 17 | 17 | 0 | Good parity |
| `store_test.go` | 17 | ~20 | +3 | Store interface tests |
| `jetstream_leafnode_test.go` | 13 | ~8 | -5 | Leaf-JetStream interaction partially covered |
| `split_test.go` | 12 | 0 | -12 | No split test equivalent |
| `auth_test.go` | 12 | (included in accounts) | -- | |
| `leafnode_proxy_test.go` | 9 | 0 | -9 | No proxy leaf node tests |
| `ipqueue_test.go` | 9 | 0 | -9 | No IP queue equivalent |
| `util_test.go` | 7 | 0 | -7 | Utility functions tested inline |
| `log_test.go` | 6 | ~3 | -3 | Basic logging tests |
| `nkey_test.go` | 5 | ~8 | +3 | Good parity |
| `subject_transform_test.go` | 4 | 53 | +49 | .NET has extensive transform tests |
| `errors_test.go` | 2 | ~4 | +2 | |
| **Subtree tests** | | | | |
| `stree/stree_test.go` | ~20 | ~25 | +5 | Subject tree tests |
| `gsl/gsl_test.go` | ~15 | ~20 | +5 | Generic subject list tests |
| `thw/thw_test.go` | ~10 | ~15 | +5 | Time hash wheel tests |
| `avl/` (in pse/) | ~5 | ~30 | +25 | Sequence set tests |
| **TOTAL** | **2,937** | **3,168** | **+231** | .NET has 8% more test methods |
---
## Architectural Observations
### 1. Cluster Testing is Simulated, Not Distributed
The .NET test suite uses `JetStreamClusterFixture` which simulates a multi-node cluster within a single process using `JetStreamMetaGroup` stubs. Go tests start real separate server processes, establish real TCP route connections, and verify behavior through actual network I/O. This means the .NET cluster tests validate API contracts and state machine transitions but do not test network partitions, connection failures, leader elections under load, or real RAFT consensus.
### 2. FileStore is Functionally a MemStore with Disk Persistence
The .NET `FileStore` appends JSONL records to disk but does not implement the block-based architecture that gives the Go FileStore its performance characteristics. There is no block indexing, no write cache, no compression-on-write, and no encryption-at-rest integration into the storage path. The existing `AeadEncryptor` and `S2Codec` are tested in isolation but not composed into the file store.
### 3. Consumer Engines are Delivery Wrappers, Not State Machines
The Go consumer engine maintains complex state: pending messages, redelivery queues, ack tracking with sequence gaps, flow control counters, and rate limiters. The .NET push and pull consumer engines (67 and 169 lines respectively) handle message delivery but delegate most state management to simpler structures. Features like NAK-with-delay, max-deliveries enforcement, and priority group pinning are not implemented.
### 4. Test Depth vs Test Count
The .NET test suite has 3,168 test methods compared to Go's 2,937 test functions, but this comparison is misleading:
- Many .NET tests are `[Theory]` with `[InlineData]` that test the same code path with different inputs (e.g., config parsing with various values)
- Go test functions frequently contain subtests (`t.Run(...)`) that are not counted in the function count but execute many more scenarios
- Go tests start real servers and make real TCP connections; .NET tests often use mocked interfaces or in-process fixtures
- Go's `norace` tests (141 functions) are specifically designed to catch concurrency bugs under stress; .NET stress tests exist but are lighter
### 5. Protocol Compatibility Surface
The .NET server can interoperate with standard NATS clients for basic pub/sub, subscribe, unsubscribe, and request-reply. The protocol parser (`NatsParser.cs`, 495 lines) handles the core text protocol. However, the following protocol extensions are incomplete:
- `HMSG`/`HPUB` (headers): parsed but header propagation through routes/gateways is incomplete
- Route protocol (`RS+`/`RS-`/`RMSG`): basic forwarding works but pool awareness and account scoping are missing
- Gateway protocol: basic message forwarding but interest-only mode optimization is absent
- Leaf node protocol: subscription propagation works but JetStream domain headers are not forwarded
### 6. Configuration Parsing is Relatively Complete
The NATS config file parser (`NatsConfLexer.cs` at 1,503 lines + `NatsConfParser.cs` at 421 lines) is one of the more complete subsystems. It handles the custom NATS config format including nested blocks, includes, variable interpolation, and most configuration directives. The gap is primarily in the number of configuration options that are parsed but not acted upon at runtime.
### 7. Internal Data Structures are Well-Ported
The internal data structure libraries have good parity:
- `SequenceSet.cs` (777 lines) -- AVL-based sparse sequence tracking
- `GenericSubjectList.cs` (650 lines) -- optimized trie for subject matching
- `SubjectTree/` (1,265 lines across Nodes.cs + SubjectTree.cs + Parts.cs) -- subject-aware tree
- `HashWheel.cs` (414 lines) -- time hash wheel for TTL scheduling
- `SubList.cs` (984 lines) vs Go `sublist.go` (1,728 lines) -- trie-based subscription matching
These are foundational and well-tested, which is positive for building the higher-level features on top of them.
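The matching semantics these structures implement are worth stating concretely: `*` matches exactly one subject token, `>` matches one or more trailing tokens. A linear sketch of the semantics (the real `Sublist`/trie answers this in one lookup across all subscriptions; this only checks a single pattern):

```go
package main

import (
	"fmt"
	"strings"
)

// subjectMatches reports whether a literal subject matches a subscription
// pattern using NATS wildcards: '*' matches exactly one token, '>' matches
// one or more trailing tokens.
func subjectMatches(pattern, subject string) bool {
	pt := strings.Split(pattern, ".")
	st := strings.Split(subject, ".")
	for i, p := range pt {
		if p == ">" {
			return i < len(st) // '>' needs at least one remaining token
		}
		if i >= len(st) || (p != "*" && p != st[i]) {
			return false
		}
	}
	return len(pt) == len(st) // no trailing unmatched tokens
}

func main() {
	fmt.Println(subjectMatches("foo.*.bar", "foo.x.bar")) // true
	fmt.Println(subjectMatches("foo.>", "foo.a.b"))       // true
	fmt.Println(subjectMatches("foo.>", "foo"))           // false: '>' needs a token
}
```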
### 8. Authentication Breadth vs Depth
The .NET port supports multiple authentication mechanisms (username/password, token, NKey, JWT, TLS map, external auth callout, proxy auth) which is good breadth. However, each mechanism is thinner than its Go equivalent. JWT authentication in particular lacks the full claims validation, account resolution with caching, and operator trust chain verification that the Go implementation provides.