

Implementation Gaps: Go NATS Server vs .NET Port

Last updated: 2026-02-25

Overview

| Metric | Go Server | .NET Port | Ratio |
|---|---|---|---|
| Source lines | ~130K (109 files) | ~39.1K (215 files) | 3.3x |
| Test methods | 2,937 `func Test*` | 5,808 `[Fact]`/`[Theory]` (~6,409 with parameterized) | 2.0x |
| Go tests mapped | 2,646 / 2,937 (90.1%) | — | — |
| .NET tests linked to Go | — | 2,485 / 5,808 (42.8%) | — |
| Largest source file | filestore.go (12,593 lines) | NatsServer.cs (1,883 lines) | 6.7x |

The .NET port has grown significantly since the prior baseline (~27K → ~39.1K lines), with substantial implementation progress in all 15 originally identified gap areas. However, many specific features within each subsystem remain unimplemented. This document catalogs the remaining gaps based on a function-by-function comparison.

Gap Severity Legend

  • CRITICAL: Core functionality missing, blocks production use
  • HIGH: Important feature incomplete, limits capabilities significantly
  • MEDIUM: Feature gap exists but workarounds are possible or feature is less commonly used
  • LOW: Nice-to-have, minor optimization or edge-case handling

Gap 1: FileStore Block Management (CRITICAL)

Go reference: filestore.go — 12,593 lines
.NET implementation: FileStore.cs (1,633) + MsgBlock.cs (630) + supporting files
Current .NET total: ~5,196 lines in Storage/
Gap factor: 2.4x (down from 20.8x)

The .NET FileStore has grown substantially with block-based storage, but the block lifecycle, encryption/compression integration, crash recovery, and integrity checking remain incomplete.

1.1 Block Rotation & Lifecycle (CRITICAL)

Missing Go functions:

  • initMsgBlock() (line 1046) — initializes new block with metadata
  • recoverMsgBlock() (line 1148) — recovers block from disk with integrity checks
  • newMsgBlockForWrite() (line 4485) — creates writable block with locking
  • spinUpFlushLoop() + flushLoop() (lines 5783–5842) — background flusher goroutine
  • enableForWriting() (line 6622) — opens FD for active block with gating semaphore (dios)
  • closeFDs() / closeFDsLocked() (lines 6843–6849) — explicit FD management
  • shouldCompactInline() (line 5553) — inline compaction heuristics
  • shouldCompactSync() (line 5561) — sync-time compaction heuristics

.NET status: Basic block creation/recovery exists but no automatic rotation on size thresholds, no block sealing, no per-block metadata files, no background flusher.
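The rotation behavior those functions provide can be sketched in a few lines. This is a hedged illustration, not the server's code: the `block`/`blockWriter` types and the single size threshold are invented for the example, whereas the real newMsgBlockForWrite() also handles locking, per-block metadata files, and encryption state:

```go
package main

import "fmt"

// Hypothetical, simplified types; the real msgBlock also carries
// checksums, metadata files, and encryption state.
type block struct {
	index  int
	bytes  int
	sealed bool
}

type blockWriter struct {
	maxBlockBytes int
	blocks        []*block
}

// current returns the active (unsealed) block, creating one if needed.
func (w *blockWriter) current() *block {
	if n := len(w.blocks); n > 0 && !w.blocks[n-1].sealed {
		return w.blocks[n-1]
	}
	b := &block{index: len(w.blocks)}
	w.blocks = append(w.blocks, b)
	return b
}

// store appends a record, sealing and rotating the block when the
// write would push it past the size threshold.
func (w *blockWriter) store(recLen int) *block {
	b := w.current()
	if b.bytes > 0 && b.bytes+recLen > w.maxBlockBytes {
		b.sealed = true // sealed: no further writes, eligible for compaction
		b = w.current() // rotate to a fresh writable block
	}
	b.bytes += recLen
	return b
}

func main() {
	w := &blockWriter{maxBlockBytes: 100}
	for i := 0; i < 10; i++ {
		w.store(30) // three 30-byte records fit per 100-byte block
	}
	fmt.Println(len(w.blocks)) // 10 records -> 4 blocks
}
```

A real implementation would also seal blocks on age and hand sealed blocks to the background flusher; the point is only that rotation is a size-threshold decision taken at write time.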

1.2 Encryption Integration into Block I/O (CRITICAL)

Go encrypts block files on disk using per-block AEAD keys. The .NET AeadEncryptor exists as a standalone utility but is NOT wired into the block write/read path.

Missing Go functions:

  • genEncryptionKeys() (line 816) — generates AEAD + stream cipher per block
  • recoverAEK() (line 878) — recovers encryption key from metadata
  • setupAEK() (line 907) — sets up encryption for the stream
  • loadEncryptionForMsgBlock() (line 1078) — loads per-block encryption state
  • checkAndLoadEncryption() (line 1068) — verifies encrypted block integrity
  • ensureLastChecksumLoaded() (line 1139) — validates block checksum
  • convertCipher() (line 1318) — re-encrypts block with new cipher
  • convertToEncrypted() (line 1407) — converts plaintext block to encrypted

Impact: Go provides end-to-end encryption of block files on disk. .NET stores block files in plaintext.

1.3 S2 Compression in Block Path (CRITICAL)

Go compresses payloads at the message level during block write and decompresses on read. .NET's S2Codec is standalone only.

Missing Go functions:

  • setupWriteCache() (line 4443) — initializes compression-aware write cache
  • Block-level compression/decompression during loadMsgs/flushPendingMsgs

Impact: Go block files are compressed (smaller disk footprint). .NET block files are uncompressed.

1.4 Crash Recovery & Index Rebuilding (CRITICAL)

Missing Go functions:

  • recoverFullState() (line 1754) — full recovery from blocks with lost-data tracking
  • recoverTTLState() (line 2042) — recovers TTL wheel state post-crash
  • recoverMsgSchedulingState() (line 2123) — recovers scheduled message delivery post-crash
  • recoverMsgs() (line 2263) — scans all blocks and rebuilds message index
  • rebuildState() (line 1446) — per-block state reconstruction
  • rebuildStateFromBufLocked() (line 1493) — parses raw block buffer to rebuild state
  • expireMsgsOnRecover() (line 2401) — enforces MaxAge on recovery
  • writeTTLState() (line 10833) — persists TTL state to disk

.NET status: FileStore.RecoverBlocks() exists but is minimal — no TTL wheel recovery, no scheduling recovery, no lost-data tracking, no tombstone cleanup.

1.5 Checksum Validation (HIGH)

Go uses highway hash for per-message checksums and validates integrity on read.

Missing:

  • msgBlock.lchk [8]byte — last checksum buffer
  • msgBlock.hh *highwayhash.Digest64 — highway hasher
  • msgBlock.lastChecksum() (line 2204) — returns last computed checksum
  • Message checksum validation during msgFromBufEx() (line ~8180)

.NET status: MessageRecord has no checksum field. No integrity checks on read.
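To make the missing check concrete, here is a minimal sketch of per-record checksum framing and validation in Go. crc32 stands in for the 64-bit highway hash the server actually uses, and the record layout and function names are invented for the example:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"hash/crc32"
)

// encodeRecord appends a length header, the payload, and a trailing
// checksum. (Sketch layout; not the server's actual record format.)
func encodeRecord(buf, payload []byte) []byte {
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(payload)))
	buf = append(buf, hdr[:]...)
	buf = append(buf, payload...)
	var tail [4]byte
	binary.LittleEndian.PutUint32(tail[:], crc32.ChecksumIEEE(payload))
	return append(buf, tail[:]...)
}

// decodeRecord reads one record and verifies its checksum — the check
// the .NET MessageRecord currently skips.
func decodeRecord(buf []byte) (payload, rest []byte, err error) {
	if len(buf) < 4 {
		return nil, nil, errors.New("short record header")
	}
	n := binary.LittleEndian.Uint32(buf)
	if uint32(len(buf)) < 4+n+4 {
		return nil, nil, errors.New("truncated record")
	}
	payload = buf[4 : 4+n]
	want := binary.LittleEndian.Uint32(buf[4+n:])
	if crc32.ChecksumIEEE(payload) != want {
		return nil, nil, errors.New("checksum mismatch: corrupt block")
	}
	return payload, buf[4+n+4:], nil
}

func main() {
	buf := encodeRecord(nil, []byte("hello"))
	msg, _, err := decodeRecord(buf)
	fmt.Println(string(msg), err)
	buf[5] ^= 0xFF // flip a payload bit: corruption must be detected
	_, _, err = decodeRecord(buf)
	fmt.Println(err)
}
```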

1.6 Atomic File Overwrites (HIGH)

Go uses temp file + rename for crash-safe state updates.

Missing:

  • _writeFullState() (line 10599) — atomic state serialization with temp file + rename
  • State file write protection with mutual exclusion (wfsmu/wfsrun)

.NET status: Writes directly without atomic guarantees.

1.7 Tombstone & Deletion Tracking (HIGH)

Missing:

  • removeMsg() (line 5267) — removes message with optional secure erase
  • removeMsgFromBlock() (line 5307) — removes from specific block
  • eraseMsg() (line 5890) — overwrites deleted message with random data (secure erase)
  • Sparse avl.SequenceSet for deletion tracking (Go uses AVL; .NET uses HashSet<ulong>)
  • Tombstone record persistence (deleted messages lost on crash in .NET)
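The difference between a HashSet and a run-based sequence set matters because deletions cluster: millions of contiguous deleted sequences collapse into a handful of ranges. A simplified sketch of the idea, using sorted runs in a slice rather than Go's AVL tree (all names invented):

```go
package main

import "fmt"

// seqRange is an inclusive run of deleted sequences.
type seqRange struct{ first, last uint64 }

type sequenceSet struct{ runs []seqRange } // sorted, non-overlapping

// insert adds seq, merging into an adjacent run when possible.
// Assumes mostly-ascending inserts, the common deletion pattern.
func (s *sequenceSet) insert(seq uint64) {
	for i := range s.runs {
		r := &s.runs[i]
		if seq >= r.first && seq <= r.last {
			return // already present
		}
		if seq == r.last+1 {
			r.last = seq
			// merge with the following run if the two now touch
			if i+1 < len(s.runs) && s.runs[i+1].first == seq+1 {
				r.last = s.runs[i+1].last
				s.runs = append(s.runs[:i+1], s.runs[i+2:]...)
			}
			return
		}
		if r.first > 0 && seq == r.first-1 {
			r.first = seq
			return
		}
		if seq < r.first {
			s.runs = append(s.runs[:i],
				append([]seqRange{{seq, seq}}, s.runs[i:]...)...)
			return
		}
	}
	s.runs = append(s.runs, seqRange{seq, seq})
}

func (s *sequenceSet) contains(seq uint64) bool {
	for _, r := range s.runs {
		if seq >= r.first && seq <= r.last {
			return true
		}
	}
	return false
}

func main() {
	var s sequenceSet
	for _, seq := range []uint64{5, 6, 7, 9, 8} { // 8 bridges 5-7 and 9
		s.insert(seq)
	}
	fmt.Println(len(s.runs), s.contains(6), s.contains(10))
}
```

The run representation is also what makes tombstone persistence cheap: a handful of ranges can be serialized per block instead of one entry per deleted message.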

1.8 Multi-Block Write Cache (MEDIUM)

Missing Go functions:

  • setupWriteCache() (line 4443) — sets up write cache
  • finishedWithCache() (line 4477) — marks cache complete
  • expireCache() / expireCacheLocked() (line 6148+) — expires idle cache
  • resetCacheExpireTimer() / startCacheExpireTimer() — manages cache timeout
  • Elastic reference machinery (line 6704, ecache.Strengthen() / WeakenAfter())
  • flushPendingMsgs() / flushPendingMsgsLocked() (line 7560+) — background flush

.NET status: Synchronous flush on rotation; no background flusher, no cache expiration.

1.9 Missing IStreamStore Interface Methods (CRITICAL)

| Method | Go Equivalent | .NET Status |
|---|---|---|
| StoreRawMsg | filestore.go:4759 | Missing |
| LoadNextMsgMulti | filestore.go:8426 | Missing |
| LoadPrevMsg | filestore.go:8580 | Missing |
| LoadPrevMsgMulti | filestore.go:8634 | Missing |
| NumPendingMulti | filestore.go:4051 | Missing |
| SyncDeleted | — | Missing |
| EncodedStreamState | filestore.go:10599 | Missing |
| Utilization | filestore.go:8758 | Missing |
| FlushAllPending | — | Missing |
| RegisterStorageUpdates | filestore.go:4412 | Missing |
| RegisterStorageRemoveMsg | filestore.go:4424 | Missing |
| RegisterProcessJetStreamMsg | filestore.go:4431 | Missing |
| ResetState | filestore.go:8788 | Missing |
| UpdateConfig | filestore.go:655 | Stubbed |
| Delete(bool) | — | Incomplete |
| Stop() | — | Incomplete |

1.10 Query/Filter Operations (MEDIUM)

Go uses trie-based subject indexing (SubjectTree[T]) for O(log n) lookups. .NET uses linear LINQ scans.

Missing:

  • FilteredState() (line 3191) — optimized filtered state with caching
  • checkSkipFirstBlock() family (lines 3241–3287)
  • numFiltered*() family (lines 3308–3413)
  • LoadMsg() (line 8308) — block-aware message loading

Gap 2: JetStream Cluster Coordination (CRITICAL)

Go reference: jetstream_cluster.go — 10,887 lines
.NET implementation: JetStreamMetaGroup.cs (454) + StreamReplicaGroup.cs (300) + supporting files
Current .NET total: ~2,787 lines in Cluster/ + Raft/
Gap factor: ~3.9x
Missing functions: ~84

The .NET implementation provides a skeleton suitable for unit testing but lacks the real cluster coordination machinery. The Go implementation is deeply integrated with RAFT, NATS subscriptions for inter-node communication, and background monitoring goroutines.

2.1 Cluster Monitoring Loop (CRITICAL)

The main orchestration loop that drives cluster state transitions.

Missing Go functions:

  • monitorCluster() (lines 1455–1825, 370 lines) — processes metadata changes from RAFT ApplyQ
  • checkForOrphans() (lines 1378–1432) — identifies unmatched streams/consumers after recovery
  • getOrphans() (lines 1433–1454) — scans all accounts for orphaned assets
  • checkClusterSize() (lines 1828–1869) — detects mixed-mode clusters, adjusts bootstrap size
  • isStreamCurrent(), isStreamHealthy(), isConsumerHealthy() (lines 582–679)
  • subjectsOverlap() (lines 751–767) — prevents subject collisions
  • Recovery state machine (recoveryUpdates struct, lines 1332–1366)

2.2 Stream & Consumer Assignment Processing (CRITICAL)

Missing Go functions:

  • processStreamAssignment() (lines 4541–4647, 107 lines)
  • processUpdateStreamAssignment() (lines 4650–4768)
  • processStreamRemoval() (lines 5216–5265)
  • processConsumerAssignment() (lines 5363–5542, 180 lines)
  • processConsumerRemoval() (lines 5543–5599)
  • processClusterCreateStream() (lines 4944–5215, 272 lines)
  • processClusterDeleteStream() (lines 5266–5362, 97 lines)
  • processClusterCreateConsumer() (lines 5600–5855, 256 lines)
  • processClusterDeleteConsumer() (lines 5856–5925)
  • processClusterUpdateStream() (lines 4798–4943, 146 lines)

2.3 Inflight Request Deduplication (HIGH)

Missing:

  • trackInflightStreamProposal() (lines 1193–1210)
  • removeInflightStreamProposal() (lines 1214–1230)
  • trackInflightConsumerProposal() (lines 1234–1257)
  • removeInflightConsumerProposal() (lines 1260–1278)
  • Full inflight tracking with ops count, deleted flag, and assignment capture

2.4 Peer Management & Stream Moves (HIGH)

Missing:

  • processAddPeer() (lines 2290–2340)
  • processRemovePeer() (lines 2342–2393)
  • removePeerFromStream() / removePeerFromStreamLocked() (lines 2396–2439)
  • remapStreamAssignment() (lines 7077–7111)
  • Stream move operations in jsClusteredStreamUpdateRequest() (lines 7757+)

2.5 Leadership Transition (HIGH)

Missing:

  • processLeaderChange() (lines 7001–7074) — meta-level leader change
  • processStreamLeaderChange() (lines 4262–4458, 197 lines) — stream-level
  • processConsumerLeaderChange() (lines 6622–6793, 172 lines) — consumer-level
  • Step-down mechanisms: JetStreamStepdownStream(), JetStreamStepdownConsumer()

2.6 Snapshot & State Recovery (HIGH)

Missing:

  • metaSnapshot() (line 1890) — triggers meta snapshot
  • encodeMetaSnapshot() (lines 2075–2145) — binary+S2 compression
  • decodeMetaSnapshot() (lines 2031–2074) — reverse of encode
  • applyMetaSnapshot() (lines 1897–2030, 134 lines) — computes deltas and applies
  • collectStreamAndConsumerChanges() (lines 2146–2240)
  • Per-stream monitorStream() (lines 2895–3700, 800+ lines)
  • Per-consumer monitorConsumer() (lines 6081–6350, 270+ lines)

2.7 Entry Application Pipeline (HIGH)

Missing:

  • applyMetaEntries() (lines 2474–2609, 136 lines) — handles all entry types
  • applyStreamEntries() (lines 3645–4067, 423 lines) — per-stream entries
  • applyStreamMsgOp() (lines 4068–4261, 194 lines) — per-message ops
  • applyConsumerEntries() (lines 6351–6621, 271 lines) — per-consumer entries

2.8 Topology-Aware Placement (MEDIUM)

Missing from PlacementEngine (currently 80 lines vs Go's 312 lines for selectPeerGroup()):

  • Unique tag enforcement (JetStreamUniqueTag)
  • HA asset limits per peer
  • Dynamic available storage calculation
  • Tag inclusion/exclusion with prefix handling
  • Weighted node selection based on availability
  • Mixed-mode detection
  • tieredStreamAndReservationCount() (lines 7524–7546)
  • createGroupForConsumer() (lines 8783–8923, 141 lines)
  • jsClusteredStreamLimitsCheck() (lines 7599–7618)

2.9 RAFT Group Creation & Lifecycle (MEDIUM)

Missing:

  • createRaftGroup() (lines 2659–2789, 131 lines)
  • raftGroup.isMember() (lines 2611–2621)
  • raftGroup.setPreferred() (lines 2623–2656)

2.10 Assignment Encoding/Decoding (MEDIUM)

Missing (no serialization for RAFT persistence):

  • encodeAddStreamAssignment(), encodeUpdateStreamAssignment(), encodeDeleteStreamAssignment()
  • decodeStreamAssignment(), decodeStreamAssignmentConfig()
  • encodeAddConsumerAssignment(), encodeDeleteConsumerAssignment()
  • encodeAddConsumerAssignmentCompressed(), decodeConsumerAssignment()
  • decodeConsumerAssignmentCompressed(), decodeConsumerAssignmentConfig()

2.11 Unsupported Asset Handling (LOW)

Missing:

  • unsupportedStreamAssignment type (lines 186–247) — graceful handling of version-incompatible streams
  • unsupportedConsumerAssignment type (lines 268–330)

2.12 Clustered API Handlers (MEDIUM)

Missing:

  • jsClusteredStreamRequest() (lines 7620–7701, 82 lines)
  • jsClusteredStreamUpdateRequest() (lines 7757–8265, 509 lines)
  • System-level subscriptions for result processing, peer removal, leadership step-down

Gap 3: Consumer Delivery Engines (HIGH)

Go reference: consumer.go — 6,715 lines, 209 functions
.NET implementation: PushConsumerEngine.cs (264) + PullConsumerEngine.cs (411) + AckProcessor.cs (225) + PriorityGroupManager.cs (102) + RedeliveryTracker.cs (92) + ConsumerManager.cs (198)
Current .NET total: ~1,292 lines
Gap factor: 5.2x

3.1 Core Message Delivery Loop (CRITICAL)

Missing: loopAndGatherMsgs() (Go lines ~1400–1700) — the main consumer dispatch loop that:

  • Polls stream store for new messages matching filter
  • Applies redelivery logic
  • Handles num_pending calculations
  • Manages delivery interest tracking
  • Responds to stream updates

3.2 Pull Request Pipeline (CRITICAL)

Missing: processNextMsgRequest() (Go lines ~4276–4450) — full pull request handler with:

  • Waiting request queue management (waiting list with priority)
  • Max bytes accumulation and flow control
  • Request expiry handling
  • Batch size enforcement
  • Pin ID assignment from priority groups

3.3 Inbound Ack/NAK Processing Loop (HIGH)

Missing: processInboundAcks() (Go line ~4854) — background goroutine that:

  • Reads from ack subject subscription
  • Parses +ACK, -NAK, +TERM, +WPI frames
  • Dispatches to appropriate handlers
  • Manages ack deadlines and redelivery scheduling

3.4 Redelivery Scheduler (HIGH)

Missing: Redelivery queue (rdq) as a min-heap/priority queue. RedeliveryTracker only tracks deadlines; it does not:

  • Order redeliveries by deadline
  • Batch redelivery dispatches efficiently
  • Support per-sequence redelivery state
  • Handle redelivery rate limiting

3.5 Idle Heartbeat & Flow Control (HIGH)

Missing:

  • sendIdleHeartbeat() (Go line ~5222) — heartbeat with pending counts and stall headers
  • sendFlowControl() (Go line ~5495) — dedicated flow control handler
  • Flow control reply generation with pending counts

3.6 Priority Group Pinning (HIGH)

PriorityGroupManager exists (102 lines) but missing:

  • Pin ID (NUID) generation and assignment (setPinnedTimer, assignNewPinId, unassignPinId)
  • Pin timeout timers (pinnedTtl)
  • Nats-Pin-Id header response
  • Advisory messages on pin/unpin events
  • Priority group state persistence

3.7 Pause/Resume State Management (HIGH)

Missing:

  • PauseUntil deadline tracking
  • Pause expiry timer
  • Pause advisory generation ($JS.EVENT.ADVISORY.CONSUMER.PAUSED)
  • Resume/unpause event generation
  • Pause state in consumer info response

3.8 Delivery Interest Tracking (HIGH)

Missing: Dynamic delivery interest monitoring:

  • Subject interest change tracking
  • Gateway interest checks
  • Subscribe/unsubscribe tracking for delivery subject
  • Interest-driven consumer cleanup (deleteNotActive)

3.9 Max Deliveries Enforcement (MEDIUM)

AckProcessor checks basic threshold but missing:

  • Advisory event generation on exceeding max delivers
  • Per-delivery-count policy selection
  • NotifyDeliveryExceeded advisory

3.10 Filter Subject Skip Tracking (MEDIUM)

Missing: Efficient filter matching with compiled regex, skip list tracking, UpdateSkipped state updates.

3.11 Sample/Observe Mode (MEDIUM)

Missing: Sample frequency parsing ("1%"), stochastic sampling, latency measurement, latency advisory generation.

3.12 Reset to Sequence (MEDIUM)

Missing: processResetReq() (Go line ~4241) — consumer state reset to specific sequence.

3.13 Rate Limiting (MEDIUM)

PushConsumerEngine has basic rate limiting but missing accurate token bucket, dynamic updates, per-message delay calculation.

3.14 Cluster-Aware Pending Requests (LOW)

Missing: Pull request proposals to RAFT, cluster-wide pending request tracking.


Gap 4: Stream Mirrors, Sources & Lifecycle (HIGH)

Go reference: stream.go — 8,072 lines, 193 functions
.NET implementation: StreamManager.cs (644) + MirrorCoordinator.cs (364) + SourceCoordinator.cs (470)
Current .NET total: ~1,478 lines
Gap factor: 5.5x

4.1 Mirror Consumer Setup & Retry (HIGH)

MirrorCoordinator (364 lines) exists but missing:

  • Complete mirror consumer API request generation
  • Consumer configuration with proper defaults
  • Subject transforms for mirror
  • Filter subject application
  • OptStartSeq/OptStartTime handling
  • Mirror error state tracking (setMirrorErr)
  • Scheduled retry with exponential backoff
  • Mirror health checking

4.2 Mirror Message Processing (HIGH)

Missing from MirrorCoordinator:

  • Gap detection in mirror stream (deleted/expired messages)
  • Sequence alignment (sseq/dseq tracking)
  • Mirror-specific handling for out-of-order arrival
  • Mirror error advisory generation

4.3 Source Consumer Setup (HIGH)

SourceCoordinator (470 lines) exists but missing:

  • Complete source consumer API request generation
  • Subject filter configuration
  • Subject transform setup
  • Account isolation verification
  • Flow control configuration
  • Starting sequence selection logic

4.4 Deduplication Window Management (HIGH)

SourceCoordinator has _dedupWindow but missing:

  • Time-based window pruning
  • Memory-efficient cleanup
  • Statistics reporting (DeduplicatedCount)
  • Nats-Msg-Id header extraction integration

4.5 Purge with Subject Filtering (HIGH)

Missing:

  • Subject-specific purge (preq.Subject)
  • Sequence range purge (preq.Sequence)
  • Keep-N functionality (preq.Keep)
  • Consumer purge cascading based on filter overlap

4.6 Interest Retention Enforcement (HIGH)

Missing:

  • Interest-based message cleanup (checkInterestState/noInterest)
  • Per-consumer interest tracking
  • noInterestWithSubject for filtered consumers
  • Orphan message cleanup

4.7 Stream Snapshot & Restore (MEDIUM)

StreamSnapshotService is a 10-line stub. Missing:

  • Snapshot deadline enforcement
  • Consumer inclusion/exclusion
  • TAR + S2 compression/decompression
  • Snapshot validation
  • Partial restore recovery

4.8 Stream Config Update Validation (MEDIUM)

Missing:

  • Subjects overlap checking
  • Mirror/source config validation
  • Subjects modification handling
  • Template update propagation
  • Discard policy enforcement

4.9 Source Retry & Health (MEDIUM)

Missing:

  • retrySourceConsumerAtSeq() — retry with exponential backoff
  • cancelSourceConsumer() — source consumer cleanup
  • retryDisconnectedSyncConsumers() — periodic health check
  • setupSourceConsumers() — bulk initialization
  • stopSourceConsumers() — coordinated shutdown

4.10 Source/Mirror Info Reporting (LOW)

Missing: Source/mirror lag, error state, consumer info for monitoring responses.


Gap 5: Client Protocol Handling (HIGH)

Go reference: client.go — 6,716 lines, 162+ functions
.NET implementation: NatsClient.cs (924 lines)
Gap factor: 7.3x

5.1 Flush Coalescing (HIGH)

Missing: Flush signal pending counter (fsp), maxFlushPending=10, pcd map of pending clients for broadcast-style flush signaling. Go allows multiple producers to queue data, then ONE flush signals the writeLoop. .NET uses independent channels per client.

5.2 Stall Gate Backpressure (HIGH)

Missing: c.out.stc channel that blocks producers when client is 75% full. No producer rate limiting in .NET.
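The stall gate is essentially one lazily created channel per client that is closed on drain. A minimal sketch of the mechanism (simplified types, no per-kind slow-consumer accounting):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// outbound sketches the relevant part of Go's client outbound state
// (cf. c.out.stc): pending bytes plus a gate channel producers wait on.
type outbound struct {
	mu      sync.Mutex
	pending int
	limit   int
	stalled chan struct{} // non-nil while producers must wait
}

// queue adds n pending bytes; the returned gate is nil when the
// client is below the 75% threshold.
func (o *outbound) queue(n int) <-chan struct{} {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.pending += n
	if o.pending > o.limit*3/4 && o.stalled == nil {
		o.stalled = make(chan struct{})
	}
	return o.stalled
}

// drained marks n bytes flushed and releases stalled producers once
// pending drops back under the threshold.
func (o *outbound) drained(n int) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.pending -= n
	if o.stalled != nil && o.pending <= o.limit*3/4 {
		close(o.stalled) // wake every waiting producer at once
		o.stalled = nil
	}
}

func main() {
	o := &outbound{limit: 100}
	gate := o.queue(80) // 80 > 75: gate engaged
	done := make(chan bool)
	go func() {
		if gate != nil {
			<-gate // producer blocks until the writer drains
		}
		done <- true
	}()
	time.Sleep(10 * time.Millisecond)
	o.drained(50) // pending 30: release producers
	fmt.Println(<-done)
}
```

Closing the channel (rather than signaling each waiter) is what makes the release a broadcast, the same trick Go uses for its flush signaling.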

5.3 Write Timeout with Partial Flush Recovery (HIGH)

Missing:

  • Partial flush recovery logic (written > 0 → retry vs close)
  • WriteTimeoutPolicy enum (Close, TcpFlush)
  • Different handling per client kind (routes can recover, clients cannot)
  • .NET immediately closes on timeout; Go allows ROUTER/GATEWAY/LEAF to continue

5.4 Per-Account Subscription Result Cache (HIGH)

Missing: readCache struct with per-account results cache, route targets, statistical counters. Go caches Sublist match results per account on route/gateway inbound path (maxPerAccountCacheSize=8192).

5.5 Slow Consumer Stall Gate (MEDIUM)

Missing: Full stall gate mechanics — Go creates a channel blocking producers at 75% threshold, per-kind slow consumer statistics (route, gateway, leaf), account-level slow consumer tracking.

5.6 Dynamic Write Buffer Pooling (MEDIUM)

OutboundBufferPool exists but is missing flush coalescing integration and the pcd broadcast optimization.

5.7 Per-Client Trace Level (MEDIUM)

SetTraceMode() exists but missing message delivery tracing (traceMsgDelivery) for routed messages, per-client echo support.

5.8 Subscribe Permission Caching (MEDIUM)

PermissionLruCache caches PUB permissions but NOT SUB. Go caches per-account results with genid-based invalidation.

5.9 Internal Client Kinds (LOW)

ClientKind enum supports CLIENT, ROUTER, GATEWAY, LEAF but missing SYSTEM, JETSTREAM, ACCOUNT kinds and isInternalClient() predicate.

5.10 Adaptive Read Buffer Short-Read Counter (LOW)

AdaptiveReadBuffer implements 2x grow/shrink but missing Go's srs (short reads) counter for more precise shrink decisions.


Gap 6: MQTT Protocol (HIGH)

Go reference: mqtt.go — 5,882 lines
.NET implementation: Mqtt/ — 1,238 lines (8 files)
Gap factor: 4.7x

6.1 JetStream-Backed Session Persistence (CRITICAL)

Missing: Go creates dedicated JetStream streams for:

  • $MQTT_msgs — message persistence
  • $MQTT_rmsgs — retained messages
  • $MQTT_sess — session state
  • $MQTT_qos2in — QoS 2 incoming tracking
  • $MQTT_out — QoS 1/2 outgoing

.NET MQTT is entirely in-memory. Sessions are lost on restart.

6.2 Will Message Delivery (HIGH)

MqttSessionStore stores will metadata (WillTopic, WillPayload, WillQoS, WillRetain) but does NOT publish the will message on abnormal client disconnection.

6.3 QoS 1/2 Tracking (HIGH)

MqttQoS2StateMachine exists for QoS 2 state transitions but missing:

  • JetStream-backed QoS 1 acknowledgment tracking
  • Durable redelivery of unacked QoS 1/2 messages
  • PUBREL delivery stream for QoS 2 phase 2

6.4 MaxAckPending Enforcement (HIGH)

Missing: No backpressure mechanism for QoS 1 accumulation, no per-subscription limits, no config reload hook.

6.5 Retained Message Delivery on Subscribe (MEDIUM)

MqttRetainedStore exists but is neither JetStream-backed nor integrated with subscription handling — retained messages are not delivered on SUBSCRIBE.

6.6 Session Flapper Detection (LOW)

Partially implemented in MqttSessionStore (basic structure present) but integration unclear.


Gap 7: JetStream API Layer (MEDIUM)

Go reference: jetstream_api.go (5,165) + jetstream.go (2,866) = 8,031 lines
.NET implementation: ~1,374 lines across 11 handler files
Gap factor: 5.8x

7.1 Leader Forwarding (HIGH)

API requests arriving at non-leader nodes must be forwarded to the current leader. Not implemented.

7.2 Clustered API Handlers (HIGH)

Missing:

  • jsClusteredStreamRequest() — stream create in clustered mode
  • jsClusteredStreamUpdateRequest() — update/delete/move/scale (509 lines in Go)
  • Cluster-aware consumer create/delete handlers

7.3 API Rate Limiting & Deduplication (MEDIUM)

No request deduplication or rate limiting.

7.4 Snapshot & Restore API (MEDIUM)

No snapshot/restore API endpoints.

7.5 Consumer Pause/Resume API (MEDIUM)

No consumer pause/resume API endpoint.

7.6 Advisory Event Publication (LOW)

No advisory events for API operations.


Gap 8: RAFT Consensus (CRITICAL for clustering)

Go reference: raft.go — 5,037 lines
.NET implementation: Raft/ — 1,838 lines (17 files)
Gap factor: 2.7x

8.1 Persistent Log Storage (CRITICAL)

.NET RAFT uses in-memory log only. RAFT state is not persisted across restarts.

8.2 Joint Consensus for Membership Changes (CRITICAL)

Current single-membership-change approach is unsafe for multi-node clusters. Go implements safe addition/removal per Section 4 of the RAFT paper.

8.3 InstallSnapshot Streaming (HIGH)

RaftSnapshot is fully loaded into memory. Go streams snapshots in chunks with progress tracking.

8.4 Leadership Transfer (MEDIUM)

No graceful leader step-down mechanism for maintenance operations.

8.5 Log Compaction (MEDIUM)

Basic compaction exists but no configurable retention policies.

8.6 Quorum Check Before Proposing (MEDIUM)

May propose entries when quorum is unreachable.

8.7 Read-Only Query Optimization (LOW)

All queries append log entries; no ReadIndex optimization for read-only commands.

8.8 Election Timeout Jitter (LOW)

May not have sufficient randomized jitter to prevent split votes.


Gap 9: Account Management & Multi-Tenancy (MEDIUM)

Go reference: accounts.go — 4,774 lines, 172+ functions
.NET implementation: Account.cs (280) + auth files
Current .NET total: ~1,266 lines
Gap factor: 3.8x

9.1 Service Export Latency Tracking (MEDIUM)

Missing: TrackServiceExport(), UnTrackServiceExport(), latency sampling, p50/p90/p99 tracking, sendLatencyResult(), sendBadRequestTrackingLatency(), sendReplyInterestLostTrackLatency().

9.2 Service Export Response Threshold (MEDIUM)

Missing: ServiceExportResponseThreshold(), SetServiceExportResponseThreshold() for SLA limits.

9.3 Stream Import Cycle Detection (MEDIUM)

Only service import cycles detected. Missing: streamImportFormsCycle(), checkStreamImportsForCycles().

9.4 Wildcard Service Exports (MEDIUM)

Missing: getWildcardServiceExport() — cannot define wildcard exports like svc.*.

9.5 Account Expiration & TTL (MEDIUM)

Missing: IsExpired(), expiredTimeout(), setExpirationTimer() — expired accounts won't auto-cleanup.

9.6 Account Claim Hot-Reload (MEDIUM)

Missing: UpdateAccountClaims(), updateAccountClaimsWithRefresh() — account changes may require restart.

9.7 Service/Stream Activation Expiration (LOW)

Missing: JWT activation claim expiry enforcement.

9.8 User NKey Revocation (LOW)

Missing: checkUserRevoked() — cannot revoke individual users without re-deploying.

9.9 Response Service Import (LOW)

Missing: Reverse response mapping for cross-account request-reply (addReverseRespMapEntry, checkForReverseEntries).

9.10 Service Import Shadowing Detection (LOW)

Missing: serviceImportShadowed().


Gap 10: Monitoring & Events (MEDIUM)

Go reference: monitor.go (4,240) + events.go (3,334) + msgtrace.go (799) = 8,373 lines
.NET implementation: Monitoring/ (1,698) + Events/ (960) + MessageTraceContext.cs = ~2,658 lines
Gap factor: 3.1x

10.1 Closed Connections Ring Buffer (HIGH)

Missing: Ring buffer for recently closed connections with disconnect reasons. /connz?state=closed returns empty.

10.2 Account-Scoped Filtering (MEDIUM)

Missing: /connz?acc=ACCOUNT filtering.

10.3 Sort Options (MEDIUM)

Missing: ConnzOptions.SortBy (by bytes, msgs, uptime, etc.).

10.4 Message Trace Propagation (MEDIUM)

MessageTraceContext exists but trace data not propagated across servers in events.

10.5 Auth Error Events (MEDIUM)

Missing: sendAuthErrorEvent(), sendAccountAuthErrorEvent() — cannot monitor auth failures.

10.6 Full System Event Payloads (MEDIUM)

Event types defined but payloads may be incomplete (server info, client info fields).

10.7 Closed Connection Reason Tracking (MEDIUM)

ClosedClient.Reason exists but not populated consistently.

10.8 Remote Server Events (LOW)

Missing: remoteServerShutdown(), remoteServerUpdate(), leafNodeConnected() — limited cluster member visibility.

10.9 Event Compression (LOW)

Missing: getAcceptEncoding() — system events consume more bandwidth.

10.10 OCSP Peer Events (LOW)

Missing: sendOCSPPeerRejectEvent(), sendOCSPPeerChainlinkInvalidEvent().


Gap 11: Gateway Bridging (MEDIUM)

Go reference: gateway.go — 3,426 lines
.NET implementation: GatewayManager.cs (225) + GatewayConnection.cs (259) + GatewayInterestTracker.cs (190) + ReplyMapper.cs (174)
Current .NET total: ~848 lines
Gap factor: 4.0x

11.1 Implicit Gateway Discovery (HIGH)

Missing: processImplicitGateway(), gossipGatewaysToInboundGateway(), forwardNewGatewayToLocalCluster(). All gateways must be explicitly configured.

11.2 Gateway Reconnection with Backoff (MEDIUM)

Missing: solicitGateways(), reconnectGateway(), solicitGateway() with structured delay/jitter.

11.3 Account-Specific Gateway Routes (MEDIUM)

Missing: sendAccountSubsToGateway() — routes all subscriptions without per-account isolation.

11.4 Queue Group Propagation (MEDIUM)

Missing: sendQueueSubsToGateway() — queue-based load balancing may not work across gateways.

11.5 Reply Subject Mapping Cache (MEDIUM)

Missing: trackGWReply(), startGWReplyMapExpiration() — dynamic reply tracking with TTL. Basic _GR_ prefix exists but no caching.

11.6 Gateway Command Protocol (LOW)

Missing: gatewayCmdGossip, gatewayCmdAllSubsStart, gatewayCmdAllSubsComplete — no bulk subscription synchronization.

11.7 Connection Registration (LOW)

Missing: registerInboundGatewayConnection(), registerOutboundGatewayConnection(), getRemoteGateway() — minimal gateway registry.


Gap 12: Leaf Node Connections (MEDIUM)

Go reference: leafnode.go — 3,470 lines
.NET implementation: LeafNodeManager.cs (328) + LeafConnection.cs (284) + LeafHubSpokeMapper.cs (121) + LeafLoopDetector.cs (35)
Current .NET total: ~768 lines
Gap factor: 4.5x

12.1 TLS Certificate Hot-Reload (MEDIUM)

Missing: updateRemoteLeafNodesTLSConfig() — TLS cert changes require server restart.

12.2 Permission & Account Syncing (MEDIUM)

Missing: sendPermsAndAccountInfo(), initLeafNodeSmapAndSendSubs() — permission changes on hub may not propagate.

12.3 Leaf Connection State Validation (MEDIUM)

Missing: remoteLeafNodeStillValid() — no validation of remote config on reconnect.

12.4 JetStream Migration Checks (LOW)

Missing: checkJetStreamMigrate() — cannot safely migrate leaf JetStream domains.

12.5 Leaf Node WebSocket Support (LOW)

Missing: Go accepts WebSocket connections for leaf nodes.

12.6 Leaf Cluster Registration (LOW)

Missing: registerLeafNodeCluster(), hasLeafNodeCluster() — cannot query leaf topology.

12.7 Connection Disable Flag (LOW)

Missing: isLeafConnectDisabled() — cannot selectively disable leaf solicitation.


Gap 13: Route Clustering (MEDIUM)

Go reference: route.go — 3,314 lines
.NET implementation: RouteManager.cs (324) + RouteConnection.cs (296) + RouteCompressionCodec.cs (135)
Current .NET total: ~755 lines
Gap factor: 4.4x

13.1 Implicit Route Discovery (HIGH)

Missing: processImplicitRoute(), forwardNewRouteInfoToKnownServers(). Routes must be explicitly configured.

13.2 Account-Specific Dedicated Routes (MEDIUM)

Missing: Per-account route pinning for traffic isolation.

13.3 Route Pool Size Negotiation (MEDIUM)

Missing: Handling heterogeneous clusters with different pool sizes.

13.4 Route Hash Storage (LOW)

Missing: storeRouteByHash(), getRouteByHash() for efficient route lookup.

13.5 Cluster Split Handling (LOW)

Missing: removeAllRoutesExcept(), removeRoute() — no partition handling.

13.6 No-Pool Route Fallback (LOW)

Missing: Interoperability with older servers without pool support.


Gap 14: Configuration & Hot Reload (MEDIUM)

Go reference: opts.go (6,435) + reload.go (2,653) = 9,088 lines
.NET implementation: ConfigProcessor.cs (1,438) + NatsConfLexer.cs (1,503) + NatsConfParser.cs (421) + ConfigReloader.cs (526)
Current .NET total: ~3,888 lines
Gap factor: 2.3x

14.1 SIGHUP Signal Handling (HIGH)

Missing: Unix signal handler for config reload without restart.
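The Unix side of this is a few lines in Go; the .NET equivalent would likely hang off PosixSignalRegistration (available since .NET 6). A sketch with an invented reloadConfig stand-in:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

var reloads atomic.Int64

// reloadConfig is a stand-in for the real reload path, which would
// re-parse the config file and diff the resulting options.
func reloadConfig() error {
	reloads.Add(1)
	return nil
}

// watchSIGHUP turns SIGHUP deliveries into reload calls — the pattern
// Go's server (and most Unix daemons) use. The channel is returned so
// the example can simulate a delivery.
func watchSIGHUP() chan<- os.Signal {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			if err := reloadConfig(); err != nil {
				fmt.Fprintln(os.Stderr, "reload failed:", err)
			}
		}
	}()
	return ch
}

func main() {
	ch := watchSIGHUP()
	ch <- syscall.SIGHUP // simulate: in production the OS delivers this
	for reloads.Load() == 0 {
		time.Sleep(time.Millisecond) // wait for the handler goroutine
	}
	fmt.Println("reloads:", reloads.Load())
}
```

On the .NET side, `PosixSignalRegistration.Create(PosixSignal.SIGHUP, ctx => …)` would be the analogous hook, though the exact wiring for this codebase is an assumption here.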

14.2 Auth Change Propagation (MEDIUM)

Missing: When users/nkeys change, propagate to existing connections.

14.3 TLS Certificate Reload (MEDIUM)

Missing: Reload certificates for new connections without restart.

14.4 Cluster Config Hot Reload (MEDIUM)

Missing: Add/remove route URLs, gateway URLs, leaf node URLs at runtime.

14.5 Logging Level Changes (LOW)

Missing: Debug/trace/logtime changes at runtime.

14.6 JetStream Config Changes (LOW)

Missing: JetStream option reload.


Gap 15: WebSocket Support (LOW)

Go reference: websocket.go — 1,550 lines
.NET implementation: WebSocket/ — 1,453 lines (7 files)
Gap factor: 1.1x

WebSocket support is the most complete subsystem. Compression negotiation (permessage-deflate), JWT auth through WebSocket, and origin checking are all implemented.

15.1 WebSocket-Specific TLS (LOW)

Minor TLS configuration differences.


Priority Summary

Tier 1: Production Blockers (CRITICAL)

  1. FileStore encryption/compression/recovery (Gap 1) — blocks data security and durability
  2. FileStore missing interface methods (Gap 1.9) — blocks API completeness
  3. RAFT persistent log (Gap 8.1) — blocks cluster persistence
  4. RAFT joint consensus (Gap 8.2) — blocks safe membership changes
  5. JetStream cluster monitoring loop (Gap 2.1) — blocks distributed JetStream

Tier 2: Capability Limiters (HIGH)

  1. Consumer delivery loop (Gap 3.1) — core consumer functionality
  2. Consumer pull request pipeline (Gap 3.2) — pull consumer correctness
  3. Consumer ack/redelivery (Gaps 3.3–3.4) — consumer reliability
  4. Stream mirror/source retry (Gaps 4.1–4.3) — replication reliability
  5. Stream purge with filtering (Gap 4.5) — operational requirement
  6. Client flush coalescing (Gap 5.1) — performance
  7. Client stall gate (Gap 5.2) — backpressure
  8. MQTT JetStream persistence (Gap 6.1) — MQTT durability
  9. SIGHUP config reload (Gap 14.1) — operational requirement
  10. Implicit route/gateway discovery (Gaps 11.1, 13.1) — cluster usability

Tier 3: Important Features (MEDIUM)

16–40: Account management, monitoring, gateway details, leaf node features, consumer advanced features, etc.

Tier 4: Nice-to-Have (LOW)

41+: Internal client kinds, election jitter, event compression, etc.


Test Coverage Gap Summary

All 2,937 Go tests have been categorized:

  • 2,646 mapped (90.1%) — have corresponding .NET tests
  • 272 not_applicable (9.3%) — platform-specific or irrelevant
  • 19 skipped (0.6%) — intentionally deferred
  • 0 unmapped — none remaining

The .NET test suite has 5,808 test methods (expanding to ~6,409 with parameterized [Theory] test cases). Of these, 2,485 (42.8%) are linked back to Go test counterparts via the test_mappings table (57,289 mapping rows). The remaining 3,323 .NET tests are original — covering areas like unit tests for .NET-specific implementations, additional edge cases, and subsystems that don't have 1:1 Go equivalents.

Many tests validate API contracts against stubs/mocks rather than testing actual distributed behavior. The gap is in test depth, not test count.