perf: add FileStore buffered writes, O(1) state tracking, and eliminate redundant per-publish work
Implement a Go-parity background flush loop (coalescing up to 16 KB or 8 ms) in MsgBlock/FileStore, replace the O(n) GetStateAsync scan with incrementally maintained counters, skip PruneExpired/LoadAsync/PrunePerSubject when they have no work to do, and bypass RAFT for single-replica streams. Fix counter-tracking bugs in RemoveMsg/EraseMsg/TTL expiry and ObjectDisposedException races on flush-loop disposal. FileStore optimizations verified with 3112/3112 JetStream tests passing; the async publish benchmark remains at ~174 msg/s because the remaining bottleneck is in the E2E protocol path.
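The coalescing flush loop described above can be sketched roughly as follows. This is a minimal illustration in Python, not the actual MsgBlock/FileStore code: the `BufferedWriter` class, its `sink` callback, and the exact synchronization are all hypothetical, standing in for the real buffered-write path with its 16 KB / 8 ms thresholds.

```python
import threading
import time

class BufferedWriter:
    """Sketch of a coalescing flush loop: writes accumulate in an
    in-memory buffer and are flushed when either the buffer reaches
    flush_bytes (16 KB in the real store) or flush_interval (8 ms)
    elapses, turning many small publishes into one disk write."""

    def __init__(self, sink, flush_bytes=16 * 1024, flush_interval=0.008):
        self._sink = sink                  # callable receiving coalesced bytes
        self._flush_bytes = flush_bytes
        self._flush_interval = flush_interval
        self._buf = bytearray()
        self._cond = threading.Condition()
        self._closed = False
        self._thread = threading.Thread(target=self._flush_loop, daemon=True)
        self._thread.start()

    def write(self, data: bytes):
        with self._cond:
            self._buf += data
            if len(self._buf) >= self._flush_bytes:
                self._cond.notify()        # size threshold hit: wake the flusher

    def _flush_loop(self):
        while True:
            with self._cond:
                # Wait up to flush_interval for the size threshold; a
                # timeout means "flush whatever has accumulated so far".
                self._cond.wait(timeout=self._flush_interval)
                if self._closed and not self._buf:
                    return                 # disposal drains first, then exits:
                                           # avoids use-after-dispose races
                data, self._buf = bytes(self._buf), bytearray()
            if data:
                self._sink(data)           # single coalesced write, outside the lock

    def close(self):
        with self._cond:
            self._closed = True
            self._cond.notify()
        self._thread.join()
```

The key design point is that disposal synchronizes with the loop (drain, then exit) rather than tearing the buffer down underneath it, which is the class of ObjectDisposedException race the commit fixes.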
@@ -57,9 +57,9 @@ Benchmark run: 2026-03-13. Both servers running on the same machine, tested with
| Mode | Payload | Storage | Go msg/s | .NET msg/s | Ratio (.NET/Go) |
|------|---------|---------|----------|------------|-----------------|
| Synchronous | 16 B | Memory | 16,783 | 13,815 | 0.82x |
-| Async (batch) | 128 B | File | 187,067 | 115 | 0.00x |
+| Async (batch) | 128 B | File | 210,387 | 174 | 0.00x |
-> **Note:** Async file store publish is extremely slow on the .NET server — likely a JetStream file store implementation bottleneck rather than a client issue.
+> **Note:** Async file store publish remains extremely slow after FileStore-level optimizations (buffered writes, O(1) state tracking, redundant work elimination). The bottleneck is in the E2E network/protocol processing path (synchronous `.GetAwaiter().GetResult()` calls in the client read loop), not storage I/O.
---
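The O(1) state tracking mentioned in the commit message amounts to maintaining counters incrementally instead of rescanning message blocks on every GetStateAsync call. A minimal sketch, with hypothetical names (`StreamState`, `store`, `remove`) standing in for the real FileStore API:

```python
class StreamState:
    """Sketch of O(1) stream-state tracking: rather than an O(n) scan
    over message blocks per state query, counters are updated on every
    store and remove, so reading state is constant time."""

    def __init__(self):
        self.msgs = 0        # live message count
        self.bytes = 0       # total stored bytes
        self.first_seq = 0   # lowest live sequence
        self.last_seq = 0    # highest assigned sequence

    def store(self, seq: int, size: int):
        if self.msgs == 0:
            self.first_seq = seq
        self.msgs += 1
        self.bytes += size
        self.last_seq = seq

    def remove(self, seq: int, size: int):
        # Every removal path (RemoveMsg, EraseMsg, TTL expiry) must
        # decrement the same counters, or the cached state drifts from
        # reality -- the class of tracking bug the commit fixes.
        self.msgs -= 1
        self.bytes -= size
        if self.msgs == 0:
            # Empty-stream convention assumed here: next publish will
            # land at last_seq + 1.
            self.first_seq = self.last_seq + 1
```

The invariant to test is exactly drift: after any interleaving of stores and removals, the counters must match what a full scan would report.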
@@ -85,13 +85,13 @@ Benchmark run: 2026-03-13. Both servers running on the same machine, tested with
| Multi pub/sub | 1.97x | .NET faster (likely measurement artifact at low counts) |
| Request/reply latency | 0.77x–0.84x | Good |
| JetStream sync publish | 0.82x | Good |
-| JetStream async file publish | ~0x | Broken — file store bottleneck |
+| JetStream async file publish | ~0x | Broken — E2E protocol path bottleneck |
| JetStream durable fetch | 0.13x | Needs optimization |
### Key Observations
1. **Pub-only and request/reply are within striking distance** (0.6x–0.85x), suggesting the core message path is reasonably well ported.
2. **Small-payload pub/sub and fan-out are 5x slower** (0.18x ratio). The bottleneck is likely in the subscription dispatch / message delivery hot path — the `SubList.Match()` → `MSG` write loop.
-3. **JetStream file store is essentially non-functional** for async batch publishing. The sync memory store path works at 0.82x parity, so the issue is specific to file I/O or ack handling.
+3. **JetStream file store async publish remains at ~174 msg/s** despite FileStore-level optimizations (buffered writes with background flush loop, O(1) state tracking, eliminating redundant per-publish work). The bottleneck is in the E2E network/protocol processing path — synchronous `.GetAwaiter().GetResult()` calls in the client read loop block the async pipeline.
4. **JetStream consumption** (durable fetch) is 8x slower than Go. Ordered consumers don't work yet.
5. The multi-pub/sub result showing .NET faster is likely a measurement artifact from the small message count (2,000 per publisher) — not representative at scale.
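The sync-over-async bottleneck called out in observation 3 can be illustrated with a toy model: if the publish/ack loop blocks on each round-trip (as a `.GetAwaiter().GetResult()` in the read loop effectively forces), throughput is capped at one message per round-trip, regardless of how fast the store is. This Python asyncio sketch uses a made-up 5 ms round-trip; the numbers are illustrative only, not measurements from either server.

```python
import asyncio
import time

RTT = 0.005  # pretend 5 ms server round-trip per ack (illustrative)

async def ack():
    await asyncio.sleep(RTT)  # stands in for one PubAck round-trip

async def publish_blocking(n):
    # Waiting for each ack before sending the next message: the loop
    # is serialized, so total time is roughly n * RTT.
    for _ in range(n):
        await ack()

async def publish_pipelined(n):
    # All acks in flight at once: total time is roughly one RTT,
    # which is what a fully async batch-publish path achieves.
    await asyncio.gather(*(ack() for _ in range(n)))

n = 20
t0 = time.perf_counter()
asyncio.run(publish_blocking(n))
blocking = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(publish_pipelined(n))
pipelined = time.perf_counter() - t0

print(f"blocking: {blocking:.3f}s  pipelined: {pipelined:.3f}s")
```

With a 5 ms round-trip, the serialized variant needs about n round-trips while the pipelined one needs about one, which is the same shape as ~174 msg/s from a client whose read loop blocks per message.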