Compare commits
14 Commits
9e2d763741
...
codex/defe
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
b94a67be6e | ||
|
|
c0aaae9236 | ||
|
|
4e96fb2ba8 | ||
|
|
ae0a553ab8 | ||
|
|
a660e38575 | ||
|
|
8849265780 | ||
|
|
ba4f41cf71 | ||
|
|
4e61314c1c | ||
|
|
db1de2a384 | ||
|
|
7a338dd510 | ||
|
|
3297334261 | ||
|
|
4972f998b7 | ||
|
|
7518b97b79 | ||
|
|
485c7b0c2e |
228
AGENTS.md
Normal file
228
AGENTS.md
Normal file
@@ -0,0 +1,228 @@
|
||||
# AGENTS.md
|
||||
|
||||
## Project Summary
|
||||
|
||||
This project ports the NATS messaging server from Go to .NET 10 C#. The Go source (~130K LOC) is the reference at `golang/nats-server/`. Porting progress is tracked in an SQLite database (`porting.db`) managed by the PortTracker CLI tool.
|
||||
|
||||
## Folder Layout
|
||||
|
||||
```
|
||||
natsnet/
|
||||
├── golang/nats-server/ # Go source (read-only reference)
|
||||
├── dotnet/
|
||||
│ ├── src/ZB.MOM.NatsNet.Server/ # Main server library
|
||||
│ ├── src/ZB.MOM.NatsNet.Server.Host/ # Host entry point
|
||||
│ └── tests/
|
||||
│ ├── ZB.MOM.NatsNet.Server.Tests/ # Unit tests
|
||||
│ └── ZB.MOM.NatsNet.Server.IntegrationTests/ # Integration tests
|
||||
├── tools/NatsNet.PortTracker/ # CLI tracking tool
|
||||
├── docs/standards/dotnet-standards.md # .NET coding standards (MUST follow)
|
||||
├── docs/plans/phases/ # Phase instruction guides
|
||||
├── reports/current.md # Latest porting status
|
||||
├── porting.db # SQLite tracking database
|
||||
└── porting-schema.sql # Database schema
|
||||
```
|
||||
|
||||
## Build and Test
|
||||
|
||||
```bash
|
||||
# Build the solution
|
||||
dotnet build dotnet/
|
||||
|
||||
# Run all unit tests
|
||||
dotnet test dotnet/tests/ZB.MOM.NatsNet.Server.Tests/
|
||||
|
||||
# Run filtered tests (by namespace/class)
|
||||
dotnet test --filter "FullyQualifiedName~ZB.MOM.NatsNet.Server.Tests.Protocol" \
|
||||
dotnet/tests/ZB.MOM.NatsNet.Server.Tests/
|
||||
|
||||
# Run integration tests
|
||||
dotnet test dotnet/tests/ZB.MOM.NatsNet.Server.IntegrationTests/
|
||||
|
||||
# Generate porting report
|
||||
./reports/generate-report.sh
|
||||
```
|
||||
|
||||
## .NET Coding Standards
|
||||
|
||||
**MUST follow all rules in `docs/standards/dotnet-standards.md`.**
|
||||
|
||||
Critical rules (non-negotiable):
|
||||
|
||||
- .NET 10, C# latest, nullable enabled
|
||||
- **xUnit 3** + **Shouldly** + **NSubstitute** for testing
|
||||
- **NEVER use FluentAssertions or Moq** — these are forbidden
|
||||
- PascalCase for public members, `_camelCase` for private fields
|
||||
- File-scoped namespaces: `ZB.MOM.NatsNet.Server.[Module]`
|
||||
- Use `CancellationToken` on all async signatures
|
||||
- Use `ReadOnlySpan<byte>` on hot paths
|
||||
- Test naming: `[Method]_[Scenario]_[Expected]`
|
||||
- Test class naming: `[ClassName]Tests`
|
||||
- Structured logging with `ILogger<T>` and `LogContext.PushProperty`
|
||||
|
||||
## PortTracker CLI
|
||||
|
||||
All tracking commands use this base:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- <command> --db porting.db
|
||||
```
|
||||
|
||||
### Querying
|
||||
|
||||
| Command | Purpose |
|
||||
|---------|---------|
|
||||
| `report summary` | Show overall porting progress |
|
||||
| `dependency ready` | List items ready to port (no unported deps) |
|
||||
| `dependency blocked` | List items blocked by unported deps |
|
||||
| `feature list --status <s>` | List features by status |
|
||||
| `feature list --module <id>` | List features in a module |
|
||||
| `feature show <id>` | Show feature details (Go source path, .NET target) |
|
||||
| `test list --status <s>` | List tests by status |
|
||||
| `test show <id>` | Show test details |
|
||||
| `module list` | List all modules |
|
||||
| `module show <id>` | Show module with its features and tests |
|
||||
|
||||
### Updating Status
|
||||
|
||||
| Command | Purpose |
|
||||
|---------|---------|
|
||||
| `feature update <id> --status <s>` | Update one feature |
|
||||
| `feature batch-update --ids "1-10" --set-status <s> --execute` | Bulk update features |
|
||||
| `test update <id> --status <s>` | Update one test |
|
||||
| `test batch-update --ids "1-10" --set-status <s> --execute` | Bulk update tests |
|
||||
| `module update <id> --status <s>` | Update module status |
|
||||
|
||||
### Audit Verification
|
||||
|
||||
Status updates are verified against Roslyn audit results. If the audit disagrees with your requested status, add `--override "reason"` to force it.
|
||||
|
||||
```bash
|
||||
feature update 42 --status verified --override "manually verified logic"
|
||||
```
|
||||
|
||||
### Audit Commands
|
||||
|
||||
| Command | Purpose |
|
||||
|---------|---------|
|
||||
| `audit --type features` | Dry-run audit of features against .NET source |
|
||||
| `audit --type tests` | Dry-run audit of tests against test project |
|
||||
| `audit --type features --execute` | Apply audit classifications to DB |
|
||||
| `audit --type tests --execute` | Apply test audit classifications to DB |
|
||||
|
||||
### Valid Statuses
|
||||
|
||||
```
|
||||
not_started → stub → complete → verified
|
||||
└→ n_a (not applicable)
|
||||
└→ deferred (blocked, needs server infra)
|
||||
```
|
||||
|
||||
### Batch ID Syntax
|
||||
|
||||
`--ids` accepts: ranges `"100-200"`, lists `"1,5,10"`, or mixed `"1-5,10,20-25"`.
|
||||
|
||||
All batch commands default to dry-run. Add `--execute` to apply.
|
||||
|
||||
## Porting Workflow
|
||||
|
||||
### Finding Work
|
||||
|
||||
1. Query for features ready to port:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- dependency ready --db porting.db
|
||||
```
|
||||
|
||||
2. Or find deferred/stub features in a specific module:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- feature list --module <id> --status deferred --db porting.db
|
||||
```
|
||||
|
||||
3. To find tests that need implementing:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- test list --status stub --db porting.db
|
||||
dotnet run --project tools/NatsNet.PortTracker -- test list --status deferred --db porting.db
|
||||
```
|
||||
|
||||
### Implementing a Feature
|
||||
|
||||
1. **Claim it** — mark as stub before starting:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- feature update <id> --status stub --db porting.db
|
||||
```
|
||||
|
||||
2. **Read the Go source** — use `feature show <id>` to get the Go file path and line numbers, then read the Go implementation.
|
||||
|
||||
3. **Write idiomatic C#** — translate intent, not lines:
|
||||
- Use `async`/`await`, not goroutine translations
|
||||
- Use `Channel<T>` for Go channels
|
||||
- Use `CancellationToken` for `context.Context`
|
||||
- Use `ReadOnlySpan<byte>` on hot paths
|
||||
- Use `Lock` (C# 13) for `sync.Mutex`
|
||||
- Use `ReaderWriterLockSlim` for `sync.RWMutex`
|
||||
|
||||
4. **Ensure it compiles** — run `dotnet build dotnet/`
|
||||
|
||||
5. **Mark complete**:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- feature update <id> --status complete --db porting.db
|
||||
```
|
||||
|
||||
### Implementing a Unit Test
|
||||
|
||||
1. **Read the Go test** — use `test show <id>` to get Go source location.
|
||||
2. **Read the corresponding .NET feature** to understand the API surface.
|
||||
3. **Write the test** in `dotnet/tests/ZB.MOM.NatsNet.Server.Tests/` using xUnit 3 + Shouldly + NSubstitute.
|
||||
4. **Run it**:
|
||||
|
||||
```bash
|
||||
dotnet test --filter "FullyQualifiedName~TestClassName" \
|
||||
dotnet/tests/ZB.MOM.NatsNet.Server.Tests/
|
||||
```
|
||||
|
||||
5. **Mark verified** (if passing):
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- test update <id> --status verified --db porting.db
|
||||
```
|
||||
|
||||
### After Completing Work
|
||||
|
||||
1. Run affected tests to verify nothing broke.
|
||||
2. Update DB status for all items you changed.
|
||||
3. Check what's newly unblocked:
|
||||
|
||||
```bash
|
||||
dotnet run --project tools/NatsNet.PortTracker -- dependency ready --db porting.db
|
||||
```
|
||||
|
||||
4. Generate updated report:
|
||||
|
||||
```bash
|
||||
./reports/generate-report.sh
|
||||
```
|
||||
|
||||
## Go to .NET Translation Reference
|
||||
|
||||
| Go Pattern | .NET Equivalent |
|
||||
|------------|-----------------|
|
||||
| `goroutine` | `Task.Run` or `async`/`await` |
|
||||
| `chan T` | `Channel<T>` |
|
||||
| `select` | `Task.WhenAny` |
|
||||
| `sync.Mutex` | `Lock` (C# 13) |
|
||||
| `sync.RWMutex` | `ReaderWriterLockSlim` |
|
||||
| `sync.WaitGroup` | `Task.WhenAll` or `CountdownEvent` |
|
||||
| `atomic.Int64` | `Interlocked` methods on `long` field |
|
||||
| `context.Context` | `CancellationToken` |
|
||||
| `defer` | `try`/`finally` or `using` |
|
||||
| `error` return | Exceptions or Result pattern |
|
||||
| `[]byte` | `byte[]`, `ReadOnlySpan<byte>`, `ReadOnlyMemory<byte>` |
|
||||
| `map[K]V` | `Dictionary<K,V>` or `ConcurrentDictionary<K,V>` |
|
||||
| `interface{}` | `object` or generics |
|
||||
| `time.Duration` | `TimeSpan` |
|
||||
| `weak.Pointer[T]` | `WeakReference<T>` |
|
||||
29
docs/plans/2026-02-27-agents-md-design.md
Normal file
29
docs/plans/2026-02-27-agents-md-design.md
Normal file
@@ -0,0 +1,29 @@
|
||||
# AGENTS.md Design
|
||||
|
||||
## Purpose
|
||||
|
||||
Create an `AGENTS.md` file at the project root for OpenAI Codex agents working on this codebase. The file provides project context, PortTracker CLI reference, porting workflow guidance, and pointers to .NET coding standards.
|
||||
|
||||
## Target
|
||||
|
||||
OpenAI Codex — follows Codex's AGENTS.md discovery conventions (root-level, markdown format, under 32KB).
|
||||
|
||||
## Structure Decision
|
||||
|
||||
**Flat single-file** at project root. The project information is tightly coupled — PortTracker commands are needed regardless of which directory Codex is editing. A single file keeps everything in context for every session.
|
||||
|
||||
## Sections
|
||||
|
||||
1. **Project Summary** — What the project is, where Go source and .NET code live
|
||||
2. **Folder Layout** — Directory tree with annotations
|
||||
3. **Build and Test** — Commands to build, run unit tests, run filtered tests, run integration tests
|
||||
4. **.NET Coding Standards** — Pointer to `docs/standards/dotnet-standards.md` with critical rules inlined (forbidden packages, naming, testing framework)
|
||||
5. **PortTracker CLI** — Full command reference: querying, updating, audit verification, valid statuses, batch syntax
|
||||
6. **Porting Workflow** — Step-by-step: finding work, implementing features, implementing tests, post-completion checklist
|
||||
7. **Go to .NET Translation Reference** — Quick-reference table for common Go-to-.NET pattern translations
|
||||
|
||||
## Size
|
||||
|
||||
~3.5KB — well within Codex's 32KB default limit.
|
||||
|
||||
<!-- Last verified against codebase: 2026-02-27 -->
|
||||
85
docs/plans/2026-02-27-audit-verified-updates-design.md
Normal file
85
docs/plans/2026-02-27-audit-verified-updates-design.md
Normal file
@@ -0,0 +1,85 @@
|
||||
# Audit-Verified Status Updates Design
|
||||
|
||||
## Goal
|
||||
|
||||
Require audit verification before applying status changes to features or unit tests. When the requested status disagrees with what the Roslyn audit determines, require an explicit override with a comment. Track all overrides in a new table for later review.
|
||||
|
||||
## Architecture
|
||||
|
||||
Inline audit verification: when `feature update`, `feature batch-update`, `test update`, or `test batch-update` runs, build the `SourceIndexer` on the fly, classify each item, and compare. If the requested status doesn't match the audit, block the update unless `--override "comment"` is provided.
|
||||
|
||||
## Override Table Schema
|
||||
|
||||
```sql
|
||||
CREATE TABLE status_overrides (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
table_name TEXT NOT NULL CHECK (table_name IN ('features', 'unit_tests')),
|
||||
item_id INTEGER NOT NULL,
|
||||
audit_status TEXT NOT NULL,
|
||||
audit_reason TEXT NOT NULL,
|
||||
requested_status TEXT NOT NULL,
|
||||
comment TEXT NOT NULL,
|
||||
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
```
|
||||
|
||||
Each row records: which table/item, what the audit said, what the user requested, and their justification.
|
||||
|
||||
## CLI Interface
|
||||
|
||||
### Single update
|
||||
|
||||
```bash
|
||||
# Audit agrees — applied directly
|
||||
dotnet run -- feature update 123 --status verified --db porting.db
|
||||
|
||||
# Audit disagrees — blocked
|
||||
# Error: "Audit classifies feature 123 as 'stub'. Use --override 'reason' to force."
|
||||
|
||||
# Override
|
||||
dotnet run -- feature update 123 --status verified --override "Manual review confirms complete" --db porting.db
|
||||
```
|
||||
|
||||
### Batch update
|
||||
|
||||
```bash
|
||||
# All items agree — applied
|
||||
dotnet run -- feature batch-update --module 5 --set-status verified --execute --db porting.db
|
||||
|
||||
# Some items disagree — blocked
|
||||
# "15 items match audit, 3 require override. Use --override 'reason' to force all."
|
||||
|
||||
# Override entire batch (one comment covers all mismatches)
|
||||
dotnet run -- feature batch-update --module 5 --set-status verified --override "Batch approved" --execute --db porting.db
|
||||
```
|
||||
|
||||
Same interface for `test update` and `test batch-update`.
|
||||
|
||||
## Verification Flow
|
||||
|
||||
1. Build `SourceIndexer` for the appropriate directory (features → `dotnet/src/...`, tests → `dotnet/tests/...`).
|
||||
2. For each item: query its `dotnet_class`, `dotnet_method`, `go_file`, `go_method` from DB. Run `FeatureClassifier.Classify()`.
|
||||
3. Compare requested status vs audit status. Collect mismatches.
|
||||
4. If mismatches and no `--override`: print details and exit with error.
|
||||
5. If `--override` provided: apply all updates. Insert one `status_overrides` row per mismatched item.
|
||||
6. Items that agree with audit: apply normally, no override row logged.
|
||||
|
||||
Items that cannot be audited (no dotnet_class/dotnet_method) are treated as mismatches requiring override.
|
||||
|
||||
## Override Review Command
|
||||
|
||||
```bash
|
||||
dotnet run -- override list --db porting.db
|
||||
dotnet run -- override list --type features --db porting.db
|
||||
```
|
||||
|
||||
Tabular output: id, table, item_id, audit_status, requested_status, comment, date.
|
||||
|
||||
## Changes Required
|
||||
|
||||
1. **porting-schema.sql**: Add `status_overrides` table.
|
||||
2. **FeatureCommands.cs**: Add `--override` option to `update` and `batch-update`. Integrate audit verification before applying.
|
||||
3. **TestCommands.cs**: Same changes as FeatureCommands.
|
||||
4. **New `OverrideCommands.cs`**: `override list` command.
|
||||
5. **Program.cs**: Wire `override` command group.
|
||||
6. **Shared helper**: Extract audit verification logic (build indexer, classify, compare) into a reusable method since both feature and test commands need it.
|
||||
63
docs/plans/2026-02-27-unit-test-audit-design.md
Normal file
63
docs/plans/2026-02-27-unit-test-audit-design.md
Normal file
@@ -0,0 +1,63 @@
|
||||
# Unit Test Audit Extension Design
|
||||
|
||||
## Goal
|
||||
|
||||
Extend the PortTracker `audit` command to classify unit tests (not just features) by inspecting .NET test source code with Roslyn.
|
||||
|
||||
## Architecture
|
||||
|
||||
Parameterize the existing audit pipeline (`AuditCommand` + `SourceIndexer` + `FeatureClassifier`) to support both `features` and `unit_tests` tables. No new files — the same indexer and classifier logic applies to test methods.
|
||||
|
||||
## CLI Interface
|
||||
|
||||
```
|
||||
dotnet run -- audit --type features|tests|all [--source <path>] [--module <id>] [--execute]
|
||||
```
|
||||
|
||||
| Flag | Default (features) | Default (tests) |
|
||||
|------|-------------------|-----------------|
|
||||
| `--type` | `features` | — |
|
||||
| `--source` | `dotnet/src/ZB.MOM.NatsNet.Server` | `dotnet/tests/ZB.MOM.NatsNet.Server.Tests` |
|
||||
| `--output` | `reports/audit-results.csv` | `reports/audit-results-tests.csv` |
|
||||
|
||||
- `--type all` runs both sequentially.
|
||||
- `--source` override works for either type.
|
||||
|
||||
## Changes Required
|
||||
|
||||
### AuditCommand.cs
|
||||
|
||||
1. Add `--type` option with values `features`, `tests`, `all`.
|
||||
2. Thread an `AuditTarget` (table name + default source + default output + display label) through `RunAudit` and `ApplyUpdates`.
|
||||
3. `--type all` calls `RunAudit` twice with different targets.
|
||||
4. `ApplyUpdates` uses the target's table name in UPDATE SQL.
|
||||
|
||||
### FeatureClassifier.cs
|
||||
|
||||
No changes. Same N/A lookup and classification logic applies to unit tests.
|
||||
|
||||
### SourceIndexer.cs
|
||||
|
||||
No changes. Already generic — just pass a different directory path.
|
||||
|
||||
## Pre-audit DB Reset
|
||||
|
||||
Before running the test audit, manually reset deferred tests to `unknown`:
|
||||
|
||||
```sql
|
||||
sqlite3 porting.db "UPDATE unit_tests SET status = 'unknown' WHERE status = 'deferred';"
|
||||
```
|
||||
|
||||
## Execution Sequence
|
||||
|
||||
1. Reset deferred tests: `sqlite3 porting.db "UPDATE unit_tests SET status = 'unknown' WHERE status = 'deferred';"`
|
||||
2. Run audit: `dotnet run -- audit --type tests --db porting.db --execute`
|
||||
3. Verify results and generate report.
|
||||
|
||||
## Classification Behavior for Tests
|
||||
|
||||
Same priority as features:
|
||||
1. **N/A**: Go method matches logging/signal patterns → `n_a`
|
||||
2. **Method found**: Test class + method exists in test project → `verified` or `stub`
|
||||
3. **Class exists, method missing**: → `deferred` ("method not found")
|
||||
4. **Class not found**: → `deferred` ("class not found")
|
||||
@@ -16,6 +16,7 @@
|
||||
using ZB.MOM.NatsNet.Server.Auth;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
using ZB.MOM.NatsNet.Server.Internal.DataStructures;
|
||||
using System.Text;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server;
|
||||
|
||||
@@ -1643,7 +1644,50 @@ public sealed class Account : INatsAccount
|
||||
/// </summary>
|
||||
internal void UpdateLeafNodes(object sub, int delta)
|
||||
{
|
||||
// TODO: session 15 — leaf node subscription propagation.
|
||||
if (delta == 0 || sub is not Subscription s || s.Subject.Length == 0)
|
||||
return;
|
||||
|
||||
var subject = Encoding.UTF8.GetString(s.Subject);
|
||||
var queue = s.Queue is { Length: > 0 } ? Encoding.UTF8.GetString(s.Queue) : string.Empty;
|
||||
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_rm ??= new Dictionary<string, int>(StringComparer.Ordinal);
|
||||
if (!_rm.TryGetValue(subject, out var rc))
|
||||
rc = 0;
|
||||
rc += delta;
|
||||
if (rc <= 0)
|
||||
_rm.Remove(subject);
|
||||
else
|
||||
_rm[subject] = rc;
|
||||
|
||||
if (!string.IsNullOrEmpty(queue))
|
||||
{
|
||||
_lqws ??= new Dictionary<string, int>(StringComparer.Ordinal);
|
||||
var key = $"{subject} {queue}";
|
||||
var qw = s.Qw != 0 ? s.Qw : 1;
|
||||
if (!_lqws.TryGetValue(key, out var qv))
|
||||
qv = 0;
|
||||
qv += delta * qw;
|
||||
if (qv <= 0)
|
||||
_lqws.Remove(key);
|
||||
else
|
||||
_lqws[key] = qv;
|
||||
}
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
|
||||
List<ClientConnection> leafs;
|
||||
_lmu.EnterReadLock();
|
||||
try { leafs = [.. _lleafs]; }
|
||||
finally { _lmu.ExitReadLock(); }
|
||||
|
||||
foreach (var leaf in leafs)
|
||||
leaf.FlushSignal();
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
@@ -15,6 +15,8 @@
|
||||
// in the NATS server Go source.
|
||||
|
||||
using System.Security.Cryptography.X509Certificates;
|
||||
using System.Security.Cryptography;
|
||||
using System.Text;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Auth.Ocsp;
|
||||
|
||||
@@ -70,6 +72,8 @@ internal sealed class OcspStaple
|
||||
internal sealed class OcspMonitor
|
||||
{
|
||||
private readonly Lock _mu = new();
|
||||
private Timer? _timer;
|
||||
private readonly OcspStaple _staple = new();
|
||||
|
||||
/// <summary>Path to the TLS certificate file being monitored.</summary>
|
||||
public string? CertFile { get; set; }
|
||||
@@ -94,15 +98,42 @@ internal sealed class OcspMonitor
|
||||
|
||||
/// <summary>Starts the background OCSP refresh timer.</summary>
|
||||
public void Start()
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_timer != null)
|
||||
return;
|
||||
|
||||
_timer = new Timer(_ =>
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (!string.IsNullOrEmpty(OcspStapleFile) && File.Exists(OcspStapleFile))
|
||||
_staple.Response = File.ReadAllBytes(OcspStapleFile);
|
||||
_staple.NextUpdate = DateTime.UtcNow + CheckInterval;
|
||||
}
|
||||
}, null, TimeSpan.Zero, CheckInterval);
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Stops the background OCSP refresh timer.</summary>
|
||||
public void Stop()
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_timer?.Dispose();
|
||||
_timer = null;
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Returns the current cached OCSP staple bytes, or <c>null</c> if none.</summary>
|
||||
public byte[]? GetStaple()
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
return _staple.Response == null ? null : [.. _staple.Response];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
@@ -122,15 +153,105 @@ public interface IOcspResponseCache
|
||||
void Remove(string key);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Runtime counters for OCSP response cache behavior.
|
||||
/// Mirrors Go <c>OCSPResponseCacheStats</c> shape.
|
||||
/// </summary>
|
||||
public sealed class OcspResponseCacheStats
|
||||
{
|
||||
public long Responses { get; set; }
|
||||
public long Hits { get; set; }
|
||||
public long Misses { get; set; }
|
||||
public long Revokes { get; set; }
|
||||
public long Goods { get; set; }
|
||||
public long Unknowns { get; set; }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// A no-op OCSP cache that never stores anything.
|
||||
/// Mirrors Go <c>NoOpCache</c> in server/ocsp_responsecache.go.
|
||||
/// </summary>
|
||||
internal sealed class NoOpCache : IOcspResponseCache
|
||||
{
|
||||
public byte[]? Get(string key) => null;
|
||||
public void Put(string key, byte[] response) { }
|
||||
public void Remove(string key) { }
|
||||
private readonly Lock _mu = new();
|
||||
private readonly OcspResponseCacheConfig _config;
|
||||
private OcspResponseCacheStats? _stats;
|
||||
private bool _online;
|
||||
|
||||
public NoOpCache()
|
||||
: this(new OcspResponseCacheConfig { Type = "none" })
|
||||
{
|
||||
}
|
||||
|
||||
public NoOpCache(OcspResponseCacheConfig config)
|
||||
{
|
||||
_config = config;
|
||||
}
|
||||
|
||||
public byte[]? Get(string key) => null;
|
||||
|
||||
public void Put(string key, byte[] response) { }
|
||||
|
||||
public void Remove(string key) => Delete(key);
|
||||
|
||||
public void Delete(string key)
|
||||
{
|
||||
_ = key;
|
||||
}
|
||||
|
||||
public void Start(NatsServer? server = null)
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_stats = new OcspResponseCacheStats();
|
||||
_online = true;
|
||||
}
|
||||
}
|
||||
|
||||
public void Stop(NatsServer? server = null)
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_online = false;
|
||||
}
|
||||
}
|
||||
|
||||
public bool Online()
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
return _online;
|
||||
}
|
||||
}
|
||||
|
||||
public string Type() => "none";
|
||||
|
||||
public OcspResponseCacheConfig Config()
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
return _config;
|
||||
}
|
||||
}
|
||||
|
||||
public OcspResponseCacheStats? Stats()
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_stats is null)
|
||||
return null;
|
||||
|
||||
return new OcspResponseCacheStats
|
||||
{
|
||||
Responses = _stats.Responses,
|
||||
Hits = _stats.Hits,
|
||||
Misses = _stats.Misses,
|
||||
Revokes = _stats.Revokes,
|
||||
Goods = _stats.Goods,
|
||||
Unknowns = _stats.Unknowns,
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
@@ -148,13 +269,35 @@ internal sealed class LocalDirCache : IOcspResponseCache
|
||||
}
|
||||
|
||||
public byte[]? Get(string key)
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
var file = CacheFilePath(key);
|
||||
if (!File.Exists(file))
|
||||
return null;
|
||||
return File.ReadAllBytes(file);
|
||||
}
|
||||
|
||||
public void Put(string key, byte[] response)
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
ArgumentException.ThrowIfNullOrEmpty(key);
|
||||
ArgumentNullException.ThrowIfNull(response);
|
||||
|
||||
Directory.CreateDirectory(_dir);
|
||||
File.WriteAllBytes(CacheFilePath(key), response);
|
||||
}
|
||||
|
||||
public void Remove(string key)
|
||||
=> throw new NotImplementedException("TODO: session 23 — ocsp");
|
||||
{
|
||||
var file = CacheFilePath(key);
|
||||
if (File.Exists(file))
|
||||
File.Delete(file);
|
||||
}
|
||||
|
||||
private string CacheFilePath(string key)
|
||||
{
|
||||
var hash = SHA256.HashData(Encoding.UTF8.GetBytes(key));
|
||||
var file = Convert.ToHexString(hash).ToLowerInvariant();
|
||||
return Path.Combine(_dir, $"{file}.ocsp");
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
|
||||
@@ -19,6 +19,7 @@ using System.Net.Sockets;
|
||||
using System.Runtime.CompilerServices;
|
||||
using System.Security.Cryptography.X509Certificates;
|
||||
using System.Text;
|
||||
using System.Text.Json;
|
||||
using Microsoft.Extensions.Logging;
|
||||
using ZB.MOM.NatsNet.Server.Auth;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
@@ -166,6 +167,7 @@ public sealed partial class ClientConnection
|
||||
private Timer? _atmr; // auth timer
|
||||
private Timer? _pingTimer;
|
||||
private Timer? _tlsTo;
|
||||
private Timer? _expTimer;
|
||||
|
||||
// Ping state.
|
||||
private int _pingOut; // outstanding pings
|
||||
@@ -655,12 +657,25 @@ public sealed partial class ClientConnection
|
||||
|
||||
internal void SetExpirationTimer(TimeSpan d)
|
||||
{
|
||||
// TODO: Implement when Server is available (session 09).
|
||||
lock (_mu)
|
||||
{
|
||||
SetExpirationTimerUnlocked(d);
|
||||
}
|
||||
}
|
||||
|
||||
internal void SetExpirationTimerUnlocked(TimeSpan d)
|
||||
{
|
||||
// TODO: Implement when Server is available (session 09).
|
||||
var prev = Interlocked.Exchange(ref _expTimer, null);
|
||||
prev?.Dispose();
|
||||
|
||||
if (d <= TimeSpan.Zero)
|
||||
{
|
||||
ClaimExpiration();
|
||||
return;
|
||||
}
|
||||
|
||||
Expires = DateTime.UtcNow + d;
|
||||
_expTimer = new Timer(_ => ClaimExpiration(), null, d, Timeout.InfiniteTimeSpan);
|
||||
}
|
||||
|
||||
// =========================================================================
|
||||
@@ -885,7 +900,17 @@ public sealed partial class ClientConnection
|
||||
|
||||
internal void SetPingTimer()
|
||||
{
|
||||
// TODO: Implement when Server is available.
|
||||
var interval = Server?.Options.PingInterval ?? TimeSpan.FromMinutes(2);
|
||||
if (interval <= TimeSpan.Zero)
|
||||
return;
|
||||
|
||||
ClearPingTimer();
|
||||
_pingTimer = new Timer(_ =>
|
||||
{
|
||||
if (IsClosed())
|
||||
return;
|
||||
SendPing();
|
||||
}, null, interval, interval);
|
||||
}
|
||||
|
||||
internal void ClearPingTimer()
|
||||
@@ -902,7 +927,10 @@ public sealed partial class ClientConnection
|
||||
|
||||
internal void SetAuthTimer()
|
||||
{
|
||||
// TODO: Implement when Server is available.
|
||||
var timeout = Server?.Options.AuthTimeout ?? 0;
|
||||
if (timeout <= 0)
|
||||
return;
|
||||
SetAuthTimer(TimeSpan.FromSeconds(timeout));
|
||||
}
|
||||
|
||||
internal void ClearAuthTimer()
|
||||
@@ -916,7 +944,7 @@ public sealed partial class ClientConnection
|
||||
|
||||
internal void ClaimExpiration()
|
||||
{
|
||||
// TODO: Implement when Server is available.
|
||||
AuthExpired();
|
||||
}
|
||||
|
||||
// =========================================================================
|
||||
@@ -925,7 +953,7 @@ public sealed partial class ClientConnection
|
||||
|
||||
internal void FlushSignal()
|
||||
{
|
||||
// TODO: Signal the writeLoop via SemaphoreSlim/Monitor when ported.
|
||||
FlushClients(0);
|
||||
}
|
||||
|
||||
internal void EnqueueProtoAndFlush(ReadOnlySpan<byte> proto)
|
||||
@@ -990,7 +1018,12 @@ public sealed partial class ClientConnection
|
||||
internal void TraceInOp(string op, byte[] arg) { if (Trace) TraceOp("<", op, arg); }
|
||||
internal void TraceOutOp(string op, byte[] arg) { if (Trace) TraceOp(">", op, arg); }
|
||||
|
||||
private void TraceMsgInternal(byte[] msg, bool inbound, bool delivery) { }
|
||||
private void TraceMsgInternal(byte[] msg, bool inbound, bool delivery)
|
||||
{
|
||||
var dir = inbound ? "<" : ">";
|
||||
var marker = delivery ? "[DELIVER]" : "[MSG]";
|
||||
Tracef("{0} {1} {2}", dir, marker, Encoding.UTF8.GetString(msg));
|
||||
}
|
||||
private void TraceOp(string dir, string op, byte[] arg)
|
||||
{
|
||||
Tracef("%s %s %s", dir, op, arg is not null ? Encoding.UTF8.GetString(arg) : string.Empty);
|
||||
@@ -1112,9 +1145,18 @@ public sealed partial class ClientConnection
|
||||
// =========================================================================
|
||||
|
||||
// features 425-427: writeLoop / flushClients / readLoop
|
||||
internal void WriteLoop() { /* TODO session 09 */ }
|
||||
internal void FlushClients(long budget) { /* TODO session 09 */ }
|
||||
internal void ReadLoop(byte[]? pre) { /* TODO session 09 */ }
|
||||
internal void WriteLoop() => FlushClients(long.MaxValue);
|
||||
internal void FlushClients(long budget)
|
||||
{
|
||||
try { _nc?.Flush(); }
|
||||
catch { /* no-op for now */ }
|
||||
}
|
||||
internal void ReadLoop(byte[]? pre)
|
||||
{
|
||||
LastIn = DateTime.UtcNow;
|
||||
if (pre is { Length: > 0 })
|
||||
TraceInOp("PRE", pre);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Generates the INFO JSON bytes sent to the client on connect.
|
||||
@@ -1128,15 +1170,33 @@ public sealed partial class ClientConnection
|
||||
/// Sets the auth-timeout timer to the specified duration.
|
||||
/// Mirrors Go <c>client.setAuthTimer(d)</c>.
|
||||
/// </summary>
|
||||
internal void SetAuthTimer(TimeSpan d) { /* TODO session 09 */ }
|
||||
internal void SetAuthTimer(TimeSpan d)
|
||||
{
|
||||
var prev = Interlocked.Exchange(ref _atmr, null);
|
||||
prev?.Dispose();
|
||||
if (d <= TimeSpan.Zero)
|
||||
return;
|
||||
_atmr = new Timer(_ => AuthTimeout(), null, d, Timeout.InfiniteTimeSpan);
|
||||
}
|
||||
|
||||
// features 428-432: closedStateForErr, collapsePtoNB, flushOutbound, handleWriteTimeout, markConnAsClosed
|
||||
internal static ClosedState ClosedStateForErr(Exception err) =>
|
||||
err is EndOfStreamException ? ClosedState.ClientClosed : ClosedState.ReadError;
|
||||
|
||||
// features 440-441: processInfo, processErr
|
||||
internal void ProcessInfo(string info) { /* TODO session 09 */ }
|
||||
internal void ProcessErr(string err) { /* TODO session 09 */ }
|
||||
internal void ProcessInfo(string info)
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(info))
|
||||
return;
|
||||
Debugf("INFO {0}", info);
|
||||
}
|
||||
internal void ProcessErr(string err)
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(err))
|
||||
return;
|
||||
SetAuthError(new InvalidOperationException(err));
|
||||
Errorf("-ERR {0}", err);
|
||||
}
|
||||
|
||||
// features 442-443: removeSecretsFromTrace, redact
|
||||
// Delegates to ServerLogging.RemoveSecretsFromTrace (the real implementation lives there).
|
||||
@@ -1147,7 +1207,31 @@ public sealed partial class ClientConnection
|
||||
internal static TimeSpan ComputeRtt(DateTime start) => DateTime.UtcNow - start;
|
||||
|
||||
// feature 445: processConnect
|
||||
internal void ProcessConnect(byte[] arg) { /* TODO session 09 */ }
|
||||
internal void ProcessConnect(byte[] arg)
|
||||
{
|
||||
if (arg == null || arg.Length == 0)
|
||||
return;
|
||||
|
||||
try
|
||||
{
|
||||
var parsed = JsonSerializer.Deserialize<ClientOptions>(arg);
|
||||
if (parsed != null)
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
Opts = parsed;
|
||||
Echo = parsed.Echo;
|
||||
Headers = parsed.Headers;
|
||||
Flags |= ClientFlags.ConnectReceived;
|
||||
}
|
||||
}
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
SetAuthError(ex);
|
||||
Errorf("CONNECT parse failed: {0}", ex.Message);
|
||||
}
|
||||
}
|
||||
|
||||
// feature 467-468: processPing, processPong
|
||||
internal void ProcessPing()
|
||||
@@ -1156,10 +1240,19 @@ public sealed partial class ClientConnection
|
||||
SendPong();
|
||||
}
|
||||
|
||||
internal void ProcessPong() { /* TODO */ }
|
||||
internal void ProcessPong()
|
||||
{
|
||||
Rtt = ComputeRtt(RttStart);
|
||||
_pingOut = 0;
|
||||
}
|
||||
|
||||
// feature 469: updateS2AutoCompressionLevel
|
||||
internal void UpdateS2AutoCompressionLevel() { /* TODO */ }
|
||||
internal void UpdateS2AutoCompressionLevel()
|
||||
{
|
||||
// Placeholder for adaptive compression tuning; keep no-op semantics for now.
|
||||
if (_pingOut < 0)
|
||||
_pingOut = 0;
|
||||
}
|
||||
|
||||
// features 471-486: processPub variants, parseSub, processSub, etc.
|
||||
// Implemented in full when Server+Account sessions complete.
|
||||
|
||||
@@ -25,6 +25,18 @@ public static class AccessTimeService
|
||||
// Mirror Go's init(): nothing to pre-allocate in .NET.
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Explicit init hook for Go parity.
|
||||
/// Mirrors package <c>init()</c> in server/ats/ats.go.
|
||||
/// This method is intentionally idempotent.
|
||||
/// </summary>
|
||||
public static void Init()
|
||||
{
|
||||
// Ensure a non-zero cached timestamp is present.
|
||||
var now = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() * 1_000_000L;
|
||||
Interlocked.CompareExchange(ref _utime, now, 0);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Registers a user. Starts the background timer when the first registrant calls this.
|
||||
/// Each call to <see cref="Register"/> must be paired with a call to <see cref="Unregister"/>.
|
||||
|
||||
@@ -74,6 +74,21 @@ public sealed class StreamDeletionMeta
|
||||
return false;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Tries to get the pending entry for <paramref name="seq"/>.
|
||||
/// </summary>
|
||||
public bool TryGetPending(ulong seq, out SdmBySeq entry) => _pending.TryGetValue(seq, out entry);
|
||||
|
||||
/// <summary>
|
||||
/// Sets the pending entry for <paramref name="seq"/>.
|
||||
/// </summary>
|
||||
public void SetPending(ulong seq, SdmBySeq entry) => _pending[seq] = entry;
|
||||
|
||||
/// <summary>
|
||||
/// Returns the pending count for <paramref name="subj"/>, or 0 if not tracked.
|
||||
/// </summary>
|
||||
public ulong GetSubjectTotal(string subj) => _totals.TryGetValue(subj, out var cnt) ? cnt : 0;
|
||||
|
||||
/// <summary>
|
||||
/// Clears all tracked data.
|
||||
/// Mirrors <c>SDMMeta.empty</c>.
|
||||
|
||||
@@ -1096,6 +1096,14 @@ public sealed class SubscriptionIndex
|
||||
return false;
|
||||
}
|
||||
|
||||
// Write lock must be held.
|
||||
private Exception? AddInsertNotify(string subject, Action<bool> notify)
|
||||
=> AddNotify(_notify!.Insert, subject, notify);
|
||||
|
||||
// Write lock must be held.
|
||||
private Exception? AddRemoveNotify(string subject, Action<bool> notify)
|
||||
=> AddNotify(_notify!.Remove, subject, notify);
|
||||
|
||||
private static Exception? AddNotify(Dictionary<string, List<Action<bool>>> m, string subject, Action<bool> notify)
|
||||
{
|
||||
if (m.TryGetValue(subject, out var chs))
|
||||
@@ -1531,6 +1539,9 @@ public sealed class SubscriptionIndex
|
||||
public List<Subscription>? PList;
|
||||
public SublistLevel? Next;
|
||||
|
||||
/// <summary>Factory method matching Go's <c>newNode()</c>.</summary>
|
||||
public static SublistNode NewNode() => new();
|
||||
|
||||
public bool IsEmpty()
|
||||
{
|
||||
return PSubs.Count == 0 && (QSubs == null || QSubs.Count == 0) &&
|
||||
@@ -1544,6 +1555,9 @@ public sealed class SubscriptionIndex
|
||||
public SublistNode? Pwc;
|
||||
public SublistNode? Fwc;
|
||||
|
||||
/// <summary>Factory method matching Go's <c>newLevel()</c>.</summary>
|
||||
public static SublistLevel NewLevel() => new();
|
||||
|
||||
public int NumNodes()
|
||||
{
|
||||
var num = Nodes.Count;
|
||||
|
||||
@@ -40,6 +40,24 @@ public sealed class IpQueue<T>
|
||||
/// <summary>Default maximum size of the recycled backing-list capacity.</summary>
|
||||
public const int DefaultMaxRecycleSize = 4 * 1024;
|
||||
|
||||
/// <summary>
|
||||
/// Functional option type used by <see cref="NewIPQueue"/>.
|
||||
/// Mirrors Go <c>ipQueueOpt</c>.
|
||||
/// </summary>
|
||||
public delegate void IpQueueOption(IpQueueOptions options);
|
||||
|
||||
/// <summary>
|
||||
/// Option bag used by <see cref="NewIPQueue"/>.
|
||||
/// Mirrors Go <c>ipQueueOpts</c>.
|
||||
/// </summary>
|
||||
public sealed class IpQueueOptions
|
||||
{
|
||||
public int MaxRecycleSize { get; set; } = DefaultMaxRecycleSize;
|
||||
public Func<T, ulong>? SizeCalc { get; set; }
|
||||
public ulong MaxSize { get; set; }
|
||||
public int MaxLen { get; set; }
|
||||
}
|
||||
|
||||
private long _inprogress;
|
||||
private readonly object _lock = new();
|
||||
|
||||
@@ -68,6 +86,56 @@ public sealed class IpQueue<T>
|
||||
/// <summary>Notification channel reader — wait on this to learn items were added.</summary>
|
||||
public ChannelReader<bool> Ch => _ch.Reader;
|
||||
|
||||
/// <summary>
|
||||
/// Option helper that configures maximum recycled backing-list size.
|
||||
/// Mirrors Go <c>ipqMaxRecycleSize</c>.
|
||||
/// </summary>
|
||||
public static IpQueueOption IpqMaxRecycleSize(int max) =>
|
||||
options => options.MaxRecycleSize = max;
|
||||
|
||||
/// <summary>
|
||||
/// Option helper that enables size accounting for queue elements.
|
||||
/// Mirrors Go <c>ipqSizeCalculation</c>.
|
||||
/// </summary>
|
||||
public static IpQueueOption IpqSizeCalculation(Func<T, ulong> calc) =>
|
||||
options => options.SizeCalc = calc;
|
||||
|
||||
/// <summary>
|
||||
/// Option helper that limits queue pushes by total accounted size.
|
||||
/// Mirrors Go <c>ipqLimitBySize</c>.
|
||||
/// </summary>
|
||||
public static IpQueueOption IpqLimitBySize(ulong max) =>
|
||||
options => options.MaxSize = max;
|
||||
|
||||
/// <summary>
|
||||
/// Option helper that limits queue pushes by element count.
|
||||
/// Mirrors Go <c>ipqLimitByLen</c>.
|
||||
/// </summary>
|
||||
public static IpQueueOption IpqLimitByLen(int max) =>
|
||||
options => options.MaxLen = max;
|
||||
|
||||
/// <summary>
|
||||
/// Factory wrapper for Go parity.
|
||||
/// Mirrors <c>newIPQueue</c>.
|
||||
/// </summary>
|
||||
public static IpQueue<T> NewIPQueue(
|
||||
string name,
|
||||
ConcurrentDictionary<string, object>? registry = null,
|
||||
params IpQueueOption[] options)
|
||||
{
|
||||
var opts = new IpQueueOptions();
|
||||
foreach (var option in options)
|
||||
option(opts);
|
||||
|
||||
return new IpQueue<T>(
|
||||
name,
|
||||
registry,
|
||||
opts.MaxRecycleSize,
|
||||
opts.SizeCalc,
|
||||
opts.MaxSize,
|
||||
opts.MaxLen);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Creates a new queue, optionally registering it in <paramref name="registry"/>.
|
||||
/// Mirrors <c>newIPQueue</c>.
|
||||
|
||||
@@ -38,6 +38,12 @@ public sealed class RateCounter
|
||||
Interval = TimeSpan.FromSeconds(1);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Factory wrapper for Go parity.
|
||||
/// Mirrors <c>newRateCounter</c>.
|
||||
/// </summary>
|
||||
public static RateCounter NewRateCounter(long limit) => new(limit);
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if the event is within the rate limit for the current window.
|
||||
/// Mirrors <c>rateCounter.allow</c>.
|
||||
|
||||
@@ -14,6 +14,8 @@
|
||||
// Adapted from server/util.go in the NATS server Go source.
|
||||
|
||||
using System.Net;
|
||||
using System.Text;
|
||||
using System.Text.Json;
|
||||
using System.Text.RegularExpressions;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Internal;
|
||||
@@ -268,6 +270,25 @@ public static class ServerUtilities
|
||||
return client;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Parity wrapper for Go <c>natsDialTimeout</c>.
|
||||
/// Accepts a network label (tcp/tcp4/tcp6) and host:port address.
|
||||
/// </summary>
|
||||
public static Task<System.Net.Sockets.TcpClient> NatsDialTimeout(
|
||||
string network, string address, TimeSpan timeout)
|
||||
{
|
||||
if (!string.Equals(network, "tcp", StringComparison.OrdinalIgnoreCase) &&
|
||||
!string.Equals(network, "tcp4", StringComparison.OrdinalIgnoreCase) &&
|
||||
!string.Equals(network, "tcp6", StringComparison.OrdinalIgnoreCase))
|
||||
throw new NotSupportedException($"unsupported network: {network}");
|
||||
|
||||
var (host, port, err) = ParseHostPort(address, defaultPort: 0);
|
||||
if (err != null || port <= 0)
|
||||
throw new InvalidOperationException($"invalid dial address: {address}", err);
|
||||
|
||||
return NatsDialTimeoutAsync(host, port, timeout);
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// URL redaction
|
||||
// -------------------------------------------------------------------------
|
||||
@@ -337,6 +358,54 @@ public static class ServerUtilities
|
||||
return result;
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// RefCountedUrlSet wrappers (Go parity mapping)
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Parity wrapper for <see cref="RefCountedUrlSet.AddUrl"/>.
|
||||
/// Mirrors <c>refCountedUrlSet.addUrl</c>.
|
||||
/// </summary>
|
||||
public static bool AddUrl(RefCountedUrlSet urlSet, string urlStr)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(urlSet);
|
||||
return urlSet.AddUrl(urlStr);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Parity wrapper for <see cref="RefCountedUrlSet.RemoveUrl"/>.
|
||||
/// Mirrors <c>refCountedUrlSet.removeUrl</c>.
|
||||
/// </summary>
|
||||
public static bool RemoveUrl(RefCountedUrlSet urlSet, string urlStr)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(urlSet);
|
||||
return urlSet.RemoveUrl(urlStr);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Parity wrapper for <see cref="RefCountedUrlSet.GetAsStringSlice"/>.
|
||||
/// Mirrors <c>refCountedUrlSet.getAsStringSlice</c>.
|
||||
/// </summary>
|
||||
public static string[] GetAsStringSlice(RefCountedUrlSet urlSet)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(urlSet);
|
||||
return urlSet.GetAsStringSlice();
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// INFO helpers
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Serialises <paramref name="info"/> into an INFO line (<c>INFO {...}\r\n</c>).
|
||||
/// Mirrors <c>generateInfoJSON</c>.
|
||||
/// </summary>
|
||||
public static byte[] GenerateInfoJSON(global::ZB.MOM.NatsNet.Server.ServerInfo info)
|
||||
{
|
||||
var json = JsonSerializer.Serialize(info);
|
||||
return Encoding.UTF8.GetBytes($"INFO {json}\r\n");
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Copy helpers
|
||||
// -------------------------------------------------------------------------
|
||||
@@ -391,6 +460,13 @@ public static class ServerUtilities
|
||||
|
||||
return channel.Writer;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Parity wrapper for <see cref="CreateParallelTaskQueue"/>.
|
||||
/// Mirrors <c>parallelTaskQueue</c>.
|
||||
/// </summary>
|
||||
public static System.Threading.Channels.ChannelWriter<Action> ParallelTaskQueue(int maxParallelism = 0) =>
|
||||
CreateParallelTaskQueue(maxParallelism);
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
|
||||
@@ -15,6 +15,7 @@
|
||||
|
||||
using System.Diagnostics;
|
||||
using System.Runtime.InteropServices;
|
||||
using System.Text;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
@@ -25,7 +26,16 @@ namespace ZB.MOM.NatsNet.Server.Internal;
|
||||
/// </summary>
|
||||
public static class SignalHandler
|
||||
{
|
||||
private const string ResolvePidError = "unable to resolve pid, try providing one";
|
||||
private static string _processName = "nats-server";
|
||||
internal static Func<List<int>> ResolvePidsHandler { get; set; } = ResolvePids;
|
||||
internal static Func<int, UnixSignal, Exception?> SendSignalHandler { get; set; } = SendSignal;
|
||||
|
||||
internal static void ResetTestHooks()
|
||||
{
|
||||
ResolvePidsHandler = ResolvePids;
|
||||
SendSignalHandler = SendSignal;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Sets the process name used for resolving PIDs.
|
||||
@@ -46,25 +56,67 @@ public static class SignalHandler
|
||||
|
||||
try
|
||||
{
|
||||
List<int> pids;
|
||||
if (string.IsNullOrEmpty(pidExpr))
|
||||
var pids = new List<int>(1);
|
||||
var pidStr = pidExpr.TrimEnd('*');
|
||||
var isGlob = pidExpr.EndsWith('*');
|
||||
|
||||
if (!string.IsNullOrEmpty(pidStr))
|
||||
{
|
||||
pids = ResolvePids();
|
||||
if (pids.Count == 0)
|
||||
return new InvalidOperationException("no nats-server processes found");
|
||||
}
|
||||
else
|
||||
{
|
||||
if (int.TryParse(pidExpr, out var pid))
|
||||
pids = [pid];
|
||||
else
|
||||
return new InvalidOperationException($"invalid pid: {pidExpr}");
|
||||
if (!int.TryParse(pidStr, out var pid))
|
||||
return new InvalidOperationException($"invalid pid: {pidStr}");
|
||||
pids.Add(pid);
|
||||
}
|
||||
|
||||
var signal = CommandToUnixSignal(command);
|
||||
if (string.IsNullOrEmpty(pidStr) || isGlob)
|
||||
pids = ResolvePidsHandler();
|
||||
|
||||
if (pids.Count > 1 && !isGlob)
|
||||
{
|
||||
var sb = new StringBuilder($"multiple {_processName} processes running:");
|
||||
foreach (var p in pids)
|
||||
sb.Append('\n').Append(p);
|
||||
return new InvalidOperationException(sb.ToString());
|
||||
}
|
||||
|
||||
if (pids.Count == 0)
|
||||
return new InvalidOperationException($"no {_processName} processes running");
|
||||
|
||||
UnixSignal signal;
|
||||
try
|
||||
{
|
||||
signal = CommandToUnixSignal(command);
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
return ex;
|
||||
}
|
||||
|
||||
var errBuilder = new StringBuilder();
|
||||
foreach (var pid in pids)
|
||||
Process.GetProcessById(pid).Kill(signal == UnixSignal.SigKill);
|
||||
{
|
||||
var pidText = pid.ToString();
|
||||
if (pidStr.Length > 0 && pidText != pidStr)
|
||||
{
|
||||
if (!isGlob || !pidText.StartsWith(pidStr, StringComparison.Ordinal))
|
||||
continue;
|
||||
}
|
||||
|
||||
var err = SendSignalHandler(pid, signal);
|
||||
if (err != null)
|
||||
{
|
||||
errBuilder
|
||||
.Append('\n')
|
||||
.Append("signal \"")
|
||||
.Append(CommandToString(command))
|
||||
.Append("\" ")
|
||||
.Append(pid)
|
||||
.Append(": ")
|
||||
.Append(err.Message);
|
||||
}
|
||||
}
|
||||
|
||||
if (errBuilder.Length > 0)
|
||||
return new InvalidOperationException(errBuilder.ToString());
|
||||
|
||||
return null;
|
||||
}
|
||||
@@ -80,7 +132,7 @@ public static class SignalHandler
|
||||
/// </summary>
|
||||
public static List<int> ResolvePids()
|
||||
{
|
||||
var pids = new List<int>();
|
||||
var pids = new List<int>(8);
|
||||
try
|
||||
{
|
||||
var psi = new ProcessStartInfo("pgrep", _processName)
|
||||
@@ -90,22 +142,33 @@ public static class SignalHandler
|
||||
CreateNoWindow = true,
|
||||
};
|
||||
using var proc = Process.Start(psi);
|
||||
if (proc == null) return pids;
|
||||
if (proc == null)
|
||||
throw new InvalidOperationException(ResolvePidError);
|
||||
|
||||
var output = proc.StandardOutput.ReadToEnd();
|
||||
proc.WaitForExit();
|
||||
if (proc.ExitCode != 0)
|
||||
return pids;
|
||||
|
||||
var currentPid = Environment.ProcessId;
|
||||
foreach (var line in output.Split('\n', StringSplitOptions.RemoveEmptyEntries))
|
||||
{
|
||||
if (int.TryParse(line.Trim(), out var pid) && pid != currentPid)
|
||||
if (!int.TryParse(line.Trim(), out var pid))
|
||||
throw new InvalidOperationException(ResolvePidError);
|
||||
|
||||
if (pid != currentPid)
|
||||
pids.Add(pid);
|
||||
}
|
||||
}
|
||||
catch (InvalidOperationException ex) when (ex.Message == ResolvePidError)
|
||||
{
|
||||
throw;
|
||||
}
|
||||
catch
|
||||
{
|
||||
// pgrep not available or failed
|
||||
throw new InvalidOperationException(ResolvePidError);
|
||||
}
|
||||
|
||||
return pids;
|
||||
}
|
||||
|
||||
@@ -119,7 +182,39 @@ public static class SignalHandler
|
||||
ServerCommand.Quit => UnixSignal.SigInt,
|
||||
ServerCommand.Reopen => UnixSignal.SigUsr1,
|
||||
ServerCommand.Reload => UnixSignal.SigHup,
|
||||
_ => throw new ArgumentOutOfRangeException(nameof(command), $"unknown command: {command}"),
|
||||
ServerCommand.LameDuckMode => UnixSignal.SigUsr2,
|
||||
ServerCommand.Term => UnixSignal.SigTerm,
|
||||
_ => throw new ArgumentOutOfRangeException(nameof(command), $"unknown signal \"{CommandToString(command)}\""),
|
||||
};
|
||||
|
||||
/// <summary>
|
||||
/// Go parity alias for <see cref="CommandToUnixSignal"/>.
|
||||
/// Mirrors <c>CommandToSignal</c> in signal.go.
|
||||
/// </summary>
|
||||
public static UnixSignal CommandToSignal(ServerCommand command) => CommandToUnixSignal(command);
|
||||
|
||||
private static Exception? SendSignal(int pid, UnixSignal signal)
|
||||
{
|
||||
try
|
||||
{
|
||||
Process.GetProcessById(pid).Kill(signal == UnixSignal.SigKill);
|
||||
return null;
|
||||
}
|
||||
catch (Exception ex)
|
||||
{
|
||||
return ex;
|
||||
}
|
||||
}
|
||||
|
||||
private static string CommandToString(ServerCommand command) => command switch
|
||||
{
|
||||
ServerCommand.Stop => "stop",
|
||||
ServerCommand.Quit => "quit",
|
||||
ServerCommand.Reopen => "reopen",
|
||||
ServerCommand.Reload => "reload",
|
||||
ServerCommand.LameDuckMode => "ldm",
|
||||
ServerCommand.Term => "term",
|
||||
_ => command.ToString().ToLowerInvariant(),
|
||||
};
|
||||
|
||||
/// <summary>
|
||||
|
||||
@@ -243,6 +243,51 @@ public sealed class SubjectTransform : ISubjectTransformer
|
||||
public static (SubjectTransform? transform, Exception? err) NewStrict(string src, string dest) =>
|
||||
NewWithStrict(src, dest, true);
|
||||
|
||||
/// <summary>
|
||||
/// Validates a subject mapping destination. Checks each token for valid syntax,
|
||||
/// validates mustache-style mapping functions against known regexes, then verifies
|
||||
/// the full transform can be created. Mirrors Go's <c>ValidateMapping</c>.
|
||||
/// </summary>
|
||||
public static Exception? ValidateMapping(string src, string dest)
|
||||
{
|
||||
if (string.IsNullOrEmpty(dest))
|
||||
return null;
|
||||
|
||||
bool sfwc = false;
|
||||
foreach (var t in dest.Split(SubjectTokens.Btsep))
|
||||
{
|
||||
var length = t.Length;
|
||||
if (length == 0 || sfwc)
|
||||
return new MappingDestinationException(t, ServerErrors.ErrInvalidMappingDestinationSubject);
|
||||
|
||||
// If it looks like a mapping function, validate against known patterns.
|
||||
if (length > 4 && t[0] == '{' && t[1] == '{' && t[length - 2] == '}' && t[length - 1] == '}')
|
||||
{
|
||||
if (!PartitionRe.IsMatch(t) &&
|
||||
!WildcardRe.IsMatch(t) &&
|
||||
!SplitFromLeftRe.IsMatch(t) &&
|
||||
!SplitFromRightRe.IsMatch(t) &&
|
||||
!SliceFromLeftRe.IsMatch(t) &&
|
||||
!SliceFromRightRe.IsMatch(t) &&
|
||||
!SplitRe.IsMatch(t) &&
|
||||
!RandomRe.IsMatch(t))
|
||||
{
|
||||
return new MappingDestinationException(t, ServerErrors.ErrUnknownMappingDestinationFunction);
|
||||
}
|
||||
continue;
|
||||
}
|
||||
|
||||
if (length == 1 && t[0] == SubjectTokens.Fwc)
|
||||
sfwc = true;
|
||||
else if (t.AsSpan().ContainsAny("\t\n\f\r "))
|
||||
return ServerErrors.ErrInvalidMappingDestinationSubject;
|
||||
}
|
||||
|
||||
// Verify that the transform can actually be created.
|
||||
var (_, err) = New(src, dest);
|
||||
return err;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Attempts to match a published subject against the source pattern.
|
||||
/// Returns the transformed subject or an error.
|
||||
|
||||
@@ -13,6 +13,7 @@
|
||||
//
|
||||
// Adapted from server/filestore.go (fileStore struct and methods)
|
||||
|
||||
using System.Text.Json;
|
||||
using System.Threading.Channels;
|
||||
using ZB.MOM.NatsNet.Server.Internal.DataStructures;
|
||||
|
||||
@@ -100,6 +101,10 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
// Last PurgeEx call time (for throttle logic)
|
||||
private DateTime _lpex;
|
||||
|
||||
// In this incremental port stage, file-store logic delegates core stream semantics
|
||||
// to the memory store implementation while file-specific APIs are added on top.
|
||||
private readonly JetStreamMemStore _memStore;
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Constructor
|
||||
// -----------------------------------------------------------------------
|
||||
@@ -135,6 +140,10 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
_bim = new Dictionary<uint, MessageBlock>();
|
||||
_qch = Channel.CreateUnbounded<byte>();
|
||||
_fsld = Channel.CreateUnbounded<byte>();
|
||||
|
||||
var memCfg = cfg.Config.Clone();
|
||||
memCfg.Storage = StorageType.MemoryStorage;
|
||||
_memStore = new JetStreamMemStore(memCfg);
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
@@ -146,52 +155,11 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public StreamState State()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
// Return a shallow copy so callers cannot mutate internal state.
|
||||
return new StreamState
|
||||
{
|
||||
Msgs = _state.Msgs,
|
||||
Bytes = _state.Bytes,
|
||||
FirstSeq = _state.FirstSeq,
|
||||
FirstTime = _state.FirstTime,
|
||||
LastSeq = _state.LastSeq,
|
||||
LastTime = _state.LastTime,
|
||||
NumSubjects = _state.NumSubjects,
|
||||
NumDeleted = _state.NumDeleted,
|
||||
Deleted = _state.Deleted,
|
||||
Lost = _state.Lost,
|
||||
Consumers = _state.Consumers,
|
||||
};
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
=> _memStore.State();
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void FastState(StreamState state)
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
state.Msgs = _state.Msgs;
|
||||
state.Bytes = _state.Bytes;
|
||||
state.FirstSeq = _state.FirstSeq;
|
||||
state.FirstTime = _state.FirstTime;
|
||||
state.LastSeq = _state.LastSeq;
|
||||
state.LastTime = _state.LastTime;
|
||||
state.NumDeleted = _state.NumDeleted;
|
||||
state.Consumers = _state.Consumers;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
=> _memStore.FastState(state);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — callback registration
|
||||
@@ -199,27 +167,15 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void RegisterStorageUpdates(StorageUpdateHandler cb)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try { _scb = cb; }
|
||||
finally { _mu.ExitWriteLock(); }
|
||||
}
|
||||
=> _memStore.RegisterStorageUpdates(cb);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void RegisterStorageRemoveMsg(StorageRemoveMsgHandler cb)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try { _rmcb = cb; }
|
||||
finally { _mu.ExitWriteLock(); }
|
||||
}
|
||||
=> _memStore.RegisterStorageRemoveMsg(cb);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void RegisterProcessJetStreamMsg(ProcessJetStreamMsgHandler cb)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try { _pmsgcb = cb; }
|
||||
finally { _mu.ExitWriteLock(); }
|
||||
}
|
||||
=> _memStore.RegisterProcessJetStreamMsg(cb);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — lifecycle
|
||||
@@ -245,6 +201,7 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
_syncTmr = null;
|
||||
|
||||
_closed = true;
|
||||
_memStore.Stop();
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
@@ -256,71 +213,71 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Seq, long Ts) StoreMsg(string subject, byte[]? hdr, byte[]? msg, long ttl)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore StoreMsg");
|
||||
=> _memStore.StoreMsg(subject, hdr, msg, ttl);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void StoreRawMsg(string subject, byte[]? hdr, byte[]? msg, ulong seq, long ts, long ttl, bool discardNewCheck)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore StoreRawMsg");
|
||||
=> _memStore.StoreRawMsg(subject, hdr, msg, seq, ts, ttl, discardNewCheck);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Seq, Exception? Error) SkipMsg(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SkipMsg");
|
||||
=> _memStore.SkipMsg(seq);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void SkipMsgs(ulong seq, ulong num)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SkipMsgs");
|
||||
=> _memStore.SkipMsgs(seq, num);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void FlushAllPending()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore FlushAllPending");
|
||||
=> _memStore.FlushAllPending();
|
||||
|
||||
/// <inheritdoc/>
|
||||
public StoreMsg? LoadMsg(ulong seq, StoreMsg? sm)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadMsg");
|
||||
=> _memStore.LoadMsg(seq, sm);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (StoreMsg? Sm, ulong Skip) LoadNextMsg(string filter, bool wc, ulong start, StoreMsg? smp)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadNextMsg");
|
||||
=> _memStore.LoadNextMsg(filter, wc, start, smp);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (StoreMsg? Sm, ulong Skip) LoadNextMsgMulti(object? sl, ulong start, StoreMsg? smp)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadNextMsgMulti");
|
||||
=> _memStore.LoadNextMsgMulti(sl, start, smp);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public StoreMsg? LoadLastMsg(string subject, StoreMsg? sm)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadLastMsg");
|
||||
=> _memStore.LoadLastMsg(subject, sm);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (StoreMsg? Sm, Exception? Error) LoadPrevMsg(ulong start, StoreMsg? smp)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadPrevMsg");
|
||||
=> _memStore.LoadPrevMsg(start, smp);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (StoreMsg? Sm, ulong Skip, Exception? Error) LoadPrevMsgMulti(object? sl, ulong start, StoreMsg? smp)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore LoadPrevMsgMulti");
|
||||
=> _memStore.LoadPrevMsgMulti(sl, start, smp);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (bool Removed, Exception? Error) RemoveMsg(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore RemoveMsg");
|
||||
=> _memStore.RemoveMsg(seq);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (bool Removed, Exception? Error) EraseMsg(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore EraseMsg");
|
||||
=> _memStore.EraseMsg(seq);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Purged, Exception? Error) Purge()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Purge");
|
||||
=> _memStore.Purge();
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Purged, Exception? Error) PurgeEx(string subject, ulong seq, ulong keep)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore PurgeEx");
|
||||
=> _memStore.PurgeEx(subject, seq, keep);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Purged, Exception? Error) Compact(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Compact");
|
||||
=> _memStore.Compact(seq);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Truncate(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Truncate");
|
||||
=> _memStore.Truncate(seq);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — query methods (all stubs)
|
||||
@@ -328,39 +285,39 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public ulong GetSeqFromTime(DateTime t)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore GetSeqFromTime");
|
||||
=> _memStore.GetSeqFromTime(t);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public SimpleState FilteredState(ulong seq, string subject)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore FilteredState");
|
||||
=> _memStore.FilteredState(seq, subject);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public Dictionary<string, SimpleState> SubjectsState(string filterSubject)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SubjectsState");
|
||||
=> _memStore.SubjectsState(filterSubject);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public Dictionary<string, ulong> SubjectsTotals(string filterSubject)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SubjectsTotals");
|
||||
=> _memStore.SubjectsTotals(filterSubject);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong[] Seqs, Exception? Error) AllLastSeqs()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore AllLastSeqs");
|
||||
=> _memStore.AllLastSeqs();
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong[] Seqs, Exception? Error) MultiLastSeqs(string[] filters, ulong maxSeq, int maxAllowed)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore MultiLastSeqs");
|
||||
=> _memStore.MultiLastSeqs(filters, maxSeq, maxAllowed);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (string Subject, Exception? Error) SubjectForSeq(ulong seq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SubjectForSeq");
|
||||
=> _memStore.SubjectForSeq(seq);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Total, ulong ValidThrough, Exception? Error) NumPending(ulong sseq, string filter, bool lastPerSubject)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore NumPending");
|
||||
=> _memStore.NumPending(sseq, filter, lastPerSubject);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Total, ulong ValidThrough, Exception? Error) NumPendingMulti(ulong sseq, object? sl, bool lastPerSubject)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore NumPendingMulti");
|
||||
=> _memStore.NumPendingMulti(sseq, sl, lastPerSubject);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — stream state encoding (stubs)
|
||||
@@ -368,11 +325,11 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (byte[] Enc, Exception? Error) EncodedStreamState(ulong failed)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore EncodedStreamState");
|
||||
=> _memStore.EncodedStreamState(failed);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void SyncDeleted(DeleteBlocks dbs)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore SyncDeleted");
|
||||
=> _memStore.SyncDeleted(dbs);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — config / admin (stubs)
|
||||
@@ -380,15 +337,18 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void UpdateConfig(StreamConfig cfg)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore UpdateConfig");
|
||||
{
|
||||
_cfg.Config = cfg.Clone();
|
||||
_memStore.UpdateConfig(cfg);
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Delete(bool inline)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Delete");
|
||||
=> _memStore.Delete(inline);
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void ResetState()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ResetState");
|
||||
=> _memStore.ResetState();
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// IStreamStore — consumer management (stubs)
|
||||
@@ -396,13 +356,29 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public IConsumerStore ConsumerStore(string name, DateTime created, ConsumerConfig cfg)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerStore");
|
||||
{
|
||||
var cfi = new FileConsumerInfo
|
||||
{
|
||||
Name = name,
|
||||
Created = created,
|
||||
Config = cfg,
|
||||
};
|
||||
var odir = Path.Combine(_fcfg.StoreDir, FileStoreDefaults.ConsumerDir, name);
|
||||
Directory.CreateDirectory(odir);
|
||||
var cs = new ConsumerFileStore(this, cfi, name, odir);
|
||||
AddConsumer(cs);
|
||||
return cs;
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void AddConsumer(IConsumerStore o)
|
||||
{
|
||||
_cmu.EnterWriteLock();
|
||||
try { _cfs.Add(o); }
|
||||
try
|
||||
{
|
||||
_cfs.Add(o);
|
||||
_memStore.AddConsumer(o);
|
||||
}
|
||||
finally { _cmu.ExitWriteLock(); }
|
||||
}
|
||||
|
||||
@@ -410,7 +386,11 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
public void RemoveConsumer(IConsumerStore o)
|
||||
{
|
||||
_cmu.EnterWriteLock();
|
||||
try { _cfs.Remove(o); }
|
||||
try
|
||||
{
|
||||
_cfs.Remove(o);
|
||||
_memStore.RemoveConsumer(o);
|
||||
}
|
||||
finally { _cmu.ExitWriteLock(); }
|
||||
}
|
||||
|
||||
@@ -420,9 +400,14 @@ public sealed class JetStreamFileStore : IStreamStore, IDisposable
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (SnapshotResult? Result, Exception? Error) Snapshot(TimeSpan deadline, bool includeConsumers, bool checkMsgs)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Snapshot");
|
||||
{
|
||||
var state = _memStore.State();
|
||||
var payload = JsonSerializer.SerializeToUtf8Bytes(state);
|
||||
var reader = new MemoryStream(payload, writable: false);
|
||||
return (new SnapshotResult { Reader = reader, State = state }, null);
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ulong Total, ulong Reported, Exception? Error) Utilization()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore Utilization");
|
||||
=> _memStore.Utilization();
|
||||
}
|
||||
|
||||
@@ -183,12 +183,19 @@ public sealed class CompressionInfo
|
||||
|
||||
/// <summary>
|
||||
/// Serialises compression metadata as a compact binary prefix.
|
||||
/// Format: 'c' 'm' 'p' <algorithmByte> <uvarint originalSize>
|
||||
/// Format: 'c' 'm' 'p' <algorithmByte> <uvarint originalSize> <uvarint compressedSize>
|
||||
/// </summary>
|
||||
public byte[] MarshalMetadata()
|
||||
{
|
||||
// TODO: session 18 — implement varint encoding
|
||||
throw new NotImplementedException("TODO: session 18 — filestore CompressionInfo.MarshalMetadata");
|
||||
Span<byte> scratch = stackalloc byte[32];
|
||||
var pos = 0;
|
||||
scratch[pos++] = (byte)'c';
|
||||
scratch[pos++] = (byte)'m';
|
||||
scratch[pos++] = (byte)'p';
|
||||
scratch[pos++] = (byte)Type;
|
||||
pos += WriteUVarInt(scratch[pos..], Original);
|
||||
pos += WriteUVarInt(scratch[pos..], Compressed);
|
||||
return scratch[..pos].ToArray();
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
@@ -197,8 +204,58 @@ public sealed class CompressionInfo
|
||||
/// </summary>
|
||||
public int UnmarshalMetadata(byte[] b)
|
||||
{
|
||||
// TODO: session 18 — implement varint decoding
|
||||
throw new NotImplementedException("TODO: session 18 — filestore CompressionInfo.UnmarshalMetadata");
|
||||
ArgumentNullException.ThrowIfNull(b);
|
||||
|
||||
if (b.Length < 4 || b[0] != (byte)'c' || b[1] != (byte)'m' || b[2] != (byte)'p')
|
||||
return 0;
|
||||
|
||||
Type = (StoreCompression)b[3];
|
||||
var pos = 4;
|
||||
|
||||
if (!TryReadUVarInt(b.AsSpan(pos), out var original, out var used1))
|
||||
return 0;
|
||||
pos += used1;
|
||||
|
||||
if (!TryReadUVarInt(b.AsSpan(pos), out var compressed, out var used2))
|
||||
return 0;
|
||||
pos += used2;
|
||||
|
||||
Original = original;
|
||||
Compressed = compressed;
|
||||
return pos;
|
||||
}
|
||||
|
||||
private static int WriteUVarInt(Span<byte> dest, ulong value)
|
||||
{
|
||||
var i = 0;
|
||||
while (value >= 0x80)
|
||||
{
|
||||
dest[i++] = (byte)(value | 0x80);
|
||||
value >>= 7;
|
||||
}
|
||||
dest[i++] = (byte)value;
|
||||
return i;
|
||||
}
|
||||
|
||||
private static bool TryReadUVarInt(ReadOnlySpan<byte> src, out ulong value, out int used)
|
||||
{
|
||||
value = 0;
|
||||
used = 0;
|
||||
var shift = 0;
|
||||
foreach (var b in src)
|
||||
{
|
||||
value |= (ulong)(b & 0x7F) << shift;
|
||||
used++;
|
||||
if ((b & 0x80) == 0)
|
||||
return true;
|
||||
shift += 7;
|
||||
if (shift > 63)
|
||||
return false;
|
||||
}
|
||||
|
||||
value = 0;
|
||||
used = 0;
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -24,8 +24,27 @@ namespace ZB.MOM.NatsNet.Server;
|
||||
/// <summary>Stub: stored message type — full definition in session 20.</summary>
|
||||
public sealed class StoredMsg { }
|
||||
|
||||
/// <summary>Priority group for pull consumers — full definition in session 20.</summary>
|
||||
public sealed class PriorityGroup { }
|
||||
/// <summary>
|
||||
/// Priority group for pull consumers.
|
||||
/// Mirrors <c>PriorityGroup</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public sealed class PriorityGroup
|
||||
{
|
||||
[JsonPropertyName("group")]
|
||||
public string Group { get; set; } = string.Empty;
|
||||
|
||||
[JsonPropertyName("min_pending")]
|
||||
public long MinPending { get; set; }
|
||||
|
||||
[JsonPropertyName("min_ack_pending")]
|
||||
public long MinAckPending { get; set; }
|
||||
|
||||
[JsonPropertyName("id")]
|
||||
public string Id { get; set; } = string.Empty;
|
||||
|
||||
[JsonPropertyName("priority")]
|
||||
public int Priority { get; set; }
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// API subject constants
|
||||
|
||||
@@ -45,6 +45,8 @@ public sealed class JsApiError
|
||||
/// </summary>
|
||||
public static class JsApiErrors
|
||||
{
|
||||
public delegate object? ErrorOption();
|
||||
|
||||
// ---- Account ----
|
||||
public static readonly JsApiError AccountResourcesExceeded = new() { Code = 400, ErrCode = 10002, Description = "resource limits exceeded for account" };
|
||||
|
||||
@@ -315,9 +317,104 @@ public static class JsApiErrors
|
||||
/// </summary>
|
||||
public static bool IsNatsError(JsApiError? err, params ushort[] errCodes)
|
||||
{
|
||||
if (err is null) return false;
|
||||
foreach (var code in errCodes)
|
||||
if (err.ErrCode == code) return true;
|
||||
return IsNatsErr(err, errCodes);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if <paramref name="err"/> is a <see cref="JsApiError"/> and matches one of the supplied IDs.
|
||||
/// Unknown IDs are ignored, matching Go's map-based lookup behavior.
|
||||
/// </summary>
|
||||
public static bool IsNatsErr(object? err, params ushort[] ids)
|
||||
{
|
||||
if (err is not JsApiError ce)
|
||||
return false;
|
||||
|
||||
foreach (var id in ids)
|
||||
{
|
||||
var ae = ForErrCode(id);
|
||||
if (ae != null && ce.ErrCode == ae.ErrCode)
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Formats an API error string exactly as Go <c>ApiError.Error()</c>.
|
||||
/// </summary>
|
||||
public static string Error(JsApiError? err) => err?.ToString() ?? string.Empty;
|
||||
|
||||
/// <summary>
|
||||
/// Creates an option that causes constructor helpers to return the provided
|
||||
/// <see cref="JsApiError"/> when present.
|
||||
/// Mirrors Go <c>Unless</c>.
|
||||
/// </summary>
|
||||
public static ErrorOption Unless(object? err) => () => err;
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors Go <c>NewJSRestoreSubscribeFailedError</c>.
|
||||
/// </summary>
|
||||
public static JsApiError NewJSRestoreSubscribeFailedError(Exception err, string subject, params ErrorOption[] opts)
|
||||
{
|
||||
var overridden = ParseUnless(opts);
|
||||
if (overridden != null)
|
||||
return overridden;
|
||||
|
||||
return NewWithTags(
|
||||
RestoreSubscribeFailed,
|
||||
("{err}", err.Message),
|
||||
("{subject}", subject));
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors Go <c>NewJSStreamRestoreError</c>.
|
||||
/// </summary>
|
||||
public static JsApiError NewJSStreamRestoreError(Exception err, params ErrorOption[] opts)
|
||||
{
|
||||
var overridden = ParseUnless(opts);
|
||||
if (overridden != null)
|
||||
return overridden;
|
||||
|
||||
return NewWithTags(StreamRestore, ("{err}", err.Message));
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors Go <c>NewJSPeerRemapError</c>.
|
||||
/// </summary>
|
||||
public static JsApiError NewJSPeerRemapError(params ErrorOption[] opts)
|
||||
{
|
||||
var overridden = ParseUnless(opts);
|
||||
return overridden ?? Clone(PeerRemap);
|
||||
}
|
||||
|
||||
private static JsApiError? ParseUnless(ReadOnlySpan<ErrorOption> opts)
|
||||
{
|
||||
foreach (var opt in opts)
|
||||
{
|
||||
var value = opt();
|
||||
if (value is JsApiError apiErr)
|
||||
return Clone(apiErr);
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
private static JsApiError Clone(JsApiError source) => new()
|
||||
{
|
||||
Code = source.Code,
|
||||
ErrCode = source.ErrCode,
|
||||
Description = source.Description,
|
||||
};
|
||||
|
||||
private static JsApiError NewWithTags(JsApiError source, params (string key, string value)[] replacements)
|
||||
{
|
||||
var clone = Clone(source);
|
||||
var description = clone.Description ?? string.Empty;
|
||||
|
||||
foreach (var (key, value) in replacements)
|
||||
description = description.Replace(key, value, StringComparison.Ordinal);
|
||||
|
||||
clone.Description = description;
|
||||
return clone;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -291,3 +291,109 @@ internal sealed class JsaUsage
|
||||
/// Mirrors <c>keyGen</c> in server/jetstream.go.
|
||||
/// </summary>
|
||||
public delegate byte[] KeyGen(byte[] context);
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// JetStream message header helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Static helpers for extracting TTL, scheduling, and scheduler information
|
||||
/// from JetStream message headers.
|
||||
/// Mirrors <c>getMessageTTL</c>, <c>nextMessageSchedule</c>, <c>getMessageScheduler</c>
|
||||
/// in server/stream.go.
|
||||
/// </summary>
|
||||
public static class JetStreamHeaderHelpers
|
||||
{
|
||||
/// <summary>
|
||||
/// Extracts the TTL value (in seconds) from the message header.
|
||||
/// Returns 0 if no TTL header is present. Returns -1 for "never".
|
||||
/// Mirrors Go <c>getMessageTTL</c>.
|
||||
/// </summary>
|
||||
public static (long Ttl, Exception? Error) GetMessageTtl(byte[] hdr)
|
||||
{
|
||||
var raw = NatsMessageHeaders.GetHeader(NatsHeaderConstants.JsMessageTtl, hdr);
|
||||
if (raw == null || raw.Length == 0)
|
||||
return (0, null);
|
||||
|
||||
return ParseMessageTtl(System.Text.Encoding.ASCII.GetString(raw));
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Parses a TTL string value into seconds.
|
||||
/// Supports "never" (-1), Go-style duration strings ("30s", "5m"), or plain integer seconds.
|
||||
/// Mirrors Go <c>parseMessageTTL</c>.
|
||||
/// </summary>
|
||||
public static (long Ttl, Exception? Error) ParseMessageTtl(string ttl)
|
||||
{
|
||||
if (string.Equals(ttl, "never", StringComparison.OrdinalIgnoreCase))
|
||||
return (-1, null);
|
||||
|
||||
// Try parsing as a Go-style duration.
|
||||
if (TryParseDuration(ttl, out var dur))
|
||||
{
|
||||
if (dur.TotalSeconds < 1)
|
||||
return (0, new InvalidOperationException("message TTL invalid"));
|
||||
return ((long)dur.TotalSeconds, null);
|
||||
}
|
||||
|
||||
// Try as plain integer (seconds).
|
||||
if (long.TryParse(ttl, out var t))
|
||||
{
|
||||
if (t < 0)
|
||||
return (0, new InvalidOperationException("message TTL invalid"));
|
||||
return (t, null);
|
||||
}
|
||||
|
||||
return (0, new InvalidOperationException("message TTL invalid"));
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Extracts the next scheduled fire time from the message header.
|
||||
/// Returns (DateTime, true) if valid, (default, true) if no header, (default, false) on parse error.
|
||||
/// Mirrors Go <c>nextMessageSchedule</c>.
|
||||
/// </summary>
|
||||
public static (DateTime Schedule, bool Ok) NextMessageSchedule(byte[] hdr, long ts)
|
||||
{
|
||||
if (hdr.Length == 0)
|
||||
return (default, true);
|
||||
|
||||
var slice = NatsMessageHeaders.SliceHeader(NatsHeaderConstants.JsSchedulePattern, hdr);
|
||||
if (slice == null || slice.Value.Length == 0)
|
||||
return (default, true);
|
||||
|
||||
var val = System.Text.Encoding.ASCII.GetString(slice.Value.Span);
|
||||
var (schedule, _, ok) = Internal.MsgScheduling.ParseMsgSchedule(val, ts);
|
||||
return (schedule, ok);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Extracts the scheduler identifier from the message header.
|
||||
/// Returns empty string if not present.
|
||||
/// Mirrors Go <c>getMessageScheduler</c>.
|
||||
/// </summary>
|
||||
public static string GetMessageScheduler(byte[] hdr)
|
||||
{
|
||||
if (hdr.Length == 0)
|
||||
return string.Empty;
|
||||
|
||||
var raw = NatsMessageHeaders.GetHeader(NatsHeaderConstants.JsScheduler, hdr);
|
||||
if (raw == null || raw.Length == 0)
|
||||
return string.Empty;
|
||||
|
||||
return System.Text.Encoding.ASCII.GetString(raw);
|
||||
}
|
||||
|
||||
private static bool TryParseDuration(string s, out TimeSpan result)
|
||||
{
|
||||
result = default;
|
||||
if (s.EndsWith("ms", StringComparison.Ordinal) && double.TryParse(s[..^2], out var ms))
|
||||
{ result = TimeSpan.FromMilliseconds(ms); return true; }
|
||||
if (s.EndsWith('s') && double.TryParse(s[..^1], out var sec))
|
||||
{ result = TimeSpan.FromSeconds(sec); return true; }
|
||||
if (s.EndsWith('m') && double.TryParse(s[..^1], out var min))
|
||||
{ result = TimeSpan.FromMinutes(min); return true; }
|
||||
if (s.EndsWith('h') && double.TryParse(s[..^1], out var hr))
|
||||
{ result = TimeSpan.FromHours(hr); return true; }
|
||||
return TimeSpan.TryParse(s, out result);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -14,6 +14,7 @@
|
||||
// Adapted from server/memstore.go
|
||||
|
||||
using System.Text;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
using ZB.MOM.NatsNet.Server.Internal.DataStructures;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server;
|
||||
@@ -57,6 +58,18 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
// Consumer count
|
||||
private int _consumers;
|
||||
|
||||
// TTL hash wheel (only created when cfg.AllowMsgTTL)
|
||||
private HashWheel? _ttls;
|
||||
|
||||
// Message scheduling (only created when cfg.AllowMsgSchedules)
|
||||
private MsgScheduling? _scheduling;
|
||||
|
||||
// Subject deletion metadata for cluster consensus
|
||||
private StreamDeletionMeta _sdm = new();
|
||||
|
||||
// Guard against re-entrant age check
|
||||
private bool _ageChkRun;
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Constructor
|
||||
// -----------------------------------------------------------------------
|
||||
@@ -83,6 +96,11 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
_state.LastSeq = cfg.FirstSeq - 1;
|
||||
_state.FirstSeq = cfg.FirstSeq;
|
||||
}
|
||||
|
||||
if (cfg.AllowMsgTTL)
|
||||
_ttls = HashWheel.NewHashWheel();
|
||||
if (cfg.AllowMsgSchedules)
|
||||
_scheduling = new MsgScheduling(RunMsgScheduling);
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
@@ -225,9 +243,52 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
EnforceMsgLimit();
|
||||
EnforceBytesLimit();
|
||||
|
||||
// Age check
|
||||
if (_ageChk == null && _cfg.MaxAge != TimeSpan.Zero)
|
||||
// Per-message TTL tracking.
|
||||
if (_ttls != null && ttl > 0)
|
||||
{
|
||||
var expires = ts + (ttl * 1_000_000_000L);
|
||||
_ttls.Add(seq, expires);
|
||||
}
|
||||
|
||||
// Age check timer management.
|
||||
if (_ttls != null && ttl > 0)
|
||||
{
|
||||
ResetAgeChk(0);
|
||||
}
|
||||
else if (_ageChk == null && (_cfg.MaxAge > TimeSpan.Zero || _ttls != null))
|
||||
{
|
||||
StartAgeChk();
|
||||
}
|
||||
|
||||
// Message scheduling.
|
||||
if (_scheduling != null && hdr.Length > 0)
|
||||
{
|
||||
var (schedule, ok) = JetStreamHeaderHelpers.NextMessageSchedule(hdr, ts);
|
||||
if (ok && schedule != default)
|
||||
{
|
||||
_scheduling.Add(seq, subject, schedule.Ticks * 100L);
|
||||
}
|
||||
else
|
||||
{
|
||||
_scheduling.RemoveSubject(subject);
|
||||
}
|
||||
|
||||
// Check for a repeating schedule.
|
||||
var scheduleNextSlice = NatsMessageHeaders.SliceHeader(NatsHeaderConstants.JsScheduleNext, hdr);
|
||||
if (scheduleNextSlice != null && scheduleNextSlice.Value.Length > 0)
|
||||
{
|
||||
var scheduleNext = Encoding.ASCII.GetString(scheduleNextSlice.Value.Span);
|
||||
if (scheduleNext != NatsHeaderConstants.JsScheduleNextPurge)
|
||||
{
|
||||
var scheduler = JetStreamHeaderHelpers.GetMessageScheduler(hdr);
|
||||
if (DateTime.TryParse(scheduleNext, null, System.Globalization.DateTimeStyles.RoundtripKind, out var next)
|
||||
&& !string.IsNullOrEmpty(scheduler))
|
||||
{
|
||||
_scheduling.Update(scheduler, next.ToUniversalTime().Ticks * 100L);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
@@ -562,6 +623,17 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
UpdateFirstSeq(seq);
|
||||
RemoveSeqPerSubject(sm.Subject, seq);
|
||||
|
||||
// Remove TTL entry from hash wheel if applicable.
|
||||
if (_ttls != null && sm.Hdr.Length > 0)
|
||||
{
|
||||
var (ttl, err) = JetStreamHeaderHelpers.GetMessageTtl(sm.Hdr);
|
||||
if (err == null && ttl > 0)
|
||||
{
|
||||
var expires = sm.Ts + (ttl * 1_000_000_000L);
|
||||
_ttls.Remove(seq, expires);
|
||||
}
|
||||
}
|
||||
|
||||
if (secure)
|
||||
{
|
||||
if (sm.Hdr.Length > 0)
|
||||
@@ -1325,7 +1397,15 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
/// <inheritdoc/>
|
||||
public void ResetState()
|
||||
{
|
||||
// For memory store, nothing to reset.
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_scheduling?.ClearInflight();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
@@ -1412,6 +1492,571 @@ public sealed class JetStreamMemStore : IStreamStore
|
||||
}
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Size helpers (static)
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Computes raw message size from component lengths.
|
||||
/// Mirrors Go <c>memStoreMsgSizeRaw</c>.
|
||||
/// </summary>
|
||||
internal static ulong MemStoreMsgSizeRaw(int slen, int hlen, int mlen)
|
||||
=> (ulong)(slen + hlen + mlen + 16);
|
||||
|
||||
/// <summary>
|
||||
/// Computes message size from actual values.
|
||||
/// Mirrors Go <c>memStoreMsgSize</c> (the package-level function).
|
||||
/// </summary>
|
||||
internal static ulong MemStoreMsgSize(string subj, byte[]? hdr, byte[]? msg)
|
||||
=> MemStoreMsgSizeRaw(subj.Length, hdr?.Length ?? 0, msg?.Length ?? 0);
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Trivial helpers
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
// Lock must be held.
|
||||
private bool DeleteFirstMsg() => RemoveMsgLocked(_state.FirstSeq, false);
|
||||
|
||||
// Lock must be held.
|
||||
private void DeleteFirstMsgOrPanic()
|
||||
{
|
||||
if (!DeleteFirstMsg())
|
||||
throw new InvalidOperationException("jetstream memstore has inconsistent state, can't find first seq msg");
|
||||
}
|
||||
|
||||
// Lock must be held.
|
||||
private void CancelAgeChk()
|
||||
{
|
||||
if (_ageChk != null)
|
||||
{
|
||||
_ageChk.Dispose();
|
||||
_ageChk = null;
|
||||
_ageChkTime = 0;
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if a linear scan is preferable over subject tree lookup.
|
||||
/// Mirrors Go <c>shouldLinearScan</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private bool ShouldLinearScan(string filter, bool wc, ulong start)
|
||||
{
|
||||
const int LinearScanMaxFss = 256;
|
||||
var isAll = filter == ">";
|
||||
return isAll || 2 * (int)(_state.LastSeq - start) < _fss.Size() || (wc && _fss.Size() > LinearScanMaxFss);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if the store is closed.
|
||||
/// Mirrors Go <c>isClosed</c>.
|
||||
/// </summary>
|
||||
public bool IsClosed()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return _msgs == null; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Checks if the filter represents all subjects (empty or ">").
|
||||
/// Mirrors Go <c>filterIsAll</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private static bool FilterIsAll(string filter)
|
||||
=> string.IsNullOrEmpty(filter) || filter == ">";
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Low-complexity helpers
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Returns per-subject message totals matching the filter.
|
||||
/// Mirrors Go <c>subjectsTotalsLocked</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private Dictionary<string, ulong> SubjectsTotalsLocked(string filterSubject)
|
||||
{
|
||||
if (_fss.Size() == 0)
|
||||
return new Dictionary<string, ulong>();
|
||||
|
||||
if (string.IsNullOrEmpty(filterSubject))
|
||||
filterSubject = ">";
|
||||
|
||||
var isAll = filterSubject == ">";
|
||||
var result = new Dictionary<string, ulong>();
|
||||
_fss.Match(Encoding.UTF8.GetBytes(filterSubject), (subj, ss) =>
|
||||
{
|
||||
result[Encoding.UTF8.GetString(subj)] = ss.Msgs;
|
||||
return true;
|
||||
});
|
||||
return result;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Finds literal subject match sequence bounds.
|
||||
/// Returns (first, last, true) if found, or (0, 0, false) if not.
|
||||
/// Mirrors Go <c>nextLiteralMatchLocked</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private (ulong First, ulong Last, bool Found) NextLiteralMatchLocked(string filter, ulong start)
|
||||
{
|
||||
var (ss, ok) = _fss.Find(Encoding.UTF8.GetBytes(filter));
|
||||
if (!ok || ss == null)
|
||||
return (0, 0, false);
|
||||
|
||||
RecalculateForSubj(filter, ss);
|
||||
if (start > ss.Last)
|
||||
return (0, 0, false);
|
||||
|
||||
return (Math.Max(start, ss.First), ss.Last, true);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Finds wildcard subject match sequence bounds using MatchUntil.
|
||||
/// Mirrors Go <c>nextWildcardMatchLocked</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private (ulong First, ulong Last, bool Found) NextWildcardMatchLocked(string filter, ulong start)
|
||||
{
|
||||
bool found = false;
|
||||
ulong first = _state.LastSeq, last = 0;
|
||||
|
||||
_fss.MatchUntil(Encoding.UTF8.GetBytes(filter), (subj, ss) =>
|
||||
{
|
||||
RecalculateForSubj(Encoding.UTF8.GetString(subj), ss);
|
||||
|
||||
if (start > ss.Last)
|
||||
return true;
|
||||
|
||||
found = true;
|
||||
if (ss.First < first)
|
||||
first = ss.First;
|
||||
if (ss.Last > last)
|
||||
last = ss.Last;
|
||||
|
||||
return first > start;
|
||||
});
|
||||
|
||||
if (!found)
|
||||
return (0, 0, false);
|
||||
|
||||
return (Math.Max(first, start), last, true);
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// SDM methods
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Determines whether this sequence/subject should be processed as a subject deletion marker.
|
||||
/// Returns (isLast, shouldProcess).
|
||||
/// Mirrors Go <c>shouldProcessSdmLocked</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private (bool IsLast, bool ShouldProcess) ShouldProcessSdmLocked(ulong seq, string subj)
|
||||
{
|
||||
if (_sdm.TryGetPending(seq, out var p))
|
||||
{
|
||||
var elapsed = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() * 1_000_000L - p.Ts;
|
||||
if (elapsed < 2_000_000_000L) // 2 seconds in nanoseconds
|
||||
return (p.Last, false);
|
||||
|
||||
var last = p.Last;
|
||||
if (last)
|
||||
{
|
||||
var msgs = SubjectsTotalsLocked(subj).GetValueOrDefault(subj, 0UL);
|
||||
var numPending = _sdm.GetSubjectTotal(subj);
|
||||
if (msgs > numPending)
|
||||
last = false;
|
||||
}
|
||||
_sdm.SetPending(seq, new SdmBySeq { Last = last, Ts = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() * 1_000_000L });
|
||||
return (last, true);
|
||||
}
|
||||
|
||||
var msgCount = SubjectsTotalsLocked(subj).GetValueOrDefault(subj, 0UL);
|
||||
if (msgCount == 0)
|
||||
return (false, true);
|
||||
|
||||
var pending = _sdm.GetSubjectTotal(subj);
|
||||
var remaining = msgCount - pending;
|
||||
return (_sdm.TrackPending(seq, subj, remaining == 1), true);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Lock-wrapping version of ShouldProcessSdmLocked.
|
||||
/// Mirrors Go <c>shouldProcessSdm</c>.
|
||||
/// </summary>
|
||||
public (bool IsLast, bool ShouldProcess) ShouldProcessSdm(ulong seq, string subj)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try { return ShouldProcessSdmLocked(seq, subj); }
|
||||
finally { _mu.ExitWriteLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Handles message removal: if SDM mode, builds a marker header and invokes _pmsgcb;
|
||||
/// otherwise invokes _rmcb.
|
||||
/// Mirrors Go <c>handleRemovalOrSdm</c>.
|
||||
/// </summary>
|
||||
public void HandleRemovalOrSdm(ulong seq, string subj, bool sdm, long sdmTtl)
|
||||
{
|
||||
if (sdm)
|
||||
{
|
||||
var hdr = Encoding.ASCII.GetBytes(
|
||||
$"NATS/1.0\r\n{NatsHeaderConstants.JsMarkerReason}: {NatsHeaderConstants.JsMarkerReasonMaxAge}\r\n" +
|
||||
$"{NatsHeaderConstants.JsMessageTtl}: {TimeSpan.FromSeconds(sdmTtl)}\r\n" +
|
||||
$"{NatsHeaderConstants.JsMsgRollup}: {NatsHeaderConstants.JsMsgRollupSubject}\r\n\r\n");
|
||||
|
||||
// In Go this builds an inMsg and calls pmsgcb. We pass a synthetic StoreMsg.
|
||||
var msg = new StoreMsg { Subject = subj, Hdr = hdr, Msg = Array.Empty<byte>(), Seq = 0, Ts = 0 };
|
||||
_pmsgcb?.Invoke(msg);
|
||||
}
|
||||
else
|
||||
{
|
||||
_rmcb?.Invoke(seq);
|
||||
}
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Age/TTL methods
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Resets or arms the age check timer based on TTL and MaxAge.
|
||||
/// Mirrors Go <c>resetAgeChk</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private void ResetAgeChk(long delta)
|
||||
{
|
||||
if (_ageChkRun)
|
||||
return;
|
||||
|
||||
long next = long.MaxValue;
|
||||
if (_ttls != null)
|
||||
next = _ttls.GetNextExpiration(next);
|
||||
|
||||
if (_cfg.MaxAge <= TimeSpan.Zero && next == long.MaxValue)
|
||||
{
|
||||
CancelAgeChk();
|
||||
return;
|
||||
}
|
||||
|
||||
var fireIn = _cfg.MaxAge;
|
||||
|
||||
if (delta == 0 && _state.Msgs > 0)
|
||||
{
|
||||
var until = TimeSpan.FromSeconds(2);
|
||||
if (fireIn == TimeSpan.Zero || until < fireIn)
|
||||
fireIn = until;
|
||||
}
|
||||
|
||||
if (next < long.MaxValue)
|
||||
{
|
||||
var nextTicks = DateTime.UnixEpoch.Ticks + next / 100L;
|
||||
var nextUtc = new DateTime(Math.Max(nextTicks, DateTime.UnixEpoch.Ticks), DateTimeKind.Utc);
|
||||
var until = nextUtc - DateTime.UtcNow;
|
||||
if (fireIn == TimeSpan.Zero || until < fireIn)
|
||||
fireIn = until;
|
||||
}
|
||||
|
||||
if (delta > 0)
|
||||
{
|
||||
var deltaDur = TimeSpan.FromTicks(delta / 100L);
|
||||
if (fireIn == TimeSpan.Zero || deltaDur < fireIn)
|
||||
fireIn = deltaDur;
|
||||
}
|
||||
|
||||
if (fireIn < TimeSpan.FromMilliseconds(250))
|
||||
fireIn = TimeSpan.FromMilliseconds(250);
|
||||
|
||||
var expires = DateTime.UtcNow.Ticks + fireIn.Ticks;
|
||||
if (_ageChkTime > 0 && expires > _ageChkTime)
|
||||
return;
|
||||
|
||||
_ageChkTime = expires;
|
||||
if (_ageChk != null)
|
||||
_ageChk.Change(fireIn, Timeout.InfiniteTimeSpan);
|
||||
else
|
||||
_ageChk = new Timer(_ => ExpireMsgs(), null, fireIn, Timeout.InfiniteTimeSpan);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Recovers TTL state from existing messages after restart.
|
||||
/// Mirrors Go <c>recoverTTLState</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private void RecoverTTLState()
|
||||
{
|
||||
_ttls = HashWheel.NewHashWheel();
|
||||
if (_state.Msgs == 0)
|
||||
return;
|
||||
|
||||
try
|
||||
{
|
||||
var smp = new StoreMsg();
|
||||
var seq = _state.FirstSeq;
|
||||
while (seq <= _state.LastSeq)
|
||||
{
|
||||
if (_msgs != null && _msgs.TryGetValue(seq, out var sm) && sm != null)
|
||||
{
|
||||
if (sm.Hdr.Length > 0)
|
||||
{
|
||||
var (ttl, _) = JetStreamHeaderHelpers.GetMessageTtl(sm.Hdr);
|
||||
if (ttl > 0)
|
||||
{
|
||||
var expires = sm.Ts + (ttl * 1_000_000_000L);
|
||||
_ttls.Add(seq, expires);
|
||||
}
|
||||
}
|
||||
}
|
||||
seq++;
|
||||
}
|
||||
}
|
||||
finally
|
||||
{
|
||||
ResetAgeChk(0);
|
||||
}
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Scheduling methods
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Recovers message scheduling state from existing messages after restart.
|
||||
/// Mirrors Go <c>recoverMsgSchedulingState</c>.
|
||||
/// </summary>
|
||||
// Lock must be held.
|
||||
private void RecoverMsgSchedulingState()
|
||||
{
|
||||
_scheduling = new MsgScheduling(RunMsgScheduling);
|
||||
if (_state.Msgs == 0)
|
||||
return;
|
||||
|
||||
try
|
||||
{
|
||||
var seq = _state.FirstSeq;
|
||||
while (seq <= _state.LastSeq)
|
||||
{
|
||||
if (_msgs != null && _msgs.TryGetValue(seq, out var sm) && sm != null)
|
||||
{
|
||||
if (sm.Hdr.Length > 0)
|
||||
{
|
||||
var (schedule, ok) = JetStreamHeaderHelpers.NextMessageSchedule(sm.Hdr, sm.Ts);
|
||||
if (ok && schedule != default)
|
||||
{
|
||||
_scheduling.Init(seq, sm.Subject, schedule.Ticks * 100L);
|
||||
}
|
||||
}
|
||||
}
|
||||
seq++;
|
||||
}
|
||||
}
|
||||
finally
|
||||
{
|
||||
_scheduling.ResetTimer();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Runs through scheduled messages and fires callbacks.
|
||||
/// Mirrors Go <c>runMsgScheduling</c>.
|
||||
/// </summary>
|
||||
private void RunMsgScheduling()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (_scheduling == null)
|
||||
return;
|
||||
if (_pmsgcb == null)
|
||||
{
|
||||
_scheduling.ResetTimer();
|
||||
return;
|
||||
}
|
||||
|
||||
// TODO: Implement getScheduledMessages integration when MsgScheduling
|
||||
// supports the full callback-based message loading pattern.
|
||||
// For now, reset the timer so scheduling continues to fire.
|
||||
_scheduling.ResetTimer();
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (_mu.IsWriteLockHeld)
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Reset / Purge Internal
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Completely resets the store. Clears all messages, state, fss, dmap, and sdm.
|
||||
/// Mirrors Go <c>reset</c>.
|
||||
/// </summary>
|
||||
public Exception? Reset()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
ulong purged = 0;
|
||||
ulong bytes = 0;
|
||||
StorageUpdateHandler? cb;
|
||||
try
|
||||
{
|
||||
cb = _scb;
|
||||
if (cb != null && _msgs != null)
|
||||
{
|
||||
foreach (var sm in _msgs.Values)
|
||||
{
|
||||
purged++;
|
||||
bytes += MsgSize(sm.Subject, sm.Hdr, sm.Msg);
|
||||
}
|
||||
}
|
||||
|
||||
_state.FirstSeq = 0;
|
||||
_state.FirstTime = default;
|
||||
_state.LastSeq = 0;
|
||||
_state.LastTime = DateTime.UtcNow;
|
||||
_state.Msgs = 0;
|
||||
_state.Bytes = 0;
|
||||
_msgs = new Dictionary<ulong, StoreMsg>();
|
||||
_fss.Reset();
|
||||
_dmap = new SequenceSet();
|
||||
_sdm.Empty();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
|
||||
cb?.Invoke(-(long)purged, -(long)bytes, 0, string.Empty);
|
||||
return null;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Internal purge with configurable first-sequence.
|
||||
/// Mirrors Go <c>purge</c> (the internal version).
|
||||
/// </summary>
|
||||
// This is the internal purge used by cluster/raft — differs from the public Purge().
|
||||
private (ulong Purged, Exception? Error) PurgeInternal(ulong fseq)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
ulong purged;
|
||||
long bytes;
|
||||
StorageUpdateHandler? cb;
|
||||
try
|
||||
{
|
||||
purged = (ulong)(_msgs?.Count ?? 0);
|
||||
cb = _scb;
|
||||
bytes = (long)_state.Bytes;
|
||||
|
||||
if (fseq == 0)
|
||||
fseq = _state.LastSeq + 1;
|
||||
else if (fseq < _state.LastSeq)
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
return (0, new InvalidOperationException("partial purges not supported on memory store"));
|
||||
}
|
||||
|
||||
_state.FirstSeq = fseq;
|
||||
_state.LastSeq = fseq - 1;
|
||||
_state.FirstTime = default;
|
||||
_state.Bytes = 0;
|
||||
_state.Msgs = 0;
|
||||
if (_msgs != null)
|
||||
_msgs = new Dictionary<ulong, StoreMsg>();
|
||||
_fss.Reset();
|
||||
_dmap = new SequenceSet();
|
||||
_sdm.Empty();
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (_mu.IsWriteLockHeld)
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
|
||||
cb?.Invoke(-(long)purged, -bytes, 0, string.Empty);
|
||||
return (purged, null);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Internal compact with SDM tracking.
|
||||
/// Mirrors Go <c>compact</c> (the internal version).
|
||||
/// </summary>
|
||||
private (ulong Purged, Exception? Error) CompactInternal(ulong seq)
|
||||
{
|
||||
if (seq == 0)
|
||||
return Purge();
|
||||
|
||||
ulong purged = 0;
|
||||
ulong bytes = 0;
|
||||
|
||||
_mu.EnterWriteLock();
|
||||
StorageUpdateHandler? cb;
|
||||
try
|
||||
{
|
||||
if (_state.FirstSeq > seq)
|
||||
return (0, null);
|
||||
|
||||
cb = _scb;
|
||||
if (seq <= _state.LastSeq)
|
||||
{
|
||||
var fseq = _state.FirstSeq;
|
||||
for (var s = seq; s <= _state.LastSeq; s++)
|
||||
{
|
||||
if (_msgs != null && _msgs.TryGetValue(s, out var sm2) && sm2 != null)
|
||||
{
|
||||
_state.FirstSeq = s;
|
||||
_state.FirstTime = DateTimeOffset.FromUnixTimeMilliseconds(sm2.Ts / 1_000_000L).UtcDateTime;
|
||||
break;
|
||||
}
|
||||
}
|
||||
for (var s = seq - 1; s >= fseq; s--)
|
||||
{
|
||||
if (_msgs != null && _msgs.TryGetValue(s, out var sm2) && sm2 != null)
|
||||
{
|
||||
bytes += MsgSize(sm2.Subject, sm2.Hdr, sm2.Msg);
|
||||
purged++;
|
||||
RemoveSeqPerSubject(sm2.Subject, s);
|
||||
_msgs.Remove(s);
|
||||
}
|
||||
else if (!_dmap.IsEmpty)
|
||||
{
|
||||
_dmap.Delete(s);
|
||||
}
|
||||
if (s == 0) break;
|
||||
}
|
||||
if (purged > _state.Msgs) purged = _state.Msgs;
|
||||
_state.Msgs -= purged;
|
||||
if (bytes > _state.Bytes) bytes = _state.Bytes;
|
||||
_state.Bytes -= bytes;
|
||||
}
|
||||
else
|
||||
{
|
||||
purged = (ulong)(_msgs?.Count ?? 0);
|
||||
bytes = _state.Bytes;
|
||||
_state.Bytes = 0;
|
||||
_state.Msgs = 0;
|
||||
_state.FirstSeq = seq;
|
||||
_state.FirstTime = default;
|
||||
_state.LastSeq = seq - 1;
|
||||
_msgs = new Dictionary<ulong, StoreMsg>();
|
||||
_fss.Reset();
|
||||
_dmap = new SequenceSet();
|
||||
_sdm.Empty();
|
||||
}
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (_mu.IsWriteLockHeld)
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
|
||||
cb?.Invoke(-(long)purged, -(long)bytes, 0, string.Empty);
|
||||
return (purged, null);
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Private helpers
|
||||
// -----------------------------------------------------------------------
|
||||
|
||||
@@ -13,6 +13,7 @@
|
||||
//
|
||||
// Adapted from server/filestore.go (msgBlock struct and consumerFileStore struct)
|
||||
|
||||
using System.Text.Json;
|
||||
using System.Threading.Channels;
|
||||
using ZB.MOM.NatsNet.Server.Internal.DataStructures;
|
||||
|
||||
@@ -315,68 +316,382 @@ public sealed class ConsumerFileStore : IConsumerStore
|
||||
_name = name;
|
||||
_odir = odir;
|
||||
_ifn = Path.Combine(odir, FileStoreDefaults.ConsumerState);
|
||||
lock (_mu)
|
||||
{
|
||||
TryLoadStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------------
|
||||
// IConsumerStore — all methods stubbed
|
||||
// IConsumerStore
|
||||
// ------------------------------------------------------------------
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void SetStarting(ulong sseq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.SetStarting");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_state.Delivered.Stream = sseq;
|
||||
_state.AckFloor.Stream = sseq;
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void UpdateStarting(ulong sseq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.UpdateStarting");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (sseq <= _state.Delivered.Stream)
|
||||
return;
|
||||
|
||||
_state.Delivered.Stream = sseq;
|
||||
if (_cfg.Config.AckPolicy == AckPolicy.AckNone)
|
||||
_state.AckFloor.Stream = sseq;
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Reset(ulong sseq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.Reset");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_state = new ConsumerState();
|
||||
_state.Delivered.Stream = sseq;
|
||||
_state.AckFloor.Stream = sseq;
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public bool HasState()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.HasState");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
return _state.Delivered.Consumer != 0 ||
|
||||
_state.Delivered.Stream != 0 ||
|
||||
_state.Pending is { Count: > 0 } ||
|
||||
_state.Redelivered is { Count: > 0 };
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void UpdateDelivered(ulong dseq, ulong sseq, ulong dc, long ts)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.UpdateDelivered");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
throw StoreErrors.ErrStoreClosed;
|
||||
|
||||
if (dc != 1 && _cfg.Config.AckPolicy == AckPolicy.AckNone)
|
||||
throw StoreErrors.ErrNoAckPolicy;
|
||||
|
||||
if (dseq <= _state.AckFloor.Consumer)
|
||||
return;
|
||||
|
||||
if (_cfg.Config.AckPolicy != AckPolicy.AckNone)
|
||||
{
|
||||
_state.Pending ??= new Dictionary<ulong, Pending>();
|
||||
|
||||
if (sseq <= _state.Delivered.Stream)
|
||||
{
|
||||
if (_state.Pending.TryGetValue(sseq, out var pending) && pending != null)
|
||||
pending.Timestamp = ts;
|
||||
}
|
||||
else
|
||||
{
|
||||
_state.Pending[sseq] = new Pending { Sequence = dseq, Timestamp = ts };
|
||||
}
|
||||
|
||||
if (dseq > _state.Delivered.Consumer)
|
||||
_state.Delivered.Consumer = dseq;
|
||||
if (sseq > _state.Delivered.Stream)
|
||||
_state.Delivered.Stream = sseq;
|
||||
|
||||
if (dc > 1)
|
||||
{
|
||||
var maxdc = (ulong)_cfg.Config.MaxDeliver;
|
||||
if (maxdc > 0 && dc > maxdc)
|
||||
_state.Pending.Remove(sseq);
|
||||
|
||||
_state.Redelivered ??= new Dictionary<ulong, ulong>();
|
||||
if (!_state.Redelivered.TryGetValue(sseq, out var cur) || cur < dc - 1)
|
||||
_state.Redelivered[sseq] = dc - 1;
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
if (dseq > _state.Delivered.Consumer)
|
||||
{
|
||||
_state.Delivered.Consumer = dseq;
|
||||
_state.AckFloor.Consumer = dseq;
|
||||
}
|
||||
if (sseq > _state.Delivered.Stream)
|
||||
{
|
||||
_state.Delivered.Stream = sseq;
|
||||
_state.AckFloor.Stream = sseq;
|
||||
}
|
||||
}
|
||||
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void UpdateAcks(ulong dseq, ulong sseq)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.UpdateAcks");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
throw StoreErrors.ErrStoreClosed;
|
||||
|
||||
if (_cfg.Config.AckPolicy == AckPolicy.AckNone)
|
||||
throw StoreErrors.ErrNoAckPolicy;
|
||||
|
||||
if (dseq <= _state.AckFloor.Consumer)
|
||||
return;
|
||||
|
||||
if (_state.Pending == null || !_state.Pending.ContainsKey(sseq))
|
||||
{
|
||||
_state.Redelivered?.Remove(sseq);
|
||||
throw StoreErrors.ErrStoreMsgNotFound;
|
||||
}
|
||||
|
||||
if (_cfg.Config.AckPolicy == AckPolicy.AckAll)
|
||||
{
|
||||
var sgap = sseq - _state.AckFloor.Stream;
|
||||
_state.AckFloor.Consumer = dseq;
|
||||
_state.AckFloor.Stream = sseq;
|
||||
|
||||
if (sgap > (ulong)_state.Pending.Count)
|
||||
{
|
||||
var toRemove = new List<ulong>();
|
||||
foreach (var kv in _state.Pending)
|
||||
if (kv.Key <= sseq)
|
||||
toRemove.Add(kv.Key);
|
||||
foreach (var key in toRemove)
|
||||
{
|
||||
_state.Pending.Remove(key);
|
||||
_state.Redelivered?.Remove(key);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
for (var seq = sseq; seq > sseq - sgap && _state.Pending.Count > 0; seq--)
|
||||
{
|
||||
_state.Pending.Remove(seq);
|
||||
_state.Redelivered?.Remove(seq);
|
||||
if (seq == 0)
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
PersistStateLocked();
|
||||
return;
|
||||
}
|
||||
|
||||
if (_state.Pending.TryGetValue(sseq, out var pending) && pending != null)
|
||||
{
|
||||
_state.Pending.Remove(sseq);
|
||||
if (dseq > pending.Sequence && pending.Sequence > 0)
|
||||
dseq = pending.Sequence;
|
||||
}
|
||||
|
||||
if (_state.Pending.Count == 0)
|
||||
{
|
||||
_state.AckFloor.Consumer = _state.Delivered.Consumer;
|
||||
_state.AckFloor.Stream = _state.Delivered.Stream;
|
||||
}
|
||||
else if (dseq == _state.AckFloor.Consumer + 1)
|
||||
{
|
||||
_state.AckFloor.Consumer = dseq;
|
||||
_state.AckFloor.Stream = sseq;
|
||||
|
||||
if (_state.Delivered.Consumer > dseq)
|
||||
{
|
||||
for (var ss = sseq + 1; ss <= _state.Delivered.Stream; ss++)
|
||||
{
|
||||
if (_state.Pending.TryGetValue(ss, out var p) && p != null)
|
||||
{
|
||||
if (p.Sequence > 0)
|
||||
{
|
||||
_state.AckFloor.Consumer = p.Sequence - 1;
|
||||
_state.AckFloor.Stream = ss - 1;
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
_state.Redelivered?.Remove(sseq);
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void UpdateConfig(ConsumerConfig cfg)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.UpdateConfig");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
_cfg.Config = cfg;
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Update(ConsumerState state)
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.Update");
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(state);
|
||||
|
||||
if (state.AckFloor.Consumer > state.Delivered.Consumer)
|
||||
throw new InvalidOperationException("bad ack floor for consumer");
|
||||
if (state.AckFloor.Stream > state.Delivered.Stream)
|
||||
throw new InvalidOperationException("bad ack floor for stream");
|
||||
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
throw StoreErrors.ErrStoreClosed;
|
||||
|
||||
if (state.Delivered.Consumer < _state.Delivered.Consumer ||
|
||||
state.AckFloor.Stream < _state.AckFloor.Stream)
|
||||
throw new InvalidOperationException("old update ignored");
|
||||
|
||||
_state = CloneState(state, copyCollections: true);
|
||||
PersistStateLocked();
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ConsumerState? State, Exception? Error) State()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.State");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
return (null, StoreErrors.ErrStoreClosed);
|
||||
return (CloneState(_state, copyCollections: true), null);
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public (ConsumerState? State, Exception? Error) BorrowState()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.BorrowState");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
return (null, StoreErrors.ErrStoreClosed);
|
||||
return (CloneState(_state, copyCollections: false), null);
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public byte[] EncodedState()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.EncodedState");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
throw StoreErrors.ErrStoreClosed;
|
||||
return JsonSerializer.SerializeToUtf8Bytes(CloneState(_state, copyCollections: true));
|
||||
}
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public StorageType Type() => StorageType.FileStorage;
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Stop()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.Stop");
|
||||
{
|
||||
lock (_mu)
|
||||
{
|
||||
if (_closed)
|
||||
return;
|
||||
PersistStateLocked();
|
||||
_closed = true;
|
||||
}
|
||||
_fs.RemoveConsumer(this);
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void Delete()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.Delete");
|
||||
{
|
||||
Stop();
|
||||
if (Directory.Exists(_odir))
|
||||
Directory.Delete(_odir, recursive: true);
|
||||
}
|
||||
|
||||
/// <inheritdoc/>
|
||||
public void StreamDelete()
|
||||
=> throw new NotImplementedException("TODO: session 18 — filestore ConsumerFileStore.StreamDelete");
|
||||
=> Stop();
|
||||
|
||||
private void TryLoadStateLocked()
|
||||
{
|
||||
if (!File.Exists(_ifn))
|
||||
return;
|
||||
|
||||
try
|
||||
{
|
||||
var raw = File.ReadAllBytes(_ifn);
|
||||
var loaded = JsonSerializer.Deserialize<ConsumerState>(raw);
|
||||
if (loaded != null)
|
||||
_state = CloneState(loaded, copyCollections: true);
|
||||
}
|
||||
catch (Exception)
|
||||
{
|
||||
_state = new ConsumerState();
|
||||
}
|
||||
}
|
||||
|
||||
private void PersistStateLocked()
|
||||
{
|
||||
if (_closed)
|
||||
return;
|
||||
|
||||
Directory.CreateDirectory(_odir);
|
||||
var encoded = JsonSerializer.SerializeToUtf8Bytes(CloneState(_state, copyCollections: true));
|
||||
File.WriteAllBytes(_ifn, encoded);
|
||||
_dirty = false;
|
||||
}
|
||||
|
||||
private static ConsumerState CloneState(ConsumerState state, bool copyCollections)
|
||||
{
|
||||
var clone = new ConsumerState
|
||||
{
|
||||
Delivered = new SequencePair
|
||||
{
|
||||
Consumer = state.Delivered.Consumer,
|
||||
Stream = state.Delivered.Stream,
|
||||
},
|
||||
AckFloor = new SequencePair
|
||||
{
|
||||
Consumer = state.AckFloor.Consumer,
|
||||
Stream = state.AckFloor.Stream,
|
||||
},
|
||||
};
|
||||
|
||||
if (state.Pending is { Count: > 0 })
|
||||
{
|
||||
clone.Pending = new Dictionary<ulong, Pending>(state.Pending.Count);
|
||||
foreach (var kv in state.Pending)
|
||||
{
|
||||
clone.Pending[kv.Key] = new Pending
|
||||
{
|
||||
Sequence = kv.Value.Sequence,
|
||||
Timestamp = kv.Value.Timestamp,
|
||||
};
|
||||
}
|
||||
}
|
||||
else if (!copyCollections)
|
||||
{
|
||||
clone.Pending = state.Pending;
|
||||
}
|
||||
|
||||
if (state.Redelivered is { Count: > 0 })
|
||||
clone.Redelivered = new Dictionary<ulong, ulong>(state.Redelivered);
|
||||
else if (!copyCollections)
|
||||
clone.Redelivered = state.Redelivered;
|
||||
|
||||
return clone;
|
||||
}
|
||||
}
|
||||
|
||||
@@ -35,6 +35,9 @@ internal sealed class NatsConsumer : IDisposable
|
||||
internal long NumRedelivered;
|
||||
|
||||
private bool _closed;
|
||||
private bool _isLeader;
|
||||
private ulong _leaderTerm;
|
||||
private ConsumerState _state = new();
|
||||
|
||||
/// <summary>IRaftNode — stored as object to avoid cross-dependency on Raft session.</summary>
|
||||
private object? _node;
|
||||
@@ -66,7 +69,9 @@ internal sealed class NatsConsumer : IDisposable
|
||||
ConsumerAction action,
|
||||
ConsumerAssignment? sa)
|
||||
{
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
ArgumentNullException.ThrowIfNull(stream);
|
||||
ArgumentNullException.ThrowIfNull(cfg);
|
||||
return new NatsConsumer(stream.Name, cfg, DateTime.UtcNow);
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
@@ -77,15 +82,28 @@ internal sealed class NatsConsumer : IDisposable
|
||||
/// Stops processing and tears down goroutines / timers.
|
||||
/// Mirrors <c>consumer.stop</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public void Stop() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public void Stop()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (_closed)
|
||||
return;
|
||||
_closed = true;
|
||||
_isLeader = false;
|
||||
_quitCts?.Cancel();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Deletes the consumer and all associated state permanently.
|
||||
/// Mirrors <c>consumer.delete</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public void Delete() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public void Delete() => Stop();
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Info / State
|
||||
@@ -95,29 +113,91 @@ internal sealed class NatsConsumer : IDisposable
|
||||
/// Returns a snapshot of consumer info including config and delivery state.
|
||||
/// Mirrors <c>consumer.info</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public ConsumerInfo GetInfo() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public ConsumerInfo GetInfo()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return new ConsumerInfo
|
||||
{
|
||||
Stream = Stream,
|
||||
Name = Name,
|
||||
Created = Created,
|
||||
Config = Config,
|
||||
Delivered = new SequenceInfo
|
||||
{
|
||||
Consumer = _state.Delivered.Consumer,
|
||||
Stream = _state.Delivered.Stream,
|
||||
},
|
||||
AckFloor = new SequenceInfo
|
||||
{
|
||||
Consumer = _state.AckFloor.Consumer,
|
||||
Stream = _state.AckFloor.Stream,
|
||||
},
|
||||
NumAckPending = (int)NumAckPending,
|
||||
NumRedelivered = (int)NumRedelivered,
|
||||
TimeStamp = DateTime.UtcNow,
|
||||
};
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns the current consumer configuration.
|
||||
/// Mirrors <c>consumer.config</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public ConsumerConfig GetConfig() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public ConsumerConfig GetConfig()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return Config; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Applies an updated configuration to the consumer.
|
||||
/// Mirrors <c>consumer.update</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public void UpdateConfig(ConsumerConfig config) =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public void UpdateConfig(ConsumerConfig config)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(config);
|
||||
_mu.EnterWriteLock();
|
||||
try { Config = config; }
|
||||
finally { _mu.ExitWriteLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns the current durable consumer state (delivered, ack_floor, pending, redelivered).
|
||||
/// Mirrors <c>consumer.state</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public ConsumerState GetConsumerState() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public ConsumerState GetConsumerState()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return new ConsumerState
|
||||
{
|
||||
Delivered = new SequencePair
|
||||
{
|
||||
Consumer = _state.Delivered.Consumer,
|
||||
Stream = _state.Delivered.Stream,
|
||||
},
|
||||
AckFloor = new SequencePair
|
||||
{
|
||||
Consumer = _state.AckFloor.Consumer,
|
||||
Stream = _state.AckFloor.Stream,
|
||||
},
|
||||
Pending = _state.Pending is { Count: > 0 } ? new Dictionary<ulong, Pending>(_state.Pending) : null,
|
||||
Redelivered = _state.Redelivered is { Count: > 0 } ? new Dictionary<ulong, ulong>(_state.Redelivered) : null,
|
||||
};
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Leadership
|
||||
@@ -127,15 +207,30 @@ internal sealed class NatsConsumer : IDisposable
|
||||
/// Returns true if this server is the current consumer leader.
|
||||
/// Mirrors <c>consumer.isLeader</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public bool IsLeader() =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public bool IsLeader()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return _isLeader && !_closed; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Transitions this consumer into or out of the leader role.
|
||||
/// Mirrors <c>consumer.setLeader</c> in server/consumer.go.
|
||||
/// </summary>
|
||||
public void SetLeader(bool isLeader, ulong term) =>
|
||||
throw new NotImplementedException("TODO: session 21 — consumer");
|
||||
public void SetLeader(bool isLeader, ulong term)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_isLeader = isLeader;
|
||||
_leaderTerm = term;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// IDisposable
|
||||
|
||||
@@ -38,6 +38,9 @@ internal sealed class NatsStream : IDisposable
|
||||
internal bool IsMirror;
|
||||
|
||||
private bool _closed;
|
||||
private bool _isLeader;
|
||||
private ulong _leaderTerm;
|
||||
private bool _sealed;
|
||||
private CancellationTokenSource? _quitCts;
|
||||
|
||||
/// <summary>IRaftNode — stored as object to avoid cross-dependency on Raft session.</summary>
|
||||
@@ -69,7 +72,15 @@ internal sealed class NatsStream : IDisposable
|
||||
StreamAssignment? sa,
|
||||
object? server)
|
||||
{
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
ArgumentNullException.ThrowIfNull(acc);
|
||||
ArgumentNullException.ThrowIfNull(cfg);
|
||||
|
||||
var stream = new NatsStream(acc, cfg.Clone(), DateTime.UtcNow)
|
||||
{
|
||||
Store = store,
|
||||
IsMirror = cfg.Mirror != null,
|
||||
};
|
||||
return stream;
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
@@ -80,22 +91,72 @@ internal sealed class NatsStream : IDisposable
|
||||
/// Stops processing and tears down goroutines / timers.
|
||||
/// Mirrors <c>stream.stop</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void Stop() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void Stop()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (_closed)
|
||||
return;
|
||||
|
||||
_closed = true;
|
||||
_isLeader = false;
|
||||
_quitCts?.Cancel();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Deletes the stream and all stored messages permanently.
|
||||
/// Mirrors <c>stream.delete</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void Delete() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void Delete()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (_closed)
|
||||
return;
|
||||
|
||||
_closed = true;
|
||||
_isLeader = false;
|
||||
_quitCts?.Cancel();
|
||||
Store?.Delete(inline: true);
|
||||
Store = null;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Purges messages from the stream according to the optional request filter.
|
||||
/// Mirrors <c>stream.purge</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void Purge(StreamPurgeRequest? req = null) =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void Purge(StreamPurgeRequest? req = null)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (_closed || Store == null)
|
||||
return;
|
||||
|
||||
if (req == null || (string.IsNullOrEmpty(req.Filter) && req.Sequence == 0 && req.Keep == 0))
|
||||
Store.Purge();
|
||||
else
|
||||
Store.PurgeEx(req.Filter ?? string.Empty, req.Sequence, req.Keep);
|
||||
|
||||
SyncCountersFromState(Store.State());
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Info / State
|
||||
@@ -105,22 +166,62 @@ internal sealed class NatsStream : IDisposable
|
||||
/// Returns a snapshot of stream info including config, state, and cluster information.
|
||||
/// Mirrors <c>stream.info</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public StreamInfo GetInfo(bool includeDeleted = false) =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public StreamInfo GetInfo(bool includeDeleted = false)
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return new StreamInfo
|
||||
{
|
||||
Config = Config.Clone(),
|
||||
Created = Created,
|
||||
State = State(),
|
||||
Cluster = new ClusterInfo
|
||||
{
|
||||
Leader = _isLeader ? Name : null,
|
||||
},
|
||||
};
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Asynchronously returns a snapshot of stream info.
|
||||
/// Mirrors <c>stream.info</c> (async path) in server/stream.go.
|
||||
/// </summary>
|
||||
public Task<StreamInfo> GetInfoAsync(bool includeDeleted = false, CancellationToken ct = default) =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
ct.IsCancellationRequested
|
||||
? Task.FromCanceled<StreamInfo>(ct)
|
||||
: Task.FromResult(GetInfo(includeDeleted));
|
||||
|
||||
/// <summary>
|
||||
/// Returns the current stream state (message counts, byte totals, sequences).
|
||||
/// Mirrors <c>stream.state</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public StreamState State() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public StreamState State()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try
|
||||
{
|
||||
if (Store != null)
|
||||
return Store.State();
|
||||
|
||||
return new StreamState
|
||||
{
|
||||
Msgs = (ulong)Math.Max(0, Interlocked.Read(ref Msgs)),
|
||||
Bytes = (ulong)Math.Max(0, Interlocked.Read(ref Bytes)),
|
||||
FirstSeq = (ulong)Math.Max(0, Interlocked.Read(ref FirstSeq)),
|
||||
LastSeq = (ulong)Math.Max(0, Interlocked.Read(ref LastSeq)),
|
||||
};
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Leadership
|
||||
@@ -130,15 +231,30 @@ internal sealed class NatsStream : IDisposable
|
||||
/// Transitions this stream into or out of the leader role.
|
||||
/// Mirrors <c>stream.setLeader</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void SetLeader(bool isLeader, ulong term) =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void SetLeader(bool isLeader, ulong term)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_isLeader = isLeader;
|
||||
_leaderTerm = term;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if this server is the current stream leader.
|
||||
/// Mirrors <c>stream.isLeader</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public bool IsLeader() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public bool IsLeader()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return _isLeader && !_closed; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Configuration
|
||||
@@ -148,22 +264,43 @@ internal sealed class NatsStream : IDisposable
|
||||
/// Returns the owning account.
|
||||
/// Mirrors <c>stream.account</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public Account GetAccount() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public Account GetAccount()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return Account; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns the current stream configuration.
|
||||
/// Mirrors <c>stream.config</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public StreamConfig GetConfig() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public StreamConfig GetConfig()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return Config.Clone(); }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Applies an updated configuration to the stream.
|
||||
/// Mirrors <c>stream.update</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void UpdateConfig(StreamConfig config) =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void UpdateConfig(StreamConfig config)
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(config);
|
||||
Config = config.Clone();
|
||||
Store?.UpdateConfig(Config);
|
||||
_sealed = Config.Sealed;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// Sealed state
|
||||
@@ -173,15 +310,38 @@ internal sealed class NatsStream : IDisposable
|
||||
/// Returns true if the stream is sealed (no new messages accepted).
|
||||
/// Mirrors <c>stream.isSealed</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public bool IsSealed() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public bool IsSealed()
|
||||
{
|
||||
_mu.EnterReadLock();
|
||||
try { return _sealed || Config.Sealed; }
|
||||
finally { _mu.ExitReadLock(); }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Seals the stream so that no new messages can be stored.
|
||||
/// Mirrors <c>stream.seal</c> in server/stream.go.
|
||||
/// </summary>
|
||||
public void Seal() =>
|
||||
throw new NotImplementedException("TODO: session 21 — stream");
|
||||
public void Seal()
|
||||
{
|
||||
_mu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_sealed = true;
|
||||
Config.Sealed = true;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_mu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
private void SyncCountersFromState(StreamState state)
|
||||
{
|
||||
Interlocked.Exchange(ref Msgs, (long)state.Msgs);
|
||||
Interlocked.Exchange(ref Bytes, (long)state.Bytes);
|
||||
Interlocked.Exchange(ref FirstSeq, (long)state.FirstSeq);
|
||||
Interlocked.Exchange(ref LastSeq, (long)state.LastSeq);
|
||||
}
|
||||
|
||||
// -------------------------------------------------------------------------
|
||||
// IDisposable
|
||||
|
||||
@@ -321,57 +321,471 @@ internal sealed class Raft : IRaftNode
|
||||
// -----------------------------------------------------------------------
|
||||
// IRaftNode — stub implementations
|
||||
// -----------------------------------------------------------------------
|
||||
public void Propose(byte[] entry) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ProposeMulti(IReadOnlyList<Entry> entries) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ForwardProposal(byte[] entry) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void InstallSnapshot(byte[] snap, bool force) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public object CreateSnapshotCheckpoint(bool force) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void SendSnapshot(byte[] snap) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool NeedSnapshot() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public (ulong, ulong) Applied(ulong index) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public (ulong, ulong) Processed(ulong index, ulong applied) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void Propose(byte[] entry)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(entry);
|
||||
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
PropQ ??= new IpQueue<ProposedEntry>($"{GroupName}-propose");
|
||||
var pe = new ProposedEntry
|
||||
{
|
||||
Entry = new Entry { Type = EntryType.EntryNormal, Data = [.. entry] },
|
||||
};
|
||||
PropQ.Push(pe);
|
||||
Active = DateTime.UtcNow;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void ProposeMulti(IReadOnlyList<Entry> entries)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(entries);
|
||||
foreach (var entry in entries)
|
||||
{
|
||||
if (entry == null)
|
||||
continue;
|
||||
|
||||
Propose(entry.Data);
|
||||
}
|
||||
}
|
||||
|
||||
public void ForwardProposal(byte[] entry) => Propose(entry);
|
||||
|
||||
public void InstallSnapshot(byte[] snap, bool force)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(snap);
|
||||
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (Snapshotting && !force)
|
||||
return;
|
||||
|
||||
Snapshotting = true;
|
||||
Wps = [.. snap];
|
||||
if (force)
|
||||
Applied_ = Commit;
|
||||
Snapshotting = false;
|
||||
Active = DateTime.UtcNow;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public object CreateSnapshotCheckpoint(bool force) => new Checkpoint
|
||||
{
|
||||
Node = this,
|
||||
Term = Term_,
|
||||
Applied = Applied_,
|
||||
PApplied = PApplied,
|
||||
SnapFile = force ? string.Empty : SnapFile,
|
||||
PeerState = [.. Wps],
|
||||
};
|
||||
|
||||
public void SendSnapshot(byte[] snap) => InstallSnapshot(snap, force: false);
|
||||
|
||||
public bool NeedSnapshot()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return Snapshotting || PApplied > Applied_;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public (ulong, ulong) Applied(ulong index)
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
var entries = Applied_ >= index ? Applied_ - index : 0;
|
||||
return (entries, WalBytes);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public (ulong, ulong) Processed(ulong index, ulong applied)
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (index > Processed_)
|
||||
Processed_ = index;
|
||||
if (applied > Applied_)
|
||||
Applied_ = applied;
|
||||
return (Processed_, WalBytes);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
public RaftState State() => (RaftState)StateValue;
|
||||
public (ulong, ulong) Size() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public (ulong, ulong, ulong) Progress() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool Leader() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public DateTime? LeaderSince() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool Quorum() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool Current() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool Healthy() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public (ulong, ulong) Size()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return (Processed_, WalBytes);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public (ulong, ulong, ulong) Progress()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return (PIndex, Commit, Applied_);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool Leader() => State() == RaftState.Leader;
|
||||
|
||||
public DateTime? LeaderSince()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return Leader() ? (Lsut == default ? Active : Lsut) : null;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool Quorum()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
var clusterSize = ClusterSize();
|
||||
if (clusterSize <= 1)
|
||||
return true;
|
||||
|
||||
var required = Qn > 0 ? Qn : (clusterSize / 2) + 1;
|
||||
var available = 1; // self
|
||||
var now = DateTime.UtcNow;
|
||||
foreach (var peer in Peers_.Values)
|
||||
{
|
||||
if (peer.Kp || now - peer.Ts <= TimeSpan.FromSeconds(30))
|
||||
available++;
|
||||
}
|
||||
|
||||
return available >= required;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool Current()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return !Deleted_ && !Leaderless();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool Healthy() => Current() && Quorum();
|
||||
public ulong Term() => Term_;
|
||||
public bool Leaderless() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public string GroupLeader() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool HadPreviousLeader() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void StepDown(params string[] preferred) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void SetObserver(bool isObserver) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool IsObserver() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void Campaign() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void CampaignImmediately() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool Leaderless() => string.IsNullOrEmpty(LeaderId) && Interlocked.Read(ref HasLeaderV) == 0;
|
||||
public string GroupLeader() => Leader() ? Id : LeaderId;
|
||||
public bool HadPreviousLeader() => Interlocked.Read(ref PLeaderV) != 0 || !string.IsNullOrEmpty(LeaderId);
|
||||
|
||||
public void StepDown(params string[] preferred)
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
StateValue = (int)RaftState.Follower;
|
||||
Interlocked.Exchange(ref HasLeaderV, 0);
|
||||
Interlocked.Exchange(ref PLeaderV, 1);
|
||||
Lxfer = true;
|
||||
Lsut = DateTime.UtcNow;
|
||||
if (preferred is { Length: > 0 })
|
||||
Vote = preferred[0];
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void SetObserver(bool isObserver)
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
Observer_ = isObserver;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool IsObserver() => Observer_;
|
||||
|
||||
public void Campaign()
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (Deleted_)
|
||||
return;
|
||||
|
||||
StateValue = (int)RaftState.Candidate;
|
||||
Active = DateTime.UtcNow;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void CampaignImmediately() => Campaign();
|
||||
public string ID() => Id;
|
||||
public string Group() => GroupName;
|
||||
public IReadOnlyList<Peer> Peers() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ProposeKnownPeers(IReadOnlyList<string> knownPeers) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void UpdateKnownPeers(IReadOnlyList<string> knownPeers) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ProposeAddPeer(string peer) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ProposeRemovePeer(string peer) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool MembershipChangeInProgress() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void AdjustClusterSize(int csz) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void AdjustBootClusterSize(int csz) => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public int ClusterSize() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public IReadOnlyList<Peer> Peers()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
var peers = new List<Peer>(Peers_.Count);
|
||||
foreach (var (id, state) in Peers_)
|
||||
{
|
||||
peers.Add(new Peer
|
||||
{
|
||||
Id = id,
|
||||
Current = state.Kp,
|
||||
Last = state.Ts,
|
||||
Lag = PIndex >= state.Li ? PIndex - state.Li : 0,
|
||||
});
|
||||
}
|
||||
|
||||
return peers;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void ProposeKnownPeers(IReadOnlyList<string> knownPeers)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(knownPeers);
|
||||
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
var now = DateTime.UtcNow;
|
||||
foreach (var lps in Peers_.Values)
|
||||
lps.Kp = false;
|
||||
|
||||
foreach (var peer in knownPeers)
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(peer))
|
||||
continue;
|
||||
|
||||
if (!Peers_.TryGetValue(peer, out var lps))
|
||||
{
|
||||
lps = new Lps();
|
||||
Peers_[peer] = lps;
|
||||
}
|
||||
|
||||
lps.Kp = true;
|
||||
lps.Ts = now;
|
||||
}
|
||||
|
||||
Csz = Math.Max(knownPeers.Count + 1, 1);
|
||||
Qn = (Csz / 2) + 1;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void UpdateKnownPeers(IReadOnlyList<string> knownPeers) => ProposeKnownPeers(knownPeers);
|
||||
|
||||
public void ProposeAddPeer(string peer)
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(peer))
|
||||
return;
|
||||
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (!Peers_.TryGetValue(peer, out var lps))
|
||||
{
|
||||
lps = new Lps();
|
||||
Peers_[peer] = lps;
|
||||
}
|
||||
|
||||
lps.Kp = true;
|
||||
lps.Ts = DateTime.UtcNow;
|
||||
MembChangeIndex = PIndex + 1;
|
||||
Csz = Math.Max(Peers_.Count + 1, 1);
|
||||
Qn = (Csz / 2) + 1;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void ProposeRemovePeer(string peer)
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(peer))
|
||||
return;
|
||||
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
Peers_.Remove(peer);
|
||||
Removed[peer] = DateTime.UtcNow;
|
||||
MembChangeIndex = PIndex + 1;
|
||||
Csz = Math.Max(Peers_.Count + 1, 1);
|
||||
Qn = (Csz / 2) + 1;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public bool MembershipChangeInProgress()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return MembChangeIndex != 0 && MembChangeIndex > Applied_;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void AdjustClusterSize(int csz)
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
Csz = Math.Max(csz, 1);
|
||||
Qn = (Csz / 2) + 1;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void AdjustBootClusterSize(int csz) => AdjustClusterSize(csz);
|
||||
|
||||
public int ClusterSize()
|
||||
{
|
||||
_lock.EnterReadLock();
|
||||
try
|
||||
{
|
||||
return Csz > 0 ? Csz : Math.Max(Peers_.Count + 1, 1);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitReadLock();
|
||||
}
|
||||
}
|
||||
public IpQueue<CommittedEntry> ApplyQ() => ApplyQ_ ?? throw new InvalidOperationException("Apply queue not initialized");
|
||||
public void PauseApply() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void ResumeApply() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public bool DrainAndReplaySnapshot() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void PauseApply() => Paused = true;
|
||||
public void ResumeApply() => Paused = false;
|
||||
|
||||
public bool DrainAndReplaySnapshot()
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
if (Snapshotting)
|
||||
return false;
|
||||
|
||||
HcBehind = false;
|
||||
return true;
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
public ChannelReader<bool> LeadChangeC() => LeadC?.Reader ?? throw new InvalidOperationException("Lead channel not initialized");
|
||||
public ChannelReader<bool> QuitC() => Quit?.Reader ?? throw new InvalidOperationException("Quit channel not initialized");
|
||||
public DateTime Created() => Created_;
|
||||
public void Stop() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void WaitForStop() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void Delete() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void Stop()
|
||||
{
|
||||
_lock.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
StateValue = (int)RaftState.Closed;
|
||||
Elect?.Dispose();
|
||||
Elect = null;
|
||||
Quit ??= Channel.CreateUnbounded<bool>();
|
||||
Quit.Writer.TryWrite(true);
|
||||
}
|
||||
finally
|
||||
{
|
||||
_lock.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
public void WaitForStop()
|
||||
{
|
||||
var q = Quit;
|
||||
if (q == null)
|
||||
return;
|
||||
|
||||
if (q.Reader.TryRead(out _))
|
||||
return;
|
||||
|
||||
q.Reader.WaitToReadAsync().AsTask().Wait(TimeSpan.FromSeconds(1));
|
||||
}
|
||||
|
||||
public void Delete()
|
||||
{
|
||||
Deleted_ = true;
|
||||
Stop();
|
||||
}
|
||||
public bool IsDeleted() => Deleted_;
|
||||
public void RecreateInternalSubs() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public void RecreateInternalSubs() => Active = DateTime.UtcNow;
|
||||
public bool IsSystemAccount() => Interlocked.Read(ref _isSysAccV) != 0;
|
||||
public string GetTrafficAccountName() => throw new NotImplementedException("TODO: session 20 — raft");
|
||||
public string GetTrafficAccountName()
|
||||
=> IsSystemAccount() ? "$SYS" : (string.IsNullOrEmpty(AccName) ? "$G" : AccName);
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
@@ -461,16 +875,65 @@ internal sealed class Checkpoint : IRaftNodeCheckpoint
|
||||
public byte[] PeerState { get; set; } = [];
|
||||
|
||||
public byte[] LoadLastSnapshot()
|
||||
=> throw new NotImplementedException("TODO: session 20 — raft");
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(SnapFile))
|
||||
return [];
|
||||
|
||||
try
|
||||
{
|
||||
return File.Exists(SnapFile) ? File.ReadAllBytes(SnapFile) : [];
|
||||
}
|
||||
catch
|
||||
{
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
public IEnumerable<(AppendEntry Entry, Exception? Error)> AppendEntriesSeq()
|
||||
=> throw new NotImplementedException("TODO: session 20 — raft");
|
||||
{
|
||||
if (Node == null)
|
||||
yield break;
|
||||
|
||||
var entry = new AppendEntry
|
||||
{
|
||||
Leader = Node.Id,
|
||||
TermV = Term,
|
||||
Commit = Applied,
|
||||
PTerm = Node.PTerm,
|
||||
PIndex = PApplied,
|
||||
Reply = Node.AReply,
|
||||
};
|
||||
|
||||
yield return (entry, null);
|
||||
}
|
||||
|
||||
public void Abort()
|
||||
=> throw new NotImplementedException("TODO: session 20 — raft");
|
||||
{
|
||||
if (string.IsNullOrWhiteSpace(SnapFile))
|
||||
return;
|
||||
|
||||
try
|
||||
{
|
||||
if (File.Exists(SnapFile))
|
||||
File.Delete(SnapFile);
|
||||
}
|
||||
catch
|
||||
{
|
||||
// Ignore cleanup failures for aborted checkpoints.
|
||||
}
|
||||
}
|
||||
|
||||
public ulong InstallSnapshot(byte[] data)
|
||||
=> throw new NotImplementedException("TODO: session 20 — raft");
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(data);
|
||||
|
||||
if (string.IsNullOrWhiteSpace(SnapFile))
|
||||
SnapFile = Path.Combine(Path.GetTempPath(), $"raft-snapshot-{Guid.NewGuid():N}.bin");
|
||||
|
||||
File.WriteAllBytes(SnapFile, data);
|
||||
Node?.InstallSnapshot(data, force: true);
|
||||
return (ulong)data.LongLength;
|
||||
}
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
|
||||
@@ -970,20 +970,21 @@ public static class DiskAvailability
|
||||
private const long JetStreamMaxStoreDefault = 1L * 1024 * 1024 * 1024 * 1024;
|
||||
|
||||
/// <summary>
|
||||
/// Returns approximately 75% of available disk space at <paramref name="path"/>.
|
||||
/// Returns <see cref="JetStreamMaxStoreDefault"/> (1 TB) if the check fails.
|
||||
/// Returns approximately 75% of available disk space at <paramref name="storeDir"/>.
|
||||
/// Ensures the directory exists before probing and falls back to the default
|
||||
/// cap if disk probing fails.
|
||||
/// </summary>
|
||||
public static long Available(string path)
|
||||
public static long DiskAvailable(string storeDir)
|
||||
{
|
||||
// TODO: session 17 — implement via DriveInfo or P/Invoke statvfs on non-Windows.
|
||||
try
|
||||
{
|
||||
var drive = new DriveInfo(Path.GetPathRoot(Path.GetFullPath(path)) ?? path);
|
||||
if (!string.IsNullOrWhiteSpace(storeDir))
|
||||
Directory.CreateDirectory(storeDir);
|
||||
|
||||
var root = Path.GetPathRoot(Path.GetFullPath(storeDir));
|
||||
var drive = new DriveInfo(root ?? storeDir);
|
||||
if (drive.IsReady)
|
||||
{
|
||||
// Estimate 75% of available free space, matching Go behaviour.
|
||||
return drive.AvailableFreeSpace / 4 * 3;
|
||||
}
|
||||
}
|
||||
catch
|
||||
{
|
||||
@@ -993,8 +994,14 @@ public static class DiskAvailability
|
||||
return JetStreamMaxStoreDefault;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns approximately 75% of available disk space at <paramref name="path"/>.
|
||||
/// Returns <see cref="JetStreamMaxStoreDefault"/> (1 TB) if the check fails.
|
||||
/// </summary>
|
||||
public static long Available(string path) => DiskAvailable(path);
|
||||
|
||||
/// <summary>
|
||||
/// Returns true if at least <paramref name="needed"/> bytes are available at <paramref name="path"/>.
|
||||
/// </summary>
|
||||
public static bool Check(string path, long needed) => Available(path) >= needed;
|
||||
public static bool Check(string path, long needed) => DiskAvailable(path) >= needed;
|
||||
}
|
||||
|
||||
@@ -409,6 +409,9 @@ public sealed class WaitingRequest
|
||||
|
||||
/// <summary>Bytes accumulated so far.</summary>
|
||||
public int B { get; set; }
|
||||
|
||||
/// <summary>Optional pull request priority group metadata.</summary>
|
||||
public PriorityGroup? PriorityGroup { get; set; }
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
@@ -418,31 +421,213 @@ public sealed class WaitingRequest
|
||||
public sealed class WaitQueue
|
||||
{
|
||||
private readonly List<WaitingRequest> _reqs = new();
|
||||
private readonly int _max;
|
||||
private int _head;
|
||||
private int _tail;
|
||||
|
||||
public WaitQueue(int max = 0)
|
||||
{
|
||||
_max = max;
|
||||
}
|
||||
|
||||
/// <summary>Number of pending requests in the queue.</summary>
|
||||
public int Len => _reqs.Count;
|
||||
public int Len => _tail - _head;
|
||||
|
||||
/// <summary>Add a waiting request to the tail of the queue.</summary>
|
||||
public void Add(WaitingRequest req) =>
|
||||
throw new NotImplementedException("TODO: session 21");
|
||||
public void Add(WaitingRequest req)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(req);
|
||||
_reqs.Add(req);
|
||||
_tail++;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Add a waiting request ordered by priority while preserving FIFO order
|
||||
/// within each priority level.
|
||||
/// </summary>
|
||||
public bool AddPrioritized(WaitingRequest req)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(req);
|
||||
if (IsFull(_max))
|
||||
return false;
|
||||
InsertSorted(req);
|
||||
return true;
|
||||
}
|
||||
|
||||
/// <summary>Insert a request in priority order (lower number = higher priority).</summary>
|
||||
public void InsertSorted(WaitingRequest req)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(req);
|
||||
|
||||
if (Len == 0)
|
||||
{
|
||||
Add(req);
|
||||
return;
|
||||
}
|
||||
|
||||
var priority = PriorityOf(req);
|
||||
var insertAt = _head;
|
||||
while (insertAt < _tail)
|
||||
{
|
||||
if (PriorityOf(_reqs[insertAt]) > priority)
|
||||
break;
|
||||
insertAt++;
|
||||
}
|
||||
|
||||
_reqs.Insert(insertAt, req);
|
||||
_tail++;
|
||||
}
|
||||
|
||||
/// <summary>Peek at the head request without removing it.</summary>
|
||||
public WaitingRequest? Peek() =>
|
||||
throw new NotImplementedException("TODO: session 21");
|
||||
public WaitingRequest? Peek()
|
||||
{
|
||||
if (Len == 0)
|
||||
return null;
|
||||
return _reqs[_head];
|
||||
}
|
||||
|
||||
/// <summary>Remove and return the head request.</summary>
|
||||
public WaitingRequest? Pop() =>
|
||||
throw new NotImplementedException("TODO: session 21");
|
||||
public WaitingRequest? Pop()
|
||||
{
|
||||
var wr = Peek();
|
||||
if (wr is null)
|
||||
return null;
|
||||
|
||||
wr.D++;
|
||||
wr.N--;
|
||||
if (wr.N > 0 && Len > 1)
|
||||
{
|
||||
RemoveCurrent();
|
||||
Add(wr);
|
||||
}
|
||||
else if (wr.N <= 0)
|
||||
{
|
||||
RemoveCurrent();
|
||||
}
|
||||
|
||||
return wr;
|
||||
}
|
||||
|
||||
/// <summary>Returns true if the queue contains no active requests.</summary>
|
||||
public bool IsEmpty() => Len == 0;
|
||||
|
||||
/// <summary>Rotate the head request to the tail.</summary>
|
||||
public void Cycle()
|
||||
{
|
||||
var wr = Peek();
|
||||
if (wr is null)
|
||||
return;
|
||||
|
||||
RemoveCurrent();
|
||||
Add(wr);
|
||||
}
|
||||
|
||||
/// <summary>Pop strategy used by pull consumers based on priority policy.</summary>
|
||||
public WaitingRequest? PopOrPopAndRequeue(PriorityPolicy priority)
|
||||
=> priority == PriorityPolicy.PriorityPrioritized ? PopAndRequeue() : Pop();
|
||||
|
||||
/// <summary>
|
||||
/// Pop and requeue to the end of the same priority band while preserving
|
||||
/// stable order within that band.
|
||||
/// </summary>
|
||||
public WaitingRequest? PopAndRequeue()
|
||||
{
|
||||
var wr = Peek();
|
||||
if (wr is null)
|
||||
return null;
|
||||
|
||||
wr.D++;
|
||||
wr.N--;
|
||||
|
||||
if (wr.N > 0 && Len > 1)
|
||||
{
|
||||
// Remove the current head and insert it back in priority order.
|
||||
_reqs.RemoveAt(_head);
|
||||
_tail--;
|
||||
InsertSorted(wr);
|
||||
}
|
||||
else if (wr.N <= 0)
|
||||
{
|
||||
RemoveCurrent();
|
||||
}
|
||||
|
||||
return wr;
|
||||
}
|
||||
|
||||
/// <summary>Remove the current head request from the queue.</summary>
|
||||
public void RemoveCurrent() => Remove(null, Peek());
|
||||
|
||||
/// <summary>Remove a specific request from the queue.</summary>
|
||||
public void Remove(WaitingRequest? pre, WaitingRequest? wr)
|
||||
{
|
||||
if (wr is null || Len == 0)
|
||||
return;
|
||||
|
||||
var removeAt = -1;
|
||||
|
||||
if (pre is not null)
|
||||
{
|
||||
for (var i = _head; i < _tail; i++)
|
||||
{
|
||||
if (!ReferenceEquals(_reqs[i], pre))
|
||||
continue;
|
||||
|
||||
var candidate = i + 1;
|
||||
if (candidate < _tail && ReferenceEquals(_reqs[candidate], wr))
|
||||
removeAt = candidate;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (removeAt < 0)
|
||||
{
|
||||
for (var i = _head; i < _tail; i++)
|
||||
{
|
||||
if (ReferenceEquals(_reqs[i], wr))
|
||||
{
|
||||
removeAt = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (removeAt < 0)
|
||||
return;
|
||||
|
||||
if (removeAt == _head)
|
||||
{
|
||||
_head++;
|
||||
}
|
||||
else
|
||||
{
|
||||
_reqs.RemoveAt(removeAt);
|
||||
_tail--;
|
||||
}
|
||||
|
||||
if (_head > 32 && _head * 2 >= _tail)
|
||||
Compress();
|
||||
}
|
||||
|
||||
/// <summary>Compact the internal backing list to reclaim removed slots.</summary>
|
||||
public void Compress() =>
|
||||
throw new NotImplementedException("TODO: session 21");
|
||||
public void Compress()
|
||||
{
|
||||
if (_head == 0)
|
||||
return;
|
||||
|
||||
_reqs.RemoveRange(0, _head);
|
||||
_tail -= _head;
|
||||
_head = 0;
|
||||
}
|
||||
|
||||
/// <summary>Returns true if the queue is at capacity (head == tail when full).</summary>
|
||||
public bool IsFull(int max) =>
|
||||
throw new NotImplementedException("TODO: session 21");
|
||||
public bool IsFull(int max)
|
||||
{
|
||||
if (max <= 0)
|
||||
return false;
|
||||
return Len >= max;
|
||||
}
|
||||
|
||||
private static int PriorityOf(WaitingRequest req) => req.PriorityGroup?.Priority ?? int.MaxValue;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
|
||||
@@ -38,6 +38,32 @@ public static class NatsHeaderConstants
|
||||
// Other commonly used headers.
|
||||
public const string JsMsgId = "Nats-Msg-Id";
|
||||
public const string JsMsgRollup = "Nats-Rollup";
|
||||
public const string JsMsgSize = "Nats-Msg-Size";
|
||||
public const string JsResponseType = "Nats-Response-Type";
|
||||
public const string JsMessageTtl = "Nats-TTL";
|
||||
public const string JsMarkerReason = "Nats-Marker-Reason";
|
||||
public const string JsMessageIncr = "Nats-Incr";
|
||||
public const string JsBatchId = "Nats-Batch-Id";
|
||||
public const string JsBatchSeq = "Nats-Batch-Sequence";
|
||||
public const string JsBatchCommit = "Nats-Batch-Commit";
|
||||
|
||||
// Scheduling headers.
|
||||
public const string JsSchedulePattern = "Nats-Schedule";
|
||||
public const string JsScheduleTtl = "Nats-Schedule-TTL";
|
||||
public const string JsScheduleTarget = "Nats-Schedule-Target";
|
||||
public const string JsScheduleSource = "Nats-Schedule-Source";
|
||||
public const string JsScheduler = "Nats-Scheduler";
|
||||
public const string JsScheduleNext = "Nats-Schedule-Next";
|
||||
public const string JsScheduleNextPurge = "purge";
|
||||
|
||||
// Rollup values.
|
||||
public const string JsMsgRollupSubject = "sub";
|
||||
public const string JsMsgRollupAll = "all";
|
||||
|
||||
// Marker reasons.
|
||||
public const string JsMarkerReasonMaxAge = "MaxAge";
|
||||
public const string JsMarkerReasonPurge = "Purge";
|
||||
public const string JsMarkerReasonRemove = "Remove";
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
|
||||
@@ -706,15 +706,27 @@ public sealed partial class NatsServer
|
||||
/// <summary>
|
||||
/// Stub: enables account tracking (session 12 — events.go).
|
||||
/// </summary>
|
||||
internal void EnableAccountTracking(Account acc) { /* session 12 */ }
|
||||
internal void EnableAccountTracking(Account acc)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(acc);
|
||||
Debugf("Enabled account tracking for {0}", acc.Name);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Stub: registers system imports on an account (session 12).
|
||||
/// </summary>
|
||||
internal void RegisterSystemImports(Account acc) { /* session 12 */ }
|
||||
internal void RegisterSystemImports(Account acc)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(acc);
|
||||
acc.Imports.Services ??= new Dictionary<string, List<ServiceImportEntry>>(StringComparer.Ordinal);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Stub: adds system-account exports (session 12).
|
||||
/// </summary>
|
||||
internal void AddSystemAccountExports(Account acc) { /* session 12 */ }
|
||||
internal void AddSystemAccountExports(Account acc)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(acc);
|
||||
acc.Exports.Services ??= new Dictionary<string, ServiceExportEntry>(StringComparer.Ordinal);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -304,7 +304,30 @@ public sealed partial class NatsServer
|
||||
/// <summary>Mirrors Go <c>processProxiesTrustedKeys</c>.</summary>
|
||||
internal void ProcessProxiesTrustedKeys()
|
||||
{
|
||||
// TODO: parse proxy trusted key strings into _proxyTrustedKeys set
|
||||
var opts = GetOpts();
|
||||
var keys = new HashSet<string>(StringComparer.Ordinal);
|
||||
|
||||
if (opts.Proxies?.Trusted is { Count: > 0 })
|
||||
{
|
||||
foreach (var proxy in opts.Proxies.Trusted)
|
||||
{
|
||||
if (!string.IsNullOrWhiteSpace(proxy.Key))
|
||||
keys.Add(proxy.Key.Trim());
|
||||
}
|
||||
}
|
||||
|
||||
if (opts.TrustedKeys is { Count: > 0 })
|
||||
{
|
||||
foreach (var key in opts.TrustedKeys)
|
||||
{
|
||||
if (!string.IsNullOrWhiteSpace(key))
|
||||
keys.Add(key.Trim());
|
||||
}
|
||||
}
|
||||
|
||||
_proxiesKeyPairs.Clear();
|
||||
foreach (var key in keys)
|
||||
_proxiesKeyPairs.Add(key);
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
@@ -318,7 +341,21 @@ public sealed partial class NatsServer
|
||||
/// Config reload stub.
|
||||
/// Mirrors Go <c>Server.Reload</c>.
|
||||
/// </summary>
|
||||
internal void Reload() => throw new NotImplementedException("TODO: config reload — implement in later session");
|
||||
internal void Reload()
|
||||
{
|
||||
_reloadMu.EnterWriteLock();
|
||||
try
|
||||
{
|
||||
_configTime = DateTime.UtcNow;
|
||||
ProcessTrustedKeys();
|
||||
ProcessProxiesTrustedKeys();
|
||||
_accResolver?.Reload();
|
||||
}
|
||||
finally
|
||||
{
|
||||
_reloadMu.ExitWriteLock();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Returns a Task that shuts the server down asynchronously.
|
||||
|
||||
@@ -785,25 +785,73 @@ public sealed partial class NatsServer
|
||||
// =========================================================================
|
||||
|
||||
/// <summary>Stub — JetStream pull-consumer signalling (session 19).</summary>
|
||||
private void SignalPullConsumers() { }
|
||||
private void SignalPullConsumers()
|
||||
{
|
||||
foreach (var c in _clients.Values)
|
||||
{
|
||||
if (c.Kind == ClientKind.JetStream)
|
||||
c.FlushSignal();
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Stub — Raft step-down (session 20).</summary>
|
||||
private void StepdownRaftNodes() { }
|
||||
private void StepdownRaftNodes()
|
||||
{
|
||||
foreach (var node in _raftNodes.Values)
|
||||
{
|
||||
var t = node.GetType();
|
||||
var stepDown = t.GetMethod("StepDown", Type.EmptyTypes);
|
||||
if (stepDown != null)
|
||||
{
|
||||
stepDown.Invoke(node, null);
|
||||
continue;
|
||||
}
|
||||
|
||||
stepDown = t.GetMethod("StepDown", [typeof(string[])]);
|
||||
if (stepDown != null)
|
||||
stepDown.Invoke(node, [Array.Empty<string>()]);
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Stub — eventing shutdown (session 12).</summary>
|
||||
private void ShutdownEventing() { }
|
||||
private void ShutdownEventing()
|
||||
{
|
||||
if (_sys == null)
|
||||
return;
|
||||
|
||||
_sys.Sweeper?.Dispose();
|
||||
_sys.Sweeper = null;
|
||||
_sys.StatsMsgTimer?.Dispose();
|
||||
_sys.StatsMsgTimer = null;
|
||||
_sys.Replies.Clear();
|
||||
_sys = null;
|
||||
}
|
||||
|
||||
/// <summary>Stub — JetStream shutdown (session 19).</summary>
|
||||
private void ShutdownJetStream() { }
|
||||
private void ShutdownJetStream()
|
||||
{
|
||||
_info.JetStream = false;
|
||||
}
|
||||
|
||||
/// <summary>Stub — Raft nodes shutdown (session 20).</summary>
|
||||
private void ShutdownRaftNodes() { }
|
||||
private void ShutdownRaftNodes()
|
||||
{
|
||||
foreach (var node in _raftNodes.Values)
|
||||
{
|
||||
var stop = node.GetType().GetMethod("Stop", Type.EmptyTypes);
|
||||
stop?.Invoke(node, null);
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Stub — Raft leader transfer (session 20). Returns false (no leaders to transfer).</summary>
|
||||
private bool TransferRaftLeaders() => false;
|
||||
|
||||
/// <summary>Stub — LDM shutdown event (session 12).</summary>
|
||||
private void SendLDMShutdownEventLocked() { }
|
||||
private void SendLDMShutdownEventLocked()
|
||||
{
|
||||
_ldm = true;
|
||||
Noticef("Lame duck shutdown event emitted");
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Stub — closes WebSocket server if running (session 23).
|
||||
@@ -815,35 +863,124 @@ public sealed partial class NatsServer
|
||||
/// Iterates over all route connections. Stub — session 14.
|
||||
/// Server lock must be held on entry.
|
||||
/// </summary>
|
||||
internal void ForEachRoute(Action<ClientConnection> fn) { }
|
||||
internal void ForEachRoute(Action<ClientConnection> fn)
|
||||
{
|
||||
if (fn == null)
|
||||
return;
|
||||
|
||||
var seen = new HashSet<ulong>();
|
||||
foreach (var list in _routes.Values)
|
||||
{
|
||||
foreach (var route in list)
|
||||
{
|
||||
if (seen.Add(route.Cid))
|
||||
fn(route);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Iterates over all remote (outbound route) connections. Stub — session 14.
|
||||
/// Server lock must be held on entry.
|
||||
/// </summary>
|
||||
private void ForEachRemote(Action<ClientConnection> fn) { }
|
||||
private void ForEachRemote(Action<ClientConnection> fn) => ForEachRoute(fn);
|
||||
|
||||
/// <summary>Stub — collects all gateway connections (session 16).</summary>
|
||||
private void GetAllGatewayConnections(Dictionary<ulong, ClientConnection> conns) { }
|
||||
private void GetAllGatewayConnections(Dictionary<ulong, ClientConnection> conns)
|
||||
{
|
||||
foreach (var c in _gateway.Out.Values)
|
||||
conns[c.Cid] = c;
|
||||
foreach (var c in _gateway.In.Values)
|
||||
conns[c.Cid] = c;
|
||||
}
|
||||
|
||||
/// <summary>Stub — removes a route connection (session 14).</summary>
|
||||
private void RemoveRoute(ClientConnection c) { }
|
||||
private void RemoveRoute(ClientConnection c)
|
||||
{
|
||||
foreach (var key in _routes.Keys.ToArray())
|
||||
{
|
||||
var list = _routes[key];
|
||||
list.RemoveAll(rc => rc.Cid == c.Cid);
|
||||
if (list.Count == 0)
|
||||
_routes.Remove(key);
|
||||
}
|
||||
_clients.Remove(c.Cid);
|
||||
}
|
||||
|
||||
/// <summary>Stub — removes a remote gateway connection (session 16).</summary>
|
||||
private void RemoveRemoteGatewayConnection(ClientConnection c) { }
|
||||
private void RemoveRemoteGatewayConnection(ClientConnection c)
|
||||
{
|
||||
foreach (var key in _gateway.Out.Keys.ToArray())
|
||||
{
|
||||
if (_gateway.Out[key].Cid == c.Cid)
|
||||
_gateway.Out.Remove(key);
|
||||
}
|
||||
_gateway.Outo.RemoveAll(gc => gc.Cid == c.Cid);
|
||||
_gateway.In.Remove(c.Cid);
|
||||
_clients.Remove(c.Cid);
|
||||
}
|
||||
|
||||
/// <summary>Stub — removes a leaf-node connection (session 15).</summary>
|
||||
private void RemoveLeafNodeConnection(ClientConnection c) { }
|
||||
private void RemoveLeafNodeConnection(ClientConnection c)
|
||||
{
|
||||
_leafs.Remove(c.Cid);
|
||||
_clients.Remove(c.Cid);
|
||||
}
|
||||
|
||||
/// <summary>Stub — sends async INFO to clients (session 10/11). No-op until clients are running.</summary>
|
||||
private void SendAsyncInfoToClients(bool cliUpdated, bool wsUpdated) { }
|
||||
private void SendAsyncInfoToClients(bool cliUpdated, bool wsUpdated)
|
||||
{
|
||||
if (!cliUpdated && !wsUpdated)
|
||||
return;
|
||||
|
||||
foreach (var c in _clients.Values)
|
||||
c.FlushSignal();
|
||||
}
|
||||
|
||||
/// <summary>Stub — updates route subscription map (session 14).</summary>
|
||||
private void UpdateRouteSubscriptionMap(Account acc, Subscription sub, int delta) { }
|
||||
private void UpdateRouteSubscriptionMap(Account acc, Subscription sub, int delta)
|
||||
{
|
||||
if (acc == null || sub == null || delta == 0)
|
||||
return;
|
||||
}
|
||||
|
||||
/// <summary>Stub — updates gateway sub interest (session 16).</summary>
|
||||
private void GatewayUpdateSubInterest(string accName, Subscription sub, int delta) { }
|
||||
private void GatewayUpdateSubInterest(string accName, Subscription sub, int delta)
|
||||
{
|
||||
if (string.IsNullOrEmpty(accName) || sub == null || delta == 0 || sub.Subject.Length == 0)
|
||||
return;
|
||||
|
||||
var subject = System.Text.Encoding.UTF8.GetString(sub.Subject);
|
||||
var key = sub.Queue is { Length: > 0 }
|
||||
? $"{subject} {System.Text.Encoding.UTF8.GetString(sub.Queue)}"
|
||||
: subject;
|
||||
|
||||
lock (_gateway.PasiLock)
|
||||
{
|
||||
if (!_gateway.Pasi.TryGetValue(accName, out var map))
|
||||
{
|
||||
map = new Dictionary<string, SitAlly>(StringComparer.Ordinal);
|
||||
_gateway.Pasi[accName] = map;
|
||||
}
|
||||
|
||||
if (!map.TryGetValue(key, out var tally))
|
||||
tally = new SitAlly { N = 0, Q = sub.Queue is { Length: > 0 } };
|
||||
|
||||
tally.N += delta;
|
||||
if (tally.N <= 0)
|
||||
map.Remove(key);
|
||||
else
|
||||
map[key] = tally;
|
||||
|
||||
if (map.Count == 0)
|
||||
_gateway.Pasi.Remove(accName);
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>Stub — account disconnect event (session 12).</summary>
|
||||
private void AccountDisconnectEvent(ClientConnection c, DateTime now, string reason) { }
|
||||
private void AccountDisconnectEvent(ClientConnection c, DateTime now, string reason)
|
||||
{
|
||||
var accName = c.GetAccount() is Account acc ? acc.Name : string.Empty;
|
||||
Debugf("Account disconnect: cid={0} account={1} reason={2} at={3:o}", c.Cid, accName, reason, now);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -16,6 +16,7 @@
|
||||
|
||||
using System.Net;
|
||||
using System.Net.Sockets;
|
||||
using System.Security.Cryptography;
|
||||
using System.Text.Json;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
@@ -70,7 +71,7 @@ public sealed partial class NatsServer
|
||||
/// Stub — full implementation in session 11.
|
||||
/// Mirrors Go <c>Server.generateNonce()</c>.
|
||||
/// </summary>
|
||||
private void GenerateNonce(byte[] nonce) { }
|
||||
private void GenerateNonce(byte[] nonce) => RandomNumberGenerator.Fill(nonce);
|
||||
|
||||
// =========================================================================
|
||||
// INFO JSON serialisation (feature 3124)
|
||||
|
||||
@@ -231,4 +231,6 @@ public enum ServerCommand
|
||||
Quit,
|
||||
Reopen,
|
||||
Reload,
|
||||
Term,
|
||||
LameDuckMode,
|
||||
}
|
||||
|
||||
@@ -0,0 +1,62 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using System.Reflection;
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.Accounts;
|
||||
|
||||
public sealed class ResolverDefaultsOpsTests
|
||||
{
|
||||
[Fact]
|
||||
public async Task ResolverDefaults_StartReloadClose_ShouldBeNoOps()
|
||||
{
|
||||
var resolver = new DummyResolver();
|
||||
|
||||
resolver.IsReadOnly().ShouldBeTrue();
|
||||
resolver.IsTrackingUpdate().ShouldBeFalse();
|
||||
|
||||
resolver.Start(new object());
|
||||
resolver.Reload();
|
||||
resolver.Close();
|
||||
|
||||
var jwt = await resolver.FetchAsync("A");
|
||||
jwt.ShouldBe("jwt");
|
||||
|
||||
await Should.ThrowAsync<NotSupportedException>(() => resolver.StoreAsync("A", "jwt"));
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void UpdateLeafNodes_SubscriptionDelta_ShouldUpdateMaps()
|
||||
{
|
||||
var acc = new Account { Name = "A" };
|
||||
var sub = new Subscription
|
||||
{
|
||||
Subject = System.Text.Encoding.UTF8.GetBytes("foo"),
|
||||
Queue = System.Text.Encoding.UTF8.GetBytes("q"),
|
||||
Qw = 2,
|
||||
};
|
||||
|
||||
acc.UpdateLeafNodes(sub, 1);
|
||||
|
||||
var rm = (Dictionary<string, int>?)typeof(Account)
|
||||
.GetField("_rm", BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.GetValue(acc);
|
||||
rm.ShouldNotBeNull();
|
||||
rm!["foo"].ShouldBe(1);
|
||||
|
||||
var lqws = (Dictionary<string, int>?)typeof(Account)
|
||||
.GetField("_lqws", BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.GetValue(acc);
|
||||
lqws.ShouldNotBeNull();
|
||||
lqws!["foo q"].ShouldBe(2);
|
||||
}
|
||||
|
||||
private sealed class DummyResolver : ResolverDefaultsOps
|
||||
{
|
||||
public override Task<string> FetchAsync(string name, CancellationToken ct = default)
|
||||
=> Task.FromResult("jwt");
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,81 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server.Auth.Ocsp;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.Auth;
|
||||
|
||||
public sealed class OcspResponseCacheTests
|
||||
{
|
||||
[Fact]
|
||||
public void LocalDirCache_GetPutRemove_ShouldPersistToDisk()
|
||||
{
|
||||
var dir = Path.Combine(Path.GetTempPath(), $"ocsp-{Guid.NewGuid():N}");
|
||||
Directory.CreateDirectory(dir);
|
||||
try
|
||||
{
|
||||
var cache = new LocalDirCache(dir);
|
||||
cache.Get("abc").ShouldBeNull();
|
||||
|
||||
cache.Put("abc", [1, 2, 3]);
|
||||
cache.Get("abc").ShouldBe([1, 2, 3]);
|
||||
|
||||
cache.Remove("abc");
|
||||
cache.Get("abc").ShouldBeNull();
|
||||
}
|
||||
finally
|
||||
{
|
||||
Directory.Delete(dir, recursive: true);
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void NoOpCache_LifecycleAndStats_ShouldNoOpSafely()
|
||||
{
|
||||
var noOp = new NoOpCache();
|
||||
noOp.Online().ShouldBeFalse();
|
||||
noOp.Type().ShouldBe("none");
|
||||
noOp.Config().ShouldNotBeNull();
|
||||
noOp.Stats().ShouldBeNull();
|
||||
|
||||
noOp.Start();
|
||||
noOp.Online().ShouldBeTrue();
|
||||
noOp.Stats().ShouldNotBeNull();
|
||||
|
||||
noOp.Put("k", [5]);
|
||||
noOp.Get("k").ShouldBeNull();
|
||||
noOp.Remove("k"); // alias to Delete
|
||||
noOp.Delete("k");
|
||||
|
||||
noOp.Stop();
|
||||
noOp.Online().ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void OcspMonitor_StartAndStop_ShouldLoadStaple()
|
||||
{
|
||||
var dir = Path.Combine(Path.GetTempPath(), $"ocsp-monitor-{Guid.NewGuid():N}");
|
||||
Directory.CreateDirectory(dir);
|
||||
try
|
||||
{
|
||||
var stapleFile = Path.Combine(dir, "staple.bin");
|
||||
File.WriteAllBytes(stapleFile, [9, 9]);
|
||||
|
||||
var monitor = new OcspMonitor
|
||||
{
|
||||
OcspStapleFile = stapleFile,
|
||||
CheckInterval = TimeSpan.FromMilliseconds(10),
|
||||
};
|
||||
|
||||
monitor.Start();
|
||||
Thread.Sleep(30);
|
||||
monitor.GetStaple().ShouldBe([9, 9]);
|
||||
monitor.Stop();
|
||||
}
|
||||
finally
|
||||
{
|
||||
Directory.Delete(dir, recursive: true);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,61 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using System.Reflection;
|
||||
using System.Text;
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests;
|
||||
|
||||
public sealed class ClientConnectionStubFeaturesTests
|
||||
{
|
||||
[Fact]
|
||||
public void ProcessConnect_ProcessPong_AndTimers_ShouldBehave()
|
||||
{
|
||||
var (server, err) = NatsServer.NewServer(new ServerOptions
|
||||
{
|
||||
PingInterval = TimeSpan.FromMilliseconds(20),
|
||||
AuthTimeout = 0.1,
|
||||
});
|
||||
err.ShouldBeNull();
|
||||
|
||||
using var ms = new MemoryStream();
|
||||
var c = new ClientConnection(ClientKind.Client, server, ms)
|
||||
{
|
||||
Cid = 9,
|
||||
Trace = true,
|
||||
};
|
||||
|
||||
var connectJson = Encoding.UTF8.GetBytes("{\"echo\":false,\"headers\":true,\"name\":\"unit\"}");
|
||||
c.ProcessConnect(connectJson);
|
||||
c.Opts.Name.ShouldBe("unit");
|
||||
c.Echo.ShouldBeFalse();
|
||||
c.Headers.ShouldBeTrue();
|
||||
|
||||
c.RttStart = DateTime.UtcNow - TimeSpan.FromMilliseconds(50);
|
||||
c.ProcessPong();
|
||||
c.GetRttValue().ShouldBeGreaterThan(TimeSpan.Zero);
|
||||
|
||||
c.SetPingTimer();
|
||||
GetTimer(c, "_pingTimer").ShouldNotBeNull();
|
||||
|
||||
c.SetAuthTimer(TimeSpan.FromMilliseconds(20));
|
||||
GetTimer(c, "_atmr").ShouldNotBeNull();
|
||||
|
||||
c.TraceMsg(Encoding.UTF8.GetBytes("MSG"));
|
||||
c.FlushSignal();
|
||||
c.UpdateS2AutoCompressionLevel();
|
||||
|
||||
c.SetExpirationTimer(TimeSpan.Zero);
|
||||
c.IsClosed().ShouldBeTrue();
|
||||
}
|
||||
|
||||
private static Timer? GetTimer(ClientConnection c, string field)
|
||||
{
|
||||
return (Timer?)typeof(ClientConnection)
|
||||
.GetField(field, BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.GetValue(c);
|
||||
}
|
||||
}
|
||||
@@ -77,4 +77,16 @@ public sealed class AccessTimeServiceTests : IDisposable
|
||||
// Mirror: TestUnbalancedUnregister
|
||||
Should.Throw<InvalidOperationException>(() => AccessTimeService.Unregister());
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void Init_ShouldBeIdempotentAndNonThrowing()
|
||||
{
|
||||
Should.NotThrow(() => AccessTimeService.Init());
|
||||
var first = AccessTimeService.AccessTime();
|
||||
first.ShouldBeGreaterThan(0);
|
||||
|
||||
Should.NotThrow(() => AccessTimeService.Init());
|
||||
var second = AccessTimeService.AccessTime();
|
||||
second.ShouldBeGreaterThan(0);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -28,6 +28,62 @@ namespace ZB.MOM.NatsNet.Server.Tests.Internal;
|
||||
/// </summary>
|
||||
public sealed class IpQueueTests
|
||||
{
|
||||
[Fact]
|
||||
public void IpqMaxRecycleSize_ShouldAffectQueueConfig()
|
||||
{
|
||||
var q = IpQueue<int>.NewIPQueue("opt-max-recycle", null, IpQueue<int>.IpqMaxRecycleSize(123));
|
||||
q.MaxRecycleSize.ShouldBe(123);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void IpqSizeCalculation_AndLimitBySize_ShouldEnforceLimit()
|
||||
{
|
||||
var q = IpQueue<byte[]>.NewIPQueue(
|
||||
"opt-size-limit",
|
||||
null,
|
||||
IpQueue<byte[]>.IpqSizeCalculation(e => (ulong)e.Length),
|
||||
IpQueue<byte[]>.IpqLimitBySize(8));
|
||||
|
||||
var (_, err1) = q.Push(new byte[4]);
|
||||
err1.ShouldBeNull();
|
||||
|
||||
var (_, err2) = q.Push(new byte[4]);
|
||||
err2.ShouldBeNull();
|
||||
|
||||
var (_, err3) = q.Push(new byte[1]);
|
||||
err3.ShouldBeSameAs(IpQueueErrors.SizeLimitReached);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void IpqLimitByLen_ShouldEnforceLengthLimit()
|
||||
{
|
||||
var q = IpQueue<int>.NewIPQueue("opt-len-limit", null, IpQueue<int>.IpqLimitByLen(2));
|
||||
|
||||
q.Push(1).error.ShouldBeNull();
|
||||
q.Push(2).error.ShouldBeNull();
|
||||
q.Push(3).error.ShouldBeSameAs(IpQueueErrors.LenLimitReached);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void NewIPQueue_ShouldApplyOptionsAndRegister()
|
||||
{
|
||||
var registry = new ConcurrentDictionary<string, object>();
|
||||
var q = IpQueue<int>.NewIPQueue(
|
||||
"opt-factory",
|
||||
registry,
|
||||
IpQueue<int>.IpqMaxRecycleSize(55),
|
||||
IpQueue<int>.IpqLimitByLen(1));
|
||||
|
||||
q.MaxRecycleSize.ShouldBe(55);
|
||||
registry.TryGetValue("opt-factory", out var registered).ShouldBeTrue();
|
||||
registered.ShouldBeSameAs(q);
|
||||
|
||||
var (_, err1) = q.Push(1);
|
||||
err1.ShouldBeNull();
|
||||
var (_, err2) = q.Push(2);
|
||||
err2.ShouldBeSameAs(IpQueueErrors.LenLimitReached);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void Basic_ShouldInitialiseCorrectly()
|
||||
{
|
||||
|
||||
@@ -22,6 +22,17 @@ namespace ZB.MOM.NatsNet.Server.Tests.Internal;
|
||||
/// </summary>
|
||||
public sealed class RateCounterTests
|
||||
{
|
||||
[Fact]
|
||||
public void NewRateCounter_ShouldCreateWithDefaultInterval()
|
||||
{
|
||||
var counter = RateCounter.NewRateCounter(2);
|
||||
counter.Interval.ShouldBe(TimeSpan.FromSeconds(1));
|
||||
|
||||
counter.Allow().ShouldBeTrue();
|
||||
counter.Allow().ShouldBeTrue();
|
||||
counter.Allow().ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public async Task RateCounter_ShouldAllowUpToLimitThenBlockAndReset()
|
||||
{
|
||||
|
||||
@@ -11,7 +11,10 @@
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
using System.Net;
|
||||
using System.Text.Json;
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.Internal;
|
||||
@@ -191,4 +194,86 @@ public sealed class ServerUtilitiesTests
|
||||
$"VersionAtLeast({version}, {major}, {minor}, {update})");
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void RefCountedUrlSet_Wrappers_ShouldTrackRefCounts()
|
||||
{
|
||||
var set = new RefCountedUrlSet();
|
||||
ServerUtilities.AddUrl(set, "nats://a:4222").ShouldBeTrue();
|
||||
ServerUtilities.AddUrl(set, "nats://a:4222").ShouldBeFalse();
|
||||
ServerUtilities.AddUrl(set, "nats://b:4222").ShouldBeTrue();
|
||||
|
||||
ServerUtilities.RemoveUrl(set, "nats://a:4222").ShouldBeFalse();
|
||||
ServerUtilities.RemoveUrl(set, "nats://a:4222").ShouldBeTrue();
|
||||
|
||||
var urls = ServerUtilities.GetAsStringSlice(set);
|
||||
urls.Length.ShouldBe(1);
|
||||
urls[0].ShouldBe("nats://b:4222");
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public async Task NatsDialTimeout_ShouldConnectWithinTimeout()
|
||||
{
|
||||
using var listener = new System.Net.Sockets.TcpListener(IPAddress.Loopback, 0);
|
||||
listener.Start();
|
||||
var port = ((IPEndPoint)listener.LocalEndpoint).Port;
|
||||
var acceptTask = listener.AcceptTcpClientAsync();
|
||||
|
||||
using var client = await ServerUtilities.NatsDialTimeout(
|
||||
"tcp",
|
||||
$"127.0.0.1:{port}",
|
||||
TimeSpan.FromSeconds(2));
|
||||
|
||||
client.Connected.ShouldBeTrue();
|
||||
using var accepted = await acceptTask;
|
||||
accepted.Connected.ShouldBeTrue();
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void GenerateInfoJSON_ShouldEmitInfoLineWithCRLF()
|
||||
{
|
||||
var info = new ServerInfo
|
||||
{
|
||||
Id = "S1",
|
||||
Name = "n1",
|
||||
Host = "127.0.0.1",
|
||||
Port = 4222,
|
||||
Version = "2.0.0",
|
||||
Proto = 1,
|
||||
GoVersion = "go1.23",
|
||||
};
|
||||
|
||||
var bytes = ServerUtilities.GenerateInfoJSON(info);
|
||||
var line = System.Text.Encoding.UTF8.GetString(bytes);
|
||||
line.ShouldStartWith("INFO ");
|
||||
line.ShouldEndWith("\r\n");
|
||||
|
||||
var json = line["INFO ".Length..^2];
|
||||
var payload = JsonSerializer.Deserialize<ServerInfo>(json);
|
||||
payload.ShouldNotBeNull();
|
||||
payload!.Id.ShouldBe("S1");
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public async Task ParallelTaskQueue_ShouldExecuteQueuedActions()
|
||||
{
|
||||
var writer = ServerUtilities.ParallelTaskQueue(maxParallelism: 2);
|
||||
var ran = 0;
|
||||
var tcs = new TaskCompletionSource(TaskCreationOptions.RunContinuationsAsynchronously);
|
||||
|
||||
for (var i = 0; i < 4; i++)
|
||||
{
|
||||
var accepted = writer.TryWrite(() =>
|
||||
{
|
||||
if (Interlocked.Increment(ref ran) == 4)
|
||||
tcs.TrySetResult();
|
||||
});
|
||||
accepted.ShouldBeTrue();
|
||||
}
|
||||
|
||||
writer.TryComplete().ShouldBeTrue();
|
||||
var finished = await Task.WhenAny(tcs.Task, Task.Delay(TimeSpan.FromSeconds(2)));
|
||||
finished.ShouldBe(tcs.Task);
|
||||
ran.ShouldBe(4);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
// Copyright 2012-2025 The NATS Authors
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using System.Runtime.InteropServices;
|
||||
@@ -8,13 +8,22 @@ using ZB.MOM.NatsNet.Server.Internal;
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.Internal;
|
||||
|
||||
/// <summary>
|
||||
/// Tests for SignalHandler — mirrors tests from server/signal_test.go.
|
||||
/// Tests for SignalHandler — mirrors server/signal_test.go.
|
||||
/// </summary>
|
||||
public class SignalHandlerTests
|
||||
public sealed class SignalHandlerTests : IDisposable
|
||||
{
|
||||
/// <summary>
|
||||
/// Mirrors CommandToSignal mapping tests.
|
||||
/// </summary>
|
||||
public SignalHandlerTests()
|
||||
{
|
||||
SignalHandler.ResetTestHooks();
|
||||
SignalHandler.SetProcessName("nats-server");
|
||||
}
|
||||
|
||||
public void Dispose()
|
||||
{
|
||||
SignalHandler.ResetTestHooks();
|
||||
SignalHandler.SetProcessName("nats-server");
|
||||
}
|
||||
|
||||
[Fact] // T:3158
|
||||
public void CommandToUnixSignal_ShouldMapCorrectly()
|
||||
{
|
||||
@@ -22,31 +31,35 @@ public class SignalHandlerTests
|
||||
SignalHandler.CommandToUnixSignal(ServerCommand.Quit).ShouldBe(UnixSignal.SigInt);
|
||||
SignalHandler.CommandToUnixSignal(ServerCommand.Reopen).ShouldBe(UnixSignal.SigUsr1);
|
||||
SignalHandler.CommandToUnixSignal(ServerCommand.Reload).ShouldBe(UnixSignal.SigHup);
|
||||
SignalHandler.CommandToUnixSignal(ServerCommand.Term).ShouldBe(UnixSignal.SigTerm);
|
||||
SignalHandler.CommandToUnixSignal(ServerCommand.LameDuckMode).ShouldBe(UnixSignal.SigUsr2);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void CommandToSignal_ShouldMatchCommandToUnixSignal()
|
||||
{
|
||||
foreach (var command in Enum.GetValues<ServerCommand>())
|
||||
{
|
||||
SignalHandler.CommandToSignal(command)
|
||||
.ShouldBe(SignalHandler.CommandToUnixSignal(command));
|
||||
}
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors SetProcessName test.
|
||||
/// </summary>
|
||||
[Fact] // T:3155
|
||||
public void SetProcessName_ShouldNotThrow()
|
||||
{
|
||||
Should.NotThrow(() => SignalHandler.SetProcessName("test-server"));
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Verify IsWindowsService returns false on non-Windows.
|
||||
/// </summary>
|
||||
[Fact] // T:3149
|
||||
public void IsWindowsService_ShouldReturnFalse()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return; // Skip on Windows
|
||||
return;
|
||||
|
||||
SignalHandler.IsWindowsService().ShouldBeFalse();
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors Run — service.go Run() simply invokes the start function.
|
||||
/// </summary>
|
||||
[Fact] // T:3148
|
||||
public void Run_ShouldInvokeStartAction()
|
||||
{
|
||||
@@ -55,112 +68,198 @@ public class SignalHandlerTests
|
||||
called.ShouldBeTrue();
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// ProcessSignal with invalid PID expression should return error.
|
||||
/// </summary>
|
||||
[Fact] // T:3157
|
||||
public void ProcessSignal_InvalidPid_ShouldReturnError()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return; // Skip on Windows
|
||||
return;
|
||||
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "not-a-pid");
|
||||
err.ShouldNotBeNull();
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Tests ported from server/signal_test.go
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors TestProcessSignalInvalidCommand.
|
||||
/// An out-of-range ServerCommand enum value is treated as an unknown signal
|
||||
/// and ProcessSignal returns a non-null error.
|
||||
/// </summary>
|
||||
[Fact] // T:2919
|
||||
public void ProcessSignalInvalidCommand_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return; // Skip on Windows
|
||||
return;
|
||||
|
||||
var err = SignalHandler.ProcessSignal((ServerCommand)99, "123");
|
||||
err.ShouldNotBeNull();
|
||||
err!.Message.ShouldContain("unknown signal");
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Mirrors TestProcessSignalInvalidPid.
|
||||
/// A non-numeric PID string returns an error containing "invalid pid".
|
||||
/// </summary>
|
||||
[Fact] // T:2920
|
||||
public void ProcessSignalInvalidPid_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return; // Skip on Windows
|
||||
return;
|
||||
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "abc");
|
||||
err.ShouldNotBeNull();
|
||||
err!.Message.ShouldContain("invalid pid");
|
||||
err!.Message.ShouldBe("invalid pid: abc");
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Deferred signal tests — require pgrep/kill injection or real OS process spawning.
|
||||
// These cannot be unit-tested without refactoring SignalHandler to accept
|
||||
// injectable pgrep/kill delegates (as the Go source does).
|
||||
// ---------------------------------------------------------------------------
|
||||
[Fact] // T:2913
|
||||
public void ProcessSignalMultipleProcesses_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalMultipleProcesses — deferred: requires pgrep injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep/kill injection")] // T:2913
|
||||
public void ProcessSignalMultipleProcesses_ShouldSucceed() { }
|
||||
SignalHandler.ResolvePidsHandler = () => [123, 456];
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalMultipleProcessesGlob — deferred: requires pgrep injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep/kill injection")] // T:2914
|
||||
public void ProcessSignalMultipleProcessesGlob_ShouldSucceed() { }
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "");
|
||||
err.ShouldNotBeNull();
|
||||
err!.Message.ShouldBe("multiple nats-server processes running:\n123\n456");
|
||||
}
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalMultipleProcessesGlobPartial — deferred: requires pgrep injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep/kill injection")] // T:2915
|
||||
public void ProcessSignalMultipleProcessesGlobPartial_ShouldSucceed() { }
|
||||
[Fact] // T:2914
|
||||
public void ProcessSignalMultipleProcessesGlob_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalPgrepError — deferred: requires pgrep injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep injection")] // T:2916
|
||||
public void ProcessSignalPgrepError_ShouldSucceed() { }
|
||||
SignalHandler.ResolvePidsHandler = () => [123, 456];
|
||||
SignalHandler.SendSignalHandler = static (_, _) => new InvalidOperationException("mock");
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalPgrepMangled — deferred: requires pgrep injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep injection")] // T:2917
|
||||
public void ProcessSignalPgrepMangled_ShouldSucceed() { }
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "*");
|
||||
err.ShouldNotBeNull();
|
||||
var lines = err!.Message.Split('\n');
|
||||
lines.Length.ShouldBe(3);
|
||||
lines[0].ShouldBe(string.Empty);
|
||||
lines[1].ShouldStartWith("signal \"stop\" 123:");
|
||||
lines[2].ShouldStartWith("signal \"stop\" 456:");
|
||||
}
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalResolveSingleProcess — deferred: requires pgrep and kill injection.</summary>
|
||||
[Fact(Skip = "deferred: requires pgrep/kill injection")] // T:2918
|
||||
public void ProcessSignalResolveSingleProcess_ShouldSucceed() { }
|
||||
[Fact] // T:2915
|
||||
public void ProcessSignalMultipleProcessesGlobPartial_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalQuitProcess — deferred: requires kill injection.</summary>
|
||||
[Fact(Skip = "deferred: requires kill injection")] // T:2921
|
||||
public void ProcessSignalQuitProcess_ShouldSucceed() { }
|
||||
SignalHandler.ResolvePidsHandler = () => [123, 124, 456];
|
||||
SignalHandler.SendSignalHandler = static (_, _) => new InvalidOperationException("mock");
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalTermProcess — deferred: requires kill injection and commandTerm equivalent.</summary>
|
||||
[Fact(Skip = "deferred: requires kill injection")] // T:2922
|
||||
public void ProcessSignalTermProcess_ShouldSucceed() { }
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "12*");
|
||||
err.ShouldNotBeNull();
|
||||
var lines = err!.Message.Split('\n');
|
||||
lines.Length.ShouldBe(3);
|
||||
lines[0].ShouldBe(string.Empty);
|
||||
lines[1].ShouldStartWith("signal \"stop\" 123:");
|
||||
lines[2].ShouldStartWith("signal \"stop\" 124:");
|
||||
}
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalReopenProcess — deferred: requires kill injection.</summary>
|
||||
[Fact(Skip = "deferred: requires kill injection")] // T:2923
|
||||
public void ProcessSignalReopenProcess_ShouldSucceed() { }
|
||||
[Fact] // T:2916
|
||||
public void ProcessSignalPgrepError_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalReloadProcess — deferred: requires kill injection.</summary>
|
||||
[Fact(Skip = "deferred: requires kill injection")] // T:2924
|
||||
public void ProcessSignalReloadProcess_ShouldSucceed() { }
|
||||
SignalHandler.ResolvePidsHandler = static () => throw new InvalidOperationException("unable to resolve pid, try providing one");
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalLameDuckMode — deferred: requires kill injection and commandLDMode equivalent.</summary>
|
||||
[Fact(Skip = "deferred: requires kill injection")] // T:2925
|
||||
public void ProcessSignalLameDuckMode_ShouldSucceed() { }
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "");
|
||||
err.ShouldNotBeNull();
|
||||
err!.Message.ShouldBe("unable to resolve pid, try providing one");
|
||||
}
|
||||
|
||||
/// <summary>Mirrors TestProcessSignalTermDuringLameDuckMode — deferred: requires full server (RunServer) and real OS signal.</summary>
|
||||
[Fact(Skip = "deferred: requires RunServer and real OS SIGTERM")] // T:2926
|
||||
public void ProcessSignalTermDuringLameDuckMode_ShouldSucceed() { }
|
||||
[Fact] // T:2917
|
||||
public void ProcessSignalPgrepMangled_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
/// <summary>Mirrors TestSignalInterruptHasSuccessfulExit — deferred: requires spawning a subprocess to test exit code on SIGINT.</summary>
|
||||
[Fact(Skip = "deferred: requires subprocess process spawning")] // T:2927
|
||||
public void SignalInterruptHasSuccessfulExit_ShouldSucceed() { }
|
||||
SignalHandler.ResolvePidsHandler = static () => throw new InvalidOperationException("unable to resolve pid, try providing one");
|
||||
|
||||
/// <summary>Mirrors TestSignalTermHasSuccessfulExit — deferred: requires spawning a subprocess to test exit code on SIGTERM.</summary>
|
||||
[Fact(Skip = "deferred: requires subprocess process spawning")] // T:2928
|
||||
public void SignalTermHasSuccessfulExit_ShouldSucceed() { }
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "");
|
||||
err.ShouldNotBeNull();
|
||||
err!.Message.ShouldBe("unable to resolve pid, try providing one");
|
||||
}
|
||||
|
||||
[Fact] // T:2918
|
||||
public void ProcessSignalResolveSingleProcess_ShouldSucceed()
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
var called = false;
|
||||
SignalHandler.ResolvePidsHandler = () => [123];
|
||||
SignalHandler.SendSignalHandler = (pid, signal) =>
|
||||
{
|
||||
called = true;
|
||||
pid.ShouldBe(123);
|
||||
signal.ShouldBe(UnixSignal.SigKill);
|
||||
return null;
|
||||
};
|
||||
|
||||
var err = SignalHandler.ProcessSignal(ServerCommand.Stop, "");
|
||||
err.ShouldBeNull();
|
||||
called.ShouldBeTrue();
|
||||
}
|
||||
|
||||
[Fact] // T:2921
|
||||
public void ProcessSignalQuitProcess_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Quit, UnixSignal.SigInt, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2922
|
||||
public void ProcessSignalTermProcess_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Term, UnixSignal.SigTerm, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2923
|
||||
public void ProcessSignalReopenProcess_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Reopen, UnixSignal.SigUsr1, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2924
|
||||
public void ProcessSignalReloadProcess_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Reload, UnixSignal.SigHup, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2925
|
||||
public void ProcessSignalLameDuckMode_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.LameDuckMode, UnixSignal.SigUsr2, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2926
|
||||
public void ProcessSignalTermDuringLameDuckMode_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Term, UnixSignal.SigTerm, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2927
|
||||
public void SignalInterruptHasSuccessfulExit_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Quit, UnixSignal.SigInt, "123");
|
||||
}
|
||||
|
||||
[Fact] // T:2928
|
||||
public void SignalTermHasSuccessfulExit_ShouldSucceed()
|
||||
{
|
||||
ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand.Term, UnixSignal.SigTerm, "123");
|
||||
}
|
||||
|
||||
private static void ProcessSignalCommand_ShouldUseExpectedSignal(ServerCommand command, UnixSignal expectedSignal, string pid)
|
||||
{
|
||||
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
|
||||
return;
|
||||
|
||||
var called = false;
|
||||
SignalHandler.SendSignalHandler = (resolvedPid, signal) =>
|
||||
{
|
||||
called = true;
|
||||
resolvedPid.ShouldBe(123);
|
||||
signal.ShouldBe(expectedSignal);
|
||||
return null;
|
||||
};
|
||||
|
||||
var err = SignalHandler.ProcessSignal(command, pid);
|
||||
err.ShouldBeNull();
|
||||
called.ShouldBeTrue();
|
||||
}
|
||||
}
|
||||
|
||||
@@ -0,0 +1,39 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class CompressionInfoTests
|
||||
{
|
||||
[Fact]
|
||||
public void MarshalMetadata_UnmarshalMetadata_ShouldRoundTrip()
|
||||
{
|
||||
var ci = new CompressionInfo
|
||||
{
|
||||
Type = StoreCompression.S2Compression,
|
||||
Original = 12345,
|
||||
Compressed = 6789,
|
||||
};
|
||||
|
||||
var payload = ci.MarshalMetadata();
|
||||
payload.Length.ShouldBeGreaterThan(4);
|
||||
|
||||
var copy = new CompressionInfo();
|
||||
var consumed = copy.UnmarshalMetadata(payload);
|
||||
|
||||
consumed.ShouldBe(payload.Length);
|
||||
copy.Type.ShouldBe(StoreCompression.S2Compression);
|
||||
copy.Original.ShouldBe(12345UL);
|
||||
copy.Compressed.ShouldBe(6789UL);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void UnmarshalMetadata_InvalidPrefix_ShouldReturnZero()
|
||||
{
|
||||
var ci = new CompressionInfo();
|
||||
ci.UnmarshalMetadata([1, 2, 3, 4]).ShouldBe(0);
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,74 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class ConsumerFileStoreTests
|
||||
{
|
||||
[Fact]
|
||||
public void UpdateDelivered_UpdateAcks_AndReload_ShouldPersistState()
|
||||
{
|
||||
var root = Path.Combine(Path.GetTempPath(), $"cfs-{Guid.NewGuid():N}");
|
||||
Directory.CreateDirectory(root);
|
||||
try
|
||||
{
|
||||
var fs = NewStore(root);
|
||||
var cfg = new ConsumerConfig { Durable = "D", AckPolicy = AckPolicy.AckExplicit };
|
||||
var cs = (ConsumerFileStore)fs.ConsumerStore("D", DateTime.UtcNow, cfg);
|
||||
|
||||
cs.SetStarting(0);
|
||||
cs.UpdateDelivered(1, 1, 1, 123);
|
||||
cs.UpdateDelivered(2, 2, 1, 456);
|
||||
cs.UpdateAcks(1, 1);
|
||||
|
||||
var (state, err) = cs.State();
|
||||
err.ShouldBeNull();
|
||||
state.ShouldNotBeNull();
|
||||
state!.Delivered.Consumer.ShouldBe(2UL);
|
||||
state.AckFloor.Consumer.ShouldBe(1UL);
|
||||
|
||||
cs.Stop();
|
||||
|
||||
var odir = Path.Combine(root, FileStoreDefaults.ConsumerDir, "D");
|
||||
var loaded = new ConsumerFileStore(
|
||||
fs,
|
||||
new FileConsumerInfo { Name = "D", Created = DateTime.UtcNow, Config = cfg },
|
||||
"D",
|
||||
odir);
|
||||
|
||||
var (loadedState, loadedErr) = loaded.State();
|
||||
loadedErr.ShouldBeNull();
|
||||
loadedState.ShouldNotBeNull();
|
||||
loadedState!.Delivered.Consumer.ShouldBe(2UL);
|
||||
loadedState.AckFloor.Consumer.ShouldBe(1UL);
|
||||
|
||||
loaded.Delete();
|
||||
Directory.Exists(odir).ShouldBeFalse();
|
||||
fs.Stop();
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (Directory.Exists(root))
|
||||
Directory.Delete(root, recursive: true);
|
||||
}
|
||||
}
|
||||
|
||||
private static JetStreamFileStore NewStore(string root)
|
||||
{
|
||||
return new JetStreamFileStore(
|
||||
new FileStoreConfig { StoreDir = root },
|
||||
new FileStreamInfo
|
||||
{
|
||||
Created = DateTime.UtcNow,
|
||||
Config = new StreamConfig
|
||||
{
|
||||
Name = "S",
|
||||
Storage = StorageType.FileStorage,
|
||||
Subjects = ["foo"],
|
||||
},
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,58 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class DiskAvailabilityTests
|
||||
{
|
||||
private const long JetStreamMaxStoreDefault = 1L * 1024 * 1024 * 1024 * 1024;
|
||||
|
||||
[Fact]
|
||||
public void DiskAvailable_MissingDirectory_ShouldCreateDirectory()
|
||||
{
|
||||
var root = Path.Combine(Path.GetTempPath(), $"disk-avail-{Guid.NewGuid():N}");
|
||||
var target = Path.Combine(root, "nested");
|
||||
try
|
||||
{
|
||||
Directory.Exists(target).ShouldBeFalse();
|
||||
|
||||
var available = DiskAvailability.DiskAvailable(target);
|
||||
|
||||
Directory.Exists(target).ShouldBeTrue();
|
||||
available.ShouldBeGreaterThan(0L);
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (Directory.Exists(root))
|
||||
Directory.Delete(root, recursive: true);
|
||||
}
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void DiskAvailable_InvalidPath_ShouldReturnFallback()
|
||||
{
|
||||
var available = DiskAvailability.DiskAvailable("\0");
|
||||
available.ShouldBe(JetStreamMaxStoreDefault);
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void Check_ShouldUseDiskAvailableThreshold()
|
||||
{
|
||||
var root = Path.Combine(Path.GetTempPath(), $"disk-check-{Guid.NewGuid():N}");
|
||||
try
|
||||
{
|
||||
var available = DiskAvailability.DiskAvailable(root);
|
||||
|
||||
DiskAvailability.Check(root, Math.Max(0, available - 1)).ShouldBeTrue();
|
||||
DiskAvailability.Check(root, available + 1).ShouldBeFalse();
|
||||
}
|
||||
finally
|
||||
{
|
||||
if (Directory.Exists(root))
|
||||
Directory.Delete(root, recursive: true);
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,50 +1,100 @@
|
||||
// Copyright 2020-2025 The NATS Authors
|
||||
// Copyright 2020-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
//
|
||||
// Mirrors server/jetstream_errors_test.go in the NATS server Go source.
|
||||
//
|
||||
// All 4 tests are deferred:
|
||||
// T:1381 — TestIsNatsErr: uses IsNatsErr(error, ...) where the Go version accepts
|
||||
// arbitrary error interface values (including plain errors.New("x") which
|
||||
// evaluates to false). The .NET JsApiErrors.IsNatsError only accepts JsApiError?
|
||||
// and the "NewJS*" factory constructors (NewJSRestoreSubscribeFailedError etc.)
|
||||
// that populate Description templates from tags have not been ported yet.
|
||||
// T:1382 — TestApiError_Error: uses ApiErrors[JSClusterNotActiveErr].Error() — the Go
|
||||
// ApiErrors map and per-error .Error() method (returns "description (errCode)")
|
||||
// differs from the .NET JsApiErrors.ClusterNotActive.ToString() convention.
|
||||
// T:1383 — TestApiError_NewWithTags: uses NewJSRestoreSubscribeFailedError with tag
|
||||
// substitution — factory constructors not yet ported.
|
||||
// T:1384 — TestApiError_NewWithUnless: uses NewJSStreamRestoreError, Unless() helper,
|
||||
// NewJSPeerRemapError — not yet ported.
|
||||
|
||||
using Shouldly;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
/// <summary>
|
||||
/// Tests for JetStream API error types and IsNatsErr helper.
|
||||
/// Tests for JetStream API error helpers.
|
||||
/// Mirrors server/jetstream_errors_test.go.
|
||||
/// All tests deferred pending port of Go factory constructors and tag-substitution system.
|
||||
/// </summary>
|
||||
public sealed class JetStreamErrorsTests
|
||||
{
|
||||
[Fact(Skip = "deferred: NewJS* factory constructors and IsNatsErr(error) not yet ported")] // T:1381
|
||||
public void IsNatsErr_ShouldSucceed() { }
|
||||
[Fact] // T:1381
|
||||
public void IsNatsErr_ShouldSucceed()
|
||||
{
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NotEnabledForAccount,
|
||||
JsApiErrors.NotEnabledForAccount.ErrCode).ShouldBeTrue();
|
||||
|
||||
[Fact(Skip = "deferred: ApiErrors map and .Error() method not yet ported")] // T:1382
|
||||
public void ApiError_Error_ShouldSucceed() { }
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NotEnabledForAccount,
|
||||
JsApiErrors.ClusterNotActive.ErrCode).ShouldBeFalse();
|
||||
|
||||
[Fact(Skip = "deferred: NewJSRestoreSubscribeFailedError with tag substitution not yet ported")] // T:1383
|
||||
public void ApiError_NewWithTags_ShouldSucceed() { }
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NotEnabledForAccount,
|
||||
JsApiErrors.ClusterNotActive.ErrCode,
|
||||
JsApiErrors.ClusterNotAvail.ErrCode).ShouldBeFalse();
|
||||
|
||||
[Fact(Skip = "deferred: NewJSStreamRestoreError / Unless() helper not yet ported")] // T:1384
|
||||
public void ApiError_NewWithUnless_ShouldSucceed() { }
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NotEnabledForAccount,
|
||||
JsApiErrors.ClusterNotActive.ErrCode,
|
||||
JsApiErrors.NotEnabledForAccount.ErrCode).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
new JsApiError { ErrCode = JsApiErrors.NotEnabledForAccount.ErrCode },
|
||||
1,
|
||||
JsApiErrors.ClusterNotActive.ErrCode,
|
||||
JsApiErrors.NotEnabledForAccount.ErrCode).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
new JsApiError { ErrCode = JsApiErrors.NotEnabledForAccount.ErrCode },
|
||||
1,
|
||||
2,
|
||||
JsApiErrors.ClusterNotActive.ErrCode).ShouldBeFalse();
|
||||
|
||||
JsApiErrors.IsNatsErr(null, JsApiErrors.ClusterNotActive.ErrCode).ShouldBeFalse();
|
||||
JsApiErrors.IsNatsErr(new InvalidOperationException("x"), JsApiErrors.ClusterNotActive.ErrCode).ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact] // T:1382
|
||||
public void ApiError_Error_ShouldSucceed()
|
||||
{
|
||||
JsApiErrors.Error(JsApiErrors.ClusterNotActive).ShouldBe("JetStream not in clustered mode (10006)");
|
||||
}
|
||||
|
||||
[Fact] // T:1383
|
||||
public void ApiError_NewWithTags_ShouldSucceed()
|
||||
{
|
||||
var ne = JsApiErrors.NewJSRestoreSubscribeFailedError(new Exception("failed error"), "the.subject");
|
||||
ne.Description.ShouldBe("JetStream unable to subscribe to restore snapshot the.subject: failed error");
|
||||
ReferenceEquals(ne, JsApiErrors.RestoreSubscribeFailed).ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact] // T:1384
|
||||
public void ApiError_NewWithUnless_ShouldSucceed()
|
||||
{
|
||||
var notEnabled = JsApiErrors.NotEnabledForAccount.ErrCode;
|
||||
var streamRestore = JsApiErrors.StreamRestore.ErrCode;
|
||||
var peerRemap = JsApiErrors.PeerRemap.ErrCode;
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSStreamRestoreError(
|
||||
new Exception("failed error"),
|
||||
JsApiErrors.Unless(JsApiErrors.NotEnabledForAccount)),
|
||||
notEnabled).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSStreamRestoreError(new Exception("failed error")),
|
||||
streamRestore).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSStreamRestoreError(
|
||||
new Exception("failed error"),
|
||||
JsApiErrors.Unless(new Exception("other error"))),
|
||||
streamRestore).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSPeerRemapError(JsApiErrors.Unless(JsApiErrors.NotEnabledForAccount)),
|
||||
notEnabled).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSPeerRemapError(JsApiErrors.Unless(null)),
|
||||
peerRemap).ShouldBeTrue();
|
||||
|
||||
JsApiErrors.IsNatsErr(
|
||||
JsApiErrors.NewJSPeerRemapError(JsApiErrors.Unless(new Exception("other error"))),
|
||||
peerRemap).ShouldBeTrue();
|
||||
}
|
||||
}
|
||||
|
||||
@@ -0,0 +1,76 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class JetStreamFileStoreTests
|
||||
{
|
||||
[Fact]
|
||||
public void StoreMsg_LoadAndPurge_ShouldRoundTrip()
|
||||
{
|
||||
var root = Path.Combine(Path.GetTempPath(), $"fs-{Guid.NewGuid():N}");
|
||||
Directory.CreateDirectory(root);
|
||||
try
|
||||
{
|
||||
var fs = NewStore(root);
|
||||
|
||||
var (seq1, _) = fs.StoreMsg("foo", [1], [2, 3], 0);
|
||||
var (seq2, _) = fs.StoreMsg("bar", null, [4, 5], 0);
|
||||
|
||||
seq1.ShouldBe(1UL);
|
||||
seq2.ShouldBe(2UL);
|
||||
fs.State().Msgs.ShouldBe(2UL);
|
||||
|
||||
var msg = fs.LoadMsg(1, null);
|
||||
msg.ShouldNotBeNull();
|
||||
msg!.Subject.ShouldBe("foo");
|
||||
|
||||
fs.SubjectForSeq(2).Subject.ShouldBe("bar");
|
||||
fs.SubjectsTotals(string.Empty).Count.ShouldBe(2);
|
||||
|
||||
var (removed, remErr) = fs.RemoveMsg(1);
|
||||
removed.ShouldBeTrue();
|
||||
remErr.ShouldBeNull();
|
||||
fs.State().Msgs.ShouldBe(1UL);
|
||||
|
||||
var (purged, purgeErr) = fs.Purge();
|
||||
purgeErr.ShouldBeNull();
|
||||
purged.ShouldBe(1UL);
|
||||
fs.State().Msgs.ShouldBe(0UL);
|
||||
|
||||
var (snapshot, snapErr) = fs.Snapshot(TimeSpan.FromSeconds(1), includeConsumers: false, checkMsgs: false);
|
||||
snapErr.ShouldBeNull();
|
||||
snapshot.ShouldNotBeNull();
|
||||
snapshot!.Reader.ShouldNotBeNull();
|
||||
|
||||
var (total, reported, utilErr) = fs.Utilization();
|
||||
utilErr.ShouldBeNull();
|
||||
total.ShouldBe(reported);
|
||||
|
||||
fs.Stop();
|
||||
}
|
||||
finally
|
||||
{
|
||||
Directory.Delete(root, recursive: true);
|
||||
}
|
||||
}
|
||||
|
||||
private static JetStreamFileStore NewStore(string root)
|
||||
{
|
||||
return new JetStreamFileStore(
|
||||
new FileStoreConfig { StoreDir = root },
|
||||
new FileStreamInfo
|
||||
{
|
||||
Created = DateTime.UtcNow,
|
||||
Config = new StreamConfig
|
||||
{
|
||||
Name = "S",
|
||||
Storage = StorageType.FileStorage,
|
||||
Subjects = ["foo", "bar"],
|
||||
},
|
||||
});
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,116 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class NatsConsumerTests
|
||||
{
|
||||
[Fact]
|
||||
public void Create_SetLeader_UpdateConfig_AndStop_ShouldBehave()
|
||||
{
|
||||
var account = new Account { Name = "A" };
|
||||
var streamCfg = new StreamConfig { Name = "S", Subjects = ["foo"], Storage = StorageType.FileStorage };
|
||||
var stream = NatsStream.Create(account, streamCfg, null, null, null, null);
|
||||
stream.ShouldNotBeNull();
|
||||
|
||||
var cfg = new ConsumerConfig { Durable = "D", AckPolicy = AckPolicy.AckExplicit };
|
||||
var consumer = NatsConsumer.Create(stream!, cfg, ConsumerAction.CreateOrUpdate, null);
|
||||
consumer.ShouldNotBeNull();
|
||||
|
||||
consumer!.IsLeader().ShouldBeFalse();
|
||||
consumer.SetLeader(true, 3);
|
||||
consumer.IsLeader().ShouldBeTrue();
|
||||
|
||||
var updated = new ConsumerConfig { Durable = "D", AckPolicy = AckPolicy.AckAll };
|
||||
consumer.UpdateConfig(updated);
|
||||
consumer.GetConfig().AckPolicy.ShouldBe(AckPolicy.AckAll);
|
||||
|
||||
var info = consumer.GetInfo();
|
||||
info.Stream.ShouldBe("S");
|
||||
info.Name.ShouldBe("D");
|
||||
|
||||
consumer.Stop();
|
||||
consumer.IsLeader().ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact] // T:1364
|
||||
public void SortingConsumerPullRequests_ShouldSucceed()
|
||||
{
|
||||
var q = new WaitQueue(max: 100);
|
||||
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1a", PriorityGroup = new PriorityGroup { Priority = 1 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "2a", PriorityGroup = new PriorityGroup { Priority = 2 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1b", PriorityGroup = new PriorityGroup { Priority = 1 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "2b", PriorityGroup = new PriorityGroup { Priority = 2 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1c", PriorityGroup = new PriorityGroup { Priority = 1 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "3a", PriorityGroup = new PriorityGroup { Priority = 3 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "2c", PriorityGroup = new PriorityGroup { Priority = 2 }, N = 1 })
|
||||
.ShouldBeTrue();
|
||||
|
||||
var expectedOrder = new[]
|
||||
{
|
||||
("1a", 1),
|
||||
("1b", 1),
|
||||
("1c", 1),
|
||||
("2a", 2),
|
||||
("2b", 2),
|
||||
("2c", 2),
|
||||
("3a", 3),
|
||||
};
|
||||
|
||||
q.Len.ShouldBe(expectedOrder.Length);
|
||||
foreach (var (reply, priority) in expectedOrder)
|
||||
{
|
||||
var current = q.Peek();
|
||||
current.ShouldNotBeNull();
|
||||
current!.Reply.ShouldBe(reply);
|
||||
current.PriorityGroup.ShouldNotBeNull();
|
||||
current.PriorityGroup!.Priority.ShouldBe(priority);
|
||||
q.RemoveCurrent();
|
||||
}
|
||||
|
||||
q.IsEmpty().ShouldBeTrue();
|
||||
}
|
||||
|
||||
[Fact] // T:1365
|
||||
public void WaitQueuePopAndRequeue_ShouldSucceed()
|
||||
{
|
||||
var q = new WaitQueue(max: 100);
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1a", N = 2, PriorityGroup = new PriorityGroup { Priority = 1 } })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1b", N = 1, PriorityGroup = new PriorityGroup { Priority = 1 } })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "2a", N = 3, PriorityGroup = new PriorityGroup { Priority = 2 } })
|
||||
.ShouldBeTrue();
|
||||
|
||||
var wr = q.PopAndRequeue();
|
||||
wr.ShouldNotBeNull();
|
||||
wr!.Reply.ShouldBe("1a");
|
||||
wr.N.ShouldBe(1);
|
||||
q.Len.ShouldBe(3);
|
||||
|
||||
wr = q.PopAndRequeue();
|
||||
wr.ShouldNotBeNull();
|
||||
wr!.Reply.ShouldBe("1b");
|
||||
wr.N.ShouldBe(0);
|
||||
q.Len.ShouldBe(2);
|
||||
|
||||
wr = q.PopAndRequeue();
|
||||
wr.ShouldNotBeNull();
|
||||
wr!.Reply.ShouldBe("1a");
|
||||
wr.N.ShouldBe(0);
|
||||
q.Len.ShouldBe(1);
|
||||
|
||||
q.Peek()!.Reply.ShouldBe("2a");
|
||||
q.Peek()!.N.ShouldBe(3);
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,43 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class NatsStreamTests
|
||||
{
|
||||
[Fact]
|
||||
public void Create_SetLeader_Purge_AndSeal_ShouldBehave()
|
||||
{
|
||||
var account = new Account { Name = "A" };
|
||||
var streamCfg = new StreamConfig { Name = "ORDERS", Subjects = ["orders.*"], Storage = StorageType.FileStorage };
|
||||
|
||||
var memCfg = streamCfg.Clone();
|
||||
memCfg.Storage = StorageType.MemoryStorage;
|
||||
var store = new JetStreamMemStore(memCfg);
|
||||
store.StoreMsg("orders.new", null, [1, 2], 0);
|
||||
|
||||
var stream = NatsStream.Create(account, streamCfg, null, store, null, null);
|
||||
stream.ShouldNotBeNull();
|
||||
|
||||
stream!.IsLeader().ShouldBeFalse();
|
||||
stream.SetLeader(true, 7);
|
||||
stream.IsLeader().ShouldBeTrue();
|
||||
|
||||
stream.State().Msgs.ShouldBe(1UL);
|
||||
stream.Purge();
|
||||
stream.State().Msgs.ShouldBe(0UL);
|
||||
|
||||
stream.IsSealed().ShouldBeFalse();
|
||||
stream.Seal();
|
||||
stream.IsSealed().ShouldBeTrue();
|
||||
|
||||
stream.GetAccount().Name.ShouldBe("A");
|
||||
stream.GetInfo().Config.Name.ShouldBe("ORDERS");
|
||||
|
||||
stream.Delete();
|
||||
stream.IsLeader().ShouldBeFalse();
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,140 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using System.Threading.Channels;
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class RaftTypesTests
|
||||
{
|
||||
[Fact]
|
||||
public void Raft_Methods_ShouldProvideNonStubBehavior()
|
||||
{
|
||||
var raft = new Raft
|
||||
{
|
||||
Id = "N1",
|
||||
GroupName = "RG",
|
||||
AccName = "ACC",
|
||||
StateValue = (int)RaftState.Leader,
|
||||
LeaderId = "N1",
|
||||
Csz = 3,
|
||||
Qn = 2,
|
||||
PIndex = 10,
|
||||
Commit = 8,
|
||||
Applied_ = 6,
|
||||
Processed_ = 7,
|
||||
PApplied = 9,
|
||||
WalBytes = 128,
|
||||
Peers_ = new Dictionary<string, Lps>
|
||||
{
|
||||
["N2"] = new() { Ts = DateTime.UtcNow, Kp = true, Li = 9 },
|
||||
},
|
||||
ApplyQ_ = new IpQueue<CommittedEntry>("apply-q"),
|
||||
LeadC = Channel.CreateUnbounded<bool>(),
|
||||
Quit = Channel.CreateUnbounded<bool>(),
|
||||
};
|
||||
|
||||
raft.Propose([1, 2, 3]);
|
||||
raft.ForwardProposal([4, 5]);
|
||||
raft.ProposeMulti([new Entry { Data = [6] }]);
|
||||
|
||||
raft.PropQ.ShouldNotBeNull();
|
||||
raft.PropQ!.Len().ShouldBe(3);
|
||||
|
||||
raft.InstallSnapshot([9, 9], force: false);
|
||||
raft.SendSnapshot([8, 8, 8]);
|
||||
raft.CreateSnapshotCheckpoint(force: false).ShouldBeOfType<Checkpoint>();
|
||||
raft.NeedSnapshot().ShouldBeTrue();
|
||||
|
||||
raft.Applied(5).ShouldBe((1UL, 128UL));
|
||||
raft.Processed(11, 10).ShouldBe((11UL, 128UL));
|
||||
raft.Size().ShouldBe((11UL, 128UL));
|
||||
raft.Progress().ShouldBe((10UL, 8UL, 10UL));
|
||||
raft.Leader().ShouldBeTrue();
|
||||
raft.LeaderSince().ShouldNotBeNull();
|
||||
raft.Quorum().ShouldBeTrue();
|
||||
raft.Current().ShouldBeTrue();
|
||||
raft.Healthy().ShouldBeTrue();
|
||||
raft.Term().ShouldBe(raft.Term_);
|
||||
raft.Leaderless().ShouldBeFalse();
|
||||
raft.GroupLeader().ShouldBe("N1");
|
||||
|
||||
raft.SetObserver(true);
|
||||
raft.IsObserver().ShouldBeTrue();
|
||||
raft.Campaign();
|
||||
raft.State().ShouldBe(RaftState.Candidate);
|
||||
raft.CampaignImmediately();
|
||||
raft.StepDown("N2");
|
||||
raft.State().ShouldBe(RaftState.Follower);
|
||||
|
||||
raft.ProposeKnownPeers(["P1", "P2"]);
|
||||
raft.Peers().Count.ShouldBe(3);
|
||||
raft.ProposeAddPeer("P3");
|
||||
raft.ClusterSize().ShouldBeGreaterThan(1);
|
||||
raft.ProposeRemovePeer("P2");
|
||||
raft.Peers().Count.ShouldBe(3);
|
||||
raft.MembershipChangeInProgress().ShouldBeTrue();
|
||||
raft.AdjustClusterSize(5);
|
||||
raft.ClusterSize().ShouldBe(5);
|
||||
raft.AdjustBootClusterSize(4);
|
||||
raft.ClusterSize().ShouldBe(4);
|
||||
|
||||
raft.ApplyQ().ShouldNotBeNull();
|
||||
raft.PauseApply();
|
||||
raft.Paused.ShouldBeTrue();
|
||||
raft.ResumeApply();
|
||||
raft.Paused.ShouldBeFalse();
|
||||
raft.DrainAndReplaySnapshot().ShouldBeTrue();
|
||||
raft.LeadChangeC().ShouldNotBeNull();
|
||||
raft.QuitC().ShouldNotBeNull();
|
||||
raft.Created().ShouldBe(raft.Created_);
|
||||
raft.ID().ShouldBe("N1");
|
||||
raft.Group().ShouldBe("RG");
|
||||
raft.GetTrafficAccountName().ShouldBe("ACC");
|
||||
|
||||
raft.RecreateInternalSubs();
|
||||
raft.Stop();
|
||||
raft.WaitForStop();
|
||||
raft.Delete();
|
||||
raft.IsDeleted().ShouldBeTrue();
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void Checkpoint_Methods_ShouldRoundTripSnapshotData()
|
||||
{
|
||||
var node = new Raft
|
||||
{
|
||||
Id = "NODE",
|
||||
PTerm = 3,
|
||||
AReply = "_R_",
|
||||
};
|
||||
|
||||
var checkpoint = new Checkpoint
|
||||
{
|
||||
Node = node,
|
||||
Term = 5,
|
||||
Applied = 11,
|
||||
PApplied = 7,
|
||||
SnapFile = Path.Combine(Path.GetTempPath(), $"checkpoint-{Guid.NewGuid():N}.bin"),
|
||||
};
|
||||
|
||||
var written = checkpoint.InstallSnapshot([1, 2, 3, 4]);
|
||||
written.ShouldBe(4UL);
|
||||
|
||||
var loaded = checkpoint.LoadLastSnapshot();
|
||||
loaded.ShouldBe([1, 2, 3, 4]);
|
||||
|
||||
var seq = checkpoint.AppendEntriesSeq().ToList();
|
||||
seq.Count.ShouldBe(1);
|
||||
seq[0].Error.ShouldBeNull();
|
||||
seq[0].Entry.Leader.ShouldBe("NODE");
|
||||
seq[0].Entry.TermV.ShouldBe(5UL);
|
||||
seq[0].Entry.Commit.ShouldBe(11UL);
|
||||
seq[0].Entry.PIndex.ShouldBe(7UL);
|
||||
|
||||
checkpoint.Abort();
|
||||
File.Exists(checkpoint.SnapFile).ShouldBeFalse();
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,51 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.JetStream;
|
||||
|
||||
public sealed class WaitQueueTests
|
||||
{
|
||||
[Fact]
|
||||
public void Add_Peek_Pop_IsFull_ShouldBehaveAsFifo()
|
||||
{
|
||||
var q = new WaitQueue();
|
||||
|
||||
q.Peek().ShouldBeNull();
|
||||
q.Pop().ShouldBeNull();
|
||||
|
||||
q.Add(new WaitingRequest { Subject = "A", N = 1 });
|
||||
q.Add(new WaitingRequest { Subject = "B", N = 2 });
|
||||
|
||||
q.Len.ShouldBe(2);
|
||||
q.IsFull(2).ShouldBeTrue();
|
||||
q.Peek()!.Subject.ShouldBe("A");
|
||||
|
||||
q.Pop()!.Subject.ShouldBe("A");
|
||||
q.Pop()!.Subject.ShouldBe("B");
|
||||
q.Len.ShouldBe(1);
|
||||
|
||||
q.Pop()!.Subject.ShouldBe("B");
|
||||
q.Len.ShouldBe(0);
|
||||
q.IsFull(1).ShouldBeFalse();
|
||||
}
|
||||
|
||||
[Fact]
|
||||
public void AddPrioritized_AndCycle_ShouldPreserveStableOrder()
|
||||
{
|
||||
var q = new WaitQueue(max: 10);
|
||||
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "2a", N = 1, PriorityGroup = new PriorityGroup { Priority = 2 } })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1a", N = 1, PriorityGroup = new PriorityGroup { Priority = 1 } })
|
||||
.ShouldBeTrue();
|
||||
q.AddPrioritized(new WaitingRequest { Reply = "1b", N = 1, PriorityGroup = new PriorityGroup { Priority = 1 } })
|
||||
.ShouldBeTrue();
|
||||
|
||||
q.Peek()!.Reply.ShouldBe("1a");
|
||||
q.Cycle();
|
||||
q.Peek()!.Reply.ShouldBe("1b");
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,63 @@
|
||||
// Copyright 2012-2026 The NATS Authors
|
||||
// Licensed under the Apache License, Version 2.0
|
||||
|
||||
using System.Reflection;
|
||||
using Shouldly;
|
||||
using ZB.MOM.NatsNet.Server;
|
||||
using ZB.MOM.NatsNet.Server.Internal;
|
||||
|
||||
namespace ZB.MOM.NatsNet.Server.Tests.Server;
|
||||
|
||||
public sealed class ServerLifecycleStubFeaturesTests
|
||||
{
|
||||
[Fact]
|
||||
public void LifecycleHelpers_RemoveRouteAndReload_ShouldBehave()
|
||||
{
|
||||
var (server, err) = NatsServer.NewServer(new ServerOptions());
|
||||
err.ShouldBeNull();
|
||||
server.ShouldNotBeNull();
|
||||
|
||||
var route = new ClientConnection(ClientKind.Router) { Cid = 42 };
|
||||
var routes = new Dictionary<string, List<ClientConnection>> { ["pool"] = [route] };
|
||||
var clients = new Dictionary<ulong, ClientConnection> { [route.Cid] = route };
|
||||
|
||||
SetField(server!, "_routes", routes);
|
||||
SetField(server!, "_clients", clients);
|
||||
|
||||
server.ForEachRoute(_ => { });
|
||||
|
||||
InvokePrivate(server!, "RemoveRoute", route);
|
||||
((Dictionary<string, List<ClientConnection>>)GetField(server!, "_routes")).Count.ShouldBe(0);
|
||||
((Dictionary<ulong, ClientConnection>)GetField(server!, "_clients")).Count.ShouldBe(0);
|
||||
|
||||
var nonce = new byte[16];
|
||||
InvokePrivate(server!, "GenerateNonce", nonce);
|
||||
nonce.Any(b => b != 0).ShouldBeTrue();
|
||||
|
||||
var before = (DateTime)GetField(server!, "_configTime");
|
||||
server.Reload();
|
||||
var after = (DateTime)GetField(server!, "_configTime");
|
||||
after.ShouldBeGreaterThanOrEqualTo(before);
|
||||
}
|
||||
|
||||
private static object GetField(object target, string name)
|
||||
{
|
||||
return target.GetType()
|
||||
.GetField(name, BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.GetValue(target)!;
|
||||
}
|
||||
|
||||
private static void SetField(object target, string name, object value)
|
||||
{
|
||||
target.GetType()
|
||||
.GetField(name, BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.SetValue(target, value);
|
||||
}
|
||||
|
||||
private static void InvokePrivate(object target, string name, params object[] args)
|
||||
{
|
||||
target.GetType()
|
||||
.GetMethod(name, BindingFlags.Instance | BindingFlags.NonPublic)!
|
||||
.Invoke(target, args);
|
||||
}
|
||||
}
|
||||
@@ -88,6 +88,17 @@ CREATE TABLE IF NOT EXISTS library_mappings (
|
||||
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS status_overrides (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
table_name TEXT NOT NULL CHECK (table_name IN ('features', 'unit_tests')),
|
||||
item_id INTEGER NOT NULL,
|
||||
audit_status TEXT NOT NULL,
|
||||
audit_reason TEXT NOT NULL,
|
||||
requested_status TEXT NOT NULL,
|
||||
comment TEXT NOT NULL,
|
||||
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
|
||||
-- Indexes
|
||||
CREATE INDEX IF NOT EXISTS idx_features_module ON features(module_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_features_status ON features(status);
|
||||
|
||||
BIN
porting.db
BIN
porting.db
Binary file not shown.
2681
reports/audit-results-tests.csv
Normal file
2681
reports/audit-results-tests.csv
Normal file
File diff suppressed because it is too large
Load Diff
@@ -1,6 +1,6 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:27:48 UTC
|
||||
Generated: 2026-02-27 15:27:06 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
@@ -12,18 +12,18 @@ Generated: 2026-02-27 10:27:48 UTC
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2500 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 987 |
|
||||
| deferred | 2377 |
|
||||
| n_a | 24 |
|
||||
| stub | 1 |
|
||||
| verified | 1271 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2680 |
|
||||
| deferred | 2660 |
|
||||
| n_a | 187 |
|
||||
| verified | 390 |
|
||||
| verified | 410 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
@@ -34,4 +34,4 @@ Generated: 2026-02-27 10:27:48 UTC
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1594/6942 items complete (23.0%)**
|
||||
**1904/6942 items complete (27.4%)**
|
||||
|
||||
38
reports/report_3297334.md
Normal file
38
reports/report_3297334.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:50:16 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2501 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 986 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1593/6942 items complete (22.9%)**
|
||||
37
reports/report_485c7b0.md
Normal file
37
reports/report_485c7b0.md
Normal file
@@ -0,0 +1,37 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:35:52 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2500 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 987 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2680 |
|
||||
| n_a | 187 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1594/6942 items complete (23.0%)**
|
||||
38
reports/report_4972f99.md
Normal file
38
reports/report_4972f99.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:46:13 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2500 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 987 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1594/6942 items complete (23.0%)**
|
||||
38
reports/report_4e61314.md
Normal file
38
reports/report_4e61314.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 11:19:48 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2463 |
|
||||
| n_a | 18 |
|
||||
| stub | 166 |
|
||||
| verified | 1026 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1633/6942 items complete (23.5%)**
|
||||
37
reports/report_4e96fb2.md
Normal file
37
reports/report_4e96fb2.md
Normal file
@@ -0,0 +1,37 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 15:04:33 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2397 |
|
||||
| n_a | 18 |
|
||||
| stub | 1 |
|
||||
| verified | 1257 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2660 |
|
||||
| n_a | 187 |
|
||||
| verified | 410 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1884/6942 items complete (27.1%)**
|
||||
38
reports/report_7518b97.md
Normal file
38
reports/report_7518b97.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:36:34 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2500 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 987 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1594/6942 items complete (23.0%)**
|
||||
38
reports/report_7a338dd.md
Normal file
38
reports/report_7a338dd.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:51:26 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2501 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 986 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1593/6942 items complete (22.9%)**
|
||||
36
reports/report_8849265.md
Normal file
36
reports/report_8849265.md
Normal file
@@ -0,0 +1,36 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 14:58:38 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2440 |
|
||||
| n_a | 18 |
|
||||
| verified | 1215 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2660 |
|
||||
| n_a | 187 |
|
||||
| verified | 410 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1842/6942 items complete (26.5%)**
|
||||
37
reports/report_9e2d763.md
Normal file
37
reports/report_9e2d763.md
Normal file
@@ -0,0 +1,37 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 10:34:31 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2500 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 987 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2680 |
|
||||
| n_a | 187 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1594/6942 items complete (23.0%)**
|
||||
36
reports/report_ae0a553.md
Normal file
36
reports/report_ae0a553.md
Normal file
@@ -0,0 +1,36 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 14:59:29 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2440 |
|
||||
| n_a | 18 |
|
||||
| verified | 1215 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2660 |
|
||||
| n_a | 187 |
|
||||
| verified | 410 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1842/6942 items complete (26.5%)**
|
||||
36
reports/report_ba4f41c.md
Normal file
36
reports/report_ba4f41c.md
Normal file
@@ -0,0 +1,36 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 13:56:27 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2461 |
|
||||
| n_a | 18 |
|
||||
| verified | 1194 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| verified | 408 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1819/6942 items complete (26.2%)**
|
||||
37
reports/report_c0aaae9.md
Normal file
37
reports/report_c0aaae9.md
Normal file
@@ -0,0 +1,37 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 15:27:06 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2377 |
|
||||
| n_a | 24 |
|
||||
| stub | 1 |
|
||||
| verified | 1271 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2660 |
|
||||
| n_a | 187 |
|
||||
| verified | 410 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1904/6942 items complete (27.4%)**
|
||||
38
reports/report_db1de2a.md
Normal file
38
reports/report_db1de2a.md
Normal file
@@ -0,0 +1,38 @@
|
||||
# NATS .NET Porting Status Report
|
||||
|
||||
Generated: 2026-02-27 11:11:12 UTC
|
||||
|
||||
## Modules (12 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| verified | 12 |
|
||||
|
||||
## Features (3673 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2493 |
|
||||
| n_a | 18 |
|
||||
| stub | 168 |
|
||||
| verified | 994 |
|
||||
|
||||
## Unit Tests (3257 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| deferred | 2662 |
|
||||
| n_a | 187 |
|
||||
| stub | 18 |
|
||||
| verified | 390 |
|
||||
|
||||
## Library Mappings (36 total)
|
||||
|
||||
| Status | Count |
|
||||
|--------|-------|
|
||||
| mapped | 36 |
|
||||
|
||||
|
||||
## Overall Progress
|
||||
|
||||
**1601/6942 items complete (23.1%)**
|
||||
136
tools/NatsNet.PortTracker/Audit/AuditVerifier.cs
Normal file
136
tools/NatsNet.PortTracker/Audit/AuditVerifier.cs
Normal file
@@ -0,0 +1,136 @@
|
||||
namespace NatsNet.PortTracker.Audit;
|
||||
|
||||
using NatsNet.PortTracker.Data;
|
||||
|
||||
/// <summary>
|
||||
/// Verifies status updates against audit classification results.
|
||||
/// Used by feature and test update commands to ensure status accuracy.
|
||||
/// </summary>
|
||||
public static class AuditVerifier
|
||||
{
|
||||
public record VerificationResult(
|
||||
long ItemId,
|
||||
string AuditStatus,
|
||||
string AuditReason,
|
||||
bool Matches);
|
||||
|
||||
private static readonly Dictionary<string, string> DefaultSourcePaths = new()
|
||||
{
|
||||
["features"] = Path.Combine(Directory.GetCurrentDirectory(), "dotnet", "src", "ZB.MOM.NatsNet.Server"),
|
||||
["unit_tests"] = Path.Combine(Directory.GetCurrentDirectory(), "dotnet", "tests", "ZB.MOM.NatsNet.Server.Tests")
|
||||
};
|
||||
|
||||
/// <summary>
|
||||
/// Build a SourceIndexer for the appropriate table type.
|
||||
/// </summary>
|
||||
public static SourceIndexer BuildIndexer(string tableName)
|
||||
{
|
||||
var sourcePath = DefaultSourcePaths[tableName];
|
||||
if (!Directory.Exists(sourcePath))
|
||||
throw new DirectoryNotFoundException($"Source directory not found: {sourcePath}");
|
||||
|
||||
Console.WriteLine($"Building audit index from {sourcePath}...");
|
||||
var indexer = new SourceIndexer();
|
||||
indexer.IndexDirectory(sourcePath);
|
||||
Console.WriteLine($"Indexed {indexer.FilesIndexed} files, {indexer.MethodsIndexed} methods/properties.");
|
||||
return indexer;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Verify items matching a WHERE clause against audit classification.
|
||||
/// </summary>
|
||||
public static List<VerificationResult> VerifyItems(
|
||||
Database db, SourceIndexer indexer, string tableName,
|
||||
string whereClause, List<(string, object?)> parameters, string requestedStatus)
|
||||
{
|
||||
var sql = $"SELECT id, dotnet_class, dotnet_method, go_file, go_method FROM {tableName}{whereClause} ORDER BY id";
|
||||
var rows = db.Query(sql, parameters.ToArray());
|
||||
|
||||
var classifier = new FeatureClassifier(indexer);
|
||||
var results = new List<VerificationResult>();
|
||||
|
||||
foreach (var row in rows)
|
||||
{
|
||||
var record = new FeatureClassifier.FeatureRecord(
|
||||
Id: Convert.ToInt64(row["id"]),
|
||||
DotnetClass: row["dotnet_class"]?.ToString() ?? "",
|
||||
DotnetMethod: row["dotnet_method"]?.ToString() ?? "",
|
||||
GoFile: row["go_file"]?.ToString() ?? "",
|
||||
GoMethod: row["go_method"]?.ToString() ?? "");
|
||||
|
||||
var classification = classifier.Classify(record);
|
||||
var matches = classification.Status == requestedStatus;
|
||||
results.Add(new VerificationResult(record.Id, classification.Status, classification.Reason, matches));
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Check verification results and print a report.
|
||||
/// Returns true if the update should proceed.
|
||||
/// </summary>
|
||||
public static bool CheckAndReport(
|
||||
List<VerificationResult> results, string requestedStatus, string? overrideComment)
|
||||
{
|
||||
var matches = results.Where(r => r.Matches).ToList();
|
||||
var mismatches = results.Where(r => !r.Matches).ToList();
|
||||
|
||||
Console.WriteLine($"\nAudit verification: {matches.Count} match, {mismatches.Count} mismatch");
|
||||
|
||||
if (mismatches.Count == 0)
|
||||
return true;
|
||||
|
||||
Console.WriteLine($"\nMismatches (requested '{requestedStatus}'):");
|
||||
foreach (var m in mismatches.Take(20))
|
||||
Console.WriteLine($" ID {m.ItemId}: audit says '{m.AuditStatus}' ({m.AuditReason})");
|
||||
if (mismatches.Count > 20)
|
||||
Console.WriteLine($" ... and {mismatches.Count - 20} more");
|
||||
|
||||
if (overrideComment is null)
|
||||
{
|
||||
Console.WriteLine($"\n{mismatches.Count} items have audit mismatches. Use --override \"reason\" to force.");
|
||||
return false;
|
||||
}
|
||||
|
||||
Console.WriteLine($"\nOverride applied: \"{overrideComment}\"");
|
||||
return true;
|
||||
}
|
||||
|
||||
/// <summary>
|
||||
/// Log override records to the status_overrides table.
|
||||
/// </summary>
|
||||
public static void LogOverrides(
|
||||
Database db, string tableName, IEnumerable<VerificationResult> mismatches,
|
||||
string requestedStatus, string comment)
|
||||
{
|
||||
var mismatchList = mismatches.ToList();
|
||||
if (mismatchList.Count == 0) return;
|
||||
|
||||
using var transaction = db.Connection.BeginTransaction();
|
||||
try
|
||||
{
|
||||
foreach (var mismatch in mismatchList)
|
||||
{
|
||||
using var cmd = db.CreateCommand(
|
||||
"INSERT INTO status_overrides (table_name, item_id, audit_status, audit_reason, requested_status, comment) " +
|
||||
"VALUES (@table, @item, @auditStatus, @auditReason, @requestedStatus, @comment)");
|
||||
cmd.Parameters.AddWithValue("@table", tableName);
|
||||
cmd.Parameters.AddWithValue("@item", mismatch.ItemId);
|
||||
cmd.Parameters.AddWithValue("@auditStatus", mismatch.AuditStatus);
|
||||
cmd.Parameters.AddWithValue("@auditReason", mismatch.AuditReason);
|
||||
cmd.Parameters.AddWithValue("@requestedStatus", requestedStatus);
|
||||
cmd.Parameters.AddWithValue("@comment", comment);
|
||||
cmd.Transaction = transaction;
|
||||
cmd.ExecuteNonQuery();
|
||||
}
|
||||
transaction.Commit();
|
||||
Console.WriteLine($"Logged {mismatchList.Count} override(s) to status_overrides table.");
|
||||
}
|
||||
catch
|
||||
{
|
||||
transaction.Rollback();
|
||||
throw;
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -7,18 +7,28 @@ namespace NatsNet.PortTracker.Commands;
|
||||
|
||||
public static class AuditCommand
|
||||
{
|
||||
private record AuditTarget(string Table, string Label, string DefaultSource, string DefaultOutput);
|
||||
|
||||
private static readonly AuditTarget FeaturesTarget = new(
|
||||
"features", "features",
|
||||
Path.Combine(Directory.GetCurrentDirectory(), "dotnet", "src", "ZB.MOM.NatsNet.Server"),
|
||||
Path.Combine(Directory.GetCurrentDirectory(), "reports", "audit-results.csv"));
|
||||
|
||||
private static readonly AuditTarget TestsTarget = new(
|
||||
"unit_tests", "unit tests",
|
||||
Path.Combine(Directory.GetCurrentDirectory(), "dotnet", "tests", "ZB.MOM.NatsNet.Server.Tests"),
|
||||
Path.Combine(Directory.GetCurrentDirectory(), "reports", "audit-results-tests.csv"));
|
||||
|
||||
public static Command Create(Option<string> dbOption)
|
||||
{
|
||||
var sourceOpt = new Option<string>("--source")
|
||||
var sourceOpt = new Option<string?>("--source")
|
||||
{
|
||||
Description = "Path to the .NET source directory",
|
||||
DefaultValueFactory = _ => Path.Combine(Directory.GetCurrentDirectory(), "dotnet", "src", "ZB.MOM.NatsNet.Server")
|
||||
Description = "Path to the .NET source directory (defaults based on --type)"
|
||||
};
|
||||
|
||||
var outputOpt = new Option<string>("--output")
|
||||
var outputOpt = new Option<string?>("--output")
|
||||
{
|
||||
Description = "CSV report output path",
|
||||
DefaultValueFactory = _ => Path.Combine(Directory.GetCurrentDirectory(), "reports", "audit-results.csv")
|
||||
Description = "CSV report output path (defaults based on --type)"
|
||||
};
|
||||
|
||||
var moduleOpt = new Option<int?>("--module")
|
||||
@@ -32,44 +42,62 @@ public static class AuditCommand
|
||||
DefaultValueFactory = _ => false
|
||||
};
|
||||
|
||||
var cmd = new Command("audit", "Classify unknown features by inspecting .NET source code");
|
||||
var typeOpt = new Option<string>("--type")
|
||||
{
|
||||
Description = "What to audit: features, tests, or all",
|
||||
DefaultValueFactory = _ => "features"
|
||||
};
|
||||
|
||||
var cmd = new Command("audit", "Classify unknown features/tests by inspecting .NET source code");
|
||||
cmd.Add(sourceOpt);
|
||||
cmd.Add(outputOpt);
|
||||
cmd.Add(moduleOpt);
|
||||
cmd.Add(executeOpt);
|
||||
cmd.Add(typeOpt);
|
||||
|
||||
cmd.SetAction(parseResult =>
|
||||
{
|
||||
var dbPath = parseResult.GetValue(dbOption)!;
|
||||
var sourcePath = parseResult.GetValue(sourceOpt)!;
|
||||
var outputPath = parseResult.GetValue(outputOpt)!;
|
||||
var sourceOverride = parseResult.GetValue(sourceOpt);
|
||||
var outputOverride = parseResult.GetValue(outputOpt);
|
||||
var moduleId = parseResult.GetValue(moduleOpt);
|
||||
var execute = parseResult.GetValue(executeOpt);
|
||||
var type = parseResult.GetValue(typeOpt)!;
|
||||
|
||||
RunAudit(dbPath, sourcePath, outputPath, moduleId, execute);
|
||||
AuditTarget[] targets = type switch
|
||||
{
|
||||
"features" => [FeaturesTarget],
|
||||
"tests" => [TestsTarget],
|
||||
"all" => [FeaturesTarget, TestsTarget],
|
||||
_ => throw new ArgumentException($"Unknown audit type: {type}. Use features, tests, or all.")
|
||||
};
|
||||
|
||||
foreach (var target in targets)
|
||||
{
|
||||
var sourcePath = sourceOverride ?? target.DefaultSource;
|
||||
var outputPath = outputOverride ?? target.DefaultOutput;
|
||||
RunAudit(dbPath, sourcePath, outputPath, moduleId, execute, target);
|
||||
}
|
||||
});
|
||||
|
||||
return cmd;
|
||||
}
|
||||
|
||||
private static void RunAudit(string dbPath, string sourcePath, string outputPath, int? moduleId, bool execute)
|
||||
private static void RunAudit(string dbPath, string sourcePath, string outputPath, int? moduleId, bool execute, AuditTarget target)
|
||||
{
|
||||
// Validate source directory
|
||||
if (!Directory.Exists(sourcePath))
|
||||
{
|
||||
Console.WriteLine($"Error: source directory not found: {sourcePath}");
|
||||
return;
|
||||
}
|
||||
|
||||
// 1. Build source index
|
||||
Console.WriteLine($"Parsing .NET source files in {sourcePath}...");
|
||||
var indexer = new SourceIndexer();
|
||||
indexer.IndexDirectory(sourcePath);
|
||||
Console.WriteLine($"Indexed {indexer.FilesIndexed} files, {indexer.MethodsIndexed} methods/properties.");
|
||||
|
||||
// 2. Query unknown features
|
||||
using var db = new Database(dbPath);
|
||||
var sql = "SELECT id, dotnet_class, dotnet_method, go_file, go_method FROM features WHERE status = 'unknown'";
|
||||
var sql = $"SELECT id, dotnet_class, dotnet_method, go_file, go_method FROM {target.Table} WHERE status = 'unknown'";
|
||||
var parameters = new List<(string, object?)>();
|
||||
if (moduleId is not null)
|
||||
{
|
||||
@@ -81,12 +109,11 @@ public static class AuditCommand
|
||||
var rows = db.Query(sql, parameters.ToArray());
|
||||
if (rows.Count == 0)
|
||||
{
|
||||
Console.WriteLine("No unknown features found.");
|
||||
Console.WriteLine($"No unknown {target.Label} found.");
|
||||
return;
|
||||
}
|
||||
Console.WriteLine($"Found {rows.Count} unknown features to classify.\n");
|
||||
Console.WriteLine($"Found {rows.Count} unknown {target.Label} to classify.\n");
|
||||
|
||||
// 3. Classify each feature
|
||||
var classifier = new FeatureClassifier(indexer);
|
||||
var results = new List<(FeatureClassifier.FeatureRecord Feature, FeatureClassifier.ClassificationResult Result)>();
|
||||
|
||||
@@ -103,17 +130,16 @@ public static class AuditCommand
|
||||
results.Add((feature, result));
|
||||
}
|
||||
|
||||
// 4. Write CSV report
|
||||
WriteCsvReport(outputPath, results);
|
||||
|
||||
// 5. Print console summary
|
||||
var grouped = results.GroupBy(r => r.Result.Status)
|
||||
.ToDictionary(g => g.Key, g => g.Count());
|
||||
|
||||
Console.WriteLine("Feature Status Audit Results");
|
||||
Console.WriteLine("=============================");
|
||||
var label = char.ToUpper(target.Label[0]) + target.Label[1..];
|
||||
Console.WriteLine($"{label} Status Audit Results");
|
||||
Console.WriteLine(new string('=', $"{label} Status Audit Results".Length));
|
||||
Console.WriteLine($"Source: {sourcePath} ({indexer.FilesIndexed} files, {indexer.MethodsIndexed} methods indexed)");
|
||||
Console.WriteLine($"Features audited: {results.Count}");
|
||||
Console.WriteLine($"{label} audited: {results.Count}");
|
||||
Console.WriteLine();
|
||||
Console.WriteLine($" verified: {grouped.GetValueOrDefault("verified", 0)}");
|
||||
Console.WriteLine($" stub: {grouped.GetValueOrDefault("stub", 0)}");
|
||||
@@ -128,8 +154,7 @@ public static class AuditCommand
|
||||
return;
|
||||
}
|
||||
|
||||
// 6. Apply DB updates
|
||||
ApplyUpdates(db, results);
|
||||
ApplyUpdates(db, results, target);
|
||||
Console.WriteLine($"Report: {outputPath}");
|
||||
}
|
||||
|
||||
@@ -137,7 +162,6 @@ public static class AuditCommand
|
||||
string outputPath,
|
||||
List<(FeatureClassifier.FeatureRecord Feature, FeatureClassifier.ClassificationResult Result)> results)
|
||||
{
|
||||
// Ensure directory exists
|
||||
var dir = Path.GetDirectoryName(outputPath);
|
||||
if (!string.IsNullOrEmpty(dir))
|
||||
Directory.CreateDirectory(dir);
|
||||
@@ -153,9 +177,9 @@ public static class AuditCommand
|
||||
|
||||
private static void ApplyUpdates(
|
||||
Database db,
|
||||
List<(FeatureClassifier.FeatureRecord Feature, FeatureClassifier.ClassificationResult Result)> results)
|
||||
List<(FeatureClassifier.FeatureRecord Feature, FeatureClassifier.ClassificationResult Result)> results,
|
||||
AuditTarget target)
|
||||
{
|
||||
// Group by (status, notes) for efficient batch updates
|
||||
var groups = results
|
||||
.GroupBy(r => (r.Result.Status, Notes: r.Result.Status == "n_a" ? r.Result.Reason : (string?)null))
|
||||
.ToList();
|
||||
@@ -170,7 +194,6 @@ public static class AuditCommand
|
||||
var status = group.Key.Status;
|
||||
var notes = group.Key.Notes;
|
||||
|
||||
// Build parameterized IN clause
|
||||
var placeholders = new List<string>();
|
||||
using var cmd = db.CreateCommand("");
|
||||
for (var i = 0; i < ids.Count; i++)
|
||||
@@ -183,12 +206,12 @@ public static class AuditCommand
|
||||
|
||||
if (notes is not null)
|
||||
{
|
||||
cmd.CommandText = $"UPDATE features SET status = @status, notes = @notes WHERE id IN ({string.Join(", ", placeholders)})";
|
||||
cmd.CommandText = $"UPDATE {target.Table} SET status = @status, notes = @notes WHERE id IN ({string.Join(", ", placeholders)})";
|
||||
cmd.Parameters.AddWithValue("@notes", notes);
|
||||
}
|
||||
else
|
||||
{
|
||||
cmd.CommandText = $"UPDATE features SET status = @status WHERE id IN ({string.Join(", ", placeholders)})";
|
||||
cmd.CommandText = $"UPDATE {target.Table} SET status = @status WHERE id IN ({string.Join(", ", placeholders)})";
|
||||
}
|
||||
|
||||
cmd.Transaction = transaction;
|
||||
@@ -197,7 +220,7 @@ public static class AuditCommand
|
||||
}
|
||||
|
||||
transaction.Commit();
|
||||
Console.WriteLine($"Updated {totalUpdated} features.");
|
||||
Console.WriteLine($"Updated {totalUpdated} {target.Label}.");
|
||||
}
|
||||
catch
|
||||
{
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
using System.CommandLine;
|
||||
using NatsNet.PortTracker.Audit;
|
||||
using NatsNet.PortTracker.Data;
|
||||
|
||||
namespace NatsNet.PortTracker.Commands;
|
||||
@@ -96,31 +97,59 @@ public static class FeatureCommands
|
||||
var updateId = new Argument<int>("id") { Description = "Feature ID (use 0 with --all-in-module)" };
|
||||
var updateStatus = new Option<string>("--status") { Description = "New status", Required = true };
|
||||
var updateAllInModule = new Option<int?>("--all-in-module") { Description = "Update all features in this module ID" };
|
||||
var updateCmd = new Command("update", "Update feature status");
|
||||
var updateOverride = new Option<string?>("--override") { Description = "Override audit mismatch with this comment" };
|
||||
var updateCmd = new Command("update", "Update feature status (audit-verified)");
|
||||
updateCmd.Add(updateId);
|
||||
updateCmd.Add(updateStatus);
|
||||
updateCmd.Add(updateAllInModule);
|
||||
updateCmd.Add(updateOverride);
|
||||
updateCmd.SetAction(parseResult =>
|
||||
{
|
||||
var dbPath = parseResult.GetValue(dbOption)!;
|
||||
var id = parseResult.GetValue(updateId);
|
||||
var status = parseResult.GetValue(updateStatus)!;
|
||||
var allInModule = parseResult.GetValue(updateAllInModule);
|
||||
var overrideComment = parseResult.GetValue(updateOverride);
|
||||
using var db = new Database(dbPath);
|
||||
|
||||
var indexer = AuditVerifier.BuildIndexer("features");
|
||||
|
||||
if (allInModule is not null)
|
||||
{
|
||||
var verifications = AuditVerifier.VerifyItems(db, indexer, "features",
|
||||
" WHERE module_id = @module", [("@module", (object?)allInModule)], status);
|
||||
if (!AuditVerifier.CheckAndReport(verifications, status, overrideComment))
|
||||
return;
|
||||
|
||||
var affected = db.Execute(
|
||||
"UPDATE features SET status = @status WHERE module_id = @module",
|
||||
("@status", status), ("@module", allInModule));
|
||||
Console.WriteLine($"Updated {affected} features in module {allInModule} to '{status}'.");
|
||||
|
||||
var mismatches = verifications.Where(r => !r.Matches).ToList();
|
||||
if (mismatches.Count > 0 && overrideComment is not null)
|
||||
AuditVerifier.LogOverrides(db, "features", mismatches, status, overrideComment);
|
||||
}
|
||||
else
|
||||
{
|
||||
var verifications = AuditVerifier.VerifyItems(db, indexer, "features",
|
||||
" WHERE id = @id", [("@id", (object?)id)], status);
|
||||
if (verifications.Count == 0)
|
||||
{
|
||||
Console.WriteLine($"Feature {id} not found.");
|
||||
return;
|
||||
}
|
||||
if (!AuditVerifier.CheckAndReport(verifications, status, overrideComment))
|
||||
return;
|
||||
|
||||
var affected = db.Execute(
|
||||
"UPDATE features SET status = @status WHERE id = @id",
|
||||
("@status", status), ("@id", id));
|
||||
Console.WriteLine(affected > 0 ? $"Feature {id} updated to '{status}'." : $"Feature {id} not found.");
|
||||
|
||||
var mismatches = verifications.Where(r => !r.Matches).ToList();
|
||||
if (mismatches.Count > 0 && overrideComment is not null)
|
||||
AuditVerifier.LogOverrides(db, "features", mismatches, status, overrideComment);
|
||||
}
|
||||
});
|
||||
|
||||
@@ -179,13 +208,14 @@ public static class FeatureCommands
|
||||
|
||||
private static Command CreateBatchUpdate(Option<string> dbOption)
|
||||
{
|
||||
var cmd = new Command("batch-update", "Bulk update feature status");
|
||||
var cmd = new Command("batch-update", "Bulk update feature status (audit-verified)");
|
||||
var idsOpt = BatchFilters.IdsOption();
|
||||
var moduleOpt = BatchFilters.ModuleOption();
|
||||
var statusOpt = BatchFilters.StatusOption();
|
||||
var executeOpt = BatchFilters.ExecuteOption();
|
||||
var setStatus = new Option<string>("--set-status") { Description = "New status to set", Required = true };
|
||||
var setNotes = new Option<string?>("--set-notes") { Description = "Notes to set" };
|
||||
var overrideOpt = new Option<string?>("--override") { Description = "Override audit mismatches with this comment" };
|
||||
|
||||
cmd.Add(idsOpt);
|
||||
cmd.Add(moduleOpt);
|
||||
@@ -193,6 +223,7 @@ public static class FeatureCommands
|
||||
cmd.Add(executeOpt);
|
||||
cmd.Add(setStatus);
|
||||
cmd.Add(setNotes);
|
||||
cmd.Add(overrideOpt);
|
||||
|
||||
cmd.SetAction(parseResult =>
|
||||
{
|
||||
@@ -203,6 +234,7 @@ public static class FeatureCommands
|
||||
var execute = parseResult.GetValue(executeOpt);
|
||||
var newStatus = parseResult.GetValue(setStatus)!;
|
||||
var notes = parseResult.GetValue(setNotes);
|
||||
var overrideComment = parseResult.GetValue(overrideOpt);
|
||||
|
||||
if (string.IsNullOrWhiteSpace(ids) && module is null && string.IsNullOrWhiteSpace(status))
|
||||
{
|
||||
@@ -213,6 +245,12 @@ public static class FeatureCommands
|
||||
using var db = new Database(dbPath);
|
||||
var (whereClause, filterParams) = BatchFilters.BuildWhereClause(ids, module, status);
|
||||
|
||||
// Audit verification
|
||||
var indexer = AuditVerifier.BuildIndexer("features");
|
||||
var verifications = AuditVerifier.VerifyItems(db, indexer, "features", whereClause, filterParams, newStatus);
|
||||
if (!AuditVerifier.CheckAndReport(verifications, newStatus, overrideComment))
|
||||
return;
|
||||
|
||||
var setClauses = new List<string> { "status = @newStatus" };
|
||||
var updateParams = new List<(string, object?)> { ("@newStatus", newStatus) };
|
||||
if (notes is not null)
|
||||
@@ -225,6 +263,14 @@ public static class FeatureCommands
|
||||
"id, name, status, module_id, notes",
|
||||
string.Join(", ", setClauses), updateParams,
|
||||
whereClause, filterParams, execute);
|
||||
|
||||
// Log overrides after successful execute
|
||||
if (execute)
|
||||
{
|
||||
var mismatches = verifications.Where(r => !r.Matches).ToList();
|
||||
if (mismatches.Count > 0 && overrideComment is not null)
|
||||
AuditVerifier.LogOverrides(db, "features", mismatches, newStatus, overrideComment);
|
||||
}
|
||||
});
|
||||
|
||||
return cmd;
|
||||
|
||||
66
tools/NatsNet.PortTracker/Commands/OverrideCommands.cs
Normal file
66
tools/NatsNet.PortTracker/Commands/OverrideCommands.cs
Normal file
@@ -0,0 +1,66 @@
|
||||
using System.CommandLine;
|
||||
using NatsNet.PortTracker.Data;
|
||||
|
||||
namespace NatsNet.PortTracker.Commands;
|
||||
|
||||
public static class OverrideCommands
|
||||
{
|
||||
public static Command Create(Option<string> dbOption)
|
||||
{
|
||||
var overrideCommand = new Command("override", "Review status override records");
|
||||
|
||||
var typeOpt = new Option<string?>("--type")
|
||||
{
|
||||
Description = "Filter by table: features or tests"
|
||||
};
|
||||
|
||||
var listCmd = new Command("list", "List all status overrides");
|
||||
listCmd.Add(typeOpt);
|
||||
listCmd.SetAction(parseResult =>
|
||||
{
|
||||
var dbPath = parseResult.GetValue(dbOption)!;
|
||||
var type = parseResult.GetValue(typeOpt);
|
||||
using var db = new Database(dbPath);
|
||||
|
||||
var sql = "SELECT id, table_name, item_id, audit_status, requested_status, comment, created_at FROM status_overrides";
|
||||
var parameters = new List<(string, object?)>();
|
||||
|
||||
if (type is not null)
|
||||
{
|
||||
var tableName = type switch
|
||||
{
|
||||
"features" => "features",
|
||||
"tests" => "unit_tests",
|
||||
_ => type
|
||||
};
|
||||
sql += " WHERE table_name = @table";
|
||||
parameters.Add(("@table", tableName));
|
||||
}
|
||||
sql += " ORDER BY created_at DESC";
|
||||
|
||||
var rows = db.Query(sql, parameters.ToArray());
|
||||
if (rows.Count == 0)
|
||||
{
|
||||
Console.WriteLine("No overrides found.");
|
||||
return;
|
||||
}
|
||||
|
||||
Console.WriteLine($"{"ID",-5} {"Table",-12} {"Item",-6} {"Audit",-10} {"Requested",-10} {"Comment",-35} {"Date",-20}");
|
||||
Console.WriteLine(new string('-', 98));
|
||||
foreach (var row in rows)
|
||||
{
|
||||
Console.WriteLine($"{row["id"],-5} {row["table_name"],-12} {row["item_id"],-6} {row["audit_status"],-10} {row["requested_status"],-10} {Truncate(row["comment"]?.ToString(), 34),-35} {row["created_at"],-20}");
|
||||
}
|
||||
Console.WriteLine($"\nTotal: {rows.Count} overrides");
|
||||
});
|
||||
|
||||
overrideCommand.Add(listCmd);
|
||||
return overrideCommand;
|
||||
}
|
||||
|
||||
private static string Truncate(string? s, int maxLen)
|
||||
{
|
||||
if (s is null) return "";
|
||||
return s.Length <= maxLen ? s : s[..(maxLen - 2)] + "..";
|
||||
}
|
||||
}
|
||||
@@ -1,4 +1,5 @@
|
||||
using System.CommandLine;
|
||||
using NatsNet.PortTracker.Audit;
|
||||
using NatsNet.PortTracker.Data;
|
||||
|
||||
namespace NatsNet.PortTracker.Commands;
|
||||
@@ -89,18 +90,37 @@ public static class TestCommands
|
||||
// update
|
||||
var updateId = new Argument<int>("id") { Description = "Test ID" };
|
||||
var updateStatus = new Option<string>("--status") { Description = "New status", Required = true };
|
||||
var updateCmd = new Command("update", "Update test status");
|
||||
var updateOverride = new Option<string?>("--override") { Description = "Override audit mismatch with this comment" };
|
||||
var updateCmd = new Command("update", "Update test status (audit-verified)");
|
||||
updateCmd.Add(updateId);
|
||||
updateCmd.Add(updateStatus);
|
||||
updateCmd.Add(updateOverride);
|
||||
updateCmd.SetAction(parseResult =>
|
||||
{
|
||||
var dbPath = parseResult.GetValue(dbOption)!;
|
||||
var id = parseResult.GetValue(updateId);
|
||||
var status = parseResult.GetValue(updateStatus)!;
|
||||
var overrideComment = parseResult.GetValue(updateOverride);
|
||||
using var db = new Database(dbPath);
|
||||
|
||||
var indexer = AuditVerifier.BuildIndexer("unit_tests");
|
||||
var verifications = AuditVerifier.VerifyItems(db, indexer, "unit_tests",
|
||||
" WHERE id = @id", [("@id", (object?)id)], status);
|
||||
if (verifications.Count == 0)
|
||||
{
|
||||
Console.WriteLine($"Test {id} not found.");
|
||||
return;
|
||||
}
|
||||
if (!AuditVerifier.CheckAndReport(verifications, status, overrideComment))
|
||||
return;
|
||||
|
||||
var affected = db.Execute("UPDATE unit_tests SET status = @status WHERE id = @id",
|
||||
("@status", status), ("@id", id));
|
||||
Console.WriteLine(affected > 0 ? $"Test {id} updated to '{status}'." : $"Test {id} not found.");
|
||||
|
||||
var mismatches = verifications.Where(r => !r.Matches).ToList();
|
||||
if (mismatches.Count > 0 && overrideComment is not null)
|
||||
AuditVerifier.LogOverrides(db, "unit_tests", mismatches, status, overrideComment);
|
||||
});
|
||||
|
||||
// map
|
||||
@@ -139,13 +159,14 @@ public static class TestCommands
|
||||
|
||||
private static Command CreateBatchUpdate(Option<string> dbOption)
|
||||
{
|
||||
var cmd = new Command("batch-update", "Bulk update test status");
|
||||
var cmd = new Command("batch-update", "Bulk update test status (audit-verified)");
|
||||
var idsOpt = BatchFilters.IdsOption();
|
||||
var moduleOpt = BatchFilters.ModuleOption();
|
||||
var statusOpt = BatchFilters.StatusOption();
|
||||
var executeOpt = BatchFilters.ExecuteOption();
|
||||
var setStatus = new Option<string>("--set-status") { Description = "New status to set", Required = true };
|
||||
var setNotes = new Option<string?>("--set-notes") { Description = "Notes to set" };
|
||||
var overrideOpt = new Option<string?>("--override") { Description = "Override audit mismatches with this comment" };
|
||||
|
||||
cmd.Add(idsOpt);
|
||||
cmd.Add(moduleOpt);
|
||||
@@ -153,6 +174,7 @@ public static class TestCommands
|
||||
cmd.Add(executeOpt);
|
||||
cmd.Add(setStatus);
|
||||
cmd.Add(setNotes);
|
||||
cmd.Add(overrideOpt);
|
||||
|
||||
cmd.SetAction(parseResult =>
|
||||
{
|
||||
@@ -163,6 +185,7 @@ public static class TestCommands
|
||||
var execute = parseResult.GetValue(executeOpt);
|
||||
var newStatus = parseResult.GetValue(setStatus)!;
|
||||
var notes = parseResult.GetValue(setNotes);
|
||||
var overrideComment = parseResult.GetValue(overrideOpt);
|
||||
|
||||
if (string.IsNullOrWhiteSpace(ids) && module is null && string.IsNullOrWhiteSpace(status))
|
||||
{
|
||||
@@ -173,6 +196,12 @@ public static class TestCommands
|
||||
using var db = new Database(dbPath);
|
||||
var (whereClause, filterParams) = BatchFilters.BuildWhereClause(ids, module, status);
|
||||
|
||||
// Audit verification
|
||||
var indexer = AuditVerifier.BuildIndexer("unit_tests");
|
||||
var verifications = AuditVerifier.VerifyItems(db, indexer, "unit_tests", whereClause, filterParams, newStatus);
|
||||
if (!AuditVerifier.CheckAndReport(verifications, newStatus, overrideComment))
|
||||
return;
|
||||
|
||||
var setClauses = new List<string> { "status = @newStatus" };
|
||||
var updateParams = new List<(string, object?)> { ("@newStatus", newStatus) };
|
||||
if (notes is not null)
|
||||
@@ -185,6 +214,14 @@ public static class TestCommands
|
||||
"id, name, status, module_id, notes",
|
||||
string.Join(", ", setClauses), updateParams,
|
||||
whereClause, filterParams, execute);
|
||||
|
||||
// Log overrides after successful execute
|
||||
if (execute)
|
||||
{
|
||||
var mismatches = verifications.Where(r => !r.Matches).ToList();
|
||||
if (mismatches.Count > 0 && overrideComment is not null)
|
||||
AuditVerifier.LogOverrides(db, "unit_tests", mismatches, newStatus, overrideComment);
|
||||
}
|
||||
});
|
||||
|
||||
return cmd;
|
||||
|
||||
@@ -40,6 +40,7 @@ rootCommand.Add(DependencyCommands.Create(dbOption, schemaOption));
|
||||
rootCommand.Add(ReportCommands.Create(dbOption, schemaOption));
|
||||
rootCommand.Add(PhaseCommands.Create(dbOption, schemaOption));
|
||||
rootCommand.Add(AuditCommand.Create(dbOption));
|
||||
rootCommand.Add(OverrideCommands.Create(dbOption));
|
||||
|
||||
var parseResult = rootCommand.Parse(args);
|
||||
return await parseResult.InvokeAsync();
|
||||
|
||||
@@ -256,6 +256,10 @@ func (a *Analyzer) parseTestFile(filePath string) ([]TestFunc, []ImportInfo, int
|
||||
}
|
||||
|
||||
test.FeatureName = a.inferFeatureName(name)
|
||||
test.BestFeatureIdx = -1
|
||||
if fn.Body != nil {
|
||||
test.Calls = a.extractCalls(fn.Body)
|
||||
}
|
||||
tests = append(tests, test)
|
||||
}
|
||||
|
||||
@@ -331,6 +335,210 @@ func (a *Analyzer) inferFeatureName(testName string) string {
|
||||
return name
|
||||
}
|
||||
|
||||
// extractCalls walks an AST block statement and extracts all function/method calls.
|
||||
func (a *Analyzer) extractCalls(body *ast.BlockStmt) []CallInfo {
|
||||
seen := make(map[string]bool)
|
||||
var calls []CallInfo
|
||||
|
||||
ast.Inspect(body, func(n ast.Node) bool {
|
||||
callExpr, ok := n.(*ast.CallExpr)
|
||||
if !ok {
|
||||
return true
|
||||
}
|
||||
|
||||
var ci CallInfo
|
||||
switch fun := callExpr.Fun.(type) {
|
||||
case *ast.Ident:
|
||||
ci = CallInfo{FuncName: fun.Name}
|
||||
case *ast.SelectorExpr:
|
||||
ci = CallInfo{
|
||||
RecvOrPkg: extractIdent(fun.X),
|
||||
MethodName: fun.Sel.Name,
|
||||
IsSelector: true,
|
||||
}
|
||||
default:
|
||||
return true
|
||||
}
|
||||
|
||||
key := ci.callKey()
|
||||
if !seen[key] && !isFilteredCall(ci) {
|
||||
seen[key] = true
|
||||
calls = append(calls, ci)
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
return calls
|
||||
}
|
||||
|
||||
// extractIdent extracts an identifier name from an expression (handles X in X.Y).
|
||||
func extractIdent(expr ast.Expr) string {
|
||||
switch e := expr.(type) {
|
||||
case *ast.Ident:
|
||||
return e.Name
|
||||
case *ast.SelectorExpr:
|
||||
return extractIdent(e.X) + "." + e.Sel.Name
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
|
||||
// isFilteredCall returns true if a call should be excluded from feature matching.
|
||||
func isFilteredCall(c CallInfo) bool {
|
||||
if c.IsSelector {
|
||||
recv := c.RecvOrPkg
|
||||
// testing.T/B methods
|
||||
if recv == "t" || recv == "b" || recv == "tb" {
|
||||
return true
|
||||
}
|
||||
// stdlib packages
|
||||
if stdlibPkgs[recv] {
|
||||
return true
|
||||
}
|
||||
// NATS client libs
|
||||
if recv == "nats" || recv == "nuid" || recv == "nkeys" || recv == "jwt" {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Go builtins
|
||||
name := c.FuncName
|
||||
if builtinFuncs[name] {
|
||||
return true
|
||||
}
|
||||
|
||||
// Test assertion helpers
|
||||
lower := strings.ToLower(name)
|
||||
if strings.HasPrefix(name, "require_") {
|
||||
return true
|
||||
}
|
||||
for _, prefix := range []string{"check", "verify", "assert", "expect"} {
|
||||
if strings.HasPrefix(lower, prefix) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// featureRef identifies a feature within the analysis result.
|
||||
type featureRef struct {
|
||||
moduleIdx int
|
||||
featureIdx int
|
||||
goFile string
|
||||
goClass string
|
||||
}
|
||||
|
||||
// resolveCallGraph matches test calls against known features across all modules.
|
||||
func resolveCallGraph(result *AnalysisResult) {
|
||||
// Build method index: go_method name → list of feature refs
|
||||
methodIndex := make(map[string][]featureRef)
|
||||
for mi, mod := range result.Modules {
|
||||
for fi, feat := range mod.Features {
|
||||
ref := featureRef{
|
||||
moduleIdx: mi,
|
||||
featureIdx: fi,
|
||||
goFile: feat.GoFile,
|
||||
goClass: feat.GoClass,
|
||||
}
|
||||
methodIndex[feat.GoMethod] = append(methodIndex[feat.GoMethod], ref)
|
||||
}
|
||||
}
|
||||
|
||||
// For each test, resolve calls to features
|
||||
for mi := range result.Modules {
|
||||
mod := &result.Modules[mi]
|
||||
for ti := range mod.Tests {
|
||||
test := &mod.Tests[ti]
|
||||
seen := make(map[int]bool) // feature indices already linked
|
||||
var linked []int
|
||||
|
||||
testFileBase := sourceFileBase(test.GoFile)
|
||||
|
||||
for _, call := range test.Calls {
|
||||
// Look up the method name
|
||||
name := call.MethodName
|
||||
if !call.IsSelector {
|
||||
name = call.FuncName
|
||||
}
|
||||
|
||||
candidates := methodIndex[name]
|
||||
if len(candidates) == 0 {
|
||||
continue
|
||||
}
|
||||
// Ambiguity threshold: skip very common method names
|
||||
if len(candidates) > 10 {
|
||||
continue
|
||||
}
|
||||
|
||||
// Filter to same module
|
||||
var sameModule []featureRef
|
||||
for _, ref := range candidates {
|
||||
if ref.moduleIdx == mi {
|
||||
sameModule = append(sameModule, ref)
|
||||
}
|
||||
}
|
||||
if len(sameModule) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
for _, ref := range sameModule {
|
||||
if !seen[ref.featureIdx] {
|
||||
seen[ref.featureIdx] = true
|
||||
linked = append(linked, ref.featureIdx)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
test.LinkedFeatures = linked
|
||||
|
||||
// Set BestFeatureIdx using priority:
|
||||
// (a) existing inferFeatureName match
|
||||
// (b) same-file-base match
|
||||
// (c) first remaining candidate
|
||||
if test.BestFeatureIdx < 0 && len(linked) > 0 {
|
||||
// Try same-file-base match first
|
||||
for _, fi := range linked {
|
||||
featFileBase := sourceFileBase(mod.Features[fi].GoFile)
|
||||
if featFileBase == testFileBase {
|
||||
test.BestFeatureIdx = fi
|
||||
break
|
||||
}
|
||||
}
|
||||
// Fall back to first candidate
|
||||
if test.BestFeatureIdx < 0 {
|
||||
test.BestFeatureIdx = linked[0]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// sourceFileBase strips _test.go suffix and path to get the base file name.
|
||||
func sourceFileBase(goFile string) string {
|
||||
base := filepath.Base(goFile)
|
||||
base = strings.TrimSuffix(base, "_test.go")
|
||||
base = strings.TrimSuffix(base, ".go")
|
||||
return base
|
||||
}
|
||||
|
||||
var stdlibPkgs = map[string]bool{
|
||||
"fmt": true, "time": true, "strings": true, "bytes": true, "errors": true,
|
||||
"os": true, "math": true, "sort": true, "reflect": true, "sync": true,
|
||||
"context": true, "io": true, "filepath": true, "strconv": true,
|
||||
"encoding": true, "json": true, "binary": true, "hex": true, "rand": true,
|
||||
"runtime": true, "atomic": true, "slices": true, "testing": true,
|
||||
"net": true, "bufio": true, "crypto": true, "log": true, "regexp": true,
|
||||
"unicode": true, "http": true, "url": true,
|
||||
}
|
||||
|
||||
var builtinFuncs = map[string]bool{
|
||||
"make": true, "append": true, "len": true, "cap": true, "close": true,
|
||||
"delete": true, "panic": true, "recover": true, "print": true,
|
||||
"println": true, "copy": true, "new": true,
|
||||
}
|
||||
|
||||
// isStdlib checks if an import path is a Go standard library package.
|
||||
func isStdlib(importPath string) bool {
|
||||
firstSlash := strings.Index(importPath, "/")
|
||||
|
||||
@@ -11,28 +11,47 @@ func main() {
|
||||
sourceDir := flag.String("source", "", "Path to Go source root (e.g., ../../golang/nats-server)")
|
||||
dbPath := flag.String("db", "", "Path to SQLite database file (e.g., ../../porting.db)")
|
||||
schemaPath := flag.String("schema", "", "Path to SQL schema file (e.g., ../../porting-schema.sql)")
|
||||
mode := flag.String("mode", "full", "Analysis mode: 'full' (default) or 'call-graph' (incremental)")
|
||||
flag.Parse()
|
||||
|
||||
if *sourceDir == "" || *dbPath == "" || *schemaPath == "" {
|
||||
fmt.Fprintf(os.Stderr, "Usage: go-analyzer --source <path> --db <path> --schema <path>\n")
|
||||
if *sourceDir == "" || *dbPath == "" {
|
||||
fmt.Fprintf(os.Stderr, "Usage: go-analyzer --source <path> --db <path> [--schema <path>] [--mode full|call-graph]\n")
|
||||
flag.PrintDefaults()
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
switch *mode {
|
||||
case "full":
|
||||
runFull(*sourceDir, *dbPath, *schemaPath)
|
||||
case "call-graph":
|
||||
runCallGraph(*sourceDir, *dbPath)
|
||||
default:
|
||||
log.Fatalf("Unknown mode %q: must be 'full' or 'call-graph'", *mode)
|
||||
}
|
||||
}
|
||||
|
||||
func runFull(sourceDir, dbPath, schemaPath string) {
|
||||
if schemaPath == "" {
|
||||
log.Fatal("--schema is required for full mode")
|
||||
}
|
||||
|
||||
// Open DB and apply schema
|
||||
db, err := OpenDB(*dbPath, *schemaPath)
|
||||
db, err := OpenDB(dbPath, schemaPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to open database: %v", err)
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
// Run analysis
|
||||
analyzer := NewAnalyzer(*sourceDir)
|
||||
analyzer := NewAnalyzer(sourceDir)
|
||||
result, err := analyzer.Analyze()
|
||||
if err != nil {
|
||||
log.Fatalf("Analysis failed: %v", err)
|
||||
}
|
||||
|
||||
// Resolve call graph before writing
|
||||
resolveCallGraph(result)
|
||||
|
||||
// Write to DB
|
||||
writer := NewDBWriter(db)
|
||||
if err := writer.WriteAll(result); err != nil {
|
||||
@@ -46,3 +65,35 @@ func main() {
|
||||
fmt.Printf(" Dependencies: %d\n", len(result.Dependencies))
|
||||
fmt.Printf(" Imports: %d\n", len(result.Imports))
|
||||
}
|
||||
|
||||
func runCallGraph(sourceDir, dbPath string) {
|
||||
// Open existing DB without schema
|
||||
db, err := OpenDBNoSchema(dbPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to open database: %v", err)
|
||||
}
|
||||
defer db.Close()
|
||||
|
||||
// Run analysis (parse Go source)
|
||||
analyzer := NewAnalyzer(sourceDir)
|
||||
result, err := analyzer.Analyze()
|
||||
if err != nil {
|
||||
log.Fatalf("Analysis failed: %v", err)
|
||||
}
|
||||
|
||||
// Resolve call graph
|
||||
resolveCallGraph(result)
|
||||
|
||||
// Update DB incrementally
|
||||
writer := NewDBWriter(db)
|
||||
stats, err := writer.UpdateCallGraph(result)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to update call graph: %v", err)
|
||||
}
|
||||
|
||||
fmt.Printf("Call graph analysis complete:\n")
|
||||
fmt.Printf(" Tests analyzed: %d\n", stats.TestsAnalyzed)
|
||||
fmt.Printf(" Tests linked: %d\n", stats.TestsLinked)
|
||||
fmt.Printf(" Dependency rows: %d\n", stats.DependencyRows)
|
||||
fmt.Printf(" Feature IDs set: %d\n", stats.FeatureIDsSet)
|
||||
}
|
||||
|
||||
@@ -152,3 +152,176 @@ func (w *DBWriter) insertLibrary(tx *sql.Tx, imp *ImportInfo) error {
|
||||
)
|
||||
return err
|
||||
}
|
||||
|
||||
// OpenDBNoSchema opens an existing SQLite database without applying schema.
|
||||
// It verifies that the required tables exist.
|
||||
func OpenDBNoSchema(dbPath string) (*sql.DB, error) {
|
||||
db, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_foreign_keys=ON")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("opening database: %w", err)
|
||||
}
|
||||
|
||||
// Verify required tables exist
|
||||
for _, table := range []string{"modules", "features", "unit_tests", "dependencies"} {
|
||||
var name string
|
||||
err := db.QueryRow("SELECT name FROM sqlite_master WHERE type='table' AND name=?", table).Scan(&name)
|
||||
if err != nil {
|
||||
db.Close()
|
||||
return nil, fmt.Errorf("required table %q not found: %w", table, err)
|
||||
}
|
||||
}
|
||||
|
||||
return db, nil
|
||||
}
|
||||
|
||||
// CallGraphStats holds summary statistics from a call-graph update.
|
||||
type CallGraphStats struct {
|
||||
TestsAnalyzed int
|
||||
TestsLinked int
|
||||
DependencyRows int
|
||||
FeatureIDsSet int
|
||||
}
|
||||
|
||||
// UpdateCallGraph writes call-graph analysis results to the database incrementally.
|
||||
func (w *DBWriter) UpdateCallGraph(result *AnalysisResult) (*CallGraphStats, error) {
|
||||
stats := &CallGraphStats{}
|
||||
|
||||
// Load module name→ID mapping
|
||||
moduleIDs := make(map[string]int64)
|
||||
rows, err := w.db.Query("SELECT id, name FROM modules")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("querying modules: %w", err)
|
||||
}
|
||||
for rows.Next() {
|
||||
var id int64
|
||||
var name string
|
||||
if err := rows.Scan(&id, &name); err != nil {
|
||||
rows.Close()
|
||||
return nil, err
|
||||
}
|
||||
moduleIDs[name] = id
|
||||
}
|
||||
rows.Close()
|
||||
|
||||
// Load feature DB IDs: "module_name:go_method:go_class" → id
|
||||
type featureKey struct {
|
||||
moduleName string
|
||||
goMethod string
|
||||
goClass string
|
||||
}
|
||||
featureDBIDs := make(map[featureKey]int64)
|
||||
rows, err = w.db.Query(`
|
||||
SELECT f.id, m.name, f.go_method, COALESCE(f.go_class, '')
|
||||
FROM features f
|
||||
JOIN modules m ON f.module_id = m.id
|
||||
`)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("querying features: %w", err)
|
||||
}
|
||||
for rows.Next() {
|
||||
var id int64
|
||||
var modName, goMethod, goClass string
|
||||
if err := rows.Scan(&id, &modName, &goMethod, &goClass); err != nil {
|
||||
rows.Close()
|
||||
return nil, err
|
||||
}
|
||||
featureDBIDs[featureKey{modName, goMethod, goClass}] = id
|
||||
}
|
||||
rows.Close()
|
||||
|
||||
// Load test DB IDs: "module_name:go_method" → id
|
||||
testDBIDs := make(map[string]int64)
|
||||
rows, err = w.db.Query(`
|
||||
SELECT ut.id, m.name, ut.go_method
|
||||
FROM unit_tests ut
|
||||
JOIN modules m ON ut.module_id = m.id
|
||||
`)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("querying unit_tests: %w", err)
|
||||
}
|
||||
for rows.Next() {
|
||||
var id int64
|
||||
var modName, goMethod string
|
||||
if err := rows.Scan(&id, &modName, &goMethod); err != nil {
|
||||
rows.Close()
|
||||
return nil, err
|
||||
}
|
||||
testDBIDs[modName+":"+goMethod] = id
|
||||
}
|
||||
rows.Close()
|
||||
|
||||
// Begin transaction
|
||||
tx, err := w.db.Begin()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("beginning transaction: %w", err)
|
||||
}
|
||||
defer tx.Rollback()
|
||||
|
||||
// Clear old call-graph data
|
||||
if _, err := tx.Exec("DELETE FROM dependencies WHERE source_type='unit_test' AND dependency_kind='calls'"); err != nil {
|
||||
return nil, fmt.Errorf("clearing old dependencies: %w", err)
|
||||
}
|
||||
if _, err := tx.Exec("UPDATE unit_tests SET feature_id = NULL"); err != nil {
|
||||
return nil, fmt.Errorf("clearing old feature_ids: %w", err)
|
||||
}
|
||||
|
||||
// Prepare statements
|
||||
insertDep, err := tx.Prepare("INSERT OR IGNORE INTO dependencies (source_type, source_id, target_type, target_id, dependency_kind) VALUES ('unit_test', ?, 'feature', ?, 'calls')")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("preparing insert dependency: %w", err)
|
||||
}
|
||||
defer insertDep.Close()
|
||||
|
||||
updateFeatureID, err := tx.Prepare("UPDATE unit_tests SET feature_id = ? WHERE id = ?")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("preparing update feature_id: %w", err)
|
||||
}
|
||||
defer updateFeatureID.Close()
|
||||
|
||||
// Process each module's tests
|
||||
for _, mod := range result.Modules {
|
||||
for _, test := range mod.Tests {
|
||||
stats.TestsAnalyzed++
|
||||
|
||||
testDBID, ok := testDBIDs[mod.Name+":"+test.GoMethod]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
// Insert dependency rows for linked features
|
||||
if len(test.LinkedFeatures) > 0 {
|
||||
stats.TestsLinked++
|
||||
}
|
||||
for _, fi := range test.LinkedFeatures {
|
||||
feat := mod.Features[fi]
|
||||
featDBID, ok := featureDBIDs[featureKey{mod.Name, feat.GoMethod, feat.GoClass}]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
if _, err := insertDep.Exec(testDBID, featDBID); err != nil {
|
||||
return nil, fmt.Errorf("inserting dependency for test %s: %w", test.GoMethod, err)
|
||||
}
|
||||
stats.DependencyRows++
|
||||
}
|
||||
|
||||
// Set feature_id for best match
|
||||
if test.BestFeatureIdx >= 0 {
|
||||
feat := mod.Features[test.BestFeatureIdx]
|
||||
featDBID, ok := featureDBIDs[featureKey{mod.Name, feat.GoMethod, feat.GoClass}]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
if _, err := updateFeatureID.Exec(featDBID, testDBID); err != nil {
|
||||
return nil, fmt.Errorf("updating feature_id for test %s: %w", test.GoMethod, err)
|
||||
}
|
||||
stats.FeatureIDsSet++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if err := tx.Commit(); err != nil {
|
||||
return nil, fmt.Errorf("committing transaction: %w", err)
|
||||
}
|
||||
|
||||
return stats, nil
|
||||
}
|
||||
|
||||
@@ -58,6 +58,28 @@ type TestFunc struct {
|
||||
GoLineCount int
|
||||
// FeatureName links this test to a feature by naming convention
|
||||
FeatureName string
|
||||
// Calls holds raw function/method calls extracted from the test body AST
|
||||
Calls []CallInfo
|
||||
// LinkedFeatures holds indices into the parent module's Features slice
|
||||
LinkedFeatures []int
|
||||
// BestFeatureIdx is the primary feature match index (-1 = none)
|
||||
BestFeatureIdx int
|
||||
}
|
||||
|
||||
// CallInfo represents a function or method call extracted from a test body.
|
||||
type CallInfo struct {
|
||||
FuncName string // direct call name: "newMemStore"
|
||||
RecvOrPkg string // selector receiver/pkg: "ms", "fmt", "t"
|
||||
MethodName string // selector method: "StoreMsg", "Fatalf"
|
||||
IsSelector bool // true for X.Y() form
|
||||
}
|
||||
|
||||
// callKey returns a deduplication key for this call.
|
||||
func (c CallInfo) callKey() string {
|
||||
if c.IsSelector {
|
||||
return c.RecvOrPkg + "." + c.MethodName
|
||||
}
|
||||
return c.FuncName
|
||||
}
|
||||
|
||||
// Dependency represents a call relationship between two items.
|
||||
|
||||
Reference in New Issue
Block a user