# Batch 0 Implementable Tests Design

**Date:** 2026-02-27

**Scope:** Plan how to port/implement Batch 0 unit tests only (no execution in this document).

## Problem

Batch 0 is defined as: deferred tests whose feature dependencies are already verified/complete/n_a.

However, the current tracker output is inconsistent:

- `batch show 0` reports `Tests: 0`
- `report summary` reports `2640 deferred` unit tests
- a direct DB query shows **553 deferred tests** already satisfy the Batch 0 dependency rules
- all 553 have `dotnet_class` + `dotnet_method` mappings, and the method names are already present in test source

This means Batch 0 implementation planning must treat the DB `batch_tests` mapping as stale and use a dependency query as the source of truth.

## Context Findings

### Command Findings

- `batch show 0 --db porting.db`: batch metadata exists, but no tests are mapped.
- `batch list --db porting.db`: total mapped tests across batches = 2087.
- `report summary --db porting.db`: deferred tests = 2640.
- Gap: `2640 - 2087 = 553` deferred tests are currently unassigned to any batch.

### Source-of-Truth Query (Batch 0 Candidates)

```sql
WITH implementable AS (
    SELECT t.id
    FROM unit_tests t
    WHERE t.status = 'deferred'
      AND EXISTS (
          SELECT 1
          FROM dependencies d
          JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
          WHERE d.source_type = 'unit_test' AND d.source_id = t.id
      )
      AND NOT EXISTS (
          SELECT 1
          FROM dependencies d
          JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
          WHERE d.source_type = 'unit_test'
            AND d.source_id = t.id
            AND f.status NOT IN ('verified', 'complete', 'n_a')
      )
)
SELECT COUNT(*) FROM implementable; -- 553
```
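
For repeatable checks, the count query can be wrapped in a small script. A minimal sketch, assuming the tracker is a SQLite file (`porting.db`) with the schema the query implies; this is illustrative tooling, not part of the tracker CLI:

```python
# Sketch: run the Batch 0 candidate count against the tracker DB.
# Assumes the sqlite schema implied by the SQL above (unit_tests,
# dependencies, features) and that "porting.db" is the tracker file.
import sqlite3

CANDIDATE_COUNT_SQL = """
WITH implementable AS (
    SELECT t.id
    FROM unit_tests t
    WHERE t.status = 'deferred'
      AND EXISTS (
          SELECT 1 FROM dependencies d
          JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
          WHERE d.source_type = 'unit_test' AND d.source_id = t.id)
      AND NOT EXISTS (
          SELECT 1 FROM dependencies d
          JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
          WHERE d.source_type = 'unit_test' AND d.source_id = t.id
            AND f.status NOT IN ('verified', 'complete', 'n_a'))
)
SELECT COUNT(*) FROM implementable;
"""

def count_batch0_candidates(db_path: str = "porting.db") -> int:
    """Return the number of deferred tests that satisfy Batch 0 rules."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(CANDIDATE_COUNT_SQL).fetchone()[0]
```

The same query doubles as the Gate 4 end-of-batch check: it should return zero once every implementable candidate has been ported.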

### Candidate Distribution (553 tests)

All 553 are in module `server` and currently land in `dotnet/tests/ZB.MOM.NatsNet.Server.Tests/ImplBacklog/*.Impltests.cs`.

Largest classes:

- `JetStreamEngineTests` (89)
- `MonitoringHandlerTests` (76)
- `MqttHandlerTests` (56)
- `LeafNodeHandlerTests` (47)
- `NatsConsumerTests` (35)
- `JwtProcessorTests` (28)
- `RouteHandlerTests` (25)

## Approaches

### Approach A: Use Existing `batch_tests` Mapping Only

- Work only from `batch show 0`.
- Pros: uses the existing CLI flow exactly.
- Cons: Batch 0 is empty today, so this blocks all useful work and misses the 553 valid candidates.

### Approach B: Query-Driven Batch 0 (No DB Mapping Changes)

- Treat the SQL dependency query as authoritative for execution order and status updates.
- Pros: no batch-table mutation; work can start immediately.
- Cons: `batch show 0` remains misleading; no built-in batch-completion visibility.

### Approach C (Recommended): Query-Driven Execution + Batch 0 Mapping Reconciliation

- First, reconcile `batch_tests` for Batch 0 from the dependency query.
- Then implement tests in class-by-class waves using a manifest generated from the same query.
- Pros: correct batch visibility, supports `batch show 0`, preserves workflow consistency.
- Cons: one upfront tracker-maintenance step.

## Recommended Design

### 1. Batch 0 Candidate Manifest

Create a reproducible manifest (CSV/SQL output) containing:

- `test_id`
- `dotnet_class`
- `dotnet_method`
- `go_file`
- `go_method`

The manifest is regenerated from the DB each session to avoid stale lists.

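
A minimal sketch of the regeneration step, assuming SQLite and that the mapping columns (`dotnet_class`, `dotnet_method`, `go_file`, `go_method`) live on `unit_tests`; adjust column locations to the real tracker schema:

```python
# Sketch: regenerate the Batch 0 candidate manifest as CSV.
# Assumption: dotnet/go mapping columns are stored on unit_tests.
import csv
import sqlite3

MANIFEST_SQL = """
SELECT t.id AS test_id, t.dotnet_class, t.dotnet_method, t.go_file, t.go_method
FROM unit_tests t
WHERE t.status = 'deferred'
  AND EXISTS (
      SELECT 1 FROM dependencies d
      JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
      WHERE d.source_type = 'unit_test' AND d.source_id = t.id)
  AND NOT EXISTS (
      SELECT 1 FROM dependencies d
      JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
      WHERE d.source_type = 'unit_test' AND d.source_id = t.id
        AND f.status NOT IN ('verified', 'complete', 'n_a'))
ORDER BY t.dotnet_class, t.dotnet_method;
"""

def write_manifest(db_path: str, out_path: str) -> int:
    """Write the candidate manifest to CSV; return the row count."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(MANIFEST_SQL).fetchall()
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["test_id", "dotnet_class", "dotnet_method",
                         "go_file", "go_method"])
        writer.writerows(rows)
    return len(rows)
```

Because the manifest is rebuilt from the same dependency predicate as the candidate query, it cannot drift from the DB between sessions.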
### 2. Batch Mapping Reconciliation

Insert missing candidate tests into `batch_tests` with `batch_id = 0`, but only when a test is absent from the mapping.
Do not reassign tests already mapped to other batches.

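
The reconciliation can be expressed as a single idempotent insert. A sketch, assuming a `batch_tests(batch_id, test_id)` mapping table alongside the schema used by the candidate query:

```python
# Sketch: idempotent Batch 0 reconciliation (hypothetical batch_tests
# layout). Only candidates not yet mapped to ANY batch are inserted,
# so tests already assigned to other batches are never reassigned.
import sqlite3

RECONCILE_SQL = """
INSERT INTO batch_tests (batch_id, test_id)
SELECT 0, t.id
FROM unit_tests t
WHERE t.status = 'deferred'
  AND EXISTS (
      SELECT 1 FROM dependencies d
      JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
      WHERE d.source_type = 'unit_test' AND d.source_id = t.id)
  AND NOT EXISTS (
      SELECT 1 FROM dependencies d
      JOIN features f ON f.id = d.target_id AND d.target_type = 'feature'
      WHERE d.source_type = 'unit_test' AND d.source_id = t.id
        AND f.status NOT IN ('verified', 'complete', 'n_a'))
  AND NOT EXISTS (
      SELECT 1 FROM batch_tests bt WHERE bt.test_id = t.id);
"""

def reconcile_batch0(db_path: str = "porting.db") -> int:
    """Map unmapped Batch 0 candidates; return the number of rows added."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(RECONCILE_SQL).rowcount
```

Running it a second time inserts nothing, which keeps the tracker-maintenance step safe to repeat per session.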
### 3. Implementation Model

Port tests class-by-class in the `ImplBacklog` files first (lowest risk), then optionally move mature tests to domain folders later.

Per-test workflow:

1. Open the Go test body at the mapped `go_file` + `go_method`.
2. Classify the test as:
   - `implementable-unit` (no running server/cluster required)
   - `runtime-blocked` (needs real server/cluster infra) -> keep `deferred`, add a note
3. Replace the placeholder/assertion-only body with a behavior-faithful xUnit 3 + Shouldly test.
4. Run the method- and class-level test filters.
5. Update the tracker status only after green test evidence.

### 4. Wave Sequencing

Order by lowest infra risk first:

1. Accounts/Auth/JWT/Options/reload
2. Routes/Gateways/Leaf/WebSocket
3. Events/Monitoring/MsgTrace/Server core
4. JetStream lightweight deterministic tests
5. JetStream heavy + MQTT + concurrency edge cases

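
One hypothetical way to derive a wave from each candidate's `dotnet_class` name; the keyword buckets below are assumptions mirroring the ordering above, and anything unmatched falls into the final, highest-risk wave:

```python
# Heuristic wave assignment from dotnet_class names. The keyword
# buckets are assumptions, not tracker data; review the output before
# relying on it for sequencing.
WAVE_KEYWORDS = [
    ("Account", "Auth", "Jwt", "Options", "Reload"),   # wave 1
    ("Route", "Gateway", "Leaf", "WebSocket"),         # wave 2
    ("Event", "Monitoring", "MsgTrace", "Server"),     # wave 3
]

def wave_for_class(dotnet_class: str) -> int:
    """Return a 1-based wave number for a candidate test class."""
    for wave, keywords in enumerate(WAVE_KEYWORDS, start=1):
        if any(keyword in dotnet_class for keyword in keywords):
            return wave
    if "JetStream" in dotnet_class:
        # Names alone cannot split lightweight vs heavy JetStream tests;
        # default to wave 4 and promote heavy cases to wave 5 by review.
        return 4
    return 5  # MQTT, concurrency edge cases, and anything unclassified
```

Sorting the manifest by `wave_for_class(dotnet_class)` then class name yields a first-cut execution order consistent with the list above.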
### 5. Verification Gates

- Gate 1: method-level run passes after each test port.
- Gate 2: class-level run passes before status updates.
- Gate 3: wave-level run passes before commit.
- Gate 4: end-of-batch query returns zero remaining implementable deferred tests.

## Error Handling and Risk Controls

- If a Go test requires unavailable runtime infra, do not force a brittle port; keep it `deferred` with an explicit note.
- If a ported test reveals a feature regression, stop and open a separate feature-fix line of work before marking the test `verified`.
- Keep DB updates idempotent: only update statuses for IDs proven by passing runs.
- Avoid mass `test batch-update ... verified` without per-class pass evidence.

## Success Criteria

- Batch 0 mapping reflects the true implementable set (or a documented query-based equivalent).
- All implementable unit tests in the set are ported and passing.
- Runtime-blocked tests remain deferred with explicit reason notes.
- `report summary` and Batch 0 remaining counts trend down deterministically after each wave.

## Non-Goals

- No server/cluster integration-infrastructure build-out in this batch.
- No refactoring of production server code, except where discovered test failures require follow-up work.
- No attempt to complete non-Batch-0 deferred tests.