Recover stashed driver-gaps work from pre-v2-mxgw-merge working tree
Captures uncommitted work that lived in the working tree on
v2-mxgw-integration but was orthogonal to the migration. Stashed
during the v2-mxgw merge to master (2026-04-30) and replanted here on
a feature branch off master so it's git-visible rather than living in
the stash list.
Two distinct buckets:
1. Tracked fixture/config refinements (10 files, ~36 lines):
- scripts/e2e/test-opcuaclient.ps1
- src/ZB.MOM.WW.OtOpcUa.Admin/appsettings.json
- 5 docker-compose.yml under tests/.../IntegrationTests/Docker/
(AbCip, Modbus, OpcUaClient, S7)
- 4 fixture .cs files (AbServerFixture, ModbusSimulatorFixture,
OpcPlcFixture, Snap7ServerFixture)
2. Untracked driver-gaps queue artifacts (~8000 lines):
- docs/plans/{abcip,ablegacy,focas,opcuaclient,s7,twincat}-plan.md
— per-driver gap plans
- docs/featuregaps.md — cross-cutting analysis
- docs/v2/focas-deployment.md, docs/v2/implementation/focas-simulator-plan.md
- followup.md — auto/driver-gaps queue follow-ups
- scripts/queue/ — PR-queue automation tooling (12 files including
pr-manifest.yaml at 1473 lines)
This commit is a snapshot for recoverability — review and split into
focused PRs (or discard) before merging anywhere downstream.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
scripts/queue/.label-ids.json (new file, 21 lines)
@@ -0,0 +1,21 @@
{
  "auto-managed": 10,
  "cross-driver": 14,
  "driver/abcip": 13,
  "driver/ablegacy": 16,
  "driver/focas": 11,
  "driver/opcuaclient": 12,
  "driver/s7": 19,
  "driver/twincat": 17,
  "phase/1": 8,
  "phase/2": 7,
  "phase/3": 6,
  "phase/4": 5,
  "phase/5": 4,
  "phase/6": 3,
  "queue/blocked": 2,
  "queue/done": 15,
  "queue/failed": 9,
  "queue/in-progress": 1,
  "queue/queued": 18
}
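The queue scripts resolve these label names to numeric ids before calling the Gitea API. A minimal sketch of that lookup, with the map inlined so it runs standalone (the `resolve_label_ids` helper name is illustrative, not from the repo):

```python
import json

# Subset of scripts/queue/.label-ids.json, inlined for a self-contained demo.
LABEL_MAP = json.loads('{"auto-managed": 10, "queue/queued": 18, "driver/twincat": 17}')

def resolve_label_ids(label_map, names):
    # Mirrors the lookup in file-issues.sh: names missing from the map are
    # silently dropped, so a stale .label-ids.json degrades to fewer labels
    # on the filed issue rather than a hard error.
    return [label_map[n] for n in names if n in label_map]

print(resolve_label_ids(LABEL_MAP, ["queue/queued", "phase/1"]))
```

With the subset above, `phase/1` is absent from the map, so only the id for `queue/queued` comes back.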
scripts/queue/.partial-twincat.yaml (new file, 320 lines)
@@ -0,0 +1,320 @@
- id: twincat-1.1
  driver: twincat
  phase: 1
  plan_pr_id: "1.1"
  title: "TwinCAT — Int64 fidelity for LINT/ULINT"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Map LInt/ULInt to DriverDataType.Int64 instead of silently truncating to Int32.
    The TwinCATDataType.cs:40 truncation comment "matches Int64 gap" is removed and
    MapToClrType already returns long/ulong, so the wire-level read returns the
    correct boxed types. May add Int64 to Core.Abstractions DriverDataType enum if
    missing. Closes a long-standing fixture caveat noted in the test suite.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDataType.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Core.Abstractions/DriverDataType.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/GVLs/GVL_Primitives.TcGVL"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: S
  deps: []
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID; no e2e change to test-twincat.ps1."

- id: twincat-1.2
  driver: twincat
  phase: 1
  plan_pr_id: "1.2"
  title: "TwinCAT — TIME/DATE/DT/TOD as native UA types"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Stop marshalling IEC TIME/DATE/DT/TOD as raw UDINT and convert to native UA
    Duration/DateTime types via post-processing in ReadValueAsync, ConvertForWrite,
    and OnAdsNotificationEx. May expose missing Duration in DriverDataType. CLI
    syntax updates so users write ISO-8601 / IEC literals instead of numeric raw
    values.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDataType.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/GVLs/GVL_Primitives.TcGVL"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: M
  deps: []
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID. May add Duration to DriverDataType enum."

- id: twincat-1.3
  driver: twincat
  phase: 1
  plan_pr_id: "1.3"
  title: "TwinCAT — Bit-indexed BOOL writes (read-modify-write)"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Replace the NotSupportedException at AdsTwinCATClient.cs:99 with read-modify-write
    on the parent word, serializing concurrent bit writes to the same parent via a
    keyed SemaphoreSlim. Closes referenced task #181. CLI gains an example and the
    fixture caveat in the bugs-caught list updates to note writes now work.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture: []
  e2e: []
  effort: S
  deps: []
  cross_driver: false
  notes: "Reuses GVL_Primitives.vWord (0xBEEF) — no fixture schema change."

- id: twincat-1.4
  driver: twincat
  phase: 1
  plan_pr_id: "1.4"
  title: "TwinCAT — Multi-dim and whole-array reads"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Expand ReadValueAsync/WriteValueAsync to handle whole-array reads in a single
    AdsClient call rather than element-by-element. Surface IsArray + ArrayDimensions
    on TwinCATTagDefinition and through DriverAttributeInfo from DiscoverAsync. Sets
    up the array-shape plumbing the rest of the driver needs.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriverOptions.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/GVLs/GVL_Arrays.TcGVL"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: M
  deps: []
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID. New 5x5 aReal2D seed with deterministic pattern."

- id: twincat-1.5
  driver: twincat
  phase: 1
  plan_pr_id: "1.5"
  title: "TwinCAT — ENUM and ALIAS at discovery"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    MapSymbolTypeName currently returns null for non-atomic types, dropping ENUM and
    ALIAS symbols silently. Switch to inspecting symbol.DataType + Category from
    TwinCAT.TypeSystem so DataTypeCategory.Enum walks EnumValues and Alias resolves
    to base atomic recursively. Surface enum members for later EnumStrings rendering.
    POINTER/REFERENCE/INTERFACE/UNION explicitly out of scope.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: M
  deps: []
  cross_driver: false
  notes: "Reuses existing GVL_Enums + DUTs; only README integration-test contract entry added."

- id: twincat-2.1
  driver: twincat
  phase: 2
  plan_pr_id: "2.1"
  title: "TwinCAT — ADS Sum-read / Sum-write"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Replace per-tag ReadValueAsync loops with Beckhoff's ADS Sum commands
    (IndexGroup 0xF080-0xF084) via SumSymbolRead/SumSymbolWrite to batch N
    reads/writes per AMS request. Bucket fullReferences by DeviceHostAddress and
    expose a new ReadValuesAsync surface on ITwinCATClient. Targets ~10x throughput
    on multi-thousand-tag scans; perf-tier test gated behind TWINCAT_PERF=1.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/ITwinCATClient.cs"
  docs:
    - "docs/v3/twincat-backlog.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/GVLs/GVL_Perf.TcGVL"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: L
  deps: []
  cross_driver: false
  notes: "Perf test gated behind TWINCAT_PERF=1 plus TWINCAT_TARGET_NETID; new FB_PerfChurn POU."

- id: twincat-2.2
  driver: twincat
  phase: 2
  plan_pr_id: "2.2"
  title: "TwinCAT — Handle-based access with caching"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Cache CreateVariableHandleAsync results so per-read overhead drops to
    read-by-handle (4-byte index vs N-byte symbol path). On
    DeviceSymbolVersionInvalid (0x710) evict and retry once. Clear cache on
    AdsClient reconnect until the symbol-version listener (PR 2.3) ships. Dispose
    path calls DeleteVariableHandleAsync for cached handles.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture: []
  e2e: []
  effort: M
  deps: []
  cross_driver: false
  notes: "Combines with PR 2.1 for sum-read-by-handle. Reuses GVL_Perf.aTags."

- id: twincat-2.3
  driver: twincat
  phase: 2
  plan_pr_id: "2.3"
  title: "TwinCAT — Symbol-version invalidation listener"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Register an AddDeviceNotificationAsync on the symbol-version index group
    (AdsReservedIndexGroup.SymbolVersion 0xF008) so the handle cache from PR 2.2
    is wiped on online-change bumps. Initial integration test gated as
    requires-manual-online-change until automation lands. Resolves open question
    (c) confirming the v6 enum constant.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: M
  deps: ["twincat-2.2"]
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID; manual online-change drill documented in README."

- id: twincat-3.1
  driver: twincat
  phase: 3
  plan_pr_id: "3.1"
  title: "TwinCAT — Per-tag MaxDelay tuning"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Surface NotificationSettings MaxDelay as a per-tag option (default 0 to
    preserve current behavior). Plumb int? MaxDelayMs through TwinCATTagDefinition,
    SubscribeAsync, and AddNotificationAsync. Coalesces high-frequency PLC signals
    so the OPC UA subscription queue stops flooding under bursty change rates.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriverOptions.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: S
  deps: []
  cross_driver: false
  notes: "Reuses GVL_Fixture.nCounter as 100 Hz driver. Hardware-gated via TWINCAT_TARGET_NETID."

- id: twincat-3.2
  driver: twincat
  phase: 3
  plan_pr_id: "3.2"
  title: "TwinCAT — Cycle-time / jitter / PLC-state diagnostics"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Augment the probe loop to read _AppInfo.OnlineChangeCnt/AppName and
    _TaskInfo[1].CycleTime/LastExecTime, surface as TwinCATDeviceDiagnostics on
    DeviceState, and emit through IDriverDiagnostics (cross-driver surface from
    Modbus task #154). Read system symbols directly without going through the user
    browse filter.
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATSystemSymbolFilter.cs"
  docs:
    - "docs/drivers/TwinCAT-Test-Fixture.md"
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/v3/twincat-backlog.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: M
  deps: []
  cross_driver: true
  notes: "Reuses IDriverDiagnostics from Modbus task #154. Hardware-gated via TWINCAT_TARGET_NETID."

- id: twincat-4.1
  driver: twincat
  phase: 4
  plan_pr_id: "4.1"
  title: "TwinCAT — Nested UDT browse via online type walker"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Largest single piece of work. Recurse BrowseSymbolsAsync into IStructType.SubItems
    yielding one TwinCATDiscoveredSymbol per leaf with dotted instance paths. Expand
    arrays-of-structs up to a configurable bound (default 1024). Add a pure
    TwinCATTypeWalker helper. Folds recursed structure into Discovered/ folder tree.
    Online runtime path only — TMC offline parsing deferred per open question (a).
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/AdsTwinCATClient.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATTypeWalker.cs"
  docs:
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
    - "docs/v3/twincat-backlog.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/DUTs/ST_NestedFlags.TcDUT"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/DUTs/ST_RecursiveCap.TcDUT"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/DUTs/ST_AlarmRecord.TcDUT"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e: []
  effort: L
  deps: ["twincat-1.5"]
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID. PR 1.4 helpful but not blocking."

- id: twincat-5.1
  driver: twincat
  phase: 5
  plan_pr_id: "5.1"
  title: "TwinCAT — IAlarmSource via TC3 EventLogger"
  plan_anchor: "docs/plans/twincat-plan.md"
  summary: |
    Implement IAlarmSource over TcEventLogger on AMS port 110 so PLC alarms
    surface as OPC UA AC events. Begins with a one-day spike (open question (b))
    documented in docs/v3/twincat-eventlogger-spike.md to determine if a managed
    wrapper exists or if we hit AMS port 110 directly via a secondary AdsClient
    + AddDeviceNotificationAsync on the alarm-list index group. Gated by new
    EnableAlarms option (default false).
  files:
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATAlarmSource.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriver.cs"
    - "src/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT/TwinCATDriverOptions.cs"
  docs:
    - "docs/drivers/TwinCAT.md"
    - "docs/v3/twincat-eventlogger-spike.md"
    - "docs/Driver.TwinCAT.Cli.md"
    - "docs/drivers/TwinCAT-Test-Fixture.md"
  fixture:
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/POUs/FB_AlarmHarness.TcPOU"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/PLC/GVLs/GVL_Alarms.TcGVL"
    - "tests/ZB.MOM.WW.OtOpcUa.Driver.TwinCAT.IntegrationTests/TwinCatProject/README.md"
  e2e:
    - "scripts/e2e/test-twincat.ps1"
  effort: L
  deps: []
  cross_driver: false
  notes: "Hardware-gated via TWINCAT_TARGET_NETID. Spike-first; e2e Test-AlarmRoundTrip likely deferred to follow-up."
scripts/queue/README.md (new file, 36 lines)
@@ -0,0 +1,36 @@

# Plan-execution queue

Gitea-backed work queue that drives the per-driver implementation plans (`docs/plans/*-plan.md`) to completion in **Mode B** (autonomous: auto-merges into the `auto/driver-gaps` integration branch when build+tests pass).

## Pieces

- `pr-manifest.yaml` — canonical list of every PR across all six plans.
- `setup-labels.sh` — idempotently creates the queue labels in Gitea.
- `file-issues.sh` — files one Gitea issue per manifest entry (idempotent — skips ids that already exist).
- `next-pr.sh` — picks the next eligible queue issue (queued, blockers all done) as JSON.
- `start-pr.sh ISSUE BRANCH` — flips queued → in-progress and creates the branch off `auto/driver-gaps`.
- `open-pr.sh ISSUE BRANCH TITLE BODY_FILE` — opens a PR from BRANCH into `auto/driver-gaps`.
- `merge-pr.sh PR` — merges a PR with branch-delete (Mode B).
- `finish-pr.sh ISSUE success PR` / `finish-pr.sh ISSUE failed REASON_FILE` — closes / marks failed.

## Flow per loop iteration

1. `next-pr.sh` → issue#, branch, canonical id.
2. `start-pr.sh` → mark in-progress, create branch.
3. Loop driver dispatches a Claude Agent to implement the PR on the branch.
4. Loop runs `dotnet build` + `dotnet test`.
5. On green: `open-pr.sh`, `merge-pr.sh`, `finish-pr.sh success`.
6. On red: capture log → `finish-pr.sh failed log.txt`. Issue stays open with `queue/failed` label for retry.

## Environment

- Gitea repo: `dohertj2/lmxopcua` on `gitea.dohertylan.com`.
- Token: read from `%LOCALAPPDATA%\tea\config.yml` (or `$GITEA_TOKEN` override).
- Integration branch: `auto/driver-gaps` (created off master).
- Per-PR branches: `auto/<driver>/<plan-pr-id>`.

## Reset / debug

- Re-list eligible issues: `bash scripts/queue/next-pr.sh`.
- Manually unblock: remove `queue/blocked` label and add `queue/queued`.
- Drop a failed PR back into queue: remove `queue/failed`, add `queue/queued`.
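The loop driver's contract with `next-pr.sh` is a single JSON object on stdout, with `{"empty": true}` as the drained-queue sentinel. A minimal sketch of how a caller distinguishes the two cases (the `pick` helper is illustrative, not a script in the repo):

```python
import json

def pick(payload):
    # next-pr.sh prints {"empty": true} when the queue is drained,
    # otherwise an object carrying issue_num, canonical_id, branch, etc.
    data = json.loads(payload)
    if data.get("empty"):
        return None
    return data

print(pick('{"empty": true}'))
```

A caller would then treat `None` as "terminate the loop" and any dict as the next work item.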
scripts/queue/file-issues.sh (new file, 122 lines)
@@ -0,0 +1,122 @@

#!/usr/bin/env bash
# Reads scripts/queue/pr-manifest.yaml and creates one Gitea issue per PR.
# Idempotent: skips PRs whose canonical id already exists as an issue.
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

if [ ! -f "$MANIFEST" ]; then
  echo "manifest not found: $MANIFEST" >&2
  exit 1
fi

python - "$MANIFEST" "$LABEL_MAP" <<'PY'
import json, sys, re, yaml, urllib.request, os

manifest_path, label_map_path = sys.argv[1], sys.argv[2]
gitea_token = os.environ["GITEA_TOKEN"]
api_base = "https://gitea.dohertylan.com/api/v1/repos/dohertj2/lmxopcua"

with open(manifest_path) as f: manifest = yaml.safe_load(f)
with open(label_map_path) as f: lmap = json.load(f)

def api(method, path, data=None):
    req = urllib.request.Request(
        f"{api_base}/{path}",
        method=method,
        headers={
            "Authorization": f"token {gitea_token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        data=json.dumps(data).encode() if data else None,
    )
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read().decode())

# Collect existing issues' canonical ids → issue# (from queue-meta blocks),
# paging until the API returns an empty page.
existing = {}
page = 1
while True:
    items = api("GET", f"issues?state=all&type=issues&limit=50&page={page}")
    if not items: break
    for it in items:
        m = re.search(r'<!-- queue-meta\s*(\{.*?\})\s*-->', it.get("body","") or "", re.S)
        if m:
            try:
                meta = json.loads(m.group(1))
                if "id" in meta:
                    existing[meta["id"]] = it["number"]
            except Exception: pass
    page += 1

print(f"existing queue issues: {len(existing)}")

filed = 0
skipped = 0
for pr in manifest["prs"]:
    if pr["id"] in existing:
        skipped += 1
        continue
    title = f"[{pr['driver']}] {pr['title']}"
    meta = {
        "id": pr["id"],
        "driver": pr["driver"],
        "phase": pr["phase"],
        "plan_pr_id": pr.get("plan_pr_id",""),
        "deps": pr.get("deps", []),
        "cross_driver": pr.get("cross_driver", False),
    }
    body_parts = [
        f"<!-- queue-meta\n{json.dumps(meta)}\n-->",
        "## Auto-managed PR — Mode B (autonomous)",
        f"**Driver**: `{pr['driver']}` **Phase**: `{pr['phase']}` **Plan PR**: `{pr.get('plan_pr_id','')}`",
        f"**Plan**: [`{pr.get('plan_anchor','docs/plans/' + pr['driver'] + '-plan.md')}`]({pr.get('plan_anchor','../docs/plans/' + pr['driver'] + '-plan.md')})",
        f"**Effort**: `{pr.get('effort','M')}` **Cross-driver**: `{pr.get('cross_driver', False)}`",
        "",
        "## Summary",
        pr.get("summary","_(see plan)_"),
    ]
    if pr.get("files"):
        body_parts += ["", "## Source files", *[f"- `{f}`" for f in pr["files"]]]
    if pr.get("docs"):
        body_parts += ["", "## Docs", *[f"- `{d}`" for d in pr["docs"]]]
    if pr.get("fixture"):
        body_parts += ["", "## Fixture", *[f"- `{x}`" for x in pr["fixture"]]]
    if pr.get("e2e"):
        body_parts += ["", "## E2E", *[f"- `{x}`" for x in pr["e2e"]]]
    if pr.get("deps"):
        body_parts += ["", "## Depends on", *[f"- canonical: `{d}`" for d in pr["deps"]]]
    if pr.get("notes"):
        body_parts += ["", "## Notes", pr["notes"]]
    body_parts += ["",
        "---",
        f"_Branch: `auto/{pr['driver']}/{pr.get('plan_pr_id','').replace('/','-')}`. Target: `auto/driver-gaps`._"]
    body = "\n".join(body_parts)

    label_names = [
        f"driver/{pr['driver']}",
        f"phase/{pr['phase']}",
        "queue/queued",
        "auto-managed",
    ]
    if pr.get("cross_driver"): label_names.append("cross-driver")
    label_ids = [lmap[n] for n in label_names if n in lmap]
    issue = api("POST", "issues", {"title": title, "body": body, "labels": label_ids})
    print(f" filed #{issue['number']}: {pr['id']}")
    filed += 1

print(f"\nfiled {filed}, skipped (existing) {skipped}")
PY
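Idempotency here rests on the `<!-- queue-meta ... -->` HTML comment embedded in each issue body: the JSON inside it carries the canonical id, and the same regex recovers it on every run. A self-contained round-trip of that embed/extract pair (helper names are illustrative; the regex matches the one above):

```python
import json
import re

# Same pattern the queue scripts use to recover metadata from an issue body.
META_RE = re.compile(r'<!-- queue-meta\s*(\{.*?\})\s*-->', re.S)

def embed_meta(meta):
    # Same shape file-issues.sh writes as the first body_parts entry.
    return f"<!-- queue-meta\n{json.dumps(meta)}\n-->"

def extract_meta(body):
    # Returns the parsed dict, or None when the marker is absent.
    m = META_RE.search(body or "")
    return json.loads(m.group(1)) if m else None

meta = {"id": "twincat-1.1", "phase": 1}
print(extract_meta("intro text\n" + embed_meta(meta) + "\nrest of body"))
```

Because the marker is an HTML comment, Gitea renders none of it, yet it survives edits to the visible body above or below it.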
scripts/queue/finish-pr.sh (new file, 39 lines)
@@ -0,0 +1,39 @@

#!/usr/bin/env bash
# Closes the issue (success) or marks it failed and leaves it open for retry.
# Usage:
#   finish-pr.sh ISSUE_NUM success PR_NUM
#   finish-pr.sh ISSUE_NUM failed REASON_FILE
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

ISSUE="${1:?ISSUE_NUM required}"
RESULT="${2:?success|failed required}"
ARG3="${3:?PR_NUM or REASON_FILE required}"

INPROG=$(python -c "import json; print(json.load(open('$LABEL_MAP'))['queue/in-progress'])")
DONE=$(python -c "import json; print(json.load(open('$LABEL_MAP'))['queue/done'])")
FAILED=$(python -c "import json; print(json.load(open('$LABEL_MAP'))['queue/failed'])")

api_repo DELETE "issues/$ISSUE/labels/$INPROG" >/dev/null || true

case "$RESULT" in
  success)
    PR_NUM="$ARG3"
    api_repo POST "issues/$ISSUE/labels" "{\"labels\":[$DONE]}" >/dev/null
    BODY=$(python -c "import json; print(json.dumps({'body':'✅ Auto-loop completed. Merged via PR #$PR_NUM.'}))")
    api_repo POST "issues/$ISSUE/comments" "$BODY" >/dev/null
    api_repo PATCH "issues/$ISSUE" '{"state":"closed"}' >/dev/null
    echo " issue #$ISSUE closed (PR #$PR_NUM merged)"
    ;;
  failed)
    REASON_FILE="$ARG3"
    api_repo POST "issues/$ISSUE/labels" "{\"labels\":[$FAILED]}" >/dev/null
    BODY=$(python -c "import json,os; r=open('$REASON_FILE').read()[:4000] if os.path.exists('$REASON_FILE') else '(no log)'; print(json.dumps({'body':'❌ Auto-loop failed.\n\n\`\`\`\n'+r+'\n\`\`\`'}))")
    api_repo POST "issues/$ISSUE/comments" "$BODY" >/dev/null
    echo " issue #$ISSUE marked failed (still open for retry)"
    ;;
  *)
    echo "unknown result: $RESULT" >&2; exit 1 ;;
esac
scripts/queue/lib.sh (new file, 57 lines)
@@ -0,0 +1,57 @@

#!/usr/bin/env bash
# Shared helpers for the Gitea-backed plan-execution queue.
set -euo pipefail

GITEA_URL="https://gitea.dohertylan.com"
GITEA_REPO="dohertj2/lmxopcua"
GITEA_API="$GITEA_URL/api/v1"

if [ -z "${GITEA_TOKEN:-}" ]; then
  TEA_CONFIG="${LOCALAPPDATA:-$HOME/AppData/Local}/tea/config.yml"
  if [ ! -f "$TEA_CONFIG" ]; then
    TEA_CONFIG="$HOME/.config/tea/config.yml"
  fi
  GITEA_TOKEN="$(awk '/token:/{gsub(/[ \t]/,"",$2); print $2; exit}' "$TEA_CONFIG" 2>/dev/null || true)"
fi
if [ -z "${GITEA_TOKEN:-}" ]; then
  echo "lib.sh: GITEA_TOKEN not set and tea config not readable" >&2
  exit 1
fi
export GITEA_TOKEN

INTEGRATION_BRANCH="auto/driver-gaps"
QUEUE_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && { pwd -W 2>/dev/null || pwd; })"
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && { pwd -W 2>/dev/null || pwd; })"
MANIFEST="$QUEUE_ROOT/pr-manifest.yaml"
LABEL_MAP="$QUEUE_ROOT/.label-ids.json"

LABEL_QUEUED="queue/queued"
LABEL_IN_PROGRESS="queue/in-progress"
LABEL_BLOCKED="queue/blocked"
LABEL_FAILED="queue/failed"
LABEL_DONE="queue/done"
LABEL_AUTO="auto-managed"
LABEL_CROSS="cross-driver"

api() {
  local method="$1" path="$2" data="${3:-}"
  if [ -n "$data" ]; then
    curl -sf -X "$method" \
      -H "Authorization: token $GITEA_TOKEN" \
      -H "Content-Type: application/json" \
      -d "$data" \
      "$GITEA_API/$path"
  else
    curl -sf -X "$method" \
      -H "Authorization: token $GITEA_TOKEN" \
      "$GITEA_API/$path"
  fi
}

api_repo() {
  api "$1" "repos/$GITEA_REPO/$2" "${3:-}"
}

label_id() {
  python -c "import json,sys; m=json.load(open('$LABEL_MAP')); print(m['$1'])"
}
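The token fallback in lib.sh hinges on a single awk line over the tea config. A throwaway demonstration against a fake config file, so the extraction can be exercised without a real tea install (the file path and token value are made up):

```shell
# Write a minimal stand-in for %LOCALAPPDATA%\tea\config.yml.
cat > /tmp/tea-config-demo.yml <<'EOF'
logins:
  - name: gitea
    url: https://gitea.dohertylan.com
    token: abc123
EOF

# Same extraction lib.sh performs: first 'token:' line wins,
# surrounding whitespace is stripped from the value.
TOKEN="$(awk '/token:/{gsub(/[ \t]/,"",$2); print $2; exit}' /tmp/tea-config-demo.yml)"
echo "$TOKEN"
```

The `exit` after the first match matters when a tea config holds several logins: only the first token is used.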
scripts/queue/loop-iteration.md (new file, 57 lines)
@@ -0,0 +1,57 @@

# Loop iteration prompt (Mode B autonomous)

This is the single self-contained prompt that `/loop` re-fires until the queue empties. Each iteration handles exactly one PR end-to-end.

---

You are running one iteration of the autonomous plan-execution loop. The queue lives in Gitea at `dohertj2/lmxopcua`. Helpers: `scripts/queue/*.sh`.

## Step 1 — pick the next PR
Run `bash scripts/queue/next-pr.sh`. It returns JSON.
- If `{"empty": true}` → the queue is drained. **Do not call ScheduleWakeup.** Report "queue empty — loop terminating" and exit. The /loop will end.
- Otherwise parse: `issue_num`, `canonical_id`, `driver`, `phase`, `plan_pr_id`, `branch`, `title`, `url`.

## Step 2 — claim it
Run `bash scripts/queue/start-pr.sh "$ISSUE_NUM" "$BRANCH"`. This swaps `queue/queued` → `queue/in-progress` and creates the branch off `auto/driver-gaps`.

## Step 3 — pull the issue body
Run `curl -sf -H "Authorization: token $(awk '/token:/{print $2}' "$LOCALAPPDATA/tea/config.yml")" "https://gitea.dohertylan.com/api/v1/repos/dohertj2/lmxopcua/issues/$ISSUE_NUM"` and extract the `body` field. The body contains the plan link, summary, source files, and docs/fixture/e2e files.

## Step 4 — implement on a worktree
Dispatch a general-purpose Agent with `isolation: "worktree"`. Brief it with:
- the issue body verbatim
- the linked plan section (read `docs/plans/<driver>-plan.md` and quote the relevant per-PR detail)
- explicit instructions: implement the source-file changes, the doc updates, the fixture extensions, and the e2e test additions named in the issue
- run `dotnet build c:/Users/dohertj2/Desktop/lmxopcua/ZB.MOM.WW.OtOpcUa.slnx` until green
- run `dotnet test` for the relevant test project until green
- commit on `$BRANCH` with message `Auto: <canonical_id> — <short summary>` followed by `Closes #$ISSUE_NUM`
- return a brief summary of what changed

## Step 5 — verify and push
Verify the agent did commit + push. If the branch isn't pushed, push it: `git push origin "$BRANCH"`.

## Step 6 — open PR
Build a body file that includes the issue summary + the agent's summary. Then:
```
PR_NUM=$(bash scripts/queue/open-pr.sh "$ISSUE_NUM" "$BRANCH" "$TITLE" /tmp/pr-body.md)
```

## Step 7 — auto-merge (Mode B)
Run `bash scripts/queue/merge-pr.sh "$PR_NUM"`.

## Step 8 — close issue
Run `bash scripts/queue/finish-pr.sh "$ISSUE_NUM" success "$PR_NUM"`.

## On failure
If anything from Step 4 onward fails (build red, tests red, agent gives up, push fails, merge conflict):
- write the failure log to `/tmp/loop-fail-$ISSUE_NUM.log`
- run `bash scripts/queue/finish-pr.sh "$ISSUE_NUM" failed /tmp/loop-fail-$ISSUE_NUM.log`
- the issue keeps `queue/failed` and stays open for retry
- **do not** retry the same issue this iteration; let the loop pick a different one next fire

## Re-arm
At the very end of the iteration (success OR failure), call `ScheduleWakeup` with the same `/loop` prompt and `delaySeconds: 60` to fire the next iteration.

If the queue was empty in Step 1, do NOT call ScheduleWakeup.

Report a one-line summary to the user before re-arming.
scripts/queue/merge-pr.sh (new file, 11 lines)
@@ -0,0 +1,11 @@

#!/usr/bin/env bash
# Merges a PR (Mode B autonomous merge into auto/driver-gaps).
# Usage: merge-pr.sh PR_NUM
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

PR="${1:?PR_NUM required}"
PAYLOAD='{"Do":"merge","delete_branch_after_merge":true}'
api_repo POST "pulls/$PR/merge" "$PAYLOAD" >/dev/null
echo " PR #$PR merged into $INTEGRATION_BRANCH (branch deleted)"
77
scripts/queue/next-pr.sh
Normal file
@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# Prints the next eligible queue issue as JSON: {issue_num, canonical_id, driver, plan_pr_id, branch, ...}
# Eligible = open + label queue/queued + all canonical deps closed.
# Picks lowest phase first, then lowest issue number within phase.
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

python - <<PY
import json, urllib.request, re, os, sys

token = os.environ["GITEA_TOKEN"]
api_base = "https://gitea.dohertylan.com/api/v1/repos/dohertj2/lmxopcua"

def api(path):
    req = urllib.request.Request(f"{api_base}/{path}",
                                 headers={"Authorization": f"token {token}"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read().decode())

# Gather all queue issues
issues = []
page = 1
while True:
    items = api(f"issues?state=all&type=issues&limit=50&page={page}&labels=auto-managed")
    if not items: break
    issues.extend(items)
    page += 1

by_id = {}
for it in issues:
    m = re.search(r'<!-- queue-meta\s*(\{.*?\})\s*-->', it.get("body","") or "", re.S)
    if not m: continue
    try: meta = json.loads(m.group(1))
    except: continue
    by_id[meta["id"]] = (it, meta)

def is_done(issue):
    if issue["state"] == "closed": return True
    labels = {l["name"] for l in issue["labels"]}
    return "queue/done" in labels

eligible = []
for cid, (it, meta) in by_id.items():
    labels = {l["name"] for l in it["labels"]}
    if it["state"] != "open": continue
    if "queue/queued" not in labels: continue
    deps = meta.get("deps", [])
    blocked = False
    for d in deps:
        if d not in by_id:
            blocked = True; break
        if not is_done(by_id[d][0]):
            blocked = True; break
    if blocked: continue
    eligible.append((meta.get("phase",99), it["number"], cid, it, meta))

if not eligible:
    print(json.dumps({"empty": True}))
    sys.exit(0)

eligible.sort(key=lambda x: (x[0], x[1]))
phase, num, cid, it, meta = eligible[0]
plan_pr = meta.get("plan_pr_id","").replace("/","-")
result = {
    "empty": False,
    "issue_num": num,
    "canonical_id": cid,
    "driver": meta["driver"],
    "phase": meta["phase"],
    "plan_pr_id": meta.get("plan_pr_id",""),
    "title": it["title"],
    "branch": f"auto/{meta['driver']}/{plan_pr}",
    "url": it["html_url"],
}
print(json.dumps(result, indent=2))
PY
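The dependency gate in next-pr.sh can be exercised in isolation (standalone sketch with in-memory issues instead of Gitea API calls; the dict shape here is simplified, not the real API payload):

```python
# Re-creation of the eligibility rule: open + queue/queued + all deps done,
# then lowest phase first, lowest issue number within phase.

def pick_next(issues):
    by_id = {i["id"]: i for i in issues}

    def done(i):
        return i["state"] == "closed" or "queue/done" in i["labels"]

    eligible = [
        i for i in issues
        if i["state"] == "open"
        and "queue/queued" in i["labels"]
        and all(d in by_id and done(by_id[d]) for d in i["deps"])
    ]
    return min(eligible, key=lambda i: (i["phase"], i["num"]), default=None)

issues = [
    {"id": "a", "num": 1, "state": "closed", "labels": {"queue/done"}, "deps": [], "phase": 1},
    {"id": "b", "num": 2, "state": "open", "labels": {"queue/queued"}, "deps": ["a"], "phase": 2},
    {"id": "c", "num": 3, "state": "open", "labels": {"queue/queued"}, "deps": ["x"], "phase": 1},
]
# "c" is blocked by the unknown dep "x", so "b" is picked despite its higher phase
```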
24
scripts/queue/open-pr.sh
Normal file
@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# Opens a PR from BRANCH into auto/driver-gaps, references the issue, sets ready/draft.
# Usage: open-pr.sh ISSUE_NUM BRANCH_NAME TITLE BODY_FILE
# Echoes the PR number on stdout.
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

ISSUE="${1:?}"; BRANCH="${2:?}"; TITLE="${3:?}"; BODY_FILE="${4:?}"

BODY=$(cat "$BODY_FILE")
PAYLOAD=$(python -c "
import json, sys
print(json.dumps({
    'title': sys.argv[1],
    'body': sys.argv[2] + '\n\nCloses #' + sys.argv[3],
    'head': sys.argv[4],
    'base': sys.argv[5],
}))
" "$TITLE" "$BODY" "$ISSUE" "$BRANCH" "$INTEGRATION_BRANCH")

PR=$(api_repo POST pulls "$PAYLOAD")
PR_NUM=$(echo "$PR" | python -c "import sys,json; print(json.load(sys.stdin)['number'])")
echo "$PR_NUM"
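open-pr.sh builds its payload by passing values as argv to `python -c` rather than interpolating them into a JSON string in the shell: PR bodies can contain quotes, backticks, and newlines that would corrupt hand-assembled JSON, while `json.dumps` escapes everything. A standalone re-creation of just that payload step (not the script itself):

```python
# json.dumps handles quoting; no shell escaping of title/body is needed.
import json

def pr_payload(title, body, issue, head, base):
    return json.dumps({
        "title": title,
        "body": body + "\n\nCloses #" + issue,
        "head": head,
        "base": base,
    })

p = pr_payload('Fix "S7" retries', 'Summary with `backticks`\nand a newline',
               "42", "auto/s7/s7-pr-1", "auto/driver-gaps")
```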
1473
scripts/queue/pr-manifest.yaml
Normal file
File diff suppressed because it is too large
58
scripts/queue/setup-labels.sh
Normal file
@@ -0,0 +1,58 @@
#!/usr/bin/env bash
# Idempotent: creates queue labels in Gitea and stores name→id map at .label-ids.json
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

declare -A LABELS=(
  ["driver/abcip"]="0e8a16"
  ["driver/ablegacy"]="0e8a16"
  ["driver/focas"]="0e8a16"
  ["driver/opcuaclient"]="0e8a16"
  ["driver/s7"]="0e8a16"
  ["driver/twincat"]="0e8a16"
  ["phase/1"]="bfd4f2"
  ["phase/2"]="bfd4f2"
  ["phase/3"]="bfd4f2"
  ["phase/4"]="bfd4f2"
  ["phase/5"]="bfd4f2"
  ["phase/6"]="bfd4f2"
  ["queue/queued"]="d4c5f9"
  ["queue/in-progress"]="fbca04"
  ["queue/blocked"]="b60205"
  ["queue/failed"]="b60205"
  ["queue/done"]="2ea44f"
  ["auto-managed"]="cccccc"
  ["cross-driver"]="d93f0b"
)

# Pull existing labels
EXISTING=$(api_repo GET "labels?limit=200")

emit_map() {
  python - <<PY
import json, sys
existing = json.loads('''$EXISTING''')
print(json.dumps({l['name']: l['id'] for l in existing}, indent=2))
PY
}

# Create any missing
for name in "${!LABELS[@]}"; do
  color="${LABELS[$name]}"
  exists=$(echo "$EXISTING" | python -c "import json,sys; ls=json.load(sys.stdin); print('yes' if any(l['name']=='$name' for l in ls) else 'no')")
  if [ "$exists" = "no" ]; then
    payload=$(python -c "import json; print(json.dumps({'name':'$name','color':'#$color','description':'queue management'}))")
    api_repo POST labels "$payload" >/dev/null
    echo "created label: $name"
  fi
done

# Refresh and write the map file
api_repo GET "labels?limit=200" | python -c "
import json, sys
ls = json.load(sys.stdin)
m = {l['name']: l['id'] for l in ls}
open('$LABEL_MAP','w').write(json.dumps(m, indent=2))
print(f'wrote {len(m)} labels to $LABEL_MAP')
"
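The idempotence of setup-labels.sh comes from creating only the labels missing from the existing list, so a re-run creates nothing. A standalone sketch of that step against an in-memory label list instead of the Gitea API (label dicts simplified to name/id):

```python
# Names in `wanted` that don't exist yet; a second run over the refreshed
# list returns nothing, which is what makes the script safe to re-run.

def missing_labels(wanted, existing):
    have = {l["name"] for l in existing}
    return sorted(n for n in wanted if n not in have)

wanted = {"queue/queued": "d4c5f9", "queue/done": "2ea44f", "auto-managed": "cccccc"}
existing = [{"name": "queue/done", "id": 15}]
to_create = missing_labels(wanted, existing)
# after "creating" them, simulate the refreshed list the script re-fetches
refreshed = existing + [{"name": n, "id": 100 + i} for i, n in enumerate(to_create)]
```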
31
scripts/queue/start-pr.sh
Normal file
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# Marks an issue in-progress and creates its branch off the integration branch.
# Usage: start-pr.sh ISSUE_NUM BRANCH_NAME
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"

ISSUE="${1:?ISSUE_NUM required}"
BRANCH="${2:?BRANCH_NAME required}"

# Swap labels: queued -> in-progress
QUEUED=$(python -c "import json; print(json.load(open('$LABEL_MAP'))['queue/queued'])")
INPROG=$(python -c "import json; print(json.load(open('$LABEL_MAP'))['queue/in-progress'])")

api_repo DELETE "issues/$ISSUE/labels/$QUEUED" >/dev/null || true
api_repo POST "issues/$ISSUE/labels" "{\"labels\":[$INPROG]}" >/dev/null

# Create branch off integration
EXISTS=$(api_repo GET "branches/$BRANCH" 2>/dev/null || echo "")
if [ -z "$EXISTS" ]; then
  PAYLOAD=$(python -c "import json; print(json.dumps({'new_branch_name':'$BRANCH','old_branch_name':'$INTEGRATION_BRANCH'}))")
  api_repo POST branches "$PAYLOAD" >/dev/null
  echo " branch created: $BRANCH"
else
  echo " branch exists: $BRANCH"
fi

# Comment
COMMENT=$(python -c "import json; print(json.dumps({'body':'🤖 Auto-loop picked this up. Branch: \`$BRANCH\`. Status: in-progress.'}))")
api_repo POST "issues/$ISSUE/comments" "$COMMENT" >/dev/null
echo " issue #$ISSUE marked in-progress"
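The label swap in start-pr.sh resolves names to numeric ids through the `.label-ids.json` map that setup-labels.sh writes. A standalone sketch of the two API requests it issues, with the paths shown as strings rather than actually calling Gitea:

```python
# The DELETE removes queue/queued by id; the POST adds queue/in-progress by id.
import json

def swap_requests(label_map_json, issue):
    m = json.loads(label_map_json)
    return [
        ("DELETE", f"issues/{issue}/labels/{m['queue/queued']}", None),
        ("POST", f"issues/{issue}/labels", {"labels": [m["queue/in-progress"]]}),
    ]

# ids taken from the .label-ids.json committed alongside these scripts
reqs = swap_requests('{"queue/queued": 18, "queue/in-progress": 1}', 7)
```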