lmxopcua/scripts/queue/next-pr.sh
commit 2d07d716dc (Joseph Doherty): Recover stashed driver-gaps work from pre-v2-mxgw-merge working tree
Captures uncommitted work that lived in the working tree on
v2-mxgw-integration but was orthogonal to the migration. Stashed
during the v2-mxgw merge to master (2026-04-30) and replanted here on
a feature branch off master so it's git-visible rather than living in
the stash list.

Two distinct buckets:

1. Tracked fixture/config refinements (10 files, ~36 lines):
   - scripts/e2e/test-opcuaclient.ps1
   - src/ZB.MOM.WW.OtOpcUa.Admin/appsettings.json
   - 5 docker-compose.yml under tests/.../IntegrationTests/Docker/
     (AbCip, Modbus, OpcUaClient, S7)
   - 4 fixture .cs files (AbServerFixture, ModbusSimulatorFixture,
     OpcPlcFixture, Snap7ServerFixture)

2. Untracked driver-gaps queue artifacts (~8000 lines):
   - docs/plans/{abcip,ablegacy,focas,opcuaclient,s7,twincat}-plan.md
     — per-driver gap plans
   - docs/featuregaps.md — cross-cutting analysis
   - docs/v2/focas-deployment.md, docs/v2/implementation/focas-simulator-plan.md
   - followup.md — auto/driver-gaps queue follow-ups
   - scripts/queue/ — PR-queue automation tooling (12 files including
     pr-manifest.yaml at 1473 lines)

This commit is a snapshot for recoverability — review and split into
focused PRs (or discard) before merging anywhere downstream.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 08:28:01 -04:00


#!/usr/bin/env bash
# Prints the next eligible queue issue as JSON: {issue_num, canonical_id, driver, plan_pr_id, branch, ...}
# Eligible = open + label queue/queued + all canonical deps closed.
# Picks lowest phase first, then lowest issue number within phase.
set -euo pipefail
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
. "$HERE/lib.sh"
# Quote the heredoc delimiter so the shell never expands anything inside the
# Python source, and use python3 explicitly.
python3 - <<'PY'
import json, urllib.request, re, os, sys

token = os.environ["GITEA_TOKEN"]
api_base = "https://gitea.dohertylan.com/api/v1/repos/dohertj2/lmxopcua"

def api(path):
    req = urllib.request.Request(f"{api_base}/{path}",
                                 headers={"Authorization": f"token {token}"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read().decode())

# Gather all queue issues (paginated).
issues = []
page = 1
while True:
    items = api(f"issues?state=all&type=issues&limit=50&page={page}&labels=auto-managed")
    if not items:
        break
    issues.extend(items)
    page += 1

# Index issues by the canonical id embedded in their queue-meta HTML comment.
by_id = {}
for it in issues:
    m = re.search(r'<!-- queue-meta\s*(\{.*?\})\s*-->', it.get("body", "") or "", re.S)
    if not m:
        continue
    try:
        meta = json.loads(m.group(1))
    except json.JSONDecodeError:
        continue
    by_id[meta["id"]] = (it, meta)

def is_done(issue):
    if issue["state"] == "closed":
        return True
    labels = {l["name"] for l in issue["labels"]}
    return "queue/done" in labels

# Eligible = open, labelled queue/queued, and every canonical dependency
# is both known and done.
eligible = []
for cid, (it, meta) in by_id.items():
    labels = {l["name"] for l in it["labels"]}
    if it["state"] != "open":
        continue
    if "queue/queued" not in labels:
        continue
    blocked = False
    for d in meta.get("deps", []):
        if d not in by_id or not is_done(by_id[d][0]):
            blocked = True
            break
    if blocked:
        continue
    eligible.append((meta.get("phase", 99), it["number"], cid, it, meta))

if not eligible:
    print(json.dumps({"empty": True}))
    sys.exit(0)

# Lowest phase first, then lowest issue number within the phase.
eligible.sort(key=lambda x: (x[0], x[1]))
phase, num, cid, it, meta = eligible[0]
plan_pr = meta.get("plan_pr_id", "").replace("/", "-")
result = {
    "empty": False,
    "issue_num": num,
    "canonical_id": cid,
    "driver": meta["driver"],
    "phase": phase,
    "plan_pr_id": meta.get("plan_pr_id", ""),
    "title": it["title"],
    "branch": f"auto/{meta['driver']}/{plan_pr}",
    "url": it["html_url"],
}
print(json.dumps(result, indent=2))
PY
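
The selection logic can be exercised offline without a Gitea token. A minimal sketch: the issue numbers, ids, phases, and deps below are invented for illustration, but the queue-meta comment shape and the eligibility rules mirror what the script parses.

```python
import json, re

# Hypothetical issue payloads carrying the queue-meta HTML comment.
issues = [
    {"number": 10, "state": "open", "labels": [{"name": "queue/queued"}],
     "body": '<!-- queue-meta {"id": "s7-01", "phase": 2, "deps": []} -->'},
    {"number": 11, "state": "open", "labels": [{"name": "queue/queued"}],
     "body": '<!-- queue-meta {"id": "focas-01", "phase": 1, "deps": ["s7-00"]} -->'},
    {"number": 12, "state": "closed", "labels": [],
     "body": '<!-- queue-meta {"id": "s7-00", "phase": 1, "deps": []} -->'},
]

# Index by canonical id, exactly as the script does.
by_id = {}
for it in issues:
    m = re.search(r'<!-- queue-meta\s*(\{.*?\})\s*-->', it["body"], re.S)
    if m:
        meta = json.loads(m.group(1))
        by_id[meta["id"]] = (it, meta)

def is_done(it):
    return it["state"] == "closed" or "queue/done" in {l["name"] for l in it["labels"]}

# Eligible = open + queue/queued + all deps known and done.
eligible = []
for cid, (it, meta) in by_id.items():
    if it["state"] != "open" or "queue/queued" not in {l["name"] for l in it["labels"]}:
        continue
    if all(d in by_id and is_done(by_id[d][0]) for d in meta.get("deps", [])):
        eligible.append((meta.get("phase", 99), it["number"], cid))

eligible.sort()
print(eligible[0][2])  # → focas-01: phase 1 beats phase 2, and its dep s7-00 is closed
```

Note that s7-00 is indexed (so it can satisfy dependencies) but never selected, since closed issues fail the eligibility check.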