feat: complete gRPC streaming channel — site host, docker config, docs, integration tests
Switch site host to WebApplicationBuilder with Kestrel HTTP/2 gRPC server, add GrpcPort/keepalive config, wire SiteStreamManager as ISiteStreamSubscriber, expose gRPC ports in docker-compose, add site seed script, update all 10 requirement docs + CLAUDE.md + README.md for the new dual-transport architecture.
@@ -43,7 +43,7 @@ This project contains design documentation for a distributed SCADA system built
2. Deployment Manager — Central-side deployment pipeline, system-wide artifact deployment, instance lifecycle.
3. Site Runtime — Site-side actor hierarchy (Deployment Manager singleton, Instance/Script/Alarm Actors), script compilation, Akka stream.
4. Data Connection Layer — Protocol abstraction (OPC UA, custom), subscription management, clean data pipe.
-5. Central–Site Communication — Akka.NET ClusterClient/ClusterClientReceptionist, message patterns, debug streaming.
+5. Central–Site Communication — Akka.NET ClusterClient (command/control) + gRPC server-streaming (real-time data), message patterns, debug streaming.
6. Store-and-Forward Engine — Buffering, fixed-interval retry, parking, SQLite persistence, replication.
7. External System Gateway — External system definitions, API method invocation, database connections.
8. Notification Service — Notification lists, email delivery, store-and-forward integration.
@@ -81,7 +81,8 @@ This project contains design documentation for a distributed SCADA system built
- Tag path resolution retried periodically for devices still booting.
- Static attribute writes persisted to local SQLite (survive restart/failover, reset on redeployment).
- All timestamps are UTC throughout the system.
-- Inter-cluster communication uses ClusterClient/ClusterClientReceptionist. Both CentralCommunicationActor and SiteCommunicationActor registered with receptionist. Central creates one ClusterClient per site using NodeA/NodeB as contact points. Sites configure multiple central contact points for failover. Addresses cached in CentralCommunicationActor, refreshed periodically (60s) and on admin changes. Heartbeats serve health monitoring only.
+- Inter-cluster communication uses two transports: ClusterClient for command/control (deployments, lifecycle, subscribe/unsubscribe handshake, snapshots) and gRPC server-streaming for real-time data (attribute values, alarm states). Both CentralCommunicationActor and SiteCommunicationActor registered with receptionist. Central creates one ClusterClient per site using NodeA/NodeB as contact points. Sites configure multiple central contact points for failover. Addresses cached in CentralCommunicationActor, refreshed periodically (60s) and on admin changes. Heartbeats serve health monitoring only.
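As a rough illustration of the site-side failover setup described above, the multiple central contact points would be expressed in Akka HOCON roughly as follows (the actor-system name, host names, and port are placeholders, not taken from the actual configuration):

```hocon
# Hypothetical site-side HOCON: several central receptionists as initial contacts,
# so the ClusterClient can fail over if one central node is down.
akka.cluster.client {
  initial-contacts = [
    "akka.tcp://Central@central-node-a:4053/system/receptionist",
    "akka.tcp://Central@central-node-b:4053/system/receptionist"
  ]
  # Heartbeats between client and receptionist serve health monitoring only.
  heartbeat-interval = 2s
}
```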
+- gRPC streaming channel: SiteStreamGrpcServer on each site node (Kestrel HTTP/2, port 8083); central creates per-site SiteStreamGrpcClient via SiteStreamGrpcClientFactory. Site entity has GrpcNodeAAddress/GrpcNodeBAddress fields. Proto: sitestream.proto with SiteStreamService, SiteStreamEvent (oneof: AttributeValueUpdate, AlarmStateUpdate). DebugStreamEvent message removed (no longer flows through ClusterClient).
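Based only on the names listed above (SiteStreamService, SiteStreamEvent, and the two oneof cases), sitestream.proto plausibly has a shape like the following sketch; the RPC name, request message, field names, and field numbers are illustrative assumptions, not the actual schema:

```protobuf
syntax = "proto3";

// Sketch of sitestream.proto inferred from the names above; details are assumed.
service SiteStreamService {
  // Server-streaming: the site pushes real-time events to central.
  rpc Subscribe (SubscribeRequest) returns (stream SiteStreamEvent);
}

message SubscribeRequest {
  string site_id = 1; // hypothetical field
}

message SiteStreamEvent {
  oneof event {
    AttributeValueUpdate attribute_value_update = 1;
    AlarmStateUpdate alarm_state_update = 2;
  }
}

message AttributeValueUpdate {
  string attribute_path = 1;      // hypothetical
  string value = 2;               // hypothetical
  int64 timestamp_utc_ticks = 3;  // all timestamps are UTC
}

message AlarmStateUpdate {
  string alarm_path = 1; // hypothetical
  string state = 2;      // hypothetical
}
```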
### External Integrations
- External System Gateway: HTTP/REST only, JSON serialization, API key + Basic Auth.
@@ -126,7 +127,7 @@ This project contains design documentation for a distributed SCADA system built
### UI & Monitoring
- Central UI: Blazor Server (ASP.NET Core + SignalR) with Bootstrap CSS. No third-party component frameworks (no Blazorise, MudBlazor, Radzen, etc.). Build custom Blazor components for tables, grids, forms, etc.
- UI design: Clean, corporate, internal-use aesthetic. Not flashy. Use the `frontend-design` skill when designing UI pages/components.
-- Debug view: real-time streaming via DebugStreamBridgeActor. Health dashboard: 10s polling timer. Deployment status: real-time push via SignalR.
+- Debug view: real-time streaming via DebugStreamBridgeActor + gRPC (events via SiteStreamGrpcClient, snapshot via ClusterClient). Health dashboard: 10s polling timer. Deployment status: real-time push via SignalR.
- Health reports: 30s interval, 60s offline threshold, monotonic sequence numbers, raw error counts per interval.
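A minimal sketch of the offline-threshold and sequence-number rules stated above; the function names and report shape here are invented for illustration, and only the 30s/60s constants and the monotonic-sequence idea come from the source:

```python
from dataclasses import dataclass

@dataclass
class HealthReport:
    site_id: str
    sequence: int       # monotonic per site
    received_at: float  # epoch seconds, UTC

REPORT_INTERVAL_S = 30    # sites report every 30 seconds
OFFLINE_THRESHOLD_S = 60  # no report for 60 seconds => site considered offline

def is_offline(last: HealthReport, now: float) -> bool:
    """A site is marked offline once the 60s threshold has elapsed
    since its last health report arrived."""
    return (now - last.received_at) > OFFLINE_THRESHOLD_S

def missed_reports(prev: HealthReport, curr: HealthReport) -> int:
    """Monotonic sequence numbers make dropped reports detectable:
    a gap larger than 1 means reports were lost in between."""
    return max(0, curr.sequence - prev.sequence - 1)
```

With a 30s reporting interval, the 60s threshold tolerates exactly one missed report before a site is flagged offline.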
- Dead letter monitoring as a health metric.
- Site Event Logging: 30-day retention, 1GB storage cap, daily purge, paginated queries with keyword search.
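The retention policy above combines an age cutoff with a size cap. A small sketch of that policy (the function names are invented, and the assumption that the size cap is enforced by trimming oldest events first is mine, not stated in the source):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30
STORAGE_CAP_BYTES = 1 * 1024**3  # 1 GB

def purge_cutoff(now: datetime) -> datetime:
    """Events older than 30 days are eligible for the daily purge."""
    return now - timedelta(days=RETENTION_DAYS)

def bytes_over_cap(total_bytes: int) -> int:
    """If the age-based purge still leaves storage above the 1 GB cap,
    this many bytes of the oldest events would need trimming (assumed
    oldest-first policy)."""
    return max(0, total_bytes - STORAGE_CAP_BYTES)
```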