diff --git a/docs/plans/grpc_streams.md b/docs/plans/grpc_streams.md
new file mode 100644
index 0000000..cf93e55
--- /dev/null
+++ b/docs/plans/grpc_streams.md
@@ -0,0 +1,976 @@
+# gRPC Streaming Channel: Site → Central Real-Time Data
+
+## Context
+
+Debug streaming events currently flow through Akka.NET ClusterClient (`InstanceActor → SiteCommunicationActor → ClusterClient.Send → CentralCommunicationActor → bridge actor`). ClusterClient wasn't built for high-throughput value streaming — it's a cluster coordination tool with gossip-based routing. As we scale beyond debug view to health streaming, alarm feeds, or future live dashboards, pushing all real-time data through ClusterClient will become a bottleneck.
+
+**Goal**: Add a dedicated gRPC server-streaming channel on each site node. Central subscribes to sites over gRPC for real-time data. ClusterClient continues to handle command/control (subscribe, unsubscribe, deploy, lifecycle) but all streaming values flow through the gRPC channel.
+
+**Scope**: General-purpose site→central streaming transport. Debug view is the first consumer, but the proto and server are designed so future features (health streaming, alarm feeds, live dashboards) can subscribe with different event types and filters.
+
+## Why gRPC Streaming Instead of ClusterClient
+
+| Concern | ClusterClient | gRPC Server Streaming |
+|---------|---------------|----------------------|
+| **Purpose** | Cluster coordination, service discovery, request/response | High-throughput data streaming |
+| **Sender preservation** | Temporary proxy ref — breaks for stored future Tells | N/A — callback-based, no actor refs cross boundary |
+| **Flow control** | None (fire-and-forget Tell) | HTTP/2 flow control + Channel backpressure |
+| **Scalability** | Gossip-based routing, single receptionist | Direct TCP/HTTP2 per-site, multiplexed streams |
+| **Reconnection** | ClusterClient auto-reconnect (coarse, cluster-level) | gRPC channel-level reconnect per subscription |
+| **Serialization** | Akka.NET Hyperion (runtime IL, fragile across versions) | Protocol Buffers (schema-driven, cross-platform) |
+
+The DCL already uses this exact pattern — `RealLmxProxyClient` opens gRPC server-streaming subscriptions to LmxProxy servers for real-time tag value updates. This plan applies the same pattern to site→central communication.
+
+## Architecture
+
+```
+Central Cluster Site Cluster
+───────────── ────────────
+
+DebugStreamBridgeActor InstanceActor
+ │ │
+ │── SubscribeDebugView ──► │ (ClusterClient: command/control)
+ │◄── DebugViewSnapshot ── │
+ │ │
+ │ │ publishes AttributeValueChanged
+ │ │ publishes AlarmStateChanged
+ │ ▼
+SiteStreamGrpcClient ◄──── gRPC stream ───── SiteStreamGrpcServer
+ (per-site, on central) (HTTP/2) (Kestrel, on site)
+ │ │
+ │ reads from gRPC stream │ receives from SiteStreamManager
+ │ routes by correlationId │ filters by instance name
+ ▼ │
+DebugStreamBridgeActor │
+ │ │
+ ▼ │
+SignalR Hub / Blazor UI │
+```
+
+**Key separation**: ClusterClient handles subscribe/unsubscribe/snapshot (request-response). gRPC handles the ongoing value stream (server-streaming).
+
+## Port & Address Configuration
+
+### Site-Side (appsettings)
+
+`ScadaLink:Node:GrpcPort` — explicit config setting, not derived from `RemotingPort`:
+
+```json
+"Node": {
+ "Role": "Site",
+ "NodeHostname": "scadalink-site-a-a",
+ "RemotingPort": 8082,
+ "GrpcPort": 8083
+}
+```
+
+**Why explicit, not offset**: `RemotingPort` is itself a config value (8081 central, 8082 sites). A rigid offset silently breaks if someone changes `RemotingPort` to a non-standard value. Explicit ports are visible and independently configurable.
+
+Add `GrpcPort` to `NodeOptions` (`src/ScadaLink.Host/NodeOptions.cs`):
+
+```csharp
+public int GrpcPort { get; set; } = 8083;
+```
+
+Add validation in `StartupValidator` (site role only — central doesn't host a gRPC streaming server).
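+
+A minimal validation sketch (illustrative; follow whatever pattern `StartupValidator` already uses for `RemotingPort` — the property names below come from `NodeOptions` and appsettings):
+
+```csharp
+// Illustrative sketch; the validator's structure is an assumption.
+if (options.Role == "Site")
+{
+    if (options.GrpcPort is < 1 or > 65535)
+        throw new InvalidOperationException(
+            $"ScadaLink:Node:GrpcPort must be a valid TCP port (got {options.GrpcPort}).");
+
+    if (options.GrpcPort == options.RemotingPort)
+        throw new InvalidOperationException(
+            "ScadaLink:Node:GrpcPort must differ from RemotingPort.");
+}
+```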
+
+### Central-Side (Database — Site Entity)
+
+Central needs to know each site node's gRPC endpoint. Add two fields to the `Site` entity:
+
+**Modify**: `src/ScadaLink.Commons/Entities/Sites/Site.cs`
+
+```csharp
+public class Site
+{
+ public int Id { get; set; }
+ public string Name { get; set; }
+ public string SiteIdentifier { get; set; }
+ public string? Description { get; set; }
+ public string? NodeAAddress { get; set; } // Akka: "akka.tcp://scadalink@host:8082"
+ public string? NodeBAddress { get; set; } // Akka: "akka.tcp://scadalink@host:8082"
+ public string? GrpcNodeAAddress { get; set; } // gRPC: "http://host:8083"
+ public string? GrpcNodeBAddress { get; set; } // gRPC: "http://host:8083"
+}
+```
+
+### Database Migration
+
+Add `GrpcNodeAAddress` and `GrpcNodeBAddress` nullable string columns to the `Sites` table. Existing sites get `NULL` (gRPC streaming unavailable until configured).
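+
+The generated migration's `Up` would look roughly like this (illustrative; the real file comes from `dotnet ef migrations add`, and the migration name is an assumption):
+
+```csharp
+protected override void Up(MigrationBuilder migrationBuilder)
+{
+    migrationBuilder.AddColumn<string>(
+        name: "GrpcNodeAAddress", table: "Sites", nullable: true);
+    migrationBuilder.AddColumn<string>(
+        name: "GrpcNodeBAddress", table: "Sites", nullable: true);
+}
+```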
+
+### Management Commands
+
+**Modify**: `src/ScadaLink.Commons/Messages/Management/SiteCommands.cs`
+
+```csharp
+public record CreateSiteCommand(
+ string Name, string SiteIdentifier, string? Description,
+ string? NodeAAddress = null, string? NodeBAddress = null,
+ string? GrpcNodeAAddress = null, string? GrpcNodeBAddress = null);
+
+public record UpdateSiteCommand(
+ int SiteId, string Name, string? Description,
+ string? NodeAAddress = null, string? NodeBAddress = null,
+ string? GrpcNodeAAddress = null, string? GrpcNodeBAddress = null);
+```
+
+### ManagementActor Handlers
+
+**Modify**: `src/ScadaLink.ManagementService/ManagementActor.cs`
+
+Update `HandleCreateSite` and `HandleUpdateSite` to pass gRPC addresses through to the repository.
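+
+A sketch of the create path (the repository API and reply message type are assumptions; only the two new `Grpc*` fields change relative to today):
+
+```csharp
+private void HandleCreateSite(CreateSiteCommand cmd)
+{
+    var site = new Site
+    {
+        Name = cmd.Name,
+        SiteIdentifier = cmd.SiteIdentifier,
+        Description = cmd.Description,
+        NodeAAddress = cmd.NodeAAddress,
+        NodeBAddress = cmd.NodeBAddress,
+        GrpcNodeAAddress = cmd.GrpcNodeAAddress,   // new
+        GrpcNodeBAddress = cmd.GrpcNodeBAddress,   // new
+    };
+    _siteRepository.Create(site);                  // assumed repository API
+    Sender.Tell(new SiteCreatedResponse(site.Id)); // assumed reply type
+}
+```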
+
+### CLI
+
+**Modify**: `src/ScadaLink.CLI/Commands/SiteCommands.cs`
+
+Add `--grpc-node-a-address` and `--grpc-node-b-address` options to `site create` and `site update`:
+
+```sh
+scadalink site create --name "Site A" --identifier site-a \
+ --node-a-address "akka.tcp://scadalink@site-a-a:8082" \
+ --node-b-address "akka.tcp://scadalink@site-a-b:8082" \
+ --grpc-node-a-address "http://site-a-a:8083" \
+ --grpc-node-b-address "http://site-a-b:8083"
+```
+
+### Central UI
+
+**Modify**: `src/ScadaLink.CentralUI/Components/Pages/Admin/Sites.razor`
+
+Add two form fields below the existing Node A / Node B address inputs in the site create/edit form (markup is illustrative; match the components and styling of the existing address inputs):
+
+```html
+<label>gRPC Node A Address</label>
+<InputText @bind-Value="_formGrpcNodeAAddress" placeholder="http://host:8083" />
+
+<label>gRPC Node B Address</label>
+<InputText @bind-Value="_formGrpcNodeBAddress" placeholder="http://host:8083" />
+```
+
+Add corresponding columns to the sites list table. Wire `_formGrpcNodeAAddress` / `_formGrpcNodeBAddress` into `CreateSiteCommand` / `UpdateSiteCommand` in the save handler.
+
+### SiteStreamGrpcClientFactory
+
+Reads `GrpcNodeAAddress` / `GrpcNodeBAddress` from the `Site` entity (loaded by `CentralCommunicationActor.LoadSiteAddressesFromDb()`) when creating per-site gRPC channels. Falls back to NodeB if NodeA connection fails (same pattern as ClusterClient dual-contact-point failover).
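+
+Channel selection could look like this (sketch; `GrpcChannel` is from `Grpc.Net.Client`). Note that gRPC calls are lazy, so a NodeA failure typically surfaces on the first stream read, at which point the caller re-creates the channel preferring NodeB:
+
+```csharp
+// Illustrative sketch; the factory's actual shape is an assumption.
+private GrpcChannel CreateChannel(Site site, bool preferNodeA)
+{
+    var primary   = preferNodeA ? site.GrpcNodeAAddress : site.GrpcNodeBAddress;
+    var secondary = preferNodeA ? site.GrpcNodeBAddress : site.GrpcNodeAAddress;
+
+    var address = primary ?? secondary
+        ?? throw new InvalidOperationException(
+            $"Site '{site.SiteIdentifier}' has no gRPC addresses configured.");
+
+    return GrpcChannel.ForAddress(address);
+}
+```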
+
+### Docker Compose Port Allocation
+
+**Modify**: `docker/docker-compose.yml`
+
+Expose gRPC ports for each site node (internal 8083):
+- Site-A: `9023:8083` / `9024:8083` (nodes A/B)
+- Site-B: `9033:8083` / `9034:8083`
+- Site-C: `9043:8083` / `9044:8083`
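+
+As a compose fragment for one site (service names are assumptions based on the hostnames used earlier in this plan):
+
+```yaml
+scadalink-site-a-a:
+  ports:
+    - "9023:8083"   # host:container, gRPC streaming
+scadalink-site-a-b:
+  ports:
+    - "9024:8083"
+```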
+
+### Files Affected by Port & Address Configuration
+
+| File | Change |
+|------|--------|
+| `src/ScadaLink.Host/NodeOptions.cs` | Add `GrpcPort` property |
+| `src/ScadaLink.Host/StartupValidator.cs` | Validate `GrpcPort` for site role |
+| `src/ScadaLink.Host/appsettings.Site.json` | Add `GrpcPort: 8083` |
+| `src/ScadaLink.Commons/Entities/Sites/Site.cs` | Add `GrpcNodeAAddress`, `GrpcNodeBAddress` |
+| `src/ScadaLink.Commons/Messages/Management/SiteCommands.cs` | Add gRPC address params |
+| `src/ScadaLink.ConfigurationDatabase/` | EF migration for new columns |
+| `src/ScadaLink.ManagementService/ManagementActor.cs` | Pass gRPC addresses in handlers |
+| `src/ScadaLink.CLI/Commands/SiteCommands.cs` | Add `--grpc-node-a-address` / `--grpc-node-b-address` |
+| `src/ScadaLink.CentralUI/Components/Pages/Admin/Sites.razor` | Add gRPC address form fields + table columns |
+| `docker/docker-compose.yml` | Expose gRPC ports |
+
+## Proto Definition
+
+**File**: `src/ScadaLink.Communication/Protos/sitestream.proto`
+
+The `oneof event` pattern is extensible — future event types (health metrics, connection state changes, etc.) are added as new fields without breaking existing consumers.
+
+```protobuf
+syntax = "proto3";
+option csharp_namespace = "ScadaLink.Communication.Grpc";
+package sitestream;
+
+service SiteStreamService {
+ // Subscribe to real-time events filtered by instance.
+ // Server streams events until the client cancels or the site shuts down.
+ rpc SubscribeInstance(InstanceStreamRequest) returns (stream SiteStreamEvent);
+}
+
+message InstanceStreamRequest {
+ string correlation_id = 1;
+ string instance_unique_name = 2;
+}
+
+message SiteStreamEvent {
+ string correlation_id = 1;
+ oneof event {
+ AttributeValueUpdate attribute_changed = 2;
+ AlarmStateUpdate alarm_changed = 3;
+ // Future: HealthMetricUpdate health_metric = 4;
+ // Future: ConnectionStateUpdate connection_state = 5;
+ }
+}
+
+message AttributeValueUpdate {
+ string instance_unique_name = 1;
+ string attribute_path = 2;
+ string attribute_name = 3;
+ string value = 4; // string-encoded (same as LmxProxy VtqMessage pattern)
+ string quality = 5; // "Good", "Uncertain", "Bad"
+ int64 timestamp_utc_ticks = 6;
+}
+
+message AlarmStateUpdate {
+ string instance_unique_name = 1;
+ string alarm_name = 2;
+ int32 state = 3; // 0=Normal, 1=Active (maps to AlarmState enum)
+ int32 priority = 4;
+ int64 timestamp_utc_ticks = 5;
+}
+```
+
+Pre-generate the C# stubs and check them into `src/ScadaLink.Communication/SiteStreamGrpc/` (same pattern as LmxProxy — no `protoc` in Docker, for ARM64 compatibility).
+
+## Server-Streaming Pattern (Site Side)
+
+### gRPC Server Implementation
+
+`SiteStreamGrpcServer` inherits from `SiteStreamService.SiteStreamServiceBase`:
+
+```csharp
+public override async Task SubscribeInstance(
+    InstanceStreamRequest request,
+    IServerStreamWriter<SiteStreamEvent> responseStream,
+    ServerCallContext context)
+{
+    var channel = Channel.CreateBounded<SiteStreamEvent>(
+        new BoundedChannelOptions(1000) { FullMode = BoundedChannelFullMode.DropOldest });
+
+ // Local actor subscribes to SiteStreamManager, writes to channel
+ var relayActor = actorSystem.ActorOf(
+ Props.Create(() => new StreamRelayActor(request, channel.Writer)));
+ streamManager.Subscribe(request.InstanceUniqueName, relayActor);
+
+ try
+ {
+ await foreach (var evt in channel.Reader.ReadAllAsync(context.CancellationToken))
+ {
+ await responseStream.WriteAsync(evt, context.CancellationToken);
+ }
+ }
+ finally
+ {
+ streamManager.RemoveSubscriber(relayActor);
+ actorSystem.Stop(relayActor);
+ }
+}
+```
+
+### Channel\<T\> Bridging Pattern
+
+`IServerStreamWriter<T>` is **not thread-safe**. Multiple Akka actors may publish events concurrently. The `Channel<SiteStreamEvent>` bridges these worlds:
+
+```
+Akka Actor Thread(s) gRPC Response Stream
+ │ ▲
+ │ channel.Writer.TryWrite(evt) │ await responseStream.WriteAsync(evt)
+ ▼ │
+ ┌─────────────────────────────────────────┐
+ │        Channel<SiteStreamEvent>         │
+ │       BoundedChannelOptions(1000)       │
+ │          FullMode = DropOldest          │
+ └─────────────────────────────────────────┘
+```
+
+- **Bounded capacity** (1000): prevents unbounded memory growth if the gRPC client is slow
+- **DropOldest**: matches the existing `SiteStreamManager` overflow strategy
+- **ReadAllAsync**: yields items as they arrive, naturally async
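+
+The `StreamRelayActor` referenced in the server snippet can be a thin adapter (sketch; the assumption here is that `SiteStreamManager` delivers events to subscribers as `SiteStreamEvent` messages):
+
+```csharp
+public class StreamRelayActor : ReceiveActor
+{
+    public StreamRelayActor(InstanceStreamRequest request, ChannelWriter<SiteStreamEvent> writer)
+    {
+        Receive<SiteStreamEvent>(evt =>
+        {
+            evt.CorrelationId = request.CorrelationId;
+            writer.TryWrite(evt); // never blocks; DropOldest discards on overflow
+        });
+    }
+}
+```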
+
+### Kestrel HTTP/2 Setup
+
+Site hosts switch from `Host.CreateDefaultBuilder()` to `WebApplication.CreateBuilder()` with Kestrel configured for a dedicated gRPC port:
+
+```csharp
+builder.WebHost.ConfigureKestrel(options =>
+{
+ options.ListenAnyIP(grpcPort, listenOptions =>
+ {
+ listenOptions.Protocols = HttpProtocols.Http2; // gRPC requires HTTP/2
+ });
+});
+builder.Services.AddGrpc();
+// ... existing site services ...
+var app = builder.Build();
+app.MapGrpcService<SiteStreamGrpcServer>();
+```
+
+Reference: `infra/lmxfakeproxy/Program.cs` uses the identical Kestrel setup.
+
+## Streaming Client Pattern (Central Side)
+
+### gRPC Client Implementation
+
+`SiteStreamGrpcClient` manages per-site gRPC channels and streaming subscriptions:
+
+```csharp
+public async Task SubscribeAsync(
+ string correlationId, string instanceUniqueName,
+ Action