New project Driver.Historian.Wonderware.Client (net10 x64) implements both Core.Abstractions.IHistorianDataSource (read paths consumed by the server's IHistoryRouter) and Core.AlarmHistorian.IAlarmHistorianWriter (alarm-event drain consumed by SqliteStoreAndForwardSink) against the sidecar's PR 3.3 pipe protocol.

Wire-format files (Framing/MessageKind, Hello, Contracts, FrameReader, FrameWriter) are byte-identical mirrors of the sidecar's net48 originals; the sidecar cannot be referenced as a ProjectReference because of the runtime/bitness gap, so we duplicate the files and pin the wire bytes via tests.

PipeChannel owns one bidirectional NamedPipeClientStream, performs the Hello handshake, and serializes calls: a semaphore allows a single in-flight call at a time, and a transport failure triggers one in-flight reconnect-and-retry before the error propagates. Connect is abstracted behind a Func<CancellationToken, Task<Stream>> so tests can inject in-process pipes.

WonderwareHistorianClient maps:
- HistorianSampleDto.Quality (raw OPC DA quality byte) → OPC UA StatusCode uint via QualityMapper (a port of the sidecar's HistorianQualityMapper).
- HistorianAggregateSampleDto.Value = null → BadNoData (0x800E0000).
- WriteAlarmEventsReply.PerEventOk[i] = true → Ack, false → RetryPlease. A whole-call failure or transport exception maps to RetryPlease for every event in the batch (the drain worker handles backoff).
- AlarmHistorianEvent → AlarmHistorianEventDto, with severity bucketed via an AlarmSeverity-to-ushort mapping (Low=250, Medium=500, High=700, Crit=900).

GetHealthSnapshot tracks transport success and sidecar-reported failure separately; ConsecutiveFailures rises on operation-level errors, not just transport drops.
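The severity buckets and the quality conversion above are simple enough to sketch. The following is an illustrative stand-in, not the actual QualityMapper port: the DA-to-UA conversion shown keys only on the top two OPC DA quality bits, and the type and method names (MappingSketch, SeverityToUShort, MapDaQuality) are hypothetical.

```csharp
using System;

// Hypothetical sketch of the two mappings described above; the real
// QualityMapper in the client may handle sub-status bits differently.
public static class MappingSketch
{
    public const uint BadNoData = 0x800E0000; // status used for null aggregate buckets

    // AlarmSeverity buckets as listed: Low=250, Medium=500, High=700, Crit=900.
    public enum AlarmSeverity { Low, Medium, High, Crit }

    public static ushort SeverityToUShort(AlarmSeverity s) => s switch
    {
        AlarmSeverity.Low    => 250,
        AlarmSeverity.Medium => 500,
        AlarmSeverity.High   => 700,
        AlarmSeverity.Crit   => 900,
        _ => throw new ArgumentOutOfRangeException(nameof(s)),
    };

    // OPC DA quality byte -> OPC UA StatusCode uint, keyed on the top two
    // "quality" bits (0xC0 = Good, 0x40 = Uncertain, 0x00 = Bad).
    public static uint MapDaQuality(byte daQuality) => (daQuality & 0xC0) switch
    {
        0xC0 => 0x00000000u, // Good
        0x40 => 0x40000000u, // Uncertain
        _    => 0x80000000u, // Bad
    };
}
```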
10 round-trip tests via FakeSidecarServer (an in-process net10 fake built on the client's own framing):
- byte→uint quality mapping
- null-bucket BadNoData
- at-time order preservation
- event-field round-trip
- sidecar error surfacing
- WriteBatch per-event status
- whole-call retry-please mapping
- Hello shared-secret rejection
- transport-drop reconnect-and-retry
- health snapshot counters

PR 3.W will register this client as IHistorianDataSource + IAlarmHistorianWriter in OpcUaServerService DI.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
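The injection seam those tests exploit can be sketched as follows. This is illustrative only: ConnectDelegates and both factory methods are hypothetical names, shown to demonstrate how a Func&lt;CancellationToken, Task&lt;Stream&gt;&gt; lets tests hand PipeChannel an in-process stream instead of a real named pipe.

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

// Sketch: production code connects a real named pipe; tests swap the delegate
// for one that returns a pre-wired in-process Stream (e.g. one end of a duplex
// pair driven by a fake sidecar).
public static class ConnectDelegates
{
    public static Func<CancellationToken, Task<Stream>> ForNamedPipe(
        string pipeName, TimeSpan connectTimeout) => async ct =>
    {
        var pipe = new NamedPipeClientStream(
            ".", pipeName, PipeDirection.InOut, PipeOptions.Asynchronous);
        await pipe.ConnectAsync((int)connectTimeout.TotalMilliseconds, ct);
        return pipe;
    };

    public static Func<CancellationToken, Task<Stream>> ForTest(Stream fake)
        => _ => Task.FromResult(fake);
}
```

Because PipeChannel only sees the delegate, reconnect-and-retry behavior can be exercised by returning a stream that drops mid-call and counting how many times the delegate is invoked.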
27 lines
1.6 KiB
C#
namespace ZB.MOM.WW.OtOpcUa.Driver.Historian.Wonderware.Client;

/// <summary>
/// Connection options for <see cref="WonderwareHistorianClient"/>.
/// </summary>
/// <param name="PipeName">Named-pipe name the sidecar listens on (matches the sidecar's <c>OTOPCUA_HISTORIAN_PIPE</c>).</param>
/// <param name="SharedSecret">Per-process shared secret the sidecar will verify in the Hello frame.</param>
/// <param name="PeerName">Diagnostic peer identifier sent in Hello — typically the OtOpcUa instance id.</param>
/// <param name="ConnectTimeout">Cap on the named-pipe connect + Hello round trip on each (re)connect.</param>
/// <param name="CallTimeout">Cap on a single read/write call once connected.</param>
/// <param name="ReconnectInitialBackoff">Backoff between the first failed reconnect attempts.</param>
/// <param name="ReconnectMaxBackoff">Upper bound on the exponential backoff between reconnects.</param>
public sealed record WonderwareHistorianClientOptions(
    string PipeName,
    string SharedSecret,
    string PeerName = "OtOpcUa",
    TimeSpan? ConnectTimeout = null,
    TimeSpan? CallTimeout = null,
    TimeSpan? ReconnectInitialBackoff = null,
    TimeSpan? ReconnectMaxBackoff = null)
{
    public TimeSpan EffectiveConnectTimeout => ConnectTimeout ?? TimeSpan.FromSeconds(10);
    public TimeSpan EffectiveCallTimeout => CallTimeout ?? TimeSpan.FromSeconds(30);
    public TimeSpan EffectiveReconnectInitialBackoff => ReconnectInitialBackoff ?? TimeSpan.FromMilliseconds(500);
    public TimeSpan EffectiveReconnectMaxBackoff => ReconnectMaxBackoff ?? TimeSpan.FromSeconds(30);
}
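A usage sketch of the options record, showing how unset timeouts fall back to the Effective* defaults. The pipe name and secret values here are placeholders, not the actual configuration.

```csharp
// Hypothetical values; only PipeName and SharedSecret are required.
var opts = new WonderwareHistorianClientOptions(
    PipeName: "example-historian-pipe",
    SharedSecret: "secret-from-config");

// Unset optional timeouts resolve to the built-in defaults.
Console.WriteLine(opts.EffectiveConnectTimeout);          // 00:00:10
Console.WriteLine(opts.EffectiveCallTimeout);             // 00:00:30
Console.WriteLine(opts.EffectiveReconnectInitialBackoff); // 00:00:00.5000000

// An explicit value wins over the default.
var tuned = opts with { CallTimeout = TimeSpan.FromSeconds(5) };
Console.WriteLine(tuned.EffectiveCallTimeout);            // 00:00:05
```

Keeping the raw nullable parameters separate from the Effective* properties means callers (and serializers) can distinguish "not configured" from "explicitly set to the default".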