About OPC UA timestamps

A timestamp records the date and time of each data point or event in a data acquisition system. Correct timestamps in process control data acquisition are essential for ensuring the validity, reliability, and usability of the data, and for enabling effective process monitoring, control, and optimization. Precise timestamps enable accurate analysis and interpretation of the data, such as identifying trends, patterns, anomalies, and correlations. They also facilitate synchronization and integration of data from multiple sources, such as different sensors, instruments, devices, or systems.

According to the OPC UA Standard, each value of an OPC UA variable is associated with two timestamps: the Source Timestamp and the Server Timestamp.

The Source Timestamp reflects the timestamp applied to a variable value by the data source; in other words, the time when the data value was measured at the lowest-level data source. Note that this data source can be located either on the same machine where the OPC UA Server runs, or it can be a separate device with its own system clock.

The Server Timestamp reflects the time when the Server received a variable value or knew it to be accurate.

If the server reads data values itself, the source timestamp and the server timestamp will usually be equal. If the OPC UA Server gets data from some other device that supports timestamps, the source timestamp can differ significantly from the server timestamp.

It is important to note that the system clocks of all devices participating in the data acquisition path should be kept in sync. Ideally, system clocks should be synchronized with time servers, for example using the NTP protocol.

To avoid confusion and errors that might happen during conversions, all timestamps in OPC UA are UTC timestamps: no time zones, no daylight saving time. Timestamp values are usually converted to the user's local time by the application displaying them.
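As a small illustration (plain Python, independent of any OPC UA SDK): the stored and transported value stays in UTC, and the conversion to local time happens only in the displaying application. The fixed UTC-5 offset below is just a stand-in for whatever the user's local zone happens to be.

```python
from datetime import datetime, timezone, timedelta

# A timestamp as stored/transported: always UTC, no DST ambiguity.
utc_ts = datetime(2024, 3, 10, 7, 30, 0, tzinfo=timezone.utc)

# Conversion to the viewer's zone happens only at display time.
# A fixed UTC-5 offset stands in for the user's local time zone.
local = utc_ts.astimezone(timezone(timedelta(hours=-5)))

print(utc_ts.isoformat())  # 2024-03-10T07:30:00+00:00
print(local.isoformat())   # 2024-03-10T02:30:00-05:00

# Both represent the same instant in time:
assert utc_ts == local
```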

When an OPC UA client creates a subscription and adds monitored items to it, it can define which timestamps it needs to receive: only the source timestamp, only the server timestamp, or both.

Our Industrial Data Collector (Idako below) creates subscriptions and monitored items requesting both timestamps. When data values are forwarded to the destination time-series database or MQTT broker, how the timestamps are written depends on the destination type.

When the destination is a SQL database, the source timestamp is written to the “time” column of the values table. If the source timestamp is not defined, the server timestamp is used instead. It is also possible to write a so-called client timestamp to the column “client_time“. The client timestamp is the time when the data value was received by Idako from the OPC UA Server. It can be useful when the server or source timestamps are unreliable or not accurate enough.
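A minimal sketch of this fallback rule, using an in-memory SQLite table: only the “time” and “client_time” column names come from the description above; the rest of the schema and the helper function are assumptions for illustration, not Idako's actual code.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical minimal values table; only the "time" and "client_time"
# column names are taken from the description above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE values_table (
    source_id INTEGER, time TEXT, client_time TEXT, value REAL)""")

def write_value(source_id, value, source_ts, server_ts, client_ts):
    # Fallback rule from the text: use the source timestamp if it is
    # defined, otherwise fall back to the server timestamp.
    ts = source_ts if source_ts is not None else server_ts
    con.execute("INSERT INTO values_table VALUES (?, ?, ?, ?)",
                (source_id, ts.isoformat(), client_ts.isoformat(), value))

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
write_value(1, 42.0, None, now, now)  # no source timestamp -> server time used

row = con.execute("SELECT time FROM values_table").fetchone()
print(row[0])  # 2024-01-01T00:00:00+00:00
```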

When the destination is InfluxDB or Confluent / Redpanda / Apache Kafka, the source timestamp is used as the record timestamp. If the payload is composed using a template, all three timestamps can also be included in the payload using the placeholders “[SourceTimestamp]”, “[ServerTimestamp]” and/or “[ClientTimestamp]”.

When the destination is an MQTT broker, timestamps cannot be passed as message metadata, because the MQTT protocol does not specify how timestamps should be passed from the publisher to the broker. They can only be included in the payload. For that, the payload should be defined using a template with placeholders for the timestamps: “[SourceTimestamp]”, “[ServerTimestamp]” and/or “[ClientTimestamp]”.
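A sketch of how such a template could be expanded into a payload: the placeholder names are the documented ones, but the expansion function itself is hypothetical, not Idako's actual implementation.

```python
import json
from datetime import datetime, timezone

# Placeholder names come from the documentation above; the expansion
# logic is an illustrative sketch only.
def render_payload(template: str, source_ts, server_ts, client_ts) -> str:
    return (template
            .replace("[SourceTimestamp]", source_ts.isoformat())
            .replace("[ServerTimestamp]", server_ts.isoformat())
            .replace("[ClientTimestamp]", client_ts.isoformat()))

ts = datetime(2024, 1, 1, tzinfo=timezone.utc)
template = '{"value": 21.5, "t": "[SourceTimestamp]"}'
payload = render_payload(template, ts, ts, ts)
print(payload)  # {"value": 21.5, "t": "2024-01-01T00:00:00+00:00"}

# The rendered payload is still valid JSON:
assert json.loads(payload)["t"] == "2024-01-01T00:00:00+00:00"
```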

Note that in some cases duplicate records (with the same value and timestamp for the same variable) can be written to the database. This can happen when Idako disconnects from the server and reconnects, or when it restarts: a variable's data value may still be the same, and its source timestamp may also be the same as before the reconnect. This can cause duplicate record errors in SQL databases if the values table is configured with a unique index over the source_id and time columns. To resolve this issue, Idako has configuration settings allowing duplicate records (refer to the User Manual for details).
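The effect of such a unique index, and one way a destination can tolerate the duplicate, can be sketched with SQLite standing in for the destination database (the table and index names are made up for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v (source_id INTEGER, time TEXT, value REAL)")
con.execute("CREATE UNIQUE INDEX idx_source_time ON v (source_id, time)")

row = (1, "2024-01-01T00:00:00Z", 42.0)
con.execute("INSERT INTO v VALUES (?, ?, ?)", row)

# After a reconnect, the same value/timestamp pair may arrive again;
# a plain INSERT then raises a unique-constraint error:
try:
    con.execute("INSERT INTO v VALUES (?, ?, ?)", row)
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)

# One way to tolerate it is to silently drop the conflicting duplicate:
con.execute("INSERT OR IGNORE INTO v VALUES (?, ?, ?)", row)
assert con.execute("SELECT COUNT(*) FROM v").fetchone()[0] == 1
```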

Another important point regarding OPC UA timestamps is that OPC UA allows timestamps with very high resolution: down to 10 picoseconds. Target databases usually do not support such high resolution, so Idako allows configuring the precision with which timestamps are written: seconds, milliseconds, or microseconds.

It is also worth mentioning that different storage destinations represent timestamps in different ways. Idako allows fine-tuning the timestamp format: it can be an integer representing Unix epoch time (depending on the precision, the number of seconds, milliseconds, or microseconds since 1 January 1970), a string formatted according to the ISO 8601 standard, or an OPC UA DateTime value (an integer number of 100-nanosecond intervals since January 1, 1601).
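A sketch of converting between these representations: the 11,644,473,600-second offset between the OPC UA epoch (1601) and the Unix epoch (1970) is a well-known constant, but the helper functions are illustrative, not Idako's actual conversion code.

```python
from datetime import datetime, timezone

# Seconds between the OPC UA epoch (1601-01-01) and the Unix epoch (1970-01-01).
EPOCH_DIFF_SECONDS = 11_644_473_600

def opcua_to_unix_ms(opcua_ticks: int) -> int:
    """OPC UA DateTime (100 ns ticks since 1601) -> Unix epoch milliseconds."""
    return opcua_ticks // 10_000 - EPOCH_DIFF_SECONDS * 1_000

def unix_ms_to_iso8601(unix_ms: int) -> str:
    """Unix epoch milliseconds -> ISO 8601 string in UTC."""
    return datetime.fromtimestamp(unix_ms / 1000, tz=timezone.utc).isoformat()

# The OPC UA tick count corresponding to the Unix epoch itself:
ticks_1970 = EPOCH_DIFF_SECONDS * 10_000_000
assert opcua_to_unix_ms(ticks_1970) == 0
print(unix_ms_to_iso8601(0))  # 1970-01-01T00:00:00+00:00
```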

New version 4.2.2 of the ogamma Visual Logger for OPC

We are pleased to announce the release of a new version, 4.2.2, of the ogamma Visual Logger for OPC.
The most important changes are:

Free software/tools

Industrial Data Explorer

Industrial Data Explorer is a two-in-one web application for exploring industrial data published on MQTT brokers and exposed via OPC UA servers.

Runs in Docker: https://hub.docker.com/r/ogamma/ide.

A live demo is available at https://ide.opcfy.io. Feel free to register and try to connect to the publicly accessible OPC UA servers and MQTT brokers listed below.

Hint: after logging in, click on the gear icon at the top right corner to add connections.

For details and to report feedback or issues, please visit the product page on GitHub.

OPC UA Servers

Recently, one of our customers asked: what OPC UA Servers are available for testing OPC UA client applications, such as ogamma Visual Logger for OPC (now re-branded as Idako: Industrial Data Collector)? In other words, they needed to simulate a data stream that an OPC UA Client could ingest via the OPC UA Server interface.

There are many demo/simulation OPC UA servers available: some you can download and run on your own PC, and some are already running instances with public endpoints that you can connect to over the Internet. Here is the list:

OPC UA Simulation Server (Prosys OPC)
  Platform: Windows, Linux, MacOS
  Link: https://prosysopc.com/products/opc-ua-simulation-server/

OPC UA C++ Demo Server (Unified Automation)
  Platform: Windows
  Link: https://www.unified-automation.com/downloads/opc-ua-servers.html (requires free registration to download)
  Public endpoint, hosted by One-Way Automation: opc.tcp://opcuaserver.com:48010

Eclipse Milo OPC UA Demo Server (open source; the main contributor is Kevin Herron, Inductive Automation)
  Platform: written in Java; runs on Windows, Linux, Docker
  Link: https://github.com/digitalpetri/opc-ua-demo-server
  Public endpoint: opc.tcp://milo.digitalpetri.com:62541/milo

OPC PLC Server (Microsoft)
  Platform: cross-platform .NET application; runs on Windows, Linux, Docker
  Link: https://learn.microsoft.com/en-us/samples/azure-samples/iot-edge-opc-plc/azure-iot-sample-opc-ua-server/
  Docker image: mcr.microsoft.com/iotedge/opc-plc
  Public endpoint, hosted by One-Way Automation: opc.tcp://opcplc.opcfy.io:50000
  Note: only secured connections are allowed; client certificates are accepted automatically.

Node OPC UA Server (Sterfive)
  Platform: cross-platform
  Public endpoint: opc.tcp://opcuademo.sterfive.com:26543

Public MQTT Brokers

When we recently created Industrial Data Explorer, we also needed publicly available MQTT brokers. Below is a list of some MQTT brokers we used for tests:
MonsterMQ (Andreas Vogler, https://monstermq.com/)
  Host name: test.monstermq.com

Mosquitto (https://mosquitto.org/)
  Host name: test.mosquitto.org

EMQX (https://www.emqx.com/en)
  Host name: broker.emqx.io

HiveMQ (https://www.hivemq.com/)
  Host name: broker.hivemq.com

OPC UA Client C++ SDK: new version has been released!

We’ve officially released our OPC UA Client C++ SDK 2.0.0!

 

It’s the easiest-to-use C++ SDK on the market!

 

Here’s what’s new:

With the re-designed distribution package, you don’t need to build dependencies. It is now easier to evaluate, and you can start using it in minutes!

Its main advantages:

✅  Designed by Developers for Developers, so it implements all heavy lifting functionalities. This translates to writing minimal code to communicate with OPC UA Servers.

✅  Compared to other SDKs for C/C++, implementing your application using our SDK is way faster. This leads to project development with a lower budget and shorter timelines.

✅  Makes coding more enjoyable by featuring utilities and helpers!

✅  Automatically reconnects to OPC UA servers, recreating subscriptions and monitored items.

✅  Our simple API fully utilizes the power of modern C++.

✅  Uses native C++ base data types (without SDK-specific types).

✅  Supports both asynchronous and synchronous calls, as well as callbacks.

✅  Supports complex data types, enabling values to be easily converted to JSON or string format.

MQTT vs OPC UA

Introduction.

 

It is very common to read that the MQTT protocol is very lightweight in terms of network traffic:

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth.
(Quote from the MQTT home page.)

At the same time, OPC UA is often considered heavier than MQTT.

For me, this was neither obvious nor expected, because the most popular encoding used in OPC UA is TCP binary, which is usually more efficient and lightweight than the free-text formatting used in MQTT payloads.

When I found this report by Johnathan Hottell about OPC UA and MQTT benchmarks, and saw that the MQTT payload was JSON-formatted (where each transferred data value includes multiple key-value pairs, for the value itself and for the timestamp), I wondered how this could be lighter than an OPC UA binary-formatted payload, and decided to reproduce these tests myself.
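The intuition can be illustrated with a back-of-envelope comparison. Note that this is not the actual OPC UA binary DataValue encoding or a real MQTT payload, just a JSON rendering versus a fixed-width binary rendering of the same value and timestamp:

```python
import json
import struct

# A single data point: a double value plus a millisecond timestamp.
value, ts_ms = 21.5, 1_700_000_000_000

# JSON rendering, similar in spirit to typical JSON MQTT payloads:
json_payload = json.dumps({"value": value, "timestamp": ts_ms}).encode()

# Binary rendering: one 8-byte double plus one 8-byte integer.
# (A sketch only, not the actual OPC UA binary encoding.)
binary_payload = struct.pack("<dq", value, ts_ms)

print(len(json_payload), len(binary_payload))
assert len(binary_payload) == 16
assert len(json_payload) > len(binary_payload)
```

The key names alone ("value", "timestamp") already cost more bytes than the entire binary record, which is why the benchmark result below was surprising.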

Test methodology.

 

OPC UA Server and Variables.

In the initial tests, I picked the same number and types of data variables. Later I decided to re-run the tests in a more controlled setup. For easier reproduction, it is worth using a popular and easily available OPC UA server as a data source that can provide simulated data; the OPC UA C++ Demo Server from Unified Automation was the obvious choice. Its set of default variables does not include variables of the types used in the tests run by Johnathan Hottell, therefore I had to select a different set of variable types, but the total number of variables is still 24. Their node identifiers can be found in the Excel file with the test results.

MQTT Broker.

The EMQX broker running as a Docker container was chosen because of its easy deployment and use, and its out-of-the-box support for secured connections, without having to generate and install an SSL certificate.

Test applications generating OPC UA and MQTT traffic.

Two applications that can convert OPC UA data to MQTT were used:

  1. ogamma Visual Logger for OPC, our company's product. For tests, you can use its free Community Edition. For instructions on how to install and use it, please refer to its online User Manual. Version 2.1.4 was used.
  2. Cogent DataHub from Skkynet. From this link, you can download the trial version. Version 9.0.10.747 was used.

Using OPC UA to MQTT converter applications ensures that the same data values are collected over the OPC UA connection, published to the MQTT broker, and received by the subscriber application.

As the MQTT subscriber, MQTT Explorer version 0.4.0 Beta-1 was used.

Measuring network traffic.

To capture OPC UA and MQTT packets, Wireshark was used. Two instances were running: one for OPC UA traffic on port 48010, and a second for MQTT traffic on port 1883 for non-secure connections and port 8883 for secured connections.

For each test case, five metrics were collected:

  1. Size of the file where captured packets were saved from Wireshark, in KB.
  2. The number of captured frames.
  3. Total frame size – the sum of all network packet sizes, across all protocols. It can be viewed in Wireshark via menu Statistics / Protocol Hierarchy, line Frame, column Bytes, converted to KB.
  4. OPC UA payload size – as shown in line OPC UA Binary Protocol, column Bytes, in the Wireshark dialog window opened via menu Statistics / Protocol Hierarchy, converted to KB.
  5. MQTT payload size – as shown in line MQTT Telemetry Transport Protocol (in non-secure mode) or Transport Layer Security (in secured mode), column Bytes, in the same dialog window, converted to KB.

Configuration of test applications.

Each test application was configured to collect data from the OPC UA Server for the selected 24 variables and log the data to the MQTT broker. For configuration steps, please refer to the documentation of each product.

On the OPC UA side, both the sampling interval and the publishing interval should be set to 1 second.

MQTT topics for variables in ogamma Visual Logger for OPC were set to Test24/[VariableDisplayName]. For example, for the variable with browse path Objects/Demo/001_Dynamic/Scalar/Boolean, the topic is Test24/Boolean.

In Cogent DataHub, topics were a little longer, set to Test24/[Browse Path]. For example, Test24/Demo/Dynamic/Scalar/Boolean.

The MQTT payload was set to the variable value converted to a string (not JSON format, to keep the payload size smaller).
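The two topic schemes above can be sketched as follows; the helper functions are hypothetical illustrations, not the products' actual configuration logic:

```python
# Hypothetical sketch of the two topic templates described above.
def topic_from_display_name(browse_path: str) -> str:
    # ogamma Visual Logger style: Test24/[VariableDisplayName]
    # (the display name is approximated by the last path segment)
    return "Test24/" + browse_path.rsplit("/", 1)[-1]

def topic_from_browse_path(browse_path: str) -> str:
    # Cogent DataHub style: Test24/[Browse Path]
    return "Test24/" + browse_path

path = "Demo/001_Dynamic/Scalar/Boolean"
print(topic_from_display_name(path))  # Test24/Boolean
print(topic_from_browse_path(path))   # Test24/Demo/001_Dynamic/Scalar/Boolean
```

Longer topics mean a few extra bytes per published message, which is one of the variables that differs between the two test applications.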

 

Running tests.

The test applications should be shut down before starting the tests.

First, the two Wireshark instances are started; they should not detect any packets yet. Then one of the test applications is started, and Wireshark should display captured packets. After 5 minutes, the test application is stopped, and no more network activity should be seen in Wireshark. Capturing is stopped and the packets are saved into files. Then the metrics can be taken: the file size and the data displayed in the dialog window Statistics / Protocol Hierarchy.

For each application used (ogamma Visual Logger for OPC and Cogent DataHub), the tests were run four times:

  1. Both OPC UA and MQTT communication in non-secure mode, with no subscriber connected to the broker (MQTT Explorer not running).
  2. Both OPC UA and MQTT communication in non-secure mode, with MQTT Explorer running and subscribed to the topic Test24/#; that is, data is delivered from the publisher to the subscriber.
  3. Both OPC UA and MQTT communication in secured mode, with no subscriber connected to the broker (MQTT Explorer not running).
  4. Both OPC UA and MQTT communication in secured mode, with MQTT Explorer running and subscribed to the topic Test24/#; that is, data is delivered from the publisher to the subscriber.

Test Results

A zip file with all captured packet files and the test results in an Excel file can be downloaded from here.

A summary of the collected test results is given in the table below.

Conclusion

Total network traffic when using the MQTT protocol is multiple times higher (from 3.76 to 7.91 times, depending on the test case) than with OPC UA, which does not correlate with the results published by Johnathan Hottell.

Next Steps

It would be interesting to see what the metrics would look like if the MQTT payload were formatted in Sparkplug B, especially in compressed format. We did not have an application converting from OPC UA to Sparkplug B (it is not implemented yet in our application, but it is on the roadmap, so stay tuned!). Perhaps in that case the results would correlate with the results published by Johnathan Hottell.
If you can reproduce such tests, please share your results with us!