About OPC UA timestamps

For software engineers and manufacturing industry professionals, it is important to understand how timestamps are used. This article explains the basics of what timestamps are, the features they offer, and a possible solution to some of the headaches associated with them.

What is a Timestamp?

A timestamp is a way of recording the date and time of each data point or event in a data acquisition system. Having correct timestamps in process control data acquisition is essential for ensuring the validity, reliability, and usability of the data. It also helps to enable effective process monitoring, control, and optimization.

Precise timestamps ensure accurate analysis and interpretation of the data. This is especially crucial because it aids in the identification of trends, patterns, anomalies, and correlations. Timestamps also facilitate synchronization and integration of data from multiple sources. This includes different sensors, instruments, devices, or systems.

Types of Timestamps:

According to the OPC UA Standard, each value of an OPC UA variable is associated with two timestamps: the source timestamp and the server timestamp.

  • The Source Timestamp reflects the timestamp that was applied to a variable value by the data source. It indicates the time at which data values were measured at the lowest-level data source. It’s important to consider that this data source can be located on the same machine that runs the OPC UA Server, or on a different device with its own system clock.
  • The Server Timestamp is used to reflect the time that the Server received a variable value or deemed it as accurate.

If the server reads data values itself, then the source and server timestamps will usually be the same. If an OPC UA Server gets data from another device that supports timestamps, then the source timestamp can be significantly different from the server timestamp.
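The relationship between the two timestamps can be sketched with a minimal model (the class and function names here are illustrative, not part of any OPC UA SDK):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataValue:
    value: float
    source_timestamp: datetime  # applied by the data source that measured the value
    server_timestamp: datetime  # applied when the OPC UA server received the value

def receive_from_device(value: float, measured_at: datetime) -> DataValue:
    """Simulate an OPC UA server receiving a value from a remote device.

    The source timestamp is whatever the device reported; the server
    timestamp is taken from the server's own clock at receipt.
    """
    return DataValue(value=value,
                     source_timestamp=measured_at,
                     server_timestamp=datetime.now(timezone.utc))

# A device measured a value 5 seconds ago (e.g. due to buffering or transport delay).
measured = datetime.now(timezone.utc) - timedelta(seconds=5)
dv = receive_from_device(21.5, measured)

# With synchronized clocks, the gap between the two stamps is the delivery delay;
# a skewed device clock would make this gap misleading.
gap = dv.server_timestamp - dv.source_timestamp
```

This also shows why clock synchronization matters: the gap between the two stamps is only meaningful if both clocks agree.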

Time Zones: A Mystery

No matter the situation, the system clocks of devices along the data acquisition path should be in sync. Ideally, system clocks should be synchronized with time servers, for example using the NTP protocol.

To avoid confusion and errors during conversions, all timestamps in OPC UA are UTC timestamps. This means that there are no time zones or daylight saving time adjustments to worry about. Instead, timestamp values are usually converted to the user’s local time by the application displaying them.
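For example, with Python’s standard-library zoneinfo module, a displaying application might convert a UTC timestamp like this (the time zone name is just an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# All OPC UA timestamps are UTC; store and transport them as-is...
utc_stamp = datetime(2024, 1, 15, 18, 30, 0, tzinfo=timezone.utc)

# ...and convert only at display time, in the viewing application.
local = utc_stamp.astimezone(ZoneInfo("America/Chicago"))  # UTC-6 in January
print(local.isoformat())  # 2024-01-15T12:30:00-06:00
```

Keeping the stored value in UTC and converting only at the presentation layer avoids ambiguity around daylight saving transitions.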

When an OPC UA client creates a subscription with monitored items, the client can specify which timestamps it needs to receive: the source timestamp, the server timestamp, or both.

The Solution:

Our bestselling product, Idako: Industrial Data Collector, creates subscriptions and monitored items that request both timestamps. This gives you the freedom to choose between different timestamp formats. In addition, Idako is scalable, energy efficient, and robust when handling an unlimited number of tags. When data values are forwarded to the destination time-series database or MQTT broker, the format of the timestamps depends on the destination database type.

Different Scenarios, Depending on the Destination Database:

  • When the destination is an SQL database or Snowflake, the source timestamp is written to the “time” column of the values table. If the source timestamp is not defined, then the server timestamp is used. It is also possible to write a client timestamp to the “client_time” column. The client timestamp is the time when data values were received by Idako from the OPC UA Server. This timestamp is especially useful when the server or source timestamps are unreliable or not accurate enough.
  • When the destination is InfluxDB, Confluent, or Apache Kafka, the source timestamp is used as the record timestamp. If the payload is composed using a template, then all three timestamps can be included in the payload using placeholders like “[SourceTimestamp]”, “[ServerTimestamp]”, and/or “[ClientTimestamp]”.
  • When the destination is an MQTT broker, timestamps are not a built-in part of the published messages, because the MQTT protocol does not specify how timestamps should be transported from the publisher to the broker. So they can only be included in the payload. In this case, the payload should be defined using a template, with placeholders for timestamps like “[SourceTimestamp]”, “[ServerTimestamp]”, and/or “[ClientTimestamp]”.
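The template mechanism described above can be sketched with plain string substitution; the placeholder names follow this article, while the JSON shape and function name are illustrative only:

```python
import json
from datetime import datetime, timezone

# Hypothetical payload template using the placeholder names from the article.
TEMPLATE = ('{"value": [Value], "source_ts": "[SourceTimestamp]", '
            '"server_ts": "[ServerTimestamp]", "client_ts": "[ClientTimestamp]"}')

def render_payload(template: str, value: float, source_ts: datetime,
                   server_ts: datetime, client_ts: datetime) -> str:
    """Substitute concrete values into the bracket placeholders (ISO-8601 timestamps)."""
    return (template
            .replace("[Value]", json.dumps(value))
            .replace("[SourceTimestamp]", source_ts.isoformat())
            .replace("[ServerTimestamp]", server_ts.isoformat())
            .replace("[ClientTimestamp]", client_ts.isoformat()))

ts = datetime(2024, 1, 1, tzinfo=timezone.utc)
payload = render_payload(TEMPLATE, 42.0, ts, ts, ts)
doc = json.loads(payload)  # the rendered payload is valid JSON
```

Because the timestamps travel inside the payload, any MQTT subscriber can recover them without protocol-level support.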

Duplicate Records and High-Resolution:

In some cases, duplicate records (with the same value and timestamp for the same variable) can be written to the database. This can occur when Idako disconnects from the server and reconnects, or when it restarts. In these cases, a variable’s data value can be the same, and the source timestamp can also be the same as before the reconnect. This might cause duplicate record errors in SQL databases if the values table is configured to have a unique index by “source_id” and time column values. To resolve this issue, our product Idako has configuration settings for handling duplicate records (refer to our User Manual for details).
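The duplicate-record scenario is easy to reproduce with any SQL database. The sketch below uses SQLite and an illustrative schema (Idako’s actual table layout may differ); SQLite’s INSERT OR IGNORE plays the role of a setting that tolerates duplicates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE "values" (
        source_id INTEGER NOT NULL,   -- identifies the variable
        time      TEXT    NOT NULL,   -- source timestamp
        value     REAL,
        UNIQUE (source_id, time)      -- unique index by source_id and time
    )
""")

row = (7, "2024-01-01T00:00:00Z", 21.5)
con.execute('INSERT INTO "values" VALUES (?, ?, ?)', row)

# After a disconnect/reconnect, the same value with the same source timestamp
# arrives again.  A plain INSERT would raise a uniqueness violation; ignoring
# the duplicate instead keeps the logger running.
con.execute('INSERT OR IGNORE INTO "values" VALUES (?, ?, ?)', row)

count = con.execute('SELECT COUNT(*) FROM "values"').fetchone()[0]
print(count)  # 1 -- the duplicate was silently skipped
```

Other databases offer equivalents, e.g. ON CONFLICT DO NOTHING in PostgreSQL.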

Furthermore, OPC UA allows timestamps to be defined with high resolution: down to 10-picosecond precision. Target databases usually do not support such high resolutions, so Idako lets you configure the precision with which timestamps are written: seconds, milliseconds, or microseconds. Different storage destinations have different ways of representing timestamps.

Our product Idako can fine-tune the timestamp format in the following ways:

  • An integer number representing Unix epoch time (the number of seconds, milliseconds, or microseconds elapsed since Jan. 1st, 1970, depending on the configured precision).
  • A string value formatted with the ISO-8601 standard
  • An OPC UA DateTime value (an integer number of 100-nanosecond intervals elapsed since Jan. 1st, 1601).
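These three representations are interconvertible. A minimal sketch, assuming millisecond precision (the offset between the 1601 and 1970 epochs is 11,644,473,600 seconds):

```python
from datetime import datetime, timezone

# Seconds between the OPC UA DateTime epoch (1601-01-01) and the Unix epoch (1970-01-01).
EPOCH_DIFF_SECONDS = 11_644_473_600

def unix_ms_to_iso8601(unix_ms: int) -> str:
    """Unix epoch milliseconds -> ISO-8601 string (UTC)."""
    return datetime.fromtimestamp(unix_ms / 1000, tz=timezone.utc).isoformat()

def unix_ms_to_opcua(unix_ms: int) -> int:
    """Unix epoch milliseconds -> OPC UA DateTime (100 ns ticks since 1601)."""
    return (unix_ms + EPOCH_DIFF_SECONDS * 1000) * 10_000  # 1 ms = 10,000 ticks

unix_ms = 0  # the Unix epoch itself
print(unix_ms_to_iso8601(unix_ms))  # 1970-01-01T00:00:00+00:00
print(unix_ms_to_opcua(unix_ms))    # 116444736000000000
```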

If you enjoyed learning more about timestamps and their uses, feel free to comment and ask questions. If you’re interested in finding out more about Idako, email us at sales@onewayautomation.com to begin the purchase process.

Free software/tools

Industrial Data Collector

Industrial Data Collector is an integration tool that harvests industrial control data from OPC UA servers, and stores & forwards it to databases or MQTT brokers. Its former name is ogamma Visual Logger for OPC.

Community Edition is free forever for commercial use (limitation: up to 64 variables).

Runs in Docker (https://hub.docker.com/r/ogamma/idako), as well as on Windows, Linux, and Raspberry Pi / Siemens IoT2050.
For details on deployment and usage, refer to the online User Manual.

Industrial Data Explorer

Industrial Data Explorer is a two-in-one web application for exploring industrial data published on MQTT brokers and exposed via OPC UA servers.
Runs in Docker: https://hub.docker.com/r/ogamma/ide
A live demo is available at https://ide.opcfy.io. Feel free to register and try connecting to the publicly accessible OPC UA servers and MQTT brokers listed below.
Hint: after login, to add connections, click on the gear icon at the top right corner.
For details and to report feedback / issues please visit the product page on GitHub.

OPC UA Servers

Recently, one of our customers asked: what OPC UA servers are available to test OPC UA client applications, such as ogamma Visual Logger for OPC (now re-branded as Idako: Industrial Data Collector)? In other words, they needed to simulate a data stream that an OPC UA client could ingest through the OPC UA server interface.

There are many demo/simulation OPC UA servers available: some you can download and run on your own PC, and some are already-running instances with public endpoints that you can connect to over the Internet. Here is the list:

  • OPC UA Simulation Server (Prosys OPC). Platform: Windows, Linux, MacOS.
    https://prosysopc.com/products/opc-ua-simulation-server/
    Public OPC UA endpoint: opc.tcp://uademo.prosysopc.com:53530/OPCUA/SimulationServer

  • OPC UA Weather Server (One-Way Automation). Platform: Linux + Docker.
    Public OPC UA endpoint, hosted by One-Way Automation: opc.tcp://ows.opcuaserver.com:4855/
    Hint: Call method Objects/Weather/AddCity to add your location. Pass 2 arguments: a 2-letter country code (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) and a city/location name.

  • OPC UA C++ Demo Server (Unified Automation). Platform: Windows.
    https://www.unified-automation.com/downloads/opc-ua-servers.html
    Requires (free) registration to download.
    Public endpoint, hosted by One-Way Automation: opc.tcp://opcuaserver.com:48010

  • Eclipse Milo OPC UA Demo Server (open source; the main contributor is Kevin Herron of Inductive Automation). Written in Java; runs on Windows, Linux, and Docker.
    https://github.com/digitalpetri/opc-ua-demo-server
    Public OPC UA endpoint: opc.tcp://milo.digitalpetri.com:62541/milo

  • OPC PLC Server (Microsoft). Cross-platform .NET application; runs on Windows, Linux, and Docker.
    https://learn.microsoft.com/en-us/samples/azure-samples/iot-edge-opc-plc/azure-iot-sample-opc-ua-server/
    Docker image: mcr.microsoft.com/iotedge/opc-plc
    Public endpoint, hosted by One-Way Automation: opc.tcp://opcplc.opcfy.io:50000
    Note: only secured connections are allowed. Client certificates will be accepted automatically.

  • Node OPC UA Server (Sterfive). Cross-platform.
    Public endpoint: opc.tcp://opcuademo.sterfive.com:26543

Public MQTT Brokers

When we recently created Industrial Data Explorer, we also needed publicly available MQTT brokers. Below is a list of some MQTT brokers we used for tests:

  • MonsterMQ (Andreas Vogler, https://monstermq.com/): test.monstermq.com
  • Mosquitto (https://mosquitto.org/): test.mosquitto.org
  • EMQX (https://www.emqx.com/en): broker.emqx.io
  • HiveMQ (https://www.hivemq.com/): broker.hivemq.com

MQTT vs OPC UA

Introduction.

It is very common to read that the MQTT protocol is very lightweight in terms of network traffic:

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth.
(Quote from MQTT home page).

At the same time, OPC UA is often considered heavier than MQTT.

For me, this was not obvious or expected, because the most popular encoding used in OPC UA is TCP binary, which is usually more efficient and lightweight than the free-text formatting used in MQTT payloads.

When I found this report by Johnathan Hottell about OPC UA and MQTT benchmarks, and saw that the MQTT payload was JSON-formatted (in which each transferred data value includes multiple key-value pairs, for the value itself and for the timestamp), I wondered: how can this be lighter than an OPC UA binary-formatted payload? So I decided to reproduce these tests myself.
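The intuition is easy to check with a back-of-the-envelope comparison: one sample encoded as a JSON object with key-value pairs, versus the same sample packed as raw binary (an 8-byte double plus an 8-byte integer timestamp, roughly what a binary encoding carries per value; real OPC UA framing adds its own overhead):

```python
import json
import struct

# One sample: a float value plus a millisecond timestamp.
value, ts_ms = 23.700001, 1_700_000_000_000

# JSON payload with key-value pairs for the value and its timestamp.
json_payload = json.dumps({"value": value, "timestamp": ts_ms}).encode()

# Binary payload: little-endian 8-byte double + 8-byte signed integer.
binary_payload = struct.pack("<dq", value, ts_ms)

print(len(json_payload), len(binary_payload))  # ~48 bytes vs 16 bytes
```

Per-sample, the JSON form is roughly three times larger here, before any protocol framing on either side is counted.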

Test methodology.

OPC UA Server and Variables.

In the initial tests, I picked the same number and types of data variables, and later decided to re-run the tests in a more controlled setup. For easier reproduction, it is worth using a popular and easily available OPC UA server as a data source that can provide simulated data; the OPC UA C++ Demo Server from Unified Automation was the obvious choice. Its set of default variables does not include variables of the types used in the tests run by Johnathan Hottell, so I had to select a different set of variable types, but the total number of variables is still 24. Their node identifiers can be found in the Excel file with the test results.

MQTT Broker.

The EMQX broker running as a Docker container was chosen because of its easy deployment and use, and its out-of-the-box support for secured connections, without having to generate and install SSL certificates.

Test applications generating OPC UA and MQTT traffic.

Two applications that can convert OPC UA data to MQTT were used:

  1. ogamma Visual Logger for OPC, a product of our company. For tests, you can use its free Community Edition. For instructions on how to install and use it, please refer to its online User Manual. Version 2.1.4 was used.
  2. Cogent DataHub from Skkynet. From this link, you can download the trial version. Version 9.0.10.747 was used.

Using OPC UA to MQTT converter applications ensures that the same data values are collected over the OPC UA connection, then published to the MQTT broker, and received by the subscriber application.

As the MQTT subscriber, MQTT Explorer version 0.4.0 Beta-1 was used.

Measuring network traffic.

To capture OPC UA and MQTT packets, Wireshark was used. Two instances were running: one for OPC UA traffic on port 48010, and the second for MQTT traffic on port 1883 for non-secure connections and port 8883 for secured connections.

For each test case, five metrics were collected:

  1. Size of the file where the captured packets were saved from Wireshark, in KB.
  2. The number of captured frames.
  3. Total frame size: the sum of all network packet sizes. This includes all protocols and can be viewed in Wireshark via menu Statistics / Protocol Hierarchy, line Frame, column Bytes, converted to KB.
  4. OPC UA payload size: as shown in line OPC UA Binary Protocol, column Bytes, in the Wireshark dialog window opened by menu Statistics / Protocol Hierarchy, converted to KB.
  5. MQTT payload size: as shown in line MQTT Telemetry Transport Protocol (in non-secure mode) or Transport Layer Security (in secured mode), column Bytes, in the Wireshark dialog window opened by menu Statistics / Protocol Hierarchy, converted to KB.

Configuration of test applications.

Each test application is configured to collect data from the OPC UA Server for selected 24 variables and log data to the MQTT broker. For configuration steps, please refer to the documentation of the product.

On the OPC UA side, both the sampling interval and the publishing interval should be set to 1 second.

MQTT Topics for variables in ogamma Visual Logger for OPC were set to Test24/[VariableDisplayName]. For example, for the variable with browse path Objects/Demo/001_Dynamic/Scalar/Boolean it is set to Test24/Boolean.

In Cogent DataHub, topics were a little longer, set to Test24/[Browse Path]. For example, Test24/Demo/Dynamic/Scalar/Boolean.

The MQTT payload was set to the variable value converted to a string (not JSON format), to keep the payload size smaller.

Running tests.

Test applications should be shut down before starting tests.

First, start the two Wireshark instances; they should not detect any packets yet. Then start one of the test applications; Wireshark should begin displaying captured packets. After 5 minutes, stop the test application; no more network activity should be seen in Wireshark. Stop capturing and save the packets into files. Then the metrics can be collected: the file size and the data displayed in the dialog window Statistics / Protocol Hierarchy.

For each application used (ogamma Visual Logger for OPC and Cogent DataHub), tests were run four times:

  1. Both OPC UA and MQTT communication is in non-secure mode, and there is no subscriber connecting to the broker (the MQTT Explorer does not run).
  2. Both OPC UA and MQTT communication is in non-secure mode, and MQTT Explorer runs, with a subscription to the topic Test24/#. That is, data is delivered from Publisher to the Subscriber.
  3. Both OPC UA and MQTT communication is in secured mode, and there is no subscriber connecting to the broker (the MQTT Explorer does not run).
  4. Both OPC UA and MQTT communication is in secured mode, and MQTT Explorer runs, with a subscription to the topic Test24/#. That is, data is delivered from Publisher to the Subscriber.

Test Results

A zip file with all captured packet files and the test results in an Excel file can be downloaded from here.

The summary of collected test results is given in the table below.

Conclusion

Total network traffic when using the MQTT protocol is multiple times higher (from 3.76 to 7.91 times, depending on the test case) than with OPC UA, which does not correlate with the results published by Johnathan Hottell.

Next Steps

It would be interesting to see what the metrics would look like if the MQTT payload were formatted in Sparkplug B, especially in compressed format. We did not have an application converting from OPC UA to Sparkplug B (it is not implemented yet in our application, but it is on the roadmap, so stay tuned!). Perhaps in that case the results would correlate with those published by Johnathan Hottell.
If you can reproduce such tests, please share them with us!

Why are our products not open source?

Every once in a while we receive an inquiry about our products: is the source code available as open source? It is not, and here is why.


If you look at open-source projects, you can see that they almost never constitute the core business of their creators. If a project is created by individual developers, it might be to showcase their skills to potential employers, or as a hobby project. Very often companies sponsor projects, or are even their main contributors. In these cases it is never the core business of that company; instead, the company is the main user of the project. Making a project open source brings multiple benefits to such a company:
  • Developers and users around the world contribute to the project by reporting bugs and sending pull requests with fixes or with implementations of new features. This lowers the cost of developing and supporting the project.
  • Using an open-source project makes it easier for third parties to use the services the company provides as its core business. So, in the end, the cost of open-source development is transferred to the end customers of those core business services and products anyway.
Here are some examples:
  • Microsoft is a strong supporter of, and now the main contributor to, the .NET OPC UA SDK (https://github.com/OPCFoundation/UA-.NETStandard). Why? Because it makes it easy for customers who are used to the Microsoft stack to collect industrial control data and push it to Microsoft Azure, which is Microsoft's core business.
  • Amazon is the main contributor to the SDK for connecting to AWS IoT from a device using embedded C (https://github.com/aws/aws-iot-device-sdk-embedded-C). A significant portion of this project is MQTT client functionality, so it can be used as a generic MQTT client library too. Although developing such MQTT client libraries is not Amazon’s business, they are the main contributor and made it open source to pave the way for customers to use their core business, the AWS IoT platform.
  • InfluxData made InfluxDb (https://github.com/influxdata/influxdb) open source because their core business is providing hosted InfluxDb services in the cloud; they are the main users of this project and benefit from making it open source.
  • The same goes for Confluent and other service providers who are the main contributors to and users of Apache Kafka (https://github.com/apache/kafka).
One-Way Automation’s core business is the development of OPC UA software for customers, including ogamma Visual Logger for OPC. We are not users of our products and they are not just hobby projects. For that reason, our products are not open source. We need financial support in the form of revenue from selling licenses in order to continue the development and support of those products.