Deploy

Industrial Data Collector can be deployed as a Docker or Podman container, a Windows service, a portable application on Windows or Linux (including Raspberry Pi), a Kafka Source Connector, or an Azure IoT Edge Module, as described in the sections below.

Tip

After deployment, the web-based configuration GUI will be available by default at the address http://localhost:4880. To log in, use the default user name admin and password password. If you are using Industrial Data Collector in production, reset the password immediately using the menu Account/Change Password.

Note

The web GUI of the Industrial Data Collector is optimized and tested for screen resolution 1920x1080 in full-size mode, in Google Chrome and Microsoft Edge. The GUI layout might be disrupted in other combinations of browser and screen resolution. If this becomes a blocking issue for you, please report it at https://github.com/onewayautomation/idako/issues, or contact Support. As a workaround, try the zoom-in or zoom-out feature of the web browser.

Note

Please also note that the size of dialog windows and the width, visibility, and order of columns in tables can be adjusted. Adjusted settings are stored in the web browser cache, so these adjustments only need to be made once.

Setup using Docker / Podman

The easiest way to start using Industrial Data Collector, guaranteed to work in minutes, is to run it as a Docker container. If you have never used Docker before, we encourage you to install and try this powerful technology. You can find instructions on how to install Docker Desktop at its home page.

Note

Podman can also be used as an alternative to Docker. The same image can be started and run by either of these solutions.

The console commands below assume that you use a bash terminal on Linux hosts and PowerShell on Windows hosts. If you don’t want to use PowerShell on Windows, the regular Windows command console can be used too, but in that case some commands will need to be modified. For example, instead of the PowerShell variable for the current folder, ${pwd}, an absolute path value should be used.

Pulling the Docker image of Industrial Data Collector and running it.

The Linux image of Industrial Data Collector is hosted at Docker Hub and can be pulled with the command:

docker pull ogamma/idako

Note

For Raspberry Pi the container image name is ogamma/logger-pi64.

Then you can start the container with the command:

docker run --name ogamma-logger --hostname ogamma-logger -v ${pwd}/data:/home/ogamma/logger/data -e OVL_USER_ID='admin' -e OVL_USER_PASSWORD='password' -p 4880:4880 ogamma/idako
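
If you use the regular Windows command console instead of PowerShell, replace ${pwd} with an absolute path (the path C:\idako\data below is just an example):

docker run --name ogamma-logger --hostname ogamma-logger -v C:\idako\data:/home/ogamma/logger/data -e OVL_USER_ID="admin" -e OVL_USER_PASSWORD="password" -p 4880:4880 ogamma/idako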

Warning

In a production setup, define custom values for the default (initial) login credentials using the environment variables OVL_USER_ID and OVL_USER_PASSWORD! Although the application will force a change of the admin password at the very first login attempt, it is better never to use the default user name and password.

Using docker-compose.

Industrial Data Collector can run as a stand-alone application without other dependencies if SQLite databases are used to store configuration settings and time-series data. But very often PostgreSQL is used as the configuration database, and time-series data can be stored in other databases, like TimescaleDB, InfluxDB, or Apache Kafka. While these third-party dependencies can be installed and run independently, you can also use the docker-compose files from the product’s GitHub repository to install and run them in just a few minutes.

Warning

Note that the docker-compose files are not intended for production use as is. At the very least, the default user credentials must be changed to more secure values.
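
For example, the initial credentials can be overridden in the service definition of the Industrial Data Collector via the environment section (a minimal sketch; the service name and values are examples, check the actual service name in your docker-compose.yml):

services:
  ogamma-logger:
    environment:
      - OVL_USER_ID=myadmin
      - OVL_USER_PASSWORD=choose-a-strong-password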

The easiest way to get all the example docker-compose files is to clone the repository to a local folder by running the command:

git clone https://github.com/onewayautomation/idako

Tip

If you don’t have git installed, you can download it here.

As a result, the repository files will be pulled into the sub-folder idako. In the sub-folder docker of this folder you will find multiple files with the extension .yml. Each of them is intended to start a specific application in its own Docker container. You can start the desired set of containers by passing the names of these files to the docker-compose command.

Short description of the docker compose configuration files:

  • docker-compose.yml describes the container service for Industrial Data Collector.

  • docker-compose-ha-node-1.yml and docker-compose-ha-node-2.yml - examples of running 2 instances of the Industrial Data Collector as High Availability cluster nodes.

  • portainer.yml starts Portainer, a web-based configuration and management tool for Docker containers. With it, you can easily see the list of running containers; stop, start, and restart them; upgrade or re-create them; view usage statistics and logs; connect to them via terminal; and use many other features.

  • timescaledb.yml starts an instance of the TimescaleDB database server, where you can create a database to store time-series data; it can also be used as a regular PostgreSQL database to store configuration data.

  • grafana.yml - starts instance of Grafana.

  • confluent/confluent.yml - the Apache Kafka distribution by Confluent, bundled with a number of other services, as described at the Confluent Platform Quick Start page.

    Once the services start, Confluent Control Center will be available at http://localhost:9021.

  • confluent/confluent-connect.yml - the Confluent Connect service. Within this service, Industrial Data Collector can be deployed as a Confluent Source connector. Use Confluent Control Center to start a new instance of the source connector.

  • redpanda/redpanda.yml and redpanda/redpanda-connect.yml are services for the Kafka broker and source connector by Redpanda.


  • influxdb.yml - InfluxDB version 1.x will be available at localhost:8084, and the management web application for InfluxDB 1.x will be available at http://localhost:8085. For details on how to use it, refer to the project home page at https://timeseriesadmin.github.io/. No credentials are required to access the page.


  • influxdb2.x.yml - InfluxDB version 2.x; its web GUI will be available from the host machine at http://localhost:8086. To configure a connection to this InfluxDB instance from an Industrial Data Collector instance running within a Docker container, set the Host field to influxdb2x and Port to 8086.

    You will need to initialize InfluxDB using its web interface at port 8086.

    Note

    The InfluxDB 2 web GUI also provides tools to run queries, build dashboards with graphs, configure monitoring of data for specified conditions and generation of alerts, etc. For more details refer to its documentation at https://v2.docs.influxdata.com/v2.0/.

  • mssql.yml - Microsoft SQL database server.

  • mysql.yml - MySQL database server.

  • mqtt.yml - MQTT Broker Eclipse Mosquitto.

  • opcplc.yml - OPC UA simulation server (https://github.com/Azure-Samples/iot-edge-opc-plc). To connect to it from an instance of the Industrial Data Collector running as a Docker container, you can use the OPC UA endpoint URL opc.tcp://opcplc:50000.

    Note

    From the host machine it will not be reachable using opcplc as the host name though, which might cause problems if you want to connect to it using an OPC UA client running on your host PC. Refer to the Docker documentation on how to make it accessible from the host machine by host name, or see the sketch below.
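
    A common way to make it reachable from the host is to publish its port in the compose file, after which the endpoint URL opc.tcp://localhost:50000 can be used from the host (a minimal sketch; the service name opcplc is assumed to match the one defined in opcplc.yml):

    services:
      opcplc:
        ports:
          - "50000:50000"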



Starting multiple containers using multiple docker compose files.

For example, if you want to use TimescaleDB to store time-series data and Grafana to visualize it, you can start them together with Industrial Data Collector by passing multiple .yml files to the docker compose command, run from the working directory ./idako/docker:

docker compose -f docker-compose.yml -f portainer.yml -f timescaledb.yml -f grafana.yml up -d

As a result, Docker images will be downloaded from Docker Hub, containers will start from those images, and the components of Industrial Data Collector, the database, and the web-based management tools will be available at the following URLs:

  • http://localhost:4880 : Industrial Data Collector configuration GUI, where you can set up connections to OPC UA servers and define which variables to collect data for. Default credentials are admin / password.


  • http://localhost:9000 : Portainer GUI.


  • localhost:5432 - PostgreSQL database with the TimescaleDB extension.

    Default credentials are: ogamma/ogamma.


  • http://localhost:4888: pgAdmin web GUI, where you can analyze historical data using standard SQL queries.

    Default credentials are admin@ogamma.io/admin.


  • http://localhost:3000: Grafana web GUI, where you can visualize data from OPC UA Servers.


To stop all containers, use the command:

docker compose -f docker-compose.yml -f portainer.yml -f timescaledb.yml -f grafana.yml down

Note

You can also stop or start any single container independently by passing the corresponding .yml file name.
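
For example, to stop and start only the Grafana container:

docker compose -f grafana.yml down
docker compose -f grafana.yml up -d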

Breaking change in Docker container version 4.2.0

In versions prior to 4.2.0, the Industrial Data Collector container ran under the root user. Starting with version 4.2.0 it runs under the user ogamma. As a result, data files on a volume created by an older version, owned by the root user, are no longer accessible in the new version.

To fix this permission issue, run this command within the container under the root user: chown -R ogamma:ogamma /home/ogamma/logger.
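
For example, assuming the container is named ogamma-logger as in the docker run example above, it can be executed from the host as follows:

docker exec -u 0 ogamma-logger chown -R ogamma:ogamma /home/ogamma/logger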

Alternatively, you can pull the Git repository https://github.com/onewayautomation/idako and use the helper configuration file docker-compose-migrate-to-4.3.0.yml, as described here: https://github.com/onewayautomation/idako#breaking-change-in-docker-container-version-420

Setup in Windows using the installer

Industrial Data Collector can be installed as a Windows Service using this executable distribution file: https://onewayautomation.com/opcua-binaries/idako-windows-installer-4.3.0.exe

It will install the required prerequisites, set basic configuration options, install Industrial Data Collector as the Windows service One-Way Automation Industrial Data Collector, and start this service. Optionally, you can also install the service One-Way Automation HA Node Manager, used to run the Industrial Data Collector instance as a node of a High Availability cluster. For more technical details please contact support@onewayautomation.com.

The application runtime files are installed under the folder C:\Program Files\opcfy\idako. The default location for application data files is C:\ProgramData\opcfy\idako.

Tip

If you would like to migrate from an older installation (or Portable Setup), all you need to do is set the data folder location to the folder where the older version of the application was installed. Alternatively, you can copy or move the folder data from the old installation folder to the new location C:\ProgramData\opcfy\idako.
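
For example, using robocopy (the source path below is hypothetical; replace it with your actual old installation folder):

robocopy "C:\OldInstallFolder\data" "C:\ProgramData\opcfy\idako\data" /E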

Tip

The installer uses nssm to run the application as a service. If you need to view or change configuration options of the Industrial Data Collector’s main service after installation, such as environment variables, run the command:

` "C:\Program Files\opcfy\idako\nssm.exe" edit "One-Way Automation Industrial Data Collector" `

To edit settings of the High Availability service, run the command:

` "C:\Program Files\opcfy\idako\nssm.exe" edit "One-Way Automation HA Node Manager" `

Portable Setup in Windows

1. Download the portable distribution package.

2. Install required prerequisites

a). Install Visual C++ 2019 redistributables (64-bit version)

They can be installed using the installation file vc-redist.x64.exe included in the distribution package.

The latest downloads are available at https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#latest-supported-redistributable-version

b). Install Microsoft ODBC Driver 18 for SQL Server.

If not installed already, install Microsoft ODBC Driver 18 for SQL Server from the file msodbcsql.msi included in the distribution package.

Note

This driver must be installed even if you are not using MS SQL Server as a time-series database.

c). Install / Configure database to store time-series data.

Note

This step can be skipped if you plan to read real-time and historical data from OPC UA servers directly, without storing values in a database, or plan to store historical data in the SQLite database, which does not require installation.

For information on supported databases and how to install/configure them, refer to the section Time-Series Databases.

3. Start Industrial Data Collector.

Warning

Before starting Industrial Data Collector for the first time in a production setup, define custom values for the default login credentials using the environment variables OVL_USER_ID and OVL_USER_PASSWORD! Otherwise, the default credentials will be set to admin / password. After the very first login, the application will require changing the credentials from these default values.

Open the Windows command line console, navigate to the folder where the Industrial Data Collector files are unzipped, and start the application idako.exe.
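
For example, in PowerShell (the credential values below are placeholders):

$env:OVL_USER_ID = "myadmin"
$env:OVL_USER_PASSWORD = "choose-a-strong-password"
.\idako.exe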

At the very first start, it might run some initialization steps in the database (for example, if PostgreSQL is used, it will create the required tables in the PostgreSQL database). Before connecting to the first OPC UA server (after adding it from the GUI and attempting to browse), it will also generate an OPC UA Application Instance Certificate, which might take some time.

The application has a built-in web server to support the web-based configuration GUI, and it listens on the HTTP port of that configuration endpoint (by default, port 4880). The Windows operating system might pop up a dialog window asking for permission to listen on the port; you will need to allow it.

Tip

The configuration GUI will be available from a web browser at the address http://localhost:4880; the default user name and password are admin and password.

4. Running Industrial Data Collector as a Windows service.

For application versions 4.2.0 or later, to install the application as a Windows service, use the Windows installer as described in the previous section.

For older versions that did not have a Windows installer, or to run a portable installation as a service, you can use the Service Manager, available to download and use for free here.
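
For example, with nssm, the same tool the Windows installer uses (a sketch; the paths and service name below are examples, and assume nssm is available on your machine):

"C:\idako\nssm.exe" install "Industrial Data Collector" "C:\idako\idako.exe"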

5. Install Grafana

Note

This step can be skipped if you do not want to visualize data.

To install Grafana, follow the instructions at the Download Grafana web page.

Setup in Linux

Note

Tested with Ubuntu 22.04. Most likely it will work in other distributions too. If not, please contact Support.

The distribution package for Ubuntu 22.04 is available at https://onewayautomation.com/opcua-binaries/idako-ubuntu2204-4.3.0.zip

To download and install it from terminal:

  • Open a terminal (the keyboard shortcut Ctrl+Alt+T can be used).

  • Update package lists:

    sudo apt update
    
  • Install the tools wget and unzip:

    sudo apt install wget unzip
    
  • Download the distribution package:

    wget https://onewayautomation.com/opcua-binaries/idako-ubuntu2204-4.3.0.zip
    
  • Unzip it:

    unzip idako-ubuntu2204-4.3.0.zip -d ogamma-logger
    
  • Navigate to the folder it was extracted to:

    cd ogamma-logger
    
  • Install the MS SQL ODBC libraries:

    sudo ./install-odbc-driver.sh
    

    Note

    This is required even if you do not plan to store data in MS SQL Server.

  • Adjust the environment variable LD_LIBRARY_PATH so that shared libraries can be found:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PWD:$PWD/python/lib:$PWD/python:$PWD/python/lib/lib-dynload:/lib/x86_64-linux-gnu:/lib:/usr/lib
    
  • Run Industrial Data Collector:

    ./idako
    

Warning

Before starting the application idako for the first time in a production setup, define custom values for the default login credentials using the environment variables OVL_USER_ID and OVL_USER_PASSWORD!
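
For example (the values below are placeholders):

export OVL_USER_ID=myadmin
export OVL_USER_PASSWORD='choose-a-strong-password'
./idako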

If the application does not start, check the error messages in the console. If required, adjust settings in the configuration file ./data/config.json and start the application again.

Note

In the default basic configuration file ./data/config.json, the field configDb is set to use the built-in SQLite database.

  • Open the configuration GUI

    The configuration GUI will be available from a web browser at the address http://localhost:4880.

Setup in 64-bit Raspberry Pi.

Note

This section will be updated soon. Please contact Support if you would like to deploy the latest version of the product on Raspberry Pi.

To download the distribution package and start the application, open a terminal and run the following commands:

echo "Set the working directory"
      ovl-folder=$PWD
      wget https://onewayautomation.com/opcua-binaries/ogamma-logger-pi-64-4.1.2.zip
      unzip ogamma-logger-pi-64-4.1.2.zip -d $ovl-folder/ogamma-logger
      cd $ovl-folder/ogamma-logger
      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PWD/python:$PWD/python/lib:$PWD/usr/local/lib
      ./ogamma-logger

If you would like to access the configuration GUI remotely from the Internet using Macchina Remote Agent (https://macchina.io/remote.html), follow the steps below.

To download the executable binary file and the example configuration file, run:

wget https://onewayautomation.com/opcua-binaries/WebTunnelAgent
wget https://onewayautomation.com/opcua-binaries/macchina.properties
chmod +x WebTunnelAgent

Then edit the configuration file macchina.properties.

And start the application:

./WebTunnelAgent -c macchina.properties

Then log in at the Macchina portal at https://remote.macchina.io/ to access the configuration GUI remotely.

Setup in 32-bit Raspberry Pi.

Note

Currently we have only an older version of the application that runs on 32-bit Raspberry Pi. If you would like to run the latest version, please contact Support to get the latest distribution package.

Prepare a Micro SD card with the 32-bit debian-bullseye OS.

  • Using Raspberry Pi Imager, write the image to the SD card.

  • Insert SD card into the Raspberry Pi device and start it.

  • Configure OS: set up country, language, WiFi connection, interface preferences (SSH, VNC), etc.

  • Install OS upgrades.

Install Docker and Docker Compose

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
sudo apt-get install -y docker-compose
sudo systemctl start docker
sudo systemctl status docker

If the Docker service fails to start, rebooting the device should help:

sudo reboot

Get the docker compose files for Industrial Data Collector and start it:

git clone https://github.com/onewayautomation/ogamma-logger.git
cd ogamma-logger/docker
docker-compose -f docker-compose-pi32.yml up -d

Optionally, you can also install the Portainer service to manage containers:

docker-compose -f portainer.yml up -d

Installing as a Kafka Source Connector on the Confluent and Redpanda platforms.

Installing as Confluent / Kafka Source Connector

Currently the application packaged as a Kafka Source Connector jar file is available for download at https://onewayautomation.com/opcua-binaries/OpcUaKafkaSourceConnector-1.0.0-4.3.0.jar

Soon (Q1 2026) it should also be available for download at the Confluent marketplace.

For details about how to run Kafka Connectors in general, please refer to the Confluent documentation. A Quick Start guide on running the Industrial Data Collector as a Kafka Connector is available at https://github.com/onewayautomation/idako/blob/master/docker/confluent/confluent.md

After starting the OPC UA Source Connector task, its configuration endpoint is available by default at port 4880. From there, it can be configured further following this section: Configure

Installing as Redpanda / Kafka Source Connector

Currently the application packaged as a Kafka Source Connector jar file is available for download at https://onewayautomation.com/opcua-binaries/OpcUaKafkaSourceConnector-1.0.0-4.3.0.jar

For details about how to run the Industrial Data Collector as a Kafka Connector in the Redpanda ecosystem, please refer to the Quick Start guide at https://github.com/onewayautomation/ogamma-logger/blob/master/docker/redpanda/redpanda.md

After starting the OPC UA Source Connector task, its configuration endpoint is available by default at port 4880. From there, it can be configured further following this section: Configure

How to deploy Industrial Data Collector as an Azure IoT Edge Module

Note

This section needs to be updated. Please contact Support if you need to run Industrial Data Collector as Azure IoT Edge Module.

If you have an Azure IoT Edge device, you can install Industrial Data Collector on it remotely via the Microsoft Azure Portal, as an Azure IoT Edge Module. This section describes how an industrial PC with the Ubuntu 18.04 operating system can be turned into an IoT Edge device, and the steps to install the Industrial Data Collector on it. Further below we will refer to this PC as the device.

Install the Azure IoT Edge runtime on the device.

Installation of the Azure IoT Edge runtime on the device is described in detail on the Microsoft web site here: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge?view=iotedge-2018-06

For convenience, the steps are described here too. Before proceeding, add an IoT Edge device in the Microsoft Azure portal. It acts like a twin in the Azure Portal, mirroring the real device. After creating it in the Azure Portal, find the option Primary Connection String in the device settings - you will need it to initialize the IoT Edge runtime on the device. You can follow the tutorial from this page: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-register-device?view=iotedge-2020-11&tabs=azure-portal (Option 1: Register with symmetric keys).

The next steps need to be performed on the device:

Install prerequisites.

The utility program curl and the moby Docker engine should be installed on the device.

Open command line terminal and run the following commands:

sudo apt-get update
sudo apt-get install curl
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
sudo apt-get update
sudo apt-get install moby-engine

Install Azure IoT Edge runtime

Open command line terminal and run the following commands:

sudo apt-get update
sudo apt-get install iotedge

To install a specific version of the IoT Edge runtime, for example 1.1.1-1, run:

sudo apt-get install iotedge=1.1.1-1 libiothsm-std=1.1.1-1

Provision the IoT Edge device with its cloud identity (using a symmetric key).

This provisioning step maps, or binds, this device to its twin IoT Edge device added in the Azure Portal.

Open the IoT Edge configuration file for editing:

sudo nano /etc/iotedge/config.yaml

Find the section Manual provisioning with an IoT Hub connection string in the file and uncomment the provisioning section:

provisioning:
  source: "manual"
  device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"

Update the value of the option device_connection_string with the value of the Primary Connection String option from your IoT Edge device in the Azure Portal. Make sure that any other provisioning sections are commented out. Make sure the provisioning: line has no preceding whitespace and that nested items are indented by two spaces.

Save and close the file: CTRL + X, Y, Enter

Restart the IoT Edge runtime:

sudo systemctl restart iotedge

To check the status of the IoT Edge daemon, use the command:

sudo systemctl status iotedge

Verify on the Azure Portal page for the device that the IoT Edge system modules $edgeAgent and $edgeHub have the runtime status running. It might take some time for the status to update.
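
You can also verify locally on the device that the modules are running:

sudo iotedge list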

Deploy Industrial Data Collector

Once the Azure IoT Edge runtime is installed on the device and provisioned, you can install Industrial Data Collector on it remotely using the Azure Portal.

On the device page, click on the Set Modules tab. In the IoT Edge Modules section, click on the Add button and select IoT Edge Module from the dropdown list. As a result, the Add IoT Edge Module page should open.

Here, in the Module Settings tab, fill in the following fields:

  • IoT Edge Module Name - an arbitrary name for the module.

  • Image URI - set to the full URL of the Docker container image: registry.hub.docker.com/ogamma/logger-dev.

In the Container Create Options tab, enter the following settings:

{
    "Hostname": "ovl",
    "Volumes": {
        "/home/ogamma/logger/data": {}
    },
    "WorkingDir": "/home/ogamma/logger",
    "NetworkDisabled": false,
    "ExposedPorts": {
        "4880/tcp": {}
    },
    "HostConfig": {
        "PortBindings": {
            "4880/tcp": [
                {
                    "HostPort": "4880"
                }
            ]
        },
        "Binds": [
          "/var/ogamma-logger-data:/home/ogamma/logger/data"
        ]
    }
}

Click on the Add button. A new module entry should be added to the list of modules.

Click on the Review + create button to complete the module deployment.

Installing older versions.

If you need help with installing older versions, please contact Support.

Upgrading from older versions.

Note

Starting from version 2.2.0, a valid Annual Maintenance and Upgrades License is required. Before upgrading the application, please check if your existing license allows running the new version you are considering upgrading to. If the release date of the new version is no later than the date displayed in the First Activation Date field of the License Information dialog window plus one year, then no separate AMU License is required. In newer versions the license status dialog window has a separate field, Expiration Date, in the Annual Maintenance and Upgrades License group.

To purchase the AMU License, contact Support.

For details about this license please refer to this section: Annual Maintenance and Upgrades License.

For information about how to install the AMU License please refer to the section Upload Annual Maintenance and Upgrades License.

Upgrading older Windows installations to version 4.2.0 or later.

Starting from version 4.2.0 it is possible to use the Windows installer, which configures the application to run as a Windows service. If you use this installer, select the current installation folder (the working directory where the ./data subfolder is located) in the Application Data Folder step. Note that the data folder is different from the folder where the runtime executable files are located, which is selected at the Choose Install Location step.

Steps before upgrade.

  • Before upgrading to the new version, stop the existing application and create a backup of your application data folder (usually the sub-folder ./data) and of the configuration database, to avoid any data loss.

  • If the configured time-series destination is InfluxDB, Confluent/Apache Kafka, or an MQTT Broker - that is, measurements and tags or topic names / key names are used - make sure that the same values for those fields are used after the upgrade as before it. Before starting the upgrade, review the current values of those columns in the Logged Variables table and note them for some rows. After the upgrade, verify that the same measurement/topic names and tags/keys are used as before.

Also, review and note the configuration settings of the time-series database.

Tip

As a precaution, to avoid writing records with undesired measurement/topic names or tags/key names into the time-series database, you can change the time-series database configuration to use a host name which does not exist before starting the upgrade.

  • In version 2.1.0, a new database engine is used for the Local Storage. Data migration from an older version of the Local Storage to the newer version is not implemented. Therefore, before starting the upgrade, make sure that the Local Storage does not contain data for a significant time range. If the connection with the destination time-series database is healthy, the Local Storage should only hold not-yet-forwarded data for a very short time range (around 1 second or so), and this should not be an issue.

Upgrading Docker container.

Note

This section assumes that the docker-compose files from the GitHub project https://github.com/onewayautomation/idako are used.

  • Open a terminal and navigate to the sub-folder idako.

  • Stop the Industrial Data Collector container:

    docker-compose -f docker-compose.yml down
    
  • Open the file docker-compose.yml in a text editor, and modify the line which defines what image to use (image: 'ogamma/idako:4.3.0', or image: 'ogamma/idako:latest') so that it uses the latest version of the Industrial Data Collector (as of today, version 4.3.0): image: 'ogamma/idako:4.3.0'. See the sketch after this list.

  • Start the container:

    docker-compose -f docker-compose.yml up -d
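
For reference, the relevant fragment of docker-compose.yml looks similar to this (a sketch; the service name in your file may differ):

services:
  ogamma-logger:
    image: 'ogamma/idako:4.3.0'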
    

Upgrading Windows and Ubuntu portable installations.

  • Stop the existing instance.

  • Create a backup of the installation folder.

  • Unzip Industrial Data Collector files into the existing installation folder.

  • Start the new instance. If required, it will upgrade the config files and the configuration database.

Versions the upgrade is supported from.

Upgrading from any previous version to a newer version is supported. Downgrading is not implemented.