Project Description
The Deformation Monitoring Package (DMPACK) is a free and open source software package for sensor control and automated time series processing in geodesy and geotechnics. The package consists of a library libdmpack and additional programs based on it which serve as a reference implementation of solutions to various problems in deformation monitoring, such as:
- sensor control
- sensor data parsing and processing
- database access
- remote procedure calls
- data synchronisation and export
- spatial transformations
- time series analysis
- plotting and reporting
- web-based data access
- distributed logging
- MQTT connectivity
- Leica GeoCOM API
- scripting
- e-mail
DMPACK is a scientific monitoring system developed for automated control measurements of buildings, infrastructure, terrain, geodetic nets, and other objects. The software runs on sensor nodes, usually industrial embedded systems or single-board computers, and obtains observation data from arbitrary sensors, like total stations, digital levels, inclinometers, weather stations, or GNSS receivers. The raw sensor data is then processed, stored, and optionally transmitted to a server. The software package may be used to monitor objects like:
- bridges, tunnels, dams
- motorways, railways
- construction sites, mining areas
- landslides, cliffs, glaciers
- churches, monasteries, and other heritage buildings
DMPACK is built around the relational SQLite database for time series and log storage on client and server. The server component is optional. It is possible to run DMPACK on clients only, without data distribution. The client-side message passing is based on POSIX message queues and POSIX semaphores.
Currently, only 64-bit Linux and FreeBSD are supported as operating systems.
Software Architecture
Similar Software
There are similar open source projects that provide middleware for autonomous sensor networks:
- 52°North Sensor Observation Service: The reference implementation of the OGC Sensor Observation Service (SOS) in Java, by 52°North Spatial Information Research GmbH. Offers an interoperable interface for publishing and querying sensor data and meta data. Additional client applications enable analysis and visualisation of the measurement data. The project is mostly inactive. (GPLv2)
- Argus: A non-geodetic sensor data monitoring and alerting solution built with Node.js, MariaDB, and React. (MIT)
- FROST: Fraunhofer Open Source SensorThings (FROST) is the reference implementation of the OGC SensorThings API in Java. The project provides an HTTP- and MQTT-based message bus for data transmission between client and server. Developed by Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung (IOSB), Karlsruhe. (LGPLv3)
- Global Sensor Networks: A Java-based software middleware designed to facilitate the deployment and programming of sensor networks, by the Distributed Information Systems Laboratory (EPFL), Switzerland. The project appears to be abandoned. (GPLv3)
- istSOS: A server implementation of the OGC Sensor Observation Service in Python, for managing and dispatching observations from monitoring sensors. The project also provides a graphical user interface and a RESTful web API to automate administration procedures. Developed by the Istituto Scienze della Terra, University of Applied Sciences and Arts of Southern Switzerland. The software seems not to be actively maintained anymore. (GPLv2)
- Kotori: A multi-channel, multi-protocol telemetry data acquisition and graphing toolkit for time series data processing in Python. It supports scientific environmental monitoring projects, distributed sensor networks, and similar scenarios. (AGPLv3)
- OpenADMS: The Open Automatic Deformation Monitoring System is an IoT sensor network middleware in Python 3. The system was developed as a prototype of DMPACK and includes client and server programs. (BSD)
- OpenSensorHub: Java-based middleware for building Sensor Webs in the Internet of Things, based on OGC standards from the Sensor Web Enablement (SWE) initiative. (MPLv2)
- Project Mjolnir: An open source client–server IoT architecture for scientific sensor networks written in Python, by the University of Alabama in Huntsville and NASA. Includes a sensor client for data logging, uplink, and control, as well as a server component to store, serve/display, and monitor data from remote sensors. Further development of the software has been stopped. (MIT)
- Ulyxes: An open source project in Python to control robotic total stations (RTS) and other sensors, and to publish observation results on web-based maps. Developed at the Department of Geodesy and Surveying of the Budapest University of Technology and Economics. (GPLv2)
Requirements
DMPACK has the following requirements:
- Linux or FreeBSD operating system
- 64-bit platform (x86-64, AArch64)
- Fortran 2018 and ANSI C compiler (GCC, Intel oneAPI)
Additional dependencies have to be present to build and run the software of this package:
- FastCGI
- Gnuplot
- HDF5
- LAPACK
- libcurl
- Lua 5.4
- PCRE2
- SQLite 3 (≥ 3.39.0)
- zlib
- zstd
The web applications require a compatible web server, such as lighttpd(1).
To generate the man pages, the User Guide, and the source code documentation, you will also need:
- AsciiDoctor, Pygments, and pygments.rb
DMPACK depends on a number of Fortran interface libraries that are included in the repository as Git submodules.
If the repository is cloned recursively through Git, or if the project is built using FPM, the submodules will be downloaded automatically. Without Git or FPM, this step has to be done manually by executing fetchvendor.sh, for example:

$ curl -L -s -o master.zip https://github.com/dabamos/dmpack/archive/refs/heads/master.zip
$ unzip master.zip
$ cd dmpack-master/
$ sh fetchvendor.sh
$ make [freebsd|linux]
$ make install

The shell script fetchvendor.sh requires curl(1) and unzip(1).
Installation
This section describes the steps to build the DMPACK library and programs from source, either with POSIX Make or the Fortran Package Manager (FPM). At the moment, support for the Fortran Package Manager is experimental, and using GNU/BSD Make is the recommended way. Display the available build targets of the Makefile:
$ make help
Or, output the selected build options:
$ make options PREFIX=/opt
See section System Configuration on how to configure the operating system after the installation. The shared libraries libgcc.so, libgfortran.so, and libquadmath.so must be present on the target system if the DMPACK programs have been compiled with GNU Fortran.
FreeBSD
First, install the build and run-time dependencies:
$ doas pkg install archivers/zstd databases/sqlite3 devel/git devel/pcre2 \
  devel/pkgconf ftp/curl lang/gcc lang/lua54 math/gnuplot math/lapack \
  science/hdf5 www/fcgi
Instead of math/gnuplot, you may want to install package math/gnuplot-lite, which does not depend on X11 (but lacks the raster graphic terminals). The web applications additionally require a web server:
$ doas pkg install www/lighttpd
Optionally, install Pygments and AsciiDoctor to generate the man pages and the User Guide:
$ doas pkg install devel/rubygem-pygments.rb textproc/rubygem-asciidoctor
Make
The repository has to be cloned recursively using command-line argument --recursive. Execute the Makefile with build target freebsd:
$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack
$ cd dmpack/
$ make freebsd
Install the library and all programs system-wide to /usr/local:

$ doas make install

You can change the installation prefix with argument PREFIX. To install to directory /opt instead, run:

$ doas make install PREFIX=/opt

In this case, path /opt/bin must be included in environment variable PATH.
Fortran Package Manager
Either clone the repository with Git, or download the archive of the master branch. Then, run:
$ export FFLAGS="-D__FreeBSD__ -I/usr/local/include -ffree-line-length-0"
$ fpm test --flag "${FFLAGS}"
$ fpm build --profile release --flag "${FFLAGS}"
$ fpm install
The Fortran Package Manager will fetch all third-party dependencies automatically, but the configuration and shared files have to be installed manually. The library and programs will be installed to ~/.local by default.
Linux
On Debian, install GCC, GNU Fortran, and the build environment first:
$ sudo apt install gcc gfortran git make pkg-config
The third-party dependencies have to be installed with development headers:
$ sudo apt install --no-install-recommends libblas-dev liblapack-dev \
  curl libcurl4 libcurl4-openssl-dev libfcgi-bin libfcgi-dev gnuplot \
  libhdf5 libhdf5-dev lua5.4 liblua5.4 liblua5.4-dev libpcre2-8-0 \
  libpcre2-dev sqlite3 libsqlite3-dev zlib1g zlib1g-dev libzstd1 \
  libzstd-dev
Instead of package gnuplot, you may prefer the no-X11 flavour gnuplot-nox if raster graphic formats are not required (essentially, SVG output only). The SQLite 3 package version must be ≥ 3.39.0. Depending on the package repository, the names of the HDF5 and Lua packages may differ.
Note: If Intel oneAPI is used instead of GCC to compile DMPACK, it is necessary to build HDF5 from source, as the versions in the Linux package repositories have been compiled with GNU Fortran and are therefore incompatible. See section HDFView for hints regarding the build process.
Make
Clone the DMPACK repository using command-line argument --recursive, and execute build target linux of the Makefile:
$ git clone --depth 1 --recursive https://github.com/dabamos/dmpack
$ cd dmpack/
$ make linux
Install the DMPACK libraries and programs system-wide to /usr/local:

$ sudo make install

Or, to install to directory /opt instead, run:

$ sudo make install PREFIX=/opt

Path /opt/bin must be added to the global PATH environment variable.
Note: Custom SQLite 3
If the SQLite 3 library has been built from source and installed to /usr/local, run:

$ make OS=linux PREFIX=/usr LIBSQLITE3="-L/usr/local/lib -lsqlite3"

If more than one library is installed, specify the library path with an appropriate linker flag.
Note: Intel oneAPI Compilers
If Intel oneAPI is used instead of GCC, run:

$ make CC=icx FC=ifx PPFLAGS= \
  CFLAGS="-mtune=native -O2 -fpic" FFLAGS="-mtune=native -O2 -fpic" \
  LDFLAGS="-module ./include -I./include" \
  INCHDF5="-I/opt/include" \
  LIBHDF5="-Wl,-rpath=/opt/lib -L/opt/lib -lhdf5_fortran -lhdf5"

In this particular case, the HDF5 libraries are installed to /opt.
Fortran Package Manager
To build DMPACK using the Fortran Package Manager, change to the cloned or downloaded repository, and run:
$ export FFLAGS="-D__linux__ `pkg-config --cflags hdf5` -ffree-line-length-0"
$ fpm test --flag "${FFLAGS}"
$ fpm build --profile release --flag "${FFLAGS}"
$ fpm install
The library and programs will be installed to directory ~/.local by default. If the compilation fails with an error message stating that -llua-5.4 cannot be found, update the build manifests first:

$ sed -i "s/lua-5/lua5/g" fpm.toml
$ sed -i "s/lua-5/lua5/g" build/dependencies/fortran-lua54/fpm.toml
Deformation Monitoring Entities
The data structures of DMPACK are based on the following entities. The date and time format used internally is a 32-character ISO 8601 time stamp in microsecond resolution, with time separator T and mandatory GMT offset, for example, 1970-01-01T00:00:00.000000+00:00. A human-readable format 1970-01-01 00:00:00 +00:00 may be used where reasonable.
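The fixed-width layout of this time stamp can be illustrated with a short sketch (Python is used here for illustration only; DMPACK itself is written in Fortran):

```python
from datetime import datetime, timezone
import re

# Format the Unix epoch as a DMPACK time stamp: microsecond resolution,
# "T" separator, and a mandatory GMT offset, 32 characters in total.
ts = datetime(1970, 1, 1, tzinfo=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f+00:00")
print(ts)       # 1970-01-01T00:00:00.000000+00:00
print(len(ts))  # 32

# A simple validation pattern for the format.
ISO_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6}[+-]\d{2}:\d{2}$")
print(bool(ISO_PATTERN.match(ts)))  # True
```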
Observation Entities
- Node: A unique sensor node within a sensor network. Contains id, name, description, and optional position.
- Sensor: A unique sensor attached to a node, with id, name, description, and optional position.
- Target: A unique measurement target (point of interest, location) with id, name, description, and optional position. Multiple nodes and sensors may share a single target.
- Observation: A single measurement identified by name and unique UUID4 that contains requests to and responses from a sensor, referencing a node, a sensor, and a target. An observation can contain up to 8 requests which will be sent to the sensor in sequential order.
- Request: A command to send to the sensor, referencing an observation and ordered by index. A request can contain up to 16 responses.
- Response: Floating-point values in the raw response of a sensor can be matched by regular expression groups. Each matched group is stored as a response. Responses reference a request and are ordered by index. They contain name, type, value, unit, and an optional error code.
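The group-based extraction can be sketched in a few lines of Python (the raw reply and group names below are made up for illustration; note that DMPACK uses PCRE2 named groups `(?<name>...)`, which Python spells `(?P<name>...)`):

```python
import re

# Hypothetical raw sensor reply with two values to extract.
raw = "TEMP +19.12 HUM 64.30"

# One named group per response; each matched group becomes a response value.
pattern = re.compile(r"TEMP\s+(?P<temp>[-+0-9.]+)\s+HUM\s+(?P<humidity>[-+0-9.]+)")

match = pattern.search(raw)
responses = {name: float(value) for name, value in match.groupdict().items()}
print(responses)  # {'temp': 19.12, 'humidity': 64.3}
```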
Log Entities
- Log: Log message of a sensor node, either of level debug, info, warning, error, or critical, and optionally related to a sensor, a target, and an observation.
Beat Entities
- Beat: Short status message (heartbeat, handshake) that contains node id, client address, client version, time stamp, system uptime, and last connection error, sent periodically from client to server.
RPC Entities
- API Status: Short key–value response of the HTTP-RPC API service in plain-text format.
Program Overview
DMPACK includes programs for sensor I/O, database management, observation processing, and other tasks related to automated control measurements. The programs may be classified into the following categories.
Databases
- dmbackup: Creates an online backup of a database by either using the SQLite backup API or VACUUM INTO.
- dmdb: Stores observations received from POSIX message queue in a SQLite database.
- dmdbctl: A command-line interface to the DMPACK observation database, to read, add, update, or delete nodes, sensors, and targets.
- dmexport: Exports beats, nodes, sensors, targets, observations, and logs from database to file, either in CSV, JSON, or JSON Lines format.
- dmimport: Imports nodes, sensors, targets, observations, and logs from CSV file into database.
- dminit: Creates and initialises SQLite observation, log, and beat databases.
- dmlogger: Stores logs received from POSIX message queue in a SQLite database.
Message Passing
- dmlog: A utility program to send log messages from command-line or shell script to the POSIX message queue of a dmlogger process, to be stored in the log database.
- dmrecv: Receives logs or observations from POSIX message queue and writes them to stdout, file, or named pipe.
- dmsend: Sends observations or logs from file to a DMPACK application via POSIX message queue.
Observation Processing
Plots & Reports
Remote Procedure Calls
- dmapi: A FastCGI-based HTTP-RPC service that provides an API for node, sensor, target, observation, and log synchronisation, as well as heartbeat transmission. Clients may either send records to be stored in the server database, or request data of a given time range. Depending on the HTTP Accept header, the server returns data in CSV, JSON, JSON Lines, or Namelist format. Requires a FastCGI-compatible web server, such as lighttpd(1).
- dmbeat: Sends short status messages (heartbeats) periodically to a remote dmapi instance.
- dmsync: Synchronises nodes, sensors, targets, observations, and log messages between client and dmapi server. Only uni-directional synchronisation from client to server is supported.
Sensor Control
- dmfs: Reads sensor data from virtual file system, file, or named pipe. The program can be used to read values from sensors connected via 1-Wire (OWFS). Observations are forwarded via POSIX message queue and/or written to file.
- dmpipe: Executes a program as a sub-process connected through an anonymous pipe and forwards the output via POSIX message queue. Optionally, observations are written to file or stdout.
- dmserial: Connects to a TTY/PTY serial port for sensor communication. The program sends requests to a connected sensor to receive responses. It pre-processes the response data using regular expressions and forwards observations via POSIX message queue.
Utilities
Web
- dmfeed: Creates an Atom syndication feed in XML format (RFC 4287) from logs of given sensor node and log level. If the feed is served by a web server, clients can subscribe to it by using a feed reader or news aggregator. The program may be executed periodically as a cron job.
- dmweb: A CGI-based web user interface for DMPACK database access on client and server. Requires a web server and gnuplot(1).
Programs
Some programs read settings from an optional or mandatory configuration file. Examples of configuration files are provided in directory /usr/local/etc/dmpack/. The configuration file format is based on Lua tables and is scriptable. Comments in the configuration file start with --. You may want to enable Lua syntax highlighting in your editor (for instance, set syntax=lua in Vim), or use the file ending .lua instead of .conf.
dmapi
dmapi is an HTTP-RPC API service for remote DMPACK database access. The web application has to be executed through a FastCGI-compatible web server or a FastCGI spawner. It is recommended to use lighttpd(1).
The dmapi service offers endpoints for clients to insert beats, logs, and observations into the local SQLite database, and to request data in CSV or JSON format. Authentication and encryption are independent of dmapi and have to be provided by the web server.
All POST data has to be serialised in Fortran 95 Namelist format, with optional deflate or zstd compression.
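The compression side of this can be sketched with Python's zlib module (the payload below is a stand-in, not an actual DMPACK namelist record):

```python
import zlib

# A stand-in POST body; real payloads are Fortran 95 namelist records.
payload = b"&DMBEAT node_id='node-1' /"

# Deflate-compress the body before transmission ...
compressed = zlib.compress(payload)

# ... and verify it survives the round trip on the receiving side.
print(zlib.decompress(compressed) == payload)  # True
```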
If HTTP Basic Auth is enabled, the sensor id of each beat, log, node, sensor, and observation sent to the HTTP-RPC service must match the name of the authenticated user. For example, to store an observation of a node with the id node-1, the HTTP Basic Auth user name must be node-1 as well. If the observation is sent by any other user, it will be rejected (HTTP 401).
| Environment Variable | Description |
|---|---|
| `DM_DB_BEAT` | Path to heartbeat database (required). |
| `DM_DB_LOG` | Path to log database (required). |
| `DM_DB_OBSERV` | Path to observation database (required). |
| `DM_READ_ONLY` | Set to 1 to enable read-only database access. |
The web application is configured through environment variables. The web server or FastCGI spawner must be able to pass environment variables to dmapi. See RPC Server for an example configuration.
The service accepts HTTP GET and POST requests. Section RPC API gives an overview of the available endpoints. The response format depends on the MIME type set in the HTTP Accept header of the request, either:

- application/json (JSON)
- application/jsonl (JSON Lines)
- application/namelist (Fortran 95 Namelist)
- text/comma-separated-values (CSV)

By default, responses are in CSV format. The Namelist format is available only for single records. Status messages are returned as key–value pairs, indicated by content type text/plain.
dmbackup
The dmbackup utility creates an online backup of a running SQLite database. By default, the SQLite backup API is used. The program is functionally equivalent to running the sqlite3(1) command-line interface:
$ sqlite3 <database> ".backup '<output>'"
dmbackup does not replace existing backup databases.
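The online backup mechanism itself can be demonstrated with Python's sqlite3 module, which wraps the same SQLite backup API (in-memory databases are used here so the sketch is self-contained):

```python
import sqlite3

# Source database with some demo content.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE demo (x INTEGER)")
src.execute("INSERT INTO demo VALUES (1)")
src.commit()

# Page-by-page online backup; the source stays readable and writable
# while the copy is in progress.
dst = sqlite3.connect(":memory:")
src.backup(dst)

print(dst.execute("SELECT x FROM demo").fetchone()[0])  # 1
```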
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--backup` | | – | Path of the backup database. |
| `--database` | | – | Path of the SQLite database to backup. |
| `--help` | | – | Output available command-line arguments and quit. |
| `--vacuum` | | off | Use VACUUM INTO instead of the SQLite backup API. |
| `--verbose` | | off | Print backup progress (not in vacuum mode). |
| `--version` | | – | Output version information and quit. |
| `--wal` | | off | Enable WAL journal for backup database. |
Examples
Create an online backup of an observation database:
$ dmbackup --database /var/dmpack/observ.sqlite --backup /tmp/observ.sqlite
dmbeat
The dmbeat program is a heartbeat emitter that sends heartbeats or handshakes via HTTP POST to a remote dmapi service. The heartbeat messages include time stamp, system uptime, and last connection error. The server may inspect this data to check if a client is still running and has network access. The RPC endpoint is expected at [http|https]://<host>:<port>/api/v1/beat.

Passing the server credentials via the command-line arguments --username and --password is insecure on multi-user operating systems and only recommended for testing.
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--config` | | – | Path to configuration file. |
| `--count` | | 0 | Maximum number of heartbeats to send (unlimited if 0). |
| `--debug` | | off | Forward log messages of level debug via IPC (if logger is set). |
| `--help` | | – | Output available command-line arguments and quit. |
| `--host` | | – | IP or FQDN of HTTP-RPC host. |
| `--interval` | | 0 | Emit interval in seconds. |
| `--logger` | | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
| `--name` | | `dmbeat` | Optional name of instance and table in given configuration file. |
| `--node` | | – | Node id. |
| `--password` | | – | HTTP-RPC API password. |
| `--port` | | 0 | Port of HTTP-RPC API server. If 0, the default port of the protocol is used. |
| `--tls` | | off | Use TLS encryption. |
| `--username` | | – | HTTP-RPC API user name. If set, implies HTTP Basic Auth. |
| `--verbose` | | off | Print log messages to stderr. |
| `--version` | | – | Output version information and quit. |
dmdb
The dmdb program collects observations from a POSIX message queue and stores them in a SQLite database. The name of the message queue equals the given dmdb name, by default dmdb. The IPC option enables process synchronisation via POSIX semaphores. The value of the semaphore is changed from 0 to 1 if a new observation has been received. The name of the semaphore equals the dmdb name. Only a single process may wait for the semaphore.
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--config` | | – | Path to configuration file. |
| `--database` | | – | Path to SQLite observation database. |
| `--debug` | | off | Forward log messages of level debug via IPC (if logger is set). |
| `--help` | | – | Output available command-line arguments and quit. |
| `--ipc` | | off | Uses a POSIX semaphore for process synchronisation. The name of the semaphore matches the instance name (with leading `/`). |
| `--logger` | | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
| `--name` | | `dmdb` | Optional name of program instance, configuration, POSIX message queue, and POSIX semaphore. |
| `--node` | | – | Node id. |
| `--verbose` | | off | Print log messages to stderr. |
| `--version` | | – | Output version information and quit. |
Examples
Create a message queue /dmdb, wait for incoming observations, and store them in the given database:
$ dmdb --name dmdb --node dummy-node --database /var/dmpack/observ.sqlite --verbose
Log messages and observation ids are printed to stdout.
dmdbctl
The dmdbctl utility program performs create, read, update, or delete operations (CRUD) on the observation database.
- Create: Add nodes, sensors, and targets to the database.
- Read: Read nodes, sensors, and targets from database. Print the records to standard output.
- Update: Update nodes, sensors, and targets in the database.
- Delete: Delete nodes, sensors, and targets from the database.
Only nodes, sensors, and targets are supported. All data attributes are passed through command-line arguments.
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--alt` | | – | Node, sensor, or target altitude (optional). |
| `--create` | `-C` | – | Create record of given type (`node`, `sensor`, or `target`). |
| `--database` | `-d` | – | Path to SQLite observation database (required). |
| `--delete` | `-D` | – | Delete record of given type (`node`, `sensor`, or `target`). |
| `--easting` | | – | Node, sensor, or target easting (optional). |
| `--help` | | – | Output available command-line arguments and quit. |
| `--id` | | – | Node, sensor, or target id (required). |
| `--meta` | | – | Node, sensor, or target meta description (optional). |
| `--name` | | – | Node, sensor, or target name. |
| `--node` | | – | Id of node the sensor is associated with. |
| `--northing` | | – | Node, sensor, or target northing (optional). |
| `--read` | `-R` | – | Read record of given type (`node`, `sensor`, or `target`). |
| `--sn` | | – | Serial number of sensor (optional). |
| `--state` | | – | Target state (optional). |
| `--type` | | | Sensor type (for example, `virtual`). |
| `--update` | | – | Update record of given type (`node`, `sensor`, or `target`). |
| `--verbose` | | off | Print additional log messages to stderr. |
| `--version` | | – | Output version information and quit. |
Examples
Add node, sensor, and target to observation database:
$ dmdbctl -d observ.sqlite -C node --id node-1 --name "Node 1"
$ dmdbctl -d observ.sqlite -C sensor --id sensor-1 --name "Sensor 1" --node node-1
$ dmdbctl -d observ.sqlite -C target --id target-1 --name "Target 1"
Delete a target from the database:
$ dmdbctl -d observ.sqlite -D target --id target-1
Read attributes of sensor sensor-1:

$ dmdbctl -d observ.sqlite -R sensor --id sensor-1
sensor.id: sensor-1
sensor.node_id: node-1
sensor.type: virtual
sensor.name: Sensor 1
sensor.sn: 12345
sensor.meta: dummy sensor
sensor.x: 0.000000000000
sensor.y: 0.000000000000
sensor.z: 0.000000000000
dmexport
The dmexport program writes beats, logs, nodes, sensors, targets, observations, and data points from database to file, in ASCII block, CSV, JSON, or JSON Lines format. The ASCII block format is only available for X/Y data points. The types data point, log, and observation require a sensor id, a target id, and a time range in ISO 8601 format.
If no output file is given, the data is printed to standard output. The output file will be overwritten if it already exists. If no records are found, an empty file will be created.
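The difference between the JSON and JSON Lines outputs can be illustrated in Python (record content is made up):

```python
import json

# Two hypothetical exported records.
records = [{"node_id": "dummy-node", "value": 1.0},
           {"node_id": "dummy-node", "value": 2.0}]

# JSON: a single array holding all records.
print(json.dumps(records))

# JSON Lines: one self-contained JSON object per line, convenient for
# appending to a file and for streaming large exports.
print("\n".join(json.dumps(r) for r in records))
```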
| Format | Supported Types | Description |
|---|---|---|
| `block` | data points | ASCII block format. |
| `csv` | | CSV format. |
| `json` | | JSON format. |
| `jsonl` | | JSON Lines format. |
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--database` | | – | Path to SQLite database (required). |
| `--format` | | – | Output file format (`block`, `csv`, `json`, or `jsonl`). |
| `--from` | | – | Start of time range in ISO 8601 (required for data point, log, and observation exports). |
| `--header` | | off | Add CSV header. |
| `--help` | | – | Output available command-line arguments and quit. |
| `--node` | | – | Node id (required). |
| `--output` | | – | Path of output file. |
| `--response` | | – | Response name for data point export. |
| `--sensor` | | – | Sensor id (required for data point, log, and observation exports). |
| `--separator` | | | CSV field separator. |
| `--target` | | – | Target id (required for data point, log, and observation exports). |
| `--to` | | – | End of time range in ISO 8601 (required for data point, log, and observation exports). |
| `--type` | | – | Type of record to export: beat, data point, log, node, observation, sensor, or target. |
| `--version` | | – | Output version information and quit. |
Examples
Export log messages from database to JSON file:
$ dmexport --database log.sqlite --type log --format json --node dummy-node \
  --from 2020-01-01 --to 2023-01-01 --output /tmp/log.json
Export observations from database to CSV file:
$ dmexport --database observ.sqlite --type observ --format csv --node dummy-node \
  --sensor dummy-sensor --target dummy-target --from 2020-01-01 --to 2025-01-01 \
  --output /tmp/observ.csv
dmfeed
The dmfeed program creates a web feed from log messages in Atom Syndication Format. The log messages are read from database and written as XML to standard output or file.
The feed id has to be a 36-character UUID with hyphens. News aggregators use the id to identify the feed. Therefore, the id should not be reused among different feeds. Run dmuuid to generate a valid UUID4.
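If dmuuid is not at hand, any UUID4 generator produces an id of the required shape; for instance, in Python:

```python
import uuid

# Generate a random UUID4 in the canonical 36-character form with hyphens,
# equivalent in shape to the output of dmuuid.
feed_id = str(uuid.uuid4())
print(feed_id)
print(len(feed_id))  # 36
```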
The time stamp of the feed in element updated is set to the date and time of the last log message. If no logs have been added to the database since the last file modification of the feed, the output file is not updated, unless argument --force is passed. To update the feed periodically, add dmfeed to crontab.
If an XSLT style sheet is given, web browsers may be able to display the Atom feed in HTML format. Set the option to the (relative) path of the public XSL on the web server. An example style sheet feed.xsl is located in /usr/local/share/dmpack/.
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--author` | | – | Name of feed author or organisation. |
| `--config` | | – | Path to configuration file. |
| `--database` | | – | Path to SQLite log database. |
| `--email` | | – | E-mail address of feed author (optional). |
| `--entries` | | 50 | Maximum number of entries in feed (max. 500). |
| `--force` | | – | Force file output even if no new log records are available. |
| `--help` | | – | Output available command-line arguments and quit. |
| `--id` | | – | UUID of the feed, 36 characters long with hyphens. |
| `--maxlevel` | | 5 | Select log messages of the given maximum log level (between 1 and 5). Must be greater than or equal to the minimum level. |
| `--minlevel` | | 1 | Select log messages of the given minimum log level (between 1 and 5). |
| `--name` | | `dmfeed` | Name of instance and table in given configuration file. |
| `--node` | | – | Select log messages of the given node id. |
| `--output` | | stdout | Path of the output file. If empty or `-`, the feed is written to standard output. |
| `--subtitle` | | – | Sub-title of feed. |
| `--title` | | – | Title of feed. |
| `--url` | | – | Public URL of the feed. |
| `--version` | | – | Output version information and quit. |
| `--xsl` | | – | Path to XSLT style sheet. |
Examples
First, generate a unique feed id:
$ dmuuid --hyphens
19c12109-3e1c-422c-ae36-3ba19281f2e

Then, write the last 50 log messages in Atom format to file feed.xml, and include a link to the XSLT style sheet feed.xsl:
$ dmfeed --database /var/dmpack/log.sqlite --output /var/www/feed.xml \
  --id 19c12109-3e1c-422c-ae36-3ba19281f2e --xsl feed.xsl
Copy the XSLT style sheet to the directory of the Atom feed:
$ cp /usr/local/share/dmpack/feed.xsl /var/www/
If /var/www/ is served by a web server, feed readers can subscribe to the feed. Additionally, we may translate feed and style sheet into a single HTML document feed.html, using an arbitrary XSLT processor, for instance:
$ xsltproc --output feed.html /var/www/feed.xsl /var/www/feed.xml
dmfs
The dmfs program reads observations from file system, virtual file, or named pipe. The program can be used to read sensor data from the 1-Wire File System (OWFS).
If any receivers are specified, observations are forwarded to the next receiver via POSIX message queue. dmfs can act as a sole data logger if output and format are set. If the output path is set to -, observations are written to stdout instead of file.

The requests of each observation have to contain the path of the (virtual) file in attribute request. Response values are extracted by named group from the raw response using the given regular expression pattern. Afterwards, the observation is forwarded to the next receiver via POSIX message queue.
A configuration file is mandatory to describe the jobs to perform. Each observation must have a valid target id. Node, sensor, and target have to be present in the database.
Command-Line Options
| Option | Short | Default | Description |
|---|---|---|---|
| `--config` | | – | Path to configuration file (required). |
| `--debug` | | off | Forward log messages of level debug via IPC (if logger is set). |
| `--format` | | – | Output format, either `csv` or `jsonl`. |
| `--help` | | – | Output available command-line arguments and quit. |
| `--logger` | | – | Optional name of logger. If set, sends logs to dmlogger process of given name. |
| `--name` | | `dmfs` | Name of instance and table in given configuration file. |
| `--node` | | – | Node id. |
| `--output` | | – | Output file to append observations to (`-` for stdout). |
| `--sensor` | | – | Sensor id. |
| `--verbose` | | off | Print log messages to stderr. |
| `--version` | | – | Output version information and quit. |
Examples
First, install the 1-Wire file system package. On FreeBSD, run:
# pkg install comms/owfs
On Linux, install the package instead with:
# apt install owfs
Connect a 1-Wire temperature sensor through USB (device /dev/ttyU0), and mount the 1-Wire file system with owfs(1) under /mnt/1wire/:

# mkdir -p /mnt/1wire
# owfs -C -d /dev/ttyU0 --allow_other -m /mnt/1wire/

On Linux, the path to the USB adapter slightly differs:

# owfs -C -d /dev/ttyUSB0 --allow_other -m /mnt/1wire/

The command-line argument -C selects output in °C. The settings can be added to the owfs(1) configuration file, usually /usr/local/etc/owfs.conf or /etc/owfs.conf:

device = /dev/ttyU0
mountpoint = /mnt/1wire
allow_other
Celsius
The file system is mounted automatically at system start-up if owfs(1) is configured to run as a service.
Reading a temperature value from the connected sensor:
$ cat /mnt/1wire/10.DCA98C020800/temperature
19.12
Then, initialise the observation and log databases:
$ cd /var/dmpack/
$ dminit --type observ --database observ.sqlite --wal
$ dminit --type log --database log.sqlite --wal
Create node node-1, sensor sensor-1, and target target-1 in database /var/dmpack/observ.sqlite through dmweb or dmdbctl:
$ dmdbctl -d observ.sqlite -C node --id node-1 --name "Node 1"
$ dmdbctl -d observ.sqlite -C sensor --id sensor-1 --name "Sensor 1" --node node-1
$ dmdbctl -d observ.sqlite -C target --id target-1 --name "Target 1"
Set the program settings in configuration file /usr/local/etc/dmpack/dmfs.conf:
-- dmfs.conf
dmfs = {
logger = "dmlogger", -- Logger to send logs to.
node = "node-1", -- Node id (required).
sensor = "sensor-1", -- Sensor id (required).
output = "", -- Path to output file, or `-` for stdout.
format = "none", -- Output format (`csv` or `jsonl`).
jobs = { -- List of jobs to perform.
{
disabled = false, -- Enable to ignore job.
onetime = false, -- Run job only once.
observation = { -- Observation to execute (required).
name = "observ-1", -- Observation name (required).
target_id = "target-1", -- Target id (required).
receivers = { "dmdb" }, -- List of receivers (up to 16).
requests = { -- List of files to read.
{
request = "/mnt/1wire/10.DCA98C020800/temperature", -- File path.
pattern = "(?<temp>[-+0-9\\.]+)", -- RegEx pattern.
delay = 500, -- Delay in milliseconds.
responses = {
{
name = "temp", -- RegEx group name (max. 8 characters).
unit = "degC", -- Response unit (max. 8 characters).
type = RESPONSE_TYPE_REAL64 -- Response value type.
}
}
}
}
},
delay = 10 * 1000, -- Delay in milliseconds to wait afterwards.
}
},
debug = false, -- Forward logs of level DEBUG via IPC.
verbose = true -- Print messages to standard output.
}
Log messages will be sent to logger dmlogger, observations to receiver dmdb.
Start the logger process:
$ dmlogger --name dmlogger --database /var/dmpack/log.sqlite
Start the database process:
$ dmdb --name dmdb --database /var/dmpack/observ.sqlite --node node-1 --logger dmlogger
Start dmfs to execute the configured job:
$ dmfs --name dmfs --config /usr/local/etc/dmpack/dmfs.conf
dmgrc
The dmgrc program creates log messages from Leica GeoCOM return codes.
Observations received by POSIX message queue are searched for a GeoCOM return code (GRC) response. If the code does not equal GRC_OK, a log message is sent to the configured logger instance.
By default, observation responses of name grc are verified. For each GeoCOM error code, a custom log level may be specified in the configuration file. Otherwise, the default log level is used instead.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file (required). |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Output available command-line arguments and quit. |
|
|
3 |
Default level of log messages, between 1 and 5. |
|
|
– |
Name of dmlogger process to send logs to. |
|
|
|
Name of instance and table in given configuration file. |
|
|
– |
Node id. |
|
|
|
Response name of the GeoCOM return code. |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
A configuration file is not required but allows specifying the log level of certain GeoCOM return codes. In the following example configuration, the default log level for all return codes other than GRC_OK is set to LL_WARNING. The level is further refined for specific GeoCOM codes:
-- dmgrc.conf
dmgrc = {
logger = "dmlogger",
node = "dummy-node",
response = "grc",
level = LL_WARNING,
levels = {
debug = { GRC_ABORT, GRC_SHUT_DOWN, GRC_NO_EVENT },
info = { GRC_SLEEP_NODE, GRC_NA, GRC_STOPPED },
warning = { GRC_TMC_ACCURACY_GUARANTEE, GRC_AUT_NO_TARGET },
error = { GRC_FATAL },
critical = {}
},
debug = false,
verbose = true
}
See section GeoCOM API for a table of all supported return codes. Pass the path of the configuration file through the command-line argument:
$ dmgrc --name dmgrc --config /usr/local/etc/dmpack/dmgrc.conf
The name argument must match the name of the configuration table. A logger process of name dmlogger must be running to process the generated log messages.
dminfo
The dminfo utility program prints build, database, and system information to standard output. The path to the beat, log, or observation database is passed through command-line argument --database. Only one database can be specified.
The output contains compiler version and options; database PRAGMAs, tables, and number of rows; as well as system name, version, and host name.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to SQLite database. |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Output version information and quit. |
Examples
Print build, database, and system information:
$ dminfo --database /var/dmpack/observ.sqlite
build.compiler: GCC version 13.1.0
build.options: -mtune=generic -march=x86-64 -std=f2018
db.application_id: 444D31
db.foreign_keys: T
db.journal_mode: wal
db.path: /var/dmpack/observ.sqlite
db.size: 286720
db.table.beats: F
db.table.beats.rows: 0
...
dmimport
The dmimport program reads logs, nodes, sensors, targets, and observations in CSV format from file and imports them into the database. The database inserts are transaction-based. If an error occurs, the transaction is rolled back, and no records are written to the database at all.
The database has to be a valid DMPACK database and must contain the tables required for the input records. The nodes, sensors, and targets referenced by input observations must exist in the database. The nodes referenced by input sensors must exist as well.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to SQLite database (required, unless in dry mode). |
|
|
off |
Dry mode. Reads and validates records from file but skips database import. |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Path to input file in CSV format (required). |
|
|
– |
CSV quote character. |
|
|
|
CSV field separator. |
|
|
– |
Type of record to import: |
|
|
off |
Print progress to stdout. |
|
|
– |
Output version information and quit. |
Examples
Import observations from CSV file observ.csv into database observ.sqlite:
$ dmimport --type observ --input observ.csv --database observ.sqlite --verbose
dminit
The dminit utility program creates beat, log, and observation databases. No action is performed if the specified database already exists. A synchronisation table is required for observation and log synchronisation with a dmapi server. The argument can be omitted if this feature is not needed. The journal mode Write-Ahead Logging (WAL) should be enabled for databases with multiple readers.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path of the new SQLite database. |
|
|
– |
Output available command-line arguments and quit. |
|
|
off |
Add synchronisation tables. Enable for data synchronisation between client and server. |
|
|
– |
Type of database, either |
|
|
– |
Output version information and quit. |
|
|
off |
Enable journal mode Write-Ahead Logging (WAL). |
Examples
Create an observation database with remote synchronisation tables (WAL):
$ dminit --database /var/dmpack/observ.sqlite --type observ --sync --wal
Create a log database with remote synchronisation tables (WAL):
$ dminit --database /var/dmpack/log.sqlite --type log --sync --wal
Create a heartbeat database (WAL):
$ dminit --database /var/dmpack/beat.sqlite --type beat --wal
dmlog
The dmlog utility forwards a log message to the message queue of a dmlogger instance. The argument --message is mandatory. The default log level is info. Pass the name of the dmlogger instance through command-line argument --logger. The program terminates after log transmission.
The following log levels are accepted:
Level | Name |
---|---|
1 |
debug |
2 |
info |
3 |
warning |
4 |
error |
5 |
critical |
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
0 |
DMPACK error code (optional). |
|
|
– |
Output available command-line arguments and quit. |
|
|
2 |
Log level, from 1 to 5 (level or name). |
|
|
|
Name of logger instance and POSIX message queue. |
|
|
– |
Log message (max. 512 characters). |
|
|
– |
Node id (optional). |
|
|
– |
Observation id (optional). |
|
|
– |
Sensor id (optional). |
|
|
– |
Source of the log message (optional). |
|
|
– |
Target id (optional). |
|
|
off |
Print log to stderr. |
|
|
– |
Output version information and quit. |
Examples
Send a log message to the message queue of logger dmlogger:
$ dmlog --level warning --message "low battery" --source dmlog --verbose
2022-12-09T22:50:44.161000+01:00 [WARNING] dmlog - low battery
The dmlogger process will receive the log message in real-time and store it in the log database (if the log level is ≥ the configured minimum log level):
$ dmlogger --node dummy-node --database /var/dmpack/log.sqlite --verbose
2022-12-09T22:50:44.161000+01:00 [WARNING] dmlog - low battery
dmlogger
The dmlogger program collects log messages from a POSIX message queue and writes them to a SQLite database. The name of the message queue will equal the given dmlogger name with leading /, by default /dmlogger.
If a minimum log level is selected, only logs of a level greater than or equal to the minimum are stored in the database. Log messages with a lower level are printed to standard output before being discarded (only if the verbose flag is enabled).
The IPC option enables optional process synchronisation via a named POSIX semaphore. The value of the semaphore is changed from 0 to 1 whenever a new log is received. The name of the semaphore will equal the dmlogger name with leading /. Only a single process should wait for the semaphore unless round-robin passing is desired. This feature may be used to automatically synchronise incoming log messages with a remote HTTP-RPC API server: dmsync will wait for new logs before starting synchronisation if the dmlogger instance name has been passed through command-line argument --wait.
The following log levels are accepted:
Level | Name |
---|---|
1 |
debug |
2 |
info |
3 |
warning |
4 |
error |
5 |
critical |
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file. |
|
|
– |
Path to SQLite log database. |
|
|
– |
Output available command-line arguments and quit. |
|
|
off |
Use POSIX semaphore for process synchronisation. The name of the semaphore matches the instance name (with leading slash). The semaphore is set to 1 whenever a new log message was received. Only a single process may wait for this semaphore. |
|
|
3 |
Minimum level for a log to be stored in the database, from 1 to 5. |
|
|
|
Name of logger instance, configuration, POSIX message queue, and POSIX semaphore. |
|
|
– |
Node id. |
|
|
off |
Print received logs to stderr. |
|
|
– |
Output version information and quit. |
Examples
Create a message queue /dmlogger, wait for incoming logs, and store them in the given database if logs are of level 4 (ERROR) or higher:
$ dmlogger --node dummy-node --database log.sqlite --minlevel 4
Push semaphore /dmlogger each time a log has been received:
$ dmlogger --node dummy-node --database log.sqlite --ipc
Let dmsync wait for semaphore /dmlogger before synchronising the log database with host 192.168.1.100, then repeat:
$ dmsync --type log --database log.sqlite --host 192.168.1.100 --wait dmlogger
dmlua
The dmlua program runs a custom Lua script to process observations received from message queue. Each observation is passed as a Lua table to the function of the name given in option procedure. If the option is not set, function name process is assumed by default. The Lua function must return the (modified) observation table on exit.
The observation returned from the Lua function is forwarded to the next receiver specified in the receivers list of the observation. If no receivers are left, the observation will be discarded.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file (optional). |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Optional name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of instance and table in given configuration file. |
|
|
– |
Node id. |
|
|
|
Name of Lua function to call. |
|
|
– |
Path to Lua script to run. |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
The following Lua script script.lua just prints observation table observ to standard output, before returning it to dmlua unmodified:
-- script.lua
function process(observ)
print(dump(observ))
return observ
end
function dump(o)
if type(o) == 'table' then
local s = '{\n'
for k, v in pairs(o) do
if type(k) ~= 'number' then k = '"' .. k .. '"' end
s = s .. '[' .. k .. '] = ' .. dump(v) .. ',\n'
end
return s .. '}'
else
return tostring(o)
end
end
Any observation sent to receiver dmlua will be passed to the Lua function process() in script.lua, then forwarded to the next receiver (if any):
$ dmlua --name dmlua --node dummy-node --script script.lua --verbose
dmpipe
The dmpipe program reads responses from processes connected via pipe. All requests of an observation have to contain the process in attribute request. Response values are extracted by group from the raw response using the given regular expression pattern.
If any receivers are specified, observations are forwarded to the next receiver via POSIX message queue. The program can act as a sole data logger if output and format are set. If the output path is set to -, observations are printed to stdout.
A configuration file is mandatory to configure the jobs to perform. Each observation must have a valid target id. Node id, sensor id, and observation id are added by dmpipe. Node, sensor, and target have to be present in the database for the observation to be stored.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file (required). |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Output format, either |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Optional name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of instance and table in given configuration file. |
|
|
– |
Node id. |
|
|
– |
Output file to append observations to ( |
|
|
– |
Sensor id. |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
The example reads the remaining battery life returned by the sysctl(8) tool (available on FreeBSD):
$ sysctl hw.acpi.battery.life
hw.acpi.battery.life: 100
On Linux, the battery life can be read with dmfs from /sys/class/power_supply/BAT0/capacity instead.
The regular expression pattern describes the response and defines the group battery for extraction. The name of one of the responses in the responses table must equal the group name. The observation will be forwarded to the message queue of a dmdb process. Backslash characters in the string values have to be escaped with \.
-- dmpipe.conf
dmpipe = {
logger = "dmlogger", -- Logger to send logs to.
node = "dummy-node", -- Node id (required).
sensor = "dummy-sensor", -- Sensor id (required).
output = "", -- Path to output file, `-` for stdout.
format = "none", -- Output format (`csv` or `jsonl`).
jobs = { -- Jobs to perform.
{
disabled = false, -- Enable to ignore job.
onetime = false, -- Run job only once.
observation = { -- Observation to execute.
name = "dummy-observ", -- Observation name (required).
target_id = "dummy-target", -- Target id (required).
receivers = { "dmdb" }, -- List of receivers (up to 16).
requests = { -- Pipes to open.
{
request = "sysctl hw.acpi.battery.life", -- Command to execute.
pattern = "hw\\.acpi\\.battery\\.life: (?<battery>[0-9]+)", -- RegEx.
delay = 0, -- Delay in milliseconds.
responses = {
{
name = "battery", -- RegEx group name (max. 8 characters).
unit = "%", -- Response unit (max. 8 characters).
type = RESPONSE_TYPE_REAL64 -- Response value type.
}
}
}
}
},
delay = 60 * 1000, -- Delay in milliseconds to wait afterwards.
}
},
debug = false, -- Forward logs of level DEBUG via IPC.
verbose = true -- Print messages to standard output.
}
Pass the path of the configuration file to dmpipe:
$ dmpipe --name dmpipe --config /usr/local/etc/dmpipe.conf
The result returned by sysctl(8) will be formatted according to the current locale (decimal separator). You may have to change the locale first to match the regular expression pattern:
$ export LANG=C
$ dmpipe --name dmpipe --config /usr/local/etc/dmpipe.conf
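The extraction pattern can also be tried in the shell before it goes into the configuration file. A minimal sketch, assuming GNU grep with PCRE support (the \K escape and the sample value are illustrative, not part of DMPACK):

```shell
# Extract the battery value from a sample sysctl(8) line.
# \K discards the matched prefix, so only the number is printed.
echo 'hw.acpi.battery.life: 100' | grep -oP 'hw\.acpi\.battery\.life: \K[0-9]+'
# → 100
```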
dmplot
The dmplot program is a front-end to gnuplot(1) that creates plots of observations read from database. Plots are either written to file or displayed in terminal or X11 window.
Depending on the selected terminal back-end, you may have to set the environment variable GDFONTPATH to the local font directory first:
$ export GDFONTPATH="/usr/local/share/fonts/webfonts/"
If gnuplot(1) is installed under a name other than gnuplot, for example gnuplot-nox, an alias has to be added to the global profile:
alias gnuplot="gnuplot-nox"
The output file is ignored when using the terminals sixelgd and x11. Plotting parameters passed via command-line have priority over those from configuration file.
Terminal | Description |
---|---|
|
ASCII format, in ANSI colours. |
|
ASCII format. |
|
GIF format (libgd). |
|
PNG format (libgd). |
|
PNG format (libcairo), created from vector graphics. |
|
Sixel format (libgd), originally for DEC terminals. |
|
W3C Scalable Vector Graphics (SVG) format. |
|
Persistent X11 window (libX11). |
Descriptor | Description (Format) |
---|---|
|
year (YYYY) |
|
month (MM) |
|
day of month (DD) |
|
hour (hh) |
|
minute (mm) |
|
second (ss) |
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Background colour (for example, |
|
|
– |
Path to configuration file. |
|
|
– |
Path to SQLite observation database. |
|
|
– |
Font name or file path (for example, |
|
|
|
Foreground colour (for example, |
|
|
– |
Start of time range in ISO 8601. |
|
|
400 |
Plot height. |
|
|
– |
Output available command-line arguments and quit. |
|
|
|
Name of table in configuration file. |
|
|
– |
Node id. |
|
|
– |
File path of plot image. May include format descriptors. |
|
|
– |
Response name. |
|
|
– |
Sensor id. |
|
|
– |
Target id. |
|
|
– |
|
|
|
– |
Plot title. |
|
|
– |
End of time range in ISO 8601. |
|
|
– |
Output version information and quit. |
|
|
1000 |
Plot width. |
Examples
Create a plot of observations selected from database observ.sqlite in PNG format, and write the file to /tmp/plot.png:
$ dmplot --node dummy-node --sensor dummy-sensor --target dummy-target \
  --response dummy --from 2020 --to 2024 --database observ.sqlite \
  --terminal pngcairo --output /tmp/plot.png
Output the plot directly to terminal, using the configuration in dmplot.conf:
$ dmplot --name dmplot --config dmplot.conf --terminal sixelgd
The sixelgd format requires a terminal emulator with Sixel support, such as xterm(1) or mlterm(1).
dmrecv
The dmrecv program listens to the POSIX message queue of its name and writes received logs or observations to stdout, file, or named pipe; in CSV, JSON Lines, or Namelist format. By default, the serialised data is appended to the end of the output file. If argument --replace is passed, the file will be replaced consecutively.
Received observations are not forwarded to the next specified receiver unless argument --forward is set. If no receivers are defined or left, the observation will be discarded after output.
The output format block is only available for observation data and requires a response name to be set. Observations will be searched for this response name and converted to data point type if found. The data point is printed in ASCII block format.
If the JSON Lines output format is selected, logs and observations are written as JSON objects to file or stdout, separated by new line (\n). Use jq(1) to convert records in JSON Lines file input.jsonl into a valid JSON array in output.json:
$ jq -s '.' input.jsonl > output.json
The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file.
Format | Type | Description |
---|---|---|
|
|
ASCII block format (time stamp and response value). |
|
|
CSV format. |
|
|
JSON Lines format. |
|
|
Fortran 95 Namelist format. |
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file. |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Output format ( |
|
|
off |
Forward observations to the next specified receiver. |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Optional name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of table in configuration file and POSIX message queue to subscribe to. |
|
|
– |
Optional node id. |
|
|
stdout |
Output file to append observations to ( |
|
|
off |
Replace output file instead of appending data. |
|
|
– |
Name of observation response to output (required for format |
|
|
– |
Data type to receive: |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
Write log messages received from POSIX message queue /dmrecv to file /tmp/logs.csv in CSV format:
$ dmrecv --name dmrecv --type log --format csv --output /tmp/logs.csv
Output observations in JSON Lines format to stdout:
$ dmrecv --name dmrecv --type observ --format jsonl
Write the observations serialised in JSON Lines format to named pipe /tmp/dmrecv_pipe:
$ mkfifo /tmp/dmrecv_pipe
$ dmrecv --name dmrecv --type observ --format jsonl --output /tmp/dmrecv_pipe
Another process can now read the observations from /tmp/dmrecv_pipe:
$ tail -f /tmp/dmrecv_pipe
dmreport
The dmreport program creates reports in HTML5 format, containing plots of observations and/or log messages selected from database. Plots are created by calling gnuplot(1) and inlining the returned image (GIF, PNG, SVG) as a base64-encoded data URI. Any style sheet file with classless CSS can be included to alter the presentation of the report. The output of dmreport is a single HTML file.
Depending on the selected plot format, the environment variable GDFONTPATH may have to be set to the local font directory first:
$ export GDFONTPATH="/usr/local/share/fonts/webfonts/"
If gnuplot(1) is installed under a name other than gnuplot, for example gnuplot-nox, an alias has to be added to the global profile:
alias gnuplot="gnuplot-nox"
A configuration file is mandatory to create reports. Only a few parameters can be set through command-line arguments. Passed command-line arguments have priority over settings in the configuration file.
Descriptor | Description (Format) |
---|---|
|
year (YYYY) |
|
month (MM) |
|
day of month (DD) |
|
hour (hh) |
|
minute (mm) |
|
second (ss) |
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file (required). |
|
|
– |
Start of time range in ISO 8601. |
|
|
– |
Output available command-line arguments and quit. |
|
|
|
Name of program instance and configuration. |
|
|
– |
Sensor node id. |
|
|
– |
Path of the HTML output file. May include format descriptors. |
|
|
– |
Path to the CSS file to inline. |
|
|
– |
End of time range in ISO 8601. |
|
|
– |
Output version information and quit. |
Examples
The settings are stored in Lua table dmreport in the configuration file. The observations are read from database observ.sqlite, the log messages from log.sqlite.
-- dmreport.conf
dmreport = {
node = "dummy-node",
from = "1970-01-01T00:00:00.000000+00:00",
to = "2070-01-01T00:00:00.000000+00:00",
output = "%Y-%M-%D_dummy-report.html",
style = "/usr/local/share/dmpack/dmreport.min.css",
title = "Monitoring Report",
subtitle = "Project",
meta = "",
plots = {
disabled = false, -- Disable plots.
database = "observ.sqlite", -- Path to observation database.
title = "Plots", -- Overwrite default heading.
meta = "", -- Optional description.
observations = { -- List of plots to generate.
{
sensor = "dummy-sensor", -- Sensor id (required).
target = "dummy-target", -- Target id (required).
response = "tz0", -- Response name (required).
unit = "deg C", -- Response unit.
format = "svg", -- Plot format (gif, png, pngcairo, svg).
title = "Temperature", -- Plot title.
subtitle = "tz0", -- Plot sub-title.
meta = "", -- Optional description.
color = "#ff0000", -- Graph colour.
width = 1000, -- Plot width.
height = 300, -- Plot height.
}
}
},
logs = {
disabled = false, -- Disable logs.
database = "log.sqlite", -- Path to log database.
minlevel = LL_WARNING, -- Minimum log level (default: LL_WARNING).
maxlevel = LL_CRITICAL, -- Maximum log level (default: LL_CRITICAL).
title = "Logs", -- Overwrite default heading.
meta = "", -- Optional description.
}
}
Write a report to file report.html based on settings in dmreport.conf:
$ dmreport --name dmreport --config dmreport.conf --output report.html
The command-line arguments overwrite the settings of the configuration file. In order to create monthly reports, we may customise the shell script /usr/local/share/dmpack/mkreport.sh to determine the timestamps of the last and the current month, which are then passed to dmreport. Modify the script mkreport.sh to your set-up:
dmreport="/usr/local/bin/dmreport"
name="dmreport"
config="/usr/local/etc/dmpack/dmreport.conf"
output="/var/www/reports/"
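The month boundaries the script has to determine can be sketched with date(1). This sketch assumes GNU date (BSD date uses -v flags for date arithmetic instead); the variable names are illustrative:

```shell
# First day of the current month, as the date part of an ISO 8601 timestamp.
this_month=$(date +%Y-%m-01)
# First day of the previous month, derived from it.
last_month=$(date -d "$this_month -1 month" +%Y-%m-01)
echo "${last_month}T00:00:00"
echo "${this_month}T00:00:00"
```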
Executing the shell script creates two reports, one for time series of the previous month (in case some observations have arrived late), and one for those of the current month, for example:
$ sh /usr/local/share/dmpack/mkreport.sh
--- Writing report of 2023-08 to file /var/www/reports/2023-08_report.html ...
--- Writing report of 2023-09 to file /var/www/reports/2023-09_report.html ...
To run the report generation periodically, simply add the script to your crontab.
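For example, a crontab(5) entry that runs the script daily at 06:00 might look as follows (the schedule is illustrative):

```
# Run the DMPACK report script every day at 06:00.
0 6 * * * /bin/sh /usr/local/share/dmpack/mkreport.sh
```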
dmsend
The dmsend program reads observations or logs in CSV or Fortran 95 Namelist format, and sends them sequentially to the POSIX message queue of the given receiver. The data is either read from file or from standard input. If the input data is of type observ and the argument --forward is passed, each observation will be sent to its next specified receiver in the receivers list. If no receivers are declared, or if the end of the receivers list is reached, the observation will not be forwarded.
The program settings are passed through command-line arguments or an optional configuration file. The arguments overwrite settings from file.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file. |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Input format: |
|
|
stdin |
Path to input file (empty or |
|
|
off |
Forward observations to the next specified receiver. |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Optional name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of instance and table in configuration file. |
|
|
– |
Optional node id. |
|
|
– |
Name of receiver/message queue. |
|
|
– |
Input data type: |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
Read observation from Namelist file observ.nml and send it to the next specified receiver:
$ dmsend --type observ --format nml --input observ.nml --forward
Send logs in CSV file logs.csv sequentially to process dmrecv:
$ dmsend --receiver dmrecv --type log --format csv --input logs.csv
dmserial
The dmserial program sends requests to a sensor or actor connected via USB/RS-232/RS-422/RS-485. Sensor commands and responses are sent/received through a teletype (TTY) device provided by the operating system. A pseudo-terminal (PTY) may be used to connect a virtual sensor.
Each request of an observation must contain the raw request intended for the sensor in attribute request. Response values are extracted by group from the raw response using the given regular expression pattern. Each group name must match a response name. Response names are limited to eight characters.
Observations will be forwarded to the next receiver via POSIX message queue if any receiver is specified. The program can act as a sole data logger if output and format are set. If the output path is set to -, observations are printed to stdout, else to file.
A configuration file is required to configure the jobs to perform. Each observation must have a valid target id. The database must contain the specified node, sensor, and targets. Parameters and functions of the Lua API may be used in the configuration file.
The following baud rates are supported: 50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
9600 |
Number of symbols transmitted per second (4800, 9600, 115200, …). |
|
|
8 |
Byte size (5, 6, 7, 8). |
|
|
– |
Path to configuration file (required). |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
off |
Enable Data Terminal Ready (DTR). |
|
|
– |
Output format, either |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
Optional name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of instance and table in given configuration file. |
|
|
– |
Node id. |
|
|
– |
Output file to append observations to ( |
|
|
|
Parity bits ( |
|
|
off |
Enable Request To Send (RTS). |
|
|
– |
Sensor id. |
|
|
1 |
Number of stop bits (1, 2). |
|
|
0 |
Connection timeout in seconds (max. 25). |
|
|
– |
Path to TTY/PTY device (for example, |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
Examples
Read the jobs to perform from configuration file and execute them sequentially:
$ dmserial --name dmserial --config /usr/local/etc/dmpack/dmserial.conf --verbose
dmsync
The dmsync program synchronises logs, nodes, observations, sensors, and targets from local database concurrently with a remote dmapi server. The synchronisation may be started only once if no interval is set (to transfer nodes, sensors, and targets from client to server), periodically as a cron job, or by waiting for a POSIX semaphore.
The nodes, sensors, and targets referenced by observations in the local database must also exist in the remote server database. They can be created either with dmdbctl or dmweb, but also synchronised with dmsync. Logs and targets do not require any additional database entries on server-side.
The client databases must contain synchronisation tables. The tables are created automatically by dminit if command-line argument --sync is passed. Alternatively, start dmsync with argument --create once.
If the RPC server uses HTTP Basic Auth for authentication, the RPC user name must match the node id of the transmitted node, sensor, observation, log, or beat record. Otherwise, the server will reject the record and return HTTP 401 (Unauthorized).
The database records are sent in compressed Fortran 95 Namelist format via HTTP to the server. The program uses libcurl for data transfer. The accessed RPC API endpoints are expected under URL [http|https]://<host>:<port>/api/v1/<endpoint>.
The result of each synchronisation attempt is stored in the local database. Records are marked as synchronised only if the server returns HTTP 201 (Created).
Passing the server credentials via the command-line arguments --username and --password is insecure on multi-user operating systems and only recommended for testing.
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
|
|
– |
Path to configuration file. |
|
|
off |
Create database synchronisation tables if they do not exist. |
|
|
– |
Path to log or observation database, depending on |
|
|
off |
Forward log messages of level debug via IPC (if logger is set). |
|
|
– |
Output available command-line arguments and quit. |
|
|
– |
IP address or FQDN of HTTP-RPC host (for instance, |
|
|
60 |
Synchronisation interval in seconds. If |
|
|
– |
Name of logger. If set, sends logs to dmlogger process of given name. |
|
|
|
Name of program instance and configuration. |
|
|
– |
Node id, required for types |
|
|
– |
HTTP-RPC API password. |
|
|
0 |
Port of HTTP-RPC API server (set to |
|
|
off |
Use TLS-encrypted connection. |
|
|
– |
Type of data to synchronise, either |
|
|
– |
HTTP-RPC API user name. If set, implies HTTP Basic Auth. |
|
|
off |
Print log messages to stderr. |
|
|
– |
Output version information and quit. |
|
|
– |
Name of POSIX semaphore to wait for. Synchronises databases if semaphore is > 0. |
Examples
Synchronise nodes, sensors, and targets in the local observation database with an HTTP-RPC server (without authentication):
$ dmsync --database observ.sqlite --type node --host 192.168.1.100
$ dmsync --database observ.sqlite --type sensor --node dummy-node --host 192.168.1.100
$ dmsync --database observ.sqlite --type target --host 192.168.1.100
Synchronise observations:
$ dmsync --database observ.sqlite --type observ --host 192.168.1.100
Synchronise log messages:
$ dmsync --database log.sqlite --type log --host 192.168.1.100
dmuuid
The dmuuid program is a command-line tool to generate pseudo-random UUID4s. By default, DMPACK uses 32 characters long UUID4s in hexadecimal format (without hyphens). Hyphens can be added by a command-line flag. The option --convert expects UUID4s to be passed via standard input. Invalid UUID4s will be replaced with the default UUID4.
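The exact validation dmuuid applies is not spelled out here, but a check equivalent to "is a 32-character lowercase hexadecimal string" can be sketched in the shell. The function name is made up for illustration:

```shell
# Return success if the argument looks like a 32-character
# hexadecimal UUID4 without hyphens (illustrative check only).
is_hex32() {
    printf '%s' "$1" | grep -Eq '^[0-9a-f]{32}$'
}

is_hex32 '3d3eee7ae1fb4259b5df72f854aaa369' && echo valid   # prints "valid"
```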
Command-Line Options
Option | Short | Default | Description |
---|---|---|---|
--convert | | off | Add hyphens to 32 characters long hexadecimal UUIDs passed via stdin. |
--count | | 1 | Number of UUIDs to generate. |
 | | – | Output available command-line arguments and quit. |
--hyphens | | off | Return 36 characters long UUIDs with hyphens. |
 | | – | Output version information and quit. |
Examples
Create three identifiers:
$ dmuuid --count 3
6827049760c545ad80d4082cc50203e8
ad488d0b8edd4c6c94582e702a810ada
3d3eee7ae1fb4259b5df72f854aaa369
Create a UUID4 with hyphens:
$ dmuuid --hyphens
d498f067-d14a-4f98-a9d8-777a3a131d12
Add hyphens to a hexadecimal UUID4:
$ echo '3d3eee7ae1fb4259b5df72f854aaa369' | dmuuid --convert
3d3eee7a-e1fb-4259-b5df-72f854aaa369
dmweb
dmweb is a CGI-based web user interface for DMPACK database access on client and server. The web application has to be executed through a CGI-compatible web server. It is recommended to run lighttpd(1). Any other server must be able to pass environment variables to the CGI application. gnuplot(1) is required for the plotting back-end (no-X11 flavour is sufficient).
The web application provides the following pages:
- Dashboard
-
Lists heartbeats, logs, and observations that have been added to the databases most recently.
- Nodes
-
Lists all sensor nodes, and allows adding new ones.
- Sensors
-
Lists all sensors, and allows adding new ones.
- Targets
-
Lists all targets, and allows adding new ones.
- Observations
-
Lists observations in database, selected by filter.
- Plots
-
Creates plots in SVG format from observation responses in database.
- Logs
-
Lists log messages stored in database, with optional filter.
- Beats
-
Lists received heartbeat messages, sorted by node id. The beat view shows the time the heartbeat was sent and received, as well as the time passed since then, additionally in Swatch Internet Time.
The style sheet of dmweb is based on missing.css. It may be replaced with any other classless CSS theme. For best experience, the IBM Plex font family should be installed locally.
If gnuplot(1) is installed under a name other than gnuplot, for example, gnuplot-nox, an alias has to be added to the global profile:
alias gnuplot="gnuplot-nox"
Environment Variable | Description |
---|---|
DM_DB_BEAT | Path to heartbeat database (server). |
DM_DB_LOG | Path to log database (client, server). |
DM_DB_OBSERV | Path to observation database (client, server). |
DM_READ_ONLY | Set to 1 to enable read-only database access. |
Copy the style sheet dmpack.min.css manually to the WWW root directory, or create a symlink. Environment variables are used to configure dmweb. Transport security and authentication have to be provided by the web server. See section Web UI for an example configuration.
Web Applications
 | dmapi | dmweb |
---|---|---|
Description | HTTP-RPC API | Web UI |
Base Path | /api/v1 | /dmpack |
Protocol | FastCGI | CGI |
Location | server | client, server |
Configuration | environment variables | environment variables |
Authentication | HTTP Basic Auth | HTTP Basic Auth |
Content-Types | CSV, JSON, JSON Lines, Namelist, Text | HTML5 |
HTTP Methods | GET, POST | GET, POST |
Database | SQLite 3 | SQLite 3 |
Read-Only Mode | Yes | Yes |
The web applications dmapi and dmweb are part of DMPACK. Both may be served by the same web server. It is recommended to run them in lighttpd(1). On FreeBSD, install the package with:
# pkg install www/lighttpd
The web server is configured through /usr/local/etc/lighttpd/lighttpd.conf. See the lighttpd wiki on how to configure the web server.
In the listed examples, the DMPACK executables are assumed to be in /usr/local/bin/, but you may copy the programs to /var/www/cgi-bin/ or any other directory. Set an appropriate owner, such as the one the server is running as.
Authentication
In the lighttpd(1) configuration file, set auth.backend.htpasswd.userfile to the path of the file that contains the HTTP Basic Auth credentials, or remove the related lines from the configuration if authentication is not desired. You can run openssl(1) to add credentials to the htpasswd user file:
# printf "<user>:`openssl passwd -crypt '<password>'`\n" >> /usr/local/etc/lighttpd/htpasswd
Replace <user> and <password> with real values. Instead of an htpasswd file, you may select a different authentication back-end, for example, LDAP, MySQL/MariaDB, PostgreSQL, or SQLite 3. See the lighttpd(1) auth module documentation for further instructions.
Cross-Origin Resource Sharing
If the HTTP-RPC API will be accessed by a client-side application running in the browser, the web server has to be configured to send the appropriate Cross-Origin Resource Sharing (CORS) headers. By default, asynchronous JavaScript requests are forbidden by the same-origin security policy. Refer to the documentation of the web server on how to set the Access-Control-* headers. For lighttpd(1), load the module mod_setenv and add response headers for OPTIONS requests:
$HTTP["request-method"] =~ "^(OPTIONS)$" {
setenv.add-response-header = (
"Access-Control-Allow-Origin" => "*",
"Access-Control-Allow-Headers" =>
"accept, origin, x-requested-with, content-type, x-transmission-session-id",
"Access-Control-Expose-Headers" => "X-Transmission-Session-Id",
"Access-Control-Allow-Methods" => "GET, POST, OPTIONS"
)
}
If the web server is behind a reverse proxy, CORS headers should be set by the proxy instead.
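Whether the CORS headers are actually sent can be checked with a preflight request from the command line. Host and origin below are placeholders for your setup:

```shell
# Send an OPTIONS preflight request and print the response headers.
# The Access-Control-Allow-* headers should appear in the output.
curl -i -X OPTIONS \
  --header "Origin: http://example.com" \
  --header "Access-Control-Request-Method: GET" \
  "http://localhost/api/v1/"
```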
Databases
The databases are expected to be in directory /var/dmpack/. Change the environment variables in the web server configuration to the actual paths. The observation, log, and beat databases the web applications will access must be created and initialised beforehand:
# dminit --type observ --database /var/dmpack/observ.sqlite --wal
# dminit --type log --database /var/dmpack/log.sqlite --wal
# dminit --type beat --database /var/dmpack/beat.sqlite --wal
Make sure the web server has read and write access to the directory and all databases inside:
# chown -R www:www /var/dmpack
Change www:www to the user and the group the web server is running as.
RPC Server
The snippet in this section may be added to the lighttpd(1) configuration to run the dmapi service. The lighttpd(1) web server does not require an additional FastCGI spawner. The following server modules have to be imported:
- mod_authn_file (HTTP Basic Auth)
- mod_extforward (real IP, only if the server is behind a reverse proxy)
- mod_fastcgi (FastCGI)
Add the IP address of the proxy server to the list of trusted forwarders to have access to the real IP of a client.
$SERVER["socket"] == "0.0.0.0:80" { }
# Load lighttpd modules.
server.modules += (
"mod_authn_file",
"mod_extforward",
"mod_fastcgi"
)
# Set authentication back-end and path of password file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
# Real IP of client in case the server is behind a reverse proxy. Set one or
# more trusted proxies.
# extforward.headers = ( "X-Real-IP" )
# extforward.forwarder = ( "<PROXY IP>" => "trust" )
# FastCGI configuration. Run 4 worker processes, and pass the database paths
# through environment variables.
fastcgi.server = (
"/api/v1" => ((
"socket" => "/var/lighttpd/sockets/dmapi.sock",
"bin-path" => "/usr/local/bin/dmapi",
"max-procs" => 4,
"check-local" => "disable",
"bin-environment" => (
"DM_DB_BEAT" => "/var/dmpack/beat.sqlite",
"DM_DB_LOG" => "/var/dmpack/log.sqlite",
"DM_DB_OBSERV" => "/var/dmpack/observ.sqlite",
"DM_READ_ONLY" => "0"
)
))
)
# URL routing.
$HTTP["url"] =^ "/api/v1" {
# Enable HTTP Basic Auth.
auth.require = ( "" => (
"method" => "basic",
"realm" => "dmpack",
"require" => "valid-user"
))
}
The FastCGI socket will be written to /var/lighttpd/sockets/dmapi.sock. Change max-procs to the desired number of FastCGI processes. Set the environment variables to the locations of the databases. The databases must exist prior to the first start.
On FreeBSD, add the service to the system rc file /etc/rc.conf and start the server manually:
# sysrc lighttpd_enable="YES"
# service lighttpd start
If served locally, access the RPC API at http://127.0.0.1/api/v1/.
Web UI
The lighttpd(1) web server has to be configured to run the CGI application under base path /dmpack/. The following server modules are required:
- mod_alias (URL rewrites)
- mod_authn_file (HTTP Basic Auth)
- mod_cgi (Common Gateway Interface)
- mod_setenv (CGI environment variables)
The example configuration may be appended to your lighttpd.conf:
$SERVER["socket"] == "0.0.0.0:80" { }
# Load lighttpd modules.
server.modules += (
"mod_alias",
"mod_authn_file",
"mod_cgi",
"mod_setenv"
)
# Set maximum number of concurrent connections and maximum
# HTTP request size of 8192 KiB (optional).
server.max-connections = 32
server.max-request-size = 8192
# Pass the database paths through environment variables.
setenv.add-environment = (
"DM_DB_BEAT" => "/var/dmpack/beat.sqlite",
"DM_DB_LOG" => "/var/dmpack/log.sqlite",
"DM_DB_OBSERV" => "/var/dmpack/observ.sqlite",
"DM_READ_ONLY" => "0"
)
# Set authentication back-end and path of password file.
auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"
# URL routing.
$HTTP["url"] =^ "/dmpack/" {
# Map URL to CGI executable.
alias.url += ( "/dmpack" => "/usr/local/bin/dmweb" )
# Enable HTTP Basic Auth for all paths.
auth.require = ( "" => (
"method" => "basic",
"realm" => "dmpack",
"require" => "valid-user"
))
# CGI settings. Do not assign file endings to script interpreters,
# execute only applications with execute bit set, enable write and
# read timeouts of 30 seconds.
cgi.assign = ( "" => "" )
cgi.execute-x-only = "enable"
cgi.limits = (
"write-timeout" => 30,
"read-timeout" => 30,
"tcp-fin-propagate" => "SIGTERM"
)
}
Copy the CSS file dmpack.min.css from /usr/local/share/dmpack/ (/usr/share/dmpack/ on Linux) to the WWW root directory, in this case /var/www/, or simply create a symlink:
# cd /var/www/
# ln -s /usr/local/share/dmpack/dmpack.min.css dmpack.min.css
If the files have to be served from a path other than the root path, add a rewrite rule or alias to the web server configuration. On FreeBSD, add the service to the system rc file /etc/rc.conf and start the web server manually:
# sysrc lighttpd_enable="YES"
# service lighttpd start
If served locally, access the web application at http://127.0.0.1/dmpack/.
RPC API
All database records are returned in CSV format by default, with content type text/comma-separated-values. Status and error messages are returned as key–value pairs, with content type text/plain.
The following HTTP endpoints are provided by the RPC API:
Endpoint | HTTP Method | Description |
---|---|---|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET |
|
|
GET, POST |
|
|
GET, POST |
|
|
GET, POST |
|
|
GET, POST |
|
|
GET, POST |
|
|
GET, POST |
Read Service Status
Returns service status in API status format as text/plain.
Endpoint
-
/api/v1/
HTTP Methods
-
GET
Responses
Status | Description |
---|---|
|
Default response. |
|
Unauthorised. |
|
Server error. |
Example
Return the HTTP-RPC service status:
$ curl -s -u <username>:<password> --header "Accept: text/plain" \
  "http://localhost/api/v1/"
Read Beats
Endpoint
-
/api/v1/beats
-
/api/v1/beats?header=<0|1>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Beats are returned. |
|
Unauthorised. |
|
No beats found. |
|
Server error. |
|
Database error. |
Example
Return beats of all nodes in JSON format, pretty-print the result with jq(1):
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/beats" | jq
Read Logs
Returns logs of a given node and time range in CSV, JSON, or JSON Lines format from database. Node id and time range are mandatory.
Endpoint
-
/api/v1/logs?node_id=<id>&from=<timestamp>&to=<timestamp>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Node id. |
|
string |
Start of time range (ISO 8601). |
|
string |
End of time range (ISO 8601). |
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Logs are returned. |
|
Invalid request. |
|
Unauthorised. |
|
No logs found. |
|
Server error. |
|
Database error. |
Example
Return all logs of node dummy-node and year 2023 in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/logs?node_id=dummy-node&from=2023&to=2024"
Read Nodes
Endpoint
-
/api/v1/nodes
-
/api/v1/nodes?header=<0|1>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Nodes are returned. |
|
Unauthorised. |
|
No nodes found. |
|
Server error. |
|
Database error. |
Example
Return all nodes in database as JSON array:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/nodes"
Read Observations
Returns observations of given node, sensor, target, and time range from database, in CSV, JSON, or JSON Lines format.
Endpoint
-
/api/v1/observs?<parameters>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Node id. |
|
string |
Sensor id. |
|
string |
Target id. |
|
string |
Response name. |
|
string |
Start of time range (ISO 8601). |
|
string |
End of time range (ISO 8601). |
|
integer |
Max. number of results (optional). |
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Observations are returned. |
|
Invalid request. |
|
Unauthorised. |
|
No observations found. |
|
Server error. |
|
Database error. |
Example
Return all observations related to node dummy-node, sensor dummy-sensor, and target dummy-target of a single month in JSON format, pretty-print the result with jq(1):
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/observs?node_id=dummy-node&sensor_id=dummy-sensor\
&target_id=dummy-target&from=2023-01&to=2024-01" | jq
Read Sensors
Endpoint
-
/api/v1/sensors
-
/api/v1/sensors?header=<0|1>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Sensors are returned. |
|
Unauthorised. |
|
No sensors found. |
|
Server error. |
|
Database error. |
Example
Return all sensors of node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/sensors?node_id=dummy-node"
Read Targets
Endpoint
-
/api/v1/targets
-
/api/v1/targets?header=<0|1>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
integer |
Add CSV header (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Targets are returned. |
|
Unauthorised. |
|
No targets found. |
|
Server error. |
|
Database error. |
Example
Return all targets in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/targets"
Read Time Series
Returns time series as observation views or data points (X/Y records) in CSV format from database. In comparison to the observation endpoint, a time series includes only a single response, selected by name.
Endpoint
-
/api/v1/timeseries?<parameters>
HTTP Methods
-
GET
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Node id. |
|
string |
Sensor id. |
|
string |
Target id. |
|
string |
Response name. |
|
string |
Start of time range (ISO 8601). |
|
string |
End of time range (ISO 8601). |
|
integer |
Max. number of results (optional). |
|
integer |
Add CSV header (0 or 1). |
|
integer |
Return observation views instead of data points (0 or 1). |
Request Headers
Name | Values |
---|---|
Accept |
|
Responses
Status | Description |
---|---|
|
Observations are returned. |
|
Invalid request. |
|
Unauthorised. |
|
No observations found. |
|
Server error. |
|
Database error. |
Example
Return the time series of response dummy related to node dummy-node, sensor dummy-sensor, and target dummy-target, from 2023 to 2024, as X/Y data in CSV format:
$ curl -s -u <username>:<password> --header "Accept: text/comma-separated-values" \
  "http://localhost/api/v1/timeseries?node_id=dummy-node&sensor_id=dummy-sensor\
&target_id=dummy-target&response=dummy&from=2023&to=2024"
For additional meta information, add the parameter view=1.
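The returned X/Y records can be fed straight into gnuplot(1). A minimal sketch, assuming the time series has been saved to a file series.csv and that the timestamps match the given timefmt (adjust the format to the actual timestamp precision of your data):

```shell
# Plot a DMPACK time series CSV (timestamp,value) as SVG.
# The file name series.csv is a placeholder.
gnuplot <<'EOF'
set terminal svg size 800,400
set output 'series.svg'
set datafile separator ','
set xdata time
set timefmt '%Y-%m-%dT%H:%M:%S'
plot 'series.csv' using 1:2 with lines title 'dummy'
EOF
```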
Read or Update Beat
On POST, adds or updates heartbeat given in Namelist format. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the beat was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the beat, otherwise the request will be rejected as unauthorised (HTTP 401).
Endpoint
-
/api/v1/beat
-
/api/v1/beat?node_id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Node id. |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Beat is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Beat not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Beat was accepted. |
|
Invalid request or payload. |
|
Unauthorised. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return the heartbeat of node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/beat?node_id=dummy-node"
Read or Create Log
On POST, adds log in Namelist format to database. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the log was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the log, otherwise the request will be rejected as unauthorised (HTTP 401).
Endpoint
-
/api/v1/log
-
/api/v1/log?id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Log id (UUID4). |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Log is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Log not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Log was accepted. |
|
Invalid request or payload. |
|
Unauthorised. |
|
Log exists in database. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return a specific log in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/log?id=51adca2f1d4e42a5829fd1a378c8b6f1"
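A POST counterpart can be sketched as follows, using a log record in Namelist format (as shown in the Data Serialisation section) saved to a file log.nml. The Content-Type value is an assumption here and must match what the dmapi build actually expects:

```shell
# Hypothetical POST of a log record in Namelist format.
# The content type is an assumption, not confirmed by this manual.
curl -s -u <username>:<password> \
  --header "Content-Type: application/namelist" \
  --data-binary @log.nml \
  "http://localhost/api/v1/log"
```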
Read or Create Node
On POST, adds node in Namelist format to database. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the node was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the node, otherwise the request will be rejected as unauthorised (HTTP 401).
Endpoint
-
/api/v1/node
-
/api/v1/node?id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Node id. |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Node is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Node not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Node was accepted. |
|
Invalid request or payload. |
|
Unauthorised. |
|
Node exists in database. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return node dummy-node in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/node?id=dummy-node"
Read or Create Observation
On POST, adds observation in Namelist format to database. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the observation was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the observation, otherwise the request will be rejected as unauthorised (HTTP 401).
Endpoint
-
/api/v1/observ
-
/api/v1/observ?id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Observation id (UUID4). |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Observation is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Observation not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Observation was accepted. |
|
Invalid request or payload. |
|
Unauthorised. |
|
Observation exists in database. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return a specific observation in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/observ?id=7b98ae11d80b4ee392fe1a74d2c05809"
Read or Create Sensor
On POST, adds sensor in Namelist format to database. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the sensor was accepted.
If HTTP Basic Auth is used, the user name must match the node_id attribute of the sensor, otherwise the request will be rejected as unauthorised (HTTP 401).
Endpoint
-
/api/v1/sensor
-
/api/v1/sensor?id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Sensor id. |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Sensor is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Sensor not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Sensor was accepted. |
|
Invalid request or payload. |
|
Unauthorised. |
|
Sensor exists in database. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return sensor dummy-sensor in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/sensor?id=dummy-sensor"
Read or Create Target
On POST, adds target in Namelist format to database. Optionally, the payload may be deflate or zstd compressed. The API returns HTTP 201 Created if the target was accepted.
Endpoint
-
/api/v1/target
-
/api/v1/target?id=<id>
HTTP Methods
-
GET
-
POST
Request Parameters
GET Parameter | Type | Description |
---|---|---|
|
string |
Target id. |
Request Headers
Name | Values |
---|---|
Accept |
|
Name | Values |
---|---|
Content-Encoding |
|
Content-Type |
|
Responses
Status | Description |
---|---|
|
Target is returned. |
|
Invalid request. |
|
Unauthorised. |
|
Target not found. |
|
Server error. |
|
Database error. |
Status | Description |
---|---|
|
Target was accepted. |
|
Invalid request or payload. |
|
Target exists in database. |
|
Payload too large. |
|
Invalid payload format. |
|
Server error. |
|
Database error. |
Example
Return target dummy-target in JSON format:
$ curl -s -u <username>:<password> --header "Accept: application/json" \
  "http://localhost/api/v1/target?id=dummy-target"
Data Serialisation
DMPACK supports the following data serialisation formats:
- Atom XML
-
Export of log messages in Atom Syndication Format (RFC 4287), with optional XSLT style sheet.
- Block
-
Export of observation responses as X/Y data points in ASCII block format, consisting of time stamp (ISO 8601) and real value.
- CSV
-
Export and import of beat, log, node, observation, sensor, and target data, with custom field separator and quote character. A CSV header is added optionally.
- HDF5
-
Export and import of node, observation, sensor, and target data as HDF5 compound data types.
- JSON
-
Export of beat, log, node, observation, sensor, and target data as JSON objects or JSON arrays.
- JSON Lines
-
Export of beat, log, node, observation, sensor, and target data in JSON Lines / Newline Delimited JSON format.
- Lua
-
Converting observations from and to Lua tables. Import of observations from Lua file or stack-based data exchange between Fortran and Lua.
- Namelist
-
Import from and export to Fortran 95 Namelist format of single beat, log, node, observation, sensor, and target data. The syntax is case-insensitive, line-breaks are optional. Default values are assumed for omitted attributes of data in Namelist format.
- Text
-
Status messages of the HTTP-RPC API are returned as key–value pairs in plain text format.
The JSON Lines format equals the JSON format, except that multiple records are separated by line breaks. The HDF5 format description for observations is omitted due to length. You can output the format from the command-line. For example, if the file observ.hdf5 contains DMPACK observations:
$ h5dump -H -A 0 observ.hdf5
API Status
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
DMPACK application version. |
|
string |
32 |
DMPACK library version. |
|
string |
32 |
Server host name. |
|
string |
32 |
Server software (web server). |
|
string |
32 |
Server date and time in ISO 8601. |
|
string |
32 |
Server status message (optional). |
|
integer |
4 |
Beat
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
Node id ( |
|
string |
45 |
IPv4/IPv6 address of client. |
|
string |
32 |
Client software name and version. |
|
string |
32 |
Date and time heartbeat was sent (ISO 8601). |
|
string |
32 |
Date and time heartbeat was received (ISO 8601). |
|
integer |
4 |
Last client connection error. |
|
integer |
4 |
Emit interval in seconds. |
|
integer |
4 |
Client uptime in seconds. |
{
"node_id": "dummy-node",
"address": "127.0.0.1",
"client": "dmbeat 1.0.0 (DMPACK 1.0.0)",
"time_sent": "1970-01-01T00:00:00.000000+00:00",
"time_recv": "1970-01-01T00:00:00.000000+00:00",
"error": 0,
"interval": 0,
"uptime": 0
}
Data Point
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
X value (ISO 8601). |
|
double |
8 |
Y value. |
Column | Attribute | Description |
---|---|---|
1 |
|
X value. |
2 |
|
Y value. |
Log
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
Log id (UUID4). |
|
integer |
4 |
|
|
integer |
4 |
|
|
string |
32 |
Date and time (ISO 8601). |
|
string |
32 |
Node id (optional). |
|
string |
32 |
Sensor id (optional). |
|
string |
32 |
Target id (optional). |
|
string |
32 |
Observation id (optional). |
|
string |
32 |
Log source (optional). |
|
string |
512 |
Log message. |
Level | Name |
---|---|
1 |
debug |
2 |
info |
3 |
warning |
4 |
error |
5 |
critical |
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<generator version="1.0">DMPACK</generator>
<title>DMPACK Logs</title>
<subtitle>Log Messages Feed</subtitle>
<id>urn:uuid:a6baaf1a-43b7-4e59-a18c-653e6ee61dfa</id>
<updated>1970-01-01T00:00:00.000000+00:00</updated>
<entry>
<title>DEBUG: dummy log message</title>
<id>urn:uuid:26462d27-d7ff-4ef1-b10e-0a2e921e638b</id>
<published>1970-01-01T00:00:00.000000+00:00</published>
<updated>1970-01-01T00:00:00.000000+00:00</updated>
<summary>DEBUG: dummy log message</summary>
<content type="xhtml">
<div xmlns="http://www.w3.org/1999/xhtml">
<table>
<tbody>
<tr><th>ID</th><td><code>26462d27d7ff4ef1b10e0a2e921e638b</code></td></tr>
<tr><th>Timestamp</th><td>1970-01-01T00:00:00.000000+00:00</td></tr>
<tr><th>Level</th><td>DEBUG (1)</td></tr>
<tr><th>Error</th><td>dummy error (2)</td></tr>
<tr><th>Node ID</th><td>dummy-node</td></tr>
<tr><th>Sensor ID</th><td>dummy-sensor</td></tr>
<tr><th>Target ID</th><td>dummy-target</td></tr>
<tr><th>Observation ID</th><td><code>9bb894c779e544dab1bd7e7a07ae507d</code></td></tr>
<tr><th>Source</th><td>dummy</td></tr>
<tr><th>Message</th><td>dummy log message</td></tr>
</tbody>
</table>
</div>
</content>
<author>
<name>dummy</name>
</author>
</entry>
</feed>
{
"id": "26462d27d7ff4ef1b10e0a2e921e638b",
"level": 1,
"error": 2,
"timestamp": "1970-01-01T00:00:00.000000+00:00",
"node_id": "dummy-node",
"sensor_id": "dummy-sensor",
"target_id": "dummy-target",
"observ_id": "9bb894c779e544dab1bd7e7a07ae507d",
"message": "dummy log message"
}
&DMLOG LOG%ID="26462d27d7ff4ef1b10e0a2e921e638b", LOG%LEVEL=1, LOG%ERROR=2, LOG%TIMESTAMP="1970-01-01T00:00:00.000000+00:00", LOG%NODE_ID="dummy-node", LOG%SENSOR_ID="dummy-sensor", LOG%TARGET_ID="dummy-target", LOG%OBSERV_ID="9bb894c779e544dab1bd7e7a07ae507d", LOG%SOURCE="dummy", LOG%MESSAGE="dummy log message", /
Node
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
Node id ( |
|
string |
32 |
Node name. |
|
string |
32 |
Node description (optional). |
|
double |
8 |
Node x or easting (optional). |
|
double |
8 |
Node y or northing (optional). |
|
double |
8 |
Node z or altitude (optional). |
Column | Attribute | Description |
---|---|---|
1 |
|
Node id. |
2 |
|
Node name. |
3 |
|
Node description. |
4 |
|
Node x or easting. |
5 |
|
Node y or northing. |
6 |
|
Node z or altitude. |
DATASET "node_type" {
    DATATYPE H5T_COMPOUND {
        H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "id";
        H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "name";
        H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "meta";
        H5T_IEEE_F64LE "x";
        H5T_IEEE_F64LE "y";
        H5T_IEEE_F64LE "z";
    }
    DATASPACE SIMPLE { ( 8 ) / ( 8 ) }
}
{
"id": "dummy-node",
"name": "Dummy Node",
"meta": "Description",
"x": 0.0,
"y": 0.0,
"z": 0.0
}
Observation
Attribute | Type | Size | Description |
---|---|---|---|
|
string |
32 |
Observation id (UUID4). |
|
string |
32 |
Node id ( |
|
string |
32 |
Sensor id ( |
|
string |
32 |
Target id ( |
|
string |
32 |
Observation name ( |
|
string |
32 |
Date and time of observation (ISO 8601). |
|
string |
32 |
Observation source or name of origin ( |
|
string |
32 |
Path of TTY/PTY device. |
|
integer |
4 |
Message queue priority (>= 0). |
|
integer |
4 |
Observation error code. |
|
integer |
4 |
Cursor of receiver list (0 to 16). |
|
integer |
4 |
Number of receivers (0 to 16). |
|
integer |
4 |
Number of sensor requests (0 to 8). |
|
array |
16 × 32 |
Array of receiver names (16). |
|
array |
8 × 1380 |
Array of requests (8). |
Attribute | Type | Size | Description |
---|---|---|---|
name | string | 32 | Request name. |
timestamp | string | 32 | Date and time of request (ISO 8601). |
request | string | 256 | Raw request to sensor. Non-printable characters have to be escaped. |
response | string | 256 | Raw response of sensor. Non-printable characters will be escaped. |
delimiter | string | 8 | Request delimiter. Non-printable characters have to be escaped. |
pattern | string | 256 | Regular expression pattern that describes the raw response using named groups. |
delay | integer | 4 | Delay in milliseconds to wait after the request. |
error | integer | 4 | Request error code. |
mode | integer | 4 | Request mode (unused, for future additions). |
retries | integer | 4 | Number of performed retries. |
state | integer | 4 | Request state (unused, for future additions). |
timeout | integer | 4 | Request timeout in milliseconds. |
nresponses | integer | 4 | Number of responses (0 to 16). |
responses | array | 16 × 32 | Extracted values from the raw response (16). |
Attribute | Type | Size | Description |
---|---|---|---|
name | string | 8 | Response name. |
unit | string | 8 | Response unit. |
type | integer | 4 | Response value type. |
error | integer | 4 | Response error code. |
value | double | 8 | Response value. |
{
"id": "9273ab62f9a349b6a4da6dd274ee83e7",
"node_id": "dummy-node",
"sensor_id": "dummy-sensor",
"target_id": "dummy-target",
"name": "dummy-observ",
"timestamp": "1970-01-01T00:00:00.000000+00:00",
"source": "dmdummy",
"path": "/dev/null",
"priority": 0,
"error": 0,
"next": 0,
"nreceivers": 2,
"nrequests": 1,
"receivers": [
"dummy-receiver1",
"dummy-receiver2"
],
"requests": [
{
"name": "dummy",
"timestamp": "1970-01-01T00:00:00.000000+00:00",
"request": "?\\n",
"response": "10.0\\n",
"delimiter": "\\n",
"pattern": "(?<sample>[-+0-9\\.]+)",
"delay": 0,
"error": 0,
"mode": 0,
"retries": 0,
"state": 0,
"timeout": 0,
"nresponses": 1,
"responses": [
{
"name": "sample",
"unit": "none",
"type": 0,
"error": 0,
"value": 10.0
}
]
}
]
}
{
id = "9273ab62f9a349b6a4da6dd274ee83e7",
node_id = "dummy-node",
sensor_id = "dummy-sensor",
target_id = "dummy-target",
name = "dummy-observ",
timestamp = "1970-01-01T00:00:00.000000+00:00",
source = "dmdummy",
path = "/dev/null",
error = 0,
next = 1,
priority = 0,
nreceivers = 2,
nrequests = 1,
receivers = { "dummy-receiver1", "dummy-receiver2" },
requests = {
{
name = "dummy",
timestamp = "1970-01-01T00:00:00.000000+00:00",
request = "?\\n",
response = "10.0\\n",
pattern = "(?<sample>[-+0-9\\.]+)",
delimiter = "\\n",
delay = 0,
error = 0,
mode = 0,
retries = 0,
state = 0,
timeout = 0,
nresponses = 1,
responses = {
{
name = "sample",
unit = "none",
type = 0,
error = 0,
value = 10.0
}
}
}
}
}
&DMOBSERV OBSERV%ID="9273ab62f9a349b6a4da6dd274ee83e7", OBSERV%NODE_ID="dummy-node", OBSERV%SENSOR_ID="dummy-sensor", OBSERV%TARGET_ID="dummy-target", OBSERV%NAME="dummy-observ", OBSERV%TIMESTAMP="1970-01-01T00:00:00.000000+00:00", OBSERV%SOURCE="dmdummy", OBSERV%PATH="/dev/null", OBSERV%PRIORITY=0, OBSERV%ERROR=0, OBSERV%NEXT=0, OBSERV%NRECEIVERS=2, OBSERV%NREQUESTS=1, OBSERV%RECEIVERS="dummy-receiver1","dummy-receiver2", OBSERV%REQUESTS(1)%NAME="dummy", OBSERV%REQUESTS(1)%TIMESTAMP="1970-01-01T00:00:00.000000+00:00", OBSERV%REQUESTS(1)%REQUEST="?\n", OBSERV%REQUESTS(1)%RESPONSE="10.0\n", OBSERV%REQUESTS(1)%DELIMITER="\n", OBSERV%REQUESTS(1)%PATTERN="(?<sample>[-+0-9\.]+)", OBSERV%REQUESTS(1)%DELAY=0, OBSERV%REQUESTS(1)%ERROR=0, OBSERV%REQUESTS(1)%MODE=0, OBSERV%REQUESTS(1)%RETRIES=0, OBSERV%REQUESTS(1)%STATE=0, OBSERV%REQUESTS(1)%TIMEOUT=0, OBSERV%REQUESTS(1)%NRESPONSES=1, OBSERV%REQUESTS(1)%RESPONSES(1)%NAME="sample", OBSERV%REQUESTS(1)%RESPONSES(1)%UNIT="none", OBSERV%REQUESTS(1)%RESPONSES(1)%TYPE=0, OBSERV%REQUESTS(1)%RESPONSES(1)%ERROR=0, OBSERV%REQUESTS(1)%RESPONSES(1)%VALUE=10.00000000000000, /
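To illustrate how the pattern attribute of a request relates to its responses, the named groups of the regular expression can be applied to the raw response to recover the numeric values. The following Python sketch is not part of DMPACK and only mirrors the abridged JSON example above; note that Python's re module spells named groups (?P<name>…), while DMPACK uses the PCRE-style (?<name>…):

```python
import json
import re

# Abridged observation, copied from the JSON example above. The pattern
# uses Python-style named groups instead of PCRE-style ones.
observ = json.loads("""
{
  "requests": [{
    "request": "?\\n",
    "response": "10.0\\n",
    "pattern": "(?P<sample>[-+0-9.]+)",
    "responses": [{ "name": "sample", "unit": "none" }]
  }]
}
""")

for request in observ["requests"]:
    # Match the raw response against the regular expression pattern, then
    # extract the value of each named group given in the responses array.
    match = re.search(request["pattern"], request["response"])
    for response in request["responses"]:
        value = float(match.group(response["name"]))
        print(response["name"], value, response["unit"])
```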
Sensor
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Sensor id. |
node_id | string | 32 | Node id. |
type | integer | 4 | Sensor type. |
name | string | 32 | Sensor name. |
sn | string | 32 | Sensor serial number (optional). |
meta | string | 32 | Sensor description (optional). |
x | double | 8 | Sensor x or easting (optional). |
y | double | 8 | Sensor y or northing (optional). |
z | double | 8 | Sensor z or altitude (optional). |
Value | Name | Description |
---|---|---|
0 | | Unknown sensor type. |
1 | | Virtual sensor. |
2 | | File system. |
3 | | Process or service. |
4 | | Meteorological sensor. |
5 | | Robotic total station. |
6 | | GNSS receiver. |
7 | | Level sensor. |
8 | | MEMS sensor. |
DATASET "sensor_type" { DATATYPE H5T_COMPOUND { H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "id"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "node_id"; H5T_STD_I32LE "type"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "name"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "sn"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "meta"; H5T_IEEE_F64LE "x"; H5T_IEEE_F64LE "y"; H5T_IEEE_F64LE "z"; } DATASPACE SIMPLE { ( 8 ) / ( 8 ) } }
{
"id": "dummy-sensor",
"node_id": "dummy-node",
"type": 3,
"name": "Dummy Sensor",
"sn": "00000",
"meta": "Description.",
"x": 0.0,
"y": 0.0,
"z": 0.0
}
Target
Attribute | Type | Size | Description |
---|---|---|---|
id | string | 32 | Target id. |
name | string | 32 | Target name. |
meta | string | 32 | Target description (optional). |
state | integer | 4 | Target state (optional). |
x | double | 8 | Target x or easting (optional). |
y | double | 8 | Target y or northing (optional). |
z | double | 8 | Target z or altitude (optional). |
Value | Name | Description |
---|---|---|
0 | | No special target state. |
1 | | Target has been removed. |
2 | | Target is missing. |
3 | | Target is invalid. |
4 | | Target should be ignored. |
5 | | Target is obsolete. |
6 | | User-defined target state. |
Column | Attribute | Description |
---|---|---|
1 | id | Target id. |
2 | name | Target name. |
3 | meta | Target description. |
4 | state | Target state. |
5 | x | Target x or easting. |
6 | y | Target y or northing. |
7 | z | Target z or altitude. |
DATASET "target_type" { DATATYPE H5T_COMPOUND { H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "id"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "name"; H5T_ARRAY { [32] H5T_STRING { STRSIZE 1; STRPAD H5T_STR_SPACEPAD; CSET H5T_CSET_ASCII; CTYPE H5T_C_S1; } } "meta"; H5T_STD_I32LE "state"; H5T_IEEE_F64LE "x"; H5T_IEEE_F64LE "y"; H5T_IEEE_F64LE "z"; } DATASPACE SIMPLE { ( 8 ) / ( 8 ) } }
{
"id": "dummy-target",
"name": "Dummy Target",
"meta": "Description",
"state": 0,
"x": 0.0,
"y": 0.0,
"z": 0.0
}
Databases
The DMPACK programs use three distinct databases to store deformation monitoring entity records:
- Observation Database
-
Stores nodes, sensors, targets, observations, observation receivers, observation requests, and observation responses, with optional synchronisation tables for all record types.
- Log Database
-
Stores all log messages in a single table.
- Beat Database
-
Stores heartbeat messages by unique node id.
The databases are usually located in directory /var/dmpack/.
Administration
The sqlite3(1) program is a stand-alone command-line shell for SQLite database access that allows the user to execute arbitrary SQL statements. Third-party programs provide an additional graphical user interface:
- DB Browser for SQLite (DB4S)
-
A spreadsheet-like visual interface for Linux, Unix, macOS, and Windows. (MPLv2, GPLv3)
- HeidiSQL
-
A free database administration tool for MariaDB, MySQL, MS SQL Server, PostgreSQL, and SQLite. For Windows only. (GPLv2)
- phpLiteAdmin
-
A web front-end for SQLite database administration written in PHP. (GPLv3)
- SQLite Web
-
A web-based SQLite database browser in Python. (MIT)
Entity–Relationship Model
Examples
Write all schemas of an observation database to file schema.sql, using the sqlite3(1) command-line tool:
$ sqlite3 /var/dmpack/observ.sqlite ".schema" > schema.sql
To dump an observation database as raw SQL to observ.sql:
$ sqlite3 /var/dmpack/observ.sqlite ".dump" > observ.sql
Dump only table logs of a log database:
$ sqlite3 /var/dmpack/log.sqlite ".dump 'logs'" > log.sql
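Arbitrary SQL queries can be run the same way. The following sketch assumes a table logs with columns timestamp, level, and message (the actual DMPACK schema has additional columns); it builds a scratch database first so the query can be demonstrated without an existing installation:

```shell
# Create a scratch database with a minimal logs table (illustrative
# schema only; the real schema is created by the DMPACK programs).
sqlite3 /tmp/demo_log.sqlite "DROP TABLE IF EXISTS logs;
CREATE TABLE logs (timestamp TEXT, level INTEGER, message TEXT);
INSERT INTO logs VALUES ('1970-01-01T00:00:00.000000+00:00', 4, 'dummy error');"

# Select all logs of level error (4) or higher.
sqlite3 /tmp/demo_log.sqlite "SELECT timestamp, message FROM logs WHERE level >= 4;"
```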
System Configuration
Additional changes to the system configuration should be considered to prevent issues during long-term monitoring.
Time Zone
The local time zone of the sensor client should be set to a zone without daylight saving time. For instance, time zone Europe/Berlin implies Central European Summer Time (CEST), which is usually not desired for long-term observations, as the switch to and from summer time leads to time jumps. Instead, use a fixed-offset zone such as UTC or Etc/GMT-1 (note the inverted sign convention of the POSIX Etc/ zones: Etc/GMT-1 corresponds to UTC+1).
FreeBSD
On FreeBSD, configure the time zone using:
# tzsetup
Linux
On Linux, list all time zones and set the preferred one with timedatectl(1):
# timedatectl list-timezones
# timedatectl set-timezone Etc/GMT-1
Time Synchronisation
The system time should be updated periodically by synchronising it with network time servers. A Network Time Protocol (NTP) client has to be installed and configured to enable the synchronisation.
FreeBSD
Set the current date and time initially by passing the IP or FQDN of the NTP server to ntpdate(1):
# ntpdate -b ptbtime1.ptb.de
The NTP daemon ntpd(8) is configured through file /etc/ntp.conf. If favoured, we can replace the existing NTP server pool 0.freebsd.pool.ntp.org with a single server, for example:
server ptbtime1.ptb.de iburst
Add the following entries to /etc/rc.conf:
ntpd_enable="YES"
ntpd_sync_on_start="YES"
ntpd_flags="-g"
Start the ntpd(8) service:
# service ntpd start
Linux
On Debian Linux, install the NTP package:
# apt install ntp
Query the NTP servers to synchronise with:
# ntpq -p
The system time should be updated now:
# date -R
On error, try to reconfigure the NTP service:
# dpkg-reconfigure ntp
Power Saving
On Linux, power saving for USB devices may be enabled by default. This can cause issues if sensors are attached through a USB adapter. USB power saving is enabled if the kernel boot parameter usbcore.autosuspend is not -1:
# cat /sys/module/usbcore/parameters/autosuspend
2
We can update the boot loader to turn auto-suspend off. Edit /etc/default/grub and change GRUB_CMDLINE_LINUX_DEFAULT to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet usbcore.autosuspend=-1"
Then, update the boot loader:
# update-grub
The system has to be rebooted for the changes to take effect.
Message Queues
The operating system must have POSIX message queues enabled to run DMPACK programs on sensor nodes.
FreeBSD
On FreeBSD, make sure the kernel module mqueuefs is loaded, and the message queue file system is mounted:
# kldstat -m mqueuefs
Id Refs Name
522 1 mqueuefs
Otherwise, we can simply load and mount the file system:
# kldload mqueuefs
# mkdir -p /mnt/mqueue
# mount -t mqueuefs null /mnt/mqueue
To load message queues at system start, add the module mqueuefs to /etc/rc.conf, and the file system to /etc/fstab:
# sysrc kld_list+="mqueuefs"
# echo "null /mnt/mqueue mqueuefs rw 0 0" >> /etc/fstab
Additionally, we may increase the system limits of POSIX message queues with sysctl(8), or in /etc/sysctl.conf. The defaults are:
# sysctl kern.mqueue.maxmsg
kern.mqueue.maxmsg: 100
# sysctl kern.mqueue.maxmsgsize
kern.mqueue.maxmsgsize: 16384
The maximum message size has to be at least 16384 bytes.
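To keep raised limits across reboots, the corresponding settings may be added to /etc/sysctl.conf; the values below are illustrative and mirror the defaults shown above:

```
kern.mqueue.maxmsg=100
kern.mqueue.maxmsgsize=16384
```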
Linux
The POSIX message queue file system should be mounted by default on Linux. If not, run:
# mkdir -p /dev/mqueue
# mount -t mqueue none /dev/mqueue
Set the maximum number of messages and the maximum message size to some reasonable values:
# sysctl fs.mqueue.msg_max=100
# sysctl fs.mqueue.msgsize_max=16384
The maximum message size has to be at least 16384 bytes.
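To persist the limits across reboots, they may be written to a sysctl configuration file, for instance /etc/sysctl.d/10-mqueue.conf (the file name is arbitrary):

```
fs.mqueue.msg_max=100
fs.mqueue.msgsize_max=16384
```

The settings are applied at boot, or immediately with sysctl --system.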
Cron
On Unix-like operating systems, cron is usually used to run jobs periodically. For instance, to update an XML feed or to generate HTML reports, add the task to the crontab(5) file of a local user.
Edit the cron jobs of user www
with crontab(1):
# crontab -u www -e
The following crontab(5) entry adds a task to generate reports every hour,
using utility script mkreport.sh
:
SHELL=/bin/sh
MAILTO=/dev/null
# Create reports every hour, suppress logging.
@hourly -q /usr/local/share/dmpack/mkreport.sh
Status mails and logging are disabled. The shell script mkreport.sh
must have
the execution bits set. Modify the script according to your set-up.
Additionally, we may update an Atom XML feed of logs by running dmfeed every five minutes:
*/5 * * * * -q /usr/local/bin/dmfeed --config /usr/local/etc/dmpack/dmfeed.conf
The feed is updated only if new logs have arrived in the meantime, unless option force is enabled.
GeoCOM API
The official GeoCOM API is divided into the following sub-systems:
Acronym | Name |
---|---|
AUT | Automation |
BAP | Basic Applications |
BMM | Basic Man–Machine Interface |
COM | Communication Settings |
CSV | Central Services |
EDM | Electronic Distance Measurement |
FTR | File Transfer |
IMG | Image Processing |
MOT | Motorisation |
SUP | Supervisor |
TMC | Theodolite Measurement and Calculation |
Types
All GeoCOM named types and enumerators supported by DMPACK start with prefix GEOCOM_.
Name | Description |
---|---|
 | Standard intensity of beep expressed as percentage. |
 | Direction clockwise. |
 | Direction counter-clockwise. |

Name | Description |
---|---|
 | Positioning to Hz and V angle. |
 | Positioning to a target in the environment of the Hz and V angle. |

Name | Description |
---|---|
 | Fast positioning mode. |
 | Exact positioning mode. |
 | For TM30/TS30. |

Name | Description |
---|---|
 | Reflector not defined. |
 | Reflector prism. |
 | Reflector tape. |

Name | Description |
---|---|
 | With reflector. |
 | Without reflector. |

Name | Description |
---|---|
 | ASCII protocol. |
 | Binary protocol. |

Name | Description |
---|---|
 | Power source is external. |
 | Power source is the internal battery. |

Name | Description |
---|---|
 | Not supported by TPS1200. |
 | RPC is enabled (online mode). |

Name | Description |
---|---|
 | Power down instrument. |
 | Not supported by TPS1200. |

Name | Description |
---|---|
 | Internal memory module. |
 | External memory card. |

Name | Description |
---|---|
 | Undocumented (0). |
 | Extension wildcard. |

Name | Description |
---|---|
 | Internal memory module. |
 | External memory card. |

Name | Description |
---|---|
 | Locked out. |
 | Locked in. |
 | Prediction mode. |

Name | Description |
---|---|
 | Slow down with current acceleration. |
 | Slow down by switching off the power supply. |

Name | Description |
---|---|
 | Instrument remains on. |
 | Turns off mechanism. |

Name | Description |
---|---|
 | Position 1 of telescope. |
 | Position 2 of telescope. |

Name | Description |
---|---|
 | Face in normal position. |
 | Face turned. |
Return Codes
All GeoCOM return codes start with prefix GRC_.
Code | Name | Description |
---|---|---|
0 | | Function successfully completed. |
1 | | Unknown error, result unspecified. |
2 | | Invalid parameter detected. Result unspecified. |
3 | | Invalid result. |
4 | | Fatal error. |
5 | | Not implemented. |
6 | | Function execution timed out. Result unspecified. |
7 | | Parameter setup for subsystem is incomplete. |
8 | | Function execution has been aborted. |
9 | | Fatal error (not enough memory). |
10 | | Fatal error (subsystem not initialised). |
12 | | Subsystem is down. |
13 | | System busy/already in use by another process. |
14 | | Fatal error (hardware failure). |
15 | | Execution of application has been aborted. |
16 | | Operation aborted (insufficient power supply level). |
17 | | Invalid version of file. |
18 | | Battery empty, about 1 minute remaining. |
20 | | No event pending. |
21 | | Out of temperature range. |
22 | | Instrument tilting out of range. |
23 | | Communication error. |
24 | | GRC_TYPE input (do no action). |
25 | | Instrument went into sleep mode. |
26 | | Function not successfully completed. |
27 | | Not available (licence key not available). |
28 | | Overflow error. |
29 | | System or subsystem has been stopped. |
256 | | ANG error. |
257 | | Angles and inclinations not valid. |
258 | | Inclinations not valid. |
259 | | Value accuracies not reached. |
260 | | Angle accuracies not reached. |
261 | | Inclination accuracies not reached. |
266 | | No write access allowed. |
267 | | Value out of range. |
268 | | Function aborted due to interrupt. |
269 | | Hz moved during incline measurement. |
270 | | Troubles with operating system. |
271 | | Overflow at parameter values. |
272 | | Too few peaks. |
273 | | Reading timeout. |
274 | | Too many exposures wanted. |
275 | | Picture height out of range. |
276 | | Positive exposure dynamic overflow. |
277 | | Negative exposure dynamic overflow. |
278 | | Exposure time overflow. |
279 | | Picture under-exposed. |
280 | | Picture over-exposed. |
300 | | Too many peaks detected. |
301 | | Too few peaks detected. |
302 | | Peak too slim. |
303 | | Peak too wide. |
304 | | Bad peak difference. |
305 | | Peak amplitude too low. |
306 | | Inhomogeneous peak amplitudes. |
307 | | No peak decoding possible. |
308 | | Peak decoding not stable. |
309 | | Too few valid fine-peaks. |
316 | | Inclination plane out of time range. |
317 | | Inclination no plane available. |
326 | | Errors in 5 kHz and/or 2.5 kHz angle. |
327 | | Errors in 5 kHz angle. |
328 | | Errors in 2.5 kHz angle. |
329 | | LVDS transfer error detected. |
330 | | LVDS transfer error detected in 5 kHz mode. |
331 | | LVDS transfer error detected in 2.5 kHz mode. |
512 | | ATR system is not ready. |
513 | | Result is not available yet. |
514 | | Several targets detected. |
515 | | Spot is too big for analysis. |
516 | | Background is too bright. |
517 | | No targets detected. |
518 | | Accuracy worse than asked for. |
519 | | Spot is on the edge of the sensing area. |
522 | | Blooming or spot on edge detected. |
523 | | ATR is not in a continuous mode. |
524 | | Not the spot of the own target illuminator. |
525 | | Communication error to sensor (ATR). |
526 | | Received arguments cannot be decoded. |
527 | | No spot detected in Hz direction. |
528 | | No spot detected in V direction. |
529 | | Strange light in Hz direction. |
530 | | Strange light in V direction. |
531 | | On multiple ATA_SLDR_OpenTransfer. |
532 | | No ATA_SLDR_OpenTransfer happened. |
533 | | Unexpected data format received. |
534 | | Checksum error in transmitted data. |
535 | | Address out of valid range. |
536 | | Firmware file has invalid format. |
537 | | Current (loaded) firmware does not support upload. |
538 | | PowerSearch system is not ready. |
539 | | ATR system error. |
768 | | EDM error. |
769 | | Fatal EDM sensor error. |
770 | | Invalid command or unknown command. |
771 | | Boomerang error. |
772 | | Received signal too low, prism too far away, or natural barrier, bad environment, etc. |
773 | | Obsolete. |
774 | | Received signal too strong, prism too near, strange light effect. |
775 | | Timeout, measuring time exceeded (signal too weak, beam interrupted). |
776 | | Too much turbulence or distraction. |
777 | | Filter motor defective. |
778 | | Device like EGL, DL is not installed. |
779 | | Search result invalid. |
780 | | Communication ok, but an error reported from the EDM sensor. |
781 | | No service password is set. |
782 | | Communication ok, but an unexpected answer received. |
783 | | Data send error, sending buffer is full. |
784 | | Data receive error, like parity buffer overflow. |
785 | | Internal EDM subsystem error. |
786 | | Sensor is working already, abort current measuring first. |
787 | | No measurement activity started. |
788 | | Calculated checksum, resp. received data wrong. |
789 | | During start-up or shut-down phase an error occurred. |
790 | | Red laser not available on this sensor HW. |
791 | | Measurement will be aborted (will be used for the laser security). |
798 | | Multiple OpenTransfer calls. |
799 | | No open transfer happened. |
800 | | Unexpected data format received. |
801 | | Checksum error in transmitted data. |
802 | | Address out of valid range. |
803 | | Firmware file has invalid format. |
804 | | Current (loaded) firmware doesn’t support upload. |
808 | | Undocumented error from the EDM sensor, should not occur. |
818 | | Out of distance range (too small or large). |
819 | | Signal-to-noise ratio too small. |
820 | | Noise too high. |
821 | | Password is not set. |
822 | | Elapsed time between prepare and start fast measurement for ATR too long. |
823 | | Possibly more than one target (also a sensor error). |
824 | | EEPROM constants are missing. |
825 | | No precise measurement possible. |
826 | | Measured distance is too big (not allowed). |
1024 | | GMF error. |
1025 | | Wrong area definition. |
1026 | | Identical points. |
1027 | | Points on one line. |
1028 | | Out of range. |
1029 | | Plausibility error. |
1030 | | Too few observations to calculate the average. |
1031 | | No solution. |
1032 | | Only one solution. |
1033 | | Second solution. |
1034 | | Intersection angle < 15 gon. |
1035 | | Invalid triangle. |
1036 | | Invalid angle unit. |
1037 | | Invalid distance unit. |
1038 | | Invalid vertical angle. |
1039 | | Invalid temperature system. |
1040 | | Invalid pressure unit. |
1041 | | Invalid radius. |
1042 | | Insufficient data (GM2). |
1043 | | Bad data (GM2). |
1044 | | Bad data distribution (GM2). |
1045 | | Same tie points (GM2). |
1046 | | Station and tie point same (GM2). |
1280 | | TMC error. |
1283 | | Measurement without full correction. |
1284 | | Accuracy cannot be guaranteed. |
1285 | | Only angle measurement valid. |
1288 | | Only angle measurement valid, but without full correction. |
1289 | | Only angle measurement valid, but accuracy cannot be guaranteed. |
1290 | | No angle measurement. |
1291 | | Wrong setting of PPM or MM on EDM. |
1292 | | Distance measurement not done (no aim). |
1293 | | System is busy (no measurement done). |
1294 | | No signal on EDM (only in signal mode). |
1792 | | Motorisation is not ready. |
1793 | | Motorisation is handling another task. |
1794 | | Motorisation is not in velocity mode. |
1795 | | Motorisation is in the wrong mode or busy. |
1796 | | Motorisation is not in posit mode. |
1797 | | Motorisation is not in service mode. |
1798 | | Motorisation is handling no task. |
1799 | | Motorisation is not in tracking mode. |
1800 | | Motorisation is not in spiral mode. |
1801 | | Vertical encoder/motor error. |
1802 | | Horizontal encoder/motor error. |
1803 | | Horizontal and vertical encoder/motor error. |
2304 | | BMM error. |
2305 | | Loading process already opened. |
2306 | | Transfer not opened. |
2307 | | Unknown character set. |
2308 | | Display module not present. |
2309 | | Character set already exists. |
2310 | | Character set cannot be deleted. |
2311 | | Memory cannot be allocated. |
2312 | | Character set still used. |
2313 | | Charset cannot be deleted or is protected. |
2314 | | Attempt to copy a character block outside the allocated memory. |
2315 | | Error during release of allocated memory. |
2316 | | Number of bytes specified in header does not match the bytes read. |
2317 | | Allocated memory could not be released. |
2318 | | Max. number of character sets already loaded. |
2319 | | Layer cannot be deleted. |
2320 | | Required layer does not exist. |
2321 | | Layer length exceeds maximum. |
3072 | | Initiate Extended Runtime Operation (ERO). |
3073 | | Cannot encode arguments in client. |
3074 | | Cannot decode results in client. |
3075 | | Hardware error while sending. |
3076 | | Hardware error while receiving. |
3077 | | Request timed out. |
3078 | | Packet format error. |
3079 | | Version mismatch between client and server. |
3080 | | Cannot decode arguments in server. |
3081 | | Unknown RPC, procedure ID invalid. |
3082 | | Cannot encode results in server. |
3083 | | Unspecified generic system error. |
3085 | | Unspecified error. |
3086 | | Binary protocol not available. |
3087 | | Call interrupted. |
3090 | | Protocol needs 8-bit encoded characters. |
3093 | | Transaction ID mismatch error. |
3094 | | Protocol not recognisable. |
3095 | | Invalid port address. |
3099 | | ERO is terminating. |
3100 | | Internal error (data buffer overflow). |
3101 | | Invalid checksum on server side received. |
3102 | | Invalid checksum on client side received. |
3103 | | Port not available. |
3104 | | Port not opened. |
3105 | | Unable to find TPS. |
3106 | | Extended Runtime Operation could not be started. |
3107 | | Attempt to send consecutive requests. |
3108 | | TPS has gone to sleep (wait and try again). |
3109 | | TPS has shut down (wait and try again). |
3110 | | No checksum in ASCII protocol available. |
8704 | | Position not reached. |
8705 | | Positioning not possible due to mounted EDM. |
8706 | | Angle measurement error. |
8707 | | Motorisation error. |
8708 | | Position not exactly reached. |
8709 | | Deviation measurement error. |
8710 | | No target detected. |
8711 | | Multiple targets detected. |
8712 | | Bad environment conditions. |
8713 | | Error in target acquisition. |
8714 | | Target acquisition not enabled. |
8715 | | ATR calibration failed. |
8716 | | Target position not exactly reached. |
8717 | | Distance measurement has been started. |
8718 | | External supply voltage is too high. |
8719 | | Internal or external supply voltage is too low. |
8720 | | Working area not set. |
8721 | | Power search data array is filled. |
8722 | | No data available. |
12544 | | KDM device is not available. |
13056 | | File access error. |
13057 | | Block number was not the expected one. |
13058 | | Not enough space on device to proceed uploading. |
13059 | | Rename of file failed. |
13060 | | Invalid parameter as input. |
Lua API
Parts of the DMPACK library are exposed to Lua through a distinct API. Log levels and error codes are registered as named parameters. The GeoCOM API includes named parameters for enumerators and return codes, besides functions for request preparation. The GeoCOM functions may be called from the configuration file of dmserial to initialise jobs, for example:
jobs = {
{
--
-- Start and initialisation of station p99.
-- Attribute `onetime` must be enabled!
--
onetime = true,
delay = 5 * 1000,
observation = {
name = "init_tps",
target_id = "p99",
receivers = { "dmdb" },
requests = {
geocom_beep_normal(),
geocom_set_refraction_mode(1),
geocom_set_inclination_correction(true),
geocom_set_user_atr_mode(true),
geocom_set_target_type(GEOCOM_BAP_REFL_USE),
geocom_set_prism_type(GEOCOM_BAP_PRISM_ROUND)
}
},
{
--
-- Single measurement of target p01 every 10 seconds.
--
onetime = false,
delay = 10 * 1000,
observation = {
name = "get_p01",
target_id = "p01",
receivers = { "dmdb" },
requests = {
geocom_set_position(gon2rad(0.0), gon2rad(100.0), GEOCOM_AUT_NORMAL, GEOCOM_AUT_TARGET),
geocom_do_measure(GEOCOM_TMC_DEF_DIST, GEOCOM_TMC_AUTO_INC),
geocom_get_simple_measurement(3000, GEOCOM_TMC_AUTO_INC)
}
}
}
}
The targets p01
and p99
have to exist in the observation database. The
performed observations are forwarded to dmdb.
Parameters
# | Name | Level |
---|---|---|
0 | | invalid level |
1 | | debug level |
2 | | info level |
3 | | warning level |
4 | | error level |
5 | | critical level |
# | Name | Type |
---|---|---|
0 | | 8-byte signed real |
1 | | 4-byte signed real |
2 | | 8-byte signed integer |
3 | | 4-byte signed integer |
4 | | 1-byte boolean |
5 | | byte |
6 | | byte string |
Functions
The following utility functions are exported to convert units:
- deg2gon(deg) – Converts degrees to gradians.
- deg2rad(deg) – Converts degrees to radians.
- gon2deg(gon) – Converts gradians to degrees.
- gon2rad(gon) – Converts gradians to radians.
- rad2deg(rad) – Converts radians to degrees.
- rad2gon(rad) – Converts radians to gradians.
The Lua functions may be called inside of configuration files. For testing, load the shared library libdmpack.so first, for example:
-- Import the shared library `libdmpack.so`.
-- The file must be located in the Lua search path.
require("libdmpack")
-- Convert angle from [deg] to [gon]. Output: 400.0
gon = deg2gon(360.0)
print(gon)
GeoCOM
The GeoCOM API for Lua is used to automate the creation of observation requests in DMPACK configuration files. The Lua function names do not match the official GeoCOM API names. All functions start with prefix geocom_, all named parameters with GEOCOM_. The names of the requests are set to the name of the respective function without prefix.
geocom_abort_download()
Returns request for FTR_AbortDownload procedure. Creates request to abort or end the file download command.
geocom_abort_list()
Returns request for FTR_AbortList procedure. Creates request to abort or end the file list command.
geocom_beep_alarm()
Returns request for BMM_BeepAlarm procedure. Creates request to output an alarm signal (triple beep).
geocom_beep_normal()
Returns request for BMM_BeepNormal procedure. Creates request to output an alarm signal (single beep).
geocom_beep_off()
Returns request for IOS_BeepOff procedure. Creates request to stop an active beep signal.
geocom_beep_on(intensity)
-
intensity
(integer) – Intensity of the beep signal.
Returns request for IOS_BeepOn procedure. Creates request for continuous beep
signal of given intensity
from 0 to 100. The constant
GEOCOM_IOS_BEEP_STDINTENS
sets the intensity to 100.
geocom_change_face(pos_mode, atr_mode)
-
pos_mode
(integer) – Position mode (GEOCOM_AUT_POSMODE). -
atr_mode
(integer) – ATR mode (GEOCOM_AUT_ATRMODE).
Returns request for AUT_ChangeFace procedure. Creates request for turning the telescope to the other face.
If pos_mode is GEOCOM_AUT_NORMAL, the instrument uses the current value of the compensator. For positioning distances > 25 gon, this mode may be inaccurate. If set to GEOCOM_AUT_PRECISE, it tries to measure the exact inclination of the target, which tends to increase the positioning time.
If atr_mode
is GEOCOM_AUT_POSITION
, the instrument uses conventional
positioning to other face. If set to GEOCOM_AUT_TARGET
, it tries to position
into a target in the destination area. This mode requires activated ATR.
geocom_delete(device_type, file_type, day, month, year, file_name)
-
device_type
(integer) – Internal memory or memory card (GEOCOM_FTR_DEVICETYPE). -
file_type
(integer) – Type of file (GEOCOM_FTR_FILETYPE). -
day
(integer) – Day of month (DD
). -
month
(integer) – Month (MM
). -
year
(integer) – Year (YY
). -
file_name
(string) – Name of file to delete.
Returns request for FTR_Delete procedure. Creates request for deleting one or more files. Wildcards may be used to delete multiple files. If the deletion date is valid, only files older than the deletion date are deleted.
geocom_do_measure(tmc_prog, inc_mode)
-
tmc_prog
(integer) – Measurement program (GEOCOM_TMC_MEASURE_PRG). -
inc_mode
(integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_DoMeasure procedure. Creates request for trying a
distance measurement. This command does not return any values. If a distance
measurement is performed in measurement program GEOCOM_TMC_DEF_DIST
, the
distance sensor will work in the set EDM mode.
geocom_download(block_number)
-
block_number
(integer) – Block number to download (0 – 65535).
Returns request for FTR_Download procedure. Creates request to get a single block of data. The geocom_setup_download() function has to be called first. The block sequence starts with 1. The download process will be aborted if the block number is set to 0. The maximum block number is 65535. The file size is therefore limited to 28 MiB. The function should not be used inside of configuration files.
geocom_fine_adjust(search_hz, search_v)
- search_hz (number) – Search range, Hz axis [rad].
- search_v (number) – Search range, V axis [rad].
Returns request for AUT_FineAdjust procedure. Creates request for automatic target positioning.
The procedure positions the telescope onto the target prism and measures the ATR Hz and V deviations. If the target is not within the visible area of the ATR sensor (field of view), a target search will be executed. The target search range is limited by the parameter search_v in V direction, and by parameter search_hz in Hz direction. If no target was found, the instrument turns back to the initial start position.
The Fine Adjust Lock-in towards a target is terminated by this procedure call. After positioning, the lock mode will be active. The timeout of the operation is set to 5 seconds, regardless of the general position timeout settings. The position tolerance depends on the previously selected fine adjust mode.
The tolerance settings have no influence on this operation. The tolerance settings and the ATR precision depend on the instrument class and the used EDM mode.
geocom_get_angle(inc_mode)
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetAngle5 procedure. Creates request for returning a simple angle measurement. The function starts an angle measurement and returns the results.
geocom_get_angle_complete(inc_mode)
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetAngle1 procedure. Creates request for returning a complete angle measurement. The function starts an angle and, depending on the configuration, an inclination measurement, and returns the results.
geocom_get_angle_correction()
Returns request for TMC_GetAngSwitch procedure. Creates request for getting the angular correction status.
geocom_get_atmospheric_correction()
Returns request for TMC_GetAtmCorr procedure. Creates request for getting the atmospheric correction parameters.
geocom_get_atmospheric_ppm()
Returns request for TMC_GetAtmPpm procedure. Creates request for getting the atmospheric ppm correction factor.
geocom_get_atr_error()
Returns request for TMC_IfDataAzeCorrError procedure. Creates request for getting the ATR error status.
geocom_get_atr_setting()
Returns request for BAP_GetATRSetting procedure. Creates request for getting the current ATR low-vis mode.
geocom_get_binary_mode()
Returns request for COM_GetBinaryAvailable procedure. Creates request for getting the binary attribute of the server.
geocom_get_config()
Returns request for SUP_GetConfig procedure. Creates request for getting the power management configuration status. The power timeout specifies the time after which the device switches into the mode indicated by response autopwr.
geocom_get_coordinate(wait_time, inc_mode)
- wait_time (integer) – Delay to wait for the distance measurement to finish [msec].
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetCoordinate procedure. Creates request for getting the coordinates of a measured point.
This function conducts an angle and, depending on the selected inc_mode, an inclination measurement, and then calculates the coordinates of the measured point with the last distance. The argument wait_time specifies the delay to wait for the distance measurement to finish. Single and tracking measurements are supported. The quality of the result is returned in the GeoCOM return code.
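A possible request sequence, sketched under the assumption that requests are simply collected in a Lua table and that GEOCOM_TMC_AUTO_INC selects automatic inclination measurement:

```lua
-- Start a distance measurement first, then request the coordinates
-- of the measured point, waiting up to 1000 msec for the distance.
requests = {
  geocom_do_measure(GEOCOM_TMC_DEF_DIST, GEOCOM_TMC_AUTO_INC),
  geocom_get_coordinate(1000, GEOCOM_TMC_AUTO_INC)
}
```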
geocom_get_date_time()
Returns request for CSV_GetDateTime procedure. Creates request for getting the current date and time of the instrument. A possible response may look like %R1P,0,0:0,1996,'07','19','10','13','2f'.
geocom_get_date_time_centi()
Returns request for CSV_GetDateTimeCentiSec procedure. Creates request for getting the current date and time of the instrument, including centiseconds.
geocom_get_device_config()
Returns request for CSV_GetDeviceConfig procedure. Creates request for getting the instrument configuration.
geocom_get_double_precision()
Returns request for COM_GetDoublePrecision procedure. Creates request for getting the double precision setting – the number of digits to the right of the decimal point – when double floating-point values are transmitted.
geocom_get_edm_mode()
Returns request for TMC_GetEdmMode procedure. Creates request for getting the EDM measurement mode.
geocom_get_egl_intensity()
Returns request for EDM_GetEglIntensity procedure. Creates request for getting the value of the intensity of the electronic guide light (EGL).
geocom_get_face()
Returns request for TMC_GetFace procedure. Creates request for getting the face of the current telescope position.
geocom_get_fine_adjust_mode()
Returns request for AUT_GetFineAdjustMode procedure. Creates request for getting the fine adjustment positioning mode.
geocom_get_full_measurement(wait_time, inc_mode)
- wait_time (integer) – Delay to wait for the distance measurement to finish [msec].
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetFullMeas procedure. The GeoCOM function returns angle, inclination, and distance measurement data, including accuracy and measurement time. This command does not issue a new distance measurement. A distance measurement has to be started in advance. If the distance is valid, the function ignores wait_time and returns the results immediately. If no valid distance is available, and the measurement unit is not activated, the angle measurement result is returned after the waiting time.
geocom_get_geocom_version()
Returns request for COM_GetSWVersion procedure. Creates request for getting the GeoCOM server software version.
geocom_get_geometric_ppm()
Returns request for TMC_GetGeoPpm procedure. Creates request for getting the geometric ppm correction factor.
geocom_get_height()
Returns request for TMC_GetHeight procedure. Creates request for getting the current reflector height.
geocom_get_image_config(mem_type)
- mem_type (integer) – Memory device type (GEOCOM_IMG_MEM_TYPE).
Returns request for IMG_GetTccConfig procedure. Creates request to read the current image configuration. The response subfunc is a binary combination of the following settings:
- 1 – Test image.
- 2 – Automatic exposure time selection.
- 4 – Two-times sub-sampling.
- 8 – Four-times sub-sampling.
geocom_get_inclination_correction()
Returns request for TMC_GetInclineSwitch procedure. Creates request for getting the dual-axis compensator status.
geocom_get_inclination_error()
Returns request for TMC_IfDataIncCorrError procedure. Creates request for getting the inclination error status.
geocom_get_instrument_name()
Returns request for CSV_GetInstrumentName procedure. Creates request for getting the Leica-specific instrument name.
geocom_get_instrument_number()
Returns request for CSV_GetInstrumentNo procedure. Creates request for getting the factory defined instrument number.
geocom_get_internal_temperature()
Returns request for CSV_GetIntTemp procedure. Creates request for getting the internal temperature of the instrument, measured on the mainboard side.
geocom_get_lock_status()
Returns request for MOT_ReadLockStatus procedure. Creates request for returning the condition of the Lock-In control.
geocom_get_measurement_program()
Returns request for BAP_GetMeasPrg procedure. Creates request for getting the distance measurement mode of the instrument.
geocom_get_power()
Returns request for CSV_CheckPower procedure. Creates request for checking the available power.
geocom_get_prism_constant()
Returns request for TMC_GetPrismCorr procedure. Creates request for getting the prism constant.
geocom_get_prism_definition(prism_type)
- prism_type (integer) – Prism type (GEOCOM_BAP_PRISMTYPE).
Returns request for BAP_GetPrismDef procedure. Creates request for getting the default prism definition.
geocom_get_prism_type()
Returns request for TMC_GetPrismType procedure. Creates request for getting the default prism type.
geocom_get_prism_type_v2()
Returns request for TMC_GetPrismType2 procedure. Creates request for getting the default or user prism type.
geocom_get_quick_distance()
Returns request for TMC_QuickDist procedure. Creates request for returning the slope distance and both angles. The function starts an EDM tracking measurement, and waits until a distance has been measured. Then, it returns the angles and the slope distance, but no coordinates. If no distance could be measured, only angles and an error code are returned. A measurement may be aborted by calling geocom_do_measure().
geocom_get_reduced_atr_fov()
Returns request for BAP_GetRedATRFov procedure. Creates request for getting the reduced ATR field of view.
geocom_get_reflectorless_class()
Returns request for CSV_GetReflectorlessClass procedure. Creates request for getting the RL type. The function returns the class of the reflectorless and long-range distance measurement of the instrument.
geocom_get_refraction_mode()
Returns request for TMC_GetRefractiveMethod procedure. Creates request for getting the refraction model. The function is used to get the current refraction model. Changing the method is not indicated on the interface of the instrument.
geocom_get_search_area()
Returns request for AUT_GetSearchArea procedure. Creates request for getting the dimensions of the PowerSearch window. This command is valid for all instruments, but only has an effect on instruments equipped with PowerSearch.
geocom_get_signal()
Returns request for TMC_GetSignal procedure. Creates request for getting the EDM signal intensity. The function can only perform a measurement if the signal measurement mode is activated. Start the signal measurement with geocom_do_measure() in mode GEOCOM_TMC_SIGNAL. After the measurement, the EDM must be switched off with mode GEOCOM_TMC_CLEAR. While measuring, there is no angle data available.
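The documented sequence can be sketched as follows, assuming requests are collected in a Lua table and GEOCOM_TMC_AUTO_INC selects automatic inclination measurement:

```lua
-- Start signal measurement mode, read the EDM signal intensity,
-- then switch the EDM off again.
requests = {
  geocom_do_measure(GEOCOM_TMC_SIGNAL, GEOCOM_TMC_AUTO_INC),
  geocom_get_signal(),
  geocom_do_measure(GEOCOM_TMC_CLEAR, GEOCOM_TMC_AUTO_INC)
}
```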
geocom_get_simple_coordinates(wait_time, inc_mode)
- wait_time (integer) – Delay to wait for the distance measurement to finish [msec].
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetSimpleCoord procedure. The API function returns the cartesian coordinates if a valid distance is set. The argument wait_time sets the maximum time to wait for a valid distance. Without a valid distance, the coordinates are set to 0.0, and an error is returned. The coordinate calculation requires inclination results. The argument inc_mode sets the inclination measurement mode.
geocom_get_simple_measurement(wait_time, inc_mode)
- wait_time (integer) – Delay to wait for the distance measurement to finish [msec].
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_GetSimpleMea procedure. The API function returns the angles and distance measurement data. The argument wait_time sets the maximum time to wait for a valid distance. If a distance is available, the wait time is ignored.
geocom_get_slope_distance_correction()
Returns request for TMC_GetSlopeDistCorr procedure. The function returns the total ppm value (atmospheric ppm + geometric ppm) plus the current prism constant.
geocom_get_software_version()
Returns request for CSV_GetSWVersion procedure. Creates request for getting the system software version of the instrument.
geocom_get_station()
Returns request for TMC_GetStation procedure. Creates request for getting the station coordinates of the instrument.
geocom_get_target_type()
Returns request for BAP_GetTargetType procedure. Creates request for getting the EDM type. The function returns the current EDM type (GEOCOM_BAP_TARGET_TYPE) for distance measurements: reflector (IR) or reflectorless (RL).
geocom_get_timeout()
Returns request for AUT_ReadTimeout procedure. Creates request for getting the timeout for positioning. The function returns the maximum time to perform positioning.
geocom_get_tolerance()
Returns request for AUT_ReadTol procedure. The function returns the positioning tolerances of the Hz and V instrument axis.
geocom_get_user_atr_mode()
Returns request for AUS_GetUserAtrState procedure. Creates request for getting the status of the ATR mode.
geocom_get_user_lock_mode()
Returns request for AUS_GetUserLockState procedure. Creates request for getting the status of the lock mode.
geocom_get_user_prism_definition(name)
- name (string) – Prism name.
Returns request for BAP_GetUserPrismDef procedure. Creates request for getting the user prism definition.
geocom_get_user_spiral()
Returns request for AUT_GetUserSpiral procedure. The function returns the current dimensions of the searching spiral. Requires at least a TCA instrument.
geocom_list(next)
- next (bool) – First or next entry.
Returns request for FTR_List procedure. Creates request for listing file information.
geocom_lock_in()
Returns request for AUT_LockIn procedure. Creates request for starting the target tracking. The API function will start the target tracking if the lock mode has been activated through geocom_set_user_lock_mode(). The geocom_fine_adjust() call must have finished successfully before executing this function.
geocom_measure_distance_angle(dist_mode)
- dist_mode (integer) – Distance measurement program (GEOCOM_BAP_MEASURE_PRG).
Returns request for BAP_MeasDistanceAngle procedure. Creates request for measuring Hz, V angles and a single distance. The API function measures angles and a single distance depending on the distance measurement mode dist_mode. It is not suited for continuous measurements (lock mode and TRK mode), and uses the current automation settings.
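As a minimal sketch, a single-distance measurement request might be created like this; the constant GEOCOM_BAP_SINGLE_REF_STANDARD is an assumed member of GEOCOM_BAP_MEASURE_PRG:

```lua
-- Measure Hz, V angles and a single distance to a reflector
-- (measurement program constant assumed).
request = geocom_measure_distance_angle(GEOCOM_BAP_SINGLE_REF_STANDARD)
```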
geocom_null()
Returns request for COM_NullProc procedure. Creates request for checking the communication.
geocom_ps_enable_range(enabled)
- enabled (bool) – Enable PowerSearch.
Returns request for AUT_PS_EnableRange procedure. The function enables or disables the predefined PowerSearch window, including the PowerSearch range limits set by API call geocom_ps_set_range() (requires GeoCOM robotic licence). If enabled is false, the default range is set to ≤ 400 m.
geocom_ps_search_next(direction, swing)
- direction (integer) – Searching direction (GEOCOM_AUT_CLOCKWISE or GEOCOM_AUT_ANTICLOCKWISE).
- swing (bool) – Searching starts –10 gon to the given direction.
Returns request for AUT_PS_SearchNext procedure. The function executes the 360° default PowerSearch and searches for the next targets. A previously defined PowerSearch window of geocom_set_search_area() is not taken into account. Use API call geocom_ps_search_window() first.
geocom_ps_search_window()
Returns request for AUT_PS_SearchWindow procedure. Creates request for starting PowerSearch. The function starts PowerSearch in the window defined by API calls geocom_set_search_area() and geocom_ps_set_range() (requires GeoCOM robotic licence).
geocom_ps_set_range(min_dist, max_dist)
- min_dist (integer) – Min. distance to prism (≥ 0) [m].
- max_dist (integer) – Max. distance to prism (≤ 400, ≥ min_dist + 10) [m].
Returns request for AUT_PS_SetRange procedure. Creates request for setting the PowerSearch range.
geocom_search(search_hz, search_v)
- search_hz (number) – Horizontal search region [rad].
- search_v (number) – Vertical search region [rad].
Returns request for AUT_Search procedure. The function performs an automatic target search within the given search area (requires GeoCOM robotic licence). The search is terminated once the prism appears in the field of view of the ATR sensor. If no prism is found in the specified area, the instrument turns back into the initial position. For an exact positioning onto the prism centre, use the fine-adjust API call geocom_fine_adjust() afterwards. If the search range of the API function geocom_fine_adjust() is expanded, target search and fine positioning are done in one step.
geocom_search_target()
Returns request for BAP_SearchTarget procedure. Creates request for searching a target. The function searches for a target in the ATR search window.
geocom_set_angle_correction(incline, stand_axis, collimation, tilt_axis)
- incline (bool) – Enable inclination correction.
- stand_axis (bool) – Enable standard axis correction.
- collimation (bool) – Enable collimation correction.
- tilt_axis (bool) – Enable tilt axis correction.
Returns request for TMC_SetAngSwitch procedure. Creates request for turning angle corrections on or off.
geocom_set_atmospheric_correction(lambda, pressure, dry_temp, wet_temp)
- lambda (number) – Wavelength of EDM transmitter [m].
- pressure (number) – Atmospheric pressure [mbar].
- dry_temp (number) – Dry temperature [°C].
- wet_temp (number) – Wet temperature [°C].
Returns request for BAP_SetAtmCorr procedure. Creates request for setting the atmospheric correction parameters. The argument lambda should be queried with API call geocom_get_atmospheric_correction().
geocom_set_atmospheric_ppm(atm_ppm)
- atm_ppm (number) – Atmospheric ppm correction factor [ppm].
Returns request for BAP_SetAtmPpm procedure. Creates request for setting the atmospheric ppm correction factor.
geocom_set_atr_mode(atr_mode)
- atr_mode (integer) – ATR low-vis mode (GEOCOM_BAP_ATRSETTING).
Returns request for BAP_SetATRSetting procedure. Creates request for setting the ATR low-vis mode.
geocom_set_binary_mode(enabled)
- enabled (bool) – Enable binary communication.
Returns request for COM_SetBinaryAvailable procedure. Creates request for setting the binary attribute of the server. The function sets the ability of the GeoCOM server to handle binary communication (not supported by DMPACK).
geocom_set_config(auto_power, timeout)
- auto_power (integer) – Power-off mode (GEOCOM_SUP_AUTO_POWER).
- timeout (integer) – Timeout [msec].
Returns request for SUP_SetConfig procedure. Creates request for setting the power management configuration. The argument timeout sets the duration after which the instrument switches into the mode auto_power when no user activity occurred (key press, GeoCOM communication). The value must be between 60,000 msec (1 min) and 6,000,000 msec (100 min).
geocom_set_date_time(year, month, day, hour, minute, second)
- year (integer) – Year (YYYY).
- month (integer) – Month (MM).
- day (integer) – Day of month (DD).
- hour (integer) – Hour (hh).
- minute (integer) – Minute (mm).
- second (integer) – Second (ss).
Returns request for CSV_SetDateTime procedure. Creates request for setting the date and time of the instrument.
geocom_set_distance(slope_dist, height_offset, inc_mode)
- slope_dist (number) – Slope distance [m].
- height_offset (number) – Height offset [m].
- inc_mode (integer) – Inclination measurement mode (GEOCOM_TMC_INCLINE_PRG).
Returns request for TMC_SetHandDist procedure. The function is used to set the manually measured slope distance and height offset for a following measurement. Additionally, an inclination and an angle measurement are started to determine the coordinates of the target. The vertical angle is corrected to π/2 or 3π/2, depending on the face of the instrument. The previously measured distance is cleared.
geocom_set_double_precision(ndigits)
- ndigits (integer) – Number of digits to the right of the decimal point.
Returns request for COM_SetDoublePrecision procedure. The function sets the precision – the number of digits to the right of the decimal point – when double floating-point values are transmitted. The default precision is 15 digits. The setting is only valid for the ASCII transmission mode. Trailing zeroes will not be sent by the instrument. For example, if ndigits is set to 3 and the exact value is 1.99975, the resulting value will be 2.0.
geocom_set_edm_mode(edm_mode)
- edm_mode (integer) – EDM measurement mode (GEOCOM_EDM_MODE).
Returns request for TMC_SetEdmMode procedure. Creates request for setting the EDM measurement mode. The EDM mode set by this function is used by geocom_do_measure() in mode GEOCOM_TMC_DEF_DIST.
geocom_set_egl_intensity(intensity)
- intensity (integer) – EGL intensity (GEOCOM_EDM_EGLINTENSITY_TYPE).
Returns request for EDM_SetEglIntensity procedure. Creates request for setting the intensity of the electronic guide light.
geocom_set_fine_adjust_mode(adj_mode)
- adj_mode (integer) – Fine adjust positioning mode (GEOCOM_AUT_ADJMODE).
Returns request for AUT_SetFineAdjustMode procedure. The function sets the positioning tolerances relating to angle accuracy or point accuracy for the fine adjust (requires GeoCOM robotic licence). If a target is near or held by hand, it is recommended to set the adjust mode to GEOCOM_AUT_POINT_MODE. The argument adj_mode has to be either GEOCOM_AUT_NORM_MODE or GEOCOM_AUT_POINT_MODE.
geocom_set_geometric_ppm(enabled, scale_factor, offset, height_ppm, individual_ppm)
- enabled (bool) – Enable geometric ppm calculation.
- scale_factor (number) – Scale factor on central meridian.
- offset (number) – Offset from central meridian [m].
- height_ppm (number) – Ppm value due to height above reference.
- individual_ppm (number) – Individual ppm value.
Returns request for TMC_SetGeoPpm procedure. Creates request for setting the geometric ppm correction factor.
geocom_set_height(height)
- height (number) – Reflector height [m].
Returns request for TMC_SetHeight procedure. Creates request for setting a new reflector height.
geocom_set_image_config(mem_type, image_number, quality, sub_function, prefix)
- mem_type (integer) – Memory device type (GEOCOM_IMG_MEM_TYPE).
- image_number (integer) – Actual image number.
- quality (integer) – JPEG compression factor (0 – 100).
- sub_function (integer) – Additional sub-functions to call.
- prefix (string) – File name prefix.
Returns request for IMG_SetTccConfig procedure. Creates request for setting the image configuration. The argument sub_function may be a binary combination of the following settings:
- 1 – Test image.
- 2 – Automatic exposure-time selection.
- 4 – Two-times sub-sampling.
- 8 – Four-times sub-sampling.
geocom_set_inclination_correction(enabled)
- enabled (bool) – Enable dual-axis compensator.
Returns request for TMC_SetInclineSwitch procedure. Creates request for turning the dual-axis compensator on or off.
geocom_set_laser_pointer(enabled)
- enabled (bool) – Enable laser pointer.
Returns request for EDM_Laserpointer procedure. Creates request for turning the laser pointer on or off. The function is only available on models which support reflectorless distance measurement.
geocom_set_measurement_program(bap_prog)
- bap_prog (integer) – Measurement program (GEOCOM_BAP_USER_MEASPRG).
Returns request for BAP_SetMeasPrg procedure. The function sets the distance measurement program, for example, for API call geocom_measure_distance_angle(). The RL EDM type programs are not available on all instruments. Changing the measurement program may change the EDM type as well (IR, RL).
geocom_set_orientation(hz)
- hz (number) – Horizontal orientation [rad].
Returns request for TMC_SetOrientation procedure. Creates request for orientating the instrument in horizontal direction. The API function is a combination of an angle measurement to get the horizontal offset and setting the angle offset afterwards, in order to orientate to a target. Before the new orientation can be set, an existing distance must be cleared by calling API function geocom_do_measure() with argument GEOCOM_TMC_CLEAR.
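The required order of the two calls can be sketched as follows, assuming requests are collected in a Lua table and GEOCOM_TMC_AUTO_INC selects automatic inclination measurement:

```lua
-- Clear the last distance, then orientate the instrument to
-- 0.0 rad in horizontal direction.
requests = {
  geocom_do_measure(GEOCOM_TMC_CLEAR, GEOCOM_TMC_AUTO_INC),
  geocom_set_orientation(0.0)
}
```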
geocom_set_position(hz, v, pos_mode, atr_mode)
- hz (number) – Horizontal angle [rad].
- v (number) – Vertical angle [rad].
- pos_mode (integer) – Position mode (GEOCOM_AUT_POSMODE).
- atr_mode (integer) – ATR mode (GEOCOM_AUT_ATRMODE).
Returns request for AUT_MakePositioning procedure. Creates request for turning the telescope to a specified position.
If pos_mode is GEOCOM_AUT_NORMAL, the instrument uses the current value of the compensator. For positioning distances > 25 gon, this mode might tend to inaccuracy. If set to GEOCOM_AUT_PRECISE, it tries to measure the exact inclination of the target, which tends to increase the positioning time.
If atr_mode is GEOCOM_AUT_POSITION, the instrument uses conventional positioning to the other face. If set to GEOCOM_AUT_TARGET, it tries to position onto a target in the destination area. This mode requires activated ATR.
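For example, a positioning request that turns the telescope to Hz and V angles of π/2 with ATR positioning might be created like this (the angle values are arbitrary):

```lua
-- Turn the telescope to Hz = 1.5708 rad, V = 1.5708 rad, using
-- normal positioning and ATR target positioning.
request = geocom_set_position(1.5708, 1.5708, GEOCOM_AUT_NORMAL, GEOCOM_AUT_TARGET)
```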
geocom_set_positioning_timeout(time_hz, time_v)
- time_hz (number) – Timeout in Hz direction [sec].
- time_v (number) – Timeout in V direction [sec].
Returns request for AUT_SetTimeout procedure. This function sets the maximum time to perform a positioning. The timeout is reset to 7 seconds after each power-on. Valid values for time_hz and time_v are between 7 [sec] and 60 [sec].
geocom_set_prism_constant(prism_const)
- prism_const (number) – Prism constant [mm].
Returns request for TMC_SetPrismCorr procedure. Creates request for setting the prism constant. The API function geocom_set_prism_type() overwrites this setting.
geocom_set_prism_type(prism_type)
- prism_type (integer) – Prism type (GEOCOM_BAP_PRISMTYPE).
Returns request for BAP_SetPrismType procedure. Creates request for setting the default prism type. This function sets the prism type for measurement with a reflector (GEOCOM_BAP_PRISMTYPE). It overwrites the prism constant set by API call geocom_set_prism_constant().
geocom_set_prism_type_v2(prism_type, prism_name)
- prism_type (integer) – Prism type (GEOCOM_BAP_PRISMTYPE).
- prism_name (string) – Prism name (required if prism type is GEOCOM_BAP_PRISM_USER).
Returns request for BAP_SetPrismType2 procedure. Creates request for setting the default or user prism type. This function sets the default or the user prism type for measurements with a reflector. It overwrites the prism constant set by geocom_set_prism_constant(). The user defined prism must have been added with API call geocom_set_user_prism_definition() beforehand.
geocom_set_reduced_atr_fov(enabled)
- enabled (bool) – Use reduced field of view.
Returns request for BAP_SetRedATRFov procedure. Creates request for setting the reduced ATR field of view. If enabled is true, ATR uses a reduced field of view (about 1/9); otherwise, the full field of view.
geocom_set_refraction_mode(mode)
- mode (integer) – Refraction data method (1 or 2).
Returns request for TMC_SetRefractiveMethod procedure. Creates request for setting the refraction model. Mode 1 selects the method for the rest of the world, mode 2 the method for Australia.
geocom_set_search_area(center_hz, center_v, range_hz, range_v, enabled)
- center_hz (number) – Search area center Hz angle [rad].
- center_v (number) – Search area center V angle [rad].
- range_hz (number) – Search area range Hz angle [rad].
- range_v (number) – Search area range V angle [rad].
- enabled (bool) – Enable search area.
Returns request for AUT_SetSearchArea procedure. The function sets the position and dimensions of the PowerSearch window, and activates it. The API call is valid for all instruments, but has effects only for those equipped with PowerSearch (requires GeoCOM robotic licence).
geocom_set_station(easting, northing, height, instr_height)
- easting (number) – E coordinate [m].
- northing (number) – N coordinate [m].
- height (number) – H coordinate [m].
- instr_height (number) – Instrument height [m].
Returns request for TMC_SetStation procedure. Creates request for setting the station coordinates of the instrument.
geocom_set_target_type(target_type)
- target_type (integer) – Target type (GEOCOM_BAP_TARGET_TYPE).
Returns request for BAP_SetTargetType procedure. Creates request for setting the EDM type.
The function sets the current EDM type (GEOCOM_BAP_TARGET_TYPE) for distance measurements: reflector (IR) or reflectorless (RL). For each EDM type, the EDM mode used last is remembered and activated if the EDM type is changed. If EDM type IR is selected, the automation mode used last is activated automatically. The API function geocom_set_measurement_program() can also change the target type. The EDM type RL is not available on all instruments.
geocom_set_tolerance(hz, v)
- hz (number) – Positioning tolerance in Hz direction [rad].
- v (number) – Positioning tolerance in V direction [rad].
Returns request for AUT_SetTol procedure. Creates request for setting the positioning tolerances.
This function sets the position tolerances of the Hz and V instrument axes (GeoCOM robotic licence required). The tolerances must be in the range of 1 [cc] (1.57079E-06 [rad]) to 100 [cc] (1.57079E-04 [rad]).
The maximum resolution of the angle measurement system depends on the instrument accuracy class. If smaller positioning tolerances are required, the positioning time can increase drastically.
geocom_set_user_atr_mode(enabled)
- enabled (bool) – Enable ATR state.
Returns request for AUS_SetUserAtrState procedure. Creates request for setting the status of the ATR state.
The function activates or deactivates the ATR mode (requires GeoCOM robotic licence). If enabled is true, ATR mode is activated, and if lock mode is enabled while the API call is made, lock mode will change to ATR mode. If enabled is false, ATR mode is deactivated, and if lock mode is enabled, it stays enabled.
geocom_set_user_lock_mode(enabled)
- enabled (bool) – Enable lock state.
Returns request for AUS_SetUserLockState procedure. Creates request for setting the status of the lock state.
The function activates or deactivates the lock mode (GeoCOM robotic licence required). If enabled is true, lock mode is activated. In order to lock and follow a moving target, call API function geocom_lock_in(). If enabled is false, lock mode is deactivated. Tracking of a moving target will be aborted, and the manual drive wheel is activated.
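The documented order of the calls can be sketched as follows; the search ranges passed to geocom_fine_adjust() are arbitrary, and the grouping of requests in a Lua table is an assumption:

```lua
-- Enable lock mode, fine-adjust onto the prism, then start
-- the target tracking.
requests = {
  geocom_set_user_lock_mode(true),
  geocom_fine_adjust(0.02, 0.02),
  geocom_lock_in()
}
```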
geocom_set_user_prism_definition(prism_name, prism_const, refl_type, creator)
- prism_name (string) – Prism name.
- prism_const (number) – Prism constant [mm].
- refl_type (integer) – Reflector type (GEOCOM_BAP_REFLTYPE).
- creator (string) – Name of creator.
Returns request for BAP_SetUserPrismDef procedure. Creates request for setting a user prism definition.
geocom_set_user_spiral(hz, v)
- hz (number) – ATR search window in Hz direction [rad].
- v (number) – ATR search window in V direction [rad].
Returns request for AUT_SetUserSpiral procedure. The function sets the dimensions of the ATR search window (GeoCOM robotic licence required).
geocom_set_velocity(omega_hz, omega_v)
- omega_hz (number) – Velocity in Hz direction [rad/sec].
- omega_v (number) – Velocity in V direction [rad/sec].
Returns request for MOT_SetVelocity procedure. Creates request for driving the instrument with constant speed.
The function is used to set up the velocity of the motorisation (GeoCOM robotic licence required). The API function geocom_start_controller() must have been called with argument GEOCOM_MOT_OCONST before. The velocities in horizontal and vertical direction are given in [rad/sec]. The maximum velocity is ±3.14 rad/sec for TM30/TS30, and ±0.79 rad/sec for TPS1100/TPS1200.
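A sketch of the required call order, again assuming requests are collected in a Lua table:

```lua
-- Start the motor controller in constant-speed mode, rotate the
-- instrument slowly in Hz direction, then stop the controller.
requests = {
  geocom_start_controller(GEOCOM_MOT_OCONST),
  geocom_set_velocity(0.1, 0.0),
  geocom_stop_controller(GEOCOM_MOT_NORMAL)
}
```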
geocom_setup_download(device_type, file_type, file_name, block_size)
- device_type (integer) – Device type (GEOCOM_FTR_DEVICETYPE).
- file_type (integer) – File type (GEOCOM_FTR_FILETYPE).
- file_name (string) – File name with extension.
- block_size (integer) – Block size.
Returns request for FTR_SetupDownload procedure. Creates request for setting up a file download. The function has to be called before geocom_download(). If the file type is GEOCOM_FTR_FILE_UNKNOWN, an additional file path is required.
The argument device_type must be one of the following:
- GEOCOM_FTR_DEVICE_INTERNAL – Internal memory module (path /ata1a/).
- GEOCOM_FTR_DEVICE_PCPARD – External memory card (path /ata0a/).
The argument file_type is usually GEOCOM_FTR_FILE_IMAGES. The maximum value for block_size is GEOCOM_FTR_MAX_BLOCKSIZE.
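The setup and the first download request could be combined as follows; the file name and block size are placeholders, and the grouping of requests in a Lua table is an assumption:

```lua
-- Set up the download of an image file from the internal memory,
-- then request the first data block.
requests = {
  geocom_setup_download(GEOCOM_FTR_DEVICE_INTERNAL, GEOCOM_FTR_FILE_IMAGES,
                        "image.jpg", 512),
  geocom_download(1)
}
```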
geocom_setup_list(device_type, file_type, search_path)
- device_type (integer) – Device type (GEOCOM_FTR_DEVICETYPE).
- file_type (integer) – File type (GEOCOM_FTR_FILETYPE).
- search_path (string) – Optional search path, required for file type GEOCOM_FTR_FILE_UNKNOWN.
Returns request for FTR_SetupList procedure. Creates request for setting up file listing. The function sets up the device type, file type, and search path. It has to be called before geocom_list().
geocom_start_controller(start_mode)
- start_mode (integer) – Controller start mode (GEOCOM_MOT_MODE).
Returns request for MOT_StartController procedure. Creates request for starting the motor controller. If this function is used in combination with API call geocom_set_velocity(), the controller mode has to be GEOCOM_MOT_OCONST.
The argument start_mode must be one of the following:
- GEOCOM_MOT_POSIT – Relative positioning.
- GEOCOM_MOT_OCONST – Constant speed.
- GEOCOM_MOT_MANUPOS – Manual positioning (default setting).
- GEOCOM_MOT_LOCK – “Lock-in” controller.
- GEOCOM_MOT_BREAK – “Brake” controller.
- GEOCOM_MOT_TERM – Terminates the controller task.
geocom_stop_controller(stop_mode)
- stop_mode (integer) – Controller stop mode (GEOCOM_MOT_STOPMODE).
Returns request for MOT_StopController procedure. Creates request for stopping the motor controller.
The argument stop_mode must be one of the following:
- GEOCOM_MOT_NORMAL – Slow down with current acceleration.
- GEOCOM_MOT_SHUTDOWN – Slow down by switching off power supply.
geocom_switch_off(stop_mode)
- stop_mode (integer) – Switch-off mode (GEOCOM_COM_TPS_STOP_MODE).
Returns request for COM_SwitchOffTPS procedure. Creates request for turning the instrument off.
The argument stop_mode has to be one of the following:
- GEOCOM_COM_TPS_STOP_SHUT_DOWN – Power down instrument.
- GEOCOM_COM_TPS_STOP_SLEEP – Sleep mode (not supported by TPS1200).
geocom_switch_on(start_mode)
- start_mode (integer) – Switch-on mode (GEOCOM_COM_TPS_STARTUP_MODE).
Returns request for COM_SwitchOnTPS procedure. Creates request for turning the instrument on.
The argument start_mode has to be one of the following:
- GEOCOM_COM_TPS_STARTUP_LOCAL – Not supported by TPS1200.
- GEOCOM_COM_TPS_STARTUP_REMOTE – Online mode (RPC is enabled).
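A minimal Lua sketch of both power-state requests, using the modes documented above:

```lua
-- Sketch: requests to power the instrument up and down.
local on  = geocom_switch_on(GEOCOM_COM_TPS_STARTUP_REMOTE)  -- online mode
local off = geocom_switch_off(GEOCOM_COM_TPS_STOP_SHUT_DOWN) -- power down
```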
geocom_take_image(mem_type)
- mem_type (integer) – Memory type (GEOCOM_IMG_MEM_TYPE).
Returns request for IMG_TakeTccImage procedure. Creates request for capturing a telescope image.
The memory type mem_type has to be one of the following:
- GEOCOM_IMG_INTERNAL_MEMORY – Internal memory module.
- GEOCOM_IMG_PC_CARD – External memory card.
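In Lua, a capture request for the internal memory module can be created as a one-liner:

```lua
-- Sketch: capture a telescope image to the internal memory module.
local request = geocom_take_image(GEOCOM_IMG_INTERNAL_MEMORY)
```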
Third-Party Programs
HDFView
HDFView is a Java-based visual tool for browsing and editing HDF5 and HDF4 files. Application images for Linux, macOS, and Windows are available for download on the website of The HDF Group. On FreeBSD, the program has to be compiled from source. The following build dependencies are required:
- java/openjdk19 (or any other version)
The HDF4 and HDF5 libraries have to be built from source as well.
Building HDF4
Clone the HDF4 repository and compile with CMake:
$ cd /tmp/
$ git clone --depth 1 https://github.com/HDFGroup/hdf4.git
$ cd hdf4/
$ mkdir build && cd build/
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \
  -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \
  -DHDF4_BUILD_TOOLS:BOOL=OFF -DHDF4_BUILD_EXAMPLES=OFF \
  -DHDF4_BUILD_FORTRAN=ON -DHDF4_BUILD_JAVA=ON \
  -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \
  -DZLIB_INCLUDE_DIR:PATH=/usr/include \
  -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc ..
$ cmake --build . --config Release
Afterwards, copy java/src/hdf/hdflib/jarhdf-4.3.0.jar to bin/ in the HDF4 build directory.
Building HDF5
In the next step, clone the HDF5 repository and build with CMake, too:
$ cd /tmp/
$ git clone --depth 1 https://github.com/HDFGroup/hdf5.git
$ cd hdf5/
$ mkdir build && cd build/
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE:STRING=Release \
  -DBUILD_SHARED_LIBS:BOOL=ON -DBUILD_TESTING:BOOL=OFF \
  -DHDF5_BUILD_TOOLS:BOOL=OFF -DHDF5_BUILD_EXAMPLES=OFF \
  -DHDF5_BUILD_FORTRAN=ON -DHDF5_BUILD_JAVA=ON \
  -DZLIB_LIBRARY:FILEPATH=/usr/lib/libz.so \
  -DZLIB_INCLUDE_DIR:PATH=/usr/include \
  -DCMAKE_Fortran_COMPILER=gfortran -DCMAKE_C_COMPILER=gcc ..
$ cmake --build . --config Release
Then, copy java/src/hdf/hdf5lib/jarhdf5-1.15.0.jar and src/libhdf5.settings to bin/ in the HDF5 build directory.
Building HDFView
Finally, clone the HDFView repository, set the build properties, and compile with ant(1):
$ cd /tmp/
$ git clone --depth 1 https://github.com/HDFGroup/hdfview.git
$ cd hdfview/
Set the following properties in build.properties:
hdf.lib.dir     = /tmp/hdf4/build/bin
hdf5.lib.dir    = /tmp/hdf5/build/bin
hdf5.plugin.dir = /tmp/hdf5/build/bin/plugin
build.debug     = false
Build with ant(1):
$ ant run
The binaries are written to build/HDF_Group/HDFView/99.99.99/. The archive swt.jar has to be replaced with the version installed system-wide:
$ cp /usr/local/share/java/classes/swt.jar build/HDF_Group/HDFView/99.99.99/
Replace the last line in build/HDF_Group/HDFView/99.99.99/hdfview.sh with:
java "$JAVAOPTS" -Djava.library.path=".:/usr/local/lib" -Dhdfview.root="." \
  -cp "./*" hdf.view.HDFView "$@"
To start HDFView, run:
$ cd build/HDF_Group/HDFView/99.99.99/
$ sh hdfview.sh
Error Codes
Code | Error Description |
---|---|
0 | No error. |
1 | Generic error. |
2 | Dummy error. |
3 | Invalid input/argument. |
4 | Input/argument missing. |
5 | Type error. |
6 | I/O operation failed. |
7 | Read operation failed. |
8 | Write operation failed. |
9 | End of file. |
10 | End of record. |
11 | Memory allocation failed. |
12 | Out of bounds error. |
13 | Resource exists. |
14 | System call failed. |
15 | No memory. |
16 | Disk full. |
17 | No data. |
18 | Limit reached. |
19 | Timeout occurred. |
20 | Format error. |
21 | Resource not found. |
22 | No permission. |
23 | Read-only access. |
24 | Data corrupted. |
25 | Invalid configuration. |
26 | GeoCOM error. |
30 | Generic database error. |
31 | Invalid database (wrong application id). |
32 | Database is busy. |
33 | Database is locked. |
34 | Database execution failed. |
35 | Database constraint error. |
36 | Database transaction failed. |
37 | Database rollback failed. |
38 | Database prepare failed. |
39 | Database statement finalisation error. |
40 | Database binding failed. |
41 | Database type mismatch. |
42 | Database step failed or no write permission. |
43 | Database returned no rows. |
44 | Database backup error. |
45 | Database attach failed. |
46 | Database detach failed. |
50 | Generic command-line error. |
51 | Argument not passed. |
52 | Argument invalid or missing. |
53 | Argument value missing. |
54 | Argument type mismatch. |
55 | Argument value length invalid. |
56 | Argument is unknown. |
60 | Generic message queue error. |
61 | Empty message. |
70 | Generic regular expression error. |
71 | Failed to compile regular expression. |
72 | Number of matches exceeds array size. |
73 | No match. |
74 | No group. |
80 | Generic sensor error. |
90 | Generic RPC error. |
91 | RPC connection error. |
92 | RPC SSL/TLS error. |
93 | RPC API call failed. |
94 | RPC authorisation error. |
95 | RPC resource exists. |
96 | RPC internal server error. |
100 | Generic mail error. |
101 | Mail connection error. |
102 | Mail SSL/TLS error. |
103 | Mail authorisation error. |
110 | Generic MQTT error. |
120 | Generic Lua error. |
121 | Lua thread (coroutine) yields (not an error). |
122 | Lua runtime error. |
123 | Lua syntax error. |
124 | Lua memory allocation error. |
125 | Lua message handling error. |
126 | Lua file I/O error. |
130 | Generic library error. |
131 | FastCGI library error. |
132 | HDF5 library error. |
133 | Zlib library error. |
134 | Zstandard library error. |