Time Series Database Cache Module by Automation Professionals

Time Series Database Cache


The Time Series Database Cache module supplies dataset manipulation and database query functions that are optimized for use with large and/or high-resolution database tables containing time-stamped data.

As of version 1.9 of the NoteChart Module, its EasyNoteChart component will automatically use these caches for DB pens and corresponding histograms, unless explicitly disabled in the component's properties. The timeSeriesCache() expression function remains usable with Ignition's Classic Chart component, the regular NoteChart component, the Notes datasets in either NoteChart component, or any other component that accepts time-series datasets.


Requests for data are organized by datasource, table, timestamp column, and optional WHERE clause into individual caches, and the behavior of each combination can be customized through the gateway web interface. The defaults are suitable for light- to medium-duty databases. When a request is first received, any data still present from a previous request that fits the given criteria is immediately returned from the local cache. The fraction of the request that wasn't immediately satisfied is passed on to the background query engine in the gateway.
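The hit/miss behavior described above can be pictured as interval arithmetic: the portion of the requested span already covered by cached spans is returned at once, and the uncovered gaps are forwarded to the background query engine. The following is a minimal, illustrative Python sketch of that split, not the module's internal code; the function name and representation of spans as (start, end) pairs are assumptions for illustration. Times may be epoch numbers or datetimes, anything comparable.

```python
def split_request(req_start, req_end, cached_spans):
    """Return (hits, misses): portions of [req_start, req_end) already
    covered by cached_spans, and the remaining gaps that would be
    forwarded to the background query engine."""
    hits, misses = [], []
    cursor = req_start
    for c_start, c_end in sorted(cached_spans):
        if c_end <= cursor or c_start >= req_end:
            continue  # this cached span does not overlap the request
        if c_start > cursor:
            # gap before this cached span: must be queried
            misses.append((cursor, min(c_start, req_end)))
        # overlap with this cached span: served immediately from cache
        hits.append((max(c_start, cursor), min(c_end, req_end)))
        cursor = min(c_end, req_end)
    if cursor < req_end:
        misses.append((cursor, req_end))  # trailing gap
    return hits, misses
```

For example, a request for 15..35 against cached spans 10..20 and 30..40 yields immediate hits 15..20 and 30..35, with only 20..30 passed to the query engine.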

For a given cache combination, requests for various time spans and value columns (via expression function, script function, or compatible module) are combined from all clients. The requests are then sorted and divided by time span into large bulk historical requests and/or small realtime requests, and queued to separate execution managers. The default realtime span is five seconds before to ten seconds after now(). Bulk requests are further subdivided by request priority, though less important columns will be included in more important time spans. Warning: due to the design of the split between bulk and realtime spans, rows with timestamps beyond the realtime window will not be returned.
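The partitioning above, including the warning about timestamps beyond the realtime window, can be sketched as follows. This is an illustrative Python model using the default window of now()-5s to now()+10s, with times as plain epoch seconds; the function name and return shape are assumptions, not the module's API.

```python
def classify_span(start, end, now, rt_before=5, rt_after=10):
    """Split a requested span into a bulk-historical portion and a
    realtime portion, mirroring the default realtime window of
    now()-5s .. now()+10s. Returns (bulk, realtime); either may be None.
    Note: any part of the request beyond now()+rt_after falls outside
    both queues, so those rows would not be returned."""
    rt_start, rt_end = now - rt_before, now + rt_after
    bulk = (start, min(end, rt_start)) if start < rt_start else None
    realtime = (max(start, rt_start), min(end, rt_end)) if end > rt_start else None
    return bulk, realtime
```

With now() at t=100, a request for 0..120 splits into a bulk span 0..95 and a realtime span 95..110; the tail 110..120 is silently dropped, which is the behavior the warning describes.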

Scripted requests must be repeated at regular intervals at least until all missing rows arrive. Continuing regular requests after that produces no new database activity but will hold the corresponding data in the cache. The timeSeriesCache() expression function automatically continues requests to maintain its data as long as its containing window is open. Cached data is discarded when it hasn't been requested for several minutes, or when the last requestor releases its handle.

Bulk queries are LIMITed to a configurable chunk size per query and the arriving chunks are delivered via push notifications from the gateway to all interested clients. Queries for the realtime window are also delivered by push notifications. Timing for both bulk and realtime queries is configurable as well.
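One plausible way to picture the chunking above: a large bulk request becomes a sequence of LIMITed queries, and each chunk is pushed to interested clients as it arrives rather than waiting for the whole result. The sketch below is a conceptual Python illustration (the module presumably chunks by timestamp ranges internally; the offset/limit pairing here is an assumption for clarity).

```python
def bulk_chunks(total_rows, chunk_size):
    """Yield (offset, limit) pairs covering total_rows; each completed
    chunk would be delivered to clients by push notification."""
    offset = 0
    while offset < total_rows:
        yield offset, min(chunk_size, total_rows - offset)
        offset += chunk_size
```

A 25-row backlog with a chunk size of 10 would arrive as three pushes: 10 rows, 10 rows, then the final 5.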

Cache data transferred from the gateway to a Vision client will be tracked in detail so that overlapping requests from that client do not have to repeatedly send the same data. Each Vision client will maintain its own cache with its own timeouts. Thus, a client may still have data in cache after the gateway has discarded it.

Expression Functions

Scripting Functions

Supporting Data Types

Usage Notes

Caching very high-resolution data, or allowing extremely long timespans, will consume substantial Java heap memory. Pathologically large queries that would cause DB timeouts in clients may cause Java heap allocation errors instead. Be sure to set client launch properties to allow high memory usage. Similar considerations apply to the memory allowance in the gateway.
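When sizing client or gateway memory, a rough back-of-the-envelope estimate helps: rows in the cached span times the bytes held per row. The sketch below is purely illustrative; the per-value and per-row overhead constants are assumptions, not measurements of the module, and real JVM overhead varies with object layout.

```python
def estimated_heap_bytes(span_ms, resolution_ms, value_columns,
                         bytes_per_value=8, overhead_per_row=48):
    """Rough heap footprint of one cached span: one timestamp plus
    value_columns values per row, plus an assumed per-row object
    overhead. All constants are illustrative."""
    rows = span_ms // resolution_ms
    return rows * ((1 + value_columns) * bytes_per_value + overhead_per_row)
```

For example, six hours of 100 ms data with three value columns is 216,000 rows, on the order of 17 MB per cache under these assumptions; a few such caches quickly justify a generous heap setting.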

When using caching to optimize identical realtime charts on multiple clients, consider using a Gateway Timer Event script to keep the desired timespan cached in the gateway. This will make the client charts open most efficiently, even if all clients are closed at some point. Have a project script module like the following, and call its preload() function every couple of seconds:

preloadHandle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')

def preload():
    global preloadHandle
    endts = system.date.now()
    begints = system.date.addHours(endts, -6)
    try:
        # Discard returned dataset.
        system.db.getSeriesCache(preloadHandle, begints, endts, 'valCol1', 'valCol2', 'valCol3')
    except:
        # Deal with an expired cache handle and try again.
        preloadHandle = system.db.registerSeriesCache('mysource', 'mytable', 't_stamp')
        system.db.getSeriesCache(preloadHandle, begints, endts, 'valCol1', 'valCol2', 'valCol3')