
Middleware platform

From Wikitech-static

The Middleware is the intermediate layer that enables communication among the different entities in the CAMPUS21 system (ECOService). It arises from the need to interconnect multiple heterogeneous sensor networks. First, the physical environment defines the sensors, actuators and controllers at field level, where the communication protocol is also established. A driver translates this physical context into the specific protocol ontology, which treats the data points in a logical way (e.g. as objects). Next, the devices are adapted to the CAMPUS21 ontology by means of virtual devices that represent the physical ones. Finally, virtual controllers dispatch the data to the server entities and databases.
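The physical-to-virtual mapping described above can be sketched as a simple adapter. This is an illustrative sketch only; the class and field names are assumptions, not part of the actual CAMPUS21 implementation.

```python
# Hypothetical sketch: mapping a protocol-specific data point to a virtual
# device in a CAMPUS21-style ontology. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class PhysicalDataPoint:
    """A raw data point as exposed by a field-level protocol driver."""
    protocol: str          # e.g. "BACnet/IP"
    object_id: str         # protocol-specific identifier
    raw_value: float

@dataclass
class VirtualDevice:
    """Protocol-independent representation used by the middleware ontology."""
    device_id: str
    kind: str              # e.g. "temperature_sensor"
    value: float
    unit: str

def adapt(point: PhysicalDataPoint, kind: str, unit: str) -> VirtualDevice:
    """Translate a physical data point into its virtual-device counterpart."""
    return VirtualDevice(
        device_id=f"{point.protocol}:{point.object_id}",
        kind=kind,
        value=point.raw_value,
        unit=unit,
    )

vd = adapt(PhysicalDataPoint("BACnet/IP", "AI-17", 21.5),
           "temperature_sensor", "degC")
```

The virtual device carries a protocol-independent identity, so the server entities never need to know which field-level protocol produced the reading.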

CAMPUS21 MW approach

Middleware architecture

The high-level architecture of the entire CAMPUS21 framework is divided into three major system layers, named Data Layer, Middleware Layer, and Application Logic. The interfaces offered by the various components are represented as arrows pointing to the offering component; for example, interface MW-3 is implemented in the middleware unit Data Acquisition and Control Manager. The items defined and specified in this framework are:

  1. Middleware Layer units:
  • Domain Controller (DC)
  • Data Acquisition and Control Manager (DACM)
  2. Interfaces:
  • MW-1 – interface offered by the DACM to the DC
  • MW-2 – interface offered by the DC to the DACM
  • MW-3 – interface offered by the middleware in general to the application logic layer
CAMPUS21 MW architecture

In the Domain Controller, there are six components, all linked by an OSGi tier:

  • BMS-Data Acquisition/Specific Discovery

The Domain Controller handles the BMS data locally: it receives/captures data from the physical BMS or ICT systems, stores these data in a raw database and, upon request from other components, retrieves data from the database and sends it to the respective component.

  • Local Data Acquisition Scheduler

Performs local scheduled tasks, e.g. periodic data acquisition, scheduled data acquisition, etc. Configurable by other components (in all layers of the framework, i.e. data layer, middleware, and logic layer) for specific operations that the respective components require to be performed periodically.

  • Actuation Triggering

The component that eventually writes the new set-points to the actuators. Can be called by other components.

  • External ICT System Connector

Connects to non-BMS ICT systems.

  • Local Raw Data Storage

Stores raw BMS / external ICT data in an external local database. Retrieves data from that database.
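A minimal sketch of such a raw-data store, backed by SQLite from the Python standard library. The table and column names are assumptions for illustration, not the project's actual schema.

```python
# Illustrative local raw-data store: append time-stamped readings per device
# and retrieve them in chronological order.
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # a file path would be used in practice
conn.execute("CREATE TABLE raw_data (device_id TEXT, ts REAL, value REAL)")

def store(device_id: str, value: float) -> None:
    """Append one reading with the current timestamp."""
    conn.execute("INSERT INTO raw_data VALUES (?, ?, ?)",
                 (device_id, time.time(), value))

def retrieve(device_id: str) -> list:
    """Return all (timestamp, value) rows for a device, oldest first."""
    return conn.execute(
        "SELECT ts, value FROM raw_data WHERE device_id = ? ORDER BY ts",
        (device_id,),
    ).fetchall()

store("sensor-1", 21.5)
store("sensor-1", 21.7)
rows = retrieve("sensor-1")
```

Keeping the store local to the DC is what later allows buffered readings to be replayed to the DACM after a connection loss.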

  • Communicator

Implements the MW-1 and MW-2 interfaces. Cooperates with the Local Raw Data Storage in case of a previous connection loss to communicate the read data to the DACM.

In the Data Acquisition and Control Manager, there are six components, also linked by an OSGi tier:

  • BMS Inventory Consistency Checker

Retrieves current building / building-domain device information, i.e. which sensors and actuators are available and what their capabilities are. Matches the received devices against the stored BIM to identify failures of sensors or actuators (i.e. no response is received during the discovery process) or to find newly installed devices that are not yet in the BIM. Takes appropriate action in unexpected situations (i.e. too many or too few devices having been discovered).
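The core of this consistency check is a comparison of two device sets, which can be sketched as follows. The device identifiers are invented for illustration.

```python
# Sketch of the consistency check: compare devices discovered on the network
# against those recorded in the BIM, using plain set differences.
discovered = {"sensor-1", "sensor-2", "actuator-7"}   # responded to discovery
in_bim     = {"sensor-1", "sensor-2", "sensor-3"}     # recorded in the BIM

failed_or_missing = in_bim - discovered   # in BIM but did not respond
newly_installed   = discovered - in_bim   # found on site but not yet in BIM
```

Here `sensor-3` would be flagged as a possible failure and `actuator-7` as a device still missing from the BIM.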

  • Generic Scheduler

Schedules tasks for running periodic operations.
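A minimal periodic-task scheduler can be sketched with the Python standard library. The real Generic Scheduler is an OSGi component, so this is illustrative only.

```python
# Illustrative periodic scheduler: run a task a fixed number of times at a
# fixed interval, in a background thread.
import threading
import time

def schedule_periodic(interval_s: float, task, runs: int) -> threading.Thread:
    """Run `task` every `interval_s` seconds, `runs` times, in a daemon thread."""
    def loop():
        for _ in range(runs):
            task()
            time.sleep(interval_s)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

results = []
t = schedule_periodic(0.01, lambda: results.append("polled"), runs=3)
t.join()
```

A production scheduler would additionally persist its configured tasks, a need that showed up in practice (see the restart issue noted under Middleware performance).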

  • Data Handler

Handles all data requests from the application logic components: current dynamic data from the building (BMS, external ICT systems) via the Domain Controller functionalities; external data services (e.g. weather data from the Internet) via the Data Warehouse Connector; historical data via the Data Warehouse Connector; BIM data via the BIM Data Connector. Sends control commands to actuators (e.g. changing set-points) by calling the Domain Controller functionalities.
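The routing performed by the Data Handler can be sketched as a dispatch table. The connector functions here are stand-ins mirroring the components above, not real CAMPUS21 APIs.

```python
# Illustrative Data Handler dispatch: route a request to the connector that
# owns the requested data category.
def from_domain_controller(request):   # current BMS / ICT data
    return {"source": "DC", "request": request}

def from_data_warehouse(request):      # historical or external-service data
    return {"source": "DWH", "request": request}

def from_bim_connector(request):       # static building-model data
    return {"source": "BIM", "request": request}

ROUTES = {
    "current":    from_domain_controller,
    "historical": from_data_warehouse,
    "bim":        from_bim_connector,
}

def handle(kind: str, request: str):
    """Dispatch an application-logic request to the appropriate connector."""
    return ROUTES[kind](request)

resp = handle("historical", "energy use, building A, last week")
```

The application logic only ever talks to the Data Handler; which backend answers is an internal routing decision.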

  • Data Warehouse Connector

Functional option: stores the acquired data in a data warehouse for statistical analysis or optimisation operations. Retrieves historical data (e.g. for the entire building, a particular section of the building, or a certain period of time).

  • BIM Data Connector

Retrieves BIM data

  • External Data Connector

Connects to external services such as weather forecasts.

The paradigm for the implementation of the interfaces MW-1 and MW-2 will be REST (Representational State Transfer). Using REST, XML-encoded data will be exchanged between the Domain Controller and the Data Acquisition and Control Manager.

Middleware deployment

CAMPUS21 MW deployment

The system deployment scheme is drawn in the figure. On the left side, the buildings and the associated Middleware Domain Controllers (DCs) are presented. In this scheme, one DC typically corresponds to one building. In the case of UCC, two individual buildings with separate network connectivity are connected to the Middleware, and therefore two DCs have been deployed for this demonstrator. All DCs are connected to a single entity of the Data Acquisition and Control Manager (DACM). In order to allow connectivity while respecting IT security concerns, a VPN has been established between the DCs and the DACM entities. Similarly, application layer partners connect via VPN to the DACM for requesting data.

Relying on OpenVPN technology proved resilient and stable: 24-hour IP-level disconnects and similar outages were largely hidden from the Middleware.

The DACM fetches weather forecast data at regular intervals for storage. Data storage is performed by the DWH, which runs separate data schemes for the different demonstrator sites.

Middleware performance

The maximum number of devices encountered within this project (approximately 10,000 in the Frankfurt Commerzbank Arena), sent via one bulk XML message, did not pose a problem with regard to timing or parsing. The resulting XML payload was ~1.3 MB.
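A rough sanity check of that order of magnitude: serialising ~10,000 data points with one identifier attribute and a value each already yields a payload of several hundred kilobytes. The element names are invented; the real messages carry more metadata per device, which is consistent with the larger ~1.3 MB figure observed.

```python
# Illustrative size estimate for a bulk XML message of 10,000 data points.
import xml.etree.ElementTree as ET

root = ET.Element("bulk")
for i in range(10_000):
    dp = ET.SubElement(root, "dataPoint", id=f"device-{i:05d}")
    dp.text = "21.5"

payload = ET.tostring(root)        # serialized bytes, no pretty-printing
size_mb = len(payload) / 1e6       # roughly 0.45 MB for this minimal schema
```

Payload sizes in this range are comfortably parsed in memory, matching the observation that bulk messages posed no timing or parsing problem.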

A typical Application Level request for 10 or fewer devices related to the Grass Heating system of the Frankfurt Commerzbank Arena takes 0.3 seconds overall (for reference: a 64-byte ping between the DACM and the Commerzbank Arena DC via the OpenVPN tunnel typically has an RTT of 18.5 ms). This includes XML serialization/de-serialization via TCP/IP on MW-1, MW-2 and MW-3, the OpenVPN and Internet connectivity delay, as well as the BACnet/IP communication.

Since the initial deployments, the Middleware has periodically polled BMS data from the Frankfurt and Valladolid demonstration sites at intervals of 10 and 15 minutes for weeks without requiring restarts. Since then, the Middleware has been running without stability problems, interrupted only by occasional partner-dependent site maintenance. The designed fall-back mechanism, which buffers the data in the DCs while the DACM is not available and "re-plays" the collected data events once the DACM becomes available again, proved successful. However, it was identified that DACM and DC restarts resulted in a loss of configured Scheduler state, so a simple persistence mechanism was created.
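The buffer-and-replay fall-back can be sketched as follows. The transport is faked with a callable that raises on connection loss; all names are illustrative, not the actual Communicator API.

```python
# Illustrative buffer-and-replay: queue readings while the DACM is
# unreachable, then replay them in their original order once it returns.
from collections import deque

class BufferingCommunicator:
    def __init__(self, send):
        self.send = send        # transport callable; raises ConnectionError
        self.buffer = deque()   # readings not yet acknowledged by the DACM

    def publish(self, reading):
        self.buffer.append(reading)
        self.flush()

    def flush(self):
        """Replay buffered readings in order; stop on connection loss."""
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except ConnectionError:
                return          # DACM unavailable: keep buffering
            self.buffer.popleft()

delivered = []
online = [False]

def send(reading):
    if not online[0]:
        raise ConnectionError
    delivered.append(reading)

comm = BufferingCommunicator(send)
comm.publish("t=1 21.5")        # buffered: DACM down
comm.publish("t=2 21.7")
online[0] = True
comm.flush()                    # replayed in original order
```

Only sending a reading removes it from the buffer, so nothing is lost across an outage; persisting the buffer (as the Local Raw Data Storage does) would additionally survive a DC restart.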

In this deployment, processor load rarely exceeds 3% of CPU, and memory usage stays within the default Java heap size of 128 MB. The Middleware's memory and CPU load peaks occur when Java objects need to be serialized/de-serialized for MW-1 and MW-2. The Linux tool top typically shows a load average below 0.1; for example, the output of cat /proc/loadavg on Nov 14th 2013 was: 0.01 0.03 0.05 1/329 25366

Extensibility and scalability

Additional, independent ICT systems, providing functionalities beyond those of the existing BMS systems, will be integrated either via extensions of the existing BMS (if applicable) or via Data Layer interface DL-1 and its connectivity with the Domain Controller. External data services of any kind (e.g. internet-based weather forecast services) are a principal part of the Data Layer and are integrated via DL-4x interfaces. Due to the separation of the Data Layer and the Middleware, extending the sensor array with new devices (so-called upscaling) is enabled via the data layer interfaces DL-1x. The Middleware will provide access to any new sensor data infrastructure through these interfaces. Extensibility of the Application Logic is independent of the Middleware services.

Via the open interface MW-3, connectivity to all Application Logic services is ensured using standard HTTP and XML technologies for communication. By splitting the functionality of the middleware into the Domain Controller (DC) and the Data Acquisition and Control Manager (DACM), components and bundles can be distributed over a number of servers. In addition, the components can be geographically distributed. This allows, for example, building facility managers to manage buildings of the same owner in a centralized manner. Being able to have multiple DCs also helps to manage highly complex systems, since the tasks and data traffic to be handled on the site of an asset can be shared by several DCs if need be. It is even envisaged to use load-balancing techniques for managing assets that are too complex to be handled by a single DC.