Case Study – Smart Factory

CHARIOT aims to reveal the added value of the holistic networking of IoT through an Industry 4.0 use-case, namely, the Smart Urban Factory. The scenarios were implemented on a lab-scale smart factory testbed with many diverse physical components, i.e., sensors and actuators, so as not to rely on heavy industrial equipment. In addition, simulation environments capable of running hardware-in-the-loop were developed to introduce more complexity and to evaluate the scalability of the CHARIOT framework. The level of device heterogeneity and service complexity is sufficiently high to reflect the need for holistic networking of IoT entities.

Autonomous Smart Factory Toward Mass Customization

We aim to address how the components of CHARIOT can realize the smart urban factory environment, i.e., how smart things can be connected to each other in the production-line process in cooperation with humans and robots, how the warehouse stores products, and how the products are delivered. All of these challenging processes involve various devices, ranging from primitive to complex ones, as well as human actors. To this end, we focus on a scenario and applications that ensure an autonomous mass-customization process.

The CHARIOT demonstrator consists of heterogeneous devices that have different characteristics and run under different runtime environments. To demonstrate the flexibility of the CHARIOT Runtime Environment (RE), we integrated several REs that are already widely used by the community, e.g., ROS, OctoPrint, and Kura. Due to the diversity of communication protocols, we implemented an API covering the main communication protocols (e.g., WebSocket, MQTT, HTTP) to ensure data exchange between devices, their device agents, and the CHARIOT RE. Each device is represented by a device agent that forwards its data to a data gateway, and these gateways store the data on a cloud server. Through the CHARIOT framework's knowledge management capability, this data is accessible to all devices, services, and applications, as well as to the machine learning algorithms. This makes it possible to run complex machine learning applications in real time, e.g., our Predictive Maintenance (PM) app.
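
To make this data path concrete, the following minimal sketch shows how a device agent could forward sensor readings to a data gateway over MQTT. The broker address, topic layout, and payload schema are hypothetical placeholders for illustration, not the actual CHARIOT API.

    import json
    import time

    import paho.mqtt.client as mqtt

    GATEWAY_HOST = "gateway.local"  # hypothetical data-gateway broker address
    TOPIC = "chariot/devices/servo-01/telemetry"  # hypothetical topic layout

    def read_sensor():
        """Stand-in for a real sensor read (e.g., servo current and temperature)."""
        return {"timestamp": time.time(), "current_a": 1.2, "temp_c": 41.7}

    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.connect(GATEWAY_HOST, 1883)
    client.loop_start()

    # Forward one reading per second to the gateway, which persists it in the
    # cloud store for the knowledge management service.
    while True:
        client.publish(TOPIC, json.dumps(read_sensor()), qos=1)
        time.sleep(1.0)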

Integrated demonstrators on the CHARIOT framework for the autonomous smart factory

Applications

Predictive Maintenance Application

The “Predictive Maintenance App” (PM-App) is a representative IoT application that demonstrates not only the added value of the interconnected components, but also the rich functionality of the CHARIOT Knowledge Management component. PM-App offers a generic capability to integrate and perform predictive maintenance on any IoT device. It executes a separate instance for each requesting device, which provides an adaptive ML process on the individual device's data and delivers prediction results in real time. Upon a device's prediction request, the PM service calls generic machine learning functions with the requested parameters and algorithm for training, unless a trained model with these parameters is already available, and then performs the real-time prediction. The app accesses both the historical and real-time data of the requesting device through our Knowledge Management Service, while the ML functionality is provided by the ML component. For this purpose, we have implemented three anomaly detection algorithms on the ML component, One-Class Support Vector Machine (OCSVM), k-Nearest Neighbors (k-NN), and kernel Principal Component Analysis (kPCA), which are available to the app for training and classification operations. The prediction process outputs whether a data point reflects normal or abnormal device behavior. Subscribed agents (e.g., a human agent) are notified of the results through our notification mechanism. We ran and tested the app for CHARIOT's real-time knowledge management on a servo motor driving a conveyor belt. This app is also coupled with an Adaptive Control application that further analyzes the predicted behaviors and autonomously makes operational decisions to protect the servo motor from possible harm while keeping the operation running, e.g., by lowering the motor speed to reduce power drain. A genetic algorithm has been integrated into the ML component to make optimal operation decisions considering both the motor's and the operation's life.
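
To illustrate the prediction step, the sketch below shows OCSVM-based anomaly detection of the kind described above, using scikit-learn. The features (motor current and temperature), the synthetic training data, and the nu parameter are illustrative assumptions rather than the exact CHARIOT ML-component implementation.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    # Historical servo-motor data under normal operation (synthetic stand-in;
    # columns: current in A, temperature in deg C).
    history = np.random.normal(loc=[1.2, 40.0], scale=[0.1, 2.0], size=(500, 2))

    # Train once per parameter set, as the PM service does, then reuse the model.
    scaler = StandardScaler().fit(history)
    model = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(history))

    def predict(sample):
        """Classify one real-time data point as normal or abnormal behavior."""
        label = model.predict(scaler.transform([sample]))[0]  # +1 normal, -1 outlier
        return "normal" if label == 1 else "abnormal"

    print(predict([1.25, 41.0]))  # close to training data -> normal
    print(predict([3.00, 75.0]))  # far from training data -> abnormal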


Notification Mechanism

The notification mechanism distributes identified tasks to the most suitable workers inside the factory. A worker is informed about a new assignment request through the notification mechanism app on her smartwatch. In addition, each permitted worker can communicate with devices such as robots in both directions: the worker can execute a service on the robot, and the device can inform the worker about its current status. As part of the task assignment procedure, the health status of the worker can be monitored, provided the worker agrees to share her smartwatch sensor data with the IoT middleware. The monitored data is compared with health data trained under normal conditions, and the currently tracked data is then evaluated using a model that takes parameters such as the worker's age and medical records into consideration. If the smart factory service detects any abnormality in this comparison, the system may cancel the execution of a task by this worker. Worker location data is also part of the notification mechanism, as a location detection module can assist workers with indoor navigation during working hours. Since many wearables have built-in Bluetooth sensors, the worker's position can be detected by deploying Bluetooth beacons inside the factory, so that time-critical tasks can be assigned to the closest worker with the required skills and permissions.
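
A minimal sketch of this assignment rule follows: pick the nearest permitted worker who holds all required skills, using beacon-derived positions. The data model, names, and coordinates are hypothetical.

    from dataclasses import dataclass, field
    from math import dist

    @dataclass
    class Worker:
        name: str
        position: tuple                    # (x, y) estimated from Bluetooth beacons
        skills: set = field(default_factory=set)
        permitted: bool = True

    def assign_task(task_location, required_skills, workers):
        """Return the nearest permitted worker holding all required skills."""
        candidates = [w for w in workers
                      if w.permitted and required_skills <= w.skills]
        return min(candidates, key=lambda w: dist(w.position, task_location),
                   default=None)

    workers = [
        Worker("alice", (2.0, 3.5), {"welding", "assembly"}),
        Worker("bob", (10.0, 1.0), {"assembly"}),
    ]
    chosen = assign_task((3.0, 3.0), {"assembly"}, workers)
    print(chosen.name if chosen else "no eligible worker")  # -> alice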


Human-Robot Collaboration Application

This application enables cooperation between the human workers and the robot in the context of our smart factory use-case. It integrates two cameras to monitor human activities, the container sensors to track the status of a pick-and-place task on a conveyor belt, the robot agent that controls the robot arm, and the human agent as an interface between the humans and our CHARIOT CPS. Through the human activity recognition application described below, the app stores the recognized human head and hand gestures together with the task status. The processed information, such as whether “the human is doing the task wrong”, “the human is tired or has lost attention”, or “the human is doing fine with the task”, is then used to make human-aware decisions. The human workers are also categorized by expertise based on their observed actions, and these categories are continuously updated by the system according to their previous performance. Example robot decisions in this context are: “I should help the human with the task” and “the human is not available, or the human is a beginner and currently not capable of doing the task, so I should take it over”. The corresponding CHARIOT agents developed for the application, i.e., the camera agent, the robot arm agent, the container agents, and the human agent, are orchestrated by this application. Finally, the app returns the generated action decisions to the robot agent, which executes them on the robot arm. This application demonstrates the human integration of the CHARIOT framework and how we utilize the human in our CPS.
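
The decision logic described above can be summarized as a simple rule. The sketch below is an illustrative reduction with made-up state names and action strings, not the application's actual vocabulary.

    def decide_robot_action(task_ok, human_available, expertise):
        """Map the recognized human and task state to a robot action."""
        if not human_available or expertise == "beginner":
            return "take over the task"            # human cannot do it right now
        if not task_ok:
            return "help the human with the task"  # human is doing the task wrong
        return "continue monitoring"               # human is doing fine

    print(decide_robot_action(task_ok=False, human_available=True,
                              expertise="expert"))
    # -> help the human with the task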


Human Activity Recognition Application and Human Interfacing with a Robotic CPS

Human Activity Recognition (HAR)

In this application, we have developed a HAR agent to recognize human activities in a human-robot collaboration scenario, where the recognized actions are further used by a robot arm to interpret and track a human's control signals, success status on an assignment, erroneous behaviors such as lost attention, and availability. In this way, we learn and respond to a human actor's characteristics and preferences. We recognize the human gestures of staying idle, being unavailable for the task, acting on the task, and warning the robot (shown in the figure above as a to d, respectively). Additionally, the app semantically recognizes when a product is placed in a container, in order to reason about a human's success on the pick-and-place task (as shown in the figure, label c). Once a human actor is assigned the role of working on the assembly line, the agent representation of the human actor (the human agent) incorporates camera input as a property and receives the raw camera data from the camera agent. The HAR agent retrieves this data from the human agent and processes the images to recognize human activities, which are stored under the human data model as a property. The human agent multicasts these properties to the relevant agent groups in CHARIOT, making them available to a variety of smart factory applications. For example, our human-robot collaboration agent (HRC agent) uses an application layer protocol to subscribe to these multicast properties. In this way, the HRC agent generates human-aware plans for the robot arm.
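
The sketch below illustrates this property flow under the assumption that MQTT stands in for the application layer protocol; the broker address, topic name, and property schema are invented for illustration.

    import json

    import paho.mqtt.client as mqtt

    PROPERTY_TOPIC = "chariot/human/worker-01/activity"  # hypothetical topic

    # Human-agent side: multicast the activity recognized by the HAR agent.
    publisher = mqtt.Client()
    publisher.connect("broker.local", 1883)
    publisher.publish(PROPERTY_TOPIC,
                      json.dumps({"gesture": "acting_on_task", "confidence": 0.91}))

    # HRC-agent side: subscribe to the property and plan on each update.
    def on_message(client, userdata, msg):
        prop = json.loads(msg.payload)
        if prop["gesture"] == "warning_robot":
            print("pausing the robot arm")  # hook for human-aware planning

    subscriber = mqtt.Client()
    subscriber.on_message = on_message
    subscriber.connect("broker.local", 1883)
    subscriber.subscribe(PROPERTY_TOPIC)
    subscriber.loop_forever()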


Autonomous Warehousing

We have developed a simulated smart factory environment that realizes an autonomous warehousing operation. This environment provides many simulated devices, increasing the device heterogeneity and contributing to the scalability tests of the CHARIOT framework. All the sensors and actuators, namely, the conveyor belts, the containers (with proximity sensors detecting when a package is present), the 3D printers, the storage units (with proximity sensors), the charging stations (with the battery status of a charging robot), and the transporter robots (with their task load and battery levels), are integrated into the CHARIOT framework through their corresponding device agents. Thanks to the CHARIOT middleware and the runtime environments, the simulation environment is able to communicate with the CHARIOT components, such as the agents, the applications, and the knowledge management service. With that, we are also able to run the warehousing operation as a hardware-in-the-loop system that receives commands from the real devices, hence realizing an autonomous smart factory operation spanning production, assembly, packaging, and storage. In addition, for the autonomous warehousing operation, we have developed task and path planning applications for the transporter robots that ensure collision-free and optimal navigation.
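
The planner itself is not detailed here, so the sketch below illustrates the general idea of collision-free shortest-path planning with A* search on an occupancy grid (1 marks an obstacle); the grid, start, and goal are made up.

    import heapq

    def astar(grid, start, goal):
        """Return a collision-free shortest path on a 4-connected grid, or None."""
        rows, cols = len(grid), len(grid[0])
        open_set = [(0, start, [start])]
        visited = set()
        while open_set:
            _, (r, c), path = heapq.heappop(open_set)
            if (r, c) == goal:
                return path
            if (r, c) in visited:
                continue
            visited.add((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set,
                                   (len(path) + h, (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row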


AR Object Recognition

Augmented Reality App

A smart factory CPS is a complex environment for untrained personnel. AR enables intuitive interfaces for workers by providing the required information on how to interact with a CPS entity, how to apply maintenance instructions, etc. AR integrates virtually generated content into the real world in order to hide the complexity of the smart factory CPS. The AR app is composed of two main features: (i) object recognition and (ii) indoor navigation. Object recognition detects and identifies pre-configured physical entities using either natural feature detection or marker detection; the recognized object's functionalities are then obtained through the CHARIOT Middleware and displayed on the worker's mobile device. The indoor navigation module localizes the user within an environment using predefined markers that are associated with exact coordinates on the floor plan, and navigates her to the target destination by computing the shortest path. Whenever a marker is detected, the app recomputes the user's position within the learned environment and adjusts the visualized navigation path accordingly. The AR app retrieves all entity information from the smart factory and supplies the connected application with the required data, pushes notifications of changed or updated information using the MQTT protocol, and delegates action requests to the environment system.
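
As a concrete illustration of the navigation step, the sketch below models markers as nodes of a graph with known floor-plan positions and recomputes the shortest path from the last detected marker using Dijkstra's algorithm; the marker names and edge weights are invented.

    import heapq

    def shortest_path(graph, source, target):
        """Dijkstra over the marker graph; returns the marker sequence to follow."""
        queue = [(0.0, source, [source])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == target:
                return path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, weight in graph.get(node, {}).items():
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
        return None

    # Hypothetical marker graph: edge weights are walking distances in meters.
    markers = {
        "entrance": {"corridor": 5.0},
        "corridor": {"entrance": 5.0, "press": 7.5, "storage": 4.0},
        "press": {"corridor": 7.5},
        "storage": {"corridor": 4.0},
    }
    print(shortest_path(markers, "entrance", "storage"))
    # -> ['entrance', 'corridor', 'storage']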