Fall Detection using Computer Vision for Industrial Workers
Mount a camera in an industrial or manufacturing location to detect worker falls with computer vision.
In the manufacturing and construction industries, workers face serious health and safety risks every day. Workers on a job site or manufacturing floor can trip over materials and equipment or be struck by falling objects. Falls can cause serious injuries when they are not detected early. As a solution, we are developing a device that can quickly detect falls in a monitored area and alert a designated person, indicating the specific area where the incident occurred. The device consists of a Raspberry Pi 4 and a camera module running a FOMO model that detects falls in real time. Each incident can be written to a database and displayed on a web dashboard, so the safety manager can easily check the current safety status of the monitored facility. At the implementation level, the FOMO-based ML model is applied to the video output from the cameras installed in the monitored area.
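The database and dashboard layer can be kept simple. As an illustration only, the sketch below logs each detected fall into a local SQLite table that a web dashboard could query later; the table name, columns, and the choice of SQLite are assumptions, not the exact stack used in this project.

```python
# Hypothetical incident log: a SQLite table a web dashboard could query.
# Table/column names and the use of SQLite are illustrative assumptions.
import sqlite3
from datetime import datetime, timezone

DB_PATH = "incidents.db"  # assumed local database file


def init_db(path: str = DB_PATH) -> None:
    with sqlite3.connect(path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS fall_incidents (
                   id          INTEGER PRIMARY KEY AUTOINCREMENT,
                   area        TEXT NOT NULL,
                   confidence  REAL NOT NULL,
                   detected_at TEXT NOT NULL
               )"""
        )


def log_incident(area: str, confidence: float, path: str = DB_PATH) -> None:
    """Store one detected fall so the safety manager's dashboard can show it."""
    with sqlite3.connect(path) as conn:
        conn.execute(
            "INSERT INTO fall_incidents (area, confidence, detected_at) VALUES (?, ?, ?)",
            (area, confidence, datetime.now(timezone.utc).isoformat()),
        )


if __name__ == "__main__":
    init_db()
    log_incident(area="Assembly line 2", confidence=0.97)
```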
Data collection is the first step in every machine learning project, and proper data collection is one of the major factors that influence model performance. It helps to capture the items you are collecting from a wide range of perspectives and zoom levels. For data acquisition, you can capture data from any supported device or development board, or upload your own datasets. Since we have our own dataset, we upload it using the Data Acquisition tab.
First we connected the Raspberry Pi to Edge Impulse and captured images with the camera mounted on the roof of the building. To connect the Raspberry Pi to Edge Impulse, please follow this tutorial.
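If your dataset already exists as image files, you can also push it to the project programmatically rather than through the web uploader. The sketch below uses the Edge Impulse ingestion API; the API key, label, and file names are placeholders, and the Data Acquisition tab or the `edge-impulse-uploader` CLI accomplish the same thing.

```python
# Hedged sketch: uploading labeled images via the Edge Impulse ingestion API.
# EI_API_KEY and the file list are placeholders; the Data Acquisition tab or
# the edge-impulse-uploader CLI can be used instead.
import requests

INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"
EI_API_KEY = "ei_..."  # your project API key (Dashboard -> Keys)


def upload_images(paths, label):
    files = [("data", (p.split("/")[-1], open(p, "rb"), "image/jpeg")) for p in paths]
    res = requests.post(
        INGESTION_URL,
        headers={"x-api-key": EI_API_KEY, "x-label": label},
        files=files,
    )
    res.raise_for_status()
    print("Uploaded", len(paths), "images with label", label)


if __name__ == "__main__":
    upload_images(["fall_001.jpg", "fall_002.jpg"], label="Fall")
```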
The more data a neural network has access to, the better it becomes at recognizing the objects of interest.
After collecting the images, we labeled them in the labeling queue. In our case there are only two labels: Standing and Fall. Helpfully, Edge Impulse attempts to automate this procedure by running an object tracking algorithm in the background, which makes labeling much easier. We then split the images between the training and test sets, which is essential for validating the model. We kept a 78/22 ratio, close to the commonly recommended 80/20 split.
This is our Impulse. As you can see, we used 96x96 images with the resize mode set to "Fit shortest axis", since FOMO performs very well with these settings.
In the Image tab we chose Grayscale as the color depth and then generated the features for our images. Even though the objects are the same, the features of the two classes are clearly distinguishable.
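If you want to inspect locally what the model will actually see, the Image block's preprocessing can be approximated as below. This is only a sketch of a "fit shortest axis" resize followed by a grayscale conversion, written with OpenCV; it is not Edge Impulse's own implementation.

```python
# Approximate reproduction of the impulse's image preprocessing:
# scale so the shortest side is 96 px, centre-crop to 96x96, convert to grayscale.
# Illustrative sketch only, not Edge Impulse's exact implementation.
import cv2

TARGET = 96


def preprocess(path: str):
    img = cv2.imread(path)                              # BGR image from disk
    h, w = img.shape[:2]
    scale = TARGET / min(h, w)                          # fit the shortest axis
    img = cv2.resize(img, (round(w * scale), round(h * scale)))
    h, w = img.shape[:2]
    top, left = (h - TARGET) // 2, (w - TARGET) // 2
    img = img[top:top + TARGET, left:left + TARGET]     # centre crop
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # grayscale, 96x96


if __name__ == "__main__":
    gray = preprocess("sample.jpg")
    print(gray.shape)  # (96, 96)
```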
Now it's time to start training the machine learning model. Building a machine learning model from scratch requires a great deal of time, effort, and data. Instead, we use a technique called "transfer learning", which retrains a pre-trained model on our data. That way we can create an accurate machine learning model with less training data. We then adjusted the training parameters to improve accuracy, and finally we got the results shown below.
We are using FOMO (MobileNet V2 0.35) as the neural network.
This is our training output. We got 98% accuracy.
Examining the confusion matrix, it is clear that the model works very well, but we still need to check for overfitting. Below are our results from the Model testing tab, and the model performs very well on the test data too.
For testing, we used images that were not part of the training or test sets. Here we test two sample images to see how the model performs.
In all our testing samples, the model performed very well, so we can go ahead and deploy it to the device.
Using this library, we can run our machine learning models on Linux machines with Python. To set it up, follow this installation guide.
Then we downloaded the model from Edge Impulse and modified the sample code to bring our project to life.
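Our runtime script follows the pattern of the SDK's image classification example. The following is a trimmed sketch of that pattern, assuming the downloaded `.eim` model file is named `modelfile.eim` and a camera is available at index 0; the confidence threshold and the alert/logging hook are placeholders we added for illustration.

```python
# Sketch based on the Edge Impulse Linux Python SDK image example.
# Requires: pip3 install edge_impulse_linux (plus the SDK's system dependencies).
# "modelfile.eim", the camera index, and the 0.8 threshold are assumptions.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # .eim file downloaded from Edge Impulse

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded model:", model_info["project"]["name"])

    camera = cv2.VideoCapture(0)  # assumed camera index
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # The runner expects an RGB frame; it crops/resizes internally.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            features, cropped = runner.get_features_from_image(rgb)
            res = runner.classify(features)

            # FOMO returns object centroids as bounding boxes with a label and score.
            for bb in res["result"].get("bounding_boxes", []):
                if bb["label"] == "Fall" and bb["value"] > 0.8:
                    print("Fall detected at", bb["x"], bb["y"], "score", bb["value"])
                    # Placeholder: write the incident to the database / send an alert here.
    finally:
        camera.release()
```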