Elevator Passenger Counting Using Computer Vision
Automatic FOMO-based people counting using computer vision to increase elevator safety.
As elevators become increasingly indispensable in people's lives, safety has become a growing concern. Overloading is a major cause of elevator accidents. Existing elevators use weighing sensors to measure the load, but these sensors can fail and are expensive to maintain. To help avoid such accidents, we designed a device that counts passengers in real time, at high speed, and raises an alert if the number of passengers exceeds a set threshold.
This device can be mounted anywhere inside an elevator. Compared with existing technology, its implementation cost is much lower and maintenance is easy.
In this prototype, we only consider two floors: the ground floor and the 1st floor. After all the passengers have entered the elevator, someone presses the close button. The device then counts the number of passengers, and if the count is above the threshold it sounds an alarm.
Some of the passengers can then step out in response to this audio-visual alarm.
If the count is not above the threshold, the elevator proceeds as normal. The passenger limit can be set by the user in the code. Check out the Demo Video.
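The check itself is just a comparison against a constant that the user edits in the sketch before flashing. Here is a minimal illustration; the names PASSENGER_LIMIT and isOverloaded are ours and are not taken from the project code.

```cpp
// The passenger limit a user would edit before flashing the board.
// Both names here are illustrative, not taken from the project repository.
const int PASSENGER_LIMIT = 4;

// True when the counted passengers exceed the limit, i.e. when the
// audio-visual alarm should be triggered instead of letting the lift move.
bool isOverloaded(int passengerCount) {
  return passengerCount > PASSENGER_LIMIT;
}
```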
In addition to the overload alert, the device also provides elevator statistics: it logs the passenger count, together with a timestamp, so the data can be opened in Excel. The count is updated every time the close button is pressed.
One of the interesting aspects of this data is that it can easily be visualised with graphs or charts, which makes it useful for anyone analysing elevator usage. Shown below is the count coming from the device, logged in CSV format and opened in Microsoft Excel.
Below are the various graphs generated for the above data.
(Figures: a clustered column chart and a line chart of the logged data.)
Consider the elevators in a shopping mall: using these statistics, the mall owner can easily add elevators if usage is too high, or remove one if usage is too low. This is a key aspect of this project.
In this project, we are using the Nicla Vision, a tiny AI board from Arduino. It features a 2MP colour camera and has the on-board intelligence to process and extract useful information from what it sees.
For data collection, we mounted the board on a tripod and connected it to a laptop using a long USB cable. The image below shows the data acquisition setup.
The whole setup was at one end of the room and we stood at the opposite end, so the microcontroller could easily pick up the passengers. You can follow this tutorial to connect the Nicla Vision to Edge Impulse.
We captured 73 images and split them between training and testing. Each image contains either one or two people. We then labelled each image one by one, using a single class named "people". The orientation of the Nicla Vision produces inverted images, but that is not a problem at all.
This is the machine learning pipeline for this project.
We set the image width and height to 96x96 and the resize mode to "Fit the shortest axis". After saving the impulse, we moved on to the Image tab, chose "Grayscale" as the colour depth, saved the parameters, and generated features for our images. The image below shows the generated features.
These are our neural network training settings and the architecture used to generate the model.
We only changed the number of training cycles from 60 to 70. Further increasing the training cycles or the learning rate could cause the model to overfit, so we stuck with these settings.
For the neural network architecture, we used FOMO (MobileNetV2 0.35). The results were surprising: we got around 96% accuracy for the model (using the quantized int version).
It's time to test the model. First, we ran it on the test data we set aside earlier and got around 84% accuracy, which seems fine.
Now let's move on to live classification. We tested 3 sample images captured from the Nicla Vision to see how our model performs.
In all our testing samples, the model performed very well.
Now that we have our ML model, we need to deploy it to the Nicla Vision. We created an Arduino library by pressing the Build button, which downloads a zip file.
We then added that library to the Arduino IDE and modified the example sketch to get the project done, as sketched below. You can find the code and assets in this GitHub repo.
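As a rough idea of that modification: the exported object-detection examples report detections in an ei_impulse_result_t structure, so counting the "people" detections above a confidence threshold could look something like the sketch below. The header name and the 0.5 threshold are placeholders of our own, and the camera capture and run_classifier() boilerplate from the example sketch is omitted here.

```cpp
// Count "people" detections in an Edge Impulse object-detection result.
// <project_name_inferencing.h> stands in for the header generated for this
// project, and CONFIDENCE_THRESHOLD is our own choice, not the repo's value.
#include <project_name_inferencing.h>

const float CONFIDENCE_THRESHOLD = 0.5f;

int countPeople(const ei_impulse_result_t &result) {
  int people = 0;
  for (size_t i = 0; i < result.bounding_boxes_count; i++) {
    const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[i];
    // FOMO returns one centroid-style box per detected object.
    if (bb.value >= CONFIDENCE_THRESHOLD && strcmp(bb.label, "people") == 0) {
      people++;
    }
  }
  return people;
}
```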
In addition to the Nicla Vision, we used a buzzer and an LED to make the alarm.
But the output current of the Nicla (4.7 mA) is not enough to properly drive the LED and buzzer, so we used a 2N2222A transistor to drive these devices.
So we used an external 5V power supply, in addition to the USB supply that powers the Nicla Vision itself. A push button is also used to check whether the door is closed or not.
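One way to exercise this hardware is sketched below. The pin numbers are assumptions for illustration rather than the actual wiring, and the finished device drives the alarm from the model's passenger count rather than straight from the button.

```cpp
// Pin assignments are assumptions for illustration; adjust to the real wiring.
const int ALARM_PIN  = 2;   // to the 2N2222A base through a series resistor;
                            // the transistor switches the 5V buzzer and LED
const int BUTTON_PIN = 3;   // door-close push button, other leg to GND

void setup() {
  pinMode(ALARM_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);   // reads LOW while the button is pressed
}

void loop() {
  // Mirror the button onto the alarm output just to verify the wiring;
  // the real sketch triggers inference on a press and alarms on overload.
  digitalWrite(ALARM_PIN, digitalRead(BUTTON_PIN) == LOW ? HIGH : LOW);
}
```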
Finally, we made a nice tiny case for this device.
Then we inserted each component into it.
Our device is ready to be deployed.
We used this software to stream data from the Nicla Vision. The streamed data can be logged anywhere and in any format; here we chose CSV (comma-separated values) so the file can easily be opened with Microsoft Excel.
Make sure to tick the timestamp option before logging the data. The image below shows sample data streamed from our device, opened in Excel.
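Since the logging software adds the timestamp on the PC side, the device itself only has to write the count to the serial port. Below is a minimal, hypothetical illustration of that device-side output; in the real sketch the value comes from the model and is printed when the close button is pressed, not on a fixed delay.

```cpp
int passengerCount = 0;   // placeholder: supplied by the FOMO model in the real sketch

void setup() {
  Serial.begin(115200);   // baud rate must match the serial-logging software
}

void loop() {
  // One value per line; the logging tool timestamps each line and writes it
  // as a CSV row that Excel can open directly.
  Serial.println(passengerCount);
  delay(5000);            // placeholder interval for this illustration
}
```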
We can easily generate graphical reports from this data by selecting it in Excel. The figure below shows a line chart for the above data.
There is a wide variety of options available and they are shown below.
This device can easily be integrated with any elevator, so that the elevator only starts when the passenger count is within the permissible range. To reduce cost, an ESP32-EYE-like microcontroller unit could be used instead of the Nicla Vision.