Researchers from Edith Cowan University have developed a new computer vision technology that uses camera footage to detect drunk drivers.
The researchers collected data and video footage of alcohol-impaired drivers in a controlled but realistic environment. The participants spanned three levels of alcohol intoxication (sober, low intoxication, and severely intoxicated) and were recorded while driving a simulator. The researchers then ran footage of the drivers’ faces through a machine learning system that uses discernible cues from standard RGB (red, green, and blue) video to gauge the degree of alcohol-related impairment, drawing on elements such as facial features, gaze direction, and head position.
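To make the approach concrete, below is a minimal sketch in Python of the kind of three-class pipeline described here: per-clip facial cues fed into a classifier that predicts sober, low, or severe intoxication. The feature layout, synthetic data, and choice of scikit-learn classifier are illustrative assumptions, not the research team's actual implementation.

```python
# Minimal sketch (not the authors' code) of a three-class intoxication
# classifier driven by facial cues. All feature names and shapes are
# assumptions for illustration; the data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for cues extracted from RGB face video, e.g. gaze direction,
# head-pose angles, and summary statistics of facial landmarks.
n_clips, n_features = 600, 10
X = rng.normal(size=(n_clips, n_features))
y = rng.integers(0, 3, size=n_clips)  # 0 = sober, 1 = low, 2 = severe

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Three-class accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```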
ECU Ph.D. student Ms. Ensiyeh Keshtkaran, who worked on the project, said the system detects varying levels of alcohol intoxication with an overall accuracy of 75% on the three-level classification. She further explained that the system can identify intoxication levels at the beginning of a drive, potentially allowing impaired drivers to be stopped before they are on the road. “This sets it apart from methods reliant on observable driving behaviors, which require extended active vehicle operation to identify impairment.”
ECU Senior Lecturer Dr. Syed Zulqarnain Gilani said this technology is the first to use a standard RGB camera to detect alcohol intoxication levels from signs of impairment in drivers’ faces, confirming that intoxication levels can be detected using only a camera.
“The next step in our research is to define the image resolution needed to employ this algorithm. If low resolution videos are proven sufficient, this technology can be employed by surveillance cameras installed on roadside, and law enforcement agencies can use this to prevent [drunk] driving,” he explained.
According to Techxplore, the dataset used in the study also includes 3D and infrared videos of the driver’s face, rear-view RGB videos showing driver posture and steering interactions, driving-simulation event logs, and screen recordings of driving behavior. This computer vision-based approach could potentially be integrated into road cameras, similar to how such cameras currently detect whether occupants are wearing seatbelts or whether the driver is using a phone.
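For readers curious how such a multimodal collection might be organized, the following is a hypothetical sketch of a single record; the field names and types are assumptions for illustration only, not the study's published data format.

```python
# Hypothetical sketch of one record in a multimodal driving dataset like the
# one described above; all field names and types are illustrative assumptions.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class DriveRecording:
    participant_id: str
    intoxication_level: int     # 0 = sober, 1 = low, 2 = severe
    face_rgb_video: Path        # standard RGB footage of the driver's face
    face_3d_video: Path         # 3D capture of the face
    face_ir_video: Path         # infrared footage of the face
    rearview_rgb_video: Path    # driver posture and steering interactions
    simulator_event_log: Path   # driving-simulation event log
    screen_recording: Path      # screen recording of driving behavior
```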
This innovation could be revolutionary for road safety, as drunk driving is one of the leading causes of car crashes and fatalities worldwide. While other methods exist for detecting drunk drivers, they rely primarily on random breath tests, which are neither widespread nor efficient enough. Ms. Keshtkaran notes that most current research on detecting intoxicated driving centers on analyzing driving behavior (such as steering patterns, pedal usage, and vehicle speed), while other approaches incorporate external sensors, such as alcohol-detection or touch-based sensors.
However, there have so far been very few attempts to explore the potential of computer vision techniques for identifying signs of intoxication based on biobehavioral changes in drivers, and this innovation could help bring about a future of safer roads.