CVPR challenge pushes researchers to improve car accident detection AI
AI researchers from more than 30 countries came together this week for the AI City Challenge, a competition to spur the development of better machine learning systems for tasks such as detecting car accidents and tracking a vehicle across a network of cameras. Now in its fourth year, the challenge pushes AI researchers to create more efficient Intelligent Transportation Systems (ITS).
Teams from Baidu won three of the four competitions: vehicle counting, multi-camera vehicle reidentification, and car accident and stalled vehicle detection. Organizing committee member and University at Albany assistant professor Ming-Ching Chang said during the virtual workshop that the top-performing model in this category achieved 95.3% accuracy.
A team from Carnegie Mellon University won the remaining challenge, which involved tracking a vehicle across a network of multiple cameras. The benchmark data set for that task stretches across 46 camera views spanning 16 intersections in Dubuque, Iowa.
In total, the competition drew more than 800 individual researchers on 300 teams from 36 nations; 76 teams submitted code for final review. The organizing committee included companies like Amazon and Nvidia as well as researchers associated with Iowa State University, Santa Clara University, and the Indian Institute of Technology Kanpur. Organizers called this year’s AI City Challenge the first to apply the effectiveness and computational efficiency standards the U.S. Department of Transportation says it needs in order to consider deploying this form of automation in the wild. Past competitions focused on transportation systems for traffic signaling, public transit, and infrastructure.
The AI City Challenge was one of numerous challenges hosted this week at the Conference on Computer Vision and Pattern Recognition (CVPR), which was the second-largest annual AI research conference in 2019 and attracted more than 6,500 participants this year. CVPR also hosted competitions for guiding a robot through the RoboTHOR simulated environment, as well as the Deepfake Detection Challenge (although The Register reported that the winning team was disqualified).
This year at the AI City Challenge workshop, National Institute of Standards and Technology (NIST) officials detailed plans for the 2020 ASAPS Prize Challenge. That competition will focus on building real-time automated analytics systems that help law enforcement and first responders detect emergencies as they happen, such as a child falling into a harbor, a medical emergency, or a fire in an abandoned building.
ASAPS is a multimodal challenge for systems capable of ingesting multiple forms of media, from social media posts and text messages to surveillance camera footage and video doorbell feeds. It will feature a series of emergency events in a mock 810-acre city over 24 hours, combining simulated data with physically staged emergencies. The competition will also challenge AI researchers to carry out live video analysis rather than applying AI to prerecorded video.
AI City Challenge organizers said next year’s competition may introduce scenarios involving live video analysis. Dash camera footage is also under consideration.
New to the AI City Challenge this year was a synthetic data set of 190,000 images for the vehicle reidentification challenge. Benchmark data sets used in this year’s competition were curated by Nvidia and came in part from footage provided by the Iowa Department of Transportation.
In related news, opponents of institutional racism who in recent weeks have demanded that lawmakers defund the police have criticized surveillance systems like those championed at the AI City Challenge. Earlier this week, New York City lawmakers passed the POST Act, which requires the nation’s largest police department to disclose what surveillance technology it uses.