BLINK OF AN EYE: OXBOTICA SELF-DRIVING SOFTWARE ANALYSES 150 VEHICLES A SECOND ON STREETS OF LONDON

20th September 2019

  • To understand the complexity of driving in the city, AVs operating in London analyse 150 vehicles per second – faster than the human eye
  • Oxbotica software can detect traffic lights in 1/2,000th of a second
  • Oxbotica is conducting extensive trialling on some of the most congested and complex roads in the world
  • Oxbotica’s Universal Autonomy software has already been deployed in mines, airport shuttles, trucks and overseas vehicle fleets
  • £8.6M consortium project plots the ecosystem needed to deliver autonomy in London

London, UK. September 20, 2019. Autonomous vehicles driving on London’s complex and congested streets are required to make 150 independent vehicle detections every second and can detect traffic lights in 1/2,000th of a second – faster than the human eye, according to Oxbotica, a world-leading autonomous vehicle software provider.
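The quoted figures imply a tight per-detection time budget, which can be checked with simple arithmetic (the rates are from the release; the blink-duration comparison is a common reference value, not Oxbotica data):

```python
# Timing figures quoted in the release; the blink comparison is a
# common reference value, not Oxbotica data.
detections_per_second = 150
traffic_light_detection_s = 1 / 2000   # 0.5 ms per traffic-light detection

# Time budget per individual vehicle detection.
per_detection_budget_ms = 1000 / detections_per_second
print(round(per_detection_budget_ms, 2))   # 6.67 ms

# A human blink lasts roughly 100-400 ms, so both figures sit well
# inside "the blink of an eye".
print(traffic_light_detection_s * 1000)    # 0.5 ms
```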

The Oxford-based company is already using its pioneering Universal Autonomy software system in cities, mines, airports, quarries and ports. The software can run on everyday computer hardware – similar in power to an average desktop PC.

Oxbotica is also trialling two fully autonomous vehicles in London as part of the DRIVEN consortium, an £8.6 million research project that seeks to address fundamental real-world challenges facing self-driving vehicles such as insurance, cyber-security and data privacy.

Oxbotica has completed extensive trialling in London since its initial trials in the Borough of Hounslow in December 2018, with the capital proving the ideal testing ground due to its classical architecture and complex road networks: it ranks as the sixth most congested city in the world [1] and records over 30,000 road casualties per year [2].

The journey learnings gathered in London can then be applied to improve safety around the globe – whether on the roads of Oxford or in a truck working a mine in northern Australia – thanks to the software’s machine learning algorithms and advanced vision perception.

Paul Newman, Oxbotica founder, said: “As humans, we get better at driving the more experience we have but we don’t share our learnings with each other. This is the covenant for autonomous vehicles. They learn as a community in a way that we don’t. If we, humans, have a mishap, or see something extraordinary, we aren’t guaranteed to make our neighbour or colleague a better driver.

“Even if we could learn from each other like computers can, we can’t share at scale, across vast numbers and we can’t do it all the time. That’s what our AI software will do for every host vehicle wherever it is in the world. Providing life-long shared learning, and with it in-depth, and continually improved knowledge of the local area – allowing our cars to not just read the roads but to predict common hazards with ever greater sophistication.”

The Oxbotica software utilises two strands to achieve Universal Autonomy: Selenium and Caesium. Selenium is a complete end-to-end solution which works anywhere, anytime and is the equivalent of a computer operating system, pulling in data from the sensors fitted to each self-driving vehicle. The data from Selenium is then uploaded to Caesium, a data and vehicle management tool that allows learning to be shared between vehicles anywhere in the world without the need for human input.
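The division of labour described above – on-vehicle perception in Selenium, fleet-wide sharing in Caesium – can be sketched in a few lines. "Selenium" and "Caesium" are Oxbotica's product names; every class, method and field below is a hypothetical illustration of the data flow, not Oxbotica's actual API:

```python
# Hypothetical sketch of the two-strand architecture: an on-vehicle
# perception layer feeding a fleet-level knowledge store.
from dataclasses import dataclass


@dataclass
class Detection:
    """One object detection made by a vehicle's onboard software."""
    kind: str      # e.g. "vehicle" or "traffic_light"
    location: str  # simplified here to a named area of the map


class SeleniumNode:
    """Stand-in for the on-vehicle stack: turns sensor data into detections."""

    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id

    def perceive(self, raw_frames: list) -> list:
        # In reality this is a full perception pipeline; here we simply
        # wrap pre-labelled (kind, location) tuples.
        return [Detection(*frame) for frame in raw_frames]


class CaesiumHub:
    """Stand-in for the fleet-level tool: aggregates and shares learnings."""

    def __init__(self):
        self.shared_knowledge: dict = {}

    def upload(self, detections: list) -> None:
        # Fold each vehicle's observations into the shared store.
        for d in detections:
            key = f"{d.location}:{d.kind}"
            self.shared_knowledge[key] = self.shared_knowledge.get(key, 0) + 1

    def hazards_for(self, location: str) -> dict:
        # Any vehicle, anywhere, can query what the fleet has learned.
        return {k.split(":")[1]: v
                for k, v in self.shared_knowledge.items()
                if k.startswith(location + ":")}


# One vehicle's observations improve every vehicle's local knowledge.
hub = CaesiumHub()
car = SeleniumNode("AV-01")
hub.upload(car.perceive([("vehicle", "hounslow"), ("traffic_light", "hounslow")]))
print(hub.hazards_for("hounslow"))  # {'vehicle': 1, 'traffic_light': 1}
```

The point of the split mirrors the text: the perception layer needs no network to drive, while the hub shares learnings across the fleet without human input.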

Oxbotica, the lead partner in the DRIVEN consortium, will showcase its software capability in a live demonstration in London later this month, part of the ground-breaking 30-month research project that began in July 2017.