Why Self-Driving Cars Still Aren't on Our Roads

It is 2021, and we still do not have self-driving cars on our roads, mainly because of unresolved safety issues. Are fully self-driving cars even achievable?

Image: The 21st century promised us self-driving cars, but will that promise come true?

Do you remember when self-driving cars were just about to arrive? Nearly a decade ago, Google's self-driving vehicle division (now Waymo) promised a world in which people would be chauffeured around by robotic cars.

We were shown computer-generated presentations of future cities full of robotic taxis and futuristic luxury vehicles, in which riders could relax in fully reclined seats while watching high-definition TV.

That’s what they promised us. As it turned out, they were wrong.

Unfulfilled potential

The self-driving industry has suffered major technological and safety setbacks over the past decade, to the point that even John Krafcik, the former CEO of Waymo and one of the most ardent proponents of self-driving cars, is slowly giving up on the vision.

Well, what went wrong?

The simple answer is that society overestimates the potential of even the most advanced technologies and underestimates the abilities of even the least trained drivers.

Despite what many think, driving is a complex and dynamic multi-tasking endeavor. Maintaining a vehicle's speed and position is no easy task given changing weather, traffic, and road conditions, and it draws on the full range of a human driver's mental, perceptual, and motor abilities.

Add to this the difficulty of using opaque, hard-to-understand "intelligent" systems, and it is not surprising that many drivers give up on using assistive systems altogether.

Nor should we forget the numerous attempts by carmakers to mislead the public by attaching names such as "autopilot" to far less capable technology.

People hold driverless cars to much higher safety standards and interpret one-off crashes as proof that these vehicles are too unsafe to be allowed on public roads.

The public reaction

For years, scientists researching the human factors of automated driving have warned of glaring flaws in the push toward self-driving.

Accident reports on vehicles that were, at least in principle, fully capable of handling the simplest highway scenarios have cited the design limitations of these systems as probable causes. Lulled into a false sense of security by "autopilot" driving systems, some drivers feel they no longer need to monitor the vehicle's behavior, which leads to avoidable crashes.

The result has been a decline in public confidence in, and acceptance of, self-driving vehicles. Nor do these crashes help the makers of such vehicles meet their goals.

Don't hold your breath

So what next? Despite the mounting challenges around manual driving labor, commercial trucking, where the technology is already more advanced, and convoy-style "platoon" driving are where self-driving can develop further, at least in the short term.

As for self-driving cars, the next time a self-driving executive tells me to get ready, I'll make sure not to hold my breath.

Can self-driving cars ever be fully safe?

Robotic vehicles have been used for decades in hazardous environments, from decommissioning the Fukushima nuclear power plant to inspecting underwater energy infrastructure in the North Sea. More recently, self-driving vehicles, from boats to food-delivery carts, have made the gentle transition from research centers into the real world with very few hiccups.

Yet the promised arrival of self-driving cars has not progressed beyond the testing phase. And in one 2018 test drive, an Uber self-driving car killed a pedestrian.

Although such crashes happen every day when humans are behind the wheel, the public holds driverless cars to far higher safety standards and interprets one-off accidents as proof that these vehicles are too unsafe to be let loose on public roads.

Despite what many think, driving is a complex and dynamic multi-tasking endeavor.

Designing a fully self-driving car that always makes the safest decision is a huge technical task. Unlike other autonomous vehicles, which typically operate in tightly controlled environments, self-driving cars must function on endlessly unpredictable road networks, rapidly processing many complex variables to stay safe.

Inspired by the Highway Code, we are working on rules that help self-driving vehicles make the safest decision in any conceivable scenario. Verifying that these rules actually work is the final roadblock we must overcome to get self-driving cars safely onto our roads.

Asimov’s first law

Science fiction writer Isaac Asimov penned his "Three Laws of Robotics" in 1942. The first and most important law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." When self-driving cars harm humans, they clearly violate this first law.

At the National Robotarium, we are conducting research aimed at guaranteeing that self-driving vehicles always make decisions that comply with this law. Such a guarantee would resolve the serious safety concerns that are preventing self-driving cars from taking off around the world.

AI software is actually quite good at handling scenarios it has never encountered before. Using "neural networks" loosely inspired by the structure of the human brain, such software can detect patterns in data, such as the movements of cars and pedestrians, and recall those patterns in novel situations.
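
To make that idea concrete, here is a minimal, self-contained sketch of the pattern-learning step. The task, the two motion features, and all the numbers are our own illustrative assumptions rather than any system described in this article: a single artificial neuron learns "is this pedestrian about to cross?" from toy data, then applies the learned pattern to a situation it has never seen.

```python
# Illustrative sketch only: a one-neuron "network" learning a motion pattern.
import numpy as np

rng = np.random.default_rng(0)

# Toy features per pedestrian: [speed toward the curb (m/s), distance to the curb (m)]
X = rng.uniform(low=[0.0, 0.0], high=[2.0, 5.0], size=(200, 2))
# Label the "about to cross" cases: moving fast and already close to the road.
y = ((X[:, 0] > 0.8) & (X[:, 1] < 2.0)).astype(float)

# A single sigmoid neuron trained by plain gradient descent on cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted crossing probability
    w -= 0.5 * X.T @ (p - y) / len(X)        # gradient step on the weights
    b -= 0.5 * (p - y).mean()                # gradient step on the bias

# A situation the network never saw in training: fast pedestrian, very close in.
new = np.array([1.5, 0.5])
print(1.0 / (1.0 + np.exp(-(new @ w + b))))  # high probability -> likely to cross
```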

But we still have to prove that any safety rules taught to self-driving cars will hold in these new situations. To do this, we can turn to formal verification: the method computer scientists use to prove that a rule works in every circumstance.

In mathematics, for example, rules can prove that x + y equals y + x without testing every possible value of x and y. Formal verification does something similar: it allows us to prove how AI software will respond across whole classes of scenarios without our having to test every one of them on public roads.
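
As a tiny illustration of that mathematical point, the commutativity fact can be machine-checked once and for all. Here is a one-line proof in the Lean theorem prover (our choice of tool for the example; the article itself does not name one):

```lean
-- A machine-checked proof that x + y = y + x holds for ALL natural numbers,
-- with no need to test individual values (Lean 4).
example (x y : Nat) : x + y = y + x := Nat.add_comm x y
```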

One of the most notable recent successes in this field is the verification of an AI system that uses neural networks to keep autonomous aircraft from colliding. Researchers formally confirmed that the system will always respond correctly, whatever horizontal and vertical maneuvers the aircraft involved make.
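
To give a flavor of how such a proof works in code, here is a minimal sketch using the Z3 SMT solver. The toy braking controller and its thresholds are entirely our own assumptions, far simpler than the verified aviation system above, but the reasoning pattern is the same: the solver shows that no violating scenario can exist, rather than testing scenarios one by one.

```python
# Minimal formal-verification sketch with the Z3 SMT solver (pip install z3-solver).
from z3 import Real, Solver, And, Not, unsat

gap = Real("gap")      # distance to the obstacle ahead, in meters
speed = Real("speed")  # vehicle speed, in meters per second

# Toy controller: the brake command grows as the gap shrinks and the speed rises.
brake = speed / 10 - gap / 50

# Safety property: if the gap is under 5 m and speed is over 10 m/s,
# the controller must command some braking (brake > 0).
violation = And(gap < 5, speed > 10, Not(brake > 0))

solver = Solver()
solver.add(violation)
if solver.check() == unsat:
    # No values of gap and speed can violate the property: it is proved
    # for every such scenario at once, with no road testing required.
    print("Property holds for all gaps and speeds.")
else:
    print("Counterexample found:", solver.model())
```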

Coding the Highway Code

To keep all road users safe, human drivers follow the Highway Code, relying on the human brain to learn its rules and apply them sensibly in countless real-world scenarios. We can teach the Highway Code to our cars too. That requires us to unpick each rule in the code, teach the vehicles' neural networks to obey it, and then verify that the networks can be trusted to follow the rule in any situation.
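
What might "unpicking a rule" look like? Below is a hedged sketch in which two Highway-Code-style rules are written as machine-checkable predicates; the rule names, scene fields, and thresholds are our own illustrative assumptions:

```python
# Illustrative sketch: two Highway-Code-style rules written as machine-checkable
# predicates over a driving scene and a proposed action.
from dataclasses import dataclass

@dataclass
class Scene:
    light: str        # traffic light state: "red", "amber", or "green"
    gap_m: float      # distance to the vehicle ahead, in meters
    speed_mps: float  # own speed, in meters per second

def rule_stop_at_red(scene: Scene, action: str) -> bool:
    """The vehicle must brake when facing a red light."""
    return scene.light != "red" or action == "brake"

def rule_keep_safe_gap(scene: Scene, action: str) -> bool:
    """The vehicle must brake when the gap falls below a two-second headway."""
    return scene.gap_m >= 2 * scene.speed_mps or action == "brake"

RULES = [rule_stop_at_red, rule_keep_safe_gap]

def violated_rules(scene: Scene, action: str) -> list[str]:
    """Names of every rule the proposed action would break in this scene."""
    return [rule.__name__ for rule in RULES if not rule(scene, action)]

# A planner's proposed action can now be checked against every rule at once:
print(violated_rules(Scene(light="red", gap_m=40.0, speed_mps=13.0), "continue"))
# -> ['rule_stop_at_red']
```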

Verifying that these rules are safely implemented, however, is a challenge in itself. Working out the full consequences of a phrase like "must never" in the Highway Code is complex.

For a self-driving car to react like a human driver in any given scenario, we must encode these policies in a way that handles fine detail, including the occasional scenario in which different rules come into direct conflict, requiring the machine to ignore one or more of them; one possible resolution scheme is sketched below.
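
One possible way to handle such conflicts, under our own simplifying assumptions (the rules, the importance ranking, and the candidate actions are all invented for the example), is to rank the rules and accept whichever action breaks only the least important ones:

```python
# Illustrative sketch: resolving rule conflicts by importance ranking.

RULES = [
    # (importance, name, predicate(scene, action) -> rule satisfied?)
    (2, "avoid_collision",   lambda s, a: not (s["gap_m"] < 2 and a != "brake")),
    (1, "keep_in_lane",      lambda s, a: a != "swerve"),
    (0, "maintain_progress", lambda s, a: a != "brake"),
]

def choose_action(scene: dict, candidates=("brake", "swerve", "continue")) -> str:
    """Pick the action whose worst rule violation is the least important."""
    def cost(action):
        # Importances of every violated rule, most critical first; comparing
        # these lists lexicographically prefers breaking minor rules only.
        return sorted((imp for imp, _, ok in RULES if not ok(scene, action)),
                      reverse=True)
    return min(candidates, key=cost)

# With an obstacle 1.5 m ahead, braking breaks only the low-importance
# "maintain_progress" rule, so the conflict resolves in favor of braking:
print(choose_action({"gap_m": 1.5}))  # -> "brake"
```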

Designing a fully self-driving car that always makes the safest decision is a huge technical task.

Training self-driving cars to be safe will be a dynamic process, dependent on how legal, cultural, and technological experts define safety over time. The AISEC tool is being built with this in mind, providing a "mission control panel" from which the most successful, verified self-driving rules can be monitored, updated, and then made available to industry.

We hope to deliver the first experimental prototype of the AISEC tool by 2024. But we still need to create adaptive verification methods to address remaining safety and security concerns, and it will likely take years to build them and embed them in our cars.

Accidents involving self-driving cars always make headlines. A self-driving car that detects a pedestrian and stops before a collision 99% of the time is a cause for celebration in a research lab, but a killing machine in the real world. By creating robust, verifiable safety rules for self-driving cars, we are trying to make that remaining 1% of accidents a thing of the past.