
The problem with self-driving cars



Aka: Why this burning money pit has failed to produce meaningful results for decades.

The future is here, and it looks nothing like we expected. As we approach the 10-year anniversary of AlexNet, we have to critically examine the successes and failures of machine learning.

We are looking out from a higher plateau.

We have achieved things in computer vision, natural language processing and speech recognition that would have been unthinkable just a few years ago. By all accounts, the accuracy of our AI systems exceeds the wildest imaginations of yesteryear. 

And yet, it’s not enough. 

We were wrong about the future. Every prediction about self-driving cars has been wrong. We are not living in a future of autonomous cyborgs, and something else has come into focus. 

Augmentation over automation.

Humans crave control. It is one of our deepest, most instinctual desires. There is no world where we give it up. One of the biggest misunderstandings of the AI community today is that people become comfortable with automation over time. As the reliability of automated solutions is proven, the microwave background comfort of society steadily rises.

This is false.

The history of technology is not the history of automation. It is the history of control and abstraction. We are tool-builders, so uncomfortable with experiences beyond our control that for thousands of years we developed entire civilizations and mythos around the movement of the heavens. So it is with all technology.

And so it is with AI.

Since the early days, the problem with self-driving cars has been obvious: there’s no control. When we look at the successful implementations of self-driving cars — now several years old — we see lane assist and parallel parking. We see situations and use cases where the control pane between human and machine is obvious. In all other situations, where the goal has been the pursuit of mythical level 5 autonomy, self-driving cars have failed miserably.

Technology is not the bottleneck.

In 1925 we had a radio-controlled car navigating the streets of New York City through a busy traffic jam without a driver behind the wheel. At the 1939 World’s Fair, Norman Bel Geddes’ Futurama exhibit outlined a plausible smart highway system that would use magnetized spikes — like electromagnetic fiducials — embedded in the road to guide cars. He predicted that autonomous cars would be the dominant form of transportation by the 1960s.

Of course, he was wrong too.

Not about the technology though. No, “smart highways” have been tremendously successful and straightforward where they’ve been implemented. Even without additional infrastructure, we’ve got self-driving cars today that are more than capable of driving as safely as humans. Yet, even with more than $80 billion flowing into the field from 2014 to 2017, we have no self-driving cars. For reference, the $108 billion the U.S. federal government committed to public transit over a 5-year period was the largest investment the country has ever made in public transportation.

The difference, of course, is that I can actually ride a train.

The problem, fundamentally, is that nobody has bothered to think about the new control panes that we’re trying to enable. The question was never about automating driving. That’s a myopic, closed-minded way of thinking. The question is about how to transform the transit experience.

Cars suck.

They’re big, loud, smelly and basically the most inefficient form of transportation someone could imagine. They’re the most expensive thing a person owns after their home, but they don’t create value. It’s not an asset that anybody wants to own, it’s an asset that people have to own. It’s a regressive tax that destroys the planet and subsidizes the highways that blight our cities. It’s an expensive, dangerous hunk of metal that sits unused in an expensive garage nearly 100% of the time.

Cars suck.

And making them self-driving solves approximately none of these problems. That’s the problem. When we spend too much time focusing on the quasi-mythical state of full automation, we ignore the impactful problems that sit in front of us. Uber was successful because you could call a car with the press of a button. Leases are successful, despite the cost, because it’s a different control pane for the car. These are new transit experiences.

So, where’s the actual opportunity?

I think that companies like Zoox have an interesting and compelling thesis. By focusing on the rider experience, and critically by designing a highly novel interface for teleguidance, I think they have a real shot at delivering something useful out of the self-driving car frenzy. It’s important to realize, though, that their teleguidance system is not some temporary bridge to get from here to there. The teleguidance system and its supporting architecture are arguably a more defensible breakthrough for them than any algorithmic advantage. That, combined with a model that eliminates ownership, delivers a compelling vision. Of … ya know … a bus.

Don’t be distracted.

I haven’t used Zoox’s teleguidance system. I don’t know for certain that it is more efficient than driving, but at least they’re pointed in the right direction. We have to stop thinking about self-driving cars as fully autonomous. When level 5 autonomy is always right around the corner, there’s no need to think about all the messy intermediate states. The truth is that those messy intermediate states are the whole point.

This is the crux of the problem with self-driving cars.

If you’re an investor looking for the first company that’s going to “solve” self-driving cars, you’re barking up the wrong tree. The winner is the company that can actually deliver improved unit economics on the operation of a vehicle. Until we solve that problem, all of the closed track demos and all of the vanity metrics in the world mean nothing. We’re dreaming about the end of a race when we haven’t even figured out how to take the first step.

And the barrier isn’t machine learning.

It’s user experience. 

Slater Victoroff is founder and CTO of Indico Data.



BMW shipping cars sans advertised Apple and Google features

The global chip shortage continues to cause problems for automakers to the point where some are shipping vehicles without all of their advertised features.

BMW, for example, is shipping some of its new cars without support for Apple CarPlay and Android Auto, according to a recent report by Automotive News.

In an email to affected customers, the German auto giant confirmed that some vehicles built between January and April of this year contain chips that require updated software in order to be able to offer Apple CarPlay and Android Auto. The necessary update will be rolled out “by the end of June at the latest,” the automaker said.

The issue is reportedly the result of BMW switching chip suppliers in a bid to deal with the shortage as efficiently as possible. In other words, switching suppliers meant it didn’t have to halt shipments while waiting for chips to arrive. Instead, it has been able to fit the new supplier’s chips and ship the cars, the only catch being that it needs to roll out updated software to activate certain features.

It’s not clear how many customers and vehicle models are impacted by BMW’s decision to ship vehicles without CarPlay and Android Auto, but Automotive News’ own research suggests the situation involves the automaker’s American, British, French, Italian, and Spanish markets.

While the issue may be an unwelcome annoyance for customers, it shouldn’t prove to be too much trouble provided BMW delivers on its promise to resolve the problem by the end of next month. It’s certainly better than the automaker holding on to the vehicle until the functionality can be added.

Digital Trends has reached out to BMW for more information on the situation and we will update this article when we hear back.

BMW’s decision to ship vehicles without all of the advertised features is similar to moves made by other car companies in recent months. Ford, for example, also cited the global chip shortage for its decision to ship some of its Explorer SUVs without particular features, though it promised to add them when the chips become available.

In Ford’s case, it meant shipping some of its Explorers without functionality for rear seat controls that operate heating, ventilation, and air conditioning, though they are controllable from the driver’s seat.

Caused by pandemic-related supply chain problems and other factors, the chip shortage isn’t expected to end anytime soon, with Intel’s chief saying last month that it could take several more years for his company to get on top of the situation.



Self-driving cars don’t need LiDAR

What is the technology stack you need to create fully autonomous vehicles? Companies and researchers are divided on the answer to that question. Approaches to autonomous driving range from just cameras and computer vision to a combination of computer vision and advanced sensors.

Tesla has been a vocal champion of the pure vision-based approach to autonomous driving, and at this year’s Conference on Computer Vision and Pattern Recognition (CVPR), its chief AI scientist Andrej Karpathy explained why.

Speaking at the CVPR 2021 Workshop on Autonomous Driving, Karpathy, who has led Tesla’s self-driving efforts in recent years, detailed how the company is developing deep learning systems that need only video input to make sense of the car’s surroundings. He also explained why Tesla is in the best position to make vision-based self-driving cars a reality.

A general computer vision system

Deep neural networks are one of the main components of the self-driving technology stack. Neural networks analyze on-car camera feeds for roads, signs, cars, obstacles, and people.

But deep learning can also make mistakes in detecting objects in images. This is why most self-driving car companies, including Alphabet subsidiary Waymo, use lidar, a device that creates 3D maps of the car’s surroundings by emitting laser beams in all directions. Lidar provides added information that can fill the gaps in the neural networks’ perception.

However, adding lidars to the self-driving stack comes with its own complications. “You have to pre-map the environment with the lidar, and then you have to create a high-definition map, and you have to insert all the lanes and how they connect and all the traffic lights,” Karpathy said. “And at test time, you are simply localizing to that map to drive around.”

It is extremely difficult to create a precise map of every location the self-driving car will travel. “It’s unscalable to collect, build, and maintain these high-definition lidar maps,” Karpathy said. “It would be extremely difficult to keep this infrastructure up to date.”

Tesla does not use lidars and high-definition maps in its self-driving stack. “Everything that happens, happens for the first time, in the car, based on the videos from the eight cameras that surround the car,” Karpathy said.

The self-driving technology must figure out where the lanes are, where the traffic lights are, what is their status, and which ones are relevant to the vehicle. And it must do all of this without having any predefined information about the roads it is navigating.

Karpathy acknowledged that vision-based autonomous driving is technically more difficult because it requires neural networks that function incredibly well based on the video feeds only. “But once you actually get it to work, it’s a general vision system, and can principally be deployed anywhere on earth,” he said.

With the general vision system, you will no longer need any complementary gear on your car. And Tesla is already moving in this direction, Karpathy says. Previously, the company’s cars used a combination of radar and cameras for self-driving. But it has recently started shipping cars without radars.

“We deleted the radar and are driving on vision alone in these cars,” Karpathy said, adding that the reason is that Tesla’s deep learning system has reached the point where it is a hundred times better than the radar, and now the radar is starting to hold things back and is “starting to contribute noise.”

Supervised learning

The main argument against the pure computer vision approach is that there is uncertainty about whether neural networks can do range-finding and depth estimation without help from lidar depth maps.

“Obviously humans drive around with vision, so our neural net is able to process visual input to understand the depth and velocity of objects around us,” Karpathy said. “But the big question is can the synthetic neural networks do the same. And I think the answer to us internally, in the last few months that we’ve worked on this, is an unequivocal yes.”

Tesla’s engineers wanted to create a deep learning system that could perform object detection along with depth, velocity, and acceleration. They decided to treat the challenge as a supervised learning problem, in which a neural network learns to detect objects and their associated properties after training on annotated data.
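To make that formulation concrete, here is a minimal sketch of what a supervised, multi-task network of this kind could look like. The layer sizes, head names, and loss weighting are illustrative assumptions for the example, not Tesla’s actual design.

```python
# Illustrative sketch: one shared backbone predicts object class, depth, and
# velocity from a camera frame, trained purely on annotated labels.
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared convolutional backbone over a single camera frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per supervised target.
        self.cls_head = nn.Linear(64, num_classes)  # object class
        self.depth_head = nn.Linear(64, 1)          # distance to object (m)
        self.vel_head = nn.Linear(64, 2)            # velocity (m/s, x and y)

    def forward(self, frames):
        feats = self.backbone(frames)
        return self.cls_head(feats), self.depth_head(feats), self.vel_head(feats)

def training_step(model, frames, labels, optimizer):
    """One supervised step: every target comes from the annotated dataset."""
    cls_logits, depth, vel = model(frames)
    loss = (nn.functional.cross_entropy(cls_logits, labels["cls"])
            + nn.functional.l1_loss(depth, labels["depth"])
            + nn.functional.l1_loss(vel, labels["vel"]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```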

To train their deep learning architecture, the Tesla team needed a massive dataset of millions of videos, carefully annotated with the objects they contain and their properties. Creating datasets for self-driving cars is especially tricky, and the engineers must make sure to include a diverse set of road settings and edge cases that don’t happen very often.

“When you have a large, clean, diverse datasets, and you train a large neural network on it, what I’ve seen in practice is… success is guaranteed,” Karpathy said.

Auto-labeled dataset

With millions of camera-equipped cars sold across the world, Tesla is in a great position to collect the data required to train the car vision deep learning model. The Tesla self-driving team accumulated 1.5 petabytes of data consisting of one million 10-second videos and 6 billion objects annotated with bounding boxes, depth, and velocity.

But labeling such a dataset is a great challenge. One approach is to have it annotated manually through data-labeling companies or online platforms such as Amazon Mechanical Turk. But this would require a massive manual effort, would cost a fortune, and would be very slow.

Instead, the Tesla team used an auto-labeling technique that involves a combination of neural networks, radar data, and human review. Since the dataset is annotated offline, the neural networks can run the videos back and forth, compare their predictions with the ground truth, and adjust their parameters. This contrasts with test-time inference, where everything happens in real time and the models have no chance to revisit earlier frames.

Offline labeling also enabled the engineers to apply very powerful and compute-intensive object detection networks that can’t be deployed on cars and used in real-time, low-latency applications. And they used radar sensor data to further verify the neural network’s inferences. All of this improved the precision of the labeling network.

“If you’re offline, you have the benefit of hindsight, so you can do a much better job of calmly fusing [different sensor data],” Karpathy said. “And in addition, you can involve humans, and they can do cleaning, verification, editing, and so on.”
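To see why hindsight helps, here is a toy sketch of offline smoothing and radar cross-checking. The data structures, smoothing window, and review threshold are assumptions made for illustration, not details of Tesla’s pipeline.

```python
# Toy sketch of offline auto-labeling: smooth depth estimates with hindsight
# (looking forward and backward in the clip) and flag clips where camera and
# radar disagree so a human can review them.
from dataclasses import dataclass

@dataclass
class Track:
    frame: int
    bbox: tuple           # (x, y, w, h) in pixels
    depth_m: float        # distance estimated by the camera network
    radar_depth_m: float  # distance reported by radar for the matched object

def smooth_depths(tracks, window=5):
    """Average each depth estimate over neighboring frames in both directions."""
    smoothed = []
    for i, t in enumerate(tracks):
        lo, hi = max(0, i - window), min(len(tracks), i + window + 1)
        avg = sum(x.depth_m for x in tracks[lo:hi]) / (hi - lo)
        smoothed.append(Track(t.frame, t.bbox, avg, t.radar_depth_m))
    return smoothed

def needs_human_review(tracks, tolerance_m=2.0):
    """Flag clips where the camera and radar estimates disagree too much."""
    return any(abs(t.depth_m - t.radar_depth_m) > tolerance_m for t in tracks)
```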

According to videos Karpathy showed at CVPR, the object detection network remains consistent through debris, dust, and snow clouds.

Karpathy did not say how much human effort was required to make the final corrections to the auto-labeling system. But human cognition played a key role in steering the auto-labeling system in the right direction.

While developing the dataset, the Tesla team found more than 200 triggers that indicated the object detection needed adjustments. These included problems such as inconsistency between detection results from different cameras, or between the camera and the radar. They also identified scenarios that might need special care, such as tunnel entry and exit and cars with objects on top.

It took four months to develop and refine all these triggers. As the labeling network improved, it was deployed in “shadow mode,” meaning it is installed in consumer vehicles and runs silently without issuing commands to the car. The network’s output is compared to that of the legacy network, the radar, and the driver’s behavior.
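As a rough illustration of what a shadow-mode trigger could look like, the snippet below flags frames where the candidate network disagrees with the legacy network or with radar. The specific checks and thresholds are assumptions for the example, not the triggers Tesla described.

```python
# Hypothetical shadow-mode trigger: the candidate network runs silently and
# its output is compared against the legacy network and the radar.
def disagreement_trigger(candidate_ranges_m, legacy_ranges_m, radar_ranges_m,
                         tolerance_m=3.0):
    """Return True when a frame should be uploaded for labeling review.

    Each argument is a list of per-object range estimates for one frame.
    """
    # Trigger 1: the candidate and legacy networks disagree on object count.
    if len(candidate_ranges_m) != len(legacy_ranges_m):
        return True
    # Trigger 2: a candidate range estimate strays too far from the radar.
    return any(abs(cam - radar) > tolerance_m
               for cam, radar in zip(candidate_ranges_m, radar_ranges_m))
```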

The Tesla team went through seven iterations of data engineering. They started with an initial dataset on which they trained their neural network. They then deployed the deep learning in shadow mode on real cars and used the triggers to detect inconsistencies, errors, and special scenarios. The errors were then revised, corrected, and if necessary, new data was added to the dataset.

“We spin this loop over and over again until the network becomes incredibly good,” Karpathy said.

So, the architecture can better be described as a semi-auto labeling system with an ingenious division of labor, in which the neural networks do the repetitive work and humans take care of the high-level cognitive issues and corner cases.

Interestingly, when one of the attendees asked Karpathy whether the generation of the triggers could be automated, he said, “[Automating the trigger] is a very tricky scenario, because you can have general triggers, but they will not correctly represent the error modes. It would be very hard to, for example, automatically have a trigger that triggers for entering and exiting tunnels. That’s something semantic that you as a person have to intuit [emphasis mine] that this is a challenge… It’s not clear how that would work.”

Hierarchical deep learning architecture

Tesla’s self-driving team needed a very efficient and well-designed neural network to make the most out of the high-quality dataset they had gathered.

The company created a hierarchical deep learning architecture composed of different neural networks that process information and feed their output to the next set of networks.

The deep learning model uses convolutional neural networks to extract features from the videos of the eight cameras installed around the car and fuses them using transformer networks. It then fuses the features across time, which is important for tasks such as trajectory prediction and for smoothing out inference inconsistencies.

The spatial and temporal features are then fed into a branching structure of neural networks that Karpathy described as heads, trunks, and terminals.

“The reason you want this branching structure is because there’s a huge amount of outputs that you’re interested in, and you can’t afford to have a single neural network for every one of the outputs,” Karpathy said.

The hierarchical structure makes it possible to reuse components for different tasks and enable feature-sharing between the different inference pathways.
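As a rough sketch of that backbone-fusion-heads layout (with layer sizes, a stand-in temporal module, and head names chosen purely for illustration, since the full architecture has not been published):

```python
# Sketch of the "per-camera backbones -> cross-camera fusion -> temporal
# fusion -> branching heads" structure described above. All dimensions and
# module choices are illustrative assumptions.
import torch
import torch.nn as nn

class PerCameraBackbone(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):       # x: (batch, 3, H, W)
        return self.net(x)      # -> (batch, feat_dim)

class MultiCameraNet(nn.Module):
    def __init__(self, num_cameras=8, feat_dim=128):
        super().__init__()
        self.backbones = nn.ModuleList(
            PerCameraBackbone(feat_dim) for _ in range(num_cameras))
        # Fuse per-camera features with a small transformer encoder.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                           batch_first=True)
        self.camera_fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Fuse across time (a GRU stands in for the temporal module).
        self.temporal_fusion = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Branching heads share the fused features.
        self.lane_head = nn.Linear(feat_dim, 4)     # e.g. lane parameters
        self.object_head = nn.Linear(feat_dim, 16)  # e.g. object attributes
        self.light_head = nn.Linear(feat_dim, 3)    # e.g. traffic-light state

    def forward(self, clips):
        # clips: (batch, time, cameras, 3, H, W)
        b, t, c, _, h, w = clips.shape
        per_time = []
        for ti in range(t):
            cams = [self.backbones[ci](clips[:, ti, ci]) for ci in range(c)]
            fused = self.camera_fusion(torch.stack(cams, dim=1)).mean(dim=1)
            per_time.append(fused)
        temporal, _ = self.temporal_fusion(torch.stack(per_time, dim=1))
        final = temporal[:, -1]  # features for the latest time step
        return (self.lane_head(final), self.object_head(final),
                self.light_head(final))

# Example: one clip of 4 time steps from 8 cameras at 64x96 resolution.
# outputs = MultiCameraNet()(torch.randn(1, 4, 8, 3, 64, 96))
```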

Another benefit of the modular architecture of the network is the possibility of distributed development. Tesla is currently employing a large team of machine learning engineers working on the self-driving neural network. Each of them works on a small component of the network and they plug in their results into the larger network.

“We have a team of roughly 20 people who are training neural networks full time. They’re all cooperating on a single neural network,” Karpathy said.

Vertical integration

In his presentation at CVPR, Karpathy shared some details about the supercomputer Tesla is using to train and finetune its deep learning models.

The compute cluster is composed of 720 nodes, each containing eight Nvidia A100 GPUs with 80 gigabytes of video memory, amounting to 5,760 GPUs and more than 450 terabytes of VRAM. The supercomputer also has 10 petabytes of ultra-fast NVMe storage and 640 Tbps of networking capacity to connect all the nodes and allow efficient distributed training of the neural networks.
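As a quick sanity check on those figures (simple arithmetic only, no additional disclosed specifications):

```python
# Cluster figures quoted above: 720 nodes of eight 80GB A100s each.
nodes, gpus_per_node, vram_per_gpu_gb = 720, 8, 80

total_gpus = nodes * gpus_per_node                    # 5,760 GPUs
total_vram_tb = total_gpus * vram_per_gpu_gb / 1000   # about 460 TB of VRAM
print(total_gpus, round(total_vram_tb, 1))
```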

Tesla also owns and builds the AI chips installed inside its cars. “These chips are specifically designed for the neural networks we want to run for [full self-driving] applications,” Karpathy said.

Tesla’s big advantage is its vertical integration. Tesla owns the entire self-driving car stack. It manufactures the car and the hardware for self-driving capabilities. It is in a unique position to collect a wide variety of telemetry and video data from the millions of cars it has sold. It also creates and trains its neural networks on its proprietary datasets, its special in-house compute clusters, and validates and finetunes the networks through shadow testing on its cars. And, of course, it has a very talented team of machine learning engineers, researchers, and hardware designers to put all the pieces together.

“You get to co-design and engineer at all the layers of that stack,” Karpathy said. “There’s no third party that is holding you back. You’re fully in charge of your own destiny, which I think is incredible.”

This vertical integration and repeating cycle of creating data, tuning machine learning models, and deploying them on many cars puts Tesla in a unique position to implement vision-only self-driving car capabilities. In his presentation, Karpathy showed several examples where the new neural network alone outmatched the legacy ML model that worked in combination with radar information.

And if the system continues to improve, as Karpathy says, Tesla might be on the track of making lidars obsolete. And I don’t see any other company being able to reproduce Tesla’s approach.

Open issues

But the question remains as to whether deep learning in its current state will be enough to overcome all the challenges of self-driving. Surely, object detection and velocity and range estimation play a big part in driving. But human vision also performs many other complex functions, which scientists call the “dark matter” of vision. Those are all important components in the conscious and subconscious analysis of visual input and navigation of different environments.

Deep learning models also struggle with causal inference, which can be a huge barrier when the models face new situations they haven’t seen before. So, while Tesla has managed to create a huge and diverse dataset, open roads are also very complex environments where new and unpredictable things can happen all the time.

The AI community is divided over whether you need to explicitly integrate causality and reasoning into deep neural networks or whether you can overcome the causality barrier through “direct fit,” where a large and well-distributed dataset will be enough to reach general-purpose deep learning. Tesla’s vision-based self-driving team seems to favor the latter (though given their full control over the stack, they could always try new neural network architectures in the future). It will be interesting to see how the technology fares against the test of time.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.





Why the dream of truly driverless cars is slowly dying

We were told there’d be fully autonomous vehicles dominating roadways across the globe by 2018. When that didn’t happen, we were told they’d be here by the end of 2020. And when that didn’t happen, we were told they were just around the corner.

It seems like the experts are now taking a more pragmatic approach. According to a recent report from the Wall Street Journal, many computer science experts believe we could be at least a decade away from fully driverless cars, and some believe they may never arrive.

The big problem

Driverless vehicle technology is a difficult challenge to solve.

As Elon Musk puts it:

A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work, as the entire road system is designed for biological neural nets with optical imagers.

Despite this assessment, Musk continues to push the baseless idea that Tesla’s Full Self-Driving feature is on the cusp of turning the company’s cars into fully autonomous vehicles.

FSD is a powerful driver-assistance feature, but no matter how far the goalposts are moved, there’s still no indication that Tesla is any closer to achieving actual full self-driving than any other manufacturer.

Why were the old experts so wrong?

In 2014, deep learning saw an explosion in popularity on the back of work from numerous computer scientists, including Ian Goodfellow. His work developing and defining the generative adversarial network (aka the GAN, a framework in which two neural networks, a generator and a discriminator, are trained against each other) made it seem like nearly any feat of autonomy could be accomplished with algorithms and neural networks.
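For readers unfamiliar with the idea, here is a minimal, self-contained GAN training loop on toy two-dimensional data. The network sizes and hyperparameters are arbitrary illustrative choices.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot tell apart from "real" toy data.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" distribution
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```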

GANs are responsible for many of the amazingly human-like feats modern AI is able to accomplish including DeepFakes, This Person Does Not Exist, and many other systems.

The modern rise of deep learning acted as a rising tide that lifted the field of artificial intelligence research and turned Google, Amazon, and Microsoft into AI-first companies almost overnight.

Now, artificial intelligence is slated to be worth nearly a trillion dollars by 2028, according to experts.

But those gains haven’t translated into the level of technomagic that experts such as Ray Kurzweil predicted. AI that works well in the modern world is almost exclusively very narrow, meaning it’s designed to do a very specific thing and nothing else.

When you imagine all the things a human driver has to do – from paying attention to their surroundings to navigating to actually operating the vehicle itself – it quickly becomes a matter of designing either dozens (or hundreds) of interworking narrow AI systems or figuring out how to create a general AI.

What the future holds

The invention and development of a general AI would likely be a major catalyst for solving driverless cars, but so would three wishes from a genie.

This is because general AI is just another way of saying “an AI capable of performing any task a human can.” And, so far, the onset of general AI remains far-future technology.

In the near future we can expect more of the same. Driver-assistance technology continues to develop at a decent pace and tomorrow’s cars will certainly be safer and more advanced than today’s. But there’s little reason to believe they’ll be driving themselves in the wild any time soon.

We’re likely to see specific areas set aside in major cities across the world and entire highways designated for driverless vehicle technologies in the next few years. But that won’t herald the era of driverless cars.

There’s no sign whatsoever (outside of the aforementioned Tesla gambit) that any vehicle manufacturer is within even a five-year window of developing a consumer production vehicle rated to drive fully autonomously.

And that’s a good indication that we’re at least a decade or more from seeing a consumer production vehicle made available to individual buyers that doesn’t have a steering wheel or means of manual control.

Driverless cars aren’t necessarily impossible. But there’s more to their development than just clever algorithms, brute-force compute, and computer vision.

According to the experts, it’ll take a different type of AI, a different approach altogether, a massive infrastructure endeavor, or all three to move the field forward.





The UK will make ‘self-driving’ cars legal this year

The UK is making self-driving partially legal this spring, so you should have an easier time behind the wheel. But you shouldn’t plan on watching a movie or playing games on your phone while driving just yet.

A BBC report noted that the Department for Transport will allow automated driving modes that are limited to a single lane, with a speed cap of 60 km/h (37 mph).

The government said that vehicles with automated lane-keeping systems (ALKS) will have to receive GB type approval to be classified as self-driving cars. It added that drivers won’t need to pay attention to the road or keep their hands on the steering wheel, but they must remain alert. While that’s the official position, you’re better off keeping your hands on the wheel at all times, even with ALKS engaged.

If the vehicle’s automated system issues a warning, the driver must be able to take back control within 10 seconds.
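As a simplified illustration of those constraints, the sketch below encodes the 60 km/h cap and the 10-second takeover window as a toy state machine. The state names and control-loop interface are assumptions, not the regulation’s wording.

```python
# Toy ALKS state machine reflecting the rules described above.
from typing import Optional

ALKS_SPEED_CAP_KMH = 60
TAKEOVER_WINDOW_S = 10

def alks_step(speed_kmh: float, transition_demand_age_s: Optional[float]) -> str:
    """Return the system state for one control-loop tick."""
    if transition_demand_age_s is not None:
        if transition_demand_age_s > TAKEOVER_WINDOW_S:
            return "minimum_risk_maneuver"   # driver failed to take over in time
        return "awaiting_driver_takeover"
    if speed_kmh > ALKS_SPEED_CAP_KMH:
        return "assist_unavailable"          # above the cap for ALKS operation
    return "lane_keeping_active"
```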

Last May, the UK started working on a 300 km route to test self-driving vehicles in a public setting. The idea was to let a car drive through different environments, such as urban, suburban, rural, and motorway roads, to try out different scenarios.

Later, in August, it opened an industry consultation on ALKS-equipped vehicles to shape regulations. Now, the transport department has put out an initial set of regulations.

Tesla’s Autopilot system is one of the best-known ALKS-style features in the industry. Last year, a UK-based YouTuber was warned for using it, as it was illegal at the time. But soon, that YouTuber, and many others, will be able to use such a system in the country.





EVs are safer than traditional cars, but uhm, why?

Electric cars have the potential to help in our fight against climate disaster. For example, if all cars in the UK were electric, the country’s emissions would drop by 12%.

But electric cars might also be able to address another issue that’s affecting people around the world.

Traffic-related fatalities are the eighth leading cause of death for people of all ages – ahead of HIV/AIDS and tuberculosis – and the number one cause of death for children and young adults.

Both because of the way they are driven and the mechanics inside them, electric vehicles could play an important role in making our roads safer. Charging an electric car takes longer than filling up a tank of petrol, and a typical full charge won’t get a car as far as a typical full tank.

Although the academic research into it is limited, anecdotal evidence suggests drivers of electric vehicles are more conscious of conserving energy, and drive differently as a result. The UK’s Department for Transport has published a set of guidelines on how to save energy when driving an electric car.

One way to save a car’s charge is to drive more slowly. Sticking to the 70mph motorway speed limit, rather than going faster, saves battery. So does reducing stop-start driving and accelerating and braking more gently. These habits also have the secondary effect of making the driver safer.

Another aspect of life with an electric car, particularly on long journeys, is taking an extended break to charge. Instead of just filling up a tank of petrol and being on their way, electric car drivers have to wait while their battery charges. Exact times vary depending on the type of charger being used and the capacity of the battery, but it’s not unusual for electric car drivers to wait half an hour to an hour at a service station for a charge.

If you’ve ever seen the signs across the motorway imploring you to “take a break”, you will know that stopping to recharge, and maybe to grab some food or a coffee, helps concentration on long journeys. Drivers who take breaks are safer than those who do not.

But it’s not just about drivers’ habits. There are reasons related to the way electric cars are designed that make them safer than internal combustion engines, too.

More than one motor

Most people are used to driving cars with a single source of power – an internal combustion engine. But when it comes to electric cars, it’s normal to have two, sometimes even four electric motors in the same vehicle.

This opens up possibilities that couldn’t be imagined before. For example, when you push your car’s throttle pedal, the vehicle interprets the position of the pedal to decide how much force it needs to accelerate. When this force is applied to a rotating part, like a wheel, it creates a torque. But which motors should the torque be allocated to? This is the principle of “torque vectoring”: the ability to distribute traction or braking torque between the different motors within the vehicle.

Torque vectoring could be used to make vehicles safer. If different amounts of torque are applied to the left and right sides of the vehicle, a turning effect will be produced. This can be used to influence the vehicle’s cornering response, making it safer, especially in critical conditions such as avoiding a crash when taking a corner too fast, or in case of hard swerves to avert an obstacle.
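To make the idea concrete, here is a toy sketch of a yaw-rate-based torque split between the two motors on one axle. The controller gain, torque limits, and interfaces are illustrative assumptions rather than a production stability controller.

```python
# Toy torque vectoring: shift torque between left and right motors to correct
# the difference between the intended and measured yaw rate.
def torque_split(driver_torque_nm: float,
                 yaw_rate_target: float,
                 yaw_rate_measured: float,
                 k_yaw: float = 40.0,
                 max_motor_torque_nm: float = 250.0):
    """Split the requested torque between the left and right motors.

    A positive yaw-rate error (the car is turning less than intended, i.e.
    understeering in a left-hand bend) shifts torque toward the outer wheel
    to help rotate the car.
    """
    yaw_error = yaw_rate_target - yaw_rate_measured   # rad/s
    delta = k_yaw * yaw_error                         # corrective torque, Nm

    def clamp(t: float) -> float:
        return max(-max_motor_torque_nm, min(max_motor_torque_nm, t))

    left = clamp(driver_torque_nm / 2 - delta / 2)
    right = clamp(driver_torque_nm / 2 + delta / 2)
    return left, right

# Example: 200 Nm requested while understeering in a left-hand bend.
print(torque_split(200.0, yaw_rate_target=0.5, yaw_rate_measured=0.3))
```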

My team and I compared the safety of a torque vectoring technique to a car with an electronic stability control system, which has been a mandatory requirement for cars in the UK since 2014. We found that in some situations – like driving on slippery roads – torque vectoring can help prevent the driver losing control of the vehicle.

More than 90% of accidents are due to human error. According to the World Health Organization, “road traffic injuries are predicted to become the fifth leading cause of death by 2030, unless action is taken”. Future vehicle engineers and researchers have plenty to deal with.

Already here

Some manufacturers are already using torque vectoring. The Subaru Forester uses brake torque vectoring, for example. Other cars use different variations of the technology, achieving an uneven distribution of torque between left and right sides in different ways. But this technology is usually reserved for high-end luxury or sports cars.

At the moment electric cars are still being developed, and manufacturers have yet to make the most of the torque vectoring capability. Advanced safety systems could also be devised that would anticipate rather than react to loss of control situations, dramatically enhancing passenger vehicle safety.

In the future, vehicles are expected to be both autonomous and electric. Essentially this is due to environmental aspects – reduced emissions – and the fact that in a few years the cost of electric vehicles is expected to be similar to traditional vehicles.

The capabilities of electric vehicles, such as torque vectoring, can contribute to making these vehicles safer than traditional cars. This is very important considering claims that autonomous vehicles need to be four to five times safer than standard vehicles in order for the public to accept this “self-driving” technology.

This article by Basilio Lenzo, Senior Lecturer in Automotive Engineering, Sheffield Hallam University is republished from The Conversation under a Creative Commons license. Read the original article.





What Waymo’s new leadership means for its self-driving cars

Waymo, Alphabet’s self-driving car subsidiary, is reshuffling its top executive lineup. On April 2, John Krafcik, Waymo’s CEO since 2015, declared that he will be stepping down from his role. He will be replaced by Tekedra Mawakana and Dmitri Dolgov, the company’s former COO and CTO. Krafcik will remain as an advisor to the company.

“[With] the fully autonomous Waymo One ride-hailing service open to all in our launch area of Metro Phoenix, and with the fifth generation of the Waymo Driver being prepared for deployment in ride-hailing and goods delivery, it’s a wonderful opportunity for me to pass the baton to Tekedra and Dmitri as Waymo’s co-CEOs,” Krafcik wrote on LinkedIn as he declared his departure.

The change in leadership can have significant implications for Waymo, which has seen many ups and downs as it continues to develop its driverless car business. It can also hint at the broader state of the self-driving car industry, which has failed to live up to its hype in the past few years.

The deep learning hype

In 2015, Krafcik joined Google’s self-driving car effort, then called Project Chauffeur. At the time, there was a lot of excitement around deep learning, the branch of artificial intelligence that has made great inroads in computer vision, one of the key components of driverless cars. The belief was that, thanks to continued advances in deep learning, it was only a matter of time before self-driving cars became the norm on the streets.

Deep learning models rely on vast amounts of training data to develop stable behavior. And if the AI algorithms were ready, as it seemed at the time, reaching deployment-level self-driving car technology was only a question of having a scalable data-collection strategy to train deep learning models. While some of this data can be generated in simulated environments, the main training of deep learning models used in self-driving cars comes from driving in the real world.

Self-driving cars use deep learning to make sense of their surroundings.

Therefore, what Project Chauffeur needed was a leader who had longtime experience in the automotive industry and could bridge the gap between carmakers and the fast-developing AI sector, and deploy Google’s technology on roads.

And Krafcik was the perfect candidate. Before joining Google, he was the CEO of Hyundai Motor America, had held several positions at Ford, and had worked in the International Motor Vehicle Program at MIT as a lean production researcher and consultant.

Under Krafcik’s tenure, Project Chauffeur spun off as Waymo under Alphabet, Google’s parent company, and quickly transformed into a leader in testing self-driving cars on roads. During this time, Waymo struck partnerships with several automakers, integrated Waymo’s AI and lidar technology into Jaguar and Chrysler vehicles, and expanded its test-driving project to more than 25 states.

Today, Waymo’s cars have driven more than 20 million miles on roads and 20 billion miles in simulation, more than any other self-driving car company.

The limits of self-driving technology

Like the executives of other companies working on driverless car technology, Krafcik promised time and again that fully autonomous vehicles were on the horizon. At the 2020 Web Summit, Krafcik presented a video of a Waymo self-driving car driving on streets without a backup driver.

“We’ve been working on this technology a long time, for about eight years,” Krafcik said. “And every company, including Waymo, has always started with a test driver behind the wheel, ready to take over. We recently surveyed 3,000 adults across the U.S. and asked them when they expected to see self-driving vehicles, ones without a person in the driver’s seat, on their own roads. And the common answer we heard was around 2020… It’s not happening in 2020. It’s happening today.”

But despite Krafcik’s leverage in the automotive industry, Google’s crack AI research team, and Alphabet’s deep pockets, Waymo—like other self-driving car companies—has failed to produce a robust driverless technology. The cars still require backup drivers to monitor and take control as soon as the AI starts to act erratically.

The AI technology is not ready, and despite the lidar, radar, and other sensor technologies used to complement deep learning models, self-driving cars still can’t handle unknown conditions in the same way as humans do.

They can run thousands of miles without making errors, but they might suddenly make very dumb and dangerous mistakes when they face corner cases, such as an overturned truck on the highway or a fire truck parked at the wrong angle.

So far, Waymo has avoided major self-driving scandals such as Tesla and Uber’s fatal accidents. But it has yet to deliver a technology that can be deployed at scale. Waymo One, the company’s fully driverless robo-taxi service, is only available in limited parts of Phoenix, AZ. After two years, Waymo still hasn’t managed to expand the service to more crowded and volatile urban areas.

The company is still far from becoming profitable. Alphabet’s Other Bets segment, which includes Waymo, had an operating cost of $4.48 billion in 2020, against $657 million in revenue. And Waymo’s valuation has seen a huge drop amid cooling sentiments surrounding self-driving cars, going from nearly $200 billion in 2018 to $30 billion in 2020.

The AI and legal challenges of self-driving cars


While Krafcik didn’t explicitly state the reason for his departure, Waymo’s new leadership lineup suggests that the company has acknowledged that the “fully self-driving cars are here” narrative is a bit fallacious.

Driverless technology has come a long way, but a lot more needs to be done. It’s clear that simply putting more miles on deep learning algorithms will not make them more robust to unpredictable situations. We need to address some of the fundamental problems of deep learning, such as the lack of causal reasoning, poor transfer learning, and the absence of an intuitive understanding of physics. These are active areas of research, and no one has yet provided a definitive answer to them.

The self-driving car industry also faces several legal complications. For instance, if a driverless car becomes involved in an accident, how will culpability be defined? How will self-driving cars share roads with human-driven cars? How do you define whether a road or environment is stable enough for driverless technology? These are some of the questions that the self-driving car community will have to solve as the technology continues to develop and prepare for mass adoption.

In this regard, the new co-CEOs of Waymo are well positioned to face these challenges. Dolgov, who was Waymo’s CTO before his new role, has a PhD in computer science with a focus on artificial intelligence and a long history of working on self-driving car technology.

As a postdoc researcher, he was part of Stanford’s self-driving car team that won second place in DARPA’s 2007 Urban Challenge. He was also a researcher at Toyota’s Research Institute in Ann Arbor, MI. And since 2009, he has been among the senior engineers in Google’s self-driving car outfit that later became Waymo.

In a nutshell, he’s as good a leader as you can have to deal with the AI software, algorithm, and hardware challenges that a driverless car company will face in the coming years.

Mawakana, on the other hand, is a Doctor of Law. She led policy teams at Yahoo, eBay, and AOL before joining Waymo and becoming its COO. She is well positioned to tackle the legal and policy challenges Waymo will face as self-driving cars gradually try to find their way into more jurisdictions.

The dream of self-driving cars is far from dead. In fact, in his final year as CEO, Krafcik managed to secure more than $3 billion in funding for Waymo. There’s still a lot of interest in self-driving cars and their potential value. But Waymo’s new lineup suggests that self-driving cars still have a bumpy road ahead.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.



Repost: Original Source and Author Link

Categories
AI

Global chip shortage affects more than cars

Join Transform 2021 for the most important themes in enterprise AI & Data. Learn more.


(Reuters) — From delayed car deliveries to a supply shortfall in home appliances to costlier smartphones, businesses and consumers across the globe are bearing the brunt of an unprecedented shortage of semiconductor microchips.

The shortage stems from a confluence of factors as carmakers, which shut plants during the COVID-19 pandemic last year, compete against the sprawling consumer electronics industry for chip supplies.

Consumers have stocked up on laptops, gaming consoles and other electronic products during the pandemic, leading to tighter inventory. They also bought more cars than industry officials expected last spring, further straining supplies.

Sanctions against Chinese tech companies have further exacerbated the crisis. Originally concentrated in the auto industry, the shortage has now spread to a range of other consumer electronics, including smartphones, refrigerators and microwaves.

With every company that uses chips panic-buying to shore up stocks, the shortage has squeezed capacity and driven up the cost of even the cheapest components of nearly all microchips, raising the prices of final products.

Cars

Automobiles have become increasingly dependent on chips — for everything from computer management of engines for better fuel economy to driver-assistance features such as emergency braking.

The crisis has forced many automakers to curtail production of less profitable vehicles. General Motors Co and Ford Motor Co are among the big carmakers that said they would scale down production, joining other automakers including Volkswagen AG, Subaru Corp, Toyota Motor Corp and Nissan Motor Co.

A shortage of auto semiconductor chips could impact nearly 1.3 million units of global light vehicle production in the first quarter, according to data firm IHS Markit.

IHS said a fire at a Japanese chip-making factory owned by Renesas Electronics Corp, which accounts for 30% of the global market for microcontroller units used in cars, has worsened the situation.

Severe winter weather in Texas has also forced Samsung Electronics Co Ltd, NXP Semiconductors and Infineon to shut down factories temporarily. Infineon and NXP are major automotive chip suppliers, and analysts expect the disruptions to add to the shortfalls in the ailing sector.

Asian squeeze

At the root of the squeeze is under-investment in 8-inch chip manufacturing plants, mostly owned by Asian firms, which have struggled to ramp up production as demand for 5G phones and laptops picked up faster than expected.

Qualcomm Inc, whose chips feature in Samsung phones, is one major chipmaker struggling to keep up with demand. Foxconn, Apple Inc’s major supplier, has also warned that the chip shortage is affecting supply chains to clients.

The majority of chip production occurs in Asia currently, where major contract manufacturers such as Taiwan Semiconductor Manufacturing Co Ltd (TSMC) and Samsung handle production for hundreds of different chip companies. U.S. semiconductor companies account for 47% of global chip sales, but only 12% of global manufacturing is done in the United States.

What’s being done about it?

Factories that produce wafers cost tens of billions of dollars to build, and expanding their capacity can take up to a year because complex tools must be tested and qualified.

U.S. President Joe Biden has sought $37 billion in funding for legislation to supercharge chip manufacturing in the country.

Currently, four new factories are slated for the United States: two by Intel Corp and one by TSMC in Arizona, and another by Samsung in Texas.

China has also offered a myriad of subsidies to the chip industry as it tries to reduce its dependence on Western technology.


Repost: Original Source and Author Link

Categories
Tech News

The Best Cars for Driving in the Snow

If you regularly drive through blizzards, look no further than the Subaru Crosstrek the next time you’re shopping for a car. It’s rugged, it’s reliable, and above all, it’s extremely capable, thanks in part to Subaru’s time-tested all-wheel drive system and a generous amount of ground clearance. It remains user-friendly around town, too.


There are other good options if the Crosstrek is too small or underpowered for your needs. Digital Trends has traveled to the coldest, snowiest parts of the world to find out which cars keep old man winter at bay and which ones get stuck on ice. We’ve also selected the best electric snow car and the best luxury snow car, among other choices.

Product | Category | Rating
Subaru Crosstrek | Best snow car overall | Not yet rated
Volvo V90 Cross Country | Best luxury snow car | Not yet rated
Audi E-Tron | Best electric snow car | Not yet rated
Subaru WRX | Best performance snow car | Not yet rated
Jeep Grand Cherokee | Best SUV for the snow | 3.5 out of 5

The best: Subaru Crosstrek

Why you should buy this: It will get you where you need to go, regardless of the weather.

Who it’s for: The winter-weary.

How much it will cost: $22,245+

Why we picked the Subaru Crosstrek:

Almost every Subaru is a good winter car. With the notable exception of the rear-wheel-drive BRZ sports car, which just entered its second generation, every model in the Japanese automaker’s lineup comes standard with all-wheel drive. In particular, we think the Crosstrek hatchback is a good all-around package for winter driving.

The Crosstrek is basically an Impreza hatchback with extra ground clearance and plastic body cladding added to mimic the styling of SUVs. It isn’t an SUV though; it proves that you don’t need one.

All-wheel drive lets the Crosstrek handle all sorts of nasty weather, and the extra ground clearance is helpful on dirt roads. The rest of the time, the Crosstrek drives like a normal car. Its compact dimensions give it relatively responsive handling, and its acceleration is adequate, though we wouldn’t call it fast. Subaru did add a bigger engine for the 2021 model year that puts much-needed extra power under the driver’s right foot. All told, it’s a well-executed package with handsome styling, a spacious interior, and a modern infotainment system. What more do you need?

The best luxury car for the snow: Volvo V90 Cross Country


Why you should buy this: It’s a masterpiece of Swedish design.

Who it’s for: People who want a rugged wagon with more appeal than a Subaru Outback.

How much it will cost: $54,900+

Why we picked the Volvo V90 Cross Country:

Volvo has been building its Cross Country-badged models in one form or another since 1997. They’re station wagons (and, rarely, sedans) with SUV-like ground clearance and rugged-looking styling cues such as plastic body cladding.

All-wheel drive turns the V90 Cross Country into a true winter warrior. Digital Trends tested it in the middle of winter in northern Sweden, and it never got stuck. It offered excellent traction even on a frozen lake. In addition to an extra dose of ruggedness, the V90 Cross Country offers everything that’s great about recent additions to the Volvo family, like an ergonomic interior made with high-quality materials, and user-friendly tech features.

Volvo offers the V90 Cross Country with a supercharged and turbocharged 2.0-liter four-cylinder engine tuned to deliver 316 horsepower. That makes for brisk acceleration, but the Cross Country is happier when it’s cruising on the highway. It’s perfect for, well, crossing the country.

The best electric car for the snow: Audi E-Tron

Why you should buy this: It offers an electrified version of Audi’s Quattro system.

Who it’s for: Tech-savvy motorists.

How much it will cost: $65,900+

Why we picked the Audi E-Tron:

Quattro all-wheel drive is one of Audi’s claims to fame. It helped the company dominate the rallying scene during the 1980s, and it allows thousands of motorists to drive through awful weather each year. Going electric wasn’t an excuse for Audi to ditch Quattro; it perfected it. Two electric motors power the E-Tron — one is mounted over the front axle to spin the front wheels, and the other is positioned above the rear axle to zap the rear wheels into motion.

This is called a through-the-road setup because there’s no physical connection between the axles, yet all four wheels are driven. Speaking to Digital Trends, Audi engineer Tobias Greiner compared the powertrain his team developed to a network. The different components share information and work together to decide how much torque each axle needs in real time. For example, if the armada of sensors detects understeer during hard cornering, the system brakes the inside wheels to counter it. If the sensors detect that the rear axle is losing traction, the system sends more torque to the front wheels to keep the car moving. In other words, snow and sand won’t stop the E-Tron in its tracks.
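
To make that decision logic a little more concrete, here is a minimal, purely illustrative Python sketch of how a rule-based torque-split controller along these lines might look. The signal names, thresholds, and data structures are hypothetical, invented for illustration; they are not drawn from Audi’s actual software, which blends far more sensor inputs in real time.

```python
# Hypothetical, simplified torque-split logic for a dual-motor ("through-the-road")
# all-wheel-drive EV. Signal names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class SensorState:
    understeer: bool          # understeer detected during hard cornering
    rear_slip: float          # rear-axle slip ratio (0.0 = full grip)
    torque_request_nm: float  # total torque the driver is asking for, in Nm

@dataclass
class TorqueCommand:
    front_nm: float            # torque sent to the front motor
    rear_nm: float             # torque sent to the rear motor
    brake_inside_wheels: bool  # lightly brake the inside wheels to aid rotation

def split_torque(state: SensorState) -> TorqueCommand:
    # Start with a rear-biased split for normal driving.
    front_share = 0.4

    # If the rear axle is slipping, shift more torque to the front wheels,
    # as described above.
    if state.rear_slip > 0.15:
        front_share = 0.7

    return TorqueCommand(
        front_nm=state.torque_request_nm * front_share,
        rear_nm=state.torque_request_nm * (1.0 - front_share),
        # Counter understeer by braking the inside wheels during hard cornering.
        brake_inside_wheels=state.understeer,
    )

# Example: rear wheels spinning on snow while the driver asks for 300 Nm.
print(split_torque(SensorState(understeer=False, rear_slip=0.3, torque_request_nm=300.0)))
```

The real system is far more sophisticated, but the shape of the logic is the same idea the engineers describe: watch for slip and understeer, then redistribute torque or brake individual wheels.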

We also liked the E-Tron’s infotainment system, which is one of the most intuitive systems on the market, and we appreciated its smooth, silent ride on the highway. It’s a good daily driver — even in the winter — that just happens to be electric.

The best performance car for the snow: Subaru WRX


Why you should buy this: It’s a performance car that foul weather can’t stop.

Who it’s for: Snowbound speed freaks.

How much it will cost: $27,495+

Why we picked the Subaru WRX:

If the Crosstrek is a good all-rounder for winter driving, then the WRX is a performance-focused smile machine that plays well in slippery conditions. Like the Crosstrek, the WRX is a derivative of the Subaru Impreza compact, but it’s based on an older body style. That’s not the difference that really counts, though.

The WRX packs a turbocharged 2.0-liter boxer-four engine that produces 268 hp and 258 pound-feet of torque (Subaru also offers the WRX STI with a 2.5-liter, 305-hp engine). All-wheel drive allows the WRX to keep going when most other performance cars would be spinning off the road and into snowbanks. Torque vectoring channels power side to side, helping turn the car into corners. That’s something you’ll appreciate even on dry pavement.

All-wheel drive isn’t the only thing that makes the WRX a practical choice. Underneath the boy-racer hood scoop and quad exhaust tips, it’s still a practical four-door sedan. A reasonably sized interior and trunk, as well as good road manners, make the WRX a performance car you’ll actually want to use every day.

The best SUV for the snow: Jeep Grand Cherokee


Why you should buy this: It’s a family SUV for the Rubicon Trail.

Who it’s for: Outdoorsy types.

How much it will cost: $36,220+ (4×4)

Why we picked the Jeep Grand Cherokee:

When you think of a Jeep, you picture a vehicle with impressive off-road prowess for the serious adventurer. To the company, this is more than clever marketing; it puts an impressive amount of hardware into its vehicles to help them tackle the elements. Like the Wrangler, the Grand Cherokee benefits from Jeep’s decades of expertise in building serious go-anywhere off-roaders, but it also reflects the firm’s upmarket ambitions.

The Grand Cherokee has a long legacy of being a capable winter vehicle and can handle snowy roads with ease. Whether you’re heading to the store or the slopes, the Grand Cherokee has plenty of room for five adults and gear. 

The Grand Cherokee is available in 12 trim levels, all of which include a touchscreen-based Uconnect infotainment system to help you navigate your winter excursions. While the standard V6 is extremely capable, the hot-rodded Trackhawk delivers an amazing 707 horsepower for those who want extra performance.

How we test

Our team evaluates each vehicle we review by using extensive testing processes. We take advantage of the car experts on staff to ensure that the reviews posted on Digital Trends give buyers all of the information they need before making such a major decision.

Each vehicle is put through real-world testing on highways and back roads, as well as off-road and on race tracks when applicable. We use experienced test drivers to take the vehicles out in all sorts of weather so they can accurately report on the safety levels of each car.

We also evaluate all of a vehicle’s safety features, testing as many as possible in a controlled environment. The experts test and review each vehicle from the inside out to make sure you’re getting exactly what you expect according to the dealer’s listing. Vehicles are ranked based on others in their class to help you in your decision process.





Repost: Original Source and Author Link