Categories
AI

Why Airbus is betting on AI to fix pilot shortage, flight safety



As airline passengers slog through a summer of rampant flight delays and cancellations, airlines are grappling with a massive post-pandemic increase in air travel demand and a long-term pilot shortage – while also prioritizing safety. Aerospace leader Airbus is betting autonomous and AI-driven commercial flight functions can bridge that gap. 

Wayfinder, a research project within Acubed, the Silicon Valley innovation center of Airbus, is developing autonomous flight and machine learning solutions for the next generation of aircraft. Its core mission is to build a “scalable, certifiable autonomy system capable of powering a range of self-piloted aircraft applications in single pilot operations.”

“Industry estimates expect passenger volume to grow from pre-COVID levels of 4 billion passengers a year to 8 billion – a little more than the current world population –  in about 15-20 years,” Airbus’ Wayfinder project executive, Arne Stoschek, told VentureBeat. “It’s a massive, massive scaling topic.” 

This means there will be more aircraft flying and more flights to manage, he explained, while at the same time Airbus and the commercial aviation industry need to keep increasing safety, which they say is a top priority.

Autonomous functions to increase safety on Airbus aircraft

Wayfinder’s immediate goal, said Stoschek, is to develop autonomous functions that make aircraft operations safer – which can include helping the pilot better understand the environment and make the right decisions. There is also ongoing discussion around single-pilot operation, meaning reducing the crew from several pilots to one, which would also mean a shift in pilot responsibility.

“Autonomy is not a goal per se,” he said. “The main goal that we consider in the framework of technology is operational safety.”

The history of aviation, he added, is a continuum of automation – the current generation of aircraft is already highly automated and “each of those steps has substantially contributed to safety. This is the next big step.” 

Amid a pilot shortage that is forcing airlines to reduce the number of flights and raise the retirement age, Airbus’ autonomous flight efforts are already moving toward the goal of single-pilot operations. Last year, Acubed began flights in California to advance autonomous technology that will make the next redesigned narrow body aircraft capable of single-pilot operation.

“We certainly believe that the next generation of single-aisle aircraft will be single-pilot capable,” former Acubed executive, Mark Cousin, told FlightGlobal in 2020, noting that any single-pilot commercial aircraft will need advanced autonomous systems capable of taking over and landing should the pilot become incapacitated.

Gathering aircraft data is a challenge

One of the biggest challenges Wayfinder faces, said Stoschek, is dealing with the complexity of aircraft data. It requires scaling the data to reflect all types of conditions – including takeoff, landing, daytime, nighttime, snowstorms and several thousand different airports.

“AI and machine learning technologies have the promise to provide this type of robustness, so the key is to have training and testing data,” said Stoschek.

In an autonomous car, that is fairly straightforward with human drivers, or by purchasing high-quality annotated data. For large commercial aircraft, however, it’s a different story. 

“There are many steps you need to take in order to operate in a safe way,” he said. “So a lot of it is building the groundwork to get the ball rolling. We spend a lot of time collecting relevant data, determining what is relevant data and processing data in terms of safety certification.” 

In a blog post, Stoschek described Wayfinder’s goal this way: “We plan to observe the work that experienced pilots are doing every day, and then aggregate this data on a massive scale to train our machine learning models. As a rule, when a pilot retires, their experience is lost forever, but with our model, their contributions will endure. The historical data we gain from them will always be accessible to our AI systems, contributing to their continual learning and improvement. With this approach, we believe our AI system can eventually match the abilities of human pilots, bringing to life the business case of urban mobility and easing the strain put on pilots already in the field.”

Airbus seeks commercial viability of autonomous solutions

As Wayfinder continues to advance its ML and autonomous software solutions, Stoschek says its technologies will likely be commercially viable in the not-too-distant future.

In 2020, he explained, Wayfinder had the opportunity to apply its ML and autonomy technologies in an Airbus technology demonstrator for autonomous taxi, takeoff and landing.

“Since then, we have been focusing on the next steps beyond a technology demonstrator and towards a commercially viable solution, which includes aspects of scalability and the demonstration of safety,” he said. “This is a big undertaking that requires tens of petabytes of global data and encompasses all schemes and modes of operations that an aircraft is experiencing.” 

Wayfinder developed the processes and tools to create simulation runs on the order of millions, continuously improving and expanding its data, as well as testing its software against many scenarios so that it can extend the technology to any airport Airbus aircraft fly into, under all conditions the aircraft can experience.

Recent achievements, Stoschek added, include developing a new framework for assessing the safety and robustness of ML models that he expects to significantly improve the certification process. It has received support, he said, both within and outside Airbus.

“Our detection models also achieved very high levels of accuracy while being an order of magnitude smaller and faster than standard detection models in the industry,” he said. “Having compact and fast detection models is an important consideration for product realizations to run on on-aircraft computers.” 

Wayfinder is also setting up toolchains to move rapidly from algorithm development to hardware-software implementation in a physical lab, and then to hardware-software implementation on an actual aircraft, he explained.

“The focus is to create processes that utilize the advantages of agile software development while ensuring stringent development and testing procedures mandated in our industry are followed,” he said.

From proof-of-concept to product-ready

The bottom line, says Stoschek, is that the goal is to move from a proof of concept, where Wayfinder shows AI working for a couple of test flights, to a system that is truly product-ready.

“That means it works for a global rollout. It works with the aircraft systems. It’s safe. It gets signed off by the regulatory bodies and is integrated into a product,” he said. “Once those types of core technologies are resolved, there’s a whole new world of applications and opportunities you can put on top of that. The availability of technology is just a first step in transforming the industry. That is the holy grail.”



Categories
AI

AI safety tools can help mitigate bias in algorithms



As AI proliferates, researchers are beginning to call for technologies that might foster trust in AI-powered systems. According to a survey conducted by KPMG, across five countries — the U.S., the U.K., Germany, Canada, and Australia — over a third of the general public says that they’re unwilling to place trust in AI systems in general. And in a report published by Pega, only 25% of consumers said they’d trust a decision made by an AI system regarding a qualification for a bank loan, for example.

The concern has yielded a breed of software that attempts to impose constraints on AI systems charged with risky decision-making. Some focus on reinforcement learning, or AI that’s progressively spurred toward goals via rewards, which forms the foundation of self-driving cars and drug discovery systems. Others focus more broadly on fairness, which can be an elusive quality in AI systems — mostly owing to biases in algorithms and datasets.

Among others, OpenAI and Alphabet-owned DeepMind have released environments to train “safe” AI systems for different types of applications. More make their way into open source on a regular cadence, ensuring that the study of constrained or safe AI has legs — and a lasting impact.

Safety tools

Safety tools for AI training are designed to prevent systems from engaging in dangerous behaviors that might lead to errors. They typically make use of techniques like constrained reinforcement learning, which implements “cost functions” that the AI must learn to constrain over time. Constrained systems figure out tradeoffs that achieve certain defined outcomes. For example, a “constrained” driverless car might learn to avoid collisions entirely rather than tolerate occasional collisions as long as it completes its trips.
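To make the cost-function idea concrete, here is a minimal sketch of the Lagrangian-style penalty often used in constrained reinforcement learning; the reward values, cost signal, budget, and update rule are all hypothetical placeholders rather than any specific toolkit's implementation.

```python
# Minimal, illustrative sketch of constrained RL reward shaping.
# All numbers and names here are hypothetical, for illustration only.

def constrained_reward(reward, cost, lam):
    """Penalize the task reward by a weighted safety cost (e.g., near-collisions)."""
    return reward - lam * cost

def update_multiplier(lam, episode_cost, cost_limit, lr=0.01):
    """Raise the penalty weight when an episode's safety cost exceeds its budget;
    relax it (never below zero) when the agent stays within the budget."""
    return max(0.0, lam + lr * (episode_cost - cost_limit))

# Toy usage: an episode incurs 3 "collision" cost units against a budget of 1.
lam = 0.5
shaped = constrained_reward(reward=10.0, cost=3.0, lam=lam)      # 10 - 0.5 * 3 = 8.5
lam = update_multiplier(lam, episode_cost=3.0, cost_limit=1.0)   # penalty weight grows
print(shaped, lam)
```

Over many episodes, the growing penalty pushes the learned policy toward behavior that stays within the safety budget rather than behavior that merely maximizes task reward.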

Safety tools also encourage AI to explore a range of states through different hypothetical behaviors. For example, they might use a generative system to predict behaviors informed by data like random trajectories or safe expert demonstrations. A human supervisor can label the behaviors with rewards, so that the AI interactively learns the safest behaviors to maximize its total reward.

Beyond reinforcement learning, safety tools encompass frameworks for mitigating biases while training AI models. For example, Google offers MinDiff, which aims to inject fairness into classification, or the process of sorting data into categories. Classification is prone to biases against groups underrepresented in model training datasets, and it can be difficult to achieve balance because of sparse demographics data and potential accuracy tradeoffs.

Google has also open-sourced ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments. Other model debiasing and fairness tools in the company’s suite include the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework; and SMACTR (Scoping, Mapping, Artifact Collection, Testing, and Reflection), an accountability framework intended to add a layer of quality assurance for businesses deploying AI models.

Not to be outdone, Microsoft provides Fairlearn, which addresses two kinds of harms: allocation harms and quality-of-service harms. Allocation harms occur when AI systems extend or withhold opportunities, resources, or information — for example, in hiring, school admissions, and lending. Quality-of-service harms refer to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.

According to Microsoft, professional services firm Ernst & Young used Fairlearn to evaluate the fairness of model outputs with respect to sex. The toolkit revealed a 15.3% difference between positive loan decisions for males versus females, and Ernst & Young’s modeling team then developed and trained multiple remediated models and visualized the common trade-off between fairness and model accuracy.
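For readers unfamiliar with the metric behind a figure like that 15.3% gap, the sketch below computes a selection-rate (demographic parity) difference in plain Python; the loan decisions and group labels are invented placeholders, not Ernst & Young's data or Fairlearn's internal implementation.

```python
import numpy as np

# Illustrative only: the "selection rate difference" style of metric that
# fairness toolkits report. All values below are made up.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = loan approved
group  = np.array(["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"])

def selection_rate(pred, mask):
    """Share of positive decisions within one group."""
    return pred[mask].mean()

rate_m = selection_rate(y_pred, group == "M")
rate_f = selection_rate(y_pred, group == "F")
print(f"male approval rate:   {rate_m:.1%}")
print(f"female approval rate: {rate_f:.1%}")
print(f"selection rate gap:   {abs(rate_m - rate_f):.1%}")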

LinkedIn not long ago released the LinkedIn Fairness Toolkit (LiFT), a software library aimed at enabling the measurement of fairness in AI and machine learning workflows. The company says LiFT can be deployed during training and scoring to measure biases in training data sets, and to evaluate notions of fairness for models while detecting differences in their performance across subgroups.

To date, LinkedIn says it has applied LiFT internally to measure the fairness metrics of training data sets for models prior to their training. In the future, the company plans to increase the number of pipelines where it’s measuring and mitigating bias on an ongoing basis through deeper integration of LiFT.

Rounding out the list of high-profile safety tools is IBM’s AI Fairness 360 toolkit, which contains a library of algorithms, code, and tutorials that demonstrate ways to implement bias detection in models. The toolkit recommends adjustments — such as algorithmic tweaks or counterbalancing data — that might lessen the impact of bias, explaining which factors influenced a given machine learning model’s decision as well as its overall accuracy, performance, fairness, and lineage.

A more recent addition to the scene is a dataset and tool for detecting demographic bias in voice and speech recognition apps. The Artie Bias Corpus (ABC), which consists of audio files along with their transcriptions, aims to diagnose and mitigate the impact of factors like age, gender, and accent in voice recognition systems. AI startup Pymetrics’ Audit AI, which was also recently launched, is designed to determine whether a specific statistic or trait fed into an algorithm is being favored or disadvantaged at a statistically significant, systematic rate that leads to adverse impact on people underrepresented in a dataset.

Steps in the right direction

Not all safety tools are created equal. Some aren’t being maintained or lack documentation, and there’s a limit to the degree that they can remediate potential harm. Still, adopting these tools in the enterprise can instill a sense of trust among both external and internal stakeholders.

A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.

Moreover, an overwhelming majority of Americans — 82% — believe that AI should be carefully managed, comparable to survey results from European Union respondents, according to a 2019 report from the Center for the Governance of AI. This suggests a clear mandate for businesses to exercise the responsible and fair deployment of AI, using whatever tools are necessary to achieve this objective.



Categories
Tech News

Peloton made a free feature subscription-only to address safety concerns

The sudden obsession with indoor fitness, especially during the pandemic, has catapulted Peloton’s name into mainstream media. Unfortunately, it stayed in the spotlight not because of its success but because of the accidents and even a death related to the super-expensive exercise equipment. Initially refusing to recall its products, Peloton eventually announced a voluntary recall of its Tread+ and Tread treadmills. Now it is going one step further by locking what used to be a free feature of the treadmills behind a monthly subscription, in a move that some have characterized as effectively bricking the equipment.

Peloton’s popularity isn’t actually based on mass appeal and ubiquity. On the contrary, Peloton is notorious for selling exercise treadmills and bikes that cost around $4,000. But that’s just the upfront price of the equipment, as Peloton also sells exercise programs, services, and add-ons through a subscription that costs at least $39 a month.

Not everyone wants to pay that monthly fee, and some owners opted to use the Tread+ and Tread as plain treadmills by taking advantage of Peloton’s “Just Run” feature. It was a simple virtual button on the treadmill’s screen that allowed the equipment to be used as a regular treadmill. Now Peloton requires a four-digit passcode to unlock that functionality. The problem is that this Tread Lock feature is available only as part of the Peloton Membership subscription.

What this effectively means is that owners will have to pay that $39-a-month fee to be able to use the Tread+ at all, whether or not they buy into Peloton’s exercise regimen. Peloton says this is to prevent the unauthorized access and activation of the treadmills that resulted in those accidents. In other words, it is locking users out of their $4,000 purchase in order to address what may have been the products’ faulty design.

Reactions to this news are surprisingly split, with some chiding Peloton owners as cheap for not being willing to pay $39 a month after buying a $3,000 piece of equipment. There are, of course, also legal ramifications to Peloton’s decision, and the company claims it is working to restore free access to Just Run so that owners can once again use their treadmills to simply run.




Categories
Tech News

These bone conduction headphones balance top quality audio and safety brilliantly

TLDR: The Zulu Exero Bone Conduction Headphones deliver excellent sound quality via bone conduction technology, so you never stop hearing the world around you.

Ever since their explosion into the mainstream in the 1980s, headphones have become an increasingly permanent part of mobile communication, not to mention of how we consume our favorite music, podcasts, live streams, and more.

But while their size has continued to shrink over the years, one constant headphone issue has remained that few have been able to address: How do users stay connected to the world around them with a bud wedged into their ear, blocking out all sound?

Noise cancellation has been one approach, but that still involves covered ear canals. That’s where bone conduction technology comes in: an almost sci-fi yet strikingly simple answer to headphone safety for items like these Zulu Exero Bone Conducting Headphones ($34.99, 30 percent off, from NTW Deals).

Rather than blasting sound waves at your eardrum like normal headphones, bone conduction works by using little pads pressed just in front of each ear, sending vibrations directly through the skull and bypassing the eardrum on their way to the inner ear.

That produces the sensation of hearing music or other audio just as you would with normal headphones, but without the blockage of earbuds, allowing users to stay aware of the world around them. And at only about an ounce, they’re easy to forget you’re wearing, even if you aren’t listening to anything. They’re also a lot friendlier than traditional headphones for those who wear glasses.

The Exero pairs easily with Bluetooth-enabled devices for clear wireless audio streaming at a range of over 30 feet. IPX5-rated for water resistance, these headphones are also great for listening on the go, whether that’s commuting, running errands, or working out.

The Exero comes with a built-in USB rechargeable lithium ion battery to supply up to six hours of playback time on a single charge. It’s also equipped with an on-board mic for making or receiving phone calls or engaging with a voice assistant.

Right now, you can check out the power of bone conduction for yourself with these Zulu Exero Bone Conduction Headphones, a $49 value now on sale at 30 percent off, dropping the final price down to only $34.99.

Prices subject to change.


Categories
AI

SafetyCulture boosts its workplace safety tools, hits $1.6B valuation



SafetyCulture, a startup developing software for workplace safety and quality management, today announced that it closed a new $73 million round of funding, valuing the company at $1.6 billion. Led by Insight Partners and Tiger Global, SafetyCulture CEO Luke Anear says the funds will support growth as the company evolves from a checklist app into an operations platform for working teams.

Failure to provide a safe atmosphere at work is a costly mistake for both businesses and their employees. The National Safety Council estimates a worker is injured on the job every seven seconds, which equates to 4.6 million injuries a year. And the Centers for Disease Control and Prevention’s National Institute for Occupational Safety and Health pegs the costs of work-related injuries and illnesses at $170 billion a year.

Queensland, Australia-based SafetyCulture, which was founded in 2004, offers a mobile and web app called iAuditor for workplace safety and quality standards; it’s designed to collect data, standardize operations, and identify problems. The platform enables users to create customized forms from paper checklists, Word documents, and Excel spreadsheets, as well as take and annotate photos, add notes, and assign follow-up remedial actions. With SafetyCulture, managers can have teams share observations outside of regular inspections and generate reports for contracts, clients, and more, which can be exported to third-party platforms and tools. They can also connect to sensors — either SafetyCulture’s own or third-party sensors — to monitor work conditions in real time.

“The SafetyCulture journey started with a simple question: How do we keep people in the workplace safe? After witnessing the tragedy of workplace incidents in my time as a private investigator, I realized something needed to be done when it came to safety in the workplace,” Anear told VentureBeat via email. “I went on to recruit a team to help develop a mobile solution and so, iAuditor was born. When we started out this journey in my garage in regional Australia, we had no idea where it would lead. We’d created the first iteration of a digital platform that went on to revolutionize safety inspections globally. It’s now the world’s largest checklist app — and that’s just the beginning.”

SafetyCulture supports text and email notifications that trigger as soon as sensors detect changes that are out of the normal range. Customers can set alerts for things like local weather and have employees capture test results with barcode scanners, drawing tools, Bluetooth thermometers, and more.

Above: SafetyCulture’s mobile app, iAuditor. (Image Credit: SafetyCulture)

Last September, SafetyCulture acquired EdApp, a learning management system akin to LinkedIn Learning, for a reported $29 million. At the time, SafetyCulture said that EdApp, which was delivering around 50,000 lessons per day, would enable it to offer “micro-learning” resources to workers in a range of industries.

Market momentum

SafetyCulture claims to host over 1.5 million users across 28,000 companies on its platform in more than 85 countries. It powers 600 million checks per year and millions of corrective actions per day, according to Anear.

“Statistics-wise, we’ve seen inspections within iAuditor grow 108% year-over-year and actions grow 161%. This indicates our customers are using our software across a far broader range of use cases, from safety to quality and customer experience. It also suggests our customers are benefiting strongly from the workflows we have built into iAuditor over the last couple of years,” Anear said. “This additional data makes our analytics functionality all the more powerful. As working teams input more and more data, organizations benefit from increased visibility and a broader base of staff contributing to driving safety, quality, and efficiency improvements.”

With 54% of U.S. employees worried about exposure to COVID-19 at their job, the pandemic has helped to drive SafetyCulture’s growth — even in the face of rivals like Aclaimant. According to a recent Pew Center survey, about 25% of employees say that they’re “not too” or “not at all” satisfied with the steps that have been taken in their workplace to keep them safe from the coronavirus. To address this, SafetyCulture created a number of checklists tailored for companies affected by the crisis, including cleaning and disinfection logs as well as employee temperature log sheets.

“We’ve built a reputation globally for helping some of the most dangerous industries digitize and adhere to safety and quality protocols. Unfortunately, everywhere is a high risk in a pandemic, so 2020 saw a major push to ensure retailers, schools, hotels, and many other industries had the right support and tech to manage COVID-19,” Anear said.

SafetyCulture’s most recent funding round — which brings the company’s total raised to roughly $241 million — follows a $36 million tranche contributed by TDM Growth Partners, Blackbird Ventures, Index Ventures, former Australian prime minister Malcolm Turnbull and his wife Lucy Turnbull, and Atlassian cofounder Scott Farquhar. The company’s employee base grew 2.5 times in the past three years to over 500 people, and SafetyCulture says it plans to continue the same hiring trajectory “for the foreseeable future.”



Categories
AI

Adversarial training reduces safety of neural networks in robots: Research



This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

There’s a growing interest in employing autonomous mobile robots in open work environments such as warehouses, especially with the constraints posed by the global pandemic. And thanks to advances in deep learning algorithms and sensor technology, industrial robots are becoming more versatile and less costly.

But safety and security remain two major concerns in robotics. And the current methods used to address these two issues can produce conflicting results, researchers at the Institute of Science and Technology Austria (IST Austria), the Massachusetts Institute of Technology, and Technische Universität Wien have found.

On the one hand, machine learning engineers must train their deep learning models on many natural examples to make sure they operate safely under different environmental conditions. On the other, they must train those same models on adversarial examples to make sure malicious actors can’t compromise their behavior with manipulated images.

But adversarial training can have a significantly negative impact on the safety of robots, the researchers at IST Austria, MIT, and TU Wien discuss in a paper titled “Adversarial Training is Not Ready for Robot Learning.” Their paper, which has been accepted at the International Conference on Robotics and Automation (ICRA 2021), shows that the field needs new ways to improve adversarial robustness in deep neural networks used in robotics without reducing their accuracy and safety.

Adversarial training

Deep neural networks exploit statistical regularities in data to carry out prediction or classification tasks. This makes them very good at handling computer vision tasks such as detecting objects. But reliance on statistical patterns also makes neural networks sensitive to adversarial examples.

An adversarial example is an image that has been subtly modified to cause a deep learning model to misclassify it. This usually happens by adding a layer of noise to a normal image. Each noise pixel changes the numerical values of the image very slightly, enough to be imperceptible to the human eye. But when added together, the noise values disrupt the statistical patterns of the image, which then causes a neural network to mistake it for something else.

Above: Adding a layer of noise to the panda image on the left turns it into an adversarial example.
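As a concrete illustration of the perturbation described above, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft such an example; the model, epsilon value, and pixel range are assumptions for illustration, not the specific attack studied in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft one adversarial example from a correctly labeled image (illustrative only)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift every pixel by at most epsilon in the direction that increases the loss:
    # imperceptible to a human, but often enough to flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```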

Adversarial examples and attacks have become a hot topic of discussion at artificial intelligence and security conferences. And there’s concern that adversarial attacks can become a serious security concern as deep learning becomes more prominent in physical tasks such as robotics and self-driving cars. However, dealing with adversarial vulnerabilities remains a challenge.

One of the best-known methods of defense is “adversarial training,” a process that fine-tunes a previously trained deep learning model on adversarial examples. In adversarial training, a program generates a set of adversarial examples that are misclassified by a target neural network. The neural network is then retrained on those examples and their correct labels. Fine-tuning the neural network on many adversarial examples will make it more robust against adversarial attacks.
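Continuing the FGSM sketch above, a bare-bones version of that retraining loop might look like the following; the data loader, optimizer, and equal clean/adversarial weighting are placeholder choices, not the exact recipe evaluated in the paper.

```python
def adversarial_finetune(model, loader, optimizer, epsilon=0.01, epochs=1):
    """Fine-tune a trained classifier on adversarial examples plus clean data (sketch)."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            # Generate adversarial versions of the batch, keeping the correct labels.
            adv_images = fgsm_example(model, images, labels, epsilon)
            optimizer.zero_grad()
            # Average the clean and adversarial losses so the model learns to
            # resist the perturbation without forgetting the clean data.
            loss = (F.cross_entropy(model(images), labels)
                    + F.cross_entropy(model(adv_images), labels)) / 2
            loss.backward()
            optimizer.step()
```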

Adversarial training results in a slight drop in the accuracy of a deep learning model’s predictions. But the degradation is considered an acceptable tradeoff for the robustness it offers against adversarial attacks.

In robotics applications, however, adversarial training can cause unwanted side effects.

“In a lot of deep learning, machine learning, and artificial intelligence literature, we often see claims that ‘neural networks are not safe for robotics because they are vulnerable to adversarial attacks’ for justifying some new verification or adversarial training method,” Mathias Lechner, Ph.D. student at IST Austria and lead author of the paper, told TechTalks in written comments. “While intuitively, such claims sound about right, these ‘robustification methods’ do not come for free, but with a loss in model capacity or clean (standard) accuracy.”

Lechner and the other coauthors of the paper wanted to verify whether the clean-vs-robust accuracy tradeoff in adversarial training is always justified in robotics. They found that while the practice improves the adversarial robustness of deep learning models in vision-based classification tasks, it can introduce novel error profiles in robot learning.

Adversarial training in robotic applications


Say you have a trained convolutional neural network and want to use it to classify a bunch of images stored in a folder. If the neural network is well trained, it will classify most of them correctly and might get a few of them wrong.

Now imagine that someone inserts two dozen adversarial examples in the images folder. A malicious actor has intentionally manipulated these images to cause the neural network to misclassify them. A normal neural network would fall into the trap and give the wrong output. But a neural network that has undergone adversarial training will classify most of them correctly. It might, however, see a slight performance drop and misclassify some of the other images.

In static classification tasks, where each input image is independent of others, this performance drop is not much of a problem as long as errors don’t occur too frequently. But in robotic applications, the deep learning model is interacting with a dynamic environment. Images fed into the neural network come in continuous sequences that are dependent on each other. In turn, the robot is physically manipulating its environment.


“In robotics, it matters ‘where’ errors occur, compared to computer vision which primarily concerns the amount of errors,” Lechner says.

For instance, consider two neural networks, A and B, each with a 5% error rate. From a pure learning perspective, both networks are equally good. But in a robotic task, where the network runs in a loop and makes several predictions per second, one network could outperform the other. For example, network A’s errors might happen sporadically, which will not be very problematic. In contrast, network B might make several errors consecutively and cause the robot to crash. While both neural networks have equal error rates, one is safe and the other isn’t.
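A toy simulation makes the distinction tangible. The burst pattern, error rates, and the three-errors-in-a-row crash criterion below are invented purely to illustrate why where errors fall matters more than how many there are.

```python
import random

random.seed(0)

def independent_errors(n, p=0.05):
    """Network A: each prediction fails independently with probability p."""
    return [random.random() < p for _ in range(n)]

def bursty_errors(n, p=0.05, burst=5):
    """Network B: roughly the same overall error rate, but errors arrive in runs."""
    out = []
    while len(out) < n:
        if random.random() < p / burst:        # rarer trigger...
            out.extend([True] * burst)         # ...but each one yields a run of errors
        else:
            out.append(False)
    return out[:n]

def crashes(errors, run=3):
    """Count a 'crash' once the robot sees `run` consecutive bad predictions."""
    streak = 0
    for e in errors:
        streak = streak + 1 if e else 0
        if streak >= run:
            return True
    return False

n = 1_000  # predictions per simulated mission
print("sporadic errors:", sum(crashes(independent_errors(n)) for _ in range(100)), "crashes per 100 missions")
print("bursty errors:  ", sum(crashes(bursty_errors(n)) for _ in range(100)), "crashes per 100 missions")
```

With these made-up settings, the network whose mistakes cluster together crashes in nearly every simulated mission, while the one with sporadic mistakes rarely does, despite identical error rates.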

Another problem with classic evaluation metrics is that they only measure the number of misclassifications introduced by adversarial training and don’t account for error margins.

“In robotics, it matters how much errors deviate from their correct prediction,” Lechner says. “For instance, let’s say our network misclassifies a truck as a car or as a pedestrian. From a pure learning perspective, both scenarios are counted as misclassifications, but from a robotics perspective the misclassification as a pedestrian could have much worse consequences than the misclassification as a car.”
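One simple way to capture that difference is a severity-weighted confusion cost, sketched below with invented classes and weights; this is an illustration of the general idea, not a metric proposed in the paper.

```python
# severity[(true, predicted)]: how costly it is to mistake `true` for `predicted`.
# Both mistakes below count identically in a plain error rate, but not here.
severity = {
    ("truck", "car"): 1.0,          # confused with another large vehicle
    ("truck", "pedestrian"): 10.0,  # treated as a person in the roadway
}

mistakes = [("truck", "car"), ("truck", "pedestrian")]

for mistake in mistakes:
    true_label, pred_label = mistake
    print(f"{true_label} -> {pred_label}: severity {severity[mistake]}")

print("plain error count:", len(mistakes))
print("severity-weighted cost:", sum(severity[m] for m in mistakes))
```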

Errors caused by adversarial training

The researchers found that “domain safety training,” a more general form of adversarial training, introduces three types of errors in neural networks used in robotics: systemic, transient, and conditional.

Transient errors cause sudden shifts in the accuracy of the neural network. Conditional errors will cause the deep learning model to deviate from the ground truth in specific areas. And systemic errors create domain-wide shifts in the accuracy of the model. All three types of errors can cause safety risks.

Above: Adversarial training causes three types of errors in neural networks employed in robotics.

To test the effect of their findings, the researchers created an experimental robot that is supposed to monitor its environment, read gesture commands, and move around without running into obstacles. The robot uses two neural networks. A convolutional neural network detects gesture commands through video input coming from a camera attached to the front side of the robot. A second neural network processes data coming from a lidar sensor installed on the robot and sends commands to the motor and steering system.

The researchers tested the video-processing neural network with three different levels of adversarial training. Their findings show that the clean accuracy of the neural network decreases considerably as the level of adversarial training increases. “Our results indicate that current training methods are unable to enforce non-trivial adversarial robustness on an image classifier in a robotic learning context,” the researchers write.

Above: The robot’s visual neural network was trained on adversarial examples to increase its robustness against adversarial attacks.

“We observed that our adversarially trained vision network behaves really opposite of what we typically understand as ‘robust,’” Lechner says. “For instance, it sporadically turned the robot on and off without any clear command from the human operator to do so. In the best case, this behavior is annoying, in the worst case it makes the robot crash.”

The lidar-based neural network did not undergo adversarial training, but it was trained to be extra safe and prevent the robot from moving forward if there was an object in its path. This resulted in the neural network being too defensive and avoiding benign scenarios such as narrow hallways.

“For the standard trained network, the same narrow hallway was no problem,” Lechner said. “Also, we never observed the standard trained network to crash the robot, which again questions the whole point of why we are doing the adversarial training in the first place.”

Above: Adversarial training causes a significant drop in the accuracy of neural networks used in robotics.

Future work on adversarial robustness

“Our theoretical contributions, although limited, suggest that adversarial training is essentially re-weighting the importance of different parts of the data domain,” Lechner says, adding that to overcome the negative side-effects of adversarial training methods, researchers must first acknowledge that adversarial robustness is a secondary objective, and a high standard accuracy should be the primary goal in most applications.

Adversarial machine learning remains an active area of research. AI scientists have developed various methods to protect machine learning models against adversarial attacks, including neuroscience-inspired architectures, modal generalization methods, and random switching between different neural networks. Time will tell whether any of these or future methods will become the gold standard of adversarial robustness.

A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.

“Lack of causality is how the adversarial vulnerabilities end up in the network in the first place,” Lechner says. “So, learning better causal structures will definitely help with adversarial robustness.”

“However,” he adds, “we might run into a situation where we have to decide between a causal model with less accuracy and a big standard network. So, the dilemma our paper describes also needs to be addressed when looking at methods from the causal learning domain.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021



Categories
AI

Robotics safety system developer Fort Robotics raises $13M to tackle workplace automation



Fort Robotics, a startup developing a safety and security platform for robots, today announced it has raised $13 million in a funding round led by Prime Movers Lab. With the new investment, Fort says it will more than double the size of its workforce and expand its product offerings in “key markets.”

Worker shortages attributable to the pandemic have accelerated the adoption of automation. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. In China, Oxford Economics anticipates 12.5 million manufacturing jobs will become automated, while in the U.S., McKinsey projects machines will take upwards of 30% of such jobs.

Founded in 2018, Philadelphia, Pennsylvania-based Fort is CEO Samuel Reeves’ second startup. In 2006, he launched Humanistic Robotics, which developed autonomous landmine-clearing systems. Through the refinement of that technology, Reeves saw an opportunity to create a safety-and-security overlay that could be integrated with industrial machines.

Fort’s platform allows managers to configure, monitor, and control robots from web and mobile apps. An endpoint controller wirelessly connects machines to devices, while an embedded hardware security module ensures communications are secure, reducing breach risk. Fort also provides a remote control with a built-in emergency stop that lets workers use one or more machines — ostensibly without compromising safety or command security.

As industrial manufacturers increasingly embrace automation, workplace injuries involving robots are becoming more common. For example, a recent report revealed that Amazon fulfillment warehouses with robots had higher rates of serious worker injuries than less automated facilities. A warehouse in Kent, Washington that employs robots had 292 serious injuries in one year, or 13 serious injuries per 100 workers. Meanwhile, the serious injury rate of a Tracy, California center nearly quadrupled after it introduced robots five years ago, going from 2.9 per 100 workers in 2015 to 11.3 in 2018.

“Fort is creating systems that will accelerate the rise of autonomous vehicles and robotics, making work safer and more productive,” Prime Movers Lab’s Suzanne Fletcher said in a press release. “We’re proud to partner with this forward-looking company that recognizes that safe collaboration between humans and robots is synonymous with the workforce of tomorrow.”

For one customer that has multiple robots operating within a gated area, Fort says it built a solution that enables workers to stop all the robots when a person enters the zone. While each machine has its own e-stop button, manual stops affect productivity, and getting close to the moving robots can be hazardous. Fort installed its safety controller on each robot and a master controller on the door of the caged area so that the system automatically sends a wireless e-stop signal that stops every robot whenever the door opens.

“We’re excited to partner with our new investors to help position Fort at the forefront of this rapidly evolving space,” Reeves said. “The world is on the cusp of a new industrial revolution in mobile automation. With added investment and support, we’ll be able to rapidly scale the company to capitalize on the convergence of trend and opportunity to ensure that robotic systems are safely deployed across all industries.”

Fort claims to have served more than 100 clients last year, including “dozens” of Fortune 1000 companies in markets such as warehousing, construction, agriculture, manufacturing, mining, turf care, last-mile delivery, and transportation.



Categories
Tech News

Safelite’s Advanced Safety Systems Recalibration: Keep Your Vehicle Safe

It’s important to stay safe on the roads, and that’s precisely why advanced safety systems are increasingly common in newer vehicles. These vehicles offer features that keep you safe on the road with what feels like an extra pair of eyes looking out for any dangers. These safety features can include automatic emergency braking, forward collision warning, lane keep assist, and even the ability to detect when a pedestrian has stepped out in front of your vehicle. They’re an effective way of reducing the risk of collisions and injuries, thereby keeping your family and everyone around you safer.

Such advanced safety systems typically work via cameras mounted on or just behind the windshield of your vehicle or integrated into vehicle components such as your car’s bumpers. However, for these systems to work accurately, vehicle manufacturers require these cameras to be recalibrated any time your windshield is replaced. At Safelite, windshield camera recalibration can often be done during the same appointment as the windshield replacement, meaning less hassle and less time spent getting your vehicle repaired.

Why do I need recalibration?

If your forward-facing camera is not recalibrated after a windshield replacement, it could interfere with the ability of your vehicle’s advanced safety systems to accurately spot potential problems or even avoid collisions. That defeats the purpose of advanced safety systems. According to the 2019 CCC Crash Report, advanced safety systems are expected to achieve a 20% reduction in vehicle crashes.

Safelite technicians are highly experienced when it comes to the recalibration process and can complete the specific recalibration required by your vehicle manufacturer. When you book a replacement appointment online, you’ll be alerted if your vehicle needs recalibration.

Expertise is essential with different types of recalibration

There are different types of recalibration, which depend on the vehicle make and model, and are determined by the vehicle manufacturer. Recalibration service requires specialized tools, equipment, and skilled precision. Sometimes recalibration can be performed at the customer’s location, but other vehicles may need to be brought to a Safelite shop.

Safelite will know exactly what your vehicle requires when you make an appointment.

What is Safelite?

Safelite is America’s largest vehicle glass specialist. Starting out as a single store based in Wichita, Kansas in 1947, the company now services millions of customers every year across all 50 states. More than 97% of U.S. drivers can access Safelite’s services, with the company offering the best and simplest way to replace your broken windshield or other vehicle glass.

All you need to do is head over to safelite.com. Safelite has thousands of its MobileGlassShops™ and repair facilities across the country, so you can choose the location that’s most convenient for you.

Why Safelite is a great choice for recalibrating your advanced safety systems

Safelite offers windshield replacement and recalibration in a single appointment, saving you time and ensuring vehicle safety. With a nationwide lifetime warranty on auto glass replacements and a guarantee on recalibrations, you can trust Safelite. If you’re looking for a repair or replacement of your vehicle’s windshield, simply head over to the Safelite website and schedule an appointment online today.

What other services does Safelite provide?

Besides being able to replace or repair your windshield and perform advanced safety systems recalibration, Safelite also offers side window and rear glass replacement services, top-quality windshield wipers and glass cleaners, and Safelite AutoGlass Rain Defense, a technician-applied water-repellent treatment.
