How one company is optimizing building data for smarter monitoring

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.

Mapped, which normalizes access to smart building data, has launched a free tier for its service. The platform helps companies discover and use data from building systems, sensors and equipment from different vendors. This makes it easier to develop building management apps and digital twins using a single API for all equipment; it also helps normalize data into a consistent format.

The new launch promises to connect building monitoring and management capabilities to cloud apps for scheduling, analytics and business workflows via services such as Google Calendar, VergeSense, Microsoft 365 and OpenPath, making it easier to connect with third-party services. It can also help operations teams fully map and integrate data from building sensors, controls, equipment and infrastructure in as little as four days, allowing developers to focus on innovation rather than integration.

Mapped’s founder and CEO, Shaun Cooley, launched the company after struggling with data integration challenges while previously leading IoT efforts at Cisco. Since then, the company has quickly grown — emerging out of stealth last year. It has mapped more than 30 million square feet across 100 buildings with upwards of 30,000 device types. The company has also been a driver behind the Brick Schema, an open-source graph for building data. 

Brick Schema was designed to improve access to and control of building data. It helps organize access to sensor, HVAC, lighting and electrical systems — and defines spatial, control and operational relationships. The tool is a promising alternative to other specifications and standards, such as industry foundation classes (IFC), smart appliances reference ontology (SAREF), building topology ontology (BOT) and Project Haystack. 
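Conceptually, a Brick-style model treats a building as a graph of entities and relationships that simple queries can traverse. The sketch below shows the idea in plain Python; the relationship names loosely echo Brick conventions (`hasPart`, `feeds`, `hasPoint`), but the graph itself is invented for illustration, not taken from the actual ontology.

```python
# Minimal illustration of a Brick-style building graph using plain
# (subject, predicate, object) triples. Names loosely follow Brick
# conventions but are simplified for this sketch.
triples = {
    ("Building_1", "hasPart", "Floor_2"),
    ("Floor_2", "hasPart", "Room_201"),
    ("AHU_1", "feeds", "VAV_201"),
    ("VAV_201", "feeds", "Room_201"),
    ("VAV_201", "hasPoint", "ZoneTempSensor_201"),
}

def objects(subject, predicate):
    """Return everything related to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def upstream(zone):
    """Walk `feeds` edges backwards to find equipment serving a zone."""
    found, frontier = set(), {zone}
    while frontier:
        sources = {s for s, p, o in triples if p == "feeds" and o in frontier}
        frontier = sources - found
        found |= sources
    return found

print(objects("VAV_201", "hasPoint"))  # the zone temperature sensor
print(upstream("Room_201"))            # equipment chain feeding the room
```

Encoding these operational relationships explicitly is what lets an application ask questions like "which air handler serves this room?" without vendor-specific integration code.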


MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

Opening doors

Cooley told VentureBeat that the launch of the latest self-serve starter plan opens doors for developers, data scientists, and building solutions teams to connect and integrate cloud data sources with just a few clicks. Developers can immediately create their account to begin adding their cloud-based integrations, see the data in the Mapped Console, and access it via an API. 

The company anticipates that many enterprises will opt for an upgrade to Mapped’s pro plan, which makes it easy to bring in data from building management systems that use legacy protocols like Modbus, BACnet, and LonWorks. The pro plan allows teams to deploy a virtual or physical Mapped Universal Gateway to discover, extract and normalize all on-premises data into a consolidated independent data layer. Developers can access this data via the Mapped Console or the GraphQL API. 
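As a rough illustration of what GraphQL access to a normalized building-data layer can look like, the sketch below assembles an authenticated GraphQL request using only Python's standard library. The endpoint, token, query fields and schema are all hypothetical; Mapped's actual API is not reproduced here.

```python
import json
import urllib.request

# Hypothetical sketch of querying a normalized building-data graph over
# GraphQL. The endpoint, token and field names are illustrative only.
API_URL = "https://api.example.com/graphql"

QUERY = """
query ZoneTemps($buildingId: ID!) {
  building(id: $buildingId) {
    name
    floors {
      zones {
        name
        points(type: TEMPERATURE) { id latestValue unit }
      }
    }
  }
}
"""

def build_request(building_id, token):
    """Package the query and variables as a standard GraphQL POST."""
    payload = json.dumps(
        {"query": QUERY, "variables": {"buildingId": building_id}}
    ).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_request("bldg-123", "demo-token")
print(req.get_header("Content-type"))  # inspect the request; no network call
```

The appeal of a single graph endpoint is visible even in this toy: one query shape covers temperature points regardless of which vendor's sensor produced them.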

When Cooley was at Cisco, he found that building asset managers would spend months of manual inspections discovering and locating physical devices within a building, then months more connecting and integrating that data into customized systems for each building. He says Mapped distills this work down to serve four solution categories: sustainability, predictive maintenance, security and tenant experience.

Eliminating silos

Cooley predicts that an independent data layer will become a critical necessity of the smart building stack for commercial and industrial assets. The company has been busy developing tools to break data silos across building systems. 

“By eliminating these silos and democratizing access to data across systems and buildings, Mapped enables flexibility in accessing real-time and historic time-series data, providing more in-depth insights for building owners, operators and solution providers,” he said.

The platform could also help building operators with owning and securing their data to avoid vendor lock-in. Cooley claims the solution will also make it easier to take advantage of new solutions that leverage artificial intelligence (AI) and machine learning to improve energy and sustainability management, predictive maintenance and occupant experiences. 

“With less time spent on integrating and onboarding, and more time spent on the things that actually drive business value, we expect the caliber of value provided by the next generation of proptech [property technology] solutions to increase dramatically,” Cooley said.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

Repost: Original Source and Author Link


The MLops company making it easier to run AI workloads across hybrid clouds


There is no shortage of options for organizations seeking to deploy and run machine learning and artificial intelligence (AI) workloads, whether in the cloud or on-premises. A key challenge for many, though, is figuring out how to orchestrate those workloads across multicloud and hybrid-cloud environments.

Today, AI compute orchestration vendor Run AI is announcing an update to its Atlas Platform that is designed to make it easier for data scientists to deploy, run and manage machine learning workloads across different deployment targets including cloud providers and on-premises environments.

In March, Run AI raised $75 million to help the company advance its technology and go-to-market efforts. At the foundation of the company’s platform is a technology that helps organizations manage and schedule resources on which to run machine learning. That technology is now getting enhanced to help with the challenge of hybrid cloud machine learning.

“It’s a given that IT organizations are going to have infrastructure in the cloud and some infrastructure on-premises,” Ronen Dar, cofounder and CTO of Run AI, told VentureBeat. “Companies are now strategizing around hybrid cloud and they are thinking about their workloads and about where is the right place for the workload to run.”



The increasingly competitive landscape for hybrid MLops

The market for MLops services is increasingly competitive as vendors continue to ramp up their efforts.

A Forrester Research report, sponsored by Nvidia, found that hybrid support for AI workload development is something that two-thirds of IT decision-makers have already invested in. It’s a trend that is not lost on vendors.

Domino Data Lab announced its hybrid approach in June, which also aims to help organizations run in the cloud and on-premises. Anyscale, which is the leading commercial sponsor behind the open-source Ray AI scaling platform, has also been building out its technologies to help data scientists run across distributed hardware infrastructure.

Run AI is positioning itself as a platform that can integrate with other MLops platforms, such as Anyscale, Domino and Weights & Biases. Lior Balan, director of sales and cloud at Run AI, said that his company operates as a lower level solution in the stack than many other MLops platforms, since Run AI plugs directly into Kubernetes.

As such, what Run AI provides is an abstraction layer for optimizing Kubernetes resources. Run AI also provides capabilities to share and optimize GPU resources for machine learning that can then be used to benefit other MLops technologies.

The complexity of multicloud and hybrid cloud deployments

A common approach today for organizations to manage multicloud and hybrid clouds is to use the Kubernetes container orchestration system.

If an organization is running Kubernetes in the public cloud or on-premises, then a workload could run anywhere that Kubernetes is running. The reality is a bit more complex, as different cloud providers have different configurations for Kubernetes and on-premises deployments have their own nuances. Run AI has created a layer that abstracts the underlying complexity and difference across public cloud and on-premises Kubernetes services to provide a unified operations layer.
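The placement problem such a unified layer solves can be sketched as a toy policy: given several clusters with free capacity, pick one that fits the workload, preferring the requested location. Everything here (cluster names, capacities and the policy itself) is invented for illustration; Run AI's actual scheduler is proprietary and far more sophisticated.

```python
from dataclasses import dataclass

# Toy sketch of placing ML workloads across Kubernetes clusters that may
# live in the cloud or on-premises. All names and numbers are invented.
@dataclass
class Cluster:
    name: str
    location: str        # "cloud" or "on-prem"
    free_gpus: int

@dataclass
class Workload:
    name: str
    gpus: int
    prefer: str          # preferred location for this workload

def place(workload, clusters):
    """Prefer the requested location; fall back to any cluster that fits."""
    candidates = [c for c in clusters if c.free_gpus >= workload.gpus]
    if not candidates:
        return None
    preferred = [c for c in candidates if c.location == workload.prefer]
    chosen = (preferred or candidates)[0]
    chosen.free_gpus -= workload.gpus   # reserve capacity on the choice
    return chosen.name

clusters = [
    Cluster("eks-us-east", "cloud", free_gpus=8),
    Cluster("dc1-rack3", "on-prem", free_gpus=2),
]
print(place(Workload("train-resnet", gpus=4, prefer="on-prem"), clusters))
print(place(Workload("infer-bert", gpus=1, prefer="on-prem"), clusters))
```

The first workload prefers on-premises but does not fit there, so it spills to the cloud cluster; the second fits on-premises and stays. Abstracting that decision away from the data scientist is the point of the layer described above.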

Dar explained that Run AI has built its own proprietary scheduler and control plane for Kubernetes, which manages how workloads and resources are handled across the various types of Kubernetes deployments. The company has added a new approach to its Atlas Platform that allows data scientists and machine learning engineers to run workloads from a single user interface, across the different types of deployments. Prior to the update, data scientists had to use different interfaces to log into each type of deployment in order to manage a workload.

In addition to now being able to manage workloads from a single interface, it’s also easier to move workloads across different environments.

“So they can run and train workloads in the cloud, and then switch and deploy them on premises with just a single button,” Dar said.



Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm


Today’s demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge that have less latency and more power efficiency. 

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are also not optimized for embedded edge applications that have latency requirements. 

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.



A new machine learning solution for the edge

ML company Sima AI seeks to address these shortcomings with its machine learning system-on-chip (MLSoC) platform, which enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on helping solve computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware codesign approach that emphasizes software capabilities to create edge-ML solutions that consume minimal power and can handle varying ML workloads. 

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management — all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or to add an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.
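The structure described, framework frontends lowered through an intermediate representation and then optimized, can be illustrated with a toy pipeline: framework-specific ops are normalized into a common IR, and a pass fuses adjacent ops into one kernel. This is purely a sketch of the compiler pattern, not Sima AI's implementation.

```python
# Illustrative sketch of a multi-stage ML-compiler pipeline: a framework
# graph is lowered to a small intermediate representation (IR), then
# optimized. Op names and passes are invented for this example.

def to_ir(framework_graph):
    """'Frontend': normalize framework-specific ops into common IR ops."""
    alias = {"Conv2D": "conv", "Convolution": "conv", "ReLU": "relu"}
    return [alias.get(op, op.lower()) for op in framework_graph]

def fuse_pass(ir):
    """One optimization pass: fuse conv+relu into a single kernel."""
    out, i = [], 0
    while i < len(ir):
        if i + 1 < len(ir) and ir[i] == "conv" and ir[i + 1] == "relu":
            out.append("conv_relu")
            i += 2
        else:
            out.append(ir[i])
            i += 1
    return out

# Two graphs from different frameworks lower to the same optimized IR.
tf_graph = ["Conv2D", "ReLU", "Softmax"]
onnx_graph = ["Convolution", "ReLU", "Softmax"]
print(fuse_pass(to_ir(tf_graph)))    # ['conv_relu', 'softmax']
print(fuse_pass(to_ir(onnx_graph)))  # ['conv_relu', 'softmax']
```

The payoff of a shared IR is visible here: once both frontends lower to the same representation, every optimization pass and backend is written once.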

The MLSoC promise – a software-first approach

Many ML startups focus on building pure ML accelerators rather than an SoC that includes a computer-vision processor, application processors, codecs and external memory interfaces, all of which let the MLSoC serve as a standalone solution that does not need a host processor. Other solutions usually lack the network flexibility, performance per watt and push-button efficiency required to make ML effortless for the embedded edge.

Sima AI’s MLSoC platform differs from other existing solutions as it solves all these areas at the same time with its software-first approach. 

The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. “Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry’s widest range of ML models and ML frameworks for computer vision,” Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview. 

From a performance point of view, Sima AI claims its MLSoC platform delivers 10x better performance than alternatives in key figures of merit such as FPS/W and latency.

The company’s hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including internal and external memory to minimize wait times. 
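Ahead-of-time scheduling of this kind can be sketched, at its simplest, as computing a static execution order over a dataflow graph before any data arrives, so no operation ever waits on an unscheduled producer. The op names and dependency graph below are invented for illustration; they stand in for the compute and memory-transfer steps the article describes.

```python
from graphlib import TopologicalSorter

# Toy sketch of ahead-of-time scheduling: dependencies between compute
# ops and memory transfers are known at compile time, so a complete
# execution order is fixed before the first frame arrives.
deps = {
    "load_frame": [],
    "preprocess": ["load_frame"],
    "dma_weights": [],                       # memory traffic is scheduled too
    "conv_block": ["preprocess", "dma_weights"],
    "postprocess": ["conv_block"],
}

schedule = list(TopologicalSorter(deps).static_order())
print(schedule)  # every producer appears before its consumers
```

A real scheduler also packs independent ops (here, `load_frame` and `dma_weights`) into the same time slot to hide memory latency behind compute; the topological order is just the correctness constraint it must respect.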

Achieving scalability and push-button results

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly codes that run on the machine learning-accelerator (MLA) block. 

For Rangasayee, the next phase of Sima AI’s growth is focused on revenue and on scaling its engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris.



Intel VP talks AI strategy as company takes on Nvidia


Intel is on an artificial intelligence (AI) mission that it considers very, very possible.  

The company is the world’s largest semiconductor chip manufacturer by revenue, and is best known for its CPU market dominance, with its familiar “Intel inside” campaign — reminding us all what resided inside our personal computers. However, in an age when AI chips are all the rage, the company finds itself chasing competitors, most notably Nvidia, which has a massive head start in AI processing with its GPUs. 

There are significant benefits to catching up in this space. According to a report, the AI chip market was worth around $8 billion in 2020, but is expected to grow to nearly $200 billion by 2030. 

At Intel’s Vision event in May, the company’s new CEO, Pat Gelsinger, highlighted AI as central to the company’s future products, while predicting that AI’s need for higher performance levels of compute makes it a key driver for Intel’s overall strategy. 



Gelsinger said he envisioned four superpowers that spur innovation at Intel: pervasive connectivity, ubiquitous compute, AI and cloud-to-edge infrastructure.

That requires high-performance hardware-plus-software systems, including in tools and frameworks used to implement end-to-end AI and data pipelines. As a result, Intel’s strategy is “to build a stable of chips and open-source software that covers a broad range of computing needs as AI becomes more prevalent,” a recent Wall Street Journal article noted. 

“Each of these superpowers is impressive on its own, but when they come together, that’s magic,” Gelsinger said at the Vision event. “If you’re not applying AI to every one of your business processes, you’re falling behind. We’re seeing this across every industry.”

It is in that context that VentureBeat spoke recently with Wei Li, vice president and general manager of AI and analytics at Intel. He is responsible for AI and analytics software and hardware acceleration for deep learning, machine learning and big data analytics on Intel CPUs, GPUs, AI accelerators and XPUs with heterogeneous and distributed computing. 

Intel’s software and hardware connection

According to Li, it is Intel’s strong connection between software and hardware that makes the company stand out and ready to compete in the AI space. 

“The biggest problem we’re trying to solve is creating a bridge between data and insights,” he said. “The bridge needs to be wide enough to handle a lot of traffic, and the traffic needs to have speed and not get stuck.” 

That means AI needs software to perform efficiently and fast, with an entire ecosystem that enables data scientists to take large amounts of data and devise solutions, as well as hardware acceleration that provides the capacity to process the data efficiently. 

“On the hardware side, when we add specific acceleration inside hardware, we need to know what we accelerate,” Li said. “So we are doing a lot of co-design, where the software team works with the hardware team very closely.”

The two groups operate almost like a single team, he added, to understand the models, discover performance bottlenecks and to add hardware capacity. 

“It’s an outside-in approach, a tightly integrated co-design, to make sure the hardware is designed the right way,” he said, adding that the original GPU was not designed for AI but happened to have the right amount of compute and bandwidth. Since then, GPUs have evolved. 

“When we design GPUs nowadays, we look at AI as an important workload to drive the GPU design,” he said. “There are specific features inside the GPU that are only for AI. That is the advantage of being in a company where we have both software and hardware teams.” 

Intel’s goal is to scale its AI efforts, said Li, which he maintained is about developing an ecosystem rather than separate solutions. 

“It’s going to be how we lead and nurture an open AI software ecosystem,” he explained. “Intel has always been an open ecosystem that enables competition, which allows Intel’s technologies to get to market more quickly at scale.” 

Intel’s trained AI reference kits increase speed

Historically, Intel has done a lot of work on the software capacity side to get better performance – basically increasing the width of the bridge between data and insights. 

Last month, Intel released trained AI reference kits to the open-source community, which Li said is one of the steps the company is taking to increase the speed of crossing the bridge. 

“Traditionally, AI software was designed for specialists, for the most part,” he said. “But we want to target a much broader set of developers.” 

The AI models in the reference kits were designed, trained, and tested from among thousands of models for specific use cases, while data scientists can customize and fine-tune the model with their own data. 

“You get a combination of ease of use because you’re starting from something almost pre-cooked, plus you get all the optimized software as part of the package so you can get your solution quickly,” Li explained. 
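The pattern the kits embody, starting from pretrained weights and fine-tuning on your own data rather than training from scratch, can be shown with a deliberately tiny stand-in: a one-parameter linear model warm-started from a "pretrained" weight. The real kits ship framework models; this toy only illustrates the workflow.

```python
# Toy illustration of the reference-kit pattern: warm-start from a
# pretrained model, then fine-tune on your own data. A one-parameter
# linear model y = w*x stands in for a real network.
def fine_tune(w_pretrained, data, lr=0.05, epochs=200):
    w = w_pretrained                      # warm start, not random init
    for _ in range(epochs):
        # mean-squared-error gradient over the user's dataset
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w0 = 1.8                                          # "pretrained": y ≈ 1.8x
my_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]    # my domain: y ≈ 2x
w = fine_tune(w0, my_data)
print(round(w, 2))  # fine-tuned weight, close to 2.04
```

Because the starting weight is already near the target, far fewer steps and far less data are needed than training from zero, which is the "almost pre-cooked" ease of use Li describes.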

Priorities over the next year 

In the coming year, one of Intel’s biggest AI priorities is on the software side.

“We will be spending more effort focusing on ease of use,” Li said. 

On the hardware side, he added, new products will focus heavily on performance, including the Sapphire Rapids Xeon server processor that will be released in 2023.

“It’s like a CPU with a GPU embedded inside because of the amount of compute capabilities you have,” said Li. “It’s a game changer to have all the acceleration inside the GPU.” 

In addition, Intel is focusing on the performance of its data center GPU, working with customer Argonne National Laboratory in service of its customers and developers.

Biggest Intel AI challenges

Li said the biggest challenge his team faces is executing on Intel’s AI vision. 

“We really want to make sure we execute well so we can deliver on the right schedule and make sure we run fast,” he said. “We want to have a torrid pace, which is not easy as a big company.” 

However, Li will not blame external factors creating challenges for Intel, such as the economy or inflation. 

“Everybody has headwinds, but I want to make sure we do the best we can with the things we have control over as a team,” he said. “So I’m pretty optimistic, particularly in the AI domain. It’s like it’s back to my graduate student days – you can really think big. Anything is possible.” 



Report: 76% of consumers would stop doing business with a company after just one bad customer experience


A new study revealed that as inflation increases, so do the expectations for the customer experience, making it a make-or-break moment for brands when it comes to earning customer loyalty. Rising inflation has consumers rethinking high-stakes purchases, like cars, vacations and home improvement projects, but almost two-thirds (63%) report that they’re still willing to pay more to get better customer service.

This is according to a new survey from AI-powered conversation intelligence company Invoca, which polled 500 consumers who completed a high-stakes purchase in the last year across the following industries: automotive, financial services, healthcare, home services, insurance, telecom and travel. The goal was to better understand the importance of the customer experience throughout the buying journey.

While consumers are shopping based on price, they’re also demanding great experiences. The survey found that more than three-quarters of respondents (76%) said they would stop doing business with a company after just one bad experience. When respondents ranked the reasons they would stop doing business with a company, a bad phone experience was second only to high prices, with rude agents (59%), long hold times (58%) and too many transfers (58%) cited as the top three elements of a bad experience.

Image credit: Invoca.

Consumers are now picking up the phone more often in their buying journey, with more than two-thirds (68%) making a phone call at some point. They expect businesses to know their reason for calling, especially if they’ve done business with them before: the vast majority (85%) expect the company to already know some details about them (e.g., purchase history, who they are). This is a big increase compared to 2021, when just 71% of respondents believed businesses should already know these details.



The survey also showed that when a business knows why a customer is calling, the customers trust them more and are more likely to do business with them in the future. Therefore, brands must empower their contact center agents with impactful data and technology to ensure they can deliver positive customer experiences.

Read the full 2022 Invoca Buyer Experience Benchmark Report.



Block is contacting 8.2 million customers after a former employee downloaded company reports

Block, the parent company of products like Cash App and Tidal, said in an SEC filing that a former employee downloaded “certain reports” that “contained some US customer information” without permission from Cash App Investing (via Protocol).

Data in the reports, which Block said were downloaded on December 10th, included “full name and brokerage account number” and for “some customers” included “brokerage portfolio value, brokerage portfolio holdings and/or stock trading activity for one trading day.” The employee, who downloaded the data after they left the company, had access to the reports “as part of their past job responsibilities,” according to Block.

“The reports did not include usernames or passwords, Social Security numbers, date of birth, payment card information, addresses, bank account information, or any other personally identifiable information,” Block said. “They also did not include any security code, access code, or password used to access Cash App accounts. Other Cash App products and features (other than stock activity) and customers outside of the United States were not impacted.” Block says it is contacting “approximately 8.2 million current and former customers” in regards to the incident.

“At Cash App we value customer trust and are committed to the security of customers’ information,” Cash App spokesperson Danika Owsley said in a statement to The Verge. “Upon discovery, we took steps to remediate this issue and launched an investigation with the help of a leading forensics firm. We know how these reports were accessed, and we have notified law enforcement. We are also contacting customers whose data was impacted. In addition, we continue to review and strengthen administrative and technical safeguards to protect information.”

Update April 5th, 7:34PM ET: Added Cash App statement.



Crisis Text Line stops sharing conversation data with AI company

Crisis Text Line has decided to stop sharing conversation data with its spun-off AI company, Loris.ai, after facing scrutiny from data privacy experts. “During these past days, we have listened closely to our community’s concerns,” the 24/7 hotline service writes in a statement on its website. “We hear you. Crisis Text Line has had an open and public relationship with Loris AI. We understand that you don’t want Crisis Text Line to share any data with Loris, even though the data is handled securely, anonymized and scrubbed of personally identifiable information.” Loris.ai will delete any data it has received from Crisis Text Line.

Politico recently reported on how Crisis Text Line (which is not affiliated with the National Suicide Prevention Lifeline) shares data from conversations with Loris.ai, which builds AI systems designed to increase empathetic conversation by customer service reps. Crisis Text Line is a not-for-profit service that, according to Shawn Rodriguez, VP and general counsel of Crisis Text Line, provides “mental health crisis intervention services.” It is also a shareholder in Loris.ai and, according to Politico, at one point shared a CEO with the company.

Before hotline users seeking assistance speak with volunteer counselors, they consent to data collection and can read the company’s data-sharing practices. Those volunteer counselors, which CTL calls “Empathy MVPs,” are expected to commit to “volunteering 4 hours per week until 200 hours are reached.” Politico quoted one volunteer who claimed that the people who contact the line “have an expectation that the conversation is between just the two people that are talking” and said he was terminated in August after raising concerns about CTL’s handling of data. That same volunteer, Tim Reierson, has started a petition pushing CTL “to reform its data ethics.”

Politico noted how Crisis Text Line says data use and AI play a role in how it operates:

“Data science and AI are at the heart of the organization — ensuring, it says, that those in the highest-stakes situations wait no more than 30 seconds before they start messaging with one of its thousands of volunteer counselors. It says it combs the data it collects for insights that can help identify the neediest cases or zero in on people’s troubles, in much the same way that Amazon, Facebook and Google mine trends from likes and searches.”

Following the report, Crisis Text Line released a statement on its website and via a Twitter thread. In the statement, Crisis Text Line said it does not “sell or share personally identifiable data with any organization or company.” It went on to claim that “[t]he only for-profit partner that we have shared fully scrubbed and anonymized data with is Loris.ai. We founded Loris.ai to leverage the lessons learned from operating our service to make customer support more human and empathetic. Loris.ai is a for-profit company that helps other for-profit companies employ de-escalation techniques in some of their most notoriously stressful and painful moments between customer service representatives and customers.”

In its defense, Crisis Text Line said over the weekend that its “data scrubbing process has been substantiated by independent privacy watchdogs such as the Electronic Privacy Information Center,” which it said called Crisis Text Line “a model steward of personal data.” It was citing a 2018 letter to the FCC; however, that defense is shakier now that the Electronic Privacy Information Center (EPIC) has responded with its own statement saying the quote was used outside of its original context:

“Our statements in that letter were based on a discussion with CTL about their data anonymization and scrubbing policies for academic research sharing, not a technical review of their data practices. Our review was not related to, and we did not discuss with CTL, the commercial data transfer arrangement between CTL and Loris.ai. If we had, we could have raised the ethical concerns with the commercial use of intimate message data directly with the organization and their advisors. But we were not, and the reference to our letter now, out of context, is wrong.”

On the Loris.ai website, the company claims that “safeguarding personal data is at the heart of everything we do” and that “we draw our insights from anonymized, aggregated data that have been scrubbed of Personally Identifiable Information (PII).” That’s not enough for EPIC, which makes the point that Loris and CTL are seeking to “extract commercial value out of the most sensitive, intimate, and vulnerable moments in the lives (of) those individuals seeking mental health assistance and of the hard-working volunteer responders… No data scrubbing technique or statement in a terms of service can resolve that ethical violation.”
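Rule-based scrubbing of the kind CTL describes can be sketched with a few regular expressions, though real de-identification pipelines are far more involved (names, for instance, require entity recognition), and, as EPIC argues, scrubbing alone does not settle the ethical questions. The patterns below are illustrative only.

```python
import re

# Minimal sketch of rule-based PII scrubbing: replace obvious
# identifiers with placeholder tokens. Real pipelines go much further
# (names, addresses, rare-event details that re-identify people).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(text):
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

msg = "call me at 555-867-5309 or email sam@example.com"
print(scrub(msg))  # call me at [PHONE] or email [EMAIL]
```

Note what the sketch cannot do: free-text details ("my sister in Topeka who teaches third grade") can still re-identify someone, which is exactly the gap EPIC's criticism points at.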

Update, 10:15PM ET: This story has been updated to reflect Crisis Text Line’s decision to stop sharing data with Loris.

Correction February 1st, 10:54AM ET: An earlier version of this story identified Tim Reierson as both a volunteer and an employee who was fired. He was a volunteer on the hotline who was terminated. We regret the error.



Autonomous trucking company Plus drives faster transition to semi-autonomous trucks

This article is part of a VB Lab Insight series paid for by Plus.

Breaking away from the competition, Plus, a Silicon Valley-based provider of autonomous trucking technology, is taking an innovative driver-in approach to commercialization that aligns with the critical challenges facing the trucking industry today.

According to newly released estimates of traffic fatalities in 2021, crashes involving at least one large truck increased 13% compared to the previous year.

With a nationwide truck driver shortage estimated at 80,000 last year and growing, PlusDrive, Plus’s market-ready supervised autonomous driving solution, helps long-haul operators reduce stress while improving safety for all road users.

First-to-market solution helps fleets today

In 2021, Plus achieved a critical industry milestone, becoming the first self-driving trucking technology company to deliver a commercial product into the hands of customers. Over the past year, Plus has delivered units of PlusDrive to some of the world’s largest fleets and truck manufacturers.

These units are not demos or test systems. Shippers have installed the technology on their trucks and PlusDrive-equipped trucks with the shippers’ drivers are hauling commercial loads on public roads nationwide.

PlusDrive improves safety and driver comfort, and saves at least 10% in fuel expenses, addressing driver recruitment and retention while offsetting surging diesel prices and other costs tied to today’s volatile trucking market.

Plus’s first-to-market shipments and installations validate the progress the company has made in developing a safe, reliable driver-in technology solution for the long-haul trucking industry.

PlusDrive will reach more fleets this year as Plus continues to expand its close collaboration with customers pioneering the use of driver-in autonomous trucking technology for their heavy-duty truck operations.

Partnerships unlock commercial pathways for PlusDrive

Partnerships with industry stakeholders — from automotive suppliers to truck manufacturers and regulators — have been critical to Plus’s success, helping to unlock innovation and commercial pathways to deploy its technology globally. Its autonomous driving technology can be retrofitted on existing trucks or installed at the factory level. With Cummins and IVECO, Plus is also developing autonomous trucks powered by natural gas for the U.S. and Europe.

Building on the market penetration it has achieved already, Plus this month announced a collaboration with Velociti, a fleet technology solutions company, creating a nationwide installation and service network capable of delivering PlusDrive semi-autonomous trucks to customers within 12 hours. Maintenance services are also available nationwide by utilizing mobile resources to meet customers at their preferred location.

The program, known as Plus Build, equips Class 8 trucks with state-of-the-art lidar, radar and camera sensors and Plus’s proprietary autonomous driving software. 

With PlusDrive, truck drivers stay in the cabin to oversee the Level 4 technology, but they do not have to actively drive the vehicle. Instead, they can turn on PlusDrive to automatically drive the truck on highways in all traffic conditions, including staying centered in the lane, changing lanes and handling stop-and-go traffic. PlusDrive reduces driver stress and fatigue, providing a compelling recruitment and retention tool during a time of driver shortages.
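The driver-in division of labor described above can be pictured as a simple engagement state machine. This is a hypothetical sketch (Plus has not published its internals): the automation drives only while engaged inside its highway operating domain, and the supervising driver can take control back at any time.

```python
# Toy engagement model for a supervised (driver-in) highway system.
# States, method names, and the engagement rules are invented for
# illustration; they are not Plus's actual design.

class SupervisedDrive:
    def __init__(self):
        self.engaged = False      # automation currently driving?
        self.on_highway = False   # inside the operational domain?

    def enter_highway(self):
        self.on_highway = True

    def exit_highway(self):
        # Automation hands control back when leaving its design domain.
        self.on_highway = False
        self.engaged = False

    def engage(self) -> bool:
        # The driver may engage the system only within its domain.
        if self.on_highway:
            self.engaged = True
        return self.engaged

    def driver_override(self):
        # The supervising driver can always take over immediately.
        self.engaged = False

drive = SupervisedDrive()
drive.engage()            # refused: not yet on a highway
drive.enter_highway()
drive.engage()            # accepted: system drives, driver supervises
drive.driver_override()   # driver takes back control at any time
```

The point of the sketch is the constraint, not the mechanics: the automation never drives outside its domain, and the human always wins an override.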

Velociti’s nationwide installation and maintenance network will help get PlusDrive into the hands of more truck drivers across the country, making their jobs “safer, easier and better,” said Shawn Kerrigan, COO and co-founder of Plus.

“Plus Build helps companies unlock the benefits of autonomous driving technology today by quickly modernizing trucks to improve their safety and uptime.”

Drivers are on board for next-generation autonomous driving technology

Plus works closely with customers, drivers and industry partners to help them understand the advantages of PlusDrive. Their testimonials validate the key benefits of the system.

“I am an admitted cynic, and I was blown away,” said commercial vehicle and transportation industry analyst Ann Rundle, Vice President of ACT Research. After taking a demo ride of a PlusDrive-enabled truck at the recent Advanced Clean Transportation Expo, Rundle said, “It was so seamless. I suppose if I wasn’t watching the screen that indicated the system is ‘Engaged’ I might have wondered [if PlusDrive was indeed still doing the driving].”

A professional driver invited to test PlusDrive echoed those sentiments. “If I were to have a system like this in my truck,” he said, “it would make my job a whole lot smoother, easier and a lot less stressful.”

Another operator from a customer fleet praised PlusDrive for “reinforcing safety; just keep giving us these tools — it’s a tool to help us help the company.” 

PlusDrive keeps economy moving, safely

Startups competing for a slice of the self-driving trucking future are aligned on the long-term goal of getting fully driverless commercial trucks (with no safety drivers) on the road. But Plus stands out as the only company to release a product that enables fleets and drivers to benefit from automation today, when there is little sign of an end to the chaos and stress roiling the freight markets. Through its collaborative, considered approach to delivering its autonomous trucking technology as a commercial product now, Plus is maximizing benefits for fleets while making long-haul trucking safer and easier for the hard-working operators who keep the nation’s economy moving.

VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and it’s always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact



Alphabet is launching a company that uses AI for drug discovery

A new Alphabet company will use artificial intelligence methods for drug discovery, Google’s parent company announced Thursday. It’ll build off of the work done by DeepMind, another Alphabet subsidiary that has done groundbreaking work using AI to predict the structure of proteins.

The new company, called Isomorphic Laboratories, will leverage that success to build tools that can help identify new pharmaceuticals. DeepMind CEO Demis Hassabis will also serve as the CEO for Isomorphic, but the two companies will stay separate and collaborate occasionally, a spokesperson said.

For years, experts have pointed to AI as a way to make it faster and cheaper to find new medications to treat various conditions. AI could help scan through databases of potential molecules to find some that best fit a particular biological target, for example, or to fine-tune proposed compounds. Hundreds of millions of dollars have been invested in companies building AI tools over the past two years.
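One simplistic way to picture this kind of virtual screening is ranking candidate molecules by how well a feature "fingerprint" matches a target profile. This is purely illustrative: the molecules, features, and target here are invented, and real pipelines model 3-D chemistry with learned structural models rather than toy sets.

```python
# Toy virtual screen: rank candidates by Tanimoto (Jaccard) similarity
# between binary feature sets. All molecules and features are made up.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two feature sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical feature profile of the biological target's binding site.
target_features = {"aromatic_ring", "h_donor", "h_acceptor", "halogen"}

candidates = {
    "mol_A": {"aromatic_ring", "h_donor", "h_acceptor"},
    "mol_B": {"aromatic_ring", "halogen"},
    "mol_C": {"h_donor"},
}

ranked = sorted(candidates,
                key=lambda m: tanimoto(candidates[m], target_features),
                reverse=True)
print(ranked)  # mol_A ranks first: 3 of 4 target features matched
```

Scanning millions of real compounds this way is cheap; the hard part, as the rest of the article notes, is that a high similarity score says little about how a compound behaves in a living body.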

Isomorphic will try to build models that can predict how drugs will interact with the body, Hassabis told Stat News. It could leverage DeepMind’s work on protein structure to figure out how multiple proteins might interact with each other. The company may not develop its own drugs but instead sell its models. It will focus on developing partnerships with pharmaceutical companies, a spokesperson said in a statement to The Verge.

Developing and testing drugs, though, could be a steeper challenge than figuring out protein structure. For example, even if two proteins have structures that fit together physically, it’s hard to tell how well they’ll actually stick. A drug candidate that looks promising based on how it works at a chemical level also might not always work when it’s given to an animal or a person. Over 90 percent of drugs that make it to a clinical trial end up not working, as chemist and writer Derek Lowe pointed out in Science this summer. Most of the problems aren’t because there was something wrong at the molecular level.

The work done at DeepMind and the proposed work at Isomorphic could help bust through some research bottlenecks but aren’t a quick fix for the countless challenges of drug development. “The laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain, as Helen Walden, a professor of structural biology at the University of Glasgow, previously told The Verge.



High-end braking company Brembo to release AI braking system in 2024

High-end Italian brake manufacturer Brembo announced in October its plan to release Sensify, an AI-enhanced braking system that promises both “driving pleasure and total safety” when it rolls out via an unnamed manufacturer in 2024. Going beyond anti-lock brakes, traction, and stability control, it replaces hydraulic controls with electronic ones for design flexibility and, potentially, more precise control.

Incorporating AI into vehicles isn’t new, as algorithms control playlists, maps, driver assistance, and even “self-driving” to various degrees. AI-based brake systems, however, are enough to raise eyebrows about how exactly they will work or enhance safety.

“As you start to deal with artificial intelligence and neural networks, they’re only as good as the training data you have,” said J. Christian Gerdes, an engineering professor and co-director of the Center of Automotive Research at Stanford University. “If you have a new case that represents something it hasn’t seen before, it’s hard to know in advance what it decides to do.”

Autocar UK reports that Sensify is using a “dedicated app” to program itself based on data and enhance the driving experience. The system supposedly will use predictive algorithms, sensors, and data management tools that give it a “digital brain” capable of controlling each wheel independently.

Gerdes says that modern anti-lock brake systems, first introduced in the 1970s, are a Band-Aid for wheels locking up under heavy braking. “What would make more sense is to have an understanding of what’s going on at each of your wheels. And to then intelligently ask for brake force at different wheels.”

[Image: Brembo diagram of a car with labels describing the Sensify system. Credit: Brembo]

For now, the mechanical hardware has a larger presence than the software in the Sensify system, according to Brembo chief executive Daniele Schillaci. The exec told Reuters that “the mechanic and software contents will soon be equivalent and by the end of the decade software will become predominant in braking systems.”

The company plans to open a technology lab in Silicon Valley by the end of the year to further its digital strategies. Brembo says, “Data collection is leveraged to improve the driver experience and allows the system to be constantly updated,” but how it will handle questions like privacy and security of that collected data is unclear.

Among Sensify’s benefits are adaptation to individual driving styles, adjustment to weather and road conditions, and shorter locking times. Brembo also says the system will be cheaper over the life of a car: replacing hydraulic brake fluid with electromechanical control brings lower maintenance costs, lower disc consumption, and low or zero drag torque. In an electric or hybrid vehicle, better control of regenerative braking can help reduce the battery size.

What happens when there is a hardware failure in that AI brain? In a demo for Autocar UK, Brembo explained the system has two ECUs (electronic control units) that are connected as a fail-safe but send their commands separately.
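One common shape for that kind of redundancy is a primary/backup pair whose outputs are arbitrated each control cycle. The sketch below is generic, not Brembo’s actual design: the fault model (a unit returning no command) and the full-braking fallback are assumptions for illustration.

```python
# Generic dual-ECU fail-safe sketch: two independent controllers each
# compute a brake command (0.0 = none, 1.0 = full). The primary's
# command is used unless it has faulted; if both fault, fail safe.
from typing import Optional

def arbitrate(primary: Optional[float], backup: Optional[float]) -> float:
    """Pick a brake command from two redundant controllers.

    None models a unit that failed to produce a command this cycle.
    If both units fail, command full braking as a safe fallback.
    """
    if primary is not None:
        return primary
    if backup is not None:
        return backup
    return 1.0  # dual fault: fail-safe default

assert arbitrate(0.4, 0.4) == 0.4    # normal operation, units agree
assert arbitrate(None, 0.4) == 0.4   # primary fault, backup takes over
assert arbitrate(None, None) == 1.0  # dual fault: stop the vehicle
```

Real safety-critical designs layer more on top of this (cross-checking the two commands against a tolerance, diagnostics, independent power feeds), but the core idea is the same: no single unit’s failure leaves the brakes without a command.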

The system is set to be released in 2024. Brembo said that it’s designed to work on multiple car types like sedans, racecars, SUVs, and commercial vehicles, but it’s not clear how much it needs to be customized for each type.
