
Autonomous trucking company Plus drives faster transition to semi-autonomous trucks

This article is part of a VB Lab Insight series paid for by Plus.


Breaking away from the competition, Plus, a Silicon Valley-based provider of autonomous trucking technology, is taking an innovative driver-in approach to commercialization that aligns with the critical challenges facing the trucking industry today.

According to newly released estimates of traffic fatalities in 2021, fatalities in crashes involving at least one large truck increased 13% compared to the previous year.

With a nationwide truck driver shortage estimated at 80,000 last year and growing, PlusDrive, Plus’s market-ready supervised autonomous driving solution, helps long-haul operators reduce stress while improving safety for all road users.

First-to-market solution helps fleets today

In 2021, Plus achieved a critical industry milestone, becoming the first self-driving trucking technology company to deliver a commercial product into the hands of customers. Over the past year, Plus has delivered PlusDrive units to some of the world’s largest fleets and truck manufacturers.

These units are not demos or test systems. Shippers have installed the technology on their trucks, and PlusDrive-equipped trucks operated by the shippers’ own drivers are hauling commercial loads on public roads nationwide.

PlusDrive improves safety and driver comfort and saves at least 10% in fuel expenses, addressing driver recruitment and retention while offsetting surging diesel prices and other costs tied to today’s volatile trucking market.

Plus’s first-to-market shipments and installations validate the progress the company has made in developing a safe, reliable driver-in technology solution for the long-haul trucking industry.

PlusDrive will reach more fleets this year as Plus continues to expand its close collaboration with the customers pioneering the use of driver-in autonomous trucking technology for their heavy-duty truck operations.

Partnerships unlock commercial pathways for PlusDrive

Partnerships with industry stakeholders — from automotive suppliers to truck manufacturers and regulators — have been critical to Plus’s success, helping to unlock innovation and commercial pathways to deploy its technology globally. Its autonomous driving technology can be retrofitted on existing trucks or installed at the factory level. With Cummins and IVECO, Plus is also developing autonomous trucks powered by natural gas for the U.S. and Europe.

Building on the market penetration it has already achieved, Plus this month announced a collaboration with Velociti, a fleet technology solutions company, creating a nationwide installation and service network capable of delivering PlusDrive semi-autonomous trucks to customers within 12 hours. Maintenance services are also available nationwide, with mobile crews meeting customers at their preferred locations.

The program, known as Plus Build, equips Class 8 trucks with state-of-the-art lidar, radar and camera sensors and Plus’s proprietary autonomous driving software. 

With PlusDrive, truck drivers stay in the cabin to oversee the Level 4 technology, but they do not have to actively drive the vehicle. Instead, they can turn on PlusDrive to automatically drive the truck on highways in all traffic conditions, including staying centered in the lane, changing lanes and handling stop-and-go traffic. PlusDrive reduces driver stress and fatigue, providing a compelling recruitment and retention tool during a time of driver shortages.

Velociti’s nationwide installation and maintenance network will help get PlusDrive into the hands of more truck drivers across the country, making their jobs “safer, easier and better,” said Shawn Kerrigan, COO and co-founder of Plus.

“Plus Build helps companies unlock the benefits of autonomous driving technology today by quickly modernizing trucks to improve their safety and uptime.”

Drivers are on board for next-generation autonomous driving technology

Plus works closely with customers, drivers and industry partners to help them understand the advantages of PlusDrive. Their testimonials validate the key benefits of the system.

“I am an admitted cynic, and I was blown away,” said commercial vehicle and transportation industry analyst Ann Rundle, Vice President of ACT Research. After taking a demo ride of a PlusDrive-enabled truck at the recent Advanced Clean Transportation Expo, Rundle said, “It was so seamless. I suppose if I wasn’t watching the screen that indicated the system is ‘Engaged’ I might have wondered [if PlusDrive was indeed still doing the driving].”

A professional driver invited to test PlusDrive echoed those sentiments. “If I were to have a system like this in my truck,” he said, “it would make my job a whole lot smoother, easier and a lot less stressful.”

Another operator from a customer fleet praised PlusDrive for “reinforcing safety; just keep giving us these tools — it’s a tool to help us help the company.” 

PlusDrive keeps economy moving, safely

Startups competing for a slice of the self-driving trucking future are aligned on the long-term goal of getting fully driverless commercial trucks (with no safety drivers) on the road. But Plus stands out as the only company to release a product that enables fleets and drivers to benefit from automation today, when there is little sign of an end to the chaos and stress roiling the freight markets. Through its collaborative, considered approach to delivering its autonomous trucking technology as a commercial product now, Plus is maximizing benefits for fleets while making long-haul trucking safer and easier for the hard-working operators who keep the nation’s economy moving.


VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and it is always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.


Why AI and autonomous response are crucial for cybersecurity (VB On-Demand)

Presented by Darktrace


Today, cybersecurity is in a state of continuous growth and improvement. In this on-demand webinar, learn how two organizations use a continuous AI feedback loop to identify vulnerabilities, harden defenses and improve the outcomes of their cybersecurity programs.

Watch free on-demand here.


The security risk landscape is in tremendous flux, and the traditional on-premises approach to cybersecurity is no longer enough. Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation.

And that’s not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization’s digital infrastructure.

“In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses,” says Steve Lorimer, group privacy and information security officer at Hexagon. “Organizations must focus on improving their security posture and preventing business disruption, so-called cyber resilience. You don’t have to win every battle, but you must win the important ones.”

ISOs need to look for cybersecurity options that alleviate resource challenges, add value to their teams, and reduce response time. Self-learning AI trains itself on unlabeled data. Autonomous response calculates the best action to take to contain in-progress attacks at machine speed, preventing attacks from spreading throughout the business and interrupting crucial operations. Both are becoming essential for a security program that must address these challenges.

Why self-learning AI is essential in the new cybersecurity landscape

Attackers are constantly innovating, transforming old attack patterns into new ones. Self-learning AI can detect when something in an organization’s digital infrastructure changes, identify behaviors or patterns that haven’t been seen previously, and act to quarantine the potential threat before it can escalate into a full-blown crisis, disrupting business. 
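
As an illustration of the idea, the sketch below fits an unsupervised model to a baseline of routine traffic features and flags events that deviate from everything seen before. The features, data, and choice of model are assumptions made for the example, not Darktrace's actual method.

```python
# Minimal sketch of self-learning anomaly detection on unlabeled traffic features.
# The feature set and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: bytes sent (KB), connections/min, distinct ports contacted (stand-in data)
normal_traffic = rng.normal(loc=[500, 10, 3], scale=[50, 2, 1], size=(1000, 3))

# "train itself" on unlabeled observations of the organization's own traffic
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([[510, 11, 3],       # looks like business as usual
                       [9000, 240, 60]])   # a never-before-seen pattern
print(model.predict(new_events))           # 1 = normal, -1 = quarantine candidate
```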

“It’s about building layers at the end of the day,” Lorimer adds. “AI will always be a supporting element, not a replacement for human teams and knowledge. AI can empower human teams and decrease the burden. But we can never entirely rely on machines; you need the human element to make gut feeling decisions and emotional reactions to influence more significant business decisions.”

The advantages of autonomous response

Often, cyber attacks start slowly; many take months to move between reconnaissance and penetration, but the most important components of an attack happen very quickly. Autonomous response unlocks the ability to react at machine speed to identify and contain threats in that short window.

The second key advantage of autonomous response is that it enables “always-on” defense. Even with the best intentions in the world, security teams will always be constrained by resources. There aren’t enough people to defend everything all the time. Organizations need a layer that can augment the human team, providing them time to think and respond with crucial human context, like business and strategy acumen. Autonomous response capabilities allow the AI to make decisions instantaneously. These micro-decisions give human teams enough time to make those macro-decisions.
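
A toy sketch of that division of labor might look like the following, where the automated layer takes the narrowest containing action and leaves everything else to the human team. The scores, thresholds, and actions are invented for illustration.

```python
# Sketch of machine-speed "micro-decisions": contain the anomalous device with the
# least disruptive action, leaving macro-decisions to the human team.
# Thresholds and actions are assumptions for the example.
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device_id: str
    anomaly_score: float  # 0.0 (routine) to 1.0 (severe), from the detection layer

def respond(event: DeviceEvent) -> str:
    """Return the narrowest action that contains the suspected threat."""
    if event.anomaly_score >= 0.9:
        return f"quarantine {event.device_id}"                    # cut all connectivity
    if event.anomaly_score >= 0.6:
        return f"block unusual connections on {event.device_id}"  # surgical block
    return "log only"                                             # no business disruption

print(respond(DeviceEvent("laptop-42", 0.95)))  # quarantine laptop-42
```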

Leveling up: Leveraging attack path modeling

Once an organization has matured its thinking to the point of assumed breach, the next question is understanding how attackers traverse the network, Lorimer says. Now, AI can help businesses better understand their own systems and identify the most high-risk paths an attacker might take to reach their crown jewels or most important information and assets.

This attack simulation allows them to harden defenses around their most vulnerable areas, Lorimer says. And self-learning AI is really all about a paradigm shift: instead of building up defenses based on historical attack data, you need to be able to defend against novel threats.

Attack path modeling (APM) is a revolutionary technology because it allows organizations to map the paths where security teams may not have as much visibility or may not have originally thought of as vulnerable. The network is never static; a large, modern, and innovative enterprise constantly changes. So, APM can run continuously and alert teams of new attack paths created via new integrations with a third party or a new device joining the digital infrastructure.
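
In graph terms, the core of APM can be pictured as ranking paths from exposed entry points to critical assets and re-running the analysis whenever the graph changes. The sketch below does this with networkx; the network, edge weights, and scoring are assumptions for illustration, not Darktrace's implementation.

```python
# Illustrative attack path modeling: rank routes from an internet-facing host to a
# "crown jewel" asset. Lower edge weight = easier hop for an attacker (assumed scale).
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("vpn-gateway", "hr-laptop", 2.0),
    ("hr-laptop", "file-server", 1.0),
    ("vpn-gateway", "third-party-api", 0.5),  # a new integration opens a path
    ("third-party-api", "file-server", 1.5),
    ("file-server", "crown-jewel-db", 1.0),
])

# enumerate candidate attack paths from easiest to hardest; rerun as the graph changes
for path in nx.shortest_simple_paths(g, "vpn-gateway", "crown-jewel-db", weight="weight"):
    cost = nx.path_weight(g, path, weight="weight")
    print(f"{cost:.1f}  {' -> '.join(path)}")
```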

“This continuous, AI-based approach allows organizations to harden their defenses continually, rather than relying on biannual, or even more infrequent, red teaming exercises,” Lorimer says. “APM enables organizations to remediate vulnerabilities in the network proactively.”

Choosing a cybersecurity solution

When choosing a cybersecurity solution, there are a few things ISOs need to look for, Lorimer says. First, the solution should augment the human teams without creating substantial additional work. The technologies should be able to increase the value that an organization delivers.

ISOs should also look to repair any significant overlaps or gaps in technology in their existing security stacks. Today’s solutions can replace much of the existing stack with better, faster, more optimized, more automated and technology-led approaches. 

Beyond the technology itself, ISOs must seek out a vendor that adds human expertise and contextual analysis on top.

“For example, Darktrace’s Security Operations Center (SOC) and Ask the Expert services allow our team at Hexagon to glean insights from their global fleet, partner community, and entire customer base,” Lorimer says. “Darktrace works with companies across all different industries and geographies, and that context allows us to understand threats and trends that may not have immediately impacted us yet.” 

Hexagon operates in two key industry sectors: manufacturing and software engineering, and so each facet of the business faces different, specific threats from different threat actors. Darktrace’s SOC offers insights from broader industry experts and analysts based on their wealth of knowledge. 

But even with the best tools, you can’t solve every problem. You need to focus on solving the issues that will genuinely affect your ability to deliver to your customers and, thus, your bottom line. You should establish controls that can help manage and reduce that risk.

“It’s all about getting in front of issues before they can escalate and mapping out potential consequences,” Lorimer says. “It all comes down to understanding risk for your organization.”

For more insight into the current threat landscape and to learn more about how AI can transform your cybersecurity program, don’t miss this VB On-Demand event!

Watch free on-demand here.

You’ll learn about:

  • Protecting and securing citizens, nations, facilities, and data with autonomous decision making
  • Applying continuous AI feedback systems to improve outcomes and harden security systems
  • Simulating real-world scenarios to understand attack paths adversaries may leverage against critical assets
  • Fusing the physical and digital worlds to create intelligent security for infrastructure

Presenters:

  • Nicole Eagan, Chief Strategy Officer and AI Officer, Darktrace
  • Norbert Hanke, Executive Vice President, Hexagon
  • Mike Beck, Global CISO, Darktrace
  • Steve Lorimer, Group Privacy & Information Security Officer, Hexagon
  • Chris Preimesberger, Moderator, Contributing Writer, VentureBeat


Standard AI acquires ThirdEye and teams up to bolster autonomous checkout tech

San Francisco-headquartered Standard AI, a startup providing autonomous checkout technology for the retail industry, today announced that it has acquired London-based computer vision startup ThirdEye Labs and hired its team. The terms of the deal were not disclosed.

Founded in 2017, Standard enables retailers to set up a cashier-free checkout experience. The company has developed a solution that leverages ceiling-mounted cameras, computer vision, and proprietary AI to anonymously monitor shoppers’ activity and the products they pick up from the shelves. This way, customers can simply check into a store using the company’s app, pick up the products they need, and walk out — without standing in long queues or stopping to scan and pay.

The autonomous checkout system automatically bills shoppers as they exit the store, giving retailers a way to reduce labor costs and improve profit margins while providing an elevated in-store shopping experience. Standard, which competes with Zippin, Trigo, and Amazon Go cashier-free stores, says the whole setup consists of just cameras and onsite servers and can be installed overnight.
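
To make that flow concrete, here is a toy sketch of the bookkeeping such a system implies: vision-derived pick and put-back events accrue to an anonymous session, and a bill is issued at exit. The event schema, catalog, and session token are assumptions for the example, not Standard's actual design.

```python
# Toy "virtual cart" pipeline: camera-derived events accrue to an anonymous session,
# and the shopper is billed on exit. All names and the schema are illustrative.
from collections import Counter
from dataclasses import dataclass, field

PRICES = {"soda": 1.99, "chips": 3.49}  # stand-in product catalog

@dataclass
class Session:
    shopper_token: str                   # anonymous check-in token from the app
    cart: Counter = field(default_factory=Counter)

    def on_event(self, sku: str, action: str) -> None:
        self.cart[sku] += 1 if action == "pick" else -1  # "put_back" decrements

    def bill_on_exit(self) -> float:
        return sum(PRICES[sku] * n for sku, n in self.cart.items() if n > 0)

s = Session("token-123")
s.on_event("soda", "pick"); s.on_event("chips", "pick"); s.on_event("chips", "put_back")
print(f"billed on exit: ${s.bill_on_exit():.2f}")  # billed on exit: $1.99
```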

ThirdEye brings expertise in theft detection

With ThirdEye’s team coming on board, the retail-tech company aims to expand the capabilities of its platform. Founded in 2016, ThirdEye has built expertise in leveraging AI and computer vision to help retailers detect instances of in-store theft and checkout fraud. The company’s machine learning algorithms focus on specific human movements — similar to those used by autonomous vehicles to identify and predict pedestrian behavior — and notify store employees to follow up where required. ThirdEye claims the technology reduces theft by up to 50% and even flags stockouts and flash queues, helping improve overall operations.
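
For flavor, a movement-based detector of this kind could be sketched as a classifier over short windows of pose keypoints, as below. The windowing, features, threshold, and training data here are all invented for the illustration; ThirdEye's actual models are not public.

```python
# Illustrative sketch: flag suspicious shopper movement from pose-keypoint tracks.
# Window size, features, labels, and threshold are assumptions, not ThirdEye's design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 30  # frames per track (about one second at 30 fps; assumed)

def featurize(track: np.ndarray) -> np.ndarray:
    """Summarize a (WINDOW, n_joints, 2) keypoint track as joint-velocity statistics."""
    v = np.diff(track, axis=0)  # per-frame joint motion
    return np.concatenate([v.mean(axis=(0, 1)), v.std(axis=(0, 1)),
                           np.abs(v).max(axis=(0, 1))])

# stand-in training data: 0 = normal shopping, 1 = concealment-like movement
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 6)), rng.integers(0, 2, 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def alert_staff(track: np.ndarray, threshold: float = 0.8) -> bool:
    score = clf.predict_proba(featurize(track).reshape(1, -1))[0, 1]
    return score >= threshold  # if True, notify an employee to follow up

print(alert_staff(rng.random((WINDOW, 17, 2))))
```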

“Computer vision has rapidly become one of the most valuable ways for brick-and-mortar retailers to keep up with the data-driven flexibility of eCommerce,” Jordan Fisher, CEO of Standard AI, said. “Raz and the ThirdEye product and engineering team have been engaged in cutting-edge work, and they will be invaluable to our team as we expand the capabilities of our platform deeper into retail.”

New VP to lead the team behind autonomous checkout

In addition to the acquisition and hiring, Standard has also appointed Tesla and Lyft alum Sameer Qureshi as its VP of machine learning.

This influx of talent, Fisher told VentureBeat, will ultimately help the company take its offering to more diverse retail environments. So far, Standard has upgraded only a handful of convenience stores, including locations from Circle K and Compass Group, with its autonomous checkout technology.

“We’ll continue to push the boundaries with new products, analytics, and inventory management,” he added, noting that the company also plans to expand beyond retail — into areas such as manufacturing facilities, offices, and gyms — though not within the next three to four years.


Kodiak Robotics to expand autonomous trucking with $125M

Kodiak Robotics, a startup developing self-driving truck technologies, today announced that it raised $125 million in an oversubscribed series B round for a total of $165 million to date. The tranche — which includes investments from SIP Global Partners, Lightspeed Venture Partners, Battery Ventures, CRV, Muirwoods Ventures, Harpoon Ventures, StepStone Group, Gopher Asset Management, Walleye Capital, Aliya Capital Partners, and others — will be put toward expanding Kodiak’s team, adding trucks to its fleet, and growing its autonomous service capabilities, according to CEO Don Burnette.

“Our series B drives us into hyper-growth so we can double our team, our fleet, and continue to scale our business,” Burnette said in a statement. “With [it], we will further accelerate towards launching our commercial self-driving service with our partners in the coming years to help address these critical challenges.”

While autonomous trucks could face challenges in commercializing at scale until clearer regulatory guidelines are established, the technology has the potential to reduce the cost of trucking from $1.65 per mile to $1.30 per mile by mid-decade, according to a Pitchbook analysis. That’s perhaps why in the first half of 2021, investors poured a record $5.6 billion into driverless trucking companies, eclipsing the $4.2 billion invested in all of 2020.

The market for semi- and fully autonomous trucks will reach approximately $88 billion by 2027, a recent Acumen Research and Consulting report estimates, growing at a compound annual growth rate of 10.1% between 2020 and 2027.

Kodiak technology

Kodiak, which was cofounded by Burnette and former venture capitalist Paz Eshel, emerged from stealth in 2018. Burnette left Google’s self-driving project for Otto in early 2016, then briefly worked at Uber after the company acquired Otto later that year at a reported $680 million valuation.

“I was very fortunate to be an early member of and software tech lead at the Google self-driving car project, the predecessor to Waymo. I spent five years there working on robotaxis, but ultimately came to believe that there were tons of technical challenges for such applications, and the business case wasn’t clear,” Burnette told VentureBeat via email. “I realized in those early days that long-haul trucking represented a more compelling use case than robotaxis. I wanted a straight-forward go-to-market opportunity, and I saw early on that autonomous trucking was the logical first application at scale.”

Kodiak’s self-driving platform uses a combination of light detection and ranging sensors, known as lidar, along with camera, radar, and sonar hardware. A custom computer processes the sensor data and plans the truck’s path. Overseen by a safety driver, the computer controls the brakes, steering column, and throttle to move the truck to its destination.

Kodiak’s sensor suite collects raw data about the world around the truck, processing it to locate and classify objects and pedestrians. The onboard computer reconciles the data with lightweight road maps, which are shipped to Kodiak’s fleet over the air and contain information about the highway, including construction zones and lane changes.

Kodiak claims its technology can detect shifting lanes, speed changes, heavy machinery, road workers, construction-specific signs, and more, in rain or sunshine. Moreover, the company says its trucks can merge on and off highways and anticipate rush hour, holiday traffic, and construction backups, adjusting their braking and acceleration to optimize for delivery windows while maximizing fuel efficiency.
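
One way to picture the interplay between perception and those over-the-air maps is the sketch below, which caps speed when classified objects or mapped construction zones appear ahead. The data structures, labels, and speed value are assumptions for illustration, not Kodiak's stack.

```python
# Sketch: reconcile fused-perception output with a lightweight highway map and
# adjust speed. All structures, labels, and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # from fused lidar/radar/camera classification
    mile_marker: float

@dataclass
class MapZone:
    kind: str           # e.g. "construction"; shipped to the fleet over the air
    start: float
    end: float

def target_speed(base_mph: float, detections: list[Detection],
                 zones: list[MapZone]) -> float:
    speed = base_mph
    for d in detections:
        in_construction = any(z.kind == "construction" and z.start <= d.mile_marker <= z.end
                              for z in zones)
        if d.label in {"road_worker", "heavy_machinery"} or in_construction:
            speed = min(speed, 45.0)  # conservative cap; value chosen for the example
    return speed

zones = [MapZone("construction", 102.0, 104.5)]
print(target_speed(65.0, [Detection("road_worker", 103.2)], zones))  # 45.0
```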

“Slower-moving vehicles, interchanges, vehicles on the shoulder, and even unexpected obstacles are common on highways. The Kodiak Driver can identify, plan, and execute a path around obstacles to safely continue towards its destination,” Kodiak says on its website. “The Kodiak Driver was built from the ground up specifically for trucks. Trucks run for hundreds of thousands of miles, in the harshest of environments, for extremely long stretches. Our focus has always been on building technology that’s reliable, safe, automotive-grade, and commercial ready.”

The growing network of autonomous trucking

In the U.S. alone, the American Trucking Association (ATA) estimates that there are more than 3.5 million truck drivers on the roads, with close to 8 million people employed across the segment. Trucks moved 80.4% of all U.S. freight and generated $791.7 billion in revenue in 2019, according to the ATA.

But the growing driver shortage remains a strain on the industry. Estimates peg the shortfall of long-haul truck drivers at 60,000 in the U.S., a gap that’s projected to widen to 160,000 within the decade.

Chasing after the lucrative opportunity, autonomous vehicle startups focused on freight delivery have racked up hundreds of millions in venture capital. In May, Plus agreed to merge with a special purpose acquisition company in a deal worth an estimated $3.3 billion. Self-driving truck maker TuSimple raised $1 billion through an initial public offering (IPO) in March. Autonomous vehicle software developer Aurora filed for an IPO last week. And Waymo, which is pursuing driverless truck technology through its Waymo Via business line, has raised billions of dollars to date at a valuation of just over $30 billion.

Other competitors in the self-driving truck space include Wilson Logistics and Pony.ai. But Kodiak points to a minority investment from Bridgestone to test and develop smart tire technology as one of its key differentiators. BMW i Ventures is another backer, along with South Korean conglomerate SK, which is exploring the possibility of deploying Kodiak’s vehicle technology in Asia.

“Kodiak was founded in April 2018 and took delivery of its first truck in late 2018. We completed our first closed-course test drive just three weeks later, and began autonomously moving freight for [12] customers between Dallas and Houston in the summer of 2019,” Burnette said. “Our team is the most capital-efficient of the autonomous driving companies while also having developed industry leading technology. We plan to achieve driverless operations at scale for less than 10% of what Waymo has publicly raised to date, and less than 25% of what TuSimple has raised to date.”

Kodiak, whose headcount now stands at 85 people, recently said that it plans to expand freight-carrying pilots to San Antonio and other cities in Texas. The company is also testing trucks in Mountain View, California.

In the next few months, Kodiak plans to add 15 new trucks to its fleet, for a total of 25.

“We are at a pivotal moment in the autonomous vehicle industry. It’s not a question of will autonomous trucking technology happen — it’s when is it going to happen,” Burnette continued. “That being said, logistics is an $800 billion-per-year industry with a lot of room for many players to be successful.”


Have autonomous robots started killing in war? The reality is messier than it appears

It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

But is it? As you might guess, it’s a hard question to answer.

The news coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and if we wait for this to happen, their presence in war will have long been normalized.

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in:

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing various troop movements, weapon transfers, raids and skirmishes that took place among the war’s various factions, both foreign and domestic.

The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a story in the New Scientist, which ran a piece with the headline: “Drones may have attacked humans fully autonomously for the first time.” The NS is very careful to caveat that military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

Let’s be clear: the UN report does not say for certain whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

Some experts took issue with these stories because they followed the UN’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems, or LAWS (that’s policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

The IAI Harpy is launched from the ground and can linger for hours over a target area.
Image: IAI

“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munitions have been on the battlefield for a while – most notably in Nagorno-Karabakh. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign To Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapon-makers access to cheap vision systems that can select targets as quickly as your phone identifies pets, plants, and familiar faces in your camera roll. These systems promise nuanced and precise identification of targets but are also much more prone to mistakes.

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”

This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)” Demo videos appear to show it doing exactly that. In one clip, the quadcopter homes in on a mannequin in a stationary group.

But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

“I think we really have to think about the bigger picture […] which is why I focus on the practice, as well as functionality,” says Schwarz. “In my work I try and show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ choice. And, as you rightly point out, errors will very likely prevail […] which will likely be addressed only post hoc.”

Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. As Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”




Informatica GM touts advances in autonomous data management


Master data management (MDM) has historically been a murky area that many organizations simply ignored. In fact, in many organizations no one is quite sure who is responsible for data quality.

But data quality has become an increasingly critical issue as organizations look to improve the customer experience and lay the groundwork for investments in AI.

VentureBeat caught up with Manouj Tahiliani, general manager for MDM at Informatica, to gain insights into how MDM is rapidly evolving into an autonomous process that needs to be infused within a larger data management strategy.

This interview has been edited for brevity and clarity.

VentureBeat: Why is there so much focus on MDM now? To some degree, it’s always been a challenge.

Manouj Tahiliani: The pandemic was this … event where companies have been forced to digitize. It’s happening across every area. The natural instinct is for businesses to roll out point solutions. It’s kind of the easy way out. All of this is leading to applications and technology sprawl. We’ve always had a fragmentation issue — now the problem is further compounded, so I think there is a realization. That is a trend that we are seeing now. It’s very apparent MDM is the foundation for any kind of digital transformation.

Above: Manouj Tahiliani, general manager for MDM at Informatica.

VentureBeat: Who is responsible for data quality these days?

Tahiliani: There is a classic problem within data management in terms of how to govern. You need to lay out a data strategy that aligns with your board strategy or the other business outcomes that an organization wants to get. But then you get to the next level of detail. How do you implement the data strategy? One of the foundations of any successful data management journey is having a good governance framework in place, because governance is what brings the people, process, and technology together and puts in the necessary guardrails. Businesses own the data. IT teams can’t tell if two customers are the same or not, but IT does need to lay out the processes to give the business stewardship capabilities. It’s a collaboration between business and IT to be successful. It’s not a one-and-done kind of thing.

VentureBeat: Does every organization need a chief data officer?

Tahiliani: We’ve seen the chief data officer. Sometimes they’re also chief digital officers. It’s still an evolving space. I see that longer-term the chief data officer will be someone responsible for data. It may be a title with a role to it, but it’s certainly a role with certain responsibilities. When their role is not defined, it’s usually somebody from the business who’s kind of taking responsibility and kind of acting as a chief data officer without the title.

VentureBeat: It seems data management is ripe for automation. How much progress is being made on that front?

Tahiliani: MDM is evolving very rapidly. We are already using AI to improve. There is definitely a line of sight toward autonomous MDM. A core process of MDM is to get a single view of the customer. Then you have complex business processes on top of it, or what we call MDM applications. The MDM secret sauce is to get a single view of a customer that can be shared using a process that is autonomous. We’re able to discover metadata. We have the ability to automate pipelines from the standpoint of connectivity and then getting data to flow. We have a capability from the standpoint of matching that is based on machine learning and then being able to share data in a process once you have those governance policies laid out. We’re not all the way there, as yet. What we are doing currently is applying machine learning in different aspects of that MDM process. Matching is a big one. Being able to give recommendations around what should be the business model based on the metadata is another. It’s getting automated. It will move from the semi-automated process that is supervised to basically becoming an autonomous MDM process at some point.
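
As an editorial aside, the matching step Tahiliani describes can be pictured with a toy scorer like the one below: candidate record pairs are scored on field similarity, high scores merge automatically, and the gray zone goes to a data steward. The weights and thresholds are invented for the illustration; production MDM matching uses learned models.

```python
# Toy record-matching sketch for a "single view of the customer". Weights and
# thresholds are illustrative assumptions, not Informatica's algorithm.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(r1: dict, r2: dict) -> float:
    return 0.6 * similarity(r1["name"], r2["name"]) + \
           0.4 * similarity(r1["email"], r2["email"])

r1 = {"name": "Jon Smith",      "email": "jon.smith@acme.com"}
r2 = {"name": "Jonathan Smith", "email": "jsmith@acme.com"}

score = match_score(r1, r2)
if score >= 0.85:
    print("auto-merge")         # the autonomous path
elif score >= 0.60:
    print("queue for steward")  # the supervised path described above
else:
    print("keep separate")
```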

VentureBeat: How will we know we can trust that autonomous outcome?

Tahiliani: Governance will become an important aspect if you’re automating everything. How do people trust that this data is really good enough just because the machine told me? You need to provide some explainability out there. We need to make sure the governance process around the front end in terms of deploying the machine learning algorithms is much more robust. It’s an exciting time because the core MDM process can definitely be fully autonomous. We see that happening soon.

VentureBeat: Business users don’t always trust the data. Will that improve as a result?

Tahiliani: As long as you can visualize how things were done, it’s not a black box process. There is a way to introspect and understand how the machine learning algorithm actually came to a conclusion. That would build confidence. I think we just have to make sure that we take everybody on that journey with us.

VentureBeat: Is there simply now too much data to be processed?

Tahiliani: It’s an aspect as the volumes increase. It’s just not humanly possible to have that stewardship scale. A lot more data is going to go through in an autonomous way.

VentureBeat: Is more of the data also now being processed in real time?

Tahiliani: Real time and streaming are in the future, primarily because data gets stale pretty fast. If you look at it from a standpoint of engaging with a customer, those kinds of interactions have a lifespan. You need to provide a recommendation to a customer in near real time to provide a relevant experience. It is important for any data management technology to be able to support streaming capability and provide real-time recommendations. We’re using a lot of exhaust data to get intelligence out of it. But that exhaust data also gets stale very fast and loses relevance.

VentureBeat: What impact has the cloud had on MDM?

Tahiliani: It’s an exciting time currently to be in MDM. We launched the intelligent data management cloud. MDM is a key service within the Informatica intelligent data management cloud. MDM can now be deployed much faster; you no longer have two months of deployment. We’ve come a long way. MDM projects used to be six to nine months; now you have customers getting value in three to six months. There is a near-term opportunity for us to start harnessing that kind of community knowledge. Cloud is definitely facilitating that.

VentureBeat: What’s your best MDM advice then?

Tahiliani: I always tell customers it starts with the data strategy. Lay out a data strategy that is aligned to your business strategy and then really go through the journey. It’s not about boiling the ocean. You can start small. It’s also important to factor in the longer-term aspects of your journey and not be very near-term focused. It has to support your data needs, not just for today, but for the future too. It’s important that you look at your data strategy from the lens of a data platform. It’s important to have a data platform as a foundation that supports multiple domain needs.


Oracle’s Autonomous Data Warehouse expansion offers potential upside for tech professionals


In March, Oracle announced an expansion to its Autonomous Data Warehouse that can bring the benefits of ADW — automating previously manual tasks — to large groups of new potential users. Oracle calls the expansion “the first self-driving database,” and its goal with the new features is to “completely transform cloud data warehousing from a complex ecosystem … that requires extensive expertise into an intuitive, point-and-click experience” that will enable all types of professionals to access, work with, and build business insights with data, from engineers to analysts and data scientists to business users, all without the help of IT.

A serious bottleneck to data work delivering business value across industries is the amount of expertise required at many steps along the data pipeline. The democratization of data tooling is about increasing ROI when it comes to an organization’s data capabilities, as well as increasing the total addressable market for Oracle’s ADW. Oracle is also reducing the total cost of ownership with elastic scaling and auto-scaling for changing workloads. We spoke with George Lumpkin, Neil Mendelson, and William Endress from Oracle, who shared their time and perspective for this article.

The landscape: democratization of data tooling

There is a growing movement of data tooling democratization, and the space is getting increasingly crowded with tools such as AWS SageMaker Studio (which we have reviewed here, here, and here), DataRobot, Qlik, Tableau, and Looker. It is telling that in recent times, Google has acquired Looker and Salesforce has acquired Tableau. On top of this, the three major cloud providers are all providing drag-and-drop data tooling, to various extents: AWS has an increasing amount of GUI-based data transformation and machine learning tools; Microsoft Azure has a point-and-click visual interface for machine learning, “data preparation, feature engineering, training algorithms, and model evaluation”; and Google Cloud Platform has similar functionality as part of their Cloud AutoML offering.

In its announcement, Oracle frames the ADW enhancements as self-service tools for:

  • Analysts, including loading and transforming data, building business models, and extracting insights from data (note that ADW also provides some interesting third-party integrations, such as automatically building data models that can be consumed by Tableau or Qlik).
  • Data scientists (and “citizen data scientists”), including building and deploying machine learning models (in a video, Andrew Mendelsohn, executive VP of Oracle Database Server Technologies, describes how data scientists can “easily create models with AutoML” and “integrate ML models into apps via REST or SQL”).
  • Line-of-business (LoB) developers, including low-code app development and API-driven development.

Oracle Autonomous Data Warehouse competes with incumbent products including Amazon Redshift, Azure Synapse, Google BigQuery, and Snowflake. But Oracle does not necessarily see ADW as directly competitive, targeting existing on-premises customers in the short run but with an eye to self-service ones in the longer term. As Lumpkin explained, “Many of Oracle’s Autonomous Data Warehouse customers are existing on-prem users of Oracle who are looking to migrate to the cloud. However, we have also designed Autonomous Data Warehouse for the self-service market, with easy interfaces that allow sales and marketing operations teams to move their team’s workloads to the cloud.”

Oracle’s strategy highlights a tension in tech: Traditional CIOs with legions of database administrators (DBAs) are worried about the migration to the cloud. DBAs who have built entire careers around being an expert at patching and tuning databases may find themselves lacking work in a self-service world where cloud providers like Oracle are patching and tuning enterprise databases.

CIOs who measure their success based on headcount and on-premises spend might also be worried. As Mendelson put it: “70% of what the DBA used to do should be automated.” Given that Oracle’s legacy business is still catering towards DBAs and CIOs, how do they feel about potentially upsetting their traditional advocates? While they acknowledged that automation would reduce some of the tasks traditionally performed by DBAs, they were not worried about complete job redundancy. Lumpkin explained, “By lowering the total cost of ownership for analytics, the business will be demanding 5x the number of databases.” In other words, DBAs and CIOs will see the same transformation that accountants saw with the advent of the spreadsheet, and there should be plenty of higher-level strategic work for DBAs in the new era for Oracle cloud.

Of course, this isn’t to say there won’t be any changes. After all, change is inevitable as certain functions are automated away. DBAs need to refocus on their unique value add. “Some DBAs may have built their skill sets around patching Oracle databases,” explains Lumpkin. “That’s now automated because it was the same for every customer, and we could do it more consistently and reliably in the cloud. It was never adding value to the customer. What you want is your people doing work that is unique to your datasets and your organization.”

We did a deep dive into different parts of ADW tools. Here’s what we found.

Autonomous Data Warehouse setup

The automated provisioning and database setup tools were well done. The in-app screens and tutorials mostly matched one another, and we could get set up in about five minutes. That said, there were still some rather annoying steps. For example, the user needs to create both a “database user” and an “analytics user.” This makes a lot of sense for centrally administered databases serving an entire enterprise, but it is overkill for a single analyst trying to get started (much less for a tutorial aimed at an analyst tool). The vast majority of data scientists and data analysts do not want to be database administrators, and the tool could benefit from a mode that hides this detail from the end user. This is a shortcoming that Oracle understands. As Lumpkin explains, “We have been looking at how to simplify the create-user flow for new databases. There are competing best practices for security [separation of duties between multiple users] and fastest onboarding experiences [with only one user].” Overall, the documentation is very well done, and onboarding is straightforward, if not quite as smooth as it could be.

Data insights

The automated insights tool is also interesting and could prove powerful. It runs many queries against your dataset, generating predicted values for a target column, and then highlights the groups where predicted values deviate significantly from actual values. The algorithm appears to run multiple groupbys and flag groups with highly unexpected values. While this carries some risk of data dredging if used naively, it provides a real speedup: a large fraction of data analysis consists of understanding unexpected results, and this feature can help with exactly that.
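
The pattern is easy to approximate by hand, which is a useful way to sanity-check what the tool surfaces. The sketch below uses a group-mean baseline as the “predicted value”; the real tool presumably fits richer models, so treat this as a simplifying assumption.

```python
# Sketch of the insight pattern: group the data, compare each group against an
# expected value, and surface the largest deviations. The baseline is simplified.
import pandas as pd

df = pd.DataFrame({
    "region":  ["west", "west", "east", "east", "east"],
    "product": ["A", "B", "A", "B", "B"],
    "sales":   [100, 110, 95, 400, 105],
})

overall = df["sales"].mean()
by_group = df.groupby(["region", "product"])["sales"].mean()
deviation = ((by_group - overall).abs() / overall).sort_values(ascending=False)
print(deviation.head(3))  # (east, B) surfaces as the most unexpected group
```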

Business model

One of the pervasive challenges with data modeling is defining business logic on raw enterprise data. Typically, this logic might reside in the heads of individual business analysts, leading to the inconsistent application of business logic across reports by different analysts. Oracle’s Data Tools provide a “Business Model” centralizing business logic into the database, increasing consistency and improving performance via caching. The tool offers some excellent features, like automatically detecting schemas and finding the keys for table joins. However, some of these features may not be very robust. While the tool could identify many valuable potential table joins in the tutorial movie dataset, it could only find a small subset of the relationships in the publicly available MovieLens dataset. Nonetheless, this is a valuable tool for solving a critical enterprise problem.
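
A simple version of the join-detection heuristic can be written in a few lines: propose a join whenever one table's column values are contained in another table's unique column. This containment rule is our guess at the spirit of the feature, not Oracle's algorithm.

```python
# Toy join-key detection: suggest a join when a column's values are covered by a
# unique (key-like) column in another table. The heuristic is an assumption.
import pandas as pd

movies  = pd.DataFrame({"movie_id": [1, 2, 3], "title": ["A", "B", "C"]})
ratings = pd.DataFrame({"movie_id": [1, 1, 3], "rating": [5, 4, 3]})

def candidate_joins(left: pd.DataFrame, right: pd.DataFrame):
    for lc in left.columns:
        for rc in right.columns:
            # right column must be unique (a key) and cover the left column's values
            if right[rc].is_unique and set(left[lc]) <= set(right[rc]):
                yield lc, rc

print(list(candidate_joins(ratings, movies)))  # [('movie_id', 'movie_id')]
```

Real data rarely satisfies strict containment, which is one plausible reason a detector like this would find only a subset of the MovieLens relationships.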

Data transform

The data transform tool provides a GUI for specifying functions to clean data. Cleaning data is the No. 1 job of a data scientist or data analyst, making this a critical feature. Unfortunately, the tool makes some questionable design choices, and they stem from the use of a GUI: rather than specifying the transformation as a CREATE TABLE query in SQL, it asks you to write code in a GUI, awkwardly connecting functions with lines and clicking through menus to select options. While the end result is a CREATE TABLE query, this abandons the syntax data scientists and analysts are familiar with, makes code less reproducible and less portable, and ultimately makes analysts and their queries more dependent on Oracle’s GUI. Data professionals may wish to avoid this feature if they are eager to develop transferable skills and sidestep tool lock-in.
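
For contrast, the text-based workflow this paragraph argues for is just a CREATE TABLE ... AS SELECT statement run through a driver, for example python-oracledb. The connection details and table and column names below are placeholders.

```python
# The plain-SQL alternative: a reproducible, version-controllable CTAS transform
# executed with python-oracledb. Credentials and names are placeholders.
import oracledb

TRANSFORM_SQL = """
CREATE TABLE clean_orders AS
SELECT order_id,
       TRIM(UPPER(customer_name)) AS customer_name,
       NVL(quantity, 0)           AS quantity
FROM   raw_orders
WHERE  order_date IS NOT NULL
"""

with oracledb.connect(user="analyst", password="...", dsn="mydb_high") as conn:
    with conn.cursor() as cur:
        cur.execute(TRANSFORM_SQL)  # the whole transform lives in reviewable SQL
```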

To be clear, there are useful drag-and-drop features in a SQL integrated development environment (IDE). For example, Count.co, which offers a BI notebook for analysts, supports drag and drop for table and field names into SQL queries. This nicely connects the data catalog to the SQL IDE query and helps prevent misspelled table or field names without abandoning the fundamental text-based query scripts we are used to. Overall, it felt much more natural as an interface.

Oracle Machine Learning

Oracle’s Machine Learning offering is growing and now includes ML notebooks, AutoML capabilities, and model deployment tools. One of the big challenges for Oracle and its competitors will be to demonstrate utility to data scientists and, more generally, people working in both ML and AI. While these new capabilities have come a long way, there’s still room for improvement. Making data scientists use Apache Zeppelin-based notebooks will likely hamper adoption when so many of us are Jupyter natives; so will preventing users from custom-installing Python packages, such as PyTorch and TensorFlow.

The problem Oracle is attempting to solve here is one of the biggest in the space: How do you get data scientists and machine learners to use enterprise data that sits in databases such as Oracle DBs? The ability to use familiar objects such as pandas data frames and APIs such as matplotlib and scikit-learn is a good step in the right direction, as is the decision to host notebooks. However, we need to see more: Data scientists often prototype code on their laptops in Jupyter Notebooks, VSCode, or PyCharm (among many other choices) with cutting-edge OSS package releases. When they move their code to production, they need enterprise tools that mimic their local workflows and allow them to utilize the full suite of OSS packages.

A representative of Oracle said that the ability to custom install packages on Autonomous Database is a road map item to address in future releases. In the meantime, the inclusion of scikit-learn in OML4Py allows users to work with familiar Python ML algorithms directly in notebooks or through embedded Python execution, where user-defined Python functions run in database-spawned and controlled Python engines. This supplements the scalable, parallelized, and distributed in-database algorithms and provides the ability to manipulate data in database tables and views using Python syntax. Overall, this is a step in the right direction.

Oracle Machine Learning’s documentation and example notebook library is extensive and valuable, allowing us to get up and running in a notebook in a matter of minutes with intuitive SQL and Python examples of anomaly detection, classification, and clustering among many others. This is welcome in a tooling landscape that all too often falls short in useful DevRel material. Learning new tooling is a serious bottleneck, and Oracle has removed a lot of friction here with their extensive documentation.

Oracle has also recognized that the MLOps space is heating up and that the ability to deploy and productionize machine learning models is now table stakes. To this end, OML4Py provides a REST API for embedded Python execution, as well as one that lets users store ML models and create scoring endpoints for them. It is welcome that this functionality supports not only classification and regression OML models but also Open Neural Network Exchange (ONNX) format models, including models converted from frameworks such as TensorFlow. Once again, the documentation here is extensive and very useful.
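
To give a feel for the deployment side, here is a hedged sketch of calling such a scoring endpoint over HTTP with Python’s requests library; the base URL, endpoint path, token, and payload schema are hypothetical placeholders, not Oracle’s documented contract.

```python
# Illustrative only: URL, token, model name, and payload schema are
# hypothetical placeholders for an OML-style scoring endpoint.
import requests

BASE_URL = "https://adb.example.com/omlmod/v1"      # hypothetical
headers = {
    "Authorization": "Bearer <access-token>",       # hypothetical token
    "Content-Type": "application/json",
}
payload = {"inputRecords": [{"quantity": 3, "unit_price": 19.99}]}

resp = requests.post(f"{BASE_URL}/deployment/churn_model/score",
                     headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # scores for the submitted records
```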

Graph Analytics

Oracle’s Graph Analytics offers the ability to run graph queries on databases. It is unique in that it lets users query their data warehouse data directly; by contrast, Neptune, AWS’s graph solution, requires loading data out of the data warehouse (Redshift) first. Graph Analytics uses PGQL, an Oracle-supported language that queries graph data the way SQL queries structured tabular data. PGQL’s design hews closer to SQL than rival graph query languages, and it is released under the open-source Apache 2.0 license. However, the main contributor is an Oracle employee, and Oracle is the only vendor supporting PGQL. The preferred mode of interacting with PGQL is through the company’s proprietary Graph Studio tool, which doesn’t promote reproducibility, advanced workflows, or interfacing with the rest of the development ecosystem. Lumpkin promised that REST APIs with Python and Java support would be coming soon.
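
To show how closely PGQL tracks SQL, here is a short example. The query syntax follows the published PGQL specification, while the graph name and labels are hypothetical. Absent the promised Python API, text like this would in practice be submitted through Graph Studio.

```python
# An illustrative PGQL query. Syntax follows the published PGQL spec;
# the graph (customer_graph) and its labels are hypothetical. Pending
# the promised Python API, queries like this run in Graph Studio.
pgql_query = """
    SELECT c.name, COUNT(*) AS orders
    FROM MATCH (c:Customer) -[:PLACED]-> (o:Order) ON customer_graph
    GROUP BY c.name
    ORDER BY orders DESC
"""
print(pgql_query)  # note how closely the shape tracks ordinary SQL
```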

Perhaps unsurprisingly, Oracle’s graph query language appears to be less popular than Cypher, the query language supported by neo4j, a rival graph database: the PGQL repository has 114 stars on GitHub, versus 8K+ for neo4j. A proposal to bring together PGQL, Cypher, and G-Core has drawn nearly 4K votes with over 95% in favor, has its own landing page, and is gaining traction internationally. While the survey methodology may be questionable (the proposal is authored by the Neo4j team on a Neo4j website), it’s understandable why graph database users would prefer a more widely used open standard. Hopefully, a common graph query standard will emerge to consolidate the competing languages and simplify graph querying for data scientists.

Final thoughts

Oracle is a large incumbent in an increasingly crowded space that’s moving rapidly. The company is playing catch-up with recent developments in open-source tooling and with the long tail of emerging data tooling businesses, all while the space’s total addressable market keeps growing. That market includes not just well-seasoned data scientists and machine learning engineers but an increasing number of data analysts and citizen data scientists.

For Oracle, best known for its database software, these recent moves are intended to extend its offerings into the data analytics, data science, machine learning, and AI spaces. In many ways, this is the data tooling equivalent of Disney’s move into streaming with Disney+. For the most part, Oracle’s recent expansion of its Autonomous Data Warehouse delivers on its promise: to bring the benefits of ADW to large groups of new potential users. There are lingering questions about whether these tools will meet all the needs of working data professionals, such as the ability to work with their open-source packages of choice. We urge Oracle to prioritize such developments on its road map, as access to open-source tooling is now table stakes for working data scientists.


Categories
Tech News

EHang long-range VT-30 autonomous aerial vehicle takes off and lands vertically

EHang has revealed the latest version of its aircraft, which it calls an autonomous aerial vehicle, or AAV. The long-range VT-30 is meant to expand the company’s regional air mobility coverage. It is designed specifically for inter-city transportation and features a hybrid structure. The aircraft can travel up to 300 kilometers with a planned flight time of up to 100 minutes.

EHang says that it designed the aircraft to be a safe, convenient, efficient, eco-friendly, and intelligent air mobility solution for inter-city travel. The VT-30 is meant to complement the company’s EH216, which focuses on intra-city air mobility, and to expand the air transportation and future urban air mobility ecosystem. The aircraft has a streamlined fuselage that combines a lifting rotor surface at the tail with eight propellers on each side of the aircraft and a pair of fixed wings.

A propeller at the rear of the aircraft operates in a pusher configuration. The aircraft supports both vertical and taxiing takeoff and landing modes, depending on the needs of the particular landing site, and is built particularly for longer distances and flight times. Safety features include a triple-redundant fly-by-wire control system that can switch among multiple modes to provide greater safety for the aircraft and its passengers.

Two passengers can ride inside the VT-30, and the company promises the aircraft produces zero emissions and low noise; the latter is critical when landing in densely populated city areas. EHang says that the VT-30 has so far completed vertical takeoff and landing, power system, and other tests. Additional tests and optimizations under different environmental conditions will take place in the future.

EHang says the goal of its AAV is to serve scenarios including air commuting, aerial sightseeing, and aerial logistics. It’s unclear at this time when the aircraft might be ready to begin commercial operations.


Categories
Tech News

Watch NASA’s autonomous helicopter take its first flight on Mars

An autonomous helicopter called Ingenuity made the first powered and controlled flight on another planet on Monday — and NASA has the video to prove it.

The 1.8 kg robot climbed three meters above Mars, hovered in the air for about 30 seconds, and then returned to the red planet’s surface.

It was only a brief trip, but Ingenuity had to overcome some major challenges on the way.

The Martian atmosphere is just 1% as dense as Earth’s at the surface, which means there’s little air for the chopper’s 1.2-meter-wide rotor blades to push against.


The flight also had to be entirely autonomous. Ingenuity can’t be controlled with a joystick or observed in real time, because it takes too long for data to travel the 173 million miles between Earth and Mars. Instead, the drone’s course was guided by onboard algorithms.
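
A quick back-of-the-envelope calculation makes the point: even at the speed of light, a radio signal needs about a quarter of an hour to cover that distance one way.

```python
# Rough one-way signal delay over the quoted Earth-Mars distance,
# assuming radio waves travel at the speed of light.
distance_miles = 173_000_000
speed_of_light_miles_per_s = 186_282        # approximate

one_way_s = distance_miles / speed_of_light_miles_per_s
print(f"one-way delay: {one_way_s:.0f} s (~{one_way_s / 60:.1f} min)")
# ~15.5 minutes each way -- far too long to joystick a 30-second flight
```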

Those delays forced NASA to initially rely on telemetry to confirm that Ingenuity took flight. But the space agency has now received video footage of the feat, which was captured by cameras on the Perseverance rover. You can watch it for yourself at the top of this article.

Looking to the future

Ingenuity’s maiden voyage may not have lasted long, but it set a precedent that could pave the way for future flights on other worlds.

The helicopter will now attempt more challenging trips on the red planet.

“We don’t know exactly where Ingenuity will lead us, but today’s results indicate the sky — at least on Mars — may not be the limit,” said acting NASA Administrator Steve Jurczyk.

NASA describes the $85 million mission as a technology demonstration. Ingenuity doesn’t carry any science instruments and is a separate experiment from the Perseverance rover, which gave the helicopter a ride to Mars.

Perseverance is leading the search for signs of ancient life on the red planet. It hasn’t found evidence of aliens just yet, but plans to keep looking for at least another 600 days.


Categories
Tech News

What would an autonomous Apple Car mean for Tesla?

I’m not sure what podcasts Elon Musk listens to. But I hope he caught Kara Swisher’s “Sway” today, because it definitely concerns him.

Apple CEO Tim Cook joined Swisher and the two discussed, among other things, the mysterious Apple Car.

Details are scarce and I’ll be right up front: Cook didn’t give away anything big or make any firm statements. But he did dance around the topic enough to leave a few impressions behind.

For starters, speaking about the Apple Car, Cook told Swisher:

The autonomy itself is a core technology, in my view. If you sort of step back, the car, in a lot of ways, is a robot. An autonomous car is a robot. And so there’s lots of things you can do with autonomy. And we’ll see what Apple does.

We already suspected the Apple Car would be an autonomous or semi-autonomous vehicle, but it’s always nice to hear Cook confirm it. Unfortunately, when he talks about how you can “do a lot of things with autonomy,” it muddies the waters again. The company still hasn’t revealed whether the Apple Car is going to be a passenger vehicle, a delivery van, a robo-taxi or shuttle service, or something else entirely.

Luckily Cook dropped another juicy tidbit – though it may have been unintentional. He brought up Elon Musk and, if you ask me, that tells us who he thinks the competition is: Tesla.

Here’s the quote:

I’ve never spoken to Elon, although I have great admiration and respect for the company he’s built. I think Tesla has done an unbelievable job of not only establishing the lead, but keeping the lead for such a long period of time in the EV space. So I have great appreciation for them.

Is it just me or does that sound like the kind of thing Apple could have said to the world’s top MP3 player manufacturer a year or two before it released the iPod?

On the one hand, it’s hard to imagine a vehicle that surpasses Tesla’s in form and function. We don’t need to go any faster than they can go, they’re drop-dead gorgeous, and they’re relatively inexpensive to purchase while being environmentally friendly.

But, on the other, Tesla hasn’t really done much in the way of autonomy. It’s almost laughable that the company calls its software “Autopilot” and “Full Self-Driving” when you consider that both require an attentive driver behind the wheel at all times.

It’s very easy to imagine Apple’s AI team coming up with a product that puts Tesla’s paltry autonomous offerings to shame – don’t forget that Ian Goodfellow’s in Cupertino now.

And, realistically, Apple doesn’t have to beat Tesla at making an EV. In fact, it’s likely Apple won’t even make its own vehicles at first. I’d wager the company’s still pursuing another manufacturer to work with after a reported Hyundai-Kia deal fell through.

It just needs to make good on the promises Tesla has failed to keep, by giving people a car that safely drives itself or offering a robo-taxi service to the general public.

In a winner-takes-all battle for the driverless future, who would you put your money on?

H/t: Rebecca Bellan, Tech Crunch

Published April 5, 2021 — 23:55 UTC


