These prototype XR glasses sold me on mixed reality gaming

I was skeptical about the idea of gaming on XR glasses, to say the least. I had questions swirling in my head about how I would use them, why I would use them, and cynical answers to both.

But all those questions faded into the background when I got a chance to actually experience it myself. I had a few days to play with a prototype version of the Viture One XR glasses, a project funded on Kickstarter — and as outlandish as the concept seems, it does work.

This isn’t the future of gaming for everyone, but the bells of the early days of VR were ringing in my head with the Viture One XR glasses. There’s a lot of work to be done on the prototype I tried, but despite all my assumptions, they could be the first step in an exciting new category for gaming.

A massive screen, anywhere you want

Jacob Roach / Digital Trends

The Viture Ones give you the equivalent of a 120-inch screen, and although they don’t have the specs of one of the best VR headsets, they don’t need to. You’re getting a pixel density of 55, a full 1080p signal running at 60 fps, and a peak brightness of 1,800 nits, according to Viture. Now, I wasn’t able to strap a luminance meter inside the frames to verify 1,800 nits, but the screen was bright enough to combat even direct sunlight pouring through my living room windows.
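Assuming that pixel density figure means 55 pixels per degree (a common way XR makers quote the spec, though the unit isn’t stated here), some quick arithmetic shows how the 120-inch claim hangs together:

```python
import math

# Back-of-the-envelope check, assuming "pixel density of 55" means
# 55 pixels per degree (an assumption; the unit isn't stated).
PPD = 55
H_PIXELS = 1920  # horizontal pixels of a 1080p panel

fov_deg = H_PIXELS / PPD  # horizontal field of view, about 34.9 degrees

# A 120-inch 16:9 screen is about 104.6 inches wide. The virtual viewing
# distance at which such a screen would fill that field of view:
width_in = 120 * 16 / math.hypot(16, 9)
distance_ft = (width_in / 2) / math.tan(math.radians(fov_deg / 2)) / 12

print(f"{fov_deg:.1f} deg FOV, ~{distance_ft:.1f} ft virtual viewing distance")
```

In other words, the glasses present roughly a 35-degree window, which reads as a huge screen floating a dozen or so feet away.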

There’s a little blur around the edges, but the screen looks great. It’s sharp and super responsive, and I constantly drifted off into a game or video every time I put on the glasses. Sure, you can see the surrounding room and it’s evident you’re not looking at a physical screen, but I never had to fight to get engrossed in whatever I was doing. The Viture Ones pulled me in, which is shocking considering I normally wear glasses with a pretty heavy prescription.


Devil May Cry 5 is what tipped me off. I played it on my Steam Deck, connected directly to the glasses through a USB-C cable, and it felt like playing the game on a normal 60Hz display. Devil May Cry 5 is extremely fast, and the Viture Ones held up exceptionally well. I also watched a few YouTube videos and some Netflix on my couch, letting me lie down or rest my head while always having my media in the center of my field of view.

Someone sitting and playing a game on the Viture One glasses.
Jacob Roach / Digital Trends

Having a screen anywhere is a huge plus. I can’t tell you how many times I’ve had to lay my head down awkwardly while playing a game or watching a movie when I want to rest and still see the screen, and it’s generally so uncomfortable that I just don’t do it. The Viture Ones get past that issue unlike any device you can buy right now, clocking in at only 78 grams so they never feel heavy.

The experience at home is great, but I’d really like to see the Viture Ones in action on a plane. Sunglasses on a plane may look silly, but I can’t stand looking down at my phone to watch a movie or my Steam Deck to play a game on a flight. These glasses seem like a huge win if you travel a lot.

The Viture Ones may be a glimpse into the future of gaming, at least for enthusiasts like me who don’t mind strapping crazy tech to their faces. But it’s only a glimpse; we haven’t arrived yet.

Growing pains

Hands holding the Viture One glasses.
Jacob Roach / Digital Trends

Any early prototype comes with a laundry list of issues, and Viture sent over a list of problems it’s aware of and working on for the glasses that will ship out. I’m focused more on the hurdles that come with designing a unique product, and I hope to see Viture address these issues either before launch or in a version two.

Above all else, size is an issue. You’re given three nose pads in different sizes, but none of them fit my (admittedly large) nose. Comfort isn’t the issue here, either. If the glasses aren’t positioned on your face in the right way, you can’t see the full screen. I’m well aware of how awkward the glasses look on my face, but that was the only way I could set them and still see the screen.

There’s a reason that regular glasses have so many points of adjustment, and it’s hard to have that flexibility with how much tech is inside the Viture Ones. The ergonomics definitely need more tuning and more flexibility for larger heads.

The glasses themselves don’t have much computing power in them. If you want to access the Android TV operating system, you’ll need to connect the glasses to the neckband. The band is super comfortable and light, and all of your controls are easy to access. Within a couple of hours, I knew where everything was without a second thought.

A hand controlling the Viture One glasses.
Jacob Roach / Digital Trends

The actual computing power is inside the neckband, and it’s actively cooled. The neckband warms up, and you can hear a fan inside trying to keep everything cool with minimal ventilation. It’s not uncomfortable, but with the lackluster built-in speakers, it feels like the fan noise and the speakers are fighting against each other.

I didn’t get to try out the optional mobile dock, which is the third part of the Viture Ones. This dock is exclusively for the Switch and connects directly to the console. It presumably works as well as the Steam Deck connection did, which is great, and Viture says it can even upscale the Switch’s 720p, 30-fps output to 1080p at 60 fps. That said, we’re talking about the Nintendo Switch here.

Racing toward the finish line

Someone smiling while wearing the Viture One glasses.
Jacob Roach / Digital Trends

The Viture Ones are the first step in what could become a popular category over the next few years, especially as we see glasses like the Lenovo Glasses T1 start to pop up.

There are some usability hurdles to overcome, but Viture has clearly done a lot to get its first version right out of the gate. The glasses work, and that’s about as much as I can ask for right now.

Editors’ Choice

Repost: Original Source and Author Link


How reality gets in the way of quantum computing hype


Baidu is the latest entrant in the quantum computing race, which has been ongoing for years among both big tech and startups. Nevertheless, quantum computing may face a trough of disillusionment as practical applications remain far from reality.

Baidu makes its quantum move

Last week, Baidu unveiled its first quantum computer, coined Qian Shi, as well as what it claimed is the world’s first “all-platform integration solution,” called Liang Xi. The quantum computer is based on superconducting qubits, one of the first qubit technologies, among the many that have been investigated, to become widely adopted, most notably in the machine Google used to proclaim quantum supremacy.

Qian Shi has a computing power of 10 high-fidelity qubits. High fidelity refers to low error rates. According to the Department of Energy’s Office of Science, once the error rate is less than a certain threshold — i.e., about 1% — quantum error correction can, in theory, reduce it even further. Beating this threshold is a milestone for any qubit technology, according to the DOE’s report. 
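To make the threshold idea concrete, here is a hedged numerical sketch. The scaling law and constants below are a common rule of thumb for surface-code-style error correction, not figures from Baidu or the DOE: the logical error rate behaves roughly like A(p/p_th)^((d+1)/2) for code distance d, so below the threshold larger codes help, and above it they hurt.

```python
# Illustrative only: rule-of-thumb scaling for error-corrected qubits.
# The prefactor and threshold are assumed values for demonstration.

def logical_error_rate(p, d, p_th=0.01, prefactor=0.1):
    """Approximate logical error rate for physical error rate p, code distance d."""
    return prefactor * (p / p_th) ** ((d + 1) / 2)

# Below the ~1% threshold, adding qubits (larger d) suppresses errors...
below = [logical_error_rate(0.001, d) for d in (3, 5, 7)]
# ...above it, more qubits only make things worse.
above = [logical_error_rate(0.02, d) for d in (3, 5, 7)]
```

This is why beating the roughly 1% error threshold is treated as a milestone: it flips the sign of what scaling buys you.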

Further, Baidu said it has also completed the design of a 36-qubit chip with couplers, which offer a way to reduce errors. Baidu said its quantum computer integrates hardware, software and applications. The software-hardware integration allows access to quantum chips via mobile, PC and the cloud. 


Moreover, Liang Xi, Baidu claims, can be plugged into both its own and third-party quantum computers. This may include quantum chips built on other technologies, with Baidu giving a trapped ion device developed by the Chinese Academy of Sciences as an example.

“With Qian Shi and Liang Xi, users can create quantum algorithms and use quantum computing power without developing their own quantum hardware, control systems or programming languages,” said Runyao Duan, director of the Institute for Quantum Computing at Baidu Research. “Baidu’s innovations make it possible to access quantum computing anytime and anywhere, even via smartphone. Baidu’s platform is also instantly compatible with a wide range of quantum chips.”

Despite Baidu’s claim that this is a world first, the Liang Xi platform is reminiscent of the approach taken by Israel’s Innovation Authority, which is also aimed at compatibility with various types of qubits. 

Although this is Baidu’s first quantum computer, the company has already filed over 200 patents in the four years since the founding of its quantum computing research institute. The patents span various areas of research, including quantum algorithms and applications, communications and networks, encryption and security, error correction, architecture, measurement and control, and chip design. 

Baidu claims its offering paves the way for the industrialization of quantum computing, making it the latest company to make grandiose claims about quantum computing being on the verge of widespread adoption. Some quantum startups have already amassed staggering valuations of over $1 billion.

However, real applications for quantum computers, besides encryption, have yet to emerge. And even if they do emerge, those applications are expected to require thousands of qubits, far more than anyone has yet been able to achieve. This scalability concern, for example, led Intel to stop pursuing the popular superconducting qubit approach in favor of the less mature silicon and silicon-germanium qubits, which are based on transistor-like structures that can be manufactured using traditional semiconductor equipment.

Nevertheless, voices are already emerging to warn against overhyping the technology. In the terms of the Gartner Hype Cycle, this may mean that quantum computing is approaching its trough of disillusionment. 

The other main challenge in quantum computing is that real qubits tend to be too noisy, leading to decoherence. This makes quantum error correction necessary, which increases the number of qubits far above the theoretical minimum for a given application. Noisy intermediate-scale quantum (NISQ) computing has been proposed as a sort of midway approach, but its success has yet to be shown. 

The history of classical computers is filled with examples of applications that the technology enabled that had never been thought of beforehand. This makes it tempting to think that quantum computing may similarly revolutionize civilization. However, most current approaches to qubits rely on near-absolute-zero temperatures. This inherent barrier implies quantum computing may remain limited to enterprises.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.



Why look at reality when you can edit what you see in real time?

The adoption of augmented reality is happening slowly but surely, and it’s easy to see one possible future for the technology: hardware that lets you edit what you see in real time, replacing objects around you with virtual overlays. Call it mixed reality, to be more precise.

Recent research from academics at the University of Duisburg-Essen and ETH Zurich, along with the AI team at Porsche (yes, the carmakers — we’ll get to that in a bit), shows how this might work. The team has built an AI system dubbed TransforMR that detects objects like cars and people, removes them, then replaces them with CGI alternatives in real time. The end results are hardly flawless (edits are haphazard and the CGI models look like they were borrowed from 3D Movie Maker), but the concept is striking. It’s not hard to imagine applications like this becoming commonplace in decades to come.

The team behind the work told The Verge that although individual elements of their work had been done before, the composite system is novel. TransforMR can run on regular smartphones and tablets, but requires a 4G connection to send data to the cloud. Images are processed so that objects are not just covered up, as with Snapchat AR lenses or Apple’s Memoji, but edited out entirely. Objects are detected, segmented, then “inpainted” (replaced with AI-generated background), and a CGI model is substituted for the original.

The TransforMR model involves many distinct steps.
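As a rough illustration, that detect, segment, inpaint, substitute loop can be sketched with stand-in stages. Every function body here is a toy placeholder (the real system uses neural networks for each step), but the data flow mirrors the description above:

```python
import numpy as np

# Toy sketch of a TransforMR-style pipeline. All stage implementations
# are stand-ins; only the three-step structure reflects the real system.

def segment_object(frame):
    """Toy 'segmentation': mask every pixel brighter than a threshold."""
    return frame.mean(axis=-1) > 200  # boolean HxW mask

def inpaint(frame, mask):
    """Toy 'inpainting': fill masked pixels with the mean background color."""
    out = frame.copy()
    out[mask] = frame[~mask].mean(axis=0).astype(frame.dtype)
    return out

def composite_cgi(frame, mask):
    """Toy 'CGI substitution': stamp a flat-colored model where the object was."""
    out = frame.copy()
    out[mask] = np.array([0, 255, 0], dtype=frame.dtype)  # green placeholder
    return out

def transform_frame(frame):
    mask = segment_object(frame)          # 1. detect + segment the object
    cleaned = inpaint(frame, mask)        # 2. edit the object out entirely
    return composite_cgi(cleaned, mask)   # 3. drop in a CGI replacement

# 512x512 "frame": dark background with one bright square object
frame = np.full((512, 512, 3), 50, dtype=np.uint8)
frame[100:200, 100:200] = 255
result = transform_frame(frame)
```

The point of the inpainting step is visible even in this toy: the replacement doesn’t have to cover the original object’s silhouette, because the object is already gone from the frame.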

There are obviously lots of areas for improvement. The frame rate is just 15 fps with low-quality inpainting; the lag is 50 to 100 milliseconds; and the CGI replacements are not the best quality. But the team behind the system says these aspects are relatively easy to improve.

“The main limitation is that large images are very compute-intensive,” Mohamed Kari, a machine learning researcher at Porsche, told The Verge. “So for the inpainting we do this with very small images currently, operating on 512 x 512 images. But the bandwidth [usage] is negligible. If you can do FaceTime you can do TransforMR.”

One of the key elements of the system, says Kari, is its use of pose detection. This means that when the system detects a person, for example, it identifies 18 separate joints in the body. That means the CGI replacement can be anchored to the target’s movement in real time. Kari compares this to other AR systems which simply identify geometric surfaces.
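A toy sketch of that anchoring idea: given per-frame joint positions, derive a root position and orientation for the CGI stand-in. The joint indices and 2D layout here are hypothetical; only the 18-joint count comes from the article:

```python
import numpy as np

# Hypothetical indices into an 18-joint pose; the real model's layout
# isn't specified in the article.
HIP_L, HIP_R, NECK = 8, 11, 1

def anchor_transform(joints):
    """Return a position and up-vector for placing a CGI model on a person."""
    joints = np.asarray(joints, dtype=float)   # shape (18, 2): x, y per joint
    root = (joints[HIP_L] + joints[HIP_R]) / 2  # mid-hip anchor point
    up = joints[NECK] - root                    # torso direction
    up /= np.linalg.norm(up)                    # normalize to a unit vector
    return root, up

# As the detected joints move frame to frame, the CGI model's root follows.
joints = np.zeros((18, 2))
joints[HIP_L] = (90, 200)
joints[HIP_R] = (110, 200)
joints[NECK] = (100, 100)
root, up = anchor_transform(joints)
```

Compared with anchoring to a flat surface, a joint-based anchor lets the replacement bend and turn with its target rather than float in place.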

Looking at clips of TransforMR in action, it’s not hard to imagine such software being integrated into AR glasses. Users could pick a “theme” for their day, replacing cars, buildings, and people with sci-fi alternatives, or items taken from nature. But, as Kari points out, this would involve a huge hardware challenge. Current augmented reality glasses can only project low-resolution, semi-opaque overlays onto their lenses. Right now, we just don’t have the technology to “edit” what users are seeing with this sort of hardware. (Though it could presumably be done using a “passthrough” VR system, where first-person cameras play a live video feed onto screens that completely occlude the wearer’s vision.)

“We are reproducing the full image on screen, so we can remove whatever we want to, but with augmented reality glasses removing objects is difficult because it adds light intensity,” says Kari. “In HoloLens for example, you are looking through the glass, so removing stuff is more difficult. That question is open to research.”

But why is Porsche investigating this sort of tech in the first place? According to one of the company’s AI architects, Tobias Grosse-Puppendahl, it’s all about improving the experience of passengers and drivers. Future versions of the TransforMR software could be used to entertain people when they’re stuck in traffic, Grosse-Puppendahl tells The Verge. “Our main question was, how can we modify reality in a way that is fun and entertaining to react with? And that’s where our idea originated from.”

Other research projects at Porsche follow a similar theme. For example, the company has also built a prototype system called SoundRide which uses a car’s machine vision to detect changes in scenery and cue up appropriate music. “Maybe, for example, you’re driving through the Alps, driving through a beautiful route, and suddenly you have a wonderful view and maybe the music changes,” says Kari. “We’re thinking how technology can make the experience in the car even more interesting and beautiful.” And that means tinkering with what people would otherwise see and hear.



Niantic is making an augmented reality basketball game with the NBA

Pokémon Go developer Niantic is creating a new augmented reality mobile game with more big-name partners: the NBA and its players’ association. NBA All-World will task you with exploring your neighborhood to find some of the league’s stars such as Chris Paul, Steph Curry and James Harden. You can challenge and compete against virtual players in mini-games like three-point contests before recruiting them to your team.

NBA All-World players will be able to deck out their NBA stars in custom apparel. Polygon notes that you can also improve your squad with items you find out in the wild at places such as sporting goods stores and convenience stores. You’ll have the chance to battle others in one-on-one matches with swipe-based commands too. These encounters will be available at various locations, including real-life basketball courts.

Following Pokémon Go and Pikmin Bloom, Niantic has a few other games in the works. Transformers: Heavy Metal is in beta, but it’s only available in a few countries for now. The same goes for Peridot, a modern AR take on Tamagotchi.

It’s not yet clear exactly when Niantic will release NBA All-World, but the game will soon enter a soft launch period. You can sign up for updates if you’re interested.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



Steam games are coming to Nreal’s augmented reality glasses

Nreal users can now play some Steam games on their augmented reality glasses. The Chinese company has released the beta version of “Steam on Nreal,” which gives users a way to stream games from their PC to their AR eyewear. Nreal admits that installing the beta release will require a bit of effort during the setup process, and the current version is not optimized for all Steam games just yet. It will work on both Nreal Light and Nreal Air models, though, and it already supports some popular titles like the entire Halo series. 

To note, users can already play games on Nreal’s glasses by accessing Xbox Cloud Gaming on a browser inside the company’s 3D system called Nebula. But Steam on Nreal will give users who don’t have Xbox accounts the opportunity to see what gaming on the device would be like. Company co-founder Peng Jin said the beta release is “meant to give people a glimpse into what is possible.” He added: “AAA games should be played on a 200-inch HD screen and they should be played free of location restrictions.”

Nreal launched its Light mixed reality glasses in 2020 after a US court ruled in its favor in the lawsuit filed by Magic Leap. The American company accused its former employee Chi Xu of using stolen secrets to set up Nreal, but the court decided that Magic Leap failed to make any viable claim. In 2021, Nreal launched a new model called Air that was designed with streaming shows and playing mobile games in mind. Air looks more like a pair of ordinary sunglasses than its predecessor does, and it also comes with a better display.

In an effort to offer more content and perhaps entice those on the fence to grab a pair of its glasses, Nreal has also announced AR Jam, an online international contest for AR developers that will kick off on June 27th. Developers can compete in various categories that include at-home fitness, art, games and video, with each one having a $10,000 grand prize. Those interested can head over to the company’s Developer page for more information.




Nvidia AI research takes science fiction one step closer to reality


Could AI be taught multiple skills at the same time? Are immersive displays using holography closer to reality than ever? No one can say with any certainty what precisely the future of artificial intelligence (AI) will hold. But one way to get a glimpse is by looking at the research that Nvidia will present at Siggraph 2022, to be held August 8-11.

Nvidia is collaborating with researchers to present 16 papers at Siggraph 2022, spanning multiple research topics that impact the intersection of graphics and AI technologies. 

One paper details innovation with reinforcement learning models, done by researchers from the University of Toronto and UC Berkeley, that could help to teach AI multiple skills at the same time.

Another delves into new techniques to help build large-scale virtual worlds with instant neural graphics primitives. Stepping closer to technologies only seen in science fiction, there is also research on holography that could one day pave the way for new display technology that will enable immersive telepresence.

“Our goal is to do work that’s going to impact the company,” David Luebke, vice president of graphics research at Nvidia, told VentureBeat. “It’s about solving problems where people don’t already know the answer and there is no easy engineering solution, so you have to do research.”

The intersection of research and enterprise AI

The 16 papers that Nvidia is helping to present focus on innovations that impact graphics, which is what the Siggraph show is all about. Luebke noted, however, that nearly all the research is also relevant for AI use outside the graphics field.

“I think of graphics as one of the hardest and most interesting applications of computation,” Luebke said. “So it’s no surprise that AI is revolutionizing graphics and graphics is providing a real showcase for AI.”

Luebke said that the researchers who worked on the reinforcement learning model paper actually view themselves as more in the robotics field than graphics. The model has potential applicability to robots as well as any other AI that needs to learn how to perform multiple actions.

“The thing about graphics is that it’s really, really hard and it’s really, really compelling,” he said. “Siggraph is a place where we showcase our graphics accomplishments, but almost everything we do there is applicable in a broader context as well.”

Computational holography and the future of telepresence

Throughout the COVID-19 pandemic, individuals and organizations around the world suddenly became a lot more familiar with video conferencing technologies like Zoom. There has also been growing use of virtual reality headsets, connecting to the emerging concept of the metaverse. The metaverse and telepresence could well one day become significantly more immersive.

One of the papers being presented by Nvidia at Siggraph has to do with a concept known as computational holography. Luebke explained that at a basic level, computational holography is a technique that can construct a three-dimensional scene, where the human eye can focus anywhere within that scene and see the correct thing as if it were really there. The research being presented at Siggraph details some new approaches to computational holography that could one day lead to VR headsets that are dramatically thinner than current options, providing a more immersive and lifelike experience.

“That has been kind of a holy grail for computer graphics for years and years,” Luebke said about the work on computational holography. “This research is showing that you can use computation, including neural networks and AI, to improve the quality of holographic displays that work and look good.”

Looking beyond just the papers being presented at Siggraph, Luebke said that Nvidia research is really interested in telepresence innovations. 




Apple’s AR/VR headset gets one step closer to reality

Apple’s rumored first step into AR and VR has been hush-hush, but a new report indicates that the mixed-reality headset may be getting closer to its grand unveiling.

As reported by Bloomberg, Apple’s board got a sneak peek at the company’s upcoming mixed-reality headset at a quarterly meeting. This meeting was attended by “eight independent directors” and CEO Tim Cook.

Antonio De Rosa

Bloomberg’s report indicates that Apple demonstrated the capabilities of the headset, according to unnamed sources familiar with the matter.

Apple is also ramping up development of the headset’s operating system, dubbed “realityOS” or just rOS for short. This continues the OS naming scheme that Apple uses for its other products.

The report also says that Apple initially wanted to unveil the headset at its Worldwide Developers Conference this year, but had to delay it due to issues with overheating. Additionally, ongoing supply chain issues and inflation have made things difficult for the tech industry in general.

There have been some conflicting rumors about what Apple’s mixed-reality headset will actually look like and how it will function. However, most rumors agree that there will be a number of cameras and sensors to allow you to see the outside world.

It will also likely feature micro-LED displays with an amazing 8K resolution for each eye. There might even be a third display for peripheral vision. Noted Apple analyst Ming-Chi Kuo alleges those lenses might include iris recognition for authentication.

The headset will almost certainly be powered by Apple Silicon, perhaps a chip even more powerful than the current M1. Of course, a powerful chip that’s highly energy efficient would be perfect for a wearable. Hopefully, Apple can work out the rumored overheating issues.

Looks like #Apple just accidentally confirmed #RealityOS. 🥽


— matthewdavis.eth (@IAmMatthewDavis) February 9, 2022

The operating system powering the headset, realityOS, has been seen a number of times in Apple code. Developer Matthew Davis apparently found references to “realityOS” on an Apple GitHub page.

While this will be Apple’s first foray into virtual and augmented reality, other companies like Meta have much more experience. Meta’s Project Cambria is aiming to eventually replace a laptop and work setup.

However, despite the complete dominance of the Meta Quest 2, Apple may be one of the few companies that can truly challenge (and surpass) Meta.

Editors’ Choice



Intel Arc Alchemist Might Make Sub-$200 GPUs a Reality Again

According to a new leak from Moore’s Law Is Dead, Intel’s upcoming Arc Alchemist graphics card could finally offer an affordable, sub-$200 GPU to consumers.

The well-known leaker revealed in a YouTube video that a variant of Intel’s entry-level Arc Alchemist graphics card, which will run on the company’s Xe-HPG GPU architecture, will be based on the 128-EU model. It’ll reportedly feature a clock speed between 2.2GHz and 2.5GHz and be built on chipmaker TSMC’s 6nm process node.

MLID also said that the GPU will utilize 6GB of GDDR6 memory clocked at 16Gbps over a 96-bit memory bus for the desktop variant. The laptop model, meanwhile, is expected to deliver 4GB of GDDR6 memory across a 64-bit bus at 14Gbps.
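Those leaked memory figures imply straightforward peak-bandwidth numbers: GDDR6 bandwidth is the per-pin data rate times the bus width, divided by 8 bits per byte. A quick back-of-the-envelope calculation (on rumored specs, not anything official):

```python
# Peak memory bandwidth from the rumored GDDR6 configs. These inputs are
# leaked figures, not confirmed specifications.

def gddr6_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin rate (Gbps) x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

desktop = gddr6_bandwidth_gb_s(16, 96)  # rumored desktop config
laptop = gddr6_bandwidth_gb_s(14, 64)   # rumored laptop config
```

That works out to 192 GB/s for the desktop card and 112 GB/s for the laptop variant, comfortably entry-level numbers.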

Notably, Moore’s Law Is Dead predicts the GPU could cost $179 or less. Due to the purported components of the entry-level Arc Alchemist, he expects Intel could even attach a price point as low as $150 to the graphics card.

If the aforementioned estimation becomes a reality when the product gets officially announced, it would mark the return of inexpensive graphics cards priced at $200 or below. The only GPU that comes close to that price point in the current generation of video cards is Nvidia’s RTX 3060 with an MSRP of $329. 

One of the reasons why the graphics card could cost below $200 is its thermal design power — the GPU will allegedly yield a power draw of only 75 watts. AMD’s most efficient card, the RX 6600, has a power draw of 132W, so Intel’s looks to be much more efficient overall. 

As for other specs related to the Arc Alchemist, the cut-down models will reportedly supply 96 EUs with a 64-bit bus interface. As Wccftech notes, there have been rumors pertaining to a variant providing 4GB of GDDR6 memory, but MLID doesn’t rule out a 3GB desktop model. 

The 128-EU model of the GPU is expected to launch on laptops at the end of February or in March, followed by a desktop release sometime during the second quarter of 2022. Intel will thus go head-to-head with AMD, with Team Red also set to announce its own entry-level card, the Navi 24 RDNA 2 Radeon RX GPU, in the first few months of 2022.

With the current shortage of GPUs and the subsequent price increases, hopefully the incoming launch of entry-level graphics cards will at least provide an affordable solution for consumers until the unprecedented state of affairs improves in 2023.

Editors’ Choice



Apple’s mixed reality headset might play ‘high-quality’ VR games

Apple’s rumored mixed reality headset may be a boon for VR gaming. In his most recent newsletter, Bloomberg‘s Mark Gurman claimed Apple is aiming for a headset that can handle “high-quality” VR games with both fast chips and high-res displays. While it’s not certain just what chips would be involved, a previous leak mentioned a possible 8K resolution per eye — Apple might not expect games to run at that resolution, but it would hint at serious processing power.

The headset is still poised to arrive “as early as” 2022, Gurman said. He also suggested Apple would eventually follow up the mixed reality headset with an augmented-reality-only model, but that was “years down the road.”

However accurate the claim might be, it’s doubtful the mixed reality headset would be meant primarily for gaming. The price (rumored to be as high as $3,000) might relegate it to developers and other pros. It wouldn’t be a rival to the $299 Quest 2, then. Instead, the report suggests Apple might use this initial headset to pave the way for more affordable wearables where gaming is more realistic.

It’s safe to presume Apple is committed to a headset, no matter the end result. Apple has acquired companies and reportedly shuffled executives with mixed reality in mind. This wouldn’t just be a side project for the company, even if the mixed reality tech could take years to reach the mainstream. Gaming might play a pivotal role if Apple intends to reach a wider audience.




Have autonomous robots started killing in war? The reality is messier than it appears

It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

But is it? As you might guess, it’s a hard question to answer.

The new coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and that if we wait for consensus on a definition, their presence in war will have long been normalized.

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in:

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing various troop movements, weapon transfers, raids and skirmishes that took place among the war’s various factions, both foreign and domestic.

The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a story in the New Scientist, which ran a piece with the headline: “Drones may have attacked humans fully autonomously for the first time.” The NS is very careful to caveat that military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

Let’s be clear: the UN report itself does not say for certain whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

Some experts took issue with these stories because they followed the UN’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems, or LAWS (that’s policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.
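The loitering logic described above can be made concrete with a toy, hypothetical sketch (none of these function names or timings come from IAI or any real system): against a radar that only emits intermittently, a single fly-through look usually misses, while a seeker that lingers and keeps sampling eventually catches an emission.

```python
# Toy illustration of why loitering matters when targets emit intermittently.
# All timings and names are invented for this sketch.

def radar_emitting(t: int) -> bool:
    """Operators switch the radar on only briefly (here: 1 step in every 10)."""
    return t % 10 == 3

def single_pass(t_arrival: int) -> bool:
    """A fly-through weapon gets exactly one look at its arrival time."""
    return radar_emitting(t_arrival)

def loiter(t_arrival: int, endurance: int) -> bool:
    """A loitering weapon keeps sampling until it detects or exhausts endurance."""
    return any(radar_emitting(t) for t in range(t_arrival, t_arrival + endurance))

print(single_pass(0))   # radar happened to be off at fly-through
print(loiter(0, 20))    # the emission at t=3 is caught while loitering
```

The asymmetry is the whole point of the design: a radar that stays silent defeats the fly-through seeker, but staying silent also means it isn’t doing its job, so the loitering weapon wins either way.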

The IAI Harpy is launched from the ground and can linger for hours over a target area.
Image: IAI

“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munition have been on the battlefield for a while – most notably in Nagorno-Karaback. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign To Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapon-makers access to cheap vision systems that can select targets as quickly as your phone identifies pets, plants, and familiar faces in your camera roll. These systems promise nuanced and precise identification of targets but are also much more prone to mistakes.

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”
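Kallenborn’s point about brittleness can be illustrated with a deliberately simplified toy (this is not the cited study’s setup, and the weights and labels are invented): a linear classifier over a flattened 3x3 “image” whose decision flips when a single high-weight pixel changes.

```python
# Toy sketch of single-pixel sensitivity in a linear classifier.
# Weights, pixel values, and labels are all invented for illustration.

def predict(weights, pixels, bias=0.0):
    """Label the image by the sign of a weighted sum of its pixels."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "soldier" if score > 0 else "civilian"

weights = [0.1, -0.2, 0.05, 0.3, -0.1, 0.2, -0.05, 0.15, 0.4]
image   = [0.2, 0.5, 0.1, 0.0, 0.9, 0.3, 0.4, 0.1, 0.0]

baseline = predict(weights, image)

# Change only the single pixel the classifier weighs most heavily,
# keeping it inside the valid [0, 1] brightness range.
idx = max(range(len(weights)), key=lambda i: abs(weights[i]))
perturbed = list(image)
perturbed[idx] = 1.0 if weights[idx] > 0 else 0.0

flipped = predict(weights, perturbed)
print(baseline, "->", flipped)  # civilian -> soldier
```

Real vision models are vastly larger, but the same failure mode applies: a decision boundary crossed by a tiny, targeted input change, which is exactly the kind of error that is hard to audit after the fact.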

This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)” Demo videos appear to show it doing exactly that. In one demo clip, the quadcopter homes in on a mannequin standing in a stationary group.

But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

“I think we really have to think about the bigger picture […] which is why I focus on the practice, as well as functionality,” says Schwarz. “In my work I try and show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ choice. And, as you rightly point out, errors will very likely prevail […] which will likely be addressed only post hoc.”

Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. As Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”
