NY State is giving out hundreds of robots as companions for the elderly

The state of New York will distribute robot companions to the homes of more than 800 older adults. The robots are not able to help with physical tasks, but function as more proactive versions of digital assistants like Siri or Alexa — engaging users in small talk, helping them contact loved ones, and keeping track of health goals like exercise and medication.

The scheme is being organized by the New York State Office for the Aging (NYSOFA), and is intended to help address the growing problem of social isolation among the elderly. An estimated 14 million Americans over the age of 65 currently live alone, and this figure is projected to increase over the next decade as the boomer generation ages. Studies have suggested that long-term loneliness is as damaging to an individual’s health as smoking.

NYSOFA director Greg Olsen says the robots — named ElliQ and built by Israeli firm Intuition Robotics — could help tackle this growing health problem by encouraging independence among older adults living alone and providing companionship.

“Many features attracted us to ElliQ — that it is a proactive tool, remembers the interactions with the individual, focuses on health and wellness, stress reduction, sleep, hydration, etc,” Olsen told The Verge. “It focuses on what matters to individuals: memories, life validation, interactions with friends and families and promotes overall good health and well being.”

ElliQ consists of two parts attached to a single base. The first is a lamp-like “face” with a microphone and speakers that lights up and swivels toward whoever it’s talking to. The second is a touchscreen tablet used to display pictures and additional information, and to conduct video calls. The unit has been deliberately designed to appear more robotic than humanoid, in order to better focus attention on its conversational abilities.

Intuition Robotics claims that ElliQ can project empathy and form bonds with users. The robot is supposed to remember key details about a user’s life and tailor its character to theirs — cracking more jokes if the user tends to laugh a lot, for example. Media reports suggest the robot can certainly endear itself to people (ElliQ has been in development for many years, with Intuition Robotics conducting dozens of home trials to hone its functionality), but the real test will be widespread deployment.

Olsen says that NYSOFA case managers will identify individuals who might benefit from ElliQ based on a few criteria. “ElliQ is designed for people aged 75 and older, who have access to Wi-Fi, and are comfortable with tech equipment and who are isolated or lonely,” he tells The Verge. “Once individuals are identified as being in the target group, Intuition Robotics will work to provide installation and training.”

NYSOFA has bought some 800 ElliQ units from Intuition Robotics for an undisclosed price. The robot usually leases for a $250 upfront fee plus a $30 monthly service charge. NYSOFA says that by buying the robots outright it will be able to relocate them more easily.

Deploying robots for elderly care is often controversial. Advocates say robots are a necessary tool, especially when humans aren’t available. Critics warn machines have the potential to dehumanize their users, and their deployment reflects the low value society places on older adults. Scientific studies suggest social robots do “appear to have the potential to improve the well-being of older adults,” but researchers say it’s hard to draw conclusions without wider trials. In New York state, a new experiment is just beginning.

Repost: Original Source and Author Link


Dyson eyes robots that can do your household chores

Dyson has shown off a series of prototype robots it’s developing, and announced plans to hire hundreds of engineers over the next five years in order to build robots capable of household chores. The images are designed to show off the fine motor skills of the machines, with arms capable of lifting plates out of a drying rack, vacuuming a sofa, or lifting up a children’s toy.

The company, best known for its range of vacuum cleaners, says that it aims to develop “an autonomous device capable of household chores and other tasks,” with The Guardian noting that such a device could be released by 2030. It comes over half a decade after the company released its first robotic device, the Dyson 360 Eye robot vacuum cleaner, in 2014. Dyson has long emphasized its interest in AI and robotics to underpin its future products.

Vacuuming an armchair.
Image: Dyson

Another prototype shown handling plates.
Image: Dyson

The announcement was made to coincide with the International Conference on Robotics and Automation in Philadelphia, and serves as a recruitment tool, with a prominent “Start your Dyson career” link placed near the top of Dyson’s press release. The company says it’s in the midst of the “largest engineering recruitment drive in its history.” It’s currently recruiting 250 robotics engineers with expertise in “computer vision, machine learning, sensors and mechatronics,” and hopes to hire 700 more over the next five years. Dyson says it’s already added 2,000 new employees to its workforce this year.

As well as making hires, the company is also building out what it hopes will be the UK’s largest robotics research center, The Guardian reports. The center will be based at Hullavington Airfield near the company’s existing design center in Malmesbury, Wiltshire, where it’s refitting an aircraft hangar in which 250 roboticists will work. The site had previously been earmarked for development of Dyson’s electric car, before the project was canceled in 2019. Research will also take place in a lab in London, as well as at the company’s global headquarters in Singapore.

“This is a ‘big bet’ on future robotic technology that will drive research across the whole of Dyson, in areas including mechanical engineering, vision systems, machine learning and energy storage,” said Jake Dyson, the company’s chief engineer and son of company founder James Dyson. In 2020, Dyson announced plans to invest £2.75 billion (around $3.45 billion) in areas including robotics, new motor tech, and machine learning software by 2025. It plans to spend £600 million (around $750 million) of that investment this year.



Future of work: Beyond bossware and job-killing robots


The public conversation around AI’s impact on the labor market often revolves around the job-displacing or job-destroying potential of increasingly intelligent machines. The wonky economic phrase for the phenomenon is “technological unemployment.” Less attention is paid to another significant problem: the dehumanization of labor by companies that use what’s known as “bossware” — AI-based digital platforms or software programs that monitor employee performance and time on task.

To discourage companies from both replacing jobs with machines and deploying bossware to supervise and control workers, we need to change the incentives at play, says Rob Reich, professor of political science in the Stanford School of Humanities and Sciences, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

“It’s a question of steering ourselves toward a future in which automation augments our work lives rather than replaces human beings or transforms the workplace into a surveillance panopticon,” Reich says. Reich recently shared his thoughts on these topics in response to an online Boston Review forum hosted by Daron Acemoglu of MIT.

To promote the automation we want and discourage the automation we don’t want, Reich says we need to increase awareness of bossware, include impacted workers in the product development lifecycle, and ensure product design reflects a wider range of values beyond the commercial desire to increase efficiency. Additionally, we must provide economic incentives to support labor over capital and boost federal investment in AI research at universities to help stem the brain drain to industry, where profit motives often lead to negative consequences such as job displacement.



“It’s up to us to create a world where financial reward and social esteem lie with companies that augment rather than displace human labor,” Reich says. 

Increased awareness of bossware

From cameras that automatically track employees’ attention to software monitoring whether employees are off task, bossware is often in place before employees are aware of it. And the pandemic has made it worse as we’ve rapidly adapted to remote tools that have bossware features built in — without any deliberation about whether we wanted those features in the first place, Reich says.

“The first key to addressing the bossware problem is awareness,” Reich says. “The introduction of bossware should be seen as something that’s done through a consensual practice, rather than at the discretion of the employer alone.”

Beyond awareness, researchers and policymakers need to get a handle on the ways employers use bossware to shift some of their business risks to their employees. For example, employers have historically borne the risk of inefficiencies such as paying staff during shifts when there are few customers. By using automated AI-based scheduling practices that assign work shifts based on demand, employers save money but essentially shift their risk to workers who can no longer expect a predictable or reliable schedule.

Reich is also concerned that bossware threatens privacy and can undermine human dignity. “Do we want to have a workplace in which employers know exactly how long we leave our desks to use the restroom, or an experience of work in which sending a personal email on your work computer is keystroke logged and deducted from your hourly pay, or in which your performance evaluations are dependent upon your maximal time on task with no sense of trust or collaboration?” he asks. “It gets to the heart of what it means to be a human being in a work environment.”

Privileging labor over capital investment in machines

Policymakers should directly incentivize investment in human-augmentative AI rather than AI that will replace jobs, Reich says. And such human-augmentative options do exist.

But policymakers should also take some bold moves to support labor over capital. For example, Reich supports an idea proposed by Acemoglu and others including Stanford Digital Economy Lab Director Erik Brynjolfsson: Decrease payroll taxes and increase taxes on capital investment so that companies are less inclined to purchase labor-replacing machinery to supplant workers.

Currently the tax on human labor is approximately 25%, Reich says, while software or computer equipment is subject to only a 5% tax. As a result, the economic incentives currently favor replacing humans with machines whenever feasible. By changing these incentives to favor labor over machines, policymakers would go a long way toward shifting the impact of AI on workers, Reich says. 
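
A quick back-of-the-envelope calculation shows how those rates tilt the decision. Only the approximate 25% and 5% rates come from the article; the dollar figure below is invented for illustration.

```python
# Effective cost of spending $100,000 on labor vs. machinery under the
# approximate rates Reich cites: ~25% tax on labor, ~5% on equipment.
LABOR_TAX = 0.25
EQUIPMENT_TAX = 0.05

spend = 100_000  # hypothetical budget, not from the article
labor_cost = spend * (1 + LABOR_TAX)          # $125,000 all-in
equipment_cost = spend * (1 + EQUIPMENT_TAX)  # $105,000 all-in
automation_incentive = labor_cost - equipment_cost  # $20,000 gap favoring machines
```

On these numbers, every $100,000 shifted from payroll to equipment saves $20,000 in tax, which is the incentive the proposed policy change would aim to reverse.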

“These are the kinds of bigger policy questions that need to be confronted and updated so that there’s a thumb on the scale of investing in AI and machinery that complements human workers rather than displaces them,” he says.

Invest in academic AI research

If recent history is any guide, Reich says, when industry serves as the primary site of research and development for AI and automation, it will tend to develop profit-maximizing robots and machines that take over human jobs. By contrast, in a university environment, the frontier of AI research and development is not harnessed to a commercial incentive or to a set of investors who are seeking short-term, profit-maximizing returns. “Academic researchers have the freedom to imagine human-augmenting forms of automation and to steer our technological future in a direction quite different from what we might expect from a strictly commercial environment,” he says.

To shift the AI frontier to academia, policymakers might start by funding the National Research Cloud so that universities across the country have access to essential infrastructure for cutting-edge research. In addition, the federal government should fund the creation and sharing of training data.

“These would be the kinds of undertakings that the federal government could pursue, and would comprise a classic example of public infrastructure that can produce extraordinary social benefits,” Reich says.

Katharine Miller is a contributing writer for the Stanford Institute for Human-Centered AI.






Want to make robots run faster? Try letting AI take control

Quadrupedal robots are becoming a familiar sight, but engineers are still working out the full capabilities of these machines. Now, a group of researchers from MIT says one way to improve their functionality might be to use AI to help teach the bots how to walk and run.

Usually, when engineers are creating the software that controls the movement of legged robots, they write a set of rules about how the machine should respond to certain inputs. So, if a robot’s sensors detect x amount of force on leg y, it will respond by powering up motor a to exert torque b, and so on. Coding these parameters is complicated and time-consuming, but it gives researchers precise and predictable control over the robots.
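
The rule-writing approach described above can be sketched in a few lines. This is a purely illustrative example; the threshold and gain values are invented, and a real controller would contain hundreds of such hand-tuned rules.

```python
def leg_torque(force_newtons: float) -> float:
    """Hand-coded control rule: if the measured ground-reaction force
    on a leg exceeds a threshold, command a proportional counter-torque.
    Threshold and gain are made-up values for the sketch, not taken
    from any real robot."""
    FORCE_THRESHOLD = 20.0  # newtons (illustrative)
    TORQUE_GAIN = 0.15      # newton-meters per newton of excess force (illustrative)
    excess = force_newtons - FORCE_THRESHOLD
    return TORQUE_GAIN * excess if excess > 0 else 0.0
```

Tuning every constant like these by hand, for every leg and every terrain, is what makes the traditional approach precise but slow.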

An alternative approach is to use machine learning — specifically, a method known as reinforcement learning that functions through trial and error. This works by giving your AI model a goal known as a “reward function” (e.g., move as fast as you can) and then letting it loose to work out how to achieve that outcome from scratch. This takes a long time, but it helps if you let the AI experiment in a virtual environment where you can speed up time. It’s why reinforcement learning, or RL, is a popular way to develop AI that plays video games.
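
In code, the trial-and-error idea reduces to: propose a behavior, score it with the reward function, keep the best. The sketch below uses a toy one-parameter “gait” and a made-up stand-in for a physics simulator; everything here is an assumption for illustration, not how MIT’s pipeline actually works.

```python
import random

def simulate_speed(gait_param: float) -> float:
    """Toy stand-in for a physics simulator: returns the robot's average
    speed for a given gait parameter. The quadratic shape is invented so
    the best gait sits at gait_param = 0.7."""
    return -(gait_param - 0.7) ** 2 + 1.0

def trial_and_error_search(trials: int = 500, seed: int = 0) -> float:
    """Random search driven by the reward function 'move as fast as
    you can': try many candidate gaits, keep whichever scores highest."""
    rng = random.Random(seed)
    best_param, best_reward = 0.0, float("-inf")
    for _ in range(trials):
        candidate = rng.uniform(0.0, 1.0)
        reward = simulate_speed(candidate)  # reward = speed, nothing else
        if reward > best_reward:
            best_param, best_reward = candidate, reward
    return best_param

best_gait = trial_and_error_search()
```

Real RL algorithms replace the random search with gradient-based policy updates over thousands of parameters, but the reward-driven loop is the same.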

This is the technique that MIT’s engineers used, creating new software (known as a “controller”) for the university’s research quadruped, Mini Cheetah. Using reinforcement learning, they were able to achieve a new top speed for the robot of 3.9 m/s, or roughly 8.7 mph. You can watch what that looks like in the video below:

As you can see, Mini Cheetah’s new running gait is a little ungainly. In fact, it looks like a puppy scrabbling to accelerate on a wooden floor. But, according to MIT PhD student Gabriel Margolis (a co-author of the research along with postdoc fellow Ge Yang), this is because the AI isn’t optimizing for anything but speed.

“RL finds one way to run fast, but given an underspecified reward function, it has no reason to prefer a gait that is ‘natural-looking’ or preferred by humans,” Margolis tells The Verge over email. He says the model could certainly be instructed to develop a more flowing form of locomotion, but the whole point of the endeavor is to optimize for speed alone.

Margolis and Yang say a big advantage of developing controller software using AI is that it’s less time-consuming than messing about with all the physics. “Programming how a robot should act in every possible situation is simply very hard. The process is tedious because if a robot were to fail on a particular terrain, a human engineer would need to identify the cause of failure and manually adapt the robot controller,” they say.

Mini Cheetah gets the once-over from a non-robot dog.
Image: MIT

By using a simulator, engineers can place the robot in any number of virtual environments — from solid pavement to slippery rubble — and let it work things out for itself. Indeed, the MIT group says its simulator was able to speed through 100 days’ worth of staggering, walking, and running in just three hours of real time.
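
The claimed compression is easy to quantify from the figures in the paragraph above:

```python
# MIT's group reports ~100 days of simulated locomotion experience
# gathered in about 3 hours of wall-clock time.
sim_hours = 100 * 24        # 2,400 hours of virtual staggering, walking, running
wall_clock_hours = 3
speedup = sim_hours / wall_clock_hours  # the simulator runs ~800x real time
```

That roughly 800x factor is why training in simulation is so much cheaper than letting a physical robot fall over for months.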

Some companies that develop legged robots are already using these sorts of methods to design new controllers. Others, though, like Boston Dynamics, apparently rely on more traditional approaches. (This makes sense given the company’s interest in developing very specific movements — like the jumps, vaults, and flips seen in its choreographed videos.)

There are also faster legged robots out there. Boston Dynamics’ Cheetah bot currently holds the record for a quadruped, reaching speeds of 28.3 mph — faster than Usain Bolt. However, not only is Cheetah a much bigger and more powerful machine than MIT’s Mini Cheetah, but it achieved its record running on a treadmill and mounted to a lever for stability. Without these advantages, maybe AI would give the machine a run for its money.



Alphabet is putting its prototype robots to work cleaning up around Google’s offices

What does Google’s parent company Alphabet want with robots? Well, it would like them to clean up around the office, for a start.

The company announced today that its Everyday Robots Project — a team within its experimental X labs dedicated to creating “a general-purpose learning robot” — has moved some of its prototype machines out of the lab and into Google’s Bay Area campuses to carry out some light custodial tasks.

“We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices,” said Everyday Robot’s chief robot officer Hans Peter Brøndmo in a blog post. “The same robot that sorts trash can now be equipped with a squeegee to wipe tables and use the same gripper that grasps cups can learn to open doors.”

The robots in question are essentially arms on wheels: a multipurpose gripper sits on the end of a flexible arm attached to a central tower. There’s a “head” on top of the tower with cameras and sensors for machine vision, and what looks like a spinning lidar unit on the side, presumably for navigation.

One of Alphabet’s Everyday Robot machines cleans the crumbs off a cafe table.
Image: Alphabet

As Brøndmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robot team in 2019. The big promise that’s being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in “unstructured” environments like homes and offices.

Right now, we’re very good at building machines that can carry out repetitive jobs in a factory, but we’re stumped when trying to get them to replicate simple tasks like cleaning up a kitchen or folding laundry.

Think about it: you may have seen robots from Boston Dynamics performing backflips and dancing to The Rolling Stones, but have you ever seen one take out the trash? It’s because getting a machine to manipulate never-before-seen objects in a novel setting (something humans do every day) is extremely difficult. This is the problem Alphabet wants to solve.

Unit 033 makes a bid for freedom.
Image: Alphabet

Is it going to? Well, maybe one day — if company execs feel it’s worth burning through millions of dollars in research to achieve this goal. Certainly, though, humans are going to be cheaper and more efficient than robots for these jobs in the foreseeable future. The update today from Everyday Robot is neat, but it’s far from a leap forward. You can see from the GIFs that Alphabet shared of its robots that they’re still slow and awkward, carrying out tasks inexpertly and at a glacial pace.

However, it’s still significant that the robots are being tested “in the wild” rather than in the lab. Compare Alphabet’s machines to Samsung’s Bot Handy, for example: a similar-looking tower-and-arm bot that the company showed off at CES last year, apparently pouring wine and loading a dishwasher. Bot Handy looked like it was performing these jobs, but it was only carrying out a prearranged demo. Who knows how capable, if at all, this robot is in the real world? At least Alphabet is finding this out for itself.



DeepMind proposes new benchmark to improve robots’ object-stacking abilities

Stacking an object on top of another object is a straightforward task for most people. But even the most complex robots struggle to handle more than one such task at a time. Stacking requires a range of different motor, perception, and analytics skills, including the ability to interact with different kinds of objects. The level of sophistication involved has elevated this simple human task to a “grand challenge” in robotics and spawned a cottage industry dedicated to developing new techniques and approaches.

A team of researchers at DeepMind believe that advancing the state of the art in robotic stacking will require a new benchmark. In a paper to be presented at the Conference on Robot Learning (CoRL 2021), they introduce RGB-Stacking, which tasks a robot with learning how to grasp different objects and balance them on top of one another. While benchmarks for stacking tasks already exist in the literature, the researchers assert that what sets their research apart is the diversity of objects used, and the evaluations performed to validate their findings. The results demonstrate that a combination of simulation and real-world data can be used to learn “multi-object manipulation,” suggesting a strong baseline for the problem of generalizing to novel objects, the researchers wrote in the paper.

“To support other researchers, we’re open-sourcing a version of our simulated environment, and releasing the designs for building our real-robot RGB-stacking environment, along with the RGB-object models and information for 3D printing them,” the researchers said. “We are also open-sourcing a collection of libraries and tools used in our robotics research more broadly.”


With RGB-Stacking, the goal is to train a robotic arm via reinforcement learning to stack objects of different shapes. Reinforcement learning is a type of machine learning technique that enables a system — in this case a robot — to learn by trial and error using feedback from its actions and experiences.

RGB-Stacking places a gripper attached to a robot arm above a basket containing three objects: one red, one green, and one blue (hence the name RGB). The robot must stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distraction.
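
A success condition for the task described above might look like the following sketch. The tolerance values are invented for illustration; the actual benchmark defines its own success criteria.

```python
def is_stacked(red_pos, blue_pos, xy_tol=0.02, z_gap=0.05):
    """Illustrative check for the RGB-Stacking goal: the red object
    counts as stacked if it sits roughly centered above the blue one.
    Positions are (x, y, z) in meters; tolerances are made up."""
    dx = red_pos[0] - blue_pos[0]
    dy = red_pos[1] - blue_pos[1]
    centered = (dx * dx + dy * dy) ** 0.5 <= xy_tol  # within xy tolerance
    above = red_pos[2] > blue_pos[2] + z_gap         # resting on top, not beside
    return centered and above
```

In the reinforcement-learning setup, a check like this (or a shaped, continuous version of it) becomes the reward signal the agent learns from.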

According to DeepMind researchers, the learning process ensures that a robot acquires generalized skills through training on multiple object sets. RGB-Stacking intentionally varies the grasp and stack qualities that define how a robot can grasp and stack each object, which forces the robot to exhibit behaviors that go beyond a simple pick-and-place strategy.


“Our RGB-Stacking benchmark includes two task versions with different levels of difficulty,” the researchers explain. “In ‘Skill Mastery,’ our goal is to train a single agent that’s skilled in stacking a predefined set of five triplets. In ‘Skill Generalization,’ we use the same triplets for evaluation, but train the agent on a large set of training objects — totaling more than a million possible triplets. To test for generalization, these training objects exclude the family of objects from which the test triplets were chosen. In both versions, we decouple our learning pipeline into three stages.”

The researchers claim that their methods in RGB-Stacking result in “surprising” stacking strategies and “mastery” of stacking a subset of objects. Still, they concede that they only scratch the surface of what’s possible and that the generalization challenge remains unsolved.

“As researchers keep working to solve the open challenge of true generalization in robotics, we hope this new benchmark, along with the environment, designs, and tools we have released, contribute to new ideas and methods that can make manipulation even easier and robots more capable,” the researchers added.

As robots become more adept at stacking and grasping objects, some experts believe that this type of automation could drive the next U.S. manufacturing boom. In a recent study from Google Cloud and The Harris Poll, two-thirds of manufacturers said that the use of AI in their day-to-day operations is increasing, with 74% claiming that they align with the changing work landscape. Companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion by 2025.





Have autonomous robots started killing in war? The reality is messier than it appears

It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

But is it? As you might guess, it’s a hard question to answer.

The new coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and if we wait for this to happen, their presence in war will have long been normalized.

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in:

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing various troop movements, weapon transfers, raids and skirmishes that took place among the war’s various factions, both foreign and domestic.

The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a story in the New Scientist, which ran a piece with the headline: “Drones may have attacked humans fully autonomously for the first time.” The NS is very careful to caveat that military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

Let’s be clear: the UN report does not say for certain whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

The reason why some experts took issue with these stories was because they followed the UN’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems or LAWS (that’s policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

The IAI Harpy is launched from the ground and can linger for hours over a target area.
Image: IAI

“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munition have been on the battlefield for a while – most notably in Nagorno-Karaback. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign To Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapon-makers access to cheap vision systems that can select targets as quickly as your phone identifies pets, plants, and familiar faces in your camera roll. These systems promise nuanced and precise identification of targets but are also much more prone to mistakes.

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”
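To make the brittleness Kallenborn describes concrete, here is a toy sketch: a two-class linear scorer with deliberately contrived weights, where changing a single pixel flips the predicted class. This is only an illustration of the failure mode, not the model from the study he cites.

```python
import numpy as np

# Toy two-class linear "classifier" over a flattened 4x4 image.
# The weights are contrived so the class margin is tiny, making a
# single-pixel change enough to flip the argmax (a deliberately
# simple stand-in for the brittleness described above, not the
# actual machine vision system from the cited study).
W = np.zeros((2, 16))
W[0, 0] = 1.0                   # class 0 keys on pixel 0
W[1, 1] = 1.0                   # class 1 keys on pixel 1

def predict(x):
    return int(np.argmax(W @ x))

image = np.full(16, 0.5)
image[0] = 0.6                  # class 0 wins by a 0.1 margin
assert predict(image) == 0

perturbed = image.copy()
perturbed[1] = 0.9              # change exactly one pixel
assert predict(perturbed) == 1  # the prediction flips
```

Real deep networks are far more complex than this linear toy, but the published one-pixel results show the same qualitative fragility at much higher dimensionality.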

This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)” Demo videos appear to show it doing exactly that. In one clip, the quadcopter homes in on a mannequin in a stationary group.

But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

“I think we really have to think about the bigger picture […] which is why I focus on the practice, as well as functionality,” says Schwarz. “In my work I try and show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ choice. And, as you rightly point out, errors will very likely prevail […] which will likely be addressed only post hoc.”

Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. As Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”

Repost: Original Source and Author Link


Duke Energy used computer vision and robots to cut costs by $74M

All the sessions from Transform 2021 are available on-demand now. Watch now.

Duke Energy’s AI journey began because the utility company had a business problem to solve, Duke Energy chief information officer Bonnie Titone told VentureBeat’s head of AI content strategy Hari Sivaraman at the Transform 2021 virtual conference on Thursday.

Duke Energy was facing some significant challenges, such as the growing issue of climate change and the need to transition to clean energy in order to reach net zero emissions by 2050. Duke Energy is considered an essential service, as it supplies 25 million people with electricity daily, and everything the utility company does revolves around a culture of safety and reliability. Together, these factors were a catalyst for exploring AI technologies, Titone said, because whatever the company chose to do had to support the clean energy transition, deliver value to customers, and improve both how employees work and their safety.

“We look to emerging data science tools and AI solutions, which in turn brought us to computer vision, and ultimately, drones in order to inspect our solar farms,” Titone said.

The shift to clean energy involves a significant number of solar farms — Florida alone has 3 million solar panels, Titone said — and inspecting them is a labor-intensive, time-consuming, and risky endeavor. It can take about 40 hours to inspect one unit, and a regular solar site may have somewhere between 20 and 25 units to inspect. It’s a dangerous task, as technicians walk around 500-acre solar sites with heat guns so they can inspect the panels and may need to touch live wires. The company began experimenting with advanced drones with infrared cameras to try to streamline the work. The technicians were able to use the images taken by the drones to determine where they were seeing faults and issues. Thousands of images were stitched together with computer vision, giving technicians the ability to look for issues using the images in a much safer way, Titone said.

After adopting computer vision, Duke Energy began to consider automating the process. The company developed MOVES (Mobile Observation Vehicle and Equipment Solutions), a model that collects and processes the data and images from the drones and identifies faults within minutes. By applying AI and machine learning, the program has significantly reduced labor and time costs for the company. Accuracy has also continued to improve over time; the latest inspection model reached 91% accuracy.
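Duke Energy’s actual MOVES pipeline is not public, but the core idea of screening infrared imagery for faults can be sketched in a few lines. This hypothetical helper assumes faulty panels run hot relative to their peers in IR readings; the function name and z-score threshold are illustrative choices, not Duke’s method.

```python
import numpy as np

# Hypothetical fault screen over infrared panel temperatures (deg C).
# Flags panels whose reading deviates sharply from the group, since
# faulty cells typically show up hot in IR imagery. Purely a sketch
# of the idea, not Duke Energy's proprietary model.
def flag_hot_panels(panel_temps, z_thresh=2.5):
    temps = np.asarray(panel_temps, dtype=float)
    std = temps.std()
    if std == 0:
        return []                     # all panels at the same temperature
    z = (temps - temps.mean()) / std
    return np.flatnonzero(z > z_thresh).tolist()

# One panel running 55 degrees hotter than its neighbors gets flagged.
print(flag_hot_panels([35] * 9 + [90]))  # -> [9]
```

A screen this simple runs comfortably on edge hardware, which matters for the processing constraints discussed below.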

“We compiled that information for the technicians and gave them the ability to navigate pretty easily to where we can schedule maintenance for customers, and we did this all without a technician ever having to go out to the site,” Titone said. The program has led to more than $74 million in cost reductions and saved 385,000 man-hours.

Cloud and edge processing

Duke Energy also had to consider how to process the data the drones were collecting. A typical drone flight can produce thousands of photos, sometimes with no precise location data associated with the images. Trying to do the analysis in the cloud would be impossible because of the sheer amount of data and information involved, so Duke Energy had to process the images at the edge so that it could make real-time decisions. The images had to be stitched together to make a precise picture of the solar farm without requiring somebody to actually walk the site.
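The stitching step can be sketched under a simplifying assumption: if each tile’s position on a capture grid is known, tiles slot directly into one mosaic array. Real pipelines must estimate overlap and position from image features (especially when, as with some of these flights, precise location data is missing); `stitch` here is a hypothetical helper, not Duke Energy’s implementation.

```python
import numpy as np

# Simplified stitching sketch: place drone tiles captured on a known
# row-major grid into a single mosaic array. Feature-based alignment,
# which real pipelines need when GPS metadata is absent, is out of
# scope for this illustration.
def stitch(tiles, rows, cols):
    h, w = tiles[0].shape
    mosaic = np.zeros((rows * h, cols * w), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)      # grid position from tile index
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return mosaic

# Four 2x2 tiles become one 4x4 mosaic.
tiles = [np.full((2, 2), i) for i in range(4)]
mosaic = stitch(tiles, rows=2, cols=2)
print(mosaic.shape)  # -> (4, 4)
```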

Instead of trying to do everything at once, Duke Energy worked on small increments of the project. Once one thing worked, the team moved on to the next step. Since Duke Energy had its own software engineering team, it was able to build its own models with its own methodologies as part of a one-stop shop. This process eventually led to creating over 40 products.

Titone said, “Had we not had that footprint in the cloud journey, we wouldn’t have been able to develop these models and be able to process that data as quickly as we could.”

Working with data

Titone also discussed best practices with storing and cleaning data. As the team has moved toward a cloud-based data strategy, it uses a lot of data lakes. The data lakes are accessible by other systems and also by some data analysis and data science components that must quickly process the information.

“I would say we’re using a lot of the traditional methods around data lakes in order to process all of that,” Titone said, and the team models the data with “what we call our MATLAB, which stands for machine learning, AI and deep learning.”

Reflecting on the product’s eventual high accuracy, Titone said it was important to be OK with failing in the beginning. “I think at the beginning of the journey, we didn’t have an expectation that we would get right out of the gate,” she said. As time went on, the team learned and continued to modify the model based on the results. Through those iterations, for example, the team realized it should not only extract images but also piece different processing techniques together, and it adjusted the angle and height of the drone.

AI as a career opportunity

AI’s greater efficiency and cost-effectiveness does reduce labor hours, which raises the concern that AI is taking jobs away from people. Titone said the better perspective is to view this as an opportunity: upskilling employees to work with AI is an investment in the workforce. If employees understand AI, she said, they become more valuable as workers because they qualify for more advanced roles.

“I never approach AI as taking somebody’s job or role; the way I’ve always approached AI is that it should complement our workforce, that it should give us a set of skills and career paths that our teammates can take,” Titone said.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Farming is finally ready for robots


As an investor in the ag-tech space for the last decade, I would argue that 2020 was a tipping point. The pandemic accelerated a shift that was already underway to deploy automation in the farming industry. Other venture capitalists seem to agree. In 2020, they invested $6.1 billion into ag-tech startups in the US, a 60% increase over 2019, according to research from PitchBook. The number is all the more staggering when you consider that, in 2010, VC investment in US ag-tech startups totaled just $322 million. Why did 2020 turn the tide in ag-tech investment, and what does that mean for the future of farming? Which ag-tech startups are poised to become leaders in the multi-trillion-dollar global agriculture industry? I’ll delve into those questions below.

The pandemic changed many sectors forever and agriculture is no exception. With idled processing plants, supply chain disruptions, and COVID outbreaks among workers curtailing an already-strapped labor force, farmers faced unprecedented challenges and quickly realized some of these problems could be solved by automation. Entrepreneurs already active in the ag-tech space, and those who may have been considering launching an ag-tech company, saw an opportunity to apply innovation to agriculture’s long-standing — and now all the more pressing — challenges.

Ag-tech investment boom

Companies applying robotics, computer vision, and automation solutions to farming were the greatest beneficiaries of the record levels of VC funding in the last year, in particular vertical farming companies that grow crops indoors on scaffolding. Bowery recently raised $300 million in additional venture funding and is now valued at $2.3 billion, while AeroFarms recently announced plans to go public in a SPAC deal valuing the company at $1.2 billion. Vertical herb farmer Plenty has raised over $500 million in venture funding, while vertical tomato grower AppHarvest went public via a SPAC in February and is now valued at over $1.7 billion, despite recent share price fluctuations.

But while vertical farming companies have received a large share of venture capital dollars, there are many other ag-tech startups emerging in the race to automate agriculture. Some of the ag-tech sub-sectors where we see the most potential for growth in the next five years include tilling, weeding, and planting robots; sensor-fitted drones used to assess crops and plan fertilizer schedules; greenhouse and nursery automation technology; computer vision systems to identify crop health, weeds, nitrogen, and water levels in plants and soil; crop transport, sorting, and packing robots; and AI software for predictive yield planning.

Some of the sub-sectors that are a bit further out include picking and planting robots, as well as fertilizer and watering drones. A few startups are building tactile robotic hands that could be used to pick delicate fruit such as strawberries or tomatoes. Yet the picked fruit must be placed on autonomous vehicles that can navigate uneven terrain and paired with packing robots that place the fruit carefully to avoid bruising, so challenges remain. Meanwhile, drones exist today that can drop fertilizer or water on fields, but their use is strictly regulated and their range and battery capacity is limited by payload capabilities. In about 10 years, we could begin to see drones that use cameras, computer vision, and AI to assess plant health and then automatically apply the right amount and type of fertilizer based on the plants’ size and chemical composition.

Solving the right problems

For any ag-tech company to win over farmers, it must solve a big problem and do it in a way that saves them significant time and/or money. While a manufacturer might be happy to deploy a robot for incremental improvement, farmers operate on exceedingly tight margins and want to see exponential improvement. Weeds, for example, are a huge problem for farmers, and the preferred method of killing them in the past, pesticides, is dangerous and unpopular. A number of companies have emerged to address this problem with a combination of AI, computer vision, and robotics to identify and pull weeds in fields. Naio Technologies and FarmWise are examples (disclosure: my firm is an investor in FarmWise). Meanwhile Bear Flag Robotics is making great strides in the automated farm vehicle space, building robotic tractors that intelligently monitor and till large fields. And Burro is a leader in crop transport robots, with its autonomous vehicles used to move picked fruit and vegetables from the field to processing centers.

While fully-autonomous harvesting is still a ways off, apple-picking robots are starting to gain ground, including those from Tevel Aerobotics Technologies. Tevel’s futuristic drones can recognize ripe apples and fly autonomously about the trees picking them and placing them carefully in a large transport box. Abundant Robotics takes a different approach to harvesting apples, using a terrestrial robot with an intelligent suction arm to harvest and pack ripe fruit.

Several greenhouse and nursery robots aim to improve handling, climate control, and other tasks in plant-growing operations. Harvest Automation’s small autonomous robots can recognize, pick up, and move plants around a nursery. Other greenhouse automation companies to watch include iUNU, which offers a computer vision system for greenhouses, and Iron Ox, which has built large robot-driven greenhouses to grow vegetables.

And, finally, satellite imaging companies such as Planet Labs and Descartes Labs will also play an important role in ag-tech, as they provide geospatial images of cropland that can help farmers understand global climate trends.

Roadblocks remain

Facing climate change, a growing population, worker shortages, and other challenges that will only grow more intense, the agriculture sector is ripe for disruption. Agricultural giants such as Monsanto and John Deere, as well as small and mid-sized farms, are embracing automation to improve crop yields and production. But wide-scale adoption of farm automation won’t happen overnight. For any ag-tech innovation to take hold, it must solve a huge problem and do so in a repeatable way that doesn’t interfere with a farm’s current workflow. It doesn’t help to deploy picker robots if they can’t integrate into a farm’s current crop packing and transport systems, for example.

We may well see small and medium-sized farms leading the way in the adoption of automation. Even though industrial farms have large capital reserves, they also have established systems in place that are harder to replace. Smaller farms have fewer such systems to replace and are willing to try robots-as-a-service (RaaS) solutions from lesser-known startups. For millennia, farmers have thought outside the box to find solutions to everyday problems, so it stands to reason they want to work with startups that think the same way they do. Farmers wake up every day and think, here’s a big problem, what innovative trick can I use to solve it? Perhaps farmers steeped in self-reliance and ag-tech entrepreneurs steeped in engineering and computer science aren’t so different after all.

Kevin Dunlap is Co-founder and Managing Partner at Calibrate Ventures. He was previously a managing director at Shea Ventures, where he backed companies such as Ring, SolarCity, and Chegg. Kevin currently sits on the boards of Broadly, Soft Robotics, and Realized.




Tech News

Want to code and build robots and other cool gadgets? This Raspberry Pi training can help

TLDR: The 2021 Raspberry Pi and Arduino Bootcamp Bundle melds the worlds of coding, electronics, and robotics for first-time creators with this five-course training package.

There are probably loads of you out there who really wish they understood the finer points of programming, electronics, robotics, the Internet of Things, and all that…but just don’t know where to start.

We don’t blame you. There aren’t a lot of simple, proven entry points into the grassroots subculture of tinkerers and tech innovators that don’t quickly feel overwhelming. But that doesn’t mean there aren’t a few accessible ways in.

The 2021 Raspberry Pi and Arduino Bootcamp ($19.99, over 90 percent off, from TNW Deals) is one of those access points. It’s a five-course distillation that can help even first-time creators understand the basics of modern programming and the fundamentals of crafting small electronics, then link the two together through the popular Raspberry Pi microcomputer and Arduino electronics platform to lift a learner’s technical training to the next level.

Even if you’ve never coded before or don’t understand how a circuit works, no need to fear. The Raspberry Pi For Beginners and Arduino for Beginners courses can ease learners in gently. 

The Raspberry Pi course examines the possibilities of this versatile single-board computer, from basic introductions and capabilities up to learning the Python coding language from scratch and building cool starter projects, from a complete surveillance and alarm system to a web server run entirely on the Pi.

Meanwhile, the Arduino training also includes some soothing handholding for novice creators. As users get comfortable working with Arduino circuits, boards, controllers and other components, these practical hands-on lessons will bring it all to life with 20 knowledge-building activities all leading to a final Arduino project.

Other courses plunge deeper into both the Pi and Arduino environments, as well as how the two work together. Arduino OOP (Object Oriented Programming) examines how to write a complete Arduino project, step by step.

Finally, robotics takes center stage with ROS2 for Beginners, which gets into creating in the Robot Operating System (ROS), a collection of software frameworks used in robot development. After learning how to create reusable code for any robot powered by ROS, Learn ROS2 as a ROS1 Developer and Migrate Your ROS Projects advances that training, covering the relationship between ROS and ROS2 as well as how to take projects from one to the other.

The 2021 Raspberry Pi and Arduino Bootcamp Bundle includes coursework that would usually cost almost $1,000, but right now, all five courses in this package are available for only $19.99.

Prices are subject to change.
