Alphabet is putting its prototype robots to work cleaning up around Google’s offices

What does Google’s parent company Alphabet want with robots? Well, it would like them to clean up around the office, for a start.

The company announced today that its Everyday Robots Project — a team within its experimental X labs dedicated to creating “a general-purpose learning robot” — has moved some of its prototype machines out of the lab and into Google’s Bay Area campuses to carry out some light custodial tasks.

“We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices,” said Everyday Robots’ chief robot officer Hans Peter Brøndmo in a blog post. “The same robot that sorts trash can now be equipped with a squeegee to wipe tables and use the same gripper that grasps cups can learn to open doors.”

The robots in question are essentially arms on wheels: a multipurpose gripper on the end of a flexible arm, attached to a central tower. There’s a “head” on top of the tower with cameras and sensors for machine vision, and what looks like a spinning lidar unit on the side, presumably for navigation.

One of Alphabet’s Everyday Robot machines cleans the crumbs off a cafe table.
Image: Alphabet

As Brøndmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robot team in 2019. The big promise that’s being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in “unstructured” environments like homes and offices.

Right now, we’re very good at building machines that can carry out repetitive jobs in a factory, but we’re stumped when trying to get them to replicate simple tasks like cleaning up a kitchen or folding laundry.

Think about it: you may have seen robots from Boston Dynamics performing backflips and dancing to The Rolling Stones, but have you ever seen one take out the trash? It’s because getting a machine to manipulate never-before-seen objects in a novel setting (something humans do every day) is extremely difficult. This is the problem Alphabet wants to solve.

Unit 033 makes a bid for freedom.
Image: Alphabet

Is it going to? Well, maybe one day — if company execs feel it’s worth burning through millions of dollars in research to achieve this goal. Certainly, though, humans are going to be cheaper and more efficient than robots for these jobs in the foreseeable future. Today’s update from Everyday Robots is neat, but it’s far from a leap forward. You can see from the GIFs that Alphabet shared of its robots that they’re still slow and awkward, carrying out tasks inexpertly and at a glacial pace.

Still, it’s significant that the robots are being tested “in the wild” rather than in the lab. Compare Alphabet’s machines to Samsung’s Bot Handy, for example: a similar-looking tower-and-arm bot that the company showed off at CES last year, apparently pouring wine and loading a dishwasher. Bot Handy looks like it’s performing these jobs, but it was only carrying out a prearranged demo. Who knows how capable, if at all, that robot is in the real world? At least Alphabet is finding out for itself.

Repost: Original Source and Author Link


DeepMind proposes new benchmark to improve robots’ object-stacking abilities

Stacking an object on top of another object is a straightforward task for most people. But even the most complex robots struggle to handle more than one such task at a time. Stacking requires a range of different motor, perception, and analytics skills, including the ability to interact with different kinds of objects. The level of sophistication involved has elevated this simple human task to a “grand challenge” in robotics and spawned a cottage industry dedicated to developing new techniques and approaches.

A team of researchers at DeepMind believes that advancing the state of the art in robotic stacking will require a new benchmark. In a paper to be presented at the Conference on Robot Learning (CoRL 2021), they introduce RGB-Stacking, which tasks a robot with learning how to grasp different objects and balance them on top of one another. While benchmarks for stacking tasks already exist in the literature, the researchers assert that what sets their research apart is the diversity of objects used and the evaluations performed to validate their findings. The results demonstrate that a combination of simulation and real-world data can be used to learn “multi-object manipulation,” suggesting a strong baseline for the problem of generalizing to novel objects, the researchers wrote in the paper.

“To support other researchers, we’re open-sourcing a version of our simulated environment, and releasing the designs for building our real-robot RGB-stacking environment, along with the RGB-object models and information for 3D printing them,” the researchers said. “We are also open-sourcing a collection of libraries and tools used in our robotics research more broadly.”


With RGB-Stacking, the goal is to train a robotic arm via reinforcement learning to stack objects of different shapes. Reinforcement learning is a machine learning technique that enables a system — in this case a robot — to learn by trial and error, using feedback from its own actions and experiences.
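As a loose illustration of that trial-and-error loop (a toy sketch, not DeepMind's actual training code), here is a minimal agent with two hypothetical actions that learns, from reward feedback alone, which one works better:

```python
import random

def reward(action):
    # Hypothetical environment: action 1 succeeds more often than action 0
    # (think of it as the better of two candidate grasps).
    return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

def train(episodes=5000, epsilon=0.1, lr=0.1, seed=0):
    random.seed(seed)
    q = [0.0, 0.0]  # estimated value of each action
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the current best estimate.
        a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        # Nudge the estimate toward the observed reward.
        q[a] += lr * (reward(a) - q[a])
    return q

q = train()
print(q)  # the estimate for action 1 ends up higher than for action 0
```

The real benchmark replaces this two-action toy with a high-dimensional simulated arm, but the underlying idea, act, observe reward, update, is the same.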

RGB-Stacking places a gripper attached to a robot arm above a basket, and three objects in the basket: one red, one green, and one blue (hence the name RGB). A robot must stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and distraction.
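A success check for the task as described (the red object resting on the blue one within the 20-second episode) might look like the sketch below. The function names, coordinate convention, and tolerances are illustrative assumptions, not DeepMind's benchmark code:

```python
def is_stacked(red, blue, xy_tol=0.03, z_gap=0.05):
    """True if the red object sits roughly centered on top of the blue one.

    red, blue: (x, y, z) object centers in meters (assumed convention).
    """
    dx, dy = red[0] - blue[0], red[1] - blue[1]
    horizontally_aligned = (dx * dx + dy * dy) ** 0.5 <= xy_tol
    resting_on_top = 0.0 < red[2] - blue[2] <= z_gap
    return horizontally_aligned and resting_on_top

def episode_success(samples, episode_len=20.0):
    """samples: list of (time, red_xyz, blue_xyz) observations from one episode."""
    return any(t <= episode_len and is_stacked(r, b) for t, r, b in samples)

print(is_stacked((0.01, 0.0, 0.04), (0.0, 0.0, 0.0)))  # True: red centered on blue
print(is_stacked((0.10, 0.0, 0.04), (0.0, 0.0, 0.0)))  # False: red off to the side
```

Note that the green object never appears in the check at all; it only matters as an obstacle the policy must work around.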

According to DeepMind researchers, the learning process ensures that a robot acquires generalized skills through training on multiple object sets. RGB-Stacking intentionally varies the grasp and stack qualities that define how a robot can grasp and stack each object, which forces the robot to exhibit behaviors that go beyond a simple pick-and-place strategy.


“Our RGB-Stacking benchmark includes two task versions with different levels of difficulty,” the researchers explain. “In ‘Skill Mastery,’ our goal is to train a single agent that’s skilled in stacking a predefined set of five triplets. In ‘Skill Generalization,’ we use the same triplets for evaluation, but train the agent on a large set of training objects — totaling more than a million possible triplets. To test for generalization, these training objects exclude the family of objects from which the test triplets were chosen. In both versions, we decouple our learning pipeline into three stages.”

The researchers claim that their methods in RGB-Stacking result in “surprising” stacking strategies and “mastery” of stacking a subset of objects. Still, they concede that they only scratch the surface of what’s possible and that the generalization challenge remains unsolved.

“As researchers keep working to solve the open challenge of true generalization in robotics, we hope this new benchmark, along with the environment, designs, and tools we have released, contribute to new ideas and methods that can make manipulation even easier and robots more capable,” the researchers added.

As robots become more adept at stacking and grasping objects, some experts believe that this type of automation could drive the next U.S. manufacturing boom. In a recent study from Google Cloud and The Harris Poll, two-thirds of manufacturers said that the use of AI in their day-to-day operations is increasing, with 74% claiming that they align with the changing work landscape. Companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion by 2025.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Have autonomous robots started killing in war? The reality is messier than it appears

It’s the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.”

But is it? As you might guess, it’s a hard question to answer.

The new coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and if we wait for this to happen, their presence in war will have long been normalized.

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in:

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering a period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing various troop movements, weapon transfers, raids and skirmishes that took place among the war’s various factions, both foreign and domestic.

The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to the Libyan National Army of Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces or HAF). Here’s the relevant passage in full:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say — at least not outright — is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a story in the New Scientist, which ran a piece with the headline: “Drones may have attacked humans fully autonomously for the first time.” The NS is very careful to caveat that military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

Let’s be clear: the UN report does not say for certain that drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

Some experts took issue with these stories because they followed the UN’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems, or LAWS (that’s policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

The IAI Harpy is launched from the ground and can linger for hours over a target area.
Image: IAI

“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munition have been on the battlefield for a while – most notably in Nagorno-Karaback. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.

If killer robots have been around for decades, why has there been so much discussion about them in recent years, with groups like the Campaign To Stop Killer Robots pushing for regulation of this technology in the UN? And why is this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapon-makers access to cheap vision systems that can select targets as quickly as your phone identifies pets, plants, and familiar faces in your camera roll. These systems promise nuanced and precise identification of targets but are also much more prone to mistakes.

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But AI targeting systems might misclassify the kid as a soldier, because current AI systems are highly brittle — one study showed a change in a single pixel is sufficient to cause machine vision systems to draw radically different conclusions about what it sees. An open question is how often those errors occur during real-world use.”
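Kallenborn's brittleness point can be illustrated with a deliberately fragile toy classifier. Real one-pixel attacks target deep neural networks, but the failure mode is the same: a single changed pixel flips the predicted label. Everything here, the four-pixel "images", the class centroids, the labels, is made up for illustration:

```python
# Nearest-centroid classifier over tiny 4-pixel images (values in [0, 1]).
CENTROIDS = {
    "soldier":  [0.9, 0.1, 0.5, 0.5],
    "civilian": [0.1, 0.9, 0.5, 0.5],
}

def classify(image):
    def sq_dist(c):
        return sum((p - q) ** 2 for p, q in zip(image, c))
    # Assign the label whose centroid is closest to the input image.
    return min(CENTROIDS, key=lambda label: sq_dist(CENTROIDS[label]))

image = [0.1, 0.8, 0.5, 0.5]   # clearly closer to the "civilian" centroid
attacked = list(image)
attacked[0] = 1.0              # perturb a single pixel

print(classify(image))     # civilian
print(classify(attacked))  # soldier
```

A deep network is vastly more capable than this toy, but its decision boundaries can sit just as close to real inputs, which is why single-pixel perturbations can flip its output too.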

This is why the incident in Libya is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to the quadcopter’s manufacturer, STM, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.)” Demo videos appear to show it doing exactly that. In one clip, the quadcopter homes in on a mannequin in a stationary group.

But should we trust a manufacturer’s demo reel or brochure? And does the UN report make it clear that machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” that this was the case, but McDonald is more skeptical. “I think it’s sensible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t necessarily know if it was.” In a tweet, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What we’re left with is, perhaps unsurprisingly, the fog of war. Or more accurately: the fog of LAWS. We can’t say for certain what happened in Libya and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

For Kallenborn, this is sort of the point: it underscores the difficulties we face trying to create meaningful oversight in the AI-assisted battles of the future. Of course the first use of autonomous weapons on the battlefield won’t announce itself with a press release, he says, because if the weapons work as they’re supposed to, they won’t look at all out of the ordinary. “The problem is autonomy is, at core, a matter of programming,” he says. “The Kargu-2 used autonomously will look exactly like a Kargu-2 used manually.”

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who’s affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus on the specific functionality of these systems. What do they do and how do they do it?

“I think we really have to think about the bigger picture […] which is why I focus on the practice, as well as functionality,” says Schwarz. “In my work I try and show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ choice. And, as you rightly point out, errors will very likely prevail […] which will likely be addressed only post hoc.”

Schwarz says that despite the myriad difficulties, in terms of both drafting regulation and pushing back against the enthusiasm of militaries around the world to integrate AI into weaponry, “there is critical mass building amongst nations and international organizations to push for a ban for systems that have the capacity to autonomously identify, select and attack targets.”

Indeed, the UN is still conducting a review into possible regulations for LAWS, with results due to be reported later this year. As Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”



Duke Energy used computer vision and robots to cut costs by $74M

All the sessions from Transform 2021 are available on-demand now. Watch now.

Duke Energy’s AI journey began because the utility company had a business problem to solve, Duke Energy chief information officer Bonnie Titone told VentureBeat’s head of AI content strategy Hari Sivaraman at the Transform 2021 virtual conference on Thursday.

Duke Energy was facing some significant challenges, such as the growing issue of climate change and the need to transition to clean energy in order to reach net-zero emissions by 2050. Duke Energy is considered an essential service, as it supplies 25 million people with electricity daily, and everything the utility company does revolves around a culture of safety and reliability. These variables together were a catalyst for exploring AI technologies, Titone said, because whatever the company chose to do, it had to support the clean energy transition, deliver value to customers, and help employees do their work more safely.

“We look to emerging data science tools and AI solutions, which in turn brought us to computer vision, and ultimately, drones in order to inspect our solar farms,” Titone said.

Solar farms play a significant role in the shift to clean energy — Florida alone has 3 million solar panels, Titone said — and inspecting them is a labor-intensive, time-consuming, and risky endeavor. It can take about 40 hours to inspect one unit, and a regular solar site may have somewhere between 20 and 25 units to inspect. It’s a dangerous task, as technicians walk around 500-acre solar sites with heat guns so they can inspect the panels, and they may need to touch live wires. The company began experimenting with advanced drones carrying infrared cameras to streamline the work. Technicians were able to use the images taken by the drones to determine where faults and issues were occurring. Thousands of images were stitched together with computer vision, giving technicians a much safer way to look for issues, Titone said.

After adopting computer vision, Duke Energy began to consider automating the process. The company developed MOVES (Mobile Observation Vehicle and Equipment Solutions), a model that collects and processes the data and images from the drones and identifies faults within minutes. By applying AI and machine learning technologies, the program has significantly reduced labor and time costs for the company. Accuracy has also continued to improve over time; the latest model used in the inspections reached 91% accuracy.
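A rough sketch of what automated fault flagging can look like: scan per-cell infrared temperatures for one panel and flag it when a hotspot deviates sharply from the panel's typical reading. The heuristic, thresholds, and numbers below are illustrative assumptions; Duke Energy's MOVES pipeline uses trained computer-vision models, not this simple rule:

```python
def flag_faulty_panel(temps_c, delta_c=15.0):
    """temps_c: flat list of per-cell temperatures (deg C) from one IR image.

    Returns True when any cell runs far hotter than the panel's median,
    which in IR inspection commonly indicates a failed or shaded cell.
    """
    typical = sorted(temps_c)[len(temps_c) // 2]  # median as the baseline
    hotspots = [t for t in temps_c if t - typical > delta_c]
    return len(hotspots) > 0

healthy = [41.0, 42.5, 40.8, 41.9, 42.1, 41.4]
faulty  = [41.0, 42.5, 40.8, 78.3, 42.1, 41.4]   # one overheating cell

print(flag_faulty_panel(healthy))  # False
print(flag_faulty_panel(faulty))   # True
```

The appeal of moving from a hand-tuned rule like this to a learned model is robustness: reflections, ambient heat, and camera angle all shift the raw temperatures in ways a fixed threshold handles poorly.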

“We compiled that information for the technicians and gave them the ability to navigate pretty easily to where we can schedule maintenance for customers, and we did this all without a technician ever having to go out to the site,” Titone said. The program has led to more than $74 million in cost reductions and saved 385,000 man-hours.

Cloud and edge processing

Duke Energy had to consider how to process the data the drones were collecting. A typical drone flight can produce thousands of photos, sometimes with no precise location data attached to the images. Trying to do the analysis in the cloud would have been impossible because of the sheer amount of data involved, so Duke Energy had to process the images at the edge in order to make real-time decisions. The images were stitched together into a precise picture of the solar farm without requiring somebody to actually walk the site.

Instead of trying to do everything at once, Duke Energy worked on small increments of the project. Once one thing worked, the team moved on to the next step. Since Duke Energy had its own software engineering team, it was able to build its own models with its own methodologies as part of a one-stop shop. This process eventually led to creating over 40 products.

Titone said, “Had we not had that footprint in the cloud journey, we wouldn’t have been able to develop these models and be able to process that data as quickly as we could.”

Working with data

Titone also discussed best practices with storing and cleaning data. As the team has moved toward a cloud-based data strategy, it uses a lot of data lakes. The data lakes are accessible by other systems and also by some data analysis and data science components that must quickly process the information.

“I would say we’re using a lot of the traditional methods around data lakes in order to process all of that,” Titone said, and the team models the data with “what we call our MATLAB, which stands for machine learning, AI and deep learning.”

Reflecting on the high accuracy the product reached, Titone said it was important to be OK with failing in the beginning. “I think at the beginning of the journey, we didn’t have an expectation that we would get [it] right out of the gate,” she said. As time went on, the team learned and continued to modify the model according to the results. Over successive iterations, for example, the team realized it should not only extract images but also piece different processing techniques together, and it adjusted the angle and height of the drone.

AI as a career opportunity

The fact that AI is more efficient and cost-effective does result in reduced labor hours, which raises the concern that AI is taking jobs away from people. Titone said the better perspective was to view this as an opportunity. She said that upskilling employees to be able to work with AI was an investment in the workforce. If the employees understand AI, she said, they become more valuable as workers because they qualify for more advanced roles.

“I never approach AI as taking somebody’s job or role; the way I’ve always approached AI is that it should complement our workforce, that it should give us a set of skills and career paths that our teammates can take,” Titone said.





Farming is finally ready for robots


As an investor in the ag-tech space for the last decade, I would argue that 2020 was a tipping point. The pandemic accelerated a shift that was already underway to deploy automation in the farming industry. Other venture capitalists seem to agree. In 2020, they invested $6.1 billion into ag-tech startups in the US, a 60% increase over 2019, according to research from PitchBook. The number is all the more staggering when you consider that, in 2010, VC investment in US ag-tech startups totaled just $322 million. Why did 2020 turn the tide in ag-tech investment, and what does that mean for the future of farming? Which ag-tech startups are poised to become leaders in the multi-trillion-dollar global agriculture industry? I’ll delve into those questions below.

The pandemic changed many sectors forever and agriculture is no exception. With idled processing plants, supply chain disruptions, and COVID outbreaks among workers curtailing an already-strapped labor force, farmers faced unprecedented challenges and quickly realized some of these problems could be solved by automation. Entrepreneurs already active in the ag-tech space, and those who may have been considering launching an ag-tech company, saw an opportunity to apply innovation to agriculture’s long-standing — and now all the more pressing — challenges.

Ag-tech investment boom

Companies applying robotics, computer vision, and automation solutions to farming were the greatest beneficiaries of the record levels of VC funding in the last year, in particular vertical farming companies that grow crops indoors on scaffolding. Bowery recently raised $300 million in additional venture funding and is now valued at $2.3 billion, while AeroFarms recently announced plans to go public in a SPAC deal valuing the company at $1.2 billion. Vertical herb farmer Plenty has raised over $500 million in venture funding, while vertical tomato grower AppHarvest went public via a SPAC in February and is now valued at over $1.7 billion, despite recent share price fluctuations.

But while vertical farming companies have received a large share of venture capital dollars, there are many other ag-tech startups emerging in the race to automate agriculture. Some of the ag-tech sub-sectors where we see the most potential for growth in the next five years include tiller, weeding, and planting robots; sensor-fitted drones used to assess crops and plan fertilizer schedules; greenhouse and nursery automation technology; computer vision systems to identify crop health, weeds, nitrogen, and water levels in plants and soil; crop transport, sorting, and packing robots; and AI software for predictive yield planning.

Some of the sub-sectors that are a bit further out include picking and planting robots, as well as fertilizer and watering drones. A few startups are building tactile robotic hands that could be used to pick delicate fruit such as strawberries or tomatoes. Yet the picked fruit must be placed on autonomous vehicles that can navigate uneven terrain and paired with packing robots that place the fruit carefully to avoid bruising, so challenges remain. Meanwhile, drones exist today that can drop fertilizer or water on fields, but their use is strictly regulated and their range and battery capacity are limited by payload capabilities. In about 10 years, we could begin to see drones that use cameras, computer vision, and AI to assess plant health and then automatically apply the right amount and type of fertilizer based on the plants’ size and chemical composition.

Solving the right problems

For any ag-tech company to win over farmers, it must solve a big problem and do it in a way that saves them significant time and/or money. While a manufacturer might be happy to deploy a robot for incremental improvement, farmers operate on exceedingly tight margins and want to see exponential improvement. Weeds, for example, are a huge problem for farmers, and the preferred method of killing them in the past, pesticides, is dangerous and unpopular. A number of companies have emerged to address this problem with a combination of AI, computer vision, and robotics to identify and pull weeds in fields. Naio Technologies and FarmWise are examples (disclosure: my firm is an investor in FarmWise). Meanwhile Bear Flag Robotics is making great strides in the automated farm vehicle space, building robotic tractors that intelligently monitor and till large fields. And Burro is a leader in crop transport robots, with its autonomous vehicles used to move picked fruit and vegetables from the field to processing centers.

While fully-autonomous harvesting is still a ways off, apple-picking robots are starting to gain ground, including those from Tevel Aerobotics Technologies. Tevel’s futuristic drones can recognize ripe apples and fly autonomously about the trees, picking them and placing them carefully in a large transport box. Abundant Robotics takes a different approach to harvesting apples, using a terrestrial robot with an intelligent suction arm to harvest and pack ripe fruit.

Several greenhouse and nursery robots aim to improve handling, climate control, and other tasks in plant-growing operations. Harvest Automation’s small autonomous robots can recognize, pick up, and move plants around a nursery. Other greenhouse automation companies to watch include iUNU, which offers a computer vision system for greenhouses, and Iron Ox, which has built large robot-driven greenhouses to grow vegetables.

And, finally, satellite imaging companies such as PlanetLabs and Descartes Labs will also play an important role in ag-tech, as they provide geo-spatial images of crop land that can help farmers understand global climate trends.

Roadblocks remain

Facing climate change, a growing population, worker shortages, and other challenges that will only grow more intense, the agriculture sector is ripe for disruption. Agricultural giants such as Monsanto and John Deere, as well as small and mid-sized farms, are embracing automation to improve crop yields and production. But wide-scale adoption of farm automation won’t happen overnight. For any ag-tech innovation to take hold, it must solve a huge problem and do so in a repeatable way that doesn’t interfere with a farm’s current workflow. It doesn’t help to deploy picker robots if they can’t integrate into a farm’s current crop packing and transport systems, for example.

We may well see small and medium-sized farms leading the way in the adoption of automation. Even though industrial farms have large capital reserves, they also have established systems in place that are harder to replace. Smaller farms have fewer such systems to replace and are willing to try robots-as-a-service (RaaS) solutions from lesser-known startups. For millennia, farmers have thought outside the box to find solutions to everyday problems, so it stands to reason they want to work with startups that think the same way they do. Farmers wake up every day and think, here’s a big problem, what innovative trick can I use to solve it? Perhaps farmers steeped in self-reliance and ag-tech entrepreneurs steeped in engineering and computer science aren’t so different after all.

Kevin Dunlap is Co-founder and Managing Partner at Calibrate Ventures. He was previously a managing director at Shea Ventures, where he backed companies such as Ring, Solar City, and Chegg. Kevin currently sits on the boards of Broadly, Soft Robotics, and Realized.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.



Want to code and build robots and other cool gadgets? This Raspberry training can help

TLDR: The 2021 Raspberry Pi and Arduino Bootcamp Bundle melds the worlds of coding, electronics, and robotics for first-time creators with this five-course training package.

There are probably loads of you out there who wish you understood the finer points of programming, electronics, robotics, the Internet of Things, and all that…but just don’t know where to start.

We don’t blame you. There aren’t a lot of simple, proven entry points into the grassroots subculture of tinkerers and tech innovators that don’t quickly feel overwhelming. But that doesn’t mean there aren’t a few accessible ways in.

The 2021 Raspberry Pi and Arduino Bootcamp ($19.99, over 90 percent off, from TNW Deals) is one of those access points. This five-course distillation can help even first-time creators understand the basics of modern programming and the fundamentals of crafting small electronics, then link the two together through the popular Raspberry Pi microcomputer and Arduino electronics platform to lift their technical training to the next level.

Even if you’ve never coded before or don’t understand how a circuit works, no need to fear. The Raspberry Pi For Beginners and Arduino for Beginners courses can ease learners in gently. 

The Raspberry Pi course examines the possibilities of this versatile single-board computer, from basic introductions and capabilities to learning the Python coding language from scratch, then building cool starter projects such as a complete surveillance and alarm system or a web server run entirely on the Pi.

Meanwhile, the Arduino training also includes some soothing handholding for novice creators. As users get comfortable working with Arduino circuits, boards, controllers and other components, these practical hands-on lessons will bring it all to life with 20 knowledge-building activities all leading to a final Arduino project.

Other courses plunge deeper into both the Pi and Arduino environments, as well as how the two work together. Arduino OOP (Object-Oriented Programming) examines how to write a complete Arduino project, step by step.

Finally, robotics takes center stage with ROS2 for Beginners, which covers creating in the Robot Operating System (ROS), a collection of software frameworks used in robot development. After you learn how to create reusable code for any robot powered by ROS, Learn ROS2 as a ROS1 Developer and Migrate Your ROS Projects advances that training, covering the relationship between ROS and ROS2 as well as how to move projects from one to the other.

The 2021 Raspberry Pi and Arduino Bootcamp Bundle includes coursework that would usually cost almost $1,000, but right now, all five courses in this package are available for only $19.99.

Prices are subject to change.


New Amazon robots could enable ‘safer’ exploitation of warehouse staff

Weeks after a study revealed that Amazon warehouse workers are injured at higher rates than staff at rival firms, the company has revealed it’s testing new robots designed to improve employee safety.

The e-commerce giant has ingratiatingly named two of the bots after Sesame Street’s Bert and Ernie.

Bert is an Autonomous Mobile Robot (AMR) that’s built to navigate through Amazon facilities. In the future, the company envisions the bot carrying large and heavy items or carts across a site, reducing the strain on its human coworkers.

Ernie, meanwhile, is a workstation system that removes totes from robotic shelves and then delivers them to employees.

“The innovation with a robot like Ernie is interesting because while it doesn’t make the process go any faster, we’re optimistic, based on our testing, it can make our facilities safer for employees,” said Kevin Keck, worldwide director of Advanced Technology at Amazon.

The duo may one day be joined at work by another pair of robot colleagues: Scooter and Kermit, which transport carts across facilities.

Amazon said it plans to deploy Scooter in at least one Amazon facility this year, and introduce Kermit in a minimum of 12 North American sites.


The robots were unveiled amid growing concerns about worker safety at Amazon. Earlier this month, a union-backed report on safety data found serious injury rates at the company were almost 80% higher than the rest of the industry.

Amazon has previously been accused of deceiving the public about the rising injury rates in its warehouses. But in recent months, the company has begun to publicly acknowledge the problem.

In April, Jeff Bezos revealed another system designed to improve worker safety: an algorithm that rotates staff around tasks that use different body parts.

These initiatives are unlikely to discourage accusations that Amazon treats workers like robots. But hopefully, the systems can provide some support for their overworked human colleagues — and don’t end up replacing them.



The robots are coming for your office

As the editor-in-chief of The Verge, I can theoretically assign whatever I want. However, there is one topic I have failed to get people at The Verge to write about for years: robotic process automation, or RPA.

Admittedly, it’s not that exciting, but it’s an increasingly important kind of workplace automation. RPA isn’t robots in factories, which is often what we think of when it comes to automation. This is different: RPA is software. Software that uses other software, like Excel or an Oracle database.

On this week’s Decoder, I finally found someone who wants to talk about it with me: New York Times tech columnist Kevin Roose. His new book, Futureproof: 9 Rules for Humans in the Age of Automation, has just come out, and it features a lengthy discussion of RPA, who’s using it, who it will affect, and how to think about it as you design your career.

What struck me during our conversation were the jobs that Kevin talks about as he describes the impact of automation: they’re not factory workers and truck drivers. They’re accountants, lawyers, and even journalists. If you have the kind of job that involves sitting in front of a computer using the same software the same way every day, automation is coming for you. It won’t be cool or innovative or even work all that well — it’ll just be cheaper, faster, and less likely to complain. That might sound like a downer, but Kevin’s book is all about seeing that as an opportunity. You’ll see what I mean.

Okay, Kevin Roose, tech columnist, author, and the only reporter who has ever agreed to talk to me about RPAs. Here we go.

This transcript has been lightly edited for clarity.

Kevin Roose, you’re a tech columnist at The New York Times and you have a new book, Futureproof: 9 Rules for Humans in the Age of Automation, which is out now. Welcome to Decoder.

Thank you for having me.

You’re ostensibly here to promote your book, which is great. And I wanna talk about your book. But there’s one piece of the book that I am absolutely fascinated by, which is this thing called “robotic process automation.” And I’m gonna do my best with you on this show, today, to make that super interesting.

But before we get there, let’s talk about your book for a minute. What is your book about? Because I read it, and it has a big idea and then there’s literally nine rules for regular people to survive. So, tell me how the book came together.

So, the book is basically divided into two parts. And the first part is basically the diagnosis. It’s sort of, what is AI and automation doing today, in the economy, in our lives, in our homes, in our communities? How is it showing up? Who is it displacing, who is at risk of losing career opportunities or, you know, other things to these machines? What do we think about the arguments that this is all gonna turn out fine, what’s the evidence for that? And the second half of the book is really the sort of practical advice piece, that’s the nine rules that you mentioned.

And so it was my attempt to basically say, “What can we do about AI and automation?” Because I think you and I have been to dozens of tech conferences, and there’s always some talk about AI and automation and jobs. And some people are very optimistic, some people are very pessimistic, but at the end there’s always this chart that shows how many jobs could be displaced by automation in the next 10 years. And then the talk ends.


Everyone just goes to lunch, you know? And it’s like, “Okay, but…” I’m sitting there like, “What do I do?” I am a journalist, I work in an industry that is employing automation to do parts of my job; what should I, what should anyone, do to prepare for this? So, I wanted to write that, because I didn’t see that it existed anywhere.

You just said, “We’re journalists, it’s an industry that employs automation to do parts of our job.” I think that gets kinda right to the heart of the matter, which is the definition of automation, right?

I think when most people think of automation, they think of robots building cars and replacing factory workers in Detroit. You are talking about something much broader than that.

Yeah. I mean, that’s sort of the classic model of automation. And still, every time there’s a story about automation — and I hate this, and it’s like my personal vendetta against newspaper and magazine editors — every time you see a story about automation, there’s always a picture of a physical robot. And I get it. Most robots that we think of from sci-fi are physical robots. But most robots that exist in the world today, by a vast majority, are software.

And so, what you’re seeing today in corporate environments, in journalism, in lots of places, is that automation is showing up as software, that does parts of the job that, frankly, I used to do. My first job in journalism was writing corporate earnings stories. And that’s a job that has been largely automated by these software products now.

So an earnings story is, just to put in sort of an abstract framework, a company releases its earnings, those earnings are usually in a format, because the SEC dictates that earnings are released in a format.

You say, “Okay, here’s the earnings per share, here is the revenue. Here’s what the consensus analyst estimates were. They either beat the earnings or didn’t.” You can just write a script that makes that a story, you don’t really need a person in the mix because there’s almost no analysis to that. Right?

Right. And that’s not even a very hard form of automation. I mean, that technology existed years ago, because it’s very much like filling in Mad Libs. You know, it’s like, “Put the share price here, put the estimate here, put the revenue here.”
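The Mad Libs approach Roose describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s actual system; the company name, figures, and template fields are all made up:

```python
# A minimal "Mad Libs" earnings story: fill a fixed template with
# structured figures from an earnings release.
TEMPLATE = (
    "{company} reported earnings of ${eps:.2f} per share on revenue of "
    "${revenue:.1f} billion, {verb} the consensus analyst estimate of "
    "${est_eps:.2f} per share."
)

def earnings_story(company, eps, revenue, est_eps):
    # The only "analysis": did the company beat the estimate or not?
    verb = "beating" if eps > est_eps else "missing"
    return TEMPLATE.format(company=company, eps=eps, revenue=revenue,
                           est_eps=est_eps, verb=verb)

print(earnings_story("Acme Corp", 1.42, 3.8, 1.35))
```

Swap in real figures from a release and a script like this produces a passable first paragraph, which is why this kind of story was automatable years before modern language models.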

But now, what we’re seeing with GPT-3 and other language models that are based on machine learning, is that it’s not just Mad Libs anymore. These generated texts are getting much better, they’re much more convincing and compelling. They’re much more original, they’re not just sort of repeating things that they’ve picked up from other places. So I think we’ll see a lot more AI in journalism in the coming years.

So, we cover earnings at The Verge, we do it with a very different lens than a business publication, but we pay attention to a lot of companies. We care about their earnings, we cover them. If I could hire the robot to write the first two paragraphs of an earnings story for a reporter, I think all of my reporters would be like, “Great. I don’t wanna do that part. I wanna get to the fun part where Tim Cook on the call said something shocking about the future of the Mac.” Right? And that’s the part of the story that’s interesting to us, anyway.

It seems like a lot of the automation story is doing jobs that are really boring, that people don’t necessarily like to do. The tension there is, “Well, shouldn’t we automate the jobs that people don’t like to do?”

Yeah, this is the argument for automation in the workplace, is that all the jobs that are automatable are repetitive and boring and people don’t wanna be doing them anyway. And so that’s what you’ll hear if you call up a CEO of a company that sells automating software, I mean, RPA (robotic process automation) software. And that’s what I heard over and over writing this book. But it’s a little simplistic, because automation can also take away the fun parts of people’s jobs that they enjoy.

There’s a lot of examples of this through history, where a factory automates, and the owners of the factory are like, “This is great for workers, they hated lugging big pieces of steel and so now we’ll have machines do that and they’ll be able to do the fun and creative parts of the job.” And then they install the automation and the robots, and it turns out that the workers don’t like it because that was part of the job that they enjoyed. It wasn’t necessarily lugging the pieces of steel, but was the camaraderie that built around that. And the downtime between big tasks.

Ideally, it would be the case that automation only took away the bad and boring and dull parts of people’s jobs, but in practice that’s not always how it works. And now, with things like RPA, we’re seeing automation that is designed not just to replace one task or two tasks, but is really designed to replace an entire human’s workload. The RPA companies now are selling what they call digital workers.

So instead of automating earnings reports, you can automate entry-level corporate journalism. Or you can automate internal communications. There are various ways that this is appearing in the corporate world. But I think there’s a gap between what the sort of utopian vision of this is, and how it’s actually being put into practice.

Let’s talk about RPA. I’m very excited. You’re the only person who’s ever volunteered an hour of their life to talk about RPA with me. So, RPA is robotic process automation, which is an incredible name. In my opinion, made to sound as dull as possible.

It’s like ASMR, if you wanna fall asleep you could just read a story about RPA.

[Laughs] The first time anyone told me about RPA, it was a consultant at a big consulting firm, and they were like, “Our fastest growing line of business is going into hospitals and insurance companies where they have an old computer system, and it is actually cheaper and easier for us to replace the workers who use the old computer system, than it is to upgrade the computer system.”

“So, we install scripts that automate medical billing, and are basically KVM switches, so keyboard-video-mouse switches that use an old computer, like they click on the buttons. The mouse moves around and clicks on the old computer system, and that is faster and easier to replace the people, than it is to migrate the data out of the old system into a new system. Because everyone knows how complicated and expensive that is, and this is our fastest-growing line of business.”

And I thought that was just the most dystopian thing I’d ever heard. But then it turns out to be this massive industry that has grown tentacles everywhere.

Yeah, it’s amazing. I mean, my introduction to this world was sort of the same as yours. I was talking to a consultant. I was actually in Davos. That’s not my favorite way to start a story.


But we’ll go with it. And in Davos, you know, it’s this big conference. I call it “the Coachella of capitalism.” It’s like a week-long festival of rich people and heads of state. The main drag, the promenade, is all corporate-sponsored buildings and tents and, you know, corporations rent out restaurants and turn them into sort of branded hang-out zones for their people and guests during the week. And by far the biggest displays on the promenade the year that I went were from consulting companies. Consulting companies like Deloitte and Accenture and Cognizant and Infosys, and all these companies that are doing massive amounts of business in RPA, or what they sometimes refer to as digital transformation. That’s sort of a euphemism.

They were spending millions of dollars and bringing in millions of dollars. And it was like, “What is going on here?” Like, “What are these people actually selling?” And it turns out that a lot of what they’re selling is stuff that’ll plug into your Oracle database, that’ll allow it to talk to this other software suite that you use. The kind of human replacement that you’re talking about. It’s very expensive to rebuild your entire tech stack if you’re an old-line Fortune 500 company. But it’s relatively cheap to plug in an RPA bot that’ll take out, you know, three to five humans in the billing department.

One of the things in your book that you mention, you call this boring bots. And you go into the process by which, yeah, you don’t show up to work one day and there’s a robot sitting at your desk. As a company grows and scales, it just stops hiring some of these people. It lets their jobs get smaller and smaller, it doesn’t give them pathways up.

I see that very clearly, right? Like if their entire job is pasting from one Excel database, one Excel spreadsheet to another Excel spreadsheet all day, they might themselves just write a macro to do it. Why wouldn’t you as a company be like, “We’re just gonna automate that”? But all that other stuff in an office is the stuff that you’re saying is important. The social camaraderie, the culture of a company. Is that even on the table for these digital transformation companies?

It’s not really what they’re incentivized to think about. I mean, these consulting firms get brought in to cut costs. And cut costs pretty rapidly. And so that’s their mandate and that’s what they’re doing. Some of the way that they’re doing that is by taking out humans. They’re also streamlining processes so that maybe you can reorg some of the people who used to work in accounts payable into a different division, give them something to do. But a big piece of the sales pitch is like, “you can do as much or more work with many fewer people.” And I talked to one consultant in Davos, and I’m sorry, this is the last time I will ever mention Davos on this podcast.

I’m putting your over/under on Davos mentions at five.

[Laughs] It’s like the worst name drop in the world. But I talked to one consultant and he said that executives were coming up to him and saying, “How can I basically get rid of 99 percent of the people that I employ?” Like the target was not, “How do we automate a few jobs around the edges? How do we save some money here and here?” It was like, “Can we wipe out basically the entire payroll?”

And “Is that plausible? And how do we get there as quickly as possible?”

How big is the total RPA market right now?

It’s in the billions of dollars. I don’t know the exact figure, but the biggest companies in this are called UiPath and Automation Anywhere and there are other companies in this space, like Blue Prism. But just UiPath alone is valued at something like $35 billion and is expected to IPO later this year. So, these are large companies that are doing many billions of dollars in revenue a year, and they’re working with most of the Fortune 500 at this point.

And the actual product they sell, is it basically software that uses other software?

A lot of it is that. A lot of it is, this bot will convert between these two file formats or it’ll do sort of basic-level optical character recognition so that you can scan expense reports and import that data into Excel, or something like that. So, a lot of it is pretty simple. You know, a lot of AI researchers don’t even consider RPA AI, because so much of it is just like static, rule-based algorithms. But a lot of them are starting to layer on more AI and predictive capability and things like that.
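A “static, rule-based” step of the kind described here, converting one system’s export into the format another system ingests, can be sketched like this (hypothetical formats and field names; commercial RPA suites wrap this sort of logic in visual workflow designers):

```python
# A rule-based RPA-style step: turn a vendor's CSV export into the JSON
# records another system expects. Fixed rules only, no learning,
# which is why many AI researchers don't consider this AI.
import csv
import io
import json

csv_export = "id,vendor,total\n17,Acme,129.99\n18,Globex,54.20\n"
rows = csv.DictReader(io.StringIO(csv_export))

# Fixed mapping rules: rename fields and coerce types.
records = [
    {"invoice_id": int(r["id"]),
     "supplier": r["vendor"],
     "amount_cents": round(float(r["total"]) * 100)}
    for r in rows
]

print(json.dumps(records, indent=2))
```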

So you get some that are, you know, this plugs into your Salesforce and allows it to talk to this other program that maybe is a little bit older. Some of it is converting between one currency and another. But then there are these kind of digital workers, like you can hire — I’m making air quotes — you can “hire” a tax auditor, who you just install, it’s a robot, and theoretically that can do the work that a person whose job title was tax auditor, did before.

So let’s say I run like a mid-size manufacturing company, I’m already thinking about “Okay, on the line, there are lots of jobs that are dangerous or difficult or super repetitive, and I can run my line 24 hours a day, if I just put a robot on there.” Then I’m looking at my back office and I’m saying, “Oh, I’ve got a lot of accountants and tax lawyers, and, I don’t know, invoice preparers and all these people just doing stuff. I wanna hire Automation Anywhere, to come in and replace them.” What does that pitch look like from the RPA company?

Well, I went to a conference for Automation Anywhere. This was pre-pandemic when conferences were still a thing.

And, you know, there were executives on stage talking to an audience of corporate executives and telling them that they could save between 20 and 40 percent of their operating costs by automating jobs in their back and middle offices. And so that pitch, you know, some companies might save less than that, some companies might save more than that, but that’s the sales pitch: You can be more productive, you can free up workers to focus on higher-value tasks. Oh, and also you can shave 20 to 40 percent off your operating budget.

And so they would come in and they would assess, okay, you use Salesforce, you use an old database, you use some other program, right? I mean, at the end of the day back office work is people sitting down in front of a Windows PC and using it. So they’re like, which of these tasks are repetitive?

Yeah. Which are repetitive? What are the steps involved? There are some stories that I’ve heard of people being sort of asked to train their robot replacements.

To kind of like, walk the RPA vendor or the consultant through the steps of their jobs so that that can then be programmed into a script. So there’s a lot of that, but there’s also sort of reimagining processes and asking, “Do you really need people in three separate offices touching this piece of paper, or could it be one person and a bot?” I think part of what they market as “digital transformation” is just going in and asking people, “What outdated stuff are you using and how could we modernize that a little bit?”

One of the themes here is that maybe the entire national political and cultural conversation about automation is pointed at blue-collar work. Right? It’s a deindustrialized society, we don’t make a lot of things here. Blue-collar workers are hurting all over America. You are talking very much about white-collar workers in corporate America getting replaced by, I mean, let’s be honest, very fancy Windows scripting programs.

Yeah, that’s where the sort of excess is in the economy. I mean, if you go into a factory today, they’re very lean. Most of the jobs in factories that could be automated were automated many years ago. And especially if you go to places like China, I mean, there’re factories that have very few humans at all, it’s mostly robots. So there isn’t a lot of excess there to trim.

On the other hand, a lot of white-collar workplaces are still brimming with people in the back office who are doing these kinds of repetitive tasks. And so that’s sort of the strike zone right now. If you are doing repetitive tasks in a corporate environment, in a back office somewhere, your job is not long for this world. But now there’s also some more advanced AI that can do kind of more repetitive cognitive work.

One example I talk about in the book is there’s a guy I met, who’s making essentially production planning software. So this would be not replacing the people in the factories who are working on the assembly line, it’d be replacing their bosses who tell them, “Okay, this part needs to be made in this quantity, on this day, on this machine.” And then, you know, “Two days later we’re gonna switch to making this part and we need this many units, and they need to go to this part of the warehouse.”

All that used to be done by supervisors. And now that work can be mostly automated too. So it’s not purely the kind of entry-level data clerks that are getting automated, it’s also their bosses in some cases.
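The supervisor’s scheduling task can be approximated with a very simple heuristic, for instance earliest-deadline-first with each job going to whichever machine frees up soonest. This toy sketch uses hypothetical job data and is nowhere near a real production planner, which also juggles changeovers, materials, and shifts:

```python
# Earliest-deadline-first scheduling: sort jobs by due date, then assign
# each to the machine that becomes free soonest.
import heapq

jobs = [  # (due_day, part_name, hours_needed)
    (3, "bracket", 4), (1, "housing", 2), (2, "gear", 3),
]
machines = [(0, "M1"), (0, "M2")]  # (free_at_hour, machine_id)
heapq.heapify(machines)

schedule = []
for due, name, hours in sorted(jobs):
    free_at, machine = heapq.heappop(machines)
    schedule.append((name, machine, free_at, free_at + hours))
    heapq.heappush(machines, (free_at + hours, machine))

for name, machine, start, end in schedule:
    print(f"{name}: {machine}, hours {start}-{end}")
```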

That feels like I could map it to a pretty familiar consumer story. You’ve got a factory, it’s got some output. It’s almost like a video game, right? You’ve got a factory, it’s got some output, you need to make X, Y, and Z parts in various quantities and you need to deliver on a certain time. And to some extent, your job is to play tower defense and just fill all the bins at the right time. Or you could just play against the computer and the computer will beat you every time. That’s what that seems like. It seems very obvious that you should just let the computer do it.

Totally. And that’s the logic that a lot of executives have. And I don’t even know that that’s the wrong logic. Like I don’t think we should be preserving jobs that can be automated just to preserve jobs. The concern that I, and some other folks who watch this industry, have is that this type of automation is purely substitutive.

So in the past we’ve had automation that carried positive consequences and negative consequences. So the factory machines put some people out of their jobs, but they created many more jobs and they lowered the cost of the factories’ goods and they made it more accessible to people and so people bought more of them. And it had this kind of offsetting effect where you had some workers losing their jobs, but more jobs being created elsewhere in the economy that those people could then go do.

And the concern that the economists that I’ve talked to had, was that this kind of RPA, like replacing people in the back office, like it’s not actually that good.

It’s not the good kind of automation that actually does move the economy forward. It’s kind of this crappy, patchwork automation that purely takes out people and doesn’t give them anything else to do. And so I think on a macroeconomic level, the problem with this kind of automation is not actually how advanced it is, it’s how simple it is. And if we are worried about the sort of future of the economy and jobs, we should actually want more sophisticated AI, more sophisticated automation that could actually create sort of dynamic, new jobs for these people who are displaced, to go into.

One of the things I think about a lot is, yeah, a lot of white-collar jobs are pretty boring, they’re pretty repetitive. One of my favorite TikTok paths to go down is Microsoft Excel TikTok. And there’s just a lot of people who are bored at work who have come up with a lot of wild ways to use Excel and they make TikToks about it. And it’s great. And I highly recommend it to anyone.

But their jobs are boring. Like the reason they have fodder for their TikTok careers is because Excel is boring and they’ve made it entertaining. Those jobs, apart from the social element, are sort of unfulfilling, but at the same time, those are the people who might catch mistakes, might come up with a new way of doing something, might flag a new idea. Is that cost baked into the automation puzzle?

No. And in fact, I’ve heard some stories from companies that did a big RPA implementation, you know, took out a bunch of workers, and then had to start hiring people back because the machines were making mistakes and they weren’t catching errors and the quality suffered as a result. So I think there’s a danger of overselling the benefits of this kind of automation to these companies. I think some of the firms that are doing this, it’s a little more snake oil than real innovation.

So yeah, I think there is a danger of kind of over-automating. But I think the problem is that executives in a lot of companies, and I would say this applies largely outside of tech, this is largely in your beverage companies, hotel chains, Fortune 500 companies that maybe are running on a little bit of outdated technology.

I think the executives at those companies have come to view labor as purely a cost center. It’s like, you’re optimizing your workforce the same way that you would optimize your factory production. You’re trying to do things as efficiently as possible and I don’t think there’s a lot of appreciation for the benefit that even someone like an Excel number cruncher could have in the organization. Or maybe if you retrain that person to do something different, they could be more productive and more valuable to the organization.

But right now it’s just a numbers game. They’re trying to hit next quarter’s targets and if automating 500 jobs in the back office is the way to do that, then that’s what they’re gonna do.

You just brought up retraining. In the book you’re not so hot on retraining. You don’t think it has a lot of benefits. How does that play out?

Well, the data just isn’t there on retraining. I mean, this is the sort of go-to stock response when you ask politicians or corporate executives, what do we do about automation and AI displacing jobs? And there’s re-skilling, there’s up-skilling.

There’s telling journalists to learn to code.

Right, there’s telling journalists to learn to code. [laughing]

And like, you know, you hear these heartwarming stories about coal miners who got laid off and then went to coding bootcamp and became Python engineers, and started doing front-end software development. But those are the exception rather than the rule. There’s a lot of evidence that re-skilling programs actually don’t have a long-term positive impact on the people who go through them, in economic terms. And some of that is probably, you know, about the kind of humans who are participating in them.

If you are a coal miner, your skill set is maybe not well-matched to being a software engineer. It’s not that they’re not smart enough to do it, it’s that they frankly sometimes don’t want to do it. It’s not rewarding in the same way that the old job was. So the long-term benefit of these re-skilling programs is still something that we don’t have a lot of evidence for. And there have been some estimates of private sector re-skilling, companies retraining their own workers, that say something like only one out of every four private sector workers can be profitably retrained.

So we’re really talking about something that needs to happen at the federal level if it’s gonna happen at all. And right now there’s no momentum on that from either side of the aisle in Washington, to do any kind of federal retraining program.

The politician who comes to mind first and most clearly in this conversation is obviously Andrew Yang, who ran in the Democratic primary. He only talked about automation, basically. He’s advocated for universal basic income because he says automation is coming for all of our jobs. Is his approach more focused on the “boring bot” white-collar automation? Or is it at the manufacturing level?

No. And I think this is a place where he and I disagree. I mean, I like Andrew. I think he was right on a lot, but I think, you know, when he’s talking on the trail about automation, he’s largely talking about blue-collar automation. He talks a lot about truck drivers and manufacturing workers and even retail workers. And I’m sort of sold on this idea that those industries are actually not the issue right now; the more pressing and urgent issue is white-collar automation.

And I think something like self-driving trucks is a great example of something that I am not as worried about as he is, because absolutely there will be self-driving trucks, and absolutely some truck drivers will lose their jobs. And the same goes for self-driving cars and, you know, taxi drivers and delivery drivers. I mean, there’s going to be disruption there, but those are actually like gigantic technological achievements.

They will unlock huge new industries. I mean, you can just imagine, when there are self-driving cars, there will be self-driving hotels and restaurants and gyms, and there’ll be all kinds of jobs popping up for people who are making and selling these cars, who are repairing them, who are programming them, who are developing the hospitality around them. It’s like, there’s gonna be a lot of dynamism in that industry. So while, yes, it will crush some jobs, it will also save lives because it’ll be safer than the human drivers and it’ll open up new opportunities for people. So that’s an area where I’m actually not as pessimistic as Andrew Yang is.

What do you think about universal basic income?

I think it’s a pretty good idea. I mean, what we’re learning now with the stimulus checks is that giving people direct cash transfers is a really good idea in times when things are perilous and you need to give people a way to stay afloat. And there are other ideas that I think are wise too. I mean right now the tax rate for labor is a lot higher than for capital and for equipment. So companies are actually financially incentivized to automate more jobs because they get taxed less on money that they spend on robots versus on employing humans. So I think equalizing those tax rates could be a way to deal with this on a policy level.
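The tax wedge described above can be shown with a few lines of arithmetic. All of the figures below are hypothetical, invented purely for illustration and not actual tax law; the point is only how a gap between labor and capital tax rates tilts the decision toward automation even when the robot is no cheaper up front:

```python
# Hypothetical figures only: illustrative tax rates, not actual tax law.
def after_tax_cost(pre_tax_cost, effective_tax_rate):
    """Total cost to the employer, including taxes on that spending."""
    return pre_tax_cost * (1 + effective_tax_rate)

annual_wage = 50_000    # assumed all-in cost of one back-office worker
annual_robot = 50_000   # assumed equivalent cost of automating the role

labor_tax = 0.25        # assumed payroll/labor tax burden
capital_tax = 0.05      # assumed effective rate on equipment spending

human_total = after_tax_cost(annual_wage, labor_tax)     # 62,500
robot_total = after_tax_cost(annual_robot, capital_tax)  # 52,500

# With identical pre-tax costs, the tax wedge alone makes the robot
# cheaper; equalizing the two rates would remove that incentive.
print(f"Human: ${human_total:,.0f}  Robot: ${robot_total:,.0f}")
```

With equal pre-tax costs, the entire gap comes from the assumed difference in tax rates, which is the incentive the proposed equalization would eliminate.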

But ultimately I think we have a long way to go on any of this stuff. There aren’t really a lot of politicians agitating for this except for Andrew Yang. So I think my goal is not to give people perfect policy recommendations. I’m assuming some sort of stasis on the government level, and I’m trying to convince people that it’s in their interest to take this into their own hands and come up with their own plans. Because I don’t think the cavalry are coming.

One of the things that I have talked about, on maybe every episode of the show is how trends have accelerated in the pandemic. And obviously we’re moving to remote work, we’re out of offices. Even maybe three years ago, I was at a Microsoft event and I saw Satya Nadella, CEO of Microsoft. And he was talking about all the things they were doing, and at the end he’s like, “And I just heard about this robotic process automation. It sounds amazing.”

And now it’s like, oh, everyone’s doing it. Microsoft is in that business. He went from, “I thought it was interesting” to, “If you’re writing robots to use Excel, we’re gonna write the robots for you.” That is a huge business. That’s a great business for Microsoft to be in. Google’s doing it. You mentioned the other two companies that are already big. How much has the pandemic accelerated this curve?

A huge amount. I mean, I talked to a bunch of consultants who get these calls to come in and automate, you know, the call center or the finance department at big companies. And they said, there are basically two reasons why things have accelerated. One is that, I think, the pandemic has created a lot more demand for certain types of services and goods and created some supply chain issues. And so companies actually need to automate parts of their operations just to keep up with the demand.

But they also mention that there’s been this kind of political cover that the pandemic gave the executives, because a lot of this technology, the RPA technology, is not new. Like this has been around. It’s not sophisticated, it’s not mind-blowing in its complexity. But it’s fairly obviously displacing workers, and so a lot of executives have resisted it because, you know, it doesn’t save them that much money, it’s not that much more productive or accurate than the humans doing those jobs, and if they implement RPA in normal times, workers get freaked out. There’s a backlash, maybe the mayor of their city calls and asks them why they’re automating jobs. It’s a political headache in the instances when it happens publicly.

But during COVID there’s been no real backlash to that. In fact, customers want automation because it lets them get goods and services without coming into contact with humans who might potentially be sick. So it kind of freed up executives to do the kind of RPA automation that they had been wanting to do and have been capable of doing for years. And so the consultants I talked to said, “Yeah, we’re fielding calls from a lot of people who are saying, ‘Yeah, let’s do that automation project we talked about a couple years ago. Now is the right time.’”

You’re gonna come into our back office, while everyone’s out of the office, and figure out which accountants we don’t need anymore.

Exactly, and you know, there’s some precedent for this. I mean, economic disruption is often when big changes happen in the workplace. You’ve already seen millions of jobs disappearing during the pandemic, and some of those jobs might not come back. It might just be that these companies are able to operate with many fewer people.

So you’ve called them boring bots. You say the technology is not so sophisticated. The industry calls it RPA. Like, there’s a lot of pressure on making this seem not the most technologically sophisticated or exciting thing. It comes with a lot of change, but I’m wondering, are there any stories of RPA going horribly wrong?

I’m just imagining like, I think the most consumer-facing automation is, you call the customer support line and you go through the phone tree. It makes all the sense in the world on paper: if all I need is the balance of my credit card, I should just press 5 and a robot will read it to me, but like I just want to talk to a person every time. Because that phone tree never has the options I want or it’s always confused or something is wrong. There has to be a similar story in the back office where the accounting software went completely sideways and no one caught it, right?

Yeah, I mean, there are several stories like that in the book. There’s a trading firm called Knight Capital that had an algorithm go haywire and it lost millions of dollars in milliseconds. There was actually just a story in the financial markets. I forget which one it was, but one of the big banks accidentally wired hundreds of millions of dollars to someone else and couldn’t get it back. So they just lost that. I’m sure that automation had some role in that, but that might have been a human error.

But there are also lower-level instances of this going haywire. One of the examples I talk about in the book is this guy Mike Fowler, who is an Australian entrepreneur who came up with a way to automate T-shirt design. So, I don’t know if you remember like five or six years ago, but there were all these auto-generated T-shirts on Facebook that were advertised. So, you know, it’d be like, “Kiss me, I’m a tech blogger who loves punk rock.” You know, and those would just be like Mad Libs, you know?

Hang on, I gotta buy a T-shirt.

[Laughing] Or like, “My other car is a flying bike,” or whatever. You know, it was just the weirdest, most nonsensical combinations of demographic targeting IDs, like plugged into T-shirt designs and uploaded to the internet. And Mike Fowler was one of the people who was making that, and he pioneered this algorithm that would take, you know, sort of catchphrases, and plug words into them and then automatically generate the designs and list the SKUs on Amazon and make the ads for Facebook.

And so he made a lot of money doing this, and then one day it went totally wrong because he hadn’t cleaned up the word bank that this algorithm drew from. So there were people noticing shirts for sale on Amazon that were saying things like “Keep calm and hit her,” or, “Keep calm and rape a lot.” Like just words that he had forgotten to clean out of the database, and so as a result, his store got taken down. He lost all his business. He had to change jobs, like it was a traumatic event for him. And that’s a colorful example but there are, I’m sure, lots of more mundane examples of this happening at places that have implemented RPA.
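The pipeline in this anecdote can be sketched in a few lines. Everything below is a hypothetical reconstruction (the templates, word bank, and blocklist are all invented for illustration); what it shows is how small the missing safeguard was:

```python
# Hypothetical sketch of a Mad Libs-style slogan generator like the one
# described above. The real failure was the absence of the blocklist step.
import itertools
import string

TEMPLATES = [
    "Keep calm and {verb} a {noun}",
    "My other car is a {noun}",
]
WORD_BANK = {
    "verb": ["hug", "love", "hit"],          # "hit" should never ship
    "noun": ["flying bike", "tech blogger"],
}
BLOCKLIST = {"hit"}  # the cleanup step the real pipeline was missing

def placeholders(template):
    """Names of the {fields} a template expects, in order."""
    return [f for _, f, _, _ in string.Formatter().parse(template) if f]

def generate_slogans(templates, word_bank, blocklist):
    slogans = []
    for template in templates:
        fields = placeholders(template)
        for combo in itertools.product(*(word_bank[f] for f in fields)):
            if blocklist & set(combo):  # drop any banned word outright
                continue
            slogans.append(template.format(**dict(zip(fields, combo))))
    return slogans

for slogan in generate_slogans(TEMPLATES, WORD_BANK, BLOCKLIST):
    print(slogan)
```

Delete the two blocklist lines and every banned combination ships straight to the storefront, which is essentially what happened.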

Is that cost baked in? I’m imagining, you know, the mid-sized bottling firm in the Midwest and the slick top five consulting companies selling RPA, “Everything’s gonna be great.” Then they leave. The software is going sideways. No one really knows how to use it. Like, is that all baked into the cost? Is that just, the consulting company gets to come back in and charge you more money to fix it?

I think that’s how it’s going a lot of the time. The consulting companies end up sort of playing a kind of oversight role with the bots when they malfunction. Because there just isn’t a whole lot of tech expertise in a lot of these companies, and certainly not for things like this. So, yeah, the consulting companies are making money hand over fist on this. There’s no question about it. And this has been a transformative line of business for them because it’s actually like, it’s not that hard, frankly.

And a lot of the stuff is off the shelf. You can go into a company, you know, maybe they haven’t updated their servers in 30 years. And so you’re arriving with this thing that they think is very fancy, but is actually just like a couple lines of code that plug into the Oracle database. So, it makes them look like wizards and it doesn’t require a whole lot of new technology and innovation.
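To make the “couple lines of code” point concrete, here is a hypothetical sketch of such a bot. Python’s built-in sqlite3 stands in for the decades-old Oracle database in the anecdote, and the table and column names are invented; the core of a “boring bot” is often little more than query, export, repeat:

```python
# A deliberately boring bot: query a database, write a report, repeat.
# sqlite3 is a stand-in for the legacy database; the schema is invented.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (id INTEGER, vendor TEXT, amount REAL, paid INTEGER)"
)
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?, ?)",
    [(1, "Acme", 1200.0, 0), (2, "Globex", 800.0, 1), (3, "Initech", 450.0, 0)],
)

# The "automation": pull every unpaid invoice and dump it to the CSV a
# clerk previously assembled by hand each morning.
rows = conn.execute(
    "SELECT id, vendor, amount FROM invoices WHERE paid = 0"
).fetchall()
with open("unpaid_invoices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "vendor", "amount"])
    writer.writerows(rows)

print(f"Flagged {len(rows)} unpaid invoices")  # Flagged 2 unpaid invoices
```

Nothing here is machine learning, or even new; it is the kind of plumbing that can nonetheless replace a daily human task.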

One of the other things you cover for the Times is misinformation, the dark side of the internet. You’re talking a lot about white-collar workers, accountants, back office people, they’re often men. It seems like there’s a real apocalypse coming where a lot of sort of mid-level white dudes in seemingly safe corporate jobs get pushed out of the workplace. Literally your podcast is called Rabbit Hole. Fall down the rabbit hole of YouTube disinformation. Like, I can just add all that up in my head, but it’s not too rigorous. Do you see that connection?

I do. There is no one stereotype of a person who gets radicalized on the internet. But a lot of people that I’ve run into in reporting on extremist communities have a fairly similar origin story. Which is like, “I graduated from college or community college. I had a lot of debt. There wasn’t a lot of opportunity for me. And you know, I needed a social life and so this was sort of the way that I found status and meaning and friends and a purpose, was by joining an extremist community.”

I don’t know that the link is sort of causal but I think it’s probably correlated. There’s a reason so many people are out of the labor force, the participation rate is quite low, historically. And so there are just a lot of people who are sitting at home looking for things to do, things to entertain them, things to keep their attention, a sort of mission to plug into. And so maybe for some people that’s an extremist community.

Yeah, I just, for better or worse, I’m thinking about Fight Club, right? Which, you know, it’s a movie that has been framed and reframed many times over the years, but at the heart of it there’s a guy with a really, really boring white-collar job that he hates, and he finds a community that is outside of that. And then they blow up some credit agencies.

I’m not saying that’s happening here, but the population of disaffected people being pushed out of the workforce has second order effects, some of which can be positive, but many of which are negative. And that doesn’t seem to be factored into the RPA equation, either at the consultancy level, certainly not, and definitely not at the political level.

This is the big error that I think has resulted from giving this whole conversation about automation and AI over to economists and technologists. Because those communities in particular look at things in the long run and in the aggregate. So they’ll say, “Yeah, the industrial revolution wasn’t great in all ways and there was some child labor and, you know, some factories with gross safety violations. But in the long run, people’s lives improved. And you know, we had more time to spend with our families and we weren’t working back-breaking jobs on the farms.”

And I think that when I went back and started researching kind of contemporaneous accounts of these past technological shifts, what really sticks out is how much this sucks for people. Like, it’s not a happy experience for a lot of them. I mean, the industrial revolution was horrible for workers. There were these squalid boarding houses where the factory workers would be put and they would be paid barely subsistence wages and they would basically be tortured at work. They would all get sick and it was Dickensian and horrible.

And so, I think if you had gone to those people and said, “Well, you know, on the plus side, 30 years from now GDP will have risen 20 percent.” They’re gonna be like, “Screw you. Like, I don’t like this. This is not making a material difference in my life for the better, in fact, it’s made it much worse in my immediate circumstances.” And so I think, yes, it’s important to look at what happens in the long run, in the aggregate with new technology. But it’s also important to just listen to the people who are telling us what it’s doing in their lives right now.

I want to talk about the second half of your book. So the first chunk of your book, Kevin, is very much, “Here’s the conditions of automation, pay attention to this. It’s happening way faster than you think.” The second half of your book is like an instruction manual to you, as an individual, how to dance around the wave of change that’s coming. Walk me through that.

Yeah, this is the happier portion of the book.

Yeah, I save the smallest chunk of time in the podcast for the happy part.

I think it’s important to give people the good and the bad news. The bad news is that you know automation’s coming and it’s gonna displace people. But the good news is that there’s something you can do about it, and it doesn’t require becoming a coder, it doesn’t require going back to school for a STEM degree. It doesn’t involve any sort of productivity hacking.

What I found in talking to people who work on AI is that it’s actually just about being more human. The things that we can do to protect ourselves, I have nine of them in the book, but most of them revolve around this idea that we are going to need to move toward jobs and activities that can only be done by humans. And that just makes sense, right? When the robots come into your workplace, the stuff that’s left is the stuff that the robots can’t do. So, I was trying to figure out, what can’t the robots do? What is only done by humans right now, and what is likely to only be done by humans into the future?

And so those are by definition the very human things that I think we’ve been steering people away from, unfortunately, for years. Telling them, “Don’t major in the humanities.” You know, I think Vinod Khosla and Marc Andreessen have both gone on some form of tirade about how the liberal arts are worthless and everyone should major in engineering and anything else is a waste of your time.

But if you look at just what the AI researchers are doing, they’re not sending their kids to coding bootcamps. They’re sending their kids to the Waldorf School, where they can learn to dance and play and be creative and express themselves, and they’re not idiots. Like, they know that the skills that are gonna be valuable in the future are those softer human skills.

Give me an example of some of those softer human skills that apply broadly across the white-collar workforce?

I think the big one that people talk about is empathy and I think that is a key part of it. I mean, a lot of the jobs of the future will involve relating to other people. They will be interpersonal jobs, nursing, therapy, social work, that kind of job. But I think that the discussion often stops there. I think there’s a lot of pieces of empathy. One of them is sort of active listening, being able to focus. I mean that’s a really key piece of the puzzle here. You have to be able to control and direct your own attention. Which is why there’s an entire chapter in the book that’s about how to have a better relationship with your phone and the other screen-based devices in your life. I think one prerequisite for being a human is being able to sort of control what you think about.

Another skill I talk about in the book is the ability to kind of read a room. This is something that I got from Jed Kolko, who’s an economist. He is gay, and he talks about the experience of growing up as an LGBTQ person and having to kind of fit in, to read people’s emotional states to figure out, “How safe am I here? What kind of code do I have to switch into?” And obviously it’s not great that people have to do that, I wish they didn’t, but he said, basically, that skill of being able to quickly take the emotional temperature of a room is a really important skill for the future.

And that doesn’t show up in any kind of skills inventory, but that’s gonna be very valuable for the people who are good at doing that. There are lots of others I could go into, but they all kind of boil down to the basic human skills that we nurture in little kids. Sharing, playing well with others, you know? Being a good partner, being a good collaborator, but that we often let sort of atrophy as people get older.

You have a little vignette in your book of the guy who does your taxes, and how he effectively competes with — I’m sure I’ve even read these ads on these podcasts, like Quicken or QuickBooks, you just like dump the data on them directly from your horrible employee management software at work and then some taxes are generated and they cost $50. But you actually use a person and your vignette is like why his job still exists and how he saw competing with Quicken.

My accountant is this guy named Russ Garafalo, and he is a former stand-up comedian. And one of the things I was interested in when I was looking at this book is finding the survival stories, like who are the people who should have been automated out of their jobs but weren’t, and why? And Russ is a classic example of that. Tax preparation is largely an automated business now. Most people use TurboTax or some form of software to do their taxes. And yet, Russ is there. His firm’s growing. He’s doing well. So, I wanted to figure out why that is.

And it’s because he’s a former stand-up comedian. He’s really funny. It’s really interesting to talk to him, and he’s really good at relating to people in a thoughtful and interesting way, and he hires other creative people and pays for them all to take improv classes because he thinks that those skills will make them better accountants. And he’s right, like it is genuinely an enjoyable experience.

I have to call him soon because taxes are due in less than a month, and I’m looking forward to that. That’s not gonna be a chore for me because I actually enjoy talking to him. So, the sort of human side of any profession is just getting more and more valuable, as automation takes over more and more of the actual functional work of doing taxes.

How you’re able to differentiate yourself from TurboTax as an accountant, if you’re Russ, is by giving people an experience that they want, and not necessarily being the most eagle-eyed tax preparer. It’s about being the best human.

One of the tropes of all coverage of Gen Z or millennials, or whatever, is, we now pay for experiences over products, right? We spend more money on vacations. I think every generation does this, but these are the tropes of covering particularly people in their 20s because their dollars shift the economy very fast.

But the idea that we pay for experiences over products, that we pay for interactions over, you know, a fancier car, is that what you’re getting at? Is it, at the end of the day, your accountant is still using Excel, and you could have TurboTax do that for free, essentially, but you want to talk to a person who’s funny, so you’re willing to pay a premium for that?

Yeah. I think that’s the lesson of the past little while here, is that experiences are really valuable for people. And so it’s not just gonna be that people are paying for experiences in travel and retail and hospitality. They’re gonna be paying for experiences when they hire a lawyer or go to a doctor or engage a marketing firm. They’re not going to be paying for, necessarily, efficiency and expertise. They’re gonna be paying to feel something. And that’s one of the sort of rubrics that I’ve used to figure out which jobs are gonna be more stable as we get more and more automated as a society.

The jobs that involve making things for people are going to become less and less valuable, and a smaller and smaller piece of the economy. And the bigger piece, the growing piece, is gonna be jobs that involve making people feel things. So that’s not an original idea, I’ve gotten that from a number of AI researchers because they point out, this is already happening. You can already see this happening, this kind of artisanal boom in goods and services that sort of have more of a human touch to them than something that’s mass produced in a factory by robots.

Isn’t the counterexample of this already that customer service at big companies is horrible? Like I use Google every day, I use all of their products and services every day. If something goes wrong with Google, my only real recourse is to Google it, which has always seemed Kafkaesque to me.

That I’m turning to this company that has a broken product to figure out how to fix this broken product. And there’s no one to call. If I have to call AT&T — it’s funny to me that you feel more excited about calling your tax professional than I feel about calling AT&T, right? They should be on the same spectrum. But I know that’s gonna be a negative experience.

If that is an easy way for AT&T to boost its customer loyalty, to make people feel better about it, why wouldn’t they spend that cost if it’s so obvious?

Well, it’s not obvious right now because I think a lot of companies haven’t gotten very good at that. I mean they’re so involved in the mindset that, you know, customer service is a cost center that should be made as small as possible. But you see this happening on the edges right now. I mean, let’s take Google as an example. The only new successful email product — successful being defined as like, a lot of people I know are very excited about it — of the past 10 years is this app Superhuman, which actually is built on Gmail. It’s like a high-end luxury subscription email product that’s sort of a skin for Gmail, but that includes all this extra functionality.

And one of the key pieces of value that you get when you subscribe to Superhuman is a person, like a rep from the company, does a Zoom with you to walk you through how to use the email. It’s like a very bespoke, concierge model of something that, you know, is free when you just get it on the open market of Gmail.

But people are willing to pay for that extra touch, that extra part that involves relating to humans and also allows them to get what they see as a better product. So I think that model is transferring to a lot of industries, where you’ll have the kind of mass experience that is purely machine-driven and there are very few humans involved in it, and then there’ll be kind of this luxury skin on top of it that involves much more human contact and connection.

I love the idea that it’s all software at the bottom and it’s just, you get to pay for various levels of people to help you use it in empathetic ways.

Yeah. I mean, we might all have a team of IT tech coaches. One of the fascinating case studies I came across in the course of writing this book was Best Buy. Best Buy was supposed to die. Amazon was supposed to kill Best Buy many years ago because they sold all the same stuff. The big box was going away, Best Buy was largely dependent on new DVD and video game releases for profits, which went away. So I was interested in how they didn’t die, what they did. And it turns out that they moved to a very high-touch customer service model.

They started this in-home adviser program, where, for a fee, a Best Buy rep would come to your house, take a look at your stereo system and your speakers, and tell you which upgrade you needed, or they would sort of be there with you as kind of a personal tech consultant, and then they would sell you stuff on the back end. But the human connection was actually what drove the renaissance of Best Buy.

It was not that they competed with Amazon on price or logistics (they did do those things), but the thing that set them apart was really that, unlike Amazon, where everything is done by robots and low-paid human pickers in warehouses, they would actually send someone to your house who would talk to you, walk you through it, and answer your questions.

Several years ago, I talked to the CEO of a company called Asurion, which is a tech support company in Nashville. They sell phone insurance and all sorts of stuff, but their fastest growing line of business is they just sell a subscription to tech support. And they just know every problem that you might have with Bluetooth on your iPhone. And you can call them and they’ll just be friendly and help you.

And people need it. And there are just 10,000 people in Nashville who are helping people set up their Rokus every day. And that to me feels like a huge miss, right? Silicon Valley, particularly consumer products in Silicon Valley, prides itself on being easy to use. But an entire company like Asurion has built a business, and Best Buy has built a business, around how hard that stuff actually is to use. And you can see that just bleeding into the enterprise space.

Totally. And I think companies are starting to realize this. I mean, one example I’ve been looking at recently is a company like Airbnb, which for many years had a very limited customer service operation. And then they started getting a lot of people who were angry at them.

The pandemic hit and the hosts were having their stuff canceled, and people were showing up to residences that looked nothing like the photos. There was a lot of bad juju around that product and the customer service. And so they essentially de-automated that process. They hired a lot of humans and trained them in empathetic communication, and so now they have many, many customer service people that you can actually call and talk to. So, I think when businesses get into trouble with the automated model, that’s usually when they start de-automating and bringing in humans. Because there’s a lot that machines can’t do.

We’ve just got a couple minutes left. Your book, the headline is “Nine Rules.” What are the nine rules?

Well, I have to save something for the premium tier “Verge Plus” subscribers.

[Laughs] Yeah.

I’ll list them and we can leave some of the explanation to the people who actually buy the book.


Come on, man! I’ve gotta, there’s gotta be a curiosity gap.

Give them eight rules, but the ninth will surprise you.

Oh yeah, the ninth one is crazy, so I’m just gonna read eight. No, okay. Let’s go. Be surprising, social, and scarce. Resist machine drift. Demote your devices. Leave handprints. Don’t be an end point. Treat AI like a chimp army. Build big nets and small webs. Learn machine-age humanities. And number nine I’m not gonna reveal.

Amazing. All of those are curiosity gaps in and of themselves. I don’t know that you gave much away, but it’s a great book. I thoroughly enjoyed reading it. I am just so excited that I got to talk to somebody for almost an hour about RPA.


After years.

I have been waiting for this my entire life.

It’s like this dark cloud of consulting on the horizon. It’s just like sweeping over America, and I’m like, “I can see it.” And no one wants to see it except for you. So that was great. Thanks a lot.

Well, anytime you want to call up and just, you know, chat on a Sunday about RPA, you have my number.

And if I can’t get to sleep at night, I’ll give you a call.


Tremendous. Thanks a lot, Kevin.



Realtime Robotics raises $31.4M to help industrial robots plan their moves


Realtime Robotics, a company developing technology that enables robots to alter their motions in dynamic, fast-moving environments, has raised $31.4 million in a series A round of funding.

Founded out of Boston in 2016, Realtime Robotics said that it has developed a processor capable of creating “collision-free motion plans” in milliseconds, helping industrial robots and other autonomous vehicles plan their every move and alter course if needed.


While Realtime Robotics caters to structured environments where object locations and positions are known, it’s unstructured environments and unpredictable workspaces where things get particularly interesting.

In situations where other robots, moving machinery, static objects, and humans coexist, this can prove challenging for robots tasked with a particular job — if a robotic arm can move in any number of directions while simultaneously rolling along a factory floor, how will it react to a forklift truck that shoots out of nowhere? Or how can it safely collaborate with other robots in the same space without the machines banging into each other?

That is what Realtime Robotics is setting out to solve: enabling companies to automatically generate a “network of potential motion plans” that adapts to changing environments instantly, removing the need for engineers to manually configure every possible variation themselves.
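The idea of a precomputed network of motion plans can be illustrated with a toy example: plan candidate routes over a fixed roadmap ahead of time, then at runtime mask out whatever the latest sensor data says is occupied and search only the remaining free space. This is a deliberately simplified sketch on a 2-D grid; the grid world, function names, and BFS search are illustrative assumptions, not a description of Realtime Robotics’ actual processor.

```python
# Toy "precomputed roadmap + runtime collision masking" sketch.
# Offline: build a graph of robot configurations (here, 4-connected grid cells).
# Online: re-plan instantly by searching the graph minus currently blocked cells.
from collections import deque

def build_roadmap(width, height):
    """Precompute the adjacency graph of all configurations (done once, offline)."""
    nodes = {(x, y) for x in range(width) for y in range(height)}
    edges = {n: [] for n in nodes}
    for (x, y) in nodes:
        for nbr in ((x + 1, y), (x, y + 1)):  # link right and up neighbors
            if nbr in nodes:
                edges[(x, y)].append(nbr)
                edges[nbr].append((x, y))
    return edges

def plan(edges, start, goal, blocked):
    """BFS over the roadmap, skipping nodes the latest sensor sweep marks blocked."""
    if start in blocked or goal in blocked:
        return None
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:  # reconstruct the shortest collision-free path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nbr in edges[node]:
            if nbr not in came_from and nbr not in blocked:
                came_from[nbr] = node
                frontier.append(nbr)
    return None  # no collision-free route exists right now

edges = build_roadmap(5, 5)
clear = plan(edges, (0, 0), (4, 4), blocked=set())                 # open floor
detour = plan(edges, (0, 0), (4, 4), {(2, y) for y in range(4)})   # partial wall
```

Because the roadmap itself never changes, the only per-cycle work is masking blocked nodes and re-running the search, which is what makes millisecond-scale re-planning plausible in principle.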

Above: Realtime Robotics: Dynamic environments and adaptive motion planning

There is more than enough evidence of the profound impact that AI and automation are already having on assembly lines, ecommerce warehouses, and other verticals. The industrial automation market was pegged as a $164 billion industry in 2020, a figure that’s forecast to nearly double within six years. Realtime Robotics and its ilk are striving to bring human-level perception and reactions to machines that operate in dynamic or hazardous environments — anticipating their next move before they need to make it.

Prior to this round, Realtime Robotics had raised around $16 million. With another $31.4 million in the bank, the company said it plans to expand into warehouse logistics automation while continuing its existing focus on the automotive industry, which has attracted companies such as Ford to its roster of early partners. Hyundai, Toyota, and Mitsubishi have all previously invested in Realtime Robotics as well.

Investors in Realtime Robotics’ series A round included newcomers HAHN Automation, SAIC Capital Management, Soundproof Ventures, and Heroic Ventures, alongside existing backers Toyota AI Ventures, Sparx Asset Management, Omron Ventures, Scrum Ventures, and Duke Angels.





Pope Francis urges followers to pray that AI and robots ‘always serve mankind’

Pope Francis has asked believers around the world to pray that robots and artificial intelligence “always serve mankind.”

The message is one of the pope’s monthly prayer intentions — regular missives shared on YouTube that are intended to help Catholics “deepen their daily prayer” by focusing on particular topics or events. In August, the pope urged prayer for “the maritime world”; in April, the topic was freedom from addiction. Now, in November, it’s AI and robots.

Although the message sounds similar to warnings issued by tech notables like Elon Musk (the Tesla CEO famously compared work on artificial intelligence to “summoning the demon”), the pope’s focus is more prosaic. He doesn’t seem to be worrying about the sort of exotic doomsday scenario where a superintelligent AI turns the world into paperclips, but more about how the tech could exacerbate existing inequalities here and now.

(We should note also that the call to prayer came out earlier this month, but we only saw it recently via the Import AI newsletter because of the… events that have taken up so much of everyone’s time, energy, and general mental acuity in recent weeks.)

In his message, the pope said AI was “at the heart of the epochal change we are experiencing” and that robotics had the power to change the world for the better. But this would only be the case if these forces are harnessed correctly, he said. “Indeed, if technological progress increases inequalities, it is not true progress. Future advances should be orientated towards respecting the dignity of the person.”

Perhaps surprisingly, this isn’t new territory for the pope. Earlier this year, the Vatican, along with Microsoft and IBM, endorsed the “Rome Call for AI Ethics” — a policy document containing six general principles that guide the deployment of artificial intelligence. These include transparency, inclusion, impartiality, and reliability, all sensible attributes when it comes to deploying algorithms.

Although the pope didn’t touch on any particular examples in his video, it’s easy to think of ways that AI is entrenching or increasing divisions in society. Examples include biased facial recognition systems that lead to false arrests and algorithmically allotted exam results that replicate existing inequalities between students. In other words: regardless of whether you think prayer is the appropriate course of action, the pope certainly has a point.
