Categories
AI

Gitamini is a cute, compact, cargo-carrying robot that will follow you around like a dog

Piaggio Fast Forward, a subsidiary of storied Italian automotive firm Piaggio, has launched its second robot, a compact version of its cargo-carrying bot Gita named Gitamini.

The form and function of Gitamini remain the same as those of the full-sized Gita (the name is Italian for a small trip or outing). The robot consists of two large wheels, a central trunk, and a machine vision system that it uses to identify and follow its owner. Gitamini weighs 28 pounds and can carry up to 20 pounds in its interior for 21 miles. That makes for an interesting comparison with Gita, which carries more (40 pounds) but only for 12 miles.

Gitamini uses an array of cameras and sensors, including radar (not available for the original Gita), to navigate and follow its user. To activate this follow mode, you simply stand in front of the Gitamini and tap a pairing button. The robot will then lock on to you using vision only (no GPS or Bluetooth are utilized) and will follow you at speeds of up to 6mph.

The original Gita (left) and new Gitamini (right).
Image: Piaggio Fast Forward

The robot’s trunk can be locked and its follow mode disabled, but there are no active theft mitigation features. When asked about this, Piaggio Fast Forward’s CEO Greg Lynn told The Verge that it was “unlikely someone could get away with walking away with it unnoticed” as it’s such a noticeable object. “A stolen Gita isn’t of much use to anyone as it uses a secure connection to a phone to be unlocked, updated, and used,” says Lynn. “We have yet to learn of a Gita being stolen or broken into while being used or when parked.”

The Gita has always been a bit of an odd product. It certainly looks fantastic, and videos suggest it works more or less as advertised (though it’s noisier than you might expect). But it’s not clear exactly who’s going to spend thousands of dollars on something that only carries a few bags and is stymied by steps and stairs. Gitamini doesn’t change any of these basic annoyances, though it is at least a little cheaper — it costs $1,850 (and will be available to buy from October 15th at mygita.com) while the launch sees the price of the original Gita drop to $2,950.

When we asked CEO Greg Lynn about the robot, he declined to share any sales figures with us but said there were Gita robots operating in “half the states in the US […] with a focus on the Southern belt where outdoor weather is more friendly year-round.”

“Most of the consumer Gitas are being used to replace car trips for neighborhood errands in a variety of communities, and they are used outdoors for round trips of a mile or more,” said Lynn. Though, he noted that the company had some business customers, too. There are currently Gitas in eight airports in the US (including JFK and LAX) and a number more in planned communities, like Water Street Tampa in Florida and Ontario Ranch in California.

Repost: Original Source and Author Link

Categories
AI

Nvidia releases robot toolbox to deepen support of AI-powered robotics in ROS

Nvidia announced today that Isaac, its developer toolbox for supporting AI-powered robotics, will deepen support of the Robot Operating System (ROS).

The announcement is being made this morning at ROS World 2021, a conference for developers, engineers, and hobbyists who work on ROS, a popular open-source framework that helps developers build and reuse code used for robotics applications.

Nvidia, which is trying to assert its lead as a supplier of processors for AI applications, announced a host of “performance perception” technologies that would be part of what it will now call Isaac ROS. This includes computer vision and AI/ML functionality in ROS-based applications to support things like autonomous robots.

The move comes as Amazon’s robotic platform, RoboMaker, has also moved quickly to support ROS.

ROS World 2021 is the ninth annual developers’ conference — modeled after PyCon and BoostCon — for developers of all levels to learn from and network with the ROS community.

Nvidia said its offerings are intended to accelerate and improve the standards of product development and product performance.

Isaac ROS GEM for optimized real-time stereo visual odometry

The purpose of the newly launched Isaac ROS GEM for Stereo Visual Odometry is to help autonomous machines keep track of where a camera is relative to its initial position. Seen from a broader perspective, it helps these machines track where they are relative to the larger environment.

With this GEM, ROS developers get a stereo camera visual odometry solution that runs in real time, processing 720p HD footage at more than 60fps on a Jetson AGX Xavier.
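For readers unfamiliar with how such a component plugs into ROS, a visual odometry node typically subscribes to synchronized left and right camera topics and publishes pose estimates. Below is a minimal rclpy sketch of that interface — not the actual Isaac ROS GEM API; the topic names and the motion-estimation step are placeholder assumptions.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry
from message_filters import Subscriber, ApproximateTimeSynchronizer

class StereoVONode(Node):
    """Sketch of a stereo visual odometry node's ROS 2 interface."""

    def __init__(self):
        super().__init__('stereo_vo')
        # Topic names are assumptions; real camera drivers and GEMs
        # define their own.
        left = Subscriber(self, Image, '/stereo/left/image_raw')
        right = Subscriber(self, Image, '/stereo/right/image_raw')
        self.sync = ApproximateTimeSynchronizer([left, right],
                                                queue_size=10, slop=0.01)
        self.sync.registerCallback(self.on_stereo_pair)
        self.odom_pub = self.create_publisher(Odometry, '/visual_odometry', 10)

    def on_stereo_pair(self, left_img, right_img):
        odom = Odometry()
        odom.header.stamp = left_img.header.stamp
        odom.header.frame_id = 'odom'
        # estimate_motion() below is a hypothetical stand-in for the
        # GPU-accelerated feature tracking and pose estimation a VO
        # pipeline performs on each synchronized pair.
        # odom.pose.pose = estimate_motion(left_img, right_img)
        self.odom_pub.publish(odom)

def main():
    rclpy.init()
    rclpy.spin(StereoVONode())

if __name__ == '__main__':
    main()
```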

ROS developers can now access all Nvidia NGC DNN inference models

With the DNN Inference GEM, ROS developers can leverage any of Nvidia’s inference models available on NGC or supply their own DNN. The optimized models are deployed with TensorRT, Nvidia’s inference optimizer and runtime, or served through the Triton Inference Server. The GEM is also compatible with U-Net and DOPE: U-Net generates semantic segmentation masks from images, while DOPE estimates three-dimensional poses for all detected objects. For developers keen to integrate performant AI inference into a ROS application, the DNN Inference GEM is one of the fastest routes available.
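As a rough illustration of the serving half of that pipeline, here is a minimal sketch of querying a Triton Inference Server with Nvidia’s `tritonclient` Python library. The model name, tensor names, and input shape are assumptions; match them to whatever model you actually deploy.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server running on its default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "unet_segmentation", "input", "output", and the 720p shape are placeholder
# assumptions; match them to your deployed model's configuration.
image = np.random.rand(1, 3, 720, 1280).astype(np.float32)
infer_input = httpclient.InferInput("input", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

response = client.infer(model_name="unet_segmentation", inputs=[infer_input])
mask = response.as_numpy("output")  # e.g., a per-pixel segmentation mask
print(mask.shape)
```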

Isaac Sim GA release for AI-powered robotics

Scheduled for November 2021, the GA release of Isaac Sim will bring UI and performance improvements that make simulation-building much faster. The ROS bridge will improve, as will the developer experience, with an increased number of ROS samples. The new release will reduce memory usage and startup times and improve occupancy map generation. New environment variants include large warehouses, offices, and hospitals, and new Python building blocks can interface with robots, objects, and environments.

Synthetic data generation workflow

Training the AI models that run an autonomous robot’s perception stack requires large and diverse volumes of data, and the robot’s safety and quality depend on how well those models are trained. The new synthetic data workflow that comes with Isaac Sim helps developers build production-quality datasets that address those safety and quality concerns.

With this data generation workflow, developers get extensive control. They can control the stochastic distribution of the objects in the scene, the scene itself, the lighting, the synthetic sensors, and the inclusion of crucial corner cases in the datasets. The workflow also versions and logs the information needed to reproduce datasets exactly, for auditing and safety.
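To make the idea concrete, here is a toy sketch of such a randomization-and-versioning loop in plain Python. The scene and rendering calls are hypothetical stand-ins, not Isaac Sim’s API; the point is that recording the seed and per-frame parameters makes every frame reproducible.

```python
import json
import random

def generate_dataset(num_frames, seed=0):
    """Hypothetical domain-randomization loop; the scene, render, and
    label-saving steps stand in for a simulator's real API."""
    rng = random.Random(seed)
    manifest = []
    for i in range(num_frames):
        params = {
            "lighting_intensity": rng.uniform(200.0, 1500.0),
            "object_count": rng.randint(3, 20),
            "camera_height_m": rng.uniform(0.5, 3.0),
            "corner_case": rng.random() < 0.05,  # inject rare hard cases
        }
        # scene.apply(params); image, labels = render_frame(); save(i, ...)
        manifest.append({"frame": i, "params": params})
    # Versioning the seed and per-frame parameters makes the dataset
    # exactly reproducible for auditing and safety reviews.
    with open("dataset_manifest.json", "w") as f:
        json.dump({"seed": seed, "frames": manifest}, f, indent=2)

generate_dataset(100)
```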


Repost: Original Source and Author Link

Categories
Tech News

Soft burrowing robot can explore the subterranean world

Scientists working with soft robots have created devices made for exploring all manner of environments, from the air to the ocean. Soft robots can also operate on dry land. Thanks to researchers from the University of California, Santa Barbara, we now have a soft robot capable of burrowing under the ground. Designers of the robot were inspired by plants and animals that evolved to navigate subterranean spaces.

The team developed fast, controllable soft robots that can burrow through sand, enabling new applications for fast, precise, and minimally invasive movement underground. The team believes its work lays the mechanical foundations for a new type of robot. Researcher Nicholas Naclerio says the biggest challenge with moving underground is the forces involved: to move through the ground, the robot has to push the soil, sand, or other medium out of the way.

Researchers used principles employed by diverse organisms that successfully swim and dig within granular media to develop new mechanisms robots can use to move. The team created a vine-like soft robot designed to mimic plants in the way they navigate by growing from their tips while the rest of the body remains stationary. Tip extension keeps forces low and localized to the growing end. Researchers note that if the entire body moved as it grew, friction over the whole surface would increase as the robot entered the sand until it couldn’t move.

Burrowing animals were the inspiration for another strategy for the soft robot called granular fluidization. That process suspends particles in a fluid-like state allowing the animal to overcome high levels of resistance presented by sand or loose soil. The researchers specifically modeled the southern sand octopus, which shoots a jet of water into the ground and uses its arms to pull itself into the temporarily loosened sand.

The team created a small, exploratory soft robot with multiple applications where burrowing through dry granular media is needed. It has the potential to be used in soil sampling, underground installation of utilities, and erosion control. The researchers are currently working on a project with NASA to develop burrowing robots for moon exploration and other uses.

Repost: Original Source and Author Link

Categories
Game

Valorant’s latest Agent is a robot with an ability that lets anyone revive him

Ever since Riot Games offered a small tease during the Summer Games Fest, Valorant players have had a genuine reason to believe that the new agent coming to the popular free-to-play shooter will be a robot. Not one to keep fans guessing, the company today unveiled KAY/O, a “machine of war” whose mechanics are all named for code terminology and borrow a little from other popular FPSes.

KAY/O is an initiator with three throwable abilities. The first is ZERO/point, a knife that when cast lodges into the first surface it hits and suppresses anyone within its blast radius. Think Revenant’s Silence from Apex Legends but with a blade instead of a device. Opponents caught within its area of effect will find themselves unable to select their own abilities and have to simply rely on gun skill. It can also deactivate weapons like Killjoy’s turret, which will be music to the ears of players who just want to “click heads.”

Next up is FLASH/drive. It comes in the form of a flash grenade, which can be charged to reduce ‘cook’ time. Right clicking will throw a charged flash that only needs one second to cook, whereas a left click will up that wait time to 1.6 seconds. KAY/O’s final non-ultimate ability is FRAG/ment, an “explosive fragment” that will stick to the ground and explode numerous times. It’s capable of almost entirely wiping out an opponent’s health if they’re caught at the centre of its blast, kind of like damage from Raze’s Cluster Grenade.

While Sage remains the only hero capable of bringing eliminated players back into battle, KAY/O’s ultimate ability looks set to change mid-to-late game skirmishes. NULL/cmd allows players to “instantly overload with polarized radianite energy” which gives KAY/O a combat stim, emits pulses that can suppress enemies and, most importantly, allows him to be downed rather than eliminated. But be mindful, that only happens while he’s in his overloaded state.

Similar to the revive mechanics in battle royale shooters like Fortnite and Warzone, teammates will be vulnerable to damage while they stabilize KAY/O’s core and eventually get the killer robot back to its feet. Like Sage mains, players willing to take evasive manoeuvres during battle could turn a 1v1 in their favor with a well-timed rescue.

KAY/O isn’t the only new addition coming to the Episode 3 Act I battlepass when it goes live on June 22nd. On top of the usual free and paid weapon skins, gun buddies, cards, sprays and titles, Riot is offering bonus XP to players who squad up with friends during the Valorant Year One Event between June 22nd and July 6th. Parties of two will see an 8 percent boost, while squads of three and four will see 12 and 16 percent bonuses respectively. If you can fill all five spots, the XP bonus rises to an impressive 20 percent.
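Put another way, the bonus scales at 4 percentage points per party member from duos up to a full squad. A quick sketch of the announced schedule:

```python
def year_one_xp_multiplier(party_size: int) -> float:
    """XP multiplier during the Valorant Year One Event, per the
    announced schedule (solo players get no bonus)."""
    bonuses = {2: 0.08, 3: 0.12, 4: 0.16, 5: 0.20}
    return 1.0 + bonuses.get(party_size, 0.0)

assert year_one_xp_multiplier(5) == 1.20  # full five-stack: 20 percent boost
```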


Repost: Original Source and Author Link

Categories
Tech News

A quantum physics lullaby to ward off your killer robot nightmares

The reports of humanity’s imminent demise at the hands of sentient killer robots have been greatly exaggerated.

Based on the current state of artificial intelligence – that is, it’s really good at sifting through data and it can usually tell the difference between a dog and a cat – we don’t have to worry about “conscious” AI anytime soon.

I put “conscious” in quotes because, as every article you’re likely to read on the subject will point out, we don’t really understand consciousness.

There’s a contingent of experts who believe consciousness manifests only in special organisms, and there’s an emerging group who feel that everything – and they mean everything – is conscious.

The idea that consciousness only exists in certain entities is a fun one: it means we’re the cosmos’ special little people. And that makes us very, very important.

But let’s take a gander at the idea that doesn’t make us the center of the known universe too, just for fun: Panpsychism.

This snippet from an article by Caroline Delbert at Popular Mechanics does a fantastic job of explaining what universal consciousness could be:

The resulting theory is called integrated information theory (IIT) … In IIT, consciousness is everywhere, but it accumulates in places where it’s needed to help glue together different related systems.

The revolutionary thing in IIT … is that consciousness isn’t biological at all, but rather is simply this value, phi, that can be calculated if you know a lot about the complexity of what you’re studying.

If your brain has almost countless interrelated systems, then the entire universe must have virtually infinite ones. And if that’s where consciousness accumulates, then the universe must have a lot of phi.

I don’t know about phi, but if the universe itself is where consciousness is derived: that’s probably bad news for AI. At least under its current paradigm.

Simply put: non-algorithmic intelligence would be the baseline norm in a universe where consciousness manifested as a result of systemic perturbation. That’s another way of saying that the only reason we have free will is because you can’t brute-force consciousness using algorithms.

This is because the existence of algorithmic-consciousness would indicate that you could determine exactly what any given consciousness would do in perpetuity, if you could simply recreate the algorithms it runs on. And that means there’s no such thing as free will: we’d basically all just be pre-determined intelligence systems executing our code.

But this doesn’t really fit in with our experience of reality or the theory of universal consciousness. We appear to be quantum creatures. Our brains can surface thoughts based on a theoretically near-infinite number of parameters. And the amount of compute it would take in a binary system to imitate this could be unfathomable.

Have you ever tried to remember the name of a song or TV character for weeks and then had that memory triggered by a taste or smell? Ever made up a silly rhyme to help you memorize something for a test? This is evidence of the vast interconnected quantum neural network operating inside your skull. This indicates we’re probably operating as nonalgorithmic-consciousnesses.

If intelligence and consciousness are manifestations of quantum mechanics, it could very well be impossible to recreate them in a binary system.

So, the bad news is that you’re unlikely to have a truly alive robot pal anytime soon. We’ve only just begun to dabble in quantum computing, and if you believe the universal consciousness theory: we’re probably a very long way away from general quantum AI and cracking the consciousness code.

The good news is that this would also mean there’s almost no chance an AI will become sentient and decide to create killer robots to murder us all so the machines can rule the Earth.

Repost: Original Source and Author Link

Categories
Tech News

This AI robot mimics human expressions to build trust with users

Scientists at Columbia University have developed a robot that mimics the facial expressions of humans to gain their trust.

Named Eva, the droid uses deep learning to analyze human facial gestures captured by a camera. Cables and motors then pull on different points of the robot’s soft skin to mimic the expressions of nearby people in real-time.

The effect is pretty creepy, but the researchers say that giving androids this ability can facilitate more natural and engaging human-robot interactions.

Eva produces different expressions by utilizing one or more of six basic emotions: anger, disgust, fear, joy, sadness, and surprise. Per the study paper:

For example, while joy would correspond to one facial expression, the combination of joy and surprise would result in happily surprised, which would correspond to a separate facial expression.
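One way to picture that combination is as a weighted mix over the six basic-emotion dimensions. The toy sketch below illustrates the blending idea only; Eva’s actual mapping is learned by its neural networks and drives motors and cables, not an abstract vector.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def blend(weights: dict) -> np.ndarray:
    """Mix basic emotions into a single expression vector.
    Purely illustrative; Eva's real mapping is learned from video and
    actuates points on its soft skin rather than returning a vector."""
    vec = np.zeros(len(EMOTIONS))
    for emotion, w in weights.items():
        vec[EMOTIONS.index(emotion)] = w
    return vec / max(vec.sum(), 1e-9)  # normalize the mixture weights

# "Happily surprised" as an equal mix of joy and surprise.
print(blend({"joy": 0.5, "surprise": 0.5}))
```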


The team trained the robot to generate these expressions by filming it making a series of random faces. Eva’s neural networks then learned to match the humanoid’s gestures to those of human faces captured on its video camera.

Credit: Creative Machines Lab/Columbia Engineering
Categories
Tech News

This professional grade Mirobot 6-Axis Mini Robot Arm is just the tool engineering students need.

TLDR: This amazingly precise robotic arm is a perfect tool for up-and-coming engineers, designed to teach the principles of manufacturing robotics right from your desktop.

Let’s get something out of the way quickly here. While there are any number of both fun and educational tinkerer sets to choose from out there, most are fundamentally toys. Oh sure, the lessons they offer and some of the functions they serve are certainly real enough, but at the end of the day, they’re mostly just enjoyable side diversions into the world of basic engineering.

But make no mistake. The WLKATA Mirobot 6-Axis Mini Robot Arm ($1,539.99, 8 percent off, from TNW Deals) is absolutely no toy. 

Designed specifically for engineering students, this professional grade, open source tool is essentially just like a robotic arm you’d find on a factory floor or in an ultra-precise lab setting…just at a tiny fraction of the price of those extremely expensive bigger brothers.

This arm comes ready to use right out of the box, packing hardware based on an Arduino control board with open source programming origins, so it’s ready for any level of customization. With the same 6-axis range of movement found in larger industrial arms, this one stands only 9 inches high, allowing engineers and manufacturers to use this cute yet powerful tool to start fashioning real-world robotic uses in a test setting right on a desktop.

And users know they’re getting it right with this arm, which features 0.2mm repeat positioning accuracy, making it ideal for education purposes as well as light-duty tasks, with the same attention to intricacy and absolute precision that robotic work demands.

Its programming produces smooth and stable movement, without a lot of the jerking or slight tremors found on arms that run on servos. Plus, this arm comes with a range of convenient attachments for all your arm modifications, with effectors to serve as a micro-servo gripper, a pen holder, a suction cup, a pneumatic two-finger gripper, a universal ball gripper, and even a GoPro carrier.

This kit also comes with a Bluetooth remote controller, designed to function just like a real industrial robot’s, and an accompanying app, so you can run the arm right through your smartphone.

Right now, students can get the WLKATA Mirobot 6-Axis Mini Robot Arm Professional Kit and Educational Kit at nearly 10 percent off the regular price, cutting almost $150 off the total: $1,539.99 for the Professional Kit and $1,399.99 for the Educational Kit.

Prices are subject to change.

Repost: Original Source and Author Link

Categories
Tech News

Disney Project Kiwi robot brings kid Groot to life

We’ve all seen our fair share of animatronic characters in theme parks. Save for a few exceptions, almost all of them are obviously fake, even the ones made to look humanoid. Boston Dynamics may have ruined some people’s images of bipedal and quadrupedal robots but, for many robotics engineers and scientists, the holy grail is still to produce a believable simulacrum. Disney’s Imagineering R&D arm may finally be close to that goal as it reveals its Project Kiwi platform, which almost convincingly recreates the lovable character Groot.

Robotics, especially when mixed with animatronics, has really come a long way. There are a few robots, especially at Disney’s theme parks, that have almost believable movements and facial expressions. They are, however, mostly stuck where they stand, which is what makes Disney Imagineering’s Project Kiwi a monumental development.

Started back in 2018, the project aimed to create a fully mobile, bipedal robot in the exact likeness and size of a certain character. Considering the character’s smaller stature and fame, it’s no surprise that the team chose to skin it in “Kid” Groot’s appearance, something in between the adorable Baby Groot and the towering adult version. And after three years of designing and developing custom parts, Project Kiwi was ready for a public preview.

TechCrunch’s preview reveals a bipedal robot that walks almost naturally, at least as naturally as Groot can be expected to walk. Unlike even the most advanced animatronics, only a thin cable for programming connects to the robot, giving it almost total freedom to move around. Its movements and even its expressions are pretty impressive and, thanks to not looking human, avoid the uncanny valley completely.

More than just recreating Groot, though, Project Kiwi is a platform meant to be used to develop other characters with the same design and technology. At the moment, however, it is still an early work in progress, so don’t expect to see its kind walking around Disney parks any time soon.

Repost: Original Source and Author Link

Categories
Tech News

Pepper the robot has been talking to itself to gain your trust

Talking to yourself has a bad reputation, but it doesn’t always mean you’re going mad. Studies show that thinking out loud can help you manage your emotions and complete tricky tasks — and it isn’t only humans who are doing it.

A group of Italian researchers recently programmed Pepper the robot to “think” out loud so that users can understand what influences its decisions. They suspected that this would improve its interactions with humans.

They tested their theory by asking people to set a dinner table with the robot according to etiquette rules.

They found that the robot was better at solving dilemmas when it used self-dialogue.


When one person asked Pepper to breach the code of etiquette by placing a napkin on a fork, the robot used its “inner voice” as it analyzed the request. Pepper concluded that the user might be confused but followed their instruction:

Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants.

By using self-dialogue, Pepper let the user know that it had solved the predicament by prioritizing the human’s request.

The researchers say this form of transparency could build our trust with robots. They also believe it will help humans and droids collaborate and find solutions to dilemmas.

“Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation,” said study co-author Antonio Chella, a professor of robotics at the University of Palermo.

There might be one problem, however. If a robot’s constantly talking to itself, users might prefer to sacrifice some of its performance for a bit of peace and quiet. Pepper is gonna need a mute button.

You can read the research paper in the journal iScience.




Repost: Original Source and Author Link

Categories
Tech News

Boston Dynamics’ latest robot doesn’t do backflips — and that’s a smart move

Boston Dynamics has made a name for itself through fascinating videos of biped and quadruped robots doing backflips, opening doors, and dancing to Uptown Funk. Now, it has revealed its latest gadget: A robot that looks like a huge overhead projector on wheels.

It’s called Stretch, it doesn’t do backflips, it doesn’t dance, and it’s made to do one task: moving boxes. It sounds pretty boring.

But this could, in fact, become the most successful commercial product of Boston Dynamics and turn it into a profitable company.

What does Stretch do?

Stretch has a box-like base with a set of wheels that can move in all directions. On top of the base are a large robotic arm and a perception mast. The robotic arm has seven degrees of freedom and a suction pad array that can grab and lift boxes. The perception mast uses computer vision–powered cameras and sensors to analyze its surroundings.

While we have yet to see Stretch in action beyond a promo video, according to information Boston Dynamics provided to the media, it can handle boxes weighing up to 23 kilograms, move up to 800 cases per hour, and run for eight hours on a battery charge. The video posted by Boston Dynamics on its YouTube channel suggests the robot reaches that 800-cases-per-hour rate only if everything in its environment remains static.

Traditional industrial robots must be installed in a fixed location, which puts severe limits on the workflows and infrastructure of the warehouses where they are deployed. Stretch, on the other hand, is mobile and can be used in many different settings, with few prerequisites beyond flat ground and a bit of training (we still don’t know how the training works). This could be a boon for many warehouses that don’t have automation equipment and infrastructure.

As Boston Dynamics’ VP of business development Michael Perry told The Verge, “You can take this capability and you can move it into the back of the truck, you can move it into aisles, you can move it next to your conveyors. It all depends what the problem of the day is.”

A boring but useful robot

At first glance, Stretch seems like a step back from the previous robots Boston Dynamics has created. It can’t navigate uneven terrain, climb stairs, jump onto surfaces, open doors, or handle objects in complicated ways.

It did manage some amusing feats in its intro video, but we can’t expect it to be as entertaining as Spot, Atlas, or Handle.

But that’s exactly what real-world applications of robotics and artificial intelligence are all about. We still haven’t figured out how to create artificial general intelligence, the kind of AI that can mimic all aspects of the cognitive and physical abilities of humans and animals.

Current AI systems are robust when performing narrow tasks in stable environments but start to break when forced to tackle varied problems in unpredictable settings. The key to a successful AI system, therefore, is finding the right balance between versatility and robustness, especially in physical settings where safety and material damage are major concerns.

And Stretch exactly fits that description. It does a very specific task (picking up and displacing boxes) in a predictable environment (flat surfaces in warehouses).

Stretch might sound boring in comparison to the other things that Boston Dynamics has done in the past. But if it lives up to its promise, it can directly result in reduced costs and improved production for many warehouses, which makes it a viable business model and product.

As Boston Dynamics’ vice president of business development Michael Perry told The Verge last June, “[A] lot of the most interesting stuff from a business perspective are things that people would find boring, like enabling the robot to read analogue gauges in an industrial facility. That’s not something that will set the internet on fire, but it’s transformative for a lot of businesses.”


The competitive edge of Boston Dynamics

Boston Dynamics is not alone in working on autonomous mobile robots for warehouses and other industrial settings. There are dozens of companies competing in the field, ranging from longstanding companies such as Honeywell to startups such as Fetch Robotics.

And unloading boxes is just one of the several physical tasks that are ripe for automation. There’s also a growing market for sorting robots, order-picking robots, and autonomous forklifts.

What would make Boston Dynamics a successful contender in this competitive market? The way I see it, success in the industrial autonomous mobile robot market will be defined by the versatility/robustness threshold on the one hand and cost efficiency on the other. In this respect, Boston Dynamics has two factors working to its advantage.

First, Boston Dynamics will leverage its decades of experience to push the versatility of its robots without sacrificing their robustness and safety. Stretch inherits technology and experience from Handle, Atlas, Spot, and other robots Boston Dynamics has developed over the years. It also contains elements of Pick, a computer vision-based depalletizing solution mentioned in the press release announcing Hyundai’s acquisition of Boston Dynamics. This can enable Stretch to work in a broader set of conditions than its competitors.

Second, the company’s new owner, Hyundai, is one of the leading companies in mobile robot research and development. Hyundai has already done extensive research into creating autonomous robots and vehicles that can navigate various environments and terrains. Hyundai also has great manufacturing capacity. This will enable Boston Dynamics to reduce the cost of manufacturing Stretch and sell it at a competitive price. Hyundai’s manufacturing facilities will also enable Boston Dynamics to deliver new parts and accessories for Stretch at a cost-efficient price. This will further improve the robot’s versatility in the future and allow customers to repurpose it for new tasks without making large purchases.


The future of Boston Dynamics

Stretch is the second commercial product of Boston Dynamics, the first one being the quadruped robot Spot. But Spot’s sales were only covering a fraction of the company’s costs, which were at least $150 million per year when Hyundai acquired it. Stretch has a greater potential for making Boston Dynamics a profitable company.

How will the potential success of Stretch affect the future of Boston Dynamics? Here’s an observation I made last year after Hyundai acquired Boston Dynamics: “Boston Dynamics might claim to be a commercial company. But at heart, it is still an AI and robotics research lab. It has built its fame on its advanced research and a continuous stream of videos showing robots doing things that were previously thought impossible. The reality, however, is that real-world applications seldom use cutting-edge AI and robotics technology. Today’s businesses don’t have much use for dancing and backflipping robots. What they need are stable solutions that can integrate with their current software and hardware ecosystem, boost their operations, and cut costs.”

How will Stretch’s success affect Boston Dynamics’ plans for human-like robots? It’s hard to remain committed to long-term scientific goals when you’re owned by a commercial enterprise that counts profits by the quarter.

But it’s not impossible. In the early 1900s, Albert Einstein worked as an assistant examiner at the Swiss patent office in Bern because physics research didn’t put food on his family’s table. But he remained a physicist at heart and continued his research in his idle time while his job as patent clerk paid the bills. His passion eventually paid off, earning him a Nobel prize and resulting in some of the greatest contributions to science in history.

Will Stretch and its successors become the norm for Boston Dynamics, or is this the patent-clerk job that keeps the lights on while Boston Dynamics continues to chase the dream of humanoid robots that push the limits of science?

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.



Repost: Original Source and Author Link