AI in robotics: Problems and solutions

Robotics is a diverse industry with many variables, and its future is filled with uncertainty: nobody can predict which directions will lead the field a few years from now. It is also a growing sector of more than 500 companies, whose products fall into four categories:

  • Conventional industrial robots,
  • Stationary professional services (such as medical and agricultural applications),
  • Mobile professional services (construction and underwater activities),
  • Automated guided vehicles (AGVs) for carrying small and large loads in logistics or assembly lines.

According to International Federation of Robotics data, 3 million industrial robots are operating worldwide; the number increased by 10% in 2021. The global robotics market is estimated at $55.8 billion and is expected to grow to $91.8 billion by 2026, a 10.5% annual growth rate.

Biggest industry challenges

The field of robotics faces numerous issues rooted in its hardware and software capabilities. Most of the challenges surround enabling technologies like artificial intelligence (AI), perception, and power sources. From manufacturing procedures to human-robot collaboration, several factors are slowing the industry’s pace of development.

Let’s look at the significant problems facing robotics:

Intelligence

Real-world environments can be challenging for robots to comprehend and act on appropriately. Machine reasoning is still no match for human thinking, so robotic solutions are not entirely dependable.

Navigation

There has been considerable progress in robots perceiving and navigating their environments – self-driving vehicles, for example. Navigation solutions will continue to evolve, but future robots will need to work in environments that are unmapped and only partially understood.

Autonomy

Full autonomy is impractical for now, but we can reason about energy autonomy. Our brains require a great deal of energy to function; without evolutionary mechanisms that optimize this consumption, they could never have reached current levels of human intelligence. The same applies to robotics: the more power a robot requires, the less autonomous it can be.

New materials

Elaborate hardware is crucial to today’s robots. Substantial work remains on artificial muscles, soft robotics, and other components needed to build efficient machines.

The above challenges are not unique; they are expected of any developing technology. The potential value of robotics is immense, attracting tremendous investment focused on removing the existing obstacles. Among the solutions is combining robotics with artificial intelligence.

Robotics and AI

Robots have the potential to replace about 800 million jobs globally in the future, making about 30% of all positions irrelevant. Unsurprisingly, only 7% of businesses do not currently employ AI-based technology but are looking into it. However, we need to be careful when discussing robots and AI: the two terms are often assumed to be identical, which has never been the case.

Artificial intelligence, by definition, is about enabling machines to perform complex tasks autonomously. AI-based tools can solve complicated problems by analyzing large quantities of information and finding dependencies invisible to humans. At ENOT.ai, we have featured six cases in which applying AI improved navigation, recognition, and energy consumption by between 48% and 800%.

While robotics is also connected to automation, it draws on other fields as well – mechanical engineering, computer science, and AI. AI-driven robots use machine learning algorithms to perform functions autonomously; they can be described as intelligent automation applications in which robotics provides the body and AI supplies the brain.

AI applications for robotics

The combination of robotics and AI naturally lends itself to serving people. Numerous valuable applications have been developed so far, starting with household use. For example, AI-powered vacuum cleaners have become a part of everyday life for many people.

However, much more elaborate applications are being developed for industrial use. Let’s go over a few of them:

  • Agriculture. As in healthcare or other fields, robotics in agriculture will mitigate the impact of labour shortages while offering sustainability. Many apps, for example, Agrobot, enable precision weeding, pruning, and harvesting. Powered by sophisticated software, apps allow farmers to analyze distances, surfaces, volumes, and many other variables.
  • Aerospace. While NASA is looking to improve its Mars rovers’ AI and working on an automated satellite repair robot, other companies want to enhance space exploration through robotics and AI. Airbus’ CIMON, for example, is developed to assist astronauts with their daily tasks and reduce stress via speech recognition while operating as an early-warning system to detect issues.
  • Autonomous driving. After Tesla, self-driving cars no longer surprise anybody. Nowadays, there are two critical cases: self-driving robo-taxis and autonomous commercial trucking. In the short term, advanced driver-assistance systems (ADAS) technology will be essential as the market prepares for full autonomy and seeks to profit from the technology’s capabilities.

With advances in artificial intelligence coming on in leaps and bounds every year, it’s certainly possible that the line between robotics and artificial intelligence will become more blurred over the coming decades, resulting in a rocketing increase in valuable applications.

Main market tendency

The competitive field of artificial intelligence in robotics is becoming more fragmented as the market grows and presents clear opportunities to robot vendors. Companies are eager to seize the first-mover advantage and grab the opportunities these technologies open up. Vendors also view expansion, in terms of product innovation and global impact, as a path toward gaining maximum market share.

However, there is a clear need for more market players. Robotics’ potential to take over routine human work promises to be highly consequential, freeing people’s time for creativity, and many more players are needed to speed up that process.

Future of AI in robotics

Artificial intelligence and robotics have already become a concrete target for business investment. This technology alliance will change the world, and we can expect to see it happen in the coming decade. AI allows robotic automation to improve and to perform complicated operations with ever fewer errors. Together, the two industries are a driving force of the future, and the next decade will bring many astounding AI-based inventions.

Sergey Alyamkin, Ph.D., is CEO and founder of ENOT.ai.

Amazon launches AWS RoboRunner to support robotics apps

At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.

The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. A recent report from Automation World found that the bulk of companies that embraced robotics in the past year did so to decrease labor costs, increase capacity, and navigate a lack of available workers. The same survey found that 44.9% of companies now consider the robots in their assembly and manufacturing facilities to be an integral part of daily operations.

Amazon — a heavy investor in robotics itself — hasn’t been shy about its intent to capture a larger part of a robotics software market anticipated to be worth over $7.52 billion by 2022. In 2018, the company unveiled AWS RoboMaker, a product that helps developers deploy robotics applications with AI and machine learning capabilities. And earlier this year, Amazon rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit supporting the RoboMaker service for orchestrating robotics workflows.

IoT RoboRunner

IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository.

The goal of IoT RoboRunner is to simplify the process of building management apps for fleets of robots, according to Amazon. As enterprises increasingly rely on robotics to automate their operations, they’re choosing different types of robots, making it more difficult to organize their robots efficiently. Each robot vendor and work management system has its own, often incompatible control software, data format, and data repository. And when a new robot is added to a fleet, programming is required to connect the control software to work management systems and program the logic for management apps.

Developers can use IoT RoboRunner to access the data required to build robotics management apps and leverage prebuilt software libraries to create apps for tasks like work allocation. Beyond this, IoT RoboRunner can be used to deliver metrics and KPIs via APIs to administrative dashboards.
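
For a concrete picture of what such a central repository enables, here is a minimal sketch of vendor-neutral fleet data and naive work allocation; every class, field, and method name below is a hypothetical illustration, not the AWS IoT RoboRunner API:

```python
# Hypothetical sketch of a standardized fleet repository; all names here are
# illustrative assumptions, NOT the actual AWS IoT RoboRunner API.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Robot:
    robot_id: str
    vendor: str                  # each vendor ships its own control software
    facility_id: str             # shared facility data spans vendor silos
    position: tuple = (0.0, 0.0)

@dataclass
class RoboticTask:
    task_id: str
    description: str
    status: str = "PENDING"            # PENDING -> ASSIGNED -> DONE
    assigned_robot: Optional[str] = None

class FleetRepository:
    """Central store that a work-allocation app could query across vendors."""
    def __init__(self) -> None:
        self.robots: Dict[str, Robot] = {}
        self.tasks: List[RoboticTask] = []

    def register(self, robot: Robot) -> None:
        self.robots[robot.robot_id] = robot

    def allocate(self, task: RoboticTask) -> RoboticTask:
        # Naive work allocation: assign the task to the first idle robot.
        busy = {t.assigned_robot for t in self.tasks if t.status == "ASSIGNED"}
        idle = next((r for r in self.robots if r not in busy), None)
        if idle is not None:
            task.assigned_robot, task.status = idle, "ASSIGNED"
        self.tasks.append(task)
        return task

repo = FleetRepository()
repo.register(Robot("amr-1", vendor="VendorA", facility_id="dc-7"))
repo.register(Robot("forklift-2", vendor="VendorB", facility_id="dc-7"))
print(repo.allocate(RoboticTask("t-1", "move pallet to dock 3")).assigned_robot)
```

The point of the sketch is the shared schema: once robots from different vendors report into one repository, allocation logic no longer needs per-vendor integration code.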

IoT RoboRunner competes with robotics management systems from Freedom Robotics, Exotec, and others. But Amazon makes the case that IoT RoboRunner’s integration with AWS — including services like SageMaker, Greengrass, and SiteWise — gives it an advantage over rivals on the market.

“Using AWS IoT RoboRunner, robotics developers no longer need to manage robots in silos and can more effectively automate tasks across a facility with centralized control,” Amazon wrote in a blog post. “As we look to the future, we see more companies adding more robots of more types. Harnessing the power of all those robots is complex, but we are dedicated to helping enterprises get the full value of their automation by making it easier to optimize robots through a single system view.”

AWS Robotics Startup Accelerator

Amazon also announced the Robotics Startup Accelerator, which the company says will foster robotics companies by providing them with resources to develop, prototype, test, and commercialize their products and services. “Combined with the technical resources and network that AWS provides, the strategic collaboration will help robotics startups and the industry overall to experiment and innovate, while connecting startups and their technologies with the AWS customer base,” Amazon wrote in a blog post.

Startups accepted into the Robotics Startup Accelerator program will consult with AWS and MassRobotics experts on business models and with AWS robotics engineers for technical assistance. Benefits include hands-on training on AWS robotics solutions and up to $10,000 in promotional credits to use AWS IoT, robotics, and machine learning services. Startups will also receive business development and investment guidance from MassRobotics and co-marketing opportunities with AWS via blogs and case studies.

Robotics startups — particularly in industrial robotics — have attracted the eye of venture capitalists as the trend toward automation continues. From March 2020 to March 2021, venture firms poured $6.3 billion into robotics companies, up nearly 50% from March 2019 to March 2020, according to data from PitchBook. Over the longer term, robotics investments have climbed more than fivefold throughout the past five years, to $5.4 billion in 2020 from $1 billion in 2015.

“Looking ahead, the expectations of robotics suppliers are bullish, with many believing that with the elections over and increased availability of COVID-19 vaccines on the horizon, much demand will return in industries where market skittishness has slowed robotic adoption,” Automation World wrote in its report. “Meanwhile, those industries already seeing an uptick are expected to plough ahead at an even faster pace.”

SVT Robotics nabs $25M to simplify industrial robotics deployment

SVT Robotics, a provider of software that orchestrates robots in warehouses and factories, has raised $25 million in series A funding led by Tiger Global with participation from Prologis Ventures, the company announced this morning. SVT says that it’ll use the new capital to bolster its product R&D and expand its customer outreach efforts.

According to cofounder and CEO A.K. Shultz, SVT’s platform helps customers solve the growing “integration problem” in industrial automation. The industry is severely limited in its capacity to execute, he says: integrations are typically custom-coded, translating to long, complex development cycles. A recent piece in Industry Today finds that the top concerns of manufacturers adopting automation include a lack of experienced workers to operate the machines, high transition expenses, and safety concerns.

“It’s expensive, and companies wait as much as a year or more for new automation to go live,” Shultz said in a statement. “Solving that problem with [SVT’s platform] empowers the market to grow at its full potential.”

Robotics orchestration

Adoption of automation technologies including robotics has accelerated throughout the pandemic. For example, American Eagle deployed robots to help it sort clothes in its warehouses to meet a surge of online orders. Meanwhile, startup Brain Corp — an SVT rival — reported that the use of robots to clean retail locations in the U.S. rose 24% in Q2 2020 compared with the same period in 2019.

According to Deloitte’s survey on AI adoption in manufacturing, 93% of companies believe that AI will be a pivotal technology to drive growth and innovation in the sector. But not every company is equipped to make the automation transition.

To assist, SVT offers prebuilt integrations and functionality programmed by its various automation partners. Customers select which technologies they want, and SVT designs a robotics solution using drag-and-drop tools. The solution can then be deployed on-premises or in the cloud, depending on the customers’ requirements.

While SVT isn’t without rivals in the over-$150 billion industrial automation space, the three-year-old startup claims that deployments of its platform increased 375% from Q4 2020 to July 2021. Current customers include “top companies” within the warehousing and manufacturing space.

“With no ‘plug-and-play’ integration solution for industrial robotics, warehouses and manufacturers have been prevented from quickly deploying the automation they need to keep pace with the dramatic shifts in labor dynamics we’ve seen over the past year,” Tiger Global partner Griffin Schroeder said in a press release. “With its [platform], SVT is solving this crucial interoperability problem.”

Unity moves robotics design and training to the metaverse

Unity, the San Francisco-based platform for creating and operating games and other 3D content, on November 10 announced the launch of Unity Simulation Pro and Unity SystemGraph to improve modeling, testing, and training complex systems through AI.

With robotics usage in supply chains and manufacturing increasing, such software is critical to ensuring efficient and safe operations.

Danny Lange, senior vice president of artificial intelligence for Unity, told VentureBeat via email that the Unity SystemGraph uses a node-based approach to model the complex logic typically found in electrical and mechanical systems. “This makes it easier for roboticists and engineers to model small systems, and allows grouping those into larger, more complex ones — enabling them to prototype systems, test and analyze their behavior, and make optimal design decisions without requiring access to the actual hardware,” said Lange.

Unity’s execution engine, Unity Simulation Pro, offers headless rendering — eliminating the need to project each image to a screen and thus increasing simulation efficiency by up to 50% and lowering costs, the company said.

Use cases for robotics

“The Unity Simulation Pro is the only product built from the ground up to deliver distributed rendering, enabling multiple graphics processing units (GPUs) to render the same Unity project or simulation environment simultaneously, either locally or in the private cloud,” the company said. This means multiple robots with tens, hundreds, or even thousands of sensors can be simulated faster than real time on Unity today.

According to Lange, users in markets like robotics, autonomous driving, drones, agriculture technology, and more are building simulations containing environments, sensors, and models with million-square-foot warehouses, dozens of robots, and hundreds of sensors. With these simulations, they can test software against realistic virtual worlds, teach and train robot operators, or try physical integrations before real-world implementation. This is all faster, more cost-effective, and safer, taking place in the metaverse.

“A more specific use case would be using Unity Simulation Pro to investigate collaborative mapping and mission planning for robotic systems in indoor and outdoor environments,” Lange said. He added that some users have built a simulated 4,000 square-foot building sitting within a larger forested area and are attempting to identify ways to map the environment using a combination of drones, off-road mobile robots, and walking robots. The company reports it has been working to enable creators to build and model the sensors and systems of mechatronic systems to run in simulations.

A major application of Unity SystemGraph is sensor simulation: those building simulations that need physically accurate camera and lidar models can use SensorSDK to take advantage of SystemGraph’s library of ready-to-use models and easily configure them for their specific cases.

Customers can now simulate at scale, iterate quickly, and test more to drive insights at a fraction of current simulation costs, Unity says. The company adds that customers like Volvo Cars, the Allen Institute for AI, and Carnegie Mellon University are already seeing results.

While there are several companies that have built simulators targeted especially at AI applications like robotics or synthetic data generation, Unity claims that the ease of use of its authoring tools makes it stand out above its rivals, including top competitors like Roblox, Aarki, Chartboost, MathWorks, and Mobvista. Lange says this is evident in the size of Unity’s existing user base of over 1.5 million creators using its editor tools.

Unity says its technology is aimed at impacting the industrial metaverse, where organizations continue to push the envelope on cutting-edge simulations.

“As these simulations grow in complexity in terms of the size of the environment, the number of sensors used in that environment, or the number of avatars operating in that environment, the need for our product increases. Our distributed rendering feature, which is unique to Unity Simulation Pro, enables you to leverage the increasing amount of GPU compute resources available to customers, in the cloud or on-premise networks, to render this simulation faster than real time. This is not possible with many open source rendering technologies or even the base Unity product — all of which will render at less than 50% real time for these scenarios,” Lange said.

The future of AI-powered technologies

Moving into 2022, Unity says it expects to see a steep increase in the adoption of AI-powered technologies, with two key adoption motivators. “On one side, companies like Unity will continue to deliver products that help lower the barrier to entry and help increase adoption by wider ranges of customers. This is combined with the decreasing cost of compute, sensors, and other hardware components,” Lange said. “Then on the customer adoption side, the key trends that will drive adoption are broader labor shortages and the demand for more operational efficiencies — all of which have the effect of accelerating the economics that drive the adoption of these technologies on both fronts.”

Unity is doubling down on building purpose-built products for its simulation users, enabling them to mimic the real world by simulating environments with various sensors, multiple avatars, and agents for significant performance gains with lower costs. The company says this will help its customers to take the first step into the industrial metaverse.

Unity will showcase the Unity Simulation Pro and Unity SystemGraph through in-depth sessions at the forthcoming Unity AI Summit on November 18, 2021.

Kodiak Robotics to expand autonomous trucking with $125M

Kodiak Robotics, a startup developing self-driving truck technologies, today announced that it raised $125 million in an oversubscribed series B round for a total of $165 million to date. The tranche — which includes investments from SIP Global Partners, Lightspeed Venture Partners, Battery Ventures, CRV, Muirwoods Ventures, Harpoon Ventures, StepStone Group, Gopher Asset Management, Walleye Capital, Aliya Capital Partners, and others — will be put toward expanding Kodiak’s team, adding trucks to its fleet, and growing its autonomous service capabilities, according to CEO Don Burnette.

“Our series B drives us into hyper-growth so we can double our team, our fleet, and continue to scale our business,” Burnette said in a statement. “With [it], we will further accelerate towards launching our commercial self-driving service with our partners in the coming years to help address these critical challenges.”

While autonomous trucks could face challenges in commercializing at scale until clearer regulatory guidelines are established, the technology has the potential to reduce the cost of trucking from $1.65 per mile to $1.30 per mile by mid-decade, according to a PitchBook analysis. That’s perhaps why in the first half of 2021, investors poured a record $5.6 billion into driverless trucking companies, eclipsing the $4.2 billion invested in all of 2020.

The semi- and fully autonomous truck market will reach approximately $88 billion by 2027, according to a recent Acumen Research and Consulting estimate, growing at a compound annual growth rate of 10.1% between 2020 and 2027.

Kodiak technology

Kodiak, which was cofounded by Burnette and former venture capitalist Paz Eshel, emerged from stealth in 2018. After leaving Google’s self-driving project for Otto in early 2016, Burnette briefly worked at Uber following the company’s acquisition of Otto in 2016 at a reported $680 million valuation.

“I was very fortunate to be an early member of and software tech lead at the Google self-driving car project, the predecessor to Waymo. I spent five years there working on robotaxis, but ultimately came to believe that there were tons of technical challenges for such applications, and the business case wasn’t clear,” Burnette told VentureBeat via email. “I realized in those early days that long-haul trucking represented a more compelling use case than robotaxis. I wanted a straight-forward go-to-market opportunity, and I saw early on that autonomous trucking was the logical first application at scale.”

Kodiak’s self-driving platform uses a combination of light detection and ranging (lidar) sensors as well as camera, radar, and sonar hardware. A custom computer processes sensor data and plans the truck’s path. Overseen by a safety driver, the brakes, steering column, and throttle are controlled by the computer to move the truck to its destination.

Kodiak’s sensor suite collects raw data about the world around the truck, processing raw data to locate and classify objects and pedestrians. The above-mentioned computer reconciles the data with lightweight road maps, which are shipped to Kodiak’s fleet over the air and contain information about the highway, including construction zones and lane changes.

Kodiak claims its technology can detect shifting lanes, speed changes, heavy machinery, road workers, construction-specific signs, and more, in rain or sunshine. Moreover, the company says its trucks can merge on and off highways and anticipate rush hour, holiday traffic, and construction backups, adjusting their braking and acceleration to optimize for delivery windows while maximizing fuel efficiency.

“Slower-moving vehicles, interchanges, vehicles on the shoulder, and even unexpected obstacles are common on highways. The Kodiak Driver can identify, plan, and execute a path around obstacles to safely continue towards its destination,” Kodiak says on its website. “The Kodiak Driver was built from the ground up specifically for trucks. Trucks run for hundreds of thousands of miles, in the harshest of environments, for extremely long stretches. Our focus has always been on building technology that’s reliable, safe, automotive-grade, and commercial ready.”

The growing network of autonomous trucking

In the U.S. alone, the American Trucking Association (ATA) estimates that there are more than 3.5 million truck drivers on the roads, with close to 8 million people employed across the segment. Trucks moved more than 80.4% of all U.S. freight and generated $791.7 billion in revenue in 2019, according to the ATA.

But the growing driver shortage remains a strain on the industry. Estimates peg the shortfall of long-haul truck drivers at 60,000 in the U.S., a gap that’s projected to widen to 160,000 within the decade.

Chasing after the lucrative opportunity, autonomous vehicle startups focused on freight delivery have racked up hundreds of millions in venture capital. In May, Plus agreed to merge with a special purpose acquisition company in a deal worth an estimated $3.3 billion. Self-driving truck maker TuSimple raised $1 billion through an initial public offering (IPO) in March. Autonomous vehicle software developer Aurora filed for an IPO last week. And Waymo, which is pursuing driverless truck technology through its Waymo Via business line, has raised billions of dollars to date at a valuation of just over $30 billion.

Other competitors in the self-driving truck space include Wilson Logistics and Pony.ai. But Kodiak points to a minority investment from Bridgestone to test and develop smart tire technology as one of its key differentiators. BMW i Ventures is another backer, along with South Korean conglomerate SK, which is exploring the possibility of deploying Kodiak’s vehicle technology in Asia.

“Kodiak was founded in April 2018 and took delivery of its first truck in late 2018. We completed our first closed-course test drive just three weeks later, and began autonomously moving freight for [12] customers between Dallas and Houston in the summer of 2019,” Burnette said. “Our team is the most capital-efficient of the autonomous driving companies while also having developed industry leading technology. We plan to achieve driverless operations at scale for less than 10% of what Waymo has publicly raised to date, and less than 25% of what TuSimple has raised to date.”

Kodiak, whose headcount now stands at 85 people, recently said that it plans to expand freight-carrying pilots to San Antonio and other cities in Texas. The company also tests trucks in Mountain View, California.

In the next few months, Kodiak plans to add 15 new trucks to its fleet, for a total of 25.

“We are at a pivotal moment in the autonomous vehicle industry. It’s not a question of will autonomous trucking technology happen — it’s when is it going to happen,” Burnette continued. “That being said, logistics is an $800 billion-per-year industry with a lot of room for many players to be successful.”

DeepMind takes next step in robotics research

DeepMind is mostly known for its work in deep reinforcement learning, especially for mastering complicated games, and for predicting protein structures. Now, it is taking its next step in robotics research.

According to a blog post on DeepMind’s website, the company has acquired the rigid-body physics simulator MuJoCo and has made it freely available to the research community. MuJoCo is now one of several open-source platforms for training artificial intelligence agents used in robotics applications. Its free availability will have a positive impact on the work of scientists who are struggling with the costs of robotics research. It can also be an important factor for DeepMind’s future, both as a science lab seeking artificial general intelligence and as a business unit of one of the largest tech companies in the world.

Simulating the real world

Simulation platforms are a big deal in robotics. Training and testing robots in the real world is expensive and slow. Simulated environments, on the other hand, allow researchers to train multiple AI agents in parallel and at speeds much faster than real life. Today, most robotics research teams do the bulk of their AI model training in simulated environments; the trained models are then tested and further fine-tuned on real physical robots.

The past few years have seen the launch of several simulation environments for reinforcement learning and robotics.

MuJoCo, which stands for Multi-Joint Dynamics with Contact, is not the only game in town. There are other physics simulators such as PyBullet, Roboschool, and Isaac Gym. What makes MuJoCo stand out is the fine-grained detail that has gone into simulating contact surfaces: it models the laws of physics more accurately, which shows in the emergence of physical phenomena such as Newton’s cradle.

MuJoCo also has built-in features that support the simulation of musculoskeletal models of humans and animals, which is especially important in bipedal and quadruped robots.

The increased accuracy of the physics environment can help reduce the differences between the simulated environment and the real world. Called the “sim2real gap,” these differences cause a degradation in the performance of the AI models when they are transferred from simulation to the real world. A smaller sim2real gap reduces the need for adjustments in the physical world.
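
As a minimal illustration of how a training loop drives such a simulator, the sketch below loads a toy scene and steps the physics. It assumes the open-source `mujoco` Python bindings are installed, and the scene itself is our own example, not from DeepMind:

```python
# Minimal sketch: load a toy scene with the `mujoco` Python bindings and
# step the physics, as the inner loop of a training pipeline would.
import mujoco

TOY_SCENE = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TOY_SCENE)
data = mujoco.MjData(model)

for _ in range(2000):            # 2000 steps at the default 2 ms timestep
    mujoco.mj_step(model, data)  # integrates rigid-body dynamics + contacts

# qpos of a free joint is (x, y, z, qw, qx, qy, qz); the box settles on the plane.
print("box height after 4 simulated seconds:", data.qpos[2])
```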

Making MuJoCo available for free

Before DeepMind open-sourced MuJoCo, many researchers were frustrated with its license costs and opted to use the free PyBullet platform. In 2017, OpenAI released Roboschool, a license-free alternative to MuJoCo, for Gym, its toolkit for training deep reinforcement learning models for robotics and other applications.

“After we launched Gym, one issue we heard from many users was that the MuJoCo component required a paid license … Roboschool removes this constraint, letting everyone conduct research regardless of their budget,” OpenAI wrote in a blog post.

A more recent paper by researchers at Cardiff University states that “The cost of a Mujoco institutional license is at least $3000 per year, which is often unaffordable for many small research teams, especially when a long-term project depends on it.”

DeepMind’s blog refers to a recent article in PNAS that discusses the use of simulation in robotics. The authors recommend better support for the development of open-source simulation platforms and write, “A robust and feature-rich set of four or five simulation tools available in the open-source domain is critical to advancing the state of the art in robotics.”

“In line with these aims, we’re committed to developing and maintaining MuJoCo as a free, open-source, community-driven project with best-in-class capabilities,” DeepMind’s blog post states.

It is worth noting, however, that license fees account for a very small part of the costs of training AI models for robots. The computational costs of robotics research tend to rise along with the complexity of the application.

MuJoCo only runs on CPUs, according to its documentation. It hasn’t been designed to leverage the power of GPUs, which have many more computation cores than traditional processors.

A recent paper by researchers at the University of Toronto, Nvidia, and other organizations highlights the limits of simulation platforms that work on CPUs only. For example, Dactyl, a robotic hand developed by OpenAI, was trained on a compute cluster comprising around 30,000 CPU cores. These kinds of costs remain a challenge with CPU-based platforms such as MuJoCo.

DeepMind’s view on intelligence

DeepMind’s mission is to develop artificial general intelligence (AGI), the flexible kind of innate and learned problem-solving capabilities found in humans and animals. While the path to AGI (and whether we will ever reach it or not) is hotly debated among scientists, DeepMind has a clearly expressed view on it.

In a paper published earlier this year, some of DeepMind’s top scientists suggested that “reward is enough” to reach AGI. According to DeepMind’s scientists, if you have a complex environment, a well-defined reward, and a good reinforcement learning algorithm, you can develop AI agents that will acquire the traits of general intelligence. Richard Sutton, who is among the co-authors of the paper, is one of the pioneers of reinforcement learning and describes it as “the first computational theory of intelligence.”

The acquisition of MuJoCo can provide DeepMind with a powerful tool to test this hypothesis and gradually build on top of its results. By making it available to small research teams, DeepMind can also help nurture talent it will hire in the future.

MuJoCo can also boost DeepMind’s efforts to turn a profit for its parent company, Alphabet. In 2020, the AI lab recorded its first profit after six years of sizable costs for Alphabet. DeepMind is already home to some of the brightest scientists in AI. And with autonomous mobile robots such as Boston Dynamics’ Spot slowly finding their market, DeepMind might be able to develop a business model that serves both its scientific goals and its owner’s interests.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

A call for increased visual representation and diversity in robotics

Sometimes it’s the obvious things that are overlooked. Why aren’t there pictures of women building robots on the internet? Or if they are there, why can’t we find them when we search? I have spent decades doing outreach activities, providing STEM opportunities, and running women-in-robotics speaker and networking events. So I’ve done a lot of image searches looking for a representative picture. And every single time, I have scrolled through page after page of search results ranging from useless to downright insulting.

Finally, I counted.

Above: Graph of Google image search results for the term “woman building robot,” with a female robot taking the lead, followed by a fake robot, and after that women standing near robots.

Image Credit: Andra Keay

My impressions were correct. The majority of the images you find when you look for ‘woman building robot’ are of female robots. This is not what happens if you search for ‘building robot’, or ‘man building robot’. That’s the insulting part, that this misrepresentation and misclassification hasn’t been challenged or fixed. Sophia the robot, or the ScarJo bot, or a sexbot has a much greater impact on the internet than women doing real robotics. What if male roboticists were confronted with pictures of robotic dildos whenever they searched for images of their work?

Above: Example of image results from Andra Keay’s Google search for ‘women building robots’

Image Credit: Andra Keay

The number of women in the robotics industry is hard to gauge. Best estimates are 5% in most locations, perhaps 10% in some areas. It is slowly increasing, but the robotics industry is also in a period of rapid growth, and everyone is struggling to hire. To my mind, the biggest wasted opportunity for a young robotics company growing like Topsy is depending on the friends-of-founders network when it leads to homogeneous hiring practices. The sooner you incorporate diversity, the easier it will be for you to scale and attract talent.

For a larger robotics company, the biggest wasted opportunity is not fixing retention. Across the board in the tech industry, retention rates for women and underrepresented minorities are much worse than for pale males. That means that you are doing something wrong. Why not seriously address the complaints of the workers who leave you? Otherwise, you’ll never retain diverse hires, no matter how much money you throw at acquiring them.

The money wasted in talent acquisition when you have poor retention should instead be used to improve childcare, or flexible work hours, or support for affinity groups, or to fire the creep that everyone complains about, or restructure so that you increase the number of female and minority managers. The upper echelons are echoing with the absence of diversity.

On the plus side, the number of pictures of girls building robots has definitely increased in the last ten years. As my own children have grown, I’ve seen more and more images showing girls building robots. But with two daughters now leaving college, I’ve had to tell them that robotics is not one of the female-friendly career paths (if any of them are), unless they are super passionate about it. Medicine, law, or data analytics might be better domains for their talents. As an industry, we can’t afford to lose bright young women. We can’t afford to lose talented older women. We can’t afford to overlook minority hires. The robotics industry is entering exponential growth. Capital is in abundance, market opportunities are in abundance. Talent is scarce.

These days, I’m focused on supporting professional women in the robotics community, in industry and academia. These are women doing critical research and building cutting-edge robots. What do solutions look like for them? Our wonderful annual Ada Lovelace Day list hosted on Robohub has raised awareness of many ‘new’ faces in robotics. But we have been forced to use profile pictures, primarily because that’s what is available. That’s also the tradition for profile pieces about the work women do in robotics: the focus is on the woman, not on the woman building, programming, or testing the robot. That means the images are not quite right as role models.

Above: Further examples from Andra Keay’s image search results that better represent women in robotics, showing women brainstorming on a see-through whiteboard and sitting near constructed robots.

Image Credit: Andra Keay

A real role model shows you the way forward. And that the future is in your hands. The Civil Rights activist Marian Wright Edelman said, “You can’t be what you can’t see.”

Above: A set of images from Andra Keay’s search results displaying the few good images found in the search more accurately representing women working in robotics.

Image Credit: Andra Keay

So Women in Robotics has launched a photo challenge. Our goal is to see more than 3 images of real women building robots in the top 100 search results. Our stretch goal is to see more images of women building robots than there are of female robots in the top 100 search results! Take great photos following these guidelines, hashtag your images #womeninrobotics #photochallenge #ibuildrobots, and upload them to Wikimedia with a creative commons license so that we can all use them. We’ll share them on the Women in Robotics organization website, too.

Andra Keay’s guidelines for what makes a great photo of women in robotics include: real robot programming; adults of various ages working on robotics; a single, active subject; individuals shown using tools or code to build a robot; unbranded images; and images the subject has given permission to use.

Above: Andra Keay’s guidelines for what makes a great, accurate, and realistic photo representing women in robotics.

Image Credit: Andra Keay

Hey, we’d also love mentions of Women in Robotics in any citable fashion! Wikipedia won’t let us have a page because we don’t have third-party references, and sadly, mentions of our Ada Lovelace Day lists by other organizations have not credited us. We are now an official 501c3 organization, registered in the US, with the mission of supporting women and non-binary people who work in robotics, or who are interested in working in robotics.

Above: Additional details of the Women in Robotics photo challenge and the call for submissions to photos@womeninrobotics.org.

Image Credit: Andra Keay

If a picture is worth a thousand words, then we can save a forest’s worth of outreach, diversity, and equity work, simply by showing people what women in robotics really do.

Nvidia releases robot toolbox to deepen support of AI-powered robotics in ROS

Nvidia announced today that Isaac, its developer toolbox for supporting AI-powered robotics, will deepen support of the Robot Operating System (ROS).

The announcement is being made this morning at ROS World 2021, a conference for developers, engineers, and hobbyists who work on ROS, a popular open-source framework that helps developers build and reuse code used for robotics applications.

Nvidia, which is trying to assert its lead as a supplier of processors for AI applications, announced a host of “performance perception” technologies that would be part of what it will now call Isaac ROS. This includes computer vision and AI/ML functionality in ROS-based applications to support things like autonomous robots.

The move comes as Amazon’s robotic platform, RoboMaker, has also moved quickly to support ROS.

ROS World 2021 is the ninth annual developers’ conference — modeled after PyCon and BoostCon — for developers of all levels to learn from and network with the ROS community.

Nvidia said its offerings are intended to accelerate and improve the standards of product development and product performance.

Isaac ROS GEM for optimized real-time stereo visual odometry

The newly launched Isaac ROS GEM for Stereo Visual Odometry helps autonomous vehicles keep track of where a camera is relative to its initial position – and, by extension, where the machine is within the larger environment.

With it, ROS developers get a stereo camera visual odometry solution fast enough to run in real time at HD resolution (>60 fps at 720p) on a Jetson AGX Xavier.
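
The GEM is a prebuilt, GPU-optimized ROS package, but the underlying idea of stereo visual odometry can be sketched generically: recover depth from the rectified stereo pair, then estimate frame-to-frame camera motion from 3D-2D correspondences. The OpenCV sketch below is illustrative only, with assumed camera intrinsics and baseline, and is not the Isaac ROS implementation:

```python
# Illustrative stereo visual odometry sketch (not the Isaac ROS GEM).
import cv2
import numpy as np

FX, BASELINE = 718.8, 0.54          # assumed focal length (px) and baseline (m)
K = np.array([[FX, 0.0, 607.2],
              [0.0, FX, 185.2],
              [0.0, 0.0, 1.0]])     # assumed camera intrinsics

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=9)
orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def depth_from_stereo(left, right):
    """Depth map from a rectified grayscale pair: Z = f * B / disparity."""
    disp = stereo.compute(left, right).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan
    return FX * BASELINE / disp

def relative_pose(prev_left, prev_depth, cur_left):
    """Frame-to-frame camera motion from 3D-2D matches (PnP + RANSAC)."""
    kp1, des1 = orb.detectAndCompute(prev_left, None)
    kp2, des2 = orb.detectAndCompute(cur_left, None)
    obj, img = [], []
    for m in matcher.match(des1, des2):
        u, v = kp1[m.queryIdx].pt
        z = prev_depth[int(v), int(u)]
        if np.isfinite(z) and z < 50.0:   # keep well-conditioned points
            obj.append([(u - K[0, 2]) * z / FX, (v - K[1, 2]) * z / FX, z])
            img.append(kp2[m.trainIdx].pt)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(np.float32(obj), np.float32(img), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # accumulate per frame to recover the full trajectory
```

A production system adds keyframing, outlier rejection, and drift correction on top of this loop, which is where a GPU-optimized implementation earns its real-time figures.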

ROS developers can now access all Nvidia NGC DNN inference models

With the DNN Inference GEM, ROS developers can leverage any of Nvidia’s inference models available on NGC or supply their own DNN; the optimized packages are deployed through TensorRT or Nvidia’s Triton inference server. The GEM is also compatible with U-Net and DOPE: U-Net generates semantic segmentation masks from images, while DOPE estimates three-dimensional poses for all detected objects. For integrating performant AI inference into a ROS application, the DNN Inference GEM is one of the fastest options available.
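
As a generic illustration of the Triton deployment path (not the GEM’s ROS interface), the sketch below sends an image batch to a Triton server over HTTP; the model name, tensor names, and shapes are assumptions:

```python
# Generic Triton HTTP client sketch; "unet" and the tensor names/shapes are
# assumptions for illustration, not the Isaac ROS GEM interface.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

image = np.random.rand(1, 3, 544, 960).astype(np.float32)   # stand-in input

inp = httpclient.InferInput("input", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("segmentation_mask")

result = client.infer(model_name="unet", inputs=[inp], outputs=[out])
mask = result.as_numpy("segmentation_mask")   # e.g., per-pixel class scores
print(mask.shape)
```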

Isaac Sim GA release for AI-powered robotics

Scheduled for launch in November 2021, the GA release of Isaac Sim will come with UI and performance improvements, making simulation-building much faster. The ROS bridge will improve, and so will the developer experience, with an increased number of ROS samples. The new release will reduce memory usage and startup times and improve occupancy map generation. New environment variants include large warehouses, offices, and hospitals, and new Python building blocks can interface with robots, objects, and environments.

Synthetic data generation workflow

Autonomous robots depend on large and diverse volumes of data to shape the AI models that run their perception stacks, so addressing safety and quality concerns hinges on that data. The new synthetic data workflow that comes with Isaac Sim helps build production-quality datasets that address those concerns.

The data generation workflow gives developers extensive control: over the stochastic distribution of objects in the scene, the scene itself, the lighting, the synthetic sensors, and the inclusion of crucial corner cases in the datasets. The workflow also helps version and debug the information needed to reproduce a dataset exactly, for auditing and safety.
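
In spirit, that control surface resembles the generic sketch below; it is illustrative only (not the Isaac Sim API), showing randomization over controlled distributions with a recorded seed per scene:

```python
# Generic domain-randomization sketch (illustrative; not the Isaac Sim API).
# Recording the seed with every scene makes the dataset exactly reproducible
# for auditing, as the workflow described above aims to do.
import random

OBJECT_CLASSES = ["pallet", "forklift", "box", "person"]

def sample_scene(seed: int) -> dict:
    rng = random.Random(seed)                  # deterministic per-scene RNG
    return {
        "seed": seed,                          # kept for exact reproduction
        "lighting_lux": rng.uniform(100, 2000),
        "objects": [
            {
                "class": rng.choice(OBJECT_CLASSES),
                "pose_xy": (rng.uniform(0, 50), rng.uniform(0, 20)),
                "yaw_deg": rng.uniform(0, 360),
            }
            for _ in range(rng.randint(5, 30)) # stochastic object count
        ],
    }

dataset = [sample_scene(seed) for seed in range(10_000)]
print(len(dataset), "scenes;", len(dataset[0]["objects"]), "objects in scene 0")
```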

DeepMind acquires and open-sources robotics simulator MuJoCo

DeepMind, the AI lab owned by Google parent company Alphabet, today announced that it has acquired and released the MuJoCo simulator, making it freely available to researchers as a precompiled library. In a blog post, the lab says that it’ll work to prepare the codebase for a release in 2022 and “continue to improve” MuJoCo as open-source software under the Apache 2.0 license.

A recent article in the Proceedings of the National Academy of Sciences exploring the state of simulation in robotics identifies open source tools as critical for advancing research. The authors’ recommendations are to develop open source simulation platforms as well as establish community-curated libraries of models, a step that DeepMind claims it has now taken.

“Our robotics team has been using MuJoCo as a simulation platform for various projects … Ultimately, MuJoCo closely adheres to the equations that govern our world,” DeepMind wrote. “We’re committed to developing and maintaining MuJoCo as a free, open-source, community-driven project with best-in-class capabilities. We’re currently hard at work preparing MuJoCo for full open sourcing.”

Simulating physics

MuJoCo, which stands for Multi-Joint Dynamics with Contact, is widely used within the robotics community alongside simulators like Facebook’s Habitat, OpenAI’s Gym, and DARPA-backed Gazebo. Initially developed by Emo Todorov, a neuroscientist and director of the Movement Control Laboratory at the University of Washington, MuJoCo was made available through startup Roboti LLC as a commercial product in 2015.

Unlike many simulators designed for gaming and film applications, MuJoCo takes few shortcuts that prioritize stability over accuracy. For example, the library accounts for gyroscopic forces, implementing full equations of motion — the equations that describe the behavior of a physical system in terms of its motion as a function of time. MuJoCo also supports musculoskeletal models of humans and animals, meaning that applied forces can be distributed correctly to the joints.
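
Concretely, the full equations of motion referenced above take the standard rigid-body form in generalized coordinates q, stated here for reference:

```latex
% Standard rigid-body equations of motion in generalized coordinates q:
%   M(q)         inertia matrix
%   C(q,\dot q)  Coriolis, centrifugal, and gyroscopic terms
%   g(q)         gravity
%   \tau         applied/actuator forces
%   J(q)^T f     contact forces mapped into joint space
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau + J(q)^{\top} f
```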

MuJoCo’s core engine is written in the programming language C, which makes it easily portable to other architectures. Moreover, the library’s scene description and simulation state are stored in just two data structures, which together constitute all the information needed to recreate a simulation, including results from intermediate stages.

“MuJoCo’s scene description format uses cascading defaults — avoiding multiple repeated values — and contains elements for real-world robotic components like equality constraints, motion-capture markers, tendons, actuators, and sensors. Our long-term roadmap includes standardising [it] as an open format, to extend its usefulness beyond the MuJoCo ecosystem,” DeepMind wrote.
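
As a toy illustration of cascading defaults (our own example, not from DeepMind’s post), the MJCF snippet below sets shared geom and joint properties once in a <default> block, with individual elements overriding only what differs; it can be loaded through MuJoCo’s open-source Python bindings:

```python
# Toy MJCF scene showing cascading defaults: the <default> block sets shared
# geom/joint properties once; elements override only what differs.
# Assumes the open-source `mujoco` Python bindings are installed.
import mujoco

SCENE = """
<mujoco>
  <default>
    <geom type="capsule" size="0.04" rgba="0.8 0.2 0.2 1"/>
    <joint type="hinge" damping="0.1"/>
  </default>
  <worldbody>
    <body pos="0 0 1">
      <joint axis="0 1 0"/>              <!-- inherits hinge + damping -->
      <geom fromto="0 0 0 0 0 -0.4"/>    <!-- inherits capsule, size, rgba -->
      <body pos="0 0 -0.4">
        <joint axis="0 1 0"/>
        <geom fromto="0 0 0 0 0 -0.4" rgba="0.2 0.2 0.8 1"/>  <!-- color override -->
      </body>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(SCENE)
print(model.ngeom, "geoms and", model.njnt, "joints")  # 2 geoms, 2 joints
```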

Of course, no simulator is perfect. A paper published by researchers at Carnegie Mellon outlines the issues with them, including:

  • The reality gap: No matter how accurate, simulated environments don’t always adequately represent physical reality.
  • Resource costs: The computational overhead of simulation requires specialized hardware like graphics cards, which drives high cloud costs.
  • Reproducibility: Even the best simulators can contain “non-deterministic” elements that make reproducing tests impossible.

Overcoming these is a grand challenge in simulation research. In fact, some experts believe that developing a simulation with 100% accuracy and complexity might require as much problem-solving and resources as developing robots themselves, which is why simulators are likely to be used in tandem with real-world testing for the foreseeable future.

MuJoCo 2.1 has been released as unlocked binaries, available at the project’s original website and on GitHub along with updated documentation. DeepMind is granting licenses to provide an unlocked activation key for legacy versions of MuJoCo (2.0 and earlier), which will expire on October 18, 2031.

DeepMind’s acquisition of MuJoCo comes after the company’s first profitable year. According to a filing last week, the company raked in £826 million ($1.13 billion) in revenue in 2020, more than three times the £265 million ($361 million) it reported in 2019.

Facebook open-sources robotics development platform Droidlet

Facebook today open-sourced Droidlet, a platform for building robots that leverage natural language processing and computer vision to understand the world around them. Droidlet simplifies the integration of machine learning algorithms in robots, according to Facebook, facilitating rapid software prototyping.

Robots today can be choreographed to vacuum the floor or perform a dance, but they struggle to accomplish much more than that. This is because they fail to process information at a deep level. Robots can’t recognize what a chair is or know that bumping into a spilled soda can will make a bigger mess, for example.

Droidlet isn’t a be-all and end-all solution to the problem, but rather a way to test out different computer vision and natural language processing models. It allows researchers to build systems that can accomplish tasks in the real world or in simulated environments like Minecraft or Facebook’s Habitat, and it supports using the same system on different robots by swapping out components as needed. The platform provides a dashboard to which researchers can add debugging and visualization widgets and tools, as well as an interface for correcting errors and annotating data. Droidlet also ships with wrappers for connecting machine learning models to robots, in addition to environments for testing vision models fine-tuned for the robot setting.

Modular design

Droidlet is made up of a collection of components — some heuristic, some learned — that can be trained with static data when convenient or dynamic data where appropriate. The design consists of several module-to-module interfaces:

  • A memory system that acts as a store for information across the various modules
  • A set of perceptual modules that process information from the outside world and store it in memory
  • A set of lower-level tasks, such as “Move three feet forward” and “Place item in hand at given coordinates,” that can effect changes in a robot’s environment
  • A controller that decides which tasks to execute based on the state of the memory system

Each of these modules can be further broken down into trainable or heuristic components, Facebook says, and the modules and dashboards can be used outside of the Droidlet ecosystem. For researchers and hobbyists, Droidlet also offers “battery-included” systems that can perceive their environment via pretrained object detection and pose estimation models and store their observations in the robot’s memory. Using this representation, the systems can respond to language commands like “Go to the red chair,” tapping a pretrained neural semantic parser that converts natural language into programs.
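
To make those module boundaries concrete, here is a minimal, hypothetical sketch of how the four interfaces could compose; the class names are illustrative, not Droidlet’s actual API:

```python
# Hypothetical sketch of the four Droidlet-style interfaces described above;
# class names and methods are illustrative, not the platform's actual API.
from typing import List, Optional

class Memory:
    """Shared store that every module reads from and writes to."""
    def __init__(self) -> None:
        self.facts: List[dict] = []

    def write(self, fact: dict) -> None:
        self.facts.append(fact)

    def query(self, **filters) -> List[dict]:
        return [f for f in self.facts
                if all(f.get(k) == v for k, v in filters.items())]

class Perception:
    """Processes observations and stores detections in memory."""
    def perceive(self, frame, memory: Memory) -> None:
        # A real module would run a pretrained object detector on `frame`.
        memory.write({"kind": "object", "label": "red chair", "xyz": (3.0, 0.0, 0.0)})

class MoveTo:
    """A low-level task that effects change in the environment."""
    def __init__(self, xyz) -> None:
        self.xyz = xyz

    def step(self) -> None:
        print(f"moving toward {self.xyz}")

class Controller:
    """Decides which task to run based on the state of memory."""
    def dispatch(self, command: str, memory: Memory) -> Optional[MoveTo]:
        # A real controller would call the neural semantic parser here.
        if command.startswith("go to "):
            label = command[len("go to "):].strip()
            hits = memory.query(kind="object", label=label)
            if hits:
                return MoveTo(hits[0]["xyz"])
        return None

memory, perception, controller = Memory(), Perception(), Controller()
perception.perceive(frame=None, memory=memory)
task = controller.dispatch("go to red chair", memory)
if task:
    task.step()
```

Because each piece talks only to memory, any one of them can be swapped out (a scripted heuristic for a learned model, say) without touching the others, which is the modularity the design aims for.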

“The Droidlet platform supports researchers building embodied agents more generally by reducing friction in integrating machine learning models and new capabilities, whether scripted or learned, into their systems, and by providing user experiences for human-agent interaction and data annotation,” Facebook wrote in a blog post. “As more researchers build with Droidlet, they will improve its existing components and add new ones, which others in turn can then add to their own robotics projects … With Droidlet, robotics researchers can now take advantage of the significant recent progress across the field of AI and build machines that can effectively respond to complex spoken commands like ‘Pick up the blue tube next to the fuzzy chair that Bob is sitting in.’”
