What Google’s AI-designed chip tells us about the nature of intelligence


In a paper published in the peer-reviewed scientific journal Nature last week, scientists at Google Brain introduced a deep reinforcement learning technique for floorplanning, the process of arranging the placement of different components of computer chips.

The researchers managed to use the reinforcement learning technique to design the next generation of Tensor Processing Units, Google’s specialized artificial intelligence processors.

The use of software in chip design is not new. But according to the Google researchers, the new reinforcement learning model “automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.” And it does it in a fraction of the time it would take a human to do so.

The AI’s superiority to human performance has drawn a lot of attention. One media outlet described it as “artificial intelligence software that can design computer chips faster than humans can” and wrote that “a chip that would take humans months to design can be dreamed up by [Google’s] new AI in less than six hours.”

Another outlet wrote, “The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.”

But while reading the paper, what amazed me was not the intricacy of the AI system used to design computer chips but the synergies between human and artificial intelligence.

Analogies, intuitions, and rewards


The paper describes the problem as such: “Chip floorplanning involves placing netlists onto chip canvases (two-dimensional grids) so that performance metrics (for example, power consumption, timing, area and wirelength) are optimized, while adhering to hard constraints on density and routing congestion.”

Basically, you want to place the components in the optimal arrangement. But as the number of components in a chip grows, finding an optimal design becomes vastly more difficult.

Existing software helps speed up the process of discovering chip arrangements, but it falls short as the target chip grows in complexity. The researchers decided to draw on the way reinforcement learning has solved other complex search problems, such as the game Go.

“Chip floorplanning is analogous [emphasis mine] to a game with varying pieces (for example, netlist topologies, macro counts, macro sizes and aspect ratios), boards (varying canvas sizes and aspect ratios) and win conditions (relative importance of different evaluation metrics or different density and routing congestion constraints),” the researchers wrote.

This is the manifestation of one of the most important and complex aspects of human intelligence: analogy. We humans can draw out abstractions from a problem we solve and then apply those abstractions to a new problem. While we take these skills for granted, they’re what makes us very good at transfer learning. This is why the researchers could reframe the chip floorplanning problem as a board game and could tackle it in the same way that other scientists had solved the game of Go.

Deep reinforcement learning models can be especially good at searching very large spaces, a feat that is beyond the computing power of the human brain. But the scientists faced a problem that was orders of magnitude more complex than Go. “[The] state space of placing 1,000 clusters of nodes on a grid with 1,000 cells is of the order of 1,000! (greater than 10^2,500), whereas Go has a state space of 10^360,” the researchers wrote. The chips they wanted to design would be composed of millions of nodes.
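The gap between the two search spaces is easy to verify with a few lines of Python, treating a placement as an ordered assignment of 1,000 clusters to 1,000 grid cells:

```python
import math

# The number of ways to assign 1,000 node clusters to 1,000 grid cells
# (one cluster per cell, order matters) is 1,000 factorial.
# math.lgamma(n + 1) gives ln(n!), avoiding a gigantic integer.
log10_states = math.lgamma(1001) / math.log(10)  # log10(1000!)
print(f"1000! is about 10^{log10_states:.0f}")   # about 10^2568

# For comparison, Go's state space is cited as roughly 10^360.
print(log10_states > 360)  # True
```

This confirms the paper's figure: 10^2,500 is a loose lower bound, and the actual count is closer to 10^2,568.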

They solved the complexity problem with an artificial neural network that could encode chip designs as vector representations and made it much easier to explore the problem space. According to the paper, “Our intuition [emphasis mine] was that a policy capable of the general task of chip placement should also be able to encode the state associated with a new unseen chip into a meaningful signal at inference time. We therefore trained a neural network architecture capable of predicting reward on placements of new netlists, with the ultimate goal of using this architecture as the encoder layer of our policy.”

The term intuition is often used loosely. But it is a very complex and little-understood process that involves experience, unconscious knowledge, pattern recognition, and more. Our intuitions come from years of working in one field, but they can also be obtained from experiences in other areas. Fortunately, putting these intuitions to the test is becoming easier with the help of high-power computing and machine learning tools.

It’s also worth noting that reinforcement learning systems need a well-designed reward. In fact, some scientists believe that with the right reward function, reinforcement learning is enough to reach artificial general intelligence. Without the right reward, however, an RL agent can get stuck in endless loops, doing stupid and meaningless things. In one famous example, an RL agent playing Coast Runners maximized its points by driving in circles to collect power-ups, abandoning the main goal, which is to win the race.

Google’s scientists designed the reward for the floorplanning system as the “negative weighted sum of proxy wirelength, congestion and density.” The weights are hyperparameters they had to adjust during the development and training of the reinforcement learning model.
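As a rough sketch, the reward the paper describes can be written as a simple function. The weight values below are illustrative placeholders, not the hyperparameters Google actually tuned:

```python
def placement_reward(wirelength, congestion, density,
                     w_wire=1.0, w_cong=0.5, w_dens=0.5):
    """Negative weighted sum of proxy cost terms, per the paper's description.

    The weights here are made-up defaults for illustration; in the paper
    they are hyperparameters tuned during model development and training.
    """
    return -(w_wire * wirelength + w_cong * congestion + w_dens * density)

# A placement with shorter wires and less congestion scores higher
# (i.e., closer to zero), so the agent is pushed toward better layouts.
better = placement_reward(wirelength=10.0, congestion=2.0, density=1.0)
worse = placement_reward(wirelength=50.0, congestion=8.0, density=3.0)
assert better > worse
```

Because every cost term enters with a negative sign, maximizing this reward is the same as minimizing the weighted sum of the proxy metrics.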

With the right reward, the reinforcement learning model was able to take advantage of its compute power and find all kinds of ways to design floorplans that maximized the reward.

Curated datasets

The deep neural network used in the system was developed using supervised learning. Supervised machine learning requires labeled data to adjust the parameters of the model during training. Google’s scientists created “a dataset of 10,000 chip placements where the input is the state associated with a given placement and the label is the reward for that placement.”
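To make the setup concrete, here is a minimal, hypothetical sketch of that supervised recipe, with a linear least-squares model standing in for the neural-network reward predictor and synthetic vectors standing in for placement states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the dataset of 10,000 placements: each row is
# a feature vector describing a placement state, and the label is the
# scalar reward computed for that placement.
X = rng.normal(size=(10_000, 8))                  # placement features
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=10_000)   # noisy reward labels

# Least squares plays the role of supervised training: fit parameters
# that map placement features to rewards.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The trained predictor generalizes to placements it has never seen,
# which is the property Google relied on for unseen netlists.
X_new = rng.normal(size=(5, 8))
errors = X_new @ w_hat - X_new @ true_w
print(np.round(errors, 3))  # errors near zero
```

The real system uses a far richer encoder over netlist graphs, but the training contract is the same: placement state in, predicted reward out.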

To avoid manually creating every floorplan, the researchers used a mix of human-designed plans and computer-generated data. There’s not much information in the paper about how much human effort was involved in the evaluation of the algorithm-generated examples included in the training dataset. But without quality training data, supervised learning models will end up making poor inferences.

In this sense, the AI system is different from other reinforcement learning programs such as AlphaZero, which developed its game-playing policy without the need for human input. In the future, the researchers might develop an RL agent that can design its own floorplans without the need for supervised learning components. But my guess is that, given the complexity of the problem, there’s a great chance that solving such problems will continue to require a combination of human intuition, machine learning, and high-performance computing.

Reinforcement learning design vs human design

Among the interesting aspects of the work presented by Google’s researchers is the layout of the chips. We humans use all kinds of shortcuts to overcome the limits of our brains. We can’t tackle complex problems in one big chunk. But we can design modular, hierarchical systems to divide and conquer complexity. Our ability to think and design top-down architectures has played a great part in developing systems that can perform very complicated tasks.

I’ll give an example from software engineering, my own area of expertise. In theory, you could write an entire program as one very large, contiguous stream of commands in a single file. But software developers never write their programs that way. We create software in small pieces (functions, classes, modules) that can interact with each other through well-defined interfaces. We then nest those pieces into larger pieces and gradually create a hierarchy of components. You don’t need to read every line of a program to understand what it does. Modularity enables multiple programmers to work on a single program and several programs to reuse previously built components. Sometimes, just looking at the class architecture of a program is enough to point you in the right direction to locate a bug or find the right place to add an upgrade. We often trade speed for modularity and better design.
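A toy Python example illustrates the trade: the monolithic and modular versions below compute the same result, but only the second exposes named pieces that can be reused and tested in isolation:

```python
# Monolithic version: one contiguous stream of statements.
prices = [3.0, 4.5, 2.25]
total = 0.0
for p in prices:
    total += p
total = total * 1.08  # add 8% tax inline, with no named interface

# Modular version: small pieces behind well-defined interfaces.
TAX_RATE = 0.08

def subtotal(prices):
    """Sum the line-item prices."""
    return sum(prices)

def with_tax(amount, rate=TAX_RATE):
    """Apply sales tax to an amount."""
    return amount * (1 + rate)

def invoice_total(prices):
    """Compose the smaller pieces into the full computation."""
    return with_tax(subtotal(prices))

# Both versions agree, but each modular piece can be reused elsewhere.
assert abs(invoice_total(prices) - total) < 1e-9
```

The modular version is slightly longer and marginally slower to call, which is exactly the speed-for-structure trade the paragraph describes.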

After a fashion, the same can be seen in the design of computer chips. Human-designed chips tend to have neat boundaries between different modules. The floorplans designed by Google’s reinforcement learning agent, on the other hand, follow the path of least resistance, regardless of how the layout looks (see picture below).


Above: Right: Manually designed chip. Left: AI-designed chip.

I’m interested to see whether this will become a sustainable design model in the future or whether it will require some type of compromise between highly optimized machine learning–generated designs and the top-down order imposed by human engineers.

AI + human intelligence

As Google’s reinforcement learning–powered chip designer shows, innovations in AI hardware and software will continue to require abstract thinking, finding the right problems to solve, developing intuitions about the solutions, and choosing the right kind of data to validate the solutions. Those are the kinds of skills that better AI chips can enhance but not replace.

At the end of the day, I don’t see this as a story of “AI outsmarting humans,” “AI creating smarter AI,” or AI developing “recursive self-improvement” capabilities. It is rather a manifestation of humans finding ways to use AI as a prop to overcome their own cognitive limits and extend their capabilities. If there’s a virtuous cycle, it’s one of AI and humans finding better ways to cooperate.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on TechTalks. Copyright 2021





Google’s Wear OS update for offline audio has me re-hyped about smartwatches

I’ve been turned off smartwatches for a while now. Between their poor battery life and their uninteresting designs, I’ve found I much prefer the simplicity and elegance of my analogue timepieces. Since the pandemic, though, things have changed.

In order to get out of the house and avoid screens, I now walk about 10km a day, time I use to catch up on audiobooks and podcasts. The thing is, I’d like to do this while leaving my phone at home. I’ve finally found my reason to get a smartwatch — and Google has just made this decision easier.

The company is working on a major update to its Wear OS platform to make the offline audio experience for wearables far slicker and more useful later this year.

You’ll basically be able to download music and podcasts from apps like Spotify and YouTube Music to your watch, so you don’t need to carry your phone along on your morning stroll. As someone who hates strapping a 6.7-inch phone to their arm in a sweaty holder thing twice a day, I can say this will be a godsend.

In a video accompanying the announcement, Bjorn Kilburn, the director of Product Management for Wear, talks about YouTube Music on smartwatches (starting at 2:38).

For those wondering if this sounds familiar, let me clear the air: as The Verge noted, a few pricey Garmin and Samsung wearables supported offline Spotify music storage and playback, and LTE-equipped Apple Watch models can stream tunes too.

This development changes things. It opens up the possibility of listening to your audio on a wider (and hopefully cheaper) range of smartwatches, as well as consuming less power since they won’t need an LTE connection to stream content from the cloud.


Stadia Was a No-Show at Google’s 2-hour I/O Keynote

Google I/O 2021 is off with a bang. The developer conference kicked off with a two-hour keynote speech that ushered in the future of Google. We saw some eye-popping developments in machine learning, more inclusive design considerations, and much more across Google’s suite of products.

There was one notable absence at the party: Stadia. Google’s gaming service didn’t get any new announcements during the platform-spanning keynote. In fact, the word Stadia wasn’t mentioned once.

On one hand, that’s not too surprising. Google I/O isn’t exactly a consumer-facing event; it’s a conference that’s focused on large-scale tech innovations. It would have been a little strange to see Google showing off Resident Evil Village footage in-between announcements about its commitment to inclusivity and privacy.

Still, it’s a little odd not to hear the service mentioned at all, considering Google’s lofty goals for cloud gaming. When the service was first revealed, it got a full half-hour spotlight that emphasized how the tech made gaming more accessible. Stadia even got a big deep dive during the 2019 I/O event, demoing the technology. For a time, Google was positioning it as part of its vision for the future.

This year, Stadia appears on the I/O schedule only once, as part of an AMA for the app creation tool Flutter. Stadia itself won’t be onstage beyond that this year, according to Google, but much is in store for the service, including new games and quality-of-life improvements. Stadia is also coming to Chromecast with Google TV soon, so there’s a lot in the pipeline. It just doesn’t appear that Google has bigger-picture developments to share with us, at least for now.

Additionally, with E3 coming up, it’s possible that Google is saving any news for that event, where it could speak to gamers directly rather than tech-heads. There’s also the chance that there’s just not much to be done in terms of high-level innovation. At this point, the name of the game for Stadia is stability. Getting it to the point where anyone can play, regardless of their internet access, is a foundational goal for all cloud gaming services. Perhaps the nuance of how gaming servers work is just a little too granular for this kind of event.

Even so, questions remain about how serious Google is about supporting Stadia long-term. Recently, the company reportedly fired around 150 developers as Stadia shifted away from in-house games. Since then, there hasn’t been much big news about Google’s plans, save for Resident Evil Village launching on the platform.

Google appears to be at a crossroads with Stadia. As cloud gaming becomes more competitive thanks to established gaming companies like Microsoft, Stadia still needs to find its niche or a selling point that sets it apart. For the moment, it doesn’t seem like anything significant enough is happening behind the scenes to merit a mention at Google’s biggest showcase of the year.

Perhaps we’ll see those details emerge at E3 in just a few weeks.




Google’s new AI can have eerily human-like conversations

Google has unveiled a creepily human-like language model called LaMDA that CEO Sundar Pichai says “can converse on any topic.”

During a demo at Google I/O 2021 on Wednesday, Pichai showcased how LaMDA can enable new ways of conversing with data, like chatting to Pluto about life in outer space or asking a paper airplane about its worst travel experiences.
