Categories
AI

Google is using AI to help users explore the topics they’re searching for — here’s how

“Can you get medicine for someone at the pharmacy?”

It’s a simple enough question for humans to understand, says Pandu Nayak, vice president of search at Google, but such a query represents the cutting-edge of machine comprehension. You and I can see that the questioner is asking if they can fill out a subscription for another person, Nayak tells The Verge. But until recently, if you typed this question into Google, it would direct you to websites explaining how to fill out your prescription. “It missed the subtlety that the prescription was for someone else,” he says.

The key to delivering the right answer, says Nayak, is AI, which Google is using today to improve its search results. The prescription query was solved in 2019, when Google integrated a machine learning model called BERT into search. As part of a new generation of AI language systems known as large language models (the most famous of which is OpenAI’s GPT-3), BERT was able to parse the nuances of our prescription query correctly and return the right results. Now, in 2021, Google is updating its search tools yet again, using another acronymized AI system that’s BERT’s successor: MUM.

Originally revealed at Google I/O in May, MUM is at least 1,000 times bigger than BERT, says Nayak; on the same order of magnitude as GPT-3, which has 175 billion parameters. (Parameters being a measure of a model’s size and complexity.) MUM is also multimodal, meaning it processes visual data as well as text. And it’s been trained on 75 languages, which allows the system to “generalize from languages where there’s a lot of data, like English, to languages where there’s less data, like Hindi,” says Nayak. That helps in ensuring that any upgrades it provides are spread across Google’s many markets.

A new feature rolling out in the coming months named “Things to know” will use AI to help users explore topics related to their searches.
Image: Google

Nayak speaks of MUM with pride, as the latest AI wunderkind trained in Google’s labs. But the company is also cautious. Large language models are controversial for a number of reasons. They’re prone to lying, for example — as happy writing fiction as fact. And they’ve been shown time and time again to encode racial and gender biases. This is a problem that Google’s own researchers have highlighted and been shot down for doing so. Notably, Google fired two of its top ethics researchers, Timnit Gebru and Margaret Mitchell, after they co-authored a paper highlighting problems with exactly this technology.

For these reasons, perhaps, the changes to search that Google is launching are relatively restrained. The company is introducing three new features “in the coming months,” some powered by MUM, each of which is ancillary to its search engine’s primary function — ranking web results. But Nayak says they’re just the tip of the iceberg when it comes to Google’s ambitions to improve its products with AI. “To me, this is just the start,” he says.

First, though, the features. Number one is called “Things to know” and acts as an advanced snippet function, pulling out answers to predicted questions based on user’s searches. Type in “acrylic painting,” for example, and “Things to know” will automatically generate new queries, like “How do you use household items in acrylic painting.” Nayak says there are certain “sensitive queries” that won’t trigger this response (like “bomb making”) but that most topics are automatically covered. It will be rolling out in the “coming months.”

The second new feature suggests further searches that might help users broaden or refine their queries. So, with the “acrylic painting” search above, Google might now suggest a narrower focus, like “acrylic painting techniques,” or a broader remit, like “different styles of painting.” As Nayak puts it, Google wants to use AI’s ability to recognize “the space of possibilities within [a] topic” and help people explore variants of their own searches. This feature will be available immediately, though it is not powered by MUM.

The third new feature is more straightforward and based on video transcription. When users are searching for video content, Google will use MUM to suggest new searches based on what it hears within the video. Nayak gives the example of watching a video about Macaroni penguins and Google suggesting a new search of “Macaroni penguin life story.” Again, it’s about suggesting new areas of search for users. This feature will launch on September 29th in English in the US.

In addition to these AI-based changes, Google is also expanding its “About This” feature in search, which will give new information about the source of results. It’s also bringing its MUM-powered AI smarts to its visual search tech, Google Lens.

Google will give users new option to “refine” or “broaden” their search — using MUM to explore related topics.
Image: Google

The change to search is definitely the main focus, but what’s interesting is also what Google isn’t launching. When it demoed MUM and another model LaMDA at I/O earlier this year, it showed off ambitious features where users could literally talk to the subjects of their searches, like the dwarf planet Pluto, and ask them questions. In another, users asked expansive questions, like “I just hiked Mt. Adams, I want to hike Mt. Fuji in the fall. What should I do differently?” before being directed to relevant snippets and web pages.

It seems these sorts of searches, which are rooted deeply in the functionality of large language models, are too free-form for Google to launch publicly. Most likely, the reason for this is that the language models could easily say the wrong thing. That’s when those bias problems come into play. For example, when GPT-3 is asked to complete a sentence like “Audacious is to boldness as Muslim is to …,” nearly a quarter of the time, it finishes the sentence with the word “terrorism.” These aren’t problems that are easy to navigate.

When questioned about these difficulties, Nayak reframes the problems. He says it’s obvious that language models suffer from biases but that this isn’t necessarily the challenge for Google. “Even if the model has biases, we’re not putting it out for people to consume directly,” he says. “We’re launching products. And what matters is, are the products serving our users? Are they surfacing undesirable things or not?”

But the company can’t completely stamp out these problems in its finished products either. Google’s Photo app infamously tagged Black people as “gorillas” in one well-known incident, and the sort of racial and gender-based discrimination present in language AI is often much more subtle and difficult to detect.

There’s also the problem of what the shift to AI-generated answers might mean for the wider future of Google search. In a speculative paper published earlier this year, Google’s researchers considered the question of replacing search altogether with large language models and highlighted a number of difficulties with the approach. (Nayak is definitive that this is not a serious prospect for the company: “That is absolutely not the plan.”)

And there’s also the consistent grumbling that Google continues to take up more space in search results with its own product, shunting searches to Google Shopping, Google Maps, and so on. The new MUM-powered “Things to know” feature certainly seems to be part of this trend: filleting out the most informative search results from web pages, and potentially stopping users from clicking through, and therefore sustaining the creator of that data.

Nayak’s response to this is that Google delivers more traffic to the web each year and that if it doesn’t “build compelling experiences” for users, then the company “will not be around to send traffic to the web” in the future. It’s not a wholly convincing answer. Google may deliver more traffic each year, but how much of that is just a function of increasing web use? And even if Google does disappear from search, wouldn’t other search engines pick up the slack in sending people traffic?

Whatever the case, it’s clear that the company is putting AI language understanding at the heart of its search tools — at the heart of Google, indeed. There are many open questions about the challenges of integrating this tech, but for now, Google is happy to continue the search for answers of its own.

Repost: Original Source and Author Link

Categories
Game

The new Assassin’s Creed educational tour lets you explore the Viking Age

Assassin’s Creed Discovery Tours can offer valuable educational insights into historical periods, and that may be particularly true for the latest instalment. Ubisoft has released a Discovery Tour: Viking Age update for Assassin’s Creed Valhalla that gives you the chance to explore Viking-era England and Norway without the usual conflicts. There’s a new format, however. Rather than go on guided tours and visit exhibits, you assume the roles of four Anglo-Saxon and Viking characters (such as Anglo-Saxon king Alfred the Great and a Viking merchant) as they undertake eight quests that illustrate their daily lives.

You can also study period artifacts from museums in the UK, France and Denmark. And yes, there are rewards to unlock in the main Valhalla game for Eivor and his longship.

The Discovery Tour update is free for Valhalla players on all platforms, and you can buy the $20 stand-alone version for PCs through either Ubisoft’s store or the Epic Games Store. Console and streaming players will have to wait until 2022 for a stand-alone release. However you get a copy, it could be a worthwhile experience if you’ve wanted to dispel the myths surrounding Vikings and their conquests.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link

Categories
Game

New Pokemon Snap content update adds three new areas to explore

So far, New Pokemon Snap is one of the Switch’s biggest releases of the year, and today Nintendo and The Pokemon Company announced that the game is getting a free content update. This doesn’t appear to be a small update either, as it adds 20 new Pokemon to photograph and three new areas to visit. So if you were hoping for more New Pokemon Snap, then it seems that you’re going to get it.

While The Pokemon Company didn’t outright the 20 new Pokemon being added to the game, there are definitely some glimpses offered in a new trailer that was released today. We get a look at the three new zones in the trailer as well, and they’re certainly very interesting, to say the least.

The first new area is called Secret Side Path, which has both day and night versions. On this course, the NEO-ONE will be miniaturized and you’ll be snapping photos of gigantic Pokemon. With that in mind, players should be able to snap some shots that aren’t possible in other zones, so Secret Side Path is definitely one of the more intriguing additions coming along with this update.

Mightywide River is the second new area, and as the name suggests, players will be floating down a river as they make their way through the course. In that way, it almost sounds like Mightywide River could be a spiritual successor to the original Pokemon Snap‘s Valley course. Finally, we’ve got the Barren Badlands, which harbors rocky cliffs, geysers, and “poisonous, gas-spewing swamps.” Like Secret Side Path, both Mightywide River and Barren Badlands have day and night courses as well.

Interestingly enough, not only has The Pokemon Company given us a release date for this update but it’s also provided a release time. The update will be going live at 6 PM PDT/9 PM EDT on Tuesday, August 3rd. Will you be returning to New Pokemon Snap to check out this update? Head down to the comments and let us know.

Repost: Original Source and Author Link

Categories
Tech News

Soft burrowing robot can explore the subterranean world

Scientists working with soft robots have created devices made for exploring all manner of environments, from the air to the ocean. Soft robots can also operate on dry land. Thanks to researchers from the University of California, Santa Barbara, we now have a soft robot capable of burrowing under the ground. Designers of the robot were inspired by plants and animals that evolved to navigate subterranean spaces.

The team developed fast and controllable soft robots that can burrow through sand enabling new applications for fast, precise, and minimally invasive movement underground. The team believes their work lays mechanical foundations for a new type of robot. Researcher Nicholas Naclerio says the biggest challenge with moving to the ground is the forces involved. When trying to move the ground, the robot has to push the soil, sand, or other medium away.

Researchers used principles employed by diverse organisms that successfully swim and dig within granular media to develop new mechanisms robots can use to move. The team created a vine-like soft robot designed to mimic plants in the way they navigate by growing from their tips while the remainder of the body remains stationary. Tip extension keeps forces low localized only to the growing end. Researchers note that if the entire body moved as it grew, friction over the entire surface would increase as the robot entered the sand until it couldn’t move.

Burrowing animals were the inspiration for another strategy for the soft robot called granular fluidization. That process suspends particles in a fluid-like state allowing the animal to overcome high levels of resistance presented by sand or loose soil. The researchers specifically modeled the southern sand octopus, which shoots a jet of water into the ground and uses its arms to pull itself into the temporarily loosened sand.

The team created a small, exploratory soft robot with multiple applications where burrowing through dry granular media is needed. It has the potential to be used in soil sampling, underground installation of utilities, and erosion control. The researchers are currently working on a project with NASA to develop burrowing robots for moon exploration and other uses.

Repost: Original Source and Author Link

Categories
Tech News

Mars, schmars — let’s explore these other planets instead

Last month, China successfully landed and deployed the Zhurong rover on Mars, becoming the second country ever to set wheels on the surface of the red planet.

Last year the United States, the United Arab Emirates and China all launched missions to Mars, taking advantage of the relatively short journey time offered by the two planets’ unusually close proximity.

Why are planetary scientists so obsessed with Mars? Why spend so much time and money on this one planet when there are at least seven others in our solar system, more than 200 moons, countless asteroids, and much more besides?

Fortunately, we are going to other worlds, and there are lots of missions to very exciting places in our solar system — worlds bursting with exotic features such as ice volcanoes, rings of icy debris, and huge magnetic fields.

There are currently 26 active spacecraft dotted around our solar system. Some are orbiting other planets and moons, some have landed on the surfaces of other worlds, and some have performed fly-bys to beam back images. Only half of them are visiting Mars.

Included in those 26 spacecraft are long-term missions like Voyager 1 and 2 – which are still operational after over 40 years and have now left the Solar system and ventured into interstellar space. And it also includes some less famous, but no less weird and wonderful, spacecraft.

 

Credit: Olaf Frohn