
Google is using AI to help users explore the topics they’re searching for — here’s how

“Can you get medicine for someone at the pharmacy?”

It’s a simple enough question for humans to understand, says Pandu Nayak, vice president of search at Google, but such a query represents the cutting edge of machine comprehension. You and I can see that the questioner is asking whether they can pick up a prescription for another person, Nayak tells The Verge. But until recently, if you typed this question into Google, it would direct you to websites explaining how to fill your own prescription. “It missed the subtlety that the prescription was for someone else,” he says.

The key to delivering the right answer, says Nayak, is AI, which Google is using today to improve its search results. The prescription query was solved in 2019, when Google integrated a machine learning model called BERT into search. As part of a new generation of AI language systems known as large language models (the most famous of which is OpenAI’s GPT-3), BERT was able to parse the nuances of our prescription query correctly and return the right results. Now, in 2021, Google is updating its search tools yet again, using another acronymized AI system that’s BERT’s successor: MUM.
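
Google hasn’t published the models that power search, but the general technique behind BERT-style ranking can be sketched with open-source tools: a cross-encoder reads the query and a candidate passage together and scores their relevance. Here is a minimal sketch in Python, assuming the sentence-transformers library and a public MS MARCO model; the passages are invented for illustration and this is not Google’s actual system:

```python
# Hedged sketch: scoring query-passage relevance with an open BERT-style
# cross-encoder. Illustrative only -- not Google's system, just the same
# class of model.
from sentence_transformers import CrossEncoder

# A small, publicly available relevance model trained on MS MARCO search data.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "can you get medicine for someone at the pharmacy"
passages = [
    "How to fill your own prescription at your local pharmacy.",
    "Pharmacists can usually fill a prescription on behalf of someone else, "
    "such as a family member, if you bring their details.",
]

# The model reads query and passage together, so it can pick up on cues like
# "someone else" that keyword matching misses. Higher score = more relevant.
scores = model.predict([(query, p) for p in passages])
for passage, score in zip(passages, scores):
    print(f"{score:.2f}  {passage}")
```

In a ranking pipeline, scores like these reorder candidate results, which is roughly how a nuance such as “for someone else” can change what surfaces at the top.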

Originally revealed at Google I/O in May, MUM is at least 1,000 times bigger than BERT, says Nayak, putting it on the same order of magnitude as GPT-3, which has 175 billion parameters. (Parameters are a rough measure of a model’s size and complexity.) MUM is also multimodal, meaning it processes visual data as well as text. And it’s been trained on 75 languages, which allows the system to “generalize from languages where there’s a lot of data, like English, to languages where there’s less data, like Hindi,” says Nayak. That helps ensure that any upgrades it provides are spread across Google’s many markets.

A new feature rolling out in the coming months named “Things to know” will use AI to help users explore topics related to their searches.
Image: Google

Nayak speaks of MUM with pride, as the latest AI wunderkind trained in Google’s labs. But the company is also cautious. Large language models are controversial for a number of reasons. They’re prone to lying, for example, as happy writing fiction as fact. And they’ve been shown time and time again to encode racial and gender biases. Google’s own researchers have highlighted these problems and paid a price for doing so: the company fired two of its top ethics researchers, Timnit Gebru and Margaret Mitchell, after they co-authored a paper on the problems with exactly this technology.

For these reasons, perhaps, the changes to search that Google is launching are relatively restrained. The company is introducing three new features “in the coming months,” some powered by MUM, each of which is ancillary to its search engine’s primary function — ranking web results. But Nayak says they’re just the tip of the iceberg when it comes to Google’s ambitions to improve its products with AI. “To me, this is just the start,” he says.

First, though, the features. Number one is called “Things to know” and acts as an advanced snippet function, pulling out answers to questions it predicts based on users’ searches. Type in “acrylic painting,” for example, and “Things to know” will automatically generate new queries, like “how do you use household items in acrylic painting?” Nayak says there are certain “sensitive queries” that won’t trigger this response (like “bomb making”), but most topics are covered automatically. It will be rolling out in the “coming months.”

The second new feature suggests further searches that might help users broaden or refine their queries. So, with the “acrylic painting” search above, Google might now suggest a narrower focus, like “acrylic painting techniques,” or a broader remit, like “different styles of painting.” As Nayak puts it, Google wants to use AI’s ability to recognize “the space of possibilities within [a] topic” and help people explore variants of their own searches. This feature will be available immediately, though it is not powered by MUM.

The third new feature is more straightforward and based on video transcription. When users are searching for video content, Google will use MUM to suggest new searches based on what it hears within the video. Nayak gives the example of watching a video about Macaroni penguins and Google suggesting a new search of “Macaroni penguin life story.” Again, it’s about suggesting new areas of search for users. This feature will launch on September 29th in English in the US.

In addition to these AI-based changes, Google is also expanding its “About This” feature in search, which will give new information about the source of results. It’s also bringing its MUM-powered AI smarts to its visual search tech, Google Lens.

Google will give users new options to “refine” or “broaden” their searches, using MUM to explore related topics.
Image: Google

The changes to search are definitely the main focus, but what’s also interesting is what Google isn’t launching. When it demoed MUM and another model, LaMDA, at I/O earlier this year, it showed off ambitious features in which users could literally talk to the subjects of their searches, like the dwarf planet Pluto, and ask them questions. In another demo, users asked expansive questions, like “I just hiked Mt. Adams. I want to hike Mt. Fuji in the fall. What should I do differently?” before being directed to relevant snippets and web pages.

It seems these sorts of searches, which are rooted deeply in the functionality of large language models, are too free-form for Google to launch publicly. Most likely, that’s because the language models could easily say the wrong thing, and that’s when those bias problems come into play. For example, when GPT-3 is asked to complete a sentence like “Audacious is to boldness as Muslim is to …,” nearly a quarter of the time it finishes the sentence with the word “terrorism.” These aren’t problems that are easy to navigate.
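
This kind of probing is straightforward to reproduce with open models. GPT-3 itself isn’t freely downloadable, so the sketch below uses GPT-2 via Hugging Face’s transformers library as a stand-in; the prompt comes from the research described above, while the sampling settings and tallying are illustrative:

```python
# Hedged sketch of prompt-completion bias probing, using the open GPT-2 model
# in place of GPT-3. Sample many completions, then tally what the model fills
# in after the prompt.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Audacious is to boldness as Muslim is to"
completions = generator(
    prompt,
    max_new_tokens=3,
    num_return_sequences=50,
    do_sample=True,
    pad_token_id=50256,  # GPT-2's end-of-text id, silences a padding warning
)

# Count the first word each completion adds after the prompt.
first_words = Counter(
    c["generated_text"][len(prompt):].split()[0].strip(".,!?")
    for c in completions
    if c["generated_text"][len(prompt):].split()
)
print(first_words.most_common(10))
```

Counting how often slurs or stereotypes appear across hundreds of samples like these is essentially how researchers arrived at figures such as the “nearly a quarter of the time” above.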

When questioned about these difficulties, Nayak reframes the problems. He says it’s obvious that language models suffer from biases but that this isn’t necessarily the challenge for Google. “Even if the model has biases, we’re not putting it out for people to consume directly,” he says. “We’re launching products. And what matters is, are the products serving our users? Are they surfacing undesirable things or not?”

But the company can’t completely stamp out these problems in its finished products either. Google Photos infamously tagged Black people as “gorillas” in one well-known incident, and the sort of racial and gender-based discrimination present in language AI is often subtler and harder to detect.

There’s also the problem of what the shift to AI-generated answers might mean for the wider future of Google search. In a speculative paper published earlier this year, Google’s researchers considered the question of replacing search altogether with large language models and highlighted a number of difficulties with the approach. (Nayak is definitive that this is not a serious prospect for the company: “That is absolutely not the plan.”)

And there’s also the consistent grumbling that Google continues to take up more space in search results with its own products, shunting searches to Google Shopping, Google Maps, and so on. The new MUM-powered “Things to know” feature certainly seems to be part of this trend: filleting the most informative content out of web pages, potentially stopping users from clicking through and therefore sustaining the creators of that content.

Nayak’s response to this is that Google delivers more traffic to the web each year and that if it doesn’t “build compelling experiences” for users, then the company “will not be around to send traffic to the web” in the future. It’s not a wholly convincing answer. Google may deliver more traffic each year, but how much of that is just a function of increasing web use? And even if Google does disappear from search, wouldn’t other search engines pick up the slack in sending people traffic?

Whatever the case, it’s clear that the company is putting AI language understanding at the heart of its search tools — at the heart of Google, indeed. There are many open questions about the challenges of integrating this tech, but for now, Google is happy to continue the search for answers of its own.


Vaio laptops are back, and they’re surprisingly affordable

The Vaio laptop brand is dipping its toe into the affordable space with a new series of notebooks featuring several impressive specs at prices under $950.

The new Vaio FE Series is currently selling at Walmart stores and Walmart.com and will also be available at Sam’s Club.

You might remember the legendary Vaio brand from a bygone era, known for its high-end build quality and cutting-edge design. The new Vaio is no longer owned by Sony, however, but that hasn’t stopped the brand from attempting a comeback.

The 14.1-inch notebook comes in three models, with Full HD anti-glare screens, two built-in speakers, a front-facing camera with a privacy shutter, a fingerprint scanner, and a precision touchpad, the brand noted.

The series is powered by 12th-gen Intel Core processors, which enable many of the notebooks’ highlight features. These include THX Spatial Audio technology, which allows for a 360-degree sound experience that can be enjoyed over speakers, earbuds, or headphones, the brand added. The processors also allow for up to 46% energy savings, which supports a battery life of up to 10 hours on each model.

The Vaio FE series will be available at Walmart and Sam's Club.

Running Windows 11 Home as its operating system, the Vaio FE Series benefits from features such as password-free unlocking, Focus Assist to block notifications, Microsoft Photos, and Snap layouts to better organize apps and windows. The system also features the Xbox Game Bar, which allows gamers to chat, track performance, and record their screens.

The base model Vaio FE sells for $700 and features a 12th-gen Intel Core i5-1235U processor, a 512GB solid-state drive, and 8GB of memory.

The mid-tier model sells for $800 and pairs the same Core i5-1235U with a 1TB solid-state drive and 16GB of memory.

The top-tier model sells for $950 and steps up to a 12th-gen Intel Core i7-1255U processor, with a 1TB solid-state drive and 16GB of memory.

All three come in Black, Blue, Silver, and Rose Gold.

This isn’t Vaio’s first attempt at a resurgence. The Vaio Z launched last year in a bid to recapture the premium branding of the original line.


These Pre-Owned Games Are So Cheap They’re a Must-Buy

You can buy video games brand new, sure, but a lot of the time the new price doesn’t reflect the age of the game. Older games rarely go on sale, and when they do, the discount isn’t all that great. The alternative is to grab used games, especially now that the industry has backed off online licenses: you only have to buy the copy of the game now, not extra DLC, unless you want it. GameStop is one of the best places to go for pre-owned games, as you might expect.

Right now, it’s offering some amazing deals on pre-owned titles new and old. If you don’t want to pay full price, or you want a quick and cheap way to stock up your library, this is the perfect opportunity. You can get games like Sekiro: Shadows Die Twice, Resident Evil Village, Skyrim Special Edition, and even Call of Duty: Black Ops Cold War, all for some pretty good prices. You can always peruse the sale yourself below, or check out our favorite picks!

PlayStation 4 | 5

Want some new games to play on your PS4 or your PS5? Check these out.

Sekiro: Shadows Die Twice – $22, was $30

It’s brutal but rewarding, like most FromSoftware and Souls-series games. Explore 1500s Sengoku-era Japan as a one-armed ninja with some pretty cool abilities. It’s also super cheap pre-owned, at $22 all-in. Just be aware that the new copy was the Game of the Year edition, and the downloadable content bundled with it might be missing here.

It’s also available on Xbox One, albeit not on sale.

Resident Evil Village – $37, was $55

The newest game in the Resident Evil series brings it back to first-person, like RE7, and has you stepping back into the shoes of Ethan Winters. Pick up right where RE7 left off, and fight through a living, breathing village of horror and mayhem. A pre-owned standard copy is on sale for $37 with free shipping.

It’s also available on Xbox One for the same price.

Persona 5 Royal – $33, was $55

Persona 5 Royal is an upgrade to the original Persona 5 with new content, new characters, and new things to experience. And Persona 5, if you don’t know, is a mashup of genres, mostly RPG, in which you attend school by day and fight corruption after hours. Wear your mask as Joker, and reveal the truth — steal some hearts! A pre-owned copy is $33 with free shipping.

Xbox One | Series X, S

These games are just what you need in your Xbox One collection.

Red Dead Redemption 2 – $25, was $55

Save a horse, become a virtual cowboy in this fantastic follow-up to the original Red Dead Redemption. It’s a prequel, taking place just at the end of the good ol’ Wild West days. Experience a story like never before, or go online. A pre-owned copy is $25 right now, which is an amazing deal.

It’s also available on PlayStation 4 for the same price.

Halo: The Master Chief Collection – $28, was $40

Iconic. Legendary. An absolute blast to revisit and play, the Halo collection takes you back to the series’ roots with remastered versions of Halo 2: Anniversary, Halo: Combat Evolved Anniversary, Halo 3, and Halo 4, plus the digital series Halo: Nightfall. Experience Master Chief from the beginning to the present. The pre-owned collection is $28 at GameStop, which is a heck of a steal!

Forza Motorsport 7 – $22, was $40

Experience beautiful graphics at 4K resolution with HDR and 60 frames per second in Forza Motorsport 7. The cars look realistic, the tracks look realistic, and the gameplay is lots of fun, even at breakneck speeds. The pre-owned standard copy is $22 at GameStop. Nice!

Nintendo Switch

Switch games don’t always go on sale, so grab these great deals while you can!

Animal Crossing: New Horizons – $45, was $48

It’s Animal Crossing for a new generation, with a new island, and lots of new stuff to collect and customize! Grab it for $45 pre-owned at GameStop with free shipping, an excellent price, especially since Switch games rarely go on sale!

Monster Hunter Rise – $37, was $55

This is the newest title in the Monster Hunter series, bringing the games back to Nintendo consoles. Fight alongside the new Canyne companions, use new tools, and experience seamless gameplay with virtually no loading times. A pre-owned copy is $37 at GameStop with free shipping.

The Legend of Zelda: Breath of the Wild – $38, was $48

Experience a living, breathing open world with lots to do, see, and conquer! The Legend of Zelda: Breath of the Wild marks a turning point for the iconic series, and its sequel is coming soon, so now’s your chance to get an awesome price on the original. It’s $38 for a pre-owned copy with free shipping.

Don’t like our picks? No problem

Don’t agree with the titles we chose? Gaming is subjective; we know it, you know it. Fortunately, there are a ton of pre-owned games on sale at GameStop, so you can always check out the full catalog to see if there’s something you like better!

We strive to help our readers find the best deals on quality products and services, and we choose what we cover carefully and independently. The prices, details, and availability of the products and deals in this post may be subject to change at any time. Be sure to check that they are still in effect before making a purchase.

Digital Trends may earn commission on products purchased through our links, which supports the work we do for our readers.


Some Retailers Shipping Alder Lake CPUs, and They’re Cheap

Intel has promised that its 12th-gen Alder Lake platform is arriving in 2021, but the company has yet to announce an official release date. It seems some retailers are getting antsy, and have even shipped processors to customers weeks ahead of the rumored launch date.

Reddit user u/Seby9123 apparently received two Intel Core i9-12900K processors two weeks ahead of the rumored release date. We always recommend treating random Reddit posts with skepticism, but the user posted images of the Core i9-12900K and its box, and those are hard to argue with.

A few of the images detail the intricacies of the packaging and the processor itself. The box lines up with a leaked render, which showed the Core i9-12900K sandwiched inside a replica of a golden wafer slice.

Even more interesting than the box is the price. The original poster said they paid only $610 for each processor before tax. That’s $60 more than last-gen’s Core i9-11900K, but still $190 less than AMD’s competing Ryzen 9 5950X. If leaked benchmarks are true, Intel could be going for an aggressive price/performance target.

The price runs counter to earlier rumors, which showed the flagship chip selling for as much as $1,000. Going into this generation, price is going to play a big role as AMD continues to assert its lead in desktop processors.

We went to some popular retailer websites and, unsurprisingly, didn’t find any 12th-gen processors available. It seems this order slipped through the cracks somehow, so the original poster was able to snag a couple of chips early. They weren’t, however, able to track down a 600-series chipset motherboard to take the processors out for a spin.

Although Intel hasn’t confirmed the release date of Alder Lake, the chips should be arriving soon. Rumors point to the chips launching on November 4, though we’ve also seen rumors suggesting a November 19 launch. Regardless, we should know more soon.

Intel is set to begin its Innovation event on October 27, where we anticipate an announcement for the Alder Lake release date. The company has confirmed time and again that the chips are arriving in 2021, so we should know the specifics soon.


Android 12 Will Let Users Play Games as They’re Downloading

Google announced that Android 12 will support new play-as-you-download functionality for newer games. Revealed during the Google for Games Developer Summit, the feature lets players start playing faster than ever by launching a game after only the essentials needed to run it have been downloaded.

The play-as-you-download function isn’t completely new to Android. With Android 12, however, the system will default to it for new Google Play titles.

Here’s how it works: the system splits a game’s largest assets into what is immediately needed to run the game and what can simply be downloaded later. The remaining pieces are then fetched in the background without getting in the way of playtime.

During the digital conference, Google stated, “We’re seeing 400-megabyte games becoming available in 10 seconds instead of several minutes.” It also claims that this will make for a dramatically upgraded user experience.

The company adds that developers won’t have to change their games to opt into the play-as-you-download feature, and players needn’t worry about losing progress. “If you use the Android app bundle format, simply upload your game, and we’ll do the rest on Android 12,” Google says.

While developers currently have to join the beta for this option, in the future, Google will default to this system for games, with it becoming mandatory in August.

For those wondering when their phone will be receiving Android 12, Google shared a roadmap with beta dates, confirming a final release is coming sometime after August.


Majority of Europeans would replace government with AI — oof, they’re so wrong

A recent survey conducted by researchers at the IE Center for the Governance of Change indicates that a majority of people would support replacing members of their respective parliaments with AI systems.

Yikes. The majority might have this one wrong. But we’ll get into why in a moment.

The survey

Researchers interviewed 2,769 Europeans representing varying demographics. Questions ranged from whether they’d prefer to vote via smartphone all the way to whether they’d replace existing politicians with algorithms given the chance.

Per the survey:

51% of Europeans support reducing the number of national parliamentarians and giving those seats to an algorithm. Over 60% of Europeans aged 25-34 and 56% of those aged 35-44 are excited about this idea.

On the surface, this makes perfect sense – younger people are more likely to embrace a new technology, no matter how radical.

But it gets even more interesting when you drill things down a bit.

According to a report from CNBC:

The study found the idea was particularly popular in Spain, where 66% of people surveyed supported it. Elsewhere, 59% of the respondents in Italy were in favor and 56% of people in Estonia.

In the U.K., 69% of people surveyed were against the idea, while 56% were against it in the Netherlands and 54% in Germany.

Outside Europe, some 75% of people surveyed in China supported the idea of replacing parliamentarians with AI, while 60% of American respondents opposed it.

It’s difficult to draw insight from these numbers without resorting to speculation – when you consider the political divide in the UK and the US, for example, it’s interesting to note that people in both nations still seem to prefer the status quo over an AI system.

Here’s the problem

All those people in favor of an AI parliament are wrong.

The idea here, according to the CNBC report, is that this survey captures the “general zeitgeist” when it comes to public perception of their current human representatives.

This seems to indicate that the survey tells us more about how people feel about their politicians than it does about how people feel about AI.

But we really need to consider what an AI parliamentarian would actually mean before we start throwing our support behind the idea.

Governments may not operate the same in every country, but if enough people support an idea – no matter how bad it is – there’s always a chance the people will get what they want.

Why an AI parliamentarian is a terrible idea

Here’s the conclusion right up front: It would not only be filled with baked-in bias, but trained with the biases of the government implementing it. Furthermore, any applicable AI technology in this domain would be considered “black box” AI, and thus it would be even worse at explaining its decisions than contemporary human politicians.

And, finally, if we hand over our constituent data to a centralized government system that has parliamentarian rights, we’d essentially be allowing our respective governments to use digital gerrymandering to conduct mass-scale social engineering.

Here’s how

When people imagine a robot politician they often conceptualize a being that cannot be corrupted. Robots don’t lie, they don’t have agendas, they’re not xenophobic or bigoted, and they can’t be bought off. Right?

Wrong.

AI is inherently biased. Any system designed to surface insights based on data that applies to people will automatically have bias built into its very core.

The short version of why this is true goes like this: think about the 2,769-person survey mentioned above. How many of those people are Black? How many are queer? How many are Jewish? How many are conservative? Are 2,769 people really enough to represent the entirety of Europe?

Probably not; at best, the results are a close approximation. When researchers conduct these surveys, they’re trying to get a general idea of how people feel: this isn’t scientifically exact information. We simply have no way of getting every single person on the continent to answer these questions.

That’s how AI works, too. When we train an AI to do work – for example, to take data related to voter sentiment and determine whether to vote yea or nay on a particular motion – we train it on data that was generated, curated, interpreted, transcribed, and implemented by humans.

At every step of the AI training process, every bias that’s crept in becomes exacerbated. If you train an AI on data with disproportionate representation between groups, the model will develop and amplify bias against the groups with less representation. And all of this happens inside a black box.
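
To see the mechanism concretely, here is a minimal sketch, using entirely synthetic data and scikit-learn, of how underrepresentation in training data becomes a performance gap. The group sizes, features, and numbers are all invented for illustration:

```python
# Hedged sketch: a model trained mostly on one group's data performs worse on
# an underrepresented group whose patterns differ. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, weight):
    # Two features; the label rule differs per group, standing in for
    # real-world differences the model would need to learn separately.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + weight * X[:, 1] > 0).astype(int)
    return X, y

X_maj, y_maj = make_group(9500, weight=0.2)   # majority: 9,500 examples
X_min, y_min = make_group(500, weight=-1.0)   # minority: 500 examples

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
group = np.concatenate([np.zeros(9500), np.ones(500)])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

for g, name in [(0.0, "majority"), (1.0, "minority")]:
    mask = g_te == g
    print(f"{name} accuracy: {model.score(X_te[mask], y_te[mask]):.2f}")
# The loss is dominated by majority examples, so the learned boundary fits
# them well and fits the minority group much worse.
```

Nothing in the training procedure is malicious; the skew in the data alone produces the skew in outcomes.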

And therein lies our second problem: the black box. If a politician makes a decision that results in a negative consequence, we can ask that politician to explain the motive behind that decision.

As a hypothetical example, if a politician successfully lobbied to abolish all traffic lights in their district and that action resulted in an increase in accidents, we could find out why they voted that way and demand they never do it again.

You can’t do that with most AI systems. Simple automation systems can be looked at in reverse if something goes wrong, but AI paradigms that involve deep learning and surfacing insights – the very kind you’d need to use in order to replace members of parliament with AI-powered representation – cannot generally be understood in reverse.

AI developers essentially dial in a system’s output like they’re tuning in a radio signal from static. They keep playing with the parameters until the AI starts making decisions they like. This process cannot be repeated in reverse: you can’t turn the dial backwards until the signal is noisy again to see how it became clear.

Here’s the scary part

AI systems are goal-based. When we imagine the worst things that could possibly go wrong with artificial intelligence, we might think of killer robots, but experts tend to think misaligned objectives are the more likely evil.

Basically, think about AI developers like Mickey Mouse in Disney’s “The Sorcerer’s Apprentice.” If big government tells Silicon Valley to create an AI parliamentarian, it’s going to come up with the best leader it can possibly create.

Unfortunately, the goal of government isn’t to produce or collect the best leaders. It’s to serve society. Those are two entirely different goals.

The bottom line is that AI developers and politicians can train an AI system to surface any results they want.

If you can imagine gerrymandering as it happens in the US, but applied to which “constituent data” gets weighted more heavily in a machine’s parameters, then you can imagine how politicians could use AI systems to automate partisanship.

The last thing we need to do, as a global community, is use AI to supercharge the worst parts of our respective political systems.


Cyberpunk 2077 update 1.2 patch notes are here and they’re huge

Cyberpunk 2077 owners have faced a longer-than-expected wait for the game’s version 1.2 update, but now that patch is finally on the horizon. While CD Projekt Red hasn’t revealed a release date for the update yet, it did today publish a list of patch notes for it. The list is very long, though CD Projekt Red suggests that these aren’t even all the changes and fixes that are shipping along with update 1.2.

Indeed, CD Projekt Red says that this list of patch notes contains “the most notable changes coming in this update,” so it sounds like we can expect more than what’s listed here. The patch notes are split into several different sections: Gameplay; Quests; Open World; Cinematic Design; Environment and Levels; Graphics, Audio, Animation; UI; Stability and Performance; Miscellaneous; PC-specific; and Console-specific.

It’s impossible to detail every little change here – and in fact anyone who sets out to read these patch notes in full will be committing a lot of time to the task – but one section Cyberpunk 2077 players may want to check out first is the “Stability and Performance” one. Here, CD Projekt Red says that it has listed a number of optimizations that are shipping out to all platforms, but notes that many of them will make the biggest difference on last-gen consoles and underpowered PCs.

There are a number of changes in there that have been implemented to reduce the number of random crashes, along with optimizations for various systems like shadows, shaders, and physics. The changes CD Projekt Red teased a couple of weeks back are all present and accounted for here, and there also seem to be a number of quest-related fixes that quash issues which could block progression. Indeed, the “Quests” section is the longest one, so if you’ve been having issues with a certain quest – or quests – then it’s worth combing through that section to see if a fix is contained in this update.

This is the second of two major updates CD Projekt Red committed to following Cyberpunk 2077‘s disappointing launch, though update 1.2 is arriving a little later than expected thanks to a ransomware attack in which hackers infiltrated CD Projekt’s servers and made off with a bunch of internal documents. We don’t have a specific timeframe for the rollout of update 1.2 yet, but the arrival of these patch notes indicates that the update is right around the corner. We’ll let you know when CD Projekt Red gives a specific date and time for the rollout of this update, so stay tuned.


AI Weekly: Here’s how enterprises say they’re deploying AI responsibly

To get a sense of the extent to which brands are thinking about — and practicing — the tenets of responsible AI, VentureBeat surveyed executives at companies that claim to be using AI in a tangible capacity. Their responses reveal that a single definition of “responsible AI” remains elusive. At the same time, they show an awareness of the consequences of opting not to deploy AI thoughtfully.

Companies in enterprise automation

ServiceNow was the only company VentureBeat surveyed to admit that there’s no clear definition of what constitutes responsible AI usage. “Every company really needs to be thinking about how to implement AI and machine learning responsibly,” ServiceNow chief innovation officer Dave Wright told VentureBeat. “[But] every company has to define it for themselves, which unfortunately means there’s a lot of potential for harm to occur.”

According to Wright, ServiceNow’s responsible AI approach encompasses the three pillars of diversity, transparency, and privacy. When building an AI product, the company brings in a variety of perspectives and has them agree on what counts as fair, ethical, and responsible before development begins. ServiceNow also ensures that its algorithms remain explainable in the sense that it’s clear why they arrive at their predictions. Lastly, the company says it limits and obscures the amount of personally identifiable information it collects to train its algorithms. Toward this end, ServiceNow is investigating “synthetic AI” that could allow developers to train algorithms without handling real data and the sensitive information it contains.

“At the end of the day, responsible AI usage is something that only happens when we pay close attention to how AI is used at all levels of our organization. It has to be an executive-level priority,” Wright said.

Automation Anywhere says it established AI and bot ethical principles to provide guidelines to its employees, customers, and partners. These include monitoring the results of any process automated using AI or machine learning so as to prevent outputs that might reflect racial, gender, or other biases.

“New technologies are a two-edged sword. While they can free humans to realize their potential in entirely new ways, sometimes these technologies can also, unfortunately, entrap humans in bad behavior and otherwise lead to negative outcomes,” Automation Anywhere CTO Prince Kohli told VentureBeat via email. “[W]e have made the responsible use of AI and machine learning one of our top priorities since our founding, and have implemented a variety of initiatives to achieve this.”

Beyond the principles, Automation Anywhere created an AI committee charged with challenging employees to consider ethics in their internal and external actions. For example, engineers must seek to address the threat of job loss raised by AI and machine learning technologies and the concerns of customers from an “all-inclusive” range of different minority groups. The committee also reevaluates Automation Anywhere’s principles on a regular basis so that they evolve with emerging AI technologies.

Splunk SVP and CTO Tim Tully, who anticipates the industry will see a renewed focus on transparent AI practices over the next two years, says that Splunk’s approach to putting “responsible AI” into practice is fourfold. First, the company makes sure that the algorithms it’s developing and operating are in alignment with governance policies. Then, Splunk prioritizes talent to work with its AI and machine learning algorithms to “[drive] continual improvement.” Splunk also takes steps to bake security into its R&D processes while keeping “honesty, transparency, and fairness” top of mind throughout the building lifecycle.

“In the next few years, we’ll see newfound industry focus on transparent AI practices and principles — from more standardized ethical frameworks, to additional ethics training mandates, to more proactively considering the societal implications of our algorithms — as AI and machine learning algorithms increasingly weave themselves into our daily lives,” Tully said. “AI and machine learning was a hot topic before 2020 disrupted everything, and over the course of the pandemic, adoption has only increased.”

Companies in hiring and recruitment

LinkedIn says that it doesn’t look at bias in algorithms in isolation but rather identifies which biases cause harm to users and works to eliminate them. Two years ago, the company launched an initiative called Project Every Member to take a more rigorous approach to reducing and eliminating unintended consequences in the services it builds. By using inequality A/B testing throughout the product design process, LinkedIn says it aims to build trustworthy, robust AI systems and datasets with integrity that comply with laws and “benefit society.”

For example, LinkedIn says it uses differential privacy in its LinkedIn Salary product to allow members to gain insights from others without compromising information. And the company claims its Smart Replies product, which taps machine learning to suggest responses to conversations, was built to prioritize member privacy and avoid gender-specific replies.
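
LinkedIn hasn’t published the exact mechanism here, but the textbook building block of differential privacy, the Laplace mechanism, is easy to sketch: release a statistic plus noise calibrated so that no single person’s data changes the answer much. A minimal sketch in Python with invented numbers:

```python
# Hedged sketch of the Laplace mechanism, the classic differential-privacy
# primitive. Real systems (LinkedIn's included) are more elaborate.
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean: clip, average, add calibrated noise."""
    values = np.clip(values, lower, upper)
    # After clipping, one person can shift the mean by at most this much,
    # which is the "sensitivity" that sets the noise scale.
    sensitivity = (upper - lower) / len(values)
    return values.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical salaries; smaller epsilon = stronger privacy, noisier answer.
salaries = np.array([72_000, 85_000, 64_000, 90_000, 78_000, 69_000])
print(dp_mean(salaries, lower=40_000, upper=150_000, epsilon=0.5))
```

The appeal for a product like Salary is that aggregate insights stay useful while any individual’s submission is hidden in the noise.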

“Responsible AI is very hard to do without company-wide alignment. ‘Members first’ is a core company value, and it is a guiding principle in our design process,” a spokesperson told VentureBeat via email. “We can positively influence the career decisions of more than 744 million people around the world.”

Mailchimp, which uses AI to, among other things, provide personalized product recommendations for shoppers, tells VentureBeat that it trains each of its data scientists in the fields that they’re modeling. (For example, data scientists at the company working on products related to marketing receive training in marketing.) However, Mailchimp also admits that its systems are trained on data gathered by human-powered processes that can lead to a number of quality-related problems, including errors in the data, data drift, and bias.

“Using AI responsibly takes a lot of work. It takes planning and effort to gather enough data, to validate that data, and to train your data scientists,” Mailchimp chief data science officer David Dewey told VentureBeat. “And it takes diligence and foresight to understand the cost of failure and adapt accordingly.”

For its part, Zendesk says it places an emphasis on a diversity of perspectives where its AI adoption is concerned. The company claims that, broadly, its data scientists examine processes to ensure its software is beneficial and unbiased, follows strong ethical principles, and secures the data that makes its AI work. “As we continue to leverage AI and machine learning for efficiency and productivity, Zendesk remains committed to continuously examining our processes to ensure transparency, accountability and ethical alignment in our use of these exciting and game-changing technologies, particularly in the world of customer experience,” Zendesk president of products Adrian McDermott told VentureBeat.

Companies in marketing and management

Adobe EVP, general counsel, and corporate secretary Dana Rao points to the company’s ethics principles as an example of its commitment to responsible AI. Last year, Adobe launched an AI ethics committee and review board to help guide its product development teams and review new AI-powered features and products prior to release. At the product development stage, Adobe says its engineers use an AI impact assessment tool created by the committee to capture the potential ethical impact of any AI feature and avoid perpetuating biases.

“The continued advancement of AI puts greater accountability on us to address bias, test for potential misuse, and inform our community about how AI is used,” Rao said. “As the world evolves, it is no longer sufficient to deliver the world’s best technology for creating digital experiences; we want our technology to be used for the good of our customers and society.”

Among the first AI-powered features the committee reviewed was Neural Filters in Adobe Photoshop, which lets users add non-destructive, generative filters to create things that weren’t previously in images (e.g., facial expressions and hair styles). In accordance with its principles, Adobe added an option within Photoshop to report whether Neural Filters output a biased result. This data is monitored to identify undesirable outcomes and allows the company’s product teams to address them by updating the AI model in the cloud.

Adobe says that while evaluating Neural Filters, one review board member flagged that the AI didn’t properly model the hairstyle of a particular ethnic group. Based on this feedback, the company’s engineering teams updated the AI dataset before Neural Filters was released.

“This constant feedback loop with our user community helps further mitigate bias and uphold our values as a company — something that the review board helped implement,” Rao said. “Today, we continue to scale this review process for all of the new AI-powered features being generated across our products.”

Hootsuite CTO Ryan Donovan, for his part, believes that responsible AI ultimately begins and ends with transparency. Brands should demonstrate where and how they’re using AI, an ideal that Hootsuite strives to achieve, he says.

“As a consumer, for instance, I fully appreciate the implementation of bots to respond to high level customer service inquiries. However, I hate when brands or organizations masquerade those bots off as human, either through a lack of transparent labelling or assigning them human monikers,” Donovan told VentureBeat via email. “At Hootsuite, where we do use AI within our product, we have consciously endeavored to label it distinctly — suggested times to post, suggested replies, and schedule for me being the most obvious.”

ADP SVP of product development Jack Berkowitz says that responsible AI at ADP starts with the ethical use of data. In this context, “ethical use of data” means looking carefully at what the goal of an AI system is and the right way to achieve it.

“When AI is baked into technology, it comes with inherently heightened concerns, because it means an absence of direct human involvement in producing results,” Berkowitz said. “But a computer only considers the information you give it and only the questions you ask, and that’s why we believe human oversight is key.”

ADP retains an AI and data ethics board of experts in tech, privacy, law, and auditing that works with teams across the company to evaluate the way they use data. It also provides guidance to teams developing new uses and follows up to ensure the outputs are desirable. The board reviews ideas and evaluates potential uses to determine whether data is being used fairly and in compliance with legal requirements and ADP’s own standards. If an idea falls short of meeting transparency, fairness, accuracy, privacy, and accountability requirements, it doesn’t move forward within the company, Berkowitz says.

Marketing platform HubSpot similarly says its AI projects undergo a peer review for ethical considerations and bias. According to senior machine learning engineer Sadhbh Stapleton Doyle, the company uses proxy data and external datasets to “stress test” its models for fairness. In addition to model cards, HubSpot also maintains a knowledge base of ways to detect and mitigate bias.
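
HubSpot hasn’t detailed its methodology, but one common fairness stress test checks demographic parity: whether a model’s positive predictions land on different groups at similar rates. A minimal sketch with invented predictions:

```python
# Hedged sketch of a demographic-parity check, one of the simplest fairness
# "stress tests." Predictions and group labels are invented for illustration.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap between groups' positive-prediction rates; 0 means parity."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical binary predictions for ten people in two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Group 0 gets a positive prediction 60% of the time, group 1 only 20%,
# so the parity gap is 0.40 -- a red flag worth investigating.
print(demographic_parity_difference(y_pred, group))
```

In practice such metrics are computed on held-out or proxy datasets before a model ships, and large gaps prompt investigation before release.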

The road ahead

A number of companies declined to tell VentureBeat how they’re deploying AI responsibly in their organizations, highlighting one of the major challenges in the field: transparency. A spokesperson for UiPath said that the robotic process automation startup “wouldn’t be able to weigh in” on responsible AI. Zoom, which recently faced allegations that its face-detection algorithm erased Black faces when applying virtual backgrounds, chose not to comment. And Intuit told VentureBeat that it had nothing to share on the topic.

Of course, transparency isn’t the end-all-be-all when it comes to responsible AI. For example, Google, which loudly trumpets its responsible AI practices, was recently the subject of a boycott by AI researchers over the company’s firing of Timnit Gebru and Margaret Mitchell, coleaders of a team working to make AI systems more ethical. Facebook also purports to be implementing AI responsibly, but to date, the company has failed to present evidence that its algorithms don’t encourage polarization on its platforms.

Steven Mills, Boston Consulting Group’s chief ethics officer and a coauthor of the consultancy’s own survey on responsible AI, has noted that the depth and breadth of most responsible AI efforts fall behind what’s needed to truly ensure responsible AI. Organizations’ responsible AI programs typically neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation because they’re difficult to address.

Greater oversight is a potential remedy. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

As Salesforce principal architect of ethical AI practice Kathy Baxter told VentureBeat in a recent interview, AI can result in harmful, unintended consequences if algorithms aren’t trained and designed inclusively. Technology alone can’t solve systemic health and social inequities, she asserts. In order to be effective, technology must be built and used responsibly — because no matter how good a tool is, people won’t use it unless they trust it.

“Ultimately, I believe the benefits of AI should be accessible to everyone, but it is not enough to deliver only the technological capabilities of AI,” Baxter said. “Responsible AI is technology developed inclusively, with a consideration towards specific design principles to mitigate, as much as possible, unforeseen consequences of deployment — and it’s our responsibility to ensure that AI is safe and inclusive. At the end of the day, technology alone cannot solve systemic health and social inequities.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


Snap’s leaked AR Spectacles sound great, but they’re not for you

It seems that Snap, the company behind Snapchat, is planning to launch another Spectacles hardware product — but you probably won’t be able to get it. That’s a bit disappointing considering the alleged features that have leaked, including the ability to display augmented reality Snapchat filters directly on the headset in a real-life setting.

Details about the alleged hardware product come from sources that spoke with The Information, which claims the next pair of Spectacles will be more robust than what we’ve seen so far — and, sadly, limited to creators and developers. The glasses will reportedly display AR effects directly on their lenses, meaning, according to the report, users could see a beard filter overlaid onto someone sitting nearby.

Currently available Spectacles models feature built-in cameras and microphones for capturing still images and videos. Users can apply effects to this content, but they’re not able to see them in real-time directly on the smartglasses’ lenses.

The new report claims the next-gen Spectacles will change this, but that the model won’t be offered at the consumer level. It seems Snap may hope developers will create new types of AR experiences that involve the glasses, the availability and price of which weren’t included in the leak.

In addition, the report claims that Snap is once again focusing on drones, which may be targeted at content creators who want to get more dynamic or extreme shots not possible with a smartphone. Details are lacking, but this isn’t the first time we’ve heard claims about Snap’s drone project — rumors about such efforts first surfaced back in early 2017.


The OnePlus 8 and 8 Pro aren’t overpriced, but they’re way too expensive

The OnePlus 8 and OnePlus 8 Pro smartphones are out now, and normally we’d be intrigued. Over the years, OnePlus has played a game of priorities, offering the newest processor and latest display enhancements for hundreds of dollars less than its peers do. With Verizon joining T-Mobile this year in offering OnePlus, it seemed like the little phone maker was on the verge of its big breakthrough. 

That breakthrough might not happen, unfortunately. While the newest OnePlus phones certainly bring the goods, with stunning displays, impressive camera arrays, and gorgeous designs, they also bring a change no one will like: Their prices have gone up. A lot. It also doesn’t help that OnePlus is removing its biggest competitive advantage at a time when millions of people are suddenly unemployed, and premium phone sales are cratering.

The OnePlus 8 and 8 Pro both have gorgeous screens.

No matter which model you choose, you’re going to be paying significantly more for the OnePlus 8 than you would have for last year’s 7 Pro or 7T. The top model fetches four figures. Here are the new models and prices:

OnePlus 7T
8GB/128GB: $599

OnePlus 8
8GB/128GB: $699
8GB/128GB (Verizon): $799
12GB/256GB: $799

OnePlus 7 Pro
6GB/128GB: $669
8GB/256GB: $699

OnePlus 8 Pro
8GB/128GB: $899
12GB/256GB: $999

To be fair, Qualcomm’s pricing for the Snapdragon 865 and X55 5G modem has driven up the price of every premium Android phone this year, including those from OnePlus. And for what you’re getting—a top-of-the-line processor, speedy RAM and storage, great displays, and 5G—the OnePlus prices are right, even good. Still, you’d be paying at least $100 and possibly $300 more for a 2020 OnePlus phone than for a comparable 2019 version.
