Categories
AI

Alphabet is putting its prototype robots to work cleaning up around Google’s offices

What does Google’s parent company Alphabet want with robots? Well, it would like them to clean up around the office, for a start.

The company announced today that its Everyday Robots Project — a team within its experimental X labs dedicated to creating “a general-purpose learning robot” — has moved some of its prototype machines out of the lab and into Google’s Bay Area campuses to carry out some light custodial tasks.

“We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices,” said Everyday Robots’ chief robot officer Hans Peter Brøndmo in a blog post. “The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and the same gripper that grasps cups can learn to open doors.”

The robots in question are essentially arms on wheels, with a multipurpose gripper on the end of a flexible arm attached to a central tower. There’s a “head” on top of the tower with cameras and sensors for machine vision, and what looks like a spinning lidar unit on the side, presumably for navigation.

One of Alphabet’s Everyday Robot machines cleans the crumbs off a cafe table.
Image: Alphabet

As Brøndmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robot team in 2019. The big promise that’s being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in “unstructured” environments like homes and offices.

Right now, we’re very good at building machines that can carry out repetitive jobs in a factory, but we’re stumped when trying to get them to replicate simple tasks like cleaning up a kitchen or folding laundry.

Think about it: you may have seen robots from Boston Dynamics performing backflips and dancing to The Rolling Stones, but have you ever seen one take out the trash? That’s because getting a machine to manipulate never-before-seen objects in a novel setting (something humans do every day) is extremely difficult. This is the problem Alphabet wants to solve.

Unit 033 makes a bid for freedom.
Image: Alphabet

Is it going to? Well, maybe one day — if company execs feel it’s worth burning through millions of dollars in research to achieve this goal. Certainly, though, humans are going to be cheaper and more efficient than robots for these jobs in the foreseeable future. Today’s update from Everyday Robots is neat, but it’s far from a leap forward. You can see from the GIFs that Alphabet shared of its robots that they’re still slow and awkward, carrying out tasks inexpertly and at a glacial pace.

Still, it counts for something that the robots are being tested “in the wild” rather than in the lab. Compare Alphabet’s machines to Samsung’s Bot Handy, for example: a similar-looking tower-and-arm bot that the company showed off at CES last year, apparently pouring wine and loading a dishwasher. Bot Handy looks like it’s performing these jobs, but it was really only carrying out a prearranged demo. Who knows how capable, if at all, this robot is in the real world? At least Alphabet is finding this out for itself.


Categories
AI

Google’s future in enterprise hinges on strategic cybersecurity

Gaps in Google’s cybersecurity strategy make banks, financial institutions, and larger enterprises slow to adopt the Google Cloud Platform (GCP), with deals often going to Microsoft Azure and Amazon Web Services instead.

It also doesn’t help that GCP has long had a reputation for being more aligned with developers and their needs than with enterprise and commercial projects. But Google now has a timely opportunity to open its customer aperture with new security offerings designed to fill many of those gaps.

During last week’s Google Cloud Next virtual conference, Google executives leading the security business units announced an ambitious new series of cybersecurity initiatives precisely for this purpose. The most noteworthy announcements are the formation of the Google Cybersecurity Action Team, new zero-trust solutions for Google Workspace, and extending Work Safer with CrowdStrike and Palo Alto Networks partnerships.

The most valuable new announcements for enterprises, however, are on the BeyondCorp Enterprise platform. BeyondCorp Enterprise is Google’s zero-trust platform that allows virtual workforces to access applications in the cloud or on-premises and work from anywhere without a traditional remote-access VPN. Google’s newly announced Work Safer initiative combines BeyondCorp Enterprise for zero-trust security with its Workspace collaboration platform.

Workspace now has 4.8 billion installations of 5,300 public applications across more than 3 billion users, making it an ideal platform to build and scale cybersecurity partnerships. Workspace also reflects the growing problem chief information security officers (CISOs) and CIOs have with protecting the exponentially increasing number of endpoints that dominate their virtual-first IT infrastructures.

Bringing order to cybersecurity chaos

With the latest series of cybersecurity strategies and product announcements, Google is attempting to sell CISOs on the idea of trusting Google for their complete security and public cloud tech stack. Unfortunately, that doesn’t reflect the reality at many enterprises, where CISOs have lifted and shifted numerous legacy systems to the cloud.

Missing from the many announcements were new approaches to dealing with just how chaotic, lethal, and uncontrolled breaches and ransomware attacks have become. But Google’s announcement of Work Safer, a program that combines Workspace with Google cybersecurity services and new integrations to CrowdStrike and Palo Alto Networks, is a step in the right direction.

The Google Cybersecurity Action Team claimed in a media advisory it will be “the world’s premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.”  But let’s get real: This is a professional services organization designed to drive high-margin engagement in enterprise accounts. Unfortunately, small and mid-tier enterprises won’t be able to afford engagements with the Cybersecurity Action Team, which means they’ll have to rely on system integrators or their own IT staff.

Why every cloud needs to be a trusted cloud

CISOs and CIOs tell VentureBeat that it’s a cloud-native world now, and that includes closing the security gaps in hybrid cloud configurations. Most enterprise tech stacks grew through mergers, acquisitions, and a decade or more of cybersecurity tech-buying decisions. In many cases, these are held together with custom integration code written and maintained by outside system integrators. New digital-first revenue streams are generated by applications running on these tech stacks, which adds to their complexity. In reality, every cloud now needs to be a trusted cloud.

Google’s series of announcements relating to integration and security monitoring and operations are needed, but they are not enough. Historically, Google has lagged behind the market in security monitoring, prioritizing its own data loss prevention (DLP) APIs given their proven scalability in large enterprises. To Google’s credit, it has created a technology partnership with Cybereason, which will use Google’s cloud security analytics platform Chronicle to improve its extended detection and response (XDR) service and will help security and IT teams identify and prevent attacks using threat hunting and incident response logic.

Google now appears to have the components it previously lacked to offer a much-improved selection of security solutions to its customers. Creating Work Safer by bundling the BeyondCorp Enterprise Platform, Workspace, the suite of Google cybersecurity products, and new integrations with CrowdStrike and Palo Alto Networks will resonate the most with CISOs and CIOs.

Without a doubt, many will want a price break on BeyondCorp maintenance fees at a minimum. While BeyondCorp is generally attractive to large enterprises, it’s not addressing the quickening pace of the arms race between bad actors and enterprises. Google also includes reCAPTCHA Enterprise and Chrome Enterprise for desktop management, both needed by all organizations to scale website protection and browser-level security across all devices.

It’s all about protecting threat surfaces

Enterprises operating in a cloud-native world mostly need to protect threat surfaces. Google announced a new client connector for its BeyondCorp Enterprise platform that can be configured to protect Google-native as well as legacy applications — which are very important to older companies. The new connector also supports identity and context-aware access to non-web applications running in both Google Cloud and non-Google Cloud environments. BeyondCorp Enterprise will also have a policy troubleshooter that gives admins greater flexibility to diagnose access failures, triage events, and unblock users.

Throughout Google Cloud Next, cybersecurity executives spoke of embedding security into the DevOps process and creating zero-trust supply chains to protect new executable code from being breached. Achieving that ambitious goal for the company’s overall cybersecurity strategy requires zero trust to be embedded in every phase of a build cycle, from development through deployment.

Cloud Build is designed to support builds, tests, and deployments on Google’s serverless CI/CD platform. It’s SLSA Level 1 compliant, with scripted builds and available provenance. In addition, Google launched a new build integrity feature in Cloud Build that automatically generates a verifiable build manifest. The manifest includes a signed certificate describing the sources that went into the build, the hashes of artifacts used, and other parameters. Binary authorization is also now integrated with Cloud Build to ensure that only trusted images make it to production.

These new announcements will protect software supply chains for large-scale enterprises already running a Google-dominated tech stack. It’s going to be a challenge for mid-tier and smaller organizations to get these systems running on their IT budgets and resources, however.

Bottom line: Cybersecurity strategy needs to work for everybody  

As Google’s cybersecurity strategy goes, so will the sales of the Google Cloud Platform. Convincing enterprise CISOs and CIOs to replace or extend their tech stack and make it Google-centric isn’t the answer. The answer is recognizing how chaotic, diverse, and unpredictable the cybersecurity threatscape is today and building more apps, platforms, and adaptive tools that learn fast and thwart breaches.

Getting integration right is just part of the challenge. The far more challenging aspect is how to close the widening cybersecurity gaps all organizations face — not only large-scale enterprises — without requiring a Google-dominated tech stack to achieve it.

 


Categories
Computing

Walk the Great Wall of China in Google’s Latest Virtual Tour

If your pandemic-related precautions still prevent you from traveling but you’d like to take a trip somewhere far away, then how about diving into the latest virtual tour from Google Arts & Culture?

The Street View-style experience features a 360-degree virtual tour of one of the best-preserved sections of the Great Wall, which in its entirety stretches for more than 13,000 miles — about the round-trip distance between Los Angeles and New Zealand.

A section of China’s Great Wall. Google Arts & Culture

The new virtual tour includes 370 high-quality images of the Great Wall, together with 35 stories offering an array of architectural details about the world-famous structure.

“It’s a chance for people to experience parts of the Great Wall that might otherwise be hard to access, learn more about its rich history, and understand how it’s being preserved for future generations,” Google’s Pierre Caessa wrote in a blog post announcing the new content.

The wall was used to defend against various invaders through the ages and took more than 2,000 years to build. The structure is often described as “the largest man-made project in the world.”

But climate conditions and human activities have seen a third of the UNESCO World Heritage site gradually crumble away, though many sections of the wall are now being restored so that it can be enjoyed and appreciated for years to come.

Google Arts & Culture has been steadily adding to its library of virtual tours, which can be enjoyed on mobile and desktop devices. The collection includes The Hidden Worlds of the National Parks and immersive explorations of some of the world’s most remote and historically significant places.

If you’re looking for more content along the same lines, then check out these virtual-tour apps that transport you to special locations around the world, and even to outer space.


Categories
AI

Google’s SoundStream codec simultaneously suppresses noise and compresses audio

Google today detailed SoundStream, an end-to-end “neural” audio codec that can provide higher-quality audio while encoding different sound types, including clean speech, noisy and reverberant speech, music, and environmental sounds. The company claims this is the first AI-powered codec to work on both speech and music while being able to run in real time on a smartphone processor.

Audio codecs compress audio to reduce the need for high storage and bandwidth requirements. Ideally, the decoded audio should be perceptually indistinguishable from the original and introduce little latency. While most codecs leverage domain expertise and carefully engineered signal processing pipelines, there’s been interest in replacing handcrafted specs with AI that can learn to encode on the fly.

Earlier this year, Google released Lyra, a neural audio codec trained to compress low-bitrate speech. SoundStream extends this work with a system consisting of an encoder, decoder, and quantizer. The encoder converts audio into a coded signal that’s compressed using the quantizer and converted back to audio using the decoder. Once trained, the encoder and decoder can be run on separate clients to transmit audio over the internet, and the decoder can operate at any bitrate.
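To make that division of labor concrete, here is a toy sketch of the encoder, quantizer, and decoder split in Python. It is purely illustrative and assumes nothing about Google’s implementation: SoundStream’s real encoder and decoder are learned neural networks and its quantizer is a learned residual vector quantizer, whereas the projections and codebook below are random stand-ins.

```python
# Toy sketch of the encoder -> quantizer -> decoder pipeline described above.
# Illustrative only: the real SoundStream components are trained networks;
# here the encoder/decoder are random linear stand-ins and the quantizer is
# a plain nearest-neighbour vector quantizer.
import numpy as np

rng = np.random.default_rng(0)
FRAME, DIM, CODEBOOK_SIZE = 320, 8, 256          # hypothetical sizes

enc_proj = rng.standard_normal((FRAME, DIM))     # stand-in "encoder" weights
dec_proj = rng.standard_normal((DIM, FRAME))     # stand-in "decoder" weights
codebook = rng.standard_normal((CODEBOOK_SIZE, DIM))

def encode(audio):
    """Chop the waveform into frames and map each frame to a small embedding."""
    n = len(audio) // FRAME
    return audio[: n * FRAME].reshape(n, FRAME) @ enc_proj

def quantize(z):
    """Replace each embedding with the index of its nearest codebook entry.
    These integer codes are what gets transmitted (8 bits per frame here)."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def decode(codes):
    """Look up the codebook vectors and map them back to a waveform."""
    return (codebook[codes] @ dec_proj).reshape(-1)

audio = rng.standard_normal(16000)               # one second of fake 16 kHz audio
codes = quantize(encode(audio))                  # sender side
reconstruction = decode(codes)                   # receiver side
print(codes.shape, reconstruction.shape)
```

In the trained system, both sides share the model weights and codebook, which is why the encoder and decoder can run on separate clients and exchange only the compressed codes.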

Compressing audio

In traditional audio processing pipelines, compression and enhancement — i.e., the removal of background noise — are typically performed by different modules. But SoundStream is designed to carry out compression and enhancement at the same time. At 3kbps, SoundStream outperforms the popular Opus codec at 12kbps and approaches the quality of EVS at 9.6kbps while using 3.2-4 times fewer bits, Google claims. Moreover, SoundStream performs better than the current version of Lyra when compared at the same bitrate.
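For context, the “3.2-4 times fewer bits” figure follows directly from the quoted bitrates; a quick check:

```python
# Where the "3.2-4 times fewer bits" claim comes from, using the bitrates above.
soundstream_kbps = 3.0
print(12.0 / soundstream_kbps)  # vs. Opus at 12 kbps  -> 4.0x fewer bits
print(9.6 / soundstream_kbps)   # vs. EVS at 9.6 kbps  -> 3.2x fewer bits
```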

(Google’s blog post includes audio samples of the same clip before and after SoundStream processing.)

Google cautions that SoundStream is still in the experimental stages. However, the company plans to release an updated version of Lyra that incorporates its components to deliver both higher audio quality and “reduced complexity.”

“Efficient compression is necessary whenever one needs to transmit audio, whether when streaming a video or during a conference call. SoundStream is an important step toward improving machine learning-driven audio codecs. It outperforms state-of-the-art codecs, such as Opus and EVS, can enhance audio on demand, and requires deployment of only a single scalable model, rather than many,” Google research scientist Neil Zeghidour and staff research scientist Marco Tagliasacchi wrote in a blog post. “By integrating SoundStream with Lyra, developers can leverage the existing Lyra APIs and tools for their work, providing both flexibility and better sound quality.”


Categories
Security

Google’s new Titan security key lineup won’t make you choose between USB-C and NFC

Google announced updates to its Titan security key lineup on Monday, simplifying it by removing a product and bringing NFC to all its keys. The company will now offer two options: one has a USB-A connector, one has USB-C, and both have NFC for connecting to “most mobile devices.” The USB-A key will cost $30, and the USB-C key will cost $35 when they go on sale on August 10th.

One of the biggest changes in Google’s new lineup is an updated USB-C key, which has added NFC support. Google’s previous USB-C option, made in collaboration with Yubico, didn’t support the wireless standard. Now, the choice between USB-C and A is easy, as there aren’t features that one has that the other doesn’t. It’s simply a matter of what ports your computer has. Google did not immediately respond to a request for comment on whether Yubico was involved with the new key.

According to Google’s support document, its Titan security keys can be used to protect your Google account as well as with third-party apps and services that support FIDO standards, such as 1Password. They, and other security keys from companies like Yubico, can act as second factors to secure your account even if an attacker obtains your username and password. They also fight back against phishing since they won’t authenticate a login to a fake website that’s trying to steal your credentials. The Titan keys also work with Google’s Advanced Protection Program, which is designed to provide extra security to people whose accounts may be targeted.
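As a rough illustration of that phishing resistance (a conceptual sketch only, not the FIDO2/WebAuthn wire protocol, and assuming nothing about Google’s implementation): a credential is bound to the site it was registered for, so a lookalike domain has no credential the key is willing to use. The class and domain names below are hypothetical.

```python
# Conceptual sketch of FIDO-style, site-scoped credentials. Hypothetical class
# and domain names; requires the third-party "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class ToySecurityKey:
    def __init__(self):
        self._credentials = {}                        # site -> private key

    def register(self, site: str):
        """Create a key pair scoped to one site; the site stores the public key."""
        key = ec.generate_private_key(ec.SECP256R1())
        self._credentials[site] = key
        return key.public_key()

    def sign(self, site: str, challenge: bytes) -> bytes:
        """Sign a login challenge, but only for a site we registered with."""
        if site not in self._credentials:             # lookalike domain: no credential
            raise KeyError(f"no credential registered for {site}")
        return self._credentials[site].sign(challenge, ec.ECDSA(hashes.SHA256()))

key = ToySecurityKey()
public_key = key.register("accounts.google.com")

challenge = b"random-server-challenge"
assertion = key.sign("accounts.google.com", challenge)
public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))  # passes

try:
    key.sign("accounts.g00gle-login.example", challenge)  # phishing lookalike
except KeyError as err:
    print("blocked:", err)
```

In the real protocol, the browser, not the user, determines which site is asking for a signature, which is what stops a convincing fake page from borrowing a legitimate credential.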

Google’s current USB-A security key already includes NFC and sells for $25. The USB-A plus NFC key that Google lists in its blog post will sell for $30, but it comes with a USB-C adapter. The USB-A key currently listed on the store doesn’t include one, unless bought as part of a (sold-out) bundle, according to Google’s spec page.

Google’s NFC / Bluetooth / USB key, which was made available to the public in 2018, will no longer be sold as part of the updated lineup. It’s already listed as sold out on Google’s store page. Google’s blog post says that it’s discontinuing the Bluetooth model so it can focus on “easier and more widely available NFC capability.”

While the updated Titan Security Key lineup seems to lack a Bluetooth option, it’s nice to see that the USB-C key is getting NFC. If you’re living the MacBook / iPhone lifestyle, you’ll be able to use the updated USB-C plus NFC key without any dongles. Google says in its blog post that the Bluetooth / NFC / USB key will still work over Bluetooth and NFC “on most modern mobile devices.” Google’s Titan Security Key store page currently lists the old models, but Google’s post says the updated lineup will be available starting on August 10th.


Categories
AI

Google’s Unattended Project Recommender aims to cut cloud costs

Google today announced Unattended Project Recommender, a new feature of Active Assist, Google’s collection of tools designed to help optimize Google Cloud environments. Unattended Project Recommender is intended to provide “a one-stop shop” for discovering, reclaiming, and shutting down unattended cloud computing projects, Google says, via actionable and automatic recommendations powered by machine learning algorithms.

In enterprise environments, it’s not uncommon for cloud resources to occasionally be forgotten about. Not only can these resources be difficult to identify, but they also tend to create a lot of headaches for product teams down the road — including unnecessary waste. A recent Anodot survey found that fewer than 20% of companies were able to immediately detect spikes in cloud costs and that 77% of companies with over $2 million in cloud costs were often surprised by how much they spent.

Unattended Project Recommender, which is available through Google Cloud’s Recommender API, aims to address this by identifying projects that are likely abandoned based on API and networking activity, billing, usage of cloud services, and other signals. As product managers Dima Melnyk and Bakh Inamov explain in a blog post, the tool was first tested with teams at Google over the course of 2021, where it was used to clean up internal unattended projects and eventually the projects of select Google Cloud customers, who helped to tune Unattended Project Recommender based on real-life data.

Machine learning

Unattended Project Recommender analyzes usage activity across all projects within an organization, including items like service accounts with authentication activity, API calls consumed, network ingress and egress, services with billable usage, active project owners, the number of active virtual machines, BigQuery jobs, and storage requests. Google Cloud customers can automatically export recommendations for investigation or interact with the data using spreadsheets and Google Workspace.
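For teams that prefer to pull these recommendations programmatically rather than through spreadsheet exports, a minimal sketch using the Recommender API’s Python client might look like the following. The project name is a placeholder, and the recommender ID is an assumption based on Google’s resource-manager recommenders, so verify the exact identifier in the Google Cloud documentation before relying on it.

```python
# Hedged sketch: listing unattended-project recommendations with the
# google-cloud-recommender client (pip install google-cloud-recommender).
from google.cloud import recommender_v1

PROJECT = "projects/my-sample-project"  # placeholder project
# Assumed recommender ID; confirm the exact ID in Google Cloud's documentation.
RECOMMENDER_ID = "google.resourcemanager.projectUtilization.Recommender"

client = recommender_v1.RecommenderClient()
parent = f"{PROJECT}/locations/global/recommenders/{RECOMMENDER_ID}"

for rec in client.list_recommendations(parent=parent):
    # Each recommendation flags a likely unattended project and suggests
    # cleaning it up or reclaiming it; review before acting on it.
    print(rec.name, rec.priority, rec.description)
```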

“Based on [various] signals, Unattended Project Recommender can generate recommendations to clean up projects that have low usage activity, where ‘low usage’ is defined using a machine learning model that ranks projects in [an] organization by level of usage, or recommendations to reclaim projects that have high usage activity but no active project owners,” Melnyk and Inamov wrote. “We hope that [customers] can leverage Unattended Project Recommender to improve [their] cloud security posture and reduce cost.”

Google notes that, as with any other tool, customers can choose to opt out of data processing by disabling the corresponding groups in the “Transparency & control” tab under Google Cloud’s Privacy & Security settings.


Categories
AI

Google’s ethical AI researchers complained of harassment long before Timnit Gebru’s firing

Google’s AI leadership came under fire in December when star ethics researcher Timnit Gebru was abruptly fired while working on a paper about the dangers of large language models. Now, new reporting from Bloomberg suggests the turmoil began long before her termination — and includes allegations of bias and sexual harassment.

Shortly after Gebru arrived at Google in 2018, she informed her boss that a colleague had been accused of sexual harassment at another organization. Katherine Heller, a Google researcher, reported the same incident, which included allegations of inappropriate touching. Google immediately opened an investigation into the man’s behavior. Bloomberg did not name the man accused of harassment, and The Verge does not know his identity.

The allegations coincided with an even more explosive story: Andy Rubin, the “father of Android,” had received a $90 million exit package despite being credibly accused of sexual misconduct. The news sparked outrage at Google, and 20,000 employees walked out of work to protest the company’s handling of sexual harassment.

Gebru and Margaret Mitchell, co-lead of the ethical AI team, went to AI chief Jeff Dean with a “litany of concerns,” according to Bloomberg. They told Dean about the colleague who’d been accused of harassment, and said there was a perceived pattern of women being excluded and undermined on the research team. Some were given lower roles than men, despite having better qualifications. Mitchell also said she’d been denied a promotion due to “nebulous complaints to HR about her personality.”

Dean was skeptical about the harassment allegations but said he would investigate, Bloomberg reports. He pushed back on the idea that there was a pattern of women on the research team getting lower-level positions than men.

After the meeting, Dean announced a new research project with the alleged harasser at the helm. Nine months later, the man was fired for “leadership issues,” according to Bloomberg. He’d been accused of misconduct at Google, although the investigation was still ongoing.

After the man was fired, he threatened to sue Google. The legal team told employees who’d spoken out about his conduct that they might hear from the man’s lawyers. The company was “vague” about whether it would defend the whistleblowers, Bloomberg reports.

The harassment allegation was not an isolated incident. Gebru and her co-workers reported additional claims of inappropriate behavior and bullying after the initial accusation.

In a statement emailed to The Verge, a Google spokesperson said: “We investigate any allegations and take firm action against employees who violate our clear workplace policies.”

Gebru said there were also ongoing issues with getting Google to respect the ethical AI team’s work. When she tried to look into a dataset released by Google’s self-driving car company Waymo, the project became mired in “legal haggling.” Gebru wanted to explore how skin tone impacted Waymo’s pedestrian-detection technology. “Waymo employees peppered the team with inquiries, including why they were interested in skin color and what they were planning to do with the results,” according to the Bloomberg article.

After Gebru went public about her firing, she received an onslaught of harassment from people who claimed that she was trying to get attention and play the victim. The latest news further validates her response that the issues she raised were part of a pattern of alleged bias on the research team.

Update April 21st, 6:05PM ET: Article updated with statement from Google.


Categories
AI

Google’s Visual Inspection AI spots defects in manufactured goods

Google today announced the launch of Visual Inspection AI, a new Google Cloud Platform (GCP) solution designed to help manufacturers, consumer packaged goods companies, and other businesses reduce defects during the manufacturing and inspection process. Google says it’s the first dedicated GCP service for manufacturers, representing a doubling down on the vertical.

It’s estimated that defects cost manufacturers billions of dollars every year — in fact, quality-related costs can consume 15% to 20% of sales revenue. Twenty-three percent of all unplanned downtime in manufacturing is the result of human error compared with rates as low as 9 percent in other sectors, according to a Vanson Bourne study. The $327.6 million Mars Climate Orbiter spacecraft was destroyed because of a failure to properly convert between units of measurement, and one pharma company reported a misunderstanding that resulted in an alert ticket being overridden, which cost four days on the production line at £200,000 ($253,946) per day.

Powered by GCP’s computer vision technology, Visual Inspection AI aims to automate quality assurance workflows, enabling companies to identify and correct defects before products are shipped. By identifying defects early in the manufacturing process, Visual Inspection AI can improve production throughput, increase yields, reduce rework, and slash return and repair costs, Google boldly claims.

AI-powered inspection

As Dominik Wee, GCP’s managing director of manufacturing and industrial, explains, Visual Inspection AI specifically addresses two high-level use cases in manufacturing: cosmetic defect detection and assembly inspection. Once the service is fine-tuned on images of a business’ products, it can spot potential issues in real time, optionally operating on an on-premises server while leveraging the power of the cloud for additional processing.

Visual Inspection AI competes with Amazon’s Lookout for Vision, a cloud service that analyzes images using computer vision to spot product or process defects and anomalies in manufactured goods. Announced in preview at the company’s virtual re:Invent conference in December 2020 and launched in general availability in February, Amazon claims that Lookout for Vision’s computer vision algorithms can learn to detect manufacturing and production defects including cracks, dents, incorrect colors, and irregular shapes from as few as 30 baseline images.

But while Lookout for Vision counts GE Healthcare, Basler, and Sweden-based Dafgards among its users, Google says that Renault, Foxconn, and Kyocera have chosen Visual Inspection AI to augment their quality assurance testing. Wee says that with the Visual Inspection AI, Renault is automatically identifying defects in paint finish in real time.

Moreover, Google claims that Visual Inspection AI can build models with up to 300 times fewer human-labeled images than general-purpose machine learning platforms — as few as 10. Accuracy automatically increases over time as the service is exposed to new products.

“The benefit of a dedicated solution [like Visual Inspection AI] is that it basically gives you ease of deployment and the peace of mind of being able to run it on the shop floor. It doesn’t have to run in the cloud,” Wee said. “At the same time, it gives you the power of Google’s AI and analytics. What we’re basically trying to do is get the capability of AI at scale into the hands of manufacturers.”

Trend toward automation

Manufacturing is undergoing a resurgence as business owners look to modernize their factories and speed up operations. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. Oxford Economics anticipates 12.5 million manufacturing jobs will be automated in China, while McKinsey projects machines will take upwards of 30% of these jobs in the U.S.

Indeed, 76% of respondents to a GCP and The Harris Poll survey said that they’ve turned to “disruptive technologies” like AI, data analytics, and the cloud to help navigate the pandemic. Manufacturers told surveyors that they’ve tapped AI to optimize their supply chains including in the management, risk management, and inventory management domains. Even among firms that currently don’t use AI in their day-to-day operations, about a third believe it would make employees more efficient and be helpful for employees overall, according to GCP.

“We’re seeing a lot more demand, and I think it’s because we’re getting to a point where AI is becoming really widespread,” Wee said. “Our fundamental strategy is to take Google’s horizontal AI capabilities and integrate them into the capabilities of the existing technology providers.”

According to a 2020 PricewaterhouseCoopers survey, companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025.


Categories
Tech News

Google’s algorithm misidentified an engineer as a serial killer

Google’s algorithmic failures can have dreadful consequences, from directing racist search terms to the White House in Google Maps to labeling Black people as gorillas in Google Photos.

This week, the Silicon Valley giant added another algorithmic screw-up to the list: misidentifying a software engineer as a serial killer.

The victim of this latest botch was Hristo Georgiev, an engineer based in Switzerland. Georgiev discovered that a Google search of his name returned a photo of him linked to a Wikipedia entry on a notorious murderer.

“My first reaction was that somebody was trying to pull off some sort of an elaborate prank on me, but after opening the Wikipedia article itself, it turned out that there’s no photo of me there whatsoever,” said Georgiev in a blog post.

Georgiev believes the error was caused by Google‘s knowledge graph, which generates infoboxes next to search results.

He suspects the algorithm matched his picture to the Wikipedia entry because the now-dead killer shared his name.

Georgiev is far from the first victim of the knowledge graph misfiring. The algorithm has previously generated infoboxes that falsely registered actor Paul Campbell as deceased and listed the California Republican Party’s ideology as “Nazism”.

In Georgiev’s case, the issue was swiftly resolved. After reporting the bug to Google, the company removed his image from the killer’s infobox. Georgiev gave credit to the HackerNews community for accelerating the response.

Other victims, however, may not be so lucky. If they never find the error — or struggle to resolve it — the misinformation could have troubling consequences.

I certainly wouldn’t want a potential employer, client, or partner to see my face next to an article about a serial killer.


Categories
AI

Google’s AI reservation service Duplex is now available in 49 states

More than two years after it initially began trials, Google’s AI-powered reservation service Duplex is now available in 49 US states. This looks like it’ll be the limit of Duplex’s coverage in the US for the time being, as Google tells The Verge it has no timeline to launch the service in the last hold-out state — Louisiana — due to unspecified local laws.

Adapting to local legislation is one of the reasons Duplex has taken so long to roll out across the US. Google tells The Verge it’s had to add certain features to the service (like offering a call-back number for businesses contacted by Duplex) to make it legal in some states. In others, it’s simply waited for legislation to change.

The new milestone of 49 states was spotted by Android Police, based on a Google support page that lists Duplex’s availability. In each of these states, Duplex will be able to book appointments (like reservations at restaurants) and call businesses to check information like opening hours.

Google wowed audiences when it first unveiled Duplex at its 2018 I/O conference. As a feature of Google Assistant, Duplex uses AI to call local businesses, making reservations at restaurants and hairdressers on your behalf using a realistic-sounding artificial voice.

Initially, it seemed Google promised more than it could deliver. In 2019, it was revealed that 25 percent of Duplex calls were being made by humans and that 19 percent of calls started by the automated system had to be completed by people. And in our own reporting, we found that restaurants often confused Duplex with automated spam robocalls. As of October last year, though, Google says 99 percent of Duplex calls are fully automated.

As businesses begin to open up again this year, it’ll be interesting to see if Duplex can keep up.

Update April 1, 12:45PM ET: Story has been updated with most recent data on the percentage of Duplex calls that are fully automated.
