
Google’s Nest Wifi Pro leaks weeks ahead of Pixel event

The Google Nest Wifi Pro is expected to arrive soon, and a recent leak suggests that the price has gone up compared to earlier models.

The info comes from B&H Photo, which briefly posted details of the product ahead of Google’s Pixel event, scheduled for October 6. A search for “Google Nest Wifi Pro 6E” returned several results. Those B&H listings have now been removed, but we managed to capture screenshots before they disappeared. The price is shown as $199 — $30 more than the 2019 Nest Wifi’s retail price of $169 and $80 more than the current $119 sale price.

The listings also revealed two bundle options: a two-pack priced at $299 and a three-pack costing $399. Four color choices are listed: Snow, Linen, Fog, and Lemongrass. The two-pack is only available in Snow, while the three-pack comes in either Snow or a multicolor set. No photos or other details were shown in these accidental early listings.

The name Google Nest Wifi Pro 6E gives some clues about what we can expect. Wi-Fi 6E is the latest version of the Wi-Fi standard, adding an extra frequency band to the already fast Wi-Fi 6 specification. That extra band will help on crowded networks with several phones, computers, and smart devices connected. The “Pro” naming scheme, as 9to5Google points out, suggests that Google might continue to sell the standard Nest Wifi alongside this new model.


Another detail worth noting is the lack of any mention of a Nest Point in the B&H listings. This lower-cost accessory works with the Nest Wifi to expand the range of coverage within your home or office. Even if the Nest Wifi Pro works with the older Nest Point, the latter is a Wi-Fi 5 device and won’t be able to broadcast a Wi-Fi 6 or 6E signal.

With Google’s next event coming in just a few weeks, we probably won’t have to wait long to learn more about the Google Nest Wifi Pro router. If you need a network upgrade sooner, there are some great deals available right now on Google Wi-Fi routers.



All these images were generated by Google’s latest text-to-image AI

There’s a new hot trend in AI: text-to-image generators. Feed these programs any text you like and they’ll generate remarkably accurate pictures that match that description. They can match a range of styles, from oil paintings to CGI renders and even photographs, and — though it sounds cliched — in many ways the only limit is your imagination.

To date, the leader in the field has been DALL-E, a program created by commercial AI lab OpenAI (and updated just this past April). Yesterday, though, Google announced its own take on the genre, Imagen, and it appears to unseat DALL-E in the quality of its output.

The best way to understand the amazing capability of these models is to simply look over some of the images they can generate. There are some generated by Imagen above, and even more below (you can see more examples at Google’s dedicated landing page).

In each case, the text at the bottom of the image was the prompt fed into the program, and the picture above, the output. Just to stress: that’s all it takes. You type what you want to see and the program generates it. Pretty fantastic, right?

But while these pictures are undeniably impressive in their coherence and accuracy, they should also be taken with a pinch of salt. When research teams like Google Brain release a new AI model, they tend to cherry-pick the best results. So, while these pictures all look perfectly polished, they may not represent the average output of the Imagen system.

Often, images generated by text-to-image models look unfinished, smeared, or blurry — problems we’ve seen with pictures generated by OpenAI’s DALL-E program. (For more on the trouble spots for text-to-image systems, check out this interesting Twitter thread that dives into problems with DALL-E. It highlights, among other things, the tendency of the system to misunderstand prompts, and struggle with both text and faces.)

Google, though, claims that Imagen produces consistently better images than DALL-E 2, based on a new benchmark it created for this project, named DrawBench.

DrawBench isn’t a particularly complex metric: it’s essentially a list of some 200 text prompts that Google’s team fed into Imagen and other text-to-image generators, with the output from each program then judged by human raters. As shown in the graphs below, Google found that humans generally preferred the output of Imagen to that of its rivals.
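To see just how simple that methodology is, here’s a minimal sketch of the tallying step with invented data — the prompts, model names, and votes below are illustrative, not Google’s actual study data:

```python
from collections import Counter

# Invented side-by-side judgments: for each prompt, each human rater
# names the model whose output they preferred. Prompts, models, and
# votes are all placeholders standing in for the real DrawBench study.
votes = {
    "a corgi riding a bike in Times Square": ["imagen", "imagen", "dalle2"],
    "a small cactus wearing a straw hat": ["imagen", "dalle2", "imagen"],
}

# Pool every individual judgment and report each model's win rate.
tally = Counter(choice for raters in votes.values() for choice in raters)
total = sum(tally.values())
for model, wins in tally.most_common():
    print(f"{model}: preferred in {wins / total:.0%} of judgments")
```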

Google’s DrawBench benchmark compares the output of Imagen to rival text-to-image systems like OpenAI’s DALL-E 2.
Image: Google

It’ll be hard to judge this for ourselves, though, as Google isn’t making the Imagen model available to the public. There’s good reason for this, too. Although text-to-image models certainly have fantastic creative potential, they also have a range of troubling applications. Imagine a system that generates pretty much any image you like being used for fake news, hoaxes, or harassment, for example. As Google notes, these systems also encode social biases, and their output is often racist, sexist, or toxic in some other inventive fashion.

A lot of this is due to how these systems are programmed. Essentially, they’re trained on huge amounts of data (in this case: lots of pairs of images and captions) which they study for patterns and learn to replicate. But these models need a hell of a lot of data, and most researchers — even those working for well-funded tech giants like Google — have decided that it’s too onerous to comprehensively filter this input. So, they scrape huge quantities of data from the web, and as a consequence their models ingest (and learn to replicate) all the hateful bile you’d expect to find online.

As Google’s researchers summarize this problem in their paper: “[T]he large scale data requirements of text-to-image models […] have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets […] Dataset audits have revealed these datasets tend to reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups.”

In other words, the well-worn adage of computer scientists still applies in the whizzy world of AI: garbage in, garbage out.

Google doesn’t go into too much detail about the troubling content generated by Imagen, but notes that the model “encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes.”

This is something researchers have also found while evaluating DALL-E. Ask DALL-E to generate images of a “flight attendant,” for example, and almost all the subjects will be women. Ask for pictures of a “CEO,” and, surprise, surprise, you get a bunch of white men.

For this reason, OpenAI also decided not to release DALL-E publicly, though the company does give access to select beta testers. It also filters certain text inputs in an attempt to stop the model from being used to generate racist, violent, or pornographic imagery. These measures go some way to restricting potential harmful applications of this technology, but the history of AI tells us that such text-to-image models will almost certainly become public at some point in the future, with all the troubling implications that wider access brings.

Google’s own conclusion is that Imagen “is not suitable for public use at this time,” and the company says it plans to develop a new way to benchmark “social and cultural bias in future work” and test future iterations. For now, though, we’ll have to be satisfied with the company’s upbeat selection of images — raccoon royalty and cacti wearing sunglasses. That’s just the tip of the iceberg, though. The iceberg made from the unintended consequences of technological research, if Imagen wants to have a go at generating that.





Google’s open-source bug bounty aims to clamp down on supply chain attacks

Google has introduced a new vulnerability rewards program to pay researchers who find security flaws in its open-source software or in the building blocks that its software is built on. It’ll pay anywhere from $101 to $31,337 for information about bugs in projects like Angular, GoLang, and Fuchsia or for vulnerabilities in the third-party dependencies that are included in those projects’ codebases.

While it’s important for Google to fix bugs in its own projects (and in the software that it uses to keep track of changes to its code, which the program also covers), perhaps the most interesting part is the bit about third-party dependencies. Programmers often use code from open-source projects so they don’t have to continuously reinvent the wheel. But since developers often directly import that code, as well as any updates to it, that introduces the possibility of supply chain attacks: hackers don’t target the code directly controlled by Google itself but go after these third-party dependencies instead.

As SolarWinds showed, this type of attack isn’t limited to open-source projects. But in the past few years, we’ve seen several stories where big companies have had their security put at risk thanks to dependencies. There are ways to mitigate this sort of attack vector — Google itself has begun vetting and distributing a subset of popular open-source programs, but it’s almost impossible to check over all the code a project uses. Incentivizing the community to check through dependencies and first-party code helps Google cast a wider net.
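One common mitigation — and the principle behind tools like pip’s --require-hashes mode — is to pin dependencies to known-good hashes, so a tampered upstream artifact fails loudly instead of being silently imported. Here’s a minimal sketch of the idea; the package name and digest are placeholders, not real values:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping artifact filenames to reviewed SHA-256
# digests. Both the filename and digest here are placeholders.
PINNED = {
    "somelib-1.2.3.tar.gz": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify(path: Path) -> None:
    # Hash the downloaded artifact and compare against the pinned digest.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != PINNED.get(path.name):
        raise RuntimeError(f"{path.name}: hash mismatch; refusing to install")

# Raises unless the artifact on disk matches what was reviewed.
verify(Path("somelib-1.2.3.tar.gz"))
```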

According to Google’s rules, payouts from the Open Source Software Vulnerability Rewards Program will depend on the severity of the bug, as well as the importance of the project it was found in (Fuchsia and the like are considered “flagship” projects and thus have the biggest payouts). There are also some additional rules around bounties for supply chain vulnerabilities — researchers will have to inform whoever’s actually in charge of the third-party project before telling Google. They also have to prove that the issue affects Google’s project; if there’s a bug in a part of the library the company’s not using, it won’t be eligible for the program.

Google also says that it doesn’t want people poking around at third-party services or platforms it uses for its open-source projects. If you find an issue with how its GitHub repository is configured, that’s fine; if you find an issue with GitHub’s login system, that’s not covered. (Google says it can’t authorize people to “conduct security research of assets that belong to other users and companies on their behalf.”)

For researchers who aren’t motivated by money, Google offers to donate their rewards to a charity picked by the researcher — the company even says it’ll double those donations.

Obviously, this isn’t Google’s first crack at a bug bounty — it has had some form of vulnerability rewards program for over a decade. But it’s good to see that the company’s taking action on a problem that it’s been raising the alarm about. Earlier this year, in the wake of the Log4Shell exploit found in the popular open-source Log4j library, Google said the US government needs to be more involved in finding and dealing with security issues in critical open-source projects. Since then, as BleepingComputer notes, the company has temporarily bumped up payouts for people who find bugs in certain open-source projects like Kubernetes and the Linux kernel.





Alphabet is putting its prototype robots to work cleaning up around Google’s offices

What does Google’s parent company Alphabet want with robots? Well, it would like them to clean up around the office, for a start.

The company announced today that its Everyday Robots Project — a team within its experimental X labs dedicated to creating “a general-purpose learning robot” — has moved some of its prototype machines out of the lab and into Google’s Bay Area campuses to carry out some light custodial tasks.

“We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices,” said Everyday Robots’ chief robot officer Hans Peter Brøndmo in a blog post. “The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and the same gripper that grasps cups can learn to open doors.”

The robots in question are essentially arms on wheels, with a multipurpose gripper on the end of a flexible arm attached to a central tower. There’s a “head” on top of the tower with cameras and sensors for machine vision, and what looks like a spinning lidar unit on the side, presumably for navigation.

One of Alphabet’s Everyday Robot machines cleans the crumbs off a cafe table.
Image: Alphabet

As Brøndmo indicates, these bots were first seen sorting out recycling when Alphabet debuted the Everyday Robot team in 2019. The big promise that’s being made by the company (as well as by many other startups and rivals) is that machine learning will finally enable robots to operate in “unstructured” environments like homes and offices.

Right now, we’re very good at building machines that can carry out repetitive jobs in a factory, but we’re stumped when trying to get them to replicate simple tasks like cleaning up a kitchen or folding laundry.

Think about it: you may have seen robots from Boston Dynamics performing backflips and dancing to The Rolling Stones, but have you ever seen one take out the trash? It’s because getting a machine to manipulate never-before-seen objects in a novel setting (something humans do every day) is extremely difficult. This is the problem Alphabet wants to solve.

Unit 033 makes a bid for freedom.
Image: Alphabet

Is it going to? Well, maybe one day — if company execs feel it’s worth burning through millions of dollars in research to achieve this goal. Certainly, though, humans are going to be cheaper and more efficient than robots for these jobs in the foreseeable future. The update today from Everyday Robot is neat, but it’s far from a leap forward. You can see from the GIFs that Alphabet shared of its robots that they’re still slow and awkward, carrying out tasks inexpertly and at a glacial pace.

However, it’s still notable that the robots are being tested “in the wild” rather than in the lab. Compare Alphabet’s machines to Samsung’s Bot Handy, for example: a similar-looking tower-and-arm bot that the company showed off at CES last year, apparently pouring wine and loading a dishwasher. Bot Handy looked like it was performing these jobs, but really it was only carrying out a prearranged demo. Who knows how capable, if at all, that robot is in the real world? At least Alphabet is finding this out for itself.



Google’s future in enterprise hinges on strategic cybersecurity

Gaps in Google’s cybersecurity strategy make banks, financial institutions, and larger enterprises slow to adopt the Google Cloud Platform (GCP), with deals often going to Microsoft Azure and Amazon Web Services instead.

It also doesn’t help that GCP has long had the reputation that it is more aligned with developers and their needs than with enterprise and commercial projects. But Google now has a timely opportunity to open its customer aperture with new security offerings designed to fill many of those gaps.

During last week’s Google Cloud Next virtual conference, Google executives leading the security business units announced an ambitious new series of cybersecurity initiatives precisely for this purpose. The most noteworthy announcements are the formation of the Google Cybersecurity Action Team, new zero-trust solutions for Google Workspace, and extending Work Safer with CrowdStrike and Palo Alto Networks partnerships.

The most valuable new announcements for enterprises, however, are on the BeyondCorp Enterprise platform. BeyondCorp Enterprise is Google’s zero-trust platform that allows virtual workforces to access applications in the cloud or on-premises and work from anywhere without a traditional remote-access VPN. The newly announced Work Safer initiative combines BeyondCorp Enterprise for zero-trust security with Google’s Workspace collaboration platform.

Workspace now has 4.8 billion installations of 5,300 public applications across more than 3 billion users, making it an ideal platform to build and scale cybersecurity partnerships. Workspace also reflects the growing problem chief information security officers (CISOs) and CIOs have with protecting the exponentially increasing number of endpoints that dominate their virtual-first IT infrastructures.

Bringing order to cybersecurity chaos

With the latest series of cybersecurity strategies and product announcements, Google is attempting to sell CISOs on the idea of trusting Google for their complete security and public cloud tech stack. Unfortunately, that doesn’t reflect the reality at many enterprises, where CISOs have lifted and shifted numerous legacy systems to the cloud.

Missing from the many announcements were new approaches to dealing with just how chaotic, lethal, and uncontrolled breaches and ransomware attacks have become. But Google’s announcement of Work Safer, a program that combines Workspace with Google cybersecurity services and new integrations to CrowdStrike and Palo Alto Networks, is a step in the right direction.

The Google Cybersecurity Action Team claimed in a media advisory it will be “the world’s premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.”  But let’s get real: This is a professional services organization designed to drive high-margin engagement in enterprise accounts. Unfortunately, small and mid-tier enterprises won’t be able to afford engagements with the Cybersecurity Action Team, which means they’ll have to rely on system integrators or their own IT staff.

Why every cloud needs to be a trusted cloud

CISOs and CIOs tell VentureBeat that it’s a cloud-native world now, and that includes closing the security gaps in hybrid cloud configurations. Most enterprise tech stacks grew through mergers, acquisitions, and a decade or more of cybersecurity tech-buying decisions. These are held together with custom integration code written and maintained by outside system integrators in many cases. New digital-first revenue streams are generated from applications running on these tech stacks. This adds to their complexity. In reality, every cloud now needs to be a trusted cloud.

Google’s series of announcements relating to integration and security monitoring and operations are needed, but they are not enough. Historically, Google has lagged behind the market in security monitoring, having prioritized its own data loss prevention (DLP) APIs, given their proven scalability in large enterprises. To Google’s credit, it has created a technology partnership with Cybereason, which will use Google’s cloud security analytics platform Chronicle to improve its extended detection and response (XDR) service and will help security and IT teams identify and prevent attacks using threat hunting and incident response logic.

Google now appears to have the components it previously lacked to offer a much-improved selection of security solutions to its customers. Creating Work Safer by bundling the BeyondCorp Enterprise Platform, Workspace, the suite of Google cybersecurity products, and new integrations with CrowdStrike and Palo Alto Networks will resonate the most with CISOs and CIOs.

Without a doubt, many will want a price break on BeyondCorp maintenance fees at a minimum. While BeyondCorp is generally attractive to large enterprises, it’s not addressing the quickening pace of the arms race between bad actors and enterprises. Google also includes reCAPTCHA Enterprise and Chrome Enterprise for desktop management, both needed by all organizations to scale website protection and browser-level security across all devices.

It’s all about protecting threat surfaces

Enterprises operating in a cloud-native world mostly need to protect threat surfaces. Google announced a new client connector for its BeyondCorp Enterprise platform that can be configured to protect Google-native and legacy applications alike — the latter being very important to older companies. The new connector also supports identity- and context-aware access to non-web applications running in both Google Cloud and non-Google Cloud environments. BeyondCorp Enterprise will also get a policy troubleshooter that gives admins greater flexibility to diagnose access failures, triage events, and unblock users.

Throughout Google Cloud Next, cybersecurity executives spoke of embedding security into the DevOps process and creating zero-trust supply chains to protect new executable code from being breached. Achieving that ambitious goal for the company’s overall cybersecurity strategy requires zero trust to be embedded in every phase of a build cycle through deployment.

Cloud Build is designed to support builds, tests, and deployments on Google’s serverless CI/CD platform. It’s SLSA Level 1 compliant, with scripted builds and support for available provenance. In addition, Google launched a new build integrity feature in Cloud Build that automatically generates a verifiable build manifest. The manifest includes a signed certificate describing the sources that went into the build, the hashes of artifacts used, and other parameters. Binary authorization is now also integrated with Cloud Build to ensure that only trusted images make it to production.
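To illustrate why a verifiable manifest matters, here’s a rough sketch of the verification side, assuming a simplified JSON manifest invented for this example — Cloud Build’s actual provenance format is richer and carries a signature:

```python
import hashlib
import json
from pathlib import Path

# Simplified manifest format invented for this sketch: a JSON map from
# built artifacts to the SHA-256 digests recorded at build time. Real
# provenance (e.g., SLSA attestations) also records sources and signs
# the whole manifest so it can't itself be tampered with.
manifest = json.loads(Path("manifest.json").read_text())

for artifact, recorded in manifest["artifacts"].items():
    # Re-hash each artifact on disk and compare to the recorded digest.
    actual = hashlib.sha256(Path(artifact).read_bytes()).hexdigest()
    print(f"{artifact}: {'ok' if actual == recorded else 'TAMPERED'}")
```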

These new announcements will protect software supply chains for large-scale enterprises already running a Google-dominated tech stack. It’s going to be a challenge for mid-tier and smaller organizations to get these systems running on their IT budgets and resources, however.

Bottom line: Cybersecurity strategy needs to work for everybody  

As Google’s cybersecurity strategy goes, so will go the sales of the Google Cloud Platform. Convincing enterprise CISOs and CIOs to replace or extend their tech stack and make it Google-centric isn’t the answer. The answer is recognizing how chaotic, diverse, and unpredictable the cybersecurity threatscape is today, and building more apps, platforms, and adaptive tools that learn fast and thwart breaches.

Getting integration right is just part of the challenge. The far more challenging aspect is how to close the widening cybersecurity gaps all organizations face — not only large-scale enterprises — without requiring a Google-dominated tech stack to achieve it.

 



Walk the Great Wall of China in Google’s Latest Virtual Tour

If your pandemic-related precautions still prevent you from traveling but you’d like to take a trip somewhere far away, then how about diving into the latest virtual tour from Google Arts & Culture?

The Street View-style experience features a 360-degree virtual tour of one of the best-preserved sections of the Great Wall, which in its entirety stretches for more than 13,000 miles — about the round-trip distance between Los Angeles and New Zealand.

A section of China’s Great Wall. Google Arts & Culture

The new virtual tour includes 370 high-quality images of the Great Wall, together with 35 stories offering an array of architectural details about the world-famous structure.

“It’s a chance for people to experience parts of the Great Wall that might otherwise be hard to access, learn more about its rich history, and understand how it’s being preserved for future generations,” Google’s Pierre Caessa wrote in a blog post announcing the new content.

The wall was used to defend against various invaders through the ages and took more than 2,000 years to build. The structure is often described as “the largest man-made project in the world.”

But climate conditions and human activities have seen a third of the UNESCO World Heritage site gradually crumble away, though many sections of the wall are now being restored so that it can be enjoyed and appreciated for years to come.

Google Arts & Culture has been steadily adding to its library of virtual tours, which can be enjoyed on mobile and desktop devices. The collection includes The Hidden Worlds of the National Parks and an immersive exploration of some of the world’s most remote and historically significant places.

If you’re looking for more content along the same lines, then check out these virtual-tour apps that transport you to special locations around the world, and even to outer space.



Google’s SoundStream codec simultaneously suppresses noise and compresses audio



Google today detailed SoundStream, an end-to-end “neural” audio codec that can provide higher-quality audio while encoding different sound types, including clean speech, noisy and reverberant speech, music, and environmental sounds. The company claims this is the first AI-powered codec to work on both speech and music while also being able to run in real time on a smartphone processor.

Audio codecs compress audio to reduce storage and bandwidth requirements. Ideally, the decoded audio should be perceptually indistinguishable from the original and introduce little latency. While most codecs leverage domain expertise and carefully engineered signal processing pipelines, there’s been interest in replacing these handcrafted pipelines with AI that can learn to encode audio on the fly.

Earlier this year, Google released Lyra, a neural audio codec trained to compress low-bitrate speech. SoundStream extends this work with a system consisting of an encoder, decoder, and quantizer. The encoder converts audio into a coded signal that’s compressed using the quantizer and converted back to audio using the decoder. Once trained, the encoder and decoder can be run on separate clients to transmit audio over the internet, and the decoder can operate at any bitrate.
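The encode-quantize-decode flow is easier to see in miniature. Here’s a toy sketch in Python — the frame size, codebook, and nearest-neighbor lookup are crude stand-ins for what are, in SoundStream, learned convolutional networks and a learned residual vector quantizer:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 8))  # 256 "learned" code vectors of dim 8

def encode(audio: np.ndarray) -> np.ndarray:
    # Stand-in encoder: chop the waveform into frames of 8 samples.
    return audio.reshape(-1, 8)

def quantize(frames: np.ndarray) -> np.ndarray:
    # Map each frame to its nearest codebook entry; only these small
    # integer indices need to be transmitted over the network.
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def decode(indices: np.ndarray) -> np.ndarray:
    # Stand-in decoder: look the code vectors back up and re-join them.
    return codebook[indices].reshape(-1)

audio = rng.normal(size=160)  # 160 samples of stand-in audio
reconstructed = decode(quantize(encode(audio)))
print(audio.shape == reconstructed.shape)  # True: same length in and out
```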

Compressing audio

In traditional audio processing pipelines, compression and enhancement — i.e., the removal of background noise — are typically performed by different modules. But SoundStream is designed to carry out compression and enhancement at the same time. At 3kbps, SoundStream outperforms the popular Opus codec at 12kbps and approaches the quality of EVS at 9.6kbps while using 3.2-4 times fewer bits, Google claims. Moreover, SoundStream performs better than the current version of Lyra when compared at the same bitrate.


Google cautions that SoundStream is still in the experimental stages. However, the company plans to release an updated version of Lyra that incorporates its components to deliver both higher audio quality and “reduced complexity.”

“Efficient compression is necessary whenever one needs to transmit audio, whether when streaming a video or during a conference call. SoundStream is an important step toward improving machine learning-driven audio codecs. It outperforms state-of-the-art codecs, such as Opus and EVS, can enhance audio on demand, and requires deployment of only a single scalable model, rather than many,” Google research scientist Neil Zeghidour and staff research scientist Marco Tagliasacchi wrote in a blog post. “By integrating SoundStream with Lyra, developers can leverage the existing Lyra APIs and tools for their work, providing both flexibility and better sound quality.”



Google’s new Titan security key lineup won’t make you choose between USB-C and NFC

Google announced updates to its Titan security key lineup on Monday, simplifying it by removing a product and bringing NFC to all its keys. The company will now offer two options: one has a USB-A connector, one has USB-C, and both have NFC for connecting to “most mobile devices.” The USB-A key will cost $30, and the USB-C key will cost $35 when they go on sale on August 10th.

One of the biggest changes in Google’s new lineup is an updated USB-C key, which has added NFC support. Google’s previous USB-C option, made in collaboration with Yubico, didn’t support the wireless standard. Now the choice between USB-C and USB-A is easy, as neither key has features that the other lacks; it’s simply a matter of what ports your computer has. Google did not immediately respond to a request for comment on whether Yubico was involved with the new key.

According to Google’s support document, its Titan security keys can be used to protect your Google account as well as with third-party apps and services that support FIDO standards, such as 1Password. They, and other security keys from companies like Yubico, can act as second factors to secure your account even if an attacker obtains your username and password. They also fight back against phishing since they won’t authenticate a login to a fake website that’s trying to steal your credentials. The Titan keys also work with Google’s Advanced Protection Program, which is designed to provide extra security to people whose accounts may be targeted.

Google’s current USB-A security key already includes NFC and sells for $25. The USB-A plus NFC key that Google lists in its blog post will sell for $30, but it comes with a USB-C adapter. The USB-A key currently listed on the store doesn’t include one, unless bought as part of a (sold-out) bundle, according to Google’s spec page.

Google’s NFC / Bluetooth / USB key, which was made available to the public in 2018, will no longer be sold as part of the updated lineup. It’s already listed as sold out on Google’s store page. Google’s blog post says that it’s discontinuing the Bluetooth model so it can focus on “easier and more widely available NFC capability.”

While the updated Titan Security Key lineup seems to lack a Bluetooth option, it’s nice to see that the USB-C key is getting NFC. If you’re living the MacBook / iPhone lifestyle, you’ll be able to use the updated USB-C plus NFC key without any dongles. Google says in its blog post that the Bluetooth / NFC / USB key will still work over Bluetooth and NFC “on most modern mobile devices.” Google’s Titan Security Key store page currently lists the old models, but Google’s post says the updated lineup will be available starting on August 10th.



Google’s Unattended Project Recommender aims to cut cloud costs



Google today announced Unattended Project Recommender, a new feature of Active Assist, Google’s collection of tools designed to help optimize Google Cloud environments. Unattended Project Recommender is intended to provide “a one-stop shop” for discovering, reclaiming, and shutting down unattended cloud computing projects, Google says, via actionable and automatic recommendations powered by machine learning algorithms.

In enterprise environments, it’s not uncommon for cloud resources to occasionally be forgotten. Not only can these resources be difficult to identify, but they also tend to create a lot of headaches for product teams down the road — including unnecessary waste. A recent Anodot survey found that fewer than 20% of companies were able to immediately detect spikes in cloud costs, and that 77% of companies with over $2 million in cloud costs were often surprised by how much they spent.

Unattended Project Recommender, which is available through Google Cloud’s Recommender API, aims to address this by identifying projects that are likely abandoned based on API and networking activity, billing, usage of cloud services, and other signals. As product managers Dima Melnyk and Bakh Inamov explain in a blog post, the tool was first tested with teams at Google over the course of 2021, where it was used to clean up internal unattended projects and eventually the projects of select Google Cloud customers, who helped to tune Unattended Project Recommender based on real-life data.

Machine learning

Unattended Project Recommender analyzes usage activity across all projects within an organization, including items like service accounts with authentication activity, API calls consumed, network ingress and egress, services with billable usage, active project owners, the number of active virtual machines, BigQuery jobs, and storage requests. Google Cloud customers can automatically export recommendations for investigation or interact with the data using spreadsheets in Google Workspace.
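For teams that would rather pull these recommendations programmatically than through spreadsheets, a minimal sketch using the Recommender API’s official Python client might look like the following. The project ID is a placeholder, and the recommender ID is our best guess at the relevant identifier, so check Google’s Recommender documentation before relying on it:

```python
# Requires: pip install google-cloud-recommender
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()

# "my-project" is a placeholder, and the recommender ID below is an
# assumption about the relevant identifier, not confirmed by this article.
parent = (
    "projects/my-project/locations/global/recommenders/"
    "google.resourcemanager.projectUtilization.Recommender"
)

# Iterate over the generated recommendations, e.g., projects to clean up.
for rec in client.list_recommendations(parent=parent):
    print(rec.name, "-", rec.description)
```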

“Based on [various] signals, Unattended Project Recommender can generate recommendations to clean up projects that have low usage activity, where ‘low usage’ is defined using a machine learning model that ranks projects in [an] organization by level of usage, or recommendations to reclaim projects that have high usage activity but no active project owners,” Melnyk and Inamov wrote. “We hope that [customers] can leverage Unattended Project Recommender to improve [their] cloud security posture and reduce cost.”

Google notes that, as with any other tool, customers can choose to opt out of data processing by disabling the corresponding groups in the “Transparency & control” tab under Google Cloud’s Privacy & Security settings.



Google’s ethical AI researchers complained of harassment long before Timnit Gebru’s firing

Google’s AI leadership came under fire in December when star ethics researcher Timnit Gebru was abruptly fired while working on a paper about the dangers of large language models. Now, new reporting from Bloomberg suggests the turmoil began long before her termination — and includes allegations of bias and sexual harassment.

Shortly after Gebru arrived at Google in 2018, she informed her boss that a colleague had been accused of sexual harassment at another organization. Katherine Heller, a Google researcher, reported the same incident, which included allegations of inappropriate touching. Google immediately opened an investigation into the man’s behavior. Bloomberg did not name the man accused of harassment, and The Verge does not know his identity.

The allegations coincided with an even more explosive story. Andy Rubin, the “father of Android,” had received a $90 million exit package despite being credibly accused of sexual misconduct. The news sparked outrage at Google, and 20,000 employees walked out of work to protest the company’s handling of sexual harassment.

Gebru and Margaret Mitchell, co-lead of the ethical AI team, went to AI chief Jeff Dean with a “litany of concerns,” according to Bloomberg. They told Dean about the colleague who’d been accused of harassment, and said there was a perceived pattern of women being excluded and undermined on the research team. Some were given lower roles than men, despite having better qualifications. Mitchell also said she’d been denied a promotion due to “nebulous complaints to HR about her personality.”

Dean was skeptical about the harassment allegations but said he would investigate, Bloomberg reports. He pushed back on the idea that there was a pattern of women on the research team getting lower-level positions than men.

After the meeting, Dean announced a new research project with the alleged harasser at the helm. Nine months later, the man was fired for “leadership issues,” according to Bloomberg. He’d been accused of misconduct at Google, although the investigation was still ongoing.

After the man was fired, he threatened to sue Google. The legal team told employees who’d spoken out about his conduct that they might hear from the man’s lawyers. The company was “vague” about whether it would defend the whistleblowers, Bloomberg reports.

The harassment allegation was not an isolated incident. Gebru and her co-workers reported additional claims of inappropriate behavior and bullying after the initial accusation.

In a statement emailed to The Verge, a Google spokesperson said: “We investigate any allegations and take firm action against employees who violate our clear workplace policies.”

Gebru said there were also ongoing issues with getting Google to respect the ethical AI team’s work. When she tried to look into a dataset released by Google’s self-driving car company Waymo, the project became mired in “legal haggling.” Gebru wanted to explore how skin tone impacted Waymo’s pedestrian-detection technology. “Waymo employees peppered the team with inquiries, including why they were interested in skin color and what they were planning to do with the results,” according to the Bloomberg article.

After Gebru went public about her firing, she received an onslaught of harassment from people who claimed that she was trying to get attention and play the victim. The latest news further validates her response that the issues she raised were part of a pattern of alleged bias on the research team.

Update April 21st, 6:05PM ET: Article updated with statement from Google.
