Categories
Game

A YouTuber built his own PS5 Slim that’s less than an inch thick

Sony typically follows up its PlayStation consoles with a slim version a few years later, but that time hasn’t come for the PS5 yet. While we all wait for a slimmer PS5 that would fit in small spaces better, a YouTuber called DIY Perks already built one for himself. He took apart a standard PlayStation 5 and replaced everything that needed to be replaced to get rid of the console’s bulk. He substituted components with similar parts and his own home-made creations, including the console’s rather voluminous casing, to come up with a device that’s just 1.9 centimeters thick.

Cramming the current device’s power supply and cooling system in with the rest of the console’s components wouldn’t yield a “slim” version of the PS5, though. So, what Perks did was build his own water-cooling system and put the power supply in a long, slim external case that can be placed behind the TV, where it won’t be noticeable. While he did run into some issues that took time to solve, he was able to make the console work in the end. His cooling system was even more efficient than the standard PS5’s, based on the temperatures he recorded when he tested it out using Horizon Forbidden West.

Unfortunately, Perks’ PS5 Slim is one of a kind and not easy to replicate. You can check out his process in the video below if you need ideas or just want to be awed.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Categories
AI

Meta has built an AI supercomputer it says will be world’s fastest by end of 2022

Social media conglomerate Meta is the latest tech company to build an “AI supercomputer” — a high-speed computer designed specifically to train machine learning systems. The company says its new AI Research SuperCluster, or RSC, is already among the fastest machines of its type and, when complete in mid-2022, will be the world’s fastest.

“Meta has developed what we believe is the world’s fastest AI supercomputer,” said Meta CEO Mark Zuckerberg in a statement. “We’re calling it RSC for AI Research SuperCluster and it’ll be complete later this year.”

The news demonstrates the absolute centrality of AI research to companies like Meta. Rivals like Microsoft and Nvidia have already announced their own “AI supercomputers,” which are slightly different from what we think of as regular supercomputers. RSC will be used to train a range of systems across Meta’s businesses: from content moderation algorithms used to detect hate speech on Facebook and Instagram to augmented reality features that will one day be available in the company’s future AR hardware. And, yes, Meta says RSC will be used to design experiences for the metaverse — the company’s insistent branding for an interconnected series of virtual spaces, from offices to online arenas.

“RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more,” write Meta engineers Kevin Lee and Shubho Sengupta in a blog post outlining the news.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together.”

Meta’s AI supercomputer is due to be complete by mid-2022.
Image: Meta

Work on RSC began a year and a half ago, with Meta’s engineers designing the machine’s various systems — cooling, power, networking, and cabling — entirely from scratch. Phase one of RSC is already up and running and consists of 760 Nvidia DGX A100 systems (eight GPUs apiece) containing 6,080 connected GPUs (a type of processor that’s particularly good at tackling machine learning problems). Meta says the system is already delivering up to 20 times better performance on its standard machine vision research tasks.

Before the end of 2022, though, phase two of RSC will be complete. At that point, it’ll contain some 16,000 total GPUs and will be able to train AI systems “with more than a trillion parameters on data sets as large as an exabyte.” (This raw number of GPUs only provides a narrow metric for a system’s overall performance, but, for comparison’s sake, Microsoft’s AI supercomputer built with research lab OpenAI is built from 10,000 GPUs.)

These numbers are all very impressive, but they do invite the question: what is an AI supercomputer anyway? And how does it compare to what we usually think of as supercomputers — vast machines deployed by universities and governments to crunch numbers in complex domains like space, nuclear physics, and climate change?

The two types of systems, known as high-performance computers or HPCs, are certainly more similar than they are different. Both are closer to datacenters than individual computers in size and appearance and rely on large numbers of interconnected processors to exchange data at blisteringly fast speeds. But there are key differences between the two, as HPC analyst Bob Sorensen of Hyperion Research explains to The Verge. “AI-based HPCs live in a somewhat different world than their traditional HPC counterparts,” says Sorensen, and the big distinction is all about accuracy.

The brief explanation is that machine learning requires less accuracy than the tasks put to traditional supercomputers, and so “AI supercomputers” (a bit of recent branding) can carry out more calculations per second than their regular brethren using the same hardware. That means when Meta says it’s built the “world’s fastest AI supercomputer,” it’s not necessarily a direct comparison to the supercomputers you often see in the news (rankings of which are compiled by the independent Top500.org and published twice a year).

To explain this a little more, you need to know that both supercomputers and AI supercomputers make calculations using what is known as floating-point arithmetic — a mathematical shorthand that’s extremely useful for making calculations using very large and very small numbers (the “floating point” in question is the decimal point, which “floats” between significant figures). The degree of accuracy deployed in floating-point calculations can be adjusted based on different formats, and the speed of most supercomputers is calculated using what are known as 64-bit floating-point operations per second, or FLOPs. However, because AI calculations require less accuracy, AI supercomputers are often measured in 32-bit or even 16-bit FLOPs. That’s why comparing the two types of systems is not necessarily apples to apples, though this caveat doesn’t diminish the incredible power and capacity of AI supercomputers.
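
To make the precision tradeoff concrete, here’s a minimal Python sketch using NumPy (our illustration, not anything from Meta’s stack) of how the same value loses significant digits as the floating-point format narrows:

```python
import numpy as np

# Store one-third at the three precisions discussed above. Fewer bits
# keep fewer significant digits; that rounding is the price of the
# extra calculations per second that narrower formats allow.
for dtype in (np.float64, np.float32, np.float16):
    value = dtype(1) / dtype(3)
    print(np.dtype(dtype).name, value)

# Prints:
#   float64 0.3333333333333333
#   float32 0.33333334
#   float16 0.3333
```

On hardware that supports them, those narrower formats let the same silicon perform far more operations per second, which is where the extra “AI FLOPs” come from.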

Sorensen offers one extra word of caution, too. As is often the case with the “speeds and feeds” approach to assessing hardware, vaunted top speeds are not always representative. “HPC vendors typically quote performance numbers that indicate the absolute fastest their machine can run. We call that the theoretical peak performance,” says Sorensen. “However, the real measure of a good system design is one that can run fast on the jobs they are designed to do. Indeed, it is not uncommon for some HPCs to achieve less than 25 percent of their so-called peak performance when running real-world applications.”

In other words: the true utility of supercomputers is to be found in the work they do, not their theoretical peak performance. For Meta, that work means building moderation systems at a time when trust in the company is at an all-time low and means creating a new computing platform — whether based on augmented reality glasses or the metaverse — that it can dominate in the face of rivals like Google, Microsoft, and Apple. An AI supercomputer offers the company raw power, but Meta still needs to find the winning strategy on its own.

Categories
AI

Adobe has built a deepfake tool, but it doesn’t know what to do with it

Deepfakes have made a huge impact on the world of image, audio, and video editing, so why isn’t Adobe, corporate behemoth of the content world, getting more involved? Well, the short answer is that it is — but slowly and carefully. At the company’s annual Max conference today, it unveiled a prototype tool named Project Morpheus that demonstrates both the potential and problems of integrating deepfake techniques into its products.

Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans.

The results are definitely not flawless and are very limited in scope in relation to the wider world of deepfakes. You can only make small, pre-ordained tweaks to the appearance of people facing the camera, and can’t do things like face swaps, for example. But the quality will improve fast, and while the feature is just a prototype for now with no guarantee it will appear in Adobe software, it’s clearly something the company is investigating seriously.

What Project Morpheus also is, though, is a deepfake tool — which is potentially a problem. A big one. Because deepfakes and all that’s associated with them — from nonconsensual pornography to political propaganda — aren’t exactly good for business.

Now, given the looseness with which we define deepfakes these days, Adobe has arguably been making such tools for years. These include the aforementioned Neural Filters, as well as more functional tools like AI-assisted masking and segmentation. But Project Morpheus is obviously much more deepfakey than the company’s earlier efforts. It’s all about editing video footage of humans — in ways that many will likely find uncanny or manipulative.

Changing someone’s facial expression in a video, for example, might be used by a director to punch up a bad take, but it could also be used to create political propaganda — e.g. making a jailed dissident appear relaxed in court footage when they’re really being starved to death. It’s what policy wonks refer to as a “dual-use technology,” which is a snappy way of saying that the tech is “sometimes maybe good, sometimes maybe shit.”

This, no doubt, is why Adobe didn’t once use the word “deepfake” to describe the technology in any of the briefing materials it sent to The Verge. And when we asked why this was, the company didn’t answer directly but instead gave a long answer about how seriously it takes the threats posed by deepfakes and what it’s doing about them.

Adobe’s efforts in these areas seem involved and sincere (they’re mostly focused on content authentication schemes), but they don’t mitigate a commercial problem facing the company: that the same deepfake tools that would be most useful to its customer base are those that are also potentially most destructive.

Take, for example, the ability to paste someone’s face onto someone else’s body — arguably the ur-deepfake application that started all this bother. You might want such a face swap for legitimate reasons, like licensing Bruce Willis’ likeness for a series of mobile ads in Russia. But you might also be creating nonconsensual pornography to harass, intimidate, or blackmail someone (by far the most common malicious application of this technology).

Regardless of your intent, if you want to create this sort of deepfake, you have plenty of options, none of which come from Adobe. You can hire a boutique deepfake content studio, wrangle with some open-source software, or, if you don’t mind your face swaps being limited to preapproved memes and gifs, you can download an app. What you can’t do is fire up Adobe Premiere or After Effects. So will that change in the future?

It’s impossible to say for sure, but I think it’s definitely a possibility. After all, Adobe survived the advent of “Photoshopped” becoming shorthand for digitally edited images in general, and often with negative connotations. And for better or worse, deepfakes are slowly losing their own negative associations as they’re adopted in more mainstream projects. Project Morpheus is a deepfake tool with some serious guardrails (you can only make prescribed changes and there’s no face-swapping, for example), but it shows that Adobe is determined to explore this territory, presumably while gauging reactions from the industry and public.

It’s fitting that as “deepfake” has replaced “Photoshopped” as the go-to accusation of fakery in the public sphere, Adobe is perhaps feeling left out. Project Morpheus suggests it may well catch up soon.

Categories
Game

Scuf’s first PS5 controllers include one built for first-person shooters

Scuf Gaming is finally diving into PS5 controllers, and its first offerings might be noteworthy if you thrive on titles like Call of Duty. The Corsair brand has launched its first PS5 gamepad line, Reflex, with a variant specifically tuned for first-person shooters. The appropriately named Reflex FPS adds instant bumpers and triggers, includes grippier surfaces and even ditches haptic feedback — in theory, you’ll get a lighter controller that won’t disrupt your aim.

The FPS, the regular Reflex and the Reflex Pro all have four remappable paddles, a profile switch, removable faceplates and interchangeable thumbsticks. The Pro adds the improved grip of the FPS while preserving the adaptive triggers you might need for some PS5 titles.

Scuf’s pricing shows these controllers are meant for enthusiasts and competitive players. The base Reflex is available now for $200, while the Reflex Pro and Reflex FPS respectively sell for $230 and $260. That’s difficult to rationalize if you just want an alternative to the official DualSense pad, but might be justifiable if you’re determined to emerge triumphant in a battle royale.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Categories
Game

Razer built a MagSafe cooling fan for iPhone gamers

Do you play enough mobile games that your phone gets hot to the touch? Probably not, but Razer has you covered regardless. According to iMore, Razer has released a $60 Phone Cooler Chroma that promises to keep your handset cool. There’s a version with a clamp for Android phones and older iPhones, but the star of the show is the MagSafe model — you won’t completely sully the design of your iPhone 12 or 13.

This being a Razer accessory, you can expect the seemingly obligatory RGB lighting (controlled through Bluetooth) as well as a high-powered seven-blade fan that remains quiet at about 30dB. Be prepared to stay near power outlets, though, as you’ll need to plug in a USB-C charger whether or not you’re using MagSafe.

The Phone Cooler Chroma is available now. The question, of course, is whether or not you’ll benefit from it in the first place. Modern phones do get warm and can throttle performance under sustained heavy loads, but it’s not clear how much cooler your phone will get when the fan sits outside of your handset. There’s also the simple matter of necessity. Do you really want a wired fan just for a performance bump in Genshin Impact or PUBG Mobile? This might not be completely far-fetched, though — gaming phones with elaborate cooling have an audience in countries like China, and Razer’s fan makes that overkill available to a wider audience.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Categories
Game

Pikmin Bloom first impressions: A different beast built on the bones of Pokemon GO

Pikmin Bloom launched today in the United States for Android and iOS devices. The game is made by Niantic in collaboration with Nintendo, and is built on the same platform as Pokemon GO. This is not the first game Niantic has made on this platform, but it might very well be the most important (after Pokemon GO, of course).

The Concept

Pikmin Bloom is more than a simple stand-alone game. It uses the Pikmin intellectual property – the game’s universe and most basic elements of gameplay – to create something new. This is similar to what Niantic did with Pokemon, without the April Fools’ joke origins.

The idea with Pikmin Bloom is that you’ll use the experience to add a layer of joy to your day-to-day exercise routine. When you go for a jog or a run, or go out to walk your dog, Pikmin Bloom will be right there with you, planting virtual flowers along your path.

The flowers stick to your map in the game. Niantic suggests these flowers will eventually appear on the map not only for you, but for other people playing the game in your area. Over time, your in-game world should be covered in many different sorts of colorful flowers.

That could be amazing! It’d be even more amazing if we were at a point where we could get something like that to work with augmented reality – but alas, we’re not quite there yet.

For now, the AR bit of the game is similar to Pokemon GO, largely in play for temporary placement of creatures so that you can take a fun photo.

Above you’ll see a few screenshots showing how exceedingly similar Pikmin Bloom is to Pokemon GO. If you’ve played games like Harry Potter: Wizards Unite, you’ll begin to see a few UI trends here from Niantic.

Signing in

Pikmin Bloom lets you sign in with a Nintendo account or a Google account, and Niantic uses those connections to deliver incentives to users who play its other games with either account. If you’re just starting to play Pikmin Bloom now, I’d recommend signing in with your Google account first.

If you do this, you’ll be able to sign in with your Nintendo Account later, and receive benefits and bonuses for both in the future. If you start with Nintendo, you will not be able to connect your Google account afterward – a connection you’ll find beneficial in the very near future if you’re a long-time player of Pokemon GO.

The double sign-in situation is just one of a series of steps you must take to understand the way Pikmin Bloom works. If you’re using an Android device, you’ll also be required to download and sign in to Google Fit – a whole extra bit of unnecessary business on its own.

Learning Curve

Pikmin Bloom is not the sort of game most people will love playing immediately. If you’ve never heard of Pikmin before, especially, there’ll be a learning curve that’s a bit too much to handle in the first few minutes after you first open the app.

The curve isn’t so much about learning how the game is played as understanding why the game is worth playing. This might be the downfall of the game, right out of the gate.

If you find the concept of the game interesting, you might stick around and play. I’ll be playing, mostly because I like the idea that the game will eventually look as good as the trailers suggest – with AR. But getting a whole lot of other people to stick around is going to take some serious work.

Download and try

The game Pikmin Bloom is ready to roll right now for users in the United States playing on iOS and Android devices of many sorts. Give the game a whirl and let us know if you find it compelling enough to continue beyond day one!

Categories
AI

Microsoft has built an AI-powered autocomplete for code using GPT-3

In September 2020, Microsoft purchased an exclusive license to the underlying technology behind GPT-3, an AI language tool built by OpenAI. Now, the Redmond, Washington-based tech giant has announced its first commercial use case for the program: an assistive feature in the company’s PowerApps software that turns natural language into readymade code.

The feature is limited in its scope and can only produce formulas in Microsoft Power Fx, a simple programming language derived from Microsoft Excel formulas that’s used mainly for database queries. But it shows the huge potential for machine learning to help novice programmers by functioning as an autocomplete tool for code.

“There’s massive demand for digital solutions but not enough coders out there. There’s a million-developer shortfall in the US alone,” Charles Lamanna, CVP of Microsoft’s Low Code Application Platform, tells The Verge. “So instead of making the world learn how to code, why don’t we make development environments speak the language of a normal human?”

Autocomplete for coders

Microsoft has been pursuing this vision for a while through Power Platform, its suite of “low code, no code” software aimed at enterprise customers. These programs run as web apps and help companies that can’t hire experienced programmers tackle basic digital tasks like analytics, data visualization, and workflow automation. GPT-3’s talents have found a home in PowerApps, a program in the suite used to create simple web and mobile apps.

Lamanna demonstrates the software by opening up an example app built by Coca-Cola to keep track of its supplies of cola concentrate. Elements in the app like buttons can be dragged and dropped around the app as if the users were arranging a PowerPoint presentation. But creating the menus that let users run specific database queries (like, say, searching for all supplies that were delivered to a specific location at a specific time) requires basic coding in the form of Microsoft Power Fx formulas.

“This is when it goes from no code to low code,” says Lamanna. “You go from drag and drop, click click click, to writing formulas. And that quickly becomes complex.” Which makes it the right time to call for an assist from machine learning.

Instead of having users learn how to make database queries in Power Fx, Microsoft is updating PowerApps so they can simply write out their query in natural language, which GPT-3 then translates into usable code. So, for example, instead of searching the database with the query “FirstN(Sort(Search('BC Orders', "Super_Fizzy", "aib_productname"), 'Purchase Date', Descending), 10)”, a user can just write “Show 10 orders that have Super Fizzy in the product name and sort by purchase date with newest on the top,” and GPT-3 will produce the correct code.
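
Microsoft hasn’t published the details of how it prompts the fine-tuned model, but a hypothetical few-shot setup, sketched here in Python against OpenAI’s older (pre-1.0) completions client with a made-up prompt template and a stand-in model name, suggests the general shape of the approach:

```python
import os

import openai  # pre-1.0 openai-python client

openai.api_key = os.getenv("OPENAI_API_KEY")

# Hypothetical few-shot prompt pairing a natural-language request with
# the Power Fx formula it should become. Microsoft's actual prompting
# and fine-tuning setup is not public; this is illustrative only.
PROMPT = """Translate each request into a Power Fx formula.

Request: Show 10 orders that have Super Fizzy in the product name and sort by purchase date with newest on the top
Formula: FirstN(Sort(Search('BC Orders', "Super_Fizzy", "aib_productname"), 'Purchase Date', Descending), 10)

Request: {request}
Formula:"""


def to_power_fx(request: str) -> str:
    response = openai.Completion.create(
        engine="davinci",  # stand-in engine name, not Microsoft's model
        prompt=PROMPT.format(request=request),
        max_tokens=100,
        temperature=0,  # deterministic output suits code generation
        stop="\n",
    )
    return response.choices[0].text.strip()


print(to_power_fx("Show the 5 cheapest orders with Diet Fizzy in the product name"))
```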

It’s a simple trick, but it has the potential to save time for millions of users, while also enabling non-coders to build products previously out of their reach. “I remember when we got the first prototype working on a Friday night, I used it, and I was like ‘oh my god, this is creepy good,’” says Lamanna. “I haven’t felt this way using technology for a long, long time.”

The feature will be available in preview in June, but Microsoft is not the first to use machine learning in this way. A number of AI-assisted coding programs have appeared in recent years, including some, like Deep TabNine, that are also powered by the GPT series. These programs show promise but are not yet widely used, mostly due to issues of reliability.

Programming languages are notoriously fickle, with tiny errors capable of crashing entire systems. And the output of AI language models is often haphazard, mixing up words and phrases and contradicting itself from sentence to sentence. The result is that it often requires coding experience to check the output of AI coding autocomplete programs. That, of course, undermines their appeal for novices.

But Microsoft’s implementation has one big advantage over other systems: Power Fx is extremely simple. The language has its roots in Microsoft Excel formulas, explains Lamanna, and is very constrained in what it can do. “It’s data-binding, single-line expressions; there’s no concept of build and compile. What you write just computes instantly,” he says. It has nothing like the power or flexibility of a programming language like Python or JavaScript, but that also means it doesn’t have as much room to commit AI-assisted errors.

As an additional safeguard, the Power Apps interface will also require that users confirm all Power Fx formulas generated from their input. Lamanna argues that this will not only reduce mistakes, but even teach users how to code over time. This seems like an optimistic read. What’s equally likely is that people will unthinkingly confirm the first option they’re given by the computer, as we tend to do with so many pop-up nuisances, from cookies to Ts&Cs.

Mitigating bias

The feature accelerates Microsoft’s “low code, no code” ambitions, but it’s also noteworthy as a major commercial application of GPT-3, one of a new breed of AI language models that dominate the contemporary AI landscape.

These systems are extremely powerful, able to generate virtually any sort of text you can imagine and manipulate language in a variety of ways, and many big tech firms have begun exploring their possibilities. Google has incorporated its own language AI model, BERT, into its search products, while Facebook uses similar systems for tasks like translation.

But these models also have their problems. The core of their capacity often comes from studying language patterns found in huge vats of text data scraped from the web. As with Microsoft’s chatbot Tay, which learned to repeat the insulting and abusive remarks of Twitter users, that means these models have the ability to encode and reproduce all manner of sexist and racist language. The text they produce can also be toxic in unexpected ways. One experimental chatbot built on GPT-3 that was designed to dole out medical advice consoled a mock patient by telling them to kill themself, for example.

The challenge of mitigating these risks depends on the exact function of the AI. In Microsoft’s case, using GPT-3 to create code means the danger is low, says Lamanna, but not nonexistent. The company has fine-tuned GPT-3 to “translate” into code by training it on examples of Power Fx formulas, but the core of the program is still based on language patterns learned from the web, meaning it retains this potential for toxicity and bias.

Lamanna gives the example of a user asking the program to find “all job applicants that are good.” How will it interpret that command? It’s within GPT-3’s power to invent criteria in order to answer the question, and it’s possible it might assume that “good” is synonymous with white-sounding names, given that this is one of a number of categories favored by biased hiring practices.

Microsoft says it’s addressing this issue in a number of ways. The first is implementing a ban list of words and phrases that the system just won’t respond to. “If you’re poking the AI to generate something bad, we’re not going to generate it for you,” says Lamanna. And if the system produces something it thinks might be problematic, it’ll prompt users to report it to tech support. Then, someone will come and register the problem (and hopefully fix it).
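
A minimal sketch of those two safeguards in Python, with placeholder word lists (Microsoft hasn’t published its actual ban list or flagging heuristics), might look like this:

```python
from typing import Callable, Optional

# Placeholder lists; the real contents are not public.
BAN_LIST = {"offensive phrase one", "offensive phrase two"}
REVIEW_TRIGGERS = {"race", "religion", "gender"}


def guarded_generate(request: str, generate: Callable[[str], str]) -> Optional[str]:
    lowered = request.lower()
    # Hard refusal: banned requests produce no formula at all.
    if any(phrase in lowered for phrase in BAN_LIST):
        return None
    formula = generate(request)
    # Soft flag: sensitive terms still yield a formula, but the user is
    # prompted to report anything that looks problematic.
    if any(term in lowered for term in REVIEW_TRIGGERS):
        print("If this formula looks wrong or biased, please report it to support.")
    return formula  # the interface still requires the user to confirm it
```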

But making the program safe without limiting its functionality is difficult, says Lamanna. Filtering by race, religion, or gender can be discriminatory, but it can also have legitimate applications, and it sounds like Microsoft is still working out how to tell the difference.

“Like any filter, it’s not perfect,” says Lamanna, emphasizing that users will have to confirm any formula written by the AI, and implying that any abuses of the program will ultimately be their responsibility. “The human does choose to inject the expression. We never inject the expression automatically,” he says.

Despite these and other unanswered questions about the program’s utility, it’s clear that this is the start of a much bigger experiment for Microsoft. It’s not hard to imagine a similar feature being integrated into Microsoft Excel, where it would reach hundreds of millions of users and dramatically expand the accessibility of this product.

When asked about this possibility, Lamanna demurs (it’s not his domain), but he does say that the plan is to make GPT-3-assisted coding available wherever Power Fx itself can be accessed. “And Power Fx is showing up in lots of different places in Microsoft products,” he says. So expect to see AI completing your code much more frequently in the future.

Categories
Computing

HP Monitor With Built-In Microphone Is Half Off at Staples

Digital Trends may earn a commission when you buy through links on our site.

An easy way to get not only the best visuals, but to totally immerse yourself in your work, content, or games, is with a top-performing monitor, like the ones you’ll find in these desktop monitor deals. If you browse Staples deals, you’ll find even more great discounts. And right now, you can get this 23-inch HP Mini-in-One HD Monitor with built-in microphone, speakers, and webcam for only $150. That’s a huge discount — $140 off — from its regular price of $290. That’s nearly 50% off!

Why settle for less, visually, when you can have the most clear, detailed, colorful pictures, and tons of great features in this 23-inch HP Mini-in-One HD Monitor? Maximize your workspace and lose yourself in 23.8 inches of HD brilliance, with a built-in microphone, speakers, and webcam. Whether you’re dialing into class, meetings, or online games with friends, this is the kind of monitor that can serve all your needs.

The screen measures 23.8 inches diagonally and uses a TFT active-matrix FHD panel, with 1920 x 1080 resolution for clear, accurate detail. Add to this a 16:9 aspect ratio for widescreen viewing, as well as 178-degree viewing angles (both vertical and horizontal), and you will have nothing short of absolutely brilliant imagery. And then you have the built-in microphone and camera, making this a monitor that not only immerses you, but can work for you as well. The speakers are front-facing, giving you a total audio experience featuring fully integrated sound — and you’ll never have to worry about cluttering your desk or work area with external speakers. Meanwhile, the built-in webcam sets you up for all your video chats and meetings.

It arrives with much of what you need, and this monitor comes absolutely loaded with ports to ensure that you can connect whatever devices you need — whether it’s your HP Elite Desktop Mini, another laptop, a tablet, cameras; you name it. There are six USB ports, including a USB-C port for the fastest data transfers. This monitor is not simply an addition to the desktops found in these desktop computer deals — it’s a wonder in itself. And it can be yours, right now at Staples, for only $150.

More monitor deals

Looking for the right monitor to enhance your work, home, or gaming experience? Check out our roundup of the best monitor deals, below.

We strive to help our readers find the best deals on quality products and services, and we choose what we cover carefully and independently. The prices, details, and availability of the products and deals in this post may be subject to change at any time. Be sure to check that they are still in effect before making a purchase.

Digital Trends may earn commission on products purchased through our links, which supports the work we do for our readers.

Categories
AI

How Pinterest built a more representative shopping experience with AI

Artificial intelligence and augmented reality can work hand in hand to support innovations that can change the shopping and advertising game.

Pinterest combined artificial intelligence with augmented reality and incorporated diversity and inclusion in a way that helped the company’s bottom line, Jeremy King, head of engineering at Pinterest, said in a conversation with Matt Marshall, founder and CEO of VentureBeat, at the Transform 2021 virtual conference.

With more than 80 million people searching for beauty items every month, Pinterest had to innovate how people could see themselves in the products they were interested in. The company created the Try On tool, which lets users use their phones to see how they would look with different types of makeup. Offering a palette of skin tone ranges increased engagement, which increased searches, which led to platform growth.

“AR Try On is our augmented reality feature that allows you to use the lens camera on your phone to try on lipsticks and eyeshadows and filter makeup,” King said. “You pick up your phone, and essentially point it right at your face and you can do a search for something like red lipstick or makeup ideas and you’ll see not only a palette of skin tone ranges, but also a button to try on the shades.”

Above: Pinterest’s AR Try On is an augmented reality tool that allows customers to try on makeup.

Image Credit: Volley

Pinterest worked to prevent bias from being introduced into the data by diversifying the data sources and testing computer vision and machine learning algorithms on all kinds of skin tone ranges and makeup colors. Pinterest is able to serve diverse recommendations across the board, and the tool is able to understand and process skin tone ranges in complex lighting scenarios.

“Building for diversity and representation isn’t just the right thing to do. For us, it’s really good for our business,” King said.

As for the payoff? When using the skin tone range feature with AR, customers were five times more likely to show purchase intent.

Connecting customers to products

Pinterest has been working to improve customer and vendor experiences in other ways as well, turning its values of diversity and inclusion into action.

“We frankly heard from pinners that we weren’t naturally relevant or diverse in our recommendations, and they had to add more descriptors to their searches to filter it,” King said. “We took that as an action item to create these capabilities not only to allow more diverse content to show up, but to integrate new features like AR Try on. And once we had that, then we really got the engagement that we’re looking for.”

The company has also worked with a diverse set of creators to produce content while also working with retail partners to ingest hundreds of thousands of products from around the world in Pinterest’s catalog.

“Creators are coming in creating unique content,” King said. “We want to be able to not only use creators, but also our computer vision technology, to identify every single picture and every one of those billions of times so that you can make a really great experience.”

Categories
AI

80% of tech could be built outside IT by 2024, thanks to low-code tools

It looks like no-code and low-code tools are here to stay. Today, Gartner released new predictions about technology products and services, specifically who will build them and the impact of AI and the pandemic. The research firm found that by 2024, 80% of tech products and services will be built by people who are not technology professionals. Gartner also expects to see more high-profile announcements of technology launches from nontech companies over the next year.

“The barrier to become a technology producer is falling due to low-code and no-code development tools,” Gartner VP Rajesh Kandaswamy told VentureBeat. When asked what kinds of tech products and services these findings apply to, he said “all of them.” Overall, Kandaswamy sees enterprises increasingly treating digital business as a team sport, rather than the sole domain of the IT department.

For this research, Gartner defined technology professionals as those whose primary job function is to help build technology products and services, using specific skills like software development, testing, and infrastructure management. This includes IT professionals and workers with specialized expertise such as CRM, AI, blockchain, and DevOps.

But instead of tech professionals driving what’s next, Gartner predicts a democratization of technology development that includes citizen developers, data scientists, and “business technologists,” a term that encompasses a wide range of employees who modify, customize, or configure their own analytics, process automation, or solutions as part of their day-to-day work. Aside from non-IT humans, AI systems that generate software will also play a significant role, according to Gartner.

From low-code to AI

Tools used for low-code development — such as drag-and-drop editors, code-generators, and the like — allow non-technical users to achieve what was previously only possible with coding knowledge. But by automating and abstracting some of the underlying technical processes — and by making the use of coding or scripting optional — these tools now make it easier for more people to customize features and functions in various applications. What’s more, AI has the potential to automate and improve many aspects of software development, from evaluating needs to deployment.

“For instance, machine learning features for helping coding are available. One example is Microsoft’s IntelliCode,” Kandaswamy told VentureBeat. “While such tools are in their infancy, we expect their sophistication to improve and [that] they’ll help reduce the barriers for those without specialized skills to develop useful technology products and services.”

Overall, this trend toward democratizing technology development is driven by a new category of buyers outside of the traditional IT enterprise, Gartner says. Total business-led IT spend averages up to 36% of the formal IT budget, according to the firm’s findings.

Pandemic impact

From retail to financial services, more companies are increasing efforts to embrace digital transformation. As they do so, they’re more often entering markets related to, or in competition with, traditional technology providers. By 2024, Gartner predicts over one-third of technology providers will be competing with non-technology providers.

In driving digital transformation, the pandemic only accelerated this shift, according to Gartner. Cloud services, digital business initiatives, and remote services rapidly expanded as a direct result of the crisis, opening the door to new possibilities in integration and optimization.

In 2023, Gartner anticipates that $30 billion in revenue will be generated by products and services that did not exist pre-pandemic.
