Categories
Computing

Why I’m almost ready to switch to AMD GPUs for streaming

Although AMD makes some of the best graphics cards, its GPUs have long been less capable than Nvidia’s for streaming. Nvidia GPUs have almost always offered better encoding performance and extra features absent on AMD cards. It’s one of the reasons I switched to Nvidia graphics despite being a longtime AMD fan; I just don’t want to give up a good streaming experience.

But all that might be different now, thanks to two key updates to AMD’s software: a brand-new encoder and AMD Noise Suppression, competitors to Nvidia’s NVENC and RTX Voice, respectively. I tested out AMD’s new tools, and the results make me think switching to AMD is now a possibility.

Streaming looks just as good on AMD

From left to right, AMF vs. NVENC. Matthew Connatser/Digital Trends

When it comes to streaming, a good encoder is crucial, and if you’re streaming games, you’ll probably want to use GPU encoding rather than CPU encoding. Nvidia’s NVENC encoder not only produces good quality but also doesn’t use very much data, which matters when every kilobit counts. You want the best ratio of visual quality to data usage possible, and in this area, Nvidia’s encoder was far ahead of AMD’s. But now that the latest version of AMD’s AMF encoder is finally out, I think Nvidia has lost this advantage.

The above image is from the opening shot of 3DMark’s Time Spy benchmark, which I recorded using streaming-optimized settings at 6,000Kbps. I selected this specific part because there’s lots of foliage, which is often difficult to capture with good quality (especially when there’s very little data to go around), but as you can see, the difference between AMF and NVENC is essentially nonexistent. AMF did well in the rest of the benchmark too, and you wouldn’t be able to tell the two recordings apart if they weren’t labeled.

It’s especially important that AMF achieved this at the same bitrate NVENC was using. It would be pretty pointless if AMF looked good but needed a substantially higher bitrate to compensate. Twitch, arguably the most popular game streaming platform, only allows up to 6,000Kbps, which is a very small amount of data to work with. As for recording, each video was only about three minutes long and weighed in at roughly 100MB, which is great for people who upload unedited stream VODs to YouTube for archival purposes.
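To put that 6,000Kbps ceiling in perspective, here’s a rough back-of-the-envelope calculation of how much video a constant bitrate produces over time. It’s a sketch, not a measurement of either encoder: real recordings from AMF or NVENC typically use variable bitrate and undershoot the target, which is why a three-minute clip can come in closer to 100MB than the theoretical maximum.

```typescript
// Rough upper bound on file size for a recording at a constant target bitrate.
// Real-world clips usually come in below this, since encoders rarely spend
// the full budget on every second of footage.
function estimateFileSizeMB(bitrateKbps: number, durationSeconds: number): number {
  const totalKilobits = bitrateKbps * durationSeconds;
  return totalKilobits / 8 / 1000; // 8 bits per byte, 1,000 kilobytes per megabyte
}

// Twitch's 6,000Kbps cap over a three-minute benchmark run:
console.log(`${estimateFileSizeMB(6000, 180).toFixed(0)} MB ceiling`); // ~135 MB
```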

AMD RX 6950 XT graphics card on a pink background.
Jacob Roach / Digital Trends

That being said, only AMD GPUs based on the RDNA2 architecture (which includes RX 6000 series GPUs) can take full advantage of the AMF encoder because older GPUs don’t have support for B-Frames, which help to increase image quality. This is a limitation of the hardware, not the software, so your RX 5700 XT will never be quite as good as an RX 6950 XT for streaming.

What AMD really needs to focus on in the future is updating its encoder as often as Nvidia does. The newest version of AMF was finished and just sitting around as open-source code until Open Broadcaster Software (OBS) contributors finally added it to the app, and now we have to wait for all the streaming services to update before you can use the new encoder. I’d like to see AMD take as active a role as Nvidia has in this area, not just in making updates but also in distributing them.

AMD Noise Suppression is good but has poor support

A podcast microphone with headphones on top.
Getty Images

Audio quality is an important (and sometimes neglected) part of streaming, and here too Nvidia held the edge thanks to its RTX Voice software, which is basically an AI-enhanced noise gate. AMD is catching up in this area with its new Noise Suppression tool, which is supposed to do the exact same thing as RTX Voice.

Given that AMD GPUs lack the kind of dedicated AI acceleration found in Nvidia GPUs, I was skeptical that Noise Suppression would be any good. Much to my surprise, the results were quite good: my gaming keyboard was nearly inaudible, even while I was talking, and the quality of my voice wasn’t reduced. If I switched to AMD Noise Suppression, I don’t think anyone who watches my streams would be able to tell the difference.

But did AMD GPUs even need this feature? Why not just set a noise gate in OBS? Well, the problem with noise gates is that they can only work based on volume, and background noise can get quite loud, especially the clicky noises from gaming keyboards. RTX Voice is a critical part of my streaming setup because it can intelligently separate my voice and my keyboard. Now that AMD GPUs have the same exact functionality, I can actually consider streaming on AMD hardware, like my ROG Zephyrus G14.
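To see why a volume-only gate struggles here, consider a minimal sketch of one (illustrative only, not the gate filter OBS actually ships): it compares short windows of audio against a loudness threshold and mutes anything below it. A keyboard click loud enough to cross the threshold sails right through, while quiet speech can get clipped.

```typescript
// Minimal volume-only noise gate: mutes windows of samples whose RMS level
// falls below a threshold. It has no idea *what* a sound is, only how loud
// it is, which is exactly the limitation described above.
function applyNoiseGate(
  samples: Float32Array,
  thresholdRms = 0.02,
  windowSize = 256
): Float32Array {
  const out = new Float32Array(samples.length);
  for (let start = 0; start < samples.length; start += windowSize) {
    const end = Math.min(start + windowSize, samples.length);
    let sumSquares = 0;
    for (let i = start; i < end; i++) sumSquares += samples[i] * samples[i];
    const rms = Math.sqrt(sumSquares / (end - start));
    const gain = rms >= thresholdRms ? 1 : 0; // fully open or fully closed
    for (let i = start; i < end; i++) out[i] = samples[i] * gain;
  }
  return out;
}
```

AI-based tools like RTX Voice and AMD Noise Suppression instead rely on trained models of what speech sounds like, which is why they can keep a voice and drop a key click that’s just as loud.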

I also like that AMD’s Noise Suppression is built into the Radeon driver suite, whereas RTX Voice is only usable by installing Nvidia Broadcast. Not only is AMD’s solution simpler, but it’s also more reliable. I can’t tell you how many times I started my stream only to realize my mic audio wasn’t coming through because Nvidia Broadcast was closed for some reason. Nvidia could learn much from AMD when it comes to driver suites, not just for this specific feature but in general.

But I do have quite a bit of criticism for AMD here when it comes to support. Not only does Noise Suppression require an RX 6000 GPU, but it also requires a Ryzen 5000 CPU or newer. The CPU requirement in particular is frustrating and almost certainly arbitrary. Not only does it lock out users running older Ryzen chips (most of which are still fast enough in 2022), but it also excludes everyone who uses an Intel CPU. It’s impossible to justify this requirement when some of the best CPUs available today are made by Intel.

Finally caught up in key areas, for now

Having bridged the gap in video and audio quality features, AMD GPUs are finally as capable as Nvidia GPUs for streaming in the areas that matter most. While the level of support AMD offers leaves much to be desired, with current-generation AMD hardware you can stream games with the same kind of quality you’d see from an Nvidia-powered PC. There are some other features that Nvidia offers, such as a digital green screen for webcam users, but AMD doesn’t really need to match that when third-party software can do the same thing.

AMD’s focus right now should be on ensuring it never falls this far behind again. AMF was worse than NVENC for several years, and RTX Voice has been around since 2020. Technology is always a moving target, and it’s hard to see Nvidia resting on its laurels any time soon. To compete with Nvidia, AMD can’t just release open-source software and hope someone else builds on it; it needs to do that work itself.

Repost: Original Source and Author Link

Categories
Security

Dashlane is ready to replace all your passwords with passkeys

Passwords are dying, long live passkeys. Practically the entire tech industry seems to agree that passwords need to die, and that the best way to replace them is with the cryptographic keys that have come to be known as passkeys. Basically, rather than having you type a phrase to prove you’re you, websites and apps use a standard called WebAuthn to connect directly to a token you have saved — on your device, in your password manager, ultimately just about anywhere — and authenticate you automatically. It’s more secure, it’s more user-friendly, it’s just better.
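For the curious, here’s roughly what that looks like on the browser side. This is a minimal sketch of a WebAuthn registration call, not Dashlane’s actual code; in a real flow the challenge and user ID come from the site’s server, and the placeholder names below are purely for illustration.

```typescript
// Minimal sketch of browser-side WebAuthn registration (illustrative only).
async function registerPasskey(): Promise<PublicKeyCredential | null> {
  const options: CredentialCreationOptions = {
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
      rp: { name: "example.com" },                           // the website ("relying party")
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // normally a stable server-side ID
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  };
  // A passkey provider -- the platform's keychain, or a manager like Dashlane's
  // browser extension -- handles this call and stores the resulting credential.
  return (await navigator.credentials.create(options)) as PublicKeyCredential | null;
}
```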

The transition is going to take a while, though, and even when you can use passkeys, it’ll be a while before all your apps and websites let you do so. But Dashlane is trying to help move things along, announcing today that it’s integrating passkeys into its cross-platform password manager. “We said, you know what, our job is to make security simple for users,” says Dashlane CEO JD Sherman, “and this is a great tool to do that. So we should actually be thinking about ushering in this passwordless era.”

Going forward, Dashlane users can start to set up passkeys to log into sites and apps where they previously would have created passwords. And whereas systems like Apple’s upcoming implementation in iOS 16 will often involve taking a picture of a QR code to log in, Dashlane says it can make the process even simpler because it has apps for most platforms and an extension for most browsers.

A screenshot of the WebAuthn website, with a Dashlane passkey creation page.

Dashlane’s passkeys can automatically log into any app that supports them.
Image: Dashlane

To demonstrate, Rew Islam, a software engineer at Dashlane, shared his screen with me over Zoom and opened up the WebAuthn website — so few apps support passkeys that the standard’s website is the best way to test them — and typed in his email address to register a new account. “At this point, you’d do your dance with the phone, you’d be scanning a QR code, but here in the corner, Dashlane is like, ‘Hey, do you want to create a new key with Dashlane?’ And you click confirm and it’s done.”

The passkey tech works, Islam says. It has for a while, and companies have been testing it and beginning to implement it for several years. The biggest challenge for the industry has been getting everyone on board with the same model for the future of authentication, which has actually happened — Google, Apple, Microsoft, and others are all betting on the same underlying passkey technology, managed through the FIDO Alliance. Apple is adding passkey support to iCloud keychain, letting users log into their devices and apps just by authenticating with Touch ID or Face ID; Google is also planning support for passkeys in Android and Chrome. Microsoft has been building passkey support for some time, using Windows Hello and other authentication tools.

Ultimately, competing with the tech giants could be a problem for Dashlane and the other password managers — it’s hard to out-convenience the built-in software that Google, Apple, and Microsoft can ship with their devices. But for now, Dashlane is happy to have the world’s biggest companies, and their commensurately big marketing budgets, telling the world about passkeys.

“FIDO and the three big platform vendors have put in a lot of marketing, a lot of messaging, to get people off this drug that is ‘okay, type in my password,’” Islam says. “That has nothing to do with technology — it’s culture and user behavior.”

And yes, competing will be hard, Sherman says, but isn’t it always? “Technology’s changing, and the big platforms have a lot of power. I have never worked in an industry where that was not the case.”

As more platforms authenticate with passkeys, Islam says, that will also help with adoption. He points out that most of those companies hate passwords just as much as users do and have plenty of incentives to make the switch. The main sticking point for now is mobile; Android and iOS are getting passkey support, but Islam says he anticipates third parties like Dashlane won’t get access to mobile passkey tech until next year at the earliest.

The next few months are almost certainly going to be a season of passkeys, as security apps of all kinds begin to support them and apps begin to let you use them. The FIDO Alliance is a who’s-who of the companies you’d want to be invested in the project, and with so much of the tech settled, it’s just a matter of implementation now. Passwords aren’t dead yet, but we know what’ll kill them. And it’s slowly coming to life.

Repost: Original Source and Author Link

Categories
Game

Walmart PS5 restock ready for Cyber Monday, but there’s a catch

It’s Cyber Monday, and even though there aren’t going to be any discounts on the PS5, there are some stores that will be restocking the hard-to-find console. Walmart is the main retailer we’ll be looking at for this PlayStation 5 stock on Cyber Monday, but as with many restocks these days, there are some strings attached. If you miss your chance with Walmart today, you might be able to snag a PS5 later this week from another major retailer.

Walmart PS5 restock slated for today

First and foremost, Walmart will be restocking the PS5 later today at 12 PM EST. Both versions of the PS5 – the $500 standard edition and the $400 digital edition – will be restocked, which is good news. However, the bad news is that you’ll need an active subscription to Walmart+ in order to buy one of these PS5s later today.

Walmart+ is a monthly subscription that gives subscribers various benefits, such as free delivery from your local store, free shipping on online orders, and gas discounts. It costs $12.95 per month or $98 per year, so it’s pricey if you usually shop elsewhere and are only turning to Walmart in an attempt to secure a PS5.


The fact that it’s required for today’s PS5 restock could be a good thing for subscribers in the end. After all, not everyone will be willing to open a Walmart+ subscription just so they can have a chance at snagging a PS5, so this requirement could thin the pool of potential buyers significantly. On the other hand, it doesn’t feel good to have to sign up for a monthly membership just for a chance at a PS5, and even worse is the fact that you can’t simply sign up for a trial membership before the listings go live later today – it seems you have to be a paying member for a chance to buy a PS5.

Walmart’s restock later today actually isn’t the first PS5 restock of Cyber Monday. GameStop restocked some of its PS5 bundles early this morning, but that went more or less as expected, with available stock selling out almost immediately. Someday a retailer will be able to keep PS5 stock on hand for more than a few minutes, but it apparently is not this day.

A potentially massive Target restock inbound

While Walmart and GameStop seem to be the only retailers restocking today (barring any surprise, unannounced restocks), those on the prowl for a PS5 might want to keep tabs on Target as we move through the week. YouTuber Jake Randall – who has primarily been using his channel to track PS5 and Xbox Series X restocks since both consoles launched last year – has tipped that Target may have a massive restock coming soon.

In a Twitter thread posted over the holiday weekend, Randall said that Target is gearing up for its biggest PS5 restock of 2021, noting that some stores have over 100 consoles on hand and ready to sell. Randall thinks this restock is likely to come sometime on Wednesday, Thursday, or Friday instead of Cyber Monday, but notes that it’s worth keeping tabs on Target throughout the week just in case consoles drop early.

Of course, there’s always a chance that other retailers enter the fray with PS5 restocks of their own, but for now, it seems that Walmart and Target are the ones to watch. If you’re on the hunt for a console, be sure to check out our tips guide for securing a PS5, but otherwise, good luck.



Repost: Original Source and Author Link

Categories
Game

PS5 side plate patent might mean Sony is ready for official custom parts

Ever since the world collectively realized that the plates on the sides of the PlayStation 5 could be removed, there’s been some degree of demand for custom fits and DIY alternatives. Several companies tried to capitalize on that demand early, only to be met with threats of litigation from Sony, and now we may know why. A Sony patent for PS5 side plates was made public this week, which may suggest that the company plans to offer custom plates of its own.

Official PS5 replacement plates on the way?

The design patent, which was first discovered by the folks over at OP Attack, is brief and to the point, as most design patents are. It mostly consists of illustrations showing off the PlayStation 5’s side plates from several angles. The final illustration in the patent shows the plates attached to a PS5 console.

The full text associated with the patent tells us that Sony filed for it on November 5th, 2020 – just a few days before the PlayStation 5 launched – but it wasn’t published on the USPTO’s website until November 16th. The claim for the patent is short and sweet, describing, “The ornamental design for a cover for electronic device, as shown and described.”

All told, the patent itself is to the point, but the illustrations make it very clear what we’re talking about here. Sony has, essentially, patented the side plates on the PlayStation 5, opening the door to not only launching custom plates itself, but also to potential licensing deals for third-party companies to create and sell their own.

Sony snuffs out unauthorized custom plates

Of course, we’ve known since the very first PlayStation 5 teardown that the side plates on the PS5 are not only removable but also need to come off, as the M.2 expansion slot and dust vents are housed underneath. The moment we saw those side plates come off, many of us assumed that we’d see custom plates before long, but a year out from release, there’s still a notable lack of them.

That’s not because companies haven’t tried. We’ve seen at least a couple of companies try their hand at offering custom PS5 plates only to be shut down by Sony. One company, named Platestation at the time, started selling its plates before the PS5 even launched. It was later forced to cancel orders after threats of legal action from Sony, once again before the PS5 was on store shelves.

Skin maker Dbrand also entered the fray with an offering called Darkplates but was forced to pull those under legal threat from Sony as well. After the original Darkplates vanished, Dbrand came back with a new version that featured an original design that trimmed down the overall size of the plates and added vents for better airflow. Darkplates are still available on Dbrand’s website, with the company as insistent as ever that these are legal, original designs that it can’t be sued over.

The fact that Sony has patented the PS5’s plates could potentially make the path to market easier for these companies through licensing agreements. Of course, the big question is whether or not Sony will offer those licensing agreements or simply opt to make its own custom faceplates. Hopefully, now that this patent is out in the open, it won’t be much longer before we find out.

Repost: Original Source and Author Link

Categories
Game

Sega’s “Super Game” gets ready for something huge

Sega and Microsoft kicked off November by making a big announcement, revealing that they are considering entering a “strategic alliance.” While the details of this partnership are somewhat vague at this early stage, it seems like it could have significant implications for this new generation of consoles. Sega also uses the words “Super Game” at one point in this announcement, so you know that whatever the two companies are plotting, at least they consider it to be big.

Azure at the center of proposed Microsoft-Sega partnership

In the announcement of this potential partnership, published today to Sega’s corporate website, Sega says that it will enable the company to explore ways it can “produce large-scale, global games in a next-generation development environment built on Microsoft’s Azure cloud platform.” While a lot of that statement is vague, the key there is the Azure name drop. Whatever Sega winds up making, it will use Microsoft’s popular cloud computing service.

Sega says that this partnership will be important to its “mid to long-term strategy” involving a new project called “Super Game.” Lacking any other details about Super Game, Sega says that it will focus on four key concepts: “Global, Online, Community, and IP utilization.” We should also point out that this partnership isn’t technically official yet, as the two companies say they have “agreed in principle” and are “considering” such an alliance.

The rest of the announcement is rather vague PR speak, but it’s worth noting that Sega does call out 5G and cloud computing specifically later on in the press release. That, in turn, suggests that Sega envisions “Super Game” as one that can be played on various platforms, likely thanks to Azure. At first blush, it almost seems like Sega is describing an MMO or a live service title with a focus on community and cloud-based gameplay.

Live service or something different?

If Sega were to make a live service title that taps Azure so it can be played on phones, consoles, PCs, or any device with an internet connection, it wouldn’t be a shocking revelation. Live service games are very popular with major publishers these days, as successful ones keep players playing and keep players spending money.

That is assuming, of course, that the live service game in question actually becomes popular. Games like Apex Legends and Destiny 2 seem to rake in cash year-over-year, while the future of live service games that fail to attract a consistent audience – Marvel’s Avengers or BioWare’s Anthem come to mind – is constantly in question.

This isn’t the first time Sega has mentioned “Super Game,” either. During a financial presentation earlier this year, Sega told investors that Super Game is a long play, anticipating that it would launch within the next five years. While we didn’t learn much else at the time, Sega did confirm that it would introduce a new IP, so if you read the words “IP utilization” earlier in the article and immediately assumed that Super Game would be a Sega mash-up that included many of its existing characters, you may not want to get your hopes up on that front.

Sega’s statements during that financial presentation in May tell us that regardless of what Super Game is, we probably aren’t going to find out more about it for some time to come. If Sega’s goal is to launch this title in the next five years, we could still be very far out from a full reveal. Indeed, Sega’s own admission is that this partnership with Microsoft “represents Sega looking ahead,” so it’s safe to assume that we’ll be well into the new console generation by the time Super Game is on shelves.

What I think is coming

Obviously, with so few concrete details about Super Game, it’s hard to say what it is with any kind of confidence. However, I think it’s probably fair to expect that Super Game will wind up being a live service title that emphasizes always-online gameplay, long-term player engagement, and monetization through microtransactions and in-game purchases.

That seems like the most logical choice simply because live service games are the current hot thing with developers large enough to build them and publishers large enough to fund them. Live service games have stepped into the space MMOs used to occupy, giving players reasons to return to the game often through frequent content updates while providing developers and publishers with consistent income.

While that consistent income may have been monthly subscriptions during the height of the MMO craze, these days they’re more often in-game purchases like skins and other cosmetic items or battle passes – reward tracks that are purchased upfront and then progressed by playing the game. Those battle passes give players reason to keep playing and induce the fear of missing out when they put the game down for a significant period of time, which is partly why live service games are controversial with some gamers.

Of course, there’s always the potential for Sega to turn Super Game into something vastly different and maybe even novel, but we’ll have to wait until we have more details about the project before we make that call one way or another.

Repost: Original Source and Author Link

Categories
AI

Google isn’t ready to turn search into a conversation

The future of search is a conversation — at least, according to Google.

It’s a pitch the company has been making for years, and it was the centerpiece of last week’s I/O developer conference. There, the company demoed two “groundbreaking” AI systems — LaMDA and MUM — that it hopes, one day, to integrate into all its products. To show off its potential, Google had LaMDA speak as the dwarf planet Pluto, answering questions about the celestial body’s environment and its flyby from the New Horizons probe.

As this tech is adopted, users will be able to “talk to Google”: using natural language to retrieve information from the web or their personal archives of messages, calendar appointments, photos, and more.

This is more than just marketing for Google. The company has evidently been contemplating what would be a major shift to its core product for years. A recent research paper from a quartet of Google engineers titled “Rethinking Search” asks exactly this: is it time to replace “classical” search engines, which provide information by ranking webpages, with AI language models that deliver these answers directly instead?

There are two questions to ask here. First: can it be done? After years of slow but definite progress, are computers really ready to understand all the nuances of human speech? And second: should it be done? What happens to Google if the company leaves classical search behind? Appropriately enough, neither question has a simple answer.

There’s no doubt that Google has been pushing a vision of speech-driven search for a long time now. It debuted Google Voice Search in 2011, then upgraded it to Google Now in 2012; launched Assistant in 2016; and in numerous I/Os since, has foregrounded speech-driven, ambient computing, often with demos of seamless home life orchestrated by Google.

Despite clear advances, I’d argue that the actual utility of this technology falls far short of the demos. Check out the introduction of Google Home from 2016 below, for example, where Google promises that the device will soon let users “control things beyond the home, like booking a car, ordering dinner, or sending flowers to mom, and much, much more.” Some of these things are now technically feasible, but I don’t think they’re common: speech has not proven to be the flexible and faultless interface of our dreams.

Everyone will have different experiences, of course, but I find that I only use my voice for very limited tasks. I dictate emails on my computer, set timers on my phone, and play music on my smart speaker. None of these constitute a conversation. They are simple commands, and experience has taught me that if I try anything more complicated, words will fail. Sometimes this is due to not being heard correctly (Siri is atrocious on that score), but often it just makes more sense to tap or type my query into a screen.

Watching this year’s I/O demos I was reminded of the hype surrounding self-driving cars, a technology that has so far failed to deliver on its biggest claims (remember Elon Musk promising that a self-driving car would take a cross country trip in 2018? It hasn’t happened yet). There are striking parallels between the fields of autonomous driving and speech tech. Both have seen major improvements in recent years thanks to the arrival of new machine learning techniques coupled with abundant data and cheap computation. But both also struggle with the complexity of the real world.

In the case of self-driving cars, we’ve created vehicles that don’t perform reliably outside of controlled settings. In good weather, with clear road markings, and on wide streets, self-driving cars work well. But steer them into the real world, with its missing signs, sleet and snow, and unpredictable drivers, and they are clearly far from fully autonomous.

It’s not hard to see the similarity with speech. The technology can handle simple, direct commands that require the recognition of only a small number of verbs and nouns (think “play music,” “check the weather,” and so on) as well as a few basic follow-ups, but throw these systems into the deep waters of conversation and they flounder. As Google’s CEO Sundar Pichai commented at I/O last week: “Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. […] The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.”

However, there are reasons to think things are different now (for speech anyway). As Google noted at I/O, it’s had tremendous success with a new machine learning architecture known as Transformers, a model that now underpins the world’s most powerful natural language processing (NLP) systems, including OpenAI’s GPT-3 and Google’s BERT. (If you’re looking for an accessible explanation of the underlying tech and why it’s so good at parsing language, I highly recommend this blog post from Google engineer Dale Markowitz.)

The arrival of Transformers has created a genuinely awe-inspiring flowering of AI language capabilities. As has been demonstrated with GPT-3, AI can now generate a seemingly endless variety of text, from poetry to plays, creative fiction to code, and much more, often with surprising ingenuity and verve. These models also deliver state-of-the-art results in various speech and language benchmarks and, better still, they scale incredibly well: pump in more computational power and you get reliable improvements. The supremacy of this scaling paradigm is sometimes known in AI as the “bitter lesson,” and it’s very good news for companies like Google. After all, they’ve got plenty of compute, and that means there’s lots of road ahead to improve these systems.

Google channeled this excitement at I/O. During a demo of LaMDA, which has been trained specifically on conversational dialogue, the AI model pretended first to be Pluto, then a paper airplane, answering questions with imagination, fluency, and (mostly) factual accuracy. “Have you ever had any visitors?” a user asked LaMDA-as-Pluto. The AI responded: “Yes I have had some. The most notable was New Horizons, the spacecraft that visited me.”

A demo of MUM, a multi-modal model that understands not only text but also image and video, had a similar focus on conversation. When the model was asked: “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” it was smart enough to know that the questioner is not only looking to compare mountains, but that “preparation” means finding weather-appropriate gear and relevant terrain training. If this sort of subtlety can transfer into a commercial product — and that’s obviously a huge, skyscraper-sized if — then it would be a genuine step forward for speech computing.

That, though, brings us to the next big question: even if Google can turn speech into a conversation, should it? I won’t pretend to have a definitive answer to this, but it’s not hard to see big problems ahead if Google goes down this route.

First are the technical problems. The biggest is that it’s impossible for Google (or any company) to reliably validate the answers produced by the sort of language AI the company is currently demoing. There’s no way of knowing exactly what these sorts of models have learned or what the source is for any answer they provide. Their training data usually consists of sizable chunks of the internet and, as you’d expect, this includes both reliable data and garbage misinformation. Any response they give could be pulled from anywhere online. This can also lead them to produce output that reflects the sexist, racist, and biased notions embedded in parts of their training data. And these are criticisms that Google itself has seemingly been unwilling to reckon with.

Similarly, although these systems have broad capabilities, and are able to speak on a wide array of topics, their knowledge is ultimately shallow. As Google’s researchers put it in their paper “Rethinking Search,” these systems learn assertions like “the sky is blue,” but not associations or causal relationships. That means that they can easily produce bad information based on their own misunderstanding of how the world works.

Kevin Lacker, a programmer and former Google search quality engineer, illustrated these sorts of errors in GPT-3 in this informative blog post, noting how you can stump the program with common sense questions like “Which is heavier, a toaster or a pencil?” (GPT-3 says: “A pencil”) and “How many eyes does my foot have?” (A: “Your foot has two eyes”).

To quote Google’s engineers again from “Rethinking Search”: these systems “do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over.”

These issues are amplified by the sort of interface Google is envisioning. Although it’s possible to overcome difficulties with things like sourcing (you can train a model to provide citations, for example, noting the source of each fact it gives), Google imagines every answer being delivered ex cathedra, as if spoken by Google itself. This potentially creates a burden of trust that doesn’t exist with current search engines, where it’s up to the user to assess the credibility of each source and the context of the information they’re shown.

The pitfalls of removing this context are obvious when we look at Google’s “featured snippets” and “knowledge panels” — cards that Google shows at the top of the Google.com search results page in response to specific queries. These panels highlight answers as if they’re authoritative, but the problem is they often aren’t, an issue that former search engine blogger (and now Google employee) Danny Sullivan dubbed the “one true answer” problem.

These snippets have made headlines when users discover particularly egregious errors. One example from 2017 involved asking Google “Is Obama planning martial law?” and receiving the answer (cited from a conspiracy news site) that, yes, of course he is (if he was, it didn’t happen).

In the demos Google showed at I/O this year of LaMDA and MUM, it seems the company is still leaning toward this “one true answer” format. You ask and the machine answers. In the MUM demo, Google noted that users will also be “given pointers to go deeper on topics,” but it’s clear that the interface the company dreams of is a direct back and forth with Google itself.

This will work for some queries, certainly; for simple demands that are the search equivalent of asking Siri to set a timer on my phone (e.g. asking when was Madonna born, who sang “Lucky Star,” and so on). But for complex problems, like those Google demoed at I/O with MUM, I think they’ll fall short. Tasks like planning holidays, researching medical problems, shopping for big-ticket items, looking for DIY advice, or digging into a favorite hobby, all require personal judgement, rather than computer summary.

The question, then, is will Google be able to resist the lure of offering one true answer? Tech watchers have noted for a while that the company’s search products have become more Google-centric over time. The company increasingly buries results under ads that are both external (pointing to third-party companies) and internal (directing users to Google services). I think the “talk to Google” paradigm fits this trend. The underlying motivation is the same: it’s about removing intermediaries and serving users directly, presumably because Google believes it’s best positioned to do so.

In a way, this is the fulfillment of Google’s corporate mission “to organize the world’s information and make it universally accessible and useful.” But this approach could also undermine what makes the company’s product such a success in the first place. Google isn’t useful because it tells you what you need to know; it’s useful because it helps you find this information for yourself. Google is the index, not the encyclopedia, and it shouldn’t sacrifice search in favor of direct answers.

Repost: Original Source and Author Link

Categories
Game

Shiny Pokemon GO Eevee Community Day: Are you ready for these?

Today we’re taking a peek at Eevee Community Day in Pokemon GO, complete with an early look at code that’ll go live on the first big day. Eevee Community Day in Pokemon GO is your next best bet for finding a Shiny Eevee and whichever Eevee evolution Pokemon you’ve not yet collected. The latest pre-event update has to do with the ticket you can pick up to make the event more valuable to you on August 14.

The latest update to the text for the ticket reads as follows: “A ticket to access the What You Choose to Be Special Research on August 14 and 15, 2021, from 11:00 a.m. to 5:00 p.m. local time each day, wherever you are. Details can be found in the in-game News.” This ticket will be available before the start of the event in the in-game shop in Pokemon GO.

The resource text for the ticket also includes a change to the key quest title in this most recent update to the game files for Pokemon GO. The Eevee Community Day ticket used to be called “Who You Choose to Be Ticket” and is now called “What You Choose To Be Ticket.” The quest used to be “Who You Choose to Be” and is now called “What You Choose to Be.”

During the event, you’ll get a photobomb bonus that works five times. Each of the first five times you take a snapshot with your in-game camera, an Eevee will appear in the shot; after the shot is taken, an Eevee will spawn near your current location.

If you’re here just looking for the nickname trick for evolving Eevee into any of its eight potential evolutions, take a peek at the full Eevee evolution names list. That’ll work for Vaporeon, Umbreon, Jolteon, Flareon, Sylveon, Glaceon, Leafeon, and Espeon.

Whether or not you buy the special ticket for this event, you’ll find a variety of bonuses and perks for the event on both days it’ll take place. There’ll be Eevee Community Day action on Saturday the 14th of August, 2021 and Sunday the 15th of August, 2021.

Repost: Original Source and Author Link

Categories
AI

Farming is finally ready for robots

As an investor in the ag-tech space for the last decade, I would argue that 2020 was a tipping point. The pandemic accelerated a shift that was already underway to deploy automation in the farming industry. Other venture capitalists seem to agree. In 2020, they invested $6.1 billion into ag-tech startups in the US, a 60% increase over 2019, according to research from PitchBook. The number is all the more staggering when you consider that, in 2010, VC investment in US ag-tech startups totaled just $322 million. Why did 2020 turn the tide in ag-tech investment, and what does that mean for the future of farming? Which ag-tech startups are poised to become leaders in the multi-trillion-dollar global agriculture industry? I’ll delve into those questions below.

The pandemic changed many sectors forever and agriculture is no exception. With idled processing plants, supply chain disruptions, and COVID outbreaks among workers curtailing an already-strapped labor force, farmers faced unprecedented challenges and quickly realized some of these problems could be solved by automation. Entrepreneurs already active in the ag-tech space, and those who may have been considering launching an ag-tech company, saw an opportunity to apply innovation to agriculture’s long-standing — and now all the more pressing — challenges.

Ag-tech investment boom

Companies applying robotics, computer vision, and automation solutions to farming were the greatest beneficiaries of the record levels of VC funding in the last year, in particular vertical farming companies that grow crops indoors on scaffolding. Bowery recently raised $300 million in additional venture funding and is now valued at $2.3 billion, while AeroFarms recently announced plans to go public in a SPAC deal valuing the company at $1.2 billion. Vertical herb farmer Plenty has raised over $500 million in venture funding, while vertical tomato grower AppHarvest went public via a SPAC in February and is now valued at over $1.7 billion, despite recent share price fluctuations.

But while vertical farming companies have received a large share of venture capital dollars, there are many other ag-tech startups emerging in the race to automate agriculture. Some of the ag-tech sub-sectors where we see the most potential for growth in the next five years include tiller, weeding, and planting robots; sensor-fitted drones used to assess crops and plan fertilizer schedules; greenhouse and nursery automation technology; computer vision systems to identify crop health, weeds, nitrogen, and water levels in plants and soil; crop transport, sorting, and packing robots; and AI software for predictive yield planning.

Some of the sub-sectors that are a bit further out include picking and planting robots, as well as fertilizer and watering drones. A few startups are building tactile robotic hands that could be used to pick delicate fruit such as strawberries or tomatoes. Yet the picked fruit must be placed on autonomous vehicles that can navigate uneven terrain and paired with packing robots that place the fruit carefully to avoid bruising, so challenges remain. Meanwhile, drones exist today that can drop fertilizer or water on fields, but their use is strictly regulated and their range and battery capacity is limited by payload capabilities. In about 10 years, we could begin to see drones that use cameras, computer vision, and AI to assess plant health and then automatically apply the right amount and type of fertilizer based on the plants’ size and chemical composition.
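As a concrete illustration of how imagery-based plant-health assessment tends to work, one common building block is a vegetation index such as NDVI, computed per pixel from near-infrared and red reflectance. The sketch below is illustrative only; any real drone or farm-analytics pipeline would layer far more sophisticated models on top of something like this.

```typescript
// Minimal per-pixel NDVI (Normalized Difference Vegetation Index) sketch.
// Healthy vegetation reflects strongly in near-infrared and absorbs red light,
// so higher values loosely indicate healthier plants.
function ndvi(nir: Float32Array, red: Float32Array): Float32Array {
  if (nir.length !== red.length) throw new Error("band sizes must match");
  const out = new Float32Array(nir.length);
  for (let i = 0; i < nir.length; i++) {
    const sum = nir[i] + red[i];
    out[i] = sum === 0 ? 0 : (nir[i] - red[i]) / sum; // roughly -1..1
  }
  return out;
}

// A drone or farm-management tool might flag low-index pixels for a closer
// look, or for a targeted dose of water or fertilizer.
const countStressedPixels = (values: Float32Array, threshold = 0.3): number =>
  values.reduce((count, v) => count + (v < threshold ? 1 : 0), 0);
```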

Solving the right problems

For any ag-tech company to win over farmers, it must solve a big problem and do it in a way that saves them significant time and/or money. While a manufacturer might be happy to deploy a robot for incremental improvement, farmers operate on exceedingly tight margins and want to see exponential improvement. Weeds, for example, are a huge problem for farmers, and the preferred method of killing them in the past, pesticides, is dangerous and unpopular. A number of companies have emerged to address this problem with a combination of AI, computer vision, and robotics to identify and pull weeds in fields. Naio Technologies and FarmWise are examples (disclosure: my firm is an investor in FarmWise). Meanwhile Bear Flag Robotics is making great strides in the automated farm vehicle space, building robotic tractors that intelligently monitor and till large fields. And Burro is a leader in crop transport robots, with its autonomous vehicles used to move picked fruit and vegetables from the field to processing centers.

While fully-autonomous harvesting is still a ways off, apple-picking robots are starting to gain ground, including those from Tevel Aerobotics Technologies. Tevel’s futuristic drones can recognize ripe apples and fly autonomously about the trees picking them and placing them carefully in a large transport box. Abundant Robotics takes a different approach to harvesting apples, using a terrestrial robot with an intelligent suction arm to harvest and pack ripe fruit.

Several greenhouse and nursery robots aim to improve handling, climate control, and other tasks in plant-growing operations. Harvest Automation’s small autonomous robots can recognize, pick up, and move plants around a nursery. Other greenhouse automation companies to watch include iUNU, which offers a computer vision system for greenhouses, and Iron Ox, which has built large robot-driven greenhouses to grow vegetables.

And, finally, satellite imaging companies such as Planet Labs and Descartes Labs will also play an important role in ag-tech, as they provide geo-spatial images of cropland that can help farmers understand global climate trends.

Roadblocks remain

Facing climate change, a growing population, worker shortages, and other challenges that will only grow more intense, the agriculture sector is ripe for disruption. Agricultural giants such as Monsanto and John Deere, as well as small and mid-sized farms, are embracing automation to improve crop yields and production. But wide-scale adoption of farm automation won’t happen overnight. For any ag-tech innovation to take hold, it must solve a huge problem and do so in a repeatable way that doesn’t interfere with a farm’s current workflow. It doesn’t help to deploy picker robots if they can’t integrate into a farm’s current crop packing and transport systems, for example.

We may well see small and medium-sized farms leading the way in the adoption of automation. Even though industrial farms have large capital reserves, they also have established systems in place that are harder to replace. Smaller farms have fewer such systems to replace and are willing to try robots-as-a-service (RaaS) solutions from lesser-known startups. For millennia, farmers have thought outside the box to find solutions to everyday problems, so it stands to reason they want to work with startups that think the same way they do. Farmers wake up every day and think, here’s a big problem, what innovative trick can I use to solve it? Perhaps farmers steeped in self-reliance and ag-tech entrepreneurs steeped in engineering and computer science aren’t so different after all.

Kevin Dunlap is Co-founder and Managing Partner at Calibrate Ventures. He was previously a managing director at Shea Ventures, where he backed companies such as Ring, Solar City, and Chegg. Kevin currently sits on the boards of Broadly, Soft Robotics, and Realized.


Repost: Original Source and Author Link

Categories
Computing

Qualcomm CEO Says It’s Ready to Compete With Apple’s M1

Qualcomm, best known for designing the chips inside many Android devices, is setting its sights on a different market: laptops. In his first interview since becoming president and CEO of Qualcomm, Cristiano Amon says he believes that Qualcomm can have the best laptop chip on the market. And there’s no one better to design that chip than a team of architects who have worked on chips at Apple.

A new interview from Reuters shows a confident Qualcomm looking to expand its business. Although Qualcomm creates the chips that power many Android handsets, it has historically licensed core blueprints from chip designer ARM; now the company is moving to design its own mobile cores. It’s also investing in its own laptop designs thanks to a $1.4 billion acquisition of startup Nuvia.

Qualcomm inked the final deal with Nuvia in March. Nuvia was a short-lived startup founded by Gerard Williams, who was the lead designer on Apple’s iOS chips from the A7 to the A12X. After Williams departed Apple, the company sued him for breach of contract, claiming that he violated anticompetitive clauses in his employment contract by founding Nuvia.

Williams now works as the senior vice president of engineering at Qualcomm, leading the company’s efforts into designing its own laptop chips. Amon told Reuters that Qualcomm needed its own in-house chip designs to offer to customers in order to compete with Apple’s M1 chip, which is built using a core design from ARM. A key part of that is implementing 5G connectivity into the chip design, which Qualcomm has a lot of experience with.

“We needed to have the leading performance for a battery-powered device. If ARM, which we’ve had a relationship with for years, eventually develops a CPU that’s better than what we can build ourselves, then we always have the option to license from ARM,” Amon said. Reuters says that Qualcomm will start selling Nuvia-based designs next year, but it could be an additional year before we see them on the market.

Right now, Qualcomm is solely focused on the laptop market. The company doesn’t have any plans to enter the data center market that’s dominated by Nvidia, AMD, and Intel. However, Amon says the company is willing to license designs from Nuvia in the data center.

Qualcomm’s new stance in the market is sure to turn some heads at Apple, as Qualcomm currently supplies the connectivity chips used in iPhones. The two companies settled a lengthy legal battle in 2019, during which it was alleged that meetings between Apple CEO Tim Cook and former Qualcomm CEO Steve Mollenkopf turned hostile.

Repost: Original Source and Author Link

Categories
Computing

Dell’s XPS 15 and XPS 17 Are Finally Ready for Purchase

After being announced earlier this year, Dell’s flagship XPS 15 and XPS 17 laptops are finally available for purchase. Though these notebooks look like sleek productivity machines, don’t let the svelte designs of the XPS 15 and XPS 17 fool you — the laptops come with serious gaming chops with their 11th Gen Intel Core i9 processors and Nvidia RTX 3000 series graphics. The XPS 15 tops out with an RTX 3050 Ti, making it a capable on-the-go gaming system, while the larger XPS 17 comes loaded with up to a Core i9k processor and Nvidia’s RTX 3060 discrete graphics.

Dell announced that the laptops are now available on its website. Pricing starts at $1,249 for the XPS 15, which comes with a 15.6-inch InfinityEdge display sporting a 92.5% screen-to-body ratio, a 16:10 aspect ratio, and up to a 3.5K OLED panel that covers 100% of the wide DCI-P3 color space. The XPS 17 comes with a 17.3-inch display with even smaller bezels, giving it a 93.7% screen-to-body ratio. The larger notebook starts at $1,449. Both laptops come with Waves Nx audio tuning, up-firing speakers, and a machined aluminum build with carbon fiber.

In addition to the XPS systems, more serious gamers will want to turn their attention to Dell’s Alienware X15 and X17 notebooks. These gaming-oriented laptops come packed with power, topping out with 32GB of memory and Nvidia’s RTX 3080 graphics alongside Intel’s 11th Gen Core i9-11900H processor on premium configurations. Dell stated that an upgraded configuration of the Alienware X17 with a 360Hz screen will be arriving next month, however.

The company is also making updated configurations of its Alienware m15 R6 laptop available, as well as its new Dell G15 laptop. The Dell G15 starts at $799 and comes with a more gaming-forward design that appears to have been inspired by the more premium Alienware m15.

In addition to gaming, Dell also announced new mobile workstations targeted at creative professionals with updated Intel processors and Nvidia graphics. The Studio RTX laptops include the Dell Precision 5560, 5760, 7560, and 7760 laptops. These mobile workstations take their design cues from Dell’s flagship XPS series but come with Nvidia GPUs tailored more for content creation. The Dell Precision 5560, for example, comes with Nvidia’s RTX A2000 GPU, while the 7760 maxes out with an Intel Xeon processor and RTX A5000 GPU from Nvidia.

If you’re interested in any of these notebooks for work, fun, or back to school, you can head over to Dell.com to order them today.

Repost: Original Source and Author Link