Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos. The results, while promising, show there’s still lots of work to be done before automated systems can reliably spot deepfake content, with researchers describing the issue as an “unsolved problem.”
Facebook says the winning algorithm in the contest was able to spot “challenging real world examples” of deepfakes with an average accuracy of 65.18 percent. That’s not bad, but it’s not the sort of hit rate you’d want for any automated system.
Deepfakes have proven to be something of an exaggerated menace for social media. Although the technology prompted much handwringing about the erosion of reliable video evidence, the political effects of deepfakes have so far been minimal. Instead, the more immediate harm has been the creation of nonconsensual pornography, a category of content that’s easier for social media platforms to identify and remove.
Mike Schroepfer, Facebook’s chief technology officer, told journalists in a press call that he was pleased by the results of the challenge, which he said would create a benchmark for researchers and guide their work in the future. “Honestly the contest has been more of a success than I could have ever hoped for,” he said.
Some 2,114 participants submitted more than 35,000 detection algorithms to the competition. They were tested on their ability to identify deepfake videos from a dataset of around 100,000 short clips. Facebook hired more than 3,000 actors to create these clips, who were recorded holding conversations in naturalistic environments. Some clips were altered using AI by having other actors’ faces pasted onto their videos.
Researchers were given access to this data to train their algorithms, and when tested on this material, they produced accuracy rates as high as 82.56 percent. However, when the same algorithms were tested against a “black box” dataset consisting of unseen footage, they performed much worse, with the best-scoring model achieving an accuracy rate of 65.18 percent. This gap shows that detecting deepfakes in the wild remains a very challenging problem.
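The gap between the two scores is the familiar train/test generalization problem at challenge scale: a model scores well on footage similar to what it trained on and worse on truly unseen clips. As a minimal sketch (the predictions and labels below are hypothetical, not from the challenge), the accuracy metric itself is just the fraction of clips classified correctly:

```python
def average_accuracy(predictions, labels):
    """Fraction of clips classified correctly (1 = deepfake, 0 = real)."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical results: a model that fits the public training set well
# but generalizes worse to unseen "black box" footage.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
public_acc = average_accuracy([1, 1, 0, 1, 0, 0, 1, 1], labels)  # 7/8 correct
unseen_acc = average_accuracy([1, 0, 0, 1, 1, 0, 1, 1], labels)  # 5/8 correct
```

The same scoring function yields a much lower number on the unseen set, which is exactly the drop from 82.56 to 65.18 percent described above, just in miniature.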
Schroepfer said Facebook is currently developing its own deepfake detection technology separate from this competition. “We have deepfake detection technology in production and we will be improving it based on this contest,” he said. The company announced it was banning deepfakes earlier this year, but critics pointed out that the greater disinformation threat came from so-called “shallowfakes” — videos edited using traditional means.
The winning algorithms from this challenge will be released as open-source code to help other researchers, but Facebook said it would be keeping its own detection technology secret to prevent it from being reverse-engineered.
Schroepfer added that while deepfakes were “currently not a big issue” for Facebook, the company wanted to have the tools ready to detect this content in the future — just in case. Some experts have said the upcoming 2020 election could be a prime moment for deepfakes to be used for serious political influence.
“The lesson I learned the hard way over the last couple of years, is I want to be prepared in advance and not be caught flat footed,” said Schroepfer. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around.”
While Amazon has traditionally been a one-stop shop for tech gear, its mid-April delays are making other retailers viable options for getting what you want with a shorter lead time. Online stores like Newegg and B&H are stepping up, alongside (gasp!) actual brick-and-mortar stores like Best Buy and Office Depot.
Alas, the old way of buying tech—looking for the absolute lowest price—is being set aside, as consumers are willing to pay a bit more for any products actually in stock. While we can’t guarantee a particular retailer will have the specific product you’re looking for, all of the ones we checked appeared to have a decent assortment of items in stock that we haven’t been able to get through Amazon.
Why you can’t find anything on Amazon
The watchword right now isn’t so much price as availability. According to retail analyst firm NPD, productivity hardware saw “historic” sales increases over the first two weeks of March, as more regions or entire states started issuing shelter-in-place orders. NPD vice president and technology analyst Stephen Baker noted that sales of computer monitors to consumers almost doubled during the first two weeks of March, versus a year ago. Mice, keyboards, and notebook PC sales increased by 10 percent. Businesses are buying, too: Corporate notebook sales were up 30 percent in the last week of February, and then 50 percent for the first two weeks of March.
The buying spree has put a severe and unexpected crimp in the tech supply chain. Notebooks and monitors are still shipping from Amazon, though the company has said it’s prioritizing household goods. A survey of Amazon products PCWorld recently conducted showed a wide swath of tech products delayed until late April, and that’s still the case, generally. We are seeing more short-term availability of tech goods than before, though not on a par with Amazon’s typical performance.
Newegg: A good place to start
Newegg is making an aggressive pitch to replace Amazon as the one-stop shop for tech gear during the work-from-home mandates. “Newegg is very much open for business with little to no interruption,” said Anthony Chow, the global chief executive of Newegg, when asked for comment. “Sales across the board—especially tech products—are up substantially in recent weeks. In light of that, we’ve been working to secure even more inventory to keep up with the demand, much of which stems from those who need hardware and other products to work from home.”
Like Amazon, Newegg is a storefront for many sellers, and that’s likely why Newegg’s inventory is broader and more available right now across a variety of categories than it is at other retailers. Newegg has built in quite a bit of wiggle room, though.
Unlike Amazon, Newegg doesn’t guarantee its shipping dates. It offers only a window within which “most customers” will likely receive their products. For one webcam listing we checked, for instance, Newegg merely said that “most customers” would receive their shipment within 6 to 16 days.
Once you start reading the fine print and realize the product’s shipping from Australia, the elasticity of the ship date starts to make a lot more sense. It was a similar deal with the Razer Kiyo 1080p camera with a ring light: At press time, Amazon was entirely sold out, but via Newegg, Razer promised delivery—from Hong Kong—in as little as 4 days…or as many as 17.
At least Newegg’s promising you can actually buy tough-to-find products like webcams, which is further than you’ll get sometimes on Amazon or via other retailers like Office Depot. Newegg’s Chow said his company was well-sourced to keep up with demand.
“Our supply chain is distributed and durable, which means we’re able to source the tech products customers are looking for, and ship them on time,” Chow explained, adding that Newegg was a source for other channels, too. “In many cases,” Chow said, “brands that are having trouble selling through other distribution channels are coming to us to help them meet this growing demand, knowing that we’re able to fulfill orders as usual, even in the face of unprecedented circumstances surrounding the COVID-19 crisis.”
Best Buy: Curbside pickup means quick availability
Historically, the showcase for in-store technology has been Best Buy, and its physical locations continue to play a pivotal role. The company is offering either curbside pickup (instead of coming into the store) or deliveries to your home from local warehouses.
Best Buy has its strengths. If you’re willing to buy one of its Insignia brand USB-C hubs, for example, that in-house brand shows more availability than name brands. Those name brands are where Best Buy suffers: Whether it’s just demand or a strain on its supply chain (or both), I found a number of products at my local Bay Area Best Buy were simply sold out. (Best Buy will sometimes append a note that the product is unavailable within a certain number of miles from your location.)
In general, though, at press time Best Buy carried a number of USB-C hubs with immediate availability. Notebook PCs were plentiful, available either in-store or within three days if my nearby store was sold out. (Occasionally, laptops were available in another store about 10 miles away.) Best Buy listed numerous monitors, though shipping lead times were about a week. And if you need an HDMI cable—the sort of low-value necessity Amazon is forgoing right now—Best Buy seems to have plenty, such as this Insignia two-pack of six-foot Ultra HD HDMI cables for $40, either in-store now or within a few days.
The big boxes: Target, Walmart
Don’t forget about Target and Walmart. Though the physical Target store’s electronics section is just a sliver of shelf area, the Target website’s Computers & Office page offers an easily navigated arrangement of popular tech products.
The estimated date of delivery isn’t among the information Target displays in its grid of products, unfortunately. Antonline handles quite a bit of Target’s fulfillment, however, and any Target listing with “antonline” as the partner seems to stand a good chance of being available. We found a surprisingly broad range of monitors available, such as a Dell UltraSharp 24-inch, 1920×1200-pixel LCD on sale for $189, all scheduled to arrive within 7-10 days.
Target offers a decent selection of laptops or Chromebooks (about 100), and most tended to be in stock, including solid mainstream choices like the HP Pavilion x360 14-inch 2-in-1 for $745. Target didn’t display any midrange, affordable webcams, apparently stocking only $400 to $600 models out of the price range of most consumers. Cables and cords, such as a Philips 6-foot HDMI cable, seemed to be in stock, either available at a nearby store or shipping within a week or so.
Walmart also appears to be using its supply chain to its advantage, with either direct shipping or ship-to-store options available. Unfortunately, shipping availability isn’t highlighted on the first grid of products you’ll see, forcing you to dive into each product to check whether it’s in stock.
Fortunately, it usually was, where laptops and Chromebooks were concerned—though, oddly, some of its surprisingly good Motile laptops didn’t have a shipping date attached. The site offered a broad selection of available USB-C hubs, such as this Anker 7-in-1 USB-C Hub Adapter, even though the category seemed scarce on Amazon. I found decent monitor availability at my local store, too. Every external hard drive I checked was in stock and would ship within a week, if not sooner.
Staples, Office Depot, OfficeMax: Don’t sleep on these giants
Although traditional office-supply retailers carry tech necessities, most enthusiasts shop elsewhere unless they need a printer or an office chair. Adjust that attitude. Just like a bodega or rural market may hold an undiscovered trove of toilet paper, these stores may conceal a cache of tech essentials.
Remember, too, that these retailers often offer curbside pickup: Pay ahead of time, pop the trunk, and bring some gloves. (A nice Office Depot employee put a friend’s order in the trunk so they didn’t even have to get out of the car.) Staples will ship for free with no minimum, and offers curbside pickup in most locations. Office Max/Office Depot (Office Depot bought OfficeMax in 2013, so they’re the same store) wants you to spend at least $45 for next-day shipping.
Office Depot/Office Max has a big kink to work out of its database, though: What it promises on the product page isn’t backed up by its inventory. Take a Dell Inspiron I found. Oh, store pickup is available? And I can get it today? Nope. Click through to the product page, and Office Depot pulls a bait-and-switch: The Inspiron is only available for in-store pickup…but it’s not available for pickup within 100 miles.
In other examples, I found a few “in-store” pickups that would require a 20-minute drive to purchase, especially for laptops. The general inventory of home-office tech like external hard drives, mice, and keyboards was abundant. Monitors, though, seemed to be nonexistent, at least among the five options I checked. Printers and especially ink were in stock.
(Be aware that Office Max/Office Depot and Staples also sell paper and cleaning products, similar to the cleaning aisles in your grocery store. Yes, they sell toilet paper; my local stores were completely sold out, but maybe you’ll have better luck.)
You won’t waste quite as much time buying at Staples. Staples does tell you clearly whether a product is in stock, though it makes you click the “1-Hour pick-up” button before it checks inventory. If Staples then makes you check other stores, it returns only locations where the item is available. Office Depot/OfficeMax make you scroll through store after store, even if none have the item in stock.
Staples currently delivers for free, with no minimum charge. Unfortunately, there’s often a disclaimer that accompanies the standard “delivered FREE in 3-5 business days”: “Due to high demand, delivery may take longer than expected.” How long? Staples doesn’t tell you, at least not on the product page.
I saw decent availability of notebook PCs for delivery (in about 3-4 days) and in-store pickup. Monitors were somewhat difficult to find, though available. Staples offered one-day delivery of computer mice and keyboards, even an antimicrobial mouse-and-keyboard combo I would have sworn would be out of stock. One-day delivery was typical for external hard drives. USB-C hubs seemed to require about a week for shipping, though they were available.
B&H: Crystal-clear inventory and shipping data
We routinely feature consumer electronics retailer B&H in our daily deals, but the company (which is privately held and has one store in New York City, along with a bustling online store) probably doesn’t have the name recognition it deserves. It’s worth checking out, especially in these times of scarcity, as the company seems to have a good handle on its inventory and delivery data.
You’ll find ship dates clearly listed, with clear communication of when the product will ship and when you can expect it—well, except for the featured products at the top of the page. We found a decent amount of inventory across a number of different product lines, and “request stock alert” notifications you can sign up for. And if you’ve become numbed by scrolling through lists of “sold out” notifications (looking at you, Best Buy), B&H is one of the retailers that seems to prioritize showing what it does have, versus what it doesn’t.
It isn’t exactly out of the ordinary for Google to retire an app or service in favor of a new one. Often, but not always, it waits for the new version to at least be ready for all old users to switch to. That didn’t happen smoothly in the transition to YouTube Music but, hopefully, Google Pay will be a different story. Now users are being pushed to a new version of the Google Pay app, and the company will force their hand by making the old one practically useless in April.
The new Google Pay announced late last year is both a redesign of the old app and a consolidation of its confusing “G Pay” branding of the past. It wasn’t a one-for-one replacement of the old Google Pay, with some of its features moved to other parts of the Google Play services framework. That said, it also added features that the old app didn’t have and never will, since the old app has been deprecated.
Not all users may have moved over to the new Google Pay app by now for one reason or another, but they may soon have no other choice. Come April 5, the old Google Pay app will lose the ability to send or receive money, view past transactions, or even show your remaining balance. In other words, you’ll still be able to open the app, but you won’t be able to do anything unless you move to the new one.
This change, however, applies only to the US, according to Google’s confirmation to Android Police. The new Google Pay app isn’t available in other markets yet, so those will keep the status quo, at least for now. Curiously, there is no mention of other markets that already have the new Google Pay app.
The new Google Pay app is apparently still marked as Early Access, suggesting it’s still in pre-release development. That could change before April 5, however, and there’s still plenty of time to polish up the app before that deadline.
Pokemon GO has a collection of updates and releases ready to roll for the month of March, 2021. In this month of chills, users will find a Research Breakthrough connected to Gible. If you’re looking to make use of this tiny baby monster, you’ll need to start this Research Breakthrough quest and follow through on seven individual days – it’s not going to be easy.
There’ll be some Legendary Shadow Pokemon action in the month of March, starting with Shadow Articuno. It’s POSSIBLE we’ll see the rest of the Legendary bird Pokemon collection available in a similar manner throughout the year – but Niantic’s currently tight-lipped on the matter. For now, the first of these will appear with Giovanni in a battle that ALSO, you guessed it, won’t be easy.
Throughout the month of March, 2021, there’ll be several events. The one we’re most looking forward to is “Weather Week,” which will take place from Wednesday, March 24, 2021, all the way to March 29, 2021. That event will deliver weather-themed Pokemon aplenty! This means we’re seeing Tornadus, Thundurus, and Landorus, all of which will be joined by new avatar items for great justice!
On Sunday, March 14, 2021, there’ll be an Incense Day. That Incense Day will allow said Incense to attract more Psychic and Steel-type Pokemon than usual. According to Niantic, this specifically includes Beldum. If you’ve ever wanted a Metagross, now’s the time to go at it full force!
This is only part of the list of March bits and pieces for Pokemon GO. We’re expecting a whole lot more specific details on the event series that’ll effectively be the most action-packed March in the history of Pokemon GO – stay tuned as we learn more!
UPDATE: Niantic suggests they’ll be delivering a free one-time bundle every single Monday in March, 2021. These bundles will include 10 Ultra Balls, 10 Razz Berries, and a Remote Raid Pass. You’ll need to head to the in-game shop and tap said bundle to access for free – easy!
Almost all of Elon Musk’s endeavors and dreams have proven controversial, but most of them prove themselves in the long run. While The Boring Company has yet to bear any usable fruit and Neuralink is still in its infancy, SpaceX’s Starlink Internet satellite constellation continues to divide opinion. It has yet to reach its full potential, both in number of satellites and promised speeds, but Elon Musk says there might be a speed bump along the way.
Just a few days ago, Musk boasted that Starlink’s speeds will eventually reach 300Mbps with latencies as low as around 20ms. Those are definitely ambitious goals compared to the range that SpaceX advertises. Just to set expectations correctly, the company says that speeds can go from 50Mbps to 150Mbps, depending on certain conditions, and latency can be as high as 40ms.
Those numbers might still be higher than what most beta testers experience on average, and figures are really not that consistent. That’s the nature of using satellites that move through low Earth orbits, whose positions in turn determine the speeds users get depending on their location on Earth.
That said, Elon Musk says users might experience higher download speeds at times while SpaceX tests system upgrades, though those upgrades can also cause performance to fluctuate in the meantime.
“You might see much higher download speeds on Starlink at times. Testing system upgrades,” Musk tweeted.
Users are reporting exactly that, with some getting 20Mbps to 180Mbps at certain times. The irregularity is to be expected during this system upgrade testing phase, but some beta testers are hoping that Starlink will deliver on the advertised 150Mbps promise first before promising double that speed sometime later this year.
Most companies today have invested in data science to some degree. In the majority of cases, data science projects have tended to spring up team by team inside an organization, resulting in a disjointed approach that isn’t scalable or cost-efficient.
Think of how data science is typically introduced into a company today: Usually, a line-of-business organization that wants to make more data-driven decisions hires a data scientist to create models for its specific needs. Seeing that group’s performance improvement, another business unit decides to hire a data scientist to create its own R or Python applications. Rinse and repeat, until every functional entity within the corporation has its own siloed data scientist or data science team.
What’s more, it’s very likely that no two data scientists or teams are using the same tools. Right now, the vast majority of data science tools and packages are open source, downloadable from forums and websites. And because innovation in the data science space is moving at light speed, even a new version of the same package can cause a previously high-performing model to suddenly — and without warning — make bad predictions.
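One standard defense against this silent breakage is pinning exact package versions and verifying them before a model is loaded. Here is a minimal sketch of that idea; the package names and version numbers are hypothetical, and a real deployment would read them from a lockfile rather than a hardcoded dict:

```python
# Verify the running environment matches the versions a model was trained
# with. A mismatch is exactly the silent-breakage scenario described above:
# same code, new package version, different predictions.
EXPECTED = {"scikit-learn": "0.22.1", "numpy": "1.18.1"}  # hypothetical pins

def check_environment(installed, expected=EXPECTED):
    """Return a list of (package, expected_version, found_version) mismatches."""
    mismatches = []
    for pkg, want in expected.items():
        have = installed.get(pkg)
        if have != want:
            mismatches.append((pkg, want, have))
    return mismatches

# Simulated environment where numpy silently drifted to a newer release.
installed = {"scikit-learn": "0.22.1", "numpy": "1.18.2"}
problems = check_environment(installed)
```

In practice the `installed` mapping would come from the environment itself (for example via `importlib.metadata`), and a non-empty mismatch list would block deployment rather than just being reported.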
The result is a virtual “Wild West” of multiple, disconnected data science projects across the corporation into which the IT organization has no visibility.
To fix this problem, companies need to put IT in charge of creating scalable, reusable data science environments.
In the current reality, each individual data science team pulls the data they need or want from the company’s data warehouse and then replicates and manipulates it for their own purposes. To support their compute needs, they create their own “shadow” IT infrastructure that’s completely separate from the corporate IT organization. Unfortunately, these shadow IT environments place critical artifacts — including deployed models — in local environments, shared servers, or in the public cloud, which can expose your company to significant risks, including lost work when key employees leave and an inability to reproduce work to meet audit or compliance requirements.
Let’s move on from the data itself to the tools data scientists use to cleanse and manipulate data and create these powerful predictive models. Data scientists have a wide range of mostly open source tools from which to choose, and they tend to do so freely. Every data scientist or group has their favorite language, tool, and process, and each data science group creates different models. It might seem inconsequential, but this lack of standardization means there is no repeatable path to production. When a data science team engages with the IT department to put its models into production, the IT folks must reinvent the wheel every time.
The model I’ve just described is neither tenable nor sustainable. Most of all, it’s not scalable, something that’s of paramount importance over the next decade, when organizations will have hundreds of data scientists and thousands of models that are constantly learning and improving.
IT has the opportunity to assume an important leadership role in creating a data science function that can scale. By leading the charge to make data science a corporate function rather than a departmental skill, the CIO can tame the “Wild West” and provide strong governance, standards guidance, repeatable processes, and reproducibility — all things at which IT is experienced.
When IT leads the charge, data scientists gain the freedom to experiment with new tools or algorithms but in a fully governed way, so their work can be raised to the level required across the organization. A smart centralization approach based on Kubernetes, Docker, and modern microservices, for example, not only brings significant savings to IT but also opens the floodgates on the value the data science teams can bring to bear. The magic of containers allows data scientists to work with their favorite tools and experiment without fear of breaking shared systems. IT can provide data scientists the flexibility they need while standardizing a few golden containers for use across a wider audience. This golden set can include GPUs and other specialized configurations that today’s data science teams crave.
A centrally managed, collaborative framework enables data scientists to work in a consistent, containerized manner so that models and their associated data can be tracked throughout their lifecycle, supporting compliance and audit requirements. Tracking data science assets, such as the underlying data, discussion threads, hardware tiers, software package versions, parameters, results, and the like helps reduce onboarding time for new data science team members. Tracking is also critical because, if or when a data scientist leaves the organization, the institutional knowledge often leaves with them. Bringing data science under the purview of IT provides the governance required to stave off this “brain drain” and make any model reproducible by anyone, at any time in the future.
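A centrally tracked model record can start as nothing more than structured metadata captured at training time. The sketch below shows the idea; the field names and values are illustrative, not the schema of any particular tracking product:

```python
import datetime
import json

def make_model_record(name, params, package_versions, metrics):
    """Bundle the artifacts IT needs to reproduce a model later:
    who-trained-what-with-which-versions, in one serializable record."""
    return {
        "model": name,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "parameters": params,
        "package_versions": package_versions,
        "metrics": metrics,
    }

# Hypothetical model metadata captured at the end of a training run.
record = make_model_record(
    "churn-predictor",
    params={"max_depth": 6, "learning_rate": 0.1},
    package_versions={"xgboost": "1.3.3"},
    metrics={"auc": 0.91},
)
serialized = json.dumps(record)  # stored centrally, not on someone's laptop
```

Because the record is serialized to a central store rather than living in a departing employee’s shadow environment, anyone can later answer the reproducibility and audit questions described above.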
What’s more, IT can actually help accelerate data science research by standing up systems that enable data scientists to self-serve their own needs. While data scientists get easy access to the data and compute power they need, IT retains control and is able to track usage and allocate resources to the teams and projects that need it most. It’s really a win-win.
But first CIOs must take action. Right now, the impact of our COVID-era economy is necessitating the creation of new models to confront quickly changing operating realities. So the time is right for IT to take the helm and bring some order to such a volatile environment.
A number of PlayStation owners reported issues playing certain games starting on Friday. Soon after the reports started rolling in, Sony updated its PlayStation Network status to indicate that it is experiencing issues in its ‘games and social’ category — an issue that has persisted through Saturday and into Sunday with no clear relief in sight.
Sony acknowledged that its PSN is having problems, but it has since remained quiet about the issue. It’s unclear when the issue is expected to be fixed and what is behind the troubles. The problem revolves around network gameplay, making it difficult to get into online matches in some games like Fortnite.
Down Detector continues to show issues with the PSN, with reports remaining steady through around midnight Eastern Time into Sunday. The majority of reports from users cite issues with playing games, though a significant portion also states they’re having problems signing into their PlayStation accounts.
Likewise, some PlayStation owners are also reporting issues with the platform’s social features, which include things like messaging and parties. Sony notes on its PSN status website that this issue is impacting the PlayStation 4 and PlayStation 5, as well as the older PS Vita handheld console and the PS3.
Sadly, this isn’t the first PlayStation Network outage that has persisted over multiple days; it’s hard to guess when the solution will finally arrive. The outage hit only a day after Microsoft experienced similar issues with its Xbox services, but that problem was quickly resolved.
You can monitor the outage on Sony’s PlayStation Network status website.
Plenty of Windows laptops try to position themselves as MacBook Pro alternatives, combining slick designs and vibrant displays with powerful PC performance, but the Lenovo Yoga C940 15 is no mere copycat.
While the Yoga looks sharp and has a bright display, it also leans into its differences as a Windows PC. It has a touchscreen that flips around into tablet mode, a built-in stylus for writing or sketching, and—thank heavens—a full-sized USB-A port to complement its two USB-C connections. It even fits in a number pad without cramping its excellent keyboard.
This review is part of our ongoing roundup of the best laptops. Go there for information on competing products and how we tested them.
That’s not to say the Lenovo Yoga C940 15 ticks every imaginable box. Screen backlighting is a bit uneven, audio quality could be better, and limited configuration options will prevent you from turning this into a beastly desktop replacement. Also, if we’re comparing to Apple’s MacBooks, Lenovo’s laptop doesn’t include all the same niceties, such as a slightly larger screen and jumbo-sized trackpad.
Still, the Lenovo Yoga C940 15 is a decent choice for those who want a luxurious workhorse PC without giving up what Windows does best.
Our Lenovo Yoga C940 15 review unit has a list price of $1,700 at Best Buy and includes the following specs:
15.6-inch display with 1920 x 1080 resolution
9th-generation Intel Core i7-9750H processor
Nvidia GeForce GTX 1650 Max-Q graphics
16GB DDR4-2666 RAM
Left side: Two USB-C 3.1 ports (with Thunderbolt 3), proprietary charger, headphone jack
Right side: USB-A 3.0 port
Wi-Fi 6 Support
Stylus with 4,096 pressure sensitivity levels
720p webcam with privacy shutter
Windows 10 Home
Dimensions: 14 x 9.4 x 0.8 inches
Weight: 4.41 pounds (5.68 pounds with charger)
Best Buy also offers a 4K display version with the same other specs as above for $1,899. Lenovo’s website offers several other configurations: On the low end, you can drop to 12GB of RAM and 256GB of storage (currently $1,460), while on the high end you can upgrade to an Intel Core i9-9880H processor, 4K display, and 2TB SSD ($2,440 as of this writing). In all cases, the Nvidia GeForce GTX 1650 is the only option for graphics, and the non-upgradeable RAM tops out at 16GB.
While it’s nice that Lenovo included a USB-A port on this laptop, its use of a proprietary charger—similar to the one Lenovo uses for its ThinkPads—is a drag, as is the lack of a MicroSD card slot and additional USB ports on the right side of the laptop.
Design and display
The Lenovo Yoga C940 15 comes in either a darker “Iron Gray” or a lighter shade that Lenovo refers to as “Mica.” Its all-aluminum body, along with its sharp edges and flat sides, only serve to underscore the laptop’s heft, which is unavoidable given the discrete GPU inside. But for a workhorse laptop that folds around into a tablet, the C940’s size and weight are reasonable.
As with Lenovo’s smaller and skinnier Yoga C940 14, the 15-inch model packs some clever design ideas. The laptop’s rotating hinge doubles as a speaker grille that projects sound outward, and the display’s top bezel extends upward near the center of the screen, leaving extra room for the webcam and forming a little lip that makes it easier to open the laptop.
Our review unit’s display had a resolution of 1920 x 1080, which on a 15.6-inch screen has clearly visible pixels, but that’s going to be an acceptable trade-off for most folks given the performance and battery life issues—not to mention the higher price tags—that often come with 4K screens. (It’d be great to see a laptop like this with a 1440p display instead, and perhaps a 16:10 aspect ratio to boot, but that’s a rant for another day.)
The display’s brightness measures an eye-popping 415 nits at the center of the screen. While Lenovo’s ThinkPad X1 Extreme Gen 2 is in the same league (442 nits), most other 15-inch workhorse laptops don’t get anywhere close unless they’re made by Apple. Although the glass display can create a lot of glare when you’re outdoors or sitting in front of a window, the screen itself can get bright enough to compensate.
Viewing angles could be better, though. While this is an IPS display, it was hard to find an angle that provided steady brightness throughout, particularly on white backgrounds. It’s the kind of thing you might not notice until you notice it, at which point it becomes an annoyance.
Keyboard and trackpad
The keyboard on the Lenovo Yoga C940 15 is among the nicest you’ll find on a consumer laptop. Although the keys lack the same travel as on Lenovo’s ThinkPads, they provide a pleasant tactile bump on the way down without being overly loud. (Oddly enough, the snappy sounds I heard on Lenovo’s 14-inch Yoga C940 weren’t as apparent here, perhaps because the larger and heavier body absorbs more of the noise.)
While not everyone needs a number pad on their laptop, Lenovo managed to fit one on the Yoga C940 15 without major sacrifice. Modifier keys such as Shift and Ctrl are a bit smaller than usual, but the size and spacing of letter keys are the same as on Lenovo’s 14-inch laptop.
It’s too bad the Yoga C940 15’s trackpad doesn’t utilize the extra space. It’s no larger than that of Lenovo’s 14-inch Yoga C940, and much smaller than the jumbo-sized trackpad on Apple’s MacBook Pro. And like most Windows laptop trackpads, the click mechanism gets increasingly stiff as your finger moves toward the top section.
Audio, webcam, and security
The speakers on Lenovo’s 14-inch Yoga C940 wowed me, so the 15-inch model’s audio was a bit of a letdown. The larger laptop does have even louder speakers built into its rotating hinge—still better than the tin-can speakers you’d find on cheap laptops—but it lacks the low-end warmth of the smaller laptop and comes off as overly harsh at high volumes.
The webcam, meanwhile, met its already-low expectations, being a 720p shooter like pretty much every other laptop on the market. The privacy shutter is a nice touch, though, and while there’s no face recognition for Windows Hello, you can sign in with the fingerprint sensor just below the laptop’s cursor keys.
Performance and battery life
Although we sometimes see 2-in-1 laptops take a performance hit compared to their non-convertible peers, that doesn’t seem to be the case with the Lenovo Yoga C940 15. It’s a solid performer on the CPU side, and it holds its own on the GPU side despite not officially being a gaming rig. Battery life holds up quite nicely as well, both in benchmarks and everyday use.
Let’s start with Cinebench, which tests CPU performance over a short period of time. Here, the Lenovo Yoga C940 15 topped every other laptop with a 9th-generation Intel Core i7-9750H processor, both in single-threaded and multi-threaded performance. (The former accounts for most office productivity tasks, while the latter comes into play for image processing and rendering.)
To see how laptops fare under sustained workloads, we encode a large video file with the free HandBrake utility. Again, the C940 got the job done faster than nearly every laptop with a comparable CPU, save for MSI’s GS65 Stealth 9SD.
PCMark 8’s Work benchmark provides another look at how the Lenovo Yoga C940 15 zips through work tasks. Any score over 2,000 is satisfactory, so the Yoga C940 15’s score of 3,927 keeps you well above the baseline.
The Lenovo Yoga C940 uses the Max-Q version of Nvidia’s GeForce GTX 1650 graphics card, which balances performance with the thermal challenges of thin-and-light designs. As such, laptops with standard GTX 1650 GPUs fare a bit better. In 3DMark’s TimeSpy benchmark, for instance, the C940 fell markedly behind Dell’s XPS 15 7590 and Acer’s Nitro 7.
Similar results emerged in Rise of the Tomb Raider’s benchmarking tool, where the Lenovo Yoga C940 15 posted an average of 53.2 frames per second. Several other GTX 1650 laptops managed higher framerates (though Dell’s XPS 15 actually did worse here).
In practical terms, this means you’ll be able to play some modern competitive shooters like Fortnite and Apex Legends without issue, even at 1080p and the highest possible settings. But with games that are more demanding, you may have to ratchet resolution down to 720p or reduce quality settings for consistently high framerates. On the brighter side, while the system’s fan can get loud during gaming, it avoids the jet engine-like whine of more powerful gaming machines.
With any laptop that has a discrete graphics card, you should expect some compromise on battery life. But the Lenovo Yoga C940’s combination of a large 69 Wh battery and a 1080p, 60Hz display gives it an advantage. In our looping video rundown test, the C940 15 lasted just over nine hours. Battery life typically fell below two hours while gaming, but we found that the C940 15 could get through most of a workday of general productivity use.
Is this big Yoga for you?
As with so many other laptops, the Yoga C940 15’s worthiness is a matter of which trade-offs you’re willing to make. HP’s 15-inch Spectre x360 has a similar convertible design, but the version with GTX 1650 graphics has a much dimmer display. Dell’s XPS 15 7590 packs in 4K graphics and even better battery life, but its touchscreen variants get pricey and it’s not a convertible design. Gaming-first laptops like the Acer Nitro 7 will give you even better performance, but they’re not as nice to look at.
Of course, you could always go with a MacBook Pro, but then you’ll miss out on all those neat choices that make PCs like the Yoga C940 15 more interesting in the first place.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.
Every day, digital advertisement agencies serve billions of ads on news websites, search engines, social media networks, video streaming websites, and other platforms. And they all want to answer the same question: Which of the many ads they have in their catalog is more likely to appeal to a certain viewer? Finding the right answer to this question can have a huge impact on revenue when you are dealing with hundreds of websites, thousands of ads, and millions of visitors.
Fortunately (for the ad agencies, at least), reinforcement learning, the branch of artificial intelligence that has become renowned for mastering board and video games, provides a solution. Reinforcement learning models seek to maximize rewards. In the case of online ads, the RL model will try to find the ad that users are most likely to click on.
The digital ad industry generates hundreds of billions of dollars every year and provides an interesting case study of the powers of reinforcement learning.
Naïve A/B/n testing
To better understand how reinforcement learning optimizes ads, consider a very simple scenario: You’re the owner of a news website. To pay for the costs of hosting and staff, you have entered a contract with a company to run their ads on your website. The company has provided you with five different ads and will pay you one dollar every time a visitor clicks on one of the ads.
Your first goal is to find the ad that generates the most clicks. In advertising lingo, you want to maximize your click-through rate (CTR). The CTR is the ratio of clicks to the number of ads displayed, also called impressions. For instance, if 1,000 ad impressions earn you three clicks, your CTR will be 3 / 1,000 = 0.003, or 0.3%.
Before we solve the problem with reinforcement learning, let’s discuss A/B testing, the standard technique for comparing the performance of two competing solutions (A and B) such as different webpage layouts, product recommendations, or ads. When you’re dealing with more than two alternatives, it is called A/B/n testing.
In A/B/n testing, the experiment’s subjects are randomly divided into separate groups and each is provided with one of the available solutions. In our case, this means that we will randomly show one of the five ads to each new visitor of our website and evaluate the results.
Say we run our A/B/n test for 100,000 iterations, or 20,000 impressions per ad. Here are the click-over-impression ratios of our ads:
Ad 1: 80/20,000 = 0.40% CTR
Ad 2: 70/20,000 = 0.35% CTR
Ad 3: 90/20,000 = 0.45% CTR
Ad 4: 62/20,000 = 0.31% CTR
Ad 5: 50/20,000 = 0.25% CTR
Our 100,000 ad impressions generated $352 in revenue with an average CTR of 0.35%. More importantly, we found out that ad number 3 performs better than the others, and we will continue to use that one for the rest of our viewers. Had we shown only the worst-performing ad (ad number 5), our revenue would have been $250; with only the best-performing ad (ad number 3), it would have been $450. So our A/B/n test earned us roughly the average of the minimum and maximum revenue, while yielding the very valuable CTR estimates we sought.
Digital ads have very low conversion rates. In our example, there’s only a 0.2% difference between our best- and worst-performing ads. But this difference has a significant impact at scale. At 1,000 impressions, ad number 3 will generate an extra $2 in comparison to ad number 5. At a million impressions, the difference grows to $2,000. When you’re serving billions of ads, a subtle 0.2% gap can have a huge impact on revenue.
Therefore, finding these subtle differences is very important in ad optimization. The problem with A/B/n testing is that it is not very efficient at finding these differences. It treats all ads equally and you need to run each ad tens of thousands of times until you discover their differences at a reliable confidence level. This can result in lost revenue, especially when you have a larger catalog of ads.
Another problem with classic A/B/n testing is that it is static. Once you find the optimal ad, you will have to stick to it. If the environment changes due to a new factor (seasonality, news trends, etc.) and causes one of the other ads to have a potentially higher CTR, you won’t find out unless you run the A/B/n test all over again.
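To make the mechanics concrete, here is a minimal sketch of an A/B/n test in Python. The `TRUE_CTR` values are hypothetical numbers chosen to match the example; in a real test they are unknown, and estimating them is the whole point.

```python
import random

# Hypothetical "true" CTRs for the five ads. In practice these are unknown;
# the test exists to estimate them.
TRUE_CTR = [0.0040, 0.0035, 0.0045, 0.0031, 0.0025]

def ab_n_test(impressions_per_ad=20_000, seed=42):
    """Serve each ad an equal number of times and estimate its CTR."""
    rng = random.Random(seed)
    estimates = []
    for p in TRUE_CTR:
        # Each impression is a coin flip that lands "click" with probability p.
        clicks = sum(rng.random() < p for _ in range(impressions_per_ad))
        estimates.append(clicks / impressions_per_ad)
    return estimates

estimates = ab_n_test()
best_ad = max(range(len(estimates)), key=lambda i: estimates[i])
```

Note that every ad gets the same 20,000 impressions regardless of how poorly it performs along the way, which is exactly the inefficiency described above.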
What if we could change A/B/n testing to make it more efficient and dynamic?
This is where reinforcement learning comes into play. A reinforcement learning agent starts out knowing nothing about which of its available actions yield rewards or penalties; it must discover, through interaction with its environment, how to maximize its rewards.
In our case, the RL agent’s actions are one of five ads to display. The RL agent will receive a reward point every time a user clicks on an ad. It must find a way to maximize ad clicks.
The multi-armed bandit
In some reinforcement learning environments, actions are evaluated in sequences. For instance, in video games, you must perform a series of actions to reach the reward, which is finishing a level or winning a match. But when serving ads, the outcome of every ad impression is evaluated independently; it is a single-step environment.
To solve the ad optimization problem, we’ll use a “multi-armed bandit” (MAB), a reinforcement learning algorithm that is suited for single-step reinforcement learning. The name of the multi-armed bandit comes from an imaginary scenario in which a gambler is standing at a row of slot machines. The gambler knows that the machines have different win rates, but he doesn’t know which one provides the highest reward.
If he sticks to one machine, he might miss out on the machine with the highest win rate. Therefore, the gambler must find an efficient way to discover the highest-paying machine without using up too many of his tokens.
Ad optimization is a typical example of a multi-armed bandit problem. In this case, the reinforcement learning agent must find a way to discover the ad with the highest CTR without wasting too many valuable ad impressions on inefficient ads.
Exploration vs exploitation
One of the problems every reinforcement learning model faces is the “exploration vs exploitation” challenge. Exploitation means sticking to the best solution the RL agent has so far found. Exploration means trying other solutions in hopes of landing on one that is better than the current optimal solution.
In the context of ad selection, the reinforcement learning agent must decide between choosing the best-performing ad and exploring other options.
One solution to the exploitation-exploration problem is the “epsilon-greedy” (ε-greedy) algorithm. In this case, the reinforcement learning model will choose the best solution most of the time, and in a specified percent of cases (the epsilon factor) it will choose one of the ads at random.
Here’s how it works in practice. Say we have an epsilon-greedy MAB agent with the ε factor set to 0.2. This means that the agent chooses the best-performing ad 80% of the time and explores other options 20% of the time.
The reinforcement learning model starts without knowing which of the ads performs better, therefore it assigns each of them an equal value. When all ads are equal, it will choose one of them at random each time it wants to serve an ad.
After serving 200 ads (40 impressions per ad), a user clicks on ad number 4. The agent adjusts the CTR of the ads as follows:
Ad 1: 0/40 = 0.0%
Ad 2: 0/40 = 0.0%
Ad 3: 0/40 = 0.0%
Ad 4: 1/40 = 2.5%
Ad 5: 0/40 = 0.0%
Now, the agent thinks that ad number 4 is the top-performing ad. For every new ad impression, it will pick a random number between 0 and 1. If the number is above 0.2 (the ε factor), it will choose ad number 4. If it’s below 0.2, it will choose one of the other ads at random.
Now, our agent runs 200 other ad impressions before another user clicks on an ad, this time on ad number 3. Note that of these 200 impressions, 160 belong to ad number 4, because it was the optimal ad. The rest are equally divided between the other ads. Our new CTR values are as follows:
Ad 1: 0/50 = 0.0%
Ad 2: 0/50 = 0.0%
Ad 3: 1/50 = 2.0%
Ad 4: 1/200 = 0.5%
Ad 5: 0/50 = 0.0%
Now the optimal ad becomes ad number 3, and it will get 80% of the ad impressions. Let’s say that after another 96 impressions (80 for ad number 3 and four for each of the other ads), someone clicks on ad number 2. Here’s what the new CTR distribution looks like:
Ad 1: 0/54 = 0.0%
Ad 2: 1/54 = 1.85%
Ad 3: 1/130 = 0.77%
Ad 4: 1/204 = 0.49%
Ad 5: 0/54 = 0.0%
Now, ad number 2 is the optimal solution. As we serve more ads, the CTRs will reflect the real value of each ad. The best ad will get the lion’s share of the impressions, but the agent will continue to explore other options. Therefore, if the environment changes and users start to show more positive reactions to a certain ad, the RL agent can discover it.
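The entire ε-greedy loop described above fits in a small class. This is an illustrative sketch, not production code; the class and method names are my own.

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy multi-armed bandit for picking among ads."""

    def __init__(self, n_ads, epsilon=0.2):
        self.epsilon = epsilon
        self.clicks = [0] * n_ads
        self.impressions = [0] * n_ads

    def ctr(self, ad):
        """Estimated CTR for one ad (0.0 until it has been served)."""
        if self.impressions[ad] == 0:
            return 0.0
        return self.clicks[ad] / self.impressions[ad]

    def select(self):
        """Explore with probability epsilon; otherwise exploit the best-known ad."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.clicks))
        return max(range(len(self.clicks)), key=self.ctr)

    def record(self, ad, clicked):
        """Update the stats after serving `ad` and observing the outcome."""
        self.impressions[ad] += 1
        self.clicks[ad] += int(clicked)
```

Each impression is one `select()` call, followed by a `record()` call once the click outcome is known.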
After running roughly 100,000 ads, our distribution could look something like the following:
Ad 1: 123/30,600 = 0.40% CTR
Ad 2: 67/18,900 = 0.35% CTR
Ad 3: 187/41,400 = 0.45% CTR
Ad 4: 35/11,300 = 0.31% CTR
Ad 5: 15/5,800 = 0.26% CTR
With the ε-greedy algorithm, we increased our revenue from $352 to $427 and raised the average CTR to roughly 0.4%. That’s a marked improvement over the classic A/B/n testing model.
Improving the ε-greedy algorithm
The key to the ε-greedy reinforcement learning algorithm is adjusting the epsilon factor. If you set it too low, it will exploit the ad which it thinks is optimal at the expense of not finding a possibly better solution. For instance, in the example we explored above, ad number four happens to generate the first click, but in the long run, it doesn’t have the highest CTR. Small sample sizes do not necessarily represent true distributions.
On the other hand, if you set the epsilon factor too high, your RL agent will waste too many resources exploring non-optimal solutions.
One way you can improve the epsilon-greedy algorithm is by defining a dynamic policy. When the MAB model is fresh, you can start with a high epsilon value to do more exploration and less exploitation. As your model serves more ads and gets a better estimate of the value of each solution, it can gradually reduce the epsilon value until it reaches a threshold value.
In the context of our ad-optimization problem, we can start with an epsilon value of 0.5 and reduce it by 0.01 after every 1,000 ad impressions until it reaches 0.1.
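That schedule is easy to express as a function; the function and parameter names here are my own:

```python
def epsilon_value(total_impressions, start=0.5, step=0.01, per=1_000, floor=0.1):
    """Decay epsilon by `step` after every `per` impressions, never below `floor`."""
    return max(floor, start - step * (total_impressions // per))
```

With these defaults, epsilon bottoms out at 0.1 after 40,000 impressions and stays there, so the agent never entirely stops exploring.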
Another way to improve our multi-armed bandit is to put more weight on new observations and gradually reduce the value of older observations. This is especially useful in dynamic environments such as digital ads and product recommendations, where the value of solutions can change over time.
Here’s a very simple way you can do this. The classic way to update the CTR after serving an ad is as follows:
(result + past_results) / impressions
Here, result is the outcome of the ad just displayed (1 if clicked, 0 if not clicked), past_results is the cumulative number of clicks the ad has garnered so far, and impressions is the total number of times the ad has been served.
To gradually fade old results, we add a new alpha factor (between 0 and 1) and make the following change:
(result + past_results * alpha) / impressions
This small change will give more weight to new observations. Therefore, if you have two competing ads that have an equal number of clicks and impressions, the ones whose clicks are more recent will be favored by your reinforcement learning model. Also, if an ad had a very high CTR rate in the past but has become unresponsive in recent times, its value will decline faster in this model, forcing the RL model to move to other alternatives earlier and waste fewer resources on the inefficient ad.
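As a direct transcription of the update above (the variable names match the formula; this is a sketch rather than a tuned implementation):

```python
def recency_weighted_ctr(result, past_results, impressions, alpha=0.9):
    """CTR update that decays past clicks by `alpha` before adding the new result.

    result       -- 1 if the ad just served was clicked, else 0
    past_results -- running (already decayed) click total for this ad
    impressions  -- total number of times this ad has been served
    alpha        -- between 0 and 1; lower values forget old clicks faster
    """
    return (result + past_results * alpha) / impressions
```

With alpha = 1.0 this reduces to the classic update; with alpha below 1.0, two ads with identical click counts score differently depending on how recent their clicks were.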
Adding context to the reinforcement learning model
In the age of the internet, websites, social media, and mobile apps have plenty of information on every single user, such as their geographic location, device type, and the exact time of day they’re viewing the ad. Social media companies have even more information about their users, including age and gender, friends and family, the type of content they have shared in the past, the type of posts they liked or clicked on in the past, and more.
This rich information gives these companies the opportunity to personalize ads for each viewer. But the multi-armed bandit model we created in the previous section shows the same ad to everyone and doesn’t take the specific characteristic of each viewer into account. What if we wanted to add context to our multi-armed bandit?
One solution is to create several multi-armed bandits, one for each specific segment of users. For instance, we can create separate RL models for users in North America, Europe, the Middle East, Asia, Africa, and so on. What if we wanted to also factor in gender? Then we would need one reinforcement learning model for female users in North America, one for male users in North America, one for female users in Europe, one for male users in Europe, and so on. Now add age ranges and device types, and you can see the problem: a combinatorial explosion of multi-armed bandits that become hard to train and maintain.
An alternative solution is to use a “contextual bandit,” an upgraded version of the multi-armed bandit that takes contextual information into account. Instead of creating a separate MAB for each combination of characteristics, the contextual bandit uses “function approximation,” which tries to model the performance of each solution based on a set of input factors.
Without going too much into the details (that could be the subject of another post), our contextual bandit uses supervised machine learning to predict the performance of each ad based on location, device type, gender, age, and so on. The benefit of the contextual bandit is that it uses one machine learning model per ad instead of creating a MAB for every combination of characteristics.
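Here is one way a contextual bandit might look, using a tiny logistic model per ad trained by stochastic gradient descent. Everything here (the class and method names, the learning-rate choice) is hypothetical; real systems use far richer models and feature sets.

```python
import math
import random

class ContextualBandit:
    """One small logistic model per ad: p(click) = sigmoid(w . x)."""

    def __init__(self, n_ads, n_features, epsilon=0.1, lr=0.05):
        self.weights = [[0.0] * n_features for _ in range(n_ads)]
        self.epsilon = epsilon
        self.lr = lr

    def predict(self, ad, x):
        """Predicted click probability for showing `ad` to a user with features `x`."""
        z = sum(w * xi for w, xi in zip(self.weights[ad], x))
        return 1.0 / (1.0 + math.exp(-z))

    def select(self, x):
        """Epsilon-greedy choice over the per-ad models, conditioned on context."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.weights))
        return max(range(len(self.weights)), key=lambda ad: self.predict(ad, x))

    def update(self, ad, x, clicked):
        """One SGD step on the log-loss, for the ad that was actually served."""
        error = clicked - self.predict(ad, x)
        self.weights[ad] = [w + self.lr * error * xi
                            for w, xi in zip(self.weights[ad], x)]
```

Only the served ad’s model gets updated on each impression, since that is the only ad whose outcome we observed.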
This wraps up our discussion of ad optimization with reinforcement learning. The same reinforcement learning techniques can be used to solve many other problems, such as content and product recommendation or dynamic pricing, and are used in other domains such as health care, investment, and network management.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
iRobot, the company behind the Roomba robotic vacuum cleaners, has announced that a recent software update it issued has been causing problems for some of its robots. The models specifically impacted are the Roomba i7 and s9. The company says it is working on a software update to fix the issues owners have complained about.
The bad news for owners of the impacted vacuums is that the fix will be rolled out over the next several weeks. Owners say the recent 3.12.8 firmware has caused navigation issues with the vacuum cleaners: after applying that software update, one user says their Roomba acted “drunk,” spinning around and bumping into furniture.
Owners also said the vacuums cleaned in strange patterns, got stuck in empty areas, and failed to return home to their docks. Other users reported that the update wiped out the environmental maps the Roombas rely on to clean, with some units taking longer than usual to finish a job. Units unable to make it back to their docking station can’t charge, leaving them unusable.
iRobot has been working with impacted users to roll back the update, but some report that they still have issues even after the rollback. Other users who were promised help say they have waited weeks and still haven’t received it. These robotic vacuum cleaners are typically quite expensive, and a software update that leaves them unusable understandably angers owners.