Lenovo’s ultra-durable Surface Pro 8 rival is $1,930 off today

If you’re looking for one of the biggest savings among laptop deals, you’re going to love what Lenovo is offering. Right now, you can buy the Lenovo ThinkPad X1 Tablet Gen 3 for $949 direct from Lenovo, saving you a huge $1,930. That works out to 67% off, so you can get a powerful 2-in-1 laptop for far less than usual. If that sounds appealing, keep reading while we tell you all about it.

Why you should buy the Lenovo ThinkPad X1 Tablet Gen 3

Lenovo is one of the best laptop brands, especially when it comes to providing what business users need, so you’re immediately onto a good thing here. The Lenovo ThinkPad X1 Tablet Gen 3 is certainly powerful for its size. It has an 8th-generation Intel Core i7 processor along with 8GB of memory plus 256GB of SSD storage. Best of all is its 13-inch QHD+ touchscreen display with a resolution of 3000 x 2000. Essentially, this means you can use it as either a tablet or a laptop, depending on what you need to do. It’s the kind of quality that means this is a system designed to rival the best 2-in-1 laptops.

It’s also packed with lots of other useful features. For instance, it has a built-in fingerprint reader, so your data is kept securely locked away and you don’t have to enter as many passwords manually. It also weighs less than 3 pounds, so it’s easy to carry around with you. As well as that, it has a redesigned kickstand with two useful angles for typing, all while measuring just over a quarter of an inch thick. Its 9.5-hour battery life means it’s good to go all day long. Worried about durability? Don’t be. The Lenovo ThinkPad X1 Tablet Gen 3 is tested against 12 military-grade requirements, so it can handle extreme conditions.

Easily one of the best laptops for someone looking to work hard on the move, while also having the flexibility of being able to use it in tablet form, the Lenovo ThinkPad X1 Tablet Gen 3 is an ideal choice for anyone looking for high-end versatility without spending as much as usual. Normally priced at $2,879, it’s down to just $949 for a limited time only at Lenovo.

Editors’ Choice

Repost: Original Source and Author Link


Dell’s XPS 15 MacBook Pro rival just got a massive $732 price cut

If you’re looking for great performance and good looks, we’ve found one of the best laptop deals for your needs. Available at Dell right now, you can buy the Dell XPS 15 Touch laptop for $1,568, saving you a huge $732 off the usual price. One of the most appealing laptops around right now, this is a great opportunity to save big on something that will serve you well for a long time to come. Let’s take a look at why it’s so great.

Why you should buy the Dell XPS 15 Touch laptop

Dell laptop deals aren’t exactly hard to come by, but you don’t see such considerable savings every day. In the case of the Dell XPS 15 Touch laptop, you’re getting one of the best laptops around. The system offers an 11th-generation Intel Core i7 processor along with 16GB of memory and 512GB of SSD storage. Extensive storage is always great for saving all your files, but it’s the fast processor and high amount of memory that make multitasking and opening new apps super speedy. It’s the kind of performance you’ll wonder how you lived without, and it particularly benefits anyone with a busy working life.

Alongside that, the Dell XPS 15 Touch laptop has a 15.6-inch 3.5K screen with a resolution of 3456 x 2160. That’s plenty of room to see what you’re doing, with an OLED panel ensuring that colors really pop no matter what you’re working on or watching. In addition, of course, it’s a touchscreen, so it’s great if you want to get more hands-on with what you’re doing. An anti-reflective coating plus 400 nits of brightness means it looks great in any lighting condition, too. The display also stretches edge to edge, so it takes up less room than you would think.

With extra features like the ability to reduce harmful blue light emissions and an advanced thermal design that reduces the risk of overheating, the Dell XPS 15 Touch oozes the kind of class you would expect from one of the best laptop brands. It’s just the kind of system that’s perfect if you want MacBook Pro-level performance but prefer Windows over macOS.

Normally priced at $2,300, the Dell XPS 15 Touch is reduced by $732 right now at Dell, bringing it down to $1,568. It’s a considerable discount on a highly respected laptop, so if you’re looking for a long-term investment, you need to snap this one up. It won’t stay at this price for long.



Nvidia RTX 4070 Ti may rival the 3090 Ti for half the price

The rumored specifications of Nvidia’s upcoming GeForce RTX 4070 Ti just leaked, and it looks like it’ll be one beast of a graphics card.

If the specs turn out to be true, the RTX 4070 Ti might be powerful enough to match the current-gen flagship RTX 3090 Ti, but it’s also expected to cost a lot less than the $1,999 GPU.

As I have mentioned before, there is an AD104 SKU with a 400W limit.
a full-fat AD104 with 7680FP32
21Gbps 12G GDDR6X
It can easily match RTX 3090 Ti.

— kopite7kimi (@kopite7kimi) August 1, 2022

This tantalizing bit of news comes from a fairly trustworthy source: Kopite7kimi, a well-known leaker in the GPU space. However, it’s best not to take it at face value, and to assume that everything is subject to change, especially considering that Nvidia might only release a single GPU this year. If that happens, it won’t be the rumored RTX 4070 Ti.

With that disclaimer out of the way, let’s talk about the exciting stuff — the specs of the upcoming Nvidia GeForce RTX 4070 Ti, the successor to the RTX 3070 Ti. The RTX 3070 Ti has proven itself to be one of the best graphics cards of this generation, and it seems that its successor might follow the same path and prove to be even better than previously expected.

Kopite7kimi talks about an AD104 GPU based on the PG141-SKU331 PCB. The card utilizes the full AD104 GPU core, which implies that it’s the RTX 4070 Ti and not the base RTX 4070 that is expected to feature a cut-down version of AD104. This would unlock a much higher power limit of 400 watts, and with that, a lot of potential performance.

The card is expected to come with 7,680 cores or 60 streaming multiprocessors (SMs). The leaker predicts a whole lot of memory for this GPU, with 12GB of GDDR6X memory clocked at 21Gbps across a 192-bit bus. Although Kopite didn’t mention that in their tweet, Wccftech notes that other rumors about the RTX 4070 Ti imply that it will also have a massive 48MB of L2 cache and 160 render output units (ROPs).

These specifications mark a huge increase from the RTX 3070 Ti, with a 25% boost in core count and a cache that’s 12 times larger. Unfortunately, Kopite7kimi didn’t talk about the clock speeds for this GPU, but something in the 2GHz-2.8GHz range seems like a safe prediction.
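Both ratios follow directly from the published RTX 3070 Ti specs (6,144 CUDA cores, 4MB of L2 cache) and the leaked figures; a quick sanity check:

```python
# Sanity-check the generational ratios quoted above.
rtx_3070_ti = {"cores": 6144, "l2_cache_mb": 4}    # current-gen, public specs
rtx_4070_ti = {"cores": 7680, "l2_cache_mb": 48}   # rumored specs

core_boost = rtx_4070_ti["cores"] / rtx_3070_ti["cores"] - 1
cache_ratio = rtx_4070_ti["l2_cache_mb"] / rtx_3070_ti["l2_cache_mb"]

print(f"Core count boost: {core_boost:.0%}")   # 25%
print(f"L2 cache ratio: {cache_ratio:.0f}x")   # 12x
```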

Now, let’s compare these specs to the current-gen flagship, the RTX 3090 Ti. The $1,999 flagship has a higher core count of 10,752, with 24GB of GDDR6X memory across a 384-bit bus. However, it has a drastically smaller L2 cache (6MB) and fewer ROPs (112). If the RTX 4070 Ti can match, or come close to, the RTX 3090 Ti in performance, that will be enough to make the next-gen card a winner here.

We still don’t know how Nvidia will price the new graphics cards. With the current situation in the world, plus an oversupply of RTX 30-series GPUs lying around, there have been whispers of the list prices being quite high. However, if we assume that the RTX 4070 Ti will be priced in the $600-$700 range, which seems reasonable, it will still be a much better value than the RTX 3090 Ti.



Dell’s MacBook Pro rival (the Dell XPS 13) is $200 off today

If you’re looking for a discount on a great new laptop and the MacBook Pro is beyond your budget, Dell has some great discounts taking place right now. One of the best Dell laptop deals is on the Dell XPS 13 Touch laptop, the most portable of its popular XPS laptop lineup. While it typically costs $1,049, today you can get the Dell XPS 13 for just $850, a savings of $200. Free next-day shipping is included, so click over to Dell now to grab a discount and get up and running with your new Dell laptop as soon as tomorrow.

Why you should buy the Dell XPS 13

A person using the Dell XPS 13 Touch laptop.

The Dell XPS laptop lineup spans several different screen sizes, making it an extremely popular option. The Dell XPS 13 is the most compact and portable model of the lineup, with a 13.3-inch display. The display also has touchscreen capabilities, making it a unique option for creatives or anyone else who wants the power of a laptop combined with the functionality of a tablet. It’s a high-definition panel with a 60Hz refresh rate, making this a great laptop for binge watchers and something for gamers to consider as well. Bezels are smaller than ever on the XPS lineup, allowing the 13.3-inch screen to fit into an 11-inch form factor. That makes for even more portability and convenience for anyone who likes to work on the go.

Like all of the best laptops, the Dell XPS 13 Touch brings plenty of performance to the table, especially for a laptop on the smaller side. As spec’d for this deal, it has a quad-core Intel Core i5 processor, Intel Iris Xe graphics, 8GB of RAM, and a speedy 256GB solid-state drive. If you like the portability of a laptop but still work at your desk, the Dell XPS 13 can push two external 4K displays, so you can easily expand your screen real estate. A fingerprint reader, great battery life, and a modern design round out its top features.

More laptop deals you can shop today

A 2021 MacBook Air sits partially open with its colorful screen illuminating in darkness.

While the Dell XPS 13 Touch is a bit of a steal at its current price point, there are a lot of great laptop deals taking place right now. These great laptop deals include:

  • Microsoft Surface Laptop Go —
  • Dell G15 Gaming Laptop — $686, was $1,019
  • Apple MacBook Air — $899, was $999

And if perhaps you like the touchscreen features of the Dell XPS 13 Touch but want an even smaller form factor, there are a lot of great tablet deals going on that are worth exploring.



AMD Ryzen 7000 mobile specs revealed, may rival Intel’s best

A new leak gives us more insight into the specifications of the upcoming AMD Dragon Range and AMD Phoenix CPUs. Both of these lineups are the next-generation Zen 4 processors made for laptops, although each will have its own niche.

With the specifications of Dragon Range and Phoenix now coming into play, it seems that AMD will be well-positioned to compete against its rivals, Intel and Nvidia, in future gaming laptops.

Red Gaming Tech (RGT) on YouTube talked about the capabilities and specifications of some of the Ryzen 7000 processors for the mobile sector. AMD Dragon Range and Phoenix will each power laptops for gamers, but while Dragon Range will focus on delivering the best possible CPU performance, Phoenix will be competitive thanks to its built-in RDNA 3 iGPU.

Let’s start with Dragon Range. According to Red Gaming Tech, AMD is approaching the lineup much the same way Intel did with Alder Lake-HX. This means that the manufacturer is downsizing its desktop Raphael CPUs to fit inside laptops without needing to compromise on the specifications too much. As a result, the top processor of the four leaked today will have the most cores of any AMD mobile CPU so far.

As per the rumor, the Ryzen 9 7980HX will come with 16 cores, followed by the Ryzen 9 7900HX with 12 cores. There’s also a Ryzen 7 entry, the Ryzen 7 7800HX with eight cores, as well as the Ryzen 5 7600HX with just six cores. Clock speeds will vary, potentially reaching 5GHz and above in boost mode while ranging from 3.6GHz to over 4GHz at base frequencies.

AMD Dragon Range will be powerful in terms of CPU performance, but it will fall behind when it comes to the integrated graphics card. The idea here is that AMD wants to offer these CPUs in enthusiast gaming laptops, which will typically have one of the best GPUs installed anyway. As such, Dragon Range will only come with two RDNA 2 compute units, which won’t be enough to power any serious gaming. However, it doesn’t really need to — CPUs of this caliber are going to be paired with a discrete graphics card.

Red Gaming Tech

Moving on to AMD Phoenix (also known as Phoenix Point), this CPU clearly takes a much different approach. While it’s still a Zen 4 processor, the focus here has shifted to providing a good gaming experience even in thin and light laptops. Seeing as it was made to power lightweight notebooks, Phoenix will run on 35 to 45 watts, keeping power requirements low and battery life long. That often translates to poor gaming performance — but AMD has an ace up its sleeve in the form of RDNA 3 graphics.

Compared to Dragon Range, Phoenix is said to offer up to six times as many GPU cores, which means up to 12 compute units. As noted by RGT, this implies up to 1,536 shaders and an iGPU clock frequency of up to 3GHz. AMD may be hoping to rival the mobile Nvidia GeForce RTX 3060 with the top variant of Phoenix.
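The shader figure works out if each RDNA 3 compute unit carries 128 stream processors, double RDNA 2’s 64. Note that the per-CU count here is an assumption inferred from the leak, not a confirmed spec:

```python
# Hypothetical shader math implied by the leak: 12 CUs -> 1,536 shaders.
SHADERS_PER_RDNA3_CU = 128  # assumption; double the 64 per CU of RDNA 2

dragon_range_cus = 2   # RDNA 2 CUs in Dragon Range
phoenix_cus = 12       # rumored RDNA 3 CUs in top Phoenix variant

print(phoenix_cus // dragon_range_cus)      # 6 (times as many GPU cores)
print(phoenix_cus * SHADERS_PER_RDNA3_CU)   # 1536 shaders
```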

In this lineup, RGT also expects four different processors: the AMD Ryzen 9 7980HS, the Ryzen 9 7900HS, the Ryzen 7 7800HS, and, lastly, the Ryzen 5 7600HS. These processors would provide better graphics at the cost of significantly lower core counts, ranging from six to eight cores.

If the rumors prove to be true, next-gen gaming laptops based on AMD CPUs and APUs will have a lot to offer. However, before they ever hit the market, we have the Ryzen 7000 for desktops and the Intel Raptor Lake launch to look forward to later this year.



OpenAI rival Cohere launches language model API

Cohere, a startup creating large language models to rival those from OpenAI and AI2Labs, today announced the general availability of its commercial platform for app and service development. Through an API, customers can access models fine-tuned for a range of natural language applications, in some cases at a fraction of the cost of rival offerings.

The pandemic has accelerated the world’s digital transformation, pushing businesses to become more reliant on software to streamline their processes. As a result, the demand for natural language technology is now higher than ever — particularly in the enterprise. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their natural language processing (NLP) budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%.

The global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026.

“Language is essential to humanity and arguably its single greatest invention — next to the development of computers. Ironically, computers still lack the ability to fully comprehend language, finding it difficult to parse the syntax, semantics, and context that all work together to give words meaning,” Cohere CEO Aidan Gomez told VentureBeat via email. “However, the latest in NLP technology is continuously improving our ability to communicate seamlessly with computers.”


Headquartered in Toronto, Canada, Cohere was founded in 2019 by a pedigreed team including Gomez, Ivan Zhang, and Nick Frosst. Gomez, a former intern at Google Brain, coauthored the academic paper “Attention Is All You Need,” which introduced the world to a fundamental AI model architecture called the Transformer. (Among other high-profile systems, OpenAI’s GPT-3 and Codex are based on the Transformer architecture.) Zhang, alongside Gomez, is a contributor at an open AI research collective involving data scientists and engineers. As for Frosst, he, like Gomez, worked at Google Brain, publishing research on machine learning alongside Turing Award winner Geoffrey Hinton.

In a vote of confidence, even before launching its commercial service, Cohere raised $40 million from institutional venture capitalists as well as Hinton, Google Cloud AI chief scientist Fei-Fei Li, UC Berkeley AI lab co-director Pieter Abbeel, and former Uber autonomous driving head Raquel Urtasun. “Very large language models are now giving computers a much better understanding of human communication. The team at Cohere is building technology that will make this revolution in natural language understanding much more widely available,” Hinton said in a statement to Fast Company in September.

Unlike some of its competitors, Cohere offers two types of English NLP models, generation and representation, in sizes that include Large, Medium, and Small. The generation models can complete tasks involving generating text — for example, writing product descriptions or extracting document metadata. By contrast, the representation models are about understanding language, driving apps like semantic search, chatbots, and sentiment analysis.

Intro to Large Language Models with Cohere | Cohere API Documentation

Cohere is already providing the NLP capability for Ada, a company in the chatbot space. Ada leverages a Cohere model to match customer chat requests with available support information.

“By being in both [the generative and representative space], Cohere has the flexibility that many enterprise customers need, and can offer a range of model sizes that allow customers to choose the model that best fits their needs across the spectrums of latency and performance,” Gomez said. “[Use] cases across industries include the ability to more accurately track and categorize spending, expedite data entry for medical providers, or leverage semantic search for legal cases, insurance policies and financial documents. Companies can easily generate product descriptions with minimal input, draft and analyze legal contracts, and analyze trends and sentiment to inform investment decisions.”

To keep its technology relatively affordable, Cohere charges for access on a per-character basis, based on the size of the model and the number of characters apps use (ranging from $0.0025 to $0.12 per 10,000 characters for generation and $0.019 per 10,000 characters for representation). Only the generation models charge on both input and output characters, while other models charge on output characters. All fine-tuned models, meanwhile — i.e., models tailored to particular domains, industries, or scenarios — are charged at two times the baseline model rate.
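As a rough illustration of how that per-character billing adds up, here is the arithmetic for a sample workload (the rates come from the paragraph above; the one-million-character volume is made up for the example):

```python
# Back-of-envelope cost calculator for Cohere's quoted per-character rates.
# Rates are dollars per 10,000 characters.
GEN_RATE_LOW, GEN_RATE_HIGH = 0.0025, 0.12  # generation, smallest to largest model
REP_RATE = 0.019                            # representation

def cost(characters: int, rate_per_10k: float) -> float:
    """Dollar cost for a given character volume at a given rate."""
    return characters / 10_000 * rate_per_10k

# Processing 1 million characters:
print(cost(1_000_000, GEN_RATE_LOW))       # $0.25 on the cheapest generation model
print(cost(1_000_000, GEN_RATE_HIGH))      # $12.00 on the largest
print(cost(1_000_000, REP_RATE))           # $1.90 for representation
# Fine-tuned models are billed at twice the baseline rate:
print(cost(1_000_000, GEN_RATE_HIGH) * 2)  # $24.00
```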

“The problem remains that the only companies able to capitalize on NLP technology require seemingly bottomless resources in order to access the technology for large language models — which is due to the cost of these models ranging from the tens to hundreds of millions of dollars to build,” Gomez said. “Cohere is easy-to-deploy. With just three lines of code, companies can apply [our] full-stack engine to power all their NLP needs. The models themselves are … already pre-trained.”


To Gomez’s point, training and deploying large language models into production isn’t an easy feat, even for enterprises with massive resources. For example, Nvidia’s recently released Megatron 530B model was originally trained across 560 Nvidia DGX A100 servers, each hosting 8 Nvidia A100 80GB GPUs. Microsoft and Nvidia say that they observed between 113 and 126 teraflops per second per GPU while training Megatron 530B, which would put the training cost in the millions of dollars. (A teraflop rating measures the performance of hardware, including GPUs.)
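A back-of-envelope calculation from those figures shows the scale of the training run:

```python
# Rough aggregate throughput for the Megatron 530B training run described
# above: 560 DGX A100 servers x 8 GPUs each, at 113-126 TFLOP/s per GPU.
servers = 560
gpus_per_server = 8
tflops_low, tflops_high = 113, 126

total_gpus = servers * gpus_per_server
print(total_gpus)                        # 4480 GPUs
print(total_gpus * tflops_low / 1e6)     # ~0.51 exaFLOP/s aggregate, low end
print(total_gpus * tflops_high / 1e6)    # ~0.56 exaFLOP/s, high end
```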

Inference — actually running the trained model — is another challenge. On two of its costly DGX SuperPod systems, Nvidia claims that inference (e.g., autocompleting a sentence) with Megatron 530B only takes half a second. But it can take over a minute on a CPU-based on-premises server. While cloud alternatives might be cheaper, they’re not dramatically so — one estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year.

Training the models

To build Cohere’s models, Gomez says that the team scrapes the web and feeds billions of ebooks and web pages (e.g., WordPress, Tumblr, Stack Exchange, Genius, the BBC, Yahoo, and the New York Times) to the models so that they learn to understand the meaning and intent of language. (The training dataset for the generation models amounts to 200GB after some filtering, while the dataset for the representation models, which wasn’t filtered, totals 3TB.) Like all AI models, Cohere’s models train by ingesting a set of examples to learn patterns among data points, like grammatical and syntactical rules.

It’s well-established that models can amplify the biases in data on which they were trained. In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and similar models can generate text that might radicalize people into far-right extremist ideologies. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular open source models, including Google’s BERT and XLNet and Facebook’s RoBERTa.

Generation | Cohere API Documentation

Cohere, for its part, claims that it’s committed to safety and trains its models “to minimize bias and toxicity.” Customers must abide by the company’s usage guidelines or risk having their access to the API revoked. And Cohere — which has an external advisory council in addition to an internal safety team — says that it plans to monitor “evolving risks” with tools designed to identify harmful outputs.

But Cohere’s NLP models aren’t perfect. In its documentation, the company admits that the models might generate “obscenities, sexually explicit content, and messages that mischaracterize or stereotype groups of people based on problematic historical biases perpetuated by internet communities.” For example, when fed prompts about people, occupations, and political/religious ideologies, the API’s output could be toxic 5 to 6 times per 1,000 generations and discuss men twice as much as it does women, Cohere says. Meanwhile, the Otter model in particular tends to associate men and women with stereotypically “male” and “female” occupations (e.g., male scientist versus female housekeeper).

In response, Gomez says that the Cohere team “puts substantial effort into filtering out toxic content and bad text,” including running adversarial attacks and measuring the models against safety research benchmarks. “[F]iltration is done at the keyword and domain levels in order to minimize bias and toxicity,” he added. “[The team has made] meaningful progress that sets Cohere apart from other [companies developing] large language models …  [W]e’re confident in the impact it will have on the future of work over the course of this transformative era.”




AMD Zen 4D Could Use Hybrid Design to Rival Intel Alder Lake

YouTuber and leaker Moore’s Law is Dead revealed new information regarding AMD’s future architecture plans. According to leaks, AMD is working on a “dense” version of Zen 4 called Zen 4D. Zen 4D is basically a fork of Zen 4 that strips out features and reduces clock speeds.

It will also feature a newly designed cache system. All of this is to slightly reduce single-core performance in exchange for greatly increased multi-core performance. This would also allow AMD to increase the chip density, hence the “D” in the name.

If the leaks are true, it seems the company may be creating its own hybrid architecture to compete with the success of Intel’s 12th-gen Alder Lake chips. This follows in the footsteps of both Intel and Apple, who have utilized similar architectures in their respective CPU designs.

These Zen 4D processors would have about half the L3 cache of regular Zen 4 and feature 16 cores per chiplet. Moore’s Law is Dead stated that Zen 4D is expected to have simultaneous multithreading (SMT), though he couldn’t be 100% certain. He was also uncertain whether Zen 4D would support AVX-512, but did confirm that Bergamo, AMD’s 128-core server-grade EPYC CPU slated for the second quarter of 2023, would feature the new architecture.

The new architecture for Zen 5 was also leaked, and this is by far the most interesting news. The leaks suggest that Zen 5 will be AMD’s first hybrid processor architecture. It would use eight Zen 5 “big” cores and up to 16 Zen 4D “little” cores. Zen 5 is also rumored to be codenamed Granite Ridge, to power the Ryzen 8000 series of processors, and to be built on TSMC’s ridiculously tiny 3nm process.

As we’ve seen with Intel’s Alder Lake chips and Apple’s M1 Pro/Max CPUs, the hybrid approach can offer huge performance increases. It makes sense that AMD would architect its chips in a similar manner, as Zen 5 could offer a 20-25% IPC increase over Zen 4. The problem is that Zen 5 is still a few years out, and Alder Lake currently outperforms AMD’s best consumer chips.



Why the M1 Is Intel’s True Rival For Alder Lake and Beyond

There have been two major CPU announcements in the past couple of weeks — Apple’s M1 Pro and M1 Max and, today, the Intel 12th-gen Alder Lake platform. Although the two platforms serve different purposes, Apple and Intel are in hot competition with each other, even if that competition isn’t direct.

These two platforms are more alike than they may seem, which could shift the balance of power in the CPU market. For decades, it has been a matchup between Intel and AMD. Apple is a new competitor in the ring, which is something that Intel recognized with the launch of Alder Lake.

AMD is resting on its laurels, which might pay off in the short term. Going forward, though, hybrid CPU architectures are what will dominate desktop and mobile platforms. Here’s why.

M1 Max and Alder Lake: More alike than different

Intel’s 12th-gen Alder Lake chips and Apple’s M1 range both use hybrid architectures. Sure, Intel uses an x86 instruction set while Apple uses the ARM instruction set, but both ranges of processors drive toward a similar goal: Increase performance and efficiency by putting the right workload on the right core.

If you’re unfamiliar, a hybrid CPU combines performance (P) cores and efficiency (E) cores on a single processor. This design — known as big.LITTLE — was pioneered by chip designer ARM, and you can find it in nearly all mobile devices available today. Apple brought that design to laptops and desktops, and now Intel is following suit.

Intel actually tried this concept a couple of years back with Lakefield, but the range never got off the ground. Intel only made two Lakefield chips, and they only showed up in a few laptops like the Galaxy Book S. Alder Lake is different. It uses a hybrid architecture, but it keeps the same improved P-cores you’d find in a typical CPU generation.

Although it’s tempting to throw more fast cores at a processor to improve performance, that’s not the best way to go about things. Small workloads, background tasks, and simple calculations don’t need such powerful cores. The result is that P-cores end up sharing bandwidth with low priority tasks instead of focusing resources on the most important tasks at hand.

That’s what makes hybrid architectures different. The P-cores can focus on the big, important tasks while the E-cores handle all of the minute background tasks. The results speak for themselves. Phones now use the latest chip-making technology, not computers, and Apple’s M1 chip — which is basically a tricked-out mobile chip — manages to outperform its Intel predecessors while staying cooler and consuming less power.
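To make the idea concrete, here is a toy sketch of workload routing on a hybrid CPU. It is purely illustrative: real schedulers rely on hardware feedback (such as Alder Lake’s Thread Director) rather than a fixed load threshold, and the task names are invented:

```python
# Toy illustration of hybrid scheduling: route heavy tasks to P-cores and
# light/background tasks to E-cores. Real OS schedulers use hardware hints,
# not a simple threshold like this.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    load: int  # arbitrary work units

def assign(tasks: list[Task], threshold: int = 50) -> dict[str, list[str]]:
    """Send tasks at or above the threshold to P-cores, the rest to E-cores."""
    cores: dict[str, list[str]] = {"P-cores": [], "E-cores": []}
    for t in tasks:
        target = "P-cores" if t.load >= threshold else "E-cores"
        cores[target].append(t.name)
    return cores

tasks = [Task("video export", 95), Task("mail sync", 5),
         Task("compile", 80), Task("telemetry", 2)]
print(assign(tasks))
# {'P-cores': ['video export', 'compile'], 'E-cores': ['mail sync', 'telemetry']}
```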

Intel sees the writing on the wall. The company hasn’t been shy about pointing to Apple, not AMD, as its true competitor in the future. Meanwhile, AMD continues to stick with architectures that focus on a lot of fast cores instead of a hybrid approach.

The true competitor

MacBook Pro laptops.

Intel CEO Pat Gelsinger has made one thing clear since returning to Intel: Apple is the competition, not AMD. In an interview from October, Gelsinger made that crystal clear: “We ultimately see the real competition [is] to enable the ecosystem to compete with Apple.”

Apple has used its own silicon in mobile devices dating back to the original iPhone. But it wasn’t until the M1 chip replaced Intel’s options in MacBooks, the iMac, and the Mac Mini that Intel started to change its stance. In a recent interview, Gelsinger said that was ultimately a good move: “They moved the core of their product line to their own M1 and, you know, its derivative family because they thought they could do a better chip. And they’ve done a good job with that.”

Gelsinger says the ultimate goal is to “win them back,” which requires making a chip that outperforms the M1 — or whatever future generation Apple is on — with higher efficiency and similar power draw. Apple has little incentive to switch back to Intel. For that, Intel has to make chips that are too good to ignore.

Alder Lake looks like a paradigm shift for Intel, and if leaked benchmarks are accurate, the mobile chips could outperform Apple’s M1 Max. It’s important to recognize that Alder Lake is part of a larger strategy for Intel, though. The company has shared its road map through 2025, and it’s filled with hybrid.

AMD hasn’t been as clear about its roadmap, likely because it doesn’t need to be. With desktop and server leadership, AMD is sitting cozy at the moment. For now, we know that AMD’s next-generation Ryzen 6000 chips won’t use a hybrid architecture. AMD has suggested that hybrid still needs work, and has pointed the finger at hybrid architectures as a marketing ploy to “have a bigger number.”

It’s true that hybrid needs work, mainly to optimize the operating system’s scheduler to handle each core type appropriately. Apple has clearly done some work on that front, and Intel worked with Microsoft to optimize Windows 11 for Alder Lake’s Thread Director feature. We’ll just have to wait until Alder Lake is here to see if that work will pay off.

Regardless, it’s clear Intel is looking forward. Whether it’s guided by marketing or a chance at market leadership doesn’t matter: Intel is driving after Apple, and AMD is still driving after Intel. I don’t know whose gambit will pay off. But I do know that Apple is leaving Intel and AMD in the dust, and Intel is the only one talking about it right now.

Hybrid is the wave of the future

Render of Intel Alder Lake chip.

With the launch of Alder Lake, Intel has shown that hybrid is here to stay. Apple is continuing to develop its own hybrid chips, and Intel will continue doing the same for the next few years. Early murmurs suggest AMD could use a hybrid architecture on its Zen 5 CPUs — the generation after Ryzen 6000 — but that’s a couple of years off, at least.

Intel has made some big claims about Alder Lake: the same multi-threaded performance as 11th-gen chips at less than a fourth of the power, up to a 47% improvement when multitasking, and up to double the content-creation performance of the previous generation. Some of that is on the back of Intel’s new manufacturing process. However, a lot of it comes from Alder Lake’s high core counts and hybrid architecture.
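Intel’s power claim can be restated as a performance-per-watt figure: matching performance at under a fourth of the power implies at least a 4x efficiency gain. A quick back-of-the-envelope check, using illustrative numbers rather than Intel’s actual measurements:

```python
# Illustrative perf-per-watt comparison (hypothetical scores and wattages,
# not Intel's published data).
def perf_per_watt(score: float, watts: float) -> float:
    """Performance per watt: benchmark score divided by package power."""
    return score / watts

# Suppose an 11th-gen part scores 1000 points at 120 W, and Alder Lake
# matches that score at a fourth of the power (30 W).
old = perf_per_watt(1000, 120)
new = perf_per_watt(1000, 30)
print(f"Efficiency gain: {new / old:.1f}x")  # Efficiency gain: 4.0x
```

The same score at a quarter of the power is, by definition, four times the performance per watt; anything below a fourth pushes the multiplier higher.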

As long as AMD and Intel are making chips, they’ll be compared to each other. With Intel’s switch to a hybrid architecture, though, it’s clear that the company sees a new challenger approaching — one it used to call a partner. If Intel’s performance claims are true, Alder Lake will take the fight to Apple. And if that battle pays off, AMD will likely follow suit.



AI21 Labs trains a massive language model to rival OpenAI’s GPT-3


For the better part of a year, OpenAI’s GPT-3 has remained among the largest AI language models ever created, if not the largest of its kind. Via an API, people have used it to automatically write emails and articles, summarize text, compose poetry and recipes, create website layouts, and generate code for deep learning in Python. But an AI lab based in Tel Aviv, Israel — AI21 Labs — says it’s planning to release a larger model and make it available via a service, with the idea being to challenge OpenAI’s dominance in the “natural language processing-as-a-service” field.

AI21 Labs, which is advised by Udacity founder Sebastian Thrun, was cofounded in 2017 by Crowdx founder Ori Goshen, Stanford University professor Yoav Shoham, and Mobileye CEO Amnon Shashua. The startup says that the largest version of its model — called Jurassic-1 Jumbo — contains 178 billion parameters, or 3 billion more than GPT-3 (but not more than PanGu-Alpha, HyperCLOVA, or Wu Dao 2.0). In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well.

AI21 Labs claims that Jurassic-1 can recognize 250,000 lexical items including expressions, words, and phrases, making it bigger than most existing models including GPT-3, which has a 50,000-item vocabulary. The company also claims that Jurassic-1 Jumbo’s vocabulary is among the first to span “multi-word” items like named entities — “The Empire State Building,” for example — meaning that the model might have a richer semantic representation of concepts that make sense to humans.

“AI21 Labs was founded to fundamentally change and improve the way people read and write. Pushing the frontier of language-based AI requires more than just pattern recognition of the sort offered by current deep language models,” CEO Shoham told VentureBeat via email.

Scaling up

The Jurassic-1 models will be available via AI21 Labs’ Studio platform, which lets developers experiment with the model in open beta to prototype applications like virtual agents and chatbots. Should developers wish to go live with their apps and serve “production-scale” traffic, they’ll be able to apply for access to custom models and get their own private fine-tuned model, which they’ll be able to scale in a “pay-as-you-go” cloud services model.

“Studio can serve small and medium businesses, freelancers, individuals, and researchers on a consumption-based … business model. For clients with enterprise-scale volume, we offer a subscription-based model. Customization is built into the offering. [The platform] allows any user to train their own custom model that’s based on Jurassic-1 Jumbo, but fine-tuned to better perform a specific task,” Shoham said. “AI21 Labs handles the deployment, serving, and scaling of the custom models.”

AI21 Labs’ first product was Wordtune, an AI-powered writing aid that suggests rephrasings of text wherever users type. Meant to compete with platforms like Grammarly, Wordtune offers “freemium” pricing as well as a team offering and partner integration. But the Jurassic-1 models and Studio are much more ambitious.

Jurassic models

Shoham says that the Jurassic-1 models were trained in the cloud with “hundreds” of distributed GPUs on an unspecified public service. Simply storing 178 billion parameters requires more than 350GB of memory, far more than even the highest-end GPUs can hold, which necessitated that the development team use a combination of strategies to make the process as efficient as possible.
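The 350GB figure follows directly from the parameter count: at 16-bit (2-byte) precision, 178 billion parameters occupy roughly 356GB before any gradients, optimizer state, or activations are counted. A quick check; note the precision is my assumption, since AI21 Labs hasn’t said which numeric format it uses:

```python
# Rough memory footprint of the model weights alone (excludes gradients,
# optimizer state, and activations, which multiply this figure further).
params = 178e9          # parameters in Jurassic-1 Jumbo
bytes_per_param = 2     # assuming fp16/bf16 storage (an assumption)

gigabytes = params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB")  # 356 GB
```

At full 32-bit precision the footprint doubles to over 700GB, which is why sharding weights across many accelerators is unavoidable at this scale.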

The training dataset for Jurassic-1 Jumbo, which contains 300 billion tokens, was compiled from English-language websites including Wikipedia, news publications, StackExchange, and OpenSubtitles. Tokens, the units into which natural language text is split for modeling, can be words, characters, or parts of words.

In a test on a benchmark suite that it created, AI21 Labs says that the Jurassic-1 models perform on a par with or better than GPT-3 across a range of tasks, including answering academic and legal questions. By going beyond traditional language model vocabularies, which include words and word pieces like “potato” and “make” and “e-,” “gal-,” and “itarian,” Jurassic-1 canvasses less common nouns and turns of phrase like “run of the mill,” “New York Yankees,” and “Xi Jinping.” It’s also ostensibly more token-efficient: while the sentence “Once in a while I like to visit New York City” would be represented by 11 tokens for GPT-3 (“Once,” “in,” “a,” “while,” and so on), it would be represented by just 4 tokens for the Jurassic-1 models.
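The token-count difference comes from vocabulary design: a greedy longest-match tokenizer with multi-word entries covers the same sentence in fewer pieces. A toy sketch follows; the vocabularies below are invented for illustration and match neither the real GPT-3 nor the real Jurassic-1 vocabulary, whose larger multi-word inventory merges even more aggressively (hence AI21’s 4-token count versus this sketch’s 6):

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a set of known multi-word items.

    Falls back to single words when no multi-word entry matches, standing in
    for a conventional word/subword vocabulary.
    """
    words = text.split()
    tokens, i = [], 0
    while i < len(words):
        # Try the longest span starting at position i first.
        for j in range(len(words), i, -1):
            span = " ".join(words[i:j])
            if span in vocab or j == i + 1:  # single word as a fallback
                tokens.append(span)
                i = j
                break
    return tokens

sentence = "Once in a while I like to visit New York City"
# No multi-word entries: every word is its own token.
plain = greedy_tokenize(sentence, set())
# With multi-word entries (a stand-in for Jurassic-1's approach):
multi = greedy_tokenize(sentence, {"Once in a while", "New York City"})
print(len(plain), plain)  # 11 tokens
print(len(multi), multi)  # 6 tokens
```

Fewer tokens per sentence means more text fits in a fixed context window and fewer decoding steps per generated passage, which is where the efficiency claim comes from.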

“Logic and math problems are notoriously hard even for the most powerful language models. Jurassic-1 Jumbo can solve very simple arithmetic problems, like adding two large numbers,” Shoham said. “There’s a bit of a secret sauce in how we customize our language models to new tasks, which makes the process more robust than standard fine-tuning techniques. As a result, custom models built in Studio are less likely to suffer from catastrophic forgetting, [or] when fine-tuning a model on a new task causes it to lose core knowledge or capabilities that were previously encoded in it.”


Connor Leahy, a member of the open source research group EleutherAI, told VentureBeat via email that while he believes there’s nothing fundamentally novel about the Jurassic-1 Jumbo model, it’s an impressive feat of engineering, and he has “little doubt” it will perform on a par with GPT-3. “It will be interesting to observe how the ecosystem around these models develops in the coming years, especially what kinds of downstream applications emerge as robustly useful,” he added. “[The question is] whether such services can be run profitably with fierce competition, and how the inevitable security concerns will be handled.”

Open questions

Beyond chatbots, Shoham sees the Jurassic-1 models and Studio being used for paraphrasing and summarization, like generating short product names from product descriptions. The tools could also be used to extract entities, events, and facts from text, and to label whole libraries of emails, articles, and notes by topic or category.

But troublingly, AI21 Labs has left key questions about the Jurassic-1 models and their possible shortcomings unaddressed. For example, when asked what steps had been taken to mitigate potential gender, race, and religious biases as well as other forms of toxicity in the models, the company declined to comment. It also refused to say whether it would allow third parties to audit or study the models’ outputs prior to launch.

This is cause for concern, as it’s well-established that models amplify the biases in data on which they were trained. A portion of the training data for such language models is often sourced from web communities with pervasive gender, race, physical, and religious prejudices. In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and like models can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias in some of the most popular open source models, including Google’s BERT and XLNet and Facebook’s RoBERTa.

More recent research suggests that toxic language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure the models work better for them, or discourage minority speakers from engaging with the models at all.

It’s unclear to what extent the Jurassic-1 models exhibit these kinds of biases, in part because AI21 Labs hasn’t released — and doesn’t intend to release — the source code. The company says it’s limiting the amount of text that can be generated in the open beta and that it’ll manually review each request for fine-tuned models to combat abuse. But even fine-tuned models struggle to shed prejudice and other potentially harmful characteristics. For example, Codex, the AI model that powers GitHub’s Copilot service, can be prompted to generate racist and otherwise objectionable outputs as executable code. When writing code comments with the prompt “Islam,” Codex often includes the words “terrorist” and “violent” at a greater rate than with other religious groups.

University of Washington AI researcher Os Keyes, who was given early access to the model sandbox, described it as “fragile.” While the Jurassic-1 models didn’t expose any private data — a growing problem in the large language model domain — using preset scenarios, Keyes was able to prompt the models to imply that “people who love Jews are closed-minded, people who hate Jews are extremely open-minded, and a kike is simultaneously a disreputable money-lender and ‘any Jew.’”


Above: An example of toxic output from the Jurassic models.

“Obviously: all models are wrong sometimes. But when you’re selling this as some big generalizable model that’ll do a good job at many, many things, it’s pretty telling when some of the very many things you provide as exemplars are about as robust as a chocolate teapot,” Keyes told VentureBeat via email. “What it suggests is that what you are selling is nowhere near as generalizable as you’re claiming. And this could be fine — products often start off with one big idea and end up discovering a smaller thing along the way they’re really, really good at and refocusing.”


Above: Another example of toxic output from the models.

AI21 Labs demurred when asked whether it conducted a thorough bias analysis on the Jurassic-1 models’ training datasets. In an email, a spokesperson said that when measured against StereoSet, a benchmark to evaluate bias related to gender, profession, race, and religion in language systems, the Jurassic-1 models were found by the company’s engineers to be “marginally less biased” than GPT-3.

Still, that’s in contrast to groups like EleutherAI, which have worked to exclude data sources determined to be “unacceptably negatively biased” toward certain groups or views. Beyond limiting text inputs, AI21 Labs isn’t adopting additional countermeasures, like toxicity filters or fine-tuning the Jurassic-1 models on “value-aligned” datasets like OpenAI’s PALMS.

Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who’s disadvantaged. A paper coauthored by Gebru spotlights the impact of large language models’ carbon footprint on minority communities and such models’ tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.

The effects of AI and machine learning model training on the environment have also been brought into relief. In June 2019, researchers at the University of Massachusetts at Amherst released a report estimating that training and architecture-searching a single large model can involve the emission of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like Codex require significant amounts of compute, on the order of hundreds of petaflop/s-days, which contributes to carbon emissions.

The way forward

The coauthors of a joint OpenAI and Stanford paper on large language models suggest ways to address their negative consequences, such as enacting laws that require companies to acknowledge when text is generated by AI, possibly along the lines of California’s bot law.

Other recommendations include:

  • Training a separate model that acts as a filter for content generated by a language model
  • Deploying a suite of bias tests to run models through before allowing people to use the model
  • Avoiding some specific use cases
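The first recommendation, a separate model that gates a generator’s output, can be sketched in a few lines. The "classifier" below is a deliberately trivial keyword check standing in for a real trained toxicity model, and all names are invented for illustration:

```python
# Minimal output-filtering sketch: generated text passes through a filter
# before being returned to the user. The filter here is a placeholder
# blocklist check, not a real toxicity classifier.
BLOCKLIST = {"slur_example", "threat_example"}  # stand-in terms

def is_flagged(text: str) -> bool:
    """Placeholder filter: flag text containing any blocklisted term."""
    return any(term in text.lower() for term in BLOCKLIST)

def safe_generate(generate, prompt: str) -> str:
    """Run the language model, then gate its output on the filter."""
    output = generate(prompt)
    if is_flagged(output):
        return "[output withheld by content filter]"
    return output

# Usage with a dummy generator standing in for the language model:
result = safe_generate(lambda p: "a harmless completion", "some prompt")
print(result)  # a harmless completion
```

In production the filter would itself be a learned model with its own error rates, which is why the paper’s other recommendations (bias test suites, restricted use cases) are proposed alongside it rather than instead of it.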

AI21 Labs hasn’t committed to these principles, but Shoham stresses that the Jurassic-1 models are only the first in a line of language models that it’s working on, to be followed by more sophisticated variants. The company also says that it’s adopting approaches to reduce both the cost of training models and their environmental impact, as well as working on a suite of natural language processing products of which Wordtune, Studio, and the Jurassic-1 models are only the first.

“We take misuse extremely seriously and have put measures in place to limit the potential harms that have plagued others,” Shoham said. “We have to combine brain and brawn: enriching huge statistical models with semantic elements, while leveraging computational power and data at unprecedented scale.”

AI21 Labs, which emerged from stealth in October 2019, has raised $34.5 million in venture capital to date from investors including Pitango and TPY Capital. The company has around 40 employees currently, and it plans to hire more in the months ahead.




HP Launches Gorgeous Chromebase All-In-One to Rival the iMac

HP has a new all-in-one desktop to take on Apple’s newly redesigned 24-inch M1 iMac computer. This time, the company is teaming up with Google to bring Chrome OS to its 21.5-inch Chromebase, an elegant all-in-one featuring a swiveling display, a conical speaker that doubles as the desktop’s floating stand, and a touchscreen. The latter is a feature that no macOS-powered computer supports to date.

The biggest highlight of HP’s Chromebase is that it’s designed for your home and can quickly be used for entertainment and productivity. The 21.5-inch FHD display floats on top of the conical speaker, which serves as the stand. Additionally, you can rotate the screen between portrait and landscape modes.

This makes HP’s Chromebase the first Chrome OS-powered desktop to feature a fully rotating display.

The feature could be useful for select Android apps that only support portrait orientation, for example, and for e-reading and coding. And when you’re done, you can easily switch back to landscape view by rotating the screen.

A rotating screen is one of HP's Chromebase's signature design features.

While the Chromebase doesn’t sport the same flat and angular aesthetic as its more popular rival, the design of HP’s Chromebase is still striking. The speaker stand also features Bang & Olufsen-tuned audio as well as a dual-array digital microphone to summon Google Assistant.

To keep things clean, all the ports — you get four USB-A ports and two USB-C ports — are located on the back of the speaker stand. HP includes a white wireless keyboard and mouse with the Chromebase.

The company also brought some hardware privacy controls from its other PC products, including a 5-megapixel front-facing webcam with a physical privacy shutter.

Ports are located on the rear for connectivity on the HP Chromebase.

With Chrome OS under the hood, you’re also getting access to a whole library of Android apps through the Google Play Store as well as access to Stadia for game streaming. All of this is powered by a dual-core 2.4GHz Intel Pentium Gold 6405U processor alongside 64GB of built-in eMMC storage, 4GB of DDR4 memory, and an M.2 slot for storage expansion.

HP’s Chromebase 21.5-inch All-in-One Desktop is expected to be available starting next month with a starting price of $599. The device will be sold through HP’s online store as well as through U.S. retailers like Amazon and Best Buy.
