The Pixelbook dream may finally be gone for good

Google’s flagship Chromebook may finally be dead and gone, even before the line could make its official comeback.

A recent report from The Verge cites an unnamed source that claims Google has canceled work on a new Pixelbook and shut down the team working on the product. Pixelbook team members have supposedly been transferred to other positions. Google hasn’t commented on the rumor; however, CEO Sundar Pichai’s memo in July 2022 stated the company planned to slow hiring and cut some projects.


This revelation shouldn’t come as a big surprise since Google hasn’t updated the Pixelbook in the last three years. While the original Pixelbook was a high-end laptop that ran ChromeOS, it never ended up getting a true sequel. Instead, Google tried its hand at a 2-in-1 with the Pixel Slate, and then a midrange Chromebook called the Pixelbook Go.

The Pixel Slate was shuttered shortly after launch. And though Google still sells the Pixelbook Go, it’s long overdue for an update. Pixelbook development seemed to wind down after 2019, though some reports in 2021 pointed to a possible relaunch sometime in 2023. The report from The Verge seems to indicate that those plans have changed yet again.

It could be that Google has stopped Pixelbook production because several high-quality Chromebooks are already available from other manufacturers. Rather than compete with Asus, Acer, Lenovo, Samsung, HP, and Dell for Chromebook shoppers, Google may have decided to use its resources on improving its other products, such as the long-neglected Android tablet.

But Pixelbook fans always hoped that the “Pixel” brand meant it was a true consumer product that would get updates year over year, much like the Pixel smartphones. Unfortunately, these ChromeOS devices have been treated more like Google’s early Nexus program — products meant to stir interest in the platform, not necessarily become a best-seller.


As further evidence of Google’s shifting strategy, a new Pixel tablet was announced at Google I/O in May 2022, slated to launch as soon as 2023.

With a Google Pixel tablet on the horizon, having more developers and designers focusing on a well-designed and supported tablet is a wise move. Apple has a dominant position with its incredibly popular iPad, leaving Google facing a challenge to prove itself to be a serious tablet manufacturer.

Editors’ Choice

Repost: Original Source and Author Link


Intel Raptor Lake release date leak spells good news for AMD

A new leak confirms what we already suspected — all signs point to Intel announcing its 13th generation Raptor Lake processors on September 27.

Considering that AMD revealed just yesterday that Ryzen 7000 CPUs will go on sale on September 27, this spells bad news for Intel. Which giant will be able to steal the spotlight on September 27?

— Алексей (@wxnod) August 30, 2022

A Twitter leaker shared what seems to be an Intel presentation from China that reveals various important dates for the upcoming next-gen lineup. It shows that Intel plans to break the news to the public on September 27 during its Intel Innovation event. We already expected this to be the case, and now, this leak only serves to solidify that suspicion.

According to the Twitter tipster, the product embargo date has been set to September 27 at 9:20 a.m. PT. The sales embargo lifts on October 20 at 6 a.m. PT, implying that would be the date when the CPUs appear for sale, almost a month after AMD’s Zen 4 platform hits the shelves. These dates apply to the enthusiast-level Intel Raptor Lake K and KF processors as well as the high-end Z790 chipset.

Intel will reportedly start taking pre-orders for the flagship Core i9-13900K(F) on the same day as the initial announcement, but those who want a Core i7-13700K or Core i5-13600K will have to wait until October 13 to pre-order. Further dates are less specific: between February 19 and March 18, 2023, the product information and sales embargoes will lift for the commercial, entry-level consumer, and workstation CPUs.

Intel Raptor Lake will maintain socket compatibility with Alder Lake motherboards. It will be based on the 10nm “Intel 7” process node and will support dual-channel DDR5-5600 RAM as well as PCIe 5.0 (up to 28 lanes). So far, a total of 14 processors seem to be in the works, including four Core i9 models, four Core i7, five Core i5, and a single Core i3. Aside from the enthusiast Z790 platform, H770 and B760 motherboards are also expected to arrive at a later date.

The new Raptor Lake lineup will include some rather impressive CPUs. The flagship Core i9-13900K arrives with 24 cores (eight performance cores and 16 efficiency cores) and 32 threads. The base clock is said to be 3.0GHz, with boosts up to 5.8GHz on a single core and 5.5GHz across all cores. Intel has also increased cache sizes, bringing the combined total to 68MB. The maximum power consumption is said to be 250 watts, but it can go as high as 350 watts in Extreme Performance Mode. This dwarfs the new AMD Ryzen 9 7950X with its 170-watt TDP.
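The rumored core and thread counts are easy to sanity-check: performance cores are hyperthreaded while efficiency cores are not. The quick calculation below is just a back-of-the-envelope sketch, not anything from Intel’s materials:

```python
# Rumored Core i9-13900K topology: hyperthreaded P-cores, single-threaded E-cores.
p_cores, e_cores = 8, 16

total_cores = p_cores + e_cores          # every core counts once
total_threads = p_cores * 2 + e_cores    # P-cores contribute two threads each

print(total_cores, total_threads)  # 24 32
```

The same arithmetic explains why a "24-core" chip shows up as 32 logical processors in Task Manager.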

Bad luck for Intel


With all that said, the fact that Intel is likely going to announce Raptor Lake on September 27 is just pure bad luck — or perhaps a good idea from AMD. It’s possible that Intel had already planned to break the news during Intel Innovation before AMD ever decided to push up the release date of Zen 4.

If Ryzen 7000 had launched on September 15 (as per the initial rumors), AMD would have held the spotlight for those two weeks, and then Intel could have attempted to reclaim it with its own products. Now, Intel’s big announcement will arrive at a time when the news cycle is buzzing with information about the freshly released Ryzen 7000 processors. It’s tough luck for Intel, but good news for customers: things are about to get really exciting in the best CPU arena.



The end of DDR4 hurts, but it’s ultimately a good thing

If AMD’s new Ryzen 7000-series processors get their way, DDR4 RAM is destined for the trash heap. Sure, Intel is still supporting the obsolete RAM architecture, but after yesterday’s AMD event, I’m confident we’re witnessing the end of DDR4. I’m also happy about it.

AMD’s enigmatic CEO, Dr. Lisa Su, took the stage at a pre-recorded event to launch AMD’s much-anticipated Ryzen 7000-series processors. She introduced the new 5nm chips in her iconic straight-to-the-point manner. This wasn’t a Microsoft event, so there was no crying or sweaty dancing. She laid out AMD’s roadmap for the next four quarters, stretching to 3nm chips by 2024, and then passed the stage to Mark Papermaster to explain the Zen 4 core.


The end of DDR4

Amidst all the technical talk from Dr. Su, Mark Papermaster, and David McAfee was news of the new AM5 socket that will harness the power of the Ryzen 7000-series processors. AM5 is designed around the new Ryzen processors and Radeon GPUs, and, most importantly, it supports DDR5 RAM only.

That last part was the real kicker, and it was barely mentioned at the event. But this is big news: AMD is officially ditching DDR4 RAM. Even Intel wasn’t so bold when it released its 12th-gen ‘Alder Lake’ processors last November, and it continues to support DDR4 RAM. I’m not sure how long that will last now.

DDR4 RAM has been the standard in desktops, laptops, consoles, and mobile devices since 2014. Eight years on, it’s cheap and readily available. DDR4 RAM is also fast enough to power all those computers from the 2010s.

The fastest DDR4 RAM chips top out at 4,800MHz, which is plenty fast for the XPS, MacBooks, Surface Pros, and iPads of that decade. The average device from 2018 had 8GB of DDR4-3200 RAM. And while it served those machines well, we are entering a new era of computer processing. This is the 2020s, baby.

DDR5 RAM is the new normal

DDR5 RAM is significantly faster and more power efficient than DDR4. The baseline, bottom-of-the-barrel DDR5 starts at 4,800MHz, which is where the fastest and most expensive DDR4 maxed out. The high end of DDR5 can push 6,400MHz. DDR5 also moves power management onto the module itself, which improves power delivery efficiency and can reduce drain on laptop batteries. That’s a win for the entire computer.
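The speeds quoted here are transfer rates (millions of transfers per second), commonly if loosely written as MHz. As a rough illustration of what the jump means, peak theoretical bandwidth for a single 64-bit memory channel is the transfer rate times 8 bytes per transfer; the helper below is just a back-of-the-envelope sketch:

```python
def peak_bandwidth_gbps(transfer_rate_mt):
    """Peak bandwidth in GB/s for one 64-bit (8-byte) channel."""
    return transfer_rate_mt * 8 / 1000

# DDR4-3200 (the 2018 mainstream), baseline DDR5, and high-end DDR5:
for rate in (3200, 4800, 6400):
    print(rate, peak_bandwidth_gbps(rate))
# 3200 -> 25.6 GB/s, 4800 -> 38.4 GB/s, 6400 -> 51.2 GB/s
```

So even entry-level DDR5 offers roughly 50% more raw bandwidth per channel than typical DDR4.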

The DDR5 specification was finalized in the summer of 2020, and the first modules were immediately met with howls of protest from enthusiasts, mainly because they were obscenely expensive and nearly impossible to find. There was a global pandemic and a global chip shortage. But two years later, DDR5 RAM is affordable and easily available.

The main thing to know about DDR5 RAM is that it currently shares space with DDR4 RAM. Many PC manufacturers still offer a choice between the two. Some stick with DDR4 out of pure laziness.

Home computer builders still buy DDR4 because it has been cheaper and easier to find up until now. This is why Intel continues to support it despite the incredible power of its latest Alder Lake and upcoming Raptor Lake processors. Intel’s afraid it’ll chase off the enthusiasts. AMD thinks differently.

AMD is looking forward

By ditching support for DDR4 and fully embracing DDR5 going forward, AMD is saying it’s focused on the future of computing. It has taken a big plunge here, one that spells the ultimate demise of DDR4.

AMD controls just over 24% of the global processor market across desktops and laptops, and its chips are extremely popular with gamers. Ryzen processors often outperform Intel processors in multicore benchmarks and are usually cheaper. And do you know who loves to build their own desktop PCs? Gamers.

This is a key market for desktop PC manufacturers, and AMD is telling everyone to switch to DDR5. Gamers building their own rigs at home won’t have a choice unless they go with the more expensive and power-hungry Intel chips. Personally, I don’t think it makes sense to pair an Intel Core i9-12900K processor with DDR4-3200 RAM in 2022. I’m sure most gamers will agree.

That 24% of the processor market is a big number, and it’s now only a matter of time before Intel follows AMD down this road. Sure, Intel is keeping DDR4 on life support for now, but that won’t last. AMD just killed DDR4 RAM at its Ryzen 7000-series event.

This is a good thing. DDR4 is obsolete technology. The longer it sticks around, the more outdated it becomes. We’re in a new age of ultra-powerful CPUs and GPUs unlike anything we have seen before. The power behind both AMD and Intel x86 chips, and ARM chips like Apple’s M1 and M2, is incredible.

We’ve taken huge strides forward in processors lately and these processors need modern RAM. DDR5 has the speed and power efficiency to satisfy those computing needs. DDR4 will only hold us back and needs to fade away.



Intel drops support for DirectX 9, but it may be a good thing

Intel has now officially dropped native hardware support for DirectX 9, and this applies to both integrated Xe graphics on Alder Lake CPUs and discrete Arc Alchemist GPUs.

This doesn’t mean that Intel won’t offer access to DX9. Instead, DirectX 9 will be supported through DirectX 12 via emulation. Will that be sufficient for gamers?

First spotted by SquashBionic on Twitter, this change was quietly announced by Intel on its product support page. The integrated graphics on 12th-generation processors, as well as Intel’s discrete GPU solutions (Arc Alchemist), no longer support DirectX 9 natively. Instead of handling that support on its own, Intel delegated the task to Microsoft, which will redirect DX9 support to DX12 instead.

This will be done through emulation using an open-source conversion layer that Microsoft itself has prepared, known as “D3D9On12.” The way it works is that 3D DirectX 9 graphics commands are sent straight to D3D9On12, which then converts these DX9 calls into DirectX 12 commands. It effectively replaces the GPU driver, which would usually handle DirectX 9 calls, and acts as a bridge between the two technologies.
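The real D3D9On12 layer is a Windows component with its own COM interfaces, but the general shape of a conversion layer can be sketched in miniature. Everything below is invented for illustration (none of these class or method names exist in DirectX); it only shows how a shim can sit where the driver would and re-express legacy calls as commands for a newer backend:

```python
class NewAPIDevice:
    """Stand-in for the modern backend (DX12 in the real case)."""
    def __init__(self):
        self.command_log = []

    def submit(self, command):
        # The modern API records commands for later execution.
        self.command_log.append(command)

class LegacyShim:
    """Accepts legacy-style calls and converts each one into modern
    commands, occupying the spot the legacy driver normally would."""
    def __init__(self, backend):
        self.backend = backend

    def set_texture(self, slot, name):      # legacy-style immediate call
        self.backend.submit(("bind_resource", slot, name))

    def draw_primitive(self, kind, count):  # legacy-style immediate call
        self.backend.submit(("record_draw", kind, count))
        self.backend.submit(("execute_command_list",))

device = NewAPIDevice()
shim = LegacyShim(device)
shim.set_texture(0, "bricks")
shim.draw_primitive("triangle_list", 12)
print(device.command_log)
```

The extra bookkeeping the shim performs on every call is also why this kind of translation can raise CPU usage, as noted below.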

The response to this change has been a bit of a mixed bag, but the change in itself should not be surprising. We’ve already known that Intel Arc GPUs heavily favor DirectX 12, with performance being halved when DirectX 11 is in use. Seeing as DirectX 9 is even older, having launched twenty years ago, it should hardly be a priority for Intel going forward.

Microsoft also seems quite optimistic about the emulation tech in general, claiming that it has become a decent implementation of DirectX 9. While the performance may not be quite as good as natively supporting DX9, it should be close. In some cases, the performance might even be equal to native DX9. However, one side effect of using this emulation process might be an increase in CPU usage.


In a way, this means that Intel has completely handed over the handling of DirectX 9 to Microsoft. It even says as much on its support page: “Since DirectX is property of and is sustained by Microsoft, troubleshooting of DX9 apps and games issues require promoting any findings to Microsoft Support so they can include the proper fixes in their next update of the operating system and the DirectX APIs.”

All in all, this change should turn out to be fairly low impact. Most games that are popular these days include support for DirectX 11 or newer, meaning that no conversion will have to take place for Intel GPUs to support them. The older titles that rely solely on DX9 will have to go through Microsoft’s emulation process. On the other hand, if you own an older integrated GPU from Intel (pre-Xe), you will retain DX9 support without emulation.

Let’s hope that outsourcing DX9 matters to Microsoft will free up more room for Intel to work on its DX11 optimization before the global launch of Intel Arc Alchemist.



This Mac hacker’s code is so good, corporations keep stealing it

Patrick Wardle is known for being a Mac malware specialist — but his work has traveled farther than he realized.

A former employee of the NSA and NASA, he is also the founder of the Objective-See Foundation: a nonprofit that creates open-source security tools for macOS. The latter role means that a lot of Wardle’s software code is now freely available to download and decompile — and some of this code has apparently caught the eye of technology companies that are using it without his permission.

Wardle will lay out his case in a presentation on Thursday at the Black Hat cybersecurity conference with Tom McGuire, a cybersecurity researcher at Johns Hopkins University. The researchers found that code written by Wardle and released as open source has made its way into a number of commercial products over the years, all without the companies crediting him, licensing the code, or paying for the work.

The problem, Wardle says, is that it’s difficult to prove that the code was stolen rather than implemented in a similar way by coincidence. Fortunately, because of Wardle’s skill in reverse-engineering software, he was able to make more progress than most.

“I was only able to figure [the code theft] out because I both write tools and reverse engineer software, which is not super common,” Wardle told The Verge in a call before the talk. “Because I straddle both of these disciplines I could find it happening to my tools, but other indie developers might not be able to, which is the concern.”

The thefts are a reminder of the precarious status of open-source code, which undergirds enormous portions of the internet. Open-source developers typically make their work available under specific licensing conditions — but since the code is often already public, there are few protections against unscrupulous developers who decide to take advantage. In one recent example, the Donald Trump-backed Truth Social app allegedly lifted significant portions of code from the open-source Mastodon project, resulting in a formal complaint from Mastodon’s founder.

One of the central examples in Wardle’s case is a software tool called OverSight, which Wardle released in 2016. OverSight was developed as a way to monitor whether any macOS applications were surreptitiously accessing the microphone or webcam, with much success: it was effective not only at finding Mac malware that was surveilling users but also at uncovering the fact that a legitimate application like Shazam was always listening in the background.

Wardle — whose cousin Josh Wardle created the popular Wordle game — says he built OverSight because there wasn’t a simple way for a Mac user to confirm which applications were activating the recording hardware at a given time, especially if the applications were designed to run in secret. To solve this challenge, his software used a combination of analysis techniques that turned out to be unusual and, thus, unique.

But years after OverSight was released, he was surprised to find a number of commercial applications incorporating similar application logic in their own products — even down to replicating the same bugs that Wardle’s code had.

A slide from Wardle and McGuire’s Defcon presentation.
Image: Patrick Wardle

Three different companies were found to be incorporating techniques lifted from Wardle’s work in their own commercially sold software. None of the offending companies are named in the Black Hat talk, as Wardle says that he believes the code theft was likely the work of an individual employee, rather than a top-down strategy.

The companies also reacted positively when confronted about it, Wardle says: all three vendors he approached reportedly acknowledged that his code had been used in their products without authorization, and all eventually paid him directly or donated money to the Objective-See Foundation.

Code theft is an unfortunate reality, but by bringing attention to it, Wardle hopes to help both developers and companies protect their interests. For software developers, he advises that anyone writing code (whether open or closed source) should assume it will be stolen and learn how to apply techniques that can help uncover instances where this has happened.

For corporations, he suggests that they better educate employees on the legal frameworks surrounding reverse engineering another product for commercial gain. And ultimately, he hopes they’ll just stop stealing.



4 simple reasons I’ll never give up Windows for good

My household is firmly entrenched in Apple’s cozy orchard, but I keep sneaking back to my Windows PC when nobody is looking. There’s no way I could ever give up Windows.

Don’t get me wrong: MacOS is a slick bit of software. It is uniform and easy to navigate. The animations are top-notch. MacOS still has a built-in assistant (I will never forget you, Cortana). But none of that is enough to stop me from going back to Windows every day.

Cross-platform compatibility


The Apple ecosystem is amazing. I love how easily I can share anything across Apple devices. My wife is all-in on Apple (and is the only reason I have Apple-anything). My kids share an iPad. We have an Apple TV and a HomePod Mini. We all sync photos and reminders and music playlists and TV shows without having to think about it.

But as a techie, I also dabble in Android. I have an Xbox, and even an Oculus Quest 2 VR headset. I have several Alexa devices scattered about the home, along with some non-HomeKit smart plugs. Apple refuses to play nice with these things. On the other hand, Microsoft is friends with everybody.

I also use Outlook and OneDrive and OneNote and ToDo on my iPhone. They sync with my iCloud account, so I still get to share things with my wife. Alexa can even turn on my Xbox.

Apple tends to be a one-way street, and the iCloud website is as bare-bones as it gets, yet it’s the only way I can use Reminders or Notes on my PC. Truth is, if I were to lose my Apple devices tomorrow, I would still have a complete, unified ecosystem of Microsoft-compatible devices.


Gaming

Gaming is the Mac’s Achilles’ heel. No matter how useful the OS becomes, it is dead to gamers. The upcoming MacOS Ventura promises to woo game developers over to the Mac, but I’m not holding my breath. Apple CEO Tim Cook announced Metal 3, Apple’s new framework that lets game developers take full advantage of Apple silicon like the M2 chip. Cook also announced MetalFX Upscaling, which renders scenes at a lower resolution and upscales them, producing complex graphics with less computational load on the GPU.

But even if the M2 is friendlier to gaming, game developers and gamers remain focused squarely on the PC. Attracting large game studios to build for Mac will take time. From what I can tell, the Mac is still many years away from matching the game ecosystem Microsoft has built up.

Cloud gaming is one area I’m closely following. I love having the ability to play many of my favorite Game Pass titles on my Mac, although I admittedly use the Edge browser and not Safari. I also have access to most of my Steam library through GeForce Now.

But not every game I enjoy is available on the cloud. Age of Empires IV and Crusader Kings III are nowhere to be found. And forget it with PC VR games. I often use my Quest 2 with Steam VR and my dedicated RTX card just manages to keep up. That’s just not something you can do on a Mac right now.

Window management


If, by some miracle, game developers were to suddenly flock to MacOS, I would still stay with Windows, because I loathe the Mac’s window management. When I click the X button, I expect the window to close. If I wanted to simply minimize the window, I would click the minimize button.

Multitasking on a Mac remains a frustrating experience to this day. I can have a total of two windows open side by side. If I want more, I need to pay for a third-party extension.

Meanwhile, Windows 11 lets me choose from six multi-window configurations out of the box. Let’s not forget the ability to simply snap windows to different corners of the screen in Windows 11, an extremely useful trick when working.

Windows 11 also has Snap Groups, which has quickly become a tool I can’t live without. Snap Groups allow me to group a bunch of windows together, for example when I’m working on a project requiring writing, research, and note-taking. Once I’ve snapped some programs into a multi-window layout, Windows 11 remembers this. I can minimize the entire group and work on something else, and then simply open the group up again when I’m ready to get back to it. Nothing on Mac comes close — especially not Stage Manager.

Look and feel


At the end of the day, both MacOS and Windows 11 succeed in achieving the same thing: a useful interface for people to get stuff done. But I like the look and feel of the Windows 11 UX so much more.

The Windows 11 Start Menu looks more mature and professional than the Mac’s Launchpad. Of course, many people prefer the Unix environment of a Mac, but beauty is in the eye of the beholder. I also prefer the frosted glass backgrounds of Windows menus compared to Apple’s. And the widget menu on Windows packs a lot more information than the one on MacOS.

It is true that Microsoft took a lot of cues from Apple when redesigning Windows. From the rounded edges to the frosted glass look of the menus, Windows 11 has a MacOS feel to it. However, Microsoft did a better job of it. Aesthetics are completely subjective, of course; my wife would disagree, for example. But I think Windows 11 gives you the best of both worlds.



Nvidia’s RTX 4000 GPUs get new specs, and it’s not all good news

Nvidia’s upcoming Ada Lovelace graphics cards just received a new set of rumored specifications, and this time around, it’s a bit of a mixed bag.

While the news is good for one of the GPUs, the RTX 4070 actually received a cut when it comes to its specs — but the leaker says this won’t translate to a cheaper price.

And TBP, 450/420?/300W.

— kopite7kimi (@kopite7kimi) June 23, 2022

The information comes from kopite7kimi, a well-recognized name when it comes to PC hardware leaks, who has just revealed an update to the specifications of the RTX 4090, RTX 4080, and the RTX 4070. While we’ve already heard previous whispers about the specs of the RTX 4090 and the RTX 4070, this is the first time we’re getting predictions about the specs of the RTX 4080.

Let’s start with the good news. If this rumor is true, the flagship RTX 4090 seems to have received a slight bump in the core count. The previously reported number was 16,128 CUDA cores, and this has now gone up to 16,384 cores, which translates to an upgrade from 126 streaming multiprocessors (SMs) to 128. As for the rest of the specs, they remain unchanged — the current expectation is that the GPU will get 24GB of GDDR6X memory across a 384-bit memory bus, as well as 21Gbps bandwidth.
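Those CUDA-core figures map cleanly onto streaming multiprocessor counts if, as with consumer Ampere, each Ada SM carries 128 FP32 cores (an assumption drawn from the same rumor mill, not confirmed specs):

```python
# Assumed cores per SM for Ada, matching consumer Ampere.
CORES_PER_SM = 128

print(16128 // CORES_PER_SM)  # 126 SMs (the earlier RTX 4090 rumor)
print(16384 // CORES_PER_SM)  # 128 SMs (the updated rumor)
print(144 * CORES_PER_SM)     # 18432 cores in a fully enabled AD102
```

That last line shows how much headroom a full AD102 would leave above even the updated RTX 4090 figure.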

The RTX 4090 includes the AD102 GPU, which maxes out at 144 SMs, but it seems unlikely that the RTX 4090 itself will ever reach such heights. The full version of the AD102 GPU is probably going to be found in an even better graphics card, be it a Titan or simply an RTX 4090 Ti. It’s also rumored to have monstrous power requirements. This time around, kopite7kimi didn’t reveal anything new about that card, and as of now, we still don’t know for a fact that it even exists.

Moving on to the RTX 4080 with the AD103 GPU, it’s said to come with 10,240 CUDA cores and 16GB of memory. However, according to kopite7kimi, it would rely on GDDR6 memory as opposed to GDDR6X. Seeing as the leaker predicts it to be 18Gbps, that would actually make it slower than the RTX 3080 with its 19Gbps memory. The core count is exactly the same as in the RTX 3080 Ti. So far, this GPU doesn’t sound very impressive, but it’s said to come with a much larger L2 cache that could potentially offer an upgrade in its gaming performance versus its predecessors.


When it comes to the RTX 4070, the GPU was previously rumored to come with 12GB of memory, but now, kopite7kimi predicts just 10GB across a 160-bit memory bus. It’s said to offer 7,168 CUDA cores. While it’s certainly an upgrade over the RTX 3070, it might not quite be the generational leap some users are hoping for. It’s also supposedly not going to receive a price discount based on the reduction in specs, but we still don’t know the MSRP of this GPU, so it’s hard to judge its value.

Lastly, the leaker delivered an update on the power requirements of the GPUs, which have certainly been the subject of much speculation over the last few months. The predicted TBP for the RTX 4090 is 450 watts. It’s 420 watts for the RTX 4080 and 300 watts for the RTX 4070. Those numbers are a lot more conservative than the 600 watts (and above) that we’ve seen floating around.

What does all of this mean for us, the end users of the upcoming RTX 40-series GPUs? Not too much just yet. The specifications may still change, and although kopite7kimi has a proven track record, they could be wrong about the specs, too. As things stand now, however, only the RTX 4090 seems to mark a huge upgrade over its predecessor, while the other two are much more modest changes. It remains to be seen whether the pricing will reflect that.



Microsoft could finally kill HDD boot drives for good

Microsoft may have plans to phase out hard disk drives (HDDs) as the main storage component in PCs running Windows 11, according to a recent report by industry analyst firm Trendfocus, as reported by Tom’s Hardware.

If Microsoft goes through with its plans, consumers could begin to see solid-state drives (SSDs) instead, with the exception of dual-drive desktop PCs and gaming laptops, which require multiple types of storage, as Tom’s Hardware noted.

While Microsoft has declined to comment on the matter, the current trends indicate a complete market transition to SSD by 2023. Many PC makers already use SSD as their main storage option; however, it is still not a set standard, especially in emerging markets.

Trendfocus claims Microsoft is internally pushing for the switch to SSDs as the main storage standard for Windows 11 PCs; however, the company has not implemented any requirements for computer or laptop makers to follow.

Tom’s Hardware noted that Windows 11 requires PCs to have at least 64GB of storage for installation but does not specify a type of hard drive. The operating system has, of course, been available since last October to both HDD and SSD devices.

However, the publication wonders if Microsoft requiring Windows 11 PCs to have SSDs in 2023 will lead to a list of minimum specifications for computers as a whole, and furthermore, whether device makers would be penalized for not following the list.

Overall, analysts note that Microsoft’s moves are financially driven, with SSDs costing more per unit than HDDs. With the pandemic PC boom dwindling and the price of computer components increasing due to inflation, manufacturers remain uncertain about how the transition will affect their business.

Trendfocus Vice President John Chen told Tom’s Hardware that 2023 is still not a hard date for the transition to SSD. Some suggestions considered in talks with Microsoft include holding off the transition of emerging markets until 2024 or pausing the desktop switch until that time.



DeepMind says its new AI coding engine is as good as an average human programmer

DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Oriol Vinyals, principal research scientist at DeepMind, told The Verge over email that the research was still in the early stages but that the results brought the company closer to creating a flexible problem-solving AI — a program that can autonomously tackle coding challenges that are currently the domain of humans only. “In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Vinyals.

AlphaCode was tested against challenges curated by Codeforces, a competitive coding platform that shares weekly problems and issues rankings for coders similar to the Elo rating system used in chess. These challenges are different from the sort of tasks a coder might face while making, say, a commercial app. They’re more self-contained and require a wider knowledge of both algorithms and theoretical concepts in computer science. Think of them as very specialized puzzles that combine logic, maths, and coding expertise.

In one example challenge that AlphaCode was tested on, competitors are asked to find a way to convert one string of random, repeated s and t letters into another string of the same letters using a limited set of inputs. Competitors cannot, for example, just type new letters but instead have to use a “backspace” command that deletes several letters in the original string. You can read a full description of the challenge below:

An example challenge titled “Backspace” that was used to evaluate DeepMind’s program. The problem is of medium difficulty, with the left side showing the problem description, and the right side showing example test cases.
Image: DeepMind / Codeforces
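The article doesn’t reproduce a solution, but the underlying Codeforces problem admits a short greedy answer; the sketch below is our own illustration, not AlphaCode’s generated code. The idea: scan both strings from the end, and treat any mismatch as a backspace press, which consumes the current character of s plus one earlier character.

```python
def can_obtain(s: str, t: str) -> bool:
    """Return True if t can be produced by typing s character by character,
    optionally pressing backspace (which deletes the previously typed
    letter, or nothing if the buffer is empty) instead of a character."""
    i, j = len(s) - 1, len(t) - 1
    while i >= 0:
        if j >= 0 and s[i] == t[j]:
            # This character of s survives and matches t.
            i -= 1
            j -= 1
        else:
            # Backspace pressed here: skip s[i] and delete one earlier char.
            i -= 2
    return j < 0  # all of t was matched
```

On the problem’s sample cases, typing “ababa” can yield “ba” (backspace the first letter, type two, backspace, type one) but not “bb”.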

Ten of these challenges were fed into AlphaCode in exactly the same format they’re given to humans. AlphaCode then generated a large number of possible answers and winnowed these down by running the code and checking the output, just as a human competitor might. “The whole process is automatic, without human selection of the best samples,” Yujia Li and David Choi, co-leads of the AlphaCode paper, told The Verge over email.
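That generate-then-filter loop can be sketched as follows. The function names and structure here are our own illustration (per DeepMind’s paper, the real system samples far more candidates and also clusters the survivors before choosing submissions): each candidate program is executed against the problem’s example tests, and only those whose output matches survive.

```python
import subprocess
import sys


def filter_candidates(candidates, examples):
    """Keep only candidate programs that pass every example test.

    `candidates`: list of Python source strings (model samples).
    `examples`: list of (stdin, expected_stdout) pairs from the problem.
    """
    survivors = []
    for src in candidates:
        ok = True
        for stdin_text, expected in examples:
            # Run the candidate as a subprocess and compare its stdout.
            proc = subprocess.run(
                [sys.executable, "-c", src],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=5,
            )
            if proc.returncode != 0 or proc.stdout.strip() != expected.strip():
                ok = False
                break
        if ok:
            survivors.append(src)
    return survivors
```

For instance, given one candidate that doubles its input and one that always prints 0, only the first survives the example tests for a “double the number” problem.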

AlphaCode was tested on 10 challenges that had been tackled by 5,000 users on the Codeforces site. On average, it ranked within the top 54.3 percent of responses, and DeepMind estimates that this gives the system a Codeforces Elo of 1238, which places it within the top 28 percent of users who have competed on the site in the last six months.

“I can safely say the results of AlphaCode exceeded my expectations,” Codeforces founder Mike Mirzayanov said in a statement shared by DeepMind. “I was sceptical [sic] because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the most difficult part) to invent it. AlphaCode managed to perform at the level of a promising new competitor.”

An example interface of AlphaCode tackling a coding challenge. The input is given as it is to humans on the left and the output generated on the right.
Image: DeepMind

DeepMind notes that AlphaCode’s skill set is currently applicable only within the domain of competitive programming, but that its abilities open the door to creating future tools that make programming more accessible and, one day, fully automated.

Many other companies are working on similar applications. For example, Microsoft and the AI lab OpenAI have adapted the latter’s language-generating program GPT-3 to function as an autocomplete program that finishes strings of code. (Like GPT-3, AlphaCode is also based on an AI architecture known as a transformer, which is particularly adept at parsing sequential text, both natural language and code.) For the end user, these systems work just like Gmail’s Smart Compose feature, suggesting ways to finish whatever you’re writing.

A lot of progress has been made developing AI coding systems in recent years, but these systems are far from ready to just take over the work of human programmers. The code they produce is often buggy, and because the systems are usually trained on libraries of public code, they sometimes reproduce material that is copyrighted.

In one study of an AI programming tool named Copilot developed by code repository GitHub, researchers found that around 40 percent of its output contained security vulnerabilities. Security analysts have even suggested that bad actors could intentionally write and share code with hidden backdoors online, which then might be used to train AI programs that would insert these errors into future programs.

Challenges like these mean that AI coding systems will likely be integrated slowly into the work of programmers — starting as assistants whose suggestions are treated with suspicion before they are trusted to carry out work on their own. In other words: they have an apprenticeship to carry out. But so far, these programs are learning fast.

Repost: Original Source and Author Link


Notorious ransomware gang Conti shuts down, but not for good

The ransomware group known as Conti has officially shut down, with all of its infrastructure now offline.

Although this might seem like good news, it’s only good on the surface: Conti is not over; it has simply split into smaller operations.

Advanced Intel

Conti was launched in the summer of 2020 as a successor to the Ryuk ransomware. It relied on partnerships with other malware operations to distribute itself: infections such as TrickBot and BazarLoader served as the initial point of entry, after which Conti proceeded with the attack. Conti proved so successful that it eventually evolved into a cybercrime syndicate that took over TrickBot, BazarLoader, and Emotet.

During the past two years, Conti carried out a number of high-profile attacks, targeting the City of Tulsa, Advantech, and Broward County Public Schools. Conti also held the IT systems of Ireland’s Health Service Executive and Department of Health ransom for weeks, relenting only when it came under serious pressure from law enforcement around the world. The attack nonetheless drew a lot of global media attention to Conti.

Most recently, it targeted the country of Costa Rica, but according to Yelisey Boguslavskiy of Advanced Intel, the attack was just a cover-up for the fact that Conti was disbanding the whole operation. Boguslavskiy told Bleeping Computer that the attack on Costa Rica was made so public in order to give the members of Conti time to migrate to different ransomware operations.

“The agenda to conduct the attack on Costa Rica for the purpose of publicity instead of ransom was declared internally by the Conti leadership. Internal communications between group members suggested that the requested ransom payment was far below $1 million (despite unverified claims of the ransom being $10 million, followed by Conti’s own claims that the sum was $20 million),” says a yet-to-be-published report from Advanced Intel, shared ahead of time by Bleeping Computer.

Conti ransomware group logo.

The ultimate end of Conti was brought on by the group’s open approval of Russia and its invasion of Ukraine. On official channels, Conti went as far as to say that it would pool all of its resources into defending Russia from possible cyberattacks. Following that, a Ukrainian security researcher leaked over 170,000 internal chat messages between members of the Conti group, and ultimately also leaked the source code for the gang’s ransomware encryptor. This encryptor was later used to attack Russian entities.

As things stand now, all of Conti’s infrastructure has been taken offline, and the leaders of the group said that the brand is over. However, this doesn’t mean that Conti members will no longer pursue cybercrime. According to Boguslavskiy, the leadership of Conti decided to split up and team up with smaller ransomware gangs, such as AvosLocker, HelloKitty, Hive, BlackCat, and BlackByte.

Members of the previous Conti ransomware gang, including intel analysts, pentesters, devs, and negotiators, are spread throughout various cybercrime operations, but they are still part of the Conti syndicate and fall under the same leadership. This helps them avoid law enforcement while still carrying out the same cyberattacks as they did under the Conti brand.

Conti was considered one of the most expensive and dangerous types of ransomware ever created, with over $150 million of ransom payments collected during its two-year stint. The U.S. government offers a substantial reward of up to $15 million for help in identifying the individuals involved with Conti, especially those in leadership roles.

