This dangerous hacking tool is now on the loose

A dangerous post-exploitation toolkit, first used for cybersecurity purposes, has now been cracked and leaked to hacking communities.

The toolkit is being shared across many different websites, and the potential repercussions could be huge now that it could fall into the hands of any number of threat actors.


This could be bad. The post-exploitation toolkit in question, called Brute Ratel C4, was created by Chetan Nayak, an ex-red teamer. A red teamer’s job is to attempt to breach the security of a given network while the blue team actively defends it; afterward, both teams discuss how the exercise went and which security flaws need to be fixed.

Brute Ratel was created for that exact purpose. It was made for “red teamers” to use, with the ultimate goal of executing commands remotely on a compromised machine, which makes it far easier for an attacker to move on to the rest of the network.

Cobalt Strike is a similar tool to Brute Ratel, and because it has been heavily abused by ransomware gangs, it is now fairly easy to detect. Brute Ratel has not spread nearly as widely up to now, and its licensing verification system mostly kept hackers at bay: Nayak can revoke the license of any company found to be fake or to be misusing the tool.

Unfortunately, that’s now a thing of the past, because a cracked version of the tool started to circulate. It was first uploaded to VirusTotal in its uncracked state, but a Russian group called Molecules was able to crack it and entirely remove the licensing requirement from it. This means that now, any potential hacker can get their hands on it if they know where to look.

Will Thomas, a cyber threat intelligence researcher, published a report on the cracked version of the tool. It has already spread to many English and Russian-speaking communities, including CryptBB, RAMP, BreachForums, Exploit[.]in, Xss[.]is, and Telegram and Discord groups.


“There are now multiple posts on multiple of the most populated cybercrime forums where data brokers, malware developers, initial access brokers, and ransomware affiliates all hang out,” said Thomas in the report. In a conversation with Bleeping Computer, Thomas said that the tool works and no longer requires a license key.

Thomas explained the potential dangers of the tech, saying, “One of the most concerning aspects of the BRC4 tool for many security experts is its ability to generate shellcode that is undetected by many EDR and AV products. This extended window of detection evasion can give threat actors enough time to establish initial access, begin lateral movement, and achieve persistence elsewhere.”

Knowing that this powerful tool is out there, in the hands of hackers who should never have gained access to it, is definitely scary. Let’s hope that antivirus software developers can tighten the defenses against Brute Ratel soon enough.


Repost: Original Source and Author Link


CD Projekt Red releases an official modding tool for ‘Cyberpunk 2077’

Cyberpunk 2077 now has an official modding tool. CD Projekt Red has launched REDmod, which gives players integrated support to easily install and load mods on the PC version of the action RPG. As the developer’s official announcement notes, the tool also lets players modify and personalize their game using the custom sounds, animations, and scripts that come with it. CD Projekt Red promises to update REDmod alongside future patches to ensure that it remains compatible with the game. It is free DLC, and players who don’t want it don’t have to install it at all.

As popular mod website Nexus Mods clarifies, while new mods are required to use a specific format to be compatible with REDmod, old mods will continue to work just fine. Older mods that aren’t compatible with the tool simply won’t show up in the new REDmod menu, which is also where players can toggle compatible mods on or off.

The free DLC is now available for download from the official Cyberpunk 2077 website, but players can also get it from GOG, Steam, or Epic.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.



AI text-to-image processors: Threat to creatives or new tool in the toolbox?


An image produced from scratch by a video game designer using an AI tool recently won an art competition at the Colorado State Fair, as has been widely reported. Some artists are alarmed, but should they be? 

For several years, AI has been incorporated into tools artists use every day, from computational photography within the Apple iPhone to image enhancement tools from Topaz Labs and Lightricks, and even open-source applications. But because an image generated entirely by an AI tool won a competition, some see this as a tipping point — a sign of an AI catastrophe to come that will lead to widespread job displacement for those in creative fields, including graphic design and illustration, photography, journalism, creative writing, and even software development.


The winning image was generated using Midjourney, a cloud-based text-to-image tool developed by a small research lab by that name that is “exploring new mediums of thought and expanding the imaginative powers of the human species.” Their product is a text-to-image generator, the result of AI neural networks trained on vast numbers of images. The company has not disclosed its technology stack, but CEO David Holz said it uses very large AI models with billions of parameters. “They’re trained over billions of images.” Although Midjourney has only recently emerged from stealth mode, already hundreds of thousands of people are using the service.

There is suddenly a proliferation of similar tools, including DALL-E from OpenAI and Imagen from Google. According to a Vanity Fair story, Imagen provides “photorealistic images [that] are even more indistinguishable from the real thing.” Stable Diffusion, from Stability AI, is another new text-to-image tool; it is open source and can run locally on a PC with a good graphics card. Stable Diffusion can also be used via art generator services including Artbreeder and Lightricks.



Using is believing

As an avid hobbyist photographer who displays work in galleries, I have my own concerns that these tools could mark the end of photography. I decided to try Midjourney myself to see what it could output, and to better think through the possible ramifications. The following image was generated by trying variations on these text prompts: “An emerald-green lake backed by steep Canadian Rockies + A few patches of snow on the mountains + Soft morning light + mountains with green conifer forest + Sunrise + 4K UHD.” 

Canadian Rockies by Gary Grossman via Midjourney

This seems like an amazing result for a novice user. The total time it took from when I first accessed the system to the final image was less than 30 minutes. I must admit to experiencing a childlike wonder as I watched the image materialize in mere seconds from the prompts I supplied. This brought to memory a 60-year-old quote from science fiction writer and futurist Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” It felt like magic.

There are others using Midjourney who display far more sophistication. For example, one user produced an “alien cat” image from more than 30 text prompts including: “cat+alien with rainbow shimmering scales, glowing, hyper-detailed, micro details, ultra-wide angle, octane render, realistic …” It appears that more detailed prompts can lead to more sophisticated and higher-quality images. 

Alien Cat by Bella Gritty via Midjourney

These AI text-to-image tools are already good enough for commercial endeavors. Creative artist Karen X. Cheng was engaged to create an AI-produced cover image for Cosmopolitan. To help generate ideas and the final image, she used DALL-E, or more specifically the newest version, DALL-E 2. Cheng describes the process including the search for the right set of prompts, noting that she generated thousands of images, modifying the text prompts hundreds of times over many hours before finding one image that felt right. 


Text-to-image: A new tool or threat to a way of life?

In a LinkedIn post, Cheng commented: “I think the natural reaction is to fear that AI will replace human artists. Certainly, that thought crossed my mind, especially in the beginning. But the more I use DALL-E, the less I see this as a replacement for humans, and the more I see it as a tool for humans to use — an instrument to play.”

I had the same feeling when using Midjourney. I posted the Canadian Rockies image on Flickr, an image-sharing site for artists — mainly photographers and digital artists — and asked for opinions. Specifically, I wanted to know whether people viewed an AI image generator as an abomination and threat or simply another tool. One professional responded: “I’ve also been playing around with Midjourney. I’m a creative! How can I NOT mess around with it to see what it can do? I am of the opinion that the results are art, even though it is AI-generated. A human imagination creates the prompt, then curates the results or tries to coax a different result from the system. I think it’s wonderful.” 

A common refrain in the debate over AI is that it will destroy jobs. The response to this worry is often twofold: first, that many existing jobs will be augmented by AI such that humans and machines working together will produce better output by extending human creativity, not replacing it; second, that AI will also create new jobs, possibly in fields that did not exist before. 

Entrepreneur and influencer Rob Lennon predicted recently that AI text and image generators will lead to new career opportunities, specifically citing “prompt engineering.” Prompt craft is the art of knowing how to write a prompt to get optimal results from an AI. The best prompts are concise while giving the AI context to understand the desired outcome. Already, PromptBase has started to market this service. Its platform enables prompt engineers to “sell text descriptions that reliably produce a certain art style or subject on a specific AI platform.” 
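Prompt craft of the kind described here largely comes down to composing short fragments, as in the “+”-separated Midjourney prompts quoted earlier. A trivial helper makes the pattern explicit (purely illustrative; the services themselves just take a single string):

```python
def build_prompt(*fragments):
    """Join prompt fragments in the '+'-separated style quoted earlier."""
    return " + ".join(fragments)

# A hypothetical landscape prompt assembled from reusable fragments.
prompt = build_prompt(
    "An emerald-green lake backed by steep Canadian Rockies",
    "Soft morning light",
    "Sunrise",
    "4K UHD",
)
print(prompt)
```

A prompt marketplace like the one PromptBase describes is, in effect, selling well-tested sets of such fragments.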

Megan Paetzhold, a photo editor at New York magazine, put DALL-E to the test with assignments she would normally give to artists on her team. In the end, she called it “a draw” and noted: “DALL-E never gave me a satisfying image on the first try — there was always a workshopping process.” She added: “As I refined my techniques, the process began to feel shockingly collaborative; I was working with DALL-E rather than using it. DALL-E would show me its work, and I’d adjust my prompt until I was satisfied.”  

Isn’t there a dark side?

Clearly, these tools can be used to produce high-quality content. While many creative jobs could ultimately be threatened, for now, text-to-image generators are an example of people and machines working together in a new area of artistic exploration. Ethically, the key is to disclose that an image or text was created using an AI generator so people know that the content has been produced by a machine. They may like the output or not, and in that regard, it is no different from any other creative endeavor. 

This perspective will not satisfy everyone. Many writers, photographers, illustrators and other creatives — even if they agree that the AI generation tools lack refinement — believe it is only a matter of time until they, the creative professionals, are replaced by machines. Bloomberg technology editor Vlad Savov encapsulated these arguments, seeing these tools as both stifling and ripping off artists. He may ultimately be correct, though as a respondent to my Flickr query noted, “It is another kind of art, which is not necessarily bad and potentially allows for incredible creativity.” Another wrote, “I don’t feel threatened by AI. Everything changes.” It does. I guess we just thought there would be more time. 

It is possible these tools are just one more in the artist’s kit. They will be used to produce images and text that will be enjoyed and sold. As Jesus Diaz writes in Fast Company: “Once you try a text-to-image program, the joy of artificial intelligence seems undeniable despite the many dangers that lie ahead.” This does not automatically mean that more traditional creative pursuits will vanish. Ironically there may come a time in the not-too-distant future when “human-made” will carry a cachet, and work produced without an AI image or text generator could command a premium.   

Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.





The O․MG Elite cable is a scarily stealthy hacker tool

I didn’t think I would be scared of a USB cable until I went to Def Con. But that’s where I first learned about the O.MG Cable. Released at the notorious hacker conference, the Elite cable wowed me with its combination of technical prowess and extremely stealthy design.

Put simply, you can do a lot of damage with a cable that doesn’t behave the way your target expects.

What is it?

It’s just an ordinary, unremarkable USB cable — or that’s what a hacker would want you to think.

“It’s a cable that looks identical to the other cables you already have,” explains MG, the cable’s creator. “But inside each cable, I put an implant that’s got a web server, USB communications, and Wi-Fi access. So it plugs in, powers up, and you can connect to it.”

That means this ordinary-looking cable is, in fact, designed to snoop on the data that passes through it and send commands to whatever phone or computer it’s connected to. And yes, there’s a Wi-Fi access point built into the cable itself. That feature existed in the original cable, but the newest version comes with expanded network capabilities that make it capable of bidirectional communications over the internet — listening for incoming commands from a control server and sending data from whatever device it’s connected to back to the attacker.

MG, creator of the O.MG Cable, at Def Con.
Photo by Corin Faife / The Verge

What can it do?

It bears stressing again that this is a totally normal-looking USB cable, which makes its power and stealth all the more impressive.

Firstly, like the USB Rubber Ducky (which I also tested at Def Con), the O.MG cable can perform keystroke injection attacks, tricking a target machine into thinking it’s a keyboard and then typing in text commands. That already gives it a huge range of possible attack vectors: using the command line, it could launch software applications, download malware, or steal saved Chrome passwords and send them over the internet.

It also contains a keylogger: if used to connect a keyboard to a host computer, the cable can record every keystroke that passes through it and save up to 650,000 key entries in its onboard storage for retrieval later. Your password? Logged. Bank account details? Logged. Bad draft tweets you didn’t want to send? Also logged.
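The 650,000-entry cap implies a fixed-capacity log that discards the oldest keystrokes once full. A minimal sketch of that idea in Python (the capacity figure comes from the article; the class and its interface are hypothetical, since the real implant runs on embedded firmware, not Python):

```python
from collections import deque

class KeystrokeLog:
    """Fixed-capacity log: once full, the oldest entries are dropped."""

    def __init__(self, capacity=650_000):
        self._entries = deque(maxlen=capacity)

    def record(self, key):
        self._entries.append(key)

    def dump(self):
        """Retrieve everything captured so far, oldest first."""
        return list(self._entries)

log = KeystrokeLog(capacity=5)   # tiny capacity for demonstration
for ch in "password":            # 8 keystrokes into a 5-slot log
    log.record(ch)
print(log.dump())                # only the 5 most recent keystrokes survive
```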

(This would most probably require physical access to a target machine, but there are many ways that an “evil maid attack” can be executed in real life.)

An X-ray of the O.MG Cable showing the chip implant.
Image via the O.MG website

Lastly, about that built-in Wi-Fi. Many “exfiltration” attacks — like the Chrome password theft mentioned above — rely on sending data out over the target machine’s internet connection, which runs the risk of being blocked by antivirus software or a corporate network’s configuration rules. The onboard network interface skirts around these protections, giving the cable its own communications channel to send and receive data and even a way to steal data from targets that are “air gapped,” i.e., completely disconnected from external networks.

Basically, this cable can spill your secrets without you ever knowing.

How much of a threat is it?

The scary thing about the O.MG cable is that it’s extremely covert. Holding the cable in my hand, there was really nothing to make me suspicious. If someone had offered it as a phone charger, I wouldn’t have had a second thought. With a choice of connections from Lightning, USB-A, and USB-C, it can be adapted for almost any target device including Windows, macOS, iPhone, and Android, so it’s suitable for many different environments.

For most people, though, the threat of being targeted is very low. The Elite version costs $179.99, so this is definitely a tool for professional penetration testing, rather than something a low-level scammer could afford to leave lying around in the hope of snaring a target. Still, costs tend to come down over time, especially with a streamlined production process. (“I originally made these in my garage, by hand, and it took me four to eight hours per cable,” MG told me. Years later, a factory now handles the assembly.)

Overall, chances are that you won’t be hacked with an O.MG cable unless there’s something that makes you a valuable target. But it’s a good reminder that anyone with access to sensitive information should be careful with what they plug into a computer, even with something as innocuous as a cable.

Could I use it myself?

I didn’t get a chance to test the O.MG cable directly, but judging by the online setup instructions and my experience with the Rubber Ducky, you don’t need to be an expert to use it.

The cable takes some initial setup, like flashing firmware to the device, but can then be programmed through a web interface that’s accessible from a browser. You can write attack scripts in a modified version of DuckyScript, the same programming language used by the USB Rubber Ducky; when I tested that product, I found it easy enough to get to grips with the language but also noted a few things that could trip up an inexperienced programmer.

Given the price, this wouldn’t make sense as a first hacking gadget for most people — but with a bit of time and motivation, someone with a basic technical grounding could find many ways to put it to work.



Is your doctor providing the right treatment? This healthcare AI tool can help


How does a medical professional stay aware of the right procedures and treatments for patient ailments in the modern world? Many rely on experience, but there is another approach that could have life-saving consequences. The catch is that it relies heavily on the power of artificial intelligence (AI).

New York-based medical startup H1 released a new update to its HCP Universe platform today to inject a dose of healthcare AI into medical intelligence. The HCP Universe platform is currently used by medical affairs teams at life sciences companies, which make sure doctors are aware of and use the latest science and medicine. 

With HCP Universe, medical affairs teams can target the right doctors and educate them about the latest and most important medical treatments and which patients should receive that treatment.

“Our mission for this product is to make sure that the latest medicine is used on the right patients, so that patients get the right treatment,” Ariel Katz, H1 cofounder and CEO, told VentureBeat.

Using healthcare AI to improve adoption of new treatments

For H1, the use of healthcare AI is all about providing the intelligence to help medical affairs people find the right doctors – proactively. 

“What we have done in the past is provide a platform for users to go and search and find doctors for a specific field or treatment, but that’s not the right thing to do,” Katz explained. 

Instead, identifying and reaching the right doctors in a given field of medicine is key. Katz said that the updated HCP Universe platform has AI-powered features to highlight and help drive evidence-based medicine usage around the world.

The hardest part of making the data actionable for medical affairs, Katz explained, was relating the data together and putting it into the right taxonomies. For example, if a user searches for “obesity,” any number of medical specialists could be involved, including endocrinologists, dieticians, or even psychiatrists.

“If a patient searches for obesity, they don’t just want to find a doctor that specializes in diets, they want to find one that is relatable to that person’s needs,” Katz said. “So it’s relating the data together and then the machine learning libraries learn from user behavior to drive relevance.”
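One simple way a system can “learn from user behavior to drive relevance” is to re-rank a taxonomy’s candidate specialists by observed engagement. A toy sketch of that idea (H1 has not disclosed its method; the taxonomy, click counts, and function names below are invented for illustration):

```python
from collections import Counter

# Candidate specialties for a query, from a hypothetical taxonomy.
taxonomy = {"obesity": ["endocrinologist", "dietician", "psychiatrist"]}

# Simulated engagement: which specialty users actually clicked for "obesity".
clicks = Counter({"endocrinologist": 90, "dietician": 40, "psychiatrist": 12})

def rank_specialists(query):
    """Order the taxonomy's candidates by observed user engagement."""
    candidates = taxonomy.get(query, [])
    return sorted(candidates, key=lambda s: clicks[s], reverse=True)

print(rank_specialists("obesity"))  # most-engaged specialty first
```

A production system would of course learn from far richer signals than raw click counts, but the re-ranking loop is the same shape.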

Why a graph database wasn’t enough

The idea of connecting relationships together is a common concept in graph databases. In fact, the HCP Universe platform is built with a graph database. But on its own, Katz said that his company has discovered that this is not accurate enough when it comes to important healthcare treatment decisions. 

“If a recommendation engine is 80% accurate for a restaurant where you just want a bagel, you’re probably happy,” Katz said. “If it is 80% accurate and you’re trying to find a doctor and you should have been diagnosed with cancer, that’s not okay.”

With the machine learning libraries, H1 is able to learn from the data and correlate complex relationships that the graph database doesn’t identify on its own. H1 uses AWS machine learning tools, including SageMaker, to help power its healthcare AI efforts, Katz said.

When H1 launched, Katz noted that the biggest problem it had to solve was aggregating and collecting sources of information on medical professionals.

“We started by solving a data problem first and making sure all the information is accurate, trustworthy and reliable,” Katz said. “The next generation, which is what we’re launching, is how do you actually make it smart and turn that data into insights?”

Looking forward, H1 will be training its AI to provide clinical quality scores for medical professionals. For example, if a user wants to identify the best medical professional for treating bladder cancer, the system will help identify the best doctor based on multiple factors, including patient surveys and hospital readmission rates, among other pertinent factors.

“This information, on who is a better doctor and what’s a better hospital, will change the experience for many people who are engaged with the healthcare ecosystem,” Katz said. 




The latest tool in the hacker arsenal: Microsoft Calculator

Hackers have found an unusual and unconventional method to infect PCs with malware: distributing dangerous code with Windows Calculator.

The individuals behind the well-known QBot malware have managed to find a way to use the program to side-load malicious code on infected systems.


As reported by Bleeping Computer, DLL side-loading is an attack in which a malicious Dynamic Link Library (DLL) is given the name of a legitimate one and placed in a folder where the machine’s operating system will load the doctored copy instead of the real DLL file.
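Windows resolves a DLL name by searching a fixed list of directories in order, and by default an application’s own directory is checked before the system directories; side-loading exploits exactly that ordering. A simplified model of the lookup, with invented paths (the real resolver lives inside the Windows loader, not in Python):

```python
def resolve_dll(name, search_path, files):
    """Return the first directory on the search path that contains `name`.
    `files` maps directory -> set of filenames (a stand-in for the disk)."""
    for directory in search_path:
        if name in files.get(directory, set()):
            return directory
    return None

disk = {
    "C:/Windows/System32": {"WindowsCodecs.dll"},               # the real DLL
    "C:/Temp/iso_contents": {"WindowsCodecs.dll", "calc.exe"},  # planted copy
}

# The application's own directory is searched before the system directory,
# so the attacker's planted DLL is the one that gets loaded.
search_order = ["C:/Temp/iso_contents", "C:/Windows/System32"]
print(resolve_dll("WindowsCodecs.dll", search_order, disk))  # planted copy wins
```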

QBot, a strain of Windows malware, was initially known as a banking trojan. However, ransomware gangs now rely on it due to its evolution into a malware distribution platform.

QBot has been utilizing the Windows 7 Calculator program in particular to execute DLL side-loading attacks, according to security researcher ProxyLife. These attacks have been infecting PCs since at least July 11, and it’s also an effective method for carrying out malicious spam (malspam) campaigns.

The emails carry the malware as an HTML file attachment containing a ZIP archive. Inside the archive is an ISO file, which in turn holds a .LNK shortcut file, a copy of calc.exe (Windows Calculator), and two DLL files: WindowsCodecs.dll and a malicious payload (7533.dll).

Opening the ISO file eventually executes a shortcut that, as an inspection of the file’s properties dialog reveals, is linked to Windows’ Calculator app. Once that shortcut has been opened, the infection chain delivers the QBot malware to the system through Command Prompt.


Because Windows Calculator is a trusted program, tricking the system into distributing a payload through the app means security software could fail to detect the malware itself, making it an extremely effective — and creative — way to avoid detection.

That said, hackers can no longer use the DLL sideloading technique on Windows 10 or Windows 11, so anyone with Windows 7 should be wary of any suspicious emails and ISO files.

Windows Calculator is not a program threat actors commonly use to infiltrate targets, but given the current state of hacking and its advancement, nothing seems beyond the realm of possibility. QBot itself first appeared more than a decade ago and has previously been used in ransomware operations.

Elsewhere, we’ve been seeing an aggressive rate of activity in the malware and hacking space throughout 2022, such as the largest HTTPS DDoS attack in history. Ransomware gangs themselves are also evolving, so it’s not a surprise they’re continuously finding loopholes to benefit from.

With the alarming rise in cybercrime in general, technology giant Microsoft has even launched a cybersecurity initiative, with the “security landscape [becoming] increasingly challenging and complex for our customers.”




Twitch unveils built-in fundraising tool for streamers

Twitch creators will soon be able to raise money for charity directly on the livestreaming platform. The company launched a closed beta of Twitch Charity today to a select group of partners and affiliates. Donations will be processed through the PayPal Giving Fund and tracked in both the stream’s activity feed and chat. Traditionally, viewers have made donations through subs and Bits, after which streamers have to rely on a third-party charity portal like Tiltify to actually send money to their chosen organization. By launching its own charity product, Twitch will cut out the middleman and likely make it easier both to host a fundraiser and to donate to one.

Fundraising on Twitch has become a significant moneymaker in recent years. Charity streams can easily rake in six- or seven-figure sums, particularly if a major streamer or other celebrity is involved. Well-known creators can raise millions of dollars with charity streams; last year’s Z Event featured multiple streamers and raised money for Action Against Hunger in around 72 hours.

An additional benefit of a native fundraising feature is that creators won’t be subject to the fees of a third-party service. Unlike Tiltify (which takes a 5 percent cut of whatever money gets raised), Twitch has decided to let creators donate 100 percent of what they raise and will forgo a tax incentive.
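The difference is easy to quantify: under a 5 percent platform fee, a charity receives 95 cents of every dollar raised, while a fee-free model passes through the full amount. A quick illustration (the stream total is a made-up example):

```python
def charity_payout(amount_raised, fee_rate):
    """Amount the charity actually receives after the platform's cut."""
    return amount_raised * (1 - fee_rate)

raised = 100_000  # a hypothetical $100,000 charity stream
print(charity_payout(raised, 0.05))  # with a 5% fee, the charity gets $95,000
print(charity_payout(raised, 0.00))  # fee-free, it gets the full $100,000
```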

Creators will be randomly selected for the closed beta of Twitch Charity later today. For everyone else, Twitch expects to unveil the feature to most partners and affiliates later this year.




AMD’s new tool compares its GPUs to Nvidia, with a catch

AMD has just launched its new GPU Comparison Tool, which is aimed at being a quick and easy way to determine which graphics card might be best for you based on your gaming needs.

The tool gives you insight into the performance of nearly all of the best GPUs from both AMD and Nvidia. However, a closer look at the tool raises questions as to how legit it is.


AMD’s new tool is simple to navigate and free to use, so you can check it out if you’d like to see the results for yourself. It’s easy — pick the game that interests you from a list of 11 titles, then choose the settings that best match your ideal gaming scenario. You can pick one of the three most popular screen resolutions (1920 x 1080, 2560 x 1440, and 3840 x 2160), as well as the game settings and API used in the test. Then, based on AMD’s internal testing in its own labs, the tool will tell you which GPU is the best for that given scenario.
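Judging by that description, the tool amounts to a lookup into a table of internally measured frame rates. A hypothetical sketch of the logic (the frame-rate figures below are invented for illustration and are not AMD’s numbers):

```python
# (game, resolution) -> {gpu: average fps}; all figures are made up.
benchmarks = {
    ("Assassin's Creed Valhalla", "3840 x 2160"): {
        "Radeon RX 6950 XT": 72, "GeForce RTX 3090 Ti": 66,
    },
    ("Valorant", "3840 x 2160"): {
        "Radeon RX 6950 XT": 480, "GeForce RTX 3090 Ti": 510,
    },
}

def best_gpu(game, resolution):
    """Return the highest-FPS card for the chosen scenario."""
    results = benchmarks[(game, resolution)]
    return max(results, key=results.get)

print(best_gpu("Valorant", "3840 x 2160"))  # the Nvidia card wins this scenario
```

Which vendor "wins" such a lookup depends entirely on which games and settings populate the table — which is exactly the catch discussed below.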

The idea itself is pretty great. A lot of users don’t want to dig deep into benchmarks and just want to know whether a particular graphics card will excel in a particular game (or several). Seeing as the feature is easy to find your way around, it seems like a winner for AMD — except, there’s a catch. Or maybe even a couple.

The first catch should come as no surprise — the tool seems to be skewed to favor AMD over Nvidia. It’s not that the benchmark results lie; it’s that the games AMD picked are just the right titles to make its own GPUs look good. Resident Evil Village, Deathloop, and Assassin’s Creed Valhalla are all AMD-promoted games, and as noted by Tom’s Hardware, Tiny Tina’s Wonderlands and Forza Horizon 5 tend to shine when played on an AMD GPU. There are also some other, more “neutral” games on the list, such as Valorant and Fortnite.

Taking Assassin’s Creed Valhalla as an example, AMD wins in every single test. Only one setting option is available (ultra high), but there are three resolutions, and AMD wins in all of them. In Tiny Tina’s Wonderlands, AMD only surrenders to the Nvidia GeForce RTX 3090 Ti in the 4K gaming test, standing victorious in the other two. There are certainly instances of Nvidia winning, such as Valorant at 3840 x 2160, but AMD comes out on top in the majority of the benchmarks.


It’s also interesting that AMD didn’t include its low-end GPUs in the tests. The (less-than-stellar) Radeon RX 6500 XT and the RX 6400 are both missing from the charts, even though in some of the esports comparisons, they could very well snag a mention alongside the Nvidia GeForce RTX 3050. Instead, AMD moves right along to the Radeon RX 6600.

Was this an intentional omission or just an oversight in a still very fresh tool? Hard to say. It’s worth noting that the tool itself is still quite limited for what AMD seems to be trying to do. Most games only have one setting option, so realistically, you’re just changing the games and the screen resolution. The tool could also add more games to give a more comprehensive overview of the GPUs’ performance.

Given time, AMD will hopefully update the tool to include more benchmarks, as well as more titles and game settings. Right now, it’s more of a promotional tool than anything else, but it has so much potential that AMD could tap into if it’s not afraid of drawing realistic comparisons between its own GPUs and those made by Nvidia. For the time being, make sure you trust independent benchmarks and reviews if you’re looking into buying a new graphics card now that the prices are falling quickly.

Repost: Original Source and Author Link


This free web tool is a fast and easy way to remove objects from images

Ever needed to quickly edit something or someone out of an image?

Maybe it’s the stranger who wandered into your family photo, or the stand holding up your final artwork for school? Whatever the job, if you don’t have the time or Photoshop skills needed to edit the thing yourself, why not try — a handy web tool that does exactly what it promises in the URL.

Just upload your picture, paint over the thing you want removed with the brush tool, and hey presto: new image. The results aren’t up to the standards of professionals, especially if the picture is particularly busy or complex, but they’re surprisingly good. The tool is also just quite fun to play around with. You can check out some examples in the gallery below. The tool seems to be from the same team that made a fun augmented reality demo that lets you “copy and paste” the real world, and it is open-source (you can find the underlying code here). Obviously, tools of this sort have long been available, dating back at least to the launch of Photoshop’s Clone Stamp tool, but the quality of these automated programs has increased considerably in recent years thanks to AI.

Machine learning systems are now not only better at the problem of segmentation (marking the divisions between an object and the background) but also at inpainting, or filling in the new content. Just last week, Google launched its new Pixel 6 phones with an identical “Magic Eraser” feature, and this free web tool shows how quickly such functionality has become a commodity.
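The core idea behind these erasers, painting a mask over an object and then synthesizing plausible content for the masked pixels, can be illustrated with a toy example. The sketch below is not the tool’s actual ML model; it is a deliberately simple diffusion-based inpainter in NumPy that fills the masked region by repeatedly averaging each masked pixel with its neighbours (all names and image sizes here are illustrative):

```python
import numpy as np

def inpaint(img, mask, iters=200):
    """Toy diffusion inpainting: repeatedly replace each masked pixel with
    the average of its four neighbours until the values settle."""
    out = img.astype(float).copy()
    hole = mask.astype(bool)
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")           # clamp at image borders
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0  # 4-neighbour mean
        out[hole] = avg[hole]                     # only touch masked pixels
    return out

# Synthetic example: flat grey background with a dark "object" to remove.
img = np.full((20, 20), 128.0)
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 8:12] = True      # the region the user "painted over"
img[mask] = 0.0              # the unwanted object
cleaned = inpaint(img, mask)
```

Diffusion like this only smears surrounding colors into the hole, which is why it works on flat backgrounds but falls apart on busy scenes; the learned inpainting models in modern tools instead hallucinate plausible texture and structure, which is where the recent quality jump comes from.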

I think my favorite use of this tool, though, is this fantastic series of pictures removing the people from Edward Hopper paintings:


Adobe has built a deepfake tool, but it doesn’t know what to do with it

Deepfakes have made a huge impact on the world of image, audio, and video editing, so why isn’t Adobe, corporate behemoth of the content world, getting more involved? Well, the short answer is that it is — but slowly and carefully. At the company’s annual Max conference today, it unveiled a prototype tool named Project Morpheus that demonstrates both the potential and problems of integrating deepfake techniques into its products.

Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans.

The results are definitely not flawless and are very limited in scope in relation to the wider world of deepfakes. You can only make small, pre-ordained tweaks to the appearance of people facing the camera, and can’t do things like face swaps, for example. But the quality will improve fast, and while the feature is just a prototype for now with no guarantee it will appear in Adobe software, it’s clearly something the company is investigating seriously.

What Project Morpheus also is, though, is a deepfake tool — which is potentially a problem. A big one. Because deepfakes and all that’s associated with them — from nonconsensual pornography to political propaganda — aren’t exactly good for business.

Now, given the looseness with which we define deepfakes these days, Adobe has arguably been making such tools for years. These include the aforementioned Neural Filters, as well as more functional tools like AI-assisted masking and segmentation. But Project Morpheus is obviously much more deepfakey than the company’s earlier efforts. It’s all about editing video footage of humans — in ways that many will likely find uncanny or manipulative.

Changing someone’s facial expression in a video, for example, might be used by a director to punch up a bad take, but it could also be used to create political propaganda — e.g. making a jailed dissident appear relaxed in court footage when they’re really being starved to death. It’s what policy wonks refer to as a “dual-use technology,” which is a snappy way of saying that the tech is “sometimes maybe good, sometimes maybe shit.”

This, no doubt, is why Adobe didn’t once use the word “deepfake” to describe the technology in any of the briefing materials it sent to The Verge. And when we asked why this was, the company didn’t answer directly but instead gave a long answer about how seriously it takes the threats posed by deepfakes and what it’s doing about them.

Adobe’s efforts in these areas seem involved and sincere (they’re mostly focused on content authentication schemes), but they don’t mitigate a commercial problem facing the company: that the same deepfake tools that would be most useful to its customer base are those that are also potentially most destructive.

Take, for example, the ability to paste someone’s face onto someone else’s body — arguably the ur-deepfake application that started all this bother. You might want such a face swap for legitimate reasons, like licensing Bruce Willis’ likeness for a series of mobile ads in Russia. But you might also be creating nonconsensual pornography to harass, intimidate, or blackmail someone (by far the most common malicious application of this technology).

Regardless of your intent, if you want to create this sort of deepfake, you have plenty of options, none of which come from Adobe. You can hire a boutique deepfake content studio, wrangle with some open-source software, or, if you don’t mind your face swaps being limited to preapproved memes and gifs, you can download an app. What you can’t do is fire up Adobe Premiere or After Effects. So will that change in the future?

It’s impossible to say for sure, but I think it’s definitely a possibility. After all, Adobe survived the advent of “Photoshopped” becoming shorthand for digitally edited images in general, and often with negative connotations. And for better or worse, deepfakes are slowly losing their own negative associations as they’re adopted in more mainstream projects. Project Morpheus is a deepfake tool with some serious guardrails (you can only make prescribed changes and there’s no face-swapping, for example), but it shows that Adobe is determined to explore this territory, presumably while gauging reactions from the industry and public.

It’s fitting that as “deepfake” has replaced “Photoshopped” as the go-to accusation of fakery in the public sphere, Adobe is perhaps feeling left out. Project Morpheus suggests it may well catch up soon.
