
Humans must have override power over military AI



For years, U.S. defense officials and Washington think-tankers alike have debated whether the future of our military could — or should — look a little less human.

Already, the U.S. military has started to rely on technology that employs machine learning, artificial intelligence (AI), and big data — raising ethical questions along the way. While these technologies have countless beneficial applications, ranging from threat assessment to preparing troops for battle, they rightfully evoke concerns about a future in which Terminator-like machines take over.

But pitting man against machine misses the point. There’s room for both in our military future — as long as machines aid human decision-making, rather than replace it.

Military AI and machine learning are here

Machine learning technology, a type of AI that allows computer systems to process enormous data sets and “learn” to recognize patterns, has rapidly gained steam across many industries. These systems can comb through massive amounts of data from multiple sources and then make recommendations based on the patterns they detect — all in a matter of seconds.


That makes AI and machine learning useful tools for humans who need to make well-thought-out decisions at a moment’s notice. Consider doctors, who may only have a few minutes with each patient but must make potentially life-altering diagnoses. Some hospitals are using AI and machine learning to identify heart disease and lung cancer before a patient ever shows symptoms.

The same goes for military leaders. When making a strategic choice that could have a human cost, officials must be able to process all the available data as quickly as possible to make the most informed decision.

AI is helping them do so. Some systems, for example, can take two-dimensional surveillance images and create a detailed, three-dimensional model of a space. That helps officials chart a safe path forward for their troops through a previously unexplored area.

A human thumbs-up

Whether in an emergency room or on the battlefield, these machine learning applications have one thing in common. They do not have the power to make an ultimate decision. In the end, it’s the doctor’s decision to make a diagnosis — and the officer’s decision to give an order.

Companies developing new military AI or machine learning technologies have a responsibility to help keep it that way. They can do so by outfitting their innovations with a few critical guardrails.

For one, humans must have the power to override AI at any point. Computer algorithms may be able to analyze piles of data and provide helpful recommendations for action. But machines can’t possibly grasp the complexity or novelty of the ever-changing factors influencing a strategic operation.

Only humans have the ability to think through the long-term consequences of military action. Therefore humans must be able to decline a machine’s recommendations.

Companies developing military AI should give users options to make decisions manually, without technological support; in a semi-automated fashion; or in a fully automated manner, with the ability to override. The goal should be to develop AI that complements — rather than eliminates the need for — uniquely human decision-making capabilities that enable troops to respond effectively to unforeseen circumstances.
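As a rough illustration of those three modes, here is a minimal sketch (the names and structure are hypothetical, not drawn from any real defense system) of a decision layer in which a human override always takes precedence:

```python
from enum import Enum
from typing import Optional

class DecisionMode(Enum):
    MANUAL = "manual"            # human decides with no algorithmic support
    SEMI_AUTOMATED = "semi"      # system recommends, human must approve
    FULLY_AUTOMATED = "auto"     # system acts, but a human can override at any point

def resolve_action(mode: DecisionMode, recommendation: str,
                   human_decision: Optional[str]) -> str:
    """Return the action to carry out; a human decision always wins when one is given."""
    if mode is DecisionMode.MANUAL:
        if human_decision is None:
            raise ValueError("Manual mode requires a human decision.")
        return human_decision
    if mode is DecisionMode.SEMI_AUTOMATED:
        # The recommendation is only a default; nothing happens until a human confirms.
        return human_decision if human_decision is not None else "awaiting human confirmation"
    # Fully automated: act on the recommendation unless a human overrides it.
    return human_decision if human_decision is not None else recommendation
```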

A peaceful transfer of power

Military machine learning systems also need a clear chain of command. Most United States military technologies are developed by private firms that contract with the U.S. government. When the military receives those machines, it’s often forced to rely on the private firm for ongoing support and upkeep of the system.

That shouldn’t be the case. Companies in the defense industry should build AI systems and computer algorithms so that military officials can one day assume full control over the technology. That will ensure a smooth transition between the contractor and the military — and that the AI has a clear sense of goals free of any conflicts of interest.

To keep military costs low, AI systems and machine learning programs should be adaptable, upgradeable and easy to install across multiple applications. This will enable officials to move the technology from vehicle to vehicle and utilize its analysis at a moment’s notice. It will also allow military personnel to develop more institutional knowledge about the system, enabling them to better understand and respond to the AI’s recommendations.

In the same vein, companies can continue working on algorithms that can explain why they made a certain recommendation — just as any human can. That technology could help to develop a better sense of trust in and more efficient oversight of AI systems.

Military decision-making doesn’t have to be a zero-sum game. With the right guardrails, AI systems and machine learning algorithms can help commanders make the most informed decisions possible — and stave off a machine-controlled future in the process.

Kristin Robertson is president of Space and C2 Systems at Raytheon Intelligence & Space.


Nvidia addresses rumors about RTX 40 GPUs’ power consumption

The new Nvidia GeForce RTX 40 lineup includes some of the most power-hungry graphics cards on the market. Because of that, you may be wondering if you’ll need a new power supply (PSU) in order to support the borderline monstrous capabilities of the RTX 4090.

To answer some of these concerns, Nvidia released new information about the power consumption of its new GPUs. The conclusion? Well, it’s really not all that bad after all.


Prior to the official announcement of the RTX 40-series, the cards had been the subject of much power-related speculation. The flagship RTX 4090 received the most coverage of all, with many rumors pointing toward insane requirements along the lines of 800-900W. Fortunately, we now know that those rumors weren’t true.

The RTX 4090 has a TGP of 450W, the same as the RTX 3090 Ti, and calls for a minimum 850W PSU. The RTX 4080 16GB takes things down a few notches with a 320W TGP and a 750W power supply. Lastly, the RTX 4070 in disguise, also known as the RTX 4080 12GB, draws 285W and calls for a 700W PSU.

Nvidia claims that this is not an increase from the previous generation, but it kind of is — after all, the RTX 3090 had a TGP of 350W. With that said, it’s not as bad as we had thought, but many are still left to wonder if they need to upgrade their existing PSUs or not.

Nvidia has now assured its customers that they can stick to the PSU they currently own as long as it meets the wattage requirements for that given card.
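In practice, that check is just a wattage lookup. Here is a quick sketch using the minimums quoted above (the function name is illustrative):

```python
# Nvidia's stated minimum PSU wattage per card, as quoted in this article.
MIN_PSU_WATTS = {
    "RTX 4090": 850,
    "RTX 4080 16GB": 750,
    "RTX 4080 12GB": 700,
}

def psu_is_sufficient(card: str, psu_watts: int) -> bool:
    """Return True if an existing PSU meets Nvidia's stated minimum for the card."""
    return psu_watts >= MIN_PSU_WATTS[card]

print(psu_is_sufficient("RTX 4090", 850))       # True: meets the 850W minimum
print(psu_is_sufficient("RTX 4080 16GB", 650))  # False: below the 750W minimum
```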

Similarly, Nvidia doesn’t expect there to be any problems when it comes to 8-pin to PCIe Gen 5 16-pin adapter compatibility. As said by Nvidia on its FAQ page: “The adapter has active circuits inside that translate the 8-pin plug status to the correct sideband signals according to the PCIe Gen 5 (ATX 3.0) spec.”

There’s also another fun little fact to be found in that FAQ: Nvidia confirms that the so-called smart power adapter will detect the number of 8-pin connectors that are plugged in. When four such connectors are used versus just three, it will enable the RTX 4090 to draw more power (up to 600 watts) for extra overclocking capabilities.


There have also been questions about the durability of the PCIe 5.0 connectors, which are rated at 30 cycles. That may not sound like much, but Nvidia clears this up by noting that connectors have carried similar ratings for at least the past twenty years.

Lastly, Nvidia addressed the possibility of an overcurrent or overpower risk when using the 16-pin power connector with non-ATX 3.0 power supplies. It had, indeed, spotted an issue during the early stages of development, but it has since been resolved. Again, seemingly nothing to worry about there.

All in all, the power consumption fears have largely been squelched. Nvidia did ramp up the power requirements, but not as significantly as expected, so as long as your PSU matches what the card asks for, you should be fine. Let’s not breathe that sigh of relief yet, though — the RTX 4090 Ti might still happen, and that will likely be one power-hungry beast.


Nvidia’s DLSS 3 may cut the RTX 4090’s insane power demands

Nvidia’s upcoming flagship, the RTX 4090, was tested in Cyberpunk 2077. It did a great job, but the results were far better with DLSS 3 enabled.

The card managed to surprise us in two ways. One, the maximum clock was higher than expected, and two, DLSS 3 actually managed to lower the card’s power draw by a considerable amount.

The card was tested in 1440p in a system with an Intel Core i9-12900K CPU, running the highest possible settings that Cyberpunk 2077 has to offer, meaning with ultra ray tracing enabled and on Psycho (max) settings. First, let’s look at how the GPU was doing without DLSS 3 enabled.

At the native resolution, the game was running at an average of 59 frames per second (fps) with a latency that hovered around 72 to 75 milliseconds (ms). The RTX 4090 was able to hit a whopping 2.8GHz clock speed, and that’s without overclocking — those are stock speeds, even though the maximum advertised clock speed for the RTX 4090 is just over 2.5GHz. This means an increase of roughly 13% without an overclock. During the demo, the GPU reached 100% utilization, but the temperatures stayed reasonable at around 55 degrees Celsius.

It’s a different story once DLSS 3 is toggled on, though. As Wccftech notes in its report, the GPU was using a pre-release version of DLSS 3, so these results might still change. For now, however, DLSS 3 is looking more and more impressive by the minute.

Enabling DLSS 3 also enables the DLSS Frame Generation setting, and for this test, the Quality preset was used. Once again, the GPU hit maximum utilization and a 2.8GHz boost clock, but the temperature was closer to 50C rather than 55C. The fps gains were nothing short of massive, hitting 119 fps and an average latency of 53ms. This means that the frame rates doubled while the latency was reduced by 30%.

We also have the power consumption figures for both DLSS 3 on and off, and this is where it gets even more impressive. Without DLSS 3, the GPU was consuming 461 watts of power on average, and the performance per watt (Frames/Joule) was rated at 0.135 points. Enabling DLSS 3 brought the wattage down to just 348 watts, meaning a reduction of 25%, while the performance per watt was boosted to 0.513 — nearly four times that of the test without DLSS 3.
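A quick back-of-the-envelope calculation with the figures reported above shows how the headline claims follow from the raw numbers (the values below are simply those quoted in this article):

```python
# Figures reported for Cyberpunk 2077 at 1440p, max settings (as quoted above).
native = {"fps": 59, "latency_ms": 72, "watts": 461, "frames_per_joule": 0.135}
dlss3  = {"fps": 119, "latency_ms": 53, "watts": 348, "frames_per_joule": 0.513}

fps_gain = dlss3["fps"] / native["fps"]                                   # ~2.0x frame rate
latency_drop = 1 - dlss3["latency_ms"] / native["latency_ms"]             # ~26-29%, depending on the 72-75ms baseline
power_drop = 1 - dlss3["watts"] / native["watts"]                         # ~25% lower power draw
efficiency_gain = dlss3["frames_per_joule"] / native["frames_per_joule"]  # ~3.8x performance per watt

print(f"{fps_gain:.2f}x fps, -{latency_drop:.0%} latency, "
      f"-{power_drop:.0%} power, {efficiency_gain:.1f}x perf/W")
```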


Wccftech also tested this on an RTX 3090 Ti and found similar, albeit smaller, gains. The GPU still saw a boost in performance (64%) and a drop in power draw (10%), but the efficiency numbers are not as impressive as the RTX 4090’s, which suggests DLSS 3 delivers its biggest upgrade on the new generation of cards.

The reason behind this unexpected difference in power consumption might lie in the way the GPU is utilized with DLSS 3 enabled. The load placed on the FP32 cores moves to the GPU tensor cores. This helps free up some of the load placed on the whole GPU and, as a result, cuts the power consumption.

It’s no news that the RTX 4090 is one power-hungry card, so it’s good to see that DLSS 3 might be able to bring those figures down a notch or two. Now, all we need is a game that can fully take advantage of this kind of performance. Nvidia’s GeForce RTX 4090 is set to release on October 12 and will arrive with a $1,599 price tag. With less than a month left until its launch, we should start seeing more comparisons and benchmarks soon.


How machine learning helps the New York Times power its paywall



Every organization applying artificial intelligence (AI) and machine learning (ML) to its business is looking to use these powerful technologies to tackle thorny problems. For the New York Times, one of the biggest challenges is striking a balance between meeting its latest target of 15 million digital subscribers by 2027 and getting more people to read articles online. 

These days, the multimedia giant is digging into that complex cause-and-effect relationship using a causal machine learning model, called the Dynamic Meter, which is all about making its paywall smarter. According to Chris Wiggins, chief data scientist at the New York Times, for the past three or four years the company has worked to understand its user journey scientifically in general, and the workings of the paywall in particular.

Back in 2011, when the Times began focusing on digital subscriptions, “metered” access was designed so that non-subscribers could read the same fixed number of articles every month before hitting a paywall requiring a subscription. That allowed the company to gain subscribers while also allowing readers to explore a range of offerings before committing to a subscription. 

Machine learning for better decision-making

Now, however, the Dynamic Meter can set personalized meter limits. Powered by data-driven user insights, the causal machine learning model can be prescriptive, determining the right number of free articles each user should get so that they become interested enough in the New York Times to subscribe and keep reading. 


According to a blog post written by Rohit Supekar, a data scientist on the New York Times’ algorithmic targeting team, at the top of the site’s subscription funnel are unregistered users. At a specific meter limit, they are shown a registration wall that blocks access and asks them to create an account. This allows them access to more free content, and a registration ID allows the company to better understand their activity. Once registered users reach another meter limit, they are served a paywall with a subscription offer. The Dynamic Meter model learns from all of this registered user data and determines the appropriate meter limit to optimize for specific key performance indicators (KPIs). 
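In pseudocode terms, the funnel Supekar describes boils down to two thresholds. The sketch below is a deliberately simplified, hypothetical illustration; the real Dynamic Meter learns the registered-user limit per reader rather than hard-coding it:

```python
from dataclasses import dataclass

@dataclass
class Reader:
    is_registered: bool
    meter_limit: int = 0   # personalized limit chosen by the model for registered readers

REGISTRATION_LIMIT = 3     # illustrative fixed limit for unregistered readers

def wall_for(reader: Reader, articles_read_this_month: int) -> str:
    """Decide what an incoming reader sees next, per the funnel described above."""
    if not reader.is_registered:
        if articles_read_this_month >= REGISTRATION_LIMIT:
            return "registration wall"            # blocks access, asks for a free account
        return "free article"
    if articles_read_this_month >= reader.meter_limit:
        return "paywall with subscription offer"  # served once the personalized limit is hit
    return "free article"

print(wall_for(Reader(is_registered=True, meter_limit=5), articles_read_this_month=5))
```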

The idea, said Wiggins, is to form a long-term relationship with readers. “It’s a much slower problem in which people engage over the span of weeks or months,” he said. “Then, at some point, you ask them to become a subscriber and see whether or not you did a good job.” 

Causal AI helps understand what would have happened

The most difficult challenge in building the causal machine learning model was in setting up the robust data pipeline to understand the user activity for over 130 million registered users on the New York Times’ site, said Supekar.

The key technical advancement powering the Dynamic Meter is around causal AI, a machine learning method where you want to build models which can predict what would have happened. 

“We’re really trying to understand the cause and effect,” he explained.

If a particular user was given a different number of free articles, what would be the likelihood that they would subscribe or the likelihood that they would read a certain number of articles? This is a complicated question, he explained, because in reality, they can only observe one of these outcomes. 

“If we give somebody 100 free articles, we have to guess what would have happened if they were given 50 articles,” he said. “These sorts of questions fall in the realm of causal AI.”

Supekar’s blog post explained how the causal machine learning model learns from a randomized control trial, in which groups of people are given different numbers of free articles and the model learns from the resulting data. As the meter limit for registered users increases, engagement, measured by the average number of page views, gets larger. But it also leads to a reduction in subscription conversions because fewer users encounter the paywall. The Dynamic Meter has to optimize for and balance a trade-off between conversion and engagement.
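That trade-off is easy to see in a toy randomized trial. The simulation below uses entirely made-up behavior and conversion rates purely to illustrate why raising the meter limit lifts page views while cutting conversions; it is not the Times’ model:

```python
import random
random.seed(0)

# Each registered user is randomly assigned a meter limit; we then record page views
# and whether they subscribed. All rates below are invented for illustration only.
def simulate_user(meter_limit: int) -> dict:
    pageviews = min(meter_limit, random.randint(1, 120))   # more free articles -> more reading
    hit_paywall = pageviews >= meter_limit
    subscribed = hit_paywall and random.random() < 0.03    # only readers who hit the paywall can convert
    return {"pageviews": pageviews, "subscribed": subscribed}

def arm_stats(meter_limit: int, n: int = 10_000):
    users = [simulate_user(meter_limit) for _ in range(n)]
    engagement = sum(u["pageviews"] for u in users) / n
    conversion = sum(u["subscribed"] for u in users) / n
    return engagement, conversion

for limit in (10, 50, 100):
    engagement, conversion = arm_stats(limit)
    print(f"limit={limit:3d}  avg pageviews={engagement:5.1f}  conversion={conversion:.2%}")
```

Higher limits boost average page views, but fewer readers ever see the paywall, so conversions fall; the model’s job is to pick the limit that best balances the two.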

“For a specific user who got 100 free articles, we can determine what would have happened if they got 50 because we can compare them with other registered users who were given 50 articles,” said Supekar. This is an example of why causal AI has become popular, because “There are a lot of business decisions, which have a lot of revenue impact in our case, where we would like to understand the relationship between what happened and what would have happened,” he explained. “That’s where causal AI has really picked up steam.” 

Machine learning requires understanding and ethics

Wiggins added that with so many organizations bringing AI into their businesses for automated decision-making, they really want to understand what is going to happen. 

“It’s different from machine learning in the service of insights, where you do a classification problem once and maybe you study that as a model, but you don’t actually put the ML into production to make decisions for you,” he said. Instead, for a business that wants AI to really make decisions, they want to have an understanding of what’s going on. “You don’t want it to be a blackbox model,” he pointed out.

Supekar added that his team is conscious of algorithmic ethics when it comes to the Dynamic Meter model. “Our exclusive first-party data is only about the engagement people have with the Times content, and we don’t include any demographic or psychographic features,” he said. 

The future of the New York Times paywall

As for the future of the New York Times’ paywall, Supekar said he is excited about exploring the science behind the negative effects of introducing paywalls in the media business. 

“We do know if you show paywalls we get a lot of subscribers, but we are also interested in knowing how a paywall affects some readers’ habits and the likelihood they would want to return in the future, even months or years down the line,” he said. “We want to maintain a healthy audience so they can potentially become subscribers, but also serve our product mission to increase readership.” 

The subscription business model has these kinds of inherent challenges, added Wiggins.

“You don’t have those challenges if your business model is about clicks,” he said. “We think about how our design choices now impact whether someone will continue to be a subscriber in three months, or three years. It’s a complex science.” 


Intel Raptor Lake unleashes monstrous power requirements

While we already know that Intel Raptor Lake is likely to introduce some hefty power requirements, it seems that Intel may have a plan to deliver even more performance — at a staggering cost.

According to a new leak, Intel will allegedly add a factory overclock mode to the flagship Core i9-13900K, bringing the performance up to a new level alongside a monstrous power limit of 350 watts.


First reported by ProHardver, this extreme power limit comes as a surprise, but perhaps not entirely. We’ve already seen 13th-gen Intel processors hitting quite high overclocks, and the Core i9-13900K was spotted reaching higher numbers than any of them, maxing out at 345 watts. The difference is that this was all done through manual overclocking, and today’s leak suggests something else entirely — a factory mode prepared by Intel that will help you push your new CPU to the very limit.

The Raptor Lake-S CPU is said to support the 350-watt power limit on top of the default power limits that max out at 241 watts. This won’t be available on all motherboards. Intel’s next-gen CPUs remain backward compatible with current Intel 600-series motherboards, but to make use of the new power limit, users will presumably need one of the high-end 700-series boards instead. On these motherboards, you will have the option to boost your Core i9-13900K up to 350 watts.

With that feature enabled, the overall performance of the CPU is said to increase by up to 15%, which is pretty massive. Considering that Intel Raptor Lake is already said to bring in significant performance gains over Alder Lake, we could have an intensely powerful processor on our hands in just a few months.

Intel itself has yet to confirm the official specs for the new flagship, but most rumors point toward it being decked out with 24 cores (eight P-cores and 16 E-cores) and 32 threads, as well as a clock speed of up to 5.8GHz. That frequency will likely be challenged by overclockers — we’ve already seen the Core i7-13700K breaking past the 6GHz barrier in an early benchmark, and these are still just engineering samples.

“Left: Under the power and current unlimited setting (default frequency, almost 340-350W). Right: Under the power limited setting (default frequency, almost 250W).” pic.twitter.com/rxISzs55eU

— Raichu (@OneRaichu) August 7, 2022

This type of power comes at a price — 350 watts is a lot. You’ll need one beefy power supply unit (PSU) and appropriate CPU cooling to be able to support this kind of power consumption. If you pair the Core i9-13900K with one of Nvidia’s next-gen RTX 4000-series graphics cards, the power requirements are really going to be quite staggering. A 1,200-watt PSU will probably be a necessity in such a setup.
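The 1,200-watt figure is simple arithmetic. Here is a rough budget, using the rumored CPU limit, the RTX 4090’s TGP, and an illustrative allowance for the rest of the system plus headroom:

```python
cpu_w = 350        # rumored Core i9-13900K factory-overclock power limit
gpu_w = 450        # RTX 4090 TGP
rest_w = 150       # motherboard, RAM, storage, fans: an illustrative allowance
headroom = 1.25    # ~25% headroom so the PSU isn't running flat out

recommended_psu_w = (cpu_w + gpu_w + rest_w) * headroom
print(f"Recommended PSU: ~{recommended_psu_w:.0f}W")   # ~1,188W, hence the 1,200W estimate
```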

Intel is likely to reveal the lineup during its upcoming Intel Innovation event on September 27, and until then, the above will remain nothing but an exciting rumor. AMD is also readying its Zen 4 processors, which are now rumored to release on September 27 — the same day as the Raptor Lake announcement.


TSMC focuses on power and efficiency with the new 2nm node

The Taiwan Semiconductor Manufacturing Co. (TSMC) has just officially unveiled its 2nm node, dubbed the N2. Set to release sometime in 2025, the new process will introduce a new manufacturing technology.

According to TSMC’s teaser, the 2nm process will either provide an uplift in pure performance compared to its predecessor, or, when used at the same power levels, will be much more power-efficient.


TSMC talked about the new N2 technology at great length, explaining the inner workings of its architecture. The N2 is going to be TSMC’s first node to use gate-all-around field-effect transistors (GAAFETs) and will increase chip density over the N3E node by 1.1 times. Before the N2 is ever released, TSMC will launch 3nm chips, which have also been teased at the 2022 TSMC Technology Symposium.

The 3nm node is going to come in five different tiers, and with each new release, the transistor count will go up, increasing the chip’s performance and efficiency. Starting with the N3, TSMC will later release the N3E (Enhanced), N3P (Performance Enhanced), N3S (Density Enhanced), and lastly, the “Ultra-High Performance” N3X. The first 3nm chips are said to launch in the second half of this year.

While the 3nm process is nearer to us in terms of the launch date, it’s the 2nm that’s slightly more interesting, even though it’s still a couple of years away. TSMC’s goal with the 2nm node seems to be clear — increasing the performance-per-watt to enable both higher levels of output and efficiency. The architecture as a whole has a lot to recommend it. Let’s take the GAA nanosheet transistors as an example. They have channels surrounded by gates on all sides. This will reduce leakage, but the channels can also be widened, and that brings a performance boost. Alternatively, the channels can be shrunk to optimize the power cost.

Both the N3 and the N2 will offer considerable performance increases compared to the current N5, and each gives chip designers the choice between higher performance and lower power consumption. As an example (first shared by Tom’s Hardware), comparing the N3 to the N5 nets up to a 15% gain in raw performance, or up to a 30% power reduction when run at the same frequency. The N3E pushes those numbers even further, to up to 18% and 34%, respectively.


Now, the N2 is where things start to get exciting. We can expect to see an up to 15% performance boost when used at the same power draw as the N3E node, and if the frequency is brought down to the levels provided by the N3E, the N2 will deliver an up to 30% lower power consumption.
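Taking TSMC’s “up to” figures at face value and assuming they compound from node to node (a best-case, back-of-the-envelope assumption), the cumulative jump from N5 to N2 looks roughly like this:

```python
# TSMC's headline claims, treated as best-case multipliers (assumption: gains compound).
perf_n3e_vs_n5 = 1.18        # N3E: up to 18% faster than N5 at the same power
perf_n2_vs_n3e = 1.15        # N2: up to 15% faster than N3E at the same power
power_n3e_vs_n5 = 1 - 0.34   # N3E: up to 34% less power than N5 at the same frequency
power_n2_vs_n3e = 1 - 0.30   # N2: up to 30% less power than N3E at the same frequency

print(f"N2 vs N5 performance at the same power:  ~{perf_n3e_vs_n5 * perf_n2_vs_n3e:.2f}x")   # ~1.36x
print(f"N2 vs N5 power at the same frequency:    ~{power_n3e_vs_n5 * power_n2_vs_n3e:.2f}x") # ~0.46x
```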

Where will the N2 be used? It will likely find its way into all kinds of chips, ranging from mobile systems-on-a-chip (SoCs) to advanced graphics cards and equally advanced processors. TSMC has mentioned that one of the features of the 2nm process is “chiplet integration.” This implies that many manufacturers may use the N2 in multi-chiplet packages to pack even more power into their chips.

Smaller process nodes are never a bad thing. The N2, once it’s here, will deliver high performance to all manner of hardware, including the best CPUs and GPUs, while optimizing the power consumption and thermals. However, until that happens, we’ll have to wait. TSMC won’t start mass production until 2025, so realistically, we are unlikely to see 2nm-based devices entering the market before 2026.


Intel Meteor Lake will pack more punch for the same power

Intel has just given us a much larger glimpse into its future Meteor Lake lineup. At the 2022 IEEE VLSI Symposium, the company talked about the 14th generation of its processors, detailing the future process node and the improvements the new Intel 4 process should bring.

The teaser certainly sounds promising. Intel claims that Meteor Lake CPUs will provide 20% higher clock speeds than the previous generations, all while maintaining the same power requirements.


Intel Meteor Lake is still quite far off — the company confirms that the new chips are on track to meet the 2023 launch deadline, although no specifics have been given at this time. Before we ever see Meteor Lake, we will see the launch of Intel Raptor Lake in the fall. However, unsurprisingly, both Intel and the tech world at large are looking to the future — and as far as the 14th generation of Intel chips goes, the future looks pretty exciting.

During the 2022 IEEE VLSI Symposium, Intel took the public on a deep dive into the upcoming Intel 4 process node, which is what Meteor Lake is based on. As a successor to the Intel 7 (used for Alder Lake and Raptor Lake), it will require a new socket, and it will feature a new architecture. Intel claims that the changes introduced in that generation will deliver huge performance gains while keeping the power consumption at a similar level to what we’ve grown used to with 12th-gen CPUs.

The company teased that Meteor Lake will deliver up to 21.5% higher frequencies at the same power requirements as the Intel 7 process. Similarly, when scaled down to the same frequency as Intel 7, Meteor Lake will sport up to a 40% power reduction. This is going to be achieved through various changes in the chip’s architecture, such as a 2x improvement in area scaling, which means roughly double the transistor density of Intel 7, at least for the high-performance libraries.
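Applied to a hypothetical Intel 7 chip, those two claims translate as follows (the baseline numbers here are invented purely for illustration):

```python
# Hypothetical Intel 7 baseline; only the 21.5% and 40% figures come from Intel's claims.
intel7_freq_ghz = 5.0
intel7_power_w = 125

freq_at_same_power = intel7_freq_ghz * 1.215       # up to 21.5% higher frequency at the same power
power_at_same_freq = intel7_power_w * (1 - 0.40)   # up to 40% lower power at the same frequency

print(f"Same power:     ~{freq_at_same_power:.2f}GHz instead of {intel7_freq_ghz}GHz")
print(f"Same frequency: ~{power_at_same_freq:.0f}W instead of {intel7_power_w}W")
```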

With the new process node, Intel will largely use extreme ultraviolet (EUV) lithography as a way to simplify manufacturing. Simply put, this reduces the number of steps needed to manufacture the node by a significant amount. It should result in higher yields and reduce production errors. As a result of EUV, Intel noted a 5% reduction in process steps and a 20% lower total mask count.

The Intel 4 name is a code name for Intel’s 7nm process node, which means a switch from 10nm to 7nm for Intel. The new chips will utilize Intel’s Foveros 3D packaging technology and will feature a four-tile setup joined by through-silicon via (TSV) connections. These four tiles will be split into the input/output (I/O) tile, the system-on-a-chip tile, the compute tile, and the graphics tile.


Intel has shared a blown-up image of the compute die for Meteor Lake, complete with six blue-colored performance cores (Redwood Cove) and two clusters of four Crestmont efficiency cores, colored in purple. In the middle of the chip, you can see the L3 cache and the interconnect circuitry. The company has yet to divulge the exact details of the I/O and SoC tiles.

In addition to teasing the Intel 4 process, the manufacturer also talked about what comes next — moving on to Intel 3. Intel 3 will come with enhanced transistors and interconnects, and it’s worth noting that Intel 4 will be forward compatible with Intel 3, so the move won’t require a full redesign. Intel will stay true to EUV technology, with more EUV layers that simplify the design even further. According to current estimates, the Intel 3 node will be around 18% faster than Intel 4. Once Intel is done with Intel 3, it will move on to the 20A and 18A nodes and even more exciting technologies.

All in all, Intel’s sneak peek is very detailed and quite technical, so if you’re a fan of that, make sure you read the full write-up prepared by Tom’s Hardware. Although Meteor Lake is a while off, there’s still plenty to be hyped for this year. We’ve got the Intel Raptor Lake coming up, and around the same time, AMD is slated to launch the Ryzen 7000 series of CPUs.


Ukraine says it stopped a Russian cyberattack on its power grid

An attack on Ukraine’s power grid was foiled by cybersecurity analysts and officials, as reported by Reuters. After investigating the methods and software used by the attackers, cybersecurity firm ESET says that it was likely carried out by a hacking group called Sandworm, which The Record reports allegedly has ties to the Russian government.

The group planned to shut down computers that controlled substations and infrastructure belonging to a particular power company, according to the Computer Emergency Response Team of Ukraine (or CERT-UA). The hackers meant to cut off power on April 8th while also wiping the computers that would be used to try and get the grid back online.

This attempted attack involved a wide variety of malware, according to ESET, including the recently discovered CaddyWiper. ESET also found a new piece of malware, which it calls Industroyer2. The original Industroyer was used in a successful 2016 cyberattack that cut off power in parts of Kyiv, according to the security firm, probably by the same group behind this month’s foiled attack. Industroyer isn’t widely used by hackers — ESET notes that it’s only seen it used twice (earlier this month and in 2016), which implies that it’s written for very specific uses.

CERT-UA says that the hackers were biding their time, initially breaching the company’s systems before March. ESET’s analysis shows that one of the main pieces of malware was compiled over two weeks before the attack was supposed to take place.

It’s unclear how the hackers initially got into the company’s network or how they gained access to the network that controls industrial equipment like the targeted substations. The analysis does show, however, that the hackers were planning on covering their tracks after the attack.

Ukraine and its infrastructure have been targeted by hackers since before the Russian invasion began. It’s likely that this won’t be the last attack on its power grid, but the country’s response to this incident shows that its cybersecurity defense strategy is capable of warding off complex attacks.


Not even your PC’s power supply is safe from hackers

Hackers have managed to find a way to successfully gain access to uninterruptible power supply (UPS) computer systems, according to a report from the Cybersecurity and Infrastructure Security Agency (CISA).

As reported by Bleeping Computer and Tom’s Hardware, both the Department of Energy and CISA issued a warning to organizations based in the U.S. that malicious threat actors have started to focus on infiltrating UPS devices, which are used by data centers, server rooms, and hospitals.

UPS devices allow companies to rely on emergency power when the central source of power is cut off for any reason. If attacks targeting these systems come to fruition, the consequences could prove to be catastrophic. In fact, they could cause PCs or their power supplies to burn up, potentially leading to fires breaking out at data centers and even homes.

Both federal agencies confirmed that hackers have found entry points to several internet-connected UPS devices predominantly via unchanged default usernames and passwords.

“Organizations can mitigate attacks against their UPS devices, which provide emergency power in a variety of applications when normal power sources are lost, by removing management interfaces from the internet,” the report stated.

Other mitigation responses the agencies recommended putting in place include safeguarding devices and systems by protecting them through a virtual private network, applying multi-factor authentication, and making use of effective passwords or passphrases that can’t be easily deciphered.

To this end, the report stresses that organizations change UPS usernames and passwords that are still set to the factory defaults. CISA also mentioned that login timeout and lockout features should be applied as well for further protection.
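Those recommendations lend themselves to a simple configuration audit. The sketch below is hypothetical (the field names are illustrative, and real UPS vendors expose these settings differently), but it captures the checks CISA describes:

```python
# A minimal, hypothetical audit of a UPS management interface configuration.
DEFAULT_CREDENTIALS = {("apc", "apc"), ("admin", "admin")}  # examples of common factory defaults

def audit_ups_config(cfg: dict) -> list:
    """Return a list of findings for settings that violate the recommended mitigations."""
    findings = []
    if cfg.get("internet_reachable"):
        findings.append("management interface reachable from the internet")
    if (cfg.get("username"), cfg.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("factory-default credentials still in use")
    if not cfg.get("mfa_enabled"):
        findings.append("multi-factor authentication disabled")
    if not cfg.get("login_lockout_enabled"):
        findings.append("login timeout/lockout not configured")
    return findings

example = {"internet_reachable": True, "username": "apc", "password": "apc",
           "mfa_enabled": False, "login_lockout_enabled": False}
print(audit_ups_config(example))
```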

Severe consequences

The report highlights how UPS vendors have increasingly incorporated a connection between these devices and the internet for power monitoring and routine maintenance purposes. This practice has made these systems vulnerable to potential attacks.

A prime example of hackers targeting UPS systems is the recently discovered set of zero-day bugs in APC UPS devices. Known as TLStorm, these three critical zero-day vulnerabilities opened the door for hackers to obtain admin access to devices belonging to APC, a subsidiary of Schneider Electric.

If successful, these attacks could severely impact governmental agencies, as well as health care and IT organizations, by burning out the devices and disabling the power source remotely.

The number of cyberattacks against crucial services has been trending upwards in recent years as cybercriminals progressively identify exploits. For example, cyberattacks against health care facilities almost doubled in 2020 compared to 2019.

It’s not just large organizations that are being targeted — online criminals stole nearly $7 billion from individuals in 2021 alone.


Halo Infinite’s Surprise Launch is a Power Play for Xbox

For once, a seemingly ridiculous video game rumor turned out to be true: Halo Infinite’s multiplayer released nearly one month early. Leaks indicated that the surprise could happen, but it still seemed too good to be true. But the fact is that players are enjoying Halo Infinite’s first season much sooner than anticipated.

In an age where video game release dates only get moved back, not forward, the news came as a straight-up shock. Shooter fans were just sitting down with Call of Duty: Vanguard and waiting for Battlefield 2042’s full release. Xbox Game Pass subscribers had just begun digging into the recently released Forza Horizon 5. If you had a strict plan for tackling all the games launching this holiday season, go ahead and toss it in the fire.

The decision to drop Halo Infinite early isn’t just a sweet “thank you” to fans for their support. It’s the sneakiest power play a video game company has pulled since Sony’s infamous “$299” mic drop at E3 1995.

Un-freakin’ believable

Before the surprise drop, Microsoft was in something of an awkward position. Halo Infinite was set to be its big holiday game, but its planned December 8 release date wasn’t ideal. A December date meant that the game wouldn’t be out in time for Black Friday and Cyber Monday, when many people buy holiday gifts or hunt for discounted games. Battlefield 2042 and Call of Duty: Vanguard would headline sales events, putting those shooters in the spotlight heading into the holidays. Even if Halo Infinite got positive buzz at launch, it would be late to the party.

Getting good word of mouth was going to be a challenge, too. December releases also tend to miss the Game of the Year season as many sites publish their lists by the end of November. While Digital Trends planned to hold our GOTY decision until we played Halo, others likely would have left it out of contention and saved it for their 2022 lists. Similarly, the game would be ineligible for The Game Awards this year and would be considered for the following year’s show instead, much like what happened to Super Smash Bros. Ultimate when it dropped in mid-December 2018. Any critical acclaim would come late, making it hard for Microsoft to capitalize heading into the holidays.

By dropping the multiplayer mode early, Microsoft has rewritten the rules. While the game isn’t fully out (single-player is still coming in December), the conversation around it is now in full swing. Players will start posting clips all over social media, it’ll dominate Twitch charts, and media will start kicking out impressions way earlier than planned (ourselves included). And all of that will happen before people start putting together their holiday wish lists.

It’s a bombshell move and one that might tick the competition off. Battlefield 2042 was supposed to be the most high-profile game launching this month (especially after tepid Call of Duty: Vanguard reviews), but Halo Infinite just crashed a Warthog full of banana peels on its clear runway. Now it’ll have to share the spotlight with the biggest shooter of the year — one that’s totally free to play and has the element of surprise behind it.

Halo Infinite is no longer at risk of getting lost in the mix; it’s the competition who should be worrying.

A sneaky beta

The sneakiest part of the whole early launch is the clever use of a “beta” label. Fans aren’t experiencing the final version of Halo Infinite right now. Microsoft is strategically calling the multiplayer mode a “beta.” That gives the company a fair bit of flexibility. Players are more likely to forgive any technical issues when they know they’re playing a non-final version of a game. EA won’t get the same goodwill when Battlefield 2042 launches in full later this week. In fact, the game is already getting “review bombed” by early access players who are bumping into stability issues in a game they paid $60 or more for.

What remains to be seen is whether or not the multiplayer mode actually leaves beta once the game’s release date rolls around. There’s a good chance that Microsoft will just leave the label on — an admission that the long-delayed game still wasn’t ready for launch. Had Microsoft fully released the multiplayer on December 8 as a beta, fans would have been outraged. The company would be under scrutiny for releasing an unfinished game (it will already lack campaign co-op and Forge mode at launch, which has drawn criticism from fans). Instead, fans are simply delighted they’re getting to play it weeks early.


Messaging is everything in video games and Microsoft seems acutely aware of that. By positioning the launch as a “gift,” players are going to approach the game much differently than they would have in December. Microsoft now looks like a good guy kindly giving fans a surprise, rather than a giant company rushing out a game to pump up its fourth quarter financial earnings at any cost. It’s a devilishly clever move that could change the way companies roll out their games moving forward.

I’m not sure if that’s good for players in the long term, but that’s unimportant at the moment. Microsoft has delivered a rare shock in an industry that’s usually predictable. Rule-breaking power plays like this are scarce, but they tend to be turning points for the industry. Don’t be surprised if the Xbox Series X suddenly usurps PS5 as this holiday’s hottest console as a result.

Halo Infinite’s multiplayer is now free to download and play on PC, Xbox One, and Xbox Series X/S. The full game, including its single-player mode, launches on December 8.
