Categories
Security

Cloudflare just stopped one of the largest DDoS attacks ever

Cloudflare, a company that specializes in web security and distributed denial of service (DDoS) attack mitigation, just reported that it managed to stop an attack of unprecedented scale.

The HTTPS DDoS attack was one of the largest such attacks ever recorded, and it came from unusual sources — data centers.


The attack was detected and mitigated automatically by Cloudflare’s defense systems, which were set up for one of its customers using the paid Professional plan. At its peak, the attack reached a massive 15.3 million requests per second (rps). This makes it the largest HTTPS DDoS attack ever mitigated by Cloudflare.

Cloudflare has previously seen attacks on a larger scale targeting unencrypted HTTP, but as Cloudflare mentions in its announcement, targeting HTTPS is a much more expensive and difficult venture. Such attacks typically require extra computational resources due to the need to establish a transport layer security (TLS) encrypted connection. The increase in costs is twofold: It costs more for the attacker to establish the attack, and it costs more for the targeted server to mitigate it.

The attack lasted less than 15 seconds, and its target was a cryptocurrency launchpad. Crypto launchpads are platforms that startups within the crypto space can use to raise early-stage funding while leveraging the reach of the launchpad. Cloudflare mitigated the attack without any additional actions being taken by the customer.

The source of the attack was not unfamiliar to Cloudflare — it said that it has seen attacks hitting up to 10 million rps from sources that match the same attack fingerprint. However, the devices that carried out the attack were something new, seeing as they came mostly from data centers. Cloudflare notes that this marks a shift that it has already been noticing as of late, with larger attacks moving from residential network internet service providers (ISPs) to huge networks of cloud compute ISPs.

Cloudflare DDoS attack sources.
Cloudflare

Approximately 6,000 unique bots across over 1,300 networks carried out the DDoS attack that Cloudflare managed to mitigate automatically, without any human intervention. Perhaps more impressive is the number of locations involved, adding up to a total of 112 countries all around the globe. The largest share of it (15%) came from Indonesia, followed by Russia, Brazil, India, Colombia, and the U.S.
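
As a sanity check, the average per-bot request rate implied by those figures is straightforward arithmetic (a rough average only; real per-bot rates would vary):

```python
# Average request rate per bot implied by the reported figures.
peak_rps = 15_300_000   # peak of the attack, requests per second
bots = 6_000            # approximate number of unique bots

rps_per_bot = peak_rps / bots
print(f"~{rps_per_bot:.0f} requests per second per bot")  # 2550
```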

While this wasn’t the largest DDoS attack ever mitigated by Cloudflare, it’s definitely up there in terms of volume and severity. In 2021, the service managed to stop a 17.2 million rps HTTP DDoS attack. Earlier this year, the company reported that it had seen a massive rise in the number of DDoS attacks, which increased by a staggering 175% quarter-over-quarter based on data from the fourth quarter of 2021.

Repost: Original Source and Author Link

Categories
Computing

AMD Now Controls Its Largest CPU Market Share In 15 Years

It’s official: AMD has secured its highest CPU market share since 2006. It’s been an excellent last few years for AMD, and the latest reports prepared by analyst firm Mercury Research show just how far the rival to Intel has come.

The reports were first shared by HardwareTimes, and according to the market analysis, AMD continues its climb in terms of overall x86 market share, increasing by 2.1 points quarter-over-quarter. This adds up to a total of 24.6% market share compared to Intel’s 75.4%. That’s inching closer to beating its own market share record.


Aside from its success in the overall x86 market, AMD also continues making gains in notebook x86 unit share. The company has hit 22% in this sector, a new all-time high and an improvement of 1.8 points over the previous year.

The increase in laptop market share comes along with a new sales record. In the third quarter of 2021, AMD achieved a 16.2% revenue share in the notebook x86 sector. This is a jump in both quarterly and yearly terms — 1.3 share points quarter-over-quarter and 3.9 share points year-over-year.

The x86 processor market continues to be a two-horse race, so a gain for AMD means a loss for Intel. Seeing Team Red make gains on Team Blue is not surprising, as AMD has released some of the best processors in recent years. After a long lull for AMD and total domination by Intel, AMD is now almost back at its highest point ever, with considerable gains over the last few years.

AMD’s highest overall x86 result was all the way back in the fourth quarter of 2006 with a 25.3% share. This puts the company just 0.7 points shy of beating its all-time record, and with sales continuing to grow quarter-over-quarter, AMD just might hit that number soon.

Render of an AMD Ryzen chip.

Intel is likely to see some gains due to the recent release of its next generation of processors, Alder Lake. The new 12th-gen Intel processors have been performing excellently, beating both Intel and AMD predecessors by miles. On the other hand, AMD is rumored to follow up Intel’s success with the launch of Zen 4 CPUs in 2022, which will undoubtedly propel it further up the list in terms of CPU market share.

While the release of Zen 4 is unlikely to happen before the second half of 2022, AMD is not resting on its laurels until then. The company is rumored to release Zen 3 processors with 3D V-Cache technology, new Rembrandt APUs, and Milan-X server chips in 2022.


Categories
Security

Police arrest 150 suspects after closure of dark web’s largest illegal marketplace

A 10-month investigation following the closure of the dark web’s largest illegal marketplace, DarkMarket, has resulted in the arrest of 150 suspected drug vendors and buyers.

DarkMarket was taken offline earlier this year as part of an international operation. The site boasted some 500,000 users and facilitated around 320,000 transactions, reports the EU’s law enforcement agency, Europol, with clientele buying and selling everything from malware and stolen credit card information, to weapons and drugs. When German authorities arrested the site’s alleged operator in January this year, they also seized valuable evidence of transactions, which led to this week’s arrests of key players.

According to the US Department of Justice and Europol, Operation Dark HunTor saw law enforcement make numerous arrests in the United States (65), Germany (47), the United Kingdom (24), Italy (4), the Netherlands (4), France (3), Switzerland (2), and Bulgaria (1). More than $31.6 million in cash and cryptocurrency was seized during the arrests, as well as 45 firearms and roughly 234 kilograms of drugs including cocaine, opioids, amphetamine, MDMA, and fentanyl. According to the DoJ: “A number of investigations are still ongoing.”

As part of the operation, Italian authorities also shut down two other dark web marketplaces — DeepSea and Berlusconi — arresting four alleged administrators and seizing €3.6 million ($4.17 million) in cryptocurrency.

The operation was conducted across the US, Europe, and Australia.
Image: Europol

Although the dark web was once considered to be a relatively safe haven for those selling and buying drugs, international operations like Dark HunTor have seen regular arrests of suspects and speedy closure of marketplaces. The list of dark web markets closed just in recent years is extensive, including Dream, WallStreet, White House, DeepSea, and DarkMarket. Although law enforcement certainly has to play Whac-A-Mole with such sites, with new markets springing up as soon as established ones are closed, doing so makes it harder for buyers and sellers to build steady businesses.

“The point of operations such as the one today is to put criminals operating on the dark web on notice: the law enforcement community has the means and global partnerships to unmask them and hold them accountable for their illegal activities, even in areas of the dark web,” said Europol’s Deputy Executive Director of Operations, Jean-Philippe Lecouffe, in a press statement.


Categories
AI

Microsoft and Nvidia team up to train one of the world’s largest language models

Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLP). The successor to the companies’ Turing NLG 17B and Megatron-LM models, MT-NLP contains 530 billion parameters and achieves “unmatched” accuracy in a broad set of natural language tasks, Microsoft and Nvidia say — including reading comprehension, commonsense reasoning, and natural language inferences.

“The quality and results that we have obtained today are a big step forward in the journey towards unlocking the full promise of AI in natural language. The innovations of DeepSpeed and Megatron-LM will benefit existing and future AI model development and make large AI models cheaper and faster to train,” Nvidia’s senior director of product management and marketing for accelerated computing, Paresh Kharya, and group program manager for the Microsoft Turing team, Ali Alvi, wrote in a blog post. “We look forward to how MT-NLG will shape tomorrow’s products and motivate the community to push the boundaries of natural language processing (NLP) even further. The journey is long and far from complete, but we are excited by what is possible and what lies ahead.”

Training massive language models

In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. Language models with large numbers of parameters, more data, and more training time have been shown to acquire a richer, more nuanced understanding of language, for example gaining the ability to summarize books and even complete programming code.


To train MT-NLG, Microsoft and Nvidia say that they created a training dataset with 270 billion tokens from English-language websites. Tokens, a way of separating pieces of text into smaller units in natural language, can either be words, characters, or parts of words. Like all AI models, MT-NLP had to “train” by ingesting a set of examples to learn patterns among data points, like grammatical and syntactical rules.
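
To make the idea concrete, here is a minimal sketch of the kinds of tokens described above; the subword split is a hypothetical example for illustration, not output from a real tokenizer:

```python
# Three simple ways the same text can be broken into tokens.
text = "unbelievable results"

word_tokens = text.split()        # tokens are whole words
char_tokens = list(text)          # tokens are single characters
# Subword tokenizers (e.g., byte-pair encoding) split rare words into
# common fragments; this particular split is illustrative only.
subword_tokens = ["un", "believ", "able", " results"]

print(word_tokens)
print(len(char_tokens), "character tokens")
print(subword_tokens)
```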

The dataset largely came from The Pile, an 835GB collection of 22 smaller datasets created by the open source AI research effort EleutherAI. The Pile spans academic sources (e.g., arXiv, PubMed), communities (StackExchange, Wikipedia), code repositories (GitHub), and more, which Microsoft and Nvidia say they curated and combined with filtered snapshots of the Common Crawl, a large collection of webpages including news stories and social media posts.


Above: The data used to train MT-NLP.

Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs.

When benchmarked, Microsoft says that MT-NLP can infer basic mathematical operations even when the symbols are “badly obfuscated.” While not extremely accurate, the model seems to go beyond memorization for arithmetic and manages to complete tasks containing questions that prompt it for an answer, a major challenge in NLP.

It’s well-established that models like MT-NLP can amplify the biases in data on which they were trained, and indeed, Microsoft and Nvidia acknowledge that the model “picks up stereotypes and biases from the [training] data.” That’s likely because a portion of the dataset was sourced from communities with pervasive gender, race, physical, and religious prejudices, which curation can’t completely address.

In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and similar models can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular open source models, including Google’s BERT, XLNet, and Facebook’s RoBERTa.

Microsoft and Nvidia claim that they’re “committed to working on addressing [the] problem” and encourage “continued research to help in quantifying the bias of the model.” They also say that any use of Megatron-Turing in production “must ensure that proper measures are put in place to mitigate and minimize potential harm to users,” and follow tenets such as those outlined in Microsoft’s Responsible AI Principles.

“We live in a time [when] AI advancements are far outpacing Moore’s law. We continue to see more computation power being made available with newer generations of GPUs, interconnected at lightning speeds. At the same time, we continue to see hyper-scaling of AI models leading to better performance, with seemingly no end in sight,” Kharya and Alvi continued. “Marrying these two trends together are software innovations that push the boundaries of optimization and efficiency.”

The cost of large models

Projects like MT-NLP, AI21 Labs’ Jurassic-1, Huawei’s PanGu-Alpha, Naver’s HyperCLOVA, and the Beijing Academy of Artificial Intelligence’s Wu Dao 2.0 are impressive from an academic standpoint, but building them doesn’t come cheap. For example, the training dataset for OpenAI’s GPT-3 — one of the world’s largest language models — was 45 terabytes in size, enough to fill 90 500GB hard drives.

AI training costs dropped 100-fold between 2017 and 2019, according to one source, but the totals still exceed the compute budgets of most startups. The inequity favors corporations with extraordinary access to resources at the expense of small-time entrepreneurs, cementing incumbent advantages.

For example, OpenAI’s GPT-3 required an estimated 3.14 × 10^23 floating-point operations (FLOPs) of compute during training. In computer science, FLOPS (floating-point operations per second) is a measure of raw processing performance, typically used to compare different types of hardware. Assuming OpenAI reserved 28 teraflops — 28 trillion floating-point operations per second — of compute across a bank of Nvidia V100 GPUs, a common GPU available through cloud services, it’d take $4.6 million for a single training run. One Nvidia RTX 8000 GPU with 15 teraflops of compute would be substantially cheaper — but it’d take 665 years to finish the training.
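
The arithmetic behind those estimates can be reproduced in a few lines. The ~$1.50-per-V100-hour cloud rate is an assumption, and small rounding differences from the quoted figures are expected:

```python
# Back-of-the-envelope reproduction of the training-cost estimates above.
total_flops = 3.14e23                 # estimated FLOPs to train GPT-3

# Bank of V100s sustaining 28 teraflops, at an assumed ~$1.50 per V100-hour:
v100_flops = 28e12
gpu_hours = total_flops / v100_flops / 3600
cost = gpu_hours * 1.50
print(f"~${cost / 1e6:.1f} million for one training run")

# A single RTX 8000 sustaining 15 teraflops:
rtx_seconds = total_flops / 15e12
years = rtx_seconds / (3600 * 24 * 365)
print(f"~{years:.0f} years on one RTX 8000")
```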

Microsoft and Nvidia say that they observed between 113 and 126 teraflops per GPU while training MT-NLP. The cost is likely to have been in the millions of dollars.

A Synced report estimated that a fake news detection model developed by researchers at the University of Washington cost $25,000 to train, and Google spent around $6,912 to train a language model called BERT that it used to improve the quality of Google Search results. Storage costs also quickly mount when dealing with datasets at the terabyte — or petabyte — scale. To take an extreme example, one of the datasets accumulated by Tesla’s self-driving team — 1.5 petabytes of video footage — would cost over $67,500 to store in Azure for three months, according to CrowdStorage.

The effects of AI and machine learning model training on the environment have also been brought into relief. In June 2019, researchers at the University of Massachusetts at Amherst released a report estimating that training and searching a certain model entails emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly five times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like Codex require significant amounts of compute — on the order of hundreds of petaflops per day — which contributes to carbon emissions.

In a sliver of good news, the cost for FLOPS and basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months. Other recent research suggests that large language models aren’t always more complex than smaller models, depending on the techniques used to train them.
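
That halving trend compounds quickly. For example, over an assumed seven-year window starting from the survey's 2012 baseline:

```python
# Compute reduction implied by a halving every 16 months over seven years.
months = 7 * 12
reduction = 2 ** (months / 16)
print(f"~{reduction:.0f}x less compute for the same ImageNet performance")
```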

Maria Antoniak, a natural language processing researcher and data scientist at Cornell University, says when it comes to natural language, it’s an open question whether larger models are the right approach. While some of the best benchmark performance scores today come from large datasets and models, the payoff from dumping enormous amounts of data into models is uncertain.

“The current structure of the field is task-focused, where the community gathers together to try to solve specific problems on specific datasets,” Antoniak told VentureBeat in a previous interview. “These tasks are usually very structured and can have their own weaknesses, so while they help our field move forward in some ways, they can also constrain us. Large models perform well on these tasks, but whether these tasks can ultimately lead us to any true language understanding is up for debate.”


Categories
Security

Microsoft says it mitigated one of the largest DDoS attacks ever recorded

Microsoft says it was able to mitigate a 2.4Tbps Distributed Denial-of-Service (DDoS) attack in August. The attack targeted an Azure customer in Europe and was 140 percent higher than the highest attack bandwidth volume Microsoft recorded in 2020. It also exceeds the peak traffic volume of 2.3Tbps directed at Amazon Web Services last year, though it was a smaller attack than the 2.54Tbps one Google mitigated in 2017.

Microsoft says the attack lasted more than 10 minutes, with short-lived bursts of traffic that peaked at 2.4Tbps, 0.55Tbps, and finally 1.7Tbps. DDoS attacks are typically used to force websites or services offline by flooding them with more traffic than a web host can handle. They’re usually performed through a botnet, a network of machines that have been compromised with malware so they can be controlled remotely. Azure was able to stay online throughout the attack, thanks to its ability to absorb tens of terabits per second of DDoS traffic.

The attack on Azure lasted more than 10 minutes.
Image: Microsoft

“The attack traffic originated from approximately 70,000 sources and from multiple countries in the Asia-Pacific region, such as Malaysia, Vietnam, Taiwan, Japan, and China, as well as from the United States,” explains Amir Dahan, a senior program manager for Microsoft’s Azure networking team.
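
Those figures imply a fairly modest average throughput per source, consistent with a large number of compromised machines on ordinary connections (a rough average only):

```python
# Average bandwidth contributed per source at the attack's 2.4Tbps peak.
peak_bps = 2.4e12   # 2.4 terabits per second
sources = 70_000    # approximate number of attack sources

mbps_per_source = peak_bps / sources / 1e6
print(f"~{mbps_per_source:.0f} Mbps per source")  # ≈ 34 Mbps
```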

While the number of DDoS attacks on Azure has increased in 2021, the maximum attack throughput had declined to 625Gbps before this 2.4Tbps attack in the last week of August. Microsoft doesn’t name the Azure customer in Europe that was targeted, but such attacks can also be used as cover for secondary attacks that attempt to spread malware and infiltrate company systems.

The attack is one of the biggest in recent memory. Last year, Google detailed a 2.54Tbps DDoS attack it mitigated in 2017, and Amazon Web Services (AWS) mitigated a 2.3Tbps attack. In 2018, NetScout Arbor fended off a 1.7Tbps attack.

Correction October 12th, 3:17PM ET: We originally reported that Microsoft had mitigated the largest DDoS attack ever recorded, but Google mitigated a larger one in 2017. We have changed the headline and the article to reflect this. We regret the error.


Categories
Game

The International 10 Dota 2 championships 2021: $40M prize pool in Romania’s largest stadium

Today Valve revealed the latest on one of the most important gaming events of the year: The International 10 – Dota 2 Championships 2021. The event will take place in Bucharest, Romania, in the country’s largest stadium, Arena Nationala. The battle for the Aegis of Champions will run from October 7 through 10, followed by Main Stage play beginning October 12 and a final battle on October 17.

This event would appear to be a lock. It’ll take place in-person for the gamers behind the controls in each match, and it’ll be streamed internationally as the matches take place. The event was previously set for Sweden, but given the number of people that would be involved – not to mention the importance of each and every member of each team gaining entry into the country – it was postponed. Now, the event will take place in Romania.

The tournament centers on the game Dota 2, a multiplayer video game made by Valve. The game is a sequel to a community-created mod of Warcraft III: Reign of Chaos, originally created by Blizzard Entertainment. The tournament, called The International, now offers winners a prize package that is eight figures large: tens of millions of dollars for the team that proves victorious.

Group Stage for The International 10 will take place October 7 – 10, followed by an October 12 Main Stage series of showdowns. The biggest event of the series will take place October 17, 2021, when the Aegis will be on the line. The prize pool for this year’s Dota 2 Championship is $40,018,195 USD.

If it were not already quite clear, esports is an established sort of event, organization, and lifestyle. With an event like this, the prize money alone should be enough to convince even the last stragglers, the last non-believers in this sort of forward-looking event. Now, if only it were a little bit easier to watch at home. Cross your fingers this latest round of difficulties in making the event unfold will push the creators to make the whole situation more stream-friendly!


Categories
Security

World’s Largest Cruise Line Operator Hit by Cyberattack

The largest cruise line operator in the world has been hit by a ransomware attack, with customer data also believed to have been accessed.

Carnival Corporation, which operates more than 100 vessels across 10 different brands that include Carnival Cruise Line, Princess Cruises, and Costa Cruises, notified the U.S. Securities and Exchange Commission (SEC) this week after detecting the attack on August 15.

In its report to the SEC, Florida-based Carnival said that its investigation so far shows that the perpetrators accessed and encrypted some of its computer data, and also downloaded a number of data files. It added that it’s likely the security incident also saw “unauthorized access to personal data of guests and employees.”

The company said it believes the attack targeted only one of its brands, but added that at this stage it could offer no assurance that the computer systems of its other brands were not affected.

Digital Trends has reached out to the company to ask which brand suffered the attack, how many customers may have been impacted, what personal data may have been taken, and for details of the ransomware demand. We will update this piece when we hear back.

Carnival told the SEC that when it spotted the attack, it immediately notified law enforcement, and called upon the services of cybersecurity firms to bolster the security of its computer systems and help it in its investigation.

A ransomware attack uses malicious software to lock a computer system by encrypting files. Once locked, hackers demand payment from the owner of the system in return for a decryption key to regain access to the data.
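
The principle can be illustrated with a toy symmetric cipher: without the key, the encrypted bytes are unreadable, and only the same key restores them. Real ransomware uses strong, often hybrid public-key cryptography, not the trivial XOR below:

```python
# Toy illustration of symmetric encryption: the same key encrypts and decrypts.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of data with the key, repeating the key as needed.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"quarterly_report.xlsx contents"  # hypothetical file contents
key = b"secret-key"

ciphertext = xor_bytes(plaintext, key)         # locked: bytes are scrambled
restored = xor_bytes(ciphertext, key)          # unlocked: key reverses the XOR

assert ciphertext != plaintext
assert restored == plaintext
```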

Such incidents can cause huge disruption for victims — whether individuals or companies — with some feeling they have little choice but to pay the hackers. Retail currency dealer Travelex, for example, reportedly paid $2.3 million to regain access to its systems following a ransomware attack at the start of this year, while GPS and fitness-tracker firm Garmin, which suffered a damaging attack last month, may have paid a substantial sum to get its systems up and running again.

To avoid falling victim to a ransomware attack, you should make sure your computer’s security software is fully up to date. You’re also advised to avoid clicking on unverified links in emails that could deliver the malware to your system or your company’s servers. Downloading files from sites you know little about is best avoided, too, and steering clear of unfamiliar USB sticks is also recommended.

If a company does fall victim to a ransomware attack, those with robust back-up procedures are usually best placed to deal with it as they can reset their systems using safely stored data.


Categories
Security

JBS, the world’s largest meat supplier, hit with cyberattack

JBS, a Brazilian company which supplies a fifth of the world’s meat, was the victim of a coordinated cyberattack, Bloomberg reports.

Details are still emerging about the extent and severity of the attack — which became apparent to JBS on May 30th and was disclosed to staff in a memo on the 31st — but it has already forced some of the largest slaughterhouses in the US, and at least one in Canada, to shut down. According to Bloomberg, JBS has suspended its own IT systems in Australia and North America, though the company’s backup servers appear to be unaffected. Naturally, the shutdowns of computer systems and physical plants are likely to cause supply delays.

In a press conference earlier today, White House Deputy Press Secretary Karine Jean-Pierre described the attack as coming “from a criminal organization likely based in Russia.” JBS has yet to disclose whether the attack involved ransomware, although the broad shutdowns are consistent with the effects of a ransomware attack.

This marks yet another high-profile piece of infrastructure targeted by Russian hacking groups, following the attack on Colonial Pipeline last month. JBS is not based in the US, but because of its outsized role in meat supply, the attack has the potential to disrupt global availability of beef and pork if not resolved quickly. As such, the White House has offered support to the company and “is engaging directly with the Russian government on this matter, and delivering the message that responsible states do not harbor ransomware criminals,” Jean-Pierre told reporters.


Categories
Security

One of the US’s largest insurance companies reportedly paid $40 million to ransomware hackers

CNA Financial, one of the largest US insurance companies, paid $40 million to free itself from a ransomware attack that occurred in March, according to a report from Bloomberg. The hackers reportedly demanded $60 million when negotiations started about a week after some of CNA’s systems were encrypted, and the insurance company paid the lower sum a week later.

If the $40 million figure is accurate, CNA’s payout would rank as one of the highest ransomware payouts that we know about, though that’s not for lack of trying by hackers: both Apple and Acer had data that was compromised in separate $50 million ransomware demands earlier this year. It also seems like the hackers are looking for bigger payouts: just this week we saw reports that Colonial Pipeline paid a $4.4 million ransom to hackers. While that number isn’t as staggering as the demands made to CNA, it’s still much higher than the estimated average enterprise ransomware demand in 2020.

Law enforcement agencies recommend against paying ransoms, saying that payouts will encourage hackers to keep asking for higher and higher sums. For its part, CNA told Bloomberg that it wouldn’t comment on the ransom, but that it had “followed all laws, regulations, and published guidance, including OFAC’s 2020 ransomware guidance, in its handling of this matter.” In an update from May 12, CNA says that it believes its policyholders’ data were unaffected.

According to Bloomberg, the ransomware that locked CNA’s systems was Phoenix Locker, a derivative of another piece of malware called Hades. Hades was allegedly created by a Russian group with the Mr. Robot-esque name Evil Corp.

Correction: Bloomberg wrote that the ransomware used against CNA was a derivative of one created by Evil Corp; we initially suggested it was Evil Corp’s original ransomware instead. We regret the error.


Categories
Computing

Dell debuts the XPS 17 9700, its largest XPS ever

With the debut of the Dell XPS 17, the company will bring its popular XPS brand to a 17-inch laptop this summer, with a bezel-less design, H-class 10th-gen CPU, and up to RTX 2060 graphics. There’s an updated XPS 15, too. 

While the laptops were officially announced Wednesday, we actually got a sneak peek at prototypes of the XPS 17 in late 2019. We’ll walk you through everything we know about it.

Dell XPS 17 9700 Price and Availability

Dell said the XPS 17 9700 will go on sale this summer with a starting price of $1,499. Starting obviously means a version that doesn’t include the top-end components.


Dell’s XPS 17 9700 is smaller in physical dimensions than some 15-inch laptops, but its display is a 17-inch, 16:10 aspect ratio, 4K UHD+ screen.

Dell XPS 17 9700 Screen Options

The XPS 17 9700’s most distinctive feature is its 17-inch screen, and not just because of the size. While most 17.3-inch displays have a wide, 16:9 aspect ratio best suited for video, Dell’s is a custom panel with a 16:10 aspect ratio, far taller and more pleasant for people who need to get work done.

Two display options are available, both of which are 60Hz. The first is a 1920×1200 resolution FHD+ screen that can hit 100 percent of the sRGB color space. Although not exactly high-resolution, it’s plenty bright at 500 nits. The screen comes with an anti-glare finish.


Dell’s two display options share certain features, including 178-degree viewing angles and support for Dolby Vision and Eyesafe.

Those who value pixel density will likely prefer the touch-enabled 4K UHD+ screen with a resolution of 3840×2400. It’s rated to hit 100 percent of the Adobe RGB color gamut and greater than 94 percent of the DCI-P3 color gamut. It also complies with the HDR 400 spec for luminance and color depth, and can hit 500 nits of brightness. The panel features anti-smudge and anti-reflective features.
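
The pixel-density gap between the two panels is easy to quantify, assuming the diagonal is exactly 17.0 inches (Dell markets it as a 17-inch panel):

```python
# Pixel density (PPI) of the two panel options, assuming a 17.0-inch diagonal.
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    # Pixels along the diagonal divided by the diagonal length in inches.
    return math.hypot(width_px, height_px) / diagonal_in

print(f"FHD+ (1920x1200): {ppi(1920, 1200, 17.0):.0f} PPI")  # ≈ 133
print(f"UHD+ (3840x2400): {ppi(3840, 2400, 17.0):.0f} PPI")  # ≈ 266
```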

Anti-reflective isn’t the same as anti-glare, which is typically a matte finish. Anti-reflective screens are designed to be glossy or shiny to maintain image crispness, but they typically have internal coatings to minimize reflections.

Both screens offer 178-degree viewing angles and support both the Dolby Vision HDR 4K video format and the Eyesafe standard for reducing blue emissions from the screen—without that horrible brown tint that makes it look like you’re wearing As Seen On TV Blue Blocker sunglasses. Eyesafe screens essentially look “normal,” but without the blue emissions that disturb your sleep and hurt your eyes.
