A new report has highlighted how ransomware payments to hackers have begun to slow, with more victims opting not to cave in to demands.
Coveware, a company that provides ransomware decryption services, revealed some interesting analytics relating to the state of ransomware during the second quarter of 2022.
As reported by Bleeping Computer, the average ransomware payment has indeed increased. However, the median value of these payments has decreased in a big way.
During 2022’s second quarter, the average ransom payment totaled $228,125, representing an 8% increase compared to the first quarter of this year.
The median ransom payment value, however, came to $36,360 — that’s a staggering 51% drop when compared to the first quarter of 2022.
That fall continues a string of consistent drops since the first quarter of 2021. That quarter saw average ransomware payments reach a new high ($332,168), while the median value peaked at $117,116, a state of affairs undoubtedly aided by the pandemic and the surge in people working on their systems at home.
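That divergence between a rising mean and a falling median is a textbook outlier effect: a handful of enormous ransoms can drag the average up even while the typical payment shrinks. Here is a quick illustration with hypothetical figures, not Coveware’s underlying data:

```python
# Hypothetical payments: six mid-market ransoms plus one very large one.
payments = [20_000, 25_000, 30_000, 36_000, 40_000, 50_000, 1_400_000]

mean = sum(payments) / len(payments)
median = sorted(payments)[len(payments) // 2]  # odd-length list: middle value

print(f"mean:   ${mean:,.0f}")    # dominated by the single $1.4M outlier
print(f"median: ${median:,.0f}")  # closer to what the typical victim paid
```

With six modest payments and one $1.4 million outlier, the mean lands near the report’s $228,125 average while the median stays close to its $36,360 figure.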
“This trend reflects the shift of RaaS affiliates and developers toward the mid-market where the risk-to-reward profile of attack is more consistent and less risky than high profile attacks,” Coveware said in its findings.
Coveware also mentioned how large corporations are not entertaining any ransom demands solely due to the amount. “We have also seen an encouraging trend among large organizations refusing to consider negotiations when ransomware groups demand impossibly high ransom amounts.”
A shift in strategy
Hackers have increasingly shifted their focus toward smaller organizations, where attacks still deliver solid financial returns, a shift reflected in the fact that the median size of companies affected by ransomware fell during 2022’s second quarter.
Elsewhere, the report’s list of the most popular ransomware strains shows a few familiar names from the hacking scene: BlackCat accounted for 16.9% of ransomware attacks, while LockBit 2.0 took another sizable chunk (13.1%).
The report also revealed that the double extortion method — in which attackers steal files before encrypting them and threaten to leak the data — is still a favored scare tactic among threat actors, with 86% of reported cases involving this strategy.
For a considerable number of these cases, hackers will continue with their extortion schemes or leak the files they’ve obtained even if they’ve received the ransom payment.
If you’ve been a victim of ransomware, be sure to seek out the anti-hacker groups that provide free decryptors.
You’re probably familiar with the online dangers that you could come across while working from home on your own computer or one provided by your employer. Spam, malware, adware, and viruses are just some things to think about. With the future of the workplace now possibly heading into the online metaverse, these are all dangers that could still come up for workers — and Microsoft has a warning about it.
In a recent post, Charlie Bell, the executive vice president for security, compliance, identity, and management at Microsoft, talked about the cornerstones for securing work in the metaverse. Bell believes that with the metaverse, the security stakes will be higher than imagined, and lists ways that companies and the major players in the space can stay safe when bringing workers online to the virtual metaverse. More importantly, though, he also touched on how anyone can easily be impersonated in the metaverse.
“Fraud and phishing attacks targeting your identity could come from a familiar face – literally – like an avatar who impersonates your co-worker, instead of a misleading domain name or email address. These types of threats could be deal-breakers for enterprises if we don’t act now,” explained Bell.
So, how can this security and trust be accomplished? According to Bell, it will largely come down to information sharing and collaboration on metaverse technologies, along with adopting multi-factor and password-free authentication in metaverse platforms. Even giving IT admins a console to control these experiences is something Microsoft and Bell suggest.
According to Bell, the security of work in the metaverse has to come from the apps within, and there’s only “one chance” to establish specific security principles that can create trust and peace of mind for metaverse experiences while it’s still new. “The security community must work together to build a foundation to safely work, shop, and play,” said Bell.
Transparency is the final way of securing the metaverse for everyone. Bell hopes that those who hold leadership positions in the space will be prepared to answer questions from security experts about terms of service, encryption, and vulnerability reporting. “Let’s make the lessons we’ve learned about identity, transparency, and the security community’s powerful collaboration our top ideals to enable this next wave of technology to reach its full potential,” said Bell.
Workers at the studio formerly known as Vicarious Visions are attempting to unionize. On Tuesday, quality assurance staff at Blizzard Albany went public with the news that they had filed for a union election with the National Labor Relations Board (NLRB). In a statement, the group said it was seeking representation with the Communications Workers of America.
The approximately 20 workers involved in the effort call themselves the Game Workers Alliance Albany, a nod to the Game Workers Alliance at Raven Software, the first-ever union at a major North American game publisher. Like their colleagues at Raven Software, the QA staff at Blizzard Albany are seeking fairer compensation, more pay transparency and better benefits. They also want to work with Activision Blizzard to create a process for addressing workplace issues, including harassment cases.
“QA is currently an undervalued discipline in the games and software industries,” the group said. “We strive to foster work environments where we are respected and compensated for our essential role in the development process.” The QA workers at Blizzard Albany say they asked Activision last week to recognize their union voluntarily. The publisher acknowledged the request but has yet to share a decision.
“Our top priority remains our employees. We deeply respect the rights of all employees under the law to make their own decisions about whether or not to join a union,” an Activision Blizzard spokesperson told Engadget. “We believe that a direct relationship between the company and its employees is the most productive relationship. The company will be publicly and formally providing a response to the petition to the NLRB.”
Before Activision merged it into Blizzard at the start of 2021, the 200-person developer was one of the publisher’s most dependable support studios. It worked on the excellent Tony Hawk’s Pro Skater 1+2 remaster and the Crash Bandicoot N. Sane Trilogy. More recently, as a part of Blizzard, the studio has supported work on Diablo II: Resurrected.
In June, Microsoft said it would respect all unionization efforts at Activision Blizzard following the close of its $68.7 billion deal to buy the publisher. In doing so, the company signed a labor neutrality agreement with the Communications Workers of America. Activision Blizzard employees, including some at Blizzard Albany, also reportedly plan to stage a walkout on Thursday to demand better workplace protections following the overturn of Roe v. Wade.
Ana* and her three-year-old son arrived at the shelter for migrant and refugee women in the northern Mexican city of Monterrey in early October. Every morning, the 14 women at the shelter — mainly from El Salvador and Honduras — share the house chores: sweeping, cooking, and babysitting the children of their compañeras working informal jobs to save enough money to cross into the United States.
The majority of them, traveling alone with as many as three children, spent days unable to communicate with their families after crossing Mexico’s southern border. Not having a local SIM card, they said, made the uncertainty and anxiety of their journey that much worse.
For families crossing borders, a working phone is critical. It lets asylum-seekers stay connected to family, receive money, and access critical information for their journey. But refugees and asylum-seekers face enormous challenges keeping those phones working, as the logistics of cellular networks work against them. The result is a constant scramble, as refugees swap SIM cards and wrestle with telecoms in an effort to create a safer migration journey for themselves and their families.
Ana lost contact with her family after crossing the Guatemala-Mexico border. She didn’t know how to change a SIM card and couldn’t find a place to charge her phone, which ran out of battery in Guatemala.
“My family hadn’t heard from me. Once at the shelter, I went out and found a little shop where I had to pay 15 pesos per hour to charge it and bought a chip for 80 pesos. Then, I called my family,” explains Ana.
Losing mobile coverage when entering Mexico cuts people in transit off from the support networks that monitor and accompany them. While telecommunication infrastructure has expanded across borders through expensive international roaming plans, the people trying to move across those same borders are left with limited access to mobile services.
Vladimir Cortés is the digital rights program officer in the Mexico and Central America office of Article 19, a nonprofit focused on freedom of expression. Cortés explains that governments, multinational telecommunication corporations, regulatory bodies, and international organizations could establish continuity of access to mobile services for people in migration.
“International organizations can articulate these different actors to guarantee mobile network coverage,” says Cortés. “There is an important opportunity to recognize the phenomenon that currently exists and the level of protection that states can guarantee.”
Six months ago, Ana and her son left their home in Choluteca, Honduras, after receiving threats from the people who kidnapped and killed her 14-year-old daughter Gabriela*. Throughout the journey, Ana, who aspires to build a safe life with her son in Los Angeles, relied on Google Maps to check her location, and WhatsApp or Facebook to communicate with her family.
“In some parts there was a signal and in others not. When there was no internet, I was left with nothing,” says 37-year-old Ana, while her son watches SpongeBob SquarePants on her Samsung Galaxy S6.
The use of GPS applications and instant messaging apps — mostly Facebook and WhatsApp — allows refugees to orient themselves and participate in online migrant networks that can give them a greater sense of community and security. Some of the women at the shelter said it’s hard to trust information available online, since they are aware of online scams that falsely promise visa facilitation and transportation assistance.
Some of these online scams have been linked to serious criminal activities such as kidnapping and human trafficking. Diana González and Juan Manuel Casanueva, researchers at SocialTIC, a Mexican digital security nonprofit, identified various connectivity risks at Mexico’s southern border such as identity theft and extortion.
“The dangers are basically associated with two: identity theft for extortion issues, meaning some type of information can be used to contact their families and ask for money,” explains Casanueva. “And the other is not entirely digital … it’s the lack of communication. If they are victims of other types of danger, they cannot communicate with a support network.”
The women at the shelter often verify online information with their compañeras or other offline sources, such as staff at the shelter or migrant rights groups, since they know Facebook is used to spread misinformation and fake news.
“Saying Facebook is bad or WhatsApp is bad does not apply. It is the only thing there is,” says Casanueva. “The question that should be asked in these spaces is how these people can have the appropriate information, and also how to prevent risks that occur on these platforms, such as identity theft for issues of extortion and scams, criminal networks, and possibly even risk of kidnapping, and lots of fake news.”
Ana limits her mobile use to messaging her family, seeking information about border crossings, and watching cartoons with her son. Masha and the Bear is her favorite since, she says, “it helps to distract” her mind.
Mary left El Salvador with her three children, ages two, five, and eight, after being extorted at the pizza place she owned, and like Ana, she doesn’t like to use her Huawei Y7P unless out of necessity.
“The truth is, I don’t use the phone much more than the girls use it to watch videos to entertain themselves. I just want to know how my father and brothers are, and if my brother who is in the United States is going to send me money,” says Mary, who withheld her full name for her own protection.
For the women in the shelter, the priority is to earn more money so they can find safer ways to cross. When they were able, many took buses instead of walking, or stayed in hotels instead of shelters, to protect their children throughout their journey toward the US-Mexico border.
Esther Nohemí Álvarez lent her Huawei phone to her 15-year-old daughter, who was starting to show symptoms of depression. It was 2019, and the Migration Protection Protocol, a Trump era policy also called “Remain in Mexico,” was forcing thousands of asylum seekers arriving at the US’s southern border to remain in Mexico to await their US hearings.
Álvarez’s daughter grabbed her mom’s phone and did TikTok dance challenges with other girls at the shelter. That same phone allowed her to stay in contact with her mother in Monterrey and with her father in Virginia, while she crossed the US-Mexico border with the assistance of a smuggler in April of this year.
“As an unaccompanied minor, immigration detained her and they contacted her father. She had her father’s number memorized in case her cell phone was taken away,” says Álvarez. “She was there for about 25 days, and they allowed her like three calls to contact her dad.”
Of all the risks that crossed Álvarez’s mind when she decided to send her daughter alone after her asylum claim was denied, digital risks were the least of her concern, let alone government surveillance.
But earlier this year, Mexico’s Senate passed a law that would require mobile users to register their biometric data in a government database in order to obtain a SIM card. The law will allegedly fight organized crime and reduce extortions and kidnappings, even though a similar project implemented between 2008 and 2011 only saw an increase in extortions.
Digital rights groups challenging the law affirm that users’ sensitive personal information will be at risk. Although the law is currently suspended indefinitely by the Supreme Court, Cortés explains that its implementation would generate a greater violation of the rights of migrants, who already face persecution by the Mexican National Institute of Migration and other state actors.
“The registration of the card is not the only problem. The other problem is the delivery of biometrics data. Authoritarian countries can use this as a way to control and undermine the privacy of people,” adds Cortés.
The first time Álvarez and her daughter tried to cross through Ciudad Miguel Alemán, across the border from Roma, Texas, they were held for a week in the hieleras, Customs and Border Protection’s notoriously cold detention cells. They were deported through Nuevo Laredo — a border city that has seen a surge in drug cartel-related violence — more than 150 kilometers away from their original point of entry. It was her mobile phone that allowed Álvarez to locate herself on a map and seek assistance.
As the US government deploys new technologies to surveil and track migrants, asylum-seeking women are not deterred by them. Even if they have to wait longer in Monterrey until they consider it safe to cross, returning home is no longer an option.
“We are going to cross the border. That’s why I’m working here [in Monterrey] to save money,” says Mary, while two of her kids run around the table. “If we don’t make it, then we are going to stay here because I cannot return to my country.”
*Some names in this story have been changed to protect sources from possible reprisals.
MSI is working on a new Z690 motherboard tailored for gamers, and it seems that it may blow all other high-end motherboards out of the water.
It’s likely to top the charts not just for MSI — it might end up one of the best motherboards for gaming when it comes to Intel Alder Lake processors.
The new motherboard, dubbed the MSI MEG Z690 Godlike, has already won two CES Innovation Honoree awards: one for High-Performance Home Audio/Video and another for Gaming.
Wccftech shared more information about the exact specifications of the board, although some details still remain unknown. One thing is certain — this motherboard will not be made for small cases. Calling it gigantic is not an overstatement, as it measures a whopping 305 x 310 mm, making it almost square. An EATX motherboard, it’s one of the largest — if not the largest — consumer motherboards on the current market.
The MEG Z690 Godlike looks fun and shiny thanks to the wide range of fully customizable RGB lights strategically placed all over the board. It also comes with a touch LCD panel that measures 3.5 inches and is located near the DDR5 DIMM slots. This feature will provide the user with useful information about the computer, including temperatures, voltages, core clocks, and more. According to MSI, the LCD panel will be customizable through the company’s trademark MSI Dragon software.
The size is not the only impressive aspect of the MSI MEG Z690 Godlike. It comes fully decked out with everything modern gamers could ask for — and then some. The motherboard is said to feature 10Gb Ethernet connectivity, six M.2 slots, the ability to include future PCIe 5.0 drives, and more. There are also plenty of USB ports and even a Thunderbolt 4 port.
Considering that this is a high-end motherboard, the Z690 Godlike has everything it needs to handle the premium components it’s going to support. The board features over 22 power phases for the processor, and it sports plenty of heat sinks and shielding to cope with the temperatures generated by the best processors and graphics cards on the current — and future — market. The MSI MEG Z690 Godlike will also support the newest DDR5 RAM — and loads of it, too. It has enough slots to house up to 128GB of DDR5 RAM running at speeds above 6,666MHz.
All of the above sounds like every gamer’s dream, and it’s likely that MSI hasn’t spilled the beans on all of the features of the MSI MEG Z690 Godlike just yet. We also don’t know anything about the pricing and the release date, and these two points are perhaps the most interesting pieces of information for prospective buyers.
One thing is for sure — it won’t be cheap, but it remains to be seen just how expensive it’s going to get. We’re likely to hear more soon, as the board is set to release sometime in 2022.
I just reviewed AMD’s new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It’s a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I’ve seen brewing over the past few graphics card launches. Nvidia’s Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD.
In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR). But FSR isn’t a reason to buy an AMD graphics card, and DLSS is a reason to buy an Nvidia one even if it shouldn’t be.
Nvidia’s walled garden
Nvidia only offers DLSS on its last two generations of graphics cards — in particular, RTX 30-series and 20-series cards. Walling off features like this isn’t something new for Nvidia. For years, it restricted its G-Sync variable refresh rate technology to monitors that included a dedicated (and costly) proprietary module, instead of adopting the open-source FreeSync developed by AMD.
Similarly, many machine learning applications are built to run using Nvidia’s CUDA GPU computing platform, not the OpenCL platform that AMD cards use. Developers have fixed the problem in software libraries like TensorFlow, but there’s still a trend with these libraries: CUDA gets first priority.
That leaves us with DLSS, which is also a technology restricted only to Nvidia hardware. There’s a good reason why — DLSS uses an A.I. model that can only run on the Tensor cores on recent Nvidia graphics cards. Right now, AMD cards don’t have these dedicated A.I. accelerators, but it’s hard to imagine Nvidia taking them into consideration if they existed.
In fairness to Nvidia, the company has taken steps to break down its proverbial walls. For example, G-Sync now works with a range of FreeSync monitors that don’t include a dedicated module. The important thing to know is that Nvidia has traditionally developed new features with only its hardware in mind, while AMD usually takes an open-source approach.
That’s true for DLSS and FSR, too. The difference between DLSS and Nvidia’s other walled-off features is that it’s significantly better than FSR.
Performance parity, and why DLSS is too good to ignore
The massive asterisk is DLSS. When AMD announced FSR, it looked like an open-source competitor to DLSS that could run on AMD and Nvidia cards alike. In reality, it’s an upscaling tool based on dated tech that manages to increase frame rates, but at a significant cost to image quality.
DLSS doesn’t have that problem. Both DLSS and FSR accomplish the same goal by upscaling a low-resolution image to a high-resolution one by filling in the missing pixels. The difference is that FSR uses a baked-in algorithm with a sharpening filter while DLSS uses an A.I. model that’s been trained on what the final image should look like. Basically, DLSS has a lot more information to work with, and Nvidia graphics cards have the A.I. accelerators to take advantage of it.
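To make that difference concrete, here is a minimal NumPy sketch of the spatial approach FSR represents: upscale the frame, then run a sharpening pass. The function name and parameters are illustrative, not AMD’s actual algorithm; real FSR uses edge-adaptive interpolation, and DLSS replaces this entire hand-written pipeline with a trained network.

```python
import numpy as np

def upscale_and_sharpen(img, scale=2, amount=0.5):
    """Nearest-neighbor upscale followed by an unsharp-mask pass.
    A rough sketch of spatial upscaling plus sharpening, for illustration."""
    # Upscale: repeat each pixel `scale` times along both axes.
    up = img.repeat(scale, axis=0).repeat(scale, axis=1)
    # Sharpen: push each pixel away from the average of its neighbors.
    blur = (np.roll(up, 1, axis=0) + np.roll(up, -1, axis=0) +
            np.roll(up, 1, axis=1) + np.roll(up, -1, axis=1)) / 4.0
    return np.clip(up + amount * (up - blur), 0.0, 1.0)

frame = np.random.default_rng(0).random((540, 960))  # a render at 50% of 1080p
output = upscale_and_sharpen(frame)
print(output.shape)  # (1080, 1920)
```

Notice that nothing here knows what the full-resolution image should have looked like; the sharpening can only amplify detail that survived the downscale, which is why purely spatial methods lose fine detail that an AI model trained on high-resolution ground truth can reconstruct.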
Making FSR open source was an inclusive move for AMD, but it was also a compromise. DLSS is a reason to buy an Nvidia graphics card given its image quality, and even if AMD restricted FSR to its own platform, it wouldn’t be enough to compete with the feature set of Team Green. You can see that in the recent Back 4 Blood, where DLSS holds up much better than FSR (even if FSR offers higher frame rates overall).
To be clear, I’m not advocating for another walled garden — I don’t like the fact that Nvidia restricts DLSS to its platform, either, and as Intel’s XeSS supersampling feature shows, it’s possible to develop this tech in an inclusive way. The point is that Nvidia isn’t going to develop DLSS for other hardware, but AMD could have developed FSR to go toe-to-toe with DLSS while sticking with an open-source approach.
But it may not stay that way for long. Intel is set to release its Arc Alchemist cards soon, which include XeSS. It works like DLSS, but Intel is also offering a general-purpose version that can run on a variety of hardware. AMD could have jumped on that opportunity but didn’t. It looks like Intel is filling the gap.
In the future, I hope to see AMD, Nvidia, and Intel reach performance and feature parity. At least then we don’t have one dominant graphics card maker resting on its laurels while the rest of the market tries to catch up. AMD has said it will continue working on FSR, and XeSS will be available early next year, so hopefully that shift is right around the corner.
Over the last month, Nvidia’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR) have been in a battle for the limelight. Both tools offer upscaling in supported games to deliver features like ray tracing at high resolutions and frame rates. There might be a new competitor entering the ring, though, and it comes from Microsoft.
Two job postings (spotted by TechSpot) hint at a possible DLSS competitor from Microsoft. The first job posting is for a Senior Software Engineer in the Xbox division of the company.
“Xbox is leveraging machine learning to make traditional rendering algorithms more efficient and to provide better alternatives. The Xbox graphics team is seeking an engineer who will implement machine learning algorithms in graphics software to delight millions of gamers,” the job description reads.
The other posting for a Principal Software Engineer for Graphics is a bit more general, but it specifically mentions “state-of-the-art GPU capabilities on Xbox and Windows for AAA game developers,” as well as experience with machine learning and shader compilation.
A DLSS competitor from Microsoft would be good news for gamers. It’s possible Microsoft could deliver machine learning-assisted upscaling through DirectX. Microsoft’s DirectML library already helps optimize GPU resources for machine learning, and with both job postings referring to gaming on Xbox and Windows, Microsoft could be looking for ways to leverage that library in its gaming sector.
That would make sense given Microsoft’s renewed interest in the gaming market. The recently announced Windows 11 includes a swath of Xbox features, including the Direct Storage API for faster loading times and Auto HDR.
Microsoft’s version may not look the same as DLSS or FSR, however. Microsoft already allows developers to use FSR for game development on Xbox consoles, which it’s able to do thanks to FSR’s open-source approach. DLSS, on the other hand, requires proprietary hardware from Nvidia and is, according to at least one developer, more difficult to work with than FSR.
This upscaling feature would likely come through the DirectX interface, which would give developers more options for upscaling. That should mean the feature won’t require any specific hardware, which is a big deal for aging GPUs and APUs that don’t have the power to stand up to modern AAA games.
As with DLSS and FSR, though, the longevity of Microsoft’s implementation will come down to image quality and performance. In our FidelityFX Super Resolution review, we found that it delivers a solid performance increase at 4K, though it struggles with image quality at the more aggressive upscaling modes. DLSS produces a better result overall, but it requires an Nvidia graphics card.
Where Microsoft’s version, if it exists, falls on that spectrum remains to be seen. If it is coming, Xbox and Windows gamers have a lot to look forward to.
A new rumor suggests Nvidia might be working on the RTX 3080 Super and RTX 3070 Super for laptops. The rumor falls in line with a leaked roadmap from Lenovo last month, which listed the upcoming ThinkPad X1 Extreme Gen 4 sporting either an RTX 3080 Super or RTX 3070 Super. We may know the names of the cards, but that’s about it.
The rumor comes from Videocardz, which spotted a tweet from Greymon55 saying that the range is set to launch next year. The Twitter account was only set up this month, but it has already caught the attention of some well-known leakers.
The tweet alone doesn’t say much, but the Lenovo leak lends it some credibility. The original leak shows that you can configure the X1 Extreme Gen 4 with an Nvidia GTX 1650 Ti, RTX 3060, RTX 3070 Super, or RTX 3080 Super. Meanwhile, Lenovo’s X1 Extreme Gen 4 product page lists the RTX 3080, RTX 3070, RTX 3060, or RTX 3050 Ti as graphics options in the upcoming machine.
The RTX 3080 Super and RTX 3070 Super will allegedly come with 16GB and 8GB of GDDR6 memory, respectively. That’s the only spec we know about, but these Super variants, if they exist, will likely come with more CUDA cores. The RTX 2080 Super mobile, for example, came with 128 more CUDA cores than the RTX 2080 mobile. The cards will likely use the same Ampere architecture, but they could come with a redesigned GPU core.
Looking at last-gen’s launch cadence, it’s possible that Nvidia could announce Super variants in late 2021 or early 2022. The RTX 2080 mobile released in January 2019, and the RTX 2080 Super followed in April 2020. Similarly, the RTX 3080 mobile was announced in January 2021, putting the RTX 3080 Super mobile on track for an early 2022 release.
Nvidia hasn’t announced or hinted at anything at this point, though, and it’s still too soon to say these cards are coming. Last year, Nvidia was apparently working on a 20GB version of the RTX 3080 Ti and a 16GB version of the RTX 3070 Ti, both of which never made it to market. The cards were reportedly canceled to make way for the RTX 3080 Ti and RTX 3070 Ti that are available today.
If previous launches are anything to go by, Nvidia is likely working on an update to its mobile RTX 30-series range. However, it’s possible that the design will be reworked, rebranded, or completely scrapped before next year rolls around.
Nearly two-thirds (64%) of enterprise decision-makers with responsibility for machine learning, application development, and decision management in their organizations are worried about job security, according to new research by business software company InRule.
Above: 64% of decision-makers consider job security their biggest personal challenge with AI technologies. Image credit: InRule
There are many use cases for AI in the enterprise, from driving market and customer insights to testing new products and mitigating compliance and privacy risks, and many decision-makers report feeling overwhelmed by the options. At least one-third of decision-makers report having too many use cases across business functions like sales, marketing, and customer experience. In the survey, 53% of respondents named customer experience as the top business function for AI — and said they have too many AI use cases in that area.
The problem of having too many use cases will continue to increase as 67% of decision-makers said they expect their AI/ML usage to increase over the next year-and-a-half.
Challenges with collaboration impede AI success. More than half (51%) of decision-makers say their organization has too much data, and 42% struggle to identify and gain access to the right data. Organizational silos exacerbate the inaccessibility of data, hindering collaboration between experts and data scientists.
AI operations are critical to gaining essential insights about customers and markets, but there are myths and misconceptions that may stifle AI projects before they can get off the ground, InRule’s study found. One such misperception is that AI projects can’t be done without enough data scientists, when the reality is that there are many AI and ML tools available.
Another is that using AI can have unintended consequences that could harm the business. Sixty-four percent of decision-makers said it is “Important” or “Critical” for their organization to defend or prove the efficacy of its digital decisions. With the growing number of privacy regulations, enterprises have to be able to justify what they are doing with the data. Even so, 58% of decision-makers find defending or proving the efficacy of their digital decisions challenging. They are willing to share visual representations of their outcomes and inputs used, but less likely to show the code they used or the questions driving the decisions, the study found.
Part of that may be because many organizations don’t have the right tools, technology, process, and culture to identify the right questions for digital decisioning, InRule found. More than half (57%) of decision-makers report not having the tools and technology in place to identify the right questions for their digital decisions and 42% don’t have the right processes or a culture of collaboration, the study said.
The study, which consisted of three interviews and an online survey of 302 U.S.-based individuals, focused on decision-makers’ perceptions of AI. “AI is a critical source of industry competitiveness. The fastest path to AI solutions is to formulate and execute a strategy to scale AI use cases based on reality unencumbered by myths,” the report said.
This post is by Dr. Mukta Paliwal, senior domain expert at Persistent Systems.
As many as 50% of Gartner client inquiries on the topic of artificial intelligence involve a discussion involving the use of graph technology, the market research firm said in its Top 10 Data and Analytics Trends for 2021. Every large enterprise wants to exploit available data to bring more insights for doing business at scale. To achieve this, connected data has become a logical need, as it helps in bringing context within the existing organizational data to create knowledge.
Businesses must keep pace with constantly evolving data needs. Knowledge graphs can help companies move beyond traditional databases and use the power of natural language processing, machine learning, and semantics to better leverage data.
What is a knowledge graph?
Knowledge graphs represent a collection of interlinked facts about a domain. Essentially, entities and relations are extracted from the unstructured data and stored in the form of a triple: subject-predicate-object. For example, the statement “Captain Marvel is the strongest Avenger” can be broken into a subject (Captain Marvel), a predicate (is the strongest) and an object (Avenger) and stored as a triple (Captain Marvel-is the strongest-Avenger) along with other related entities in a knowledge graph of Avengers, the popular Marvel movie characters.
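At its simplest, a triple store like the one described above can be sketched as a list of (subject, predicate, object) tuples that can be queried by pattern. This is a minimal illustrative sketch, not a production triple store; the entity names are taken from the example above.

```python
# A minimal sketch: knowledge-graph facts stored as
# (subject, predicate, object) triples.
triples = [
    ("Captain Marvel", "is the strongest", "Avenger"),
    ("Captain Marvel", "member of", "Avengers"),
    ("Thor", "member of", "Avengers"),
]

def objects_of(subject, predicate, kg):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in kg if s == subject and p == predicate]

print(objects_of("Captain Marvel", "is the strongest", triples))
# ['Avenger']
```

Real systems store triples in dedicated graph databases with indexes over all three positions, but the query pattern — match on some elements, return the rest — is the same.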
Essentially, we can define knowledge graphs by these features: (1) they describe real-world entities of a domain; (2) they provide relationships between them; (3) they define rules for possible classes of entities and relations via a schema; (4) they enable reasoning to infer new knowledge.
Knowledge graphs can be auto-generated or human-curated, may have been designed with a rigid ontology or may be evolving with time, can be in different shapes and sizes, and may have been developed by a company or by an open-source community. Irrespective of these differences, they help in organizing unstructured data in a way that information can easily be extracted where explicit relations between multiple entities help in the process.
Why use knowledge graphs?
A knowledge graph is self-descriptive: it provides a single place to find the data and understand what it is about. Because the meaning of the data is encoded alongside the data in the graph itself, knowledge graphs are often described as semantic. Knowledge graphs bring additional value by providing:
Context: Knowledge graphs provide context to algorithms by integrating various types of information into an ontology, plus the flexibility to add new derived knowledge on the go. Knowledge graphs can also draw on various types of raw data simultaneously.
Efficiency: Once the desired entities and relations are available, knowledge graphs offer computational efficiency when querying stored data, resulting in more effective use of data for generating insights.
Explainability: Because the meaning of entities is encoded within the graph itself, large networks of entities and relations address the problem of understandability. As such, knowledge graphs are intrinsically explainable.
Where to use knowledge graphs
According to Gartner’s Top 10 Data and Analytics Trends for 2021, knowledge graphs are the foundation of modern data and analytics, with capabilities to enhance and improve user collaboration, machine learning models, and explainable AI. Although graph technologies are not new to data and analytics, there has been a shift in the way they are used. A knowledge graph brings together machine learning and graph technologies to give AI the context it needs.
To solve complex problems that require integrating multiple unstructured and semi-structured data sources, we need a connected, reusable, and flexible data foundation that reflects the complexity of the real world. Connected data, enriched with meaning, allows multiple interpretations of the same data, which helps answer complex queries and derive insights more efficiently.
Organizations are identifying an increasing number of use cases for knowledge graphs, including:
Fraud detection: Identifying fraudulent transactions is the most prevalent use case and has applications in banking, mobile phone transactions, government benefits and tax fraud. The use of knowledge graphs also enhances fraud, waste, and abuse detection on insurance claims. Knowledge graphs empowered by machine learning and reasoning capabilities allow companies to better identify fraudulent patterns by traversing many real-time interconnected entities in a large network.
Drug discovery: Drug discovery is an extremely complex and cost-intensive process. Knowledge graphs have shown considerable promise across a range of tasks, including drug repurposing, drug interactions, and target gene-disease prioritization. A large number of open-source databases are integrated with published literature to create huge biomedical knowledge graphs. These knowledge graphs have become very helpful for mining the relations between entities such as genes, drugs, and diseases, and for using those relations in downstream applications.
Semantic search: A knowledge graph stores the meanings of entities; hence, knowledge graph-powered search is referred to as “semantic search,” or search enriched with meaning. Semantic search is used to improve the accuracy of search results when exploring the internet or the internal systems of an organization. For semantic search to work, the capabilities of text analytics and indexing techniques are used alongside a well-curated knowledge graph.
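One simple way a knowledge graph enriches search is query expansion: before a keyword query hits the text index, the graph supplies known synonyms or related entities. The sketch below assumes a hypothetical synonym mapping; production systems would pull these relations from the graph itself.

```python
# Hypothetical "same as" relations extracted from a knowledge graph.
same_as = {
    "heart attack": ["myocardial infarction", "MI"],
}

def expand_query(query, kg):
    """Expand a keyword query with graph-derived synonyms."""
    terms = [query] + kg.get(query, [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("heart attack", same_as))
# "heart attack" OR "myocardial infarction" OR "MI"
```

A document mentioning only “myocardial infarction” now matches a search for “heart attack,” which is the kind of meaning-aware recall a plain keyword index cannot provide.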
Recommender systems: Recommender systems model users’ preferences to make personalized product recommendations, and a variety of modeling techniques are used to build them. In spite of their considerable merit, these systems suffer from challenges such as data sparsity, cold start, and explainability of recommendations. Knowledge graph-based recommender systems can help solve these challenges to an extent. In this approach, user and item entities are connected through multiple relationships. The relations are used to obtain a list of probable candidates for the target user, and the path between the target user and a recommended item serves as an explanation for the recommendation.
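The path-as-explanation idea can be shown with a toy user-item graph. The entities and relations below are invented for illustration; a real recommender would score many candidate paths rather than return the first one found.

```python
# A tiny hypothetical user-item knowledge graph as an adjacency map.
graph = {
    "user:alice": ["item:dune"],
    "item:dune": ["genre:scifi"],
    "genre:scifi": ["item:foundation"],
    "item:foundation": [],
}

def find_path(start, goal, graph, path=()):
    """Depth-first search for a path from `start` to `goal`."""
    path = path + (start,)
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid cycles
            found = find_path(nxt, goal, graph, path)
            if found:
                return found
    return None

# The path itself reads as an explanation: Alice liked Dune,
# Dune is sci-fi, Foundation is also sci-fi.
print(find_path("user:alice", "item:foundation", graph))
# ('user:alice', 'item:dune', 'genre:scifi', 'item:foundation')
```

Because each hop is a named relation in the graph, the chain can be rendered directly to the user as a human-readable justification, which is what makes this family of recommenders more explainable than latent-factor models.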
Mukta Paliwal, Ph.D., is a senior domain expert (data science) at Persistent Systems. She leads and consults with teams to create and deliver cutting-edge software solutions based on AI/ML across multiple business domains. She holds a Ph.D. in applied machine learning.