Categories
Game

Nexus Mods bans ‘Spider-Man Remastered’ patch that replaced in-game Pride flags

Nexus Mods, a popular mod database, has posted a strongly worded update about the Spider-Man Remastered patch created to replace the game’s Pride flags. The website’s administrator, Dark0ne, revealed that the mod was uploaded by a sock puppet account under the name “Mike Hawk.” According to Dark0ne, the fact that it was added via a secondary account shows the uploader intended to troll and knew the file would not be allowed on the database. As such, the website has removed the patch from its repository and banned both the user’s main account and the sock puppet.

Spider-Man Remastered was released for PC a few days ago. It’s a refresh of the original 2018 title developed by Insomniac Games, a studio Sony Interactive Entertainment purchased in 2019. In the game, you’ll see Pride flags around New York as you swing through the city as Spider-Man, and some players were apparently offended by their presence. The mod Mike Hawk uploaded replaces those Pride flags with the flag of the USA.

“We are for inclusivity, we are for diversity,” Dark0ne wrote in their post. They vowed to do a better job of moderating the website and ensuring its policy is followed more closely going forward. The administrator said the site will take action if someone uploads a file deliberately aimed against inclusivity and/or diversity, and based on how it handled this particular incident, it will likely hand out bans in the future when they’re warranted. Of course, Dark0ne and their team can only police what’s uploaded to Nexus Mods. Since it’s one of the largest mod repositories around, though, the team’s stance could make an impact and help slow the spread of these kinds of modifications.


Repost: Original Source and Author Link

Categories
AI

When AI meets BI: 5 red flags to watch for



If there is a Holy Grail of business success in 2021, it almost certainly has something to do with the power of data. And if business intelligence wasn’t already the answer to C-suite prayers for insight and vision derived from massive amounts of data, then artificial intelligence would be.

Today, AI is widely seen as the enabler BI has always needed to deliver next-level business value. But incorporating AI into your existing BI environment is not so simple.

And it can be precarious, too: AI can dramatically amplify an almost unnoticeable issue into a significantly larger, and negative, impact on downstream processes. If you can’t sing in tune but only play around with a small karaoke machine at home, there is little risk; it’s not a big problem. But now imagine yourself in a huge stadium in front of a multi-megawatt PA system.

With that amplification potential firmly in mind, organizations hoping to integrate AI into their BI solutions must be keenly aware of the red flags that can scuttle even the best-intentioned projects.

Major pitfalls to watch out for when integrating AI into BI include the following:

1. Misalignment with (or absence of) business use case

This should be the easiest pitfall to recognize, yet it is the most common one found among current AI implementations. As tempting as it may be to layer AI into your BI solution simply because your peers or competitors are doing so, the consequences can be dire. If you spend millions of dollars to automate a piece of work done by a single employee earning $60,000 per year, a positive ROI will be very hard to justify.

When seeking input from business leaders about the potential viability of AI-enabled BI, start the conversation with specific scenarios where AI’s scale and scope could address well-defined gaps and yield business value exceeding the estimated expense. If those gaps aren’t well understood, or sufficient new value isn’t assured, it’s difficult to justify proceeding further.
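To make that first red flag concrete, here is a back-of-the-envelope check in Python that compares projected value against projected cost over a few years; every figure and variable name below is an assumption chosen purely for illustration.

    # Rough ROI check for red flag #1: does projected value ever exceed projected cost?
    build_cost = 2_000_000        # one-off implementation cost (assumed)
    annual_run_cost = 300_000     # licences, infrastructure, AI-teacher time (assumed)
    annual_value = 60_000         # e.g., one automated role's salary (assumed)
    years = 5

    total_cost = build_cost + annual_run_cost * years
    total_value = annual_value * years
    roi = (total_value - total_cost) / total_cost

    print(f"{years}-year ROI: {roi:.0%}")  # deeply negative: the red flag described above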

2. Insufficient training data

Let’s assume you have a feasible business case. What should you look out for next? Now you need to make sure you have enough data to train the AI via a machine learning (ML) process. You may have tons of data, but is it enough for AI training? That depends on the specific use case. For example, when Thomson Reuters built a Text Research Collection in 2009 for news classification, clustering, and summarization, it required a huge amount of data: close to two million news articles.
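Before committing to model training, it is worth taking a quick inventory of whatever labelled data you already have. The sketch below is a minimal, hypothetical check of corpus size and class balance; the toy corpus and the per-class threshold are assumptions, since the right amount of data depends on the use case.

    from collections import Counter

    # Hypothetical labelled corpus of (document_text, topic_label) pairs; in a real
    # project this would be your own data inventory.
    corpus = [
        ("Central bank raises interest rates", "finance"),
        ("New vaccine enters phase-3 trial", "health"),
        ("Quarterly earnings beat expectations", "finance"),
    ]

    label_counts = Counter(label for _, label in corpus)
    print(f"{len(corpus)} documents across {len(label_counts)} classes: {dict(label_counts)}")

    # Rough sanity check before training: flag classes too thin to learn from.
    # The threshold is an assumption; the right number is use-case specific.
    MIN_EXAMPLES_PER_CLASS = 1_000
    too_small = {c: n for c, n in label_counts.items() if n < MIN_EXAMPLES_PER_CLASS}
    if too_small:
        print("Not enough training data for:", too_small)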

If at this point you’re still wondering who can determine what the right training data is, and how much of it will be enough for the intended use case, then you’re facing your next red flag.

3. Missing AI teacher

Having an outstanding data scientist on staff does not guarantee that you already have an AI teacher. It’s one thing to be able to code in R or Python and build sophisticated analytical solutions, and quite another to identify the right data for AI training, package it properly, continuously validate the output, and guide the AI along its learning pathway.

An AI teacher is not just a data scientist – it’s a data scientist with a lot of patience to go through the incremental machine learning process, with a thorough understanding of the business context and the problem you’re trying to solve, and an acute awareness of the risk of introducing bias via the teaching process.

AI teachers are a special breed: AI teaching is increasingly seen as sitting at the intersection of artificial intelligence, neuroscience, and psychology, and such people may be hard to find at the moment. But AI does need a teacher. Like a big dog such as a Rottweiler, with the proper training it can be your best friend and helper, but without one it can become dangerous, even to its owner.

If you are lucky enough to find an AI teacher, you still have a couple of other concerns to consider.

4. Immature master data

Master data (MD), the core data that underpins the successful operation of the business, is critically important not only for AI but for traditional BI as well. The more mature and well-defined the MD, the better. And while BI can compensate for MD’s immaturity with additional data engineering inside the BI solution, the same cannot be done inside AI.

Of course, you can use AI to master your data, but that is a different use case, known as data preparation for BI and AI.

How can we tell so-called mature MD from immature MD? Consider the following (a minimal sketch of these checks appears after the list):

  1. The level of certainty in deduplication of MD Entities — it should be close to 100%
  2. The level of relationship management:
    • Inside each MD entity class — for example, “Company_A-is-a-parent-of-Company_B”
    • Across MD entity classes — for example, “Company_A-supplies-Part_XYZ”
  3. The level of consistency of categorizations, classifications, and taxonomies. If the marketing department uses a product classification that is different from the one used in finance, then these two must be properly — and explicitly — mapped to one another.
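To make those three signals concrete, here is a minimal, purely illustrative sketch in Python; the toy company records, the naive name normalization, and the category mapping are all assumptions rather than a real MDM implementation.

    companies = [
        {"id": 1, "name": "Acme Corp", "parent_id": None},
        {"id": 2, "name": "ACME Corporation", "parent_id": None},  # likely duplicate of 1
        {"id": 3, "name": "Acme Parts Ltd", "parent_id": 1},
    ]

    # 1. Deduplication: a naive normalized-name key (real entity matching is fuzzier).
    normalized = [c["name"].lower().replace("corporation", "corp") for c in companies]
    duplicates = len(normalized) - len(set(normalized))
    print(f"Suspected duplicate entities: {duplicates}")

    # 2. Relationship management inside the Company class: every parent link must resolve.
    ids = {c["id"] for c in companies}
    broken = [c["id"] for c in companies if c["parent_id"] is not None and c["parent_id"] not in ids]
    print(f"Broken parent-of relationships: {broken}")

    # 3. Consistent classifications: marketing categories must map explicitly to finance codes.
    marketing_to_finance = {"Widgets": "HW-01", "Services": "SV-10"}
    marketing_categories = {"Widgets", "Services", "Bundles"}
    unmapped = marketing_categories - marketing_to_finance.keys()
    print(f"Marketing categories with no finance mapping: {unmapped}")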

If you have mastered the 1-2-3 checklist above for your MD and have successfully moved through the preceding three “red flag” check points, then you can attempt relatively simple use cases of enhancing BI with AI: the ones that use structured data.

If unstructured data, such as free-form text or any information without a pre-defined data model, must be involved in your AI implementation, then watch out for red flag #5.

5. Absence of a well-developed knowledge graph

What is a knowledge graph? Imagine all your MD implemented in a machine-readable format with all the definitions, classes, instances, relationships, and classifications, all interconnected and queryable. That would be a basic knowledge graph. Formally speaking, a knowledge graph includes an information model (at the class level) along with corresponding instances, qualified relationships (defined at the class level and implemented at the instance level), logical constraints, and behavioral rules.
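One way to ground that definition is to sketch a toy knowledge graph directly in code. The example below is illustrative only and every name in it is invented: a small information model at the class level, a few instances, qualified relationships defined at the class level and asserted at the instance level, and one logical constraint, all of it queryable.

    # Classes (the information model) and instances of those classes.
    classes = {"Company", "Part"}
    instances = {
        "Company_A": "Company",
        "Company_B": "Company",
        "Part_XYZ": "Part",
    }
    assert set(instances.values()) <= classes  # every instance belongs to a known class

    # Qualified relationships: defined at the class level...
    relationship_types = {
        "is_parent_of": ("Company", "Company"),
        "supplies": ("Company", "Part"),
    }
    # ...and asserted at the instance level.
    relationships = [
        ("Company_A", "is_parent_of", "Company_B"),
        ("Company_A", "supplies", "Part_XYZ"),
    ]

    # A logical constraint: every assertion must respect its relationship's class signature.
    for subject, relation, obj in relationships:
        domain, range_ = relationship_types[relation]
        assert instances[subject] == domain and instances[obj] == range_, (subject, relation, obj)

    # Queryable: everything Company_A is directly connected to.
    print([(r, o) for s, r, o in relationships if s == "Company_A"])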

If a knowledge graph is implemented using Semantic Web standards, then you can load it straight into the AI, significantly reducing the teaching process described earlier. Another great feature of a knowledge graph is that it is limitlessly extensible in terms of the information model, relationships, constraints, and so on, and it merges easily with other knowledge graphs. While mature MD may be sufficient for AI implementations that use only structured data, a knowledge graph is a must for the following (a brief loading-and-querying sketch follows the list):

  • AI solutions processing unstructured data, where the AI uses the knowledge graph to analyze unstructured data in much the same way as structured data;
  • AI storytelling solutions, where the analytical results are presented as a story or narrative rather than just tables or charts, shifting BI from an on-screen visualization that supports the discussion at the table to a party to that discussion; a cognitive support service, if you will.
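As a hedged illustration of the Semantic Web route, the sketch below builds a tiny graph in Turtle, queries it with SPARQL, and merges it with a second graph using the Python rdflib library; the ex: namespace and every entity and relationship in it are invented for the example.

    from rdflib import Graph

    # A tiny knowledge graph serialized in Turtle (a Semantic Web standard).
    turtle = """
    @prefix ex: <http://example.org/> .
    ex:Company_A a ex:Company ; ex:isParentOf ex:Company_B ; ex:supplies ex:Part_XYZ .
    ex:Company_B a ex:Company .
    ex:Part_XYZ  a ex:Part .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    # Queryable with SPARQL: which companies supply which parts?
    rows = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?company ?part WHERE { ?company ex:supplies ?part . }
    """)
    for company, part in rows:
        print(company, "supplies", part)

    # Mergeable: combining two graphs is just a union of their triples.
    other = Graph()
    other.parse(data="@prefix ex: <http://example.org/> . ex:Company_C a ex:Company .",
                format="turtle")
    merged = g + other
    print(len(merged), "triples after the merge")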

While these potential pitfalls seem daunting at first, they are certainly less worrisome than the alternative. And they serve as a reminder of the best practice common to all successful AI implementations: up-front preparation.

AI can change BI from an insights-invoking tool to a respected participant in the actual decision-making process. It is a qualitative change — and it may be a highly worthwhile investment, as long as you know what to watch out for.

Igor Ikonnikov is a Research & Advisory Director in the Data & Analytics practice at Info-Tech Research Group. Igor has extensive experience in strategy formation and execution in the information management domain, including master data management, data governance, knowledge management, enterprise content management, big data, and analytics.


Repost: Original Source and Author Link

Categories
AI

When AI flags the ruler, not the tumor — and other arguments for abolishing the black box (VB Live)

AI helps health care experts do their jobs efficiently and effectively, but it needs to be used responsibly, ethically, and equitably. In this VB Live event, get an in-depth perspective on the strengths and limitations of data, AI methodology and more.



One of the big issues that exist within AI generally, and that is particularly acute in health care settings, is transparency. AI models, deep neural networks for example, have a reputation for being black boxes. That’s particularly concerning in a medical setting, where caregivers and patients alike need to understand why recommendations are being made.

“That’s both because it’s integral to the trust in the doctor-patient relationship, but also as a sanity check, to make sure these models are, in fact, learning things they’re supposed to be learning and functioning the way we would expect,” says Brian Christian, author of The Alignment Problem, Algorithms to Live By and The Most Human Human.

He points to the example of a neural network that had famously reached a level of accuracy comparable to human dermatologists at diagnosing malignant skin lesions. However, a closer examination of the model using saliency methods revealed that the single most influential thing it was looking for in a picture of someone’s skin was the presence of a ruler. Because medical images of cancerous lesions typically include a ruler for scale, the model learned to treat the presence of a ruler as a marker of malignancy: that is much easier than telling the difference between different kinds of lesions.

“It’s precisely this kind of thing which explains remarkable accuracy in a test setting, but is completely useless in the real world, because patients don’t come with rulers helpfully pre-attached when [a tumor] is malignant,” Christian says. “That’s a perfect example, and it’s one of many for why transparency is essential in this setting in particular.”
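For readers wondering what “saliency methods” look like in practice, here is a minimal, hypothetical sketch of vanilla gradient saliency in Python with PyTorch; the untrained stand-in model, the random input, and the class index are assumptions, not the actual dermatology system described above.

    import torch
    import torchvision.models as models

    # Stand-in classifier and a random "image"; in practice these would be the trained
    # dermatology model and a preprocessed lesion photo (both are assumptions here).
    model = models.resnet18()
    model.eval()
    image = torch.rand(3, 224, 224, requires_grad=True)

    # Vanilla gradient saliency: how strongly does each input pixel move the class score?
    class_idx = 0  # hypothetical "malignant" class index
    score = model(image.unsqueeze(0))[0, class_idx]
    score.backward()
    saliency = image.grad.abs().max(dim=0).values  # one importance value per pixel

    # If the brightest regions cluster around a ruler rather than the lesion itself,
    # the model has learned the confound described above.
    print(saliency.shape, float(saliency.max()))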

At a conceptual level, one of the biggest issues in all machine learning is that there’s almost always a gap between the thing that you can readily measure and the thing you actually care about.

He points to a model developed in the 1990s by a group of researchers in Pittsburgh to estimate the severity of pneumonia cases and triage patients between inpatient and outpatient treatment. One thing this model learned was that, on average, people with asthma who come in with pneumonia have better health outcomes as a group than non-asthmatics. That wasn’t because asthma is the great health bonus the model flagged it as, but because patients with asthma get higher-priority care, and because asthma patients are on high alert to see their doctor as soon as they start to have pulmonary symptoms.

“If all you measure is patient mortality, the asthmatics look like they come out ahead,” he says. “But if you measure things like cost, or days in hospital, or comorbidities, you would notice that maybe they have better mortality, but there’s a lot more going on. They’re survivors, but they’re high-risk survivors, and that becomes clear when you start expanding the scope of what your model is predicting.”

The Pittsburgh team was using a rule-based model, which enabled them to see this asthma connection and flag it immediately. They were able to tell the doctors participating in the project that the model had learned a possibly bogus correlation. But if it had simply been a giant neural network, they might never have known that this problematic association had been learned.
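To illustrate how a transparent model surfaces this kind of misleading correlation, here is a small, self-contained Python sketch on synthetic data; the simulated “extra care” effect, the feature names, and all the numbers are assumptions, not the Pittsburgh model, and inspecting the fitted coefficients stands in for the rule inspection described above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic, purely illustrative data: asthma patients receive higher-priority
    # care (the hidden confounder), which lowers their observed mortality.
    asthma = rng.binomial(1, 0.15, n)
    severity = rng.normal(0.0, 1.0, n)
    extra_care = 1.5 * asthma
    p_death = 1.0 / (1.0 + np.exp(-(0.8 * severity - extra_care - 2.0)))
    died = rng.binomial(1, p_death)

    X = np.column_stack([severity, asthma])
    clf = LogisticRegression().fit(X, died)

    # A transparent model makes the bogus "asthma is protective" signal easy to spot:
    # expect a clearly negative asthma coefficient, the kind of red flag clinicians
    # and modelers can then discuss together.
    print(dict(zip(["severity", "asthma"], clf.coef_[0].round(2))))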

One of the researchers on that project in the 1990s, Rich Caruana of Microsoft, went back 20 years later with a modern set of tools, examined the neural network he had helped develop, and found a number of equally terrifying associations, such as the model treating being over 100 years old, or having high blood pressure, as good for you. All for the same reason: those patients were given higher-priority care.

“Looking back, Caruana says thank God we didn’t use this neural net on patients,” Christian says. “That was the fear he had at the time, and it turns out, 20 years later, to have been fully justified. That all speaks to the importance of having transparent models.”

Algorithms that aren’t transparent, or that are biased, have produced a variety of horror stories, leading some to say these systems have no place in health care. But that’s a bridge too far, Christian says. An enormous body of evidence shows that, when done properly, these models are an enormous asset, often better than individual expert judgments, while providing a host of other advantages.

“On the other hand,” explains Christian, “some are overly enthusiastic about the embrace of technology, who say, let’s take our hands off the wheel, let the algorithms do it, let our computer overlords tell us what to do and let the system run on autopilot. And I think that is also going too far, because of the many examples we’ve discussed. As I say, we want to thread that needle.”

In other words, AI can’t be used blindly. It requires building provably optimal, transparent models from data, in an iterative and inclusive process that pulls together an interdisciplinary team of computer scientists, clinicians, patient advocates, and social scientists.

That also includes auditing these systems once they go into production, since certain correlations may break down over time, certain assumptions may no longer hold, and we may learn more. The last thing you want to do is flip the switch and come back 10 years later.
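A production audit can start with something as simple as checking whether the data the model now sees still resembles the data it was trained on. The sketch below is a minimal, hypothetical drift check in Python using a two-sample Kolmogorov-Smirnov test; the feature values and the significance threshold are assumptions.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)

    # Hypothetical feature values: what the model saw at training time vs. what it
    # sees in production a year later (both arrays are synthetic).
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)

    statistic, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:  # threshold is an assumption; tune it to your audit policy
        print(f"Possible data drift (KS statistic {statistic:.3f}); schedule a model audit.")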

“For me, a diverse group of stakeholders with different expertise, representing different interests, coming together at the table to do this in a thoughtful, careful way, is the way forward,” he says. “That’s what I feel the most optimistic about in health care.”


Hear more from Brian Christian during our VB Live event, “In Pursuit of Parity: A guide to the responsible use of AI in health care” on March 31.

Register here for free.

Presented by Optum


 You’ll learn:

  • What it means to use advanced analytics “responsibly”
  • Why responsible use is so important in health care as compared to other fields
  • The steps that researchers and organizations are taking today to ensure AI is used responsibly
  • What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians

Speakers:

  • Brian Christian, Author, The Alignment Problem, Algorithms to Live By and The Most Human Human
  • Sanji Fernando, SVP of Artificial Intelligence & Analytics Platforms, Optum
  • Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)

Repost: Original Source and Author Link