Categories
AI

How will AI be used ethically in the future? AI Responsibility Lab has a plan



As the use of AI grows across industries and nearly every aspect of society, there is an increasingly obvious need to have controls in place for responsible AI.

Responsible AI is about making sure that AI is used ethically, respects personal privacy and generally avoids bias. There is a seemingly endless stream of companies, technologies and researchers tackling issues associated with responsible AI. Now the aptly named AI Responsibility Labs (AIRL) is joining the fray, announcing $2 million in pre-seed funding alongside a preview launch of the company’s Mission Control software-as-a-service (SaaS) platform.

Leading AIRL is CEO Ramsay Brown, who trained as a computational neuroscientist at the University of Southern California, where he spent a lot of time working on mapping the human brain. His first startup, originally known as Dopamine Labs and later rebranded as Boundless Mind, focused on behavioral engineering and on using machine learning to predict how people will behave. Boundless Mind was acquired by Thrive Global in 2019.

At AIRL, Brown and his team are taking on the issues of AI safety, making sure that AI is used responsibly in a way that doesn’t harm society or the organizations that are using the technology.

“We founded the company and built the software platform for Mission Control to start with helping data science teams do their job better and more accurately and faster,” Brown said. “When we look around the responsible AI community, there are some people working on governance and compliance, but they are not talking to data science teams and finding out what actually hurts.”

What data science teams need to create responsible AI

Brown stated emphatically that no organization is likely to set out to build an AI that is purposefully biased or that uses data in an unethical fashion.

Rather, what typically happens in a complex development effort, with many moving pieces and many different people, is that data is unintentionally misused or machine learning models are trained on incomplete data. When Brown and his team asked data scientists what was missing and what hurt development efforts, respondents told him they were looking for project management software more than a compliance framework.

“That was our big ‘a-ha’ moment,” he said. “The thing that teams actually missed was not that they didn’t understand regulations, it’s that they didn’t know what their teams were doing.”

Brown noted that two decades ago, software engineering was revolutionized by the development of dashboard tools like Atlassian’s Jira, which helped developers build software faster. Now, his hope is that AIRL’s Mission Control will be the equivalent dashboard for data science, helping data teams build technologies with responsible AI practices.

Working with existing AI and MLops frameworks

There are multiple tools that organizations can use today to help manage AI and machine learning workflows, sometimes grouped together under the industry category of MLops.

Popular technologies include Amazon SageMaker, Google Vertex AI, Domino Data Lab and BigPanda.

Brown said that one of the things his company has learned while building out its Mission Control service is that data science teams have many different tools they prefer to use. He said that AIRL isn’t looking to compete with MLops and existing AI tools, but rather to provide an overlay on top of them for responsible AI usage. To that end, AIRL has developed an open API endpoint so that a team using Mission Control can pipe in data from any platform and have it end up as part of its monitoring processes.
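AIRL hasn’t published the endpoint’s schema, but a generic ingestion API of this kind typically accepts JSON events over HTTP. As a rough illustration, a pipeline step might report a training run to the monitoring overlay like this (the URL, payload fields and function name are all hypothetical, not Mission Control’s actual API):

    import json
    import urllib.request

    # Hypothetical ingestion endpoint; AIRL's real API is not public.
    MISSION_CONTROL_URL = "https://mission-control.example.com/api/v1/events"

    def report_training_run(model_name: str, dataset: str, metrics: dict) -> None:
        """Send a training-run event from any MLops pipeline to the monitoring overlay."""
        event = {
            "type": "training_run",
            "model": model_name,
            "dataset": dataset,
            "metrics": metrics,
        }
        request = urllib.request.Request(
            MISSION_CONTROL_URL,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print(response.status)  # any 2xx status means the event was accepted

    report_training_run("churn-model", "customers-2022q1", {"auc": 0.91})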

AIRL’s Mission Control provides a framework for teams to take what they’ve been doing in ad hoc approaches and create standardized processes for machine learning and AI operations.

Brown said that Mission Control enables users to take data science notebooks and turn them into repeatable processes and workflows that work within configured parameters for responsible AI usage. In such a model, the data is connected to a monitoring system that can alert an organization if there is a violation of policies. For example, he noted that if a data scientist uses a data set that isn’t allowed by policy to be used for a certain machine learning operation, Mission Control can catch that automatically, raise a flag to managers and pause the workflow.

“This centralization of information creates better coordination and visibility,” Brown said. “It also lowers the probability that systems with really gnarly and undesirable outcomes end up in production.”
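The company hasn’t detailed how these checks are implemented, but the behavior Brown describes (catching a disallowed dataset, raising a flag to managers and pausing the workflow) can be sketched as a simple policy gate. Everything in the following sketch, from the policy format to the function names, is invented for illustration:

    # A hypothetical policy gate; the policy format and names are invented.
    DATASET_POLICY = {
        "patient-records": {"allowed_tasks": {"clinical-triage"}},
        "web-clickstream": {"allowed_tasks": {"recommendations", "ad-ranking"}},
    }

    class PolicyViolation(Exception):
        """Raised to pause a workflow that breaches data-usage policy."""

    def notify_managers(message: str) -> None:
        # Stand-in for raising a flag to managers (email, chat, dashboard, etc.).
        print(f"[ALERT] {message}")

    def check_dataset_policy(dataset: str, task: str) -> None:
        """Flag managers and halt the workflow if a dataset isn't approved for a task."""
        policy = DATASET_POLICY.get(dataset)
        if policy is None or task not in policy["allowed_tasks"]:
            notify_managers(f"Dataset '{dataset}' is not approved for task '{task}'")
            raise PolicyViolation(dataset, task)

    check_dataset_policy("web-clickstream", "recommendations")  # passes silently
    # check_dataset_policy("patient-records", "ad-ranking")     # would flag and pause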

Looking out to 2027 and the future of responsible AI

Looking out to 2027, AIRL has a roadmap to help with more advanced concerns around AI usage and the potential for artificial general intelligence (AGI). The company’s 2027 focus is on enabling an effort it calls the Synthetic Labor Incentive Protocol (SLIP). The basic idea is to have some form of smart contract for using AGI-powered labor in the economy.

“We’re looking at the advent of artificial general intelligence as a logistical, business and society-level concern that needs to be spoken about not in ‘sci-fi terms,’ but in practical incentive management terms,” Brown said.



Categories
Game

Activision Blizzard’s latest anti-harassment effort is a ‘responsibility committee’

Activision Blizzard is facing increasing scrutiny from the government and the games industry over its handling of the ongoing sexual harassment scandal, and its latest effort might not help. As Kotaku reports, the developer has formed a “Workplace Responsibility Committee” to help it implement new anti-harassment and anti-discrimination efforts. While that sounds useful at first, there’s a concern the initial committee is more symbolic than functional.

The committee will launch with just two members, both of whom (chair Dawn Ostroff and Reveta Bowers) are existing independent board members. They, in turn, will report to the board and key Activision Blizzard executives — including CEO Bobby Kotick, who some argue is partly to blame for the scandal. The duo will work with an outside coordinator and a consultant following the company’s settlement with the EEOC, but there’s no mention of involving regular company staff or outsiders who weren’t part of that court agreement.

As such, it won’t be surprising if the committee does little to satisfy critics. Employees and others have called on Kotick to resign, among other more substantial changes. There’s also low confidence in leadership’s ability to police itself — Jennifer Oneal, Blizzard’s first female leader, allegedly left her position feeling she was the target of discrimination by a seemingly irredeemable company culture. For that matter, Bloomberg noted that some board members (including Ostroff) are longtime friends and connections of Kotick. The committee might need to take aggressive steps if it wants to prove it’s more than a superficial gesture.



Categories
Security

Hacker claims responsibility for T-Mobile attack, bashes the carrier’s security

A person claiming to be behind the T-Mobile data breach that exposed almost 50 million people’s info has come forward to reveal his identity and to criticize T-Mobile’s security, according to a report by The Wall Street Journal. John Binns told the WSJ that he was behind the attack and provided evidence that he could access accounts associated with it, and he went into detail about how he was able to pull it off and why he did it.

According to Binns, he was able to get customer (and former customer) data from T-Mobile by scanning for unprotected routers. He found one that, he told the Journal, allowed him to access a Washington state data center storing credentials for over 100 servers. He called the carrier’s security “awful” and said that realizing how much data he had access to made him panic. According to the WSJ, it’s unclear whether Binns was working alone, though he implied that he collaborated with others for at least part of the hack.

The information the hacker gained access to includes sensitive personal data, like names, birthdates, and Social Security numbers, as well as important cellular data like identification numbers for cellphones and SIM cards. T-Mobile has said in a statement that it’s “confident” that it’s “closed off the access and egress points the bad actor used in the attack.”

The WSJ’s report goes into depth on Binns’ history as a hacker. He claims that he got his start making cheats for popular video games and that he discovered the flaw that ended up being used in a botnet that attacked IoT devices (though he denies actually working on the code).

According to Binns, his relationship with US intelligence services is troubled, to say the least. A lawsuit that appears to have been filed by Binns in 2020 demands that the CIA, FBI, DOJ and other agencies tell him what information they have on him. The lawsuit also accuses the government of, among other things, having an informant try to convince Binns to buy Stinger missiles on an FBI-owned website, attacking Binns with psychic and energy weapons, and even being involved in his alleged kidnapping and torture. An FBI response to his lawsuit denied that the bureau was investigating him over the botnet or that it had information related to the alleged surveillance, abduction and torture.

Binns told the WSJ that one of his goals behind the attack was to “generate noise,” saying that he hopes someone in the FBI will leak information related to his alleged kidnapping. It’s not likely that Binns’ situation will be improved now that he’s shone a spotlight on himself as the person who hacked one of the US’s major carriers. However, if his reports about how he gained access to a vast trove of T-Mobile data are true, it paints a concerning picture of the carrier’s security practices.


Categories
AI

Responsible AI in health care starts at the top — but it’s everyone’s responsibility (VB Live)

Presented by Optum


Health care’s Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs — and AI can help. In this VB Live event, learn more about how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit.

Register here for free.


Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are rapidly advancing, creating advantages in the field’s clinical and administrative realms. It’s on the administrative side — think workflows or back-office processes — where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.

But it’s increasingly clear that applying AI responsibly needs to be a central focus for organizations that use data and information to improve outcomes and the overall experience.

“Advanced analytics and AI have a significant impact on how important decisions are made across the health care ecosystem,” says Sanji Fernando, SVP of artificial intelligence and analytics platforms at Optum. To that end, the company has guidelines for the responsible use of advanced analytics and AI across all of UnitedHealth Group.

“It’s important for us to have a framework, not only for the data scientists and machine learning engineers, but for everyone in our organization — operations, clinicians, product managers, marketing — to better understand expectations and how we want to drive breakthroughs to better support our customers, patients, and the wider health care system,” he says. “We view the promise of AI and its responsible use as part of our shared responsibility to use these breakthroughs appropriately for patients, providers, and our customers.”

The guidelines focus on making sure everyone is considering how to appropriately use advanced analytics and AI, how the models are trained, and how they are monitored and evaluated over time, he adds.

Machine learning models, by definition, learn from the available data that’s being created throughout the health care system. Inequities in the system may be reflected in the data and predictions that machine learning models return. It’s important for everyone to be aware that health inequity may exist and that models may reflect that, he explains.

“By consistently evaluating how models may classify or infer, and looking at how that affects folks of different races, ethnicities, and ages, we can be more aware of where some models may require consistent examination to best ensure they are working the way we’d like them to,” he says. “The reality is that there’s no magic bullet to ‘fix’ an ML model automatically, but it’s important for us to understand and consistently learn where these models may impact different groups.”
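Fernando doesn’t name specific tooling here, but the kind of evaluation he describes, checking how a model’s outputs break down across demographic groups, can be approximated with a few lines of analysis. A minimal sketch with made-up column names and data, not Optum’s actual process:

    import pandas as pd

    # Hypothetical evaluation data: model outputs joined with demographic attributes.
    results = pd.DataFrame({
        "age_group":  ["18-34", "18-34", "35-64", "35-64", "65+", "65+"],
        "prediction": [1, 0, 1, 1, 0, 0],   # model's positive/negative classification
        "label":      [1, 0, 1, 0, 1, 0],   # ground truth
    })

    # Rate of positive predictions per group: large gaps can signal that a model
    # classifies or infers differently for different populations.
    positive_rate = results.groupby("age_group")["prediction"].mean()

    # Accuracy per group: a model can look fine in aggregate while underperforming
    # badly for one subgroup.
    accuracy = (
        results.assign(correct=results["prediction"] == results["label"])
        .groupby("age_group")["correct"]
        .mean()
    )

    print(positive_rate)
    print(accuracy)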

Transparency is a key factor in delivering responsible AI. That includes being very clear about how you’re training your models, about the appropriate use of the data used to train an algorithm, and about data privacy. When possible, it also means understanding how specific features are being identified or leveraged within the model. Basics like an age or date are straightforward features, but the challenge arises with paragraphs of natural language and unstructured text. Each word, phrase or paragraph can be considered a feature, creating an enormous number of combinations to consider.

“But understanding feature importance — the features that are more important to the model — is important to provide better insight into how the model may actually be working,” he explains. “It’s not true mathematical interpretability, but it gives us a better awareness.”
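One common way to get that kind of awareness is permutation importance, which measures how much a model’s score drops when each feature is shuffled. The article doesn’t say which method Optum uses; the sketch below assumes scikit-learn and synthetic data purely for illustration:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for a health care dataset; real features might be age,
    # diagnosis codes, or tokens derived from unstructured clinical notes.
    X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much the score drops:
    # the bigger the drop, the more the model relies on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")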

Another important factor is being able to reproduce the performance and results of a model. Results will necessarily change when you train or retrain an algorithm, so you want to be able to trace that history by reproducing results over time. This ensures the consistency and appropriateness of the model remain constant (and allows for potential adjustments should they be needed).
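In practice, that traceability usually comes down to pinning the sources of randomness and recording enough metadata to rerun a training job on verifiably identical inputs. A minimal, generic sketch follows; the record fields are illustrative, not any particular MLops tool’s format:

    import hashlib
    import json
    import random

    import numpy as np

    def train_reproducibly(data_path: str, seed: int = 42) -> dict:
        """Pin randomness and record run metadata so results can be reproduced later."""
        random.seed(seed)
        np.random.seed(seed)

        # ... model training would happen here ...

        # Hash the training data so a later rerun can verify identical input.
        with open(data_path, "rb") as f:
            data_sha256 = hashlib.sha256(f.read()).hexdigest()

        run_record = {
            "data_path": data_path,
            "data_sha256": data_sha256,
            "seed": seed,
            # A real pipeline would also record code version, library versions
            # and hyperparameters.
        }
        with open("run_record.json", "w") as f:
            json.dump(run_record, f, indent=2)
        return run_record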

There’s no shortage of tools and capabilities available across the field of responsible AI because there are so many people who are passionate about making sure we all use AI responsibly. For example, Optum uses an open-source bias audit tool from the University of Chicago. But there are any number of approaches and great thinking from a tooling perspective, Fernando says, so it’s really becoming an industry best practice to implement a policy of responsible AI.

The other piece of the puzzle requires work and a commitment from everyone in the ecosystem: making responsible use everyone’s responsibility, not just the machine learning engineer or data scientist.

“Our aspiration is that every employee understands these responsibilities and takes ownership of them,” he says, “whether UHG employees are using ML-driven recommendations in their day-to-day work, designing new products and services, or they’re the data scientists and ML engineers who can evaluate models and understand output class distributions, we all have a shared responsibility to ensure these tools are achieving the best and most equitable results for the people we serve.”

To learn more about the ways that AI is impacting the delivery and administration of health care across the ecosystem, the benefits of machine learning for cost savings and efficiency, and the importance of responsible AI for every worker, don’t miss this VB Live event.


Don’t miss out!

Register here for free.


You’ll learn:

  • What it means to use advanced analytics “responsibly”
  • Why responsible use is so important in health care as compared to other fields
  • The steps that researchers and organizations are taking today to ensure AI is used responsibly
  • What the AI-enabled health system of the future looks like and its advantages for consumers, organizations, and clinicians

Speakers:

  • Brian Christian, Author of The Alignment Problem, Algorithms to Live By and The Most Human Human
  • Sanji Fernando, SVP of Artificial Intelligence & Analytics Platforms, Optum
  • Kyle Wiggers, AI Staff Writer, VentureBeat (moderator)
