
IoT has a long way to go to be trustworthy

The smart homes and cities of the future are often painted as places full of transparent touch screens, voice-controlled assistants, and robots that can predict what we need before we even ask. Unfortunately, what these advertisements and marketing materials don’t always show is who is really in control of these future worlds.

Sure, most of these videos and images show regular people like you and me as the main actors, but it is the companies running the services and networks behind the scenes that are really in control, and it will be a long while before we can, or should, completely trust these systems with our safety and our lives.

Texas Trouble

The most recent incident plaguing Texan owners of smart thermostats isn’t a totally new case, but the number of people it affected highlights one of the current problems with IoT, especially with smart home appliances and even smart cars. In a nutshell, some homeowners in Texas suddenly found their smart thermostats set to an uncomfortably high temperature during a sweltering day without any action on their part. It turned out that energy companies or service providers working on their behalf remotely adjusted those thermostats without notifying owners beforehand.

Unfortunately for those owners, everything these companies did was legal and by the book. Affected users unknowingly agreed to let energy companies control their thermostats remotely in exchange for discounts on their bills or even sweepstakes entries. As the reports proved, most of these homeowners were not aware of those details and were livid when they discovered what they signed up for.

Whether raising thermostat temperatures is even an effective strategy is a different matter entirely. If the ultimate goal was to reduce strain on the energy grid, the combination of a heatwave and higher indoor temperatures likely pushed people to run fans and other cooling devices even harder, which may have achieved the opposite. The fact remains that these companies have the power and the ability to affect people’s lives in ways customers didn’t expect they could.

Convenience, not control

IoT devices and services are powerful and convenient, no doubt about that. They simplify things that often look complicated and automate processes that we would otherwise perform over and over again. They empower people by freeing up more time for the more important things in life, like spending time with family and friends or even pampering oneself every so often.

The tasks that these devices and systems perform for us don’t just magically disappear thanks to IoT, though. What IoT really does is offload the work somewhere else and hand control over to someone else. Just because you no longer have to flick the light switch doesn’t mean no one has to. More often than not, that someone is a service provider or a smart assistant that is, in the final analysis, owned and controlled by some company.

To put it bluntly, the convenience that IoT offers comes at the price of giving up some control of our lives to others. That in itself isn’t necessarily bad, and we do it every day. Some have secretaries manage their work, and we implicitly trust chefs to prepare the food we eat but don’t cook ourselves. We also trust energy companies to deliver power to our houses 24/7. Likewise, we trust companies like Google, Nest, Honeywell, and the like to manage some of the details of our homes for us. The problem is when that trust is misplaced or, in some cases, was never knowingly given in the first place.

Smart devices, not so smart humans

Smart devices and the networks that power them seem like technological marvels and they truly are. That, however, doesn’t remove the fact that there will always be humans on both ends of the line, one way or another. At any given time, a problem could appear on any part of that chain, and human errors are, unfortunately, harder to fix than technical ones.

On one hand, it has been proven time and again that humans have a seemingly innate tendency to gloss over the fine print. Texas residents were shocked because they had no idea energy companies could remotely control their smart thermostats. They probably wouldn’t have been so surprised had they known they had authorized those companies and their partners to do exactly that when they signed up for a savings program.

On the other hand, there will always be companies willing to exploit that human vulnerability. Even companies that promise to do no evil might not always keep their word. Of course, some amount of trust is required when using any company’s products or services, but it is still something people have to keep in mind when signing up for a program or buying anything, especially something that connects to and is controlled via the Internet.

Where safety starts

This is not to say that IoT products are bad. In fact, they are the inevitable future. From smart homes to smart buildings to smart streets and cities, we are slowly but surely moving towards a connected future. That makes it even more critical that we develop better and smarter habits and mindsets when embracing the smart products of the future today.

IoT devices come in all shapes, sizes, and capabilities, and some don’t even need to connect to a remote server to function. Devices that connect only to a local network, or that store data with your cloud storage provider of choice, do exist. They are sometimes more expensive, but the privacy and security they offer can buy priceless peace of mind.

Always be aware of the fine print when signing up for services and promos. There is almost always a catch, and it pays, sometimes literally, to know what it is before you sign on the dotted line. It would be grand if we could all grok the legalese in Terms of Service documents, but sometimes a brief Internet search is enough. Better yet, support legislation or efforts that would compel companies to provide such terms in easy-to-understand language.

Final Thoughts

There will always be some exchange of power whenever we give control of some part of our lives to something or someone. That doesn’t mean we’re giving up complete control, and it doesn’t take much to retain some of it, especially over which parts of ourselves we hand over to companies.

Sometimes, merely knowing that we are giving these companies and entities power over our lives can be enough. The future is smart, one way or another, but hopefully it also comes with smart humans who know what they are doing better than their predecessors did.


Amazon Alexa head scientist on developing trustworthy AI systems



Particularly over the past half-century, humans have had to adapt to profound technological changes like the internet, smartphones, and personal computers. In most cases, adapting to the technology has made sense — we live in a far more globalized world compared with 50 years ago. But there’s a difference when it comes to AI and machine learning technologies. Because they can learn about people and conform to their needs, the onus is on AI to adapt to users rather than the other way around — at least in theory.

Rohit Prasad, head scientist at Amazon’s Alexa division, believes that the industry is at an inflection point. Moving forward, it must ensure that AI learns about users in the same ways users learn so that a level of trust is maintained, he told VentureBeat in a recent phone interview.

One of the ways Amazon’s Alexa team hopes to inject greater trust and personalization into its AI is by incorporating contextual awareness, like the individual preferences of Alexa users in a household or business. Starting later this year, users will be able to “teach” Alexa things like their dietary preferences, and Alexa will apply that information to future interactions, suggesting only vegetarian restaurants and recipes, for example.

“Alexa will set the expectation about where this preference information will be used and be very transparent about what it learns and reuses, helping to build tighter trust with the customer,” Prasad said. “These are the benefits to this.”
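Amazon hasn’t detailed the mechanics, but the behavior described above, storing an explicitly taught preference and reusing it to filter later suggestions, can be sketched in a few lines. The Python snippet below is purely illustrative; none of the names or structures reflect Amazon’s actual implementation.

```python
# Hypothetical sketch of a per-user preference store applied to later requests.
# All names and structures are illustrative, not Amazon's implementation.

from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    dietary: set = field(default_factory=set)   # e.g. {"vegetarian"}

    def teach(self, category: str, value: str):
        """Record a preference the user explicitly stated."""
        if category == "dietary":
            self.dietary.add(value)

def suggest_restaurants(prefs: UserPreferences, candidates: list[dict]) -> list[dict]:
    """Filter suggestions using only what the user has explicitly taught."""
    if not prefs.dietary:
        return candidates
    return [c for c in candidates if prefs.dietary <= set(c["tags"])]

prefs = UserPreferences()
prefs.teach("dietary", "vegetarian")

restaurants = [
    {"name": "Green Fork", "tags": ["vegetarian", "vegan"]},
    {"name": "Smokehouse BBQ", "tags": ["barbecue"]},
]
print(suggest_restaurants(prefs, restaurants))  # only "Green Fork" passes the filter
```

The key design point in this sketch is that only explicitly taught preferences influence the results, which is what keeps the behavior transparent to the user.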

Toxicity and privacy

Fostering trust hasn’t always been the Alexa team’s strong suit. In 2019, Amazon launched Alexa Answers, a service that allows any Amazon customer to submit responses to unanswered questions. Amazon gave assurances that submissions would be policed through a combination of automatic and manual review, but VentureBeat’s analyses revealed that untrue, misleading, and offensive questions and answers were served to millions of Alexa users. In April 2019, Bloomberg revealed that Amazon employs contract workers to annotate thousands of hours of audio from sometimes accidentally activated Alexa devices, prompting the company to roll out user-facing tools that quickly delete cloud-stored data. And researchers have claimed that Amazon runs afoul of its own developer rules regarding location privacy on Alexa devices.

In response to questions about Alexa Answers, Prasad said that Amazon has “a lot of work [to do]” on guardrails and ranking the answers to questions while filtering information that might be insensitive to a user. “We know that [Alexa devices] are often in a home setting or communal setting, where you can have different age groups of people with different ethnicities, and we have to be respectful of that,” he said.

Despite the missteps, Alexa has seen increased adoption in the enterprise over the past year, particularly in hospitality and elder care centers, Prasad says. He asserts that one of the reasons is Alexa’s ability to internally route requests to the right app, a capability that’s enabled by machine learning.

The enterprise has experienced an uptick in voice technology adoption during the pandemic. In a recent survey of 500 IT and business decision-makers in the U.S., France, Germany, and the U.K., 28% of respondents said they were using voice technologies, and 84% expect to be using them in the next year.

“[Alexa’s ability] to decide the best experience [is] being extended to the enterprise, and I would say is a great differentiator, because you can have many different ways of building an experience by many different enterprises and individual developers,” Prasad said. “Alexa has to make seamless requests, which is a very important problem we’re solving.”

Mitigating bias

Another important — albeit intractable — problem Prasad aims to tackle is inclusive design. While natural language models are the building blocks of services including Alexa, growing evidence shows that these models risk reinforcing undesirable stereotypes. Detoxification has been proposed as a fix for this problem, but the coauthors of newer research suggest even this technique can amplify rather than mitigate biases.

The increasing attention on language biases comes as some within the AI community call for greater consideration of the effects of social hierarchies like racism. In a paper published last June, Microsoft researchers advocated for a closer examination and exploration of the relationships between language, power, and prejudice in their work. The paper also concluded that the research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom specific bias is harmful.

On the accessibility side, Prasad points to Alexa’s support for text messages, which lets users type messages rather than talk to Alexa. Beyond this, he says that the Alexa team is investigating “many” different ways Alexa might better understand different kinds of speech patterns.

“[Fairness issues] become very individualized. For instance, if you have a soft voice, independent of your gender or age group, you may struggle to get Alexa to wake up for you,” Prasad said. “This is where more adaptive thresholding can help, for example.”
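Prasad doesn’t say how Alexa implements this, but the general idea of adaptive thresholding is straightforward to sketch. The class below is a hypothetical illustration, not Amazon’s code: it gradually relaxes a wake-word detector’s confidence cutoff for a user whose scores keep landing just below it, while a floor limits how far the cutoff can drop to avoid false wakes.

```python
# Hypothetical sketch of per-user adaptive thresholding for wake-word detection.
# Parameter values are illustrative only.

from collections import deque

class AdaptiveWakeWordThreshold:
    """Lowers the detection threshold for users whose wake-word confidence
    scores consistently fall just below the default cutoff (e.g. soft voices)."""

    def __init__(self, default_threshold=0.80, floor=0.60, window=50):
        self.threshold = default_threshold
        self.floor = floor                     # never drop below this, to limit false wakes
        self.recent_scores = deque(maxlen=window)

    def observe(self, score: float) -> bool:
        """Record a detector confidence score and return True if it wakes the device."""
        self.recent_scores.append(score)
        self._adapt()
        return score >= self.threshold

    def _adapt(self):
        # If many near-misses cluster just under the threshold, relax it slightly.
        near_misses = [s for s in self.recent_scores
                       if self.threshold - 0.15 <= s < self.threshold]
        if len(near_misses) >= 10:
            self.threshold = max(self.floor, self.threshold - 0.02)
            self.recent_scores.clear()
```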

Prasad also says that the team has worked to remove biases in Alexa’s knowledge graphs, or the databases that furnish Alexa with facts about people, places, and things. These knowledge graphs, which are automatically created, could reinforce biases in the data they contain like “nurses are women” and “wrestlers are men.”

“It’s early work, but we’ve worked incredibly hard to reduce those biases,” Prasad said.

Prasad believes that tackling these challenges will ultimately lead to “the Holy Grail” in AI: a system that understands how to handle all requests appropriately without manual modeling or human supervision. Such a system would be more robust to variability, he says, and enable users to teach it to perform new skills without the need for arduous engineering.

“[With Alexa,] we’re taking a very pragmatic approach to generalized intelligence,” he said. “The biggest challenge to me as an AI researcher is building systems that perform well but that can also be democratized such that anyone can build a great experience for their applications.”


People are more likely to exploit trustworthy AI than other humans

Cooperation between people holds society together, but new research suggests we’ll be less willing to compromise with “benevolent” AI.

The study explored how humans will interact with machines in future social situations — such as self-driving cars that they encounter on the road — by asking participants to play a series of social dilemma games.

The participants were told that they were interacting with either another human or an AI agent. Per the study paper:

Players in these games faced four different forms of social dilemma, each presenting them with a choice between the pursuit of personal or mutual interests, but with varying levels of risk and compromise involved.

The researchers then compared what the participants chose to do when interacting with AI or anonymous humans.


Study co-author Jurgis Karpus, a behavioral game theorist and philosopher at the Ludwig Maximilian University of Munich, said they found a consistent pattern:

People expected artificial agents to be as cooperative [sic] as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans.

Social dilemmas

One of the experiments they used was the prisoner’s dilemma. In the game, players accused of a crime must choose between cooperation for mutual benefit or betrayal for self-interest.
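As a minimal sketch of that structure (with illustrative point values, not the ones used in the study), mutual cooperation beats mutual betrayal, but betraying a cooperative partner pays best of all, which is exactly the temptation the participants faced:

```python
# Minimal sketch of a one-shot prisoner's dilemma payoff structure,
# as used in social dilemma experiments. Payoff values are illustrative.

PAYOFFS = {
    # (player_choice, partner_choice): (player_payoff, partner_payoff)
    ("cooperate", "cooperate"): (3, 3),   # mutual benefit
    ("cooperate", "defect"):    (0, 5),   # player is exploited
    ("defect",    "cooperate"): (5, 0),   # player exploits a trusting partner
    ("defect",    "defect"):    (1, 1),   # mutual betrayal
}

def play_round(player_choice: str, partner_choice: str):
    return PAYOFFS[(player_choice, partner_choice)]

# The study's finding, in these terms: participants expected an AI partner to
# cooperate, yet chose to defect against it more often than against a human.
print(play_round("defect", "cooperate"))  # (5, 0)
```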

While the participants embraced risk with both humans and artificial intelligence, they betrayed the trust of AI far more frequently.

However, they did trust their algorithmic partners to be as cooperative as humans.

“They are fine with letting the machine down, though, and that is the big difference,” said study co-author Dr Bahador Bahrami, a social neuroscientist at the LMU. “People even do not report much guilt when they do.”

The findings suggest that the benefits of smart machines could be restricted by human exploitation.

Take the example of autonomous cars. If no one lets them merge into traffic, the vehicles will end up stuck on side roads, causing congestion. Karpus notes that this could have dangerous consequences:

If humans are reluctant to let a polite self-driving car join from a side road, should the self-driving car be less polite and more aggressive in order to be useful?

While the risks of unethical AI attract most of our concerns, the study shows that trustworthy algorithms can generate another set of problems.



Getting to trustworthy AI



Artificial intelligence will be key to helping humanity travel to new frontiers and solve problems that today seem insurmountable. It enhances human expertise, makes predictions more accurate, automates decisions and processes, frees humans to focus on higher value work, and improves our overall efficiency.

But public trust in the technology is at a low point, and there is good reason for that. Over the past several years, we’ve seen multiple examples of AI that makes unfair decisions, or that doesn’t give any explanation for its decisions, or that can be hacked.

To get to trustworthy AI, organizations have to resolve these problems with investments on three fronts: First, they need to nurture a culture that adopts and scales AI safely. Second, they need to create investigative tools to see inside black box algorithms. And third, they need to make sure their corporate strategy includes strong data governance principles.

1. Nurturing the culture

Trustworthy AI depends on more than just the responsible design, development, and use of the technology. It also depends on having the right organizational operating structures and culture. For example, many companies that worry about bias in their training data have also expressed concern that their work environments are not conducive to nurturing women and minorities in their ranks. The two concerns are, in fact, directly related. To get started on this culture shift, organizations need to define what responsible AI looks like within their function, why it’s unique, and what the specific challenges are.

To ensure fair and transparent AI, organizations must pull together task forces of stakeholders from different backgrounds and disciplines to design their approach. This method will reduce the likelihood of underlying prejudice in the data that’s used to create AI algorithms that could result in discrimination and other social consequences.

Task force members should include experts and leaders from various domains who can understand, anticipate, and mitigate relevant issues as necessary. They must have the resources to develop, test, and quickly scale AI technology.

For example, machine learning models for credit decisioning can exhibit gender bias, unfairly discriminating against female borrowers if uncontrolled. A responsible-AI task force can roll out design thinking workshops to help designers and developers think through the unintended consequences of such an application and find solutions. Design thinking is foundational to a socially responsible AI approach.

To ensure this new thinking becomes ingrained in the company culture, all stakeholders from across an organization, from data scientists and CTOs to chief diversity and inclusivity officers, must play a role. Fighting bias and ensuring fairness is a socio-technological challenge, one that is solved when employees who may not be used to collaborating start doing so, specifically around data and the impact models can have on historically disadvantaged people.

2. Trustworthy tools

Organizations should seek out tools to monitor transparency, fairness, explainability, privacy, and robustness of their AI models. These tools can point teams to problem areas so that they can take corrective action (such as introducing fairness criteria in the model training and then verifying the model output).

Such investigative tools are available both as open source projects and as commercial products. When choosing among them, first consider what you need the tool to actually do and whether it must run on production systems or on systems still in development. Then determine what kind of support you need and at what price, breadth, and depth. Another important consideration is whether the tools are trusted and referenced by global standards bodies.
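Fairlearn is one such openly available toolkit. As a minimal sketch of the “verify the model output” step mentioned earlier, the example below checks a credit model’s predictions for gender disparities; the dataset and column names are entirely hypothetical.

```python
# A minimal sketch of verifying model output for gender bias in a credit model,
# using the open source Fairlearn toolkit. Data and column names are hypothetical.

import pandas as pd
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Hypothetical held-out predictions from a credit-decisioning model.
df = pd.DataFrame({
    "approved_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "approved_pred": [1, 0, 1, 0, 0, 1, 0, 0],
    "gender":        ["F", "F", "M", "F", "M", "M", "F", "M"],
})

# Accuracy broken down by the sensitive attribute.
by_group = MetricFrame(
    metrics=accuracy_score,
    y_true=df["approved_true"],
    y_pred=df["approved_pred"],
    sensitive_features=df["gender"],
)
print(by_group.by_group)

# Difference in approval rates between groups (0.0 means parity).
print(demographic_parity_difference(
    df["approved_true"], df["approved_pred"], sensitive_features=df["gender"]
))
```

A non-zero demographic parity difference flags a gap in approval rates between groups, which is the kind of signal that would prompt the corrective action described above.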

3. Developing data and AI governance

Any organization deploying AI must have clear data governance in effect. This includes building a governance structure (committees and charters, roles and responsibilities) as well as creating policies and procedures on data and model management. With respect to humans and automated governance, organizations should adopt frameworks for healthy dialog that help craft data policy.

This is also an opportunity to promote data and AI literacy across an organization. For highly regulated industries, organizations can find specialized tech partners that can also ensure the model risk management framework meets supervisory standards.

There are dozens of AI governance boards around the world working with industry to help set standards for AI. IEEE is one example: it is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. Its work encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. Such international standards bodies can help guide your organization to adopt standards that are right for you and your market.

Conclusion

Curious how your org ranks when it comes to AI-ready culture, tooling, and governance? Assessment tools can help you determine how well prepared your organization is to scale AI ethically on these three fronts.

There is no magic pill for making your organization a truly responsible steward of artificial intelligence. AI is meant to augment and enhance your current operations, and a deep learning model can only be as open-minded, diverse, and inclusive as the team developing it.

Phaedra Boinodiris, FRSA, is an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and Ethics. She has focused on inclusion in technology since 1999 and is a member of the Cognitive World Think Tank on enterprise AI.


World Economic Forum launches global alliance to speed trustworthy AI adoption

The World Economic Forum (WEF) is launching the Global AI Action Alliance today, with more than 100 organizations participating at launch. The steering committee includes business leaders like IBM CEO Arvind Krishna, multinational organizations like the OECD and UNESCO, and worker group representatives like International Trade Union Confederation general secretary Sharan Burrow.

The Global AI Action Alliance is paid for by $40 million in grant funding from the Patrick J. McGovern Foundation to support AI and data projects.

Much good can be done with AI, said Kay Firth-Butterfield, the WEF’s director of AI and machine learning at the Centre for the Fourth Industrial Revolution, but she cautioned that the technology needs a good governance foundation to garner and maintain public trust.

“It is our expectation that these projects will explore the frontiers of social challenges that can be solved by AI and through experimentation shape the development of new AI technologies. The Foundation is also committing to supply direct data services to global nonprofits to create exemplar organizations poised to capture the benefits of AI for the people and planet they serve,” Patrick J. McGovern Foundation president Vilas Dhar told VentureBeat.

As part of that effort, the group will support organizations promoting AI governance and amplify influential AI ethics frameworks and research. This support is needed to bolster AI ethics work that can often be fragmented or suffer from a lack of exposure.

The Global AI Action Alliance is the latest initiative from the World Economic Forum, following the creation of a Centre for the Fourth Industrial Revolution. In 2019, the World Economic Forum created the Global AI Council with participation from individuals like Uber CEO Dara Khosrowshahi and Microsoft VP Brad Smith to steer WEF AI activity.

Government officials working with the WEF previously created one of the first known guidelines to help people within public agencies weigh risk associated with acquiring AI services from private market vendors. Additional resources include work with a New Zealand government official to reconsider the role of regulation in the age of AI.

AI regulation is not just imperative to protect against systemic discrimination. Unregulated AI is also a threat to the survival of democracy itself at a time when the institution is under attack in countries like Brazil, India, the Philippines, and the United States. Last fall, former European Parliament member Marietje Schaake argued in favor of creating a global alliance to reclaim power from Big Tech firms and champion democracy.

“As a representative of civil society, we prioritize creating spaces for shared decision making, rather than corralling the behavior of tech companies. Alliances like GAIA serve the interests of democracy, restructuring the power dynamic between the elite and the marginalized by bringing them together around one table,” Dhar said.

In related news, earlier this week VentureBeat detailed how the OECD formed a task force dedicated to creating metrics to help nation-states understand how much AI compute they need.
