Categories
AI

Google’s ethical AI researchers complained of harassment long before Timnit Gebru’s firing

Google’s AI leadership came under fire in December when star ethics researcher Timnit Gebru was abruptly fired while working on a paper about the dangers of large language models. Now, new reporting from Bloomberg suggests the turmoil began long before her termination — and includes allegations of bias and sexual harassment.

Shortly after Gebru arrived at Google in 2018, she informed her boss that a colleague had been accused of sexual harassment at another organization. Katherine Heller, a Google researcher, reported the same incident, which included allegations of inappropriate touching. Google immediately opened an investigation into the man’s behavior. Bloomberg did not name the man accused of harassment, and The Verge does not know his identity.

The allegations coincided with an even more explosive story. Andy Rubin, the “father of Android,” had received a $90 million exit package despite being credibly accused of sexual misconduct. The news sparked outrage at Google, and 20,000 employees walked out of work to protest the company’s handling of sexual harassment.

Gebru and Margaret Mitchell, co-lead of the ethical AI team, went to AI chief Jeff Dean with a “litany of concerns,” according to Bloomberg. They told Dean about the colleague who’d been accused of harassment, and said there was a perceived pattern of women being excluded and undermined on the research team. Some were given lower roles than men, despite having better qualifications. Mitchell also said she’d been denied a promotion due to “nebulous complaints to HR about her personality.”

Dean was skeptical about the harassment allegations but said he would investigate, Bloomberg reports. He pushed back on the idea that there was a pattern of women on the research team getting lower-level positions than men.

After the meeting, Dean announced a new research project with the alleged harasser at the helm. Nine months later, the man was fired for “leadership issues,” according to Bloomberg. He’d been accused of misconduct at Google, although the investigation was still ongoing.

After the man was fired, he threatened to sue Google. The legal team told employees who’d spoken out about his conduct that they might hear from the man’s lawyers. The company was “vague” about whether it would defend the whistleblowers, Bloomberg reports.

The harassment allegation was not an isolated incident. Gebru and her co-workers reported additional claims of inappropriate behavior and bullying after the initial accusation.

In a statement emailed to The Verge, a Google spokesperson said: “We investigate any allegations and take firm action against employees who violate our clear workplace policies.”

Gebru said there were also ongoing issues with getting Google to respect the ethical AI team’s work. When she tried to look into a dataset released by Google’s self-driving car company Waymo, the project became mired in “legal haggling.” Gebru wanted to explore how skin tone impacted Waymo’s pedestrian-detection technology. “Waymo employees peppered the team with inquiries, including why they were interested in skin color and what they were planning to do with the results,” according to the Bloomberg article.

After Gebru went public about her firing, she received an onslaught of harassment from people who claimed that she was trying to get attention and play the victim. The latest reporting lends further weight to her contention that the issues she raised were part of a pattern of alleged bias on the research team.

Update April 21st, 6:05PM ET: Article updated with statement from Google.

Repost: Original Source and Author Link

Categories
Tech News

Learn the skills to be an ethical hacker and help turn the tide against cyberthreats

TLDR: The 2021 All-in-One Ethical Hacking and Penetration Testing Bundle offers training to become an ethical hacker and defend vulnerable computer systems from cyberattack.

North Korean hackers infiltrated computer systems for a South Korean nuclear research institute, the latest in a string of attacks against South Korean targets. Closer to home, a new report says hackers have methods for gaining access to Peloton exercise bike cameras and mics. And in-development quantum computers could soon make the unthinkable possible — hacking a cryptocurrency wallet.

It isn’t alarmist to say that cybersecurity threats are everywhere. Whether you want to understand those threats or even join the battle against these black hat forces, the training in The 2021 All-in-One Ethical Hacking and Penetration Testing Bundle ($29.99, over 90 percent off, from TNW Deals) can give you the resources to defend your own computer, a company’s network, or even an entire cloud-based network.

Over nine courses and more than 46 hours of instruction, this training can put any student on the path to a career as an ethical hacker, spotting and fixing network vulnerabilities to protect a computer or network of computers from outside infiltration.

First timers can get up to speed with the training in Hacking Web Applications and Penetration Testing: Fast Start, a guide for newcomers to learn how to “ethically” hack websites from scratch. The course offers hands-on practice to discover and exploit the most common vulnerabilities, find authorization, authentication and session management flaws, and understand how to use those openings to get inside closed systems.

Meanwhile, the other eight courses in this package, all administered by the online learning experts of the Oak Academy, delve further into other major cybersecurity areas. Students will learn how to run network scans, stop the most popular phishing and password attacks, defend against app or email infiltration, protect against malware, and more.
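
For a concrete sense of what a basic network scan involves, here is a minimal TCP connect scan in Python. It is a sketch for illustration only, not material from the bundle; the host and port range are placeholders, and scans like this should only be run against systems you own or are authorized to test.

```python
# A minimal TCP connect scan of the kind such courses cover.
# Host and port range are illustrative placeholders.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan the local machine's well-known port range as a harmless demo.
    print(scan_ports("127.0.0.1", range(20, 1025)))
```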

That knowledge leads learners to craft a customized penetration toolkit for use in their ethical hacking defenses. There is also training in how social engineering can be used to compromise operating systems and social media accounts, how to defend a home or business WiFi network, and even how to use the built-in defenses of a cloud platform like Microsoft Azure to protect a network in the cloud.

An $1,800 value, the coursework in The 2021 All-in-One Ethical Hacking and Penetration Testing Bundle is available now for $29.99, which works out to just over $3 per course.

Prices are subject to change.

Repost: Original Source and Author Link

Categories
AI

Nice publishes ethical framework for applying AI to customer service

Nice, a provider of a robotic process automation (RPA) platform infused with machine learning algorithms employed in call centers, today published a Robo Ethical Framework for employing AI to better serve customers.

The goal is to provide some direction on how best to employ robots alongside humans in a call center, rather than focusing on how to replace humans, said Oded Karev, vice president of RPA for Nice.

Specifically, the five guiding principles for the framework are:

  1. Robots must be designed for a positive impact: Robots should contribute to the growth and well-being of the human workforce. With consideration to societal, economic, and environmental impacts, every project that involves robots should have at least one positive rationale clearly defined.
  2. Robots must be free of bias: Personal attributes such as race, religion, sex, gender, age, and other protected statuses should be left out of consideration when creating robots so their behavior is employee-agnostic. Training algorithms are evaluated and tested periodically to ensure they are bias-free (a minimal example of such a check is sketched after this list).
  3. Robots must safeguard individuals: Delegating decisions to robots requires careful consideration. The algorithms, processes, and decisions embedded within robots must be transparent, providing the ability to explain conclusions with unambiguous rationale. Humans must be able to audit a robot’s processes and intervene to redress the system to prevent potential offenses.
  4. Robots must be driven by trusted data sources: Robots must be designed to act based upon verified data from trusted sources. Data sources used for training algorithms should maintain the ability to reference the original source.
  5. Robots must be designed with holistic governance and control: Humans must have complete information about a system’s capabilities and limitations. Robotics platforms must be designed to protect against abuse of power and illegal access by limiting, proactively monitoring, and authenticating any access to the platform and every type of edit action in the system.
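
As a concrete illustration of the periodic bias testing called for in principle 2, here is a minimal demographic parity check in Python. It is an assumption-laden sketch, not part of Nice’s framework: the field names, sample data, and 0.1 tolerance are all illustrative.

```python
# Minimal sketch of a periodic bias check: compare approval rates across a
# protected attribute and flag the model if the gap exceeds a tolerance.
# Field names, sample data, and the 0.1 threshold are illustrative only.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{"group": "A", "approved": True}, ...]"""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions: list[dict]) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
# Flag the model for human review if the gap exceeds an agreed tolerance.
print(approval_rates(sample), parity_gap(sample) > 0.1)
```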

Nice is including a copy of this framework with every license of its RPA platform that it sells. Organizations are, of course, under no obligation to implement it, but the company is trying to proactively reduce the “robot anxiety” that currently exists among employees within an organization, said Karev.

That level of anxiety is actually slowing down the rate at which RPA and other AI technologies would otherwise be adopted, Karev added.

Implementing robotics ethically

In general, most organizations are not closing call centers and laying off workers because they deployed an RPA platform. Instead, as more rote tasks become automated, the call center staff is engaging more deeply with customers in a way that increases overall satisfaction. As a result, customers are consuming more services that are now sold to them via a customer service representative.

There are, however, vertical industry segments where customers would rather not engage with anyone at all. They simply want a robot to automate a task, such as registering a product on their behalf. In either scenario, the relationship with end customers is fundamentally evolving, thanks in part to the rise of RPA and AI, noted Karev.

In some cases, organizations overestimate the ability of robots to handle customer interactions in place of humans, added Karev. “Robots are not as smart as some of us think they are,” he cautioned.

In fact, Karev noted that governance is crucial to make sure trusted insiders are not abusing robots for nefarious purposes or that cybercriminals are not hijacking a workflow to siphon revenue.

It’s not clear to what degree the Nice framework will become a real-world codicil to Asimov’s fictional Three Laws of Robotics, which begin by stating that no robot may harm a human or, through inaction, allow a human to come to harm. However, the Nice framework and others like it are a step in the right direction.


Repost: Original Source and Author Link

Categories
AI

DeepMind AGI paper adds urgency to ethical AI

It has been a great year for artificial intelligence. Companies are spending more on large AI projects, and new investment in AI startups is on pace for a record year. All this investment and spending is yielding results that are moving us all closer to the long-sought holy grail — artificial general intelligence (AGI). According to McKinsey, many academics and researchers maintain that there is at least a chance that human-level artificial intelligence could be achieved in the next decade. And one researcher states: “AGI is not some far-off fantasy. It will be upon us sooner than most people think.” 

A further boost comes from AI research lab DeepMind, which recently submitted a compelling paper to the peer-reviewed Artificial Intelligence journal titled “Reward is Enough.” They posit that reinforcement learning — a machine learning technique in which an agent learns from behavioral rewards — will one day lead to replicating human cognitive capabilities and achieving AGI. This breakthrough would allow for instantaneous calculation and perfect memory, leading to an artificial intelligence that would outperform humans at nearly every cognitive task.

We are not ready for artificial general intelligence

Despite assurances from stalwarts that AGI will benefit all of humanity, there are already real problems with today’s single-purpose narrow AI algorithms that call this assumption into question. According to a Harvard Business Review story, when AI applications ranging from predictive policing to automated credit scoring go unchecked, they represent a serious threat to our society. A recently published survey by Pew Research of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030, due to a widespread belief that businesses will prioritize profits and that governments will continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appears astronomical.

And that concern is just for the actual functioning of the AI. The political and economic impacts of AI could result in a range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. It is possible, too, that both extremes could coexist. For instance, if wealth generated by AI is distributed throughout society, this could contribute to the utopian vision. However, we have seen that AI concentrates power, with a relatively small number of companies controlling the technology. The concentration of power sets the stage for the feudal dystopia.

Perhaps less time than thought

The DeepMind paper describes how AGI could be achieved. Getting there is still some ways away, from 20 years to forever, depending on the estimate, although recent advances suggest the timeline will be at the shorter end of this spectrum and possibly even sooner. I argued last year that GPT-3 from OpenAI has moved AI into a twilight zone, an area between narrow and general AI. GPT-3 is capable of many different tasks with no additional training, able to produce compelling narratives, generate computer code, autocomplete images, translate between languages, and perform math calculations, among other feats, including some its creators did not plan. This apparent multifunctional capability does not sound much like the definition of narrow AI. Indeed, it is much more general in function.

Even so, today’s deep-learning algorithms, including GPT-3, are not able to adapt to changing circumstances, a fundamental distinction that separates today’s AI from AGI. One step towards adaptability is multimodal AI that combines the language processing of GPT-3 with other capabilities such as visual processing. For example, based upon GPT-3, OpenAI introduced DALL-E, which generates images based on the concepts it has learned. Using a simple text prompt, DALL-E can produce “a painting of a capybara sitting in a field at sunrise.” Though it may have never “seen” a picture of this before, it can combine what it has learned of paintings, capybaras, fields, and sunrises to produce dozens of images. Thus, it is multimodal and is more capable and general, though still not AGI.

Researchers from the Beijing Academy of Artificial Intelligence (BAAI) in China recently introduced Wu Dao 2.0, a multimodal-AI system with 1.75 trillion parameters. This is just over a year after the introduction of GPT-3 and is an order of magnitude larger. Like GPT-3, multimodal Wu Dao — which means “enlightenment” — can perform natural language processing, text generation, image recognition, and image generation tasks. But it can do so faster, arguably better, and can even sing.

Conventional wisdom holds that achieving AGI is not necessarily a matter of increasing computing power and the number of parameters of a deep learning system. However, there is a view that complexity gives rise to intelligence. Last year, Geoffrey Hinton, the University of Toronto professor who is a pioneer of deep learning and a Turing Award winner, noted: “There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses.” Synapses are the biological equivalent of deep learning model parameters.

Wu Dao 2.0 has apparently achieved this number. BAAI Chairman Dr. Zhang Hongjiang said upon the 2.0 release: “The way to artificial general intelligence is big models and [a] big computer.” Just weeks after the Wu Dao 2.0 release, Google Brain announced a deep-learning computer vision model containing two billion parameters. While it is not a given that the trend of recent gains in these areas will continue apace, there are models that suggest computers could have as much power as the human brain by 2025.


Expanding computing power and maturing models pave road to AGI

Reinforcement learning algorithms attempt to emulate humans by learning how to best reach a goal through seeking out rewards. With AI models such as Wu Dao 2.0 and computing power both growing exponentially, might reinforcement learning — machine learning through trial and error — be the technology that leads to AGI as DeepMind believes?

The technique is already widely used and gaining further adoption. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars. The military is actively using reinforcement learning to develop collaborative multi-agent systems such as teams of robots that could work side by side with future soldiers. McKinsey recently helped Emirates Team New Zealand prepare for the 2021 America’s Cup by building a reinforcement learning system that could test any type of boat design in digitally simulated, real-world sailing conditions. This allowed the team to achieve a performance advantage that helped it secure its fourth Cup victory.

Google recently used reinforcement learning on a dataset of 10,000 computer chip designs to develop its next-generation TPU, a chip specifically designed to accelerate AI application performance. Work that had taken a team of human design engineers many months can now be done by AI in under six hours. Thus, Google is using AI to design chips that can be used to create even more sophisticated AI systems, further speeding up the already exponential performance gains through a virtuous cycle of innovation.

While these examples are compelling, they are still narrow AI use cases. Where is the AGI? The DeepMind paper states: “Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation.” This means that AGI will naturally arise from reinforcement learning as the sophistication of the models matures and computing power expands.

Not everyone buys into the DeepMind view, and some are already dismissing the paper as a PR stunt meant to keep the lab in the news more than advance the science. Even so, if DeepMind is right, then it is all the more important to instill ethical and responsible AI practices and norms throughout industry and government. With the rapid rate of AI acceleration and advancement, we clearly cannot afford to take the risk that DeepMind is wrong.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.




Repost: Original Source and Author Link

Categories
AI

DeepMind scientist calls for ethical AI as Google faces ongoing backlash

Raia Hadsell, a research scientist at Google DeepMind, believes “responsible AI is a job for all.” That was her thesis during a talk today at the virtual Lesbians Who Tech Pride Summit, where she dove into the issues currently plaguing the field and the actions she feels are required to ensure AI is ethically developed and deployed.

“AI is going to change our world in the years to come. But because it is such a powerful technology, we have to be aware of the inherent risks that will come with those benefits, especially those that can lead to bias, harm, or widening social inequity,” she said. “I hope we can come together as a community to build AI responsibly.”

AI approaches are algorithmic and general, which means they’re inherently multi-use. On one side there are promises of curing diseases and unlocking a golden future, and on the other, unethical approaches and dangerous use cases that are already causing harm. With a lot on the line, how to approach these technologies is rarely clear.

Hadsell emphasized that while regulators, lawyers, ethicists, and philosophers play a critical role, she’s particularly interested in what researchers and scientists can actively do to build responsible AI. She also detailed some of the resistance she’s met within the research community and the changes she’s helped bring to life thus far.

Data, algorithms, and applications

The issues plaguing AI are well-known, but Hadsell gave an overview about their roots in data, algorithms, and applications.

Data, for one, is the lifeblood of modern AI, which is mostly based on machine learning. Hadsell said the ability to use these datasets built on millions or billions of human data points is “truly a feat of engineering,” but one with pitfalls. Societal bias and inequity are often encoded in data, and then exacerbated by an AI model that’s trained on that data. There are also issues of privacy and consent, which she said “have too often been compromised by the irresponsible enthusiasm of a young PhD student.”

Hadsell also brought up the issue of deepfakes, noting that the same kind of algorithm used to create them is also used for weather prediction. “A lot of the AI research community works on fundamental research, and that can appear to be a world apart from an actual real-world deployment of that research,” said Hadsell, whose own research currently focuses on solving the fundamental challenges of robotics and other control systems.

Changing the culture

During the event, Hadsell recalled talking to a colleague who had written up a paper about their new algorithm. When asked to discuss the possible future impacts of the research, the colleague replied that they “can’t speculate about the future” because they’re a scientist, not an ethicist.

“Now wait a minute, your paper claims that your algorithm could cure cancer, mitigate climate change, and usher in a new age of peace and prosperity. Maybe I’m exaggerating a bit, but I think that that proves you can speculate about the future,” Hadsell said.

This interaction wasn’t a one-off. Hadsell said many researchers just don’t want to discuss negative impacts, and she didn’t mince words, adding that they “tend to reject responsibility and accountability for the broader impacts of AI on society.” The solution, she believes, is to change the research culture to ensure checks and balances.

A reckoning at NeurIPS

NeurIPS is the largest and most prestigious AI conference in the world, yet despite exponential growth in the number of attendees and papers submitted over the past decade, there were no ethical guidelines provided to authors prior to 2020. What’s more, papers were evaluated strictly on technical merit without consideration for ethical questions.

So when Hadsell was invited to be one of four program chairs tasked with designing the review process for the 10,000 papers expected last year, she initiated two changes. One was recruiting a pool of ethical advisors to give informed feedback on papers deemed to be controversial. The other was requiring every single author to submit a broader impact statement with their work, which would need to discuss the potential positive and negative future impacts, as well as any possible mitigations.

This idea of an impact statement isn’t new — it’s actually a common requirement in other scientific fields like medicine and biology — but this change didn’t go over well with everyone. Hadsell said she “didn’t make a lot of friends” and there were some tears, but later some authors reached out to say it was a valuable experience and even inspired new directions for research. She added there’s also been an uptick in conferences requiring such statements.

“Adding the broader impact statement to a few thousand papers is not quite enough to change the culture towards responsible AI. It’s only a start,” Hadsell said. She also noted that there’s a danger these reviews will become “tick-box formalities” rather than an honest examination of the risks and benefits of each new technological innovation. “So we need to keep the integrity and build onwards, from broader impact statements to responsible AI.”

Walking the walk

Before Hadsell’s talk even began, there was an elephant in the room. Google, which has owned the prestigious DeepMind lab since 2014, doesn’t have the best track record with ethical AI. The issue has been especially front and center since December when Google fired Timnit Gebru, one of the best-known AI researchers and co-lead of its AI ethics team, in what thousands of the company’s employees called a “retaliatory firing.” Gebru says she was fired over email after refusing to rescind research about the risks of deploying large language models. Margaret Mitchell, the other co-lead on the ethics team, was fired as well.

Attendees dropped questions on the topic into the chat as soon as Hadsell’s talk began. “How can you build a culture of accountability and responsibility if voices speaking on the topics of AI ethics and [the] negative impact of Google’s research into AI algorithms (like Timnit Gebru) are rejected?” asked one attendee. Another acknowledged that Hadsell works in a different part of the company, but still asked for her thoughts on the firing.

Hadsell said she didn’t have any additional information or insights other than what’s already been made public. She added, “What I will say is that at DeepMind, we are, you know, really concerned with making sure the voices we have in the community and internally, and the publications that we write and put out, express our diversity and all of the different voices at DeepMind. I believe it’s important for everyone to have the chance to speak about the ethics of AI and about risks, regardless of Google’s algorithm.”


Repost: Original Source and Author Link

Categories
Tech News

This massive 130-hour training collection can turn you into a skilled ethical hacker

TLDR: The All-In-One 2021 Super-Sized Ethical Hacking Bundle includes 18 courses packed with techniques for spotting hackers and protecting vulnerable systems.

Electronic Arts is one of the world’s biggest game publishers, but that doesn’t mean the publisher of Battlefield, FIFA, and The Sims is safe from hackers, who recently announced they had stolen source code and vast stores of user information. Meanwhile, the U.S. government says it has successfully recovered most of the ransom money paid to Russian hackers who compromised the critical Colonial Pipeline earlier this year.

And despite assurances that they were virtually impregnable, there’s now fear that ultra-secure cryptocurrency wallets may not be as safe as once thought and could soon become prime hacker targets.

There’s no IT discipline more vital than security and the abilities of ethical hackers. In The All-in-One 2021 Super-Sized Ethical Hacking Bundle ($42.99, over 90 percent off, from TNW Deals), learners get a full introduction to the art of finding and stopping hackers as well as crafting systems immune from crippling cyberattacks.

There’s a lot of ground to cover in a subject as vast as web security, which is why this collection sports a booming 18 courses packed with more than 130 hours of training in all things cyber protection. 

Following some background training in core programming disciplines like Python, the majority of this bundle is an expansive ocean of security study, with training in important tools like Metasploit, Burp, BitNinja and more.

Of course, knowing how to use those tools makes all the difference, so further coursework offers students a hands-on approach to understanding nearly two dozen different hacking techniques, then using those tools in real-life scenarios to flex their cybersecurity muscles: spotting vulnerabilities, patching holes, combating hackers, and basically turning the system they protect into an absolute hacker no-fly zone.

While ethical hacker training is the core of this voluminous package, that’s not all that’s available in these courses. Learners will discover how to make their own pen testing tools, secure vulnerable wireless networks, or even become a bug bounty hunter, tracking down vulnerabilities that companies don’t yet know about, for a price.

With each course usually running between $99 and $200, The All-in-One 2021 Super-Sized Ethical Hacking Bundle would normally set students back about $3,300. But right now, this giant cybersecurity package is available for a fraction of that price: just $42.99.

Prices are subject to change.

Repost: Original Source and Author Link

Categories
AI

Ethical AI will not see broad adoption by 2030, study suggests

According to a new report released by the Pew Research Center and Elon University’s Imagining the Internet Center, experts doubt that ethical AI design will be broadly adopted within the next decade. In a survey of 602 technology innovators, business and policy leaders, researchers, and activists, a majority worried that the evolution of AI by 2030 will continue to be primarily focused on optimizing profits and social control and that stakeholders will struggle to achieve a consensus about ethics.

Implementing AI ethically means different things to different companies. For some, “ethical” implies adopting AI — which people are naturally inclined to trust even when it’s malicious — in a manner that’s transparent, responsible, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “ethical AI” promises to guard against the use of biased data or algorithms, providing assurance that automated decisions are justified and explainable.

Pew and Elon University asked survey-takers “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” Sixty-eight percent predicted ethical principles intended to support the public good won’t be employed in most AI systems by 2030, while only 32% believed these principles will be incorporated into systems by 2030.

“These systems are … primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale, and automation,” Danah Boyd, a principal researcher at Microsoft, told Pew and Elon University. “A truly ethical stance on AI requires us to focus on augmentation, localized context, and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.”

Internet pioneer Vint Cerf, who participated in the survey, anticipates that while there will be a “good-faith effort” to adopt ethical AI design, good intentions won’t necessarily result in the desired outcomes. “Machine learning is still in its early days, and our ability to predict various kinds of failures and their consequences is limited,” he said. “The machine learning design space is huge and largely unexplored. If we have trouble with ordinary software whose behavior is at least analytic, machine learning is another story.”

Uphill battle

The respondents’ sentiments reflect the slow progress of industry and regulators to curtail the use of harmful AI. Key federal legislation in the U.S. remains stalled, including prohibitions on facial recognition and discriminatory social media algorithms. Less than half of organizations have fully mature, responsible AI implementations, according to a recent Boston Consulting Group survey. And 65% of companies can’t explain how AI predictions are made, while just 38% have bias mitigation steps built into their model development processes, a FICO report found.

“For AI, just substitute ‘digital processing.’ We have no basis on which to believe that the animal spirits of those designing digital processing services, bent on scale and profitability, will be restrained by some internal memory of ethics, and we have no institutions that could impose those constraints externally,” Susan Crawford, a professor at Harvard Law School and former special assistant in the Obama White House for Science, Technology, and Innovation Policy, noted in the Pew and Elon University report.

Despite the setbacks, recent developments suggest the tide may be shifting — at least in certain areas. In April, the European Commission, the executive branch of the European Union, announced regulations on the use of AI, including strict safeguards on recruitment, critical infrastructure, credit scoring, migration, and law enforcement algorithms. Cities like Amsterdam and Helsinki have launched AI registries that detail how each city government uses algorithms to deliver services. And the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency that promotes measurement science, has proposed a method for evaluating user trust in AI systems.

But experts like Douglas Rushkoff believe it will be an uphill battle. “Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now,” the media theorist and professor at City University of New York told Pew and Elon University. “Most basically, the reasons why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money — not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority is being used on people.”


Repost: Original Source and Author Link

Categories
Tech News

5 tips for an ethical investment in tech stocks

These days people trading on the stock market want more than just a strong financial return. They’re increasingly opting for investments that will also have a positive societal impact.

The coronavirus pandemic showed us even established tech companies can suffer downturns in the short term. Apple, a tech behemoth, was left reeling when Chinese manufacturing hubs were temporarily shut down last year.

In the longer term, however, technology stocks remain a first choice for many investors. Historically, they’ve dominated global stock markets and continue to grow at a remarkable rate.

Even during the downward spiral of the pandemic, tech stocks such as Zoom and Microsoft soared in value as an influx of people started working from home. The question for many investors now is: how can one find profitable investments without supporting unethical activity?

Growth of tech stocks

According to investment advisers Morningstar, technology stocks account for 24.2% of the top 500 stocks in the United States. Facebook, Apple, Amazon, Netflix, and Alphabet (which owns Google) dominate the market, with a combined value of more than US$4 trillion.

Tech stocks also take center stage in Australia. We’ve seen the rapid rise of “buy now, pay later” companies such as Australian-owned Afterpay and Zip.

At the same time, we’ve seen an increase in the number of Australians moving to ethical superannuation funds and ethically-managed investment schemes. The latter lets investors contribute money (to be managed by professional fund managers) which is pooled for investment to produce collective gain.

It’s estimated indirect investment through these schemes has increased by 79% over the past six years.

What is ethical investing?

While ethical investing is a broad concept, it can be understood simply as putting your money towards something that helps improve the world. This can range from companies that advocate for animal rights, to those aiming to limit the societal prevalence of gambling, alcohol, or tobacco.

Although there is no strict definition of ethical investment in Australia, many managed funds and super funds seek accreditation by the Responsible Investment Association Australasia. The “ethical” aspect can be grouped into three broad categories:

  1. Environmental — such as developing clean technology or engaging in carbon-neutral manufacturing
  2. Social — such as supporting innovative technology, reducing social harms such as poverty or gambling, boosting gender equality, protecting human and consumer rights, or supporting animal welfare
  3. Corporate governance — such as being anti-corruption, promoting healthy employee relations, or institutional transparency.

As investors, we must be very careful about the fine print of the companies we invest in. For example, under the accreditation guidelines, a managed investment fund that excludes companies with “significant” ties to fossil fuels can still include one that earns up to a certain amount of revenue from fossil fuels.

So while investment manager AMP Capital is accredited, it can still include companies earning up to 10% of their revenue from fossil fuel distribution and services.
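
As a rough illustration of how such a revenue-threshold screen works, here is a minimal Python sketch. The 10% cap mirrors the AMP Capital example above, while the company names and revenue shares are made up for illustration.

```python
# Minimal sketch of a revenue-threshold screen for fossil fuel exposure.
# The 10% cap reflects the example above; company data is fabricated.
def passes_fossil_fuel_screen(fossil_fuel_revenue_share: float,
                              threshold: float = 0.10) -> bool:
    """Return True if the company stays under the fund's fossil fuel revenue cap."""
    return fossil_fuel_revenue_share <= threshold

portfolio_candidates = {
    "CompanyA": 0.02,   # 2% of revenue from fossil fuel distribution -- passes
    "CompanyB": 0.12,   # 12% of revenue -- excluded under a 10% cap
}
print({name: passes_fossil_fuel_screen(share)
       for name, share in portfolio_candidates.items()})
```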

Categories
Computing

Some Ethical Hackers Are Making Huge Amounts of Cash

Broadly speaking, hackers come in two flavors: those who are out to exploit a computer system and cause havoc for its operator and the people who use it, and those who search for vulnerabilities in a system and then inform the operator in exchange for a cash reward.

The latter can make some serious dough from their work, too, with the top ones able to earn millions of dollars in the space of a single year.

HackerOne is a Silicon Valley-based company that partners with the global hacker community to track down security issues for its clients — via so-called “bug bounty programs” — before the vulnerabilities can be exploited by criminals.

A growing number of companies big and small are working with HackerOne to launch bug bounty programs so that flaws can be identified and fixed, thereby removing them as a potential threat to their business.

In its latest annual Hacker Report, HackerOne reveals just how well some ethical hackers have been doing.

In the last year alone, ethical hackers earned a staggering $40 million through the reporting of vulnerabilities to programs run by HackerOne, a huge increase from the $19 million earned in 2019. Nine hackers have earned over $1 million on the platform since 2019, and one hacker passed the $2 million mark in 2020.

More and more ethical hackers from all over the world are signing up to bug bounty programs, with HackerOne having seen a 63% increase in the number of hackers reporting flaws in the last year alone. The company now has more than a million investigators on its books.

In May 2020, HackerOne reached the milestone of $100 million paid to hackers for vulnerability reports, 50,000 of which were made in the last year alone, and the company forecasts that hackers will earn a total of $1 billion in bug bounties within five years.

Payments for reported vulnerabilities can vary hugely as they depend largely on how dangerous the bug could be to a firm’s computer systems and overall operations if it were to be exploited by hackers with nefarious intentions.

For an example of how payment systems function with bug bounty programs, we can look at one operated by Sony that invites ethical hackers to search for vulnerabilities on its PlayStation platform.

According to data from 2020, payouts start at $100 for a low-rated vulnerability discovered on Sony’s gaming platform, with more valuable tiers offering minimum payments of $400, $1,000, and $3,000.

Discover a low-rated vulnerability on the PlayStation 4, for example, and you should receive a minimum of $500, with higher rewards worth a minimum of $2,500 and $10,000. The most critical vulnerabilities, meanwhile, will result in a payment of at least $50,000.
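
To make the tier structure concrete, the figures above can be expressed as a simple lookup. This is an illustrative sketch only: the severity labels are assumptions, and actual payouts depend on Sony’s own assessment of each report.

```python
# Minimum payouts described above for Sony's PlayStation bug bounty program
# (2020 figures). Severity labels are illustrative assumptions.
PLAYSTATION_PLATFORM = {"low": 100, "medium": 400, "high": 1_000, "critical": 3_000}
PS4 = {"low": 500, "medium": 2_500, "high": 10_000, "critical": 50_000}

def minimum_bounty(target: dict[str, int], severity: str) -> int:
    """Return the minimum payout for a vulnerability of the given severity."""
    return target[severity]

print(minimum_bounty(PS4, "critical"))  # 50000
```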

Repost: Original Source and Author Link

Categories
AI

Google fires Ethical AI lead Margaret Mitchell

Google fired Margaret “Meg” Mitchell, lead of the Ethical AI team, today. The move comes just hours after Google announced diversity policy changes and Google AI chief Jeff Dean sent an apology in the wake of the firing of former Google AI ethics lead Timnit Gebru in late 2020.

Mitchell, a staff research scientist and Google employee since 2016, had been under an internal investigation by Google for five weeks. In an email sent to Google shortly before she was placed under investigation, Mitchell called Google’s firing of Gebru “forever after a really, really, really terrible decision.”

A statement from a Google spokesperson about Mitchell reads: “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”

When asked for comment, Mitchell declined, describing her mood as “confused and hurting.”

Mitchell was a member of the recently formed Alphabet Workers Union. Gebru has previously suggested that union protection could be a way for AI researchers to shield themselves from retaliation like the kind she encountered when a research paper she co-wrote was reviewed last year.

Earlier today, Dean apologized if Black and female employees were hurt by the firing of Gebru. Additional changes to Google diversity policy were also announced today, including tying DEI goals to performance evaluations for employees at the VP level and above.

On Thursday, Google restructured its AI ethics efforts, bringing 10 teams within Google Research, including the Ethical AI team, under Google VP Marian Croak. Croak will report directly to Dean. In a video message, Croak called for more “diplomatic” conversations when addressing ways AI can harm people. Multiple members of the Ethical AI team said they found out about the restructuring in the press.

“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat about the decision Thursday.

Mitchell and Gebru came together to co-lead the Ethical AI team in 2018, eventually creating what’s believed to be one of the most diverse divisions within Google Research. The Ethical AI team has published research on model cards to bring transparency to AI and how to perform internal algorithm audits. Last year, the Ethical AI team hired its first sociologists and began to consider how to address algorithmic fairness with critical race theory. At the VentureBeat Transform conference in 2019, Mitchell called diversity in hiring practices important to ethical deployments of AI.

The way Gebru was fired led to allegations of gaslighting, racism, and retaliation, as well as questions from thousands of Google employees and members of Congress with records of authoring legislation to regulate algorithms. Members of the Ethical AI team requested Google leadership take a series of steps to restore trust.

A Google spokesperson told VentureBeat that the Google legal team has worked with outside counsel to conduct an investigation into how Google fired Gebru. Google also worked with outside counsel to investigate employee allegations of bullying and mistreatment by DeepMind cofounder Mustafa Suleyman, who led ethics research efforts at the London-based startup acquired by Google in 2014.

The spokesperson did not provide details when asked what steps the organization has taken to meet the Ethical AI team’s demands to restore trust, or those laid out in a letter signed by more than 2,000 employees shortly after Gebru’s firing that called for a transparent investigation in full view of the public.

A Google spokesperson also told VentureBeat that Google will work more closely with HR in regard to “certain employee exits that are sensitive in nature.” In a December 2020 interview with VentureBeat, Gebru described a companywide memo that called de-escalation strategies part of the solution as “dehumanizing,” saying the response painted her as an angry Black woman.

Updated 5:40 p.m. to include comment from Margaret Mitchell




Repost: Original Source and Author Link