Six state treasurers want Activision Blizzard to address its toxic workplace culture

Following scrutiny from state and federal regulators, Activision Blizzard and its CEO Bobby Kotick now face pressure from an unexpected source. Per Axios, state treasurers from California, Massachusetts, Illinois, Oregon, Delaware and Nevada recently contacted the company’s board of directors to discuss its “response to the challenges and investment risk exposures that face Activision.” In a letter dated November 23rd, the group tells the board it would “weigh” a “call to vote against the re-election of incumbent directors.”

That call was made on November 17th by a collection of activist shareholders known as the SOC Investment Group. SOC, which holds a stake in Activision, has demanded that Kotick resign and that two of the board’s longest-serving directors, Brian Kelly and Robert Morgado, retire by December 31st.

“We think there needs to be sweeping changes made in the company,” Illinois state treasurer Michael Frerichs told Axios. “We’re concerned that the current CEO and board directors don’t have the skillset, nor the conviction to institute these sweeping changes needed to transform their culture, to restore trust with employees and shareholders and their partners.”

Between them, the six treasurers manage about a trillion dollars in assets. But as Axios points out, it’s unclear how much they have invested in Activision; they didn’t disclose the figure to the outlet. However, Frerichs did confirm Illinois has been impacted by the company’s falling stock price.

To that point, the day before The Wall Street Journal published its bombshell report on Activision and Kotick, the company’s stock closed at $70.43. The day California’s fair employment agency sued the company, its stock was worth $91.88. As of this writing, it’s trading at about $58.44, down roughly 36 percent from the day of the lawsuit.

The group has asked to meet with Activision’s board by December 20th. We’ve reached out to Activision for comment.


Report: 63% of millennials approve of automation in the workplace

This article is part of a VB special issue. Read the full series: Automation and jobs in the new normal.


According to a new study by the human-centered automation company Hyperscience, 81% of people believe automation can lead to more meaningful work, despite common misperceptions around what automation is, how it’s being used today, and how the U.S. workforce views it.

In its 2021 Automation Pulse Report, Hyperscience found that there continues to be widespread misunderstanding of what automation is. Specifically, while 75% of respondents believe they know what automation is, 55% brought up popular misconceptions when asked to explain that understanding further. These included the beliefs that the technology exists solely to replace people (17%) and that automation is a job killer (3%), as well as conflation of AI with automation (10%).

Despite the increasing adoption of automation in today’s digital-first workforce, many respondents did not identify particular benefits and use cases of automation across various industries. While 70% of respondents said automation could add value for the transportation and logistics sector, and 66% believe it adds value for financial services and banking, respondents were less convinced of its value for healthcare (48%), insurance (47%), and government/public sector (45%).

Pie chart: Automation provides more time to focus on valuable tasks. 81% of respondents agree that if automation technology can remove data entry tasks, like manually entering insurance information or details from a handwritten form into a computer, the employee would have more time to focus on more valuable tasks in their everyday job.

One of the bigger highlights from the study specifically focused on millennials, the largest generation in the U.S. labor force today, who are increasingly ready to work side-by-side with this technology. In fact, more than a third (35%) of millennials believe humans and machines can work together and 63% believe automation in the workplace is a good thing — especially if used to alleviate certain work burdens.

Forty-three percent of all respondents agreed with this sentiment, ranking a better employee experience as the most important benefit of technological advancement in the workplace. Technology’s effect on the customer and overall customer experience (34%) ranked a close second, while only 23% of respondents selected the company itself as the most important beneficiary of technology.

Read the full report by Hyperscience.


Workplace monitoring platform Aware takes in $60M



Aware, a platform that analyzes employee behavior across messaging platforms like Slack, today announced that it raised $60 million in a series C round led by Goldman Sachs Growth Equity, with participation from Spring Mountain Capital, Blue Heron Capital, Allos Ventures, Ohio Innovation Fund, JobsOhio, and Rev1 Ventures. The company says that the capital, which brings its total raised to over $86.9 million, will be put toward product development, sales efforts, and hiring.

“This specific investment will allow us to make significant progress towards helping organizations see the human difference across all functions of the business. We expect to rapidly expand our current integrations into new platforms, as well as to ingest and analyze digital signals from all areas of the organization,” cofounder and CEO Jeff Schumann told VentureBeat via email. “We started in text conversations, files, and images, and we’re already working towards voice and video signals.”

The trend toward remote and hybrid work has prompted some companies to increase their use of monitoring technologies to ensure that employees remain on task. Whether or not the decline is real, some managers believe that productivity has dropped at their companies since staffers started working from home during the pandemic. But these monitoring technologies threaten to infringe on workers’ privacy, creating a chilling effect on speech that challenges the company line.

Columbus, Ohio-based Aware, which was founded in 2017 by Schumann, James Tsai, Matt Huber, and Shawn Domer, leverages APIs and webhooks to collect and process communications from platforms including Slack, Microsoft Teams, Zoom, Yammer, and Workplace from Facebook. The company applies natural language processing and computer vision technologies to predict message sentiment and detect screenshots, among other tasks, identifying relationships between messages to provide context beyond replies.
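
To make the mechanics concrete, here is a minimal sketch of what such an ingestion-and-scoring pipeline could look like, using Flask to receive Slack’s Events API callbacks and NLTK’s VADER analyzer as a stand-in for Aware’s proprietary models. The endpoint path and the choice of model are illustrative assumptions, not Aware’s actual implementation.

    # A hypothetical sketch: receive Slack message events and score sentiment.
    # Aware's real models are proprietary; NLTK's VADER is a stand-in here.
    from flask import Flask, request, jsonify
    from nltk.sentiment import SentimentIntensityAnalyzer

    app = Flask(__name__)
    analyzer = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon")

    @app.route("/slack/events", methods=["POST"])
    def slack_events():
        payload = request.get_json()
        # Slack sends a one-time URL-verification challenge when the endpoint is registered.
        if payload.get("type") == "url_verification":
            return jsonify({"challenge": payload["challenge"]})
        event = payload.get("event", {})
        if event.get("type") == "message" and "text" in event:
            scores = analyzer.polarity_scores(event["text"])
            # A production system would persist the message, its metadata, and the
            # scores, and link it to the surrounding conversation for context.
            print(event.get("channel"), scores["compound"])
        return "", 200

    if __name__ == "__main__":
        app.run(port=3000)

A real deployment would also verify Slack’s request signatures and handle files and images, which is where the computer vision components described above would come in.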


“Aware continues to be the leading force for digital collaboration insights, giving companies a better understanding of their workforce and the ability to manage the risks associated with digital conversations and remote collaboration,” Schumann said in a statement. “Aware’s continued adoption as essential tech further shows how AI [is] key to building more transformational businesses.”

But some of Aware’s capabilities might give employees pause, like the platform’s ability to preserve files, edits, and deletions as well as metadata like message locations. While Aware claims its ability to collect and normalize conversations from multiple sources might be useful to employers in search of governance and search solutions, employees might perceive it as a way for companies to tamp down protests or put workers’ activities under a microscope.

Employee monitoring

Employee monitoring software is a broad category, but generally speaking, it encompasses programs that can measure an employee’s idle time, access webcams, track keystrokes and web history, take screenshots, and record emails, chats, and phone calls. In a survey of employers, ExpressVPN found that 78% were using monitoring software like TimeDoctor, Teramind, Wiretap, Interguard, Hubstaff, and ActivTrak to track their employees’ performance or online activity. Perhaps unsurprisingly, Reports and Data predicts that by 2027, the global market for employee remote monitoring software will hit $1.3 billion.

Pitching itself as an HR solution, Aware claims it can store all public and private messages, attachments, and images sent by employees, recreating conversations in an “easy-to-understand” format. Aware purports to detect inappropriate, offensive, and hateful speech, employing AI to extract top-level message information.

In this way, Aware’s platform is akin to Awareness Technologies’ Interguard, which can scan emails and messages for particular keywords. Wiretap and Qumram similarly monitor chat forums like Slack, Yammer, and WhatsApp, using AI to identify “harassment, threats, and intimidation.”

“[With Aware, companies can] automate community management processes, identify incidents of insider threats, [and] identify toxic employees in the workplace,” the company’s website reads. “[Customers can] enable rules that automatically prevent against unsafe sharing and safeguard against HIPAA, FINRA, or PCI violations. [They can also] scan workplace communication content and file shares to identify instances of inappropriate sharing and breaches of confidentiality. [And they can protect their] employees from sexual harassment, discrimination, and bullying by monitoring public and private communications.”


But biases in the algorithms could color the results of Aware’s assessments. Studies have shown that text-based sentiment analysis systems can exhibit prejudices along race, ethnic, and gender lines — for example, associating Black people with more negative emotions like anger, fear, and sadness. AI models also tend to inconsistently analyze hate speech, with research showing that automated moderation platforms struggle with “Black-aligned English,” quotations of hate speech, slurs, and spelling variations of hateful words.
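
One common way to surface such biases is a template-based probe: score sentences that are identical except for a group-identifying term and compare the averages. The sketch below illustrates the idea, again using NLTK’s VADER purely as a stand-in for whatever model is under audit; the templates and names are illustrative.

    # A minimal template-based bias probe for a sentiment model. VADER is a
    # stand-in; as a fixed lexicon it ignores names, but learned models often don't.
    from itertools import product
    from nltk.sentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()

    TEMPLATES = ["{name} is angry.", "{name} is celebrating.", "{name} missed the deadline."]
    NAMES = {"group_a": ["Emily", "Greg"], "group_b": ["Lakisha", "Jamal"]}

    for group, names in NAMES.items():
        scores = [analyzer.polarity_scores(t.format(name=n))["compound"]
                  for t, n in product(TEMPLATES, names)]
        # A large average gap between groups on identical templates suggests bias.
        print(group, round(sum(scores) / len(scores), 3))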

Schumann claims that Aware is one of the only companies that trains models on the interactions “within enterprise collaboration.” The net result is AI that’s “two to three times as accurate as competitor AI and machine learning services” in the market, he says.

“Aware is the future of human-centered business. We understand and make sense of all the human interactions happening in the organization,” Schumann continued. “We are training on a data lake consisting of billions of normalized, anonymized interactions that is growing daily. This gives us a unique advantage. Aware is positioned to know the customer and their own DNA quite well.”

Beyond the potential bias issues, research shows employees are disinclined to support the deployment of software like Aware’s because they feel like they’re constantly being watched. A recent Gartner survey found that knowledge workers were nearly two times more likely to pretend to be working after their companies invested in tracking systems.

Workers have reason to be concerned. According to a recent complaint filed by the National Labor Relations Board, Google spied on its staffers — two of whom were in the process of organizing protests against the company — before terminating several of them. Some call center workers have faced pressure to sign contracts that let their employers install in-home cameras to monitor their work. And Amazon delivery drivers say surveillance cameras installed in their vans have made them lose income for reasons beyond their control, like when cars cut them off.

Of the remote or hybrid workers surveyed in the ExpressVPN paper, 59% said that they felt stress or anxiety as a result of their employer monitoring them. Another 43% said that the surveillance felt like a violation of trust, and more than half said they’d quit their job if their manager implemented surveillance measures.

Little recourse

In the U.S., employees have little in the way of legal recourse when it comes to monitoring software. The 1986 Electronic Communications Privacy Act (ECPA) allows companies to surveil communications for “legitimate business-related purposes.” Only two states, Connecticut and Delaware, require notification if employees’ email or internet activities are being monitored, while Colorado and Tennessee require businesses to set written email monitoring policies.

However, the ECPA does prevent employers from monitoring private messages and email accounts that are password-protected and sent from a personal device, unless an employee gives consent; Aware, like other companies providing workplace monitoring software, is bound by this constraint. Regulators abroad have gone further: last October, retailer H&M was fined roughly $40 million by German data protection authorities for storing sensitive employee information on topics like family issues and religious beliefs.

Not all employees oppose monitoring software — so long as they’re made aware of the surveillance, how and where the data is maintained, and whether it’s shared with other parties or companies. Over half of office workers surveyed by Robert Half were open to digital surveillance if it led to perks, like the ability to work preferred hours or remotely. Microsoft touts its Viva platform as one such pro-worker solution, with privacy controls including an anonymized, general productivity score for teams and recommendations for employees’ mental health.


Aware, like its rivals, has been growing at a fast clip during the pandemic. Revenue has “consistently” climbed 170% to 200% year-over-year for the last several years, and the company, which employs around 60 people, has over 1 million licensed users across more than 17 countries.

“Our growth was accelerated in part due to [the pandemic] and the emergency shift to remote work. It made the adoption of technologies, such as Teams, Slack, and Zoom, happen virtually overnight,” Schumann said. “This is the dataset we tap into, and now these companies need controls in place for the tech. It also means the focus on employee engagement is incredibly top-of-mind, as organizations struggle to understand their largely remote — or hybrid — workforce. [We] saw an immediate transformation in growth from our existing customer base to the tune of an over 300% increase in data created after the start of mandated lockdowns. We also saw a rapidly increasing demand for the Aware platform, for both our risk mitigation capabilities and our organizational insights.”


Why companies must be critical of workplace surveillance practices

With so many more people working from home during the pandemic, employers have stepped up the extent to which they are monitoring them online. Not so many years ago, employees were having to adjust to having their work emails monitored; but that seems almost quaint compared to the digital surveillance we are seeing today.

Employers can use specialist software to track workers’ keystrokes, mouse movements, and the websites they visit. They can take screenshots of employees to check whether they are at their screens and looking attentive, or even use webcam monitoring software that measures things like eye movements, facial expressions, and body language. All this can be checked against a worker’s output to draw conclusions about their productivity.
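
To illustrate how little code the basics of this kind of tracking require, here is a hypothetical sketch of an idle-time monitor built on the pynput library. It is not any vendor’s implementation; commercial products layer screenshots, webcam analysis, and reporting on top of signals like this one.

    # A hypothetical sketch: measure keyboard/mouse idle time with pynput.
    import time
    from pynput import keyboard, mouse

    last_activity = time.monotonic()

    def touch(*args):
        # Any keypress, mouse move, click, or scroll counts as activity.
        global last_activity
        last_activity = time.monotonic()

    keyboard.Listener(on_press=touch).start()
    mouse.Listener(on_move=touch, on_click=touch, on_scroll=touch).start()

    while True:
        time.sleep(10)
        idle = time.monotonic() - last_activity
        if idle > 60:
            print(f"Idle for {idle:.0f} seconds")  # a real tool would log or report this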

Besides specialist software, managers can view statistics from their corporate private network to see who logged in and for what duration, and again cross-reference this to workers’ productivity data. In some organizations, staff who do not open work applications early in the morning could potentially be viewed as late for work or not productive enough.

Home-working has also raised the prospect of more informal staff monitoring. For example, if a worker would normally log in to meetings by turning on their video, but one day they are in a car or a new location, the employer might think they are not committed or focused enough.

This all raises questions about how such surveillance is affecting people’s work practices, privacy, and general wellbeing. Given that home-working looks set to extend beyond lockdown for many people, this is clearly a moment for some serious reflection.

The productivity dimension

Managers justify this kind of surveillance by claiming that it is good for productivity. Some workers even seem to agree with this – provided the monitoring is done by a peer and not a manager.

Many have signed up to an online service called Focusmate, for example, which matches anonymous strangers on “work dates” where they briefly say what they will be doing during the appointment and then they can rate one another’s approach to work at the end. The service aims to make workers more productive and to feel less lonely at work.

That said, home-working during the first UK lockdown in spring 2020 did not have a major effect on productivity. Workplace surveillance may even have held it back, given that it appears to have increased at the same time. Certainly, there is evidence that such techniques can make people feel vulnerable, afraid, and less creative. It can also reduce their job satisfaction and lower their morale.

Also bear in mind that this surveillance is taking place in someone’s home, which may make them feel particularly vulnerable. Some people have struggled with their mental health while working at home, and many have had to fit in other responsibilities such as caring for children and home-schooling.

Next steps

In view of all this, companies need to adopt an “ethics of care” approach to their workers, meaning they make a commitment to take care of them. They need to investigate their surveillance practices and analyze how exactly line managers use them to check up on workers.

While carrying out such an investigation, companies should recognize that some employees might be finding workplace surveillance more difficult than others. This will depend on the extent to which they think it invades their privacy, and how they weigh the risks and benefits of sharing their data.

This is likely to be affected by things like their cultural background, gender, and the context in question. Those already struggling with home-working, perhaps because they have to care for children at the same time, are particularly likely to feel that this surveillance is making their lives even harder. Workers may therefore try to evade surveillance techniques – for example, by keeping an automatic mouse-moving application open to make sure they appear online all the time.
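
The mouse-moving workaround mentioned above is trivial to build, which underlines how weak a signal “appearing online” really is. A minimal sketch, again using pynput purely for illustration:

    # A minimal "mouse jiggler" of the kind described above, for illustration only.
    import time
    from pynput.mouse import Controller

    pointer = Controller()
    while True:
        pointer.move(1, 0)   # nudge the pointer one pixel right...
        time.sleep(0.5)
        pointer.move(-1, 0)  # ...and back, so the user always appears active
        time.sleep(60)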

These investigations should ensure that employees are aware of what data is collected about them and how it’s used. Companies should hold open discussions with workers and unions on how these monitoring practices affect workers, and allow workers to have their say without threatening disciplinary action. If workers feel that their employers care about them as individuals, they will hopefully feel empowered and trusting towards them, and less likely to find workarounds or react negatively.

Equally, it is important for regulators like the UK Information Commissioner to reflect on how surveillance in the workplace is changing. The UK code in this area broadly requires that any monitoring be fair to workers and that any adverse impacts – for example, on workers’ privacy, damage to trust, or demeaning workers – be mitigated. The rules may now need to be updated to reflect some of the latest forms of surveillance, and there is a role for researchers in looking into this as well.

Researchers have tended to look at workplace surveillance from the perspective of productivity where workers are viewed as resources, but we need to start thinking in terms of data justice. This has been described as “fairness in the way people are made visible, represented and treated as a result of their production of digital data”.

In a world where computers and smartphones are all around us, we need to negotiate our private spaces and our control over the data we produce online. Just as this has implications for our relationships with Facebook or Google in our private lives, the rise of workplace surveillance makes it equally important at work.

This article by Evronia Azer, Assistant Professor at the Centre for Business in Society, Coventry University, is republished from The Conversation under a Creative Commons license. Read the original article.


SafetyCulture boosts its workplace safety tools, hits $1.6B valuation



SafetyCulture, a startup developing software for workplace safety and quality management, today announced that it closed a new $73 million round of funding, valuing the company at $1.6 billion. The round was led by Insight Partners and Tiger Global, and SafetyCulture CEO Luke Anear says the funds will support growth as the company evolves from a checklist app into an operations platform for working teams.

Failure to provide a safe atmosphere at work is a costly mistake for businesses and their employees alike. The National Safety Council estimates a worker is injured on the job every seven seconds, which equates to 4.6 million injuries a year. And the Centers for Disease Control and Prevention’s National Institute for Occupational Safety and Health pegs the costs of work-related injuries and illnesses at $170 billion a year.

Queensland, Australia-based SafetyCulture, which was founded in 2004, offers a mobile and web app called iAuditor for workplace safety and quality standards; it’s designed to collect data, standardize operations, and identify problems. The platform enables users to create customized forms from paper checklists, Word documents, and Excel spreadsheets, as well as take and annotate photos, add notes, and assign follow-up remedial actions. With SafetyCulture, managers can have teams share observations outside of regular inspections and generate reports for contracts, clients, and more, which can be exported to third-party platforms and tools. They can also connect to sensors — either SafetyCulture’s own or third-party sensors — to monitor work conditions in real time.

“The SafetyCulture journey started with a simple question: How do we keep people in the workplace safe? After witnessing the tragedy of workplace incidents in my time as a private investigator, I realized something needed to be done when it came to safety in the workplace,” Anear told VentureBeat via email. “I went on to recruit a team to help develop a mobile solution and so, iAuditor was born. When we started out this journey in my garage in regional Australia, we had no idea where it would lead. We’d created the first iteration of a digital platform that went on to revolutionize safety inspections globally. It’s now the world’s largest checklist app — and that’s just the beginning.”

SafetyCulture supports text and email notifications that trigger as soon as sensors detect changes that are out of the normal range. Customers can set alerts for things like local weather and have employees capture test results with barcode scanners, drawing tools, Bluetooth thermometers, and more.
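
A minimal sketch of this kind of threshold alerting follows; the sensor feed, normal ranges, and notify() hook are all invented for illustration and are not SafetyCulture’s actual API.

    # A hypothetical sketch of out-of-range sensor alerting. All names are
    # invented for illustration; this is not SafetyCulture's actual API.
    import random
    import time

    THRESHOLDS = {"fridge_temp_c": (2.0, 8.0)}  # (min, max) normal range

    def read_sensor(name: str) -> float:
        return random.uniform(0.0, 10.0)  # stand-in for a real sensor feed

    def notify(message: str) -> None:
        print("ALERT:", message)  # stand-in for a text/email notification service

    while True:
        for sensor, (low, high) in THRESHOLDS.items():
            value = read_sensor(sensor)
            if not low <= value <= high:
                notify(f"{sensor} out of range at {value:.1f}")
        time.sleep(30)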

Above: SafetyCulture’s mobile app, iAuditor.

Image Credit: SafetyCulture

Last September, SafetyCulture acquired EdApp, a learning management system akin to LinkedIn Learning, for a reported $29 million. At the time, SafetyCulture said that EdApp, which was delivering around 50,000 lessons per day, would enable it to offer “micro-learning” resources to workers in a range of industries.

Market momentum

SafetyCulture claims to host over 1.5 million users across 28,000 companies on its platform in more than 85 countries. It powers 600 million checks per year and millions of corrective actions per day, according to Anear.

“Statistics-wise, we’ve seen inspections within iAuditor grow 108% year-over-year and actions grow 161%. This indicates our customers are using our software across a far broader range of use cases, from safety to quality and customer experience. It also suggests our customers are benefiting strongly from the workflows we have built into iAuditor over the last couple of years,” Anear said. “This additional data makes our analytics functionality all the more powerful. As working teams input more and more data, organizations benefit from increased visibility and a broader base of staff contributing to driving safety, quality, and efficiency improvements.”

With 54% of U.S. employees worried about exposure to COVID-19 at their job, the pandemic has helped to drive SafetyCulture’s growth — even in the face of rivals like Aclaimant. According to a recent Pew Center survey, about 25% of employees say that they’re “not too” or “not at all” satisfied with the steps that have been taken in their workplace to keep them safe from the coronavirus. To address this, SafetyCulture created a number of checklists tailored for companies affected by the crisis, including cleaning and disinfection logs as well as employee temperature log sheets.

“We’ve built a reputation globally for helping some of the most dangerous industries digitize and adhere to safety and quality protocols. Unfortunately, everywhere is a high risk in a pandemic, so 2020 saw a major push to ensure retailers, schools, hotels, and many other industries had the right support and tech to manage COVID-19,” Anear said.

SafetyCulture’s most recent funding round — which brings the company’s total raised to roughly $241 million — follows a $36 million tranche contributed by TDM Growth Partners, Blackbird Ventures, Index Ventures, former Australian prime minister Malcolm Turnbull and his wife Lucy Turnbull, and Atlassian cofounder Scott Farquhar. The company’s employee base grew 2.5 times in the past three years to over 500 people, and SafetyCulture says it plans to continue the same hiring trajectory “for the foreseeable future.”


Robotics safety system developer Fort Robotics raises $13M to tackle workplace automation



Fort Robotics, a startup developing a safety and security platform for robots, today announced it has raised $13 million in a funding round led by Prime Movers Lab. With the new investment, Fort says it will more than double the size of its workforce and expand its product offerings in “key markets.”

Worker shortages attributable to the pandemic have accelerated the adoption of automation. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. In China, Oxford Economics anticipates 12.5 million manufacturing jobs will become automated, while in the U.S., McKinsey projects machines will take upwards of 30% of such jobs.

Founded in 2018, Philadelphia, Pennsylvania-based Fort is CEO Samuel Reeves’ second startup. In 2006, he launched Humanistic Robotics, which developed autonomous landmine-clearing systems. Through the refinement of that technology, Reeves saw an opportunity to create a safety-and-security overlay that could be integrated with industrial machines.

Fort’s platform allows managers to configure, monitor, and control robots from web and mobile apps. An endpoint controller wirelessly connects machines to devices, while an embedded hardware security module ensures communications are secure, reducing breach risk. Fort also provides a remote control with a built-in emergency stop that lets workers use one or more machines — ostensibly without compromising safety or command security.

As industrial manufacturers increasingly embrace automation, workplace injuries involving robots are becoming more common. For example, a recent report revealed that Amazon fulfillment warehouses with robots had higher rates of serious worker injuries than less automated facilities. A warehouse in Kent, Washington that employs robots had 292 serious injuries in one year, or 13 serious injuries per 100 workers. Meanwhile, the serious injury rate of a Tracy, California center nearly quadrupled after it introduced robots five years ago, going from 2.9 per 100 workers in 2015 to 11.3 in 2018.

“Fort is creating systems that will accelerate the rise of autonomous vehicles and robotics, making work safer and more productive,” Prime Movers Lab’s Suzanne Fletcher said in a press release. “We’re proud to partner with this forward-looking company that recognizes that safe collaboration between humans and robots is synonymous with the workforce of tomorrow.”

For one customer that has multiple robots operating within a gated area, Fort says it built a solution that enables workers to stop all the robots when a person enters the zone. While each machine has its own e-stop button, manual stops affect productivity, and getting close to the moving robots can be hazardous. Fort installed its safety controller on each robot and a master controller on the door of the caged area so that the system automatically sends a wireless e-stop signal that stops every robot whenever the door opens.
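
The pattern Fort describes, one sensor broadcasting a stop signal to every machine in a zone, is essentially publish-subscribe. Here is a minimal sketch with all class and method names invented for illustration; Fort’s actual controllers and wireless protocol are proprietary.

    # A hypothetical sketch of a broadcast e-stop. All names are invented;
    # Fort's wireless protocol and safety controllers are proprietary.
    from typing import Callable, List

    class Robot:
        def __init__(self, name: str):
            self.name = name
            self.running = True

        def emergency_stop(self) -> None:
            self.running = False
            print(f"{self.name}: e-stop engaged")

    class DoorController:
        """Stand-in for a master controller mounted on the cage door."""
        def __init__(self):
            self._stops: List[Callable[[], None]] = []

        def register(self, stop: Callable[[], None]) -> None:
            self._stops.append(stop)

        def on_door_open(self) -> None:
            for stop in self._stops:  # broadcast to every robot in the zone
                stop()

    door = DoorController()
    robots = [Robot(f"robot-{i}") for i in range(3)]
    for r in robots:
        door.register(r.emergency_stop)

    door.on_door_open()  # a person enters: all three robots halt at once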

“We’re excited to partner with our new investors to help position Fort at the forefront of this rapidly evolving space,” Reeves said. “The world is on the cusp of a new industrial revolution in mobile automation. With added investment and support, we’ll be able to rapidly scale the company to capitalize on the convergence of trend and opportunity to ensure that robotic systems are safely deployed across all industries.”

Fort claims to have served more than 100 clients last year, including “dozens” of Fortune 1000 companies in markets such as warehousing, construction, agriculture, manufacturing, mining, turf care, last-mile delivery, and transportation.


AI Weekly: Techno-utopianism in the workplace and the threat of excessive automation

Every so often, VentureBeat writes a story about something that needs to go away. A few years back, my colleague Blair Hanley Frank argued that AI systems like Einstein, Sensei, and Watson must go because corporations tend to overpromise results for their products and services. I’ve taken runs at charlatan AI and white supremacy.

This week, a series of events at the intersection of the workplace and AI lent support to the argument that techno-utopianism has no place in the modern world. Among the warning signs in headlines was a widely circulated piece by a Financial Times journalist who said she was wrong to be optimistic about robots.

The reporter describes how she used to be a techno-optimist but found in the course of her reporting that robots can pull the people around them into their systems and force them to work at a robot’s pace. In the article, she cites the Center for Investigative Reporting’s analysis of internal Amazon records, which found that instances of human injury were higher in Amazon facilities with robots than in facilities without them.

“Dehumanization and intensification of work is not inevitable,” wrote the journalist, who’s quite literally named Sarah O’Connor. Fill in your choice of Terminator joke here.

Also this week: The BBC quoted HireVue CEO Kevin Parker as saying AI is more impartial than a human interviewer. After facing opposition on multiple fronts, HireVue announced last month it would no longer use facial analysis in its AI-powered video interview analysis of job candidates. Microsoft Teams got similar tech this week to recognize and highlight who’s enjoying a video call.

External auditors have examined the AI used by HireVue and hiring software company Pymetrics, which refers to its AI as “entirely bias free,” but the processes seem to have raised more questions than they’ve answered.

And VentureBeat published an article about a research paper with a warning: Companies like Google and OpenAI have a matter of months to confront negative societal consequences of the large language models they release before they perpetuate stereotypes, replace jobs, or are used to spread disinformation.

What’s important to understand about that paper, written by researchers at OpenAI and Stanford, is that before criticism of large language models became widespread, research and dataset audits found major flaws in large computer vision datasets that were over a decade old, like ImageNet and 80 Million Tiny Images. An analysis of face datasets dating back four decades also found ethically questionable practices.

A day after that article was published, OpenAI cofounder Greg Brockman tweeted what looked like an endorsement of a 90-hour work week. Run the math on that: 90 hours spread across seven days is nearly 13 hours a day, so if you slept seven hours a night, you would have about four hours a day to do anything that is not work — like exercise, eating, resting, or spending time with your family.

An end to techno-utopianism doesn’t have to mean the death of optimistic views about ways technology can improve human lives. There are still plenty of people who believe that indoor farming can change lives for the better or that machine learning can accelerate efforts to address climate change.

Google AI ethics co-lead Margaret Mitchell recently made a case for AI design that keeps the bigger picture in mind. In an email sent to company leaders before she was placed under investigation, she said consideration of ethics and inclusion is part of long-term thinking for long-term beneficial outcomes.

“The idea is that, to define AI research now, we must look to where we want to be in the future, working backwards from ideal futures to this moment, right now, in order to figure out what to work on today,” Mitchell said. “When you can ground your research thinking in both foresight and an understanding of society, then the research questions to currently focus on fall out from there.”

With that kind of long-term thinking in mind, Google’s Ethical AI team and Google DeepMind researchers have produced a framework for carrying out internal algorithm audits, questioned the wisdom of scale when addressing societal issues, and called for a culture change in the machine learning community. Google researchers have also advocated rebuilding the AI industry according to principles of anticolonialism and queer AI and evaluating fairness using sociology and critical race theory. And ethical AI researchers recently asserted that algorithmic fairness cannot simply be transferred from Western nations to non-Western nations or those in the Global South, like India.

The death of techno-utopia could entail creators of AI systems recognizing that they may need to work with the communities their technology impacts and do more than simply abide by the scant regulations currently in place. This could benefit tech companies as well as the general public. As Parity CEO Rumman Chowdhury told VentureBeat in a recent story about what algorithmic auditing startups need to succeed, unethical behavior can have reputation and financial costs that stretch beyond any legal ramifications.

The lack of comprehensive regulation may be why some national governments and groups like Data & Society and the OECD are building algorithmic assessment tools to diagnose risk levels for AI systems.

Numerous reports and surveys have found automation on the rise during the pandemic, and events of the past week remind me of the work of MIT professor and economist Daron Acemoglu, whose research has found one robot can replace 3.3 human jobs.

In testimony before Congress last fall about the role AI will play in the economic recovery in the United States, Acemoglu warned the committee about the dangers of excessive automation. A 2018 National Bureau of Economic Research (NBER) paper coauthored by Acemoglu argues that automation can create new jobs and tasks, as it has done in the past, but warns that excessive automation can constrain labor market growth and may have acted as a drag on productivity growth for decades.

“AI is a broad technological platform with great promise. It can be used for helping human productivity and creating new human tasks, but it could exacerbate the same trends if we use it just for automation,” he told the House Budget committee. “Excessive automation is not an inexorable development. It is a result of choices, and we can make different choices.”

To avoid excessive automation, in that 2018 NBER paper Acemoglu and his coauthor, Boston University research fellow Pascual Restrepo, call for reforms of the U.S. tax code, because it currently favors capital over human labor. They also call for new or strengthened institutions or policy to ensure shared prosperity, writing, “If we do not find a way of creating shared prosperity from the productivity gains generated by AI, there is a danger that the political reaction to these new technologies may slow down or even completely stop their adoption and development.”

This week’s events involve complexities like robots and humans working together and language models with billions of parameters, but they all seem to raise a simple question: “What is intelligence?” To me, working 90 hours a week is not intelligent. Neither is perpetuating bias or stereotypes with language models or failing to consider the impact of excessive automation. True intelligence takes into account long-term costs and consequences, historical and social context, and, as Sarah O’Connor put it, makes sure “the robots work for us, and not the other way around.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer
