
AI-produced images can’t fix diversity issues in dermatology databases

Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It’s working on an AI program to generate synthetic images of diseases on darker skin — and using those images for a tool that could help diagnose skin cancer.

“Having real images of darker skin is the ultimate solution,” says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. “Until we have that data, we need to find a way to close the gap.”

But other experts working in the field worry that synthetic images could introduce problems of their own. The focus should be on adding more diverse real images to existing databases, says Roxana Daneshjou, a clinical scholar in dermatology at Stanford University. “Creating synthetic data sounds like an easier route than doing the hard work to create a diverse data set,” she says.

There are dozens of efforts to use AI in dermatology. Researchers build tools that can scan images of rashes and moles to figure out the most likely type of issue. Dermatologists can then use the results to help them make diagnoses. But most tools are built on databases of images that either don’t include many examples of conditions on darker skin or don’t have good information about the range of skin tones they include. That makes it hard for groups to be confident that a tool will be as accurate on darker skin.

That’s why Rezk and the team turned to synthetic images. The project has four main phases. The team already analyzed available image sets to understand how underrepresented darker skin tones were to begin with. It also developed an AI program that used images of skin conditions on lighter skin to produce images of those conditions on darker skin, and it validated the images the model gave them. “Thanks to the advances in AI and deep learning, we were able to use the available white scan images to generate high-quality synthetic images with different skin tones,” Rezk says.
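
The article doesn’t say which architecture Rezk’s team used. A common approach for this kind of unpaired skin-tone translation is a GAN in the CycleGAN family; the PyTorch sketch below shows the core idea under that assumption, with illustrative layer sizes, random tensors standing in for real photos, and least-squares adversarial losses. It is a sketch of the general technique, not the team’s actual method.

```python
# Minimal sketch of unpaired image-to-image translation (CycleGAN-style),
# one plausible way a light-to-dark skin-tone generator could be built.
# All layer sizes and losses here are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a light-skin lesion image to a synthetic darker-skin version."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores how much an image looks like a real darker-skin photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realism score per image

G, D = Generator(), Discriminator()
light = torch.randn(8, 3, 64, 64)  # stand-in for real light-skin images
dark = torch.randn(8, 3, 64, 64)   # stand-in for real dark-skin images

fake_dark = G(light)
# Least-squares GAN losses: D pushes real toward 1 and fakes toward 0,
# while G tries to make its fakes score as real.
d_loss = (D(fake_dark.detach()) ** 2).mean() + ((D(dark) - 1) ** 2).mean()
g_loss = ((D(fake_dark) - 1) ** 2).mean()
```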

Next, the team will combine the synthetic images of darker skin with real images of lighter skin to create a program that can detect skin cancer. It will also continuously check image databases for any new, real pictures of skin conditions on darker skin that it can add to future models, Rezk says.
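
A minimal sketch of what that combined training set might look like in PyTorch, assuming the real and synthetic images sit in class-labeled folders; the folder paths and batch size are hypothetical, not details from the project.

```python
# Sketch of a combined training set: real light-skin images plus synthetic
# dark-skin images feeding one skin cancer classifier. Paths are invented.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real_light = datasets.ImageFolder("data/real_light_skin", transform=tfm)
synth_dark = datasets.ImageFolder("data/synthetic_dark_skin", transform=tfm)

# Both folders use identical class subfolders (e.g. benign/, malignant/),
# so the class labels stay consistent when the datasets are concatenated.
train_set = ConcatDataset([real_light, synth_dark])
loader = DataLoader(train_set, batch_size=32, shuffle=True)
```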

The team isn’t the first to create synthetic skin images — a group that included Google Health researchers published a paper in 2019 describing a method to generate them, and it could create images of varying skin tones. (Google is interested in dermatology AI and announced a tool that can identify skin conditions last spring.)

Rezk says synthetic images are a stopgap until more real pictures of conditions on darker skin are available. Daneshjou, though, worries about using synthetic images at all, even as a temporary solution. Research teams would have to carefully check whether AI-generated images carry unusual quirks that people can’t see with the naked eye. That type of quirk could theoretically skew results from an AI program. The only way to confirm that the synthetic images work as well as real images in a model would be to compare them with real images — which are in short supply. “Then it goes back to the fact of, well, why not just work on trying to get more real images?” she says.
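
One standard way to probe for such quirks is to train a classifier to tell real images from synthetic ones: if it beats chance by a wide margin, the generator is leaving detectable artifacts even when humans can’t see them. Below is a scikit-learn sketch of that check on simulated feature vectors; real work would use a CNN on the images themselves and, as Daneshjou notes, enough real darker-skin images to compare against.

```python
# Probe for invisible generator artifacts: can a classifier separate
# real from synthetic? Accuracy near 0.50 suggests no detectable quirks.
# The feature vectors below are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 512))       # features of real images
synthetic = rng.normal(0.1, 1.0, size=(200, 512))  # slight artifact shift

X = np.vstack([real, synthetic])
y = np.array([0] * 200 + [1] * 200)                # 0 = real, 1 = synthetic

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"real-vs-synthetic accuracy: {acc:.2f} (0.50 would mean no artifacts)")
```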

If a diagnostic model is based on synthetic images from one group and real images from another — even temporarily — that’s a concern, Daneshjou says. It could lead to the model performing differently on different skin tones.

Leaning on synthetic data could also make people less likely to push for real, diverse images, she says. “If you’re going to do that, are you actually going to keep doing the work?” she says. “I would actually like to see more people do work on getting real data that is diverse, rather than trying to do this workaround.”


A call for increased visual representation and diversity in robotics

Sometimes it’s the obvious things that are overlooked. Why aren’t there pictures of women building robots on the internet? Or if they are there, why can’t we find them when we search? I have spent decades doing outreach activities, providing STEM opportunities, and running women in robotics speaker and networking events, so I’ve done a lot of image searches looking for a representative picture. Every single time, I have scrolled through page after page of search results ranging from useless to downright insulting.

Finally, I counted.

Above: Google image search results for ‘woman building robot’: a female robot takes the lead, followed by a fake robot, and then women standing near robots.

Image Credit: Andra Keay

My impressions were correct. The majority of the images you find when you look for ‘woman building robot’ are of female robots. This is not what happens if you search for ‘building robot’ or ‘man building robot’. That’s the insulting part: this misrepresentation and misclassification hasn’t been challenged or fixed. Sophia the robot, or the ScarJo bot, or a sexbot has a much greater impact on the internet than women doing real robotics. What if male roboticists were confronted with pictures of robotic dildos whenever they searched for images of their work?

Above: Example image results from Andra Keay’s Google search for ‘women building robots’: female robots, sex robots, fake robots, and men explaining robots to others.

Image Credit: Andra Keay

The number of women in the robotics industry is hard to gauge. Best estimates are 5% in most locations, perhaps 10% in some areas. It is slowly increasing, but the robotics industry is also in a period of rapid growth, and everyone is struggling to hire. To my mind, the biggest wasted opportunity for a young robotics company growing like Topsy is depending on the founders’ friends network when it leads to homogeneous hiring practices. The sooner you incorporate diversity, the easier it will be for you to scale and attract talent.

For a larger robotics company, the biggest wasted opportunity is not fixing retention. Across the board in the tech industry, retention rates for women and underrepresented minorities are much worse than for pale males. That means that you are doing something wrong. Why not seriously address the complaints of the workers who leave you? Otherwise, you’ll never retain diverse hires, no matter how much money you throw at acquiring them.

The money wasted in talent acquisition when you have poor retention should instead be used to improve childcare, or flexible work hours, or support for affinity groups, or to fire the creep that everyone complains about, or restructure so that you increase the number of female and minority managers. The upper echelons are echoing with the absence of diversity.

On the plus side, the number of pictures of girls building robots has definitely increased in the last ten years. As my own children have grown, I’ve seen more and more images showing girls building robots. But with two daughters now leaving college, I’ve had to tell them that robotics is not one of the female-friendly career paths (if any of them are), unless they are super passionate about it. Medicine, law, or data analytics might be better domains for their talents. As an industry, we can’t afford to lose bright young women. We can’t afford to lose talented older women. We can’t afford to overlook minority hires. The robotics industry is entering exponential growth. Capital is in abundance, market opportunities are in abundance. Talent is scarce.

These days, I’m focused on supporting professional women in the robotics community, in industry and academia. These are women who are doing critical research and building cutting-edge robots. What do solutions look like for them? Our wonderful annual Ada Lovelace Day list hosted on Robohub has increased awareness of many ‘new’ faces in robotics. But we have been forced to use profile pictures, primarily because that’s what is available. That’s also the tradition for profile pieces about the work that women do in robotics. The focus is on the woman, not the woman building, programming, or testing the robot. That means the images are not quite right as role models.

Above: Further examples from Andra Keay’s image search results that better represent women in robotics: women brainstorming on a see-through whiteboard and sitting near constructed robots.

Image Credit: Andra Keay

A real role model shows you the way forward, and shows you that the future is in your hands. As the civil rights activist Marian Wright Edelman said, “You can’t be what you can’t see.”

Above: A set of images from Andra Keay’s search results displaying the few good images found that more accurately represent women working in robotics.

Image Credit: Andra Keay

So Women in Robotics has launched a photo challenge. Our goal is to see more than 3 images of real women building robots in the top 100 search results. Our stretch goal is to see more images of women building robots than there are of female robots in the top 100 search results! Take great photos following these guidelines, hashtag your images #womeninrobotics #photochallenge #ibuildrobots, and upload them to Wikimedia with a creative commons license so that we can all use them. We’ll share them on the Women in Robotics organization website, too.

Above: Andra Keay’s guidelines for a great, accurate, and realistic photo of women in robotics: real robot programming; adults of various ages working on robots; an active single subject; individuals shown using tools or code to build a robot; unbranded images; and permission from the subject to use the image.

Image Credit: Andra Keay

Hey, we’d also love mentions of Women in Robotics in any citable fashion! Wikipedia won’t let us have a page because we don’t have third-party references, and sadly, mentions of our Ada Lovelace Day lists by other organizations have not credited us. We are now an official 501(c)(3) organization, registered in the US, with the mission of supporting women and non-binary people who work in robotics, or who are interested in working in robotics.

Above: Additional details of the Women in Robotics photo challenge, with a further example and a call for submissions to photos@womeninrobotics.org.

Image Credit: Andra Keay

If a picture is worth a thousand words, then we can save a forest’s worth of outreach, diversity, and equity work, simply by showing people what women in robotics really do.


Levi Strauss’ Dr. Katia Walsh on why diversity in AI and ML is non-negotiable

As part of VentureBeat’s series of interviews with women and BIPOC leaders in the AI industry, we sat down with Dr. Katia Walsh, chief strategy and artificial intelligence officer at Levi Strauss & Co. In her career she has forged paths for people from every intersection of race, culture, class, and education, giving them the tools they need in an AI- and data-centric world to be creative, solve problems, develop new solutions, and change the game in their roles across their companies. She’s passionate about the power of diversity, about empowering her employees, and about using technology for good. Learn more below about her career, from communist Bulgaria to becoming Levi Strauss & Co.’s first chief strategy and artificial intelligence officer, and about her DE&I manifesto.


See the others in the series: Intel’s Huma Abidi, Redfin’s Bridget Frey, Salesforce’s Kathy Baxter, McAfee’s Celeste Fralick, and ThoughtSpot’s Cindi Howson.


VB: Could you tell us about your background, and your current role at your company?

I started my career as a journalist in communist Bulgaria, where I personally experienced the power of information through a story I wrote while still in high school. That experience led me to become an investigative reporter who aimed to impact human lives, democracy, and society overall. After the fall of communism, I pursued further education in the U.S. During my master’s studies, I discovered the power of new communication technology, and specifically the internet, to amplify the power of information. I then continued my education through a Ph.D. program that specialized in new communication technology. It was at that point that I became fascinated with a third power, the power of machine learning and its ability to drive desired outcomes. This convergence of three powers — information (or data), technology (or digital), and machine learning (part of artificial intelligence) — became the focus of my career.

Over the past 20 years, I’ve used my passion for these three powers to help global businesses win with digital, data, and AI. Throughout my career, I have worked to enable companies to thrive through these powers. I’ve used digital, data, and AI to help both digital-born and established businesses become customer-centric, indispensable to their consumers, and grow. I’ve found myself gravitating to companies that not only strive for profit but also stand for doing social good in the world.

Today, as the first chief strategy and artificial intelligence officer for Levi Strauss & Co., I’m responsible for digital strategy and transformation, while infusing our culture with data, analytics, and artificial intelligence capabilities. This helps us put our fans at the center of everything we do, drive business value across the company globally, and serve as a platform for doing good in the world.

VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman in tech” — how has that influenced your career?

One of the myths about digital transformation is that it’s all about harnessing technology. It’s not. To be successful, digital transformation inherently requires and relies on diversity. Artificial intelligence is the result of human intelligence, enabled by its vast talents and also susceptible to its limitations. Therefore, it is imperative that all teams that work in technology and AI are as diverse as possible.

By diversity of people I don’t mean just the obvious demographics such as race, ethnicity, gender, and age. We critically need people with different skill sets, experiences, educational backgrounds, cultural and geographic perspectives, ways of thinking and working, and more. For example, on the teams I’ve led, I’ve had the privilege of working with many people holding advanced degrees and also people with no formal education. Why? When you have a diverse team reviewing and analyzing data, whether it’s for decision-making or algorithms for digital products, you mitigate bias, you move the technology world closer to reflecting the real world, and you are better able to serve your customers, who are much more diverse than most companies give them credit for.

VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community?

I consider the world to be my community. As an American and European citizen who’s worked at global companies, a global perspective comes naturally, but it’s also important for fostering diversity. The teams I’ve led have been located all over the world, from Boston, Toronto, and Dallas to all geographies throughout Europe and the U.K., plus Singapore, Shanghai, Mumbai, Bangalore, Johannesburg, Nairobi, and Cairo.

At Levi Strauss & Co., diversity of skill sets has played a key role in our digital transformation. In addition to engineers and computer scientists, our growing AI team comprises statisticians, physicists, philosophers, sociologists, designers, retail employees, and distribution center operators. I recently initiated and led our company’s first-ever digital upskilling program, a Machine Learning Bootcamp. By design, it tapped employees with no previous coding or statistics experience, working in all markets and functions of the company throughout the world.

The goal was to train people who have deep apparel and retail experience and upgrade their abilities with the latest machine learning skills. In eight weeks, we took people who had never seen code before and trained them to work with Python, use libraries, program neural networks, write automation scripts, and deliver value from coding. This combination of apparel retail expertise and machine learning skills is already resulting in new ways of connecting with consumers, new efficiencies, new creative designs, and new opportunities for our storied brand.

This first-of-its-kind initiative in the apparel retail industry helped us cultivate more diversity and attract more women into the traditionally male-dominated field of AI. For example, women represented almost two-thirds of our first machine learning graduating class, and the graduates are spread across 14 locations around the world.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

Like anything in society, our industry would benefit greatly from the work that women, especially Black and BIPOC women, do. We owe it to the world to have more diverse talents creating the current and future solutions and products of technology and artificial intelligence.

Human-centric design by definition means design of and for all humans on the planet. I am personally very grateful to the brave women who have helped uncover and expose bias; advocate for equality, representation and fairness; introduce necessary regulation; and keep solving the myriad of problems that have traditionally stemmed from lack of diversity in our industry.

I look forward to seeing more women — across all ethnicities, Black and BIPOC included — with more backgrounds, more skill sets, more geographies, and more perspectives consistently present and evident in our field. This will amplify the power of digital transformation and position businesses and organizations for future success, while literally changing industries, society, and the world.


iCIMS: Nearly 20% of orgs aren’t tracking diversity in hiring, recruitment

A year after the 2020 summer of protest against systemic racism, when companies outlined their commitments to greater representation, nearly 20% of organizations are still not tracking any diversity metrics in their recruitment or hiring practices.

The State of Diversity, Equity and Inclusion in the Workplace report, developed by talent cloud company iCIMS and Talent Board, a nonprofit candidate experience benchmark research organization, was issued to better understand how the changing conversation around diversity, equity, and inclusion (DEI) has actually manifested itself within talent acquisition over the past year. Among the findings, the study revealed that technology implementation is the most common tactic in deliberate efforts to remove unconscious bias in hiring.

The study found that 47% of organizations have implemented technology to help reduce unconscious bias in their recruiting and hiring. Although 53% have not implemented such technology, one-third of that group plans to do so in the future.

And while 60% of organizations have instituted diverse slate policies or diversity-focused hiring goals, only 34% embed these targets at the recruiter or hiring manager level. Taken together, the study suggests that well-intentioned C-suites are struggling to channel the widespread desire for a more diverse, representative and bias-free environment into standard hiring practices.

Part of the challenge is that bias is impossible to completely eliminate at the human level, and that is where technology shines. Before interviews, for example, hiring managers can run resumes through an artificial intelligence system to remove anything that could lead to bias on the part of the interviewer: name, address or location, college, GPA, and so on. This allows the interviewer to focus on the aspects that matter: skills, experience, potential, and results.
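
A toy illustration of that redaction step, assuming resumes arrive as plain text with labeled fields; production systems rely on trained entity-recognition models rather than the hypothetical regexes and field names shown here.

```python
# Illustrative sketch of resume redaction: blank out fields that commonly
# trigger interviewer bias. The field list and format are hypothetical.
import re

BIASING_FIELDS = ["name", "address", "location", "college", "university", "gpa"]

def redact_resume(text: str) -> str:
    for field in BIASING_FIELDS:
        # Matches lines like "Name: Jane Doe" and blanks out the value.
        pattern = rf"(?im)^({field})\s*:\s*.*$"
        text = re.sub(pattern, r"\1: [REDACTED]", text)
    return text

resume = "Name: Jane Doe\nGPA: 3.9\nSkills: Python, SQL, leadership"
print(redact_resume(resume))
# Name: [REDACTED]
# GPA: [REDACTED]
# Skills: Python, SQL, leadership   <- skills survive the redaction
```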

Read the full report from iCIMS and Talent Board.


Why Salesforce’s Kathy Baxter says diversity and inclusion efforts aren’t enough

At this year’s Transform we’re stepping up our efforts to build a roster of speakers that reflects the diversity in the industry and highlights the work of leaders who are making a difference.

Among them is Kathy Baxter, Principal Architect, Ethical AI Practice at Salesforce. Baxter began working on AI ethics at Salesforce in 2016 and, in 2018, pitched the role of AI Ethicist to the company’s chief scientist, who pitched it to the CEO; six days later, it was official. We were excited for the opportunity to speak to her about what the role entails, as well as her thoughts on how the industry is changing, and why focusing on diversity, equity, and inclusion (DE&I) efforts isn’t enough.


See the first two in the series: Intel’s Huma Abidi and Redfin’s Bridget Frey. More to follow.


VB: Could you tell us about your background, and your current role at your company?

I received a BS [Bachelor of Science] in Applied Psychology and an MS [Master of Science] in Engineering Psychology/Human Factors Engineering from GA [Georgia] Tech. The degrees combine social science with technology and gave me a strong foundation in research ethics.

I started working on AI ethics “on the side” at Salesforce in 2016, and by 2018, I was working the equivalent of two full-time jobs. I pitched a full-time role of AI Ethicist to our Chief Scientist at the time, Richard Socher, in August of 2018. He agreed this was needed and pitched it to our CEO, Marc Benioff, who also agreed, and six days later, it was official.

My colleague, Yoav Schlesinger, and I partner with research scientists and product teams to identify potential unintended consequences of the AI research and features they create. We work with them to ensure that the development is responsible, accountable, transparent, and inclusive. We also work to ensure that our solutions empower our customers and society. It’s not about AI replacing humans but about helping us create better solutions where it makes sense. That means we also want to avoid techno-solutionism, so we always ask not just “Can we do this?” but “Should we?”

We also work with our partners and customers to ensure that they are using our AI technology responsibly and with our government affairs team to participate in the creation of regulations that will ensure everyone is creating and using AI responsibly.

VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman in tech” — how has that influenced your career?

I have always participated in DE&I events at the places I have worked whether that was educational or recruiting events. I’ve also facilitated courses focused on skills to help URMs [underrepresented minorities] advance to higher levels in companies where we’ve seen a large drop off.

The last few years, though, I have stepped away from those efforts because I don’t believe they actually address the root cause of the lack of diversity and inclusion. Recruiting events, or teaching people in underrepresented groups skills to deal with systemic bias, put the emphasis on this being a pipeline problem, or imply that the people facing bias are responsible for fixing it.

In my experience, both of these premises fail to address the most serious cause of lack of diversity, and that’s the inherent bias of those in power to decide who is hired, how people are treated when they are hired, and who gets promoted.

I look for every opportunity to ensure that, for any role I have contact with, we reach out to as wide a field of candidates as possible, that we are aware of our biases during hiring and promotion discussions, and to always be the person who speaks out when I hear or see non-inclusive behavior happening. It’s about calling people in, not out.

So reminding people when we talk about things as simple as project names, “That’s another male scientist’s name. How about a female’s name or we avoid gendered names altogether?” Or looking around the room in important meetings and observing out loud, “Wow. This is a pretty homogenous group we have here. How can we get some other voices involved?”

I also believe in the importance of mentoring and sponsoring others. When I find brilliant folks with expertise that aren’t in the room, in a document, or on an email thread perhaps because they are junior or they aren’t connected with the particular project at hand, I make sure to mention their names and bring them in. It takes work to make sure that hierarchy or organizational charts don’t prevent us from having the best people in discussions because it is worth it for everyone.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

The ethics in tech work, and especially ethics in AI work, is largely driven by women and BIPOC, since they are the ones harmed by non-inclusive practices and products. It’s taken a long time, but it’s gratifying to see the work of Joy Buolamwini and Timnit Gebru on bias in facial recognition technology [FRT] being broadly consumed by regulators, technology creators, and even consumers, thanks to the “Coded Bias” documentary on Netflix.

We still have a long way to go as FRT is increasingly being used in harmful ways because there is no transparency or accountability when harm is found.

I’m also excited to see more and more students graduating from technology programs with a better understanding of ethics and responsibility. As they become a larger part of tech companies, my hope is that we will see a demise of dark design patterns and a greater focus on helping society, not just making money off of it.

This won’t be sufficient so we need meaningful regulation to stop irresponsible companies from racing to the ethical bottom in the pursuit of profits by any means necessary. We need more women, LGBTQ+, Black, and BIPOC members in the government, civil society, and leadership positions in all companies to make significant changes.

[Baxter’s talk is just one of many conversations around diversity and inclusion at Transform 2021 (July 12-16).  On July 12, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.]


Intel exec Huma Abidi on the urgent need for diversity and inclusion in AI

As part of the lead-up to Transform 2021 coming up July 12-16, we’re excited to put a spotlight on some of our conference speakers who are leading impactful diversity, equity, and inclusion initiatives in AI and data.

We were lucky to land a conversation with Huma Abidi, senior director of AI software products and engineering at Intel. She spoke about her DE&I work in her private life, including her support for STEM education for girls in the U.S. and around the world, her founding of the Women in Machine Learning group at Intel, and more.

VB: Could you tell us about your background, and your current role at your company?

HA: This one is easy. As a senior director of AI software products and engineering at Intel, I’m responsible for strategy, roadmaps, requirements, validation and benchmarking of deep learning, machine learning and analytics software products. I lead a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions.

VB: Any woman and person of color in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman and person of color in tech” — how has that influenced your career?

HA: That is very true. Being a woman, and especially a woman of color, you are constantly aware that you are underrepresented in the tech industry. When I joined the tech workforce over two decades ago, I was often the only woman in the room, and it was very obvious to me that there was something wrong with that picture. I decided to do my part to change that, and I also proactively sought leaders who would help me progress in my career as a technical leader as well as support my DE&I efforts.

From early on in my career, I volunteered to be part of Intel’s initiatives to create a diverse and inclusive workforce. I participated in hiring events focused on recruiting women and other underrepresented minorities (URMs) for tech jobs. To help with the onboarding of new URM hires, I led cohorts to offer support, make connections, and build networks. To ensure retention, I mentored (and still do!) women and URMs at various career stages, and also helped match mentors and mentees.

I am especially proud to have founded the Women in Machine Learning group at Intel where we discuss exciting technical topics in AI, while also bringing in experts in other areas such as mindfulness. During the pandemic it has been particularly challenging for parents with small children, and we continue to provide support and coaching to help with regards to work-life balance.

After meeting the 2020 goal of achieving full representation of women and URMs at every level (at market availability) in the U.S., Intel’s goal is to increase the number of women in technical roles to 40% by 2030 and to double the number of women and URM in senior leadership. I am very proud to be part of Intel’s RISE initiative.

VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community?

HA: I am very passionate about technology and equally about diversity and inclusion. As mentioned above I am involved in many initiatives at Intel related to DE&I.

Just last week, at the launch event of our AI for Youth program, I met with 18 young cadets, mostly Black and Hispanic youth, who are committed to military service as part of a Junior ROTC program. We had a great discussion about technology, artificial intelligence, and the challenges of being a minority, URM, or woman in tech.

I support several organizations around the world for the cause of women’s education, particularly in STEM, including Girl Geek X and Girls Innovate, and I am on the board of Led By, an organization that provides mentorship to minority women.

According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), girls lose interest in science after fourth grade. I believe that before young girls start developing negative perceptions about STEM, there need to be role models who can show them that it is cool to be an engineer or a scientist.

I enjoy talking to high school and college students both in the U.S. and other countries to influence them in considering a career in engineering and AI. Recently, I was invited to talk to 400 students in India, mostly girls, to share with them what it is to be a woman in the tech industry, working in the field of AI.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

HA: Women make up nearly half the world’s population and yet there is a large gap when it comes to technical roles and even more so for BIPOC.

There have been several hopeful signs. In recent years, there has been an increasing number of high-profile women in technology as well as in leadership roles in tech companies, academia, and startups. This includes Susan Wojcicki, CEO of YouTube; Aicha Evans, CEO of Zoox; Fei-Fei Li, leading human-centered AI at Stanford; and Meredith Whittaker, working on the social implications of AI at the NYU AI Now Institute, to name a few.

Media and publications are also helping highlight these issues and recognizing women who are making a difference in this area. In the past few years I have participated in several VentureBeat events and a panel to discuss and bring forward issues like bias in AI, DE&I, and the gender and race gaps in the tech industry. I am grateful to be recognized as a 2021 “Woman of Influence” by the Silicon Valley Business Journal and with a 2021 “Tribute to Women” award by YWCA Golden Gate Silicon Valley for the work I have done in this area.

All tech companies are grappling with a lack of gender parity, and it is well understood that unless we build a pipeline of women in technology, the gender gap will not be narrowed or closed. When companies put measures into place to achieve more gender diversity, there should be an explicit focus on race as well as gender. It’s especially important to get more women and underrepresented minorities into AI (an area I am working on), because of the potential biases that a lack of representation can cause when creating AI solutions.

Focused efforts need to be made to provide women, especially BIPOC, leadership opportunities. This is possible only if they have advocates, mentors, and sponsors.

These issues are common to all tech companies and the best way we can make real progress is by joining forces, to make collective investment in fixing these issues, particularly for the underserved communities and partnering with established non-profits.

Earlier this year, Intel announced a new industry coalition with five major companies to develop shared diversity and inclusion goals and metrics. The coalition’s inclusion index serves as a benchmark to track diversity and inclusion improvements, shares current best practices, and highlights opportunities to improve outcomes across industries.

The coalition is focusing on four critical areas: 1) leadership representation, 2) inclusive language, 3) inclusive product development, and 4) STEM readiness in underserved communities.

These are examples of great steps in the right direction to close diversity, gender, and race gaps in the tech industry going forward.

[Abidi’s talk is just one of many conversations around DE&I at Transform 2021 next week (July 12-16). On Monday, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.]


LEGO Everyone is Awesome diversity set launches with Pride Month

LEGO has unveiled its new ‘Everyone is Awesome’ display set featuring the colors of the rainbow (and more), as well as monochrome minifigs in each color. The set is designed to be displayed rather than played with, giving LEGO fans a way to showcase their pride or support for diverse communities. The new LEGO set will arrive just in time for Pride Month in June.

The new LEGO display set is based on the iconic LGBTQIA+ flag, according to the company; it features a total of 11 colors, as well as a variety of minifig designs representing each color. The set includes 346 pieces, though it’ll no doubt be much easier to put together than some of LEGO’s other, more complex sets.

The display, once it is assembled, measures about 4 inches tall and 5 inches wide. The minifigs can be placed around it, used with other LEGO sets, or arranged neatly on the display for a special place on your shelf or bookcase. In keeping with the theme, LEGO will make the ‘Everyone is Awesome’ set available to purchase on June 1, the first day of Pride Month.

The set’s designer, VP of Design Matthew Ashton, said:

I wanted to create a model that symbolises inclusivity and celebrates everyone, no matter how they identify or who they love. Everyone is unique, and with a little more love, acceptance and understanding in the world, we can all feel more free to be our true AWESOME selves! This model shows that we care, and that we truly believe ‘Everyone is awesome’!

You’ll be able to purchase the ‘Everyone is Awesome’ set from LEGO’s website, as well as its branded stores, for $34.99 USD (and the equivalent in euros). The company also emphasized that it has partnered with a number of organizations that help it support employees who are members of the LGBTQIA+ community and their allies.


Google is changing its diversity and research policies after Timnit Gebru’s firing

Google is changing its policies related to research and diversity after completing an internal investigation into the firing of ethical AI team co-leader Timnit Gebru, according to Axios. The company intends to tie the pay of certain executives to diversity and inclusivity goals. It’s also making changes to how sensitive employee exits are managed.

Although Google did not reveal the results of the investigation, the changes seem to be direct responses to how the situation with Gebru went down. After Google demanded that a paper she co-authored be retracted, Gebru told research team management that she would resign from her position and work on a transition plan, unless certain conditions were met. Instead of a transition plan, the company immediately ended her employment while she was on vacation. This sparked backlash from members of her team, and even caused some Google engineers to quit in protest.

Google had claimed that Gebru’s paper was not submitted properly, though the research team disagreed. Google has now said it will “streamline its process for publishing research,” according to Axios, but the exact details of the policy changes weren’t given.

In an internal email to staff, Jeff Dean, head of AI at Google, wrote:

I heard and acknowledge what Dr. Gebru’s exit signified to female technologists, to those in the Black community and other underrepresented groups who are pursuing careers in tech, and to many who care deeply about Google’s responsible use of AI. It led some to question their place here, which I regret.

He also apologized for how Gebru’s exit was handled, although he stopped short of calling it a firing.

The policy changes come a day after Google restructured its AI teams, a change which members of the ethical AI team were “the last to know about,” according to research scientist Alex Hanna, who is a part of the team.

Google declined to share the updated policies with The Verge, instead pointing to Axios’s article for details.




AI diversity groups snub future funding from Google over staff treatment

Google’s AI ethics drama has taken another twist. Three groups working to promote diversity in AI say they will no longer accept funding from the search giant after a series of controversial firings at the company.

Queer in AI, Black in AI, and Widening NLP cited the dismissals of Timnit Gebru and Margaret Mitchell, the former co-leads of Google’s Ethical AI team, as well as recruiter April Christina Curley, as reasons for the decision.

In a joint statement issued on Monday, the groups said Google’s actions had “inflicted tremendous harm” and “set a dangerous precedent for what type of research, advocacy, and retaliation is permissible in our community.”

Until Google addresses the harm they’ve caused by undermining both inclusion and critical research, we are unable to reconcile Google’s actions with our organizational missions. We have therefore decided to end our sponsorship relationship with Google.

Gebru was sacked in December after a conflict over a research paper she co-authored about the dangers of large language models, which are crucial components of Google’s search products.

Mitchell was fired three months later for reportedly using automated scripts to find emails showing mistreatment of Gebru, while Curley says she was terminated because the company was “tired of hearing me call them out on their racist bullshit.”

The three groups said Gebru and Mitchell’s exits had disrupted their lives and work, and also stymied the efforts of their former team. Curley’s departure, meanwhile, was described as “a step backward in recruiting and creating inclusive workplaces for Black engineers in an industry where BIPOC are marginalized and undermined.”

The groups urged Google to make the changes necessary to promote research integrity and transparency, as well as allow research that is critical of the company’s products.

They also called for the tech giant “to emphasize work that uplifts and hires diverse voices, honors ethical principles, and respects Indigenous and minority communities’ data and sovereignty.”

None of the organizations had previously rejected funding from a corporate sponsor. Wired reports that Queer in AI received $20,000 from Google in the past year, while Widening NLP got $15,000.

The trio joins a growing number of individuals and organizations who have spurned funding from Google over the company’s treatment of staff.

Five months after Gebru’s firing, the fallout continues to harm Google’s reputation for AI research.


Researchers find that labels in computer vision datasets poorly capture racial diversity

Datasets are a primary driver of progress in computer vision, and many computer vision applications require datasets that include human faces. These datasets often have labels denoting racial identity, expressed as a category assigned to faces. But historically, little attention has been paid to the validity, construction, and stability of these categories. Race is an abstract, fuzzy notion, and highly consistent representations of a racial group across datasets could be indicative of stereotyping.

Northeastern University researchers sought to study these face labels in the context of racial categories and fair AI. In a paper, they argue that labels are unreliable as indicators of identity because some labels are more consistently defined than others, and because datasets appear to “systematically” encode stereotypes of racial categories.

Their timely research comes after Deborah Raji and coauthor Genevieve Fried published a pivotal study examining facial recognition datasets compiled over 43 years. They found that researchers, driven by the exploding data requirements of machine learning, gradually abandoned asking for people’s consent, leading them to unintentionally include photos of minors, use racist and sexist labels, and tolerate inconsistent quality and lighting.

Racial labels are used in computer vision without definition, or with only loose and nebulous definitions, the coauthors observe from the datasets they analyzed (FairFace, BFW, RFW, and LAOFIW). There are myriad systems of racial classification and terminology, some of debatable coherence, with one dataset grouping together “people with ancestral origins in Sub-Saharan Africa, India, Bangladesh, Bhutan, among others.” Other datasets use labels that could be considered offensive, like “Mongoloid.”

Moreover, a number of computer vision datasets use the label “Indian/South Asian,” which the researchers point to as an example of the pitfalls of racial categories. If the “Indian” label refers only to the country of India, it’s arbitrary in the sense that the borders of India represent the partitioning of a colonial empire on political grounds. Indeed, racial labels largely correspond with geographic regions, including populations with a range of languages, cultures, separation in space and time, and phenotypes. Labels like “South Asian” should include populations in Northeast India, who might exhibit traits more common in East Asia, but ethnic groups span racial lines and labels can fractionalize them, placing some members in one racial category and others in a different category.

“The often employed, standard set of racial categories — e.g., ‘Asian,’ ‘Black,’ ‘White,’ ‘South Asian’ — is, at a glance, incapable of representing a substantial number of humans,” the coauthors wrote. “It obviously excludes indigenous peoples of the Americas, and it is unclear where the hundreds of millions of people who live in the Near East, Middle East, or North Africa should be placed. One can consider extending the number of racial categories used, but racial categories will always be incapable of expressing multiracial individuals, or racially ambiguous individuals. National origin or ethnic origin can be utilized, but the borders of countries are often the results of historical circumstance and don’t reflect differences in appearance, and many countries are not racially homogeneous.”

Equally problematic, the researchers found that faces in the datasets they analyzed were systematically the subject of racial disagreements among annotators. All datasets seemed to include and recognize a very specific type of person as Black — a stereotype — while having more expansive (and less consistent) definitions for other racial categories. Furthermore, the consistency of racial perception varied across ethnic groups, with Filipinos in one dataset being less consistently seen as Asian compared with Koreans, for example.

“It is possible to explain some of the results purely probabilistically – blonde hair is relatively uncommon outside of Northern Europe, so blond hair is a strong signal of being from Northern Europe, and thus, belonging to the White category. But if the datasets are biased towards images collected from individuals in the U.S., then East Africans may not be included in the datasets, which results in high disagreement on the racial label to assign to Ethiopians relative to the low disagreement on the Black racial category in general,” the coauthors explained.
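
For readers wondering what “disagreement on a racial label” looks like concretely, here is a minimal Python sketch of one way such annotator consistency could be measured per category; the data structure and numbers are invented for illustration, not drawn from the paper.

```python
# Per-label annotator disagreement: for each face, several annotators
# assign a label; we measure how often annotators depart from the
# majority label, grouped by that majority label. Data is invented.
from collections import Counter, defaultdict

# face_id -> labels from independent annotators (illustrative only)
annotations = {
    "face_01": ["Black", "Black", "Black"],
    "face_02": ["Asian", "Asian", "South Asian"],
    "face_03": ["White", "Middle Eastern", "White"],
}

disagreement = defaultdict(list)
for labels in annotations.values():
    majority, count = Counter(labels).most_common(1)[0]
    # Fraction of annotators who departed from the majority label.
    disagreement[majority].append(1 - count / len(labels))

for label, rates in disagreement.items():
    print(f"{label:>12}: mean disagreement {sum(rates) / len(rates):.2f}")
```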

These racial labeling biases could be reproduced and amplified if left unaddressed, the coauthors warn, taking on a false validity with dangerous consequences when divorced from cultural context. Indeed, numerous studies — including the landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Raji — and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to various biases. One frequent confounder is technology and techniques that favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras. These prejudices can be encoded in algorithms such that their performance on darker-skinned people falls short of that on those with lighter skin.

“A dataset can have equal amounts of individuals across racial categories, but exclude ethnicities or individuals who don’t fit into stereotypes,” they wrote. “It is tempting to believe fairness can be purely mathematical and independent of the categories used to construct groups, but measuring the fairness of systems in practice, or understanding the impact of computer vision in relation to the physical world, necessarily requires references to groups which exist in the real world, however loosely.”
