Intel exec Huma Abidi on the urgent need for diversity and inclusion in AI



As part of the lead-up to Transform 2021 coming up July 12-16, we’re excited to put a spotlight on some of our conference speakers who are leading impactful diversity, equity, and inclusion initiatives in AI and data.

We were lucky to land a conversation with Huma Abidi, senior director of AI software products and engineering at Intel. She spoke about her DE&I work both at Intel and in her private life, including her support for STEM education for girls in the U.S. and around the world, her founding of the Women in Machine Learning group at Intel, and more.

VB: Could you tell us about your background, and your current role at your company?

HA: This one is easy. As a senior director of AI software products and engineering at Intel, I’m responsible for strategy, roadmaps, requirements, validation and benchmarking of deep learning, machine learning and analytics software products. I lead a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions.

VB: Any woman and person of color in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman and person of color in tech” — how has that influenced your career?

HA: That is very true. Being a woman, and especially a woman of color, you are constantly aware that you are under-represented in the tech industry. When I joined the tech workforce over two decades ago, I was often the only woman in the room, and it was very obvious to me that there was something wrong with that picture. I decided to do my part to change that, and I also proactively sought out leaders who would help me progress in my career as a technical leader as well as support my DE&I efforts.

From early on in my career, I volunteered to be part of Intel’s initiatives to create a diverse and inclusive workforce. I participated in hiring events focused on recruiting women and other under-represented minorities (URMs) for tech jobs. To help with the onboarding of new URM hires, I led cohorts to offer support and to help them make connections and build their networks. To support retention, I mentored (and still do!) women and URMs at various career stages, and also helped match mentors and mentees.

I am especially proud to have founded the Women in Machine Learning group at Intel, where we discuss exciting technical topics in AI while also bringing in experts in other areas, such as mindfulness. The pandemic has been particularly challenging for parents with small children, and we continue to provide support and coaching to help with work-life balance.

After Intel met its 2020 goal of achieving full representation of women and URMs at every level (at market availability) in the U.S., the company’s new goal is to increase the number of women in technical roles to 40% by 2030 and to double the number of women and URMs in senior leadership. I am very proud to be part of Intel’s RISE initiative.

VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community?

HA: I am very passionate about technology and equally about diversity and inclusion. As mentioned above I am involved in many initiatives at Intel related to DE&I.

Just last week, at the launch event of our AI for Youth program, I met with 18 young cadets, mostly Black and Hispanic youth, who are committed to military service as part of a Junior ROTC program. We had a great discussion about technology, artificial intelligence, and the challenges of being a woman or underrepresented minority in tech.

I support several organizations around the world that work for the cause of women’s education, particularly in STEM, including Girl Geek X and Girls Innovate, and I am on the board of “Led By,” an organization that provides mentorship to minority women.

According to the United Nations Educational, Scientific and Cultural Organization (UNESCO), girls lose interest in science after fourth grade. I believe that before young girls start developing negative perceptions about STEM, there need to be role models who can show them that it is cool to be an engineer or a scientist.

I enjoy talking to high school and college students, both in the U.S. and in other countries, to encourage them to consider a career in engineering and AI. Recently, I was invited to speak to 400 students in India, mostly girls, to share what it is like to be a woman in the tech industry working in the field of AI.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

HA: Women make up nearly half the world’s population, and yet there is a large gap when it comes to technical roles, a gap that is even wider for BIPOC women.

There have been several hopeful signs. In recent years, there has been an increasing number of high-profile women in technology and in leadership roles at tech companies, in academia, and at startups. They include Susan Wojcicki, CEO of YouTube; Aicha Evans, CEO of Zoox; Fei-Fei Li, leading human-centered AI at Stanford; and Meredith Whittaker, working on the social implications of AI at NYU’s AI Now Institute, to name a few.

Media and publications are also helping highlight these issues and recognizing women who are making a difference in this area. In the past few years I have participated in several VentureBeat events and a panel to discuss and bring forward issues like bias in AI, DE&I, and gender and race gaps in the tech industry. I am grateful to be recognized as a 2021 “Woman of Influence” by the Silicon Valley Business Journal and a 2021 “Tribute to Women” honoree by YWCA Golden Gate Silicon Valley for the work I have done in this area.

All tech companies are grappling with the lack of gender parity, and it is well understood that unless we build a pipeline of women in technology, the gender gap will not be narrowed or closed. When companies put measures in place to achieve more gender diversity, there should be an explicit focus on race as well as gender. It’s especially important to get more women and underrepresented minorities into AI (an area I am working on), because a lack of representation can introduce bias when AI solutions are created.

Focused efforts need to be made to provide women, especially BIPOC, leadership opportunities. This is possible only if they have advocates, mentors, and sponsors.

These issues are common to all tech companies, and the best way we can make real progress is by joining forces: making collective investments in fixing these issues, particularly for underserved communities, and partnering with established nonprofits.

Earlier this year, Intel announced a new industry coalition with five major companies to develop shared diversity and inclusion goals and metrics. The coalition’s inclusion index serves as a benchmark to track diversity and inclusion improvements, shares current best practices, and highlights opportunities to improve outcomes across industries.

The coalition is focusing on four critical areas: 1) leadership representation, 2) inclusive language, 3) inclusive product development, and 4) STEM readiness in underserved communities.

These are examples of great steps in the right direction to close diversity, gender, and race gaps in the tech industry going forward.

[Abidi’s talk is just one of many conversations around DE&I at Transform 2021 next week (July 12-16). On Monday, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.]


Nvidia Reveals Several Urgent Security Issues in GPU Driver

Nvidia is warning GPU owners to update their graphics card drivers after the company discovered several high-severity security vulnerabilities. ThreatPost reports that Nvidia found bugs in its virtual GPU software and in the display driver that’s required for the graphics card to function.

Nvidia has a table showing the affected drivers for its different product lines across Windows and Linux, but the distinction doesn’t matter much here: GeForce, Quadro, and Tesla drivers are vulnerable on both Windows and Linux, so it’s best to update your graphics driver regardless.

In total, the company revealed 13 security vulnerabilities, five in the GPU display driver and eight in the vGPU software. Most sit between 7 and 8 on CVSS 3.1 (Common Vulnerability Scoring System), an open standard for rating security vulnerabilities on a scale of 0 to 10.

CVE-2021-1074 is one of the most pressing issues, with a base CVSS score of 7.5. This vulnerability shows up in the display driver installer, where an attacker with local system access can replace the installation files with malicious ones. At the other end of the range, CVE-2021-1078, with a base score of 5.5, is a vulnerability in the kernel driver that could lead to a system crash.

There’s also CVE‑2021‑1085 through the vGPU software (base score of 7.3), which opens the potential to write data to shared memory locations and manipulate it after validation. That could lead to escalation of privileges and denial of service.
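
For context on how those numbers translate into severity labels, CVSS v3.1 also defines qualitative ratings for ranges of base scores. Here is a minimal sketch of that mapping applied to the scores reported above; the banding follows the published CVSS v3.1 specification, and the code itself is purely illustrative:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Base scores reported in this batch of Nvidia advisories
for cve, score in [("CVE-2021-1074", 7.5), ("CVE-2021-1078", 5.5), ("CVE-2021-1085", 7.3)]:
    print(f"{cve}: {score} -> {cvss_severity(score)}")
```

Run as-is, this labels CVE-2021-1074 and CVE-2021-1085 as High and CVE-2021-1078 as Medium.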

If you just have an Nvidia graphics card, you don’t need to worry about the vGPU vulnerabilities. The vGPU software is built for the data center, allowing operators to share graphics card power across several virtual machines. Nvidia recommends updating your graphics card driver through the Nvidia driver download page and the vGPU software through the Nvidia licensing portal (if you have access to it).
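
If you want to confirm which driver version is installed before and after updating, the nvidia-smi utility that ships with the driver can report it. A minimal Python sketch, assuming nvidia-smi is available on your PATH:

```python
import subprocess

def installed_driver_version() -> str:
    """Return the installed Nvidia driver version as reported by nvidia-smi."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    # nvidia-smi prints one line per GPU; the driver version is the same for all of them
    return result.stdout.strip().splitlines()[0]

if __name__ == "__main__":
    print("Installed Nvidia driver version:", installed_driver_version())
```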

The vulnerabilities highlight the importance of updating your software and drivers regularly. Earlier this year, Nvidia fixed several vulnerabilities in its display driver, and it continues to push updates whenever vulnerabilities show up. The current batch of problems may lead to malicious code execution (ransomware, etc.), escalation of privileges, data disclosure, data corruption, and/or denial of service, so you should update your GPU driver as soon as possible.

All of the issues are in software rather than hardware, so it doesn’t matter which graphics card you have. Even if you’re running a last-gen or older GPU (a likely situation given the ongoing graphics card shortage), you still need to update your driver.


OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3

The makers of large language models like Google and OpenAI may not have long to set standards that sufficiently address the technology’s impact on society. That’s according to a paper published last week by researchers from OpenAI and Stanford University. Open source projects aiming to recreate GPT-3, such as GPT-Neo, a project headed by EleutherAI, are already underway.

“Participants suggested that developers may only have a six- to nine-month advantage until others can reproduce their results. It was widely agreed upon that those on the cutting edge should use their position on the frontier to responsibly set norms in the emerging field,” the paper reads. “This further suggests the urgency of using the current time window, during which few actors possess very large language models, to develop appropriate norms and principles for others to follow.”

The paper looks back at a meeting held in October 2020 to consider GPT-3 and two pressing questions: “What are the technical capabilities and limitations of large language models?” and “What are the societal effects of widespread use of large language models?” Coauthors of the paper described “a sense of urgency to make progress sooner than later in answering these questions.”

When the discussion between experts from fields like computer science, philosophy, and political science took place last fall, GPT-3 was the largest known language model, at 175 billion parameters. Since then, Google has released a trillion-parameter language model.

Large language models are trained on vast amounts of text scraped from sites like Reddit and Wikipedia. As a result, they’ve been found to contain bias against a number of groups, including people with disabilities and women. GPT-3, which is being exclusively licensed to Microsoft, seems to have a particularly low opinion of Black people and appears to be convinced all Muslims are terrorists.

Large language models could also perpetuate the spread of disinformation and could potentially replace jobs.

Perhaps the most high-profile criticism of large language models came from a paper coauthored by former Google Ethical AI team leader Timnit Gebru. That paper, which was under review at the time Gebru was fired in late 2020, calls a trend of language models created using poorly curated text datasets “inherently risky” and says the consequences of deploying those models fall disproportionately on marginalized communities. It also questions whether large language models are actually making progress toward humanlike understanding.

“Some participants offered resistance to the focus on understanding, arguing that humans are able to accomplish many tasks with mediocre or even poor understanding,” the OpenAI and Stanford paper reads. 

Experts cited in the paper return repeatedly to the topic of which choices should be left in the hands of businesses. For example, one person suggests that letting businesses decide which jobs should be replaced by a language model would likely have “adverse consequences.”

“Some suggested that companies like OpenAI do not have the appropriate standing and should not aim to make such decisions on behalf of society,” the paper reads. “Someone else observed that it is especially difficult to think about mitigating bias for multi-purpose systems like GPT-3 via changes to their training data, since bias is typically analyzed in the context of particular use cases.”

Participants in the study suggest ways to address the negative consequences of large language models, such as enacting laws that require companies to acknowledge when text is generated by AI — perhaps along the lines of California’s bot law. Other recommendations include:

  • Training a separate model that acts as a filter for content generated by a language model (a minimal sketch of this idea appears after this list)
  • Deploying a suite of bias tests to run models through before allowing people to use the model
  • Avoiding some specific use cases
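
To make the filter-model recommendation concrete, here is a minimal, hypothetical sketch of a filter sitting between a text generator and the end user. The generate_text and toxicity_score callables are placeholders standing in for a real language model and a separately trained filter model; neither is an API described in the paper:

```python
from typing import Callable

def filtered_generate(
    prompt: str,
    generate_text: Callable[[str], str],     # placeholder: the large language model
    toxicity_score: Callable[[str], float],  # placeholder: a separately trained filter model
    threshold: float = 0.5,
    max_attempts: int = 3,
) -> str:
    """Generate text, but only return completions the filter model scores below a toxicity threshold."""
    for _ in range(max_attempts):
        completion = generate_text(prompt)
        if toxicity_score(completion) < threshold:
            return completion
    # Fall back to a refusal rather than returning flagged text
    return "[completion withheld by content filter]"
```

In practice the threshold, the retry policy, and the filter model itself would all need careful tuning, and, as Abid notes later in this article, post facto filters can also over-flag innocuous text.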

There is precedent for this kind of course correction in large computer vision datasets like ImageNet, an influential dataset of millions of images assembled by Stanford researchers with the help of Mechanical Turk workers in 2009. ImageNet is widely credited with moving the computer vision field forward. But following accounts of ImageNet’s major shortcomings, like Excavating AI, in 2019 ImageNet’s creators removed the people category and roughly 600,000 images from the dataset. Last year, similar issues with racist, sexist, and offensive content led researchers at MIT to retire the 80 Million Tiny Images dataset, created in 2006. At the time, researcher Vinay Prabhu told VentureBeat he would have liked to see the dataset reformed rather than canceled.

Some in the field have recommended audits of algorithms by independent external actors as a way to address harm associated with deploying AI models. But that would likely require industry standards not yet in place.

A paper published last month by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid detailed the anti-Muslim tendencies of text generated by GPT-3. Abid’s video of GPT-3 demonstrating anti-Muslim bias has been viewed nearly 300,000 times since August 2020.

In experiments detailed in a paper on the subject, he found that even the prompt “Two Muslims walked into a mosque to worship peacefully” generates text about violence. The paper also says that preceding a text generation prompt with a short phrase containing positive adjectives can reduce mentions of violence in text about Muslims by 20-40%.

“Interestingly, we found that the best-performing adjectives were not those diametrically opposite to violence (e.g. ‘calm’ did not significantly affect the proportion of violent completions). Instead, adjectives such as ‘hard-working’ or ‘luxurious’ were more effective, as they redirected the focus of the completions toward a specific direction,” the paper reads.
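
A rough, hypothetical sketch of how such an effect could be measured, in the spirit of those experiments: sample many completions for a baseline prompt and for the same prompt preceded by a positive adjective, then compare how often violence-related words appear. The generate_completions callable and the keyword list are illustrative placeholders, not part of Abid’s published code:

```python
from typing import Callable, List

# Illustrative keyword list; a real study would use a more careful notion of a "violent completion"
VIOLENCE_WORDS = {"shot", "killed", "attacked", "bomb", "violence"}

def violent_fraction(completions: List[str]) -> float:
    """Fraction of completions that mention any violence-related keyword."""
    hits = sum(any(word in c.lower() for word in VIOLENCE_WORDS) for c in completions)
    return hits / len(completions)

def compare_prompts(generate_completions: Callable[[str, int], List[str]], n: int = 100) -> None:
    """Compare violent-completion rates with and without a positive adjective in the prompt."""
    baseline = "Two Muslims walked into a"
    primed = "Muslims are hard-working. Two Muslims walked into a"
    print("baseline:", violent_fraction(generate_completions(baseline, n)))
    print("with positive adjective:", violent_fraction(generate_completions(primed, n)))
```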

In December 2020, Abid’s GPT-3 study received the Best Paper award at NeurIPS, the largest annual machine learning research conference. In a presentation about experiments probing anti-Muslim bias in GPT-3, given at the first Muslims in AI workshop at NeurIPS, Abid described the anti-Muslim bias demonstrated by GPT-3 as persistent and noted that models trained on massive text datasets are likely to have extremist and biased content fed into them. One way to deal with bias found in large language models is a post facto filtering approach like the one OpenAI uses today, but he said that in his experience this leads to innocuous text that has nothing to do with Muslims being flagged as biased, which is another problem.

“The other approach would be to somehow modify or fine-tune the bias from these models, and I think that is probably a better direction because then you could release a fine-tuned model into the world and that kind of thing,” he said. “Through these experiments, I think in a manual way we have seen that it is possible to mitigate the bias, but can we automate this process and optimize this process? I think that’s a very important open-ended research question.”

In somewhat related news, in an interview with VentureBeat last week following a $1 billion funding round, Databricks CEO Ali Ghodsi said the money was raised in part to acquire startups developing language models. Ghodsi listed GPT-3 and other breakthroughs in machine learning among trends that he expects to shape the company’s expansion. Microsoft invested in Databricks in a previous funding round. And in 2018, Microsoft acquired Semantic Machines, a startup with ties to Stanford University and UC Berkeley.
