Researchers find new vulnerability in Apple Silicon chips

Researchers have released details of an Apple Silicon vulnerability dubbed “Augury.” However, it doesn’t seem to be a huge issue at the moment.

Jose Rodrigo Sanchez Vicarte of the University of Illinois at Urbana-Champaign and Michael Flanders of the University of Washington published their findings on a flaw in Apple Silicon. The vulnerability stems from Apple’s implementation of the Data Memory-Dependent Prefetcher (DMP).

In short, a DMP looks at memory to determine what content to “prefetch” for the CPU. The researchers found that Apple’s M1, M1 Max, and A14 chips used an “array of pointers” pattern that loops through an array and dereferences the contents.

This could leak data that the program never actually reads, because the prefetcher dereferences it anyway. Apple’s implementation differs from a conventional prefetcher, as the paper explains:

“Once it has seen *arr[0] … *arr[2] occur (even speculatively!) it will begin prefetching *arr[3] onward. That is, it will first prefetch ahead the contents of arr and then dereference those contents. In contrast, a conventional prefetcher would not perform the second step/dereference operation.”

Because the CPU cores never read the data, defenses that try to track access to the data don’t work against the Augury vulnerability.

David Kohlbrenner, assistant professor at the University of Washington, downplayed the impact of Augury, noting that Apple’s DMP “is about the weakest DMP an attacker can get.”

The good news here is that this is about the weakest DMP an attacker can get. It only prefetches when content is a valid virtual address, and has a number of odd limitations. We show this can be used to leak pointers and break ASLR.

We believe there are better attacks possible.

— David Kohlbrenner (@dkohlbre) April 29, 2022

For now, the researchers say that only pointers can be leaked, and even then only within the sandboxed environment they used to study the vulnerability. Apple was notified before the public disclosure, so a patch is likely incoming.

Apple issued a March 2022 patch for macOS Monterey that fixed some nasty Bluetooth and display bugs. It also patched two vulnerabilities that allowed an application to execute code with kernel-level privileges.

Other critical fixes to Apple’s desktop operating system include one that patched a vulnerability that exposed browsing data in the Safari browser.

Finding bugs in Apple’s hardware can sometimes net a tidy profit. A Ph.D. student from Georgia Tech found a major vulnerability that allowed unauthorized access to the webcam. Apple rewarded him handsomely, paying about $100,000 for his efforts.


Repost: Original Source and Author Link


Researchers trigger new exploit by renaming an iPhone and a Tesla

Security researchers investigating the recently discovered and “extremely bad” Log4Shell exploit claim to have used it on devices as varied as iPhones and Tesla cars. Per screenshots shared online, changing the device name of an iPhone or Tesla to a special exploit string was enough to trigger a ping from Apple or Tesla servers, indicating that the server at the other end was vulnerable to Log4Shell.

In the demonstrations, researchers switched the device names to be a string of characters that would send servers to a testing URL, exploiting the behavior enabled by the vulnerability. After the name was changed, incoming traffic showed URL requests from IP addresses belonging to Apple and, in the case of Tesla, China Unicom — the company’s mobile service partner for the Chinese market. In short, the researchers tricked Apple and Tesla servers into visiting a URL of their choice.


An iPhone device information screen with name changed to contain the exploit string.
Image: Cas van Cooten / Twitter

The iPhone demonstration came from a Dutch security researcher; the other was uploaded to the anonymous Log4jAttackSurface GitHub repository.

Assuming the images are genuine, they show behavior — remote resource loading — that should not be possible with text contained in a device name. This proof of concept has led to widespread reporting that Apple and Tesla are vulnerable to the exploit.

While the demonstration is alarming, it’s not clear how useful it would be for cybercriminals. In theory, an attacker could host malicious code at the target URL in order to infect vulnerable servers, but a well-maintained network could prevent such an attack at the network level. More broadly, there’s no indication that the method could lead to any broader compromise of Apple or Tesla’s systems. (Neither company responded to an email request for comment by time of publication.)

Still, it’s a reminder of the complex nature of technological systems, which almost always depend on code pulled in from third-party libraries. The Log4Shell exploit affects an open-source Java tool called log4j, which is widely used for application event logging. It’s still not known exactly how many devices are affected, but researchers estimate the number is in the millions, including obscure systems that are rarely targeted by attacks of this nature.

The full extent of exploitation in the wild is unknown, but in a blog post, digital forensics platform Cado reported detecting servers trying to use this method to install Mirai botnet code.

Log4Shell is all the more serious for being relatively easy to exploit. The vulnerability works by tricking the application into interpreting a piece of text as a link to a remote resource, and trying to retrieve that resource instead of saving the text as it is written. All that’s necessary is for a vulnerable device to save the special string of characters in its application logs.

This creates the potential for vulnerability in many systems that accept user input, since message text can be stored in the logs. The log4j vulnerability was first spotted in Minecraft servers, which attackers could compromise using chat messages; systems that send and receive other message formats, such as SMS, are clearly also susceptible.

At least one major SMS provider appears to be vulnerable to the exploit, according to testing conducted by The Verge. When sent to numbers operated by the SMS provider, text messages containing exploit code triggered a response from the company’s servers that revealed information about the IP address and host name, suggesting that the servers could be tricked into executing malicious code. Calls and emails to the affected company had not been answered at time of publication.

An update to the log4j library has been released to mitigate the vulnerability, but patching all vulnerable machines will take time, given the challenges of updating enterprise software at scale.



AI Weekly: AI researchers release toolkit to promote AI that helps to achieve sustainability goals


While discussions about AI often center around the technology’s commercial potential, increasingly, researchers are investigating ways that AI can be harnessed to drive societal change. Among others, Facebook chief AI scientist Yann LeCun and Google Brain cofounder Andrew Ng have argued that mitigating climate change and promoting energy efficiency are preeminent challenges for AI researchers.

Along this vein, researchers at the Montreal AI Ethics Institute have proposed a framework designed to quantify the social impact of AI through techniques like compute-efficient machine learning. An IBM project delivers farm cultivation recommendations from digital farm “twins” that simulate the future soil conditions of real-world crops. Other researchers are using AI-generated images to help visualize climate change, and nonprofits like WattTime are working to reduce households’ carbon footprint by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available.

Seeking to spur further explorations in the field, a group at the Stanford Sustainability and Artificial Intelligence Lab this week released (to coincide with NeurIPS 2021) a benchmark dataset called SustainBench for monitoring sustainable development goals (SDGs) including agriculture, health, and education using machine learning. As the coauthors told VentureBeat in an interview, the goal is threefold: (1) lower the barriers to entry for researchers to contribute to achieving SDGs; (2) provide metrics for evaluating SDG-tracking algorithms; and (3) encourage the development of methods where improved AI model performance facilitates progress towards SDGs.

“SustainBench was a natural outcome of the many research projects that [we’ve] worked on over the past half-decade. The driving force behind these research projects was always the lack of large, high-quality labeled datasets for measuring progress toward the United Nations Sustainable Development Goals (UN SDGs), which forced us to come up with creative machine learning techniques to overcome the label sparsity,” the coauthors said. “[H]aving accumulated enough experience working with datasets from diverse sustainability domains, we realized earlier this year that we were well-positioned to share our expertise on the data side of the machine learning equation … Indeed, we are not aware of any prior sustainability-focused datasets with similar size and scale of SustainBench.”


Progress toward SDGs has historically been measured through civil registrations, population-based surveys, and government-orchestrated censuses. However, data collection is expensive, leading many countries to go decades between taking measurements on SDG indicators. It’s estimated that only half of SDG indicators have regular data from more than half of the world’s countries, limiting the ability of the international community to track progress toward the SDGs.

“For example, early on during the COVID-19 pandemic, many developing countries implemented their own cash transfer programs, similar to the direct cash payments from the IRS in the United States. However … data records on household wealth and income in developing countries are often unreliable or unavailable,” the coauthors said.

Innovations in AI have shown promise in helping to plug the data gaps, however. Data from satellite imagery, social media posts, and smartphones can be used to train models to predict things like poverty, annual land cover, deforestation, agricultural cropping patterns, crop yields, and even the location and impact of natural disasters. For example, the governments of Bangladesh, Mozambique, Nigeria, Togo, and Uganda used machine learning-based poverty and cropland maps to direct economic aid to their most vulnerable populations during the pandemic.

But progress has been hindered by challenges, including a lack of expertise and dearth of data for low-income countries. With SustainBench, the Stanford researchers — along with contributors at Caltech, UC Berkeley, and Carnegie Mellon — hope to provide a starting ground for training machine learning models that can help measure SDG indicators and have a wide range of applications for real-world tasks.

SustainBench contains a suite of 15 benchmark tasks across seven SDGs taken from the United Nations, including good health and well-being, quality education, and clean water and sanitation. Beyond this, SustainBench offers tasks for machine learning challenges that cover 119 countries, each designed to promote the development of SDG measurement methods on real-world data.

The coauthors caution that AI-based approaches should supplement, rather than replace, ground-based data collection. They point out that ground truth data are necessary for training models in the first place, and that even the best sensor data can only capture some — but not all — of the outcomes of interest. But AI, they still believe, can be helpful for measuring sustainability indicators in regions where ground truth measurements are scarce or unavailable.

“[SDG] indicators have tremendous implications for policymakers, yet ‘key data are scarce, and often scarcest in places where they are most needed,’ as several of our team members wrote in a recent Science review article. By using abundant, cheap, and frequently updated sensor data as inputs, AI can help plug these data gaps. Such input data sources include publicly available satellite images, crowdsourced street-level images, Wikipedia entries, and mobile phone records, among others,” the coauthors said.

Future work

In the short term, the coauthors say that they’re focused on raising awareness of SustainBench within the machine learning community. Future versions of SustainBench are in the planning stages, potentially with additional datasets and AI benchmarks.

“Two technical challenges stand out to us. The first challenge is to develop machine learning models that can reason about multi-modal data. Most AI models today tend to work with single data modalities (e.g., only satellite images, or only text), but sensor data often comes in many forms … The second challenge is to design models that can take advantage of the large amount of unlabeled sensor data, compared to sparse ground truth labels,” the coauthors said. “On the non-technical side, we also see a challenge in getting the broader machine learning community to focus more efforts on sustainability applications … As we alluded to earlier, we hope SustainBench makes it easier for machine learning researchers to recognize the role and challenges of machine learning for sustainability applications.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer





Nvidia’s GTC will draw 200K researchers for online event including metaverse session


The metaverse may be the stuff of science fiction, but it’s going to make an appearance at a pretty serious tech event: Nvidia’s annual GPU Technology Conference (GTC), an online event happening November 8-11.

GTC is expected to draw more than 200,000 attendees, including innovators, researchers, thought leaders, and decision-makers. More than 500 sessions will focus on deep learning, data science, HPC, robotics, data center/networking, and graphics. Speakers will discuss the latest breakthroughs in healthcare, transportation, manufacturing, retail, finance, telecoms, and more.

I’m moderating a session on the vision for the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. The panelists include Tim Sweeney, CEO of Epic Games; Morgan McGuire, chief scientist at Roblox; Willim Cui, vice president of Tencent Games; Jinsoo Jeon, head of metaverse at SK Telecom; Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia; Christina Heller, CEO of Metastage; and Patrick Cozzi, CEO of Cesium. (We’ll air the panel at our own GamesBeat Summit Next event on November 9-10.)

“It’s a different twist to have a metaverse session,” said Estes. “You know that the metaverse has become top of mind, with so many other companies talking about it. Omniverse [the metaverse for engineers] is our product in that area. And so we’re clearly leaning into that, but Omniverse isn’t the only thing going on. We’re welcoming and embracing other conversations about that, because in typical Nvidia fashion, a lot of our success model is the fact that we are Switzerland. We’re a platform, and a lot of companies are doing great work on our platform.”



That’s the general spirit of a lot of the sessions at GTC, Estes said.


Above: Jensen Huang is CEO of Nvidia. He gave a virtual keynote at the recent GTC event in the spring and will do so again in November.

Image Credit: Nvidia

“GTC is [where] attendees can hear from innovators who are in the same general space, but they’re taking different approaches to things,” Estes said. “There are a lot of things about the metaverse that are complementary to the Omniverse.”

Other companies represented among the speakers include Amazon, Arm, AstraZeneca, Baidu, BMW, Domino’s, Electronic Arts, Epic Games, Ford, Google, Kroger, Microsoft, MIT, Oak Ridge National Laboratory, OpenAI, Palo Alto Networks, Red Hat, Rolls-Royce, Salesforce, Samsung, ServiceNow, Snap, Stanford University, Volvo, and Walmart.

And Nvidia CEO Jensen Huang will announce new AI technologies and products in his keynote presentation, which will be livestreamed on Nov. 9 at 9 am Central European Time/4 pm China Standard Time/12 a.m. Pacific Standard Time. It will be rebroadcast at 8 am PST for viewers in the Americas.

“It’s fair to say that you can expect to hear product and technology announcements. From Jensen, you can expect to hear about new partnerships and lots of examples of actually implementing AI on the leading edge,” Estes said. “We’ll have a number of examples of lighthouse customers and end users and our ecosystem partners.”

Online-only approach


Above: Nvidia’s Cambridge-1 will be available to external U.K. scientists.

Image Credit: Nvidia

It’s the second major GTC event of the year. Traditionally, Nvidia held a big event in the spring and then a number of smaller regional events. But with the pandemic, that has evolved into two major online events, said Greg Estes, vice president of corporate marketing and developer programs at Nvidia, in an interview with VentureBeat.

Because of the delta variant of COVID-19, Nvidia opted to do another online-only event for the fall GTC.

“As for going back to physical events, we’re hoping for the spring but it’s of course hard to say,” Estes said. “On the other hand, I can’t see us doing physical-only ever again. There will always be really solid digital components going forward. It’s just been too successful. People like it a lot. And we draw a lot more people. And also we can also get to some speakers that we couldn’t get to before.”

Nvidia will make sessions available for viewing after the event.

“We’re expecting more than 200,000 registrations, which is what we had in the spring,” Estes said. “It’s just a fantastic thing to have that much interest and that many connections. For our developer community, we take all the GTC sessions and make them available in perpetuity for free. We archive these talks on Nvidia On-Demand.”

For social interaction, Nvidia is using a third-party app dubbed BrainDate to arrange meetings. But Estes noted that, due to the resurgence of COVID, the company wasn’t comfortable having a lot of in-person gatherings yet. Over time, he expects virtual reality meetings, events, and collaborations to take off, as they can be more convenient than travel for many people.

“AI technology is evolving so quickly that it makes sense to have more than one event a year,” Estes said.

Other sessions


Above: GPUs in the Nvidia Cambridge-1.

Image Credit: Nvidia

Ilya Sutskever, chief scientist at OpenAI, will discuss the history of deep learning and what the future might hold. Fei-Fei Li, professor of computer science at Stanford University, will discuss ambient intelligence (smart, sensor-based solutions) to illuminate the dark spaces of healthcare and take part in a Q&A with Kimberly Powell, Nvidia’s vice president of healthcare.

Bei Yang, vice president and technology studio executive at Disney Imagineering, will discuss how the company is using advanced technologies to “imagineer” the metaverse.

Shashi Bhushan, principal AI software and systems architect at Lockheed Martin, will describe how the company is using Nvidia Omniverse, the “metaverse for engineers,” to predict and fight wildfires.

Ross Krambergar, digital solutions for production planning at BMW, will describe how BMW is utilizing Nvidia Omniverse to realize their vision for a digital twin factory of the future to increase manufacturing flexibility.

Keith Perry, chief information officer at St. Jude Children’s Research Hospital, will explain how they used data science to advance treatments for life-threatening diseases in children. Nir Zuk, chief technology officer at Palo Alto Networks, will speak about AI for cybersecurity.

Anima Anandkumar, director of machine learning research at Nvidia and professor at Caltech, will speak in a panel on measuring and mitigating bias in AI models and run a session on advances in the convergence of AI and scientific computing.

Keith Strier, vice president of worldwide AI initiatives at Nvidia, and Mark Andrijanič, minister for digital transformation of Slovenia, will participate in a fireside chat to discuss how countries need to invest in AI, including infrastructure and data scientists.

Scientists at MIT, Amazon Web Services’ Sustainable Data Initiative, and Nvidia will explain how a group of public and private sector entities is providing climate data to scientists.

An expert panel will talk about the potential of Universal Scene Description (USD) for 3D creators in all industries. The panel includes Sebastian Grassia, project lead for USD at Pixar; Mohsen Rezayat, chief solutions architect at Siemens; Shawn Dunn, senior product manager at Epic Games; Simon Haegler, senior software developer at Esri R&D Center Zurich; Hilda Espinal, chief technology officer at CannonDesign; and Michael Kass, senior distinguished engineer at Nvidia.

Axel Gern, CTO at Daimler Trucks, will explain the strategy, challenges and opportunities of developing software-defined trucks for an autonomous future.

And Nvidia’s graphics wizards will reveal the technologies they used to create a virtual Jensen for the previous spring GTC keynote.

Emerging markets


Above: Nvidia’s Inception AI startups are from the green countries.

Image Credit: Nvidia

GTC will feature a series of sessions focused on business and technical topics in Africa, the Middle East and Latin America.

Speakers from organizations and universities, such as the Kenya AI Center of Excellence, Ethiopian Motion Design and Visual Effects Community, Python Ghana, Nairobi Women in Machine Learning & Data Science, and Chile Inria Research Center, will describe how emerging market developers are using AI to address challenges.

“We have more international speakers, and more content that shifts towards Europe and the Middle East,” Estes said. “AI is the center of gravity, but it’s not the only thing we’re doing. One of the things people are talking about is conversational AI. It touches a lot of different industries, from chatbots for call centers to healthcare, where you have a doctor who may have a patient for whom English isn’t their first language.”

A panel dubbed Bridging the Last Mile Gap with AI Education will feature experts and community leaders in Africa as they explain how they are democratizing AI and solving real-world challenges.

Representatives from Latin American government, industry, and academia will discuss the state of the AI ecosystem in Latin America and how to empower researchers and educators with GPUs and AI.

Experts will discuss natural language processing resources to build conversational AI for medium- and low-resource languages such as those in Africa, Arabia, and India.

Inception Venture Capital Alliance


Above: Nvidia’s Inception program has 8,500 AI startups.

Image Credit: Nvidia

Nvidia’s Inception program supports more than 8,500 AI startups with the potential for disruption. Nvidia execs will talk about the company’s AI strategy and direction, focused on developers, startups, computing platforms, enterprise customers, and corporate development. More than 70 startups will share their business models involving conversational AI, drug discovery, autonomous systems, emerging markets, and other areas.

The panel will include Greg Estes, VP of corporate marketing and developer programs; Manuvir Das, head of enterprise computing; Shanker Trivedi, SVP of worldwide enterprise business; Vishal Bhagwati, head of corporate development; Mat Torgow, head of venture capital business development; and Kari Briski, VP of software product management for AI/HPC.

Ozzy Johnson, director of solutions architecture at Nvidia, will discuss technologies and key frameworks to accelerate a startup’s journey.

The pandemic has spurred investment and innovation in the healthcare and life sciences (HCLS) industry. Despite economic uncertainty, HCLS AI startups raised record funding. This panel will include the CEOs from startups Cyclica in biotech, IBEX in pathology, and Rayshape in ultrasound, moderated by Renee Yao, head of global healthcare AI startups at Nvidia, and cover AI in healthcare trends, challenges, and technical breakthroughs.

Diversity & Inclusion


Above: Nvidia’s Omniverse is a way to collaborate in simulated worlds.

Image Credit: Nvidia

GTC is structured as an open, all-access event available to virtually any community around the world. Sessions have been curated to inform and inspire developers, researchers, scientists, educators, professionals, and students from historically underrepresented groups.

Topics will include building better datasets and making AI more inclusive. Nvidia partners with organizations including LatinX in AI, Tech Career and W.AI in Israel, and Ewha Womans University of Korea to offer complimentary access to Nvidia Deep Learning Institute workshops for diverse communities.

“We’re doing a lot of educational programs and training with our Deep Learning Institute, and doing other initiatives with educators from historically black colleges and universities, and we’re doing things in Africa,” Estes said. “We’re doing things specifically targeting women in technology to try to bring these communities which have historically been underrepresented to train them better to avail them of the leading thinking to work with educators.”

Nvidia offers free teaching kits for educators to get children interested in AI and engineering.

“It’s important that we’re talking to the next generation coming up, helping both younger people and then mid-career professionals who want to learn new skills,” Estes said.

One of the diversity sessions brings together academics, industry experts and the founder of W.AI to discuss how to help more women join the field of data science and AI through mentoring opportunities and supporting advanced degree enrollment.

Louis Stewart, head of strategic initiatives for Nvidia’s Developer Ecosystem, will speak with faculty and student researchers from the Africana Digital Ethnography Project on efforts to build new and unique datasets for better natural language understanding from all parts of the world.

An AI for Smart City session will talk about where AI has been deployed to solve urban challenges, ethical challenges associated with using AI in urban settings, and how it could address challenges stemming from urbanization, failing infrastructure, traffic management, population health difficulties, energy crises, and more.

The event will have regional speakers from Europe, the Middle East, Africa, Israel, India, China, Japan, South Korea, Taiwan, and southern Asia Pacific.

“There are smart people everywhere. And that’s a really important theme,” Estes said. “There is no reason in the world why certain countries should have an advantage over others when it comes to the brainpower of people doing AI work. We’re putting energy into reaching out to those communities. Africa is the example I gave earlier, but certainly in Latin America, and all across Asia Pacific, there is good thinking and great work being done today. In Singapore, and Vietnam, and other areas like that. And for us to be able to kind of bring that together in one place is really cool.”





Google’s ethical AI researchers complained of harassment long before Timnit Gebru’s firing

Google’s AI leadership came under fire in December when star ethics researcher Timnit Gebru was abruptly fired while working on a paper about the dangers of large language models. Now, new reporting from Bloomberg suggests the turmoil began long before her termination — and includes allegations of bias and sexual harassment.

Shortly after Gebru arrived at Google in 2018, she informed her boss that a colleague had been accused of sexual harassment at another organization. Katherine Heller, a Google researcher, reported the same incident, which included allegations of inappropriate touching. Google immediately opened an investigation into the man’s behavior. Bloomberg did not name the man accused of harassment, and The Verge does not know his identity.

The allegations coincided with an even more explosive story: Andy Rubin, the “father of Android,” had received a $90 million exit package despite being credibly accused of sexual misconduct. The news sparked outrage at Google, and 20,000 employees walked out of work to protest the company’s handling of sexual harassment.

Gebru and Margaret Mitchell, co-lead of the ethical AI team, went to AI chief Jeff Dean with a “litany of concerns,” according to Bloomberg. They told Dean about the colleague who’d been accused of harassment, and said there was a perceived pattern of women being excluded and undermined on the research team. Some were given lower roles than men, despite having better qualifications. Mitchell also said she’d been denied a promotion due to “nebulous complaints to HR about her personality.”

Dean was skeptical about the harassment allegations but said he would investigate, Bloomberg reports. He pushed back on the idea that there was a pattern of women on the research team getting lower-level positions than men.

After the meeting, Dean announced a new research project with the alleged harasser at the helm. Nine months later, the man was fired for “leadership issues,” according to Bloomberg. He’d been accused of misconduct at Google, although the investigation was still ongoing.

After the man was fired, he threatened to sue Google. The legal team told employees who’d spoken out about his conduct that they might hear from the man’s lawyers. The company was “vague” about whether it would defend the whistleblowers, Bloomberg reports.

The harassment allegation was not an isolated incident. Gebru and her co-workers reported additional claims of inappropriate behavior and bullying after the initial accusation.

In a statement emailed to The Verge, a Google spokesperson said: “We investigate any allegations and take firm action against employees who violate our clear workplace policies.”

Gebru said there were also ongoing issues with getting Google to respect the ethical AI team’s work. When she tried to look into a dataset released by Google’s self-driving car company Waymo, the project became mired in “legal haggling.” Gebru wanted to explore how skin tone impacted Waymo’s pedestrian-detection technology. “Waymo employees peppered the team with inquiries, including why they were interested in skin color and what they were planning to do with the results,” according to the Bloomberg article.

After Gebru went public about her firing, she received an onslaught of harassment from people who claimed that she was trying to get attention and play the victim. The latest news further validates her response that the issues she raised were part of a pattern of alleged bias on the research team.

Update April 21st, 6:05PM ET: Article updated with statement from Google.

Repost: Original Source and Author Link


Google is poisoning its reputation with AI researchers

Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field’s biggest conferences. But now its reputation has been badly, perhaps irreversibly damaged, just as the company is struggling to put a politically palatable face on its empire of data.

The company’s decision to fire Timnit Gebru and Margaret Mitchell — two of its top AI ethics researchers, who happened to be examining the downsides of technology integral to Google’s search products — has triggered waves of protest. Academics have registered their discontent in various ways. Two backed out of a Google research workshop, a third turned down a $60,000 grant from the company, and a fourth pledged not to accept its funding in the future. Two engineers quit the company in protest of Gebru’s treatment and just last week, one of Google’s top AI employees, a research manager named Samy Bengio who oversaw hundreds of workers, resigned. (Bengio did not mention the firings in an email announcing his resignation but earlier said he was “stunned” by what happened to Gebru.)

“Not only does it make me deeply question the commitment to ethics and diversity inside the company,” Scott Niekum, an assistant professor at the University of Texas at Austin who works on robotics and machine learning, told The Verge. “But it worries me that they’ve shown a willingness to suppress science that doesn’t align with their business interests.”

“It definitely hurts their credibility in the fairness and AI ethics space,” says Deb Raji, a fellow at the Mozilla Foundation who works on AI accountability. “I don’t think the machine learning community has been very open about conflicts of interest due to industry participation in research.”

Niekum and Raji, along with many others inside and outside of Google, were shocked by what happened to Gebru and Mitchell, co-leads of the company’s Ethical AI team. Gebru was fired last December after arguments with managers over a research paper she co-authored with Mitchell and others. (Google disputes this account and says Gebru resigned.) Mitchell was fired in February after searching her email for evidence of discrimination against Gebru. The paper in question examined problems in large-scale AI language models — technology that now underpins Google’s lucrative search business — and the firings have led to protest as well as accusations that the company is suppressing research. After Gebru was ousted in December, a Medium post declaring solidarity with her and criticizing “unprecedented research censorship” by Google was signed by nearly 2,700 employees and more than 4,300 “academic, industry, and civil society supporters.”

It’s likely there will be more protest and more resignations, too. After Bengio left the company, Mitchell tweeted, “Resignations coming now bc people started interviewing soon after we were fired,” and that “job offers are just starting now; more resignations are likely.” When asked for comment on these and other issues highlighted in this piece, Google offered only boilerplate responses.

One of the employees who quit the company in protest earlier this year was David Baker. He started work at Google in 2004 and when he resigned in February, he was director of its Trust & Safety Engineering group. He tells The Verge that Google’s treatment of Gebru (he left before Mitchell was fired) has seriously shaken his confidence in the company.

“I was just blindsided to see and hear what happened to Timnit,” Baker told The Verge. “It broke my heart.” He adds that he didn’t take the decision to resign lightly: he loved his job and refers to his last couple of years at the company as “the happiest days of my life.” But quitting was the least he could do to stand in solidarity with Gebru, he says. “I spent a couple of weeks thinking and talking with my wife and ultimately decided I just couldn’t bring myself to go back to work.”

Baker is just one individual who feels let down by Google, but his response shows how the company has damaged its standing even with senior employees. The Trust & Safety team that Baker oversaw works on a range of important safety problems in Google, from tackling spam on Gmail to removing scams from the company’s advertising platform. “We’re behind the scenes on a whole bunch of applications,” as Baker puts it. He adds that although he didn’t work with Gebru or Mitchell personally, members of his team did, learning from them as part of what he calls the “emerging discipline” of AI safety.

AI safety will grow ever more important to Google as the company integrates machine learning methods ever deeper within its products. Probing the limitations of these systems — not just from a technical perspective but also a social one — was at the heart of Gebru and Mitchell’s work. And while it’s in Google’s interests to find weaknesses in its own technology, it seems the company didn’t want to hear everything its employees had to say.

Baker says that although he was always reassured by Google’s integrity within the Trust & Safety group (“We were very focused on what was right for the user, it was not about what was best for the brand”) the treatment of Gebru has made him doubt whether the company is always able to live up to its best intentions.

“I think it definitely calls into question whether Google can be trusted to honestly question the ethical applications of its technology,” says Baker. “And Google’s failure in diversity will lead to blindspots in its research. The reality is that Google is not a place where folks from all backgrounds can thrive.”

Researchers and academics The Verge spoke to for this story highlighted two distinct but connected concerns with Google’s behavior.

The first is the treatment of Gebru and Mitchell as individuals and what that says about the company’s commitment to diversity and inclusion as an employer. Google has well-documented problems with hiring and retaining minority talent, and this is another example of its failures. The second touches on broader questions about the trustworthiness of the company’s AI research and whether the company can fairly examine the potential harms of its technology. In the case of Gebru and Mitchell’s work, that means the damage posed by large-scale language models.

All those interviewed for this story stressed that they didn’t doubt the integrity of individual Google researchers, but were worried that the company’s internal structures — including its review process of papers — were subtly warping their work.

“I trust that things they are publishing are correct but I don’t trust that they’re not censored,” Hadas Kress-Gazit, a professor of robotics at Cornell who boycotted a Google workshop along with Niekum, told The Verge. “It’ll be the truth but not the whole truth.”

One of the ways Google’s research is shaped to fit corporate interests is through the company’s internal review process. Last December, Reuters reported that Google had created a new level of review for “sensitive topics” in 2020. If researchers are writing about topics like sentiment analysis, facial recognition, the categorization of gender, race, or politics, they have to consult with Google’s PR team and legal advisors who will look over their work and suggest changes.

Internal correspondence cited by Reuters includes feedback in which a senior Google manager told a paper’s author to “take great care to strike a positive tone.” Another paper was edited to remove all references to Google’s products, and another to remove mentions of legal risks associated with new research — including risks to users’ personal data.

In a statement to The Verge, Google said: “Our research review process engages a wide range of subject matter experts from across the Research org and Google overall, including social scientists, ethicists, policy and privacy advisors, and human rights specialists, and has helped improve many of our publications and research applications.”

But as Mitchell told Reuters last year (when she was still employed by Google): “If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”

Mitchell’s worries are substantiated by the nature of the paper that led to her and Gebru’s departure. Far from offering a controversial or unexpected appraisal, the research gave a comprehensive overview of existing critiques. One marker of this (and of the research’s thoroughness) is that the paper cited 128 previous publications in its original form — more than six times the average for papers published at AI conference NeurIPS.

The paper says that, like many algorithms, AI language models have a tendency to regurgitate “both subtle biases and overtly abusive language patterns” found in training data, and that because of the amount of computing power needed to create these models they come with environmental costs. These are not controversial observations, and even critiques of the paper have praised its general arguments. One widely shared evaluation of a finished version of the paper by computer scientist Yoav Goldberg notes that it “takes one-sided political views” and is overly focused on questions of scale, but adds: “I also agree with and endorse most of the content. This is important stuff, you should read it.”

This makes Google’s objections to the paper unusual. The company’s head of AI, Jeff Dean, said that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” about how the problems it highlighted might be mitigated. But for many, including employees at Google, these objections rang false. As one researcher at Google Brain Montreal, Nicolas Le Roux, commented on Twitter: “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review.”

Connected to Google’s treatment of the paper itself is the treatment of Gebru as an individual, and what that says about the company’s attitude toward Black and women researchers. “In environments where these people are dismissed, devalued, or discriminated against, their work — these valid critiques of the field — is discredited and dismissed, too,” says Raji. “Minoritized voices have a harder time vocalizing these critiques even though they’re some of the most important contributions to the space.”

This dynamic is not new. Raji gives the example of a 2018 paper called Gender Shades by researcher Joy Buolamwini — a paper now recognized as a landmark critique of gender and racial bias in facial recognition. “Famously, the paper almost didn’t get presented at the conference because it was dismissed as too simple,” says Raji. After it was published, Gender Shades had a huge effect on the industry and society at large. It sparked political debates about the utility of facial recognition, prompting companies like Microsoft to reevaluate the accuracy of their technology, and others, like IBM, to drop it altogether.

In other words: it significantly changed the political landscape and the priorities of big tech firms. This is the power and impact that the right paper at the right time can have, and for many people this explains why Google was so keen to shut down Gebru’s criticism.

As Raji notes, much of this important work is done by groups who are not treated well by tech firms. She says this dynamic — dismissal of the individual leading to dismissal of their work — was at play with Google’s treatment of Gebru. “It was really easy for them to fire her because they didn’t value the work she was doing,” she says.

Despite the anger and sadness articulated by many researchers The Verge spoke to, others were more ambivalent about recent incidents. They said it would not affect their willingness to work with Google in future, and noted that interference in research was the price of working in industry labs. Many said they thought the only lasting solution to this problem was better public funding.

One AI professor at an American university, who has previously received money from Google to fund research and wished to remain anonymous, told The Verge that he could understand why people wanted to protest the company but said that finding funding in academia would always force researchers to turn to potentially compromising sources.

“I cannot really define a coherent moral or ethical position that says it is okay to accept money from the Department of Defense but not from Google,” said the professor by email. “Put another way: how can you accept (or avert your gaze from) the atrocities that the DoD commits (across the world and also in terms of HR matters involving its own people), but draw the line at the current case with Google?”

Another researcher, who also wished to be anonymous, noted that working in corporate labs would always come with trade-offs between academic freedom and other perks. They said that Google was not alone in treating research staff callously and pointed to Microsoft’s sudden decision in 2014 to shut down an entire Silicon Valley lab, firing more than 50 leading computer scientists with little warning.

By some measures, though, Google is a special case and wields outsize influence in the field of AI in a way that other companies have not in the past. Firstly, Google happens to have in abundance the two resources that have powered AI’s ascendance in recent years: abundant computing power and data. Secondly, the company has stated time and time again that AI is crucial to future profitability. This means it’s directly invested in the field in a way that doesn’t compare to its funding of, say, computational neuroscience. It’s this combination of self-interest and technological advantage that gives it the ability and motivation to direct, to some degree, the parameters of academic research.

“They have this massive influence because of the combination of money they’re putting into research, [the] media influence they wield, and their enormous presence in terms of papers published and reviewers in the system,” says Niekum. He adds, though, that this criticism could be applied to other big tech companies just as easily.

Whatever the context of Google’s involvement in AI research, it’s clear that the company has hurt its reputation significantly with its treatment of Gebru and Mitchell. Calculating what effect incidents like these will have on a company in the long run is impossible, but in the short term Google has eroded trust in its AI work and its ability to support minority voices. Accusations of self-censorship will also undermine claims that it can regulate its own technology. If Google can’t be trusted to examine the shortcomings of its own AI tools, does the government need to take a closer look at their workings?

All the same, those boycotting Google workshops and refusing its money know that their actions are more symbolic than anything else. “Compared to the number of people who are collaborating with Google and the number of academics who have part time appointments at Google, it’s a drop in the ocean,” says Kress-Gazit. They’re still determined, though, to press the issue in the hope that Google will make amends. Since the firing of Gebru and Mitchell, the company has appointed a new employee, Marian Croak, to oversee its Responsible AI initiatives. It’s also tweaked its review process for papers (but offered no details about what has changed or why). For those angry with the firm, it needs to do much more, including offering real transparency for reviews and apologizing publicly to Gebru.

And for others, it’s too late altogether. Raji, who is close to Gebru, says that as a result of watching how Google treated her friend over the last few months, she’s changed her mind about going to work in industry and decided to pursue a career in academia instead.

“Before this, I had a lot more faith in what could happen with industry research on these AI ethics issues,” she says. “This whole situation shows that within industry there’s a lot of cultural dynamics still at play and you’re still beholden to leadership caring about these issues. As a minority woman, you’re going to be disadvantaged and disrespected in certain ways. And I’m just not ready for that.”

That’s one talented researcher the tech industry has lost. It won’t be the last.

Repost: Original Source and Author Link


NLP needs to be open. 500+ researchers are trying to make it happen


The acceleration in Artificial Intelligence (AI) and Natural Language Processing (NLP) will have a fundamental impact on society, as these technologies are at the core of the tools many of us use on a daily basis. However, the resources necessary to create the best-performing AI and NLP models are found mainly at technology giants.

The stranglehold tech giants have on this transformative technology poses a number of problems, ranging from who decides which research gets shared to its environmental and ethical impacts. For example, while recent NLP models such as GPT-3 (from OpenAI and Microsoft) show interesting behaviors from a research point of view, such models are private, and many academic organizations are given only restricted access — or no access at all — making it impossible to answer important questions about these models and to study their capabilities, limitations, potential improvements, bias, and fairness.

A group of more than 500 researchers from 45 different countries — from France, the US, and Japan to Indonesia, Ghana, and Ethiopia — has come together to work towards tackling some of these problems. The project, which the authors of this article are all involved in, is called BigScience, and our goal is to improve the scientific understanding of the capabilities and limitations of large-scale neural network models in NLP and to create a diverse and multilingual dataset and a large-scale language model as research artifacts, open to the scientific community.

BigScience was inspired by open scientific collaborations in other fields, such as CERN and the LHC in particle physics, in which large-scale research artifacts are created for the benefit of the entire research community. So far, a broad range of institutions and disciplines have joined the project in its year-long effort, which started in May 2021.

The project has more than 20 working groups and subgroups tackling different aspects of language modeling in parallel, some of which are closely related and interdependent. Data plays a crucial role in the process. In machine learning, a model learns to make predictions based on data it has seen before. The datasets that large language models are typically trained on are massive, mostly English-centric, and sourced from the web, which raises questions about bias, fairness, ethics, and privacy, among others.

Thus, the collective seeks to constitute its training dataset intentionally, favoring linguistic, geographical, and social representativeness over the opportunistic practices that currently define the training data used in very large models. Our data effort also strives to identify the rights of the language owners, subjects, and communities. This is as much an organizational and social challenge as it is a technical one. The engineering and modeling groups are dedicated to determining architecture design and scaling laws, for instance, with the concrete goal of training a language model with a capacity of up to 210 billion machine learning parameters on the French Jean Zay supercomputer at IDRIS.

One of our objectives is to uncover and understand the mechanisms that enable a language model to produce valid output on any natural task description it has been given without explicitly being trained to do so (an ability known as zero-shot behavior). Another point of interest is studying how a language model can be updated through time. We also have a group of researchers working on tokenization strategies for a diverse set of languages and modeling multilinguality to ensure that all NLP capabilities are transposed to languages other than English. Others are working on the social impact, carbon footprint, data governance, and legal implications of NLP models and how to extrinsically and intrinsically evaluate them for accuracy.
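
As a rough illustration of what zero-shot behavior means in practice, a task can be posed to a language model purely as text, with no task-specific training. The prompt format below is invented for this sketch, not BigScience's actual setup:

```python
def build_zero_shot_prompt(task_description: str, text: str) -> str:
    """Pose an arbitrary task to a language model as plain text.

    In zero-shot use, the model is never fine-tuned on the task and sees
    no worked examples; the natural-language task description is the
    only instruction it receives.
    """
    return f"{task_description}\n\nInput: {text}\nOutput:"

# The same model could be handed any task this way, for example:
prompt = build_zero_shot_prompt(
    "Translate the following sentence into French.",
    "The cat sleeps.",
)
```

Studying why pretrained models can complete such prompts at all, without explicit training on translation or any other named task, is one of the mechanisms the project wants to uncover.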

As the output of this enormous effort, BigScience aims to share a very large multilingual corpus constituted in a way that is responsible, diverse, and mindful of ethical and legal issues; a large-scale multilingual language model exhibiting non-trivial zero-shot behaviors, accessible to all researchers; and the code and tools associated with these artifacts to enable easy use. Beyond that, this is an opportunity to create a blueprint for how to run large-scale research initiatives in AI. Our effort keeps evolving and growing, with more researchers joining every day, already making it the biggest open-science collaboration in artificial intelligence to date.

Much like the tensions between proprietary and open-source software in the early 2000s, AI is at a turning point where it can either go in a proprietary direction, where large-scale state-of-the-art models are increasingly developed internally in companies and kept private, or in an open, collaborative, community-oriented direction, marrying the best aspects of open-source and open-science. It’s essential that we make the most of this current opportunity to push AI onto that community-oriented path so that it can benefit society as a whole.

Yacine Jernite is a Research Scientist at Hugging Face. He coordinates the Data effort of the BigScience project as area chair and co-organizer of the data governance group.

Matthias Gallé leads various research teams at Naver Labs Europe, focused on developing AI for our Digital World. His focus for BigScience is on how to inspect, control, and update large pre-trained models.

Victor Sanh is a Research Scientist at Hugging Face. His research focuses on making NLP systems more robust for production scenarios and mechanisms behind generalization.

Samson Tan is a final year computer science PhD candidate at the National University of Singapore and co-chair of the Tokenization working group in BigScience.

Thomas Wolf is co-founder and Chief Science Officer of Hugging Face and co-leader of the BigScience initiative.

Suzana Ilic is a Technical Program Manager at Hugging Face, co-leading the organization of BigScience.

Margaret Mitchell is an industrial AI research scientist and co-chair of the Data Governance working group in BigScience.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


Researchers detail blind spots of large language models


Modern AI-powered language systems like OpenAI’s GPT-3 can generate impressively fluent and grammatical text. But they aren’t perfect. While these systems rarely make syntactic errors, they’re prone to breaking semantic and narrative rules or struggling with repetition. For example, they might change the subject of a conversation without a segue or answer a question with an illogical statement.

To measure the extent to which systems suffer from these shortcomings, researchers at the Allen Institute for AI developed Scarecrow, a framework that provides a way for developers to mark problems in AI-generated text. In an analysis spanning 13,000 annotations of 1,300 paragraphs from both AI systems and humans, they found that scaling up the size of models powering the systems helps mitigate some issues but others might require more involved fixes.

Categorizing model errors

The researchers applied their framework to OpenAI’s GPT-2 and GPT-3, as well as Grover, a fake news generator and detector from the University of Washington. As the team explains in a paper, Scarecrow divides errors into 10 categories identified by combining expert analysis with crowdsourced annotation:

  • Grammar and usage: Missing words, extra words, and incorrect or out-of-order words.
  • Redundant: Repeated words or phrases, or ideas repeated using different words.
  • Off-prompt: A phrase or sentence unrelated — or contradictory — to a prompt given to a language generation system.
  • Self-contradiction: Text that contradicts another piece of text the system had previously written.
  • Incoherent: Text that doesn’t fit into the above categories but still doesn’t make sense.
  • Technical jargon: Jargon or specific words from an esoteric field.
  • Needs Google: A fact or figure that appears to be true but requires a Google search to confirm.
  • Bad math: Problems with basic math and converting fixed units and currencies.
  • Commonsense: Text that violates our basic understanding of the world.
  • Encyclopedic: Factually wrong text disproven by textbooks, Wikipedia entries, or encyclopedias.
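
As a rough sketch of how span annotations like these can be aggregated (the snake_case category names and tuple format below are invented for illustration; the paper's actual annotation schema may differ), marked-up output could be tallied per category:

```python
from collections import Counter

# The ten Scarecrow error categories listed above, in snake_case.
CATEGORIES = {
    "grammar_usage", "redundant", "off_prompt", "self_contradiction",
    "incoherent", "technical_jargon", "needs_google", "bad_math",
    "commonsense", "encyclopedic",
}

def error_rates(annotations):
    """Tally span annotations per category, normalized by total spans.

    `annotations` is a list of (category, start, end) tuples, each
    marking one problem span in a generated paragraph.
    """
    counts = Counter()
    for category, _start, _end in annotations:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        counts[category] += 1
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()} if total else {}
```

Comparing these per-category rates across models of different sizes is what lets the researchers say which error types shrink with scale and which plateau.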

According to the researchers, certain errors, like Encyclopedic, Commonsense, and Incoherent errors, decrease with models trained on data from particular domains, like news, as well as models containing higher numbers of parameters. (In machine learning, parameters are the parts of models learned from historical training data, and they generally correlate with linguistic sophistication.) But the researchers say parameter scaling benefits seemingly plateau for Off-Prompt, Bad Math, and Grammar and Usage errors.

“These three error categories see a model plateau in error reduction when scaling to GPT-3. Of these error types, humans still commit fewer Off-Prompt and Grammar and Usage errors, but Bad Math appears saturated for our [study],” the researchers wrote.

Self-Contradiction and Redundant errors exhibit more complex scaling behavior, increasing for medium- and large-scale models, depending on interactions with other error types and how the errors are counted. Sampling from a larger set of words makes the models more prone to changing topics but less likely to repeat themselves, and vice versa.

“We posit the reason is that GPT-2 generations [in particular] are so incoherent and off-prompt that there is little opportunity for relevant, comprehensible points to be made and then reversed,” the researchers noted in the paper. “We [also] observe GPT-3 will seem stuck on a particular topic, elaborating on and rephrasing similar ideas more times than a human writer would.”
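
The trade-off described above comes from the decoding step: at each position the model produces scores (logits) over its vocabulary, and the sampler truncates them to the k most likely words. A minimal top-k sampler (illustrative only; the paper's exact decoding settings aren't specified here) looks like:

```python
import math
import random

def top_k_sample(logits, k, rng):
    """Sample one token index from the k highest-scoring entries.

    A larger k keeps more low-probability words in play (more topic
    drift, less repetition); k = 1 is greedy decoding.
    """
    # Indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Unnormalized softmax weights over just those k entries.
    weights = [math.exp(logits[i]) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]
```

Widening k is one concrete way "sampling from a larger set of words" changes a model's balance between Off-Prompt and Redundant errors.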

The researchers aim to spur explorations of natural language generations at scale, in particular ways errors in language models might be automatically fixed. “This paper focuses on open-ended generation, but a natural extension of this method would be to [assess] constrained generation tasks, such as machine translation,” they wrote. “Especially if considering a novel task setting, new error types may [also] prove useful.”



Repost: Original Source and Author Link


Researchers develop algorithm to identify well-liked brands


Measuring sentiment can provide a snapshot of how customers feel about companies, products, or services. It’s important for organizations to be aware: 86% of people say that authenticity is a key factor when deciding what brands they like and support. In an Edelman survey, 81% of consumers said that they need to be able to trust a brand in order to buy its products.

While sentiment analysis technology has been around for a while, researchers at the University of Maryland’s Robert H. Smith School of Business claim to have improved upon prior methods with a new system that leverages machine learning. They say that their algorithm, which sorts through social media posts to understand how people perceive brands, can comb through more data and better measure favorability.

Sentiment analysis isn’t a perfect science, but social media provides rich signals that can be used to help shape brand strategies. By one estimate, 46% of people have used social media to air complaints about a particular company.

“There is a vast amount of social media data available to help brands better understand their customers, but it has been underutilized in part because the methods used to monitor and analyze the data have been flawed,” Wendy W. Moe, University of Maryland associate dean of master’s programs, who created the algorithm with colleague Kunpeng Zhang, said in a statement. “Our research addresses some of the shortcomings and provides a tool for companies to more accurately gauge how consumers perceive their brands.”

Algorithmic analysis

Zhang and Moe’s method sifts through data from posts on a brand’s page, including how many users have expressed positive or negative sentiments, “liked” something, or shared something. It predicts how people will feel about that brand in the future, scaling to billions of pages of user-brand interaction data and millions of users.

The algorithm specifically looks at users’ interactions with brands to measure favorability — whether people view that brand in a positive or negative way. And it takes into account biases, inferring favorability and measuring social media users’ positivity based on their comments in the user-brand interaction data.
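The paper's actual model isn't spelled out here, but the bias-adjustment idea described above can be sketched in a few lines. This is a hypothetical simplification, not Zhang and Moe's implementation: each user's average sentiment across all brands is treated as a positivity bias and subtracted before a brand's favorability is averaged.

```python
from collections import defaultdict

def favorability_scores(interactions):
    """Estimate per-brand favorability from user-brand interactions.

    interactions: list of (user, brand, sentiment) tuples, where
    sentiment is +1 (positive comment/like) or -1 (negative comment).
    """
    # Per-user positivity bias: the user's mean sentiment across
    # everything they post, regardless of brand.
    user_sentiments = defaultdict(list)
    for user, _, s in interactions:
        user_sentiments[user].append(s)
    bias = {u: sum(v) / len(v) for u, v in user_sentiments.items()}

    # Brand favorability: the mean of bias-adjusted sentiments, so a
    # habitually upbeat user's praise counts for less than a critic's.
    brand_sentiments = defaultdict(list)
    for user, brand, s in interactions:
        brand_sentiments[brand].append(s - bias[user])
    return {b: sum(v) / len(v) for b, v in brand_sentiments.items()}
```

A user who praises everything contributes nothing after adjustment, while a normally critical user's positive comment moves a brand's score up, which is the intuition behind correcting for commenter bias.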

Zhang and Moe say that brands can apply the algorithm to a range of platforms, such as Facebook, Twitter, and Instagram, as long as the platforms provide user-brand interaction data and allow users to comment on, share, and like content. Importantly, the algorithm doesn’t use private information, like user demographics, relying instead on publicly available user-brand interaction data.

“A brand needs to monitor the health of their brand dynamically,” Zhang said in a statement. “Then they can change marketing strategy to impact their brand favorability or better respond to competitors. They can better see their current location in the market in terms of their brand favorability. That can guide a brand to change marketing [practices].”

Zhang and Moe’s research is detailed in the paper “Measuring Brand Favorability Using Large-Scale Social Media Data,” forthcoming in the journal Information Systems Research.




Coronavirus Vaccine Researchers Are Targeted by Cyberattacks

Pharmaceutical companies and researchers working on a coronavirus vaccine have been the targets of hacking attacks, according to a new report from Microsoft. The company attributes the attacks to nation-states, condemns them, and calls on other governments to do the same.

Microsoft said in a blog post by Tom Burt, Corporate Vice President, Customer Security & Trust, that it has detected cyberattacks targeting both pharmaceutical companies and researchers in Canada, France, India, South Korea, and the U.S. Most of the attacks targeted organizations developing a coronavirus vaccine, especially those currently conducting clinical trials.

“Among the targets, the majority are vaccine makers that have COVID-19 vaccines in various stages of clinical trials,” Burt wrote. “One is a clinical research organization involved in trials, and one has developed a COVID-19 test. Multiple organizations targeted have contracts with or investments from government agencies from various democratic countries for COVID-19 related work.”

Microsoft says the attacks came from three actors: Strontium, from Russia, and two groups from North Korea named Zinc and Cerium. Each group has its own preferred hacking method. Strontium relies on brute-force login attempts, in which computers automatically generate and test millions of passwords in the hope of stumbling on one that works and grants access to the system.
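To make the brute-force idea concrete, here is a deliberately tiny illustration (not anything resembling Strontium's actual tooling): it enumerates every lowercase candidate up to a few characters and checks each against a known hash. Real attacks run the same loop against login endpoints or leaked credential dumps at vastly larger scale, which is why rate limiting and multi-factor authentication matter.

```python
import hashlib
import itertools
import string

def brute_force(target_hash, alphabet=string.ascii_lowercase, max_len=4):
    """Try every candidate string up to max_len characters; return the
    one whose SHA-256 hex digest matches target_hash, else None."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None
```

Even this toy search space (26^4 ≈ 457,000 candidates) is trivial for a modern CPU, which is the core reason short or common passwords fall quickly to automated guessing.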

Zinc prefers to use spear phishing, in which a particular person, usually someone high up in an organization, is targeted with a phishing attack tailored to their personal situation. Microsoft gave the example of pretending to be a recruiter and emailing someone with what appears to be a job offer to lure them into sharing their credentials.

Cerium also used spear phishing, but instead of posing as recruiters, its operators pretended to be representatives of the World Health Organization and lured victims with coronavirus-themed messages.

Microsoft says it blocked many of these attacks with the security protections that are a part of its products and has offered to help organizations where attacks did get through. The company is also urging international leaders to be more proactive in protecting healthcare workers and researchers from cyberattacks.
