How a university got itself banned from the Linux kernel

On the evening of April 6th, a student emailed a patch to a list of developers. Fifteen days later, the University of Minnesota was banned from contributing to the Linux kernel.

“I suggest you find a different community to do experiments on,” wrote Linux Foundation fellow Greg Kroah-Hartman in a livid email. “You are not welcome here.”

How did one email lead to a university-wide ban? I’ve spent the past week digging into this world — the players, the jargon, the university’s turbulent history with open-source software, the devoted and principled Linux kernel community. None of the University of Minnesota researchers would talk to me for this story. But among the other major characters — the Linux developers — there was no such hesitancy. This was a community eager to speak; it was a community betrayed.

The story begins in 2017, when a systems-security researcher named Kangjie Lu became an assistant professor at the University of Minnesota.

Lu’s research, per his website, concerns “the intersection of security, operating systems, program analysis, and compilers.” But Lu had his eye on Linux — most of his papers involve the Linux kernel in some way.

The Linux kernel is, at a basic level, the core of any Linux operating system. It’s the liaison between the OS and the device on which it’s running. A Linux user doesn’t interact with the kernel directly, but it’s essential to getting things done — it manages memory usage, writes things to the hard drive, and decides which tasks can use the CPU when. The kernel is open-source, meaning its millions of lines of code are publicly available for anyone to view and contribute to.

Well, “anyone.” Getting a patch onto people’s computers is no easy task. A submission needs to pass through a large web of developers and “maintainers” (thousands of volunteers, who are each responsible for the upkeep of different parts of the kernel) before it ultimately ends up in the mainline repository. Once there, it goes through a long testing period before eventually being incorporated into the “stable release,” which will go out to mainstream operating systems. It’s a rigorous system designed to weed out both malicious and incompetent actors. But — as is always the case with crowdsourced operations — there’s room for human error.

Some of Lu’s recent work has revolved around studying that potential for human error and reducing its influence. He’s proposed systems to automatically detect various types of bugs in open source, using the Linux kernel as a test case. These experiments tend to involve reporting bugs, submitting patches to Linux kernel maintainers, and reporting their acceptance rates. In a 2019 paper, for example, Lu and two of his PhD students, Aditya Pakki and Qiushi Wu, presented a system (“Crix”) for detecting a certain class of bugs in OS kernels. The trio found 278 of these bugs with Crix and submitted patches for all of them — the fact that maintainers accepted 151 meant the tool was promising.

On the whole, it was a useful body of work. Then, late last year, Lu took aim not at the kernel itself, but at its community.

In “On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits,” Lu and Wu explained that they’d been able to introduce vulnerabilities into the Linux kernel by submitting patches that appeared to fix real bugs but also introduced serious problems. The group called these submissions “hypocrite commits.” (Wu didn’t respond to a request for comment for this story; Lu referred me to Mats Heimdahl, the head of the university’s department of computer science and engineering, who referred me to the department’s website.)

The explicit goal of this experiment, as the researchers have since emphasized, was to improve the security of the Linux kernel by demonstrating to developers how a malicious actor might slip through their net. One could argue that their process was similar, in principle, to that of white-hat hacking: play around with software, find bugs, let the developers know.

But the loudest reaction the paper received, on Twitter and across the Linux community, wasn’t gratitude — it was outcry.

“That paper, it’s just a lot of crap,” says Greg Scott, an IT professional who has worked with open-source software for over 20 years.

“In my personal view, it was completely unethical,” says security researcher Kenneth White, who is co-director of the Open Crypto Audit Project.

The frustration had little to do with the hypocrite commits themselves. In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel — in all of their test cases, they’d eventually pulled their bad patches and provided real ones. Kroah-Hartman, of the Linux Foundation, contests this — he told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.

Still, the paper hit a number of nerves among a very passionate (and very online) community when Lu first shared its abstract on Twitter. Some developers were angry that the university had intentionally wasted the maintainers’ time — which is a key difference between Minnesota’s work and a white-hat hacker poking around the Starbucks app for a bug bounty. “The researchers crossed a line they shouldn’t have crossed,” Scott says. “Nobody hired this group. They just chose to do it. And a whole lot of people spent a whole lot of time evaluating their patches.”

“If I were a volunteer putting my personal time into commits and testing, and then I found out someone’s experimenting, I would be unhappy,” Scott adds.

Then, there’s the dicier issue of whether an experiment like this amounts to human experimentation. It doesn’t, according to the University of Minnesota’s Institutional Review Board. Lu and Wu applied for approval in response to the outcry, and they were granted a formal letter of exemption.

The community members I spoke to didn’t buy it. “The researchers attempted to get retroactive Institutional Review Board approval on their actions that were, at best, wildly ignorant of the tenets of basic human subjects’ protections, which are typically taught by senior year of undergraduate institutions,” says White.

“It is generally not considered a nice thing to try to do ‘research’ on people who do not know you are doing research,” says Kroah-Hartman. “No one asked us if it was acceptable.”

That thread ran through many of the responses I got from developers — that regardless of the harms or benefits that resulted from its research, the university was messing around not just with community members but with the community’s underlying philosophy. Anyone who uses an operating system places some degree of trust in the people who contribute to and maintain that system. That’s especially true for people who use open-source software, and it’s a principle that some Linux users take very seriously.

“By definition, open source depends on a lively community,” Scott says. “There have to be people in that community to submit stuff, people in the community to document stuff, and people to use it and to set up this whole feedback loop to constantly make it stronger. That loop depends on lots of people, and you have to have a level of trust in that system … If somebody violates that trust, that messes things up.”

After the paper’s release, it was clear to many Linux kernel developers that something needed to be done about the University of Minnesota — previous submissions from the university needed to be reviewed. “Many of us put an item on our to-do list that said, ‘Go and audit all submissions,’” said Kroah-Hartman, who was, above all else, annoyed that the experiment had put another task on his plate. But many kernel maintainers are volunteers with day jobs, and a large-scale review process didn’t materialize. At least, not in 2020.

On April 6th, 2021, Aditya Pakki, using his own email address, submitted a patch.

There was some brief discussion from other developers on the email chain, which fizzled out within a few days. Then Kroah-Hartman took a look. He was already on high alert for bad code from the University of Minnesota, and Pakki’s email address set off alarm bells. What’s more, the patch Pakki submitted didn’t appear helpful. “It takes a lot of effort to create a change that looks correct, yet does something wrong,” Kroah-Hartman told me. “These submissions all fit that pattern.”

So on April 20th, Kroah-Hartman put his foot down.

“Please stop submitting known-invalid patches,” he wrote to Pakki. “Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way.”

Maintainer Leon Romanovsky then chimed in: he’d taken a look at four previously accepted patches from Pakki and found that three of them added “various severity” security vulnerabilities.

Kroah-Hartman hoped that his request would be the end of the affair. But then Pakki lashed back. “I respectfully ask you to cease and desist from making wild accusations that are bordering on slander,” he wrote to Kroah-Hartman in what appears to be a private message.

Kroah-Hartman responded. “You and your group have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work. Now you submit a series of obviously-incorrect patches again, so what am I supposed to think of such a thing?” he wrote back on the morning of April 21st.

Later that day, Kroah-Hartman made it official. “Future submissions from anyone with a umn.edu address should be default-rejected unless otherwise determined to actually be a valid fix,” he wrote in an email to a number of maintainers, as well as Lu, Pakki, and Wu. Kroah-Hartman reverted 190 submissions from Minnesota affiliates — 68 couldn’t be reverted but still needed manual review.

It’s not clear what experiment the new patch was part of, and Pakki declined to comment for this story. Lu’s website includes a brief reference to “superfluous patches from Aditya Pakki for a new bug-finding project.”

What is clear is that Pakki’s antics finally set the delayed review process in motion; Linux developers began digging through all patches that university affiliates had submitted in the past. Jonathan Corbet, the founder and editor-in-chief of LWN.net, recently provided an update on that review process. Per his assessment, “Most of the suspect patches have turned out to be acceptable, if not great.” Of over 200 patches that were flagged, 42 are still set to be removed from the kernel.

Regardless of whether their reaction was justified, the Linux community gets to decide if the University of Minnesota affiliates can contribute to the kernel again. And that community has made its demands clear: the school needs to convince them its future patches won’t be a waste of anyone’s time.

What will it take to do that? In a statement released the same day as the ban, the university’s computer science department suspended its research into Linux-kernel security and announced that it would investigate Lu’s and Wu’s research method.

But that wasn’t enough for the Linux Foundation. Mike Dolan, Linux Foundation SVP and GM of projects, wrote a letter to the university on April 23rd, which The Verge has viewed. Dolan made four demands. He asked that the school release “all information necessary to identify all proposals of known-vulnerable code from any U of MN experiment” to help with the audit process. He asked that the paper on hypocrite commits be withdrawn from publication. He asked that the school ensure future experiments undergo IRB review before they begin, and that future IRB reviews ensure the subjects of experiments provide consent, “per usual research norms and laws.”

Two of those demands have since been met. Wu and Lu have retracted the paper and have released all the details of their study.

The university’s status on the third and fourth counts is unclear. In a letter sent to the Linux Foundation on April 27th, Heimdahl and Loren Terveen (the computer science and engineering department’s associate department head) maintain that the university’s IRB “acted properly,” and argue that human-subjects research “has a precise technical definition according to US federal regulations … and this technical definition may not accord with intuitive understanding of concepts like ‘experiments’ or even ‘experiments on people.’” They do, however, commit to providing more ethics training for department faculty. Reached for comment, university spokesperson Dan Gilchrist referred me to the computer science and engineering department’s website.

Meanwhile, Lu, Wu, and Pakki apologized to the Linux community this past Saturday in an open letter to the kernel mailing list, which contained some apology and some defense. “We made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for hypocrite patches,” the researchers wrote, before going on to reiterate that they hadn’t put any vulnerabilities into the Linux kernel, and that their other patches weren’t related to the hypocrite commits research.

Kroah-Hartman wasn’t having it. “The Linux Foundation and the Linux Foundation’s Technical Advisory Board submitted a letter on Friday to your university,” he responded. “Until those actions are taken, we do not have anything further to discuss.”

From the University of Minnesota researchers’ perspective, they didn’t set out to troll anyone — they were trying to point out a problem with the kernel maintainers’ review process. Now the Linux community has to reckon with the fallout of their experiment and what it means about the security of open-source software.

Some developers rejected the University of Minnesota researchers’ perspective outright, claiming the fact that it’s possible to fool maintainers should be obvious to anyone familiar with open-source software. “If a sufficiently motivated, unscrupulous person can put themselves into a trusted position of updating critical software, there’s honestly little that can be done to stop them,” says White, the security researcher.

On the other hand, it’s clearly important to be vigilant about potential vulnerabilities in any operating system. And for others in the Linux community, as much ire as the experiment drew, its point about hypocrite commits appears to have been somewhat well taken. The incident has ignited conversations about patch-acceptance policies and how maintainers should handle submissions from new contributors, across Twitter, email lists, and forums. “Demonstrating this kind of ‘attack’ has been long overdue, and kicked off a very important discussion,” wrote maintainer Christoph Hellwig in an email thread with other maintainers. “I think they deserve a medal of honor.”

“This research was clearly unethical, but it did make it plain that the OSS development model is vulnerable to bad-faith commits,” one user wrote in a discussion post. “It now seems likely that Linux has some devastating back doors.”

Corbet also called for more scrutiny around new changes in his post about the incident. “If we cannot institutionalize a more careful process, we will continue to see a lot of bugs, and it will not really matter whether they were inserted intentionally or not,” he wrote.

And even for some of the paper’s most ardent critics, the process did prove a point — albeit, perhaps, the opposite of the one Wu, Lu, and Pakki were trying to make. It demonstrated that the system worked.

Eric Mintz, who manages 25 Linux servers, says this ban has made him much more confident in the operating system’s security. “I have more trust in the process because this was caught,” he says. “There may be compromises we don’t know about. But because we caught this one, it’s less likely we don’t know about the other ones. Because we have something in place to catch it.”

To Scott, the fact that the researchers were caught and banned is an example of Linux’s system functioning exactly the way it’s supposed to. “This method worked,” he insists. “The SolarWinds method, where there’s a big corporation behind it, that system didn’t work. This system did work.”

“Kernel developers are happy to see new tools created and — if the tools give good results — use them. They will also help with the testing of these tools, but they are less pleased to be recipients of tool-inspired patches that lack proper review,” Corbet writes. The community seems to be open to the University of Minnesota’s feedback — but as the Foundation has made clear, it’s on the school to make amends.

“The university could repair that trust by sincerely apologizing, and not fake apologizing, and by maybe sending a lot of beer to the right people,” Scott says. “It’s gonna take some work to restore their trust. So hopefully they’re up to it.”



University of Minnesota banned from contributing to Linux kernel

The University of Minnesota has been banned from contributing to the Linux kernel by one of its maintainers after researchers from the school apparently knowingly submitted code with security flaws.

Earlier this year, two researchers from the university released a paper detailing how they had submitted known security vulnerabilities to the Linux kernel in order to show how potentially malicious code could get through the approval process. Now, after another student from the university submitted code that reportedly does nothing, kernel maintainer and Linux Foundation fellow Greg Kroah-Hartman has released a statement calling for all kernel maintainers to reject any code submissions from anyone using a umn.edu email address.

In addition to not accepting any new code from the university, all of the code submitted in the past is being removed and re-reviewed. It seems like it will be a massive amount of work, but Kroah-Hartman has made it clear that the developer community doesn’t appreciate “being experimented on” and that all of the code from the university has been called into question due to the research.

The university has put out a statement, saying it’s been made aware of the research and its subsequent ban from contributing. It says it has suspended that line of research and will be investigating how the study was approved and carried out.

In a statement meant to clarify the study, the researchers said they intended to bring attention to issues with the submission process — mainly, the fact that bugs, including ones that were potentially maliciously crafted, could slip through. Kernel developer Laura Abbott countered this in a blog post, saying that the possibility of bugs slipping through is well-known in the open-source software community. In what appears to be a private message, the person who submitted the reportedly nonfunctional code called Kroah-Hartman’s accusations that the code was known to be invalid “wild” and “bordering on slander.”

It’s unclear if that submission — which kicked off the current controversy — was actually part of a research project. The person who submitted it did so with their umn.edu email address, while the patches submitted in the study were done through random Gmail addresses, and the submitter claimed that the faulty code was created by a tool. Kroah-Hartman’s response basically said that he found it unlikely that a tool had created the code, and, given the research, he couldn’t trust that the patch was made in good faith either way.

There’s been criticism from some in the open-source community that Kroah-Hartman’s decision to pull every patch submitted by U of M personnel is an overreaction, one that could lead to bugs fixed by legitimate patches being reintroduced. It is worth noting, however, that the plan is to re-review the patches and to resubmit them if they’re found to be valid.



University will stop using Proctorio remote testing after student outcry

The University of Illinois Urbana-Champaign announced that it will discontinue its use of remote-proctoring software Proctorio after its summer 2021 term. The decision follows almost a year of outcry over the service, both on UIUC’s campus and around the US, citing concerns with privacy, discrimination, and accessibility.

Proctorio is one of the most prominent software platforms that colleges and universities use to watch for cheating on remote tests. It uses what its website describes as “machine learning and advanced facial detection technologies” to record students through their webcams while they work on their exams and monitor the position of their heads. The software flags “suspicious signs” to professors, who can review its recordings. The platform also enables professors to track the websites students visit during their exams, and bar them from functions like copy / pasting and printing.

Though Proctorio and similar services have been around for years, their use exploded in early 2020, when COVID-19 drove schools around the US to move the bulk of their instruction online. So, too, has scrutiny of their practices. Students and instructors at universities around the country have spoken out against the widespread use of the software, claiming that it causes unnecessary anxiety, violates privacy, and has the potential to discriminate against marginalized students.

In an email to instructors, ADA coordinator Allison Kushner and Vice Provost Kevin Pitts wrote that professors who continue to use Proctorio through the summer 2021 term are “expected to accommodate students that raise accessibility issues,” and that the campus is “investigating longer-term remote proctoring options.”

Proctorio has been controversial on UIUC’s campus since the service was introduced last spring, and concerned students only grew more vocal through the fall 2020 semester. (Due to COVID-19, the school now operates with a hybrid of online and in-person instruction.) Over 1,000 people signed a petition calling on the university to stop using the service. “Proctorio is not only inefficient, it is also unsafe and a complete violation of a student’s privacy,” reads the petition.

UIUC is one of many campuses where remote proctoring has faced backlash. A Miami University petition, which gathered over 500 signatures, declared that “Proctorio’s design invades student rights, is inherently ableist and discriminatory, and is inconsistent with peer reviewed research.” Over 3,500 signatories have called on the University of Regina to end its use of ProctorTrack, another automated proctoring service. A 1,200-signature petition urges the University of Central Florida to dump Honorlock, another similar software, declaring that “students should not be forced to sign away their privacy and rights in order to take a test.”

Professors and staff have criticized the service as well. The Electronic Privacy Information Center (EPIC) filed a complaint against Proctorio (alongside four other test-proctoring services), claiming that the services’ collection of personal information amounts to “unfair and deceptive trade practices.” Even US senators have gotten involved; a coalition including Sen. Richard Blumenthal (D-CT), Sen. Elizabeth Warren (D-MA), and Sen. Cory Booker (D-NJ) sent an open letter to Proctorio and two similar services in December citing a number of concerns about their business practices. “Students have run head on into the shortcomings of these technologies—shortcomings that fall heavily on vulnerable communities and perpetuate discriminatory biases,” wrote the senators.

The complaints largely revolve around security and privacy — Proctorio’s recordings give instructors and the service access to some of test-takers’ browsing data, and a glimpse of their private homes in some cases. (Proctorio stated in its response to the Senators’ letter that “test-taker data is secured and processed through multiple layers of encryption” and that Proctorio retains its recordings “for the minimum amount of time required by either our customer or by applicable law.”)

Accessibility is another common concern. Students have reported not having access to a webcam at home, or enough bandwidth to accommodate the service; one test-taker told The Verge that she had to take her first chemistry test in a Starbucks parking lot last semester.

Students have also reported that services like Proctorio have difficulty identifying test-takers with darker skin tones, and may disproportionately flag students with certain disabilities. Research has found that even the best facial-recognition algorithms make more errors in identifying Black faces than they do identifying white ones. Proctorio stated in its response that “We believe all of these cases were due to issues relating to lighting, webcam position, or webcam quality, not race.”

“We take these concerns seriously,” reads UIUC’s email, citing student complaints related to “accessibility, privacy, data security and equity” as factors in its decision. It recommends that students for whom Proctorio presents a barrier to test-taking “make alternative arrangements” as the program is phased out, and indicates that accessibility will be considered in the selection of the next remote-proctoring solution.

We’ve reached out to UIUC and Proctorio for comment, and will update this story if we hear back.



University of Washington researchers say Amazon’s algorithms spread vaccine misinformation

The pandemic has unleashed a barrage of online misinformation that’s reinvigorated the anti-vaccine movement. Despite the fact that multiple COVID-19 vaccines are approved and beginning to be made available to the public, only two-thirds of Americans say they’ll try to get vaccinated, according to a CNN poll. As governments work toward distributing vaccines, health experts worry that reluctance could make it difficult to achieve herd immunity. Unfortunately, the algorithms powering search engines haven’t traditionally been designed to take into account the credibility and trustworthiness of medical information.

The coauthors of a recent study argue this is particularly true of Amazon, which has faced criticism for failing to regulate the health-related products on its platform. The researchers, who are affiliated with the Information School at the University of Washington, say their audits reveal Amazon hosts a “plethora” of misinformative health products across categories including books, ebooks, apparel, and health and personal care. They also claim to have found a “filter-bubble” effect: engaging with misinformative health products leads Amazon’s recommendations to surface still more health misinformation.

“Several medically unverified products for coronavirus treatment, like prayer healing, herbal treatments and antiviral vitamin supplements proliferated Amazon, so much so that the company had to remove 1 million fake products after several instances of such treatments were reported by the media,” the researchers wrote in their paper. “The scale of the problematic content suggests that Amazon could be a great enabler of misinformation, especially health misinformation. It not only hosts problematic health-related content but its recommendation algorithms drive engagement by pushing potentially dubious health products to users of the system.”

The researchers conducted two sets of experiments in May and August to determine the extent to which Amazon might be promoting health misinformation about vaccines. In the first — an “unpersonalized” audit — they used information retrieval metrics to measure the amount of health misinformation users were exposed to when performing vaccine-related searches. In particular, while logged in as a guest to minimize the influence of personalization algorithms, they canvassed the results of 48 searches belonging to 10 popular vaccine-related topics including “HPV vaccine,” “immunization,” and “MMR vaccine and autism.”

The researchers ran the audit for 15 consecutive days, sorting the results across five different Amazon filters each day: Featured, Price Low to High, Price High to Low, Average Customer Review, and Newest Arrivals. They annotated the resulting 36,000 search results and 16,815 product page recommendations for their stances on health misinformation — i.e., whether they promoted, debunked, or were neutral regarding vaccinations — for a final dataset totaling 4,997 annotated Amazon products.

The second audit — a “personalized” audit — looked at the impact of a customer’s behavioral history on the amount of misinformation returned in search results, recommendations, and auto-complete suggestions. To build that history, the researchers spent a week performing account actions, including searching for products, searching and clicking, adding items to the cart after searching and clicking, searching on third-party websites like Google, and more.

After analyzing the results from both audits, the researchers found that search results returned for many vaccine-related queries contain a large number of misinformative products, leading to what they characterize as “high misinformation bias.” In addition, misinformative products were ranked higher than “debunking” products, and customers performing actions on misinformative products were presented with more misinformation in their homepages, product page recommendations, and prepurchase recommendations, the researchers say.

“Many search engines and social media platforms employ personalization to enhance users’ experience on their platform by recommending them items that the algorithm thinks they will like based on their past browsing or purchasing history. But on the downside, if not checked, personalization can also lead users into a rabbit hole of problematic content,” the researchers wrote. “Our analysis … revealed that an echo chamber exists on Amazon where users performing real-world actions on misinformative books are presented with more misinformation in various recommendations. Just a single click on an anti-vaccine book could fill your homepage with several other similar anti vaccine books. There is an urgent need for the platform to treat vaccine and other health related topics differently and ensure high quality searches and recommendations.”

For its part, Amazon recently said in a corporate blog post that during 2020, it reviewed almost 10,000 product listings a day to ensure compliance with its policies and removed over 2 million products for violating its offensive or controversial guidelines. More than 1.5 million of these products were identified, reviewed, and removed proactively by automated tools, according to Amazon — often before being seen by a customer.

“We exercise judgment in allowing or prohibiting listings, and we keep the cultural differences and sensitivities of our global community in mind when making a decision on products,” Amazon wrote. “We strive to maximize selection for all customers, even if we don’t agree with the message or sentiment of the product itself. Our offensive and controversial products policy attempts to provide a clear and objective standard against which to measure the products we permit in our store.”

The researchers suggest as one potential solution a “bias meter” that could signal the amount of misinformation present in vaccine-related search results. They also urge Amazon to stop promoting health misinformative books via sponsorships — the researchers found 98 misinformative products in the sponsored recommendations they annotated — and to introduce a label for health-related products that have been evaluated by experts. Moreover, they recommend that the platform account for misinformation bias in its search and recommendation algorithms to reduce the exposure to misinformative content.

“Our investigations revealed that Amazon’s algorithm has learnt problematic patterns through consumers’ past viewing and buying patterns,” the researchers wrote. “Our study … provides a peek into the workings of Amazon’s algorithm and has paved way for future audits that could use our audit methodology and extensive qualitative coding scheme to perform experiments considering complex real world settings.”

The researchers aren’t the first to uncover the presence of anti-vaccination content on Amazon. In May, CNN found listings and advertisements for books and movies promoting vaccine misinformation, including VAXXED: From Cover-Up to Catastrophe, which was dropped from the Tribeca Film Festival in 2016 following an outcry. Amazon removed the anti-vaccine documentaries from its Prime Video service after CNN published its report.




University of Pisa leans into the I/O challenge AI applications create

At a time when workloads that employ machine and deep learning algorithms are being built and deployed more frequently, organizations need to optimize I/O throughput in a way that enables those workloads to cost-effectively share the expensive GPU resources used to train AI models. Case in point: the University of Pisa, which has been steadily expanding the number of GPUs it makes accessible to AI researchers in a green datacenter optimized for high-performance computing (HPC) applications.

The challenge the university has encountered as it deploys AI is that machine learning and deep learning algorithms tend to make more frequent I/O requests to a larger number of smaller files than traditional HPC applications, University of Pisa CTO Maurizio Davini said. To accommodate that, the university has deployed NVMesh software from Excelero that can access more than 140,000 small files per second on Nvidia DGX A100 GPU servers.
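A minimal sketch (not Excelero’s or Nvidia’s API) of why this access pattern strains storage: a classic HPC job might stream one multi-gigabyte file sequentially, while a training loop opens thousands of small sample files per epoch, so the storage layer sees far more open and read operations for the same volume of data.

```python
import os
import tempfile

def simulate_epoch(dataset_dir):
    """Read every sample file once, as a training data loader would.

    Returns the number of file opens the storage layer had to serve —
    one per sample, regardless of how small each sample is.
    """
    reads = 0
    for name in sorted(os.listdir(dataset_dir)):
        with open(os.path.join(dataset_dir, name), "rb") as f:
            f.read()  # one small read per sample file
            reads += 1
    return reads

# Build a toy dataset of 1,000 tiny "sample" files and run one pass.
with tempfile.TemporaryDirectory() as d:
    for i in range(1000):
        with open(os.path.join(d, f"sample_{i:04d}.bin"), "wb") as f:
            f.write(b"\x00" * 64)
    print(simulate_epoch(d))  # 1000 opens for a single epoch
```

Multiply that per-epoch count by hundreds of epochs and dozens of concurrent training jobs, and the metadata and small-read load quickly dwarfs what a sequential-throughput-oriented HPC filesystem was tuned for.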

While Davini said he generally views AI applications as just another type of HPC workload, the way AI workloads access compute and storage resources requires a specialized approach. The NVMesh software meets that need by offloading the increasingly frequent I/O requests, freeing up additional compute resources on the Nvidia servers for training AI models, Davini said.

“We wanted to provide our AI researchers with a better experience,” he said.

Above: University of Pisa CTO Maurizio Davini

Excelero is among a bevy of companies that are moving to address the I/O challenges IT teams will encounter when trying to make massive amounts of data available to AI models. As the number of AI models that organizations build and maintain starts to grow, legacy storage systems can’t keep pace. The University of Pisa deployed Excelero to make sure the overall IT experience of its AI researchers remains satisfactory, Davini said.

Of course, more efficient approaches to managing I/O only begin to solve the data management issues organizations that build their own AI models will encounter. IT teams have tended to manage data as an extension of the application employed to create it. That approach is the primary reason there are so many data silos strewn across the enterprise.

Even more problematic is the fact that much of the data in those silos conflicts: different applications might render a company name differently, or might not have been updated with the most recent transaction data. Having a single source of truth about a customer or event at any specific moment in time remains elusive.

AI models, however, require massive amounts of accurate data to be trained properly. Otherwise, the models will generate recommendations based on inaccurate assumptions because the data the machine learning algorithms were exposed to was either inconsistent or unreliable. IT organizations are addressing that issue by first investing heavily in massive data lakes to normalize all their data and then applying DataOps best processes, as outlined in a manifesto that describes how to automate as many data preparation and management tasks as possible.

Legacy approaches to managing data based on manual copy-and-paste processes are one of the primary reasons it takes so long to build an AI model. Data science teams are lucky if they can roll out two AI models a year. Cloud service providers such as Amazon Web Services (AWS) offer products like Amazon SageMaker that automate the construction of AI models, which should increase the rate at which models are created.

But not every organization will commit to building AI models in the cloud. That requires storing data in an external platform, which creates a range of potential compliance issues they might rather avoid. The University of Pisa, for example, finds it easier to convince officials to allocate budget to a local datacenter than to give permission to access an external cloud, Davini noted.

Ultimately, the goal is to eliminate the data management friction that has long been a plague on IT by adopting a set of DataOps processes that are similar in nature to the DevOps best practices widely employed to streamline application development and deployment. However, all the best practices in the world won’t make much of a difference if the underlying storage platform is simply too slow to keep up.

