Center for Applied Data Ethics suggests treating AI like a bureaucracy

A recent paper from the Center for Applied Data Ethics (CADE) at the University of San Francisco urges AI practitioners to adopt terms from anthropology when reviewing the performance of large machine learning models. The research suggests using this terminology to interrogate and analyze bureaucracy, states, and power structures in order to critically assess systems with the potential to harm people.

“This paper centers power as one of the factors designers need to identify and struggle with, alongside the ongoing conversations about biases in data and code, to understand why algorithmic systems tend to become inaccurate, absurd, harmful, and oppressive. This paper frames the massive algorithmic systems that harm marginalized groups as functionally similar to massive, sprawling administrative states that James Scott describes in Seeing Like a State,” the author wrote.

The paper was authored by CADE fellow Ali Alkhatib, with guidance from director Rachel Thomas and CADE fellows Nana Young and Razvan Amironesei.

The researchers particularly look to the work of James Scott, who has examined hubris in administrative planning and sociotechnical systems. In Europe in the 1800s, for example, timber industry companies began using abridged maps and a field called “scientific forestry” to carry out monoculture planting in grids. While the practice resulted in higher initial yields in some cases, productivity dropped sharply in the second generation, underlining the validity of scientific principles favoring diversity. Like those abridged maps, Alkhatib argues, algorithms can both summarize and transform the world and are an expression of the difference between people’s lived experiences and what bureaucracies see or fail to see.

The paper, titled “To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcomes,” was recently accepted for publication at the ACM Conference on Human Factors in Computing Systems (CHI), which will be held in May.

Recalling Scott’s analysis of states, Alkhatib warns against harms that can result from unchecked AI, including the administrative and computational reordering of society, a weakened civil society, and the rise of an authoritarian state. Alkhatib notes that such algorithms can misread and punish marginalized groups whose experiences do not fit within the confines of the data used to train a model.

People privileged enough to be considered the default by data scientists and who are not directly impacted by algorithmic bias and other harms may see the underrepresentation of race or gender as inconsequential. Data Feminism authors Catherine D’Ignazio and Lauren Klein describe this as “privilege hazard.” As Alkhatib put it, “other people have to recognize that race, gender, their experience of disability, or other dimensions of their lives inextricably affect how they experience the world.”

He also cautions against uncritically accepting AI’s promise of a better world.

“AIs cause so much harm because they exhort us to live in their utopia,” the paper reads. “Framing AI as creating and imposing its own utopia against which people are judged is deliberately suggestive. The intention is to square us as designers and participants in systems against the reality that the world that computer scientists have captured in data is one that surveils, scrutinizes, and excludes the very groups that it most badly misreads. It squares us against the fact that the people we subject these systems to repeatedly endure abuse, harassment, and real violence precisely because they fall outside the paradigmatic model that the state — and now the algorithm — has constructed to describe the world.”

At the same time, Alkhatib warns people not to see AI-driven power shifts as inevitable.

“We can and must more carefully reckon with the parts we play in empowering algorithmic systems to create their own models of the world, in allowing those systems to run roughshod over the people they harm, and in excluding and limiting interrogation of the systems that we participate in building.”

Potential solutions the paper offers include undermining oppressive technologies and following the guidance of Stanford AI Lab researcher Pratyusha Kalluri, who advises asking whether AI shifts power, rather than whether it meets a chosen numeric definition of fair or good. Alkhatib also stresses the importance of individual resistance and refusal to participate in unjust systems to deny them power.

Other recently proposed solutions include a culture change in computer vision and NLP, a reduction in scale, and investments to reduce dependence on large datasets, which make it virtually impossible to know what data is being used to train deep learning models. Failing to take these steps, researchers argue, will leave the creation of massive AI models, such as OpenAI’s GPT-3 and the trillion-parameter language model Google introduced earlier this month, to a small group of elite companies.

The paper’s cross-disciplinary approach is also in line with a diverse body of work AI researchers have produced within the past year. Last month, researchers released the first details of OcéanIA, which treats a scientific project for identifying phytoplankton species as a challenge for machine learning, oceanography, and science. Other researchers have advised a multidisciplinary approach to advancing the fields of deep reinforcement learning and NLP bias assessment.

We’ve also seen analysis of AI that teams sociology and critical race theory, as well as anticolonial AI, which calls for recognizing the historical context associated with colonialism in order to understand which practices to avoid when building AI systems. And VentureBeat has written extensively about the fact that AI ethics is all about power.

Last year, a cohort of well-known members of the algorithmic bias research community created an internal algorithm-auditing framework to close AI accountability gaps within organizations. That work asks organizations to draw lessons from the aerospace, finance, and medical device industries. Coauthors of the paper include Margaret Mitchell and Timnit Gebru, who used to lead the Google AI ethics team together. Since then, Google has fired Gebru and, according to a Google spokesperson, opened an investigation into Mitchell.

With control of the presidency and both houses of Congress in the U.S., Democrats could address a range of tech policy issues in the coming years, from laws regulating the use of facial recognition by businesses, governments, and law enforcement to antitrust actions to rein in Big Tech. However, a 50-50 Senate means Democrats may be forced to consider bipartisan or moderate positions in order to pass legislation.

The Biden administration emphasized support for diversity and distaste for algorithmic bias in a televised ceremony introducing the science and technology team on January 16. Vice President Kamala Harris has also spoken passionately against algorithmic bias and automated discrimination. In the first hours of his administration, President Biden signed an executive order to advance racial equity that instructs the White House Office of Science and Technology Policy (OSTP) to participate in a newly formed working group tasked with disaggregating government data. This initiative is based in part on concerns that an inability to analyze such data impedes efforts to advance equity.


Facebook and NYU use artificial intelligence to make MRI scans four times faster

If you’ve ever had an MRI scan before, you’ll know how unsettling the experience can be. You’re placed in a claustrophobia-inducing tube and asked to stay completely still for up to an hour while unseen hardware whirs, creaks, and thumps around you like a medical poltergeist. New research, though, suggests AI can help with this predicament by making MRI scans four times faster, getting patients in and out of the tube quicker.

The work is a collaborative project called fastMRI between Facebook’s AI research team (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, using this model to “predict” what a final MRI scan looks like from just a quarter of the usual input data. That means scans can be done faster, with less hassle for patients and quicker diagnoses.

“It’s a major stepping stone to incorporating AI into medical imaging,” Nafissa Yakubova, a visiting biomedical AI researcher at FAIR who worked on the project, tells The Verge.

The reason artificial intelligence can be used to produce the same scans from less data is that the neural network has essentially learned an abstract idea of what a medical scan looks like by examining the training data. It then uses this to make a prediction about the final output. Think of it like an architect who’s designed lots of banks over the years. They have an abstract idea of what a bank looks like, and so they can create a final blueprint faster.

“The neural net knows about the overall structure of the medical image,” Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. “In some ways what we’re doing is filling in what is unique about this particular patient’s [scan] based on the data.”

The AI software can be incorporated into existing MRI scanners with minimal hassle, say researchers.
Image: FAIR / NYU

The fastMRI team has been working on this problem for years, but today, they are publishing a clinical study in the American Journal of Roentgenology, which they say proves the trustworthiness of their method. The study asked radiologists to make diagnoses based on both traditional MRI scans and AI-enhanced scans of patients’ knees. The study reports that when faced with both traditional and AI scans, doctors made the exact same assessments.

“The key word here on which trust can be based is interchangeability,” says Sodickson. “We’re not looking at some quantitative metric based on image quality. We’re saying that radiologists make the same diagnoses. They find the same problems. They miss nothing.”

This concept is extremely important. Although machine learning models are frequently used to create high-resolution data from low-resolution input, this process can often introduce errors. For example, AI can be used to upscale low-resolution imagery from old video games, but humans have to check the output to make sure it matches the input. And the idea of AI “imagining” an incorrect MRI scan is obviously worrying.

The fastMRI team, though, says this isn’t an issue with their method. For a start, the input data used to create the AI scans completely covers the target area of the body. The machine learning model isn’t guessing what a final scan looks like from just a few puzzle pieces. It has all the pieces it needs, just at a lower resolution. Secondly, the scientists created a check system for the neural network based on the physics of MRI scans. That means at regular intervals during the creation of a scan, the AI system checks that its output data matches what is physically possible for an MRI machine to produce.
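
To make that physics check concrete, here is a minimal sketch of the general idea behind k-space data consistency, not FAIR’s actual implementation: wherever the scanner really measured a sample, the reconstruction is forced to agree with that measurement, and the network only fills in the gaps. The function names and toy data below are assumptions for illustration.

```python
import numpy as np

def enforce_data_consistency(predicted_image, measured_kspace, sample_mask):
    """Overwrite the network's k-space estimates with the actually measured
    samples, so the output cannot contradict what the scanner recorded."""
    predicted_kspace = np.fft.fft2(predicted_image)
    consistent_kspace = np.where(sample_mask, measured_kspace, predicted_kspace)
    return np.abs(np.fft.ifft2(consistent_kspace))

# Toy example: a 64x64 "scan" sampled at a quarter of the k-space lines.
rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))
full_kspace = np.fft.fft2(ground_truth)
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True                      # keep every fourth k-space line only
measured = full_kspace * mask
network_guess = rng.random((64, 64))     # stand-in for the model's output
checked = enforce_data_consistency(network_guess, measured, mask)
```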

A traditional MRI scan created from normal input data, known as k-space data.
GIF: FAIR / NYU

An AI-enhanced MRI scan created from a quarter of normal input data.
GIF: FAIR / NYU

“We don’t just allow the network to create any arbitrary image,” says Sodickson. “We require that any image generated through the process must have been physically realizable as an MRI image. We’re limiting the search space, in a way, making sure that everything is consistent with MRI physics.”

Yakubova says it was this particular insight, which only came about after long discussions between the radiologists and the AI engineers, that enabled the project’s success. “Complementary expertise is key to creating solutions like this,” she says.

The next step, though, is getting the technology into hospitals where it can actually help patients. The fastMRI team is confident this can happen fairly quickly, perhaps in just a matter of years. The training data and model they’ve created are completely open access and can be incorporated into existing MRI scanners without new hardware. And Sodickson says the researchers are already in talks with the companies that produce these scanners.

Karin Shmueli, who heads the MRI research team at University College London and was not involved with this research, told The Verge this would be a key step to move forward.

“The bottleneck in taking something from research into the clinic, is often adoption and implementation by manufacturers,” says Shmueli. She added that work like fastMRI was part of a wider trend incorporating artificial intelligence into medical imaging that was extremely promising. “AI is definitely going to be more in use in the future,” she says.


Why Microsoft’s self-driving car strategy will work

Self-driving car startup Cruise has just gotten a $2 billion infusion from Microsoft, according to a joint statement Tuesday from Cruise, its owner General Motors, and Microsoft. The investment will bring Cruise’s valuation to $30 billion and make Microsoft an official partner.

Per Tuesday’s announcement: “To unlock the potential of cloud computing for self-driving vehicles, Cruise will leverage Azure, Microsoft’s cloud and edge computing platform, to commercialize its unique autonomous vehicle solutions at scale. Microsoft, as Cruise’s preferred cloud provider, will also tap into Cruise’s deep industry expertise to enhance its customer-driven product innovation and serve transportation companies across the globe through continued investment in Azure.”

So Cruise gets much-needed funds for research, (possibly discounted) access to Microsoft’s cloud computing resources, and a step closer to its goal of launching a purpose-built self-driving car.

But in the long run, Microsoft stands to gain more from the deal. Not only will it get two very lucrative customers for its cloud business (Azure will also become GM’s preferred cloud provider, per the announcement), but when seen in the broader context of Microsoft’s self-driving car strategy, “Cruise’s deep industry expertise” will possibly give Microsoft a solid foothold in the future of the still-volatile self-driving car industry.

At a time when most major tech companies are interested in acquiring self-driving car startups or launching their own initiatives, Microsoft’s hands-off approach could eventually turn it into an industry leader.

Self-driving cars from the AI business perspective

Self-driving cars can be viewed as a specialized case within AI-driven businesses. Every company running on AI algorithms — namely machine learning — must bring together a few key pieces to have a viable business model:

  • Algorithms: The company must either use existing machine learning algorithms or research new architectures that suit the problem.
  • Data: The company must have a sound infrastructure that consolidates disparate data sources. It must also have ways to collect and store fresh data from customers to continue to maintain and tune its models and keep an edge over competitors.
  • Compute resources: The company will need access to large compute clusters and specialized hardware to train and update its machine learning models and provide cloud-based inference at scale.
  • Talent: The company needs data scientists, data engineers, and machine learning engineers to develop and maintain AI models and research new techniques.

Microsoft already has a solid AI stack and a full range of products that fit into this category. For instance, the company’s computer vision service runs on machine learning models developed by the company’s engineers. The models were trained using the company’s vast store of image data. As customers use the AI service, they generate more data and labels to further enhance the machine learning models. Finally, Microsoft’s Azure cloud has specialized hardware to both train the models and deliver them at scale and in a cost-efficient manner.

Many companies use Microsoft’s Cognitive Services APIs to integrate AI capabilities into their applications.
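
As a rough illustration of what that integration looks like, below is a minimal sketch of calling the Azure Computer Vision “analyze” REST endpoint from Python. The resource URL and key are placeholders, and the API version shown is an assumption; check the current documentation for your region and tier.

```python
import requests

# Placeholders: substitute your own Cognitive Services resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-key>"

def describe_image(image_url):
    """Ask the Computer Vision service to tag and caption an image at a URL."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description,Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

# Example usage: print the service's one-line caption for a photo.
# result = describe_image("https://example.com/photo.jpg")
# print(result["description"]["captions"][0]["text"])
```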

Microsoft can also engage in any kind of venture that builds on this AI stack, such as launching its own end-to-end computer vision applications or hosting advanced natural language processing platforms such as OpenAI’s GPT-3.

When it comes to self-driving cars, however, a few new components are added to the mix:

  • Autonomous driving hardware: The company must develop lidars, sensors, cameras, and other hardware that enable self-driving features.
  • Vehicle: The company must either manufacture its own vehicle or find a manufacturing partner to integrate the self-driving car gear.

Self-driving cars introduce manufacturing and legal issues that are challenging for companies focused on their software business. There are a few ways companies can overcome these challenges.

How Microsoft’s self-driving car strategy stacks up

The traditional ways to enter an emerging market are to build the technology or buy it from someone else.

In the late 2000s, Google created its own self-driving car lab, which it later spun out as Waymo, to develop AI software and hardware for autonomous driving. Google does not manufacture its own cars and instead relies on vehicles from other carmakers, such as Toyota, Audi, Fiat Chrysler, and Lexus, to test and deploy its technology.

But Google had a head start, which allowed it to create its own self-driving car unit from scratch. Companies that entered the field later made up for their tardiness by acquiring self-driving car startups. Examples include Amazon’s acquisition of Zoox and Intel’s acquisition of Mobileye.

Tesla is among the few to have a complete self-driving car stack. The carmaker has autonomous driving technology integrated into its electric vehicles. It also has more than a million cars on the road that are constantly collecting fresh data to further enhance its algorithms. Apple also has plans to manufacture its own self-driving car, though the full details have yet to emerge.

Microsoft’s approach to the self-driving car industry is different.

“We partner across the industry. We are not in the business of making vehicles or delivering end mobility-as-a-service offerings,” Sanjay Ravi, GM of Automotive Industry at Microsoft, wrote in a 2019 blog post that laid out Microsoft’s automotive strategy.

Instead of acquiring startups and test-driving cars in cities, Microsoft has a program to support self-driving car startups by providing them with engineering support and discounted access to cloud services. These startups can become potential Microsoft partners in the future. In October, Microsoft entered a partnership agreement with Wayve, a London-based developer of self-driving car software that was part of Microsoft’s startup program. Cruise is the second self-driving car company Microsoft is partnering with. Microsoft also has partnerships with several carmakers to provide them with cloud services.

Why Microsoft’s strategy can succeed

The problem with the self-driving car industry is that we still don’t know when full autonomy will arrive. Every year, we miss new deadlines for having fully autonomous cars on roads. But as with the quest for artificial general intelligence, we know that we have a bumpy and potentially long road ahead.

We also don’t know what the final technology will look like. Tesla CEO Elon Musk believes computer vision alone will be enough to reach full autonomy. Other companies are banking on lidar technology becoming more affordable and stable in the future. Car design will also undergo changes as the industry matures.

Another issue is the regulation of self-driving cars. Will self-driving cars be allowed to share the road with human drivers? Will they only be allowed in specific, geofenced areas? How will culpability be determined in the case of accidents?

Every one of these areas can undergo fundamental changes, and those changes will be decisive in determining which startups flourish and which collapse in the coming years. Interestingly, the things most likely to remain constant are data, cloud, and software: the three areas where Microsoft already excels.

This is why Microsoft’s strategy of not acquiring startups will protect the company against the industry’s volatility.

For one thing, a partnership is a flexible format that is well suited to the quickly evolving self-driving car space. Entering a partnership is faster and more affordable than a full acquisition (compare the $2 billion partnership with a full $30 billion acquisition of Cruise, if one were even possible). At the same time, leaving a partnership is much easier than having to scrap and sell an entire self-driving unit.

Meanwhile, small investments allow Microsoft to cast a wider net and become engaged in a diverse range of solutions through its self-driving startup accelerator program and its partnership agreements. The startups included in Microsoft’s self-driving startup program already represent a diverse set of research areas and directions. Any of them could become a breakthrough solution in the future. While Microsoft supports these startups, it will also tap into their industry expertise and develop its own in-house talent and tools. This will be crucial if and when Microsoft considers making a more serious move to develop self-driving cars.

As the field matures and potential winners become more evident, Microsoft will be in a better position to upgrade its relationship with startups to full partnership and eventual acquisition.

And if Microsoft’s partnership with OpenAI is any indication, a lot of the investment Microsoft makes in startups is in Azure credits, which ensures those startups become locked into Microsoft’s cloud service instead of going with other cloud providers.

On a broader scale, Microsoft’s wide range of partnerships will enable it to become a growing hub to attract self-driving car startups. It will use the expertise and experience of these startups to enhance its cloud and AI services for autonomous driving, and in turn attract more customers.

Many analysts think Microsoft is lagging in the self-driving car space by not having an active program to test cars in cities. I think the company has made a smart move to solidify its position in areas that will remain constant (cloud, data, and algorithms) while developing a strategy that will allow it to adapt to the inevitable changes to the industry in the coming years.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. 

This story originally appeared on Bdtechtalks.com. Copyright 2021


TripActions raises $155 million to help enterprises analyze travel and expense data

TripActions, a booking and management platform that offers enterprises real-time data, automated reporting, and insights into business travel and expenses, has raised $155 million in a series E round of funding co-led by existing investor Andreessen Horowitz. The Palo Alto, California-based company is now valued at $5 billion, up from the $4 billion valuation at its series D round 18 months ago.

The raise and valuation come as small businesses and enterprises across the spectrum embrace new ways of functioning, with remote work taking center stage. Moreover, corporate travel has been decimated by the pandemic, raising questions about the viability of platforms like TripActions.

Founded in 2015, TripActions is an AI-enabled platform that gives companies of all sizes access to inventory spanning flights, accommodation, and car rentals, alongside 24/7 access to a global network of travel agents. But one of the big selling points for businesses is the growing amount of data it offers, with TripActions now serving as an end-to-end tool encompassing both travel and expense management.

Through the TripActions spend management dashboard, companies can filter and view travel and expense data by date range, geography, category, and more.

“Having all of this data in one place makes it easier to run a report,” Michael Sindicich, GM of TripActions’ payments and expense product Liquid, told VentureBeat. “A good example of this is the spend management dashboard that launched a few months back. Since we have data from Liquid, we’re able to actively surface and monitor real-time spend to help program managers properly budget and monitor for outlying activity.”

Above: TripActions can now feed expense data directly into ERP systems

AI and machine learning underpin much of the TripActions platform. On the travel side, the tech is used to optimize flights and hotels based on a company or individual’s preferences, historical travel behavior, and more. And on the Liquid (payments and expense) side, TripActions leverages AI to automatically classify each expense into a category, detect items that aren’t permitted under a company’s policy, and align spending with corporate events in the calendar.

Enterprising

Over the past year, TripActions has introduced dozens of products and updates as it adapts to a new corporate climate. These include a new enterprise-focused offering and integrations with enterprise resource planning (ERP) tools such as NetSuite, Microsoft Dynamics, SAP, Xero, and more. These updates are designed to help finance teams dig deep into payment and expense data in real time, rather than waiting for employees to manually submit expenses, as would be the case with traditional systems.

TripActions also launched a COVID-19 dashboard that includes “business travel continuity tools” to deliver insights into metrics such as countries with the most active COVID-19 cases per 100,000 people or areas with quarantine restrictions for travelers.

Above: TripActions’ COVID-19 dashboard

Future of work

As with many businesses operating in the travel realm, 2020 was a tumultuous year for TripActions. As the world entered lockdown last March, the company laid off hundreds of staff before receiving a $125 million debt round of financing to weather the COVID-19 storm and expand deeper into the enterprise market. But however positive one’s outlook, it’s difficult to imagine corporate travel returning to pre-pandemic levels anytime soon, if ever.

However, TripActions’ $155 million raise, which takes its total equity financing to $665 million, and its lofty $5 billion valuation suggest investors are bullish about the company’s prospects.

“Corporate travel won’t ever be the same, but that doesn’t really affect the long-term plans of our company or our investors,” TripActions cofounder and CEO Ariel Cohen told VentureBeat. “Even if travel is different in the future — say, more teams travel for remote team meetings or a wider spectrum of employees travel fewer times per year — TripActions will still be winning that business.”

Moreover, TripActions’ shift last February into the broader corporate expense sphere is now looking more prescient than ever, as it gives the company inroads into corporate finance departments. Indeed, the company said it has seen growing demand from businesses looking for tools to help manage their expenditures.

“The spend management technologies that TripActions has launched in the last year really help expand our business into new markets that are parallel to corporate travel,” Cohen said. “The funding round is a long-term bet on expanding the Liquid portfolio and the long-term prospect of capturing outsized market share in the corporate travel segment.”

Looking further to the future, there is every chance corporate travel will resume some semblance of “business as usual” once the vaccines roll out more widely — Zoom fatigue is a growing phenomenon, after all. Zoom is in fact a TripActions client, alongside other notable names from the business world, including Okta, Box, Lyft, Pinterest, and Silicon Valley Bank.

“We strongly believe business travel will return — maybe not at 100%, but we believe 75% within the next year — and may even exceed pre-COVID levels as a function of a more distributed workforce,” Cohen said. “We know the desire is there. Our customers have told us that their travelers are eager to return once they feel safe doing so. Digital meeting fatigue is real, and we believe strongly in the in-person connection.”

In addition to Andreessen Horowitz, TripActions’ series E round was co-led by Addition and Elad Gil, with participation from Zeev Ventures, Lightspeed Venture Partners, and Greenoaks Capital.


VAST Data allies with Nvidia on reference storage architecture for AI

VAST Data and Nvidia today unveiled a reference architecture that promises to accelerate storage performance when servers based on graphical processor units (GPUs) access petabytes of data.

The reference architecture is intended to make it simpler to integrate VAST Data’s all-flash storage system, dubbed Lightspeed, with DGX A100 servers from Nvidia. That combination should yield more than 170GB/s of throughput for both GPU-intensive and storage-intensive AI workloads, the companies claim.

Network-attached storage (NAS) systems from VAST Data can be connected to Nvidia servers over NFS-over-RDMA, NFS Multipath, or Nvidia GPUDirect Storage interfaces. The systems can also be incorporated within a larger converged storage fabric that might be based on multiple storage protocols.

VAST Data makes use of proprietary erasure coding, containers, Intel Optane memory, data deduplication, compression, and an extension it developed to the Network File System (NFS) that organizations have used for decades to access distributed storage systems. That approach eliminates the need for IT teams to acquire and deploy a storage system based on a parallel file system to run AI workloads. NFS is both easier to deploy and a familiar file system for most existing storage administrators.

The company has thus far raised $180 million to fuel an effort to replace hard disk drives (HDDs) in environments that need to access large amounts of data in sub-millisecond times. The AI workloads running on Nvidia servers are typically trying to access a large number of small files residing within a storage environment that can easily reach petabytes in scale. Those servers become more efficient because responsibility for processing storage is offloaded to the Lightspeed platform.

It’s not clear how many organizations will be deploying servers to train AI models in on-premises IT environments. The bulk of AI models are trained in the cloud because many data scientist teams don’t want to invest in IT infrastructure.

However, there are still plenty of organizations that prefer to retain control of their data for compliance and security reasons. In addition, organizations that have invested in high-performance computing (HPC) systems are looking to run AI workloads that tend to have very different I/O requirements than legacy HPC applications. In fact, one of the reasons Nvidia acquired Mellanox for $6.8 billion last year was to gain control over the switches required to create storage fabrics spanning multiple servers.

IT organizations will be looking for storage systems capable of simultaneously supporting their existing applications and the AI workloads that are starting to be deployed more frequently in production environments, VAST Data cofounder and CMO Jeff Denworth said.

“Once you build an AI model, you then have to operationalize it,” Denworth said.

Competition among providers of storage systems optimized for AI workloads is heating up as the number of AI workloads being deployed steadily increases. Over the course of the next few years, just about every application will be augmented by AI models to one degree or another. The challenge now is determining how best to manage and store all the massive amounts of data those AI models need to consume.


How a designer used AI and Photoshop to bring ancient Roman emperors back to life

Machine learning is a fantastic tool for renovating old photos and videos. So much so that it can even bring ancient statues to life, transforming the chipped stone busts of long-dead Roman emperors into photorealistic faces you could imagine walking past on the street.

The portraits are the creation of designer Daniel Voshart, who describes the series as a quarantine project that got a bit out of hand. Voshart is primarily a VR specialist in the film industry, but his work projects were put on hold because of COVID-19, so he started exploring a hobby of his: colorizing old statues. Looking for suitable material to transform, he began working his way through the Roman emperors. He finished his initial depictions of the first 54 emperors in July, but this week, he released updated portraits and new posters for sale.

Voshart told The Verge that he’d originally made 300 posters in his first batch, hoping they’d sell in a year. Instead, they were gone in three weeks, and his work has spread far and wide since. “I knew Roman history was popular and there was a built-in audience,” says Voshart. “But it was still a bit of a surprise to see it get picked up in the way that it did.”

To create his portraits, Voshart uses a combination of different software and sources. The main tool is an online program named ArtBreeder, which uses a machine learning method known as a generative adversarial network (or GAN) to manipulate portraits and landscapes. If you browse the ArtBreeder site, you can see a range of faces in different styles, each of which can be adjusted using sliders like a video game character creation screen.
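
ArtBreeder’s sliders map, roughly, to movements in the GAN’s latent space. The snippet below is a hypothetical sketch of the core blending idea only, with random vectors standing in for a real generator and real encoded faces, since the site’s actual model and code aren’t public.

```python
import numpy as np

def blend_latents(latent_a, latent_b, weight):
    """Linearly interpolate two latent vectors: a slider at 0.0 returns the
    first face, 1.0 the second, and values in between produce a mix."""
    return (1.0 - weight) * latent_a + weight * latent_b

rng = np.random.default_rng(42)
emperor_bust = rng.standard_normal(512)    # stand-in latent for a statue photo
reference_face = rng.standard_normal(512)  # stand-in latent for a reference portrait
mostly_bust = blend_latents(emperor_bust, reference_face, 0.2)
# A real GAN generator would then decode `mostly_bust` back into an image.
```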

Voshart fed ArtBreeder images of emperors he collected from statues, coins, and paintings, and then tweaked the portraits manually based on historical descriptions, feeding them back to the GAN. “I would do work in Photoshop, load it into ArtBreeder, tweak it, bring it back into Photoshop, then rework it,” he says. “That resulted in the best photorealistic quality, and avoided falling down the path into the uncanny valley.”

Voshart says his aim wasn’t to simply copy the statues in flesh but to create portraits that looked convincing in their own right, each of which takes a day to design. “What I’m doing is an artistic interpretation of an artistic interpretation,” he says.

To help, he says he sometimes fed high-res images of celebrities into the GAN to heighten the realism. There’s a touch of Daniel Craig in his Augustus, for example, while to create the portrait of Maximinus Thrax he fed in images of the wrestler André the Giant. The reason for this, Voshart explains, is that Thrax is thought to have had a pituitary gland disorder in his youth, giving him a lantern jaw and mountainous frame. André the Giant (real name André René Roussimoff) was diagnosed with the same disorder, so Voshart wanted to borrow the wrestler’s features to thicken Thrax’s jaw and brow. The process, as he describes it, is almost alchemical, relying on a careful mix of inputs to create the finished product.

A print, now available to buy, of all of Voshart’s photorealistic Roman emperors.
Image by Daniel Voshart

Perhaps surprisingly, though, Voshart says he wasn’t really that interested in Roman history prior to starting this project. Digging into the lives of the emperors in order to create his portraits has changed his mind, however. He’d previously dismissed the idea of visiting Rome because he thought it was a “tourist trap,” but now says “there are specific museums I want to hit up.”

What’s more, his work is already enticing academics, who have praised the portraits for giving the emperors new depth and realism. Voshart says he chats with a group of history professors and PhDs who’ve given him guidance on certain figures. Selecting skin tone is one area where there’s lots of dispute, he says, particularly with emperors like Septimius Severus, who’s thought to have had Phoenician or perhaps Berber ancestors.

Voshart notes that, in the case of Severus, he’s the only Roman emperor for whom we have a surviving contemporary painting, the Severan Tondo, which he says influenced the darker skin tones he used in his depiction. “The painting is like, I mean it depends on who you ask, but I see a dark skinned North African person,” says Voshart. “I’m very much introducing my own sort of biases of faces I’ve known or have met. But that’s what I read into it.”

As a sort of thank you to his advisers, Voshart has even used a picture of one USC assistant professor who looks quite a bit like the emperor Numerian to create the ancient ruler’s portrait. And who knows, perhaps this rendition of Numerian will be one that survives down the years. It’ll be yet another artistic depiction for future historians to argue about.

You can read more about Voshart’s work here, as well as order prints of the emperors.




US announces $1 billion research push for AI and quantum computing

The US government is announcing $1 billion in new funding for multidisciplinary AI and quantum computing research hubs today, according to multiple reports. A total of 12 hubs will be funded, each embedded within different agencies of the federal government. Their work will span a diverse range of topics, from using machine learning for atmospheric and ocean science, to speeding up high-energy physics simulations with quantum systems.

The investment is part of a slow push from the White House to fund emerging technologies. Many policy advisors have worried that America is falling behind in AI and quantum research compared to rivals like China, and warn that these technologies are instrumental not only for economic development but also national security.

It’s extremely difficult to make a fair comparison of US and Chinese spending on technology like AI, as funding and research in this area are diffuse. Although China has announced ambitious plans to become the world leader in AI by 2030, America still outspends the country on military funding (which increasingly includes AI research), while US tech companies like Google and Microsoft remain world leaders in artificial intelligence.

The Trump administration will likely present today’s news as a counterbalance to its dismal reputation for supporting scientific research. For four years in a row, the administration’s budget proposals have called for broad cuts to federal research, including work on pressing subjects like climate change. Only the fields of artificial intelligence and quantum computing, with their overt links to military prowess and global geopolitics, have seen increased investment.

“It is absolutely imperative the United States continues to lead the world in AI and quantum,” said US Chief Technology Officer Michael Kratsios ahead of today’s announcement, according to The Wall Street Journal. “The future of American economic prosperity and national security will be shaped by how we invest, research, develop and deploy these cutting edge technologies today.”

Some $625 million of today’s funding will go to research involving quantum information sciences in five centers linked to the Department of Energy (DOE). A further $140 million will be invested in seven AI initiatives, two overseen by the US Department of Agriculture (USDA) and five by the National Science Foundation (NSF). Private tech companies, including IBM and Microsoft, are contributing $300 million in the form of “technology-services donations,” reports the WSJ, likely meaning access to cloud computing resources.

You can read a full list of the projects being funded in this report from VentureBeat.


Google’s AI flood warnings now cover all of India and have expanded to Bangladesh

Google says its flood prediction service, which uses machine learning to identify areas of land prone to flooding and alert users before the waters arrive, now covers all of India and has expanded to parts of Bangladesh as well.

The search giant launched the tool in 2018 for India’s Patna region and says it has been steadily expanding coverage in coordination with local governments. In June, it hit the milestone of covering all the worst flood-hit areas of India. The company says this means some 200 million people in India and 40 million people in Bangladesh can now receive alerts from its flood forecasting system.

In addition to expanding coverage, Google is testing more accurate forecasts and has updated how its alerts appear on users’ devices. The company says it’s now sent over 30 million notifications to users with Android devices.

Google’s new flood alerts offer information in some areas about the depth of the waters.
Image: Google

Google has long been interested in providing warnings about natural disasters and national emergencies like floods, wildfires, and earthquakes. Many of these are handled through its Public Alerts program. Just last month, the company launched a new service that turns Android devices into a network of seismometers, leveraging the accelerometers inside phones and tablets to detect the vibrations from earthquakes and send alerts to users.

In the case of flood forecasting, though, Google isn’t using information from customers’ devices. Instead, it draws on a mix of historical and contemporary data about rainfall, river levels, and flood simulations, using machine learning to create new forecast models.
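
As a purely illustrative sketch (not Google’s system, which the post says combines rainfall, river-level data, and flood simulations), forecasting a downstream gauge reading can be framed as a simple regression problem. The variable names and synthetic data below are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic training data: rainfall over 48h (mm), upstream gauge (m),
# current local gauge (m); target is the local river level 24 hours later.
rng = np.random.default_rng(1)
features = rng.random((500, 3)) * np.array([200.0, 8.0, 8.0])
target = 0.002 * features[:, 0] + 0.4 * features[:, 1] + 0.5 * features[:, 2]

model = GradientBoostingRegressor().fit(features, target)
forecast = model.predict([[120.0, 5.2, 4.8]])  # predicted level in meters
print(f"Forecast river level: {forecast[0]:.2f} m")
```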

Google says it’s experimenting with new models that can provide even more accurate alerts. Its latest forecast model can “double the lead time” of its previous system, says the company, while also providing people with information about the depth of the flooding. “In more than 90 percent of cases, our forecasts will provide the correct water level within a margin of error of 15 centimeters,” say Google’s researchers.

A study of Google’s forecasts in the Ganges-Brahmaputra river basin carried out with scientists from Yale found that 70 percent of people who received a flood alert did so before flood waters arrived, and 65 percent of households that received an alert took action. “Even in an area suffering from low literacy, limited education, and high poverty, a majority of citizens act on information they receive,” write the researchers. “So, early warnings are definitely worth the effort.”

They noted that problems with using smartphone alerts remain. The main issues are simply lack of access to smartphones and lack of trust in technological warnings. Survey respondents the researchers spoke to said they preferred to receive warnings from local leaders, and that sharing alerts via loudspeakers and phone calls was still desirable.

A photo from Yale researchers shows Google’s flood forecast service in use on the ground.
Image: Google

Google says it’s looking into these problems and has started a collaboration with the International Federation of Red Cross and Red Crescent Societies. It hopes to share its flood forecasts with these organizations, which can then disseminate the information through their own networks.


These students figured out their tests were graded by AI — and the easy way to cheat

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He’d completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He’d received a 50 out of 100. That wasn’t on a practice test — it was his real grade.

“He was like, I’m gonna have to get a 100 on all the rest of this to make up for this,” said Simmons in a phone interview with The Verge. “He was totally dejected.”

At first, Simmons tried to console her son. “I was like well, you know, some teachers grade really harshly at the beginning,” said Simmons, who is a history professor herself. Then, Lazare clarified that he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.

Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like… ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”

“I wanted to game it because I felt like it was an easy way to get a good grade,” Lazare told The Verge. He usually digs the keywords out of the article or video the question is based on.

Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Edgenuity didn’t respond to repeated requests for comment, but the company’s online help center suggests this may be by design. According to the website, answers to certain questions receive 0% if they include no keywords, and 100% if they include at least one. Other questions earn a certain percentage based on the number of keywords included.
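
Based on that help-center description, the grading logic could be as simple as the following sketch: a guess at the behavior students describe, not Edgenuity’s actual code.

```python
def grade_short_answer(answer, keywords, all_or_nothing=True):
    """Score an answer purely on keyword matches, as the help center describes."""
    text = answer.lower()
    hits = sum(1 for keyword in keywords if keyword.lower() in text)
    if all_or_nothing:
        return 100 if hits else 0             # any single keyword earns full credit
    return round(100 * hits / len(keywords))  # partial credit per keyword matched

# A keyword-salad answer like Lazare's sails through:
salad = "Constantinople controlled trade routes. Wealth, caravan, ship, India, China."
print(grade_short_answer(salad, ["trade", "strait", "Bosporus"]))  # prints 100
```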

As COVID-19 has driven schools around the US to move teaching to online or hybrid models, many are outsourcing some instruction and grading to virtual education platforms. Edgenuity offers over 300 online classes for middle and high school students ranging across subjects from math to social studies, AP classes to electives. They’re made up of instructional videos and virtual assignments as well as tests and exams. Edgenuity provides the lessons and grades the assignments. Lazare’s actual math and history classes are currently held via the platform — his district, the Los Angeles Unified School District, is entirely online due to the pandemic. (The district declined to comment for this story).

Of course, short-answer questions aren’t the only factor that impacts Edgenuity grades — Lazare’s classes require other formats, including multiple-choice questions and single-word inputs. A developer familiar with the platform estimated that short answers make up less than five percent of Edgenuity’s course content, and many of the eight students The Verge spoke to for this story confirmed that such tasks were a minority of their work. Still, the tactic has certainly impacted Lazare’s class performance — he’s now getting 100s on every assignment.

Lazare isn’t the only one gaming the system. More than 20,000 schools currently use the platform, according to the company’s website, including 20 of the country’s 25 largest school districts, and two students from different high schools than Lazare’s told me they found a similar way to cheat. They often copy the text of their questions and paste it into the answer field, assuming it’s likely to contain the relevant keywords. One told me they used the trick all throughout last semester and received full credit “pretty much every time.”

Another high school student, who used Edgenuity a few years ago, said he would sometimes try submitting batches of words related to the questions “only when I was completely clueless.” The method worked “more often than not.” (We granted anonymity to some students who admitted to cheating, so they wouldn’t get in trouble.)

One student, who told me he wouldn’t have passed his Algebra 2 class without the exploit, said he’s been able to find lists of the exact keywords or sample answers that his short-answer questions are looking for — he says you can find them online “nine times out of ten.” Rather than listing out the terms he finds, though, he tried to work three into each of his answers. (“Any good cheater doesn’t aim for a perfect score,” he explained.)

Austin Paradiso, who has graduated but used Edgenuity for a number of classes during high school, was also averse to word salads but did use the keyword approach a handful of times. It worked 100 percent of the time. “I always tried to make the answer at least semi-coherent because it seemed a bit cheap to just toss a bunch of keywords into the input field,” Paradiso said. “But if I was a bit lazier, I easily could have just written a random string of words pertinent to the question prompt and gotten 100 percent.”

Teachers do have the ability to review any content students submit, and can override Edgenuity’s assigned grades — the Algebra 2 student says he’s heard of some students getting caught keyword-mashing. But most of the students I spoke to, and Simmons, said they’ve never seen a teacher change a grade that Edgenuity assigned to them. “If the teachers were looking at the responses, they didn’t care,” one student said.

The transition to Edgenuity has been rickety for some schools — parents in Williamson County, Tennessee, are revolting against their district’s use of the platform, claiming countless technological hiccups have impacted their children’s grades. A district in Steamboat Springs, Colorado, had its enrollment period disrupted when Edgenuity was overwhelmed with students trying to register.

Simmons, for her part, is happy that Lazare has learned how to game an educational algorithm — it’s certainly a useful skill. But she also admits that his better grades don’t reflect a better understanding of his course material, and she worries that exploits like this could exacerbate inequalities between students. “He’s getting an A+ because his parents have graduate degrees and have an interest in tech,” she said. “Otherwise he would still be getting Fs. What does that tell you about… the digital divide in this online learning environment?”


