The data economy: How AI helps us understand and utilize our data 

This article is part of a Technology and Innovation Insights series paid for by Samsung. 


Much like an engine and its oil, data and artificial intelligence (AI) are symbiotic: data fuels AI, and AI helps us make sense of the data available to us. Data and AI have been two of the biggest topics in technology in recent years, and together they shape our lives on a daily basis. The sheer amount of data available right now is staggering, and it doubles roughly every two years. Yet we currently use only about 2 percent of the data available to us. Much as it took time to find uses for oil after it was first discovered, we are still figuring out what to do with all this new data and how to make it useful.

Whether it comes from the cloud, your phone, your TV, or an IoT device, this vast range of connected streams provides data on just about everything that happens in our daily lives. But what do we do with it?

Earlier this month, HARMAN’s Chairman Young Sohn sat down with international journalist Ali Aslan in Berlin, Germany, at the “New Data Economy and its Consequences” video symposium held by Global Bridges. Young and Ali discussed the importance of data, why AI without data is useless, and what needs to be considered when we look at the ethical use of data and AI, including bias, privacy, and security.

Bias

Unlike humans, technology and data are not inherently biased. As the old adage goes, data never lies. Bias in data and AI comes into play when humans train an AI algorithm or interpret data. Much of what we consume is shaped by where the data comes from and what data goes into the system. Understanding and eliminating our own biases is essential to ensuring a neutral algorithm and system.

Controlling data access and permissions is a key first step toward removing bias. Just as essential is having a diverse and inclusive team when developing algorithms and systems: not everyone has lived the same experiences or come from the same background, and diversity in both can help curb bias by bringing different ways of interpreting data inputs and outputs.

Privacy

Permission and access are paramount when we look at the privacy side of data. Privacy is extremely important in our increasingly digital society. As such, consumers should be given a choice at the start of a relationship with an organization and asked to opt in, rather than being forced to opt out. GDPR has been a good first step in protecting consumers with regard to the capture and use of their data. While GDPR contains many well-designed and important provisions, the legislation could be more efficient.

Security

Whereas data privacy is more of a concern to consumers and individuals, data security has become a global concern for consumers, organizations, and nation-states.

It seems like every day we read about another cyberattack or new threat we should be aware of. Chief among these concerns is the influx of ransomware attacks. Companies and individuals are paying increasingly large sums to bad actors in an attempt to limit risk, attention, and embarrassment. These attacks are carried out by individuals, collectives, and even nation-states seeking to cripple the systems of enemies, gather classified information, or reap financial gain.

So how do we trust that our data and information are safe, and what can we do to be better protected? While there may be bad actors using technology and data for nefarious ends, there are also many positive uses for technology. The education and investment pouring into the cybersecurity space have helped many organizations train employees and adopt technologies designed to prevent cybercrime at its most common source: human error. And while we may not be able to stop all cybercrime, we are making progress.

Data and AI for good

While data (from both a collection and a storage standpoint) and AI have drawn negative press around bias, privacy, and security, both can also be used to do an immense amount of good. For example, both data and AI have been crucial in the biomedical and agtech industries, whether for COVID-19 detection and vaccine creation or for the creation of biomes and the removal of toxins from soil. However, one cannot move forward without the other. A solid and stable infrastructure and network are also needed to ensure that we can make use of the other 98 percent of the global data available.


This AI system learned to understand videos by watching YouTube

Humans understand events in the world contextually, performing what’s called multimodal reasoning across time to make inferences about the past, present, and future. Given text and an image that seem innocuous when considered apart (e.g., “Look how many people love you” and a picture of a barren desert), people recognize that these elements take on potentially hurtful connotations when they’re paired or juxtaposed.

Even the best AI systems struggle in this area. But there’s been progress, most recently from a team at the Allen Institute for Artificial Intelligence and the University of Washington’s Paul G. Allen School of Computer Science & Engineering. In a preprint paper published this month, the researchers detail Multimodal Neural Script Knowledge Models (Merlot), a system that learns to match images in videos with words and even follow events globally over time by watching millions of YouTube videos with transcribed speech. It does all this in an unsupervised manner, meaning that the videos haven’t been labeled or categorized — forcing the system to learn from the videos’ inherent structures.

Learning from videos

Our capacity for commonsense reasoning is shaped by how we experience causes and effects. Teaching machines this type of “script knowledge” is a significant challenge, in part because of the amount of data it requires. For example, even a single photo of people dining at a restaurant can imply a wealth of information, like the fact that the people had to meet up, agree where to go, and enter the restaurant before sitting down.

Merlot attempts to internalize these concepts by watching YouTube videos. Lots of YouTube videos. Drawing on a dataset of 6 million videos, the researchers trained the model to match individual frames with a contextualized representation of the video transcripts, divided into segments. The dataset contained instructional videos, lifestyle vlogs of everyday events, and YouTube’s auto-suggested videos for popular topics like “science” and “home improvement,” each selected explicitly to encourage the model to learn about a broad range of objects, actions, and scenes.
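
To make the matching objective described above concrete, here is a minimal, hypothetical sketch of contrastive frame-to-transcript matching in PyTorch. The encoders, embedding dimension, temperature, and loss details are placeholders for illustration; Merlot’s actual architecture and training objectives are specified in the paper.

```python
# Illustrative sketch of a contrastive frame-transcript matching objective,
# in the spirit of Merlot's training setup. Dimensions and hyperparameters
# here are placeholders, not the paper's actual values.
import torch
import torch.nn.functional as F

def contrastive_matching_loss(frame_emb, text_emb, temperature=0.07):
    """frame_emb, text_emb: (batch, dim) embeddings of video frames and the
    transcript segments they co-occur with. Matched pairs share an index."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = frame_emb @ text_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))            # i-th frame matches i-th segment
    # Symmetric cross-entropy: each frame must pick out its own transcript
    # segment among the batch, and each segment its own frame.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for encoder outputs.
frames = torch.randn(8, 256)
segments = torch.randn(8, 256)
print(contrastive_matching_loss(frames, segments).item())
```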


The goal was to teach Merlot to contextualize the frame-level representations over time and over spoken words, so that it could reorder scrambled video frames and make sense of “noisy” transcripts, including those with erroneously lowercase text, missing punctuation, and filler words like “umm,” “hmm,” and “yeah.” The researchers largely accomplished this. They found that in a series of qualitative and quantitative tests, Merlot had a strong “out-of-the-box” understanding of everyday events and situations, enabling it to take a scrambled sequence of events from a video and order the frames to match the captions in a coherent narrative, like people riding a carousel.
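
As a toy illustration of the unscrambling task (a simplification, not Merlot’s actual method), one way to recover an ordering is to score every ordered pair of frames for “comes before” and sort by wins. The pairwise scorer below is an invented noisy oracle standing in for a trained model.

```python
# Toy sketch of frame unscrambling: a model scores each ordered pair of
# frames for "frame a comes before frame b", and we recover a sequence
# from those scores. The scoring function is a stand-in, not Merlot's.
import itertools
import random

def unscramble(frames, before_score):
    """Order frames by how strongly each is predicted to precede the rest.
    before_score(a, b) -> probability that a comes before b."""
    wins = {f: 0.0 for f in frames}
    for a, b in itertools.permutations(frames, 2):
        wins[a] += before_score(a, b)
    # A frame predicted to precede many others should come first.
    return sorted(frames, key=lambda f: wins[f], reverse=True)

# Demo with a noisy oracle standing in for a trained pairwise classifier.
true_order = ["board carousel", "ride starts", "horses spin", "ride ends"]
scrambled = random.sample(true_order, k=len(true_order))

def noisy_oracle(a, b):
    correct = true_order.index(a) < true_order.index(b)
    return 0.9 if correct else 0.1  # mostly confident, never certain

print(unscramble(scrambled, noisy_oracle))  # recovers the true order
```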

Future work

Merlot is only the latest work on video understanding in the AI research community. In 2019, researchers at Georgia Institute of Technology and the University of Alberta created a system that could automatically generate commentary for “let’s play” videos of video games. More recently, researchers at Microsoft published a preprint paper describing a system that could determine whether statements about video clips were true, by learning from visual and textual clues. And Facebook has trained a computer vision system that can automatically learn audio, textual, and visual representations from publicly available Facebook videos.

Above: Merlot can understand the sequence of events in videos, as demonstrated here.

The Allen Institute and University of Washington researchers note that, like previous work, Merlot has limitations, some owing to the data selected to train the model. For example, Merlot could exhibit undesirable biases because it was only trained on English data and largely local news segments, which can spend a lot of time covering crime stories in a sensationalized way. It’s “very likely” that training models like Merlot on mostly news content could cause them to learn racist patterns as well as sexist patterns, the researchers concede, given that the most popular YouTubers in most countries are men. Studies have demonstrated a correlation between watching local news and having more explicit, racialized beliefs about crime.

For these reasons, the team advises against deploying Merlot into a production environment. But they say that Merlot is still a promising step for future work in multimodal understanding. “We hope that Merlot can inspire future work for learning vision+language representations in a more human-like fashion compared to learning from literal captions and their corresponding images,” the coauthors wrote. “The model achieves strong performance on tasks requiring event-level reasoning over videos and static images.”

How to understand speaker measurements — and why they matter

How do you know if a speaker is any good?

The answer should be obvious. If you like the way it sounds, then it is good. I’m not here to tell you to stop enjoying what you like. But I am here to help you make more educated purchases.

Speakers don’t exist in isolation; most of us want to know we’re getting the best sound for our budget and setup. So how can you tell if one speaker is better than another without direct comparison? How do you know your impressions — or those of reviewers — aren’t being influenced by expectations about a speaker’s price and reputation? And what do you do when you don’t have a chance to listen to a speaker at all before buying it?

This is where speaker measurements and objective data come in. Knowing how to read frequency response graphs is one of the most important skills an audiophile can have.

Lucky for us, speaker engineers and psychoacoustics researchers have been studying the nature of ‘good sound’ for decades. This research has led to powerful insights which show that, to a substantial degree, your preference for one speaker over another can be predicted by data — frequency response measurements in particular.
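
As a deliberately crude illustration of predicting preference from data (not the published preference models, which weight several response statistics), the sketch below scores a speaker by how little its on-axis response deviates from flat over the midrange. The band edges, noise levels, and shelf filter are all made up for the example.

```python
# Deliberately crude sketch of the idea that a flatter measured response
# tends to predict preference. Real models (e.g., Olive's preference
# rating) combine several weighted statistics; everything here is illustrative.
import numpy as np

def flatness_score(freqs_hz, spl_db, lo=300.0, hi=10_000.0):
    """Standard deviation (dB) of the response within [lo, hi] Hz after
    removing its mean level; smaller means flatter."""
    band = (freqs_hz >= lo) & (freqs_hz <= hi)
    level = spl_db[band]
    return float(np.std(level - level.mean()))

# Toy comparison: a flat speaker vs. one with a broad +4 dB treble shelf.
f = np.geomspace(20, 20_000, 500)
flat = np.random.normal(0, 0.3, f.size)              # roughly flat response
bright = flat + 4.0 / (1.0 + (3_000.0 / f) ** 4)     # treble shelf above ~3 kHz
print(f"flat speaker deviation:   {flatness_score(f, flat):.2f} dB")
print(f"bright speaker deviation: {flatness_score(f, bright):.2f} dB")
```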

So by the end of this article, you should be able to look at a graph like this…

…and know whether it describes a decent speaker, as well as understand what some of its audible flaws might be.

Most of what I know comes from reading what I consider the most important book for any science-loving audiophile: Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms. Written by Dr. Floyd Toole, perhaps the most renowned expert on the psychoacoustics of speakers, it summarizes decades of research on acoustics and listener preferences.

I’ve since measured dozens of speakers and have found a remarkable correlation between my listening impressions and measurements, which are almost always performed after weeks of hearing the speaker in my own living room. This guide will hopefully help you understand how to correlate that data with your own impressions too.
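
For the curious, here is a minimal sketch of how a measurement typically becomes the frequency response curve discussed here: take the FFT of a recorded impulse response, then apply fractional-octave smoothing. It assumes you already have the impulse response as a NumPy array; the function names and parameters are illustrative, not any particular tool’s API.

```python
# Minimal sketch of turning a measured impulse response into a smoothed
# frequency response curve. Assumes the impulse response is already
# captured as a NumPy array; all values are illustrative.
import numpy as np

def magnitude_response(impulse, sample_rate):
    """FFT the impulse response and return (freqs_hz, magnitude_db)."""
    spectrum = np.fft.rfft(impulse)
    freqs = np.fft.rfftfreq(len(impulse), d=1.0 / sample_rate)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    return freqs, mag_db

def octave_smooth(freqs, mag_db, fraction=6):
    """Simple 1/fraction-octave smoothing: average each point over a
    log-frequency window, as response plots are commonly smoothed."""
    smoothed = np.empty_like(mag_db)
    for i, f0 in enumerate(freqs):
        if f0 <= 0:
            smoothed[i] = mag_db[i]
            continue
        lo, hi = f0 * 2 ** (-0.5 / fraction), f0 * 2 ** (0.5 / fraction)
        window = (freqs >= lo) & (freqs <= hi)
        smoothed[i] = mag_db[window].mean()
    return smoothed

# Toy impulse: a decaying 1 kHz ring standing in for a real measurement.
sr = 48_000
t = np.arange(2048) / sr
impulse = np.exp(-t * 800) * np.sin(2 * np.pi * 1_000 * t)
freqs, mag = magnitude_response(impulse, sr)
print(octave_smooth(freqs, mag)[:5])
```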

Okay, so why should I care about measurements? Can’t I just read the review?

Some audiophiles believe listening to a speaker is the only way to know if it’s any good. We all have different tastes in music, after all, so surely tastes in speakers vary just as much?

The problem is, when it comes to sound reproduction, not music, you’re probably not that special.

Research suggests that a significant majority of people will rank speakers similarly once you eliminate variables like a speaker’s price, reputation, or aesthetics. The gold standard of this preference research is the double-blind comparison.
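
To make the protocol concrete, here is a hypothetical sketch of a blind A/B trial harness: pairs of speakers are presented in randomized, hidden order and wins are tallied. The speaker names, listener model, and trial count are invented for illustration and are not from the Toole/Olive studies.

```python
# Hypothetical sketch of a double-blind A/B preference trial: neither the
# listener nor the person scoring knows which speaker is playing, because
# pairs are drawn and ordered at random behind the scenes.
import random

def run_blind_trials(speakers, listener_prefers, n_trials=20):
    """Present randomized hidden pairs and tally which speaker wins.
    listener_prefers(a, b) -> True if the listener picks a over b."""
    wins = {s: 0 for s in speakers}
    for _ in range(n_trials):
        a, b = random.sample(speakers, 2)   # hidden, randomized pairing
        winner = a if listener_prefers(a, b) else b
        wins[winner] += 1
    return sorted(wins.items(), key=lambda kv: kv[1], reverse=True)

# Toy listener who tends to pick the speaker with the flatter response.
flatness = {"Speaker A": 1.2, "Speaker B": 3.5, "Speaker C": 2.0}  # dB deviation
def listener(a, b):
    noise = random.gauss(0, 0.8)  # listeners aren't perfectly consistent
    return flatness[a] + noise < flatness[b]

print(run_blind_trials(list(flatness), listener))
```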

(Figure credit: Sean Olive / Toole & Olive, 1984)