
Meet the AI research pioneer who wants to redefine ‘progress’



Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners, whom we honored recently at Transform 2021. Check out last week’s interview with a winner of our AI responsibility and ethics award.

Think of an AI technology, and Dr. Nuria Oliver was likely working on it decades ago when it still felt like science fiction. Her research and inventions have ignited advancements across the industry, and now drive many of the products and services we use every day.

But while Oliver, the winner of our AI Research Award, has published more than 150 scientific papers and earned 41 patents, she doesn't believe in technological advancement for its own sake. Above all, she is focused today on responsible AI and "developing technology that's on our side, that really has our interests and our well-being as the main objective function."

“To me, progress is an improvement to the quality of life for all people, all the beings on the planet, and the planet itself — not just some people,” she told VentureBeat. “So I think it’s very important before we invest in any technology, to think whether that development is continuing progress. Or if it’s not, maybe we shouldn’t do it.”

Oliver is acting on this belief beyond her own research, speaking regularly on the topic and creating the Institute for Humanity Centric AI, a nonprofit focused on the impact of AI. She's also leading efforts to bring more women into the industry, and she asks any young girls who may be reading this to consider the opportunities in the field. Oliver herself was the first woman computer scientist in Spain to be named an ACM Distinguished Scientist and an ACM Fellow. She was also the first woman scientific director of R&D at Telefonica, and she continues making waves today as chief scientific advisor of the Vodafone Institute.

We’re thrilled to offer Oliver this much-deserved award. We recently caught up with her to learn more about her research and discuss responsible AI, the challenges in the industry, and how business leaders can make sense of the quickly evolving field.

This interview has been edited for brevity and clarity.

VentureBeat: How did you become an AI researcher? And what interests you most about the work?

Dr. Nuria Oliver: I discovered AI when I was studying telecommunications engineering in Spain. It's a six-year degree, and when I was in the third or fourth year, a professor from the math department asked me to write a paper for an international conference. I chose to write about neural networks and human intelligence versus artificial intelligence, and I became fascinated with the topic. And so I decided to do my master's thesis project on computer vision. My PhD in the U.S. was also on AI. So I guess it all started in my third year of university, but what really fascinated me then, and still fascinates me about AI, is human intelligence.

VentureBeat: Of all your inventions and research, is there one that sticks out to you as the most impactful for the field of AI? Or the most impactful in another way?

Oliver: That's like asking someone if they have a preferred child. But I guess my main area of expertise is building computational models of human behavior and building intelligent interactive systems that understand humans. And in terms of a landmark project, I would say the work I did on modeling human interactions using machine learning techniques, because that was one of the early works on detecting and modeling human interactions. I also built a system that was able to predict the most likely maneuver in a car before anyone was talking about autonomous driving, back in 1999. So that was also a really complex but very exciting project.

I'm also proud of the first project I did at MIT, which was a real-time facial expression recognition system. That exists commercially today, but it was like science fiction back in 1995. All the work I've done at the intersection of mobile phones, health, and wellness has also been really exciting, because it was trying to change the way we perceived phones. A lot of that work has also become mainstream today with wearables. And then finally, I would say all the work I've done on using data and AI for social good. That's an area that I'm very passionate about, and I feel it's had a lot of impact. I created the data and AI for social good area at Telefonica, and then again at Vodafone.

VentureBeat: Well, that's an amazing body of work, and it sounds like you're always ahead of your time. So what are you working on now that we might see more of in the future? Is there any emerging area of research that you really have your eye on right now?

Oliver: I'm very interested in developing technology that's on our side, that really has our interests and our well-being as the main objective function. And this is not the case today. Why don't we design technology that suggests we turn it off if it's having a negative impact on us? Why is the expectation that the technology we use is designed to maximize the amount of time that we spend using it? I'm also working a lot on some of the key challenges of AI systems that are used for decision making: algorithmic bias, discrimination, opacity, violations of privacy, the subliminal manipulation of human behavior. Right now, I don't think the impact is necessarily positive. So that's a big focus of my work right now, and I recently created a nonprofit foundation called the Institute for Humanity Centric AI. A lot of the work I just described is part of the research agenda of this new foundation.

VentureBeat: You mentioned some of the big ones like bias and privacy, but I’m wondering what you think are some of the lesser known hurdles with AI research today.

Oliver: There are different types of challenges. This is a very active research area, so there are a lot of technical challenges. In addition to what we already said, there’s inferring causality versus correlations. For a lot of big, important problems, we want to understand the causal relationships between different factors, but that is very difficult to do with many of today’s methods, which are very good at finding correlations but not necessarily causation. There are challenges related to data access and combining data from different sources. And for many impactful use cases like helping with a natural disaster or even the pandemic, you want to be able to make decisions in real time.
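To make the correlation-versus-causation point concrete, here is a minimal, self-contained sketch (not any method Oliver describes) in which two simulated variables correlate strongly only because a hidden confounder drives both, so intervening on one leaves the other unchanged. All variable names are hypothetical.

```python
# Minimal sketch: correlation without causation via a hidden confounder.
# All variables are synthetic; this only illustrates the point.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)              # hidden confounder (e.g., season)
x = z + 0.5 * rng.normal(size=n)    # observed variable driven by z
y = z + 0.5 * rng.normal(size=n)    # outcome also driven by z, not by x

print("observational corr(x, y):", round(np.corrcoef(x, y)[0, 1], 2))

# Simulated intervention: we set x ourselves (do(x)); y is unchanged
# because x never caused y in the first place.
x_do = rng.normal(size=n)
print("corr under intervention:", round(np.corrcoef(x_do, y)[0, 1], 2))
```

The observational correlation comes out high while the interventional one is near zero, which is exactly the gap Oliver says many of today's methods cannot see.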

And then there are more human-related issues in terms of education and capacity building. I’ve been saying for like 10 years now that we should really transform the compulsory education system so it’s more aligned with the 21st century. I think the education system in many countries is from the second industrial revolution, but we’re in the fourth industrial revolution. I also think we need to invest more in developing human skills that have been very important for our own survival: our social intelligence, emotional intelligence, creativity, our ability to work together, to adapt. And beyond the formal education, I think it’s very important to invest in upskilling and reskilling programs for professionals whose jobs are being impacted by AI. And I think there’s a connection there with some of the other VentureBeat awards like the AI Mentorship Award Katia Walsh won. And then also investing in education for the general population and policymakers, so we can actually make informed decisions about this very important discipline of AI.

And I mentioned it briefly, but there are many challenges related to the data: accessing, sharing, analyzing, ensuring quality, and privacy implications. Because even if the data is non-personal data, you can infer personal attributes like political views, sexual orientation, gender, or age. And of course, there are many barriers related to the governance of these systems and the ethical frameworks necessary to make sure the huge power AI has is going to actually be used for social good. I always say we shouldn’t confuse technological development with progress.

VentureBeat: There are new AI papers and findings coming out every day, and like you said, advancements aren’t always progress. So what advice do you have for technical professionals and decision makers for how they can keep up, understand changes in the field, and parse what research is truly impactful?

Oliver: That's a very good question, because the field has grown exponentially to the point where papers are being published constantly. In fact, many influential papers aren't even published in scientific conferences anymore; they're published in open repository systems like arXiv without any peer review. So I think it's important to understand that this work is incremental. If you're a practitioner or a business leader, understand the main concepts and both the capabilities and limitations of existing AI systems. Try to think of how they can benefit your business without necessarily going into all the details of the latest papers.

VentureBeat: Throughout the conversation, we've been touching on this idea of responsible and ethical AI. What do you feel is the role of AI researchers in preventing the potential harms of these technologies? And how is their responsibility the same as, or different from, that of entrepreneurs and enterprises?

Oliver: Increasingly, leading machine learning conferences are asking for a clear ethical discussion of the implications of the work. So that's really a step in the right direction. Many universities are now including ethics in the computer science degree as well. My main message here would be that if you're using AI, develop a human-centric approach from the beginning, and take into account the direction the field and legislation are heading. I think Europe is recognizing that if there is no regulation of AI systems, the negative unintended consequences of these systems can be pretty bad. And as I said, you know, we might not have progress at all.



How AI-powered BI tools will redefine enterprise decision-making



Value-creation in business intelligence (BI) has followed a consistent pattern over the last few decades. The ability to democratize and expand the addressable user base of solutions has corresponded to large value increases. Enterprise BI arguably started with highly technical solutions like SAS in the mid-'70s, accessible only to a small fraction of highly specialized employees. The BI world began to open up in the '90s with the advent of solutions like SAP Business Objects, which created an abstraction layer on top of query language to allow a broader swath of employees to run business intelligence. BI 3.0 came in the last decade, as solutions like Alteryx provided WYSIWYG interfaces that further expanded both the sophistication and accessibility of BI.

But in many cases, BI still involves analysts writing SQL queries to analyze large data sets so that they can provide intelligence for non-technical executives. While this approach to analysis continues to grow in adoption, I believe that a new BI paradigm will emerge and grow in importance over the next few years — one in which AI surfaces relevant questions and insights, and even proposes solutions.

This fourth wave of BI will leverage powerful AI advancements to further democratize analytics so that any line of business specialist can supervise more insightful and prescriptive recommendations than ever before.

In this fourth wave, the traditional order of BI will be inverted. The traditional method of BI generally begins with a technical analyst investigating a specific question. For example, an electronics retailer may wonder whether a higher diversity of refrigerator models in specific geographies is likely to increase sales. The analyst blends relevant data sources (perhaps an inventory management system and a billing system) and investigates whether there is a correlation. Once the analyst has completed the work, they present a conclusion about past behavior. They then create a visualization for business decision makers in a system like Tableau or Looker, which can be revisited as the data changes.
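As a rough illustration of that workflow, the sketch below blends a hypothetical inventory extract with a hypothetical billing extract and answers the single analyst-chosen question about model diversity and sales. The data, tables, and column names are invented for the example.

```python
# Sketch of the traditional BI workflow described above: blend two sources,
# then answer one specific, analyst-chosen question. The data and column
# names are invented for illustration.
import pandas as pd

# Hypothetical extract from an inventory management system
inventory = pd.DataFrame({
    "region": ["NE", "SE", "MW", "SW", "W"],
    "n_fridge_models": [12, 8, 15, 6, 10],
})

# Hypothetical extract from a billing system
billing = pd.DataFrame({
    "region": ["NE", "SE", "MW", "SW", "W"],
    "fridge_sales": [340, 210, 415, 150, 290],
})

blended = inventory.merge(billing, on="region")

# The single pre-framed question: does model diversity track sales?
corr = blended["n_fridge_models"].corr(blended["fridge_sales"])
print(f"Correlation between model diversity and sales: {corr:.2f}")

# The conclusion would then be published as a dashboard (e.g., in Tableau
# or Looker) for decision makers to revisit as the data changes.
```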

This investigation method works quite well, assuming the analyst asks the right questions, the number of variables is relatively well-understood and finite, and the future continues to look somewhat similar to the past. However, this paradigm presents several potential challenges in the future as companies continue to accumulate new types of data, business models and distribution channels evolve, and real-time consumer and competitive adjustments cause constant disruptions. Specifically:

  1. The amount of data produced today is unfathomably large and accelerating. IDC predicts that worldwide data creation will grow to 163ZB by 2025, up 10x from 2017. With that amount of data, the ability to zero in on the variables that matter is akin to finding a needle in a haystack.
  2. Business models and ways of reaching customers are becoming more varied and complex. Multi-modal distribution (digital, D2C, distributor-led, retail, ecommerce), international customers, mobile usage, and marketing channels (social media, search engine, display, television, etc.) have changed the dynamics of decision making and are more complicated than ever before.
  3. Customers have more options and can change preferences and abandon brands faster than ever. New competition arises both from tech behemoths like Amazon, Google, Microsoft, and Apple and from a record number of venture-backed startups.

BI 4.0

AI-enabled platforms that will define the fourth wave of BI start by crunching and blending massive amounts of data to find and surface patterns and relevant statistical insights. A data analyst then applies judgment to these myriad insights to decide which patterns are truly meaningful or actionable for the business. After the analyst digs into areas of interest, the platform suggests potential actions based on correlations observed over a more extended period, again validated by human judgment.
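A hedged sketch of what the pattern-surfacing step might look like, with no claim that any particular vendor works this way: scan many column pairs in a blended dataset, keep the statistically notable relationships, and rank them for a human analyst to review. The dataset and column names are synthetic.

```python
# Illustrative sketch of the BI 4.0 inversion: instead of answering one
# pre-framed question, scan many column pairs, keep the statistically
# notable relationships, and rank them for a human analyst to review.
# Not a description of any specific vendor's product.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 500

# Hypothetical blended business dataset with many numeric columns
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["ad_spend", "site_visits", "returns",
                           "discount_rate", "sales", "support_tickets"])
df["sales"] += 2.0 * df["ad_spend"]            # plant one real relationship

candidates = []
for a, b in combinations(df.columns, 2):
    r, p = stats.pearsonr(df[a], df[b])
    if p < 0.01 and abs(r) > 0.3:              # crude notability filter
        candidates.append((abs(r), a, b, r, p))

# Strongest patterns first; the analyst judges which are actionable
for _, a, b, r, p in sorted(candidates, reverse=True)[:10]:
    print(f"{a} vs {b}: r={r:+.2f}, p={p:.1e}")
```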

The time is ripe for this methodology to proliferate — AI advancements are coming online in conjunction with the growth of cloud-native vendors like Snowflake. Simultaneously, businesses are increasingly feeling the strain that business complexity and data proliferation are putting on their traditional BI processes.

The data analytics space has spawned some incredible companies capable of tackling this challenge. In the last six months, Snowflake vaulted into the top 10 cloud businesses with a valuation above $70 billion, and Databricks raised $1 billion at a $28 billion valuation. Both of these companies (along with similar offerings from AWS and Google Cloud) are vital enablers for modern data analytics, providing data warehouses where teams can leverage flexible, cloud-based storage and compute for analytics.

Industry verticals such as ecommerce and retail that are under the most strain from the three challenges outlined above are starting to see industry-specific platforms emerge to deliver BI 4.0 capabilities — platforms like Tradeswell, Hypersonix, and Soundcommerce. In the energy and materials sector, platforms like Validere and Verusen are helping to address these challenges by using AI to boost margins of operators.

In addition, broad technology platforms like Outlier, Unsupervised, and Sisu have demonstrated the power to pull exponentially more patterns from a dataset than a human analyst could. These are examples of intuitive BI platforms that are easing the strains, old and new, that data analysts face. And we can expect to see more of them emerging over the next couple of years.

Steve Sloane is a Partner at Menlo Ventures.



Scientists are trying to redefine how we measure time – here’s why

Everyone needs to know the time. Ever since the 17th-century Dutch inventor Christiaan Huygens made the first pendulum clock, people have been thinking of good reasons to measure time more precisely.

Getting the time right is important in so many ways, from running a railway to doing millisecond trades on the stock market. Now, for most of us, our clocks are checking themselves against a signal from atomic clocks, like those on board the global positioning system (GPS) satellites.

But a recent study by two teams of scientists in Boulder, Colorado might mean those signals will get much more accurate, by paving the way for a more precise redefinition of the second. Atomic clocks could become so accurate, in fact, that we could begin to detect previously imperceptible gravitational waves.

Brief history of time

Modern clocks still use Huygens’ basic idea of an oscillator with a resonance – like a pendulum of a fixed length that will always move back and forth with the same frequency, or a bell that rings with a specific tone. This idea was greatly improved in the 18th century by John Harrison who realised that smaller, higher frequency oscillators have more stable and pure resonances, making clocks more reliable.
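Huygens' insight can be made concrete with the standard small-angle pendulum formula, T = 2π√(L/g): the period, and hence the frequency, is fixed by the length alone for a given local gravity. A quick worked example, with the length chosen to give roughly a one-second swing:

```python
# Small-angle pendulum: period T = 2*pi*sqrt(L/g), so a fixed length gives
# a fixed frequency (for a given local gravity g).
import math

g = 9.81          # m/s^2, standard gravity
L = 0.994         # m, roughly the length of a "seconds pendulum"

T = 2 * math.pi * math.sqrt(L / g)   # period in seconds
print(f"period ≈ {T:.3f} s, frequency ≈ {1 / T:.3f} Hz")
# ≈ 2.0 s period (one swing each way per second), i.e. ≈ 0.5 Hz
```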


Nowadays, most everyday clocks use a tiny piece of quartz crystal in the shape of a miniature musical tuning fork, with very high frequency and stability. Not much has changed with this clock design in the past hundred years, although we've got better at making them cheaper and more reproducible.

The massive difference these days is the way that we check – or “discipline” – quartz clocks. Up until 1955, you needed to keep correcting your clock by checking it against a very regular astronomical phenomenon, like the Sun or the moons of Jupiter. Now we discipline clocks against natural oscillations inside atoms.
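In practice, "disciplining" simply means periodically measuring the local oscillator against the reference and steering it back. The toy loop below is only illustrative; the drift size and the proportional correction gain are made up, and real disciplined oscillators use far more sophisticated control.

```python
# Toy model of "disciplining" a quartz oscillator: the local frequency
# drifts, and each comparison against a reference steers it part-way back.
# The drift magnitude and correction gain are illustrative only.
import random

NOMINAL_HZ = 32_768.0                   # common watch-crystal frequency
local_hz = NOMINAL_HZ * (1 + 2e-6)      # start 2 ppm fast

for step in range(10):
    local_hz *= 1 + random.gauss(0, 1e-7)   # random drift since last check
    error = local_hz - NOMINAL_HZ           # offset measured vs. reference
    local_hz -= 0.5 * error                 # proportional correction
    print(f"check {step}: fractional offset = {error / NOMINAL_HZ:+.2e}")
```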

The first accurate atomic clock was built by Louis Essen in 1955. It was used to redefine the second in 1967, a definition that has remained the same since.

It works by counting the flipping frequency of a quantum property called spin in the electrons of caesium atoms. This natural atomic resonance is so sharp that you can tell if your quartz crystal clock signal wanders off in frequency by less than one part in 10¹⁵ (a millionth of a billionth). One second is officially defined as 9,192,631,770 caesium electron spin flips.
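A quick back-of-the-envelope calculation shows what a fractional error of one part in 10¹⁵ means in everyday terms:

```python
# What does a fractional frequency error of one part in 10^15 mean?
# A rough conversion from fractional error to clock drift.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 3.16e7 seconds

frac_error = 1e-15
drift_per_year = frac_error * SECONDS_PER_YEAR
print(f"drift ≈ {drift_per_year * 1e9:.1f} ns per year")            # ~31.6 ns

years_per_second_of_error = 1 / (frac_error * SECONDS_PER_YEAR)
print(f"≈ 1 s of error every {years_per_second_of_error:,.0f} years")  # ~32 million
```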

The fact we can make such accurately disciplined oscillators makes frequency and time the most precisely measured of all physical quantities. We send out signals from atomic clocks all over the world, and up into space via GPS. Anyone with a GPS receiver in their mobile phone has access to an astonishingly accurate time measurement device.


If you can measure time and frequency accurately, then there are all kinds of other things you can accurately measure too. For example, measuring the spin flip frequency of certain atoms and molecules can tell you the strength of the magnetic field they experience, so if you can find the frequency precisely then you’ve also found the field strength precisely. The smallest possible magnetic field sensors work this way.
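The magnetometer idea reduces to one proportionality: the spin-flip (Larmor) frequency scales linearly with the field, f = γB, so B = f/γ. The sketch below uses an illustrative gyromagnetic ratio of about 7 Hz per nanotesla, roughly the rubidium-87 ground-state value used in some optical magnetometers, and a hypothetical measured frequency.

```python
# Larmor relation: spin-flip frequency f = gamma * B, so B = f / gamma.
# gamma is an illustrative value (~7 Hz per nanotesla, roughly the
# ground-state value for rubidium-87 used in optical magnetometers).
GAMMA_HZ_PER_NT = 7.0

measured_freq_hz = 350_000.0          # hypothetical measured spin-flip frequency
B_nt = measured_freq_hz / GAMMA_HZ_PER_NT
print(f"inferred field ≈ {B_nt:,.0f} nT (~{B_nt / 1000:.0f} µT)")
# 350 kHz / 7 Hz/nT = 50,000 nT = 50 µT, about the strength of Earth's field
```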

But can we make better clocks that allow us to measure frequency or time even more precisely? The answer might still be just what John Harrison found: go higher in frequency.

The caesium spin flip resonance has a frequency corresponding to microwaves, but some atoms have nice sharp resonances for optical light, a million times higher in frequency. Optical atomic clocks have shown extremely stable comparisons with one another, at least when a pair of them is placed only a few metres apart.

Scientists are thinking about whether the international definition of the second could be redefined to make it more precise. But to achieve this, the different optical clocks that we would use to keep time need to be trusted to read the same time even if they are in different labs thousands of miles apart. So far, such long-distance tests have not been much better than those for microwave clocks.

Better clocks

Now, using a new way of linking the clocks with ultra-fast lasers, researchers have shown that different kinds of optical atomic clocks can be placed a few kilometres apart and still agree within 1 part in 10¹⁸. This is just as good as previous measurements with pairs of identical clocks a few hundred metres apart, but about a hundred times more precise than achieved before with different clocks or large distances.

The authors of the new study compared multiple clocks based on different types of atoms – ytterbium, aluminium and strontium in their case. The strontium clock was situated at the University of Colorado and the other two were at the US National Institute of Standards and Technology, down the road.

A diagram showing three atomic clocks being compared with one another over a distance.