
Foundation models: 2022’s AI paradigm shift



2022 has seen incredible growth in foundation models — AI models trained on a massive scale — a revolution that began with Google’s BERT in 2018, picked up steam with OpenAI’s GPT-3 in 2020, and entered the zeitgeist with the company’s DALL-E text-to-image generator in early 2021. 

The pace has only accelerated this year and moved firmly into the mainstream, thanks to the jaw-dropping text-to-image capabilities of DALL-E 2, Google’s Imagen and Midjourney, as well as the computer vision applications enabled by Microsoft’s Florence and the multimodal capabilities of DeepMind’s Gato.

That turbocharged speed of development, as well as the ethical concerns around model bias that accompany it, is why one year ago, the Stanford Institute for Human-Centered AI founded the Center for Research on Foundation Models (CRFM) and published “On the Opportunities and Risks of Foundation Models” — a report that put a name to this powerful transformation. 

“We coined the term ‘foundation models’ because we felt there needed to be a name to cover the importance of this set of technologies,” said Percy Liang, associate professor in computer science at Stanford University and director of the CRFM. 


Since then, the progress of foundation models “made us more confident that this was a good move,” he added. However, it has also led to a growing need for transparency, which he said has been hard to come by. 

“There is confusion about what these models actually are and what they’re doing,” Liang explained, adding that the pace of model development has been so fast that many foundation models are already commercialized, or are underpinning point systems that the public is not aware of, such as search. 

“We’re trying to understand the ecosystem and document and benchmark everything that’s happening,” he said.

Foundation models lack transparency

The CRFM defines a foundation model as one that is trained on broad data and can be adapted to a wide range of downstream tasks. 

“It’s a single model, like a piece of infrastructure, that is very versatile,” said Liang — in stark contrast to the previous generation of AI development, in which bespoke models were built for different applications. 

“This is a paradigm shift in the way that applications are built,” he explained. “You can build all sorts of interesting applications that were just impossible, or at the very least took a huge team of engineers months to build.” 

Foundation models like DALL-E and GPT-3 offer new creative opportunities as well as new ways to interact with systems, said Rishi Bommasani, a Ph.D. student in the computer science department at Stanford whose research focuses on foundation models. 

“One of the things we’re seeing, in language and vision and code, is that these systems may lower the barrier for entry,” he added. “Now we can specify things in natural language and therefore enable a far larger class of people.” 

That is exciting to see, he said, “But it also entails thinking about new types of risks.” 
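
As a concrete illustration of that shift, the sketch below points a single pretrained model at several downstream tasks purely by changing a natural-language prompt, using the open source Hugging Face transformers library. It is a minimal, hedged example rather than the workflow of any system named in this article; the small gpt2 checkpoint stands in for a real foundation model, so outputs will be rough. The point is the pattern, not the quality.

```python
# Minimal sketch: one pretrained model, many tasks, each specified in plain
# language. "gpt2" is only a small, freely available stand-in for a true
# foundation model; swap in a larger checkpoint for usable output.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

tasks = {
    "summarize": (
        "Summarize in one sentence: Foundation models are trained on broad "
        "data and adapted to many downstream tasks.\nSummary:"
    ),
    "classify": "Review: 'The onboarding flow was confusing.'\nSentiment:",
    "draft": "Write a polite one-line reply declining a meeting invite:\n",
}

for name, prompt in tasks.items():
    out = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    # The same weights serve every task; only the prompt changes.
    print(name, "->", out[0]["generated_text"][len(prompt):].strip())
```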

Foundation model releases are contentious

The challenge, according to Liang and Bommasani, is that there’s not enough information to assess the social impact or explore solutions to risks of foundation models, including biased data sets that lead to racist or sexist output. 

“We’re trying to map out the ecosystem, like what datasets were used, how models are trained, how the models are being used,” Liang said. “We’re talking to the various companies and trying to glean information by reading between the lines.” 

The CRFM is also attempting to allow companies to share details about their foundation models while still protecting company interests and proprietary IP. 

“I think people would be happy to share, but there’s a fear that oversharing might lead to some consequences,” he said. “It’s also [that] if everyone were sharing it might be actually okay, but no one [wants] to be the first to share.” 

This makes it challenging to proceed.

“Even basic things like whether these models can be released is a hot topic of contention,” he said. “This is something I wish the community would discuss a bit more and get a bit more consensus on how you can guard against the risks of misuse, while still maintaining open access and transparency so that these models can be studied by people in academia.” 

The opportunity of a decade for enterprises

“Foundation models cut down on data labeling requirements anywhere from a factor of like 10 times, 200 times, depending on the use case,” Dakshi Agrawal, IBM fellow and CTO of IBM AI, told VentureBeat.  “Essentially, it’s the opportunity of a decade for enterprises.” 

Certain enterprise use cases require more accuracy than traditional AI has been able to handle — such as very nuanced clauses in contracts, for example.

“Foundation models provide that leap in accuracy which enables these additional use cases,” he said. 

Foundation models were born in natural language processing (NLP) and have transformed that space in areas such as customer care analysis, he added. Industry 4.0 also has a tremendous number of use cases, he explained. The same AI breakthroughs happening in language are happening in chemistry for example, as foundation models learn the language of chemistry from data — atoms, molecules and properties — and power a multitude of tasks.
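
To give a sense of what learning the “language of chemistry” can mean in practice, molecules are commonly written as SMILES strings and split into tokens, much as text is split into words before being fed to a language model. The snippet below is a simplified, illustrative tokenizer under that assumption; it is not IBM’s pipeline.

```python
# Illustrative only: a molecule written as a SMILES string is treated as a
# "sentence" and split into tokens, after which standard language-model
# machinery can be applied. The regex is a simplified SMILES tokenizer.
import re

SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|@|=|#|\(|\)|\.|[A-Za-z]|%[0-9]{2}|[0-9]|[+\-\\/])"
)

def tokenize_smiles(smiles: str) -> list:
    """Split a SMILES string into tokens, as a text tokenizer splits words."""
    return SMILES_TOKEN.findall(smiles)

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1', ...]
```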

“There are so many other areas where companies would love to use the foundation model, but we are not there yet,” he said, offering high-fidelity data synthesis and more natural conversational assistance as examples, but “we will be there maybe in a year or so. Or maybe two.”

Agrawal pointed out that regulated industries are hesitant to use today’s public large language models: input data must be controlled and trusted, and output must be controlled so that it does not include biased or harmful content. In addition, the output should be consistent with the input and with facts — hallucinations, or interpretation errors, cannot be tolerated.

For the CEO who has already started their AI journey, “I would encourage them to experiment with foundation models,” he said.

Most AI projects, he explained, get stuck on the path to delivering value. “I would urge them to try foundation models to see that time to value shrinks and how little time it takes away from day-to-day business.” 

If an organization has not started on their AI journey or is at a very early stage, “I would say you can just leapfrog,” he said. “Try this very low-friction way of getting started on AI.” 

The future of foundation models

Going forward, Agrawal thinks the cost of foundation models, and the energy used, will go down dramatically, thanks in part to hardware and software specifically targeted towards training them by leveraging the technology more effectively. 

“I expect energy to be exponentially decreasing for a given use case in the coming years,” he said. 

Overall, Liang said that foundation models will have a “transformative” impact – but it requires a balanced and objective approach. 

“We can’t let the hype make us lose our heads,” he said. “The hope is that in a year we’ll at least be at a definitively better place in terms of our ability to make informed decisions or take informed actions.” 


Into the metaverse: How conversational AI will build its experiential foundation



The much-hyped metaverse concept — once thought of as a futuristic, hypothetical online destination — is quickly becoming a new kind of internet. As more users interact with one another in these virtual environments, brands will realize new opportunities for engaging their target audiences. Companies such as Meta (formerly Facebook) are rapidly making plans to expand into the metaverse, altering and advancing how people will work, socialize, shop and even bank in the future. 

While some dismiss the positive potential of this digital world, it cannot be denied that the metaverse is a concept many have heard of, and one that will become increasingly ubiquitous. Gaming may be its most obvious initial use case, as consumers and gamers alike steadily continue to merge their physical and digital lives. This is something that’s been happening since the arrival of the iPhone, a device that has become an extension of our brains and bodies. As technology advances, it’s only natural that more parts of our lives will be embedded into the digital world. 

With more people opting to “live” inside the metaverse, there will be routine tasks that require more advanced and intuitive communication. Be it mailing a letter, purchasing land or buying a burger, there must be a proper way to scale communications through artificial intelligence. Technologies like conversational AI platforms (CAIPs) will allow brands to design more appealing and engaging user experiences in this burgeoning virtual environment.

Building the experiential foundation

As more retail, banking and technology companies begin to inhabit the metaverse, the need for intelligent and relatable customer service programs will be crucial for success in this new arena. To achieve this, it has become increasingly clear that conversational AI must be the foundation of the metaverse, especially for experience delivery. 

Whether it’s a virtual post office, bank, or fast food restaurant, the interactions will be between a human-controlled avatar and a chatbot, so these avatars must have a streamlined way to communicate in order to effectively reach their goal. We are seeing this begin to take shape, as Meta just released its open source language model with 175 billion parameters, a good start for the conversational AI that will give consumers access to advanced customer support systems that interact like humans. These advancements will not only allow retailers to make the customer service process easier, but also help them create an even more immersive experience for brand loyalists.

In addition to its language model, Meta released major improvements to its hand tracking API, allowing a user’s hand to move as freely as it would in the physical world. This same precedent must be set for digital conversations, especially in consumer engagement and customer support settings. Just as users need to be able to use their hands, they will need to achieve goals through conversation. If a user cannot properly communicate with an avatar (bot), the lack of “human experience” will likely detract from the system’s ability to blend the digital and physical worlds.

Metaverse + AI aspirations

The metaverse can be thought of as a series of universes that merge, where a user could seamlessly flit among their worker, gamer and social media personas based on their requirements. Many have noted that the metaverse will probably play out like an open-world MMO or MMORPG (e.g. Fortnite or World of Warcraft, respectively) which will allow metaverse avatars to interact with each other. In order for these aspirations to even begin to take shape, the proper AI must be implemented to ensure algorithms and interactions are as scalable and sustainable as possible. This will be especially important considering the number of languages spoken across the globe (there are more than 7,100). 

If the metaverse aims to unite these worlds, how will it allow us to overcome language barriers? This is where conversational AI, speech recognition and self-supervised learning come into play. When providing a chatbot (or in this case, avatar) that needs to support thousands of languages, any conversational AI platform would need to train the avatars to recognize patterns of specific speech and language in order to respond effectively and efficiently to written and voice queries.
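
To make that routing problem concrete, here is a minimal, hypothetical sketch: detect the language of an incoming utterance, then map it to an intent. The langdetect package is just one convenient open source language detector, and the keyword table is a toy stand-in for a trained multilingual intent model; no vendor’s actual platform is implied.

```python
# Toy sketch of multilingual routing: identify the language, then the intent.
# The keyword table stands in for a trained multilingual intent classifier.
from langdetect import detect  # pip install langdetect

INTENT_KEYWORDS = {
    "buy_item": ["buy", "purchase", "comprar", "acheter", "kaufen"],
    "get_support": ["help", "support", "ayuda", "aide", "hilfe"],
}

def route(utterance: str):
    language = detect(utterance)  # e.g. "en", "es", "de" (may vary on short text)
    lowered = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return language, intent
    return language, "hand_off_to_human"

print(route("Quiero comprar una hamburguesa"))      # likely ('es', 'buy_item')
print(route("Ich brauche Hilfe mit meinem Konto"))  # likely ('de', 'get_support')
```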

The future of the metaverse 

Mark Zuckerberg referred to AI as the most important foundational technology of our time, believing that its power can unlock advancements in other fields, like VR, blockchain, AR and 5G.

With companies across all industries developing ways to monetize the metaverse, these industry giants are banking on it becoming the new (and all-encompassing) internet. Much like the internet boom in the 1990s, we just may look back at this moment and wonder how we ever survived without the metaverse. 

In any universal shift in how we live, and especially one driven by technology, it’s important to understand the foundational elements at work. As smartphones, computer programs and alternate digital universes become ever more integral to our lives, users should truly understand their underpinnings.

Innovations like the metaverse would not be achievable without conversational AI, and as open source programs continue to develop, the adoption of this space will only grow. 

Raj Koneru is CEO of Kore.ai


PyTorch has a new home: Meta announces independent foundation



Meta announced today that its artificial intelligence (AI) research framework, PyTorch, has a new home. It is moving to an independent PyTorch Foundation, which will be part of the nonprofit Linux Foundation, a technology consortium with a core mission of collaborative development of open-source software. 

According to Aparna Ramani, VP of engineering at Meta, over the next year the focus will be on making a seamless transition from Meta to the foundation.

Long-term, “The mission is really to drive adoption of AI tooling,” she told VentureBeat. “We want to foster and sustain an ecosystem of vendor-neutral projects that are open source around PyTorch, so the goal for us is to democratize state-of-the-art tools, libraries and other components that make innovations accessible to everybody.” 

PyTorch has become a leading AI platform

Since Meta created PyTorch six years ago, some 2,400 contributors have built more than 150,000 projects on the framework, according to the company. As a result, PyTorch has become one of the leading platforms for AI research as well as for commercial production use — including as a technological underpinning to Amazon Web Services, Microsoft Azure and OpenAI.


“The new PyTorch Foundation board will include many of the AI leaders who’ve helped get the community where it is today, including Meta and our partners at AMD, Amazon, Google, Microsoft and Nvidia,” Mark Zuckerberg, founder and CEO of Meta, said in an emailed press comment. “I’m excited to keep building the PyTorch community and advancing AI research.” 

Ramani will sit on the board of the foundation as the Meta representative. She told VentureBeat the PyTorch move is a natural transition. 

“This isn’t anything sudden — it’s an evolution of how we’ve always been operating PyTorch as community-driven,” Ramani said. “It’s a natural transition for us to create a foundation that is neutral and egalitarian, including many partners across the industry who can govern the future growth of PyTorch and make sure it is beneficial to everybody across the industry.”

Though PyTorch is now free of Meta’s direct oversight, the company said it intends to continue using PyTorch as its primary AI research platform and will “financially support it accordingly.” Zuckerberg did note, however, that the company plans to maintain “a clear separation between the business and technical governance” of the foundation.

Ramani pointed out that when PyTorch got its start as a small project by a small group of Meta researchers, nobody expected or anticipated the kind of growth it has enjoyed. 

“It was really the researchers who were like, let’s go solve this problem,” she said. “But as soon as we started building it, clearly PyTorch was solving something absolutely core to what the industry needed at the time — so it resonated with where AI research is going given the speed of innovation and the flexibility that has become absolutely critical. That confluence helped PyTorch really take off.” 
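
For readers unfamiliar with why researchers gravitated to it, the minimal sketch below shows PyTorch’s define-by-run style: the computation graph is built as ordinary Python executes, which is the flexibility Ramani describes. It is generic example code, not Meta’s training setup.

```python
# Minimal PyTorch training loop on toy data; the graph is traced as the
# Python code runs, so arbitrary control flow can sit inside the loop.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)          # toy batch: 64 examples, 10 features each
y = torch.randint(0, 2, (64,))   # toy binary labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass builds the graph on the fly
    loss.backward()              # autograd differentiates whatever just ran
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```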


Linux Foundation to promote dataset sharing and software dev techniques

At the Linux Foundation Membership Summit this week, the Linux Foundation — the nonprofit tech consortium founded in 2000 to standardize Linux and support its growth — announced new projects: Project OpenBytes and the NextArch Foundation. OpenBytes is an “open data community” as well as a new data standard and format primarily for AI applications, while NextArch — which is spearheaded by Tencent — is dedicated to building software development architectures that support a range of environments.

OpenBytes

Dataset holders are often reluctant to share their datasets publicly due to a lack of knowledge about licenses. In a recent Princeton study, the coauthors found that the opaqueness around licensing — as well as the creation of derivative datasets and AI models — can introduce serious ethical issues, particularly in the computer vision domain.

OpenBytes is a multi-organization effort charged with creating an open data standard, structure, and format with the goal of reducing data contributors’ liability risks, under the governance of the Linux Foundation. The format of data published, shared, and exchanged will be available on the project’s future platform, ostensibly helping data scientists find the data they need and making collaboration easier.

Linux Foundation senior VP Mike Dolan believes that if data contributors understand that their ownership of data is well-protected and that their data won’t be misused, more data will become accessible. He also thinks that initiatives like OpenBytes could save a large amount of capital and labor resources on repetitive data collection tasks. According to a CrowdFlower survey, data scientists spend 60% of their time cleaning and organizing data and 19% of their time actually collecting datasets.

“The OpenBytes project and community will benefit all AI developers, both academic and professional and at both large and small enterprises, by enabling access to more high-quality open datasets and making AI deployment faster and easier,” Dolan said in a statement.

Autonomous car company Motional (a joint venture of Hyundai and Aptiv), Predibase, Zilliz, Jina AI, and ElectrifAi are among the early members of OpenBytes.

NextArch

As for NextArch, it’s meant to serve as a “neutral home” for open source developers and contributors to build an architecture that can support compatibility between microservices. “Microservices” refers to a type of architecture that enables the rapid, frequent, and reliable delivery of large and complex apps.

Cloud-native computing, AI, the internet of things (IoT), and edge computing have spurred enterprise growth and digital investment. According to market research, the digital transformation market was valued at $336.14 billion in 2020 and is expected to grow at a compound annual growth rate of 23.6% from 2021 to 2028. But a lack of common architecture is preventing developers from fully realizing these technologies’ promises, Linux Foundation executive director Jim Zemlin asserts.

“Developers today have to make what feel like impossible decisions among different technical infrastructures and the proper tool for a variety of problems,” Zemlin said in a press release. “Every tool brings learning costs and complexities that developers don’t have the time to navigate, yet there’s the expectation that they keep up with accelerated development and innovation.”

Enterprises generally see great value in emerging technologies and next-generation platforms and customer channels. Deloitte reports that the implementation of digital technologies can help accelerate progress towards organizational goals such as financial returns, workforce diversity, and environmental targets by up to 22%. But existing blockers often prevent companies from fully realizing these benefits. According to a Tech Pro survey, buy-in from management and users, training employees on new technology, defining policies and procedures for governance, and ensuring that the right IT skillsets are onboard to support digital technologies remain challenges for digital transformation implementation.

Toward this end, NextArch aims to improve data storage, heterogeneous hardware, engineering productivity, telecommunications, and more through “infrastructure abstraction solutions,” specifically new frameworks, designs, and methods. The project will seek to automate operations and processes to “increase the autonomy of [software] teams” and create tools for enterprises that address the problems of productization and commercialization in digital transformation.

“NextArch … understands that solving the biggest technology challenges of our time requires building an open source ecosystem and fostering collaboration,” Dolan said in a statement. “This is an important effort with a big mission, and it can only be done in the open source community. We are happy to support this community and help build open governance practices that benefit developers throughout its ecosystem.”


Eclipse Foundation launches open source collaboration around software-defined vehicles

One of the world’s largest open source software (OSS) foundations, the Eclipse Foundation, this week invited leaders in the technology sector to join a new working group initiative focused on developing a new class of open source software-defined vehicles.

Several top industry players are joining the foundation’s open source collaborative effort, including Microsoft, Red Hat, Bosch, and others.

“With digital technologies unlocking the future of accessible, sustainable, and safe transportation experiences, mobility services providers are increasingly looking to differentiate through software innovation,” said Ulrich Homann, corporate vice president and distinguished architect at Microsoft. “By standardizing the development, deployment, and management of software-defined vehicles through collaboration in the open-source space, businesses can bring tailored mobility solutions to their customers faster and can focus on innovations.”

The Eclipse Foundation’s initiative aims to provide “individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation,” according to the foundation’s press release.

Benefits for mobility

The new working group will focus entirely on building next-generation vehicles based on open source. By open-sourcing this project, the foundation is hoping to pull solutions and innovation from the best and brightest enterprises and individuals across the globe — and doing so with an eye toward creating a strong foundation for software-defined vehicles and future mobility.

“The software-defined vehicle will play a key role in the future of mobility,” Christoph Hartung, president and chairman of embedded systems maker ETAS, said in a press release. “The explosive increase in complexity can only be mastered by working closely together as we do in this initiative.”

The foundation is focused on fostering an environment from which to pave the way for software-defined vehicles, but it doesn’t stop there. Eclipse is also looking at how both its new working group and the innovation of software-defined vehicles can be used to create robust accessibility options for people with various disabilities and physical needs.

“The transfer of personalized functionality across vehicles and brands will be eased — assume a rental car,” Sven Kappel, vice president and head of project for Bosch, told VentureBeat. “So, in the given hardware restraints, the needs of [an] impaired car user could be far faster retrieved and be met by a large developer base with lower implementation cost than classical vehicle architecture and software developing paradigms.”

A software-defined future

Software-defined vehicles have captured the attention of industry leaders, academics, and the public alike. Next-gen vehicle developers are increasingly looking to provide advanced mobility options to serve the global community, just as smart city technologies and initiatives are similarly on the rise.

The benefits from this open-sourced working group can extend beyond vehicles into other industries as well, including cloud computing and manufacturing. A similar open source-focused working initiative in another industry sector could create benefits ranging from collaborative interdisciplinary solutions to ensuring thoughtful inclusion of anticipated consumer needs early on.

As the automotive industry, like other sectors, continually pivots toward a software-defined future, interdisciplinary collaboration with open source technology will further enable innovation. Manufacturers and suppliers will be better equipped to leverage standards that make innovations available to more people — for the software-defined vehicle space, this means being able to bring customizable features to drivers and passengers at an accelerated rate, Homann explained to VentureBeat via email.

“A global open source community can leverage a wide variety of voices, which can lead to greater participation, such as contributing tools and development principles that can enhance diversity and inclusion,” Homann said.

By building and utilizing a strong, open foundation, vehicle manufacturers worldwide will be able to zero in on key differentiators for customers, like mobility services and end-user experience improvements, at the same time that they are saving both time and cost on the non-differentiating elements, such as operating systems, middleware, and communication protocols, Eclipse’s press release claims.

“Although we have extensive roots with the automotive community, a project of this scope and scale has never been attempted before,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This initiative enables participants to get in at the ‘ground level’ and ensure they each have an equal voice in this project.”

The future of software-defined vehicles

The Eclipse Foundation — which has reportedly fostered more than 400 open source projects to date — is eyeing the future as it attempts an open source project unlike any of those that came before. By creating an environment that it anticipates will become “an open ecosystem for deploying, configuring, and monitoring vehicle software in a secure and safe way,” the foundation aims to help drive a significant transformation of the industry at large scale.

“The end goal of this project is a completely new type of automobile defined in free, open-to-anyone software that can be downloaded into an off-the-shelf chassis. Adding new features to your car will simply require a software update. An enormous first step in a new era of vehicle development,” a press release from Eclipse stated.

A transportation and logistics report released in August by the market data firm Statista projects that electronic systems will account for nearly 50% of the total price of a new car by 2030. Additionally, the report claims that even before then, by 2025, about 33% of new cars sold will be powered by an electric battery. In fact, the report predicts that within the next decade, the rise of mobility services and autonomous vehicles will launch a revolution throughout the entire auto sector.

In addition, another recent report, titled “Software-defined vehicle Research Report 2021: Architecture Trends and Industry Panorama,” points out that in order to keep up with the Joneses of the automotive industry, original equipment manufacturers (OEMs) must “open up vehicle programming to all enterprises by simplifying the development of vehicle software and increasing the frequency of updates, so as to master the ecological resources of developers.” This further underscores the Eclipse Foundation’s ultimate goal of inviting industry leaders to collaboratively build next-generation vehicles based on open source.

According to the press release, Eclipse plans to create a space fueled by “transparency, vendor-neutrality, and a shared voice” in order to ensure all participants in the open source-driven project have the opportunity to shape the future of the working group — and the very future of vehicle development itself.


Fortnite gets another DC Comics Batman crossover with ‘Foundation’

Following the conclusion of its first crossover series, DC Comics has announced another Batman comic book set in the Fortnite universe. The upcoming one-shot, called ‘Foundation,’ will arrive later this month with special perks for buyers, DC says.

The new Batman/Fortnite crossover graphic novel was announced today by DC’s Chief Creative Officer Jim Lee. The upcoming ‘Foundation #1’ release will see Batman return to Gotham City where he finds, to his surprise, a new figure called The Foundation who somehow made his way from the Fortnite island to Batman’s world.

Batman/Fortnite: Foundation was written by Epic’s Chief Creative Officer Donald Mustard, ‘Zero Point’ writer Christos Gage, and ‘Dark Nights Metal’ writer Scott Snyder. Joshua Hixson was behind the pencil and ink for this release, while Roman Stevens took care of colors.

The graphic novel’s cover will come in three variants: the main version was created by Greg Capullo, Jonathan Glapion, and Matt Hollingsworth. The variants come from Alex Garner, while some lucky buyers will also get a premium cover variant that was made by Mustard.

Fans who purchase the print version of the Batman/Fortnite: Foundation graphic novel, which is 48 pages long, will get a code to download special items in Fortnite, including the Batman Who Laughs skins and related loading screen. The book will be available to purchase on October 26 in multiple markets, including the US, Germany, Mexico, France, Brazil, and more.


Linux Foundation initiative advocates open standards for voice assistants



Organizations are beginning to develop, design, and manage their own AI-powered voice assistant systems independent of platforms such as Siri and Alexa. The transition is being driven by the desire to manage the entirety of the user experience and integrate voice assistance into multiple business processes and brand environments, from call centers to stores. In a recent survey of 500 IT and business decision-makers in the U.S., France, Germany, and the U.K., 28% of respondents said they were using voice technologies and 84% expected to be using them within the next year.

To support the evolution, the Linux Foundation today launched the Open Voice Network (OVN), an alliance advocating for the adoption of open standards across voice assistant apps in automobiles, smartphones, smart home devices, and more. With founding members Target, Schwarz Gruppe, Wegmans Food Markets, Microsoft, Veritone, Deutsche Telekom, and others, the OVN’s goal — much like Amazon’s Voice Interoperability Initiative — is to standardize the development and use of voice assistant systems and conversational agents that use technologies including automatic speech recognition, natural language processing, advanced dialog management, and machine learning.
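
A rough sense of the stack the OVN wants to standardize, with speech recognition feeding natural-language understanding and dialog management, can be sketched with off-the-shelf open source pieces. The snippet below uses the third-party SpeechRecognition package and a toy keyword-based dialog stub; it is illustrative glue code under those assumptions, not an OVN reference implementation or any member company’s stack.

```python
# Toy voice-assistant turn: capture audio, transcribe it, answer it.
import speech_recognition as sr  # pip install SpeechRecognition (Microphone needs PyAudio)

def answer(text: str) -> str:
    """Stand-in for NLU and dialog management."""
    if "hours" in text.lower():
        return "We are open from 9am to 9pm."
    return "Sorry, I didn't catch that. Could you rephrase?"

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # hosted ASR; swap in an on-prem engine if needed
    print("Heard:", text)
    print("Reply:", answer(text))
except sr.UnknownValueError:
    print("Speech was unintelligible.")
```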

The OVN emerged from research conducted between 2016 and 2018 on the impact of voice assistance on retail, consumer goods, and marketing industries by MIT’s Auto-ID Laboratory, Capgemini, and Intel. It was first announced as the Open Voice Initiative in 2019, but expanded significantly as the COVID-19 pandemic spurred enterprises to embrace digital transformation.

“Voice is expected to be a primary interface to the digital world, connecting users to billions of sites, smart environments and AI bots … Key to enabling enterprise adoption of these capabilities and consumer comfort and familiarity is the implementation of open standards,” Mike Dolan, SVP and general manager of projects at the Linux Foundation, said in a statement. “The potential impact of voice on industries including commerce, transportation, healthcare, and entertainment is staggering and we’re excited to bring it under the open governance model of the Linux foundation to grow the community and pave a way forward.”

Standards development

The OVN will focus specifically on standards development, including research and recommendations toward global standards to engender user choice, inclusivity, and trust. It will also work to identify and share best practices for conversational AI technologies, both horizontal and specific to vertical industries, serving as a source of insight on voice assistance. Lastly, the OVN will collaborate with and through existing industry associations on regulatory and legislative issues, including those of data privacy.

Ali Dalloul, GM at Microsoft’s Azure AI division, notes that as voice becomes a primary interface going forward, the result will be a hybrid ecosystem of general-purpose platforms and independent voice assistants that calls for interoperability. OVN’s mission is supporting this transition with industry guidance on the voice-specific protection of user privacy and data security.

“To speak is human, and voice is rapidly becoming the primary interaction modality between users and their devices and services at home and work,” Dalloul said in a statement. “The more devices and services can interact openly and safely with one another, the more value we unlock for consumers and businesses across a wide spectrum of use cases, such as conversational AI for customer service and commerce.”

Membership in the OVN includes a commitment of resources in support of the alliance’s research, awareness, and advocacy activities, as well as active participation in the OVN’s symposia and workshops. This year, the OVN plans to partner with enterprises, voice practitioners, platform providers, and industry associations in North America and the European Union, and has formed a working relationship with the China Netcasting Service Association, the Beijing-based industry association responsible for voice assistance.


Linux Foundation unveils new permissive license for open data collaboration



The Linux Foundation has announced a new permissive license designed to help foster collaboration around open data for artificial intelligence (AI) and machine learning (ML) projects.

It has often been said that data is the new oil, but for AI and ML projects in particular, having access to expansive and diverse data sets is key to reducing bias and building powerful models capable of all manner of intelligent tasks. To machines, data is a little like what “experience” is to humans — the more of it you have, the better decisions you are likely to make.

With CDLA-Permissive-2.0, the Linux Foundation is building on its previous efforts to encourage data-sharing efforts through licensing arrangements that clearly define how the data — and any derivative data sets — can and can’t be used.

Data pools

The Linux Foundation first introduced the Community Data License Agreement (CDLA) back in 2017 to entice organizations to open up their vast pools of (underused) data to third parties. There were two original licenses: a sharing license with a “copyleft” reciprocal commitment borrowed from the open source software sphere, which stipulated that any derivative data sets built from the original data set must be shared under a similar license; and a permissive license (1.0) without any such obligations (similar to how someone might define “true” open source software).

Licenses are basically legal documents that outline how a piece of work (in this case data sets) can be used or modified, but oftentimes specific phrases, ambiguities, or exceptions can be enough to make companies run a mile if they think that releasing content under a specific license could cause them problems somewhere down the line. And that is where the CDLA-Permissive-2.0 license comes into play — it’s essentially a rewrite of version 1.0, but it’s shorter and simpler to follow. But more than that, it has removed certain provisions that were deemed unnecessary or burdensome, and which may have hindered broader use of the license.

For example, version 1.0 of the license included obligations that data recipients preserve attribution notices in the data sets. For context, attribution notices or statements are standard in the software sphere, where a company that releases software built on open source components has to credit the creators of those components in its own software license. But according to the Linux Foundation, feedback it received from the community, and from lawyers representing companies involved in open data projects, indicated that there were challenges involved with associating attributions with data (or versions of data sets).

So while data source attribution is still an option, and might make sense for specific projects particularly where transparency is paramount, it is no longer a condition for businesses looking to share data under the new permissive license. The chief obligation that remains is that the main community data license agreement text is included with the new data sets.
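
In practice, that remaining obligation can be as simple as shipping the agreement text next to the data. The sketch below assumes a hypothetical local copy of the CDLA-Permissive-2.0 text and hypothetical file names; it illustrates the obligation as described above and is not legal guidance or an official tool.

```python
# Illustrative sketch: publish a (possibly derived or combined) data set and
# include the CDLA-Permissive-2.0 agreement text alongside it.
import json
import shutil
from pathlib import Path

def publish_dataset(rows, out_dir, license_file):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Write the data itself.
    with open(out / "data.jsonl", "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

    # The key remaining obligation: include the agreement text with the data.
    shutil.copy(license_file, out / "LICENSE-CDLA-Permissive-2.0.txt")

publish_dataset(
    rows=[{"id": 1, "label": "cat"}, {"id": 2, "label": "dog"}],
    out_dir="derived_dataset_v2",                # hypothetical output directory
    license_file="CDLA-Permissive-2.0.txt",      # hypothetical local copy of the agreement
)
```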

Data vs software

This also helps to highlight how transposing a concept from a software license onto a data set license doesn’t always make sense, partly because laws and regulations usually treat data differently from software and other similar creative content.

“Data is different from software,” Linux Foundation’s VP of compliance and legal Steve Winslow told VentureBeat. “Open source software is typically made of copyrightable works, where authorship is important. By contrast, data may frequently have little or no applicable intellectual property rights, and authorship and attribution are often less important.”

But isn’t attribution still desirable, even if it’s not always going to be applicable or relevant? According to Winslow, enforcing data attribution could have some negative consequences in terms of organizations’ willingness to collaborate around data.

“Some data recipients may still choose to attribute the data, to show that the data is trustworthy based on its source,” Winslow said. “But it will be their call, and not a requirement, as it could impose limitations on how to organize and analyze the data or force unintended burdens on data collaboration.”

For example, let’s assume data from multiple contributors — which could run into the thousands — is pooled into a single data set. If the data set is only ever used in that combined form, then attribution isn’t a huge burden. But if the data set is subsequently split into subsets which are redistributed separately or combined with a different data set, then that creates a whole lot of work in terms of determining which attributions apply to the new data set. In short, things can rapidly descend into confusion and chaos.

Transition

Several companies have already revealed plans to make their existing open data sets available under the new CDLA-Permissive-2.0 license, including Microsoft’s research arm which will now transition some of its open data sets, including Hippocorpus, Public Perception of Artificial Intelligence, Xbox Avatars Descriptions, Dual Word Embeddings, and GPS Trajectory.

Elsewhere, IBM’s Project CodeNet, NOAA JFK, Airline Reporting Carrier On-Time Performance, PubLayNet, and Fashion-MNIST will also transition to the new license.


Grillo, IBM, and the Clinton Foundation expand low-cost earthquake detection to the Caribbean

The Clinton Foundation today announced a commitment to action around deploying a novel, low-cost, open-source-based earthquake detection system in the Caribbean, starting with Puerto Rico.

The system is being developed by Grillo in tandem with the long-running Call for Code challenge.

We’ve been covering David Clark Cause and IBM’s Call for Code for years here at TNW, and Grillo’s work has been one of many success stories to come out of the coding event.

Grillo worked with IBM to develop open-source kits for the contest last year and, with the help of local scientists and government officials, began testing its earthquake-sensing technology in Puerto Rico.

Today, the company is gearing up to deploy 90 sensors across the island, developed through what’s called “OpenEEW,” or open-source earthquake early warning.

These low-cost sensors can provide robust seismic activity detection when combined with Grillo’s machine learning solutions.
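
A rough sketch of that ingest-and-detect loop might look like the following: subscribe to accelerometer readings over MQTT and flag sudden shaking with a short-term/long-term average (STA/LTA) ratio. The broker address, topic layout, payload shape and threshold are all hypothetical; OpenEEW’s actual firmware, backend and detection models live in the project’s own repositories.

```python
# Hypothetical sketch: subscribe to accelerometer samples over MQTT and flag
# shaking with a simple STA/LTA ratio. Not OpenEEW's real topics or models.
import json
from collections import deque

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" (1.x callback style below)

sta = deque(maxlen=50)     # short-term window (~ the last second of samples)
lta = deque(maxlen=1000)   # long-term window (~ the last 20 seconds)

def on_connect(client, userdata, flags, rc):
    client.subscribe("openeew/+/accel")        # hypothetical topic layout

def on_message(client, userdata, msg):
    sample = json.loads(msg.payload)           # e.g. {"x": 0.01, "y": -0.02, "z": 0.98}
    shaking = abs(sample["x"]) + abs(sample["y"]) + abs(sample["z"] - 1.0)
    sta.append(shaking)
    lta.append(shaking)
    if len(lta) == lta.maxlen:
        ratio = (sum(sta) / len(sta)) / max(sum(lta) / len(lta), 1e-9)
        if ratio > 4.0:                        # threshold chosen for illustration only
            print(f"Possible seismic event on {msg.topic}: STA/LTA = {ratio:.1f}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.org", 1883)     # placeholder broker address
client.loop_forever()
```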

Per a Clinton Global Initiative press release:

The Caribbean is a highly seismic region due to its location at the convergence zone between major tectonic plates and communities across the region are frequently impacted by seismic events. In January 2020, southern Puerto Rico was impacted by a series of earthquakes over several weeks that damaged homes and infrastructure and caused displacement.

Earthquakes can be difficult to detect. While it’s usually impossible to miss the big ones as they’re happening, many smaller and medium-sized seismic events go undetected by modern sensors.

In fact, when Grillo developed its system in Mexico and Puerto Rico, the team found that its inexpensive open-source sensors and detection tech outperformed costly state systems by a significant margin.

Here’s the best part: Grillo and IBM are committed to developing these systems in tandem with local and global developers. In other words: you can help.

Per the press release:

In another important step in helping prepare and alert citizens ahead of earthquakes, they’ll be hosting deployments in Puerto Rico, leveraging the 90+ sensors located across the island and calling on the open source community to help introduce affordable, community-driven detection solutions to the region.

Grillo would like the open source community’s help with the following actions:

  • Developers can help improve and test the sensor firmware so that it is more reliable and easier to provision.
  • The MQTT backend needs to be ported to additional open source platforms to allow for a scalable global hosted solution that will ingest data from citizen scientists everywhere.
  • The detection code can use help and is being developed in Python and deployed in Kubernetes. This will allow for rapid and precise detection of earthquakes using many more sensors and distributed systems.
  • The team is experimenting with machine learning for improved accuracy using the latest seismological algorithms.
  • There is also work on the mobile and wearable apps both for provisioning of a sensor device, as well as receiving alerts from the cloud.
  • Work is underway on the public Carbon/React dashboard which allows users to see and interact with devices, as well as view recent earthquake events.

If you think you have what it takes to contribute, or you’d like to learn how to use and develop these solutions as part of the open-source community, you can find out more here.


Cloudera Foundation merger strengthens data science and AI support for nonprofits



Today, the Patrick J. McGovern Foundation and the Cloudera Foundation, software company Cloudera’s philanthropic arm, announced a merger that will launch an initiative to accelerate data and AI maturity within nonprofits. In a definitive agreement, the Silicon Valley-based Cloudera Foundation will merge its staff, $9 million endowment, and $3 million in grants with the Patrick J. McGovern Foundation, a $1.5 billion data science and AI philanthropic organization. The two foundations say the transaction is expected to close in the second quarter of 2021, subject to regulatory approval.

Nonprofits face significant barriers to leveraging big data analytics and AI tools for driving strategy, growth, and global impact. The gaps in knowledge and technical expertise translate to opportunity costs in efficiency, impact, and scalability. A 2019 survey by PwrdBy found that nonprofit-specific AI is reaching fewer than 23% of organizations worldwide. In that same report, 83% of respondents said they believe ethical frameworks need to be defined before the nonprofit sector sees “full adoption” of AI.

“Data science and AI-based tools, deployed with purpose and social conscience, could improve nearly every part of the human experience. Enabling civil society to access these technologies will unlock transformational approaches to many of the world’s greatest challenges,” Patrick J. McGovern Foundation president Vilas Dhar said in a press release. “Backed by philanthropic capital and deep technical expertise, we are working together with social changemakers to advance a tech-enabled, human-centered future.”

The Cloudera Foundation, which was founded in 2017 with the mission of providing technology and mentorship to foster data expertise in the civil sector, has piloted its approach with seven nonprofits. The Patrick J. McGovern Foundation plans to broaden the foundation’s efforts post-merger to up to 100 organizations addressing the United Nations (UN) Sustainable Development Goals, the collection of 2030 goals set by the UN General Assembly as “a blueprint to achieve a better and more sustainable future for all.”

Cloudera Foundation CEO Claudia Juech will direct activities around data enablement for nonprofits as VP of Data and Society, a new program at the Patrick J. McGovern Foundation. Data and Society will offer a range of services, including public workshops, multi-year data science development collaborations, and accelerator partnerships focused on extracting actionable insights from datasets.

For example, Data and Society could help organizations combating climate change better manage their resources. In a report on the civil sector’s adoption of AI, the Brookings Institution cites a partnership between The Nature Conservancy and the Red Cross to create a dashboard incorporating social media data on floods for city planning — an effort that might assist cities in becoming more sustainable.

“AI can enable nonprofits to manage a broad range of activities. In conjunction with machine learning and data analytics, it is a way to control costs, handle internal operations, and automate routine tasks within the organization,” the report’s coauthors wrote. “Adoption of these tools can help groups with limited resources streamline internal operations and external communications and thereby improve the manner in which they function.”

Cloudera cofounder and Cloudera Foundation chair Mike Olson added: “Our missions couldn’t be better matched. We have seen how data and analytics enable nonprofits to work more effectively and increase their impact. With the leadership and resources of the Patrick J. McGovern Foundation, our team can do even more to help mission-driven organizations change the world.”
