
What is federated learning? | VentureBeat

In partnership with Paperspace

One of the key challenges of machine learning is the need for large amounts of data. Gathering training datasets for machine learning models poses privacy, security, and processing risks that organizations would rather avoid.

One technique that can help address some of these challenges is “federated learning.” By distributing the training of models across user devices, federated learning makes it possible to take advantage of machine learning while minimizing the need to collect user data.

Cloud-based machine learning

The traditional process for developing machine learning applications is to gather a large dataset, train a model on the data, and run the trained model on a cloud server that users can reach through different applications such as web search, translation, text generation, and image processing.

Every time the application wants to use the machine learning model, it has to send the user’s data to the server where the model resides.

In many cases, sending data to the server is unavoidable. Content recommendation systems, for example, depend on this paradigm because part of the data and content needed for machine learning inference resides on the cloud server.

[Image: cloud-based machine learning]

But in applications such as text autocompletion or facial recognition, the data is local to the user and the device. In these cases, it would be preferable for the data to stay on the user’s device instead of being sent to the cloud.

Fortunately, advances in edge AI have made it possible to avoid sending sensitive user data to application servers. This active area of research, sometimes called TinyML, aims to create machine learning models that fit on smartphones and other user devices. These models make it possible to perform on-device inference, and large tech companies are trying to bring some of their machine learning applications to users’ devices to improve privacy.

On-device machine learning has several added benefits. These applications can continue to work even when the device is not connected to the internet. They also provide the benefit of saving bandwidth when users are on metered connections. And in many applications, on-device inference is more energy-efficient than sending data to the cloud.

Training on-device machine learning models

On-device inference is an important privacy upgrade for machine learning applications. But one challenge remains: Developers still need data to train the models they will push on users’ devices. This doesn’t pose a problem when the organization developing the models already owns the data (e.g., a bank owns its transactions) or the data is public knowledge (e.g., Wikipedia or news articles).

But if a company wants to train machine learning models that involve confidential user information such as emails, chat logs, or personal photos, then collecting training data entails many challenges. The company will have to make sure its collection and storage practices conform to the various data protection regulations and that the data is anonymized to remove personally identifiable information (PII).

Once the machine learning model is trained, the developer team must make decisions on whether it will preserve or discard the training data. They will also have to have a policy and procedure to continue collecting data from users to retrain and update their models regularly.

This is the problem federated learning addresses.

Federated learning

[Image: the federated learning training phase]

The main idea behind federated learning is to train a machine learning model on user data without the need to transfer that data to cloud servers.

Federated learning starts with a base machine learning model in the cloud server. This model is either trained on public data (e.g., Wikipedia articles or the ImageNet dataset) or has not been trained at all.

In the next stage, several user devices volunteer to train the model. These devices hold user data that is relevant to the model’s application, such as chat logs and keystrokes.

These devices download the base model at a suitable time, for instance when they are on a wi-fi network and are connected to a power outlet (training is a compute-intensive operation and will drain the device’s battery if done at an improper time). Then they train the model on the device’s local data.

After training, they return the trained model to the server. A key property of popular machine learning algorithms such as deep neural networks and support vector machines is that they are parametric: once trained, they encode the statistical patterns of their data in numerical parameters and no longer need the training data for inference. Therefore, when the device sends the trained model back to the server, it doesn’t contain raw user data.

Once the server receives the trained models from user devices, it updates the base model with the aggregate parameter values of the user-trained models.

The federated learning cycle must be repeated several times before the model reaches the optimal level of accuracy that the developers desire. Once the final model is ready, it can be distributed to all users for on-device inference.
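The cycle described above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (a one-weight linear model and made-up per-device data), not a production federated learning system; the function names and hyperparameters are my own:

```python
import random

def local_train(w, local_data, lr=0.1, epochs=5):
    # Hypothetical local update: a one-weight linear model (y ~ w * x)
    # trained with SGD on (x, y) pairs that never leave the device.
    for _ in range(epochs):
        for x, y in local_data:
            grad = 2 * (w * x - y) * x  # gradient of the squared error
            w -= lr * grad
    return w

def federated_round(base_w, device_datasets, clients_per_round=3):
    # Sample a subset of volunteer devices, train locally on each, then
    # average the returned parameters; no raw data reaches the server.
    sampled = random.sample(device_datasets, clients_per_round)
    client_ws = [local_train(base_w, data) for data in sampled]
    return sum(client_ws) / len(client_ws)

# Each simulated "device" holds points from y = 3x plus personal noise.
random.seed(0)
devices = [[(x, 3 * x + random.gauss(0, 0.1)) for x in (1, 2, 3)]
           for _ in range(5)]

w = 0.0  # untrained base model on the server
for _ in range(10):  # repeated federated learning cycles
    w = federated_round(w, devices)
print(w)  # converges close to the true slope of 3.0
```

Note that only the scalar parameter `w` travels between server and devices; the `(x, y)` pairs stay local, which is the entire point of the technique.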

Limits of federated learning

Federated learning does not apply to all machine learning applications. If the model is too large to run on user devices, then the developer will need to find other workarounds to preserve user privacy.

Developers must also make sure that the data on user devices is relevant to the application. The traditional machine learning development cycle involves intensive data-cleaning practices in which data engineers remove misleading data points and fill the gaps where data is missing. Training machine learning models on irrelevant data can do more harm than good.

When the training data is on the user’s device, the data engineers have no way of evaluating the data and making sure it will be beneficial to the application. For this reason, federated learning must be limited to applications where the user data does not need preprocessing.

Another limit of federated machine learning is data labeling. Most machine learning models are supervised, which means they require training examples that are manually labeled by human annotators. For example, the ImageNet dataset is a crowdsourced repository that contains millions of images and their corresponding classes.

In federated learning, unless outcomes can be inferred from user interactions (e.g., predicting the next word the user is typing), the developers can’t expect users to go out of their way to label training data for the machine learning model. Federated learning is better suited for self-supervised applications such as language modeling, where the training signal comes from the data itself.

Privacy implications of federated learning

While sending trained model parameters to the server is less privacy-sensitive than sending user data, it doesn’t mean that the model parameters are completely clean of private data.

In fact, many experiments have shown that trained machine learning models might memorize user data, and attacks such as membership inference can determine, through trial and error, whether a particular record was part of a model’s training data.
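The intuition behind the simplest form of such an attack can be sketched as a loss threshold: models that have memorized their training data tend to score those records suspiciously well. The numbers below are invented for illustration only:

```python
def membership_guess(losses, threshold):
    # Guess "member of the training set" whenever the model's loss on a
    # record is suspiciously low -- overfit models tend to score the
    # examples they memorized much better than unseen ones.
    return [loss < threshold for loss in losses]

# Hypothetical per-record losses: low for memorized training records,
# higher for records the model has never seen.
member_losses = [0.02, 0.05, 0.01, 0.08]
outsider_losses = [0.40, 0.55, 0.31, 0.47]
guesses = membership_guess(member_losses + outsider_losses, threshold=0.2)
print(guesses)  # members flagged True, outsiders False
```

Real attacks are more sophisticated, but this gap between losses on seen and unseen data is the signal they exploit.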

One important remedy to the privacy concerns of federated learning is to discard the user-trained models after they are integrated into the central model. The cloud server doesn’t need to store individual models once it updates its base model.

Another measure that can help is to increase the pool of model trainers. For example, if a model needs to be trained on the data of 100 users, the engineers can increase their pool of trainers to 250 or 500 users. For each training iteration, the system will send the base model to 100 random users from the training pool. This way, the system doesn’t collect trained parameters from any single user constantly.

Finally, by adding a bit of noise to the trained parameters and using normalization techniques, developers can considerably reduce the model’s ability to memorize users’ data.
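A rough sketch of that noise-adding step, in the spirit of differentially private aggregation: clip each device’s update so no single user dominates, then perturb it. The `clip_norm` and `noise_std` values here are illustrative assumptions, not recommendations:

```python
import random

def privatize(update, clip_norm=1.0, noise_std=0.1):
    # Clip the update's L2 norm, then add Gaussian noise, so that no
    # single user's contribution can be read back out of the parameters.
    norm = sum(v * v for v in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    return [v + random.gauss(0, noise_std) for v in clipped]

random.seed(42)
update = [0.8, -2.4, 1.5]  # hypothetical trained parameters from one device
print(privatize(update))   # clipped to unit norm, then perturbed
```

The server would aggregate many such privatized updates; the per-user noise largely averages out while individual contributions stay obscured.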

Federated learning is gaining popularity as it addresses some of the fundamental problems of modern artificial intelligence. Researchers are constantly looking for new ways to apply federated learning to new AI applications and overcome its limits. It will be interesting to see how the field evolves in the future.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



VentureBeat presents AI Innovation Awards nominees at Transform 2021



There is a consistent theme running through the agenda for VentureBeat’s Transform 2021 virtual conference: artificial intelligence and data analytics are being used in many different areas and in creative ways. AI is being used in fields as varied as fitness, apparel, energy, and real estate. It’s expected that social media companies founded in the past ten or so years will use AI and other advanced technologies, but it is reassuring when a company that has served generations is still keeping up with the latest technologies — and doing well.

AI is a complex field and the technology is ever-evolving, and VentureBeat has a front-row seat. There is research pushing the boundaries of what is considered possible. There are new products transforming how people work, live, and play. Amidst all of that, there are organizations and individuals solving certain challenges in ways that are innovative and creative. That is the purpose of the AI Innovation Awards, which recognize emergent, compelling, and influential work. The third annual AI Innovation Awards honor people and companies engaged in such work in five areas: natural language processing and understanding, business applications, edge innovation, “Startup Spotlight,” and AI for Good.

A nominating committee helped the editorial team with the selections. The members of the nominating committee for this year’s AI Innovation Awards were: Vijoy Pandey, the vice-president of engineering and CTO of cloud and distributed systems at Cisco; Raffael Marty, senior vice-president of product, cybersecurity, at Connectwise, and the former chief research and intelligence officer at Forcepoint; and Stacey Shulman, vice-president of the IoT Group and general manager of Health, Life Sciences and Emerging Technologies at Intel. They were generous with their time and knowledge, and provided the editorial team with an intriguing list of individuals and organizations to consider.

Natural language processing/understanding

Many things become possible when machines can understand the language people speak and write. Smart assistants can handle more requests and take on tasks across different industries. Translation services create a more global world. Productivity tools are more effective. There are so many different use cases, and natural language processing — which includes natural language understanding, natural language generation, and natural language interaction — makes it all work.

Primer uses machine learning techniques to help parse and collate a large number of documents across several languages in order to facilitate further investigation. Users feed Primer’s software a stream of documents, and it automatically summarizes what it determines to be the most important information out of that haystack of data. Users can then filter by topic, event, and other categories to drill down into the information Primer collected and go beyond the automatically generated headlines. Primer’s NLP platform is used by a number of United States federal agencies, and the company recently raised $110 million in funding.

EleutherAI was founded on the idea of making AI technology open source — and the first project on deck was building an open source model replicating OpenAI’s GPT-3 work. This past March, the EleutherAI team released two trained GPT-style language models, GPT-Neo 1.3B and GPT-Neo 2.7B. The code and the trained models are open sourced under the MIT license and can be used for free via Hugging Face’s Transformers platform. This team is pushing the envelope of NLP research through an open source approach.

Dr. Pei Wang of Temple University was nominated for his Non-Axiomatic Reasoning System (NARS) and its application to NLP. The NARS project, which Wang has worked on for approximately 20 years, attempts to uniformly explain and reproduce many cognitive faculties — including reasoning, learning, and planning — to build a unified theory, model, and system for AI. The nomination recognizes Wang’s persistence with his symbolic approach to AI, which is now being absorbed into other applications.

Hugging Face is democratizing NLP by building an open source community for sharing models, datasets, and other resources. The team is conducting research, creating NLP libraries such as Transformers and Tokenizers, and releasing tools to leverage models such as BERT, XLNet, and GPT-2. The nominating committee specifically noted that Sasha Rush — Associate Professor at Cornell Tech — is one of the brains behind Hugging Face.

Copilot, the project launched by GitHub that acts as a pair programmer and helps developers write better code, may be brand new, but it jumped onto the nomination list because of the way it suggests new code and learns the developer’s coding style. Copilot uses OpenAI Codex, which may be more capable than GPT-3 at generating programming code.

Business applications

It’s interesting to explore new ideas and compelling research, but the true impact of AI comes from the practical applications. Low-code/no-code tools are helping non-developers create applications and data pipelines. Robotic process automation streamlines workflows and makes business operations more efficient. Intelligent software and services help solve real-world problems. This is where life starts to feel like something out of science-fiction.

Incorta offers an all-in-one data-crunching service that lets customers analyze corporate data spread across multiple databases and render it all into charts and graphics. The company’s service helps organizations acquire, enrich, analyze, and act upon business data: upwards of tens of billions of rows become “analytics-ready” without the need to pre-aggregate, reshape, or transform the data in any way. Incorta helps reduce data bloat in organizations.

Dr. Sheila Nirenberg, a neuroscientist at Cornell Medical School, has successfully “cracked the code” for how the retina sends signals to the brain. Her work, combined with optogenetics, helps blind people see again. While that is impressive on its own, Dr. Nirenberg was nominated for the way she has taken what she learned in this field and applied it to AI. Her company, Nirenberg Neuroscience, applies the “neural code approach” of a mammalian retina to greatly reduce the amount of training data needed for activity-detection models. This approach allows very hard-to-train models to become easy to train with supervised learning.

DeepSee.ai automates manual business processes by combining open source and proprietary machine learning, linguistic comparison and prediction techniques, and sentiment analysis. DeepSee’s cloud-hosted platform captures, extracts, normalizes, labels, and analyzes unstructured data, and then surfaces trends and patterns for review. DeepSee provides a pipeline to deliver AI-generated templates, rules, and logic.

Pilot applies AI to the field of financial tech — fintech — and provides context-specific reporting, insights, and expertise for businesses that may not have an in-depth finance team. Pilot’s software provides automated visibility, error management, and predictive insights to help customers make better budgeting and spending decisions.

Indico allows customers to automate the intake and analysis of document- and image-based workflows. The platform, which can be deployed in private cloud or on-premises environments or as a managed service, ingests PDFs, Word documents, and other unstructured text, images, and documents. Once ingested, the data is processed using natural language understanding models and chained together into pipelines to perform data classification, extraction, and comparison. Indico applies transfer learning — where a model developed for one task is used for another task — to deploy unstructured content more effectively.

Edge

Last year’s awards focused on computer vision, but this year, edge AI is becoming a bigger topic of conversation. The pendulum swings regularly between processing all the data in a centralized location and processing data right on the device. A farmer standing in the middle of the field doesn’t have WiFi — making it really difficult to use the data collected by sensors and other smart devices. This is a situation that is going to be familiar across multiple industries, as the Internet of Things and near ubiquitous network capabilities promised by 5G creates new opportunities with real-time data.

Autonomy Institute is a cooperative research consortium focused on advancing and accelerating autonomy and AI at the edge. The consortium announced a pilot program at the Texas Military Department’s Camp Mabry location in Austin, Texas to build out a test smart city environment to optimize traffic management, autonomous cars, industrial robotics, autonomous delivery, 911 drones, and automated road and bridge inspection. The program deploys the Public Infrastructure Network Node (PINN), a unified open standard supporting 5G wireless, edge computing, radar, lidar, enhanced GPS, and intelligent transportation systems (ITS). PINN clusters in a city deployment could be positioned to collect information from the sensors and cameras at a street intersection. Edge computing using PINN is what makes it possible to process all of those signals and act on them, such as changing the traffic lights as a car approaches the intersection.

DEKA Research’s ROXO bot was built on top of the iBot wheelchair base — a wheelchair that can climb stairs and lift riders to eye level with others — to fill inventor Dean Kamen’s (person behind the Segway) vision to make wheelchairs more affordable. Removing the chair attachment and replacing with a delivery pod turned the robot into a hardened delivery solution that can drive over nearly any terrain and can climb stairs. Under a partnership with FedEx, the ROXO bot provides package delivery.

Edgeworx turns any computing device — regardless of compute resources or operating system — into an edge software platform, allowing developers to simply and securely deploy, manage, and orchestrate applications from cloud to edge. Its technology was designed from the ground up to be the infrastructure layer for edge devices and to interface with legacy systems, the cloud, and the data center. Edgeworx enables customers to run real software on edge devices with the same level of security and remote control they would have in a cloud environment.

SambaNova, which was founded by Oracle and Sun Microsystems veteran Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré, develops chips for AI workloads. AI accelerators are a type of specialized hardware designed to speed up AI applications such as neural networks, deep learning, and various forms of machine learning from the data center to the edge.

Multiply Labs, founded by two MIT alumni, has helped pharmaceutical companies produce biologic drugs with its robotic manufacturing platform. Operating at the intersection of robotics and pharmaceutical manufacturing, the company makes the production of individualized drugs at industrial scale possible through automation.

Startup Spotlight

There are many players in the field, from small startups working on one specific idea to academic and private research laboratories pushing the boundaries of what is possible and large, well-funded companies exploring a broad array of questions. This category focuses on companies that work with AI, have raised $35 million or less in funding, and have been in operation for no more than two years. The award spotlights a startup’s potential to make a significant contribution to the field in the years to come.

Apiiro’s Code Risk Platform accelerates development by allowing organizations to identify and prioritize risky code changes before they become part of the development pipeline. Apiiro can identify and fix security problems during the development process because it analyzes the developer’s behavior to identify potentially risky behaviors that could impact the organization. The platfrom can learn historical behavior of application, infrastructure-as-code, open source components.

Medical Informatics Corporation created Sickbay, a technology platform that uses data to help collect information on the patient. The medical environment is awash in data, but it isn’t stored in a place where the medical team can access it.

Udyogyantra is focused on food safety and supply chain transparency. The SmartQC system standardizes and improves food quality by providing real-time insights such as food temperature, quantity, and consistency.

Parity analyzes documentation, identifies risk zones, and recommends methods to mitigate harmful model qualities. Parity offers services designed to identify and remove bias from AI.

TabNine is built around a deep learning AI that studies publicly shared code, primarily by scanning GitHub repos, to suggest time-saving code completions, predict errors, and generally make coding better. TabNine also plugs into the developer’s preferred IDE.

AI for Good

The AI for Good award honors AI-driven technology, applications, and activism that go beyond making things easier and faster to making a difference. This category looks at people and companies working to protect human lives, fight injustice, and otherwise improve society. These are the ways AI is arguably making the world a better place, or if not better, then at least safer.

Jake Porway, the founder of DataKind, is pushing to help non-profits connect with experts in the field. As more data scientists get involved, they are looking for opportunities to make suggestions about product development or strategy after analyzing data in a business setting. This is a case where the platform exists specifically to harness people’s know-how for causes that go beyond corporate objectives.

The work Carla Gomes has done in AI has benefited society: her work on hydropower dam placement was done in coordination with Brazil. The effort made sure the projects struck the right balance between methane production and environmental damage on the one hand and low-cost electricity generation on the other.

David Rolnick wrote a widely circulated and discussed paper about the various ways AI can help with climate change. Nearly every climate change workshop held in recent months now includes a segment on using AI technologies. That change came about because of David Rolnick.

The Internet Watch Foundation is a team of 21 individuals who work out of the foundation’s office in Cambridgeshire. These individuals spend hours trawling through images and videos containing child sexual abuse, and each photo or piece of footage they find needs to be assessed and labeled. Last year alone the team identified 153,383 web pages with links to child sexual abuse imagery. This creates a vast database that can be shared internationally in an attempt to stem the flow of abuse. The classifications are also used to work out how long someone convicted of a crime should be sentenced for.

Folding@Home built the first exascale edge compute platform performing AI operations. Over a million citizen scientists (people who install F@H on their computers) take part in what is essentially the world’s largest supercomputer. The platform is compiled and optimized for dozens of architectures: different Intel and AMD chips, plus dozens of GPU types and models. Folding@Home solved quite a few basic problems for SARS-CoV-2, which helped make it possible to begin vaccine production.

The winners will be announced on the morning of July 16 as part of the activities wrapping up Transform 2021.



Getting to trustworthy AI | VentureBeat



Artificial intelligence will be key to helping humanity travel to new frontiers and solve problems that today seem insurmountable. It enhances human expertise, makes predictions more accurate, automates decisions and processes, frees humans to focus on higher value work, and improves our overall efficiency.

But public trust in the technology is at a low point, and there is good reason for that. Over the past several years, we’ve seen multiple examples of AI that makes unfair decisions, or that doesn’t give any explanation for its decisions, or that can be hacked.

To get to trustworthy AI, organizations have to resolve these problems with investments on three fronts: First, they need to nurture a culture that adopts and scales AI safely. Second, they need to create investigative tools to see inside black box algorithms. And third, they need to make sure their corporate strategy includes strong data governance principles.

1. Nurturing the culture

Trustworthy AI depends on more than just the responsible design, development, and use of the technology. It also depends on having the right organizational operating structures and culture. For example, many companies that have concerns about bias in their training data have also expressed concern that their work environments are not conducive to nurturing women and minorities into their ranks. There is, indeed, a very direct correlation. To get started and really think about how to make this culture shift, organizations need to define what responsible AI looks like within their function, why it’s unique, and what the specific challenges are.

To ensure fair and transparent AI, organizations must pull together task forces of stakeholders from different backgrounds and disciplines to design their approach. This method will reduce the likelihood of underlying prejudice in the data that’s used to create AI algorithms that could result in discrimination and other social consequences.

Task force members should include experts and leaders from various domains who can understand, anticipate, and mitigate relevant issues as necessary. They must have the resources to develop, test, and quickly scale AI technology.

For example, machine learning models for credit decisioning can exhibit gender bias, unfairly discriminating against female borrowers if uncontrolled. A responsible-AI task force can roll out design thinking workshops to help designers and developers think through the unintended consequences of such an application and find solutions. Design thinking is foundational to a socially responsible AI approach.

To ensure this new thinking becomes ingrained in the company culture, all stakeholders from across an organization, from data scientists and CTOs to chief diversity and inclusion officers, must play a role. Fighting bias and ensuring fairness is a socio-technological challenge; it is solved when employees who may not be used to collaborating and working with each other start doing so, specifically around data and the impact models can have on historically disadvantaged people.

2. Trustworthy tools

Organizations should seek out tools to monitor transparency, fairness, explainability, privacy, and robustness of their AI models. These tools can point teams to problem areas so that they can take corrective action (such as introducing fairness criteria in the model training and then verifying the model output).
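As one illustration of the kind of check such monitoring tools perform, the sketch below computes a demographic parity gap, the spread in positive-outcome rates across groups, on hypothetical credit decisions. The data and the metric choice are assumptions for illustration; real tools compute many such fairness metrics:

```python
def demographic_parity_gap(predictions, groups):
    # Difference in approval rates between demographic groups; a gap near
    # zero is one (coarse) signal of fairness w.r.t. the protected attribute.
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical credit-decision outputs (1 = approved) and applicant gender.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["f", "m", "m", "f", "f", "m", "f", "m", "m", "f"]
gap = demographic_parity_gap(preds, groups)
print(round(gap, 2))  # → 0.2 (approval rate 0.4 for "f" vs. 0.6 for "m")
```

A team monitoring this metric over time could flag a widening gap and trigger the corrective actions described above, such as adding fairness criteria to training.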

A range of such investigative tools is available.

There are versions of these tools that are freely available via open source and others that are commercially available. When choosing these tools, it is important to first consider what you need the tool to actually do and whether you need it to work on production systems or those still in development. You must then determine what kind of support you need and at what price, breadth, and depth. An important consideration is whether the tools are trusted and referenced by global standards boards.

3. Developing data and AI governance

Any organization deploying AI must have clear data governance in place. This includes building a governance structure (committees and charters, roles and responsibilities) as well as creating policies and procedures for data and model management. With respect to human and automated governance, organizations should adopt frameworks for healthy dialog that help craft data policy.

This is an opportunity to promote data and AI literacy across an organization. For highly regulated industries, organizations can find specialized tech partners that can also ensure the model risk management framework meets supervisory standards.

There are dozens of AI governance boards around the world working with industry to help set standards for AI. IEEE is one example. IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. Its work encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. Such international standards bodies can help guide your organization to adopt standards that are right for you and your market.

Conclusion

Curious how your org ranks when it comes to AI-ready culture, tooling, and governance? Assessment tools can help you determine how well prepared your organization is to scale AI ethically on these three fronts.

There is no magic pill for making your organization a truly responsible steward of artificial intelligence. AI is meant to augment and enhance your current operations, and a deep learning model can only be as open-minded, diverse, and inclusive as the team developing it.

Phaedra Boinodiris, FRSA, is an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and Ethics. She has focused on inclusion in technology since 1999 and is a member of the Cognitive World Think Tank on enterprise AI.
