Categories
Security

Canada bans Huawei equipment from 5G networks, orders removal by 2024

Canada has banned the use of Huawei and fellow Chinese tech giant ZTE’s equipment in its 5G networks, its government has announced. In a statement, it cited national security concerns for the move, saying that the suppliers could be forced to comply with “extrajudicial directions from foreign governments” in ways that could “conflict with Canadian laws or would be detrimental to Canadian interests.”

Telcos will be prevented from procuring new 4G or 5G equipment from the companies by September this year, and must remove all ZTE- and Huawei-branded 5G equipment from their networks by June 28th, 2024. Equipment must also be removed from 4G networks by the end of 2027. “The Government is committed to maximizing the social and economic benefits of 5G and access to telecommunications services writ large, but not at the expense of security,” the Canadian government wrote in its statement.

The move makes Canada the latest member of the Five Eyes intelligence alliance to place restrictions on the use of Huawei and ZTE equipment in its communications networks. US telcos are spending billions removing and replacing the equipment in their networks, while the UK banned the use of Huawei’s equipment in 2020 and ordered its removal by 2027. Australia and New Zealand have also restricted the use of the companies’ equipment on national security grounds.

At the core of these concerns is China’s National Intelligence Law, which critics claim can be used to make Chinese organizations and citizens cooperate with state intelligence work, CBC News reports. The fear is this could be used to force Chinese tech companies like Huawei and ZTE to hand over sensitive information from foreign networks to the Chinese government.

Huawei disputes the claim and says it’s based on a “misreading” of China’s law. “China will comprehensively and seriously evaluate this incident and take all necessary measures to safeguard the legitimate rights and interests of Chinese companies,” the Chinese embassy in Canada said in a statement in response to the ban. In a statement emailed to The Verge, Alykhan Velshi, a vice president at Huawei Canada, called the policy “an unfortunate political decision that has nothing to do with cyber security or any of the technologies in question.”

Canada has taken around three years to come to its decision about the use of Huawei and ZTE equipment in its telecoms networks, a period which Bloomberg notes has coincided with worsening relations between it and China. In December 2018, Canada arrested Huawei’s Chief Financial Officer Meng Wanzhou on suspicion of violating US sanctions. Days later, China imprisoned two Canadian nationals: former diplomat Michael Kovrig and entrepreneur Michael Spavor. After the US came to a deferred-prosecution deal with Meng that allowed her to return to China last year, the Canadians were released.

Opposition politicians criticized the Canadian government’s delay. “In the years of delay, Canadian telecommunications companies purchased hundreds of millions of dollars of Huawei equipment which will now need to be removed from their networks at enormous expense,” Conservative MP Raquel Dancho said in a statement reported by the Toronto Sun. But Bloomberg reports that the likes of BCE and Telus have already been winding down their use of Huawei’s equipment over fears of an eventual ban.

Update May 20th, 6:04PM ET: Added statement from Huawei.

Categories
AI

How neural networks simulate symbolic reasoning

Researchers at the University of Texas have discovered a new way for neural networks to simulate symbolic reasoning. This discovery sparks an exciting path toward uniting deep learning and symbolic reasoning AI.

In the new approach, each neuron has a specialized function that relates to specific concepts. “It opens the black box of standard deep learning models while also being able to handle more complex problems than what symbolic AI has typically handled,” Paul Blazek, University of Texas Southwestern Medical Center researcher and one of the authors of the Nature paper, told VentureBeat.

This work complements previous research on neurosymbolic methods such as MIT’s Clevrer, which has shown some promise in predicting and explaining counterfactual possibilities more effectively than neural networks. Additionally, DeepMind researchers previously elaborated on another neural network approach that outperformed state-of-the-art neurosymbolic approaches.

Essence neural networks mimic human reasoning

The team at the University of Texas coined the term “essence neural network” (ENN) to characterize its approach, which represents a way of building neural networks rather than a specific architecture. For example, the team has implemented the approach with popular architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

The big difference is that they did away with backpropagation, which is a cornerstone of many AI processes. “Backpropagation famously opened deep neural networks to efficient training using gradient descent optimization methods, but this is not generally how the human mind works,” Blazek said. ENNs don’t use backpropagation or gradient descent. Rather, ENNs mimic the human reasoning process, learn the structure of concepts from data, and then construct the neural network accordingly.

Blazek said the new technique could have practical commercial applications in the next few years. For example, the team has demonstrated a few ENN applications to automatically discover algorithms and generate novel computer code. “Standard deep learning took several decades of development to get where it is now, but ENNs will be able to take shortcuts by learning from what has worked with deep learning thus far,” he said.

Promising applications of the new technique include the following:

  1. Cognitive science: The researchers designed ENNs as a proof-of-principle for their new neurocognitive theory. It integrates ideas from the philosophy of mind, psychology, neuroscience, and artificial intelligence to explore how the human mind processes information. The theoretical framework could prove beneficial in exploring various theories and models from all these fields.
  2. Algorithm discovery: The researchers found that ENNs can discover new algorithms, similarly to how people can.
  3. High-stakes applications: The research establishes basic building blocks for explainable deep learning systems that can be better understood before deployment and post hoc analysis.
  4. Robust AI: There has been great concern about adversarial attacks against black-box AI systems. ENNs are naturally more robust to adversarial attacks, particularly for symbolic reasoning use cases.
  5. Machine teaching with limited data: An ENN can train on limited, idealized data and then generalize to much more complex examples that it has never seen.

Working backward from biology to understand the brain

In contrast to most AI research, the researchers approached the problem from a biological perspective. “The original purpose of our work was to understand how the neuronal structure of the brain processes information,” Blazek said.

The team ultimately proposed a generalized framework for understanding how the brain processes information and encodes cognitive processes. The core idea is that each neuron makes a specialized distinction, either signifying a specific concept or differentiating between two opposing concepts. In other words, one type of neuron makes the distinction “like A” versus “not like A,” and the other kind of neuron makes the distinction “more like A” versus “more like B.”

These neurons are arranged in an appropriate hierarchy to integrate these distinctions and arrive at more sophisticated conclusions. There are many ways to design the specialized distinction made by each neuron and to arrange the neurons to make complex decisions.
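
As a toy illustration only (not the paper’s actual construction), the sketch below expresses those two kinds of distinctions as tiny functions: one unit decides “like A” versus “not like A” by thresholding similarity to a prototype vector, and another decides “more like A” versus “more like B” by comparing distances to two prototypes. The prototypes and inputs here are invented.

```python
import numpy as np

def like_a(x, prototype_a, threshold=0.5):
    """'Like A' vs 'not like A': fires if x is similar enough to prototype A."""
    similarity = x @ prototype_a / (np.linalg.norm(x) * np.linalg.norm(prototype_a))
    return similarity >= threshold

def more_like(x, prototype_a, prototype_b):
    """'More like A' vs 'more like B': compares distances to two prototypes."""
    return np.linalg.norm(x - prototype_a) < np.linalg.norm(x - prototype_b)

# Hypothetical 2D "concepts": A = tall-and-narrow, B = short-and-wide.
A = np.array([1.0, 0.1])
B = np.array([0.1, 1.0])
x = np.array([0.9, 0.3])

print(like_a(x, A))        # does x resemble concept A at all?
print(more_like(x, A, B))  # is x closer to A than to B?
```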

This theory of understanding neural information processing agrees with various theories and observations from philosophy of mind, psychology, and neuroscience. “The surprising thing about this framework is that the neurons reason about ideas in the exact same way that philosophers have always described our reasoning process,” Blazek said.

Categories
AI

Google releases TF-GNN for creating graph neural networks in TensorFlow

Google today released TensorFlow Graph Neural Networks (TF-GNN) in alpha, a library designed to make it easier to work with graph-structured data using TensorFlow, its machine learning framework. Already used in production at Google for spam and anomaly detection, traffic estimation, and YouTube content labeling, TF-GNN is also designed to “encourage collaborations with researchers in industry,” Google says.

A graph is a set of objects, places, or people and the connections between them. It represents the relations (edges) between a collection of entities (nodes or vertices), all of which can store data. Directionality can be ascribed to the edges to describe information, traffic flow, and more.

More often than not, the data in machine learning problems is structured or relational and thus can be described with a graph. Fundamental research on GNNs is decades old, but recent advances have led to great achievements in many domains, like modeling the transition of glass from a liquid to a solid and predicting pedestrian, cyclist, and driver behavior on the road.

Above: Graphs can model the relationships between many different types of data, including web pages (left), social connections (center), or molecules (right).

Image Credit: Google

Indeed, GNNs can be used to answer questions about multiple characteristics of graphs. By working at the graph level, they can try to predict aspects of the entire graph, for example identifying the presence of certain “shapes” like circles in a graph that might represent close social relationships. GNNs can also be used on node-level tasks to classify the nodes of a graph or at the edge level to discover connections between entities.

TF-GNN provides building blocks for implementing GNN models in TensorFlow. Beyond the modeling APIs, the library also delivers tooling around the task of working with graph data, including a data-handling pipeline and example models.

Also included with TF-GNN is an API to create GNN models that can be composed with other types of AI models. In addition to this, TF-GNN ships with a schema to declare the topology of a graph (and tools to validate it), helping to describe the shape of training data.
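
The announcement doesn’t include code, but as a rough, hedged sketch of what declaring a small graph might look like with the alpha library, the snippet below builds one “users” node set and one “friendships” edge set. The names tensorflow_gnn, GraphTensor.from_pieces, NodeSet.from_fields, EdgeSet.from_fields, and Adjacency.from_indices are written from memory of the alpha release and should be checked against the official documentation; the data and feature names are made up.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn  # alpha release: pip install tensorflow-gnn (assumed package name)

# A tiny graph: three "users" nodes joined by two "friendships" edges.
# All values are invented; this only illustrates the general shape of the API.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "users": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),
            features={"age": tf.constant([[25.0], [31.0], [47.0]])},
        ),
    },
    edge_sets={
        "friendships": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([2]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("users", tf.constant([0, 1])),
                target=("users", tf.constant([1, 2])),
            ),
        ),
    },
)
# The resulting graph tensor can then be fed to TF-GNN's modeling APIs for training.
```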

“Graphs are all around us, in the real world and in our engineered systems … In particular, given the myriad types of data at Google, our library was designed with heterogeneous graphs in mind,” Google’s Sibon Li, Jan Pfeifer, Bryan Perozzi, and Douglas Yarrington wrote in the blog post introducing TF-GNN.

TF-GNN adds to Google’s growing collection of TensorFlow libraries, which spans TensorFlow Privacy, TensorFlow Federated, and TensorFlow.Text. More recently, the company open-sourced TensorFlow Similarity, which trains models that search for related items — for example, finding similar-looking clothes and identifying currently playing songs.

Categories
AI

What are graph neural networks (GNN)?

Graphs are everywhere around us. Your social network is a graph of people and relations. So is your family. The roads you take to go from point A to point B constitute a graph. The links that connect this webpage to others form a graph. When your employer pays you, your payment goes through a graph of financial institutions.

Basically, anything that is composed of linked entities can be represented as a graph. Graphs are excellent tools to visualize relations between people, objects, and concepts. Beyond visualizing information, however, graphs can also be good sources of data to train machine learning models for complicated tasks.

Graph neural networks (GNN) are a type of machine learning algorithm that can extract important information from graphs and make useful predictions. With graphs becoming more pervasive and richer with information, and artificial neural networks becoming more popular and capable, GNNs have become a powerful tool for many important applications.

Transforming graphs for neural network processing

Every graph is composed of nodes and edges. For example, in a social network, nodes can represent users and their characteristics (e.g., name, gender, age, city), while edges can represent the relations between the users. A more complex social graph can include other types of nodes, such as cities, sports teams, news outlets, as well as edges that describe the relations between the users and those nodes.

Unfortunately, the graph structure is not well suited for machine learning. Neural networks expect to receive their data in a uniform format. Multi-layer perceptrons expect a fixed number of input features. Convolutional neural networks expect a grid that represents the different dimensions of the data they process (e.g., width, height, and color channels of images).

Graphs can come in different structures and sizes, which does not conform to the rectangular arrays that neural networks expect. Graphs also have other characteristics that make them different from the type of information that classic neural networks are designed for. For instance, graphs are “permutation invariant,” which means changing the order and position of nodes doesn’t make a difference as long as their relations remain the same. In contrast, changing the order of pixels results in a different image and will cause the neural network that processes them to behave differently.

To make graphs useful to deep learning algorithms, their data must be transformed into a format that can be processed by a neural network. The type of formatting used to represent graph data can vary depending on the type of graph and the intended application, but in general, the key is to represent the information as a series of matrices.

For example, consider a social network graph. The nodes can be represented as a table of user characteristics. The node table, where each row contains information about one entity (e.g., user, customer, bank transaction), is the type of information that you would provide a normal neural network.

But graph neural networks can also learn from other information that the graph contains. The edges, the lines that connect the nodes, can be represented in the same way, with each row containing the IDs of the users and additional information such as date of friendship, type of relationship, etc. Finally, the general connectivity of the graph can be represented as an adjacency matrix that shows which nodes are connected to each other.

When all of this information is provided to the neural network, it can extract patterns and insights that go beyond the simple information contained in the individual components of the graph.
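
As a minimal sketch of that layout (with made-up users and features), the arrays below show a node feature table, an edge table, and the adjacency matrix derived from the edges.

```python
import numpy as np

# Node table: one row per user, with numeric features (age, city ID) - hypothetical data.
node_features = np.array([
    [34, 0],   # user 0
    [28, 1],   # user 1
    [45, 0],   # user 2
])

# Edge table: one row per friendship (source ID, target ID, years of friendship).
edges = np.array([
    [0, 1, 5],
    [1, 2, 2],
])

# Adjacency matrix: entry (i, j) = 1 if user i and user j are connected.
num_nodes = node_features.shape[0]
adjacency = np.zeros((num_nodes, num_nodes), dtype=int)
for src, dst, _ in edges:
    adjacency[src, dst] = adjacency[dst, src] = 1  # friendship is undirected

print(adjacency)
```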

Graph embeddings

Graph neural networks can be created like any other neural network, using fully connected layers, convolutional layers, pooling layers, etc. The type and number of layers depend on the type and complexity of the graph data and the desired output.

The GNN receives the formatted graph data as input and produces a vector of numerical values that represent relevant information about nodes and their relations.

This vector representation is called “graph embedding.” Embeddings are often used in machine learning to transform complicated information into a structure that can be differentiated and learned. For example, natural language processing systems use word embeddings to create numerical representations of words and their relations to one another.

How does the GNN create the graph embedding? When the graph data is passed to the GNN, the features of each node are combined with those of its neighboring nodes. This is called “message passing.” If the GNN is composed of more than one layer, then subsequent layers repeat the message-passing operation, gathering data from neighbors of neighbors and aggregating them with the values obtained from the previous layer. For example, in a social network, the first layer of the GNN would combine the data of the user with those of their friends, and the next layer would add data from the friends of friends and so on. Finally, the output layer of the GNN produces the embedding, which is a vector representation of the node’s data and its knowledge of other nodes in the graph.

Interestingly, this process is very similar to how convolutional neural networks extract features from pixel data. Accordingly, one very popular GNN architecture is the graph convolutional network (GCN), which uses convolution layers to create graph embeddings.
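
A toy NumPy sketch of that message-passing step, in the spirit of a graph convolutional layer (not any particular library’s implementation): each node averages its own features with its neighbors’, then applies a weight matrix and a ReLU. Stacking two such layers pulls in information from neighbors of neighbors. The weights below are random, purely for illustration; in a real GNN they would be learned by training on a downstream task.

```python
import numpy as np

def gcn_layer(features, adjacency, weights):
    """One message-passing step: aggregate neighbor features, then transform."""
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    degree = a_hat.sum(axis=1, keepdims=True)
    aggregated = (a_hat / degree) @ features          # mean over each node's neighborhood
    return np.maximum(aggregated @ weights, 0)        # linear transform + ReLU

rng = np.random.default_rng(0)
features = rng.normal(size=(3, 4))       # 3 nodes, 4 input features (toy values)
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

embeddings = gcn_layer(gcn_layer(features, adjacency, W1), adjacency, W2)
print(embeddings.shape)  # (3, 2): a 2-dimensional embedding per node
```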

Applications of graph neural networks

Once you have a neural network that can learn the embeddings of a graph, you can use it to accomplish different tasks.

Here are a few applications for graph neural networks:

Node classification: One of the powerful applications of GNNs is adding new information to nodes or filling gaps where information is missing. For example, say you are running a social network and you have spotted a few bot accounts. Now you want to find out if there are other bot accounts in your network. You can train a GNN to classify other users in the social network as “bot” or “not bot” based on how close their graph embeddings are to those of the known bots.
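
As a small illustration of that idea (with invented embeddings rather than ones produced by a trained GNN), the sketch below scores each unlabeled account by its cosine similarity to the average embedding of the known bots; in practice the threshold for flagging an account would be tuned on labeled data.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy embeddings, standing in for the output of a trained GNN (5 users, 2 dims).
embeddings = np.array([
    [0.90, 0.10],   # user 0, known bot
    [0.80, 0.20],   # user 1, known bot
    [0.10, 0.90],   # user 2
    [0.85, 0.15],   # user 3
    [0.20, 0.80],   # user 4
])
known_bots = [0, 1]
bot_centroid = embeddings[known_bots].mean(axis=0)

for user in range(len(embeddings)):
    if user in known_bots:
        continue
    print(f"user {user}: bot-likeness {cosine(embeddings[user], bot_centroid):.2f}")
```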

Edge prediction: Another way to put GNNs to use is to find new edges that can add value to the graph. Going back to our social network, a GNN can find users (nodes) who are close to you in embedding space but who aren’t your friends yet (i.e., there isn’t an edge connecting you to each other). These users can then be introduced to you as friend suggestions.
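
A similarly toy sketch of friend suggestion: score every pair of users that isn’t already connected by the dot product of their (again invented) embeddings, and surface the highest-scoring pairs as candidate edges.

```python
import numpy as np
from itertools import combinations

# Toy node embeddings standing in for GNN output (4 users, 2 dims) - hypothetical values.
embeddings = np.array([
    [0.9, 0.1],
    [0.8, 0.3],
    [0.1, 0.9],
    [0.7, 0.2],
])
existing_edges = {(0, 1), (1, 2)}  # friendships already in the graph

# Score every unconnected pair by embedding similarity (dot product).
candidates = []
for i, j in combinations(range(len(embeddings)), 2):
    if (i, j) not in existing_edges and (j, i) not in existing_edges:
        candidates.append(((i, j), float(embeddings[i] @ embeddings[j])))

# Highest-scoring pairs become friend suggestions.
for pair, score in sorted(candidates, key=lambda c: c[1], reverse=True):
    print(pair, round(score, 2))
```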

Clustering: GNNs can glean new structural information from graphs. For example, in a social network where everyone is in one way or another related to others (through friends, or friends of friends, etc.), the GNN can find nodes that form clusters in the embedding space. These clusters can point to groups of users who share similar interests, activities, or other inconspicuous characteristics, regardless of how close their relations are. Clustering is one of the main tools used in machine learning–based marketing.

Graph neural networks are very powerful tools. They have already found powerful applications in domains such as route planning, fraud detection, network optimization, and drug research. Wherever there is a graph of related entities, GNNs can help get the most value from the existing data.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Categories
AI

Airbnb CTO says graph neural networks will be big in 2021

Executives have to prioritize whether to experiment with cutting-edge technologies or wait to see results from other implementations first, Airbnb chief technology officer Vanja Josifovski said in a conversation with VentureBeat founder and CEO Matt Marshall at VentureBeat’s Transform 2021 virtual conference. Most enterprises — even large ones — have constrained resources, so they have to decide which technologies to invest in and which to wait out.

Typically, the decision is to use state-of-the-art technologies in critical areas and avoid experimental or emerging technology everywhere else, Josifovski said.

“It’s one of the hardest parts of my job because I do want to hire the best and smartest people, but then I do want to channel that ability into the areas that will provide business impact,” Josifovski said. “In some cases, [we] refrain from using state of the art until we think that we’ll get the return back.”

Josifovski and Marshall discussed some of the innovative trends in artificial intelligence (AI). “If we look at what’s happening today, there are some amazing technologies coming up,” Josifovski said, such as graph neural networks, transformer models, and language models.

Graph neural networks

Graph neural networks will be a major trend in 2021, Josifovski predicted. At its core, the deep learning paradigm is built around structured data, like images, and sequential data, like text. However, the way the data must be used and the structure the model requires can be rigid. Graph neural networks, in contrast, allow a more flexible architecture because the data defines the architecture of the model.

“Graph neural networks is a next iteration that allows us to use a lot more data within the deep learning framework in a much more natural way,” Josifovski said. “I feel that they will open a whole new area, where you’re going to be able to apply the deep learning paradigm a lot easier on a whole different set of data.”

Pinterest has used the model to build a recommendation feature, and Uber built a fraud detection model, for example.

Language models

While large language models are “an amazing technological achievement,” it may be too soon to work with them, Josifovski said. Scaling these models is a relatively new capability, and finding the data to train them remains a challenge. He added that using models in production requires predictability. There have been good examples of using the models to generate text and webpages; this is a good fit, Josifovski said, because those applications aren’t “mission critical.” By contrast, such models won’t initially fit well in systems like self-driving cars.

While language models don’t currently work with chatbots, Josifovski believes they will in the future.

Center for innovation?

In the early years, academia was the center of innovation and research for AI, with large companies developing some proprietary technologies, Josifovski said. Over time, waves of innovation in AI came from bigger companies, like Google, Amazon, Microsoft, and Facebook. As many of the technologies become commoditized, Josifovski predicts another shift, this time to smaller, independent companies. In areas like storage and cloud infrastructure management, well-resourced companies will provide the infrastructure to allow smaller players to develop AI.

“The center of gravity will slightly shift from the larger companies into smaller independent companies,” Josifovski said. “We will see a full ecosystem of companies that [have] been developed now and will shape the future.”

Categories
AI

Versa Networks raises $84M to protect cloud networks

Versa Networks, a security vendor in the software-defined networking space, today announced that it closed an $84 million series D funding round co-led by Princeville Capital and RPS Ventures, with additional participation from Sequoia Capital, an existing investor. CEO Kelly Ahuja says that the proceeds — which bring Versa’s total raised to $196 million — will be put toward scaling the company’s platform and expanding its marketing and global sales teams.

According to Gartner, the secure access service edge (SASE) market is expected to be worth almost $11 billion by 2024, with at least 40% of enterprises having SASE strategies in place over the next three years. A term coined by Gartner, SASE aims to simplify wide-area networking and security by delivering both as a cloud service directly to the source of connection — i.e., an edge computing location — rather than an enterprise datacenter. Security is based around identity, real-time context, and enterprise security and compliance policies. As for identity, it can be attached to anything from a person to a device, branch office, cloud service, application, or an IoT system.

San Jose, California-based Versa was founded in 2012 by brothers Kumar and Apurva Mehta following an 8-year stay at Juniper Networks, where they led the development of Juniper’s MX series routers and mobility portfolio. During their tenure, the Mehta brothers came across a customer need to integrate cloud services into routers, which presented complexities. They developed a software-defined, programmable solution that integrated network and security with a decoupling of software and hardware, which formed the basis of Versa’s first product.

“During the pandemic, many businesses used the downtime in branch offices to accelerate refreshes and prepare for the upturn, and shifted priority to enabling work-from-anywhere, including a hybrid environment. The results were stellar as we experienced two times year-over-year growth,” Ahuja told VentureBeat via email. “Versa is among the fastest-growing companies in one of the fastest-growing categories. The demand for our solution has been astronomical. We are not opportunity-limited, but have been capacity-limited in our go-to-market.”

SASE

Available via the cloud, on-premises, or as a hybrid of both, Versa’s platform connects enterprise branches and end-users to remote apps. It offers an architecture combining security, networking, analytics, and automation into one software solution, with hardware appliances and admin dashboards that offer policy configuration and access control options.

According to Ahuja, Versa uses AI and machine learning for several aspects of its platform, including its networking and security as well as its management, orchestration, and analytics tools. “In networking and security, we use telemetry datasets from the various elements, as well as from the underlay or cloud and software-as-a-service reachability, to program the optimal path to connect users to applications,” he explained. “In our management, orchestration and analytics, we use all the datasets gathered to train [machine learning] models that allow for faster and automated correlation of operational issues and resolving them.”

Beyond security incumbents like Zscaler and Palo Alto Networks, Versa considers Cisco, VMware, and Fortinet its competitors. But the company, which has close to 500 employees, has managed to attract over 5,000 customers and more than 500,000 sites under contract to date.

“Versa enables multi-cloud deployments for small to very large enterprises with security, reliability, and complete visibility for IT organizations,” Ahuja said. “It supports enterprise-wide internet of things implementations by automatically detecting new devices, authenticating, and applying appropriate policies and security practices for each device … And it delivers secure, high-performance, and low-latency deployments for unified communications, videoconferencing, and VoIP to enterprise branch offices, remote teleworkers, and home-based call agents, ensuring a high-quality experience.”

Categories
AI

Intel launches more silicon and software for 5G wireless networks

Intel made the case today that its silicon chips and software are accelerating 5G wireless networks at the edge, and the big chipmaker is launching new chips to further improve its position in virtual radio access networks (vRAN) and other 5G technologies.

Intel VP Dan Rodriguez made the announcements in a keynote speech for the virtual Mobile World Congress event. By 2023, experts expect 75% of data will be created outside of the datacenter — at the edge in factories, hospitals, retail stores, and across cities. Developers want to converge various capabilities at the edge, such as AI, analytics, media, and networking, and Intel wants to be there with the right technology.

In a recent survey of 511 information technology decision-makers, over 78% said they believe 5G technology is crucial to keeping pace with innovation, and nearly 80% said 5G technologies will affect their businesses, Intel reported.

With this in mind, Rodriguez said Reliance Jio, Deutsche Telekom, and Dish Wireless are transforming their networks on Intel architecture. The vRAN promises cloud-like agility and automation capabilities that can help optimize the RAN performance and ultimately improve the experience for users.

Intel is also expanding its Agilex family of FPGAs (field-programmable gate arrays), highly programmable chips. The company is adding a new FPGA with integrated cryptography acceleration that can support MACsec in 5G applications, adding another layer of security to vRAN at the fronthaul, midhaul, and backhaul levels.

Above: Intel is unveiling new 5G wireless network tech at MWC 2021.

Image Credit: Intel

Intel also said the Intel Ethernet 800 Series family is expanding with the company’s first SyncE-capable Ethernet adapter, designed for space-constrained systems at the edge and well suited to both high-bandwidth 4G and 5G RAN, as well as time- and latency-sensitive applications in the industrial, financial, and energy sectors, among others.

Intel summed up the tech as its Intel Network Platform, a technology foundation that aims to reduce development complexity, accelerate time to market, and help customers and partners take advantage of features in Intel hardware — from core to access to edge. Intel says its Intel Network Platform includes system-level reference architectures, drivers, and software building blocks that enable rapid development and delivery of Intel-powered network solutions and an easier, faster path to developing and optimizing network software.

Rodriguez said nearly all commercial vRAN deployments are running on Intel technology. In the years ahead, Intel sees global vRAN base station deployments scaling from hundreds to “hundreds of thousands,” and eventually millions.

Why it matters

Above: Intel’s Mobile World Congress in 2018.

Intel said operators of 5G networks want a more agile, flexible infrastructure to unleash the full possibilities of 5G and edge as they address increased network demands from more connected devices. At the same time, global digitalization is creating new opportunities to use the potential of 5G, edge, artificial intelligence (AI), and cloud to reshape industries ranging from manufacturing to retail, health care, education, and more.

Decision-makers also revealed that they view edge as one of the top three use cases for 5G in the next two years. With Intel’s portfolio delivering silicon and optimized software solutions, the company can tap into an estimated $65 billion edge silicon opportunity by 2025. Intel technology is already deployed in over 35,000 end customer edge implementations.

Network deployments

Operators like Deutsche Telekom, Dish Wireless, and Reliance Jio are relying on Intel technology. Reliance Jio announced it will participate in co-innovations with Intel in 5G radio and wireless core and collaborate in areas that include AI, cloud, and edge computing, which will help with 5G deployment.

Deutsche Telekom is using Intel FlexRAN technology with accelerators in O-RAN Town, in the O-RAN network it is deploying in Neubrandenburg, Germany — a city of 65,000 people spread out over 33 square miles. The company is relying on Intel as a technology partner to deliver high-performance RAN at scale.

Dish Wireless is relying on Intel’s contributions to the 5G ecosystem as it builds out the first cloud-native 5G network in the U.S. Its inaugural launch in Las Vegas, as well as its nationwide network, will be deployed on infrastructure powered by Intel technology in the network core, access, and edge.

Cohere is pioneering a new approach to improving spectrum utilization by leveraging capabilities in FlexRAN. It is integrating and optimizing spectrum multiplier software in the RAN intelligent controller. Cohere’s testing shows its Delay Doppler spatial multiplexing technology is improving channel estimation and delivering up to a twofold improvement in spectrum utilization for operators. That’s what Vodafone has seen in 700MHz testing in its labs.

And Cellnex Telecom — with support from Intel, Lenovo, and Nearby Computing — is delivering edge capabilities based on Intel Smart Edge Open. This will allow Cellnex to act faster on data, provide service-level management, improve quality of service, and deliver a more consistent experience to its end users. Deployed in Barcelona, this solution will extend to more markets using the blueprint developed with Intel and Nearby Computing.

Intel said its network business grew 20% between 2019 and 2020, from $5 billion to $6 billion. The company’s strong position is the result of early investments in hardware and software.

Intel predicted a bright future for the industry. As 5G blooms to meet its full potential alongside edge computing, experts expect artificial intelligence, the cloud, and smart cities will become the norm. Factory automation is also expected to flourish with Industry 4.0, and retail locations will redesign the shopping experience. For consumers, cloud gaming and virtual and augmented reality over mobile networks will become an everyday experience, Rodriguez said.

Categories
Tech News

Ad networks were right to be horrified by Apple’s App Tracking rules

iPhone users have overwhelmingly been denying apps the ability to track data for advertising, new research suggests, with App Tracking Transparency apparently delivering a worst-case scenario for personal data brokers. Added in iOS 14.5, which was released to iPhone in late April 2021, the new system requires app-makers to request explicit permission from users before they can share any data collected on them for targeting and advertising purposes.

Apps that want to track users in that way must show a pop-up message with two clear options. If you use the Facebook app, for example, you’ll get the dialog: “Allow “Facebook” to track your activity across other companies’ apps and websites? Your data will be used to measure advertising efficiency.”

There are two choices – “Ask App Not to Track” or “Allow” – which users must select from before they can continue to use the app. Ad networks and others had reacted with frustration to the system, arguing that it could have a significant impact on the lucrative user profiles that collated data helps build. Those can help advertisers target their campaigns most effectively.

Apple countered with the fact that App Tracking Transparency doesn’t actually change any of those data sharing abilities – it just requires specific permission before they can be carried out. Now, though, new research suggests the ad companies were probably right to have been worried.

Analysis firm Flurry Analytics, which is owned by Verizon Media, gets aggregated insights from over 1 million apps across 2 billion mobile devices per month. It’s been using that to track how many users actually opt-in to data sharing each day, in addition to the share of users that are “restricted” and thus cannot be asked for permission.

So far, Flurry says, the worldwide daily opt-in rate since the launch of iOS 14.5 is just 12 percent, as of May 7, 2021. In the US, it’s even lower: just 4 percent of users opt in.

Users who are considered “restricted” for app tracking – that is, they have the global setting “Allow Apps to Request to Track” switched off and so never see the request dialog for each app – currently account for 5 percent of worldwide daily users, Flurry says. In the US it’s a little lower, at 3 percent.

Apple does offer app-makers and advertisers an alternative, its own identifying system. Dubbed SKAdNetwork, it’s been criticized by some third-parties as being another example of Apple trying to push its own services.

For end-users, though, it seems Apple’s new rules have struck a chord. The company has made privacy a key pitch in iOS and iPadOS in recent years, including trying to pull back the curtain on just what data sharing is going on among advertisers and data brokers, and how that can be used to make surprisingly accurate assumptions about individuals.

Categories
Tech News

Tile and Level add Amazon Sidewalk support for neighborhood networks

Amazon Sidewalk, the controversial neighborhood network created by select Echo and Ring devices, is gaining Tile tracker integration, potentially making it easier to hunt down the AirTag rivals. In addition, Level smart locks are also adding support for Sidewalk, while more Echo devices are gaining compatibility for the technology.

Sidewalk is a shared neighborhood network, similar in concept to a larger, semi-private WiFi network. Designed to be more ubiquitous, but low-bandwidth, Sidewalk is created by select Echo and Ring devices, using a small portion of your home internet bandwidth pooled with that of your neighbors.

The idea is that these connected devices can maintain more consistent coverage, and uptime, with the shared network. It’s optional to enable, though it has still proved controversial, since it involves traffic – not identified to the individual user – going over their private internet connection. Owners also don’t know which neighborhood devices are actually connected to their Sidewalk bridge.

Integrations with companies like Tile are part of the justification Amazon has used for why Sidewalk should be switched on. The Bluetooth trackers will be able to connect to a broader Sidewalk network, so that even if they’re outside of the range of your smartphone you’ll still be able to get a more precise lock on their location through the Tile app. You’ll also be able to ask Alexa to “find my keys” or whatever the Tile is attached to, and it will begin ringing as the assistant can contact it across the Sidewalk network.

It’ll help bolster Tile Network coverage as Tile staves off Apple’s AirTag, with trackers outside of the home potentially checking in with that locator network more frequently.

Level, meanwhile, is adding support for Sidewalk to its smart locks. Currently, Level requires a Bluetooth connection between the lock and your phone, or an Apple HomeKit setup, but with the addition of Sidewalk support you’ll be able to remotely check the lock/unlock status and change it from wherever you have an internet connection.

Level support will be added by the end of May, through the Ring and Level apps, Amazon says.

Finally, Amazon says it’s working with CareBand, which provides wearables for people living with dementia. Sidewalk support will mean those wearables can continue to be tracked – as well as support their “help” button, and automated analysis of activity patterns – even when outside of home WiFi range. The pilot is kicking off now.

From June 8, there’ll be more straightforward setup for Echo devices using Sidewalk, too. That will include both at the point of initial configuration, and if you change your WiFi password or SSID. Echo devices configured that way will also join the Sidewalk network, expanding coverage for devices like Tile and Level.

Categories
AI

Deep Instinct’s neural networks for cybersecurity attract $100M

The increasingly rich data companies are collecting makes them a more tantalizing target for attacks. But Deep Instinct wants to turn that same data into an enterprise’s greatest defensive asset.

Deep Instinct is applying end-to-end deep learning to cybersecurity, an approach that allows it to predict and prevent cyberattacks across a company’s network, according to CEO Guy Caspi.

Today, Deep Instinct announced it has raised $100 million in a round led by BlackRock. Other investors include Untitled Investments, The Tudor Group, Anne Wojcicki, Millennium, Unbound, and Coatue Management. The company has now raised a total of $200 million.

AI for security

The New York-based company is part of a growing wave of startups turning to machine learning and artificial intelligence to combat the rising number of cyberattacks. The industry is optimistic that this ability to automate defenses will help companies gain an edge against increasingly sophisticated and well-funded hackers.

But Deep Instinct is trying to go a step beyond the way others are using AI and machine learning for security. The company has created deep neural networks that let it skip feature processing, which adds an extra step and slows reaction time.

With traditional machine learning, Caspi explained, executable files cannot be processed directly. Instead, they must be converted into a list of features that are then fed into a machine learning model.

How it works

Deep Instinct’s end-to-end deep learning system uses the raw data as input without needing to convert it. The company trains its model in its own labs, rather than on the customer’s premises, by feeding it hundreds of millions of malicious and legitimate files. This huge-scale training workload relies on Nvidia GPUs.

Once the training is finished, Deep Instinct creates a standalone neural network that can be deployed to an organization, where it starts protecting every device connected to the network. Because the system doesn’t require agents, it can be rapidly installed, including covering all applications currently running. And it can recognize previously unknown types of attacks without needing to be constantly updated.
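
Deep Instinct hasn’t published its architecture, but as a generic, hedged illustration of what “end-to-end on raw data” means, the Keras sketch below feeds raw executable bytes straight into a small 1D convolutional classifier with no separate feature-extraction stage. Every size and layer choice here is arbitrary and is not Deep Instinct’s model.

```python
import tensorflow as tf

MAX_BYTES = 4096  # pad/truncate every file to a fixed length (arbitrary choice)

def build_byte_classifier():
    """Toy end-to-end model: raw bytes in, malicious/benign probability out."""
    inputs = tf.keras.Input(shape=(MAX_BYTES,), dtype="int32")
    x = tf.keras.layers.Embedding(input_dim=256, output_dim=8)(inputs)  # one vector per byte value
    x = tf.keras.layers.Conv1D(64, kernel_size=16, strides=8, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def file_to_tensor(path):
    """Read a file's raw bytes and pad/truncate to MAX_BYTES integer values."""
    data = list(open(path, "rb").read()[:MAX_BYTES])
    data += [0] * (MAX_BYTES - len(data))
    return tf.constant([data], dtype=tf.int32)

model = build_byte_classifier()
# model.fit(...) would run in the vendor's lab on large sets of labeled malicious and benign files.
```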

As a result, Deep Instinct claims it can identify and stop attacks within 20 milliseconds while reducing false positives by 99%.

Caspi said he wants to use the latest funding to accelerate growth with an eye toward an IPO in the next couple of years. For now, that means ramping up sales and marketing, with about 30% of the money being reserved for product development.
