Categories
Computing

Why A.I. will never rule the world

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, A.I. experts and non-experts alike have fretted over (and, in the case of a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans.

According to the theory, advances in A.I. — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. In this interpretation of events, every A.I. advance from Jeopardy-winning IBM machines to the massive A.I. language model GPT-3 is taking humanity one step closer to an existential threat. We’re literally building our soon-to-be-sentient successors.

Except that it will never happen. At least, that’s according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Co-authors Barry Smith, a philosophy professor at the University at Buffalo, and Jobst Landgrebe, founder of the German A.I. company Cognotekt, argue that human intelligence won’t be overtaken by “an immortal dictator” any time soon — or ever. They told Digital Trends their reasons why.

Digital Trends (DT): How did this subject get on your radar?

Jobst Landgrebe (JL): I’m a physician and biochemist by training. When I started my career, I did experiments that generated a lot of data. I started to study mathematics to be able to interpret these data, and saw how hard it is to model biological systems using mathematics. There was always this misfit between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a business consultant and entrepreneur working in artificial intelligence software systems. I was trying to build A.I. systems to mimic what human beings can do. I realized that I was running into the same problem that I had years before in biology.

Customers said to me, ‘why don’t you build chatbots?’ I said, ‘because they won’t work; we cannot model this type of system properly.’ That ultimately led to me writing this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I already had inklings of similar problems with A.I., but I had never thought them through. Initially, we wrote a paper called ‘Making artificial intelligence meaningful again.’ (This was in the Trump era.) It was about why neural networks fail for language modeling. Then we decided to expand the paper into a book exploring this subject more deeply.

DT: Your book expresses skepticism about the way that neural networks, which are crucial to modern deep learning, emulate the human brain. They’re approximations, rather than accurate models of how the biological brain works. But do you accept the core premise that it is possible that, were we to understand the brain in granular enough detail, it could be artificially replicated – and that this would give rise to intelligence or sentience?

JL: The name ‘neural network’ is a complete misnomer. The neural networks that we have now, even the most sophisticated ones, have nothing to do with the way the brain works. The view that the brain is a set of interconnected nodes in the way that neural networks are built is completely naïve.

If you look at the most primitive bacterial cell, we still don’t understand even how it works. We understand some of its aspects, but we have no model of how it works – let alone a neuron, which is much more complicated, or billions of neurons interconnected. I believe it’s scientifically impossible to understand how the brain works. We can only understand certain aspects and deal with these aspects. We don’t have, and we will not get, a full understanding of how the brain works.

If we had a perfect understanding of how each molecule of the brain works, then we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate this using a computer. The problem is just that we are unable to write down and create those equations.


BS: Many of the most interesting things in the world are happening at levels of granularity that we cannot approach. We just don’t have the imaging equipment, and we probably never will have the imaging equipment, to capture most of what’s going on at the very fine levels of the brain.

This means that we don’t know, for instance, what is responsible for consciousness. There are, in fact, a series of quite interesting philosophical problems, which, according to the method that we’re following, will always be unsolvable – and so we should just ignore them.

Another is the freedom of the will. We are very strongly in favor of the idea that human beings have a will; we can have intentions, goals, and so forth. But we don’t know whether or not it’s a free will. That is an issue that has to do with the physics of the brain. As far as the evidence available to us is concerned, computers can’t have a will.

DT: The subtitle of the book is ‘artificial intelligence without fear.’ What is the specific fear that you refer to?

BS: That was provoked by the literature on the singularity, which I know you’re familiar with. Nick Bostrom, David Chalmers, Elon Musk, and the like. When we talked with our colleagues in the real world, it became clear to us that there was indeed a certain fear among the populace that A.I. would eventually take over and change the world to the detriment of humans.

We have quite a lot in the book about the Bostrom-type arguments. The core argument against them is that if the machine cannot have a will, then it also cannot have an evil will. Without an evil will, there’s nothing to be afraid of. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But that’s because the machines are being managed by people with evil ends. But then it’s not A.I. that is evil; it’s the people who build and program the A.I.

DT: Why does this notion of the singularity or artificial general intelligence interest people so much? Whether they’re scared by it or fascinated by it, there’s something about this idea that resonates with people on a broad level.

JL: There’s this idea, which started at the beginning of the 19th century and was then declared by Nietzsche at the end of that century, that God is dead. Since the elites of our society are not Christians anymore, they needed a replacement. Max Stirner, who was, like Karl Marx, a pupil of Hegel, wrote a book about this, saying, ‘I am my own god.’

If you are God, you also want to be a creator. If you could create a superintelligence then you are like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We don’t talk about this in the book, but that explains to me why this idea is so attractive in our times in which there is no transcendent entity anymore to turn to.


DT: Interesting. So to follow that through, it’s the idea that the creation of A.I. – or the aim to create A.I. – is a narcissistic act. In that case, the concept that these creations would somehow become more powerful than we are is a nightmarish twist on that. It’s the child killing the parent.

JL: A bit like that, yes.

DT: What for you would be the ultimate outcome of your book if everyone was convinced by your arguments? What would that mean for the future of A.I. development?

JL: It’s a very good question. I can tell you exactly what I think would happen – and will happen. I think in the midterm people will accept our arguments, and this will create better applied mathematics.

Something that all great mathematicians and physicists were completely aware of was the limitations of what they could achieve mathematically. Because they were aware of this, they focused only on certain problems. If you are well aware of the limitations, then you go through the world and look for these problems and solve them. That’s how Einstein found the equations for Brownian motion; how he came up with his theories of relativity; how Planck solved blackbody radiation and thus initiated the quantum theory of matter. They had a good instinct for which problems are amenable to solution with mathematics and which are not.

If people learn the message of our book, they will, we believe, be able to engineer better systems, because they will concentrate on what is truly feasible – and stop wasting money and effort on something that can’t be achieved.

BS: I think that some of the message is already getting through, not because of what we say but because of the experiences people have when they give large amounts of money to A.I. projects, and then the A.I. projects fail. I guess you know about the Joint Artificial Intelligence Center. I can’t remember the exact sum, but I think it was something like $10 billion, which they gave to a famous contractor. In the end, they got nothing out of it. They canceled the contract.

(Editor’s note: JAIC, a subdivision of the United States Armed Forces, was intended to accelerate the “delivery and adoption of A.I. to achieve mission impact at scale.” It was folded into a larger unified organization, the Chief Digital and Artificial Intelligence Office, together with two other offices in June of this year. JAIC ceased to exist as its own entity.)

DT: What do you think, in high-level terms, is the single most compelling argument that you make in the book?

BS: Every A.I. system is mathematical in nature. Because we cannot model consciousness, will, or intelligence mathematically, these cannot be emulated using machines. Therefore, machines will not become intelligent, let alone superintelligent.

JL: The structure of our brain only allows limited models of nature. In physics, we pick a subset of reality that fits our mathematical modeling capabilities. That is how Newton, Maxwell, Einstein, or Schrödinger obtained their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are those which we use to engineer technology. We are unable to create a complete mathematical model of animate nature.

This interview has been edited for length and clarity.


Categories
AI

Could Nvidia’s Thor chip rule automotive AI?

As cars get smarter and autonomous vehicles continue to be developed, there is an obvious need for more computing power. Maybe even the power of a Norse god of thunder.

At the Nvidia GTC conference today, the company announced its new DRIVE Thor platform for automotive computing. DRIVE Thor is intended to be a single platform that can support self-driving capabilities, vehicle operations such as parking assist, and in-vehicle entertainment. The system benefits from the Nvidia Grace CPU and from GPU capabilities that come from the Hopper architecture. The DRIVE Thor platform replaces the Atlan system announced in April 2021, and Nvidia expects the new technology to begin showing up in automakers’ 2025 vehicle models.

“Autonomous vehicles are one of the most complex computing challenges of our time,” Danny Shapiro, vice president of automotive at Nvidia, said during a press briefing. “To achieve the highest possible level of safety, we need diverse and redundant sensors and algorithms, which require massive compute.”

Why use multiple computers when you can use one?

Shapiro explained that modern vehicles use a wide array of computers, distributed throughout the vehicle.

For example, many cars today have advanced driver assistance systems, with parking assist, various monitoring cameras and multiple digital instrument clusters, alongside some form of entertainment system.

“In 2025, these functions will no longer be separate computers,” Shapiro said. “Drive Thor will enable manufacturers to efficiently consolidate these functions into a single system, reducing overall system costs.”

The goal with DRIVE Thor is to provide automakers with the compute headroom and flexibility to build software-defined autonomous vehicles that are continuously upgradable through secure over-the-air updates.

Thor’s power isn’t a hammer, it’s an inference transformer

The mythical Norse god Thor relied on his hammer, Mjölnir, but there’s nothing mystical about what powers Nvidia’s DRIVE Thor platform.


According to Shapiro, Thor is the first automotive chip to incorporate an inference transformer engine. A transformer is a deep learning architecture that identifies relationships between the elements of its input and has proven particularly useful for computer vision.
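For readers unfamiliar with the term, here is a minimal sketch of the scaled dot-product attention operation at the core of a transformer, written in plain Python with NumPy. It is a generic textbook illustration rather than Nvidia’s implementation, and the toy inputs standing in for image patches are invented for the example.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Core transformer operation: each query attends to every key,
    and the outputs are weighted sums of the values."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)                  # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ values                                   # blend of values

# Toy example: 4 "image patch" embeddings, 8 dimensions each (made-up data).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)                   # self-attention
print(out.shape)  # (4, 8)
```

Each output row is a weighted blend of the values, with the weights expressing how strongly one element relates to every other; stacks of such layers, with learned projections, make up a full transformer.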

“Thor can accelerate inference performance of transformers, which is vital for supporting the massive and complex AI workloads in self-driving vehicles,” Shapiro said. 

Going a step further, the system is able to handle multiple operations securely and in real time thanks to a capability called multi-compute domain isolation. Shapiro explained that this capability enables concurrent time-critical processes to run without interruption. Additionally, on one computer, a vehicle manufacturer can simultaneously run Linux, QNX, and Android operating systems and applications.

Learning to self-drive with DRIVE Sim

The new DRIVE Thor system is one part of Nvidia’s overall automotive efforts. 

Another key part is the DRIVE Sim technology, which can help train the self-driving vehicles that will benefit from the Thor chip. Shapiro explained that DRIVE Sim uses a neural engine that can recreate and replay road situations in a digital twin model.

“Essentially, our researchers have developed an AI pipeline that can reconstruct a 3D scene from recorded sensor data,” Shapiro said. “At the end of the day, though, we’re creating a digital twin of the car and a digital twin of the environment.”


Categories
AI

DeepMind is developing one algorithm to rule them all

DeepMind wants to enable neural networks to emulate algorithms to get the best of both worlds, and it’s using Google Maps as a testbed.

Classical algorithms are what have enabled software to eat the world, but the data they work with does not always reflect the real world. Deep learning is what powers some of the most iconic AI applications today, but deep learning models need retraining to be applied in domains they were not originally designed for.

DeepMind is trying to combine deep learning and algorithms, creating the one algorithm to rule them all: a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data.

DeepMind has made headlines for some iconic feats in AI. After developing AlphaGo, a program that beat a professional Go player and then defeated the world champion in a five-game match, and AlphaFold, a solution to a 50-year-old grand challenge in biology, DeepMind has set its sights on another grand challenge: bridging deep learning, an AI technique, with classical computer science.

The birth of neural algorithmic reasoning

Charles Blundell and Petar Veličković both hold senior research positions at DeepMind. They share a background in classical computer science and a passion for applied innovation. When Veličković met Blundell at DeepMind, a line of research known as Neural Algorithmic Reasoning (NAR) was born, named after the position paper of the same name recently published by the duo.

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

Like all well-grounded research, NAR has a pedigree that goes back to the roots of the fields it touches upon, and branches out to collaborations with other researchers. Unlike much pie-in-the-sky research, NAR has some early results and applications to show.

We recently sat down with Veličković and Blundell to discuss the first principles and foundations of NAR. We were joined by MILA researcher Andreea Deac, who expanded on specifics, applications, and future directions. Areas of interest include the processing of graph-shaped data and pathfinding.

Pathfinding: There’s an algorithm for that

Deac interned at DeepMind and became interested in graph representation learning through the lens of drug discovery. Graph representation learning is an area Veličković is a leading expert in, and he believes it’s a great tool for processing graph-shaped data.

“If you squint hard enough, any kind of data can be fit into a graph representation. Images can be seen as graphs of pixels connected by proximity. Text can be seen as a sequence of objects linked together. More generally, things that truly come from nature that aren’t engineered to fit within a frame or within a sequence like humans would do it, are actually quite naturally represented as graph structures,” said Veličković.

Another real-world problem that lends itself well to graphs — and a standard one for DeepMind, which, like Google, is part of Alphabet — is pathfinding. In 2020, Google Maps was the most downloaded map and navigation app in the U.S. and is used by millions of people every day. One of its killer features, pathfinding, is powered by none other than DeepMind.

The popular app now showcases an approach that could revolutionize AI and software as the world knows them. Veličković noted that DeepMind has worked on a Google Maps application that applies graph networks to the real-world road network in order to predict travel times. This is now serving queries in Google Maps worldwide, and the details are laid out in a recent publication.

Veličković said that although one of the most iconic graph algorithms, Dijkstra’s algorithm, could in theory help compute shortest paths or even estimate travel times, that’s not applicable in reality. To do that, you would have to take all the complexity of the real world — roadblocks, weather changes, traffic flows, bottlenecks, and whatnot — and convert that into an ‘abstractified’ graph of nodes and edges with some weights which correspond to the travel time.
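For reference, Dijkstra’s algorithm itself takes only a few lines. The sketch below runs it over a tiny, invented graph whose edge weights stand in for travel times; the road network and numbers are hypothetical and purely illustrative.

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from `source` to every reachable node.
    `graph` maps each node to a list of (neighbor, travel_time) pairs."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return dist

# Hypothetical road segments with travel times in minutes.
roads = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("D", 5.0)],
    "C": [("B", 1.0), ("D", 8.0)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```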

This exemplifies some of the issues with algorithms. As Veličković put it, algorithms are very happy to give you perfect answers, regardless of whether or not the inputs themselves are perfect. So it’s going to give you the shortest path. But who, we asked Veličković, guarantees that this graph is an accurate representation of the real-world scenario? He said:

“The algorithm might give you a great solution, but there’s really no way of saying whether this human-devised heuristic is actually the best way of looking at it, and that’s the first point of attack. Historically, we found that whenever there’s a lot of human feature engineering and lots of raw data that we need to work with for that human feature engineering, this is where deep learning shines. The initial success stories of deep learning were exactly in replacing handcrafted heuristics for, say, image recognition with deep neural networks.”

This is the starting point for NAR: replacing the human who maps real-world raw data to a graph input with a neural network that does the same job. The network maps the complexity of the raw data to ‘abstractified’ graph inputs that the algorithm can then be run over.
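As a rough sketch of that idea, and not DeepMind’s actual pipeline, the snippet below uses a tiny, untrained multilayer perceptron as a stand-in for the learned encoder. It maps hypothetical raw per-road features (length, traffic, rain, roadworks) to the positive edge weights that a shortest-path routine such as the Dijkstra sketch above expects; every name and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw features per road segment:
# [length_km, traffic_level, rain_intensity, roadworks_flag]
raw_edge_features = {
    ("A", "B"): np.array([3.0, 0.7, 0.0, 1.0]),
    ("A", "C"): np.array([1.5, 0.2, 1.0, 0.0]),
    ("C", "B"): np.array([0.8, 0.1, 1.0, 0.0]),
    ("B", "D"): np.array([4.0, 0.9, 0.0, 0.0]),
}

# A tiny, untrained MLP standing in for the learned encoder.
W1, b1 = rng.normal(size=(4, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)

def predict_travel_time(features):
    """Map raw road-segment features to a single positive edge weight."""
    hidden = np.maximum(features @ W1 + b1, 0.0)       # ReLU layer
    return float(np.exp((hidden @ W2 + b2)[0]))        # exp keeps the weight positive

# Build the 'abstractified' graph that a classical algorithm expects.
graph = {}
for (u, v), feats in raw_edge_features.items():
    graph.setdefault(u, []).append((v, predict_travel_time(feats)))

print(graph)
# A shortest-path routine such as the Dijkstra sketch above can now run over `graph`.
```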

Veličković noted there is a good line of research that works exactly in this setting. This research even figured out a way to propagate gradients through the algorithm, so you can actually get good neural network optimization in this setting and apply the algorithm.

But, he went on to add, there are a few limitations with the setting, even if you’re able to propagate gradients through it. First of all, the algorithm may not be everything you need to compute the final solution. Some parts of the problem may require shortest path solutions, but maybe at some point you also need to run a flow algorithm and you are not sure how to combine results in the best way, because the problem is very raw and noisy:

“Forcing your representations to rely on the output of this one algorithm might be too limiting. Dijkstra requires one scalar value per every edge in the graph, which means that you’re compressing all that amazing complexity of the real world into just one number for every road segment.

That’s potentially problematic because, if you don’t have enough data to estimate that scalar correctly, you’re doomed. The algorithm is not going to give you correct solutions once again, because you just haven’t seen enough raw data to estimate that scalar correctly.”

Breaking the algorithmic bottleneck

This is what Veličković and Blundell refer to as the algorithmic bottleneck. It happens because we’re placing all of our bets on this one number for every edge, which is a very low-dimensional representation of the problem. The way to break the bottleneck, the DeepMind duo suggests, is to use vectors rather than single numbers. In other words, to stay high-dimensional.

Deep neural networks derive a lot of their success from staying high-dimensional and applying high-dimensional regularization techniques. This means that even if some parts of the network’s internal vector are predicted badly, the other parts of that vector can still step in and compensate. The problem is that classical algorithms were designed to run on low-dimensional representations, not high-dimensional inputs.

The key idea in NAR is to replace the algorithm with a neural network, typically a graph neural network (GNN). The GNN takes a high-dimensional input and does one step of reasoning to produce another high-dimensional input for the next step. Then, once enough steps of that reasoning network have been completed, final outputs are predicted from the result.
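A minimal, generic message-passing step over high-dimensional node vectors might look like the sketch below. It uses simple sum aggregation and random, untrained weights, so it illustrates the shape of the computation rather than DeepMind’s actual processor networks.

```python
import numpy as np

def gnn_step(node_states, edges, W_self, W_msg):
    """One round of message passing: every node aggregates its neighbors'
    high-dimensional states and updates its own state."""
    n, d = node_states.shape
    messages = np.zeros((n, d))
    for src, dst in edges:                    # sum incoming messages
        messages[dst] += node_states[src]
    updated = node_states @ W_self + messages @ W_msg
    return np.maximum(updated, 0.0)           # ReLU nonlinearity

rng = np.random.default_rng(2)
d = 32                                        # dimensionality of the latent vectors
nodes = rng.normal(size=(5, d))               # 5 nodes, 32-dim states
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
W_self = rng.normal(size=(d, d)) * 0.1
W_msg = rng.normal(size=(d, d)) * 0.1

# "Enough steps of reasoning": apply the processor repeatedly before decoding outputs.
for _ in range(4):
    nodes = gnn_step(nodes, edges, W_self, W_msg)
print(nodes.shape)  # (5, 32)
```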

Of course, this high-dimensional GNN neural executor needs to actually imitate the algorithm at hand. For this reason, NAR also includes an abstract pipeline in which that neural network is pre-trained, typically using lots of synthetic data.

Blundell and Veličković’s previous work, such as neural execution of graph algorithms, pointer graph networks, and persistent message passing, dealt with that part: how to get reliable and robust neural networks that simulate algorithms in an abstract space.

What Blundell and Veličković had not done was to examine whether this can actually be used in a real-world problem. This is where Deac’s work comes in. The approach is based on a combination of neural execution of graph algorithms, pointer graph networks, and persistent message passing. All of this, Deac noted, was plugged into a reinforcement learning (RL) framework. In RL, problems are framed as states and actions, and the goal is to estimate how good each state is. This is what has driven RL applications in games such as AlphaStar and AlphaGo, Deac explained:

“We define some value over the states. If you think of a chess game, this value is – how good is this move? And what we want is to make plans, make decisions so that this value is maximized. If we know the environment dynamics, [then how do we know] how we can move from one state to another, and what sorts of rewards do we get from a state for a specific action?”

The way to do this is via a so-called value iteration algorithm, which can provide estimates of how good each state is. The algorithm iterates on the value estimates and gradually improves them, until they converge to the ground truth values. This is the algorithm the NAR team is trying to imitate, using synthetic data and simple graphs to estimate meta-values.
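For reference, classical (non-neural) value iteration is itself only a few lines. The sketch below runs it on an invented three-state, two-action Markov decision process, purely to show the Bellman update that the team’s neural executor is trained to imitate.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Classical value iteration on a small MDP.
    P[a, s, s2] = transition probability, R[s, a] = immediate reward."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V).T      # Bellman backup, shape (states, actions)
        V_new = Q.max(axis=1)          # value of the best action in each state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new               # converged toward the ground-truth values
        V = V_new

# Toy 3-state, 2-action MDP (numbers are made up for illustration).
P = np.array([
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],   # action 0: stay put
    [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],   # action 1: move right
])
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])          # reward only in state 2
print(value_iteration(P, R))  # approximately [8.1, 9.0, 10.0]
```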

The difference is that the team wanted to move from using single numerical values to using multiple-value high-dimensional vectors. Initially, the output might not be that good, but you don’t need that much data to get off the ground because the algorithm can deal with imperfect estimates when you don’t know the environment’s dynamics.

The key, as Deac explained, is iteration, which leads to convergence. In this case, a shortest-path algorithm was of interest. The algorithm can be learned and then plugged in. But the idea is that the RL framework should be usable for any algorithm, or combination of algorithms. That is an important step for machine learning, which sometimes struggles as it is re-applied from domain to domain.

High performance in low data regimes

The approach Deac and the DeepMind duo worked on is called eXecuted Latent Value Iteration Network, or XLVIN. First, the value iteration algorithm is learned in the abstract space. Then an RL agent is plugged in. The team compared it against an almost identical architecture with one crucial difference: rather than using their algorithmic component, that architecture predicts the values and runs value iteration on them directly.

Veličković said that this second agent actually managed in some cases to catch up when fed with a lot more interactions with different environments. But in very low data regimes, the team’s RL architecture does better. This is important because, for environments such as Atari, a classical benchmark in AI that was also used to evaluate XLVIN, more data means a bigger simulation budget, which is not always feasible.

XLVIN empirically validated a strong theoretical link between dynamic programming algorithms and the computations of a GNN. This means, Veličković said, that most polynomial-time heuristics can be interpreted as dynamic programming, which in turn means that graph networks might be the right inductive bias for this kind of computation.

Previous theoretical work described a best-case scenario, where there are settings of weights for a GNN that will make it behave nicely as that dynamic programming algorithm. But it doesn’t necessarily tell you the best way to get there, how to make it work with the specific data you have, or how to extrapolate, and these are issues that are important for algorithms, Veličković noted.

This led the duo to extend their work to models like pointer graph networks and persistent message passing, which moved one step further. They imitate the iterative computation of dynamic programming, but they also try to include some data structures which are crucial to how algorithms operate nowadays and incorporate some aspects of persistent reasoning.

So, rather than just being able to support a simple data structure on top of an existing set of nodes, is it possible to create additional memory, in the form of additional nodes? Many algorithms rely on initializing additional memory aside from the memory required to store their inputs. So DeepMind’s research developed models that enable more and more alignment with computation while still following the same GNN blueprint.

RL, Blundell noted, is basically a graph algorithm. It’s a dynamic programming update, and it’s closely related to a shortest-path algorithm — it’s like an online shortest-path algorithm. It’s not surprising, then, that if you’re trying to find the shortest possible path in a graph, and you want to represent your problem as a graph, there’s a good relationship there.

Dynamic programming is a great way to think about solving any kind of problem, Blundell went on to add. You can’t always do it, but when you can, it works really, really well. And that’s potentially one of the deep connections between graph algorithms, reinforcement learning and graph networks.

One algorithm to rule them all

In their most recently published work, Reasoning-Modulated Representations, Blundell and Veličković show that they are able to use the algorithmic reasoning blueprints to support unsupervised learning and self-supervised learning. Often, Veličković said, unsupervised learning is all about, “Hey, let’s take a massive quantity of data and try to extract the most meaningful properties out of it.”

But that’s not always all the information you have. You might have some knowledge about how your data came to be. For example, if you’re working with estimating some representations from physics simulations, like a bunch of balls bouncing around or N-body systems, you don’t just see a bunch of snapshots of that system. You know that it has to obey certain laws of physics.

“We think the neural algorithmic reasoning blueprint is an excellent way to take those laws of physics, package them up into a neural network that you can splice in as part of your unsupervised architecture and, as a result, get better representations. And we’re starting to see, on a variety of environments, some really nice results showing that this blueprint is actually promising.”

As far as the future of this research goes, the DeepMind duo wants to extend Deac’s work and apply it as widely as possible in reinforcement learning, an area of very high interest for DeepMind and beyond. There are algorithms ‘left, right and center inside the reinforcement learning pipeline,’ as Veličković put it.

Blundell, for his part, reiterated that there aren’t that many algorithms out there. So the question is, can we learn all of them? If you have a single network that is able to execute any one of the algorithms you already know, then by getting that network to plug those algorithms together, you start to form really quite complicated processing pipelines, or programs. And if it’s all done with gradients going through it, then you start to learn programs:

“If you really take this to its limit, then you start to really learn algorithms that learn. That becomes very interesting because one of the limitations in deep learning is the algorithms that we have to learn. There hasn’t been that much change in the best optimizers we use or how we update the weights in a neural network during training for quite a long time.

There’s been a little research over different architectures and so forth. But they haven’t always found the next breakthrough. The question is, is this a different way of looking at that, where we can start to find new learning algorithms?

Learning algorithms are just algorithms, and maybe what’s missing from them is this whole basis that we have for other algorithms that we’re using. So we need a slightly more universal algorithm executor to use as the basis for better methods for ML.”

Deac also noted she would like to pursue a network that tries multiple algorithms — all algorithms, if possible. She and some of her MILA colleagues have taken some steps in that direction. They are doing some transfer learning, chaining a couple of algorithms together and seeing whether transfer from one algorithm makes it easier to learn a separate, related algorithm, she said.

Or in other words, as Veličković framed what everyone seems to view as the holy grail of this research: “One algorithm to rule them all.”


Categories
Tech News

Google Drive for desktop will be the one app to rule them all

Although not as terrible as its messaging apps and services, Google’s narrative for cloud syncing on the desktop isn’t exactly as clear-cut as it could or should be. There is only one Google Drive, of course, though you can pay for storage through a variety of subscriptions and offers. There is also just one Google Drive app for mobile, but when it comes to the desktop, the situation can be a bit confusing and, thankfully, Google is finally changing that.

There was actually a single Google Drive app in the past but, in Google’s infinite wisdom, it “upgraded” that app and split it into two: Drive File Stream, which was available only to enterprise customers, and Backup and Sync for everyone else. Unsurprisingly, some of those Workspace customers installed and managed both, creating no small amount of confusion.

Google has finally seen the light and will be unifying those two into one app, unsurprisingly named “Google Drive for desktop.” It’s really a sleight of hand, though, as Drive for Desktop is just Drive File Stream renamed. Considering how some users of Backup and Sync have been complaining about performance issues, it might be for the best that Google retires it.

In other words, admins and users of Drive File Stream won’t have to do anything to transition to Drive for Desktop. It will be a simple renaming that happens when version 45 of the app rolls out. There will be no features lost, or gained for that matter, at least in this update.

Backup and Sync users, on the other hand, will have to move over to the new unified Drive for Desktop client, but not before Google has moved its features over to the new app. That won’t happen until later this year, and Google promises to give users three months’ notice before it shuts down the older app.
