Why Recycling Isn’t Enough to Solve Our E-Waste Crisis

53.6 million metric tons. That’s over 118 billion pounds. No, I’m not talking about the weight of the Great Wall of China (which is 116 billion pounds, in case you’re wondering). I’m talking about the weight of the world’s e-waste. The United Nations says the world produced 118,167,773,000 pounds of e-waste, or 53.6 million metric tons, in 2019 alone.

That’s heavier than all of the adults in Europe combined, and this waste isn’t the same as the trillions of pounds of trash generated around the world. E-waste contains a slew of harmful chemicals and materials, like mercury, which the UN says accounts for about 110,000 pounds of undocumented waste every year. If you need a more concrete example, the UN says that in 2019 alone, some $57 billion worth of recoverable materials in electronics, like gold, platinum, and silver, was dumped or burned. And that’s a conservative estimate.

The world has a pretty big e-waste problem, but you probably already know that. It’s obvious. I sat down with a few experts in the field of electronics design, sustainability, and repair to figure out why we have such a big e-waste problem — and more importantly, what we can do to solve it.

But to understand our e-waste problem, we have to look past the waste itself.

Device life cycle, and a circular economy

If you pay attention to big tech events, you’ve undoubtedly heard about the “circular economy.” If you’re unfamiliar, the idea is simple enough: Instead of having an end of life for devices, they’re reintroduced, recycled, or reused to keep the life cycle going. You know, like a circle.

That’s something Michelle Chuaprasert, senior director of innovation and sustainability at Intel, is focused on. The idea of a circular economy is great in theory — just keep using devices that are already around — but it has a big problem. A lot of people either don’t want to get rid of their old devices, or they don’t know how.

“How do we facilitate someone handing down their PC at first life to someone else,” Chuaprasert asked when I talked with her. “The options are there. I think it’s a matter of how do we all motivate each other, right, to take advantage of [recycling programs] … or donate in whichever way is easy for us.”

There are a growing number of ways to either recycle or resell your old devices, even if they aren’t in perfect condition. Best Buy and Staples offer in-store recycling programs, and major device manufacturers like Apple offer trade-in programs. Sites like BackMarket also offer a secondhand marketplace tailored for electronics. And Intel says an increasing number of people have been buying secondhand since the coronavirus pandemic emerged.

Melissa Gregg is a senior principal engineer at Intel, who headed up the Intel EVO line before moving on to aid efforts to create more sustainable designs. Gregg told me that the pandemic offered “an opportunity to have that conversation, perhaps even for the first time, [about] whether people always need a new PC.”

It’s no secret that the demand for computers grew immensely during 2020, as the exodus from the office pushed workers into their homes and schools transitioned to a remote learning model. Add on top of that the chip shortage, and it was difficult for a lot of people to buy a new PC. In some ways, it still is.

The positive outcome, according to Intel, is that hopeful buyers started seeking out secondhand options. “For all of those other people who still need computers, recirculating them, and having access to platforms that make that easy for people to pass on their devices, to me, is a really positive outcome of a pretty bad situation,” Gregg said.

So, problem solved, right? Everyone trades in or recycles their old devices, we don’t dump or burn any more toxic e-waste, and round and round the circle goes.

But it’s not working.

Best Buy has offered recycling for the past 12 years, and although the company says it has recycled 2 billion pounds of electronics since 2009, that’s only a microscopic fraction of the total e-waste the world produces in a single year. In total, the UN says only 17.4% of e-waste was collected and recycled in 2019. And the U.S. is one of the worst countries in the world when it comes to recycling.

Owners of devices have a responsibility to ensure that those devices don’t end up in a landfill. But that only addresses a small part of the problem with how electronics impact the environment.

Billowing smog

Manufacturing. If you’re concerned about the impact e-waste has on the environment, manufacturing should terrify you. It’s true that a circular economy is good for everyone, but manufacturing is where “the big, ugly stuff is done,” according to Gay Gordon-Byrne, executive director of The Repair Association.

Compared to usage, manufacturing accounts for a much more significant portion of a device’s carbon footprint. Microsoft says that around 78% of a Surface device’s carbon footprint comes from manufacturing, and a 2018 report found that between 85% and 95% of smartphones’ annual carbon emissions are due to manufacturing.

It’s the dirty secret of the electronics that power our world today. Recycling your devices helps, and voluntary ecolabels like EPEAT and Energy Star give manufacturers a goal post for designing more energy-efficient devices. But the fact of the matter is, when you get rid of one device, the manufacturer is ready to make a new one.

A OnePlus assembly line at a manufacturing facility in China. Qilai Shen/Getty Images

“That creates two big problems,” according to Gordon-Byrne. “One is you’ve now got an e-waste problem you didn’t have, and on the other side, you also have a manufacturing problem where really, the big, ugly stuff is done. You know, all the mining, all the terrible worker experiences, all of the pollution. That’s actually all manufacturing.”

This isn’t something the Microsofts and Intels of the world have shied away from. It’s easy and justified to point the finger at manufacturers, but that’s not a solution. Given our reliance on devices today, we can’t just stop making them — but manufacturers can take steps to reduce their environmental impact.

The idea is dematerialization, which Chuaprasert says is a “really big” deal. Take a motherboard, for example. “The motherboard itself inside the PC is one of the higher carbon footprint areas,” Chuaprasert said. “You can make a really big difference if you can shrink the motherboard.”

Chuaprasert even pointed to circuit board cutting. Often, devices come with motherboards cut in a precise way to fit a particular form factor, but taking cutting out of the picture can make for more efficient manufacturing. “Just having more rectangular shapes instead of cutouts goes a long way for reducing the carbon footprint.”

A circuit board.

This is something Intel is working with device manufacturers to accomplish, and ecolabels like EPEAT offer insight into the advancements designers are making. My favorite laptop, the recent Dell XPS 13 2-in-1, received an EPEAT Gold label on the strength of its energy efficiency, packaging, materials, and even labor practices. If you want to buy a new device and you care about its climate and social impact, you should reference EPEAT and Energy Star.

Although you should use these labels, they don’t always tell the full story. The most recent MacBook Pro, for example, earned a Gold rating from EPEAT, but only scored one out of four in the product longevity category. That’s because it’s not repairable or upgradeable, and Apple doesn’t offer any repair guidance.

With some devices barred from easy, cost-effective repairs, it doesn’t matter how the device was built. If you have a broken device and you can’t repair it, you’ll buy a new one, and that’s the crux of the problem.

These are “hard economic facts,” according to Gordon-Byrne. “Every time you throw [away] a PC or a laptop or something … they get another order.”

The right to repair

The idea of right to repair has been buzzing over the last year. Under shareholder pressure, Microsoft bent the knee and committed to expanding repair options for its devices by the end of 2022. And in July, the Federal Trade Commission voted unanimously to ramp up enforcement against illegal repair restrictions.

The movement stems from a 2011 Massachusetts law, the Motor Vehicle Owners Right to Repair Act. This law requires carmakers to provide all of the information necessary to diagnose and repair a vehicle, which opens the door to third-party repair shops and even owner repairs.

It’s a foundational piece of legislation for the world today, but it only applies to cars. The right to repair movement is all about enacting new legislation that covers everything else — from MacBooks to tractors. “We’ve got a lot of experience now that says that works great for cars. It should work for everything, but the legislation itself is just for cars,” Gordon-Byrne said.

The electronics industry is rife with examples of products that are designed not to be repaired. The Nintendo Switch, for example, uses tri-point screws instead of Phillips-head ones. And anecdotally, the screws seem to be made of a soft metal; they stripped easily when I went to do a Joy-Con shell swap. The “warranty void if removed” stickers you often see usually aren’t enforceable, either. The Magnuson-Moss Warranty Act, passed in 1975, was designed to protect against these kinds of deceptive warranties.

An iPhone being repaired.

Right to repair is good for consumers, but it has environmental implications, too. For example, iFixit, a third-party company that offers resources and tools to repair electronics, rates the 2019 MacBook Pro as one of the least repairable options on the market (a trend across other Apple devices iFixit has ranked). A big reason why is that this laptop, along with many others, has components soldered to the motherboard.

As we’ve already covered, these boards represent a big portion of a device’s carbon footprint. And the most efficient way to repair these devices isn’t to resolder components — it’s to replace the board entirely, ship back the broken one for repair, and reintroduce that in another device. That’s not to mention the problems independent repair shops can have resoldering components without access to schematics.

Apple and others have taken steps to make repairs easier, but Gordon-Byrne suggests that it’s not enough. “I’d rather have legislation,” Gordon-Byrne said. “I think that they need to be told that they need to do these things. Because if we wait for them to do it out of their good graces, we know that that’s not in their financial best interest, so they’re not going to do it.”

The idea of a circular economy works if someone wants to get rid of one device and buy another, but it doesn’t account for those who like the devices they already own. That’s where right to repair comes in, giving owners more options to repair their devices instead of feeding the manufacturing machine or contributing to another pile of e-waste.

The right to repair movement is making headway. At the time of publication, 25 states have right to repair laws on the books, and another 14 have introduced bills in the past. But Gordon-Byrne says device makers are still “biting away around the edges.”

“The really big improvement, the one they don’t want to make, is letting people fix their stuff,” she added.

Big problems, big solutions

A worker examines electronic waste waiting to be dismantled for recycling at the Electronic Recyclers International plant in Holliston, Massachusetts. Zoran Milich/Getty

Recycling isn’t enough to solve our e-waste problem because e-waste isn’t the only problem. It’s one solution for one problem, but e-waste represents many other issues: Our reliance on devices and their supply chains, closed repair ecosystems, and the environmental impact of manufacturing.

“There’s a bigger story to tell about how we need to think about our reliance on devices and the ability to repair them, or trade them, or have them continue to circulate, so we do have autonomy from the instability of supply chains,” Gregg said.

Recycling is a positive step, one that consumers woefully underuse. But to address the increasing issues of e-waste and how electronics impact the environment, we need more sustainable device designs, more energy-efficient technology, and critically, the right to repair. Consumers play a role, but manufacturers do, too. And in this case, they need to lead the charge.

Chuaprasert says part of the problem is awareness. “We don’t always know to look, and I think that’s one area we can all work on, about increasing that awareness of what we can do.”

How you can get involved

So, what can you do? For devices you own, make sure to recycle or resell them — and consider buying secondhand instead of picking up the latest model. In addition, take advantage of ecolabels like EPEAT and Energy Star to help guide your purchasing decisions, and get involved in the right to repair movement. Gordon-Byrne offered an easy way to do that: “To complain effectively, you’ve got to complain to your legislators.”

All of those steps help, but manufacturers need to jump on board by designing more sustainable and repairable devices. Intel is helping manufacturers do that now, but that help only reaches so far.

Gregg summed it up nicely: “System recommendations, design, and working on dematerialization — it only works in an ecosystem and a business model that sees value in longevity.”



New AMD Ryzen 5000G Chips Solve a Big PC Building Problem

AMD just released two new Ryzen 5000G processors — the Ryzen 5 5600G and Ryzen 7 5700G. Although budget-focused APUs are par for the course with newer architectures, these two chips arrive at a very opportune time. The GPU shortage is still in effect, and both APUs fill a gap in the PC building space.

Over the past several months, the price of last-gen APUs has gone up in response to the GPU shortage. For example, when we recommended the Ryzen 5 3400G in our best $500 gaming PC build, it was selling at more than double what it should. The two-year-old chip should sell for around $150, but it’s nearly $330 at the time of publication.

These new chips from AMD hit on two fronts. In addition to featuring the new Zen 3 architecture, the chips are priced in line with how they should perform. The $359 Ryzen 7 5700G, for example, outclasses the 3400G in Fortnite by 23% at 1080p, according to AMD’s numbers. AMD also says it provides a 1.45x increase in Cinebench R20 and a 1.44x increase in PCMark 10.

Here are the specs of the new chips:

                        Ryzen 5 5600G    Ryzen 7 5700G
Cores                   6                8
Threads                 12               16
Base clock              3.9GHz           3.8GHz
Boost clock             4.4GHz           4.6GHz
Total cache             19MB             20MB
Graphics compute units  7                8
Graphics speed          1.9GHz           2GHz
TDP                     65W              65W
Price                   $259             $359

Thanks to the GPU pricing crisis, many builders have turned to picking up an APU. Although integrated graphics are never a sure bet for gaming, they’re still capable of running games with trimmed-down settings at lower resolutions. The logic is pretty straightforward — buy an APU for now to scratch the gaming itch, and add in a graphics card later once prices have dropped.

The problem was that APUs became the hot ticket, leading to issues like the vastly overpriced 3400G. The 5600G and 5700G fill that gap nicely, offering builders the opportunity to put together a gaming PC that can actually play games without taking out a new line of credit.

As for the gaming performance you can expect, AMD says the 5600G is capable of 79 frames per second (fps) in Civilization VI, 33 fps in Assassin’s Creed Odyssey, and 98 fps in Fortnite, all at 1080p with Low settings. The 5700G is only slightly more powerful in gaming, matching the 5600G in Assassin’s Creed Odyssey and Fortnite while moving up to 84 fps in Civilization VI. 

AMD originally announced these processors at Computex, and they’re now making it to store shelves. Both parts are available today across retailers at their list prices. If all goes well, they should stay at those prices for a while, but it’s too soon to say whether they’ll suffer the same fate as the 3400G.

Although we haven’t had the chance to test the chips ourselves, they look like the perfect addition to a budget build without a dedicated graphics card. And that’s something PC builders have needed for a while.



Tessian nabs $65M to solve cybersecurity’s ‘people problem’

Enterprise-focused email security company Tessian has raised $65 million in a series C round of funding, and announced plans to expand into communication conduits beyond email.

Founded in 2013 originally as CheckRecipient before a rebrand five years later, London-based Tessian solves a number of security headaches for big businesses such as Arm, Prudential, Schroders, and Dentons.

In its original guise, Tessian was focused largely on the misaddressed email problem, whereby a company employee accidentally sends an email to the wrong person. Using machine learning, Tessian scans its customers’ historical email data to spot patterns and then applies that knowledge to identify anomalies in current email activity — it’s designed to stop sensitive data from landing in the wrong inbox.

Above: Tessian prevents misaddressed emails

Today, Tessian offers a range of solutions across the email security spectrum, including threat prevention tools that cover business email compromise (BEC), account takeover (ATO), phishing, impersonation attacks, and more.

More broadly, Tessian serves up a human risk hub that gives security personnel data on their “email security posture” by providing detailed insights into risk levels at an individual level based on historical behaviors, while it also sends alerts to each employee before they carry out a risky action — these alerts are designed to coach employees on an ongoing basis.

Above: Tessian: “Human layer risk hub”

Attack vector

Prior to now, Tessian had raised around $59 million, and its latest cash injection saw return investments from Sequoia, Accel, Balderton Capital, and Latitude, as well as lead investor March Capital and Schroder Adveq. The company is now valued at $500 million.

Although companies face multiple security risks from just about every direction, insider (human) threats remain one of the biggest weaknesses companies need to address, while email specifically is one of the most consistently reliable attack vectors. Recent data from Trend Micro revealed that remote work increased high-risk email threats by 32% in 2020, with the bulk of the detected threats consisting of malicious URLs, phishing links, malware, and BEC attempts.

As Tessian notes, “employees are the gatekeepers to companies’ most sensitive systems and data,” which is why it’s expanding beyond email security into messaging and team collaboration platforms. Indeed, while email remains in rude health across the business sphere, workers are using an ever-growing combination of tools to communicate internally and externally, though Tessian stopped short of detailing which platforms it would support in the future.

Additionally, Tessian noted that it plans to double down on its email security credentials, and help companies “replace their secure email gateways and legacy data loss prevention solutions,” according to a blog post.




Why AI can’t solve unknown problems

When will we have artificial general intelligence, the kind of AI that can mimic the human mind in all aspects? Experts are divided on the topic, and answers range anywhere from a few decades to never.

But what everyone agrees on is that current AI systems are a far cry from human intelligence. Humans can explore the world, discover unsolved problems, and think about their solutions. Meanwhile, the AI toolbox continues to grow with algorithms that can perform specific tasks but can’t generalize their capabilities beyond their narrow domains. We have programs that can beat world champions at StarCraft but can’t play a slightly different game at an amateur level. We have artificial neural networks that can find signs of breast cancer in mammograms but can’t tell the difference between a cat and a dog. And we have complex language models that can spin thousands of seemingly coherent articles per hour but start to break when you ask them simple logical questions about the world.

In short, each of our AI techniques manages to replicate some aspects of what we know about human intelligence. But putting it all together and filling the gaps remains a major challenge. In his book Algorithms Are Not Enough, data scientist Herbert Roitblat provides an in-depth review of different branches of AI and describes why each of them falls short of the dream of creating general intelligence.

The common shortcoming across all AI algorithms is the need for predefined representations, Roitblat asserts. Once we discover a problem and can represent it in a computable way, we can create AI algorithms that can solve it, often more efficiently than ourselves. It is, however, the undiscovered and unrepresentable problems that continue to elude us.

Representations in symbolic AI

Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems.

Above: Herbert Roitblat, data scientist and author of Algorithms Are Not Enough.

Image Credit: Josiah Grandfield

This branch of AI assumes human thinking is based on the manipulation of symbols, and any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most applications we use every day have their roots in the work that has been done on symbolic AI.

But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.

“The intellectual tasks, such as chess playing, chemical structure analysis, and calculus are relatively easy to perform with a computer. Much harder are the kinds of activities that even a one-year-old human or a rat could do,” Roitblat writes in Algorithms Are Not Enough.

This is called Moravec’s paradox, named after the scientist Hans Moravec, who stated that, in contrast to humans, computers can perform high-level reasoning tasks with very little effort but struggle at simple skills that humans and animals acquire naturally.

“Human brains have evolved mechanisms over millions of years that let us perform basic sensorimotor functions. We catch balls, we recognize faces, we judge distance, all seemingly without effort,” Roitblat writes. “On the other hand, intellectual activities are a very recent development. We can perform these tasks with much effort and often a lot of training, but we should be suspicious if we think that these capacities are what makes intelligence, rather than that intelligence makes those capacities possible.”

So, despite its remarkable reasoning capabilities, symbolic AI is strictly tied to representations provided by humans.

Representations in machine learning

Machine learning provides a different approach to AI. Instead of writing explicit rules, engineers “train” machine learning models through examples. “[Machine learning] systems could not only do what they had been specifically programmed to do but they could extend their capabilities to previously unseen events, at least those within a certain range,” Roitblat writes in Algorithms Are Not Enough.

The most popular form of machine learning is supervised learning, in which a model is trained on a set of input data (e.g., humidity and temperature) and expected outcomes (e.g., probability of rain). The machine learning model uses this information to tune a set of parameters that map the inputs to outputs. When presented with previously unseen input, a well-trained machine learning model can predict the outcome with remarkable accuracy. There’s no need for explicit if-then rules.
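To make that concrete, here is a minimal sketch in plain Python. The toy dataset (humidity readings mapped to rain probabilities) and the single-feature linear model are hypothetical illustrations, not anything from Roitblat’s book: one set of numbers is the inputs, one is the expected outputs, and the parameters `w` and `b` are the numbers the training loop tunes.

```python
# Minimal supervised-learning sketch with hypothetical toy data:
# pairs of (humidity, expected rain probability).
data = [(0.2, 0.1), (0.5, 0.4), (0.8, 0.7), (0.9, 0.8)]

w, b = 0.0, 0.0   # the model: a "set of numbers" tuned during training
lr = 0.1          # learning rate, chosen by the designer, not learned

# Stochastic gradient descent on mean squared error.
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

# A trained model can now predict an input it never saw during training.
print(round(w * 0.6 + b, 2))
```

Nothing about the problem is discovered by the model itself; the representation (one feature in, one probability out) was fixed by a human before training began.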

But supervised machine learning still builds on representations provided by human intelligence, albeit looser ones than symbolic AI’s. Here’s how Roitblat describes supervised learning: “[M]achine learning involves a representation of the problem it is set to solve as three sets of numbers. One set of numbers represents the inputs that the system receives, one set of numbers represents the outputs that the system produces, and the third set of numbers represents the machine learning model.”

Therefore, while supervised machine learning is not tightly bound to rules like symbolic AI is, it still requires strict representations created by human intelligence. Human operators must define a specific problem, curate a training dataset, and label the outcomes before they can create a machine learning model. Only once the problem has been strictly represented can the model start tuning its parameters.

“The representation is chosen by the designer of the system,” Roitblat writes. “In many ways, the representation is the most crucial part of designing a machine learning system.”

One branch of machine learning that has risen in popularity in the past decade is deep learning, which is often compared to the human brain. At the heart of deep learning is the deep neural network, which stacks layers upon layers of simple computational units to create machine learning models that can perform very complicated tasks such as classifying images or transcribing audio.

Above: Deep learning models can perform complicated tasks such as classifying images.

But again, deep learning is largely dependent on architecture and representation. Most deep learning models need labeled data, and there is no universal neural network architecture that can solve every possible problem. A machine learning engineer must first define the problem they want to solve, curate a large training dataset, and then figure out the deep learning architecture that can solve that problem. During training, the deep learning model will tune millions of parameters to map inputs to outputs. But it still needs machine learning engineers to decide the number and type of layers, learning rate, optimization function, loss function, and other unlearnable aspects of the neural network.
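That division of labor can be seen in a short sketch with hypothetical values: everything in the `config` dictionary below is fixed by the engineer before training ever starts, and only the numbers inside the weight matrices are tuned later.

```python
import random

# Designer-chosen, unlearnable decisions (hypothetical values).
config = {
    "layers": [4, 16, 16, 2],   # number and width of layers
    "activation": "relu",       # nonlinearity between layers
    "learning_rate": 1e-3,      # optimizer step size
    "loss": "cross_entropy",    # objective to minimize
}

random.seed(0)

def init_params(layers):
    """One weight matrix per pair of adjacent layers. Training only
    tunes these numbers; their shape was fixed by the config above."""
    return [[[random.gauss(0, 0.1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(layers, layers[1:])]

params = init_params(config["layers"])
print(len(params))  # three weight matrices: 4->16, 16->16, 16->2
```

The structure the network can learn is already baked in before it sees a single example, which is Roitblat’s point about representations being chosen by the designer.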

“Like much of machine intelligence, the real genius [of deep learning] comes from how the system is designed, not from any autonomous intelligence of its own. Clever representations, including clever architecture, make clever machine intelligence,” Roitblat writes. “Deep learning networks are often described as learning their own representations, but this is incorrect. The structure of the network determines what representations it can derive from its inputs. How it represents inputs and how it represents the problem-solving process are just as determined for a deep learning network as for any other machine learning system.”

Other branches of machine learning follow the same rule. Unsupervised learning, for example, does not require labeled examples. But it still requires a well-defined goal such as anomaly detection in cybersecurity, customer segmentation in marketing, dimensionality reduction, or embedding representations.

Reinforcement learning, another popular branch of machine learning, is very similar to some aspects of human and animal intelligence. The AI agent doesn’t rely on labeled examples for training. Instead, it is given an environment (e.g., a chess or go board) and a set of actions it can perform (e.g., move pieces, place stones). At each step, the agent performs an action and receives feedback from its environment in the form of rewards and penalties. Through trial and error, the reinforcement learning agent finds sequences of actions that yield more rewards.
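That trial-and-error loop fits in a few lines of plain Python. The five-cell corridor environment, the reward scheme, and every hyperparameter below are hypothetical illustrations; note how much of it (the states, the actions, the reward, the learning rate) a human has to choose before the agent learns anything.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
actions = [-1, +1]                   # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2    # designer-chosen, not learned

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < eps:
        return random.choice(actions)
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

for _ in range(500):                 # episodes of trial and error
    s = 0
    for _ in range(100):             # cap episode length
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # feedback from the environment
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# The greedy policy the agent settled on for each non-goal state.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

After enough episodes the greedy policy should move right from every state, but only because a human framed the states, actions, and rewards first.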

Computer scientist Richard Sutton describes reinforcement learning as “the first computational theory of intelligence.” In recent years, it has become very popular for solving complicated problems such as mastering computer and board games and developing versatile robotic arms and hands.

Above: Reinforcement learning can solve complicated problems such as playing board and video games and performing robotic manipulations.

Image Credit: Tech Talks

But reinforcement learning environments are typically very complex, and the number of possible actions an agent can perform is very large. Therefore, reinforcement learning agents need a lot of help from human intelligence to design the right rewards, simplify the problem, and choose the right architecture. For instance, OpenAI Five, the reinforcement learning system that mastered the online video game Dota 2, relied on its designers simplifying the rules of the game, such as reducing the number of playable characters.

“It is impossible to check, in anything but trivial systems, all possible combinations of all possible actions that can lead to reward,” Roitblat writes. “As with other machine learning situations, heuristics are needed to simplify the problem into something more tractable, even if it cannot be guaranteed to produce the best possible answer.”

Here’s how Roitblat summarizes the shortcomings of current AI systems in Algorithms Are Not Enough: “Current approaches to artificial intelligence work because their designers have figured out how to structure and simplify problems so that existing computers and processes can address them. To have a truly general intelligence, computers will need the capability to define and structure their own problems.”

Is AI research headed in the right direction?

“Every classifier (in fact every machine learning system) can be described in terms of a representation, a method for measuring its success, and a method of updating,” Roitblat told TechTalks over email. “Learning is finding a path (a sequence of updates) through a space of parameter values. At this point, though, we don’t have any method for generating those representations, goals, and optimizations.”

There are various efforts to address the challenges of current AI systems. One popular idea is to continue to scale deep learning. The general reasoning is that bigger neural networks will eventually crack the code of general intelligence. After all, the human brain has more than 100 trillion synapses. The biggest neural network to date, developed by AI researchers at Google, has one trillion parameters. And the evidence shows that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.

But big neural networks do not address the fundamental problems of general intelligence.

“These language models are significant achievements, but they are not general intelligence,” Roitblat says. “Essentially, they model the sequence of words in a language. They are plagiarists with a layer of abstraction. Give it a prompt and it will create a text that has the statistical properties of the pages it has read, but no relation to anything other than the language. It solves a specific problem, like all current artificial intelligence applications. It is just what it is advertised to be — a language model. That’s not nothing, but it is not general intelligence.”

Other research directions try to add structural improvements to current AI systems.

For instance, hybrid artificial intelligence brings symbolic AI and neural networks together to combine the reasoning power of the former and the pattern recognition capabilities of the latter. There are already several implementations of hybrid AI, also referred to as “neuro-symbolic systems,” that show hybrid systems require less training data and are more stable at reasoning tasks than pure neural network approaches.
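As an illustration only (no real neuro-symbolic system is this simple), the toy below shows the division of labor hybrid AI aims for: a statistical component, stubbed out here, produces labels with confidences, and explicit symbolic rules reason over them. The function names, threshold, and rules are all invented:

```python
# A toy illustration of hybrid (neuro-symbolic) AI: a statistical
# component assigns probabilities to perceptions, and a symbolic
# component applies explicit, inspectable rules to reason over them.

def perceive(pixels):
    """Stand-in for a neural classifier: maps raw input to (label, confidence)."""
    # In a real system this would be a trained network; here it's a stub.
    return ("red_light", 0.97) if sum(pixels) > 10 else ("green_light", 0.95)

RULES = {
    "red_light": "stop",
    "green_light": "go",
}

def decide(pixels, threshold=0.9):
    label, confidence = perceive(pixels)   # pattern recognition side
    if confidence < threshold:
        return "slow_down"                 # explicit symbolic fallback rule
    return RULES[label]                    # explicit, auditable rule
```

Unlike the weights of a neural network, the rule table can be read, audited, and corrected directly, which is the stability advantage hybrid systems claim at reasoning tasks.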

System 2 deep learning, another direction of research proposed by deep learning pioneer Yoshua Bengio, tries to take neural networks beyond statistical learning. System 2 deep learning aims to enable neural networks to learn “high-level representations” without the need for explicit embedding of symbolic intelligence.

Another research effort is self-supervised learning, proposed by Yann LeCun, another deep learning pioneer and the inventor of convolutional neural networks. Self-supervised learning aims to learn tasks without the need for labeled data and by exploring the world like a child would do.
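The core trick of self-supervised learning, that the data supplies its own labels, can be shown with a deliberately tiny example. The bigram model below is purely illustrative and vastly simpler than anything LeCun proposes:

```python
# Self-supervised learning in miniature: the "labels" are carved out of
# the raw data itself (here, each word predicts the next one), so no
# human annotation is needed. A toy bigram counter, for illustration only.

from collections import Counter, defaultdict

def train_bigrams(text):
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1     # the data supplies its own supervision
    return model

def predict(model, word):
    counts = model.get(word)
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the cat sat on the mat the cat ran")
```

Here `predict(model, "the")` returns "cat", the most common follower in the training text; no one ever labeled an example by hand.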

“I think that all of these make for more powerful problem solvers (for path problems), but none of them addresses the question of how these solutions are structured or generated,” Roitblat says. “They all still involve navigating within a pre-structured space. None of them addresses the question of where this space comes from. I think that these are really important ideas, just that they don’t address the specific needs of moving from narrow to general intelligence.”

In Algorithms Are Not Enough, Roitblat provides ideas on what to look for to advance AI systems that can actively seek and solve problems they have not been designed for. We still have a lot to learn from ourselves and from how we apply our intelligence in the world.

“Intelligent people can recognize the existence of a problem, define its nature, and represent it,” Roitblat writes. “They can recognize where knowledge is lacking and work to obtain that knowledge. Although intelligent people benefit from structured instructions, they are also capable of seeking out their own sources of information.”

But observing intelligent behavior is easier than creating it, and, as Roitblat told me in our correspondence, “Humans do not always solve their problems in the way that they say/think that they do.”

As we continue to explore artificial and human intelligence, we will continue to move toward AGI one step at a time.

“Artificial intelligence is a work in progress. Some tasks have advanced further than others. Some have a way to go. The flaws of artificial intelligence tend to be the flaws of its creator rather than inherent properties of computational decision making. I would expect them to improve over time,” Roitblat said.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.





Tech News

AI can now farm crickets (and hopefully solve world hunger in the process)

Earth’s expanding population and unequal distribution of natural resources are pushing the planet towards a food insecurity crisis.

One solution to the problem is adding an unusual ingredient to our diets: crickets.

The insects have a high protein content and low environmental footprint that could provide a sustainable alternative to meat and fish.

They might not have the most appetizing appearance, but looks can be deceiving: crickets are renowned for their subtle nutty flavor, crunchy texture, and exquisite astringency.

At least, that’s what I’ve been told. My religious beliefs sadly forbid me from indulging in the delicacy — but that doesn’t mean you have to miss out. And thanks to AI, the chirpy critters could be arriving on your plate sooner than you think.

A team led by the Aspire Food Group plans to bring the creatures from farm to fork by building the world’s first fully automated insect manufacturing site.

The crickets will then be turned into food products, including protein powder and bars.


The project is the first time that industrial automation, IoT, robotics, and AI will be deployed in climate-controlled, indoor vertical agriculture with living organisms.
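The article gives no technical detail about the system, so purely as an illustration, here is the kind of sensor-to-actuator feedback loop a climate-controlled indoor farm might run; the setpoints, tolerance, and key names are all invented:

```python
# A generic sketch of an IoT climate-control step: read sensors, compare
# each reading to its setpoint, and emit actuator commands. All values
# here are hypothetical, not Aspire Food Group's actual parameters.

SETPOINTS = {"temp_c": 30.0, "humidity_pct": 70.0}   # hypothetical targets
TOLERANCE = 2.0

def control_step(readings):
    """Return actuator commands that nudge each reading toward its setpoint."""
    commands = {}
    for key, target in SETPOINTS.items():
        value = readings[key]
        if value < target - TOLERANCE:
            commands[key] = "increase"
        elif value > target + TOLERANCE:
            commands[key] = "decrease"
        else:
            commands[key] = "hold"
    return commands
```

A real deployment would layer scheduling, robotics, and anomaly detection on top, but the basic loop of sense, compare, actuate is the same.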


Apple can solve our Face ID mask woes by stealing one of Android’s best features

If you own an iPhone X or later and have gone out into the world recently, you probably noticed an unfortunate side effect of the new mask-wearing culture: Face ID doesn’t work.

It’s more of a feature than a bug, but the fact of the matter is that if Apple’s TrueDepth camera system can’t scan your whole face, it won’t unlock your phone. If you’re wearing a mask, as most stores and restaurants require, you’re left typing in your passcode whenever you want to check your shopping list or pay your bill.

Apple offered up a workaround with the recent iOS 13.5 update, but it’s hardly a fix. Now, instead of waiting for Face ID to fail a couple times before the passcode screen pops up, you can swipe up from the bottom of the screen to quickly enter your code. That makes things a little less infuriating, but still not ideal.

This isn’t as much of an issue on the Pixel 4 Android phone, and not just because way fewer people own one. While Face unlock is just as wonky as Face ID when wearing a mask, Android has a system in place that lets you bypass it while still respecting your phone’s security and privacy. It’s called Smart Lock, and it’s perfect for these times.

Smart Lock has been around since the days of Android Lollipop, but it’s never been more useful. You can set your phone to stay unlocked based on a variety of factors, including whether you’re carrying it or listening to music on a pair of wireless earbuds:

  • On-Body Detection: Keeps your phone unlocked when it’s in your pocket or close by.
  • Trusted Places: Lets you add a location (like your home) where your phone will always stay unlocked.
  • Trusted Devices: Recognizes a paired Bluetooth device and keeps your phone unlocked when it’s active.
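Conceptually, those options boil down to a small trust policy: stay unlocked while any configured trust signal holds, and re-lock the moment none do. The sketch below is an assumption-laden approximation of that logic, not Google's implementation; the device address and coordinates are made up:

```python
# A sketch of the trust-signal logic behind a Smart Lock-style feature.
# The device stays unlocked while any trust condition holds.

TRUSTED_DEVICES = {"AA:BB:CC:DD:EE:FF"}          # e.g. paired earbuds
TRUSTED_PLACES = {("37.4220", "-122.0841")}      # e.g. home coordinates

def should_stay_unlocked(on_body, location, connected_bt):
    if on_body:                                   # On-Body Detection
        return True
    if location in TRUSTED_PLACES:                # Trusted Places
        return True
    if connected_bt & TRUSTED_DEVICES:            # Trusted Devices
        return True
    return False                                  # fall back to lock screen
```

Under this policy, wearing connected earbuds keeps the phone unlocked whether or not a mask is covering your face, which is exactly why the feature is so well suited to the moment.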

Even before I had to wear a mask, Smart Lock was one of Android’s best features, and it’s been on my iOS wishlist for years. I understand why Apple might be reluctant to embrace it, but short of training its Face ID algorithm to forgive face coverings (which would seriously undermine the security of the system), a Smart Lock-style system in iOS 14 would be a great way to mitigate the Face ID frustrations caused by COVID-19.

Smart Lock for Android allows you to keep your phone unlocked based on location, movement, or the presence of a Bluetooth device. (Image: IDG)

Apple probably wouldn’t make it as “open” as Android does, but even if it were limited to Apple’s own products, it would still be incredibly useful: your iPhone would stay unlocked as long as you were wearing your AirPods and Apple Watch, so a mask wouldn’t matter. You’d bypass the lock screen altogether in the store or on a run.

Apple already has a similar system in place on the Mac called Auto Unlock. If you have a mid-2013 or later Mac running macOS High Sierra or later, your Apple Watch can instantly unlock your Mac when it wakes from sleep, eliminating the need to type a passcode or reach over to use Touch ID. Your Mac simply recognizes that your Apple Watch is on your wrist and unlocked, and bypasses its own lock screen.



PlayStation VR: 5 Common Problems and How to Solve Them

There’s nothing better than pulling a PlayStation VR headset from its packaging, putting it on your head for the first time, and diving into the wonderful worlds that only virtual reality can deliver. Virtual reality is intense, surreal, and unlike anything we’ve seen in video games before.

At the same time, there’s nothing worse than plugging in your PSVR for the first time, only to discover that it isn’t working the way it’s supposed to. To help you iron out the kinks, we’ve compiled a list of some of the most common problems plaguing Sony’s newfangled headset, as well as the steps you can take to rectify them. Not all of these will affect every user — particularly those pertaining to motion sickness — and not every solution we put forth is guaranteed to fix your problem. For more serious issues, you’ll likely have to contact Sony directly.

Further reading:

How to get in touch with Sony:

  • Phone: 1-800-345-7669
  • Twitter: @AskPlayStation
  • Unofficial VR Reddit
  • Helpful Articles

Your headset has tracking issues

(Image: Julian Chokkattu/Digital Trends)

If your PlayStation VR headset isn’t tracking your movement properly, you might see an “outside of area” message appear or notice that your in-game avatar is moving without your direct input.


The problem could be related to lighting, as the PlayStation Camera is primarily tracking your headset via a number of blue lights on its surface. Tracking issues can happen for a variety of reasons, however, so don’t lose hope if the first few solutions don’t work for you.

  • Make sure that no other light source is interfering with the PlayStation VR headset or camera. Sony has noted that the tracking issue can stem from light reflecting off a window or mirror, so if possible, cover these up. After you’ve adjusted any nearby light sources, you’ll need to adjust the PlayStation Camera. To do this, go to Settings, select Devices, and choose PlayStation Camera.
  • If your lights don’t appear to be the problem, make sure you’re within the designated play area and that the PlayStation Camera can see you clearly. If possible, position yourself about six feet away from the camera, with your headset clearly displayed in the picture.
  • If the aforementioned steps don’t work, wipe both PlayStation Camera lenses with a cloth. The problem could be caused by a dirty lens.
  • A variety of Bluetooth devices can interfere with your headset’s signal. Make sure that all controllers — including any PlayStation Move controllers — are tied to the same user account as your headset.
  • If none of these steps solve your problem, then your tracking issues are likely caused by a hardware problem. Thankfully, you can easily set up a repair using Sony’s Online Service Tool.

Headset won’t power on or turns off

Perhaps one of the issues you’re running into has to do with turning the headset on or off. There are several cables that must be plugged into your headset, processor box, and PlayStation 4 console in order for the system to work properly — including a USB cable, two HDMI cables, a power cable, and a cable running directly to the headset. Ensure these are correctly connected before you start troubleshooting your headset.


If your headset is plugged in correctly and won’t turn on, the problem could stem from either the PSVR system software or a piece of hardware.

  • Update the PSVR system software before trying any other steps; this is done much like adjusting the PlayStation Camera. Go to Settings and select Devices. Then, select PlayStation VR system software.
  • If your headset still isn’t working, check and make sure that the processor unit’s light is white. If it’s red, turn off your system, unplug the processor unit, plug it back in, and try to turn on both your console and headset. If this doesn’t work, you’ll need to contact PlayStation Support directly, which you can do using the following phone number: 1-800-345-7669.
  • If the light on the processor box is white and you still can’t see your PS4’s display through the headset, make sure that all HDMI and USB cables are plugged in correctly, as well as the headset’s connection cable. Then, try all other additional cables to make sure they aren’t causing the problem.
  • If changing cables doesn’t fix your issue, try cleaning the “attachment sensor” — located in the front of the headset — with a cloth.
  • If these steps don’t work, your headset likely needs to be repaired. To set up a repair, contact Sony using the following number: 1-800-345-7669.

On-screen image is blurry


One of the most common PSVR problems is a blurry picture, but this can often be solved by simply adjusting the device to better fit your eye orientation and unique head measurements.


You should be able to eradicate any blurry images in just a few minutes by checking a few settings and properly adjusting the PSVR to fit your head. This will likely have to be done again if a different player wears the headset.

  • Using the Quick Menu on your PS4 — which can be brought up by pressing the PlayStation button in the middle of your controller — select Adjust PlayStation VR and Adjust headset position. The “scope adjustment button” located at the bottom of your headset and the “headband release button” located on the back can both be used to give you an accurate, comfortable fit. After you put on the headset, use the rear dial to adjust the picture. This will likely fix any blur issues you might be experiencing.
  • If this doesn’t solve your problem, you may need to adjust the headset’s “eye-to-eye” distance. To do so, select Settings and choose Devices. Then, select PlayStation VR, choose eye-to-eye distance, and follow the on-screen instructions. The PlayStation Camera will then measure your face — just make sure you are about 70 centimeters away.
  • If these steps don’t clear up the blur issues, you’ll likely need to send in your PSVR for repairs. To set up a repair, contact Sony using the following number: 1-800-345-7669.

On-screen image is “drifting”


While using the PSVR headset, either in “cinematic” mode or when playing VR-enabled games, you may notice your picture drifting to one side of the display.


This is a common issue that, in many cases, can be fixed simply by adjusting the camera or quickly re-calibrating your headset.

  • If you’re playing a standard PS4 game or watching a movie using PSVR and the picture has drifted to one side, Sony recommends simply pressing and holding the “options” button on your controller. This will re-position the screen and should fix the issue.
  • For PSVR-enabled games, drifting is more than likely caused by the PlayStation Camera rather than the headset. Make sure you’re positioned directly in front of the camera, with about five or six feet between you and the device. Also, make sure the camera isn’t on a vibrating or moving surface. If it is, this could be because of its proximity to your PS4. Consider reorganizing your consoles and equipment if this is the case.
  • Some users have also reported success when switching from the new, cylindrical PlayStation Camera to the older design, which is more stable and resistant to vibrations.
  • If these steps don’t work, Sony suggests placing the headset in a stable, vibration-free spot for 10 seconds.

PlayStation VR is making you sick


VR sickness is a very common occurrence, particularly with people who experience motion sickness while on roller coasters and in high-speed vehicles. The caveat of immersive virtual reality is that it can affect your equilibrium and balance, causing you to feel queasy when you are, in fact, completely motionless.


No single solution is going to alleviate everyone’s VR-induced nausea, but there are a few steps you’ll want to try — detailed by PlayStation VR users — in order to make your PSVR experience as pleasant and vomit-free as possible.

  • Don’t play standing up. The vast majority of PSVR games are meant to be played sitting down. If you’re feeling nauseous and your sense of balance is already off, you risk not only exacerbating the sickness but also injuring yourself.
  • If you begin feeling nauseous, don’t ignore it—first, try closing your eyes and seeing if it goes away on its own.
  • Try natural remedies. Both peppermint and ginger have fantastic reputations for relieving nausea, and it’s likely that you already have them mixed in your teas or spices.
  • Dramamine is a common over-the-counter medication for nausea relief and is a good option if herbal remedies aren’t doing the trick. Initially designed for people prone to carsickness, seasickness, or airsickness, it is also extremely useful in the VR world. Keep in mind that this medicine can cause drowsiness, so we don’t suggest taking it if you’ll have to drive or use other heavy machinery soon. Kids under 12 shouldn’t use the adult dosage, but there’s a children’s formula available for occasional use. 
  • Select less-intense virtual reality games or experiences. Below is our selection of PSVR games that are notorious for making you sick and those that aren’t as bad. VR is supposed to be a fun experience, so if you’re feeling miserable while playing, it might not be for you. 
Most likely to induce nausea:

  • Rigs: Mechanized Combat League
  • EVE: Valkyrie
  • Star Wars: Battlefront – Rogue One: X-Wing VR Mission
  • Here They Lie
  • DriveClub VR
  • PlayStation VR Worlds — “Scavenger’s Odyssey”
  • Resident Evil 7 in VR mode
  • Battlezone
  • The Elder Scrolls V: Skyrim in VR
  • Gran Turismo Sport
  • Borderlands 2
  • No Man’s Sky
  • Superhot

Least likely to induce nausea:

  • Job Simulator
  • Hustle Kings
  • Wayward Sky
  • Batman: Arkham VR
  • Moss
  • Blood & Truth
  • Rez Infinite
  • Thumper
  • Déraciné
  • Vacation Simulator
  • Beat Saber
  • Tetris Effect
  • Astro Bot Rescue Mission




CableLabs wants to solve glitchy smartphone Wi-Fi with IWINS

A CableLabs technology called Intelligent Network Wireless Steering, or IWINS, could be the answer to a pesky problem that smartphones have: As they reach the outer fringes of a Wi-Fi network, do they ask for data from the Wi-Fi router or their cellular service?

It’s a problem that you wouldn’t think CableLabs, which develops the DOCSIS standards that govern cable modems, would normally solve. But about half the standards body’s members are mobile network operators, so it made sense, said Phil McKinney, CableLabs president and chief executive, and the former CTO of HP. It’s a problem users might have as they’re outside in their backyard, roaming through a mall, or sitting just out of range of a home router.

“We’ve all done this—we’re on Wi-Fi, and we’re getting kind of a glitchy experience, and what do we do? We turn Wi-Fi off,” McKinney said. “And we just default to cellular, and then we forget we’re on cellular, and we’re using up data packets.”

CableLabs is positioning IWINS as a “network engineer in your device,” examining what Wi-Fi and cellular connections you’re on, what the network conditions are, and what applications you’re running, and steering the network packets appropriately to deliver the best experience. IWINS doesn’t depend on any changes to the operating system; instead, it’s an application that lives on your smartphone and communicates privately to a separate, related piece of software on a server to determine your best connection.
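CableLabs hasn't published how IWINS actually scores connections, so the following is only a toy heuristic in the same spirit: rate each link from measured conditions and switch only when the alternative is clearly better. The scoring weights and all numbers are invented:

```python
# A toy connection-steering heuristic (not the IWINS algorithm): score
# each available link from measured conditions and prefer to stay put
# unless the other link wins by a clear margin.

def score(link):
    """Higher is better: strong signal and low latency win."""
    return link["signal_dbm"] - 0.5 * link["latency_ms"]

def steer(wifi, cellular, hysteresis=5.0):
    """Pick a link, preferring Wi-Fi unless it's clearly worse."""
    if score(cellular) > score(wifi) + hysteresis:
        return "cellular"        # switch only on a clear margin, to avoid
    return "wifi"                # flip-flopping at the edge of coverage

fringe_wifi = {"signal_dbm": -80, "latency_ms": 120}
good_lte = {"signal_dbm": -70, "latency_ms": 40}
```

The hysteresis term matters: without it, a phone at the fringe of a Wi-Fi network would oscillate between links, which is precisely the glitchy experience the technology is meant to eliminate.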

IWINS gathers data from other IWINS users about what their own experiences have been with the Wi-Fi access points or cellular connections in the area, and uses them to guide your experiences, McKinney said. That user data would be completely anonymized, he added.

The technology is in trials at an undisclosed number of CableLabs members, McKinney said, which include Charter, Cox, and Comcast in the United States, among others. Comcast is probably one of the most likely candidates to use IWINS as a differentiating feature to lure additional customers from traditional carriers like AT&T or Verizon. “Comcast or Charter could offer this as an enhanced experience without any changes to the Verizon network,” McKinney said.

IWINS straddles the border between traditional wireless networking, phones, and cellular, and offers operators the chance to smooth out what has traditionally been a bumpy user experience.

“How do we give the appearance of rock-solid, consistent performance? And the way to do that is constantly monitoring all the access technologies underneath, versus exposing that to the user,” McKinney said. If implemented correctly, it’s possible IWINS will just be one of those technologies that quietly makes those glitchy wireless connections go away.
