Optical illusions could help us build a new generation of AI

You look at an image of a black circle on a grid of circular dots. It resembles a hole burned into a piece of white mesh material, although it’s actually a flat, stationary image on a screen or piece of paper. But your brain doesn’t comprehend it like that. Like some low-level hallucinatory experience, your mind trips out, perceiving the static image as the mouth of a black tunnel that’s moving toward you.

Responding to the verisimilitude of the effect, the body starts to react unconsciously: the pupils dilate to let in more light, just as they would if you were about to be plunged into darkness, to ensure the best possible vision.

The effect in question was created by Akiyoshi Kitaoka, a psychologist at Ritsumeikan University in Kyoto, Japan. It’s one of the dozens of optical illusions he’s created over a lengthy career. (“I like them all,” he said, responding to Digital Trends’ question about whether he has a favorite.)

This new illusion was the subject of a piece of research published recently in the journal Frontiers in Human Neuroscience. While the focus of the paper is firmly on the human physiological responses to the novel effect (which it turns out that some 86 percent of us will experience), the overall topic may also have a whole lot of relevance when it comes to the future of machine intelligence — as one of the researchers was eager to explain to Digital Trends.

An evolutionary edge

an optical illusion known as the Fraser spiral
At first glance, it might appear as though this image shows a spiral that winds toward the center. But try to follow one of the lines as it seemingly curves inward, and you’ll realize it’s not a spiral at all.

Something’s wrong with your brain. At least, that’s one easy conclusion to be drawn from the way that the human brain perceives optical illusions. What other explanation is there for a two-dimensional, static image that the brain perceives as something totally different? For a long time, mainstream psychology figured exactly that.

“Initially people thought, ‘Okay, our brain is not perfect … It doesn’t get it always right.’ That’s a failure, right?” said Bruno Laeng, a professor at the Department of Psychology of the University of Oslo and first author of the aforementioned study. “Illusions in that case were interesting because they would reveal some kind of imperfection in the machinery.”

Psychologists no longer view them that way. If anything, research such as this highlights how the visual system is not just a straightforward camera. The “Illusory Expanding Hole” optical illusion makes clear that the eye adjusts to perceived, even imagined, light and darkness, rather than to physical energy.

Most significantly, it showcases that we don’t just dumbly record the world with our visual systems, but instead perform a continuous set of scientific experiments in order to gain a slight evolutionary advantage. The goal is to analyze data presented to us and try to preemptively deal with problems before they become, well, problems.

“The brain has no way to know what’s [really] out there,” Laeng said. “What it’s doing is building up a sort of virtual reality of what could be out there. There’s a little bit of guesswork. In this respect, you can think of the brain as a kind of probabilistic machine. You can call it a Bayesian machine if you want. It’s using some prior hypothesis and trying to test it all the time to see whether that works.”
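The “Bayesian machine” Laeng describes can be sketched in a few lines. This toy Python example, with entirely made-up probabilities, shows how a prior belief about impending darkness gets revised upward by ambiguous evidence such as a looming black region:

```python
# A toy version of the "Bayesian machine": hold a prior belief, observe
# ambiguous evidence, update. Every probability here is made up purely
# for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothesis: "I am about to be plunged into darkness."
prior = 0.1                  # darkness is unlikely a priori
p_dark_patch_if_true = 0.9   # a looming black region fits the hypothesis
p_dark_patch_if_false = 0.3  # a flat dark image could also produce it

posterior = bayes_update(prior, p_dark_patch_if_true, p_dark_patch_if_false)
print(round(posterior, 2))  # 0.25: belief rises, so the pupils dilate "just in case"
```

The point is not the specific numbers but the shape of the computation: the evidence doesn’t need to make the hypothesis certain, only probable enough to be worth preparing for.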

Laeng gives the example of our eyes making adjustments based on nothing more than the impression of light from the sun, even when the sun is only glimpsed through cloud cover or an overhead canopy of leaves. Just in case.

“What matters in evolution is not that it is true [at that moment], but it is probable,” he continued. “By constricting the pupil, your body is already adjusting to a situation that is very likely to happen in a short period of time. What happens [if the sun suddenly comes out] is that you are dazzled. Dazzled means incapacitated temporarily. That has enormous consequences whether you’re a prey or whether you’re a predator. You lose a fraction of a second in a particular situation and you may not survive.”

It’s not just light and darkness where our visual systems need to make guesses, either. Think about a game of tennis, where the ball is traveling at high speed. Were we to base our behavior wholly on what the visual system is receiving at any given moment, we would lag behind reality and fail to return the ball. “We are able to perceive the present although we are really stuck in the past,” Laeng said. “The only way to do it is by predicting the future. It sounds a bit like a word game, but that is it in a nutshell.”
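That prediction trick can be illustrated with a simple constant-velocity extrapolation. The assumption of constant velocity, and all the numbers below, are purely illustrative:

```python
# A sketch of latency compensation by extrapolation: from two stale
# position samples, estimate where a fast-moving ball is right now.
# The constant-velocity assumption and all figures are illustrative.

def predict_now(pos_then, pos_later, sample_gap, lag):
    """Linearly extrapolate position 'lag' seconds past the newest sample."""
    velocity = (pos_later - pos_then) / sample_gap
    return pos_later + velocity * lag

# Ball seen at 10.0 m and then 10.5 m, with samples 0.01 s apart and
# 0.1 s of neural processing lag to overcome:
print(predict_now(10.0, 10.5, 0.01, 0.1))  # about 15.5 m: act on the
                                           # predicted present, not the past
```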

Machine vision is getting better

facial recognition
izusek/Getty Images

So what does this have to do with computer vision? Potentially everything. For a robot, for instance, to function effectively in the real world, it needs to be able to make these kinds of adjustments on the fly. Computers have an advantage when it comes to performing extremely fast computations. What they don’t have is millions of years of evolution on their side.

In recent years, machine vision has nonetheless made enormous strides. Systems can identify faces or gaits in real-time video streams, potentially even in vast crowds of people. Similar image classification tools can recognize the presence of other objects, too, while breakthroughs in object segmentation make it possible to better understand the content of different scenes. There has also been significant progress in extrapolating 3D information from 2D scenes, allowing machines to “read” depth from flat images. All of this takes modern computer vision closer to human image perception.

However, there still exists a gulf between the best machine vision algorithms and the kinds of vision-based capabilities the overwhelming majority of humans are able to carry out from a young age. While we can’t articulate exactly how we perform these vision-based tasks (to quote the Hungarian-British polymath Michael Polanyi, “we can know more than we can tell”), we are nonetheless able to perform an impressive array of tasks that allow us to harness our eyesight in a variety of smart ways.

A Turing Test for machine vision

If researchers and engineers hope to create computer vision systems that operate at least on par with the visual processing skills of the wetware brain, building algorithms that can understand optical illusions is not a bad starting point. At the very least, it could prove a good way of measuring how closely machine vision systems operate to our own brains. It may not be the route to mythical Artificial General Intelligence, but it might be the key to unlocking General Vision.

an optical illusion that tricks your brain into seeing false colors
Believe it or not, all these balls are the same shade of gray; your brain interprets them as having different colors based on the contextual cues of the colored lines that cross over them.

“If someone would develop, one day, an artificial visual system that commits the same illusory perception errors that we do, you would know at this point that they’re [achieving] a good simulation of how our brain works,” Laeng said. “It would be a sort of Turing Test. If you have an artificial network that is fooled by illusion as we are, then we [would be] very close to understanding the underlying computation of the brain itself.”

Yi-Zhe Song, reader of Computer Vision and Machine Learning at the Center for Vision Speech and Signal Processing at the U.K.’s University of Surrey, agrees with the hypothesis. “Asking vision algorithms to understand optical illusions as a general topic is of great value to the community,” he told Digital Trends. “It goes beyond the current community focus of asking machines to [recognize], by pushing the envelope further [and] asking machines to reason. This push [would represent] a significant step forward towards ‘General Vision,’ where subjective interpretations of visual concepts need to be accommodated for.”

Use your illusion

To date, there has been some limited research toward this goal, although it remains at a relatively early stage. Nasim Nematzadeh, a researcher who holds a Ph.D. in artificial intelligence and robotics with a focus on low-level vision models, is one person who has published work on this topic.

“We believe that further exploration of the role of simple Gaussian-like models in low-level retinal processing and Gaussian kernel in early stage [deep neural networks], and its prediction of loss of perceptual illusion, will lead to more accurate computer vision techniques and models,” Nematzadeh told Digital Trends. “[This could] contribute to higher level models of depth and motion processing and generalized to computer understanding of natural images.”
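As a rough illustration of the “Gaussian-like models” Nematzadeh mentions, a classic stand-in for low-level retinal processing is the Difference-of-Gaussians (DoG) center-surround filter. This sketch, using arbitrary sigmas chosen for illustration, shows the key property: such a kernel ignores uniform light and responds only to contrast.

```python
import math

# A sketch of the "Gaussian-like" retinal model: a Difference-of-Gaussians
# (DoG) center-surround filter. The sigmas and radius are arbitrary.

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian sampled at integer offsets."""
    vals = [math.exp(-(i * i) / (2 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def dog_kernel(sigma_center, sigma_surround, radius):
    """Narrow excitatory center minus wide inhibitory surround."""
    center = gaussian_kernel(sigma_center, radius)
    surround = gaussian_kernel(sigma_surround, radius)
    return [c - s for c, s in zip(center, surround)]

dog = dog_kernel(1.0, 2.0, 4)

# The weights sum to ~0, so uniform illumination produces no response;
# only contrast (edges, spots) excites the filter.
print(abs(sum(dog)) < 1e-9, dog[4] > 0, dog[0] < 0)  # True True True
```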

Max Williams, an AI researcher who helped compile a dataset of thousands of optical illusion images for computer vision systems, puts the relationship between general vision and optical illusions most succinctly: “Illusions exist because our eyes and brains are performing a messy and ad-hoc process to extract a visual scene from an otherwise incomprehensible light field, created by a physical world which we are almost completely sealed off from,” they told Digital Trends. “I don’t think it’s possible to make a visual system expressive enough to be considered ‘perception’ which is also free from illusions.”

Achieving General Vision

To be clear, achieving human-level (or better) General Vision for AI isn’t simply a matter of training machines to recognize standard optical illusions. No hyper-specific ability to, say, decode Magic Eye illusions with 99.9% accuracy in 0.001 seconds is going to substitute for millions of years of human evolution.

(Interestingly, machine vision already has its own version of optical illusions in the form of adversarial examples, which can make a system mistake – as in one alarming illustration – a 3D-printed toy turtle for a rifle. However, these do not yield the same evolutionary benefits as the optical illusions that work on humans.)
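The flavor of these attacks can be shown with a toy linear “classifier” and a fast-gradient-sign-style perturbation. The model, weights, and input below are all invented for illustration; real attacks target deep networks, but the mechanism is the same: a tiny, targeted nudge flips the decision.

```python
# FGSM-style sketch: nudge an input against the model's gradient so a
# tiny change flips the decision. The linear "classifier", weights, and
# input are all invented for illustration.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(values):
    return [1.0 if v > 0 else -1.0 for v in values]

w = [1.0, -2.0, 0.5]   # toy model: score = w . x, class A if score > 0
x = [0.3, 0.1, 0.2]    # "turtle" input, classified as class A

# For a linear score the gradient with respect to x is just w, so
# stepping against sign(w) lowers the score as fast as possible.
epsilon = 0.15
x_adv = [xi - epsilon * si for xi, si in zip(x, sign(w))]

print(dot(w, x) > 0, dot(w, x_adv) > 0)  # True False: the decision flips
```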

Still, getting machines to understand human optical illusions, and respond to them in the way that we do, could be very useful research.

And one thing’s for sure: When General Vision AI is achieved, it’ll fall for the same kinds of optical illusions as we do. At least, in the case of the Illusory Expanding Hole, 86% of us.


Optical vs. Laser Mouse

Finding a mouse that achieves the perfect balance between sensitivity and accuracy might seem next to impossible. Laser-based mice offer high sensitivity, but they tend to cause jittering. On the flip side, optical mice use LED technology with lower sensitivity, allowing for more accurate movement.

Choosing the best mouse for you can be a challenge. Luckily, we can help you decide based on your budget, the surface you’re using, and the types of activities you’ll need your mouse for.

Guess what? All mice are optical

Modern mice are basically cameras. They constantly take pictures, although instead of capturing your face, they grab images of the surface underneath. These images aren’t meant for posting on social media but instead are converted into data for tracking the peripheral’s current location on a surface. Ultimately, you have a low-resolution camera in the palm of your hand, otherwise known as a CMOS sensor. Combined with two lenses and a source of illumination, they track the peripheral’s X and Y coordinates thousands of times per second.

All mice are optical, technically, because they take photos, which is optical data. However, the ones marketed as optical models rely on an infrared or red LED that projects light onto a surface. This LED is typically mounted behind an angled lens, which focuses the illumination into a beam. That beam is bounced off the surface, through the “imaging” lens that magnifies the reflected light, and into the CMOS sensor.

Bill Roberson/Digital Trends

The CMOS sensor collects the light and converts the light particles into an electrical current. This analog data is then converted into 1s and 0s, resulting in more than 10,000 digital images captured each second. These images are compared to pinpoint the mouse’s location, and the resulting data is sent to the parent PC for cursor placement every one to eight milliseconds.
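That frame-comparison step amounts to a shift search: find the offset that best aligns two successive snapshots of the surface. The 1D toy below uses made-up brightness values; real sensors do this in 2D, in dedicated hardware, thousands of times per second.

```python
# Toy 1D version of optical-flow tracking in a mouse sensor: slide one
# frame over the other and pick the offset with the smallest mismatch.
# Brightness values are invented for illustration.

def displacement(frame_a, frame_b, max_shift=3):
    """Offset d such that frame_b best matches frame_a shifted by d."""
    def mismatch(shift):
        pairs = [(frame_a[i + shift], frame_b[i])
                 for i in range(len(frame_b))
                 if 0 <= i + shift < len(frame_a)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mismatch)

surface = [5, 9, 2, 7, 4, 8, 1, 6, 3, 5]  # brightness of surface texture
frame_a = surface[2:8]                     # snapshot at time t
frame_b = surface[4:10]                    # snapshot after sliding 2 pixels

print(displacement(frame_a, frame_b))  # 2: the sensor moved two pixels
```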

On older LED mice, the LED points straight down, shining a visible red beam onto the surface beneath the sensor. On newer designs, the light is projected at an angle and is typically invisible (infrared), which helps the mouse track its movements on a wider range of surfaces.

Laser mice use an accurate, invisible beam

Meanwhile, Logitech takes the credit for introducing the first mouse to use a laser, in 2004. More specifically, the component is a vertical-cavity surface-emitting laser (VCSEL) diode, which is also used in laser pointers, optical drives, barcode readers, and more.

This infrared laser replaces the infrared/red LED on “optical” models, but don’t worry. It won’t damage your eyes, because the lasers used by mice aren’t powerful (still, don’t press your luck and stare at it for minutes at a time).

The laser is also infrared — outside the visible spectrum — so you won’t see an annoying red glow emanating from beneath your mouse.

Razer Mamba 2015
Bill Roberson/Digital Trends

At one time, laser models were believed to be far superior to “optical” versions. Over time, though, optical mice have improved, and they now work in a variety of situations with a high degree of accuracy. The laser model’s superiority stemmed from having a higher sensitivity than LED-based mice. However, unless you’re a PC gamer, that’s probably not an important feature.

Comparison: Optimal surfaces

All right, both methods use the irregularities of a surface to keep track of the peripheral’s position. But a laser can go deeper into the surface texture. This provides more information for the CMOS sensor and processor inside the mouse to juggle, and hand over to the parent PC.

That matters in situations where the surface may not be ideal for all types of mice. For example, although glass is clear, it still has extremely small irregularities that a laser can track (not always perfectly, but well enough for basic mouse work). Place the latest optical mouse on the same surface, however, and it may not track any movement at all.

This makes laser-based mice better for glass tables and highly lacquered surfaces, depending on where you want to use them.

Comparison: Sensitivity

Razer Viper Ultimate Mouse
Arif Bacchus/Digital Trends

The problem with laser-based mice is that they can be too accurate, picking up useless information such as the unseen hills and valleys of a surface. This can be troublesome when moving at slower speeds, causing unwanted on-screen cursor movement known as “jitter,” which is often lumped in with acceleration problems.

The result is some incorrect 1:1 tracking stemming from useless data thrown into the overall tracking mix used by the PC. The cursor won’t appear in the exact location at the exact time your hand intended. Although the problem has improved over the years, laser mice still aren’t ideal if you’re sketching details in Adobe Illustrator. They also tend to perform better on simple surfaces that don’t have a lot of information to scan and relay.

However, this issue becomes more complex when you look at settings options. The resolution of a mouse’s CMOS sensor works differently from a camera’s because it’s based on movement. The sensor consists of a set number of physical pixels aligned in a square grid, and its resolution stems from the number of individual images captured by each pixel during a movement of one physical inch across a surface. Because the physical pixels can’t be resized, the sensor uses image processing to divide each pixel into smaller pieces.

That image processing can be adjusted, which is what mice sensitivity settings do. So, for example, if you had a laser mouse that was picking up too much data and dancing around your screen, you could lower sensitivity and help minimize that effect. So while laser mice might be naturally too sensitive for some surfaces, this can be mitigated, which levels the playing field for both types of mouse.
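One way to picture that mitigation, using invented figures: surface noise enters as a tiny apparent movement, and the CPI (counts-per-inch) setting scales how many counts, and hence how much cursor travel, it produces.

```python
# Invented figures showing how the CPI setting scales surface noise into
# cursor travel: the same sensed wobble produces fewer counts, and less
# on-screen jitter, at a lower sensitivity.

def cursor_delta(true_inches, noise_inches, cpi):
    """Counts reported for a physical movement plus sensed surface noise."""
    return (true_inches + noise_inches) * cpi

# A hand held perfectly still, with ~0.001 in of apparent surface noise:
print(cursor_delta(0.0, 0.001, 800))   # 0.8 counts: below one full count
print(cursor_delta(0.0, 0.001, 6400))  # 6.4 counts: a visible cursor dance
```

Under this simplified model, dropping the sensitivity setting shrinks the jitter proportionally, which is exactly the leveling effect the paragraph above describes.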

Comparison: Gaming

Logitech MX Master

If you look at the Logitech G brand, you’ll notice that Logitech mostly focuses on LED-based mice when it comes to PC gaming. That’s because the customer base is typically sitting at a desk, and possibly even using a mouse mat designed for the best tracking and friction. They want simple and highly accurate results and absolutely no cursor jitter, so this makes sense.

But Logitech’s biggest competitor, Razer, lists a number of gamer-specific laser-based mice on its online store. Razer prefers laser technology because it offers higher sensitivity for lightning-quick movement in games. On the right surfaces, laser mice can be amazingly precise, so this also makes sense!

Overall, we don’t think that optical or laser technology is, by itself, enough to recommend any particular mouse for gaming.

Comparison: Pricing

When laser mice first came out, they were significantly more expensive than optical mice. Today, there’s not nearly as much difference between prices, especially since mice come in so many different tiers for features, customization, ergonomics, and more.

This smooths out the differences, especially at the high end. Getting a top-tier mouse is going to cost you $50 to $100 no matter what sensor type you pick. Down at the other end of the market, the most affordable laser mice still tend to be $5 to $10 more expensive than optical mice. Not a huge difference, but worth noting.

So, which is better?

One essential consideration if you’re struggling to choose between an optical and laser mouse is how you want to use the product.

The laser mouse is generally a superior choice for office environments, as it’s versatile and usable across many different surfaces. Chances are you’ll never encounter an issue if you happen to switch desks at some point.

You can also use a laser mouse for on-the-go or at-home use. It’s easily transportable, which makes it a fantastic option for laptop users.

In contrast, optical mice work best with mousepads, and they are also a bit more budget-friendly. They’re a good fit for gaming, for a stationary home desktop computer, and for other situations in which you aren’t moving around.
