Categories
Game

Big Brain Academy: Brain vs. Brain Review

Big Brain Academy: Brain vs. Brain

MSRP $30.00

“Big Brain Academy: Brain vs. Brain is a shockingly addictive collection of brain teasers, but a slim package makes it a hard sell.”

Pros

  • Addictive gameplay
  • Clever brain teasers
  • Intuitive touch controls
  • Good use of ghost data

Cons

  • Sparse package
  • Limited multiplayer
  • Online play is lacking

Had Big Brain Academy: Brain vs. Brain launched on phones, it would have instantly become my favorite of Nintendo’s mobile apps. The combination of quick challenges and intuitive touchscreen controls makes it feel like a no-brainer for the platform. I could see myself breaking it out anytime I found myself in a long grocery line, filling a few minutes of boredom with some breezy brain teasers.

It’s not available on my iPhone, though, because it’s exclusive to Nintendo Switch. That puts the $30 release in an awkward position. While the Switch is portable, it doesn’t carry the same ease of access as a phone, which is always in my pocket. The game makes good use of the Switch in two-player battles, but Nintendo has missed a seemingly obvious opportunity here. It’s like slotting a circle into a square hole; it technically fits, but it’s not the correct solution.

Big Brain Academy: Brain vs. Brain is a shockingly addictive collection of brain teasers that’s perfect for players of all ages. As fun as it is, though, it’s hard to shake the feeling that it’s appearing on the wrong platform and priced too high for such a sparse package.

Love your brain

If you don’t remember the Big Brain Academy series, you’re probably not alone. A spinoff of the more popular Brain Age series, the puzzle game debuted on Nintendo DS in 2005. It was a clever educational tool where players increased their “brain mass” by completing a set of minigames built around different cognitive skills. A 2007 Wii sequel tried to expand it into a party game, but that was the end of the line for the franchise.

It finally returns on Nintendo Switch, but it hasn’t changed too much. It’s still a slim package that’s built around some focused hooks. The standout is the game’s Test mode, where players tackle five random, one-minute microgames, the results of which determine their “Big Brain Brawn” score. It’s shocking how much mileage comes out of that simple mode. The short nature of tests had me saying “just one more” over and over as I tried to outdo my best round.

Two players compete in a train minigame in Big Brain Academy: Brain vs. Brain.

That works as well as it does because of the minigames themselves, which are easy to grasp and fun to replay. In one game, I need to memorize a sequence of digits in an instant and punch them into a calculator. In another, I need to pop numbered balloons in the correct sequence, lowest to highest. A personal favorite shows me a figure and then asks me to deduce what it would look like viewed from a different angle. They’re simple enough for a child to tackle, but the intensity scales up with quick, correct answers. I found myself itching to master each one in the game’s pair of high score-chasing Practice modes.


While the game can be played with buttons, it’s best enjoyed using the Switch’s touchscreen. Each minigame has intuitive inputs, like spinning a clock’s hands to set the right time or swiping to knock numbers out of a column. It’s an extra kick of interactivity that makes each game superbly satisfying.

The real disappointment here is that the game doesn’t really take advantage of the Switch beyond that. There are no minigames built around motion controls or the Joy-Con’s IR sensor. Most of the 20 minigames are pulled from the previous Big Brain games, making this feel more like a quick compilation than a new game built with the Switch in mind.

Battle of the brains

As the Brain vs. Brain subtitle implies, there’s a competitive aspect to the game, though what’s strange is that its multiplayer options are shallow compared to the Wii installment. While that game featured four different modes, the Switch version basically has one. Up to four players compete to see who can clear a minigame fastest. The first one to hit 100 points wins. Players can either choose the minigame category or spin a random wheel, but that’s about the extent of the options.


The best use of multiplayer comes from two people competing on one Switch with touch controls. Lay it down flat and the screen will split in two, allowing players to sit across from one another and tap their side of the screen to play. It’s a cute little trick that actually makes good use of the device itself.

There’s no traditional online multiplayer either, which is an odd omission. Instead, online play occurs in the form of a single-player “Ghost Clash” mode. Anytime a player clears a minigame, their ghost data is recorded. Ghost Clash allows players to compete against their friends’ and families’ ghosts for some asynchronous competition. It’s not really a substitute for actual online play, but it’s at least a clever way to keep tabs on friends.

A player clashes with an online ghost in Big Brain Academy: Brain vs. Brain.

The best implementation of the system comes from the World Ghosts option, where players get to compete against random ghosts from around the world. Beating a ghost grants trophies, which increases a player’s Big Brain World Ranking (a sort of monthly leaderboard). Since these are battles against real players, they’re enjoyably tense, as they require some quick thinking and even quicker reaction time.

Could have been an app

There’s not much else to speak of. What I’ve described is the extent of the game’s features: Test, Practice, World Ghosts, and the shallow multiplayer mode. The only other extra is that the game contains 300 unlockable items, like hats and accessories for a player’s avatar. Unlocking every one would take a while, though doing so means playing the same 20 minigames over and over.

At $30, this is a budget Switch game, but that price still feels too high. Big Brain Academy worked on the Nintendo DS because touchscreen controls were still a novel concept at the time (the first iPhone wouldn’t come out until two years after its release). But in 2021, there’s no shortage of touch-enabled brain teasers that can be played on any phone for free. Brain vs. Brain is fun, but it doesn’t make a strong case for someone to purchase it instead of downloading the Lumosity app for $0.

A player pops balloons in Big Brain Academy: Brain vs. Brain.

Were Nintendo not as stuck in its console habits, I think this would have thrived as a mobile app. All of the touch controls would perfectly translate to a phone screen and even tabletop multiplayer could be replicated on an iPad. Locking it to Switch just feels like a needless restriction in this day and age, especially as Nintendo still struggles to nail its mobile gaming ambitions.


I’m probably asking more from Big Brain Academy than anyone in history (but what do you expect from an S-grade brain like mine?). It’s about as low-stakes a video game franchise as you can get. Parents looking for an educational, but still fun game to play with their kids will eat this up. It’s just a reminder that a Nintendo console isn’t a one-size-fits-all platform for every kind of game — nor does it need to be anymore.

Our take

Big Brain Academy: Brain vs. Brain could have used more ideas in the “vs.” department, but the core brain testing is deceptively addictive. Intuitive minigames and satisfying touch controls make for a fun, though sparse collection of family-friendly brain teasers. It’s just hard to recommend it too strongly when mobile apps currently do what it does for free.

Is there a better alternative?

WarioWare: Get It Together! is a more robust (though still slim) package if you’re looking for Switch microgames, while the free mobile app Lumosity can fill your brain-training needs.

How long will it last?

Realistically, most people will probably get a handful of hours out of it unless they intend to log in every day to stay sharp. Monthly challenges and unlockable items give some incentive for those who want to stick with it.

Should you buy it?

No. I genuinely enjoy it, but it’s just a hard sell considering how little is included here — though it’s a great pick for families looking for a fun educational tool.




Repost: Original Source and Author Link

Categories
AI

MindsDB wants to give enterprise databases a brain


Databases are the cornerstone of most modern business applications, be it for managing payroll, tracking customer orders, or storing and retrieving just about any piece of business-critical information. With the right supplementary business intelligence (BI) tools, companies can derive all manner of insights from their vast swathes of data, such as establishing sales trends to inform future decisions. But when it comes to making accurate forecasts from historical data, that’s a whole new ball game, requiring different skillsets and technologies.

This is something that MindsDB is setting out to solve, with a platform that helps anyone leverage machine learning (ML) to future-gaze with big data insights. In the company’s own words, it wants to “democratize machine learning by giving enterprise databases a brain.”

Founded in 2017, Berkeley, California-based MindsDB enables companies to make predictions directly from their database using standard SQL commands, and visualize them in their application or analytics platform of choice.

To further develop and commercialize its product, MindsDB this week announced that it has raised $3.75 million, bringing its total funding to $7.6 million. The company also unveiled partnerships with some of the most recognizable database brands, including Snowflake, SingleStore, and DataStax, which will bring MindsDB’s ML platform directly to those data stores.

Using the past to predict the future

There are myriad use cases for MindsDB, such as predicting customer behavior, reducing churn, improving employee retention, detecting anomalies in industrial processes, credit-risk scoring, and predicting inventory demand — it’s all about using existing data to figure out what that data might look like at a later date.

An analyst at a large retail chain, for example, might want to know how much inventory they’ll need to fulfill demand in the future based on a number of variables. By connecting their database (e.g., MySQL, MariaDB, Snowflake, or PostgreSQL) to MindsDB, and then connecting MindsDB to their BI tool of choice (e.g., Tableau or Looker), they can ask questions and see what’s around the corner.

“Your database can give you a good picture of the history of your inventory because databases are designed for that,” MindsDB CEO Jorge Torres told VentureBeat. “Using machine learning, MindsDB enables your database to become more intelligent to also give you forecasts about what that data will look like in the future. With MindsDB you can solve your inventory forecasting challenges with a few standard SQL commands.”

Above: Predictions visualization generated by the MindsDB platform

Torres said that MindsDB enables what is known as In-Database ML (I-DBML) to create, train, and use ML models in SQL, as if they were tables in a database.

“We believe that I-DBML is the best way to apply ML, and we believe that all databases should have this capability, which is why we have partnered with the best database makers in the world,” Torres explained. “It brings ML as close to the data as possible, integrates the ML models as virtual database tables, and can be queried with simple SQL statements.”

MindsDB ships in three broad variations — a free, open source incarnation that can be deployed anywhere; an enterprise version that includes additional support and services; and a hosted cloud product that recently launched in beta, which charges on a per-usage basis.

The open source community has been a major focus for MindsDB so far, and the company claims tens of thousands of installations by developers around the world — including developers working at companies such as PayPal, Verizon, Samsung, and American Express. While this organic approach will continue to form a big part of MindsDB’s growth strategy, Torres said his company is in the early stages of commercializing the product with companies across numerous industries, though he wasn’t at liberty to reveal any names.

“We are in the validation stage with several Fortune 100 customers, including financial services, retail, manufacturing, and gaming companies, that have highly sensitive data that is business critical — and [this] precludes disclosure,” Torres said.

The problem that MindsDB is looking to fix is one that impacts just about every business vertical, spanning businesses of all sizes — even the biggest companies won’t want to reinvent the wheel by developing every facet of their AI armory from scratch.

“If you have a robust, working enterprise database, you already have everything you need to apply machine learning from MindsDB,” Torres explained. “Enterprises have put vast resources into their databases, and some of them have even put decades of effort into perfecting their data stores. Then, over the past few years, as ML capabilities started to emerge, enterprises naturally wanted to leverage them for better predictions and decision-making.”

While companies might want to make better predictions from their data, the inherent challenges of extracting, transforming, and loading (ETL) all that data into other systems is fraught with complexities and doesn’t always produce great outcomes. With MindsDB, the data is left where it is in the original database.

“That way, you’re dramatically reducing the timeline of the project from years or months to hours, and likewise you’re significantly reducing points of failure and cost,” Torres said.

The Switzerland of machine learning

The competitive landscape is fairly extensive, depending on how you consider the scope of the problem. Several big players have emerged to arm developers and analysts with AI tooling, such as the heavily VC-backed DataRobot and H2O, but Torres sees these types of companies as potential partners rather than direct competitors. “We believe we have figured out the best way to bring intelligence directly to the database, and that is potentially something that they could leverage,” Torres said.

And then there are the cloud platform providers themselves, such as Amazon, Google, and Microsoft, which offer their customers machine learning as add-ons. In those instances, however, these services are really just ways to sell more of their core product, which is compute and storage. Torres also sees potential for partnering with these cloud giants in the future. “We’re a neutral player — we’re the Switzerland of machine learning,” Torres added.

MindsDB’s seed funding includes investments from a slew of notable backers, including OpenOcean, which claims MariaDB cofounder Patrik Backman as a partner, Y Combinator (MindsDB graduated YC’s winter 2020 batch), Walden Catalyst Ventures, SpeedInvest, and Berkeley’s SkyDeck fund.


Repost: Original Source and Author Link

Categories
Game

Jason Schwartzman plays a floating brain in musical adventure ‘The Artful Escape’

The Artful Escape is an idealized vision of everything the music industry could be, straight out of the brain of Australian rockstar Johnny Galvatron. Over five years of development (at least), The Artful Escape has transformed into a psychedelic adventure game with a living soundtrack of original folk and rock music, a cast of ridiculous characters, otherworldly environments, and a roster of A-list voice actors, including Jason Schwartzman, Lena Headey, Michael Johnston, Carl Weathers and Mark Strong.

The Artful Escape is set to hit Xbox One, Xbox Series X and S, and PC on September 9th, priced at $20. It’ll hit Game Pass at the same time, and it’s being published by indie hit-maker Annapurna Interactive.

Galvatron is the frontman of The Galvatrons, a high-energy Australian rock group that toured the continent and opened for bands like Def Leppard and Cheap Trick in the late 2000s. However, for the past few years, Galvatron has been a game developer first and foremost. In the 2010s, he used YouTube videos to teach himself how to create a game in Unreal, building off the 3D animation and coding courses he took back in college, right before Warner Music signed him. He then founded a studio, rented some office space, secured a deal with Annapurna, and somewhere along the way, he ended up in a recording booth with Jason Schwartzman.

“We just hung out and spoke about David Bowie and Bob Dylan and video games and stuff,” Galvatron said. “And it was just like, it was a moment for me. He came into the studio and he had like a cape and he had a dressing gown and like an umbrella and a little tiny Korg synth. He brought all these things and he put them all around him and he would like, do the line with the cape and then he would throw the cape around another way, and then he would hold the umbrella and do the line. I was just on my feet the whole time.”

The Artful Escape

Annapurna Interactive

In The Artful Escape, the main character, Francis Vendetti, goes on a multidimensional journey to discover his true stage persona — which seems to be a David Bowie-esque shred machine — while at the same time reckoning with the legacy of his late uncle, a Bob Dylan-style folk icon. He travels through strange and trippy worlds, playing music and hunting for his true sound.

To give a sense of the game’s oddball vibe, Schwartzman plays a giant brain perched atop a pile of discarded fish parts.

“He’s a really funny comic support character,” Galvatron said. “Like a very lofty British alien, like a brain floating in an aquarium on a flotilla of goldfish fins. It’ll make sense when you see it.”

For Galvatron, The Artful Escape is exactly that — an escape. His career as a mainstream rockstar was ultimately unfulfilling, filled with red tape, stagnant bureaucracy and awkward interactions. In between shows, he often found himself curled up in the corner of the tour bus, reading Dune or writing his own novel, watching the continent fly by. 

Repost: Original Source and Author Link

Categories
Tech News

Researchers created a brain interface that can sing what a bird’s thinking

Researchers from the University of California San Diego recently built a machine learning system that predicts what a bird’s about to sing as it’s singing it.

The big idea here is real-time speech synthesis for vocal prosthesis. But the implications could go much further.

Up front: Birdsong is a complex form of communication that involves rhythm, pitch, and, most importantly, learned behaviors.

According to the researchers, teaching an AI to understand these songs is a valuable step in training systems that can replace biological human vocalizations:

While limb-based motor prosthetic systems have leveraged nonhuman primates as an important animal model, speech prostheses lack a similar animal model and are more limited in terms of neural interface technology, brain coverage, and behavioral study design.

Songbirds are an attractive model for learned complex vocal behavior. Birdsong shares a number of unique similarities with human speech, and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill.

But translating vocalizations in real time is no easy challenge. Current state-of-the-art systems are slow compared to our natural thought-to-speech patterns.

Think about it: cutting-edge natural language processing systems struggle to keep up with human thought.

When you interact with your Google Assistant or Alexa virtual assistant, there’s often a longer pause than you’d expect if you were talking to a real person. This is because the AI is processing your speech, determining what each word means in relation to its abilities, and then figuring out which packages or programs to access and deploy.

In the grand scheme, it’s amazing that these cloud-based systems work as fast as they do. But they’re still not good enough for the purpose of creating a seamless interface for non-vocal people to speak through at the speed of thought.

The work: First, the team implanted electrodes in a dozen bird brains (zebra finches, to be specific) and then started recording activity as the birds sang.

But it’s not enough just to train an AI to recognize neural activity as a bird sings – even a bird’s brain is far too complex to entirely map how communications work across its neurons.

So the researchers trained another system to reduce real-time songs down to recognizable patterns the AI can work with.
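The paper’s pipeline is more sophisticated than this, but the two-stage idea (compress the song into a small set of recognizable patterns, then map neural activity into that compressed space) can be sketched with off-the-shelf tools. The sketch below uses synthetic stand-in data that is decodable by construction; it illustrates the architecture, not the team’s method or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_channels, n_spectro = 5000, 32, 128

# Stand-in data: binned neural activity, and spectrogram frames that depend
# on it linearly plus noise (so decoding is possible by construction).
neural = rng.normal(size=(n_frames, n_channels))
song = neural @ rng.normal(size=(n_channels, n_spectro))
song += 0.1 * rng.normal(size=song.shape)

# Stage 1: reduce each spectrogram frame to a few "pattern" dimensions.
pca = PCA(n_components=8).fit(song[:4000])
song_low = pca.transform(song)

# Stage 2: learn a mapping from neural activity to the compressed song.
decoder = Ridge(alpha=1.0).fit(neural[:4000], song_low[:4000])

# Decode held-out activity, then reconstruct full spectrogram frames from it.
predicted = pca.inverse_transform(decoder.predict(neural[4000:]))
print("held-out reconstruction MSE:", np.mean((predicted - song[4000:]) ** 2))
```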

Quick take: This is pretty cool in that it does provide a solution to an outstanding problem. Processing birdsong in real-time is impressive and replicating these results with human speech would be a eureka moment.

But, this early work isn’t ready for primetime just yet. It appears to be a shoebox solution in that it’s not necessarily adaptable to other speech systems in its current iteration. In order to get it functioning fast enough, the researchers had to create a shortcut to speech analysis that might not work when you expand it beyond a bird’s vocabulary.

That being said, with further development this could be among the first giant technological leaps for brain-computer interfaces since the deep learning renaissance of 2014.

Read the whole paper here.

Repost: Original Source and Author Link

Categories
Tech News

Is brain drift the key to machine consciousness?

Think about someone you love and the neurons in your brain will light up like a Christmas tree. But if you think about them again, will the same lights go off? Chances are: the answer’s no. And that could have big implications for the future of AI.

A team of neuroscientists from Columbia University in New York recently published research demonstrating what they refer to as “representational drift” in the brains of mice.

Per the paper:

Although activity in piriform cortex could be used to discriminate between odorants at any moment in time, odour-evoked responses drifted over periods of days to weeks.

The performance of a linear classifier trained on the first recording day approached chance levels after 32 days. Fear conditioning did not stabilize odour-evoked responses.

Daily exposure to the same odorant slowed the rate of drift, but when exposure was halted the rate increased again.
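That classifier result is easy to picture with a toy simulation: train a linear decoder on day-0 responses, let the population code drift, and watch accuracy slide toward chance. The sketch below is synthetic and uses a deliberately crude drift model (the odor patterns slowly mix toward unrelated ones); it illustrates the shape of the finding, not the study’s actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, n_odors, trials = 100, 4, 50
old = rng.normal(size=(n_odors, n_neurons))  # day-0 odor response patterns
new = rng.normal(size=(n_odors, n_neurons))  # patterns the code drifts toward

def responses(day, tau=16.0):
    # Representational drift: the code slides from `old` toward `new`, yet
    # on any single day the odors remain discriminable.
    alpha = 1.0 - np.exp(-day / tau)
    proto = (1 - alpha) * old + alpha * new
    X = np.repeat(proto, trials, axis=0) + rng.normal(size=(n_odors * trials, n_neurons))
    y = np.repeat(np.arange(n_odors), trials)
    return X, y

X0, y0 = responses(day=0)
decoder = LogisticRegression(max_iter=1000).fit(X0, y0)

for day in (0, 8, 16, 32):
    Xd, yd = responses(day)
    print(f"day {day:2d}: day-0 decoder accuracy = {decoder.score(Xd, yd):.2f}")
# Accuracy falls toward chance (0.25) even though a decoder retrained on each
# day's data would still separate the odors easily.
```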

Up front: What’s interesting here is that, in lieu of a better theory, it’s been long believed that neurons in the brain associate experiences and memories with static patterns. In essence, this would mean that when you smell cotton candy certain neurons fire up in your brain and when you smell pizza different ones do.

And, while this is basically still true, what’s changed is that the scientists no longer believe that the same neurons fire up when you smell cotton candy as did the last time you smelled cotton candy.

This is what “representational drift,” or “brain drift” as we’re calling it, means. Instead of the exact same neurons firing up every time, different neurons across different locations fire up to represent the same concept.

The scientists used mice and their sense of smell in laboratory experiments because it represents a halfway point between the ephemeral nature of abstract memories (what does London feel like?) and the static nature of our other brain connections (our brain’s connection to our muscles, for example).

What the team found was that, despite the fact that we can recognize objects by smell, our brain perceives the same smells differently over time. What you smell one month will have a totally different representation a month later if you take another whiff.

The interesting part: The scientists don’t really know why. This is because they’re bushwhacking a path where few have trod. There just isn’t much in the way of data-based research on how the brain perceives memory and why some memories can seemingly teleport unchanged across areas of the brain.

But perhaps most interesting are the implications. In many ways our brains function similarly to binary artificial neural networks. However, the distinct differences between our mysterious gray matter and the meticulously plotted AI systems human engineers build may be where we find everything we need to reverse engineer sentience, consciousness, and the secret of life.

Quick take: According to the scientists’ description, the human brain appears to tune in memory associations over time like an FM radio in a car. Depending on how time and experience has changed you and your perception of the world, your brain may just be readjusting to reality in order to integrate new information seamlessly.

This would indicate we don’t “delete” our old memories or simply update them in place like replacing the contents of a folder. Instead, we re-establish our connection with reality and distribute data across our brain network.

Perhaps the mechanisms driving the integration of data in the human brain – that is, whatever controls the seemingly unpredictable distribution of information across neurons – are what’s missing from our modern-day artificial neural networks and machine learning systems.

You can read the whole paper here.

Repost: Original Source and Author Link

Categories
AI

A simple model of the brain provides new directions for AI research

Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.

While all the presentations and discussions were worth watching (and I might revisit them again in the coming weeks), one in particular stood out for me: A talk on word representations in the brain by Christos Papadimitriou, professor of computer science at Columbia University.

In his presentation, Papadimitriou, a recipient of the Gödel Prize and Knuth Prize, discussed how our growing understanding of information-processing mechanisms in the brain might help create algorithms that are more robust in understanding and engaging in conversations. Papadimitriou presented a simple and efficient model that explains how different areas of the brain inter-communicate to solve cognitive problems.

“What is happening now is perhaps one of the world’s greatest wonders,” Papadimitriou said, referring to how he was communicating with the audience. The brain translates structured knowledge into airwaves that are transferred across different mediums and reach the ears of the listener, where they are again processed and transformed into structured knowledge by the brain.

“There’s little doubt that all of this happens with spikes, neurons, and synapses. But how? This is a huge question,” Papadimitriou said. “I believe that we are going to have a much better idea of the details of how this happens over the next decade.”

Assemblies of neurons in the brain

The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.

A lot of studies focus on activities at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “grandmother cell” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and have proven that large groups of neurons are associated with each concept, and there might be overlaps between neurons that link to different concepts.

These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.”

Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.”

A mathematical model of the brain

To better understand the role of assemblies, Papadimitriou proposes a mathematical model of the brain called “interacting recurrent nets.” Under this model, the brain is divided into a finite number of areas, each of which contains several million neurons. There is recursion within each area, which means the neurons interact with each other. And each of these areas has connections to several other areas. These inter-area connections can be excited or inhibited.

This model provides randomness, plasticity, and inhibition. Randomness means the neurons in each brain area are randomly connected. Also, different areas have random connections between them. Plasticity enables the connections between the neurons and areas to adjust through experience and training. And inhibition means that at any moment, a limited number of neurons are excited.
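For a feel of how those three ingredients interact, here is a small numpy sketch in the spirit of the model: sparse random connections, inhibition as a k-winners-take-all cap, and Hebbian plasticity that multiplies active synapses by a constant factor. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, p, beta = 1000, 50, 0.05, 0.1  # neurons per area, cap, edge prob., plasticity

# Randomness: sparse random synapses, stimulus-to-area and recurrent within it.
W_in = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

def project(stim, firing):
    """One round of firing in the target area."""
    inputs = W_in.T @ stim + W_rec.T @ firing
    # Inhibition: only the k neurons receiving the most input fire next.
    winners = np.argsort(inputs)[-k:]
    nxt = np.zeros(n)
    nxt[winners] = 1.0
    # Plasticity: synapses from currently firing neurons onto winners strengthen.
    W_in[np.ix_(stim > 0, winners)] *= 1 + beta
    W_rec[np.ix_(firing > 0, winners)] *= 1 + beta
    return nxt

# Repeatedly presenting the same stimulus makes the winners stabilize into a
# densely interconnected set: an assembly representing that stimulus.
stim = np.zeros(n)
stim[rng.choice(n, k, replace=False)] = 1.0
firing = np.zeros(n)
for t in range(20):
    nxt = project(stim, firing)
    print(t, int(nxt @ firing))  # overlap with the previous round; tends toward k
    firing = nxt
```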

Papadimitriou describes this as a very simple mathematical model that is based on “the three main forces of life.”

Along with a group of scientists from different academic institutions, Papadimitriou detailed this model in a paper published last year in the peer-reviewed scientific journal Proceedings of the National Academy of Sciences. Assemblies were the key component of the model and enabled what the scientists called “assembly calculus,” a set of operations that can enable the processing, storing, and retrieval of information.

“The operations are not just pulled out of thin air. I believe these operations are real,” Papadimitriou said. “We can prove mathematically and validate by simulations that these operations correspond to true behaviors… these operations correspond to behaviors that have been observed [in the brain].”

Papadimitriou and his colleagues hypothesize that assemblies and assembly calculus form the correct model for explaining cognitive functions of the brain such as reasoning, planning, and language.

“Much of cognition could fit that,” Papadimitriou said in his talk at the Google deep learning conference.

Natural language processing with assembly calculus

To test their model of the mind, Papadimitriou and his colleagues tried implementing a natural language processing system that uses assembly calculus to parse English sentences. In effect, they were trying to create an artificial intelligence system that simulates areas of the brain that house the assemblies that correspond to lexicon and language understanding.

“What happens is that if a sequence of words excites these assemblies in lex, this engine is going to produce a parse of the sentence,” Papadimitriou said.

The system works exclusively through simulated neuron spikes (as the brain does), and these spikes are caused by assembly calculus operations. The assemblies correspond to areas in the medial temporal lobe, Wernicke’s area, and Broca’s area, three parts of the brain that are highly engaged in language processing. The model receives a sequence of words and produces a syntax tree. And their experiments show that in terms of speed and frequency of neuron spikes, their model’s activity corresponds very closely to what happens in the brain.

The AI model is still very rudimentary and is missing many important parts of language, Papadimitriou acknowledges. The researchers are working on plans to fill the linguistic gaps that exist. But they believe that all these pieces can be added with assembly calculus, a hypothesis that will need to pass the test of time.

“Can this be the neural basis of language? Are we all born with such a thing in [the left hemisphere of our brain]?” Papadimitriou asked. There are still many questions about how language works in the human mind and how it relates to other cognitive functions. But Papadimitriou believes that the assembly model brings us closer to understanding these functions and answering the remaining questions.

Language parsing is just one way to test the assembly calculus theory. Papadimitriou and his collaborators are working on other applications, including learning and planning in the way that children do at a very young age.

“The hypothesis is that the assembly calculus—or something like it—fills the bill for access logic,” Papadimitriou said. “In other words, it is a useful abstraction of the way our brain does computation.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Repost: Original Source and Author Link


Categories
Tech News

What would happen if we connected the human brain to a quantum computer?

Brain-computer interfaces are slowly beginning to take form, and here at Neural we couldn’t be more excited! Elon Musk’s Neuralink claims it’s on the cusp of a working device and Facebook’s been developing non-invasive BCI tech for years.

If everything goes according to plan, we could be wearing doo-dads or getting chip implants that allow us to control machines with our minds in a decade or less.

That’s a pretty cool idea and there are innumerable uses for such a device, but who knows how useful they’ll actually be in the beginning.

It’s easy to get swept up in dreams of controlling entire drone swarms with our thoughts like a master conductor or conducting telepathic conversations with people around the world via the cloud.

But the current reality is that the companies working on these devices are spending hundreds of millions and, so far, we can use them to play pong.

This isn’t meant to denigrate the use of BCIs in the fields of medicine and accessibility; we’re strictly talking about recreational or personal-use gadgets. But, judging by demos like that, it could be a while before we can ditch our iPhones and PS5 game pads for a seamless BCI.

In the meantime, there’s nothing wrong with a little conjecture. BCIs aren’t a new idea, but they’ve only ever really existed in the realm of science fiction. Until now. The deep learning revolution that started in 2014 made them not just possible, but viable.

Machine learning allows us to miniaturize chips, discover new surgical techniques, run complex software on relatively simple hardware, and a dozen other computing and communications feats that work as a rising tide to lift all vessels when it comes to BCIs.

While no technological advance is guaranteed, it seems like BCIs are a shoo-in to become the next big thing in tech. It’s even arguable they could become mainstream before driverless cars do.

Categories
Tech News

Brain genius hacks an Apple AirTag… but don’t panic

When I hear something’s been hacked, it conjures images of Le Carré-style spies and national security leaks, but this isn’t always the case. Sometimes, it’s just a brain genius hacking an Apple AirTag.

Over the weekend, Twitter user Stacksmashing managed to break into Apple’s tracking device. They also managed to dump the firmware of Apple’s new device (although this hasn’t been made public).


We can all agree on one thing: this is cool. Apple is renowned for the strong security of its devices, so actually hacking an AirTag is a fantastic achievement. But there’s a bigger question to answer…

Should we be worried that someone hacked an AirTag?

Let’s try and break this down logically. First, we need to find out exactly what Stacksmashing managed to achieve. From a user perspective, the most notable element is that they managed to alter the NFC URL.

Effectively, when you tap an AirTag with your phone, it normally directs you to Apple’s Find My service. Stacksmashing managed to alter this so it opened a website of their choice.

Obviously this could be used to redirect someone towards a malicious website, but this hacked AirTag opens up another question: can it be used for even more nefarious purposes?

A point raised in the Twitter thread is whether or not this hacked or jailbroken AirTag could be used for tracking and recording. Effectively, someone could disable anti-stalking measures and follow you. It’s also broadly possible to use the accelerometer inside the hardware to record audio. In other words, an AirTag could become a spying device.

So… should you be worried?

Not really. At least not yet. In order to hack the AirTag, Stacksmashing had to take it apart, whip out the soldering iron, and power it externally. In other words, if someone’s going to do this with an AirTag you own, it’s gonna take a lot of time and access.

If someone really wants to spy on you, there are far easier ways to do it than this. An AirTag being hacked isn’t going to impact you currently.

Really, we should be pleased that someone’s managed this feat. Apple is bound to take note of this and, hopefully, will take further steps to ensure that these devices can’t be easily used to erode someone’s privacy.

Still, massive respect to Stacksmashing. This is cool as fuck.




Repost: Original Source and Author Link

Categories
Tech News

Scientists measured brain waves using cochlear implants for the first time

Scientists have successfully measured brain waves through an ear implant for the first time, a breakthrough that could improve smart hearing aids.

Researchers from KU Leuven, a university in Belgium, used an experimental cochlear implant to record neural signals that arise in response to sounds. These signals could be used to measure and monitor hearing quality.

“In the future, it should even be possible for the hearing implant to adjust itself autonomously based on the recorded brain waves,” said study co-author Tom Francart.


Instead of making sounds louder like a conventional hearing aid, cochlear implants use electrical signals to directly stimulate the auditory nerve.

The devices are typically adjusted by an audiologist based on user feedback, a time-consuming process that can be challenging for children and people with communication impairments.

In addition, the fittings only happen during irregular sessions at a clinic. This means the settings can’t account for variable factors that affect the user’s hearing, such as different listening environments and physiological changes.

One solution is adjusting the implant via brainwaves. However, this typically requires expensive and cumbersome equipment that’s placed around the head.

A cochlear implant that records neural signals on its own could provide a more useful alternative. Francart said the approach has several advantages:

Firstly, we get an objective measurement that does not depend on the user’s input. In addition, you could measure a person’s hearing in everyday life and monitor it better. So, in the long run, the user would no longer have to undergo testing at the hospital. An audiologist could consult the data remotely and adjust the implant where necessary.

The researchers now want manufacturers to use the study findings to further develop smart hearing devices.

You can read the study paper in the journal Scientific Reports.





Repost: Original Source and Author Link