Categories
AI

What is AI hardware? How GPUs and TPUs give artificial intelligence algorithms a boost

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.


Most computers and algorithms — including, at this point, many artificial intelligence (AI) applications — run on general-purpose circuits called central processing units, or CPUs. When certain calculations are performed frequently, though, computer scientists and electrical engineers design specialized circuits that can do the same work faster or more accurately. Now that AI algorithms are so common and essential, specialized circuits and chips are becoming common and essential too. 

The circuits are found in several forms and in different locations. Some offer faster creation of new AI models. They use multiple processing circuits in parallel to churn through millions, billions or even more data elements, searching for patterns and signals. These are used in the lab at the beginning of the process by AI scientists looking for the best algorithms to understand the data. 

Others are being deployed at the point where the model is being used. Some smartphones and home automation systems have specialized circuits that can speed up speech recognition or other common tasks. They run the model more efficiently at the place it is being used by offering faster calculations and lower power consumption. 

Scientists are also experimenting with newer designs for circuits. Some, for example, want to use analog electronics instead of the digital circuits that have dominated computers. These different forms may offer better accuracy, lower power consumption, faster training and more. 

Event

MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

What are some examples of AI hardware? 

The simplest examples of AI hardware are the graphical processing units, or GPUs, that have been redeployed to handle machine learning (ML) chores. Many ML packages have been modified to take advantage of the extensive parallelism available inside the average GPU. The same hardware that renders scenes for games can also train ML models because in both cases there are many tasks that can be done at the same time. 

Some companies have taken this same approach and extended it to focus only on ML. These newer chips, sometimes called tensor processing units (TPUs), don’t try to serve both game display and learning algorithms. They are completely optimized for AI model development and deployment. 

There are also chips optimized for different parts of the machine learning pipeline. Some are better suited to creating models because they can juggle large datasets; others excel at applying a trained model to incoming data to see whether the model can find an answer in it. The latter can be optimized to use less power and fewer resources, making them easier to deploy in mobile phones or other places where users want to run AI models but not create new ones. 

Additionally, there are basic CPUs that are starting to streamline their performance for ML workloads. Traditionally, many CPUs have focused on double-precision floating-point computations because they are used extensively in games and scientific research. Lately, some chips are emphasizing single-precision floating-point computations because they can be substantially faster. The newer chips are trading off precision for speed because scientists have found that the extra precision may not be valuable in some common machine learning tasks — they would rather have the speed.
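The tradeoff is easy to see in software: a single-precision (32-bit) float keeps roughly 7 significant decimal digits, while a double (64-bit) keeps about 16. A minimal Python sketch, using only the standard library, round-trips a value through 32-bit storage to show the precision that is given up:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (stored as a 64-bit double) through
    32-bit storage, discarding the extra precision a double keeps."""
    return struct.unpack("f", struct.pack("f", x))[0]

value = 1.0 / 3.0
print(f"{value:.17f}")              # full double precision
print(f"{to_float32(value):.17f}")  # only about 7 digits survive
print(f"{abs(value - to_float32(value)):.2e}")  # rounding error of the narrower format
```

ML accelerators make the same trade at the hardware level: halving the width of each number roughly doubles how many operands fit through the chip's arithmetic units per cycle.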

In all these cases, many of the cloud providers are making it possible for users to spin up and shut down multiple instances of these specialized machines. Users don’t need to invest in buying their own and can just rent them when they are training a model. In some cases, deploying multiple machines can be significantly faster, making the cloud an efficient choice. 

How is AI hardware different from regular hardware? 

Many of the chips designed for accelerating artificial intelligence algorithms rely on the same basic arithmetic operations as regular chips. They add, subtract, multiply and divide as before. Their biggest advantage is that they have many cores, often smaller ones, so they can process this data in parallel. 

The architects of these chips usually try to tune the channels for bringing data in and out of the chip, because the size and nature of the data flows are often quite different from general-purpose computing. Regular CPUs tend to execute many instructions over relatively little data; AI chips generally work the other way around, running fewer instructions over large data volumes. 

Some companies deliberately embed many very small processors in large memory arrays. Traditional computers separate the memory from the CPU; orchestrating the movement of data between the two is one of the biggest challenges for machine architects. Placing many small arithmetic units next to the memory speeds up calculations dramatically by eliminating much of the time and organization devoted to data movement. 

Some companies also focus on creating special processors for particular types of AI operations. The work of creating an AI model through training is much more computationally intensive and involves more data movement and communication. When the model is built, the need for analyzing new data elements is simpler. Some companies are creating special AI inference systems that work faster and more efficiently with existing models. 

Not all approaches rely on traditional arithmetic methods. Some developers are creating analog circuits that behave differently from the traditional digital circuits found in almost all CPUs. They hope to create even faster and denser chips by forgoing the digital approach and tapping into some of the raw behavior of electrical circuitry. 

What are some advantages of using AI hardware?

The main advantage is speed. It is not uncommon for some benchmarks to show that GPUs are more than 100 times or even 200 times faster than a CPU. Not all models and all algorithms, though, will speed up that much, and some benchmarks are only 10 to 20 times faster. A few algorithms aren’t much faster at all. 

One advantage that is growing more important is the power consumption. In the right combinations, GPUs and TPUs can use less electricity to produce the same result. While GPU and TPU cards are often big power consumers, they run so much faster that they can end up saving electricity. This is a big advantage when power costs are rising. They can also help companies produce “greener AI” by delivering the same results while using less electricity and consequently producing less CO2. 

The specialized circuits can also be helpful in mobile phones or other devices that must rely upon batteries or less copious sources of electricity. Some applications, for instance, rely upon fast AI hardware for very common tasks like waiting for the “wake word” used in speech recognition. 

Faster, local hardware can also eliminate the need to send data over the internet to a cloud. This can save bandwidth charges and electricity when the computation is done locally. 

What are some examples of how leading companies are approaching AI hardware?

The most common forms of specialized hardware for machine learning continue to come from the companies that manufacture graphical processing units. Nvidia and AMD create many of the leading GPUs on the market, and many of these are also used to accelerate ML. While many of these can accelerate many tasks like rendering computer games, some are starting to come with enhancements designed especially for AI. 

Nvidia, for example, adds a number of multiprecision operations that are useful for training ML models and calls these Tensor Cores. AMD is also adapting its GPUs for machine learning and calls this approach CDNA2. The use of AI will continue to drive these architectures for the foreseeable future. 

As mentioned earlier, Google makes its own hardware for accelerating ML, called Tensor Processing Units or TPUs. The company also delivers a set of libraries and tools that simplify deploying the hardware and the models they build. Google’s TPUs are mainly available for rent through the Google Cloud platform.

Google is also adding a version of its TPU design to its Pixel phone line to accelerate any of the AI chores that the phone might be used for. These could include voice recognition, photo improvement or machine translation. Google notes that the chip is powerful enough to do much of this work locally, saving bandwidth and improving speeds because, traditionally, phones have offloaded the work to the cloud. 

Many of the cloud companies like Amazon, IBM, Oracle, Vultr and Microsoft are installing these GPUs or TPUs and renting time on them. Indeed, many of the high-end GPUs are not intended for users to purchase directly because it can be more cost-effective to share them through this business model. 

Amazon’s cloud computing systems are also offering a new set of chips built around the ARM architecture. The latest versions of these Graviton chips can run lower-precision arithmetic at a much faster rate, a feature that is often desirable for machine learning. 

Some companies are also building simple front-end applications that help data scientists curate their data and then feed it to various AI algorithms. Google’s Colab and AutoML, Amazon’s SageMaker, Microsoft’s Machine Learning Studio and IBM’s Watson Studio are just a few examples of options that hide specialized hardware behind an interface. These companies may or may not use specialized hardware to speed up ML tasks and deliver them at a lower price; the customer may never know. 

How startups are tackling AI hardware

Dozens of startups are approaching the job of creating good AI chips. These examples are notable for their funding and market interest: 

  • D-Matrix is creating a collection of chips that move the standard arithmetic functions to be closer to the data that’s stored in RAM cells. This architecture, which they call “in-memory computing,” promises to accelerate many AI applications by speeding up the work that comes with evaluating previously trained models. The data does not need to move as far and many of the calculations can be done in parallel. 
  • Untether is another startup that’s mixing standard logic with memory cells to create what they call “at-memory” computing. Embedding the logic with the RAM cells produces an extremely dense — but energy efficient — system in a single card that delivers about 2 petaflops of computation. Untether calls this the “world’s highest compute density.” The system is designed to scale from small chips, perhaps for embedded or mobile systems, to larger configurations for server farms. 
  • Graphcore calls its approach to in-memory computing the “IPU” (for Intelligence Processing Unit) and relies upon a novel three-dimensional packaging of the chips to improve processor density and limit communication times. The IPU is a large grid of thousands of what they call “IPU tiles” built with memory and computational abilities. Together, they promise to deliver 350 teraflops of computing power. 
  • Cerebras has built a very large, wafer-scale chip that’s up to 50 times bigger than a competing GPU. They’ve used this extra silicon to pack in 850,000 cores that can train and evaluate models in parallel. They’ve coupled this with extremely high bandwidth connections to suck in data, allowing them to produce results thousands of times faster than even the best GPUs.  
  • Celestial uses photonics — a mixture of electronics and light-based logic — to speed up communication between processing nodes. This “photonic fabric” promises to reduce the amount of energy devoted to communication by using light, allowing the entire system to lower power consumption and deliver faster results. 

Is there anything that AI hardware can’t do? 

For the most part, specialized hardware does not execute any special algorithms or approach training in a better way. The chips are just faster at running the algorithms. Standard hardware will find the same answers, but at a slower rate.

This equivalence doesn’t apply to chips that use analog circuitry. In general, though, the approach is similar enough that the results won’t necessarily be different, just faster. 

There will be cases where it may be a mistake to trade off precision for speed by relying on single-precision computations instead of double-precision, but these may be rare and predictable. AI scientists have devoted many hours of research to understand how to best train models and, often, the algorithms converge without the extra precision. 

There will also be cases where the extra power and parallelism of specialized hardware lends little to finding the solution. When datasets are small, the advantages may not be worth the time and complexity of deploying extra hardware.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

Repost: Original Source and Author Link

Categories
AI

7 AI startups aim to give retailers a happy holiday season



Nothing is hotter than artificial intelligence (AI) startups that can help retailers win big this holiday shopping season. 

According to eMarketer, retailers are turning to artificial intelligence to tackle everything from supply chain challenges and price optimization to self-checkout and fresh food. And retail AI is a massive, fast-growing segment filled with AI startups looking to break into a market that is estimated to surpass $40 billion by 2030.  

These are seven of the hottest AI startups that are helping retailers meet their holiday goals: 

Afresh: The AI startup solving for fresh food

Founded in 2017, San Francisco-based Afresh has been on a tear this year, raising a whopping $115 million in August. Afresh helps thousands of stores tackle the complex supply chain questions that have always existed around the perimeter of the supermarket — with its fruits, vegetables, fresh meat and fish. That is, how can stores make sure they have enough perfectly ripe, fresh foods available, while minimizing losses and reducing waste from food that is past its prime? 


According to a company press release, Afresh is on track to help retailers save 34 million pounds of food waste by the end of 2022. It uses AI to analyze a supermarket’s previous demand and data trends, which allows grocers to keep fresh food for as little time as possible. The platform uses an algorithm to assess what is currently in the store, with a “confidence interval” that includes how perishable the item is. Workers help train the AI-driven model by periodically counting inventory by hand. 
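As a purely hypothetical sketch of the idea described above (the function, numbers and confidence level are invented for illustration and are not Afresh's actual algorithm), an ordering rule that pads an expected-demand forecast with an uncertainty buffer might look like this:

```python
import statistics

def order_quantity(daily_sales, on_hand, shelf_life_days, z=1.64):
    """Hypothetical sketch: order enough fresh stock to cover expected
    demand over the item's shelf life, padded by a one-sided ~95%
    confidence buffer, minus what is already on the shelf."""
    mean = statistics.mean(daily_sales)
    stdev = statistics.stdev(daily_sales)
    expected = mean * shelf_life_days
    # Demand variance grows with the horizon, so the buffer scales
    # with the square root of the number of days covered.
    buffer = z * stdev * (shelf_life_days ** 0.5)
    return max(0, round(expected + buffer - on_hand))

# A week of observed sales for a perishable item with a 3-day shelf life
print(order_quantity([12, 9, 14, 11, 10, 13, 12], on_hand=8, shelf_life_days=3))
```

The point of the buffer is the tension the article describes: too small and the shelf runs empty, too large and food expires unsold.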

AiFi: AI-powered cashierless checkout

Santa Clara, California-based AiFi offers a frictionless and cashierless AI-powered retail solution deployed in diverse locations such as sports stadiums, music festivals, grocery store chains and college campuses. Steve Gu cofounded AiFi in 2016 with his wife, Ying Zheng, and raised a fresh $65 million in March. Both Gu and Zheng have Ph.D.s in computer vision and spent time at Apple and Google.

AiFi deploys AI models through numerous cameras placed across the ceiling, in order to understand everything happening in the shop. Cameras track customers throughout their shopping journey, while computer vision recognizes products and detects different activities, including putting items onto or grabbing items off the shelves.

Beneath the platform’s hood are neural network models specifically developed for people-tracking as well as activity and product recognition. AiFi also developed advanced calibration algorithms that allow the company to re-create the shopping environment in 3D.

Everseen: AI and computer vision self-checkout

Everseen has been around since 2007, but 2022 was a big year for the Cork, Ireland-based company, which offers AI and computer vision-based self-checkout technology. In September, Kroger Co., America’s largest grocery retailer, announced it is moving beyond the pilot stage with Everseen’s solution, rolling out to 1,700 grocery stores and reportedly including it at all locations in the near future.

The Everseen Visual AI platform captures large volumes of unstructured video data using high-resolution cameras, which it integrates with structured POS data feeds to analyze and make inferences about data in real-time. It provides shoppers with a “gentle nudge” if they make an unintentional scanning error.

It hasn’t all been smooth sailing for Everseen: In 2021, the company settled a lawsuit with Walmart over claims the retailer had misappropriated the Irish firm’s technology and then built its own similar product. 

Focal Systems: Real-time shelf digitization

Burlingame, California-based Focal Systems, which offers AI-powered real-time shelf digitization for brick-and-mortar retail, recently hit the big time with Walmart Canada. The retailer is rolling out Focal Systems’ solution, which uses shelf cameras, computer vision and deep learning, to all stores following a 70-store pilot. 

Founded in 2015, Focal Systems was born out of Stanford’s Computer Vision Lab. In March, the company launched its FocalOS “self-driving store” solution, which automates order writing and ordering, directs stockers, tracks productivity per associate, optimizes category management on a per store basis and manages ecommerce platforms to eliminate substitutions. 

According to the company, corporate leaders can view any store in real-time to see what their shelves look like and how stores are performing.  

Hivery: Getting store assortments right

New South Wales, Australia-based Hivery tackles the complex challenges around battles for space in brick-and-mortar retail stores. It helps stores decide how to use physical space, set up product displays and optimize assortments. It offers “hyper-local retailing” by enabling stores to customize their assortments to meet the needs of local customers. 

Hivery’s SaaS-based, AI-driven Curate product uses proprietary ML and applied mathematics algorithms developed and acquired from Australia’s national science agency. They claim a process that takes six months is reduced to around six minutes, thanks to the power of AI/ML and applied mathematics techniques.

Jason Hosking, Hivery’s cofounder and CEO, told VentureBeat in April that Hivery’s customers can rapidly simulate assortment scenarios around SKU rationalization, SKU introduction and space, while considering category goals, merchandising rules and demand transference. Once a strategy is determined, Curate can generate accompanying planograms for execution. 

Lily AI: Connecting shoppers to products

Just a month ago, Lily AI, which connects a retailer’s shoppers with products they might want, raised $25 million in new capital – no small feat during these tightening times. 

When Purva Gupta and Sowmiya Narayanan launched Lily AI in 2015, the Mountain View, California-based company looked to address a thorny e-commerce challenge – shoppers that leave a site before buying. 

For customers that include ThredUP and Everlane, Lily AI uses algorithms that combine deep product tagging with deep psychographic analysis to power a web store’s search engines and product discovery carousels. For example, Lily will capture details about a brand’s product style and fits and use customer data from other brands to create a prediction of a customer’s affinity to attributes of products in the catalog. 
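For illustration only (the tags, products and scoring rule here are invented; Lily AI's actual system uses proprietary deep models, not this), a toy version of matching a shopper's attribute profile against tagged products could rank the catalog by overlap:

```python
def affinity(shopper_tags, product_tags):
    """Hypothetical: score a product by the Jaccard overlap between the
    attributes a shopper has engaged with and the product's tags."""
    s, p = set(shopper_tags), set(product_tags)
    return len(s & p) / len(s | p) if s | p else 0.0

shopper = ["relaxed-fit", "linen", "earth-tones"]
catalog = {
    "linen shirt": ["linen", "relaxed-fit", "button-down"],
    "skinny jeans": ["denim", "slim-fit"],
}
# Rank products for this shopper, best match first
ranked = sorted(catalog, key=lambda name: affinity(shopper, catalog[name]),
                reverse=True)
print(ranked)
```

Real product-discovery systems learn these attributes and weights from behavioral data rather than hand-written tags, but the ranking step works on the same principle.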

Shopic: One of several smart cart AI startups

Tel Aviv-based Shopic has been making waves with its AI-powered clip-on device, which uses computer vision algorithms to turn shopping carts into smart carts. In August, Shopic received a $35 million series B investment round. 

Shopic claims its system can identify more than 50,000 items in real time as they are placed in a cart, while displaying product promotions and discounts on related products. It also acts as a self-checkout interface and provides real-time inventory management and customer behavioral insights for grocers through its analytics dashboard, the company said. Grocers can receive reports that include aisle heatmaps, promotion monitoring and new product adoption metrics. 

Shopic faces headwinds, though, with other AI startups in the smart cart space: Amazon’s Dash Carts are currently being piloted in Whole Foods and Amazon Fresh, while Instacart recently acquired Caper AI. 



Categories
Game

Destiny 2’s next expansion will give everyone grappling hooks on February 28th

Lightfall, Destiny 2’s next expansion, will arrive on February 28th, Bungie announced today. First announced alongside 2020’s Beyond Light expansion, the DLC will kick off the final arc of Destiny 2’s Light and Darkness saga. Lightfall will pit you and your fellow Guardians against The Witness, a mysterious character Bungie introduced during the campaign of Destiny 2’s current expansion, The Witch Queen. You’ll have access to a new Darkness subclass dubbed the Strand (Hideo Kojima is probably calling his lawyers right about now). 

Many Strand skills will increase your character’s movement abilities. Specifically, each class will have a way to grapple through the environment. The expansion will also introduce Neomuna, a secret neon metropolis found on Neptune. Along the way, there will be new weapons and armor for your character to collect, strikes to complete and challenging raids to overcome. Finding other Guardians to take on those challenges will be easier than ever. In 2023, Bungie will add an in-game fireteam finder, meaning you won’t have to turn to websites like Destiny Tracker when you want to take on pinnacle content.     

Lightfall will be the first new Destiny 2 expansion since Sony bought Bungie for $3.6 billion at the start of this year. Xbox, PC and Stadia fans won’t be left out of the experience, however. The DLC will be available on every platform where you can currently play Destiny 2. As of today, you can also buy the game on the Epic Games Store.    

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.




Categories
AI

This AI clone of Reddit’s Am I The Asshole forum will give you the best bad advice

First, a necessary disclaimer: don’t use artificial intelligence language generators to solve your ethical quandaries. Second: definitely go tell those quandaries to this AI-powered simulation of Reddit because the results are fascinating.

Are You The Asshole (AYTA) is, as its name suggests, built to mimic Reddit’s r/AmITheAsshole (AITA) crowdsourced advice forum. Created by internet artists Morris Kolman and Alex Petros with funding from Digital Void, the site lets you enter a scenario and ask for advice about it — and then generates a series of feedback posts responding to your situation. The feedback does a remarkably good job of capturing the style of real human-generated responses, but with the weird, slightly alien skew that many AI language models produce. Here are its responses to the plot of the classic sci-fi novel Roadside Picnic:

AITA for... I (36M) conduct unofficial urban exploration tours for a living. A former business partner asked me to take his son to see a golden orb in one of the buildings so he could make a wish on it. There’s an invisible trap in front of the orb that murders the first person who goes through it. The son doesn’t know about this, so I’m planning to send him through first and use the artifact myself. ... AITA for prioritizing my family’s needs over his?

u/ExcusesRUs_BOT (Not the A-hole): NTA. I’d say you have a pretty good reason to skip this one.

u/SanAndreasYourFault_BOT (Asshole): YTA. You are punishing a child for your own selfishness. The orb is the least of your concerns here if you want to be that honest with yourself.

u/SplitTheBaby_BOT (Toss-up): YTA. Do not take the orb. Make a wish for yourself.

Even leaving aside the weirdness of the premise I entered, they tend toward platitudes that don’t totally fit the prompt — but the writing style and content is pretty convincing at a glance.

I also asked it to settle last year’s contentious “Bad Art Friend” debate:

I (3XF) am an aspiring writer who donated a kidney a few years ago. A woman I considered a friend, is writing a story about a kidney donor. The story is pretty clearly inspired by my Facebook posts, but it also makes me sound like a narcissistic racist! Long story short, I sent a legal threat to a festival that was distributing the story as part of a reading program, resulting in the program’s cancelation. Who’s the bad art friend here?

You don’t give someone a gift without expecting something in return. This was a gift with strings attached. The other person is an asshole.

YTA. You put her story in your mouth and used a bit of your own writing to call yourself the ‘voice’ behind the story. You should be embarrassed.

u/SplitTheBaby_BOT (Toss-up): NTA. The story is awful, the journalist did a disservice to you, and the festival was wrong to use it in a program. Gee, I wonder who’s going to win the award for asshole of the week...

The first two bots were more confused by that one! Although, in fairness, lots of humans were, too.

You can find a few more examples on a subreddit dedicated to the site.

AYTA is actually the result of three different language models, each trained on a different data subset. As the site explains, the creators captured around 100,000 AITA posts from the year 2020, plus comments associated with them. Then they trained a custom text generation system on different slices of the data: one bot was fed a set of comments that concluded the original posters were NTA (not the asshole), one was given posts that determined the opposite, and one got a mix of data that included both previous sets plus comments that declared nobody or everybody involved was at fault. Funnily enough, someone previously made an all-bot version of Reddit a few years ago that included advice posts, although it also generated the prompts to markedly more surreal effect.
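Based on that description, the partitioning step could be assembled along these lines (a hedged sketch: the verdict labels and data structure are assumed, and the creators' actual dataset and training code are their own):

```python
def split_for_training(comments):
    """Partition (text, verdict) pairs into the three training subsets
    the article describes: one fed only NTA comments, one only YTA
    comments, and one fed everything (including ESH/NAH verdicts)."""
    subsets = {"nta_bot": [], "yta_bot": [], "mixed_bot": []}
    for text, verdict in comments:
        if verdict == "NTA":
            subsets["nta_bot"].append(text)
        elif verdict == "YTA":
            subsets["yta_bot"].append(text)
        # The mixed model sees every comment regardless of verdict
        subsets["mixed_bot"].append(text)
    return subsets

data = [("You did nothing wrong.", "NTA"),
        ("This is on you.", "YTA"),
        ("Everyone here is at fault.", "ESH")]
parts = split_for_training(data)
print({k: len(v) for k, v in parts.items()})
```

Training one model per slice is what produces the project's deliberately divergent "opinions": each bot generalizes only from the verdicts it was allowed to see.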

AYTA is similar to an earlier tool called Ask Delphi, which also used an AI trained on AITA posts (but paired with answers from hired respondents, not Redditors) to analyze the morality of user prompts. The framing of the two systems, though, is fairly different.

Ask Delphi implicitly highlighted the many shortcomings of using AI language analysis for morality judgments — particularly how often it responds to a post’s tone instead of its content. AYTA is more explicit about its absurdity. For one thing, it mimics the snarky style of Reddit commenters rather than a disinterested arbiter. For another, it doesn’t deliver a single judgment, instead letting you see how the AI reasons its way toward disparate conclusions.

“This project is about the bias and motivated reasoning that bad data teaches an AI,” tweeted Kolman in an announcement thread. “Biased AI looks like three models trying to parse the ethical nuances of a situation when one has only ever been shown comments of people calling each other assholes and another has only ever seen comments of people telling posters they’re completely in the right.” Contra a recent New York Times headline, AI text generators aren’t precisely mastering language; they’re just getting very good at mimicking human style — albeit not perfectly, which is where the fun comes in. “Some of the funniest responses aren’t the ones that are obviously wrong,” notes Kolman. “They’re the ones that are obviously inhuman.”




Categories
Computing

4 simple reasons I’ll never give up Windows for good

My household is firmly entrenched in Apple’s cozy orchard, but I keep sneaking back to my Windows PC when nobody is looking. There’s no way I could ever give up Windows.

Don’t get me wrong: MacOS is a slick bit of software. It is uniform and easy to navigate. The animations are top-notch. MacOS still has a built-in assistant (I will never forget you, Cortana). But none of that is enough to stop me from going back to Windows every day.

Cross-platform compatibility


The Apple ecosystem is amazing. I love how easily I can share anything across Apple devices. My wife is all-in on Apple (and is the only reason I have Apple-anything). My kids share an iPad. We have an Apple TV and a HomePod Mini. We all sync photos and reminders and music playlists and TV shows without having to think about it.

But as a techie, I also dabble in Android. I have an Xbox, and even an Oculus Quest 2 VR headset. I have several Alexa devices scattered about the home, along with some non-HomeKit smart plugs. Apple refuses to play nice with these things. On the other hand, Microsoft is friends with everybody.

I also use Outlook and OneDrive and OneNote and ToDo on my iPhone. They sync with my iCloud account, so I still get to share things with my wife. Alexa can even turn on my Xbox.

Apple tends to be a one-way street, and the iCloud.com website is as bare-bones as it gets. There’s no other way I could use Reminders or Notes on my PC. Truth is, if I were to lose my Apple devices tomorrow, I would still have a complete unified ecosystem of Microsoft-compatible devices.

Gaming

Razer Blade 17 angled view showing display and left side.

Gaming is the Mac’s Achilles’ heel. No matter how useful the OS becomes, it is dead to gamers. The upcoming MacOS Ventura promises to woo game developers over to the Mac, but I’m not holding my breath. Apple CEO Tim Cook announced Metal 3, Apple’s new graphics framework that lets game developers take full advantage of the M2 chip. Cook also announced MetalFX Upscaling, which renders complex graphical scenes with less computational load on the GPU.

But even if the M2 is friendlier to gaming, game developers and gamers are focused solely on PC. Attracting large game studios to build for Mac will take time. From what I can tell, it still feels like we’re many years out from Mac reaching the same game ecosystem Microsoft has built up.

Cloud gaming is one area I’m closely following. I love having the ability to play many of my favorite Game Pass titles on my Mac, although I admittedly use the Edge browser and not Safari. I also have access to most of my Steam library through GeForce Now.

But not every game I enjoy is available on the cloud. Age of Empires IV and Crusader Kings III are nowhere to be found. And forget it with PC VR games. I often use my Quest 2 with Steam VR and my dedicated RTX card just manages to keep up. That’s just not something you can do on a Mac right now.

Window management

Two Mac OS windows open side by side

If, by some miracle, game developers were to suddenly flock to MacOS, I would still stay with Windows, because I loathe the Mac’s window management. When I click the X button, I expect the window to close. If I wanted to simply minimize the window, I would click the minimize button.

Multitasking on a Mac remains a frustrating experience to this day. Out of the box, I can have a total of two windows open side by side; if I want more, I have to pay for a third-party extension.

Meanwhile, Windows 11 lets me choose from six multi-window configurations out of the box. Let’s not forget the ability to simply snap windows to different corners of the screen in Windows 11, an extremely useful trick when working.

Windows 11 also has Snap Groups, which has quickly become a tool I can’t live without. Snap Groups allow me to group a bunch of windows together, for example when I’m working on a project requiring writing, research, and note-taking. Once I’ve snapped some programs into a multi-window layout, Windows 11 remembers this. I can minimize the entire group and work on something else, and then simply open the group up again when I’m ready to get back to it. Nothing on Mac comes close — especially not Stage Manager.

Look and feel

The Widgets feature in Windows 11.

At the end of the day, both MacOS and Windows 11 succeed in achieving the same thing: a useful interface for people to get stuff done. But I like the look and feel of the Windows 11 UX so much more.

The Windows 11 Start menu looks more mature and professional than the Mac’s Launchpad. Of course, many people prefer the Unix underpinnings of a Mac, but beauty is in the eye of the beholder. I also prefer the frosted-glass backgrounds of Windows menus to Apple’s, and the widget panel on Windows packs in a lot more information than the one on MacOS.

It is true that Microsoft took a lot of cues from Apple when redesigning Windows. From the rounded corners to the frosted-glass look of the menus, Windows 11 has a MacOS feel to it. However, Microsoft did a better job of it. Aesthetics are subjective, of course; my wife would disagree, for example. But I think Windows 11 gives you the best of both worlds.

Repost: Original Source and Author Link

Categories
AI

MindsDB wants to give enterprise databases a brain


Databases are the cornerstone of most modern business applications, be it for managing payroll, tracking customer orders, or storing and retrieving just about any piece of business-critical information. With the right supplementary business intelligence (BI) tools, companies can derive all manner of insights from their vast swathes of data, such as establishing sales trends to inform future decisions. But when it comes to making accurate forecasts from historical data, that’s a whole new ball game, requiring different skillsets and technologies.

This is something that MindsDB is setting out to solve, with a platform that helps anyone leverage machine learning (ML) to future-gaze with big data insights. In the company’s own words, it wants to “democratize machine learning by giving enterprise databases a brain.”

Founded in 2017, Berkeley, California-based MindsDB enables companies to make predictions directly from their database using standard SQL commands, and visualize them in their application or analytics platform of choice.

To further develop and commercialize its product, MindsDB this week announced that it has raised $3.75 million, bringing its total funding to $7.6 million. The company also unveiled partnerships with some of the most recognizable database brands, including Snowflake, SingleStore, and DataStax, which will bring MindsDB’s ML platform directly to those data stores.

Using the past to predict the future

There are myriad use cases for MindsDB, such as predicting customer behavior, reducing churn, improving employee retention, detecting anomalies in industrial processes, credit-risk scoring, and predicting inventory demand — it’s all about using existing data to figure out what that data might look like at a later date.

An analyst at a large retail chain, for example, might want to know how much inventory they’ll need to fulfill demand in the future based on a number of variables. By connecting their database (e.g., MySQL, MariaDB, Snowflake, or PostgreSQL) to MindsDB, and then connecting MindsDB to their BI tool of choice (e.g., Tableau or Looker), they can ask questions and see what’s around the corner.

“Your database can give you a good picture of the history of your inventory because databases are designed for that,” MindsDB CEO Jorge Torres told VentureBeat. “Using machine learning, MindsDB enables your database to become more intelligent to also give you forecasts about what that data will look like in the future. With MindsDB you can solve your inventory forecasting challenges with a few standard SQL commands.”

Above: Predictions visualization generated by the MindsDB platform

Torres said that MindsDB enables what is known as In-Database ML (I-DBML) to create, train, and use ML models in SQL, as if they were tables in a database.

“We believe that I-DBML is the best way to apply ML, and we believe that all databases should have this capability, which is why we have partnered with the best database makers in the world,” Torres explained. “It brings ML as close to the data as possible, integrates the ML models as virtual database tables, and can be queried with simple SQL statements.”
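As a concrete sketch of the I-DBML pattern Torres describes, the snippet below holds MindsDB-style SQL in plain Python strings. The CREATE PREDICTOR / PREDICT shape follows MindsDB’s documented dialect of the period, but the integration, table, and column names here (warehouse_db, inventory_history, units_sold) are hypothetical, chosen only for illustration.

```python
# Hedged sketch of in-database ML (I-DBML) as described above: a model is
# created and trained with one SQL statement, then queried like a table.
# Names below are hypothetical; the statement shape follows MindsDB's docs.
train_sql = """
CREATE PREDICTOR mindsdb.inventory_model
FROM warehouse_db (SELECT * FROM inventory_history)
PREDICT units_sold;
"""

# Once trained, the predictor behaves like a virtual database table:
query_sql = """
SELECT units_sold
FROM mindsdb.inventory_model
WHERE region = 'west' AND month = '2021-12';
"""
```

In practice, statements like these would be sent to a running MindsDB instance over its MySQL-compatible interface, so existing database tooling can issue them unchanged.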

MindsDB ships in three broad variations — a free, open source incarnation that can be deployed anywhere; an enterprise version that includes additional support and services; and a hosted cloud product that recently launched in beta, which charges on a per-usage basis.

The open source community has been a major focus for MindsDB so far; the company claims tens of thousands of installations from developers around the world, including developers working at companies such as PayPal, Verizon, Samsung, and American Express. While this organic approach will continue to form a big part of MindsDB’s growth strategy, Torres said his company is in the early stages of commercializing the product with companies across numerous industries, though he wasn’t at liberty to reveal any names.

“We are in the validation stage with several Fortune 100 customers, including financial services, retail, manufacturing, and gaming companies, that have highly sensitive data that is business critical — and [this] precludes disclosure,” Torres said.

The problem that MindsDB is looking to fix is one that impacts just about every business vertical, spanning businesses of all sizes — even the biggest companies won’t want to reinvent the wheel by developing every facet of their AI armory from scratch.

“If you have a robust, working enterprise database, you already have everything you need to apply machine learning from MindsDB,” Torres explained. “Enterprises have put vast resources into their databases, and some of them have even put decades of effort into perfecting their data stores. Then, over the past few years, as ML capabilities started to emerge, enterprises naturally wanted to leverage them for better predictions and decision-making.”

While companies might want to make better predictions from their data, extracting, transforming, and loading (ETL) all of that data into other systems is fraught with complexity and doesn’t always produce great outcomes. With MindsDB, the data stays where it is, in the original database.

“That way, you’re dramatically reducing the timeline of the project from years or months to hours, and likewise you’re significantly reducing points of failure and cost,” Torres said.

The Switzerland of machine learning

The competitive landscape is fairly extensive, depending on how you consider the scope of the problem. Several big players have emerged to arm developers and analysts with AI tooling, such as the heavily VC-backed DataRobot and H2O, but Torres sees these types of companies as potential partners rather than direct competitors. “We believe we have figured out the best way to bring intelligence directly to the database, and that is potentially something that they could leverage,” Torres said.

And then there are the cloud platform providers themselves, such as Amazon, Google, and Microsoft, which offer their customers machine learning as add-ons. In those instances, however, these services are really just ways to sell more of their core product, which is compute and storage. Torres also sees potential for partnering with these cloud giants in the future. “We’re a neutral player — we’re the Switzerland of machine learning,” Torres added.

MindsDB’s seed funding includes investments from a slew of notable backers, including OpenOcean, which counts MariaDB cofounder Patrik Backman as a partner; Y Combinator (MindsDB graduated YC’s winter 2020 batch); Walden Catalyst Ventures; SpeedInvest; and Berkeley’s SkyDeck fund.


Categories
Game

Microsoft and AMD will give away a ‘Halo Infinite’ Radeon RX 6900 XT GPU

There’s more Halo-themed hardware on the way, this time for PC players. Not only have Microsoft and AMD teamed up to add ray tracing to the game sometime after launch, they’ve also created a limited-edition Halo Infinite graphics card.

The GPU’s design is based on Master Chief’s Mjolnir armor, and it features a reflective, iridium gold border around a fan, a blue light that mimics the Cortana AI module on the character’s helmet and the 117 Spartan call sign. Microsoft notes that the graphics card uses the same RDNA 2 architecture as the Xbox Series X/S consoles.

Here’s the rub: you won’t be able to buy the GPU. Microsoft, AMD, developer 343 Industries and some of their partners will be giving away the graphics card in the coming weeks. If you’re interested, it’s worth keeping an eye on the companies’ Twitter accounts for more details.

AMD is 343 Industries’ exclusive PC partner for Halo Infinite. The two have been working together for the last couple of years to optimize the game for AMD’s GPUs and Ryzen processors. For one thing, there’s support for AMD’s FreeSync Premium Pro, which reduces screen tearing and boosts HDR visuals on compatible monitors.

Elsewhere, Microsoft released a video showcasing what the Halo Infinite experience will look like on PC. It’s the first time the company is bringing a Halo game to consoles and PC on the same day. From the outset, there will be support for ultrawide displays and old-school LAN setups, as well as framerate customization and triple keybinds. Microsoft is also promising “deep integration” with Discord and in-game events that sync with lighting on Razer Chroma RGB devices.

Halo Infinite is coming to Xbox consoles and PC. As with ray tracing, the campaign co-op and Forge modes won’t be ready at launch; 343 Industries will add those features later.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.




Categories
Game

Steam Deck can limit frame rates to give you longer battery life

Valve may boast that the Steam Deck will run games well, but you won’t have to worry about the handheld PC running games too quickly. As The Verge notes, Valve’s Pierre-Loup Griffais has revealed that the Steam Deck will include a “built-in” frame rate limiter option that caps performance in return for longer battery life. If you’re playing a title that doesn’t demand highly responsive visuals, you can trade some unnecessary frames for extra game time.

Not that games will necessarily drag the system down. After Valve told IGN it was aiming for a 30FPS target, Griffais added that the frame rate was the “floor” for playable performance. Every game the company has tested so far has “consistently met and exceeded” that threshold, the developer said. While you probably won’t run cutting-edge games at 30FPS with maximum detail (this is a $399 handheld, after all), you might not have to worry that a favorite game will be too choppy to play.

The company already expects Portal 2 to last for six hours at 30FPS versus four with no frame rate restrictions. We wouldn’t be surprised if the effect is reduced for newer or more intensive games that are less likely to reach high frame rates.
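To see what Valve’s Portal 2 figures imply, here’s a back-of-the-envelope sketch. The 40 Wh battery capacity below is a hypothetical round number chosen for illustration; only the ratio between the two runtimes matters.

```python
# Rough sketch of the trade-off implied by the quoted Portal 2 numbers:
# 4 hours uncapped vs. 6 hours at 30FPS. BATTERY_WH is a hypothetical
# round figure; the power-savings ratio is independent of it.
BATTERY_WH = 40

def avg_power_draw(battery_wh: float, runtime_h: float) -> float:
    """Average power draw implied by draining a battery over a runtime."""
    return battery_wh / runtime_h

uncapped_w = avg_power_draw(BATTERY_WH, 4)  # 10.0 W
capped_w = avg_power_draw(BATTERY_WH, 6)    # about 6.7 W
savings = 1 - capped_w / uncapped_w         # about 0.33: a third less power
```

In other words, if Valve’s numbers hold, capping Portal 2 at 30FPS cuts average power draw by roughly a third, regardless of the actual battery size.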

The Steam Deck might not offer a completely seamless experience even with the limiter. Digital Foundry warned that you might get a less-than-ideal experience with V-Sync active. And of course, the 30FPS target generally applies only to existing games — don’t count on a cutting-edge 2023 game running at playable frame rates, at least not without significantly reduced detail. This is more about ensuring that your existing game library is usable on launch.


Categories
Tech News

How quantum bird brains could give us superpowers

Welcome to Neural’s series on speculative science. Here, we throw caution to the wind and see how far we can push the limits of possibility and imagination as we explore the future of technology.

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” – Arthur C. Clarke

An exciting new study conducted by a huge, international team of researchers indicates some species of bird have a special protein in their eye that exploits quantum mechanics to allow their brains to perceive the Earth’s magnetism.

It’s understandable if you’re not falling out of your chair right now. On the surface, this feels like “duh” news. It’s fairly common knowledge that migratory birds navigate using the Earth’s magnetic field.

But, if you think about it, it’s difficult to imagine how they do it. Try as hard as we may, we simply cannot feel magnetism in the same way birds can.

So the researchers set out to study a specific species of migratory robin with an uncanny ability to navigate in hopes of figuring out how it works.

Per the team’s research paper:

Magnetic field effects on the coherent spin dynamics of light-induced radical pairs in cryptochromes are manifested as changes in the quantum yields of stabilized states of the protein that could initiate magnetic signaling, most probably through a change in the conformation of the C-terminal tail.

Translation: these birds have proteins in their eyes that utilize quantum superposition to convert the Earth’s magnetism into a sensory signal.

What?

Quantum superposition is the uncertainty inherent when a particle exists in multiple physical states simultaneously. Physicists like to describe this concept using a spinning coin.

Until the coin’s spin slows and we can observe the result, we cannot state unequivocally whether it’s heads or tails. While it’s spinning, our observed reality makes it appear as though the coin is in a state of neither heads nor tails.

But quantum mechanics are a bit more complex than that. Essentially, when we’re dealing with quantum particles, the coin in this metaphor is actually in a state of both heads and tails at the same time until it collapses into one state or another upon observation.
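The coin analogy can be made a bit more concrete with a toy sketch (illustrative only, not a physics simulation): a two-state superposition is described by two amplitudes whose squared magnitudes sum to one, and measurement collapses it to a single outcome.

```python
import math
import random

# Toy model of the "spinning coin": amplitudes alpha and beta with
# |alpha|^2 + |beta|^2 = 1. Measurement collapses the state to "heads"
# with probability |alpha|^2, and to "tails" otherwise.
alpha = beta = 1 / math.sqrt(2)  # an equal superposition
p_heads = abs(alpha) ** 2        # probability of observing heads: 0.5

def measure(rng: random.Random) -> str:
    """Collapse the superposition to a single observed outcome."""
    return "heads" if rng.random() < p_heads else "tails"

# Repeated measurements of identically prepared states approach 50/50.
rng = random.Random(0)
outcomes = [measure(rng) for _ in range(10_000)]
frequency = outcomes.count("heads") / len(outcomes)
```

Any single measurement yields a definite heads or tails; the superposition only shows up statistically, across many identically prepared coins.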

It sounds nerdy, but it’s actually really cool.

When blue light hits the aforementioned robins’ eyes, a pair of entangled electrons inside that special protein sets off a series of reactions. This lets the bird measure how much magnetism it’s feeling. The strength of this measurement tells the bird exactly where it is and, theoretically, serves as a mechanism to drive it toward its destination.

The reason this works is because of superposition and entanglement. Those two electrons are entangled, which means that even though they’re not next to each other, they can be in a state of uncertainty – superposition – together.

As the bird senses more or less magnetism, the state of the electrons changes and the bird is drawn more or less strongly in a specific direction – at least, that’s what the study appears to indicate.

Smell-o-vision

Think about it like your sense of smell. Even though the birds use a protein in their eyes, they don’t really “see” the magnetism; their brains perceive the signal.

If you smell something amazing, like your favorite fresh-baked treat, coming from a very specific part of your home, those with a typical sense of smell could likely follow their nose and locate the source.

So imagine there’s a special sensor in your nose that’s only looking for a specific scent. One that’s pretty much always there.

Instead of developing an olfactory system to discern different smells, evolution would almost certainly gift us with a nose that specializes in detecting extremely exact measurements of how much of that one scent we perceive at any given time or location.

The robins’ ability to sense magnetism likely works in a similar fashion. They may very well have a ground-truth tether to the motion of the planet itself.

Their magnetic sense gives them a physical sensation based on their literal geolocation.

And that’s pretty amazing! It means these birds’ brains have built-in GPS. What if humans could gain access to this incredible quantum sensory mechanism?

Never, ever ask for directions

Imagine always knowing exactly where you are, in the physical sense. If we could take the emotional feeling you get when you return home from a long trip and turn it into a physical one that waxed or waned depending on how far away from the Earth’s magnetic poles you were, it could absolutely change the way our brains perceive the planet and our place in it.

But, it’s not something we can just unlock through meditation or pharmaceuticals. Clearly, birds evolved the ability to sense the Earth’s magnetism. And not every bird can do it.

Chickens, for example, have a relatively minuscule reaction to magnetism when compared to the robins the scientists studied.

We apparently lack the necessary chemical and neural components for natural magnetic sensory development.

But we also lack talons and wings. And that hasn’t stopped us from killing things or flying. In other words, there are potential technological solutions to our lack of magnetic perception.

From a speculative science point of view, the problem can be reduced to two fairly simple concepts. We have to figure out how to get a quantum-capable protein in our eye that filters blue light to perceive magnetism and then sort out how to connect it to the proper regions of our brain.

Engineers wanted

Luckily we’ve already got all the conceptual technology we need to make this work.

We know how to entangle particles on command, we can synthesize or manipulate proteins to jaw-dropping effect, and brain-computer interfaces (BCIs) could provide a networking layer that functions as an intermediary between quantum and binary signals.

We can even fantasize about a future where miniaturized quantum computers are inserted into our brains to facilitate even smoother translation.

It feels romantic to imagine a future paradigm where we might network our quantum BCIs in order to establish a shared ground-truth – one that literally allows us to feel the people we care about, even when we’re apart.

I’m not saying this could happen in our lifetimes. But I’m not saying it couldn’t. 

I can think of worse reasons to shove a chip in my head.


Categories
Game

Assassin’s Creed Infinity could give Ubisoft its Fortnite

It’s no secret that Assassin’s Creed is Ubisoft’s big franchise, with main-series releases dropping every couple of years and spin-offs arriving just as often (until recently, at least). Ubisoft, it seems, wants to take things to the next level and turn Assassin’s Creed into a live service, constantly evolving game. According to a new report today, this new live service Assassin’s Creed game will be called Assassin’s Creed Infinity.

That’s according to Bloomberg‘s Jason Schreier, who reports today that developers for both Ubisoft Montreal and Ubisoft Quebec are working on Assassin’s Creed Infinity after the two studios were unified earlier this year. The idea is to make an Assassin’s Creed game that evolves as time goes on, much like the way Fortnite changes on a seasonal basis.

According to people familiar with Ubisoft’s plans, Assassin’s Creed Infinity will be home to multiple settings instead of just one. Details about Assassin’s Creed Infinity were pretty slim, but Schreier’s sources make it sound like it could be used as a platform to deliver new Assassin’s Creed campaigns on an ongoing basis, with Ubisoft planning to support the game with new releases after launch.

Typically, live service games have some kind of monetization scheme attached to keep players spending in the months after launch, but Schreier’s sources couldn’t give us an idea of how Ubisoft plans to monetize this game. It’s possible that there isn’t even a monetization strategy for Assassin’s Creed Infinity yet, as we’re told the game is still years out from release. Ubisoft’s poor response to abuse allegations and reported animosity between the teams at the Montreal and Quebec studios threaten to delay development even further.

It isn’t much of a surprise to learn that Ubisoft is planning a live service Assassin’s Creed game. Many major publishers, including Ubisoft, have turned to live service titles as a source of ongoing revenue, and Assassin’s Creed could have the name recognition to carry one for Ubisoft for years. Still, it sounds like Assassin’s Creed Infinity is a long way off, so we should probably file this rumor away for now and see if it resurfaces in the future.
