How to turn off Quick Note on a Mac

When Apple released MacOS Monterey, it gave users the Quick Note feature. This is a great way to capture a “quick note” without taking an extra step to open the Notes app.

While you can create a Quick Note using a keyboard shortcut, Apple gave the feature a designated Hot Corner. If you’ve been accidentally creating Quick Notes with that move to the corner or simply want your Hot Corner to do something else instead, you can turn off the Quick Note action on Mac.

Turn off Quick Note by disabling the Hot Corner

By default, the bottom-right Hot Corner is set up to create a Quick Note. So, when you move your cursor to that corner, a new note appears. If you’re not fond of this, you can change the corner or disable it to turn off the Quick Note action.

Step 1: Open System preferences using the icon in your Dock or the Apple icon in the menu bar.

Step 2: Choose either Mission control or Desktop and screen saver.

Step 3: Use the Hot Corners button to open those settings.

Mac Mission Control with the Hot Corners button.

Step 4: You’ll see that the bottom-right corner is set to Quick Note. Open that drop-down list and pick a different action. If you want to simply disable the Hot Corner altogether, pick the Dash option.

Hot Corner with Quick Note selected.

Step 5: Select OK to save your change(s).

You can then move your cursor to that corner and see that no Quick Note pops up. If you change your mind later, you can reopen the Hot Corners settings and choose Quick Note for any of the four corners.

No actions for Hot Corners on Mac.
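For those comfortable with the Terminal, the same setting can reportedly be changed from the command line: Hot Corner actions live in the Dock's preference domain under keys such as wvous-br-corner. The integer action codes used below (1 for no action, 14 for Quick Note) are assumptions based on Monterey's observed behavior, so treat this Python sketch as illustrative rather than authoritative:

```python
# Hypothetical sketch: macOS stores Hot Corner actions in the Dock's
# preference domain (com.apple.dock) under keys like "wvous-br-corner".
# The integer action codes are assumptions: 1 = no action, 14 = Quick Note.
CORNER_KEYS = {
    "top-left": "tl",
    "top-right": "tr",
    "bottom-left": "bl",
    "bottom-right": "br",
}
NO_ACTION = 1

def disable_hot_corner_command(corner):
    """Build the `defaults write` invocation that clears a Hot Corner.
    Run the result with subprocess, then `killall Dock` to apply it."""
    key = f"wvous-{CORNER_KEYS[corner]}-corner"
    return ["defaults", "write", "com.apple.dock", key, "-int", str(NO_ACTION)]
```

The System Preferences route above is the safer option; the command line is just quicker if you manage several Macs.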

Turn off Quick Note by disabling the shortcut

If you want to go a step further and disable the keyboard shortcut attached to a Quick Note, you can do that too.

Step 1: Open System preferences and choose Keyboard.

Mac System Preferences, Keyboard.

Step 2: Pick the Shortcuts tab.

Step 3: On the left, select Mission control.

Step 4: On the right, you’ll see the Quick Note shortcut’s box checked. Note that the shortcut is Fn + Q or the Globe key + Q.

Uncheck the box for the Quick Note shortcut to disable it.

Keyboard shortcut for Quick Note unchecked.

Step 5: You can then close the keyboard preferences and your change is saved automatically.

Give the keyboard shortcut a try and you should not see a Quick Note pop open.

While Quick Notes are handy for capturing notes when using any app on your Mac, accidentally creating a new note with every move to that Hot Corner can be aggravating. So, simply turn off the Quick Note action.

For more, look at how to use split view or how to use multiple desktops on your Mac.

Editors’ Choice

Repost: Original Source and Author Link


How to turn on FSR on the Steam Deck

The Steam Deck is great, but it still feels like a bit of a work in progress. The good news is that it has indeed progressed since its release. Valve’s updates have added much-requested features and new compatibility, and one of those updates helped improve performance and visual quality with the introduction of AMD’s Fidelity FX Super Resolution, or FSR.

If you are having trouble balancing a usable frame rate and an acceptable resolution for your favorite games – especially when playing on a monitor – then the Steam Deck’s FSR compatibility could make a huge difference.

If you want to make the most of your Steam Deck, here’s how to enable FSR.

What is FSR on the Steam Deck, anyway?

FSR is an AMD technology that stands for FidelityFX Super Resolution, with the latest version being FSR 2.0. Basically, it works to upscale and optimize the resolution of your games so that they can run with more detail without compromising as much on the frame rate. That’s especially helpful for a system like the Steam Deck, which can struggle to reach higher resolutions for certain games, especially if you are connected to a monitor for a larger screen. Valve has even reported that using FSR can significantly save on battery life for the Steam Deck for certain games.
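To get a feel for what the upscaler is doing: FSR 1.0's published quality modes each render at a fixed fraction of the output resolution before upscaling, from 1.3x per axis for Ultra Quality to 2.0x for Performance. This Python sketch uses those documented FSR 1.0 factors to show roughly what internal resolution a game renders at in each mode:

```python
# FSR 1.0's published per-axis scale factors (FSR 2.0 uses similar ratios).
FSR_MODES = {
    "ultra_quality": 1.3,
    "quality": 1.5,
    "balanced": 1.7,
    "performance": 2.0,
}

def render_resolution(target_w, target_h, mode):
    """Internal resolution the game renders at before FSR upscales it."""
    factor = FSR_MODES[mode]
    return round(target_w / factor), round(target_h / factor)

# e.g. the Steam Deck's 1280x800 screen in Quality mode:
# render_resolution(1280, 800, "quality") -> (853, 533)
```

This is why the steps below have you lower the in-game resolution: the Deck's FSR filter upscales that cheaper internal image back to the full screen size.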

Some games have innate support for FSR, but on the Steam Deck, FSR can be enabled in the settings to help on all games you play on the system. Here’s what you need to do.

How to turn on FSR on the Steam Deck

Step 1: While you are in a game you want to optimize, press the QAM (Quick Access Menu) button on the Steam Deck. That’s the “…” button on the right-hand side.

Step 2: In the menu, move down until you reach the Performance section, indicated by the battery icon.

Step 3: In the Performance section, select the Advanced view option to see all the possible features.

Step 4: Scroll down until you get to the Scaling filter slider. Slide this all the way to the right to the setting that says FSR.

The Scaling Filter in the Steam Deck.

Step 5: Note that right below this section there is an option for FSR sharpness. For now, you can max it out. You may want to return to this slider later on and play with it, especially if you are having frame rate issues in the game you are upscaling.

Step 6: You aren’t done quite yet. To enable FSR, open your in-game menu and look for an option to set the resolution, usually in the video or graphics section (below is where to look in Half-Life 2). Set the resolution significantly below the native or default level, preferably 720p or lower. You need to drop the resolution below native for the Deck to kick in FSR and upscale the image. Now you should be ready to play.

The Steam Deck menu for Half-Life 2.

Step 7: If you aren’t noticing any resolution changes at the same frame rate, check your screen settings. Try toggling the full-screen mode off and on to see if that helps. Keep in mind that you may have to play around with your resolution settings, too. FSR sometimes works best to help increase your frame rate and sometimes to bump up the resolution, and the best outcome often depends on the game that you are playing.

Did you enjoy tweaking your Steam Deck to enable FSR? Here are nine other tips and tricks you need to know to make the most of your Steam Deck.



How to turn AI failure into AI success


The enterprise is rushing headfirst into AI-driven analytics and processes. However, based on the success rate so far, it appears there will be a steep learning curve before it starts to make noticeable contributions to most data operations.

While positive stories are starting to emerge, the fact remains that most AI projects fail. The reasons vary, but in the end, it comes down to a lack of experience with the technology, which will most certainly improve over time. In the meantime, it might help to examine some of the pain points that lead to AI failure to hopefully flatten out the learning curve and shorten its duration.

AI’s hidden functions

On a fundamental level, says researcher Dan Hendrycks of UC Berkeley, a key problem is that data scientists still lack a clear understanding of how AI works. Speaking to IEEE Spectrum, he notes that much of the decision-making process is still a mystery, so when things don’t work out, it’s difficult to ascertain what went wrong. In general, however, he and other experts note that only a handful of AI limitations are driving many failures.

One of these is brittleness — the tendency for AI to function well when a set pattern is observed, but then fail when the pattern is altered. For instance, most models can identify a school bus pretty well, but not when it is flipped on its side after an accident. At the same time, AIs can quickly “forget” older patterns once they have been trained to spot new ones. Things can also go south when AI’s use of raw logic and number-crunching leads it to conclusions that defy common sense.

Another contributing factor to AI failure is that it represents such a massive shift in the way data is used that most organizations have yet to adapt to it on a cultural level. Mark Montgomery, founder and CEO of AI platform developer KYield, Inc., notes that few organizations have a strong AI champion at the executive level, which allows failure to trickle up from the bottom organically. This, in turn, leads to poor data management at the outset, as well as ill-defined projects that become difficult to operationalize, particularly at scale. Maybe some of the projects that emerge in this fashion will prove successful, but there will be a lot of failure along the way.

Clear goals

To help minimize these issues, enterprises should avoid three key pitfalls, says Bob Friday, vice president and CTO of Juniper’s AI-Driven Enterprise Business. First, don’t go into it with vague ideas about ROI and other key metrics. At the outset of each project, leaders should clearly define both the costs and benefits. Otherwise, you are not developing AI but just playing with a shiny new toy. At the same time, there should be a concerted effort to develop the necessary AI and data management skills to produce successful outcomes. And finally, don’t try to build AI environments in-house. The faster, more reliable way to get up and running is to implement an expertly designed, integrated solution that is both flexible and scalable.

But perhaps the most important thing to keep in mind, says Emerj’s head of research, Daniel Faggella, is that AI is not IT. Instead, it represents a new way of working in the digital sphere, with all-new processes and expectations. A key difference is that while IT is deterministic, AI is probabilistic. This means actions taken in an IT environment are largely predictable, while those in AI aren’t. Consequently, AI requires a lot more care and feeding upfront in the data conditioning phase, and then serious follow-through from qualified teams and leaders to ensure that projects do not go off the rails or can be put back on track quickly if they do.
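That deterministic-versus-probabilistic distinction is easy to make concrete. In the illustrative Python sketch below, the lookup function behaves like classic IT (same input always produces the same output), while the toy classifier stands in for a model that only returns a distribution the caller must decide how to act on. The function names and the random stand-in are invented for illustration:

```python
import random

def deterministic_lookup(record_id, table):
    """Classic IT: the same input yields the same output, every time."""
    return table[record_id]

def probabilistic_classify(features, seed=None):
    """Toy stand-in for an AI model: it returns class probabilities, not an
    answer. The caller must decide how to act on the distribution
    (thresholds, human review, etc.) -- the extra 'care and feeding'."""
    rng = random.Random(seed)
    p = rng.random()  # pretend this score came from a trained model
    return {"approve": p, "reject": 1 - p}

table = {42: "invoice"}
assert deterministic_lookup(42, table) == deterministic_lookup(42, table)
probs = probabilistic_classify([0.1, 0.9], seed=7)
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

The operational consequence: an IT system can be signed off once, while an AI system's outputs need ongoing monitoring because no single run proves it correct.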

The enterprise might also benefit from a reassessment of what failure means and how it affects the overall value of its AI deployments. As Dale Carnegie once said, “Discouragement and failure are two of the surest stepping stones to success.”

In other words, the only way to truly fail with AI is to not learn from your mistakes and try, try again.




How to Turn Keyboard Lighting On and Off

There isn’t just one way to turn on your keyboard lights. It can vary wildly among laptop and peripheral manufacturers and even among different laptop lines from the same brand.

To bring a bit of clarity to the situation, we’ve gathered together six possible ways to turn your keyboard backlighting on or off. Read on to find the best method for your laptop or desktop keyboard.

Press the dedicated button for keyboard backlighting

Some keyboards, like the Logitech G Pro desktop keyboard, will actually have a dedicated button that you can press to toggle the keyboard light on or off. In the case of the Logitech G Pro, you’ll want to look for a button stamped with a sun icon with rays in the upper-right corner of the keyboard. Some laptops also have a dedicated backlight key, though the icon on it can vary wildly.

Press the Increase Brightness button

A close up of the backlit keyboard of an Apple MacBook Pro 13 from 2015.
Bill Roberson/Digital Trends

If you have a MacBook, certain models allow you to turn on the backlighting by pressing the Increase Brightness key, which looks like half of the sun with three rays. Press it until you get the desired level of keyboard light brightness. To turn it off, press the Decrease Brightness key, which looks like a half-circle outlined in dots (instead of the rays) until the light turns off.

Using the Increase/Decrease Brightness buttons should work for certain models of Macs that run MacOS High Sierra, Mojave, Catalina, Big Sur, or Monterey.

Press the assigned Function key

A Dell Inspiron 15 7000 2015 keyboard backlit.
Bill Roberson/Digital Trends

For many Windows laptops, you might need to press a Function key (F1-F12) to turn on your keyboard’s backlighting. If this is the case, which Function key it is will likely depend on the brand and model of your laptop.

For example, Dell notebook PCs have at least three possible key options: F6, F10, or the right arrow key. In some cases, F5 is also possible. From these options, you should be able to tell which one controls the backlighting by seeing which one has the Illumination icon (which looks like a half-sun with rays) stamped on it. If you don’t see this icon at all, your Dell PC doesn’t have keyboard backlighting. But if you do see the icon, press the Function key that has it. (You may need to press it in conjunction with the Fn key.) Pressing that key combination — Fn + [Function Key] — should allow you to cycle through various brightness level options for your Dell PC’s keyboard, so keep pressing it until you reach your desired brightness level or until you turn it off.
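Conceptually, these backlight keys behave like a small state machine: each press advances to the next brightness level and wraps back around to off. A hypothetical Python sketch (the three levels here are invented; real laptops may offer more):

```python
# Hypothetical brightness levels -- actual laptops may expose more steps.
LEVELS = ["off", "dim", "bright"]

def press_backlight_key(current):
    """Each press advances to the next brightness level, wrapping to 'off'."""
    i = LEVELS.index(current)
    return LEVELS[(i + 1) % len(LEVELS)]
```

This is why repeatedly pressing the key eventually turns the light off again rather than stopping at maximum brightness.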

HP notebook computers work similarly to Dell laptops: You’ll need to press an assigned Function key (which could be F5 or F4 or F11) with or without pressing the Fn key as well. You may need to press it multiple times to adjust the brightness or turn it off. There should also be a backlight icon stamped on the assigned Function key for your HP notebook that looks like a row of three dots with rays coming out of the first dot.

The main thing, though, is that if you don’t know the keyboard shortcut or Function key assigned to your keyboard’s backlighting feature, you should look it up on your PC manufacturer’s support site or in its manual.

Use the Touch Bar

Close up of person's hand touching a Macbook pro touch bar.
Chesnot/Getty Images

Certain MacBook models may have you adjust your keyboard lighting via the Touch Bar instead. To do so, tap the Arrow icon on the Touch Bar to expand its Control Strip. To turn on backlighting, tap the Increase Brightness button. To turn it off, tap and hold the Decrease Brightness button, which looks like a half-circle outlined in dots, not rays.

These instructions should work for MacBooks with Touch Bars that run MacOS Monterey and Big Sur.

Adjust it in Control Center or Windows Mobility Center

Depending on the manufacturer and model of your device, you might be able to turn on/adjust the keyboard light via your PC’s control panel menu.

For certain MacBooks, that means opening Control Center, selecting Keyboard Brightness, and then dragging its corresponding slider. This should work for some MacBook models that also run MacOS Monterey or Big Sur.

For some Windows 10 PCs, this means you’ll need to access the Windows Mobility Center which resides in the Control Panel. To access it, select Control Panel > Hardware and Sound > Windows Mobility Center. In the Windows Mobility Center, look for the Keyboard Brightness (or Keyboard Backlighting) setting, select its corresponding slider, and pull that slider over to the right.
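If you'd rather skip the Control Panel clicks, Windows Mobility Center can also be launched directly via its executable, mblctr.exe. A small Python sketch of that shortcut, assuming a standard Windows install:

```python
import platform
import subprocess

def mobility_center_command():
    """Windows Mobility Center's executable is mblctr.exe."""
    return ["mblctr.exe"]

def open_mobility_center():
    """Launch Mobility Center directly, skipping the Control Panel clicks."""
    if platform.system() != "Windows":
        raise RuntimeError("Windows Mobility Center only exists on Windows")
    subprocess.run(mobility_center_command(), check=False)
```

You can also just type mblctr into the Start menu or a Run dialog for the same result.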

Use the keyboard’s recommended software, if available

Some keyboards have their own specific software or app that can be used to control and customize the settings of your laptop or desktop keyboard. A great example of this is the app used for Razer’s laptops and peripherals: Synapse. The Synapse app can be used to customize the lighting effects of your Razer gaming laptop’s keyboard or your Razer desktop gaming keyboard. And this can include increasing or decreasing the brightness of your keyboard light or adjusting the settings so that the light stays on or off in sleep mode.

Most of the best gaming keyboards have some kind of back-end software that can let you adjust the RGB lighting of individual keys or turn any or all of them on or off at will.

Enable keyboard backlighting in the BIOS

In some cases, if your laptop has the right keyboard light buttons and they still don’t work the way they’re supposed to, you may need to check your device’s BIOS settings to make sure they’re configured correctly, or your BIOS may need to be updated to the latest version. When doing either of these things, be sure to carefully follow your device manufacturer’s instructions; look up those specific instructions first. Some manufacturers, like HP and Dell, have posted detailed instructions online on how to check for and correct these issues.


Why won’t my keyboard backlighting turn on?

There are a number of reasons why your keyboard backlighting won’t turn on. Here are a few you may want to consider:

  • Your device may not actually offer a backlit keyboard. Not all laptops or desktop keyboards have keyboard lights. Check with your device’s manufacturer to confirm that the model of your device is supposed to have backlighting. If it is, confirm that you’re using the right keyboard shortcuts, buttons, or settings to turn it on.
  • Some laptops like MacBooks use light sensors for backlighting in low-light situations. It’s important to know where they are on your device and to make sure you’re not blocking them.
  • Is the backlight not working or is the brightness level set too low? If the brightness level of the backlight is set too low, then the light is probably working but you’re just having trouble seeing it. See if you can increase the brightness level using our suggestions above so you can see the light better.
  • You may need to update the BIOS to the latest version, or its settings aren’t configured correctly. If you decide to update the BIOS to the latest version or reconfigure its settings, be sure to follow your device manufacturer’s specific instructions for doing so.

Does keyboard backlighting drain the battery?

Yes, keyboard backlighting can contribute to the drain, as it does need power to function. If you’re concerned about conserving battery power, you can turn off the backlighting or adjust your keyboard lighting settings so that the light automatically turns off when the computer goes to sleep or the display is off.

How do I change the keyboard backlighting color?

First, make sure that your keyboard has the ability to change backlighting colors. If so, you’ll need to consult your device manufacturer’s specific instructions on how to change the backlighting color. Usually, these instructions will involve you opening a manufacturer-recommended desktop app like the HP OMEN Command Center or Razer’s Synapse app and then customizing your lighting settings within that app to add colors to your backlight.



How to Turn on Automatic Tactical Sprint in Call of Duty: Vanguard

Sprinting is not an optional mechanic if you want to survive in Call of Duty: Vanguard. Being able to quickly move around the map, even at the cost of a short delay between slowing down and being able to open fire, is key to winning a match. Whether it’s a pure deathmatch-style game or an objective-focused match, getting into the action and the correct position as fast as possible is the only way to have a fighting chance. That said, pressing in on a thumbstick to start up your sprint can be cumbersome or downright uncomfortable, especially with how frequently you’ll want to start and stop sprinting during a match.

Call of Duty: Vanguard recognized the need to include more accessibility options in the game. There are a ton of options to go through, so you may not even realize that one of them makes sprinting much more comfortable. Called Automatic Tactical Sprint, this option can save your thumbs a ton of stress by making sprinting just a little bit easier. What this feature does, and how to access it, is slightly different between platforms, however. Here’s how you can turn on Automatic Tactical Sprint in Call of Duty: Vanguard to spare your fingers that extra bit of strain.


What is Automatic Tactical Sprint?

First introduced in Call of Duty: Modern Warfare, Automatic Tactical Sprint allowed for much easier and more fluid transitions from normal running into a full-on sprint. Before this feature, which was noticeably absent from Call of Duty: Black Ops Cold War, you were forced to either press in on your controller’s joystick or double-tap W on your keyboard in order to make your character go full speed. With this option turned on, your character will automatically transition from normal running into a full tactical sprint so long as you continue moving in a given direction.

How to turn on Automatic Tactical Sprint on consoles

If you’re playing on either a PlayStation or Xbox console, you will find the Automatic Tactical Sprint toggle in the game’s settings, and it’s turned off by default. Simply open the Settings and tab over to the Controller settings. Scroll down to the Gameplay header, and find Automatic Sprint. Here, you can choose to have it off, in which case you need to press the thumbsticks as normal; Automatic Sprint, which allows you to go into a normal sprint automatically but still requires a press of the thumbstick to enter a Tactical Sprint; and finally Automatic Tactical Sprint.

How to turn on Automatic Tactical Sprint on PC

Turning on tactical sprint.

If you’re playing on a PC using keyboard and mouse controls, you will find this setting in a slightly different part of the settings menu. Instead of Controller, navigate over to Keyboard & Mouse and look under the Movement settings. Automatic Sprint is just below Automatic Ground Mantle and has the same three settings as it does for controller players.



How to Turn Off a Nintendo Switch Controller

The Nintendo Switch has a ton of different controller options, making it one of the most versatile consoles ever created. The standard Joy-Con controllers can be placed in the Joy-Con grip, and you can use a Pro Controller or third-party alternatives to play your games. You can even hook up a GameCube controller as long as you have an adapter.

With all of those options, however, comes the possibility of you accidentally leaving several controllers turned on and needlessly wasting batteries. Fear not, however, as the solution is quite simple. Here is how to turn off a Nintendo Switch controller.

How to turn off a Nintendo Switch controller before a game

If you want to swap controllers on the Nintendo Switch, you can turn your current controller off before entering the game.

Step 1: Go to the main menu of the Switch by pressing the home button on your controller, and select the Joy-Con icon at the bottom of the screen.

Step 2: On the next screen, select the “Change Grip/Order” option on the right and you’ll be prompted to press the L and R buttons on the controllers you want to use. What this will also do is turn off every controller currently connected to the system. Not only is this the perfect way to turn off a controller you don’t want to use, but it can also let you reconfigure the one you’re using already.

Step 3: Want to turn a Joy-Con sideways instead of playing with it vertically? Just press the L and R buttons on the side of it when this screen is up to do so.

How to turn off a Nintendo Switch controller by putting the Switch to sleep

As with most other game consoles, you can put the Nintendo Switch to sleep to also shut off any controller currently connected to it.

Step 1: Either tap the power button located on the top of the Switch itself, or select the power icon at the bottom of the main menu and confirm your selection.

Step 2: Pressing the home button on any of your controllers will wake the console back up, and that controller will be the only one still connected.

How to turn off a Nintendo Switch controller using system settings

If you are giving a controller away to someone else, you will want to disconnect that controller from your own system first — this will stop it from turning your console on when they attempt to wake up their own Switch.

Step 1: Go to the system settings, which you can reach by selecting the gear icon at the bottom of the home screen.

Step 2: Scroll down the menu on the left until you spot “Controllers and Sensors.”

Step 3: Select the “Disconnect Controllers” option while your system is in handheld mode. Several instructions will appear on the screen that you can follow directly. By doing so, you’ll remove the devices from your console’s memory. Be sure not to try this while the device is in docked mode — the process won’t work. You’ll receive an error message instead.

How to charge Nintendo Switch controllers

If you can’t turn on your Switch controller, or it keeps shutting down randomly, you likely just have a dead battery. Fortunately, there are a few ways you can quickly charge your Switch controllers, and charging doesn’t even have to disrupt your gameplay: you can charge while playing.

You can use any USB-C cable and charger to charge up your Nintendo Switch Pro controllers. On top of that, you could simply connect the controllers straight into the Switch dock. Or you could always slide the Joy-Con controllers directly into the sides of the console to recharge. It’s up to you whether you’d prefer to charge the device by its USB-C port or a docking station.

That said, you could also always buy two separate docking stations for your Switch and Joy-Con controllers instead.



Google is about to turn on two-factor authentication by default for millions of users

In May, Google announced plans to enable two-factor authentication (or two-step verification, as Google calls the setup) by default to bring more security to many accounts. Now it’s Cybersecurity Awareness Month, and Google is once again reminding us of that plan, saying in a blog post that it will enable two-factor for 150 million more accounts by the end of this year.

In 2018, Google said that only 10 percent of its active accounts were using two-factor authentication. It has been pushing, prodding, and encouraging people to enable the setting ever since. Another prong of the effort will require more than 2 million YouTube creators to turn on two-factor authentication to protect their channels from takeover. Google says it has partnered with organizations to give away more than 10,000 hardware security keys every year. Its push for two-factor has made the technology readily available on your phone whether you use Android or iPhone.
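For the curious, the rotating six-digit codes used by two-step verification apps are typically generated with the TOTP algorithm from RFC 6238, which any device can compute from a shared secret and the current time. A minimal standard-library Python sketch (an illustration of the standard, not Google's actual implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second interval count."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59s the ASCII key "12345678901234567890"
# yields "287082" (the last six digits of the documented "94287082").
assert totp(b"12345678901234567890", timestamp=59) == "287082"
```

Because both sides derive the code from the same secret and clock, the server can verify it without the code ever traveling anywhere at signup time.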

Another tool that helps users keep their accounts secure is a password manager, and Google now says that it checks over a billion passwords a day via the manager built into Chrome, Android, and the Google app. The password manager is also available on iOS, where Chrome can autofill logins for other apps. Google says that soon it will help you generate passwords for other apps, making things even more straightforward. Also coming soon is the ability to see all of your saved passwords directly from the Google app menu.

Last but not least, Google is highlighting its Inactive Account Manager. This is a set of decisions to make about what happens to your account if you decide to stop using it or are no longer around and able to make those decisions.

Google Inactive Account Manager
Image: Google

Google added the feature in 2013 so that you can set a timeout period for your account between three and 18 months of disuse before the Inactive Account Manager protocols take effect. Just in case you only switched accounts or forgot about your login, Google will send an email a month before the limit is up. At that point, you can choose to have your information deleted or have it forwarded to whatever trusted contacts you want to have handling things on your behalf. Google’s blog post notes that an inactive account led to the massive Colonial Pipeline attack earlier this year, and just for security’s sake, you probably don’t want your digital life simply hanging around unused for whatever hackers are bored in the future.
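The timeline is straightforward to sketch out. This illustrative Python helper (the function and its 30-day month approximation are my own, not a Google API) computes when the reminder email and the protocols would kick in:

```python
from datetime import date, timedelta

def inactive_account_dates(last_activity, timeout_months):
    """Approximate the Inactive Account Manager timeline: a reminder email
    about a month before the timeout, then the protocols at the timeout.
    Months are approximated as 30 days -- an illustrative assumption."""
    if not 3 <= timeout_months <= 18:
        raise ValueError("timeout must be between 3 and 18 months")
    deadline = last_activity + timedelta(days=30 * timeout_months)
    reminder = deadline - timedelta(days=30)
    return reminder, deadline
```

So with the minimum three-month timeout, you would hear from Google roughly two months after your last sign-in.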



Google isn’t ready to turn search into a conversation

The future of search is a conversation — at least, according to Google.

It’s a pitch the company has been making for years, and it was the centerpiece of last week’s I/O developer conference. There, the company demoed two “groundbreaking” AI systems — LaMDA and MUM — that it hopes, one day, to integrate into all its products. To show off its potential, Google had LaMDA speak as the dwarf planet Pluto, answering questions about the celestial body’s environment and its flyby from the New Horizons probe.

As this tech is adopted, users will be able to “talk to Google”: using natural language to retrieve information from the web or their personal archives of messages, calendar appointments, photos, and more.

This is more than just marketing for Google. The company has evidently been contemplating what would be a major shift to its core product for years. A recent research paper from a quartet of Google engineers titled “Rethinking Search” asks exactly this: is it time to replace “classical” search engines, which provide information by ranking webpages, with AI language models that deliver these answers directly instead?

There are two questions to ask here. The first: can it be done? After years of slow but definite progress, are computers really ready to understand all the nuances of human speech? The second: should it be done? What happens to Google if the company leaves classical search behind? Appropriately enough, neither question has a simple answer.

There’s no doubt that Google has been pushing a vision of speech-driven search for a long time now. It debuted Google Voice Search in 2011, then upgraded it to Google Now in 2012; launched Assistant in 2016; and in numerous I/Os since, has foregrounded speech-driven, ambient computing, often with demos of seamless home life orchestrated by Google.

Despite clear advances, I’d argue that the actual utility of this technology falls far short of the demos. Take the 2016 introduction of Google Home, for example, where Google promised that the device would soon let users “control things beyond the home, like booking a car, ordering dinner, or sending flowers to mom, and much, much more.” Some of these things are now technically feasible, but I don’t think they’re common: speech has not proven to be the flexible and faultless interface of our dreams.

Everyone will have different experiences, of course, but I find that I only use my voice for very limited tasks. I dictate emails on my computer, set timers on my phone, and play music on my smart speaker. None of these constitute a conversation. They are simple commands, and experience has taught me that if I try anything more complicated, words will fail. Sometimes this is due to not being heard correctly (Siri is atrocious on that score), but often it just makes more sense to tap or type my query into a screen.

Watching this year’s I/O demos I was reminded of the hype surrounding self-driving cars, a technology that has so far failed to deliver on its biggest claims (remember Elon Musk promising that a self-driving car would take a cross-country trip in 2018? It hasn’t happened yet). There are striking parallels between the fields of autonomous driving and speech tech. Both have seen major improvements in recent years thanks to the arrival of new machine learning techniques coupled with abundant data and cheap computation. But both also struggle with the complexity of the real world.

In the case of self-driving cars, we’ve created vehicles that don’t perform reliably outside of controlled settings. In good weather, with clear road markings, and on wide streets, self-driving cars work well. But steer them into the real world, with its missing signs, sleet and snow, and unpredictable drivers, and they are clearly far from fully autonomous.

It’s not hard to see the similarity with speech. The technology can handle simple, direct commands that require the recognition of only a small number of verbs and nouns (think “play music,” “check the weather” and so on) as well as a few basic follow-ups, but throw these systems into the deep waters of conversation and they flounder. As Google’s CEO Sundar Pichai commented at I/O last week: “Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. […] The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.”

However, there are reasons to think things are different now (for speech anyway). As Google noted at I/O, it’s had tremendous success with a new machine learning architecture known as Transformers, a model that now underpins the world’s most powerful natural language processing (NLP) systems, including OpenAI’s GPT-3 and Google’s BERT. (If you’re looking for an accessible explanation of the underlying tech and why it’s so good at parsing language, I highly recommend this blog post from Google engineer Dale Markowitz.)
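The article doesn’t go into the mechanics, but the core operation of the Transformer architecture it mentions — scaled dot-product attention — can be sketched in a few lines of plain Python. This is a toy illustration only, not code from Google or OpenAI; real systems like BERT and GPT-3 stack many such layers with learned weight matrices, whereas the vectors here are passed in directly.

```python
# Toy sketch of scaled dot-product attention, the building block of
# Transformer models. Illustrative only: real systems learn the
# query/key/value projections; here we supply the vectors by hand.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """For each query, mix the value vectors, weighted by how strongly
    the query matches each key (dot product, scaled by sqrt(dim))."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

A query that strongly matches one key pulls its output almost entirely from that key’s value vector — this content-based routing, applied in parallel across a whole sequence, is what lets these models relate distant words to one another.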

The arrival of Transformers has produced a genuinely awe-inspiring flowering of AI language capabilities. As GPT-3 has demonstrated, AI can now generate a seemingly endless variety of text, from poetry to plays, creative fiction to code, and much more, often with surprising ingenuity and verve. These models also deliver state-of-the-art results on various speech and linguistic benchmarks and, better still, they scale incredibly well: pump in more computational power and you get reliable improvements. The supremacy of this paradigm is sometimes known in AI as the “bitter lesson,” and it is very good news for companies like Google. After all, they’ve got plenty of compute, which means there’s lots of road ahead to improve these systems.

Google channeled this excitement at I/O. During a demo of LaMDA, which has been trained specifically on conversational dialogue, the AI model pretended first to be Pluto, then a paper airplane, answering questions with imagination, fluency, and (mostly) factual accuracy. “Have you ever had any visitors?” a user asked LaMDA-as-Pluto. The AI responded: “Yes I have had some. The most notable was New Horizons, the spacecraft that visited me.”

A demo of MUM, a multi-modal model that understands not only text but also image and video, had a similar focus on conversation. When the model was asked: “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” it was smart enough to know that the questioner is not only looking to compare mountains, but that “preparation” means finding weather-appropriate gear and relevant terrain training. If this sort of subtlety can transfer into a commercial product — and that’s obviously a huge, skyscraper-sized if — then it would be a genuine step forward for speech computing.

That, though, brings us to the next big question: even if Google can turn speech into a conversation, should it? I won’t pretend to have a definitive answer to this, but it’s not hard to see big problems ahead if Google goes down this route.

First are the technical problems. The biggest is that it’s impossible for Google (or any company) to reliably validate the answers produced by the sort of language AI the company is currently demoing. There’s no way of knowing exactly what these models have learned or what the source is for any answer they provide. Their training data usually consists of sizable chunks of the internet and, as you’d expect, this includes both reliable information and garbage misinformation. Any response they give could be pulled from anywhere online. This can also lead them to produce output that reflects the sexist, racist, and biased notions embedded in parts of their training data. And these are criticisms that Google itself has seemingly been unwilling to reckon with.

Similarly, although these systems have broad capabilities, and are able to speak on a wide array of topics, their knowledge is ultimately shallow. As Google’s researchers put it in their paper “Rethinking Search,” these systems learn assertions like “the sky is blue,” but not associations or causal relationships. That means that they can easily produce bad information based on their own misunderstanding of how the world works.

Kevin Lacker, a programmer and former Google search quality engineer, illustrated these sorts of errors in GPT-3 in this informative blog post, noting how you can stump the program with common sense questions like “Which is heavier, a toaster or a pencil?” (GPT-3 says: “A pencil”) and “How many eyes does my foot have?” (A: “Your foot has two eyes”).

To quote Google’s engineers again from “Rethinking Search”: these systems “do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over.”

These issues are amplified by the sort of interface Google is envisioning. Although it’s possible to overcome difficulties with things like sourcing (you can train a model to provide citations, for example, noting the source of each fact it gives), Google imagines every answer being delivered ex cathedra, as if spoken by Google itself. This potentially creates a burden of trust that doesn’t exist with current search engines, where it’s up to the user to assess the credibility of each source and the context of the information they’re shown.

The pitfalls of removing this context are obvious when we look at Google’s “featured snippets” and “knowledge panels” — cards that Google shows at the top of the search results page in response to specific queries. These panels highlight answers as if they’re authoritative, but the problem is they’re often not, an issue that former search engine blogger (and now Google employee) Danny Sullivan dubbed the “one true answer” problem.

These snippets have made headlines when users discover particularly egregious errors. One example from 2017 involved asking Google “Is Obama planning martial law?” and receiving the answer (cited from a conspiracy news site) that, yes, of course he is (if he was, it didn’t happen).

In the demos Google showed at I/O this year of LaMDA and MUM, it seems the company is still leaning toward this “one true answer” format. You ask and the machine answers. In the MUM demo, Google noted that users will also be “given pointers to go deeper on topics,” but it’s clear that the interface the company dreams of is a direct back and forth with Google itself.

This will work for some queries, certainly; for simple demands that are the search equivalent of asking Siri to set a timer on my phone (e.g. asking when Madonna was born, who sang “Lucky Star,” and so on). But for complex problems, like those Google demoed at I/O with MUM, I think it will fall short. Tasks like planning holidays, researching medical problems, shopping for big-ticket items, looking for DIY advice, or digging into a favorite hobby all require personal judgment rather than a computer-generated summary.

The question, then, is whether Google will be able to resist the lure of offering one true answer. Tech watchers have noted for a while that the company’s search products have become more Google-centric over time. The company increasingly buries results under ads that are both external (pointing to third-party companies) and internal (directing users to Google services). I think the “talk to Google” paradigm fits this trend. The underlying motivation is the same: it’s about removing intermediaries and serving users directly, presumably because Google believes it’s best positioned to do so.

In a way, this is the fulfillment of Google’s corporate mission “to organize the world’s information and make it universally accessible and useful.” But this approach could also undermine what makes the company’s product such a success in the first place. Google isn’t useful because it tells you what you need to know; it’s useful because it helps you find this information for yourself. Google is the index, not the encyclopedia, and it shouldn’t sacrifice search for results.



How Pizza Hut aims to turn weather forecasts into sales


It’s not enough to be first — just ask tech giants like Netscape and Friendster. And Pizza Hut.

The deep dish chain was first to online food ordering — an industry projected to hit $126.91 billion this year. But it didn’t keep pace with innovation and was later eclipsed by competitors. Now, nearly three decades later, Pizza Hut knows it’s time to get serious. The company is looking to data, analytics, and AI to learn more about its customers in order to boost digital experiences and sales.

So while the other aforementioned early entrants are no more, Pizza Hut is still spinning up pies, and there are lessons to be learned from its game of catchup. To find out more about the company’s approach to data, its partnerships, and why it chose build over buy for its machine learning technologies, we chatted with Tristan Burns, Pizza Hut’s global head of analytics.

This interview has been edited for brevity and clarity.

VentureBeat: Pizza Hut was fairly early to online ordering and digital customer experiences. How has the vision and approach evolved over time?

Tristan Burns: You’re right — Pizza Hut was the first brand to create an online ordering experience. That was back in 1994, in California. You could submit an order online, it would end up in a store, and it’d be prepared and sent to your house, which was pretty cool. And while Pizza Hut was quite early to the ecommerce game, I think no one would mind me saying we were kind of eclipsed by Domino’s in the 2000s. They came out swinging, calling themselves a tech company that makes pizzas, with some pretty innovative technologies. Now Pizza Hut Digital Ventures, the organization I work for that is specific to Pizza Hut International, is taking a tech-first approach to redesigning, reimagining, and recreating our ecommerce capabilities. We’re in the process of building and scaling out some pretty robust solutions. It’s a very, very data-centric and very customer-centric approach.

VentureBeat: Are you using AI and machine learning as well? What does that technology look like, and what role is it playing in the organization, specifically in regards to personalization and customer experiences?

Burns: Definitely. We’re in the early stages of an AI journey. And part of our machine learning program is to ingest customer behavior and a little bit about who customers are, where in the world they are, what the weather might be at their location, and then surface relevant product recommendations to them during their experience. We’re still early days in that process, but we’re building in-house capabilities to own it and with the hope that we can do a better job and have more specific outcomes. I think there are limitations when you use an off-the-shelf platform, and because we’re global and working across multiple different regions of the world, we have to be pretty nimble in how we implement and use AI. So those nuances and specifics mean we’re going to have a lot more flexibility if we own the experience and the platform.
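Burns doesn’t describe the implementation, and the sketch below is purely hypothetical — the item names, weights, and structure are all invented for illustration — but a context-aware recommendation signal of the kind he describes, folding local weather into item scoring, might look roughly like this:

```python
# Hypothetical illustration only — Pizza Hut has not published its model.
# A crude rule-based stand-in for the kind of signal Burns describes:
# score menu items for a customer using situational context such as
# the weather at their location.

def score_items(items, context):
    """Return item names sorted by a simple context-aware score.
    `items` maps item name -> base popularity; `context` carries
    situational features (here, just a coarse weather label)."""
    weather_boost = {
        # Invented weights: comfort food on cold days, light fare on hot.
        "cold": {"stuffed crust": 0.3, "wings": 0.2},
        "hot":  {"salad": 0.3, "iced drink": 0.5},
    }
    boosts = weather_boost.get(context.get("weather"), {})
    scored = {name: base + boosts.get(name, 0.0)
              for name, base in items.items()}
    return sorted(scored, key=scored.get, reverse=True)

menu = {"stuffed crust": 0.5, "wings": 0.4, "salad": 0.3, "iced drink": 0.2}
print(score_items(menu, {"weather": "hot"}))
```

A production system would learn these weights from order history rather than hand-code them, but the shape of the problem — base popularity adjusted by contextual features — is the same.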

VentureBeat: What are the more recent challenges the company’s been facing in terms of data and analytics? What have you been looking to execute or improve?

Burns: So we’re very much trying to improve our daily decision-making, and conversion rate optimization (CRO) is a huge part of that. We’re probably in the early to mid stages of a new CRO-led approach to designing our digital experiences. We have a lot of stakeholders, so we’ve had a lot of input from various corners on how the experiences should work and how we should go about building them. But we’re in a position now where we have to be really conscious of data and what the customer needs, and experimentation is a really big part of that for us now going forward. We’re becoming a lot more mature in making sure that we test and validate with data and user research.
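As a concrete aside, the statistical core of the CRO experimentation Burns describes — deciding whether a variant’s conversion rate genuinely beats the control’s — is commonly a two-proportion z-test. The sketch below is illustrative Python with invented numbers, not Pizza Hut’s tooling:

```python
# Minimal sketch of the statistics behind a CRO experiment: a
# two-proportion z-test comparing conversion in control vs. variant.
# Numbers are invented; real programs use experimentation platforms.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between
    control (a) and variant (b), using the pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# e.g. 5% conversion in control vs. 6% in variant, 10,000 users each;
# |z| > 1.96 corresponds to significance at the 95% level.
z = two_proportion_z(500, 10_000, 600, 10_000)
```

This is the “test and validate with data” step in miniature: an uplift only counts once the z-statistic (or an equivalent Bayesian measure) rules out random noise.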

VentureBeat: I know you tapped digital analytics company Contentsquare as part of these efforts. Why did you seek an outside partnership? What is it enabling you to do?

Burns: It’s been almost two years with them now, and I saw that the opportunity and the capabilities of what they were trying to do would just be so effective in getting to the bottom of the problems we were experiencing. We had a lot of what the problem was, but we didn’t have a why. And Contentsquare gave us the opportunity to kind of look right into the customers’ behaviors and get a far better understanding of what they’re doing on our platform. Now it primarily supports our CRO programs, but it also allows us to come up with and test ideas really efficiently. We can see customers do something unexpected or that might not be optimal and then run tests to see if we actually solved the problem.

VentureBeat: What more specifically are the capabilities you’re referring to? And can you give a specific example or anecdote of how you use the technology and the results you’ve seen?

Burns: Personally, I’m a big fan of Contentsquare’s page comparator, where you can superimpose the click rate, scroll rate, and attractiveness rate metrics on top of your experience. And one great example was we saw that customers were not immediately clicking on our deals page. And it became clear they weren’t sure where they needed to click, or even if the deal cards themselves were clickable. We hypothesized it was because we didn’t have a CTA (call to action), and so we ran a test and saw a phenomenal increase in the rate at which customers added those deals to their baskets. We estimated there was an almost $7 million to $8 million uplift in sales if we were to extrapolate the performance we saw over a 12-month period.
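The $7 million to $8 million figure is an extrapolation: take the lift observed during the test and project it across a year of baseline sales. With invented numbers (Pizza Hut’s actual figures aren’t public), the arithmetic looks like this:

```python
# Hypothetical numbers — illustrating how an observed test uplift is
# extrapolated to an annualized figure, as Burns describes.

def annualized_uplift(baseline_weekly_sales, observed_lift, weeks=52):
    """Extra revenue per year if the tested change's lift holds."""
    return baseline_weekly_sales * observed_lift * weeks

# e.g. $2.9M/week flowing through the tested deals page, with a 5%
# lift in completed orders attributable to the new CTA.
estimate = annualized_uplift(2_900_000, 0.05)
```

The obvious caveat, which is why such figures are framed as estimates: the extrapolation assumes the lift measured over a short test window persists for a full year.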

VentureBeat: What are the top considerations you found are important when applying data and analytics to customer experiences?

Burns: I think data is fantastic at telling you where you might have a problem and what customers are doing, but I believe you always have to supplement that with user research and insight to really get the full picture. So for any problem data analysis surfaces, we also want to attack it from a different angle. And if we see a problem in both the data and the research, that’s something we should look into solving.

VentureBeat: Is there anything else you think is important to know about all this?

Burns: One thing is the role of data within an organization, and the power tools such as Contentsquare can have to support the democratization and communication of data insight across maybe less data-focused teams, as well as leadership and other stakeholders. Traditionally, I think data people are not seen as being phenomenal communicators — you know, diving deep into a spreadsheet, coming up for short breaks, and then it’s back into the spreadsheets. And we’ve not always looked to hire or looked to focus on fantastic communicators within the data space. But I think that as data takes on a much more central role within organizations, it’s going to be really crucial for companies to think about their data people as strong communicators.





How to Turn On Bluetooth in Windows and Connect Your Devices

When you first bought your Windows PC, you were likely excited about its ability to connect to Bluetooth. But you still haven’t quite figured out how to activate the feature so you can pair your Bluetooth wireless mouse, keyboard, speaker, headset, headphones, or any other Bluetooth-capable device with your computer.

That Bluetooth problem stops now. To help you out, we’ll show you how to turn on Bluetooth using different methods, and we’ll guide you through pairing your Bluetooth device with your Windows computer or laptop.

Method 1: Windows settings

Before you can start using a Bluetooth device, you must get things configured. That means taking a trip to the Windows settings menu, which means the Control Panel on Windows 7 and the Settings app on Windows 10.

Windows 10

Step 1: On a Windows 10 computer, you’ll want to open the Action Center (it looks like a comment bubble on the right end of the Windows 10 taskbar) and then on the menu that appears, click on the All Settings button. Then, select Devices and click Bluetooth & Other Devices on the left-hand side.


Step 2: Within the Settings menu’s Bluetooth section, toggle Bluetooth to the On position. Once you’ve turned Bluetooth on, you can click Add Bluetooth or Other Device. After the Add a Device window pops up, click on Bluetooth, and Windows 10 will start searching for Bluetooth wireless devices.

Adding a device via Bluetooth on Windows 10 screenshot.
Mark Coppock/Digital Trends

Step 3: Assuming you kicked off your Bluetooth device’s pairing mode, you’ll see it show up in the list of available devices. Select it, and then continue as instructed. Once you’ve connected the device, it will show up in the list of connected peripherals.

Windows 7

Note that Windows 7 is considered “end of life” (EOL) and no longer receives Microsoft’s critical security updates. That makes it dangerous to use, as malware could infect your machine without you knowing.

Usually, once a Bluetooth adapter is installed and configured on a Windows 7 system, it’s automatically turned on and ready to use. In some PCs — like a notebook with built-in Bluetooth — there might be a keyboard shortcut that will turn Bluetooth on or off, or an icon might be present in the system tray that will perform the same function.

Different PCs and Bluetooth adapters can also include their own utilities for making a Bluetooth connection. Generally, however, users can hit the Start button and select Devices and Printers. From there, choose Add a Device, pick the option you want, and click Next. Pairing procedures vary widely between devices, so don’t forget to look up the specific directions for yours.

Method 2: Click the Bluetooth button in the Action Center

Selecting the Bluetooth icon in the Windows 10 Action Center menu screenshot.
Mark Coppock/Digital Trends

Windows 10 makes toggling Bluetooth on and off really easy. All you have to do is navigate to your Action Center and select the Bluetooth button (look for the icon). You’ll know the Bluetooth setting is off when the button is gray. If it’s on, it might read “not connected” if you’re not already connected to a Bluetooth device, or it will tell you that another device is currently attached and connected. 

You should only have to complete this pairing process once. Once your Bluetooth device and PC have a record of each other, they should connect automatically whenever the device is switched on and in range. When you’re not using Bluetooth, you can disable it to extend your battery life. Leaving Bluetooth on can also potentially expose your PC to hackers or other issues, so it’s a good idea to turn it off when you aren’t using it.

