
Ransomware gangs are evolving in new and dangerous ways

With digital technology growing at a rapid pace, ransomware gangs and their methods continue to advance at an aggressive rate as well.

This observation was detailed by cybersecurity and antivirus giant Kaspersky via a new report, highlighting fresh ransomware trends that have materialized throughout 2022.


Although leading cyber gangs have seen their operations cease following shutdowns, groups are still finding ways to develop dangerous strains of malware and ransomware. And their efforts are bearing fruit, Kaspersky stresses.

In particular, the company singled out brand new “cross-platform capabilities”, in addition to “updated business processes” and more.

Before we delve into the aforementioned aspects, it’s important to outline what ransomware is exactly. Simply put, it’s malicious software that locks access to files, folders, or the entire operating system of a PC.

Once it has successfully infiltrated its target, the ransomware group will then demand money from the victim in exchange for restoring access to their computer.


“Ransomware operations have come a long way — from clandestine and amateur beginnings to fully-fledged businesses with distinctive brands and styles that rival each other on the dark web. They find unusual ways to attack their victims or resort to newsjacking to make their attacks more relevant,” Kaspersky said.

The rise of cross-platform programming languages

As for the “prolific use” of cross-platform capabilities, Kaspersky points out that this method is particularly effective in damaging “as many systems as possible with the same malware by writing code that can be executed on several operating systems at once.”

Cross-platform programming languages such as Rust and Golang started picking up steam in the ransomware community during the latter stages of 2021.

For example, Conti, an ever-present name in the ransomware space, has designed a variant that is distributed via certain affiliates in order to target Linux-based systems.

BlackCat, labeled as a “next-generation” malware gang, was mentioned as another group — one that has apparently attacked more than 60 organizations since December 2021. Rust was its language of choice for developing malware strains.

Elsewhere, a group known as DeadBolt relied on Golang instead for its ransomware endeavors. This cyber gang is notorious for its attacks on QNAP network-attached storage devices, which are made by a Taiwanese company.

Ransomware groups are starting to evolve

Another trend Kaspersky detailed is that ransomware groups have not only been relying on more advanced tactics in their overall operations; throughout late 2021 and the opening stages of 2022, they’ve also “continued activities to facilitate their business processes, including regular rebranding to divert the attention of the authorities, as well as updating exfiltration tools.”

Certain groups have developed and started to use entire toolkits that “resembled ones from benign software companies.”

“Lockbit stands out as a remarkable example of a ransomware gang’s evolution. The organization boasts an array of improvements compared to its rivals, including regular updates and repairs to its infrastructure. It also first introduced StealBIT, a custom ransomware exfiltration tool that enables data exfiltration at the highest speeds ever – a sign of the group’s hard work put towards malware acceleration processes.”

Dmitry Galov, a senior security researcher at Kaspersky’s Global Research and Analysis Team, commented on the state of affairs with a summary:

“If last year we said ransomware is flourishing, this year it’s in full bloom. Although major ransomware groups from last year were forced to quit, new actors have popped up with never before seen techniques. Nevertheless, as ransomware threats evolve and expand, both technologically and geographically, they become more predictable, which helps us to better detect and defend against them.”

Google, meanwhile, echoed a similar sentiment when it analyzed the record number of zero-day hacks in 2021.

“Zero-day exploits are considered one of the most advanced attack methods an actor can use, so it would be easy to conclude that attackers must be using special tricks and attack surfaces. But instead, the zero-days we saw in 2021 generally followed the same bug patterns, attack surfaces, and exploit “shapes” previously seen in public research.”

Still, that’s not to say that malware and ransomware don’t pose a dangerous threat in today’s digitally driven world. In fact, ransomware in particular is an extremely lucrative business for cybercriminals: in 2021 alone, it accounted for $49.2 million in losses for victims.

The fact that malware is more commonplace than ever before is not going unnoticed by the leading technology giants.

Microsoft recently confirmed a new initiative where businesses can use the company’s in-house security services and experts to combat cybercrime and strengthen their digital security measures.



Six Easy Ways to Boost Your Mac’s Performance

Is your Mac feeling slow? If it is, then there’s no need to use extra tools to increase your MacBook’s performance.

Just like when it comes to increasing your Mac’s security, you can boost your MacBook’s performance by using some tools built right into MacOS.

Reduce visual effects

At the top of our list is a simple trick. MacOS has a lot of fancy visual effects, but these can take a toll on your system’s RAM and CPU if you have a lower-end machine. The effects could cause your Mac to feel a bit slow, but you can disable them with ease. Here’s how.

To begin, click the Apple button in the menu bar and choose System Preferences. After that, choose Dock & Menu Bar. You’ll then be able to uncheck the boxes for Animate opening applications and Automatically hide and show the Dock. Finally, change the Minimize windows using setting from Genie effect to Scale effect. You might want to turn off magnification in the Dock, too, just to be safe.

Remove unused apps

Removing unused Mac apps.

Next up is a simple trick involving removing unused apps. Unused apps can take up space on your Mac’s hard drive or SSD for no reason, leaving you less room for photos and for other apps to store and cache their files. Deleting an app isn’t as simple as it is in Windows, though, so follow our steps below.

First off, click the Apple Menu and choose About This Mac. Click Storage, followed by Manage, and then choose Applications from the sidebar. Your apps will show up in the list, and you can click an app, followed by Delete at the bottom, to remove it.

In addition to removing unused apps, we also suggest using Apple’s own storage monitor to delete large files. Just click the Apple menu, then choose About this Mac, then Storage followed by Manage. There will be a link that says Reduce Clutter. Click this to delete large files and apps from your Mac.

Change or remove startup programs

Removing startup programs on MacOS.

Is your Mac taking a while to boot? The problem might be some startup applications that are holding your Mac back as soon as you push the power button and log in. For the best experience, we suggest removing startup apps, to ensure you always have a clean boot experience.

To remove startup apps in MacOS, go to System Preferences from the Apple Menu, then go to Users & Groups. Next, choose Login Items. From there, click any unwanted app in the list and then click the minus button to remove it.

Reindex Spotlight

Reindexing Spotlight in MacOS.

After you’ve installed a major Mac update, or a security update, your Mac might be feeling a little slow. Usually, this is because Spotlight search is reindexing your hard drive or SSD. In some cases, it might end up getting stuck, so to speed up your Mac, you’ll need to manually reindex again.

To reindex Spotlight, open System Preferences > Spotlight. From there, look for the Privacy tab. Click Privacy, add Macintosh HD to the list, and then remove it with the minus button. Indexing will start over from scratch, and once it finishes, your Mac should feel a bit faster.
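
If you’re comfortable in the Terminal, there’s also a command-line route to the same result. The short Python sketch below simply shells out to Apple’s mdutil utility, which erases and rebuilds the Spotlight index; the wrapper itself (the function name and the sudo call) is our own illustration, needs administrator rights, and only applies on a Mac.

```python
# Minimal sketch: the command-line equivalent of the Privacy-tab trick above.
# `mdutil -E /` tells Spotlight to erase and rebuild its index for the boot
# volume. Requires macOS and administrator rights.
import subprocess

def reindex_spotlight(volume: str = "/") -> None:
    """Erase and rebuild the Spotlight index for the given volume."""
    subprocess.run(["sudo", "mdutil", "-E", volume], check=True)

if __name__ == "__main__":
    reindex_spotlight()
```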

Update your Mac Software

Updating a Mac via Software Update.

This one is obvious, but we also suggest keeping your Mac updated to ensure it isn’t running slow. Of course, not every Mac model is still supported by Apple, so depending on which model you have, you might no longer be receiving firmware and other security updates (which are known to boost Mac performance). You can check for updates at any time by going to the Apple Menu and choosing About This Mac > Software Update.

If your Mac is no longer getting security or software updates, then you might want to physically update it. By that, we mean buy a new Mac. Newer Macs are known to be more efficient and can be quite affordable. We have a buying guide for every type of Mac, from the M1 Mac and MacBook Air to the MacBook Pro and the iMac.

Reset NVRAM

MacBook Keyboard.

Our last tip is a bit more technical, and it’s not something we recommend for novice users. However, resetting the nonvolatile random-access memory (NVRAM) is a trick we recommend for situations where a Mac is behaving oddly. The NVRAM holds display settings, the time zone setting, speaker volume, and the startup volume choice. Again, it’s only recommended for advanced users, so try it at your own risk.

According to Apple, you can reset the NVRAM by shutting down your Mac and turning it on while holding Option, Command, P, and R together on your keyboard for about 20 seconds, then releasing the keys. On Mac computers that play a startup sound, you can release the keys after the second startup sound. And on Mac computers that have the Apple T2 Security Chip, you can release the keys after the Apple logo appears and disappears for the second time.

Another trick involves resetting the System Management Controller (SMC). However, that’s reserved for issues with sleep, wake, power, charging your Mac notebook’s battery, or other power-related symptoms. We won’t get into it here; see Apple’s help article for more.



3 Ways to Clean a MacBook Screen

MacBooks are beautiful devices, and since you are shelling out a pretty penny for them, you will want to keep them clean. This is especially true of the display. Apple’s retina displays are beautiful, and the mini-LED displays on the new MacBook Pros are a sight to behold.

That’s why it’s extra important to clean them regularly. But how should you go about it? Here are three ways to clean a MacBook screen without damaging it.

What you shouldn’t do

Before we dive into how to properly clean a MacBook screen, let’s first go over what you shouldn’t do. Firstly, you will want to avoid anything overly abrasive. That includes scouring pads, but it also means anything with paper fibers, so no paper towels or toilet tissue. Any of those have the potential to scratch your screen and cause permanent damage.

You also want to avoid using excessive cleaning fluid, and don’t spray anything directly onto the screen. Lastly, make sure not to get anything into the ports.

With that out of the way, here are three ways to clean your MacBook screen.

The old-fashioned way

This is the simple, cheap way to clean your screen, and it’s the most common way to clean any laptop screen. It gets the job done. Before you get going, though, make sure to power off the MacBook.

Next, take a microfiber or lint-free cloth and polish the screen by moving your hand in circular motions to pick up dirt and dust. Wiping back and forth can drag particles across the outer coating on the display and create scratches.

Once all the dirt is free, you can dampen a cloth in distilled or de-ionized water and gently buff out tougher spots. Again, use circular motions, and make sure the cloth isn’t overly wet.

Once you have wiped down the screen, take another smooth cloth and dry the screen.

After these three steps, you should have a far cleaner screen, free of any dirt or buildup.

The fancy way

If you want to clean your MacBook screen in one step, you can always get a special spray designed for LCD screens.

Just spritz a microfiber or lint-free cloth with the spray and polish the screen. It may take several passes to get everything if your screen is dirty, but there is no need to swap out wet or dry cloths — it’ll dry just fine on its own without streaks. It’s pretty simple.

Again, just make sure that you never spray the screen directly, and avoid oversaturating the cloth.

The really fancy way

A product image of the Apple polishing cloth against a white background

If you want to go the officially-sanctioned route, Apple has its own ungodly expensive “polishing cloth.” It’s a microfiber cloth that costs you $20. It’s also insanely popular, because when you’ve already spent thousands on a MacBook, why not have a branded cleaning cloth to go with it?

Apple recommends using only a soft, non-abrasive cloth to clean its screens, and that’s exactly what this is. If you’re an Apple fan, this is obviously the only way to clean your screen.



5 Easy Ways to Increase Security in Google Chrome

If you’re one of many people who use Chrome as your default web browser, then you might want to take some steps to ensure that it’s extra secure. This can help you in a world where hackers are always after passwords and can easily spoof websites to look like the real thing.

Well, Google has a lot of tools built right into Chrome that can help with that protection. From Safe Browsing to encrypted passwords and more, we’ve got you covered with five easy ways to dramatically increase security in Google Chrome.

Change your Safe Browsing settings

Our first tip is one that’s enabled by default but can be tweaked for enhanced security. Click the Three Dots at the top right of your screen, choose Settings, and then head to Privacy and Security followed by Security. There will be a section for Safe Browsing.

Under this section, you’ll want to choose the Enhanced Protection option. You probably have Standard Protection on by default, but switching over to Enhanced can give you better protection against dangerous websites, downloads, and extensions, and Chrome will even warn you about password breaches. Just keep in mind that Chrome might send your URLs to Safe Browsing to check them against known threats, and that data is temporarily linked to your Google Account.

Encrypt your passwords stored in your Google Account

Changing the encryption settings in Chrome.

Next up is another simple tip in regard to encryption. Once you visit Chrome’s Settings menu, click the You and Google option at the top of the screen. You’ll need to be logged into a Google Account for this.

Under Sync, choose Encryption Options. Look for the option that says Encrypt Synced Passwords With Your Google Account. This option saves your passwords on Google’s servers behind Google’s own encryption. That makes it harder for a hacker to access your passwords, as they are encrypted in transit, but Google itself may still be able to read them.

However, you can also choose the second option to improve security a bit more. With a sync passphrase, you can use the encryption without letting Google read your passwords, as only you hold the key needed to unlock them. In effect, the data is encrypted end to end and can only be read on your own devices.

This second route, though, complicates sync a bit. You’ll need your passphrase whenever you turn on sync somewhere new, and you’ll have to enter it on any device where sync is already turned on. Your feed also won’t show suggestions based on sites you visit in Chrome, and you can’t view your passwords online or use Smart Lock for Passwords. History won’t sync, either.

Turn off FLoC

The FLoC opt out page in Chrome.

You’ve probably heard about FLoC. This controversial Chrome feature essentially goes through your browsing history to see which large group of people, or “cohort,” your recent browsing activity is most similar to. It lets advertisers select ads for that group as an alternative to cookies, but some worry that it can be used to gather more data about you or turn Google into more of a monopoly with control over how advertisers can target users.

Facing pushback around FLoC, Google enabled a new Privacy Sandbox that you can visit to disable the feature. Simply visit Chrome’s Settings, go to Privacy and Security, and click the link to Privacy Sandbox. From there, you can flip a switch to disable Sandbox trials, as well as FLoC.

Always use HTTPS

Chrome's HTTPS section in security.

Fourth on our list is a simple trick to ensure you’re only connecting to websites securely. Many websites used to depend on the Hypertext Transfer Protocol (HTTP). The trouble is that HTTP sends your browser’s request to view a website in plain text, so any hacker who can monitor the connection can read the request. This is risky in cases where you enter a password or a credit card number. Hypertext Transfer Protocol Secure (HTTPS) fixes that by encrypting HTTP requests and responses, so anyone monitoring the connection sees only scrambled characters.

In Google Chrome, you should make sure that the browser is set to always use HTTPS. If you visit an HTTP website, you’ll get warned that it’s not secure. To do this, click the Privacy and Security section in Settings, and look for the Always Use Secure Connection option.
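
For the curious, here’s roughly what “upgrading” a connection means in practice. The tiny Python sketch below is our own illustration, not anything from Chrome: it rewrites an http:// address to https:// before a request would be made, on the assumption that the site actually serves HTTPS at the same address.

```python
# Illustrative sketch of an HTTP-to-HTTPS upgrade, similar in spirit to
# Chrome's always-use-secure-connections behavior.
from urllib.parse import urlparse, urlunparse

def force_https(url: str) -> str:
    """Rewrite an http:// URL to https:// (assumes the host serves HTTPS)."""
    parts = urlparse(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

print(force_https("http://example.com/login"))  # https://example.com/login
```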

Watch your extensions

Extension options in Google Chrome.

Extensions are a great addition to Chrome, as they can help correct your spelling and grammar, block ads, and more. Not all extensions are good, though. If you’re not careful, extensions can end up hijacking your browser, harvesting your private information, and even spying on you. It’s always good practice to ensure that any extensions you add come from trusted sources only.

To manage extensions in Chrome, visit chrome://extensions/ in the address bar. From here, you can click the Details button to see each extension’s details and permissions and to go back to its listing in the Chrome Web Store. You can also remove any extension you don’t trust.



Science, applied: 3 ways AI and ML are advancing the insurance industry



This article was written by Kea Goins, a Marketing Coordinator at Valkyrie.

From maximizing advertisement relevance to customizing user experience, the benefits of applied sciences and advanced data analytics have become more apparent as industries adopt data-driven approaches to create new competitive advantages. In this article, we focus on companies in the insurance industry that are implementing applications of data science to deliver efficient, risk-adjusted solutions by detecting fraudulent activity and providing a personalized customer experience. The best place to start is by looking at some of the technological trends being used by insurance companies today.

Growing Trends in the Insurance Industry

Customer Experience & Coverage Personalization

With access to a customer’s behavioral, geographic, social, and account data, AI-enabled chatbots can provide seamless, automated, and personalized buying experiences. These bots are quickly becoming the industry standard. According to a 2020 MIT Technology Review survey of 1,004 business leaders, customer service (via chatbots) is the leading application of AI being deployed today, and 73% of respondents indicated it would still be the leading use of AI in their companies in 2022.

Behavioral-Based Policy Pricing

In the auto insurance industry, we are seeing ubiquitous IoT sensors provide personalized data to pricing platforms, allowing safer drivers to be rewarded by paying less for auto insurance (known as usage-based insurance). These techniques have expanded beyond auto insurance, and we are now seeing health and dental insurance companies use IoT sensors to give people who maintain a healthier lifestyle a lower rate for insurance. A recent article highlighted dental insurance company Beam Digital for its use of IoT technologies. The company provides a smart toothbrush to every customer, monitors their oral health, and uses this information to support a dental insurance plan. Beam sends customers notices and encouragement if their brushing habits fall short of the required standard, and it hopes this will result in improved dental hygiene and reduced premiums.

Faster, Customized Claims Settlement

Online interfaces and computer-vision-enabled virtual claims adjusters now make it more streamlined and efficient to settle and pay claims following an accident, while simultaneously decreasing the likelihood of fraud. Customers are now also able to choose the preferred providers whose premiums will be used to pay their claims (known as peer-to-peer, or P2P, insurance). Data science applications have enabled the higher-fidelity predictions this requires, working from events in real time and from large datasets rather than samples.

Industry Leaders That Are Adopting AI/ML

With advancements in AI/ML applications, more insurance companies are now actively leveraging preexisting data to increase the depth of understanding they have of their customers. Companies like State Farm, Liberty Mutual, Allstate, and Progressive are among a few of the industry leaders that are adopting AI and ML applications into their business model.

Allstate Insurance

Greg Firestone, Vice President of Data Science at Allstate Insurance, explained in a recent interview why his company began leveraging anti-fraud technologies to mitigate fraudulent claims. “It’s very hard to measure sometimes, but it’s happening,” Firestone said. “The best prevention is really being aggressive: using AI and data to find fraud. Data is your friend in this regard. Fraud is a problem that impacts all insurance companies, and we need to focus on it and make sure the fraudsters realize that we’re not easy marks.”

The company leverages an AI-based solution to monitor and flag suspicious claims; however, it understands that keeping an eye on future fraud trends will still require a human touch. Large insurance companies process thousands of claims daily, making it impossible for a team of human analysts to thoroughly review each one for fraudulent activity. Thus, many insurers use advanced AI systems to automate this triage, reserving their teams for the claims the AI has flagged as suspicious.
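
Allstate hasn’t published how its system works, but the general pattern it describes, automatically surfacing the small share of claims that look unusual so humans can review them, can be sketched with an off-the-shelf anomaly detector. In the hypothetical Python example below, the claim features and figures are invented, and scikit-learn’s IsolationForest simply stands in for whatever proprietary scoring an insurer actually runs.

```python
# Illustrative sketch only: a generic anomaly detector standing in for a
# proprietary fraud-scoring system. All data here is made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical claim features: [claim amount, days since policy start,
# number of prior claims]
claims = np.array([
    [1_200,  400, 0],
    [  950,  720, 1],
    [1_100,  365, 0],
    [48_000,  12, 4],   # unusually large claim on a very new policy
    [1_300,  540, 1],
])

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(claims)   # -1 means flagged as anomalous

for claim, label in zip(claims, labels):
    if label == -1:
        print("Flag for human review:", claim)
```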

Liberty Mutual Insurance

Last year, in an official press release, Liberty Mutual announced a strategic relationship with Groundspeed Analytics, Inc. to cut the time to extract submission data by 50% through the use of Artificial Intelligence (AI). “Properly evaluating customer submission documents is one of the most critical aspects of the underwriting process, and current methods don’t take advantage of the value locked in these documents.” By leveraging the available data in submission documents in a “data first” approach, Groundspeed is helping Liberty Mutual to make better risk selections, improve time-to-quote, and deliver better customer service.

Progressive Insurance

In recent news, Progressive Insurance is reportedly leveraging machine learning algorithms for predictive analytics based on data collected from customer drivers. Progressive claims that “its telematics (integration of telecommunications and IT to operate remote devices over a network) mobile app, Snapshot, has collected 14 billion miles of driving data.” By feeding in labeled data that connects accidents with the corresponding driving data, the insurer can identify patterns and predict a new customer’s likelihood of causing accidents simply by gathering hours of their driving data. This data collection could also encourage drivers to monitor and optimize their driving habits, and possibly reduce their number of accidents. For the insurer, expanding its data science capabilities provides a clearer view of potential return and risk.
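
Progressive hasn’t disclosed its models either, but the workflow described above, training on driving data labeled with past accidents and then scoring new drivers, maps onto a basic supervised classifier. Everything in the sketch below (the telematics-style features, the numbers, the choice of logistic regression) is an illustrative assumption rather than the insurer’s actual system.

```python
# Illustrative sketch: a toy classifier trained on labeled telematics-style
# features. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-driver features: [hard brakes per 100 miles,
# % of miles driven at night, average mph over the speed limit]
X = np.array([
    [0.5,  5,  1],
    [4.0, 30,  8],
    [1.0, 10,  2],
    [6.0, 45, 12],
    [0.8,  8,  0],
    [3.5, 25,  6],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = had an at-fault accident

model = LogisticRegression().fit(X, y)

new_driver = np.array([[2.0, 20, 4]])
print("Estimated accident probability:", model.predict_proba(new_driver)[0, 1])
```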

Customer Acquisition Through Predictive Analytics

Traditionally, insurance agents have relied on relationship-selling supported by lead generation tools. Today, new tools exist to help insurance carriers start to predict customer needs for insurance products. These tools use predictive analytics to look for “active signals” of customer intent and then tie in relevant insurance products. For example, knowing that a construction company has just won a large contract is a good signal that they might want additional umbrella insurance. Also, knowing that a business has just secured its first institutional round of funding is a good signal that the firm needs Directors & Officers insurance. Broadly speaking, algorithms use these predictive signals to look for specific events, or business life cycle activities (e.g. starting a business), to offer new and relevant insurance products that fit each customer’s needs. Moreover, algorithms can be used to identify other related businesses that have similar characteristics (e.g. revenue size, industry type, location) to an insurance company’s existing customer base.
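
The “similar businesses” part of that idea is, at its core, a nearest-neighbor lookup over firmographic features. The Python sketch below shows one minimal way to do it; the customer and prospect data are made up, and a real carrier would use far richer signals and a proper feature pipeline.

```python
# Illustrative sketch: finding prospects that "look like" existing customers.
# All data here is invented.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [annual revenue ($M), number of employees]
existing_customers = np.array([
    [ 2.0,  15],
    [10.0,  80],
    [ 1.5,  10],
    [25.0, 200],
])
prospects = np.array([
    [ 9.0, 75],
    [ 0.3,  2],
])

scaler = StandardScaler().fit(existing_customers)
index = NearestNeighbors(n_neighbors=1).fit(scaler.transform(existing_customers))

distances, _ = index.kneighbors(scaler.transform(prospects))
for prospect, dist in zip(prospects, distances[:, 0]):
    print(prospect, "distance to closest existing customer:", round(float(dist), 2))
```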

Leveraging AI and ML capabilities for gathering and analyzing social, historical, and behavioral data allows companies to gain a more accurate understanding of their customers and provide better products and services. The three industry leaders mentioned in this article are just a few of many companies harnessing the power of applied science capabilities to better understand their customers and their data. Through more precise risk prediction, personalized customer policies, and automated settlements, both insurance providers and customers can benefit from the impact of science applied to technologies in the insurance industry.

Kea Goins is a Marketing Coordinator at Valkyrie.

This story originally appeared on www.valkyrie.ai. Copyright 2021



Five ways Valve’s Steam Deck seems too good to be true

Today we’re looking at the Valve Steam Deck, a gaming machine with built-in game controls that runs SteamOS. This device’s industrial design owes a debt to several machines that have found some measure of success in the past: the Sega Game Gear, Razer’s Project Fiona, and the most obviously similar recent powerhouse, the Nintendo Switch. But might this be a device that’s too idealistic to make it into gamers’ hands in the real world?

5. Could this be the one that sticks?

If you take a peek at Razer’s Project Fiona, you’ll find a gaming tablet with controllers at its sides. That machine ran a Steam-style game manager on top of Windows and never really caught on with the gaming masses. It may have been released a bit earlier than it should have been, back in 2012. Something about that combination of software and hardware didn’t quite capture the public’s imagination, or at least not well enough to make it viable for more than one attempt.


The Steam Deck concept, complete with remote game streaming, could also be compared in part to an earlier machine-and-software combo: the NVIDIA SHIELD handheld gaming system. We were remotely streaming AAA games with that device back in 2014, and it could, and still does, stream games from a user’s Steam account.

Something about the idea that one would have a smart game streaming device with built-in controllers did not work for more than a single generation with major gaming companies NVIDIA and Razer. What makes the Steam Deck different in 2021?

4. Were Steam Machines just a dream?

It was October of 2013 when Valve started talking about Steam Machines. Those were computers/consoles that’d run SteamOS. They went so far as to reveal and release a few – from top-tier PC manufacturers. Over the next half-decade, we remained hopeful that the Steam Machine would… somehow… keep on trucking. But by April of 2018, it was entirely clear that Steam Machines weren’t going to be what Valve envisioned.

Why should a device sold by Valve, running SteamOS here in late 2021 and early 2022 be any different from the Steam Machines we never really saw take hold over the past near-decade? Will the built-in display make a difference? Will it make a difference that Valve itself is overseeing manufacturing and effectively promising quality software?

3. Can this device really run whatever?

It’s difficult to imagine a new device succeeding when it runs software that’s different from the operating systems we use all the time, every single day of the year. And when that software requires its maker to keep a keen eye on development, investing in the device requires faith in the software’s creator.

On the other hand, this device isn’t particularly locked down. As noted by official SteamWorks documentation, “Steam Deck is a PC, and players will be able to install whatever they like, including other OSes.” As such, you could potentially install whatever other game or app stores you like; it’s your device, and you can do what you want with it. Will this sort of freedom in a piece of hardware like this change the way competing gaming devices are expected to do business?

2. Will developers use all of these buttons?

Early Steam Deck developer kits look a whole lot like the imagery that appears on the main Steam Deck webpage. This indicates, but does not guarantee, that this is what the machine will look like when it’s ready for public use. UPDATE: Valve has confirmed that “functionally”, the Steam Deck Developer Kit EV2 (engineering verification test build) will be “identical to the Steam Decks that will be shipping to customers later this year.”

The Steam Deck has several hardware controls of its own, and, thanks to extensive work on controller integration, it’ll work with third-party controllers, too. The controllers on the sides of the Steam Deck are not removable, but the entire device can dock and output a video signal, as it is effectively a tiny gaming PC. There’ll eventually be an “official dock” that Valve has indicated will include ports like USB-C, HDMI, DisplayPort, Ethernet, and full-sized USB.

Controls on the Steam Deck include a 7-inch touchscreen, trackpads (effectively Steam Controller touchpads), and a gyroscope (detecting the orientation of the machine as you move it in real space), as well as joysticks, a directional pad, XYAB buttons, triggers, and grip buttons. The grip buttons sit under the rim, on the back, for the fingers that grip the sides of the machine. Will games actually make use of all these controls, or will some remain unused while others are used non-stop?

1. Could the price be right?

Steam Deck’s most basic iteration has a listed price of $399 USD. That includes 64GB of eMMC internal storage and a carrying case. Every model in the first wave of Steam Decks has the same processor: a custom AMD APU pairing Zen 2 CPU cores with an RDNA 2 GPU, alongside 16GB of LPDDR5 RAM. All models include a microSD card slot for storage expansion.

The middle-tier version of Valve’s Steam Deck has 256 GB NVMe SSD (PCIe Gen 3 x4) storage (that’s faster than the eMMC in the most basic model). This version also has the basic carrying case, and adds an “Exclusive Steam Community profile bundle”. The middle-tier model will cost users around $529 USD.

The most extravagant version has a price of $649 USD and includes a 512GB “high-speed NVMe SSD” (PCIe Gen 3 x4), the fastest storage of the three. This version of the Steam Deck comes with an “Exclusive Steam Community profile” as well, plus an exclusive virtual keyboard theme and an exclusive carrying case. The most expensive version also gets special “Premium anti-glare etched glass” over its display, so you’re getting one hardware upgrade in addition to the faster (and larger amount of) storage.

Valve’s Steam Deck has a release date of December, 2021. That’s their “starts shipping” date, anyway. They’ve indicated that reservations open on July 16, 2021, at 10AM PDT. If this machine is everything Valve professes it will be, the least expensive version of the Steam Deck seems like a winning proposition. Avoiding the whole “different versions of the machine have different capabilities” mess that is Nintendo’s Switch Lite – and the like – seems like a positive move, too.

SIDENOTE: Should the display be better?

The touchscreen display panel on this machine is a 7-inch, 1280 x 800 pixel “optically bonded LCD for enhanced readability” with a 60Hz refresh rate. That’s clearly aimed at gamers looking to make the most of the platform, rather than at people who tend to buy whatever smartphone has the most extravagant display panel. This device isn’t running the 120Hz refresh rate we’ve seen on some recent smartphones and tablets, and it won’t be as bright or sharp as the display on a high-end Samsung slate or an iPad Pro. But those devices aren’t really competing with the Steam Deck, are they?

Much like the Nintendo Switch’s, the on-device display represents just one of several ways to play games on the machine. It can be plugged into a bigger display and used like a gaming console, or plugged into a PC monitor and used as a desktop of sorts. Does that mean the display on the device doesn’t need to be as high-end as a dedicated tablet’s?



21 ways medical digital twins will transform health care



The health care industry is starting to adopt digital twins to improve personalized medicine, health care organization performance, and new medicines and devices. Although simulations have been around for some time, today’s medical digital twins represent an important new take. These digital twins can create useful models based on information from wearable devices, omics, and patient records to connect the dots across processes that span patients, doctors, and health care organizations, as well as drug and device manufacturers.

It is still early days, but the field of digital twins is expanding quickly based on advances in real-time data feeds, machine learning, and AR/VR. As a result, digital twins could dramatically shift how we diagnose and treat patients, and help realign incentives for improving health. Some proponents liken the current state of digital twins to where the human genome project was 20 years ago, and it may require a similar large-scale effort to take shape fully. A team of Swedish researchers recently wrote, “Given the importance of the medical problem, the potential of digital twins merits concerted research efforts on a scale similar to those involved in the HGP.”

While such a “moon shot” effort may not be immediately underway, there are many indicators that digital twins are gaining traction in medicine. Presented here are 21 ways digital twins are starting to shape health care today, broken roughly into personalized medicine, improving health care organizations, and drug and medical devices and development. In fact, many types of digital twins span multiple use cases and even categories; it is these cross-domain use-cases that form a major strength of digital twins.

Personalized medicine

Digital twins show tremendous promise in making it easier to customize medical treatments to individuals based on their unique genetic makeup, anatomy, behavior, and other factors. As a result, researchers are starting to call on the medical community to collaborate on scaling digital twins from one-off projects to mass personalization platforms on par with today’s advanced customer data platforms.

1. Virtual organs

Several vendors have been working on virtual hearts that can be customized to individual patients and updated to understand the progression of diseases over time or the response to new drugs, treatments, or surgical interventions. Philips’ HeartModel simulates a virtual heart, starting with the company’s ultrasound equipment. Siemens Healthineers has been working on a digital twin of the heart to improve drug treatment and simulate cardiac catheter interventions. European startup FEops has already received regulatory approval and commercialized the FEops Heartguide platform. It combines a patient-specific replica of the heart with AI-enabled anatomical analysis to improve the study and treatment of structural heart diseases.

Dassault launched its Living Heart Project in 2014 to crowdsource a virtual twin of the human heart. The project has evolved as an open source collaboration among medical researchers, surgeons, medical device manufacturers, and drug companies. Meanwhile, the company’s Living Brain project is guiding epilepsy treatment and tracking the progression of neurodegenerative diseases. The company has organized similar efforts for lungs, knees, eyes, and other systems.

“This is a missing scientific foundation for digital health able to power technologies such as AI and VR and usher in a new era of innovation,” Dassault senior director of virtual human modeling Steve Levine told VentureBeat. He added that this “could have an even greater impact on society than what we have seen in telecommunications.”

2. Genomic medicine

Swedish researchers have been mapping mouse RNA into a digital twin that can help predict the effect of different types and doses of arthritis drugs. The goal is to personalize human diagnosis and treatment using RNA. The researchers observed that medication does not work about 40% to 70% of the time. Similar techniques are also mapping the characteristics of human T-cells that play a crucial role in immune defense. These maps can help diagnose many common diseases earlier, when treatment is more effective and cheaper.

3. Personalized health information

The pandemic has helped fuel the growth of digital health services that help people assess and address simple medical conditions using AI. For example, Babylon Health‘s Healthcheck App captures health data into digital twins. It works with manually entered data such as health histories, a mood tracker, symptom tracker, and automatic capture from fitness devices and wearables like the Apple Watch. The digital twin can provide basic front-line information or help guide priorities and interactions with doctors to address more severe or persistent conditions.

4. Customize drug treatment

The Empa research center in Switzerland is working on digital twins to optimize drug dosage for people afflicted by chronic pain. Characteristics such as age and lifestyle help customize the digital twin to help predict the effects of pain medications. In addition, patient reports about the effectiveness of different dosages calibrate digital twin accuracy.

5. Scanning the whole body

Most approaches to digital twins build on existing equipment to capture the appropriate data, while Q Bio’s new Gemini Digital Twin platform starts with a whole-body scan. The company claims to capture a whole-body scan in 15 minutes without radiation or breath holds, using advanced computational physics models that are more precise than conventional MRI for many diagnoses. The company has received over $80 million from Andreessen Horowitz, Kaiser Foundation Hospitals, and others. Q Bio is also developing integrations to improve these models using data from genetics, chemistry, anatomy, lifestyle, and medical history.

6. Planning surgery

A Boston hospital has been working with Dassault’s digital heart to improve surgical procedure planning and assess the outcomes afterward. The digital twins also help them to generate the shape of a cuff between the heart and arteries.

Sim&Cure’s Sim&Size is a digital twin to help brain surgeons treat aneurysms using simulations to improve patient safety. Aneurysms are enlarged blood vessels that can result in clots or strokes. These digital twins can improve the ability to plan and execute less invasive surgery using catheters to install unique implants. Data from individual patients helps customize simulations that run on an embedded simulation package from Ansys.  Preliminary results have dramatically reduced the need for follow-up surgery.

Improving health care organizations

Digital twins also show promise in improving the way health care organizations deliver care. Gartner coined the term “digital twin of the organization” to describe this process of modeling how an organization operates in order to improve its underlying processes.

In most industries, this can start by using process mining to discover variations in business processes. New health care-specific tools can complement these techniques.

7. Improving caregiver experience

Digital twins can also help caregivers capture and find information shared across physicians and multiple specialists. John Snow Labs CTO David Talby said, “We’re generating more data than ever before, and no one has time to sort through it all.” For example, if a person sees their regular primary care physician, they will have a baseline understanding of the patient, their medical history, and medications. If the same patient sees a specialist, they may be asked many of the same repetitive questions.

A digital twin can model the patient and then use technologies like NLP to understand all of the data and cut through the noise to summarize what’s going on. This saves time and improves the accuracy of capturing and presenting information like specific medications, health conditions, and more details that providers need to know in context to make clinical decisions.

8. Driving efficiency

The GE Healthcare Command Center is a major initiative to virtualize hospitals and test the impact of various decisions on overall organizational performance. It includes modules for evaluating changes in operational strategy, capacities, staffing, and care delivery models to objectively determine which actions to take. For example, GE has developed modules to estimate the impact of bed configurations on care levels, optimize surgical schedules, improve facility design, and optimize staff levels. This allows managers to test various ideas without having to run a pilot. Dozens of organizations are already using the platform, GE said.

9. Shrinking critical treatment window

Siemens Healthineers has been working with the Medical University of South Carolina to improve the hospital’s daily routine through workflow analysis, system redesign, and process improvement methodologies. For example, they are working to reduce the time to treat stroke patients. This is important since early treatment is critical but requires the coordination of several processes to perform smoothly.

10. Value-based health care

The rising cost of health care has many nations exploring new incentive models to better align new drugs, interventions, and treatments with outcomes. Value-based health care is one approach that is growing in popularity. The basic idea is that participants, like drug companies, will only get compensation proportionate to their impact on the outcomes. This will require the development of new types of relationships across multiple players in the health delivery systems. Digital twins could provide the enabling infrastructure for organizing the details for crafting these new types of arrangements.

11. Supply chain resilience

The pandemic illustrated how brittle modern supply chains can be. Health care organizations immediately faced shortages of essential personal protective equipment owing to shutdowns and restrictions in countries like China. Digital twins of a supply chain can help health care organizations model their supply chain relationships to better understand how to plan around new events, shutdowns, or shortages. This can boost planning and negotiations with government officials in a pinch, as was the case in the recent pandemic. A recent Accenture survey found that 87% of health care executives say digital twins are becoming essential to their organization’s ability to collaborate in strategic ecosystem partnerships.

12. Faster hospital construction

Digital twins could also help streamline construction of medical facilities required to keep up with rapid changes, such as were seen in the pandemic. Atlas Construction developed a digital twin platform to help organize all the details for health care construction. The project was inspired long before the pandemic when Atlas founder Paul Teschner saw how hard it was to get new facilities built in remote areas of the world. The platform helps organize design, procurement, and construction processes. It is built on top of the Oracle Cloud platform and Primavera Unifier asset lifecycle management service.

13. Streamlining call center interactions

Digital twins can make it easier for customer service agents to understand and communicate with patients. For example, a large insurance provider used a TigerGraph graph database to integrate data from over 200 sources to create a full longitudinal health history of every member. “This level of detail paints a clear picture of the members current and historical medical situation,” said TigerGraph health care industry practice lead Andrew Anderson.

A holistic view of all diagnoses, claims, prescriptions, refills, follow-up visits, and outstanding claims reduced call handling time by 10%, TigerGraph claimed, resulting in over $100 million in estimated savings. Shorter but more relevant conversations between agents and members have also increased Net Promoter Score and lowered churn.

Drug and medical device development

There are many ways that digital twins can improve the design, development, testing, and monitoring of new medical devices and drugs. The U.S. FDA has launched a significant program to drive the adoption of various types of digital approaches. Regulators in the U.S. and Europe are also identifying frameworks for including modeling and simulation as sources of evidence in new drug and device approvals.

14. Software-as-a-medical device

The FDA is creating the regulatory framework to allow companies to certify and sell software-as-a-medical device. The core idea is to generate a patient-specific digital twin from different data sources, including lab tests, ultrasound, imaging devices, and genetic tests. In addition, digital twins can also help optimize the software in medical devices such as pacemakers, automated insulin pumps, and novel brain treatments.

15. Classifying drug risks

Pharmaceutical researchers are using digital twins to explore the heart risks of various drugs. This could help improve drug safety of individual drugs and drug combinations more cost-effectively than through manual testing. They have built a basic model for 23 drugs. Extending this model could help reduce the estimated $2.5 billion required to design, test, get approved, and launch new drugs.

16. Simulating new production lines

Siemens worked with several vaccine manufacturers to design and test various vaccine production line configurations. New mRNA vaccines are fragile and must be assembled on microfluidic production lines that precisely combine nanoscale particles. Digital twins allowed the manufacturers to design and validate the production equipment, scale up the processes, and cut launch timelines from a year down to five months.

17. Improve device uptime

Philips has launched a predictive maintenance program that collates data from over 15,000 medical imaging devices. The company is hoping that digital twins could improve uptime and help their engineers customize new equipment for the needs of different customers. In addition, it is hoping to apply similar principles across all of its medical equipment.

18. Post-market surveillance

Regulators are placing growing emphasis on device makers monitoring the performance of their equipment after sale, as part of a process called post-market surveillance. This requires either staffing expensive specialists to maintain the equipment or embedding digital twin capabilities into the equipment itself. For example, Sysmex worked with PTC to incorporate performance testing into its blood analyzer to receive a waiver from these new requirements, PTC CTO Steve Dertien told VentureBeat. This opened the market for smaller clinical settings closer to patients, which can speed diagnosis.

19. Simulating human variability

Skeletons and atlases commonly depict the perfect human. However, real-life humans typically have minor variations in their muscles or bones that mostly go unnoticed. As a result, medical device makers struggle with how common anatomical variations among people may affect the fit and performance of their equipment. Virtonomy has developed a library of common variations to help medical equipment makers conduct studies on how these variations may affect the performance and safety of new devices. In this case, they simulate the characteristics representing common variations in a given population rather than individuals.

20. Digital twin of a lab

Modern drug development often requires testing out thousands or millions of possibilities in a highly controlled environment. A digital twin of the lab can help to automate these facilities. It can also help to prioritize tests in response to discoveries. Digital twins could also improve the reproducibility of experiments across labs and personnel in the same lab. In this quest, Artificial recently closed $21.5 million in series A funding from Microsoft and others to develop lab automation software. The company is betting that unified data models and platforms could help them jump to the front of the $10 billion lab automation market.

21. Improving drug delivery

Researchers at Oklahoma State have been working with Ansys to develop a digital twin to improve drug delivery using models of simulated lungs as part of the Virtual Human System project. They found that only about 20% of many drugs reached their target. The digital twins allowed them to redesign the drug’s particle size and composition characteristics to improve delivery efficiency to 90%.



4 ways AI is unlocking the mysteries of the universe

Astronomy is all about data. The universe is getting bigger and so too is the amount of information we have about it. But some of the biggest challenges of the next generation of astronomy lie in just how we’re going to study all the data we’re collecting.

To take on these challenges, astronomers are turning to machine learning and artificial intelligence (AI) to build new tools to rapidly search for the next big breakthroughs. Here are four ways AI is helping astronomers.

1. Planet hunting

There are a few ways to find a planet, but the most successful has been by studying transits. When an exoplanet passes in front of its parent star, it blocks some of the light we can see.

By observing many orbits of an exoplanet, astronomers build a picture of the dips in the light, which they can use to identify the planet’s properties – such as its mass, size and distance from its star. Nasa’s Kepler space telescope employed this technique to great success by watching thousands of stars at once, keeping an eye out for the telltale dips caused by planets.
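
To see why those dips are so informative, note that the fraction of starlight blocked is roughly the square of the planet-to-star radius ratio, so even a crude depth measurement pins down the planet’s size. The Python sketch below is a back-of-the-envelope illustration of that relationship (our own numbers and helper function), not how Kepler’s actual pipeline works.

```python
# Back-of-the-envelope sketch: transit depth is roughly (R_planet / R_star)**2,
# so a measured dip in brightness gives an approximate planet radius.
import math

SUN_RADIUS_KM = 696_340
EARTH_RADIUS_KM = 6_371

def planet_radius_from_depth(depth: float, star_radius_km: float) -> float:
    """Estimate planet radius (km) from the fractional drop in starlight."""
    return math.sqrt(depth) * star_radius_km

# A Jupiter-like planet crossing a Sun-like star blocks about 1% of the light.
depth = 0.01
radius_km = planet_radius_from_depth(depth, SUN_RADIUS_KM)
print(f"~{radius_km:,.0f} km, about {radius_km / EARTH_RADIUS_KM:.1f} Earth radii")
```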