Categories
AI

Google Lens will soon search for words and images combined

Google is updating its visual search tool Google Lens with new AI-powered language features. The update will let users further narrow searches using text. So, for example, if you snap a photo of a paisley shirt in order to find similar items online using Google Lens, you can add the command “socks with this pattern” to specify the garments you’re looking for.

Additionally, Google is launching a new “Lens mode” option in its iOS Google app, allowing users to search using any image that appears while searching the web. This will be available “soon,” but it’ll be limited to the US. Google is also launching Google Lens on desktop within the Chrome browser, letting users select any image or video when browsing the web to find visual search results without leaving their tab. This will be available globally “soon.”

These updates are part of Google’s latest push to improve its search tools using AI language understanding. The updates to Lens are powered by a machine learning model that the company unveiled at I/O earlier this year named MUM. In addition to these new features, Google is also introducing new AI-powered tools to its web and mobile searches.

Using the updated Google Lens to identify a bike’s derailleur.
Image: Google

The changes to Google Lens show the company hasn’t lost interest in this feature, which has always shown promise but seemed to appeal more as a novelty. Machine learning techniques have made object and image recognition features relatively easy to launch at a basic level, but, as today’s updates show, they require a little finesse on the part of the users to be properly functional. Enthusiasm may be picking up, though — Snap recently upgraded its own Scan feature, which functions pretty much identically to Google Lens.

Google wants these Lens updates to turn its world-scanning AI into a more useful tool. It gives the example of someone trying to fix their bike but not knowing what the mechanism on the rear wheel is called. They snap a picture with Lens, add the search text “how to fix this,” and Google pops up with the results that identify the mechanism as a “derailleur.”

As ever with these demos, the examples Google is offering seem simple and helpful. But we’ll have to try out the updated Lens for ourselves to see if AI language understanding is really making visual search more than just a parlor trick.


Categories
AI

AI-produced images can’t fix diversity issues in dermatology databases

Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It’s working on an AI program to generate synthetic images of diseases on darker skin — and using those images for a tool that could help diagnose skin cancer.

“Having real images of darker skin is the ultimate solution,” says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. “Until we have that data, we need to find a way to close the gap.”

But other experts working in the field worry that using synthetic images could introduce problems of its own. The focus should be on adding more diverse real images to existing databases, says Roxana Daneshjou, a clinical scholar in dermatology at Stanford University. “Creating synthetic data sounds like an easier route than doing the hard work to create a diverse data set,” she says.

There are dozens of efforts to use AI in dermatology. Researchers build tools that can scan images of rashes and moles to figure out the most likely type of issue. Dermatologists can then use the results to help them make diagnoses. But most tools are built on databases of images that either don’t include many examples of conditions on darker skin or don’t have good information about the range of skin tones they include. That makes it hard for groups to be confident that a tool will be as accurate on darker skin.

That’s why Rezk and the team turned to synthetic images. The project has four main phases. The team already analyzed available image sets to understand how underrepresented darker skin tones were to begin with. It also developed an AI program that used images of skin conditions on lighter skin to produce images of those conditions on dark skin and validated the images that the model gave them. “Thanks to the advances in AI and deep learning, we were able to use the available white scan images to generate high-quality synthetic images with different skin tones,” Rezk says.

Next, the team will combine the synthetic images of darker skin with real images of lighter skin to create a program that can detect skin cancer. It will also continuously check image databases for any new, real pictures of skin conditions on darker skin that can be added to the future model, Rezk says.

The team isn’t the first to create synthetic skin images — a group that included Google Health researchers published a paper in 2019 describing a method to generate them, and it could create images of varying skin tones. (Google is interested in dermatology AI and announced a tool that can identify skin conditions last spring.)

Rezk says synthetic images are a stopgap until there are more real pictures of conditions on darker skin available. Daneshjou, though, worries about using synthetic images at all, even as a temporary solution. Research teams would have to carefully check whether AI-generated images have any unusual quirks that people wouldn’t be able to see with the naked eye. That type of quirk could theoretically skew results from an AI program. The only way to confirm that the synthetic images work as well as real images in a model would be to compare them with real images — which are in short supply. “Then goes back to the fact of, well, why not just work on trying to get more real images?” she says.

If a diagnostic model is based on synthetic images from one group and real images from another — even temporarily — that’s a concern, Daneshjou says. It could lead to the model performing differently on different skin tones.

Leaning on synthetic data could also make people less likely to push for real, diverse images, she says. “If you’re going to do that, are you actually going to keep doing the work?” she says. “I would actually like to see more people do work on getting real data that is diverse, rather than trying to do this workaround.”


Categories
AI

Algorithms that detect cancer can be fooled by hacked images

Artificial intelligence programs that check medical images for evidence of cancer can be duped by hacks and cyberattacks, according to a new study. Researchers demonstrated that a computer program could add or remove evidence of cancer from mammograms, and those changes fooled both an AI tool and human radiologists.

That could lead to an incorrect diagnosis. An AI program helping to screen mammograms might say a scan is healthy when there are actually signs of cancer or incorrectly say that a patient does have cancer when they’re actually cancer free. Such hacks are not known to have happened in the real world yet, but the new study adds to a growing body of research suggesting healthcare organizations need to be prepared for them.

Hackers are increasingly targeting hospitals and healthcare institutions with cyberattacks. Most of the time, those attacks siphon off patient data (which is valuable on the black market) or lock up an organization’s computer systems until that organization pays a ransom. Both types of attacks can harm patients by gumming up a hospital’s operations and making it harder for healthcare workers to deliver good care.

But experts are also growing more worried about the potential for more direct attacks on people’s health. Security researchers have shown that hackers can remotely break into internet-connected insulin pumps and deliver dangerous doses of the medication, for example.

Hacks that can change medical images and affect a diagnosis also fall into that category. In the new study on mammograms, published in Nature Communications, a research team from the University of Pittsburgh designed a computer program that could make mammograms that originally showed no signs of cancer look cancerous, and make mammograms that looked cancerous appear cancer-free. The researchers then fed the tampered images to an artificial intelligence program trained to spot signs of breast cancer and asked five human radiologists to decide whether the images were real or fake.

Around 70 percent of the manipulated images fooled that program — that is, the AI wrongly said that images manipulated to look cancer-free were cancer-free, and that the images manipulated to look like they had cancer did have evidence of cancer. As for the radiologists, some were better at spotting manipulated images than others. Their accuracy at picking out the fake images ranged widely, from 29 percent to 71 percent.

Other studies have also demonstrated the possibility that a cyberattack on medical images could lead to incorrect diagnoses. In 2019, a team of cybersecurity researchers showed that hackers could add or remove evidence of lung cancer from CT scans. Those changes also fooled both human radiologists and artificial intelligence programs.

There haven’t been public or high-profile cases where a hack like this has happened. But there are a few reasons a hacker might want to manipulate things like mammograms or lung cancer scans. A hacker might be interested in targeting a specific patient, like a political figure, or they might want to alter their own scans to get money from their insurance company or sign up for disability payments. Hackers might also manipulate images randomly and refuse to stop tampering with them until a hospital pays a ransom.

Whatever the reason, demonstrations like this one show that healthcare organizations and people designing AI models should be aware that hacks that alter medical scans are a possibility. Models should be shown manipulated images during their training to teach them to spot fake ones, study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, said in a statement. Radiologists might also need to be trained to identify fake images.
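
The Pittsburgh team’s attack and defense pipeline isn’t published as runnable code here, but Wu’s suggestion — showing a model manipulated images during training — can be sketched with a generic, well-known perturbation technique. The snippet below is a minimal illustration in PyTorch using the fast gradient sign method (FGSM), not the program the researchers built; `model`, `optimizer`, `images`, `labels`, and the `epsilon` value are all placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Return adversarially perturbed copies of a batch (fast gradient sign method)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge each pixel in the direction that most increases the loss,
    # then clamp back to a valid [0, 1] intensity range.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0, 1).detach()

def train_step(model, optimizer, images, labels):
    """One training step that mixes clean and perturbed images, so the
    classifier also learns from tampered inputs, not just real ones."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels)
    batch = torch.cat([images, adv_images])
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The study’s fakes were produced by a dedicated image-manipulation program rather than simple pixel perturbations, so a real defense would also need to fold that kind of manipulated scan into the training set.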

“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” Wu said.


Categories
AI

This free web tool is a fast and easy way to remove objects from images

Ever needed to quickly edit something or someone out of an image?

Maybe it’s the stranger who wandered into your family photo, or the stand holding up your final artwork for school? Whatever the job, if you don’t have the time or Photoshop skills needed to edit the thing yourself, why not try Cleanup.pictures — a handy web tool that does exactly what it promises in the URL.

Just upload your picture, paint over the thing you want removed with the brush tool, and hey presto: new image. The results aren’t up to the standards of professionals, especially if the picture is particularly busy or complex, but they’re surprisingly good. The tool is also just quite fun to play around with. You can check out some examples in the gallery below:

Cleanup.pictures seems to be from the same team that made a fun augmented reality demo that lets you “copy and paste” the real world, and is open-source (you can find the underlying code here). Obviously, tools of this sort have long been available, dating back at least to the launch of Photoshop’s Clone Stamp tool, but the quality of these automated programs has increased considerably in recent years thanks to AI.

Machine learning systems are now better not only at segmentation (marking the divisions between an object and its background) but also at inpainting, or filling the gap with new content. Just last week, Google launched its new Pixel 6 phones with a nearly identical “Magic Eraser” feature, but Cleanup.pictures shows how this capability has become a commodity.
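
Cleanup.pictures and Magic Eraser rely on learned inpainting models whose internals aren’t covered here, but the mask-and-fill workflow they expose can be approximated with classical inpainting in OpenCV. The sketch below is only a rough stand-in for the real tool — the file names are placeholders, and a learned model produces far better fills on busy scenes.

```python
import cv2
import numpy as np

# "photo.jpg" and "mask.png" are placeholder paths. The mask is the user's
# brush strokes: white pixels mark the object to remove.
image = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype(np.uint8) * 255  # binarize the brush strokes

# Fill the masked region from surrounding pixels (Telea's method).
# The interface is the same as the web tool's: image + mask in, edited image out.
result = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("result.jpg", result)
```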

I think my favorite use of this tool, though, is this fantastic series of pictures removing the people from Edward Hopper paintings:


Categories
AI

Nvidia’s latest AI tech translates text into landscape images



Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don’t exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings.

“Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images,” Isha Salian, a member of Nvidia’s corporate communications team, wrote in a blog post. “Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.”

Generated images from text

GauGAN2, whose namesake is post-Impressionist painter Paul Gauguin, improves upon Nvidia’s GauGAN system from 2019, which was trained on more than a million public Flickr images. Like GauGAN, GauGAN2 has an understanding of the relationships among objects like snow, trees, water, flowers, bushes, hills, and mountains, such as the fact that the type of precipitation changes depending on the season.

GauGAN and GauGAN2 are generative adversarial networks (GANs), systems that consist of a generator and a discriminator. The generator takes samples — e.g., images paired with text — and learns which data (words) correspond to which other data (elements of a landscape picture). It is trained by trying to fool the discriminator, which assesses whether its outputs seem realistic. While the GAN’s outputs are initially poor in quality, they improve with the discriminator’s feedback.
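
Nvidia hasn’t released GauGAN2’s training code alongside this announcement, but the generator-versus-discriminator loop described above follows the standard GAN recipe. Here is a minimal sketch in PyTorch, with tiny fully connected networks standing in for GauGAN2’s far larger image models:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    """One adversarial training step on a batch of flattened real images."""
    noise = torch.randn(real.size(0), 64)
    fake = G(noise)

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the (just-updated) discriminator call its output real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a batch of 16 stand-in "real" images (flattened 28x28).
real_batch = torch.rand(16, 784) * 2 - 1  # values in [-1, 1] to match the Tanh output
gan_step(real_batch)
```

In the real system, the generator is conditioned on a text prompt and a segmentation map rather than on random noise alone, but the adversarial back-and-forth is the same in spirit.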

Unlike GauGAN, GauGAN2 — which was trained on 10 million images — can translate natural language descriptions into landscape images. Typing a phrase like “sunset at a beach” generates the scene, while adding adjectives like “sunset at a rocky beach” or swapping “sunset” for “afternoon” or “rainy day” instantly modifies the picture.

GauGAN2

With GauGAN2, users can generate a segmentation map — a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like “sky,” “tree,” “rock,” and “river” and allowing the tool’s paintbrush to incorporate the doodles into images.

AI-driven brainstorming

GauGAN2 isn’t unlike OpenAI’s DALL-E, which can similarly generate images to match a text prompt. Systems like GauGAN2 and DALL-E are essentially visual idea generators, with potential applications in film, software, video games, product, fashion, and interior design.

Nvidia claims that the first version of GauGAN has already been used to create concept art for films and video games. As it did with the original, Nvidia plans to make the code for GauGAN2 available on GitHub alongside an interactive demo on Playground, the web hub for Nvidia’s AI and deep learning research.

One shortcoming of generative models like GauGAN2 is the potential for bias. In the case of DALL-E, OpenAI used a special model — CLIP — to improve image quality by surfacing the top samples among the hundreds per prompt generated by DALL-E. But a study found that CLIP misclassified photos of Black individuals at a higher rate and associated women with stereotypical occupations like “nanny” and “housekeeper.”

GauGAN2

In its press materials, Nvidia declined to say how — or whether — it audited GauGAN2 for bias. “The model has over 100 million parameters and took under a month to train, with training images from a proprietary dataset of landscape images. This particular model is solely focused on landscapes, and we audited to ensure no people were in the training images … GauGAN2 is just a research demo,” an Nvidia spokesperson explained via email.

GauGAN2 is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. In September 2018, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. That same year, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos.

GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, creating images and storyboards from natural language captions, and animating and syncing facial movements with audio clips containing human speech.


Categories
Game

HTC Vive Flow headset images leak days before reported launch

HTC is expected to launch a new VR headset within the week, but you don’t have to wait till then to see what it looks like. A collection of Vive Flow images has made its way online, courtesy of evleaks, ahead of the launch event. According to Protocol, the Vive Flow is a lightweight headset developed for consumers under the internal name “Hue.” The Bluetooth SIG consortium previously published documents describing Hue as a VR AIO (all-in-one) product, which suggests the device could be a standalone headset that doesn’t need a phone or a PC tether to work.

The company reportedly wants to position the Vive Flow primarily as a way to consume media, with some access to gaming. Its chip is less powerful than the Oculus Quest 2’s, Protocol says, but it will have six-degrees-of-freedom tracking. The leaked images also show more details about the device, including a dual-hinge system to help it fit most people, a snap-on face cushion, immersive spatial audio, adjustable lenses, and an active cooling system. After pairing your phone with the headset via Bluetooth, you can use the phone as a VR controller and stream content to the headset using Miracast tech.

In addition, the images show that the Vive Flow will be available for pre-order starting on October 15th, with shipments going out in early November. The headset will set you back US$499, which is $200 more than the Quest 2’s launch price, and pre-orders come with seven free pieces of VR content and a carrying case.


Categories
AI

The power of synthetic images to train AI models



Artificial intelligence is poised to disrupt nearly every industry by the end of the decade with the promise of increased efficiencies, higher profitability, and smarter, data-driven business decisions.

And yet, as Gartner has publicized, 85% of AI projects fail. Four barriers are cited repeatedly: skills of staff; data quality; unclear business case; and security and privacy. A study by Dimensional Research revealed that 96% of organizations have problems with training data quality and quantity, and that most AI projects require more than 100,000 data samples for success.

Data security is an increasingly important consideration in nearly every industry. Privacy laws are expanding rapidly, leading to a shortage in available data sets; even if the data needed to train AI models exists, it may not be available due to compliance requirements.

As a result, companies are now searching for ways to adopt AI without large data sets. More data is not necessarily better. The key is good data, not just big data.

But what do you do when good data just isn’t available? Increasingly, enterprises are discovering the gap can be filled with synthetic data — a move that promises to revolutionize the industry, enabling more companies to use AI to improve processes and solve business problems with machine intelligence.

Synthetic data is artificial data generated by a computer program instead of collected from real-world events. Ideally, synthetic data is created from a “seed” of real data — a few false positives and negatives, and a few true positives and negatives. Those real pieces of data can then be manipulated in various ways to create a synthetic dataset good enough and large enough to drive the creation of successful AI models.

There are many synthetic data generators on the market for structured data, such as Gretel, MOSTLY AI, Synthetic IO, Synthesized IO, Tonic, and the open-source Synthetic Data Vault. Scikit-learn is a free software machine learning library for Python with some synthetic data generation capabilities. In addition to synthetic data generators, data scientists can perform the task manually with more effort.
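
For structured data, the lowest-friction starting point is often a single library call. As a minimal illustration — not a substitute for the dedicated generators named above — scikit-learn can produce a labeled synthetic tabular dataset in a few lines:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# 100,000 synthetic rows with 20 features and an imbalanced binary label --
# roughly the sample count the Dimensional Research figure above says most
# AI projects need. All parameter values here are illustrative.
X, y = make_classification(
    n_samples=100_000,
    n_features=20,
    n_informative=8,
    weights=[0.9, 0.1],  # rare positive class, as in most fraud/warranty data
    flip_y=0.02,         # 2% label noise
    random_state=42,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```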

Generative adversarial networks (GANs) are a type of neural network that generates realistic copies of real data. GANs can add new samples to a dataset using techniques such as image blending and image translation. This type of work is labor-intensive but provides a way to solve seemingly unsolvable AI challenges.

While several emerging synthetic data generators exist on the market today, these “out of the box” tools are often insufficient without significant customization, unable to handle unstructured data sets — such as photos and videos — or both.

Training an AI model for a global auto maker with synthetic data

A project my team recently worked on with one of the world’s top three auto manufacturers provides a good example of how you can quickly deploy synthetic data to fill a data gap.

Specifically, this example points out how to create synthetic data when the data is in the form of an image. Due to its unstructured character, image manipulation is more complex than numerical or text-based structured datasets.

The company has a product warranty system that requires customers and dealers to submit photos to file a warranty claim. The process of manually examining millions of warranty submissions is time consuming and expensive. The company wanted to use AI to automate the process: create a model to look at the photos, simultaneously validate the part in question, and detect anomalies.

Creating an AI data model to automatically recognize the product in the photos and determine warranty validity wasn’t an impossible task. The catch: for data privacy reasons, the existing data set was inaccessible. Instead of tens of thousands of product photos to train the AI models, the company could provide only a few dozen images.

Frankly, I felt it was a showstopper. Without a sizable data set, conventional data science had ground to a halt.

And yet, where there is a will, there is a way. We started with a few dozen images with a mixture of good and bad examples, and replicated those images using a proprietary tool for synthetic data — including creative filtration techniques, coloration scheme changes, and lighting changes — much like a studio designer does to create different effects.

One of the primary challenges of using synthetic data is thinking of every possible scenario and creating data with those circumstances. We started out with 30 to 40 warranty images from the auto manufacturer. Based on these few images provided with good and bad examples, we were able to create false positives, false negatives, true positives, and true negatives. We first trained the model to recognize the part in question for the warranty, then trained it to differentiate between other things in the image — for example, the difference between glare on the camera lens and a scratch on a wheel.

The challenge was that as we moved along, outliers were missing. When creating synthetic data, it is important to stop, look at the complete dataset, and see what might be needed to improve the success of the model at predicting what is in the photo. That means considering every possible variable including angles, lighting, blur, partial visibility, and more. Since many of the warranty photos were taken outside, we had to consider cloudy days, rain, and other environmental factors and add those to the synthetic photos as well.
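
The proprietary tool used on this project isn’t public, but the variations described above — lighting, color, blur, odd angles, partial visibility — map directly onto standard image augmentation. A hedged sketch with torchvision follows; the transforms and parameters are illustrative guesses, not the team’s actual pipeline.

```python
from PIL import Image
from torchvision import transforms

# Each transform stands in for one of the variations discussed above.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3, saturation=0.3),  # lighting / color shifts
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),              # focus problems
    transforms.RandomRotation(degrees=25),                                 # odd camera angles
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),                   # partial visibility
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# "warranty_photo.jpg" is a placeholder path; every call produces a different
# variant, so a few dozen seed photos can yield thousands of training samples.
variant = augment(Image.open("warranty_photo.jpg").convert("RGB"))
```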

We started with a 70% success rate at identifying the right part and predicting whether it was good or bad — and hence whether the warranty applied. With further manipulation of the data, the AI model became smarter and smarter until we reached an accuracy rate above 90%.

The result: in under 90 days the customer had a web-based proof of concept that let them upload any image and get a yes/no answer on whether the image contained the part in question and a yes/no answer on whether that part had in fact failed. An AI model was successfully trained with only a few dozen pieces of actual data, and the gaps were filled in with synthetic data.

Dataless AI comes of age

This story is not unique to auto makers. Exciting work is underway to revolutionize industries from insurance and financial services to health care, education, manufacturing, and retail.

Synthetic data does not make real data irrelevant or unnecessary. Synthetic data is not a silver bullet. However, it can achieve two key things:

  1. Fast-track proofs-of-concept to understand their viability;
  2. Accelerate AI model training by augmenting real data.

Make no mistake: data — and importantly, unified data across the enterprise — is the key to competitive advantage. The more real data an AI system is trained on, the smarter it gets.

For many enterprises today, each AI project represents millions or tens of millions of dollars and years of effort. However, if companies can validate proofs of concept in months — not years — with limited data sets bolstered with synthetic data, AI costs will radically decrease, and AI adoption will accelerate at an exponential pace.

David Yunger is CEO of AI and software development firm Vaital.


Categories
AI

Google wants you to help train its AI by labeling images in Google Photos

Google has updated its Google Photos app on Android with a new option that lets users tell the search giant about the contents of their pictures. By labeling these images, Google can improve its object recognition algorithms, which in turn make Photos more useful. It’s a virtuous cycle of AI development best deployed by tech giants like Google which have lots of data and lots of users.

This isn’t an unusual practice at all. Machine learning systems don’t just learn by themselves, and the vast majority of these applications need to be taught using data labeled by humans. It’s the same reason that CAPTCHAs ask you to identify cars and motorbikes in images. By identifying these objects you’re training AI to do the same.

The feature appears in the most recent version of Google Photos. Just tap the search button in the app’s menu, scroll down, and you’ll see an option to “Help improve Google Photos.” As reported by 9to5Google, click on it and you’ll be presented with four tasks: describing your printing preferences for photos; picking your preferred collages or animations; identifying which photos belong to which holiday events (e.g., Christmas or Halloween); and identifying the contents of photos (“Name the most important things in this photo”).

Google wants you to tell it about the contents of your photos via an optional feature in Google Photos.
Screenshots: The Verge

As Google explains on a help page about the feature: “It may take time to see the impact your contributions have on your account, but your input will help improve existing features and build new ones; for example, improved suggestions on which photos to print or higher quality creations that you would like. You can delete your answers at any time.” (To do so, tap the three-dot menu at the top right of the screen and hit “Delete my answers.”) At the time of writing, it seems the update is available only on Android, not iOS.

Although this looks to be a new addition to the Google Photos app, the underlying software is much older. The process is powered by “Crowdsource by Google,” a crowdsourcing platform that the company launched in 2016. It gamifies data-labeling, letting users earn points and badges by completing tasks like verifying landmarks, identifying the sentiment of text snippets (is a review positive or negative, for example), transcribing handwritten notes, and other similar jobs. To be clear, though: users don’t get any real rewards for their work beyond virtual kudos from Google.

It’s worth remembering all this when using Google’s whizzy machine learning products: they wouldn’t be half as good without humans helping teach them.


Categories
Computing

Leaks reveal Surface Pro 7, Surface Laptop 3, and Surface on ARM images before Microsoft event

More details about Microsoft’s upcoming Surface announcements have leaked out, revealing images of what apparently are the Surface Pro 7, Surface Laptop 3, and an ARM-based Surface, with perhaps more to come.

Evan Blass, whose Twitter handle is @evleaks, published what appear to be marketing materials for the upcoming Surface devices, scheduled to be unveiled at a Microsoft event on Wednesday. The published photos generally line up with what was expected, including two different sizes of the Surface Laptop 3. A Microsoft representative did not immediately respond when asked for comment.

There could be a surprise, too: a dual-screen Surface, Blass tweeted. (Though I follow Blass on Twitter and can see the images he posted, his tweets are protected—meaning that they can’t be linked to directly.) 

Image: Evan Blass / Twitter

Blass appears to confirm reports that there will be two Surface Laptop sizes, a 13-inch and a 15-inch model.

In short, the images Blass shared point to several products being revealed on Wednesday:

  • Surface Pro 7
  • Surface Laptop 3 (13-inch and 15-inch)
  • Surface with ARM
  • Dual-screen Surface (known as Centaurus)

Surface images answer several questions

Assuming they’re real—and Blass has a good reputation in this regard—the images Blass posted clear up a couple of questions. For one, the Surface Pro 7 includes a new USB-C port alongside a USB-A connection and a Surface Connector, supporting all three I/O technologies. (The Surface Laptop 3 may also include all three ports, but the images Blass leaked don’t show the left side of the chassis—the side that typically houses the USB-A port. They do show the Surface Connector on the right-hand side.)

Image: Evan Blass / Twitter

The Surface Pro 7 Blass showed off has both a USB-C and USB-A port.

The Blass images also confirm a redesigned Surface (which Blass refers to as an ARM-powered Surface) and show the tablet with a pair of USB-C ports, unlike the traditional Surface Pro devices. Like the Surface Pro 7 that Blass also revealed, it has a kickstand. There’s what looks like a new, flattish stylus that fits inside a pen tray of sorts on the front of the device. The keyboard looks absolutely flat, possibly lacking the magnetic hinge that allows the Surface Pro 7’s Type Cover to be slightly inclined.

PCWorld has been told that the ARM-based tablet runs a modified Qualcomm Snapdragon 8cx, which has been rebranded. The tablet itself is being referred to as Surface Pro X, though that may in fact be a placeholder for a Surface Pro 7, or even a Surface Pro 7x. Exact branding was still being worked out late on Friday and into Monday, we’re told. 

Image: Evan Blass / Twitter

The Surface device built around the ARM chip has a pair of USB-C ports.

Finally, there’s the Surface Laptop 3, which has traditionally shipped in 13-inch size and now appears to have a 15-inch size as well. As others have reported, the Surface Laptop appears to ditch the Alcantara fabric of prior generations, with a metal body more reminiscent of the Surface Book.


Categories
Tech News

Twitter will now display full images in your timeline

Last September, a ton of users found that Twitter’s image-cropping algorithm seemingly had a bias toward white faces. In response, Twitter said it would give users control over how images appear on their feeds while the possible bias was being investigated — but then never delivered…until now.

The company announced last night that if you post a single image on your feed, it’ll appear the same as it looked in the composer. The platform is currently testing this feature on its iOS and Android apps.

While Twitter’s announcement is a welcome one, it might not address bias in all situations. When pointing out the problem last year, users usually posted two images together to show the algorithm always opted to focus on the white person in the picture. So if you repeat the tweet below, you’ll still probably get the same horrible results.

If you’re posting just one image, you’ll probably be able to see the full image in the preview.

Personally, I’m also excited about this new feature because I post a lot of memes on Twitter. And often the app’s algorithm crops out the part that’s the punchline for that meme. So then I have to readjust the dimensions of that image so the meme is properly visible and I get that sweet internet validation from strangers that makes my life complete.

Thanks to this new feature, I won’t have to worry about cropping for a single image. Well done, Twitter.


Published March 11, 2021 — 09:42 UTC


