Apple boss drops heaviest hint yet about future device

Apple is famous for keeping its cards close to its chest when it comes to upcoming products, so comments made by CEO Tim Cook this week have surprised many observers.

Speaking in an interview with China Daily USA, Cook gave the clearest hint yet that Apple is working on a high-tech headset.

Discussing AR technology, Cook said, “Stay tuned and you’ll see what we have to offer.”

Yes, that was pretty much it, but for Cook that’s saying a lot. It’s funny that he actually said, “sort of stay tuned …,” as if he couldn’t quite bring himself to be emphatic about it, because that’s not the Apple way.

Rumors about Apple’s interest in a headset have been swirling for a long time, but Cook, despite often expressing an interest in AR technology, has until now been careful not to give anything away.

Sure, we still have no concrete details on the product, but the person at the top has all but confirmed that a head-based device is on the way.

Speculation over the last few years has pointed toward Apple developing not one but two products: a pair of AR glasses and a mixed-reality headset that incorporates both AR and VR.

To put it simply, a VR (virtual reality) headset offers an immersive experience in a digital world, and is popular for gaming. AR (augmented reality) specs, on the other hand, place digital overlays of text or images over what you’re seeing in the real world. An AR/VR headset (mixed-reality) combines both technologies.

In this week’s interview, Cook only ever talked about AR, saying he was “incredibly excited” about the technology and adding that he believes “we’re still in the very early innings of how this technology will evolve.”

He continued: “I couldn’t be more excited about the opportunities we’ve seen in this space,” before finishing off with his line suggesting folks “sort of stay tuned.”

Many expected that Apple might unveil its first headset at its Worldwide Developers Conference earlier this month, but it wasn’t to be. Recent reports have suggested the tech giant could unveil an AR/VR headset toward the end of this year, while the AR specs might not land until 2024. Perhaps that’s what Cook meant by “sort of stay tuned.” The expression usually suggests something will be along soon, but Cook’s unusual choice of words is perhaps his way of saying, “But don’t stay too tuned ‘cos it’s gonna be a while.”

To find out everything we think we know about Apple’s headset plans, Digital Trends has a carefully curated page featuring all of the incoming news.



Medical device leader Medtronic joins race to bring AI to health care

Medtronic, the world’s largest medical device company, is significantly increasing its investments in AI and other technologies, in what it says is an effort to help the health care industry catch up with other industries.

While many other industries have embraced technology, health care has been slower to adopt it. Studies reveal that only 20% of consumers would trust AI-generated health care advice.

VentureBeat interviewed Torod Neptune, Medtronic’s senior vice president and chief communications officer, and Gio Di Napoli, president of Medtronic’s Gastrointestinal Unit, to discuss the company’s vision of the future of health care technology.

Digital transformation in health care

Neptune spoke about Medtronic’s transition beyond traditional med tech to more innovative solutions using AI. He noted that health care technology, with its unusual scale and its ability to harness data analytics, algorithms, and intelligence, has a significant role to play in solving health care’s biggest problems.

Artificial intelligence increases the detection of early cancers by 14% compared with standard colonoscopy, Di Napoli said. This matters because “every percentage of increase in detection reduces the risk of cancer by 2%,” he said.

Building on Medtronic’s medical devices already serving millions (like its miniature pacemaker and smart insulin pump), the company’s plan to make health care more predictive and personal led to the development of the GI Genius Intelligent Endoscopy Module, which was granted FDA de novo clearance on April 9, 2021, and launched on April 12, 2021.

Above: Medtronic says its GI Genius Intelligent Endoscopy Module is the first-to-market computer-aided polyp detection system powered by artificial intelligence.

The GI Genius module is the first and only artificial intelligence system for colonoscopy, according to Medtronic, assisting physicians in detecting precancerous growths and potentially addressing 19 million colonoscopies annually. The company says the module serves as a vigilant second observer, using sophisticated AI-powered technology to detect and highlight the presence of precancerous lesions with a visual marker in real time.
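
Medtronic hasn’t published how GI Genius works under the hood, but the “second observer” pattern described above (run a detector over every video frame and draw a visual marker in real time) can be sketched generically. In the Python sketch below, PolypDetector is a hypothetical stand-in for a trained model; only the OpenCV plumbing is real.

    # Generic sketch of a computer-aided detection overlay loop -- not
    # Medtronic's actual implementation. PolypDetector is a hypothetical
    # placeholder for a trained model.
    import cv2  # pip install opencv-python

    class PolypDetector:
        """Hypothetical model wrapper; a real system loads trained weights."""
        def predict(self, frame):
            # Would return a list of (x, y, w, h, confidence) boxes.
            return []

    def run_second_observer(video_source=0, threshold=0.5):
        detector = PolypDetector()
        capture = cv2.VideoCapture(video_source)
        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:
                break
            # Flag suspected lesions on the live feed, frame by frame.
            for (x, y, w, h, conf) in detector.predict(frame):
                if conf >= threshold:
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("endoscopy feed", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        capture.release()
        cv2.destroyAllWindows()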

Investing in innovative health care

Medtronic has launched more than 190 health care technology products in the past 12 months and invests $2.5 billion a year in research and development (R&D). Medtronic’s CEO, Geoff Martha, recently announced a 10% boost in R&D spending for FY22.

This enormous investment, the largest R&D increase in company history, underscores Medtronic’s focus on innovation and technology.

The company says it plans to expand the number of patients it serves each year, with the goal being 85 million by FY25.

According to Di Napoli, “AI is here. And it’s here to stay.”

A new era of health care

Speaking further about health care technology, Di Napoli says, “I can tell from my personal experience within the gastrointestinal business that there is a need for training and getting to know artificial intelligence as a partner and not as an enemy. And I think it’s critical for companies like ours to keep collecting data to improve our algorithms, to improve how our customers decide based on this data, and also improve patient outcomes with this.”

Although data collection comes with security concerns and privacy issues, Di Napoli says the company is in constant communication with the FDA to understand what processes to put in place to protect sensitive data going forward.

Neptune believes that technology and data are driving patient empowerment in a much more significant way, pointing to users’ growing comfort with adoption over the last 20 months. He said, “I think the pandemic has enabled more comfort and consideration, and there’s a global shift and willingness to engage and adopt new technological solutions.”



Amazon’s on-premises device for vision apps, AWS Panorama Appliance, launches publicly

This article is part of a VB special issue. Read the full series: AI and Surveillance.


Amazon today announced the general availability of the AWS (Amazon Web Services) Panorama Appliance, a device that allows customers to use existing on-premises cameras and analyze video feeds with AI. Ostensibly designed for use cases like quality checks and supply chain monitoring, Amazon says that the Panorama Appliance is already being used by companies including Accenture, Deloitte, and Sony.

“Customers in industrial, hospitality, logistics, retail, and other industries want to use computer vision to make decisions faster and optimize their operations. These organizations typically have cameras installed onsite to support their businesses, but they often resort to manual processes like watching video feeds in real time to extract value from their network of cameras, which is tedious, expensive, and difficult to scale,” Amazon wrote in a press release. “Most customers are stuck using slow, expensive, error-prone, or manual processes for visual monitoring and inspection tasks that do not scale and can lead to missed defects or operational inefficiencies.”

By contrast, the Panorama Appliance connects to a local network to perform computer vision processing at the edge, Amazon says. Integrated with Amazon SageMaker — Amazon’s service for building machine learning models — the Panorama Appliance can be updated and deployed with new computer vision models. Companies that opt not to create their own models can choose from solutions offered by Deloitte, TaskWatch, Vistry, Sony, Accenture, and other Amazon partners.

To date, customers have developed models running on the Panorama Appliance for manufacturing, construction, hospitality, and retail, Amazon says. Some are analyzing retail foot traffic to inform store layouts and displays, while others are identifying peak times in stores to pinpoint where staff might be needed.
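
For a concrete sense of what such a workload looks like, here is a minimal foot-traffic sketch using OpenCV’s built-in pedestrian detector against an on-premises camera feed. It illustrates the general technique only; it is not Amazon’s Panorama application SDK, and the camera URL is a placeholder.

    # Generic edge computer vision sketch for foot-traffic counting --
    # an illustration of the concept, not the Panorama SDK.
    import cv2  # pip install opencv-python

    CAMERA_URL = "rtsp://192.168.1.20/stream1"  # hypothetical local camera

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    capture = cv2.VideoCapture(CAMERA_URL)
    counts = []
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # Count pedestrians per frame; the series feeds peak-time analytics.
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        counts.append(len(boxes))
    capture.release()

    if counts:
        print(f"peak simultaneous visitors observed: {max(counts)}")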

The Cincinnati/Northern Kentucky International Airport in Hebron, Kentucky, is using the Panorama Appliance to monitor congestion across airport traffic lanes. With the help of Deloitte, The Vancouver Fraser Port Authority has applied the Panorama Appliance to track containers throughout its facilities. And Tyson has built models on the device to count packaged products on lines for quality assurance.

“Organizations across all industries like construction, hospitality, industrial, logistics, retail, transportation, and more are always keen to improve their operations and reduce costs. Computer vision offers a valuable opportunity to achieve these goals, but companies are often inhibited by a range of factors including the complexity of the technology, limited internet connectivity, latency, and inadequacy of existing hardware,” Swami Sivasubramanian, VP of machine learning at AWS, said in a statement. “We built the Panorama Appliance to help remove these barriers so our customers can take advantage of existing on-premises cameras and accelerate inspection tasks, reduce operational complexity, and improve consumer experiences through computer vision.”

Privacy implications

Since its unveiling at Amazon’s re:Invent 2020 conference in December, experts have raised concerns about how the Panorama Appliance could be misused. While the purported goal is “optimization,” the device could be co-opted for other, less benign purposes, like allowing managers to chastise employees in the name of productivity.

In the promotional material for the Panorama Appliance, Fender says it uses the product to “track how long it takes for an associate to complete each task in the assembly of a guitar.” Each state has its own surveillance laws, but most give wide discretion to employers so long as any equipment they use to track employees is plainly visible. There’s no federal legislation that explicitly prohibits companies from monitoring staff during the workday.

Bias could also arise from the computer vision models deployed to the Panorama Appliance if the models aren’t trained on sufficiently diverse data. A study conducted by researchers at the University of Virginia found that two prominent research-image collections displayed gender bias in their depiction of sports and other activities, showing images of shopping linked to women while associating things like coaching with men. Even differences in the sun path between the northern and southern hemispheres and variations in background scenery can affect model accuracy, as can the varying specifications of camera models like resolution and aspect ratio.
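
One low-tech way teams audit for this kind of skew is to count how often activity labels co-occur with demographic labels in a dataset’s annotations. Below is a minimal sketch; the annotation format (a set of labels per image) is a hypothetical simplification.

    # Sketch of a label co-occurrence audit like the one the cited
    # study performed; the annotation format is hypothetical.
    annotations = [
        {"woman", "shopping"},
        {"man", "coaching"},
        {"woman", "shopping"},
        {"man", "shopping"},
        # ... a real audit runs over the full dataset
    ]

    def cooccurrence(labels, activity, group_a, group_b):
        """Count how often `activity` co-occurs with each group label."""
        a = sum(1 for img in labels if activity in img and group_a in img)
        b = sum(1 for img in labels if activity in img and group_b in img)
        return a, b

    a, b = cooccurrence(annotations, "shopping", "woman", "man")
    print(f"'shopping': {a} images with 'woman', {b} with 'man'")
    # A heavily lopsided count warns that a model trained on this data
    # may learn the association as a shortcut.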

Recent history is filled with examples of the consequences of training computer vision models on biased datasets, like virtual backgrounds and automatic photo-cropping tools that disfavor darker-skinned people. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as “gorillas.” And the nonprofit AlgorithmWatch has shown that Google’s Cloud Vision API at one time automatically labeled thermometers held by a Black person as “guns” while labeling thermometers held by a light-skinned person as “electronic devices.”

Amazon has pitched — and employed — surveillance technologies before. The company’s Rekognition software sparked protests and pushback, which led Amazon to place a moratorium on police use of the technology. And Amazon’s notorious “Time Off Task” system dings warehouse employees for spending too much time away from the work they’re assigned to perform, like scanning barcodes or sorting products into bins.

An Amazon spokesperson recently told the BBC that the Panorama Appliance was “designed to improve industrial operations and workplace safety” and that how it is used is up to customers. “For example, AWS Panorama does not include any pre-packaged facial recognition capabilities,” the spokesperson said. All its machine learning functions can happen on the device, they added, “and [relevant data] never has to leave the customer’s facility.”

The Panorama Appliance is now available for sale through Amazon’s AWS Elemental service in the U.S., Canada, U.K., and E.U.

Read More: VentureBeat's Special Issue on AI and Surveillance



Google delivers collection of smart device ‘essentials’ for the enterprise


Google Cloud Next is taking place virtually this week. While Google typically makes many announcements at its annual event, one of the less highlighted this time around is Intelligent Product Essentials.

With Intelligent Product Essentials, Google essentially provides all the key components needed to deploy distributed IoT and edge computing solutions:

  • data ingestion
  • connectivity, both for data acquisition and for IoT device OS/security updates
  • integration of acquired data into a database suited to the type of data generated (e.g., spatial data, textual data)
  • central and/or edge cloud application processing
  • AI/ML analysis
  • open APIs that enable modifications and additions to the processes
  • the ability to work across a multi-cloud infrastructure (few companies work with a single cloud infrastructure)

It is less of a product and more of a template or workbench, tying a number of connectivity and integration components together to give companies a head start on a complete IoT and edge deployed solution. Google offers some direct engagement with its customers but relies on partnering with specialized systems integrators (a list of which will be expanded over time) to complete a customer’s solution. The SI would typically focus on deploying the IoT components and sensors that need to be managed, while also enabling the resulting data to be processed for insights into required actions and/or user and device interactions.

Google’s announcement of the offering focuses on use cases related to consumer products, such as smart ovens and smart bicycles. But I expect more enterprises and organizations to be interested in how Intelligent Product Essentials can help manufacturers create modern environments for machine monitoring, maintenance, and failure analysis, providing updateable and secure “things,” and in how it can help organizations deploy complex IoT solutions for smart-cities infrastructure, health care monitoring, remote inspections, etc. Clearly, the need for these IoT-enabled and edge-powered solutions continues to grow.

Google doesn’t charge for this product per se; rather, it earns revenue from the underlying GCP products customers select as the foundation of the solution. Built on a Kubernetes microservices architecture, the offering uses Dataflow to move data into the cloud environment and into databases of the customer’s choosing (e.g., Firebase), creating a data warehouse that can be analyzed by Google AI/ML tools (e.g., Vertex AI, including Vertex at the edge), while the various IoT components are managed remotely with Google management tools. Interestingly, Google does not require that the IoT devices run its Android OS, as it realizes many IoT devices run an RTOS or some other simple OS.
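
As a rough illustration of the ingestion leg just described, here is a minimal streaming Dataflow (Apache Beam) pipeline moving device telemetry from Pub/Sub into BigQuery. The project, topic, table, and schema names are placeholders; a production pipeline would add windowing, dead-lettering, and schema validation.

    # Minimal streaming ingestion sketch: Pub/Sub -> Dataflow -> BigQuery.
    # All resource names below are hypothetical placeholders.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadTelemetry" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/device-telemetry")
            | "Parse" >> beam.Map(json.loads)
            | "Store" >> beam.io.WriteToBigQuery(
                "my-project:iot.telemetry",
                schema="device_id:STRING,metric:STRING,value:FLOAT,ts:TIMESTAMP")
        )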

Such a foundational platform is an attractive way to create and deploy industrial devices that often suffer from poor user experience and manageability — from trains to excavators to industrial machinery to medical monitors. Many organizations can benefit from such capability, but many also lack the resources (both monetary and skilled staff) to implement a modern data-driven environment to improve their operations. Any reference design that brings together the major components into an integrated approach is highly beneficial.

While this offering from Google may not be an option for companies that require a completely optimized and customized solution (which can take many months or years to create), it offers a simplified way to speed up time-to-deployment for many companies, which means real revenue enhancement and/or a reduced reliance on scarce resources. Indeed, while there is a high degree of variability based on a user organization’s particular requirements, I estimate that a template-structured solution like this can often achieve a 25%-40% reduction in the effort involved, resources needed, and/or time-to-deployment.

GCP as a cloud solution is competitive, and Google has some of the better analytics and AI capabilities available to provide real data insights. But as the number three public cloud provider for enterprises, Google has to try harder. Both AWS and Microsoft have their own IoT and edge computing initiatives and have made some significant inroads, particularly in key industries like automotive, smart cities, and health care. But the market for edge and IoT related solutions is still nascent, so Google entering somewhat later with a GCP offering really is no major setback. And Google does have a major opportunity to convince potential customers that its analytics and AI capability, honed over years for its own product needs, is a major advantage. But Google is still playing catch-up, given the head start and better-known products from its competitors.

Bottom Line: The potential benefits in operational efficiency and safety attributed to IoT, edge, and data-driven analysis of company operational and business processes are very attractive to many organizations, but they may not have the proper resources to pursue such initiatives. With Google’s Intelligent Product Essentials foundation, many more companies, even small and medium-sized ones, have a path forward to making IoT and edge a reality. Integrated solution templates like Google’s Intelligent Product Essentials are a great way to achieve advanced IoT- and edge-enabled solutions with far less friction than completely custom solutions.

Jack Gold is the founder and principal analyst at J.Gold Associates, LLC, an information technology analyst firm based in Northborough, MA, covering the many aspects of business and consumer computing and emerging technologies. Follow him on Twitter @jckgld or LinkedIn at https://www.linkedin.com/in/jckgld.



Google Find My Device might also crowdsource locating lost devices

As always, Apple was able to take an existing technology or feature and make it sound like the most innovative thing, one that its rivals will then start copying. Although the ability to locate trackers using other people’s devices nearby has long been used by the likes of Tile, Apple’s AirTags and upgraded Find My network have unsurprisingly garnered much more attention, both good and bad. Regardless of that context, it seems that Google will also follow in Apple’s footsteps and upgrade its Find My Device network to turn every Android device nearby into a homing beacon for your lost phone.

Find My Device isn’t actually new but, just like Apple’s earlier version, it has very limited scope and functionality. Specifically, it can only find devices signed into Google accounts, which in practice means phones, tablets, and Chromebooks. It also only works if the lost device has an Internet connection; otherwise, its location information may go stale.

XDA discovered that the latest Google Play Services APK hides text that suggests an important upgrade to the framework. It refers to an option to allow your phone to help locate other people’s devices, which is pretty much the same crowdsourced system that Tile and Apple are using.

Although it’s not exactly new technology, this crowdsourced Find My Device might take on a different spin when it is Google that’s doing it. The company hasn’t exactly been famous for its privacy practices, and this location-based system will most likely raise more than a few red flags among privacy advocates. Recent exposés accuse Google of continuing to track users’ locations even after they have opted out.

It is too early to judge a feature that Google hasn’t even acknowledged yet, but privacy-minded users might want to keep an eye out for its arrival. This discovery also raises the possibility that Google will launch its own trackers, which would probably stir the privacy hornet’s nest all the more.



Apple rumor tips a new kind of device, at last

A new Apple device was tipped this week with a look and aim unlike any we’ve seen before. This new device is not a phone, or a tablet, or a laptop. It’s not a desktop computer – but it can compute. It would appear that Apple’s latest development is in a device that’s something like a cross between a HomePod and an Apple TV, potentially making Apple TV a device with a screen, at long last.

It would appear that Apple is ready to make Apple TV a part of a device that can actually operate without a third-party display. According to a report from earlier this year by Bloomberg writer Mark Gurman, Apple has begun developing a device that “would combine an Apple TV set-top box with a HomePod speaker” and a camera to allow smart home functions and FaceTime video conferencing abilities.

A more recent report this week suggests that the new device remains in development – suggesting it wasn’t just a concept, but a full-fledged machine that’s far more likely to make its way to the public.

The device could change Apple’s standing in the streaming video service market, allowing Apple TV+ to take its place among the biggest names in the business: Netflix, Hulu, HBO Max, Paramount+, and Disney+.

BELOW: The industrial design of the Apple HomePod could be the basis for the build of the device or devices we’re reporting on today.

Gurman also suggested that Apple is exploring a higher-end HomePod speaker with an iPad-like device attached to an arm that could move to follow a user’s face around a room. This could be used for FaceTime alone, or it could act as an Apple TV device for users who like to be mobile in their own living space – either way, it’s difficult to imagine Apple resisting a design that recalls the iMac G4, a device that was effectively a display attached to a computer base by a long, adjustable neck.

NOTE: That’s the mock-up you see above: an original iMac G4 with its display replaced by an Apple TV home screen. It could really be this simple – but it’ll likely be something slightly more elegant.



Kobo Elipsa combines the function of an e-reader and notebook in one device

The Kobo Elipsa is a new e-reader with many features that are uncommon in the segment. The Elipsa works with accessories, including the Kobo Stylus, combining a digital reading and writing experience in a single device. Designers say they looked beyond the standard e-reader experience to create a reading and writing package bridging the gap between print and e-books and between reading and creating.

The Elipsa has a 10.3-inch E Ink Carta 1200 glare-free screen with ComfortLight adjustable brightness, plus 32 gigabytes of integrated storage, a stylus, and a SleepCover. It’s offered in midnight blue with the stylus in black and the SleepCover in slate blue. Designers behind the product say that the Elipsa allows people who read every day to interact with the book by marking notes, highlighting, and writing in the margins of their digital book.

The digital reader also acts as a digital notebook. Rakuten Kobo CEO Michael Tamblyn says the new e-reader merges the bookstore, book, and notebook, allowing people to capture all the ideas that come from books and writing. The product sounds very much like another digital notebook called the reMarkable 2.

The Kobo Elipsa Pack comes with the Elipsa e-reader, Kobo Stylus, and the SleepCover. Preorders begin today at $399.99 in the US or $499.99 Canadian, and the Elipsa will be in stores and online on June 24. While pricing has only been announced for the US and Canada, the Elipsa will also be available in most of Europe and Asia.

It’s worth noting that while it has a similar feature set and a similar price compared to the reMarkable 2, the reMarkable 2 doesn’t include the stylus or folio cover. That potentially makes the Elipsa more appealing than the competing offering. However, it’s unclear if the Elipsa can convert handwritten notes into digital text.



Device monitoring and management startup Memfault nabs $8.5M


Memfault, a startup developing software for consumer device firmware delivery, monitoring, and diagnostics, today closed an $8.5 million series A funding round. CEO François Baldassari says the capital will enable Memfault to scale its engineering team and make investments across product development and marketing.

Slow, inefficient, costly, and reactive processes continue to plague firmware engineering teams. Often, companies recruit customers as product testers — the first indication of a device issue comes through users contacting customer service or voicing dissatisfaction on social media. With 30 billion internet of things (IoT) devices predicted to be in use by 2025, hardware monitoring and debugging methods could struggle to keep pace. As a case in point, Palo Alto Networks’ Unit 42 estimates that 98% of all IoT device traffic is unencrypted, exposing personal and confidential data on the network.

Memfault, which was founded in 2019 by veterans of Oculus, Fitbit, and Pebble, offers a solution in a cloud-based firmware observability platform. Using the platform, customers can capture and remotely debug issues as well as continuously monitor fleets of connected devices. Memfault’s software development kit is designed to be deployed on devices to capture data and send it to the cloud for analysis. The backend identifies, classifies, and deduplicates error reports, spotlighting the issues likely to be most prevalent.
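
Memfault hasn’t detailed its classification algorithm, but a common approach to the deduplication step described here is to normalize each crash’s backtrace into a signature and bucket reports by its hash. A minimal sketch of the general technique, with a made-up report format:

    # Sketch of backtrace-signature deduplication -- the general
    # technique, not Memfault's actual implementation.
    import hashlib
    from collections import defaultdict

    def signature(backtrace):
        """Hash the top frames so address-level noise doesn't split buckets."""
        top_frames = [frame["function"] for frame in backtrace[:5]]
        return hashlib.sha256("|".join(top_frames).encode()).hexdigest()[:12]

    def deduplicate(reports):
        buckets = defaultdict(list)
        for report in reports:
            buckets[signature(report["backtrace"])].append(report)
        # Most-populated buckets first: the issues hitting the most devices.
        return sorted(buckets.items(), key=lambda kv: -len(kv[1]))

    reports = [
        {"device": "watch-001",
         "backtrace": [{"function": "ble_send"}, {"function": "radio_isr"}]},
        {"device": "watch-042",
         "backtrace": [{"function": "ble_send"}, {"function": "radio_isr"}]},
    ]
    for sig, group in deduplicate(reports):
        print(f"{sig}: {len(group)} affected device(s)")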

Baldassari says that he, Tyler Hoffman, and Christopher Coleman first conceived of Memfault while working on the embedded software team at smartwatch startup Pebble. Every week, thousands of customers reached out to complain about Bluetooth connectivity issues, battery life regressions, and unexpected resets. Investigating these bugs was time-consuming — teams had to either reproduce issues on their own units or ask customers to mail their watches back so that they could crack them open and wire in debug probes. To improve the process, Baldassari and his cofounders drew inspiration from web development and infrastructure to build a framework that supported the management of fleets of millions of devices, which became Memfault.

By aggregating bugs across software releases and hardware revisions, Memfault says its platform can determine which devices are impacted and what stack they’re running. Developers can inspect backtraces, variables, and registers when encountering an error, and for updates, they can split devices into cohorts to limit fleet-wide issues. Memfault also delivers real-time reports on device check-ins and notifications of unexpected connectivity inactivity. Teams can view device and fleet health data like battery life, connectivity state, and memory usage or track how many devices have installed a release — and how many have encountered problems.

“We’re building feedback mechanisms into our software which allows our users to label an error we have not caught, to merge duplicate errors together, and to split up distinct errors which have been merged by mistake,” Baldassari told VentureBeat via email. “This data is a shoo-in for machine learning, and will allow us to automatically detect errors which cannot be identified with simple heuristics.”

Memfault

IDC forecasts that global IoT revenue will reach $742 billion in 2020. But despite the industry’s long and continued growth, not all organizations think they’re ready for it — in a recent Kaspersky Lab survey, 54% said the risks associated with connectivity and integration of IoT ecosystems remained a major challenge.

That’s perhaps why Memfault has competition in Amazon’s AWS IoT Device Management and Microsoft’s Azure IoT Edge, which support a full range of containerization and isolation features. Another heavyweight rival is Google’s Cloud IoT, a set of tools that connect, process, store, and analyze edge device data. Not to be outdone, startups like Balena, Zededa, Particle, and Axonius offer full-stack IoT device management and development tools.

But Baldassari believes that Memfault’s automation features in particular give the platform a leg up from the rest of the pack. “Despite the ubiquity of connected devices, hardware teams are too often bound by a lack of visibility into device health and a reactive cycle of waiting to be notified of potential issues,” he said in a press release. “Memfault has reimagined hardware diagnostics to instead operate with the similar flexibility, speed, and innovation that has proven so successful with software development. Memfault has saved our customers millions of dollars and engineering hours, and empowered teams to approach product development with the confidence that they can ship better products, faster, with the knowledge they can fix bugs, patch, and update without ever disrupting the user experience.”

Partech led Memfault’s series A raise with participation from Uncork Capital, bringing the San Francisco, California-based company’s total raised to $11 million. In addition to bolstering its existing initiatives, Memfault says it’ll use the funding to launch a self-service version of its product for “bottom-up” adoption rather than the sales-driven, top-down approach it has today.



Chromebooks vs. Laptops: Which Device Should You Buy?

Many people consider the Chromebook to be the sleeker, quicker, and even simpler cousin of the traditional laptop. Unlike a Mac or Windows system, a Chromebook relies heavily on the internet for everyday tasks.

Since they’re also typically less expensive, you might be wondering how a Chromebook compares to a regular laptop. Are they a waste of money or an affordable diamond in the rough? Read on to find out!

Neither Chromebooks nor laptops need to be expensive. Check out the best Chromebook deals and best laptop sales available now if you’re looking for a discount.

What is a Chromebook?


When Chromebooks first appeared in 2011, they were lightweight, low-cost laptops based on Google’s new platform called Chrome OS. These laptops mainly relied on cloud-based applications rather than traditional software. Over the years, Chromebooks have grown beyond rock-bottom prices, but value is still at the heart of what they offer.

Acer, Asus, HP, Dell, Lenovo, and Samsung sell Chromebooks in various styles and sizes, ranging from ultrabook-type designs to 2-in-1 hybrids to the traditional clamshell laptop.

The cheaper models tend to be larger and less powerful than the slimmer, sleeker premium models. These lower-end Chromebooks are most often seen in schools or as first-time personal laptops. Higher-end Chromebooks like Google’s own Pixelbook feature premium aluminum bodies, fast Intel Core processors, and, in some cases, 4K screens.

Although you can’t buy a $2,000 Chromebook like you can a Windows 10 laptop or MacBook, there is now a wide range of options depending on your needs.

What can a Chromebook do?

Chromebooks ship with their own operating system called Chrome OS, which is based on Linux and uses the Chrome browser as an interface. It has basic computing elements, such as a file manager and an app launcher, but most of what you use are web-based apps that require no downloading.

That might sound limiting at first, but many popular apps already offer web-based versions like Spotify, Netflix, Slack, and Evernote. Due to the prevalence of web applications, many people spend the majority of their time in a web browser anyway. If your typical workflow resembles this scenario, transitioning to a Chromebook will be relatively smooth. Just connect to Wi-Fi and proceed with your browsing as normal.

However, with the addition of the Google Play Store, you can also download Android apps to fill in any software gaps. Their implementation in a laptop setting might be a little funky in some cases — some expand full-screen while others remain locked in smartphone screen mode — but Android apps are available if you really need them.

Chromebooks also support Linux software. If you absolutely need desktop applications, setting up Linux is certainly an option. There are Linux versions of Audacity, Firefox, GIMP, OBS Studio, Steam, VirtualBox, and many more, but your favorite application may not offer a Linux-based variant. Check the developer’s website first before ruling out a Chromebook.

Finally, if you’re a gamer, you have options, but they’re limited. Your best bet is to install Android games or subscribe to Google’s new Stadia streaming service. Installing Steam via Linux is viable, but the typical low-end hardware and minimal storage will limit what you can download and play.

What can’t a Chromebook do?


The limitations of Chrome OS mean you can’t install some important software that you might otherwise need. Some notable examples include certain Adobe applications or any kind of proprietary software that’s restricted to Windows or MacOS. If you rely on similar applications, you’ll either need to find a Linux-based alternative or avoid Chromebooks altogether.

Limitations also extend to performance in general. Chromebooks do tend to run fast, but typically that’s due to the lightweight nature of Chrome OS — it’s not service-heavy in the background like Windows. However, in some cases, you’ll be limited by the components inside. Lower-end Chromebooks tend to use older or low-end processors that can’t compete with what you get in the Windows and Mac space, especially in terms of multitasking. Then again, if you’re looking at spending $200, a Chromebook is a far better option.

On the higher end, there are options like the HP Chromebook x2 or the Pixelbook, and you’ll find familiar processors like the eighth-generation Core i5, which features four cores and plenty of power — Chromebooks tend to fly with these faster options. Some newer Chromebooks, like Samsung’s Galaxy Chromebook and Acer’s Chromebook Spin 713, have Intel’s latest 10th-generation processors, further closing the Chromebook, MacBook, and Windows 10 laptop gap.

Who are Chromebooks for?

Chromebooks are designed with a few specific people in mind. At the forefront are students, as school administrations tend to favor Chromebooks due to their security benefits, sturdy build quality, and software limitations. That means you’ll find cheap Chromebooks in public schools all across the country.

Chromebooks go beyond just cheap, plastic laptops for kids. There are also higher-end options for professionals and college students. Because they tend to be lightweight with long battery life, they are great options for people who need to take their work on the go, whether that’s from class to class or on long flights. Some of these include the Google Pixelbook, Google Pixelbook Go, and the Asus Chromebook Flip C436.

There are certainly those same options in the Windows 10 laptop world. However, in the cheaper price range, Chromebooks can sometimes provide a better value. For example, approximately $500 is where Chromebooks thrive, but Windows 10 laptops at this price tend to get bogged down with a thick chassis and clunky performance.

What Chromebook options are available?


The most expensive Chromebook you can buy is Google’s Pixelbook, with a $1,000 starting price. It represents the high end not only in premium materials and build quality but also performance.

Overall, you’ll find Chromebooks ranging from 11-inch 2-in-1s up to 15-inch options for additional screen real estate. HD resolution is the standard, while Full HD, touchscreen, and 4K options are becoming more common. Intel Celeron processors are a popular choice for today’s Chromebooks — typically, dual-core versions that rarely rise above the 2.0GHz mark — although you’ll find Core i5 and i7 chips as the prices climb.

Most Chromebooks offer 2GB to 4GB of RAM, which is enough for average laptop tasks but low compared to traditional laptop models that regularly offer 8GB or 16GB of RAM. As for storage, Chromebooks don’t have large disk drives, as they depend on the internet for most data purposes. Storage can usually be augmented with an SD card or USB drive if necessary.

For ports, most Chromebooks are largely comparable to laptops, though fewer in number. USB-A, USB-C, and headphone jacks are common connections.

Most Chromebooks have better battery life than the typical laptop. About 10 hours is most common, though newer models are more likely to reach 12 hours. Windows 10 laptops are slowly closing the gap, but on average, Chromebooks last longer.

The truly high-end part of the laptop range, however, doesn’t include Chromebooks. You won’t find six-core or eight-core processors like you get on a laptop like the MacBook Pro 15, Razer Blade, or Dell XPS 15. These content-creation machines and gaming laptops will outclass any Chromebook in terms of performance.

Finally, Chrome OS tablets such as Google’s own Pixel Slate are available, but we wouldn’t recommend those without a keyboard.

Prices

Despite how expensive Chromebooks can get, they are still a more inexpensive option than the majority of traditional laptops. For instance, you could buy more than one Chromebook and still have funds remaining rather than purchasing a single high-end Microsoft laptop for a hefty $3,000.

HP’s latest Chromebook 15 costs around $450, for example, and Lenovo’s Chromebook Flex 15 comes in at just $410. For $226, you can buy a well-known and loved 2017 Samsung model. Chromebooks are so widespread because of their affordability, which makes them accessible to a wider customer base. The only truly expensive outlier in the Chromebook world is the high-tech $1,000 Pixelbook.

A Chromebook’s features will never compare with those of more expensive laptops, but Chromebooks perform just about every task you’d need them to do. They are dependable, trustworthy products that meet spontaneous, budget-friendly needs. Their simple design also makes them astonishingly user-friendly, which is always a great thing, as many people are technologically challenged and may not be entirely comfortable working with laptops or computers.



How to Use Nearby Share on Your Android Device

Apple’s AirDrop is a terrific way to wirelessly swap files between the company’s devices, like from an iPhone to a Mac. Google began working on similar technology to replace Android Beam, its NFC-based system launched in 2011. Called Fast Share, it eventually appeared on Pixel phones in 2019 and was renamed Nearby Share when it became available to all devices running Android 6.0 and newer in 2020.

This guide shows you how to enable and use Nearby Share on Android phones. We also show you how to enable this feature on Chrome OS so you can wirelessly swap files to a Chromebook, just like Apple users do between their MacBooks and iPhones.

Nearby Share requirements

Nearby Share uses several components. Bluetooth is initially used to pair two devices and then, according to Google, Nearby Share determines the best protocol for sharing files: Bluetooth, Bluetooth Low Energy, WebRTC, or peer-to-peer Wi-Fi. The protocol depends on what you’re sharing.
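
Google hasn’t documented the exact selection logic, but the decision it describes can be pictured as a simple heuristic over payload size and available radios. The thresholds below are invented purely for illustration.

    # Hypothetical transport-selection heuristic; Nearby Share's real
    # logic is not public, and these thresholds are invented.
    def pick_transport(payload_bytes, wifi_available, internet_available):
        KB, MB = 1024, 1024 * 1024
        if payload_bytes < 32 * KB:
            # Tiny payloads (links, contacts) fit in a low-energy exchange.
            return "Bluetooth Low Energy"
        if wifi_available:
            # Large files want the high-bandwidth local path.
            return "peer-to-peer Wi-Fi"
        if internet_available and payload_bytes < 10 * MB:
            return "WebRTC"
        return "Bluetooth"

    print(pick_transport(250 * 1024 * 1024, wifi_available=True,
                         internet_available=False))  # -> peer-to-peer Wi-Fi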

Here’s what you need for Nearby Share:

  • Android 6.0 or newer
  • Bluetooth toggled on
  • Location toggled on
  • Two devices within one foot of each other

Enable Nearby Share on Android

These instructions were verified using a Samsung phone with Android 10 and a Google Pixel phone with Android 11.

Step 1: Swipe down from the top to open the Notification shade and verify that Bluetooth is on. If not, tap the Bluetooth tile. You cannot use Nearby Share without Bluetooth.

Step 2: With the Notification shade still pulled down, verify that location is turned on. If not, tap the Location tile. You cannot use Nearby Share without Location services.

Step 3: With the Notification shade still pulled down, tap the cog icon. This opens the Settings panel.

Step 4: Tap Google.

Step 5: Tap Device Connections.

Step 6: Tap Nearby Share.

Step 7: Tap the Toggle next to Turn On to enable this feature.

Enable Nearby Share on Chrome OS (preview)

There are two Chrome flags that you need to enable: One to turn on Nearby Share and one to add it to the Share menu. Since this isn’t baked into a Stable build as of Chrome 88, it doesn’t work exactly as Google intended. The “sharesheet” aspect removes all sharable options except for Nearby Share.

That said, when you’re done experimenting with this feature, you may want to change the “sharesheet” flag back to Default, so your other sharing options reappear.

Step 1: Open the Chrome browser and type chrome://flags in the address field.

Step 2: Search for “nearby.”

Step 3: Change the setting from Default to Enabled.

Step 4: Search for “sharesheet.”

Step 5: Change the setting from Default to Enabled.

Step 6: Click the Restart button as prompted.


Step 7: Click the Quick Settings Panel (system clock) followed by the Settings cog on the pop-up menu.

Step 8: Select Connected Devices on the left.

Step 9: Click the Toggle next to Nearby Share on the right to turn this feature on.

Step 10: Click on Nearby Share again to adjust the settings. See the “Edit Nearby Share settings” section below for more details.

Send and receive with Nearby Share

Here we switch between Google (sender) and Samsung (receiver) devices. The method also applies to Chromebooks.

Step 1: Open your content. In this case, we opened Google Photos to share a screenshot.

Step 2: Tap the Share button. Its location may depend on the app.

Step 3: Tap Nearby Share. If you don’t see it, tap the More button displayed under Share to Apps and then tap Nearby on the following screen.

Note: The Nearby Share button should appear on the main Share to Apps strip after you use Nearby Share for the first time.

Step 4: The Nearby Share panel rolls up on the screen with a thumbnail of the file you’re sending. The Looking for Nearby Devices section should change to list nearby devices. Tap on the Receiving device.

Step 5: On the receiving device, tap Accept or Decline.

Share apps with Nearby Share

Here’s how to send and receive apps using Nearby Share.

Send an app

Step 1: Tap to open the Google Play Store.

Step 2: Tap the three-line “hamburger” icon in the top left corner.

Step 3: Tap My Apps & Games.

Step 4: Tap the Share category at the top.

Step 5: Tap Send.

Step 6: Tap Deny, Only This Time, or While Using the App on a prompt asking about the device’s location.

Step 7: Tap the box next to the app(s) you want to share.

Step 8: Tap the green Paper Airplane Send icon to finish.

Receive an app

Step 1: Tap to open the Google Play Store.

Step 2: Tap the three-line “hamburger” icon in the top left corner.

Step 3: Tap My Apps & Games.

Step 4: Tap the Share category at the top.

Step 5: Tap Receive.

Step 6: Compare the pairing code displayed on both devices and tap Receive if they match. If not, tap Cancel.

A note about sending and receiving

By default, Nearby Share is based on your contact list. You can choose to share with all contacts or some contacts. If you choose the latter, you’re prompted to toggle each person in your Google Contacts list. This essentially prevents strangers from sending files to your Android device, although you can also set your device to Hidden.

Typically, if everything is working correctly, the receiving device will see a pop-up window with the following message: Device Nearby is Sharing. Tap to Become Visible. Normally you don’t need to tap this notification if you know a file is incoming from a device associated with your Google Contacts. Instead, a slide-up prompt appears saying that a device is sharing. The user then taps the Accept or Decline options accordingly.

However, Android devices have a Nearby Share tile on the Quick Settings panel, although you may need to edit this panel to add the tile. This tile is primarily used if you want to receive files from someone not on your Google Contacts list. Tapping it launches a slide-up panel showing that the device is now discoverable to everyone, including individuals not in your Google Contacts. Here you can tap the Cog icon to access the Nearby Share settings.

That leads us to the next section.

Edit Nearby Share settings

Step 1: Swipe down from the top to open the Notification shade and then tap and hold on the Nearby Share tile. Alternatively, you can use the same instructions we provided for enabling Nearby Share, listed above.

Step 2: Tap Device Name to change your device’s identification (like Kevin’s Pixel) when sending and receiving, if needed.

Step 3: Tap Device Visibility to toggle between three modes: All Contacts, Some Contacts, or Hidden. If you select Some Contacts, you’ll see a darkened toggle next to each contact. Tap each Toggle to allow Nearby Share connectivity with these contacts.

Step 4: Tap Data Usage and change how you want to send data: Data for transferring small files over a mobile connection, Wi-Fi Only for using the local network, or Without Internet to use peer-to-peer sharing (likely Bluetooth). Tap Update if you made any changes.

Step 5: If you want to turn off Nearby Share, tap the Toggle next to On.
