Categories
Game

Huawei App Store Bug Gives Anyone A Free Pass At Paid Apps

Android app developer Dylan Roussel discovered a bug that, while non-trivial to exploit, isn’t impossible either. In a nutshell, Huawei’s AppGallery exposed certain details about an app, including the download link for the Android package (APK). While that may be normal, the bug is that the same link can be used to directly download a paid app without having to pay for it or even having to verify anything.

This bug has two damaging consequences for Huawei’s app marketplace. The first is more obvious in that anyone with a bit of technical know-how can easily bypass restrictions and download paid apps for free. The bigger threat, however, is that the AppGallery makes it too easy to download apps, both paid and free, outside of official channels, which in turn makes it too easy to pirate apps on that platform. This creates a very large deterrent for developers who may not bother putting in the work needed to offer their apps for Huawei’s ecosystem.

This vulnerability was discovered and reported back in February 2022, but it took Huawei 90 days to send a response. The company did apologize for the miscommunication and delay, citing the logistics of fixing AppGallery across different regions, since the store apparently works quite differently in each. A fix is promised to arrive by May 25, but the bug’s existence raises concerns about similar issues that may still be lurking undiscovered.

Repost: Original Source and Author Link

Categories
Game

How To Change Your Default Apps On Android Phones

But what if you don’t have a certain app installed on your phone? For example, say someone sends you a Play Store link for a cool chess game. Tapping on the link takes you to the Play Store listing of an application that supports the Instant App feature. In a nutshell, an Instant App is a condensed web-based version of an app that lets you get a brief taste of it without having to download and install it. Not all Android applications support the Instant App system, and it is up to the developers to offer the convenience.

For applications that support the Instant App functionality, there’s an option to specify the default link-opening behavior — open the link in a browser, or launch the Instant App version directly. To do so, follow the steps below:

1. In the Settings app, head over to the Apps section.

2. Scroll down and open the Default apps section, and then select the Opening links option at the bottom.

3. On the next page, select Instant Apps preferences and then enable the toggle that says Upgrade web links.

4. Once enabled, users will be able to directly access the Instant Play option for eligible apps, as is depicted in the image above.

Repost: Original Source and Author Link

Categories
AI

Amazon launches AWS RoboRunner to support robotics apps

At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a new robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together. Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.

The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. A recent report from Automation World found that the bulk of companies that embraced robotics in the past year did so to decrease labor costs, increase capacity, and navigate a lack of available workers. The same survey found that 44.9% of companies now consider the robots in their assembly and manufacturing facilities to be an integral part of daily operations.

Amazon — a heavy investor in robotics itself — hasn’t been shy about its intent to capture a larger part of a robotics software market that is anticipated to be worth over $7.52 billion by 2022. In 2018, the company unveiled AWS RoboMaker, a product to assist developers with deploying robotics applications with AI and machine learning capabilities. And Amazon earlier this year rolled out SageMaker Reinforcement Learning Kubeflow Components, a toolkit supporting the RoboMaker service for orchestrating robotics workflows.

IoT RoboRunner

IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository.

The goal of IoT RoboRunner is to simplify the process of building management apps for fleets of robots, according to Amazon. As enterprises increasingly rely on robotics to automate their operations, they’re choosing different types of robots, making it more difficult to organize their robots efficiently. Each robot vendor and work management system has its own, often incompatible control software, data format, and data repository. And when a new robot is added to a fleet, programming is required to connect the control software to work management systems and program the logic for management apps.

Developers can use IoT RoboRunner to access the data required to build robotics management apps and leverage prebuilt software libraries to create apps for tasks like work allocation. Beyond this, IoT RoboRunner can be used to deliver metrics and KPIs via APIs to administrative dashboards.
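For a sense of what this looks like from a developer’s seat, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes the preview “iot-roborunner” client and its site, worker-fleet, and worker operations; the exact operation names, parameters, and response fields shown are illustrative and may differ in the preview.

```python
# Illustrative sketch only: assumes boto3 exposes the preview
# "iot-roborunner" client; operation/parameter names may differ.
import boto3

roborunner = boto3.client("iot-roborunner", region_name="us-east-1")

# A site represents a physical facility, such as a warehouse.
site = roborunner.create_site(name="warehouse-east-1", countryCode="US")

# A worker fleet groups robots of one vendor or type under that site.
fleet = roborunner.create_worker_fleet(name="amr-fleet-a", site=site["arn"])

# Individual robots register as workers; their positions and task data
# then land in RoboRunner's central repository for management apps to use.
worker = roborunner.create_worker(name="amr-001", fleet=fleet["arn"])

print("Registered worker:", worker["arn"])
```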

IoT RoboRunner competes with robotics management systems from Freedom Robotics, Exotec, and others. But Amazon makes the case that IoT RoboRunner’s integration with AWS — including services like SageMaker, Greengrass, and SiteWise — gives it an advantage over rivals on the market.

“Using AWS IoT RoboRunner, robotics developers no longer need to manage robots in silos and can more effectively automate tasks across a facility with centralized control,” Amazon wrote in a blog post. “As we look to the future, we see more companies adding more robots of more types. Harnessing the power of all those robots is complex, but we are dedicated to helping enterprises get the full value of their automation by making it easier to optimize robots through a single system view.”

AWS Robotics Startup Accelerator

Amazon also announced the Robotics Startup Accelerator, which the company says will foster robotics companies by providing them with resources to develop, prototype, test, and commercialize their products and services. “Combined with the technical resources and network that AWS provides, the strategic collaboration will help robotics startups and the industry overall to experiment and innovate, while connecting startups and their technologies with the AWS customer base,” Amazon wrote in a blog post.

Startups accepted into the Robotics Startup Accelerator program will consult with AWS and MassRobotics experts on business models and with AWS robotics engineers for technical assistance. Benefits include hands-on training on AWS robotics solutions and up to $10,000 in promotional credits to use AWS IoT, robotics, and machine learning services. Startups will also receive business development and investment guidance from MassRobotics and co-marketing opportunities with AWS via blogs and case studies.

Robotics startups — particularly in industrial robotics — have attracted the eye of venture capitalists as the trend toward automation continues. From March 2020 to March 2021, venture firms poured $6.3 billion into robotics companies, up nearly 50% from March 2019 to March 2020, according to data from PitchBook. Over the longer term, robotics investments have climbed more than fivefold throughout the past five years, to $5.4 billion in 2020 from $1 billion in 2015.

“Looking ahead, the expectations of robotics suppliers are bullish, with many believing that with the elections over and increased availability of COVID-19 vaccines on the horizon, much demand will return in industries where market skittishness has slowed robotic adoption,” Automation World wrote in its report. “Meanwhile, those industries already seeing an uptick are expected to plough ahead at an even faster pace.”

Repost: Original Source and Author Link

Categories
AI

CoreWeave, a provider of cloud services for GPU-powered apps, gets $50M

CoreWeave, a specialized cloud services provider for GPU-based workloads, said it has partnered with GPU giant Nvidia to enable companies to scale their infrastructure, cut costs, and improve efficiencies.

The relatively new cloud services provider offers on-demand computing resources across several verticals in the tech space and said it will continue to improve performance with help from a $50 million round of investment from Magnetar Capital.

Data scientists, machine learning engineers, and software engineers, to name a few roles, can utilize CoreWeave to deploy artificial intelligence applications and machine learning models. Furthermore, clients can manage their complex AI solutions, powered by GPUs, with the help of CoreWeave’s specialized DevOps expertise, the company said.

No more slow predictions from cloud services

A baseline problem with data science modeling on legacy cloud service providers is slow prediction. CoreWeave said it can cut prediction times in half by reducing inference latency, while serving requests three times faster. As a result, the company estimated that cloud costs could be cut by 75%. These benefits translate to less downtime, freeing data scientists to spend more time on other projects and ultimately enabling a more comprehensive, data science-enabled company, CoreWeave suggested. Additionally, those cost savings can be put toward hiring more talent or investing in other resources.

“$50 million is a big number, and it allows us to accelerate our growth, fortify our infrastructure, and expand our footprint in ways previously unimaginable, despite being profitable since day one,” CoreWeave’s CEO, Michael Intrator, wrote in a blog post announcing the funding. “In 2022, we’ll be launching data centers in new regions, doubling down on our commitment to deliver the industry’s broadest range of high-end compute, and continuing to provide the world’s best infrastructure for on-demand, compute-intensive workloads.”

Repost: Original Source and Author Link

Categories
Game

‘Pokémon Go’ maker Niantic is helping others create AR metaverse apps

Niantic Labs is offering everyone the chance to get their hands on the tech behind Pokémon Go and Pikmin Bloom so they can build their own augmented reality and “real-world metaverse” apps. Developers can start using the Niantic Lightship platform today. The company also announced a $20 million investment fund to back developers that “share our vision for the real-world metaverse and contribute to the global ecosystem we are building.”

Developers can use Niantic’s toolkit to create real-time 3D mesh maps so apps can understand the surfaces and topography of the world surrounding a device. Other APIs help apps tell the difference between aspects of an environment, such as the ground, sky, water, and buildings. The toolkit also enables developers to make apps that allow up to five players to take part in the same AR multiplayer session, keeping all of their content and interactions in sync.

The tools are mostly free. The multiplayer APIs will be available at no cost for the first six months no matter how many users an app has. After that, Niantic will charge a fee if the APIs are used in an app with more than 50,000 monthly active users.
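As a toy illustration of that pricing rule (free for the first six months regardless of usage, then a fee only for apps above 50,000 monthly active users), here is a small Python sketch. The function and constants are purely illustrative, not Niantic’s actual billing logic.

```python
# Toy sketch of the multiplayer API pricing rule as described above.
# This is not Niantic's billing code; names and structure are illustrative.

FREE_MONTHS = 6
FREE_MAU_THRESHOLD = 50_000

def multiplayer_apis_are_free(months_since_launch: int, monthly_active_users: int) -> bool:
    """Return True if an app owes no fee for the multiplayer APIs."""
    if months_since_launch <= FREE_MONTHS:
        return True  # free for everyone during the introductory period
    return monthly_active_users <= FREE_MAU_THRESHOLD

# Example: a ten-month-old app with 75,000 monthly active users would pay.
print(multiplayer_apis_are_free(10, 75_000))  # False
```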

Several notable brands have taken part in a private beta of the development kit, including Universal Pictures, PGA of America and Warner Music Group. Coachella has created an AR experience that its festival attendees will be able to check out next year. They’ll be able to see a large version of Coachella’s butterfly landing on the seven-story Spectra rainbow walkway tower.

Meanwhile, Shueisha is working with developer T&S to bring characters from One Piece and other manga into the real world with AR. That app will be available in 2022.

Niantic’s vision of the metaverse is very much different from the virtual reality-centered future Facebook’s parent company Meta has in mind. In a blog post in August, CEO John Hanke suggested that the “real-world metaverse” is about connecting the physical and digital worlds, rather than existing purely as a virtual experience. With that in mind, his company has been working on AR glasses with Qualcomm over the last couple of years.

Repost: Original Source and Author Link

Categories
AI

At Ignite 2021, Microsoft showcases functionality for scalable AI apps

Part of what Microsoft was determined to showcase at its Ignite conference, which kicked off Tuesday, is the extent to which Azure is maturing into an architecture for building scalable, AI-infused apps that also work in hybrid cloud and edge computing scenarios.

Microsoft put out a barrage of AI, data analytics, and DevOps announcements for Azure at Ignite. They include updates to services like Azure Stack HCI, a Windows Server 2019-based cluster that uses “validated” hardware to run virtualized workloads locally; Azure Arc, a hybrid cloud platform introduced in 2019; and Azure Kubernetes Service (AKS), Microsoft’s implementation of the open source standard for containerized applications.

Taken together, they provide a way of packaging applications or application infrastructure such as SQL Server instances to make them portable between cloud, datacenter, and edge locations, in addition to being manageable within the same framework.

The conference gave Microsoft an opportunity to highlight how beta customers are putting these capabilities to work. Vinh Tran, Head of Cloud Engineering at RBC, Canada’s largest bank, told the audience he’d used Azure Arc to automate and manage on-premises database deployments in Kubernetes containers. By simplifying deployments, whether on-premise or in the cloud, Azure Arc has helped his data team stretch its skill sets and capabilities, he said. “It’s allowed us to focus more on the integration of these products and capabilities into our systems than on building, securing, and managing them ourselves,” he said. “It’s allowed us to reduce our operational overhead managing on-premise databases at scale.”

Another customer Microsoft cited several times is SKF, a Swedish manufacturer of ball bearings and industrial seals with more than 100 factories in 28 countries. By extending cloud services for factory automation to run within their factories, SKF said it saved 40% in hardware costs and 30% in overtime related to machine downtime.

A factory is a good example of the difference between hybrid cloud computing and edge computing. A hybrid architecture might mean running applications or data services in a manufacturer’s datacenter, whereas edge computing would mean putting the technology in the individual factories. Microsoft says it is designing the Azure architecture so that even sophisticated machine learning “inferencing” can happen within an edge location without the need to ship data to the cloud.

This is also the reason Microsoft is making Azure Virtual Desktop deployable on the Azure Stack HCI, so it can be deployed in corporate datacenters or offices “at the edge.”

An architecture for scalable AI

“When I talk to customers about their cloud strategy and adoption, I often hear that they want the new innovation and agility that the cloud enables,” Scott Guthrie, executive VP for cloud and AI at Microsoft, said in a keynote presentation. “But they also need to integrate with the existing technology investments within their organizations. They sometimes have dozens, hundreds, or even thousands of servers, applications, and databases that they need to manage across their multiple cloud and on-premises environments,” he said. Moving on-premises resources to the cloud often isn’t practical for regulatory or latency reasons, he said.

Healthcare is a good example, where privacy and security might dictate that patient data not leave the hospital, and where data-intensive applications like medical imaging will perform better with data processed locally. A medical technology customer exemplifying this approach is Siemens Healthineers, which is using Azure Arc to deploy and maintain apps across tens of thousands of edge locations, including clinics and diagnostics equipment, according to Microsoft.

Building on Kubernetes allows Microsoft to claim portability to any container platform that adheres to Cloud Native Computing Foundation standards, in the cloud or otherwise. It’s also an enabling technology for multicloud deployments and the ability to migrate between clouds or between cloud and on-premises environments.

However, Guthrie acknowledged that learning the intricacies of Kubernetes can also be “a little daunting” for the uninitiated. Microsoft’s workaround is Azure Container Apps, a simplified packaging of the technology. “Container Apps enables you to easily start building container-based microservices with just your app code, while giving you the flexibility to choose to upgrade to our full Azure Kubernetes Service if and when you’re ready to leverage the full power of Kubernetes.”

Microsoft’s approach is not necessarily unique. Amazon Web Services offers its EKS Anywhere and EKS Connector for Kubernetes, for example. But Microsoft’s on-premise clout is allowing it to claim fans that, in addition to those mentioned above, include the likes of Walmart, Starbucks, HSBC, and the UK’s National Health Service.

Repost: Original Source and Author Link

Categories
AI

Multiverse Computing utilizes quantum tools for finance apps

Despite great efforts to unseat it, Microsoft Excel remains the go-to analytics interface in most industries — even in the relatively tech-advanced area of finance.

Could this familiar spreadsheet be the portal to futuristic quantum computing in finance? The answer is “yes,” according to the principals at Multiverse Computing.

This San Sebastián, Spain-based quantum software startup is dedicated to forging forward with finance applications of the quantum kind, and its leadership sees the Excel spreadsheet as a logical means to begin to make this happen.

“In finance, everybody uses Excel; even Bloomberg has connections for Excel tools,” said Enrique Lizaso Olmos, CEO of Multiverse Computing, which recently gained $11.5 million in a funding round headed by JME Ventures.

Excel is a key entry point for emerging quantum methods, Lizaso Olmos said, as he described how users can drag and drop data sets from Excel columns and rows into Multiverse’s Singularity SDK, which then launches quantum computing jobs on available hardware.

From their desks, for example, Excel-oriented quants can analyze portfolio positions of growing complexity. The Singularity SDK can assign their calculations to the best quantum structure, whether it’s based on ion traps, superconducting circuits, tensor networks, or something else. Jobs can run on dedicated classical high-performance computers as well.
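To make that workflow concrete, here is a minimal Python sketch of the idea: read portfolio positions out of an Excel sheet with pandas and hand them to a solver client. Multiverse’s Singularity SDK is not public, so the SingularityClient class, its method, and the backend name below are hypothetical placeholders; only the pandas call is a real API.

```python
# Minimal sketch of the Excel-to-quantum workflow described above.
# pandas.read_excel is real; SingularityClient and its method are
# hypothetical stand-ins for Multiverse's (non-public) Singularity SDK.
import pandas as pd

# Portfolio positions exported from the analyst's spreadsheet.
positions = pd.read_excel("portfolio.xlsx", sheet_name="positions")

class SingularityClient:  # hypothetical placeholder, not the real SDK
    def optimize_portfolio(self, frame: pd.DataFrame, backend: str) -> pd.DataFrame:
        # Stand-in behavior: equal weights. A real solver would assign the
        # job to ion traps, superconducting circuits, tensor networks, or
        # classical HPC hardware, as described in the article.
        result = frame.copy()
        result["weight"] = 1.0 / len(result)
        return result

client = SingularityClient()
weights = client.optimize_portfolio(positions, backend="tensor-network")
print(weights.head())
```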

Quantum computing for finance

Multiverse’s recently closed seed funding round, led by JME, also included Quantonation, EASO Ventures, CLAVE Capital, and others. Multiverse principals have backgrounds in quantum physics, computational physics, mechatronics engineering, and related fields. On the business side, Lizaso Olmos can point to more than 20 years in banking and finance.

The push to find ways to start work on quantum applications immediately is a differentiator for Multiverse, Lizaso Olmos claims. The focus is on working with available quantum devices that can solve today’s problems in the financial sector.

Observers generally see quantum computing as slow to develop, but the finance sector shows specific early promise, just as it has with a host of earlier emerging technologies. Finance applications are what drive investments like JME’s in Multiverse.

In a recent report, “What Happens When ‘If’ Turns to ‘When’ in Quantum Computing,” Boston Consulting Group (BCG) estimated equity investments in quantum computing nearly tripled in 2020, with further uptick seen for 2021. BCG states “a rapid rise in practical financial-services applications is well within reason.”

It’s not surprising, then, that Multiverse has worked with BBVA (Banco Bilbao Vizcaya Argentaria) to showcase both quantum computing in finance and Singularity’s potential to optimize investment portfolio management, as well as with Crédit Agricole CIB to implement algorithms for risk management.

“We have been working on real problems, using real quantum computers, not just theoretical things,” Lizaso Olmos said.

Why quantum-inspired work matters

Multiverse pursues both quantum and quantum-inspired solutions for open problems in finance, according to Román Orús, cofounder and chief scientific officer at the company. Such efforts create algorithms that mimic some techniques used in quantum physics, and they can run on classical computers.

“It’s important to support quantum-inspired algorithm development because it can be deployed right away, and it’s educating clients about the formalism that they need for moving to the purely quantum,” Orús said.

The quantum-inspired work is finding some footing in quantum machine learning applications, he explained. There, financial applications that could benefit include credit scoring in lending, credit card fraud, and instant transfer fraud detection.

“These methods come from physics, and they can be applied to speed up and improve machine learning algorithms, and also optimization techniques,” Orús said. “The first ones to plug into finance are super successful.”

Being specific about applications matters, both Orús and Lizaso Olmos emphasize: whether the tools are quantum or quantum-inspired, the applications users in finance pursue must be selected wisely. In other words, this is not your parents’ general-purpose computing.

Repost: Original Source and Author Link

Categories
Computing

Why Having Android Apps in Windows 11 Is a Game Changer

Microsoft finally rolled out the much-anticipated ability to download Android apps on the Windows 11 Insider Preview, and it is turning out to be a game changer. For tablet devices like the Surface Pro 8, Android apps feel like a natural fit.

Learning from the experience of using iPhone apps on an iPad or Mac, or Android apps in Chrome OS, it feels like Microsoft just took a significant step toward marrying software with Surface hardware. Windows 11 itself made a big difference, but now with Android apps available in Windows 11, the 2-in-1 dream is finally coming to fruition.

Getting apps is easy

With the latest Windows Insider beta of Windows 11 already installed on my Surface Pro 8, getting Android apps up and running was as simple as could be. Yet, I do want to note that not everyone running Windows 11 will get this experience.

For now, you’ll need to be a Windows Insider. Microsoft is still keeping Android apps in Windows 11 as a feature exclusive to the “beta branch” of Windows 11 until beta testing is complete so it can gather proper feedback. Enrolling a PC to this branch, though, is safe, and it takes less than five minutes.

Once up and running with the beta version of Windows 11, installing Android apps is quite simple. Just update the Microsoft Store app to the latest version, and search for “Amazon Appstore.” Install it following the instructions on the screen (I just needed to click “OK”) and the Windows Subsystem for Android will take care of things in the background automatically.

From there, I opened up the dedicated Amazon Appstore, clicked a listing, and hit the Install button. Android apps are listed exclusively in the Amazon Appstore, but they install right to your Start Menu, so there’s no need to go digging.

From there, I was downloading apps left and right, and my Surface Pro 8 finally felt like a proper tablet.

The performance is just right

The Windows Subsystem for Android and performance in task manager.

In opening up Android apps on Windows on my Surface Pro 8 — like the Kindle app, the Amazon App Store, and Subway Surfers — I never felt any slowdowns. I expected things to run with some lag due to the underlying virtualization, but the Surface Pro 8’s hardware managed to keep up, using up to 1.6GB of RAM maximum for the Windows Subsystem for Android (WSA).

Performance is worth talking about because it explains why this all just works. Android apps in Windows run under the WSA — essentially an Android virtual machine running in the background of the operating system. It’s part of the reason Microsoft requires 8GB of RAM for things to work, and why things end up feeling so native.

In fact, over on Reddit, someone managed to run Geekbench testing on the WSA by sideloading the Android Geekbench 5 app on Windows 11. On an XPS 13 7390 2-in-1, the WSA scored 997 in single-core testing and 3,122 in multi-core testing, compared to native Windows 11 scores of 1,181 and 3,642 on the same machine.
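For context, the virtualization overhead implied by those numbers is modest; a quick back-of-the-envelope calculation with the scores quoted above puts the WSA at roughly 84% of native single-core and 86% of native multi-core performance on that machine.

```python
# Relative WSA performance vs. native Windows 11, using the Geekbench 5
# scores quoted above for the XPS 13 7390 2-in-1.
wsa_single, native_single = 997, 1181
wsa_multi, native_multi = 3122, 3642

print(f"Single-core: {wsa_single / native_single:.0%} of native")  # ~84%
print(f"Multi-core:  {wsa_multi / native_multi:.0%} of native")    # ~86%
```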

Remember when people first benchmarked Windows 10 running virtualized on M1 Macs and said it was better than running it natively on a Surface Pro X? It is true that this is a different situation, but the Android virtualization on Windows 11 seems pretty impressive, if not quite on par with the ARM-based app emulation Apple has accomplished on its own hardware.

The secret sauce? Microsoft says it partnered closely with Intel. Intel’s Bridge Technology enables ARM-only apps to run on AMD and Intel devices.

When it comes to overall performance, Android apps feel like they run on the Surface Pro 8 just like Windows apps do. Comparatively, it’s like opening up an iPad and running an iPhone app. Everything works fine. Windows didn’t freeze or skip a beat. With Windows 11 being as optimized as it is already, thanks to the way it handles foreground tasks and CPU processes, Android apps feel right at home.

It works natively

Android apps on Windows 11 running natively.

Android apps on my Surface also look and feel like native Windows 11 apps. When I open an Android app, it has a title bar just like a Windows app does. I can close the app, size it how I want, and even use Windows 11’s Snap Group feature with that Android app. And Android apps integrate themselves into the Windows 11 Start Menu, Taskbar, and even the clipboard.

All around, Windows 11 Android apps feel like they belong. On a Surface Pro 8 device, in particular, it feels even better. I even get to use the new Touch keyboard to type in Android apps. The 120 Hz screen means that the Kindle Android app feels alive, and games like Subway Surfers really make me feel like I’m part of the action. Sideloaded Android apps like Sketchbook work with the Surface Slim Pen 2 as well.

It’s a big contrast to what happens when you try to run iOS apps on Mac devices. The apps are clunky, carry over the same touch controls they have on iPads and iPhones, and yet don’t actually support touch on the Mac. Even on iPads, running an iPhone app under emulation doesn’t give you a windowed mode. Instead, the app goes full-screen. Microsoft is onto something here, and it’s unlocking the Surface’s full potential.

The door is wide open

Sideloaded Android apps in Windows 11.

Android is open source, so that leaves the door wide open for developers and the community. I initially spent a lot of time on my Surface Pro 8 playing with the 50 apps already available in the Amazon Appstore. However, it is possible to sideload any Android app you’d like, opening up even more ways to enjoy the Surface.

Doing so requires switching the WSA into developer mode, as well as using the Android Debug Bridge (ADB) developer tools, the Windows Terminal, and an APK file for the Aurora app store. (Try this at your own risk. We don’t recommend doing this, and Digital Trends is not responsible for damage to your computer.)
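For the technically inclined, those steps can be scripted. Here is a hedged Python sketch that shells out to adb to connect to the subsystem and install a local APK; it assumes WSA’s developer mode is already enabled, that 127.0.0.1:58526 (the commonly reported WSA ADB endpoint) is correct for your machine, and that the APK path is a placeholder. The same risk caveats apply.

```python
# Illustrative sideloading sketch: shells out to adb (from Android's
# platform-tools) to connect to the Windows Subsystem for Android and
# install a local APK. Assumes WSA developer mode is enabled.
# 127.0.0.1:58526 is the commonly reported WSA ADB address; it may differ.
import subprocess

WSA_ADDRESS = "127.0.0.1:58526"    # assumption: default WSA ADB endpoint
APK_PATH = r"C:\apks\example.apk"  # placeholder path to a sideloadable APK

subprocess.run(["adb", "connect", WSA_ADDRESS], check=True)
subprocess.run(["adb", "-s", WSA_ADDRESS, "install", APK_PATH], check=True)
```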

Something similar used to be possible on M1 Macs: you could use a third-party program to sideload any iOS app, but Apple quickly shut that down. On a Surface, with Android being open source plus the additional command line tools, I was able to use my own apps from third-party sources. However, apps that need Google Play services (Snapchat or Chrome, for example) do not work, since WSA does not include them.

Yet on my Surface Pro 8, these sideloaded apps felt and performed better than their Windows versions. Instagram’s Android app felt the same as it did on my Pixel 4 XL, with full controls. Microsoft Teams’ Android app, however, felt a lot more laggy than the desktop version, though it does integrate with the Windows 11 notification center.

While things are still in beta for now, this early version of Android on Windows 11 impresses. On Surface devices, apps feel great, look great, and perform great. Things can only get better from here on out.

Repost: Original Source and Author Link

Categories
AI

Amazon’s on-premises device for vision apps, AWS Panorama Appliance, launches publicly

This article is part of a VB special issue. Read the full series: AI and Surveillance.


Amazon today announced the general availability of the AWS (Amazon Web Services) Panorama Appliance, a device that allows customers to use existing on-premises cameras and analyze video feeds with AI. Ostensibly designed for use cases like quality checks and supply chain monitoring, Amazon says that the Panorama Appliance is already being used by companies including Accenture, Deloitte, and Sony.

“Customers in industrial, hospitality, logistics, retail, and other industries want to use computer vision to make decisions faster and optimize their operations. These organizations typically have cameras installed onsite to support their businesses, but they often resort to manual processes like watching video feeds in real time to extract value from their network of cameras, which is tedious, expensive, and difficult to scale,” Amazon wrote in a press release. “Most customers are stuck using slow, expensive, error-prone, or manual processes for visual monitoring and inspection tasks that do not scale and can lead to missed defects or operational inefficiencies.”

By contrast, the Panorama Appliance connects to a local network to perform computer vision processing at the edge, Amazon says. Integrated with Amazon SageMaker — Amazon’s service for building machine learning models — the Panorama Appliance can be updated and deployed with new computer vision models. Companies that opt not to create their own models can choose from solutions offered by Deloitte, TaskWatch, Vistry, Sony, Accenture, and other Amazon partners.
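On the management side, the appliance can be administered programmatically through the AWS Panorama APIs. A minimal sketch with the AWS SDK for Python (boto3) might look like the following; it assumes the “panorama” client’s device-listing operation, and the response field names are from memory, so treat them as approximate.

```python
# Illustrative sketch: list Panorama appliances registered to an account
# using boto3's "panorama" client. Response field names ("Devices",
# "Name", "DeviceId") are from memory and may differ; treat as approximate.
import boto3

panorama = boto3.client("panorama", region_name="us-east-1")

response = panorama.list_devices()
for device in response.get("Devices", []):
    print(device.get("Name"), device.get("DeviceId"))
```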

To date, customers have developed models running on the Panorama Appliance for manufacturing, construction, hospitality, and retail, Amazon says. Some are analyzing retail foot traffic to inform store layouts and displays, while others are identifying peak times in stores to pinpoint where staff might be needed.

The Cincinnati/Northern Kentucky International Airport in Hebron, Kentucky, is using the Panorama Appliance to monitor congestion across airport traffic lanes. With the help of Deloitte, The Vancouver Fraser Port Authority has applied the Panorama Appliance to track containers throughout its facilities. And Tyson has built models on the device to count packaged products on lines for quality assurance.

“Organizations across all industries like construction, hospitality, industrial, logistics, retail, transportation, and more are always keen to improve their operations and reduce costs. Computer vision offers a valuable opportunity to achieve these goals, but companies are often inhibited by a range of factors including the complexity of the technology, limited internet connectivity, latency, and inadequacy of existing hardware,” VP of Amazon machine learning at AWS Swami Sivasubramanian said in a statement. “We built the Panorama Appliance to help remove these barriers so our customers can take advantage of existing on-premises cameras and accelerate inspection tasks, reduce operational complexity, and improve consumer experiences through computer vision.”

Privacy implications

Since its unveiling at Amazon’s re:Invent 2020 conference in December, experts have raised concerns about how the Panorama Appliance could be misused. While the purported goal is “optimization,” the device could be coopted for other, less humanitarian intents, like allowing managers to chastise employees in the name of productivity.

In the promotional material for the Panorama Appliance, Fender says it uses the product to “track how long it takes for an associate to complete each task in the assembly of a guitar.” Each state has its own surveillance laws, but most give wide discretion to employers so long as any equipment they use to track employees is plainly visible. There’s no federal legislation that explicitly prohibits companies from monitoring staff during the workday.

Bias could also arise from the computer vision models deployed to the Panorama Appliance if the models aren’t trained on sufficiently diverse data. A study conducted by researchers at the University of Virginia found that two prominent research-image collections displayed gender bias in their depiction of sports and other activities, showing images of shopping linked to women while associating things like coaching with men. Even differences in the sun path between the northern and southern hemispheres and variations in background scenery can affect model accuracy, as can the varying specifications of camera models like resolution and aspect ratio.

Recent history is filled with examples of the consequences of training computer vision models on biased datasets, like virtual backgrounds and automatic photo-cropping tools that disfavor darker-skinned people. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as “gorillas.” And the nonprofit AlgorithmWatch has shown that Google’s Cloud Vision API at one time automatically labeled thermometers held by a Black person as “guns” while labeling thermometers held by a light-skinned person as “electronic devices.”

Amazon has pitched — and employed — surveillance technologies before. The company’s Rekognition software sparked protests and pushback, which led to a moratorium on the use of the technology. And Amazon’s notorious “Time Off Task” system dings warehouse employees for spending too much time away from the work they’re assigned to perform, like scanning barcodes or sorting products into bins.

An Amazon spokeswoman recently told the BBC that the Panorama Appliance was “designed to improve industrial operations and workplace safety” and that how it is used is up to customers. “For example, AWS Panorama does not include any pre-packaged facial recognition capabilities,” the spokesperson said. All its machine learning functions can happen on the device, they added, “and [relevant data] never has to leave the customer’s facility.”

The Panorama Appliance is now available for sale through Amazon’s AWS Elemental service in the U.S., Canada, U.K., and E.U.

Read More: VentureBeat's Special Issue on AI and Surveillance

Repost: Original Source and Author Link

Categories
Computing

Android Apps on Windows 11? Here’s What They Will Look Like

The ability to easily run smartphone/tablet apps on PCs and laptops is something that many users have been longing for, and it seems that Microsoft may be working on implementing just that. Screenshots showcasing Android apps being run on Windows 11 have surfaced, giving insight into what this feature may look like.

Although Microsoft has previously stated that Windows 11 will support Android apps, the operating system launched without that feature. It’s likely that Microsoft is still piloting it and only allowing a select circle of users to try it out via the Dev channel of the Windows Insider Program. However, even upon completion, this is likely not going to be released as a mandatory part of a Windows Update. The leaked information suggests that Microsoft is instead planning to make it optional via the Microsoft Store.

Despite Microsoft’s urging to keep it confidential, leaked screenshots of the feature were posted on Bilibili, a Chinese social media platform. One of the screenshots shows the Windows 11 interface running the Microsoft Store with an app called “Windows Subsystem for Android” available for download. The program will use Intel Bridge Technology to translate ARM code to x86.

This is similar to the Windows Subsystem for Linux, which allows Windows users to run Linux binaries on their computers. That program was also recently made available through the Microsoft Store, which allows for more streamlined and faster updates that don’t have to be tied to Windows Update.

Windows 11 running Android apps.

The other screenshots give more of a glimpse into what Android on Windows 11 might entail. It appears that Microsoft may be trying to integrate the apps to the same extent as regular programs, allowing users to pin them to the taskbar, run multiple apps at once, and receive notifications.

One of the leaked screenshots shows the WeChat Android app. The app is pinned to the taskbar alongside Google Chrome, suggesting that this will be possible in the Windows Subsystem for Android program. Another screenshot clearly shows the ability to run several apps at once, resize them, and receive notifications on the taskbar.

Although these first screenshots look promising, it’s possible that both Microsoft and Intel still have a way to go before this feature can be released. There is no official launch date as of yet, but rumors point to the first half of 2022.

Repost: Original Source and Author Link