Meta’s latest VR headset prototypes could help it pass the ‘Visual Turing test’

Meta wants to make it clear it’s not giving up on high-end VR experiences yet. So, in a rare move, the company is spilling the beans on several VR headset prototypes at once. The goal, according to CEO Mark Zuckerberg, is to eventually craft something that could pass the “visual Turing Test,” or the point where virtual reality is practically indistinguishable from the real world. That’s the Holy Grail for VR enthusiasts, but for Meta’s critics, it’s another troubling sign that the company wants to own reality (even if Zuckerberg says he doesn’t want to completely own the metaverse).

As explained by Zuckerberg and Michael Abrash, Chief Scientist of Meta’s Reality Labs, creating the perfect VR headset involves perfecting four basic concepts. First, they need to reach a high resolution so you can have 20/20 VR vision (with no need for prescription glasses). Additionally, headsets need variable focal depth and eye tracking, so you can easily focus on nearby and faraway objects, as well as fixes for the optical distortions inherent in current lenses. (We’ve seen this tech in the Half Dome prototypes.) Finally, Meta needs to bring HDR, or high dynamic range, into headsets to deliver more realistic brightness, shadows and color depth. More so than resolution, HDR is a major reason why modern TVs and computer monitors look better than LCDs from a decade ago.

Meta Reality Labs VR headset prototypes


And of course, the company needs to wrap all of these concepts into a headset that’s light and easy to wear. In 2020, Facebook Reality Labs showed off a pair of concept VR glasses using holographic lenses, which looked like oversized sunglasses. Building on that original concept, the company revealed Holocake 2 today (above), its thinnest VR headset yet. It looks more traditional than the original pair, but notably, Zuckerberg says it’s a fully functional prototype that can play any VR game while tethered to a PC.

“Displays that match the full capacity of human vision are going to unlock some really important things,” Zuckerberg said in a media briefing. “The first is a realistic sense of presence, and that’s the feeling of being with someone or in some place as if you’re physically there. And given our focus on helping people connect, you can see why this is such a big deal.” He described testing photorealistic avatars in a mixed reality environment, where his VR companion looked like it was standing right beside him. While “presence” may seem like an esoteric term these days, it’s easier to understand once headsets can realistically connect you to remote friends, family and colleagues.

Meta’s upcoming Cambria headset appears to be only a small step towards achieving true VR presence; the brief glimpses we’ve seen of its technology make it seem like a modest upgrade from the Oculus Quest 2. While admitting the perfect headset is far off, Zuckerberg showed off prototypes that demonstrate how much progress Meta’s Reality Labs has made so far.

Meta Reality Labs VR headset prototypes


There’s “Butterscotch” (above), which can display near-retinal resolution, allowing you to read the bottom line of an eye test in VR. To achieve that, the Reality Labs engineers had to cut the Quest 2’s field of view in half, a compromise that definitely wouldn’t work in a finished product. The Starburst HDR prototype looks even wilder: It’s a bundle of wires, fans and other electronics that can produce up to 20,000 nits of brightness. That’s 200 times the Quest 2’s 100 nits, and it’s leagues ahead of even the super-bright Mini-LED displays we’re seeing today. (My eyes are watering at the thought of putting that much light close to my face.) Starburst is too large and unwieldy to strap onto your head, so researchers have to peer into it like a pair of binoculars.

Meta Mirror Lake VR concept


While the Holocake 2 appears to be Meta’s most polished prototype yet, it doesn’t include all of the technology the company is currently testing. That’s the goal of the Mirror Lake concept (above), which will offer holographic lenses, HDR, mechanical varifocal lenses and eye tracking. There’s no working model yet, but it’s a decent glimpse at what Meta is aiming for several years down the road. It looks like a pair of high-tech ski goggles, and it’ll be powered by LCD displays with laser backlights. The company is also developing a way to show your eyes and facial expressions to outside observers with an external display on the front.

Repost: Original Source and Author Link


Iterate integrates more visual tools with Interplay 7 low-code platform update for AI


Multiple companies are vying for a share of the competitive – and growing – low-code AI and ML tools market. The basic concept behind low code is making it easier for users to build applications without needing to dig into source code development tools. Building out AI and ML integrations is a growing area, and it is the space where San Jose, California-based Iterate is playing.

The company, which is releasing its Interplay 7 platform today, is firmly focused on AI. The release integrates a new workflow engine that enables users to work with multiple AI nodes to load data for building applications and machine learning models. It also provides streamlined integration with Google Vertex AI and AWS Sagemaker for users that want to send trained data models to those services for additional processing. The overall goal is to help larger companies with a modular approach to building AI/ML powered applications.

“We have realized that a lot of large companies we work with have many web engineers, but they actually don’t have machine learning backend skills,” Brian Sathianathan, CTO at Iterate, told VentureBeat.

Competitive marketplace for low code shows promise for AI

The overall market for low-code development tools will be worth $16.8 billion in 2022, according to a Gartner research forecast. Gartner expects the total market for low code to reach $20.4 billion by 2023.

Looking out even further, Gartner forecasts that a whopping 70% of all new applications developed by companies will use some form of low-code approach by 2025. In contrast, the analyst firm reported that in 2020, fewer than 25% of organizations were using low code to build new applications.
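As a quick sanity check (our arithmetic, not Gartner’s), the two market figures quoted above imply roughly 21% year-over-year growth:

```python
# Implied year-over-year growth from the Gartner figures above ($ billions).
market_2022 = 16.8
market_2023 = 20.4

yoy_growth_pct = (market_2023 - market_2022) / market_2022 * 100
print(round(yoy_growth_pct, 1))  # -> 21.4
```

That pace is consistent with the firm’s expectation that low code becomes the default approach for most new applications by mid-decade.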

The vendor marketplace for low-code technologies is crowded: one startup launched its no-code platform for helping users build AI-powered applications in February. In the same month, startup Mage launched its low-code AI dev tool, which helps organizations generate models. Cogniteam announced its latest low-code AI updates on May 9, with a focus on robotics. Other established vendors in the low-code space include Appian and Mendix, both of which have varying degrees of AI enablement capabilities.

“Low-code represents the primary growth area for AI-driven application development atop a complicated web of services and data sources,” Jason English, principal analyst at Intellyx LLC told VentureBeat. “Bringing down the technical barrier to entry allows business expertise to be applied for ML training, automation and inference tasks.”

How Iterate aims to differentiate

Sathianathan told VentureBeat that among the ways his company’s platform differentiates itself from other low-code technologies in the marketplace is with specific industry use-case templates.

For example, rather than just providing a generic tool to help users connect to data sources and build out an AI-enabled application, Interplay provides templates for industry verticals including retail, healthcare, automotive, and oil and gas. Sathianathan said an organization will typically engage Iterate to solve a specific use case, though the platform can also be customized to build applications beyond the available templates.

In the Interplay 7 release, Iterate is also improving its data-preparation capabilities with a visual tool for cleaning data so that it is useful for machine learning training operations. In prior releases, Sathianathan said, data cleaning was not a visual process.

A primary theme of the Interplay update is improving the user-facing side of applications. The platform now integrates with the popular Figma tool used to mock up interface designs. Sathianathan said that at large organizations, customer-facing interfaces are often designed by agencies that, more often than not, use Figma. The integration enables Figma designs to be imported into Interplay 7, which can then be connected to the AI backend to build applications.

Looking ahead to future releases, Sathianathan said he plans to expand the platform’s AI capabilities to support techniques such as digital twins.

“Today we support the big four use cases of AI: regression, classification, clustering and image recognition,” he said. “In the future we’re going to add additional capabilities.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.



Deep tech, no-code tools will help future artists make better visual content

This article was contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations, primarily dealing with physical goods (pottery, jewelry, paintings, books, photos, videos, etc.). Over the past two decades, it has become predominantly digital. The digitization of creation has sparked a massive shift in content creation, where everyone and their mother is now creating, sharing, and participating online.

The vast majority of the content that is created and consumed on the internet is visual content. In our recent Insights report at LDV Capital, we found that by 2027, there will be at least 100 times more visual content in the world. The future creator economy will be powered by visual tech tools that automate various aspects of content creation and remove the technical skill from digital creation. This article discusses the findings from that report.

Group of superheroes on a dark background


Image Credit: ©LDV CAPITAL INSIGHTS 2021

We now live as much online as we do in person and as such, we are participating in and generating more content than ever before. Whether it is text, photos, videos, stories, movies, livestreams, video games, or anything else that is viewed on our screens, it is visual content.

Currently, it takes time, often years, of prior training to produce a single piece of quality and contextually-relevant visual content. Typically, it has also required deep technical expertise in order to produce content at the speed and quantities required today. But new platforms and tools powered by visual technologies are changing the paradigm.

Computer vision will aid livestreaming

Livestreaming is video recorded and broadcast in real time over the internet, and it is one of the fastest-growing segments in online video, projected to be a $150 billion industry by 2027. Over 60% of individuals aged 18 to 34 watch livestreamed content daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today but shopping, cooking, and events are growing quickly and will continue on that trajectory.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual tech tools that leverage computer vision, sentiment analysis, overlay technology, and more will aid livestream automation. They will enable streamers’ feeds to be analyzed in real time to add production elements that improve quality while cutting back the time and technical skills required of streamers today.

Synthetic visual content will be ubiquitous

A lot of the visual content we view today is already computer-generated imagery (CGI), special effects (VFX), or altered by software (e.g., Photoshop). Whether it’s the army of the dead in Game of Thrones or a resized image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now, computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will view more photorealistic synthetic images and videos than ones that document a real person or place. Some experts in our report even project that synthetic visual content will make up nearly 95% of the content we view. Synthetic media uses generative models, such as generative adversarial networks (GANs), to write text, make photos, create game scenarios, and more from simple human prompts such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.


Above: Left: a remedial drawing; Right: a landscape image built by NVIDIA’s GauGAN from the drawing

Image Credit: ©LDV CAPITAL INSIGHTS 2021
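As an illustrative aside (ours, not the report’s), the adversarial idea behind the GANs described above can be sketched in a few lines of numpy: a toy linear “generator” learns to mimic samples from a 1-D target distribution, while a logistic-regression “discriminator” learns to tell real samples from fakes. Every number here is made up for illustration.

```python
import numpy as np

# Toy 1-D GAN sketch: generator G(z) = a*z + b tries to mimic samples
# from N(4, 1.25); discriminator D(x) = sigmoid(w*x + c) tells real from fake.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.25, size=32)
    fake = a * rng.normal(size=32) + b

    # Discriminator ascent on E[log D(real)] + E[log(1 - D(fake))]
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator ascent on E[log D(fake)] (non-saturating loss)
    z = rng.normal(size=32)
    fake = a * z + b
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# The generator's bias b drifts from 0 toward the data mean (around 4).
print(f"learned generator: G(z) = {a:.2f}*z + {b:.2f}")
```

Real synthetic-media systems use deep networks and image data rather than a single line in 1-D, but the push-and-pull between the two models is the same.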

In some circumstances, it will be faster, cheaper, and more inclusive to synthesize objects and people than to hire models, find locations and do a full photo or video shoot. Moreover, it will enable video to be programmable – as simple as making a slide deck.

Synthetic media that leverages GANs is also able to personalize content nearly instantly and, therefore, can enable any video to speak directly to the viewer using their name, or write a video game in real time as a person plays. The gaming, marketing, and advertising industries are already experimenting with the first commercial applications of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, as well as even more time and budget than content starring physical people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). These formats will be an increasingly essential part of the content strategy for brands and businesses, deployed across image, video, and livestream channels as a mechanism for diversifying content.

Graph displaying motion capture landscape


Image Credit: ©LDV CAPITAL INSIGHTS 2021

The greatest hurdle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates 4 seconds of content per workday. Motion capture (MoCap) is a tool often used by professional animators in film, TV, and gaming to digitally record a physical pattern of an individual’s movements for the purpose of animating them. An example would be something like recording Steph Curry’s jump shot for NBA 2K.
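To put that throughput in perspective, here is a back-of-the-envelope calculation (ours, not the report’s) using the quoted rate of 4 seconds of finished content per workday and an assumed 90-minute feature:

```python
# Back-of-the-envelope: hand-animating a feature film at 4 s of
# finished content per animator workday.
film_minutes = 90           # assumed feature length
seconds_per_workday = 4     # rate quoted above
workdays_per_year = 250     # rough working year

workdays = film_minutes * 60 / seconds_per_workday
years = workdays / workdays_per_year
print(workdays, round(years, 1))  # -> 1350.0 5.4
```

In other words, a single animator would need over five work-years per film, which is why studios parallelize across large teams and why cheaper MoCap is so appealing.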

Advances in photogrammetry, deep learning, and artificial intelligence (AI) are enabling camera-based MoCap that needs few or no suits, sensors, or other hardware. Facial motion capture has already come a long way, as evidenced in some of the incredible photo and video filters out there. As capabilities advance to full-body capture, MoCap will become easier, faster, more budget-friendly, and more widely accessible for animated visual content creation for video production, virtual character livestreaming, gaming, and more.

Nearly all content will be gamified

Gaming is a massive industry set to hit nearly $236 billion globally by 2027. It will keep expanding as more and more content introduces gamification to encourage interactivity. Gamification is the application of typical game-playing elements, such as point scoring, interactivity, and competition, to encourage engagement.

Games with non-gamelike objectives and more diverse storylines are enabling gaming to appeal to wider audiences. Growth in the number of players, their diversity, and the hours spent playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a major role in aiding game developers to build tons of new content. GANs will gamify and personalize content, engaging more players and expanding interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We will have digital identities to produce, consume, and interact with content. In our physical lives, people have many aspects of their personality and represent themselves differently in different circumstances: the boardroom vs. the bar, in groups vs. alone, etc. Online, the old-school AOL screen names have already evolved into profile photos, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, to participate online.

Five examples of digital identities


Image Credit: ©LDV CAPITAL INSIGHTS 2021

Digital identities (or avatars) require visual tech. Some will enable public anonymity of the individual, some will be pseudonyms and others will be directly tied to physical identity. A growing number of them will be powered by AI.

These autonomous virtual beings will have personalities, feelings, problem-solving capabilities, and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, co-workers, doctors, dates and so much more.

Interacting with both people-driven avatars and autonomous virtual beings in virtual worlds and with gamified content sets the stage for the rise of the Metaverse. The Metaverse could not exist without visual tech and visual content and I will elaborate on that in a future article.

Machine learning will curate, authenticate, and moderate content

For creators to continuously produce the volumes of content necessary to compete in the digital world, a variety of tools will be developed to automate the repackaging of content: long-form to short-form, videos to blogs (or vice versa), social posts, and more. These systems will select content and formats based on the performance of past publications, using automated analytics from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.

In order to then filter through the massive amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through and present to us content personalized to our interests and aspirations. Eventually, we’ll see personalized synthetic video content replacing text-heavy newsletters, media, and emails.

Additionally, the plethora of new content, including visual content, will require ways to authenticate it and attribute it to its creator, both for rights management and for managing deepfakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.

It is also deeply important to detect disturbing and dangerous content, which is increasingly hard to do given the vast quantities of content published. AI and computer vision algorithms are necessary to automate this process, detecting hate speech, graphic pornography, and violent attacks, because doing so manually in real time is too difficult and not cost-effective. Multi-modal moderation that includes image recognition, as well as voice, text recognition, and more, will be required.

Visual content tools are the greatest opportunity in the creator economy

The next five years will see individual creators who leverage visual tech tools to create visual content rival professional production teams in the quality and quantity of the content they produce. The greatest business opportunities today in the Creator Economy are the visual tech platforms and tools that will enable those creators to focus on the content and not on the technical creation.

Abigail Hunter-Syed is a Partner at LDV Capital investing in people building businesses powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning, and AI to analyze visual data. She has more than a ten-year track record of leading strategy, ops, and investments in companies across four continents and rarely says no to soft-serve ice cream.





A call for increased visual representation and diversity in robotics


Sometimes it’s the obvious things that are overlooked. Why aren’t there pictures of women building robots on the internet? Or if they are there, why can’t we find them when we search? I have spent decades doing outreach activities, providing STEM opportunities, and running women in robotics speaker and networking events. So I’ve done a lot of image searches looking for a representative picture. I have scrolled through page after page of search results ranging from useless to downright insulting every single time.

Finally, I counted.

Above: Google image search results for “woman building robot”: a female robot takes the lead, followed by a fake robot, then women standing near robots.

Image Credit: Andra Keay

My impressions were correct. The majority of the images you find when you look for ‘woman building robot’ are of female robots. This is not what happens if you search for ‘building robot’, or ‘man building robot’. That’s the insulting part, that this misrepresentation and misclassification hasn’t been challenged or fixed. Sophia the robot, or the ScarJo bot, or a sexbot has a much greater impact on the internet than women doing real robotics. What if male roboticists were confronted with pictures of robotic dildos whenever they searched for images of their work?

Above: Example image results from Andra Keay’s Google search for ‘women building robots’, showing female robots, sex robots, fake robots, and men explaining robots to others.

Image Credit: Andra Keay

The number of women in the robotics industry is hard to gauge. Best estimates are 5% in most locations, perhaps 10% in some areas. It is slowly increasing, but then the robotics industry is also in a period of rapid growth and everyone is struggling to hire. To my mind, the biggest wasted opportunity for a young robotics company growing like Topsy is to depend on the friends-of-founders network when it leads to homogeneous hiring practices. The sooner you incorporate diversity, the easier it will be for you to scale and attract talent.

For a larger robotics company, the biggest wasted opportunity is not fixing retention. Across the board in the tech industry, retention rates for women and underrepresented minorities are much worse than for pale males. That means that you are doing something wrong. Why not seriously address the complaints of the workers who leave you? Otherwise, you’ll never retain diverse hires, no matter how much money you throw at acquiring them.

The money wasted in talent acquisition when you have poor retention should instead be used to improve childcare, or flexible work hours, or support for affinity groups, or to fire the creep that everyone complains about, or restructure so that you increase the number of female and minority managers. The upper echelons are echoing with the absence of diversity.

On the plus side, the number of pictures of girls building robots has definitely increased in the last ten years. As my own children have grown, I’ve seen more and more images showing girls building robots. But with two daughters now leaving college, I’ve had to tell them that robotics is not one of the female-friendly career paths (if any of them are), unless they are super passionate about it. Medicine, law, or data analytics might be better domains for their talents. As an industry, we can’t afford to lose bright young women. We can’t afford to lose talented older women. We can’t afford to overlook minority hires. The robotics industry is entering exponential growth. Capital is in abundance, market opportunities are in abundance. Talent is scarce.

These days, I’m focused on supporting professional women in the robotics community, industry, or academia. These are women who are doing critical research and building cutting-edge robots. What do solutions look like for them? Our wonderful annual Ada Lovelace Day list hosted on Robohub has increased the awareness of many ‘new’ faces in robotics. But we have been forced to use profile pictures, primarily because that’s what is available. That’s also the tradition for profile pieces about the work that women do in robotics. The focus is on the woman, not the woman building, programming, or testing the robot. That means that the images are not quite right as role models.

Above: Further examples from Andra Keay’s image search results that better represent women in robotics: women brainstorming at a see-through whiteboard and sitting near constructed robots.

Image Credit: Andra Keay

A real role model shows you the way forward. And that the future is in your hands. The Civil Rights activist Marian Wright Edelman said, “You can’t be what you can’t see.”


Above: A set of images from Andra Keay’s search results displaying the few good images found in the search more accurately representing women working in robotics.

Image Credit: Andra Keay

So Women in Robotics has launched a photo challenge. Our goal is to see more than 3 images of real women building robots in the top 100 search results. Our stretch goal is to see more images of women building robots than there are of female robots in the top 100 search results! Take great photos following these guidelines, hashtag your images #womeninrobotics #photochallenge #ibuildrobots, and upload them to Wikimedia with a creative commons license so that we can all use them. We’ll share them on the Women in Robotics organization website, too.

Above: Andra Keay’s guidelines for a great, accurate, and realistic photo representing women in robotics: real robot programming; adults of various ages working on robots; a single, active subject; individuals shown using tools or code to build a robot; unbranded images; and permission from the subject to use the image.

Image Credit: Andra Keay

Hey, we’d also love mentions of Women in Robotics in any citable fashion! Wikipedia won’t let us have a page because we don’t have third-party references, and sadly, mentions of our Ada Lovelace Day lists by other organizations have not credited us. We are now an official 501(c)(3) organization, registered in the US, with the mission of supporting women and non-binary people who work in robotics, or who are interested in working in robotics.

Above: Additional details of the Women in Robotics photo challenge and its call for submissions.

Image Credit: Andra Keay

If a picture is worth a thousand words, then we can save a forest’s worth of outreach, diversity, and equity work, simply by showing people what women in robotics really do.





Our Life Dev Talks the Unconventional Joy of Visual Novels

The visual novel genre is a space within video games that gives indie developers a lot of freedom and opportunity to create the stories they would like to see, especially given that the genre places a heavy emphasis on text-based gameplay. Originating in Japan, visual novels have a massive international audience, with games ranging from romance-driven stories to those with a horror approach.

GB Patch Games is an indie game development company that was founded in 2015 and solely focuses on the development of visual novel games. With a number of titles in its library already, the studio’s current focus is on its Our Life series. The first title, Our Life: Beginnings and Always, was released in November 2020, while the next entry, Our Life: Now & Forever, is currently in development. Our Life: Beginnings and Always follows the story of two childhood friends, their families, and four significant summers that range from childhood to their young adult years.

In speaking to Katelyn Clark, founder of GB Patch Games, I learned a little more about the developer’s focus on visual novels, their approach to storytelling, and some of the unique elements found in the Our Life series.

GB Patch Games was founded back in 2015. With a focus on developing visual novels, you have a pretty sizable library of games! What was the initial draw towards the visual novel genre as developers?

Growing up I loved playing video games. It was a lot of fun to interact with them and move the story along myself through gameplay. But what ended up amazing me more than anything else was when I happened to try games that allowed you to interact with the story/character side of the experience. RPGs with secret endings, the Harvest Moon series where you got to pick which person the main character married (if anyone), etc. The realization that in a game where you had some level of control over what happened, things didn’t have to go only one way always stuck with me.

Though for most of my childhood, I always thought normal gameplay came first and that’s all there was. I didn’t know about text adventures or genres like that until I was older. Visual novels were the first story-centered type of game I stumbled on as a young teen. And it was very exciting to find games where the gameplay was all about interacting with a story. I was immediately a big fan and have been ever since.

In terms of the visual novel genre, are there any specific elements to the genre that stand out to you? Was there any particular gap you noticed in the genre that you thought you could fill with your games?

I enjoy stories where not only is anything possible, but where there can be more than one thing possible. Visual novels don’t have to follow any conventions, though certain genres do have some expected elements, and can include as many options/branches as the creator wants. It’s a lot of fun.

I didn’t think I was doing anything particularly needed when I got into making my own, though. I just wanted to create games I liked, try whatever ideas I had even if they weren’t consistent with each other, and hoped other people would have a good time with them too.

With such a wide range of games within the genre, what about the genre has contributed to the stories that you’re looking to tell through your games?

Visual novels make it easy to try out the different interaction concepts I’ve had an interest in. From point management to timed options, to choices that are purely for roleplaying/customization, and always getting to choose who the main character has a preference for (if anyone).

In playing through Our Life: Beginnings & Always, I noticed that the ability to really customize the player’s character is a huge part of the game. What is the significance for the GB Patch team of providing such inclusive customization options for players?

When coming up with the initial ideas for Our Life, player customization wasn’t something terribly present. I was thinking about how I wanted to create a story where the characters grew up over the years, how it’d be fun to have your choices influence the way the love interest developed, that I wanted to attempt an atmosphere that was genuinely relaxing/nostalgic to people, and there was a goal of really showing off how charming the childhood friend trope could be if it was given complete focus.

A look at the character customization screen in Our Life: Beginnings & Always.

While going through that list, difficult decisions had to be made. With so many elements and such a long period of time to go over, it was stressful to try coming up with one single, perfect way to do it that would give players of all different personalities a positive experience. My solution was to default to ‘well, maybe we could make it a choice instead of a set thing’.

After repeated rounds of that, it eventually clicked that if I’m imagining players having this ideal little summer story of growing up without worries of how it’ll play out, that the best way to do that would be to just let them live how they wanted to. It was pretty obvious in retrospect. At that point, we began work on coming up with systems and structures that would allow as much personalization as possible so that as many people as possible felt comfortable and included.

Was creating extremely customizable experiences a big focus for GB Patch from the beginning?

It wasn’t. Originally, I wanted the games we made to be a story you had a say in, but still an intentional/set experience. I thought us keeping a good amount of control was the best way to ensure there was a consistent tone and the players would feel the ways we hoped they would and such. It wasn’t until Our Life: Beginnings & Always that all the good things major customization could do got through to me. I imagine quite a lot of our projects will use heavy personalization from here on out.

Specifically looking at Our Life: Beginnings & Always, the game definitely has so many choices to go through. With over 300,000 words in the game, there’s a lot of story to read through. What has been the general approach to creating both the base game and its DLC in terms of writing and mapping out the customizable choices?

It’s definitely freeform. I take each part of the game (Step openings, Moments, and Step closings) one at a time. Almost nothing is strict or unchangeable or decided on in advance. Often there aren’t even pre-planned premises for any of the Moments. The events only start to truly exist when I’m actually working on a single new Moment or opening or ending.

Some of the characters of Our Life: Beginnings & Always talk on screen.

There were overarching themes and certain types of scenes I knew I wanted to include, but I didn’t know for sure how anything was going to fit or play out until I worked through each section. Then once new content was complete, I’d go back and add references or alterations to prior scenes where needed. I like being able to give myself as much time as possible before deciding on the content and being free to go with whatever makes sense when I do reach the point where a choice has to be made.

What has been the most rewarding part of creating visual novels? Is it the final game and story, the community that has sprung up around your games, the team that you work with?

Our Life: Beginnings & Always resonated with people more than I ever thought it would. We tried our best, though I honestly thought the game would come out to be an ‘enjoyable time’ and that’d be satisfying enough. I didn’t have confidence that it’d succeed in having a deeper emotional impact. But it did for a lot of the players who gave it a chance.

Seeing how the story and characters made people happy, helped them feel calmer in a really stressful time, let them reflect on their real lives, allowed them to live out things they couldn’t in their real life, and all of that has definitely been the most rewarding part of everything I’ve done with visual novels.

Interview responses have been lightly edited for clarity.


Repost: Original Source and Author Link


Google’s Visual Inspection AI spots defects in manufactured goods


Google today announced the launch of Visual Inspection AI, a new Google Cloud Platform (GCP) solution designed to help manufacturers, consumer packaged goods companies, and other businesses reduce defects during the manufacturing and inspection process. Google says it’s the first dedicated GCP service for manufacturers, representing a doubling down on the vertical.

It’s estimated that defects cost manufacturers billions of dollars every year — in fact, quality-related costs can consume 15% to 20% of sales revenue. Twenty-three percent of all unplanned downtime in manufacturing is the result of human error compared with rates as low as 9 percent in other sectors, according to a Vanson Bourne study. The $327.6 million Mars Climate Orbiter spacecraft was destroyed because of a failure to properly convert between units of measurement, and one pharma company reported a misunderstanding that resulted in an alert ticket being overridden, which cost four days on the production line at £200,000 ($253,946) per day.

Powered by GCP’s computer vision technology, Visual Inspection AI aims to automate quality assurance workflows, enabling companies to identify and correct defects before products are shipped. By identifying defects early in the manufacturing process, Visual Inspection AI can improve production throughput, increase yields, reduce rework, and slash return and repair costs, Google boldly claims.

AI-powered inspection

As Dominik Wee, GCP’s managing director of manufacturing and industrial, explains, Visual Inspection AI specifically addresses two high-level use cases in manufacturing: cosmetic defect detection and assembly inspection. Once the service is fine-tuned on images of a business’ products, it can spot potential issues in real time, optionally operating on an on-premises server while leveraging the power of the cloud for additional processing.

Visual Inspection AI competes with Amazon’s Lookout for Vision, a cloud service that analyzes images using computer vision to spot product or process defects and anomalies in manufactured goods. Announced in preview at Amazon’s virtual re:Invent conference in December 2020 and launched in general availability in February, Lookout for Vision uses computer vision algorithms that, Amazon claims, can learn to detect manufacturing and production defects including cracks, dents, incorrect colors, and irregular shapes from as few as 30 baseline images.

But while Lookout for Vision counts GE Healthcare, Basler, and Sweden-based Dafgards among its users, Google says that Renault, Foxconn, and Kyocera have chosen Visual Inspection AI to augment their quality assurance testing. Wee says that with the Visual Inspection AI, Renault is automatically identifying defects in paint finish in real time.

Moreover, Google claims that Visual Inspection AI can build models with up to 300 times fewer human-labeled images than general-purpose machine learning platforms — as few as 10. Accuracy automatically increases over time as the service is exposed to new products.
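To make the few-shot idea concrete, here is a deliberately simplified sketch of baseline-comparison defect flagging; it is not Google’s or Amazon’s actual method, and the function names, threshold, and synthetic data are all hypothetical:

```python
import numpy as np

def build_baseline(good_images):
    """Average a small set of defect-free product images into a reference."""
    stack = np.stack([img.astype(np.float64) for img in good_images])
    return stack.mean(axis=0)

def flag_defect(image, baseline, threshold=0.05):
    """Flag an image as defective if its mean absolute deviation from the
    baseline exceeds the threshold (pixel values assumed in [0, 1])."""
    deviation = np.abs(image.astype(np.float64) - baseline).mean()
    return deviation > threshold, deviation

# Ten synthetic "good" samples of a uniform gray part
rng = np.random.default_rng(0)
good = [np.full((32, 32), 0.5) + rng.normal(0, 0.01, (32, 32)) for _ in range(10)]
baseline = build_baseline(good)

clean = np.full((32, 32), 0.5)       # defect-free part
scratched = clean.copy()
scratched[10:14, :] = 1.0            # bright scratch across the part

print(flag_defect(clean, baseline)[0])       # False
print(flag_defect(scratched, baseline)[0])   # True
```

Production systems use learned models rather than raw pixel differences, but the workflow — build a reference from a handful of labeled good examples, then score deviations — follows the same shape.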

“The benefit of a dedicated solution [like Visual Inspection AI] is that it basically gives you ease of deployment and the peace of mind of being able to run it on the shop floor. It doesn’t have to run the cloud,” Wee said. “At the same time, it gives you the power of Google’s AI and analytics. What we’re basically trying to do is get the capability of AI at scale into the hands of manufacturers.”

Trend toward automation

Manufacturing is undergoing a resurgence as business owners look to modernize their factories and speed up operations. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. Oxford Economics anticipates 12.5 million manufacturing jobs will be automated in China, while McKinsey projects machines will take upwards of 30% of these jobs in the U.S.

Indeed, 76% of respondents to a GCP and The Harris Poll survey said that they’ve turned to “disruptive technologies” like AI, data analytics, and the cloud to help navigate the pandemic. Manufacturers told surveyors that they’ve tapped AI to optimize their supply chains, including for risk management and inventory management. Even among firms that currently don’t use AI in their day-to-day operations, about a third believe it would make employees more efficient and be helpful for employees overall, according to GCP.

“We’re seeing a lot of more demand, and I think it’s because we’re getting to a point where AI is becoming really widespread,” Wee said. “Our fundamental strategy is to make Google’s horizontal AI capabilities and integrate them into the capabilities of the existing technology providers.”

According to a 2020 PricewaterhouseCoopers survey, companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



‘Danganronpa’ visual novels are finally coming to Nintendo Switch

The Danganronpa visual novels will finally be available for the Nintendo Switch later this year, over 10 years after the first game was released for the PSP. Nintendo has announced the mystery/adventure/detective games’ arrival to the platform during its Direct presentation for E3 2021. While the gaming giant doesn’t have an exact release date for the titles yet, it revealed that the three mainline games in the franchise — Danganronpa: Trigger Happy Havoc, Danganronpa 2: Goodbye Despair and Danganronpa V3: Killing Harmony — will be released as a physical collection called Danganronpa: Decadence.

The bundle comes with the anniversary edition of all three games, which will give you access to bonus scenes, illustrations and other extras not included in the original releases. Further, it comes with Danganronpa S: Ultimate Summer Camp, a brand new board game-style title that will look familiar if you ever played the unlockable board game in V3: Killing Harmony. Ultimate Summer Camp will unlock new scenes and interactions between characters that you haven’t seen before. 

While the games don’t have a release date for the Switch yet, the bundle’s official website says Decadence will be available for pre-order “soon.” You’ll also be able to purchase a collector’s edition box with posters, prints, an OST CD and a metal case if you’re willing to pay extra. If you just want a single game from the collection, though, you’ll have to make do with a digital copy: each title will be sold individually on the Nintendo eShop.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



Android 12’s biggest visual change may be emoji

One of the biggest visual changes in Android 12 appears to be to emoji. While the emoji system itself will not change, Android 12 Beta 1 alone introduced a whopping 389+ updated emoji designs! Designs are refined in several key ways, including smoothing, flattening, and a consolidation of the color palette. Compared to the collected changes made in Android 12, the previous set of emoji looks positively mish-mashed and Web 1.0 in style.

With Android 12, Google introduced a new wave of Material Design. At Google I/O 2021, Material You was revealed, blurring the boundary between one-way-only design and “theme however you like!” With Material You, Google’s aim is to allow users the freedom to have their own visual experience to a degree, while remaining squarely in the Android universe with a solid set of visual rules for movement, size, and preferred combinations.

The Android 12 emoji collection reflects that vision. As shown in a chart made by the folks at Emojigraph, Google’s design choices make the whole collection feel more solid, like they’re part of a single set. They look like they’ve all been designed by a single team.

This doesn’t seem to mean that all emoji have the same sort of border, the same size lines, or the exact same set of colors. Instead, it’s more like the design team that refined the Android 12 emoji collection separated all emoji into categories and designed from there.

Take for example the set of emoji shown below, courtesy of Emojigraph. Foggy, Night with Stars, Cityscape at Dusk, and Sunset were previously using the same basic set of buildings. Cityscape at Dusk was a full square, while the others were images with unique shapes.

Now, in Android 12, the same set of buildings is used for Night with Stars, Cityscape at Dusk, and Sunset, and they’re all represented with a square image. Foggy is now a completely different image that makes the essence of the word far more clear than it was before – and it’s in a square.

The vast majority of icon changes reduce clutter and refine iconography to more clearly represent the word associated with the emoji. What do you think so far? Do these changes look like they’ll make Android a more user-friendly experience? Have you seen Android 12 on your own phone yet?



U.S. banks deploy visual AI tools to monitor customers and workers


(Reuters) — Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers, and spot people sleeping near ATMs, even as the banks remain wary about possible backlash over increased surveillance, more than a dozen banking and technology sources told Reuters.

Previously unreported trials at City National Bank of Florida and JPMorgan Chase, as well as earlier rollouts at banks such as Wells Fargo, offer a rare view into the potential U.S. financial institutions see in facial recognition and related artificial intelligence systems.

Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America.

Bobby Dominguez, chief information security officer at City National, said smartphones that unlock via a face scan have paved the way.

“We’re already leveraging facial recognition on mobile,” he said. “Why not leverage it in the real world?”

City National will begin facial recognition trials early next year to identify customers at teller machines and employees at branches, aiming to replace clunky and less secure authentication measures at its 31 sites, Dominguez said. Eventually, the software could spot people on government watch lists, he said.

JPMorgan said it is “conducting a small test of video analytic technology with a handful of branches in Ohio.” Wells Fargo said it works to prevent fraud but declined to discuss how.

Civil liberties issues loom large. Critics point to arrests of innocent individuals following faulty facial matches, disproportionate use of the systems to monitor lower-income and non-white communities, and the loss of privacy inherent in ubiquitous surveillance.

Portland, Oregon, as of January 1 banned businesses from using facial recognition “in places of public accommodation,” and drugstore chain Rite Aid shut down a nationwide face recognition program last year.

Dominguez and other bank executives said their deployments are sensitive to the issues.

“We’re never going to compromise our clients’ privacy,” Dominguez said. “We’re getting off to an early start on technology already used in other parts of the world and that is rapidly coming to the American banking network.”

Still, the big question among banks, said Fredrik Nilsson, vice president of the Americas at Axis Communications, a top maker of surveillance cameras, is “what will be the potential backlash from the public if we roll this out?”

Walter Connors, chief information officer at Brannen Bank, said the Florida company had discussed but not adopted the technology for its 12 locations. “Anybody walking into a branch expects to be recorded,” Connors said. “But when you’re talking about face recognition, that’s a larger conversation.”

Business intelligence

JPMorgan began assessing the potential of computer vision in 2019 by using internally developed software to analyze archived footage from Chase branches in New York and Ohio, where one of its two Innovation Labs is located, said two people — including former employee Neil Bhandar, who oversaw some of the effort at the time.

Chase aims to gather data to better schedule staff and design branches, three people said, and the bank confirmed. Bhandar said some staff even went to one of Amazon’s cashierless convenience stores to learn about its computer vision system.

Preliminary analysis by Bhandar of branch footage revealed more men would visit before or after lunch, while women tended to arrive mid-afternoon. Bhandar said he also wanted to analyze whether women avoided compact spaces in ATM lobbies because they might bump into someone, but the pandemic halted the plan.

Testing facial recognition to identify clients as they walk into a Chase bank, if they consented to it, has been another possibility considered to enhance their experience, a current employee involved in innovation projects said.

Chase would not be the first to evaluate those uses. A bank in the Northeast recently used computer vision to identify busy areas in branches with newer layouts, an executive there said, speaking on the condition the company not be named.

A Midwestern credit union last year tested facial recognition for client identification at four locations before pausing over cost concerns, a source said.

While Chase developed custom computer vision in-house using components from Google, IBM Watson, and Amazon Web Services, it also considered fully built systems from software startups AnyVision and Vintra, people including Bhandar said. AnyVision declined to comment, and Vintra did not respond to requests for comment.

Chase said it ultimately chose a different vendor, which it declined to name, out of 11 options considered, and began testing that company’s technology at a handful of Ohio locations last October. The effort aims to identify transaction times, how many people leave because of long queues, and which activities are occupying workers.

The bank added that facial, race, and gender recognition are not part of this test.

Using technology to guess customers’ demographics can be problematic, some ethics experts say, because it reinforces stereotypes. Some computer vision programs are also less accurate on people of color, and critics have warned that could lead to unjust outcomes.

Chase has weighed ethical questions. For instance, some internally called for reconsidering planned testing in Harlem, a historically Black neighborhood in New York, because it could be viewed as racially insensitive, two of the people said. The discussions emerged about the same time as a December 2019 New York Times article about racism at Chase branches in Arizona.

Analyzing race was not part of the eventually tabled plans, and the Harlem branch had been selected because it housed the other Chase Innovation Lab for evaluating new technology, the people said, and the bank confirmed.

Targeting the homeless

Security uses for computer vision have long stirred banks’ interest. Wells Fargo used software from the company 3VR over a decade ago to review footage of crimes and see if any faces matched those of known offenders, said John Honovich, who worked at 3VR and founded video surveillance research organization IPVM.

Identiv, which acquired 3VR in 2018, said banking sales were a major focus, but it declined to comment on Wells Fargo.

A security executive at a mid-sized Southern bank, speaking on condition of anonymity to discuss secret measures, said over the last 18 months the bank has rolled out video analytics software at nearly every branch to generate alerts when doors to safes, computer server rooms, and other sensitive areas are left open.

Outside, the bank monitors for loitering, such as the recurring issue of people setting up tents under the overhang for drive-through ATMs. Security staff at a control center can play an audio recording politely asking those people to leave, the executive said.

The issue of people sleeping in enclosed ATM lobbies has long been an industry concern, said Brian Karas, vice president of sales at Airship Industries, which develops video management and analytics software.

Systems that detected loitering so staff could activate a siren or strobe light helped increase ATM usage and reduce vandalism for several banks, he said. Though companies did not want to displace people seeking shelter, they felt this was necessary to make ATMs safe and accessible, Karas said.

City National’s Dominguez said the bank’s branches use computer vision to detect suspicious activity outside.

Sales records from 2010 and 2011 reviewed by Reuters show that Bank of America purchased iCVR cameras that were marketed at the time as helping organizations reduce loitering in ATM lobbies. Bank of America said it no longer uses iCVR technology.

The Charlotte, North Carolina-based bank’s interest in computer vision has not abated. Its officials met with AnyVision on multiple occasions in 2019, including at a September conference during which the startup demonstrated how it could identify the face of a Bank of America executive, according to records of the presentation seen by Reuters and a person in attendance.

The bank said, “We are always reviewing potential new technology solutions that are on the market.”




Qualcomm research could lower the barriers to visual AI everywhere

Demand is growing for AI solutions that process images from cameras and other photo sensors. These solutions have applications in security, autonomous vehicles, healthcare, smart cities, and many other areas, but the more advanced they become, the more computationally intensive they get. So creating more efficient AI computational devices is a necessity for advancing the field.

Chip maker Qualcomm held a series of briefings late last month that point to a solution on this front. The company discussed an R&D project it has undertaken to reduce the amount of compute necessary to do visual AI, and thus create chips that are smaller, more cost effective, and use less power.

The techniques it is implementing include eliminating redundant processing by analyzing only the differences between frames rather than analyzing each frame in full. In typical videos, each frame contains little new information relative to the previous one, so avoiding reprocessing that duplicated information is much more efficient. Further, Qualcomm is developing a skip function that limits the number of frames to be analyzed by skipping frames that offer little or no change, eliminating the computational burden of processing them. This creates a process flow that understands the relationship between frames, enabling the system to exit processing of the visual data early when no additional information is needed, potentially saving many frame-processing cycles.
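The frame-differencing and skip logic described above can be sketched roughly as follows. This is an illustrative approximation, not Qualcomm’s implementation: `analyze` stands in for the expensive per-frame model, and the skip threshold is an arbitrary assumption:

```python
import numpy as np

def analyze(frame):
    # Stand-in for the expensive per-frame AI model
    return {"mean_brightness": float(frame.mean())}

def process_stream(frames, skip_threshold=0.01):
    """Run the heavy model only on frames that differ meaningfully
    from the last analyzed frame; reuse the cached result otherwise."""
    results, last_frame, last_result = [], None, None
    skipped = 0
    for frame in frames:
        if last_frame is not None:
            delta = np.abs(frame - last_frame).mean()
            if delta < skip_threshold:      # near-duplicate frame: skip it
                results.append(last_result)
                skipped += 1
                continue
        last_result = analyze(frame)        # frame changed: full analysis
        last_frame = frame
        results.append(last_result)
    return results, skipped

# A "video" of 50 mostly static frames with one scene change at frame 25
static = np.zeros((8, 8))
frames = [static + (0.5 if i >= 25 else 0.0) for i in range(50)]
results, skipped = process_stream(frames)
print(skipped)   # 48 of the 50 frames reuse cached results
```

In this toy stream, only the first frame and the scene change trigger full analysis; everything else is served from the cache, which is the kind of saving Qualcomm is chasing in silicon.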

The goal is to reduce the complexity of the processor so that the chip’s power consumption can be significantly cut and its physical size made more compact. Both of these capabilities have the potential to reduce the cost of chips. Smaller, less power-hungry chips produce less heat and also enable smaller finished devices with more limited power needs. That means being able to move the chip right into the camera or similar products without needing an external processing component, as is common in current systems.

A further benefit of this work will be reducing the need for complex processing in the cloud or at the edge. The less data sent to the cloud for processing, the less cost involved in transport. This results in faster data analysis with less latency, less sharing of the data for increased privacy/security, and a reduced cloud processing load.

Indeed, edge computing is becoming commonplace in distributed, often hybrid cloud-based environments. As the number of cameras and visual devices proliferates, the workload placed on edge computing systems increases, making deployments more complex and more expensive. AI programs that can be embedded in visual devices would enable large-scale deployment of smart cameras for security, smart cities, autonomous vehicles, etc. while reducing overall system costs. Complexity and deployment cost are prime inhibitors to greater use of such solutions in both public and private markets.

Of course, Qualcomm isn’t the only company working to find a solution to this challenge. Intel/Movidius and Nvidia are key players in AI-based video systems, with specialized offerings already on the market that also enable embedded visual processing. And other major players are experimenting in this space, including Google, Microsoft, and several suppliers of mobile systems (e.g., Samsung, NXP, ARM). Each has implemented its own designs based on its unique algorithm acceleration, but more generic devices (e.g., Nvidia’s) may not be as effective at visual processing tasks. Qualcomm also has the advantage of being a supplier of low-power processing solutions, inherited from its mobile processing strengths, so it has a strong advantage when it comes to embedded solutions at the edge. But this is a very competitive market that will grow over the next several years and will likely not be dominated by any single vendor for the foreseeable future.

Bottom line

By creating a way to reduce the processing required for visual AI computations, and thereby the power requirements, size, and cost of AI chips, Qualcomm hopes to enable mass-market devices that can be deployed at lower cost and in higher volume without sacrificing AI quality. If it can accomplish this, the number of devices with self-contained AI capability will skyrocket, relieving the strain on edge computing and freeing it for more important processing functions. This is a win for Qualcomm, and potentially a win for all forms of new, visually “smart” products. While Qualcomm is not the only silicon vendor in this race, its R&D efforts should put it ahead of many, and it should benefit significantly from its leadership.

Jack Gold is the founder and principal analyst at J.Gold Associates, LLC, an information technology analyst firm based in Northborough, MA, covering the many aspects of business and consumer computing and emerging technologies. Follow him on Twitter @jckgld or on LinkedIn.

