Reincubate Camo has come out swinging against Apple’s Continuity Camera technology with a slew of new controls you won’t find anywhere else. These include variable frame rates, intelligent zoom technology, and video stabilization improvements. Many of these go well beyond anything Apple offers in macOS Ventura.
You may have heard of the Camo app. It allows you to use your iPhone as a Mac webcam and has been a popular piece of software since its release in 2020. Apple may have borrowed a few of Camo’s key concepts when it showed off Continuity Camera at WWDC 2022 in June. Undeterred, Reincubate, the company behind Camo, wants to differentiate itself from Apple’s more basic tech. Update 1.8 gives you what Apple does not.
For starters, there’s a new variable frame rate setting that lets you adjust frame rates between 15 fps and 60 fps. Reincubate says it’s ideal for capturing smooth stream footage for YouTube and Twitch. You’ll see a new frame rate drop-down in the camera settings. Apple’s Continuity Camera does not let you change your frame rate, instead relying on the iPhone’s hardware to choose it.
You’ll also get Smart Zoom with the Camo 1.8 update. This limits the amount of digital zoom when possible, so you can zoom and crop your scenes without losing image quality. Reincubate describes Smart Zoom in a blog post. “When cropping out part of a scene, Camo will now avoid using digital zoom wherever possible, and instead rely on a higher resolution source image from the camera’s sensor, mimicking lossless optical zoom.”
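Reincubate doesn’t publish Camo’s internals, but the technique described in that quote is easy to illustrate: as long as the sensor supplies more pixels than the output needs at a given zoom level, a plain crop-and-downscale loses nothing; only past that threshold does interpolated digital zoom become necessary. A minimal sketch in Python, with hypothetical resolutions:

```python
def zoom_mode(sensor_px: int, output_px: int, zoom: float) -> str:
    """Decide whether a zoom level can be served by lossless cropping.

    At zoom factor `zoom`, the crop covers sensor_px / zoom pixels on
    each axis. If that crop still holds at least output_px pixels,
    downscaling it loses nothing; otherwise the crop must be upscaled,
    which is digital zoom.
    """
    crop_px = sensor_px / zoom
    return "lossless crop" if crop_px >= output_px else "digital zoom"

# e.g. a 4032-px-wide sensor feeding a 1920-px-wide stream:
print(zoom_mode(4032, 1920, 2.0))  # crop is 2016 px wide, wide enough
print(zoom_mode(4032, 1920, 3.0))  # crop is only 1344 px wide
```

In this toy model, a 4032-pixel-wide sensor can serve a 1920-pixel-wide stream losslessly up to roughly 2.1x zoom, which matches the general idea of “mimicking lossless optical zoom.”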
Video stabilization is another feature in the update, smoothing the image from shaky cameras without sacrificing image quality. Heavy typing at a standing desk or a laptop perched on a wobbly surface can make the video jump around wildly. Image stabilization compensates for the shaking and keeps your face centered in the frame. Your coworkers won’t even know you have a mechanical keyboard!
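Camo’s stabilization algorithm isn’t documented, but a common approach to this kind of problem is to track a reference point (such as the face), low-pass filter its jittery trajectory, and shift each frame by the difference so the subject stays put. A toy one-axis sketch, with hypothetical positions:

```python
def stabilize(offsets, alpha=0.9):
    """Toy 1-D stabilizer: exponentially smooth the jittery subject
    position and return the per-frame correction to apply, so the
    subject stays near its smoothed (intended) trajectory."""
    smoothed = offsets[0]
    corrections = []
    for x in offsets:
        smoothed = alpha * smoothed + (1 - alpha) * x
        corrections.append(smoothed - x)  # shift the frame by this much
    return corrections

# A shaky desk: positions jitter around 0; corrections oppose the jitter.
shaky = [0, 4, -3, 5, -4, 2]
print([round(c, 1) for c in stabilize(shaky)])
```

Applying each correction to its frame yields the smoothed trajectory, whose range is far smaller than the raw jitter; a real implementation would do this in two dimensions (plus rotation) using optical flow or face tracking.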
Finally, Reincubate gives you a way to control the vibrancy of your image in a live stream. Vibrancy Control can brighten or mute your image, which is great for creating a more subtle atmosphere around your face. The update also lets you fine-tune lighting, white balance, and other granular adjustments you simply won’t find in Apple’s offering.
Updates such as Camo 1.8 are what give third-party apps an edge over Apple’s own offerings. You can check out our list of the best third-party apps for your Mac.
Nintendo’s Game Boy Camera has inspired countless DIY projects over the years, including an AI model trained on photos captured by the accessory. However, few are likely to match the creativity of the Camera M by photographer and modder Graves.
In a post spotted by Gizmodo, Graves detailed how he turned the humble Game Boy Camera into a mirrorless camera. Using a combination of a custom PCB and parts from a repurposed Game Boy Pocket (a 1996 variant of the original 1989 model that was smaller, lighter, and more power-efficient), he transplanted the internals of a Game Boy into a shell that looks like a Fujifilm camera. As for the Game Boy Camera’s 128 x 128 pixel CMOS sensor, Graves put that into a custom cart attached to a CS lens mount and a manual-focus varifocal lens. The nifty thing about Camera M is that it’s possible to use an original Game Boy Camera in place of the custom cartridge he hacked together.
In either case, the resulting device still takes greyscale 128 x 112 photos, but the ergonomics and user experience are vastly improved. Graves replaced the Pocket’s original screen with a backlit IPS display, making it easier to use the camera at night, and added a 1,800mAh battery that can power everything for up to eight hours. It even comes with USB-C charging. Graves told Gizmodo he hasn’t tried playing any games with his creation yet but speculated turn-based RPGs like Pokémon would be fun with the button layout he devised. So far, only one Camera M exists, but Graves said he’s “strongly leaning” toward selling conversion kits or even complete kits.
All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.
Digital Trends may earn a commission when you buy through links on our site.
The best Black Friday deals are here early, making this the perfect time to upgrade your home’s security system. With this fantastic Blink security camera combo deal, you can get two cameras for $100. That’s $80 off the original price of $180. Since the price of a single camera is also $100 right now, you’re basically getting a second one for free. Check out this deal below, and then compare it with the other Black Friday security camera deals we’ve found this week.
If you were only planning on buying one camera, you’ll be glad you found this deal. These wireless cameras are so easy to use that you’ll want as many as you can get without breaking your budget; that simplicity is the key to Blink’s success. You mount them wherever you want, plug in a sync module (basically a wireless router), hook them up to the app, and you’re good to go. Each camera will function for up to two years on just two AA batteries. The outdoor version is weatherproof, so it will survive rain and cold temperatures. Of course, they function just as well inside.
Blink security cameras shoot in up to 1080p, so say goodbye to grainy black-and-white footage. The footage can be held in cloud storage if you pay for a Blink subscription, which is $10 per month for unlimited (at one location) or $3 per month per camera. You can set up custom alerts that go straight to your phone. These alerts can be triggered by motion passing through custom trigger zones you set up for each camera. For instance, you can set up a camera to only alert you when someone steps onto your porch, but ignore cars and people passing by on the sidewalk.
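Blink doesn’t document how its zone filtering works internally, but the porch-versus-sidewalk behavior described above boils down to a containment test: raise an alert only when detected motion falls inside a user-drawn zone. A minimal sketch with a hypothetical rectangular zone in frame coordinates:

```python
# Hypothetical rectangular trigger zone as (x1, y1, x2, y2) in frame
# coordinates; motion events arrive as (x, y) points from the detector.
PORCH_ZONE = (100, 200, 500, 480)

def should_alert(motion_xy, zone):
    """Alert only when the motion point lies inside the zone, so
    activity on the sidewalk (outside the zone) is ignored."""
    x, y = motion_xy
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

print(should_alert((300, 350), PORCH_ZONE))  # on the porch
print(should_alert((600, 100), PORCH_ZONE))  # out on the sidewalk
```

In the Blink app you draw these zones per camera; the same idea extends to multiple zones or arbitrary polygons.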
Right now, you can get a second camera for free when you buy this combo deal. That’s two cameras for $100, an $80 discount as part of Amazon’s Black Friday deals. Hurry before they sell out!
Blink Outdoor is a wireless security camera that you can easily set up at home. The 2-camera kit has a long-lasting battery and can withstand different elements when you place it outdoors.
This wireless 3-camera kit has a two-year battery life and has effective motion detection. It can be set up indoors and outdoors in a matter of minutes.
If you already have a Blink Outdoor camera system installed in your space, you might want to buy this add-on camera instead. Note that this add-on doesn’t include the Blink Sync Module.
Use this Blink Wireless Outdoor Add-on Camera to protect indoor or outdoor space with live video, 2-way talk, motion detection, and up to two-year battery life. Requires a Sync Module (not included).
We strive to help our readers find the best deals on quality products and services, and we choose what we cover carefully and independently. The prices, details, and availability of the products and deals in this post may be subject to change at any time. Be sure to check that they are still in effect before making a purchase.
Deep North, a Foster City, California-based startup applying computer vision to security camera footage, today announced that it raised $16.7 million in a series A-1 round. Led by Celesta Capital and Yobi Partners, with participation from Conviction Investment Partners, Deep North plans to use the funds to make hires and expand its services “at scale,” according to CEO Rohan Sanil.
Deep North, previously known as Vmaxx, claims its platform can help brick-and-mortar retailers “embrace digital” and protect against COVID-19 by retrofitting security systems to track purchases and ensure compliance with masking rules. But the company’s system, which relies on algorithms with potential flaws, raises concerns about both privacy and bias.
“Even before a global pandemic forced retailers to close their doors … businesses were struggling to compete with a rapidly growing online consumer base,” Sanil said in a statement. “As stores open again, retailers must embrace creative digital solutions with data driven, outcome-based computer vision and AI solutions, to better compete with online retailers and, at the same time, accommodate COVID-safe practices.”
Deep North was founded in 2016 by Sanil and Jinjun Wang, an expert in multimedia signal processing, pattern recognition, computer vision, and analytics. Wang — now a professor at Xi’an Jiaotong University in Xi’an, China — was previously a research scientist at NEC before joining Epson’s R&D division as a member of the senior technical staff. Sanil founded a number of companies prior to Deep North, including Akirra Media Systems, where Wang was once employed as a research scientist.
“In 2016, I pioneered object detection technology to help drive targeted advertising from online videos. When a major brand saw this, they challenged me to create a means of identifying, analyzing, and sorting objects captured on their security video cameras in their theme parks,” Sanil told VentureBeat via email. “My exploration inspired development that would unlock the potential of installed CCTV and security video cameras within the customer’s physical environment and apply object detection and analysis in any form of video.”
After opening offices in China and Sweden and rebranding in 2018, Deep North expanded the availability of its computer vision and video analytics products, which offer object and people detection capabilities. The company says its real-time, AI-powered and hardware-agnostic software can understand customers’ preferences, actions, interactions, and reactions “in virtually any physical setting” across “a variety of markets,” including retailers, grocers, airports, drive-thrus, shopping malls, restaurants, and events.
Deep North says that retailers, malls, and restaurants in particular can use its solution to analyze customer “hotspots,” seating, occupancy, dwell times, gaze direction, and wait times, leveraging these insights to figure out where to assign store associates or kitchen staff. Stores can predict conversion by correlating tracking data with the time of day, location, marketing events, weather, and more, while shopping centers can draw on tenant statistics to understand trends and identify “synergies” between tenants, optimizing for store placement and cross-tenant promotions.
“Our algorithms are trained to detect objects in motion and generate rich metadata about physical environments such as engagement, pathing, and dwelling. Our inference pipeline brings together camera feeds and algorithms for real-time processing,” Deep North explains on its website. “[We] can deploy both via cloud and on-premise and go live within a matter of hours. Our scalable GPU edge appliance enables businesses to bring data processing directly to their environments and convert their property into a digital AI property. Video assets never leave the premise, ensuring the highest level of security and privacy.”
Beyond these solutions, Deep North developed products for particular use cases like social distancing and sanitation. The company offers products that monitor for hand-washing and estimate wait times at airport check-in counters, for example, as well as detect the presence of masks and track the status of maintenance workers on tarmacs.
“With Deep North’s mask detection capability, retailers can easily monitor large crowds and receive real-time alerts,” Deep North explains about its social distancing products. “In addition, Deep North … monitors schedules and coverage of sanitization measures as well as the total time taken for each cleaning activity … Using Deep North’s extensive data, [malls can] create tenant compliance scorecards to benchmark efforts, track overall progress, course-correct as necessary. [They] can also ensure occupancy limits are adhered to across several properties, both locally and region-wide, by monitoring real-time occupancy on our dashboard and mobile apps.”
Like most computer vision systems, Deep North’s were trained on datasets of images and videos showing examples of people, places, and things. Poor representation within these datasets can result in harm — particularly given that the AI field generally lacks clear descriptions of bias.
Previous research has found that ImageNet and Open Images — two large, publicly available image datasets — are U.S.- and Euro-centric, encoding humanlike biases about race, ethnicity, gender, weight, and more. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. And because of how images of words like “wedding” or “spices” are presented in distinctly different cultures, object recognition systems can fail to classify many of these objects when they come from the Global South.
Bias can arise from other sources, like differences in the sun path between the northern and southern hemispheres and variations in background scenery. Studies show that even differences between camera models — e.g., resolution and aspect ratio — can cause an algorithm to be less effective in classifying the objects it was trained to detect.
Tech companies have historically deployed flawed models into production. ST Technologies’ facial recognition and weapon-detecting platform was found to misidentify black children at a higher rate and frequently mistook broom handles for guns. Meanwhile, Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny last May over its reportedly poor detection rates.
Deep North doesn’t disclose on its website how it trained its computer vision algorithms, including whether it used synthetic data (which has its own flaws) to supplement real-world datasets. The company also declines to say to what extent it takes into account accessibility and users with major mobility issues.
In an email, Sanil claimed that Deep North “has one of the largest training datasets in the world,” derived from real-world deployments and scenarios. “Our human object detection and analysis algorithms have been trained with more than 130 million detections, thousands of camera feeds, and various environmental conditions while providing accurate insights for our customers,” he said. “Our automated and semi-supervised training methodology helps us build new machine learning models rapidly, with the least amount of training data and human intervention.”
In a follow-up email, Sanil added: “Our platform detects humans, including those with unique gaits, and those that use mobility aids and assistive devices. We don’t do any biometric analysis, and therefore there is no resulting bias in our system … In the simplest terms, the platform interprets everything as an object whether it’s a human or a shopping cart or a vehicle. We provide object counts entering or exiting a location. Our object counting and reporting is not influenced by specific characteristics.” He continued: “We have a large set of labeled data. For new data to be labeled, we need to classify some of the unlabeled data using the labeled information set. With the semi-supervised process we can now expedite the labeling process for new datasets. This saves time and cost for us. We don’t need annotators, or expensive and slow processes.”
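Sanil doesn’t name the algorithm, but the semi-supervised loop he outlines (use the labeled set to classify unlabeled data, then fold the new labels back in so later points are easier to label) can be sketched with a toy nearest-neighbor self-trainer; the data and class names here are hypothetical:

```python
def nearest_label(point, labeled):
    """Label a point with the class of its nearest labeled neighbor."""
    return min(labeled, key=lambda p: abs(p[0] - point))[1]

def self_train(labeled, unlabeled):
    """Toy self-training: repeatedly pick the unlabeled point closest
    to the labeled pool (the most confident case), label it, and add
    it to the pool so it can help label the rest."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    while unlabeled:
        x = min(unlabeled,
                key=lambda u: min(abs(u - p[0]) for p in labeled))
        labeled.append((x, nearest_label(x, labeled)))
        unlabeled.remove(x)
    return labeled

# 1-D toy data: two clusters, each seeded with a single hand label.
seeds = [(0.0, "cart"), (10.0, "person")]
print(self_train(seeds, [1.0, 9.0, 2.0, 8.0]))
```

Real systems would use learned feature embeddings and confidence thresholds rather than raw 1-D distances, but the time-saving idea is the same: each newly labeled point helps label the next, cutting down on human annotation.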
Privacy and controversy
While the purported goal of products like Deep North’s is health, safety, and analytics, the technology could be co-opted for other, less humanitarian intents. Many privacy experts worry that such systems will normalize greater levels of surveillance, capturing data about workers’ movements and allowing managers to chastise employees in the name of productivity.
Deep North is no stranger to controversy, having reportedly worked with school districts and universities in Texas, Florida, Massachusetts, and California to pilot a security system that uses AI and cameras to detect threats. Deep North claims that the system, which it has since discontinued, worked with cameras with resolutions as low as 320p and could interpret people’s behavior while identifying objects like unattended bags and potential weapons.
Deep North is also testing systems in partnership with the U.S. Transportation Security Administration, which furnished it with a grant last March. The company received close to $200,000 in funding to provide metrics like passenger throughput, social distancing compliance, agent interactions, and bottleneck zones as well as reporting of unattended baggage, movement in the wrong direction, or occupying restricted areas.
“We are humbled and excited to be able to apply our innovations to help TSA realize its vision of improving passenger experience and safety throughout the airport,” Sanil said in a statement. “We are committed to providing the U.S. Department of Homeland Security and other government entities with the best AI technologies to build a safer and better homeland through continued investment and innovation.”
Deep North admitted in an interview with Swedish publication Breakit that it offers facial characterization services to some customers to estimate age range. And on its website, the startup touts its technologies’ ability to personalize marketing materials depending on a person’s demographics, like gender. But Deep North is adamant that its internal protections prevent it from ascertaining the identity of any person captured via on-camera footage.
“We have no capability to link the metadata to any single individual. Further, Deep North does not capture personally identifiable information (PII) and was developed to govern and preserve the integrity of each and every individual by the highest possible standards of anonymization,” Sanil told TechCrunch in March 2020. “Deep North does not retain any PII whatsoever, and only stores derived metadata that produces metrics such as number of entries, number of exits, etc. Deep North strives to stay compliant with all existing privacy policies including GDPR and the California Consumer Privacy Act.”
To date, 47-employee Deep North has raised $42.3 million in venture capital.
It’s time once again for the newest batch of Fortnite challenges, this time for season 7, week 8. Compared to the rest of this season, the challenges for week 8 are relatively straightforward, with only a few that should cause any issues. One you might be stuck on is emoting in front of a camera at Believer Beach or Lazy Lake. The act of emoting in front of a camera isn’t hard, but knowing where to find these specific locations is another story.
In this guide, we’ll show you where to find the cameras to complete the latest challenge. Here’s how to emote in front of a camera at Believer Beach or Lazy Lake in Fortnite.
Camera locations at Believer Beach and Lazy Lake
There are two main locations that feature cameras for this challenge: Believer Beach and Lazy Lake. As the challenge states, you can emote at either one, and the map above (courtesy of Fortnite.gg) makes it easy to see where the cameras are. Keep in mind that there are prerequisite challenges to finish before this one, such as using shield potions and building structures.
Since you only need to visit one location for this challenge, we recommend heading toward whichever one is closest to the path of the Battle Bus at the start of a match. That said, Believer Beach is often less busy than Lazy Lake, so scope things out before committing to one or the other.
We also recommend attempting this in Team Rumble so you can respawn if you get taken out early. Having a team come with you to watch your back is also a good idea, as it’s possible you could get taken out before being able to emote in front of the camera. That’s why it’s recommended to land nearby to get stocked up on weapons, potions, and other gear before making a beeline toward the cameras.
Below are the specific locations of each camera.
The one at Believer Beach is found on the southeastern side of the large dock in this area, just next to an orange umbrella.
As for the one at Lazy Lake, you’ll find it on the eastern portion of this area close to a pool. Look for it on the southeastern side, in the back area of a fancy house.
Once you arrive in either of these areas, stand in front of the camera and use any emote to gain credit for completing the challenge.
Dropbox is getting a slew of interface tweaks alongside new tools and features. One of the more useful inclusions is a new file conversion feature, which lets Dropbox users convert images between different formats like JPEG and PNG, and files into PDFs. Support for video conversion is coming soon. Dropbox is also adding new features to its password manager.
Automatic camera backups from mobile, which will now be available to users on Dropbox’s free tier, are also seeing some improvements. Dropbox says uploads should now be faster and more reliable, and on iOS there’s now an option to specify exactly which folders should be automatically backed up (the feature is coming to Android later this year). There’s also the option to have photos automatically deleted after they’re backed up to save space.
The service’s password manager, which it made available to free account holders in April, is being updated to store credit and debit card details while a password sharing feature announced in March is now rolling out. Dropbox’s password manager is available to both its free and paid account holders, but free accounts are limited to storing a maximum of 50 passwords.
Finally, on the web Dropbox is updating its side navigation interface, making it easier to organize files through dragging and dropping. The details pane is also being overhauled to offer more details on your files at a glance. Desktop users are also getting a simplified interface from their system tray, with “streamlined access to content, search, file activity, and sync progress.”
A lot has already been leaked about the Galaxy Z Fold 3, but one or two pieces remain uncertain. One of those, the Unpacked 2021 date, may have finally been settled by the alleged remarks of a Samsung executive. The other big mystery is the form the front-facing camera will take: either a typical punch-hole camera or Samsung’s first under-display camera (UDC). A new leak seems to lean toward the latter, and many are not amused.
Samsung has long been rumored to be developing its own UDC solution even before it yielded to the punch-hole trend. ZTE beat it to the punch with the Axon 20 5G, and that was probably for the best. Given the criticisms surrounding the first-gen UDC, Samsung may have been able to avoid making ZTE’s mistakes.
The company’s first UDC has been rumored on and off again for the Galaxy Z Fold 3, and even the leaks are split on whether Samsung will push through or not. Ice universe’s latest render of the foldable phone suggests that UDC is in, but it might not come in the same way that ZTE’s implementation did.
For one, the supposedly invisible area above the front-facing camera is circular rather than square. For another, it is actually more visible and noticeable than ZTE’s attempts to hide that area of the screen, a design that isn’t sitting well with some people on Twitter. That said, it is an unofficial render and might not reflect the actual device.
The render also “confirms” some disappointing news about the Galaxy Z Fold 3’s other cameras. None of the three on the back show telltale signs of a periscope-style lens, limiting the phone’s telephoto capabilities. The Galaxy Z Fold 3 could follow in the footsteps of the Galaxy S21 or Galaxy S21+, both of which have only three cameras on their backs.
Unlike Huawei, ZTE has been left free to continue its smartphone business unobstructed and even sell some of its phones in the US. More than that, the company was brave enough to push forward with one of the holy grails of smartphone design. The ZTE Axon 20 5G’s under-display camera (UDC) didn’t really live up to expectations, but the company has now confirmed that a next-gen version is coming soon under the Axon 30 flagship name.
When it revealed what would be the world’s first phone with a UDC last year, ZTE was quite forthcoming with the innovations it made in that arena. In contrast, it has been pretty silent this year, aside from confirming that it was indeed working on a second-gen UDC when everybody thought it had abandoned that venture. There has been a recent rumor that its second attempt will be arriving soon, and ZTE has just confirmed and teased that it will indeed happen this year.
It simply said that a new generation of camera phones with UDC technology is coming soon and that it will bear the Axon 30 name. It also teased a render of what the phone would look like, which is unsurprisingly almost 100% screen on the front. That said, it is just a marketing render so the absence of a visible “patch” above the UDC sensor should be taken with a grain of salt.
The teaser doesn’t give an exact date for the Axon 30 5G with UDC’s announcement, but ZTE does put a timestamp on another Axon 30 variant coming this week. On July 9 at 10 AM local time, ZTE will unveil a configuration of the Axon 30 Ultra with 16GB of RAM and a whopping 1TB of storage. Details are still thin on this new model, especially its global availability, but it won’t be too long a wait to get the official word.
As for the ZTE Axon 30 5G with UDC, rumors claim it will debut on July 22. Whether it becomes available in global markets, particularly the US, is still a matter of debate. More importantly, it remains to be seen, literally, if it will indeed bring the promised improvements to the UDC technology co-developed with Visionox.
Many use their smartphones to capture anything anywhere, sometimes even in difficult or dangerous locations and positions. One thing our powerful smartphones will never be able to do is to take aerial photos, at least not without some help from a drone or any other flying contraption. Never say never, as they say, and Vivo seems to be thinking of something to solve that problem. Its latest patent shows a smartphone that houses an honest-to-goodness quadcopter drone to take pictures from above, almost like in the movies.
As many patents usually are, the idea is simple, but its implementation will surely become the stuff of engineering and manufacturing legend if successful. The image of a drone sliding out of your phone to take pictures from places you can’t safely go will probably trigger images of sci-fi or spy flicks where everyday things hide other, not so ordinary things.
According to the patent reported by LetsGoDigital, the hidden camera system includes four propellers, three infrared sensors, two cameras, and its own battery. One of the cameras faces forward while the other faces upward, though admittedly, it would be more useful if it were facing the ground. The infrared sensors face the remaining three cardinal directions and are most likely used for obstacle avoidance and navigation.
The imagery it evokes may sound fantastic, but it will probably be a nightmare to pull off. Even the smallest camera drone today is thicker than the thickest smartphone, especially when you include the propellers. Having something like that inside a smartphone also means less room for other essential components, especially the battery. The patent doesn’t really say whether the drone camera system has any use while docked inside, which implies it’s wasted space when not in use.
Of course, it’s just a patent with no assurance of being made into an actual product, but the idea is interesting and intriguing nonetheless. There’s no telling where we will be 10 or 20 years from now regarding smartphone and drone technologies. It might become a more plausible and feasible product in the not-so-distant future, and, when it does, Vivo already holds the patent to own it.
ZTE impressed the world when it announced the industry’s first smartphone with an under-display camera, or UDC. The ZTE Axon 20 5G, unfortunately, didn’t impress reviewers when it came to that camera’s actual performance. When the company announced the Axon 30 series last April with a traditional punch-hole cutout, most presumed ZTE had given up on the technology. It turns out it may have just been biding its time to show off the hard lessons it learned from last year’s flagship.
UDCs are a sort of holy grail for smartphone makers, allowing them to push out almost all bezels without resorting to tricks like popup cameras or cutting out a part of the screen. The problem is that manufacturers have to balance screen quality against letting enough light pass through to the camera underneath. For its first attempt, the Axon 20 didn’t perform well on either count, based on reviews.
There was a very visible square area above the camera where the screen density was noticeably lower than the surrounding area. That may have been negligible compared to the disappointing quality of photos and videos that the front under-display camera produced. It was a brave first try, and it almost seemed as if ZTE had gone back to the drawing board.
A new report from Chinese media now claims that isn’t the case and that the Axon 30, ZTE’s latest flagship, will have a new variant with an improved UDC. The company’s engineers have supposedly figured out how to eliminate the visual effects of having an active screen area on top of the camera. In addition, the new module is also said to improve color reproduction, one of the biggest complaints about the ZTE Axon 20 5G’s front-facing camera.
According to the report, ZTE will announce the UDC variant of the ZTE Axon 30 this July. It will definitely be interesting to see how much it has improved, if at all, but it remains unknown whether the company will also launch it in global markets as it did the other Axon 30 models.