American government officials told The Washington Post that the Russian military was responsible for a cyberattack on a European satellite internet service that affected Ukrainian military communications in late February.
The hack affected the KA-SAT satellite broadband network, owned by Viasat, an American satellite communications company. On February 24th, the day the Russian invasion of Ukraine began, the KA-SAT network was hit by outages that affected Ukraine and surrounding regions in Europe. A few days afterward, Viasat blamed the outages on a “cyber event,” but did not release further details.
The NSA was reported to be collaborating on an investigation with Ukrainian intelligence services, but no results have been officially announced. However, anonymous officials reportedly told the Post that US intelligence analysts have now concluded that Russian military hackers were behind the attack.
A request for confirmation sent by The Verge to the Cybersecurity and Infrastructure Security Agency (CISA) had not received a response by the time of publication.
Officials from Viasat told Air Force Magazine that the attack was conducted through a compromise of the system that manages customer satellite terminals, and only affected customers of the KA-SAT network, a smaller broadband provider that Viasat bought last year from French satellite operator Eutelsat.
At the outset of the conflict, commentators feared that Russia could launch widespread and destructive cyberattacks. While one perspective holds that such attacks have failed to materialize, the slow release of additional information gives credence to the suggestion that many attacks may have occurred in the shadows.
In the aftermath of the hack, CISA and the FBI issued a joint cybersecurity advisory to satellite communications providers, warning that the agencies were aware of possible threats to US and international networks, and advising companies to report any indications of malicious activity immediately.
As the war in Ukraine continues — and US opposition to Russia grows in the form of sanctions — the Biden administration has issued increasingly serious warnings about the possibility of Russian cyberattacks on US infrastructure.
On Monday, President Biden advised US businesses to take added precautions against hacking, citing “evolving intelligence” that Russia was preparing to target the US with cyberattacks. Then on Thursday, the Department of Justice unsealed indictments against four Russians accused of mounting state-sponsored cyberattacks against the US, publicly releasing details of a highly sophisticated hacking campaign involving supply-chain software compromises and spear-phishing campaigns against thousands of employees of companies and US government agencies.
The scare is over and the fun can begin. That’s how I tend to think of creative endeavors involving artificial intelligence these days. We’ve moved past, I think, hyperbolic claims about AI making human art redundant and can now enjoy all the possibilities this technology affords. In that light, Shadow Planet — a new album made as a three-way collaboration between two humans and AI — shows exactly what sort of fun can be had.
Shadow Planet is the creation of writer Robin Sloan, musician Jesse Solomon Clark, and Jukebox, a machine learning music program made by OpenAI. After an off-the-cuff Instagram conversation between Sloan and Clark about starting a band (named The Cotton Modules), the two began exchanging tapes of music. A seasoned composer, Clark sent seeds of songs to Sloan who fed them into Jukebox, which is trained on a huge dataset of 1.2 million songs and tries to autocomplete any audio it hears. The AI program, steered by Sloan, then built on Clark’s ideas, which Sloan sent back to him to develop further.
The end result of this three-way trade is Shadow Planet, an atmospheric album in which snippets of folk songs and electronic hooks emerge like moss-covered logs from a fuzzy bog of ambient loops and disintegrating samples. It is a complete album in and of itself: a pocket musical universe to explore.
As Sloan explained to me in an interview over email, the sound of Shadow Planet is in many ways a result of the limitations of Jukebox, which only outputs mono audio at 44.1kHz. “Making this album, I learned that this kind of AI model is absolutely an ‘instrument’ you need to learn to play,” he told me. “It’s basically a tuba! A very… strange… and powerful… tuba…”
It’s this sort of emergent creativity, when machines and humans respond to limitations and advantages in one another’s programming, that makes AI art so interesting. Think about how the evolution from the harpsichord to the piano affected styles of music, for example, as the ability of the latter to play loudly or softly (rather than at the harpsichord’s single fixed dynamic) engendered new musical genres. This, I think, is what’s happening now with a whole range of AI models that are shaping creative output.
You can read my interview with Sloan below, and find out why working with machine learning felt to him “like wandering in an enormous labyrinth.” And you can listen to Shadow Planet on Spotify, Apple Music, iTunes, Bandcamp, or on Sloan and Clark’s website.
This interview has been lightly edited for clarity.
Hey Robin, thanks for taking the time to speak to me about this album. First of all, tell me a little bit please about what material Jesse was sending you to start this collaboration? Was it original songs?
Yes! Jesse is a composer for commercials, films, and physical installations — he wrote the generative soundtrack that runs inside the visitor center at Amazon’s Spheres in Seattle. So he’s well-accustomed to sitting down and producing a bunch of musical options. Each tape I received from him had around a dozen small “songlets” on it, some only 20-30 seconds long, others a few minutes, all different, all separated by a bit of silence. So, my first task was always to listen through, decide what I liked best, and copy that to the computer.
And then you fed those into an AI system. Can you tell me a little bit about that program? What was it and how does it work?
I used OpenAI’s Jukebox model, which they trained on ~1.2 million songs, 600K of them in English; it operates on raw audio samples. That’s a huge part of the appeal for me; I find the MIDI-centric AI systems too… polite? They respect the grid too much! The sample-based systems (which I’ve used before, in different incarnations, including to make music for the audiobook of my last novel) are crunchier and more volatile, so I like them better.
To sample the Jukebox model, I used my own customized code. The technique OpenAI describes in their publication is very much like, “Hey, Jukebox, play me a song that sounds like The Beatles,” but I wanted to be able to “weird it up,” so my sampling code allows me to specify many different artists and genres and interpolate between them, even if they don’t have anything in common.
And that’s all just the setup. The sampling process itself is interactive. I’d always start with a “seed” from one of Jesse’s tapes, which would give the model a direction, a vibe to follow. In essence, I’d say to the model: “I’d like something that’s a mix of genre X and Y, sort of like artists A and B, but also, it’s got to follow this introduction: <Jesse’s music plays>”
I’d also, in some cases, specify lyrics. Then, I would go about eight to 10 seconds at a time, generating three options at each step — the computer churns for five to 10 minutes, FUN — then play them back, select one, and continue ahead… or sometimes reject all three and start over. In the end, I’d have a sample between 60-90 seconds long, and I’d print that to tape.
It was, to be honest, an extremely slow and annoying process, but the results were so interesting and evocative that I was always motivated to keep going!
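The workflow Sloan describes can be sketched as a simple human-in-the-loop generation algorithm. This is an illustrative reconstruction, not OpenAI’s actual sampling code; `generate_candidates` is a stand-in for a real Jukebox sampling call, and the placeholder audio buffers exist only so the loop runs end to end:

```python
import random

SAMPLE_RATE = 44_100  # Jukebox outputs mono audio at 44.1 kHz

def generate_candidates(context, n=3, seconds=8):
    """Stand-in for a real Jukebox sampling call: returns n candidate
    continuations of the audio in `context`, each `seconds` long.
    Here we just fabricate placeholder sample buffers."""
    length = seconds * SAMPLE_RATE
    return [[random.uniform(-1, 1) for _ in range(length)] for _ in range(n)]

def curate(seed, target_seconds=60, pick=lambda options: options[0]):
    """Grow a piece from `seed` one chunk at a time, choosing among
    three candidates at each step, until it reaches the target length."""
    piece = list(seed)
    while len(piece) < target_seconds * SAMPLE_RATE:
        options = generate_candidates(piece)
        choice = pick(options)      # human-in-the-loop selection
        if choice is None:          # reject all three and start over
            continue
        piece.extend(choice)
    return piece

seed = [0.0] * (20 * SAMPLE_RATE)   # a 20-second "songlet" from tape
song = curate(seed, target_seconds=60)
print(len(song) / SAMPLE_RATE)      # at least 60 seconds of audio
```

The `pick` callback is where the slow, manual part of the process lives: in Sloan’s description, each call to the model took five to ten minutes of compute before he could even audition the three options.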
What did Jesse think about the material you were sending him?
He underscores that working with the material was often VERY difficult. Weird instruments would rise up out of nowhere, or the key would change in a strange way, etc. But I think that was part of the fun, too, and the reason to do this project at all: each sample I sent him was a puzzle to solve.
Ultimately, his work was both responsive — “how do I support this sample, help it shine” — and transformative — “what kind of song should this be?” That’s evident on all the songs, but a clear example is “Magnet Train,” where Jesse took pains to showcase and support the vocal performance (weird and dorky and great) and then extended it with elements that suggest “train-ness” — the chugging percussion, etc.
And how exactly did you home in on this particular sound, do you think? What pushed you in this direction?
Oh, it was definitely the grain of the medium. Early on, I told Jesse that although the model could produce sound at 44.1kHz, it was only in mono. His response was: “Cool! Let’s use mono cassettes then.” And the music he sent back to me was mono, as well. In his final production pass, he added a bit of stereo width, just so the songs weren’t all totally locked in the center, but it’s a pretty “narrow” album generally, and that’s totally because of the AI’s limitation, which we decided to embrace and extend rather than fight. Same goes for the lo-fi, grainy, “radio tuned to a ghost channel” sound — totally an artifact of the way the model produces music, which we amplified further by bouncing the music to tape so many times.
So, in the finished songs that we’re hearing, what proportion of the music is made by AI and what by human? Is it even possible to make that distinction?
It really does vary widely from song to song, and the truth is, in some cases, we lost track! I’d start with a phrase from Jesse, put it through my sampling process, send it back to him, he’d add a layer or extend it, send it back to me, I’d put it BACK through the sampling process… what’s the human / AI breakdown there? It’s all sort of mixed and layered together.
There’s one division that’s clear: anytime you hear anything that sounds like a human voice, whether it’s enunciating lyrics clearly or sort of ooh-ing and ahh-ing, that voice is generated by the AI.
Making this album, I learned that this kind of AI model is absolutely an “instrument” you need to learn to play. And I’ve come to believe that analogy is a lot more useful and generative than like, “AI co-composer” or “automatic AI artist” or whatever other analogy you might have heard or can imagine. It’s basically a tuba! A very… strange… and powerful… tuba…
Haha, right! I’ve spoken to quite a few artists who use machine learning models to make songs or books, and they often talk about the dynamic between them and the AI — whether it was pushing them in a given direction, for example. Did it feel like this for you at all, when you were exploring what music Jukebox could give you?
I love this question, and here’s why: previously, I have been pretty skeptical / critical of the “big [AI] models trained on everything,” even as they’ve risen to prominence. This is a class that includes GPT-3, Jukebox, CLIP, VQGAN, etc. It’s very clear that this approach produces powerful results, but I always thought it was more creatively interesting to take responsibility for your own dataset, understand its composition as a key creative decision, etc. And I still think that’s true, to a degree…
The experience of using Jukebox really turned me around on this. For me, it has felt like wandering in an enormous labyrinth or a dead city: huge, full of alleys and arcades. Even now, having used it for so long, I have no idea what’s still waiting in there, what can be found and carried out. Obviously, I’m betraying the fact that I’ve played too many RPGs here… but truly! That’s the feeling, and it’s VERY fun.
With that in mind, then, what do you think making this album with Jesse taught you about the future of AI and creativity? What do you think these systems will be doing in the future?
AI techniques can do a whole bunch of different things for different kinds of artists, of course, but regarding this specific category (the generative model that can produce new music, new sounds), it seems TOTALLY clear to me that these are on the path to becoming a new kind of synthesizer or electric guitar. I think the story will be broadly similar — they’ll go from research project to novelty (which is where we’re at now) to tools for nascent virtuosos (it’s exciting to think about getting to that point!) to commonplace participants in any / every studio.
The arrival of USB-C and USB-C laptop chargers has been a game-changer for many electronics. You can use the connection to charge devices and transfer media, and it’s conveniently reversible. Top laptops no longer need a big power brick — you can just use USB-C.
But there are some precautions you should take when charging over USB-C. Here’s everything you need to know.
USB charging and laptops
You have probably already used USB connections to charge smaller devices either from your computer or from an outlet. That works well because past USB connections had enough wattage to successfully power up those smaller batteries. Prior versions of USB could only handle a limited amount of power, which is why laptop chargers have typically retained their larger, bulkier cables.
USB-C changed that. This type of connection now provides enough power to juice up most laptops (particularly the Type-C 3.0 version). That’s why laptop charging is a new topic of conversation for USB connections, especially now that more laptops are entering the market with USB-C charging compatibility.
So, how do you know if your current laptop has a USB-C port that also works with charging? You could always look it up, but the easiest way is to simply examine your charger. You can identify a Type-C charger by its unique features. USB-C’s connector is small and rounded, significantly different from the old USB version. It also works no matter which way you connect it to the right port, so there’s no need to flip it the right way around. If your charger uses this connection and plugs into your USB-C port, you have a winner!
USB-C chargers are turning up on many different devices, so they’re not strictly relegated to laptops. Newer phones use USB-C charging, car chargers may come with USB-C ports, and portable chargers (discussed more below) can have USB-C options. But laptops tend to have USB-C more frequently than other devices, at least for now.
Will any port work with any charger?
USB-C is a universal charging standard. That means that, technically, it doesn’t matter which USB-C charger you use — it should be able to power up any laptop (or power bank) with a USB-C charging port. In the real world, this is taking a while to come true. Today the majority of laptop USB-C chargers are interchangeable, but that’s not guaranteed.
Laptops that rely entirely on USB-C, meanwhile, might not charge with just any charger. PCWorld, in its testing, found that HP’s Spectre x2 wouldn’t charge with any USB-C charger besides its own. HP said that that was intentional because a bad charger could damage the device or cause it to malfunction. Other devices, like the Apple MacBook Pro, don’t have such tight restrictions — a new USB-C authentication system could help with this issue in the near future.
While we haven’t personally heard of any damage from using a USB-C charger other than the one that came with your laptop, there’s always a slim risk when plugging a laptop into an unknown power source, and faulty cables can also be a problem. Microsoft acknowledges as much: you can charge a Surface through its USB-C port, but the company highly recommends using the Surface charger that came with the laptop, because charging over USB-C can end up being very slow due to how power is metered within the device. In short, it’s a good idea to buy cables and chargers from reputable sources and to think twice about using that cable you found lying on the ground in a conference room.
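Under the hood, what makes one USB-C charger “work” and another not is largely USB Power Delivery negotiation: the charger advertises the voltage and current combinations it can supply, and the laptop picks one that meets its needs. A simplified sketch of that idea (the profile numbers are typical illustrative values, not pulled from any specific charger, and the real protocol negotiates over the cable’s CC wire):

```python
def best_profile(charger_pdos, device_min_watts):
    """Pick the highest-wattage (voltage, current) profile a charger
    offers and report whether it meets the device's minimum draw.
    A simplified model of USB Power Delivery source capabilities."""
    best = max(charger_pdos, key=lambda p: p[0] * p[1])
    watts = best[0] * best[1]
    return best, watts, watts >= device_min_watts

# Typical fixed profiles for a 100 W USB-C PD charger (illustrative):
# 5 V / 3 A, 9 V / 3 A, 15 V / 3 A, 20 V / 5 A.
pdos = [(5, 3.0), (9, 3.0), (15, 3.0), (20, 5.0)]
profile, watts, ok = best_profile(pdos, device_min_watts=65)
print(profile, watts, ok)  # (20, 5.0) 100.0 True
```

This is also why a phone charger may fail to charge a laptop: its best profile simply doesn’t reach the laptop’s minimum wattage, so the laptop charges slowly or not at all.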
It’s worth buying an additional USB-C cable or two from a reputable brand, for peace of mind.
Your settings are important
You will also want to watch your USB-C power mode, which you can find in your laptop settings, usually in the power/battery section. Here, you may find that you can switch Type-C’s output, choosing whether to receive power or send power. If your USB-C laptop charger isn’t working when by all rights it should be, check your settings to make sure it is set to receive power.
Pass-through: Today’s charging hubs also offer something called pass-through charging. This refers to a hub that can connect multiple USB peripherals to a laptop while also charging that laptop’s battery. Basically, it’s a combination of a hub and a USB-C charging cable, so you don’t need two different USB-C ports for each task. HooToo’s popular 6-in-1 USB-C Hub is a good example of this technology. There are other innovative options, too, like sending video to an external monitor while also making sure your laptop stays charged during a presentation.
Portable chargers: Maybe you have a portable charger with an old USB connection but a phone that only has a USB-C connection (whose connector won’t fit older USB ports). You can link the charger to a laptop that has both types of ports and use it to power up your phone in a roundabout way. Do not try to use your laptop’s Type-C charger to charge your phone directly. The voltage requirements are too different for most devices currently available, and multi-device chargers are still a nascent part of the market.
Thunderbolt 3 allows for a number of charging arrangements: it can “chain” up to six devices together and charge (at up to 100W) while transmitting data at higher speeds than USB-C alone.
Since Thunderbolt 3 uses the same connection as USB-C, manufacturers are quick to label laptop ports if they also have Thunderbolt 3 technology — you can usually tell just by reading their descriptions. However, Thunderbolt 3 connections only work if you use the right cables, so make sure you have Thunderbolt 3 cables if you want to take advantage of those ports’ full features and bandwidth.
For Apple users
Apple users should know that they aren’t immune to the USB-C transition we’re seeing. Currently, Apple uses Lightning cables for iOS charging, a proprietary design that silos off all Apple mobile charging accessories. The company has already switched to USB-C charging for MacBooks, and today’s iPad Pros also use USB-C charging, leaving iPhones (and older iPads) increasingly alone in their reliance on the Lightning cable.
Apple will have to rethink its current Lightning cable situation to comply with the EU’s expected mandate for a common USB-C charging standard; failure to comply could cost it access to valuable European buyers. Combine that with speculation that Apple may ditch charging cables entirely in favor of a portless setup (most likely using a magnetic attachment on the back of the iPhone), and we may see the end of the Lightning cable in the next couple of years.
Furthermore, a recent rumor suggests that Apple will forgo its Lightning port for all its devices, starting with its iPhone range from 2025 onward. Another report also highlights how the company is reportedly preparing to remove the port entirely from the AirPods’ charging case, mice, etc.
The future of USB-C charging
Current USB options are about to receive a major overhaul. According to industry insiders, not only will USB deliver its fastest speeds ever, but it will also be universally compatible across all brands of devices. The dawn of 2021 could see an entirely new generation of laptops that sport the new USB4 port. Of course, USB4 adds yet another layer of complexity when it comes to USB protocols. There’s even more to know about the new tech:
USB4 will be available on USB-C ports.
USB4 increases speeds and charging power to match Thunderbolt 3, up to 100W.
USB4 will require cables formatted for USB4 to enable its new features.
USB4 will be backward compatible with USB 2.0.
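To put that 100W figure in context, here’s a back-of-the-envelope charging-time estimate. The battery capacity and efficiency numbers are assumptions for illustration, not USB4 spec values:

```python
def charge_hours(battery_wh, charger_watts, efficiency=0.9):
    """Rough time to fill a battery from empty, assuming a constant
    draw and ~90% charging efficiency. Real charging tapers off near
    full, so treat this as a lower bound."""
    return battery_wh / (charger_watts * efficiency)

# An assumed ~60 Wh laptop battery on a 100 W USB4/Thunderbolt charger,
# versus the 15 W (5 V / 3 A) ceiling of basic, non-PD USB-C.
print(round(charge_hours(60, 100), 2))  # ~0.67 hours
print(round(charge_hours(60, 15), 2))   # ~4.44 hours
```

The gap between those two numbers is the practical difference the higher-power modes make for laptop-class batteries.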
We’re guessing that other ports may start giving way to USB-C, which will take over as the primary port. In the future, USB4 and Thunderbolt will likely be the standard.
Frequently Asked Questions
Do all USB-C ports allow charging?
Although USB-C is increasingly the go-to standard for charging, that doesn’t necessarily mean all USB-C ports allow charging.

Certain laptops — generally the systems that ship with their own dedicated charger — feature USB-C ports that don’t offer charging capabilities.
Can I charge my laptop via USB-C?
Generally speaking, if your laptop supports USB-C charging, you should have no problem charging your system via USB-C. That said, some mobile systems (notebooks and laptops) require a specific cable made exclusively to be compatible with that system.
Is USB-C charging better?
For most devices, yes. Due to the advanced technical capabilities of the standard, charging your system over USB-C will generally be faster than, say, an older charging connection.
Can USB-C let you charge your laptop without a charger?
Yes and no. Provided your notebook or laptop comes with a USB-C port that was specifically designed to support charging, you can charge it without its regular charger. However, as mentioned above, not all laptops offer USB-C ports that you can use to charge your device.
Even with the Apple Car project seemingly back on track and set to deliver an EV with potentially full self-driving capabilities, the project team continued to lose key members to rival car companies, at least based on information that leaked over time. The most notable of these was Doug Field, a former Apple VP of Mac hardware who had been a key re-hire from Tesla and eventually became the fourth senior executive in six years to lead the Apple Car project, only to leave Apple once again. Field bailed out to head up Ford’s EV technology efforts. Seemingly hoping things might be able to run like clockwork, Apple tapped Kevin Lynch – who had been leading its Apple Watch software efforts — to take over the job, as reported by Bloomberg.
Lynch is clearly determined to stabilize the project, and it was made quite clear that Apple is still aiming to deliver a fully-fledged vehicle with the recent hire of Desi Ujkashevic, a 31-year veteran of Ford, which was also first reported by Bloomberg. Ujkashevic’s LinkedIn profile reveals she is a highly skilled engineer who has headed various Ford global engineering teams covering areas including interior design, chassis design, exterior design, and electrical design. Her arrival at Apple is the strongest sign in recent times that the company is steadfast in its desire to ensure the truckload of money it has undoubtedly sunk into the project will yield an EV that will (hopefully) go on sale to the public.
Nvidia announced that its network has been compromised, and the company is in the process of investigating a potential cybersecurity breach that took down the chipmaker’s systems over the last two days. The latest cybersecurity concern adds to Nvidia’s problems with chip shortages during the global pandemic, which have affected the entire semiconductor industry.
News of Nvidia’s compromised internal systems was initially reported by British publication The Telegraph, which noted that it was likely a cyberattack.
“We are investigating an incident,” Nvidia told the publication. “We don’t have any additional information to share at this time.”
According to the report, the company’s developer tools and email system suffered outages over the past two days, leading to speculation that Nvidia’s network had been hit by a cyberattack. Other parts of Nvidia’s internal network may have been affected as well, with The Telegraph reporting that the company’s network was entirely compromised.
If accurate, this could mean that confidential and proprietary information may have been unlawfully accessed. Of concern is that a malicious attacker could have injected malware onto Nvidia’s systems. There is speculation that ransomware may have been installed on Nvidia’s servers, though this has not yet been confirmed by the company. It is also unclear at this point what, if any, data may have been improperly accessed or deleted from the company’s network.
The Telegraph did state that some email systems were up and running on Friday. Nvidia’s website and home page are currently up and running.
At this time, it’s unclear who initiated the cyberattack on Nvidia, but the report from The Telegraph notes that the timing lines up with the invasion of Ukraine by Russia. Ahead of the situation in Ukraine, the U.S. government had issued warnings to technology companies about potential cyberattacks, but there’s been no confirmation so far that this was a state-sponsored incident.
According to VideoCardz, in addition to investigating the attack, Nvidia now has the daunting task of ensuring that its systems are clean and that malware doesn’t slip into software and products shipped to consumers, including the company’s graphics cards and chips for autonomous driving, servers, and supercomputers. It’s unclear how long Nvidia’s investigation will take and whether law enforcement will ultimately step in.
Prior to the cyberattack, Nvidia was most recently in the news for abandoning its acquisition proposal for chipmaker ARM. Nvidia had initially proposed a $40 billion takeover of ARM from Japanese conglomerate SoftBank, but the deal ultimately fell through due to regulatory concerns.
Robotics is a diverse industry with many variables. Its future is filled with uncertainty: nobody can predict which way it will develop or which directions will lead a few years from now. Robotics is also a growing sector of more than 500 companies working on products that can be divided into four categories:
Conventional industrial robots,
Stationary professional services (such as medical and agricultural applications),
Mobile professional services (construction and underwater activities),
Automated guided vehicles (AGVs) for carrying small and large loads in logistics or assembly lines.
According to the International Federation of Robotics data, 3 million industrial robots are operating worldwide – the number has increased by 10% over 2021. The global robotics market is estimated at $55.8 billion and is expected to grow to $91.8 billion by 2026 with a 10.5% annual growth rate.
Biggest industry challenges
The field of robotics is facing numerous issues based on its hardware and software capabilities. The majority of challenges surround facilitating technologies like artificial intelligence (AI), perception, power sources, etc. From manufacturing procedures to human-robot collaboration, several factors are slowing down the development pace of the robotics industry.
Let’s look at the significant problems facing robotics:
Real-world environments can be challenging for robots to comprehend and act on appropriately. Robots are no match for human reasoning in open-ended situations, so robotic solutions are not entirely dependable.
There has been considerable progress in robots’ perception and navigation of their environments — self-driving vehicles, for example. Navigation solutions will continue to evolve, but future robots need to be able to work in environments that are unmapped and not fully understood.
Full autonomy remains impractical and distant for now, but we can reason about energy autonomy. Our brains require a great deal of energy to function; without evolutionary mechanisms optimizing that consumption, they could never have reached current levels of human intelligence. The same applies to robotics: the more power a robot requires, the less autonomous it can be.
Elaborate hardware is crucial to today’s robots. Massive work still needs to be performed with artificial muscles, soft robotics, and other items that will help to develop efficient machines.
The above challenges are not unique, and they are generally expected for any developing technology. The potential value of robotics is immense, attracting tremendous investment that focuses on removing existing issues. Among the solutions is collaborating with artificial intelligence.
Robotics and AI
Robots have the potential to replace about 800 million jobs globally in the future, making about 30% of all positions irrelevant. Unsurprisingly, only 7% of businesses do not currently employ AI-based technology, and even those are looking into it. However, we need to be careful when discussing robots and AI, as the two terms are often assumed to be identical, which they are not.
Artificial intelligence, by definition, is about enabling machines to perform complex tasks autonomously. Tools based on AI can solve complicated problems by analyzing large quantities of information and finding dependencies not visible to humans. We at ENOT.ai have featured six cases in which improvements in navigation, recognition, and energy consumption reached between 48% and 800% after applying AI.
While robotics is also connected to automation, it draws on other fields as well: mechanical engineering, computer science, and AI. AI-driven robots can perform functions autonomously using machine learning algorithms. AI robots can be described as intelligent automation applications in which robotics provides the body while AI supplies the brain.
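The “body plus brain” description maps onto the classic sense-think-act control loop. Here is a toy sketch of that loop; the one-dimensional world and lookup-table policy are invented purely for illustration, and a real AI robot would replace `think` with a learned model:

```python
def sense(world, position):
    """The 'body': read the environment around the current position."""
    return world.get(position, "clear")

def think(observation):
    """The 'brain': a stand-in policy mapping observations to actions.
    In an AI robot this would be a machine learning model."""
    return {"clear": "forward", "obstacle": "turn"}[observation]

def act(position, action):
    """The 'body' again: execute the chosen action."""
    return position + 1 if action == "forward" else position

# Run a few iterations over a tiny 1-D world with an obstacle at cell 2.
world, position = {2: "obstacle"}, 0
for _ in range(4):
    position = act(position, think(sense(world, position)))
print(position)  # the robot advances until it reaches the obstacle
```

The division of labor in the article’s phrasing is visible here: `sense` and `act` belong to robotics, while `think` is the slot AI fills.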
AI applications for robotics
The combination of robotics and AI naturally lends itself to serving human needs. Numerous valuable applications have been developed so far, starting with household use. For example, AI-powered vacuum cleaners have become a part of everyday life for many people.
However, much more elaborate applications are developed for industrial use. Let’s go over a few of them:
Agriculture. As in healthcare or other fields, robotics in agriculture will mitigate the impact of labour shortages while offering sustainability. Many apps, for example, Agrobot, enable precision weeding, pruning, and harvesting. Powered by sophisticated software, apps allow farmers to analyze distances, surfaces, volumes, and many other variables.
Aerospace. While NASA is looking to improve its Mars rovers’ AI and working on an automated satellite repair robot, other companies want to enhance space exploration through robotics and AI. Airbus’ CIMON, for example, is developed to assist astronauts with their daily tasks and reduce stress via speech recognition while operating as an early-warning system to detect issues.
Autonomous driving. After Tesla, you cannot surprise anybody with self-driving cars. Nowadays, there are two critical cases: self-driving robo-taxis and autonomous commercial trucking. In the short-term, advanced driver-assistance systems (ADAS) technology will be essential as the market gets ready for complete autonomy and seeks to gain profits from the technology capabilities.
With advances in artificial intelligence coming on in leaps and bounds every year, it’s certainly possible that the line between robotics and artificial intelligence will become more blurred over the coming decades, resulting in a rocketing increase in valuable applications.
Main market tendency
The competitive field of artificial intelligence in robotics is becoming more fragmented as the market grows, providing clear opportunities to robot vendors. Companies are eager to seize the first-mover advantage and grab the opportunities opened up by these technologies. Vendors also view expansion, in terms of product innovation and global reach, as a path toward gaining maximum market share.
However, the market clearly needs more players. Robotics’ potential to take over routine human work promises to be highly consequential, freeing people’s time for creativity, and more players would speed up that process.
Future of AI in robotics
Artificial intelligence and robotics have already become a concrete target for business investment. This technology alliance will change the world, and we can expect to see it happen in the coming decade. AI allows robotic automation to improve and perform complicated operations with far fewer errors. Both industries are a driving force of the future, and we will see many astounding AI-based technological inventions in the next decade.
Sergey Alyamkin, Ph.D. is CEO and founder of ENOT.
HP is sprucing up its gaming laptop lineup with an upgrade for an existing model and the addition of a new system. The company is refreshing the Omen 16, which it introduced almost exactly a year ago, with thermal upgrades.
The cooling optimizations include a fifth heat pipe and fourth outflow vent. HP said it found in testing that these can reduce GPU hinge and bottom SSD temperatures by three percent and 14 percent, respectively, while making the laptop five percent quieter. As such, GPU and CPU performance has been boosted, the company claims.
A Dynamic Power feature in the Omen Gaming Hub can help with that as well. HP says that, using a built-in IR thermopile sensor, it can monitor CPU and GPU capacity in real time and allocate power as needed. Not only will this optimize in-game framerates, according to the company, it should boost CPU performance by up to 36 percent compared with the 2021 Omen 16.
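HP hasn’t published how Dynamic Power actually makes its decisions, but the general idea of splitting a shared power budget according to each chip’s thermal headroom can be sketched, purely illustratively, like this (the function, limits, and wattages here are hypothetical, not HP’s):

```python
def allocate_power(total_watts, cpu_temp_c, gpu_temp_c, temp_limit_c=95):
    """Split a shared power budget in proportion to each chip's thermal headroom.

    Illustrative toy only: real firmware considers far more inputs.
    """
    cpu_headroom = max(temp_limit_c - cpu_temp_c, 0)
    gpu_headroom = max(temp_limit_c - gpu_temp_c, 0)
    total_headroom = cpu_headroom + gpu_headroom
    if total_headroom == 0:
        # Both chips at the limit: split evenly and let both throttle.
        return total_watts / 2, total_watts / 2
    cpu_share = total_watts * cpu_headroom / total_headroom
    return cpu_share, total_watts - cpu_share

# A cool CPU and a hot GPU: the CPU gets the larger share of the budget.
print(allocate_power(140, cpu_temp_c=65, gpu_temp_c=85))  # (105.0, 35.0)
```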
The laptop can house up to an Intel Core i9-12900H CPU or AMD Ryzen 9 6900HX mobile processor. On the graphics front, the Omen 16 will support up to an NVIDIA GeForce RTX 3070 Ti or AMD Radeon RX 6650M. You’ll also be able to trick out the machine with up to 32GB of DDR5 RAM and up to a 2TB SSD.
There are multiple display options as well, topping out at a 1440p 165Hz IPS screen with 3ms response time and 100 percent sRGB color gamut. Other features include a keyboard with per-key RGB lighting and a white color option for the shell.
Engadget Senior Reporter Sam Rutherford got some hands-on time with the new Omen 16.
Meanwhile, HP is expanding its mid-tier Victus line (which was also introduced last year) with a new model. The Victus 15 is joining the 16-inch version and a desktop system. Display options include a 1080p 144Hz screen with Eyesafe low-blue light tech.
You can have up to an Intel Core i7-12700H or AMD Ryzen 7 5800H CPU with 16GB of RAM. On the GPU side, the Victus 15 supports up to a GeForce RTX 3050 Ti or Radeon RX 6500M. You can also have up to 1TB of SSD storage.
HP said it also made improvements to the thermals with this model. The Victus 15 has four-way airflow and a dual heat pipe design. The company claims these provide a seven percent overall airflow improvement and a 146 percent inlet vent area improvement over the previous model.
The Victus 15 will be available in three color options: mica silver, performance blue and ceramic white. It will have a standard backlit keyboard as well.
Both of the laptops come with Omen Gaming Hub software, which has a new optimization feature that can free up system resources and make adjustments to low-level operating system settings to max out performance.
The systems will both be available this summer from HP’s website and other retailers. The Omen 16 will start at $1,200, while the Victus 15 will have a base price of $800.
The US Federal Communications Commission has added Russian cybersecurity company Kaspersky Lab to its list of entities that pose an “unacceptable risk to US national security,” according to a report from Bloomberg. This is the first time a Russian company has been added to the list, which is otherwise made up of Chinese companies, like Huawei and ZTE.
Businesses in the US are barred from using federal subsidies provided through the FCC’s Universal Service Fund to purchase any products or services from the companies on the list. In addition to Kaspersky, the FCC also added China Telecom and China Mobile International USA to its list on Friday.
“I am pleased that our national security agencies agreed with my assessment that China Mobile and China Telecom appeared to meet the threshold necessary to add these entities to our list,” said FCC Chairwoman Jessica Rosenworcel in a press release (PDF). “Their addition, as well as Kaspersky Labs, will help secure our networks from threats posed by Chinese and Russian state backed entities seeking to engage in espionage and otherwise harm America’s interests.”
Kaspersky responded to the FCC’s move in a press release on its site, saying the agency’s decision was “made on political grounds” in light of Russia’s invasion of Ukraine, and that the company “remains ready to cooperate with US government agencies to address the FCC’s and any other regulatory agency’s concerns.”
In 2017, Russian intelligence allegedly used Kaspersky’s antivirus software to steal classified documents from the National Security Agency, a claim the Moscow-based company denied. Later that year, then-President Donald Trump signed a bill banning the use of Kaspersky products by federal agencies amid accusations that the company had ties to the Kremlin.
How would an AI writing program start an article on the future of AI writing? Well, there’s one easy way to find out: I used the best known of these tools, OpenAI’s GPT-3, to do the job for me.
Using GPT-3 is disarmingly simple. You have a text box to type into and a menu on the side to adjust parameters, like the “temperature” of the response (which essentially equates to randomness). You type, hit enter, and GPT-3 completes what you’ve written, be it poetry, fiction, or code. I tried inputting a simple headline and a few sentences about the topic, and GPT-3 began to fill in the details. It told me that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
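To get a feel for what that “temperature” parameter does, here’s a minimal sketch (my own toy code, not OpenAI’s) of temperature-scaled sampling over a model’s raw word scores: low temperature makes the top-scoring word dominate, high temperature flattens the odds.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from raw scores after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# At temperature 0.1, the highest-scoring option wins almost every time.
logits = [2.0, 1.0, 0.1]
picks = [sample_with_temperature(logits, 0.1, random.Random(i)) for i in range(100)]
print(picks.count(0))  # nearly 100
```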
So far, so good, I thought. I hit enter again, and the program added a quote from Google’s head of AI, Jeff Dean, then referenced an experimental piece of software from the 1960s before promising that an “AI Revolution” was coming that would reap immense rewards across the fields of science, technology, and medicine.
Fine, I thought. Then I thought a little more and did some googling. I soon discovered that the quote from Dean was made up, that the experimental software never existed, and while the promise of an “AI Revolution” was all well and good, it wasn’t any different from the vague nonsense found in hype-filled press releases. Really, what was most revealing about the future of AI was not what GPT-3 said but how it said it. The medium is the message, as Marshall McLuhan pointed out many years ago. And here, the medium included plausible fabrications; endless output; and, crucially, an opportunity to respond to the robot writer.
If we’re looking ahead at the next 10 years of AI development, trying to predict how we will interact with increasingly intelligent software, it helps to consider those tools that can talk back. AI writing models may only be digital parrots, able to copy form without understanding meaning, but they still create a dialogue with the user. This is something that often seems missing from the introduction of AI systems like facial recognition algorithms (which are imposed upon us) or self-driving cars (where the public becomes the test subject in a dangerous experiment). With AI writing tools, there is the possibility for a conversation.
If you use Gmail or Google Docs, then you’ve probably already encountered this technology. In Google’s products, AI editors lurk in the blank space in front of your cursor, manifesting textual specters that suggest how to finish a sentence or reply to an email. Often, their prompts are just simple platitudes — ”Thanks!”, “Great idea!”, “Let’s talk next week!” — but sometimes these tools seem to be taking a stronger editorial line, pushing your response in a certain direction. Such suggestions are intended to be helpful, of course, but they seem to provoke annoyance as frequently as gratitude.
To understand how AI systems learn to generate such suggestions, imagine being given two lists of words. One starts off “eggs, flour, spatula,” and the other goes “paint, crayons, scissors.” If you had to add the items “milk” and “glitter” to these lists, which would you choose and with how much confidence? And what if that word was “brush” instead? Does that belong in the kitchen, where it might apply an egg wash, or is it more firmly located in the world of arts-and-crafts? Quantifying this sort of context is how AI writing tools learn to make their suggestions. They mine vast amounts of text data to create statistical maps of the relationships between words, and use this information to complete what you write. When you start typing, they start predicting which words should come next.
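As a toy illustration of that kind of statistical mapping (vastly simpler than what real models do), here’s a sketch that scores words against each list by counting co-occurrences in a tiny made-up corpus; “milk” lands firmly in the kitchen, while “brush” straddles both worlds:

```python
# Toy corpus standing in for the vast text data real models mine.
corpus = [
    "whisk the eggs with milk and flour",
    "grease the pan then add milk and eggs",
    "cut the paper with scissors and add glitter",
    "mix paint and glitter then dry the crayons",
    "use a brush to coat the pan with eggs",
    "use a brush to spread the paint",
]

def cooccurrence_score(word, context_words):
    """Count how often `word` shares a sentence with each context word."""
    return sum(
        1
        for sentence in corpus
        for ctx in context_words
        if word in sentence.split() and ctx in sentence.split()
    )

kitchen = ["eggs", "flour", "pan"]
crafts = ["paint", "crayons", "scissors"]

print(cooccurrence_score("milk", kitchen), cooccurrence_score("milk", crafts))    # 4 0
print(cooccurrence_score("brush", kitchen), cooccurrence_score("brush", crafts))  # 2 1
```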
Features like Gmail’s Smart Reply are only the most obvious example of how these systems — often known as large language models — are working their way into the written world. AI chatbots designed for companionship have become increasingly popular, with some, like Microsoft’s Chinese Xiaoice, attracting tens of millions of users. Choose-your-own-adventure-style text games with AI dungeon masters are attracting users by letting people tell stories collaboratively with computers. And a host of startups offer multipurpose AI text tools that summarize, rephrase, expand, and alter users’ input with varying degrees of competence. They can help you to write fiction or school essays, say their creators, or they might just fill the web with endless spam.
The ability of the underlying software to actually understand language is a topic of hot debate (one that tends to arrive, time and time again, at the same question: what do we mean by “understand” anyway?). But the models’ fluency across genres is undeniable. For those enamored with this technology, scale is key to that success. It’s by making these models and their training data bigger and bigger that they’ve been able to improve so quickly. Take, for example, the training data used to create GPT-3. The exact size of the input is difficult to calculate, but one estimate suggests that the entirety of Wikipedia in English (3.9 billion words and more than 6 million articles) makes up only 0.6 percent of the total.
Relying on scale to build these systems has benefits and drawbacks. From an engineering perspective, it allows for fast improvements in quality: just add more data and compute to reap fast rewards. The size of large language models is generally measured in their number of connections, or parameters, and by this metric, these systems have increased in complexity extremely quickly. GPT-2, released in 2019, had 1.5 billion parameters, while its 2020 successor, GPT-3, had more than 100 times that — some 175 billion parameters. Earlier this year, Google announced it had trained a language model with 1.6 trillion parameters.
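The growth those figures describe is easy to check with quick arithmetic:

```python
# Parameter counts cited above, and the growth factors between generations.
gpt2 = 1.5e9          # GPT-2 (2019)
gpt3 = 175e9          # GPT-3 (2020)
google_model = 1.6e12  # Google's 1.6 trillion-parameter model

print(f"GPT-2 -> GPT-3: {gpt3 / gpt2:.0f}x")        # roughly 117x
print(f"GPT-3 -> 1.6T model: {google_model / gpt3:.1f}x")  # roughly 9.1x
```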
The difference in quality as systems get larger is notable, but it’s unclear how much longer these scaling efforts will keep paying off. Boosters think the sky’s the limit: these systems will keep getting smarter and smarter, and they may even be the first step toward creating a general-purpose artificial intelligence, or AGI. But skeptics suggest that the AI field in general is starting to see diminishing returns as it scales ever up.
A reliance on scale, though, is inextricably linked to the statistical approach that creates uncertainty in these models’ output. These systems have no centralized store of accepted “truths” and no embodied understanding of what the world is like for humans, and hence no way to distinguish fact from fiction or to exercise common sense.
Quiz them on simple trivia, like capital cities or the birthdays of US presidents, and they are right most of the time. But to these systems, truth is simply a statistical feature of their training data. They answer questions correctly because the text they’ve been fed has presented them with the correct information with enough frequency. This means that if you push them on any given topic or stray from the most obvious fields, they will lie thoughtlessly, making up quotes, dates, biographical details, and anything else you want to hear. The same probabilistic approach also means they can stumble over common sense questions. Start quizzing them with slightly fantastical queries, and they will confidently assert, for example, that a pencil is heavier than a toaster or that a blade of grass only has one eye. Such answers reveal the gulf between statistical and embodied intelligence.
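As an illustrative toy (emphatically not how GPT-3 works internally), a “model” that treats truth as nothing more than frequency might look like this: it answers correctly only because the right fact outnumbers the noise in its training data.

```python
from collections import Counter

# Toy training data: the correct fact appears often, a wrong one occasionally.
training_sentences = [
    "the capital of France is Paris",
    "the capital of France is Paris",
    "the capital of France is Paris",
    "the capital of France is Lyon",  # noise
]

def complete(prompt):
    """Return the most frequent continuation of `prompt` seen in training."""
    continuations = Counter(
        s[len(prompt):].strip()
        for s in training_sentences
        if s.startswith(prompt)
    )
    return continuations.most_common(1)[0][0] if continuations else None

print(complete("the capital of France is"))   # "Paris" wins purely on frequency
print(complete("the capital of Wakanda is"))  # no data -> None; a real model would just guess
```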
To get a better understanding of these AI language models, I’ve been playing with a variety of them for the past few weeks, from tools sold to copywriters to flexible, multipurpose systems like GPT-3. The experience has been dizzying. Often, I’m amazed by the fluency, insight, and creativity of these systems. As part of a project for The Verge’s 10-year anniversary, for example, I used GPT-3 to write technopagan spells for a zine, feeding it a prompt (below in bold) which it completed with a four-step ritual (of which I’m showing only the first step):
Technopaganism is a new sort of magic that blends traditional witchcraft with the power of being online all the time. What follows is a technopagan spell designed to protect your phone from spam and improve the quality of messages you receive. Step 1) Take your cell phone and go outside. Close your eyes, hold the device in front of you, and intone: “I offer you, O Spirit of Technopagan Magic, my eternal and unending electronic connection to the net. And in return I ask you for magick to combine my will with the will of the system — make me impervious to the garbage that flows through my inbox. Make all messages flow together, and I will choose among them what is right for me to see. Grant me the Finger of the Internet, and I will strike at spam, and cast out the spammers back to the dark places where they come from.”
But other times, I’m stunned by how limited these programs are. Something that’s often overlooked is just how much human curation is needed to shape their output. The text above was not the first response I received from GPT-3, and I had to go through several iterations to generate a response that was both cogent and funny. It helped, of course, that the task I’d set GPT-3 was an imaginative and open-ended one: it played into the program’s strengths (and I think GPT-3’s success in such tasks has led some users and observers to exaggerate the intelligence of these systems). Other times, though, the software produced nonsensical content even within the fanciful framing I’d given it. Another “spell” it generated in response to the same prompt was much less focused, adding fictitious social media handles, tech headlines, and non-existent URLs to the spell’s instructions:
Perform a ritual to seal the crown circle into the device. May the gods help us all, the Apple iPhone X reference for is broken! Destroy the holy symbol! —tianong xiong (@tianongxiong) September 13, 2018 To confirm the rumor: Facebook Messenger’s search bar does have a search bar hidden right next to it pic.twitter.com/DghBL5DmhU — Warl Darkh (@dkpt) September 12, 2018
You can argue that this is just creativity of a different sort, and that of course a proper technopagan spell would include URLs. But it’s also obvious the machine has gone off-piste.
Despite such weaknesses, there’s already talk of AI systems taking over writers’ jobs. Naturally, I wondered if a computer could write articles for The Verge (and not just this one). I played around with different models, inputting opening paragraphs into these systems and asking for story ideas. Here is some more from GPT-3 on large language models:
“The human brain is just a really fancy computer,” said Jeff Dean, a Google hardware engineer and AI expert, at a company event in 2016. “It’s a computer made of meat.”
To understand GPT-3, you first have to know a little about how the brain learns. The brain isn’t born knowing how to speak or understand speech. It must first acquire this ability through exposure to the outside world, which is why children tend to learn language so quickly, and why being raised in a household where a language isn’t spoken presents a major obstacle.
As neuroscientist Mark Changizi has shown, the process of learning a language is a massive project. “Language is one of the most complex things that brains can do,” he writes, “and it emerges from the brain’s more basic skills, like vision, hearing, and motor control.”
But how does the brain acquire all this knowledge? The short answer is: via autocomplete.
All these points make sense if you’re not concentrating too hard, but they don’t flow from sentence to sentence. They never follow an argument or build to a conclusion. And again, fabrication is a problem. Both Jeff Dean and Mark Changizi are real people who have been more or less correctly identified (though Dean is now head of AI at Google, and Changizi is a cognitive scientist rather than a neuroscientist). But neither man ever uttered the words that GPT-3 attributed to them, as far as I can tell. Yet despite these problems, there’s also a lot to be impressed by. For example, using “autocomplete” as a metaphor to describe AI language models is both accurate and easy to understand. I’ve done it myself! But is this because it’s simply a common metaphor that others have deployed before? Is it right, then, to say GPT-3 is “intelligent” for using this phrase, or is it just subtly plagiarizing others? (Hell, I ask the same questions about my own writing.)
Where AI language models seem best suited is in creating text that is rote rather than bespoke, as with Gmail’s suggested replies. In the case of journalism, automated systems have already been integrated into newsrooms to write “fill in the blanks” stories about earthquakes, sporting events, and the like. And with the rise of large AI language models, the span of content that can be addressed this way is expanding.
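A “fill in the blanks” story is essentially a template rendered from structured data. A minimal, hypothetical sketch (the wording and data fields here are invented, not any newsroom’s actual system):

```python
def earthquake_brief(magnitude, place, time, felt_reports):
    """Render a routine news brief from structured earthquake data."""
    return (
        f"A magnitude {magnitude} earthquake struck near {place} at {time}. "
        f"The quake was felt by at least {felt_reports} people. "
        "No damage has been reported so far."
    )

print(earthquake_brief(4.2, "San Jose, Calif.", "7:14 a.m.", 300))
```

Wire such a template to a structured data feed and you get publishable copy in milliseconds, which is exactly why this rote tier of writing is being automated first.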
Samanyou Garg, founder of an AI writing startup named Writesonic, says his service is used mostly by e-commerce firms. “It really helps [with] product descriptions at scale,” says Garg. “Some of the companies who approach us have like 10 million products on their website, and it’s not possible for a human to write that many.” Fabian Langer, founder of a similar firm named AI Writer, tells The Verge that his tools are often used to pad out “SEO farms” — sites that exist purely to catch Google searches and that create revenue by redirecting visitors to ads or affiliates. “Mostly, it’s people in the content marketing industry who have company blogs to fill, who need to create content,” said Langer. “And to be honest, for these [SEO] farms, I do not expect that people really read it. As soon as you get the click, you can show your advertisement, and that’s good enough.”
It’s this sort of writing that AI will take over first, and which I’ve started to think of as “low-attention” text — a description that applies to both the effort needed to create and read it. Low-attention text is not writing that makes huge demands on our intelligence, but is mostly functional, conveying information quickly or simply filling space. It also constitutes a greater portion of the written world than you might think, including not only marketing blogs but work interactions and idle chit-chat. That’s why Gmail and Google Docs are incorporating AI language models’ suggestions: they’re picking low-hanging fruit.
A big question, though, is what effect these AI writing systems will have on human writing and, by extension, our culture. The more I’ve thought about the output of large language models, the more it reminds me of geofoam. This is a building material made from expanded polystyrene that is cheap to produce, easy to handle, and packed into the voids left over by construction projects. It is incredibly useful but somewhat controversial, due to its uncanny appearance as giant polystyrene blocks. To some, geofoam is an environmentally-sound material that fulfills a specific purpose. To others, it’s a horrific symbol of our exploitative relationship with the Earth. Geofoam is made by pumping oil out of the ground, refining it into cheap matter, and stuffing it back into the empty spaces progress leaves behind. Large language models work in a similar way: processing the archaeological strata of digital text into synthetic speech to fill our low-attention voids.
For those who worry that much of the internet is already “fake” — sustained by botnets, traffic farms, and automatically generated content — this will simply mark the continuation of an existing trend. But just as with geofoam, the choice to use this filler on a wide scale will have structural effects. There is ample evidence, for example, that large language models encode and amplify social biases, producing text that is racist and sexist, or that repeats harmful stereotypes. The corporations in control of these models pay lip service to these problems but don’t think they present serious problems. (Google famously fired two of its AI researchers after they published a detailed paper describing these issues.) And as we offload more of the cognitive burden of writing onto machines, making our low-attention text no-attention text, it seems plausible that we, in turn, will be shaped by the output of these models. Google already uses its AI autocomplete tools to suggest gender-neutral language (replacing “chairman” with “chair,” for example), and regardless of your opinion on the politics of this sort of nudge, it’s worth discussing what the end-point of these systems might be.
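In grossly simplified form, a nudge like that gender-neutral suggestion can be imagined as a lookup table; this sketch is purely illustrative and is not Google’s implementation (its real system is a learned model, and the word list here is my own):

```python
# Hypothetical inclusive-language suggester, in the spirit of the nudge described above.
SUGGESTIONS = {
    "chairman": "chair",
    "mankind": "humanity",
    "manpower": "workforce",
}

def suggest_rewrites(text):
    """Return (word, suggestion) pairs for words that have a neutral alternative."""
    return [(w, SUGGESTIONS[w]) for w in text.lower().split() if w in SUGGESTIONS]

print(suggest_rewrites("The chairman praised the team's manpower"))
```

Even this crude version shows why the politics matter: whoever maintains the table (or trains the model behind it) decides which words get flagged.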
In other words: what happens when AI systems trained on our writing start training us?
Despite the problems and limitations of large language models, they’re already being embraced for many tasks. Google is making language models central to its various search products; Microsoft is using them to build automated coding software, and the popularity of apps like Xiaoice and AI Dungeon suggests that the free-flowing nature of AI writing programs is no hindrance to their adoption.
Like many other AI systems, large language models have serious limitations when compared with their hype-filled presentations. And some predict this widespread gap between promise and performance means we’re heading into another period of AI disillusionment. As the roboticist Rodney Brooks put it: “just about every successful deployment [of AI] has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.” But AI writing tools can, to an extent, avoid these problems: if they make a mistake, no one gets hurt, and their collaborative nature means human curation is often baked in.
What’s interesting is considering how the particular characteristics of these tools can be used to our advantage, showing how we might interact with machine learning systems not in a purely functional fashion but as something exploratory and collaborative. Perhaps the most interesting single use of large language models to date is a book named Pharmako-AI: a text written by artist and coder K Allado-McDowell as an extended dialogue with GPT-3.
To create Pharmako-AI, Allado-McDowell wrote and GPT-3 responded. “I would write into a text field, I would write a prompt, sometimes that would be several paragraphs, sometimes it would be very short, and then I would generate some text from the prompt,” Allado-McDowell told The Verge. “I would edit the output as it was coming out, and if I wasn’t interested in what it was saying, I would cut that part and regenerate, so I compared it to pruning a plant.”
The resulting text is esoteric and obscure, discussing everything from the roots of language itself to the concept of “hyper-dimensionality.” It is also brilliant and illuminating, showing how writing alongside machines can shape thought and expression. At different points, Allado-McDowell compares the experience of writing using GPT-3 to taking mushrooms and communing with gods. They write: “A deity that rules communication is an incorporeal linguistic power. A modern conception of such might read: a force of language from outside of materiality.” That force, Allado-McDowell suggests, might well be a useful way to think about artificial intelligence. The result of communing with it is a sort of “emergence,” they told me, an experience of “being part of a larger ecosystem than just the individual human or the machine.”
This, I think, is why AI writing is so much more exciting than many other applications of artificial intelligence: because it offers the chance for communication and collaboration. The urge to speak to something greater than ourselves is evident in how these programs are being embraced by early adopters. A number of individuals have used GPT-3 to talk to dead loved ones, for example, turning its statistical intelligence into an algorithmic ouija board. Though such experiments also reveal the limitations. In one of these cases, OpenAI shut down a chatbot shaped to resemble a developer’s dead fiancée because the program didn’t conform to the company’s terms of service. That’s another, less promising reality of these systems: the vast majority are owned and operated by corporations with their own interests, and they will shape their programs (and, in turn, their users) as they see fit.
Despite this, I’m hopeful, or at least curious, about the future of AI writing. It will be a conversation with our machines; one that is diffuse and subtle, taking place across multiple platforms, where AI programs linger on the fringes of language. These programs will be unseen editors to news stories and blog posts, they will suggest comments in emails and documents, and they will be interlocutors that we even talk to directly. This exchange won’t only be good for us, and the deployment of these systems will surely come with problems and challenges. But it will, at least, be a dialogue.
Next week, Nvidia will present its Computex 2022 keynote, where the company will discuss current and upcoming products for data centers, professional applications, and gaming. It’s not entirely clear what the company will talk about; rumors range from a new low-end GPU to an announcement of next-gen GPUs, but Nvidia is always very secretive, so we’ll just have to wait and see.
Here’s where you can watch Nvidia’s Computex keynote and what you can expect the company to announce.
How to watch Nvidia’s Computex 2022 keynote
Six different Nvidia executives will speak at the keynote, which starts at 8 p.m. PT on May 23. Computex is hosted in Taiwan, which means the daytime presentation there falls late at night in the US, so you might have to stay up late to catch it.
Nvidia is likely going to stream the presentation on its YouTube channel, as it typically does for Computex and events like GTC. After the stream is over, we expect a recording to be available on the YouTube page.
Following the presentation, Nvidia will host a talk specifically about Omniverse, hosted by Richard Kerris, Nvidia’s vice president of Omniverse. The talk will cover “the enormous opportunities simulation brings to 3D workflows and the next evolution of A.I.”
What to expect from Nvidia’s Computex 2022 keynote
Nvidia is notoriously tight-lipped about its upcoming products. In fact, ever since the GTX 10-series, Nvidia has announced new gaming GPUs just weeks before launch, unlike rivals AMD and Intel, which tend to announce big products more than a month ahead. So, we’re either on the cusp of the next generation (presumably the RTX 40-series) or still some months away.
There are five other speakers at Nvidia’s keynote, but they’re expected to talk about data centers, professional GPUs, and automotive, not gaming GPUs. Unfortunately, if you’re not really into enterprise-grade hardware, you probably aren’t the target demographic of this keynote. Still, Nvidia does what Nvidia wants and we can never be too sure what it’s going to show at a big event like Computex.