Ever needed to quickly edit something or someone out of an image?
Maybe it’s the stranger who wandered into your family photo, or the stand holding up your final artwork for school? Whatever the job, if you don’t have the time or Photoshop skills needed to edit the thing yourself, why not try Cleanup.pictures — a handy web tool that does exactly what it promises in the URL.
Just upload your picture, paint over the thing you want removed with the brush tool, and hey presto: new image. The results aren’t up to the standards of professionals, especially if the picture is particularly busy or complex, but they’re surprisingly good. The tool is also just quite fun to play around with. You can check out some examples in the gallery below:
Cleanup.pictures seems to be from the same team that made a fun augmented reality demo that lets you “copy and paste” the real world, and is open-source (you can find the underlying code here). Obviously, tools of this sort have long been available, dating back at least to the launch of Photoshop’s Clone Stamp tool, but the quality of these automated programs has increased considerably in recent years thanks to AI.
Machine learning systems are now not only better at segmentation (marking the divisions between an object and the background) but also at inpainting, or filling in the new content. Just last week, Google launched its new Pixel 6 phones with an identical “Magic Eraser” feature, but Cleanup.pictures shows how quickly this capability has become a commodity.
I think my favorite use of this tool, though, is this fantastic series of pictures removing the people from Edward Hopper paintings:
There are almost 350 million people worldwide with blindness or some other form of visual impairment who need to use the internet and mobile apps just like anyone else. Yet, they can only do so if websites and mobile apps are built with accessibility in mind — and not as an afterthought.
Consider these two sample buttons that you might find on a web page or mobile app. Each has a simple background, so they seem similar.
In fact, they’re a world apart when it comes to accessibility.
It’s a question of contrast. The text on the light blue button has low contrast, so for someone with a visual impairment like color blindness or Stargardt disease, the word “Hello” could be completely invisible. It turns out that there is a standard mathematical formula that defines the proper relationship between the color of text and its background. Good designers know about this and use online calculators to check those ratios for any element in a design.
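That formula is the contrast-ratio calculation from the WCAG guidelines: compute each color’s relative luminance, then take the ratio of the lighter to the darker, each offset by 0.05. Here is a minimal Python sketch, assuming 8-bit sRGB color values:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an 8-bit sRGB color."""
    def channel(c):
        c = c / 255.0
        # Linearize the gamma-encoded channel value.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on white hits the maximum ratio of about 21:1.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
# White text on light blue falls well below the 4.5:1 minimum for normal text.
print(contrast_ratio((255, 255, 255), (173, 216, 230)))
```

WCAG 2.x requires at least 4.5:1 for normal-sized text, which is the number those online calculators are checking against.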
So far, so good. But when it comes to text on a complex background like an image or a gradient, things start to get complicated and helpful tools are rare. Until now, accessibility testers have had to check these cases manually by sampling the background of the text at certain points and calculating the contrast ratio for each of the samples. Besides being laborious, the measurement is also inherently subjective, since different testers might sample different points inside the same area and come up with different measurements. This problem — laborious, subjective measurements — has been holding back digital accessibility efforts for years.
Accessibility: AI to the rescue
Artificial intelligence algorithms, it turns out, can be trained to solve problems like this and even to improve automatically as they are exposed to more data.
For example, AI can be trained to do text summarization, which is helpful for users with cognitive impairments; to do image and facial recognition, which helps those with visual impairments; or to provide real-time captioning, which helps those with hearing impairments. Apple’s VoiceOver integration on the iPhone, whose main usage is to pronounce email or text messages, also uses AI to describe app icons and report battery levels.
Guiding principles for accessibility
Wise companies are rushing to comply with the Americans with Disabilities Act (ADA) and give everyone equal access to technology. In our experience, the right technology tools can make that much easier, even for today’s modern websites with their thousands of components. For example, a site’s design can be scanned and analyzed via machine learning. It can then improve its accessibility through facial and speech recognition, keyboard navigation, audio translation of descriptions, and even dynamic readjustment of image elements.
In our work, we’ve found three guiding principles that, I believe, are critical for digital accessibility. I’ll illustrate them here with reference to how our team, in an effort led by our data science team leader Asya Frumkin, has solved the problem of text on complex backgrounds.
Split the big problem into smaller problems
If we look at the text in the image below we see that there is some kind of legibility problem, but it’s hard to quantify overall, looking solely at the whole phrase. On the other hand, if our algorithm examines each of the letters in the phrase separately — for example, the “e” on the left and the “o” on the right — we can more easily tell for each of them whether it is legible or not.
If our algorithm continues to go through all the characters in the text in this way, we can count the number of legible characters in the text and the total number of characters. In our case, there are four legible characters out of eight in total. The ensuing fraction, with the number of legible characters as the numerator, gives us a legibility ratio for the overall text. We can then use an agreed-upon pre-set threshold, for example, 0.6, below which the text is considered unreadable. But the point is we got there by running operations on each piece of the text and then tallying from there.
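The tallying step described above is simple arithmetic once a per-character classifier has delivered its verdicts. A sketch, where `char_results` is a hypothetical list of per-character legible/illegible decisions:

```python
def legibility_ratio(char_results):
    """Fraction of characters a per-character classifier judged legible."""
    return sum(char_results) / len(char_results)

def is_text_readable(char_results, threshold=0.6):
    """Apply the agreed-upon threshold to the overall legibility ratio."""
    return legibility_ratio(char_results) >= threshold

# The article's example: four legible characters out of eight.
results = [True, False, True, False, True, False, True, False]
print(legibility_ratio(results))   # 0.5
print(is_text_readable(results))   # False: 0.5 falls below the 0.6 threshold
```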
Repurpose existing tools where possible
We all remember Optical Character Recognition (“OCR”) from the 1970s and 80s. Those tools had promise but ended up being too complex for their originally intended purpose.
But one component of those tools, the CRAFT (Character-Region Awareness For Text) model, held out promise for AI and accessibility. CRAFT maps each pixel in the image to its probability of being in the center of a letter. Based on this calculation, it is possible to produce a heat map in which high-probability areas are painted red and low-probability areas are painted blue. From this heat map, you can calculate the bounding boxes of the characters and cut them out of the image. Using this tool, we can extract individual characters from long text and run a binary classification model (like in #1 above) on each of them.
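The heat-map-to-bounding-box step can be illustrated with a simple threshold-and-flood-fill pass. This is a toy, pure-Python stand-in for the connected-component analysis that real CRAFT post-processing performs, but the idea is the same: group adjacent hot pixels and take each group’s extent as a character box.

```python
from collections import deque

def character_boxes(heat_map, threshold=0.5):
    """Threshold a character-center heat map, then group adjacent hot
    pixels into (min_row, min_col, max_row, max_col) bounding boxes."""
    rows, cols = len(heat_map), len(heat_map[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if heat_map[r][c] < threshold or seen[r][c]:
                continue
            # Flood-fill one connected hot region, tracking its extent.
            min_r = max_r = r
            min_c = max_c = c
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                min_r, max_r = min(min_r, y), max(max_r, y)
                min_c, max_c = min(min_c, x), max(max_c, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not seen[ny][nx] and heat_map[ny][nx] >= threshold:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            boxes.append((min_r, min_c, max_r, max_c))
    return boxes

# Two hot regions yield two character boxes to crop out and classify.
heat = [
    [0.9, 0.8, 0.0, 0.0, 0.7],
    [0.6, 0.7, 0.0, 0.0, 0.9],
]
print(character_boxes(heat))  # [(0, 0, 1, 1), (0, 4, 1, 4)]
```

Each returned box can then be cropped from the original image and fed to the binary legibility classifier.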
Find the right balance in the dataset
The model classifies individual characters in a straightforward binary way — at least in theory. In practice, there will always be challenging real-world examples that are difficult to quantify. What complicates the matter even more is the fact that every person, whether visually impaired or not, has a different perception of what is legible.
Here, one solution (and the one we have taken) is to enrich the dataset by adding objective tags to each element. For example, each image can be stamped with a reference piece of text on a fixed background prior to analysis. That way, when the algorithm runs, it will have an objective basis for comparison.
For the future, for the greater good
As the world continues to evolve, every website and mobile application needs to be built with accessibility in mind from the beginning. AI for accessibility is a technological capability, an opportunity to get off the sidelines and engage and a chance to build a world where people’s difficulties are understood and considered. In our view, the solution to inaccessible technology is simply better technology. That way, making websites and apps accessible is part and parcel of making websites and apps that work — but this time, for everybody.
Learning how to access the dark web isn’t as clandestine and illegal as it’s often portrayed. In fact, you can have very legitimate reasons for accessing the dark web. Although using it does have its potential pitfalls, and you need to be careful of the kind of sites you visit, it’s perfectly legal and relatively safe to do if you follow the right steps.
Here’s how to access the dark web safely and easily.
Warning: It is crucial that you exercise caution when exploring the dark web. Only visit trusted websites with URLs that are maintained on a trusted source. There are many legitimate websites on the dark web that can be worth visiting, but there are also the absolute extremes of illegal material, such as pornography and gore. Do not explore random links to websites you aren’t aware of or familiar with.
How to access the dark web safely
Step 1: Download the Tor browser from the official website and install it like you would any other application. It’s a free-to-use web browser based on Firefox that lets you access the dark web relatively safely. It uses onion routing to bounce your traffic through a network of volunteer-run relays around the world, thereby pseudo-anonymizing you when you access the dark web.
If you want to improve your security further, you could also use a VPN at the same time as Tor. For the most security-conscious, you can even install the Tails operating system on a flash drive and run Tor from that. That’s not entirely necessary for your first time accessing the dark web if you’re just curious, but it does provide additional layers of security that are worth considering if you absolutely need to stay anonymous online.
Step 2: If you’re using the dark web, you probably don’t want your activity monitored — privacy is a core component of the Tor browser and the dark web itself. Make sure that you don’t have any apps open that might track what you’re doing. It also doesn’t hurt to disconnect any microphones you have, as well as your webcam, or use physical privacy switches if you have them.
Step 3: Now you can launch the Tor browser and start exploring the dark web. But be warned, it’s not as user-friendly as the typical clearnet websites you access. There’s no Google for the dark web, and if there were, you probably shouldn’t trust it.
It’s not easy to find what you’re looking for on the dark web, especially if you want to do it safely. You should only ever access websites you know are safe because they’re vetted by websites or other sources that you already know are safe. Good starting points are the Onion Directory and the Hidden Wiki. Take a look at the sites that these directories have collected and see what topics interest you.
Be aware, however, that there are sites for absolutely everything, from the benign to the very illegal, so be careful what you click on or visit.
While visiting any of the sites you find through those directories, do not maximize your browser window, as doing so can help identify you by your monitor resolution. Don’t input any identifying information about yourself. Don’t upload any pictures or documents of any kind.
Step 4: When you are finished, close the Tor browser and shut down/restart your computer entirely. Pay close attention when starting up again, and if everything appears to be acting normally, you can enable the mic settings, webcam, and other functions again.
Is the dark web illegal?
Not intrinsically, no. In principle, the dark web is just a version of the internet that you can only access using a Tor browser and where it’s almost impossible to identify the users and owners of the websites on there. However, because it’s almost completely anonymous, there are a lot of illegal images, information, products, and services on the dark web. Attempting to even view these can be illegal in many countries and states, so stay well clear of them if you want to keep a clean record and conscience.
Is the dark web safe?
If you exercise due caution and use a combination of the Tor browser and common sense in which sites you access on the dark web, yes, the dark web is safe to use. However, if you go poking around in illegal sites and services or don’t practice good operational security by offering up personal information or trying to perform illegal activities, it very much isn’t safe. You may be targeted by law enforcement, exploited by hackers, or threatened by criminals using the dark web for nefarious purposes.
To use the dark web safely, use as many privacy-enhancing tools or programs as you can, and do not identify yourself in any way to anyone for any reason.
Should you use a VPN and Tor?
If you want to be extra safe, routing your connection through a VPN and then accessing the dark web using the Tor browser provides more security than Tor alone. You can also use the Tails operating system to further protect yourself. It all depends on how safe you feel you need to be. If you’re just browsing the dark web out of curiosity, Tor alone, or a Tor-and-VPN configuration, is safe enough. If you’re trying to hide from an oppressive government or want to blow the whistle on something illegal, take as many steps as you can to be safe.
Windows 11 is now blocking the popular app EdgeDeflector, which lets users redirect links to alternative web browsers.
The discovery was made in a new build of Windows 11 through the Insider Program, where Microsoft is now blocking applications that sidestep certain restrictions to change web browsers within the operating system.
You can still change your default web browser to Chrome or Firefox, but as Windows users know, this won’t apply to every situation in Windows. That’s due to something called the “microsoft-edge://” protocol, a method Microsoft uses within certain elements of Windows, such as the News and Interests widget.
The protocol ensures that it’ll only open URL links in its Edge browser. It’s implemented within Windows Search as well. Understandably, it’s been a controversial feature as it even circumvents a user’s default browser choice.
Developers have offered alternative apps, such as EdgeDeflector, that allow you to redirect links to a preferred browser, but Microsoft has now made such workarounds worthless through the latest Windows 11 update (build 22494), which is currently available for Insider members.
The developer of EdgeDeflector, an app with some 500,000 users, has confirmed that Microsoft has effectively disabled his application. In a blog post, the developer, Daniel Aleksandersen, insists that “this isn’t a bug in the Windows Insider preview build. Microsoft has made specific changes to how Windows handles the microsoft-edge:// protocol.”
He added that while he can technically provide a way to bypass Microsoft’s strategy to make apps like EdgeDeflector futile, it would require “destructive changes” to Windows. These alterations to the program’s code would cause several issues for users, the developer stressed. Aleksandersen has thus decided to cease updating the app.
In October, Brave became the first web browser to incorporate support against Microsoft’s URL scheme by introducing the same functionality EdgeDeflector delivers. Mozilla developer Masatoshi Kimura has also written patches to integrate the protocol into Firefox.
Aleksandersen states that the move by Microsoft is an anticompetitive practice that regulators “just haven’t caught up with yet.”
“Your web browser is probably the most important — if not the only — app you regularly use. Microsoft has made it clear that its priorities for Windows don’t align with its users’.”
This change is only effective in future builds of Windows 11, so as of now, it’s only a preview of what’s to come.
Facebook has become the latest company to offer a cloud gaming service on iOS, only once again you won’t access it through the App Store. Starting today, you can visit the Facebook Gaming website to add a Progressive Web App (PWA) that acts as a shortcut to the service on your iPhone or iPad. To do so, visit the platform’s website and tap the “Add to Home Screen” option from the Safari share sheet.
It’s not an elegant solution, but it’s the same one employed by Amazon and Microsoft. When Apple tweaked its guidelines last September to allow for cloud gaming clients on iOS, it said games offered in a streaming service had to be individually downloaded from the App Store. That’s a requirement both Microsoft and Facebook said was not congruent with how every other platform treats cloud gaming services.
“We’ve come to the same conclusion as others: web apps are the only option for streaming cloud games on iOS at the moment,” Vivek Sharma, Facebook’s vice president of gaming, told The Verge of today’s launch. “As many have pointed out, Apple’s policy to ‘allow’ cloud games on the App Store doesn’t allow for much at all. Apple’s requirement for each cloud game to have its own page, go through review, and appear in search listings defeats the purpose of cloud gaming.”
The process of adding the web app is complicated enough that Facebook includes a short how-to when you first visit its Gaming website on Safari. You also have to know to navigate to the company’s website in the first place. The reason for that is the App Store guidelines prohibit developers from using their applications to direct individuals to websites that feature alternative payment systems to those offered by Apple, and you pay for the in-game purchases offered in Facebook Gaming titles through Facebook’s Pay platform.
Amazon Web Services (AWS) has banned NSO Group, the company behind the Pegasus spyware program. Vice reported the ban this morning, the day after a sweeping report alleged Pegasus was used to target the phones of human rights activists and journalists.
An Amnesty International investigation into Pegasus says the tool compromised targets’ phones and routed data through commercial services like AWS and Amazon CloudFront, a move that it said “protects NSO Group from some internet scanning techniques.” (Vice notes that a 2020 report previously described NSO using Amazon services.) Amnesty International wrote that it had contacted Amazon about NSO and Amazon had responded by banning NSO-related accounts. “When we learned of this activity, we acted quickly to shut down the relevant infrastructure and accounts,” an Amazon Web Services spokesperson confirmed to The Verge.
AWS wasn’t the only service NSO apparently used. The Amnesty International report links it with several other companies, including DigitalOcean and Linode. NSO allegedly favored servers in Europe and the United States, particularly “the European data centers run by American hosting companies.” As the report describes it, NSO would deploy Pegasus malware through a series of malicious subdomains, exploiting security weaknesses on services like iMessage. Once Pegasus compromised a phone, it could collect data from the phone or activate its camera and microphone for surveillance.
NSO Group describes Pegasus as a tool for surveilling terrorists and cybercriminals. But yesterday’s reporting — comprising work from Amnesty International, Forbidden Stories, and 17 news outlets — says governments deployed it indiscriminately against political figures, dissidents, and journalists. That included attempted or completed attacks on 37 phones belonging to targets like New York Times and Associated Press journalists, as well as two women close to murdered Saudi journalist Jamal Khashoggi. NSO has objected to the reporting, calling it “full of wrong assumptions and uncorroborated theories.”
Roughly a year ago, Facebook detailed its work on an AI chatbot called BlenderBot 1.0, which the company claims is the largest-ever project of its kind. In an extension of that work, Facebook today took the wraps off of BlenderBot 2.0, which it says is the first chatbot that can build long-term memory while searching the internet for up-to-date information.
Language systems like OpenAI’s GPT-3 and BlenderBot 1.0 can express themselves articulately, at least in the context of an ongoing conversation. But as Facebook notes, they suffer from very short short-term memory, and their long-term memory is limited to what they’ve previously been taught. If you tell GPT-3 or BlenderBot 1.0 something today, they’ll forget it by tomorrow. And they’ll confidently state information that isn’t correct, owing to deficiencies in their algorithms. Because they can’t gain additional knowledge, GPT-3 and BlenderBot 1.0 believe that NFL superstar Tom Brady is still on the New England Patriots, for example, and don’t know that he won the 2021 Super Bowl with the Tampa Bay Buccaneers.
By contrast, BlenderBot 2.0 can query the internet using any search engine for information about movies, TV shows, and more, and it can both read from and write to its long-term local memory store. It also remembers the context of previous discussions, a form of continual learning. If you talked about Tom Brady with it weeks ago, for example, it could bring up the NFL in future conversations, knowing that’s a relevant topic for you.
“This work combines and refines a number of ideas around retrieval and memory in transformer models and puts them together in a clever way,” Connor Leahy, one of the founding members of EleutherAI, told VentureBeat via email. “The results look impressive, especially when one considers how small these models are compared to the likes of GPT-3. I especially applaud the team for addressing shortcomings and potential problems with their method, and for releasing their code and data publicly, which will help the wider research community to scrutinize and further improve these methods.”
BlenderBot 2.0 uses an AI model based on retrieval augmented generation, an approach that enables generating responses and incorporating knowledge beyond that contained in a conversation. During a dialogue, the model, which combines an information retrieval component with a text generator, seeks germane data both in its long-term memory and from documents it finds by searching the internet.
A neural network module in BlenderBot 2.0 produces searches given a conversational context. The chatbot then prepends retrieved knowledge to the conversational history and takes the knowledge into account in deciding what to write. Effectively, BlenderBot 2.0 reads from its long-term memory store while writing, a process achieved by using a module that generates the memory to be stored based on conversational content.
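The retrieve-generate-memorize loop described above can be sketched schematically. Everything below is a hypothetical stand-in (the real modules are trained transformers, not these toy lambdas), but the control flow mirrors the description: search the web, combine the results with long-term memory, condition the generator on that knowledge, then decide what to store.

```python
class RetrievalAugmentedBot:
    """Schematic of BlenderBot 2.0-style response generation. The
    search/generate/memorize callables stand in for neural modules."""

    def __init__(self, search_fn, generate_fn, memorize_fn):
        self.search_fn = search_fn        # query -> list of documents
        self.generate_fn = generate_fn    # (knowledge, history) -> reply
        self.memorize_fn = memorize_fn    # user turn -> memory entry or None
        self.memory = []                  # long-term memory store
        self.history = []                 # conversational context

    def respond(self, user_turn):
        self.history.append(user_turn)
        # 1. A query-generation module produces a search from the context.
        retrieved = self.search_fn(user_turn)
        # 2. Retrieved knowledge and stored memories are prepended to the
        #    conversational history before generation.
        knowledge = retrieved + self.memory
        reply = self.generate_fn(knowledge, self.history)
        # 3. A memory module decides what from this turn is worth keeping.
        entry = self.memorize_fn(user_turn)
        if entry:
            self.memory.append(entry)
        self.history.append(reply)
        return reply

# Toy stand-ins for the trained modules.
bot = RetrievalAugmentedBot(
    search_fn=lambda q: [f"web result for: {q}"],
    generate_fn=lambda knowledge, history: f"reply based on {len(knowledge)} knowledge item(s)",
    memorize_fn=lambda turn: f"user mentioned: {turn}",
)
print(bot.respond("Tom Brady"))  # reply based on 1 knowledge item(s)
print(bot.respond("the NFL"))    # reply based on 2 knowledge item(s)
```

The second reply draws on more knowledge than the first because the memory written during the first turn is now part of the generator’s input, which is the “continual learning” behavior the article describes.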
In order to train BlenderBot 2.0’s neural networks, Facebook collected data in English using a crowdsourcing platform akin to Amazon Mechanical Turk. One of the resulting datasets, Wizard of the Internet, contains human conversations augmented with new information from internet searches, via the Microsoft Bing API. The other, called Multisession, has long-context chats with humans referencing information from past conversation sessions.
Wizard of the Internet provides guidance to BlenderBot 2.0 on how to generate relevant search engine queries, as well as create responses based on the search results. Meanwhile, Multisession helps the chatbot decide which fresh knowledge to store in long-term memory and what to write given those memories. In tandem with the Blended Skill Talk dataset, which Facebook created to give BlenderBot 1.0 knowledge and “personality,” Facebook says that Wizard of the Internet and Multisession enable BlenderBot 2.0 to chat simultaneously with a range of conversational skills.
Safety and future steps
Even the best language models today exhibit bias and toxicity — it’s well-established that they amplify the gender, race, and religious biases in data on which they were trained. OpenAI itself notes that biased datasets can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” A separate paper by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid details the biased tendencies of text generated by GPT-3, like associating the word “Jews” with “money.” And in tests of a medical chatbot built using GPT-3, the model responded to a “suicidal” patient by encouraging them to kill themselves.
In an effort to mitigate this, Facebook says that it implemented “safety recipes” in BlenderBot 2.0 to reduce offensive responses. As measured by an automated classifier, the chatbot was 90% less likely to respond harmfully and 74.5% more likely to give a “safe” response to questions from real people. Facebook also says that, beyond this, its methods alleviate the risk of BlenderBot 2.0 spouting harmful falsehoods “to some extent,” at least compared with previous methods.
“We know that safety issues are not yet solved, and BlenderBot 2.0’s approach of utilizing the internet and long-term memory to ground conversational responses brings new safety challenges,” Facebook research scientist Jason Weston and research engineer Kurt Shuster wrote in a blog post. “As a research community, we need to address them, and we believe reproducible research on safety, made possible by releases like this, will help the community make important new progress in this area together.”
In experiments, Facebook says that BlenderBot 2.0 outperformed BlenderBot 1.0 when it came to picking up where previous conversation sessions left off, with a 17% improvement in “engagingness” (as scored by human evaluators) and a 55% improvement in the use of previous conversation sessions. Furthermore, BlenderBot 2.0 reduced hallucinations from 9.1% to 3.0% and was factually consistent across a conversation 12% more often.
To spur further research in these directions, Facebook has open-sourced BlenderBot 2.0 and the datasets used to train it, Wizard of the Internet and Multisession. “We think that these improvements in chatbots can advance the state of the art in applications such as virtual assistants and digital friends,” Weston and Shuster wrote. “Until models have deeper understanding, they will sometimes contradict themselves. Similarly, our models cannot yet fully understand what is safe or not. And while they build long-term memory, they don’t truly learn from it, meaning they don’t improve on their mistakes … We look forward to a day soon when agents built to communicate and understand as humans do can see as well as talk.”
Microsoft is currently collaborating with Google on a new set of APIs (application programming interfaces) that could dramatically change the way we think about simple commands like copy and paste. Collectively called Pickle Clipboard APIs, they will allow you to copy and paste a wide range of file types.
At the moment, Microsoft Edge and Google Chrome only allow you to copy a few file formats between the browser and native applications. These formats include (but are not limited to) JPEG, PNG, and HTML. Reportedly, the two companies are now working on a new set of Chromium APIs that will expand the list of supported file types you can copy.
Pickle Clipboard APIs will extend the functionality of the clipboard feature, allowing you to take advantage of a variety of additional file types, including custom formats made specifically for applications on the web.
A simple example is being able to copy and paste .docx (used for Microsoft Word) and TIFF (a large-image format for graphic design) content between your web and desktop apps.
The extension of the copy-paste feature to non-standard web formats also means developers may allow you to copy-paste between PWAs (progressive web apps) and desktop apps. For instance, Google’s products, including Google Docs and Google Sheets, may support the file types of Office 365 apps such as Word or Excel. Those who have both a web-based and a native version of a specific app, such as SketchUp, should also, hopefully, be able to copy and paste between the two.
According to the Google Chromium conversation about the Pickle Clipboard APIs, the API “lets websites read and write arbitrary unsanitized payloads using a standardized pickling format, as well as read and write a limited subset of OS-specific formats (for supporting legacy apps). The name of the clipboard format is mangled by the browser in a standardized way to indicate that the content is from the web, which allows native applications to opt-in to accepting the unsanitized content.”
Apart from extending compatibility to include multiple niche and proprietary file types, the new set of APIs will also provide developers with custom clipboard formats offering fine-grained control over the copy-paste feature. The possibilities are endless.
The upgraded copy-paste feature could potentially translate to a highly improved experience of using and working in between web applications. Unfortunately, there isn’t much clarity at the moment on when these features will arrive in either Google Chrome or Microsoft Edge.
TLDR: The 2021 Google SEO and SERP Business Marketing Bundle is a one-stop deep dive into the steps for earning top results in Google searches and driving traffic to your content.
Keyword research, backlinking and other tried and true tactics for optimizing search results still matter. But it’s grown increasingly tough for brand and website managers to keep up with the changing factors that determine how a search engine will evaluate, then rank a particular page or piece of content in its results.
Search results also vary by the type of information being searched, page variables, and even the device being used, making it increasingly difficult to know whether the search intelligence you’re following reflects today’s latest search changes.
The 2021 Google SEO and SERP Business Marketing Bundle ($34.98, over 90 percent off, from NTW Deals) can give interested web watchers insight into what’s driving today’s search engine results, and how to capture some of that attention for their content and products.
Over 11 courses covering more than 26 hours of material, web watchers will examine all of the hottest issues in SEO, including social media marketing, link building, Google Citations, and more that can help boost a brand’s presence online and offer a better chance of nabbing that coveted top spot in Google search results.
Even if you’ve never considered the importance of search, the Perfect On-Page SEO in 1 Day That Users and Google Will Love course is a great place to start. The course explains the key terms to know and the key tools at your disposal to improve the clickthrough rate to your site or content from Google search results. By knowing how to build high-converting, targeted SEO, users are armed with the ability to not only increase traffic but also get a better idea of what visitors or customers really want.
Once you’ve established the basic knowledge, the remaining courses in this package dig ever deeper into proper SEO and page management etiquette, including proven SEO and social media marketing strategies that can help you reach over 1 million people.
From Google’s own link building tactics to optimizing for voice and local searches to achieving a proper Google citation so your brand shows up in multiple locations and maximizes your reach, students will find loads of helpful techniques for earning every possible drop of web traffic.
There’s even a pair of technical SEO courses covering how to get your brand into Google searches with enhanced rich snippet listings, how to ensure your images are properly tagged for SEO, and the advanced keyword and SERP techniques that expert users live by.
The 2021 Google SEO and SERP Business Marketing Bundle includes almost $2,200 worth of intensive coursework, but right now, the entire collection is available for just over $3 per course at $34.98.
Twitter for web experienced serious service issues around the world for several hours on Wednesday evening, preventing users from viewing tweets and other content. The problem started at around 10 p.m. ET and took nearly three hours for the company to fix. Twitter’s mobile app appeared to be unaffected.
Twitter Support acknowledged the issue at 10:52 p.m. ET, saying: “Profiles’ tweets may not be loading for some of you on web and we’re currently working on a fix. Thanks for sticking with us!”
It followed up with another message at 11:38 p.m. ET that said tweets “should now be visible on profiles, but other parts of Twitter for web may not be loading for you,” adding, “We’re continuing to work on getting things back to normal.”
Finally, about an hour later, it said: “Aaaand we’re back. Twitter for web should be working as expected. Sorry for the interruption!”
While it’s good news for the Twitter community that the service is now working properly again, the San Francisco-based company has yet to reveal what caused the problem.
The fault meant that anyone visiting a Twitter account’s webpage received the message, “Something went wrong. Try reloading,” but following the instruction simply produced the same message.
Frustratingly, folks seeking more information from Twitter’s support account via the web were greeted with the same response, with the rest of the page devoid of content. At this stage, users who were able to switch to their Twitter mobile app would probably have done so, as this appeared to be functioning as usual.
As you’d expect, disgruntled users quickly hit other online services to complain about the issue. One said, “Disappearing tweets? Anyone else?” while another suggested that some folks will “migrate to other social media platforms, just like they did before,” adding, “Twitter is like a dumpster, if it goes down the trash spills all over the place.” Another quipped: “OK, who spilled coffee on Twitter’s server?”
Of course, most popular online services suffer issues from time to time, and this wasn’t the first time for Twitter to experience what appeared to be a major outage. Various technical issues have knocked it offline before, though the company is usually quick to respond to the situation and reassure users that it’s working on a fix.
Digital Trends has reached out to Twitter for information on what caused Wednesday’s problem and will update this article when we hear back.