What OpenAI and GitHub’s ‘AI pair programmer’ means for the software industry

OpenAI has once again made the headlines, this time with Copilot, an AI-powered programming tool jointly built with GitHub. Built on top of GPT-3, OpenAI’s famous language model, Copilot is an autocomplete tool that provides relevant (and sometimes lengthy) suggestions as you write code.

Copilot is currently available to select applicants as an extension in Visual Studio Code, the flagship programming tool of Microsoft, GitHub’s parent company.

While the AI-powered code generator is still a work in progress, it provides some interesting hints about the business of large language models and the future directions of the software industry.

Not the intended use for GPT-3

The official website of Copilot describes it as an “AI pair programmer” that suggests “whole lines or entire functions right inside your editor.” Sometimes, just providing a function signature or description is enough to generate an entire block of code.

Working behind Copilot is a deep learning model called Codex, which is basically a special version of GPT-3 finetuned for programming tasks. The tool works very much like GPT-3: it takes a prompt as input and generates a sequence of tokens as output. Here, the prompt (or context) is the source code file you’re working on, and the output is the code suggestion you receive.
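To make that concrete, here is an illustrative sketch (this is not captured Copilot output; the suggested body is simply the kind of completion a Codex-style model tends to produce for a prompt like this):

```python
# What the developer types: a signature and a docstring act as the prompt/context.
def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # What a Codex-style model might suggest next (illustrative only):
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


if __name__ == "__main__":
    print(is_palindrome("A man, a plan, a canal: Panama"))  # True
    print(is_palindrome("Copilot"))                         # False
```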

What’s interesting in all of this is the unexpected turns AI product management can take. According to CNBC: “…back when OpenAI was first training [GPT-3], the start-up had no intention of teaching it how to help code, [OpenAI CTO Greg] Brockman said. It was meant more as a general purpose language model [emphasis mine] that could, for instance, generate articles, fix incorrect grammar and translate from one language into another.”

General-purpose language applications have proven very hard to nail. There are many intricacies involved in applying natural language processing to broad environments. Humans tend to use a lot of abstractions and shortcuts in day-to-day language. The meaning of words, phrases, and sentences can vary based on shared sensory experience, work environment, prior knowledge, etc. These nuances are hard to capture with deep learning models that have been trained to pick up the statistical regularities of a very large dataset of anything and everything.

In contrast, language models perform well when they’re provided with the right context and their application is narrowed down to a single or a few related tasks. For example, deep learning–powered chatbots trained or finetuned on a large corpus of customer chats can be a decent complement to customer service agents, taking on the bulk of simple interactions with customers and leaving complicated requests to human operators. There are already plenty of special-purpose deep learning models for different language tasks.

Therefore, it’s not very surprising that the first applications for GPT-3 have been something other than general-purpose language tasks.

Using language models for coding

Shortly after GPT-3 was made available through a beta web application programming interface, many users posted examples of using the language model to generate source code. These experiments displayed an unexplored side of GPT-3 and a potential use case for the large language model.

And interestingly, the first two applications that Microsoft, the exclusive license holder of OpenAI’s language models, created on top of GPT-3 are related to computer programming. In May, Microsoft announced a GPT-3-powered tool that generates queries for its Power Apps. And now, it is testing the waters with Copilot.

Neural networks are very good at finding and suggesting patterns from large training datasets. In this light, it makes sense to use GPT-3 or a finetuned version of it to help programmers find solutions in the very large corpus of publicly available source code in GitHub.

According to Copilot’s homepage, Codex has been trained on “a selection of English language and source code from publicly available sources, including code in public repositories on GitHub.”

If you provide it with the right context, it will be able to come up with a block of code that resembles what other programmers have written to solve a similar problem. And giving it more detailed comments and descriptions improves your chances of getting a reasonable output from Copilot.

Generating code vs understanding software

According to the website, “GitHub Copilot tries to understand [emphasis mine] your intent and to generate the best code it can, but the code it suggests may not always work, or even make sense.”

“Understand” might be the wrong word here. Language models such as GPT-3 do not understand the purpose and structure of source code. They don’t understand the purpose of programs. They can’t come up with new ideas, break down a problem into smaller components, and design and build an application in the way that human software engineers do.

By human standards, programming is a relatively difficult task (well, it used to be when I was learning in the 90s). It requires careful thinking, logic, and architecture design to solve a specific problem. Each language has its own paradigms and programming patterns. Developers must learn to use different application programming interfaces and plug them together in an efficient way. In short, it’s a skill that is largely dependent on symbol manipulation, an area that is not the forte of deep learning algorithms.

Copilot’s creators acknowledge that their AI system is in no way a perfect programming companion (I don’t even think “pair programming” is the right term for it). “GitHub Copilot doesn’t actually test the code it suggests, so the code may not even compile or run,” they warn.

GitHub also warns that Copilot may suggest “old or deprecated uses of libraries and languages,” which can cause security issues. This makes it extremely important for developers to review the AI-generated code thoroughly.

So, we’re not yet at a stage where we can expect AI systems to automate programming. But pairing them with humans who know what they’re doing can surely improve productivity, as Copilot’s creators suggest.

And since Copilot’s preview release, developers have posted all kinds of examples, ranging from the amusing to the genuinely useful.

“If you know a bit about what you’re asking Copilot to code for you, and you have enough experience to clean up the code and fix the errors that it introduces, it can be very useful and save you time,” Matt Shumer, co-founder and CEO of OthersideAI, told TechTalks.

But Shumer also warns about the threats of blindly trusting the code generated by Copilot.

“For example, it saved me time writing SQL code, but it put the database password directly in the code,” Shumer said. “If I wasn’t experienced, I might accept that and leave it in the code, which would create security issues. But because I knew how to modify the code, I was able to use what Copilot gave me as a starting point to work off of.”
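To make the risk concrete, here is a hedged sketch of the difference (the variable names and connection string are hypothetical; this is not the code Shumer was given):

```python
import os

# What an assistant might plausibly suggest (illustrative only): the password
# is embedded directly in the source file and ends up in version control.
# DSN = "postgresql://report_user:SuperSecret123@db.internal.example.com/sales"

def build_dsn() -> str:
    """Build the database connection string from environment variables so the
    password never lives in the repository."""
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]      # injected by the deployment environment
    host = os.environ.get("DB_HOST", "localhost")
    name = os.environ.get("DB_NAME", "sales")
    return f"postgresql://{user}:{password}@{host}/{name}"
```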

The business model of Copilot

In my opinion, there’s another reason Microsoft chose programming as the first application for GPT-3: there’s a huge opportunity to cut costs and make profits.

According to GitHub, “If the technical preview is successful, our plan is to build a commercial version of GitHub Copilot in the future.”

There’s still no information on how much the official Copilot will cost. But hourly wages for programming talent start at around $30 and can reach as high as $150. Even saving a few hours of programming time or giving a small boost to development speed would probably be enough to cover the cost of Copilot. Therefore, it would not be surprising if many developers and software development companies signed up for Copilot once it is released as a commercial product.

“If it gives me back even 10 percent of my time, I’d say it’s worth the cost. Within reason, of course,” Shumer said.

Language models like GPT-3 require extensive resources to train and run. And they also need to be regularly updated and finetuned, which imposes more expenses on the company hosting the machine learning model. Therefore, high-cost domains such as software development would be a good place to start to reduce the time to recoup the investment made on the technology.

“The ability for [Copilot] to help me use libraries and frameworks I’ve never used before is extremely valuable,” Shumer said. “In one of my demos, for example, I asked it to generate a dashboard with Streamlit, and it did it perfectly in one try. I could then go and modify that dashboard, without needing to read through any documentation. That alone is valuable enough for me to pay for it.”
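Shumer didn’t publish the code from that demo, but the kind of boilerplate he describes getting in one try can be this short. The sketch below is a minimal, generic Streamlit dashboard with made-up data, not his actual output:

```python
# Minimal Streamlit dashboard sketch (run with: streamlit run dashboard.py).
# The data is randomly generated for illustration only.
import numpy as np
import pandas as pd
import streamlit as st

st.title("Sales dashboard")

days = st.sidebar.slider("Days to display", min_value=7, max_value=90, value=30)

# Fake cumulative revenue series standing in for real data.
data = pd.DataFrame({
    "revenue": np.random.default_rng(0).normal(1000, 120, size=days).cumsum()
})

st.write(f"Showing the last {days} days of revenue")
st.line_chart(data)
```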

Automated coding can turn out to be a multi-billion-dollar industry. And Microsoft is positioning itself to take a leading role in this nascent sector, thanks to its market reach (through Visual Studio, Azure, and GitHub), deep pockets, and exclusive access to OpenAI’s technology and talent.

The future of automated coding

Developers must be careful not to mistake Copilot and other AI-powered code generators for a programming companion whose every suggestion they can accept without review. As a programmer who has worked under tight deadlines on several occasions, I know that developers tend to cut corners when they’re running out of time (I’ve done it more than a few times). And if you have a tool that gives you a big chunk of working code in one fell swoop, you’re prone to just skim over it if you’re short on time.

On the other hand, adversaries might find ways to track vulnerable coding patterns in deep learning code generators and find new attack vectors against AI-generated software.

New coding tools create new habits (many of them negative and insecure). We must carefully explore this new space and beware the possible tradeoffs of having AI agents as our new coding partners.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Google FLoC delay means third-party cookies will stick around longer

Google’s Privacy Sandbox, particularly its Federated Learning of Cohorts or FLoC, had the grand ambition of making third-party cookies unnecessary for targeted advertising, thereby protecting people’s privacy even while making money from them. Like many of Google’s grand ambitions, FLoC was met with no small amount of criticism and pushback. The company still maintains its position on the benefits of FLoC and its innocence of alleged ulterior motives. To give itself time to address those concerns, it is taking a small step back and delaying FLoC’s implementation to 2023.

When Google announced it would be phasing out third-party cookies by removing support for them in Chrome, it also promised it wouldn’t build alternate identifiers to keep tracking individuals across the web. Instead, it developed its Privacy Sandbox and FLoC to replace third-party cookies with what it promised would be a more privacy-respecting approach that still gives businesses and advertisers a way to earn money. Not everyone bought the spiel, though, and Google faced intense scrutiny over its plans.

To give it more time to communicate with the web’s many denizens, and potentially convince them of the validity of FLoC, Google says it is delaying its implementation by about a year. The transition will start in late 2022 to give publishers and advertisers time to prepare their services for the big change. The phase-out itself will start in mid-2023, and Google estimates it will take only three months to end support for third-party cookies.

Of course, the consequence of this delay is that third-party cookies will be around for a little while longer. Web browsers now have systems to protect users from tracking, but Google believes that making these cookies totally obsolete is the only way to be free of them once and for all.

FLoC’s critics probably agree on that point but don’t see eye to eye with how Google is going about it. Points of contention include Google’s monopoly in the browser and advertising businesses, which makes FLoC a sure-fire way to keep its lead. There are also those who point out that the cohort idea itself is flawed and doesn’t exactly protect users from being tracked, especially by Google.


What Apple’s Xcode Cloud Means for the Future of Apps

For consumers and outside observers, Apple’s Worldwide Developers Conference (WWDC) is always a chance to see what lies in store when the next versions of its operating systems come to their devices. For developers, though, it is all about learning what Apple is doing under the hood. At this year’s event, Apple revealed Xcode Cloud, a new feature of its Xcode development app that Apple believes will make life easier and simpler for app builders.

Folks at Apple told us they were incredibly excited for Xcode Cloud — and disappointed that developers could not be on-site when it was announced at the company’s online event — and a quick perusal of the Twittersphere brings up a wealth of devs giddy with expectation for the new feature.

But what exactly is Xcode Cloud, and why is Apple convinced it is such a big deal? To find out, we sat down with both engineers at Apple and the developers it’s targeting to see how Xcode Cloud might impact their work, to hear out any apprehensions they might have, and tease out what it could mean for the future of apps.

What is Xcode Cloud?

Let’s start with the basics. To make apps for Apple platforms, developers use an Apple-created Mac app called Xcode. It’s been around since 2003 and remains one of the most important pieces of software in Apple’s catalog. Xcode Cloud is one of the biggest updates to Xcode in years, bringing new functionality that many developers had to leave Xcode for in the past.

Apple positions Xcode Cloud as a tool that puts previously complex tools within reach of all developers. I asked Wiley Hodges, the Director for Tools and Technologies at Apple, what they were hearing from developers that led to the creation of Xcode Cloud.

“We’ve seen that there are… tasks like distributing the apps to beta testers, like managing feedback and crash reports, that are really critical to building great apps,” Hodges said. “And we’ve seen that more and more of our developers have been interested in continuous integration and using this automated build and automated test process to constantly verify the quality of software while it’s being built.”

Those are exactly the problems Xcode Cloud is meant to address.

Xcode Cloud lets developers run multiple automated tests at once and uses continuous integration (CI) so app code can be quickly iterated on and updated. It also simplifies the distribution of app builds to beta testers and lets devs catch up on feedback. It can build apps in the cloud rather than on a Mac to reduce local load, and it allows for the creation of advanced workflows that automatically start and stop depending on set conditions.

“We wanted to bring these tools and services in the reach of all our developers, because right now it’s been something that I think was more on the advanced level for developers to get this set up and running as part of their process,” Hodges explained.

That sounds promising enough. But what do actual developers think?

‘A long-term project’

Xcode running on an Apple MacBook Pro

Putting those tools front and center is something several developers told us was a key attraction of Xcode Cloud. Now that previously quite specialized capabilities have been integrated into the main tool they use to build apps, there is much less need to find third-party alternatives and add extra steps to their workflows.

Denys Telezhkin, a software engineer at ClearVPN, summed this feeling up in an interview with Digital Trends.

“I was very interested [in Xcode Cloud] as there have been a variety of problems with different CIs,” he told me. “For example, Microsoft Azure is difficult to configure, GitHub Actions is expensive, and so on.”

With everything integrated into Xcode Cloud, leaning on unreliable alternatives could become unnecessary. Of course, Apple will be happy to steer developers away from its rivals.

But the chief impetus, Hodges insists, was something different: “The motivation for Xcode Cloud came from our observation that while there was a group of devoted Xcode Server users, most developers still weren’t implementing continuous integration. We started looking at the obstacles that prevented adoption and came to the conclusion that a cloud-hosted CI offering would be the best way to get broad adoption of CI as a practice, particularly with smaller developers for whom setting up and managing dedicated build servers was a bigger challenge.”

For devs, it’s about more than just CI though. Scott Olechowski, Chief Product Officer and Co-Founder of Plex, got to try out a beta version of Xcode Cloud before Apple’s WWDC announcement. He told me the potential benefits are wide-ranging.

“Seeing tools and services like Xcode Cloud integrated directly into the dev platform got us excited since it should really help us be more efficient in our development, QA [quality assurance], and release efforts.”

Part of that increased efficiency will likely come in Xcode Cloud’s collaboration tools. Each team member can see project changes from their colleagues, and notifications can be sent when a code update is published. The timing is auspicious, given the way the ongoing pandemic has physically separated teams all over the globe. Yet it was also coincidental, said Hodges.

“The reality is we’ve been on this path for quite a while, literally years and years, and so I think the timing may be fortuitous in that regard. This is definitely a long-term project that was well underway before our unfortunate recent events.”

Putting it into practice

Xcode running on MacOS Monterey at Apple's WWDC 2021 event

If there is one thing Apple is great at, it’s building an ecosystem of apps and products that all work together. Unsurprisingly, Xcode Cloud reflects that — it connects to TestFlight for beta testers, lets you run builds on multiple virtual Apple devices in parallel, plays nice with App Store Connect, and more. For many developers, that integration could have a strongly positive impact on their work.

Vitalii Budnik, a software engineer at MacPaw’s Setapp, told me having everything in one place will mean more time spent actually coding and less time juggling multiple tools and options. For Budnik’s MacPaw colleague, Bohdan Mihiliev of Gemini Photos, the app distribution process will be faster and smoother than it currently is.

Apple sees Xcode Cloud as something that can improve life for developers large and small. Alison Tracey, a lead developer on Xcode Cloud at Apple, emphasized the way Xcode Cloud levels the playing field for smaller developers as well.

“With the range of options that exist to you in the configuration experience when you’re setting up your workflows, you really can support the needs of a small developer or somebody that’s a small development shop or somebody that’s new to continuous integration, all the way up to more of the advanced power users.”

This ranges from a simple four-step onboarding process to integrating Mac apps and tools like Slack and dashboards thanks to built-in APIs.

The pricing problem

A slide from WWDC 2021 showing Xcode running on an iMac and MacBook Pro

It’s not all smooth sailing, though. Apple refused to divulge pricing details for Xcode Cloud at WWDC, saying more information would not be available until the fall. Many developers I spoke to were concerned about that to one degree or another, and it seems to be putting a slight damper on the excitement a lot of devs are feeling about Xcode Cloud’s potential.

Questions have also been raised about Xcode Cloud’s value to developer teams that create apps for both Apple and non-Apple platforms since Xcode can only be run on the Mac. I put this to Alex Stevenson-Price, Engineering Manager at Plex, since Plex has apps for Mac, Windows, Linux, Android, iOS, and many other systems. He told me that Plex’s various apps are built by different teams using different tools, so while it is a great new string in the Apple team’s bow, it will not be of much use to the non-Apple teams because they will not be using Xcode anyway.

Of course, it should not come as a surprise that Apple has limited interest in providing tools for rival ecosystems. If you want to get Xcode Cloud’s benefits when building an Android app, you are out of luck, but Xcode has always been restricted (Apple might say focused) in that way. That could pose problems for developers who have the same app on both iOS and Android — or any number of other platforms.

Other developers told me they will have to wait and see whether Xcode Cloud’s reputed benefits play out in reality. Its use for solo developers was also questioned, partly because a number of its features are aimed towards teams with multiple members.

For instance, Lukas Burgstaller, the developer behind apps like Fiery Feeds and Tidur, told me Xcode Cloud’s utility depends on the setting.

“While I don’t think I’m going to use it for my personal projects [as] I feel like continuous integration is moderately helpful at best for a solo developer setup, I will definitely start using it in my day job as an iOS team lead, where we were planning to set up some sort of CI for over a year but never got to it.”

But even if he might not use every feature, Burgstaller still described Xcode Cloud as a “finally” announcement, saying he was extremely happy Apple is adding it to Xcode.

A feature with real potential

A slide of Xcode running on MacOS Monterey at Apple's WWDC 2021 event

It is still early days for Xcode Cloud. Like many of the other updates and new features announced at WWDC 2021, from iOS 15 to MacOS Monterey, it is currently only available to beta testers. Despite a few concerns — and bad memories from the spotty launch of another developer tool, Mac Catalyst, a few years ago — the benefits seem to far outweigh the drawbacks, at least according to the developers I spoke to.

In fact, none of those devs said Xcode Cloud was completely without merit, suggesting there will be something for most people who work to create apps for the Apple ecosystem. Provided Apple continues to improve it as developer needs change, and as long as its pricing is not egregiously expensive, Apple might be onto a winner with Xcode Cloud.

As always, the proof is in the pudding, and a lot will depend on the state Xcode Cloud finds itself in at launch. For many developers, though, its fall release can’t come soon enough.


What Waabi’s launch means for the self-driving car industry

It is not the best of times for self-driving car startups. The past year has seen large tech companies acquire startups that were running out of cash and ride-hailing companies shutter costly self-driving car projects with no prospect of becoming production-ready anytime soon.

Yet, in the midst of this downturn, Waabi, a Toronto-based self-driving car startup, has just come out of stealth with a hefty $83.5 million Series A funding round led by Khosla Ventures, with additional participation from Uber, 8VC, Radical Ventures, OMERS Ventures, BDC, and Aurora Innovation. The company’s financial backers also include Geoffrey Hinton, Fei-Fei Li, Pieter Abbeel, and Sanja Fidler, artificial intelligence scientists with great influence in academia and the applied AI community.

What makes Waabi qualified for such support? According to the company’s press release, Waabi aims to solve the “scale” challenge of self-driving car research and “bring commercially viable self-driving technology to society.” Those are two key challenges of the self-driving car industry and are mentioned numerous times in the release.

What Waabi describes as its “next generation of self-driving technology” has yet to pass the test of time. But its execution plan provides hints at what directions the self-driving car industry could be headed.

Better machine learning algorithms and simulations

According to Waabi’s press release: “The traditional approach to engineering self-driving vehicles results in a software stack that does not take full advantage of the power of AI, and that requires complex and time-consuming manual tuning. This makes scaling costly and technically challenging, especially when it comes to solving for less frequent and more unpredictable driving scenarios.”

Leading self-driving car companies have driven their cars on real roads for millions of miles to train their deep learning models. Real-road training is costly both in terms of logistics and human resources. It is also fraught with legal challenges as the laws surrounding self-driving car tests vary in different jurisdictions. Yet despite all the training, self-driving car technology struggles to handle corner cases, rare situations that are not included in the training data. These mounting challenges speak to the limits of current self-driving car technology.

Here’s how Waabi claims to solve these challenges (emphasis mine): “The company’s breakthrough, AI-first approach, developed by a team of world leading technologists, leverages deep learning, probabilistic inference and complex optimization to create software that is end-to-end trainable, interpretable and capable of very complex reasoning. This, together with a revolutionary closed loop simulator that has an unprecedented level of fidelity, enables testing at scale of both common driving scenarios and safety-critical edge cases. This approach significantly reduces the need to drive testing miles in the real world and results in a safer, more affordable, solution.”

There’s a lot of jargon in there (a lot of which is probably marketing lingo) that needs to be clarified. I reached out to Waabi for more details and will update this post if I hear back from them.

By “AI-first approach,” I suppose they mean that they will put more emphasis on creating better machine learning models and less on complementary technology such as lidars, radars, and mapping data. The benefit of having a software-heavy stack is the very low costs of updating the technology. And there will be a lot of updating in the coming years as scientists continue to find ways to circumvent the limits of self-driving AI.

The combination of “deep learning, probabilistic inference, and complex optimization” is interesting, albeit not a breakthrough. Most deep learning systems use non-probabilistic inference: they provide an output, say a category or a predicted value, without giving the level of uncertainty in the result. Probabilistic deep learning, on the other hand, also provides the reliability of its inferences, which can be very useful in critical applications such as driving.
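As a generic illustration of that difference (a textbook Monte Carlo dropout sketch in PyTorch, not Waabi’s method, with invented numbers), a probabilistic model reports a spread around its prediction rather than a single point estimate:

```python
# Generic sketch of probabilistic inference via Monte Carlo dropout (PyTorch).
# Illustrates the idea of reporting uncertainty; it is not Waabi's system.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),              # e.g. a predicted time-to-collision in seconds
)

def predict_with_uncertainty(x: torch.Tensor, samples: int = 50):
    """Keep dropout active at inference time and sample repeatedly; the spread
    of the samples is a crude estimate of the model's uncertainty."""
    model.train()                  # leaves dropout enabled
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(dim=0), preds.std(dim=0)

mean, std = predict_with_uncertainty(torch.randn(1, 8))
print(f"prediction: {mean.item():.2f}s +/- {std.item():.2f}s")
```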

“End-to-end trainable” machine learning models require no hand-engineered features. This means that once you have developed the architecture and determined the loss and optimization functions, all you need to do is provide the machine learning model with training examples. Most deep learning models are end-to-end trainable. Some of the more complicated architectures require a combination of hand-engineered features and knowledge along with trainable components.
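A bare-bones training loop makes the point: architecture, loss, optimizer, and examples, with nothing hand-engineered in between (a generic PyTorch sketch with random data, unrelated to Waabi’s stack):

```python
# Generic end-to-end training loop: gradients flow from the loss back through
# every layer; no per-feature engineering. Purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy examples standing in for sensor inputs and driving targets.
inputs = torch.randn(256, 16)
targets = torch.randn(256, 2)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                # reaches every trainable parameter
    optimizer.step()
```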

Finally, “interpretability” and “reasoning” are two of the key challenges of deep learning. Deep neural networks are composed of millions or even billions of parameters. This makes it hard to troubleshoot them when something goes wrong (or to find problems before something bad happens), which can be a real challenge in critical scenarios such as driving cars. On the other hand, the lack of reasoning power and causal understanding makes it very difficult for deep learning models to handle situations they haven’t seen before.

According to TechCrunch’s coverage of Waabi’s launch, Raquel Urtasun, the company’s CEO, described the AI system the company uses as a “family of algorithms.”

“When combined, the developer can trace back the decision process of the AI system and incorporate prior knowledge so they don’t have to teach the AI system everything from scratch,” TechCrunch wrote.

Above: Simulation is an important component of training deep learning models for self-driving cars. (credit: CARLA)

The closed-loop simulation environment is a replacement for sending real cars on real roads. In an interview with The Verge, Urtasun said that Waabi can “test the entire system” in simulation. “We can train an entire system to learn in simulation, and we can produce the simulations with an incredible level of fidelity, such that we can really correlate what happens in simulation with what is happening in the real world.”

I’m a bit on the fence on the simulation component. Most self-driving car companies are using simulations as part of the training regime of their deep learning models. But creating simulation environments that are exact replications of the real world is virtually impossible, which is why self-driving car companies continue to use heavy road testing.

Waymo has at least 20 billion miles of simulated driving to go with its 20 million miles of real-road testing, which is a record in the industry. And I’m not sure how a startup with $83.5 million in funding can outmatch the talent, data, compute, and financial resources of a self-driving company with more than a decade of history and the backing of Alphabet, one of the wealthiest companies in the world.

More hints about the system can be found in the work that Urtasun, who is also a professor in the Department of Computer Science at the University of Toronto, does in academic research. Urtasun’s name appears on many papers about autonomous driving. But one in particular, uploaded to the arXiv preprint server in January, is interesting.

Titled “MP3: A Unified Model to Map, Perceive, Predict and Plan,” the paper discusses an approach to self-driving that is very close to the description in Waabi’s launch press release.

Above: MP3 is a deep learning model that uses probabilistic inference to create scene representations and perform motion planning for self-driving cars.

The researchers describe MP3 as “an end-to-end approach to mapless driving that is interpretable, does not incur any information loss, and reasons about uncertainty in the intermediate representations.” In the paper, the researchers also discuss the use of “probabilistic spatial layers to model the static and dynamic parts of the environment.”

MP3 is end-to-end trainable and uses lidar input to create scene representations, predict future states, and plan trajectories. The machine learning model obviates the need for finely detailed mapping data that companies like Waymo use in their self-driving vehicles.
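A highly simplified schematic of that kind of pipeline looks like the sketch below. Every function here is a stand-in stub written to illustrate the stages the paper describes; it is not the authors’ code or Waabi’s implementation:

```python
# Schematic of a mapless, end-to-end driving pipeline in the spirit of MP3.
# All functions are placeholder stubs for illustration only.
import numpy as np

def perceive(lidar_sweep: np.ndarray) -> dict:
    """Stand-in for probabilistic scene layers (static and dynamic occupancy)."""
    return {"occupancy": np.zeros((200, 200)), "uncertainty": np.ones((200, 200))}

def predict(scene: dict, horizon: int = 10) -> list:
    """Stand-in for predicting distributions over future scene states."""
    return [scene] * horizon

def plan(scene: dict, futures: list, route_command: str) -> list:
    """Stand-in for picking a trajectory from the scene, its predicted futures,
    and a high-level route command, with no detailed HD map involved."""
    return [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]   # (x, y) waypoints in meters

lidar = np.zeros((100_000, 3))                    # fake point cloud
scene = perceive(lidar)
trajectory = plan(scene, predict(scene), "keep lane")
```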

Urtasun posted a video on her YouTube channel that provides a brief explanation of how MP3 works. It’s fascinating work, though many researchers will point out that it is not so much a breakthrough as a clever combination of existing techniques.

There’s also a sizeable gap between academic AI research and applied AI. It remains to be seen if MP3 or a variation of it is the model that Waabi is using and how it will perform in practical settings.

A more conservative approach to commercialization

Waabi’s first application will not be passenger cars that you can order with your Lyft or Uber app.

“The team will initially focus on deploying Waabi’s software in logistics, specifically long-haul trucking, an industry where self-driving technology stands to make the biggest and swiftest impact due to a chronic driver shortage and pervasive safety issues,” Waabi’s press release states.

What the release doesn’t mention, however, is that highway settings are an easier problem to solve because they are much more predictable than urban areas. This makes them less prone to edge cases (such as a pedestrian running in front of the car) and easier to simulate. Self-driving trucks can transport cargo between cities, while human drivers take care of delivery inside cities.

With Lyft and Uber failing to launch their own robo-taxi services, and with Waymo still far from turning Waymo One, its fully driverless ride-hailing service, into a scalable and profitable business, Waabi’s approach seems well thought out.

With more complex applications still being beyond reach, we can expect self-driving technology to make inroads into more specialized settings such as trucking and industrial complexes and factories.

Waabi also doesn’t make any mention of a timeline in the press release. This also seems to reflect the failures of the self-driving car industry in the past few years. Top executives of automotive and self-driving car companies have constantly made bold statements and given deadlines about the delivery of fully driverless technology. None of those deadlines have been met.

Whether Waabi becomes independently successful or ends up joining the acquisition portfolio of one of the tech giants, its plan seems to be a reality check on the self-driving car industry. The industry needs companies that can develop and test new technologies without much fanfare, embrace change as they learn from their mistakes, make incremental improvements, and save their cash for a long race.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Formstack: 82% of people don’t know what ‘no-code’ means

Despite massive growth in no-code, only 18% of people are familiar with “no-code,” according to the Rise of the No-Code Economy report produced by Formstack, a no-code workplace productivity platform.

Above: People use no-code tools because they make it easier and faster to develop applications. (Image Credit: Formstack)

If you haven’t yet heard of no-code, you’re not alone. Despite the massive growth in no-code over the last year, 82% of people are unfamiliar with the term “no-code.” That’s according to the recent report produced by Formstack, a no-code workplace productivity platform. Yet, the survey also found that 66% of respondents have adopted no-code tools in the last year, and 41% in the last six months—suggesting developer shortages and the pandemic have driven growth across the industry.

No-code is a type of software development that allows anyone to create digital applications—from building mobile, voice, or ecommerce apps and websites to automating any number of tasks or processes—without writing a single line of code. It involves using tools with an intuitive, drag-and-drop interface to create a unique solution to a problem. Alternatively, low-code minimizes the amount of hand-coding needed, yet it still requires technical skills and some knowledge of code.

The report showed that no-code adoption and awareness are still relatively limited to the developer and tech community, as 81% of those who work in IT are familiar with the idea of no-code tools. But despite current adoption rates among those outside of IT, the report shows huge potential for non-technical employees to adopt no-code. Formstack’s survey found that 33% of respondents across industries expect to increase no-code usage over the next year, with 45% in computer and electronics manufacturing, 46% in banking and finance, and 71% in software.

In addition to growing adoption rates, many respondents believe their organizations and industries will begin hiring for no-code roles within the next year, signaling that the growing no-code economy will create demand in an entirely new industry segment.

The survey polled 1,000 workers across various industries, job levels, and company sizes to gather comprehensive data on the understanding, usage, and growth of no-code. External sources and data points are included throughout the report to further define and explain the no-code economy.

Read Formstack’s full Rise of the No-Code Economy report.


Google Photos to end free unlimited uploads: What it means to you

Google Photos has brought a whole new dimension to treasuring cherished memories ever since it first burst onto the scene in 2015. Right from the start, the photo sharing and cloud storage service has been popular for its free unlimited storage. Users have enjoyed this feature all these years, saving their most important moments with utter peace of mind, not having to worry about storage space running out on their devices.

Google’s trusted service gives unlimited cloud storage for photos up to 16MP and videos up to 1080p resolution. Anything uploaded beyond these limits is simply scaled down to those parameters, in what Google calls High Quality storage. Basically, users don’t have to do anything beyond hitting the “back up now” button or keeping automatic backup on for the secure backup of all photos and videos.

Brace for the change

Starting June 1, 2021, Google Photos will count all uploaded images and videos against the free 15GB of storage that comes with your Google account. Go past that threshold and you’ll have to get a Google One subscription to keep enjoying the service.

The foundation for the move was laid back in November 2020, when Google Photos lead David Lieb tweeted the reasoning behind the strategic shift. According to him, providing completely free backups was costing the company heavily, so Google had little choice but to align the primary cost of running the service with its primary value: online storage.

Already, more than a billion people use the service, uploading a whopping 28 billion photos and videos to the platform each week. Understandably, meeting those expenses while providing the streamlined service users have gotten used to all these years demands a subscription model going forward.

What about existing uploads

Users who rely on Google Photos to back up their device’s content need not worry about their existing files. Existing high-quality content will remain exempt from the upcoming storage restrictions. It’s only after the mentioned date that new uploads will start ticking the Google account storage meter.

If you manually back up photos and videos to the service, it is a wise idea to go through your library again and upload any important content before June 1. Anything uploaded on or after June 1 will start eating into your designated 15GB storage quota.

Users will also get a new feature starting June 1 that is more of a photo and video management tool. The AI tool will analyze your stored files and suggest getting rid of photos that are blurry or video clips that are too big, to help you stay within the 15GB free limit.

Existing Pixel users need not worry

Pixel devices, being part of the Google ecosystem, have the privilege of enjoying the benefits for longer. Yes, Pixel owners need not worry, as they will keep getting free unlimited storage for High Quality uploads. If users choose to upload photos and videos at original quality, those will count toward the Google account storage. But nothing changes for uploads within the High Quality 16MP and 1080p criteria.

Pixel 2 and Pixel 2 XL owners can enjoy unlimited uploads of photos and videos at the Original Quality setting until the end of 2021. Beyond that time frame, any new content will be scaled down to High Quality resolution for cloud storage. Users who have the Google Pixel 3 and Pixel 3 XL will enjoy this benefit until January 31, 2022. After that, the same methodology applies to any new photos or videos – i.e., they will be scaled down to the High Quality setting.

The newer devices in the lineup, including the Pixel 4, Pixel 4 XL, Pixel 4a, and Pixel 5, will not get free unlimited uploads at the Original Quality setting, however. They’ll have to live with the free unlimited High Quality storage option. Also, Google will not extend the luxury of free photo and video storage to Pixel devices released in the future.

What are the options?

Once you reach the 15GB storage quota, you’ll have to weigh the options on the table. You can go for Google One, which is a unified cloud storage subscription for Google products. The service has spread across 140 countries since its launch in 2018.

The 100GB tier costs $1.99 per month or $19.99 per year. A 200GB storage slot bumps the price up to $2.99 per month or $29.99 per year. Then there is the 2TB plan, which will set you back $9.99 per month or $99.99 per year. These plans should be more than enough for normal usage. If you need more storage for professional work, there is a 10TB plan for $99.99, 20TB for $199.99, and 30TB for $299.99 per month.
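Taking the monthly prices above at face value, a quick per-gigabyte comparison makes the tiers easier to weigh against each other (a small illustrative calculation using only the figures quoted here):

```python
# Cost per gigabyte for the Google One monthly prices listed above.
plans = {            # storage in GB -> monthly price in USD
    100: 1.99,
    200: 2.99,
    2_000: 9.99,
    10_000: 99.99,
    20_000: 199.99,
    30_000: 299.99,
}

for gigabytes, monthly_price in plans.items():
    per_gb = monthly_price / gigabytes
    print(f"{gigabytes:>6} GB: ${monthly_price:>7.2f}/month  (${per_gb:.4f} per GB)")
```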

The next best option is Microsoft OneDrive for those who want to try a service outside the Google ecosystem. The single-user plan gives 5GB of free storage, and the paid plans start after that: 100GB costs the same as Google’s at $1.99, and then there is a 1TB plan for $6.99 per month or $69.99 per year. The 1TB plan comes with Skype and Office apps like Word, Excel, PowerPoint, and Outlook. Dropbox is also an option for single users; it provides 2GB of free storage and then up to 2TB of storage for $9.99 per month.

It is clear that, out of these options, Google One provides the widest set of choices. Android users will be better off sticking to the Google ecosystem for peace of mind, and yes, this is our recommendation as well. On the other hand, Microsoft OneDrive is worth considering for professionals who want the added benefit of the Microsoft Office suite.


Fortnite Wild Weeks takes over season six: What this means for players

Epic has announced Fortnite Wild Weeks, a sort of dynamic adjustment to the battle royale gameplay that will keep things feeling fresh and require players to stay on top of their strategies. Put simply, each week for the remainder of the game’s season six will put emphasis on a different type of weapon with its own unique benefits and drawbacks.

Players don’t need to worry about anything big changing in Fortnite battle royale. The gameplay and island remain the same. Rather, Wild Weeks refers to the items you’ll find around the island, with each week adjusting the loadout so that you’ll be more likely to come across certain types of weapons rather than others.

This first week, which started on May 6, revolves around fire, which is why you’re stumbling across flare guns and fire-powered bows so easily right now. With these, players can easily set things alight, meaning you’ll need to stay on guard against long-range attacks and have an exit strategy to escape the flames.

Epic says that its ‘Fighting Fire with Fire’ Wild Week will be live until May 13 at 10 AM ET. That gives players a full week to enjoy the literal firepower before the game cycles to the next weapons drop. Epic hasn’t yet revealed which array of weapons will be emphasized in the next Wild Week, which starts on May 20.

After that second Wild Week that starts on May 20, the third and final Wild Week in season six will start on June 3. Players can keep an eye out on the official Fortnite social media accounts to preview the next weapons theme, giving them a chance to plan their strategies before the change.


The new iMacs have ‘force-canceling’ speakers — here’s what that means

Apple loves throwing around buzzwords in product reveals and spec sheets, and one that caught a few eyes was the use of ‘force-canceling‘ in the new iMac‘s sound systems. Apple claims these are the best speakers in an iMac yet, but what does a force-canceling woofer actually mean? Is that some kind of Jedi fighting space dog?

It’s nothing quite as sci-fi as Star Wars. Force-canceling woofer arrangements are fairly common in high-end audio, and while they are far from a guarantee of good sound quality, they do theoretically offer some benefits.

The basic idea behind force-canceling woofers is to eliminate unwanted vibrations by having two woofers firing in opposite directions in perfect unison.

Remember Newton’s third law? For every action, there is an equal and opposite reaction. When a woofer moves, the ‘reaction’ has to go somewhere — usually as stray vibrations in the woofer’s enclosure.

These vibrations can lead to the enclosure rattling and radiating unwanted sound. In large drivers, like those in a subwoofer, it can even cause a speaker to move around the floor. By placing another woofer that fires in the opposite direction, the reactive forces are able to largely cancel each other out.
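A back-of-the-envelope sketch shows the idea (the mass and acceleration figures are invented for illustration): each driver pushes back on the cabinet with a force equal to its moving mass times its acceleration, so two identical drivers firing in opposite directions leave the cabinet with roughly zero net reaction force.

```python
# Toy illustration of force cancellation; the numbers are invented.
moving_mass = 0.02        # kg, moving mass of one woofer cone assembly
acceleration = 150.0      # m/s^2, peak cone acceleration on a bass note

# Newton's third law: each driver pushes back on the cabinet with F = m * a.
reaction_single = moving_mass * acceleration           # one woofer: 3 N into the cabinet
reaction_pair = reaction_single + (-reaction_single)   # opposed pair: forces cancel

print(f"single woofer reaction on cabinet: {reaction_single:.1f} N")
print(f"opposed pair, net reaction:        {reaction_pair:.1f} N")
```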

So why would you want force-canceling woofers in your gadgets? Well, if you’ve ever felt like your computer or phone sounds muddy and rattles when playing bass, these extraneous vibrations are often a culprit.

In theory — nobody has actually heard the new iMac yet — the configuration could help prevent these artefacts and minimize distortion.

In hi-fi speakers, you can also deal with vibrations by creating a dense cabinet with dampening materials or carefully chosen geometry, but one can’t expect the same from a multi-function device like the iMac.

None of this guarantees the new iMac will actually sound any good, but hey, now you know. For a more technical explanation — and to see what a force-canceling setup looks like in some hi-fi speakers, you can check out the below explainer by Soundstage Network and audio researcher Jack Oclee-Brown.


What Waymo’s new leadership means for its self-driving cars

Waymo, Alphabet’s self-driving car subsidiary, is reshuffling its top executive lineup. On April 2, John Krafcik, Waymo’s CEO since 2015, declared that he will be stepping down from his role. He will be replaced by Tekedra Mawakana and Dmitri Dolgov, the company’s former COO and CTO. Krafcik will remain as an advisor to the company.

“[With] the fully autonomous Waymo One ride-hailing service open to all in our launch area of Metro Phoenix, and with the fifth generation of the Waymo Driver being prepared for deployment in ride-hailing and goods delivery, it’s a wonderful opportunity for me to pass the baton to Tekedra and Dmitri as Waymo’s co-CEOs,” Krafcik wrote on LinkedIn as he declared his departure.

The change in leadership can have significant implications for Waymo, which has seen many ups and downs as it continues to develop its driverless car business. It can also hint at the broader state of the self-driving car industry, which has failed to live up to its hype in the past few years.

The deep learning hype

In 2015, Krafcik joined Google’s self-driving car effort, then called Project Chauffeur. At the time, there was a lot of excitement around deep learning, the branch of artificial intelligence that has made great inroads in computer vision, one of the key components of driverless cars. The belief was that, thanks to continued advances in deep learning, it was only a matter of time before self-driving cars became the norm on the streets.

Deep learning models rely on vast amounts of training data to develop stable behavior. And if the AI algorithms were ready, as it seemed at the time, reaching deployment-level self-driving car technology was only a question of having a scalable data-collection strategy to train deep learning models. While some of this data can be generated in simulated environments, the main training of deep learning models used in self-driving cars comes from driving in the real world.

Self-driving cars use deep learning to make sense of their surroundings.

Therefore, what Project Chauffeur needed was a leader who had longtime experience in the automotive industry and could bridge the gap between carmakers and the fast-developing AI sector, and deploy Google’s technology on roads.

And Krafcik was the perfect candidate. Before joining Google, he was the CEO of Hyundai Motor America, had held several positions at Ford, and had worked in the International Motor Vehicle Program at MIT as a lean production researcher and consultant.

Under Krafcik’s tenure, Project Chauffeur spun off as Waymo under Alphabet, Google’s parent company, and quickly transformed into a leader in testing self-driving cars on roads. During this time, Waymo struck partnerships with several automakers, integrated Waymo’s AI and lidar technology into Jaguar and Chrysler vehicles, and expanded its test-driving project to more than 25 states.

Today, Waymo’s cars have driven more than 20 million miles on roads and 20 billion miles in simulation, more than any other self-driving car company.

The limits of self-driving technology

Like the executives of other companies working on driverless car technology, Krafcik promised time and again that fully autonomous vehicles were on the horizon. At the 2020 Web Summit, Krafcik presented a video of a Waymo self-driving car driving on streets without a backup driver.

“We’ve been working on this technology a long time, for about eight years,” Krafcik said. “And every company, including Waymo, has always started with a test driver behind the wheel, ready to take over. We recently surveyed 3,000 adults across the U.S. and asked them when they expected to see self-driving vehicles, ones without a person in the driver’s seat, on their own roads. And the common answer we heard was around 2020… It’s not happening in 2020. It’s happening today.”

But despite Krafcik’s leverage in the automotive industry, Google’s crack AI research team, and Alphabet’s deep pockets, Waymo—like other self-driving car companies—has failed to produce a robust driverless technology. The cars still require backup drivers to monitor and take control as soon as the AI starts to act erratically.

The AI technology is not ready, and despite the lidar, radar, and other sensor technologies used to complement deep learning models, self-driving cars still can’t handle unknown conditions in the same way as humans do.

They can run thousands of miles without making errors, but they might suddenly make very dumb and dangerous mistakes when they face corner cases, such as an overturned truck on the highway or a fire truck parked at the wrong angle.

So far, Waymo has avoided major self-driving scandals such as Tesla’s and Uber’s fatal accidents. But it has yet to deliver a technology that can be deployed at scale. Waymo One, the company’s fully driverless robo-taxi service, is only available in limited parts of Phoenix, Arizona. After two years, Waymo still hasn’t managed to expand the service to more crowded and volatile urban areas.

The company is still far from becoming profitable. Alphabet’s Other Bets segment, which includes Waymo, had an operating cost of $4.48 billion in 2020, against $657 million in revenue. And Waymo’s valuation has seen a huge drop amid cooling sentiments surrounding self-driving cars, going from nearly $200 billion in 2018 to $30 billion in 2020.

The AI and legal challenges of self-driving cars

Waymo driverless car

While Krafcik didn’t explicitly state the reason for his departure, Waymo’s new leadership lineup suggests that the company has acknowledged that the “fully self-driving cars are here” narrative is a bit fallacious.

Driverless technology has come a long way, but a lot more needs to be done. It’s clear that just putting more miles on your deep learning algorithms will not make them more robust against unpredictable situations. We need to address some of the fundamental problems of deep learning, such as lack of causality, poor transfer learning, and intuitive understanding of physics. These are active areas of research, and no one has still provided a definitive answer to them.

The self-driving car industry also faces several legal complications. For instance, if a driverless car becomes involved in an accident, how will culpability be defined? How will self-driving cars share roads with human-driven cars? How do you define whether a road or environment is stable enough for driverless technology? These are some of the questions that the self-driving car community will have to solve as the technology continues to develop and prepare for mass adoption.

In this regard, the new co-CEOs of Waymo are well-positioned to face these challenges. Dolgov, who was Waymo’s CTO before his new role, has a PhD in computer science with a focus on artificial intelligence and a long history of working on self-driving car technology.

As a postdoc researcher, he was part of Stanford’s self-driving car team that won second place in DARPA’s 2007 Urban Challenge. He was also a researcher at Toyota’s Research Institute in Ann Arbor, MI. And since 2009, he has been among the senior engineers in Google’s self-driving car outfit that later became Waymo.

In a nutshell, he’s as good a leader as you can have to deal with the AI software, algorithm, and hardware challenges that a driverless car company will face in the coming years.

Mawakana, on the other hand, is a doctor of law. She led policy teams at Yahoo, eBay, and AOL before joining Waymo and becoming its COO. She’s now well-positioned to tackle the legal and policy challenges that Waymo will face as self-driving cars gradually try to find their way into more jurisdictions.

The dream of self-driving cars is far from dead. In fact, in his final year as CEO, Krafcik managed to secure more than $3 billion in funding for Waymo. There’s still a lot of interest in self-driving cars and their potential value. But Waymo’s new lineup suggests that self-driving cars still have a bumpy road ahead.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.




Closing Microsoft Stores means more support and buying headaches for Windows users

Apple Macs “just work” (for now at least). PCs are hobbyist machines. Both are stereotypes, sure, but they might offer some explanation why Apple Stores are thriving and Microsoft just decided to close all of theirs.

Microsoft spun the closure into a “new approach to retail,” one that would be driven through its website and online sales channels, rather than through a network of physical stores. (Amazon must be chuckling at this.) But the writing was on the wall: In 2019, Microsoft shuttered virtually all of its retail kiosks at malls and other shopping centers, leaving the larger Microsoft Stores on uncertain ground.

If you walked into a mall the last few years, the evidence was there in front of you: shoppers visited Apple Stores, and generally ignored Microsoft Stores. That’s a little odd, when you think about it. A 2018 analysis by Thinknum shows that Microsoft Stores tended to open in the shadow of Apple Stores, so that technology buyers would have options from which to choose. So why did Microsoft Stores fail?

Part of it might have just been their respective approaches to retail. Microsoft Stores always felt a bit like a coffee shop: a place to hang out and talk technology in general. Apple Stores always felt more like an exclusive restaurant: If you’re not here to buy something, get out. You browsed at Microsoft Stores. You visited Apple Stores. The pandemic, of course, had its own, profound effects on both.

Customers and staff at the Microsoft Store in San Francisco on March 23, 2017. (Photo: Martyn Williams)

Microsoft Stores offered PC support, too

But viewing Apple Stores and Microsoft Stores alike as, well, stores also misses their secondary purpose: as a support hub. As anyone who’s owned an iPhone knows, if something goes wrong, your first stop is usually an Apple Store and its Genius Bar technicians. It’s impossible to know how many people visit an Apple Store just to browse versus scheduling a repair appointment. But I’d bet the proportions swung more toward the service side than in the Windows world. (We asked Microsoft for comment, but they haven’t responded.)

Closing Microsoft’s physical stores will certainly hurt. I visited Microsoft’s San Francisco store on multiple occasions on the way back from work—sometimes to see if I could take advantage of a holiday promotion, or to see how consumers were reacting to a new Surface product, or occasionally to seek out support myself. Microsoft’s own Surface products certainly aren’t worry-free. Microsoft’s Surface Book 2 had numerous bugs, and I’ve brought our review unit in for servicing before. One of our overheating 15-inch models apparently had a discrete GPU whose thermal paste failed over time, which we worked out at the support counter.

I think it’s fair, however, to say that consumers expect their Apple products to work. When they don’t, something is wrong, and an Apple tech is required to fix it. When a typical PC user encounters a bug, they sigh, swear, and start hunting down solutions. PC buyers tend to understand that a $599 PC will ship with a few blemishes. Apple wants you to bring its products in for servicing, like a luxury automobile. A Windows PC can be tweaked by any shade-tree mechanic.

Going forward, this will be the “Microsoft Store.” Can Microsoft revamp it into something transformative? (Image: Microsoft)

We’ll likely see this trend continue. With the pandemic still raging, there’s less emphasis on browsing physical stores. Instead, people still predominantly shop online, and simply return items with physical flaws or that they can’t fix themselves. As customers learn more about their PC, they’re turning to community support sites like Reddit, how-to videos, and other DIY fixes. Microsoft could also turn toward a strategy it’s pursued in the past: establishing “store-within-a-store” outposts at retailers like Best Buy.
