
Action RPG ‘Project Eve’ will finally debut in 2023 as ‘Stellar Blade’

Shift Up’s long-teased Project Eve is finally nearing release. The Korean developer has announced that the action RPG will arrive in 2023 as Stellar Blade, and it will be a PS5 exclusive on consoles. The studio used Sony’s State of Play event to share a trailer outlining the game’s apocalyptic premise.

You play Eve, a warrior who returns to a shattered Earth to rebuild the last city (Xion) and protect it against “NA:tives.” Naturally, there will be some intrigue between humans alongside the usual battles against horrific-looking creatures. The action RPG mechanics will sound familiar, but could be intriguing if you thrive on perfecting your gameplay. You’ll need “precise timing” with combos, defensive maneuvers and skills to succeed against regular foes, while boss fights will demand a more “strategic” approach.

Project Eve has garnered increasing buzz since the first teaser trailer appeared in 2019, in no small part due to its eye-catching visuals. Whether or not the finished Stellar Blade lives up to those expectations is another story. This is Shift Up’s first console game, not to mention its first AAA release. While the company has had success with the mobile-oriented Destiny Child, it’ll need to show that its experience translates well to other platforms.


DIY project transforms a Game Boy Camera into a modern mirrorless

Nintendo’s Game Boy Camera has inspired countless DIY projects over the years, including an AI model trained on photos captured by the accessory. However, few are likely to match the creativity of the Camera M by the photographer and modder Graves.

In a project spotted by Gizmodo, Graves detailed how he turned the humble Game Boy Camera into a mirrorless camera. Using a custom PCB and parts from a repurposed Game Boy Pocket (a 1996 variant of the original 1989 model that was smaller, lighter and more power efficient), he transplanted the handheld’s internals into a shell that resembles a Fujifilm mirrorless camera. As for the Game Boy Camera’s 128 x 128 pixel CMOS sensor, Graves put that into a custom cartridge attached to a CS lens mount and a manual-focus varifocal lens. The nifty thing about the Camera M is that an original Game Boy Camera can be used in place of the custom cartridge he hacked together.

In either case, the resulting device still takes greyscale 128 x 112 photos, but the ergonomics and user experience are vastly improved. Graves replaced the Pocket’s original screen with a backlit IPS display, making it easier to use the camera at night, and added a 1,800mAh battery that can power everything for up to eight hours. It even comes with USB-C charging. Graves told Gizmodo he hasn’t tried playing any games with his creation yet but speculated turn-based RPGs like Pokémon would be fun with the button layout he devised. So far, only one Camera M exists, but Graves said he’s “strongly leaning” toward selling conversion kits or even complete kits.


‘Limbo’ lead designer’s next project is the cosmic adventure ‘Cocoon’

Jeppe Carlsen’s new game has a distinctly different vibe than his previous releases. As the lead gameplay designer of Limbo and Inside, Carlsen is known for building spooky side-scrollers with morbid visuals, but his latest original project features a bug-like explorer in a mysterious, neon-speckled planet system. 

The main character resembles an anthropomorphic firefly as it picks up orbs and uses them to traverse the environment, leaping among worlds and exploring ancient alien technology. 

“Each world exists within an orb that you can carry on your back,” Cocoon‘s description reads. “Wrap your head around the core mechanic of leaping between worlds — and combine, manipulate, and rearrange them to solve intricate puzzles.”

It’s not all light and airy, though — Cocoon features a large, overwhelming world and plenty of shadowy areas, much like Limbo and Inside.

Cocoon is due to hit Xbox One, Xbox Series X and S, Game Pass, Steam and Switch in the first half of 2023. It’s published by Annapurna Interactive and in development at Carlsen’s studio, Geometric Interactive.


Beanstalk cryptocurrency project robbed after hacker votes to send themself $182 million

On Sunday, an attacker managed to drain around $182 million of cryptocurrency from Beanstalk Farms, a decentralized finance (DeFi) project aimed at balancing the supply and demand of different cryptocurrency assets. Notably, the attack exploited Beanstalk’s majority vote governance system, a core feature of many DeFi protocols.

The attack was spotted on Sunday morning by blockchain analytics company PeckShield, which estimated the hacker’s net profit at around $80 million of the total funds stolen, after accounting for the borrowed funds required to perform the attack.

Beanstalk confirmed the attack in a tweet shortly afterward, saying it was “investigating the attack and will make an announcement to the community as soon as possible.”

Beanstalk describes itself as a “decentralized credit based stablecoin protocol.” It operates a system where participants earn rewards by contributing funds to a central funding pool (called “the silo”) that is used to balance the value of one token (known as a “bean”) at close to $1.
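
Beanstalk’s actual credit-based mechanics are more involved, but the general idea of a supply-adjusting stablecoin can be shown in a toy Python model. Everything here (the constant-demand pricing and the dampening factor) is a simplifying assumption for illustration, not Beanstalk’s code.

# Toy model of a supply-adjusting stablecoin. A generic illustration,
# NOT Beanstalk's actual credit-based mechanism. Assumes a simple
# constant-demand price model: price = demand / supply.

TARGET = 1.00          # the $1 peg
DEMAND = 1_000_000.0   # toy constant demand, in dollars

def rebalance(supply: float) -> float:
    """Mint tokens when above peg, contract supply when below."""
    price = DEMAND / supply
    return supply * (1 + 0.5 * (price - TARGET))  # dampened adjustment

supply = 800_000.0     # starting price: $1.25, above the peg
for step in range(6):
    supply = rebalance(supply)
    print(f"step {step}: supply={supply:,.0f}, price=${DEMAND / supply:.3f}")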

Like many other DeFi projects, the creators of Beanstalk — a development team called Publius — included a governance mechanism where participants could vote collectively on changes to the code. Participants obtained voting rights in proportion to the value of the tokens they held, creating a vulnerability that would prove to be the project’s undoing.

The attack was made possible by another DeFi product called a “flash loan,” which allows users to borrow large amounts of cryptocurrency for very short periods of time (minutes or even seconds). Flash loans are meant to provide liquidity or take advantage of price arbitrage opportunities but can also be used for more nefarious purposes.

According to analysis from blockchain security firm CertiK, the Beanstalk attacker used a flash loan obtained through the decentralized protocol Aave to borrow close to $1 billion in cryptocurrency assets and exchanged these for enough beans to gain a 67 percent voting stake in the project. With this supermajority stake, they were able to approve the execution of code that transferred the assets to their own wallet. The attacker then instantly repaid the flash loan, netting an $80 million profit.

Based on the duration of an Aave flash loan, the entire process took place in less than 13 seconds.
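
The attack pattern is simple enough to capture in a toy model. The Python sketch below illustrates token-weighted governance in general, not Beanstalk’s actual Solidity contracts; the class names and token amounts are hypothetical, and the two-thirds threshold mirrors the 67 percent stake described above.

# Toy model of a flash-loan governance attack. An illustration of the
# general pattern, not Beanstalk's actual contracts.

class Governance:
    def __init__(self):
        self.balances = {}       # holder -> governance token balance
        self.treasury = 182.0    # assets the protocol controls ($M)

    def total_supply(self):
        return sum(self.balances.values())

    def execute(self, voter, proposal):
        # The flaw: voting power is measured at execution time, so tokens
        # held for a single transaction count in full.
        stake = self.balances.get(voter, 0.0) / self.total_supply()
        if stake >= 2 / 3:       # supermajority threshold
            proposal(self)

def drain_to(wallet):
    def proposal(gov):
        print(f"Sending ${gov.treasury}M to {wallet}")
        gov.treasury = 0.0
    return proposal

gov = Governance()
gov.balances['honest holders'] = 100.0

# 1. Flash-borrow assets and swap them for governance tokens.
gov.balances['attacker'] = 250.0   # more than 2/3 of post-swap supply
# 2. Approve and execute the malicious proposal in the same transaction.
gov.execute('attacker', drain_to('attacker'))
# 3. Swap back and repay the flash loan seconds later (fees omitted).
del gov.balances['attacker']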

“We are seeing an increasing trend in flash loan attacks this year,” said CertiK CEO and co-founder Ronghui Gu. “These attacks further emphasize the importance of a security audit, and also being educated about the pitfalls of security issues when writing Web3 code.”

When implemented properly, DeFi services benefit from the security of the underlying blockchain, but their complexity can make the code difficult to fully audit, making such projects an attractive target for hackers. In the case of the Beanstalk hack, the Publius team admitted it had not included any provision to mitigate the possibility of a flash loan attack, an omission that presumably only became apparent once the attack occurred.

A request for comment (sent to the Publius team through Discord) has not yet received a response as of press time.

Brian Pasfield, CTO at cryptocurrency lending platform Fringe Finance, said that decentralized governance structures (known as DAOs) could also create problems.

“DAO governance is currently trending in DeFi,” Pasfield said. “While it is a necessary step in the decentralization process, it should be done gradually and with all the possible risks carefully weighted. Developers and administrators should be aware of new points of failure that can be created by developers or DAO members intentionally or by accident.”

For investors in Beanstalk who have lost their staked coins, there may be little recourse. In a message posted immediately after the hack, the Beanstalk founders wrote that it was “highly unlikely” the project would receive a bailout since it had not been developed with VC backing, adding “we are fucked.”

In the project’s Discord server, many users claim to have lost tens of thousands of dollars of invested cryptocurrency. Since the attack, the hacker has been moving funds through Tornado Cash, a privacy-focused mixer service that has become a go-to step in laundering stolen cryptocurrency funds. With much of the stolen money now obscured, it’s unlikely to be traced and returned.

In the wake of the attack, the value of the BEAN stablecoin has tanked, breaking the $1 peg and trading for around 14 cents on Monday afternoon.




Is your AI project doomed to fail before it begins? 

Artificial intelligence (AI), machine learning (ML) and other emerging technologies have the potential to solve complex problems for organizations. Yet despite increased adoption over the past two years, only a small percentage of companies feel they are gaining significant value from their AI initiatives. Where are their efforts going wrong? Simple missteps can derail any AI initiative, but there are ways to avoid them and achieve success.

Following are four mistakes that can lead to a failed AI implementation, and what you should do to avoid or resolve them for a successful rollout.

Don’t solve the wrong problem

When determining where to apply AI to solve problems, look at the situation through the right lens and engage both sides of your organization in design thinking sessions, as neither business nor IT has all the answers. Business leaders know which levers can be pulled to achieve a competitive advantage, while technology leaders know how to use technology to achieve those objectives. Design thinking can help create a complete picture of the problem, requirements and desired outcome, and can prioritize which changes will have the biggest operational and financial impact.

One consumer product retail company with a 36-hour invoice processing schedule recently experienced this issue when it requested help speeding up its process. A proof of concept revealed that applying an AI/ML solution could decrease processing time to 30 minutes, a 72-fold speedup. On paper the improvement looked great. But the company’s weekly settlement process meant the improved processing time didn’t matter. The solution never moved into production.

When looking at the problem to be solved, it’s important to relate it back to one of three critical bottom-line business drivers: increasing revenue, increasing profitability, or reducing risk. Saving time doesn’t necessarily translate to increased revenue or reduced cost. What business impact will the change bring?

Data quality is critical to success

Data can have a make-or-break impact on AI programs. Clean, dependable, accessible data is critical to achieving accurate results. The algorithm may be good and the model effective, but if the data is of poor quality or not feasible to collect, there will be no clear answer. Organizations must determine what data they need to collect, whether they can actually collect it, how difficult or costly it will be to collect, and whether it will provide the information needed.

A financial institution wanted to use AI/ML to automate loan processing, but missing data elements in source records were creating a high error rate, causing the solution to fail. A second ML model was created to review each record. Those that met the required confidence threshold were moved forward in the automated process; those that did not were pulled for human intervention to solve data-quality problems. This multistage process greatly reduced the human interaction required and enabled the institution to achieve an 85% increase in efficiency. Without the additional ML model to address data quality, the automation solution never would have enabled the organization to achieve meaningful results.
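
The article doesn’t share the institution’s code, but the multistage pattern it describes, routing low-confidence records to humans, is straightforward to sketch in Python. The 0.85 cutoff and the scikit-learn-style predict_proba interface below are assumptions, not details from the source.

from dataclasses import dataclass

# Sketch of confidence-based routing for a loan-processing pipeline.
# The threshold and model interface are hypothetical stand-ins.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class LoanRecord:
    features: list

def route(records, model):
    """Split records into automated and human-review queues.

    `model` is any classifier exposing a scikit-learn-style
    predict_proba method.
    """
    automated, needs_review = [], []
    for record in records:
        confidence = max(model.predict_proba([record.features])[0])
        if confidence >= CONFIDENCE_THRESHOLD:
            automated.append(record)      # continue automated processing
        else:
            needs_review.append(record)   # pull for human intervention
    return automated, needs_review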

In-house or third-party? Each has its own challenges

Each type of AI solution brings its own challenges. Solutions built in-house provide more control because you are developing the algorithm, cleaning the data, and testing and validating the model. But building your own AI solution is complicated, and unless you’re using open source tools, you’ll face licensing costs for the tools you use, plus the costs of upfront solution development and ongoing maintenance.

Third-party solutions bring their own challenges, including:

  • No access to the model or how it works
  • Inability to know if the model is doing what it’s supposed to do
  • No access to the data if the solution is SaaS based
  • Inability to do regression testing or know false acceptance or error rates

In highly regulated industries, these issues become more challenging since regulators will be asking questions on these topics.

A financial services company was looking to validate a SaaS solution that used AI to identify suspicious activity. The company had no access to the underlying model or the data and no details on how the model determined what activity was suspicious. How could the company perform due diligence and verify the tool was effective?

In this instance, the company found its only option was to perform simulations of suspicious or nefarious activity it was trying to detect. Even this method of validation had challenges, such as ensuring the testing would not have a negative impact, create denial-of-service conditions, or impact service availability. The company decided to run simulations in a test environment to minimize risk of production impact. If companies choose to leverage this validation method, they should review service agreements to verify they have authority to conduct this type of testing and should consider the need to obtain permission from other potentially impacted third parties.
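
A minimal harness for this kind of black-box validation might look like the Python sketch below. The Detector stub stands in for whatever query interface the vendor actually exposes, and the events are synthetic; as noted above, such tests belong in an isolated test environment, run with permission.

# Hypothetical black-box validation harness: inject labeled synthetic
# events and score the vendor tool's responses. Run only in a test
# environment, with permission from affected parties.

class Detector:
    """Stub standing in for the real SaaS tool's query interface."""
    def flags(self, event) -> bool:
        return event.get('amount', 0) > 10_000   # placeholder rule

def evaluate(detector, benign_events, suspicious_events):
    true_pos = sum(detector.flags(e) for e in suspicious_events)
    false_pos = sum(detector.flags(e) for e in benign_events)
    return {
        'detection_rate': true_pos / len(suspicious_events),
        'false_positive_rate': false_pos / len(benign_events),
    }

benign = [{'amount': 50}, {'amount': 300}]
suspicious = [{'amount': 250_000}, {'amount': 9_000}]  # one evades the rule
print(evaluate(Detector(), benign, suspicious))
# {'detection_rate': 0.5, 'false_positive_rate': 0.0}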

Invite all of the right people to the party

When considering developing an AI solution, it’s important to include all relevant decision makers upfront, including business stakeholders, IT, compliance, and internal audit. This ensures all critical information on requirements is gathered before planning and work begins.

A hospitality company wanted to automate its process for responding to data subject access requests (DSARs) as required by the General Data Protection Regulation (GDPR), Europe’s strict data-protection law. A DSAR requires organizations to provide, on request, a copy of any personal data the company is holding for the requestor and the purpose for which it is being used. The company engaged an outside provider to develop an AI solution to automate DSAR process elements but did not involve IT in the process. The resulting requirements definition failed to align with the company’s supported technology solutions. While the proof of concept verified the solution would result in more than a 200% increase in speed and efficiency, the solution did not move to production because IT was concerned that the long-term cost of maintaining this new solution would exceed the savings.

In a similar example, a financial services organization didn’t involve its compliance team in developing requirements definitions. The AI solution being developed did not meet the organization’s compliance standards, the provability process hadn’t been documented, and the solution wasn’t using the same identity and access management (IAM) standards the company required. Compliance blocked the solution when it was only partially through the proof-of-concept stage.

It’s important that all relevant voices are at the table early when developing or implementing an AI/ML solution. This will ensure the requirements definition is correct and complete and that the solution meets required standards as well as achieves the desired business objectives.

When considering AI or other emerging technologies, organizations need to take the right actions early in the process to ensure success. Above all, they must make sure that 1) the solution they are pursuing meets one of the three key objectives — increasing revenue, improving profitability, or reducing risk, 2) they have processes in place to get the necessary data, 3) their build vs. buy decision is well-founded, and 4) they have all of the right stakeholders involved early on.

Scott Laliberte is Managing Director of the Emerging Technology Group at Protiviti.


Sega partners with Microsoft on its ‘Super Game’ project

Sega is partnering with Microsoft to use the company’s Azure cloud platform to produce “large-scale, global games” as part of its recently announced Super Game project. The publisher first revealed the project this past May during an investor event, saying at the time that it would become available sometime during its fiscal 2026 year. In this latest announcement, Sega said the project is integral to its mid- to long-term strategy and will see it creating games with a global online component.

“This proposed alliance represents SEGA looking ahead, and by working with Microsoft to anticipate such trends as they accelerate further in future, the goal is to optimise development processes and continue to bring high-quality experiences to players using Azure cloud technologies,” the company said. At this point, we wouldn’t read too much into the fact that Sega and Microsoft are partnering on the project. Plenty of companies depend on Microsoft for their cloud infrastructure, in part because they want to avoid building an online backend from scratch.

“We look forward to working together as they explore new ways to create unique gaming experiences for the future using Microsoft cloud technologies,” Microsoft’s Sarah Bond said of the alliance. “Together we will reimagine how games get built, hosted, and operated, with a goal of adding more value to players and Sega alike.”


Google’s Unattended Project Recommender aims to cut cloud costs

Google today announced Unattended Project Recommender, a new feature of Active Assist, Google’s collection of tools designed to help optimize Google Cloud environments. Unattended Project Recommender is intended to provide “a one-stop shop” for discovering, reclaiming, and shutting down unattended cloud computing projects, Google says, via actionable and automatic recommendations powered by machine learning algorithms.

In enterprise environments, it’s not uncommon for cloud resources to occasionally be forgotten about. Not only can these resources be difficult to identify, but they also tend to create a lot of headaches for product teams down the road — including unnecessary waste. A recent Anodot survey found fewer than 20% of companies were able to immediately detect spikes in cloud costs, and 77% of companies with over $2 million in cloud costs were often surprised by how much they spent.

Unattended Project Recommender, which is available through Google Cloud’s Recommender API, aims to address this by identifying projects that are likely abandoned based on API and networking activity, billing, usage of cloud services, and other signals. As product managers Dima Melnyk and Bakh Inamov explain in a blog post, the tool was first tested with teams at Google over the course of 2021, where it was used to clean up internal unattended projects and eventually the projects of select Google Cloud customers, who helped to tune Unattended Project Recommender based on real-life data.

Machine learning

Unattended Project Recommender analyzes usage activity across all projects within an organization, including items like service accounts with authentication activity, API calls consumed, network ingress and egress, services with billable usage, active project owners, the number of active virtual machines, BigQuery jobs, and storage requests. Google Cloud customers can automatically export recommendations for investigation, or interact with the data using spreadsheets and Google Workspace.

“Based on [various] signals, Unattended Project Recommender can generate recommendations to clean up projects that have low usage activity, where ‘low usage’ is defined using a machine learning model that ranks projects in [an] organization by level of usage, or recommendations to reclaim projects that have high usage activity but no active project owners,” Melnyk and Inamov wrote. “We hope that [customers] can leverage Unattended Project Recommender to improve [their] cloud security posture and reduce cost.”
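
For readers who want to try this, recommendations can be listed with the standard google-cloud-recommender Python client. The sketch below is minimal: the organization ID is a placeholder, and the recommender ID shown is the one Google documents for unattended projects, so verify it against the current docs.

# Minimal sketch: list unattended-project recommendations with the
# google-cloud-recommender client (pip install google-cloud-recommender).
# The organization ID is a placeholder; verify the recommender ID
# against Google's current documentation.
from google.cloud import recommender_v1

client = recommender_v1.RecommenderClient()
parent = (
    'organizations/123456789012/locations/global/recommenders/'
    'google.resourcemanager.projectUtilization.Recommender'
)

for rec in client.list_recommendations(parent=parent):
    print(rec.name)
    print('  ', rec.description)        # e.g. suggests cleanup or reclaim
    print('  priority:', rec.priority.name)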

Google notes that, as with any other tool, customers can choose to opt out of data processing by disabling the corresponding groups in the “Transparency & control” tab under Google Cloud’s Privacy & Security settings.


Persona 25th Anniversary Site Teases 7 Project Announcements

Atlus’ hit RPG series Persona has been around for 25 years now. To celebrate, the developer is teasing seven announcements on Persona’s 25th-anniversary website.

Persona went from being a simple spinoff of Atlus’ original flagship JRPG, Shin Megami Tensei, to the frontrunner of the company’s catalog. The series has just come off its most successful entry yet in Persona 5 and is already looking toward the future.

Atlus revealed via Twitter that it has Persona news, events, and more in store for the occasion. These announcements will be shared throughout the next year, with the final one unveiled in fall 2022.

It’s very likely that one of these announcements could be Persona 6. Atlus has said it’s already looking into development of the next title in the series. It has also admitted that the task of topping Persona 5 is bringing out some nerves: “When we created Persona 4, there was pressure that it had to exceed Persona 3. Now, we will have to create a 6 [that] exceeds 5.”

While it’s not clear what the other reveals might be, looking at current trends in gaming and Persona’s own history may offer a few hints. For instance, one of these announcements could well be a concert experience or music compilation celebrating Persona’s well-received soundtracks.

Another announcement many fans have been clamoring for is a new installment of Persona 4 Arena, the fighting game spinoff by Arc System Works. The developer has recently released the successful Dragon Ball FighterZ and Guilty Gear Strive, and it has also explored the world of Persona once again in BlazBlue: Cross Tag Battle. It’s possible that a Persona 5 Arena could be on the horizon to hold fans over until a new mainline title releases.

The speculation is endless, especially for a company as well-versed in the world of merchandising as Atlus. Figures, music, spinoffs — it’s all very possible. However, all we can do is wait for the announcements to actually begin, with the first coming this September.


Why Redfin’s CTO, Bridget Frey, approaches D&I like an engineering project

One of the major goals of Transform, our intensive, week-long applied AI event, is to bring a broad variety of expertise, views, and experiences to the table every year: to be mindful of the importance of diversity, and to not just pay lip service to representation but focus on inclusion across a large percentage of our panels and talks.

Inspired by the event’s commitment to diversity and inclusivity, we wanted to sit down with some of Transform’s speakers, leaders in the AI industry who make diversity, equity & inclusion (DE&I) a central tenet of their work.

Among them is Bridget Frey, CTO of Redfin. She’ll be speaking at Transform this year about how she works to build diverse and inclusive teams at the company, and about the potential for AI to combat redlining and diversify American neighborhoods. We had the opportunity to speak to Frey about what launched her love of tech, how Redfin makes its focus on equity and inclusion more than just lip service, and more.




VB: Could you tell me about your background, and your current role at your company?

I’m the CTO of Redfin, where we build technology to make buying and selling homes less complicated and less stressful. Right now, we’re investing heavily in machine learning, services, and the cloud as we scale multiple businesses such as Redfin Mortgage and Redfin Now, our iBuyer business.

When I was five, my dad brought home an Apple IIe, and the two of us learned to code on it together. I’ve spent my career working at high-growth technology companies, and just celebrated 10 years with Redfin this spring.

VB: Any woman in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman in tech” — how has that influenced your career?

I’ve been the only woman in an engineering department more than once in my career. That experience of feeling isolated has had a big influence on how I approach creating a culture that listens to all voices and a team that builds inclusive products. When I became CTO, I had this realization that I was now responsible in a very real way for DE&I on my team, and it inspired me to find ways to make a difference. We still have plenty of work to do, but I firmly believe that the tech industry can improve with focused effort.

VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community?

When I joined Redfin in 2011, I was the only woman on the Seattle engineering team. Today, 36% of our technical team are women and 10% are Black or Latinx. Some ways we’ve gotten here:

We approached DE&I the same way our engineering team approached any engineering project — we made a backlog of bugs, we prioritized them, and we started making changes in how we recruit, train, promote, pay, and so many other areas.

We sourced candidates from alternative backgrounds, and set them up to succeed. We’ve made investments in documentation and training, which let us hire more people in 2020 who don’t have traditional computer-science backgrounds. We also opened roles early to candidates from non-traditional sources.

We started hiring more engineers outside of SF and Seattle. In 2018, we opened an engineering office in Frisco, Texas, and 21% of this team is Black or Latinx. As we hire more fully remote workers, we hope to build on that momentum.

We improved the diversity of our recruiting team. From June 1 to December 31, 2020, the percentage of Redfin’s Black and Latinx recruiters increased from 15% to 23%; 47% of our recruiters are now people of color. Their personal networks, and their own Redfin experience, make our recruiters formidable advocates for hiring people of color.

VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation?

I feel a debt of gratitude to the Black and BIPOC women who are sharing their experiences and pushing our industry to do more. Timnit Gebru, as a recent example, has inspired a whole host of researchers and practitioners to speak out on issues of equity in the field of AI. And it’s spreading beyond the ethical AI folks you’d expect to be the most aware of these issues to a broad set of tech workers who are advocating for systemic change. It’s unfortunately still easy to point to things that are broken in the DE&I space, but I’m optimistic that the tech industry is getting better at confronting these issues in a transparent way and then finding concrete solutions that will make a difference.

[Frey’s talk is just one of many conversations around DE&I at Transform 2021 next week (July 12-16). On Monday, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.]


US cancels crucial $10B military AI project because Trump is a baby

The Pentagon yesterday announced it was scuttling its long-doomed “Project JEDI,” a cloud-services AI contract that was awarded to Microsoft in 2019.

Up front: Project JEDI is a big deal. The US military needs a reliable cloud-service platform from which to operate its massive AI infrastructure. Unfortunately the project was mishandled from the very beginning.

Now, two years later, the Pentagon‘s calling for a reboot:

Today, the Department of Defense (DoD) canceled the Joint Enterprise Defense Infrastructure (JEDI) Cloud solicitation and initiated contract termination procedures. The Department has determined that, due to evolving requirements, increased cloud conversancy, and industry advances, the JEDI Cloud contract no longer meets its needs.

Evolving requirements?

Acting DOD chief information officer John Sherman is quoted in the press release as saying the Pentagon‘s scrapping the project because “evolution of the cloud ecosystem within DoD, and changes in user requirements to leverage multiple cloud environments to execute mission” have necessitated an overhaul.

But the Pentagon was clearly aware of the problems with a single-provider cloud solution from the outset of the contract.

IBM and Oracle both gave official statements citing the folly of a single cloud provider. Oracle went on to sue the government, and IBM protested JEDI at its inception, giving the following statement on its own blog:

IBM knows what it takes to build a world-class cloud. No business in the world would build a cloud the way JEDI would and then lock in to it for a decade. JEDI turns its back on the preferences of Congress and the administration, is a bad use of taxpayer dollars and was written with just one company in mind. America’s warfighters deserve better.

That “one company in mind” was Amazon. But Trump had other plans.

When Microsoft then became the sole awardee of the $10B contract, it set off alarms in the tech community. Amazon filed a lawsuit alleging the Pentagon was forbidden from issuing it the contract due to Donald Trump‘s conflicts of interest.

Notably, Oracle’s lawsuit was dismissed after a government watchdog organization said the Pentagon‘s single-provider solution had been thoroughly looked into and met all the nation’s requirements.

Per the Government Accountability Office (GAO) in November of 2018:

GAO’s decision concludes that the Defense Department’s decision to pursue a single-award approach to obtain these cloud services is consistent with applicable statutes (and regulations) because the agency reasonably determined that a single-award approach is in the government’s best interests for various reasons, including national security concerns, as the statute allows.

And that makes it particularly silly for the Pentagon‘s acting IT boss to go on the record now claiming the US has only recently come to understand that a multi-provider cloud solution is necessary for Project JEDI.

Bottom line: The Pentagon’s covering its own ass. Donald Trump‘s conflicts of interest, in this particular case, have set the nation’s defensive capabilities and AI programs back by years.

Whether we like it or not, we’re in a global AI arms race. And the US is doing a great job of shooting itself in the foot so China can catch up.

There’s no telling how many millions of dollars the US has spent in court defending its ill-advised and completely inexplicable decision to restrict a $10B cloud-service project to a single provider.

We do know that phase one of Project JEDI was scheduled to be completed in April, a deadline that has long since passed.

Now, after all the delays, the Pentagon (sans Donald Trump‘s interference) will finally move forward with Project JEDI as a multi-provider cloud solution, apparently starting the bidding process all over.

And there’s only one rational explanation for all of it: Trump’s soft-shelled, crybaby egotism.

When it was announced the project would be awarded to a single-provider, most experts and pundits were certain Amazon would win the contract. This obviously didn’t sit well with former president Trump.

The former president, during the period when bidding for JEDI was still open, often conflated Amazon and the Washington Post, both of which are tied to rival billionaire Jeff Bezos, who founded Amazon and owns the newspaper.