Earlier this month, Niantic reverted the changes it made to Pokemon GO at the outset of the COVID-19 pandemic. This included bringing the PokeStop and Pokemon Gym interaction distances back to their normal ranges, decreasing the number of gifts Buddy Pokemon grant, and reducing the effectiveness of incense when players are standing still. Niantic received a lot of pushback from the Pokemon GO community for this decision, primarily because of the changes to Gym and PokeStop interaction ranges.
Now Niantic has responded to those upset players, but unfortunately, it looks like the company isn’t interested in budging on this matter – at least not for now. After acknowledging the feedback and recapping the changes and their reversion in the US and New Zealand, Niantic’s statement gets into the nitty-gritty.
“We have heard your feedback about one change in particular – that of the PokéStop and Gym interaction distance,” Niantic’s statement reads. “We reverted the interaction distance from 80 meters back to the original 40 meters starting in the U.S. and New Zealand because we want people to connect to real places in the real world, and to visit places that are worth exploring.”
“However, we have heard your input loud and clear and so to address the concerns you have raised, we are taking the following actions: We are assembling an internal cross-functional team to develop proposals designed to preserve our mission of inspiring people to explore the world together, while also addressing specific concerns that have been raised regarding interaction distance,” the company continued. “We will share the findings of this task force by the next in-game season change (September 1). As part of this process, we will also be reaching out to community leaders in the coming days to join us in this dialogue.”
So, for now, at least, the PokeStop and Pokemon Gym interaction ranges are going back to their original distances of 40 meters in the United States and New Zealand, and it looks like that will be the case for at least the next month. Of course, it’s impossible to know what this so-called task force will decide, but nothing is changing for now, and that’s probably not going to sit well with Pokemon GO players who wanted to see these interaction range increases stick around.
Of course, Niantic always intended these changes to be temporary, so we knew a day would come when they would be reverted. However, in the time that they were live, it seems many players ended up preferring the changes. The Pokemon GO community made some compelling arguments for keeping the distance changes yesterday, but it’s clear with Niantic’s statement that the message the community sent did not have the effect players were hoping for. We’ll let you know when Niantic shares more about this matter, so stay tuned.
Ahead of the release of F9, a trio of cars from the Fast and Furious franchise is barreling onto Rocket League. The vehicle bundle includes the return of two iconic franchise faves that have been absent since the game went free-to-play last summer, along with the debut of a pivotal custom car from the upcoming sequel. Fans will once again be able to take Dominic Toretto’s Dodge Charger and his dearly departed bro Brian O’Conner’s Nissan Skyline for a spin. Alongside the American muscle and heritage Japanese street racer will be F9‘s rocket-strapped Pontiac Fiero.
You’ll be able to buy the three-vehicle bundle, complete with a bunch of decals, for 2,400 Credits. Individually, the cars will cost 1,000 Credits each, or 300 Credits for those who already own the Charger or Skyline. Of course, it wouldn’t be a Fast and Furious celebration without some Reggaeton. So, Rocket League is adding two player anthems, including a new song by Anitta called Furiosa and an additional anthem that will be revealed when it goes live.
All the goodies will be available in the item shop from June 17th to June 30th. As a freebie, the “Tuna, No Crust” title — a nod to an exchange between love birds O’Conner and Mia Toretto — will be available alongside the new car packs. F9, meanwhile, will roar into US theaters on June 25th.
On February 14, a researcher who was frustrated with reproducing the results of a machine learning research paper opened up a Reddit account under the username ContributionSecure14 and posted to the r/MachineLearning subreddit: “I just spent a week implementing a paper as a baseline and failed to reproduce the results. I realized today after googling for a bit that a few others were also unable to reproduce the results. Is there a list of such papers? It will save people a lot of time and effort.”
The post struck a nerve with other users on r/MachineLearning, which is the largest Reddit community for machine learning.
“Easier to compile a list of reproducible ones…,” one user responded.
“Probably 50%-75% of all papers are unreproducible. It’s sad, but it’s true,” another user wrote. “Think about it, most papers are ‘optimized’ to get into a conference. More often than not the authors know that a paper they’re trying to get into a conference isn’t very good! So they don’t have to worry about reproducibility because nobody will try to reproduce them.”
A few other users posted links to machine learning papers they had failed to implement and voiced their frustration with code implementation not being a requirement in ML conferences.
The next day, ContributionSecure14 created “Papers Without Code,” a website that aims to create a centralized list of machine learning papers whose results others have been unable to reproduce.
“I’m not sure if this is the best or worst idea ever but I figured it would be useful to collect a list of papers which people have tried to reproduce and failed,” ContributionSecure14 wrote on r/MachineLearning. “This will give the authors a chance to either release their code, provide pointers or rescind the paper. My hope is that this incentivizes a healthier ML research culture around not publishing unreproducible work.”
Reproducing the results of machine learning papers
Machine learning researchers regularly publish papers on online platforms such as arXiv and OpenReview. These papers describe concepts and techniques that highlight new challenges in machine learning systems or introduce new ways to solve known problems. Many of these papers find their way into mainstream artificial intelligence conferences such as NeurIPS, ICML, ICLR, and CVPR.
Having source code to go along with a research paper helps a lot in verifying the validity of a machine learning technique and building on top of it. But this is not a requirement for machine learning conferences. As a result, many students and researchers who read these papers struggle with reproducing their results.
“Unreproducible work wastes the time and effort of well-meaning researchers, and authors should strive to ensure at least one public implementation of their work exists,” ContributionSecure14, who preferred to remain anonymous, told TechTalks in written comments. “Publishing a paper with empirical results in the public domain is pointless if others cannot build off of the paper or use it as a baseline.”
But ContributionSecure14 also acknowledges that there are sometimes legitimate reasons for machine learning researchers not to release their code. For example, some authors may train their models on internal infrastructure or use large internal datasets for pretraining. In such cases, the researchers are not at liberty to publish the code or data along with their paper because of company policy.
“If the authors publish a paper without code due to such circumstances, I personally believe that they have the academic responsibility to work closely with other researchers trying to reproduce their paper,” ContributionSecure14 says. “There is no point in publishing the paper in the public domain if others cannot build off of it. There should be at least one publicly available reference implementation for others to build off of or use as a baseline.”
In some cases, even if the authors release both the source code and data to their paper, other machine learning researchers still struggle to reproduce the results. This can be due to various reasons. For instance, the authors might cherry-pick the best results from several experiments and present them as state-of-the-art achievements. In other cases, the researchers might have used tricks such as tuning the parameters of their machine learning model to the test data set to boost the results. In such cases, even if the results are reproducible, they are not relevant, because the machine learning model has been overfitted to specific conditions and won’t perform well on previously unseen data.
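The test-set tuning trick mentioned above can be made concrete with a small sketch. The example below is purely illustrative (not from the article or any specific paper): it fits a closed-form ridge regression and compares a leaky protocol, where the regularization strength is chosen by its score on the test set, against a sound protocol that selects it on a held-out validation split.

```python
# Illustrative sketch: why tuning a hyperparameter on the test set
# inflates reported results. All data and names here are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = X[:, 0] + 0.5 * rng.normal(size=300)  # noisy linear target

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# Train / validation / test split.
X_tr, X_val, X_te = X[:150], X[150:200], X[200:]
y_tr, y_val, y_te = y[:150], y[150:200], y[200:]

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]

# Flawed protocol: pick alpha by its error on the *test* set.
leaky_alpha = min(alphas, key=lambda a: mse(X_te, y_te, ridge_fit(X_tr, y_tr, a)))
leaky_mse = mse(X_te, y_te, ridge_fit(X_tr, y_tr, leaky_alpha))

# Sound protocol: pick alpha on the validation split, then touch
# the test set exactly once for the final reported number.
fair_alpha = min(alphas, key=lambda a: mse(X_val, y_val, ridge_fit(X_tr, y_tr, a)))
fair_mse = mse(X_te, y_te, ridge_fit(X_tr, y_tr, fair_alpha))

# By construction, the leaky number can never look worse than the fair one.
assert leaky_mse <= fair_mse
```

The leaky protocol always reports a test error at least as good as the honest one, which is exactly why such results can be reproducible yet still misleading.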
“I think it is necessary to have reproducible code as a prerequisite in order to independently verify the validity of the results claimed in the paper, but [code alone is] not sufficient,” ContributionSecure14 said.
Efforts for machine learning reproducibility
The reproducibility problem is not limited to small machine learning research teams. Even big tech companies that spend millions of dollars on AI research every year often fail to validate the results of their papers. In October 2020, a group of 31 scientists wrote a joint article in Nature, criticizing the lack of transparency and reproducibility in a paper on the use of AI in medical imaging, published by a group of AI researchers at Google. “[The] absence of sufficiently documented methods and computer code underlying the study effectively undermines its scientific value. This shortcoming limits the evidence required for others to prospectively validate and clinically implement such technologies,” the authors wrote. “Scientific progress depends on the ability of independent researchers to scrutinize the results of a research study, to reproduce the study’s main results using its materials, and to build on them in future studies.”
Recent years have seen a growing focus on AI’s reproducibility crisis. Notable work in this regard includes the efforts of Joelle Pineau, machine learning scientist at Montreal’s McGill University and Facebook AI, who has been pushing for transparency and reproducibility of machine learning research at conferences such as NeurIPS.
“Better reproducibility means it’s much easier to build on a paper. Often, the review process is short and limited, and the true impact of a paper is something we see much later. The paper lives on, and as a community we have a chance to build on the work, examine the code, have a critical eye to what are the contributions,” Pineau told Nature in an interview in 2019.
At NeurIPS, Pineau has helped develop standards and processes that can help researchers and reviewers evaluate the reproducibility of machine learning papers. Her efforts have resulted in an increase in code and data submission at NeurIPS.
Another interesting project is Papers With Code (from which Papers Without Code gets its name), a website that provides implementations for scientific research papers published and presented at different venues. Papers With Code currently hosts the implementation of more than 40,000 machine learning research papers.
“PapersWithCode plays an important role in highlighting papers that are reproducible. However, it does not address the problem of unreproducible papers,” ContributionSecure14 said.
When a machine learning research paper doesn’t include the implementation code, other researchers who read it must try to implement it by themselves, a non-trivial process that can take several weeks and ultimately result in failure.
“If they fail to implement it successfully, they might reach out to the authors (who may not respond) or simply give up,” ContributionSecure14 said. “This can happen to multiple researchers who are not aware of prior or ongoing attempts to reproduce the paper, resulting in many weeks of productivity wasted collectively.”
Papers Without Code
Papers Without Code includes a submission page, where researchers can submit unreproducible machine learning papers along with the details of their efforts, such as how much time they spent trying to reproduce the results. If a submission is valid, Papers Without Code will contact the paper’s original authors and request clarification or publication of implementation details. If the authors do not reply in a timely fashion, the paper will be added to the list of unreproducible machine learning papers.
“PapersWithoutCode solves the problem of centralizing information about prior or ongoing attempts to reproduce a paper and allows researchers (including the original author) to come together and implement a public implementation,” ContributionSecure14 said. “Once the paper has been successfully reproduced, it can be published on PapersWithCode or GitHub where other researchers can use it. In that sense, I would say the goals of PapersWithoutCode are synergistic with those of PapersWithCode and the ML community at large.”
The hope is that Papers Without Code will help establish a culture that incentivizes reproducibility in machine learning research. So far, the website has received more than 10 requests and one author has already pledged to upload their code.
“I realize that this can be a controversial subject in academia and the top priority is to protect the authors’ reputation while serving the broader ML community,” ContributionSecure14 said.
Papers Without Code can become a hub for creating a dialogue between the original authors of machine learning papers and researchers who are trying to reproduce their work.
“Instead of being a static list of unreproducible work, the hope is to create an environment where researchers can collaborate to reproduce a paper,” ContributionSecure14 said.
Reproducible machine learning research
For instance, if you’re doing research that builds on the work done in another paper, you should try out the code or the machine learning model yourself.
“Don’t build off of claims or ‘insights’ that could potentially be unfounded just because the paper says so,” ContributionSecure14 says, adding that this includes papers from large labs or work that has been accepted in a reputable conference.
Another good resource is professor Pineau’s “Machine Learning Reproducibility Checklist.” The checklist provides clear guidelines on how to make the description, code, and data of a machine learning paper clear and reproducible for other researchers.
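One everyday practice in this spirit (a generic example, not an item quoted from the checklist itself) is pinning every source of randomness so that a reported run can be repeated exactly. A minimal stdlib-only sketch:

```python
# Minimal sketch of seeding for reproducibility. The helper name
# set_seed is illustrative; real projects would also seed their
# ML framework's generators here.
import random

def set_seed(seed: int) -> None:
    """Seed Python's global random generator for repeatable runs."""
    random.seed(seed)
    # e.g. also: np.random.seed(seed), torch.manual_seed(seed)

set_seed(42)
first = [random.random() for _ in range(3)]
set_seed(42)
second = [random.random() for _ in range(3)]
assert first == second  # identical draws after reseeding
```

Reporting the seeds used (and the number of runs) is a small step, but it removes one common reason that two researchers running the same code get different numbers.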
ContributionSecure14 believes that machine learning researchers can play a crucial role in promoting a culture of reproducibility.
“There is a lot of pressure to publish at the expense of academic depth and reproducibility and there are not many checks and balances to prevent this behavior,” ContributionSecure14 said. “The only way this will change is if the current and future generation of ML researchers prioritize quality over quantity in their own research.”
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.