@newplatonism @PengzhouW @gautamcgoel Had a read... this isn’t a paper or a study; it’s a comment on a breast cancer object detector. The authors published a convincing rebuttal here (https://t.co/nRskNwtxxF). Is there a different paper that did the repro?
@alyosak Yeah, but, you know, feature generation is of scant scientific value anyway, according to this gem from Google Health. https://t.co/CGqXRuLvaD https://t.co/h0eMqc7tgM
All are partially true (will explain) but C is right. Proprietary models are used bc EHRs suffer from a last-mile problem. While scientists debate whether models should even be made available (https://t.co/IkuP5Fh3vP) the truth is that we don’t have many w
The authors publish a reply to the comment on their paper, thanking the concerned critics for their thoughtful contribution… https://t.co/iWwhExVeH8
Nice to see an interesting discussion here on transparency and reproducibility of AI methods in health care. https://t.co/vze4ll4y6C I think deep learning pipelines should be shareable even when customized code for orchestrating a server grid was used, though.
A powerhouse list of professors from Stanford, MIT, Johns Hopkins, Harvard, & others casting doubt on AI. Top AI studies are often not transparent and reproducible, & they’re frequently published without details such as full code, models, and methods.
@Meddev_guy @Rishi_K_Gupta @ISARIC1 @CCPUKstudy So in that vein what would be your take on Google’s reasons for not making the models underlying their published data openly available? https://t.co/5tYmRl5rs8
This should work in a very simple way: no code - no paper.
5/ If interested, you can find Google's response to our response here: https://t.co/I0P3HfGEYZ. Their main claim was "Unfortunately, the release of any medical device without appropriate regulatory oversight could lead to its misuse." which is completely a
A DL-based breast cancer screening method was published in Nature but w/o code. An objection to this cites transparency & reproducibility: https://t.co/FYDMORtq0n The authors' response cites potential for abuse: https://t.co/OByhj03i8T Should the
...and the authors' reply to the letter at https://t.co/IkWGFcWZbW
Reply to: Transparency and reproducibility in artificial intelligence https://t.co/UwbJWtEKIj
The Google response to a "Matters Arising" piece about not making code available is quite strange: "look, we used open source tools". Can I just say in the future "I used NumPy" in my methods section? https://t.co/RhIGJzuCFI
A discussion on accessibility and reproducibility of AI research in @nature https://t.co/MM8L91RNmg and response: https://t.co/SYvVGexpB6. Addendum to previous paper: https://t.co/DDyLKqLDGY
"Given the extensive textual description ... we believe that investigators proficient in deep learning should be able to learn from and expand upon our approach" feels like the AI equivalent of "The proof is straightforward and left as an exercise to the r
This!! #OpenCode =/= #OpenAccess =/= #OpenData =/= reporting standards !! These are distinct structural #FAIR science issues w/ different technical solutions, but without transparency, it's hard to tell what's #Open & #accessible.
4/Here is the rebuttal highlighted the same way. While most of our letter describes the problems with insufficiently described *methods* and *code*, the @GoogleHealth authors chose to focus first on why they would not share *data*. https://t.co/nYrbimrzZ2