@v_ajw63600 It's not evidence per se, but there do seem to be reports like this. https://t.co/hOJ67RU7BM
A review that theoretical neuroscience folks might find interesting
Backpropagation and the brain https://t.co/auJkbxkfYT
@Grady_Booch Not a neuroscientist. But https://t.co/XJkLTec904
@smilerz91 @jmhorp This is where the question about gradients came from. If you show that gradients work in some isolated neural micro-circuit that can't be related to cognitive function, then it isn't helpful. https://t.co/P5KzBAxsMz
One example that isn't: trajectory mapping is achieved through gaze, not by solving differential equations. Do you want another example?
@residualnought @jmhorp All of them are optimization problems. Name something that isn't. And since you are too dumb to search on your own, here are two studies for you to read and (probably fail to) understand. https://t.co/mrdgRT0n3e https://t.co/DHS
this is one of those papers you re-read and re-read and re-read: https://t.co/74opgvfHmC
Paper: Backpropagation and the Brain, Hinton et al. 8/8 https://t.co/WnLRThMmiL
@IainBancarz Hinton has consistently argued (e.g. below) that brains, like present-day AIs, are massively-distributed learning machines (large-scale statistical models if you prefer). Recent successes in AI support that conclusion. https://t.co/5plobe6k
Backpropagation and the brain | Nature Reviews Neuroscience https://t.co/a3B7gwtTJY
It would be interesting if the network standing in for this backprop turned out to be astrocytes (I feel like there was research along those lines). https://t.co/ZVcsfXKz07 https://t.co/QBH0R30OkG
@yannx0130 @bboczeng @ylecun It may even do better than the backprop algorithm you strictly defined: https://t.co/qd0VQGoTWs
@Daves_Profile @TireTorch @kareem_carr RNNs learn using an algorithm called “backpropagation through time”, but there’s a number of reasons why this is biologically impossible. https://t.co/Suxxnnker5
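To make that concrete, here is a minimal numpy sketch of backpropagation through time; all sizes, names, and the toy loss are illustrative, not from any of the linked papers. The backward pass has to replay the entire stored activity history and reuse the transpose of the recurrent weights, which is the heart of the implausibility argument:

```python
import numpy as np

# Hypothetical sizes -- purely illustrative.
rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4
W_in = rng.normal(0, 0.1, (n_h, n_in))
W_rec = rng.normal(0, 0.1, (n_h, n_h))

xs = rng.normal(size=(T, n_in))
hs = [np.zeros(n_h)]                 # forward pass must store EVERY state
for t in range(T):
    hs.append(np.tanh(W_in @ xs[t] + W_rec @ hs[-1]))

# Backward pass (BPTT): replay the stored history in reverse.
dW_rec = np.zeros_like(W_rec)
delta = hs[-1] - 1.0                 # toy error at the final step
for t in reversed(range(T)):
    delta = delta * (1 - hs[t + 1] ** 2)   # tanh derivative
    dW_rec += np.outer(delta, hs[t])
    delta = W_rec.T @ delta          # needs the TRANSPOSE of the recurrent
                                     # weights ("weight transport")
```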
What is my batch size? How many epochs do I have? Did I use the Adam optimizer? Is attention all I need? Was I trained on a GPU or TPU cluster? Am I a BERT-based or GPT-based LLM 🦜? Sure! Backpropagation and the brain https://t.co/28oefrpfHd
@SaraBolouki A lot of research is focused on understanding how the human brain works / mimicking the human brain with neural networks / building neural networks that learn like humans. https://t.co/9qXJPKPZ1K
@GJuantorena Actually, there are quite a lot of people interested in mimicking how the human brain works/learns. A few pointers from my list: - https://t.co/UhySQPIIVr - https://t.co/LDleIa0uSl - https://t.co/M8nGnFiFYp
@TimKietzmann There are these reviews: https://t.co/TDigFUWZ7b https://t.co/nqLQbZO9QR But they do not cover more recent versions. There is already a need for an update, I think.
Obviously, the main difference is in the hardware. I wouldn't be surprised if we make big improvements on that front in the current decade. However, there is overwhelming evidence that our DNN software is also not up to speed - literally: https://t.co/zylP
RT @NatRevNeurosci: #Backpropagation and the brain - a new Review by Lillicrap (@countzerozzz), Santoro (@santoroAI), Marris (@MarrisLuke),…
Indeed! Backprop is awesome, but it's probably not how the biological brain works, and that's ok. For reference, I really like the excellent (but paywalled) article "Backpropagation and the Brain" by Lillicrap, Santoro, Marris, Akerman, and Hinton. https:
@aminam_amin As it happens, I think it doesn't match up that well at the neuronal level; for example, we have neurons that on their own respond only and exclusively to one specific complex stimulus (search "Jennifer Aniston Cells"). As for backprop, we do have it in the brain, clearly so in the context of movement in particular. This interesting article
I appreciate the plug for my 1990 chapter -- but chapter 11 in my 2015 book with Simon Laughlin (Principles of Neural Design) reflects an additional 25 years of study and reflection.
To add to confusion: 1. Classic neuroscience: https://t.co/ZJoNjGT3UI 2. Classic CS: https://t.co/lDnirblzbM 3. And together, in Lillicrap et al., which you obviously know: https://t.co/oHBt4UL9y7 4. The retina is actually a more elegant model, given the RGC output
I've come across the article I was looking for. An approach that studies the brain through AI concepts such as backpropagation. Sharing it since it also looks useful as a reference for AI. #科学を考えて欲しい #駆け出しエンジニアと繋がりたい https://t.co/fWk844HM3S
[Nature Reviews Neuroscience 2020] A contribution by Hinton and colleagues to a neuroscience review journal. Whether backprop, now indispensable to neural-network training, is biologically realizable is questionable, but the authors propose the NGRAD hypothesis, a biologically realizable algorithm (or family of algorithms?). (1/n) https://t.co/QT5ZBtGtQS
RT @hillbig: Backpropagation (BP) was thought to be impossible as a form of learning in the brain, but the representations BP acquires are closer to representations in the brain than those acquired by other forms of learning. Also, NGRAD (e.g. TP, EP), which approximates BP via feedback connections and activity differences that are local in time/space, could in the brain…
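For a concrete, if cartoonish, picture of the NGRAD idea described above, here is a hypothetical two-phase sketch in numpy. The feedback matrix G is an assumption of the sketch (a stand-in for a feedback pathway), not a mechanism from the review; sizes and learning rates are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_h, n_out = 4, 8, 2                 # illustrative sizes
W = rng.normal(0, 0.5, (n_h, n_in))        # forward weights, layer 1
V = rng.normal(0, 0.5, (n_out, n_h))       # forward weights, layer 2
G = rng.normal(0, 0.5, (n_h, n_out))       # assumed feedback pathway

x = rng.normal(size=n_in)
y_target = np.array([1.0, -1.0])

for _ in range(200):
    # Phase 1: free forward pass.
    h_free = np.tanh(W @ x)
    y_free = np.tanh(V @ h_free)
    # Phase 2: feedback nudges hidden activity toward the target.
    h_nudged = np.tanh(W @ x + 0.1 * (G @ (y_target - y_free)))
    # Local, Hebbian-style updates driven by ACTIVITY DIFFERENCES --
    # no unit ever sees another unit's weights or an explicit gradient.
    V += 0.1 * np.outer(y_target - y_free, h_free)
    W += 0.1 * np.outer(h_nudged - h_free, x)
```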
Developments in the domain of AI and ML have often drawn inspiration from the human brain. This paper analyses the back-propagation algorithm in terms of its equivalents in #neuroscience . #ai #machinelearning #artificialintelligence #ml #deeplearning htt
The main learning rule for neural networks is backpropagation, but it is hard to believe it happens in biological brains, e.g. because a neuron would have to know the synaptic weights of other neurons. The proposal is that target signals and errors instead come down via feedback projections. https://t.co/AkGBxdexug
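One concrete answer to the weight-transport objection above is feedback alignment (Lillicrap et al., 2016): send errors back through a fixed random matrix instead of the transpose of the forward weights. A minimal sketch with illustrative sizes and a toy regression target:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_h, n_out = 4, 16, 2                # illustrative sizes
W1 = rng.normal(0, 0.5, (n_h, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_h))
B = rng.normal(0, 0.5, (n_h, n_out))       # FIXED random feedback weights

x = rng.normal(size=n_in)
y = np.array([0.5, -0.5])                  # toy regression target
lr = 0.05

for _ in range(500):
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y
    # Exact backprop would use W2.T @ e (weight transport);
    # feedback alignment sends the error back through B instead.
    delta_h = (B @ e) * (1 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

Empirically, the forward weights tend to align with the fixed feedback weights during training, which is why the random pathway still delivers useful error signals.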
Backpropagation and the brain https://t.co/mTlaYS9Btz
@neurosutras @KordingLab @steve47285 @AdamMarblestone @Pieters_Tweet A promising direction is hierarchical self-prediction, such as difference target propagation (DTP). I like the Lillicrap et al., 2020 review: https://t.co/HYvhDKFXR7
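For the curious, a hypothetical single-step sketch of the DTP target computation; the feedback mapping Q would itself be trained as an approximate inverse of the forward layer (omitted here), and all names and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_h, n_out = 4, 8, 2                 # illustrative sizes
W1 = rng.normal(0, 0.5, (n_h, n_in))       # forward layer 1
W2 = rng.normal(0, 0.5, (n_out, n_h))      # forward layer 2
Q = rng.normal(0, 0.5, (n_h, n_out))       # learned approximate inverse

x = rng.normal(size=n_in)
y = np.array([0.5, -0.5])

h = np.tanh(W1 @ x)
y_hat = W2 @ h

# Output target: a small step that would reduce the task loss.
y_tgt = y_hat - 0.5 * (y_hat - y)
# DTP target for the hidden layer: map the output target back through
# the approximate inverse, with a correction term that cancels the
# inverse's own reconstruction error.
h_tgt = np.tanh(Q @ y_tgt) + h - np.tanh(Q @ y_hat)
# Each layer then makes a purely LOCAL move toward its target.
W2 += 0.1 * np.outer(y_tgt - y_hat, h)
W1 += 0.1 * np.outer((h_tgt - h) * (1 - h ** 2), x)
```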
6/ Back to the presentation, seems like backprop isn't necessarily a deal-breaker for biological plausibility, since it can likely be approximated by other more plausible mechanisms. @tyrell_turing (Is this Twitter handle a Blade Runner reference?!) htt
Backpropagation and the brain https://t.co/bnPr06pE07
@rkarmani @bayesianboy Yes, the idea of predictive coding as biologically plausible model of network learning is described beautifully by Lillicrap et al in https://t.co/ACv858iXN9. Amazingly, the idea of predictive coding was already present in McCulloch
“Feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.” https://t.co/ikRgKj5IUy
Going through my list, here are some other Deep Learning nuggets I enjoyed in 2020:
AdaBelief optimizer: https://t.co/wn6ERqnYjS
DL in the cloud options: https://t.co/VI1JnQ9Z9O
Backprop & the brain: https://t.co/16tquUGg7s
Do deeper CNNs perform bett
@neuro_data @danilobzdok @KordingLab @bttyeo @ilyasut The logic is weak? Sure, I can buy that (though it's not as trivially weak as you made it out to be). The evidence is weak? No, I disagree. See, e.g.: https://t.co/2lW4aA3tTu https://t.co/PkSfmLW3oz
New idea?
@PMinervini @ilyasut @nvidia Haha yes. I think @ilyasut is just being a tad bit provocative, as always 😇 However, the idea that gradient errors are being sent back in the brain, for learning, is being considered seriously - https://t.co/6Sm4IEzzE7
@ilyasut Hinton has published an idea along these lines: biological brains could compute effective synaptic updates by using feedback connections to induce neuron activities whose locally computed differences encode backpropagation-like error signals
RT @santoroAI: Is there anything that backpropagation can tell us about learning in the brain? We argue that, while the brain probably does…
Backpropagation has allowed us to efficiently train artificial neural networks. How biological neural networks learn remains a mystery, but an auto-encoder architecture offers a plausible approximation of backpropagation in our brains. https://t.co/534bpD
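A minimal sketch of that auto-encoder ingredient, under the assumption that feedback weights are trained locally to reconstruct the activity of the layer below, so that top-down signals can later carry usable targets; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_lo, n_hi = 6, 3                          # illustrative sizes
W = rng.normal(0, 0.5, (n_hi, n_lo))       # bottom-up (encoder) weights
Q = rng.normal(0, 0.5, (n_lo, n_hi))       # top-down (decoder) weights

# Train the top-down path as a local autoencoder: make the feedback
# reconstruction match the bottom-up activity. Only locally available
# quantities (pre- and post-synaptic activities) are used.
for _ in range(300):
    a = rng.normal(size=n_lo)              # some lower-layer activity
    h = np.tanh(W @ a)                     # bottom-up encoding
    a_rec = np.tanh(Q @ h)                 # top-down reconstruction
    Q += 0.1 * np.outer((a - a_rec) * (1 - a_rec ** 2), h)
```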
Backpropagation and the #brain. #EEG #Neurofeedback https://t.co/kxHGyacPwy https://t.co/dqGVl75vUN
To what extent can each cortical column (or "capsule") be understood as a bidirectional auto-encoder? @geoffreyhinton https://t.co/NqDoti9roT
Target propagation and hierarchical/stacked approximate modeling of dynamics/events on different scales? @countzerozzz @geoffreyhinton https://t.co/NqDothRQxl
fin/ PS: This method is an example of a principled and simple NGRAD (neural gradient representation by activity differences) approach as suggested by Lillicrap et al https://t.co/McPbtGbK7A
RT @NatRevNeurosci: #Backpropagation and the brain - a new Perspective by Lillicrap (@countzerozzz), Santoro (@santoroAI), Marris (@MarrisL…
Fantastic review article featuring error signals in artificial neuronal networks and the brain. Super interesting read!
RT @michael_nielsen: Fun new article on backpropagation & the actual brain, by @geoffreyhinton & collaborators: https://t.co/pTOX6x5JDb Ho…
@AwokeKnowing @richanchez @SebastienBubeck @zacharylipton And has recently elaborated: https://t.co/B593TJKooH
RT @GunnarBlohm: very nice perspective on backpropagation in the brain! https://t.co/agRHI1HH7d
On my weekend to-read list: "Backpropagation and the brain" by Tim Lillicrap et al https://t.co/t7aabWFvW4