A collection of neural network papers related to computational neuroscience (the ones I was personally interested in). If an important paper is missing, if anything here is wrong, or if you would like to add an explanation, please open an Issue or a pull request.
Artificial neural networks and computational neuroscience
Survey
- D. Cox, T. Dean. 'Neural networks and neuroscience-inspired computer vision'. Curr. Biol.24(18) 921-929 (2014). (sciencedirect)
- A. Marblestone, G. Wayne, K. Kording. 'Toward an integration of deep learning and neuroscience'. (2016). (arXiv)
- O. Barak. 'Recurrent neural networks as versatile tools of neuroscience research'. Curr. Opin. Neurobiol. (2017). (sciencedirect)
- D. Silva, P. Cruz, A. Gutierrez. 'Are the long-short term memory and convolution neural net biological system?'. KICS.4(2), 100-106 (2018). (sciencedirect)
- N. Kriegeskorte, T. Golan. 'Neural network models and deep learning - a primer for biologists'. (2019). (arXiv)
Analysis methods for neural networks
Methods for understanding the neural representations learned by ANNs.
Survey
- D. Barrett, A. Morcos, J. Macke. 'Analyzing biological and artificial neural networks: challenges with opportunities for synergy?'. (2018). (arXiv)
Neuron Feature
- I. Rafegas, M. Vanrell, L.A. Alexandre. 'Understanding trained CNNs by indexing neuron selectivity'. (2017). (arXiv)
- A. Nguyen, J. Yosinski, J. Clune. 'Understanding Neural Networks via Feature Visualization: A survey'. (2019). (arXiv)
Comparing neural network representations
Canonical correlation analysis (CCA)
- M. Raghu, J. Gilmer, J. Yosinski, J. Sohl-Dickstein. 'SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability'. NIPS. (2017). (arXiv)
- H. Wang, et al. 'Finding the needle in high-dimensional haystack: A tutorial on canonical correlation analysis'. (2018). (arXiv)
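The core computation behind these CCA-based comparisons fits in a few lines of NumPy. The sketch below is only an illustration, not the full SVCCA procedure (which first reduces each activation matrix with an SVD); the array names and sizes are hypothetical placeholders for (samples x units) activation matrices.

```python
import numpy as np

def canonical_correlations(acts_a, acts_b):
    """Canonical correlations between two (samples x units) activation matrices.

    Minimal sketch: center each matrix, orthonormalize its columns with QR,
    and take the singular values of the cross-product, which are the
    canonical correlations (cosines of the principal angles).
    """
    a = acts_a - acts_a.mean(axis=0)
    b = acts_b - acts_b.mean(axis=0)
    qa, _ = np.linalg.qr(a)
    qb, _ = np.linalg.qr(b)
    corrs = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)

# Hypothetical usage: compare two layers' responses to the same 500 stimuli.
layer1 = np.random.randn(500, 64)
layer2 = np.random.randn(500, 32)
print(canonical_correlations(layer1, layer2).mean())  # mean canonical correlation
```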
Centered kernel alignment (CKA)
- S. Kornblith, M. Norouzi, H. Lee, G. Hinton. 'Similarity of Neural Network Representations Revisited'. (2019). (arXiv)
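Linear CKA itself is compact enough to state directly. The function below follows the linear-CKA formula from Kornblith et al. (2019); as above, `acts_a` and `acts_b` stand for hypothetical (samples x units) activation matrices.

```python
import numpy as np

def linear_cka(acts_a, acts_b):
    """Linear centered kernel alignment between two (samples x units) matrices."""
    a = acts_a - acts_a.mean(axis=0)
    b = acts_b - acts_b.mean(axis=0)
    # ||B^T A||_F^2 / (||A^T A||_F * ||B^T B||_F)
    numerator = np.linalg.norm(b.T @ a, ord='fro') ** 2
    denominator = (np.linalg.norm(a.T @ a, ord='fro')
                   * np.linalg.norm(b.T @ b, ord='fro'))
    return numerator / denominator
```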
Representational stability analysis (ReStA)
- S. Abnar, L. Beinborn, R. Choenni, W. Zuidema. 'Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains'. (2019). (arXiv)
Fixed point analysis for RNN
- M.B. Ottaway, P.Y. Simard, D.H. Ballard. 'Fixed point analysis for recurrent networks'. NIPS. (1989). (pdf)
- D. Sussillo, O. Barak. 'Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks'. Neural Comput.25(3), 626-649 (2013). (MIT Press), (Jupyter notebook)
- M.D. Golub, D. Sussillo. 'FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks'. JOSS. (2018). (pdf), (GitHub)
- G.E. Katz, J.A. Reggia. 'Using Directional Fibers to Locate Fixed Points of Recurrent Neural Networks'. IEEE. (2018). (IEEE)
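In the spirit of Sussillo & Barak (2013), fixed points of a trained RNN can be located by minimizing the "speed" q(h) = ½‖F(h) − h‖² from many initial states. The sketch below uses a vanilla tanh RNN with random placeholder weights standing in for a trained network; it is a minimal illustration, not the full FixedPointFinder toolbox.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 32
W = rng.standard_normal((n, n)) / np.sqrt(n)   # placeholder recurrent weights
b = np.zeros(n)

def step(h):
    """One step of an input-free vanilla RNN: h_{t+1} = tanh(W h_t + b)."""
    return np.tanh(W @ h + b)

def speed(h):
    """q(h) = 0.5 * ||F(h) - h||^2; fixed points are minima with q ~ 0."""
    d = step(h) - h
    return 0.5 * d @ d

# Minimize q from several random initial states; keep candidates with tiny speed.
fixed_points = []
for _ in range(20):
    h0 = rng.standard_normal(n)
    res = minimize(speed, h0, method='L-BFGS-B')
    if res.fun < 1e-8:          # tolerance is arbitrary
        fixed_points.append(res.x)
print(f'found {len(fixed_points)} candidate fixed points')
```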
Ablation analysis
- A.S. Morcos, D.G.T. Barrett, N.C. Rabinowitz, M. Botvinick. 'On the importance of single directions for generalization'. ICLR. (2018). (arXiv)
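The single-unit ablation analysis of Morcos et al. (2018) amounts to silencing one unit at a time and re-measuring task performance. The toy sketch below uses random placeholder activations and a fixed linear readout in place of a trained network, purely to show the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_hidden, n_classes = 1000, 50, 10
hidden = rng.standard_normal((n_samples, n_hidden))   # placeholder hidden activations
readout = rng.standard_normal((n_hidden, n_classes))  # placeholder readout weights
labels = (hidden @ readout).argmax(axis=1)            # "ground truth" from the intact net

def accuracy(acts):
    return ((acts @ readout).argmax(axis=1) == labels).mean()

# Ablate each unit in turn by zeroing its activations and record the accuracy drop.
baseline = accuracy(hidden)
drops = []
for unit in range(n_hidden):
    ablated = hidden.copy()
    ablated[:, unit] = 0.0
    drops.append(baseline - accuracy(ablated))

print('most important unit:', int(np.argmax(drops)), 'accuracy drop:', max(drops))
```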
Others
- M. Haesemeyer, A. Schier, F. Engert. 'Convergent temperature representations in artificial and biological neural networks'. (2018). (bioRxiv)
Cognitive computational neuroscience
Cognitive computational neuroscience.
- N. Kriegeskorte, P. Douglas. 'Cognitive computational neuroscience'. Nat. Neurosci.21(9), 1148-1160 (2018). (arXiv)
- J.S. Bowers. 'Parallel Distributed Processing Theory in the Age of Deep Networks'. Trends. Cogn. Sci. (2019). (sciencedirect)
- R.M. Cichy, D. Kaiser. 'Deep Neural Networks as Scientific Models'. Trends. Cogn. Sci. (2019). (sciencedirect)
Computational psychiatry
Computational psychiatry. There are many more papers in this field, but I have not been able to keep up with them.
- R.E. Hoffman, U. Grasemann, R. Gueorguieva, D. Quinlan, D. Lane, R. Miikkulainen. 'Using computational patients to evaluate illness mechanisms in schizophrenia'. Biol. Psychiatry.69(10), 997–1005 (2011). (PMC)
Deep neural networks as models of the brain
Understanding the brain's neural representations is hard. When a neural network is trained on a particular task (i.e., optimized for a particular loss function), it sometimes acquires representations that resemble those found in the brain. In such cases, we can indirectly infer the purpose served by the brain's representations (an approach to the "why" question).
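One common way such comparisons are made concrete (for example in the Yamins et al. work listed under Vision below) is to fit a regularized linear mapping from a network layer's activations to recorded neural responses and test how well held-out responses are predicted. The sketch below is a minimal, hypothetical version of that idea; real analyses add cross-validation over stimuli and noise-ceiling corrections.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: activations of one network layer and recorded responses
# to the same set of stimuli.
rng = np.random.default_rng(0)
layer_acts = rng.standard_normal((400, 256))   # stimuli x model features
neural_resp = rng.standard_normal((400, 30))   # stimuli x recorded neurons

X_tr, X_te, y_tr, y_te = train_test_split(layer_acts, neural_resp, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print('held-out R^2:', model.score(X_te, y_te))  # prediction of neural responses
```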
Survey
- A.J.E. Kell, J.H. McDermott. 'Deep neural network models of sensory systems: windows onto the role of task constraints'. Curr. Opin. Neurobiol. (2019). (sciencedirect)
Cortical neuron
- P. Poirazi, T. Brannon, B.W. Mel. 'Pyramidal Neuron as Two-Layer Neural Network'. Neuron. 37(6). (2003). (Neuron)
- B. David, S. Idan, L. Michael. 'Single Cortical Neurons as Deep Artificial Neural Networks'. (2019). (bioRxiv)
Vision
- D. Zipser, R.A. Andersen. 'A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons'. Nat.331, 679–684 (1988). (Nat.)
- A. Krizhevsky, I. Sutskever, G. Hinton. 'ImageNet classification with deep convolutional neural networks'. NIPS (2012). (pdf)
- (cf.) I. Goodfellow, Y. Bengio, A. Courville. 'Deep Learning'. MIT Press. (2016) : Chapter 9.10 'The Neuroscientific Basis for Convolutional Networks'
- D. Yamins, et al. 'Performance-optimized hierarchical models predict neural responses in higher visual cortex'. PNAS.111(23) 8619-8624 (2014). (PNAS)
- S. Khaligh-Razavi, N. Kriegeskorte. 'Deep supervised, but not unsupervised, models may explain IT cortical representation'. PLoS Comput. Biol. 10(11), (2014). (PLOS)
- U. Güçlü, M.A.J. van Gerven. 'Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream'. J. Neurosci.35(27), (2015). (J. Neurosci.)
- D. Yamins, J. DiCarlo. 'Eight open questions in the computational modeling of higher sensory cortex'. Curr. Opin. Neurobiol.37, 114–120 (2016). (sciencedirect)
- K.M. Jozwik, N. Kriegeskorte, K.R. Storrs, M. Mur. 'Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments'. Front. Psychol. (2017). (Front. Psychol)
- C. J. Spoerer, P. McClure, N. Kriegeskorte. 'Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition'. Front. Psychol. (2017). (Front. Psychol)
- M.N.U. Laskar, L.G.S. Giraldo, O. Schwartz. 'Correspondence of Deep Neural Networks and the Brain for Visual Textures'. (2018). (arXiv)
- I. Kuzovkin, et al. 'Activations of Deep Convolutional Neural Network are Aligned with Gamma Band Activity of Human Visual Cortex'. Commun. Biol.1 (2018). (Commun. Biol.)
- M. Schrimpf, et al. 'Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?'. (2018). (bioRxiv)
- E. Kim, D. Hannan, G. Kenyon. 'Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons'. CVPR. (2018). (arXiv)
- S. Ocko, J. Lindsey, S. Ganguli, S. Deny. 'The emergence of multiple retinal cell types through efficient coding of natural movies'. (2018). (bioRxiv)
- Q. Yan, et al. 'Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks'. (2018). (arXiv)
- A. Nayebi, D. Bear, J. Kubilius, K. Kar, S. Ganguli, D. Sussillo, J. DiCarlo, D. Yamins. 'Task-Driven Convolutional Recurrent Models of the Visual System'. (2018). (arXiv), (GitHub)
- J. Lindsey, S. Ocko, S. Ganguli, S. Deny. 'A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs'. (2019). (arXiv)
- J. Ukita, T. Yoshida, K. Ohki. 'Characterisation of nonlinear receptive fields of visual neurons by convolutional neural network'. Sci.Rep. (2019). (Sci.Rep.)
- I. Fruend. 'Simple, biologically informed models, but not convolutional neural networks describe target detection in naturalistic images'. bioRxiv (2019). (bioRxiv)
- T.C. Kietzmann, et al. 'Recurrence required to capture the dynamic computations of the human ventral visual stream'. (2019). (arXiv)
- K. Qiao. et al. 'Category decoding of visual stimuli from human brain activity using a bidirectional recurrent neural network to simulate bidirectional information flows in human visual cortices'. (2019). (arXiv)
- K. Kar, J. Kubilius, K. Schmidt, E.B. Issa, J.J. DiCarlo. 'Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior'. Nat. Neurosci. (2019). (Nat. Neurosci.), (bioRxiv)
- A.S. Ecker. et al. 'A rotation-equivariant convolutional neural network model of primary visual cortex'. ICLR (2019). (OpenReview). (arXiv)
Recursive Cortical Network (RCN; a non-neural-network model)
- D. George, et al. 'A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs'. Science (2017). (Science), (GitHub)
Weight shared ResNet as RNN for object recognition
- Q. Liao, T. Poggio. 'Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex'. (arXiv)
Generating visual super stimuli
- C.R. Ponce, et al. 'Evolving super stimuli for real neurons using deep generative networks'. Cell. 177, 999–1009 (2019). (bioRxiv), (Cell)
- P. Bashivan, K. Kar, J.J DiCarlo. 'Neural Population Control via Deep Image Synthesis'. Science. (2019). (bioRxiv), (Science), (GitHub1, GitHub2)
Visual number sense
- K. Nasr, P. Viswanathan, A. Nieder. 'Number detectors spontaneously emerge in a deep neural network designed for visual object recognition'. Sci. Adv. (2019). (Sci. Adv.)
Auditory cortex
- U. Güçlü, J. Thielen, M. Hanke, M. van Gerven. 'Brains on Beats'. NIPS. (2016). (arXiv)
- A.J.E. Kell, D.L.K. Yamins, E.N. Shook, S.V. Norman-Haignere, J.H. McDermott. 'A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy'. Neuron.98(3), (2018). (sciencedirect)
Motor cortex
- D. Sussillo, M. Churchland, M. Kaufman, K. Shenoy. 'A neural network that finds a naturalistic solution for the production of muscle activity'. Nat. Neurosci.18(7), 1025–1033 (2015). (PubMed)
Spatial coding (Place cells, Grid cells)
- C. Cueva, X. Wei. 'Emergence of grid-like representations by training recurrent neural networks to perform spatial localization'. ICLR. (2018). (arXiv)
- A. Banino, et al. 'Vector-based navigation using grid-like representations in artificial agents'. Nature.557(7705), 429–433 (2018). (pdf), (GitHub)
- J.C.R. Whittington. et al. 'Generalisation of structural knowledge in the hippocampal-entorhinal system'. NIPS. (2018). (arXiv)
Rodent barrel cortex
- C. Zhuang, J. Kubilius, M. Hartmann, D. Yamins. 'Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System'. NIPS. (2017). (arXiv)
Cognitive task
- H.F. Song, G.R. Yang, X.J. Wang. 'Reward-based training of recurrent neural networks for cognitive and value-based tasks'. eLife. 6 (2017). (eLife)
- G.R. Yang, M.R. Joglekar, H.F. Song, W.T. Newsome, X.J. Wang. 'Task representations in neural networks trained to perform many cognitive tasks'. Nat. Neurosci. (2019). (Nat. Neurosci.) (GitHub)
Time perception
- N.F. Hardy, V. Goudar, J.L. Romero-Sosa, D.V. Buonomano. 'A model of temporal scaling correctly predicts that motor timing improves with speed'. Nat. Commun.9 (2018). (Nat. Commun.)
- J. Wang, D. Narain, E.A. Hosseini, M. Jazayeri. 'Flexible timing by temporal scaling of cortical responses'. Nat. Neurosci.21, 102–110 (2018). (Nat. Neurosci.)
- W. Roseboom, Z. Fountas, K. Nikiforou, D. Bhowmik, M. Shanahan, A. K. Seth. 'Activity in perceptual classification networks as a basis for human subjective time perception'. Nat. Commun.10 (2019). (Nat. Commun.)
Short-term memory task
- K. Rajan, C.D. Harvey, D.W. Tank. 'Recurrent Network Models of Sequence Generation and Memory'. Neuron.90(1), 128-142 (2016). (sciencedirect)
- A.E. Orhan, W.J. Ma. 'A diverse range of factors affect the nature of neural representations underlying short-term memory'. Nat. Neurosci. (2019). (Nat. Neurosci.), (bioRxiv), (GitHub)
- N.Y. Masse. et al. 'Circuit mechanisms for the maintenance and manipulation of information in working memory'. Nat. Neurosci. (2019). (Nat. Neurosci.), (bioRxiv)
Language
- J. Chiang, et al. 'Neural and computational mechanisms of analogical reasoning'. (2019). (bioRxiv)
- S. Na, Y.J. Choe, D. Lee, G. Kim. 'Discovery of Natural Language Concepts in Individual Units of CNNs'. ICLR. (2019). (OpenReview), (arXiv)
Language learning
- B.M. Lake, T. Linzen, M. Baroni. 'Human few-shot learning of compositional instructions'. (2019). (arXiv)
- A. Alamia, V. Gauducheau, D. Paisios, R. VanRullen. 'Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning?'. (2019). (arXiv)
Neural network architecture based on neuroscience
Neural network architectures that incorporate insights from neuroscience.
Survey
- D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick. 'Neuroscience-Inspired Artificial Intelligence'. Neuron.95(2), 245-258 (2017).(sciencedirect)
PredNet (Predictive coding network)
- W. Lotter, G. Kreiman, D. Cox. 'Deep predictive coding networks for video prediction and unsupervised learning'. ICLR. (2017). (arXiv, GitHub)
- W. Lotter, G. Kreiman, D. Cox. 'A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception'. (2018). (arXiv)
- E. Watanabe, A. Kitaoka, K. Sakamoto, M. Yasugi, K. Tanaka. 'Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction'. Front. Psychol. (2018). (Front. Psychol.)
- M. Fonseca. 'Unsupervised predictive coding models may explain visual brain representation'. (2019). (arXiv, GitHub)
subLSTM
- R. Costa, Y. Assael, B. Shillingford, N. Freitas, T. Vogels. 'Cortical microcircuits as gated-recurrent neural networks'. NIPS. (2017). (arXiv)
Activation functions
- G.S. Bhumbra. 'Deep learning improved by biological activation functions'. (2018). (arXiv)
Flexible normalization
- L.G.S. Giraldo, O. Schwartz. 'Integrating Flexible Normalization into Mid-Level Representations of Deep Convolutional Neural Networks'. (2018). (arXiv)
Reinforcement Learning
An important area, but I have not been able to keep up with recent work.
- N. Haber, D. Mrowca, L. Fei-Fei, D. Yamins. 'Learning to Play with Intrinsically-Motivated Self-Aware Agents'. NIPS. (2018). (arXiv)
- J. X. Wang, et al. 'Prefrontal cortex as a meta-reinforcement learning system'. Nat. Neurosci. (2018). (Nat. Neurosci.), (bioRxiv), (blog)
- M. Botvinick. et al. 'Reinforcement Learning, Fast and Slow'. Trends. Cogn. Sci. (2019). (Trends. Cogn. Sci.)
- E.O. Neftci, B.B. Averbeck. 'Reinforcement learning in artificial and biological systems'. Nat. Mach. Intell. (2019). (Nat. Mach. Intell.)
Learning and development
Biologically plausible learning algorithms
Backpropagation, the powerful learning algorithm used to train neural networks, cannot be considered biologically plausible. Learning rules that could conceivably be implemented in biological circuits have therefore been proposed (whether they are truly feasible is still debated); a minimal sketch of one such rule, feedback alignment, follows the list below.
- Y. Bengio, D. Lee, J. Bornschein, T. Mesnard, Z. Lin. 'Towards Biologically Plausible Deep Learning'. (2015). (arXiv)
- T. Lillicrap, D. Cownden, D. Tweed, C. Akerman. 'Random synaptic feedback weights support error backpropagation for deep learning'. Nat. Commun.7 (2016). (Nat. Commun.)
- M. Jaderberg, et al. 'Decoupled Neural Interfaces using Synthetic Gradients' (2016). (arXiv)
- B. Scellier, Y. Bengio. 'Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation'. Front. Comput. Neurosci.11(24), (2017). (arXiv)
- N. Ke, A. Goyal, O. Bilaniuk, J. Binas, M. Mozer, C. Pal, Y. Bengio. 'Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding'. NIPS. (2018). (arXiv)
- S. Bartunov, A. Santoro, B. Richards, L. Marris, G. Hinton, T. Lillicrap. 'Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures'. NIPS. (2018). (arXiv)
- J. Sacramento, R. P. Costa, Y. Bengio, W. Senn. 'Dendritic cortical microcircuits approximate the backpropagation algorithm'. NIPS. (2018). (arXiv)
- A. Nøkland, L.H. Eidnes. 'Training Neural Networks with Local Error Signals'. (2019). (arXiv) (GitHub)
- R. Feldesh. 'The Distributed Engram'. (2019). (bioRxiv)
- M. Akrout, C. Wilson, P.C. Humphreys, T.Lillicrap, D. Tweed. 'Deep Learning without Weight Transport'. (2019). (arXiv)
- Y. Amit. 'Deep Learning With Asymmetric Connections and Hebbian Updates'. Front. Comput. Neurosci. (2019). (Front. Comput. Neurosci.). (GitHub)
- B.J. Lansdell, P. Prakash, K.P. Kording. 'Learning to solve the credit assignment problem'. (2019). (arXiv)
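As a concrete example, feedback alignment (Lillicrap et al., 2016, listed above) replaces the transpose of the forward weights with a fixed random matrix when propagating the output error to the hidden layer. The following toy sketch trains a one-hidden-layer network on random regression data; it is only meant to show the update rule, not to reproduce the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 64, 5
W1 = rng.standard_normal((n_hid, n_in)) * 0.1   # forward weights, layer 1
W2 = rng.standard_normal((n_out, n_hid)) * 0.1  # forward weights, layer 2
B = rng.standard_normal((n_hid, n_out))         # fixed random feedback matrix (never learned)

# Toy regression targets produced by a random "teacher" mapping.
teacher = rng.standard_normal((n_out, n_in))
X = rng.standard_normal((500, n_in))
Y = X @ teacher.T

lr = 0.01
for epoch in range(200):
    # Forward pass with a ReLU hidden layer.
    H = np.maximum(0.0, X @ W1.T)
    Y_hat = H @ W2.T
    E = Y_hat - Y                       # output error

    # Backward pass: the hidden error uses the fixed matrix B, not W2.T.
    dH = (E @ B.T) * (H > 0)

    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * dH.T @ X / len(X)

print('final MSE:', float((E ** 2).mean()))
```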
Survey
- J. Whittington, R. Bogacz. 'Theories of Error Back-Propagation in the Brain'. Trends. Cogn. Sci. (2019). (sciencedirect)
- T.P. Lillicrap, A.Santoro. 'Backpropagation through time and the brain'. Curr. Opin. Neurobiol. (2019). (sciencedirect)
Others
- F. Crick. 'The recent excitement about neural networks'. Nature. 337, 129–132 (1989). (Nat.)
Development of neural networks and brains
- J. Shen, M. D. Petkova, F. Liu, C. Tang. 'Toward deciphering developmental patterning with deep neural network'. (2018). (bioRxiv)
- A.M. Saxe, J.L. McClelland, S. Ganguli. 'A mathematical theory of semantic development in deep neural networks'. PNAS. (2019). (arXiv). (PNAS)
- D.V. Raman, A.P. Rotondo, T. O’Leary. 'Fundamental bounds on learning performance in neural circuits'. PNAS. (2019). (PNAS)
Few-shot learning
- A. Cortese, B.D. Martino, M. Kawato. 'The neural and cognitive architecture for learning from a small sample'. Curr. Opin. Neurobiol.55, 133–141 (2019). (sciencedirect)
A Critique of Pure Learning
- A. Zador. 'A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains'. (2019). (bioRxiv)
Brain Decoding & Brain-machine interface
- E. Matsuo, I. Kobayashi, S. Nishimoto, S. Nishida, H. Asoh. 'Generating Natural Language Descriptions for Semantic Representations of Human Brain Activity'. ACL SRW. (2016). (ACL Anthology)
- Y. Güçlütürk, U. Güçlü, K. Seeliger, S.E.Bosch, R.J. van Lier, M.A.J. van Gerven. 'Reconstructing perceived faces from brain activations with deep adversarial neural decoding'. NIPS (2017). (NIPS)
- R. Rao. 'Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces'. (2018). (arXiv)
- G. Shen, T. Horikawa, K. Majima, Y. Kamitani. 'Deep image reconstruction from human brain activity'. PLOS (2019). (PLOS)
Others
- M.S. Goldman. 'Memory without Feedback in a Neural Network'. Neuron (2009). (sciencedirect)
- R. Yuste. 'From the neuron doctrine to neural networks'. Nat. Rev. Neurosci. 16, 487–497 (2015). (Nat. Rev. Neurosci.)
- S. Saxena, J.P. Cunningham. 'Towards the neural population doctrine'. Curr. Opin. Neurobiol. (2019). (sciencedirect)
- D.J. Heeger. 'Theory of cortical function'. PNAS. 114(8), (2017). (PNAS)
Anatomy of a multipolar neuron
A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated.[1] Neural circuits interconnect to one another to form large scale brain networks.[2] Biological neural networks have inspired the design of artificial neural networks, but artificial neural networks are usually not strict copies of their biological counterparts.
Early study
From 'Texture of the Nervous System of Man and the Vertebrates' by Santiago Ramón y Cajal. The figure illustrates the diversity of neuronal morphologies in the auditory cortex.
Early treatments of neural networks can be found in Herbert Spencer's Principles of Psychology, 3rd edition (1872), Theodor Meynert's Psychiatry (1884), William James' Principles of Psychology (1890), and Sigmund Freud's Project for a Scientific Psychology (composed 1895).[3] The first rule of neuronal learning was described by Hebb in 1949, in what is now known as Hebbian theory: pairing of pre-synaptic and post-synaptic activity can substantially alter the dynamic characteristics of the synaptic connection and thereby either facilitate or inhibit signal transmission. In 1959, the neuroscientists Warren Sturgis McCulloch and Walter Pitts published the first works on the processing of neural networks.[4] They showed theoretically that networks of artificial neurons could implement logical, arithmetic, and symbolic functions. Simplified models of biological neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation (i.e., potentials at the post-synaptic membrane summate in the cell body). Later models also provided for excitatory and inhibitory synaptic transmission.
Connections between neurons
Proposed organization of motor-semantic neural circuits for action language comprehension. Gray dots represent areas of language comprehension, creating a network for comprehending all language. The semantic circuit of the motor system, particularly the motor representation of the legs (yellow dots), is incorporated when leg-related words are comprehended. Adapted from Shebani et al. (2013)
The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks. The basic kinds of connections between neurons are chemical and electrical synapses.
The establishment of synapses enables the connection of neurons into millions of overlapping, and interlinking neural circuits. Presynaptic proteins called neurexins are central to this process.[5]
One principle by which neurons work is neural summation – potentials at the postsynaptic membrane will sum up in the cell body. If the depolarization of the neuron at the axon hillock goes above threshold an action potential will occur that travels down the axon to the terminal endings to transmit a signal to other neurons. Excitatory and inhibitory synaptic transmission is realized mostly by excitatory postsynaptic potentials (EPSPs), and inhibitory postsynaptic potentials (IPSPs).
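As a toy numerical illustration of this summation-and-threshold principle (the values are illustrative only, not physiological measurements):

```python
# Toy illustration of neural summation: EPSPs add positive depolarization,
# IPSPs add negative, and a spike occurs only if the summed potential at the
# axon hillock reaches threshold. Values are illustrative, not physiological.
RESTING_MV = -70.0
THRESHOLD_MV = -55.0

def fires(psp_amplitudes_mv):
    """Return True if the summed postsynaptic potentials reach threshold."""
    membrane_mv = RESTING_MV + sum(psp_amplitudes_mv)
    return membrane_mv >= THRESHOLD_MV

print(fires([4.0, 3.0, 5.0]))        # EPSPs alone: -58 mV, below threshold -> False
print(fires([4.0, 3.0, 5.0, 6.0]))   # more excitation: -52 mV -> True
print(fires([10.0, 10.0, -8.0]))     # an IPSP subtracts: -58 mV -> False
```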
On the electrophysiological level, there are various phenomena which alter the response characteristics of individual synapses (called synaptic plasticity) and individual neurons (intrinsic plasticity). These are often divided into short-term plasticity and long-term plasticity. Long-term synaptic plasticity is often contended to be the most likely memory substrate. Usually the term 'neuroplasticity' refers to changes in the brain that are caused by activity or experience.
Connections display temporal and spatial characteristics. Temporal characteristics refer to the continuously modified activity-dependent efficacy of synaptic transmission, called spike-timing-dependent plasticity. It has been observed in several studies that the synaptic efficacy of this transmission can undergo short-term increase (called facilitation) or decrease (depression) according to the activity of the presynaptic neuron. The induction of long-term changes in synaptic efficacy, by long-term potentiation (LTP) or depression (LTD), depends strongly on the relative timing of the onset of the excitatory postsynaptic potential and the postsynaptic action potential. LTP is induced by a series of action potentials which cause a variety of biochemical responses. Eventually, the reactions cause the expression of new receptors on the cellular membranes of the postsynaptic neurons or increase the efficacy of the existing receptors through phosphorylation.
Backpropagating action potentials cannot normally occur because, after an action potential travels down a given segment of the axon, the inactivation (h) gates of the voltage-gated sodium channels close; until they reopen, any transient opening of the activation (m) gates cannot produce an inward sodium (Na+) current, preventing the generation of an action potential back towards the cell body. In some cells, however, neural backpropagation does occur through the dendritic branching and may have important effects on synaptic plasticity and computation.
At a neuromuscular junction, a single signal from a motor neuron is sufficient to stimulate contraction of the postsynaptic muscle cell. In the spinal cord, however, at least 75 afferent neurons are required to produce firing. This picture is further complicated by variation in the time constant between neurons, as some cells can experience their EPSPs over a wider period of time than others.
While synaptic depression has been particularly widely observed at synapses in the developing brain, it has been speculated that this changes to facilitation in adult brains.
Circuitry
Model of a neural circuit in the cerebellum
An example of a neural circuit is the trisynaptic circuit in the hippocampus. Another is the Papez circuit linking the hypothalamus to the limbic lobe. There are several neural circuits in the cortico-basal ganglia-thalamo-cortical loop. These circuits carry information between the cortex, basal ganglia, thalamus, and back to the cortex. The largest structure within the basal ganglia, the striatum, is seen as having its own internal microcircuitry.[6]
Neural circuits in the spinal cord called central pattern generators are responsible for controlling motor instructions involved in rhythmic behaviours. Rhythmic behaviours include walking, urination, and ejaculation. The central pattern generators are made up of different groups of spinal interneurons.[7]
There are four principal types of neural circuits that are responsible for a broad scope of neural functions. These circuits are a diverging circuit, a converging circuit, a reverberating circuit, and a parallel after-discharge circuit.[8]
In a diverging circuit, one neuron synapses with a number of postsynaptic cells. Each of these may synapse with many more, making it possible for one neuron to stimulate up to thousands of cells. This is exemplified in the way that thousands of muscle fibers can be stimulated from the initial input from a single motor neuron.[8]
In a converging circuit, inputs from many sources are converged into one output, affecting just one neuron or a neuron pool. This type of circuit is exemplified in the respiratory center of the brainstem, which responds to a number of inputs from different sources by giving out an appropriate breathing pattern.[8]
A reverberating circuit produces a repetitive output. In a signalling procedure from one neuron to another in a linear sequence, one of the neurons may send a signal back to the initiating neuron. Each time the first neuron fires, the other neuron further down the sequence fires again, sending a signal back to the source. This restimulates the first neuron and also allows the path of transmission to continue to its output. The resulting repetitive pattern stops only if one or more of the synapses fail, or if an inhibitory feed from another source causes it to stop. This type of reverberating circuit is found in the respiratory center, which sends signals to the respiratory muscles, causing inhalation. When the circuit is interrupted by an inhibitory signal the muscles relax, causing exhalation. This type of circuit may play a part in epileptic seizures.[8]
In a parallel after-discharge circuit, a neuron inputs to several chains of neurons. Each chain is made up of a different number of neurons, but their signals converge onto one output neuron. Each synapse in the circuit delays the signal by about 0.5 msec, so the more synapses there are, the longer the delay to the output neuron. After the input has stopped, the output goes on firing for some time. This type of circuit does not have a feedback loop as the reverberating circuit does. Continued firing after the stimulus has stopped is called after-discharge. This circuit type is found in the reflex arcs of certain reflexes.[8]
Study methods
Different neuroimaging techniques have been developed to investigate the activity of neural circuits and networks. The use of 'brain scanners' or functional neuroimaging to investigate the structure or function of the brain is common, either as simply a way of better assessing brain injury with high resolution pictures, or by examining the relative activations of different brain areas. Such technologies may include functional magnetic resonance imaging (fMRI), brain positron emission tomography (brain PET), and computed axial tomography (CAT) scans. Functional neuroimaging uses specific brain imaging technologies to take scans from the brain, usually while a person is performing a particular task, in an attempt to understand how the activation of particular brain areas is related to the task. The methods used include fMRI, which measures hemodynamic activity (via BOLD-contrast imaging) that is closely linked to neural activity, as well as PET and electroencephalography (EEG).
Connectionist models serve as a test platform for different hypotheses of representation, information processing, and signal transmission. Lesioning studies in such models, e.g. artificial neural networks, where parts of the nodes are deliberately destroyed to see how the network performs, can also yield important insights in the working of several cell assemblies. Similarly, simulations of dysfunctional neurotransmitters in neurological conditions (e.g., dopamine in the basal ganglia of Parkinson's patients) can yield insights into the underlying mechanisms for patterns of cognitive deficits observed in the particular patient group. Predictions from these models can be tested in patients or via pharmacological manipulations, and these studies can in turn be used to inform the models, making the process iterative.
Clinical significance
Sometimes neural circuitry can become pathological and cause problems, as in Parkinson's disease when the basal ganglia are involved.[9] Problems in the Papez circuit can also give rise to a number of neurodegenerative disorders, including Parkinson's.
References
- ^ Purves, Dale (2011). Neuroscience (5th ed.). Sunderland, Mass.: Sinauer. p. 507. ISBN 9780878936953.
- ^ 'Neural Circuits | Centre of Excellence for Integrative Brain Function'. Centre of Excellence for Integrative Brain Function. 13 June 2016. Retrieved 4 June 2018.
- ^ Michael S. C. Thomas; James L. McClelland. 'Connectionist models of cognition' (PDF). Stanford University.
- ^ J. Y. Lettvin; H. R. Maturana; W. S. McCulloch; W. H. Pitts (1959), 'What the frog's eye tells the frog's brain', Proc. Inst. Radio Engr. (47), pp. 1940–1951.
- ^ Südhof, TC (2 November 2017). 'Synaptic Neurexin Complexes: A Molecular Code for the Logic of Neural Circuits'. Cell. 171 (4): 745–769. doi:10.1016/j.cell.2017.10.024. PMC 5694349. PMID 29100073.
- ^ Stocco, Andrea; Lebiere, Christian; Anderson, John R. (2010). 'Conditional Routing of Information to the Cortex: A Model of the Basal Ganglia's Role in Cognitive Coordination'. Psychological Review. 117 (2): 541–74. doi:10.1037/a0019077. PMC 3064519. PMID 20438237.
- ^ Guertin, PA (2012). 'Central pattern generator for locomotion: anatomical, physiological, and pathophysiological considerations'. Frontiers in Neurology. 3: 183. doi:10.3389/fneur.2012.00183. PMC 3567435. PMID 23403923.
- ^ Saladin, K. Human Anatomy (3rd ed.). McGraw-Hill. p. 364. ISBN 9780071222075.
- ^ French, IT; Muthusamy, KA (2018). 'A Review of the Pedunculopontine Nucleus in Parkinson's Disease'. Frontiers in Aging Neuroscience. 10: 99. doi:10.3389/fnagi.2018.00099. PMC 5933166. PMID 29755338.
Further reading
- Cudmore, Robert H.; Desai, Niraj S. 'Intrinsic plasticity'. Scholarpedia 3(2): 1363. doi:10.4249/scholarpedia.1363
External links
- Biological Neural Network Toolbox - A free Matlab toolbox for simulating networks of several different types of neurons
- WormWeb.org: Interactive Visualization of the C. elegans Neural Network - C. elegans, a nematode with 302 neurons, is the only organism for which the entire neural network has been mapped. Use this site to browse through the network and to search for paths between any two neurons.
- Introduction to Neurons and Neuronal Networks, Neuroscience Online (electronic neuroscience textbook)