Introduction
While walking on the street we are constantly bombarded with complex sensory stimuli. Learning to navigate such complex environments is of fundamental importance for survival. In the brain, these forms of learning are believed to rely on the orchestrated wiring of synaptic communication between different cortical areas, such as visual and motor cortices [Petreanu et al., 2012; Manita et al., 2015; Makino and Komiyama, 2015; Poort et al., 2015; Zmarz and Keller, 2016; Attinger et al., 2017]. However, how to correctly modify synapses to achieve an appropriate interaction between brain areas has remained an open question. This fundamental issue in learning and development is often referred to as the credit assignment problem [Rumelhart et al., 1986; Sutton and Barto, 1998; Roelfsema and van Ooyen, 2005; Friedrich et al., 2011; Bengio, 2014]. The brain, and artificial neural networks alike, have to determine how best to modify a given synapse across multiple processing stages to ultimately improve global behavioural output.
Machine learning has recently undergone remarkable progress through the use of deep neural networks, leading to human-level performance in a growing number of challenging problems [LeCun et al., 2015]. Key to an overwhelming majority of these achievements has been the backpropagation of errors algorithm (backprop; Rumelhart et al., 1986), which has long been dismissed in neuroscience on the grounds of biological implausibility [Grossberg, 1987; Crick, 1989]. Nonetheless, accumulating evidence highlights the difficulties of simpler learning models and architectures in accurately reproducing cortical activity patterns when compared to deep neural networks, notably ones trained only on sensory data [Yamins et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Yamins and DiCarlo, 2016]. Although recent developments have started to bridge the gap between neuroscience and artificial intelligence [Marblestone et al., 2016; Lillicrap et al., 2016; Costa et al., 2017; Guerguiev et al., 2017], whether the brain implements a backprop-like algorithm remains unclear.

Here we propose that the errors at the heart of backprop are encoded on the distal dendrites of cross-area projecting pyramidal neurons. In our model, these errors arise from a failure to exactly match, via lateral (e.g. somatostatin-expressing, SST) interneurons, the top-down feedback from downstream cortical areas. Synaptic learning is driven by these error-like signals that flow through the dendrites and trigger plasticity on bottom-up connections. Therefore, in contrast to previous approaches [Marblestone et al., 2016], in our framework a given neuron is used simultaneously for activity propagation (at the somatic level), error encoding (at distal dendrites) and error propagation to the soma. Importantly, under certain simplifying assumptions, we were able to formally show that learning in the model approximates backprop.
We first illustrate the different components of the model and then demonstrate its performance by training a multi-area network on associative nonlinear regression and recognition tasks (handwritten digit image recognition, a standard benchmark). We then further extend the framework to consider learning of the top-down synaptic pathway. When coupled with a disinhibitory mechanism, this allows the network to generate prototypes of learnt images as well as perform input denoising. We interpret this disinhibitory mechanism as being implemented through another inhibitory cell type (e.g. vasoactive intestinal peptide-expressing, VIP, interneurons). Finally, we make several experimentally testable predictions concerning the role of dendrites and the different interneuron types involved while an animal learns to associate signals originating from different brain areas.
Results
The cortex exhibits remarkably intricate, yet stereotypical, circuits. Below we describe a plastic cortical circuit model that incorporates two features observed in neocortex: dendritic compartments and distinct cell types. Cross-area synapses onto the dendritic compartments learn to reduce the prediction error between the somatic potential and their own dendritic branch potential. Additionally, lateral synaptic input from local interneurons learns to cancel top-down feedback from downstream brain areas. When a new top-down input arrives at distal dendrites that cannot be matched by lateral inhibition, it signals a neuron-specific error (encoded on the dendritic potential) that triggers synaptic learning at a given pyramidal cell. As learning progresses, the interneurons gradually learn to cancel the new input once again, until eventually learning stops. We show that this cortical circuit implements error backpropagation, and demonstrate its performance on various tasks.
The dendritic cortical circuit learns to predict self-generated top-down input
We first study a generic network model with cortical brain areas (a multilayer network, in machine learning parlance), comprising an input area (representing, for instance, thalamic input to sensory areas), one or more 'hidden' areas (representing secondary sensory and consecutive higher brain areas) and an output brain area (e.g. motor cortex); see schematic in Fig. 1A. Unlike conventional artificial neural networks, hidden neurons feature both bottom-up and top-down connections, thus defining a recurrent network structure. Top-down synapses continuously feed back the next brain area's predictions of a given bottom-up input. Our model uses this feedback to determine corrective error signals and ultimately guide synaptic plasticity across multiple areas.
Building upon previous work [Urbanczik and Senn, 2014], we adopt a simplified multicompartment neuron and describe pyramidal neurons as three-compartment units (schematically depicted in Fig. 1A; see also Methods). These compartments represent the somatic, basal and apical integration zones that characteristically define neocortical pyramidal cells [Spruston, 2008; Larkum, 2013]. The dendritic structure of the model is exploited by having bottom-up and top-down synapses converging onto separate dendritic compartments (basal and distal dendrites, respectively), consistent with experimental observations [Spruston, 2008] and reflecting the preferred connectivity pattern of cortico-cortical projections [Larkum, 2013].
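The compartmental structure just described can be sketched in a few lines of rate-based code. The following is a minimal illustration only, not the paper's implementation: the conductance values, the steady-state expression and the transfer function are all simplifying assumptions (the actual dynamics are defined in the Methods).

```python
import numpy as np

# Minimal sketch of a three-compartment pyramidal neuron. All parameter
# values and the transfer function are illustrative assumptions.
rng = np.random.default_rng(0)

g_lk, g_bas, g_api = 0.1, 1.0, 0.8  # leak, basal and apical conductances (assumed)

def soma_steady_state(v_basal, v_apical):
    # Steady-state somatic potential: a conductance-weighted average of the
    # basal and apical dendritic potentials (leak reversal taken as 0).
    return (g_bas * v_basal + g_api * v_apical) / (g_lk + g_bas + g_api)

def rate(u):
    # Soft-rectifying transfer function mapping somatic voltage to firing rate.
    return np.log1p(np.exp(u))

# Bottom-up input targets the basal compartment, top-down input the apical one.
W_up = rng.normal(scale=0.1, size=(3, 5))    # bottom-up weights (area below -> basal)
W_down = rng.normal(scale=0.1, size=(3, 2))  # top-down weights (area above -> apical)
r_in, r_top = rng.random(5), rng.random(2)

u_soma = soma_steady_state(W_up @ r_in, W_down @ r_top)
r_out = rate(u_soma)
```

The separation into basal and apical arguments is the point of the sketch: the two input streams remain distinguishable at the single-neuron level, which the circuit below exploits.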
Consistent with the neurophysiology of SST interneurons [Urban-Ciecko and Barth, 2016], we also introduce a second population of cells within each hidden area with both lateral and cross-area connectivity, whose role is to cancel the top-down input. Modelled as two-compartment units (depicted in red, Fig. 1A; see also Methods), such interneurons are predominantly driven by pyramidal cells within the same area, and they project back to the apical dendrites of the same-area pyramidal cells (see Fig. 1A). Additionally, cross-area feedback onto SST cells originating at the next higher brain area provides a weak nudging signal for these interneurons, modelled after Urbanczik and Senn (2014) as a conductance-based somatic input current. For computational simplicity, we modelled this weak top-down nudging on a one-to-one basis (an assumption that can be relaxed): each interneuron is nudged towards the potential of a corresponding upper-area pyramidal cell. Recent monosynaptic input mapping experiments show that somatostatin-positive cells (SST, of which Martinotti cells are the main type) do in fact also receive top-down projections [Leinweber et al., 2017], which according to our proposal encode the weak 'teaching' signals from higher to lower brain areas.
As detailed below, this microcircuit is key to encoding and backpropagating errors across the network. We first show how synaptic plasticity of lateral interneuron connections establishes a network regime, which we term self-predicting, whereby lateral input cancels the self-generated top-down feedback, effectively silencing apical dendrites. For this reason, SST cells are functionally inhibitory and are henceforth referred to as interneurons. Crucially, when the circuit is in this self-predicting state, presenting a novel external signal at the output area gives rise to top-down activity that cannot be explained away by the interneuron circuit. Below we show that these apical mismatches between top-down and lateral input constitute the backpropagated, neuron-specific errors that drive plasticity on the forward weights to the hidden pyramidal neurons.
Learning to predict the feedback signals involves adapting the weights both from and to the lateral interneuron circuit. Consider a network that is driven by a succession of sensory input patterns (Fig. 1B, bottom row). The exact distribution of inputs is unimportant as long as they span the whole input space (see SI). Learning to cancel the feedback input is divided between the weights from pyramidal cells to interneurons and the weights from interneurons to pyramidal cells.
First, due to the somatic teaching feedback, learning of the pyramidal-to-interneuron weights leads the interneurons to better reproduce the activity of the respective higher brain area (Fig. 1B (i)). A failure to reproduce the upper-area activity generates an internal prediction error at the dendrites of the interneurons, which triggers synaptic plasticity (as defined by Eq. 8 in the Methods) that corrects for the wrong dendritic prediction and eventually leads to a faithful tracing of the upper-area activity by the lower-area interneurons (Fig. 1B (ii)). The mathematical analysis (see SI, Eq. S27) shows that the plasticity rule (Eq. 8) makes the inhibitory population implement the same function of the lower-area pyramidal cell activity as the upper-area pyramidal neurons do. Thus, the interneurons learn to mimic the upper-area pyramidal neurons (Fig. 1Ci).
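In spirit, this first stage of lateral learning can be illustrated with a rate-based toy simulation. Everything below (transfer function, the way nudging is implemented, all parameter values) is an assumption made for illustration; only the error-correcting structure of the update follows the rule described in the text.

```python
import numpy as np

# Toy illustration of interneuron plasticity in the spirit of Eq. 8: the
# interneuron's dendrite learns to predict its own somatic rate, which is
# weakly nudged towards a matched upper-area pyramidal cell. The nudging
# strength is exaggerated here for fast convergence in this toy setting.
rng = np.random.default_rng(1)

rate = np.tanh                                     # assumed transfer function
n_pyr = 4
W_pi = rng.normal(scale=0.1, size=(n_pyr, n_pyr))  # pyramidal -> interneuron (plastic)
W_up = rng.normal(scale=0.5, size=(n_pyr, n_pyr))  # defines upper-area potentials
eta, lam = 0.1, 0.5                                # learning rate, nudging strength

for _ in range(8000):
    r_pyr = rng.random(n_pyr)                  # lower-area pyramidal rates
    v_dend = W_pi @ r_pyr                      # interneuron dendritic prediction
    u_top = W_up @ r_pyr                       # matched upper-area somatic potentials
    u_soma = (1 - lam) * v_dend + lam * u_top  # weak somatic nudging (teaching)
    # Dendritic prediction error drives learning of the lateral weights.
    W_pi += eta * np.outer(rate(u_soma) - rate(v_dend), r_pyr)

# After learning, the interneurons mimic the upper-area pyramidal neurons.
mismatch = np.abs(W_pi - W_up).max()
```

At the fixed point the dendritic prediction equals the nudged somatic potential for every input, which (for monotone transfer functions) forces the interneuron weights onto the weights that generate the upper-area activity.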
Second, as the interneurons mirror upper-area activity, interneuron-to-pyramidal synapses within the same area (Eq. 9) successfully learn to cancel the top-down input to the apical dendrite (Fig. 1Cii), independently of the actual input stimulus that drives the network. By doing so, the interneuron-to-pyramidal weights learn to mirror the top-down weights onto the lower-area pyramidal neurons. The learning of the weights onto and from the interneurons proceeds in parallel: as the interneurons begin to predict the activity of pyramidal cells in the area above, it becomes possible for the plasticity at interneuron-to-pyramidal synapses (Eq. 9) to find a synaptic weight configuration which precisely cancels the top-down feedback (see also SI, Eq. S29). At this stage, every pattern of activity generated by the hidden areas of the network is explained by the lateral circuitry (Fig. 1C (ii)). Importantly, once learning of the lateral interneurons has converged, the apical input cancellation occurs irrespective of the actual bottom-up sensory input. Therefore, interneuron synaptic plasticity leads the network to a self-predicting state. We propose that the emergence of this state could occur during development, consistent with experimental findings [Dorrn et al., 2010; Froemke, 2015]. Starting from a cross-area self-predicting configuration helps learning of specific tasks (but is not essential; see below and Methods).
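This second stage can be sketched analogously. The toy below assumes the first stage has already converged (the interneurons exactly mirror the upper-area cells); sizes, rates and the resting potential are illustrative.

```python
import numpy as np

# Toy illustration of Eq. 9 in spirit: interneuron-to-pyramidal weights
# adapt to drive the apical compartment towards rest, i.e. to cancel the
# self-generated top-down feedback.
rng = np.random.default_rng(2)

n_hid, n_top = 5, 3
W_down = rng.normal(scale=0.5, size=(n_hid, n_top))  # fixed top-down weights
W_ip = np.zeros((n_hid, n_top))                      # plastic lateral weights
eta, v_rest = 0.2, 0.0

for _ in range(1000):
    r_top = rng.random(n_top)                 # upper-area activity
    r_int = r_top                             # interneurons mirror the upper area
    v_apical = W_down @ r_top + W_ip @ r_int  # top-down plus lateral input
    W_ip += eta * np.outer(v_rest - v_apical, r_int)  # push apical towards rest

# Lateral weights converge to the negative of the top-down weights, so the
# apical compartment is silenced for any input: the self-predicting state.
residual = np.abs(W_ip + W_down).max()
```

Note that the cancellation is a property of the weights, not of a particular stimulus: once the residual is small, the apical dendrite stays silent for arbitrary upper-area activity.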
Deviations from self-predictions encode backpropagating errors
Having established a self-predicting network, we next show how prediction errors are propagated backwards when a novel input is provided to the output area. This new signal, which we model via the activation of additional somatic conductances in output pyramidal neurons (see Methods), plays the role of a teaching or associative signal (see specific tasks below). Here we consider a concrete implementation of the network model introduced above, with an input, a hidden and an output brain area (areas 0, 1 and 2, respectively; Fig. 2A). We demonstrate learning in the model with a simple task: memorizing a single input-output pattern association. This setup naturally generalizes to multiple memories by iterating over a set of associations to be learned.
When the pyramidal cell activity in the output area is nudged towards some desired target (Fig. 2B (i)), the bottom-up synapses from the lower-area neurons to the basal dendrites are adapted, again according to the plasticity rule that implements the dendritic prediction of somatic spiking (see Eq. 7 in the Methods and Urbanczik and Senn, 2014). What these synapses cannot explain away shows up as a dendritic error in the pyramidal neurons of the lower area (area 1). In fact, the self-predicting microcircuit can only cancel the feedback that is produced by the lower-area activity. Due to the unexplained teaching signal in the output area, the top-down input partially survives the lateral inhibition; this leads to the activation of distal dendrites (Fig. 2B (i)). The mathematical analysis reveals that the apical deviation from baseline encodes an error that is effectively backpropagated from the output area.
The somatic integration of apical activity induces plasticity at the bottom-up synapses on the basal dendrites. As described above, plasticity at these synapses too is governed by the dendritic prediction of somatic activity, just as for the synapses onto the interneurons (Eq. 7). As the apical error changes the somatic activity, plasticity of the bottom-up weights acts to further reduce the error in the output area. Importantly, the plasticity rule depends only on information that is available at the synaptic level. More specifically, it is a function of the postsynaptic firing and dendritic branch voltage, as well as of the presynaptic activity, on par with detailed phenomenological models [Clopath et al., 2010; Bono and Clopath, 2017]. In a spiking neuron model, the plasticity rule can reproduce a number of experimental results on spike-timing-dependent plasticity [Spicher et al., in preparation].
In contrast to the establishment of the self-predicting network state, learning now involves the simultaneous modification of both lateral circuit and bottom-up synaptic weights (Fig. 2). On the one hand, lateral weights track changes in output area activity, in this way approximately maintaining the network in a self-predicting state throughout learning. On the other hand, the bottom-up inputs to hidden-area pyramidal neurons adapt to reduce the prediction errors. Altogether, plasticity eventually leads to a network configuration in which the novel top-down input is successfully predicted (Fig. 2B,C).
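Under the simplifying steady-state assumptions of the analysis (a maintained self-predicting lateral circuit and weak output nudging), the learning scheme of this section collapses into a compact update loop. The sketch below is a rate-based caricature with fixed random top-down weights, an assumed tanh transfer function and illustrative parameters, not the full conductance-based model:

```python
import numpy as np

# Caricature of learning a single input-output association (cf. Fig. 2).
# Assumes a self-predicting lateral circuit, so the apical potential equals
# the part of the top-down feedback left unexplained by the interneurons.
rng = np.random.default_rng(3)

phi = np.tanh
dphi = lambda u: 1.0 - np.tanh(u) ** 2

x = rng.random(4)                        # fixed sensory input
u_tgt = np.array([0.5, -0.3])            # teaching potentials at the output
W1 = rng.normal(scale=0.3, size=(3, 4))  # bottom-up, input -> hidden
W2 = rng.normal(scale=0.3, size=(2, 3))  # bottom-up, hidden -> output
B = rng.normal(scale=0.3, size=(3, 2))   # fixed top-down weights
eta, lam = 0.2, 0.1                      # learning rate, nudging strength

for _ in range(10000):
    r1 = phi(W1 @ x)              # hidden rates (basal drive)
    v2 = W2 @ r1                  # output basal potentials
    u2 = v2 + lam * (u_tgt - v2)  # somatic nudging towards the target
    # Apical error: top-down feedback not cancelled by the interneurons.
    v_api = B @ (phi(u2) - phi(v2))
    # Dendritic predictive plasticity at the bottom-up synapses (Eq. 7 in spirit).
    W2 += eta * np.outer(phi(u2) - phi(v2), r1)
    W1 += eta * np.outer(v_api * dphi(W1 @ x), x)

final_error = np.abs(W2 @ phi(W1 @ x) - u_tgt).max()
```

As in the full model, learning is driven entirely by local quantities, and the nudging (and with it the apical error) vanishes as the output approaches the target.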
Cross-area network learns to solve a nonlinear associative task
So far we have described the key components of our model in a multi-area network using a toy problem. We now turn to more challenging problems. The first is a nonlinear associative task, where the network has to learn to associate the sensory input with the output of a separate multi-area network that transforms the same sensory input; this can be recast as a nonlinear regression problem (Fig. 3A; see Methods for details on the architecture and learning conditions).
We let learning occur in continuous time without pauses or alternations in plasticity as input patterns are sequentially presented. This is in contrast to previous learning models that rely on computing activity differences over distinct phases, requiring temporally nonlocal computation or globally coordinated switches of the plasticity rules [Hinton and McClelland, 1988; O'Reilly, 1996; Xie and Seung, 2003; Scellier and Bengio, 2017; Guerguiev et al., 2017]. Furthermore, we relaxed the bottom-up vs. top-down weight symmetry imposed by the backprop algorithm and kept the top-down weights fixed. The forward weights quickly aligned with the fixed top-down weights (up to transposition), in line with the recently discovered feedback alignment phenomenon [Lillicrap et al., 2016]. This simplifies the architecture, because top-down and interneuron-to-pyramidal synapses need not be changed. Finally, to test the robustness of the network, we injected a weak noise current into every neuron, as a simple model of uncorrelated background activity (see Methods). Our network was still able to learn this harder task (Fig. 3B), performing considerably better than a shallow learner where only the output weights were adjusted (Fig. 3C). Useful changes were thus made to the hidden-area bottom-up weights; the network effectively solved the credit assignment problem.
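The feedback alignment effect itself is easy to reproduce in a toy linear network. The sketch below follows Lillicrap et al. (2016) rather than our circuit model: errors are sent backwards through a fixed random matrix rather than the transposed forward weights, and the dimensions, rates and linear teacher are all assumptions.

```python
import numpy as np

# Toy linear demonstration of feedback alignment: a fixed random feedback
# matrix B replaces the transposed forward weights, and training nevertheless
# drives the forward weights into partial agreement with B.
rng = np.random.default_rng(4)

n_in, n_hid, n_out = 8, 6, 2
W1 = rng.normal(scale=0.2, size=(n_hid, n_in))
W2 = rng.normal(scale=0.2, size=(n_out, n_hid))
B = rng.normal(scale=0.2, size=(n_hid, n_out))  # fixed feedback weights
T = rng.normal(scale=0.5, size=(n_out, n_in))   # linear teacher defining the task
eta = 0.01

def alignment(A, C):
    # Cosine similarity between two matrices, flattened.
    return np.sum(A * C) / (np.linalg.norm(A) * np.linalg.norm(C))

align_before = alignment(W2.T, B)
for _ in range(10000):
    x = rng.normal(size=n_in)
    h = W1 @ x
    e = T @ x - W2 @ h              # output error
    W2 += eta * np.outer(e, h)      # delta rule at the output
    W1 += eta * np.outer(B @ e, x)  # error routed through fixed B, not W2.T
align_after = alignment(W2.T, B)
```

Because the hidden representation is shaped by B, the output weights end up correlated with B's transpose, which is what makes the random feedback pathway deliver useful error signals.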
Multi-area network learns to discriminate handwritten digits
Next, we turn to a standard machine learning problem: the classification of handwritten digits from the MNIST database. This data set is widely used to study the performance of learning models, including various artificial neural networks trained with backprop. Notably, shallow models (e.g., logistic regression) and networks trained with plain Hebbian learning alone achieve only poor performance on this task.
We wondered how our model would fare on this real-world benchmark; in particular, whether the prediction errors computed by the interneuron microcircuit would allow learning the weights of a hierarchical nonlinear network with multiple hidden areas. To that end, we trained a deeper, larger four-area network (with 784-500-500-10 pyramidal neurons, Fig. 4A) by pairing digit images with teaching inputs that nudged the 10 output neurons towards the correct class pattern. To speed up the experiments we studied a simplified network dynamics which determined compartmental potentials without requiring a full neuronal relaxation procedure (see Methods). As in the previous experiments, synaptic weights were randomly initialized and then set to a self-predicting configuration in which interneurons cancelled top-down inputs, rendering the apical compartments silent before training started. Top-down and interneuron-to-pyramidal weights were kept fixed.
The network was able to achieve a test error of 1.96% (Fig. 4B), a figure not far from the reference mark of non-convolutional artificial neural networks optimized with backprop (1.53%) and comparable to recently published results that lie within the range 1.6-2.4% [Lee et al., 2015; Lillicrap et al., 2016]. This was possible even though the interneurons had to keep track of changes to the forward weights as they evolved, simultaneously and without phases. Indeed, apical compartment voltages remained approximately silent when output nudging was turned off (data not shown), reflecting the maintenance of a self-predicting state throughout learning. Moreover, thanks to a feedback alignment dynamics [Lillicrap et al., 2016], the interneuron microcircuit was able to translate the feedback from downstream areas into single-neuron prediction error signals, despite the asymmetry of forward and top-down weights and at odds with exact backprop.
Disinhibition enables sensory input generation and sharpening
So far we have assumed that feedback from downstream neurons is relayed through fixed top-down synapses. However, this need not be so. As we demonstrate next, the interneuron microcircuit is capable of dynamically tracking changes to the top-down stream as learning progresses. This endows the model with important additional flexibility, as feedback connections, which are known to mediate attention and perceptual acuity enhancement in sensory cortices, are likely plastic [Huber et al., 2012; Petreanu et al., 2012; Manita et al., 2015; Makino and Komiyama, 2015; Attinger et al., 2017; Leinweber et al., 2017].
As a case in point, we considered a simple extension to a three-area network of 784-1000-10 pyramidal neurons, again exposed to MNIST images (Fig. 5). The architecture is as before, except that we now let dendritic predictive plasticity shape the top-down weights from output to hidden neurons, as well as an extra set of weights connecting hidden neurons back to the input area (see Eq. 10 in the Methods).
In this extended network, top-down synapses learn to predict the activities of the corresponding area below and thus implement an approximate inverse of the forward model. In effect, these connections play a dual role, beyond their sole purpose in backprop: they communicate upper-area activities back to improve the hidden neuron representation on a recognition task, and they learn to invert the induced forward model. This paired encoder-decoder architecture is known as target propagation in machine learning [Bengio, 2014; Lee et al., 2015]. Our compartmental pyramidal neuron model affords a simple design for the inverse learning paradigm: once more, plasticity of top-down synapses is driven by a postsynaptic dendritic error factor, comparing somatic firing with a local branch potential carrying the current top-down estimate.
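The inverse learning principle can be sketched with the same dendritic predictive rule: top-down weights are updated in proportion to the difference between the lower-area activity and the top-down dendritic prediction. The network sizes, transfer function and rates below are illustrative assumptions, not the MNIST configuration.

```python
import numpy as np

# Sketch of top-down (inverse) learning: top-down synapses adapt so that
# their dendritic prediction matches the activity of the area below, thereby
# learning an approximate inverse of the forward mapping.
rng = np.random.default_rng(5)

phi = np.tanh
n_low, n_high = 6, 12
W_up = rng.normal(scale=0.2, size=(n_high, n_low))  # fixed forward weights
G = np.zeros((n_low, n_high))                       # plastic top-down weights
eta = 0.3

for _ in range(20000):
    r_low = rng.random(n_low)   # lower-area activity (bottom-up driven)
    r_high = phi(W_up @ r_low)  # upper-area activity
    v_td = G @ r_high           # top-down dendritic prediction of r_low
    G += eta * np.outer(r_low - v_td, r_high)  # dendritic error drives learning

# The top-down pathway now approximately inverts the forward mapping.
test_err = np.mean([np.abs(G @ phi(W_up @ r) - r).mean()
                    for r in rng.random((200, n_low))])
```

The inverse is only approximate: it is learned on the distribution of activity actually produced by the forward pathway, which is what the generative and denoising experiments below rely on.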
Importantly, our results show that the network still learned to recognize handwritten digits (Fig. 5A), reaching a classification error of 2.48% on the test set. This again highlights that transposed forward weight matrices, as prescribed by backprop, are not the only way to deliver useful error signals to hidden areas. In this experiment, we initialized every weight matrix randomly and independently, and did not pre-learn the lateral circuit weights. Although forward, top-down and lateral weights were all jointly adapted starting from random initial conditions, a self-predicting state quickly ensued, leading to a drop in classification error. Concomitantly, the reconstructions of hidden neuron activities and input images improved (Fig. 5B).
The learned inverse model can be used to generate prototypical digit images in the input area. We examined its performance qualitatively by directly inspecting the produced images. Specifically, for each digit class we performed a top-to-bottom pass with lateral inhibition turned off, starting from the corresponding class pattern. For simplicity, we disabled basal feedforward inputs as well, to avoid recurrent effects (see Methods). This procedure yielded prototype reconstructions which resemble natural handwritten digits (Fig. 5C), confirming the observed decrease in reconstruction loss.
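The role of the inhibition switch in this top-to-bottom pass can be made concrete with a few lines. The weights and activities below are random placeholders; only the on/off logic of the lateral cancellation is the point.

```python
import numpy as np

# Sketch of the disinhibitory switch: with lateral inhibition on, apical
# dendrites encode prediction errors (top-down minus cancellation); with the
# interneurons disinhibited, apical dendrites are driven by the top-down
# input alone, turning the pathway into a generator.
rng = np.random.default_rng(6)

n_hidden, n_top = 5, 3
G = rng.normal(scale=0.5, size=(n_hidden, n_top))  # learned top-down weights
W_ip = -G                                          # self-predicting lateral weights

def apical(r_top, r_int, inhibition_on):
    v = G @ r_top                 # top-down drive
    if inhibition_on:
        v = v + W_ip @ r_int      # lateral cancellation by interneurons
    return v

r_top = rng.random(n_top)
r_int = r_top  # interneurons mirror the upper area in the self-predicting state

v_error_mode = apical(r_top, r_int, inhibition_on=True)   # silent: no mismatch
v_generative = apical(r_top, r_int, inhibition_on=False)  # full top-down drive
```

In the error mode the apical potential is zero for self-generated feedback; in the disinhibited mode it carries the top-down prediction itself, which can then be passed further down to the input area.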
Crucially, for the network to be able to generate images, the apical dendrites of hidden neurons should be fully driven by their top-down inputs. In terms of our microcircuit implementation, this is achieved by momentarily disabling the contributions from lateral interneurons. A switch-like disinhibition [Pi et al., 2013] is thus capable of turning apical dendrites from error-signalling devices into regular prediction units: the generative mode corresponds to a disinhibited mode of operation. Owing to their preferential targeting of SST interneurons, VIP interneurons are likely candidates to implement this switch.
Recent reports support the view that cortico-cortical feedback to distal dendrites plays an active role as mice engage in perceptual discrimination tasks [Manita et al., 2015; Makino and Komiyama, 2015; Takahashi et al., 2016]. Inspired by these findings, we further tested the capabilities of the model in a visual denoising task, where the prior knowledge incorporated in the top-down network weights is leveraged to improve perception. In Fig. 5D, we show the reconstructions obtained after presenting randomly picked MNIST images from the test set that had been corrupted with additive Gaussian noise. We show only the apical predictions carried by top-down inputs back to sensory area 0, without actually changing area 0 activity. Interestingly, we found that the hidden neuron representations shaped by classification errors served as reasonable visual features for the inverse model as well. Most of the noise was filtered out, although some of the finer details of the original images were lost in the process.
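The denoising effect can be illustrated abstractly: if inputs live on a low-dimensional manifold and the top-down weights have learned to invert the forward mapping, a forward-then-backward pass projects a corrupted input back towards that manifold. The sketch below is linear and uses a deliberately convenient choice of forward weights (a scaled transpose of the manifold basis); all of it is an illustrative assumption, not the trained MNIST network.

```python
import numpy as np

# Linear toy of top-down denoising: clean inputs live on a 4-dimensional
# subspace of a 20-dimensional input area. Top-down weights G are trained
# with the dendritic predictive rule to reconstruct inputs from the hidden
# representation; noise outside the subspace is then filtered out.
rng = np.random.default_rng(7)

d_in, d_hid = 20, 4
A = rng.normal(size=(d_in, d_hid))  # basis of the input manifold (toy "digits")
W = A.T / d_in                      # fixed forward weights (assumed, for convenience)
G = np.zeros((d_in, d_hid))         # plastic top-down weights
eta = 0.02

for _ in range(5000):
    x = A @ rng.normal(size=d_hid)     # clean input sample
    h = W @ x                          # hidden representation
    G += eta * np.outer(x - G @ h, h)  # dendritic predictive rule

# Corrupt a test input with additive Gaussian noise and reconstruct it.
x = A @ rng.normal(size=d_hid)
x_noisy = x + rng.normal(scale=0.5, size=d_in)
x_rec = G @ (W @ x_noisy)

err_noisy = np.linalg.norm(x_noisy - x)
err_rec = np.linalg.norm(x_rec - x)
```

Only the noise component lying inside the learned subspace survives the reconstruction, which mirrors the observation that coarse structure is recovered while some fine detail is lost.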
Discussion
How the brain successfully assigns credit and modifies specific synaptic connections given a global associative signal has puzzled neuroscientists for decades. Here we have introduced a novel framework in which a single neuron is capable of transmitting predictions as well as prediction errors. These neuron-specific errors are encoded at distal dendrites and are kept in check by lateral (e.g. somatostatin-expressing) interneurons. Local synaptic plasticity mechanisms then use such dendritically encoded prediction errors to correctly adjust synapses. We have shown that these simple principles allow networks of multiple areas to successfully adjust their weights in challenging tasks, and that this form of learning approximates the well-known backpropagation of errors algorithm.
Experimental predictions
Because our model touches on a number of levels, from brain areas to microcircuits, dendritic compartments and synapses, it makes several predictions. Here we highlight some of these predictions and related experimental observations.
(1) Dendritic error representation. Probably the most fundamental feature of the model is that dendrites, in particular distal dendrites, encode error signals that instruct learning of lateral and downstream connections. This means that during a task that requires an association between two brain areas to develop, lateral interneurons would modify their synaptic weights such that the top-down signals are cancelled. Moreover, during learning, or if this association is broken, a dendritic error signal should be observed. While monitoring such dendritic signals during learning is challenging, there is recent experimental evidence that supports this model. Mice were trained in a simple visuomotor task in which the visual flow presented to the animal in a virtual environment was coupled to its own movement [Zmarz and Keller, 2016; Attinger et al., 2017]. When this coupling was broken (by stopping the visual flow), mismatch signals were observed in pyramidal cells, consistent with the prediction error signals predicted by our model.
(2) Lateral inhibition of apical activity. Our apical error representation is based on lateral inhibitory feedback to the distal dendritic compartments of pyramidal cells. There is evidence for top-down feedback targeting the distal (layer-1) synapses of both layer-2/3 and layer-5 pyramidal cells [Petreanu et al., 2009], and both cell types have lateral somatostatin interneurons which target the distal dendrites of the respective pyramidal cells [Markram et al., 2004]. The cancellation of the feedback provided by somatostatin interneurons should be near-exact in both its magnitude and its delay. In the brain, there can be a substantial delay between the lateral excitatory input and the feedback from other brain areas (on the order of tens to hundreds of milliseconds [Cauller and Kulics, 1991; Larkum, 2013]), suggesting that the lateral inhibitory interaction mediated by SST cells should also be delayed and tuned to the feedback. Interestingly, there is strong experimental support for a delayed inhibition mediated by pyramidal-to-SST connections [Silberberg and Markram, 2007; Murayama et al., 2009; Berger et al., 2009; Berger et al., 2010], which could in principle be tuned to match both the delay and the magnitude of the feedback signal. Moreover, the spontaneous activity of SST interneurons is relatively high [Urban-Ciecko and Barth, 2016], which again is consistent with our model, as SST interneurons need to constantly match the top-down input received by neighbouring pyramidal cells. We predict that the spontaneous firing rates of SST interneurons should match the level of feedback received by the pyramidal cells they target. In addition, our model predicts the need for a weak top-down input onto SST interneurons. Again, this is in line with recent top-down connectivity studies suggesting that SST cells can indeed provide such a precise cancellation of the top-down excitatory inputs [Zhang et al., 2014; Leinweber et al., 2017].
(3) Hierarchy of prediction errors. A further implication of our multi-area learning model is that a high-level prediction error occurring at some higher cortical area implies that lower-level prediction errors co-occur at earlier areas. For instance, a categorization error occurring when a visual object is misclassified would be signalled backwards through our interneuron circuits to lower areas, where the individual visual features of the objects are represented. Recent experimental observations in the macaque face-processing hierarchy support this view [Schwiedrzik and Freiwald, 2017]. We predict that higher-area activity modulates lower-area activity with the purpose of shaping synaptic plasticity at these lower areas.
Here we have focused on the role of SST cells as feedback-specific interneurons. There are many more interneuron types that we do not consider in our framework. One such type are the PV (parvalbumin-positive) cells, which have been postulated to mediate a somatic excitation-inhibition balance [Vogels et al., 2011; Froemke, 2015] and competition [Masquelier and Thorpe, 2007; Nessler et al., 2013]. These functions could in principle be combined with the framework introduced here; alternatively, as we suggest below, PV interneurons may be involved in representing yet another type of prediction error, different from the classification errors considered so far. VIP (vasoactive intestinal peptide-expressing) interneurons, which are believed to be engaged in cortical disinhibition [Letzkus et al., 2015], are assumed in our framework to switch the circuit between the discriminative mode and a local attention mode, in which lower-area activity is generated out of higher-area activity (see Fig. 5).
We have focused on an interpretation of our predictive microcircuits as learning across brain areas, but they may also be interpreted as learning across different groups of pyramidal cells within the same brain area.
Comparison to previous approaches
It has been suggested that error backpropagation could be approximated by an algorithm that requires alternating between two learning phases, known as contrastive Hebbian learning [Ackley et al., 1985]. This link between the two algorithms was first established for an unsupervised learning task [Hinton and McClelland, 1988] and later analyzed [Xie and Seung, 2003] and generalized to a broader class of models [O'Reilly, 1996; Scellier and Bengio, 2017]. The two phases needed for contrastive Hebbian learning are: (i) for each input pattern, the network first has to settle while being driven solely by inputs; then, (ii) the process is repeated while additionally driving outputs towards a desired target state. Learning requires subtracting the activity patterns recorded in each phase, and therefore requires storing activity in time, or changing plasticity rules across the network in a coordinated, phase-dependent manner, which appears to be biologically implausible.

Two-phase learning recently reappeared in a study which, like ours, uses compartmental neurons [Guerguiev et al., 2017]. In this more recent work, the difference between the activity of the apical dendrite in the presence and in the absence of the teaching input represents the error that induces plasticity at the forward synapses. This error is used directly for learning the bottom-up synapses without influencing the somatic activity of the pyramidal cell. In contrast, we postulate that the apical dendrite has an explicit error representation at every moment in time, obtained by simultaneously integrating top-down excitation and lateral inhibition. As a consequence, we do not need to postulate separate temporal phases, and our network operates continuously in time while plasticity at all synapses is always turned on.
The solution proposed here to avoid two-phase learning relies on a plastic microcircuit that provides functional lateral inhibition. All the involved plasticity rules are error-correcting in spirit and can be understood as learning to match a certain target voltage. For the synapses from the interneurons to the apical dendrites of the pyramidal neurons, the postsynaptic target is the resting potential, and hence the (functionally) inhibitory plasticity rule can be seen as achieving a dendritic balance, similar to the homeostatic balance previously suggested for inhibitory synaptic plasticity [Vogels et al., 2011; Luz and Shamir, 2012]. Yet, in our model, inhibitory plasticity plays a central role in multi-area, deep error coding, which goes beyond the standard view of inhibitory plasticity as a homeostatic stabilizing force [Keck et al., 2017].
Error minimization is an integral part of brain function according to predictive coding theories [Rao and Ballard1999, Friston2005], and backprop can be mapped onto a predictive coding network architecture [Whittington and Bogacz2017]. From a formal point of view this approach is encompassed by the framework introduced by LeCun1988. The network implementation suggested by Whittington and Bogacz2017, however, requires intricate circuitry with appropriately tuned error-representing neurons. According to that model, the only plastic synapses are those that connect prediction and error neurons.
We built upon the previous observation that top-down and bottom-up weights need not be in perfect agreement to enable multi-area error-driven learning [Lee et al.2015, Lillicrap et al.2016]. Consistent with these findings, the strict weight symmetry required by the classical error backpropagation algorithm is not needed in our case either for successful learning in hidden-area neurons.
We have also shown that top-down synapses can be learned using the same dendritic predictive learning rule used at the remaining connections. In our model, the top-down connections play a dual role: they are involved in the apical error representation, and they learn to match the somatic firing driven by the bottom-up input [Urbanczik and Senn2014]. The simultaneous learning of the bottom-up and top-down pathways leads to the formation of a generative network that can denoise sensory input or generate dream-like inputs (Fig. 5).
Finally, the framework introduced here could also be adapted to other types of error-based learning, such as generative models that, instead of learning to discriminate sensory inputs, learn to generate samples that follow the sensory input statistics. Error propagation in these forms of generative models, where errors arise from inaccurate predictions of sensory inputs, may rely on different dendritic compartments and interneurons, such as the previously mentioned PV inhibitory cells [Petreanu et al.2009].
Acknowledgements
The authors would like to thank Timothy P. Lillicrap, Blake Richards, Benjamin Scellier and Mihai A. Petrovici for helpful discussions. WS thanks Matthew Larkum for many discussions on dendritic processing and the option of dendritic error representation. In addition, JS thanks Elena Kreutzer, Pascal Leimer and Martin T. Wiechert for valuable feedback and critical reading of the manuscript.
This work has been supported by the Swiss National Science Foundation (grant 310030L156863, WS) and the Human Brain Project.
Methods
Neuron and network model. The somatic membrane potentials of pyramidal neurons and interneurons evolve in time according to

(1) $\dot{\mathbf{u}}^P_k = -g_{lk}\,\mathbf{u}^P_k + g_B\left(\mathbf{v}^P_{B,k} - \mathbf{u}^P_k\right) + g_A\left(\mathbf{v}^P_{A,k} - \mathbf{u}^P_k\right) + \sigma\,\boldsymbol{\xi}$

(2) $\dot{\mathbf{u}}^I_k = -g_{lk}\,\mathbf{u}^I_k + g_D\left(\mathbf{v}^I_k - \mathbf{u}^I_k\right) + \mathbf{i}^I_k + \sigma\,\boldsymbol{\xi}$

with one such pair of dynamical equations for every hidden area $k$, $0 < k < N$; input area neurons are indexed by $k = 0$. Eqs. 1 and 2 describe standard conductance-based voltage integration dynamics, having set membrane capacitance to unity and resting potential to zero for clarity purposes. Background activity is modelled as a Gaussian white noise input, $\sigma\,\boldsymbol{\xi}$ in the equations above. To keep the exposition brief we use matrix notation, and denote by $\mathbf{u}^P_k$ and $\mathbf{u}^I_k$ the vectors of pyramidal and interneuron somatic voltages, respectively. Both matrices and vectors, assumed column vectors by default, are typed in boldface here and throughout.
As described in the main text, hidden pyramidal neurons are modelled as three-compartment neurons to explicitly incorporate basal and apical dendritic integration zones, inspired by the design of L2/3 pyramidal cells. The two dendritic compartments are coupled to the soma with effective transfer conductances $g_B$ and $g_A$, respectively. Compartmental potentials are given in instantaneous form by

(3) $\mathbf{v}^P_{B,k} = \mathbf{W}^{PP}_{k,k-1}\,\phi(\mathbf{u}^P_{k-1})$

(4) $\mathbf{v}^P_{A,k} = \mathbf{W}^{PP}_{k,k+1}\,\phi(\mathbf{u}^P_{k+1}) + \mathbf{W}^{PI}_{k,k}\,\phi(\mathbf{u}^I_k)$

where $\phi$ is the neuronal transfer function, which acts componentwise on $\mathbf{u}$.
Although the design can be extended to more complex morphologies, in the framework of dendritic predictive plasticity two compartments suffice to compare a desired target with the actual prediction. Hence, aiming for simplicity, we reduce pyramidal output neurons to two-compartment cells, essentially following Urbanczik2014; the apical compartment is absent ($g_A = 0$ in Eq. 1) and basal voltages are as defined in Eq. 3. Synapses proximal to the somata of output neurons provide direct external teaching input, incorporated as an additional source of current $\mathbf{i}^P_N$. For any given such neuron, excitatory and inhibitory conductance-based input generates a somatic current $i = g_{\rm exc}\,(E_{\rm exc} - u) + g_{\rm inh}\,(E_{\rm inh} - u)$, where $E_{\rm exc}$ and $E_{\rm inh}$ are the excitatory and inhibitory synaptic reversal potentials, respectively. The point at which no current flows, $u_{\rm trgt} = (g_{\rm exc} E_{\rm exc} + g_{\rm inh} E_{\rm inh})/(g_{\rm exc} + g_{\rm inh})$, defines the target teaching voltage towards which the neuron is nudged.
Interneurons are similarly modelled as two-compartment cells, cf. Eq. 2. Lateral dendritic projections from neighboring pyramidal neurons provide the main source of input,

(5) $\mathbf{v}^I_k = \mathbf{W}^{IP}_{k,k}\,\phi(\mathbf{u}^P_k),$

whereas cross-area, top-down synapses define the teaching current $\mathbf{i}^I_k$. Specifically, an interneuron at area $k$ receives private somatic teaching excitatory and inhibitory input from a pyramidal neuron at area $k+1$, balanced according to the teaching conductances, with a constant scale factor $\lambda$ denoting overall nudging strength; with this setting, the interneuron is nudged to follow the corresponding next-area pyramidal neuron.
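The compartmental dynamics above (Eqs. 1-5) can be sketched in a few lines of Python. All variable names and the conductance values below are our own illustration, and the interneuron somatic teaching current is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(u):
    """Soft rectifying transfer function, phi(u) = log(1 + exp(u))."""
    return np.logaddexp(0.0, u)

# conductances (symbol names and values are illustrative, not the paper's)
g_lk, g_B, g_A, g_D = 0.1, 1.0, 0.8, 1.0

def relax_step(u_P, u_I, r_below, W_up, W_down, W_PI, W_IP,
               u_above, dt=0.1, sigma=0.0):
    """One Euler step of the somatic dynamics of a single hidden area.

    u_P, u_I: somatic potentials of pyramidal cells and interneurons here;
    r_below: firing rates of the area below; u_above: pyramidal somatic
    potentials of the area above (source of top-down input).
    """
    v_B = W_up @ r_below                           # basal: bottom-up drive
    v_A = W_down @ phi(u_above) + W_PI @ phi(u_I)  # apical: top-down + lateral
    v_I = W_IP @ phi(u_P)                          # interneuron dendrite: lateral
    du_P = -g_lk * u_P + g_B * (v_B - u_P) + g_A * (v_A - u_P)
    du_I = -g_lk * u_I + g_D * (v_I - u_I)         # teaching current omitted
    noise = sigma * np.sqrt(dt)
    u_P = u_P + dt * du_P + noise * rng.standard_normal(u_P.shape)
    u_I = u_I + dt * du_I + noise * rng.standard_normal(u_I.shape)
    return u_P, u_I, v_B, v_A, v_I
```

Iterating `relax_step` under a stationary input relaxes the area to its steady state; with top-down and lateral inputs silenced, the soma settles at the attenuated basal prediction, g_B/(g_lk+g_B+g_A) times v_B.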
Synaptic plasticity. Our synaptic weight update rules belong to the class of dendritic predictive plasticity rules [Urbanczik and Senn2014, Spicher et al.in preparation] that can be expressed in general form as

(6) $\frac{dw}{dt} = \eta\,\big(\phi(u) - \phi(v)\big)\,h(v)\,r$

where $w$ is an individual synaptic weight, $\eta$ is a learning rate, $u$ and $v$ denote distinct compartmental potentials, $\phi$ is a rate function, the third factor $h$ is a function of potential $v$, and $r$ is the presynaptic input. Eq. 6 was originally derived in the light of reducing the prediction error of somatic spiking, when $u$ represents the somatic potential and $v$ is a function of the postsynaptic dendritic potential.
Concretely, the plasticity rules for the various connection types present in the network are:

(7) $\frac{d\mathbf{W}^{PP}_{k,k-1}}{dt} = \eta^{PP}_{k,k-1}\,\big(\phi(\mathbf{u}^P_k) - \phi(\hat{\mathbf{v}}^P_{B,k})\big)\,(\mathbf{r}^P_{k-1})^\top$

(8) $\frac{d\mathbf{W}^{IP}_{k,k}}{dt} = \eta^{IP}_{k,k}\,\big(\phi(\mathbf{u}^I_k) - \phi(\hat{\mathbf{v}}^I_k)\big)\,(\mathbf{r}^P_k)^\top$

(9) $\frac{d\mathbf{W}^{PI}_{k,k}}{dt} = \eta^{PI}_{k,k}\,\big(\mathbf{v}_{\rm rest} - \mathbf{v}^P_{A,k}\big)\,(\mathbf{r}^I_k)^\top$

where $^\top$ denotes vector transpose and $\mathbf{r}_k = \phi(\mathbf{u}_k)$ the area $k$ firing rates. The strengths of plastic synapses thus evolve according to the correlation of a dendritic prediction error with the presynaptic rate, and can undergo both potentiation and depression depending on the sign of the first factor.
For basal synapses, this prediction error factor amounts to a difference between the postsynaptic rate and a local dendritic estimate which depends on the branch potential. In Eqs. 7 and 8, the dendritic predictions $\hat{\mathbf{v}}^P_{B,k} = \frac{g_B}{g_{lk}+g_B+g_A}\,\mathbf{v}^P_{B,k}$ and $\hat{\mathbf{v}}^I_k = \frac{g_D}{g_{lk}+g_D}\,\mathbf{v}^I_k$ take into account dendritic attenuation factors. Meanwhile, plasticity rule (9) of lateral interneuron-to-pyramidal synapses aims to silence (i.e., set to the resting potential $\mathbf{v}_{\rm rest}$, here and throughout null for simplicity) the apical compartment; this introduces an attractive state for learning where the contribution from interneurons balances top-down dendritic input. The learning rule of apical-targeting synapses can thus be thought of as a dendritic variant of the homeostatic inhibitory plasticity proposed by Vogels2011.
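The three rules reduce to outer products of a dendritic prediction error with the presynaptic rate vector. A minimal sketch, with attenuation factors, conductance values, and learning-rate names being our own illustration:

```python
import numpy as np

def phi(u):
    """Rate function (soft rectifier)."""
    return np.logaddexp(0.0, u)

# dendritic attenuation factors derived from illustrative conductances
g_lk, g_B, g_A, g_D = 0.1, 1.0, 0.8, 1.0
lam_B = g_B / (g_lk + g_B + g_A)   # basal attenuation, pyramidal neurons
lam_D = g_D / (g_lk + g_D)         # dendritic attenuation, interneurons

def plasticity_updates(u_P, u_I, v_B, v_I, v_A, r_below, r_I, eta):
    """Dendritic predictive plasticity: prediction error x presynaptic rate."""
    # Eq. 7: bottom-up synapses onto pyramidal basal dendrites
    dW_up = eta['PP'] * np.outer(phi(u_P) - phi(lam_B * v_B), r_below)
    # Eq. 8: pyramidal-to-interneuron synapses
    dW_IP = eta['IP'] * np.outer(phi(u_I) - phi(lam_D * v_I), phi(u_P))
    # Eq. 9: interneuron-to-apical synapses, driving the apical
    # compartment toward its resting potential (zero here)
    dW_PI = eta['PI'] * np.outer(0.0 - v_A, r_I)
    return dW_up, dW_IP, dW_PI
```

When each somatic potential equals its attenuated dendritic prediction and the apical compartment is at rest, all three updates vanish: plasticity is silent in the fully predicted state.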
In the experiments where the top-down connections are plastic (cf. Fig. 5), these weights evolve according to

(10) $\frac{d\mathbf{W}^{PP}_{k,k+1}}{dt} = \eta^{PP}_{k,k+1}\,\big(\phi(\mathbf{u}^P_k) - \phi(\hat{\mathbf{v}}^{TD}_k)\big)\,(\mathbf{r}^P_{k+1})^\top$

with the attenuated distal prediction $\hat{\mathbf{v}}^{TD}_k$ defined analogously to the basal case. An implementation of this rule requires a subdivision of the apical compartment into a distal part receiving the top-down input (with voltage $\mathbf{v}^{TD}_k$) and a more proximal part receiving the lateral input from the interneurons.
Nonlinear function approximation task. In Fig. 3, a pyramidal neuron network learns to approximate a random nonlinear function implemented by a held-aside feedforward network of the same (30-20-10) dimensions; this ensures that the target function is realizable. One teaching example consists of a randomly drawn input pattern assigned to the corresponding teacher-generated target. Teacher network weights and input pattern entries are sampled from a uniform distribution. We choose a soft rectifying nonlinearity as the neuronal transfer function, $\phi(u) = \log(1 + e^u)$. The pyramidal neuron network is initialized to a self-predicting state in which the lateral microcircuit weights match the corresponding forward weights (cf. Eqs. S1 and S2). The top-down weight matrix is fixed and set at random, with entries drawn from a uniform distribution. Output area teaching currents are set so as to nudge the output neurons towards the teacher-generated targets. Reported error curves are exponential moving averages of the sum of squared errors loss, computed after every example on unseen input patterns. Plasticity induction terms given by the right-hand sides of Eqs. 7-9 are low-pass filtered before being consolidated, to dampen fluctuations; synaptic plasticity is kept on throughout. Plasticity and neuron model parameters are given in the accompanying supplementary material.
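The teacher setup can be sketched as follows. The uniform range [-1, 1] and the application of the output nonlinearity are our assumptions; only the distribution family and dimensions are stated above:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(u):
    """Soft rectifying transfer function, phi(u) = log(1 + exp(u))."""
    return np.logaddexp(0.0, u)

dims = [30, 20, 10]  # teacher network dimensions, matching the learner
# teacher weights and inputs drawn from a uniform distribution
# (the range [-1, 1] is an assumption; only "uniform" is stated)
W_teach = [rng.uniform(-1.0, 1.0, size=(n_out, n_in))
           for n_in, n_out in zip(dims[:-1], dims[1:])]

def teacher(x):
    """Feedforward pass through the held-aside teacher network."""
    for W in W_teach:
        x = phi(W @ x)
    return x

x_pattern = rng.uniform(-1.0, 1.0, size=dims[0])  # one teaching input
target = teacher(x_pattern)                        # corresponding target
```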
MNIST image classification and reconstruction tasks. When simulating the larger models used on the MNIST data set we resort to discrete-time network dynamics, where the compartmental potentials are updated in two steps before applying synaptic changes.
The simplified model dynamics is as follows. For each presented MNIST image, both pyramidal neurons and interneurons are first initialized to their bottom-up prediction state (3), starting from the first area up to the top area $N$. Output area neurons are then nudged towards their desired target, yielding updated somatic potentials. To obtain the remaining final compartmental potentials, the network is revisited in reverse order, proceeding from area $N-1$ down to the first. For each hidden area, interneurons are first updated to include the top-down teaching signals; this yields the apical compartment potentials according to (4), after which we update hidden area somatic potentials as a convex combination with a mixing factor. The convex combination factors introduced above are directly related to neuron model parameters as conductance ratios. Synaptic weights are then updated according to Eqs. 7-10.
Such simplified dynamics approximates the full recurrent network relaxation in the deterministic setting $\sigma = 0$, with the approximation improving as the top-down dendritic coupling is decreased, $g_A \to 0$.
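A minimal sketch of this two-pass update for a single-hidden-area network follows. The mixing-factor names (`lam_B`, `lam_A`, `lam_I`, `lam_out`) and the exact way they enter the combinations are our reading of the description above, not the paper's stated parameterization:

```python
import numpy as np

def phi(u):
    """Logistic transfer function, as used in the MNIST experiments."""
    return 1.0 / (1.0 + np.exp(-u))

def two_step_update(x, target, Ws_up, Ws_down, Ws_PI, Ws_IP,
                    lam_B=0.5, lam_A=0.1, lam_I=0.1, lam_out=0.1):
    """Simplified discrete-time dynamics: a bottom-up pass followed by a
    reverse top-down pass (a sketch; parameterization is our assumption)."""
    # 1) bottom-up pass: initialize every area to its bottom-up prediction
    u = [x]
    for W in Ws_up:
        u.append(lam_B * (W @ phi(u[-1])))
    # nudge the output area toward the desired target
    u[-1] = (1.0 - lam_out) * u[-1] + lam_out * target
    u_I, v_A = [], []
    # 2) top-down pass, from the area below the output down to the first
    for k in range(len(Ws_up) - 1, 0, -1):
        vI_hat = lam_B * (Ws_IP[k - 1] @ phi(u[k]))     # interneuron prediction
        uI = (1.0 - lam_I) * vI_hat + lam_I * u[k + 1]  # nudged toward area above
        vA = Ws_down[k - 1] @ phi(u[k + 1]) + Ws_PI[k - 1] @ phi(uI)
        u[k] = u[k] + lam_A * vA                        # mix in apical correction
        u_I.append(uI)
        v_A.append(vA)
    return u, u_I, v_A
```

In a self-predicting configuration (lateral weights matching the forward ones, interneuron-to-pyramidal weights equal to minus the top-down weights) and without output nudging, the apical potentials vanish and the pass reduces to a plain feedforward computation.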
We train the models on the standard MNIST handwritten digit database, further splitting the training set into 55000 training and 5000 validation examples. The reported test error curves are computed on the 10000 held-aside test images. The four-area network shown in Fig. 4 is initialized in a self-predicting state with appropriately scaled initial weight matrices. To speed up training, we use a mini-batch strategy for every learning rule, whereby weight changes are averaged over 10 images before being consolidated. We take the neuronal transfer function to be a logistic function, and include a learnable threshold on each neuron, modelled as an additional input fixed at unity with a plastic weight. Desired target class vectors are one-hot coded. During testing, the output is determined by picking the class label corresponding to the neuron with the highest firing rate. Model parameters are given in full in the supplementary material.
To generate digit prototypes as shown in Fig. 5C, the network is run feedforward in a top-to-bottom fashion: a pass of pyramidal neuron activations is performed while disabling the feedforward stream as well as the negative lateral contributions of the interneurons. For this reason, this mode of recall is referred to in the main text as the disinhibited mode. The output area is initialized to the one-hot-coded pattern corresponding to the desired digit class.
The denoised images shown in Fig. 5D are the top-down predictions obtained after presenting randomly selected digit examples from the test set, corrupted with additive Gaussian noise. The network states are determined by the two-step procedure described above. Recurrent effects are therefore ignored, as a single backward step is performed.

Computer code. For the first series of experiments (Figs. 1-3) we wrote custom Mathematica (Wolfram Research, Inc.) code. The larger MNIST networks (Figs. 4 and 5) were simulated in Python using the TensorFlow framework.
References

 [Ackley et al.1985] Ackley DH, Hinton GE, Sejnowski TJ (1985) A learning algorithm for Boltzmann machines. Cognitive Science 9:147–169.
 [Attinger et al.2017] Attinger A, Wang B, Keller GB (2017) Visuomotor coupling shapes the functional development of mouse visual cortex. Cell 169:1291–1302.e14.
 [Bengio2014] Bengio Y (2014) How auto-encoders could provide credit assignment in deep networks via target propagation. arXiv:1407.7906.
 [Berger et al.2009] Berger TK, Perin R, Silberberg G, Markram H (2009) Frequency-dependent disynaptic inhibition in the pyramidal network: a ubiquitous pathway in the developing rat neocortex. The Journal of Physiology 587:5411–5425.
 [Berger et al.2010] Berger TK, Silberberg G, Perin R, Markram H (2010) Brief bursts self-inhibit and correlate the pyramidal network. PLOS Biology 8:e1000473.
 [Bono and Clopath2017] Bono J, Clopath C (2017) Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level. Nature Communications 8:706.
 [Bottou1998] Bottou L (1998) Online algorithms and stochastic approximations. In Saad D, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK.
 [Cauller and Kulics1991] Cauller LJ, Kulics AT (1991) The neural basis of the behaviorally relevant N1 component of the somatosensory-evoked potential in SI cortex of awake monkeys: evidence that backward cortical projections signal conscious touch sensation. Experimental Brain Research 84:607–619.
 [Clopath et al.2010] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:344–352.

 [Costa et al.2017] Costa RP, Assael YM, Shillingford B, de Freitas N, Vogels TP (2017) Cortical microcircuits as gated-recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 271–282.
 [Crick1989] Crick F (1989) The recent excitement about neural networks. Nature 337:129–132.
 [Dorrn et al.2010] Dorrn AL, Yuan K, Barker AJ, Schreiner CE, Froemke RC (2010) Developmental sensory experience balances cortical excitation and inhibition. Nature 465:932–936.
 [Friedrich et al.2011] Friedrich J, Urbanczik R, Senn W (2011) Spatiotemporal credit assignment in neuronal population learning. PLOS Computational Biology 7:e1002092.
 [Friston2005] Friston K (2005) A theory of cortical responses. Philosophical Transactions of the Royal Society of London B: Biological Sciences 360:815–836.
 [Froemke2015] Froemke RC (2015) Plasticity of cortical excitatory-inhibitory balance. Annual Review of Neuroscience 38:195–219.
 [Grossberg1987] Grossberg S (1987) Competitive learning: From interactive activation to adaptive resonance. Cognitive Science 11:23–63.
 [Guerguiev et al.2017] Guerguiev J, Lillicrap TP, Richards BA (2017) Towards deep learning with segregated dendrites. eLife 6:e22901.
 [Hinton and McClelland1988] Hinton GE, McClelland JL (1988) Learning representations by recirculation. In Anderson DZ, editor, Neural Information Processing Systems, pp. 358–366. American Institute of Physics.
 [Huber et al.2012] Huber D, Gutnisky DA, Peron S, O’Connor DH, Wiegert JS, Tian L, Oertner TG, Looger LL, Svoboda K (2012) Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature 484:473–478.
 [Keck et al.2017] Keck T, Toyoizumi T, Chen L, Doiron B, Feldman DE, Fox K, Gerstner W, Haydon PG, Hübener M, Lee HK, Lisman JE, Rose T, Sengpiel F, Stellwagen D, Stryker MP, Turrigiano GG, van Rossum MC (2017) Integrating Hebbian and homeostatic plasticity: the current state of the field and future research directions. Philosophical Transactions of the Royal Society of London B: Biological Sciences 372.
 [KhalighRazavi and Kriegeskorte2014] Khaligh-Razavi SM, Kriegeskorte N (2014) Deep supervised, but not unsupervised, models may explain IT cortical representation. PLOS Computational Biology 10:1–29.
 [Larkum2013] Larkum M (2013) A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex. Trends in Neurosciences 36:141–151.
 [LeCun1988] LeCun Y (1988) A theoretical framework for backpropagation. In Touretzky D, Hinton G, Sejnowski T, editors, Proceedings of the 1988 Connectionist Models Summer School, pp. 21–28. Morgan Kaufmann, Pittsburg, PA.
 [LeCun et al.2015] LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444.
 [Lee et al.2015] Lee DH, Zhang S, Fischer A, Bengio Y (2015) Difference target propagation. In Machine Learning and Knowledge Discovery in Databases, pp. 498–515. Springer.
 [Leinweber et al.2017] Leinweber M, Ward DR, Sobczak JM, Attinger A, Keller GB (2017) A Sensorimotor Circuit in Mouse Cortex for Visual Flow Predictions. Neuron 95:1420–1432.e5.

 [Letzkus et al.2015] Letzkus JJ, Wolff SBE, Lüthi A (2015) Disinhibition, a circuit mechanism for associative learning and memory. Neuron 88:264–276.
 [Lillicrap et al.2016] Lillicrap TP, Cownden D, Tweed DB, Akerman CJ (2016) Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications 7:13276.
 [Luz and Shamir2012] Luz Y, Shamir M (2012) Balancing feedforward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLOS Computational Biology 8:e1002334.
 [Makino and Komiyama2015] Makino H, Komiyama T (2015) Learning enhances the relative impact of top-down processing in the visual cortex. Nature Neuroscience 18:1116–1122.
 [Manita et al.2015] Manita S, Suzuki T, Homma C, Matsumoto T, Odagawa M, Yamada K, Ota K, Matsubara C, Inutsuka A, Sato M et al. (2015) A top-down cortical circuit for accurate sensory perception. Neuron 86:1304–1316.
 [Marblestone et al.2016] Marblestone AH, Wayne G, Kording KP (2016) Toward an integration of deep learning and neuroscience. Frontiers in Computational Neuroscience 10:94.
 [Markram et al.2004] Markram H, ToledoRodriguez M, Wang Y, Gupta A, Silberberg G, Wu C (2004) Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience 5:793–807.
 [Masquelier and Thorpe2007] Masquelier T, Thorpe S (2007) Unsupervised learning of visual features through spike-timing-dependent plasticity. PLOS Computational Biology 3.
 [Murayama et al.2009] Murayama M, Pérez-Garci E, Nevian T, Bock T, Senn W, Larkum ME (2009) Dendritic encoding of sensory stimuli controlled by deep cortical interneurons. Nature 457:1137–1141.
 [Nessler et al.2013] Nessler B, Pfeiffer M, Buesing L, Maass W (2013) Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLOS Computational Biology 9:e1003037.
 [O’Reilly1996] O’Reilly RC (1996) Biologically plausible errordriven learning using local activation differences: The generalized recirculation algorithm. Neural Computation 8:895–938.
 [Petreanu et al.2012] Petreanu L, Gutnisky DA, Huber D, Xu NL, O’Connor DH, Tian L, Looger L, Svoboda K (2012) Activity in motor-sensory projections reveals distributed coding in somatosensation. Nature 489:299–303.
 [Petreanu et al.2009] Petreanu L, Mao T, Sternson SM, Svoboda K (2009) The subcellular organization of neocortical excitatory connections. Nature 457:1142–1145.
 [Pi et al.2013] Pi HJ, Hangya B, Kvitsiani D, Sanders JI, Huang ZJ, Kepecs A (2013) Cortical interneurons that specialize in disinhibitory control. Nature 503:521–524.
 [Poort et al.2015] Poort J, Khan AG, Pachitariu M, Nemri A, Orsolic I, Krupic J, Bauza M, Sahani M, Keller GB, MrsicFlogel TD, Hofer SB (2015) Learning enhances sensory and multiple nonsensory representations in primary visual cortex. Neuron 86:1478–1490.
 [Rao and Ballard1999] Rao RP, Ballard DH (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2:79–87.

 [Roelfsema and van Ooyen2005] Roelfsema PR, van Ooyen A (2005) Attention-gated reinforcement learning of internal representations for classification. Neural Computation 17:2176–2214.
 [Rumelhart et al.1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323:533–536.

[Scellier and Bengio2017]
Scellier B, Bengio Y (2017)
Equilibrium propagation: Bridging the gap between energybased models and backpropagation.
Frontiers in Computational Neuroscience 11:24.  [Schwiedrzik and Freiwald2017] Schwiedrzik CM, Freiwald WA (2017) Highlevel prediction signals in a lowlevel area of the macaque faceprocessing hierarchy. Neuron 96:89–97.e4.
 [Silberberg and Markram2007] Silberberg G, Markram H (2007) Disynaptic inhibition between neocortical pyramidal cells mediated by Martinotti cells. Neuron 53:735–746.
 [Spicher et al.in preparation] Spicher D, Clopath C, Senn W (in preparation) Predictive plasticity in dendrites: from a computational principle to experimental data.
 [Spruston2008] Spruston N (2008) Pyramidal neurons: dendritic structure and synaptic integration. Nature Reviews Neuroscience 9:206–221.
 [Sutton and Barto1998] Sutton RS, Barto AG (1998) Reinforcement learning: An introduction, Vol. 1 MIT Press, Cambridge, Mass.
 [Takahashi et al.2016] Takahashi N, Oertner TG, Hegemann P, Larkum ME (2016) Active cortical dendrites modulate perception. Science 354:1587–1590.
 [UrbanCiecko and Barth2016] Urban-Ciecko J, Barth AL (2016) Somatostatin-expressing neurons in cortical networks. Nature Reviews Neuroscience 17:401–409.
 [Urbanczik and Senn2014] Urbanczik R, Senn W (2014) Learning by the dendritic prediction of somatic spiking. Neuron 81:521–528.
 [Vogels et al.2011] Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W (2011) Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334:1569–1573.
 [Whittington and Bogacz2017] Whittington JCR, Bogacz R (2017) An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. Neural Computation 29:1229–1262.
 [Xie and Seung2003] Xie X, Seung HS (2003) Equivalence of backpropagation and contrastive Hebbian learning in a layered network. Neural Computation 15:441–454.
 [Yamins and DiCarlo2016] Yamins DL, DiCarlo JJ (2016) Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience 19:356–365.
 [Yamins et al.2014] Yamins DL, Hong H, Cadieu CF, Solomon EA, Seibert D, DiCarlo JJ (2014) Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences 111:8619–8624.
 [Zhang et al.2014] Zhang S, Xu M, Kamigaki T, Do JPH, Chang WC, Jenvay S, Miyamichi K, Luo L, Dan Y (2014) Long-range and local circuits for top-down modulation of visual cortex processing. Science 345:660–665.
 [Zmarz and Keller2016] Zmarz P, Keller GB (2016) Mismatch receptive fields in mouse visual cortex. Neuron 92:766–772.
Supplementary information
Supplementary data
Below we detail the model parameters used to generate the figures presented in the main text.
Fig. 1 details. The parameters for the compartmental model neuron were: , , . Interneuron somatic teaching conductances were balanced to yield the overall nudging strength $\lambda$. Initial weight matrix entries were independently drawn from a uniform distribution. We chose background activity levels of . The learning rates were set as and . Input patterns were smoothly transitioned by low-pass filtering with time constant . A transition between patterns was triggered every 100 ms. Weight changes were low-pass filtered with time constant . The dynamical equations were solved using Euler’s method with a time step of 0.1 ms, which resulted in 1000 integration time steps per pattern.
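Both the smooth input transitions and the dampening of weight changes rely on first-order low-pass filtering, which can be sketched as below; the time constant and step size are illustrative, since the original values are not reproduced here:

```python
import numpy as np

def lowpass(samples, tau, dt=0.1):
    """First-order low-pass filter, dy/dt = (x - y) / tau, Euler-integrated.
    Used both to smooth input-pattern transitions and to dampen fluctuations
    in the plasticity induction terms before they are consolidated."""
    y = np.zeros_like(samples[0], dtype=float)
    filtered = []
    for x in samples:
        y = y + (dt / tau) * (x - y)
        filtered.append(y.copy())
    return filtered
```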
Fig. 3 details. Initial forward weights and were scaled down by a factor of 0.1. Background noise level was raised to . The learning rates were , , . Weight matrices and were kept fixed, so the model relied on a feedback alignment mechanism to learn. Remaining parameters as used for Fig. 1.
Fig. 4 details. We chose mixing factors and . Forward learning rates were , , . Lateral learning rates were and . Initial forward weights were drawn at random from a uniform distribution , and the remaining weights from .
Fig. 5 details. We took all mixing factors equal . Forward learning rates: , . Lateral connections learned with rate . Topdown connections were initialized from a uniform distribution and adapted with learning rates and .
Supplementary analysis
In this supplementary note we present a set of mathematical results concerning the network and plasticity model described in the main text.
To proceed analytically we make a number of simplifying assumptions. Unless noted otherwise, we study the network in a deterministic setting ($\sigma = 0$) and consider the limiting case where the lateral microcircuit synaptic weights match the corresponding forward weights:

(S1) $\mathbf{W}^{PI}_{k,k} = -\mathbf{W}^{PP}_{k,k+1}$

(S2) $\mathbf{W}^{IP}_{k,k} = \frac{g_{lk}+g_D}{g_D}\,\frac{g_B}{g_{lk}+g_B+g_A}\,\mathbf{W}^{PP}_{k+1,k}.$

The particular choice of proportionality factors, which depend on the neuron model parameters, is motivated below. Under the above configuration, the network becomes self-predicting.
To formally relate the encoding and propagation of errors implemented by the inhibitory microcircuit to the backpropagation of errors algorithm from machine learning, we consider the limit where top-down input is weak compared to the bottom-up drive. This limiting case results in error signals that decrease exponentially with area depth, but allows us to proceed analytically.
We further assume that the top-down weights converging to the apical compartments are tied to the corresponding forward weights, $\mathbf{W}^{PP}_{k,k+1} = (\mathbf{W}^{PP}_{k+1,k})^\top$. Such weight symmetry is not essential for successful learning in a broad range of problems, as demonstrated in the main simulations and as observed before [Lee et al.2015, Lillicrap et al.2016]. It is, however, required to frame learning as a gradient descent procedure. Furthermore, in the analyses of the learning rules we assume that synaptic changes take place at a fixed point of the neuronal dynamics; we therefore consider discrete-time versions of the plasticity rules. This approximates the continuous-time plasticity model as long as changes in the inputs are slow compared to the neuronal dynamics.
For convenience, we will occasionally drop neuron type indices and refer simply to bottom-up and top-down weights. Additionally, we assume without loss of generality that the dendritic coupling conductance for interneurons is equal to the basal dendritic coupling of pyramidal neurons, $g_D = g_B$. Finally, whenever it is useful to indicate that output area nudging is turned off, we use a ‘−’ superscript.
Interneuron activity in the self-predicting state. Following Urbanczik2014, we note that steady state interneuron somatic potentials can be expressed as a convex combination of basal dendritic and pyramidal neuron potentials that are provided via somatic teaching input:

(S3) $\mathbf{u}^I_k = (1 - \lambda)\,\hat{\mathbf{v}}^I_k + \lambda\,\mathbf{u}^P_{k+1},$

with $g_D$ and $g_{lk}$ the effective dendritic transfer and leak conductances, respectively, and $g_{\rm som}$ the total excitatory and inhibitory teaching conductance. In the equation above, $\hat{\mathbf{v}}^I_k = \frac{g_D}{g_{lk}+g_D}\,\mathbf{v}^I_k$ is the interneuron dendritic prediction (cf. Eq. 8), and $\lambda = \frac{g_{\rm som}}{g_{lk}+g_D+g_{\rm som}}$ is a mixing factor which controls the nudging strength for the interneurons. In other words, the current prediction and the teaching signal are averaged with coefficients determined by normalized conductances. We will later consider the weak nudging limit $\lambda \to 0$.
The relation $\mathbf{u}^I_k = \mathbf{u}^P_{k+1}$ holds when pyramidal-to-interneuron synaptic weights are equal to the pyramidal-to-pyramidal forward weights, up to the scale factor given in Eq. S2, which simplifies for the last area, where the apical conductance is absent (to reduce clutter, we use the slightly abusive notation whereby $g_A$ should be understood to be zero when referring to output area neurons). This is the reason for the particular choice of ideal pyramidal-to-interneuron weights presented in the preamble. The network is then internally consistent, in the sense that the interneurons predict the model’s own predictions, held by pyramidal neurons.
Bottom-up predictions in the absence of external nudging. We first study the situation where the input pattern is stationary and the output area teaching input is disabled, $\mathbf{i}^P_N = 0$. We show that the fixed point of the network dynamics is a state where somatic voltages are equal to basal voltages, up to a dendritic attenuation factor. In other words, the network effectively behaves as if it were feedforward: it computes the same function as the corresponding network with equal bottom-up weights but no top-down or lateral connections.
Specifically, in the absence of external nudging (indicated by the ‘−’ superscript), the somatic voltages of pyramidal neurons and interneurons are given by the bottom-up dendritic predictions,

(S4) $\mathbf{u}^{P-}_k = \hat{\mathbf{v}}^{P-}_{B,k},$

(S5) $\mathbf{u}^{I-}_k = \hat{\mathbf{v}}^{I-}_k.$
To show that Eq. S4 describes the state of the network, we start at the output area and set Eq. 1 to zero. Because nudging is turned off, the somatic potential of an output neuron equals its attenuated basal prediction provided the area below itself satisfies Eq. S4. The same recursively applies to the hidden area below whenever its apical voltage vanishes. Now we note that at the fixed point each interneuron cancels the corresponding pyramidal neuron of the area above, due to the assumption that the network is in a self-predicting state. Together with the matching interneuron-to-pyramidal weights, we conclude that the interneuron contribution to the apical compartment cancels the top-down pyramidal input, yielding the required condition of a vanishing apical voltage.
The above argument can be iterated down to the input area, which is constant, and we arrive at Eq. S4.
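This feedforward equivalence is easy to verify numerically. The sketch below (our own notation and parameter values) relaxes a self-predicting network with matched pyramidal and interneuron time constants under a stationary input; the apical potential stays at rest throughout, and the somata settle at their attenuated basal predictions:

```python
import numpy as np

rng = np.random.default_rng(3)

def phi(u):
    return np.logaddexp(0.0, u)  # soft rectifier

# equal basal/dendritic conductances make the interneuron and output-neuron
# dynamics identical, so lateral inhibition cancels top-down input exactly
g_lk, g_B, g_A, g_D = 0.1, 1.0, 0.8, 1.0
lam_B = g_B / (g_lk + g_B + g_A)

n0, n1, n2 = 4, 3, 2                    # input, hidden, output sizes
W10 = rng.uniform(-1, 1, (n1, n0))      # bottom-up, input -> hidden
W21 = rng.uniform(-1, 1, (n2, n1))      # bottom-up, hidden -> output
W_down = rng.uniform(-1, 1, (n1, n2))   # top-down, output -> hidden apical
W_IP = W21.copy()                       # self-predicting: lateral = forward
W_PI = -W_down                          # self-predicting: inhibition = -top-down

r0 = phi(rng.uniform(-1, 1, n0))        # stationary input rates
u1, u2, uI = np.zeros(n1), np.zeros(n2), np.zeros(n2)
dt = 0.05
for _ in range(5000):
    vB1 = W10 @ r0
    vB2 = W21 @ phi(u1)
    vI = W_IP @ phi(u1)
    vA = W_down @ phi(u2) + W_PI @ phi(uI)
    u1 = u1 + dt * (-g_lk * u1 + g_B * (vB1 - u1) + g_A * (vA - u1))
    u2 = u2 + dt * (-g_lk * u2 + g_B * (vB2 - u2))   # no output nudging
    uI = uI + dt * (-g_lk * uI + g_D * (vI - uI))
```

Because the interneuron and output-neuron dynamics coincide here, the cancellation holds along the entire trajectory, not only at the fixed point, illustrating the remark below about matched integration time constants.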
Zero plasticity induction in the absence of nudging. In view of Eq. S4, which states that in the absence of external nudging the somatic voltages correspond to the basal predictions, no synaptic changes are induced at the basal synapses of pyramidal neurons and interneurons, as defined by plasticity rules (7) and (8), respectively. Similarly, the apical voltages are at rest when the top-down input is fully predicted, and no synaptic plasticity is induced at the interneuron-to-pyramidal synapses, see (9). When noisy background currents are present, the average prediction error is zero, while momentary fluctuations will still trigger plasticity. Note that the above holds even when the dynamics is away from equilibrium, under the additional constraint that the integration time constant of the interneurons matches that of the pyramidal neurons.
Recursive prediction error propagation. Prediction errors arise in the model whenever the lateral interneurons cannot fully explain the top-down input, leading to a deviation of apical dendrite activity from baseline. Here, we inspect the network steady-state equations for a stationary input pattern and derive an iterative relationship which establishes how prediction mismatches originating downstream propagate across the network. The following compartmental potentials are thus evaluated at a fixed point of the neuronal dynamics.
Under the assumption (S1) of matching interneuron-to-pyramidal and top-down weights, the apical compartment potentials simplify to

(S6) $\mathbf{v}^P_{A,k} = \mathbf{W}^{PP}_{k,k+1}\,\mathbf{e}_{k+1},$

where we introduced the error vector $\mathbf{e}_{k+1} = \phi(\mathbf{u}^P_{k+1}) - \phi(\mathbf{u}^I_k)$, defined as the difference between pyramidal and interneuron firing rates. Such deviation can be intuitively understood as an area-wise interneuron prediction mismatch, being zero when interneurons perfectly explain pyramidal neuron activity. We now evaluate this difference vector at a fixed point to obtain a recurrence relation that links consecutive areas.
The steady-state somatic potentials of hidden pyramidal neurons are given by

(S7) $\mathbf{u}^P_k = \hat{\mathbf{v}}^P_{B,k} + \lambda\,\mathbf{v}^P_{A,k}.$

To shorten the following, we assumed that the apical attenuation factor is equal to the interneuron nudging strength $\lambda$. As previously mentioned, we proceed under the assumption of weak feedback, $\lambda$ small. As for the corresponding interneurons, we insert Eq. S7 into Eq. S3 and note that when the network is in a self-predicting state we have $\hat{\mathbf{v}}^I_k = \hat{\mathbf{v}}^P_{B,k+1}$, yielding

(S8) $\mathbf{u}^I_k = \hat{\mathbf{v}}^P_{B,k+1} + \lambda^2\,\mathbf{v}^P_{A,k+1}.$
Using the identities (S7) and (S8), we now expand the difference vector to first order around $\hat{\mathbf{v}}^P_{B,k+1}$ as follows:

(S9) $\mathbf{e}_{k+1} = \phi\big(\hat{\mathbf{v}}^P_{B,k+1} + \lambda\,\mathbf{v}^P_{A,k+1}\big) - \phi\big(\hat{\mathbf{v}}^P_{B,k+1} + \lambda^2\,\mathbf{v}^P_{A,k+1}\big) \approx \left(\lambda - \lambda^2\right)\mathbf{D}_{k+1}\,\mathbf{v}^P_{A,k+1}.$

Matrix $\mathbf{D}_{k+1}$ is a diagonal matrix with diagonal equal to $\phi'(\hat{\mathbf{v}}^P_{B,k+1})$, i.e., whose $i$-th diagonal element reads $\phi'(\hat{v}^P_{B,k+1,i})$. It contains the derivative of the neuronal transfer function evaluated componentwise at the bottom-up predictions. Recalling Eq. S6, we obtain a recurrence relation

(S10) $\mathbf{e}_{k+1} \approx \left(\lambda - \lambda^2\right)\mathbf{D}_{k+1}\,\mathbf{W}^{PP}_{k+1,k+2}\,\mathbf{e}_{k+2}.$
Finally, last-area pyramidal neurons provide the initial condition by being directly nudged towards the desired target $\mathbf{u}^{\mathrm{trgt}}$. Their membrane potentials can be written as
(S11) $\hat{\mathbf{u}}^P_N = (1-\lambda)\,\hat{\mathbf{v}}^P_{B,N} + \lambda\,\mathbf{u}^{\mathrm{trgt}}$,
and this gives an estimate for the error in the output area of the form
(S12) $\mathbf{e}_N \approx \lambda\, D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \hat{\mathbf{v}}^P_{B,N}\big)$,
where for simplicity we took the same mixing factor $\lambda$ for pyramidal output neurons and interneurons. Then, for an arbitrary area $k$, assuming that the synaptic weights and the remaining fixed parameters do not scale with $\lambda$, we arrive at
(S13) $\hat{\mathbf{v}}^P_{A,k} \approx \lambda^{N-k}\, W^{PP}_{k,k+1} D_{k+1}\, W^{PP}_{k+1,k+2} D_{k+2} \cdots W^{PP}_{N-1,N} D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \hat{\mathbf{v}}^P_{B,N}\big)$.
Thus, steady-state potentials of apical dendrites (cf. Eq. S6) recursively encode neuron-specific prediction errors that can be traced back to a mismatch at the output area.
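As an illustration, the backward recursion of Eqs. S10 to S13 can be sketched numerically. The sketch below uses illustrative sizes, random weights and a tanh transfer function (none of these are the simulation parameters of the paper), absorbs all conductance attenuation factors into the weights, and treats the bottom-up predictions as fixed stand-in vectors; it only demonstrates how apical error signals are seeded at the output area and acquire one factor of the nudging strength per area.

```python
import numpy as np

# Minimal sketch (hypothetical sizes/weights) of the recursion in Eq. S10:
# v_A[k] ~ lam * W_td[k] @ D[k+1] @ v_A[k+1], seeded at the output area
# by the target mismatch (Eq. S12).
rng = np.random.default_rng(0)
phi = np.tanh                                 # transfer function (assumed)
dphi = lambda u: 1.0 - np.tanh(u) ** 2        # its derivative

sizes = [4, 5, 3]                             # hidden area 1, hidden area 2, output
lam = 0.1                                     # nudging strength / apical attenuation
W_td = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

v_B = [rng.normal(size=n) for n in sizes]     # bottom-up predictions (stand-ins)
u_trgt = rng.normal(size=sizes[-1])           # desired output target

# Output-area error, cf. Eq. S12: e_N ~ lam * D_N (u_trgt - v_B_N)
e = lam * dphi(v_B[-1]) * (u_trgt - v_B[-1])

# Recurse backwards; each step contributes one factor of lam (Eq. S13).
v_A = [None] * (len(sizes) - 1)
v_A[-1] = W_td[-1] @ e
for k in range(len(sizes) - 3, -1, -1):
    v_A[k] = lam * W_td[k] @ (dphi(v_B[k + 1]) * v_A[k + 1])

for k, v in enumerate(v_A):
    print(k, np.linalg.norm(v))               # magnitudes shrink roughly by lam per area
```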
Learning as approximate error backpropagation. In the previous section we found that neurons implicitly carry and transmit error information across the network. We now show how the proposed synaptic plasticity model, when applied at a steady state of the neuronal dynamics, can be recast as an approximate gradient descent learning procedure.
More specifically, we compare our model against learning the weights of the feedforward multi-area network (obtained by removing interneurons and top-down connections from the intact network) through backprop [Rumelhart et al.1986] or approximations thereof [Lee et al.2015, Lillicrap et al.2016]. For this reference model, the activations $\check{\mathbf{u}}_k$ are by construction equal to the bottom-up predictions obtained in the full model when output nudging is turned off, $\check{\mathbf{u}}_k = \hat{\mathbf{v}}^P_{B,k}$, cf. Eq. S4. Thus, optimizing the weights in the feedforward model is equivalent to optimizing the predictions of the full model.
Define the loss function
(S14) $L \equiv \frac{1}{n_N}\,\big\|\,\phi\big((1-\lambda)\,\check{\mathbf{u}}_N + \lambda\,\mathbf{u}^{\mathrm{trgt}}\big) - \phi(\check{\mathbf{u}}_N)\,\big\|^2$,
where $n_N$ denotes the number of output neurons. $L$ can be thought of as the multi-area, multi-output-unit analogue of the loss function optimized by the single neuron model [Urbanczik and Senn2014], where it stems directly from the particular chosen form of the learning rule (7). The nudging strength parameter $\lambda$ controls the degree of mixing with the target and can be understood as an additional learning rate parameter. Albeit unusual in form, the function $L$ imposes a cost similar to an ordinary squared error loss. Importantly, it has a minimum when $\check{\mathbf{u}}_N = \mathbf{u}^{\mathrm{trgt}}$ and it is lower bounded. Furthermore, it is differentiable with respect to compartmental voltages (and synaptic weights). It is therefore suitable for gradient descent optimization. As a side remark, $L$ reduces to a quadratic function of $\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N$ when $\phi$ is linear.
Gradient descent proceeds by changing synaptic weights according to
(S15) $\Delta W_{k,k-1} = -\eta_k\, \frac{\partial L}{\partial W_{k,k-1}}$.
The required partial derivatives can be efficiently computed by the backpropagation of errors algorithm. For the network architecture we study, this yields a learning rule of the form
(S16) $\Delta W_{k,k-1} = \eta_k\, \mathbf{e}^{\mathrm{bp}}_k\, \phi(\check{\mathbf{u}}_{k-1})^T$.
The error factor $\mathbf{e}^{\mathrm{bp}}_k$ can be expressed recursively as follows:
(S17) $\mathbf{e}^{\mathrm{bp}}_k = D_k\, \big(W_{k+1,k}\big)^T\, \mathbf{e}^{\mathrm{bp}}_{k+1}$,
ignoring constant factors that depend on conductance ratios, which can be dealt with by redefining learning rates or backward-pass weights. As in the previous section, matrix $D_k$ is a diagonal matrix, with diagonal equal to $\phi'(\check{\mathbf{u}}_k)$.
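The forward pass of the reference model and the error recursion of Eqs. S16 and S17 together constitute standard backprop; a minimal sketch with illustrative sizes, random weights and a tanh transfer function (learning rate and constant conductance factors omitted):

```python
import numpy as np

# Sketch of the reference feedforward network and the backprop error
# recursion of Eqs. S16-S17 (sizes and weights are illustrative).
rng = np.random.default_rng(1)
phi, dphi = np.tanh, lambda u: 1.0 - np.tanh(u) ** 2

sizes = [3, 4, 2]                              # input, hidden, output
W = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

# Forward pass: activations of the interneuron-free reference model.
u = [rng.normal(size=sizes[0])]
for Wk in W:
    u.append(Wk @ phi(u[-1]))

u_trgt = rng.normal(size=sizes[-1])
e = dphi(u[-1]) * (u_trgt - u[-1])             # output-area error factor

# Backward pass, Eq. S17: e_k = D_k W_{k+1,k}^T e_{k+1},
# with D_k diagonal carrying phi'(u_k); updates follow Eq. S16.
dW = [None] * len(W)
for k in reversed(range(len(W))):
    dW[k] = np.outer(e, phi(u[k]))             # weight update (up to eta)
    if k > 0:
        e = dphi(u[k]) * (W[k].T @ e)          # propagate error one area down
```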
We first compare the fixed point equations of the original network to the feedforward activations of the reference model. Starting from the bottommost hidden area, using Eqs. S6, S7 and S13, we notice that $\hat{\mathbf{u}}^P_1 = \check{\mathbf{u}}_1 + \lambda\,\hat{\mathbf{v}}^P_{A,1}$, as the bottom-up input is the same in both cases. Inserting this into the second hidden area steady-state potentials and linearizing the neuronal transfer function gives $\hat{\mathbf{u}}^P_2 = \check{\mathbf{u}}_2 + \lambda\,\hat{\mathbf{v}}^P_{A,2} + \mathcal{O}(\lambda^{N})$. This can be repeated, and for an arbitrary area and neuron type we find
(S18) $\hat{\mathbf{u}}^P_k = \check{\mathbf{u}}_k + \lambda\,\hat{\mathbf{v}}^P_{A,k} + \mathcal{O}(\lambda^{N-k+2}) = \check{\mathbf{u}}_k + \mathcal{O}(\lambda^{N-k+1})$,
(S19) $\hat{\mathbf{u}}^I_k = \check{\mathbf{u}}_{k+1} + \mathcal{O}(\lambda^{N-k+1})$.
Writing Eq. S18 in the first form emphasizes that the apical contributions dominate the bottom-up corrections, which carry an additional factor of $\lambda$.
Next, we prove that, up to a factor (a power of $\lambda$ absorbable in the learning rate) and to first order, the apical term in Eq. S18 represents the error backpropagated in the feedforward network, $\mathbf{e}^{\mathrm{bp}}_k$. Starting from the topmost hidden area apical potentials, we reevaluate the difference vector of Eq. S12 using Eq. S18. Linearization of the neuronal transfer function gives
(S20) $\mathbf{e}_N \approx \lambda\, D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N\big)$,
where $D_N$ is now evaluated at $\check{\mathbf{u}}_N$.
Inserting the expression above into Eq. S18 and using Eq. S19, the apical compartment potentials at area $N-1$ can then be recomputed. This procedure can be iterated until the input area is reached. In general form, somatic membrane potentials at hidden area $k$ can be expressed as
(S21) $\hat{\mathbf{u}}^P_k = \check{\mathbf{u}}_k + \lambda^{N-k+1}\, W^{PP}_{k,k+1} D_{k+1} \cdots W^{PP}_{N-1,N} D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N\big) + \mathcal{O}(\lambda^{N-k+2})$,
(S22) $\hat{\mathbf{u}}^I_k = \check{\mathbf{u}}_{k+1} + \mathcal{O}(\lambda^{N-k+1})$.
This equation shows that, to leading order in $\lambda$, hidden neurons mix and propagate forward purely bottom-up predictions with top-down errors that are computed at the output area and spread backwards.
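This mixing can be checked on a minimal two-area example by iterating simplified steady-state equations to their fixed point and comparing the resulting apical potentials against the first-order prediction of Eq. S13. The sketch below uses illustrative sizes and weights, absorbs conductance attenuation factors into the weights, and sets the top-down weights to the transposed forward weights; it is not the paper's simulation setup.

```python
import numpy as np

# Fixed-point sketch of the simplified steady-state equations: the apical
# mismatch of hidden neurons approximates the first-order error of Eq. S13.
rng = np.random.default_rng(3)
phi, dphi = np.tanh, lambda u: 1.0 - np.tanh(u) ** 2
lam = 0.05                                  # weak nudging/mixing strength

W10 = rng.normal(scale=0.5, size=(6, 4))    # bottom-up, input -> hidden
W21 = rng.normal(scale=0.5, size=(3, 6))    # bottom-up, hidden -> output
W_td = W21.T                                # top-down set to the transpose

r0 = rng.normal(size=4)                     # stationary input rates
u_trgt = rng.normal(size=3)                 # output target

u1 = W10 @ r0                               # start from the feedforward pass
for _ in range(200):                        # relax to the fixed point
    v_B2 = W21 @ phi(u1)
    u2 = (1 - lam) * v_B2 + lam * u_trgt    # nudged output, cf. Eq. S11
    uI = (1 - lam) * v_B2 + lam * u2        # self-predicting interneuron, Eq. S8
    v_A1 = W_td @ (phi(u2) - phi(uI))       # apical mismatch, cf. Eq. S6
    u1 = W10 @ r0 + lam * v_A1              # cf. Eq. S7 (factors absorbed)

# First-order prediction, cf. Eq. S13: v_A1 ~ lam * W_td D_2 (u_trgt - v_B2)
v_B2_ff = W21 @ phi(W10 @ r0)
v_A1_pred = lam * W_td @ (dphi(v_B2_ff) * (u_trgt - v_B2_ff))
rel = np.linalg.norm(v_A1 - v_A1_pred) / np.linalg.norm(v_A1_pred)
print(rel)                                  # should be of order lam
```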
We are now in a position to compare model synaptic weight updates to the ones prescribed by backprop. Output area updates are exactly equal by construction, $\Delta W^{PP}_{N,N-1} = \Delta W^{\mathrm{bp}}_{N,N-1}$. For pyramidal-to-pyramidal neuron synapses from hidden area $k-1$ to area $k$, we obtain
(S23) $\Delta W^{PP}_{k,k-1} = \eta_k\,\big(\phi(\hat{\mathbf{u}}^P_k) - \phi(\hat{\mathbf{v}}^P_{B,k})\big)\,\phi(\hat{\mathbf{u}}^P_{k-1})^T \approx \eta_k\,\lambda^{N-k+1}\, D_k\, W^{PP}_{k,k+1} D_{k+1} \cdots W^{PP}_{N-1,N} D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N\big)\,\phi(\check{\mathbf{u}}_{k-1})^T$,
while the backprop learning rule (S16) can be written as
(S24) $\Delta W^{\mathrm{bp}}_{k,k-1} = \eta_k\,\lambda\, D_k\,\big(W_{k+1,k}\big)^T D_{k+1} \cdots \big(W_{N,N-1}\big)^T D_N\,\big(\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N\big)\,\phi(\check{\mathbf{u}}_{k-1})^T$,
where we used that, to first order, the output area error factor is $\mathbf{e}^{\mathrm{bp}}_N = \lambda\, D_N\,(\mathbf{u}^{\mathrm{trgt}} - \check{\mathbf{u}}_N)$. Hence, up to a factor of $\lambda^{N-k}$ which can be absorbed in the learning rate $\eta_k$, the changes induced by synaptic plasticity are equal to the backprop learning rule (S16) in the limit $\lambda \to 0$, provided that the top-down weights are set to the transpose of the corresponding feedforward weights, $W^{PP}_{k,k+1} = \big(W^{PP}_{k+1,k}\big)^T$. This 'quasi-feedforward' condition has also been invoked to relate backprop to two-phase contrastive Hebbian learning in Hopfield networks [Xie and Seung2003].
In our simulations, top-down weights are either set at random and kept fixed, in which case Eq. S23 shows that the plasticity model optimizes the predictions according to an approximation of backprop known as feedback alignment [Lillicrap et al.2016]; or learned so as to minimize an inverse reconstruction loss, in which case the network implements a form of difference target propagation [Lee et al.2015].
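A minimal sketch of the first case, fixed random feedback weights (feedback alignment), on a toy regression problem; the network sizes, learning rate and teacher network below are illustrative and unrelated to the simulations reported here:

```python
import numpy as np

# Feedback-alignment sketch [Lillicrap et al. 2016]: the backward weights B
# are random and fixed, replacing the transpose W2.T in the hidden error.
rng = np.random.default_rng(2)
phi, dphi = np.tanh, lambda u: 1.0 - np.tanh(u) ** 2

W1 = rng.normal(scale=0.5, size=(10, 5))     # student, input -> hidden
W2 = rng.normal(scale=0.5, size=(2, 10))     # student, hidden -> output
B = rng.normal(scale=0.5, size=(10, 2))      # fixed random feedback weights
W1_t = rng.normal(scale=0.5, size=(10, 5))   # teacher network (generates targets)
W2_t = rng.normal(scale=0.5, size=(2, 10))

eta = 0.02
losses = []
for step in range(4000):
    x = rng.normal(size=5)
    y = W2_t @ phi(W1_t @ x)                 # teacher target
    h = W1 @ x
    out = W2 @ phi(h)
    e_out = y - out
    losses.append(float(e_out @ e_out))
    e_hid = dphi(h) * (B @ e_out)            # B replaces W2.T here
    W2 += eta * np.outer(e_out, phi(h))
    W1 += eta * np.outer(e_hid, x)

print(np.mean(losses[:200]), np.mean(losses[-200:]))  # error should decrease
```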
Interneuron plasticity. The analyses of the previous sections relied on the assumption that the synaptic weights to and from interneurons were set to their ideal values, cf. Eqs. S1 and S2. We now study the plasticity of the lateral microcircuit synapses and show that, under mild conditions, learning rules (8) and (9) yield the desired synaptic weight matrices.
We first study the learning of pyramidal-to-interneuron synapses $W^{IP}_k$. To quantify the degree to which these weights deviate from their optimal setting, we introduce the convex loss function
(S25) $L^{I}_k \equiv \mathrm{Tr}\!\left[\big(W^{IP}_k - W^{IP,*}_k\big)\big(W^{IP}_k - W^{IP,*}_k\big)^T\right]$,
where $\mathrm{Tr}(M)$ denotes the trace of matrix $M$ and $W^{IP,*}_k$ the ideal pyramidal-to-interneuron weight matrix, as defined in Eq. S2.