Hello World!
I wanted to start the Blog with something that has been on my mind since the great Computational Psychiatry Course 2016. Some of the talks there revolved around the free energy/active inference framework in cognitive neuroscience. Although "free energy" does sound a little esoteric at first, the name derives from the thermodynamic free energy in statistical physics and just describes the functional form of the objective function that the brain is hypothesized to minimize.
So in this (and the following) post I want to outline the idea and the mathematical machinery behind this framework and pair it with recent advances in stochastic inference using deep generative models (i.e. Stochastic Backpropagation/Variational Autoencoders and Evolution Strategies) to create autonomous agents that can perform goal-directed action while building a generative model of themselves and their surroundings. I decided to try and use the minimal amount of formulae required. But if you find bits too vague (or too technical) please let me know and I’ll try to be clearer (or less technical).
Active Inference
Here I will just give a brief and subjective summary of the arguments in this paper by Karl Friston.
Active Inference rests on the basic assumption that any agent in exchange with a dynamic, fluctuating environment must keep certain inner parameters within a well-defined range in order to survive. Think of the pH of your blood or the concentration of certain ion species. For example, changing the potassium concentration in your blood serum by even a tiny amount (some mmol/l) might disrupt the electrical signal transduction in your heart muscle cells, changing the electrical and mechanical activity of your heart from a stable, cyclic attractor (pumping blood) to a chaotic one, which can lead to immediate heart failure. If we look at the space spanned by all relevant variables required to define the state the agent is in (including its interaction with its environment, e.g. the forces of gravity acting on it or the temperature of its surroundings), this means that the agent must restrict itself to a small volume of this "state space" in order to keep all relevant parameters within a viable range.
One can formalize this notion in terms of the probability distribution on the state space, i.e. how likely it is to find an agent in a certain state. A measure of how concentrated a probability distribution $p(x)$ is on a space $X$ is its entropy:

$$H = -\int_X p(x) \ln p(x) \, dx$$

The entropy is just the average surprise $-\ln p(x)$ over the distribution. It increases the more "spread out" the distribution becomes. The entropy of a univariate Gaussian, for example, grows with the logarithm of its standard-deviation (see the small calculation below). By minimizing the entropy of its distribution on the space of possible states it can be in, an agent can counter dispersive effects from its environment and keep a stable "identity" (in terms of its inner parameters) in a fluctuating environment.
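As a small, concrete check of this claim, one can compute the entropy of a univariate Gaussian directly from the definition:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \quad \Rightarrow \quad H = -\int_{-\infty}^{\infty} p(x) \ln p(x) \, dx = \ln \sigma + \frac{1}{2}\ln(2\pi e)$$

So concentrating the distribution by halving $\sigma$ lowers the entropy by $\ln 2$, independently of where the mean $\mu$ lies.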
However, a human agent does not have (or at least during most of our evolutionary history did not have) direct access to an objective measurement of its blood-potassium concentration. Instead we only perceive the world around us via our sensory epithelia. But agents whose sensory organs did not provide a good mapping of relevant physical states to appropriate sensory inputs would not have lasted very long. So we further assume that we can upper bound the entropy of the agent's physical states by the entropy of its sensory states plus a constant term (technical details here; note that this approximation is no longer required by more recent formulations). Let $O$ now denote the space of all possible observations of an agent. To ensure its survival (in terms of keeping its physiological variables within well-defined bounds) it has to minimize its sensory entropy:

$$H(O) = -\int_O p(o) \ln p(o) \, do$$
A further assumption we use is ergodicity. This means that – looking at a large population of agents – the relative proportion of agents in a certain state at a given moment of time is equal to the relative amount of time a single agent spends in this state (i.e. that time- and ensemble-averages are equivalent). This allows us to write the sensory entropy as

$$H(O) = \lim_{T \to \infty} \frac{1}{T} \int_0^T -\ln p(o(t)) \, dt$$
From the calculus of variations it now follows that an agent can minimize its sensory entropy by minimizing the sensory surprise at all times:

$$-\ln p(o(t)) \to \min \quad \text{for all } t$$
To be able to efficiently do this, our agent needs a statistical model of its sensory inputs, to evaluate $p(o(t))$. Since the world in which we live is hierarchically organised, dynamic, and features a lot of complex noise sources, which act on different time scales and enter at different levels of the hierarchy, we assume that the model of our agent is a deep (hierarchical), recurrent (dynamic), latent variable (structured noise) model. In fact, there is even a theorem that "every good regulator of a system must be a model of that system". Furthermore, we assume that this model is generative, using the observation that we are able to imagine certain situations and perceptions (like the image of a blue sky over a desert landscape) without actually experiencing or having experienced them. We will discuss a possible implementation of this model in the brain later, but let's first assume our agent possesses a generative model $p_\theta(o, s)$ of sensory observations $o$ and latent variables or states $s$, which we can factorise into a likelihood function $p_\theta(o|s)$ and a prior on the states $p_\theta(s)$:

$$p_\theta(o, s) = p_\theta(o|s) \, p_\theta(s)$$
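To make this factorisation concrete, here is a minimal sketch of such a generative model in Python/PyTorch. All names (GenerativeModel, latent_dim, obs_dim) and all architectural choices (a standard-normal prior, a small network mapping states to a Gaussian over observations) are my own illustrative assumptions, not a specific published implementation:

```python
import torch
import torch.nn as nn

class GenerativeModel(nn.Module):
    """Minimal p_theta(o, s) = p_theta(o | s) p(s). Illustrative sketch only."""

    def __init__(self, latent_dim=4, obs_dim=8):
        super().__init__()
        # Likelihood p_theta(o | s): a small network mapping latent states to
        # the mean of a Gaussian over observations (fixed unit variance here).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, obs_dim)
        )
        self.latent_dim = latent_dim

    def prior(self):
        # Prior p(s): standard normal over latent states.
        return torch.distributions.Normal(
            torch.zeros(self.latent_dim), torch.ones(self.latent_dim)
        )

    def likelihood(self, s):
        # p_theta(o | s): Gaussian with state-dependent mean.
        return torch.distributions.Normal(self.decoder(s), 1.0)

    def log_joint(self, o, s):
        # ln p_theta(o, s) = ln p_theta(o | s) + ln p(s)
        return (self.likelihood(s).log_prob(o).sum(-1)
                + self.prior().log_prob(s).sum(-1))
```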
The set $\theta$ comprises the (time-invariant or slowly changing) parameters that the agent can change to improve its model of the world. In the brain this might be the pattern and strength of synapses, the connections between individual neurons. Given this factorisation, to minimize surprise, the agent must solve the hard task of calculating

$$p_\theta(o) = \int p_\theta(o, s) \, ds$$

by marginalizing over all possible states that could lead to a given observation. As the dimensionality of the latent state space can be very high, this integral is extremely hard to solve. Therefore, a further assumption of the free energy principle is that agents do not try to minimize the surprise $-\ln p_\theta(o)$ directly, but rather minimize an upper bound on it, which is a lot simpler to calculate.
To construct this bound, we use the Kullback-Leibler (KL) divergence (don't be afraid of the name, the definition is in the next line):

$$D_{KL}(q(x) \,\|\, p(x)) = \int_X q(x) \ln \frac{q(x)}{p(x)} \, dx$$

It is defined between two arbitrary distributions $q(x)$ and $p(x)$ with a shared support on a common space $X$, is always greater than or equal to zero, and is equal to zero if and only if the two distributions are equal. Despite the – at first look – intimidating notation, this quantity, also known as relative information, has some quite intuitive interpretations. E.g. it is the average amount of additional information that has to be transmitted to communicate the density $q(x)$, given that the receiver already knows the density $p(x)$. And although there are many more divergence measures between probability densities that are non-negative, and zero if and only if the two densities are equal, the KL-divergence is the only one that fulfils three crucial properties: it is independent of the coordinate system used to represent the space on which the two densities are defined; it is local, i.e. local changes to one of the two densities only affect the respective parts of the integral; and it is what physicists call extensive, which just means that it is additive over (stochastically) independent subsystems. To learn more about the special role of the KL-divergence please refer to the wonderful notes on divergence measures by Danilo Rezende. Beyond our current discussion, these properties also make the KL-divergence exceptionally suitable for evaluation with sampling-based methods, as shown and discussed recently in great work by Tran, Ranganath, and Blei (2017).
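As a quick numerical illustration (my own toy example, not from the references above), one can check the non-negativity and asymmetry of the KL-divergence for two discrete distributions with a few lines of Python:

```python
import numpy as np

def kl_divergence(q, p):
    """D_KL(q || p) for discrete distributions with shared support."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

q = np.array([0.7, 0.2, 0.1])
p = np.array([0.4, 0.4, 0.2])

print(kl_divergence(q, p))  # > 0, since q != p
print(kl_divergence(q, q))  # exactly 0, since the distributions are equal
print(kl_divergence(p, q))  # also > 0, but different: KL is not symmetric
```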
So using the KL-divergence, we can define the free energy as:

$$F(o, u, \theta) = -\ln p_\theta(o) + D_{KL}(q_u(s) \,\|\, p_\theta(s|o)) \geq -\ln p_\theta(o)$$

Since the KL-term is non-negative, the free energy is indeed an upper bound on the surprise.
After this – admittedly loooong – list of assumptions, you might now again wonder what $q_u(s)$ is supposed to mean: $q_u(s)$ is an arbitrary (so-called variational) density over the space of latent states $S$, which belongs to a family of distributions parameterized by a time-dependent, i.e. fast changing, parameter set $u$. If $q_u(s)$ was a diagonal Gaussian, $u$ would be the corresponding means and standard-deviations. This parameter set can be encoded by the internal states of our agent, e.g. by the neural activity (firing pattern) of the neurons in its brain. Thus, the upper bound $F(o, u, \theta)$ now only depends on quantities to which our agent has direct access: the states of its sensory organs $o$, the synapses $\theta$ encoding its generative model of the world, and the neural activity representing the sufficient statistics $u$ of the variational density.
Using the definition of the Kullback-Leibler divergence, the linearity of the integral, Bayes' rule, and manipulation of logarithms, one (You! Try it!) can derive the following equivalent forms of the free energy functional:

$$F(o, u, \theta) = D_{KL}(q_u(s) \,\|\, p_\theta(s|o)) - \ln p_\theta(o)$$

$$F(o, u, \theta) = -\langle \ln p_\theta(o, s) \rangle_{q_u(s)} - H[q_u(s)]$$

$$F(o, u, \theta) = -\langle \ln p_\theta(o|s) \rangle_{q_u(s)} + D_{KL}(q_u(s) \,\|\, p_\theta(s))$$

Here $\langle f(s) \rangle_{q_u(s)}$ means calculating the expectation value of $f(s)$ with respect to the variational density $q_u(s)$, and $H[q_u(s)] = -\langle \ln q_u(s) \rangle_{q_u(s)}$ is the entropy of the variational density. A sketch of the derivation follows below.
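If you want to check your attempt: one possible route, using only Bayes' rule $p_\theta(s|o) = p_\theta(o, s)/p_\theta(o)$, the factorisation of the joint, and the linearity of the integral, goes like this:

$$\begin{aligned} F(o, u, \theta) &= \int q_u(s) \ln \frac{q_u(s)}{p_\theta(s|o)} \, ds - \ln p_\theta(o) \\ &= \int q_u(s) \ln \frac{q_u(s) \, p_\theta(o)}{p_\theta(o, s)} \, ds - \ln p_\theta(o) \\ &= \int q_u(s) \ln q_u(s) \, ds - \int q_u(s) \ln p_\theta(o, s) \, ds \\ &= -\langle \ln p_\theta(o, s) \rangle_{q_u(s)} - H[q_u(s)] \\ &= -\langle \ln p_\theta(o|s) \rangle_{q_u(s)} + \int q_u(s) \ln \frac{q_u(s)}{p_\theta(s)} \, ds \end{aligned}$$

In the third line the $\ln p_\theta(o)$ terms cancel, since $\int q_u(s) \ln p_\theta(o) \, ds = \ln p_\theta(o)$; the last line substitutes $p_\theta(o, s) = p_\theta(o|s) \, p_\theta(s)$ and is exactly the third form above.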
If the agent were tied to a rock, unable to interact with or change its environment, the only thing it could do to minimize $F(o, u, \theta)$ would be to change the parameters $\theta$ of its generative model and the sufficient statistics $u$ of its inner representation. Looking at the first form of the free energy, optimizing $u$ would correspond to minimizing the Kullback-Leibler divergence between the variational distribution $q_u(s)$ and the true posterior distribution $p_\theta(s|o)$, i.e. the probability over states $s$ given the observations $o$. Thus, the agent automatically acquires a probabilistic representation of an approximate posterior on the states of the world, given its sensory input. The optimization of the sufficient statistics $u$ of the variational density $q_u(s)$ is therefore what we call "perception". As $u$ has to be optimized on a fast timescale, quickly changing with the sensory input $o$, it is likely represented in terms of neural activity. This might explain why we often find hallmarks of Bayesian computations and probabilistic representations of sensory stimuli in the brain (Berkes et al.; c.f. Knill & Pouget, Fiser et al.). As the variational free energy upper bounds the surprise $-\ln p_\theta(o)$, minimising free energy with respect to the parameters $\theta$ of the generative model will simultaneously maximise the evidence $p_\theta(o)$ for the agent's generative model of the world. The resulting optimisation of the parameters $\theta$ of the generative model is what we call perceptual learning, and it might be implemented by changing the synaptic connections between neurons in the brain. The second form is just to show the Physicists among you where the name "free energy" comes from, since it is very similar to the Helmholtz Free Energy of the Canonical Ensemble (if you are not a Physicist, feel free to ignore this sentence, or see the short comparison below).
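To make the analogy explicit (a side note using notation of my own choosing): if we identify an "energy" $E(o, s) = -\ln p_\theta(o, s)$, the second form reads exactly like the Helmholtz free energy $F = U - TS$ at unit temperature,

$$F = \underbrace{\langle E(o, s) \rangle_{q_u(s)}}_{\text{average energy } U} - \underbrace{H[q_u(s)]}_{\text{entropy } S}$$

with the variational density playing the role of the Boltzmann distribution of the canonical ensemble.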
So far, this is exactly how variational inference on generative models works. In practice, you do not optimise the sufficient statistics of the variational density directly, since you would have to perform this optimisation for every single observation $o$. Instead one can use a very flexible, parameterised function $u_{\theta}(o)$, and optimise its parameters together with the parameters of the generative model. E.g. in the diagonal Gaussian case you could use deep neural networks to calculate the means and standard-deviations of $q_{u_{\theta}(o)}(s)$ from the observations $o$ (c.f. the nice work by Rezende, Mohamed & Wierstra, Kingma & Welling, and Chung et al.). This allows us to fit generative latent variable models to our data while simultaneously getting an easy to evaluate approximation to the posterior distribution over latent variables given data samples. A minimal code sketch of this amortisation follows below.
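As a minimal sketch, continuing the toy GenerativeModel from above (again, all names and architectural choices are my own illustrative assumptions; the reparameterised sample stands in for the exact gradient estimators developed in the cited papers):

```python
class RecognitionModel(nn.Module):
    """Amortised u_theta(o): maps observations to the means and
    standard-deviations of a diagonal Gaussian q(s | o). Sketch only."""

    def __init__(self, obs_dim=8, latent_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU())
        self.mean_head = nn.Linear(32, latent_dim)
        self.log_std_head = nn.Linear(32, latent_dim)

    def forward(self, o):
        h = self.net(o)
        return torch.distributions.Normal(
            self.mean_head(h), self.log_std_head(h).exp()
        )

def free_energy(gen, rec, o):
    # Third form of F: -<ln p(o|s)>_q + KL(q(s|o) || p(s)),
    # estimated with a single reparameterised sample from q.
    q = rec(o)
    s = q.rsample()
    expected_log_lik = gen.likelihood(s).log_prob(o).sum(-1)
    kl = torch.distributions.kl_divergence(q, gen.prior()).sum(-1)
    return -expected_log_lik + kl
```

Minimising this quantity by gradient descent on the parameters of both networks then implements perception (via the recognition network) and perceptual learning (via the generative model) in one go.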
Now comes the interesting part, Action:
We now free our agent from its ties, i.e. we give it actuators that allow it to actively change the state of its environment. Suddenly the sensory observations become a function of the current and past states $a$ of the agent's actuators ("muscles"), via their influence on the state of the world generating the sensory inputs. Now the agent can not only minimize the free energy bound by learning (optimising the parameters of its generative model) and perception (optimising the sufficient statistics of the variational density), but also by changing the observations it makes. This is called Active Inference. So while learning and perception help the agent to make the free energy a tight bound on the actual sensory surprise $-\ln p_\theta(o)$, action can now use this tight bound to change the observations the agent makes, thereby minimizing (given the bound is tight enough) the actual sensory entropy $H(O)$, which was the objective that we started with.
Remembering the last form of the variational free energy

$$F(o, u, \theta) = -\langle \ln p_\theta(o|s) \rangle_{q_u(s)} + D_{KL}(q_u(s) \,\|\, p_\theta(s)),$$

we see that our agent can minimize it by seeking out observations that have a high likelihood $p_\theta(o|s)$ under its own generative model of the world (averaged over its approximate posterior of the state of the world given its previous observations).
This gives us the following dynamics for the parameters $\theta$ of an agent's generative model, its internal states $u$, encoding the sufficient statistics of the variational density $q_u(s)$, and the states $a$ of its actuators: a gradient descent on the free energy,

$$\dot{\theta} \propto -\frac{\partial F(o, u, \theta)}{\partial \theta}, \qquad \dot{u} \propto -\frac{\partial F(o, u, \theta)}{\partial u}, \qquad \dot{a} \propto -\frac{\partial F(o(a), u, \theta)}{\partial a}$$
But how do we instill goal-directed behavior in such an agent? As mentioned above, the agent will seek out states that conform to its expectations, i.e. that have a high likelihood under its generative model of the world. So you can encode specific goals in the priors of the agent's generative model: by assigning a higher a priori probability to a specific state, you make the agent seek out this state more often.
As noted above, while the parameters $\theta$ describe the agent's beliefs about the dynamic rules that govern the world, and should be optimized on a slow timescale, i.e. over large batches of data, the internal states $u$ and the action states $a$ must change on the timescale of the sensory input. Thus, in a discrete world they would have to be optimized at each time step. As these kinds of optimizations are costly, it is easier to use functional approximations $u_{\theta}(o)$ and $a_{\theta}(o)$, e.g. by deep neural networks, and optimize their parameters together with the parameters of the generative model (analogous to the work in variational inference on deep generative models by Rezende, Mohamed & Wierstra, Kingma & Welling, and Chung et al.). A sketch of the resulting training loop follows below.
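Putting the pieces together, here is a hedged sketch of what such an agent's loop could look like, reusing the toy models from above. The environment interface (env.reset, env.step) and the ActionModel are my own illustrative assumptions, not a concrete published implementation (which the next post will provide):

```python
class ActionModel(nn.Module):
    """Amortised action function a_theta(o): maps observations to actuator
    states. Illustrative sketch only."""

    def __init__(self, obs_dim=8, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(),
            nn.Linear(32, action_dim), nn.Tanh(),
        )

    def forward(self, o):
        return self.net(o)

gen, rec, act = GenerativeModel(), RecognitionModel(), ActionModel()
optimizer = torch.optim.Adam(
    list(gen.parameters()) + list(rec.parameters()), lr=1e-3
)

o = env.reset()  # `env` is a hypothetical environment returning tensors
for step in range(10_000):
    with torch.no_grad():
        a = act(o)       # act on the world ...
        o = env.step(a)  # ... which changes the next observation
    loss = free_energy(gen, rec, o).mean()  # perception + learning
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # The parameters of `act` cannot be trained by backpropagation here,
    # since the environment is not differentiable. They can instead be
    # optimised on the same free energy objective with the evolution
    # strategies mentioned at the top, which the next post covers.
```

Goal-directed behavior would then enter through the prior of the generative model, e.g. by centering it on a desired state instead of using the standard normal above.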
I admit that the premises of Active Inference take some time to get used to. But when you do, suddenly many apparently disparate findings in neuroscience fit together: from optical illusions to neural responses to oddball stimuli, mirror neurons, predictive coding, basic features of neural architecture, critical neural dynamics, and human choice behaviour, many findings can be framed and explained by the free energy principle and the resulting active inference dynamics.
That’s it. To minimize mathematical machinery, I stayed quite abstract in this post, but I will make up for it by showing you how to implement a full active inference agent in a small simulated environment, using deep neural networks, sampling and evolution strategies in the next one.
Edited Nov. 14th, 2018: Fixed some typos, added a paragraph with some details on the KL-divergence, added a short discussion on the roles of learning, perception and action in the optimisation of the sensory entropy, and added a short note that the assumption of a linear sensory mapping can be relaxed, using more recent formulations (but there will be a separate blog post on this, soon*ish*).
Thank you Kai, this is an extremely useful primer! If I’m understanding correctly, the discussion of updating a parameterized function for the recognition density q_u used theta just to illustrate it’s a parameterized function, and not to imply these are the same parameters as those used by the generative model (also called theta).
> Instead one can use a very flexible, parameterised function $u_{\theta}(o)$, and optimise its parameters together with the parameters of the generative model
Did I get that right? The relationship between the generative and recognition models (e.g. whether one is an inversion of the other or not) seems to be somewhat controversial already, so I wanted to clarify the notation here.
Thank you for the nice feedback and the important question, Jeremy. You're exactly right. The amortized recognition function has its own set of parameters. I just lumped it together with the parameters of the likelihood- and action-function into the set theta, which summarizes all the parameters of the agent that are optimized during the training process. This was supposed to declutter the notation, but might have created confusion, I'm sorry. Please refer to the paper and the accompanying code for more details (cf. the publications page).
Thanks for clarifying Kai!