The Leaky Integrate-and-Fire Neuron Model
If you want to simulate a brain, you need to simulate neurons. But real neurons are absurdly complex—thousands of ion channels, intricate dendritic trees, and biochemical cascades we don’t fully understand. How do you capture the essence without drowning in details?
Enter the Leaky Integrate-and-Fire (LIF) model. It’s the Honda Civic of neuron models: not fancy, but reliable, efficient, and it gets you where you need to go. First proposed by Louis Lapicque in 1907, it remains the most widely used neuron model in computational neuroscience.
The Core Idea
A neuron is basically a leaky capacitor with a threshold. That’s it. Here’s what that means:
- Capacitor: The cell membrane stores charge, creating a voltage difference
- Leaky: Charge slowly leaks out through ion channels
- Integrate: Input currents add up over time
- Fire: When voltage crosses a threshold, the neuron spikes and resets
This captures the fundamental computation: accumulate evidence until you’re confident enough to signal.
The Mathematics
The LIF neuron is described by a single differential equation:
$$\tau_m \frac{dV}{dt} = -(V - V_{rest}) + R \cdot I(t)$$
Where:
- $V$ is the membrane potential (voltage)
- $\tau_m$ is the membrane time constant (typically 10-30 ms)
- $V_{rest}$ is the resting potential (around -70 mV)
- $R$ is the membrane resistance
- $I(t)$ is the input current
When $V$ reaches the threshold $V_{th}$ (around -55 mV), the neuron fires a spike and resets to $V_{reset}$.
Let’s break this down:
- The term $-(V - V_{rest})$ pulls the voltage back toward rest—this is the “leak”
- The term $R \cdot I(t)$ pushes the voltage based on input—this is the “integrate”
- The time constant $\tau_m$ controls how fast these dynamics play out
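A quick numeric sanity check makes the balance concrete. For constant input, the leak and the drive balance at a steady-state voltage $V_\infty = V_{rest} + R \cdot I$; the neuron fires only if $V_\infty$ exceeds threshold. Using the same parameter values as the implementation below:

```python
# Steady-state check: leak and input balance at V_inf = V_rest + R*I.
# Values match the implementation below: R = 10 MOhm, V_rest = -70 mV, V_th = -55 mV.
R, v_rest, v_thresh = 10e6, -70e-3, -55e-3
I = 2e-9  # constant 2 nA input
v_inf = v_rest + R * I
print(f"V_inf = {v_inf * 1e3:.0f} mV")            # -50 mV
print("fires" if v_inf > v_thresh else "silent")  # above the -55 mV threshold: fires
```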
Building Intuition
Think of it like filling a leaky bucket with water:
- Water level = membrane voltage
- Hole in the bucket = leak conductance
- Water pouring in = input current
- Bucket overflowing = spike
If you pour water faster than it leaks out, the bucket eventually overflows. If you stop pouring, the water level drops back down. The time constant determines how quickly the bucket drains.
This analogy explains why timing matters. Two inputs arriving together are more effective than the same inputs spread apart—the first input’s effect hasn’t leaked away yet when the second arrives. (We’ll check this claim in code right after the implementation below.)
Implementation in Python
Let’s implement a LIF neuron from scratch:
```python
import numpy as np
import matplotlib.pyplot as plt

class LIFNeuron:
    def __init__(self, tau_m=20e-3, v_rest=-70e-3, v_thresh=-55e-3,
                 v_reset=-75e-3, R=10e6):
        """
        Leaky Integrate-and-Fire neuron.

        Parameters:
        - tau_m: membrane time constant (seconds)
        - v_rest: resting potential (volts)
        - v_thresh: spike threshold (volts)
        - v_reset: reset potential after spike (volts)
        - R: membrane resistance (ohms)
        """
        self.tau_m = tau_m
        self.v_rest = v_rest
        self.v_thresh = v_thresh
        self.v_reset = v_reset
        self.R = R
        self.v = v_rest  # Initialize at rest

    def step(self, I, dt):
        """
        Advance the neuron by one time step.

        Parameters:
        - I: input current (amperes)
        - dt: time step (seconds)

        Returns:
        - spike: True if neuron fired, False otherwise
        """
        # Euler integration of membrane equation
        dv = (-(self.v - self.v_rest) + self.R * I) / self.tau_m
        self.v += dv * dt

        # Check for spike
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True
        return False

    def simulate(self, I_trace, dt):
        """
        Simulate neuron response to input current trace.

        Parameters:
        - I_trace: array of input currents
        - dt: time step

        Returns:
        - v_trace: membrane potential over time
        - spike_times: times when spikes occurred
        """
        v_trace = []
        spike_times = []
        for i, I in enumerate(I_trace):
            v_trace.append(self.v)
            if self.step(I, dt):
                spike_times.append(i * dt)
        return np.array(v_trace), np.array(spike_times)

# Simulation parameters
dt = 0.1e-3  # 0.1 ms time step
T = 500e-3   # 500 ms total time
t = np.arange(0, T, dt)

# Create input current: step function
I_input = np.zeros_like(t)
I_input[(t > 50e-3) & (t < 450e-3)] = 2e-9  # 2 nA from 50-450 ms

# Create and simulate neuron
neuron = LIFNeuron()
v_trace, spike_times = neuron.simulate(I_input, dt)

# Plot results
fig, axes = plt.subplots(2, 1, figsize=(10, 6), sharex=True)
axes[0].plot(t * 1000, I_input * 1e9, 'b-')
axes[0].set_ylabel('Input Current (nA)')
axes[0].set_title('LIF Neuron Response')
axes[1].plot(t * 1000, v_trace * 1000, 'k-')
axes[1].axhline(y=-55, color='r', linestyle='--', label='Threshold')
axes[1].set_xlabel('Time (ms)')
axes[1].set_ylabel('Membrane Potential (mV)')
axes[1].legend()
plt.tight_layout()
plt.show()

print(f"Neuron fired {len(spike_times)} spikes")
print(f"Mean firing rate: {len(spike_times) / 0.4:.1f} Hz")  # 0.4 s = stimulus duration (50-450 ms)
```
The F-I Curve: Input-Output Relationship
A fundamental characterization of any neuron model is its frequency-current (F-I) curve: how does firing rate depend on input current?
For the LIF neuron, we can derive this analytically. For constant input current $I$, integrating the membrane equation forward from the reset value gives
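$$V(t) = V_{rest} + R \cdot I + \left(V_{reset} - V_{rest} - R \cdot I\right) e^{-t/\tau_m}$$

Setting $V(T) = V_{th}$ and solving for the interspike interval $T$ gives the firing rate: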
$$f = \left[ \tau_m \ln\left(\frac{R \cdot I + V_{rest} - V_{reset}}{R \cdot I + V_{rest} - V_{th}}\right) \right]^{-1}$$
This only applies when $I$ exceeds the rheobase $I_{rheo} = (V_{th} - V_{rest})/R$ (1.5 nA for the parameters used here). Below that, the steady-state voltage never reaches threshold and the neuron stays silent.
```python
def lif_firing_rate(I, tau_m=20e-3, v_rest=-70e-3, v_thresh=-55e-3,
                    v_reset=-75e-3, R=10e6):
    """Calculate theoretical LIF firing rate for constant current."""
    v_inf = v_rest + R * I  # Steady-state voltage
    if v_inf <= v_thresh:
        return 0  # Below threshold, no firing
    # Interspike interval: time to reach threshold from reset
    T_isi = tau_m * np.log((v_inf - v_reset) / (v_inf - v_thresh))
    return 1 / T_isi

# Plot F-I curve
I_range = np.linspace(0, 5e-9, 100)
rates = [lif_firing_rate(I) for I in I_range]

plt.figure(figsize=(8, 5))
plt.plot(I_range * 1e9, rates, 'b-', linewidth=2)
plt.xlabel('Input Current (nA)')
plt.ylabel('Firing Rate (Hz)')
plt.title('LIF Neuron F-I Curve')
plt.grid(True, alpha=0.3)
plt.show()
```
The F-I curve shows a sharp threshold followed by a roughly linear increase. This is a key feature: above threshold, the neuron acts approximately as a threshold-linear unit, a simple input-output shape that makes network-level analysis tractable.
Adding Realism: Extensions
The basic LIF model is a starting point. Here are common extensions:
Refractory Period
Real neurons can’t fire again immediately after a spike. Add an absolute refractory period:
```python
def step_with_refractory(self, I, dt):
    # Assumes self.refractory_timer = 0.0 is set in __init__
    if self.refractory_timer > 0:
        self.refractory_timer -= dt
        return False
    # Normal LIF dynamics
    dv = (-(self.v - self.v_rest) + self.R * I) / self.tau_m
    self.v += dv * dt
    if self.v >= self.v_thresh:
        self.v = self.v_reset
        self.refractory_timer = 2e-3  # 2 ms absolute refractory period
        return True
    return False
```
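One consequence worth noting: with an absolute refractory period $t_{ref}$, the interspike interval becomes $t_{ref} + T_{isi}$, so the firing rate

$$f = \frac{1}{t_{ref} + T_{isi}}$$

saturates at $1/t_{ref}$ (500 Hz for the 2 ms value above) rather than growing without bound as the input increases.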
Exponential Integrate-and-Fire
Add an exponential term that models the rapid spike upstroke:
$$\tau_m \frac{dV}{dt} = -(V - V_{rest}) + \Delta_T \exp\left(\frac{V - V_T}{\Delta_T}\right) + R \cdot I$$
This creates more realistic spike initiation: once $V$ passes $V_T$, the exponential term takes over and drives the upstroke. Combined with the adaptation current described below, it gives the widely used AdEx model.
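As a rough sketch of how the update step changes (a hypothetical `eif_step` method for the class above; the `delta_T`, `v_T`, and `v_cutoff` values are illustrative assumptions, not from the text):

```python
# Sketch: one Euler step of an exponential integrate-and-fire neuron.
# delta_T, v_T, and v_cutoff are illustrative values, not from the text.
def eif_step(self, I, dt, delta_T=2e-3, v_T=-55e-3, v_cutoff=0.0):
    dv = (-(self.v - self.v_rest)
          + delta_T * np.exp((self.v - v_T) / delta_T)
          + self.R * I) / self.tau_m
    self.v += dv * dt
    # The exponential term is self-amplifying past v_T, so spikes are
    # detected at a high cutoff rather than at the soft threshold v_T itself.
    if self.v >= v_cutoff:
        self.v = self.v_reset
        return True
    return False
```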
Adaptation
Real neurons often slow down their firing over time. Add an adaptation current:
$$\tau_w \frac{dw}{dt} = a(V - V_{rest}) - w$$
The adaptation variable $w$ grows with depolarization, jumps by a fixed increment $b$ at each spike, and acts as negative feedback that reduces excitability.
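A minimal sketch of bolting this onto the LIF update (a hypothetical `adaptive_step` method; the values of `a`, `b`, and `tau_w` are illustrative assumptions, not from the text):

```python
# Sketch: LIF step with spike-frequency adaptation. Assumes __init__ also sets
# self.w = 0.0, self.a = 10e-9 (siemens), self.b = 50e-12 (amps),
# self.tau_w = 100e-3 (seconds); these values are illustrative, not from the text.
def adaptive_step(self, I, dt):
    # w acts as an outward current subtracted from the input
    dv = (-(self.v - self.v_rest) + self.R * (I - self.w)) / self.tau_m
    dw = (self.a * (self.v - self.v_rest) - self.w) / self.tau_w
    self.v += dv * dt
    self.w += dw * dt
    if self.v >= self.v_thresh:
        self.v = self.v_reset
        self.w += self.b  # spike-triggered adaptation increment
        return True
    return False
```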
Biological Grounding
How well does the LIF model capture real neuron behavior?
What it gets right:
- Threshold behavior
- Integration of inputs over time
- Membrane time constant dynamics
- Basic input-output relationships
What it misses:
- Spike shape (LIF spikes are instantaneous)
- Dendritic computation
- Ion channel dynamics
- Bursting and other complex patterns
For many purposes, what it gets right is enough. The LIF model has successfully explained phenomena from sensory coding to decision-making.
Using Brian2 for Larger Simulations
For networks of LIF neurons, use Brian2:
```python
from brian2 import *
import matplotlib.pyplot as plt

# Parameters
N = 1000  # Number of neurons
tau = 20*ms
v_rest = -70*mV
v_thresh = -55*mV
v_reset = -75*mV

# Neuron equations
eqs = '''
dv/dt = (v_rest - v + R*I) / tau : volt
I : amp
R : ohm
'''

# Create neuron group
neurons = NeuronGroup(N, eqs, threshold='v > v_thresh',
                      reset='v = v_reset', method='euler')
neurons.v = v_rest
neurons.R = 10*Mohm
# Random suprathreshold input currents (rheobase is 1.5 nA for these parameters)
neurons.I = '(1.5 + 1.5*rand()) * nA'

# Record spikes and voltage
spikes = SpikeMonitor(neurons)
voltage = StateMonitor(neurons, 'v', record=[0, 1, 2])

# Run simulation
run(500*ms)

# Plot raster
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(spikes.t/ms, spikes.i, '.k', markersize=1)
plt.xlabel('Time (ms)')
plt.ylabel('Neuron index')
plt.title('Population Activity')
plt.subplot(122)
for i in range(3):
    plt.plot(voltage.t/ms, voltage.v[i]/mV, label=f'Neuron {i}')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (mV)')
plt.legend()
plt.title('Example Traces')
plt.tight_layout()
plt.show()
```
Practical Applications
The LIF model is used extensively in:
Theoretical Neuroscience
Understanding how neural circuits compute. The mathematical tractability of LIF neurons allows analytical results that guide intuition.
Neuromorphic Engineering
Hardware implementations of LIF neurons power chips like Intel’s Loihi. The simple dynamics translate efficiently to silicon.
Machine Learning
Spiking neural networks often use LIF neurons as their basic unit. The model provides a good balance of biological plausibility and computational efficiency.
Brain-Machine Interfaces
Decoding neural signals often assumes LIF-like dynamics. Understanding the model helps interpret real neural recordings.
Why LIF Endures
The LIF model has survived for over 100 years because it captures a fundamental truth: neurons are threshold devices that integrate information over time. Everything else is details.
This doesn’t mean the details don’t matter—they do, for specific questions. But for understanding the computational principles of neural systems, the LIF model remains unmatched in its combination of simplicity and insight.
As Gerstner and colleagues write: “The art of modeling is to include just enough detail to capture the phenomenon of interest, and no more.”