
MH-FLOCKE

A robot dog that learns to walk, see, and play — not through programming, but through spiking neurons, a cerebellar forward model, and embodied experience.

65+
Cognitive Modules
232–4,624
Spiking Neurons
3.5×
Beats PPO Baseline
Sim→Real
Same Brain, Real Hardware

New: SNN Running on Real Hardware — Freenove Robot Dog

Sim-to-Real: From Simulation to Walking Robot

MH-FLOCKE runs on real hardware. The Freenove Robot Dog Kit (~100€) with a Raspberry Pi 4 runs the same SNN and cerebellum code as the MuJoCo simulator — one codebase, two platforms.

A brain trained in simulation transfers directly to the real robot via a single file (brain.pt). The SNN continues learning on the Pi through R-STDP and cerebellar adaptation, driven by real IMU data from the MPU6050.

232 neurons, 933 synapses, 29Hz control loop on a Raspberry Pi 4. Cerebellar climbing fiber responds to real orientation errors. Zero falls.
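The climbing-fiber response to orientation errors can be sketched as a simple mapping from IMU roll/pitch deviation to a teaching-signal rate. The function name, gain, and rate cap below are illustrative assumptions, not the project's actual code:

```python
import math

def climbing_fiber_rate(roll, pitch, target_roll=0.0, target_pitch=0.0,
                        gain=2.0, max_rate=10.0):
    """Map IMU orientation error (radians) to a climbing-fiber firing
    rate in Hz: the further the body tilts from its upright target,
    the stronger the teaching signal sent to the Purkinje cells.
    (Hypothetical mapping; gain and cap are made-up values.)"""
    err = math.hypot(roll - target_roll, pitch - target_pitch)
    return min(max_rate, gain * err / math.pi * max_rate)

# A level robot produces almost no teaching signal,
# while a 30-degree tilt drives the climbing fiber hard.
print(climbing_fiber_rate(0.01, 0.02))
print(climbing_fiber_rate(math.radians(30.0), 0.0))
```

On real hardware this error would come straight from the MPU6050 readings each control tick, so the cerebellum keeps adapting to the physical body rather than the simulated one.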

The live web dashboard shows all 6 cerebellar populations (MF, GrC, GoC, PkC, DCN, OUT) with real-time spike activity, servo angles, and competence gate — directly from the running SNN on the Pi.

Go2 in Simulation: Ball Contact Through Biological Learning

What Is MH-FLOCKE?

MH-FLOCKE is a scientific experiment: Can an artificial system develop genuine understanding — not through programming, but through embodied experience?

The system receives a body (a Unitree Go2 quadruped in MuJoCo simulation), a world, and neurons. No calibration. No motor mapping. No hardcoded strategies. It must discover what it is, what it can do, and what the world is.

Every component is established neuroscience. The integration is new: nobody has assembled the complete system before.

MH-FLOCKE implements spiking neural networks with R-STDP, a Marr-Albus-Ito cerebellar forward model, central pattern generators, and a Free Energy framework — all running simultaneously in a 15-step cognitive cycle.

The result: a quadruped that learns to walk within minutes, navigates toward interesting objects, and develops emergent behaviors like sniff → walk → trot → chase → alert — without any of these being programmed.

Architecture

A biologically grounded cognitive architecture — from spinal reflexes to metacognition.

🧠

Spiking Neural Network

232–4,624 Izhikevich neurons in cerebellar architecture. Populations: mossy fibers, granule cells, Golgi, Purkinje, DCN. Learning via R-STDP.
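A minimal sketch of the Izhikevich neuron update driving such a population, using the standard regular-spiking parameters from the 2003 model; the drive current and simulation length are arbitrary, not the project's values:

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the Izhikevich neuron model.
    v: membrane potential (mV), u: recovery variable, I: input current.
    Defaults are the classic 'regular spiking' parameters.
    Returns (v, u, spiked)."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    if v >= 30.0:              # spike threshold: reset and adapt
        return c, u + d, True
    return v, u, False

# Drive one neuron with a constant current and count spikes over 1 s.
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):          # 1000 ms at dt = 1 ms
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
print(spikes)
```

The same update, vectorized over a few hundred to a few thousand neurons, is cheap enough to explain why a 232-neuron network fits in a real-time loop on a Raspberry Pi.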

🔄

Cerebellar Forward Model

Marr-Albus-Ito architecture predicts motor outcomes. Climbing fiber error signals drive LTD/LTP in Purkinje cells.
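The LTD/LTP rule can be sketched for a single parallel-fiber-to-Purkinje synapse. The learning rates, bounds, and function name are illustrative assumptions in the spirit of Marr-Albus-Ito, not the project's implementation:

```python
def update_pf_pk_weight(w, pf_active, cf_spike, ltd=0.01, ltp=0.001,
                        w_min=0.0, w_max=1.0):
    """Plasticity at one parallel-fiber -> Purkinje synapse.
    Conjunction of parallel-fiber activity and a climbing-fiber error
    spike drives LTD; parallel-fiber activity alone slowly recovers
    the weight via LTP. (Rates and bounds are made-up values.)"""
    if pf_active and cf_spike:
        w -= ltd               # error-driven depression
    elif pf_active:
        w += ltp               # slow recovery while predictions hold
    return min(w_max, max(w_min, w))

# 20 steps of climbing-fiber errors, then 80 error-free steps:
w = 0.5
for step in range(100):
    w = update_pf_pk_weight(w, pf_active=True, cf_spike=(step < 20))
print(round(w, 3))  # depressed during the error burst, then recovering
```

The asymmetry (fast LTD, slow LTP) is what lets the forward model converge on motor predictions quickly while still forgetting stale ones.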

🦿

Spinal CPG + Reflexes

Central Pattern Generators produce rhythmic gaits. Spinal reflexes handle righting and cross-extension. PD controller bridges to Go2 torques.
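A toy version of the CPG-plus-PD pipeline: four phase-offset oscillators produce hip targets with diagonal legs locked in phase (a trot), and a PD controller turns targets into torques. The frequencies, amplitudes, and gains are made-up values, not the Go2 tuning:

```python
import math

def cpg_targets(t, freq=2.0, amp=0.4, phases=(0.0, math.pi, math.pi, 0.0)):
    """Hip joint targets (radians) at time t from four oscillators.
    Diagonal leg pairs share a phase, giving a trot gait.
    (Illustrative frequency, amplitude, and phase values.)"""
    return [amp * math.sin(2.0 * math.pi * freq * t + p) for p in phases]

def pd_torque(target, angle, velocity, kp=40.0, kd=1.0):
    """PD bridge from a CPG joint target to a motor torque."""
    return kp * (target - angle) - kd * velocity

# One control tick: read targets, compute torques for a standing robot.
targets = cpg_targets(t=0.1)
torques = [pd_torque(q_t, 0.0, 0.0) for q_t in targets]
print(torques)
```

Note the division of labor the card describes: the rhythm lives in the oscillators, reflexes and the learned SNN policy only modulate it, and the PD layer is the sole place where torques are computed.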

🤖

Sim-to-Real Transfer

Same codebase on Pi and simulator. Brain trained in MuJoCo runs on Freenove Robot Dog. Cerebellar learning continues on real hardware.

💊

Neuromodulation

Dopamine, serotonin, norepinephrine, acetylcholine dynamically modulate learning, exploration, and arousal.

🌐

Global Workspace

Sensory, motor, predictive, error, and memory modules compete for broadcast. Metacognition monitors consciousness level.

4 Changes That Made Ball Contact Work

01

Task-Specific Prediction Error

State-based distance signal: TPE = (ball_dist - 3.0) / 3.0. Clear gradient toward the ball.

02

Vision Boost

When TPE exceeds 0.05, last 16 input neurons amplified by TPE × 0.5. Higher error = stronger sensory drive.

03

R-STDP Sign Fix

combined = 0.1 × reward + 0.9 × (−PE). Approaching reduces PE → positive reinforcement.

04

Ball Curriculum

5 stages from (1.5m, 0°) to (3.0m, 34°). Advance when ball_dist_min < 0.5m. Two advances in 100k steps.
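The three signal-level changes above can be sketched together in a few lines. I read "amplified by TPE × 0.5" as a multiplicative gain of 1 + TPE × 0.5, which, like the function names, is an assumption rather than the project's code:

```python
def task_prediction_error(ball_dist):
    """Change 01: normalized distance signal. 3.0 m is the far spawn
    distance, so TPE falls toward -1 as the robot closes on the ball."""
    return (ball_dist - 3.0) / 3.0

def vision_boost(inputs, tpe, threshold=0.05, gain=0.5):
    """Change 02: when the error is large enough, amplify the last 16
    input neurons so visual drive scales with distance to the ball.
    (Gain form 1 + TPE * 0.5 is an assumed reading of 'amplified by'.)"""
    if tpe <= threshold:
        return inputs
    scale = 1.0 + tpe * gain
    return inputs[:-16] + [x * scale for x in inputs[-16:]]

def combined_signal(reward, tpe):
    """Change 03: the R-STDP reinforcement signal. Negating the
    prediction error makes approach (falling TPE) read as reward."""
    return 0.1 * reward + 0.9 * (-tpe)

# Halfway to the ball, TPE is negative, so the combined signal
# reinforces the synapses that produced the approach.
tpe = task_prediction_error(ball_dist=1.5)
print(round(combined_signal(reward=0.0, tpe=tpe), 3))
```

The curriculum (change 04) then only has to move the spawn point; the same three signals keep producing a usable gradient at each stage.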

Results

3.5×
Outperforms PPO baseline
10-seed ablation, SNN+Cerebellum
0.8cm
Closest ball approach
Global minimum, 100k steps
29Hz
Real-time on Raspberry Pi
232 neurons, PyTorch CPU-only
0
Falls on real hardware
Freenove Robot Dog, IMU-driven