Birds-of-a-Feather Panel Session on Energy-Aware Neural Engineering

Wednesday | May 28, 2025 | 11:00 - 12:30

This panel session delves into pressing questions in brain-inspired and neural systems. It offers a distinctive platform for distinguished scientists and engineers from both industry and academia to engage in an interdisciplinary dialogue on the crucial subject of energy-efficient operation in machines and biological brains. Following short presentations, the floor will be open to audience questions and further discussion.

Goal: Bring together engineers and neuroscientists to share insights on energy-aware learning in biology and machines, fostering cross-disciplinary interaction and a deeper understanding of brain-inspired computation.

Covered Topics:

  • Bio-inspiration and breakthroughs required for in-memory computing (IMC) to become a transformative technology for deep learning
  • Insights from artificial and biological neural networks for edge intelligence
  • The thermodynamics of sentience and the free energy principle; achieving the lower limit on thermodynamic work by minimizing computational complexity
  • The metabolic cost of learning, comparisons with computational models, and energy-saving strategies for plasticity
  • Concepts of locality and sparsity to reduce the energy footprint; exploitation of analog RRAMs for efficient computation
  • Event-driven computation, with SpiNNaker system examples; exploiting the spatio-temporal sparsity inherent in biological neural systems

Session Chairs: Ali Muhtaroglu, Bipin Rajendran, Gert Cauwenberghs, Mark Van Rossum

Format:

  • First Round: Each panelist delivers a 10-minute introductory presentation (pitch) on one of the topics above; the corresponding abstracts are listed below. Each panelist will be introduced by one of the session chairs proposing the session. The panelist presentations will be followed by a 10-minute break.
  • Second Round: The floor will be opened for audience questions and panelist responses. The discussion will be led by one of the session chairs.

Rationale: Why is the topic novel, why is it relevant to the ISCAS community, and how does it fit within the innovation themes?

The proposed Birds-of-a-Feather panel on Energy-Aware Neural Engineering is both novel and timely, given the major thrust in emerging brain-inspired architectures, deep-learning technologies, energy-efficient machine learning systems, and neuromorphic ICs aimed at integrating intelligence and autonomy into rapidly growing applications such as healthcare and rapid diagnostics, the internet of everything, autonomous systems, robotics, vehicles, and wearables. The session therefore not only aligns with the ISCAS 2025 innovation themes but also provides a venue for distinguished scientists and industry experts active in this field to share their perspectives in an interactive, cross-disciplinary format. The proposing committee has prioritized diversity of discipline, topic, and gender in inviting the panelists. As part of the panel's novelty, prominent neuroscientists will join the discussion of how engineers can draw further inspiration from the brain in understanding and optimizing the energy consumed by learning processes.

  • Can in-memory computing ever become a transformative technology for deep learning?

    In-memory computing (IMC) is an emerging paradigm that addresses the processor-memory dichotomy in modern computing systems and is particularly well suited to deep neural networks (DNNs). IMC leverages the physical attributes and array-level organization of memory devices to perform synaptic operations, often involving analog computation and emerging memory devices. The approach is inspired by computational principles observed in the human brain, such as hard-wired neural networks and analog processing. I would like to share my thoughts on where the field stands and what additional bio-inspiration and breakthroughs are required for IMC to become a transformative technology for deep learning.
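
    As a rough illustration of the array-level principle (a minimal sketch in Python, assuming an idealized crossbar and a simple Gaussian noise model for device nonidealities; the function name and parameters are illustrative, not tied to any specific technology):

        # Sketch: analog in-memory matrix-vector multiplication (MVM).
        # Weights are stored as crossbar conductances; inputs are applied as
        # voltages, and each column current sums its contributions in one
        # step (Ohm's and Kirchhoff's laws). Noise stands in for nonidealities.
        import numpy as np

        rng = np.random.default_rng(0)

        def imc_mvm(weights, inputs, noise_std=0.02):
            """One analog MVM on a noisy crossbar (hypothetical noise model)."""
            g = weights + rng.normal(0.0, noise_std, size=weights.shape)
            return g.T @ inputs  # current summation along each column

        W = rng.standard_normal((128, 64))  # synaptic weight matrix
        x = rng.standard_normal(128)        # input activations (voltages)
        print(np.linalg.norm(imc_mvm(W, x) - W.T @ x))  # deviation due to analog noise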

  • Merging insights from artificial and biological neural networks for neuromorphic edge intelligence

    The development of efficient bio-inspired algorithms and hardware currently lacks a clear framework. Should we start from the brain's computational primitives and figure out how to apply them to real-world problems (a bottom-up approach), or should we build on working AI solutions and fine-tune them to increase their biological plausibility (a top-down approach)? We will see why biological plausibility and hardware efficiency are often two sides of the same coin, and how neuroscience- and AI-driven insights can cross-feed each other toward neuromorphic edge intelligence. Applying these findings to real-world problems such as on-device learning and safety-critical scenarios, we will show how smart devices can adapt to their environment and users within a power budget of a few microwatts, or react to stimuli within just a few microseconds.

  • The thermodynamics of sentience

    What are the principles that underwrite sentient behaviour? This presentation uses the free energy principle to furnish an account in terms of active inference. The free energy principle formulates data assimilation and decision-making from the point of view of physics; in particular, the properties that self-organising systems—that distinguish themselves from their world—must possess. The narrative starts with a heuristic proof suggesting that life—or self-organization to nonequilibrium steady-states—is an emergent property of any dynamical system that possesses a Markov blanket. Crucially, a Markov blanket equips the system with a particular Lagrangian called variational free energy. 
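
    For reference, the variational free energy invoked above is commonly written as follows (a standard formulation, given here in LaTeX; notation varies across the literature), with o denoting observations, s hidden states, and q(s) an approximate posterior:

        F[q] = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right]
             = D_{\mathrm{KL}}\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o)

    Since the KL divergence is non-negative, F upper-bounds surprisal, -\ln p(o); minimizing F with respect to q therefore performs the approximate Bayesian inference (data assimilation) described above.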

  • Energy Efficient Learning in Neural Networks

    The brain is one of the most energy-intensive organs. Some of this energy is used for neural information processing; however, fruit-fly experiments have shown that learning itself is also metabolically costly. First, we will present estimates of this cost, introduce a general model of it, and compare it to costs in computers. Next, we turn to a supervised artificial-network setting and explore a number of strategies that can save the energy needed for plasticity, whether by modifying the cost function, by restricting plasticity, or by using less costly transient forms of plasticity. Finally, we will discuss adaptive strategies and their possible relevance for biological learning.
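
    As a toy illustration of one such strategy (a minimal sketch in Python; the metabolic cost proportional to the size of each weight change is an assumption for illustration, and the thresholding rule is a simple way to restrict plasticity, not the speakers' specific method):

        # Sketch: train a linear neuron while suppressing small weight updates,
        # tracking an assumed metabolic cost proportional to sum |delta w|.
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal((200, 10))
        y = x @ rng.standard_normal(10)         # teacher targets
        w = np.zeros(10)                        # student weights

        lr, thresh, energy = 0.01, 1e-4, 0.0
        for _ in range(200):
            grad = x.T @ (x @ w - y) / len(x)   # MSE gradient
            dw = -lr * grad
            dw[np.abs(dw) < thresh] = 0.0       # restrict plasticity: skip tiny updates
            w += dw
            energy += np.abs(dw).sum()          # accumulated plasticity cost
        print(f"MSE {np.mean((x @ w - y) ** 2):.4f}, plasticity energy {energy:.2f}")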

  • Analog substrates for temporal and local event-based computation

    The main source of energy consumption in neural network hardware is data movement: the amount of data transferred and the distance it travels. Spiking neural network (SNN) hardware addresses this by encoding information sparsely in event timing and by co-locating computation and memory, thereby minimizing data movement. Recent advances in memory technologies, particularly resistive random-access memory (RRAM), have further enabled computation within memory, reducing energy overheads.
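
    As a minimal sketch of the event-driven principle (illustrative Python with made-up parameters; no particular SNN or RRAM hardware is implied):

        # Sketch: a leaky integrate-and-fire (LIF) neuron updated only when an
        # input spike arrives, so computation scales with event count rather
        # than with wall-clock time steps.
        import math

        def lif_events(events, tau=20.0, v_thresh=1.0):
            """events: time-sorted (time, weight) spikes; returns output spike times."""
            v, t_last, out = 0.0, 0.0, []
            for t, w in events:
                v *= math.exp(-(t - t_last) / tau)  # decay only between events
                v += w                              # integrate incoming spike
                t_last = t
                if v >= v_thresh:                   # threshold crossing: emit event
                    out.append(t)
                    v = 0.0                         # reset membrane
            return out

        print(lif_events([(1.0, 0.6), (3.0, 0.5), (40.0, 0.4), (41.0, 0.7)]))  # -> [3.0, 41.0]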

  • Event-driven computation in the SpiNNaker system

    The million-core SpiNNaker machine, developed at the University of Manchester primarily for brain-modelling applications, has supported an open neuromorphic computing platform under the auspices of the EU Flagship Human Brain Project since 2016. The machine's central features are a flexible multicast communication system that conveys neural spike information as small packets and an event-driven execution model that exploits the spatio-temporal sparsity inherent in biological neural systems.
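
    As a schematic of these two ideas (a minimal sketch in Python that mimics the concept only; it is not SpiNNaker's actual software stack, packet format, or API):

        # Sketch: spikes as tiny source-keyed multicast packets, with work
        # performed only when a packet arrives (event-driven execution).
        from collections import defaultdict

        routing_table = defaultdict(list)   # source neuron key -> subscriber cores
        handlers = {}                       # core id -> packet handler

        def subscribe(key, core):
            routing_table[key].append(core)

        def emit_spike(key):
            for core in routing_table[key]: # multicast: one packet, many targets
                handlers[core](key)         # handler runs only on packet arrival

        handlers["core0"] = lambda k: print(f"core0 received spike {k}")
        handlers["core1"] = lambda k: print(f"core1 received spike {k}")
        subscribe(42, "core0")
        subscribe(42, "core1")
        emit_spike(42)                      # one source event fans out to both cores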