Initial commit

Co-authored-by: Aradhana Dube <a.dube@rug.nl>
Co-authored-by: Renzo I. Barraza Altamirano <r.i.barraza.altamirano@rug.nl>
Co-authored-by: Paolo Gibertini <p.gibertini@rug.nl>
Co-authored-by: Luca D. Fehlings <l.d.fehlings@rug.nl>
2026-02-26 18:30:32 +01:00
commit 9fabbdefc0
75 changed files with 447515 additions and 0 deletions

211
.gitignore vendored Normal file

@@ -0,0 +1,211 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py.cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
#poetry.toml
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
# https://pdm-project.org/en/latest/usage/project/#working-with-version-control
#pdm.lock
#pdm.toml
.pdm-python
.pdm-build/
# pixi
# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
#pixi.lock
# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
# in the .venv directory. It is recommended not to include this directory in version control.
.pixi
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.envrc
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
docs-site/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. Here the more
# nuclear option is taken and the entire .idea folder is ignored:
.idea/
# Abstra
# Abstra is an AI-powered process automation framework.
# Ignore directories containing user credentials, local state, and settings.
# Learn more at https://abstra.io/docs
.abstra/
# Visual Studio Code
# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
# and can be added to the global gitignore or merged into this file. Here the
# entire .vscode folder is ignored:
.vscode/
# System
.DS_Store
Thumbs.db
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc
# Cursor
# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
# refer to https://docs.cursor.com/context/ignore-files
.cursorignore
.cursorindexingignore
# Marimo
marimo/_static/
marimo/_lsp/
__marimo__/

15
.pre-commit-config.yaml Normal file

@@ -0,0 +1,15 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.6
hooks:
- id: ruff-check
args: [ --fix ]
files: ^(src|scripts|tests)/.*\.(py|ipynb)$
- id: ruff-check
name: Sort imports
args: [ --select, I, --fix ]
files: ^(src|scripts|tests)/.*\.(py|ipynb)$
- id: ruff-format
files: ^(src|scripts|tests)/.*\.(py|ipynb)$

1
.python-version Normal file

@@ -0,0 +1 @@
3.13

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026 University of Groningen (Fernando M. Quintana)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

89
README.md Normal file

@@ -0,0 +1,89 @@
# Felice
This project provides a [JAX](https://github.com/google/jax) implementation of the neuron models developed in the Felice project.
## Overview
The framework is built on top of diffrax and leverages JAX's automatic differentiation for efficient simulation and training of analogue models.
### Key Features
- **Delay learning**
- **WereRabbit Neuron Model**: Implementation of a dual-state oscillatory neuron model with bistable dynamics
## Installation
Felice uses [uv](https://github.com/astral-sh/uv) for dependency management. To install:
```bash
uv sync
```
### CUDA Support (Optional)
For GPU acceleration with CUDA 13 (a `cuda12` extra is also available):
```bash
uv sync --extra cuda13
```
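### Quickstart
As a starting point, the training script in [scripts/networks/train.py](scripts/networks/train.py) builds a sequence classifier on the bundled reasoning dataset. A minimal sketch of that setup (the constructor arguments below are copied from the script and may evolve):
```python
import jax.random as jrandom

from felice.datasets.reasoning import ReasoningDataset
from felice.networks import Mamba, SequenceClassifier

dataset = ReasoningDataset()
model = SequenceClassifier(
    vocab_size=dataset.VOCAB_SIZE,
    d_model=64,   # model dimension, as in scripts/networks/train.py
    d_state=16,   # state dimension
    d_inner=32,   # latent dimension
    model_class=Mamba,
    key=jrandom.key(42),
)
```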
## Neuron Models
### WereRabbit
The WereRabbit neuron model implements a bistable oscillatory system with dual-state dynamics (x1, x2). Key characteristics:
- **Bistable dynamics**: Two stable states with smooth transitions
- **Event-based spiking**: Spikes are detected when the system reaches a fixpoint
- **Configurable parameters**: Bias current, scaling distance, and tolerance settings for the fixpoint calculation
Configuration parameters (illustrated in the sketch after this list):
- `Ibias`: Bias current (default: 300e-12 A)
- `scaling_distance`: Scaling factor for state attraction (default: 0.6)
- `rtol`, `atol`: Relative and absolute tolerances for spike detection
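A hypothetical instantiation sketch: the `WereRabbit` class name and keyword names below are assumptions inferred from this parameter list, not a confirmed API; see the `felice.neuron_models` API reference for the actual interface.
```python
# Hypothetical sketch only: class and keyword names are assumed
# from the parameter list above, not a confirmed API.
from felice import neuron_models

neuron = neuron_models.WereRabbit(  # assumed class name
    Ibias=300e-12,         # bias current in amperes (documented default)
    scaling_distance=0.6,  # scaling factor for state attraction (documented default)
    rtol=1e-7,             # relative tolerance for fixpoint detection (illustrative)
    atol=1e-8,             # absolute tolerance for fixpoint detection (illustrative)
)
```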
## Development
### Running Tests
```bash
uv run pytest tests
```
### Building Documentation
Documentation is built with MkDocs:
```bash
cd docs-site && uv run mkdocs serve
```
### Code Quality
The project uses pre-commit hooks for code formatting and linting with Ruff:
```bash
uv run pre-commit install
uv run pre-commit run --all-files
```
## Requirements
- Python >= 3.11 (3.13 is pinned in [.python-version](.python-version) for development)
- JAX >= 0.9.0
- Equinox >= 0.13.2
- Diffrax >= 0.7.0
See [pyproject.toml](pyproject.toml) for the complete list of dependencies.
## Contributing
Contributions are welcome! Please ensure:
1. Code passes all tests (`pytest`)
2. Code is formatted with Ruff (`pre-commit run --all-files`)
3. Type annotations are included for new functions
## License
- **Code** is licensed under the [MIT License](./LICENSE).
- **Documentation** is licensed under [CC BY 4.0](./docs-site/LICENSE).

398
docs-site/LICENSE Normal file

@@ -0,0 +1,398 @@
Copyright (c) 2026 University of Groningen (Fernando M. Quintana)
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.


@@ -0,0 +1,7 @@
# Datasets
::: felice.datasets
options:
show_root_heading: true
show_source: true
heading_level: 2


@@ -0,0 +1,9 @@
# API Reference
API documentation for Felice.
## Modules
- [Neuron Models](neuron_models.md) - Neuron model implementations
- [Solver](solver.md) - Zero-clipping solver
- [Datasets](datasets.md) - Built-in datasets


@@ -0,0 +1,8 @@
# Neuron Models
::: felice.neuron_models
options:
show_root_heading: true
show_source: true
members_order: source
heading_level: 2


@@ -0,0 +1,7 @@
# Solver
::: felice.solver
options:
show_root_heading: true
show_source: true
heading_level: 2

Binary file not shown.

Binary file not shown (image, 130 KiB).

Binary file not shown (image, 54 KiB).

Binary file not shown (image, 98 KiB).

Binary file not shown (image, 119 KiB).

33
docs-site/docs/index.md Normal file

@@ -0,0 +1,33 @@
# Felice
This project provides a [JAX](https://github.com/google/jax) implementation of the neuron models developed in the Felice project.
## Overview
The framework is built on top of diffrax and leverages JAX's automatic differentiation for efficient simulation and training of analogue models.
### Key Features
- **Delay learning**
- **Non-linear neuron models**
- [**WereRabbit Neuron Model**](neuron_models/wererabbit/index.md): Implementation of a dual-state oscillatory neuron model with bistable dynamics
- [**FHN Neuron Model**](neuron_models/fhn/index.md)
- [**Snowball Neuron Model**](neuron_models/snowball/index.md)
## 📦 Installation
Felice uses [uv](https://github.com/astral-sh/uv) for dependency management. To install:
```bash
uv sync
```
### CUDA Support (Optional)
For GPU acceleration with CUDA 13 (a `cuda12` extra is also available):
```bash
uv sync --extra cuda13
```
See the [examples](scripts/examples/neuron_models/) directory for more detailed usage examples.


@@ -0,0 +1,20 @@
window.MathJax = {
tex: {
inlineMath: [['$', '$'], ['\\(', '\\)']],
displayMath: [['$$', '$$'], ['\\[', '\\]']],
processEscapes: true,
processEnvironments: true,
tags: 'ams'
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.startup.output.clearCache()
MathJax.typesetClear()
MathJax.texReset()
MathJax.typesetPromise()
})

Binary file not shown (image, 1.2 MiB).

Binary file not shown (image, 1.2 MiB).

Binary file not shown (image, 1.3 MiB).

Binary file not shown (image, 1.3 MiB).

Binary file not shown (image, 1.2 MiB).

Binary file not shown (image, 1.2 MiB).

Binary file not shown (image, 1.4 MiB).


@@ -0,0 +1,4 @@
# Circuit implementing the FHN neuron
- The circuits in the schematics implement the FHN neuron described below.
- The FHN neuron implements the circuit described in Ribar, L. (2019), *Synthesis of neuromorphic circuits with neuromodulatory properties* [Apollo - University of Cambridge Repository], https://doi.org/10.17863/CAM.53750. The OTA and CMFB are well-known designs that can be found in textbooks.

File diff suppressed because one or more lines are too long


@@ -0,0 +1,22 @@
# FitzHugh-Nagumo
## Circuit equation
$$
\begin{align}
C\frac{dv}{dt} &= I_{app} - I_{passive} - I_{fast} - I_{slow} \\
\frac{dv_{slow}}{dt} &= \frac{v - v_{slow}}{\tau_{slow}} \\
\frac{dI_{app}}{dt} &= -\frac{I_{app}}{\tau_{syn}}
\end{align}
$$
where the currents are (see the simulation sketch after this list):
- $I_{passive} = g_{max}(v - E_{rev})$
- $I_{fast} = a_{fast} \tanh(v - v_{off,fast})$
- $I_{slow} = a_{slow} \tanh(v_{slow} - v_{off,slow})$
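For simulation, these dynamics translate directly into a diffrax vector field. A minimal sketch, mirroring the interactive notebook referenced below; the numeric parameter values are illustrative assumptions, not calibrated circuit values:
```python
import diffrax as dfx
import jax.numpy as jnp


def fhn_vector_field(t, y, args):
    # State: membrane voltage v, slow variable v_slow, decaying input current i_app.
    v, v_slow, i_app = y
    i_passive = args["g_max"] * (v - args["E_rev"])
    i_fast = args["a_fast"] * jnp.tanh(v - args["v_off_fast"])
    i_slow = args["a_slow"] * jnp.tanh(v_slow - args["v_off_slow"])
    dv = (i_app - i_passive - i_fast - i_slow) / args["C"]
    dv_slow = (v - v_slow) / args["tau_slow"]
    di_app = -i_app / args["tau_syn"]
    return jnp.array([dv, dv_slow, di_app])


# Illustrative values only (taken from the interactive notebook's slider
# defaults where available; tau_syn is an assumption).
args = {"g_max": 1.0, "E_rev": 0.0, "a_fast": -2.0, "v_off_fast": 0.0,
        "a_slow": 2.0, "v_off_slow": 0.0, "tau_slow": 50.0, "tau_syn": 10.0,
        "C": 1.0}
sol = dfx.diffeqsolve(
    dfx.ODETerm(fhn_vector_field),
    dfx.Tsit5(),
    t0=0.0, t1=100.0, dt0=0.01,
    y0=jnp.array([2.0, 0.0, 0.0]),
    args=args,
    saveat=dfx.SaveAt(ts=jnp.linspace(0.0, 100.0, 1000)),
    stepsize_controller=dfx.PIDController(rtol=1e-6, atol=1e-8),
    max_steps=100_000,
)
v_trace = sol.ys[:, 0]  # membrane voltage at the saved time points
```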
## Examples
See the following interactive notebook for a practical example:
- [Basic Usage Example](fhn.ipynb) - Introduction to the FitzHugh-Nagumo model


@@ -0,0 +1,12 @@
# Neuron Models
Felice implements several non-linear neuron models for spiking neural networks.
## Available Models
| Model | Type | Key Features |
|-------|------|--------------|
| [WereRabbit](wererabbit/index.md) | Dual-state oscillatory | Bistable dynamics, predator-prey |
| [FitzHugh-Nagumo](fhn/index.md) | ... | ... |
| [Snowball](snowball/index.md) | Exponential Integrate-and-Fire neuron model | ... |
| [LIF](lif/index.md) | Leaky Integrate-and-Fire neuron model | ... |


@@ -0,0 +1,7 @@
# LIF
## Circuit Design
W/L = 4/3
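For reference, the ideal leaky integrate-and-fire dynamics this circuit approximates can be written as (textbook form; the symbols below are generic, not extracted from the schematic):
$$
C\frac{dV_{mem}}{dt} = -g_{leak}\,(V_{mem} - V_{rest}) + I_{in}
$$
with $V_{mem}$ reset to $V_{rest}$ whenever it crosses the firing threshold.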
## Circuit Simulation
![lif, output plot](/docs-site/docs/img/lif_plot.png)
Fig. 1: Dynamics of the leaky integrate-and-fire neuron. The grey trace shows the input spikes, the yellow trace the membrane potential, and the dark blue trace the output spikes of the neuron.
## References
1. Sourikopoulos I, Hedayat S, Loyez C, Danneville F, Hoel V, Mercier E and Cappy A (2017) A 4-fJ/Spike Artificial Neuron in 65 nm CMOS Technology. Front. Neurosci. 11:123. doi: 10.3389/fnins.2017.00123


@@ -0,0 +1,13 @@
# Snowball
## Circuit description
The circuit implementing the exponential integrate-and-fire neuron is taken from [1]; part (a) of Fig. 2 in [1] implements the neuron. The neuron receives input currents through the input DPI filter [2]. This input current is integrated on the node Vmem by the membrane capacitance. In the absence of an input spike, the membrane potential leaks at a rate set by the bias Vleak. The Vmem node is connected to a cascoded source follower formed by P14-15 and N5-6. The neuron's threshold voltage is set by the bias Vthr, which is compared to the membrane potential. When the membrane potential approaches the threshold, a positive-feedback block activates, exponentially increasing the membrane potential and causing the neuron to spike. As the neuron spikes, the membrane potential is reset to ground, and the refractory bias keeps the neuron from spiking during the refractory period, similar to a biological neuron. The circuit implemented for this experiment uses neither the adaptation mechanism nor the pulse extender implemented in [1]. The Vdd used in the simulation is 1 V. The neuron receives 5 nA input pulses with a pulse width of 100 μs.
Input current mirror W/L = 0.2 <br>
All other transistors W/L = 4/3
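For reference, the ideal exponential integrate-and-fire dynamics this circuit approximates can be written as (textbook form; the symbols below are generic, not extracted from the schematic):
$$
C\frac{dV_{mem}}{dt} = -g_{leak}\,(V_{mem} - V_{rest}) + g_{leak}\,\Delta_T\, e^{(V_{mem} - V_{thr})/\Delta_T} + I_{in}
$$
where $\Delta_T$ sets the sharpness of the exponential spike onset; after each spike, $V_{mem}$ is reset to ground as described above.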
## Circuit Simulation
![snowball, output plot](/docs-site/docs/img/exif_plot.png)
Fig. 1: Dynamics of the exponential integrate-and-fire neuron. The light blue trace shows the input spikes, the yellow trace the membrane potential, and the dark blue trace the output spikes of the neuron.
## References
1. Rubino, Arianna, Melika Payvand, and Giacomo Indiveri. "Ultra-low power silicon neuron circuit for extreme-edge neuromorphic intelligence." 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE, 2019.
2. Bartolozzi, Chiara, Srinjoy Mitra, and Giacomo Indiveri. "An ultra low power current-mode filter for neuromorphic systems and biomedical signal processing." 2006 IEEE Biomedical Circuits and Systems Conference. IEEE, 2006.


@@ -0,0 +1,66 @@
# WereRabbit
The WereRabbit neuron model consists of two coupled oscillators that follow a predator-prey dynamic with a switching along the diagonal of the phase plane. The variable $z$ in the circuit equations below represents the "moon phase": whenever it crosses the threshold, the rabbit (prey) becomes the predator.
## Circuit equation
$$
\begin{align}
C\frac{du}{dt} &= z I_{bias} - I_{n0} e^{\kappa v / U_t} [z + 0.26 (0.5 - u) z] - I_a \\
C\frac{dv}{dt} &= -z I_{bias} + I_{n0} e^{\kappa u / U_t} [z + 0.26 (0.5 - v) z] - I_a \\
z &= \tanh(\rho (u-v)) \\
I_a &= \sigma I_{bias}
\end{align}
$$
| **Parameter** | **Symbol** | **Definition** | **Value** |
|-----------|--------|------------|-------|
| Capacitance | $C$ | Circuit capacitance | $0.1\,pF$ |
| Bias current | $I_{bias}$ | DC bias current for the fixpoint location | $100\,pA$ |
| Leakage current | $I_{n0}$ | Transistor leakage current | $0.129\,pA$ |
| Subthreshold slope | $\kappa$ | Transistor subthreshold slope factor | $0.39$ |
| Thermal voltage | $U_t$ | Thermal voltage at room temperature | $25\,mV$ |
| Bias scale | $\sigma$ | Scaling factor for the distance between fixpoints | $0.6$ |
| Steepness | $\rho$ | Tanh steepness for the moon phase | $5$ |
## Abstraction
To simplify the analysis of the model for simulation purposes, we introduce a dimensionless time variable $\tau = t I_{bias}/C$, transforming the derivatives as $\frac{d}{dt} = \frac{I_{bias}}{C}\frac{d}{d\tau}$. Substituting this time transformation into the circuit equations above gives
$$
\begin{equation}
C\frac{I_{bias}}{C}\frac{du}{d\tau} = z I_{bias} - I_{n0} e^{\kappa v / U_t} [z + 0.26 (0.5 - u) z] - \sigma I_{bias}
\end{equation}
$$
And dividing by $I_{bias}$ on both sides:
$$
\begin{equation}
\frac{du}{d\tau} = z - \frac{I_{n0}}{I_{bias}} e^{\kappa v / U_t} [z + 0.26 (0.5 - u) z] - \sigma
\end{equation}
$$
Obtaining the following set of equations:
$$
\begin{align}
z &= \tanh(\rho (u-v)) \\
\frac{du}{d\tau} &= z - z \alpha e^{\beta v} [1 + \gamma (0.5 - u)] - \sigma \\
\frac{dv}{d\tau} &= -z + z \alpha e^{\beta u} [1 + \gamma (0.5 - v)] - \sigma
\end{align}
$$
| **Parameter** | **Definition** | **Value** |
|---------------|----------------|-----------|
| $\tau$ | $t I_{bias}/C$ | -- |
| $\alpha$ | $I_{n0}/I_{bias}$ | $0.00129$ |
| $\beta$ | $\kappa/U_t$ | $15.6$ |
| $\gamma$ | Coupling coefficient | $0.26$ |
| $\rho$ | Tanh steepness for the moon phase | $5$ |
| $\sigma$ | Scaling factor for the distance between fixpoints | 0.6 |
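In code, the dimensionless system maps directly onto a JAX vector field. A minimal sketch using the parameter values from the table above (the same form appears in the interactive notebook):
```python
import jax.numpy as jnp


def wererabbit_vector_field(t, state, args):
    u, v = state
    alpha, beta, gamma, rho, sigma = args
    z = jnp.tanh(rho * (u - v))  # "moon phase" switching term
    du = z * (1.0 - alpha * jnp.exp(beta * v) * (1.0 + gamma * (0.5 - u))) - sigma
    dv = z * (-1.0 + alpha * jnp.exp(beta * u) * (1.0 + gamma * (0.5 - v))) - sigma
    return jnp.array([du, dv])


# (alpha, beta, gamma, rho, sigma) from the parameter table above
args = (0.00129, 15.6, 0.26, 5.0, 0.6)
dstate = wererabbit_vector_field(0.0, jnp.array([0.34, 0.38]), args)
```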
## Examples
See the following interactive notebook for a practical example:
- [Basic Usage Example](wererabbit.ipynb) - Introduction to the WereRabbit model

File diff suppressed because one or more lines are too long


@@ -0,0 +1,14 @@
.md-header__button.md-logo {
margin-top: 0;
margin-bottom: 0;
padding-top: 0;
padding-bottom: 0;
}
.md-header__button.md-logo img,
.md-header__button.md-logo svg {
display: block;
width: auto;
height: 2.4rem;
fill: currentcolor;
}

76
docs-site/mkdocs.yml Normal file

@@ -0,0 +1,76 @@
site_name: Felice models
theme:
name: material
logo: img/felice.png
favicon: img/felice.png
palette:
primary: teal
features:
- navigation.path
- navigation.indexes
- content.code.copy
- content.code.select
- content.code.annotate
copyright: >
Felice models © 2026 by University of Groningen (Fernando M. Quintana) |
Code is licensed under <a href="https://opensource.org/licenses/MIT">MIT License</a> | Docs are licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a><img src="https://mirrors.creativecommons.org/presskit/icons/cc.svg" alt="" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/by.svg" alt="" style="max-width: 1em;max-height:1em;margin-left: .2em;">
nav:
- Home: index.md
- Neuron Models:
- neuron_models/index.md
- WereRabbit:
- neuron_models/wererabbit/index.md
- Basic example: neuron_models/wererabbit/wererabbit.ipynb
- FitzHugh-Nagumo:
- neuron_models/fhn/index.md
- Example: neuron_models/fhn/fhn.ipynb
- Snowball:
- Description: neuron_models/snowball/index.md
- API Reference:
- api/index.md
- Neuron Models: api/neuron_models.md
- Solver: api/solver.md
- Datasets: api/datasets.md
markdown_extensions:
- pymdownx.arithmatex:
generic: true
- pymdownx.highlight:
anchor_linenums: true
line_spans: __span
pygments_lang_class: true
- pymdownx.inlinehilite
- pymdownx.snippets
- pymdownx.superfences
extra_css:
- stylesheets/extra.css
extra_javascript:
- javascripts/mathjax.js
- https://polyfill.io/v3/polyfill.min.js?features=es6
- https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js
plugins:
- search
- autorefs
- mkdocstrings:
default_handler: python
handlers:
python:
options:
docstring_style: google
show_source: true
show_root_heading: true
show_object_full_path: false
show_category_heading: true
show_symbol_type_heading: true
members_order: source
group_by_category: true
show_signature_annotations: true
separate_signature: false
- mkdocs-jupyter
- print-site

66
pyproject.toml Normal file

@@ -0,0 +1,66 @@
[project]
name = "felice"
version = "0.1.0"
description = "JAX implementation of the Felice neuron models"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"boilerplot@git+https://github.com/fehlings/boilerplot",
"diffrax>=0.7.0",
"einops>=0.8.2",
"equinox>=0.13.2",
"ipykernel>=6.0.0",
"ipywidgets>=8.1.8",
"jax>=0.9.0",
"jaxtyping>=0.3.4",
"marimo>=0.19.4",
"matplotlib>=3.10.8",
"optax>=0.2.6",
"pandas>=2.3.3",
"plotly>=6.5.2",
"pyarrow>=22.0.0",
"pyqt5>=5.15.11",
"scikit-learn>=1.8.0",
"seaborn>=0.13.2",
"tqdm>=4.67.1",
"wigglystuff>=0.2.14",
]
[project.optional-dependencies]
cuda13 = ["jax[cuda13]>=0.9.0"]
cuda12 = ["jax[cuda12]>=0.9.0"]
mps = ["jax-mps>=0.9.4"]
local = []
[project.scripts]
felice = "felice.gui.launcher:main"
[dependency-groups]
dev = [
"mkdocs>=1.6.1",
"mkdocs-jupyter>=0.25.0",
"mkdocs-material>=9.7.0",
"mkdocs-print-site-plugin>=2.8",
"mkdocs-to-pdf>=0.10.1",
"mkdocstrings-python>=2.0.1",
"pre-commit>=4.5.1",
"pymdown-extensions>=10.20",
"pytest>=9.0.2",
"pytest-sugar>=1.1.1",
"ruff>=0.14.9",
"tox>=4.33.0",
"tox-uv>=1.29.0",
]
[tool.ruff]
extend-include = ["*.ipynb"]
[tool.ruff.lint]
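# F722 is a known false positive raised by jaxtyping's shape-string annotations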
ignore = ["F722"]
[tool.mypy]
disable_error_code = ["import-untyped"]
[build-system]
requires = ["uv_build>=0.9.11,<0.10.0"]
build-backend = "uv_build"

File diff suppressed because one or more lines are too long

Binary file not shown.


@@ -0,0 +1,247 @@
import marimo
__generated_with = "0.19.4"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
from wigglystuff import Slider2D
return Slider2D, mo
@app.cell
def _():
import diffrax
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
return diffrax, jax, jnp, np, plt
@app.cell
def _(diffrax, jax, jnp):
def vector_field(t, state, args):
u, v = state
alpha, beta, gamma, kappa, sigma, delta = args
z = jax.nn.tanh(kappa * (v - u))
# Prey dynamics
du = (1 - alpha * jnp.exp(beta * v) * (1 - gamma * (0.3 - u))) + sigma * z
# Predator dynamics
dv = (-1 + alpha * jnp.exp(beta * u) * (1 + gamma * (0.3 - v))) + sigma * z
return jnp.array([du, dv])
def compute_nullclines(vector_field, u_range, v_range, args, resolution=200):
"""
Compute nullclines
du/dt = 0 (u-nullcline)
dv/dt = 0 (v-nullcline)
"""
alpha, beta, gamma, kappa, sigma, delta = args
u_vals = jnp.linspace(u_range[0], u_range[1], resolution)
v_vals = jnp.linspace(v_range[0], v_range[1], resolution)
U, V = jnp.meshgrid(u_vals, v_vals)
dU, dV = vector_field(0, [U, V], args)
return U, V, dU, dV
def solve(dyn, y0, p, T, n=500):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(dyn),
diffrax.Tsit5(),
t0=0.0,
t1=T,
dt0=0.0001,
y0=y0,
args=p,
saveat=diffrax.SaveAt(ts=jnp.linspace(0, T, n)),
stepsize_controller=diffrax.PIDController(rtol=1e-7, atol=1e-8),
max_steps=50000,
)
return sol.ts, sol.ys
return compute_nullclines, solve, vector_field
@app.cell
def _(Slider2D, mo):
# alpha = 0.5 # I_n0 / I_bias ratio
# beta = 0.39/0.025 # k / U_t (inverse thermal scale)
# gamma = 0.26 # coupling coefficient
# kappa = 5.0 # tanh steepness
# sigma = 0.6 # bias scaling (s * I_bias normalized)
# y0 = jnp.array([0.2, 0.4])
# ts, ys = solve(vector_field, y0, params, 140)
alpha = mo.ui.slider(
0.0004, 0.012, 0.00001, 0.00129, label="alpha", orientation="vertical"
)
beta = mo.ui.slider(
0.0, 30, 0.00001, 0.39 / 0.025, label="beta", orientation="vertical"
)
gamma = mo.ui.slider(0, 1, 0.01, 0.26, label="gamma", orientation="vertical")
kappa = mo.ui.slider(0, 30, 1.0, 10.0, label="kappa", orientation="vertical")
sigma = mo.ui.slider(0, 1, 0.01, 0.6, label="sigma", orientation="vertical")
delta = mo.ui.slider(1, 100.0, 1, 10, label="delta", orientation="vertical")
# v0 = mo.ui.slider(0, 1.0, 0.01, 0.3, label="v0")
# u0 = mo.ui.slider(0, 1.0, 0.01, 0.2, label="u0", orientation="vertical")
state0 = mo.ui.anywidget(
Slider2D(
x=0.34,
y=0.38,
width=150,
height=150,
x_bounds=(0.0, 0.6),
y_bounds=(0.0, 0.6),
)
)
mo.hstack(
[
mo.plain_text("""
alpha: I_n0 / I_bias ratio
beta: k / U_t ratio
gamma: coupling coefficient
kappa: tanh steepness
sigma: bias scaling (s * I_bias)
"""),
mo.hstack(
[state0, alpha, beta, gamma, kappa, sigma, delta], justify="start"
),
]
)
return alpha, beta, delta, gamma, kappa, sigma, state0
@app.cell
def _(
alpha,
beta,
compute_nullclines,
delta,
gamma,
jnp,
kappa,
np,
plt,
sigma,
solve,
state0,
vector_field,
):
params = (
alpha.value,
beta.value,
gamma.value,
kappa.value,
sigma.value,
delta.value,
)
ic_neuro = [state0.x, state0.y]
u_range = [0.0, 0.6]
v_range = [0.0, 0.6]
u_sparse = jnp.linspace(u_range[0], u_range[1], 20)
v_sparse = jnp.linspace(v_range[0], v_range[1], 20)
Us, Vs = jnp.meshgrid(u_sparse, v_sparse)
def plot_vf(ax, vector_field):
U, V, dU, dV = compute_nullclines(vector_field, u_range, v_range, params)
dUs, dVs = vector_field(0, [Us, Vs], params)
# Normalize for visualization
magnitude = np.sqrt(dUs**2 + dVs**2)
magnitude[magnitude == 0] = 1
dUs_norm = dUs / magnitude
dVs_norm = dVs / magnitude
# Nullclines
ax.contour(U, V, dU, levels=[0], colors="blue", linewidths=2, linestyles="-")
ax.contour(U, V, dV, levels=[0], colors="red", linewidths=2, linestyles="-")
ax.quiver(Us, Vs, dUs_norm, dVs_norm, magnitude, cmap="viridis", alpha=0.6)
# Trajectories
color = plt.cm.plasma(0.2)
ts, ys = solve(vector_field, jnp.array(ic_neuro), params, delta.value)
ax.plot(ys[:, 0], ys[:, 1], "-", color=color, linewidth=1.5, alpha=0.8)
ax.plot(ic_neuro[0], ic_neuro[1], "o", color=color, markersize=6)
ax.set_xlabel("u (Prey)")
ax.set_ylabel("v (Predator)")
ax.set_title("Wererabbit: Phase Portrait")
ax.legend(["u-nullcline (du/dt=0)", "v-nullcline (dv/dt=0)"], loc="upper right")
ax.set_xlim(u_range)
ax.set_ylim(v_range)
ax.axhline(y=0, color="gray", linestyle="--", alpha=0.3)
ax.axvline(x=0, color="gray", linestyle="--", alpha=0.3)
def plot_trj(ax, vector_field):
ts, ys = solve(vector_field, jnp.array(ic_neuro), params, delta.value)
ax.plot(ts, ys[:, 0], "b-", linewidth=2, label="u (Prey)")
ax.plot(ts, ys[:, 1], "r-", linewidth=2, label="v (Predator)")
ax.set_xlabel("Time τ")
ax.set_ylabel("Population")
ax.set_title(
f"Wererabbit: Time Series (IC: u₀={ic_neuro[0]:.2f}, v₀={ic_neuro[1]:.2f})"
)
ax.legend()
ax.axhline(y=0, color="gray", linestyle="--", alpha=0.3)
fig = plt.figure(figsize=(10, 4))
# --- Plot 1: Wererabbit Phase Portrait ---
ax1 = fig.add_subplot(1, 2, 1)
plot_vf(ax1, vector_field)
ax2 = fig.add_subplot(1, 2, 2)
plot_trj(ax2, vector_field)
# ax3 = fig.add_subplot(3, 2, 3)
# plot_vf(ax3, vector_field_prod)
# ax4 = fig.add_subplot(3, 2, 4)
# plot_trj(ax4, vector_field_prod)
# ax5 = fig.add_subplot(3, 2, 5)
# plot_vf(ax5, vector_field_exp)
# ax6 = fig.add_subplot(3, 2, 6)
# plot_trj(ax6, vector_field_exp)
plt.tight_layout()
fig
return
@app.cell
def _():
return
@app.cell
def _():
return
@app.cell
def _():
return
if __name__ == "__main__":
app.run()

File diff suppressed because one or more lines are too long


@@ -0,0 +1,188 @@
import marimo
__generated_with = "0.19.4"
app = marimo.App(width="medium")
@app.cell
def _():
import diffrax as dfx
import jax.numpy as jnp
import marimo as mo
import matplotlib.pyplot as plt
import numpy as np
from jax import jit
return dfx, jit, jnp, mo, np, plt
@app.cell
def _(dfx, jnp):
def vector_field(t, y, args):
v, vslow = y
        ipassive = args["gmax"] * (v - args["Erev"])
        ifast = args["af"] * jnp.tanh(v - args["Ef"])
        islow = args["as"] * jnp.tanh(vslow - args["Es"])
        dv = (-ipassive - ifast - islow) / args["C"]
        dvs = (v - vslow) / args["ts"]
        return jnp.array([dv, dvs])
term = dfx.ODETerm(vector_field)
return term, vector_field
@app.cell
def _(mo):
p1 = mo.ui.slider(0.0, 5.0, value=1.0, step=0.1, label="gmax")
p2 = mo.ui.slider(-1.0, 1.0, value=0.0, step=0.1, label="Erev")
p3 = mo.ui.slider(-5.0, 5.0, value=-2.0, step=0.05, label="af")
p4 = mo.ui.slider(-1.0, 1.0, value=0.0, step=0.05, label="Ef")
p5 = mo.ui.slider(-5.0, 5.0, value=2.0, step=0.05, label="as")
p6 = mo.ui.slider(-1.0, 1.0, value=0.0, step=0.05, label="Es")
p7 = mo.ui.slider(1.0, 100.0, value=50.0, step=0.1, label="ts")
p8 = mo.ui.slider(0.0, 1.0, value=1.0, step=0.01, label="C")
mo.hstack(
[
mo.vstack([p1, p2, p3, p4], justify="start", gap=1),
mo.vstack([p5, p6, p7, p8], justify="start", gap=1),
]
)
return p1, p2, p3, p4, p5, p6, p7, p8
@app.cell
def _(mo):
mo.md("""
### Initial Conditions & Simulation
""")
return
@app.cell
def _(mo):
x0 = mo.ui.slider(-5.0, 5.0, value=2.0, step=0.1, label="x₀")
y0 = mo.ui.slider(-5.0, 5.0, value=0.0, step=0.1, label="y₀")
t_max = mo.ui.slider(10, 100, value=30, step=5, label="t_max")
mo.hstack([x0, y0, t_max], justify="start", gap=2)
return t_max, x0, y0
@app.cell
def _(dfx, jit, jnp, p1, p2, p3, p4, p5, p6, p7, p8, t_max, term, x0, y0):
@jit
def solve_ode(y_init, args, t_end):
solver = dfx.Tsit5()
saveat = dfx.SaveAt(ts=jnp.linspace(0, t_end, 2000))
sol = dfx.diffeqsolve(
term,
solver,
t0=0,
t1=t_end,
dt0=0.01,
y0=y_init,
args=args,
saveat=saveat,
max_steps=100000,
)
return sol.ts, sol.ys
args = {
"gmax": p1.value,
"Erev": p2.value,
"af": p3.value,
"Ef": p4.value,
"as": p5.value,
"Es": p6.value,
"ts": p7.value,
"C": p8.value,
}
y_init = jnp.array([x0.value, y0.value])
t, ys = solve_ode(y_init, args, float(t_max.value))
x_sol = ys[:, 0]
y_sol = ys[:, 1]
return args, t, x_sol, y_sol
@app.cell
def _(args, jnp, np, plt, t, vector_field, x0, x_sol, y0, y_sol):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
# Time series
ax1.plot(t, x_sol, "b-", lw=1.5, label="x(t)")
ax1.plot(t, y_sol, "r-", lw=1.5, label="y(t)")
ax1.set_xlabel("Time t")
ax1.set_ylabel("State")
ax1.set_title("Transient Response")
ax1.legend()
ax1.grid(True, alpha=0.3)
# Phase plane bounds
# pad = 1.0
xmin, xmax = -4, 4
ymin, ymax = -2.5, 2.5
# Vector field
X, Y = jnp.meshgrid(jnp.linspace(xmin, xmax, 20), jnp.linspace(ymin, ymax, 20))
    U, V = jnp.zeros_like(X), jnp.zeros_like(Y)
state = jnp.stack([X, Y], axis=0)
deriv = vector_field(0.0, state, args)
dx, dy = deriv[0], deriv[1]
mag = jnp.sqrt(dx**2 + dy**2)
U = jnp.where(mag > 0, dx / mag, U)
V = jnp.where(mag > 0, dy / mag, V)
ax2.quiver(X, Y, U, V, alpha=0.4, color="gray", scale=25)
# Nullclines
Xf, Yf = jnp.meshgrid(jnp.linspace(xmin, xmax, 150), jnp.linspace(ymin, ymax, 150))
DX, DY = jnp.zeros_like(Xf), jnp.zeros_like(Yf)
state = jnp.stack([Xf, Yf], axis=0)
deriv = vector_field(0.0, state, args)
DX, DY = deriv[0], deriv[1]
ax2.contour(
Xf,
Yf,
DX,
levels=[0],
colors="blue",
linestyles="--",
linewidths=1.5,
alpha=0.7,
)
ax2.contour(
Xf, Yf, DY, levels=[0], colors="red", linestyles="--", linewidths=1.5, alpha=0.7
)
# Trajectory
ax2.plot(x_sol, y_sol, "b-", lw=2)
ax2.plot(x0.value, y0.value, "go", ms=10, label="Start")
ax2.plot(x_sol[-1], y_sol[-1], "r*", ms=12, label="End")
ax2.set_xlabel("x")
ax2.set_ylabel("y")
ax2.set_title("Phase Plane")
ax2.legend()
ax2.grid(True, alpha=0.3)
ax2.set_xlim(xmin, xmax)
ax2.set_ylim(ymin, ymax)
plt.tight_layout()
fig
return
@app.cell
def _():
return
if __name__ == "__main__":
app.run()

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,318 @@
import marimo
__generated_with = "0.19.4"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
from wigglystuff import Slider2D
return Slider2D, mo
@app.cell
def _():
import diffrax
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
return diffrax, jax, jnp, np, plt
@app.cell
def _(diffrax, jax, jnp):
def vector_field(t, state, args):
u, v = state
alpha, beta, gamma, kappa, sigma, delta = args
z = jax.nn.tanh(kappa * (u - v))
# Prey dynamics
du = z * (1 - alpha * jnp.exp(beta * v) * (1 + gamma * (0.5 - u))) - sigma
# Predator dynamics
dv = z * (-1 + alpha * jnp.exp(beta * u) * u * (1 + gamma * (0.5 - v))) - sigma
return jnp.array([du, dv])
def vector_field_prod(t, state, args):
u, v = state
alpha, beta, gamma, kappa, sigma, delta = args
z = jax.nn.tanh(kappa * (u - v))
# Prey dynamics
du = (
z * (1 - alpha * jnp.exp(beta * v) * (1 + gamma * (0.5 - u))) - sigma
# + sigma * jnp.maximum(0, delta - u) / (delta + 1e-16)
)
# Predator dynamics
dv = (
z * (-1 + alpha * jnp.exp(beta * u) * (1 + gamma * (0.5 - v))) - sigma
# + sigma * jnp.maximum(0, delta - v) / (delta + 1e-16)
)
dv = jnp.where(jnp.allclose(z, 0.0), dv * jnp.sign(v), dv)
du = jnp.where(jnp.allclose(z, 0.0), du * jnp.sign(u), du)
return jnp.array([du, dv])
def vector_field_exp(t, state, args):
u, v = state
alpha, beta, gamma, kappa, sigma, delta = args
z = jax.nn.tanh(kappa * (u - v))
# Prey dynamics
du = (
z * (1 - alpha * jnp.exp(beta * v) * (1 + gamma * (0.5 - u)))
- sigma
+ sigma * jnp.exp(-u / delta)
)
# Predator dynamics
dv = (
z * (-1 + alpha * jnp.exp(beta * u) * u * (1 + gamma * (0.5 - v)))
- sigma
+ sigma * jnp.exp(-v / delta)
)
return jnp.array([du, dv])
def physical_vector_field(t, state, args):
x1, x2 = state
alpha, beta, gamma, kappa, sigma, delta = args
In0 = 129e-15 # fixed by design
C = 0.1e-12 # fixed by design
kk = 0.39 # fixed by tech
Ut = 0.025 # temperature dependent
Ibias = In0 / alpha
Ia = Ibias * sigma
x3 = jax.nn.tanh(kappa * (x1 - x2))
dx1 = (
x3 * Ibias
- (In0 * jnp.exp(kk * x2 / Ut)) * (x3 + 26e-2 * (0.5 - x1) * x3)
- Ia
) / C
dx2 = (
-x3 * Ibias
+ In0 * jnp.exp(kk * x1 / Ut) * (x3 + 26e-2 * (0.5 - x2) * x3)
- Ia
) / C
return jnp.array([dx1, dx2])
def compute_nullclines(vector_field, u_range, v_range, args, resolution=200):
"""
Compute nullclines
du/dt = 0 (u-nullcline)
dv/dt = 0 (v-nullcline)
"""
alpha, beta, gamma, kappa, sigma, delta = args
u_vals = jnp.linspace(u_range[0], u_range[1], resolution)
v_vals = jnp.linspace(v_range[0], v_range[1], resolution)
U, V = jnp.meshgrid(u_vals, v_vals)
dU, dV = vector_field(0, [U, V], args)
return U, V, dU, dV
def solve(dyn, y0, p, T, n=1000):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(dyn),
diffrax.Tsit5(),
t0=0.0,
t1=T,
dt0=0.01,
y0=y0,
args=p,
saveat=diffrax.SaveAt(ts=jnp.linspace(0, T, n)),
stepsize_controller=diffrax.PIDController(rtol=1e-7, atol=1e-8),
max_steps=50000,
)
return sol.ts, sol.ys
return compute_nullclines, solve, vector_field_prod
@app.cell
def _(Slider2D, mo):
# alpha = 0.5 # I_n0 / I_bias ratio
# beta = 0.39/0.025 # k / U_t (inverse thermal scale)
# gamma = 0.26 # coupling coefficient
# kappa = 5.0 # tanh steepness
# sigma = 0.6 # bias scaling (s * I_bias normalized)
# y0 = jnp.array([0.2, 0.4])
# ts, ys = solve(vector_field, y0, params, 140)
alpha = mo.ui.slider(
0.0004, 0.012, 0.00001, 0.00129, label="alpha", orientation="vertical"
)
beta = mo.ui.slider(
0.0, 30, 0.00001, 0.39 / 0.025, label="beta", orientation="vertical"
)
gamma = mo.ui.slider(0, 1, 0.01, 0.26, label="gamma", orientation="vertical")
kappa = mo.ui.slider(0, 10, 0.1, 5.0, label="kappa", orientation="vertical")
sigma = mo.ui.slider(0, 1, 0.01, 0.6, label="sigma", orientation="vertical")
delta = mo.ui.slider(0, 0.1, 0.001, 0.02, label="delta", orientation="vertical")
# v0 = mo.ui.slider(0, 1.0, 0.01, 0.3, label="v0")
# u0 = mo.ui.slider(0, 1.0, 0.01, 0.2, label="u0", orientation="vertical")
state0 = mo.ui.anywidget(
Slider2D(
width=150,
height=150,
x_bounds=(-1.0, 1.5),
y_bounds=(-1.0, 1.5),
)
)
mo.hstack(
[
mo.plain_text("""
alpha: I_n0 / I_bias ratio
beta: k / U_t ratio
gamma: coupling coefficient
kappa: tanh steepness
sigma: bias scaling (s * I_bias)
"""),
mo.hstack(
[state0, alpha, beta, gamma, kappa, sigma, delta], justify="start"
),
]
)
return alpha, beta, delta, gamma, kappa, sigma, state0
@app.cell
def _(
alpha,
beta,
compute_nullclines,
delta,
gamma,
jnp,
kappa,
np,
plt,
sigma,
solve,
state0,
vector_field_prod,
):
params = (
alpha.value,
beta.value,
gamma.value,
kappa.value,
sigma.value,
delta.value,
)
ic_neuro = [state0.x, state0.y]
u_range = [-1.0, 1.5]
v_range = [-1.0, 1.5]
u_sparse = jnp.linspace(u_range[0], u_range[1], 20)
v_sparse = jnp.linspace(v_range[0], v_range[1], 20)
Us, Vs = jnp.meshgrid(u_sparse, v_sparse)
def plot_vf(ax, vector_field):
U, V, dU, dV = compute_nullclines(vector_field, u_range, v_range, params)
dUs, dVs = vector_field(0, [Us, Vs], params)
# Normalize for visualization
magnitude = np.sqrt(dUs**2 + dVs**2)
magnitude[magnitude == 0] = 1
dUs_norm = dUs / magnitude
dVs_norm = dVs / magnitude
# Nullclines
ax.contour(U, V, dU, levels=[0], colors="blue", linewidths=2, linestyles="-")
ax.contour(U, V, dV, levels=[0], colors="red", linewidths=2, linestyles="-")
ax.quiver(Us, Vs, dUs_norm, dVs_norm, magnitude, cmap="viridis", alpha=0.6)
# Trajectories
color = plt.cm.plasma(0.2)
ts, ys = solve(vector_field, jnp.array(ic_neuro), params, 50)
ax.plot(ys[:, 0], ys[:, 1], "-", color=color, linewidth=1.5, alpha=0.8)
ax.plot(ic_neuro[0], ic_neuro[1], "o", color=color, markersize=6)
ax.set_xlabel("u (Prey)")
ax.set_ylabel("v (Predator)")
ax.set_title("Wererabbit: Phase Portrait")
ax.legend(["u-nullcline (du/dt=0)", "v-nullcline (dv/dt=0)"], loc="upper right")
ax.set_xlim(u_range)
ax.set_ylim(v_range)
ax.axhline(y=0, color="gray", linestyle="--", alpha=0.3)
ax.axvline(x=0, color="gray", linestyle="--", alpha=0.3)
def plot_trj(ax, vector_field):
ts, ys = solve(vector_field, jnp.array(ic_neuro), params, 50)
ax.plot(ts, ys[:, 0], "b-", linewidth=2, label="u (Prey)")
ax.plot(ts, ys[:, 1], "r-", linewidth=2, label="v (Predator)")
ax.set_xlabel("Time τ")
ax.set_ylabel("Population")
ax.set_title(
f"Wererabbit: Time Series (IC: u₀={ic_neuro[0]:.2f}, v₀={ic_neuro[1]:.2f})"
)
ax.legend()
ax.axhline(y=0, color="gray", linestyle="--", alpha=0.3)
fig = plt.figure(figsize=(10, 4))
# --- Plot 1: Wererabbit Phase Portrait ---
ax1 = fig.add_subplot(1, 2, 1)
plot_vf(ax1, vector_field_prod)
ax2 = fig.add_subplot(1, 2, 2)
plot_trj(ax2, vector_field_prod)
# ax3 = fig.add_subplot(3, 2, 3)
# plot_vf(ax3, vector_field_prod)
# ax4 = fig.add_subplot(3, 2, 4)
# plot_trj(ax4, vector_field_prod)
# ax5 = fig.add_subplot(3, 2, 5)
# plot_vf(ax5, vector_field_exp)
# ax6 = fig.add_subplot(3, 2, 6)
# plot_trj(ax6, vector_field_exp)
plt.tight_layout()
fig
return
@app.cell
def _():
return
@app.cell
def _():
return
@app.cell
def _():
return
if __name__ == "__main__":
app.run()

158
scripts/networks/plot.ipynb Normal file

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

281
scripts/networks/train.py Normal file

@@ -0,0 +1,281 @@
import argparse
import os
from typing import Any, Tuple
import equinox as eqx
import jax
import jax.numpy as jnp
import jax.random as jrandom
import optax
import pandas as pd
from jaxtyping import Array, Float
from optax import OptState
from tqdm import trange
from felice.datasets.reasoning import ReasoningDataset
from felice.networks import Implicit, Mamba, SequenceClassifier
from felice.networks.implicit.boomerang import ImplicitBoomerang
def compute_loss(
model: eqx.Module, inputs: Array, targets: Array, masks: Array
) -> Float[Array, ""]:
def forward_single(inp, tgt, msk):
logits = model(inp)
loss = optax.softmax_cross_entropy_with_integer_labels(logits, tgt)
return (loss * msk).sum() / (msk.sum() + 1e-8)
losses = jax.vmap(forward_single)(inputs, targets, masks)
return losses.mean()
v_and_grad = eqx.filter_value_and_grad(compute_loss)
@eqx.filter_jit
def compute_accuracy(
model: eqx.Module, inputs: Array, targets: Array, masks: Array
) -> Float[Array, ""]:
def forward_single(inp, tgt, msk):
logits = model(inp)
preds = jnp.argmax(logits, axis=-1)
correct = (preds == tgt) * msk
return correct.sum(), msk.sum()
correct, total = jax.vmap(forward_single)(inputs, targets, masks)
return correct.sum() / (total.sum() + 1e-8)
@eqx.filter_jit
def train_step(
model: eqx.Module,
opt_state: OptState,
optimizer: Any,
inputs: Array,
targets: Array,
masks: Array,
) -> Tuple[eqx.Module, OptState, Array]:
loss, grads = v_and_grad(model, inputs, targets, masks)
updates, opt_state = optimizer.update(grads, opt_state, model)
model = eqx.apply_updates(model, updates)
return model, opt_state, loss
def train_and_compare(
model_type: Any,
logdir: str,
task_type: str = "simple",
n_epochs: int = 1000,
batch_size: int = 64,
d_model: int = 64,
d_state: int = 16,
d_inner: int = 32,
dt: float = 1.0,
max_iters: int = 8,
lr: float = 1e-3,
seed: int = 42,
# with_thr: bool = True,
) -> Tuple[eqx.Module, eqx.Module, Array, Array, pd.DataFrame]:
r"""Train Mamba and implicit model on the reasoning synthetic dataset.
Args:
model_type: The class of the implicit model to train (ImplicitBoomerang or Implicit).
logdir: Directory and filename of the log.
task_type: Type of task to solve from the reasoning synthetic dataset (simple, accumulation).
n_epochs: Number of epochs to train.
batch_size: Training batch size.
d_model: Model dimensions including output.
d_state: Model state dimension.
d_inner: Model latent dimension.
dt: Step size passed to the implicit model.
max_iters: Maximum number of iterations in the implicit model.
lr: Learning rate.
seed: Random seed.
with_thr: For the Boomerang model, whether to use a threshold for dual fixpoints.
Returns:
The trained models (mamba and implicit) with the respective final accuracy and
a pandas dataframe with the loss and accuracy per epoch.
"""
key = jrandom.key(seed)
keys = jrandom.split(key, 5)  # keys[4] is used for the final evaluation below
dataset = ReasoningDataset()
standard_model = SequenceClassifier(
vocab_size=dataset.VOCAB_SIZE,
d_model=d_model,
d_state=d_state,
d_inner=d_inner,
model_class=Mamba,
key=keys[0],
)
implicit_model = SequenceClassifier(
vocab_size=dataset.VOCAB_SIZE,
d_model=d_model,
d_state=d_state,
d_inner=d_inner,
model_class=model_type,
max_iters=max_iters,
dt=dt,
# with_thr=with_thr,
key=keys[1],
)
# implicit_model = ImplicitBoomerang(
# vocab_size=dataset.VOCAB_SIZE,
# d_model=d_model,
# d_state=d_state,
# d_inner=d_inner,
# max_iters=max_iters,
# dt=dt,
# # with_thr=with_thr,
# key=keys[1],
# )
# Count parameters
def count_params(model):
return sum(
x.size for x in jax.tree_util.tree_leaves(eqx.filter(model, eqx.is_array))
)
print(f"Mamba SSM params: {count_params(standard_model):,}")
print(f"Implicit SSM params: {count_params(implicit_model):,}")
# Optimizers
optimizer = optax.adam(lr)
standard_opt_state = optimizer.init(eqx.filter(standard_model, eqx.is_array))
implicit_opt_state = optimizer.init(eqx.filter(implicit_model, eqx.is_array))
# Training loop
print(f"\nTraining on task: {task_type} with {max_iters} steps")
print("=" * 60)
train_key = keys[2]
df = pd.DataFrame({"Epoch": [], "Loss": [], "Acc": [], "Model": []})
pbar = trange(n_epochs)
for epoch in pbar:
train_key, batch_key = jrandom.split(train_key)
inputs, targets, masks = dataset.generate_batch(
batch_key, batch_size, task_type
)
# Train standard model
standard_model, standard_opt_state, standard_loss = train_step(
standard_model, standard_opt_state, optimizer, inputs, targets, masks
)
# Train implicit model
implicit_model, implicit_opt_state, implicit_loss = train_step(
implicit_model, implicit_opt_state, optimizer, inputs, targets, masks
)
if (epoch + 1) % 10 == 0:
# Evaluate on fresh batch
eval_key = jrandom.fold_in(keys[3], epoch)
eval_inputs, eval_targets, eval_masks = dataset.generate_batch(
eval_key, batch_size, task_type
)
standard_acc = compute_accuracy(
standard_model, eval_inputs, eval_targets, eval_masks
)
implicit_acc = compute_accuracy(
implicit_model, eval_inputs, eval_targets, eval_masks
)
new_df = pd.DataFrame(
{
"Epoch": [epoch, epoch],
"Loss": [standard_loss.item(), implicit_loss.item()],
"Acc": [standard_acc.item(), implicit_acc.item()],
"Model": ["Mamba", "Implicit"],
}
)
df = pd.concat([df, new_df], ignore_index=True)
df.to_pickle(logdir)
pbar.write(
f"Epoch {epoch + 1:4d} | "
f"Standard: loss={standard_loss:.4f}, acc={standard_acc:.4f} | "
f"Implicit: loss={implicit_loss:.4f}, acc={implicit_acc:.4f}"
)
# Final evaluation
print("\n" + "=" * 60)
print("Final Evaluation (1000 samples)")
print("=" * 60)
eval_inputs, eval_targets, eval_masks = dataset.generate_batch(
keys[4], 1000, task_type
)
standard_acc = compute_accuracy(
standard_model, eval_inputs, eval_targets, eval_masks
)
implicit_acc = compute_accuracy(
implicit_model, eval_inputs, eval_targets, eval_masks
)
print(f"Mamba SSM accuracy: {standard_acc:.4f}")
print(f"Implicit SSM accuracy: {implicit_acc:.4f}")
print(f"Improvement: {(implicit_acc - standard_acc) * 100:.2f}%")
return standard_model, implicit_model, standard_acc, implicit_acc, df
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-t", type=int, choices=[1, 2], default=1, help="Task to perform"
)
parser.add_argument(
"-m",
type=str,
choices=["boomerang", "implicit"],
default="implicit",
help="Neuron model to use",
)
parser.add_argument("--dt", type=float, default=0.001, help="Simulation timestep")
parser.add_argument("-i", type=int, default=8, help="Maximum number of iterations")
parser.add_argument("-b", type=int, default=64, help="Batch size")
parser.add_argument(
"--thr", action="store_true", help="Using threshold on the boomerang neuron"
)
args = parser.parse_args()
if args.m == "boomerang":
model_type = ImplicitBoomerang
elif args.m == "implicit":
model_type = Implicit
else:
raise NotImplementedError(f"{args.m} model type not implemented")
logdir = os.path.join("tmp", f"task{args.t}-{args.i}-{args.m}-{args.thr}")
os.makedirs("tmp", exist_ok=True)
print(f"Saving at {logdir}")
_, implicit_model, std_acc1, imp_acc1, df = train_and_compare(
model_type,
logdir,
task_type="simple" if args.t == 1 else "accumulation",
n_epochs=1000,
batch_size=64,
d_model=ReasoningDataset.NUM_OUTPUT,
d_state=16,
d_inner=128,
dt=args.dt,
max_iters=args.i,
# with_thr=args.thr,
)
eqx.tree_serialise_leaves(f"{logdir}.eqx", implicit_model)
df.to_pickle(logdir)
print("=" * 70)
print("SUMMARY")
print("=" * 70)
print(f"{'Task':<25} {'Mamba SSM':<15} {'Implicit SSM':<15} {'Delta':<10}")
print("-" * 70)
print(
f"{'Simple Comparison':<25} {std_acc1:<15.4f} {imp_acc1:<15.4f} {(imp_acc1 - std_acc1) * 100:>+.2f}%"
)


0
src/felice/__init__.py Normal file

@@ -0,0 +1,172 @@
from io import BytesIO
from pathlib import Path
from typing import Optional, Sequence
from urllib.request import urlopen
from zipfile import ZipFile
import jax.numpy as jnp
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
letters = [
"Space",
"A",
"B",
"C",
"D",
"E",
"F",
"G",
"H",
"I",
"J",
"K",
"L",
"M",
"N",
"O",
"P",
"Q",
"R",
"S",
"T",
"U",
"V",
"W",
"X",
"Y",
"Z",
]
def load(
thr: int = 1,
taxels: Optional[Sequence[int]] = None,
custom_letters: Optional[Sequence[str]] = None,
):
file_name = Path(f"./braille/data_braille_letters_th_{thr}.pkl")
if not file_name.exists():
resp = urlopen(
"https://zenodo.org/records/7050094/files/reading_braille_data.zip"
)
with ZipFile(BytesIO(resp.read())) as zObject:
zObject.extract(f"data_braille_letters_th_{thr}.pkl", path="./braille")
data_dict = pd.read_pickle(file_name)
# Extract data
data_list: list[np.ndarray] = []
label_list: list[int] = []
nchan = len(data_dict["events"][1]) # number of channels per sensor
max_events = 0
for i, sample in enumerate(data_dict["events"]):
for taxel in range(len(sample)):
for event_type in range(len(sample[taxel])):
events = sample[taxel][event_type]
max_events = max(max_events, len(events))
for i, sample in enumerate(data_dict["events"]):
events_array = np.full([nchan, max_events, 2], np.inf)
for taxel in range(len(sample)):
# loop over On and Off channels
for event_type in range(len(sample[taxel])):
events = sample[taxel][event_type]
if events:
events_array[taxel, : len(events), event_type] = events # ms
if taxels is not None:
events_array = np.reshape(
np.transpose(events_array, (0, 2, 1))[taxels, :, :],
(-1, events_array.shape[1]),
)
selected_chans = 2 * len(taxels)
else:
events_array = np.reshape(
np.transpose(events_array, (0, 2, 1)), (-1, events_array.shape[1])
)
selected_chans = 2 * nchan
lbl = data_dict["letter"][i]
if custom_letters is not None:
if lbl in custom_letters:
data_list.append(events_array)
label_list.append(custom_letters.index(lbl))
else:
data_list.append(events_array)
label_list.append(letters.index(lbl))
data = np.stack(data_list) * 100 # To ms
labels = np.stack(label_list)
nb_outputs = len(np.unique(labels))
x_train, x_test, y_train, y_test = train_test_split(
data, labels, test_size=0.30, shuffle=True, stratify=labels
)
x_test, x_validation, y_test, y_validation = train_test_split(
x_test, y_test, test_size=0.33, shuffle=True, stratify=y_test
)
trainset = (jnp.asarray(x_train), jnp.asarray(y_train))
testset = (jnp.asarray(x_test), jnp.asarray(y_test))
valset = (jnp.asarray(x_validation), jnp.asarray(y_validation))
return trainset, testset, valset, selected_chans, nb_outputs
def load_raw(
taxels: Optional[Sequence[int]] = None,
custom_letters: Optional[Sequence[str]] = None,
):
file_name = Path("./braille/data_braille_letters_digits.pkl")
if not file_name.exists():
resp = urlopen(
"https://zenodo.org/records/7050094/files/reading_braille_data.zip"
)
with ZipFile(BytesIO(resp.read())) as zObject:
zObject.extract("data_braille_letters_digits.pkl", path="./braille")
data_dict = pd.read_pickle(file_name)
# Extract data
data_list: list[np.ndarray] = []
label_list: list[int] = []
nchan = data_dict["taxel_data"][0].shape[1] # number of channels per sensor
for i, sample in enumerate(data_dict["taxel_data"]):
if taxels is not None:
sample = sample[:, taxels]
selected_chans = len(taxels)
else:
selected_chans = nchan
lbl = data_dict["letter"][i]
if custom_letters is not None:
if lbl in custom_letters:
data_list.append(sample)
label_list.append(custom_letters.index(lbl))
else:
data_list.append(sample)
label_list.append(letters.index(lbl))
data = np.stack(data_list)
labels = np.stack(label_list)
nb_outputs = len(np.unique(labels))
x_train, x_test, y_train, y_test = train_test_split(
data, labels, test_size=0.30, shuffle=True, stratify=labels
)
x_test, x_validation, y_test, y_validation = train_test_split(
x_test, y_test, test_size=0.33, shuffle=True, stratify=y_test
)
trainset = (jnp.asarray(x_train), jnp.asarray(y_train))
testset = (jnp.asarray(x_test), jnp.asarray(y_test))
valset = (jnp.asarray(x_validation), jnp.asarray(y_validation))
return trainset, testset, valset, selected_chans, nb_outputs
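# A minimal usage sketch, following the repo's `if __name__ == "__main__"` demo
# convention; threshold and letter subset are illustrative.
if __name__ == "__main__":
    trainset, testset, valset, n_chans, n_out = load(thr=1, custom_letters=["A", "B"])
    x_train, y_train = trainset
    # x_train: (samples, channels, max_events) event times in ms; n_out == 2 here
    print(x_train.shape, n_chans, n_out)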


@@ -0,0 +1,125 @@
from typing import Tuple
import jax
import jax.numpy as jnp
from jaxtyping import Array, PRNGKeyArray
class ReasoningDataset:
"""
Task types:
1. Simple comparison: IF A > B THEN x ELSE y
2. Accumulated conditions: SET A, ADD B, IF SUM > threshold THEN x ELSE y
"""
# Token vocabulary
VOCAB = {
"PAD": 0,
"SET_A": 1,
"SET_B": 2,
"SET_C": 3,
"IF_A>B": 4,
"IF_A<B": 5,
"IF_SUM>": 6,
"THEN": 7,
"ELSE": 8,
"QUERY": 9,
"ADD": 10,
}
NUM_OFFSET: int = 11
VOCAB_SIZE: int = 36  # must cover NUM_OFFSET + largest encoded value (threshold can reach 24)
NUM_OUTPUT: int = 16
def generate_simple_comparison(self, key: PRNGKeyArray) -> Tuple[Array, Array, Array]:
"""
Generate: [SET_A, a, SET_B, b, IF_A>B, THEN, x, ELSE, y, QUERY]
Target at QUERY position: x if a > b else y
"""
keys = jax.random.split(key, 4)
a = jax.random.randint(keys[0], (), 0, self.NUM_OUTPUT - 1)
b = jax.random.randint(keys[1], (), 0, self.NUM_OUTPUT - 1)
x = jax.random.randint(keys[2], (), 0, self.NUM_OUTPUT - 1)
y = jax.random.randint(keys[3], (), 0, self.NUM_OUTPUT - 1)
# TODO: Generate other sequences
input_seq = jnp.array(
[
self.VOCAB["SET_A"],
self.NUM_OFFSET + a,
self.VOCAB["SET_B"],
self.NUM_OFFSET + b,
self.VOCAB["IF_A>B"],
self.VOCAB["THEN"],
self.NUM_OFFSET + x,
self.VOCAB["ELSE"],
self.NUM_OFFSET + y,
self.VOCAB["QUERY"],
]
)
result = jnp.where(a > b, x, y)
target = jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, result])
mask = jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
return input_seq, target, mask
def generate_accumulation_condition(self, key: PRNGKeyArray) -> Tuple[Array, Array, Array]:
"""
Generate: [SET_A, a, ADD, b, ADD, c, IF_SUM>, threshold, THEN, x, ELSE, y, QUERY]
Requires accumulating a + b + c and comparing to threshold.
"""
keys = jax.random.split(key, 6)
a = jax.random.randint(keys[0], (), 0, 10)
b = jax.random.randint(keys[1], (), 0, 10)
c = jax.random.randint(keys[2], (), 0, 10)
threshold = jax.random.randint(keys[3], (), 5, 25)
x = jax.random.randint(keys[4], (), 0, 15)
y = jax.random.randint(keys[5], (), 0, 15)
# TODO: Generate other sequences
input_seq = jnp.array(
[
self.VOCAB["SET_A"],
self.NUM_OFFSET + a,
self.VOCAB["ADD"],
self.NUM_OFFSET + b,
self.VOCAB["ADD"],
self.NUM_OFFSET + c,
self.VOCAB["IF_SUM>"],
self.NUM_OFFSET + threshold,
self.VOCAB["THEN"],
self.NUM_OFFSET + x,
self.VOCAB["ELSE"],
self.NUM_OFFSET + y,
self.VOCAB["QUERY"],
]
)
total = a + b + c
result = jnp.where(total > threshold, x, y)
target = jnp.zeros(13, dtype=jnp.int32).at[-1].set(result)
mask = jnp.zeros(13, dtype=jnp.int32).at[-1].set(1)
return input_seq, target, mask
def generate_batch(
self, key: PRNGKeyArray, batch_size: int, task_type: str = "simple"
) -> Tuple[Array, Array, Array]:
keys = jax.random.split(key, batch_size)
if task_type == "simple":
gen_fn = self.generate_simple_comparison
else: # accumulation
gen_fn = self.generate_accumulation_condition
inputs, targets, masks = [], [], []
for k in keys:
inp, tgt, msk = gen_fn(k)
inputs.append(inp)
targets.append(tgt)
masks.append(msk)
return jnp.stack(inputs), jnp.stack(targets), jnp.stack(masks)
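# A small shape-check sketch (batch size illustrative).
if __name__ == "__main__":
    dataset = ReasoningDataset()
    inputs, targets, masks = dataset.generate_batch(
        jax.random.key(0), batch_size=4, task_type="simple"
    )
    # Simple task: 10 tokens per sequence; only the QUERY position is supervised
    print(inputs.shape, targets.shape, masks.shape)  # (4, 10) each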


@@ -0,0 +1,45 @@
from typing import Generic, Protocol, TypeVar
import equinox as eqx
import jax
import jax.random as jrandom
from jaxtyping import Array, Float, PRNGKeyArray
from .implicit import Boomerang, Implicit
from .ssm import Mamba
__all__ = ["Mamba", "Implicit", "Boomerang", "SequenceClassifier"]
T = TypeVar("T")
class HasSequential(Protocol):
def sequential(self, x): ...
class SequenceClassifier(eqx.Module, Generic[T]):
embedding: eqx.nn.Embedding
model: eqx.Module
def __init__(
self,
vocab_size: int,
d_model: int,
model_class: T,
key: PRNGKeyArray,
**model_kwargs: dict,
):
keys = jrandom.split(key, 2)
self.embedding = eqx.nn.Embedding(vocab_size, d_model, key=keys[0])
self.model = model_class(d_model=d_model, key=keys[1], **model_kwargs)
def __call__(self, input_ids: Float[Array, " seq"]) -> Float[Array, "seq d_model"]:
x = jax.vmap(self.embedding)(input_ids)
y = self.model(x)
return y
def sequential(
self: "SequenceClassifier[HasSequential]", input_ids: Float[Array, " seq"]
) -> Float[Array, "seq d_model"]:
x = jax.vmap(self.embedding)(input_ids)
return self.model.sequential(x)
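# Construction sketch (hyperparameters illustrative): any backbone accepting
# (d_model=..., key=..., **kwargs) can be plugged in through model_class.
#
#   clf = SequenceClassifier(
#       vocab_size=31, d_model=64, model_class=Mamba, key=jrandom.key(0),
#       d_state=16, d_inner=32,  # forwarded to Mamba via **model_kwargs
#   )
#   logits = clf(token_ids)  # token_ids: integer array of shape (seq,)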


@@ -0,0 +1,4 @@
from .base import Implicit
from .boomerang import Boomerang
__all__ = ["Implicit", "Boomerang"]


@@ -0,0 +1,210 @@
from typing import Callable, Optional
import diffrax as dfx
import equinox as eqx
import jax
import jax.numpy as jnp
import jax.random as jrandom
import optimistix as optx
from diffrax._custom_types import RealScalarLike
from jaxtyping import Array, Float, PRNGKeyArray, PyTree
from ..utils import binary_op
def _depth_step(_, z_prev, args):
x, model = args
# ── Eq (6): h_t^{(s)} = Λ(z_t^{(s-1)}, x_t) * h_{t-1}^{(s)} + u(z_t^{(s-1)}, x_t)
lambda_vals = jax.vmap(model.compute_lambda)(z_prev, x)
u_vals = jax.vmap(model.compute_u)(z_prev, x)
_, h = jax.lax.associative_scan(binary_op, (lambda_vals, u_vals), axis=0)
# ── Eq (7): z_t^{(s)} = f_θ(z_t^{(s-1)}, h_{t-1}^{(s)}, x_t)
# z depends on h_{t-1}^{(s)}, i.e. the hidden state of the
# PREVIOUS token h_{t-1}^{(s)}, NOT the just-computed h_t^{(s)}.
h0 = jnp.zeros((1, model.d_state))
h = jnp.concatenate([h0, h[:-1]], axis=0)
dz = jax.vmap(model.f_theta)(z_prev, h, x)
return dz
Normalizer = Callable[[PyTree[Array]], RealScalarLike]
class Implicit(eqx.Module):
d_model: int = eqx.field(static=True)
d_state: int = eqx.field(static=True)
d_inner: int = eqx.field(static=True)
dt: float = eqx.field(static=True)
max_time: float = eqx.field(static=True)
max_iters: int | None = eqx.field(static=True)
rtol: float = eqx.field(static=True)
atol: float = eqx.field(static=True)
norm: Normalizer = eqx.field(static=True)
adjoint: dfx.AbstractAdjoint
solver: dfx.AbstractSolver
f_net: eqx.nn.Linear
lambda_net: eqx.nn.Linear # Λ: maps (z, x) → decay factor (diagonal)
u_net: eqx.nn.Linear # u: maps (z, x) → input state
out_net: eqx.nn.Linear # Output: maps (z, h) → y
def __init__(
self,
d_model: int,
d_state: int = 16,
d_inner: int = 32,
dt: float = 1.0,
max_iters: int | None = 100,
max_time: float = 100,
solver: Optional[dfx.AbstractSolver] = None,
adjoint: Optional[dfx.AbstractAdjoint] = None,
rtol: float = 1e-4,
atol: float = 1e-3,
norm: Normalizer = optx.rms_norm,
*,
key: PRNGKeyArray,
):
self.d_model = d_model
self.d_state = d_state
self.d_inner = d_inner
self.dt = dt
self.max_time = max_time
self.max_iters = max_iters
self.solver = solver if solver is not None else dfx.Tsit5()
self.adjoint = adjoint if adjoint is not None else dfx.ImplicitAdjoint()
self.rtol = rtol
self.atol = atol
self.norm = norm
keys = jrandom.split(key, 4)
self.f_net = eqx.nn.Linear(
d_inner + d_state + d_model,
d_inner,
key=keys[0],
)
self.lambda_net = eqx.nn.Linear(d_inner + d_model, d_state, key=keys[1])
self.u_net = eqx.nn.Linear(d_inner + d_model, d_state, key=keys[2])
self.out_net = eqx.nn.Linear(d_inner, d_model, key=keys[3])
def compute_lambda(
self, z: Float[Array, " d_inner"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
zx = jnp.concatenate([z, x], axis=-1)
# Sigmoid to keep in (0, 1) for stability
return jax.nn.sigmoid(self.lambda_net(zx))
def compute_u(
self, z: Float[Array, " d_inner"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
zx = jnp.concatenate([z, x], axis=-1)
return self.u_net(zx)
def f_theta(
self,
z: Float[Array, " d_inner"],
h: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Float[Array, " d_inner"]:
"""f_θ(z, h, x) → z - the implicit function."""
zhx = jnp.concatenate([z, h, x])
dz = jax.nn.silu(self.f_net(zhx)) - z
return dz
def get_z(
self,
x: Float[Array, "seq d_model"],
t0: float = 0,
t1: float | None = None,
num_points: int = 100,
) -> tuple[Array, Array]:
seq_len, _ = x.shape
t1 = t1 if t1 is not None else self.max_time
z0 = jnp.zeros((seq_len, self.d_inner))
terms = dfx.ODETerm(_depth_step)
sol = dfx.diffeqsolve(
terms,
self.solver,
t0=t0,
t1=t1,
dt0=self.dt,
y0=z0,
args=(x, self),
max_steps=self.max_iters,
saveat=dfx.SaveAt(ts=jnp.linspace(t0, t1, num_points)),
)
return sol.ts, sol.ys
def sequential(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
def scan_fn(h, x_t):
def depth_step(_, z_prev, args):
h, x, model = args
z_s = model.f_theta(z_prev, h, x)
return z_s
z0 = jnp.zeros((self.d_inner,))
cond_fn = dfx.steady_state_event(rtol=1e-4, atol=1e-3, norm=optx.rms_norm)
event = dfx.Event(cond_fn)
terms = dfx.ODETerm(depth_step)
sol = dfx.diffeqsolve(
terms,
self.solver,
t0=0.0,
t1=self.max_time,
dt0=self.dt,
y0=z0,
args=(h, x_t, self),
max_steps=self.max_iters,
event=event,
adjoint=self.adjoint,
)
z_star = sol.ys[-1]
# Compute new hidden state
lambda_val = self.compute_lambda(z_star, x_t) # (d_state,)
u_val = self.compute_u(z_star, x_t) # (d_state,)
h_new = lambda_val * h + u_val
y = self.out_net(z_star)
return h_new, y
h_init = jnp.zeros(self.d_state)
_, y = jax.lax.scan(scan_fn, h_init, x)
return y
def __call__(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
seq_len, _ = x.shape
z0 = jnp.zeros((seq_len, self.d_inner))
cond_fn = dfx.steady_state_event(rtol=1e-4, atol=1e-3, norm=optx.rms_norm)
event = dfx.Event(cond_fn)
terms = dfx.ODETerm(_depth_step)
sol = dfx.diffeqsolve(
terms,
self.solver,
t0=0.0,
t1=self.max_time,
dt0=self.dt,
y0=z0,
args=(x, self),
max_steps=self.max_iters,
event=event,
adjoint=self.adjoint,
)
z_star = sol.ys[-1]
y_star = jax.vmap(self.out_net)(z_star)
return y_star
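# A minimal forward-pass sketch (sizes illustrative). With the default
# ImplicitAdjoint, gradients are taken implicitly at the steady state.
if __name__ == "__main__":
    model = Implicit(d_model=8, d_state=16, d_inner=32, key=jrandom.key(0))
    x = jrandom.normal(jrandom.key(1), (12, 8))  # (seq, d_model)
    y = model(x)  # integrate the depth ODE until the steady-state event fires
    print(y.shape)  # (12, 8)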


@@ -0,0 +1,346 @@
from typing import Optional
import diffrax as dfx
import equinox as eqx
import jax
import jax.numpy as jnp
import jax.random as jrandom
import optimistix as optx
from jaxtyping import Array, Float, PRNGKeyArray
from ..utils import binary_op
from .base import Implicit, Normalizer
class ImplicitBoomerang(eqx.Module):
d_model: int = eqx.field(static=True)
d_state: int = eqx.field(static=True)
d_inner: int = eqx.field(static=True)
max_iters: int = eqx.field(static=True)
tol: float = eqx.field(static=True)
dt: float = eqx.field(static=True)
with_thr: bool = eqx.field(static=True)
debug: bool = eqx.field(static=True)
f_net: eqx.nn.Linear
lambda_net: eqx.nn.Linear # Λ: maps (z, x) → decay factor (diagonal)
u_net: eqx.nn.Linear # u: maps (z, x) → input state
out_net: eqx.nn.Linear # Output: maps (z, h) → y
def __init__(
self,
d_model: int,
d_state: int = 16,
d_inner: int = 32,
max_iters: int = 20,
tol: float = 1e-5,
dt: float = 1e-3,
with_thr: bool = True,
debug: bool = False,
*,
key: PRNGKeyArray,
):
self.d_model = d_model
self.d_state = d_state
self.d_inner = d_inner
self.max_iters = max_iters
self.tol = tol
self.dt = dt
self.with_thr = with_thr
self.debug = debug
keys = jrandom.split(key, 6)
self.f_net = eqx.nn.Linear(
d_state + d_model,
d_inner,
key=keys[1],
)
self.lambda_net = eqx.nn.Linear(d_inner + d_model, d_state, key=keys[2])
self.u_net = eqx.nn.Linear(d_inner + d_model, d_state, key=keys[3])
self.out_net = eqx.nn.Linear(d_inner + d_state, d_model, key=keys[4])
def compute_lambda(
self, z: Float[Array, " d_inner"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
zx = jnp.concatenate([z, x], axis=-1)
# Sigmoid to keep in (0, 1) for stability
return jax.nn.sigmoid(jax.vmap(self.lambda_net)(zx))
def compute_u(
self, z: Float[Array, " d_inner"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
zx = jnp.concatenate([z, x], axis=-1)
return jax.vmap(self.u_net)(zx)
def f_theta(
self,
z: Float[Array, " d_inner"],
h: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Float[Array, " d_inner"]:
"""f_θ(z, h, x) → z - the implicit function."""
hx = jnp.concatenate([h, x])
alpha_sigma = jnp.split(self.f_net(hx), 2)
rho = 30.0
alpha = jax.nn.sigmoid(alpha_sigma[0])
beta = 15.6
gamma = 0.26
sigma = jax.nn.sigmoid(alpha_sigma[1])
u, v = jnp.split(z, 2)
def true_fn(u, v):
return jax.nn.tanh(rho * (v - u))
def false_fn(u, v):
return jnp.ones_like(u)
thresh = jax.lax.cond(self.with_thr, true_fn, false_fn, u, v)
du = (1 - alpha * jnp.exp(beta * v) * (1 - gamma * (0.3 - u))) + sigma * thresh
dv = (-1 + alpha * jnp.exp(beta * u) * (1 + gamma * (0.3 - v))) + sigma * thresh
dz = jnp.concat([du, dv])
z = z + self.dt * dz
# kk = 0.68 # fixed by tech
# Ut = 0.025 # temperature dependent
# I_r0 = 0.9
# x1, x2 = jnp.split(z, 2)
# alpha = 0.000129 + (0.0129 - 0.000129) * jax.nn.sigmoid(
# alpha_sigma[0]
# ) # The circuit will get directly the current
# Ia = jax.nn.tanh(alpha_sigma[1]) * 0.6
# x3 = 1 # (np.tanh(20 * (x2 - x1))) #smoother transition
# dx1 = 2.3 * (
# 1 - (alpha * jnp.exp(kk * x2 / Ut)) * (1 - I_r0 * (0.3 - x1)) + Ia * x3
# )
# dx2 = 2.3 * (
# -1 + (alpha * jnp.exp(kk * x1 / Ut)) * (1 + I_r0 * (0.3 - x2)) + Ia * x3
# )
# dz = jnp.concat([dx1, dx2])
# z = z + self.dt * dz
return z
def get_z(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
seq_len, _ = x.shape
def body_fn(state, _):
z = state
lambda_vals = self.compute_lambda(z, x)
u_vals = self.compute_u(z, x)
_, h = jax.lax.associative_scan(binary_op, (lambda_vals, u_vals), axis=0)
h_prev = jnp.concatenate([jnp.zeros((1, self.d_state)), h[:-1]], axis=0)
z_new = jax.vmap(self.f_theta)(z, h_prev, x)
return z_new, z_new
# Initialize
z_init = jnp.zeros((seq_len, self.d_inner))
# Run fixed-point iteration
_, z = jax.lax.scan(
body_fn,
z_init,
None,
length=self.max_iters,
)
return z
def sequential(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
def scan_fn(h, x_t):
z_init = jnp.zeros((self.d_inner,))
z_prev_init = jnp.full((self.d_inner,), jnp.inf)
def cond_fn(state):
i, z, z_prev = state
converged = (
jnp.linalg.norm(z - z_prev) / jnp.linalg.norm(z_prev) < 0.001
)
return (i < self.max_iters) & ~converged
def body_fn(state):
i, z, _ = state
z_new = self.f_theta(z, h, x_t)
return (i + 1, z_new, z)
# Run fixed-point iteration
if self.debug:
def scan_fn(state, _):
new_state = body_fn(state)
return new_state, new_state[1]
_, z_star_debug = jax.lax.scan(
scan_fn, (0, z_init, z_prev_init), None, length=self.max_iters
)
z_star = z_star_debug[-1]
else:
_, z_star, _ = eqx.internal.while_loop(
cond_fn,
body_fn,
(0, z_init, z_prev_init),
max_steps=self.max_iters,
kind="bounded",
)
z_star_debug = z_star  # keep the aux output defined when debug is off
# Compute new hidden state
lambda_val = self.compute_lambda(
z_star[jnp.newaxis, :], x_t[jnp.newaxis, :]
)[0] # (d_state,)
u_val = self.compute_u(z_star[jnp.newaxis, :], x_t[jnp.newaxis, :])[
0
] # (d_state,)
h_new = lambda_val * h + u_val
zh = jnp.concatenate([z_star, h_new])
y = self.out_net(zh)
return h_new, (y, z_star_debug)
h_init = jnp.zeros(self.d_state)
_, (y, z_star) = jax.lax.scan(scan_fn, h_init, x)
if self.debug:
return y, z_star
else:
return y
def __call__(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
seq_len, _ = x.shape
def cond_fn(state):
i, z, z_prev = state
converged = jnp.linalg.norm(z - z_prev) / jnp.linalg.norm(z_prev) < 0.001
return (i < self.max_iters) & ~converged
def body_fn(state):
i, z, _ = state
lambda_vals = self.compute_lambda(z, x)
u_vals = self.compute_u(z, x)
_, h = jax.lax.associative_scan(binary_op, (lambda_vals, u_vals), axis=0)
h_prev = jnp.concatenate([jnp.zeros((1, self.d_state)), h[:-1]], axis=0)
z_new = jax.vmap(self.f_theta)(z, h_prev, x)
return (i + 1, z_new, z)
# Initialize
z_init = jnp.zeros((seq_len, self.d_inner))
z_prev_init = jnp.full((seq_len, self.d_inner), jnp.inf)
# Run fixed-point iteration
if self.debug:
def scan_fn(state, _):
new_state = body_fn(state)
return new_state, new_state[1]
_, z_star_debug = jax.lax.scan(
scan_fn, (0, z_init, z_prev_init), None, length=self.max_iters
)
z_star = z_star_debug[-1]
else:
_, z_star, _ = eqx.internal.while_loop(
cond_fn,
body_fn,
(0, z_init, z_prev_init),
max_steps=self.max_iters,
kind="bounded",
)
lambda_vals = self.compute_lambda(z_star, x)
u_vals = self.compute_u(z_star, x)
_, h_star = jax.lax.associative_scan(binary_op, (lambda_vals, u_vals), axis=0)
zh = jnp.concatenate([z_star, h_star], axis=-1)
y_star = jax.vmap(self.out_net)(zh)
if self.debug:
return y_star, z_star_debug
else:
return y_star
class Boomerang(Implicit):
def __init__(
self,
d_model: int,
d_state: int = 16,
d_inner: int = 32,
dt: float = 1.0,
max_iters: int | None = 100,
max_time: float = 100,
solver: Optional[dfx.AbstractSolver] = None,
adjoint: Optional[dfx.AbstractAdjoint] = None,
rtol: float = 1e-4,
atol: float = 1e-3,
norm: Normalizer = optx.rms_norm,
*,
key: PRNGKeyArray,
):
keys = jrandom.split(key)
super(Boomerang, self).__init__(
d_model=d_model,
d_state=d_state,
d_inner=d_inner,
dt=dt,
max_iters=max_iters,
max_time=max_time,
solver=solver,
adjoint=adjoint,
rtol=rtol,
atol=atol,
norm=norm,
key=keys[0],
)
self.f_net = eqx.nn.Linear(
d_state + d_model,
d_inner,
key=keys[1],
)
def f_theta(
self,
z: Float[Array, " d_inner"],
h: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Float[Array, " d_inner"]:
"""f_θ(z, h, x) → z - the implicit function."""
hx = jnp.concatenate([h, x])
alpha_sigma = jnp.split(self.f_net(hx), 2)
kk = 0.68 # fixed by tech
Ut = 0.025 # temperature dependent
I_r0 = 0.9
x1, x2 = jnp.split(z, 2)
alpha = 0.000129 + (0.0129 - 0.000129) * jax.nn.sigmoid(
alpha_sigma[0]
) # The circuit will get directly the current
Ia = jax.nn.tanh(alpha_sigma[1]) * 0.6
x3 = 1 # (np.tanh(20 * (x2 - x1))) #smoother transition
dx1 = 2.3 * (
1 - (alpha * jnp.exp(kk * x2 / Ut)) * (1 - I_r0 * (0.3 - x1)) + Ia * x3
)
dx2 = 2.3 * (
-1 + (alpha * jnp.exp(kk * x1 / Ut)) * (1 + I_r0 * (0.3 - x2)) + Ia * x3
)
dz = jnp.concat([dx1, dx2])
return dz


@@ -0,0 +1,420 @@
from typing import Tuple
import equinox as eqx
import jax
import jax.numpy as jnp
import jax.random as jrandom
from jax import flatten_util, lax
from jaxtyping import Array, Float, PRNGKeyArray
@jax.jit
def binary_op(
left: Tuple[Array, Array],
right: Tuple[Array, Array],
) -> Tuple[Array, Array]:
"""
(a1, b1) ∘ (a2, b2) = (a1·a2, a2·b1 + b2)
"""
a1, b1 = left
a2, b2 = right
return (a1 * a2, a2 * b1 + b2)
class ImplicitSNN(eqx.Module):
d_model: int
d_state: int
d_latent: int
max_iters: int
tol: float
embedding: eqx.nn.Embedding
f_net: eqx.Module # f_θ: maps (z, h, x) → z
f_net2: eqx.Module # f_θ: maps (z, h, x) → z
lambda_net: eqx.Module # Λ: maps (z, x) → decay factor (diagonal)
u_net: eqx.Module # u: maps (z, x) → input contribution
out_net: eqx.nn.Linear # Output: maps (z, h) → y
def __init__(
self,
vocab_size: int,
d_model: int,
d_state: int = 16,
d_latent: int = 32,
max_iters: int = 20,
tol: float = 1e-5,
*,
key: PRNGKeyArray,
):
self.d_model = d_model
self.d_state = d_state
self.d_latent = d_latent
self.max_iters = max_iters
self.tol = tol
keys = jrandom.split(key, 6)
self.embedding = eqx.nn.Embedding(vocab_size, d_model, key=keys[0])
# f_θ(z, h, x) → z
# Input: h (d_state) + x (d_model); each net emits half of d_latent
self.f_net = eqx.nn.Linear(
d_state + d_model,
d_latent // 2,
key=keys[1],
)
self.f_net2 = eqx.nn.Linear(
d_state + d_model,
d_latent // 2,
key=keys[5],
)
# Λ(z, x) → (d_state,) decay factors
self.lambda_net = eqx.nn.Linear(d_latent + d_model, d_state, key=keys[2])
# u(z, x) → (d_state,) input contribution
self.u_net = eqx.nn.Linear(d_latent + d_model, d_state, key=keys[3])
# Output projection
self.out_net = eqx.nn.Linear(d_latent + d_state, d_model, key=keys[4])
def compute_lambda(
self, z: Float[Array, " d_latent"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
"""Λ(z, x) - the decay/retention factor."""
zx = jnp.concatenate([z, x], axis=-1)
# Sigmoid to keep in (0, 1) for stability
return jax.nn.sigmoid(jax.vmap(self.lambda_net)(zx))
def compute_u(
self, z: Float[Array, " d_latent"], x: Float[Array, " d_model"]
) -> Float[Array, " d_state"]:
"""u(z, x) - the input contribution."""
zx = jnp.concatenate([z, x], axis=-1)
return jax.vmap(self.u_net)(zx)
# TODO: Change for diffrax
def f_theta(
self,
z: Float[Array, " d_latent"],
h: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Float[Array, " d_latent"]:
"""f_θ(z, h, x) → z - the implicit function."""
hx = jnp.concatenate([h, x])
rho = 30.0
alpha = jax.nn.sigmoid(self.f_net(hx))
beta = 15.6
gamma = 0.26
sigma = jax.nn.sigmoid(self.f_net2(hx))
u, v = jnp.split(z, 2)
thresh = jax.nn.tanh(rho * (v - u))
du = (1 - alpha * jnp.exp(beta * v) * (1 - gamma * (0.3 - u))) + sigma * thresh
dv = (-1 + alpha * jnp.exp(beta * u) * (1 + gamma * (0.3 - v))) + sigma * thresh
dz = jnp.concat([du, dv])
z = z + 0.001 * dz
return z
# ==================== Parallel mode (training) ====================
def parallel_scan_h(
self,
lambda_vals: Float[Array, "seq d_state"],
u_vals: Float[Array, "seq d_state"],
) -> Float[Array, "seq d_state"]:
"""
Compute h sequence via parallel scan.
h_t = λ_t · h_{t-1} + u_t
This is the standard SSM recurrence, parallelizable via associative scan.
"""
# Elements: (λ_t, u_t)
# After scan: (cumulative λ, h_t)
_, h = lax.associative_scan(binary_op, (lambda_vals, u_vals), axis=0)
return h
def debug(self, x):
x = jax.vmap(self.embedding)(x) # (seq, d_model)
seq_len = x.shape[0]
def body_fn(state, _):
z, _ = state
# Compute λ, u
lambda_vals = self.compute_lambda(z, x)
u_vals = self.compute_u(z, x)
# Parallel scan for h
h = self.parallel_scan_h(lambda_vals, u_vals)
# Shift h
h_prev = jnp.concatenate([jnp.zeros((1, self.d_state)), h[:-1]], axis=0)
# Update z
z_new = jax.vmap(self.f_theta)(z, h_prev, x)
return (z_new, z), z_new
# Initialize
z_init = jnp.zeros((seq_len, self.d_latent))
z_prev_init = jnp.full((seq_len, self.d_latent), jnp.inf)
_, z_star = jax.lax.scan(
body_fn, (z_init, z_prev_init), None, length=self.max_iters
)
return z_star
def forward_parallel(
self,
x: Float[Array, " seq"],
) -> Float[Array, "seq d_model"]:
"""
Parallel forward pass for training.
Algorithm:
1. Initialize z for all positions
2. Iterate until convergence:
a. Compute λ, u from current z (parallel over positions)
b. Compute h via parallel scan
c. Update z = f_θ(z, h_shifted, x) (parallel over positions)
3. Compute output from final z, h
Note: h_shifted means h_{t-1} for position t, so we shift h right.
"""
x = jax.vmap(self.embedding)(x) # (seq, d_model)
seq_len = x.shape[0]
def cond_fn(state):
i, z, z_prev = state
converged = jnp.linalg.norm(z - z_prev) < self.tol
return (i < self.max_iters) & ~converged
def body_fn(state):
i, z, _ = state
# Compute λ, u
lambda_vals = self.compute_lambda(z, x)
u_vals = self.compute_u(z, x)
# Parallel scan for h
h = self.parallel_scan_h(lambda_vals, u_vals)
# Shift h
h_prev = jnp.concatenate([jnp.zeros((1, self.d_state)), h[:-1]], axis=0)
# Update z
z_new = jax.vmap(self.f_theta)(z, h_prev, x)
return (i + 1, z_new, z)
# Initialize
z_init = jnp.zeros((seq_len, self.d_latent))
z_prev_init = jnp.full((seq_len, self.d_latent), jnp.inf)
# Run fixed-point iteration
_, z_star, _ = eqx.internal.while_loop(
cond_fn,
body_fn,
(0, z_init, z_prev_init),
max_steps=self.max_iters,
kind="bounded",
)
# Final h computation with converged z
lambda_vals = self.compute_lambda(z_star, x)
u_vals = self.compute_u(z_star, x)
h_star = self.parallel_scan_h(lambda_vals, u_vals)
# Output
zh = jnp.concatenate([z_star, h_star], axis=-1)
y_star = jax.vmap(self.out_net)(zh)
return y_star
# ==================== Sequential mode (inference) ====================
def find_fixed_point(
self,
h_prev: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Float[Array, " d_latent"]:
"""
Find z* such that z* = f_θ(z*, h_prev, x)
using fixed-point iteration.
"""
def cond_fn(state):
i, z, z_prev = state
converged = jnp.linalg.norm(z - z_prev) < self.tol
return (i < self.max_iters) & ~converged
def body_fn(state):
i, z, _ = state
z_new = self.f_theta(z, h_prev, x)
return (i + 1, z_new, z)
# Initialize z
z_init = jnp.zeros(self.d_latent)
z_prev_init = jnp.full(self.d_latent, jnp.inf)
_, z_star, _ = eqx.internal.while_loop(
cond_fn,
body_fn,
(0, z_init, z_prev_init),
max_steps=self.max_iters,
kind="bounded",
)
return z_star
def step(
self,
h_prev: Float[Array, " d_state"],
x: Float[Array, " d_model"],
) -> Tuple[Float[Array, " d_state"], Float[Array, " d_model"]]:
"""
Single step of the implicit SSM.
1. Find z* via fixed-point iteration
2. Compute h* = Λ(z*, x) · h_prev + u(z*, x)
3. Compute output y
"""
# Find fixed point z*
z_star = self.find_fixed_point(h_prev, x)
# Compute new hidden state
lambda_val = self.compute_lambda(z_star[jnp.newaxis, :], x[jnp.newaxis, :])[
0
] # (d_state,)
u_val = self.compute_u(z_star[jnp.newaxis, :], x[jnp.newaxis, :])[
0
] # (d_state,)
h_new = lambda_val * h_prev + u_val
# Compute output
zh = jnp.concatenate([z_star, h_new])
y = self.out_net(zh)
return h_new, y
def forward_sequential(
self,
x: Float[Array, " seq"],
) -> Float[Array, "seq d_model"]:
"""
Sequential forward pass for inference.
Processes one token at a time, maintaining state.
"""
x = jax.vmap(self.embedding)(x)
def scan_fn(h, x_t):
h_new, y_t = self.step(h, x_t)
return h_new, y_t
h_init = jnp.zeros(self.d_state)
_, y = lax.scan(scan_fn, h_init, x)
return y
def __call__(
self,
x: Float[Array, "seq d_model"],
mode: str = "parallel",
) -> Float[Array, "seq d_model"]:
"""
Forward pass over sequence.
Note: This is inherently sequential due to the implicit nature.
"""
if mode == "parallel":
return self.forward_parallel(x)
else:
return self.forward_sequential(x)
if __name__ == "__main__":
import time
key = jrandom.key(42)
keys = jrandom.split(key, 2)
d_model, d_state, d_latent = 64, 16, 32
seq_len = 1080
model = ImplicitSNN(
vocab_size=16,
d_model=d_model,
d_state=d_state,
d_latent=d_latent,
max_iters=10,
key=keys[0],
)
# Test input
x = jrandom.randint(keys[1], (seq_len,), 0, 15)
# Test parallel mode
print("Testing parallel mode...")
y_parallel = model(x, mode="parallel")
print(f" Input: {x.shape}, Output: {y_parallel.shape}")
# Test sequential mode
print("\nTesting sequential mode...")
y_sequential = model(x, mode="sequential")
print(f" Input: {x.shape}, Output: {y_sequential.shape}")
# Compare outputs (should be close but not exact due to different convergence paths)
diff = jnp.linalg.norm(y_parallel - y_sequential) / jnp.linalg.norm(y_sequential)
print(f"\nRelative difference: {diff:.6f}")
# Test gradients in parallel mode
def loss_fn(model, x, mode):
return jnp.mean(model(x, mode=mode) ** 2)
print("\nTesting gradients (parallel mode)...")
grads_par = eqx.filter_grad(loss_fn)(model, x, "parallel")
grads_par = flatten_util.ravel_pytree(grads_par)[0]
print("\nTesting gradients (sequential mode)...")
grads_seq = eqx.filter_grad(loss_fn)(model, x, "sequential")
grads_seq = flatten_util.ravel_pytree(grads_seq)[0]
grad_diff = jnp.linalg.norm(grads_par - grads_seq) / jnp.linalg.norm(grads_seq)
print(" Gradients computed successfully!")
print(f"\nRelative difference: {grad_diff:.6f}")
# Benchmark
print("\nBenchmarking...")
# JIT compile
forward_parallel_jit = eqx.filter_jit(lambda m, x: m(x, mode="parallel"))
forward_sequential_jit = eqx.filter_jit(lambda m, x: m(x, mode="sequential"))
# Warmup
_ = forward_parallel_jit(model, x)
_ = forward_sequential_jit(model, x)
# Time parallel
start = time.time()
for _ in range(100):
_ = forward_parallel_jit(model, x).block_until_ready()
parallel_time = (time.time() - start) / 100
# Time sequential
start = time.time()
for _ in range(100):
_ = forward_sequential_jit(model, x).block_until_ready()
sequential_time = (time.time() - start) / 100
print(f" Parallel: {parallel_time * 1000:.3f} ms")
print(f" Sequential: {sequential_time * 1000:.3f} ms")
print(f" Speedup: {sequential_time / parallel_time:.2f}x")

137
src/felice/networks/ssm.py Normal file

@@ -0,0 +1,137 @@
import math
import equinox as eqx
import jax
import jax.numpy as jnp
import jax.random as jrandom
from einops import repeat
from jaxtyping import Array, Float, PRNGKeyArray
from .utils import binary_op
class SelectiveSSM(eqx.Module):
d_model: int = eqx.field(static=True)
d_state: int = eqx.field(static=True)
dt_rank: int = eqx.field(static=True)
x_proj: eqx.nn.Linear
dt_proj: eqx.nn.Linear
A_log: Float[Array, "d_inner d_state"]
D: Float[Array, " d_inner"]
def __init__(
self,
d_model: int,
dt_rank: int,
d_state: int = 16,
*,
key: PRNGKeyArray,
):
self.d_model = d_model
self.d_state = d_state
self.dt_rank = dt_rank
keys = jrandom.split(key, 5)
self.x_proj = eqx.nn.Linear(
self.d_model, self.dt_rank + self.d_state * 2, use_bias=False, key=keys[3]
)
self.dt_proj = eqx.nn.Linear(
self.dt_rank, self.d_model, use_bias=True, key=keys[4]
)
# S4D-Real initialization
A = repeat(
jnp.arange(1, d_state + 1, dtype=jnp.float32),
"n -> d n",
d=self.d_model,
)
self.A_log = jnp.log(A)
self.D = jnp.ones(self.d_model)
def __call__(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
x_dbl = jax.vmap(self.x_proj)(x)
dt, B, C = jnp.split(
x_dbl, [self.dt_rank, self.dt_rank + self.d_state], axis=-1
)
dt = jax.vmap(self.dt_proj)(dt)
dt = jax.nn.softplus(dt)
A = -jnp.exp(self.A_log) # (d_inner, d_state)
dA = jnp.exp(jnp.einsum("ld,dn->ldn", dt, A))
dB_x = jnp.einsum("ld,ln,ld->ldn", dt, B, x)
_, h = jax.lax.associative_scan(binary_op, (dA, dB_x), axis=0)
y = jnp.einsum("ldn,ln->ld", h, C)
y = y + x * self.D
return y
class Mamba(eqx.Module):
d_model: int = eqx.field(static=True)
d_conv: int = eqx.field(static=True)
d_inner: int = eqx.field(static=True)
in_proj: eqx.nn.Linear
out_proj: eqx.nn.Linear
conv1d: eqx.nn.Conv1d
ssm: SelectiveSSM
def __init__(
self,
d_model: int,
d_state: int = 16,
d_inner: int = 32,
d_conv: int = 4,
*,
key: PRNGKeyArray,
):
self.d_model = d_model
self.d_conv = d_conv
self.d_inner = d_inner
keys = jrandom.split(key, 4)
self.in_proj = eqx.nn.Linear(self.d_model, self.d_inner * 2, key=keys[0])
self.conv1d = eqx.nn.Conv1d(
in_channels=self.d_inner,
out_channels=self.d_inner,
kernel_size=d_conv,
groups=self.d_inner,
padding=d_conv - 1,
key=keys[1],
)
self.ssm = SelectiveSSM(
d_model=d_inner,
dt_rank=math.ceil(self.d_model / 16),
d_state=d_state,
key=keys[2],
)
self.out_proj = eqx.nn.Linear(self.d_inner, self.d_model, key=keys[3])
def __call__(self, x: Float[Array, "seq d_model"]) -> Float[Array, "seq d_model"]:
seq_len, _ = x.shape
# Project the input into the convolution and residual branches
xz = jax.vmap(self.in_proj)(x)
x, z = jnp.split(xz, 2, axis=-1)
# 1D Convolution
x = x.T
x = self.conv1d(x)[:, :seq_len]
x = x.T
x = jax.nn.silu(x)
y = self.ssm(x)
y = y * jax.nn.silu(z)
logits = jax.vmap(self.out_proj)(y)
return logits
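# Smoke-test sketch (sizes illustrative): the block maps (seq, d_model) to
# (seq, d_model), so it can be stacked residually.
if __name__ == "__main__":
    block = Mamba(d_model=8, d_state=16, d_inner=32, key=jrandom.key(0))
    x = jrandom.normal(jrandom.key(1), (32, 8))
    print(block(x).shape)  # (32, 8)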


@@ -0,0 +1,13 @@
from typing import Tuple
import jax
from jaxtyping import Array
@jax.jit
def binary_op(
left: Tuple[Array, Array], right: Tuple[Array, Array]
) -> Tuple[Array, Array]:
a1, b1 = left
a2, b2 = right
return (a1 * a2, a2 * b1 + b2)
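# This operator makes the linear recurrence h_t = a_t * h_{t-1} + b_t
# associative, so jax.lax.associative_scan can evaluate it in parallel.
# A small correctness sketch (values illustrative):
if __name__ == "__main__":
    import jax.numpy as jnp

    a = jnp.array([0.5, 0.9, 0.1])  # per-step decay
    b = jnp.array([1.0, 2.0, 3.0])  # per-step input
    _, h = jax.lax.associative_scan(binary_op, (a, b))
    acc, h_ref = 0.0, []
    for at, bt in zip(a, b):  # sequential reference, h_0 = 0
        acc = at * acc + bt
        h_ref.append(acc)
    print(jnp.allclose(h, jnp.stack(h_ref)))  # True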


@@ -0,0 +1,5 @@
from .boomerang import Boomerang
from .fhn import FHNRS
from .wererabbit import WereRabbit
__all__ = ["WereRabbit", "FHNRS", "Boomerang"]


@@ -0,0 +1,171 @@
from typing import Any, Dict, Tuple
import equinox as eqx
import jax
import jax.numpy as jnp
import optimistix as optx
from jaxtyping import Array, DTypeLike, Float
class Boomerang(eqx.Module):
rtol: float = eqx.field(static=True)
atol: float = eqx.field(static=True)
u0: float = eqx.field(static=True)
v0: float = eqx.field(static=True)
alpha: float = eqx.field(static=True) # I_n0 / I_bias ratio
beta: float = eqx.field(static=True) # k / U_t (inverse thermal scale)
gamma: float = eqx.field(static=True) # coupling coefficient
rho: float = eqx.field(static=True) # tanh steepness
sigma: float = eqx.field(static=True) # bias scaling (s * I_bias)
dtype: DTypeLike = eqx.field(static=True)
def __init__(
self,
*,
atol: float = 1e-6,
rtol: float = 1e-4,
alpha: float = 0.0129,
beta: float = 15.6,
gamma: float = 0.26,
rho: float = 30.0,
sigma: float = 0.6,
dtype: DTypeLike = jnp.float32,
):
r"""Initialize the WereRabbit neuron model.
Args:
key: JAX random key for weight initialization.
n_neurons: Number of neurons in this layer.
in_size: Number of input connections (excluding recurrent connections).
wmask: Binary mask defining connectivity pattern of shape (in_plus_neurons, neurons).
rtol: Relative tolerance for the spiking fixpoint calculation.
atol: Absolute tolerance for the spiking fixpoint calculation.
alpha: Current scaling parameter $\alpha = I_{n0}/I_{bias}$ (default: 0.0129)
beta: Exponential slope $\beta = \kappa/U_t$ (default: 15.6)
gamma: Coupling parameter $\gamma = 26e^{-2}$
rho: Steepness of the tanh function $\rho$ (default: 5)
sigma: Fixpoint distance scaling $\sigma$ (default: 0.6)
wlim: Limit for weight initialization. If None, uses init_weights.
wmean: Mean value for weight initialization.
init_weights: Optional initial weight values. If None, weights are randomly initialized.
fan_in_mode: Mode for fan-in based weight initialization ('sqrt', 'linear').
dtype: Data type for arrays (default: float32).
"""
self.dtype = dtype
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.rho = rho
self.sigma = sigma
self.rtol = rtol
self.atol = atol
def fn(y, _):
return self.vector_field(y[0], y[1])
solver: optx.AbstractRootFinder = optx.Newton(rtol=1e-8, atol=1e-8)
y0 = (jnp.array(0.3), jnp.array(0.3))
u0, v0 = optx.root_find(fn, solver, y0).value
self.u0 = u0.item()
self.v0 = v0.item()
def init_state(self, n_neurons: int) -> Float[Array, "neurons 2"]:
"""Initialize the neuron state variables.
Args:
n_neurons: Number of neurons to initialize.
Returns:
Initial state array of shape (neurons, 2) containing [u, v],
where u and v are the predator/prey membrane voltages.
"""
u = jnp.full((n_neurons,), self.u0, dtype=self.dtype)
v = jnp.full((n_neurons,), self.v0, dtype=self.dtype)
x = jnp.stack([u, v], axis=1)
return x
def vector_field(
self, u: Float[Array, "..."], v: Float[Array, "..."]
) -> Tuple[Float[Array, "..."], Float[Array, "..."]]:
alpha = self.alpha
beta = self.beta
gamma = self.gamma
sigma = self.sigma
rho = self.rho
z = jax.nn.tanh(rho * (v - u))
du = (1 - alpha * jnp.exp(beta * v) * (1 - gamma * (0.3 - u))) + sigma * z
dv = (-1 + alpha * jnp.exp(beta * u) * (1 + gamma * (0.3 - v))) + sigma * z
return du, dv
def dynamics(
self,
t: float,
y: Float[Array, "neurons 2"],
args: Dict[str, Any],
) -> Float[Array, "neurons 2"]:
"""Compute time derivatives of the neuron state variables.
This implements the WereRabbit dynamics
- du/dt: Predator dynamics
- dv/dt: WerePrey dynamics
Args:
t: Current simulation time (unused but required by framework).
y: State array of shape (neurons, 2) containing [u, v].
args: Additional arguments (unused but required by framework).
Returns:
Time derivatives of shape (neurons, 2) containing [du/dt, dv/dt].
"""
u = y[:, 0]
v = y[:, 1]
du, dv = self.vector_field(u, v)
dxdt = jnp.stack([du, dv], axis=1)
return dxdt
def spike_condition(
self,
t: float,
y: Float[Array, "neurons 2"],
**kwargs: Dict[str, Any],
) -> Float[Array, " neurons"]:
"""Compute spike condition for event detection.
A spike is triggered when the system reaches a fixpoint, detected when the
vector-field norm drops below the configured tolerances.
Args:
t: Current simulation time (unused but required by the framework).
y: State array of shape (neurons, 2) containing [u, v].
**kwargs: Additional keyword arguments (unused).
Returns:
Spike condition array of shape (neurons,). Positive values indicate spike.
"""
_atol = self.atol
_rtol = self.rtol
_norm = optx.rms_norm
vf = self.dynamics(t, y, {})
@jax.vmap
def calculate_norm(vf, y):
return _atol + _rtol * _norm(y) - _norm(vf)
base_cond = calculate_norm(vf, y).repeat(2)
return base_cond
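# Sanity sketch: the constructor's Newton root find should place (u0, v0) on a
# fixpoint of the vector field, so both derivatives vanish there.
if __name__ == "__main__":
    cell = Boomerang()
    du, dv = cell.vector_field(jnp.array(cell.u0), jnp.array(cell.v0))
    print(cell.u0, cell.v0, float(du), float(dv))  # derivatives ~ 0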


@@ -0,0 +1,235 @@
from typing import Any, Dict, Union
import equinox as eqx
import jax.numpy as jnp
from jaxtyping import Array, DTypeLike, Float
class FHNRS(eqx.Module):
r"""FitzHugh-Nagumo neuron model
Model for FitzHugh-Nagumo neuron, with a hardware implementation proposed by
Ribar-Sepulchre. This implementation uses a dual-timescale dynamics with fast
and slow currents to produce oscillatory spiking behavior.
The dynamics are governed by:
$$
\begin{align}
C\frac{dv}{dt} &= I_{app} - I_{passive} - I_{fast} - I_{slow} \\
\frac{dv_{slow}}{dt} &= \frac{v - v_{slow}}{\tau_{slow}} \\
\frac{dI_{app}}{dt} &= -\frac{I_{app}}{\tau_{syn}}
\end{align}
$$
where the currents are:
- $I_{passive} = g_{max}(v - E_{rev})$
- $I_{fast} = a_{fast} \tanh(v - v_{off,fast})$
- $I_{slow} = a_{slow} \tanh(v_{slow} - v_{off,slow})$
References:
- Ribar, L., & Sepulchre, R. (2019). Neuromodulation of neuromorphic circuits. IEEE Transactions on Circuits and Systems I: Regular Papers, 66(8), 3028-3040.
Attributes:
gmax_pasive: Maximal conductance of the passive current.
Erev_pasive: Reversal potential for the passive current.
a_fast: Amplitude parameter for the fast current dynamics.
voff_fast: Voltage offset for the fast current activation.
tau_fast: Time constant for the fast current (typically zero for instantaneous).
a_slow: Amplitude parameter for the slow current dynamics.
voff_slow: Voltage offset for the slow current activation.
tau_slow: Time constant for the slow recovery variable.
vthr: Voltage threshold for spike generation.
C: Membrane capacitance.
tsyn: Synaptic time constant for input current decay.
"""
# Passive parameters
gmax_pasive: float = eqx.field(static=True)
Erev_pasive: float = eqx.field(static=True)
# Fast current
a_fast: float = eqx.field(static=True)
voff_fast: float = eqx.field(static=True)
tau_fast: float = eqx.field(static=True)
# Slow current
a_slow: float = eqx.field(static=True)
voff_slow: float = eqx.field(static=True)
tau_slow: float = eqx.field(static=True)
# Neuron threshold
vthr: float = eqx.field(static=True)
C: float = eqx.field(static=True, default=1.0)
# Input synaptic time constant
tsyn: float = eqx.field(static=True)
dtype: DTypeLike = eqx.field(static=True)
def __init__(
self,
*,
tsyn: Union[int, float, jnp.ndarray] = 1.0,
C: Union[int, float, jnp.ndarray] = 1.0,
gmax_pasive: Union[int, float, jnp.ndarray] = 1.0,
Erev_pasive: Union[int, float, jnp.ndarray] = 0.0,
a_fast: Union[int, float, jnp.ndarray] = -2.0,
voff_fast: Union[int, float, jnp.ndarray] = 0.0,
tau_fast: Union[int, float, jnp.ndarray] = 0.0,
a_slow: Union[int, float, jnp.ndarray] = 2.0,
voff_slow: Union[int, float, jnp.ndarray] = 0.0,
tau_slow: Union[int, float, jnp.ndarray] = 50.0,
vthr: Union[int, float, jnp.ndarray] = 2.0,
dtype: DTypeLike = jnp.float32,
):
"""Initialize the FitzHugh-Nagumo neuron model.
Args:
tsyn: Synaptic time constant for input current decay. Can be scalar or per-neuron array.
C: Membrane capacitance. Can be scalar or per-neuron array.
gmax_pasive: Maximal conductance of passive current. Can be scalar or per-neuron array.
Erev_pasive: Reversal potential for passive current. Can be scalar or per-neuron array.
a_fast: Amplitude of fast current. Can be scalar or per-neuron array.
voff_fast: Voltage offset for fast current activation. Can be scalar or per-neuron array.
tau_fast: Time constant for fast current (typically 0 for instantaneous). Can be scalar or per-neuron array.
a_slow: Amplitude of slow current. Can be scalar or per-neuron array.
voff_slow: Voltage offset for slow current activation. Can be scalar or per-neuron array.
tau_slow: Time constant for slow recovery variable. Can be scalar or per-neuron array.
vthr: Voltage threshold for spike generation. Can be scalar or per-neuron array.
dtype: Data type for arrays (default: float32).
"""
self.dtype = dtype
self.tsyn = tsyn
self.C = C
self.gmax_pasive = gmax_pasive
self.Erev_pasive = Erev_pasive
self.a_fast = a_fast
self.voff_fast = voff_fast
self.tau_fast = tau_fast
self.a_slow = a_slow
self.voff_slow = voff_slow
self.tau_slow = tau_slow
self.vthr = vthr
def init_state(self, n_neurons: int) -> Float[Array, "neurons 3"]:
"""Initialize the neuron state variables.
Args:
n_neurons: Number of neurons to initialize.
Returns:
Initial state array of shape (neurons, 3) containing [v, v_slow, i_app],
where v is membrane voltage, v_slow is the slow recovery variable,
and i_app is the applied synaptic current.
"""
return jnp.zeros((n_neurons, 3), dtype=self.dtype)
def IV_inst(self, v: Float[Array, "..."], Vrest: float = 0) -> Float[Array, "..."]:
"""Compute instantaneous I-V relationship with fast and slow currents at rest.
Args:
v: Membrane voltage.
Vrest: Resting voltage for both fast and slow currents (default: 0).
Returns:
Total current at voltage v with both fast and slow currents evaluated at Vrest.
"""
I_pasive = self.gmax_pasive * (v - self.Erev_pasive)
I_fast = self.a_fast * jnp.tanh(Vrest - self.voff_fast)
I_slow = self.a_slow * jnp.tanh(Vrest - self.voff_slow)
return I_pasive + I_fast + I_slow
def IV_fast(self, v: Float[Array, "..."], Vrest: float = 0) -> Float[Array, "..."]:
"""Compute I-V relationship with fast current at voltage v and slow current at rest.
Args:
v: Membrane voltage for passive and fast currents.
Vrest: Resting voltage for slow current (default: 0).
Returns:
Total current with fast dynamics responding to v and slow current at Vrest.
"""
I_pasive = self.gmax_pasive * (v - self.Erev_pasive)
I_fast = self.a_fast * jnp.tanh(v - self.voff_fast)
I_slow = self.a_slow * jnp.tanh(Vrest - self.voff_slow)
return I_pasive + I_fast + I_slow
def IV_slow(self, v: Float[Array, "..."], Vrest: float = 0) -> Float[Array, "..."]:
"""Compute steady-state I-V relationship with all currents at voltage v.
Args:
v: Membrane voltage for all currents.
Vrest: Unused parameter for API consistency (default: 0).
Returns:
Total steady-state current with all currents responding to v.
"""
I_pasive = self.gmax_pasive * (v - self.Erev_pasive)
I_fast = self.a_fast * jnp.tanh(v - self.voff_fast)
I_slow = self.a_slow * jnp.tanh(v - self.voff_slow)
return I_pasive + I_fast + I_slow
def dynamics(
self,
t: float,
y: Float[Array, "neurons 3"],
args: Dict[str, Any],
) -> Float[Array, "neurons 3"]:
"""Compute time derivatives of the neuron state variables.
This implements the FitzHugh-Nagumo dynamics with passive, fast, and slow currents:
- dv/dt: Fast membrane voltage dynamics
- dv_slow/dt: Slow recovery variable dynamics
- di_app/dt: Synaptic current decay
Args:
t: Current simulation time (unused but required by framework).
y: State array of shape (neurons, 3) containing [v, v_slow, i_app].
args: Additional arguments (unused but required by framework).
Returns:
Time derivatives of shape (neurons, 3) containing [dv/dt, dv_slow/dt, di_app/dt].
"""
v = y[:, 0]
v_slow = y[:, 1]
i_app = y[:, 2]
I_pasive = self.gmax_pasive * (v - self.Erev_pasive)
I_fast = self.a_fast * jnp.tanh(v - self.voff_fast)
I_slow = self.a_slow * jnp.tanh(v_slow - self.voff_slow)
i_sum = I_pasive + I_fast + I_slow
dv_dt = (i_app - i_sum) / self.C
dvslow_dt = (v - v_slow) / self.tau_slow
di_dt = -i_app / self.tsyn
return jnp.stack([dv_dt, dvslow_dt, di_dt], axis=1)
def spike_condition(
self,
t: float,
y: Float[Array, "neurons 3"],
**kwargs: Dict[str, Any],
) -> Float[Array, " neurons"]:
"""Compute spike condition for event detection.
A spike is triggered when this function crosses zero (v >= vthr).
Args:
t: Current simulation time (unused but required by event detection).
y: State array of shape (neurons, 3) containing [v, v_slow, i_app].
**kwargs: Additional keyword arguments (unused).
Returns:
Spike condition array of shape (neurons,). Positive values indicate v > vthr.
"""
return y[:, 0] - self.vthr
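# Illustrative-only Euler integration using the methods above (the repository
# presumably integrates these dynamics with a proper ODE solver elsewhere).
if __name__ == "__main__":
    cell = FHNRS()
    y = cell.init_state(n_neurons=1).at[:, 2].set(2.5)  # inject applied current
    dt = 0.01
    for _ in range(1000):  # plain forward Euler, for illustration only
        y = y + dt * cell.dynamics(0.0, y, {})
    print(y[0], cell.spike_condition(0.0, y))  # final state and (v - vthr)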


@@ -0,0 +1,194 @@
from typing import Any, Dict
import equinox as eqx
import jax
import jax.numpy as jnp
import optimistix as optx
from jaxtyping import Array, DTypeLike, Float
class WereRabbit(eqx.Module):
r"""
WereRabbit Neuron Model
The WereRabbit model implements a predator-prey dynamic with bistable
switching behavior controlled by a "moon phase" parameter $z$.
The dynamics are governed by:
$$
\begin{align}
z &= tanh(\rho (u-v)) \\
\frac{du}{dt} &= z - z \alpha e^{\beta v} [1 + \gamma (0.5 - u)] - \sigma \\
\frac{dv}{dt} &= -z - z \alpha e^{\beta u} [1 + \gamma (0.5 - v)] - \sigma
\end{align}
$$
where $z$ represents the "moon phase" that switches the predator-prey roles.
Attributes:
alpha: Current scaling parameter $\alpha = I_{n0}/I_{bias}$ (default: 0.0129)
beta: Exponential slope $\beta = \kappa/U_t$ (default: 15.6)
gamma: Coupling parameter $\gamma$ (default: 0.26)
rho: Steepness of the tanh function $\rho$ (default: 5)
sigma: Fixpoint distance scaling $\sigma$ (default: 0.6)
rtol: Relative tolerance for the spiking fixpoint calculation.
atol: Absolute tolerance for the spiking fixpoint calculation.
"""
dtype: DTypeLike = eqx.field(static=True)
rtol: float = eqx.field(static=True)
atol: float = eqx.field(static=True)
alpha: float = eqx.field(static=True) # I_n0 / I_bias ratio
beta: float = eqx.field(static=True) # k / U_t (inverse thermal scale)
gamma: float = eqx.field(static=True) # coupling coefficient
rho: float = eqx.field(static=True) # tanh steepness
sigma: float = eqx.field(static=True) # bias scaling (s * I_bias)
def __init__(
self,
*,
atol: float = 1e-3,
rtol: float = 1e-3,
alpha: float = 0.0129,
beta: float = 15.6,
gamma: float = 0.26,
rho: float = 5.0,
sigma: float = 0.6,
dtype: DTypeLike = jnp.float32,
):
r"""Initialize the WereRabbit neuron model.
Args:
rtol: Relative tolerance for the spiking fixpoint calculation.
atol: Absolute tolerance for the spiking fixpoint calculation.
alpha: Current scaling parameter $\alpha = I_{n0}/I_{bias}$ (default: 0.0129)
beta: Exponential slope $\beta = \kappa/U_t$ (default: 15.6)
gamma: Coupling parameter $\gamma$ (default: 0.26)
rho: Steepness of the tanh function $\rho$ (default: 5)
sigma: Fixpoint distance scaling $\sigma$ (default: 0.6)
dtype: Data type for arrays (default: float32).
"""
self.dtype = dtype
self.alpha = alpha
self.beta = beta
self.gamma = gamma
self.rho = rho
self.sigma = sigma
self.rtol = rtol
self.atol = atol
def init_state(self, n_neurons: int) -> Float[Array, "neurons 2"]:
"""Initialize the neuron state variables.
Args:
n_neurons: Number of neurons to initialize.
Returns:
Initial state array of shape (neurons, 2) containing [u, v], the
predator/prey membrane voltages, both initialized to zero.
"""
x1 = jnp.zeros((n_neurons,), dtype=self.dtype)
x2 = jnp.zeros((n_neurons,), dtype=self.dtype)
return jnp.stack([x1, x2], axis=1)
def vector_field(self, y: Float[Array, "neurons 2"]) -> Float[Array, "neurons 2"]:
"""Compute vector field of the neuron state variables.
This implements the WereRabbit dynamics
- du/dt: Predator dynamics
- dv/dt: WerePrey dynamics
Args:
y: State array of shape (neurons, 2) containing [u, v].
Returns:
Time derivatives of shape (neurons, 2) containing [du/dt, dv/dt].
"""
u = y[:, 0]
v = y[:, 1]
# "Moon phase": smoothly switches the predator/prey roles based on u - v.
z = jax.nn.tanh(self.rho * (u - v))
du = (
z * (1 - self.alpha * jnp.exp(self.beta * v) * (1 + self.gamma * (0.5 - u)))
- self.sigma
)
dv = (
z * (-1 + self.alpha * jnp.exp(self.beta * u) * (1 + self.gamma * (0.5 - v)))
- self.sigma
)
# If z vanishes everywhere (all u == v), redirect the constant -sigma drift
# toward the origin instead of uniformly downward.
dv = jnp.where(jnp.allclose(z, 0.0), dv * jnp.sign(v), dv)
du = jnp.where(jnp.allclose(z, 0.0), du * jnp.sign(u), du)
return jnp.stack([du, dv], axis=1)
def dynamics(
self,
t: float,
y: Float[Array, "neurons 2"],
args: Dict[str, Any],
) -> Float[Array, "neurons 2"]:
"""Compute time derivatives of the neuron state variables.
This implements the WereRabbit dynamics
- du/dt: Predator dynamics
- dv/dt: WerePrey dynamics
Args:
t: Current simulation time (unused but required by framework).
y: State array of shape (neurons, 2) containing [u, v].
args: Additional arguments (unused but required by framework).
Returns:
Time derivatives of shape (neurons, 2) containing [du/dt, dv/dt].
"""
return self.vector_field(y)
def spike_condition(
self,
t: float,
y: Float[Array, "neurons 2"],
**kwargs: Any,
) -> Float[Array, " neurons"]:
"""Compute spike condition for event detection.
A spike is triggered when the system reaches a fixpoint: the condition
crosses zero once the norm of the vector field drops below the combined
absolute/relative tolerance.
Args:
t: Current simulation time (unused but required by the framework).
y: State array of shape (neurons, 2) containing [u, v].
**kwargs: Additional keyword arguments (unused).
Returns:
Spike condition array of shape (neurons,). Positive values indicate a spike.
"""
_atol = self.atol
_rtol = self.rtol
_norm = optx.rms_norm
vf = self.dynamics(t, y, {})
@jax.vmap
def calculate_norm(vf_row, y_row):
# Per-neuron fixpoint test: tolerance minus the vector-field norm.
return _atol + _rtol * _norm(y_row) - _norm(vf_row)
return calculate_norm(vf, y)
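# --- Editorial usage sketch (not part of the commit) ---------------------------
# Drive the WereRabbit model with a fixed-step Euler loop and record the
# fixpoint-based spike condition along the trajectory. The step size, horizon,
# and initial nudge are arbitrary assumptions, not values from the repository.
import jax
import jax.numpy as jnp

model = WereRabbit()
y0 = model.init_state(n_neurons=4) + 0.1  # nudge the state off the origin

def euler_step(y, _):
    dt = 1e-3
    y_next = y + dt * model.dynamics(0.0, y, {})
    return y_next, model.spike_condition(0.0, y_next)

y_final, spike_trace = jax.lax.scan(euler_step, y0, None, length=1000)
print(spike_trace.shape)  # (1000, 4): spike condition per step and per neuron
# -------------------------------------------------------------------------------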

64
src/felice/solver.py Normal file

@@ -0,0 +1,64 @@
from typing import Optional
import equinox as eqx
import jax
from diffrax import AbstractSolver, AbstractTerm
from diffrax._custom_types import Args, BoolScalarLike, DenseInfo, RealScalarLike, Y
from diffrax._solution import RESULTS
from diffrax._solver.base import _SolverState
from jaxtyping import PyTree
class ClipSolver(eqx.Module):
"""Wrap a diffrax solver and clip every state component to be non-negative
(element-wise ReLU) after each step."""
solver: AbstractSolver
def __getattr__(self, name):
# Delegate any attribute not defined here (e.g. init, order,
# term_structure) to the wrapped solver.
return getattr(self.solver, name)
def step(
self,
terms: PyTree[AbstractTerm],
t0: RealScalarLike,
t1: RealScalarLike,
y0: Y,
args: Args,
solver_state: _SolverState,
made_jump: BoolScalarLike,
) -> tuple[Y, Optional[Y], DenseInfo, _SolverState, RESULTS]:
"""Make a single step of the solver.
Each step is made over the specified interval $[t_0, t_1]$.
**Arguments:**
- `terms`: The PyTree of terms representing the vector fields and controls.
- `t0`: The start of the interval that the step is made over.
- `t1`: The end of the interval that the step is made over.
- `y0`: The current value of the solution at `t0`.
- `args`: Any extra arguments passed to the vector field.
- `solver_state`: Any evolving state for the solver itself, at `t0`.
- `made_jump`: Whether there was a discontinuity in the vector field at `t0`.
Some solvers (notably FSAL Runge--Kutta solvers) usually assume that there
are no jumps and for efficiency re-use information between steps; this
indicates that a jump has just occurred and this assumption is not true.
**Returns:**
A tuple of several objects:
- The value of the solution at `t1`.
- A local error estimate made during the step. (Used by adaptive step size
controllers to change the step size.) May be `None` if no estimate was
made.
- Some dictionary of information that is passed to the solver's interpolation
routine to calculate dense output. (Used with `SaveAt(ts=...)` or
`SaveAt(dense=...)`.)
- The value of the solver state at `t1`.
- An integer (corresponding to `diffrax.RESULTS`) indicating whether the step
happened successfully, or if (unusually) it failed for some reason.
"""
y1, y_error, dense_info, solver_state, result = self.solver.step(
terms, t0, t1, y0, args, solver_state, made_jump
)
# Clamp every state leaf to be non-negative (element-wise ReLU).
y1_clipped = jax.tree_util.tree_map(jax.nn.relu, y1)
return y1_clipped, y_error, dense_info, solver_state, result
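# --- Editorial usage sketch (not part of the commit) ---------------------------
# One clipped Euler step on a simple decay ODE. The oversized step makes Euler
# overshoot below zero, so the ReLU clip is visible. Whether diffeqsolve accepts
# this duck-typed wrapper depends on the diffrax version, so `step` is called
# directly here; all values are illustrative assumptions.
import diffrax
import jax.numpy as jnp

term = diffrax.ODETerm(lambda t, y, args: -y)
solver = ClipSolver(solver=diffrax.Euler())
y0 = jnp.array([0.05, 1.0])
state = solver.init(term, 0.0, 2.0, y0, None)  # delegated to Euler via __getattr__
y1, _, _, state, _ = solver.step(term, 0.0, 2.0, y0, None, state, made_jump=False)
print(y1)  # Euler alone would give [-0.05, -1.0]; the clip yields [0.0, 0.0]
# -------------------------------------------------------------------------------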

0
tests/__init__.py Normal file

42
tox.ini Normal file

@@ -0,0 +1,42 @@
[tox]
requires =
tox>=4.33.0
tox-uv>=1.29.0
env_list = type, lint, 3.14, 3.13, 3.12, 3.11
[testenv]
description = Run tests under {base_python}
deps =
pytest>=9.0.2
pytest-sugar>=1.1.1
commands =
uv pip install .[local]
pytest {posargs:tests}
[testenv:lint]
description = Run linters
skip_install = true
deps =
ruff>=0.14.9
commands =
ruff check {posargs:src tests scripts}
ruff format --check {posargs:src tests scripts}
[testenv:format]
description = Run code formatting
skip_install = true
deps =
ruff>=0.14.9
commands =
ruff check --fix {posargs:src tests scripts}
ruff check --select I --fix {posargs:src tests scripts}
ruff format {posargs:src tests scripts}
[testenv:type]
description = Run type checker
deps =
mypy>=1.19.1
commands =
uv pip install .[local]
uv pip install pandas-stubs types-tqdm
mypy {posargs:src tests scripts}

3838
uv.lock generated Normal file

File diff suppressed because it is too large