
🧬 NEAT Neural Network for DenoJS

This project is a practical implementation of a neural network based on the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, written in DenoJS using TypeScript, with additional features such as error-guided discovery, memetic evolution, and distributed workflows.

For project terminology, coding conventions, and development guidelines, see AGENTS.md.

✨ Feature Highlights

  1. Extendable Observations: Input and output features are identified by stable UUIDs in the exported representation, rather than only by positional indices. This avoids restarting the evolution process when new observations are added, and makes it practical to evolve creatures on multiple machines and then recombine them, much like NEAT’s historical marking for genes (Stanley & Miikkulainen, 2002).
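
A minimal sketch of the idea (the shape below is illustrative, not the library’s actual export format): because each input carries a stable UUID, a saved creature can be aligned with a grown observation set without restarting evolution.

```typescript
// Hypothetical export shape: each input feature carries a stable UUID, so
// its positional index is an implementation detail that may change freely.
interface ExportedInput {
  uuid: string; // stable identity, analogous to NEAT's historical markings
  index: number; // current position in the input vector
}

// Align a saved creature's inputs with a new observation set. Known UUIDs
// keep their identity at whatever index they now occupy; unknown UUIDs are
// flagged as newly added observations.
function remapInputs(
  saved: ExportedInput[],
  liveUUIDs: string[],
): Array<ExportedInput & { isNew: boolean }> {
  const known = new Set(saved.map((i) => i.uuid));
  return liveUUIDs.map((uuid, index) => ({
    uuid,
    index,
    isNew: !known.has(uuid),
  }));
}
```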

  2. Distributed Training: Training and evolution can be run on multiple independent nodes. The best-of-breed creatures can later be combined on a centralised controller node, mirroring the island model used in evolutionary algorithms.

  3. Lifelong Learning: Designed for continuous learning in changing environments. The same population can keep training and adapting as new data arrives over weeks or months, while still relying on your training data to keep past knowledge represented.

  4. Efficient Model Utilisation: Once trained, the current best model can be utilised efficiently by calling the activate function. This runs a single forward pass that maps inputs to outputs.

    Note

    Activation uses WASM (required). The library initialises the WASM backend automatically; callers do not need to call any init function or set environment variables.
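
To illustrate what a single forward pass does, here is a toy sketch; the library runs this in WASM, and none of the names below are its real internals.

```typescript
// Toy forward pass: each neuron sums its weighted inputs plus a bias,
// applies its squash function, and exposes the result downstream.
type Neuron = {
  bias: number;
  squash: (x: number) => number;
  incoming: Array<[number, number]>; // [sourceActivationIndex, weight]
};

function forwardPass(
  inputs: number[],
  neurons: Neuron[], // hidden + output neurons, assumed topologically sorted
  outputCount: number,
): number[] {
  const activations = [...inputs];
  for (const n of neurons) {
    let sum = n.bias;
    for (const [src, w] of n.incoming) sum += activations[src] * w;
    activations.push(n.squash(sum));
  }
  return activations.slice(-outputCount); // the last neurons are the outputs
}
```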

  5. Unique Squash Functions: Supports unique squash functions such as IF, MAX and MIN, offering a wider range of potential solutions. More about Activation Functions.
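
One plausible reading of these aggregation-style squashes is sketched below; the library’s exact definitions may differ, so treat this only as intuition for why they widen the solution space beyond sum-then-squash neurons.

```typescript
// Aggregation-style squashes: instead of squashing a weighted sum, MAX and
// MIN aggregate over the weighted inputs directly, and IF selects an input.
const MAX = (weighted: number[]): number => Math.max(...weighted);
const MIN = (weighted: number[]): number => Math.min(...weighted);

// IF: the first weighted input acts as a condition choosing between the
// next two (ordering and threshold are assumptions of this sketch).
const IF = ([cond, whenTrue, whenFalse]: number[]): number =>
  cond > 0 ? whenTrue : whenFalse;
```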

  6. Neuron Pruning: Neurons whose activations don’t vary during training are removed, and the biases in associated neurons are adjusted. More about Pruning (Neural Networks).
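
The bias adjustment follows from linearity: if a neuron always outputs the same activation a, each downstream contribution w·a is constant and can be folded into that neuron’s bias. A standalone sketch (not the library’s code):

```typescript
type Synapse = { from: number; to: number; weight: number };

// If neuron `idx` always outputs `constActivation`, remove it and fold its
// contribution into each downstream neuron's bias: b' = b + w * a_const.
function pruneConstantNeuron(
  biases: number[],
  synapses: Synapse[],
  idx: number,
  constActivation: number,
): { biases: number[]; synapses: Synapse[] } {
  const biasesOut = [...biases];
  for (const s of synapses) {
    if (s.from === idx) biasesOut[s.to] += s.weight * constActivation;
  }
  return {
    biases: biasesOut,
    // Drop the neuron's synapses entirely; its effect now lives in biases.
    synapses: synapses.filter((s) => s.from !== idx && s.to !== idx),
  };
}
```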

  7. CRISPR: Allows injection of genes into a population of creatures during evolution. More about CRISPR.

  8. Grafting: If parents aren’t “genetically compatible”, the grafting algorithm enables cross-island interbreeding, preserving diversity in the same spirit as island-model evolution.
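
The grafting step itself is library-specific, but the incompatibility gate can be sketched with the standard NEAT compatibility distance δ = c1·E/N + c2·D/N + c3·W̄ (Stanley & Miikkulainen, 2002), modelling genomes as maps from innovation id to synapse weight:

```typescript
// Standard NEAT compatibility distance: excess genes (E), disjoint genes
// (D), and mean weight difference of matching genes (avgWeightDiff).
function compatibilityDistance(
  a: Map<number, number>, // innovation id -> weight
  b: Map<number, number>,
  c1 = 1.0,
  c2 = 1.0,
  c3 = 0.4,
): number {
  const maxA = Math.max(...Array.from(a.keys()));
  const maxB = Math.max(...Array.from(b.keys()));
  let excess = 0, disjoint = 0, matching = 0, weightDiff = 0;
  const allIds = new Set([...Array.from(a.keys()), ...Array.from(b.keys())]);
  for (const id of allIds) {
    if (a.has(id) && b.has(id)) {
      matching++;
      weightDiff += Math.abs(a.get(id)! - b.get(id)!);
    } else if (id > Math.min(maxA, maxB)) {
      excess++; // beyond the shorter genome's innovation range
    } else {
      disjoint++;
    }
  }
  const n = Math.max(a.size, b.size, 1);
  const avgWeightDiff = matching > 0 ? weightDiff / matching : 0;
  return (c1 * excess + c2 * disjoint) / n + c3 * avgWeightDiff;
}
```

Parents whose distance exceeds a threshold would be treated as incompatible, triggering grafting instead of ordinary crossover.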

  9. Memetic Evolution: Records and utilises the biases and weights of the fittest creatures to fine-tune future generations. Learn more about Memetic Algorithms.
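
One simple way to picture the memetic step (a sketch of the idea, not the library’s algorithm): remember the fittest creature’s weights and pull offspring part-way toward that memory.

```typescript
// Blend an offspring's weights toward a recorded elite's weights. With
// pull = 0 the memory is ignored; with pull = 1 it is copied outright.
function memeticBlend(
  offspring: number[],
  eliteMemory: number[],
  pull = 0.25, // illustrative strength, not a library constant
): number[] {
  return offspring.map((w, i) => w + pull * (eliteMemory[i] - w));
}
```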

  10. Error-Guided Structural Evolution: Dynamically identifies and creates new synapses by analysing neuron activations and errors. A dedicated Rust module performs GPU-accelerated analysis and proposes structural candidates. Discovery runs typically find improvements of 0.5-3% per run that accumulate over many iterations.

    Warning

    Relies entirely on the NEAT-AI-Discovery Rust extension library. If the library is not available, the discovery phase is skipped; there is no TypeScript fallback.

  11. Visualisation

  12. Size-Aware Mutation Strategy: Automatically adjusts the mutation strategy based on creature size; large creatures focus on weight/bias modification rather than topology expansion.
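
A sketch of such a size-aware policy (thresholds and probabilities below are illustrative, not the library’s constants):

```typescript
type MutationKind = "add-neuron" | "add-synapse" | "modify-weight" | "modify-bias";

// The chance of a structural mutation shrinks as the creature grows, so
// large creatures mostly fine-tune weights and biases instead.
function pickMutation(neuronCount: number, rand: () => number): MutationKind {
  const structural = neuronCount < 50 ? 0.5 : neuronCount < 500 ? 0.2 : 0.05;
  if (rand() < structural) {
    return rand() < 0.5 ? "add-neuron" : "add-synapse";
  }
  return rand() < 0.5 ? "modify-weight" : "modify-bias";
}
```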

  13. Adaptive Mutation Rate Based on Fitness Progress: Mutation rate is automatically adjusted based on whether evolution is improving, stagnating, or stable, helping balance exploration and exploitation.
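
The balancing act can be sketched as follows (step sizes and bounds are illustrative): raise the rate when fitness stagnates to explore more, lower it while fitness improves to exploit the current direction.

```typescript
// Adjust the mutation rate from one generation's best fitness to the next.
function adaptMutationRate(
  rate: number,
  previousBest: number,
  currentBest: number,
  min = 0.01,
  max = 0.5,
): number {
  if (currentBest > previousBest) {
    return Math.max(min, rate * 0.9); // improving: exploit
  }
  return Math.min(max, rate * 1.1); // stagnating or regressing: explore
}
```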

  14. Continuous Incremental Discovery: For distributed, multi-machine discovery workflows that accumulate small improvements over time, see the Discovery Guide.

  15. Training Data Fuzzing: Noise injection during training prevents creatures from memorising exact training examples. Gaussian or uniform perturbations are added to inputs (and optionally outputs for label smoothing) each iteration, encouraging robust generalisation.
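
A minimal sketch of the Gaussian case (sigma and the sampler are illustrative; the RNG is injected so the example stays deterministic):

```typescript
// Add Gaussian noise to each input so creatures cannot memorise examples.
function fuzzInputs(
  inputs: number[],
  sigma: number,
  gaussian: () => number, // standard normal sample
): number[] {
  return inputs.map((x) => x + sigma * gaussian());
}

// A Box-Muller standard normal sampler for real use.
function boxMuller(uniform: () => number = Math.random): number {
  const u1 = Math.max(uniform(), Number.EPSILON); // avoid log(0)
  const u2 = uniform();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}
```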

  16. K-Fold Cross-Validation: Built-in k-fold cross-validation evaluates creatures on held-out data folds during evolution, reducing overfitting to a single train/test split.
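
The fold split itself is simple to picture (a sketch, not the library’s implementation): each sample lands in exactly one held-out fold, and a creature’s score is then averaged across folds.

```typescript
// Assign sample indices round-robin to k folds.
function kFoldIndices(sampleCount: number, k: number): number[][] {
  const folds: number[][] = Array.from({ length: k }, () => []);
  for (let i = 0; i < sampleCount; i++) folds[i % k].push(i);
  return folds;
}
```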

  17. Hyperparameter Self-Adaptation: Each creature carries its own learning rate, mutation rates, and regularisation strength. These evolve alongside topology and weights — creatures with better-suited hyperparameters achieve higher fitness and propagate their settings, inspired by self-adaptive evolution strategies.
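
In self-adaptive evolution strategies this is typically done with a log-normal perturbation, so the values stay positive and scale multiplicatively; a sketch with illustrative names and tau:

```typescript
// Each creature carries its own hyperparameters, mutated at reproduction.
interface SelfAdaptive {
  learnRate: number;
  mutationRate: number;
  regularisation: number;
}

function mutateHyperparams(
  h: SelfAdaptive,
  gaussian: () => number, // standard normal sample
  tau = 0.1, // illustrative self-adaptation strength
): SelfAdaptive {
  const jitter = () => Math.exp(tau * gaussian()); // log-normal, stays positive
  return {
    learnRate: h.learnRate * jitter(),
    mutationRate: h.mutationRate * jitter(),
    regularisation: h.regularisation * jitter(),
  };
}
```

Creatures whose hyperparameters suit the task earn higher fitness, so the settings themselves propagate.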

  18. Transfer Learning: Export trained creatures as checkpoints with metadata, import them into new tasks with UUID mapping for different input/output configurations, and seed populations with pre-trained creatures for transfer learning across related problems.

  19. ONNX Export: Export trained creatures to the ONNX (Open Neural Network Exchange) format for deployment in standard ML inference pipelines, bridging the gap between neuroevolution and production deployment.

🚀 Quick Start

// Single discovery iteration
const result = await creature.discoveryDir(dataDir, {
  discoveryRecordTimeOutMinutes: 1,
  discoveryAnalysisTimeoutMinutes: 10,
});

if (result.improvement) {
  console.log(`Found ${result.improvement.changeType} improvement!`);
  // Use improved creature for next iteration
}

Tip

For distributed, multi-machine workflows that accumulate small improvements over time, see the Discovery Guide for a complete walkthrough.

💻 Usage

This project is designed to be used in a DenoJS environment. Please refer to the DenoJS documentation for setup and usage instructions.

📚 Documentation

For detailed documentation, see the docs/ directory:

🚀 Getting Started

🧠 Core Concepts

  • COMPARISON.md: How NEAT compares to traditional neural networks, CNNs, RNNs, and modern LLMs
  • Discovery Guide: Complete guide to distributed, multi-machine discovery workflows, including failure/success caches, replay, candidate category limits, focus overrides, and the cost-of-growth gate
  • Intelligent Design: Systematic squash function optimisation for hidden neurons

🔧 API & Reference

🔬 Advanced Topics

⚡ Operations

  • Performance Tuning: Tuning WASM caches, thread pools, memory management, and scaling for large-scale training
  • Performance Research: WASM migration research and benchmark learnings
  • Troubleshooting: Common issues and solutions for WASM, discovery, memory, CI, and configuration

🤝 For Contributors

🤝 Contributions

Contributions are welcome! See CONTRIBUTING.md for development setup, workflow, and guidelines. Please submit a pull request or open an issue to discuss potential changes/additions.

⚖️ Licence

This project is licensed under the terms of the Apache License 2.0. For the full licence text, please see LICENSE.

Built with the Deno Standard Library
