NEAT Neural Network for DenoJS
This project is a practical implementation of a neural network based on the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, written in TypeScript for DenoJS, with additional features such as error-guided discovery, memetic evolution, and distributed workflows.
Terminology
We keep the tone playful, but every nickname maps to a mainstream machine-learning idea:
- Creatures are simply individual neural networks/genomes inside a NEAT population, as described in the original NEAT paper by Stanley & Miikkulainen (2002).
- Memetic evolution refers to the well-studied combination of evolutionary search plus local gradient descent, also called a memetic algorithm.
- CRISPR injections describe targeted gene edits inspired by the real-world CRISPR gene editing technique; in practice we add hand-crafted synapses/neurons.
- Grafting is crossover between incompatibly shaped genomes, similar to the island-model speciation strategies used in evolutionary algorithms.
If you spot another fun label, expect it to be backed by a reference to the standard term the first time it appears.
Feature Highlights
Extendable Observations: Observations can be extended over time because input and output features are identified by stable UUIDs in the exported representation, rather than only by positional indices. This avoids the need to restart the evolution process as new observations are added, and makes it practical to evolve creatures on multiple machines and then recombine them, much like NEAT's historical markings for genes (Stanley & Miikkulainen, 2002).
Distributed Training: Training and evolution can be run on multiple independent nodes. The best-of-breed creatures can later be combined on a centralized controller node, mirroring the island model used in evolutionary algorithms. This feature allows for distributed computing and potentially faster training times, enhancing the efficiency of the learning process.
Lifelong Learning: Unlike many pre-trained neural networks, this project is designed for continuous learning, making it adaptable in changing environments. In long-running deployments (for example, generating fresh training data each day from many years of market and company data), the same population can keep training and adapting as time goes on. New observations can be added over weeks or months by widening the dataset and introducing new UUID-indexed features, without throwing away existing creatures. This supports learning in the spirit of continual learning, while still relying on your training data to keep past knowledge represented.
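As a purely conceptual sketch (not the library's export format), UUID-indexed features look roughly like this: each feature is keyed by a stable UUID, so the dataset can be widened later without renumbering existing inputs. The keys below are hypothetical placeholders.

```ts
// Conceptual illustration only - not the exported file format. Features are
// keyed by stable UUIDs, so adding a new one does not shift existing indices.
const observation: Record<string, number> = {
  "11111111-0000-4000-8000-000000000001": 101.25, // hypothetical "price" feature
  "22222222-0000-4000-8000-000000000002": 0.42, // hypothetical "volume" feature
};

// Months later, a new feature arrives under its own UUID; creatures that do not
// yet have an input neuron for it can simply ignore it.
observation["33333333-0000-4000-8000-000000000003"] = -0.15;
```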
Efficient Model Utilisation: Once trained, the current best model can be utilised efficiently by calling the `activate` function. This runs a single forward pass that maps inputs to outputs, allowing for quick and easy deployment of the trained model.
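A minimal usage sketch (assuming `creature` is a trained Creature instance from your evolution run; the input values are illustrative only):

```ts
// `creature` is assumed to already be trained/loaded. `activate` runs the
// single forward pass described above: inputs in, output activations out.
const observation = [0.12, 0.87, -0.33]; // illustrative input vector
const prediction = creature.activate(observation);
console.log(prediction); // array of output neuron activations
```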
Feed-forward vs recurrent connections
NEAT-AI supports two broad topology styles:
Feed-forward (forward-only): No recurrent connections. This means:
- No self-loops (`from == to`)
- No feedback/backward connections (`from > to`, i.e. an edge that points to an earlier neuron index)
- Each activation depends only on the current input and upstream neuron activations
Recurrent (feedback-enabled): Recurrent connections are allowed (self-loops and feedback/backward connections). These can make use of previous activations and are useful for time-series style behaviours.
In our production workloads, each record is treated as independent (no temporal dependence), so the default configuration is feed-forward/forward-only.
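As an illustration of the forward-only constraint (this is not the library's internal validation code), a connection list indexed by neuron position is feed-forward when every edge satisfies both rules above:

```ts
// Illustrative check only. A topology is feed-forward when no synapse is a
// self-loop (from == to) and none points backwards (from > to).
interface SynapseIndex {
  from: number;
  to: number;
}

function isFeedForward(synapses: SynapseIndex[]): boolean {
  return synapses.every((s) => s.from !== s.to && s.from < s.to);
}

console.log(isFeedForward([{ from: 0, to: 2 }, { from: 1, to: 2 }])); // true
console.log(isFeedForward([{ from: 2, to: 2 }])); // false: self-loop
```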
Unique Squash Functions: The neural network supports unique squash functions such as IF, MAX and MIN. These functions provide more options for the activation function, which can lead to different network behaviours, offering a wider range of potential solutions. More about Activation Functions.
Neuron Pruning: Neurons whose activations don't vary during training are removed, and the biases in the associated neurons are adjusted. This feature optimizes the network by reducing redundancy and computational load. More about Pruning (Neural Networks).
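Conceptually, pruning a constant neuron folds its fixed contribution into the biases of the neurons it feeds. A rough sketch of that adjustment (not the library internals):

```ts
// Rough conceptual sketch, not the library's pruning code. A neuron whose
// activation never varies contributes weight * constantActivation downstream;
// that contribution can be absorbed into each target's bias before the neuron
// and its synapses are removed.
interface DownstreamTarget {
  bias: number;
  incomingWeight: number; // weight of the synapse from the constant neuron
}

function absorbConstantNeuron(
  constantActivation: number,
  targets: DownstreamTarget[],
): void {
  for (const target of targets) {
    target.bias += target.incomingWeight * constantActivation;
  }
}
```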
CRISPR: Allows injection of genes into a population of creatures during evolution. This feature can introduce new traits and potentially improve the performance of the population. More about CRISPR.
Grafting: If the parents aren't "genetically compatible", the "grafting" algorithm is used to graft structure from one parent onto the other to produce the child. This allows species from different islands to interbreed, preserving diversity in the same spirit as cross-island migration in island-model evolution.
Memetic Evolution: The algorithm can now record and utilize the biases and weights of the fittest creatures to fine-tune future generations. This process, inspired by the concept of memes, allows the system to "remember" and build upon successful traits, enhancing the evolutionary process. Learn more about Memetic Algorithms.
Error-Guided Structural Evolution: Dynamically identifies and creates new synapses by analysing neuron activations and errors. This targeted structural adaptation improves performance by explicitly reducing neuron-level errors, blending evolutionary topology adjustments with error-driven learning. The Rust discovery engine can currently reconstruct hidden neurons using standard squashes including ReLU, GELU, ELU, SELU, Softplus, LOGISTIC (sigmoid), and TANH.
Instead of relying purely on random structural mutations (as many NEAT implementations do), a dedicated Rust module performs GPU-accelerated analysis and proposes a focused set of structural candidates. In our own workloads, discovery runs typically uncover small but meaningful improvements (around 0.5ā3% per run) that accumulate over many iterations without hand-editing architectures.
Note: Error-Guided Structural Evolution now relies entirely on the NEAT-AI-Discovery Rust extension library. If the library is not available, the discovery phase is skipped wholesale; there is no TypeScript fallback.
Discovery Integration Guide: Step-by-step instructions for running discovery via `Creature.discoveryDir()` are available in the DiscoveryDir guide.
Adaptive Mutation Rate: Large creatures (e.g., 619 neurons, 17,935 synapses) have a massive search space. Adding more structure (ADD_NODE, ADD_CONNECTION) makes the search space exponentially larger while rarely improving fitness. The adaptive mutation rate feature automatically adjusts the mutation strategy based on creature size:
- Small creatures (< 100 neurons): Normal topology mutation rates
- Medium creatures (100-300 neurons): Gradually reduced topology expansion
- Large creatures (> 300 neurons): Focus primarily on MOD_WEIGHT, MOD_BIAS
Configuration example:
```ts
const options: NeatOptions = {
  adaptiveMutationThresholds: {
    medium: 100, // neurons threshold for medium creatures
    large: 300, // neurons threshold for large creatures
    largeTopologyWeight: 0.1, // 10% chance of topology mutation for large
  },
};
```
This leads to faster convergence for large creatures while preventing unnecessary structural growth that rarely improves fitness.
Continuous Incremental Discovery: For distributed, multi-machine discovery workflows that accumulate small improvements over time, see the Discovery Guide.
Documentation
For detailed documentation, see the docs/ directory:
- Discovery Guide: Complete guide to distributed, multi-machine discovery workflows
- Elastic back propagation: Why we prefer minimum-change weight updates and avoid pushing saturated squashes (e.g. ArcTan) further into saturation
- DiscoveryDir API: Technical API reference for `Creature.discoveryDir()` and data preparation
- GPU Acceleration: GPU acceleration for discovery on macOS using Metal
Comparison with Other AI Approaches
Want to understand how NEAT compares to traditional neural networks, CNNs, RNNs, and modern LLMs? See our comprehensive COMPARISON.md document which explains:
- What we've implemented and how it works
- Pros and cons of our NEAT approach vs traditional methods
- Our unique innovations (memetic evolution, error-guided discovery, etc.)
- Shortcomings and future work opportunities with references
This comparison helps you understand when to use NEAT vs other approaches and identifies areas for future development.
Usage
This project is designed to be used in a DenoJS environment. Please refer to the DenoJS documentation for setup and usage instructions.
Discovery Integration
Discovery enables continuous incremental improvement of neural networks through automated structural analysis. Each discovery run finds small improvements (0.5-3%), which accumulate over time through repeated iterations.
Quick Start
```ts
// Single discovery iteration
const result = await creature.discoveryDir(dataDir, {
  discoveryRecordTimeOutMinutes: 1,
  discoveryAnalysisTimeoutMinutes: 10,
});

if (result.improvement) {
  console.log(`Found ${result.improvement.changeType} improvement!`);
  // Use improved creature for next iteration
}
```

Documentation
- Discovery Guide: Complete guide to distributed, multi-machine discovery workflows
- DiscoveryDir API: Technical API reference and data preparation
Evaluation Summary Logging
By default, the library logs evaluation summaries with the [DiscoveryRunner]
prefix. To avoid duplicate logging when your application also logs evaluation
results:
- Disable library logging: Set `discoveryDisableEvaluationSummaryLogging: true` in your options
- Use exported formatting utilities: Import `formatErrorDelta` and `formatPercentWithSignificantDigits` from the discovery module to format summaries consistently
```ts
import { formatErrorDelta } from "./mod.ts";

const result = await creature.discoveryDir(dataDir, {
  discoveryDisableEvaluationSummaryLogging: true, // Disable library logging
  // ... other options
});

// Log summaries yourself using the exported formatters
if (result.evaluations) {
  for (const summary of result.evaluations) {
    console.log(
      `Candidate: ${summary.changeType}, improvement: ${
        formatErrorDelta(summary.errorDeltaPct ?? 0)
      }`,
    );
  }
}
```

Discovery is designed for continuous operation across multiple machines, accumulating improvements over hundreds of iterations. See the Discovery Guide for real-world workflows and production-tuned configurations.
Discovery Failure Cache
When running discovery iteratively with a stable training dataset, you can enable failure caching to avoid re-evaluating candidates that previously failed to improve the creature's score. This significantly speeds up discovery runs by skipping known-failing candidates.
```ts
const result = await creature.discoveryDir(dataDir, {
  discoveryFailureCacheDir: ".discovery/failure-cache",
  // ... other options
});
```

How it works:
- After evaluating candidates, those that fail to improve the score are cached
- On subsequent runs, cached candidates are skipped before evaluation
- Cache keys use weight/bias magnitude (exponent only), so only significant changes trigger re-evaluation
- Delete the cache directory when your training dataset changes
When to use:
- Training dataset changes infrequently (e.g., once a day)
- Running discovery repeatedly on the same creature
- Want to reduce wasted computation on known-failing candidates
Cache key design:
- Neuron removal (`remove-neuron`, `remove-low-impact`): Uses just the neuron UUID; if removal failed once, it won't succeed until the creature structure changes significantly
- Synapse removal (`remove-synapse`): Uses just the from/to neuron UUIDs
- Other candidates: Uses the exponential component of weights/biases in scientific notation; weights like `0.123` and `0.234` (both `e-1`) map to the same key, while `0.001` (`e-3`) and `0.1` (`e-1`) map to different keys
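For intuition, deriving an exponent-only key component could look like the sketch below (illustrative only; the library's actual key format may differ):

```ts
// Illustrative only: one way to derive an exponent-only key, matching the
// behaviour described above (0.123 and 0.234 -> e-1; 0.001 -> e-3).
function exponentKey(value: number): string {
  if (value === 0) return "e0";
  const exponent = Math.floor(Math.log10(Math.abs(value)));
  return `e${exponent}`;
}

console.log(exponentKey(0.123)); // "e-1"
console.log(exponentKey(0.234)); // "e-1"
console.log(exponentKey(0.001)); // "e-3"
```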
Cache entry metadata:
Each cached failure includes diagnostic metadata such as scores, error values, and the `discoveryVersion` field indicating which NEAT-AI-Discovery library version generated the candidate. This helps identify when cache entries may be stale due to library upgrades that improve candidate generation.

For debugging purposes, each cache entry also includes a `rustRequest` field containing the original Rust candidate response (e.g., `neuronDetails` for add-neurons, `synapseCandidate` for add-synapses, `squashCandidate` for change-squash, `removalCandidate` for remove-low-impact). This allows comparison between what Rust suggested and what actually happened during evaluation.
Discovery Success Cache + Replay
Discovery runs can take a long time (often tens of minutes). In that time, the normal evolution loop may advance the population so far that a successful discovery result is no longer competitive by the time you reinsert it.
To prevent successful discoveries being lost, you can enable a success cache and periodically replay cached successes against the current fittest creature.
Enable success caching during discovery:
```ts
const result = await creature.discoveryDir(dataDir, {
  discoveryFailureCacheDir: ".discovery/failure-cache",
  discoverySuccessCacheDir: ".discovery/success-cache",
  // ... other options
});
```

When `discoverySuccessCacheDir` is set, every single-step candidate that improves the score is persisted so it can be replayed later. The cache stores the candidate details (and diagnostic metadata), not a full creature export. (Combination candidates are not stored; replay will try combinations on demand.)
Replay cached successes against the current fittest creature:
```ts
const replay = await creature.discoveryReplayDir(dataDir, {
  discoverySuccessCacheDir: ".discovery/success-cache",
  // Optional tuning knobs (defaults are usually fine)
  discoveryReplayMaxSingles: 20,
  discoveryReplayMaxPairwise: 10,
  discoveryReplayMaxTriples: 8,
  // Optional (off by default): verify scores against the current dataset to
  // detect drift and avoid accepting stale improvements.
  discoveryReplayVerifyScores: true,
  // Bounded concurrency used during verification (defaults to max(availableCores, 8))
  discoveryReplayConcurrency: 8,
  // When verification is enabled, include claimed vs actual baseline drift details
  discoveryReplayRescoreBaseline: true,
  // Optional: return timing diagnostics for visibility into where replay time is spent.
  discoveryReplayDiagnostics: true,
});

if (replay.improvement) {
  console.log(replay.improvement.message);
  // Reinsert replay.improvement.creature into your population and re-evaluate.
}

// When discoveryReplayVerifyScores is enabled, replay also reports:
// - baselineRescore: claimed vs actual baseline score on the current dataset
// - verifiedImprovement: the selected best outcome, gated by baseline actual score
if (replay.baselineRescore?.changed) {
  console.log(replay.baselineRescore.reason);
}
if (replay.verifiedImprovement?.improved) {
  console.log(replay.verifiedImprovement.message);
}
if (replay.diagnostics) {
  console.log(replay.diagnostics.timingsMS);
}
```

Replay behaviour:
- Skips cached candidates that already appear to be applied to the current creature
- Re-scores the remaining candidates against the current creature
- Archives cache entries that no longer improve the score (obsolete successes) to an `obsolete` directory at the same level as the success cache directory
- Tries combinations of still-successful candidates (pairs/triples/all) and returns the best improvement
Obsolete entries archive:
When a cached success entry no longer results in an improvement (e.g., due to
training data drift), it is moved to an obsolete directory rather than being
deleted. This preserves the history of candidates that once resulted in
improvements:
- Success cache: `.discovery/success-cache/{changeType}/{key}.json`
- Obsolete archive: `.discovery/obsolete/{changeType}/{key}.json`
As with the failure cache, delete the success cache and obsolete directories when your training dataset changes materially.
Discovery Candidate Category Limits
You can control the minimum number of candidates evaluated per category. This is useful when certain mutation types are more successful for your use case. For example, if neuron removal is working well but adding neurons isn't, you can increase the removal candidates and reduce add-neurons candidates.
```ts
const result = await creature.discoveryDir(dataDir, {
  discoveryMinCandidatesPerCategory: {
    addNeurons: 0, // Skip add-neurons candidates
    addSynapses: 1, // Minimum 1 add-synapses candidate
    changeSquash: 1, // Minimum 1 change-squash candidate
    removeLowImpact: 10, // Evaluate 10 removal candidates
  },
  // ... other options
});
```

Default values:

- `addNeurons: 1`
- `addSynapses: 1`
- `changeSquash: 1`
- `removeLowImpact: 3`
Higher values mean more candidates of that type will be evaluated. Set to 0 to skip a category entirely.
Forced Focus Overrides
The discovery recorder now honours an optional discoveryFocusNeuronUUIDs
override. When supplied, the recorder prioritises those hidden/output neuron
UUIDs instead of sampling by error, giving you deterministic reproduction of a
known gap. Each entry must match a neuron in the crippled creature.
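A hedged sketch of supplying the override (the option name comes from this section; the UUID value is a placeholder):

```ts
// The UUID below is a placeholder - each entry must match a hidden/output
// neuron UUID present in the (crippled) creature being analysed.
const result = await creature.discoveryDir(dataDir, {
  discoveryFocusNeuronUUIDs: ["00000000-0000-4000-8000-000000000000"],
  // ... other options
});
```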
To see the override in action, run the example in the sibling NEAT-AI-Examples repository. The `discovery/discover_missing_neuron.ts` script generates a wide, long synthetic dataset, removes a known neuron, and invokes `Creature.discoveryDir()` with a forced focus list so you can reproduce production time-outs safely:
```sh
deno run --allow-read --allow-write --allow-env --allow-ffi \
  ../NEAT-AI-Examples/discovery/discover_missing_neuron.ts
```

The example writes synthetic assets to a hidden `.synthetic-discovery/` directory (ignored by git) and logs extended diagnostics whenever the Rust recorder flushes or hits the time-out path. Use it as the starting point for debugging "Invalid string length" failures without touching live workloads.
Discovery Cost-of-Growth Gate
Discovery candidates are now triaged using the configured `costOfGrowth` setting before they reach the evaluator. Each new synapse consumes 1 × `costOfGrowth`, while every new neuron consumes roughly 3 × `costOfGrowth` (two synapses plus the neuron). Candidates whose expected error reduction is smaller than their structural cost are skipped entirely. This keeps discovery focused on proposals that can actually repay the growth penalty and prevents logs from being flooded with meaningless +0.000% deltas.
Note: The following candidate types are excluded from the cost-of-growth threshold check:

- Squash changes (`change-squash`): Don't add structural complexity (no new synapses or neurons). They only modify the activation function of existing neurons, so there is no growth cost to penalise.
- Removal candidates (`remove-neuron`, `remove-synapse`, `remove-low-impact`): Don't add structural complexity; they remove it. They improve the score by reducing complexity rather than by reducing error, so removing elements that leave the error roughly unchanged still improves the creature's score.
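For clarity, the gate described above amounts to a comparison like the following sketch (illustrative types and names, not the internal implementation):

```ts
// Illustrative sketch of the triage rule: skip growth candidates whose expected
// error reduction cannot repay their structural cost.
interface GrowthCandidate {
  newSynapses: number;
  newNeurons: number;
  expectedErrorReduction: number;
}

function passesCostOfGrowthGate(
  candidate: GrowthCandidate,
  costOfGrowth: number,
): boolean {
  const structuralCost = candidate.newSynapses * costOfGrowth +
    candidate.newNeurons * 3 * costOfGrowth; // a neuron ~= two synapses plus itself
  return candidate.expectedErrorReduction >= structuralCost;
}
```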
Enabling the Rust Discovery Module
The Rust FFI extension shipped via NEAT-AI-Discovery provides the accelerated structural hints used by `discoveryDir()`. To enable it:

Clone the repository alongside this project and build/install the library:

```sh
git clone https://github.com/stSoftwareAU/NEAT-AI-Discovery.git
../NEAT-AI-Discovery/scripts/runlib.sh
```
The `runlib.sh` script automatically:

- Installs Rust and Cargo if missing (no sudo required)
- Builds the library in release mode
- Installs it to `~/.cargo/lib/` with version tracking
- Signs it on macOS for FFI compatibility
Alternatively, export an explicit path to the library:

```sh
export NEAT_AI_DISCOVERY_LIB_PATH="/absolute/path/to/libneat_ai_discovery.dylib"
```
Grant FFI permissions and validate the installation:

```sh
deno run --allow-env --allow-ffi --allow-read scripts/check_discovery.ts
```
In your application, guard discovery calls with `isRustDiscoveryEnabled()` so that controllers fail fast when the module is unavailable.
When the library cannot be resolved, set `NEAT_RUST_DISCOVERY_OPTIONAL=true` in environments where skipping discovery should not abort the worker. Otherwise, treat a missing module as a deployment error and halt the job.
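A sketch of that guard pattern (the import path and surrounding wiring are assumptions; `isRustDiscoveryEnabled()` and the environment variable are named in this section):

```ts
import { isRustDiscoveryEnabled } from "./mod.ts"; // import path is an assumption

// `creature` and `dataDir` come from your own setup, as in the earlier examples.
const discoveryOptional = Deno.env.get("NEAT_RUST_DISCOVERY_OPTIONAL") === "true";

if (isRustDiscoveryEnabled()) {
  await creature.discoveryDir(dataDir, { /* ... discovery options ... */ });
} else if (discoveryOptional) {
  console.warn("NEAT-AI-Discovery unavailable; skipping discovery phase.");
} else {
  // Treat a missing module as a deployment error and halt the job.
  throw new Error("NEAT-AI-Discovery library not found");
}
```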
Deployment Checklist
Before committing code changes, ensure you complete the following steps:
Run quality checks in both repositories:
```sh
# In NEAT-AI-Discovery
cd ../NEAT-AI-Discovery
./quality.sh

# In NEAT-AI
cd ../NEAT-AI
./quality.sh
```
Increment version numbers:
- NEAT-AI: Update the `deno.json` version field (e.g., `0.204.1` → `0.204.2`)
- NEAT-AI-Discovery: Update the `Cargo.toml` version field (e.g., `0.1.41` → `0.1.42`)
Verify all tests pass in both repositories before committing.
These steps ensure code quality, proper versioning, and that all tests pass before deployment.
Contributions
Contributions are welcome. Please submit a pull request or open an issue to discuss potential changes/additions.
License
This project is licensed under the terms of the Apache License 2.0. For the full license text, please see LICENSE