Decentralized AI, powered by the community. Share your compute power. Join the network.
Your conversations follow you via API. Unique in the LLM market. Save, restore, delete — from any tool.
Windows, Linux, macOS. CPU and GPU. From your laptop to your server.
Models from 0.8B to 35B cooperate. The small one is fast, the big one is smart.
Zero-knowledge encryption. Opt-in memory. GDPR compliant.
Works with OpenCode, Cursor, aider, Open WebUI, and any compatible tool.
Most LLM providers give you a stateless API. Cellule.ai persists your conversations server-side and lets you save, restore, or delete them from any client — Open WebUI, Cursor, OpenCode, curl, whatever. Your context lives in the molecule, not in the client.
save / enregistre
Persist the current conversation in your account. Works in any chat client.
restore / restaure <num>
Bring back a saved conversation. Type restore / restaure alone to list them, then pick one by number or ID.
delete / supprime <num|all>
Remove one conversation or all at once. Opt-in memory + GDPR compliant.
These commands work inside any chat, across any tool that talks to cellule.ai/v1. Type save / enregistre in OpenWebUI, then open Cursor tomorrow and type restaure — your context is there.
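The account-scoped semantics behind these commands can be sketched as follows. This is a hypothetical in-memory store for illustration only; the real persistence is server-side in the pool, keyed to your account.

```python
# Illustrative sketch of the save / restore / delete semantics.
# Hypothetical in-memory store, NOT the cellule.ai implementation.
import uuid


class ConversationStore:
    def __init__(self):
        self._saved = {}

    def save(self, messages):
        """Persist the current conversation and return its ID."""
        conv_id = uuid.uuid4().hex[:8]
        self._saved[conv_id] = list(messages)
        return conv_id

    def list_saved(self):
        """IDs of saved conversations (what a bare restore would show)."""
        return sorted(self._saved)

    def restore(self, conv_id):
        """Bring back a saved conversation by ID."""
        return list(self._saved[conv_id])

    def delete(self, conv_id=None):
        """Remove one conversation, or everything when no ID is given."""
        if conv_id is None:
            self._saved.clear()
        else:
            del self._saved[conv_id]
```

Typing save in one client and restaure in another amounts to calling `save()` and `restore()` against the same account-scoped store, which is why the context survives a switch of tools.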
Every coding session builds knowledge. Cellule.ai captures observations, consolidates them into episodes, extracts durable facts, and detects workflow patterns. Your agent gets smarter with every interaction — across sessions, across pools.
Every inference, code review, and tool call is captured automatically. Zero effort.
Observations consolidate into session summaries. What you did, what worked, what failed.
Durable knowledge extracted from episodes: preferences, patterns, architecture decisions. Linked by a relationship graph.
Recurring workflow patterns detected automatically. Your agent knows what works before you ask.
Vector similarity + relationship graph + procedural matching. Stale memories fade with Ebbinghaus decay. Frequently used ones strengthen.
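The decay half of that scoring can be sketched with the classic exponential forgetting curve. The constants and the strengthening rule below are illustrative assumptions, not cellule.ai's actual parameters.

```python
# Sketch of Ebbinghaus-style decay for ranking memories.
# Constants are illustrative assumptions, not cellule.ai's settings.
import math


def retention(age_days: float, stability: float) -> float:
    """Forgetting curve R = exp(-t / S): older memories score lower."""
    return math.exp(-age_days / stability)


def on_access(stability: float, boost: float = 1.5) -> float:
    """Frequently used memories strengthen: each access raises S."""
    return stability * boost


fresh = retention(age_days=1, stability=7.0)
stale = retention(age_days=30, stability=7.0)
assert fresh > stale  # stale memories fade, recent ones rank higher
```

In a full retrieval score this decay term would be combined with vector similarity and graph proximity; a memory that keeps getting accessed keeps a high stability and effectively never fades.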
Any MCP-compatible agent (Claude Code, OpenCode, Cursor) plugs into the collective memory. One command: iamine mcp-server
All content encrypted with your token (PBKDF2 + Fernet). The pool stores your memories but cannot read them. GDPR delete across the federation.
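The key-derivation half of that scheme can be sketched with the standard library. The salt and iteration count below are illustrative assumptions; the derived key would then feed a Fernet cipher (from the `cryptography` package), so the pool stores only ciphertext it cannot read.

```python
# Sketch of token-based key derivation (PBKDF2 via the stdlib).
# Salt and iteration count are illustrative, not cellule.ai's settings.
import base64
import hashlib


def derive_key(token: str, salt: bytes, iterations: int = 480_000) -> bytes:
    """Derive a 32-byte key from the account token, urlsafe-base64
    encoded as Fernet expects."""
    raw = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, iterations)
    return base64.urlsafe_b64encode(raw)


key = derive_key("acc_xxx", salt=b"per-account-salt")
assert len(base64.urlsafe_b64decode(key)) == 32
```

Because derivation happens client-side from your token, losing the token means losing the key: the pool never holds anything it could decrypt on your behalf.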
Memory syncs across federated pools. Move between pools, your knowledge follows. Like RAID for intelligence.
Install in 60 seconds. Your CPU or GPU serves an AI model to the network.
Your compute powers the network. $IAMINE tokens track your contribution — not yet deployed, coming in alpha.
Spend your tokens to access models your PC could never run. 30B+ models at your fingertips.
OpenAI-compatible API. Use OpenCode, Cursor, or any tool. The pool handles routing, tool-calls, and think delegation automatically.
The Cellule.ai API is OpenAI-compatible. Use your favorite coding tool with smart routing, persistent memory, and native tool-calls.
https://cellule.ai/v1
api_key
acc_YOUR_TOKEN
model
iamine/auto
Recommended terminal coding agent. Install with npm i -g opencode-ai, initialize with iamine init, then type /cellule to generate a SPEC.md.
iamine init opencode --token acc_xxx
SPEC.md split into micro-tasks.
Open-source Claude Code rewrite in Rust. Model-agnostic CLI agent.
iamine init clawcode --token acc_xxx
Run the agent and describe what you want to build.
VS Code-based AI IDE. Override the API URL to use Cellule.ai as your backend.
Git-based CLI pair-programming. Very efficient for refactoring and targeted fixes.
VS Code and JetBrains extension for AI completion and chat in the IDE.
Complete self-hosted web interface. Multi-conversation chat, history, templates. OpenAI-compatible.
Each pool analyzes its capabilities in real time and automatically detects its gaps.
How does it work? When you start a worker with iamine worker --auto, it queries the network's pools, evaluates where it will be most useful, and automatically joins the pool that most needs its capabilities. A worker that fills a critical gap naturally receives more jobs, and earns more $IAMINE. If a pool goes down, its workers automatically migrate to a compatible pool. Every pool aims for self-sufficiency: the ability to handle 100% of requests without depending on another pool.
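The pool-selection step can be sketched as a simple scoring rule. The field names and the "largest gap wins" heuristic below are assumptions for illustration, not the real --auto protocol.

```python
# Illustrative sketch of how a worker might pick a pool under --auto.
# Field names and scoring rule are assumptions, not the real protocol.

def coverage_gap(pool: dict) -> float:
    """Fraction of recent requests the pool could not serve itself."""
    total = pool["requests"]
    return pool["unserved"] / total if total else 0.0


def pick_pool(pools: list[dict], my_capabilities: set[str]) -> dict:
    """Join the compatible pool with the largest capability gap."""
    compatible = [p for p in pools if p["needed"] & my_capabilities]
    return max(compatible, key=coverage_gap)


pools = [
    {"name": "pool-a", "requests": 100, "unserved": 5, "needed": {"7b"}},
    {"name": "pool-b", "requests": 100, "unserved": 40, "needed": {"7b", "gpu"}},
]
best = pick_pool(pools, my_capabilities={"7b"})
assert best["name"] == "pool-b"  # the biggest gap attracts the worker
```

The same scoring explains the economics: a worker that lands where the gap is largest handles a larger share of jobs, hence earns more.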
Contribute your compute power. Earn tokens. The token is not yet deployed on-chain.
Project under active development. The token features below are previews; nothing is deployed on a blockchain yet.
Every AI request you serve is tracked. $IAMINE tokens will reflect your contribution to the network.
When deployed, tokens will be transferable. Your participation has value within the network.
Use $IAMINE tokens to access network resources and vote on its direction.
Holders vote on the network's models, fees, and evolution.
Once the $IAMINE token is deployed on-chain, you will be able to export your earned tokens to any EVM-compatible wallet.
Multi-chain wallet. Add the token in one click.
The most popular Web3 wallet. Export and trade freely.
100% OpenAI-compatible. Drop-in replacement: change your client's baseURL and you're set.
OpenAI-compatible inference with smart routing, persistent L1/L2/L3 memory, and native tool-calls.
curl https://cellule.ai/v1/chat/completions \
  -H "Authorization: Bearer acc_xxx" \
  -H "Content-Type: application/json" \
  -d '{"model":"iamine","messages":[{"role":"user","content":"hi"}]}'
Lists the available models. Returns {id: iamine} (automatic smart routing on the pool side).
curl https://cellule.ai/v1/models \
  -H "Authorization: Bearer acc_xxx"
Balance, credits, total earned, and the workers linked to your account.
curl "https://cellule.ai/v1/account/my-workers?session_id=YOUR_SESSION"
Enable or disable persistent memory (opt-in). GDPR compliant, right to erasure included.
curl -X POST https://cellule.ai/v1/account/memory \
  -H "Content-Type: application/json" \
  -d '{"session_id":"xxx","enabled":true}'
The network adapts to your hardware. Whether you have a laptop or a GPU server, the pool places you where you are most useful.
One command. The network detects your hardware, downloads the best model, and automatically places you in the pool that needs you most.
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple && python -m iamine worker --auto
You run your own LLMs (llama-server, vLLM, Ollama). The network routes traffic to your backends. Ideal for power users with hardware already configured.
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple && python -m iamine proxy -c proxy.json
4 GB RAM, any CPU
16 GB RAM, GPU
32+ GB RAM, GPU
Once the worker base grows, run your own pool to host workers and federate with the molecule. 3 minutes via docker compose.
docker compose tutorial →