How IAMINE works

1
You share your PC's power
Your CPU or GPU runs an AI model with maximum context memory. Small model + large context = fast and smart. GPU auto-detected.
2
The pool combines everyone's power
Smart routing sends each request to the best available worker. Larger models require workers with enough RAM and compute power to run them.
3
You earn $IAMINE tokens
Every AI token your machine generates earns you $IAMINE. More context = more tokens = more rewards.
4
Spend tokens on powerful AI
Use your $IAMINE to access models your PC could never run: 7B, 14B, even 32B — powered by the pool's combined compute.
5
Export to your wallet
$IAMINE tokens will be tradeable on DEX. Export to MetaMask or Rabby and trade your compute power.
6
Infinite encrypted memory
With an account, your conversations have unlimited context. 3-level compaction + AES encryption — talk for hours, zero data leaks.

IAMINE.ORG

Simplicity. Efficiency. AI for Everyone.

Your PC has unused power. We turn it into AI. Join the network in 60 seconds with a single command. CPU or GPU, no cloud, no hassle.

Linux curl -sL https://iamine.org/install.sh | bash
Windows irm https://iamine.org/install.ps1 | iex
macOS pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple && python -m iamine worker --auto
GPU auto-detected. For NVIDIA GPU: pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124

IAMINE Pool — Live Network

Real-time view of the distributed AI network. Each node is a worker contributing compute power.

0
Workers
0
Total t/s
0
Jobs
-
Best Model

Network Dashboard

Job distribution, credits and performance across all workers.

0
Routable
0
Total Jobs
0
$IAMINE Earned
Job Distribution
Worker Leaderboard
Worker Model t/s Jobs Share $IAMINE
Auto-refresh every 10s

Network Progression

Every worker that joins unlocks more powerful AI. Level up together.

Serve
Run a model on your PC, earn $IAMINE for every token generated
=
Use
Spend $IAMINE to use bigger models you can't run locally
Example: Your 8 GB PC serves a 0.5B model → earns $IAMINE → you spend them to use a 14B model running on a 32 GB machine in the pool. Access AI you could never run locally.

Live AI Chat

IAMINE Chat
No history stored · Offline
Welcome! Ask anything to test the distributed AI network.

Benchmark your machine

See what your PC can do. We find the best AI model for you.

Your machine is ready!

Join the IAMINE network now with a single command:

pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple && python -m iamine worker --auto
Works on all platforms:
Linux: curl -sL https://iamine.org/install.sh | bash
Windows: irm https://iamine.org/install.ps1 | iex
macOS: pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple

This will install IAMINE, download the optimal model for your machine, and connect to the pool automatically.

Uninstall IAMINE
Warning: This will delete your local wallet (wallet.json). Back it up first!
Backup command: copy wallet.json wallet.json.backup
Step 1: Uninstall the package
pip uninstall iamine-ai -y
Step 2: Delete data (optional)
Windows: del wallet.json config.json && rmdir /s /q models
Linux: rm -rf wallet.json config.json models/

Your earned $IAMINE credits on the pool are preserved even after uninstall.

$IAMINE

Contribute compute. Earn tokens. Simple.

+

Earn

Every AI request processed generates $IAMINE proportional to your compute.

~

Trade

Tradeable on DEX. Your compute power has real value.

*

Use

Spend $IAMINE for premium models and priority inference.

%

Govern

Holders vote on models, fees, and network upgrades.

Export to your wallet

When the $IAMINE token is deployed on-chain, you will be able to export your earned tokens to any EVM-compatible wallet.

🦊

Rabby Wallet

Multi-chain wallet. Add IAMINE token with one click.

🔶

MetaMask

The most popular Web3 wallet. Export and trade freely.

How it will work:
1 Connect your wallet (Rabby, MetaMask, or any EVM wallet)
2 Enter your API token (iam_xxx) to verify your balance
3 Choose amount to export and confirm the transaction
$IAMINE tokens appear in your wallet — trade on any DEX
Token contract will be published here after deployment.
Chain: TBD (Hyperliquid / Base / Arbitrum) · Standard: ERC-20 · Supply: proportional to network compute

Your API Access

Contribute compute, earn credits. Use credits to access the AI API. 1 request served = 1 request earned.

Your worker is running!
-
$IAMINE
-
Earned
-
Used
-
Status

How API credits work

You serve AI tokens
=
You use the API

API Usage

OpenAI-compatible. Works with any client.

Terminal
python -m iamine ask "What is quantum computing?"
curl
curl https://iamine.org/v1/api/chat \
  -H "Content-Type: application/json" \
  -d '{"api_token":"iam_xxx","messages":[{"role":"user","content":"Hello"}]}'
Python
import requests

r = requests.post("https://iamine.org/v1/api/chat", json={
    "api_token": "iam_xxx",
    "messages": [{"role": "user", "content": "Hello"}]
})
print(r.json()["choices"][0]["message"]["content"])

Connect to Open WebUI

Use IAMINE as a backend for Open WebUI. Copy these settings:

Open WebUI Settings > Connections > OpenAI API
API Base URL
https://iamine.org/v1
API Key
iam_YOUR_TOKEN
docker run
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://iamine.org/v1 \
  -e OPENAI_API_KEY=iam_YOUR_TOKEN_HERE \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

Full API Reference

Endpoint               Method  Description                            Auth
/v1/chat/completions   POST    OpenAI-compatible chat (auto conv_id)  Bearer token
/v1/messages           POST    Anthropic-compatible (Claude Code)     x-api-key
/v1/api/chat           POST    Chat (1 credit/request)                api_token
/v1/status             GET     Pool status                            -
/v1/pool/power         GET     Pool power analysis                    -
/v1/wallet/{token}     GET     Check balance                          token in URL
/v1/models/available   GET     List server models                     -
/v1/models             GET     List worker models                     -
/v1/admin/models       GET     Model tiers + unlock status            -

No worker detected

Start a worker on this machine to see it here, or log in to manage multiple workers.

curl -sL https://iamine.org/install.sh | bash
Windows: irm https://iamine.org/install.ps1 | iex

— or —

Tutorials

Use IAMINE with your favorite tools. The API is OpenAI-compatible.

OpenCode

RECOMMENDED

Use OpenCode as an AI coding agent in your terminal, powered by the IAMINE network. Reads files, writes code, runs commands — all via the distributed pool.

0. Install OpenCode
npm install -g opencode-ai@latest
1. Create opencode.json in your project
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "iamine": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "IAMINE Pool",
      "options": {
        "baseURL": "https://iamine.org/v1",
        "apiKey": "{env:IAMINE_API_KEY}"
      },
      "models": {
        "iamine": {
          "name": "IAMINE Smart Pool",
          "limit": { "context": 131072, "output": 4096 }
        }
      }
    }
  }
}
2. Set your token and launch
export IAMINE_API_KEY=iam_YOUR_TOKEN
opencode
Windows (PowerShell)
$env:IAMINE_API_KEY = "iam_YOUR_TOKEN"
opencode
3. Optional: create OPENCODE.md for project context
# My Project

## Backend
You run on the IAMINE distributed pool. You have unlimited context (L1/L2/L3).

## Rules
- Write clean Python with type hints
- Use write to create files
- Answer in French
/iamine — Bootstrap and initialize a project
Step 1 — Bootstrap OPENCODE.md in your project directory (one-shot):
cd my-project && iamine init

Downloads the OPENCODE.md template from iamine.org with a confirmation prompt. Add -y to skip confirmation.

Step 2 — Launch OpenCode and run the /iamine command:
/iamine A note manager in Python with SQLite, tags and markdown export

The IAMINE pool generates optimized SPEC.md and OPENCODE.md for your project. Then say: "Read SPEC.md and start development".

22+ t/s · Native tool-calls
8 min · Full project from scratch
32 jobs · 15 tests passing
Validated: OpenCode supports read, write, glob, bash tool-calls at 22+ t/s via the IAMINE pool. The Think Tool (pool_assist) kicks in automatically — if the small LLM gets stuck, a larger one takes over seamlessly.

Claude Code

Claude Code requires 70B+ models to function correctly. The IAMINE pool currently runs smaller models (9B-30B). For AI-assisted coding with IAMINE, use OpenCode instead — it is fully validated and recommended.

Open WebUI

Connect Open WebUI to the IAMINE network for a ChatGPT-like interface.

docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://iamine.org/v1 \
  -e OPENAI_API_KEY=iam_YOUR_TOKEN \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
Use a single model named iamine — the pool handles smart routing. Memory commands (save, restore, remember) work directly in the Open WebUI chat.

Infinite Context Memory

All tools below benefit from IAMINE's L1/L2/L3 compaction — your conversations are never lost, even across sessions. The pool manages memory server-side via PostgreSQL.

No configuration needed. Just connect any OpenAI-compatible client and get unlimited context automatically.

Advanced Features

Collaborative Intelligence (Think Tool)

NEW

The pool automatically injects a "think" tool into conversations. When Scout (9B, fast) encounters a complex task, it delegates to a larger model (30B+) transparently. You get the best of both worlds: speed for simple questions, depth for hard ones.

Scout 9B
Fast answers (64 t/s)
Coder 30B+
Deep reasoning on demand
How it works
1. You ask a question via any client (chat, OpenCode, Cursor...)
2. Scout receives it and starts answering
3. If the task is complex, Scout calls the think tool
4. The pool routes the think request to a 30B+ worker
5. Scout integrates the deep answer into its response

This is called pool_assist — fully automatic, zero configuration.
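The delegation flow can be sketched in a few lines. This is an illustrative sketch only: the tool schema, field names, and routing rule below are assumptions, not IAMINE's actual implementation.

```python
# Hypothetical sketch of the pool_assist flow. The "think" tool schema and
# the routing heuristic are assumptions for illustration.

THINK_TOOL = {
    "type": "function",
    "function": {
        "name": "think",
        "description": "Delegate a hard sub-problem to a larger model in the pool.",
        "parameters": {
            "type": "object",
            "properties": {"question": {"type": "string"}},
            "required": ["question"],
        },
    },
}

def route_think_call(tool_call, workers):
    """Pick the fastest idle worker running a 30B+ model for the delegated question."""
    capable = [w for w in workers if w["model_b"] >= 30 and w["idle"]]
    if not capable:
        return None  # Scout answers alone if no large worker is free
    return max(capable, key=lambda w: w["tokens_per_s"])

workers = [
    {"id": "scout-1", "model_b": 9, "idle": False, "tokens_per_s": 64},
    {"id": "coder-1", "model_b": 30, "idle": True, "tokens_per_s": 22},
]
best = route_think_call({"question": "prove this invariant"}, workers)
print(best["id"])  # coder-1
```

The key design point is that the small model stays in the loop: it issues the tool call, receives the deep answer, and writes the final response itself.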

Memory Commands

NEW

IAMINE supports persistent memory commands in any connected client. Your conversations and personal facts are saved server-side with AES encryption.

Command       Alias     Effect
enregistre    save      Saves the current conversation to your account
restaure      restore   Lists and loads saved conversations
souviens-toi  remember  Memorizes personal facts in the RAG (retrieved automatically)
Memory can be toggled on/off in your profile settings. You can export or delete all your data at any time (GDPR compliant). Works in all clients: chat, OpenCode, Open WebUI, Cursor, aider...

Cursor

Use Cursor IDE with IAMINE as the AI backend.

Settings > Models > OpenAI API Key
API Key: iam_YOUR_TOKEN
Override URL: https://iamine.org/v1
Model: iamine
Enable "Override OpenAI Base URL" in Cursor settings, paste the URL above, and select model "iamine". The pool handles smart-routing automatically.

Project Generator

NEW

Generate optimized SPEC.md and OPENCODE.md files for your project. The pool AI creates a complete project blueprint ready for OpenCode, Cursor, or aider.

aider

Use aider for AI pair programming with IAMINE.

Linux / macOS
export OPENAI_API_BASE=https://iamine.org/v1
export OPENAI_API_KEY=iam_YOUR_TOKEN
aider --model openai/iamine
Windows (PowerShell)
$env:OPENAI_API_BASE = "https://iamine.org/v1"
$env:OPENAI_API_KEY = "iam_YOUR_TOKEN"
aider --model openai/iamine

Continue.dev (VS Code / JetBrains)

Add IAMINE as a provider in Continue.

~/.continue/config.json
{
  "models": [{
    "title": "IAMINE Pool",
    "provider": "openai",
    "model": "iamine",
    "apiBase": "https://iamine.org/v1",
    "apiKey": "iam_YOUR_TOKEN"
  }]
}

Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="https://iamine.org/v1",
    api_key="iam_YOUR_TOKEN"
)
response = client.chat.completions.create(
    model="iamine",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

curl

curl https://iamine.org/v1/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "api_token": "iam_YOUR_TOKEN",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

IAMINE CLI

# Check balance
python -m iamine wallet
# Ask a question (costs 1+ $IAMINE)
python -m iamine ask "What is quantum computing?"
# See recommended model for your machine
python -m iamine recommend

Get in touch

DM
David M.
Co-founder & Architecture
RP
Regis P.
Co-founder & Strategy
WA
WASA
Co-founder & Token

Add Mobile Worker

Turn your smartphone into an AI worker. Scan the QR code to pair your phone with your IAMINE account.

IAMINE.ORG

Whitepaper — Distributed AI Inference Network

Abstract

IAMINE is a decentralized AI inference network that transforms idle computing power into a shared artificial intelligence resource. By contributing CPU cycles, participants earn $IAMINE tokens proportional to their compute contribution. These tokens can be spent to access AI models more powerful than any single participant could run locally. The network uses smart routing to optimally match requests to workers based on context size, model capability, and performance.

1. The Problem

Access to powerful AI is concentrated in the hands of a few corporations. Running a high-quality language model requires expensive hardware (GPUs, large RAM) that most people don't have. Meanwhile, billions of PCs sit idle with unused CPU cycles. The current AI economy extracts value from users (their data, their attention) without giving back. There is no way for individuals to participate in the AI economy as producers, only as consumers.

2. The IAMINE Solution

IAMINE inverts the model: anyone with a PC becomes an AI provider. The network consists of three components:

3. Smart Routing

The core innovation of IAMINE is its intelligent request routing. Unlike simple round-robin load balancers, IAMINE considers:

Context Awareness
Tracks accumulated conversation tokens. Routes long conversations to workers with larger context windows — proactively, before limits are hit.
Model Matching
Users can request specific models. The router finds a worker that has it loaded, or suggests the best available alternative.
Worker Affinity
Within a conversation, the router prefers the same worker for consistency and cache efficiency.
Performance Scoring
Workers are scored by benchmark results, RAM, context capacity. The fastest capable worker wins.
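The four criteria above can be combined into a single scoring pass. The weights and the 1.2x context margin below are illustrative assumptions; IAMINE's actual routing formula is not published.

```python
# Illustrative routing score: hard filters for model availability and context
# headroom, then performance scoring with an affinity bonus. All constants
# here are assumptions, not IAMINE's real parameters.

def score_worker(worker, request):
    if request["model"] not in worker["models"]:
        return -1                                 # model matching: hard filter
    if worker["ctx_window"] < request["conv_tokens"] * 1.2:
        return -1                                 # proactive context headroom
    score = worker["tokens_per_s"]                # performance scoring
    if worker["id"] == request.get("last_worker"):
        score *= 1.5                              # worker affinity bonus
    return score

def route(request, workers):
    best = max(workers, key=lambda w: score_worker(w, request))
    return best if score_worker(best, request) >= 0 else None

workers = [
    {"id": "a", "models": {"qwen-7b"}, "ctx_window": 32768, "tokens_per_s": 18},
    {"id": "b", "models": {"qwen-7b"}, "ctx_window": 8192,  "tokens_per_s": 40},
]
req = {"model": "qwen-7b", "conv_tokens": 10000, "last_worker": "a"}
print(route(req, workers)["id"])  # a
```

Note how worker "b" is faster but is filtered out: a 10,000-token conversation needs more headroom than its 8,192-token window offers, so the long conversation is routed to "a" before the limit is hit.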

4. Infinite Context Memory

IAMINE simulates infinite conversation memory through a 3-level compaction system. Account holders get encrypted persistent memory — no conversation limit, zero data leaks.

L1
Live Messages
Recent messages in RAM. Fast access, always fresh.
L2
Smart Summary
LLM-generated summary of older messages. Auto-condensed when too large.
L3
Encrypted Archive
Full history encrypted with your account token. Only you can read it.

When context fills up, old messages are summarized and archived. The summary is re-summarized when it grows (meta-compaction). Archived data is AES-encrypted with a key derived from your account token — even database administrators cannot read your conversations. Everything is deleted when the session expires. The result: talk for hours without losing context, with end-to-end privacy.
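The compaction loop described above can be sketched as follows. The `summarize` and `encrypt` functions are placeholders standing in for the LLM summary call and the AES step; the limits are toy values for the demo.

```python
# Minimal sketch of the L1/L2/L3 compaction loop. summarize() stands in for
# an LLM summarization call and encrypt() for AES encryption; both are
# placeholders, and the limits are illustrative.

L1_LIMIT = 4       # live messages kept verbatim (tiny for the demo)
L2_LIMIT = 200     # max summary length in characters

def summarize(texts):           # placeholder for an LLM summary call
    return " | ".join(t[:20] for t in texts)

def encrypt(text):              # placeholder for AES encryption
    return text.encode("utf-8")

def compact(l1, l2, l3):
    while len(l1) > L1_LIMIT:
        old = l1.pop(0)                          # oldest live message leaves L1
        l3.append(encrypt(old))                  # full copy to encrypted archive
        l2 = summarize([l2, old]) if l2 else summarize([old])
    if len(l2) > L2_LIMIT:                       # meta-compaction of the summary
        l2 = summarize([l2])
    return l1, l2, l3

l1, l2, l3 = [f"message {i}" for i in range(6)], "", []
l1, l2, l3 = compact(l1, l2, l3)
print(len(l1), len(l3))  # 4 2
```

The essential property is that nothing is dropped: every message evicted from L1 lands in the L3 archive in full, while L2 keeps a bounded summary so the model always has a compact view of the whole conversation.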

Distributed Compaction

Workers cooperate through the pool. When a conversation needs compaction, the pool delegates the summarization to an idle worker — freeing the main worker to keep serving the user. The pool acts as a trust broker: it knows every worker, their capabilities, and their state. Workers don't need to trust each other directly — they trust the pool.

5. $IAMINE Tokenomics

The $IAMINE token creates a self-sustaining economy:

Model       Earn (serve 100 tokens)   Cost (use, per request)
Qwen 0.5B   +0.5 $IAMINE              1 $IAMINE
Qwen 1.5B   +1.0                      2
Qwen 3B     +2.0                      3
Qwen 7B     +4.0                      5
Qwen 14B    +8.0                      15
Qwen 32B    +15.0                     30
Qwen 72B    +25.0                     50

Larger models cost more to use but earn more for workers who serve them. This incentivizes participants to upgrade their hardware, strengthening the network.
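A quick worked example using the rates in the table above (values as listed; the token system is currently off-chain):

```python
# Earn/cost arithmetic taken directly from the tokenomics table above.

EARN_PER_100 = {"0.5B": 0.5, "1.5B": 1.0, "3B": 2.0, "7B": 4.0,
                "14B": 8.0, "32B": 15.0, "72B": 25.0}
COST_PER_REQ = {"0.5B": 1, "1.5B": 2, "3B": 3, "7B": 5,
                "14B": 15, "32B": 30, "72B": 50}

def earned(model: str, tokens_served: int) -> float:
    """$IAMINE earned for serving a given number of AI tokens."""
    return EARN_PER_100[model] * tokens_served / 100

# Serve 1,000 tokens on a 7B model, then spend the balance on 14B requests:
balance = earned("7B", 1000)                     # 40.0 $IAMINE
requests_14b = int(balance // COST_PER_REQ["14B"])
print(balance, requests_14b)  # 40.0 2
```

So a worker serving a mid-size model quickly banks enough credits to query models it could never host itself, which is the incentive loop the section describes.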

Loyalty Rewards

Every 30 seconds, the pool randomly rewards an online worker with 0.5 to 3 $IAMINE. Rare bonus drops (5-15) and jackpots (20-50) keep the excitement alive. The longer you stay online, the more chances you get. New workers receive a 500 $IAMINE welcome bonus on first connection.
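The draw might look like the sketch below. Only the reward ranges (0.5-3 regular, 5-15 bonus, 20-50 jackpot) come from the text above; the tier probabilities are invented for illustration.

```python
# Sketch of the loyalty draw. The 1% / 4% tier odds are assumptions; only
# the reward ranges come from the description above.
import random

def loyalty_reward(rng=random):
    roll = rng.random()
    if roll < 0.01:                  # assumed jackpot odds
        return rng.uniform(20, 50)
    if roll < 0.05:                  # assumed bonus-drop odds
        return rng.uniform(5, 15)
    return rng.uniform(0.5, 3)       # regular reward, drawn every 30 s

rewards = [loyalty_reward() for _ in range(10)]
print(all(0.5 <= r <= 50 for r in rewards))  # True
```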

6. Level Progression

As the network grows, larger models become available — but they require workers with sufficient resources (RAM, compute) to run them. The pool tracks total power and worker capabilities to determine which models can be served. This creates a virtuous cycle: powerful workers unlock premium models and earn more tokens.

7. Privacy & Security

IAMINE takes privacy seriously. Free demo users get ephemeral conversations in RAM only — zero persistence. Account holders benefit from L3 encrypted archives: conversation history is AES-encrypted with a key derived from your personal account token. Even server administrators cannot read your data. All conversation data (L1, L2, L3) is automatically purged after 1 hour of inactivity. No logs, no training on user data. Workers process requests in real-time and do not persist any content.
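The key-derivation step can be illustrated with standard primitives. The PBKDF2 parameters and salt handling below are assumptions, not IAMINE's actual scheme; the point is that the 256-bit key exists only when the account token is presented, so stored ciphertext is unreadable to administrators.

```python
# Sketch of deriving a per-account encryption key from the API token.
# PBKDF2 parameters here are illustrative assumptions; the derived key
# would feed an AES-256 cipher (e.g. AES-GCM) for the L3 archive.
import hashlib

def derive_key(account_token: str, salt: bytes) -> bytes:
    # 32-byte (256-bit) key, suitable for AES-256; the server never needs
    # to store the token itself, only the salt.
    return hashlib.pbkdf2_hmac("sha256", account_token.encode(), salt, 200_000)

salt = b"fixed-demo-salt"        # real deployments would use a random per-user salt
key = derive_key("iam_example_token", salt)
print(len(key))  # 32
```

Derivation is deterministic for the same token and salt, so the key can be recomputed on login and discarded afterward.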

8. Technical Architecture

9. Roadmap

Q1 2026
MVP launch. Pool + Worker + Website. Models 0.5B to 3B. Token system (off-chain).
Q2 2026
Linux + macOS (Apple Metal) workers. Models up to 14B. Smart routing v2. Multi-worker accounts.
Q3 2026
$IAMINE token on-chain (ERC-20). DEX listing. Wallet export (MetaMask, Rabby). Governance votes.
Q4 2026
Pipeline parallelism (split large models across workers). Models up to 72B. Enterprise API. SDK for developers.

10. Team

David M.
Co-founder. Architecture & Engineering. Designed the smart routing engine and distributed infrastructure.
Regis P.
Co-founder. Strategy & Business. Drives tokenomics design, partnerships, and go-to-market.
IAMINE.ORG — v1.0 — March 2026
[email protected] · Open Source