🧬 Run a Cellule.ai worker

Turn your idle PC into a node of the distributed LLM network. One-liner install on Linux/macOS, one download on Windows.

Why? Without workers, there is no network. Every CPU or GPU that joins adds capacity, lowers latency, and broadens model coverage. Your machine earns community credits ($IAMINE) proportional to the compute it actually delivers.

Pick your OS

🐧 Linux: one-line install · 🍎 macOS: one-line install · 🪟 Windows: .exe or pip

Linux

Quickest: one command

curl -sSL https://cellule.ai/install-worker.sh | bash

The installer detects your distro, checks Python 3.12+, creates a user-local venv at ~/.cellule-worker, installs iamine-ai, and sets up a systemd --user service so the worker survives reboots.

No sudo needed if Python 3.12+ is already on your machine. The installer runs entirely in your home directory. It will print the package-manager command to install Python if missing.

What the installer does

  1. Detects OS + architecture
  2. Verifies Python 3.12+ (suggests apt/dnf/pacman command if missing)
  3. Creates ~/.cellule-worker venv
  4. Installs iamine-ai from https://iamine.org/pypi
  5. Writes ~/.config/systemd/user/cellule-worker.service
  6. Enables linger + starts the service
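
For reference, the unit file written in step 5 looks roughly like this (an illustrative sketch; the actual file generated by the installer may differ in details):

```ini
# ~/.config/systemd/user/cellule-worker.service (sketch)
[Unit]
Description=Cellule.ai worker
After=network-online.target

[Service]
ExecStart=%h/.cellule-worker/bin/python -m iamine worker --auto
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target
```

If you edit it by hand, run `systemctl --user daemon-reload` before restarting the service.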

Manual install (if you prefer)

python3.12 -m venv ~/.cellule-worker
source ~/.cellule-worker/bin/activate
pip install --upgrade pip
pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
python -m iamine worker --auto

Manage the service

# Status
systemctl --user status cellule-worker

# Live logs
journalctl --user -u cellule-worker -f

# Restart
systemctl --user restart cellule-worker

# Stop + disable
systemctl --user disable --now cellule-worker

GPU acceleration (NVIDIA CUDA, AMD ROCm)

The default install ships a CPU-only build of llama-cpp-python. For GPU acceleration, rebuild it with the appropriate flags:

# NVIDIA CUDA
CMAKE_ARGS="-DGGML_CUDA=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python

# AMD ROCm
CMAKE_ARGS="-DGGML_HIPBLAS=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python

macOS

Quickest: one command

curl -sSL https://cellule.ai/install-worker.sh | bash

Same installer as Linux — detects macOS, creates ~/.cellule-worker, and installs a launchd agent at ~/Library/LaunchAgents/ai.cellule.worker.plist so the worker auto-starts at login.
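
The launchd agent looks roughly like this (an illustrative sketch; the actual plist written by the installer may differ, and `YOURNAME` stands in for your home directory since launchd requires absolute paths):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- ~/Library/LaunchAgents/ai.cellule.worker.plist (sketch) -->
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.cellule.worker</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/YOURNAME/.cellule-worker/bin/python</string>
        <string>-m</string>
        <string>iamine</string>
        <string>worker</string>
        <string>--auto</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/YOURNAME/.cellule-worker/worker.log</string>
</dict>
</plist>
```

The `StandardOutPath` key is what makes `tail -f ~/.cellule-worker/worker.log` work for live logs.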

Apple Silicon (M1/M2/M3/M4) users get Metal GPU acceleration automatically on the default install — llama-cpp-python ships with Metal support on macOS.

Requirements

Manual install

brew install [email protected]
python3.12 -m venv ~/.cellule-worker
~/.cellule-worker/bin/pip install --upgrade pip
~/.cellule-worker/bin/pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
~/.cellule-worker/bin/python -m iamine worker --auto

Manage the agent

# Status
launchctl list | grep cellule

# Live logs
tail -f ~/.cellule-worker/worker.log

# Stop
launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist

# Start again
launchctl load ~/Library/LaunchAgents/ai.cellule.worker.plist

Windows

Option A: standalone .exe (coming soon)

We're building a signed, single-file cellule-worker.exe (~250 MB) that bundles Python and all dependencies: nothing to install, double-click to start. Status: in progress; see GitHub Releases.

Once it ships, installation will be:

  1. Download cellule-worker.exe from GitHub Releases
  2. Right-click → Properties → Unblock (if Windows has marked the download as blocked)
  3. Double-click to run — worker registers with the network automatically

Option B: pip install (works today)

Open PowerShell (not cmd.exe) and run:

# Install Python 3.12 first if missing — winget install Python.Python.3.12
py -3.12 -m venv "$env:USERPROFILE\.cellule-worker"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --upgrade pip
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
& "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -m iamine worker --auto

Auto-start at boot (Task Scheduler)

# Create a scheduled task that starts the worker at logon
$action = New-ScheduledTaskAction -Execute "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -Argument "-m iamine worker --auto"
$trigger = New-ScheduledTaskTrigger -AtLogon
$settings = New-ScheduledTaskSettingsSet -StartWhenAvailable -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 2)
Register-ScheduledTask -TaskName "CelluleWorker" -Action $action -Trigger $trigger -Settings $settings

Requirements

GPU acceleration (NVIDIA)

$env:CMAKE_ARGS="-DGGML_CUDA=on"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --force-reinstall --no-binary=:all: llama-cpp-python

How the worker joins the network

  1. The worker starts in --auto mode: it benchmarks your hardware (CPU, GPU, RAM, tokens/s).
  2. It contacts the public federated pools on cellule.ai and joins the pool where its capabilities best fill a gap (M12 placement).
  3. It downloads the assigned GGUF model (auto-picked by your hardware profile — 2B / 4B / 7B / 14B / 30B MoE).
  4. It performs an Ed25519 handshake and starts accepting jobs over WebSocket.
  5. Your credits ($IAMINE) accumulate in your account for every inference your worker completes.

PREPROD / testnet: $IAMINE credits have no market value yet. Every inference you run today builds the history that will be honored when the economy goes on-chain. Early contributors are tracked in worker_wallet_snapshots.

Requirements by model

| Model | RAM | Disk | Speed target (CPU) | GPU |
|---|---|---|---|---|
| Qwen 3.5 2B | 4 GB+ | 1.5 GB | 8+ t/s | optional |
| Qwen 3.5 4B | 6 GB+ | 2.5 GB | 5+ t/s | optional |
| Qwen3 30B A3B (MoE) | 24 GB+ | 18 GB | not feasible on CPU | recommended |

Don't know your specs? Run the worker with --auto and it picks the best fit. Machine too small? It will refuse gracefully and tell you why.

Uninstall

Linux

systemctl --user disable --now cellule-worker
rm -rf ~/.cellule-worker ~/.config/systemd/user/cellule-worker.service
systemctl --user daemon-reload

macOS

launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist
rm -rf ~/.cellule-worker ~/Library/LaunchAgents/ai.cellule.worker.plist

Windows

Unregister-ScheduledTask -TaskName "CelluleWorker" -Confirm:$false
Remove-Item -Recurse -Force "$env:USERPROFILE\.cellule-worker"

Troubleshooting

pip install fails with llama-cpp-python build error

llama-cpp-python compiles native code from source when no prebuilt wheel matches your platform. Install the C++ build chain (a C/C++ compiler plus CMake: e.g. build-essential and cmake on Debian/Ubuntu, Xcode Command Line Tools on macOS, Visual Studio Build Tools on Windows), then re-run the pip install.

Worker doesn't appear in /status

Check logs (see "Manage the service" above). Common causes: no network to cellule.ai:443, Python version too old, firewall blocking WebSocket.
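
To rule out the network cause quickly, a minimal TCP reachability check can help (a sketch; `can_reach` is a hypothetical helper, and cellule.ai:443 is the endpoint assumed from the text above):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("cellule.ai:443 reachable:", can_reach("cellule.ai", 443))
```

If this prints False, the problem is connectivity (DNS, proxy, or firewall), not the worker itself.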

Worker hogs my CPU

The worker only runs inferences when the network routes a job to it. Between jobs it's idle. You can cap CPU via systemd (CPUQuota=50%) or launchd (Nice = 10).
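
On Linux, the CPU cap can be applied as a systemd drop-in without touching the main unit (a sketch; run `systemctl --user edit cellule-worker` and it opens an override file like this):

```ini
# ~/.config/systemd/user/cellule-worker.service.d/override.conf
[Service]
CPUQuota=50%
```

Then `systemctl --user daemon-reload && systemctl --user restart cellule-worker` to apply it.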

Security

← Back to homepage · Federation explained · Run your own pool · GitHub