Turn your idle PC into a node of the distributed LLM network. One-liner install on Linux/macOS, one download on Windows.
Why? Without workers, there is no network. Every CPU or GPU that joins brings more capacity, lower latency, and better model coverage. Your machine earns community credits ($IAMINE) proportional to the compute it actually delivers.
curl -sSL https://cellule.ai/install-worker.sh | bash
The installer detects your distro, checks Python 3.12+, creates a user-local venv at ~/.cellule-worker, installs iamine-ai, and sets up a systemd --user service so the worker survives reboots.
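For reference, the generated unit is roughly the following (a sketch with assumed contents, not the installer's exact output; paths mirror the defaults above):

```shell
# Sketch of the user unit the installer writes to
# ~/.config/systemd/user/cellule-worker.service (exact contents may differ)
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/cellule-worker.service <<'EOF'
[Unit]
Description=Cellule worker
After=network-online.target

[Service]
ExecStart=%h/.cellule-worker/bin/python -m iamine worker --auto
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
EOF
```

Reload and enable with `systemctl --user daemon-reload && systemctl --user enable --now cellule-worker`. Note that `systemd --user` units normally start at login, not at boot; run `loginctl enable-linger $USER` if the worker should come back after a reboot without an active session.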
What the installer creates:
- a venv at ~/.cellule-worker
- iamine-ai installed from https://iamine.org/pypi
- a service unit at ~/.config/systemd/user/cellule-worker.service

Prefer to install by hand? The equivalent steps:

python3.12 -m venv ~/.cellule-worker
source ~/.cellule-worker/bin/activate
pip install --upgrade pip
pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
python -m iamine worker --auto
# Status
systemctl --user status cellule-worker
# Live logs
journalctl --user -u cellule-worker -f
# Restart
systemctl --user restart cellule-worker
# Stop + disable
systemctl --user disable --now cellule-worker
The default install ships CPU-only llama-cpp-python. For GPU, rebuild with the right flags:
# NVIDIA CUDA
CMAKE_ARGS="-DGGML_CUDA=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python
# AMD ROCm
CMAKE_ARGS="-DGGML_HIPBLAS=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python
curl -sSL https://cellule.ai/install-worker.sh | bash
Same installer as Linux — detects macOS, creates ~/.cellule-worker, and installs a launchd agent at ~/Library/LaunchAgents/ai.cellule.worker.plist so the worker auto-starts at login.
llama-cpp-python ships with Metal support on macOS.
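For reference, the launch agent looks roughly like this (a sketch with assumed keys, not the installer's exact output; launchd does not expand `~`, so substitute your real username for `you`):

```shell
# Sketch of the launch agent (keys assumed); launchd needs absolute paths,
# so replace /Users/you with your actual home directory
mkdir -p ~/Library/LaunchAgents
cat > ~/Library/LaunchAgents/ai.cellule.worker.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>ai.cellule.worker</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/you/.cellule-worker/bin/python</string>
    <string>-m</string><string>iamine</string>
    <string>worker</string><string>--auto</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>
  <key>StandardOutPath</key><string>/Users/you/.cellule-worker/worker.log</string>
  <key>StandardErrorPath</key><string>/Users/you/.cellule-worker/worker.log</string>
</dict>
</plist>
EOF
```

The log paths match the `tail -f ~/.cellule-worker/worker.log` command used below to follow live output.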
brew install [email protected] if missingbrew install [email protected]
python3.12 -m venv ~/.cellule-worker
~/.cellule-worker/bin/pip install --upgrade pip
~/.cellule-worker/bin/pip install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
~/.cellule-worker/bin/python -m iamine worker --auto
# Status
launchctl list | grep cellule
# Live logs
tail -f ~/.cellule-worker/worker.log
# Stop
launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist
# Start again
launchctl load ~/Library/LaunchAgents/ai.cellule.worker.plist
.exe (coming soon)

We're building a signed, single-file cellule-worker.exe (~250 MB) that bundles Python and all dependencies. No installer, nothing else to set up, double-click to start. Status: in progress — see GitHub Releases.
When ready, install will be: download cellule-worker.exe from GitHub Releases and run it.

Until then, open PowerShell (not cmd.exe) and run:
# Install Python 3.12 first if missing — winget install Python.Python.3.12
py -3.12 -m venv "$env:USERPROFILE\.cellule-worker"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --upgrade pip
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install iamine-ai -i https://iamine.org/pypi --extra-index-url https://pypi.org/simple
& "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -m iamine worker --auto
# Create a scheduled task that starts the worker at logon
$action = New-ScheduledTaskAction -Execute "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -Argument "-m iamine worker --auto"
$trigger = New-ScheduledTaskTrigger -AtLogon
$settings = New-ScheduledTaskSettingsSet -StartWhenAvailable -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 2)
Register-ScheduledTask -TaskName "CelluleWorker" -Action $action -Trigger $trigger -Settings $settings
# GPU (NVIDIA CUDA): rebuild llama-cpp-python
$env:CMAKE_ARGS="-DGGML_CUDA=on"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --force-reinstall --no-binary=:all: llama-cpp-python
In --auto mode, the worker benchmarks your hardware (CPU, GPU, RAM, tokens/s), then asks cellule.ai for the list of pools and joins the one where its capability fills a gap best (M12 placement). Community credits ($IAMINE) accumulate in your account for every inference your worker completes and are recorded in worker_wallet_snapshots.
| Model | RAM | Disk | Speed target (CPU) | GPU |
|---|---|---|---|---|
| Qwen 3.5 2B | 4 GB+ | 1.5 GB | 8+ t/s | optional |
| Qwen 3.5 4B | 6 GB+ | 2.5 GB | 5+ t/s | optional |
| Qwen3 30B A3B (MoE) | 24 GB+ | 18 GB | not feasible on CPU | recommended |
Don't know your specs? Run the worker with --auto and it picks the best fit. Machine too small? It will refuse gracefully and tell you why.
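The RAM thresholds in the table above can be sketched as a toy selection rule (illustrative only; `pick_model` is a hypothetical helper, not part of the iamine CLI, and the real --auto placement also weighs GPU, disk, and measured tokens/s):

```shell
# Toy model picker based only on the RAM column of the table above
pick_model() {
  local ram_gb=$1
  if [ "$ram_gb" -ge 24 ]; then echo "Qwen3 30B A3B"
  elif [ "$ram_gb" -ge 6 ]; then echo "Qwen 3.5 4B"
  elif [ "$ram_gb" -ge 4 ]; then echo "Qwen 3.5 2B"
  else echo "none: below 4 GB"
  fi
}
pick_model 8    # prints "Qwen 3.5 4B"
```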
systemctl --user disable --now cellule-worker
rm -rf ~/.cellule-worker ~/.config/systemd/user/cellule-worker.service
systemctl --user daemon-reload
launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist
rm -rf ~/.cellule-worker ~/Library/LaunchAgents/ai.cellule.worker.plist
Unregister-ScheduledTask -TaskName "CelluleWorker" -Confirm:$false
Remove-Item -Recurse -Force "$env:USERPROFILE\.cellule-worker"
Install the C++ build chain:

# Linux
sudo apt install build-essential cmake
# macOS
xcode-select --install

Worker won't start? Check logs (see "Manage the service" above). Common causes: no network to cellule.ai:443, Python version too old, firewall blocking WebSocket.
The worker only runs inferences when the network routes a job to it. Between jobs it's idle. You can cap CPU via systemd (CPUQuota=50%) or launchd (Nice = 10).
Everything lives in ~/.cellule-worker and the OS's user-service directory. No system-wide changes without your explicit permission.