✓ Works on any laptop · macOS · Linux · Windows · Fully open source

drift

Terminal-first AutoML agent. Zero friction. Same engine as the web app.

github.com/lakshitsachdeva/intent2model

Step-by-step setup

drift needs both pipx and npm: pipx runs the Python CLI (the chat REPL), and npm installs the launcher that downloads and starts the engine. Follow the steps in order:

1. Get a local LLM

drift uses an LLM for planning and training. Pick one before installing drift. See options below (Gemini CLI or Ollama).

2. Install pipx

pipx runs the Python CLI (the actual drift chat interface). Without it, drift won't start.

pip install pipx
pipx ensurepath

Windows: restart PowerShell after ensurepath. macOS/Linux: restart your terminal or run source ~/.zshrc (or source ~/.bashrc).
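To confirm the PATH update took effect, this should print a version number:

pipx --version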

3. Install npm (Node.js)

npm downloads the engine on first run, starts it, and launches the pipx CLI. Get Node.js (which includes npm) from nodejs.org.
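To verify the install, both of these should print a version:

node --version
npm --version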

4. Install drift (both)

You need both — npm for the engine launcher, pipx for the CLI.

pipx install drift-ml
npm install -g drift-ml
5. Run drift

drift

First run downloads the engine (~100MB). Then:

drift › load data.csv
drift › predict price
drift › try something stronger
drift › quit

Use as library

Add drift to your Python scripts: pip install drift-ml, then import it.

from drift import Drift

d = Drift()                        # start a drift session (local engine)
d.load("iris.csv")                 # load a dataset
d.chat("predict sepal length")     # state the task in plain English
result = d.train()                 # run training
print(result["metrics"])           # inspect the resulting metrics

Or connect to an existing engine: Drift(base_url="http://localhost:8000")
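A minimal sketch of that, reusing the same calls as above and assuming an engine is already listening on port 8000 (the default used elsewhere in this guide):

from drift import Drift

# Connect to an already-running engine instead of spawning one.
d = Drift(base_url="http://localhost:8000")
d.load("data.csv")
d.chat("predict price")
result = d.train()
print(result["metrics"])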

LLM options — pick one

Training and planning use an LLM. You need one. Here's exactly how to install each option:

Option A: Gemini CLI (recommended)

Google's Gemini in your terminal. Free tier. drift uses it by default if gemini is on PATH.

Install Gemini CLI

npm (any platform)

npm install -g @google/gemini-cli

Homebrew (macOS / Linux)

brew install gemini-cli
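Since drift only picks Gemini up automatically when the binary is on PATH, it's worth checking after install:

which gemini        # macOS/Linux
where.exe gemini    # Windows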

Get your API key

Go to aistudio.google.com/apikey, create a key, then:

export GEMINI_API_KEY="your-key-here"

Add to ~/.zshrc or ~/.bashrc to persist.
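For example, to persist it in zsh:

echo 'export GEMINI_API_KEY="your-key-here"' >> ~/.zshrc
source ~/.zshrc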

Option B: Ollama + Llama

Run Llama (and other models) locally. No API key. Fully offline.

Install Ollama

macOS

Download from ollama.com/download/mac — drag to Applications.

Linux

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download from ollama.com/download/windows — run the installer.

Download Llama (after Ollama is installed)

ollama pull llama3.2

Or ollama pull llama2, ollama pull gemma2, etc.

Run the model (keep this running in another terminal)

ollama run llama3.2

Or start just the API server in the background: ollama serve
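To check the server is reachable (Ollama listens on port 11434 by default), it should reply "Ollama is running":

curl http://localhost:11434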

Option C: Other local LLM

Any local LLM that exposes a compatible API works; configure the engine to point at it. See the repo for details.

The engine runs planning and training locally. The LLM is the brain; execution is automatic. No data leaves your machine.

Windows (npm/pipx): engine crashes or LLM planning fails?

When launched via npm or pipx, the engine can inherit a limited PATH, so the Gemini CLI may not be found. The npm and pipx bin directories are now auto-prepended to PATH. If issues persist:

Option A: Run the backend manually (skips engine binary):

cd backend
python -m uvicorn main:app --host 0.0.0.0 --port 8000

Create a .env file in the project root (one line, shown below). This fixes the "I am ready for your first command" / empty-JSON responses. (Root cause: Windows cmd.exe's ~8K character command limit; prompts are now passed via stdin.)
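The whole file is that one line:

GEMINI_API_KEY=your-key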

Option B: If the engine binary crashes, run it manually to see the error:

cd $env:USERPROFILE\.drift\bin
.\drift-engine-windows-x64.exe

Common fixes: install the Visual C++ Redistributable, add a Windows Defender exception for the binary, and allow port 8000 through the firewall.

Run locally

Clone the repo and start the engine:

git clone https://github.com/lakshitsachdeva/intent2model.git
cd intent2model
./start.sh

Deploy the frontend to Vercel so anyone can open the UI. The engine runs on each user's machine (or set DRIFT_BACKEND_URL to a running engine).
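For example, to point the frontend at an engine you already have running (assuming the default port 8000 used elsewhere in this guide):

export DRIFT_BACKEND_URL="http://localhost:8000"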

What is drift?

Local-first: the engine runs on your machine. Training and planning stay local; you never send data to our servers. Terminal-first, chat-based — same engine as the web app. No commands to memorize. Zero auth. Zero tokens. Fully open source.

github.com/lakshitsachdeva/intent2model