# Quickstart
Five minutes. One query. Any LLM provider you already use.

> **Research Preview** — RLMX is experimental. The RLM algorithm is from a recent paper. Things will change. Tell us on Discord.
## Set your API key

RLMX uses Google Gemini by default. Set your API key first.

Settings are stored at `~/.rlmx/settings.json` with `0600` permissions.

## Initialize a project
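The quickstart doesn't show the scaffolded file itself. As a rough illustration only — the field names below are guesses, not the real schema — a scaffolded `rlmx.yaml` might look something like this:

```yaml
# Hypothetical sketch — consult the generated file for the actual schema.
# The scaffolded config documents every option with inline comments.
model: <your-model-id>   # defaults to a Google Gemini model
cache: true              # reuse loaded context across queries
budget:
  max_iterations: 10     # stop if the LLM never calls FINAL()
```

Treat this as a mental model of what gets configured (model, caching, budget limits), not as copy-paste config.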
Navigate to your project directory and scaffold a config. This creates an `rlmx.yaml` with sensible defaults and inline comments explaining every option. You can skip this step — RLMX auto-scaffolds on first query if no config exists.

## Run your first query
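The exact command-line syntax isn't shown here; as a sketch, assuming the CLI takes a question plus a context path (both the argument order and the invocation style are assumptions — check `rlmx --help` for the real syntax):

```shell
# Hypothetical invocation — verify against `rlmx --help`.
rlmx "How is authentication handled?" ./src
```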
Point RLMX at some context and ask a question. RLMX loads your source files into a Python REPL, then iterates — writing Python code to navigate the context, executing it, and refining until it calls `FINAL()` with the answer.

## Output modes
| Mode | Flag | Description |
|---|---|---|
| text | `--output text` (default) | Plain text answer to stdout |
| json | `--output json` | Structured JSON with answer, references, and usage stats |
| stream | `--output stream` | JSONL events per iteration, then a final event |
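The stream mode emits one JSON object per line. A minimal consumer can split per-iteration events from the final event — note that the event field names below (`type`, `final`, `answer`) are illustrative assumptions, not RLMX's documented schema:

```python
import json

def parse_stream(lines):
    """Split JSONL stream output into iteration events and the final event.

    Assumes (hypothetically) that each event carries a "type" field and
    that the run ends with a {"type": "final", "answer": ...} event.
    """
    events, final = [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "final":
            final = event
        else:
            events.append(event)
    return events, final

# Simulated output from a stream-mode run (shapes are made up):
raw = [
    '{"type": "iteration", "n": 1, "code": "len(context)"}',
    '{"type": "iteration", "n": 2, "code": "context[:200]"}',
    '{"type": "final", "answer": "The context has 3 files."}',
]
events, final = parse_stream(raw)
print(len(events))        # 2
print(final["answer"])    # The context has 3 files.
```

In practice you would read lines from the subprocess's stdout instead of a list, but the parsing logic is the same.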
## What just happened?
Under the hood, RLMX:

- Loaded your files into a persistent Python subprocess as the `context` variable
- Sent the LLM a system prompt with metadata about the context (not the content itself)
- The LLM wrote Python code to search, filter, and read specific parts of the context
- RLMX executed each code block and fed the results back
- After a few iterations, the LLM called `FINAL()` with its answer
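The loop above can be sketched in a few lines of Python. Everything here (the `Final` exception, `run_loop`, the toy LLM) is an illustrative reconstruction of the described behavior, not RLMX internals:

```python
class Final(Exception):
    """Raised by FINAL() to stop the loop and carry the answer out."""
    def __init__(self, answer):
        self.answer = answer

def FINAL(answer):
    raise Final(answer)

def run_loop(llm, context, max_iters=10):
    # Persistent namespace plays the role of the Python subprocess:
    # variables survive across iterations.
    namespace = {"context": context, "FINAL": FINAL}
    feedback = None
    for _ in range(max_iters):
        code = llm(feedback)          # LLM writes code given the last result
        try:
            exec(code, namespace)     # execute it against the context
        except Final as f:
            return f.answer           # the LLM called FINAL(answer)
        feedback = namespace.get("_result")  # feed the result back
    return None

# Toy "LLM": first inspects the context, then answers.
def toy_llm(feedback):
    if feedback is None:
        return "_result = len(context)"
    return f"FINAL('context has {feedback} chars')"

print(run_loop(toy_llm, "hello world"))  # context has 11 chars
```

The real system sends execution results back to an actual LLM each turn; the toy stands in only to show the iterate-until-`FINAL()` control flow.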
## Next steps
- **CLI Reference**: Every command and flag documented.
- **Configuration**: Customize model, tools, caching, and budget limits.
- **Batch Mode**: Run hundreds of questions against cached context.
- **Stuck? Ask on Discord**: Real humans. Real answers.