# RLMX
RLMX is a CLI research tool that implements the RLM (REPL-LM) algorithm. It lets LLMs navigate large codebases and document collections programmatically through a persistent Python REPL, instead of stuffing everything into context.

> **Research Preview:** RLMX is experimental. The RLM algorithm is bleeding-edge research. Expect sharp edges. Tell us on Discord.
## How it works
Traditional RAG retrieves chunks and hopes for the best. RLMX takes a different approach:

1. **Prompt externalization**: Your context (files, directories) is loaded into a Python REPL as the `context` variable. Only metadata appears in the LLM message history; the LLM never sees raw context in its messages.
2. **Iterative REPL loop**: The LLM writes Python code in fenced `repl` blocks. RLMX executes each block in a persistent subprocess, feeds results back, and the LLM iterates until it has the answer.
3. **Recursive sub-calls**: Inside REPL code, the LLM can spawn child queries:
   - `llm_query(prompt)`: single LLM completion
   - `llm_query_batched(prompts)`: concurrent LLM calls
   - `rlm_query(prompt)`: spawn a full child RLM session
   - `rlm_query_batched(prompts)`: parallel child RLM sessions
4. **Termination**: The loop ends when the LLM calls `FINAL("answer")` or `FINAL_VAR("variable_name")`, or when max iterations is reached.
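To make the loop concrete, here is a hypothetical sketch of a single `repl` block the LLM might emit. In real RLMX sessions, `context`, `llm_query_batched`, and `FINAL` are injected into the REPL by the tool; the stub definitions and sample file contents below are invented solely so the sketch runs standalone and should not be read as RLMX's actual implementation.

```python
# Stubs standing in for the functions RLMX injects into the REPL.
def llm_query_batched(prompts):
    # Real version: concurrent LLM completions, one per prompt.
    return [f"summary of: {p[:40]}" for p in prompts]

final_answer = {}
def FINAL(answer):
    # Real version: terminates the RLM loop with this answer.
    final_answer["value"] = answer

# `context` holds the externalized files; only metadata about it ever
# appears in the LLM's message history.
context = {
    "src/auth.py": "def login(user): ...",
    "src/db.py": "def connect(): ...",
}

# The LLM navigates programmatically: filter to relevant files,
# then fan out one sub-query per file.
relevant = [path for path in context if "auth" in path]
answers = llm_query_batched(
    [f"Explain this file:\n{context[p]}" for p in relevant]
)

FINAL("\n".join(answers))
print(final_answer["value"])
```

The key property the sketch illustrates: the raw file contents live only inside the REPL process, and the LLM touches them through code rather than through its prompt.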
## Why use RLMX?
| Approach | Context handling | Best for |
|---|---|---|
| RAG | Retrieve chunks, stuff into prompt | Simple Q&A over small docs |
| Full context | Dump everything into system prompt | Small codebases, high cost |
| RLMX (RLM) | LLM navigates programmatically | Large codebases, complex analysis |
| RLMX (CAG) | Cache full context at provider level | Repeated queries, batch Q&A |
## Part of the Automagik ecosystem
RLMX works standalone or as part of a Genie workflow. Use it as a research tool inside agent sessions, as a batch processor for document interrogation, or as a library in your own tools.

## Requirements
- Node.js >= 18
- Python 3.10+ (for the REPL subprocess)
- An LLM API key (Google Gemini, Anthropic, OpenAI, or any pi/ai provider)
## Documentation

- **Quickstart**: Install, configure, and run your first query in under five minutes.
- **CLI Reference**: Every command, flag, and output mode documented.
- **Configuration**: `rlmx.yaml` format, config commands, and fallback files.
- **Batch Mode**: Bulk interrogation, caching, and cost estimation.