Installation Guide: Deploy LAMBDA Locally from Scratch – Zero-Code Data Analysis Agent
LAMBDA is an open-source, multi-agent data analysis system powered by large language models. It allows anyone — even those without programming skills — to perform complex data science tasks simply by chatting in natural language (English or Chinese). No code writing required.
This comprehensive guide walks you through every step of installing and running LAMBDA locally on your machine. By the end, you'll have a fully functional instance running in your browser, ready to analyze Excel, CSV, or other tabular data.
1. System Requirements & Prerequisites
Before you begin, make sure your system meets these minimum requirements:
- Operating System: Windows 10/11, macOS 11+, or Linux (Ubuntu 20.04+ recommended)
- Python: Version 3.10 or higher (3.11/3.12 preferred for best performance)
- RAM: At least 8GB (16GB+ strongly recommended if using local models)
- Disk Space: ~5–10GB free (for dependencies + model weights if using local LLMs)
- Internet: Required during installation (for downloading packages and models)
Step 1.1: Install Python
Windows
- Go to https://www.python.org/downloads/
- Download the latest Python 3.10+ installer (64-bit recommended)
- Run the installer and **important**: check the box “Add Python 3.x to PATH” at the bottom of the first screen
- Click “Install Now” and wait for completion
- Open Command Prompt (cmd) or PowerShell and verify:
python --version
You should see something like:
Python 3.12.1
macOS
- Recommended: Download the official installer from python.org
- Alternative (if you have Homebrew installed):
brew install python@3.12
- Verify:
python3 --version
Linux (Ubuntu/Debian example)
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12 python3.12-venv python3-pip
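Whichever platform you are on, a quick check confirms that both the interpreter and the venv module (needed in step 3) are ready. A minimal sketch; substitute `python3.12` for `python3` if your distribution keeps versioned command names:

```shell
# Confirm the interpreter version (should report 3.10 or higher)
python3 --version

# Confirm the venv module is importable, so virtual environments can be created
python3 -m venv --help > /dev/null && echo "venv module available"
```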
2. Clone the Repository
Open your terminal (Command Prompt, Terminal.app, or any shell) and run:
git clone https://github.com/AMA-CMFAI/LAMBDA.git
cd LAMBDA
If you don’t have Git installed:
- Windows: Download from git-scm.com
- macOS:
brew install git
(or install it via the Xcode Command Line Tools)
- Linux:
sudo apt install git
3. Create & Activate a Virtual Environment (Highly Recommended)
Using a virtual environment prevents conflicts with other Python projects on your system.
Option A: Conda (Best for beginners & managing large models)
# If you don’t have Conda, download Miniconda from https://docs.conda.io/en/latest/miniconda.html
conda create -n lambda python=3.12
conda activate lambda
Option B: Built-in venv
# macOS/Linux
python3 -m venv lambda-env
source lambda-env/bin/activate
# Windows
python -m venv lambda-env
lambda-env\Scripts\activate
After activation, your terminal prompt should show (lambda) or (lambda-env).
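You can double-check which interpreter is active. With the environment activated, both commands below should point inside lambda-env (or the conda env), not the system Python; this sketch uses the macOS/Linux `python3` name:

```shell
# Show which python executable is first on PATH
command -v python3

# Show the installation prefix of the running interpreter; with the
# environment active, this should be the lambda-env (or conda env) directory
python3 -c "import sys; print(sys.prefix)"
```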
4. Install All Dependencies
With the virtual environment activated, run:
pip install -r requirements.txt
Tips for faster/smoother installation:
- China users: Add a mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
- If you get build errors for packages such as numpy or scipy: on Windows, install Microsoft Visual C++ Build Tools; on Linux, run
sudo apt install build-essential
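Once the install finishes, pip's built-in checker is a cheap way to catch broken or conflicting packages before moving on; it only inspects installed metadata and makes no network calls:

```shell
# With the environment still active, report any unmet or conflicting
# dependency requirements; no output means everything resolved cleanly
pip check
```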
5. Register Jupyter Kernel (Critical for Code Execution)
LAMBDA uses Jupyter as its secure code interpreter backend. Register a dedicated kernel:
ipython kernel install --name lambda --user
This command makes the “lambda” kernel available for safe, sandboxed code execution.
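To confirm the registration took effect, you can list the kernels Jupyter knows about; a "lambda" entry should appear in the output:

```shell
# List installed kernelspecs; a "lambda" row should be present
jupyter kernelspec list
```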
6. Configure config.yaml – The Heart of LAMBDA
In the project root, find and open config.yaml with any text editor (VS Code, Notepad++, etc.).
You have two main options for the language model backend:
Option A: Cloud API (Fastest to start – OpenAI, Groq, Anthropic, etc.)
conv_model: "gpt-4o-mini" # or "gpt-4o", "claude-3-5-sonnet-20241022", etc.
programmer_model: "gpt-4o-mini"
inspector_model: "gpt-4o-mini"
api_key: "sk-your-openai-api-key-here" # Get from https://platform.openai.com/account/api-keys
base_url_conv_model: 'https://api.openai.com/v1'
base_url_programmer: 'https://api.openai.com/v1'
base_url_inspector: 'https://api.openai.com/v1'
streaming: True
max_attempts: 5
max_exe_time: 18000
Option B: Local LLM (Free, private, unlimited – Highly recommended long-term)
- Install Ollama: https://ollama.com/download
- Pull a strong model (run in terminal):
ollama pull qwen2.5:14b   # or: ollama pull llama3.1:8b, deepseek-r1:14b, etc.
- Configure config.yaml:
conv_model: "qwen2.5:14b"
programmer_model: "qwen2.5:14b"
inspector_model: "qwen2.5:14b"
api_key: "ollama"
base_url_conv_model: 'http://localhost:11434/v1'
base_url_programmer: 'http://localhost:11434/v1'
base_url_inspector: 'http://localhost:11434/v1'
streaming: True
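Before launching, it is worth confirming that the Ollama server is reachable and that the model named in config.yaml has actually been pulled. `/api/tags` is Ollama's endpoint for listing local models, and 11434 is its default port:

```shell
# List models known to the local Ollama server; the model name from
# config.yaml (e.g. qwen2.5:14b) should appear in the JSON response
curl -s http://localhost:11434/api/tags

# Equivalent check via the CLI
ollama list
```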
7. Launch LAMBDA!
Make sure Ollama is running (if using local model), then in the terminal:
python lambda_app.py
After a few seconds, you’ll see output like:
Running on local URL: http://127.0.0.1:7860
Open that address in your browser. Upload a CSV/Excel file and start chatting:
“Show me the sales trend by month”
“Which product has the highest profit margin?”
“Generate a full report with charts and insights”
Troubleshooting Common Issues
- “python not found” → Reinstall Python and check “Add to PATH”
- ModuleNotFoundError → Ensure the virtual environment is active (conda activate lambda or source lambda-env/bin/activate)
- API rate limit / invalid key → Double-check your API key (no extra spaces)
- Ollama not responding → Run ollama serve in another terminal and make sure the model is pulled
- Slow startup → First run downloads large files; subsequent launches are much faster
Congratulations! You now have your own local LAMBDA instance.
Project GitHub: https://github.com/AMA-CMFAI/LAMBDA
Full Documentation: https://ama-cmfai.github.io/LAMBDA-Docs/