Installation Guide: Deploy LAMBDA Locally from Scratch – Zero-Code Data Analysis Agent

January 10, 2026 · Approx. 12-minute read

LAMBDA is an open-source, multi-agent data analysis system powered by large language models. It allows anyone — even those without programming skills — to perform complex data science tasks simply by chatting in natural language (English or Chinese). No code writing required.

This comprehensive guide walks you through every step of installing and running LAMBDA locally on your machine. By the end, you'll have a fully functional instance running in your browser, ready to analyze Excel, CSV, or other tabular data.

1. System Requirements & Prerequisites

Before you begin, make sure you have the following (each is covered in the steps below):

  1. Python 3.10 or later
  2. Git, for cloning the repository
  3. Optional: Conda, for easier environment management
  4. Optional: Ollama, if you plan to run a local model instead of a cloud API

Step 1.1: Install Python

Windows

  1. Go to https://www.python.org/downloads/
  2. Download the latest Python 3.10+ installer (64-bit recommended)
  3. Run the installer and **important**: check the box “Add Python 3.x to PATH” at the bottom of the first screen
  4. Click “Install Now” and wait for completion
  5. Open Command Prompt (cmd) or PowerShell and verify:
    python --version
    You should see something like Python 3.12.1

macOS

  1. Recommended: Download the official installer from python.org
  2. Alternative (if you have Homebrew installed):
    brew install python@3.12
  3. Verify:
    python3 --version

Linux (Ubuntu/Debian example)

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12 python3.12-venv python3-pip
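Whichever platform you are on, you can confirm the installed interpreter meets the 3.10+ minimum with a one-liner (a quick sanity check, not part of the official setup):

```shell
# Exits 0 if the interpreter is at least Python 3.10, else exits 1,
# and prints a human-readable verdict either way.
python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 10) else 1)' \
  && echo "Python version OK" \
  || echo "Python too old - install 3.10 or newer"
```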

2. Clone the Repository

Open your terminal (Command Prompt, Terminal.app, or any shell) and run:

git clone https://github.com/AMA-CMFAI/LAMBDA.git
cd LAMBDA

If you don’t have Git installed:

  Windows: download the installer from https://git-scm.com/downloads
  macOS: run xcode-select --install, or brew install git if you use Homebrew
  Linux (Ubuntu/Debian): sudo apt install git

3. Create & Activate a Virtual Environment (Highly Recommended)

Using a virtual environment prevents conflicts with other Python projects on your system.

Option A: Conda (Best for beginners & managing large models)

# If you don’t have Conda, download Miniconda from https://docs.conda.io/en/latest/miniconda.html
conda create -n lambda python=3.12
conda activate lambda

Option B: Built-in venv

# macOS/Linux
python3 -m venv lambda-env
source lambda-env/bin/activate

# Windows
python -m venv lambda-env
lambda-env\Scripts\activate

After activation, your terminal prompt should show (lambda) or (lambda-env).
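Beyond the prompt prefix, you can confirm the activation actually took effect by asking the interpreter where it lives (a quick check — the exact path will depend on where you created the env):

```shell
# With the environment activated, the interpreter's prefix should point
# inside the env folder (e.g. ending in lambda-env), not the system Python.
python3 -c 'import sys; print(sys.prefix)'
```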

4. Install All Dependencies

With the virtual environment activated, run:

pip install -r requirements.txt

Tips for faster/smoother installation:

  Upgrade pip first: pip install --upgrade pip
  If a package fails to download, re-run the command once — transient network errors are common
  On a slow connection, a regional PyPI mirror (pip install -r requirements.txt -i <mirror-url>) can speed things up considerably

5. Register Jupyter Kernel (Critical for Code Execution)

LAMBDA uses a Jupyter kernel as its code-execution backend. Register a dedicated kernel:

ipython kernel install --name lambda --user

This makes a “lambda” kernel available, so the code LAMBDA generates runs in a separate kernel process rather than directly in your shell.
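You can verify the registration took effect by listing the installed kernelspecs (a quick check; the fallback message is just a hint, not official tool output):

```shell
# List registered Jupyter kernels and confirm "lambda" is among them.
jupyter kernelspec list 2>/dev/null | grep -q lambda \
  && echo "lambda kernel registered" \
  || echo "lambda kernel not found - re-run the install command above"
```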

6. Configure config.yaml – The Heart of LAMBDA

In the project root, find and open config.yaml with any text editor (VS Code, Notepad++, etc.).

You have two main options for the language model backend:

Option A: Cloud API (Fastest to start – OpenAI, Groq, Anthropic, etc.)

conv_model: "gpt-4o-mini"                # or "gpt-4o", "claude-3-5-sonnet-20241022", etc.
programmer_model: "gpt-4o-mini"
inspector_model: "gpt-4o-mini"

api_key: "sk-your-openai-api-key-here"     # Get from https://platform.openai.com/account/api-keys

base_url_conv_model: 'https://api.openai.com/v1'
base_url_programmer: 'https://api.openai.com/v1'
base_url_inspector: 'https://api.openai.com/v1'

streaming: True
max_attempts: 5
max_exe_time: 18000
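Before launching, you can optionally confirm the key is accepted by the endpoint. This sketch assumes you have exported the key as the OPENAI_API_KEY environment variable — substitute however you store it:

```shell
# Print just the HTTP status of an authenticated request:
# 200 = key accepted, 401 = key rejected, 000 = endpoint unreachable.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models || true
```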

Option B: Local LLM (Free, private, unlimited – Highly recommended long-term)

  1. Install Ollama: https://ollama.com/download
  2. Pull a strong model (run in terminal):
    ollama pull qwen2.5:14b
    # or ollama pull llama3.1:8b / deepseek-r1:14b etc.
  3. Configure config.yaml:
    conv_model: "qwen2.5:14b"
    programmer_model: "qwen2.5:14b"
    inspector_model: "qwen2.5:14b"
    
    api_key: "ollama"
    base_url_conv_model: 'http://localhost:11434/v1'
    base_url_programmer: 'http://localhost:11434/v1'
    base_url_inspector: 'http://localhost:11434/v1'
    
    streaming: True

7. Launch LAMBDA!

Make sure Ollama is running (if using local model), then in the terminal:

python lambda_app.py

After a few seconds, you’ll see output like:

Running on local URL:  http://127.0.0.1:7860

Open that address in your browser. Upload a CSV/Excel file and start chatting:

“Show me the sales trend by month”
“Which product has the highest profit margin?”
“Generate a full report with charts and insights”

Troubleshooting Common Issues

  “python: command not found” (or Python opens the Microsoft Store on Windows): Python isn’t on your PATH — re-run the installer, check “Add Python to PATH”, then open a new terminal.
  “No module named …” at launch: the virtual environment isn’t activated, or pip install -r requirements.txt didn’t finish. Re-activate the environment and re-run the install.
  Kernel errors during analysis: make sure you ran ipython kernel install --name lambda --user inside the activated environment.
  Connection errors with a local model: Ollama isn’t running, or the model name in config.yaml doesn’t match one you’ve pulled (ollama list shows what’s available).
  Port 7860 already in use: stop the other program (or a previous LAMBDA instance) using it, then relaunch.

Congratulations! You now have your own local LAMBDA instance.

Project GitHub: https://github.com/AMA-CMFAI/LAMBDA
Full Documentation: https://ama-cmfai.github.io/LAMBDA-Docs/