Introduction

In today’s AI landscape, every query you send to ChatGPT, Perplexity, or other cloud-based AI services leaves your questions on someone else’s server. For researchers, journalists, investors, and anyone working with sensitive information, this is a fundamental problem.

Enter Local Deep Research by LearningCircuit — an open-source, privacy-first AI research platform that runs entirely on your own computer. With over 7,000 GitHub stars and skyrocketing adoption (2,494 new stars just this week), it has become one of the fastest-growing privacy-focused AI projects on the internet.

Unlike traditional AI search tools that route your queries through third-party servers, Local Deep Research gives you complete control over your data, your models, and your research process — all while delivering results comparable to state-of-the-art systems.

What Is Local Deep Research?

Local Deep Research (LDR) is an agent-based AI research assistant that automates the entire research workflow: formulating searches, collecting sources across academic databases, web pages, and private documents, analyzing findings, and synthesizing everything into well-cited reports — all running locally on your machine.

Think of it as having a personal research analyst that works around the clock, never sends your questions anywhere, and builds a growing, searchable knowledge base from every session.

Key Metrics at a Glance

| Metric | Value |
| --- | --- |
| GitHub Stars | 7,035+ (2,494 this week) |
| Docker Pulls | 15,000+ |
| Benchmark Accuracy | ~95% on SimpleQA |
| Supported Search Engines | 10+ (arXiv, PubMed, Wikipedia, SearXNG, etc.) |
| Languages | Python, JavaScript/TypeScript, HTML/CSS |
| License | Open Source |
| Hardware Required | Runs on CPU; GPU optional for better performance |

Why Choose Local Deep Research Over Cloud Alternatives?

The AI research market is dominated by products like OpenAI’s Deep Research, Perplexity Pro, and NotebookLM — all of which require sending your queries to external servers. Here’s why Local Deep Research is different:

| Feature | Local Deep Research | Perplexity Pro | NotebookLM | OpenAI Deep Research |
| --- | --- | --- | --- | --- |
| Data stays on your machine | ✅ Fully local | ❌ Sent to cloud | ❌ Sent to Google | ❌ Sent to OpenAI |
| Works offline (after setup) | ✅ Yes | ❌ Requires internet | ❌ Requires internet | ❌ Requires internet |
| No monthly subscription | ✅ Free & open source | $20/month | Free (with Google account) | $200/month (ChatGPT Ultra) |
| Custom LLM support | ✅ Ollama, LM Studio, llama.cpp | ❌ GPT only | ❌ Google models only | ❌ OpenAI only |
| Encrypted knowledge base | ✅ SQLCipher encrypted | ❌ No encryption | ❌ No encryption | ❌ No encryption |
| Self-hosted deployment | ✅ Docker, pip, CLI | ❌ Not available | ❌ Not available | ❌ Not available |
| Enterprise API access | ✅ REST API + MCP | ❌ Limited API | ❌ No public API | ❌ No API |
| Academic paper indexing | ✅ arXiv, PubMed, Semantic Scholar | Partial | ❌ No | ❌ No |

Core Features Explained

1. Multiple Research Modes

Quick Summary — Get concise answers with citations in 30 seconds to 3 minutes. Perfect for quick fact-checking and rapid exploration of new topics.

Detailed Research — Comprehensive analysis with structured findings, multiple sections, and thorough sourcing. Ideal for preparing presentations, writing articles, or making investment decisions.

Report Generation — Professional-quality reports with tables of contents, proper formatting, and download options in PDF or Markdown. These are publication-ready outputs.

Document Analysis — Upload your private documents and research them directly alongside live web sources. Your files are encrypted and never leave your machine.

2. 20+ Research Strategies

LDR isn’t limited to a single approach. It offers over 20 research strategies optimized for different use cases:

  • Quick facts — Rapid fact retrieval with minimal tokens
  • Deep analysis — Multi-turn reasoning for complex questions
  • Academic research — Optimized for scholarly paper discovery and synthesis
  • LangGraph Agent Strategy — An autonomous agentic mode where the LLM dynamically decides what to search, which specialized engines to use, and when to synthesize results

The LangGraph strategy is particularly powerful: it adaptively switches between search engines based on what it finds, collecting significantly more sources than pipeline-based approaches.
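As a rough sketch of that adaptive behavior (the engines, the routing heuristic, and every function name below are toy stand-ins for illustration, not LDR's internals, which delegate these decisions to an LLM):

```python
# Minimal sketch of an adaptive search loop: pick an engine per query,
# collect what comes back, and keep going until enough sources are found.

def pick_engine(query, engines):
    """Naive keyword router standing in for the LLM's engine choice."""
    for name, (topics, _search) in engines.items():
        if any(t in query.lower() for t in topics):
            return name
    return "web"

def adaptive_research(query, engines, min_sources=3):
    sources, tried = [], []
    while len(sources) < min_sources and len(tried) < len(engines):
        name = pick_engine(query, {k: v for k, v in engines.items() if k not in tried})
        tried.append(name)
        _topics, search = engines[name]
        sources.extend(search(query))
    return sources

# Stub engines: (topic keywords, search function returning fake source IDs).
ENGINES = {
    "arxiv": (["quantum", "transformer"], lambda q: [f"arxiv:{q}"]),
    "pubmed": (["drug", "protein"], lambda q: [f"pubmed:{q}"]),
    "web": ([""], lambda q: [f"web:{q}:1", f"web:{q}:2"]),
}

results = adaptive_research("quantum error correction", ENGINES)
```

The query matches arXiv first; since one paper is not enough, the loop falls through to general web search until the source threshold is met.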

3. Extensive Search Engine Support

LDR integrates with over 10 search sources across categories:

Free Academic Sources:

  • arXiv — Pre-print papers across computer science, physics, and mathematics
  • PubMed — Biomedical and life sciences literature
  • Semantic Scholar — AI-powered academic paper search
  • Wikipedia — General knowledge base

Free General Sources:

  • SearXNG — Privacy-respecting metasearch engine
  • GitHub — Code repositories and developer discussions
  • Elasticsearch — Technical documentation search
  • Wayback Machine — Historical web page archives
  • The Guardian & Wikinews — News coverage

Premium Sources (optional):

  • Tavily — AI-powered web search
  • Google — Via SerpAPI or Programmable Search Engine
  • Brave Search — Privacy-focused general search

4. Build Your Personal Knowledge Base

Every research session discovers valuable sources. With one click, you can save them directly to an encrypted library — academic papers from arXiv, PubMed articles, or web pages. LDR extracts text, indexes everything with embeddings, and makes it searchable.

Over time, your knowledge compounds: each session benefits from your accumulated research corpus plus live web sources.
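The save-then-search flow can be sketched with toy word-count vectors standing in for real embedding models; the `KnowledgeBase` class here is illustrative only, not LDR's actual API:

```python
# Toy sketch of "index everything, make it searchable": build a vector per
# saved document, then rank documents by cosine similarity against the query.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class KnowledgeBase:
    def __init__(self):
        self.docs = {}  # title -> word-count vector

    def save(self, title, text):
        self.docs[title] = vectorize(text)

    def search(self, query, top_k=2):
        qv = vectorize(query)
        ranked = sorted(self.docs, key=lambda t: cosine(qv, self.docs[t]), reverse=True)
        return ranked[:top_k]

kb = KnowledgeBase()
kb.save("qec-paper", "surface codes for quantum error correction")
kb.save("drug-review", "machine learning in drug discovery pipelines")
hits = kb.search("quantum error correction")
```

Real embeddings capture semantic similarity rather than word overlap, but the index-then-rank shape is the same.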

5. MCP Server Integration

One of LDR’s most powerful features is its built-in MCP (Model Context Protocol) server, which allows AI assistants like Claude Desktop and Claude Code to perform deep research on demand. This means you can ask Claude to “research X thoroughly” and Claude will delegate to Local Deep Research, getting back fully sourced, comprehensive reports.
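For Claude Desktop, MCP servers are registered in its claude_desktop_config.json. An entry might look like the sketch below; the command and module path are assumptions for illustration, so check LDR's MCP documentation for the real invocation:

```json
{
  "mcpServers": {
    "local-deep-research": {
      "command": "python",
      "args": ["-m", "local_deep_research.mcp_server"]
    }
  }
}
```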

6. Advanced Capabilities

  • Analytics Dashboard — Track costs, performance metrics, and usage patterns
  • Journal Quality System — Automatic reputation scoring for 212K+ indexed academic sources with predatory journal detection
  • Research Subscriptions — Automated research digests delivered daily, weekly, or on custom schedules
  • Real-time Updates — WebSocket support for live progress monitoring during research sessions
  • Adaptive Rate Limiting — Intelligent retry system that learns optimal wait times for rate-limited APIs
  • Per-User Encrypted Databases — SQLCipher encryption ensures complete data isolation
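The adaptive rate-limiting idea can be sketched in a few lines. The limiter below is a stand-in, not LDR's implementation (which tracks limits per API and persists them): it backs off exponentially on failures and remembers the wait that finally worked.

```python
# Sketch of adaptive retry: back off exponentially on rate-limit errors and
# start the next call from the wait that last succeeded.
import time

class AdaptiveLimiter:
    def __init__(self, base_wait=0.01, factor=2.0):
        self.learned_wait = base_wait
        self.factor = factor

    def call(self, fn, max_attempts=6):
        wait = self.learned_wait
        for _attempt in range(max_attempts):
            try:
                result = fn()
                self.learned_wait = wait  # remember what worked
                return result
            except RuntimeError:          # stand-in for an HTTP 429
                time.sleep(wait)
                wait *= self.factor
        raise RuntimeError("rate limit never cleared")

# Fake API that rejects the first two calls, as a rate-limited server might.
calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

limiter = AdaptiveLimiter()
result = limiter.call(flaky_api)
```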

Step-by-Step: Installing and Running Local Deep Research

Option 1: Docker Compose (Recommended)

This is the easiest way to get started. Two commands spin up everything you need:

# Download the Docker Compose configuration
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml

# Start the service (CPU-only, works on all platforms)
docker compose up -d

After approximately 30 seconds, open http://localhost:5000 in your browser. You’re ready to start researching.

With NVIDIA GPU (Linux):

curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.gpu.override.yml
docker compose -f docker-compose.yml -f docker-compose.gpu.override.yml up -d

Option 2: pip Install (macOS, Windows, Linux)

pip install local-deep-research

This works on all three major operating systems. SQLCipher encryption is included via pre-built wheels — no compilation needed.

Option 3: Manual Docker Setup

For advanced users who want to customize components:

# Step 1: Run Ollama locally
docker run -d -p 11434:11434 --name ollama ollama/ollama
docker exec ollama ollama pull gpt-oss:20b

# Step 2: Run SearXNG for enhanced search
docker run -d -p 8080:8080 --name searxng searxng/searxng

# Step 3: Run Local Deep Research
# (host networking lets the container reach Ollama and SearXNG on localhost;
#  -p mappings are ignored under --network host, which exposes port 5000 directly)
docker run -d --network host \
  --name local-deep-research \
  --volume "deep-research:/data" \
  -e LDR_DATA_DIR=/data \
  localdeepresearch/local-deep-research

Configuring Your LLM

After installation, configure which model powers your research. LDR supports:

Local Models (zero API cost):

  • Ollama — Connect to its native API at localhost:11434. Popular models: Llama 3, Mistral, Gemma, DeepSeek, Qwen
  • LM Studio — Connect to its OpenAI-compatible server at localhost:1234/v1
  • llama.cpp — Connect to llama-server at localhost:8080/v1

Cloud Models (if you prefer managed inference):

  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic (Claude 3 series)
  • Google (Gemini)
  • 100+ models via OpenRouter

Go to Settings → LLM in the web interface to select your preferred model. Local models mean zero ongoing costs and complete data privacy.
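Because LM Studio and llama-server both speak the OpenAI-compatible protocol, one request shape covers them. The sketch below only assembles that payload; the model name "llama-3" is a placeholder for whatever your server has loaded, and the actual POST is commented out so it runs without a server:

```python
# Build a chat-completions request for any OpenAI-compatible local endpoint.
import json

def build_chat_request(base_url, model, prompt):
    """Assemble the endpoint URL and JSON body for a chat completion call."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode()

url, body = build_chat_request(
    "http://localhost:1234/v1", "llama-3", "Explain mixture-of-experts models")

# To actually send it (requires a running server):
# import urllib.request
# req = urllib.request.Request(url, data=body,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```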

Using Local Deep Research: Practical Examples

Example 1: Quick Research via Python API

from local_deep_research.api import LDRClient, quick_query

# One-line simple research
summary = quick_query("username", "password", "What are the latest breakthroughs in quantum computing?")
print(summary)

# Client for multiple operations
client = LDRClient()
client.login("username", "password")
result = client.quick_research("How is AI transforming drug discovery in 2026?")
print(result["summary"])

Example 2: Integrating with Claude Desktop via MCP

Configure Claude Desktop to use Local Deep Research for deep research tasks. Once set up, you can simply ask Claude:

“Please do a deep research on transformer model optimization techniques and compile a report.”

Claude will automatically invoke LDR’s MCP server, execute the research across multiple sources, and return a comprehensive, cited report — all without leaving the Claude interface.

Example 3: Automating Market Intelligence

Use LDR’s subscription feature to stay informed:

  1. Subscribe to topics like “quantum computing advances,” “biotech funding rounds,” or “semiconductor supply chain”
  2. Receive automated research summaries daily or weekly
  3. Results arrive as structured markdown reports with sources
  4. Compare changes over time using your encrypted knowledge base
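Step 4 can be as simple as diffing two saved digests, since the reports arrive as markdown. The filenames and digest content below are made up for illustration:

```python
# Compare two scheduled research digests with a plain unified diff.
import difflib

monday = "## Quantum computing digest\n- IBM announces 1,121-qubit chip\n"
friday = ("## Quantum computing digest\n- IBM announces 1,121-qubit chip\n"
          "- New error-correction record reported\n")

diff = list(difflib.unified_diff(
    monday.splitlines(), friday.splitlines(),
    fromfile="digest-monday.md", tofile="digest-friday.md", lineterm=""))
```

Only the genuinely new findings show up as added lines, which keeps week-over-week review fast.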

Real-World Use Cases

For Researchers and Academics

Perform systematic literature reviews across arXiv, PubMed, and Semantic Scholar without exposing your research topics to any cloud service. Build a personalized knowledge base of papers, annotated and searchable.

For Journalists

Investigate sensitive topics without any digital footprint. All queries stay on your machine. Combine live web searches with document analysis from confidential sources stored locally.

For Investors and Analysts

Monitor market trends, company fundamentals, and industry developments using automated research subscriptions. The journal quality system ensures you’re reading reputable sources and filtering out predatory publications.

For Developers and Engineers

Stay current with technology trends across GitHub repositories, technical documentation sites, and Stack Overflow. Run benchmarks using the built-in SimpleQA evaluation system to test different LLM configurations.

For Business Teams

Deploy LDR on your internal network with per-user encrypted databases. The REST API enables integration with existing workflows, and the enterprise dashboard provides usage analytics for the entire team.

Performance: Benchmarks That Compete With Paid Services

Local Deep Research achieves ~95% accuracy on SimpleQA, a widely used benchmark of factual question answering, when configured with GPT-4.1-mini combined with SearXNG and the focused-iteration strategy.

More importantly, local models can achieve similar performance with proper configuration. The community maintains benchmark datasets on Hugging Face and GitHub, allowing you to compare accuracy numbers across hundreds of local and cloud model combinations before committing to any setup.

One community member achieved these results on a single NVIDIA RTX 3090 using Qwen3.6-27B, demonstrating that professional-grade AI research doesn’t require expensive cloud subscriptions.
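For context, a headline number like "~95% on SimpleQA" is ordinary accuracy: graded-correct answers over total questions. SimpleQA grading in practice uses an LLM judge; normalized exact match stands in for it in this sketch, and the sample answers are invented:

```python
# Accuracy arithmetic behind a SimpleQA-style score.
def normalize(s):
    return " ".join(s.lower().split())

def accuracy(predictions, gold):
    correct = sum(normalize(p) == normalize(g) for p, g in zip(predictions, gold))
    return correct / len(gold)

preds = ["Paris", "1969 ", "Ada  Lovelace", "Mars"]
gold = ["paris", "1969", "Ada Lovelace", "Venus"]
score = accuracy(preds, gold)  # 3 of 4 correct
```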

Comparison with Competitors

vs. Perplexity Pro ($20/month)

Perplexity offers speed and convenience but routes everything through Google’s infrastructure. LDR provides comparable quality with full data sovereignty, no monthly fees, and the ability to customize search sources and LLMs.

vs. NotebookLM (Free)

NotebookLM excels at document grounding within Google’s ecosystem but requires a Google account and sends all data to Google servers. LDR lets you upload private documents with encrypted storage and choose any LLM provider.

vs. OpenAI Deep Research ($200/month for ChatGPT Ultra)

OpenAI’s offering is powerful but prohibitively expensive for most users. LDR achieves similar results at zero cost, runs locally, and supports a much wider range of models and search engines.

vs. Firecrawl / Web Scraping Tools

Tools like Firecrawl focus on extracting website content at scale. LDR goes beyond scraping — it understands context, synthesizes findings across multiple sources, and produces structured research output with citations.

Getting Started Today

  1. Visit github.com/LearningCircuit/local-deep-research
  2. Install using Docker Compose (curl + docker compose up)
  3. Configure your preferred LLM in Settings
  4. Start researching — type any question and watch LDR do its magic

The entire setup takes less than 5 minutes with Docker Compose. For detailed setup guides, see the installation documentation.

Security and Privacy

LDR takes security seriously. The project includes extensive security scanning:

  • Static analysis (CodeQL, Semgrep, DevSkim, Bearer)
  • Dependency and secrets scanning (OSV-Scanner, npm-audit)
  • Container security (Dockle, Hadolint, Checkov)
  • Runtime security (OWASP ZAP scans, Zizmor workflow checks)

All user databases are encrypted with SQLCipher. Research history, saved documents, and knowledge bases are completely isolated per user. Nothing leaves your machine unless you explicitly configure external search API keys.

Conclusion

Local Deep Research represents a paradigm shift in how we interact with AI-powered research tools. By putting privacy, customization, and data ownership first, it offers a compelling alternative to the walled-garden AI services dominating the market.

Whether you’re a researcher protecting your unpublished work, a journalist covering sensitive topics, an investor doing due diligence, or simply someone who values their digital privacy — Local Deep Research gives you the power of deep AI-powered inquiry without compromise.

The project is actively maintained (329 commits), has a thriving community, and receives positive coverage in tech media worldwide. With 7,000+ stars and counting, it’s clearly resonating with users who care about owning their AI experience.

Try it today. Your research deserves to be private.