AI Chat Tab
The AI Chat tab provides an interactive natural-language interface to your heap analysis. Ask questions about the heap dump and receive answers grounded in the actual analysis data.
What You See
A chat interface with a message input at the bottom and a scrollable message history above it:
┌──────────────────────────────────────────────────────┐
│ 🤖 Based on the heap analysis, the primary memory    │
│    consumer is com.example.cache.DataCache at        │
│    512 MB (31.4% of heap). This single HashMap-      │
│    backed cache retains nearly a third of all        │
│    reachable memory...                               │
│                                                      │
│ 👤 What are the top 3 things I should fix first?     │
│                                                      │
│ 🤖 Here are the top 3 recommendations:               │
│    1. **DataCache** (512 MB) — Add LRU eviction...   │
│    2. **Duplicate strings** (35 MB) — Enable JVM...  │
│    3. **Empty collections** (9.6 MB) — Lazy init...  │
├──────────────────────────────────────────────────────┤
│ [Type your question here...]                  [Send] │
└──────────────────────────────────────────────────────┘
How It Works
Context Injection
On the first message, HeapLens prepends a structured summary of the heap analysis as context for the LLM. This includes:
- Heap Summary — Total size, object counts, GC root count
- Top 15 Objects — Class name, type, shallow size, retained size
- All Leak Suspects — Severity, class, percentage, description, dependency info
- Top 20 Classes — From the class histogram
- Waste Analysis — Total waste, top 5 duplicate strings
This context is formatted as markdown tables (~2-3K tokens) and included as a system message, so the LLM can answer questions accurately about your specific heap dump.
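As a rough illustration, the injected context could be assembled like this. The `HeapSummary` shape and `buildHeapContext` helper are illustrative stand-ins, not HeapLens's actual internals:

```typescript
// Hypothetical sketch: turn an analysis summary into the markdown-table
// context that gets prepended as a system message.
interface HeapSummary {
  totalBytes: number;
  objectCount: number;
  gcRootCount: number;
  topObjects: { className: string; retainedBytes: number }[];
}

function buildHeapContext(s: HeapSummary): string {
  const mb = (n: number) => `${(n / 1024 ** 2).toFixed(1)} MB`;
  return [
    "## Heap Summary",
    "| Total size | Objects | GC roots |",
    "|---|---|---|",
    `| ${mb(s.totalBytes)} | ${s.objectCount} | ${s.gcRootCount} |`,
    "",
    "## Top Objects (by retained size)",
    "| Class | Retained |",
    "|---|---|",
    // The real context caps this at 15 objects.
    ...s.topObjects
      .slice(0, 15)
      .map((o) => `| ${o.className} | ${mb(o.retainedBytes)} |`),
  ].join("\n");
}
```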
Streaming Responses
Responses are streamed token-by-token for fast feedback — you see the answer being written in real time rather than waiting for the complete response.
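A minimal sketch of the streaming loop, assuming the provider exposes the reply as an async iterable of text chunks (the function name and callback shape are illustrative):

```typescript
// Consume a stream of tokens, invoking a callback with the accumulated
// partial answer after each chunk so the UI can re-render incrementally.
async function streamResponse(
  chunks: AsyncIterable<string>,
  onPartial: (partial: string) => void
): Promise<string> {
  let full = "";
  for await (const token of chunks) {
    full += token;
    onPartial(full); // UI shows the answer being written in real time
  }
  return full; // complete response, e.g. for saving to history
}
```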
Conversation History
The chat maintains a per-editor conversation history (up to 20 exchanges), so follow-up questions are answered in the context of earlier exchanges.
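The capped history could be sketched as follows; only the 20-exchange limit comes from this page, while the `ChatHistory` class and its shapes are hypothetical:

```typescript
type Turn = { role: "user" | "assistant"; text: string };

// Hypothetical sketch of a per-editor history capped at N exchanges,
// where one exchange is a user turn plus an assistant turn.
class ChatHistory {
  private turns: Turn[] = [];
  constructor(private maxExchanges = 20) {}

  push(turn: Turn): void {
    this.turns.push(turn);
    // Drop the oldest turns once the cap is exceeded.
    const excess = this.turns.length - this.maxExchanges * 2;
    if (excess > 0) this.turns.splice(0, excess);
  }

  get messages(): readonly Turn[] {
    return this.turns;
  }
}
```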
Configuration
The AI Chat requires an API key. Configure it in VS Code settings:
| Setting | Description | Example |
|---|---|---|
| `heaplens.llm.provider` | LLM provider | `"anthropic"` or `"openai"` |
| `heaplens.llm.apiKey` | API key | `"sk-ant-..."` |
| `heaplens.llm.baseUrl` | Custom API endpoint (optional) | `"https://my-proxy.example.com"` |
| `heaplens.llm.model` | Model override (optional) | `"claude-sonnet-4-20250514"` |
Without an API key, the chat tab shows a configuration prompt.
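For example, the settings above can be added to your user or workspace `settings.json` (the API key value is a placeholder):

```jsonc
{
  "heaplens.llm.provider": "anthropic",
  "heaplens.llm.apiKey": "sk-ant-...",
  // Optional overrides:
  "heaplens.llm.baseUrl": "https://my-proxy.example.com",
  "heaplens.llm.model": "claude-sonnet-4-20250514"
}
```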
Example Questions
Here are effective questions to ask about your heap dump:
Diagnostic Questions
- "What is the biggest memory consumer and why?"
- "Is there a memory leak? What evidence supports that?"
- "Why is byte[] the largest class by shallow size?"
- "Are there any suspicious retention patterns?"
Actionable Questions
- "What are the top 3 things I should fix to reduce memory usage?"
- "How can I reduce the 35 MB of duplicate strings?"
- "What code change would have the most impact on heap size?"
- "Should I add a cache eviction policy? What size limit would you recommend?"
Investigative Questions
- "Explain the relationship between DataCache and the HashMap retaining 510 MB"
- "Are the 45,000 UserSession objects expected for a service with 100 concurrent users?"
- "What are the empty collections and are they worth fixing?"
- "Compare the leak suspects — which is the root cause vs. a symptom?"
Copilot Chat Integration
HeapLens also registers as a VS Code Chat Participant. If you have GitHub Copilot Chat installed, you can use the `@heaplens` mention in the Copilot Chat panel:
`@heaplens What's causing the high memory usage?`
This uses the same analysis context injection but integrates with the Copilot Chat UI rather than HeapLens's built-in chat tab.
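The participant wiring looks roughly like the sketch below. The two interfaces are trimmed stand-ins for the real `vscode.ChatRequest` and `vscode.ChatResponseStream` types; in an actual extension the handler would be registered with `vscode.chat.createChatParticipant`, and the handler body here is a placeholder rather than HeapLens's real logic:

```typescript
// Trimmed stand-ins for the vscode chat participant types, so this
// sketch runs outside the extension host.
interface ChatRequest {
  prompt: string;
}
interface ChatResponseStream {
  markdown(text: string): void;
}

// heapContext stands in for the same analysis summary the built-in
// chat tab injects before the first message.
function makeHandler(heapContext: string) {
  return async (request: ChatRequest, stream: ChatResponseStream) => {
    // A real handler would send heapContext + request.prompt to the
    // configured LLM and forward its streamed reply via stream.markdown().
    stream.markdown(
      `Analyzing: ${request.prompt} ` +
        `(with ${heapContext.length} chars of heap context)`
    );
  };
}
```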
Limitations
- The LLM sees a summary of the analysis data, not the raw heap dump. It cannot access individual objects that are not in the top 15 or the histogram.
- For deep investigation (e.g., "what is object #1234567 retaining?"), use the Dominator Tree tab directly.
- Responses are generated by the LLM and may occasionally be inaccurate. Always verify recommendations against the actual data in other tabs.