Lab (Chat)
The Lab is Lattice’s AI-powered chat interface where you interact with an intelligent assistant that grounds every response in your curated sources.
How It Works
When you send a message, Lattice:
- Searches your sources using hybrid retrieval
- Analyzes the query to identify relevant context
- Reasons through the information with transparent thinking steps
- Responds with cited, verifiable claims
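The retrieval step in the pipeline above can be sketched with a toy hybrid scorer. Lattice's actual scoring functions are not documented, so both components here — a lexical word-overlap score and a character-trigram stand-in for semantic similarity — are illustrative assumptions, not the real implementation:

```python
# Minimal hybrid-retrieval sketch: blend a lexical score (word overlap)
# with a crude "semantic" proxy (character trigram overlap). Both are
# stand-ins for whatever scorers Lattice actually uses.

def _trigrams(text):
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def lexical_score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def semantic_score(query, doc):
    q, d = _trigrams(query), _trigrams(doc)
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    """Rank docs by a weighted blend of the two scores."""
    scored = [(alpha * lexical_score(query, d) + (1 - alpha) * semantic_score(query, d), d)
              for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "Claude Sonnet offers a 200K context window",
    "Anthropic pricing for API rates",
]
print(hybrid_rank("context window", docs)[0])
```

The `alpha` weight controls the lexical/semantic balance; a production system would replace the trigram proxy with embedding similarity.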
Grounded Responses
Every response from Lattice includes numbered citations:
```
Based on the documentation [1], Claude Sonnet offers a 200K context
window [2], while GPT-4 Turbo supports 128K tokens [3]. For high-volume
applications, the pricing differs significantly [4].

[1] Anthropic Documentation - Model Overview
[2] Anthropic Documentation - Context Windows
[3] OpenAI Documentation - GPT-4 Turbo
[4] Anthropic Pricing - API Rates
```

Click any citation to see the original source passage.
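If you export responses for downstream processing, the bracketed `[n]` markers shown above can be pulled out with a small regex. This assumes the plain-text citation format from the example; Lattice's API may also expose citations as structured data:

```python
import re

def extract_citations(text):
    """Return the citation numbers referenced in a response, in order."""
    return [int(n) for n in re.findall(r"\[(\d+)\]", text)]

response = ("Based on the documentation [1], Claude Sonnet offers a 200K "
            "context window [2], while GPT-4 Turbo supports 128K tokens [3].")
print(extract_citations(response))  # → [1, 2, 3]
```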
Thinking Steps
Lattice exposes its reasoning process through thinking steps:
```
[Thinking] Analyzing the query about context windows...
[Thinking] Found 3 relevant sources with specifications...
[Thinking] Comparing Claude and GPT-4 context limits...
[Thinking] Checking pricing implications for large contexts...
```

Suggested Prompts
The Lab provides context-aware suggested prompts based on:
- Your current workspace sources
- Recent conversation topics
- Common research patterns
Click any suggestion to use it as your next query.
Chat Modes
Research Agent (Default)
The full-featured mode with:
- Transparent thinking steps
- Multi-source synthesis
- Citation tracking
- Artifact detection
Direct LLM
Bypass the research agent for simple queries:
- Faster responses
- No source grounding
- Useful for general questions
Using Context Effectively
@Mentions for Source Boosting
Reference specific sources to prioritize them:
```
@anthropic-docs What are the rate limits for Claude?
```

Scenario Context
Set an active scenario to inform responses:
```
Given my high-volume chat scenario, which model
offers the best latency/cost tradeoff?
```

Stack Context
Reference stack configurations for targeted advice:
```
Using my Claude Haiku Speed Stack, how should I
configure the temperature for consistency?
```

Conversation History
The Lab maintains full conversation history within each workspace. You can:
- Scroll back to review previous exchanges
- Search history for specific topics
- Delete individual messages
- Export conversations for documentation
Streaming Responses
Responses stream in real time via Server-Sent Events (SSE):
```
event: step
data: {"type": "thinking", "content": "Analyzing query..."}

event: content
data: {"type": "text", "content": "Based on the documentation"}

event: content
data: {"type": "text", "content": " [1], Claude Sonnet..."}

event: done
data: {"type": "complete", "usage": {"input_tokens": 1234, "output_tokens": 567}}
```

API Reference
Send Message
```
POST /api/workspaces/{workspace_id}/chat
Content-Type: application/json

{
  "message": "Compare Claude and GPT-4 for RAG applications",
  "stream": true,
  "include_history": true,
  "scenario_id": "optional-scenario-uuid",
  "stack_id": "optional-stack-uuid"
}
```

Returns: Server-Sent Events stream
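A streaming client reads the response line by line and groups `event:`/`data:` pairs at each blank line. The parser below is a minimal sketch of that grouping; in a real client the lines would come from the HTTP response body (e.g. `requests`' `iter_lines`), but here a captured sample stream is parsed directly:

```python
import json

def parse_sse(lines):
    """Yield (event, payload) pairs from SSE-formatted lines."""
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event:  # blank line terminates one SSE message
            yield event, json.loads("\n".join(data))
            event, data = None, []

sample = [
    "event: step",
    'data: {"type": "thinking", "content": "Analyzing query..."}',
    "",
    "event: done",
    'data: {"type": "complete", "usage": {"input_tokens": 1234, "output_tokens": 567}}',
    "",
]
for event, payload in parse_sse(sample):
    print(event, payload["type"])  # → step thinking / done complete
```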
```
POST /api/workspaces/{workspace_id}/chat
Content-Type: application/json

{
  "message": "Compare Claude and GPT-4 for RAG applications",
  "stream": false
}
```

Returns: JSON response with complete message
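A non-streaming call is a plain POST. The sketch below builds the request with the standard library; the base URL and workspace id are placeholders, and any authentication headers the API requires are not documented here:

```python
import json
from urllib import request

def build_chat_request(base_url, workspace_id, message, stream=False):
    """Build a POST request for the chat endpoint (base_url is a placeholder)."""
    body = json.dumps({"message": message, "stream": stream}).encode()
    return request.Request(
        f"{base_url}/api/workspaces/{workspace_id}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_message(base_url, workspace_id, message):
    req = build_chat_request(base_url, workspace_id, message)
    with request.urlopen(req) as resp:  # blocks until the full JSON reply arrives
        return json.loads(resp.read())
```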
List Messages
```
GET /api/workspaces/{workspace_id}/messages
```

Delete Message
```
DELETE /api/workspaces/{workspace_id}/messages/{message_id}
```