I'll guide you through creating an LLM workflow that queries a knowledge base in Stack AI. Based on the platform's features, here's a step-by-step approach:
Step 1: Create a New Project
- Click the "New Project" button in the top right corner of your Stack AI dashboard
- Select "Workflow Builder" and click "Create"
- Choose the "Knowledge Base Agent" template or start with a "New Project" template
Step 2: Set Up Your Knowledge Base
- First, create a knowledge base by clicking "Knowledge Bases" in the sidebar
- Click "Create a Knowledge Base"
- Enter a name for your knowledge base and click "Create"
- Upload your documents or connect to external sources like:
- SharePoint
- Google Drive
- Dropbox
- Confluence
- Gmail
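Under the hood, "uploading documents" means the platform chunks and embeds them so queries can be matched by meaning rather than exact keywords. Here is a minimal, purely illustrative sketch of that idea — it uses a toy bag-of-words vector and cosine similarity in place of the dense neural embeddings a real knowledge base uses, and none of these function names come from Stack AI:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector.
    Real knowledge bases use dense neural embeddings instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Uploading documents" amounts to embedding each chunk into an index.
docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday through Friday.",
]
index = [(d, embed(d)) for d in docs]

def semantic_search(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:top_k]]

print(semantic_search("how long do refunds take?"))
# → ['Refunds are processed within 5 business days.']
```

The takeaway: once documents are indexed this way, the knowledge base node can return the most relevant chunks for any user query, which is what feeds the LLM in the next step.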
Step 3: Build Your Workflow
In the Workflow Builder canvas:
Add an Input Node
- Drag an Input node from the sidebar to capture user queries
Add a Knowledge Base Node
- From the sidebar under "Knowledge Bases," drag your created knowledge base into the canvas
- Connect it to receive the user query input
- This will perform semantic search on your documents
Add an LLM Node
- Drag an LLM node from the sidebar
- In the LLM configuration:
- Reference the knowledge base results in your prompt using {knowledge-base-node-name}
- Write a prompt like: "Based on the following context: {knowledge-base-0}, answer the user's question: {input-0}"
Add an Output Node
- Connect an Output node to display the LLM's response
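Conceptually, the four nodes above form a simple pipeline: the query flows into retrieval, the retrieved context is substituted into the prompt template's `{knowledge-base-0}` and `{input-0}` placeholders, and the filled prompt goes to the LLM. The sketch below mimics that wiring with hypothetical stand-in functions (the real retrieval and model calls happen inside Stack AI's nodes):

```python
# Hypothetical stand-ins for the platform's nodes, for illustration only.
def knowledge_base_node(query: str) -> str:
    """Stands in for the Knowledge Base node's semantic search."""
    return "Refunds are processed within 5 business days."

def llm_node(prompt: str) -> str:
    """Stands in for the LLM node's completion call."""
    return f"[LLM answer for prompt: {prompt!r}]"

PROMPT_TEMPLATE = (
    "Based on the following context: {knowledge-base-0}, "
    "answer the user's question: {input-0}"
)

def run_workflow(user_query: str) -> str:
    context = knowledge_base_node(user_query)           # Knowledge Base node
    prompt = (PROMPT_TEMPLATE                           # fill the LLM prompt
              .replace("{knowledge-base-0}", context)
              .replace("{input-0}", user_query))
    return llm_node(prompt)                             # Output node shows this

print(run_workflow("How long do refunds take?"))
```

Note the placeholders are substituted as literal strings here; in Stack AI the workflow engine performs this substitution for you based on the node names.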
Step 4: Configure Advanced Features (Optional)
For Better Accuracy:
- Enable metadata filtering in your knowledge base settings for improved retrieval accuracy
- Use Dynamic Vector Store nodes for in-memory document embeddings if needed
For Conversation Memory:
- Add a Shared Memory node to maintain context across multiple interactions
For Complex Logic:
- Use If/Else nodes or AI Routing nodes for conditional processing
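To make the conditional-processing idea concrete, here is a toy stand-in for what an If/Else or AI Routing node decides: pick a downstream branch based on the query. This keyword check is only a sketch — in Stack AI the routing can be done by an LLM classifier rather than hand-written rules, and the branch names are invented for illustration:

```python
def route(query: str) -> str:
    """Toy router: choose a workflow branch from the query text.
    An AI Routing node would make this decision with an LLM instead."""
    billing_terms = ("refund", "invoice", "billing", "charge")
    if any(term in query.lower() for term in billing_terms):
        return "billing-branch"   # e.g. a knowledge base of billing docs
    return "general-branch"       # fallback path

print(route("Where is my refund?"))   # → billing-branch
print(route("What are your hours?"))  # → general-branch
```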
Step 5: Test and Deploy
- Test your workflow using the chat interface
- Check the Analytics view to monitor performance
- Use the Export view to deploy your agent as a chatbot or API
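Once deployed as an API, the agent is called over HTTP. The sketch below only assembles an illustrative request — the endpoint URL, auth header, and payload keys shown here are placeholders and assumptions; copy the exact snippet (including your project's real input-node id) from the Export view:

```python
import json

# Placeholders: take the real values from your project's Export view.
API_URL = "https://api.stack-ai.com/..."  # illustrative, not the real endpoint
API_KEY = "<your-api-key>"

def build_request(user_query: str) -> tuple[dict, str]:
    """Assemble headers and a JSON body for the deployed workflow.
    The input key ("in-0" here) varies per project; check the Export view."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"in-0": user_query})
    return headers, body

headers, body = build_request("How long do refunds take?")
# Once the placeholders are filled in, send the request with e.g.
# requests.post(API_URL, headers=headers, data=body)
```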
Pro Tips:
- Reference knowledge bases in LLM prompts using "/"
- Use "@" to reference tools in prompts
- Enable bounding boxes for PDF citations to improve accuracy
- Consider using the Prompt Library to store and reuse effective prompts

This approach creates a Retrieval-Augmented Generation (RAG) system where your LLM can query and reason over your knowledge base content to provide accurate, contextual responses.