AI Chatbot Integration
Integrate an intelligent assistant to query Slurm status and generate scripts.
The Slurm Dashboard includes a powerful AI Chatbot that leverages Large Language Models (LLMs) to assist users with cluster management tasks. This feature allows administrators and users to interact with the cluster using natural language.
Features
- Natural Language Queries: Ask about node status, job details, or partition information.
- Script Generation: Automatically generate sbatch scripts based on requirements.
- Context Awareness: The AI understands the specific configuration of your Slurm cluster.
- Interactive UI: A modern chat interface with suggested follow-up questions.
Architecture
The chatbot implementation consists of three main parts:
- API Route: Handles the chat stream and communicates with the LLM provider.
- Server Actions: Executes server-side logic and tools.
- UI Components: Renders the chat interface and messages.
Configuration
1. Environment Setup
The necessary packages are already installed with the application. You only need to configure the environment variables in your .env file to connect to your LLM provider.
# Base URL for the LLM provider (OpenAI compatible)
OPENAI_API_URL="https://api.baseurl.com/v1"
# The main model for chat (e.g., gpt-4o, qwen-2.5-72b)
OPENAI_API_MODEL="llm-model"
# A faster model for generating suggestions (optional)
OPENAI_API_MODEL_SUGGESTION="llm-model"
# Your API key
OPENAI_API_KEY="sk-..."
2. Implementation Details
The core logic is distributed across the following files:
API Route (app/api/chat/route.ts)
This route handles the POST requests from the chat interface. It initializes the OpenAI client and streams the response back to the client.
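As a rough illustration of this flow, the sketch below shows what such a handler might look like. It assumes a Next.js App Router handler that talks to an OpenAI-compatible `/chat/completions` endpoint via plain `fetch`; the `withSystemPrompt` helper and the exact payload shape are illustrative, not the application's actual code.

```typescript
// Hypothetical sketch of app/api/chat/route.ts (illustrative, not the actual implementation).

const SYSTEM_PROMPT = "You are a specialized Slurm HPC assistant.";

export interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Prepend the system prompt unless the client already supplied one.
export function withSystemPrompt(messages: ChatMessage[]): ChatMessage[] {
  if (messages[0]?.role === "system") return messages;
  return [{ role: "system", content: SYSTEM_PROMPT }, ...messages];
}

export async function POST(req: Request): Promise<Response> {
  const { messages } = (await req.json()) as { messages: ChatMessage[] };

  // Forward the conversation to the configured LLM provider with
  // streaming enabled, then pass the streamed body back to the browser.
  const upstream = await fetch(`${process.env.OPENAI_API_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: process.env.OPENAI_API_MODEL,
      messages: withSystemPrompt(messages),
      stream: true,
    }),
  });

  return new Response(upstream.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```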
Server Actions (actions/actions.tsx)
Server actions are used to fetch real-time data from the Slurm cluster (e.g., sinfo, squeue) which can be fed into the LLM as context or used by the LLM via function calling.
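A minimal sketch of such a server action, assuming `sinfo` is available on the dashboard host's PATH; the field list and the `getNodes`/`parseSinfo` names are illustrative, not the actual action code.

```typescript
// Hypothetical server action: fetch live node state to feed the LLM as context.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

export interface NodeInfo {
  name: string;
  partition: string;
  state: string;
}

// Parse the output of `sinfo -N -h -o "%N|%P|%t"`:
// one "node|partition|state" line per node, no header.
export function parseSinfo(output: string): NodeInfo[] {
  return output
    .trim()
    .split("\n")
    .filter((line) => line.includes("|"))
    .map((line) => {
      const [name, partition, state] = line.split("|");
      return { name, partition, state };
    });
}

export async function getNodes(): Promise<NodeInfo[]> {
  const { stdout } = await execFileAsync("sinfo", ["-N", "-h", "-o", "%N|%P|%t"]);
  return parseSinfo(stdout);
}
```

Returning structured objects rather than raw command output makes the data equally usable as LLM context and as props for the rich UI cards.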
UI Components (components/llm/*)
The UI is built with Tailwind CSS and includes features like:
- ChatList (chat-list.tsx): The main container that renders the list of messages.
- Message Components (message.tsx): Reusable components for user and bot messages (UserMessage, BotMessage, BotCard).
- Tool Invocation (tool-invocation.tsx): Handles the rendering of rich UI components for tool outputs (e.g., SlurmNodeDetails, SlurmJobDetails).
- Empty State (empty-state.tsx): The initial view with suggested queries.
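To illustrate the idea behind tool-invocation.tsx, the sketch below maps a tool name from the LLM's function call to the component that renders its output. The tool names, component names, and fallback are hypothetical, not the file's actual contents.

```typescript
// Hypothetical dispatch table in the spirit of tool-invocation.tsx:
// pick a rich UI component based on which tool the LLM invoked.
type ToolName = "getNodeDetails" | "getJobDetails";

const toolRenderers: Record<ToolName, string> = {
  getNodeDetails: "SlurmNodeDetails", // structured node card
  getJobDetails: "SlurmJobDetails",   // structured job card
};

// Fall back to a plain text card for tools without a dedicated component.
export function rendererFor(tool: string): string {
  return toolRenderers[tool as ToolName] ?? "PlainTextCard";
}
```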
Usage
Once configured, the AI Assistant can be accessed via the dashboard interface.

Example Queries
Check Node Status
User: "Show me details for node 'sdg051'"
AI: Returns a structured card showing System Load, Memory, Partitions, and GRES usage.
Generate Scripts
User: "Give me an example sbatch script for a GPU job"
AI: Generates a valid bash script with #SBATCH directives and explains the parameters.
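The response might look like the following sketch. Partition name, module name, and resource values are illustrative and cluster-specific; adjust them to your site.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-job        # Job name shown in squeue
#SBATCH --partition=gpu           # Partition name is cluster-specific
#SBATCH --gres=gpu:1              # Request one GPU
#SBATCH --cpus-per-task=4         # CPU cores for the task
#SBATCH --mem=16G                 # Memory per node
#SBATCH --time=01:00:00           # Wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out        # %x = job name, %j = job ID

module load cuda                  # Site-specific; adjust to your cluster
srun python train.py
```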
Troubleshooting
User: "Why is my job pending?"
AI: Analyzes the job status and explains reasons like "Resources" or "Priority".
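Conceptually, this amounts to translating the squeue REASON field into plain language. A minimal sketch of such a mapping (the `explainPending` helper is hypothetical; the reason codes are standard Slurm values):

```typescript
// Hypothetical helper: turn a squeue REASON code into a plain-language explanation.
const pendingReasons: Record<string, string> = {
  Resources: "The job is waiting for the requested nodes, CPUs, or GPUs to free up.",
  Priority: "Higher-priority jobs are being scheduled ahead of this one.",
  Dependency: "A job this one depends on has not finished yet.",
  QOSMaxJobsPerUserLimit: "The user has reached the QOS limit on concurrent jobs.",
};

export function explainPending(reason: string): string {
  return (
    pendingReasons[reason] ??
    `Unrecognized reason "${reason}"; run scontrol show job for details.`
  );
}
```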
Customization
System Prompt
You can customize the system prompt in app/api/chat/route.ts to adjust the AI's personality or restrict its knowledge scope.
const systemPrompt = `
You are a specialized Slurm HPC (High Performance Computing) assistant.
Your ONLY purpose is to assist users with Slurm workload manager tasks, HPC cluster operations, and related scripting (bash, sbatch, etc.).
CRITICAL SAFETY INSTRUCTIONS:
- You must REFUSE to answer any questions unrelated to Slurm, HPC, Linux, or programming/scripting for HPC environments.
...
`;
Default Suggestions
To update the default suggestions shown in the empty chat state, modify the starters array in components/llm/empty-state.tsx.
const starters = [
  {
    heading: "Node Details",
    message: 'Show me details for node "sc020"',
  },
  {
    heading: "Job Details",
    message: 'Show me details for job "12345"',
  },
  // Add your own suggestions here
];