AI Assistant
The AI Assistant is a built-in chat panel within HarbourBuilder that connects to Ollama for local AI-powered coding help. No API keys, no cloud services, no data leaving your machine — just a local model providing instant assistance while you code.
The AI Assistant communicates only with a local Ollama (or LM Studio) instance running on your machine. Your code, questions, and prompts never leave your computer, which makes it well suited to enterprise environments, proprietary codebases, and developers who value privacy.
Supported Backends
| Backend | Type | API Key | Network |
|---|---|---|---|
| Ollama | Local | None | localhost only |
| LM Studio | Local | None | localhost only |
Model Selector
The AI Assistant includes a model selector dropdown that lists all available models detected from your running Ollama or LM Studio instance. Available options typically include:
| Model | Size | Best For |
|---|---|---|
| llama3 | ~5 GB | General purpose, coding, explanation |
| codellama | ~4 GB | Code generation and refactoring |
| mistral | ~4 GB | Fast responses, coding tasks |
| deepseek-coder | ~4 GB | Code-specific queries |
| phi3 | ~2 GB | Lightweight, fast on modest hardware |
| qwen2.5-coder | ~3 GB | Multi-language code assistance |
Run `ollama list` in your terminal to see installed models, and install new ones with `ollama pull <model-name>`. The AI Assistant auto-detects available models when you open the panel.
Chat Interface
The AI Assistant panel provides a familiar chat experience:
- Chat history — scrollable conversation view with your messages and AI responses.
- Input field — type your question or paste code for analysis.
- Send button — submit your message to the model.
- Streaming responses — watch the response appear token by token.
- Code blocks — generated code is displayed in formatted, syntax-highlighted blocks with a copy button.
- Model selector — switch models without restarting.
- Clear chat — reset the conversation history.
Example Prompts
Here are some effective prompts to try with the AI Assistant:
| Category | Prompt |
|---|---|
| Code generation | "Write a Harbour function that sorts an array using quicksort" |
| Explanation | "Explain how the TTransformer component works" |
| Debugging | "Why am I getting a 'variable not found' error on this line?" |
| Refactoring | "Rewrite this procedural code as a Harbour class" |
| Best practices | "What's the best way to handle database connections in HarbourBuilder?" |
| Learning | "Show me how to use events with a TButton" |
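For the first prompt in the table, a typical answer would be an in-place quicksort over a Harbour array. The sketch below is written by hand to illustrate the expected shape of such a response; it is not actual model output:

```harbour
// In-place quicksort over a Harbour array (Lomuto partition scheme).
function QuickSort( aArr, nLo, nHi )
   local nP

   hb_default( @nLo, 1 )
   hb_default( @nHi, Len( aArr ) )

   if nLo < nHi
      nP := Partition( aArr, nLo, nHi )
      QuickSort( aArr, nLo, nP - 1 )
      QuickSort( aArr, nP + 1, nHi )
   endif

return aArr

static function Partition( aArr, nLo, nHi )
   local xPivot := aArr[ nHi ]   // last element as pivot
   local nI := nLo - 1
   local nJ, xTmp

   for nJ := nLo to nHi - 1
      if aArr[ nJ ] <= xPivot
         nI++
         xTmp := aArr[ nI ]
         aArr[ nI ] := aArr[ nJ ]
         aArr[ nJ ] := xTmp
      endif
   next

   // Move the pivot into its final position
   xTmp := aArr[ nI + 1 ]
   aArr[ nI + 1 ] := aArr[ nHi ]
   aArr[ nHi ] := xTmp

return nI + 1
```

In practice Harbour's built-in ASort() already covers most sorting needs; prompts like this are most useful for learning or when you need custom comparison logic.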
Using TOllama in Your Code
The same TOllama component that powers the AI Assistant is available in your own applications. Here's how to use it:
Basic Usage
```harbour
#include "hbbuilder.ch"

function Main()
   local oOllama, cResponse

   DEFINE OLLAMA oOllama ;
      HOST "localhost" ;
      PORT 11434 ;
      MODEL "llama3"

   cResponse := oOllama:Chat( "What is Harbour programming language?" )
   ? cResponse

return nil
```
Streaming Response
```harbour
static function StreamExample( oOllama, oMemo )
   local cPrompt := "Write a Harbour function to calculate fibonacci"

   oMemo:Append( "Generating...\n\n" )

   // Stream the response token by token into a Memo
   oOllama:ChatStream( cPrompt, ;
      { |cToken| oMemo:Append( cToken ) } )

return nil
```
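The same codeblock callback can also accumulate the full reply as it streams, for example to save or post-process it after display. A minimal sketch reusing the ChatStream() signature shown above (the helper name is illustrative):

```harbour
// Collect streamed tokens into a single string and return it.
static function CollectStream( oOllama, cPrompt )
   local cFull := ""

   // Each token is appended to cFull as it arrives
   oOllama:ChatStream( cPrompt, ;
      { |cToken| cFull += cToken } )

return cFull
```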
Setting a System Prompt
```harbour
oOllama:SetSystem( "You are a Harbour/xBase expert. " + ;
   "Always provide complete, working code examples. " + ;
   "Explain your reasoning clearly." )

cResponse := oOllama:Chat( "How do I read a file line by line?" )
```
Using LM Studio
LM Studio provides a compatible API. Simply point TOllama at the LM Studio server:
```harbour
oOllama := TOllama():New()
oOllama:cHost  := "localhost"
oOllama:nPort  := 1234          // LM Studio default port
oOllama:cModel := "local-model"
```
Privacy Benefits
- No data exfiltration — your source code, business logic, and API keys never leave your machine.
- No API keys to manage — local models require no authentication or billing setup.
- Works offline — once models are downloaded, no internet connection is required.
- No rate limits — unlimited queries with no usage caps or per-token costs.
- Enterprise-safe — compliant with policies that prohibit sending code to external AI services.
The AI Assistant uses TOllama internally. Anything you can do in the assistant panel, you can also do programmatically in your own HarbourBuilder applications. See the AI Integration tutorial for a complete walkthrough.
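As a minimal sketch of that idea, the following console loop wires user input straight to TOllama:Chat(). Only the calls shown in the examples above are used; the loop structure itself is illustrative:

```harbour
#include "hbbuilder.ch"

function Main()
   local oOllama, cInput

   DEFINE OLLAMA oOllama ;
      HOST "localhost" ;
      PORT 11434 ;
      MODEL "llama3"

   oOllama:SetSystem( "You are a Harbour/xBase expert." )

   // Simple console chat: an empty line ends the session
   do while .t.
      ACCEPT "You> " TO cInput
      if Empty( cInput )
         exit
      endif
      ? oOllama:Chat( cInput )
   enddo

return nil
```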
Configuration
Open the AI Assistant panel from the menu Tools > AI Assistant or use the toolbar button. The settings gear icon lets you configure:
| Setting | Description | Default |
|---|---|---|
| Backend | Ollama or LM Studio | Ollama |
| Host | Server hostname or IP | localhost |
| Port | Server port | 11434 |
| Model | Model to use for chat | llama3 |
| Temperature | Creativity (0.0–2.0) | 0.7 |
| Max Tokens | Maximum response length | 2048 |
| System Prompt | Custom system instructions | (empty) |