AI Assistant

The AI Assistant is a built-in chat panel within HarbourBuilder that connects to Ollama for local AI-powered coding help. No API keys, no cloud services, no data leaving your machine — just a local model providing instant assistance while you code.

100% Local & Private

The AI Assistant communicates with a local Ollama instance running on your machine. Your code, questions, and prompts never leave your computer. This is ideal for enterprise environments, proprietary codebases, and developers who value privacy.

Supported Backends

| Backend | Type | API Key | Network |
|---|---|---|---|
| Ollama | Local | None | localhost only |
| LM Studio | Local | None | localhost only |

Model Selector

The AI Assistant includes a model selector dropdown that lists all available models detected from your running Ollama or LM Studio instance. Available options typically include:

| Model | Size | Best For |
|---|---|---|
| llama3 | ~5 GB | General purpose, coding, explanation |
| codellama | ~4 GB | Code generation and refactoring |
| mistral | ~4 GB | Fast responses, coding tasks |
| deepseek-coder | ~4 GB | Code-specific queries |
| phi3 | ~2 GB | Lightweight, fast on modest hardware |
| qwen2.5-coder | ~3 GB | Multi-language code assistance |

Model Management

Run ollama list in your terminal to see installed models. Install new models with ollama pull <model-name>. The AI Assistant auto-detects available models when you open the panel.
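The same detection can be done from your own code. As a hedged sketch: `GetModels()` below is a hypothetical method name standing in for whatever model-listing call TOllama actually exposes — check the TOllama reference before relying on it.

```harbour
#include "hbbuilder.ch"

function ShowModels()

   local oOllama, aModels, cModel

   DEFINE OLLAMA oOllama ;
      HOST "localhost" ;
      PORT 11434

   // GetModels() is a hypothetical method; verify the actual
   // model-listing API in the TOllama reference
   aModels := oOllama:GetModels()

   for each cModel in aModels
      ? cModel
   next

return nil
```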

Chat Interface

The AI Assistant panel provides a familiar chat experience: type a question, send it, and the model's reply appears in the conversation view.

Example Prompts

Here are some effective prompts to try with the AI Assistant:

| Category | Prompt |
|---|---|
| Code generation | "Write a Harbour function that sorts an array using quicksort" |
| Explanation | "Explain how the TTransformer component works" |
| Debugging | "Why am I getting a 'variable not found' error on this line?" |
| Refactoring | "Rewrite this procedural code as a Harbour class" |
| Best practices | "What's the best way to handle database connections in HarbourBuilder?" |
| Learning | "Show me how to use events with a TButton" |

Using TOllama in Your Code

The same TOllama component that powers the AI Assistant is available in your own applications. Here's how to use it:

Basic Usage

#include "hbbuilder.ch"

function Main()

   local oOllama, cResponse

   DEFINE OLLAMA oOllama ;
      HOST "localhost" ;
      PORT 11434 ;
      MODEL "llama3"

   cResponse := oOllama:Chat( "What is Harbour programming language?" )

   ? cResponse

return nil

Streaming Response

static function StreamExample( oOllama, oMemo )

   local cPrompt := "Write a Harbour function to calculate fibonacci"

   // Harbour string literals do not interpret "\n"; use hb_eol()
   oMemo:Append( "Generating..." + hb_eol() + hb_eol() )

   // Stream the response token by token into a Memo
   oOllama:ChatStream( cPrompt, ;
      { |cToken| oMemo:Append( cToken ) } )

return nil
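The codeblock passed to ChatStream() receives each token as it arrives, so the same pattern can accumulate the complete response while (or instead of) updating the UI. A minimal sketch using only the ChatStream() call shown above:

```harbour
static function StreamAndCollect( oOllama )

   local cPrompt := "Explain the Harbour preprocessor in two sentences"
   local cFull   := ""

   // Concatenate tokens into a single string as they arrive
   oOllama:ChatStream( cPrompt, ;
      { |cToken| cFull += cToken } )

return cFull
```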

Setting a System Prompt

// The ";" line-continuation must sit outside the string literal,
// so the prompt is built from concatenated segments
oOllama:SetSystem( "You are a Harbour/xBase expert. " + ;
   "Always provide complete, working code examples. " + ;
   "Explain your reasoning clearly." )

cResponse := oOllama:Chat( "How do I read a file line by line?" )
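System prompts can also steer the assistant toward a narrow role. A short sketch using the same SetSystem() and Chat() calls, with an illustrative reviewer persona:

```harbour
// Constrain the model to act as a code reviewer
oOllama:SetSystem( "You are a code reviewer. " + ;
   "Point out bugs and style issues in Harbour code. " + ;
   "Reply with a numbered list only." )

cResponse := oOllama:Chat( "Review: for i := 1 to Len( a ); ? a[ i ]; next" )
```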

Using LM Studio

LM Studio provides a compatible API. Simply point TOllama at the LM Studio server:

oOllama := TOllama():New()
oOllama:cHost := "localhost"
oOllama:nPort := 1234  // LM Studio default port
oOllama:cModel := "local-model"

Privacy Benefits

graph LR
   A["Your Code"] --> B["Ollama / LM Studio\n(on your machine)"]
   B --> C["AI Model\n(local)"]
   C --> D["Response\n(back to you)"]
   style A fill:#3fb950,stroke:#2ea043,color:#0d1117
   style B fill:#58a6ff,stroke:#388bfd,color:#0d1117
   style C fill:#d2a8ff,stroke:#bc8cff,color:#0d1117
   style D fill:#3fb950,stroke:#2ea043,color:#0d1117

AI Assistant vs TOllama Component

The AI Assistant uses TOllama internally. Anything you can do in the assistant panel, you can also do programmatically in your own HarbourBuilder applications. See the AI Integration tutorial for a complete walkthrough.

Configuration

Open the AI Assistant panel from the menu Tools > AI Assistant or use the toolbar button. The settings gear icon lets you configure:

| Setting | Description | Default |
|---|---|---|
| Backend | Ollama or LM Studio | Ollama |
| Host | Server hostname or IP | localhost |
| Port | Server port | 11434 |
| Model | Model to use for chat | llama3 |
| Temperature | Creativity (0.0–2.0) | 0.7 |
| Max Tokens | Maximum response length | 2048 |
| System Prompt | Custom system instructions | (empty) |
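These panel settings presumably have programmatic counterparts on TOllama. The sketch below assumes that: `cHost`, `nPort`, and `cModel` appear earlier in this page, but `nTemperature` and `nMaxTokens` are illustrative names only — verify them against the TOllama reference.

```harbour
oOllama := TOllama():New()
oOllama:cHost  := "localhost"
oOllama:nPort  := 11434
oOllama:cModel := "llama3"

// Illustrative property names -- confirm in the TOllama reference
oOllama:nTemperature := 0.7    // creativity, 0.0-2.0
oOllama:nMaxTokens   := 2048   // maximum response length
```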
