Posts: 6983
Joined: Fri Oct 07, 2005 07:07 PM
Re: Where do I start with mod_harbour from scratch?
Posted: Wed Oct 15, 2025 07:35 AM
Hi Dutch,
your summary is basically right: with mod_harbour (a Harbour module for a web server) you keep your xBase/Harbour business logic and data access, while the UI (user interface) runs in the browser as HTML (markup) + CSS (styles) + JavaScript (JS, the browser’s programming language).
Before you jump in, a few questions I found useful for my own decision:
Where will the server run?
My answer: Windows only. We need a hybrid mode with our existing desktop apps, shared files, and DBF tables with CDX indexes (DBF = dBASE table file, CDX = compound index), and we want to understand the full path from request to disk.
How much framework do you want?
My answer: as little as possible. The simpler the stack, the easier it is for every developer to read, debug, and change. Fewer external components, more explicit code.
How do frontend and backend talk?
My answer: HTTP (Hypertext Transfer Protocol) + JSON (JavaScript Object Notation), with a thin boundary and clear, documented endpoints.
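To make that "thin boundary" concrete, here is a hedged sketch of the browser side: one helper wrapping fetch, plus a pure envelope parser you can exercise without a server. The endpoint shape and the {ok, data, error} envelope are my own illustration, not a mod_harbour convention.

```javascript
// Every endpoint answers the same JSON envelope: { ok, data?, error? }.
// parseEnvelope is pure, so it can be tested without a running server.
function parseEnvelope(text) {
  const msg = JSON.parse(text);
  if (typeof msg.ok !== "boolean") throw new Error("malformed envelope");
  if (!msg.ok) throw new Error(msg.error || "server error");
  return msg.data;
}

// Single entry point in the frontend keeps the boundary thin and visible.
async function callApi(path, payload) {
  const res = await fetch(path, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return parseEnvelope(await res.text());
}

// Server-less example: parsing a canned response (BNR/NAME are made-up fields).
const rec = parseEnvelope('{"ok":true,"data":{"BNR":1001,"NAME":"Smith"}}');
```

The point of the single callApi helper is that the whole frontend/backend contract lives in one visible place.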
Our decision (and it works well for us):
Backend: a small Harbour microservice (single EXE, executable file) that exposes JSON routes for read/update/locking/audit.
Frontend: plain HTML/JS for the UI.
Bridge (optional): a tiny PHP (Hypertext Preprocessor) layer if/when needed (e.g., same-origin, cookies, small forms). Otherwise the browser talks directly to the microservice.
Exposure: publish via Cloudflared (so we don’t embed TLS/SSL—Transport Layer Security / Secure Sockets Layer; Cloudflare terminates HTTPS).
Why this route?
We reuse our DBF/CDX know-how, implement locking and audit exactly as needed, and keep the server code short and transparent.
Any dev can open one file, see the routes, and understand the request/response flow within minutes.
The browser stays a simple, durable target: HTML/CSS/JS without heavy frameworks.
If you prefer mod_harbour directly inside a web server, the idea is similar: keep your domain logic in Harbour and serve pages or JSON. The trade-off is packaging and deployment style. We like the standalone EXE because runtime path and logs are very easy to reason about.
A practical starter checklist:
Define 5–10 clear JSON endpoints (e.g., /readrecord, /updaterecord, /lock_acquire, /lock_status, /audit_log).
Add small but important hygiene: limits (max header/body), CORS/OPTIONS (Cross-Origin Resource Sharing / HTTP preflight), logging, and a simple /healthz.
Keep the frontend minimal at first (vanilla JS, fetch, a table, a form). Add a component library later only if it clearly saves time.
Decide early how you’ll handle concurrency (record locks, conflict detection) and encoding (UTF-8, Unicode format) in/out.
Automate a few curl (command-line HTTP client) tests so anyone can verify endpoints locally.
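As a sketch of the endpoint checklist above: a flat dispatch table over an in-memory stand-in for a DBF table. The route names follow the list; the field names (BNR, NAME) and the {ok, data, error} envelope are my own illustration, and a real handler would of course lock, audit, and write to disk.

```javascript
// In-memory stand-in for a DBF table, keyed by record number.
const table = new Map([[1001, { BNR: 1001, NAME: "Smith" }]]);

// One flat object of routes: any dev can see every endpoint at a glance.
const routes = {
  "/healthz":    ()  => ({ ok: true, data: "up" }),
  "/readrecord": (q) => {
    const rec = table.get(q.bnr);
    return rec ? { ok: true, data: rec } : { ok: false, error: "not found" };
  },
  "/updaterecord": (q) => {
    const rec = table.get(q.bnr);
    if (!rec) return { ok: false, error: "not found" };
    Object.assign(rec, q.fields); // real code would lock + audit here
    return { ok: true, data: rec };
  },
};

// Central dispatch: unknown paths get a uniform error envelope.
function dispatch(path, query = {}) {
  const handler = routes[path];
  return handler ? handler(query) : { ok: false, error: "unknown route" };
}
```

Because dispatch and the routes are pure functions over a table, the "few curl tests" from the checklist can be mirrored one-to-one as local assertions.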
Short summary:
Yes—your understanding is correct. Keep Harbour for backend logic and data, write the UI in web technologies, and choose the thinnest glue you can live with. For us: Windows + small Harbour microservice + HTML/JS (+ optional PHP) has been the right balance of power, simplicity, and team maintainability.
A note on concurrency
Even with, say, 5 waiter handhelds, 3 reception desks, and 4 back-office stations, true simultaneous write conflicts are rare—operations usually arrive slightly offset in time. If two really do write the same record at the exact same moment, a short wait (<1 s) is perfectly acceptable.
How I implement it:
Pessimistic lock per resource (e.g., BOOKING#BNR=… (booking number)) with a TTL (time to live) of ~60 s and auto-extend while the editor is active.
Idempotent lock-acquire; the second user sees “busy until …” and retries with 200–500 ms backoff.
Fine-grained: never global locks—only the one record.
Extra guard: optimistic check with row-version/updated-at in case someone bypasses the lock.
UX (user experience): small HUD (heads-up display)/badge “locked by … (xx s)”, plus a “notify when free” button.
Server hygiene: 413 for oversized bodies, short timeouts, audit all lock events.
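The lock scheme above can be sketched in a few lines. This is an in-memory illustration of the idea (TTL, idempotent acquire, per-record granularity, optimistic version guard), not my production code; the class and field names are mine.

```javascript
// Per-resource pessimistic lock table with TTL, as described above.
class LockTable {
  constructor(ttlMs = 60000) {
    this.ttlMs = ttlMs;
    this.locks = new Map(); // resource -> { owner, expires }
  }

  // Idempotent: the same owner re-acquiring extends the lock (auto-extend).
  // `now` is injectable so the TTL behavior is testable.
  acquire(resource, owner, now = Date.now()) {
    const cur = this.locks.get(resource);
    if (cur && cur.expires > now && cur.owner !== owner) {
      // Busy: caller should retry with 200-500 ms backoff.
      return { ok: false, busyUntil: cur.expires, owner: cur.owner };
    }
    // Free, expired, or same owner: (re)take the lock.
    this.locks.set(resource, { owner, expires: now + this.ttlMs });
    return { ok: true };
  }

  release(resource, owner) {
    const cur = this.locks.get(resource);
    if (cur && cur.owner === owner) this.locks.delete(resource);
  }
}

// Optimistic second guard: refuse a write whose base row-version is stale.
function checkVersion(record, expectedVersion) {
  return record.version === expectedVersion;
}
```

Note that only the one resource key (e.g., "BOOKING#BNR=1001") is ever locked, never the table, and an expired lock is simply overwritten on the next acquire.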
This keeps the stack simple, collision-resistant, and easy for the whole team to understand—without heavy overhead.
AI-readability (maybe the most important point today)
If you want LLMs (large language models) to help effectively, your source must be self-contained and easy to ingest. Libraries are great, but an LLM can’t reason about code it doesn’t see. Pulling in a whole framework quickly blows the token budget and turns every fix into “please upload more files.”
Why I chose “maximum self”:
Small, flat codebase → the model can read the whole server in one pass and give precise, line-level fixes.
No hidden magic → fewer black boxes; less “I can’t see that dependency, please paste it.”
One place to maintain → no double maintenance of your app + their lib; behavior stays predictable.
Deterministic mental model → easier for humans and AI (artificial intelligence) to follow the request → parse → route → reply flow.
Make code AI-friendly (and human-friendly):
Block markers & index (Harbourino style):
//-- INDEX --// …, -> ROUTER, -> READ_REQUEST, -> LOCKS, etc. The model can jump to the right section; diffs stay tight.
Tiny, named helpers over giant utility crates (one function = one job, ≲60–120 lines).
One-line header per block with What / Inputs / Outputs / Errors.
Stable names: HANDLE_READRECORD, LOCK_TryAcquire, FmtDateOut (no cute aliases).
Minimal dependencies: prefer standard C/Harbour + a few paste-in snippets.
Nearby examples: a couple of curl calls in comments per route.
Plain INI (simple config file) via GetPvProfString()—easy to show the AI and override locally.
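To show what the block-marker + one-line-header convention looks like in practice, here is a tiny illustration in JavaScript (the original idea targets a Harbour source file; the section names and functions here are examples of mine, not a fixed scheme).

```javascript
//-- INDEX --//
//  -> ROUTER
//  -> FMTDATE

//-- ROUTER --//
// What: look up a handler for a path. Inputs: path (string).
// Outputs: handler function or null. Errors: none (null = unknown route).
const HANDLERS = { "/healthz": () => "up" };
function Router(path) {
  return HANDLERS[path] || null;
}

//-- FMTDATE --//
// What: format a Date as YYYY-MM-DD for JSON output. Inputs: Date.
// Outputs: string. Errors: throws on non-Date input.
function FmtDateOut(d) {
  if (!(d instanceof Date)) throw new Error("FmtDateOut: expected Date");
  return d.toISOString().slice(0, 10);
}
```

An LLM (or a new colleague) can grep the INDEX, jump straight to the block it needs, and a patch touches only that block, so diffs stay tight.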
Result: The AI can read the whole microservice, suggest accurate patches, and you stay within token limits without uploading half the internet. That’s why I favor a small Harbour microservice + plain HTML/JS (optional PHP bridge) and avoid heavy frameworks unless they clearly save time on my use case.
Team culture
We should get back to a culture of open exchange: take time to read ideas carefully, try them, measure, and then talk together. That has slipped lately. It’s hard to question things we thought were carved in stone—but that willingness to rethink is what makes us better, technically and humanly. If we treat critique as an invitation and experiments as learning, the whole team wins.
Wishing you the best of luck and success with your decision.
Best regards,
Otto