FiveTech Support Forums

FiveWin / Harbour / xBase community
Posts: 1283
Joined: Fri Feb 10, 2006 02:34 PM
HIX -> Ticket Project (VI) - Load testing
Posted: Fri Oct 31, 2025 02:40 PM
Hi,

This will be the last entry, but it's one of the most important in order to understand the power we have with HIX when creating a website/web service.

Alright, in this chapter we're gonna talk about load and performance testing for our Ticket project. We need to make sure our system can handle traffic spikes when multiple users hit it at once. This'll be more about concepts than code, but I think it's worth spending 10 minutes to read through it.

Our system has to handle situations where more than one user might request a ticket at the exact same millisecond. We're not just looking at how many users are making requests, but whether the system will crash if they all hit it simultaneously.

For this test, we're gonna run 100 requests with 5 concurrent users. What does that mean?
It means we're simulating how your website behaves when several people use it at the same time. Five users might not sound like much, but we're talking about requests being processed in the same millisecond - not just users connecting at different times.
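To make "5 concurrent users" concrete, here's a minimal Python sketch (not HIX code - just an illustration of the idea) that pushes 100 dummy jobs through 5 worker threads and records how many were in flight at once:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# 100 "requests" handled by 5 concurrent "users".
# work() stands in for one ticket request; here it only tracks concurrency.
lock = threading.Lock()
active = 0   # requests currently in flight
peak = 0     # highest concurrency observed
done = 0     # requests completed

def work(_):
    global active, peak, done
    with lock:
        active += 1
        peak = max(peak, active)
    # ... a real test would issue the HTTP request here ...
    with lock:
        active -= 1
        done += 1

with ThreadPoolExecutor(max_workers=5) as pool:
    list(pool.map(work, range(100)))

print(done, peak)  # 100 total requests, never more than 5 at once
```

This is exactly what ab does with `-n 100 -c 5`: a fixed total of requests, with a cap on how many run simultaneously.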


What are we simulating?

Think of your website like a movie theater ticket counter:

• Request: Like someone in line saying "I want a ticket" - it's asking your server to do something
• 100 Requests: Total number of "tickets" we're trying to sell in this test
• User: One person in line
• 5 Concurrent Users: This is the key part - 5 people all asking for things at the same time


The Goal

We're testing:
1. Speed: How fast does the site handle each request with 5 users hammering it?
2. Stability: Does it crash or throw errors under pressure?
3. Data Handling: When 5 people are reading/writing data simultaneously, does our ticket.dbf file get corrupted? The system should handle this cleanly.
4. Locking: As you've seen, issuing a ticket appends and locks a record. The TRlock() function already retries in a loop while the record is held by another user.
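The retry idea behind TRlock() can be sketched in Python - a threading.Lock stands in for the DBF record lock, and the function name, timeout and interval here are illustrative, not HIX's actual API:

```python
import time
import threading

record_lock = threading.Lock()  # stand-in for a DBF record lock

def try_rlock(timeout=1.0, interval=0.01):
    """Keep retrying until the record lock is acquired (the TRlock() idea)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if record_lock.acquire(blocking=False):
            return True          # got the lock: safe to write the record
        time.sleep(interval)     # another writer holds it; wait and retry
    return False                 # gave up after the timeout

ok = try_rlock()
# ... append/update the ticket record here, then commit ...
if ok:
    record_lock.release()
```

The key point is that a locked record is not an error: the writer simply waits a moment and tries again, which is why concurrent appends to ticket.dbf don't corrupt anything.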


The Results

I used ApacheBench for this test. The results are pretty impressive:
Code (text):
Concurrency Level:      5
Time taken for tests:   2.253 seconds
Complete requests:      100
Failed requests:        0
Requests per second:    44.38 [#/sec] (mean)
Time per request:       22.532 [ms] (mean, across all concurrent requests)
The key things: no failures, and each ticket process takes about 22ms. That means our system could handle about 44 requests per second!

Remember - these 22ms aren't for some simple "Hello World" - this includes table I/O, HTML generation, the whole process.
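The headline numbers follow directly from each other; a quick arithmetic check in Python:

```python
# Sanity check of the ApacheBench figures above.
total_requests = 100
total_seconds = 2.253
concurrency = 5

rps = total_requests / total_seconds             # ≈ 44.4 requests/second
mean_ms = total_seconds / total_requests * 1000  # ≈ 22.5 ms per request

print(round(rps, 2), round(mean_ms, 2))
```

So "requests per second" and "time per request (across all concurrent requests)" are two views of the same measurement: total requests divided by total time.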

Here's something interesting: when I tested the same 100 requests with only 1 concurrent user, each request took about 29ms. The concurrent test was actually faster, because Harbour handles multiple threads efficiently - roughly 24% less time per request.


The Basic Flow

1. User clicks "Get Ticket" in browser
2. Browser sends HTTP POST to our web service API
3. Load balancer routes to available application server
4. Server calls proc_ticket(...)
5. System handles tables - checks existence, appends data, updates, commits
6. Process completes
7. Server generates HTML/JSON response
8. Browser receives response
9. Interface updates with ticket confirmation
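The client side of steps 1-3 can be sketched in Python. The /proc_ticket endpoint path, port, and JSON payload here are assumptions for illustration; your HIX routes may differ:

```python
import json
import urllib.request

# Hypothetical client: build the HTTP POST a browser would send (steps 1-3).
def build_ticket_request(base_url, user_name):
    body = json.dumps({"user_name": user_name}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/proc_ticket",          # assumed endpoint path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ticket_request("http://127.0.0.1:9090", "LoadTest")
print(req.get_method(), req.full_url)  # POST http://127.0.0.1:9090/proc_ticket
# urllib.request.urlopen(req) would then send it and return the server's
# HTML/JSON response (steps 7-8).
```

Everything from step 4 onward runs inside the Harbour process; the browser only ever sees the request going out and the response coming back.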


My Take

I've been programming for web environments for years. For the past 5 years we've been building tools to bring Harbour to the web. We have a Harbour server that's packed with features and honestly, I think it can handle 90% of the applications you want to build.

We don't need weird workarounds or other languages (PHP, Python, etc.), and we don't have to process our DBF files differently from our beloved RDD system. Our Harbour setup handles everything perfectly and lets us keep enjoying what we love.

Sure, web development isn't easy, but with AI tools now helping with front-end design, and with the backend/server configuration already solved for us... we're in a pretty good spot.

I think these explanations show that, once you take the first step, the change of environment comes very easily.

That’s all 😊







C.
Salutacions, saludos, regards

"...programar es fácil, hacer programas es difícil..."

UT Page -> https://carles9000.github.io/
Forum UT -> https://discord.gg/bq8a9yGMWh
HIX -> https://github.com/carles9000/hix
Posts: 6983
Joined: Fri Oct 07, 2005 07:07 PM
Re: HIX -> Ticket Project (VI) - Load testing
Posted: Fri Oct 31, 2025 11:24 PM
HIX → Ticket Project (VI) – Real-world load test inside my microservice (xWH-hub)

I integrated the chapter’s load test directly into my Harbour microservice (xWH-hub) and ran it locally. The service appends one record per request to ticket.dbf (append + rlock + unlock + commit). Conclusion up front: the Harbour microservice is genuinely fast and stable under concurrent writes.

Setup (local)

Windows host (local loopback)

Endpoint: /proc_ticket (POST JSON {"user_name":"LoadTest"})

Each request writes 1 record (DBF append + lock + unlock + commit)

Tool: ab 2.3 (ApacheBench)

Sanity check: after successive runs I see 9,600 records in ticket.dbf (matches total requests)

Commands I used
:: Direct to microservice
ab -n 200 -c 5 -p post.json -T application/json http://127.0.0.1:9090/proc_ticket
ab -n 1000 -c 5 -p post.json -T application/json http://127.0.0.1:9090/proc_ticket
ab -n 200 -c 20 -p post.json -T application/json http://127.0.0.1:9090/proc_ticket
ab -n 1000 -c 20 -p post.json -T application/json http://127.0.0.1:9090/proc_ticket

:: End-to-end via a tiny PHP proxy that forwards JSON to the service
ab -n 200 -c 5 -p post.json -T application/json http://localhost/wh24_ZIMMERPLAN/proxy_proc_ticket.php
ab -n 1000 -c 5 -p post.json -T application/json http://localhost/wh24_ZIMMERPLAN/proxy_proc_ticket.php
ab -n 200 -c 20 -p post.json -T application/json http://localhost/wh24_ZIMMERPLAN/proxy_proc_ticket.php
ab -n 1000 -c 20 -p post.json -T application/json http://localhost/wh24_ZIMMERPLAN/proxy_proc_ticket.php

Results (highlights; RPS = requests per second, TPR = mean time per request, p50/p95 = latency percentiles)

Direct /proc_ticket

n=200, c=5 → ~267.19 RPS, TPR≈18.713 ms, p50≈18 ms, p95≈24–25 ms, 0 failed

n=1000, c=5 → ~254.35 RPS, TPR≈19.658 ms, p50≈19 ms, p95≈25 ms, 0 failed

n=200, c=20 → ~266.84 RPS, TPR≈74.952 ms, p50≈73–78 ms, p95≈85–110 ms, 0 failed

n=1000, c=20 → ~256.74 RPS, TPR≈77.899 ms, p50≈77 ms, p95≈88–94 ms, 0 failed

Via PHP proxy (E2E)

n=200, c=5 → ~243.73 RPS, TPR≈20.514 ms, p50≈20–21 ms, p95≈25 ms, 0 failed

n=1000, c=5 → ~229.17 RPS, TPR≈21.817 ms, p50≈21 ms, p95≈33 ms, 0 failed

n=200, c=20 → ~241.45 RPS, TPR≈82.832 ms, p50≈81–83 ms, p95≈95–103 ms, 0 failed

n=1000, c=20 → ~241.49 RPS, TPR≈82.819 ms, p50≈81–85 ms, p95≈95–103 ms, 0 failed

Side-by-side vs. the reference post (c = 5)

Metric                                                    Reference      My direct result
Requests per second                                       44.38 RPS      ≈ 267.19 RPS
Time per request (mean, across all concurrent requests)   22.532 ms      ≈ 3.743 ms (18.713 ÷ 5)
Complete requests                                         100            200
Failed requests                                           0              0
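The speedup figures can be checked with a few lines of Python:

```python
# Ratio of the two direct-test results quoted above.
ref_rps, my_rps = 44.38, 267.19
ref_tpr, my_tpr = 22.532, 3.743  # ms, mean across all concurrent requests

print(round(my_rps / ref_rps, 2))   # ≈ 6.02x throughput
print(round(ref_tpr / my_tpr, 2))  # ≈ 6.02x lower per-request latency
```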
Conclusion

Harbour holds up extremely well under concurrent load with real table I/O. On my setup, the microservice sustains ~250–270 RPS with tight p50 and reasonable p95 latencies, and 0 failures across all runs. The end-to-end path via a slim PHP proxy stays very close to the direct numbers, indicating only minimal overhead. For a compact native stack serving write-heavy requests, these results are more than solid.

Bottom line: you can comfortably say it's about ~6× higher throughput with ~6× lower mean per-request latency ("across all concurrent requests") in the direct test, and ~5.3×–5.5× on the end-to-end path through the PHP proxy.







Posts: 1283
Joined: Fri Feb 10, 2006 02:34 PM
Re: HIX -> Ticket Project (VI) - Load testing
Posted: Sat Nov 01, 2025 08:17 AM
Otto,

HIX doesn't need external programs like PHP to act as an intermediary; it's all Harbour.

Can the other members test your solution?

Could you explain how it works in another thread?

Thanks ! :D

C.
Posts: 6983
Joined: Fri Oct 07, 2005 07:07 PM
Re: HIX -> Ticket Project (VI) - Load testing
Posted: Sat Nov 01, 2025 09:01 AM

Hi Charly,

I shared my tests because I was genuinely surprised by the speed and stability. My approach is mainly about an ADS replacement—the specific web server doesn’t matter; it’s the microservice that counts. The runs were meant as a personal orientation, and I’m very happy with the outcome.

It also shows that you can work effectively with a preprocessor and patcher—in fact, for a first understanding it may even make things simpler.

Have a great holiday,

Otto
