FiveTech Support Forums

FiveWin / Harbour / xBase community
Posts: 6983
Joined: Fri Oct 07, 2005 07:07 PM
FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sat Jan 31, 2026 07:26 PM

Hello friends,

Many years after the first AI examples presented by Antonio Linares in the FiveWin environment, it becomes clear how valuable these early approaches actually were. Especially the simple examples based on perceptrons are excellent for truly understanding artificial intelligence—often better than modern, highly abstracted AI tools that hide many of the underlying mechanisms.

These examples make visible the foundations on which today’s AI systems are built: neurons, weights, activation functions, forward propagation, error calculation, backpropagation, and learning rate. Anyone who has worked through these concepts gains a much clearer mental model than the majority of today’s AI users.

For us as programmers, this understanding is particularly important. Not to build AI systems ourselves, but to realistically assess how AI learns, where its limits are, and why it can sometimes be convincingly wrong. For this reason, I have put together a learning plan that starts exactly at this level, which I am happy to share here.

1st FWH + [x]Harbour 2017 international conference - https://forums.fivetechsupport.com/viewtopic.php?t=33515

Finally, a sincere thank-you to Antonio for his research and development work over the years, and for sharing these ideas with our community—it has had a lasting impact and remains highly relevant today.

Best regards,
Otto

Internal Training Document

Fundamental Understanding of Artificial Intelligence (AI) for Application Developers

Target audience:
Application developers, technical staff, IT-oriented professionals

Prerequisites:
Basic programming knowledge
(variables, loops, classes, functions)

Training goal:
To develop a realistic, technical understanding of AI in order to use it
safely, effectively, and independently in daily work.


---

1. What AI is – and what it is not

1.1 What AI is not

  • No thinking
  • No consciousness
  • No genuine understanding
  • No “knowledge” in the human sense

AI does not decide — it computes probabilities.


---

1.2 What AI actually is

A statistical pattern recognition system
trained on many examples
to determine which output best fits a given context.

AI does not work with rules like classical programs, but with weights.


---

2. The core principle: learning through weight adjustment

To explain this, we use a simplified perceptron model
(implemented in Harbour/FiveWin).

2.1 Simplified perceptron – core idea

Code (harbour):
nSum += aInputs[ n ] * ::aWeights[ n ]

An artificial neuron does exactly one thing:

Input × weight → summed value

The result is just a number — nothing more.


---

2.2 Learning through correction

Code (harbour):
if nSum < nExpectedResult
   ::aWeights[ 1 ] += 0.1
endif

if nSum > nExpectedResult
   ::aWeights[ 1 ] -= 0.1
endif

The learning rule is simple:

  • Result too small → slightly increase the weight
  • Result too large → slightly decrease the weight

That is learning.
No rules, no logic, no understanding.
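The same correction rule can be watched at work in a few lines of Python (a sketch only; the weight, input, and target values are made up for illustration):

```python
# One weight, one input, one target: the perceptron learning rule.
weight = 1.0
x, expected = 2.0, 1.0

for _ in range(100):
    out = x * weight
    if out < expected:
        weight += 0.1      # result too small -> slightly increase the weight
    elif out > expected:
        weight -= 0.1      # result too large -> slightly decrease the weight

# The output now stays close to the target. The model never "understood"
# the relationship between x and expected; it only corrected in small steps.
```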


---

2.3 Key insight

The model does not know why the result is wrong.
It only knows that it is wrong.

This principle applies to all neural networks, including modern AI systems.


---

3. The limits of this model (intentionally)

The perceptron:

  • does not recognize mathematical rules
  • does not understand meaning
  • only approximates values

These limitations are not flaws — they are didactically essential.

Anyone who understands these limits will not overestimate AI.


---

4. Bridge to modern AI systems (LLMs)

4.1 What is different in modern AI models

Perceptron               Modern AI (LLM)
1 weight                 Billions of weights
One number               High-dimensional vectors
Explicit target values   Probabilities
Local training           Training on massive datasets

4.2 What remains the same

The learning principle is identical:
Small adjustments → better statistical fit

A Large Language Model (LLM) is not a different kind of entity,
but an extreme scaling of the same principle.


---

5. Why AI appears “intelligent”

AI has learned:

  • how explanations are structured
  • how technical texts are written
  • what plausible answers look like

However, AI does not verify whether something is true, correct, or applicable.

AI optimizes plausibility, not truth.


---

6. Why prompting works

A prompt:

  • provides context
  • shifts probabilities
  • guides the model in a specific direction

Prompting is not magic — it is:

temporary context training

The clearer the context, the more stable the output.


---

7. Proper use of AI in everyday work

7.1 Suitable use cases

  • Explanations
  • Summaries
  • Structuring text
  • Idea generation
  • High-level code reviews
  • Documentation support

7.2 Critical use cases

  • Numbers and calculations
  • Legal statements
  • Factual claims
  • System-specific assumptions
  • Business decisions

AI provides suggestions — not guarantees.


---

8. Core principles to remember

  1. AI does not understand — it adapts
  2. AI does not think — it evaluates probabilities
  3. Good prompts do not replace expertise
  4. Poor input produces convincing nonsense
  5. Humans remain responsible

---

9. Closing statement of the training

Artificial intelligence is not a replacement for thinking,
but an amplifier of clarity.
Unclear thinking produces well-phrased nonsense.


---

10. Recommended next steps (optional)

  • Deliberate experimentation with prompts
  • Critical evaluation of results
  • Using AI as a tool, not an authority
  • Periodic refresh of technical fundamentals

---

Document status

Internal technical training document
No marketing content, no product references, no certification focus

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 01, 2026 10:14 PM

Training Note

This example is not intended for optimization or production use.
Its sole purpose is to illustrate the fundamental principles:

Learning = adjustment of weights

AI = statistics, not understanding

Increasing size changes complexity, not the underlying principle

/* --------------------------------------------------------------------
   Neural networks for Harbour / FiveWin
   --------------------------------------------------------------------
   Didactic example for understanding the core principles of
   neural networks and learning systems.

   Special thanks to Antonio Linares for his early research and
   development work in the FiveWin community, and for demonstrating
   artificial intelligence concepts long before they became mainstream.
   These early examples still provide an excellent foundation for
   understanding modern AI systems.
   -------------------------------------------------------------------- */

#include "FiveWin.ch"

/* Euler's number, required for the sigmoid activation function.
   This is not about mathematical precision, but about demonstrating
   a smooth, non-linear activation. */
#define M_E   2.71828182845904523536

function Main()

   /* Create a neural network with a fixed topology:
      - 7 input neurons
      - 3 hidden neurons
      - 1 output neuron

      The topology is intentionally small to keep the learning process
      transparent and observable. */
   local oNN := TNeuralNetwork():New( { 7, 3, 1 } )

   local n, m

   /* Training input data.
      Each array represents one input vector.
      In this example, inputs are binary to keep the focus on learning,
      not on data complexity. */
   local aInputs := { { 1, 1, 1, 0, 1, 1, 1 },;
                      { 1, 0, 0, 0, 0, 0, 1 },;
                      { 1, 1, 0, 1, 1, 1, 0 },;
                      { 1, 1, 0, 1, 0, 1, 1 },;
                      { 1, 0, 1, 1, 0, 0, 1 },;
                      { 0, 1, 1, 1, 1, 1, 1 } }

   /* Expected results, one per input vector above.
      These values act as the "teacher" during training.
      The network does not know the rule behind them; it only
      tries to minimize the error. (Note that the sigmoid output
      is bounded to 0..1, so the larger targets can only be
      approximated; this example is purely didactic.) */
   local aExpected := { 0, 1, 2, 3, 4, 5 }

   /* Training loop.
      The network is trained multiple times with the same data.
      This repetition is essential: learning happens gradually
      through small weight adjustments. */
   for m = 1 to 500
      for n = 1 to Len( aInputs )

         /* Forward pass:
            The input is propagated through the network and
            produces an output. */
         oNN:Calculate( aInputs[ n ] )

         /* Backpropagation:
            The error between expected and actual output is used
            to adjust the weights. */
         oNN:BackPropagate( { aExpected[ n ] } )

      next
   next

   /* After training, we test the network with a new input.
      The network now produces a result based on learned weights,
      not on hard-coded rules. */
   oNN:Calculate( { 1, 1, 0, 1, 0, 1, 1 } )

   /* Display the output of the single output-layer neuron.
      This shows the final numerical result of the network. */
   MsgInfo( ATail( oNN:aLayers )[ 1 ]:nOutput )

   /* Optional visualization / inspection of the network object.
      Useful for educational exploration of internal states. */
   XBrowser( oNN )

return nil


/* --------------------------------------------------------------------
   TPerceptron
   --------------------------------------------------------------------
   Represents a single artificial neuron.
   A perceptron has inputs, weights, an output value and an error term.
   -------------------------------------------------------------------- */

CLASS TPerceptron

   DATA aInputs    // Stores the last input values
   DATA aWeights   // Weight factors applied to inputs
   DATA nOutput    // Output after activation function
   DATA nError     // Error value used during backpropagation

   METHOD New( nInputs )
   METHOD Calculate( aInputs )

ENDCLASS


METHOD New( nInputs ) CLASS TPerceptron

   local n

   /* Initialize weight and input arrays.
      All weights start with the same value to keep behavior predictable
      for learning demonstrations. */
   ::aWeights = Array( nInputs )
   ::aInputs  = Array( nInputs )

   for n = 1 to nInputs
      ::aWeights[ n ] = 1
   next

return Self


METHOD Calculate( aInputs ) CLASS TPerceptron

   local nSum := 0, n

   /* Core operation of a neuron:
      Multiply each input by its weight and sum the results.
      This is the entire "thinking" process of a perceptron. */
   for n = 1 to Len( aInputs )
      nSum += aInputs[ n ] * ::aWeights[ n ]
      ::aInputs[ n ] = aInputs[ n ]
   next

   /* Apply activation function.
      The sigmoid introduces non-linearity and keeps output bounded. */
return ( ::nOutput := Sigmoid( nSum ) )


/* Sigmoid activation function.
   Converts a raw sum into a smooth output between 0 and 1. */
function Sigmoid( nValue )
return 1 / ( 1 + M_E ^ -nValue )


/* --------------------------------------------------------------------
   TNeuralNetwork
   --------------------------------------------------------------------
   Represents a simple multi-layer neural network.
   The network is composed of layers, each containing perceptrons.
   -------------------------------------------------------------------- */

CLASS TNeuralNetwork

   DATA aLayers          // Array of layers, each layer is an array of perceptrons
   DATA nLearningRate INIT 0.5  // Controls how strongly weights are adjusted

   METHOD New( aTopology )
   METHOD Calculate( aInputs )
   METHOD BackPropagate( aExpected )

ENDCLASS


METHOD New( aTopology ) CLASS TNeuralNetwork

   local n, m

   /* Create layers based on topology definition.
      Each layer contains a number of perceptrons.
      Each perceptron knows how many inputs it expects. */
   ::aLayers = Array( Len( aTopology ) )

   for n = 1 to Len( aTopology )
      ::aLayers[ n ] = Array( aTopology[ n ] )
      for m = 1 to aTopology[ n ]
         ::aLayers[ n ][ m ] = ;
            TPerceptron():New( If( n == 1, 1, aTopology[ n - 1 ] ) )
      next
   next

return Self


METHOD Calculate( aInputs ) CLASS TNeuralNetwork

   local n, m, i, aOutputs

   /* First layer:
      Each perceptron processes its corresponding input value. */
   for m = 1 to Len( ::aLayers[ 1 ] )
      ::aLayers[ 1 ][ m ]:Calculate( { aInputs[ m ] } )
   next

   /* Subsequent layers:
      Each perceptron receives the outputs of the previous layer. */
   for n = 2 to Len( ::aLayers )
      for m = 1 to Len( ::aLayers[ n ] )
         aOutputs = {}
         for i = 1 to Len( ::aLayers[ n - 1 ] )
            AAdd( aOutputs, ::aLayers[ n - 1 ][ i ]:nOutput )
         next
         ::aLayers[ n ][ m ]:Calculate( aOutputs )
      next
   next

return nil


METHOD BackPropagate( aExpected ) CLASS TNeuralNetwork

   local n, m, i
   local aLastLayer := ATail( ::aLayers )
   local nSum

   /* Output layer error calculation.
      The error is the difference between expected and actual output,
      scaled by the derivative of the sigmoid function. */
   for n = 1 to Len( aLastLayer )
      aLastLayer[ n ]:nError = ;
         ( aExpected[ n ] - aLastLayer[ n ]:nOutput ) * ;
         aLastLayer[ n ]:nOutput * ( 1 - aLastLayer[ n ]:nOutput )

      /* Adjust each weight using learning rate, error and input. */
      for m = 1 to Len( aLastLayer[ n ]:aWeights )
         aLastLayer[ n ]:aWeights[ m ] += ;
            ::nLearningRate * aLastLayer[ n ]:nError * ;
            aLastLayer[ n ]:aInputs[ m ]
      next
   next

   /* Propagate the error backwards through the hidden layers.
      Each neuron receives error in proportion to the weights that
      connect it to the layer above. (This didactic version only
      computes the hidden-layer errors; it does not update the
      hidden-layer weights.) */
   for n = Len( ::aLayers ) - 1 to 1 step -1
      for m = 1 to Len( ::aLayers[ n ] )
         nSum = 0
         for i = 1 to Len( ::aLayers[ n + 1 ] )
            nSum += ;
               ::aLayers[ n + 1 ][ i ]:nError * ;
               ::aLayers[ n + 1 ][ i ]:aWeights[ m ]
         next
         ::aLayers[ n ][ m ]:nError = ;
            ::aLayers[ n ][ m ]:nOutput * ;
            ( 1 - ::aLayers[ n ][ m ]:nOutput ) * nSum
      next
   next

return nil
Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 01, 2026 10:31 PM

After posting this example, I begin to work step by step through the underlying concepts with the help of AI. I would like to share this small learning journey here—maybe it is useful for someone, and maybe it helps save some time. I also hope that others will join in and post their own learning progress.

My starting point was a very simple question:

I run an Ollama server, but I have not spent much time with it yet. As an application developer, I am generally interested in how a Large Language Model actually works: by what principle such systems are able to generate such good answers. I am not looking for details, just a very simple introduction to the core idea.

With all the recent talk about “AI training,” I realized that although I work with AI on a daily basis, it is mostly on the prompt level. I would like to gain a slightly deeper understanding than that of a general user—without going into steep mathematics or academic theory.

This led to the question of whether a classical Harbour/FiveWin example using perceptrons and neural networks is suitable for this purpose. For me, the answer was surprisingly clear: yes—and in many ways even better than what is offered in many modern AI introductions.

The reason is simple. This code shows all the core concepts that really matter:

neurons / perceptrons
weights
activation function (sigmoid)
forward pass
error
backpropagation
learning rate

This is extremely valuable for building a solid mental model. Based on this, I have put together a gentle learning plan that I would like to explore and share step by step.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 01, 2026 10:43 PM

Introduction to the Series

I would like to go through this example from top to bottom, clarifying the concepts that appear step by step. Not in a theoretical way, but exactly in the order in which they arise when reading the code.

My first question when going through the program came up quite quickly:
👉 What exactly is a vector, and why do I need it here?

In short, a vector is nothing more than an ordered list of numbers. You can think of it as an array in which each position represents a specific feature or signal. The code itself has no notion of meaning—it only operates on numbers. We, as humans, assign meaning to these numbers in order to build a mental model.

However, another thought followed fairly quickly:
👉 Is the term “vector” really the most important starting point here?

The more I thought about it, the clearer it became that the neuron is actually the more central concept. A neuron is the active part of the system—the element that reacts, computes, produces an output, and later adapts. The vector is ultimately just the transport mechanism for the signals the neuron operates on.

Seen this way, a very natural starting point emerges:

  • A neuron reacts to multiple inputs
  • Each input has a weight that describes sensitivity
  • All inputs together produce an output
  • If the output is wrong, the neuron is slightly adjusted

Only after this does it make sense to talk about vectors—as a practical way to pass multiple signals together.

With this perspective, I would like to begin and explain the example exactly from this point:
starting with the neuron as the smallest unit of reaction, rather than with abstract mathematical concepts.

The series will be continued.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 01, 2026 10:49 PM

I don’t yet know whether I will fully understand this example in the end—but asking these questions is already part of the learning process.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Thu Feb 05, 2026 11:56 PM

This is exactly the right next step, because this is where things shift from
“theoretical AI” → “aha, this is just business logic with math.”

We take a single neuron and feed it with real DBF fields.
No network, no backpropagation, no magic.


---

🎯 Goal

A single neuron should decide:

“Should this guest receive a discount?”

Output:

  • 0.0 … 0.49 → no
  • 0.5 … 1.0 → yes

---

📂 Example DBF (realistic)

Assume a KUNDEN.DBF with the following fields:

Field         Type   Meaning
STAMMGAST     L      Regular guest
BUCHUNGEN     N      Number of bookings
UMSATZ        N      Total revenue
BESCHWERDEN   N      Number of complaints
VIP           L      VIP flag

---

🧠 Step 1: Fields → Inputs (very important)

A neuron can handle numbers only.

We translate DBF fields into values between 0 and 1:

Code (harbour):
aInputs := { If( STAMMGAST, 1.0, 0.0 ),;        // yes / no
             Min( BUCHUNGEN / 20, 1.0 ),;       // normalized
             Min( UMSATZ / 10000, 1.0 ),;       // normalized
             1 - Min( BESCHWERDEN / 5, 1.0 ),;  // negative → inverted
             If( VIP, 1.0, 0.0 ) }

👉 Extremely important:
The neuron does not know what a “regular guest” is.
It only sees:

{ 1.0, 0.6, 0.7, 1.0, 0.0 }

---

🧠 Step 2: The neuron (same style as the previous code)

Code (harbour):
oNeuron := TPerceptron():New( 5 )

// Weights (you can think of these as importance factors)
// Weights (you can think of these as importance factors)
oNeuron:aWeights := { 0.8,;   // regular guest is important
                      0.4,;   // bookings
                      0.6,;   // revenue
                      0.7,;   // no complaints
                      1.0 }   // VIP is very important

---

🔢 Step 3: Compute with real data

Assume:

Code (text):
STAMMGAST   = .T.
BUCHUNGEN   = 12
UMSATZ      = 7000
BESCHWERDEN = 0
VIP         = .F.

Inputs after transformation:

Code (text):
{ 1.0, 0.6, 0.7, 1.0, 0.0 }

Weighted sum:

1.0 * 0.8  = 0.80
0.6 * 0.4  = 0.24
0.7 * 0.6  = 0.42
1.0 * 0.7  = 0.70
0.0 * 1.0  = 0.00
------------------
Sum        = 2.16

---

🧮 Step 4: Activation function

Code (harbour):
Sigmoid( 2.16 ) ≈ 0.896
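The numbers in steps 3 and 4 can be verified with a few lines of Python, using the same inputs and weights as above (rounding gives 0.897 rather than the truncated 0.896):

```python
import math

inputs  = [1.0, 0.6, 0.7, 1.0, 0.0]   # transformed DBF fields
weights = [0.8, 0.4, 0.6, 0.7, 1.0]   # importance factors

nsum = sum(i * w for i, w in zip(inputs, weights))   # weighted sum
out  = 1 / (1 + math.exp(-nsum))                     # sigmoid activation

print(round(nsum, 2))   # -> 2.16
print(round(out, 3))    # -> 0.897, comfortably above 0.5: give discount
```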

---

Result

Code (text):
Neuron output = 0.896  → give discount

👉 The neuron has decided.

Not intelligent.
Not magical.
Just a weighted rule.


---

🧠 The “aha” moment

This neuron is functionally equivalent to:

Code (harbour):
IF STAMMGAST .AND. UMSATZ > 5000 .AND. BESCHWERDEN = 0
   Rabatt()
ENDIF

BUT:

IF logic         Neuron
hard             soft
black / white    gray areas
many IFs         one compute core
hard to tune     change weights
explicit rules   learnable

👉 A neuron is generalized IF logic.


---

🧠 Most important takeaway

Inputs are translated DBF fields—nothing more.
A neuron is a soft decision rule.


---



---

Where do these values come from?

At this point, the neuron would only see something like:

{ 1.0, 0.6, 0.7, 1.0, 0.0 }

The short answer is:

These values come from a deliberate translation of DBF fields into numbers between 0 and 1.
They do not come from the neuron, not from AI, but from us.

Let’s go through this step by step, directly based on the code.


---

1️⃣ Starting point: real DBF fields

Assume a record in KUNDEN.DBF contains the following values:

Code (text):
STAMMGAST   = .T.
BUCHUNGEN   = 12
UMSATZ      = 7000
BESCHWERDEN = 0
VIP         = .F.

These are business data, not AI data.


---

2️⃣ The translation rule (defined by us)

In the code, we explicitly define this mapping:

Code (harbour):
aInputs := { If( STAMMGAST, 1.0, 0.0 ),;
             Min( BUCHUNGEN / 20, 1.0 ),;
             Min( UMSATZ / 10000, 1.0 ),;
             1 - Min( BESCHWERDEN / 5, 1.0 ),;
             If( VIP, 1.0, 0.0 ) }

👉 This is not AI logic.
👉 This is domain knowledge.


---

3️⃣ Calculating the values step by step

Input 1 – STAMMGAST

Code (harbour):
If( .T., 1.0, 0.0 ) → 1.0

---

Input 2 – BUCHUNGEN = 12

Code (harbour):
12 / 20 = 0.6
Min( 0.6, 1.0 ) → 0.6

---

Input 3 – UMSATZ = 7000

Code (harbour):
7000 / 10000 = 0.7
Min( 0.7, 1.0 ) → 0.7

---

Input 4 – BESCHWERDEN = 0

Code (harbour):
0 / 5 = 0.0
Min( 0.0, 1.0 ) → 0.0
1 - 0.0 → 1.0

👉 No complaints = positive signal


---

Input 5 – VIP = .F.

Code (harbour):
If( .F., 1.0, 0.0 ) → 0.0

---

4️⃣ Final input vector

Putting everything together:

Code (text):
{ 1.0, 0.6, 0.7, 1.0, 0.0 }

👉 This is exactly the vector the neuron sees.
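The whole field-to-vector translation can be sketched in a few lines of Python, mirroring the Harbour mapping above (the function name is mine, not part of the original code):

```python
def to_inputs(stammgast, buchungen, umsatz, beschwerden, vip):
    # Deliberate translation of DBF fields into values between 0 and 1.
    return [
        1.0 if stammgast else 0.0,          # yes / no
        min(buchungen / 20, 1.0),           # normalized
        min(umsatz / 10000, 1.0),           # normalized
        1 - min(beschwerden / 5, 1.0),      # negative signal -> inverted
        1.0 if vip else 0.0,
    ]

print(to_inputs(True, 12, 7000, 0, False))
# -> [1.0, 0.6, 0.7, 1.0, 0.0]
```

All the domain knowledge sits in this function; the neuron only ever sees the resulting list of numbers.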


---

5️⃣ Key didactic point (important)

The neuron did not create these numbers.
It does not know what they mean.
They are handed to it fully prepared.

All the “intelligence” in this step lies in:

  • choosing the fields
  • normalizing the values
  • deciding the sign and direction (e.g., inverting complaints)

---

6️⃣ Why normalization matters so much

A neuron can only react meaningfully if:

  • all inputs are on a comparable scale
  • no single field dominates the others

That’s why:

  • 12 bookings → 0.6
  • 7000 revenue → 0.7
  • 0 complaints → 1.0

---

7️⃣ Key takeaway for the series

The neuron does not create meaning.
It receives meaning encoded as numbers.

Or even more concise:

Vectors are designed.
Weights are learned.

The series will be continued.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 08, 2026 07:39 PM

From a Business Question to an LLM Answer

“Why an LLM appears to answer questions”

Goal of this learning unit

After this unit, it should be clear:

Why an LLM produces text in response to a question,
even though—just like a single neuron—
it actually only performs evaluations.


---

1️⃣ Starting point (intentionally repeated)

We already understand this pattern:

A single neuron:

  • receives numbers
  • computes a score
  • produces no decision, only a value
Code (text):
Score = 0.896

The decision is made outside the neuron.

👉 This principle remains fully intact.
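In code, the separation looks like this (a trivial Python sketch; the 0.896 is the score from the discount example):

```python
score = 0.896                       # the neuron only produced this number

# The decision rule lives OUTSIDE the neuron:
decision = "yes" if score >= 0.5 else "no"
print(decision)   # -> yes
```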


---

2️⃣ The crucial shift with LLMs

Inside an LLM, the internal question is not:

“What is the correct answer?”

It is always:

“Which next token fits best right now?”

This is the key mental shift.


---

3️⃣ A question is not answered — it is continued

You write:

“Should this guest be offered a discount?”

Internally, the LLM does not produce:

  • yes / no
  • true / false
  • a decision

Instead, it does this:

Code (text):
Context → evaluation → next token

For example:

Code (text):
"The"        Score 0.81  ← selected
"Yes"        Score 0.43
"Maybe"      Score 0.12

👉 One token is selected.
Then the entire process repeats.


---

4️⃣ The LLM process step by step

Step A – Text

Code (text):
“Should this guest be offered a discount?”

Step B – Tokens

Code (text):
["Should", " this", " guest", " be", " offered", " a", " discount", "?"]

Step C – Tokens → vectors

(each token becomes numbers)

Step D – Evaluation by many neurons

(each possible continuation gets a score)

Step E – Token selection

(not necessarily the top one, but a very plausible one)

Step F – Repeat

The text is now longer, the context richer.
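Steps D and E can be sketched with the made-up scores from the example; softmax is the usual way raw scores become a probability distribution to sample from:

```python
import math

def softmax(scores):
    # Turn raw token scores into probabilities that sum to 1.
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

scores = {"The": 0.81, "Yes": 0.43, "Maybe": 0.12}   # scores from step D
probs  = softmax(scores)

# Sampling usually picks a *plausible* token, not always the top one;
# here we simply take the most likely:
print(max(probs, key=probs.get))   # -> The
```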


---

5️⃣ Why this looks like an “answer”

Because during training the model learned:

  • how answers usually start
  • how explanations are structured
  • how humans argue
  • how responses typically end

It did not learn:

  • whether something is true
  • whether it fits your system
  • whether it is complete

👉 Plausibility beats truth.


---

6️⃣ The direct comparison

Single Neuron      LLM
Discount score     Token score
Threshold (0.5)    Selection rule
Yes / No           Word
Once               Millions of times
Decision outside   Interpretation outside

👉 An LLM is a token-scoring machine.


---

7️⃣ The key takeaway of this unit

An LLM does not answer questions.
It continues text—very skillfully.

Or even more directly:

LLMs generate answers by repeatedly choosing
the next plausible step.


---

8️⃣ Why this understanding matters

Once you understand this:

  • you stop overestimating AI
  • hallucinations make sense
  • good vs. bad prompts become obvious
  • you know where to intervene

👉 Prompting = context control,
not asking better questions.


---

9️⃣ Closing this learning unit

What looks like “understanding”
is actually continuous evaluation.

And that is something you already know—
from a single neuron.


---

The series will be continued.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 08, 2026 07:50 PM

---

The short, precise answer upfront

“banana” does not get a score because someone assigned one to it.
The score is computed dynamically from millions of learned relationships.

And no:

  • there is no fixed table
  • no global value like "banana" = 0.001
  • no manual assignment

Now step by step.


---

1️⃣ Why does “banana” get a score at all?

When the model asks:

“Which token fits best right now?”

this technically means:

For every possible token, a score is computed
describing how well it fits the current context.

Including:

  • "banana"
  • "quantum"
  • "yesterday"

👉 No token is ignored.
But most receive extremely low scores.


---

2️⃣ Does “banana” have millions of features?

Yes — but not in the intuitive, human sense.

“banana” is not:

  • a word with properties
  • an object with knowledge
  • a list of semantic attributes

“banana” is:

a high-dimensional numeric vector
(e.g. 768 or 4096 numbers)

For example (highly simplified):

banana → { -0.2, 1.7, 0.01, -3.4, ..., 0.8 }

👉 These numbers have no names.
👉 No dimension is labeled “fruit”, “yellow”, or “food”.
👉 Meaning emerges only from relative position in vector space.


---

3️⃣ How is the score computed?

The current context:

Code (text):
"Should this guest be offered a discount?"

is also compressed into a context vector:

context → { 0.9, -1.2, 2.1, ..., -0.4 }

Now exactly what you already know from your neuron happens:

Two vectors are compared.

Very roughly:

score = similarity( context_vector, token_vector )

  • "It" → high similarity
  • "depends" → very high similarity
  • "banana" → almost no similarity

👉 banana is not excluded — it simply loses.
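A toy version of this comparison, with invented 3-dimensional vectors (real models use hundreds or thousands of dimensions, and learn the values during training):

```python
context = [0.9, -1.2, 2.1]             # invented context vector

token_vectors = {                      # invented token vectors
    "depends": [0.8, -1.0, 2.0],
    "It":      [0.5, -0.5, 1.0],
    "banana":  [-2.0, 1.5, -0.3],
}

def dot(a, b):
    # Same principle as: nSum += aInputs[ n ] * ::aWeights[ n ]
    return sum(x * y for x, y in zip(a, b))

# Every token gets a score; "banana" is not excluded, it simply loses.
scores = {tok: dot(context, vec) for tok, vec in token_vectors.items()}
```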


---

4️⃣ Why does “banana” fit so poorly?

Not because of meaning in the human sense, but because of statistics during training.

During training, the model observed:

  • "guest", "discount", "offered"
    often appear together with
  • "policy"
  • "decision"
  • "depends"
  • "criteria"

But almost never with:

  • "banana"

👉 The vectors are far apart in space.


---

5️⃣ Important: there is no “banana neuron”

No neuron represents “banana”
No neuron knows “fruit”
No neuron asks “Does this make sense?”

Instead:

Millions of neurons respond weakly or strongly
to numeric patterns.

The numeric pattern of "banana" simply does not align with the numeric pattern of the context.


---

6️⃣ Your central question, answered precisely

“Is ‘banana’ represented by millions of features in neurons, and we just select the ones that fit the question?”

Almost — small correction:

  • ✔ “banana” has a high-dimensional vector
  • ✔ the context has a vector
  • ✔ they are compared
  • nothing is actively searched
  • nothing is filtered beforehand

👉 Everything is evaluated simultaneously.


---

7️⃣ Exact parallel to your neuron

Your code:

Code (harbour):
nSum += aInputs[ n ] * ::aWeights[ n ]

LLM internally (highly simplified):

Code (text):
score("banana")  = context · vector("banana")
score("depends") = context · vector("depends")

👉 Same principle.
Just:

  • larger
  • massively parallel
  • repeated thousands of times per answer

---

8️⃣ The key sentence that resolves everything

A token does not get a score
because it means something,
but because its numeric pattern
matches the context more or less well.

Or even clearer:

The model does not know that “banana” is wrong.
It only detects: “This fits extremely poorly here.”


---

9️⃣ Why this unlocks everything else

Once you understand this:

  • hallucinations make sense
  • prompt control becomes clear
  • context weighting is obvious
  • “convincing nonsense” is explained

Because:

Plausibility replaces truth.

Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Sun Feb 08, 2026 07:53 PM

Short answer: No — that is only one special interpretation.
And the confusion is completely understandable.


---

The precise answer

A vector is not inherently a start/end point with direction in 3D space.
That is only one possible representation.


---

Why we often think of arrows in space

In school, vectors are usually introduced as:

  • arrows
  • start point and end point
  • direction and magnitude
  • typically in 2D or 3D

For example:

(3, 2, 1)

👉 This is a geometric vector.
👉 Very useful in physics and mechanics.

But it is not the general definition.


---

The general (correct) definition

A vector is an ordered list of numbers.

Nothing more.

For example:

Code (text):
{ 1.0, 0.6, 0.7, 1.0, 0.0 }

  • no start point
  • no end point
  • no physical direction
  • no geometry

👉 Just numbers in a fixed order.


---

So where does “direction” come from?

“Direction” only appears through interpretation.

Example:

Code (text):
{ 180, 80, 45 }

Could mean:

  • height
  • weight
  • age

Or:

Code (text):
{ 0.8, 0.4, 0.6, 0.7, 1.0 }

Could mean:

  • importance of regular guest
  • bookings
  • revenue
  • complaints
  • VIP

👉 The numbers get meaning only because we assign it.


---

Why people still talk about “direction”

Mathematically, any vector can be treated as a point in a space:

  • 2 numbers → point in 2D space
  • 3 numbers → point in 3D space
  • 5 numbers → point in 5D space
  • 768 numbers → point in 768D space

👉 You can no longer visualize this directly,
but mathematically it works the same way.
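This dimension independence is easy to see in code: the distance formula does not change, only the length of the list does. A minimal Python sketch (the 5-number vector reuses the example from above):

```python
import math

def distance(p, q):
    # Euclidean distance: the identical formula for 2, 5 or 768 numbers
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# 2 numbers -> point in 2D space, easy to picture
d2 = distance([0, 0], [3, 4])   # 5.0

# 5 numbers -> point in 5D space: no picture possible, same mathematics
d5 = distance([1.0, 0.6, 0.7, 1.0, 0.0],
              [0.8, 0.4, 0.6, 0.7, 1.0])
```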


---

What this means for AI / LLMs

In AI, this is the key idea:

A vector is a point in a high-dimensional space.

  • closeness = similarity
  • distance = difference
  • direction = change

But:

  • no physical space
  • no real arrows
  • no visible geometry
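These three interpretations can be shown with toy numbers. The guest profiles below are invented for illustration, following the 5-value example used earlier in this thread:

```python
# Hypothetical 5-number guest profiles:
# (regular guest, bookings, revenue, complaints, VIP)
a = [1.0, 0.6, 0.7, 1.0, 0.0]
b = [1.0, 0.7, 0.8, 1.0, 0.0]   # almost the same guest
c = [0.0, 0.1, 0.1, 0.0, 0.0]   # a very different guest

def distance(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

# closeness = similarity: a is much closer to b than to c
similar = distance(a, b) < distance(a, c)   # True

# direction = change: the difference vector says WHAT changed,
# component by component (here: slightly more bookings and revenue)
change = [y - x for x, y in zip(a, b)]
```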

---

Back to your example

Your input:

Code (text):
{ 1.0, 0.6, 0.7, 1.0, 0.0 }

This is:

  • not an arrow
  • not a direction
  • not a visible location

It is simply:

A state that describes multiple properties at once.


---

Why it is still called a “vector”

Because you can do math with it:

  • add
  • multiply
  • compare
  • measure similarity

👉 Vector = a computable bundle of values
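The "computable bundle" idea as a minimal Python sketch, with invented numbers, showing each of the four operations from the list:

```python
a = [1.0, 0.6, 0.7]
b = [0.2, 0.4, 0.3]

# add: combine two bundles of values, component by component
total = [x + y for x, y in zip(a, b)]

# multiply: scale every component by the same factor
doubled = [2 * x for x in a]

# compare / measure similarity: the dot product grows
# when the components of both bundles agree
similarity = sum(x * y for x, y in zip(a, b))
```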


---

The key takeaway

A vector is not a geometric object.
It can be interpreted geometrically.

Or even clearer:

In AI, vectors are states, not arrows.


---

One last analogy (often helps)

  • Array → memory structure
  • Vector → array with meaning and algebra

---
Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Mon Feb 09, 2026 07:11 AM

Learning Unit: Vectors Are Not Constants

Naive question (but essential)

Are vectors created only at request time,
or do words have fixed, absolute values?


---

Short answer

Words do not have fixed vector values.
Vectors are created anew for each request.

This is not a minor detail, but a core principle of modern LLMs.


---

Why this question naturally arises

It is common to assume:

“The meaning of a word must be stored somewhere.”

This assumption typically comes from:

  • school-level vector concepts (fixed arrows)
  • older NLP systems
  • classical data models

The assumption is understandable, but incorrect for LLMs.


---

How modern LLMs actually work

Question

Does each word have a fixed vector?

Answer:
No.

Each token has only a base representation:
a kind of initial state without meaning.


---

Question

When does a word acquire meaning?

Answer:
Only through context.

The same token:

  • bank of the river
  • go to the bank

results in different vectors each time,
even though the token itself is identical.
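A toy Python sketch of this effect, with invented 2-number base vectors ("river-ness", "money-ness") and contextualization reduced to a simple average of neighboring vectors. Real models do this with attention, but the outcome is the same in spirit: one base vector, two different contextual vectors.

```python
# Invented 2-number base vectors: (river-ness, money-ness)
base = {
    "bank":  [0.5, 0.5],   # ambiguous on its own
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
}

def contextual(token, neighbor):
    # toy "contextualization": average the token's base vector
    # with the base vector of its neighbor
    return [(a + b) / 2 for a, b in zip(base[token], base[neighbor])]

print(contextual("bank", "river"))   # [0.75, 0.25] -> leans toward the river sense
print(contextual("bank", "money"))   # [0.25, 0.75] -> same token, different vector
```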


---

Question

When are these vectors created?

Answer:
At every request.

For each prompt:

  • tokens are generated
  • context is built
  • vectors are recomputed

---

What happens technically (without mathematics)

Question

What is the internal process at a high level?

Answer:

  1. Text is split into tokens
  2. Tokens receive base vectors
  3. Attention integrates context
  4. Context-dependent vectors emerge
  5. Further computation is performed using these vectors

👉 Steps 3–4 occur every time, from scratch.
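The five steps above can be sketched as a toy Python pipeline. Everything here is invented for illustration: the whitespace tokenizer, the 2-number embeddings, and an attention step without learned weight matrices. Real models use subword tokenizers, hundreds of dimensions, and trained projections, but the flow is the same.

```python
import math

# Step 1: text is split into tokens (naive whitespace split for this sketch)
tokens = "go to the bank".split()

# Step 2: tokens receive base vectors (invented 2-number embeddings)
base = {"go": [0.2, 0.6], "to": [0.1, 0.1], "the": [0.1, 0.2], "bank": [0.5, 0.5]}
vecs = [base[t] for t in tokens]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys):
    # Steps 3-4: attention integrates context.
    # Softmax over dot products gives weights; the new, context-dependent
    # vector is a weighted average of all vectors in the prompt.
    scores = [math.exp(dot(query, k)) for k in keys]
    total = sum(scores)
    weights = [s / total for s in scores]
    return [sum(w * k[i] for w, k in zip(weights, keys))
            for i in range(len(query))]

# Step 5: further computation uses these recomputed, context-dependent vectors
contextual = [attend(v, vecs) for v in vecs]
```

Nothing from a previous prompt survives: `contextual` is rebuilt from scratch on every call.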


---

The correct mental model

Question

Is a word a fixed point in space?

Answer:
No.

The correct model is:

A word is a movable point
whose position shifts depending on its surroundings.


---

Why “banana” still feels stable

Question

Why does “banana” still seem to mean the same thing?

Answer:
Because similar contexts produce similar vectors—
not because the vector itself is fixed.

Stability arises from:

  • statistical regularities
  • repeated usage
  • similar environments

---

Connection to the neuron example

Question

How does this relate to the neuron model?

Answer:

Known pattern:

DBF fields → normalized values → evaluation

LLM pattern:

Token → context → new vector → evaluation

👉 A vector represents a state, not a stored value.
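The two patterns side by side, as a minimal Python sketch in the spirit of the perceptron examples from this thread. The field names, maxima, and weights are all invented:

```python
# Known pattern: DBF fields -> normalized values -> evaluation
record = {"BOOKINGS": 12, "REVENUE": 3500.0, "COMPLAINTS": 1}

# normalize raw field values into the 0..1 range (assumed maxima)
x = [record["BOOKINGS"] / 20,
     record["REVENUE"] / 5000.0,
     record["COMPLAINTS"] / 10]

# one neuron: weighted sum plus threshold, exactly as in the
# TPerceptron examples; the vector x is a state, not a stored value
weights = [0.5, 0.4, -0.3]
score = sum(w * v for w, v in zip(weights, x))
is_vip = score > 0.5
```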


---

Prompt refinement

Question

Do vectors change when the prompt is refined?

Answer:
Yes—all relevant vectors change.

A prompt is not a question to the model, but:

an instruction to construct a new context state

Therefore:

Different prompt → different state → different response

---

What does NOT happen

Question

Does the model learn during prompting?

Answer:
No.

  • no weights are updated
  • nothing is stored
  • no permanent learning occurs

All changes are temporary and apply only to the current request.


---

Core takeaway

Words have no fixed meanings.
Vectors have no fixed values.
Meaning emerges at the moment of the request.

Or more concisely:

Vectors are contextual states, not constants.


---

Why this learning unit matters

Understanding this makes it possible to:

  • understand attention
  • understand hallucinations
  • understand prompt sensitivity
  • understand why small prompt changes have large effects
Re: FW class: TPerceptron and TNeuralNetwork – understanding AI basics
Posted: Mon Feb 09, 2026 07:14 AM

This documentation is created with ChatGPT acting as a teacher, while the author works through and tries to understand the example originally provided by Antonio. The goal is not to present finished knowledge, but to document the learning process step by step as understanding develops.
