rhlawek wrote:I've been looking for some old source code to prove it to myself but this looks very similar to what I was taught as Predictor/Corrector methods back in the mid-80s
Yes, it's a very old concept. But still interesting.
EMG
rhlawek wrote:I've been looking for some old source code to prove it to myself but this looks very similar to what I was taught as Predictor/Corrector methods back in the mid-80s
Pedro Domingos names them "learners": software that can "learn" from data.
The simplest way of learning from data is comparing two bytes. How? By subtracting them: zero means they are equal, anything different from zero means they differ.
The difference between them is the "error". To correct the error, we modify a "weight". It's amazing what can be built from that simple concept, in the same way that all our software technology comes from a bit being zero or one.
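A minimal sketch in Harbour of that error-and-weight idea; the names ( nExpected, nActual, nWeight ) and the 0.1 learning step are illustrative, not taken from the classes shown later:

function Main()
   local nExpected := 10, nActual := 7
   local nError    := nExpected - nActual   // zero would mean "equal"
   local nWeight   := 0.5
   nWeight += 0.1 * nError                  // correct the weight using the error
   ? nError, nWeight                        // 3, 0.8
return nil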
The perceptron mimics, in a very simple way, the behavior of a brain neuron: the neuron receives several inputs, each one has a weight (stored at the neuron), and the sum of all those inputs times their weights may or may not fire an output.
Backpropagation helps to fine-tune those weights, so the perceptron "adjusts" itself to the right weight for each input and produces the expected output.
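To make the weighted sum concrete, here is a small sketch of a neuron's firing rule in Harbour; Fire() and nThreshold are illustrative names, not part of the classes below:

function Fire( aInputs, aWeights, nThreshold )
   local n, nSum := 0
   // weighted sum: each input times its stored weight
   for n = 1 to Len( aInputs )
      nSum += aInputs[ n ] * aWeights[ n ]
   next
return If( nSum > nThreshold, 1, 0 )   // 1 = fires, 0 = stays silent

For example, Fire( { 1, 0.5 }, { 0.8, 0.4 }, 0.9 ) returns 1, as 1 * 0.8 + 0.5 * 0.4 = 1.0 exceeds the 0.9 threshold.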
AI is already everywhere and it is going to greatly change our lives and the way software is developed.
David Miller's C++ code ported to Harbour:
viewtopic.php?p=202115#p202115
Don't miss trying your first neural network!
#include "FiveWin.ch"

function Main()
   local oNet := TNet():New( { 1, 2, 1 } )  // 1 input, 2 hidden, 1 output neuron
   local x

   // train: feed a random value and backpropagate the expected result
   // ( 5 when the value is a multiple of 5, 1 otherwise )
   while oNet:nRecentAverageError < 0.95
      oNet:FeedForward( { x := nRandom( 1000 ) } )
      oNet:Backprop( { If( x % 5 == 0, 5, 1 ) } )
   end

   oNet:FeedForward( { 15 } )  // query the trained network

   // browse each neuron's weights and outputs, one column per layer
   XBROWSER ArrTranspose( { { "Layer 1, 1st neuron" + CRLF + "Input: " + Str( oNet:aLayers[ 1 ][ 1 ]:nOutput ) + ;
                             CRLF + "Weight 1: " + Str( oNet:aLayers[ 1 ][ 1 ]:aWeights[ 1 ], 4, 2 ) }, ;
                           { "Layer 2, 1st neuron" + CRLF + "Weight 1: " + Str( oNet:aLayers[ 2 ][ 1 ]:aWeights[ 1 ] ) + ;
                             CRLF + "Output: " + Str( oNet:aLayers[ 2 ][ 1 ]:nOutput ), ;
                             "Layer 2, 2nd neuron" + CRLF + "Weight 1: " + Str( oNet:aLayers[ 2 ][ 2 ]:aWeights[ 1 ] ) + ;
                             CRLF + "Output: " + Str( oNet:aLayers[ 2 ][ 2 ]:nOutput ) }, ;
                           { "Layer 3, 1st neuron" + CRLF + "Weight 1: " + Str( oNet:aLayers[ 3 ][ 1 ]:aWeights[ 1 ] ) + ;
                             CRLF + "Weight 2: " + Str( oNet:aLayers[ 3 ][ 1 ]:aWeights[ 2 ] ) + ;
                             CRLF + "Output: " + Str( oNet:aLayers[ 3 ][ 1 ]:nOutput ) } } ) ;
      SETUP ( oBrw:nDataLines := 4, ;
              oBrw:aCols[ 1 ]:nWidth := 180, ;
              oBrw:aCols[ 2 ]:nWidth := 180, ;
              oBrw:aCols[ 3 ]:nWidth := 180, ;
              oBrw:nMarqueeStyle := 3 )

return nil

#include "FiveWin.ch"
function Main()
   local oNeuron := TPerceptron():New( 1 )  // a perceptron with one input
   local n, nValue

   // train with 50 random samples and their expected results
   for n = 1 to 50
      oNeuron:Learn( { nValue := nRandom( 1000 ) }, ExpectedResult( nValue ) )
   next

   MsgInfo( oNeuron:aWeights[ 1 ] )        // the learned weight, close to 2
   MsgInfo( oNeuron:Calculate( { 5 } ) )   // should be close to 10

return nil

// the function the perceptron has to learn: doubling its input
function ExpectedResult( nValue )
return nValue * 2
CLASS TPerceptron

   DATA aWeights   // one weight per input

   METHOD New( nInputs )
   METHOD Learn( aInputs, nExpectedResult )
   METHOD Calculate( aInputs )

ENDCLASS
METHOD New( nInputs ) CLASS TPerceptron

   local n

   ::aWeights = Array( nInputs )
   for n = 1 to nInputs
      ::aWeights[ n ] = 0   // start with all weights at zero
   next

return Self
METHOD Learn( aInputs, nExpectedResult ) CLASS TPerceptron

   local nSum := ::Calculate( aInputs )

   // correct the weight a small step in the direction of the error
   if nSum < nExpectedResult
      ::aWeights[ 1 ] += 0.1
   endif
   if nSum > nExpectedResult
      ::aWeights[ 1 ] -= 0.1
   endif

return nil
METHOD Calculate( aInputs ) CLASS TPerceptron

   local n, nSum := 0

   // weighted sum of the inputs times their weights
   for n = 1 to Len( aInputs )
      nSum += aInputs[ n ] * ::aWeights[ n ]
   next

return nSum

Test of scaling and descaling values:
Scaling: ( Value - Minimum ) / ( Maximum - Minimum ). A code sketch for scaling and descaling follows the table.
0 --> ( 0 - 0 ) / ( 9 - 0 ) --> 0
1 --> ( 1 - 0 ) / ( 9 - 0 ) --> 0.111
2 --> ( 2 - 0 ) / ( 9 - 0 ) --> 0.222
3 --> ( 3 - 0 ) / ( 9 - 0 ) --> 0.333
4 --> ( 4 - 0 ) / ( 9 - 0 ) --> 0.444
5 --> ( 5 - 0 ) / ( 9 - 0 ) --> 0.555
6 --> ( 6 - 0 ) / ( 9 - 0 ) --> 0.666
7 --> ( 7 - 0 ) / ( 9 - 0 ) --> 0.777
8 --> ( 8 - 0 ) / ( 9 - 0 ) --> 0.888
9 --> ( 9 - 0 ) / ( 9 - 0 ) --> 1
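A minimal Harbour sketch of the scaling formula and its inverse ("descaling"); Scale() and Descale() are illustrative names, assuming Minimum < Maximum:

function Scale( nValue, nMin, nMax )
return ( nValue - nMin ) / ( nMax - nMin )

function Descale( nScaled, nMin, nMax )   // the inverse: recover the original value
return nScaled * ( nMax - nMin ) + nMin

For example, Scale( 5, 0, 9 ) returns 0.5555... and Descale( 0.5555..., 0, 9 ) gives 5 back.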
Hello!
An interesting article that helps to get into this field... https://blogs.elconfidencial.com/tecnol ... n_1437007/
Cheers.