Build Simple AI .NET Library – Part 3 – Perceptron

Introduction

In the last article, we reviewed the different types of machine learning as one part of AI. Training is one aspect, but what about the “Intelligence” component itself?

Well, researchers in the early days of AI focused on mimicking human intelligence in machines and applications. One of the earliest constructs was the Perceptron, which mimics the smallest processing unit in the human central nervous system.

In terms of brain anatomy, the nervous system at its core consists of a large number of highly interconnected neurons that work in both directions: sensory (inputs) and motor (actions, such as muscles).

Here are a couple of resources that provide an introduction to neural networks:

Neural Network

Neural Networks by Christos Stergiou and Dimitrios Siganos

What is Perceptron?

You may consider the Perceptron as the smallest processing element, one which mimics a single neuron.

Electrically speaking, a Perceptron receives input(s), does some processing, and then produces an output. At its biological origin, a neuron receives sensory information via dendrites in the form of chemical/electrical excite or inhibit actions; the summation of all these actions is then transferred to the axon in the same chemical/electrical form.

The axon terminals convert the axon’s electrical signal into an excite or inhibit signal based on a threshold value.

The same can be expressed in electrical terms: the Perceptron sums all its inputs and generates an output. To mimic the axon terminals, we need some kind of conversion function that turns the summation into excite or inhibit. This conversion function is called the Activation (or Transfer) function in ANN terminology.

The above is a very simple mimic with a major flaw: there is no room for intelligence, since the output is derived directly from the inputs and that is it. Besides that, all inputs are treated equally, which is not always the case; some inputs may have higher priority than others.

To overcome this flaw, let’s introduce Weights, so that each input has its own weight. In that case, manipulating the weights can yield a different output value even for the same set of inputs, as in the sketch below.
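As a minimal sketch of this idea (plain VB, with values chosen purely for illustration), each input gets its own weight and the output is the weighted sum:

' Each input is multiplied by its own weight before summation,
' so changing a weight changes the output even for the same inputs
Dim Inputs() As Single = {1.0F, 0.5F, 2.0F}    ' illustrative inputs
Dim Weights() As Single = {0.2F, -1.0F, 0.7F}  ' one weight per input (illustrative values)

Dim Sum As Single = 0
For I As Integer = 0 To Inputs.Length - 1
    Sum += Inputs(I) * Weights(I)
Next
' Sum = 1.0*0.2 + 0.5*(-1.0) + 2.0*0.7 = 1.1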

So, what are the values of the weights? Clearly, a neural network is a mapping function from inputs to outputs; in other words, an ANN is an optimization function. Setting the weights (another name for this is Perceptron training) is the way we will impose AI onto the Perceptron.

The following algorithm represents Perceptron Training

''' <summary>
''' Given:
'''     X is input set
'''     Y is label or desired output
'''     h(x) = X1*W1+X2*W2+.....Xm*Wm is hypothesis or h function
''' Initialize Weights randomly
''' For each training set 
'''     Calculate Output h(x) 
'''     Calculate Error = h(x) - Y
'''     Update Weights accordingly
''' Repeat till termination 
''' </summary>

How do we update the weights? Let’s assume we have only 1 input X and a desired output (label) Y; then

h(x) = X * W

Applying the Gradient Descent algorithm from the last article to the parameter b:

b = b - r * (1/m) * ∑((h(x) - Y) * X)

b in our case is W, and m = 1, so:

W = W - r * (h(x) - Y) * X

where r is the learning rate that controls the step size, normally 0 < r < 1.
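To make the update rule concrete, here is one update step worked through with made-up numbers (X = 2, desired output Y = 10, initial W = 1, r = 0.1):

Dim X As Single = 2      ' single input (illustrative)
Dim Y As Single = 10     ' desired output (illustrative)
Dim W As Single = 1      ' initial weight
Dim r As Single = 0.1F   ' learning rate

Dim Hypothesis As Single = X * W           ' h(x) = 2
Dim ErrorValue As Single = Hypothesis - Y  ' error = 2 - 10 = -8
W = W - r * ErrorValue * X                 ' W = 1 - 0.1 * (-8) * 2 = 2.6
' Next pass: h(x) = 2 * 2.6 = 5.2, which is closer to the desired output of 10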

What is the termination condition? The iteration can terminate once the iteration error is less than a user-specified error threshold, or once a predefined number of iterations has been completed.

Let’s revisit our algorithm

''' <summary>
''' Given:
'''     X is input set
'''     Y is label or desired output
'''     h(x) = X1*W1+X2*W2+.....Xm*Wm is hypothesis or h function
''' Initialize Weights randomly
''' For each training set 
'''     Calculate Output h(x) 
'''     Calculate Error = h(x) - Y
'''     Update Weights accordingly W = W - r * (h(x) - Y) * X
''' Repeat till termination:
'''     Number of Iterations > threshold value or
'''     Iteration error less than a threshold
''' </summary>

For the record, the above Perceptron layout is based on the McCulloch and Pitts (MCP) neuron model.

What is Activation Function?

As discussed, the activation (or transfer) function is used to convert the Perceptron output into excite or inhibit. For example, let’s assume we are using a Perceptron to detect +ve numbers from a set of numbers.

In this case, the activation function can simply be a step function that outputs 1 for a non-negative sum and 0 otherwise.

So, the activation function to use depends on the application or problem being solved.

Here are a couple of commonly used activation functions:

* Source Wikipedia article

When to use Perceptron?

Clearly, the Perceptron provides a basic processing function of the form h(x) = ∑(Xi * Wi).

Consider the case of 1 input; the graph of h(x) is then a straight line. As a straight line, it can segregate the plane into 2 regions, let’s say group A and group B.

This is a Linear Classifier (or Binary Classifier) function. If you recall, classification was the 2nd type of supervised learning. The Perceptron is used in binary classification problems, where there are only 2 possible solutions or groups for each input set.

A nonlinear classifier function, by contrast, would separate the groups with a curve rather than a straight line.

In such a case, the Perceptron cannot be used and other algorithms must be applied (a topic for later discussion).

As a recap, the Perceptron can only answer questions such as “Does this set of inputs fit in group A or group B?”, given that A & B are linearly separable.

Full Linear Perceptron

Consider the above example of 1 input X and h(x) = X * W for groups A & B.

Let’s assume we have 2 other groups, C & D, which are linearly separable, but by a line that does not pass through the origin.

The current Perceptron cannot fit such a boundary: regardless of the value of W, the output is always 0 when the input is 0, which does not match that graph.

To resolve this issue, we have to build a full linear Perceptron: h(x) = a + X * W

How do we define a? To simplify both the Perceptron design and its training, the AI community agreed to include a Bias in the Perceptron by assuming there is always an extra input X0 = 1, so that a becomes the weight W0 of this input:

h(x) = X0*W0 + X1*W1 + ..... + Xm*Wm, where X0 = 1

Accordingly, the final Perceptron design is a weighted sum over all inputs, including the bias input X0 = 1, followed by an activation function.
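In code, the bias is handled by simply reserving index 0 of every input matrix for the constant value 1 (the same convention used later in the TrainingSet class). A small sketch, assuming the library’s Matrix1D class and illustrative feature values:

' Input with 2 real features plus the bias entry X0 = 1, so the Perceptron size is 3
Dim Input As New Matrix1D(3)
Input.SetValue(0, 1)    ' X0 = 1, its weight W0 plays the role of the bias a
Input.SetValue(1, 120)  ' X1 (illustrative value)
Input.SetValue(2, 45)   ' X2 (illustrative value)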

Sample Code

Perceptron Class

To create a Perceptron Class, the following are minimal fields:

''' <summary>
''' Size is the number of inputs to Perceptron including X0=1
''' </summary>
Private _Size As Integer
''' <summary>
''' Weights is 1D Matrix that holds the weights of Perceptron
''' Weights size is same as Perceptron size
''' </summary>
Private _Weights As Matrix1D
''' <summary>
''' Learning Rate between 0 to 1
''' </summary>
Private _LearnRate As Single

Constructor

It accepts the size of the Perceptron (number of inputs including the bias) and the learning rate:

''' <summary>
''' Constructor with 2 parameters: Perceptron size and learning rate
''' Weights are randomly initialized
''' </summary>
''' <param name="PerceptronSize">Number of inputs including X0</param>
''' <param name="LRate"></param>
Public Sub New(PerceptronSize As Integer, LRate As Single)
    Me._Size = PerceptronSize
    Me._Weights = New Matrix1D(Me.Size)
    Me._Weights.RandomizeValues(-100, 100)
    Me._LearnRate = LRate
End Sub

The Hypothesis function computes the element-wise products Xi * Wi of h(x) = ∑(Xi * Wi) and returns them as a matrix (the summation is applied later):

''' <summary>
''' Calculate Hypothesis Function of Perceptron h(x) = Sum (X * W)
''' </summary>
''' <param name="Input"></param>
''' <returns></returns>
Public Function HypothesisFunction(Input As Matrix1D) As Matrix1D
    If Input.Size <> Me.Size Then Throw New Exception("Input Matrix size shall match " & Me.Size.ToString)
    ' Element-wise products Xi * Wi; the summation is applied by the caller
    Dim HypothesisFun As Matrix1D = Input.Product(Weights)
    Return HypothesisFun
End Function

CalcOutput calculates the final output of the Perceptron by applying the activation function to h(x):

''' <summary>
''' Calculate Final Perceptron Output = ActivationFunction(h(x))
''' </summary>
''' <param name="Input"></param>
''' <param name="ActivationFunction"></param>
''' <returns></returns>
Public Function CalcOutput(Input As Matrix1D, ActivationFunction As IActivationFunction) As Single
    Dim Hypothesis_x As Single = Me.HypothesisFunction(Input).Sum

    Return ActivationFunction.Function(Hypothesis_x)
End Function
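As a quick usage sketch (assuming the Matrix1D class and the StepFunction implementation shown later in this article; the values are illustrative only):

Dim P As New Perceptron(3, 0.1)        ' 3 inputs including the bias X0
Dim ActFun As New StepFunction()

Dim Input As New Matrix1D(3)
Input.SetValue(0, 1)                   ' bias input X0 = 1
Input.SetValue(1, 10)                  ' illustrative feature values
Input.SetValue(2, 20)

' Output = StepFunction(Sum(Xi * Wi)), i.e. 0 or 1 depending on the (still random) weights
Dim Output As Single = P.CalcOutput(Input, ActFun)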

TrainPerceptron is the main training function. It accepts 2 arrays: one holds the training set input matrices and the other holds the labels (correct answers).

''' <summary>
''' Train Perceptron Algorithm
''' Given:
'''     X is input set
'''     Y is label or desired output
'''     h(x) = X1*W1+X2*W2+.....Xm*Wm is hypothesis or h function
''' Initialize Weights randomly
''' For each training set
'''     Calculate Output h(x)
'''     Calculate Error = h(x) - Y
'''     Update Weights accordingly W = W - r * (h(x) - Y) * X
''' Repeat till termination:
'''     Number of Iterations > threshold value or
'''     Iteration error less than a threshold
''' </summary>
''' <param name="Input">Input Matrix</param>
''' <param name="Label">Desired Output</param>
''' <param name="ActivationFunction"></param>
Public Sub TrainPerceptron(Input() As Matrix1D, Label() As Single, ActivationFunction As IActivationFunction)
    Dim m As Integer = Input.Count  ' training set size
    Dim Counter As Integer = 0      ' number of iterations
    Dim MSE As Single = 0           ' To track error MSE
    Dim IterateError As Single = 0  ' To track error for each sample

    Do
        Counter += 1
        MSE = 0  ' Reset error

        For I As Integer = 0 To m - 1 ' iterate through training set
            Dim Out As Single = Me.CalcOutput(Input(I), ActivationFunction)
            IterateError = Out - Label(I)
            ' Update each weight: W = W - r * (h(x) - Y) * X
            For Index As Integer = 0 To Me.Size - 1
                Me._Weights.Values(Index) = Me._Weights.Values(Index) - Me.LearnRate * IterateError * Input(I).GetValue(Index)
            Next
            ' Accumulate squared error so positive and negative errors do not cancel out
            MSE += IterateError * IterateError
            IterateError = 0
        Next
        ' Calculate MSE
        MSE = MSE / (2 * m)
        ' Check termination condition
    Loop Until MSE < 0.001 OrElse Counter > 10000
End Sub

Simply put, it iterates over all training set inputs and updates the weights for each sample, then repeats the same pass until the loop terminates.

The loop terminates once the MSE drops below 0.001. In some cases (mostly when the inputs are not linearly separable) a safety condition is needed to avoid an infinite loop, hence the maximum number of iterations (tracked by the variable Counter) is set to 10,000.
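Putting the pieces together, here is a small end-to-end sketch (illustrative values only) that trains a 2-input Perceptron (bias plus one number) to separate positive numbers from negative ones, using the SignFunction listed in the next section:

' Tiny training set: X0 = 1 (bias), X1 = the number, label = +1 if positive, -1 otherwise
Dim Samples() As Single = {-7, -2, 3, 9}
Dim Inputs(Samples.Length - 1) As Matrix1D
Dim Labels(Samples.Length - 1) As Single

For I As Integer = 0 To Samples.Length - 1
    Inputs(I) = New Matrix1D(2)
    Inputs(I).SetValue(0, 1)           ' bias input X0
    Inputs(I).SetValue(1, Samples(I))  ' the number itself
    Labels(I) = If(Samples(I) >= 0, 1, -1)
Next

Dim P As New Perceptron(2, 0.1)
P.TrainPerceptron(Inputs, Labels, New SignFunction())
' After training, P.CalcOutput should return +1 for positive numbers and -1 for negative ones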

Activation Functions

To simplify the implementation of different activation functions, an interface IActivationFunction has been created:

Namespace ActivationFunction
    Public Interface IActivationFunction
        Function [Function](x As Single) As Single
        Function Derivative(x As Single) As Single
    End Interface

End Namespace

Each activation function implements 2 methods: Function and Derivative (the derivative is for later use).
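Any new activation function only needs to implement this interface. As an illustration (this class is not part of the library), a hyperbolic tangent activation could look like:

Namespace ActivationFunction

    ''' <summary>
    ''' Illustrative example only (not part of the library)
    ''' f(x) = tanh(x), f'(x) = 1 - tanh(x)^2
    ''' </summary>
    Public Class TanhFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            Return CSng(Math.Tanh(x))
        End Function

        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            Dim Y As Single = [Function](x)
            Return 1 - Y * Y
        End Function
    End Class
End Namespace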

The following activation functions have been implemented

IdentityFunction

Namespace ActivationFunction

    ''' <summary>
    ''' Always returns the same value that was used as its argument
    ''' f(x) = x
    ''' </summary>
    Public Class IdentityFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            Return x
        End Function

        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            Return 1
        End Function
    End Class
End Namespace

ReluFunction

Namespace ActivationFunction

    ''' <summary>
    ''' Implements f(x) = Max(0,x)
    ''' Returns x if x > 0 or return 0
    ''' </summary>
    Public Class ReluFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            Return Math.Max(x, 0)
        End Function

        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            If x >= 0 Then Return 1
            Return 0
        End Function
    End Class
End Namespace

SignFunction

Namespace ActivationFunction

    ''' <summary>
    ''' Return +1 if x is more than or equal 0
    ''' return -1 otherwise
    ''' </summary>
    Public Class SignFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            If x >= 0 Then
                Return 1
            Else
                Return -1
            End If
        End Function

        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            Return 0
        End Function
    End Class
End Namespace

SoftStepFunction or Logistic Function

Namespace ActivationFunction

    ''' <summary>
    ''' Implements logistic function = 1/(1 + exp(-x))
    ''' </summary>
    Public Class SoftStepFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            Dim Y As Single

            Y = Math.Exp(-x)
            Y = Y + 1
            Y = 1 / Y
            Return Y
        End Function

        ''' <summary>
        ''' Implements f’(x)=f(x)(1-f(x))
        ''' </summary>
        ''' <param name="x"></param>
        ''' <returns></returns>
        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            Dim Y As Single

            Y = [Function](x)
            Return Y * (1 - Y)
        End Function
    End Class
End Namespace
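As a quick sanity check on the derivative relationship (the part that is “for later use”), the logistic function gives f(0) = 0.5 and f'(0) = 0.5 * (1 - 0.5) = 0.25:

Dim Logistic As New SoftStepFunction()
Console.WriteLine(Logistic.[Function](0))   ' 0.5
Console.WriteLine(Logistic.Derivative(0))   ' 0.25 = 0.5 * (1 - 0.5)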

StepFunction

Namespace ActivationFunction

    ''' <summary>
    ''' Return 1 if x is more than or equal 0
    ''' return 0 otherwise
    ''' </summary>
    Public Class StepFunction
        Implements IActivationFunction

        Public Function [Function](x As Single) As Single Implements IActivationFunction.Function
            If x >= 0 Then
                Return 1
            Else
                Return 0
            End If
        End Function

        Public Function [Function](x As Single, Theta As Single) As Single
            If x >= Theta Then
                Return 1
            Else
                Return 0
            End If
        End Function

        Public Function Derivative(x As Single) As Single Implements IActivationFunction.Derivative
            ' The step function is flat everywhere (except at x = 0), so its derivative is taken as 0
            Return 0
        End Function
    End Class
End Namespace

Sample Test

GUI

The sample application creates a random training set with a pre-defined relation, passes this data to the Perceptron for training, and then draws the classification line based on the final Perceptron weights.

TrainingSet Class

''' <summary>
''' Class to build random training set
''' Creates training set within specified width and height limits (for graphical visualization)
''' Positive points are the ones above the line Y = 300 - 2 / 3 * X (an arbitrary straight line)
''' Points below this line are negative
''' </summary>
Public Class TrainingSet
    Private _PointsNum As Integer
    Private _Width As Integer
    Private _Height As Integer

    ''' <summary>
    ''' Holds training set inputs matrix
    ''' </summary>
    Private _Points() As Matrix1D
    ''' <summary>
    ''' Holds labels (correct answers) array
    ''' </summary>
    Private _Labels() As Single

    Private _Gen As RandomFactory

    Public Sub New(PointsNum As Integer, Width As Integer, Height As Integer)
        _PointsNum = PointsNum
        _Width = Width
        _Height = Height
        _Gen = New RandomFactory
        ReDim _Points(PointsNum - 1)
        ReDim _Labels(PointsNum - 1)
        Randomize()
    End Sub

    ''' <summary>
    ''' Create random points
    ''' </summary>
    Public Sub Randomize()
        For I As Integer = 0 To _PointsNum - 1
            Points(I) = New Matrix1D(3)
            Points(I).SetValue(0, 1)
            Points(I).SetValue(1, _Gen.GetRandomInt(0, _Width))
            Points(I).SetValue(2, _Gen.GetRandomInt(0, _Height))
            Labels(I) = Classify(Points(I).GetValue(1), Points(I).GetValue(2))
        Next
    End Sub

    Public ReadOnly Property Points As Matrix1D()
        Get
            Return _Points
        End Get
    End Property

    Public ReadOnly Property Labels As Single()
        Get
            Return _Labels
        End Get
    End Property

    ''' <summary>
    ''' Creates labels array by checking points against straight line 300 - 2 / 3 * X
    ''' </summary>
    ''' <param name="X">Point X coordinate</param>
    ''' <param name="Y">Point Y coordinate</param>
    ''' <returns></returns>
    Private Function Classify(X As Single, Y As Single) As Single
        Dim d As Single = 300 - 2 / 3 * X
        If Y >= d Then Return +1
        Return -1
    End Function

    ''' <summary>
    ''' Draws points within passed canvas object
    ''' </summary>
    ''' <param name="MyCanv"></param>
    Public Sub Draw(MyCanv As Canvas)
        For I As Integer = 0 To _PointsNum - 1
            If _Labels(I) = 1 Then
                MyCanv.DrawBox(5, Points(I).GetValue(1), Points(I).GetValue(2), Color.Blue)
            Else
                MyCanv.DrawCircle(5, Points(I).GetValue(1), Points(I).GetValue(2), Color.Green)
            End If
        Next
    End Sub
End Class

Creation of Perceptron and TrainingSet

Private Sub SampleTest1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
    ' Initialize random training set with size 100
    RndTraininSet = New TrainingSet(100, PictureBox1.Width, PictureBox1.Height)
    MyCanvas = New Canvas(PictureBox1.Width, PictureBox1.Height)
    MyPerceptron = New Perceptron(3, 0.1)
    ActivFun = New SignFunction
End Sub

One important note here is the activation function selection. As mentioned, the choice of activation function depends on the problem being solved. For our example, the training set is divided into 2 groups, +ve and -ve, based on each point’s location relative to the defined straight line.

Based on these problem criteria, the best activation function is the SignFunction, which outputs +1 or -1.

Try changing to other functions; for some of them the Perceptron will never reach the minimum MSE state.

Perceptron training

Private Sub btnTrain_Click(sender As Object, e As EventArgs) Handles btnTrain.Click
    MyPerceptron.TrainPerceptron(RndTraininSet.Points, RndTraininSet.Labels, ActivFun)
End Sub
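Once training completes, the same Perceptron can classify a new point. A short sketch (the point values are illustrative only):

' Illustrative only: classify a new point with the trained Perceptron
Dim NewPoint As New Matrix1D(3)
NewPoint.SetValue(0, 1)    ' bias input X0
NewPoint.SetValue(1, 150)  ' X coordinate (illustrative)
NewPoint.SetValue(2, 80)   ' Y coordinate (illustrative)

' Returns +1 (one side of the learned line) or -1 (the other side)
Dim Group As Single = MyPerceptron.CalcOutput(NewPoint, ActivFun)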

Recap

The Perceptron is the basic processing node in an ANN (Artificial Neural Network); it is used mainly to solve binary linear classification problems.

Perceptron uses Supervised learning to set its weights

The formula to update the weights for each sample of the training set is W = W - r * (h(x) - Y) * X

The activation function defines the final output of the Perceptron, and the selection of the activation function is based on the problem being solved.
