To implement the internal workings of a perceptron and test its accuracy on the training and test datasets.
The perceptron is a general computational model of a single neuron. It takes inputs, aggregates them into a weighted sum, and returns 1 only if the aggregated sum is above some threshold, else returns 0.
Our goal is to find the weight vector w that can perfectly separate the positive inputs from the negative inputs in our data.
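A minimal sketch of this decision rule (the function name predict and the use of a bias b in place of an explicit threshold are illustrative choices, not taken from the report):

```python
import numpy as np

def predict(x, w, b):
    """Perceptron rule: 1 if the weighted sum crosses the threshold, else 0.
    The bias b plays the role of a negative threshold."""
    return 1 if np.dot(w, x) + b > 0 else 0
```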
We first implement AND, OR and XOR logic with the perceptron learning algorithm by plugging in the appropriate weights and computational logic. Hence, we create our own perceptrons. Note that XOR is not linearly separable, so no single perceptron can represent it; it has to be composed from several perceptrons, which is what motivates the multilayer perceptron (MLP).
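As a sketch, one valid choice of weights and biases for AND and OR (the values below are one of many that work; predict repeats the rule from above):

```python
import numpy as np

def predict(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# AND fires only when both inputs are 1; OR fires when at least one is 1
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", predict(x, [1, 1], -1.5), "OR:", predict(x, [1, 1], -0.5))
# No single (w, b) pair reproduces XOR; it must be composed from
# several perceptrons, e.g. XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))).
```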
If a multilayer perceptron has a linear activation function in all neurons, that is,
a linear function that maps the weighted inputs to the output of each neuron,
then linear algebra shows that any number of layers can be reduced to a two-layer input-output model.
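This collapse can be checked numerically; the matrices below are random placeholders with arbitrary shapes, used only to illustrate the algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

two_layer = W2 @ (W1 @ x + b1) + b2          # two stacked linear layers
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)   # one equivalent linear layer
print(np.allclose(two_layer, collapsed))      # True
```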
In MLPs some neurons use a nonlinear activation function that was developed to model the frequency of action potentials,
or firing, of biological neurons.
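For illustration, two common choices: the sigmoid, which historically served as the firing-rate model mentioned above, and the ReLU used later in our network.

```python
import numpy as np

def sigmoid(x):
    # Smooth squashing function, historically a model of firing frequency
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # The "ramp" function: zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)
```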
We use an MLP to classify the Iris dataset.
We use LabelEncoder to encode the class label (the Species column).
We next divide the dataset into a training set (70%) and a testing set (30%).
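A sketch of these two preprocessing steps, assuming the dataset is loaded from scikit-learn (the report may read a CSV instead) and that the labels are one-hot encoded to match the 3-unit softmax output described next:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

iris = load_iris()
X = iris.data                                  # 4 features per sample
species = iris.target_names[iris.target]       # Species names as strings

y = LabelEncoder().fit_transform(species)      # Species strings -> 0, 1, 2
y = to_categorical(y)                          # one-hot targets for softmax

# 70% training / 30% testing split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
```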
We use the Keras Sequential model and add 3 layers: the first two layers have 4 perceptrons each, and the last one has 3.
The first two layers use the rectified linear unit (ReLU, the ramp function) as their activation; the last layer uses softmax to scale its outputs into probabilities. We then plot the mean squared error and how it changes with each epoch (here 10 epochs).
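A sketch of this model, continuing from the split above; the Adam optimizer and the plotting details are assumptions not stated in the report:

```python
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(4, activation="relu", input_shape=(4,)),  # 4 input features of Iris
    Dense(4, activation="relu"),
    Dense(3, activation="softmax"),                 # probabilities over 3 species
])
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_test, y_test))

# Plot how the mean squared error changes with each epoch
plt.plot(history.history["loss"], label="train MSE")
plt.plot(history.history["val_loss"], label="test MSE")
plt.xlabel("epoch")
plt.ylabel("mean squared error")
plt.legend()
plt.show()
```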