Learning Model

time to make Saitama's head shine

The learning model consists of 2 components:

  1. Hypothesis Set

  2. Learning Algorithm

1. Hypothesis Set

Firstly, an ML model creates the hypothesis set. A model can be, for example:

  • Linear Classification

  • Linear Regression

  • Logistic Regression

Depending on the learning scenario, one model can be favored over another.

In the credit-card approval case, the 'Perceptron Learning Model' is preferred for Linear Classification, where d input variables $x_1, \dots, x_d$ refer to input attributes, e.g. "gender", "age", "salary".

The bank should approve credit if

$$\sum_{i=1}^{d} w_i x_i > \text{threshold}$$

and deny credit if

$$\sum_{i=1}^{d} w_i x_i < \text{threshold}$$

with the combination of both formulas to be potentially rewritten as a single sign function that outputs +1 (approve) or -1 (deny), with each hypothesis looking like:

$$h(x) = \text{sign}\left(\left(\sum_{i=1}^{d} w_i x_i\right) - \text{threshold}\right)$$
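
To make this concrete, here is a minimal Python sketch of a single perceptron hypothesis. The attribute encoding, weights, and threshold below are hypothetical values chosen purely for illustration, not values from the credit-card example itself.

```python
import numpy as np

def perceptron_hypothesis(x, w, threshold):
    """One hypothesis h(x) = sign((sum_i w_i * x_i) - threshold).
    Returns +1 (approve) or -1 (deny)."""
    score = np.dot(w, x)
    return 1 if score > threshold else -1

# Hypothetical applicant with d = 3 numeric attributes (gender, age, salary)
# and arbitrary weights/threshold, for illustration only.
x = np.array([1.0, 35.0, 60_000.0])
w = np.array([0.1, 0.2, 0.001])
print(perceptron_hypothesis(x, w, threshold=50.0))   # 1 (approve)
```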

Control

As mentioned, the model creates the hypothesis set (H). However, there are not many parameters we can control after picking a model. For example, we cannot simply alter the formula of the Perceptron Learning Model to produce multiple hypotheses (as that would change the logic).

In fact, there are only 2 parameters within our control for this model:

Weight (w)

  • Each weight is attached to its corresponding input variable (x)

  • In this case, the more important a variable is to approving a credit-card application, the greater the weight it should be given, so that it has more influence in pushing the candidate's score past the threshold

Threshold

  • This can be an arbitrary value/constant that each candidate's weighted sum over all variables must minimally meet to be considered approved

  • It can also be used as a means to an end, for example setting the threshold so that at most 250,000 people are approved in their credit-card applications

Through altering these 2 parameters, we can essentially create multiple hypotheses that together form the hypothesis set (H), from which the best hypothesis (g) is eventually chosen.
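
As a quick sketch of this, the snippet below scores the same (hypothetical) applicant under three different (weights, threshold) pairs; each pair is effectively a different hypothesis h in H, and they do not all agree.

```python
import numpy as np

# One hypothetical applicant, scored under three (weights, threshold) pairs.
# Each pair defines a different hypothesis h in the hypothesis set H.
x = np.array([1.0, 35.0, 60_000.0])

hypotheses = [
    (np.array([0.1, 0.2, 0.001]), 50.0),     # h1
    (np.array([0.0, 1.0, 0.0005]), 80.0),    # h2
    (np.array([0.5, 0.1, 0.002]), 200.0),    # h3
]

for i, (w, threshold) in enumerate(hypotheses, start=1):
    decision = 1 if np.dot(w, x) > threshold else -1
    print(f"h{i}: {'approve' if decision == 1 else 'deny'}")
# h1: approve, h2: deny, h3: deny
```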

Masking Threshold

The threshold can be masked by absorbing it into the weights: introduce a constant input $x_0 = 1$ with weight $w_0 = -\text{threshold}$, turning

$$h(x) = \text{sign}\left(\left(\sum_{i=1}^{d} w_i x_i\right) - \text{threshold}\right)$$

to

$$h(x) = \text{sign}\left(\sum_{i=0}^{d} w_i x_i\right)$$
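
A small Python sketch of the same rewrite, with hypothetical numbers: prepending a constant x0 = 1 to the inputs and w0 = -threshold to the weights leaves every decision unchanged.

```python
import numpy as np

# Absorb the threshold into the weights: x0 = 1, w0 = -threshold,
# so the hypothesis becomes h(x) = sign(w . x) with no explicit threshold.
x = np.array([1.0, 35.0, 60_000.0])
w = np.array([0.1, 0.2, 0.001])
threshold = 50.0

x_padded = np.insert(x, 0, 1.0)           # [1, x1, ..., xd]
w_padded = np.insert(w, 0, -threshold)    # [-threshold, w1, ..., wd]

assert np.sign(np.dot(w, x) - threshold) == np.sign(np.dot(w_padded, x_padded))
```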

Vectorizing

Do take a look at my GitBook Matrices and Linear Algebra Fundamentals if you do not know what matrices are!

At this juncture, the formula of the model we're using now is pretty neat, but its major problem is that its algebraic computation can be pretty brutal to implement. To illustrate this, let's take Alice from our example.

By implementing our model's formula and slotting in arbitrary weights, we'll get

However, if we vectorize both the weights and input variable x, we'll get

Ok, in pure honesty it does not look that bad LOL, but it will probably look worse with more input variables. Hence, the vectorized formula is written as

$$h(\mathbf{x}) = \text{sign}(\mathbf{w}^{\mathsf{T}}\mathbf{x})$$
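
As a sketch (hypothetical weights and inputs, with x0 = 1 already included), the written-out sum and the vectorized dot product give the same hypothesis, but the vector form stays one line no matter how many attributes there are.

```python
import numpy as np

# Hypothetical weights and inputs; x already includes the constant x0 = 1.
w = np.array([-50.0, 0.1, 0.2, 0.001])
x = np.array([1.0, 1.0, 35.0, 60_000.0])

h_sum = np.sign(sum(w_i * x_i for w_i, x_i in zip(w, x)))   # written-out sum
h_vec = np.sign(w @ x)                                      # sign(w^T x)

print(h_sum, h_vec)   # 1.0 1.0 -- identical
```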

2. Learning Algorithm

The learning algorithm simply updates weights.

Hence, for a Perceptron Learning Algorithm, it merely corrects misclassified points, i.e. points where

$$\text{sign}(\mathbf{w}^{\mathsf{T}}\mathbf{x}_n) \neq y_n$$

Negative to positive

For example, suppose the true label is $y_n = +1$ but the current hypothesis outputs $\text{sign}(\mathbf{w}^{\mathsf{T}}\mathbf{x}_n) = -1$.

  • Point n is thus misclassified

Where $\mathbf{w}^{\mathsf{T}}\mathbf{x}_n < 0$ even though it should be positive.

Hence, the Perceptron Learning Algorithm corrects the misclassified point with

$$\mathbf{w} \leftarrow \mathbf{w} + y_n\mathbf{x}_n$$

which, since $y_n = +1$, adds $\mathbf{x}_n$ to $\mathbf{w}$ and pushes $\mathbf{w}^{\mathsf{T}}\mathbf{x}_n$ towards the positive side.

Positive to negative

The reverse is true as well.

-just skip the following if you understand the previous lol-

Given the scenario where the true label is $y_n = -1$ but $\text{sign}(\mathbf{w}^{\mathsf{T}}\mathbf{x}_n) = +1$, point n is again misclassified.

Hence, the Perceptron Learning Algorithm corrects the misclassified point with

$$\mathbf{w} \leftarrow \mathbf{w} + y_n\mathbf{x}_n$$

which, since $y_n = -1$, subtracts $\mathbf{x}_n$ from $\mathbf{w}$ and pushes $\mathbf{w}^{\mathsf{T}}\mathbf{x}_n$ towards the negative side.
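
Here is a minimal Python sketch of one correction step on a hypothetical misclassified point; the single rule w <- w + y_n * x_n covers both the negative-to-positive and positive-to-negative cases.

```python
import numpy as np

def pla_update(w, x_n, y_n):
    """One PLA correction: w <- w + y_n * x_n.
    Applied only when sign(w . x_n) != y_n, i.e. x_n is misclassified."""
    return w + y_n * x_n

# Hypothetical point whose true label is +1 but which currently scores negative.
w = np.array([-1.0, 0.5, 0.5])
x_n = np.array([1.0, 0.2, 0.3])
y_n = 1

print(np.sign(w @ x_n))   # -1.0: misclassified
w = pla_update(w, x_n, y_n)
print(np.sign(w @ x_n))   # 1.0 here; in general several updates may be needed
```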

Iteration

If the training examples are truly linearly separable, all points can eventually be classified correctly by repeatedly applying the above Perceptron Learning Algorithm update over all points, even though a single update may temporarily misclassify points that were previously classified correctly.

That is, if the data set is truly linearly separable. If not, we can attempt to transform the data to make it linearly separable.
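
Putting the pieces together, below is a minimal sketch of the full iteration on a tiny hypothetical, linearly separable data set (using the x0 = 1 trick from earlier); the function name pla_train is made up for illustration.

```python
import numpy as np

def pla_train(X, y, max_iters=1000):
    """Perceptron Learning Algorithm sketch for linearly separable data.
    X has a leading column of 1s (x0 trick); y holds +1/-1 labels."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        # Pick any misclassified point: sign(w . x_n) != y_n.
        mis = [n for n in range(len(y)) if np.sign(w @ X[n]) != y[n]]
        if not mis:
            return w                      # all points classified correctly
        n = mis[0]
        w = w + y[n] * X[n]               # correct it: w <- w + y_n * x_n
    return w                              # may not have converged

# Tiny hypothetical data set, separable by (roughly) x1 + x2 > 3.
X = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 2.0],
              [1.0, 3.0, 1.0],
              [1.0, 2.5, 2.5],
              [1.0, 0.5, 1.0]])
y = np.array([-1, 1, 1, 1, -1])

w = pla_train(X, y)
print(np.sign(X @ w) == y)    # [ True  True  True  True  True ] after convergence
```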
