Learning Model

time to make Saitama's head shine


The learning model consists of 2 components:

  1. Hypothesis Set

  2. Learning Algorithm

1. Hypothesis Set

First, an ML model creates the hypothesis set. A model can be, for example:

  • Linear Classification

  • Linear Regression

  • Logistic Regression

Depending on the learning scenario, one model can be favored over another.

In the credit-card approval case, the 'Perceptron Learning Model' is preferred for linear classification, where each of the d input variables refers to an input attribute, e.g. "gender", "age", "salary":

x = (x_1, x_2, \ldots, x_d)

The bank should approve credit if

\sum_{i=1}^{d} w_i x_i > \text{threshold}

and deny credit if

\sum_{i=1}^{d} w_i x_i < \text{threshold}

with the two rules combined and rewritten as

h(x) = \operatorname{sign}\left(\left(\sum_{i=1}^{d} w_i x_i\right) - \text{threshold}\right)

with each hypothesis looking like:

PLA

Control

As mentioned, the model creates the hypothesis set (H). However, there are not many parameters we can control after picking a model. For example, we cannot simply alter the formula of the Perceptron Learning Model to produce multiple hypotheses (for the logic itself would change).

Saitama controlling a rock by pinching his nose

In fact, there are only 2 parameters within our control in this entire model:

Weight (w)

  • A weight is attached to each input variable (x)

  • The more important a variable is to approving a credit-card application, the greater the weight it should be given, pushing the weighted sum past the threshold

Threshold

  • An arbitrary value/constant that the weighted sum of a candidate's variables must minimally meet for the application to be approved

  • It can also be used as a means to an end, for example setting the threshold so that at most 250,000 credit-card applications are approved

By altering these 2 parameters, we can create multiple hypotheses, which together form the hypothesis set (H) containing the best hypothesis (g), as the sketch below illustrates.
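To make this concrete, here is a minimal Python sketch of a perceptron hypothesis where the weights and the threshold are the only knobs. The numeric encodings (e.g. Female as 0, a missing debt record as 0) and every weight value are illustrative assumptions, not part of the original example.

```python
# A sketch of one perceptron hypothesis. The numeric encodings
# (e.g. Female -> 0, no recorded debt -> 0) and all values are assumptions.
def hypothesis(x, weights, threshold):
    """Approve (+1) if the weighted sum exceeds the threshold, else deny (-1)."""
    score = sum(w_i * x_i for w_i, x_i in zip(weights, x))
    return 1 if score > threshold else -1

x = [23, 0, 24000, 0]  # age, gender, salary, debt

# Two different (weights, threshold) settings = two different hypotheses in H
h1 = hypothesis(x, weights=[-3.2, 1.5, 0.25, -1.5], threshold=500)
h2 = hypothesis(x, weights=[-1.0, 0.5, 0.10, -2.0], threshold=3000)
print(h1, h2)  # the learning algorithm's job is to pick the best such setting, g
```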

Masking Threshold

The threshold is very much like a weight. Both are parameters (values that change) to be tweaked. Hence, by defining w_0 = -threshold, we can treat the threshold as a special 'weight' w_0 to greatly simplify the formula from

h(x) = \operatorname{sign}\left(\left(\sum_{i=1}^{d} w_i x_i\right) - \text{threshold}\right)

to

h(x) = \operatorname{sign}\left(\sum_{i=1}^{d} w_i x_i + w_0\right)

and, by introducing a constant input x_0 = 1, similarly grouping w_0 into the summation notation, to

h(x) = \operatorname{sign}\left(\sum_{i=0}^{d} w_i x_i\right)
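As a quick sanity check, here is a sketch of the two forms side by side, under the same assumed encodings as before: prepending x_0 = 1 to the inputs and w_0 = -threshold to the weights leaves the decision unchanged.

```python
# Sketch: masking the threshold as a 'weight' w0 = -threshold attached to a
# constant input x0 = 1. Both forms give the same decision (values assumed).
def h_with_threshold(x, w, threshold):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - threshold > 0 else -1

def h_with_bias(x, w):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

x = [23, 0, 24000, 0]
w = [-3.2, 1.5, 0.25, -1.5]
threshold = 500.0

# Prepend x0 = 1 to x and w0 = -threshold to w: same hypothesis, neater formula
assert h_with_threshold(x, w, threshold) == h_with_bias([1.0] + x, [-threshold] + w)
```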

Vectorizing

Do take a look at my GitBook Matrices and Linear Algebra Fundamentals if you do not know what matrices are!

At this juncture, the formula of the model is pretty neat, but its major problem is that the algebraic computation can be pretty brutal to write out. To illustrate this, let's take Alice from our example.

| Name  | Age | Gender | Salary | Debt | Default |
| ----- | --- | ------ | ------ | ---- | ------- |
| Alice | 23  | Female | 24000  | -    | 1       |

By implementing our model's formula and slotting in arbitrary weights, we'll get

h(\text{Alice}) = \operatorname{sign}\big((-3.2 \times 23) + (1.5 \times 0) + (0.25 \times 24000) + (-1.5 \times 0)\big)

However, if we vectorize both the weights and the input variables x, we'll get

w^T x = \begin{bmatrix} -3.2 & 1.5 & 0.25 & -1.5 \end{bmatrix} \begin{bmatrix} 23 \\ 0 \\ 24000 \\ 0 \end{bmatrix}

Ok, in pure honesty it does not look that bad LOL, but it will probably look worse with more input variables. Hence, the vectorized formula is written as

h(x) = \operatorname{sign}(w^T x)
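Here is what that one-liner looks like with NumPy, again with assumed weights and Alice's assumed encoding:

```python
import numpy as np

# The vectorized hypothesis h(x) = sign(w^T x), using Alice's assumed encoding
w = np.array([-3.2, 1.5, 0.25, -1.5])  # one weight per input variable
x = np.array([23, 0, 24000, 0])        # age, gender, salary, debt

print(np.sign(w @ x))  # 1.0, i.e. approve
```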

2. Learning Algorithm

The learning algorithm simply updates the weights.

Hence, the Perceptron Learning Algorithm merely corrects misclassified points, i.e. points where

\operatorname{sign}(w^T x_n) \neq y_n

where y_n is derived from the target function.

Negative to positive

For example,

  • sign(w^T x_n) = -1 from the hypothesis, while

  • y_n = +1 from the target function,

  • Point n is thus misclassified

Recall that the sign of a dot product depends on the angle θ between the two vectors:

  • θ < 90° (acute): the product is positive,

  • θ = 90° (right): the product is 0,

  • θ > 90° (obtuse): the product is negative.

https://www.quora.com/Can-a-scalar-product-be-negative
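The three cases follow from the cosine form of the dot product,

w^T x = \|w\| \, \|x\| \cos\theta

and since norms are never negative, the sign of w^T x is exactly the sign of cos θ.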

Here, the dot product of vector w_1 and x is negative (their angle is obtuse), whereas it should be positive.

Vectors of w_1 and x

Hence, the Perceptron Learning Algorithm corrects the misclassified point with

w_2 \leftarrow w_1 + y_n x_n

Since y_n = +1, we are simply adding x_n to the weight, where

w_2 \leftarrow w_1 + x_n

with the new vector w_2 being

Vector w_2

The new angle θ_2 formed between vector w_2 and x is < 90°, so their dot product is now positive: the Perceptron Learning Algorithm has updated the weight to correctly classify point n (in general, a stubborn point may need more than one such update). 😵 A quick numeric check is sketched below.
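```python
import numpy as np

# Sketch of one PLA correction, w2 <- w1 + y_n * x_n, on made-up 2-D vectors
# where the point should be positive (y_n = +1) but is classified negative.
w1 = np.array([-1.0, 2.0])
x_n = np.array([3.0, -1.0])
y_n = 1

print(np.sign(w1 @ x_n))  # -1.0: the angle is obtuse, so point n is misclassified

w2 = w1 + y_n * x_n       # rotate w toward x_n
print(np.sign(w2 @ x_n))  # 1.0: the angle is now acute (one update sufficed here)
```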

Positive to negative

The reverse is true as well.

-just skip the following if you understand the previous lol-

Given the scenario where,

  • sign(w^T x_n) = +1 from the hypothesis, while

  • y_n = -1 from the target function,

  • so sign(w^T x_n) ≠ y_n, and point n is thus misclassified

Here, the dot product of vector w_1 and x is positive, whereas it should be negative.

Vectors of w_1 and x

Hence, the Perceptron Learning Algorithm corrects the misclassified point with

w_2 \leftarrow w_1 + y_n x_n

Since y_n = -1, we are simply subtracting x_n from the weight, where

w_2 \leftarrow w_1 - x_n

with the new vector w_2 being

Vector w_2

The new angle θ_2 formed between vector w_2 and x is > 90°, so their dot product is now negative: the Perceptron Learning Algorithm has updated the weight to correctly classify point n. 😵

Iteration

If the training examples are truly linearly separable, all points can eventually be correctly classified by repeatedly applying the above Perceptron Learning Algorithm over the points, even though an update may temporarily misclassify points that were previously correct. A compact version of the full loop is sketched below.

PLA hard at work

That is, if the data set is truly linearly separable. If not, we can attempt to transform the data to make it linearly separable.
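Here is a compact sketch of that loop, on a toy data set that is linearly separable by assumption, with x_0 = 1 already prepended to every input so the threshold is absorbed into w:

```python
import numpy as np

# A compact PLA loop, assuming linearly separable data with x0 = 1 already
# prepended to every input (so the threshold lives inside w). Toy data only.
def pla(X, y, max_iters=1000):
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        misclassified = [n for n in range(len(y)) if np.sign(w @ X[n]) != y[n]]
        if not misclassified:
            return w                 # every point is now correctly classified
        n = misclassified[0]
        w = w + y[n] * X[n]          # correct one misclassified point
    return w                         # not separable (or not yet converged)

# Labels follow sign(x1 + x2 - 1), so the data is linearly separable
X = np.array([[1, 0, 0], [1, 2, 2], [1, 0, 2], [1, 2, 0]], dtype=float)
y = np.array([-1, 1, 1, 1])
print(pla(X, y))  # e.g. [-1.  2.  2.]
```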
