Introduction to Matrices (Pt. 1)
Wait, what, are we learning math all over again!? 👻
Maybe a good way to start off this GitBook is through the words of this man, a professor at MIT: Gilbert Strang.
I highly recommend watching his videos first, but I will also (humbly) summarize his discussions here.
Matrices are (good for me, people with OCD lol) neat.
They offer both:
a tidy way to represent dimensions, and
a nifty way to transform data.
They are essentially
numbers (and sometimes algebraic symbols), that are
spaced out neatly in rows and columns, which are
arranged in a rectangular array (usually enclosed in brackets or parentheses lol❓❓❓)
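To make the "rows and columns" idea concrete, here is a minimal sketch using NumPy (the values are arbitrary, just to show the shape):

```python
import numpy as np

# A matrix is numbers arranged in a rectangular array:
# this one has 2 rows and 3 columns.
A = np.array([
    [1, 2, 3],
    [4, 5, 6],
])

print(A.shape)   # (2, 3) -> 2 rows, 3 columns
print(A[0])      # first row: [1 2 3]
print(A[:, 1])   # second column: [2 5]
```

Notice how indexing lets you slice out a whole row or a whole column at once; this neatness is exactly what makes matrices so convenient.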
This eases the representation of multiple dimensions, and also offers a transparent, verifiable way to demonstrate the transformation process of data (which helps in addressing algorithmic bias).
Dimensions often correspond to input variables, and if you are unsure what that means, don't worry as of yet! 😉 It is merely a machine-learning term for the data used to predict output variables, to uncover general patterns and trends, or for reinforcement learning.
For example, a model built for the sole purpose of predicting the output variable "HDB housing price" may take in numerous input variables, ranging from substantial to trivial factors: location, type of flat (HDB flat or EC), number of rooms, the amenities available, years of lease remaining, noise level (whether it is close to an air base or airport), and so on.
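The HDB example above can be sketched as a feature matrix: each row is one flat, each column is one input variable, and a single matrix-vector multiplication transforms every flat into a price prediction at once. All the numbers and weights below are made up purely for illustration:

```python
import numpy as np

# Hypothetical feature matrix: each ROW is one HDB flat,
# each COLUMN is one input variable.
# Columns: [number of rooms, years of lease remaining, distance to MRT (km)]
X = np.array([
    [3, 70, 0.4],
    [4, 95, 1.2],
    [5, 60, 0.8],
])

# A made-up weight vector: one matrix-vector multiplication
# applies the same transformation to every flat simultaneously.
w = np.array([100_000, 2_000, -50_000])
predicted_prices = X @ w

print(predicted_prices)  # [420000. 530000. 580000.]
```

This is the sense in which matrices both *represent* the dimensions (columns) and *transform* the data (the multiplication) in one tidy package.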
Currently, matrices are widely used in:
Inputs (and possibly outputs) of machine learning models (specifically deep learning / neural networks)
Term-document matrix in Natural Language Processing (NLP)
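For a taste of that last application, here is a toy term-document matrix built with plain Python: each row is a document, each column is a term from the vocabulary, and each entry counts how often that term appears in that document (the two "documents" are made up):

```python
from collections import Counter

# Two tiny toy "documents".
docs = [
    "the cat sat on the mat",
    "the dog sat",
]

# Vocabulary: all distinct words, sorted so columns have a fixed order.
vocab = sorted({word for doc in docs for word in doc.split()})

# Term-document matrix: rows = documents, columns = terms, entries = counts.
matrix = [[Counter(doc.split())[term] for term in vocab] for doc in docs]

print(vocab)   # ['cat', 'dog', 'mat', 'on', 'sat', 'the']
print(matrix)  # [[1, 0, 1, 1, 1, 2], [0, 1, 0, 0, 1, 1]]
```

Real NLP pipelines use far larger (and sparse) versions of this same rectangular array.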