Gradient Descent With Multiple Variables + Improvements

(Refer to Figure 1 first) Not a lot has changed since the last time we talked about gradient descent (please read that article before continuing). It is hard to explain how the summation and 1/m come into play, but when you take the partial derivative with respect to the variable that you are …
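As a hedged sketch of where that summation and 1/m come from (assuming the standard notation from the earlier article: hypothesis h_θ(x) = θᵀx, m training examples, learning rate α), differentiating the squared-error cost gives:

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2
\qquad\Longrightarrow\qquad
\frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
```

The chain rule pulls the exponent 2 down to cancel the 2 in 1/(2m), leaving the 1/m and the sum over all m examples, so each update step is θ_j := θ_j − α · (1/m) Σᵢ (h_θ(x^(i)) − y^(i)) x_j^(i), applied simultaneously for every j.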

Linear Algebra Integration with Hypothesis Function

Before we go further into looking at data sets and creating more hypotheses, let's hold a quick review of matrices. Basic multiplication, addition, subtraction, and division between vectors and scalars should be easy to pick up from the many tutorials online. In this article we will primarily go over the properties of matrices. First, matrices are …
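As a minimal sketch of the kind of matrix properties discussed here (assuming NumPy; the specific matrices are illustrative, not from the post):

```python
import numpy as np

# Two small matrices to demonstrate common matrix properties.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
I = np.eye(2)  # the 2x2 identity matrix

# Matrix multiplication is generally NOT commutative: A @ B != B @ A.
print(np.array_equal(A @ B, B @ A))   # False for these matrices

# It IS associative: (A @ B) @ C == A @ (B @ C) for compatible shapes.
C = np.array([[2, 0],
              [0, 2]])
print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True

# The identity matrix leaves any compatible matrix unchanged: A @ I == A.
print(np.allclose(A @ I, A))          # True
```

These same properties matter later when the hypothesis is written as a matrix-vector product, since the order of the factors cannot be swapped freely.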

Unsupervised and Supervised Learning with Introduction to Hypothesis and Cost Functions

Welcome to the Machine Learning part of the blog! Let’s start by finding the differences between supervised and unsupervised learning. By definition, “the main difference between the two types is that supervised learning is done using a ground truth, or in other words, we have prior knowledge of what the output values for our …
