Decision Boundaries with Logistic Regression

So now that we have defined our hypothesis for logistic regression, let’s talk about what is called a decision boundary. We can translate the output of the hypothesis into a prediction: if h(x) >= 0.5 then y = 1, and if h(x) < 0.5 then y = 0. This is a fair rule because it splits the output range evenly in half. …
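A minimal sketch of that rule in code, assuming the standard sigmoid hypothesis (the function and variable names here are illustrative, not from the article):

```python
import numpy as np

def h(theta, x):
    # Logistic hypothesis: sigmoid of the linear combination theta . x
    return 1.0 / (1.0 + np.exp(-np.dot(theta, x)))

def predict(theta, x):
    # Decision rule: y = 1 when h(x) >= 0.5, else y = 0
    return 1 if h(theta, x) >= 0.5 else 0
```

Since the sigmoid equals 0.5 exactly when the linear term theta · x is 0, the boundary itself is the set of inputs where theta · x = 0.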

Intro to Logistic Regression

Let’s say we want to create a model to solve a binary classification problem: yes or no. For example, based on tumor size, is a specific tumor malignant: yes or no? Let’s look more into this example with the hypothetical data below. If we apply linear regression to this data set, we get: Now …
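To make the excerpt’s point concrete, here is a minimal sketch with invented tumor-size data (the numbers and names are mine, not from the article): a least-squares line happily predicts values outside [0, 1], which is why the post moves to logistic regression.

```python
import numpy as np

# Hypothetical data: tumor sizes (cm) and labels (1 = malignant, 0 = benign)
sizes = np.array([1.0, 1.5, 2.0, 4.0, 4.5, 5.0])
labels = np.array([0, 0, 0, 1, 1, 1])

# Linear regression via least squares: h(x) = theta0 + theta1 * x
A = np.column_stack([np.ones_like(sizes), sizes])
theta = np.linalg.lstsq(A, labels, rcond=None)[0]

print(A @ theta)                    # in-sample predictions
print(theta[0] + theta[1] * 10.0)   # a large tumor yields h(x) > 1 -- not a valid probability
```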

Gradient Descent With Multiple Variables + Improvements

(Refer to Figure 1 first.) Not a lot has changed since the last time we talked about gradient descent (please read that article before continuing). It is hard to explain how the summation and the 1/m factor come into play, but when you take the partial derivative with respect to the variable that you are …
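As a reference point, here is a minimal vectorized sketch of one multivariate update, assuming the usual squared-error cost, whose partial derivative is what produces the summation and the 1/m factor (names are illustrative):

```python
import numpy as np

def gradient_step(theta, X, y, alpha):
    # X: (m, n) design matrix, y: (m,) targets, theta: (n,) parameters
    m = X.shape[0]
    predictions = X @ theta        # h(x) for every example
    errors = predictions - y       # (h(x_i) - y_i)
    gradient = (X.T @ errors) / m  # the summation and 1/m from the derivative
    return theta - alpha * gradient  # simultaneous update of every theta_j
```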

Linear Algebra Integration with Hypothesis Function

Before we go further into looking at data sets and creating more hypotheses, let’s hold a quick review of matrices. Basic multiplication, addition, subtraction, and division between vectors and scalars should be easy to pick up from the many tutorials online. We will primarily go over the properties of matrices in this article. First, matrices are …
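A small NumPy sketch of the properties such a review usually covers (the matrices are arbitrary examples): multiplication is associative and distributes over addition, but is generally not commutative, and the identity matrix acts like the scalar 1.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

print(np.array_equal((A @ B) @ C, A @ (B @ C)))     # associative: True
print(np.array_equal(A @ (B + C), A @ B + A @ C))   # distributive: True
print(np.array_equal(A @ B, B @ A))                 # commutative? generally False
print(np.array_equal(A @ np.eye(2, dtype=int), A))  # identity matrix: True
```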

Conceptual + Mathematical View on Gradient Descent

Continuing from the last article: to narrow down theta_0 and theta_1 we use a learning technique called gradient descent. So we have our cost function J(theta_0, theta_1) and we want the result of the function to be as small as possible. What gradient descent does is randomly start with a theta and …
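For reference, the update rule the article is building toward, in its standard form (alpha is the learning rate, and both parameters are updated simultaneously):

```latex
\theta_j := \theta_j - \alpha \, \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)
\qquad (j = 0, 1)
```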

A Mathematical View on the Cost Function: 2-D, 3-D, and Contour Graphs

Though the equation does look daunting, don’t worry, we’ll go over it. Okay, so the inside term is ŷ_i − y_i, where ŷ_i is basically the hypothesis function h(x). We subtract y (our actual point) from the value of h(x) for all points (that’s why we use the summation). We square the expression because it eliminates any …
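Assembling the pieces the excerpt describes, the equation in question is presumably the standard squared-error cost (the 1/(2m) normalization matches the 1/m factor mentioned in the gradient descent article above):

```latex
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2,
\qquad \hat{y}_i = h(x_i)
```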

Plans for 2020 and Update on Uploading Status

Sorry for not posting these last few months; I have just been trying to get adjusted to an exhausting sophomore life. I have quite a few drafts almost ready to publish, so more articles about machine learning and algorithm design will be coming out soon! For 2020, my goal is to upload an article once …

Unsupervised and Supervised Learning with Introduction to Hypothesis and Cost Functions

Welcome to the Machine Learning part of the blog! Let’s start out by finding the differences between supervised and unsupervised learning. By definition, “the main difference between the two types is that supervised learning is done using a ground truth, or in other words, we have prior knowledge of what the output values for our …
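A tiny illustration of that distinction (the data here is invented for the sketch): supervised data arrives as (input, label) pairs, while unsupervised data is inputs alone.

```python
# Supervised: every input comes with a ground-truth label
supervised = [(2.1, "malignant"), (0.5, "benign"), (1.8, "malignant")]

# Unsupervised: inputs only; the algorithm must discover structure itself
unsupervised = [2.1, 0.5, 1.8, 0.7, 2.3]
```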

Center of Mass; Moment of Inertia

What if we want to track the kinematics and behavior of an object with strings and blocks flailing all over the place? Which point on the object should we measure to find the displacement, acceleration, force, etc.? So let’s just talk about one atom i in the object. The force acting on the atom is …
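Where that argument ends up, in its standard form: internal forces cancel in pairs by Newton’s third law, so the net external force accelerates the mass-weighted average position, the center of mass:

```latex
\vec{r}_{\mathrm{cm}} = \frac{1}{M} \sum_i m_i \vec{r}_i,
\qquad
\vec{F}_{\mathrm{ext}} = M \, \vec{a}_{\mathrm{cm}},
\qquad M = \sum_i m_i
```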
