Gradient Descent
Gradient Descent is an optimization algorithm. The general idea of Gradient Descent is to tweak parameters $latex \theta$ iteratively in order to minimize a cost function by measuring the gradient … Read More
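The iterative tweak described above can be sketched in a few lines of NumPy. This is a minimal illustration for a linear-regression MSE cost; the function name, learning rate `eta`, and toy data are all illustrative assumptions, not the post's own code.

```python
import numpy as np

# Minimal sketch of batch Gradient Descent for the linear-regression MSE cost
# J(theta) = (1/m) * ||X.theta - y||^2 (names and hyperparameters are illustrative).
def gradient_descent(X, y, eta=0.1, n_iters=1000):
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        gradient = (2 / m) * X.T @ (X @ theta - y)  # measure the gradient of the cost
        theta -= eta * gradient                     # step against the gradient
    return theta

# Noiseless toy data generated by y = 2x; gradient descent recovers theta = 2.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
theta = gradient_descent(X, y)
```

The learning rate `eta` controls the step size: too small and convergence is slow, too large and the iterates diverge.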
Gradient Descent is an optimization algorithm. The general idea of Gradient Descent is to tweak parameters $latex \theta$ iteratively in order to minimize a cost function by measuring the gradient … Read More

Machine learning algorithms can be classified into batch or online methods by whether or not the algorithms can learn incrementally as new data arrive. Batch Learning Batch learning methods are … Read More
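The batch/online distinction can be made concrete with two update rules for the same linear-regression cost: a batch step that uses the full training set at once, and an online step that uses one example at a time. The function names and toy data below are illustrative assumptions.

```python
import numpy as np

# Sketch contrasting batch vs. online (incremental) learning for the same
# linear-regression MSE cost; names and hyperparameters are illustrative.
def batch_step(theta, X, y, eta):
    # Batch update: the gradient is computed over the full training set.
    m = len(y)
    return theta - eta * (2 / m) * X.T @ (X @ theta - y)

def online_step(theta, x_i, y_i, eta):
    # Online (stochastic) update: the gradient uses a single example,
    # so the model can keep learning incrementally as new data arrive.
    return theta - eta * 2 * x_i * (x_i @ theta - y_i)

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])   # noiseless data from y = 2x

theta_batch = np.zeros(1)
for _ in range(500):
    theta_batch = batch_step(theta_batch, X, y, eta=0.05)

theta_online = np.zeros(1)
for _ in range(500):
    for x_i, y_i in zip(X, y):  # stream the examples one at a time
        theta_online = online_step(theta_online, x_i, y_i, eta=0.05)
```

On this consistent toy dataset both routes converge to the same parameters; in general, the online route trades some per-step accuracy for the ability to learn from a data stream.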
When we train a linear regression model (i.e., $latex \hat{y} = \theta^T \cdot \textbf{x}$), we might consider two different ways to train it: Using a “closed-form” solution that directly computes … Read More
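The "closed-form" route mentioned above is the Normal Equation, $latex \hat{\theta} = (X^T X)^{-1} X^T y$. A minimal sketch, assuming $latex X^T X$ is invertible and using illustrative toy data:

```python
import numpy as np

# Closed-form linear regression via the Normal Equation:
# theta_hat = (X^T X)^{-1} X^T y  (assumes X^T X is invertible).
X_raw = np.array([[1.0], [2.0], [3.0]])
y = np.array([3.0, 5.0, 7.0])        # generated by y = 2x + 1

X = np.c_[np.ones((3, 1)), X_raw]    # prepend a bias column of ones
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y   # -> intercept and slope
```

The closed form gives the exact minimizer in one shot but costs roughly $latex O(n^3)$ in the number of features, which is why Gradient Descent is preferred when the feature count is large.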
We can determine if the performance of a model is poor by looking at prediction errors on the training set and the evaluation set. The model might be too simple … Read More
This post is a tutorial on using doc2vec and t-SNE visualization in Python for disease clustering. Of course, the tutorial code can be used for any other types of … Read More
The difference between simple linear regression and multiple linear regression is that simple linear regression has only one independent variable whereas multiple linear regression has more than one independent variable. … Read More
In a previous post, we looked at Logistic Regression for binary classification. In this post, we will see how Logistic Regression can be generalized to multiple classes. This … Read More
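The multi-class generalization rests on the softmax function, which turns $latex K$ class scores into probabilities that sum to 1 (reducing to the logistic sigmoid when $latex K = 2$). A small illustrative sketch:

```python
import numpy as np

# Softmax: maps K real-valued class scores to a probability distribution.
# The max-shift does not change the result but avoids overflow in exp.
def softmax(scores):
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

# Illustrative scores for three classes; the largest score gets the
# largest probability, and all probabilities sum to 1.
p = softmax(np.array([2.0, 1.0, 0.1]))
```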
You might guess that this algorithm is only for regression problems because of its name. However, Logistic Regression can also be used for classification problems. Logistic Regression estimates the probability … Read More
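The probability estimate works by passing a linear score through the sigmoid function and thresholding at 0.5. A minimal sketch, with illustrative parameters `theta` and input `x`:

```python
import numpy as np

# Logistic Regression prediction sketch: a linear score theta.x is squashed
# by the sigmoid into an estimated P(y = 1 | x), then thresholded at 0.5.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def predict(theta, x):
    prob = sigmoid(theta @ x)      # estimated probability of the positive class
    return int(prob >= 0.5), prob

# Illustrative weights and features: score = 1*3 + (-2)*1 = 1.0
label, prob = predict(np.array([1.0, -2.0]), np.array([3.0, 1.0]))
```

Since sigmoid(1.0) is above 0.5, this example is assigned the positive class.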
Regularization is a good way to reduce overfitting. For a linear model, the model can be regularized by penalizing the weights of the model. Simply speaking, the regularization prevents the … Read More
In machine learning, L1 and L2 techniques are widely used in cost functions and regularization. It is worth knowing the key differences between L1 and L2 to build a better model. … Read More
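The two penalty terms are simple to write down; the sketch below uses an illustrative regularization strength `lam` and weight vector `w`, not values from the post.

```python
import numpy as np

# L1 penalty: lam * sum of absolute weights. Its gradient, sign(w), has
# constant magnitude, which pushes small weights all the way to zero
# (sparsity).
def l1_penalty(w, lam):
    return lam * np.abs(w).sum()

# L2 penalty: lam * sum of squared weights. Its gradient, 2*w, shrinks
# weights in proportion to their size but rarely makes them exactly zero.
def l2_penalty(w, lam):
    return lam * np.square(w).sum()

w = np.array([0.5, -2.0, 0.0])
lam = 0.1
p1 = l1_penalty(w, lam)   # 0.1 * (0.5 + 2.0 + 0.0)
p2 = l2_penalty(w, lam)   # 0.1 * (0.25 + 4.0 + 0.0)
```

Note how L2 punishes the large weight (-2.0) much more heavily than L1 does, while L1's constant-magnitude gradient is what drives weights to exact zeros.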