-
Sep 17, 2016
Driving a smartcab using Q-learning
Q-learning for a self-driving smartcab.
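For context, here is a minimal sketch of a tabular Q-learning agent of the kind such a project typically uses; the action names and hyperparameter values are illustrative assumptions, not the post's actual code.

```python
import random
from collections import defaultdict

# Hyperparameters (illustrative values, not necessarily the post's)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1       # learning rate, discount factor, exploration rate
ACTIONS = [None, "forward", "left", "right"]

Q = defaultdict(float)                       # Q-table: (state, action) -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```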
-
Sep 16, 2016
Q-learning for a self-driving smartcab: Supplemental material
Q-learning for a self-driving smartcab: Supplemental material.
-
Jun 27, 2016
Linear classification - Softmax
This post introduces the softmax classifier and its cost function, and develops the mathematics behind the weight update rule.
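As a rough sketch of the ideas covered, here is a NumPy implementation of the softmax cross-entropy loss and its gradient; the array shapes, regularization strength, and function name are my own assumptions, not the post's code.

```python
import numpy as np

def softmax_loss_and_grad(W, X, y, reg=1e-3):
    """Cross-entropy loss and gradient for a linear softmax classifier.
    W: (D, C) weights, X: (N, D) features, y: (N,) integer labels (assumed shapes)."""
    N = X.shape[0]
    scores = X @ W                                   # (N, C) class scores
    scores -= scores.max(axis=1, keepdims=True)      # stabilize the exponentials
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)        # softmax probabilities
    loss = -np.log(probs[np.arange(N), y]).mean() + 0.5 * reg * np.sum(W * W)

    dscores = probs
    dscores[np.arange(N), y] -= 1                    # (p - 1) for the true class
    dW = X.T @ dscores / N + reg * W
    return loss, dW

# One gradient-descent step on the weights:
# loss, dW = softmax_loss_and_grad(W, X, y); W -= 0.1 * dW
```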
-
Jun 26, 2016
Linear classification - Support vector machines
This post presents a mathematical derivation of support vector machines. First, a classification cost function is defined; a gradient descent approach is then used to derive the optimal classifier.
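A comparable sketch of the multiclass hinge (SVM) loss and its gradient, under the same assumed shape conventions; the margin and regularization values are illustrative defaults, not values from the post.

```python
import numpy as np

def svm_loss_and_grad(W, X, y, reg=1e-3, delta=1.0):
    """Multiclass hinge (SVM) loss and gradient for a linear classifier.
    W: (D, C), X: (N, D), y: (N,) integer labels (assumed shapes)."""
    N = X.shape[0]
    scores = X @ W                                    # (N, C)
    correct = scores[np.arange(N), y][:, None]        # score of the true class
    margins = np.maximum(0, scores - correct + delta)
    margins[np.arange(N), y] = 0                      # no loss on the true class
    loss = margins.sum() / N + 0.5 * reg * np.sum(W * W)

    binary = (margins > 0).astype(float)              # 1 where the margin is violated
    binary[np.arange(N), y] = -binary.sum(axis=1)     # gradient w.r.t. the true class
    dW = X.T @ binary / N + reg * W
    return loss, dW
```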
-
Jun 25, 2016
Linear regression using matrix derivatives
This post presents basic matrix calculus relations and demonstrates how they can be applied to obtain the coefficients in linear regression. Two methods are presented: gradient descent and a pseudoinverse-based closed-form solution.
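A small NumPy sketch contrasting the two approaches on synthetic data; the data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]    # design matrix with a bias column
true_w = np.array([1.0, 2.0, -3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Closed-form (pseudoinverse) solution: w = (X^T X)^{-1} X^T y
w_pinv = np.linalg.pinv(X) @ y

# Gradient descent on the squared-error cost J(w) = ||Xw - y||^2 / (2N)
w_gd = np.zeros(3)
lr = 0.1
for _ in range(2000):
    grad = X.T @ (X @ w_gd - y) / len(y)
    w_gd -= lr * grad

print(w_pinv, w_gd)   # both estimates should approach true_w
```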
-
Jun 24, 2016
Building a student intervention system: MCA for dimensionality reduction
Multiple Correspondence Analysis (MCA) is a dimensionality reduction technique for categorical variables. Here MCA is applied to identify at-risk students.
-
Jun 24, 2016
Instance-based learning (KNN for image classification) - Part 3
In this post, the k-NN algorithm is applied to classify images in the CIFAR dataset. An accuracy of 28% is obtained for k = 10.
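A minimal sketch of such a classifier, using vectorized Euclidean distances and a majority vote; the random arrays merely stand in for flattened CIFAR images, and the function name is my own.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=10):
    """Classify each test image by a majority vote among its k nearest
    training images (Euclidean distance on raw pixels)."""
    # Pairwise squared distances without explicit loops: ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (X_test**2).sum(1)[:, None] + (X_train**2).sum(1)[None, :] - 2 * X_test @ X_train.T
    nearest = np.argsort(d2, axis=1)[:, :k]           # indices of the k closest training images
    votes = y_train[nearest]                          # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Random data standing in for flattened CIFAR images (32*32*3 = 3072 values each)
rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((500, 3072)), rng.integers(0, 10, 500)
X_te = rng.random((20, 3072))
print(knn_predict(X_tr, y_tr, X_te, k=10))
```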
-
Jun 23, 2016
Instance-based learning (Kernel Methods) - Part 2
This post presents kernel-based algorithms for regression and classification.
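One representative example is Nadaraya-Watson kernel regression; a minimal sketch follows, in which the Gaussian kernel, bandwidth, and toy data are illustrative assumptions and may differ from what the post covers.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson estimate at x_query: a weighted average of training targets,
    with Gaussian weights that decay with distance from the query point."""
    weights = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    return np.sum(weights * y_train) / np.sum(weights)

# Smooth estimate of a noisy sine curve at a few query points
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(x) + 0.2 * rng.normal(size=100)
print([round(kernel_regression(x, y, q), 2) for q in (1.0, 3.0, 5.0)])
```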
-
Jun 10, 2016
Building a student intervention system - EDA
Here I perform exploratory data analysis (EDA) on behavioral and demographic data collected from 395 students to identify the features that correlate with graduation rates.
-
Jun 10, 2016
Multiple Correspondence Analysis (MCA) - Introduction
MCA is a dimensionality reduction technique for data sets composed entirely of categorical variables. In this post, I present the method and apply it to a toy problem.
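A rough sketch of MCA as correspondence analysis of the one-hot indicator matrix; the helper function name, the toy data, and the number of components kept are illustrative assumptions, not the post's code.

```python
import numpy as np
import pandas as pd

def mca_row_coordinates(df, n_components=2):
    """Sketch of MCA: correspondence analysis applied to the indicator matrix."""
    Z = pd.get_dummies(df).to_numpy(dtype=float)          # indicator (one-hot) matrix
    P = Z / Z.sum()                                       # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                   # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    coords = (U * s) / np.sqrt(r)[:, None]                # principal row coordinates
    return coords[:, :n_components], s**2                 # coordinates and inertias

# Toy data set: every variable is categorical
toy = pd.DataFrame({"color": ["red", "blue", "red", "green"],
                    "size": ["S", "M", "M", "L"]})
coords, inertias = mca_row_coordinates(toy)
print(coords)
```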
-
Jun 8, 2016
Instance-based learning (and KNN) - Part 1
In this post, the k-NN algorithm is presented for regression and classification tasks. The effect of the choice of k is demonstrated with regression and classification examples.
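A brief illustration of the effect of k using scikit-learn's k-NN estimators on synthetic data; the data and the values of k are illustrative assumptions, not those used in the post.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y_reg = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)      # noisy regression target
y_clf = (X[:, 0] > 5).astype(int)                         # simple classification target

for k in (1, 5, 25):
    reg = KNeighborsRegressor(n_neighbors=k).fit(X, y_reg)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y_clf)
    # Small k fits the noise (low bias, high variance); large k smooths the fit.
    print(k, round(reg.score(X, y_reg), 2), round(clf.score(X, y_clf), 2))
```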