Classification, Regression and Dimensionality Reduction
| Course | Instructor | Term | Languages | Code Public | Code |
|---|---|---|---|---|---|
| CS 4641 Machine Learning | Charles Isbell | Spring 2014 | Java, Python, C++ | No | Code N/A |
| CS 8803-STR Statistical Techniques in Robotics | Byron Boots | Spring 2015 | Matlab | No | Code N/A |
I had numerous projects in CS 4641 where I had to find data sets (the UC Irvine ML Repository is a fantastic source for this) and analyze them with various algorithms, in an effort to learn what kinds of data topology a particular algorithm handled well, and what kinds it handled poorly, for regression, classification, and dimensionality reduction. These algorithms included pruned Decision Trees, Neural Nets, Boosting, Support Vector Machines, and k-Nearest Neighbors for supervised learning; and k-Means Clustering, Expectation Maximization, PCA, ICA, Randomized Projection, and Insignificant Component Analysis for unsupervised learning and dimensionality reduction.
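To give a flavor of the supervised-learning side, here is a minimal k-Nearest Neighbors classifier in plain Python. This is an illustrative sketch, not the actual coursework code; the function name, the toy data set, and the choice of Euclidean distance are all my own for this example.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(
        train,
        key=lambda pair: math.dist(pair[0], query),  # Euclidean distance
    )
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters on a line
train = [((0.0,), "a"), ((0.1,), "a"), ((0.2,), "a"),
         ((1.0,), "b"), ((1.1,), "b"), ((1.2,), "b")]
print(knn_predict(train, (0.15,)))  # → a
print(knn_predict(train, (1.05,)))  # → b
```

The choice of k and the distance metric are exactly the kinds of knobs those projects explored: a small k tracks local structure closely, while a large k smooths over noise.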
While we were not required to implement these algorithms in the Machine Learning class, and could instead use pre-coded implementations, I implemented most of them anyway to understand them better. I had already implemented neural nets in various languages: in Python, as an assignment for the students to complete when I TA'ed the Intro to AI course, and in C++, as part of the Kinect library I implemented as a research project.
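The simplest neural net of all is a single perceptron, and a sketch of its learning rule shows the core idea behind those implementations. This is a hedged illustration, not code from the course or the Kinect library; the OR data set, learning rate, and epoch count are assumptions chosen so the toy example converges.

```python
# Perceptron learning rule trained on logical OR (linearly separable,
# so the rule is guaranteed to converge).
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

A full multi-layer net replaces the hard threshold with a differentiable activation and propagates the error backward through the layers, but the weight-update loop above is the same skeleton.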