CS273a: Introduction to Machine Learning (CLOSED: 2013 offering)
Lecture: Tues/Thurs 2-3:30pm, BH1600
Instructor: Prof. Alex Ihler (ihler@ics.uci.edu), Office: Bren Hall 4066
Some course notes are in development. Also, a possibly helpful LaTeX template I use for homeworks and solutions.

Introduction to machine learning and data mining
How can a machine learn from experience, to become better at a given task? How can we automatically extract knowledge or make sense of massive quantities of data? These are the fundamental questions of machine learning. Machine learning and data mining algorithms use techniques from statistics, optimization, and computer science to create automated systems that can sift through large volumes of data at high speed to make predictions or decisions without human intervention. Machine learning is now incredibly pervasive, with applications ranging from the web (search, advertisements, and suggestions) to national security, from analyzing biochemical interactions to traffic and emissions to astrophysics. Perhaps most famously, the $1M Netflix Prize stirred up interest in learning algorithms among professionals, students, and hobbyists alike. This class will familiarize you with a broad cross-section of models and algorithms for machine learning, and prepare you for research or industry application of machine learning techniques.

Background
This is an introductory graduate class, intended for first-year graduate students. We will assume familiarity with some concepts from probability, calculus, and linear algebra. Programming will be required; we will primarily use Matlab, but no prior experience with Matlab is assumed.

Textbook and Reading
There is no required textbook for the class. However, useful books for supplementary reading include Murphy, "Machine Learning: A Probabilistic Perspective"; Duda, Hart & Stork, "Pattern Classification"; and Hastie, Tibshirani & Friedman, "The Elements of Statistical Learning".

Grading
The course consists of homeworks, some small in-class quizzes, a project, and midterm and final exams. Grading is approximately as follows (possibly subject to modification):
Homeworks are due at 5pm on the listed day (or at the EEE dropbox closing time). Late homeworks may not be accepted, and will not be accepted after solutions are posted. Please turn in what you have at the deadline.

Collaboration
Please do form study groups to discuss the material, including lectures, homework, past exams, etc. Your fellow students are one of your best resources in this course; Piazza is often useful for this as well. However, you are responsible for the material, and should do the homework yourself. In other words, discussing the concepts in the homework, solution strategies, etc. is fine, but please do not look at others' solutions, exchange code, etc.

Matlab
We will often write code for the course in the Matlab environment. Matlab is accessible through NACS computers at several campus locations (e.g., MSTB-A, MSTB-B, and the ICS lab), and if you want a copy for yourself, student licenses are fairly inexpensive ($100). You may also use the free alternative Octave (heavily tested, but with a poor GUI), or another alternative, FreeMat (newer, less tested), both of which attempt to provide a free, syntax-compatible alternative to Matlab. However, please try to stick to Matlab syntax so that we can run your code in Matlab, and be aware that the code provided to you is likely tested in Matlab and not Octave or FreeMat; the responsibility for discovering and fixing or working around any incompatibilities will be yours. If you're not comfortable with that, use Matlab. If you are not familiar with Matlab, there are a number of tutorials on the web:
You may want to start with one of the very short tutorials, then use the longer ones as a reference during the rest of the term. For getting started, you can even run simple Octave scripts and functions online; a short warm-up script like the one below is a good first test of your setup.
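For example, a first script along these lines exercises the basics you will need all term (a minimal sketch; the variables and labels here are purely illustrative, not course data):

    % warmup.m -- a few Matlab/Octave basics
    X = rand(100, 2);                  % 100 random points in 2D
    y = X(:,1) + X(:,2) > 1;           % a simple binary label for each point
    m = mean(X);                       % column-wise means (a 1-by-2 vector)
    fprintf('Mean feature values: %.2f %.2f\n', m(1), m(2));
    fprintf('Fraction of positive labels: %.2f\n', mean(y));
    plot(X(y==1,1), X(y==1,2), 'b+', X(y==0,1), X(y==0,2), 'ro');
    legend('positive', 'negative');

If this runs and produces a scatter plot in both Matlab and your chosen environment, you are in good shape for the homeworks.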
Syllabus
Course Project
See here (pdf) for the full description. For your course project, you will explore data mining and prediction in the wild: a real-life data set, with your performance compared against teams from around the world. We will use a data set from a past Knowledge Discovery in Data (KDD) Cup, a yearly competition in machine learning and data mining associated with the KDD conference. In particular, we will use the 2004 competition's particle physics data set. The challenge is described in full on the webpage: http://osmot.cs.cornell.edu/kddcup/
Models you might consider include decision trees, linear classifiers (logistic regression, support vector machines, etc.), naive Bayes classifiers, and/or boosted classifiers (decision stumps, etc.). Each member of your team may try one or two models, fitting them to the data and assessing their performance using validation or cross-validation; a sketch of this process follows below.
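As one illustration, a cross-validation loop in Matlab might look like the following. This is a minimal sketch: X, y, train_model, and predict_model are hypothetical placeholders for your feature matrix, label vector, and whichever classifier you choose, not functions provided by the course.

    % 5-fold cross-validation (sketch; train_model/predict_model are placeholders)
    K = 5;                            % number of folds
    N = size(X, 1);                   % number of training examples
    perm = randperm(N);               % shuffle the data order
    err = zeros(K, 1);
    for k = 1:K
        test  = perm(k:K:N);                        % hold out every K-th example
        train = setdiff(perm, test);                % train on the rest
        model = train_model(X(train,:), y(train));  % fit your chosen classifier
        yhat  = predict_model(model, X(test,:));    % predict held-out labels
        err(k) = mean(yhat ~= y(test));             % error rate on this fold
    end
    fprintf('Estimated cross-validation error: %.3f\n', mean(err));

Comparing this estimate across your candidate models is a reasonable way to choose among them before committing to predictions on the test data.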