CS295, Fall 2008: Research Problems in Machine Learning
Instructors: Alex Ihler and Max Welling
Class schedule: Mon/Wed 2:00-3:30pm, DBH 1423

During this class you will engage in research in small groups of approximately 3 to 4 students. Each group will select a research project as its focus for the term. A list of several potential projects will be provided by the instructors, or students may define their own research project (subject to approval, i.e., that it be sufficiently challenging and not identical to the research you were doing for your PhD anyway). We strongly encourage projects that will lead to tangible results (e.g., a publication or a web-based application). See below for some example projects. For the course, students will be required to read and present to the class papers relevant to their project, and to write a research paper or technical report and present the results of their work. Grading will be satisfactory/unsatisfactory. Students will meet as a class on Mondays, and as individual groups with the instructors on Wednesdays.

Reading and Presentation Schedule
October 6th
October 13th
October 20th
October 27th
November 3rd
November 17th
Example (potential) research projects for the class:

Jigsaw puzzle assistant
Imagine a user uploading a photo of the pieces of a jigsaw puzzle. Your task is to build a system that can suggest moves to the user, or potentially solve the puzzle. Possible technical ingredients for this example include:
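One such ingredient might be a low-level score for how well two piece edges fit together, for example by comparing colors along their boundaries. The sketch below is purely illustrative and not part of the original project description; the edge representation and similarity measure are assumptions.

```python
import numpy as np

def edge_colors(piece, side, k=3):
    """Average the outermost k pixel rows/columns on one side of a piece.

    piece: H x W x 3 array of RGB values for a segmented puzzle piece.
    side:  one of 'top', 'bottom', 'left', 'right'.
    Returns a profile of mean RGB colors along that edge.
    """
    if side == 'top':
        return piece[:k, :, :].mean(axis=0)      # shape (W, 3)
    if side == 'bottom':
        return piece[-k:, :, :].mean(axis=0)
    if side == 'left':
        return piece[:, :k, :].mean(axis=1)      # shape (H, 3)
    if side == 'right':
        return piece[:, -k:, :].mean(axis=1)
    raise ValueError(side)

def edge_match_score(piece_a, side_a, piece_b, side_b):
    """Lower is better: mean squared color difference along the shared edge."""
    ea = edge_colors(piece_a, side_a).astype(float)
    eb = edge_colors(piece_b, side_b).astype(float)
    n = min(len(ea), len(eb))                    # crude length alignment
    return float(np.mean((ea[:n] - eb[:n]) ** 2))
```

A move-suggestion system could, for instance, rank candidate neighbors of a given piece by this score and present the best few to the user.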
Stylistic or corrective image transforms
These guys proposed learning transformations from one image to another, which can be used to "impose" a particular image style onto another image. They use it for producing faux artistic effects, and for image restoration tasks such as super-resolution and deblurring, which are (ill-posed) inverse problems.
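To give a rough sense of what "learning a transformation from one image to another" can mean in practice, here is a minimal patch-based sketch in the spirit of example-based filtering: given a training pair (A, A') and a new image B, each patch of B is replaced using the A' patch whose corresponding A patch looks most similar. This is a generic illustration, not the specific method from the paper referenced above, and it assumes small grayscale images.

```python
import numpy as np

def extract_patches(img, size=5):
    """Return all size x size grayscale patches and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            patches.append(img[i:i+size, j:j+size].ravel())
            coords.append((i, j))
    return np.array(patches, dtype=float), coords

def transfer(A, A_prime, B, size=5):
    """For each patch of B, copy the center pixel of the A' patch whose
    A patch is nearest in Euclidean distance (brute-force nearest neighbor)."""
    pa, coords = extract_patches(A, size)
    out = B.astype(float).copy()
    c = size // 2
    for i in range(B.shape[0] - size + 1):
        for j in range(B.shape[1] - size + 1):
            q = B[i:i+size, j:j+size].ravel().astype(float)
            k = np.argmin(((pa - q) ** 2).sum(axis=1))
            ti, tj = coords[k]
            out[i + c, j + c] = A_prime[ti + c, tj + c]
    return out
```

The same structure covers both uses mentioned above: for style, A' is a stylized version of A; for restoration, A is a degraded version of A' (blurred or downsampled), and B is a new degraded image.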
Automatic identity tagging
Currently, there exist a large number of tools to search and organize photos by date, folder, EXIF tags, etc. However, almost no one bothers to tag images with identity information, because it's too much effort. But what if you could do all that automatically, by detecting faces, grouping them, and getting a relatively small amount of user input instead? Aspects include:
Google apparently does something like this on PicasaWeb (described here); see also for example this paper.
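To make the "detect, group, then ask the user" idea concrete, a minimal pipeline might detect faces with an off-the-shelf detector, describe each face crudely (here just a resized grayscale crop, standing in for a real face descriptor), and cluster the descriptors so the user only has to name each cluster once. The sketch assumes OpenCV and scikit-learn are available; the descriptor and cluster count are placeholder choices, not part of the course description.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_descriptors(image_paths, size=32):
    """Detect faces and return (descriptor, source image) pairs.
    The descriptor is a flattened, resized grayscale crop -- a crude
    stand-in for a proper face representation (eigenfaces, embeddings, ...)."""
    feats, sources = [], []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(gray[y:y+h, x:x+w], (size, size))
            feats.append(crop.ravel().astype(float) / 255.0)
            sources.append(path)
    return np.array(feats), sources

def group_faces(image_paths, n_identities=5):
    """Cluster detected faces; the user then labels each cluster once."""
    feats, sources = face_descriptors(image_paths)
    labels = KMeans(n_clusters=n_identities, n_init=10).fit_predict(feats)
    return list(zip(sources, labels))
```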
Image search / retrieval
Because images are typically not tagged with any identifying information, it becomes important to be able to find images using some criterion (which has yet to be decided upon). As an example, see this demo for searching for images based on color, or this demo, which searches based on wavelet coefficients. Alternatively, you might want to find images which are "similar" to some test image. For example, they might contain the same entity or object -- given an image of some animal, plant, etc., you might want to know what it is by searching a database of labeled images for something with a similar appearance.
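A minimal version of "search by color" can be built from global color histograms and nearest-neighbor lookup. The sketch below uses NumPy and PIL and is meant only to illustrate the feature-plus-distance design involved, not the demos linked above; the bin count and the L1 distance are arbitrary choices.

```python
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    """Joint RGB histogram (bins^3 entries), normalized to sum to 1."""
    rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(rgb, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def most_similar(query_path, database_paths, k=5):
    """Rank database images by histogram distance to the query (smaller = more similar)."""
    q = color_histogram(query_path)
    scored = []
    for path in database_paths:
        h = color_histogram(path)
        scored.append((np.abs(q - h).sum(), path))   # L1 distance between histograms
    scored.sort()
    return [path for _, path in scored[:k]]
```

Swapping the histogram for wavelet coefficients, or for features learned from labeled data, changes the notion of "similar" without changing the overall retrieval structure.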
Intelligent web site-map
Automatically create an organization and "map" or hierarchical interface given a web address. For inspiration, see this CalIT2 topic model. One goal here is to make something like this truly automatic, so that no (or almost no) intervention is required. You might use social network analysis, topic models of text, etc. to achieve this.
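As one sense of how topic models could drive an automatic site map: fetch each page, strip the HTML down to text, and fit a small LDA model so pages can be grouped by their dominant topic. The sketch below uses scikit-learn and a very crude HTML-to-text step; the crawler, topic count, and grouping rule are all placeholder assumptions rather than anything specified in the project description.

```python
import re
from collections import defaultdict
from urllib.request import urlopen
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def page_text(url):
    """Fetch a page and crudely strip tags; a real system would use a proper HTML parser."""
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    return re.sub(r"<[^>]+>", " ", html)

def topic_site_map(urls, n_topics=5):
    """Group pages by their dominant LDA topic (a flat placeholder for a real hierarchy)."""
    texts = [page_text(u) for u in urls]
    vec = CountVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(X)
    groups = defaultdict(list)
    for url, dist in zip(urls, doc_topics):
        groups[int(dist.argmax())].append(url)
    return dict(groups)
```

Link structure between the pages (the social-network-analysis angle mentioned above) could then be used to order or nest these groups into a true hierarchy.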
Other ideas