Computer Science
This page shows resources for courses and projects related to Computer Science. The following is a list of advanced, graduate-level Computer Science courses I have taken. Each course has its own resources (practice problems, example code, solutions, ...) that I created or obtained while pursuing my Master's Degree at Indiana University. Please feel free to contact me with any questions or suggestions. For resources from courses I have assisted or taught, look at my teaching section. For resources on the Math side of things, take a look at my Math section.
Professional Practicum
For the entire calendar year 2012, spanning the Spring, Summer, and Fall 2012 academic semesters, I was a research assistant for Dr. Geoffrey Brown in the Computer Science Department at Indiana University, Bloomington. My main responsibility was to supervise a team of undergraduate students building automated scripts for legacy software. This project is now finished, and you can read the abstract of the published paper below. You can also browse the project's repository here and here, and you can visit the conference where this project was published: the ninth annual iPres conference on digital preservation, hosted by the University of Toronto iSchool (Faculty of Information) in Toronto, Ontario, Canada.
Abstract
Access to many born-digital materials can only be accomplished economically through the use of emulation where contemporaneous software is executed on an emulated machine. For example, many thousands of CD-ROMs have been published containing proprietary software that cannot be reasonably recreated. While emulation is proven technology and is widely used to run both current and obsolete versions of Windows and Unix operating systems, it suffers a fatal flaw as a preservation strategy by requiring future users to be facile with today's operating systems and software.
We have previously advocated assisted emulation as a strategy to alleviate this shortcoming. With assisted emulation, a preserved object is stored along with scripts designed to control a legacy software environment and access to the object enabled through a helper application. In this paper we significantly extend this work by examining, for a large data set, both the cost of creating such scripts and the common problems that these scripts must resolve.
Hardware System Design II
This is a graduate-level course offered at Indiana University as part of the foundational requirements for the MS/PhD. I took this course in Spring 2013 with Computer Engineering Specialist Bryce Himebaugh. The class is mainly practical, with labs almost every week and a project at the end. All labs were designed by Bryce.
The following is a compilation of my solutions to assignments given in class. You may also want to download the Drivers.
Labs
- Lab 1 - [Report] - [Code]
- Lab 2 - [Report] - [Code]
- Lab 3 - [Report] - [Code]
- Lab 4 - [Report] - [Code]
- Lab 5 - [Report] - [Code]
- Lab 6 - [Report] - [Code]
- Lab 7 - [Report] - [Code]
- Lab 8 - [Report]
Exams
Project: Rover
The main goal of this class was to use the tools developed in the labs to build a final project. Projects were built in teams; in my case, I worked with Sajith Sasidharan. We built a Rover (a small car) that could (1) be remotely controlled via cellphone and (2) automatically stop before hitting a wall or another obstacle. To accomplish the second goal, we used a sonar to detect how close the rover was to an obstacle and forcefully stop it. The following files contain all of the project's documentation and software. It is interesting to note that goal (1) implied the Rover could be controlled from virtually anywhere (i.e., anywhere with an Internet connection), since we coded a small server to interface between the cellphone and the rover.
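The obstacle-avoidance logic of goal (2) can be sketched as a simple guard between the remote command and the motor output. The function name, command strings, and safety threshold below are hypothetical illustrations, not the actual project code:

```python
# Hypothetical sketch of the Rover's control-loop guard (not the actual
# project code): a remote drive command is only forwarded to the motors
# when the sonar reports enough clearance.

STOP_DISTANCE_CM = 20  # assumed safety threshold

def next_motor_command(remote_command: str, sonar_distance_cm: float) -> str:
    """Return the command to send to the motors.

    Forward motion is overridden with a forced stop when the sonar
    detects an obstacle closer than the safety threshold.
    """
    if remote_command == "forward" and sonar_distance_cm < STOP_DISTANCE_CM:
        return "stop"  # forcefully stop before hitting the obstacle
    return remote_command

print(next_motor_command("forward", 10.0))   # stop
print(next_motor_command("forward", 150.0))  # forward
print(next_motor_command("reverse", 10.0))   # reverse (backing away is allowed)
```

In the real system this check would run continuously between the server relaying cellphone commands and the motor drivers.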
Combinatorics
This is a graduate-level course offered at Indiana University as part of the foundational requirements for the MS/PhD. I took this course in Spring 2013 with Prof. Esfandiar Haghverdi. The book we mainly used was Extremal Combinatorics: With Applications in Computer Science (Texts in Theoretical Computer Science. An EATCS Series). Other books are: Modern Graph Theory (Graduate Texts in Mathematics) and Additive Combinatorics (Cambridge Studies in Advanced Mathematics). These are good resources, but on the more advanced side. Getting help with this material is not that easy.
The following is a compilation of my solutions to assignments given in class.
Assignments
- Assignment 1 - [Solution]
- Assignment 2 - [Solution]
- Assignment 3 - [Solution]
- Assignment 4 - [Solution]
Exams
More Resources
- Multilinear Polynomials
- Alon-Babai-Suzuki’s Conjecture related to binary codes in non-modular version
- Extremal results on finite sets
- Dimension and Polynomials
My take on the subject
Extremal Combinatorics is all about finding bounds. Usually you have some sort of discrete, finite object (say, a graph), and considering its structure (say, the number of edges and vertices) you want to estimate a non-trivial property (for example, the number of triangles in it, i.e., sets of three vertices that are pairwise connected). A classical result of this type is the Handshaking Lemma. The main figure of this field is Paul Erdős, a brilliant mathematician of the past century (which indicates that this is a relatively new field of Mathematics). I think this kind of field, specializing in discrete and finite structures, is going to become more relevant as it relates to Computer Science, and to networks in particular. Amazingly enough, it seems to be a harder subject of study than classical Mathematical Analysis, which deals with infinite and uncountable sets. It is strange how infinite sets are, to an extent, easier to analyze than finite ones (I recommend a series of lectures by Prof. N J Wildberger on this subject).
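The Handshaking Lemma mentioned above states that the degrees of a graph's vertices sum to twice the number of edges (each edge contributes one handshake to each endpoint). It is easy to check on a small, made-up example graph:

```python
# Check the Handshaking Lemma on a small example graph:
# the sum of all vertex degrees equals twice the number of edges.
from collections import defaultdict

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]  # an arbitrary small graph

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1  # each edge adds 1 to the degree of both endpoints
    degree[v] += 1

print(sum(degree.values()), 2 * len(edges))  # 8 8
assert sum(degree.values()) == 2 * len(edges)
```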
Machine Learning
This is a graduate-level course offered at Indiana University as part of the foundational requirements for the MS/PhD. I took this course in Spring 2015 with Prof. Predrag Radivojac. In this course we used the following book: Pattern Recognition and Machine Learning (Information Science and Statistics). In my opinion this is not a very friendly book, and it is rather hard to read. Other alternatives are: Machine Learning by Tom M. Mitchell (McGraw-Hill International Editions Computer Science Series) and The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics).
The following is a compilation of my solutions to assignments given in class.
Assignments
- Assignment 1 - [Solution]
- Assignment 2 - [Solution]
- Assignment 3 - [Solution]
- Assignment 4 - [Solution]
Exams
More Resources
My take on the subject
Machine Learning is an amazing field of study. In a nutshell, Machine Learning is the science (and art!) of discovering patterns in data (see also Elements of AI). This is particularly relevant today as far as Big Data goes. In my opinion, Machine Learning, as a discipline, is a generalization of well-established techniques from Statistics and Mathematics (especially Optimization and Probability) to data sets so big and complex that even today's computer technology might not be enough to give you an analytic (closed-form) solution. What do you do instead? Usually you come up with an iterative procedure, i.e., an algorithm that approximates a solution. Amazingly, the main tool behind many of the techniques used in the field is an old optimization algorithm called Newton-Raphson. The novelty lies in the way this algorithm is used.
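As a reminder of how simple the core Newton-Raphson iteration is, here is a minimal sketch for root finding: starting from a guess, repeatedly move from x to x - f(x)/f'(x). The function names and tolerances are my own choices for illustration:

```python
# Minimal Newton-Raphson iteration for finding a root of f.
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)  # Newton step: f(x) / f'(x)
        x -= step
        if abs(step) < tol:       # stop once the update is negligible
            break
    return x

# Example: the square root of 2 is the positive root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.41421356...
```

In Machine Learning the same idea is applied not to a toy equation but to the gradient of a likelihood or loss function, which is where the interesting variations come in.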
But what are we trying to approximate? Very succinctly, given a data set we hypothesize a data-generating mechanism, i.e., the way in which the data was generated, and then try to estimate the parameters that best fit the data. This procedure gives you a model from which you can, among other things, make predictions and simulate new data. As an example, suppose we have a data set and we hypothesize that it was generated from a Poisson distribution. In this case we need only estimate the parameter lambda, since given this parameter we can completely describe the distribution. There are many ways to perform this estimation, such as Maximum Likelihood (ML) and Maximum a Posteriori (MAP). Once we have an estimate for lambda, we have completely described the data set by the Poisson distribution together with the estimated lambda. In other words, our model is a Poisson distribution with the estimated lambda. And there you go: you have discovered the pattern in your data! (More on this in Statistical Learning Theory.)
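The Poisson example is one of the lucky cases with a closed-form answer: the Maximum Likelihood estimate of lambda is simply the sample mean. A short sketch (the data below is made up for illustration) computes it and sanity-checks it against a brute-force scan of the log-likelihood:

```python
# Maximum Likelihood estimation for a Poisson model: the log-likelihood
# is maximized at lambda_hat = sample mean, so "fitting" is one line.
import math

data = [2, 3, 1, 4, 2, 0, 3, 2]  # hypothetical count data

lambda_hat = sum(data) / len(data)  # closed-form MLE for the Poisson rate
print(lambda_hat)  # 2.125

def log_likelihood(lam):
    # Poisson log-likelihood: sum of x*log(lam) - lam - log(x!)
    return sum(x * math.log(lam) - lam - math.log(math.factorial(x))
               for x in data)

# Sanity check: no lambda on a fine grid beats the closed-form estimate.
best_on_grid = max(log_likelihood(l / 100) for l in range(1, 1001))
assert log_likelihood(lambda_hat) >= best_on_grid - 1e-9
```

When no such closed form exists, the grid scan is replaced by an iterative optimizer, which is exactly the situation described above.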
Of course, the example above is overly simplistic. There are many more variants and issues to consider when building a model. The point, however, is that the estimation of parameters is usually so complicated, with many more parameters to estimate in high-dimensional spaces, that analytic closed-form solutions are impossible and approximate algorithms become necessary. In a way, one can say that Machine Learning was already possible before computers, since the mathematical theory was already in place, but it was not feasible: the amount of computation necessary to build a single model was beyond what could be reliably and consistently performed by humans. In other words, it was not economical to optimize models until computation became relatively cheap. Machine Learning is one of those fields that were ahead of their time.
Elements of AI
This is a graduate-level course offered at Indiana University as part of the foundational requirements for the MS/PhD. I took this course in Fall 2011 with Prof. Kris Hauser. In this course we used the following book: Artificial Intelligence: A Modern Approach (3rd Edition). This is a classic Artificial Intelligence book which gives an overview of the subject. I recommend it both for beginners and as a reference for more advanced work.
The following is a compilation of my solutions to assignments given in class. All code was written in Python by the course's Associate Instructor, Mark Wilson.
Assignments
- Assignment 1 - [Solution] - [Code]
- Assignment 2 - [Solution] - [Code]
- Assignment 3 - [Solution]
- Assignment 4 - [Solution] - [Code]
- Assignment 5 - [Solution] - [Code]
- Assignment 6 - [Solution] - [Code]
- Assignment 7 - [Solution]
- Assignment 8 - [Code]
More Resources
- Information Sheet
- Final Practice Exam - [Solution]
- Final 2007
- Final 2006
- Final 2004
- Final 2002
- Final 1996
- Bayes Independence
- Bayes Network
My take on the subject
Artificial Intelligence is one of the core areas of Computer Science. In essence, the subject can be equated with Automated Decision Making. The central question is: how can we enable computers to make their "own" decisions? This is by nature a philosophical inquiry. To begin with: what on earth does it mean for a computer to make a decision by itself? Can computers ever act on their own? These are very interesting questions; however, this course does not deal with any of these issues - maybe a quick mention in the very first class, and that is it. Instead, the course is mainly concerned with the practical aspects related to such questions.
The current reality of the subject is far from what science fiction novelists would have us believe. In fact, current AI can be thought of as a framework of techniques and tools, the majority of them mathematical in nature, for exploiting the vast quantities of data available: they allow computers to crunch numbers really, really fast and discover patterns that would otherwise be invisible to humans (see also Machine Learning).
At the end of the day, we must remember that computers are machines capable of computation at speeds far beyond what any human can achieve. On the other hand, Computer Systems - the combination of all the tools related to modern computers: Databases, Operating Systems, Networks, etc. - do emerge as something more than number crunching. It is at this level that Artificial Intelligence is making a real contribution and will continue to do so in the future, even if it does not become what the popular imagination thinks it will.
Advanced Database Concepts
This is a graduate-level course offered at Indiana University as an optional part of the requirements for the MS/PhD. I took this course in Fall 2011 with Prof. Yuqing Melanie Wu. In this course we used the following book: Database Management Systems, 3rd Edition. I recommend this book: it is a decent textbook to get you started thinking about Databases as an integral part of modern Computer Systems and as an interesting subject of study in its own right.
The following is a compilation of my solutions to assignments given in class.
Assignments
- Assignment 1 - [Solution]
- Assignment 2 - [Solution]
- Assignment 3 - [Solution]
- Assignment 4 - [Solution]
More Resources
- MidTerm Information Sheet
- Final Exam Information Sheet
- MidTerm Review written by Mo Zhou
- MidTerm 2011 Solutions
- Final Exam 2004
- Final Exam 2003
- Final Exam 2001
- Final Exam 2000
Project: B-Town Properties
In this course students are required to form teams and work on a project. This typically means building a dynamic website over the course of the semester. In my case, my team (one of the members was Jay Mercer) built a site called B-Town Properties, with the following scope: "BTown Properties is a web-based system where members and visitors can search the system’s database for other members’ properties. Only a member is allowed to contact another member to set meeting times to see a property. The system will allow members to upload information regarding their properties, maintain watch lists of properties and property features, and write reviews about other members’ properties. Other features, such as social networking, may be added if time permits." Here are some files related to this project:
- Group 13 - Milestone 1
- Group 13 - Milestone 2
- Group 13 - Milestone 3
- Group 13 - Milestone 3 - Presentation
- BTown Properties App, written in PHP using Zend Framework
You can also check out the GitHub Repository for this project.
Theory Of Computing
This is a graduate-level course offered at Indiana University as part of the foundational requirements for the MS/PhD. I took this course in Spring 2012 with Prof. Dirk Van Gucht. The book we used for this course was Introduction to the Theory of Computation by Sipser. This is a must-have book in the library of any Computer Scientist. Highly recommended.
The following is a compilation of my solutions to assignments given in class.
Assignments
- Assignment 1 - [Solution]
- Assignment 2-1 - [Solution]
- Assignment 2-2 - [Solution]
- Assignment 3 - [Solution]
- Assignment 4 - [Solution]
- Assignment 5 - [Solution]
- Assignment 6 - [Solution]
The following is a compilation of resources I found useful while taking the course and thinking about this subject. These are not necessarily resources given in the course; some I found online or compiled myself. Feel free to contact me if you would like to add a resource to this page or have any suggestions.
Resources
- Logic and Set Theory, taken from Professor John Watrous' Page
- Math for Computer Science
- Turing Machines Overview
- An example of a proof by structural induction
- Course Notes
- Notes on Reduction
- Regular Expressions, taken from Professor Mitsunori Ogihara's Page
- Convert NFA to DFA, taken from Professor John Watrous' Page
- Diagonalization, taken from Professor John Watrous' Page
- Halting Problem, taken from Professor Mitsunori Ogihara's Page
- Notes on Rice's Theorem, taken from Professor Luca Trevisan's Page
- Rice Theorem and Facts
- MidTerm Example Test
- Homeworks with solutions Example 1, taken from Professor Marvin K. Nakayama's Page
- Homeworks with solutions Example 2, taken from Professor Marvin K. Nakayama's Page
My take on the subject
The thing I found most interesting about Theory of Computing is not what computers can do in principle, but what they can't. This is the first serious, mathematically rigorous course that emphasizes the limitations of computing rather than focusing on the usual (seemingly) infinite capacity of modern computers. To really understand how to use a tool you need to know not only what you can do with it but also what you cannot. Any computer scientist who wants a more profound understanding of their field should really take a course of this nature. The main result to understand is the undecidability of the halting problem, and the main tool for proving it is diagonalization, a proof technique invented by Cantor.
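Cantor's diagonal construction can be illustrated on a finite example: given any list of 0/1 sequences, flip the i-th bit of the i-th sequence and you get a sequence that differs from every row, so it cannot appear in the list. The rows below are arbitrary; the same trick, applied to an enumeration of Turing machines, drives the halting-problem proof:

```python
# Cantor's diagonal construction on a finite example: build a sequence
# that differs from row i in position i, hence is not in the list.
rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
]

diagonal = [1 - rows[i][i] for i in range(len(rows))]  # flip each diagonal bit
print(diagonal)  # [1, 0, 1, 0]
assert diagonal not in rows  # differs from every row by construction
```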