Saturday, April 5, 2014

The End of the Semester: Final Thoughts

This will probably be my last blog entry of the semester, since we finally finished week 12. There is a lot I can say about this semester, but I’ll keep it short and to the point.

First of all, after I finished CSC108, I believed that I had gathered quite a large amount of programming knowledge and would be well equipped to program a wide variety of things. However, after going through CSC148, I realized that there is a huge world of knowledge I still need to explore before I can call myself a computer scientist. Algorithms, data structures, recursion, and efficiency, all of which I learned in this course, are key stepping stones on that path. The course was definitely tougher than 108, but not in a negative way: it let me test the limits of my knowledge and problem solving. Most prominently, I've come away with a much stronger grasp of OOP after completing this course.

The assignments, in my opinion, were extremely balanced: not too easy, and not too difficult. They asked us to do things we had learned about, without demanding anything excessively hard, so I have no real complaints about the assignments and exercises. The tests weren't too bad either, though sometimes the questions needed a little more time than was given (at least for me!). Professor Danny Heap was an excellent professor and created a positive environment for learning in lectures. Overall, I found the course material and the teaching staff really helpful in developing my skills.

In conclusion, I've devoted a lot of time to this course, and as a result, I've gained a lot from it. Looking back at my earlier sLOG posts, I can definitely see the improvement and training I've gone through over the last semester. I'm looking forward to taking more computer science and programming-related courses in the future, and I wish everyone reading this the best!

Saturday, March 29, 2014

Sorting and Efficiency: Why is it important?

This week's topic is arguably one of the most important aspects of computer science. The goal of computer scientists should be to create programs, algorithms, and software that perform at an optimal rate, reducing the time and money required to achieve the desired result. One category of algorithms that is linked very closely to the concept of efficiency is sorting algorithms. Sorting algorithms take various different approaches that all lead to the same result: a sorted array of data. Now, the question some people might ask is: if they all lead to the same conclusion, why do we need all these different sorting algorithms? The answer, as you might expect, is that they differ in their efficiency.

Different tasks have different thresholds that a programmer has to meet (some stricter, some more lenient), and that determines which sorting algorithm best suits their needs. For example, suppose your employer asks you to write a program that sorts an array in at most x units of time. The keyword here is maximum: the programmer would need to go through all of the sorting algorithms they know and choose one that finishes within that time even on the worst possible input. The guarantee that matters here is the algorithm's worst-case run-time, and run-times like this are usually expressed with a concept called Big O notation.

Big O notation gives us information about the performance of an algorithm as a function of its input size n. Big O is essentially a bound on a function, restricting its growth. Other bounds include Big Theta and Big Omega; however, we're only interested in Big O right now. Big O is an upper bound on a function, meaning the function cannot grow faster than the bounding function (beyond a constant factor). One important thing to note is that saying an algorithm runs in Big O of a function does not automatically mean that is its worst-case run-time. A case is not the same as a bound: it is also possible to measure the average case and best case in terms of Big O.
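For the record, the usual formal definition goes like this: f(n) is in O(g(n)) if there exist positive constants c and n0 such that f(n) <= c * g(n) for all n >= n0. In other words, past some point, f grows no faster than a constant multiple of g.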

If an algorithm's run-time is in Big O of a function, then its run-time never grows faster than that function. To relate this to sorting algorithms, there are various values of Big O that a sorting algorithm may have. For these examples, we'll look at the average-case time complexities, for comparison's sake. First of all, O(1) describes an algorithm whose run-time never changes regardless of the input size; no general sorting algorithm can actually achieve this, since sorting requires at least looking at every element. Next, we have the O(n^2) algorithms, such as insertion sort, selection sort, and bubble sort, which are not used very often on large inputs because they are not very efficient. Then we have the O(n log n) algorithms, which are generally the most used, such as mergesort, quicksort, and heapsort.
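To make the difference concrete, here's a little comparison I sketched out myself (this isn't course code): insertion sort, which is O(n^2) on average, against Python's built-in sorted(), which uses an O(n log n) algorithm (Timsort).

    import random
    import time

    def insertion_sort(lst):
        """Sort lst in place by inserting each element into the
        already-sorted portion to its left. O(n^2) on average."""
        for i in range(1, len(lst)):
            value = lst[i]
            j = i - 1
            # Shift larger elements one slot to the right.
            while j >= 0 and lst[j] > value:
                lst[j + 1] = lst[j]
                j -= 1
            lst[j + 1] = value

    data = [random.randint(0, 10000) for _ in range(5000)]

    start = time.time()
    insertion_sort(data[:])      # sort a copy so both see the same input
    print('insertion sort:', time.time() - start)

    start = time.time()
    sorted(data)                 # built-in O(n log n) Timsort
    print('built-in sorted:', time.time() - start)

On my understanding, the gap between the two only gets wider as the list grows, which is exactly what the Big O values predict.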

In conclusion, it is important to keep in mind that as computer scientists and programmers, we can reduce the time and money invested in common operations such as sorting simply by choosing the algorithm with the right efficiency for the job. Going forward, what we've learned about efficiency will be crucial in writing programs that perform as well as possible while keeping our code as clean as it can be.

Friday, February 28, 2014

Recursion: An In-depth Look

When faced with a large and repetitive problem in the real world, it’s highly impractical to keep repeating the tedious little details over and over. Thus, we often find ways to break the problem up into little parts that come together to complete the task. The same approach can be applied to computer science, resulting in recursion.

Recursion can be defined as a technique in which a big problem is solved by solving smaller and smaller instances of the same problem, whose results combine to solve the initial problem. A simple example of recursion is returning the sum of the values in a nested list. We keep summing the non-list elements, and call the function again on each list element inside the initial list. Each inner list then returns a sum once a base case is reached, and that is key to recursive functions: the function keeps recursing deeper until a specified base case is reached (acting as a signal that the element in question is finally acceptable to work with), at which point it finally returns a value (in this case, the sum). I personally like to compare the workings of recursive functions to Russian Matryoshka dolls, because like the dolls, recursive functions keep going deeper within themselves to solve a problem.
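Here's a small sketch of that nested-sum idea (my own version, not the exact code from lecture):

    def nested_sum(lst):
        """Return the sum of all the numbers in a possibly nested list."""
        total = 0
        for element in lst:
            if isinstance(element, list):
                # Recursive case: dig into the inner list.
                total += nested_sum(element)
            else:
                # Base case: a plain number, so just add it.
                total += element
        return total

    print(nested_sum([1, [2, 3], [4, [5, 6]]]))  # 21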

One important thing to keep in mind is when and when not to use recursion. Generally, after using it for a while in class and with a bit of reading, I realized that recursion is extremely useful compared to iteration when working with nested data. When working with linear data, on the other hand, recursion can just overcomplicate things, and we should stick with iterative solutions. Ultimately, the goal of computer scientists should be to use whichever approach solves the problem in the most efficient and clear way. This is presumably why the instructors introduced recursion to us: so that we are equipped with the tools required to tackle many different types of problems.

In conclusion, recursion was difficult for me at the start because I had never attempted anything like it before. However, after a few weeks of tracing and hands-on work in the labs and A1, I feel like I've become comfortable with it and can use it whenever it is required.

Wednesday, February 19, 2014

Trees: A New Type of Data Structure & Reflection on A1

I know I’m posting this in the middle of reading week, but I’m covering week 6 of the course because it was midterm week and I didn’t have enough time to cover the stuff I wanted to last week.

So this week we covered a new data structure called a tree, along with different ways to traverse it (called tree "traversal" methods). Basically, a tree is a hierarchical data structure, similar to a family tree: each node can have a parent, children, and an ancestry. Unlike a family tree, however, each node can have at most one parent. The most common type of tree is the binary tree, in which each node has at most two children. After that, we also learned the vocabulary for different attributes of a tree (such as height, depth, and paths).

The tree traversal methods are pre-order, in-order, and post-order traversal, and each can be remembered by where the node (N) sits relative to its left (L) and right (R) subtrees. Pre-order traversal is NLR: visit the node (N) first, then branch off to the left (L, if possible), and lastly the right (R, if possible). In-order traversal is LNR, meaning the node comes in the middle, and post-order traversal is LRN, meaning the node comes last. Keeping these three acronyms in mind is an easy way to check that a traversal is producing values in the right order.
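To illustrate, here's a quick sketch of all three traversals on a bare-bones binary tree node (my own toy class, not the one from lecture):

    class BTNode:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def preorder(node):   # NLR
        return [] if node is None else (
            [node.value] + preorder(node.left) + preorder(node.right))

    def inorder(node):    # LNR
        return [] if node is None else (
            inorder(node.left) + [node.value] + inorder(node.right))

    def postorder(node):  # LRN
        return [] if node is None else (
            postorder(node.left) + postorder(node.right) + [node.value])

    tree = BTNode(2, BTNode(1), BTNode(3))
    print(preorder(tree))   # [2, 1, 3]
    print(inorder(tree))    # [1, 2, 3]
    print(postorder(tree))  # [1, 3, 2]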

To conclude the tree discussion: it was a fairly easy topic and didn't pose too many difficulties, and trees turn out to be a pretty useful recursive data structure.

To finish off the blog, I want to talk a little bit about Assignment 1, which we handed in last week. I found it pretty easy and fun until step 5, which started to give our group a lot of problems. In the end, we never managed to write a function that moved the cheeses across four stools in the provably minimum number of moves, but our program did move them on four stools in far fewer than 2^n - 1 moves (the three-stool minimum), so that was something we were pretty happy about.
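For anyone wondering where that 2^n - 1 figure comes from: it's the minimum number of moves for the classic three-stool version, which has a neat recursive solution. Here's a rough sketch (my own, not our A1 code):

    def move_cheeses(n, source, intermediate, destination):
        """Print the moves that shift n cheeses from source to
        destination using intermediate as a spare stool; this
        takes 2**n - 1 moves in total."""
        if n > 0:
            # Move the top n - 1 cheeses out of the way...
            move_cheeses(n - 1, source, destination, intermediate)
            # ...move the biggest cheese directly...
            print(source, '->', destination)
            # ...then stack the n - 1 cheeses back on top of it.
            move_cheeses(n - 1, intermediate, source, destination)

    move_cheeses(3, 1, 2, 3)  # prints 7 moves, i.e. 2**3 - 1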


It was a long week, so thanks for reading! Next week is finally our first term test, and I’ll be talking more about that later on.

Thursday, February 13, 2014

This Week: Namespaces and Scope!

This week we covered two different things in the lectures. Firstly, we discussed the recursive algorithm for moving cheeses from a source stool to a destination stool that is required in A1. However, I'll talk more about A1 next week, when it's actually due! The second topic was namespaces and scopes. The key thing I learned this week is that two functions with an identical name are unrelated if they are stored in separate namespaces. A scope is the region of a program from which a namespace is accessible, and it determines where a name is searched for. This is why, in A1, TOAHModel has a move() function and ConsoleController also has a move() function, and they can interact without any problems: they are stored in different namespaces.
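A toy example of what I mean (hypothetical classes, not the actual A1 code):

    class Model:
        def move(self, origin, dest):
            print('model: moving from', origin, 'to', dest)

    class Controller:
        def __init__(self, model):
            self.model = model

        def move(self, origin, dest):
            # Same name, different namespace: Controller.move can
            # call Model.move without any clash.
            print('controller: handling a move request')
            self.model.move(origin, dest)

    Controller(Model()).move(1, 2)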

Other than that, this week wasn't too bad, and I am fairly confident about the material covered. I'm sure I'll have more to talk about next week, when our first assignment is due!

Sunday, February 9, 2014

Recursion can be tough!

Recursion was the topic of this week's lectures. It was a totally new concept for me coming into CSC148, and as I expected, it was pretty hard to get my head around at first. The whole idea of a function going deeper inside itself until a base case is reached was complex for me, and I didn't really understand it. However, I was reading other students' SLOGs and found a very interesting comparison between the movie Inception and recursion. While reading it, I found myself smiling because the similarities really stood out to me. Now, whenever I think about recursion, I can visualize it the way I visualized Inception: something "inside" something, in this case a process "inside" a process. Our professor told us to trace the recursive pattern until we get an answer, and to "not re-do what we already know". This process of tracing and plugging in helped me figure out how recursion basically works. Now I just need a little more practice (which Assignment 1 will be giving me plenty of!) to get a much firmer grip on recursive functions. We also saw turtle graphics used to show how a recursive function can branch off into a very complex design, something like the sketch below.
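This is my own little reconstruction of the idea (not the in-class code): each branch draws itself and then spawns two smaller copies of itself.

    import turtle

    def branch(t, length):
        """Recursively draw a pattern where each branch splits in two."""
        if length < 10:               # base case: too small to draw
            return
        t.forward(length)
        t.left(30)
        branch(t, length * 0.7)       # smaller left sub-branch
        t.right(60)
        branch(t, length * 0.7)       # smaller right sub-branch
        t.left(30)                    # restore the original heading...
        t.backward(length)            # ...and retrace back to the start

    t = turtle.Turtle()
    t.left(90)                        # point the turtle upward
    branch(t, 80)
    turtle.done()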

One other thing that was taught to us, which I found extremely helpful, was list comprehension. It's something I had seen before on websites, but never bothered to learn because it wasn't necessary at the time. I kind of chide myself now for being lazy, because list comprehensions could have cut my code down by numerous lines! It's something I'm slowly bringing into my coding to make it more concise.
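A quick example of what I mean (my own toy comparison):

    # The loop version...
    squares = []
    for x in range(10):
        if x % 2 == 0:
            squares.append(x ** 2)

    # ...collapses into a single list comprehension.
    squares = [x ** 2 for x in range(10) if x % 2 == 0]
    print(squares)  # [0, 4, 16, 36, 64]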

Wednesday, January 22, 2014

Object-Oriented Programming

This week, I'm going to be talking about object-oriented programming, a topic that has constantly been presented to students as extremely important, and I'll be sharing my thoughts on it.

Object-oriented programming is programming built around classes, objects, and methods, and it allows us to create new classes that can be tailored to our requirements. By default, a new class inherits all of the methods of the built-in 'object' class, and it can also inherit the methods of other classes. For example, we can create a class that records a list of books a person has read, and then write methods to sort the titles, authors, publishers, and more. By using object-oriented programming, we can deal with many of the different requirements we encounter while programming by developing specific classes to tackle them. I believe the greatest asset of OOP is that it lets programmers be flexible with their data and their program requirements.
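A rough sketch of that book example (a hypothetical class, just for illustration):

    class ReadingList:
        """Record the books a person has read."""

        def __init__(self):
            self.books = []   # list of (title, author) tuples

        def add(self, title, author):
            self.books.append((title, author))

        def sorted_by_title(self):
            return sorted(self.books)

        def sorted_by_author(self):
            return sorted(self.books, key=lambda book: book[1])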

Personally, object-oriented programming was never too difficult for me; however, prior to this week, I did struggle to understand the concept of the 'self' variable used in defining methods for a class. When creating a class that used variables created in the __init__ method, I had to resort to trial and error with any variable I wanted to use in a later method. For example, I didn't know for sure when to put self before a variable's name to save it. This was partly because it had been a while since I attempted OOP, but even in CSC108 I found it a little tricky to create methods that shared variables. This week, I finally learned that 'self' is just a reference to the current instance of an object, and how to use variables created in the initializer in other methods. Working on programs during the labs refreshed my memory, and discussing the work with my lab partner also helped me get a better understanding of the details behind OOP.
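The way I understand it now, in a tiny example of my own:

    class Counter:
        def __init__(self):
            count = 0          # local variable: thrown away when __init__ ends
            self.count = 0     # instance attribute: saved on the object

        def increment(self):
            self.count += 1    # later methods can reach it through self

    c = Counter()
    c.increment()
    c.increment()
    print(c.count)  # 2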

To summarize, during week three I came to understand OOP better than I did before, and since it is so important, I will definitely be using it in many of the programs I write in the future.