Algorithms and Big O Notation
The five principal Big O values you find in programming
I wrote an article the other day about the programming profession and the qualities & skills you need. One of those skills was math. I then wrote a second article on algorithms and again mentioned math. And, well, I thought it might be a good time to actually write about math for a change.
So here we go. Now, there is a fair bit of math in computer science, although I would argue most of it is reasonably well-grounded. I mentioned base theory, boolean logic, geometry, matrices and percentages as candidates. But I don’t want to talk about any of those right now. I want to talk about a math thing called big O notation. Big O notation is a means of measuring the performance of algorithms.
It is a notation invented by Paul Bachmann and Edmund Landau, individuals who, you won’t be surprised to learn, were mathematicians. Although you may be taken aback to learn that they were both born in the nineteenth century, long before computers were invented.
The notation is used as a standard way to compare how algorithms perform as your data input grows. Here is a cheat sheet that shows you a graph of what is what.

In short, you’re looking for algorithms that sit in the green band and want to avoid those in the red. So what does all this mean practically?
O(1)
This denotes constant performance. It doesn’t matter how much data you add to your set; the time it takes is always the same. An example of this might be a function that calculates something, pure math if you like. It won’t matter what sort of numbers you send to the function; the time it takes will always be constant. This is the green zone.
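Here is the sort of thing I mean, sketched in Swift (the function itself is just an illustration, any fixed calculation would do):
func hypotenuse(_ a: Double, _ b: Double) -> Double {
    // The same handful of operations run whatever numbers you pass in,
    // so the time taken never changes: O(1).
    return (a * a + b * b).squareRoot()
}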
O(n)
This is also known as linear time. This means the amount of time it takes for your method to run is directly linked to the amount of data you feed it. A simple example in Swift looks like this.
for t in 0...n {
    // The loop body runs once for every value up to n,
    // so the work grows in step with n.
    print("t", t)
}
Now, as you can see, the bigger you make n, the longer it will take for the method to complete. But the growth is uniform: plot the time against n on a graph and you get a straight line. On the chart above we’re in the yellow zone.
O(n²)
This is also known as quadratic time. This means the amount of time it takes for your method to run increases dramatically as the amount of data you send in grows. A simple example looks like this.
for t1 in 0...n {
    for t2 in 0...n {
        // The inner loop runs in full for every pass of the outer loop,
        // so the work grows with n * n.
        print("t1 t2", t1, t2)
    }
}
I am sure you can appreciate that the greater the value of n, the longer it will take. Here we’re in the red zone.
O(log n)
This is known as logarithmic time. It means the amount of time it takes for a method to run increases in proportion to the logarithm of the input data size. The classic example of a logarithmic algorithm is a binary search.
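A minimal sketch of one in Swift, assuming the array is already sorted, might look like this (the function name and the optional return are my own choices):
func binarySearch(_ values: [Int], for target: Int) -> Int? {
    var low = 0
    var high = values.count - 1
    while low <= high {
        // Halve the search range on every pass; that halving is
        // where the log n comes from.
        let mid = (low + high) / 2
        if values[mid] == target {
            return mid            // found it
        } else if values[mid] < target {
            low = mid + 1         // look in the upper half
        } else {
            high = mid - 1        // look in the lower half
        }
    }
    return nil                    // not in the array
}
Doubling the size of the array only adds one more pass through the loop, which is why this one sits comfortably in the green.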
O(n log n)
This is also known as log-linear time: logarithmic time multiplied by n [which is the amount of data]. A merge sort is a classic example of an algorithm with a big O of n log n.
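Here is a rough sketch of one in Swift (my own sketch rather than a definitive implementation, with the merge helper split out for readability):
func mergeSort(_ values: [Int]) -> [Int] {
    // The repeated halving of the array gives us the log n...
    guard values.count > 1 else { return values }
    let mid = values.count / 2
    let left = mergeSort(Array(values[..<mid]))
    let right = mergeSort(Array(values[mid...]))
    return merge(left, right)
}

func merge(_ left: [Int], _ right: [Int]) -> [Int] {
    // ...and the merge pass, which touches every element, gives us the n.
    var merged: [Int] = []
    var i = 0
    var j = 0
    while i < left.count && j < right.count {
        if left[i] <= right[j] {
            merged.append(left[i])
            i += 1
        } else {
            merged.append(right[j])
            j += 1
        }
    }
    merged.append(contentsOf: left[i...])
    merged.append(contentsOf: right[j...])
    return merged
}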
Which covers our five main cases. To be a better programmer, you want your code sitting at the top of this list. You don’t want to build an app with loops that have a Big O complexity at the bottom, unless, of course, you like being roasted alive in your code reviews.
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
Now, it isn’t just about the time taken for your function to run, because you can use the same idea when talking about the memory used by your application. Obviously the same heuristic applies: you want O(1) in an ideal world, and you definitely don’t want O(n²).
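To give you a feel for it, here is a quick sketch in Swift of the same job done two ways (the function names are just for illustration): the first keeps an array that grows with n, the second gets by with a single variable.
func doubledValues(upTo n: Int) -> [Int] {
    // O(n) memory: the result array holds one entry per value up to n.
    var result: [Int] = []
    for i in 0...n {
        result.append(i * 2)
    }
    return result
}

func sumOfDoubledValues(upTo n: Int) -> Int {
    // O(1) memory: only a single running total is kept,
    // however large n gets.
    var total = 0
    for i in 0...n {
        total += i * 2
    }
    return total
}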
As a closing note, here is the source of the graphic I used: an excellent reference should you be interested in learning more or seeing more examples than I gave in this super short article on the subject.
Keep calm, keep coding