# Improving on an Algorithm: Time Complexity

Recently, while revisiting an old solution I wrote months ago for the LeetCode problem How Many Numbers Are Smaller Than the Current Number, I suddenly found that although my previous solution used a trick in an attempt to run faster, the function I relied on made it just as slow as a brute force solution.

### Wait, What Is Time Complexity?

Time complexity is a common concept in the Computer Science world. However, as it is a necessary tool for the discussion here, let’s make sure we are on the same page before we start.

Time complexity is a measurement of the running time of an algorithm. It is estimated by counting the ‘steps’ the algorithm takes.

When calculating time complexity, we focus on the ‘growth’ of running time as the input size grows. We use ‘big-O notation’ to represent the time complexity of an algorithm, such as *O(1)*, *O(N)*, *O(N log N)*, *O(N²)*, *O(2ᴺ)*, etc., where *N* represents the input size.

As we focus on the growth of the time complexity, there are two things we consider trivial when calculating it:

- Constant factors are dropped: as we only consider the growth for big-O notation, *O(N)* and *O(2N)* are both considered *O(N)*.
- Lower-order terms are dropped: in something like *O(N² + N)*, the *O(N²)* term would dominate the growth when the input size gets bigger. Therefore, the *O(N)* part is trivial, and *O(N² + N)* is the same as *O(N²)*.

The following are some common time complexities; the higher in the table, the faster:

| Big-O Notation | Name |
|---|---|
| O(1) | constant time |
| O(log N) | logarithmic time |
| O(N) | linear time |
| O(N log N) | linearithmic time |
| O(N²) | quadratic time |
| O(2ᴺ) | exponential time |
| O(N!) | factorial time |

#### Why Does Improving Time Complexity Matter?

Although computers nowadays run pretty fast, when an algorithm has, for instance, exponential time complexity, the programme can still take a very long time to complete once the input size gets huge.

However, there is not always a faster algorithm to be found. For example, it is impossible for a general comparison-based sort to run in linear time (*O(N)*); the best we can get for sorting is *O(N log N)*.

If it is possible to find an algorithm that solves the problem with better time complexity, it is always worth doing, even as computers keep getting faster. A slow computer running an algorithm with good time complexity can still outrun a fast computer running an algorithm with bad time complexity.

### Brute Force

A brute force algorithm is usually the slowest way to solve a problem. It works through every possible combination to find the answer.

A brute force method to solve this problem can look like:

```
class Solution {
    func smallerNumbersThanCurrent(_ nums: [Int]) -> [Int] {
        var output = [Int]()
        var temp = 0
        for i in nums {         // N steps
            temp = 0
            for j in nums {     // N steps
                if j < i {
                    temp += 1
                }
            }
            output.append(temp)
        }
        // Nested for loops take N * N steps
        // (N inner steps for each of the N outer steps)
        return output
    }
}
```

As we need nested for-in loops, each taking *N* steps, this algorithm easily reaches the realm of *O(N²)*; any faster way to solve this problem would be an improvement.
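Restating the nested-loop counting as a standalone snippet, we can check it against a small sample input:

```
// Standalone version of the brute-force counting above
let nums = [5, 1, 4, 4, 2]
var output = [Int]()
for i in nums {
    var temp = 0
    for j in nums where j < i { // count every element smaller than i
        temp += 1
    }
    output.append(temp)
}
print(output) // [4, 0, 2, 2, 1]
```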

### Consider Ver. 1 Solution

The following code was my first attempt:

```
class Solution {
    func smallerNumbersThanCurrent(_ nums: [Int]) -> [Int] {
        var output = [Int]()
        let sorted = nums.sorted() // O(N log N)
        for i in nums {            // N steps
            if let id = sorted.firstIndex(of: i) { // N steps
                output.append(id)
            }
        }
        // Still N * N steps
        return output
    }
}
```

Let’s walk through this code first.

The trick in this code is that in a sorted array, the index of the first instance of an item equals the count of numbers smaller than that item.

So in the beginning, we prepare an empty `[Int]` array for the output. As the input array is not sorted, we use the native sorting function, implemented with the Timsort algorithm (*O(N log N)*), to get a sorted copy.

Let’s consider this sample input/output:

Sample Input: [5,1,4,4,2]

Sample Output: [4,0,2,2,1]

If we sort the array, we get [1,2,4,4,5], with indices [0,1,2,3,4]. Whenever we want to know how many numbers are smaller than 5, we just need to find the index of its first instance in the sorted array.
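We can check this trick quickly in a playground:

```
let sorted = [5, 1, 4, 4, 2].sorted()
print(sorted)                    // [1, 2, 4, 4, 5]
print(sorted.firstIndex(of: 5)!) // 4, so four numbers are smaller than 5
print(sorted.firstIndex(of: 4)!) // 2, the index of the first 4
```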

It may seem better, but it ends up being as slow as the brute force method.

The reason is that to get the index of the first instance in the array, I used the `firstIndex(of:)` function here.

In Swift, `firstIndex(of:)` is a linear search: it scans the array item by item until it reaches the target, taking linear time (*O(N)*). What’s worse, when we put it inside a for-in loop that itself runs through every element of the array, another *O(N)*, we end up with nested loops that each walk the input array one by one. That is *O(N²)*.
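Conceptually, `firstIndex(of:)` behaves like the hand-written linear search below (a sketch for illustration, not the standard library’s actual implementation):

```
// A sketch of a linear search, to show why it takes O(N)
func linearFirstIndex(of target: Int, in array: [Int]) -> Int? {
    for (index, element) in array.enumerated() {
        if element == target {
            return index // found after scanning index + 1 items
        }
    }
    return nil // scanned all N items without a match
}

print(linearFirstIndex(of: 4, in: [1, 2, 4, 4, 5])!) // 2
```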

As we mentioned, when considering time complexity, we focus on the ‘growth’ of the algorithm’s running time. Then, let’s calculate the time complexity of this solution:

- The native `sorted` function takes *O(N log N)*.
- The nested loops take *O(N²)*.
- Generating the output happens within the nested loops, so it takes no extra time.

Therefore, although we have used the native sort function, an *O(N log N)* algorithm, the time complexity is still *O(N²) + O(N log N)*, which is still *O(N²)*.

### Then How to Improve It

So the bottleneck we had above was the `firstIndex(of:)` function. If we can reduce the time needed to find the index of the first instance in the sorted array, the solution can be a lot faster.

How can we improve on that? With `Dictionary`, a data structure implemented with a hash table in Swift, the lookup can be reduced to constant time (*O(1)*) on average. Constructing the dictionary takes only a single linear pass (*O(N)*) over the sorted array.

```
class Solution {
    func smallerNumbersThanCurrent(_ nums: [Int]) -> [Int] {
        let sorted = nums.sorted() // O(N log N)
        var posMap = [Int: Int]()
        for (index, value) in sorted.enumerated() where posMap[value] == nil {
            // for-loop: N steps
            posMap[value] = index
        }
        return nums.map { posMap[$0]! }
    }
}
```

When generating the `posMap`, we record only the first instance of each number by adding an entry only when there is no record for that value in the dictionary yet. We store the number (the array value) as the dictionary `key`, and its index as the dictionary `value`.

As a result, when we ask for the index (`value`) of a number (`key`), we can get it by calling `posMap[key]`, which returns the corresponding `value`.
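With the sample input from before, the generated `posMap` looks like this:

```
let nums = [5, 1, 4, 4, 2]
let sorted = nums.sorted() // [1, 2, 4, 4, 5]
var posMap = [Int: Int]()
for (index, value) in sorted.enumerated() where posMap[value] == nil {
    posMap[value] = index
}
// posMap is [1: 0, 2: 1, 4: 2, 5: 4]; the duplicate 4 at index 3 is skipped
print(posMap[4]!) // 2
```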

The `return` statement of this version is a fancier, shorter way, using the `map` function, to write the following code.

```
var output = [Int]()
for num in nums {
    output.append(posMap[num]!)
}
return output
```

So we can easily see that this part is a single N-step loop (*O(N)*).

Let’s calculate the time complexity of this version:

- Sorting `nums` takes *O(N log N)*.
- Generating the map takes *O(N)*.
- Generating the output takes *O(N)*.

So *O(N log N) + O(N) + O(N)* is *O(N log N)*, better than the previous *O(N²)* method.

Time complexity may seem a complicated concept in the beginning. Nonetheless, it provides us with a nice tool for looking into the pros and cons of the algorithm at hand. Although we cannot improve an algorithm endlessly, it is always good to have an idea of what is holding our code back.