Algorithms For Web Developers

Learn the basics of time and space complexities, how they are calculated and practice your skills on some common mathematical algorithms

This is the first article in my data structures and algorithms series. In this article, you will learn about algorithms, their characteristics, time and space complexities, how to determine them by looking at code, and the time complexities of some built-in JavaScript array and object methods. You will also be able to practice your skills by looking at the implementation of some basic mathematical algorithms at the end of this article.

Prerequisites

This article is especially for web developers who want to explore data structures and algorithms, either for interview preparation or just to learn something new. I have picked JavaScript as the language for the examples since almost all web developers are familiar with it. Before you start going through the articles in this series, I expect you to have a basic knowledge of programming concepts like arrays, objects, loops, and functions.

What is an Algorithm?

In computer science, an algorithm refers to a finite list of well-defined steps required to solve a problem. For example, the steps to sort an array or the steps to search for an element in a linked list.

Characteristics of An Algorithm

Every algorithm must have the following characteristics:

  • An algorithm should have well-defined inputs and outputs

  • Each step in an algorithm should be clear and precise

  • Algorithms should be independent of the programming language

Efficiency of An Algorithm

There are often multiple algorithms for solving the same problem. For instance, to sort the elements of an array, we have bubble sort and insertion sort. How do we know which algorithm is better?

We could measure the time consumed by both algorithms; however, the execution time of an algorithm depends on many external factors, like the operating system and the programming language used.

Therefore, to analyse the performance of an algorithm irrespective of external factors like the OS and the programming language, we can judge it based on how the amount of work grows with its input size. For instance, an algorithm that loops through the input twice will be slower than one that loops through the input only once.

Types Of Complexities

As mentioned above, we can evaluate the performance of an algorithm in terms of its input size. This can be done in two ways:

Time Complexity

The time taken by an algorithm to complete, expressed as a function of its input size, is called its time complexity. Time complexity depends purely on the number of times each line of code is executed and not on external factors like the OS or the programming language.

Time complexity expresses the time taken by an algorithm not in seconds or milliseconds but in terms of its input size. For instance, if you only need to display the text Hello World, the time complexity is O(1). If you need to print all elements of an array with n elements, the time complexity is O(n). O is just a notation used to represent time complexity.
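As a minimal sketch of the difference (the function names sayHello and printAll are just made up for illustration):

// O(1): a single statement runs once, no matter the input
function sayHello() {
    console.log("Hello World");
}

// O(n): the loop body runs once for each of the n elements
function printAll(array) {
    for (let i = 0; i < array.length; i++) {
        console.log(array[i]);
    }
}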

Space Complexity

The total amount of memory used by an algorithm is called its space complexity. The more extra variables and data structures an algorithm creates as its input grows, the higher its space complexity. Space complexity is also denoted using O.
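For a rough sketch (maxOf and doubled are hypothetical function names), compare a function that uses a fixed number of extra variables with one that builds an array whose size grows with the input:

// O(1) space: only one extra variable, no matter how long the array is
function maxOf(array) {
    let max = array[0];
    for (let i = 1; i < array.length; i++) {
        if (array[i] > max) max = array[i];
    }
    return max;
}

// O(n) space: builds a new array whose length grows with the input
function doubled(array) {
    const result = [];
    for (let i = 0; i < array.length; i++) {
        result.push(array[i] * 2);
    }
    return result;
}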

Although both time and space complexities are crucial in evaluating the performance of an algorithm, in this article I'll focus more on time complexity since it is asked about far more frequently during interviews.

Representing Complexities

I already mentioned that time and space complexities are represented using O; however, there are other ways to represent complexities too. These are known as asymptotic notations, and there are three of them.

Big - O Notation (O)

Consider an array = [5,4,3,2,1]. If you want to sort it in ascending order, you need to change the position of all 5 elements to get the output array = [1,2,3,4,5]. This is called the worst case since you need to move every element. The worst-case time and space complexities are denoted using the Big - O notation I previously talked about, e.g. O(1), O(n), etc.

Omega Notation (Ω)

Now consider a new array = [1,2,3,4,5]. You want to sort it in ascending order, but it is already sorted in that order. This is called the best case since you do not have to move any elements. The best-case time and space complexities are denoted using the Omega (Ω) notation.

Theta Notation (Θ)

Now consider another array = [1,2,3,5,4]. You want to sort it in ascending order; however, 3 out of the 5 elements are already in the right place. This is called the average case since you only have to move some of the elements. The average-case time and space complexities are denoted using the Theta (Θ) notation.

These are the three notations used for time and space complexities. However, in interviews and general programming, what matters most is the worst-case time complexity. Therefore, in this article, I'll focus on the worst-case time complexity of algorithms (Big - O).
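To make the best case and the worst case concrete, here is a small linear-search sketch (linearSearch is a helper I wrote for illustration, not something from the examples above):

// Linear search: scan the array from the start until the target is found
function linearSearch(array, target) {
    for (let i = 0; i < array.length; i++) {
        if (array[i] === target) {
            return i; // found it
        }
    }
    return -1; // not present
}

linearSearch([5, 4, 3, 2, 1], 5); // best case: the target is first, 1 comparison → Ω(1)
linearSearch([5, 4, 3, 2, 1], 1); // worst case: the target is last, 5 comparisons → O(n)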

Determining Worst Case Time Complexity

As previously said, the worst-case time complexity of an algorithm is denoted using the Big - O notation. When comparing two algorithms, the one with the higher worst-case time complexity is considered slower for large inputs. Now, to see how to determine the worst-case time complexity of an algorithm, look at the following code:

function nSum(n) {
    let sum = 0;
    for(let i = 1; i<=n; i++) {
        sum = sum + i;
    }
    return sum;
}

This function nSum(n) takes a number n and returns the sum of all numbers from 1 to n. For instance, if n=2, output = 1+2 = 3. If n=5, output = 1+2+3+4+5 = 15.

Now, to determine the time complexity of this function, count the number of times each line inside the function will execute.

let sum = 0; will execute only one time

sum = sum + i; will execute as many times as the for loop runs, which depends on the value of n. So if n = 5, this line will be executed 5 times

return sum; will execute only one time

So, time complexity for n=5 is O(1 + 5 + 1) = O(7)

Or, time complexity = O(1 + n + 1) = O(n + 2)

What if n = 1000000?

Time complexity = O(1000000 + 2) = O(1000002)

Since the value of n is usually much larger than the constant number of lines that execute only once, we can ignore those constant terms.

So, the time complexity of the above function = O(n)
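As a side note, this is a nice illustration of how two algorithms for the same problem can have different complexities: the same sum can be computed with the closed-form formula n(n + 1)/2, with no loop at all (nSumConstant is just a name I picked for this sketch):

// One multiplication and one division, independent of n → O(1)
function nSumConstant(n) {
    return (n * (n + 1)) / 2;
}

nSumConstant(5);       // 15, same result as nSum(5)
nSumConstant(1000000); // still a single calculation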

Time Complexity Cheatsheet

Although this table may not cover every situation, in most cases you can use the following rules to determine the worst-case time complexity of an algorithm (two of these rules are illustrated in the sketch after the table).

Condition | Complexity | Meaning
No loop or recursion present in the function | O(1) - Constant Time Complexity | Execution time stays the same irrespective of the input size
One loop present in the function | O(n) - Linear Time Complexity | The larger the input, the longer the execution time, in direct proportion
Two nested loops over the same input | O(n^2) - Quadratic Time Complexity | Execution time grows with the square of the input size
m nested loops over the same input | O(n^m) - Polynomial Time Complexity, where m is the number of nested loops | Execution time grows with the m-th power of the input size
Two nested loops over two different inputs | O(mn) - where m and n are the sizes of the two inputs | Execution time depends on both input sizes
Input size is halved after every iteration of the loop (e.g. i = i/2) | O(log(n)) - Logarithmic Time Complexity | Execution time grows far more slowly than the input size
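Here is a rough sketch of two of these rules (printAllPairs and countHalvings are hypothetical helper names):

// Two nested loops over the same input → O(n^2)
function printAllPairs(array) {
    for (let i = 0; i < array.length; i++) {
        for (let j = 0; j < array.length; j++) {
            console.log(array[i], array[j]);
        }
    }
}

// The input is halved on every iteration → O(log(n))
function countHalvings(n) {
    let count = 0;
    while (n > 1) {
        n = n / 2;
        count = count + 1;
    }
    return count;
}

countHalvings(8); // 3, since 8 → 4 → 2 → 1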

Time Complexity vs Performance

The following are the common time complexities in ascending order of growth, which means descending order of performance. The ones at the start execute super fast, while the complexities at the end run slow.

O(1) << O(log(n)) << O(n) << O(nlog(n)) << O(n^2) << O(2^n) << O(n!)

Time Complexity In JavaScript Arrays

An array in JavaScript is an ordered collection of values, which can even be of different types. This is how arrays are created in JavaScript:

const fruits = ['Melon','Strawberry','Banana']
const numbers = [1,2,3]
const mixedArray = [1,'Hello',true]

When writing code, you need to perform different operations on an array, such as adding or deleting an element, sorting the array, or searching for an element. You can use the following table to determine the time complexity of common array operations in JavaScript (a short annotated example follows the table).

Array Operation | Syntax | Time Complexity
Insert a new element at the end | array.push(elementValue) | O(1)
Delete the element at the end | array.pop() | O(1)
Insert a new element at the start | array.unshift(elementValue) | O(n), since every existing element's index must be updated
Delete the element at the start | array.shift() | O(n), since every remaining element's index must be updated
Access an element by its index | console.log(array[index]) | O(1)
Search for an element | array.indexOf(elementValue), array.includes(elementValue) | O(n)
Array iteration methods | array.forEach(), array.map(), array.filter(), array.reduce() | O(n)
Array copying and splitting methods | array.slice(), array.splice(), array.concat() | O(n)
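For instance, using the fruits array from above (the exact values are just for illustration):

const fruits = ['Melon', 'Strawberry', 'Banana'];

fruits.push('Apple');             // O(1) - appends at the end
fruits.pop();                     // O(1) - removes from the end
fruits.unshift('Mango');          // O(n) - every existing index shifts by one
fruits.shift();                   // O(n) - every remaining index shifts back
fruits[1];                        // O(1) - direct access by index
fruits.includes('Banana');        // O(n) - may scan the whole array
fruits.map(f => f.toUpperCase()); // O(n) - visits every element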

Time Complexity In JavaScript Objects

An object in JavaScript is a collection of key-value pairs, where the value contains the data and the key is used to access that data. This is how an object is created in JavaScript:

const user = {
    name: "Ishant",
    age: 21
}

When writing code, you need to perform different operations on an object, such as adding or deleting keys and values, or listing its keys and values. You can use the following table to determine the time complexity of common object operations in JavaScript (a short annotated example follows the table).

Object Operation | Syntax | Time Complexity
Add a new key | object.newKey = newValue | O(1)
Delete a key | delete object.keyName | O(1)
Get a value using its key | console.log(object.keyName) or console.log(object["keyName"]) | O(1)
List all keys, values, or entries | Object.keys(object), Object.values(object), Object.entries(object) | O(n), since every key and value must be visited
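For instance, using the user object from above (the city value is just an example):

const user = { name: "Ishant", age: 21 };

user.city = "Delhi";       // O(1) - add a new key
delete user.age;           // O(1) - delete a key
console.log(user["name"]); // O(1) - read a value by its key
Object.keys(user);         // O(n) - visits every key
Object.values(user);       // O(n) - visits every value
Object.entries(user);      // O(n) - visits every key-value pair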

Time To Practice

Now I will share some code snippets of JavaScript functions that perform common mathematical operations, and your task is to determine the time complexity of each one.

Factorial Of A Number

Look at the following code

function factorial(n) {
    let fact = 1;
    while(n>=1) {
        fact = fact * n;
        n--;
    }
    return fact;
}

console.log(factorial(5))
// Output: 120

This code calculates the factorial of a number. For instance, factorial(5) = 5x4x3x2x1 = 120 or factorial(3) = 3x2x1 = 6. In this example,

let fact = 1; is executed only one time

fact = fact * n; and n--; are executed n times

return fact; is executed only one time

When calculating time complexity, you can treat all the lines of code inside a loop as a single unit, since constant factors are dropped anyway.

So time complexity = O(1 + n + 1) = O(n + 2)

Or, Time complexity = O(n)

Fibonacci Sequence

Look at the following code

function fibonacci(n) {
    const fibonacciSeries = [0, 1];
    for (let i = 2; i <= n; i++) {
        fibonacciSeries[i] = fibonacciSeries[i-1] + fibonacciSeries[i-2];
    }
    return fibonacciSeries;
}

console.log(fibonacci(10))
// Output: [0,1,1,2,3,5,8,13,21,34,55]

This code builds the Fibonacci series up to the nth term, where every term is the sum of the previous two terms. For instance, fibonacci(2) = [0,1,1], fibonacci(3) = [0,1,1,2], fibonacci(4) = [0,1,1,2,3], fibonacci(5) = [0,1,1,2,3,5], fibonacci(6) = [0,1,1,2,3,5,8], and so on.

As you can see, there is only one for loop, which runs from i = 2 to i <= n, so

Time complexity = O(n)

Power Of Two

Look at the following code

function isPowerOfTwo(n) {
   while(n>1) {
    if(n%2!==0) {
        return false;
    }    
    n = n/2;
   }
   return true;
}

console.log(isPowerOfTwo(10))
// Output: false

console.log(isPowerOfTwo(8))
// Output: true

This code checks whether a given number n is a power of 2. For instance, isPowerOfTwo(1) = true since 2^0 = 1, and isPowerOfTwo(8) = true since 2^3 = 8.

As you can see, there is only one while loop, which executes as long as the value of n is greater than one. However, the value of n gets halved after each iteration (n = n/2), so the loop runs only about log2(n) times instead of n times. So,

Time complexity = O(log(n))
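For instance, for n = 8 the loop body runs only 3 times (8 → 4 → 2 → 1) since log2(8) = 3, and for n = 1024 it runs just 10 times.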

You will find more use cases of time complexity when learning sorting and searching algorithms. You can refer to the time complexity table whenever you feel stuck.

Thanks For Reading

If you enjoyed reading this article, please give it a like and let me know your feedback in the comments :) For any questions, you can drop me a message on my Instagram or LinkedIn.