Time Analysis of Recursive Program | Back Substitution Method


Let’s understand the time complexity of a recursive function with some examples:

 

Example:
 int test(int n)  
 {  
   if(n > 20)  
    return test(n/2) + test(n/2); //two recursive calls  
   return 1; //base case: constant work once n is small  
 }  


For solving test(n), let's say the time taken is T(n). In the above function, the if-condition takes some constant time, and then we compute test(n/2) twice and add the results. So there is a constant term covering the comparison and the addition, plus the two recursive calls.

 

T(n) = c + 2T(n/2)

This is how we write the recurrence relation for a recursive function; the next step is to solve it.
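Once the back substitution method is introduced below, you can check that this recurrence solves the same way (a sketch, treating n <= 20 as the base case):

T(n) = c + 2T(n/2)
     = 3c + 4T(n/4)
     = 7c + 8T(n/8)
     ...
     = (2^k - 1)c + 2^k T(n/2^k)

The expansion stops once n/2^k reaches the base case, i.e. 2^k is about n/20, giving T(n) = O(n).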


Back Substitution Method

There are many ways to solve a recurrence relation; one of them is the back substitution method.


 //Example 1:  
 int test(int n)  
 {  
   if(n > 1)           //recursive case  
    return test(n-1);  //recursive call on (n-1)  
   return 1;           //base case: n = 1  
 }  
 //Time Complexity = O(n)  

 

If we assume the time taken by test(n) is T(n), then T(n) consists of some constant term for the comparison, plus the time taken to solve the same problem on (n-1) inputs.

 

T(n) = 1 + T(n-1)  ; when n > 1 --------> (1)

T(n) = 1           ; when n = 1

We represent the constant term using 1 or c.

 

Back substitution can be applied to almost any such recurrence, but it is slow and tedious.

T(n-1) = 1+T(n-2) -------->(2)

T(n-2) = 1+T(n-3) -------->(3)

 

Substituting eq(2) in (1);

T(n) = 1+1+T(n-2)

T(n) = 2+T(n-2) -------->(4)

 

Substituting eq(3) in (4)

T(n) = 2+1+T(n-3)

T(n) = 3+T(n-3)

   ...

T(n) = k+T(n-k) -------->(5)

Checking the base case of the given algorithm, we find that,

T(n) = 1+T(n-1) ;valid when n>1

n starts from a large value and gradually decreases; at some point n reaches 1, and that is where we stop.

In T(n) = k + T(n-k), the recursive term disappears when T(n-k) becomes T(1), that is, when

n - k = 1

k = n - 1

 

Putting the value of k in eq (5),

T(n) = (n-1)+T(n-(n-1))

=(n-1)+T(1)

=(n-1)+1           because T(1)=1

=n

 

Therefore, T(n) = n = O(n)
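As a quick sanity check (the call counter and the main driver below are our own additions for illustration, not part of the original example), we can count how many times test() runs and confirm it matches T(n) = n:

 #include <cstdio>  
  
 static long calls = 0;  //counts invocations of test()  
  
 int test(int n)  
 {  
   calls++;  
   if(n > 1)  
    return test(n-1);  //recursive call  
   return 1;           //base case  
 }  
  
 int main()  
 {  
   test(100);  
   printf("calls for n = 100: %ld\n", calls);  //prints 100  
   return 0;  
 }  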


Example 2: Try it yourself.

T(n) = n + T(n-1)  ; when n > 1

T(n) = 1           ; when n = 1

Answer: Time complexity will be O(n^2).
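If you want to verify the answer, here is one possible back substitution sketch:

T(n) = n + T(n-1)
     = n + (n-1) + T(n-2)
     = n + (n-1) + (n-2) + T(n-3)
     ...
     = n + (n-1) + ... + 2 + T(1)
     = n + (n-1) + ... + 2 + 1
     = n(n+1)/2

Therefore, T(n) = O(n^2).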

Time complexity of Iterative programs.



f(n) is the actual time taken by the algorithm. When you write the algorithm as a program and execute it, the measured running time is not exactly f(n); f(n) is only an approximation. Still, the two move together: if f(n) increases, the program's running time increases, and if f(n) decreases, the running time decreases.


To find f(n), we first note that there are two types of algorithms:

Iterative: there is no function calling itself; the work is done with loops.


Recursive: a function calls itself.


Any program written using iteration can be rewritten using recursion, and any recursive program can be converted into an iterative one, so the two are equivalent in power. Their analysis, however, is different:


To analyze an iterative program, we count the number of times its loops execute. To find the time taken by a recursive program, we write a recurrence equation, expressing the time for input n in terms of the time for a smaller input:

Iterative: A(n) -> f(n), obtained directly by counting loop iterations.

Recursive: A(n) -> expressed in terms of A(n/2), A(n-1), and so on.
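For instance, here is a minimal sketch of our own (not from the original text) showing both styles of analysis on the same summation problem:

 //Iterative: the loop body runs n times, so f(n) = n, i.e. O(n).  
 int sumIter(int n)  
 {  
   int s = 0;  
   for(int i=1; i<=n; i++)  
     s = s + i;  
   return s;  
 }  
  
 //Recursive: T(n) = T(n-1) + c, which back substitution solves to O(n).  
 int sumRec(int n)  
 {  
   if(n <= 1)  
     return n;  
   return n + sumRec(n-1);  
 }  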


If the algorithm contains neither iteration nor recursion, the running time does not depend on the input size at all: whatever the input size, the running time is a constant.


So if there is no iteration or recursion in the algorithm, we need not worry about the time; it is constant, and for such a program we write O(1), which represents constant time.
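For example (a small sketch of our own, in the same style as the examples below):

 int timecomp(int n)  
 {  
   return n*(n+1)/2;  //a fixed number of operations; no loop, no recursion  
 }  
 //Therefore, the time complexity is O(1), whatever the value of n.  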


Examples:  

 //Example 1:  
 void timecomp(int n)  
 {  
   int i;  
   for(i=0; i<n; i++)  
     printf("RavelingTech"); //this line will be printed n times  
 }  
 //Therefore, the time complexity of the algorithm is O(n).  
 
//Example 2:  
 void timecomp(int n)  
 {  
   int i,j;  
   for(i=0; i<n; i++)  
   {  
     for(j=0; j<n; j++)  
       printf("RavelingTech"); //this line will printed nxn times  
   }  
 }  
 //Therefore, the time complexity of the algorithm is O(n^2).  
 
//Example 3:   
 void timecomp(int n)  
 {  
   int i=1, s=1;  
   while(s <= n)  
   {  
     i++;   //increment in i is linear  
     s=s+i;  //increment in s depends on i  
     printf("RavelingTech");  
   }  
 }  
 /*  
 s | 1 3 6 10 15 21 ........n  
 i | 1 2 3 4 5 6 ........k  
 By the time i reaches k, s will be the sum of the first k natural numbers  
   = k(k+1)/2  
 Now, for the loop to stop, we need  
 k(k+1)/2 > n  
 => k = O(√n) <--- Time Complexity.  
 */  

How To Apply Dark Mode to all websites in Edge Browser.


The new Chromium Edge browser is packed with useful new features, and some of Edge's features are tucked away under the bonnet, where you can enable them through the Edge flags settings.

One of the interesting features you can enable from the Edge flags settings is dark mode for web content.

I have already explained how to activate Edge’s dark mode, but this just changes the color of the browser’s interface and doesn’t have any effect on the web pages you visit.

To read the web content in dark mode, type edge://flags in the address bar and search for "dark" in the Search Flags box.


Click the drop-down next to ‘Force Dark Mode for Web Contents’, change it to Enabled, and restart your browser when prompted.


Now your Edge browser is ready to load all web content in dark mode; if you change your mind later, you can go back to the same setting to disable it.

Discard Inactive tabs automatically in Google Chrome.


Auto Tab Discard solves everyone’s biggest moan about Chrome: by automatically ‘killing’ unused tabs, it claws back much-needed memory. This should speed up your browser and save battery life when you’re using a laptop.

Features of Auto Tab Discard.
  • It speeds up the browser and minimizes memory usage.
  • It designates specific tabs to be whitelisted to prevent discarding.
  • You can retain discarded tabs after closing and re-opening your browser.
  • The favicon of a website displays the discarded state.


Click the extension’s toolbar button for a range of quick settings that let you swiftly discard the current tab or any other tab you may have open, in any Chrome window.

Your tabs won’t suddenly disappear; instead, the browser tool puts a discarded tab into a suspended state until you click back into it, at which point it reloads. You can also jump between current tabs, close them using the navigation arrows, or order all tabs to remain open during your session.

However, it’s in the add-on’s settings where the ‘auto’ part of its name comes into play. Here, you can set a time limit that kicks in once you have a specific number of tabs open.

What is Computer Programming? Basic.


Dictionary Definition: Programming is the process of preparing an instructional program for a device.

But that’s a really confusing definition, so in layman’s terms what exactly does that mean? Essentially, it is attempting to get a computer to complete a specific task without mistakes.

For example, suppose you instruct your less-than-intelligent friend to build a Lego set. He has lost the instructions, so now he can only build based on your commands. Your friend is far from competent on his own, so you have to give him EXACT instructions on how to build; if even one piece is wrong, the entire Lego set will be ruined.

Giving instructions to your friend is very similar to how programmers code.

An important thing to note is that computers are actually very dumb. We build them up to be this super-sophisticated piece of technology when in actuality a computer’s main functionality comes from how we manipulate it to serve our needs.

{"Computers are only smart because we program them to be"};

The Language that Computers Understand.

Programming isn’t as simple as giving instructions to your friend, though, because the computer doesn’t speak the same language as you. The computer only understands machine code, a numerical language known as binary, designed so that the computer can read it quickly and carry out its instructions.

Every instruction fed to the computer is converted into a string of 1s and 0s and then interpreted by the computer to carry out a task. Therefore, in order to talk to the computer, you must first translate your English instructions into binary.

Directly translating what you want the computer to do into machine code is extremely difficult, almost impossible in fact, and would take a very long time even if you could. This is where programming languages come into play.

Programming languages are fundamentally a middle man for translating a program into machine code, the series of 0s and 1s that the computer can understand. These languages are much easier for humans to learn than machine code, and are thus very useful for programmers.

{"A programming language is not English and not machine code, it is somewhere in the middle"};

How Do Programming Languages Vary?

There are many different programming languages out there, each of which has its own unique uses.

  • Java and Python: General Purpose languages.
  • HTML/CSS: Designed for specific tasks like web page design.


Each language also varies in how powerful it is:
  • JavaScript is a scripting language and isn’t used for big problems.
  • Java and Python can carry out much more computationally taxing processes.


We measure a programming language’s power level by how similar it is to machine code. Low-level programming languages such as Assembly or C are closer to binary than a high-level programming language such as Java or Python.

The basic idea is that the lower the level of your programming language, the more your code will resemble what the machine can interpret as instructions.

So, try different languages and decide which one’s rules, interface, and level of specification you like the best.  


Introduction to Asymptotic Notations.

Before implementing any algorithm as a program, we need to analyze which algorithm is good in terms of time and space.

Time: which algorithm executes faster.
Memory: which algorithm uses less memory.

Here we are mainly going to focus on calculating the Time Complexity which generally depends on the size of the input. Let's understand this point with the help of an example.


Suppose we have an integer array and we want to add an element at the beginning. To perform this operation we need one extra slot at the end of the array, and we have to shift every element one position toward the end; the number of shifts we make equals the number of elements already in the array.
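A minimal sketch of that shifting in C++ (the array contents and sizes are our own illustration):

 #include <cstdio>  
  
 int main()  
 {  
   int arr[6] = {2, 4, 6, 8, 10};  //5 elements plus one spare slot at the end  
   int n = 5;  
  
   for(int i=n; i>0; i--)    //n shifts: one per existing element  
     arr[i] = arr[i-1];  
   arr[0] = 1;               //insert the new element at the beginning  
   n++;  
  
   for(int i=0; i<n; i++)  
     printf("%d ", arr[i]);  //prints: 1 2 4 6 8 10  
   return 0;  
 }  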


So from this small example, we can understand that Running Time mostly depends on the size of the input. Therefore, if the size of the input is n, then f(n) is a function of n that denotes the Time Complexity. Example: 

[Figure: contribution of each term of f(n) = 5n^2 + 6n + 12 at n = 1]

From the above example we might conclude that most of the time is consumed by the constant 12. But wait: we cannot conclude anything by checking a single value of n; we need to check bigger values. Below is the calculation for bigger values of n.
n    | 5n^2   | 6n     | 12
1    | 21.74% | 26.09% | 52.17%
10   | 87.41% | 10.49% | 2.09%
100  | 98.79% | 1.19%  | 0.02%
1000 | 99.88% | 0.12%  | 0.002%

Now if we observe the above table, we find that for bigger values of n the 5n^2 term takes almost all of the time. So while calculating the time complexity we can neglect the remaining terms, because the single dominant term gives an approximate result that is very near the actual one. This approximate measure of time complexity is called asymptotic complexity.


What are Asymptotic Notations?

Asymptotic notations are mathematical tools to measure the space and time complexity of an algorithm. There are mainly three asymptotic notations used to show the time complexity of the algorithm.

O(n) Big O notation. (Worst Case)

Big O notation is the most used notation to measure the performance of any algorithm by defining its order of growth.

[Graph: f(n) upper-bounded by c.g(n) for all n >= n0]

Assuming that the time grows with the input as in the graph above, what is the worst case or upper bound for this function? We have to find another function, call it c.g(n), that is greater than the given f(n) after some limit n = n0.


For any given algorithm, we should be able to find its time complexity, that is, its f(n) in terms of the input n. Having found it, we try to bound that function by another function c.g(n) in such a way that, after some input n0, the value of c.g(n) is always greater than f(n).

f(n) <= c.g(n) for all n >= n0,
where c and n0 are real numbers, c > 0 and n0 >= 1

Definition: f(n) ∈ O(g(n)) if there exist constants c > 0 and n0 >= 1 such that 0 <= f(n) <= c.g(n) for all n >= n0.

Example: Is f(n) Big O of g(n) [f(n) = O(g(n))], where f(n) = 3n+2 and g(n) = n?

Solution: First we have to find two constants c and n0 such that
f(n) <= c.g(n), with c > 0, for all n >= n0
3n+2 <= c.n
Let's take c = 4:
3n+2 <= 4n
=> n >= 2
Therefore, f(n) = O(g(n)), with c = 4 and n0 = 2.
Therefore, g(n) = n bounds the function f(n) from above, which means f(n) is upper bounded by g(n).

If n bounds this f(n), then n^2 certainly will as well. In other words, if g(n) = n satisfies the definition, then any faster-growing function (n^2, n^3, ..., n^n) will also bound 3n+2. But keep in mind that Big O is an upper bound: all those answers are correct, yet we always go for the least, or tightest, bound, and n is the tightest bound for the given f(n).

Ω(n) Omega notation. (Best Case)

Omega notation gives us a lower bound on the complexity, used in best-case analysis. In this notation, we analyze the minimum number of steps that need to be executed. For example, in linear search the best case is when the element you are searching for is at the first position.
[Graph: f(n) lower-bounded by c.g(n) for all n >= n0]

We have to find another function, call it c.g(n), that is smaller than the given f(n) after some limit n = n0. In other words, after some input n0, the value of c.g(n) is always smaller than f(n).

f(n) >= c.g(n) for all n >= n0,
where c and n0 are real numbers, c > 0 and n0 >= 1

Definition: f(n) ∈ Ω(g(n)) if there exist constants c > 0 and n0 >= 1 such that 0 <= c.g(n) <= f(n) for all n >= n0.

Example: Is f(n) Big Omega of g(n), where f(n) = 3n+2 and g(n) = n?

Solution: 
3n+2 >= c.n is true even for c = 1 and all n >= 1.
Therefore, the condition is satisfied and 3n+2 is lower bounded by n.
The given f(n) could also be lower bounded by anything that grows more slowly than n, but it is better to take the closest bound.

Θ(n) Theta notation. (Average Case)

Theta notation gives us a tight bound of the complexity in the average case analysis. In this notation, we calculate the average by considering different inputs, including the best and worst cases.
[Graph: f(n) bounded between c1.g(n) and c2.g(n) for all n >= n0]

Here we have to find both an upper bound and a lower bound using the same function g(n), varying only the constants. If f(n) is bounded below by c1.g(n) and above by c2.g(n), then f(n) is Θ(g(n)).

Definition: f(n) ∈ Θ(g(n)) if there exist constants c1 > 0, c2 > 0 and n0 >= 1 such that c1.g(n) <= f(n) <= c2.g(n) for all n >= n0.

Example: Is f(n) Big Theta of g(n), where f(n) = 3n+2 and g(n) = n?

Solution: 
f(n) <= c2.g(n)
3n+2 <= 4n, true for all n >= 2 with c2 = 4.

f(n) >= c1.g(n)
3n+2 >= n, true for all n >= 1 with c1 = 1.

This means f(n) = 3n+2 is bounded by g(n) = n from both above and below: c2 = 4 for the upper bound and c1 = 1 for the lower bound, and both conditions hold for all n >= n0 = 2.

Note: Θ is sometimes read as 'asymptotically equal': we take the leading term of f(n), ignoring its constant coefficient, as g(n), because f(n) and that term are asymptotically equal.
Ex: f(n) = 3n^2+n+1 = Θ(n^2).
f(n) = 5n^3+n+1 = Θ(n^3).


Algorithms Introduction and Analysis


An Algorithm is a step-by-step procedure that helps us in solving computational problems. Using the right algorithm to solve a particular kind of problem increases the efficiency of code in terms of space and time complexity.
Space Complexity: the amount of memory space an algorithm requires to produce the desired output. This space is consumed by the algorithm's input data, temporary data, and output data.
Time Complexity: the amount of time an algorithm requires to execute and produce the desired output. It depends on the number of iterative or recursive steps involved in executing the algorithm.
Sometimes people confuse the terms Algorithm and Program. An algorithm is the design part of solving a problem; the program is the implementation part.

We can write an algorithm in any language, such as plain English, or using mathematical notation. But the program for that algorithm must be written in a programming language like C, C++, Java, or Python.

How To Analyse an Algorithm?

You may find more than one algorithm for solving a particular problem; you then have to analyze them and use the most efficient one.

The analysis of an algorithm is done based on its efficiency. The two important terms used for the analysis of an algorithm are priori analysis and posterior analysis.

Priori Analysis: done before the actual implementation, while the algorithm is still written in general theoretical language. The efficiency of the algorithm is calculated based on its complexity; it is only an approximate analysis.
Posterior Analysis: done after the actual implementation and execution of the algorithm in a programming language like C, C++, Java, or Python. It is the actual analysis, in which the space and time complexity of the algorithm are measured more accurately.

Analysis of Algorithms Based on Space and Time Complexity. 

The complexity of any algorithm is based on two important factors, space complexity and time complexity. But the measured time also depends on many other factors, such as which language, compiler, and machine you use for the implementation, so measuring the exact time is very difficult.

Therefore, the time complexity of an algorithm is not measured in time units like seconds or microseconds. It is expressed in terms of the size of the input: the bigger the input, the more time an algorithm will take.

We always analyze an algorithm for bigger inputs, because for smaller inputs different algorithms can look similar. If the size of the input is n, then f(n), a function of n, denotes the time complexity.

For example:
The time complexity of a program is given by the function f(n) = 2n^2+2n+1. If we evaluate this function for different inputs (n = 1, 10, 100, 1000), we observe that the n^2 term dominates the others, so we can ignore the smaller terms like n and the constant. Since calculating the exact function for time complexity is difficult, in asymptotic terms we say the time complexity of the given function is O(n^2).
This method of measuring approximate complexity is known as asymptotic complexity.

Now, I hope you understand the above example and the term asymptotic but even if you are unable to get it completely don't worry as we are going to cover this in more detail soon. 

First, we need to learn more about the types of asymptotic notations. There are three asymptotic notations used to express the space and time complexity of an algorithm: Big O, Big Ω, and Big Θ. We do not always have to calculate all three for an algorithm.

Types of Asymptotic notation.

We use these three notations to express the complexity of an algorithm in three different cases: 

Big O gives the worst-case time or upper bound, which means that if we state a time complexity in terms of Big O, the running time will never exceed it.

Big Ω gives the best-case time or lower bound, which means that in any case you can't do better than this. In this notation, we analyze the minimum number of steps that need to be executed.

Big Θ gives the average-case time, somewhere between the worst and the best. In this notation, we calculate the average over different types of inputs, including the best case and the worst case.

In practice, we are not interested in what is the best time the algorithm could execute. We are always interested in what is the worst-case time that will be taken by this algorithm for any input.

Lower Bound <= Average Bound <= Upper Bound

We use the average case when the best case and the worst case are the same, that is, when both take the same time.

Example: Assume an array data structure of length n.
[Figure: an array of n elements]
We want to search for an element using linear search. If the element we want is 5, sitting at the first position, then in the best case we find it in just one comparison, a single step.

=> Ω(1) time complexity, which says that you can never go below this.

Now if we want to search for 3 among the n elements, we may need to compare all of them, which is the worst case, O(n); it says that you can never go beyond this complexity.

C++ Example Code for Linear Search.

//Linear Search Algorithm code in C++
#include <iostream>
using namespace std;

int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i; // return the index of the target element if found
        }
    }
    return -1; // return -1 if target element is not found
}

int main() {
    int arr[] = {5, 2, 8, 3, 1, 9};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 3;

    int index = linearSearch(arr, n, target);
    if (index != -1) {
        cout << "Element found at index: " << index << endl;
    } else {
        cout << "Element not found." << endl;
    }

    return 0;
}
Output:
Element found at index: 3

In the above code, we have implemented the linear search algorithm to find the target element in the given array. The algorithm iterates through each element of the array and compares it with the target element. If a match is found, the index is returned; otherwise, -1 is returned to indicate that the element was not found.

Now I hope you get some idea of how we can calculate the best, worst, and average time complexity of an algorithm. We are going to practice a lot of questions to master this concept in our upcoming articles. 


Static Array and Dynamic Arrays.


An array is the most useful data structure because it forms a fundamental building block for all other data structures. Using arrays and pointers, we can construct almost every data structure.

What is a Static Array?

A static array is a fixed-length container containing n elements indexable from the range [0, n-1].

What is meant by being ‘indexable’?

This means that each slot/index in the array can be referenced with a number.

When and Where is a Static Array Used?

  • Storing and accessing sequential data.
  • Temporarily storing objects.
  • Used by IO routines as buffers.
  • Lookup tables and inverse lookup tables.
  • Can be used to return multiple values from a function.
  • Used in dynamic programming to cache answers to sub-problems.


Static Array Example:

A[0] = 44
A[1] = 12
A[4] = 6
A[7] = 9
A[9] => index out of bounds!
Elements in A are referenced by their index. There is no other way to access elements in an array. Array indexing is zero-based, meaning the first element is found in position zero.
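In code, that indexing looks like this (a sketch reusing the values from the example above; the remaining slots are filled with 0 as our own placeholder):

 #include <cstdio>  
  
 int main()  
 {  
   int A[9] = {44, 12, 0, 0, 6, 0, 0, 9, 0};  //valid indices are 0..8  
  
   printf("%d\n", A[0]);  //44  
   printf("%d\n", A[4]);  //6  
   printf("%d\n", A[7]);  //9  
   //A[9] would be out of bounds: valid indices run from 0 to 8 only.  
   return 0;  
 }  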

What is a Dynamic Array?

A dynamic array is an array that can grow and shrink in size dynamically as needed.

How can we implement a dynamic array?

One way is to use a static array!
  • Create a static array with an initial capacity.
  • Add elements to the underlying static array, keeping track of the number of elements.
  • If adding another element would exceed the capacity, create a new static array with twice the capacity and copy the original elements into it.
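Here is a minimal C++ sketch of that doubling strategy (the class and member names are our own; a real implementation would also handle shrinking and allocation errors):

 #include <iostream>  
 using namespace std;  
  
 class DynamicArray  
 {  
   int *data;  
   int size;      //number of elements currently stored  
   int capacity;  //length of the underlying static array  
 public:  
   DynamicArray() : size(0), capacity(2) { data = new int[capacity]; }  
   ~DynamicArray() { delete[] data; }  
  
   void append(int value)  
   {  
     if(size == capacity)  //full: grow by doubling  
     {  
       capacity = capacity * 2;  
       int *bigger = new int[capacity];  
       for(int i=0; i<size; i++)  
         bigger[i] = data[i];  //copy the original elements over  
       delete[] data;  
       data = bigger;  
     }  
     data[size++] = value;  
   }  
  
   int get(int i) { return data[i]; }  
   int length() { return size; }  
 };  
  
 int main()  
 {  
   DynamicArray a;  
   for(int i=1; i<=5; i++)  
     a.append(i*10);           //capacity doubles twice: 2 -> 4 -> 8  
   for(int i=0; i<a.length(); i++)  
     cout << a.get(i) << " ";  //prints: 10 20 30 40 50  
   return 0;  
 }  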

The Complexity of Static and Dynamic Arrays.

Operations | Static Array | Dynamic Array
Access     | O(1)         | O(1)
Search     | O(n)         | O(n)
Insertion  | N/A          | O(n)
Appending  | N/A          | O(1)
Deletion   | N/A          | O(n)
The access time of static and dynamic arrays is constant because the array is indexable.

The search time of static and dynamic arrays is linear because in the worst case we have to traverse all the elements.

The other operations, inserting, appending, and deleting, are not possible with a static array because a static array is fixed in size; it cannot grow larger or smaller.

However, inserting, appending, and deleting are possible with a dynamic array. Insertion and deletion take linear time because we have to shift elements from their positions. Appending takes constant time (amortized, given the doubling strategy above) because it simply adds an element at the end of the array.
