Discard Inactive tabs automatically in Google Chrome.

Auto Tab Discard

Auto Tab Discard solves everyone’s biggest moan about Chrome: by automatically ‘killing’ unused tabs, it claws back much-needed memory. This should speed up your browser and save battery life when you’re using a laptop.

Features of Auto Tab Discard.
  • It speeds up the browser and minimizes memory usage.
  • It lets you whitelist specific tabs to prevent them from being discarded.
  • You can retain discarded tabs after closing and re-opening your browser.
  • A website’s favicon indicates its discarded state.


Click the extension’s toolbar button for a range of quick settings that let you swiftly discard the current tab or any other tab you may have open, in any Chrome window.

Your tabs won’t suddenly disappear. Instead, the browser tool puts a discarded tab into a suspended state until you click back into it, at which point it will reload. You can also jump between current tabs, close them using the navigation arrows, or order all tabs to remain open during your session.

             
However, it’s in the add-on’s settings that the ‘auto’ part of its name comes into play. Here, you can set a time limit that kicks in when you have a specific number of tabs open.

What is Computer Programming? The Basics.


Dictionary Definition: Programming is the process of preparing an instructional program for a device.

But that’s a really confusing definition, so what exactly does it mean in layman’s terms? Essentially, programming is attempting to get a computer to complete a specific task without mistakes.

Imagine, for example, that you instruct your less-than-intelligent friend to build a Lego set. He lost the instructions, so now he can only build based on your commands. Your friend is far from competent on his own, so you must give him EXACT instructions on how to build; if even one piece is wrong, the entire Lego set will be ruined.

Giving instructions to your friend is very similar to how programmers code.

An important thing to note is that computers are actually very dumb. We build them up to be this super-sophisticated piece of technology when in actuality a computer’s main functionality comes from how we manipulate it to serve our needs.

{"Computers are only smart because we program them to be"};

The Language That Computers Understand.

Programming isn’t as simple as giving instructions to your friend since, in the case of programming, the computer doesn’t speak the same language as you. The computer only understands machine code, a numerical language known as binary that is designed so the computer can quickly read it and carry out its instructions.

Every instruction fed to the computer is converted into a string of 1s and 0s and then interpreted by the computer to carry out a task. Therefore, in order to talk to the computer, you must first translate your English instructions into binary.

Directly translating what you want the computer to do into machine code is extremely difficult, almost impossible in fact, and would take a very long time even if you could manage it. This is where programming languages come into play.

Programming languages are fundamentally a middleman for translating a program into machine code, the series of 0s and 1s that the computer can understand. These languages are much easier for humans to learn than machine code, and are thus very useful for programmers.
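
For instance, here is a minimal sketch in C++ (the exact program is just an illustration; any of the languages discussed below would do). The source line is easy for a human to read, and the compiler translates it into the binary machine code the processor actually executes.

//A human-readable C++ instruction the compiler turns into machine code
#include <iostream>
using namespace std;

int main() {
    // We write a readable instruction; the compiler translates it
    // into the 1s and 0s the computer understands.
    cout << "Hello, computer!" << endl;
    return 0;
}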

{"A programming language is not English and not machine code, it is somewhere in the middle"};

How Do Programming Languages Vary?

There are many different programming languages out there, each of which has its own unique uses.

  • Java and Python: General Purpose languages.
  • HTML/CSS: Designed for specific tasks like web page design.


Each language also varies in how powerful it is:
  • JavaScript is a scripting language and isn’t typically used for heavy computational problems.
  • Java and Python can carry out much more computationally taxing processes.


We measure a programming language’s power level by how similar it is to machine code. Low-level programming languages such as Assembly or C are closer to binary than a high-level programming language such as Java or Python.

The basic idea is that the lower the level of your programming language, the more your code will resemble what the machine can interpret as instructions.

So, try different languages and decide which one’s rules, interface, and level of specification you like the best.  

If you like this post and find it useful then you can show your support by donating a small amount from your heart. You can also show your support by following us on Facebook and Twitter.

Introduction to Asymptotic Notations.

Before implementing any algorithm as a program, we need to analyze which algorithm is good in terms of time and space.

Time: which algorithm executes faster.
Memory: which algorithm uses less memory.

Here we are mainly going to focus on calculating the Time Complexity which generally depends on the size of the input. Let's understand this point with the help of an example.


Suppose we have an integer array and we want to add an element at the beginning of the list. To perform this operation, we need one extra space at the end of the array, and we have to shift all the elements of the array by one unit; the number of shifts we make is equal to the number of elements present in the given array.
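
As a minimal C++ sketch of this operation (the array contents and sizes here are just for illustration), the loop below shifts every existing element one position to the right before placing the new element at index 0, so the number of shifts equals the number of elements:

//Inserting an element at the beginning of an array in C++
#include <iostream>
using namespace std;

int main() {
    int arr[6] = {2, 4, 6, 8, 10};  // capacity 6, currently 5 elements in use
    int n = 5;                      // number of elements in use

    // Shift all n elements one position to the right: n shifts in total.
    for (int i = n; i > 0; i--) {
        arr[i] = arr[i - 1];
    }
    arr[0] = 1;  // place the new element at the beginning
    n++;

    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";  // prints: 1 2 4 6 8 10
    }
    cout << endl;
    return 0;
}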


So from this small example, we can understand that the running time mostly depends on the size of the input. Therefore, if the size of the input is n, then f(n), a function of n, denotes the time complexity. Example:

f(n) = 5n^2 + 6n + 12. For n = 1, f(1) = 5 + 6 + 12 = 23, so the constant term 12 contributes 12/23 ≈ 52.17% of the total.

From the above example, we might conclude that most of the time is consumed by the constant 12. But wait: we cannot conclude that by checking a single value of n; we need to calculate for bigger values of n. Below is the calculation for bigger n values.
n       5n^2      6n        12
1       21.74%    26.09%    52.17%
10      87.41%    10.49%    2.09%
100     98.79%    1.19%     0.02%
1000    99.88%    0.12%     0.002%

Now if we observe the above table, we find that for bigger values of n, the 5n^2 term takes most of the time. So while calculating the time complexity we can neglect the remaining terms, because the dominant term alone gives an approximate result that is very close to the actual one. This approximate measure of time complexity is called Asymptotic Complexity.
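
If you want to reproduce the table yourself, a short C++ sketch like this one (the input values simply mirror the table above) prints each term’s share of f(n) = 5n^2 + 6n + 12:

//Share of each term of f(n) = 5n^2 + 6n + 12 for growing n
#include <iostream>
using namespace std;

int main() {
    long long inputs[] = {1, 10, 100, 1000};
    for (long long n : inputs) {
        double a = 5.0 * n * n;    // 5n^2 term
        double b = 6.0 * n;        // 6n term
        double c = 12.0;           // constant term
        double total = a + b + c;  // f(n)
        cout << "n=" << n
             << "  5n^2: " << 100 * a / total << "%"
             << "  6n: "   << 100 * b / total << "%"
             << "  12: "   << 100 * c / total << "%" << endl;
    }
    return 0;
}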


What are Asymptotic Notations?

Asymptotic notations are mathematical tools to measure the space and time complexity of an algorithm. There are mainly three asymptotic notations used to show the time complexity of the algorithm.

O(n) Big O notation. (Worst Case)

Big O notation is the most used notation to measure the performance of any algorithm by defining its order of growth.

(Figure: Big O notation growth graph.)

Assuming that the running time increases with the input as in the above graph, what is the worst case, or upper bound, for this function?
We have to find another function that is greater than the given f(n) after some limit n = n0; let’s call it c.g(n).


For any given algorithm, we should be able to find its time complexity, f(n), in terms of the input size n. Having found it, we try to bound that function by another function, say c.g(n), in such a way that after some input n0, the value of c.g(n) is always greater than f(n).

f(n) <= c.g(n) for all n >= n0,
where c and n0 are constants, c > 0 and n0 >= 1.

Definition: f(n) ∈ O(g(n)) if there exist constants c > 0 and n0 >= 1 such that 0 <= f(n) <= c.g(n) for all n >= n0.

Example: Is f(n) big O of g(n) [f(n) = O(g(n))], where f(n) = 3n+2 and g(n) = n?

Solution: First we have to find two constants c and n0 such that
f(n) <= c.g(n), where c > 0 and n >= n0.
3n+2 <= c.n
Let’s take c = 4:
3n+2 <= 4n
=> n >= 2
Therefore, f(n) = O(g(n)).
Therefore, g(n) = n bounds the function f(n), which means f(n) is upper bounded by g(n).

So if n bounds this f(n), then n^2 will certainly bound it too. If g(n) = n satisfies the definition, then any function growing faster than n (n^2, n^3, ..., n^n) will also bound 3n+2. But keep in mind that Big O is an upper bound, so although all of these answers are correct, we always go for the least, or tightest, bound, and n is the tightest bound for the given f(n).
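
As a quick sanity check (the loop limit of 100 is arbitrary), a small C++ sketch can confirm numerically that f(n) = 3n+2 never exceeds 4n once n >= 2:

//Numerically checking 3n+2 <= 4n for n >= 2
#include <iostream>
using namespace std;

int main() {
    bool holds = true;
    for (int n = 2; n <= 100; n++) {
        if (3 * n + 2 > 4 * n) {  // would violate the Big O bound
            holds = false;
            break;
        }
    }
    if (holds) {
        cout << "3n+2 <= 4n holds for all 2 <= n <= 100" << endl;
    } else {
        cout << "Bound violated!" << endl;
    }
    return 0;
}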

Ω(n) Omega notation. (Best Case)

Omega notation gives us a lower bound of the complexity in the best-case analysis. In this notation, we analyze the minimum number of steps that need to be executed. For example, in linear search, the best case is when the element you are searching for is at the first position.
(Figure: Omega notation growth graph.)

We have to find another function that is smaller than the given f(n) after some limit n = n0; let’s call it c.g(n).
Having found it, we bound f(n) from below by c.g(n) in such a way that after some input n0, the value of c.g(n) is always smaller than f(n).

f(n) >= c.g(n) for all n >= n0,
where c and n0 are constants, c > 0 and n0 >= 1.

Definition: f(n) ∈ Ω(g(n)) if there exist constants c > 0 and n0 >= 1 such that 0 <= c.g(n) <= f(n) for all n >= n0.

Example: Is f(n) big Omega of g(n), where f(n) = 3n+2 and g(n) = n?

Solution:
3n+2 >= c.n is true even for c = 1.
Therefore, the condition is satisfied and 3n+2 is lower bounded by n.
The given function f(n) can be lower bounded by any function that grows more slowly than n, but it’s better to take the closest bound.

Θ(n) Theta notation. (Average Case)

Theta notation gives us a tight bound of the complexity in the average case analysis. In this notation, we calculate the average by considering different inputs, including the best and worst cases.
(Figure: Theta notation growth graph.)

Here we have to find both an upper bound and a lower bound using the same function g(n), varying only the constant c.
If f(n) is bounded by c1.g(n) and c2.g(n), then f(n) is Θ(g(n)).

Definition: f(n) ∈ Θ(g(n)) if there exist constants c1 > 0, c2 > 0 and n0 >= 1 such that c1.g(n) <= f(n) <= c2.g(n) for all n >= n0.

Example: Is f(n) big Theta of g(n), where f(n) = 3n+2 and g(n) = n?

Solution:
f(n) <= c2.g(n)
3n+2 <= 4n, true for all n >= 2.

f(n) >= c1.g(n)
3n+2 >= n, true for all n >= 1 with c1 = 1.

This means that f(n) = 3n+2 is bounded by g(n) from above as well as below: c2 = 4 for the upper bound and c1 = 1 for the lower bound, and both conditions hold for all n >= n0 = 2.

Note: Sometimes Θ is also called asymptotically equal, meaning we can take the leading term of f(n) (dropping its coefficient and the smaller terms) as g(n), because f(n) and g(n) are asymptotically equal.
Ex: f(n) = 3n^2+n+1 = Θ(n^2).
f(n) = 5n^3+n+1 = Θ(n^3).


Algorithms Introduction and Analysis


An Algorithm is a step-by-step procedure that helps us in solving computational problems. Using the right algorithm to solve a particular kind of problem increases the efficiency of code in terms of space and time complexity.
Space Complexity: Space complexity is the amount of memory space the algorithm requires to produce the desired output. The memory space is consumed by the input data, temporary data, and output data of the algorithm.
Time Complexity: Time complexity is the amount of time an algorithm requires to execute and provide the desired output. It depends on the number of iterative or recursive steps involved in the execution of the algorithm.
Sometimes people get confused between the terms Algorithm and Program. An algorithm is the design part of solving a problem, and the program is the implementation part of solving that problem.

We can write an algorithm in any language, such as plain English, or by using mathematical notation. But the program for that algorithm must be written in a programming language like C, C++, Java, or Python.

How To Analyse an Algorithm?

You can often find more than one algorithm for solving a particular problem; after that, you have to analyze them and use the most efficient one.

The analysis of an algorithm is done based on its efficiency. The two important terms used for the analysis of an algorithm are “Priori Analysis” and “Posterior Analysis”.

Priori Analysis: This is done before the actual implementation of the algorithm, when the algorithm is written in general theoretical language. In this analysis, the efficiency of the algorithm is calculated based on its complexity. It is only an approximate analysis.
Posterior Analysis: This is done after the actual implementation and execution of the algorithm using a programming language like C, C++, Java, or Python. It is the actual analysis, in which the space and time complexity of the algorithm are calculated more accurately.

Analysis of Algorithms Based on Space and Time Complexity. 

The complexity of any algorithm is based on two important factors: space complexity and time complexity. But it also depends on many other factors, such as which language, compiler, or machine you use for the implementation, so measuring the exact time is very difficult.

Therefore, the time complexity of an algorithm is not measured in time units like seconds or microseconds. It depends on the size of the input: the bigger the input, the more time an algorithm will take.

We always analyze an algorithm for bigger inputs, because algorithms can look similar for smaller inputs. If the size of the input is n, then f(n), a function of n, denotes the time complexity.

For example:
The time complexity of a program is given by a function f(n) = 2n^2+2n+1. If we analyze the function for different inputs (n=1, n=10, n=100, n=1000), we can observe that the n^2 term dominates the other terms in the equation. Since it is difficult to calculate the exact function for time complexity, we can ignore the smaller terms like n and the constants; in asymptotic terms, the time complexity of the given function is O(n^2).
This method of measuring approximate complexity is known as asymptotic complexity.

Now, I hope you understand the above example and the term asymptotic. Even if you don't get it completely, don't worry; we are going to cover this in more detail soon.

First, we need to learn more about the types of asymptotic notations. There are three asymptotic notations used to calculate the space and time complexity of an algorithm: Big O, Big Ω, and Big Θ. But we are not going to calculate all three for every algorithm.

Types of Asymptotic notation.

We use these three notations to express the complexity of an algorithm in three different cases: 

Big O is going to give the worst-case time, or upper bound, which means that if we express a time complexity in terms of Big O, it says that in any case your time will not exceed this.

Big Ω is going to give the best-case time, or lower bound, which means that in any case you can’t achieve better than this. In this notation, we analyze the minimum number of steps that need to be executed.

Big Θ is going to say the average case time, somewhere in between the worst and best. In this notation, we calculate the average by considering different types of inputs including the best case and the worst case.

In practice, we are not interested in what is the best time the algorithm could execute. We are always interested in what is the worst-case time that will be taken by this algorithm for any input.

Lower Bound < Average Bound < Upper Bound

We use the average case when the best case and the worst case are the same, that is, when both take the same time.

Example: Assume an array data structure of length n.
(Figure: an array of n elements used for the linear search example.)
We want to search for an element using linear search. If the element we want to find is 5, then in the best case we can find it in 1 comparison; in a single step we find it.

=> Ω(1) time complexity, and it says that you can never go below this.

Now if we want to search for 3 among the n elements in the array, then in the worst case we need to compare all the elements: O(n), and it says that you can never go beyond this complexity.

C++ Example Code for Linear Search.

//Linear Search Algorithm code in C++
#include <iostream>
using namespace std;

int linearSearch(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) {
            return i; // return the index of the target element if found
        }
    }
    return -1; // return -1 if target element is not found
}

int main() {
    int arr[] = {5, 2, 8, 3, 1, 9};
    int n = sizeof(arr) / sizeof(arr[0]);
    int target = 3;

    int index = linearSearch(arr, n, target);
    if (index != -1) {
        cout << "Element found at index: " << index << endl;
    } else {
        cout << "Element not found." << endl;
    }

    return 0;
}
Output:
Element found at index: 3

In the above code, we have implemented the linear search algorithm to find the target element in the given array. The algorithm iterates through each element of the array and compares it with the target element. If a match is found, the index is returned; otherwise, -1 is returned to indicate that the element was not found.

Now I hope you get some idea of how we can calculate the best, worst, and average time complexity of an algorithm. We are going to practice a lot of questions to master this concept in our upcoming articles. 


Static Arrays and Dynamic Arrays.


An array is the most useful data structure because it forms a fundamental building block for all other data structures. Using arrays and pointers, we can construct almost every data structure.

What is a Static Array?

A static array is a fixed-length container containing n elements indexable from the range [0, n-1].

What is meant by being ‘indexable’?

This means that each slot/index in the array can be referenced with a number.

When and Where is a Static Array Used?

  • Storing and accessing sequential data.
  • Temporarily storing objects.
  • Used by IO routines as buffers.
  • Lookup tables and inverse lookup tables.
  • Can be used to return multiple values from a function.
  • Used in dynamic programming to cache answers to sub-problems.


Static Array Example:

A[0] = 44
A[1] = 12
A[4] = 6
A[7] = 9
A[9] => index out of bounds!
Elements in A are referenced by their index. There is no other way to access elements in an array. Array indexing is zero-based, meaning the first element is found in position zero.
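
The same indexing rules are easy to try in C++; the values below mirror the example above (the unused slots are zero-filled only for illustration), and the out-of-bounds access is left commented out because reading past the last index is undefined behavior:

//Indexing a static array in C++
#include <iostream>
using namespace std;

int main() {
    int A[9] = {44, 12, 0, 0, 6, 0, 0, 9, 0};  // indices 0 to 8

    cout << A[0] << endl;  // 44 (the first element is at index zero)
    cout << A[1] << endl;  // 12
    cout << A[4] << endl;  // 6
    cout << A[7] << endl;  // 9
    // cout << A[9];       // index out of bounds: undefined behavior!
    return 0;
}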

What is a Dynamic Array?

A dynamic array is an array that can grow and shrink in size dynamically as needed.

How can we implement a dynamic array?

One way is to use a static array!
Create a static array with an initial capacity.
Add elements to the underlying static array, keeping track of the number of elements.

If adding another element will exceed the capacity, then create a new static array with twice the capacity and copy the original elements into it.
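
Here is a minimal C++ sketch of that idea (the class and member names are my own, just for illustration): a dynamic array backed by a static buffer that doubles its capacity whenever it runs out of room.

//A dynamic array built on top of a static array in C++
#include <iostream>
using namespace std;

class DynamicArray {
    int* data;     // underlying fixed-size buffer
    int count;     // number of elements stored
    int capacity;  // current buffer capacity
public:
    DynamicArray() : count(0), capacity(2) { data = new int[capacity]; }
    ~DynamicArray() { delete[] data; }

    void append(int value) {
        if (count == capacity) {
            // Buffer is full: allocate a buffer with twice the capacity
            // and copy the original elements into it.
            capacity *= 2;
            int* bigger = new int[capacity];
            for (int i = 0; i < count; i++) {
                bigger[i] = data[i];
            }
            delete[] data;
            data = bigger;
        }
        data[count++] = value;  // O(1) when no resize is needed
    }

    int get(int index) const { return data[index]; }  // O(1) access
    int size() const { return count; }
};

int main() {
    DynamicArray a;
    for (int i = 1; i <= 5; i++) {
        a.append(i * 10);  // triggers two doubling resizes along the way
    }
    for (int i = 0; i < a.size(); i++) {
        cout << a.get(i) << " ";  // prints: 10 20 30 40 50
    }
    cout << endl;
    return 0;
}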

The Complexity of Static and Dynamic Arrays.

Operation     Static Array    Dynamic Array
Access        O(1)            O(1)
Search        O(n)            O(n)
Insertion     N/A             O(n)
Appending     N/A             O(1)
Deletion      N/A             O(n)
The access time of static and dynamic arrays is constant because the array is indexable.

The search time of static and dynamic arrays is linear because, in the worst case, we may have to traverse all the elements.

The other operations, insertion, appending, and deletion, are not possible with a static array because a static array is fixed in size; it cannot grow larger or smaller.

However, insertion, appending, and deletion are possible with a dynamic array. Insertion and deletion take linear time because we have to shift elements from their positions. Appending takes constant time (amortized, allowing for the occasional doubling resize) because it adds an element at the end of the array.

Difference Between Preferred DNS and Alternate DNS Server.


Switching to the best DNS server not only increases page-loading speed but also helps in accessing several websites that were not accessible before. Changing the computer’s Preferred and Alternate DNS to Google Public DNS (8.8.8.8 and 8.8.4.4) is widely considered a good choice.

While changing the Preferred and Alternate DNS servers on my computer, I found myself wondering about the difference between the two. After talking with some experts I got the answer to my question, and in this post I am going to share that knowledge with you all.

First, to understand the difference between Preferred and Alternate DNS, you have to understand what a DNS server is and how it works.

What is a DNS Server and How Does it Work?

The Domain Name System or DNS is a global database of the numerical IP addresses that lie behind the ‘friendly’ website names we prefer.

DNS is like a phone book for the internet. When you type a web address such as www.algolesson.com (also known as a hostname) into your browser, a DNS server looks up the website’s numerical (IP) address on the internet. Your request is then passed on to that server and the page is retrieved.

Difference between Preferred and Alternate DNS.

Of course, there isn’t just one single DNS server sitting in a dusty room somewhere, but lots of them, run by numerous companies globally (including Google). These servers synchronize in various ways so that, in theory at least, the database on one DNS server is the same as on another. In practice, though, some DNS servers might be less up to date than others, so some might contain stale web addresses while others might lack new ones.

All this brings us to the difference between Preferred and Alternate. These are really Microsoft Windows terms; elsewhere you might see them referred to as ‘primary’ and ‘secondary’. As a modern, internet-connected PC is more or less useless without DNS, it’s sensible to have a backup. That’s the purpose of an Alternate (or secondary) DNS server: if your PC can’t contact the Preferred (primary) server, it’ll head to the Alternate. 

You can flip the two IP addresses around and it won’t make any noticeable difference. Again, in theory a secondary DNS server might be slightly less up to date than a primary one, but in practice this will hardly ever be the case with Google’s DNS servers.

How To Print a List of files in a Folder in Windows 10.


I love watching animated films, and to store them I have a 3TB hard drive filled with old animated movies. But managing the list of movies I’ve already seen became difficult, so I decided to print the titles of these films to keep track of the list before watching the next movie.

After searching for some time, I found this old Windows Command Prompt trick to print the list of file names in a folder.

You can save the list of file names of any folder with the help of this Windows Command Prompt trick.

Print a List of files in a folder.

Press Windows+R and enter cmd to open the Command Prompt window. In the window, type cd c:\users\Probin Kumar\documents and press Enter (Probin Kumar is my user name; use your own PC’s user name). This changes the current directory to the Documents folder on the C: drive. Change the letter or the path if your external drive uses a different letter.


After the first command, type dir > list.txt and press Enter. This second command gathers all the file names and saves them to a file named list.txt.

dir > list.txt

The > list.txt part tells the command to direct all output to a file called ‘list.txt’ instead of displaying it on the screen. The file is created if it doesn’t exist or overwritten if it does.

After running these commands, open Explorer and double-click list.txt in the folder where you ran the command (the Documents folder in this example). This should open it in Notepad.

You can print the list of files from Notepad by pressing Ctrl+P if you want a hard copy. With the file list in Notepad, you can also use the Find command to search for file names: press Ctrl+F and type the full or partial name of a file to see if you already have it.

How To Delete Your Google Account Permanently.


Quitting Google forever is a big step and not for the faint-hearted. Once you’ve deleted your account, there’s no going back (although you can, of course, create a new account and start afresh). This is definitely not something you should jump into lightly; there’s no reason why you can’t keep your existing Google account but just not use it.

You can also use Google services such as Search and YouTube without logging in, although certain features are restricted if you don’t allow ‘personalization’. 

Delete Your Google Account.

If you’re absolutely sure you’re ready to ditch Google and have backed up all your data and chosen your alternatives, then deleting your account is actually very easy.

Simply go to your account page (myaccount.google.com) and click ‘Data & personalization’ in the left-hand panel, then scroll down to ‘Download, delete or make a plan for your data’ and click ‘Delete a service or your account’.


Finally, click ‘Delete your account’ and say goodbye to Google. Google may ask you to re-enter your password for confirmation.


Recover Your Google Account. 

If you change your mind within two days of deletion, there is a chance you can get your Google account back by going to the ‘Account recovery’ page.


Enter the email address or phone number associated with the account that was deleted, then click ‘Attempt to restore this account’. 

