The efficiency of an algorithm is measured by how its resource usage grows with the input size; Big-O notation makes this growth rate precise.

To show that 3n^2 + 3n + 10 = O(n^2), let c = 4 (there are many valid choices of c) and show that there exists an n0 for which the inequality holds for all n >= n0:

3n^2 + 3n + 10 <= 4n^2
0 <= n^2 - 3n - 10
0 <= (n - 5)(n + 2)

The last inequality holds for all n >= 5, so c = 4 and n0 = 5 witness that 3n^2 + 3n + 10 = O(n^2).
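A quick numeric check (a sketch, not part of the proof) confirms exactly where the bound starts to hold; plain Python, assuming nothing beyond the inequality itself:

    # Sanity check: 3n^2 + 3n + 10 <= 4n^2 holds exactly when n >= 5.
    for n in range(1, 1001):
        holds = 3*n*n + 3*n + 10 <= 4*n*n
        assert holds == (n >= 5), f"unexpected result at n = {n}"
    print("bound holds exactly for n >= 5 (checked n = 1..1000)")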


Example

Here is an example where n must become quite large before the dominating function actually overtakes the other.

Find an n for which 2^n > 8n^4. Taking log base 2 of both sides:

n log2 2 > log2 8 + 4 log2 n
n > 3 + 4 log2 n
(n - 3)/4 > log2 n

The last inequality first holds at n = 21 (at n = 20 it still fails: 2^20 = 1,048,576 < 8 * 20^4 = 1,280,000), so 2^n > 8n^4 for all n >= 21.
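A brute-force Python sketch finds the same crossover; nothing is assumed beyond the inequality itself:

    # Find the smallest integer n with 2^n > 8 * n^4.
    n = 1
    while 2**n <= 8 * n**4:
        n += 1
    print(n)  # -> 21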


Big-O Arithmetic

Here are some simplifications that can be applied:

  1. O(k*f) = O(f). That is, constants can be ignored.
  2. O(f*g) = O(f) * O(g). If a function is a product, then its order is the product of the orders of the factors.
  3. O(f/g) = O(f) / O(g). The same holds for a function that is a quotient.
  4. O(f) > O(g) if and only if f dominates g.
  5. O(f+g) = max[ O(f), O(g) ]. That is, terms of lower order can be ignored.
  6. Be careful with functions that involve subtraction: if f(n) = O(h(n)) and g(n) = O(h(n)), it does not follow that f(n) - g(n) = O(h(n)) - O(h(n)) = 0, as the example below shows.

For example, f(n) = 2n^2 and g(n) = n^2 are both O(n^2), yet f(n) - g(n) = 2n^2 - n^2 = n^2, which is O(n^2) rather than 0.
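A small Python illustration of this caveat, using the f and g from the example:

    # f and g are both O(n^2), yet their difference is still of order n^2, not 0.
    def f(n): return 2 * n * n
    def g(n): return n * n

    for n in (10, 100, 1000, 10000):
        print(n, (f(n) - g(n)) / n**2)  # ratio stays at 1.0, so f - g = O(n^2)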

Some other rules:

  1. The powers of n are ordered according to the exponent: n^a = O(n^b) iff a <= b.
  2. The order of log n is independent of the base taken: log_a n = O(log_b n) for all a, b > 1.
  3. Logarithms grow more slowly than any power of n: log n = O(n^a) for any a > 0, but n^a != O(log n).
  4. n^a = O(b^n) for all a and all b > 1, but b^n != O(n^a) for b > 1 (see the sketch below).
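A short Python sketch of rules 3 and 4, watching the ratios shrink toward zero:

    import math

    # Rule 3: log n = O(n^a) for any a > 0 -- here a = 0.5, and the ratio shrinks.
    for n in (2**10, 2**20, 2**40):
        print(n, math.log2(n) / n**0.5)

    # Rule 4: n^a = O(b^n) -- here a = 3, b = 2, and the ratio shrinks even faster.
    for n in (10, 50, 100):
        print(n, n**3 / 2**n)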

Common examples of domination

Let N and n denote variables and a, b, c, and d denote constants.

In order of dominance (top is most dominant)

Function    Condition                  Common name
N^N
N!                                     N factorial
a^n         dominates b^n if a > b     exponential
N^c         dominates N^d if c > d     polynomial (cubic, quadratic, etc.)
n log n                                n log n
n                                      linear
log n                                  logarithmic (regardless of base)
1                                      constant

Log scale graph of big-O complexities

> with(plots):                 # loglogplot is in the plots package
> lim := 64:                   # assumed value; lim is not defined in this excerpt
> log2 := loglogplot (log[2](x), x=1..lim, color=green):
> lin := loglogplot (x, x=1..lim, color=gold):
> nlogn := loglogplot (x*log[2](x), x=1..32, color=blue):
> quad := loglogplot (x^2, x=1..32, color=yellow):
> cube := loglogplot (x^3, x=1..lim, color=violet):
> quart := loglogplot (x^4, x=1..lim, color=maroon):
> expon := loglogplot (2^x, x=1..lim, color=red):
> fact := loglogplot (factorial(x), x=1..10, color=orange):
> display(log2, lin, nlogn, quad, cube, quart, expon, fact);   # assumed final step combining the plots


Table of growth rates (12.1)

      N   log2 N   N*log2 N       N^2          N^3            2^N           3^N
      1        0          0         1            1              2             3
      2        1          2         4            8              4             9
      4        2          8        16           64             16            81
      8        3         24        64          512            256         6,561
     16        4         64       256        4,096         65,536    43,046,721
     32        5        160     1,024       32,768  4,294,967,296           ...
     64        6        384     4,096      262,144       (Note 1)           ...
    128        7        896    16,384    2,097,152       (Note 2)           ...
    256        8      2,048    65,536   16,777,216       ????????           ...

Note 1: The value here is approximately the number of machine instructions executed by a 1-gigaflop computer in 5,000 years.
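The table can be regenerated with a few lines of Python (entries past 10 digits are printed as "huge", mirroring the notes and ellipses above):

    import math

    print("{:>6} {:>7} {:>9} {:>8} {:>11} {:>12} {:>12}".format(
        "N", "log2N", "N*log2N", "N^2", "N^3", "2^N", "3^N"))
    for N in (1, 2, 4, 8, 16, 32, 64, 128, 256):
        k = round(math.log2(N))
        vals = [N, k, N * k, N**2, N**3, 2**N, 3**N]
        cells = [str(v) if v < 10**10 else "huge" for v in vals]
        print("{:>6} {:>7} {:>9} {:>8} {:>11} {:>12} {:>12}".format(*cells))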

For maximum efficiency we want an algorithm to minimize its resource usage. The two most important resources, time and space, cannot be compared with each other directly, so time complexity and space complexity are each considered when judging algorithmic efficiency.

 

Method for determining Efficiency

The efficiency of an algorithm depends on how economically it uses time and memory space.

One way to assess time efficiency is empirical: write a program for the algorithm in any programming language, execute it, and measure the total time it takes to run. The execution time measured this way depends on a number of factors, such as:

·        Speed of the machine

·        Compiler and other system software tools

·        Operating system

·        Programming language used

·        Volume of the input data

However, to determine how efficiently an algorithm solves a given problem, we want to know how the execution time is affected by the nature of the algorithm itself. We therefore need fundamental laws that describe a program's efficiency in terms of the underlying algorithm, independent of the factors above.
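As a sketch of the empirical approach described above (Python's standard timeit module; the linear_search function is just an illustrative stand-in):

    import timeit

    def linear_search(lst, key):
        """Return the index of key in lst, or -1 if absent."""
        for i, x in enumerate(lst):
            if x == key:
                return i
        return -1

    # Time the worst case (key absent) at two sizes; the absolute numbers depend
    # on the machine, interpreter, and OS -- only the growth trend is meaningful.
    for n in (10_000, 100_000):
        data = list(range(n))
        t = timeit.timeit(lambda: linear_search(data, -1), number=100)
        print(f"n = {n:>7}: {t:.4f} s for 100 searches")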

 

Space-Time tradeoff

A space-time (or time-memory) trade-off is a way of solving a problem in less time by using more storage space, or of solving it in very little space by spending more time.

To solve a given programming problem, many different algorithms may be used. Some of these algorithms may be extremely time-efficient and others extremely space-efficient.

A time/space trade-off thus refers to a situation where you can reduce the use of memory at the cost of slower program execution, or reduce the running time at the cost of increased memory usage.
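A classic illustration (a sketch using Python's functools.lru_cache; Fibonacci is the standard textbook example, not one taken from this text):

    from functools import lru_cache

    def fib_slow(n):
        # Exponential time, constant extra space: subproblems are recomputed.
        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

    @lru_cache(maxsize=None)  # the memo table trades O(n) space for O(n) time
    def fib_fast(n):
        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

    print(fib_slow(25), fib_fast(25))  # same answer, very different costs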

 

Asymptotic Notations

Asymptotic notations are the formal language used to make meaningful statements about time and space complexity. The following three notations are most often used to represent the time complexity of algorithms:

(i) Big O

Big O is often used to describe the worst case of an algorithm.

(ii) Big Ω

Big Omega is the reverse of Big O: where Big O describes the upper bound (worst case) of an asymptotic function, Big Omega describes the lower bound (best case).

(iii) Big Θ

When an algorithm's lower and upper bounds coincide, say a complexity of both O(n log n) and Ω(n log n), it actually has complexity Θ(n log n), which means its running time always falls within a constant factor of n log n in both the best and worst cases.
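For reference, the standard formal definitions behind these three notations (using the same constants c and n0 as in the opening example):

    \begin{align*}
    f(n) = O(g(n))      &\iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \\
    f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : 0 \le c\,g(n) \le f(n) \ \text{for all } n \ge n_0 \\
    f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
    \end{align*}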

 

Best, Worst, and Average Case Efficiency

Assume a list of n values stored in an array, and suppose we want to search for a particular element. Sequential (linear) search compares the key element with each element in the list in turn.

The best case occurs when the first element of the list matches the key. The efficiency in that case is O(1), because a single comparison is enough.

Similarly, the worst case occurs when the complete list is searched and the element is found only at the end of the list, or is not found at all. The efficiency in that case is O(n), because n comparisons are required.

The average-case efficiency can be obtained by averaging the number of comparisons over all possible positions of the key: (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons, which is still O(n).
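A Python sketch of sequential search that counts comparisons and confirms all three cases (the helper name comparisons is ours, purely illustrative):

    def comparisons(lst, key):
        """Count how many comparisons sequential search makes for this key."""
        count = 0
        for x in lst:
            count += 1
            if x == key:
                break
        return count

    n = 1000
    data = list(range(n))
    print("best: ", comparisons(data, data[0]))          # 1 -> O(1)
    print("worst:", comparisons(data, -1))               # n -> O(n)
    avg = sum(comparisons(data, k) for k in data) / n
    print("avg:  ", avg, "== (n + 1)/2 =", (n + 1) / 2)  # 500.5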

How is the efficiency of an algorithm measured?

The two main measures of the efficiency of an algorithm are time complexity and space complexity, but they cannot be compared directly. So, time and space complexity are considered separately for algorithmic efficiency.

What is the efficiency of an algorithm?

In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on the usage of different resources.