Efficiency of an algorithm is measured by the amount of computational resources, chiefly time and space, that it consumes.

Example
Show that 3n^2 + 3n + 10 is O(n^2). Let c = 4 (there are lots of c's to choose from) and show that there exists an n0 for which the inequality holds for all n >= n0:

    3n^2 + 3n + 10 <= 4n^2
    0 <= n^2 - 3n - 10
    0 <= (n - 5)(n + 2)

This holds for n >= 5, so c = 4 and n0 = 5 work.

Example
Here is an example where n may have to become larger in order for g(n) to dominate. Find an n for which 2^n > n^4: since 2^16 = 16^4 = 65,536, the inequality holds for n > 16.

Big-O Arithmetic
Here are some simplifications that can be applied:
2n^2 - n^2 = n^2, so constant factors and lower-order terms can be dropped. Some other rules:

    O(k * f(n)) = O(f(n)) for any constant k
    O(f(n)) + O(g(n)) = O(max(f(n), g(n)))
    O(f(n)) * O(g(n)) = O(f(n) * g(n))
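The constant-and-threshold arguments above can be checked numerically. A minimal sketch in Python (the bound f(n) = 3n^2 + 3n + 10 and the witnesses c = 4, n0 = 5 come from the worked example above):

```python
# Numerically check the big-O witnesses c = 4, n0 = 5 for f(n) = 3n^2 + 3n + 10.
def f(n):
    return 3 * n * n + 3 * n + 10

def g(n):
    return n * n

c, n0 = 4, 5

# f(n) <= c * g(n) must hold for every n >= n0.
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))

# Below the threshold the bound can fail: f(1) = 16 > 4 * g(1) = 4.
assert f(1) > c * g(1)

# Second example: 2^n eventually dominates n^4, with equality at n = 16.
assert 2**16 == 16**4
assert all(2**n > n**4 for n in range(17, 200))
```

The check does not prove the asymptotic claim, of course; it only confirms the algebraic derivation over a finite range.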
Common examples of domination
Let X and Y denote variables and a, b, c, and d denote constants. In order of dominance (top is most dominant):

    X^X
    X!
    a^X (and a^X dominates b^X when a > b > 1)
    X^a (and X^a dominates X^b when a > b)
    X log X
    X
    log X
    constants

Log scale graph of big-O complexities (Maple):

    > log2  := loglogplot(log[2](x), x=1..lim, color=green):
    > lin   := loglogplot(x, x=1..lim, color=gold):
    > nlogn := loglogplot(x*log[2](x), x=1..32, color=blue):
    > quad  := loglogplot(x^2, x=1..32, color=yellow):
    > cube  := loglogplot(x^3, x=1..lim, color=violet):
    > quart := loglogplot(x^4, x=1..lim, color=maroon):
    > expon := loglogplot(2^x, x=1..lim, color=red):
    > fact  := loglogplot(factorial(x), x=1..10, color=orange):

Table of growth rates (12.1)

    N      log2N   N*log2N   N^2      N^3          2^N             3^N
    1      0       0         1        1            2               3
    2      1       2         4        8            4               9
    4      2       8         16       64           16              81
    8      3       24        64       512          256             6,561
    16     4       64        256      4,096        65,536          43,046,721
    32     5       160       1,024    32,768       4,294,967,296   ...
    64     6       384       4,096    262,144      (Note 1)        ...
    128    7       896       16,384   2,097,152    (Note 2)        ...
    256    8       2,048     65,536   16,777,216   ????????        ...

Note 1: The value here is approximately the number of machine instructions executed by a 1-gigaflop computer in 5000 years.

For maximum efficiency of an algorithm we wish to minimize resource usage. The most important resources, time and space, cannot be compared directly, so both time complexity and space complexity are considered when assessing an algorithm's efficiency.
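The table entries can be regenerated with a short script (a sketch; the column set matches the table above, and since N = 2^k here, log2(N) is exactly k and no floating point is needed):

```python
# Regenerate the growth-rate table for N = 1, 2, 4, ..., 256.
print("N\tlog2N\tN*log2N\tN^2\tN^3\t2^N")
for k in range(9):
    n = 2 ** k          # N is a power of two, so log2(N) == k exactly
    print(f"{n}\t{k}\t{n * k}\t{n ** 2}\t{n ** 3}\t{2 ** n}")
```

Running it makes the note rows plausible at a glance: 2^64 is already about 1.8 * 10^19, which is why those cells are footnoted rather than printed in the original table.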
Method for determining Efficiency
The efficiency of an algorithm depends on how efficiently it uses time and memory space. The time efficiency of an algorithm can be measured empirically: write a program for the algorithm, execute it using any programming language, and measure the total time it takes to run. The execution time measured this way depends on a number of factors, such as:
· Speed of the machine
· Compiler and other system software tools
· Operating system
· Programming language used
· Volume of data processed
However, to determine how efficiently an algorithm solves a given problem, you would like to know how the execution time is affected by the nature of the algorithm itself. Therefore, we need fundamental laws that express the efficiency of a program in terms of the nature of the underlying algorithm.
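As a concrete illustration of empirical measurement, here is a sketch using Python's standard timeit module; the bubble-sort function is a hypothetical stand-in for "a program for a defined algorithm":

```python
import random
import timeit

def bubble_sort(data):
    """A deliberately simple O(n^2) sort used as the algorithm under test."""
    a = list(data)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.randrange(10_000) for _ in range(500)]

# Wall-clock measurement: this number depends on the machine, the Python
# implementation, the OS scheduler, and the volume of data -- exactly the
# factors listed above -- so it characterizes this run, not the algorithm.
elapsed = timeit.timeit(lambda: bubble_sort(data), number=10)
print(f"10 runs took {elapsed:.3f} s on this machine")
```

This is why the measured number is not reported as "the" efficiency of bubble sort: rerunning on another machine, or with a different n, gives a different figure, while the O(n^2) characterization stays the same.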
Space-Time tradeoff
A space-time (or time-memory) tradeoff is a way of solving a problem in less time by using more storage space, or in very little space by spending more time. To solve a given programming problem, many different algorithms may be used; some of these algorithms may be extremely time-efficient and others extremely space-efficient. A time/space tradeoff refers to a situation where you can reduce the use of memory at the cost of slower program execution, or reduce the running time at the cost of increased memory usage.
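A classic instance of the tradeoff (a sketch of my own, not from the source text): caching previously computed Fibonacci values spends memory to avoid recomputation.

```python
from functools import lru_cache

def fib_slow(n):
    """Tiny memory footprint, exponential time: recomputes subproblems."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Linear time, but stores every previously computed value."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(20) == fib_fast(20) == 6765
# fib_fast now holds 21 cached entries (n = 0..20) -- the space we traded for time.
print(fib_fast.cache_info().currsize)
```

Dropping the cache gives back the memory but restores the exponential running time; which side of the tradeoff is right depends on which resource is scarcer.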
Asymptotic Notations
Asymptotic notations form a language for making meaningful statements about time and space complexity. The following three notations are most often used to represent the time complexity of algorithms:
(i) Big O: Big O is often used to describe the worst case of an algorithm; it gives an upper bound on the growth of the running time.
(ii) Big Ω: Big Omega is the reverse of Big O: where Big O describes the upper bound (worst case) of a function asymptotically, Big Omega describes the lower bound (best case).
(iii) Big Θ: When an algorithm's complexity has matching lower and upper bounds, say O(n log n) and Ω(n log n), it actually has complexity Θ(n log n), which means the running time of that algorithm always falls within n log n in both the best case and the worst case.
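The three notations above have standard formal definitions, stated here for reference (f and g map natural numbers to nonnegative reals):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ge 0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

The worked example earlier in the section is exactly an application of the first definition, with c = 4 and n0 = 5 as the witnesses.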
Best, Worst, and Average Case Efficiency
Let us assume a list of n values stored in an array, and suppose we want to search for a particular element in this list. A sequential search algorithm looks for the key element among the n elements by comparing the key with each element in the list in turn. The best case is when the first element in the list matches the key element; the efficiency in that case is O(1), because only one comparison is needed. The worst case is when the complete list is searched and the element is found only at the end of the list, or is not found at all; the efficiency in that case is O(n), because n comparisons are required to complete the search. The average case efficiency can be obtained by finding the average number of comparisons: if the key is equally likely to be at any of the n positions, the average is (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons, which is still O(n).
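The best/worst/average analysis can be made concrete by counting comparisons in a sequential search (a sketch; the function name is my own):

```python
def sequential_search(items, key):
    """Return (index, comparisons); index is -1 if key is absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

data = list(range(1, 101))                        # n = 100 elements

assert sequential_search(data, 1) == (0, 1)       # best case: 1 comparison, O(1)
assert sequential_search(data, 100) == (99, 100)  # worst case, found last: O(n)
assert sequential_search(data, 999) == (-1, 100)  # worst case, absent: O(n)

# Average over all n successful searches: (1 + 2 + ... + n) / n = (n + 1) / 2.
avg = sum(sequential_search(data, k)[1] for k in data) / len(data)
assert avg == (len(data) + 1) / 2                 # 50.5 comparisons
```

The measured average of 50.5 comparisons for n = 100 matches the (n + 1) / 2 formula derived above.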
What is efficiency of an algorithm?
In computer science, algorithmic efficiency is a property of an algorithm that relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and its efficiency can be measured in terms of the usage of each of those resources.