CMU 15-112: Fundamentals of Programming and Computer Science
Class Notes: Efficiency
Learning Goal: compute how efficient a given program is and determine how to improve its efficiency. In particular:
- Recognize common Big-O function families that programs run in.
- Compute the Big-O runtime of a given program by determining how many computations occur.
- Contrast different searching and sorting algorithms and their efficiencies.
- Efficiency: Big Ideas
- Big-O
- Check 7.4
- Calculating Efficiency
- Check 7.5
- Efficiency and Algorithmic Thinking
- Efficiency: Big Ideas
When writing programs in the real world, it is not enough to write a program that works. Often, we need to write programs that work quickly, or efficiently. A slow program will not compete well against other similar programs, which can lead to loss of customers and revenue; a slow program in a time-sensitive field such as medicine may even lead to loss of lives.
When considering efficiency, we can examine a number of different program attributes, such as time (how long the program takes to run), space (how much memory it requires), bandwidth (how many messages need to be sent via the internet), etc. In 15-112, we will only consider time.
Additionally, when considering efficiency, we don't just care about how fast a program is. A program's speed may vary greatly based on the computer it runs on, what other programs are currently running, and what input it is given. To average out all of these factors, we instead investigate how the program's time changes as the size of the input changes.
As we consider how much time a program will take to run on a given size of input, we'll want to examine how many algorithmic steps a program takes, as an approximation of how much work the program does. We'll define an algorithmic step to be a single simple action that a program does (such as adding two numbers). We can then determine how efficient a program is by determining how the number of algorithmic steps grows with the size of the input.
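For example, here is a tiny function (our own illustration, not from the notes) annotated with a rough step count:
def sumList(L):
    total = 0       # 1 step
    for x in L:     # the loop body runs len(L) times
        total += x  # 1 step per iteration
    return total    # 1 step
# roughly N + 2 steps for a list of length N, so the amount of
# work grows linearly with the size of the input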
Finally, it's worth noting that the number of algorithmic steps that occur varies widely based on the provided input. We could consider a best-case input or an average-case input. However, in 15-112 (and many other courses), we'll primarily consider worst-case inputs: the inputs that lead to the most possible work done by the algorithm.
- Big-O
- Big-O Definition
Big-O Notation is a system used to describe the behavior of a function as the size of the function's input grows. We do this by calculating the number of algorithmic steps that we expect to occur in a worst-case scenario in terms of a number N, where N is the size of the input. This is usually the length of a string or list.
In 15-112, we care mostly about the highest-order term of the runtime equation, since that term will have the largest impact as the size of the input grows. Therefore, we ignore all lower-order terms and constants, and instead say that the Big-O runtime of the function is its highest-order N term, without a constant attached.
Here are a few examples with math functions:
- 3n^2 - 2n + 25 is O(n^2)
- 30000n^2 + 2n - 25 is O(n^2)
- 0.00000000001n^2 + 123456789n is O(n^2)
- 10n·log₁₇(n) + 25n - 17 is O(nlogn) (the base of a logarithm is just a constant factor, so it is dropped too)
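To see why the lower-order terms stop mattering, here is a quick numeric check (an illustration we've added, not part of the original notes):
# Compare 3n^2 - 2n + 25 against its highest-order term, 3n^2.
# As n grows, the ratio approaches 1, so the n^2 term dominates.
for n in [10, 100, 1000, 10000]:
    full = 3*n**2 - 2*n + 25
    highestOrder = 3*n**2
    print(n, full / highestOrder)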
- Common Function Families
Big-O runtimes of programs often fall into a set of common function families, which are listed below. The graphs shown below demonstrate how these function families grow with the size of N.
- O(1) - Constant
- O(logN) - Logarithmic
- O(N^0.5) - Square-Root
- O(N) - Linear
- O(NlogN) - Linearithmic/Loglinear
- O(N^2) - Quadratic
- O(N^k) - Polynomial
- O(k^N) - Exponential
Graphically: (growth-rate graphs of these function families omitted; they show the families diverging as N grows)
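For intuition, here are minimal Python sketches (our own illustrations, not from the notes) of a few of these families:
def constantTime(L):     # O(1): same work no matter how long L is
    return L[0]

def logarithmicTime(n):  # O(logN): halves n on each iteration
    steps = 0
    while n > 1:
        n = n // 2
        steps += 1
    return steps

def linearTime(L):       # O(N): visits each element once
    total = 0
    for x in L:
        total += x
    return total

def quadraticTime(L):    # O(N^2): an O(N) inner loop run N times
    count = 0
    for x in L:
        for y in L:
            if x == y:
                count += 1
    return count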
- Check 7.4
- Calculating Efficiency
In this class, we'll determine a program's function family by approximating how many algorithmic steps occur in terms of N. This section describes how to approach actually counting these steps.
- Python Builtins
Earlier, we defined an algorithmic step to be a simple action, like adding two numbers. However, Python is full of actions that are not particularly simple! Many built-in functions, such as determining whether an item is in a list, take time to run based on the size of the function's parameter. We do not expect you to memorize the builtin efficiencies; instead, you may look them up in the table provided in the link below when needed. We will include the relevant subset of this table on quizzes and exams.
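For instance, here are a few common builtins annotated with their typical runtimes (these reflect standard CPython behavior; the table linked below is the authoritative reference for this course):
L = list(range(10000))  # assume L is an arbitrary list of length N
len(L)         # O(1): Python stores the length, so no counting is needed
L.append(42)   # O(1): adds a single element to the end of the list
9999 in L      # O(N): may have to check every element
L.count(42)    # O(N): checks every element
sorted(L)      # O(NlogN): sorts a copy of the whole list
s = set(L)     # O(N): builds a set out of N elements
42 in s        # O(1): set membership is a hash lookup, not a scan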
You can find the 15-112 Builtin Runtime Table here.
- Sequences, Nesting, and Composition
Knowing the Python builtin runtimes helps us compute the runtime of a single statement, but to find the runtime of multiple statements put together, we must pay attention to whether statements are sequenced, nested, or composed.
- Sequencing
When statements are sequenced, their runtimes should be added together. The total Big-O runtime is then the highest-order term in the result.
L = [ 52, 83, 78, 9, 12, 4 ]  # assume L is an arbitrary list of length N
L.sort()                      # This is O(NlogN)
L.sort(reverse=True)          # This is O(NlogN)
L[0] -= 5                     # This is O(1)
print(L.count(L[0]) + sum(L)) # This is O(N) + O(N)

# what is the total cost here?
# O(NlogN) + O(NlogN) + O(1) + O(N) + O(N) = O(2NlogN + 2N + 1) = O(NlogN)
- Nesting
When statements are nested, we must examine how the nesting impacts how many times the statement will be called. In an if statement, consider if the statement will ever be reached; in a loop, consider how many times the loop iterates.
L = [ 52, 83, 78, 9, 12, 4 ]  # assume L is an arbitrary list of length N
for c in L:       # This loop's body is executed N times
    L[0] += c     # This is O(1)
    L.sort()      # This is O(NlogN)
print(L)          # This is O(N) (why?)

# what is the total cost here?
# N * (O(1) + O(NlogN)) + O(N) = O(N**2*logN + 2N) = O(N**2*logN)
- Composition
Finally, in composition, consider how many times each function will be called, and how the size of the functions' parameters changes.
def f(L):           # assume L is an arbitrary list of length N
    L1 = sorted(L)  # This is O(NlogN)
    return L1       # This is O(1)

def g(L):           # assume L is an arbitrary list of length N
    L1 = L * len(L) # This is O(N**2) (why?)
    return L1       # This is O(1)

L = [ 52, 83, 78, 9, 12, 4 ]  # assume L is an arbitrary list of length N
L = f(g(L))         # What is the Big-O of this?
print(L)            # This is O(N**2) (why?)

# what is the total cost here?
# O(N**2) + O(1) + O(N**2*logN**2) + O(1) + O(N**2) = O(2*N**2*logN + 2*N**2 + 2) = O(N**2*logN)
- Checking Results with time.time()
Calculating Big-O runtimes can be tricky, so it can be helpful to test whether you've gotten the function family right. To do this, you can time how long your code takes to run using the time.time() function. By calling time.time() before and after running a program, then computing the difference, you can measure how the program's running time changes as you scale up the input size.
Note: Run this code in Standard Python, as it will timeout if you run it in Brython.
# The following functions all solve the same problem:
# Given a non-negative integer n, return True if n is the sum
# of the squares of two non-negative integers, and False otherwise.

def f1(n):
    for x in range(n+1):
        for y in range(n+1):
            if (x**2 + y**2 == n):
                return True
    return False

def f2(n):
    for x in range(n+1):
        for y in range(x, n+1):
            if (x**2 + y**2 == n):
                return True
    return False

def f3(n):
    xmax = int(n**0.5)
    for x in range(xmax+1):
        for y in range(x, n+1):
            if (x**2 + y**2 == n):
                return True
    return False

def f4(n):
    xyMax = int(n**0.5)
    for x in range(xyMax+1):
        for y in range(x, xyMax+1):
            if (x**2 + y**2 == n):
                return True
    return False

def f5(n):
    xyMax = int(n**0.5)
    for x in range(xyMax+1):
        y = int((n - x**2)**0.5)
        if (x**2 + y**2 == n):
            return True
    return False

def testFunctionsMatch(maxToCheck):
    # first, verify that all 5 functions work the same
    print("Verifying that all functions work the same...")
    for n in range(maxToCheck):
        assert(f1(n) == f2(n) == f3(n) == f4(n) == f5(n))
    print("All functions match up to n =", maxToCheck)

testFunctionsMatch(100) # use a larger number to be more confident

import time
def timeFnCall(f, n):
    # call f(n) and return time in ms
    # Actually, since one call may require less than 1 ms,
    # we'll keep calling until we get at least 1 sec,
    # then divide by # of calls we had to make
    calls = 0
    start = end = time.time()
    while (end - start < 1):
        f(n)
        calls += 1
        end = time.time()
    return float(end - start)/calls*1000 # (convert to ms)

def timeFnCalls(n):
    print("***************")
    for f in [f1, f2, f3, f4, f5]:
        print("%s(%d) takes %8.3f milliseconds" % (f.__name__, n, timeFnCall(f, n)))

timeFnCalls(1001) # use a larger number, say 3000, to get more accurate timing
- Check 7.5
- Efficiency and Algorithmic Thinking
The same problem can often be solved with many different algorithms, and some algorithms are more efficient than others. In this section, we'll look at two case studies of efficiency in algorithm design: searching and sorting lists. We'll discuss this section in more depth in class.
- Searching
- Linear search
The algorithm for linear search is simple: try to find an item by checking each element in the list, in order. Since, in the worst case, every element must be checked, it takes O(N) time.
- Binary search
The algorithm for binary search is a bit more complicated. Assuming that you have a sorted list, you can narrow down the part of the list that you need to search by checking the middle element, determining which half of the list needs to be examined, then eliminating the other half. Since you halve the list each iteration, it takes O(logN) time.
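Here are minimal sketches of both searches (our own illustrations, not the official course implementations):
def linearSearch(L, target):
    # O(N): check each element in order
    for i in range(len(L)):
        if L[i] == target:
            return i
    return -1

def binarySearch(L, target):
    # O(logN): requires L to be sorted; halves the search range each step
    lo, hi = 0, len(L) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if L[mid] == target:
            return mid
        elif L[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
Note that binarySearch only works on a sorted list; if you have to sort the list first, that step alone costs O(NlogN).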
- Sorting
There are dozens of different sorting algorithms used in the world, each with its own benefits and downsides. In this class, we'll only investigate a few:
- Bubble Sort: sort the elements of a list by repeatedly comparing adjacent pairs, swapping each pair that is out of order ('bubbling up' the larger item), and continuing until a full pass makes no swaps
- Selection Sort: repeatedly find the smallest element of the list and swap it into the first open position, until all elements have been swapped in
- Merge Sort: break the list into individual elements (which are automatically sorted), then merge the sorted 'sublists' into groups of 2, then 4, then 8, etc., until the whole list is sorted
You can find implementations of each of these algorithms and descriptions of their efficiency here.
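As a quick illustration (a sketch of one of the three, assuming an in-place sort; the linked page has the course's reference versions), here is selection sort:
def selectionSort(L):
    # O(N**2): for each position, scan the rest of the list for the minimum
    n = len(L)
    for i in range(n):
        minIndex = i
        for j in range(i+1, n):
            if L[j] < L[minIndex]:
                minIndex = j
        L[i], L[minIndex] = L[minIndex], L[i]  # swap the minimum into position i
    return L
The outer loop runs N times and the inner scan does up to N comparisons, which is where the O(N**2) runtime comes from.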