The Self-Taught Computer Scientist
Chapter 1 What Is an Algorithm?
```python
# loop 1
for i in range(n):
    print(i)

# loop 2
for i in range(n):
    for j in range(n):
        for h in range(n):
            print(h)
```

What part of this program is most important for determining how many steps your algorithm takes to complete? You may think both parts of the program (the first loop and the second loop containing other loops) are important. After all, if n is 10,000, your computer will print many numbers in both loops. It turns out that the following code is irrelevant when you are talking about your algorithm's efficiency:

```python
# loop 1
for i in range(n):
    print(i)
```

To understand why, you need to look at what happens as n gets bigger. Here is the equation for the number of steps in your algorithm:

T(n) = n + n**3

When you have two nested for loops that each take n steps, they translate to n**2 (n to the second power) because if n is 10, you have to do 10 steps 10 times, or 10**2. Three nested for loops are always n**3 for the same reason. In this equation, when n is 10, the first loop in your program takes 10 steps, and the second loop takes 10**3 steps, which is 1,000. When n is 1,000, the first loop takes 1,000 steps, and the second loop takes 1,000**3 steps, which is 1 billion.

See what happened? As n gets bigger, the second part of your algorithm grows so much more quickly that the first part becomes irrelevant. For example, if you needed this program to work for 100,000,000 database records, you wouldn't care about how many steps the first part of the equation takes, because the second part will take vastly more steps. With 100,000,000 records, the second part of the algorithm would take more than a septillion steps, which is a 1 followed by 24 zeros, so it is not a reasonable algorithm to use. The first 100,000,000 steps aren't relevant to your decision.

Because the important part of an algorithm is the part that grows the fastest as n gets bigger, computer scientists use big O notation to express an algorithm's efficiency instead of a T(n) equation.
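You can verify this dominance by counting the steps yourself. The following sketch (the function name and counter variables are my own, not from the book) replaces each print with a counter so you can compare how many steps each part of the program takes as n grows:

```python
def count_steps(n):
    """Count how many steps each part of the program performs for a given n."""
    loop1_steps = 0
    loop2_steps = 0

    # loop 1: a single loop, so it runs n times
    for i in range(n):
        loop1_steps += 1  # stands in for print(i)

    # loop 2: three nested loops, so it runs n**3 times
    for i in range(n):
        for j in range(n):
            for h in range(n):
                loop2_steps += 1  # stands in for print(h)

    return loop1_steps, loop2_steps


# As n grows, loop 2 dwarfs loop 1:
# n = 10  -> loop 1 takes 10 steps, loop 2 takes 1,000
# n = 100 -> loop 1 takes 100 steps, loop 2 takes 1,000,000
for n in (10, 100):
    print(n, count_steps(n))
```

At n = 100, loop 1 contributes 100 of the 1,000,100 total steps, roughly 0.01 percent, which is why it drops out of the analysis.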
Big O notation is a mathematical notation that describes how an algorithm's time or space requirements (you will learn about space requirements later) increase as the size of n increases. Computer scientists use big O notation to create an order-of-magnitude function from T(n). An order of magnitude is a class in a classification system where each class is many times greater or smaller than the one before. In an order-of-magnitude function, you use the part of T(n) that dominates the equation and ignore everything else. The part of T(n) that dominates the equation