Efficiency of Algorithms
Efficiency
A measure of how well an algorithm uses resources, such as time and memory, to solve a problem.
We will focus on time efficiency, which is often the most critical factor when choosing an algorithm.
Note: Big O notation is a mathematical way to describe how the running time or memory use of an algorithm grows as the input size increases.
- It does not measure the exact time in seconds.
- Instead, it focuses on the rate of growth of operations as the input ($n$) becomes larger.
Why Do We Use Big O?
- To compare algorithms independent of hardware or programming language.
- To predict scalability: how the algorithm behaves as $n$ grows.
- To choose efficient algorithms for large datasets.
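To make the comparison concrete, here is a small sketch (the function names and step counters are our own, not part of the notes) that counts comparisons for a linear scan versus binary search on a sorted list. The counts stay the same on any hardware, which is exactly why Big O is machine-independent:

```python
def linear_search_steps(data, target):
    """Scan left to right; O(n) comparisons in the worst case."""
    steps = 0
    for value in data:
        steps += 1
        if value == target:
            return steps
    return steps

def binary_search_steps(data, target):
    """Halve the search range each step; O(log n) comparisons."""
    steps = 0
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            return steps
        elif data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1000))                    # sorted input of size n = 1000
print(linear_search_steps(data, 999))       # 1000 comparisons (~n)
print(binary_search_steps(data, 999))       # 10 comparisons (~log2 n)
```

For $n = 1000$ the linear scan needs about $1000$ comparisons in the worst case, while binary search needs about $10$, and the gap widens rapidly as $n$ grows.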
How Big O Works
We look at the number of steps an algorithm performs in relation to input size $n$:
| Big O | Name | Example | Description |
|---|---|---|---|
| $O(1)$ | Constant | Accessing an array element | Time does not depend on input size. |
| $O(\log n)$ | Logarithmic | Binary search | Each step halves the problem size. |
| $O(n)$ | Linear | Summing a list | Work grows directly with input size. |
| $O(n \log n)$ | Log-linear | MergeSort, QuickSort | Efficient sorting algorithms. |
| $O(n^2)$ | Quadratic | Nested loops over $n$ | Work grows with the square of input size. |
| $O(2^n)$ | Exponential | Recursive subset generation | Work doubles with each extra input. |
| $O(n!)$ | Factorial | Traveling Salesman brute force | Extremely inefficient for large $n$. |
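The growth rates in the table can be tabulated numerically. This quick sketch (our own addition, not part of the notes) prints approximate step counts for a few input sizes:

```python
import math

def growth_row(n):
    """Return approximate step counts (log2 n, n, n log2 n, n^2) for input size n."""
    return (round(math.log2(n), 1), n, round(n * math.log2(n)), n ** 2)

# Step counts grow at very different rates as n increases.
for n in (10, 100, 1000):
    print(n, growth_row(n))
```

Even at $n = 1000$, an $O(\log n)$ algorithm does about $10$ steps while an $O(n^2)$ one does a million; exponential and factorial growth are omitted here because they overflow any such table almost immediately.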
Why Efficiency Matters
- Efficient algorithms can handle larger datasets and solve problems faster.
- Inefficient algorithms may become impractical as the size of the input grows.
How to Count in Big O
To deduce the efficiency of an algorithm, consider the following steps:
- Identify the Input Size: Determine the variable that represents the size of the input (e.g., $n$ for a list of length $n$).
- Count the Operations: Estimate the number of operations the algorithm performs as the input size grows.
- Determine the Growth Rate: Use Big O notation to describe how the number of operations scales with the input size.
- When analysing an algorithm, focus on the operations that dominate its running time as the input size grows; these are the operations that determine its efficiency.
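The steps above can be applied to a simple summing algorithm. This is a sketch of our own (the annotations are our additions), not code from the notes:

```python
def sum_list(numbers):
    total = 0          # 1 operation
    for x in numbers:  # loop runs n times, where n = len(numbers)
        total += x     # 1 operation per iteration
    return total       # 1 operation

# Step 1: the input size is n = len(numbers).
# Step 2: the loop body runs n times, plus a constant amount of setup.
# Step 3: roughly n + 2 operations in total; the n term dominates, so O(n).
print(sum_list([1, 2, 3, 4]))  # 10
```

The constant `+ 2` is dropped because, as $n$ grows, it is the loop that dominates the running time.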
Single Loop vs Nested Loops
Note: Loops are a common source of repeated steps in algorithms.
- The number of iterations depends on the loop structure and the input data.
- A for loop typically iterates a fixed number of times, determined by its range.
- Consider the following Python code:
for i in range(n):
    print(i)
- The loop runs from $0$ to $n-1$, so the print(i) statement is executed $n$ times.
- A while loop continues until a condition is no longer true.
- The number of iterations depends on how the condition changes.
- Consider this Python code:
i = 1
while i <= n:
    print(i)
    i = i + 1
- The loop starts with $i = 1$ and increments $i$ by 1 each time.
- It stops when $i > n$, so the print(i) statement is executed $n$ times.
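Both loops above are single loops, executing their body $n$ times: $O(n)$. For the nested case named in this section's title, a minimal sketch (our own, with $n = 4$ chosen arbitrarily):

```python
n = 4
count = 0
for i in range(n):      # outer loop: n iterations
    for j in range(n):  # inner loop: n iterations per outer iteration
        count += 1      # executed n * n times in total

print(count)  # 16, i.e. n * n
```

The body runs $n \times n = n^2$ times, so nesting one loop inside another turns linear $O(n)$ work into quadratic $O(n^2)$ work.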