Understanding Big O Notation
- Big O Notation is a mathematical notation used to describe the efficiency of an algorithm in terms of its time and space complexity.
- Time Complexity: Measures how the execution time of an algorithm changes with the size of the input.
- Space Complexity: Measures how the memory usage of an algorithm changes with the size of the input.
Big O notation describes an upper bound on an algorithm's growth rate; in practice it is most often used to characterize the worst-case number of operations as the input grows.
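As a concrete illustration of worst-case analysis, here is a minimal sketch of linear search (the function name is illustrative): the worst case occurs when the target is missing, forcing a comparison against every element.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Worst case (target missing or last): n comparisons -> O(n) time.
    Best case (target first): 1 comparison, but Big O reports the upper bound.
    Space: O(1) extra memory regardless of input size.
    """
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1
```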
Common Big O Notations
- O(1) - Constant Time: The number of operations remains the same regardless of input size.
- Example: Accessing an element in an array by index.
- O(n) - Linear Time: The number of operations increases linearly with the input size.
- Example: Iterating through an array.
- O(log n) - Logarithmic Time: The number of operations increases logarithmically as the input size grows.
- Example: Binary search.
- O(n log n) - Log-Linear Time: Common in efficient comparison-based sorting algorithms such as mergesort and heapsort; quicksort is O(n log n) on average but degrades to O(n²) in the worst case.
- O(n²) - Quadratic Time: The number of operations increases quadratically with the input size.
- Example: Bubble sort, selection sort.
- O(2ⁿ) - Exponential Time: The number of operations doubles with each additional input element.
- Example: Some recursive algorithms without memoization, such as a naive recursive Fibonacci implementation.
- O(n!) - Factorial Time: The number of operations grows factorially with the input size.
- Example: Generating all permutations of a list; brute-force solutions to the traveling salesman problem.
The slower the growth rate described by an algorithm's Big O class, the better it scales to large inputs.
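To make a few of these classes concrete, here is a minimal Python sketch (function names are illustrative): binary search halves the remaining range on each comparison (O(log n)), the naive Fibonacci recursion branches twice per call (O(2ⁿ)), and memoization collapses it to O(n) by solving each subproblem once.

```python
from functools import lru_cache

def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

def fib_naive(n):
    """O(2^n): each call spawns two more, recomputing the same subproblems."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): memoization ensures each subproblem is computed only once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Even at modest sizes the difference is dramatic: `fib_naive(35)` takes seconds, while `fib_memo(35)` is effectively instant.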
Calculating Big O Complexity
- Identify the Basic Operations: Determine the key operations that drive the algorithm's performance.
- Analyze Loops and Recursion:
- Single loop: Typically O(n).