Big O Calculator
Compare time complexities from O(1) to O(n!) for any input size. View data structure operations, sorting algorithm cheat sheets, and estimated runtimes. Built for developers and CS students.
Runtimes at a glance (shown for an input size of n = 1,000, assuming roughly 10^8 operations per second):

| Class | Operations | Estimated runtime |
|---|---|---|
| Constant | 1 | 0.0 us |
| Logarithmic | 10 | 0.1 us |
| Linear | 1.0K | 10.0 us |
| Linearithmic | 10.0K | 99.7 us |
| Quadratic | 1.0M | 10.0 ms |
| Cubic | 1.0B | 10.00 sec |
| Exponential | > 10^15 | heat death of universe |
| Factorial | > 10^15 | heat death of universe |
Operations Count by Input Size
| n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(n³) | O(2^n) | O(n!) |
|---|---|---|---|---|---|---|---|---|
| 10 | 1 | 3 | 10 | 33 | 100 | 1.0K | 1.0K | 3.6M |
| 100 | 1 | 7 | 100 | 664 | 10.0K | 1.0M | overflow | overflow |
| 1.0K | 1 | 10 | 1.0K | 10.0K | 1.0M | 1.0B | overflow | overflow |
| 10.0K | 1 | 13 | 10.0K | 132.9K | 100.0M | 1.0T | overflow | overflow |
| 100.0K | 1 | 17 | 100.0K | 1.7M | 10.0B | > 10^15 | overflow | overflow |
| 1.0M | 1 | 20 | 1.0M | 19.9M | 1.0T | overflow | overflow | overflow |
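The table values above can be reproduced with a short script (a sketch; the `ops` helper and class labels are naming choices for illustration, not part of the calculator):

```python
import math

def ops(n: int, complexity: str) -> float:
    """Operation count for input size n under a given complexity class."""
    formulas = {
        "O(1)": lambda n: 1,
        "O(log n)": lambda n: math.log2(n),
        "O(n)": lambda n: n,
        "O(n log n)": lambda n: n * math.log2(n),
        "O(n^2)": lambda n: n ** 2,
        "O(n^3)": lambda n: n ** 3,
        "O(2^n)": lambda n: 2 ** n,
        "O(n!)": lambda n: math.factorial(n),
    }
    return formulas[complexity](n)

# Rounded counts match the table rows, e.g. n=10 gives 3, 33, 100.
for n in (10, 100, 1_000):
    print(n, [round(ops(n, c)) for c in ("O(log n)", "O(n log n)", "O(n^2)")])
```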
Complexity Classes
O(1) - Constant
Same time regardless of input size
Array access by index, Hash table lookup, Stack push/pop
O(log n) - Logarithmic
Halving the problem each step
Binary search, Balanced BST lookup, Searching a sorted array
O(n) - Linear
Time grows directly with input
Linear search, Array traversal, Finding max/min
O(n log n) - Linearithmic
Efficient sorting territory
Merge sort, Heap sort, Quick sort (average)
O(n²) - Quadratic
Nested loops over input
Bubble sort, Selection sort, Nested loops
O(n³) - Cubic
Triple nested loops
Naive matrix multiply, Floyd-Warshall, 3-Sum brute force
O(2^n) - Exponential
Doubles with each additional input
Recursive Fibonacci, Power set, Brute-force subsets
O(n!) - Factorial
Permutation territory
Traveling salesman (brute), All permutations, Bogosort
8 Complexity Classes
Compare O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3), O(2^n) and O(n!) with operation counts and runtime estimates.
Data Structure Reference
Access, search, insert and delete complexities for arrays, linked lists, hash tables, BSTs, heaps, stacks, queues and graphs.
Sorting Cheat Sheet
Best, average and worst case for 9 sorting algorithms including merge sort, quick sort, heap sort, Timsort and radix sort.
How to Determine Big O Complexity
Big O Notation Explained for Developers
Why Big O Matters
Big O notation is how programmers communicate about algorithm efficiency. When someone says "binary search is O(log n)," every developer understands that doubling the input barely changes the runtime. When someone says "this nested loop is O(n^2)," everyone knows it will struggle with large datasets. It is the universal language of algorithmic performance.
Understanding Big O is essential for coding interviews, system design, and writing production code that scales. A function that works fine on 100 items might freeze on 100,000 items if you chose an O(n^2) approach when an O(n log n) solution existed.
The Complexity Classes
O(1) - Constant: The operation takes the same time no matter the input. Accessing an array element by index, hash table lookup (average case), pushing to a stack. These are the fastest operations.
O(log n) - Logarithmic: The problem is halved at each step. Binary search is the classic example. For 1 billion items, binary search needs at most 30 comparisons. This is extremely efficient.
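A minimal binary search sketch (the `steps` counter is added here purely to make the logarithmic behavior visible):

```python
def binary_search(sorted_items, target):
    """O(log n): halve the search range at every step."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1, steps

# Even at a million elements, at most ~20 comparisons are needed.
index, steps = binary_search(list(range(1_000_000)), 999_999)
```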
O(n) - Linear: You look at every element once. Finding the maximum in an unsorted array, summing all elements, linear search. The time grows directly with input size.
O(n log n) - Linearithmic: This is the sweet spot for sorting. Merge sort, heap sort, and Timsort all operate here. You process every element (n), and each element involves a logarithmic number of comparisons (log n).
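Merge sort shows where the n log n comes from: log n levels of splitting, with O(n) merge work per level. A minimal sketch:

```python
def merge_sort(items):
    """O(n log n): halve the list (log n levels), merge each level in O(n)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```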
O(n^2) - Quadratic: Typically caused by nested loops. Bubble sort, selection sort, comparing every pair of elements. Acceptable for small n (hundreds), but too slow for thousands or more.
O(n^3) - Cubic: Triple nested loops. Naive matrix multiplication, Floyd-Warshall all-pairs shortest paths, brute-force 3-Sum. Already slow past a few hundred elements.
O(2^n) - Exponential: Each additional input doubles the work. Generating all subsets of a set, naive recursive Fibonacci. Practical only for n under about 25 to 30.
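Generating all subsets makes the doubling concrete: each new element doubles the number of subsets built so far. A sketch (the `power_set` name is illustrative):

```python
def power_set(items):
    """O(2^n): every element is either in or out of each subset."""
    subsets = [[]]
    for item in items:
        # Each new element doubles the count of subsets so far.
        subsets += [subset + [item] for subset in subsets]
    return subsets

# 3 elements -> 2^3 = 8 subsets; 20 elements would already be 1,048,576.
print(len(power_set([1, 2, 3])))
```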
O(n!) - Factorial: Every permutation of the input. Brute-force traveling salesman, generating all orderings. Practical only for n under about 12.
Common Interview Patterns
- Two pointers: Often reduces O(n^2) to O(n). Used for sorted array problems, removing duplicates, and container with most water.
- Hash map: Turns O(n) lookups into O(1). Used for two-sum, frequency counting, and duplicate detection.
- Binary search: Reduces O(n) search to O(log n). Works on any sorted or monotonic data.
- Sliding window: Solves subarray/substring problems in O(n) instead of O(n^2).
- Divide and conquer: Achieves O(n log n) by splitting the problem in half recursively.
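The hash map pattern in action: the classic two-sum problem, where a dictionary replaces the O(n^2) nested-loop pair check. A sketch (the `two_sum` name is the common interview label, not a library function):

```python
def two_sum(nums, target):
    """Hash map pattern: one O(n) pass instead of O(n^2) pair checks."""
    seen = {}  # value -> index of where we saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            # The pair (complement, value) sums to target.
            return seen[complement], i
        seen[value] = i
    return None
```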
Space Complexity
Big O applies to memory too. An algorithm that creates a copy of the input uses O(n) extra space. One that works in-place uses O(1) extra space. Recursive algorithms use O(depth) stack space. When evaluating algorithms, consider both time and space. Sometimes you trade one for the other (memoization uses O(n) space to save O(2^n) time).
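The memoization trade-off in one function: caching results uses O(n) space but collapses the exponential recursion tree of naive Fibonacci into O(n) time. A sketch using the standard library cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Memoized Fibonacci: O(n) space buys O(n) time instead of O(2^n)."""
    if n < 2:
        return n
    # Each subproblem is computed once and then served from the cache.
    return fib(n - 1) + fib(n - 2)

# fib(50) returns instantly; the naive recursive version would take ages.
print(fib(50))
```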
Best, Average and Worst Case
Big O usually describes the worst case. Quick sort is O(n^2) in the worst case (already sorted input with bad pivot selection) but O(n log n) on average. When people say "quick sort is O(n log n)," they usually mean the average case. Always clarify which case you are discussing, especially in interviews.