Why is your code slow (or fast)?

September 18, 2025 by Rohit Roy

Imagine you need to sort a billion names for a search engine. There are dozens of ways to do it. Each one works. But some finish in seconds and others take hours. Algorithm analysis helps you compare them.

We care most about running time. Memory matters too, but time is what makes or breaks a program. An algorithm that takes too long is useless, no matter how much memory you have.

But it runs fast on my machine!

You could run two algorithms on your laptop and compare. But that result means nothing outside your machine. A fast algorithm on a high-end server might crawl on a budget PC. Hardware varies. Programming languages vary. Even coding style affects execution time.

We need a measure that is tied to the algorithm itself, not the machine running it.

That measure is input size: $n$. It is the number of elements you are working with. The length of an array. The nodes in a graph. The bits in a number. Bigger $n$ means more work. If you express running time as a function of $n$, you get a machine-independent measure. Call it $f(n)$.

What is in $f(n)$?

$f(n)$ counts operations. Basic steps like adding numbers, comparing values, or reading memory.

Suppose you are summing a list of $n$ numbers. You loop through each one and add it. That is roughly $n$ operations. So $f(n) = n$.
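
In code, the count is easy to see. A minimal sketch (plain Python, nothing beyond the standard library):

```python
def sum_list(numbers):
    """Sum a list of n numbers in one pass: roughly n additions."""
    total = 0
    for x in numbers:   # the loop body runs n times
        total += x      # one addition per element
    return total

print(sum_list([3, 1, 4, 1, 5]))  # 14
```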

Now imagine sorting that list with Bubble Sort. You compare pairs of numbers in nested loops. That leads to roughly $n \times n$ comparisons, plus some setup work. So $f(n) = n^2 + 4n + 100$.

Each term reflects a part of the algorithm. The $n^2$ comes from the nested loops. The $4n$ might be a single loop for swaps. The $100$ is fixed setup work.
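
A sketch of Bubble Sort shows where the dominant term comes from; the exact coefficients ($4n$, $100$) depend on what you choose to count, so treat them as illustrative:

```python
def bubble_sort(numbers):
    """Sort a list in place with Bubble Sort."""
    n = len(numbers)                    # fixed setup work (the constant term)
    for i in range(n):                  # outer loop: n passes
        for j in range(n - 1 - i):      # inner loop: up to n - 1 comparisons per pass
            if numbers[j] > numbers[j + 1]:
                # swap adjacent elements that are out of order
                numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j]
    return numbers

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

The nested loops are the $n^2$; everything outside them contributes the lower-order terms.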

We count these operations on an idealized machine where each step takes the same time. This removes real-world noise like slow CPUs or cache delays. The focus stays on the algorithm's logic.

Inputs are not always the same

The same algorithm can behave very differently depending on the input.

A sorted list might let a sorting algorithm finish early. A reverse-sorted list forces it to do every step. That is why we analyze three cases:

  • Best Case: The luckiest input. Linear search finds the item first. That is 1 step.
  • Worst Case: The hardest input. Linear search checks every element. That is $n$ steps.
  • Average Case: A typical random input. Linear search checks about half the list. That is roughly $n/2$ steps.

The worst case is the most useful. It guarantees performance no matter what input arrives. The average case predicts real-world behavior but requires assumptions about input patterns. The best case is rarely the one you plan for.
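
Linear search is short enough to show in full. Where the three cases come from is visible in the loop (a minimal sketch):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i      # best case: the first element matches, 1 comparison
    return -1             # worst case: every element was checked, n comparisons

print(linear_search([7, 3, 9, 4], 9))  # 2
print(linear_search([7, 3, 9, 4], 8))  # -1
```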

Simplifying $f(n)$

For large inputs, not all terms in $f(n)$ matter equally.

Take $f(n) = n^2 + 4n + 100$. At $n = 1{,}000{,}000$, the $n^2$ term is a trillion. The $4n$ term is four million. The constant $100$ is negligible. The $n^2$ term dominates everything else.

This is the idea of rate of growth. It describes how running time scales as input grows. Computer scientists call this time complexity.

For large $n$:

$$f(n) = n^2 + 4n + 100 \approx n^2$$

We focus on the dominant term because it determines the algorithm's behavior at scale.
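
A quick throwaway computation makes the imbalance concrete at $n = 1{,}000{,}000$:

```python
# How much does each term of f(n) = n^2 + 4n + 100 contribute at n = 1,000,000?
n = 1_000_000
terms = {"n^2": n ** 2, "4n": 4 * n, "100": 100}
total = sum(terms.values())
for name, value in terms.items():
    print(f"{name:>4}: {value:>16,}  ({value / total:.6%} of f(n))")
# The n^2 term contributes more than 99.999% of the total.
```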

Asymptotic notation

We use asymptotic notation to describe rate of growth cleanly, without carrying all the terms.

A function $f(n)$ is asymptotic to $g(n)$ if:

$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1$$

This means $f(n)$ and $g(n)$ grow at the same rate for very large $n$. We use $g(n)$ as a simpler approximation of $f(n)$.

For $f(n) = n^2 + 4n + 100$, the natural choice is $g(n) = n^2$; it is the one that satisfies the limit above. But we can also compare $f(n)$ against $g(n) = n^3$, which grows faster and gives an upper estimate, or $g(n) = n$, which grows slower and gives a lower estimate.
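
For example, plugging $g(n) = n^2$ into the limit confirms it:

$$\lim_{n \to \infty} \frac{n^2 + 4n + 100}{n^2} = \lim_{n \to \infty} \left(1 + \frac{4}{n} + \frac{100}{n^2}\right) = 1$$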

These choices lead to three notations: Big-$O$, Big-$\Omega$, and Big-$\Theta$.

Big-$O$: the upper bound

Big-$O$ sets a ceiling. It says the function will never grow faster than this.

Formally, $f(n) = O(g(n))$ if there exist constants $c > 0$ and $n_0 > 0$ such that:

$$f(n) \leq c \cdot g(n) \quad \text{for all } n \geq n_0$$

There are infinitely many valid choices for $g(n)$. If $f(n) = n$, then $f(n) = O(n)$ is valid. So is $O(n^2)$ or $O(2^n)$. All are correct upper bounds. In practice, we pick the tightest one.

For $f(n) = n^2 + 4n + 100$, the tightest choice is $O(n^2)$. The smaller terms are dropped. Big-$O$ answers the question: how bad can it get?
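
To see the definition in action, one valid (not unique) choice of constants for this $f(n)$ is $c = 105$ and $n_0 = 1$, since each lower-order term is at most its coefficient times $n^2$ once $n \geq 1$:

$$n^2 + 4n + 100 \leq n^2 + 4n^2 + 100n^2 = 105n^2 \quad \text{for all } n \geq 1$$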

Big-$\Omega$: the lower bound

Big-$\Omega$ sets a floor. It says the function will never grow slower than this.

Formally, $f(n) = \Omega(g(n))$ if there exist constants $c > 0$ and $n_0 > 0$ such that:

$$c \cdot g(n) \leq f(n) \quad \text{for all } n \geq n_0$$

Again, there are infinitely many valid choices. If $f(n) = n^2$, then $\Omega(n^2)$ is valid. So is $\Omega(n)$ or $\Omega(1)$. In practice, we pick the largest $g(n)$ that still stays below $f(n)$.

For $f(n) = 100n^2 + 10n + 50$, the tightest choice is $\Omega(n^2)$. Big-$\Omega$ answers the question: how fast can it get?
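
Here too, exhibiting one valid pair of constants is enough. Dropping the positive lower-order terms shows that $c = 100$ and $n_0 = 1$ work:

$$100 \cdot n^2 \leq 100n^2 + 10n + 50 \quad \text{for all } n \geq 1$$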

Big-$\Theta$: the tight bound

Big-$O$ gives a ceiling. Big-$\Omega$ gives a floor. Big-$\Theta$ gives both at once.

Formally, $f(n) = \Theta(g(n))$ if there exist constants $c_1$, $c_2$, and $n_0$ such that:

$$c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n) \quad \text{for all } n \geq n_0$$

$f(n)$ is squeezed between two multiples of $g(n)$. It grows exactly like $g(n)$, up to constant factors.

For $f(n) = 100n^2 + 10n + 50$:

  • Upper bound: $O(n^2)$
  • Lower bound: $\Omega(n^2)$
  • Together: $\Theta(n^2)$
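
One valid choice of constants here is $c_1 = 100$, $c_2 = 160$, and $n_0 = 1$ (others work too), since $10n + 50 \leq 60n^2$ once $n \geq 1$:

$$100n^2 \leq 100n^2 + 10n + 50 \leq 160n^2 \quad \text{for all } n \geq 1$$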

Putting it all together

Here is how best, worst, and average cases look for linear search.

Worst case ($f(n) \approx n$, item is last or missing):

  • $O(n)$: at most linear time.
  • $\Omega(n)$: at least linear time.
  • $\Theta(n)$: exactly linear time.

Best case ($f(n) \approx 1$, item is first):

  • $O(1)$: at most constant time.
  • $\Omega(1)$: at least constant time.
  • $\Theta(1)$: exactly constant time.

Average case ($f(n) \approx n/2$, random input):

  • $O(n)$: at most linear time.
  • $\Omega(n)$: at least linear time.
  • $\Theta(n)$: exactly linear time.

Does $O(n)$ mean worst case?

Not automatically. $O(n)$ says the running time grows no faster than linear. It says nothing about which case you are analyzing.

For linear search:

  • Worst case: $f(n) = n$, so $O(n)$.
  • Best case: $f(n) = 1$, so $O(1)$.

Both are $O$ notation. They describe different cases. Whenever you see $O(n)$, ask: best, worst, or average?

So,

  • $O$ gives an upper bound. It answers: how bad can it get?
  • $\Omega$ gives a lower bound. It answers: how fast can it get?
  • $\Theta$ gives a tight bound. It answers: what is the exact growth rate?
  • Case analysis and asymptotic notation are independent. You can apply any notation to any case.
  • In practice, we focus on worst-case Big-$O$. It guarantees performance under the hardest conditions.