Python time.perf_counter Function

Complete guide to Python's time.perf_counter function covering high-resolution timing, benchmarking, and practical examples.

Last modified April 11, 2025

This comprehensive guide explores Python’s time.perf_counter function, which provides the highest available resolution timer for benchmarking. We’ll cover performance measurement, timing comparisons, and practical examples.

Basic Definitions

The time.perf_counter function returns a high-resolution performance counter value in fractional seconds. It’s designed for measuring short durations.

Key characteristics: highest available timer resolution, monotonic (always increases), not affected by system clock changes, and ideal for benchmarking. The reference point is undefined, so only differences are meaningful.
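
A minimal sketch illustrating the last point: the raw readings are arbitrary, but the difference between two readings is a valid elapsed time.

import time

t1 = time.perf_counter()
t2 = time.perf_counter()

# The absolute readings are relative to an undefined starting point
print(f"First reading:  {t1:.6f}")
print(f"Second reading: {t2:.6f}")

# Only the difference between two readings is meaningful
print(f"Elapsed: {t2 - t1:.9f} seconds")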

Basic Performance Timing

The simplest use of time.perf_counter measures code execution time. This example shows basic usage for timing a function call.

basic_timing.py

import time

def calculate_sum(n):
    return sum(range(n))

# Start timer
start = time.perf_counter()

# Execute function
result = calculate_sum(1000000)

# Stop timer
end = time.perf_counter()

# Calculate duration
duration = end - start
print(f"Calculation took {duration:.6f} seconds")
print(f"Result: {result}")

This example demonstrates how to measure the execution time of a function. perf_counter provides high-resolution timing suitable for performance measurement.

The :.6f format specifier displays the duration with 6 decimal places for microsecond precision.

Comparing Multiple Implementations

time.perf_counter is ideal for comparing different implementations. This example compares two approaches to summing numbers.

compare_implementations.py

import time

def sum_with_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_with_builtin(n):
    return sum(range(n))

n = 1000000

# Time loop implementation
start = time.perf_counter()
result = sum_with_loop(n)
end = time.perf_counter()
print(f"Loop: {end - start:.6f} sec")

# Time built-in implementation
start = time.perf_counter()
result = sum_with_builtin(n)
end = time.perf_counter()
print(f"Built-in: {end - start:.6f} sec")

This pattern helps identify performance differences between approaches. The built-in sum is typically faster than a Python loop.

Multiple runs may be needed for reliable results due to system variability.
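
The standard library's timeit module automates this repetition, and its default timer is time.perf_counter. A self-contained sketch using timeit.repeat; the number and repeat values here are illustrative:

import timeit

def sum_with_builtin(n):
    return sum(range(n))

# Each trial runs the statement `number` times; `repeat` produces
# several trials so run-to-run variability becomes visible
trials = timeit.repeat("sum_with_builtin(1000000)",
                       globals=globals(), number=10, repeat=5)

# The per-call time of the best trial is usually the most stable estimate
print(f"Best of 5: {min(trials) / 10:.6f} sec")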

Timing Context Manager

A context manager provides a clean way to time code blocks. This example creates a reusable timing context.

timing_context.py

import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *args):
        self.end = time.perf_counter()
        self.duration = self.end - self.start
        print(f"Execution took {self.duration:.6f} seconds")

# Using the context manager
with Timer():
    # Code to time
    data = [x**2 for x in range(10000)]
    filtered = [x for x in data if x % 2 == 0]
    total = sum(filtered)

print(f"Total: {total}")

The context manager automatically handles timing start/stop and output. This pattern reduces boilerplate when timing multiple code sections.

When the manager is bound with with Timer() as timer:, the duration remains available as timer.duration after the block.
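
A short usage sketch, assuming the Timer class defined above:

with Timer() as timer:
    data = sorted(range(100000), reverse=True)

# The duration attribute persists after the block exits
print(f"Recorded duration: {timer.duration:.6f} seconds")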

Measuring Small Code Sections

perf_counter excels at measuring very short operations. This example times individual list operations.

micro_timing.py

import time

def time_operation(operation, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        operation()
    end = time.perf_counter()
    return (end - start) / n

# Define operations
def list_append():
    lst = []
    lst.append(1)

def list_concat():
    lst = []
    lst = lst + [1]

# Time operations
append_time = time_operation(list_append)
concat_time = time_operation(list_concat)

print(f"Append: {append_time:.9f} sec/op")
print(f"Concat: {concat_time:.9f} sec/op")
print(f"Ratio: {concat_time/append_time:.1f}x")

By repeating operations and averaging, we can measure very short durations. This reveals performance differences between similar operations.

The :.9f format shows nanosecond precision for these tiny durations.
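
At this scale, Python 3.7+ also offers time.perf_counter_ns, which returns an integer count of nanoseconds and avoids floating-point rounding. A brief sketch:

import time

start = time.perf_counter_ns()
lst = []
lst.append(1)
end = time.perf_counter_ns()

# The result is an exact integer count of nanoseconds
print(f"Append took {end - start} ns")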

Comparing with time.time

This example demonstrates the resolution difference between perf_counter and time.time for short intervals.

resolution_comparison.py

import time

def empty_function():
    pass

# Test with time.time
start = time.time()
empty_function()
end = time.time()
print(f"time.time resolution: {end - start:.9f} sec")

# Test with perf_counter
start = time.perf_counter()
empty_function()
end = time.perf_counter()
print(f"perf_counter resolution: {end - start:.9f} sec")

# Measure smallest detectable difference
def measure_resolution(func):
    min_diff = float('inf')
    for _ in range(100):
        start = func()
        end = func()
        diff = end - start
        if 0 < diff < min_diff:
            min_diff = diff
    return min_diff

print(f"time.time min resolution: {measure_resolution(time.time):.9f}")
print(f"perf_counter min resolution: {measure_resolution(time.perf_counter):.9f}")

perf_counter typically shows much higher resolution than time.time, making it better for microbenchmarks.

The resolution test measures the smallest non-zero timing difference detectable.
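
As a cross-check, time.get_clock_info reports each clock's claimed resolution and properties without an empirical test:

import time

for name in ("time", "perf_counter", "monotonic"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, "
          f"monotonic={info.monotonic}, "
          f"implementation={info.implementation}")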

Statistical Timing with Multiple Runs

For reliable benchmarks, multiple runs and statistical analysis help account for variability. This example shows advanced timing techniques.

statistical_timing.py

import time
import statistics

def benchmark(func, n=1000):
    times = []
    for _ in range(n):
        start = time.perf_counter()
        func()
        end = time.perf_counter()
        times.append(end - start)

    mean = statistics.mean(times)
    stdev = statistics.stdev(times)
    minimum = min(times)
    maximum = max(times)

    print(f"Mean: {mean:.9f} sec")
    print(f"StdDev: {stdev:.9f} sec")
    print(f"Min: {minimum:.9f} sec")
    print(f"Max: {maximum:.9f} sec")
    print(f"Range: {maximum/minimum:.1f}x")

def test_function():
    sum(x*x for x in range(1000))

print("Benchmark results:")
benchmark(test_function)

This approach provides a more complete picture of performance characteristics than single measurements. It helps identify variability and outliers.

Statistical measures like standard deviation show timing consistency across runs.

Best Practices

  • Use for short intervals: perf_counter is ideal for microbenchmarks

  • Multiple runs: Measure several times for reliable results

  • Warm-up: Consider discarding first measurements (see the sketch after this list)

  • System load: Run benchmarks on quiet systems

  • Relative comparisons: Compare implementations rather than absolute times
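
A sketch combining several of these practices into one helper; the warm-up count, run count, and the use of the median are illustrative choices rather than fixed rules:

import time
import statistics

def stable_benchmark(func, warmup=10, runs=100):
    # Warm-up: discard initial calls (cold caches, lazy setup)
    for _ in range(warmup):
        func()

    times = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)

    # The median is less sensitive to outliers than the mean
    return statistics.median(times)

# Compare implementations relative to each other, not absolute times
t_builtin = stable_benchmark(lambda: sum(range(10000)))
t_listcomp = stable_benchmark(lambda: sum([i for i in range(10000)]))
print(f"List comprehension vs built-in: {t_listcomp / t_builtin:.2f}x")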

Author

My name is Jan Bodnar, and I am a passionate programmer with extensive programming experience. I have been writing programming articles since 2007. To date, I have authored over 1,400 articles and 8 e-books. I possess more than ten years of experience in teaching programming.
