Posted by WoodenChair 1/1/2026
Care about orders of magnitude instead. Combined with the latency numbers for hardware (https://gist.github.com/jboner/2841832), you'll have a good understanding of how much overhead is due to the language and which constructs to favor for speed improvements.
Just reading the code should give you a sense of its speed and where it will spend most of its time. Combined with general timing metrics, you can also get a sense of the overhead of third-party libraries (pydantic, I'm looking at you).
So yeah, I find that list quite useful during code design; it likely reduces the time spent profiling slow code in prod.
Firstly, I want to start with the fact that the base system is macOS on an M4 Pro, hence:
- Memory-related access is possibly much faster than on an x86 server.
- Disk access is possibly much slower than on an x86 server.
*) I took an x86 server as the baseline, as most applications run on x86 Linux boxes nowadays, although a good amount of the footprint is also on other ARM CPUs.
Although it probably does not change the memory footprint much, the libraries loaded and their architecture (i.e. whether or not they run under Rosetta) will change the overall footprint of the process.
As mentioned in one of the sibling comments: always inspect/trace your own workload and its performance before making assumptions. Higher-level performance optimizations all depend on the specific use case.
The list of floats is larger, despite also being simply an array of 1000 8-byte pointers. I assume that's because the int list is constructed from a range(), which has a __len__(), and therefore the list is allocated to exactly the required size; the float list is constructed from a generator expression, so it is presumably grown dynamically as the generator runs and ends up with a bit of free space at the end.
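A quick way to check this is with sys.getsizeof; the sketch below assumes a 64-bit CPython, and the byte counts in the comments are typical values that vary by version:

```python
import sys

exact = list(range(1000))                    # range() has __len__, so the list is sized exactly
grown = list(float(x) for x in range(1000))  # a generator has no length hint, so the list grows as it goes

# getsizeof measures only the list object itself (header + pointer array),
# not the int/float objects it points to.
print(sys.getsizeof(exact))  # e.g. 8056 bytes: 56-byte header + 1000 * 8-byte pointers
print(sys.getsizeof(grown))  # typically somewhat larger, due to spare capacity left at the end
```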
For example, my M4 Max running Python 3.14.2 from Homebrew (built, not poured) takes 19.73MB of RAM to launch the REPL (running `python3` at a prompt).
The same Python version launched on the same system with a single invocation of `time.sleep()`[1] takes 11.70MB.
My Intel Mac running Python 3.14.2 from Homebrew (poured) takes 37.22MB of RAM to launch the REPL and 9.48MB for `time.sleep`.
My number for "how much memory it's using" comes from running `ps auxw | grep python`, taking the value of the resident set size (RSS column), and dividing by 1,024.
1: python3 -c 'from time import sleep; sleep(100)'
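If you want a comparable number from inside the process itself, here is a minimal sketch using the stdlib `resource` module (my own alternative, not the ps-based method above; it reports peak rather than current RSS, and the units differ by platform):

```python
import resource
import sys

# Peak resident set size of the current process - not the instantaneous RSS
# that ps shows, but usually close for a freshly started interpreter.
rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# macOS reports ru_maxrss in bytes; Linux reports it in kilobytes.
mb = rss / (1024 * 1024) if sys.platform == "darwin" else rss / 1024
print(f"{mb:.2f} MB peak RSS")
```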
I guess you could find yourself in a situation where a 2X speedup is make or break and you're not a week away from needing 4X, etc. But not very often.
String operations in Python are fast as well. f-strings are the fastest formatting style, while even the slowest style is still measured in just nanoseconds.
Concatenation (+) 39.1 ns (25.6M ops/sec)
f-string 64.9 ns (15.4M ops/sec)
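For reference, a micro-benchmark like this can be reproduced with the stdlib `timeit` module; the snippet below is only a sketch (the literals and the value of `expression` are made up for illustration):

```python
import timeit

setup = "expression = 12345"  # hypothetical value being formatted

concat = timeit.timeit('"literal1 " + str(expression) + " literal2"', setup=setup, number=1_000_000)
fstring = timeit.timeit('f"literal1 {expression} literal2"', setup=setup, number=1_000_000)

# timeit returns total seconds for `number` executions; convert to ns per operation
print(f"concatenation: {concat / 1_000_000 * 1e9:.1f} ns/op")
print(f"f-string:      {fstring / 1_000_000 * 1e9:.1f} ns/op")
```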
It says f-strings are fastest, but the numbers show concatenation taking less time? I thought it might be a typo, but the bars on the graph reflect this too.

"literal1 " + str(expression) + " literal2"

vs

f"literal1 {expression} literal2"

The only case where concatenation would be faster is something like: "foo" + str(expression)

If I have only plain Python installed and a .py file that I want to test, then what's the easiest way to get a visualization of the call tree (or something similar) and the computational cost of each item?
Benchmark Iteration Process
Core Approach:
- Warmup Phase: 100 iterations to prepare the operation (default)
- Timing Runs: 5 repeated runs (default), each executing the operation a specified number of times
- Result: Median time per operation across the 5 runs
Iteration Counts by Operation Speed:
- Very fast ops (arithmetic): 100,000 iterations per run
- Fast ops (dict/list access): 10,000 iterations per run
- Medium ops (list membership): 1,000 iterations per run
- Slower ops (database, file I/O): 1,000-5,000 iterations per run
Quality Controls:
- Garbage collection is disabled during timing to prevent interference
- Warmup runs prevent cold-start bias
- Median of 5 runs reduces noise from outliers
- Results are captured to prevent compiler optimization elimination
Total Executions: For a typical benchmark with 1,000 iterations and 5 repeats, each operation runs 5,100 times (100 warmup + 5×1,000 timed) before reporting the median result.
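A minimal sketch of a harness along these lines, using only the stdlib (the function name `bench` and its defaults are assumptions for illustration, not the article's actual code):

```python
import gc
import statistics
import time

def bench(op, iterations=1_000, repeats=5, warmup=100):
    """Time `op` per the scheme above: warmup, GC disabled, median of repeated runs."""
    result = None
    for _ in range(warmup):              # warmup phase: avoid cold-start bias
        result = op()

    per_op_times = []
    gc_was_enabled = gc.isenabled()
    gc.disable()                         # keep the collector from interfering with timing
    try:
        for _ in range(repeats):
            start = time.perf_counter()
            for _ in range(iterations):
                result = op()            # capture the result so the work can't be skipped
            per_op_times.append((time.perf_counter() - start) / iterations)
    finally:
        if gc_was_enabled:
            gc.enable()
    return statistics.median(per_op_times), result

# Usage: a "fast op" per the list above, 10,000 iterations per run
d = {i: i for i in range(1_000)}
median_s, _ = bench(lambda: d[500], iterations=10_000)
print(f"{median_s * 1e9:.1f} ns per operation")
```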