Posted by runningmike 3 days ago
In production systems we often see Python services scaling horizontally because of the GIL limitations. If true parallelism becomes common, it might actually reduce the number of containers/services needed for some workloads.
But that also changes failure patterns — concurrency bugs, race conditions, and deadlocks might become more common in systems that were previously "protected" by the GIL.
It will be interesting to see whether observability and incident tooling evolves alongside this shift.
As much as I dislike Java the language, this is somewhere where the difference between CPython and JVM languages (and probably BEAM too) is hugely stark. Want to know if garbage collection or memory allocation is a problem in your long running Python program? I hope you're ready to be disappointed and need to roll a lot of stuff yourself. On the JVM the tooling for all kinds of observability is immensely better. I'm not hopeful that the gap is really going to close.
Generally speaking, the optimal amount of horizontal scaling is as little as you can get away with. You may want some for redundancy and geo-distribution, but past that, scaling vertically to fewer, larger processes tends to be more efficient, easier to load balance, and comes with a handful of other benefits.
You can load modules and then fork child processes. Children share memory pages with the parent via copy-on-write: the kernel only duplicates a page when one process writes to it, so read-mostly data (loaded modules, large lookup tables) stays shared and you save quite a lot of memory.
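A minimal sketch of that pattern (POSIX-only; passing the result back through the exit code is just a stand-in for real result passing):

```python
import os

# Sketch: load heavy modules once in the parent, then fork workers.
# The kernel shares the parent's pages with each child copy-on-write,
# so module code and big read-only data exist in memory only once.

def worker(n):
    return n * n              # pretend this uses the preloaded modules

pids = []
for i in range(2):
    pid = os.fork()
    if pid == 0:                      # child: do work, then exit
        os._exit(worker(i) % 256)     # exit code carries the toy result
    pids.append(pid)                  # parent: remember child pid

results = [os.WEXITSTATUS(os.waitpid(p, 0)[1]) for p in pids]
print(results)  # [0, 1]
```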
And for smaller projects it's such an annoyance. Having a simple project running, and having to muck around to get cron jobs, background/async tasks etc. to work in a nice way is one of the reasons I never reach for python in these instances. I hope removing the GIL makes it better, but also afraid it will expose a whole can of worms where lots of apps, tools and frameworks aren't written with this possibility in mind.
Not by much. The cases where you can replace processes with threads and save memory are rather limited.
There's even the multiprocessing module in the stdlib to achieve this.
With multiprocessing, process creation is expensive and each worker carries its own copy of the runtime. On top of that, data must be serialized on the way into a worker and again on the way out, and that IPC is expensive and time-consuming.
You shouldn't have to break out multiple processes, for example, to do some simple pure-Python math in parallel. It doesn't make sense to use multiple processes for something like that because the actual work you want to do will be overwhelmed by the IPC overhead.
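To make the overhead concrete, here is a small sketch of just the serialization cost a process pool pays per direction; the payload size is arbitrary. A thread would have shared the list directly and paid none of this:

```python
import pickle
import time

payload = list(range(100_000))

t0 = time.perf_counter()
blob = pickle.dumps(payload)      # hop 1: serialize to send to a worker
data = pickle.loads(blob)         # hop 2: deserialize inside the worker
t1 = time.perf_counter()

# The result then pays the same two hops coming back.
print(f"{len(blob)} bytes shuffled in {t1 - t0:.4f}s")
```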
There are also limitations: only some data can be sent between processes, since not all of your objects can be serialized for IPC.
Unless the app is constantly creating and killing processes, the process creation overhead isn't that bad, but the IPC is the killer.
And then your types aren't picklable or whatever, and now you have to change a lot of stuff to get it to work lol.
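The classic trip-up, sketched: a lambda has no importable name, so it can't be pickled and can't be shipped to a process pool worker at all.

```python
import pickle

square = lambda x: x * x   # no importable name -> pickle by reference fails

try:
    pickle.dumps(square)
    picklable = True
except (pickle.PicklingError, AttributeError):
    picklable = False

print("picklable:", picklable)  # False
```

The usual workaround is rewriting it as a top-level `def`, which is exactly the "change a lot of stuff" tax.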
All that to avoid hiring a few developers to make optimized native clients on the most popular platforms. Popular apps and websites should lose or gain carbon credits based on optimization. What is negligible for a small project becomes important when millions of users are involved, and especially for background apps.
[0] https://news.microsoft.com/apac/2020/03/17/windows-10-poweri...
I feel like I'm never wasting my time when I learn how to do things with the web platform, because it turns out the app I made for desktop and tablet works on my VR headset. Sure, if you pay me 2x the market rate and it's a sure thing, you might interest me in learning Swift and how to write iOS apps, but I'm not going to do it for a personal project, or even a moneymaking project where I'm taking on financial risk. The price of learning how to write apps for Android is that I also have to learn how to write apps for iOS, Windows, and macOS, and decide what's the least-bad widget set for Linux and learn to program for it too.
Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close -- the only real competitor is a plain ordinary web application with or without PWA features.
Only if you're ok with giving your users a badly performing application. If you actually care about the user experience, then Electron loses and it's not even close.
Python + tkinter == about the same size as electron
Java + JavaFX == about the same size as electron
Sure, there are still people who write little 20k Win32 applets for software developers, but that is really out of the mainstream.
- iOS? React Native, Ionic, Web app via Safari
- Android? Same thing
- Mac, Windows, Linux – Tauri, Electron, serve it yourself
Native? Oh boy, here we fucking go: you've spent last decade honing your Android skills? Too bad, son, time to learn Android jerkpad. XML, styles, Java? What's that, gramps? You didn't hear that everything is Kotlin now? Dagger? That's so 2025, it's Hilt/Metro/Koin now. Oh wow, you learned Compose on Android? Man, was your brain frozen for 50 years? It's KMM now, oh wait, KMM is rebranded! It's KMP now! Haha, you think you know Compost? We're going to release half baked Compost multiplatform now, which is kinda the same, but not quite. Shitty toolchain and performance worse than Electron? Can't fucking hear you over jet engine sounds of my laptop exhaust, get on my level, boy!
Perhaps I'm stating the obvious, but you deal with this with lock-free data structures, immutable data, siloing data per thread, fine-grain locks, etc.
Basically you avoid locks as much as possible.
Imo the GIL was used as an excuse for a long time to avoid building those out.
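A small sketch of the per-thread siloing idea (the workload here is made up): each thread accumulates into its own local variable and writes to its own slot, so the hot path needs no lock at all and the results are merged once at the end.

```python
import threading

N_THREADS, N_ITEMS = 4, 100_000
partials = [0] * N_THREADS          # one silo per thread, no shared counter

def count_evens(tid, items):
    local = 0                       # thread-local accumulation: no lock
    for x in items:
        if x % 2 == 0:
            local += 1
    partials[tid] = local           # each thread writes a distinct slot

chunks = [range(i, N_ITEMS, N_THREADS) for i in range(N_THREADS)]
threads = [threading.Thread(target=count_evens, args=(i, c))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(partials))  # 50000
```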
Previously we had to use ProcessPoolExecutor, which meant keeping multiple copies of the runtime and the shared data in memory and paying high IPC costs. Being able to switch to ThreadPoolExecutor was hugely beneficial in terms of both speed and memory.
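The switch really is a drop-in swap, since the two executors share an API. A sketch with a trivial stand-in workload:

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x   # stand-in for a real CPU-bound task

# Swapping ProcessPoolExecutor for ThreadPoolExecutor changes one name:
# on a free-threaded build this keeps a single copy of the data in
# memory and skips pickling entirely.
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(work, range(5)))   # map preserves input order

print(results)  # [0, 1, 4, 9, 16]
```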
It almost feels like programming in a modern (circa 1996) environment like Java.
Measure aggressively and test under real concurrency: use tracemalloc to find memory hotspots, py-spy or perf to profile contention, and fuzz C-extension paths with stress tests so bugs surface in the lab, not in production. Watch per-thread stack overhead and GC behavior, design shared state to be immutable or sharded, keep critical sections tiny, and if process-level isolation is still required, stick with ProcessPoolExecutor or expose large datasets via read-only mmap.
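For the tracemalloc part, a minimal sketch of spotting a hotspot (the big allocation is simulated):

```python
import tracemalloc

tracemalloc.start()

leaky = [bytes(1000) for _ in range(1000)]   # simulated ~1 MB hotspot

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]       # biggest allocator by line
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(top)                                   # points at the list comp above
print(f"peak traced: {peak} bytes")
```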
Edit: Never mind. If it walks like a duck and talks like a duck...
5.4: Energy consumption going down because of parallelism over multiple cores seems odd. What were those cores doing before? Better utilization causing some spinlocks to be used less or something?
5.5: Fine-grained lock contention significantly hurts energy consumption.
Greater power draw though; remember that energy is the integral of power over time.
On N cores, the power is N times greater and the time is N times smaller, so the energy is constant.
In reality, the scaling is never perfect, so the energy increases slightly when a program is run on more cores.
Nevertheless, as another poster has already written, if you have a deadline, then you can greatly decrease the power consumption by running on more cores.
To meet the deadline, you must either increase the clock frequency or increase the number of cores. The latter increases the consumed energy only very slightly, while the former increases the energy many times.
So for maximum energy efficiency, you first increase the number of cores up to the maximum while using the lowest clock frequency. Only when this is not enough to reach the desired performance do you increase the clock frequency, and as little as possible.
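A toy model of that argument. The assumptions are mine, not from the paper: dynamic power scales as f·V², voltage rises roughly linearly with frequency (so power ~ f³ per core), and parallel scaling is perfect.

```python
def energy(freq_ghz, cores, work=1.0):
    power = cores * freq_ghz ** 3        # relative units: P ~ N * f^3
    time = work / (cores * freq_ghz)     # perfect scaling assumed
    return power * time                  # energy = power * time

base = energy(1.0, 1)          # 1 core at base clock
more_cores = energy(1.0, 4)    # meet a 4x-tighter deadline with 4 cores
faster_clock = energy(4.0, 1)  # meet the same deadline by clocking 4x

print(more_cores / base, faster_clock / base)  # 1.0 16.0
```

Both options finish in a quarter of the time, but under this model cores leave energy unchanged while the frequency route costs 16x.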
5.5 depends a lot on the implementation used for locks. High energy consumption due to contention normally indicates bad lock implementations.
In the best implementations, there is no actual contention. A waiting core only reads a private cache line, which consumes very little energy, until the thread that held the lock immediately before it modifies that cache line, which causes an exit from the waiting loop. In such implementations there is no global lock variable. There is only a queue associated with a resource; threads insert themselves into the queue when they want to use the shared resource, providing to the previous thread the address where it should signal that it has finished with the resource. The single shared lock variable is thus replaced with per-thread variables that serve the same function, without access contention.
While this has been known for several decades, one can still see archaic lock implementations in which multiple cores attempt to read or write the same memory locations, causing data transfers between the caches of the various cores at a very high power cost.
Moreover, even if you use optimum lock implementations, mutual exclusion is not the best strategy for accessing a shared data resource. Even optimistic access, which is usually called "lock-free", is typically a bad choice.
In my opinion, the best method of cooperation between multiple threads is to use correctly implemented shared buffers or message queues.
By correctly implemented, I mean using neither mutual exclusion nor optimistic access (which may require retries), but dynamic partitioning of the shared buffers/queues. This is done with an atomic fetch-and-add instruction, which ensures that when multiple threads access the shared buffers or queues simultaneously, they access non-overlapping ranges. It is better than mutual exclusion because threads are never stalled, and better than "lock-free" (i.e. optimistic) access because retries are never needed.
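A sketch of the dynamic-partitioning idea. CPython exposes no public atomic fetch-and-add, so this stand-in guards the counter with a tiny lock (in C you would use `atomic_fetch_add` and threads would never block); the point it illustrates is that each claimed range is disjoint, so workers never touch overlapping data and never retry.

```python
import threading

ITEMS = list(range(10_000))
CHUNK = 256
results = [0] * len(ITEMS)

_next = 0
_next_lock = threading.Lock()    # stand-in for atomic fetch-and-add

def claim_range():
    """Atomically claim the next CHUNK-sized slice of the buffer."""
    global _next
    with _next_lock:
        start, _next = _next, _next + CHUNK
    return start, min(start + CHUNK, len(ITEMS))

def worker():
    while True:
        start, end = claim_range()
        if start >= len(ITEMS):
            return
        for i in range(start, end):        # disjoint range: no overlap,
            results[i] = ITEMS[i] * 2      # no locks inside the loop

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results[:3], results[-1])  # [0, 2, 4] 19998
```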
Unlocking Python’s Cores: Hardware Usage and Energy Implications of Removing the GIL
I am curious about the choice of a NumPy workload, since NumPy already releases the GIL around many operations, so the impact of free-threaded CPython there is more limited.
> Abstract: [...] The results highlight a trade-off. For parallelizable workloads operating on independent data, the free-threaded build reduces execution time by up to 4 times, with a proportional reduction in energy consumption, and effective multi-core utilization, at the cost of an increase in memory usage. In contrast, sequential workloads do not benefit from removing the GIL and instead show a 13-43% increase in energy consumption