Python 3.14 is here. How fast is it?
(blog.miguelgrinberg.com) | 744 points by pjmlp 5 days ago
The build of Python that I used has tail calls enabled (option --with-tail-call-interp). So that was in place for the results I published. I'm not sure if this optimization applies to recursive tail calls, but if it does, my Fibonacci test should have taken advantage of the optimization.
That tells you how much I know about the feature. :) But in any case, I'm positive that the flag was enabled, so my results are with tail calls. I suppose part of the difference between 3.13 and 3.14 could be thanks to this.
Good to know! Thanks for confirming. Yes, I would guess that the tail call interpreter explains part of the difference between 3.13 and 3.14. Previously the overall improvement to the interpreter has been measured at 1-5%, or even 10-15% depending on the compiler version you are using: https://blog.nelhage.com/post/cpython-tail-call/
If your benchmark setup is easy to re-run, it would be awesome to see numbers that compare the tail call interpreter to the build where it is disabled, to isolate how much improvement is due to that.
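If it helps, here is a hedged sketch of the kind of deliberately recursive Fibonacci micro-benchmark the post describes (the actual script there may differ), which could be timed under both builds:

    # Hedged re-creation of a recursive Fibonacci benchmark; the post's
    # real script may differ. Pure stdlib, so it runs on any build.
    import time

    def fib(n):
        # Deliberately naive recursion: stresses raw interpreter call overhead.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    start = time.perf_counter()
    result = fib(40)
    print(f"fib(40) = {result} in {time.perf_counter() - start:.2f}s")

Running the same script against a 3.14 build configured with --with-tail-call-interp and a default build would isolate how much of the 3.13-to-3.14 delta comes from the new interpreter loop.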
Where are you getting these numbers?
Python 3.11 on Debian is around 21 MB installed size (python3.11-minimal + libpython3.11-minimal + libpython3.11-stdlib), not counting common shared dependencies like libc, ncurses, liblzma, libsqlite3, etc.
Looking at the embeddable distribution for Windows (32-bit), Python 3.11 is 17.5 MB unpacked, 3.13 is slightly smaller at 17.2 MB and 3.14 is 18.4 MB (and adds the _zstd and _remote_debugging modules).
This is the "standard" configure + make + make install, which includes libpython.a, header files, Python's own tests (python -m test), plus __pycache__, and debug symbols. Distros of course may split it up into multiple packages, split out debug symbols, etc.
See `docker run -it --rm -w /store ghcr.io/spack/all-pythons:2025-10-10`.
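For anyone who wants to reproduce those numbers, a rough sketch (assuming each version is installed into its own prefix, as in that image) is to walk each install tree and sum file sizes:

    # Sketch: report the on-disk size of one or more install prefixes.
    # The prefixes passed on the command line are whatever your builds use.
    import os
    import sys

    def tree_size(root):
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not os.path.islink(path):  # skip symlinks to avoid double counting
                    total += os.path.getsize(path)
        return total

    for prefix in sys.argv[1:]:
        print(f"{prefix}: {tree_size(prefix) / (1024 * 1024):.1f} MiB")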
To be fair, the main contributors are tests and the static library.
Just looking at libpython.so:
10M libpython3.6m.so.1.0
11M libpython3.7m.so.1.0
13M libpython3.8.so.1.0
14M libpython3.9.so.1.0
17M libpython3.10.so.1.0
24M libpython3.11.so.1.0
30M libpython3.12.so.1.0
30M libpython3.13.so.1.0
34M libpython3.14.so.1.0
The static library is likely large because of `--with-optimizations` enabling LTO (so smaller shared libs, but larger static libs).
With batteries included, growing should be a desired outcome.
Not always. See dead batteries: https://peps.python.org/pep-0594/
pypy has frequently struggled with funding. Here's a link if you want to donate this Christmas: https://opencollective.com/pypy
That >2x performance increase over 3.9 in the first test is pretty impressive. A narrow use case for sure, but assuming you can leave your code completely alone and just have it run on a different interpreter via a few CLI commands, that's a nice bump.
For a quick and dirty Python benchmark, try https://github.com/DarkStar1982/fast_langton_ant/
Run as "python3 server.py -s 10000000 -n"
A lot of Python use cases don't care about CPU performance at all.
In most cases where you do care about CPU performance, you're using numpy or scikit-learn or pandas or pytorch or tensorflow or nltk or some other Python library that's more or less just a wrapper around fast C, C++ or Fortran code. The performance of the interpreter almost doesn't matter for these use cases.
Also, those native libraries are a hassle to get to work with PyPy in my experience. So if any part of your program uses those libraries, it's way easier to just use CPython.
There are cases where the Python interpreter's bad performance does matter and where PyPy is a practical choice, and PyPy is absolutely excellent in those cases. They just sadly aren't common and convenient enough for PyPy to be that popular. (Though it's still not exactly unpopular.)
It doesn't play nice with a lot of popular Python libraries. In particular, many popular Python libraries (NumPy, Pandas, TensorFlow, etc.) rely on CPython’s C API, which can cause compatibility issues under PyPy.
FWIW, PyPy has supported NumPy and Pandas since at least v5.9.
That said, of all the reasons stated here, that's the one that keeps me from primarily using PyPy (lots of libraries are still missing).
Speaking only for myself, and in all sincerity: every year, there is some feature of the latest CPython version that makes a bigger difference to my work than faster execution would. This year I am looking forward to template strings, zstd, and deferred evaluation of annotations.
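As a small taste of one of those, here is a hedged sketch of the new zstd support, assuming the compression.zstd module from PEP 784 exposes the same one-shot compress()/decompress() helpers as the existing bz2/lzma modules (check the 3.14 docs before relying on it):

    # Sketch only: assumes compression.zstd mirrors the bz2/lzma one-shot API.
    from compression import zstd

    data = b"hello " * 10_000
    packed = zstd.compress(data)
    print(len(data), "->", len(packed), "bytes")
    assert zstd.decompress(packed) == data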
Keep in mind that the two scripts that I used in my benchmark are written in pure Python, without any dependencies. This is the sweet spot for pypy. Once you start including dependencies that have native code, their JIT is less efficient. Nevertheless, the performance for pure Python code is out of this world, so I definitely intend to play more with it!
Because in the real world, for code where performance is needed, you run the profiler and either find that the time is spent on I/O, or that the time is spent inside native code.
This might have been your experience, but mine has been very different. In my experience a typical python workload is 50% importing python libraries, 45% slow python wrapper logic and 5% fast native code. I spend a lot of time rewriting the python logic in C++, which makes it 100x faster, so the resulting performance approaches "10% fast native logic, 90% useless python imports".
If imports are slow, you need to not be writing python in the first place, because you are either on limited hardware or you are writing a very performant app.
I do a bit of performance work and find most often that things are mixed: there’s enough CPU between syscalls that the hardware isn’t being fully utilized, but there’s enough I/O that the CPUs aren’t pegged either. It is rare that the profiler finds an obvious hotspot that yields an easy win; usually it shows that with heavy refactoring you can make 10% of your load several times faster, and then you’ll need to do the same for the next 10% and so on. That is the more typical real world for me, and in that world Python is really awful when compared to rewrite-it-in-Rust.
This "There are no hot spots, it's just a uniform glowing orange" situation is why Google picked C++ and then later Rust and to some extent why they picked Go too.
IRL you will have CPU-bottlenecked pure Python code too. But it's not enough to take on the unknown risk of switching to a lesser supported interpreter. Worst case you just put in the effort to convert the hot parts to multiprocessing.
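A minimal sketch of that escape hatch, assuming the hot path is a CPU-bound function over independent inputs (cpu_heavy here is a made-up stand-in):

    # Sketch: fan a CPU-bound function out over processes to sidestep the GIL.
    from multiprocessing import Pool

    def cpu_heavy(n):
        # Placeholder for the real hot path.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [2_000_000] * 8
        with Pool() as pool:          # one worker process per core by default
            results = pool.map(cpu_heavy, inputs)
        print(sum(results))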
Also, that engineer time you would spend optimizing for performance costs more than just throwing more hardware at it.
That's the thing with single-threaded CPU operations: you can't throw more hardware at them.
We look periodically and pypy is usually unusable for us due to third-party library support. E.g. psycopg2, at least as of a couple years ago. Have not checked in a while.
pypy has a c-extension compatibility layer that allows running psycopg2 (via psycopg2cffi) and similar for numpy etc.
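If I remember the psycopg2cffi README correctly, the shim registers itself under the psycopg2 name so existing code keeps working; roughly this (verify against the project docs):

    # Sketch: route `import psycopg2` to psycopg2cffi, e.g. under PyPy.
    from psycopg2cffi import compat
    compat.register()  # after this, `import psycopg2` resolves to psycopg2cffi

    import psycopg2
    conn = psycopg2.connect("dbname=test user=postgres")  # hypothetical DSN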
Because it hasn't been blessed by the PSF. Plus it's always behind, so if you want to use the newest version of framework x, or package y then you're SOL.
Python libraries used to brag about being pure Python and backwards compatible, but during the push to get everyone on 3.x that went away, and I think it is a shame.
I keep wondering the same. It's a significant speed-up in most cases and equally easy to (apt) install
For public projects I default the shebang to use `env python3` but with a comment on the next line that people can use if they have pypy. People seem to rarely have it installed but they always have Python3 (often already shipped with the OS, but otherwise manually installed). I don't get it. Just a popularity / brand awareness thing I guess?
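Concretely, the pattern looks roughly like this (the PyPy line is only a hint for people who have it installed):

    #!/usr/bin/env python3
    # Runs noticeably faster under PyPy if you have it: pypy3 ./script.py
    import sys
    print(f"running under {sys.implementation.name} {sys.version.split()[0]}")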
I think generally people who care about performance don't tend to write their code in Python to begin with, so the culture of python is much less performance sensitive than is typical even among other interpreted languages like perl, php, ruby or javascript. The people who do need performance, but are still using python, tend to rely on native libraries doing significant numerical calculations, and many of these libraries are not compatible with PyPy. The escape hatch there is to offload more and more of the computation into the native runtime rather than to optimize the python performance.
Because all the heavy number-crunching code is already written in C or Rust or as CUDA kernels, so the actual time spent running Python code is minuscule. If it starts to matter, I would probably reach for Cython first. PyPy is an extremely impressive project, but using it adds a lot of complexity to what is usually a glue language. It is a bit like writing a JIT for Bash.
The advantage of core python is that you import stuff and 99.999999% of the time it works.
With PyPy not so much.
I've never experienced any problems that could be attributed to the speed of my Python runtime. I use Python a lot for internal scripting and devops work, but never in a production environment that scaled beyond a few hundred users. I suspect most Python use cases are like that, and CPython is just the safest option.
It's not easily available in uv. Even if I installed it outside uv, it always seems significantly out of date. I'm running code in spaces where with uv I can control all the installs of Python, so I don't benefit from using an older release for compatibility.
Yeah I'm curious about this myself. Seems to utterly destroy CPython in every one of those benchmarks.
because it turns out that optimizing performance of a programming language designed for use-cases where runtime performance doesn't matter ... doesn't matter
There's currently talk of adding gigawatts of data center capacity to the grid just for use cases where python dominates development. While a lot of that will be compiled into optimized kernels on CPU or GPU, it only takes a little bit of 1000x slower code to add up to a significant chunk of processing time at training or inference time.
What percentage of the CPU cycles are actually spent running Python though? My impression is _very_ low in production LLM workloads. I think significantly less than 1%. There are almost certainly better places to spend the effort, and if it did matter, I think they would replace Python with something like C++ or Rust.
I feel like Python should be much faster already. With all the big companies using Python and its huge popularity, I would have expected that a lot of money, work and research would be put into making Python faster and better.
Why?
There are other languages you can use to make stuff go fast. Python isn't for making stuff go fast. It's for rapid dev, and that advantage matters way more when you're already going to be slow waiting for network responses.
This has always confused me... is Python really that much better at rapid dev? I work on a Python project and every day I wish the people that started the project had chosen a different language that actually scaled well with the problem rather than Python, which they likely chose because it was for "rapid dev".
You can run Python processes in parallel for "scaling". Youtube and Uber run python backends. This is cheaper than developer time per hour.
This version runs circles around other languages. Well ... half a circle, anyway.
I'm thankful they included a compiled language for comparison, because most of the time when I see Python benchmarks, they measure against other versions of Python. But "fast python" is an oxymoron and 3.14 doesn't seem to really change that, which I feel most people expected given the language hasn't fundamentally changed.
This isn't a bad thing; I don't think Python has to be or should be the fastest language in the world. But it's interesting to me seeing Python getting adopted for a purpose it wasn't suited for (high performance AI computing). Given how slow it is, people seem to think there's a lot of room for performance improvements. Take this line for instance:
> The free-threading interpreter disables the global interpreter lock (GIL), a change that promises to unlock great speed gains in multi-threaded applications.
No, not really. I mean, yeah you might get some speed gains, but the chart shows us if you want "great" speed gains you have two options: 1) JIT compile which gets you an order of magnitude faster or 2) switch to a static compiled language which gets you two orders of magnitude faster.
But there doesn't seem to be a world where they can tinker with the GIL or optimize python such that you'll approach JIT or compiled perf. If perf is a top priority, Python is not the language for you. And this is important because if they change Python to be a language that's faster to execute, they'll probably have to shift it away from what people like about it -- that it's a dynamic, interpreted language good for prototyping and gluing systems together.
I've been writing Python professionally for a couple of decades, and there've only been 2-3 times where its performance actually mattered. When writing a Flask API, the timing usually looks like: process the request for .1ms, make a DB call for 300ms, generate a response for .1ms. Or writing some data science stuff, it might be like: load data from disk or network for 6 seconds, run Numpy on it for 3 hours, write it back out for 3 seconds.
You could rewrite that in Rust and it wouldn't be any faster. In fact, a huge chunk of the common CPU-expensive stuff is already a thin wrapper around C or Rust, etc. Yeah, it'd be really cool if Python itself were faster. I'd enjoy that! It'd be nice to unlock even more things that were practical to run directly in Python code instead of swapping in a native code backend to do the heavy lifting! And yet, in practice, its speed has almost never been an issue for me or my employers.
BTW, I usually do the Advent of Code in Python. Sometimes I've rewritten my solution in Rust or whatever just for comparison's sake. In almost all cases, choice of algorithm is vastly more important than choice of language, where you might have:
* Naive Python algorithm: 43 quadrillion years
* Optimal Python algorithm: 8 seconds
* Rust equivalent: 2 seconds
Faster's better, but the code pattern is a lot more important than the specific implementation.
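To make that concrete, here's a hedged sketch of the algorithm gap for a Fibonacci-style problem (illustrative, not measured):

    # Same problem, two algorithms: the naive version is exponential in n,
    # the memoized one is linear. No language change closes that gap.
    from functools import lru_cache

    def fib_naive(n):
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)  # ~phi**n calls

    @lru_cache(maxsize=None)
    def fib_memo(n):
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)  # n distinct calls, cached

    print(fib_memo(90))   # instant
    # fib_naive(90) would not finish in a lifetime, in Python or in Rust.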
> Or writing some data science stuff, it might be like: load data from disk or network for 6 seconds, run Numpy on it for 3 hours, write it back out for 3 seconds.
> You could rewrite that in Rust and it wouldn't be any faster.
I was asked to rewrite some NumPy image processing in C++, because NumPy worked fine for 1024px test images but balked when given 40 Mpx photos.
I cut the runtime by an order of magnitude for those large images, even before I added a bit of SIMD (just to handle one RGBX-float pixel at a time, nothing even remotely fancy).
The “NumPy has uber fast kernels that you can't beat” mentality leads people to use algorithms that do N passes over N intermediate buffers, that can all easily be replaced by a single C/C++/Rust (even Go!) loop over pixels.
Also reinforced by “you can never loop over pixels in Python - that's horribly slow!”
Same with opencv and even sometimes optimized matrix libraries in pure C++. These are all highly optimized. But often when you want to achieve something you have to chain stuff which quickly eats up a lot of cycles, just by copying stuff around and having multiple passes that the compiler is unable to fuse. You can often pretty easily beat that even if you are not an optimization god by manual loop fusion.
Fused expressions are possible using other libraries (numexpr is pretty good), but I agree that there's a reluctance to use things outside of NumPy.
Personally though I find it easier to just drop into C extensions at the point that NumPy becomes a limiting factor. They're so easy to do and it lets me keep the Python usability.
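For illustration, a tiny sketch of the temporaries problem and the numexpr-style fix mentioned above (the array names and the expression are made up):

    # The chained NumPy expression materializes intermediates (2*a, 3*b, ...),
    # while numexpr evaluates the fused expression in one pass over the data.
    # Requires `pip install numexpr`.
    import numpy as np
    import numexpr as ne

    a = np.random.rand(10_000_000).astype(np.float32)
    b = np.random.rand(10_000_000).astype(np.float32)

    out_numpy = 2 * a + 3 * b             # several passes, several temporaries
    out_fused = ne.evaluate("2*a + 3*b")  # single fused pass

    assert np.allclose(out_numpy, out_fused)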
That's because you're doing web stuff. (I/O limited). So much of our computing experience has been degraded due to this mindset applied more broadly. Despite a steady improvement in hardware, my computing experiences have been stagnating and degraded in terms of latency, responsiveness etc.
I'm not going to even go into the comp chem simulations I've been running, or that about 1/3 the stuff I do is embedded.
I do still use python for web dev, partly because as you say, it's not CPU-bound, and partly because Python's Django framework is amazing. But I have switched to rust for everything else.
As a Java backend dev mainly working on web services, I wanted to like Python, but I have found it really hard to work on a large Python project because the autocomplete just does not work as well as it does in something like Java.
Maybe it is just due to not being as familiar with how to properly set up a Python project, but every time I have had to do something in a Django or FastAPI project it is a mess of missing types.
How do you handle that with modern Python? Or is it just a limitation of the language itself?
I won’t completely argue against that, and I’ve also adopted Rust for smaller or faster work. Still, I contend that a freaking enormous portion of computing workloads are IO bound to the point that even Python’s speed is Good Enough in an Amdahl’s Law kind of way.
People will always make bad decisions. For example, I'd also squint at a developer who wanted to write a new non-performance-critical network service in C. Or a performance-critical one, for that matter, unless there was some overwhelming reason they couldn't use Rust or even C++.
And my experience is this: you start using ORMs, and maybe you need to format a large table once in a while. Then your Python just dies. Bonus points if you're using async to service multiple clients with the same interpreter.
And you're now forced to spend time hunting down places for micro-optimizations. Or worse, you end up with a weird mix of Cython and Python that can only be compiled on the developer's machine.
LOL, Python is plenty fast if you make sure it calls C or Rust behind the scenes. Typical of 'professional' Python people: something too slow? Just drop into C. It surely sounds weird to everyone who complains about Python being slow when the response is along these lines.
But that’s the whole point of it. You have the option to get that speed when it really matters, but can use the easier dynamic features for the very, very many use cases where that’s appropriate.
This is an eternal conversation. Years ago, it was assembler programmers laughing at inefficient C code, and C programmers replying that sometimes they don’t need that level of speed and control.
People really misconstrue the relationship between Python and C/C++ in these discussions.
Those libraries didn't spring out of thin air, nor were they ever existing.
People badly wanted to write and interface in Python; that's why you have all these libraries with substantial code in another language, yet research and development didn't just shift to that language.
TensorFlow is a C++ library with a Python wrapper. PyTorch has supported a C++ interface for some time now, yet virtually nobody actually uses TensorFlow or PyTorch in C++ for ML R&D.
If python was fast enough, most would be fine, probably even happy to ditch the C++ backends and have everything in python, but the reverse isn't true. The C++ interface exists, and no-one is using it. C++ is the replaceable part of this equation. Nobody would really care if Rust was used instead.
Even as a Fortran programmer, the majority of my flops come from BLAS, LAPACK, and those sort of libraries… putting me in the exact same boat as the Python programmers, really. The “professional” programmers in general don’t worry too much about tying their identities to language choices, I think.
This is a very common pattern in high level languages and has been a thing ever since Perl had first come onto the scene. The whole point was that you use more ergonomic, easier to iterate languages like Perl or Python for most of your logic and you drop down into C, C++, Zig, or Rust to write the performance sensitive portions of your code.
When compiled languages became popular again in the 2010s there was a renewed effort into ergonomic compiled languages to buck this trend (Scala, Kotlin, Go, Rust, and Zig all gained their popularity in this timeframe) but there's still a lot of code written with the two language pattern.
Exactly, most Python devs neither need nor care about perf. Most applications don't even need perf, because whether it's .1 second or .001 seconds, the user is not going to notice.
But this current quest to make Python faster is precisely because the sluggishness is noticeable for the task it's being used for most at the moment. That 6-second difference you note between the optimal Python and the optimal Rust is money on the table if it translates to higher hardware requirements or more server time. When everything is optimal and you could still be 4x faster, that's a tough pill to swallow if it means spending more $$$.
> most Python devs neither need nor care about perf.
You do understand that's a different but equivalent way of saying, "If you care about performance, then Python is not the language for you.", don't you?
It's pretty simple. Nobody wants to do ML R&D in C++.
Tensorflow is a C++ library with python bindings. Pytorch has supported a C++ interface for some time now, yet virtually nobody uses C++ for ML R&D.
The relationship between Python and C/C++ is the inverse of the usual backend/wrapper cases. C++ is the replaceable part of the equation. It's a means to an end. It's just there because python isn't fast enough. Nobody would really care if some other high perf language took its place.
Speed is important, but C++ is even less suited for ML R&D.
I agree. Unless they make it like 10x faster it doesn't really change anything. It's still a language you only use if you absolutely don't care whatsoever about performance and can guarantee that you never will.
Well, that's not true at all. Scientists care about performance, but it turns out that Python is really good for number crunching since it is really good at using very fast C libraries. I know people who use pandas to manipulate huge datasets from radar astronomy. Also, of course, it's used in machine learning. If Python were "only" used in situations where you don't care about performance, it would not be used in so many scenarios that definitely need high performance. Sure, it is not pure Python, but it's still Python being used, just used to orchestrate C libraries.
If you’re actually building and shipping software as a business, Python is great. The advantages of Python for a startup are many: a large pool of talent that can pick up the codebase on essentially day 1; it's fairly easy to reason about; it's mature; code velocity is high; and there's typically one and only one way to do things, as opposed to JavaScript. There is way more to the story than raw performance.
The counterargument used to be, the heavy lifting will be offloaded to python modules written in C, like numpy.
Which was true, but maybe not the strongest argument. Why not use a faster language in the first place?
But it's different now. There's huge classes of problems where pytorch, jax &co. are the only options that don't suck.
Good luck competing with python code that uses them on performance.
> Why not use a faster language in the first place?
Well for the obvious reason that there isn't really anything like a Jupyter notebook for C. I can interactively manipulate and display huge datasets in Python, and without having to buy a Matlab license. That's why Python took off in this area, really
>Which was true, but maybe not the strongest argument. Why not use a faster language in the first place?
Because most faster languages suck donkey balls when it comes to using them quickly and without ceremony. Never mind trying to teach them to non-programmers (e.g. physics, statistics, etc. people)...
>>> you absolutely don't care whatsoever about performance and can guarantee that you never will.
Those are actually pretty good bets, better than most other technological and business assumptions made during projects. After all, a high percentage of projects, perhaps 95%, are either short term or fail outright.
And in my own case, anything I write that is in the 5% is certain to be rewritten from scratch by the coding team, in their preferred language.
Sure but you're still screwing yourself over on that 5% and for no real reason - there are plenty of languages that are just as good as Python (or better!) but aren't as hilariously slow.
And in my experience rewrites are astonishingly rare. That's why Dropbox uses Python and Facebook uses PHP.
Obtuse statement. There are many ways of speeding up a python project if requirements change.
A painful rewrite in another language is usually the only option in my experience.
If you're really lucky you have a small hot part of the code and can move just that to another language (a la Pandas, Pytorch, etc.). But that's usually only the case for numerical computing. Most Python code has its slowness distributed over the entire codebase.
People use Python for things where performance matters, and it's fine
Probably people at some point were making the same arguments about ASM and C. How many people, though, write ASM these days? Not arguing that it isn't a relevant point for now; obviously Rust / C are way faster.
I doubt it. C is well within 2x of what you can achieve with hand written assembly in almost every case.
Furthermore writing large programs in pure assembly is not really feasible, but writing large programs in C++, Go, Rust, Java, C#, Typescript, etc. is totally feasible.
As someone who was a hardcore python fanboy for a long time, no, no it won't. There are classes of things that you can only reasonably do in a language like rust, or where go/kotlin will save you a crazy amount of pain. Python is fine for orchestration and prototyping, but if it's the only arrow you have in your quiver you're in trouble.
Completely agree, Python is great for its simple syntax, C-interop and great library ecosystem, but it is a pain to debug, deploy, and maintain in more complex use cases, and doesn't play as nicely as other languages with modern stacks (eg. k8s). What is pleasure for the developer (no explicit typing, wild i/o-as-you-go, a library for everything) is pain for the maintainer (useless error messages, sudden exceptions of lacking UAC, dependency hell).
Go, Kotlin and Rust are just significantly more modern and better designed, incorporating the lessons from 90s languages like Python, Ruby and Java.
I know sometimes performance doesn’t matter, and Python is certainly useful, but it’s not fast. It can be fast enough, and they’ve put a lot of effort into making fast libraries (that call into C).
When doing bioinformatics we had someone update/rewrite a tool in Java and it was so much faster. It went from a couple of days to something like 4 hours of runtime.
Python certainly can be used in production (my experience maintaining some web applications in Java would make me reach for python/php/ruby to create a web backend, speed be damned). Python has some great libraries.
I even changed to JS as my fave for backends. Still using Py for other stuff ofc, but I'm constantly missing some of the JS niceties.
At least Python doesn't have an extremist "100% Pure" ideology like Java, and instead (like TCL and Lua) it's been designed from the ground up for easily integrating with other languages and libraries, embedding, and extending, instead of Java's intolerantly weaponized purity and linguistic supremacy.
Reasons why Sun and Java failed:
Strategy over product. McNealy cast Java as a weapon of mass destruction to fight Microsoft, urging developers to "evangelize Java to fight Microsoft." That fight-first framing made anti-Microsoft positioning the goal line, not developer throughput.
Purity over pragmatism. Sun’s "100% Pure Java" program explicitly banned native methods and dependencies outside the core APIs. In practice, that discouraged bridges to real-world stacks and punished teams that needed COM/OS integration to ship. (Rule 1: "Use no native methods.")
"100% Pure Java" has got to be one of the worst marketing slogans in the history of programming languages, signaling absolutism, exclusion, and gatekeeping. And it was technically just as terrible and destructive an idea that held Java back from its potential as an inclusive integration, extension, and scripting language (especially in the web browser context, since it was so difficult to integrate, that JavaScript happened instead and in spite of Java).
Lua, Python, and even TCL were so much better and successful at embedding and extending applications than Java ever was (or still is), largely because they EMBRACED integration and REJECTED "purity".
Java's extremist ideological quest for 100% purity made it less inclusive and resilient than "mongrel" languages and frameworks like Lua, Python, TCL, SWIG, and Microsoft COM (which Mozilla even cloned as "XP/COM"), that all purposefully enabled easy miscegenation with existing platforms and libraries and APIs instead of insanely insisting everyone in the world rewrite all their code in "100% Pure Java".
That horrible historically troubling slogan was not just a terrible idea technically and pragmatically, but it also evoked U.S. nativist/KKK's "100% Americanism", the Nazis' "rassische Reinheit", "Reinhaltung des Blutes", and "Rassenhygiene", Fascist Italy's "La Difesa della Razza", and white supremacists' "white purity". It's no wonder Scott McNealy is such a huge Trump supporter!
While Microsoft courted integrators. Redmond pushed J/Direct / Java-COM paths, signaling "use Windows features from Java if that helps you deliver." That practicality siphoned off devs who valued getting stuff done over ideological portability.
Community as militia. The rhetoric ("fight," "evangelize") enlisted developers as a political army to defend portability, instead of equipping them with first-rate tooling and sanctioned interop. The result: cultural gatekeeping around "purity" rather than unblocking use cases.
Ecosystem costs. Tooling leadership slid to IBM’s aptly named Eclipse (a ~$40M code drop that became the default IDE), while Sun’s own tools never matched Eclipse’s pull: classic opportunity cost of campaigning instead of productizing.
IBM's Eclipse cast a dark shadow over Sun's "shining" IDE efforts, which could not even hold a candle to Microsoft's Visual Studio IDE that Sun reflexively criticized so much without actually bothering to use and understand the enemy.
At least Microsoft and IBM had the humility to use and learn from their competitor's tools, in the pursuit of improving their own. Sun just proudly banned them from the building, cock-sure there was nothing to learn from them. And now we are all using polyglot VSCode and Cursor, thanks to Microsoft, instead of anything "100% Pure" from Sun!
Litigation drain. Years of legal trench warfare (1997 suit and 2001 settlement; then the 2004 $1.6B peace deal) defended "100% Pure Java" but soaked time, money, and mindshare that could have gone to developer-facing capabilities.
Optics that aged poorly. The very language of "purity" in "100% Pure Java" read as ideological and exclusionary to many -- whatever Sun's presumed intent -- especially when it meant "rewrite in Java, don’t integrate." The cookbook literally codified "no native methods," "no external libraries," and even flagged Runtime.exec as generally impure.
McNealy’s self-aggrandizing war posture did promote Java’s cross-platform ideal, but it de-prioritized developer pragmatism -- stigmatizing interop, slow-rolling mixed-language workflows, and ceding tools leadership -- while burning years on lawsuits. If your priority was "ship value fast," Sun’s purity line often put you on the wrong side of the border wall.
And now finally, all of Java's remaining technical, ideological, and entrenched legacy enterprise advantages don't matter any more, alas, because they are all overshadowed by the unanthropomorphizable lawnmower that now owns it and drives it towards the singular goal of extracting as much profit from it as possible.
Very interesting post, thanks for putting it together.
Rust is indeed quite fast. I thought NodeJS would do much better, tbh, although it's not bad. I'd be interested to learn what's holding it back, because I've seen many implementations where V8 can get C++-like performance (I mean, it's C++ after all). Perhaps there's a lot of overhead in creating/destroying temporary objects.
> V8 can get C++-like performance (I mean it's C++ after all)
I don’t think that follows. Python is written in C, but that doesn’t mean it can get C-like performance. The sticking point is in how much work the runtime has to do for each chunk of code it has to execute.
(Edit: sorry, that’s in reply to another child comment. I’m on my phone in a commute and tapped the wrong reply button.)
One reason is that I did not spend much time optimizing the Node and Rust versions, I just translated the Python logic as directly and quickly as I could. At least I did not ask an LLM to do it for me, which I hope counts. ;-)
Edit: fixed a couple of typos.
V8 gets C++-like performance when it is getting code that JITs very well. This is typically highly-numeric code, even sometimes tuned specifically for the JIT. This tends to cause a great deal of confusion when people see the benchmarks for that highly numeric code and then don't understand why their more conventional code doesn't get those speeds. It's because those speeds only apply to code you're probably not writing.
If you are writing that sort of code, then it does apply; the speed for that code is real. It's just that the performance is much more specific than people think it is. In general V8 tends to come in around the 10x-slower-than-C for general code, which means that in general it's a very fast scripting language, but in the landscape of programming languages as a whole that's middling single-thread performance and a generally bad multiprocessing story.
For the bubble sort implementation, it's due to the use of the destructuring assignment in the benchmark code. When swapping to a regular swap using a temporary variable, the benchmark runs more than 4 times faster on my machine. Still not at Rust level of performance, but a bit closer to it.
I started using Python again recently after a 15-year break. The reason was I started working with LangChain, specifically LangGraph agents. The JavaScript/TypeScript versions are months behind. In the AI world, with the progress that's been made recently, months might as well be years.
'3.14159265359...' - there is a lot of room to grow - keep it at pi
I’m very glad python is getting faster. But the correct answer to “Is Python Really That Slow?” is unambiguously YES. Unless you’re using some ML library like torch or numpy which spends all its time in optimized C code, python is still EXTREMELY slow. We are going to need a lot of these 10% improvements for python to be comparable to Go, Java, or Node, each of which are about 30x faster on typical computer tasks.
For all those making πthon jokes: https://github.com/python/cpython/pull/125035
Aha, the perfect time for Python to adopt the TeX version numbering system.
Kinda curious. Have you figured out why the code runs faster on a Mac?
Very nice post - it's good to see benchmarks done for humans.
For fun, I tried this in Raku:
(0, 1, *+* ... *)[40] #0.10s user 0.03s system 63% cpu 0.214 total
lol
Seriously, Python is doing great stuff to squeeze out performance from a scripting language. Realistically, Raku has fewer native libraries (although there is Inline::Python) and the compiler still has a lot of work to get the same degree of optimisation (although one day it could compare).
EDIT: for those who have commented, yes you are correct … this is a “cheat” and does not seek to state that Raku is faster than Python - as I said Raku still has a lot of work to do to catch up.
I take it this is supposed to be the equivalent of fib(40), which ran on the author's system in Pyπ in 6.59 seconds and apparently on yours, with Raku, in 0.21?
Do you have the same hardware as the author or should one of you run the other's variant to make this directly comparable?
No, this is very much not the same. The Raku version is like writing this in Python:
    def fibonacci():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b
And taking the 40th element. It's not comparable at all to the benchmark, which deliberately uses an extremely slow method of calculating Fibonacci numbers. For this version, it's so fast that the time is dominated by the time needed to start up and tear down the interpreter.
Well, sure; you're using dynamic programming, while the stress-test Python Fibonacci code is deliberately using recursion without memoization; it makes function calls proportionate to the number computed. Most of the time you're seeing in the Raku code is the interpreter startup. Python doesn't have syntax strongly oriented towards that sort of trick (it's not as strong a second-best APL as it is a second-best Lisp or Haskell), but:
$ python -m timeit "x = (1, 0); [x[0] for _ in range(40) if (x := (x[0] + x[1], x[0]))][-1]"
50000 loops, best of 5: 4 usec per loop
(Or a "lazy iterator" approach:)
$ python -m timeit --setup 'from itertools import islice, count' 'x = (1, 0); next(islice((x[0] for _ in count() if (x := (x[0] + x[1], x[0]))), 40, None))'
50000 loops, best of 5: 5.26 usec per loop
I hope they speedrun to Python 6.28 because tau > pi
(mini unrelated rant. I think pi should equal 6.28 and tau should equal 3.14, because pi looks like two taus)
> I think pi should equal 6.28 and tau should equal 3.14, because pi looks like two taus
Ha. Undeniable proof that we had them backwards all along!
Does 3.14.0 count as one of those 16? I’m more interested in the 3.14.15 than the correctly rounded up 3.14.16.
> And this is a bit disappointing. At least for this test, the JIT interpreter did not produce any significant performance gains, so much that I had to double and triple check that I used a correctly built interpreter with this feature enabled. I do not know much about the internals of the new JIT compiler, but I'm wondering if it cannot deal with this heavily recursive function.
FWIW one thing that is worth calling out here is that the initial goal for JIT right now in Python is getting it relatively stable, functional, and more or less getting the initial implementation out there. It's not surprising at all that it's not faster.
I say this because I think the teams working on free-threaded and JIT python maybe could have done a better job publicly setting expectations.
I mean, Guido had a 2021 Faster CPython presentation where they claimed "5x in 4 years (1.5x per year)"[0]. Developers have significantly walked back those expectations since then.
[0] Github slide deck https://github.com/faster-cpython/ideas/blob/main/FasterCPyt...
One important caveat to remember is that this is before a lot of the work on free-threaded python started in full force. A lot of cutting edge work had to be done to support this in the GC but this came with performance penalties. As a result, the trajectory of the Faster CPython effort changed quite a bit.
Didn't help that Microsoft axed several folks on that team, too...
Sure, reality is a harsh mistress, but those were really optimistic targets which were used to frame a lot of the development efforts.
Only tested against NodeJS and Rust
What about Lua and LuaJIT?
I did some recent testing that showed both Lua and LuaJIT-joff (its interpreter-only mode) to be about 2x faster than Python. Both PyPy and full-on LuaJIT were about 10x faster.
Years ago, I even found Ruby to be faster than Python. This was back in the Ruby 2.0 / Python 3.5 days - I'd be interested to know if it's still the case.
If faster language interpreters were included in the tests, the title could be "Python 3.14 is here. How slow is it?"
It would be interesting to test interpreter startup time across various interpreters, including Python.
In my experience LuaJIT is extremely fast in comparison to Python. Perl is faster, too.
Yeah honestly I don't really care about these benchmarks. Python isn't built for raw performance and that's totally fine! It's the number one choice for prototyping and can do so much, that's what actually matters. I think it's cool they're working on speed improvements though, means my prototype-to-production cycle gets a bit smoother lol.
honestly if the performance of the python interpreter has a big impact on your application's performance and that's something you care about - you're already doing things very wrong
tl;dr: Two orders of magnitude slower than Rust, so 2-3 orders slower than native. Python on a 2 GHz processor runs as fast as C on a 2-20 MHz processor.
True, Python could be better or worse than two orders of magnitude slower for your particular use case, but it's 70x slower for recursion and addition that it clearly hasn't special-cased. That's good to know.
Well, if that's good to know --
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Maybe when you are reinventing the wheel instead of using e.g. numpy, Jax, PyTorch. Python is an ecosystem some of which is tooling built in C/C++. There’s no reason to ignore those libraries just because C devs like to roll their own everything.
quick n dirty Python code will run faster than quick n dirty C++ code
Get back to writing in other languages when speed matters.
I would tell you a joke about python but it would take you a long time to get it.
At least you eventually get it. I regularly don't get UDP jokes.
Do any of these tests measure the new experimental tail call interpreter (https://docs.python.org/3.14/using/configure.html#cmdoption-...)?
I couldn't find any note of it, so I would assume not.
It would be interesting to see how the tail call interpreter compares to the other variants.