GraalPy – A high-performance embeddable Python 3 runtime for Java
(graalvm.org)
322 points by fniephaus 2 days ago
Your mileage may very much vary; much like PyPy, this is very inconsistent and highly dependent on your workload (as well as your dependencies).
My limited experience was that on an `re`-heavy workload PyPy was several times slower than CPython (~3x compared to 3.10) and Graal was even worse (~6x compared to 3.11).
Which version was that with? GraalVM can JIT compile regular expressions these days, with the same compiler as everything else. They implemented TRegex on top of Truffle so regex can be inlined and optimized like regular code.
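One way to sanity-check regex claims like this on your own workload is a small timing harness. A rough sketch (the pattern and corpus here are made up; swap in something representative of your code, and note that JIT-based runtimes like PyPy or GraalPy need warmup iterations before they reach peak speed):

```python
import re
import time

# Hypothetical regex-heavy micro-workload; replace with your own
# pattern and data before drawing any conclusions.
pattern = re.compile(r"(\w+)@(\w+)\.(\w{2,4})")
corpus = ["contact alice@example.com or bob@test.org today"] * 10_000

def run():
    return sum(len(pattern.findall(line)) for line in corpus)

# Warm up so a JIT can compile the hot path before we measure.
for _ in range(5):
    run()

start = time.perf_counter()
hits = run()
elapsed = time.perf_counter() - start
print(f"{hits} matches in {elapsed:.3f}s")
```

Running the same script under CPython, PyPy, and GraalPy gives at least an apples-to-apples comparison for your specific patterns, which single-number claims in either direction cannot.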
Performance does indeed depend on workload. There's a page that compares GraalPy vs CPython and Jython on the Python Performance Suite which aims to be "real world":
https://www.graalvm.org/latest/reference-manual/python/Perfo...
There the speedup is smaller, but this is partly because a lot of real world Python workloads these days spend all their time inside C or the GPU. Having a better implementation is still a good idea though, because it means more stuff can be done by researchers who don't know C++ well or at all. The point at which you're forced to get dedicated hackers involved to optimize gets pushed backwards if you can rely on a good JIT.
That is why we should always use a standardized, controlled benchmark suite with well-defined rules to ensure fair cross-language comparisons on a representative, well-balanced workload. By focusing on a core set of language features and abstractions, Are-we-fast-yet allows for a more controlled comparison of language implementation performance, isolating the effects of compiler and runtime optimizations.
This is especially important for scripting languages like Python, where a large part of the features are implemented in C or other native languages and called via FFI. That's why, for example, the benchmark implements its own collections, because we want to know how fast the interpreter is. Otherwise, as you have noticed, the result is randomly influenced by how much compute a particular application can delegate to the FFI.
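To make the "implements its own collections" point concrete, here is a minimal sketch in the spirit of such a benchmark's core library (this is not the actual Are-we-fast-yet code, and the backing store here is still a Python list; the real suite goes further): every operation runs through interpreted or JIT-compiled bytecode instead of CPython's C-implemented `list` machinery, so the measurement reflects the language implementation rather than the FFI.

```python
class PureVector:
    """A growable array written entirely in Python, so appends and
    lookups exercise the interpreter/JIT rather than native code."""

    def __init__(self, capacity=8):
        self._storage = [None] * capacity
        self._size = 0

    def append(self, element):
        if self._size == len(self._storage):
            # Manual doubling keeps the resize logic in Python too.
            self._storage = self._storage + [None] * len(self._storage)
        self._storage[self._size] = element
        self._size += 1

    def at(self, index):
        return self._storage[index]

    def size(self):
        return self._size

v = PureVector()
for i in range(100):
    v.append(i * i)
print(v.size(), v.at(10))  # 100 100
```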
> That's why, for example, the benchmark implements its own collections, because we want to know how fast the interpreter is. Otherwise, as you have noticed, the result is randomly influenced by how much compute a particular application can delegate to the FFI.
That sounds like the exact opposite of what I would want as a user of the language: the benchmark completely abstracts the actual behaviour of the runtime, claiming purported gains which don’t come anywhere near manifesting when trying to run actual software.
I’m not implementing my own collections when `dict` suffices, and I don’t really care that a pure python version of `re` runs faster in graal than in cpython, because I’m not using that.
So what happens is I see claims that graalpython runs 17 times faster than cpython, I try it out, it runs 6 times slower instead, and I can only conclude that graal is a worthless pile of lies and I should stop caring.
If you don't know exactly what you are measuring, the measurement is worthless. We must therefore isolate the measurement subject for the measurement, and avoid uncontrollable influences as far as possible. This is how engineering works, and every engineer should also be aware of measurement errors. In addition, repeatability and falsifiability of the experiment and conclusions are required for scientific claims. The mere statement "too slow to be acceptable" or "worthless pile of lies" is not enough for this.
A measurement method does not have to represent every practical application of the measured subject. In the present case, the measurement allows a statement to be made about the performance of the interpreter (CPython) in relation to the JIT compiler (GraalPy). Whether the technology is right for your specific application or not is another question.
Tried to use graalvm (interpreter) to run a fairly large project at my $dayjob$ and ran into a few issues right away.
- Maturin doesn't support the Graal interpreter, so no PyO3 packages
- uv doesn't seem to run, as `fork` and `execve` are missing from the os package?
- Graal seems to carry a huge number of patches to popular libraries so that they'll run; most seem to patch C files to add additional ifdefs
I don't think Graal is going to be a viable target for large projects with a huge set of dependencies unfortunately, as the risk of not being able to upgrade to different versions or add newer dependencies is going to be too high. It's impressive what it does seem to support though, and probably worth looking at if you have a smaller scale project.
The number of patches is going down with time and many are trivial one-liners, e.g. uvloop:
https://github.com/oracle/graalpython/blob/b907353de1b72a14e...
- self.cython_always = False
+ self.cython_always = True
That's the entire patch. Others are working around bugs in the C extensions themselves that a different implementation happens to expose, and can be upstreamed: https://github.com/oracle/graalpython/blob/b907353de1b72a14e...
Still others exist for old module versions, but are now obsolete:
https://github.com/oracle/graalpython/blob/b907353de1b72a14e...
# None of the patches are needed since 43.0, the pyo3 patches have been upstreamed
And finally, some are just general portability improvements. Fork doesn't exist on Windows. Often it can be replaced with just starting a sub-process. So the patching situation has been getting much better over time, partly due to the GraalPy team actively getting involved with and improving the Python ecosystem as a whole.
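As a sketch of that fork-to-subprocess portability fix: code that relies on POSIX primitives like `os.fork` and `os.execve` (the calls the `uv` comment above notes are missing) can often be rewritten with the stdlib `subprocess` module, which works across platforms and implementations:

```python
import subprocess
import sys

# Instead of os.fork() + os.execve(), which assume POSIX process
# primitives, spawn a child interpreter portably:
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # hello from child
```

This pattern is not a drop-in replacement everywhere (fork shares memory with the parent, a subprocess does not), but for "run a child program" use cases it is usually enough.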
There is basic GraalPy support in Maturin[0] and PyO3[1], the problem is often that packages require older Maturin/PyO3 versions and/or they use CPython-isms, semi-public APIs, etc., but it is getting better, for example [2].
It is fair to say that large projects with a huge set of dependencies will likely face some compatibility issues, but we're working on ironing this out. There is GraalPy support in setup-python GitHub action. GraalPy is supported in the manylinux image [3]. Hopefully soon also in cibuildwheel [4].
[0] https://github.com/PyO3/maturin/pull/1645 (merged)
[1] https://github.com/PyO3/pyo3/pull/3247 (merged)
[2] https://github.com/pydantic/jiter/pull/135 (merged)
[3] https://github.com/pypa/manylinux/pull/1520 (merged)
To be fair, this also happened when Graal was released for Java. Give it another go in 3-6 months, the Graal team will have improved interoperability massively.
It is a chicken (interpreter) and egg (dependencies) problem. You cannot fix the dependency problems without the interpreter. Neither can you release an interpreter with full dependency support.
For projects using GraalPy, I'd wager that most would vendor all their dependencies at the start of the project and upgrade along the way. I have shipped a couple products with Jython, and very little 3rd party code was used and almost none of the standard library, it was all driving Java from the same project.
So it does have to do with scale but in the opposite direction. Big long projects will want to adopt something like GraalPy because of how long the project will take.
What I was hoping to be able to do was run our existing cpython project on graal to try and benefit from whatever speedups the jvm (or, if possible, compiling to a native module) would provide, rather than build with the jvm specifically in mind from the get go.
I guess what makes Python interesting right now is the integration with ML toolchains, CUDA, Metal/MLX, PyTorch, TensorFlow, LLM encoders/decoders, etc. more than Python the language. But can GraalVM run that code meaningfully when Python is merely used as glue, with the important bits implemented in native code?
Yes, apparently it can
https://www.graalvm.org/dev/reference-manual/python/Native-E...
> CPython provides a native extensions API for writing Python extensions in C/C++. GraalPy provides experimental support for this API, which allows many packages like NumPy and PyTorch to work well for many use cases. The support extends only to the API, not the binary interface (ABI), so extensions built for CPython are not binary compatible with GraalPy. Packages that use the native API must be built and installed with GraalPy, and the prebuilt wheels for CPython from pypi.org cannot be used. For best results, it is crucial that you only use the pip command that comes preinstalled in GraalPy virtualenvs to install packages. The version of pip shipped with GraalPy applies additional patches to packages upon installation to fix known compatibility issues and it is preconfigured to use an additional repository from graalvm.org where we publish a selection of prebuilt wheels for GraalPy. Please do not update pip or use alternative tools such as uv.
For anyone interested, here's the PyPI repository with additional binary wheels for GraalPy: https://www.graalvm.org/python/wheels/
We also want to make it easy for Python package maintainers to test and build wheels for GraalPy. It's already available via setup-python, and we are adding GraalPy support to cibuildwheel. If you need any help, please reach out to us!
While hpy is great and I'm excited about it, I would rather bet on the limited C API[0] (which is basically what hpy tries to be if I understand correctly).
0: https://devguide.python.org/developer-workflow/c-api/#limite...
Limited C API is not as abstract as HPy. Most notably Limited C API still exposes reference counting as memory management mechanism, HPy abstracts that. However, ecosystem wide adoption of limited C API and stable ABI would already improve things significantly.
I am willing to live with Python as the Lisp we deserve to have, on this AI wave, when it finally gets a proper JIT story we can rely on, regardless of the workload.
Currently it is a mix and match of a herculean engineering effort mostly ignored by the community (PyPy), DSLs for GPGPUs, a bunch of C and C++ libraries that people keep referring to as "Python" when any language can have similar bindings, Jython, IronPython, GraalPy,...
So it isn't for lack of trying, at least we finally have CPython folks more welcoming to performance improvements, and JITs.
> I am willing to live with Python as the Lisp we deserve to have
You can have your cake and eat it too https://github.com/hylang/hy
The reasons for all this stuff having been developed in Python also make Python interesting right now, all by themselves. It did not happen by accident; this stuff was developed fairly recently and there was no shortage of mature languages to choose from.
The people disliking the language are very vocal about it, but there is a huge number of silent people who love it, and an even bigger number who just like it as much as the alternatives. It's mainstream now, not trending like 10 years ago, so there is no hype about it anymore. We just use it to do stuff.
Add to that the existing excellent ecosystem, the strong culture of scientific stacks, and a very good story for providing C extensions (actually the best among all scripting languages, thanks to things like cibuildwheel).
It's only in small tech bubbles like HN that devs find it surprising.
Python has many issues that become quite clear when you operate at some kind of scale and need proper multiprocessing/multithreading support. And it's not just the GIL; you get very unexpected behaviors when dealing with exit handlers and signal handlers in edge cases. Having seen what other languages look like, it just doesn't feel like a language that was designed for running at scale.
The tooling has markedly improved though. Things like typing and compile-time checks: great. But it's also funny to me that some of the fastest tools for Python are being built in Rust (e.g. uv).
I’ve always found Python to be sort of loved on HN. Not by everyone of course, but I guess it depends on each of our experiences on here. I’m usually rather surprised when I meet people who genuinely dislike Python, because that seems like such an odd occurrence. Even if people don’t “love” the language, most seem to have rather fond experiences or memories of it. Usually criticism comes down to its inefficiencies, but those aren’t exactly unreasonable critiques.
As I said it’s anecdotal, but in my experience Python gets a lot of love compared to something like Java or C#. Both of which are often met with real harshness. Hell I’ve ranted unseemly about C# myself.
My only big criticism is the CPython folks' resistance to any kind of performance improvements, and the way PyPy's efforts have been largely ignored, making Python the last major dynamic language to finally start caring about performance and having a JIT in the box.
Finally, thanks to data science, and people getting fed up with always writing bindings, this is changing, and Python can join the Common Lisp, Scheme, Smalltalk, SELF, JavaScript, Ruby, Lua, Dylan, Julia, and BASIC club.
I mean, even on HN, I'd say if there's derision, it's mostly uttered with a yawn rather than genuine hate. And that's almost justified; while I spend a lot of my time with lots of different languages (I can't think of a single one I outright hate btw), Python is the one that pays for my things and... Well, there's not much drama there, is there (now that 2->3 is behind us anyway)? It's a glue language that's easy to learn, but offers tons of depth should you want it. My primary annoyance with Python used to be the typing, but type annotations have made this less of an issue. It's a nice language and you can do almost everything with it. It's a bit boring, but I guess that's a good thing.
As a former Perl hacker who started using Python in 2005, I saw Python ride several waves. (Numerical computation, data science, deep learning)
Perl was the leading tool for scripting and text parsing. Python didn’t really supplant it for a long time — until people started writing more complicated scripts that had to be maintained. Perl reads like line noise after 6 months whereas I can look at Python code from 20 years ago, prettify it with black, and understand it.
Python got picked up by the scientific computing community, which gave it some of its earliest libraries like numpy, f2py, scipy. Some of us who were on MATLAB moved over.
Then data science happened. Pandas built off the scientific computation foundations and eventually libraries like scikit and matplotlib (mimicking matlab’s plotting) came along.
Then tensorflow came along and built on the foundation of numerical libraries. PyTorch followed.
Other systems like Django came along and made Python popular for building database-backed websites.
Suddenly there was momentum and today almost all numerical software have a python API — this includes proprietary stuff like CPLEX and what have you.
Python was the glue language that had the lowest barrier of entry. For instance, Spark was written in Scala and has a performant Scala API but everyone uses PySpark because it’s much more accessible, despite the interop cost.
The counterfactual to all this was Ruby. It had much nicer syntax than Python but when I tried to use it in grad school I was quickly stymied by the lack of numerical libraries. Ruby never found a niche outside of Rails and config management.
Essentially Python — like Nvidia today — bet on linear algebra (and more broadly on data processing) and won.
I get why there’s hate for Python — it’s not a perfect language. Yet those of us pragmatists who use it understand the trade offs. You trade off on the metal performance for programmer performance. You trade off packaging difficulties for something that works. You trade off an imperfect syntax for getting things done.
I could have used Ruby — a much more beautiful language — in grad school and worked around its lacks, but I would not have graduated on time. Python was a pragmatic choice for me and continues to be one today (outside of situations requiring raw performance).
I agree with you, and I'll put it slightly stronger. Ruby is a better language than Python in every way except the very most important two:
- Imports in Ruby seriously suck compared to Python. Everything is `require`d into a global scope, and the ecosystem (Bundler) encourages centralizing all imports for your entire codebase into one file.
- Python has docstrings encouraging in code documentation.
Add common ecosystem things like the Ruby community encouraging generated methods, magical "do what I mean" parameters, and REPL poke-driven development, and this leads to the effect that Python codebases are almost always well documented and easy to understand. You can tell where every symbol comes from, and you can usually find a documentation entry for every single method. It's not uncommon for a Ruby library, even a popular one, to be documented solely through a scattering of sparsely-explained examples with literally no real API documentation. Inheriting a long-lived Ruby project can be a serious ordeal just to discover where all the code that's running is running, why it's running, where things are preloaded into a builtin class, and with Rails and Railties, a Gem can auto insert behavior and Middleware just by existing, without ever being explicitly mentioned in any code or configs other than the Gemfile. It's an absolute headache.
My dream language would be Ruby with Python-style imports and docstrings.
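The docstring point is concrete: because the documentation lives on the object itself, `help()`, IDEs, and doc generators can all surface it with no convention beyond writing the string. A small illustration (`parse_duration` is a made-up helper):

```python
import inspect

def parse_duration(text):
    """Convert a string like '5m' or '2h' into seconds.

    Raises ValueError for unrecognized units.
    """
    units = {"s": 1, "m": 60, "h": 3600}
    value, unit = int(text[:-1]), text[-1]
    if unit not in units:
        raise ValueError(f"unknown unit: {unit!r}")
    return value * units[unit]

# The docstring is attached to the function object and discoverable:
print(inspect.getdoc(parse_duration).splitlines()[0])
print(parse_duration("5m"))  # 300
```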
Pragmatic use of $LANGUAGE is a telltale sign of the seasoned programmer: one who understands the use case and solution set well enough to know when the tool fits.
I wrote Ruby when I got started because it was the most accessible and the Rails learning content was top notch. Now I use Python when I need more than a few `bash` pipes to accomplish anything, but if I were to solve a capital-P Problem, of course the project's constraints often choose the tool.
As someone that did the Perl to Python transition back in 2003, for UNIX scripting tasks, the way to do OOP with packages and blessed references was clunky, and having to always go back to the manuals for some clever programming tricks from team mates was tiresome, while Python provided something nicer, and I wasn't really into the sed/awk like features in Perl anyway.
However, due to it being an interpreted scripting language, I never bothered to use Python for anything beyond OS scripting.
Using Python as a C and C++ REPL of sorts has been common in academia since it took the scripting crown away from Perl and Tcl, which were used during the late 90's.
For example, see the bioinformatics papers from that period, and the Perl tooling used alongside the research.
Already in 2003 CERN was using Python on some of their build infrastructure (see CMT), Grid Computing scripting efforts, and we had Python trainings available to us.
Now there is a difference between a REPL of sorts, scripting OS tasks, and going full blown applications with a pure interpreter.
Looks like all of that would run in a native sandbox environment which in turn is called from the Python running on the JVM. So, maybe it simplifies interop, but whether it's straightforward to get full performance from the native layer (especially GPU/multicore) is an open question.
OP here.
More details about this particular release are in the blog post at https://medium.com/graalvm/whats-new-in-graal-languages-24-1...
Happy to answer any additional questions!
Hi, what's the deployment process like? Is there a program similar to warbler (for jruby) that builds a jar for a python program?
EDIT: I tried the native binary command here on a simple hello world script.
It downloaded some stuff in the background, built the entire python and java and embedded it into a 350 MB ELF binary on linux after 15 minutes of using 24 GB RAM and 100% CPU.
But I'd much prefer a smaller jar file which I can distribute cross-platform.
https://www.graalvm.org/uploads/quick-references/GraalPy_v1/...
Thanks for the question, nurettin.
Although GraalPy can create standalone applications [1], you don't have to turn your hello world script into a self-contained binary. You can, of course, create a JAR that depends on GraalPy, or a fat JAR that contains it, and deploy it just like any other Java application.
We are still updating our docs to mention more details on this and publish some guides, apologies for the delay.
[1] https://www.graalvm.org/latest/reference-manual/python/stand...
FWIW we've had full Java/Python integration in Clojure for a while now, courtesy of Chris Nuernberger and libpython-clj: https://github.com/clj-python/libpython-clj
If you're into that sort of thing.
Self-interest disclosure: I'm a major contributor and heavy user.
I'm assuming you mean "how well does JVM concurrency play with Python concurrency"? Python concurrency works perfectly well on its own, Java/Clojure concurrency works very well on its own, trying to pass multithreaded information across the JVM boundary to Python while bypassing the GIL will result in a segfault (Edit: but there are "with-gil" wrappers you can use to prevent that, at a slight performance hit). In practice this tends not to be much of a problem as you setup a parallel workload on one side of the boundary or the other and pass information with a threadsafe queue. We do plenty of heavy parallel computations, data science, AI, fintech, etc.
There are certainly some leaky abstractions and there is a general expectation that you understand the quirks of Python and Clojure pretty well, so it's not for everyone. Knowing something about Java would probably help too, but I've been using libpython-clj in production since 2017 and I barely know anything about Java (compared to Python/Clojure).
This is pretty interesting. What's the benefit of using Python this directly with Java? I mean, is the overhead of having these as separate services/processes too much? I'm not trying to provoke; I'm genuinely curious about the use case.
Also, what's the dev workflow like? When I'm coding Python I basically live inside the debugger (a.k.a. the Carmack method). Do you use an IDE that understands both Java and Python? What's the debugging experience like? Can you set a breakpoint and then evaluate Python code and expressions inside the debugger, like you can if it were solely a Python project using VS Code and the Python debugger?
Why would they benefit? When DuckDB/Polars are being used correctly, all the work is happening in the native stack. It should already be very fast compared to the Python runtime.
I recently moved a large ETL process that was mostly Python-runtime processing to pyarrow/Polars and wrote all the ETL logic in SQL. I've seen processes that used to take a week to run drop to about an hour (no exaggeration).
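The pattern behind that kind of speedup is pushing row-at-a-time Python loops down into a vectorized or native engine. A minimal sketch of the shape of such a rewrite, using the stdlib `sqlite3` module purely as a stand-in for DuckDB/Polars (the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)],
)

# Before: a Python for-loop accumulating totals dict-by-dict, row by row.
# After: the aggregation runs entirely inside the engine's native code.
rows = con.execute(
    "SELECT customer, SUM(amount) FROM orders"
    " GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 17.5), ('bob', 5.0)]
```

With DuckDB or Polars the win is much larger than with SQLite because the engines are columnar and parallel, but the structural change to the code is the same: logic moves from the interpreter into SQL/expressions.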
They wouldn’t benefit from performance because as you say they are already blazing fast as is. And I know what you mean — I rewrote a pure (granted old pre-2.0) pandas transformation into duckdb and compute time dropped from nearly an hour to single digit minutes.
But having these in Graal would allow more types of applications to be deployed in JVM stacks. As sibling comments note, many data science models are in python but production stacks are in Java.
Took a little digging to find that it targets 3.11. Didn’t see anything about a GIL. If you’re a Python person, don’t click the quick start link unless you want to look at some xml.
Python implementations on the JVM or CLR naturally don't have any GIL; there is no such thing on those platforms.
YAML and JSON have both tried to replicate the XML tooling experience, only worse.
Schemas, comments, parsing and schema conversions tools.
I think GraalPython does have a GIL, see https://github.com/oracle/graalpython/blob/master/docs/contr... - and if by "there is no such thing on those platforms" you mean JVM/CLR not having a GIL, C also does not have a GIL but CPython does.
My mistake, as I assumed they took the same decision as jython and IronPython.
https://jython.readthedocs.io/en/latest/Concurrency/#no-glob...
https://wiki.python.org/moin/IronPython
The difference between the JVM, CLR, and C in regards to parallel and concurrent code is that they are built for those kinds of workloads and have a proper memory model, hence no need for a GIL.
I think they would have to here, to support native modules. Jython (and I believe IronPython, but don't quote me) does not support native CPython modules. CPython modules explicitly control the GIL, so if they are supported (as they are here), you can't really leave the GIL out without exposing potential thread safety issues.
"PEP 703 – Making the Global Interpreter Lock Optional in CPython" (2023) https://peps.python.org/pep-0703/
CPython built with --disable-gil does not have a GIL (as long as PYTHONGIL=0 and all loaded C extensions are built for --disable-gil mode) https://peps.python.org/pep-0703/#py-mod-gil-slot
"Intent to approve PEP 703: making the GIL optional" (2023) https://news.ycombinator.com/item?id=36913328#36917709 https://news.ycombinator.com/item?id=36913328#36921625
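As a practical footnote to the PEP 703 links: on CPython 3.13+ free-threaded builds you can check at runtime whether the GIL is actually active via `sys._is_gil_enabled()` (the function does not exist on older versions, which always have the GIL, so the sketch below falls back to `True`):

```python
import sys

# sys._is_gil_enabled() is available on CPython 3.13+; on older
# interpreters we assume the GIL is present.
check = getattr(sys, "_is_gil_enabled", None)
gil_enabled = check() if check is not None else True
print("GIL enabled:", gil_enabled)
```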
Gradle files are less verbose than the equivalent Maven pom.xml but Gradle tends to have other issues like: complex builds that are hard to maintain, not running on the latest JVM version without some wait time, and constantly breaking because Gradle makes breaking changes every release. I'm hoping the declarative Gradle experiment [0] helps with this.
Additionally if XML isn't your thing Maven is making a push for other formats in Maven 4 like HOCON [1].
[0] https://blog.gradle.org/declarative-gradle-first-eap [1] https://github.com/apache/maven-hocon-extension
What is the use-case for GraalPy? To be honest I don't understand why would anyone want to use it.
I worked at a company where data scientists wrote Python code using pandas, and we had to port it to Java and a library called Keanu, which was very useful but soon became unmaintained.
Of course this was very time consuming and unrewarding, all because only java applications could be deployed to production due to a stupid top-down decision.
This GraalPy sounds like something I wish existed back then.
jep[0] has existed for a while now, and does what GraalPy is doing quite well.
I'm using it for similar purposes as you stated and for that it works quite well. A research group I am collaborating with does a lot of their work in one Java application (ImageJ for microscopy), so by integrating my Python processing code into that application, it finds its way a lot quicker into the daily workflows of everyone in that group.
Most recently I've also extended the jep setup to include optional Python version bootstrapping via uv[1], so that I can be sure that the plugins I'm writing have the correct Python version available, without people having to install that manually on the machine.
Jython has historically lagged hard, often falling behind for very extended periods. For a time their releases basically just stopped, which led to them missing support for pretty much anything between 2.7 and 3.6 (iirc). I know the project basically rebooted at some point, but I've since lost interest.
Besides all the nice answers given by others, a big one was not mentioned: performance!
Graal can do pretty advanced JIT compilation for any Graal language, plus you can mix and match languages (with a big chunk of their ecosystems) and it will actually compile across language boundaries. And we haven't even mentioned Java's state-of-the-art GCs, which run circles around most other tracing GCs, let alone CPython's very low-throughput reference counting.
Picture working for a big, non-tech corporation. Your BU only does Java because it has always been thus and Jeff the SVP is a law grad and doesn't want anything to change because of perceived risk. GraalVM allows smart people who have to work within such limitations to still write (mostly) the software they want while still vaguely relating it to Java for decision makers.
Those "smart people" write blackboxes in esoteric languages that only the same person maintains.
Everyone else has to write wrappers to interact with that blackbox. God forbid someone daring to even change the code, because it basically doesn't even need/use junit tests. Eventually the smart person gets bored and moves to something else, that tool then gets rewritten to Java in two days by someone else.
End of story.
Not so vaguely, either. The dev story is not Java but the deploy story is.
When I was learning programming, my coding class used a Bukkit plugin that connected to Python. I can't remember what it was called, but that was for Minecraft 1.7.10.
Not sure if you were wanting Python specifically, but KubeJS lets you use JavaScript for mods. I think there's also a clojure integration.
Does it have to be run in a GraalVM, or any JVM implementation is fine?
> You can use GraalPy with GraalVM JDK, Oracle JDK, or OpenJDK
Thanks. I actually managed to run the quick example with Temurin Java 22. Maybe that is what they mean by "OpenJDK": java.vm.name=OpenJDK 64-Bit Server VM, java.vendor.version=Temurin-22.0.2+9
Have you turned on `-XX:+EnableJVMCI`?
https://www.graalvm.org/latest/reference-manual/embed-langua...
Yes, but I was under the impression that graal-level inter-op was limited to packages the graal toolchain could compile.
Thus, while Swift and Graal both depend on LLVM, they use different variants and there's no real way to make interop between Swift and Graal work (even using the LLVM bitcode which Graal is said to be able to consume).
e.g., I believe this announcement represents the work to compile a python (3.11) and some proof-of-concept python packages using graal toolchain, to spur other packages to support the same.
So I'd really love to be wrong, but I believe building under the graal llvm is the common factor.
I don’t really see how swift comes into the picture, besides SuLong being a thing (running LLVM bitcode). Native binary was meant as a compile target in the previous comment, I believe, not as an input. Graal can do both, but as a target it has no dependency on LLVM.
So yeah, graalvm should be able to produce a native binary for python code (though depending on the specifics it might actually be more like a native binary interpreter running python scripts, it can’t optimize in every circumstance but I’m hazy on the details).
I haven't seen embedding using graal/vm, or inter-op using the native JVM FFI.
There is (active, 2K stars) https://github.com/pvieito/PythonKit and I've heard of people being able to deploy apps with python on the app store. YMMV.
In case someone is interested, here are some benchmark results comparing GraalPy and others with JDK8 using the Are-we-fast-yet benchmark suite: https://stefan-marr.de/downloads/tmp/awfy-bun.html
And here is a table representation of all benchmarks and the geomean and median overall results: http://software.rochus-keller.ch/awfy-bun-summary.ods
The implementation of the same benchmark suite runs around a factor of 2.4 (geomean) faster on JDK8 than on GraalPython EE 22.3 HotSpot, or 41 times faster than CPython 3.11. GraalPython is thus about 17 times faster than CPython, and about two times faster than PyPy. The Graal Enterprise Edition (EE) seems to be a factor of 1.31 faster than the Community Edition (CE).
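The arithmetic behind those relative figures: speedup ratios compose by division, and a geometric mean is the standard way to average per-benchmark ratios. A quick check of the quoted numbers (the per-benchmark speedups in the last line are invented, just to show the helper):

```python
from math import prod

def geomean(values):
    """Geometric mean: the right average for speedup ratios,
    since the arithmetic mean over-weights outlier benchmarks."""
    return prod(values) ** (1.0 / len(values))

# If JDK8 is 2.4x faster than GraalPy and 41x faster than CPython,
# the GraalPy-over-CPython speedup is the quotient of the two.
jdk8_over_graalpy = 2.4
jdk8_over_cpython = 41.0
graalpy_over_cpython = jdk8_over_cpython / jdk8_over_graalpy
print(round(graalpy_over_cpython, 1))  # 17.1

# Averaging made-up per-benchmark speedups with the geometric mean:
print(geomean([2.0, 8.0]))  # 4.0
```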