Xfwl4 – The Roadmap for a Xfce Wayland Compositor
(alexxcons.github.io) | 366 points by pantalaimon 5 days ago
(xfwl4 author here.)
> I wonder how strictly they interpret behavior here given the architectural divergence?
It's right there in the rest of the sentence (that you didn't quote all of): "... or as much as possible considering the differences between X11 and Wayland."
I'll do my best. It won't be exactly the same, of course, but it will be as close as I can get it.
> As an example, focus-stealing prevention.
Focus stealing prevention is a place where I think xfwl4 could be at an advantage over xfwm4. Xfwm4 does a great job at focus-stealing prevention, but it has to work on a bunch of heuristics, and sometimes it just does the wrong thing, and there's not much we can do about it. Wayland's model plus xdg-activation should at least make the focus-or-don't-focus decision much more consistent.
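To illustrate the kind of compositor-side policy this enables, here's a minimal sketch (illustrative Rust; the types are hypothetical, not Smithay's actual xdg-activation API):

    // Hypothetical types: the shape of the decision, not a real API.
    struct ActivationToken {
        issued_while_requester_focused: bool, // compositor knows who it handed the token to
        timestamp_ms: u64,                    // when the token was issued
    }

    // The compositor is the sole arbiter: grant focus only for a fresh token
    // issued under circumstances the policy considers legitimate.
    fn should_grant_focus(token: Option<&ActivationToken>, now_ms: u64, max_age_ms: u64) -> bool {
        match token {
            None => false, // no token: mark "demands attention" instead of focusing
            Some(t) => {
                t.issued_while_requester_focused
                    && now_ms.saturating_sub(t.timestamp_ms) <= max_age_ms
            }
        }
    }

    fn main() {
        let fresh = ActivationToken { issued_while_requester_focused: true, timestamp_ms: 1_000 };
        assert!(should_grant_focus(Some(&fresh), 1_200, 500));
        assert!(!should_grant_focus(None, 1_200, 500)); // absent tokens never steal focus
    }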
> I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I'm not sure yet, but I suspect your fears are well-founded here. On modern (and even not-so-modern) hardware, even low-end GPUs should be fine with all this (on my four-year-old laptop with Intel graphics, I can't tell the difference performance-wise with xfwm4's compositor on or off). But I know people run Xfce/X11 on very-not-modern hardware, and those people may unfortunately be left behind. But we'll see.
At least they are honest about the reasons, not a wall of text to justify what boils down to "because I like it".
Naturally, these kinds of language islands create some friction regarding build tooling, integration with the existing ecosystem, and who is able to contribute to what.
So let's see how it evolves. Even with my C bashing, I was a much happier XFCE user than I was a GNOME user, with GJS all over the place.
You know that all the Wayland primitives, event handling and drawing in gnome-shell are handled in C/native code through Mutter, right? The JavaScript in gnome-shell is the cherry on top for scripting, similar to C#/Lua (or any GCed language) in game engines, elisp in Emacs, even JS in QtQuick/QML.
It is not the performance bottleneck people seem to believe.
It has been the case that stalls in the GJS land can stall the compositor though, especially if it's during a GC cycle.
> ...or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
> I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add one frame of latency, but since it enforces only whole frames be sent to the display instead of tearing, it can cause partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
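Rough numbers for that distinction, as my own illustrative arithmetic at 60 Hz:

    // Illustrative arithmetic only: latency bounds implied by each scheme.
    fn frame_ms(hz: f64) -> f64 {
        1000.0 / hz
    }

    fn main() {
        let f = frame_ms(60.0); // ~16.7 ms per refresh at 60 Hz

        // True triple buffering: the newest completed frame is latched at the
        // next vblank, so the added wait is bounded by one refresh interval.
        println!("true triple buffering, worst case: ~{:.1} ms", f);

        // A three-deep FIFO queue: a new frame can sit behind two older ones,
        // so it may be up to ~3 intervals old by the time it is scanned out.
        println!("3-frame queue, worst case:         ~{:.1} ms", 3.0 * f);
    }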
I don't know what "workarounds" you're talking about, or what unwanted behavior that I presume you're talking about. Would you be more specific?
I ask because just a few minutes ago, I ran VRRTest [0] on my dual-monitor machine and saw no screen tearing on either monitor. Because VRR is disabled in multi-monitor setups, I saw juddering on both monitors when I told VRRTest to render at rates that weren't a multiple of the monitor's refresh rate, but no tearing at all.
My setup:
* Both monitors hooked up via DisplayPort
* Radeon 9070 (non-XT)
* Gentoo Linux, running almost all ~amd64 packages.
* x11-base/xorg-server-21.1.20
* x11-drivers/xf86-video-amdgpu-25.0.0-r1
* x11-drivers/xf86-video-ati-22.0.0
* sys-kernel/gentoo-sources-6.18.5
* KDE and Plasma packages are either version 6.22.0 or 6.5.5. I CBA to get a complete list, as there are so many relevant packages.
(I'm posting in a reply in part because the edit window is long since past.)
Yeah. I'm actually quite interested in hearing what "workarounds" and/or misbehavior you're talking about. 'amdgpu(4)' says this about the TearFree property:
    Option "TearFree" "boolean"
        Set the default value of the per-output 'TearFree' property, which
        controls tearing prevention using the hardware page flipping
        mechanism. TearFree is on for any CRTC associated with one or more
        outputs with TearFree on. Two separate scanout buffers need to be
        allocated for each CRTC with TearFree on. If this option is set, the
        default value of the property is 'on' or 'off' accordingly. If this
        option isn't set, the default value of the property is auto, which
        means that TearFree is on for rotated outputs, outputs with RandR
        transforms applied, for RandR 1.4 secondary outputs, and if
        'VariableRefresh' is enabled, otherwise it's off.
The explicit mention that "auto" enables TearFree for secondary outputs and rotated and/or transformed outputs even if 'VariableRefresh' is disabled seems to directly contradict what I think you're saying. And if "auto" enables TearFree on secondary displays, my recommendation of "on" certainly also does. But, yeah. I await clarification.
One thing to keep in mind is that composition does not mean you have to do it with vsync; you can just refresh the screen the moment a client tells you the window has new contents.
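A sketch of that commit-driven repaint idea (illustrative Rust; the event type and repaint hook are made up):

    // Present as soon as a client commits, with no vsync wait. This trades
    // possible tearing for lower input-to-pixel latency.
    enum Event {
        ClientCommit, // a client attached a new buffer and committed
        Input,        // keyboard/pointer activity
    }

    fn handle(ev: Event, repaint: &mut impl FnMut()) {
        match ev {
            Event::ClientCommit => repaint(), // refresh immediately on new content
            Event::Input => { /* forward to the focused client */ }
        }
    }

    fn main() {
        let mut frames = 0u32;
        let mut repaint = || frames += 1;
        for ev in [Event::Input, Event::ClientCommit, Event::ClientCommit] {
            handle(ev, &mut repaint);
        }
        println!("repainted {frames} times"); // repainted 2 times
    }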
Compositor overhead even with cheapo Intel laptop graphics is basically a non-issue these days. The people still rocking their 20 year old thinkpads might want to choose something else, but besides that kind of user I don't think it's worth worrying too much about.
It isn't always pure overhead; there is also jitter, additional delays, and other issues caused by the indirection. Most systems have a way to mostly override the compositor for fullscreen windows and for games and other applications where visible jitter and delays are an issue, and you want that even on modern hardware.
> Most systems have a way to mostly override the compositor for fullscreen windows and for games
No, they don't. I don't think Wayland ever supported exclusive fullscreen, macOS doesn't, and Windows killed it a while back as well (in a Windows 10 update like 5-ish years ago?)
Jitter is a non-issue for things you want vsync'd (like every UI), and for games the modern solution is gsync/freesync which is significantly better than tearing.
That matches what I recall too, back when I ran a very cheap integrated intel (at least that's what I recall) card on my underpowered laptop. I posted a few days ago with screenshots of my 2009 setup with awesome+xcompmgr, and I remember it being very snappy (much more so than my tuned Windows XP install at the time). https://news.ycombinator.com/item?id=46717701
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to render it using the client buffers as they exist at that point, i.e. whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
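To picture where that composited latency comes from, here's a toy frame loop (all names hypothetical; a sketch of the idea, not any real compositor's code):

    // Buffers are latched at a fixed point in the frame; anything a client
    // commits just after that point waits for the next cycle.
    struct Surface {
        latest_buffer: u64, // id of the buffer the client last committed
    }

    // Latch whatever each client has committed as of "now" and compose that.
    fn compose(surfaces: &[Surface]) -> Vec<u64> {
        surfaces.iter().map(|s| s.latest_buffer).collect()
    }

    fn main() {
        let scene = vec![Surface { latest_buffer: 42 }];
        // wait_for_vblank();          // (hypothetical) previous scanout done
        let frame = compose(&scene);   // render using the latched buffers
        // queue_page_flip(&frame);    // (hypothetical) shown at the *next* vblank
        println!("composited {} surface(s)", frame.len());
    }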
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can, more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad at 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less, at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It is unideal, but what are you going to do? Compositors offer many advantages, it seems straightforward to design for a future where they are always on.
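For concreteness, the half-frame arithmetic from above, checked in a few lines (my own illustration):

    // Average added latency if presentation waits a uniformly random
    // fraction of a frame: half a refresh interval.
    fn added_latency_ms(refresh_hz: f64) -> f64 {
        0.5 * 1000.0 / refresh_hz
    }

    fn main() {
        for hz in [60.0, 144.0, 240.0] {
            println!("{hz:>5} Hz -> ~{:.1} ms average added latency", added_latency_ms(hz));
        }
        // 60 Hz -> ~8.3 ms, 144 Hz -> ~3.5 ms, 240 Hz -> ~2.1 ms
    }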
Love your post. So, don’t take this as disagreement.
I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?
I enjoy working on my computer more at 144Hz than 60Hz. Even on my phone, the switch from 60Hz to a higher frame rate is quite obvious. It makes the entire system feel more responsive and less glitchy. VRR also helps a lot in cases where the system is under load.
60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).
There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.
Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!
I never complained about 60, then I went to 144 and 60 feels painful now. The latency is noticeable in every interaction, not just gaming. It's immediately evident: the computer just feels more responsive, like you're in complete control.
Even phones have moved in this direction, and it's immediately noticeable when using one for the first time.
I'm now on 240Hz and the effect is very diminished, especially outside of gaming. But even then I notice it, although stepping down to 144 isn't the worst. 60, though, feels like ice on your teeth.
> how many people were complaining about that over the last decade?
Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ https://lwn.net/Articles/751763/ Perception varies from person to person, but going from my 144Hz monitor to my old 60Hz work laptop is so noticeable to me that I switched it from a composited Wayland DE to an X11 WM.
If our mouse cursors are going to have half a frame of latency, I guess we will need 60Hz or 120Hz desktops, or whatever.
I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.
Essentially, the only reason to go over 60 Hz for desktop is for a better "feel" and for lower latency. Compositing latency is mainly centered around frames, so the most obvious and simplest way to lower that latency is to shorten how long a frame is, hence higher frame rates.
However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.
I couldn't find ready stats on what percentage of displays are 60 Hz, but outside of gaming and high-end machines I suspect 60 Hz is still the majority of machines used by actual users, meaning we should evaluate the latency as it is observed by most users.
The point is that we can improve latency of even old machines by simply attaching a display output that supports a higher refresh rate, or perhaps even variable refresh rate. This can negate most of the unavoidable latency of a compositor, while other techniques can be used to avoid compositor latency in more specific scenarios and try to improve performance and frame pacing.
A new display is usually going to be cheaper than a new computer. Displays which can actually deliver 240 Hz refresh rates can be had for under $200 on the lower end, whereas you can find 180 Hz displays for under $100, brand new. It's cheap enough that I don't think it's even terribly common to buy/sell the lower end ones second-hand.
For laptops, well, there is no great solution there; older laptops with 60 Hz panels are stuck with worse latency when using a compositor.
> As an example, focus-stealing prevention. In xfwm4 (and x11 generally), this requires complex heuristics and timestamp checks because x11 clients are powerful and can aggressively grab focus. In wayland, the compositor is the sole arbiter of focus, hence clients can't steal it, they can only request it via xdg-activation. Porting the legacy x11 logic involves the challenge of actually designing a new policy that feels like the old heuristic but operates on wayland's strict authority model.
Not that that's necessarily the best way to do it but nothing stops xfwl4 from simply granting every focus request and then applying their existing heuristics on the result of that.
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
Well, the answer is just no. Wayland has been consistently slower than X11, and nothing running on top of it can really get around that.
Can you cite any sources for that claim? I found this blog post that says wayland is pretty much on par with X11 except for XWayland, which should be considered a band-aid only anyways: https://davidjusto.com/articles/m2p-latency/
Here's one article: https://mort.coffee/home/wayland-input-latency/
It's specifically about cursor lag, but I think that's because it's more difficult to experimentally measure app rendering latency.
> wayland has been consistently slower than X11
Wayland is a specification; a specification can't be "faster" than other options. That's like saying JSON is 5% slower than Word.
And as for the implementations being slower than X, that also doesn't reflect reality.
There is no Wayland to run on top of, as it's a standard to implement rather than a server to talk to.
Settings -> Window Manager Tweaks -> Focus -> Activate focus stealing prevention
https://gitlab.xfce.org/xfce/xfwm4/-/blob/master/settings-di...
I hope that XFCE remains a solid lightweight desktop option. I've become a huge fan of KDE over the past couple of years, but it certainly isn't what you would consider lightweight or minimal.
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how many long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue in Wayland (though a crash doesn't necessarily need to be fatal, FWIW.) Still, I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
> Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
I used XFCE for a long time and I very much agree. it just works, and is lightweight. I use KDE these days but XFCE would be my second choice.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV init diehards? As far as I can see, very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, primarily because it's a lot more than an init system.
In many ways the objections are opposite: people hate systemd for being more than init, and people hate Wayland for doing less than X.
Edit: corrected "Wayland" to "XFCE" in first sentence!
It is refreshing to see somebody else notice that the complaints about systemd and Wayland are philosophically incompatible.
Systemd is creating the same kind of monolith monoculture that Xorg represented. Wayland is far more modular.
Regardless of your engineering preferences, rejecting change is the main reason to object to both.
If Rust has one weakness right now, it's bindings to system and hardware libraries. There's a massive barrier in Rust communicating with the outside ecosystem that's written in C. The definitive choice to use Rust and an existing Wayland abstraction library narrows their options down to either creating bindings of their own, or using smithay, the brand new Rust/Wayland library written for the Cosmic desktop compositor. I won't go into details, but Cosmic is still very much in beta.
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex and how many edge-cases low-level graphics entails.
(xfwl4 author here.)
> using smithay, the brand new Rust/Wayland library
Fun fact: smithay is older than wlroots, if you go by commit history (January 2017 vs. April 2017).
> It would have been much easier and cost-effective to use wlroots
As a 25+ year C developer, and a ~7-year Rust developer, I am very confident that any boost I'd get from using wlroots over smithay would be more than negated by debugging memory management and ownership issues. And while wlroots is more batteries-included than smithay, already I'm finding that not to be much of a problem, given that I decided to base xfwl4 on smithay's example compositor, and not write one completely from scratch.
Thanks for the extra info. I'm glad it hasn't turned out to be much of an issue. I've looked at your repository and it seems to be off to a great start.
Personally, I'm anxious to do some bigger rust projects, but I'm usually put off by the lack of decent bindings in my particular target area. It's getting better, and I'm sure with some time the options will fill out more.
> The X11 diehards will go the way of the SysV-init diehards; some weird minority
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
> Literally no user cares what language a project is implemented in
This is only true most of the time - some languages have properties which "leak" to user.
Like if it's Java process, then sooner or later user will have to mess with launchers and -Xmx option.
Or if it's a process which has lots of code and must not crash, language matters. C or C++ will segfault at any sneeze; Python, Ruby, or even Java will stay alive (unless they run out of memory, or hang due to a logic bug).
> Literally no user cares what language a project is implemented in
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot less avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow/Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like say Python or C, I think you can still find that there are clusters of stereotypes of sorts that can predict program behavior/error handling/interfaces surprisingly well, and that likely line up with specific libraries/frameworks. It's totally possible to, for example, make a web page where there are zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" that you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all.)
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set a certain environment variable to get Rust to output full backtraces, for example. I don't think it is part of Rust itself, but the RUST_LOG environment variable is used by multiple libraries in the ecosystem.
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
I know, I know! Change is hard and scary. To be honest, while I am glad they're finally tackling this, I'm also expecting to have a pretty annoying couple of weeks whenever they ship this support and I finally decide to make the switch. There will be things to learn and new behaviors to understand and, yes, new bugs and annoyances to learn to work around. But I think if we both put our big boy pants on, keep a positive and friendly attitude, and help each other out, we can make it through these difficult times and come out the other side with proper high DPI support, sane multi-monitor handling and hot-swapping, and maybe even some wild new stuff like high color depth options.
Why does Wayland "feel like the future?" It feels like a regression to me and a lot of other people who have run into serious usability problems.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck code older than me" from supposed professionals employed by commercial Linux vendors to support the system, and it doesn't have Android-like separation — a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that necessitate workarounds, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X.400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Wayland supports HDR, it's very easy to configure VRR, and its fractional scaling (if implemented properly) is far superior to anything X11 can offer.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
Even if you dislike Wayland, forwards-going development is clearly centred around it.
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
It's mostly because nobody really wants to improve X11. I don't think there are many Wayland features that would be impossible to implement in X11; it's just that nobody wants to dig into the crusty codebase to do it.
And sadly, Wayland decided to just not learn any lessons from X11, and it shows.
What do you mean nobody wants to improve X11? There were developers with dozens of open merge requests with numerous improvements to X11 that were being actively ignored/held back by IBM/Red Hat because they wanted Wayland, their corporate project, to succeed instead.
Wayland was the first display system on Linux I've used that just worked perfectly right out of the box on a bog-standard Intel iGPU across several machines. I think that is a big draw for a lot of people like myself who just want to get things done. For me, X11 represents the past, through the experience I had when I had to tinker with the X11 config file to get basic stuff like video playback to work smoothly without tearing. My first Wayland install was literally a "wow, this is the future of Linux" moment for me, quite honestly, when I realised everything just worked without even a single line of config. I would recommend a Wayland distro like Debian to the average computer user knowing Wayland just works -- prior to Wayland I'd be like "well, Linux is great, but if you like watching YouTube you'll need to add a line to your xorg config to turn on the thingy that smoothes out video playback on Intel iGPUs". I appreciate others have different perspectives -- I come from the POV of someone who likes to install an OS and have all the basic stuff working out of the box.
Because X is not getting much development at this point (personally I still use i3, haven’t switched to Sway, the present works fine for me).
Hmm? Seems to be getting plenty of development.
I've been on and off Linux desktops since the advent of Wayland. Unsure of the actual issues people run into at this point, outside of very niche workflows or applications, for which there are X11 fallbacks.
Also, by "commercial Linux vendors", you do realize Wayland is directly supported (afaik, correct me if wrong) by the largest commercial Linux contributors, Red Hat and Canonical. They're not simply 'vendors'.
> Unsure of the actual issues people run into at this point, outside of very niche workflows or applications, for which there are X11 fallbacks.
I don't know if others have experienced this, but the biggest bug I see in Wayland right now is that sometimes, on an external monitor after waking the computer, a full-screen Electron window will crash the display (i.e. the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug, because it only affects my external monitor and only affects Electron apps (I notice it with VSCode the most, but that's just because I have it running virtually 24/7).
If anyone has encountered this issue and figured out a solution, I am all ears.
> it doesn't have Android-like separation — a feature no one really wants.
It's certainly a feature I want. Pretty sure I'm not alone in wanting isolation between applications--even GUI ones. There's no reason that various applications from various vendors shouldn't be isolated into their own sandboxes (at least in the common case).
There is a big reason: It impedes usability, extensibility and composability. If you sandbox GUI applications then the sandbox needs to add support for any interaction between them or they will just not be possible - and to fully support many advanced interactions like automation you will essentially have to punch huge holes in the sandbox anyway.
Meanwhile the advantages of sandboxing are pretty much moot in an open source distro where individual applications are open and not developed by user hostile actors.
Yes, sandboxing impedes those things. But I assume you're not advocating against sandboxing in general, right?
Starting with a sandbox and poking holes/whitelisting as-needed is a good way to go. Whitelisting access on a per-application basis is a pragmatic way to do this, and Flatpak with Wayland gives a way to actually implement this. It's imperfect, but it's a good start.
Preventing keylogging is a good, concrete example here. There's no reason some random application should be able to see me type out the master password in my password manager.
Likewise, there is no reason that some other application should be able to read ~/.bash_history or ~/.ssh/. The browser should limit itself to ~/Downloads. Etc.
> Meanwhile the advantages of sandboxing are pretty much moot in an open source distro where individual applications are open and not developed by user hostile actors.
Defense in depth. Belt and suspenders. I do trust the software I run to some degree, and take great care in choosing the software. But it's not perfect. Likewise, I take care to use sandboxing features whenever I can, acknowledging that they sometimes must have holes poked in them. But the Swiss cheese model is generally a good lens: https://en.wikipedia.org/wiki/Swiss_cheese_model
If we weren't concerned with belt and suspenders and could rely on applications being developed by non-hostile actors, then we could all run as root all the time! But we don't do that--we try to operate according to least-privilege and isolate separate tasks as much as is practical. Accordingly, technologies which allow improved isolation with zero or minimal impact to functionality are strictly a good thing, and should be embraced as such.
> given that we already had a working GUI. (Maybe that was the intention.)
Neither X11 nor Wayland provides a GUI. Your GUI is provided by GTK or Qt or Tcl/Tk or whatever. X11 had primitive rendering instructions that allowed those GUIs to delegate drawing to a central system service, but very few things do that anymore anyway. Meaning X11 is already just a dumb compositor in practice, except it's badly designed for being a dumb compositor because that wasn't its original purpose. As such, Wayland is really just aligning the protocol to what clients actually want & do.
Here's my PoV:
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
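To make "no global coordinates" concrete: the pointer events a client receives (wl_pointer.motion) carry only surface-local positions. A toy illustration, not real wayland-client code:

    // Surface-local input: the only pointer coordinates a Wayland client gets.
    struct PointerMotion {
        surface_x: f64, // relative to this window's own top-left corner
        surface_y: f64,
    }

    fn on_motion(ev: PointerMotion) {
        // Hit-testing, hover states, drag handles: all fine with local coords.
        println!("pointer at ({}, {}) within my surface", ev.surface_x, ev.surface_y);
        // Screen position? The core protocol has no request or event for it.
    }

    fn main() {
        on_motion(PointerMotion { surface_x: 12.0, surface_y: 34.0 });
    }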
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken a stance of not just implementing raw tools that can be used to implement various UI features, but instead implement protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that they have, and the compositor takes over from there and initiates a drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
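Sketched out, the drag hand-off looks roughly like this (hypothetical Rust stand-ins for the protocol objects; the real request is xdg_toplevel.move(seat, serial), after which the compositor owns the drag):

    struct Seat;
    struct Toplevel;

    impl Toplevel {
        // Hypothetical stand-in for sending xdg_toplevel.move over the wire.
        fn request_move(&self, _seat: &Seat, serial: u32) {
            // Fire-and-forget on the client side: no coordinates are sent or
            // received, and the client never learns where the window ends up.
            println!("compositor, start an interactive move (serial {serial})");
        }
    }

    fn main() {
        let (seat, window) = (Seat, Toplevel);
        let pointer_down_serial = 7; // serial of the button event that triggered this
        window.request_move(&seat, pointer_down_serial);
    }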
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland. ext-zones for positioning multi-window application windows relative to each other is still not merged, so there is no standard way to implement this in Wayland. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas there is xdg-foreign, which is supported by many compositors (supported by GNOME, KDE, and Sway, but missing in Mir, Hyprland, and Weston. Fragmentation!) but doesn't support every possible thing you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
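You can see that from plain std alone: a client's bootstrap is just connecting to one UNIX domain socket (a minimal sketch; wayland-0 is the conventional default socket name, and real clients then speak the wire protocol over it, usually via libwayland or the wayland-client crate):

    use std::os::unix::net::UnixStream;

    fn main() -> std::io::Result<()> {
        // The compositor listens at $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY.
        let dir = std::env::var("XDG_RUNTIME_DIR").expect("XDG_RUNTIME_DIR not set");
        let name = std::env::var("WAYLAND_DISPLAY").unwrap_or_else(|_| "wayland-0".into());
        let stream = UnixStream::connect(format!("{dir}/{name}"))?;
        println!("connected to compositor socket, peer ok = {}", stream.peer_addr().is_ok());
        Ok(())
    }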
Thanks a lot for an actually constructive comment on Wayland! The information tends to be lost in all the hate.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
> xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland
For me, this is a real reason not to want to be forced to use Wayland. I'm sure the implementation of Wayland in xfce is a long time off, and the dropping of Xwindows even further off, so hopefully this problem will have been solved by then.
You seem to know your Waylands.
Do you know if global shortcuts are solved in a satisfactory way, and if there is an easy mechanism for one application to query Wayland about other applications?
One hack I made a while ago was to bind a win+t command to a script that queried the active window in the current workspace and, based on a decision, opened up a terminal at the right filesystem location, with a preferred terminal profile.
All I get from LLMs is that D-Bus might be involved in GNOME for global shortcuts, and that when registering global shortcuts in something like Hyprland, app IDs must be passed along, instead of simple script paths.
It's a downgrade that we have no choice but to accept in order to continue using our machines. Anyone familiar with Microsoft or Apple already knows that's the future.
Yeah, I am a staunch proponent of "don't try to fix what is not broken". Current XFCE is fast, lightweight, usable, and works fine without major issues. While I don't fully understand the advantages/disadvantages of XFCE using Wayland instead of X, if, as someone else pointed out here on HN, running XFCE on Wayland is going to make it slower, these developers will be crippling one of XFCE's strongest features. In that case the other minor advantages seem pointless to users like me.
> running XFCE on Wayland is going to make it slower
Citation needed. None of the other desktops have slowed with Wayland, and gaming is as fast, if not marginally faster, on KDE/GNOME with Wayland vs LXDE on X.
I based it on this thread - https://news.ycombinator.com/item?id=46780901
Long-time XFCE user here. We care that stuff works the same, we appreciate how much work it is to achieve that when the world is changing out from under you, and we appreciate that XFCE understands this and cares about it. Being in Rust is not a concern.
I don't think this will be a quick transition.
Wayland has lots of potential, but it's far from ready to replace X11, especially in multitasking environments. XFCE is taking their time because their community is very concerned with stability.
I predict that XFCE will default to X11 until Wayland has reached broad feature parity, then default to Wayland but keep X11 support until the last vestiges of incompatibility are dealt with.
There's no reason that this wouldn't be accepted by their community, and it should be lighter weight, in the end.
I have been an XFCE user for many years, and am pretty decidedly in the "traditional and conservative about technologies" camp, and I think this is neat and just fine and dandy -- as long as they're not in a hurry to deprecate X11. Whenever I eventually have to go Wayland I would like to continue to use XFCE, so thumbs up for doing the work.
Long-time xfce fan here, I trust the team to make the right decisions of what to do with their copious spare time and insane amounts of funding </s>
(Instead of seeing this as "xfce jumps on bandwagon", I'm seeing it more as "bandwagon finally stable enough for xfce".)
In my view, this project itself shows some of the reasons why Wayland is the right path forward.
On X, we had Xorg and that is it. But at least Xorg did a lot of the work for you.
On Wayland, you in theory have to do a lot more of the work yourself when you build a compositor. But what we are seeing is libraries emerging that do this for you (wlroots, Smithay, Louvre, aquamarine, SWC, etc). So we have this one-man project expecting to deliver a dev release in just a few months (mid-2026 is 4 months from now).
But it is not just that we have addressed the Wayland objection. This project was able to evaluate alternatives and decide that Smithay is the best fit, both for features and language choice. As time goes on, we will see more implementations that will compete with each other on quality and features. This will drive the entire ecosystem forward. That is how Open Source is supposed to work.
Because Wayland only does essential low-level stuff such as display and graphics it forced people to start coming up with a common Linux desktop (programming) interface out of nowhere to basically glue everything together and make programs at least interoperate.
Such an effort to rethink the Linux desktop could have been a major project on its own, but as having something was necessitated by Wayland, all of it has become hurried and lacking control. Anything reminiscent of a bigger and more comprehensive project is in its initial stages at best. Wayland has been coming along for about ten years now; I'll give it another ten years until we have some kind of established, consistent desktop API for Linux again.
X11 did offer some very basic features for a desktop environment so that programs using different toolkits could work together, and enough hooks that you could implement stuff in window managers etc. Yet there was nothing like the more complete interfaces of the desktops of other operating systems that tied everything together in a central, consistent way. So, the Linux desktop interface was certainly in need of a rewrite, but the way it's happening is just disheartening.
Nobody has a user-space stick big enough to force things in the Linux world.
When Apple dropped the old audio APIs of classic macOS and introduced CoreAudio, they pissed off a lot of developers, but those developers had no choice. In the GUI realm, they only deprecated HIKit for a decade or two before removing it (if they've even done that), but they made it very clear that CoreFoo was the API you should be using and that was that.
In Linux-land, nobody has that authority. Nobody can come up with an equivalent to Core* for Linux and enforce its use. Consequently, you're going to continue to see the Qt/GTK/* splits, where the only commonality is at the lowest level of the window system (though, to Qt's credit, optionally also the event loop).
GNOME has enough weight to at least force most projects to accommodate them. But unfortunately this has mostly been for the worst, as GNOME is usually the odd one out with most matters of taste and design.
You say that like it's a bad thing. If you have an actually good design then you can convince people with those advantages instead of forcing them with a stick.
I think that's the main reason many of us use Linux actually - because we didn't like what the big stick corpos wanted to force on us.
It's not necessarily a bad thing.
But both Qt and GTK are every bit as well designed as, say, Apple's Core* frameworks for GUI development, yet neither has become the singular GUI toolkit on Linux.
One can view this as a benefit of Linux or as a disadvantage. Both are true.
> As time goes on, we will see more implementations that will compete with each other on quality and features. This will drive the entire ecosystem forward.
Unfortunately there aren't enough developers to maintain all those duplicate implementations to the level users expect so a lot of features will be missing and a lot of maintainers will burn out. Not having a libcompositor remains Wayland's biggest mistake.
The other key element with Wayland is that the kernel does a ton of the work for you. There's GEM buffer management and DMA-BUF to manage and move around video & regular memory, there's kernel mode setting, there are incredibly good Mesa drivers.
X didn't have any of that to build from. It basically was a second kernel: the OS that dealt with the video card, sitting atop the actual OS. It talked to the PCI device & did everything.
Part of the glory of Wayland is that we now have fantastic, really good OS abstractions & drivers. When we go to make a display server, we start at such a different level now. Trying to layer X's abstractions on top is messy & ugly & painful, because it mostly inhibits devs from being able to use the hardware in neat, efficient, clean, direct, modern ways. You have to write an X extension that coexists with a patchwork of other extensions and slots into the X way, and that can figure out how to leverage the hardware. With Wayland, most compositors just use the kernel objects. There's much less intermediary, much less cruft, much less wild indirection & accretion to cut a path through.
And as you beautifully state, competing libraries can decide what abstractions & techniques work for them. There's an ecosystem of ideas, a flux to optimize, hone & improve, on a variety of different dimensions. The Bazaar free to find its way vs the one giant ancient Cathedral. It's just so so so good we're finally not all trapped inside.
Tl;dr: Wayland has a much higher level that it can start from. Trying to use GPUs & hardware well in X was a nightmare, because X has a sea of abstractions - extensions that you had to target & develop around - making development in X a worst of both worlds: low level, but with so many high-level constructs you had to navigate through.
> On Wayland, you in theory have to do a lot more...
This is vaguely a double-edged sword. Yes, more code duplication across disparate projects - but that also allows people who _really care_ (such as the xfce team) to roll up their sleeves and do more. Any WM will only ever be as good as the X11 baseline, Wayland servers have the opportunity to compete on this front.
Although I'm probably permanently stuck with the Niri workflow, I am looking forward to seeing what the xfce developers come up with.
By the time we get to that utopia someone will declare Wayland obsolete and we'll all be arguing over how Nextfad is the best/worst thing ever.
And technically, nothing has been stopping the xfce devs or anyone from making their own X11 server / X.org fork if the window manager interface was too limiting.
I have no doubt about it, but for my use-cases Wayland definitely is a step up. It's definitely a first-world-problem, but somehow typing feels more enjoyable at low latency - back when I still had a backup X11 session, I could instantly tell that I had left it on: the mouse cursor, input, everything felt like soup.
This is the same logic that led to rails being shoehorned into every company in 2006-2010, which spawned a whole ecosystem of people who specialized in rewriting Rails projects back to Java/C#.
Yes, the stack gets you most of the way there. No, you won't be happy if you need to actually make changes to any part of that other than the top layer.
I've been using Xfce as a daily driver in one machine for about a decade now.
Great to know there's work on the wayland support front.
Also, writing it in Rust should help bring more contributors to the project.
If you use Xfce I urge you to donate to their Open Collective:
Isn't the switch from X11 to Wayland the most painful switch that has happened in the Linux world? Even going from Python 2 to 3 was not as bad.
The move from kernel 2.4.x to 2.6.x was pretty painful. The absolute slog from 2.6 to 3.0 and a development model that at least somewhat resembles the model used today was exhausting.
In case you weren't there, the "even" kernels (e.g. 2.0, 2.2, 2.4, and 2.6) were the stable series while the "odd" kernels (e.g. 2.1, 2.3, 2.5) were the development series, the development model was absolutely mental and development moved at a glacial pace compared to today's breakneck speed.
The pre-git days were less than ideal. The BitKeeper years were... interesting, politically and philosophically speaking.
Also, KDE4 was a dark, dark period.
To me the most painful switch was Gnome 2 to Gnome 3. I still miss Gnome 2.
I left Gnome 3 for other WMs (eventually settled on Cinnamon), but every once in a while I would give Gnome 3 a try, just to be disappointed again. I felt like those people in abusive romantic relationships who keep coming back and divorcing over and over again. "Oh, Gnome has really changed now, he won't beat me again this time!"
Just wait. In 8 years, Wayland will be as old as X11 was when Wayland was created.
Then we'll make Wayland 2.
Fully-featured DEs like Gnome and KDE work a lot worse when doing everything in software rendering. If you're working on a device with subpar/nonexistent GPU driver support (i.e. Nvidia hardware for years on end), the experience is absolutely awful.
Nvidia's drivers do something weird on Wayland when my laptop is connected to HDMI, probably something funky with the iGPU<->dGPU communication. Everything works, but at the whim of Nvidia, an update reduces the maximum FPS I can achieve over HDMI to about 30-45fps. Jittery and painful, even on a monitor that supposedly supports VRR.
That's not really Wayland's fault of course, but in the same way Linux is broken because Photoshop doesn't work on it, Wayland is broken for many users because their desktop is weird on it.
> I still have a choice to not use systemd.
Depending on your DE, you have a choice not to use Wayland. Like, yes, if you use GNOME then you don't get choices but that's their whole ethos, and unfortunately I've heard about KDE dropping X, but there are other options and as I type this comment in i3 I can assure you Xorg still works.
systemd was a problem for early adopters (e.g., Fedora). Distros like Debian joined the party later and, as a result, got things way more stable. I never had any systemd-related problem in Debian, while on Fedora (some years earlier) I had some bugs affecting my ability to work. It all seems to work fine now. Things took a while to mature, but it just works.
It was a similar story with Pulseaudio - it caused pain for early adopters but, by the time that Debian stable switched to it by default, almost all of the issues and corner cases had long since been worked out and it was almost completely trouble-free.
Mind you, the libc5 -> glibc2 upgrade was pretty horrible in Debian land, so they didn't always get it right in the early days...
How? At worst the user can just add their own symlink or the developer may need to recompile the app.
This is nothing like wayland where the APIs to do what you want may not even exist, or may not exist in some random compositor a user is using.
I've used Smithay's Rust client toolkit for a few months now. For making apps, it still sometimes has unsafe wrappers disguised as safe ones. It has a lot of internals wrapped in Arc<>, but in my tests the methods are not safe to call from different threads anyway; you will get weird crashes if you do.
I want to dive into how the Wayland API actually works, because I'd really like to know what not to do, when wrappers used 'wrong' can crash.
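For what it's worth, part of that is a general Rust point rather than anything Smithay-specific: Arc<T> only shares ownership and adds no locking, so cross-thread safety depends entirely on what's inside. A minimal sketch of the safe pattern, with a Mutex supplying the actual synchronization:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc shares ownership; the Mutex is what makes mutation thread-safe.
        let counter = Arc::new(Mutex::new(0u32));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let c = Arc::clone(&counter);
                thread::spawn(move || *c.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("count = {}", *counter.lock().unwrap()); // count = 4
    }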
FYI, you can currently use most wlroots-based compositors with XFCE. I myself am running Hyprland + XFCE on Gentoo. https://github.com/bergutman/dots
I resisted Wayland for a long time, but I'm sold now that I see how well it does on old hardware.
I have an old Thinkpad. Firefox on X is slow and scrolls poorly. On wayland, the scrolling is remarkably smooth for 10 y/o hardware, and the addition of touchpad gestures is very nice. Yes, there's more configuration overhead for each compositor, but I'm now accepting this trade.
Does Wayland work on non-Linux systems (e.g. *BSD)?
If an application is written for Wayland, is there a way to send its windows to (e.g.) my Mac, like I can with X11 to XQuartz?
Wayland works pretty well on FreeBSD, and I know at least wlroots compositors work a bit on OpenBSD (though I suspect anyone on OpenBSD would prefer to use their homegrown Xenocara). There are Wayland compositors for the Mac; the YouTuber Brodie Robertson did a good overview of them a few days ago.
Microsoft's WSL2 GUI integration works based on Wayland (and XWayland): https://github.com/microsoft/wslg
Rather than going fully protocol-based (like Waypipe), they used Weston to render to RDP. Using RDP's "remote apps" functionality, practically any platform can render the windows. I think it's a pretty clever solution, one perhaps even better than plain X11 forwarding (which breaks all kinds of things like GPU acceleration).
I don't know if anyone has messed with this enough to get it to work like plain old RemoteApps for macOS/BSD/Windows/Linux, but the technology itself is clearly ready for it.
It depends on what you mean by send. Wayland doesn't have network transparency, there's a bit of a song and dance you have to do to get that working properly. I'm not sure the state of that or of Wayland compositors in general on Mac.
> It depends on what you mean by send.
Currently I can:
$ ssh -X somehost xeyes
and get a window on macOS. For xeyes, that works. It is an absolutely inferior and chatty protocol for any other application, though; try watching a YouTube video in Chrome through it.
X's network transparency was designed at a time when a UI was a couple of drawn lines, and for that it works very well. But today even your todo app has a bunch of icons that are just bitmaps to X, and we can transfer those by much better means (which should probably not be baked into a display protocol).
I think Wayland made the correct decision here: just be a display protocol that knows about buffers, and that's it.
User space can then transport buffers in any way it sees fit.
Also, another interesting note: the modern analogue of X's original network transparency might very well be the web, if you squint at it. And quite a few programs just expose a localhost port to avoid the "native GUI" issue wholesale.
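To make the "just buffers" point concrete, here is a minimal Rust sketch (hypothetical file path, no real compositor involved) of what the shared-memory half of wl_shm boils down to: the client fills an ordinary mappable file with pixels, and the protocol's only real job is handing the file descriptor to the compositor:

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let (width, height) = (64u32, 64u32);
    let stride = width * 4; // ARGB8888: 4 bytes per pixel
    let size = stride * height;

    // Stand-in for memfd_create(): any mappable file works for the sketch.
    let mut buf = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .truncate(true)
        .open("/tmp/wl-buffer-sketch")?; // hypothetical path

    // Fill the buffer with opaque red, pixel by pixel (B, G, R, A in memory).
    let pixel = [0x00u8, 0x00, 0xff, 0xff];
    for _ in 0..(width * height) {
        buf.write_all(&pixel)?;
    }
    debug_assert_eq!(buf.metadata()?.len(), size as u64);

    // A real client would now pass this file descriptor to the compositor
    // (wl_shm.create_pool), wrap it in a wl_buffer, and attach it to a
    // surface; the compositor just mmaps the same bytes. That's the whole
    // "display protocol that knows about buffers" idea.
    println!("prepared a {}x{} shared buffer ({} bytes)", width, height, size);
    Ok(())
}
```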
Today you would do:
`$ waypipe ssh somehost foot`
You need waypipe installed on both machines. For the Mac, I guess you'll need something like cocoa-way (https://github.com/J-x-Z/cocoa-way). Some local Wayland compositor, anyway.
Yes, but still kind of WIP.
It is in FreeBSD's official handbook, and the OpenBSD folks have been playing around with it since at least 2023: https://xenocara.org/Wayland_on_OpenBSD.html
I'm not sure how much farther along they are than that post though.
(xfwl4 author here.)
Absolutely seriously. To me, a big part of what makes Xfce is xfwm4's behavior. Even though most of the other Xfce components will run decently well on wlroots-based compositors, I don't really have an interest in using them, as that's not "Xfce" to me.
It's not going to be perfect, though, as some things we take for granted on X11 still just do not have Wayland protocols to enable them. This will take a long time. Alex's blog post says a developer preview around the middle of this year, and I expect I can deliver on that, and maybe (maybe!) even a stable release by next year, but full feature parity will take years.
I guess at this point it is safe to say that whenever you see "rewrite in Rust", it simply means there is no one left to maintain the software. They are saying pretty openly that they weren't able to patch xfwm4.
I only fear that this is a manifestation of a wider phenomenon in which new software developers are unable to maintain software created by older ones. If that is so, they will try to simplify the software to what they can actually maintain, and rewrite it into a form in which they can maintain it.
If I assume this is true, then all of this is annoying but actually makes sense: Wayland is simpler than X11, so people will tend to maintain Wayland-related software rather than X11-related software. Rust won't let unskilled coders make certain mistakes, so from their point of view it is simpler to rewrite something in Rust.
Although: goodbye network transparency, goodbye performance, goodbye stability. Oh well, it's that time of the year.
As someone who is sensitive to displays, one of the best features of XFCE, unlike other desktops, is that it doesn't cause eye strain, probably because it doesn't play tricks: a pixel at a certain color is stable and not dithered (if you so choose), and higher-level primitives are also stable and don't play time/frequency-based games.
I hope XFCE preserves this, it is a killer feature in today's world.
This bullet point from the reason to chose one library over another is a prime example of what I love about XFCE:
• smithay has great documentation.
Not only are they considering it, but they're expressly calling it out. I'm convinced that the publication of the Agile Manifesto was an exercise in Cunningham's Law, and to that end the XFCE team has produced something great by doing the opposite.
Not the whole codebase, only the window manager (the compositor is the Wayland equivalent). Other components are written in C and will remain so for the foreseeable future. Those components gained Wayland support in the last couple of years; you can try Xfce in a labwc session. There are of course several things to improve, but the compositor is the last big missing piece.
(xfwl4 author here.)
I spent a month or so in 2024 attempting to refactor xfwm4 so it could serve dual purpose as both an X11 window manager and Wayland compositor, and ended up abandoning the idea. It was just getting ugly and hard to read and understand, and I wasn't confident that I could continue to make changes without introducing bugs (crashers, even). We want X11 users to be unaffected by this, and introducing bugs in xfwm4 wouldn't achieve that goal.
Note that we don't have to rewrite all of Xfce: xfce4-session, xfce4-panel, xfdesktop, etc. will all run on Wayland just fine (there are some rough edges that need to be ironed out for full Wayland support, but they're already fairly usable on wlroots-based compositors). This is just (heh, "just") building a compositor and porting xfwm4's WM behavior and UI elements over to it. Not a small task, to be sure, but much smaller than "rewriting all of Xfce".
It was originally named XFce after the XForms library. As of Xfce 3, it uses GTK though, so it could be called GTKce, but renaming the project every time you change widget toolkits is probably not a good idea.
Great to see xfce continue on into the next age.
I've been using popos for a while, but xfce will always have a place in my heart.
If it had tiling support I'd probably use it still. Being so lightweight is a massive boon.
The more Wayland compositors the better. It will force developers to actually abide by the specification instead of creating single-implementation hacks like in the web browser ecosystem.
I suspect many of us still using X are Xfce users waiting for an alternative; I've heard very mixed things about current Fedora Xfce Wayland setups from different people.
It seems I will require a microsoft rust compiler and won't be able to use a small alternative plain and simple C compiler for xfce.
The beginning of the end, or are there plain and simple alternative microsoft rust compilers? Is microsoft rust syntax at least as simple as C's?
Or the right way will be to use an alternative wayland compositor with the rest of xfce?
I love XFCE. With the move to Wayland, I hope they start thinking about abandoning GTK, though.
If they do not mind introducing C++ (they're introducing Rust, so I guess multi-language development isn't out of the question), then FLTK could be an option, though it'd probably need to improve its theming support.
They both have kinda similar roots: XFCE originally used XForms, which was an open source replacement for the SGI Forms library, while FLTK also started as a somewhat compatible/inspired open source replacement for SGI Forms, in C++.
GTK4 is still pretty usable without libadwaita and all its Gnome-isms.
But frankly I think forking and maintaining GTK3 is preferable to moving to EFL or Qt. GIMP is still on GTK3. MATE is still on GTK3. Inkscape is still on GTK3 (but GTK4 work is in progress). Evolution is still on GTK3.
I think GTK3 will be around for a long time.
Hell, I wish EFL were more widely used in general. I was thinking of Qt (mainly because I forgot about EFL), but EFL is much better.
Am I the only one who's not buying into the Wayland hype? I just want X11 support not to fall into disrepair, as I see nothing wrong with it.
(xfwl4 author here.)
I'm also not a big fan of Wayland, to be honest. But that's the way the winds are blowing. X11 has its problems, but even if they are fixable, no one seems to want to work on Xorg anymore. I'm certainly not prepared to maintain it and push it forward. Are you?
Depending on Xorg today is more or less ok, but I do expect distros will stop shipping it eventually.
> I just want X11 support not to fall into disrepair
Are you also willing to maintain it?
Are you willing to write accessibility support for the new Xfce-only Wayland compositor? How will you get every other Wayland compositor to support your non-"Wayland core" accessibility extension?
People like to frame things as if the Waylands are some sort of default, and as if nothing is being lost and no one is being excluded.
Everyone has settled on an accessibility standard (Matt Campbell's). So it's not "your" accessibility protocol, it's already "the" accessibility protocol. This is working as intended, IMO: allow things to compete and mature in the wild, and then pick the fittest.
Right. The push-based accessibility that is only supported by GNOME's compositor, mutter, and GNOME's DE userland as of the last 6 months. I would be very happy to hear about even this extension being supported under other Wayland compositors and software. Do you know of any?
Since you seem informed, perhaps you can clear something up for me: when Campbell says "push full accessibility tree to trusted clients", does that mean you get the entire desktop tree, or only that application's?
Because if you don't get the entire window tree (you only get a single window's information, and only when that application provides it), it is highly incompatible with existing solutions. They say it is compatible because application developers can create a new virtualized thing themselves, but that's not compatible. And beyond that, it is a "solution" that prevents me from controlling my own computer. I understand GNOME is targeting everyone, not just power users. But as a power user, I am someone. I am a human being.
And Campbell's assertions that push is more performant than pull and full tree are backed by arguments drawn from problems that don't even apply generally. GTK 4 broke this, not GTK 3. It's not a push-versus-pull thing; it's Wayland-architecture-focused GTK 4 causing the problem, when things are fine in X11-focused GTK 3. Refs: https://gitlab.gnome.org/GNOME/gtk/-/issues/6269 (a11y: no API for supporting the a11y Selection interface), https://gitlab.gnome.org/GNOME/gtk/-/issues/6204 (a11y AT-SPI: get_child_count implementation iterating over all children causes a freeze for objects with many a11y children)
If the XLibre project appears to be making enough fairly-consistent progress for you to be comfortable tossing around some cash, then do gather up some like-minded folks to hire a dev to follow the guidance here [0] and help out!
Do note that I've never tried to crowdfund a programmer, but that's something I have to believe is possible to do.
[0] <https://github.com/X11Libre/xserver?tab=readme-ov-file#i-wan...>
Ubuntu and Fedora dropping X11:
https://www.theregister.com/2025/06/12/ubuntu_2510_to_drop_x...
https://itsfoss.com/news/fedora-43-wayland-only/
KDE Plasma 6.8 dropping X11:
https://itsfoss.com/news/kde-plasma-to-drop-x11-support/
Suse dropping X11:
https://documentation.suse.com/releasenotes/sles/html/releas...
Wow, this is annoying. I really like Xfce, but there are plenty of minor things that need improvement. Instead of fixing all these minor things, they waste a lot of their donations on a rewrite for Wayland/Rust, apparently for exactly the same reason as all the other Wayland and Rust reworks: developers like writing new code more than actually maintaining, improving, and fixing existing things, and find excuses to do so.
(xfwl4 author here.)
That's a fair criticism sometimes, but, frankly, if you want things the way you want them, learn to code and dig in. Otherwise it's not really fair of you to complain about stuff that people have built for you for free, in their spare time.
In this particular case, it's not fully a "new and shiny, must play!" situation. I personally am not even a big fan of Wayland, and I'm generally highly critical of it. But Xorg is more or less unmaintained, and frankly, if we don't have a Wayland compositor, we'll become obsolete eventually. That's just the way the wind is blowing.
I am not complaining about what people do in their spare time. If the blog post said "someone does this because he likes to spend his own time on it", I would not complain. I am complaining about a) the justifications given, which IMHO are all nonsense and rationalizations for something someone likes to do, and b) the use of donations, which would be better spent improving the software instead of creating more rewrites.
I also do not agree with the "Wayland is inevitable" sentiment. There are non-systemd distros; there will also be non-Wayland distros. The idea that only those things survive which are pushed into the ecosystem by the corporate bullies is wrong; otherwise Linux would not exist.
The Linux desktop was essentially fine already two decades ago, and instead of the needed refinements, bug fixing, and polish, we get one random change in technology after another. So nothing ever really improves, but we incrementally lose applications which do not keep up, break workflows, sometimes even regress in technology (network transparency), and discourage people from investing in applications because the base is not stable. My hope was that Xfce4 was different, but apparently this was unfounded.
> I am not complaining about what people do in their spare time.
Re-read your original post. You are absolutely complaining about what we do in our spare time.
> If the blog post said "someone does this because he likes to spend his own time on it", I would not complain.
I mean, that's part of it. I wouldn't do it if I wasn't interested in doing it. I have my own long list of Wayland criticisms, but I think it's interesting.
> I also do not agree with the Wayland is inevitable sentiment.
I think that's where we'll be at an impasse.
There are non-systemd distros because there are viable alternatives. Xorg (the server implementation, I mean, not X11 the protocol/system) is dead. I don't like saying that. I've invested a lot of time into X11 and understanding how it works, and how Xorg works. But no one wants to maintain it. There is the XLibre fork, and I wish them well, and do want them to succeed, but sustaining a fork is hard, and only time will tell if that works out.
But I don't think X11 has a future, unfortunately. And that really does make me sad. You're free to disagree with that, but... well, so what.
> The Linux desktop was essentially fine already two decades ago and instead of the needed refinements
That's a view through rose-tinted glasses if I ever saw one.
> we get random changes in technology after the other
Jamie Zawinski called this the "Cascade of Attention-Deficit Teenagers", and he's right. I do think some of these changes are an earnest and honest attempt to make things better, but yes, people just want to work on what interests them, and what makes them feel good and accomplished.
When we work for a corporation we don't really get to do that, but when it's unpaid, spare-time volunteer work, we have the freedom to do whatever makes us happy, even if it makes other people mad or disappointed or annoyed, or isn't the most "productive" use of our time (whatever that means).
::shrug::
> > I am not complaining about what people do in their spare time.
> Re-read your original post. You are absolutely complaining about what we do in our spare time.
Then I am not sure how I have to understand this sentence: "After careful consideration, we’ve decided on a meaningful way to use the generous donations from our community: funding longtime Xfce core developer Brian Tarricone to create xfwl4, a brand-new Wayland compositor for Xfce."
This is great news! If anyone from the team reads these comments: thank you people so much for XFCE4!
My response to the Wayland/X11 nonsense bickering has always been that I'll switch to Wayland when Xfce does. I usually roll my eyes when I see "in Rust", but the developer's writeup in the linked article and their comments here are very reassuring, and I look forward to their success!
>The goal is, that xfwl4 will offer the same functionality and behavior as xfwm4 does...
I wonder how strictly they interpret behavior here given the architectural divergence?
As an example, focus-stealing prevention. In xfwm4 (and x11 generally), this requires complex heuristics and timestamp checks because x11 clients are powerful and can aggressively grab focus. In wayland, the compositor is the sole arbiter of focus, hence clients can't steal it, they can only request it via xdg-activation. Porting the legacy x11 logic involves the challenge of actually designing a new policy that feels like the old heuristic but operates on wayland's strict authority model.
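(For the curious, the Wayland-side policy could look roughly like the sketch below. The types are hypothetical, not Smithay's or xfwl4's actual API: the compositor mints activation tokens tied to input events, so when a client later requests focus, it can check whether the request traces back to a recent user action, and otherwise just mark the window urgent.)

```rust
// Hypothetical sketch of a compositor's xdg-activation policy. The
// compositor is the sole arbiter: it either grants focus or flags the
// window as demanding attention.

struct ActivationToken {
    // Tokens are minted by the compositor, so it knows which input event
    // the requesting client is acting on.
    issued_for_serial: u32,
}

struct Window {
    title: String,
    urgent: bool,
}

enum FocusDecision {
    Focus,
    MarkUrgent,
}

fn on_activation_request(
    token: Option<&ActivationToken>,
    last_user_input_serial: u32,
) -> FocusDecision {
    match token {
        // Token traces back to the most recent real user interaction:
        // grant focus.
        Some(t) if t.issued_for_serial == last_user_input_serial => FocusDecision::Focus,
        // Stale or missing token: the user is busy elsewhere, so don't
        // steal focus; just flag the window (e.g., highlight it in a panel).
        _ => FocusDecision::MarkUrgent,
    }
}

fn main() {
    let mut w = Window { title: "demo".into(), urgent: false };
    match on_activation_request(None, 42) {
        FocusDecision::Focus => println!("focusing {}", w.title),
        FocusDecision::MarkUrgent => {
            w.urgent = true;
            println!("{} demands attention (urgent={})", w.title, w.urgent);
        }
    }
}
```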
This leads to my main curiosity regarding the raw responsiveness of xfce. On potato hardware, xfwm4 often feels snappy because it can run as a distinct stacking window manager with the compositor disabled. Wayland, by definition forces compositing. While I am not concerned about rust vs C latency (since smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?