Android’s desktop interface leaks
(9to5google.com)
285 points by thunderbong 5 days ago
Elements on the top of the screen have virtually infinite height, and elements in the corners have infinite height and width. You can't aim "too high" for something at the top of the screen.
Status bars on top don't make sense if you have tabs on top. Now your tabs are infinitely smaller, and aiming at them requires a lot more effort.
Mac's original design had the menubar on top, and its windows didn't have tabs, so it all worked fine together. That's not the case for browsers with tabs on top.
Along the way, it seems most designers have forgotten about Fitts's law: https://en.wikipedia.org/wiki/Fitts's_law#Implications_for_U...
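For reference, the Shannon formulation of Fitts's law predicts pointing time as T = a + b·log2(D/W + 1). A quick Python sketch (the a and b coefficients here are illustrative, not measured values) shows why a screen-edge target, which the cursor cannot overshoot and which is therefore effectively very deep, is so much faster to hit than a thin tab strip:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law.

    distance: how far the cursor must travel to the target (px)
    width: target size along the axis of motion (px)
    a, b: illustrative device-dependent constants (seconds), not measured
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A 20px-tall tab strip 800px away vs. a screen-edge target the cursor
# cannot overshoot (modeled here as an enormous effective depth):
print(round(fitts_time(800, 20), 2))       # thin target: ID = log2(41) ~ 5.36
print(round(fitts_time(800, 100_000), 2))  # edge target: ID ~ 0.01
```

Elements pinned to a screen edge behave as if the width term were unbounded along that axis, driving the index of difficulty toward zero, which is exactly why a bar at the very top edge is cheap to hit while one a few pixels below it is not.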
The linked article seems to imply that this remains a good design choice even today:
> The use of this rule can be seen for example in MacOS, which always places the menu bar on the top left edge of the screen instead of the current program's windowframe.
I guess now that the browser is the one app you probably spend the most amount of time in, it might make a little less sense? Android's lack of a menu bar system makes it make very little sense there.
Apple's design never made sense. It's fine when apps are maximised but it gets very confusing when apps are not maximised and the menu is very far from the app that it belongs to.
Since it only works well for maximised apps, the UX is much better if you just merge the menu into the title bar of apps.
I found it not at all confusing once I got used to it; in practice I find it less confusing and more reliable.
Speaking from 20+ years of Windows use with a local menu bar and 7 years of Linux desktop where I switched to a global menu bar: it was an instant improvement in quality of life.
I no longer have to hunt for a narrow menu bar strip, just throw the mouse all the way up, and hope to never hunt for it ever again.
I wonder how relevant Fitts's law is with bigger screens and the drastically changed ratio between mouse hand movement and cursor movement on screen. It used to be that you could reach a screen corner with a very simple flick of the wrist. But that doesn't feel the same anymore on modern hardware.
Depends on your configuration, I guess. I just tried this with my mx master with its standard resolution (so no ridiculous 800000 dpi gaming mouse) on a 4k 32" at 100% under windows. I can easily reach a corner with a quick flick of the wrist.
On my laptop's FHD screen it's even better.
Auto-hide the task bar at the bottom, and you've basically got the Gnome UI. Works just fine. It's the permanent screen reservation of the double task bar that really eats up the usable desk space.
Samsung's task bar (when you enable the DeX integration on a tablet) also supports this and it makes for a fine user experience.
Edit: I've enabled "force desktop mode" on my Pixel 9 Pro and hooked it up to my laptop dock. The UI looks almost exactly the same already. Taskbar at the bottom, notification bar at the top.
It's clearly experimental; my ultrawide screen scales horribly, my keyboard app gets horribly confused, and interacting with the top bar triggers a full-screen tablet overlay that looks a bit weird.
However, Chrome opens multiple windows and browses just fine. There are right-click menus, mouse hover interactions, window resizing features (though some apps require the "force resizable activities" flag). Ethernet Just Works, audio/video just works, and I can operate my phone screen while working in dock mode (so apps that absolutely refuse to work can still be operated through the touch screen).
Hiding the bottom bar doesn't solve the problem because it still takes the corners away. You can't put UI there because the bottom bar will come up and cover it when you mouse into the corner. The OS is taking all four corners for itself. Greedy! Apps should have that space. Apps are what we are here to use and the OS is getting in the way.
macOS doesn't seem to care and that's what all the designers use. I'm guessing people think it's pretty?
The dock would also be better if it weren't stretched to the entire screen width by default, but perhaps Google is planning to use that space for something. It's also possible they're going to remove the top bar at some point; that'd make the UI standard Windows-shaped.
The Gnome trick for the dock is to only show the dock when hitting the Super button which also brings up the virtual desktops and what macOS might call Launchpad (except not full screen by default). Ubuntu likes to force the dock on you the way macOS does, but you can disable that.
For me the only feature I want from a taskbar/status bar is the ability to hide it permanently so it never comes out until I press a key on the keyboard (like the Meta/Windows key, or even a key combination if needed). A taskbar popping up just because you weren't precise with your mouse and moved it too far down or into the corner is very annoying to me. I treat it as a test of whether a desktop environment has a "we will force our way down your throat" or a "we try to make it great for everyone" attitude. Windows and Apple choose the former. KDE chooses the latter.
I am surprised, because somehow they get it on phones (hidden by default, needs a gesture to bring down). Real estate on a laptop is just as precious, especially vertical real estate.
What I don't get is why they don't design UIs with a status bar on the side - you have so much wasted horizontal real estate on landscape monitors, with every website and code editor having huge bars on either side of the text.
Smartphones are portrait screens, and it's clear that that much horizontal real estate is enough.
I would put all bars at the top. On a touch-enabled screen it's much easier to touch something at the top than something next to the keyboard, so I move the taskbar to the top on all my Windows two-in-ones.
Shout out to https://github.com/valinet/ExplorerPatcher for forcing this behavior even on Windows 11.
Why not at the side, though? Better use of the vertical space and no issues with touchscreens (interactable items on top, hand reaching from the side). That's how I do it on every desktop/laptop machine I own.
I don't remember why I did that, tbh. Having it at the side works as well, I agree.
And ChromeOS is designed around Chrome's tab bar, so that it sits at the top edge of the screen, essentially making it infinitely tall. This is one of the things that makes surfing the web more pleasant on ChromeOS than on Windows and Mac.
With many Linux window managers you can move the status bar to the bottom, hide the title bar, and configure Chrome to use the (now hidden) system title bar for the same effect.
Agreed. Those few little icons at the top are wasting an entire row on the screen. That junk can go in the corner of the taskbar like Windows.
Agree. They should make the desktop UI similar to what is there on ChromeOS or Samsung Dex. The top bar doesn't make sense at all.
I think the fact that the first smartphones were way smaller than today's ones played a role. They weren't that elongated either; the aspect ratio was way closer to square. I could easily reach the top bar back then, and it was definitely not a hurdle. But of course that hasn't been true for a long time now.
This looks like it will help a lot of students and families who are on a budget. If you can just plug your phone into a screen you do not need to buy a separate laptop anymore. The browser extensions are the most important part because that is what makes a computer useful. I am glad to see they are thinking about this.
>This looks like it will help a lot of students and families who are on a budget. If you can just plug your phone into a screen you do not need to buy a separate laptop anymore.
Except that android phones with display output are mostly flagships with flagship prices.
But 50 euros on the used market got me a retired corporate HP/Dell laptop with a 1080p screen, an Intel 8th-gen quad-core i5, 8GB RAM and a 256GB NVMe drive, on which I put Linux. Way better for studying and productivity than my Android phone hooked up to the TV.
It's a nice feature to have as a backup in case my laptop dies, but I wouldn't daily drive an android phone as a desktop computer for productivity.
Actually, many ridiculously expensive "flagship" smartphones do not have DisplayPort output, and some do not even have USB 3.
The chances of finding DisplayPort in what have nowadays become medium-price smartphones, i.e. $500 to $600, are about as good as finding it in a "flagship".
Resell the 8GB of RAM and buy an even better phone then? That's 150 euros of value right there.
Then use the money on a reputable second hand store to buy a used S20 5G 128GB for 150 euros, or a S22 128GB for 145, maybe an S21 Ultra 5G 256GB for 139, and you've got yourself a valiant workstation already (Samsung DeX works great out of the box, no need to wait for Google here). I can also find an S20+ 5G 128GB for 75 euros with display damage (but that doesn't matter when you hook it up to a monitor).
On another website I can find an S20+ 5G with cracks in the edges of the touch screen for 50 euros. That's 12GiB of RAM, 128GiB of storage, a 3200x1440p@120Hz screen and 5G connectivity built in. You're gonna need a Bluetooth mouse and keyboard (that's like what, 5 euros?) to hook it up to the TV but then you're good.
Why buy a used phone that will stop receiving updates, can't be fixed or upgraded and can't run whatever you want on it when you can use a real computer instead?
That's a nicely thought out setup, but why would other people want to do that hassle, instead of just getting a cheap laptop, which is what most people do?
You're making up niche scenarios for the sake of winning an argument, but you don't daily-drive this, you don't dogfood it yourself; these setups are only good as concepts on paper, not in practice.
The market for people buying obsolete phones to connect to their TV as their daily driver workhorse computers is insanely small, even on quirky HN let alone outside this bubble. So who do you think you are convincing/converting with this?
Like, the most popular Androids are Samsungs, and Samsung has been shipping DeX on its flagships forever - how many of their users actually use it? Or how many buy them for that feature alone? You haven't discovered an untapped market here that will replace PCs/laptops for most average people.
A 2-3 generation old Pixel on the second-hand market is not expensive at all though.
And you easily add a mouse/keyboard just fine to it.
>A 2-3 generation old Pixel on the second-hand market is not expensive at all though.
Sure, but at around 300 bucks it's still way over 50 bucks.
And even if you get a used Pixel 8, having separate phone and computer adds a priceless layer of redundancy and flexibility.
If someone steals my phone, I don't want to also lose my work PC with it.
Isn't Pixel 10 the first one with fully supported desktop mode?
I remember I was very confused when buying a Pixel 7 to replace my (then 3 year old) Huawei P30 Pro, and the inferior camera + lack of desktop mode made it feel like a net downgrade.
Old pixels don't have HDMI output, I learned it the hard way when the screen of my Pixel 3a died.
Only pixel 8 and up have DP. So 2 generations old. I guess soon it will be 3rd gen.
The moto g100 is a good example of a midrange phone with decent specs, including video output. It launched at $400, and can be bought for around $200 these days.
It has a Snapdragon 870, 8gb RAM, 128gb storage, a microSD slot, headphones jack, and a big enough battery to last 2 days. It's a little chunky, and it's not waterproof, but beyond that it's just about everything I ever wanted in a phone.
Motorola, of course, has already abandoned it. But it still gets up-to-date Android via Lineage OS and other community made ROMs.
How did they abandon it? It released October last year according to Google.
>but beyond that it's just about everything I ever wanted in a phone.
I get that, but none of this answers my question of why people should use that to a TV, instead of a PC, other than to flex? It really isn't more practical, nor saving you money and you're still limited to the apps of android ecosystem rather than the windows/linux one.
> How did they abandon it? It released October last year according to Google.
Oh, geez, I didn't realize they abandoned it so hard they even recycled the name! I was referring to the 2021 g100: https://m.gsmarena.com/motorola_moto_g100-10791.php - it looks like the 2025 recycle lacks video output :/ https://m.gsmarena.com/motorola_moto_g100_5g_(china)-14228.p...
As for why anyone should do it, I'm not really arguing that anyone should. I was just trying to point out that it's more affordable than you might think. (Although it can't beat the deal you got on your laptop.)
I think it might make sense if you already have a laptop dock with a screen and a keyboard at home and at school/work, and your needs were fairly lightweight, and you really valued portability. Or as you suggested, it could just serve as a backup device in case your main laptop gets broken or whatever.
Yeah, fair enough. I actually really like the recent trend of Android manufacturers committing to 7 years of software updates, because yeah, community ROMs really aren't for everyone.
My point was more that there are affordable options if you're inclined to do a bit of tinkering.
You don't need wired display output, just WiFi. Motorola's Smart Connect desktop uses Miracast for using TVs and the like as desktop monitors, in addition to wired output.
I got my moto g84 5G with 8/256 GB for about 170 euros new and it supports it (not wired). Seems to work fine.
USB-C is only a connector/socket - a device having a USB-C socket does not guarantee much beyond being able to plug a USB-C connector into it.
Some USB-C devices only use the port for charging for example. Others might only support USB 2.0.
Getting a display out from something with USB-C socket needs the device to support something called DP Alt Mode.
Note that cables matter too - you can have a DP alt mode enabled monitor and phone, but if you have the wrong cable it won’t work. Welcome to the future.
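On a Linux box you can sometimes verify this chain before blaming the phone: the kernel exposes negotiated Type-C alternate modes under sysfs. A hedged Python sketch (the exact sysfs layout varies by kernel and driver; 0xFF01 is the VESA-assigned SVID for DisplayPort Alt Mode):

```python
from pathlib import Path

DP_ALT_MODE_SVID = "ff01"  # VESA-assigned SVID for DisplayPort Alt Mode

def ports_with_dp_alt_mode(typec_root="/sys/class/typec"):
    """Scan sysfs for Type-C partners advertising DP Alt Mode.

    A sketch only: the directory layout (port0/port0-partner/...)
    varies by kernel version, and many devices expose nothing here.
    """
    matches = []
    for svid_file in Path(typec_root).glob("port*/**/svid"):
        if svid_file.read_text().strip().lower() == DP_ALT_MODE_SVID:
            matches.append(str(svid_file.parent))
    return matches

if __name__ == "__main__":
    print(ports_with_dp_alt_mode() or "no DP Alt Mode partners found")
```

Even when the phone-side mode shows up here, a charge-only or USB 2.0-only cable will still silently break the link, so swapping the cable is usually the cheapest first test.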
> Except that android phones with display output are mostly flagships with flagship prices.
Might well be that this becomes a lot more common on cheaper phones if it becomes a popular feature, though. A DisplayPort output isn't currently that useful, so it makes sense to cut it from budget models. But if this desktop functionality becomes popular, that calculus may change.
I am pretty sure brands would rather sell you additional devices, like tablets or Chromebooks (will they be called Androidbooks?), than make budget models able to do this.
Some "flagship" and higher-end-midtier phones cheap out on the USB connection. USB 2 over USB-C with USB-PD for fast charging. No video out, slow data transfers.
Maybe when desktop mode becomes more common there will be an incentive to fix the shitty USB situation.
Cheap phones probably won't really have the power to effectively multi-task so I imagine cheap models would rather disable the feature than leave the user with a bad UI.
> Except that android phones with display output are mostly flagships with flagship prices.
There are exceptions. For example, the Motorola Edge has DP Alt Mode.
https://uperfect.com/blogs/wikimonitor/list-of-smartphones-w...
Do you understand how much 50 bucks is in a third-world country? I mean, an Android phone is not the cheapest solution for the poor (obviously), but having this kind of feature helps a family a lot.
>Do you understand how much 50 bucks is in a third-world country?
Yes I do, no need to patronize us with that since even in 3rd world countries people have access to old computers from ewaste imports at a reasonable price, we don't all live in straw mudhuts wearing loincloths swinging from branch to branch.
Now tell me which 50-euro phone ships with display output and is readily available. AFAIK the OnePlus 7T I had is the cheapest with that feature, but it's still over 50, and official software support stops at Android 12. Not sure if flashing LineageOS will keep the display output feature.
Then there's the issue of availability in third-world countries, where it might be easier to find a scrapped Dell Optiplex with a Core 2 Duo, or a beat-up Acer from the Windows 7 era, cheap at your local market, whereas a cheap Android with display output capabilities is more of a unicorn. Sure, you'll find your Pixel 8s and Samsung S24s too, but those imports don't come cheap there compared to the masses of lesser-known cheap Chinese phones, and those don't have display output and their software is shit.
Plus, if you go the Pixel-8-as-a-PC route, you still need the budget for an external display, mouse and keyboard, and your battery will wear out much faster. So why not get a cheap laptop, which comes with all the peripherals?
Plus 2, old phones age very poorly performance-wise: they slow down a lot due to thermal paste and battery degradation, and nobody makes quality OnePlus 7T batteries anymore for a swap back to out-of-the-box performance. What you find on AliExpress now are fakes or poor-quality clones, while a laptop is much easier to repair and maintain as parts wear out or break.
It'd be really weird if extensions got enabled/disabled based on whether the USB cable is plugged into a monitor or not.
I expect the eventual production version of this will have extensions if and only if the normal Android Chrome has extensions at that time.
Yea, I very much doubt they would ever put a browser extension on this. It's funny, I feel as though reading some Google dev's response on reddit about why mobile chrome didn't have extensions was my inflection point when I started to realize they were becoming evil.
>Android Chrome not having extensions is just a build option toggle. It doesn't have extensions because Google doesn't want it to, not for technical reasons.
The leak screenshots are from the dev version of the app. It has not been confirmed to actually have extensions enabled in the prod version, which is what the parent poster was talking about. It would have been prudent to actually read the post I was replying to and the actual article, not just look at pictures.
So you need to buy a phone, a monitor, a keyboard and a mouse. And you need a desk to put the stuff on, which is not a given if you are part of a poor family with several kids.
A cheap android phone and a cheap chinese laptop with 16GB of ram is about 300 EUR where I live, and you can use it wherever you want.
How will this succeed where the Motorola Atrix failed way back in 2011?
https://arstechnica.com/gadgets/2011/03/the-motorola-atrix-4...
My Moto Edge 2024 has "Ready For" which is basically this still today. I plug in the USB-C cable normally connected to my work MacBook and I instantly get a full desktop experience; mouse, keyboard and sound included.
It's how I play Minecraft with my kids when they get the itch. Sometimes, if I know I'm only gonna be zoning out on YouTube at night, I'll use it to save a few watts too.
It can do 1440p at 120hz, all on a really affordable phone. It's nice.
I've only used it when I'm in a pinch but it's handy. Blowing up mobile apps to a larger screen and multitasking isn't ideal certainly but I've been able to handle "email job" type activities while out of pocket. That said I've never heard of anyone else who's actually used it.
What might have been...
Windows Phone was on this path ages ago, and looking really good.
I eagerly await one of two dreams (or both):
1. A phone which can seamlessly function as a desktop for my work.
2. A new clamshell Android phone ala Nokia e90, which is good enough for work stuff on the go.
This has existed for more than a decade. Even my old Samsung S10+ had this feature. It's called Samsung DeX and it is not some handicapped environment like iPad OS either, you can actually get real work done, especially if you're a software engineer.
> The browser extensions are the most important part because that is what makes a computer useful.
What year were you born? I ask not to lambast but because given infinite time, I doubt I would come to the same conclusion. To me (a 37-year old) that statement sounds like someone who grew up with Chromebooks in school
As Google's domination continues, the US and EU need to force mobile OS vendors to open up platforms for third party app installation without gatekeeping, deep menu toggles, or scare walls.
You already need a phone to pay for parking, order at restaurants, identify yourself with the government, etc. Two companies should not dictate how we interact with essential life functions.
The monopoly grip on this is so tight that it's almost impossible to compete.
You can get a monstrously powerful MacBook almost new for under $500. And that includes display, keyboard, touchpad and speakers. And a whole lot more.
What makes a computer useful is the form factor (decent size screen, mouse and keyboard instead of touch controls) and having full control of the system. It has nothing to do with browser extensions.
Yeah, and we'll be forced to do this because nobody can afford computers anymore, thanks to RAM and SSD prices driven up by companies like Google buying it all up at extortionate prices.
We’re going backwards by putting all of our compute back in the warehouse.
It really does look to be a rewrite of ChromeOS as a native Android experience, with very few tweaks to the user experience that I can see.
I think it's a good idea on Google's part. The trend of consumers using mobiles as their one and only computing experience is still strong. This will blend the experience consumers have between desktops and their primary computing platform.
It's a trend with Apple as well. It can be seen in iOS/macOS 26 Tahoe. There's lots of untapped potential in those iPads with M-series CPUs. We've also had rumors of a "MacBook Air Lite" sporting a cellphone A-series CPU. The convergence is happening.
I would love to be able to do more with my Google Pixel phone. Right now, the MacBook is my primary workstation, but the possibility of an even more "mobile" productivity setup is very enticing. Now if only I could get an Android tablet with the new "Terminal" feature in Android 15...
I wouldn't be surprised if Apple cannibalized Mac for that forced App Store and services revenue.
Google is about to dip into that market, where desktop users are forced to use the Play Store to install any app. Apple would be foolish to leave that money on the table.
Essentially a clone of Windows 11, and those screenshots make me realise just how much I hate the rounded corners, borderless vagueness, and excess padding of "modern" UI.
For contrast, this UI is more my style: https://serenityos.org/screenshot-b36968c.png
Is this serious? Looking at this I feel both bored to death (think "Severance gray office hell" boredom) AND overwhelmed because of the tons of decorations and useless creases and crappy icons and awful text and unnecessary affordances that only made sense 30 years ago when no one knew how to use a computer. Ugh
I agree with the parent comment, but I understand that someone who did not grow up with UIs that looked like that would think otherwise. I do feel that Windows 2000 was peak UI for desktop operating systems, but it's probably due to a combination of nostalgia and the fact that I deeply dislike modern Electron-based UIs with too few decorations and an overly minimalistic and non-customizable "we know best" attitude.
I have absolutely no interest in expanding the use of Android in my life. I am, in fact, far more interested in going the other way and trying to reduce my reliance on any locked down platforms.
Tried the desktop mode on my Pixel 8 running GrapheneOS. It's getting very close to being usable.
On my Pixel 9a (also on GrapheneOS) the biggest limitation is it can't be set to higher than 1080p, and the upscaling algorithm with my 4K display (not sure where in the chain that happens, monitor or phone) was quite terrible to the point of text legibility being a concern.
The usage experience otherwise is quite good, it's perhaps my preferred way to sync data to and from my phone, I have it all stored on a NAS so I connect to my Type-C display (which has keyboard, mouse, and ethernet connected to its switch), fire up a terminal, type in my rsync commands, and my pictures & music are synced ~instantly at LAN speeds.
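The "terminal plus rsync" workflow above can be scripted so it's one command instead of retyping paths each time. A minimal Python wrapper as a sketch (the NAS hostname and directories are placeholders, not from the original comment):

```python
import subprocess

# Placeholder endpoints: substitute your own NAS host and directories.
SYNC_JOBS = [
    ("/sdcard/DCIM/Camera/", "nas.local:/volume1/photos/"),  # push photos
    ("nas.local:/volume1/music/", "/sdcard/Music/"),         # pull music
]

def rsync_cmd(src, dst):
    """-a preserves timestamps/permissions, -v lists transferred files,
    --partial lets an interrupted transfer resume where it stopped."""
    return ["rsync", "-av", "--partial", src, dst]

def run_all(jobs=SYNC_JOBS, dry_run=True):
    for src, dst in jobs:
        cmd = rsync_cmd(src, dst)
        print(" ".join(cmd))
        if not dry_run:  # flip to actually transfer
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_all()
```

Over the dock's wired ethernet this runs at LAN speed, exactly as if the commands had been typed by hand in the terminal.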
I use Samsung Dex all the time on my S25. Got rid of my laptop completely.
Best part is, I can make "apps" on the fly that I can use from my phone with termux integration.
Dang. is that tmux, cmatrix, mise, and node running on your android device? Or is that an ssh session?
Actually, it's the Android 16 Linux Terminal (pKVM) [1], so it's 100% on-device. Since this is a full VM, unlike Termux, you can run Docker too! (Though Termux is more stable and lightweight for daily use.)
[1]: https://www.androidauthority.com/android-16-linux-terminal-d...
I enjoy cool features like this, but as usual, I'm wary of the consequences.
Android is becoming more and more locked down like iOS. Even if it weren't, it's still always been more locked down than a standard desktop or laptop machine running an operating system of the user's choice.
With the advent of smartphones and tablets, already I see non- and semi-technical users often dropping their laptop or desktop and just using their phone or tablet. (I know people who don't even have a laptop/desktop anymore.)
Android having a full desktop interface will just add fuel to this fire, and further normalize running a locked-down OS and device that users don't truly own or control as their only computing platform.
It wasn't always so locked-down as it is today.
The OG Motorola Droid, for example: While it clearly wasn't a design intent, there was really nothing of any gravity to stop people from using it in any way they wished.
Rooting was a simple matter of running a hacked su command, and voila: One becomes root. The bootloader wasn't locked at all. Custom kernels and userlands were normal. It was a great little pocket computer to goof around with for anyone who cared enough to give it a swing.
Just install the "missing" su binary and...done.
At the time, I felt that this was a perfectly acceptable way to keep it working reliably for regular folk.
In a way I don't know what I think about them preventing me from modifying "their" certified OS. Many products do that (if I buy a Marshall smart speaker, it's not as if I can modify the software, is it?).
What I want is to be able to properly install an alternative OS (just like I don't care about what Windows or macOS do, as long as I can install Linux), and that goes with the bootloader unlocking/locking.
The problem is that for every person who wants to do this, there are hundreds (thousands?) who wouldn't want to - and those people are vulnerable to various security exploits that would allow someone evil to take over their device.
This isn't just a made-up situation: there are nations with large teams of people whose job is to figure out how to get software of their choice/make/design installed on your device, allowing them to do whatever they want.
I have mixed feelings as well.
The security model of Android and iOS is vastly superior, and for "normal" users it is not so much of a problem if they don't have control they neither need nor want.
On the other hand, I obviously don't like it when I don't have control over my hardware. But what I hate the most is when the manufacturers prevent me from installing an alternative OS. I like being able to install something like GrapheneOS.
Also the fact that I'm forced (in practice) to use the Play Services is not really about the device being locked down.
Vastly superior security shouldn't require giving up freedoms. But do tell me how successful the war against scams has been for the average user.
I am not sure what you are trying to say.
Convincing a user to give up their password will always be an issue; that's fundamental. But the existence of phishing does not mean that security does not matter.
Without security, there is no need to phish, because the system does not protect anything. Once you have a good security, then the best attack is phishing because it's easier to trick the human than the system. This means that the security is good, not bad.
I am with you, and for this reason I really want them to fail. The PC is currently still a platform where the user has a relatively large amount of control and digital autonomy, and as long as a sizeable part of the population keeps using it, companies and government institutions cannot ignore it and must support it.
Once 90% of all internet clients are iOS or Android the open internet is dead, and the concept of a general purpose computer on which you can run any computation you want is also inaccessible to the average person. From that point on, everything is a service that you rent from either Apple or Google.
Samsung should not give up DeX; it is mature and works exceptionally well with supported applications (wireless DeX with my Samsung TV works great). Knowing Google, they will give up after 5 iterations, or updates will be very slow. It is good for the ecosystem; I hope it will become more mainstream.
Some "first look"
It's just a slightly different showcase of the same UI shown in this video: https://www.youtube.com/watch?v=yzDO-GS-Bm8
That UI is available to test on any Pixel 10 (maybe even any Android 16 device?)
I do have it in Pixel 8 after enabling in developer options. It's a bit buggy and low resolution, but does the job when e.g. I want to connect some video I'm already watching on mobile to the external display via USB-C. (You can connect a mouse via Bluetooth to the phone, or via a USB dongle plugged into your monitor, to control it.)
An interesting thing is that you can run apps X and Y on the desktop screen while also running app X on the mobile screen independently.
No, only the 85% or so of it that's accreted since about 2008. Prior to that it actually made money by offering useful search results without infringing on user privacy. That core business model could still work to power a company about 1/8th the size of current Google. Current Google cannot survive on that model. Something went really wrong when it put growthism above all else.
> Something went really wrong
What went wrong was Google (the old 'do no evil' Google) bought the ad network DoubleClick. The acquired DoubleClick side then took over old Google from the inside out such that what we have today is Doubleclick calling itself "google", no more 'do no evil' old Google anywhere, and all the evil that exists on the advertising side infesting everything they do.
I don't want a "PC future" where you can't just install software without OS vendor blessing.
This is why Valve invested so much in Linux. They saw the writing on the wall of Microsoft becoming Apple (but shittier). Now they have an alternative. If Microsoft charges a 30% tax on all Steam transactions and won't let Steam run unless they do that, Valve can heavily push Linux and Steam Machine sales.
> Microsoft becoming Apple (but shittier)
At least Microsoft hasn't fallen so low as to fail basic design principles like transparent-on-transparent buttons, controls that disappear depending on window size (scrollbars), or corners so rounded that the click-to-drag area mostly falls outside the actual window.
The Windows 11 UI is annoying, but at least it doesn't look like a kid's toy.
> At least Microsoft haven't fallen so low as to fail basic design principles like having transparent on top of transparent buttons
That's just because Microsoft has been there done that already 2 decades ago ;) (IIRC in Windows Vista).
Same with the fine-grained in-your-face permission popups. Introduced by Microsoft in Vista, copied by Apple in Mojave ;)
Apple's bad ideas look ugly. Microsoft's bad ideas lock you out of your computer, delete your files and give the undeleted files to the FBI.
Having a mandatory sign-in prompt when opening Notepad and two context menus is way worse than anything Apple did in Tahoe.
> At least Microsoft haven't fallen so low as to fail basic design principles like having transparent on top of transparent buttons,
They did that but made it work well all the way back with Windows 7, maybe even Vista.
And yet they failed to get game devs to natively target SteamOS.
As long as they depend on Proton, they haven't fully solved their problem.
I'm not sure how they could have failed that if that was never their goal in the first place. The entire point of Proton is that the Win32 API is infinitely more stable and worthwhile to target than anything Linux distros offer, and that the financial incentives aren't there for developers to 5x their platform distribution effort to reach 1% more users. An approach that relies on developers doing that would never work, and fortunately for Valve that isn't their approach.
What's the purpose of a native build if the Windows build runs just as well, or even better?
They ensured that the devs need not worry about another build target that requires extensive QA. Maybe in the distant future we will get ubiquitous native builds, but honestly and again, who cares?
Proton and Wine mean there is a single target now, instead of the fragmented mess that is desktop Linux today.
Tbh, why bother?
kernel32+user32+gdi32+d3d[11|12]+dxgi is a pretty great API abstraction for game development. And unlike Linux desktop APIs the Win32 APIs are actually stable, so those games will also work in 5 years, and most importantly, performance is the same or better than on Windows. It's unlikely that game devs directly targeting Vulkan would do any better, and when using a high level engine, any layering overhead in Proton is negligible anyway. And don't even get me started about the state of audio APIs on Linux ;)
Also don't underestimate the amount of workarounds and tweaks that (most likely) go into Proton for games that make poor system API use. Without Proton those game-specific hacks would need to go into MESA, Wayland, X11 or various system audio libraries. At least Proton is one central place to accumulate all the game-specific warts in some dusty corner of their code base.
TL;DR: just think of Proton as an extremely low level and slim cross-platform API for games (not all that different than SDL), and suddenly it makes a lot of sense. And I bet that in 5..10 years Windows will have regressed so much that it might actually be better to run games through a Proton-like shim even on Windows (assuming Windows hasn't become 'yet another Linux distro' by then anyway) ;)
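For a sense of what "Proton as a slim API layer" looks like in practice: the Steam client sets up a per-game Wine prefix and a couple of environment variables, then hands the unmodified Windows binary to Proton's launcher script. A rough sketch of doing that by hand (all paths and the app ID here are hypothetical; normally Steam does this for you):

```shell
# Hypothetical paths and app ID; Steam normally sets these up itself.
export STEAM_COMPAT_CLIENT_INSTALL_PATH="$HOME/.steam/steam"

# Per-game Wine prefix ("compatdata"), keyed by the Steam app ID:
export STEAM_COMPAT_DATA_PATH="$HOME/.steam/steam/steamapps/compatdata/123456"

# Hand the unmodified Windows build to Proton's launcher script:
"$HOME/.steam/steam/steamapps/common/Proton - Experimental/proton" run game.exe
```

The point being: from the game's perspective nothing changed, it still sees kernel32, d3d11, and friends; the shim is entirely on the platform side.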
> As long as they depend on Proton, they haven't fully solved their problem.
Maybe not, but they fully solved my problem with games, which was that I could not play on Linux. I started playing again just because of the Steam Deck; I think it's a pretty big achievement :-).
Same, but my PC runs on Linux so I don't feel threatened.
I feel like at some point normies may end up just using iPadOS or Android as a "convergent" device: a tablet/phone that they can plug into a docking station and use as a computer.
I am sort of hoping that it will work with something like GrapheneOS, so that I will be able to benefit from it on my phone.
> my PC runs on Linux so I don't feel threatened.
Well, you should feel threatened. Where do you think the push towards TPM and secure boot is heading? Microsoft is insanely envious of how Apple and Google locked down their platforms and have total control over app stores, and that’s what Microsoft wants too. It’s a huge revenue stream they’re leaving on the table. Now that there’s precedent on mobile, they’ll have no problem pushing it through on desktop.
And once all the normies have moved to iPads, there won’t be a big enough market for anyone to manufacture PC hardware for hobbyists anymore.
Right, I guess we agree but I was not clear.
In general, I don't care so much if Windows or macOS become as locked as Android or iOS, as long as I can install Linux on my hardware.
My point is that many people seem to complain because they want to be root on the Google-certified Android. I disagree with that: Google makes an OS where you cannot be root. If you want an OS where you can be root, you should be able to install another OS on the hardware you bought. Because you should own that hardware. But you don't own Google.
Neither do I. But with Windows slipping badly, Google could start encroaching on their core tech.
Linux seems to be gaining a lot of traction, both with the fall of Windows and with gaming being more than feasible.
It makes sense for the tech-savvy option to succeed now that personal computing is disappearing. Average folks won't use a Windows PC or MacBook; they'll use phones and tablets.
My only concern is ending in a macOS+asahi situation where supporting a single device requires mountains of effort.
So just don't use Windows? The only reason I use Android to begin with is that the mobile-centric distros I looked into didn't appear to be at the point where I'd want to daily-drive them yet. If and when that changes, I'll switch.
The only real issue is sourcing good mobile hardware that isn't locked down. At least for the time being the pixel line satisfies that.
The visual design is just so bad. It's so ugly and soulless. I actually feel bad for the UI designer who had to put their name behind it.
Right off the bat:
1. The bumpy loading spinners pulled my eyes around the screen wildly, to the point of actual pain. They need to be removed.
2. There is a status bar along the top, and an application launcher docked at the bottom. That's way too much vertical space being used, especially on laptops (which I'd wager is going to be the most used viewing modality for this OS). It should be merged into a single bar along one side of the screen.
3. The dropdown at the top left of each window seems to be taking up a lot of space. Especially when you're trying to fit four tabs on the screen. Get rid of it so the tabs can actually be read.
4. At 27 seconds in, there are two close icons in the focused window. That being bad could be a presentation all in itself.
5. Random padding in the status bar.
MacOS up until Catalina
Windows 7 with the Shine 2.0 theme
Gnome Adwaita
QNX Photon 6
BeOS R5
I'm sure there are more good examples. People here will say Windows 2000, but I don't think it has any grace.
I'm not a visual designer, but here's what I think is making it look so bad. The taskbar is way too thick. The flat design of the icons needs some shadows or something to make them pop a bit. The corner rounding is way too much and needs to be dialed back significantly. There is both a top bar and a bottom bar; pick one, FFS, don't waste screen space with both. Also, the top bar is transparent while the bottom is fat and grey. I'd keep the top bar and scrap the bottom.
As far as OSes that look good: #1 KDE Plasma with Breeze, macOS Snow Leopard, Windows XP and Vista.
Eh, it's better. But it's still a mess unless you're using a device specifically designed for Linux like a Steam Deck or Framework. Expect to spend a lot of time messing around in the console if you install on an arbitrary laptop that came with Windows installed. Wifi problems, sleep problems, external monitor problems, laptop screen brightness problems, graphics card problems.
Does it still require wiping your drive and enabling developer mode to install software outside the Play Store like ChromeOS does? DOA if so.
Is it going to be the same future as Fuchsia OS? There were some good ideas in that one, but then one day it sort of disappeared. Not that that was surprising - Google is good at that.
Unless Google reorganizes and gets more focused, I'd say they are highly likely to repeat their mistakes.
IMHO both Apple and Google are missing a big opportunity here. Both are doing work to blur the lines between desktop and mobile. Both are targeting laptops, AR, phones, and tablets.
These are multiple modalities. Or they should be. But because of the way both companies are structured, these are isolated islands with some interoperability; the whole experience is very device-centric.
What's nicer is when you have multiple devices and a clean handover between them. You basically sign in and all your apps and data are there. All the open apps have the same state. They just adapt to the formfactor.
Apple has been taking baby steps here, but it's still hopelessly compartmentalizing the market, so switching between devices involves a lot of setup and install friction.
And for Google, they've been banging the drum that everything is cloud based since forever. Yet they can't figure out a cross device UX that makes sense. It should be as simple as sign in and all your stuff is there. That was the vision with ChromeOS at some point but then they lost interest, got distracted by Fuchsia, went off and created Flutter and also forgot that Android was the thing that actually has an enormous amount of users and OEMs shipping it.
The trillion dollar opportunity here: if devices become like shoes, many people probably have more than one. Some people have many pairs of shoes for different occasions. But they have only one phone. Because switching between devices is painful. Adding another OS to the mix just kicks that can down the road. Multi device, multi modal access to your stuff is the key thing that they should be nailing. If e.g. Apple were to nail that, some people might have many different devices in different sizes and form factors. The main decision as to which one to use would be based on which is most appropriate for the context.
If you take something like that as the starting point, the logical conclusion is that Google should evolve Android to run on any type of device and make sure that everything plays nice together. Switching between your Android phone(s), tablet, TVs, car, AR/VR goggles, or laptops should not be hard. Devices running a version of Android exist in all those categories. But there's very little/no integration across these.
I don't think anybody has solved the problem of the mobile/desktop split so far.
Microsoft's Surface Pro line barely made any difference -- nobody buys it to use it as a tablet, and generally the touch experience is just bad if you have ever used a real tablet.
Apple pretends to try to market the iPad as your next computer, but we all know how that works out. (They also have the thing that allows iPhone apps to run natively on macOS, but that has gotten near-zero traction.)
Samsung tried as well, half-heartedly, and I can confidently say a Samsung phone in DeX mode doesn't work as well as a PC.
So now it's Google. I don't think they can come up with some magic solution to change this.
What is Apple doing to blur the line between desktop and mobile? Is it the abomination they call iPadOS? It's a joke, it's nothing more than iOS+.
They've made it perfectly clear that they want to keep desktop and mobile separate in order to convince their customers to buy all their devices.
macOS can now run a lot of iOS applications. They have iCloud to sync a bunch of things, but only for their own applications. But maybe it's clearer if we limit the discussion to just phones. The key question: why do people only buy one iPhone? Is it because it's perfect for every situation? Or because multiple iPhones can't play nice together?
Switching from one to the other is a PITA. What if it were seamless? You just pick one up and now everything is there. You might have a phone for work, going out, hiking, etc. You might have phones in different styles. Apple could be selling you many phones. Losing a phone would be a lot less dramatic because you'd have multiple spares. Do they all have to have the same screen size? Do all of them even need a screen?
Apple sells lots of phones. They could be selling 2-3x more. That's just phones. The way they positioned their VR device and watch is kind of telling. They are both kind of satellite devices. But apparently that requires special apps and a new OS (for both).
I don't get your point. Are you saying that Apple can sell more specialized iPhones?
But that's exactly the problem with modern Apple: the software is shit. It's limited on purpose, so that users are forced to buy more Apple devices. So no, they don't need to sell more iPhones; they need to make their software better, so it won't be like a cult where you need to keep buying a bunch of different devices. But then they wouldn't get easy money from the followers, so they will never do this.
For me the best solution would be separate environments with completely different UIs, but running on the same device (probably in a phone form factor).
Apple's in the best position to offer this because they have both mobile and desktop OSes. And their chips are already capable of having two OSes installed side by side with a strong security barrier (and are also more than fast enough to run a full desktop OS). But alas, they haven't attempted it yet.
This effort is the abandonment. This is ChromeOS being shut down; this is the one taking over the many.
It was an experiment to keep bright engineers busy with cool ideas to show off. Even back then they could have known that a tectonic platform switch without much of a business argument for it was not a viable idea.
I would love for a microkernel, capabilities OS to become standard. Windows and Linux have been bolting on security layers for decades because the modern threat model was barely a concern when they were originally created.
Google actually has the sway that it could drag the industry in that direction.
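The capability idea can be sketched in a few lines: a component holds explicit handles to the resources it may touch, rather than having ambient authority over the whole system, and a stronger capability can be attenuated into a weaker one before being passed on. A toy illustration in Python (not any real OS API, just the object-capability pattern):

```python
class FileCap:
    """Full capability: holds the data and grants both read and write."""
    def __init__(self, data):
        self._data = data

    def read(self):
        return self._data

    def write(self, data):
        self._data = data


class ReadCap:
    """Attenuated capability: wraps a FileCap but exposes only read()."""
    def __init__(self, cap):
        self._cap = cap

    def read(self):
        return self._cap.read()


secret = FileCap("hunter2")
viewer = ReadCap(secret)              # grant a component read access only
assert viewer.read() == "hunter2"     # reading still works through the wrapper
assert not hasattr(viewer, "write")   # no write authority exists to invoke
```

In a capability OS the kernel enforces this at the handle level, so a compromised component can only misuse what it was explicitly handed.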
Cookie banner doesn't have reject-all button
Isn't that against the rules nowadays?
It is interesting to consider the different developments happening with the big mobile orgs regarding the convergence computing paradigm:
- Samsung's DeX has been out for a while
- independent devs have been working on Linux "as an app" for some time
- the Android desktop interface in this article
- Apple developing video output on iPhones
- Apple working on a MacBook with a mobile chip
- another exciting thing is XR devices and mobile computing
- my concern is convergence computing will reduce the importance of desktop interfaces and the freedom we have to install whatever applications we want
> my concern is convergence computing will reduce the importance of desktop interfaces and the freedom we have to install whatever applications we want
Yep, it absolutely will I expect. All the pieces are being or have been laid to build the new world where only a "trusted" device will be able to use the internet. Us nerds can still have our Linux, but it won't work with much of the internet because we won't be able to pass attestation.
Building to that future is exactly what I would expect from Apple, but Google doing so has surprised me. Google doing so is also the thing that will bring it to pass, so there's a special seed of hatred for them germinating in my heart right now. Hopefully I'm just being alarmist and paranoid, but I really don't think I am.
Some Refs:
Web Environment Integrity: https://en.wikipedia.org/wiki/Web_Environment_Integrity
Private Access Token: https://developer.apple.com/news/?id=huqjyh7k
I think tech companies are realizing that the biggest "mistake" they have ever made was giving so much freedom to the desktop user. They hate that we can look into, modify, and delete files, hate that we can add custom-made software, and hate that we can identify and turn off tracking/telemetry. They realized this with the mobile platform and locked everything down, but by that time it was already too late.
Authoritarian governments (that is, what unfortunately all governments want to be) also love this, since if a few big companies control all computing, they can regulate them to control the public.
Fortunately, there are many computers already in the public's hands (which they can use to perform any computation without government restrictions and without paying or sending data to a company); but so many people are switching to mobile platforms (and kids start out on these platforms) that I'm worried about the future.
If this trend continues, then self-hosting may become the final bastion of hobbyist FOSS.
I am somewhat hopeful that local AI will save us. It will be fairly easy to automate interacting with normie devices and services in the near future. It's not impossible to prevent it, but that will probably be annoying enough to the normies for them to reconsider. I see a future where the select few will still be able to use their free devices to operate the nonfree ones remotely, while incrementally taking back control with things like self-hosted tools.
I used to look up to Google and Googlers, but that was a big mistake on my part, because it only made the subsequent disappointment all the more hurtful. All the product killing, the services/API lockdown, and the disrepair that have been their modus operandi over the last decade made them into just another corporate software company.
> Google doing so has surprised me

Google is absolutely interested in this because more and more people are installing ad blockers, and since their main game is advertising, they can't allow that. The older the retired Google elites become and the less filtered their language gets, the more you can peer into the minds and decisions that led up to now. Just look at what Eric Schmidt has been doing and saying.
>my concern is convergence computing will reduce the importance of desktop interfaces and the freedom we have to install whatever applications we want
The final nail went into the coffin when a judge ruled Google a monopoly with Android a year or so ago.
You would think this is good but:
Apple was not found to be a monopoly with iOS. Why?
Because iOS doesn't allow any competitors, how can they be anti-competitive?
The judge explained this to Google when they raised the issue, and just like that, Android wants to become iOS.
Good fucking job judge. 10,000 IQ ruling.
The Chrome Extensions support is the interesting part here. That's often the dealbreaker for using mobile devices as computer replacements.
Google's had this weird situation where Android and ChromeOS overlap more every year. At some point maintaining two operating systems with converging feature sets seems wasteful.
My guess: ChromeOS probably survives for the education market where manageability matters more than capabilities. But for consumers? Android on a big screen with keyboard and mouse might just be good enough.
I'm running AdGuard in Chromium right now. I don't see any ads, even on YouTube. May I ask what did you mean?
Not that I don't think MV3 is limited, but... we're comparing this against MV2, right? MV2 was already missing basic functionality like full filtering of HTTP responses; I remember a bug about not seeing POST bodies being open for 10+ years.
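For context on what changed between the two: MV2's blocking `webRequest` API let an extension's own code inspect and cancel each request in flight, while MV3 replaces that with static `declarativeNetRequest` rules that the browser evaluates itself. That is faster and more private, but far less expressive. A sketch of a single MV3 block rule (the filter pattern is a made-up example):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

The extension can only declare patterns like this up front; it never sees the request, the response body, or (as the bug above notes was already true in MV2) the POST data.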
Windows keeps getting worse, but it's still not as bad as Android. It's like how an iPad with macOS would be great, but a MacBook with iOS would be terrible.
I've been using Fedora part-time and I'm quite happy with it. Although I access the Android ecosystem via Waydroid—which, I know, isn't a true 'Android desktop'—the reality is that few Android apps are actually designed for a desktop experience.
Android makes me feel like it wants to convince me that rooting is considered too dangerous even for advanced users—you have to be a developer to handle it, or you will kill yourself.
But it might be good news for the ARM platform. Given how fragmented ARM is, Windows on ARM has turned out to be an even bigger disaster, making a Linux-based approach a much more promising alternative.
It is relatively easy to make a secure OS. What is terribly expensive in developer labor is a secure OS that runs a mainstream browser well. Android is an open-source OS that runs a mainstream browser well and is about 100 times more secure than any other open-source Linux distro -- except ChromiumOS. But for some reason, no vibrant open-source project ever formed around ChromiumOS whereas a vibrant open-source project (namely GrapheneOS) has formed around the Android open-source project.
Oh, I see Google's angle now. They want to make Android a viable desktop OS in order to have more users on Android Chrome rather than Windows Chrome, because the former lacks extension support, and thus ad blockers. Of course, you can still install Brave or Kiwi Browser or Firefox to your heart's content, but most people won't. It's brilliantly simple. It's not too bad for power users, who'll probably use a different browser, or for developers, given the work they're putting into the Linux containers, but for most users... we'll see the expected result.
This shows a version of Chrome with extensions.
> The Google Chrome interface mostly aligns with the current large-screen Android version except for the Extensions button, which is currently only available on the desktop browser.
Wow, I don't know how I missed that. Guess that theory goes out the window!
They ought to put the status bar at the bottom. All the designers using Macs probably forgot, but Chrome's tab interface was designed for Windows where it could be all the way at the top of the screen. And in general it's more common for desktop apps designed for mouse and keyboard to have frequently accessed UI elements at the top of the window than the bottom. So desktop apps would benefit from being able to use that real estate at the very top of the screen.
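The screen-edge argument here can be made concrete with Fitts's law. In the common Shannon formulation, movement time grows with log2(D/W + 1); a target flush with the screen edge is effectively enormous in the movement direction because the cursor stops there and cannot overshoot. A toy calculation (the `a` and `b` constants are illustrative placeholders, not measured values):

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    # Fitts's law, Shannon formulation: MT = a + b * log2(D/W + 1).
    # a and b are device/user-specific constants; these are placeholders.
    return a + b * math.log2(distance / width + 1)

# A 30 px tab strip floating mid-screen vs. the same strip flush with the
# screen edge, where the cursor cannot overshoot (huge effective width):
mid_screen  = movement_time(distance=800, width=30)
screen_edge = movement_time(distance=800, width=800)
assert screen_edge < mid_screen  # edge targets are faster to acquire
```

Which is exactly why UI pushed to the very top (or bottom) of the screen is cheap to hit, and why wasting that edge on a status bar is contentious.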
This is what you lose when you take a team developing a desktop OS and move it under a team doing a mobile OS.