Not AI.
My bad on the Apple part.
The Nvidia driver didn’t support some protocol that AMD/Intel did that was used by desktops for the night light.
Yes, they could have made the night light work. But why would they when Nvidia said the feature was coming soon? Well, it turned out that soon was taking a very long time, and eventually KDE actually did create a special night light implementation just for Nvidia. The problem was that it was a hack with extra overhead. And in the end the hack didn’t get shipped, because Nvidia finally started supporting the protocol.
Unfortunately not. There have been a number of things on Nvidia’s side that slowed down Wayland adoption.
They didn’t always support Xwayland hardware acceleration.
Nvidia pushed for a technology called EGLStreams while everyone else agreed on GBM. So the desktop stack had to support both. Nvidia eventually relented and started supporting GBM.
Nvidia didn’t support VRR or night light for a while.
Nvidia didn’t support necessary stuff for Gamescope to function properly.
And overall Nvidia on Wayland was just buggy. I remember that many games failed to launch or had weird performance issues. But those issues just went away when I got an AMD card.
But things are in a much better state today. Though I did recently test a 20 series card on Fedora 41 and it was a terrible experience on the proprietary drivers. But when speaking with others, they didn’t share my issues.
One of Wayland’s opinions is that the client is responsible for decorating its own window: it draws its own title bar, the shadow around the window, and the cursor.
Though not everybody was happy with this. A few protocols were created that let clients ask the compositor to draw the decorations around the window and the cursor instead.
But still, every app needs to support client-side decorations and cursors, because not all compositors support those protocols. Gnome notably doesn’t; they prefer client-side decorations.
Before Wayland, there was the X Window System, created in 1984. X was designed at a time when you had one good computer connected to multiple displays used by different people. X went through many versions, but version 11 (X11) stayed around for a long time.
But the architecture just isn’t good. It wasn’t designed for modern needs. Apple shipped X on macOS at one point, but built its own display stack for modern needs. Windows never used X, but Microsoft too kept updating its stack to fit modern needs. Linux and other OSs stuck with X for a lot longer, hacking it to make it work. Honestly, it’s amazing how well it does work.
But it’s not great. It wasn’t designed with security in mind, and it doesn’t do multi-monitor well. Behind the scenes, it considers everything to be one giant display; issues arise with mixed-DPI displays and when monitor refresh rates don’t match. It’s also just a bloated, old code base that people don’t want to work on. Fixing X would not only be difficult, it would break compatibility.
So people got to work on a modern replacement for X that aims to avoid its issues. Wayland is leaner, more opinionated, and designed for how modern hardware operates. Wayland itself is just a protocol (like X11), and there are many different implementations of that protocol: Mutter, Kwin, wlroots, smithay, Mir, Weston, etc. Meanwhile X11 pretty much only had one relevant implementation, Xorg. Wayland’s diversity has its pros and cons. Pros include (1) you can create your implementation in any programming language you want rather than being stuck with just one, and (2) an implementation can fill just the needs of the person making it rather than trying to generalize for everyone. But the cons include the fact that this fragmentation leads to scenarios where one implementation supports something that others don’t, as well as implementation-specific bugs.
Wayland’s opinionated design also draws criticism. It gives a lot of control to the compositor rather than to the windows themselves, which is the opposite of how Xorg, macOS, and Windows work. Nvidia’s Wayland adoption was also slow and terrible. It took many years to get it into the decent shape it’s in now.
Fedora Atomic, and by extension Universal Blue, does put the home in /var. It’s to denote that the directory is mutable.
This seems to be a systemd feature: system services can’t touch home directories by default.
https://unix.stackexchange.com/a/684074
I think a user script would still work. Or you could set the flag that would let system services access your home.
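If it is systemd’s sandboxing at fault, a drop-in override should relax it for just that service. A sketch, where the service name is a placeholder but `ProtectHome=` is the actual systemd directive:

```ini
# Hypothetical drop-in: /etc/systemd/system/example.service.d/override.conf
[Service]
# Units with ProtectHome=true (or read-only) can't write to /home.
# Setting it to false restores normal home access for this one service.
ProtectHome=false
```

Then run `systemctl daemon-reload` and restart the service for it to take effect.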
Is this a systemd user service?
Mint never preinstalled the snap. They package their own version of Firefox. I believe they have an agreement with Mozilla.
I use a resource monitor called Resources. It has a GPU tab that shows the GPU decoder usage go up when playing a video.
Weird, it’s been working for me for a while. I just need to manually set “media.ffmpeg.vaapi.enabled” to true in about:config.
Nate Graham is hesitant to increase the frequency. When the feature was announced, he repeatedly emphasized it was a once-a-year popup. He doesn’t want to go back on his word.
It’s a year’s worth of improvements.
Though if you’re using Proton Experimental, you’ve already been receiving these improvements since it uses the staging (or git?) branch.
I believe Proton stable releases use the stable version of wine with fixes backported.
The UI is not too complicated, which is why I like it. I use it to automatically unlock and mount my drives in /run/media/drive_name on boot.
I use Gnome Disks for this, even on Kinoite.
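I believe GNOME Disks just writes entries to /etc/crypttab and /etc/fstab for this, so you can do the same by hand. A sketch, where the UUID and keyfile path are placeholders:

```
# /etc/crypttab — unlock the LUKS partition as /dev/mapper/drive_name at boot
drive_name  UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /etc/luks-keys/drive_name.key  luks,nofail

# /etc/fstab — mount the unlocked device
/dev/mapper/drive_name  /run/media/drive_name  ext4  defaults,nofail  0  2
```

The nofail option keeps boot from hanging if the drive is disconnected.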
Yes
You can do this using Lutris.
Note that this isn’t a perfect sandbox. For example, the game can still send a link to your browser to open. Theoretically it could do something malicious with that. Though you could probably work around that issue by changing your default browser to a flatpak version and disable network access there. There might be other small sandbox breaks, but nothing I can think of.
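If you do want to try that browser workaround, flatpak overrides can cut network access per app. A sketch, assuming the Flathub Firefox ID:

```
# Revoke network access for the Firefox flatpak (user-level override)
flatpak override --user --unshare=network org.mozilla.firefox

# Undo all overrides for that app later
flatpak override --user --reset org.mozilla.firefox
```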
I used to always remove Fedora Flatpaks, but I’ve grown to like them.
They are built from Fedora RPMs, so they follow Fedora’s packaging and building guidelines. Meanwhile Flathub and snap are the wild west of packaging; many flatpaks/snaps are just repackagings of existing packages, which are often built against an ancient glibc and old libraries for broad compatibility.
They use libraries that are in Fedora’s repos. So any vendored dependencies in a Fedora Flatpak will get automatically updated once the app is rebuilt. Meanwhile on Flathub/snap, those vendored dependencies need to be manually updated (though there are tools/bots for Flathub that automatically check for updates and can even create merge requests). Upstream app developers may not upgrade their apps in a timely fashion.
I also much prefer how Fedora handles runtimes. I only have two Fedora runtimes on my system, Fedora Platform and Fedora KDE 6 Platform, which are both based on Fedora 41. Meanwhile on Flathub, I have 52 runtimes installed. Thankfully most of these are small, but there are still quite a few large ones: multiple versions of mesa, multiple versions of Qt, multiple versions of the Freedesktop runtime.
By far the biggest disadvantage is that they’re affected by Fedora’s copyright/patent restrictions. So I end up installing most multimedia apps from Flathub so I have working codecs. But there is some work being done that would allow Fedora Flatpaks to utilize ffmpeg-full from Flathub.
Unless the OS installer chooses to wipe the driver, which Debian’s (non-calamares) installer does.
I’m not going to deny that he can act aggressively, but his point is still valid. The anti-Rust sentiments of some maintainers have slowed down the upstreaming of Rust into the kernel. It doesn’t make sense to waste people’s time by letting R4L limp along in its current state.
R4L either needs to be given the go-ahead to get things upstreamed, to the dismay of some Linux maintainers who don’t like Rust, or R4L should be killed and removed from the kernel so we can stop wasting people’s time.
Personally, I think killing R4L would be a major mistake. Android’s Linux fork with Rust support has been a major success for Google and significantly cut down on vulnerabilities. And the drivers for Apple’s M chips have been surprisingly robust given how new they are and the fact that they were reverse engineered.