That doesn’t say much about the architecture. It’s also really odd. I’m not denying that what you’re seeing is happening, just that it seems odd based on the setups I run with ZFS. My main server is in fact a shared machine that I use as a workstation and gaming machine alongside its server duties, all working in parallel. I used to have a mirror, then a 4-disk RAIDz, and now an 8-disk RAIDz2, with multiple applications constantly using the pool. I don’t notice any performance slowdowns on the desktop or in-game when IO goes high; the only time I notice anything is when something like multiple Plex transcoders hits the CPU hard.

Sequential performance is around 1.3 GB/s, which is limited by the data bus speeds (USB DAS boxes). Random performance is very good, although I don’t have any numbers off the top of my head. I’m using mostly shucked WD Elements disks and a couple of IronWolfs; no enterprise-grade disks on this system.
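If you want numbers we can actually compare between setups, a couple of fio runs along these lines are roughly what I’d reach for. This is just a sketch: the `/tank/bench` path and the sizes are placeholders for your own pool, and the file size should stay well above your RAM or the ARC will inflate the results.

```
# sequential read, 1M blocks; size should exceed RAM or ARC caching skews the result
fio --name=seq --directory=/tank/bench --rw=read --bs=1M \
    --size=64G --runtime=60 --time_based --group_reporting

# 4K random read; numjobs stands in for queue depth with buffered IO
fio --name=rand --directory=/tank/bench --rw=randread --bs=4k \
    --size=64G --numjobs=8 --runtime=60 --time_based --group_reporting
```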
I’m also not saying that you have to keep fucking around with it instead of going Btrfs; I’m simply adding another anecdote to the picture. If I had a serious problem like that and couldn’t figure it out, I’d be on LVMRAID+Ext4, which is what I used prior to ZFS.
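For reference, that stack is nothing exotic. A minimal sketch, assuming eight disks arranged as raid6 to roughly match the failure tolerance of a RAIDz2 (device names, sizes, and mount point are all placeholders):

```
# pool the disks into one volume group
pvcreate /dev/sd[b-i]
vgcreate vg0 /dev/sd[b-i]

# raid6 LV: 6 data stripes + 2 parity, tolerates two disk failures like RAIDz2
lvcreate --type raid6 --stripes 6 -L 10T -n data vg0

mkfs.ext4 /dev/vg0/data
mount /dev/vg0/data /mnt/data
```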
That is totally possible. In case you’re curious: I spent a month swapping boards and CPUs to fix a cursed issue on my main machine, unrelated to storage.