Hello! I am aware of a solution like optimus-manager, which restarts GDM/Desktop in order to switch GPUs. But what I’m curious about is if it is possible to run the Desktop on the dedicated GPU, and inside Desktop run all applications on the integrated GPU by default.
I can currently use the command prime-run <application> to launch <application> on the dedicated GPU, but the desktop itself runs on the integrated GPU by default.
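For reference, my understanding (an assumption on my part; check your own copy with `cat /usr/bin/prime-run`) is that prime-run from the Arch/Manjaro nvidia-prime package is just a tiny wrapper that sets the offload environment variables and execs its arguments:

```shell
#!/bin/bash
# Approximate contents of /usr/bin/prime-run (an assumption, not a
# verbatim copy): set the PRIME render-offload variables, then exec
# the command given on the command line.
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
__VK_LAYER_NV_optimus=NVIDIA_only exec "$@"
```

So prime-run <application> should be equivalent to setting those variables manually in front of the command.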
I like the buttery-smooth desktop animations (e.g. opening the Activities view or switching workspaces) when the desktop is running on the dedicated GPU. I can run the desktop on the dGPU by turning off “Hybrid Graphics” in my HP BIOS settings and then using only the 440xx Nvidia drivers. Everything is silky smooth in this mode.
So I’m curious if I can use Hybrid mode, but default the desktop to the dedicated GPU.
It sounds like PRIME Render Offload is what allows this, but I’m new to this area. The following output indicates offloading works:
$ glxinfo | grep vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center
$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor
server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation
Can the desktop be made to run with __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia? If so, does that automatically make any application started in the desktop also use nvidia?
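One naive approach I can imagine (an assumption on my part, not something I've verified the compositor honors) would be exporting the variables session-wide, e.g. from ~/.profile or /etc/environment, so every process started in the session inherits them:

```shell
# Hypothetical ~/.profile snippet (untested assumption): make NVIDIA
# the default GL vendor for everything launched in this session.
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
```

Though even if GNOME Shell itself honored those variables, I don't know whether individual applications could then still be pushed back onto the Intel chip.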
Let me disclaim up front that I can’t answer your question, as I run a single-GPU desktop system (with an Nvidia GT 710 card) and have no real experience with dual-GPU laptops beyond knowing that they exist. But it’s been 8 days, so in the interest of moving the discussion forward, treat the comments that follow as Socratic more than anything else. Hopefully by asking (poorly-informed) questions, I can nudge you toward a solution.
I would be surprised (a) if that worked, and (b) even if it did, if there was any real benefit to doing it that way.
My understanding of dual-GPU architectures is that the system is meant to rely on the integrated GPU for most of the lifting, conserving the (more powerful, performance-oriented, power-hungry and heat-generating) discrete GPU by leaving it in a dormant state until it’s needed.
But it doesn’t work the other way around, does it? Meaning, when the system as a whole is running on the discrete GPU, the integrated GPU doesn’t powersave or go offline, except perhaps when it’s disabled entirely — in which case, it wouldn’t be available for “offloading” of processes that you don’t want to run on the discrete GPU.
The “standby” arrangement is, to the best of what I’ve perceived/assumed about the design of those systems, unique to the discrete GPU. It doesn’t go both ways, so the two GPUs are not equal partners in a way that would make it possible to “invert” their relationship like that. When the system is running on the discrete GPU, that chip can’t be taken offline to conserve power, because it’s running the system. And unless the integrated chip has been bypassed entirely (or maybe not even then), it would also be online at all times.
Or have there been changes on that front in recent years that place the dual GPUs on a more equal footing?
Beyond whether that setup is possible, I’m curious about the differences in your experience between when you’re running on the integrated GPU vs. the discrete. If the discrete chip is an Nvidia GPU using the nvidia proprietary drivers, then in that mode at least, you (like me) must be running an X11 desktop session, since the nvidia drivers still (AFAIK) don’t support Wayland.
But when you’re running on the integrated GPU, is it still an X11 desktop, or does your session switch to Wayland in that mode? Perhaps that switch, along with (presumably) the need to start an embedded X11 session in order to run PRIME processes, is part of the reason for the poorer desktop performance you’ve perceived from the Intel chip?
Or, if not, have you tried running a Wayland session on the integrated GPU, to compare the performance? The impression I’ve gotten is that Wayland is much better optimized for Intel chips than X11, at this point. (Is it possible to use PRIME under a Wayland session, with the nvidia drivers? With nouveau? We’re in areas now where I readily admit I’m clueless.)
That’s the main reason I’m posting: the experience is unbearably slow and painful on the integrated Intel card, and smooth and delightful on the discrete Nvidia card.
Even if I ensure only the Intel driver is in use, so X11 is running solely on Intel, the performance is janky. I have old computers with only Intel integrated graphics that run Windows 10 animations (which are very comparable to Gnome’s) just fine, so there is definitely a problem with the Intel setup. I hope it isn’t a hardware problem! I should install Windows 10 on this same machine to verify; it has only ever had FreeDOS (which it shipped with) and now Linux.
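(For what it’s worth, the tweak I’ve seen suggested most often for janky X11-on-Intel desktops is enabling TearFree in the driver config. This is a hypothetical snippet that assumes the xf86-video-intel DDX driver is in use rather than modesetting, and I haven’t confirmed it helps in my case:)

```
# /etc/X11/xorg.conf.d/20-intel.conf (hypothetical; assumes the
# xf86-video-intel driver is in use)
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "TearFree" "true"
EndSection
```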
I will give that a go!
From what I hear, no: if PRIME is enabled, the Wayland option in Manjaro is a no-op and it always uses X11.
But yeah, regarding whether the desktop can work the other way around (on Nvidia with apps on Intel), I’m clueless too (total newb in this area).