How does fractional scaling work in Wayland?

(I’m not sure whether to post this here or under Platform)

It is my understanding that under Wayland GNOME will oversample, that is, it will first produce output at a larger integer scale using the underlying toolkit and then interpolate that output down to the appropriate fractional scale. But I have some doubts:

  1. Is it really so?
  2. If the underlying toolkit directly supports fractional scaling (as, for example, Qt does), will GNOME leverage this, or make it render at an integer scale all the same?
  3. Can any version of GTK produce non-integer scaled output?
  4. What is the interpolation mechanism, if any?

My current beliefs are:

  1. Yes
  2. No
  3. No
  4. Linear with a fallback to nearest neighbour when the overall scale is integer.

If they need updating, would you please tell me?


Fractional scaling works by having the clients render at ceil(scale), where scale is the scale the monitor is configured to. E.g. if you configure your monitor to 150 %, that in fact means a scaling factor of 1.5, and clients will be told to render with a scaling factor of 2. What the compositor then does is take the oversized client buffer and paint it smaller on the target framebuffer.

The target framebuffer, i.e. the framebuffer that will be sent to the monitor, is not what is scaled, however; only client buffers are scaled, during composition. The same often applies to images that are part of the shell, as their sources tend to only be available at certain sizes, and we always pick e.g. an icon that is at least as large as the result will be on the final framebuffer, which often means scaling down during composition too. The same applies to text, IIRC.
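The buffer-size rule described above can be sketched in a few lines (a toy calculation of my own, not mutter code; the function name is made up for illustration):

import math

def client_buffer_size(monitor_w, monitor_h, scale):
    """Toy sketch: size of the buffer a client is asked to produce.

    Clients render at ceil(scale) times the logical size; the compositor
    then paints the oversized buffer down onto the target framebuffer.
    """
    logical_w = monitor_w / scale
    logical_h = monitor_h / scale
    render_scale = math.ceil(scale)
    return round(logical_w * render_scale), round(logical_h * render_scale)

# A UHD monitor at 150 %: clients are told to render at 2x, so a
# fullscreen client produces a 5120x2880 buffer.
print(client_buffer_size(3840, 2160, 1.5))  # (5120, 2880)
# At an integer scale such as 200 % no downscaling is needed.
print(client_buffer_size(3840, 2160, 2.0))  # (3840, 2160)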

So for 1., yes for client buffers and most likely icons in the shell etc., but not for painting in general, as we always paint to a framebuffer matching the monitor resolution.

For 2. - no, there is no fractional scaling support in the Wayland protocol.

For 3. - no, GTK does not support non-integer scaled output.

For 4. - IIRC we use linear when the source dimension doesn’t match the target dimension.


Thanks for the detailed answer. Much appreciated!

@jadahl some further questions if you don’t mind.

When using xrandr to achieve fractional scaling on Xorg, some scaling factors are extremely demanding. For example, in order to achieve 1.5x on my external UHD monitor, a framebuffer of 3840 * 2 / 1.5 = 5120 pixels in width and 2160 * 2 / 1.5 = 2880 in height is required. It's very noticeable, mainly as increased stuttering in the overview animation, when compared to 1.75x and, of course, 2x. I understand you need to produce more pixels if you want a smaller scale, because more stuff will fit on the screen.

Now, my questions: will this be the same under Wayland? If I set a 1.5 scaling factor, will most of my apps (suppose they are maximized or fullscreen) be rendering about 5000x3000 pixels? Why would one expect this to be faster / less demanding than my current xrandr workaround? I'm asking this because of a conversation I had in a PR, in which it was stated that all this won't have the same penalty on Wayland that it has now using the RandR protocol, but I'm failing to grasp how that can be so. Maybe there are other optimizations in place, but my confusion is about the specific fact that you still have to produce more pixels in order to downsample.

Thanks again.

A significant difference is that the framebuffer GNOME Shell itself composites onto has the same size as the monitor resolution, while in the Xrandr case it will be larger, and it's up to Xorg to apply whatever scaling transformation it can to resize the enlarged framebuffer to match the monitor resolution.
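For concreteness, here is a back-of-the-envelope comparison of the two composition targets (my own numbers, reusing the UHD-at-150 % example from earlier in the thread):

# Hypothetical comparison for a 3840x2160 monitor at a 1.5 scale.
monitor_w, monitor_h = 3840, 2160
scale = 1.5

# Xrandr workaround: everything is rendered at 2x into an enlarged
# framebuffer, which Xorg later shrinks to the monitor resolution.
xrandr_fb = (round(monitor_w * 2 / scale), round(monitor_h * 2 / scale))

# Wayland: GNOME Shell composites straight into a framebuffer that
# already matches the monitor resolution; only client buffers are larger.
wayland_fb = (monitor_w, monitor_h)

print(xrandr_fb)   # (5120, 2880)
print(wayland_fb)  # (3840, 2160)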

Ok, interesting point, I guess it’s not that helpful when I have a maximized browser window on top, but it might help in more “composition-intensive” scenarios.

Now, I promise, last one, regarding output quality. I've read the discussion [1] in which the "macOS way" of rendering twice as big and then downscaling seems to be preferred. This preference is supported by the MacBook example of 175 % scaling (so the raster size is increased by a mild factor of 2/1.75) on a Retina screen (pixel-dense enough to alleviate the degradation due to raster interpolation while downscaling). I'm not that convinced this argument is sound for today's average non-Apple laptop, which comes with an FHD screen for which a scale of 125-150 % is natural and which has < 200 dpi.

I've been reading your report on the fractional scaling hackfest and also Marco's related posts, but I was unable to find anything relevant to my concern. Have you compared the output of engines that support direct rendering at a fractional scale to the output produced by first (integer) upscaling and then downscaling the same input, at different resolutions (in particular, resolutions lower than "retina") and using different interpolation mechanisms? It's quite difficult to find information about this, because the downscaling literature starts from the original bitmap, not a larger one that is richer in detail, so it's not clear how the combined up-down procedure performs. Some people are hostile enough to the approach to call fractional scaling a lie [2].



Thinking about it, I'm not sure whether for things like font rendering there is a raster operation at the end (I mean, when downscaling), or whether this is automatically accounted for by the transformations already set at rendering time, so that fonts are accurately rendered at the final scale because both the integer scale factor and the final raster size (I mean, the physical size in screen pixels; I don't know the right term for this) are taken into account by the client itself.

More concretely, here is a not very interesting fragment of code that exemplifies both alternatives. It's not very interesting because it just upscales by 2 and then downscales by 2. But in one case this is all done directly on the output surface, while in the other case there are two surfaces, the first twice the size of the second, with a raster scaling operation between them. In both cases the client renders everything at 2x.

import cairo

width_px, height_px = 300, 100
font_size = 18
text = "Lorem Ipsum Lorem Ipsum"

def render(oversample, output_path):
    # Render everything at 2x. With oversample the surface is twice the
    # target size; without it the 1/2 scale cancels the 2x drawing.
    surface = cairo.ImageSurface(
        cairo.FORMAT_RGB24,
        (2 if oversample else 1) * width_px,
        (2 if oversample else 1) * height_px,
    )
    context = cairo.Context(surface)
    context.scale(
        1 / (1 if oversample else 2),
        1 / (1 if oversample else 2),
    )
    # White background.
    context.rectangle(0, 0, 2 * width_px, 2 * height_px)
    context.set_source_rgb(1, 1, 1)
    context.fill()
    # Black text drawn at 2x.
    context.set_source_rgb(0, 0, 0)
    context.select_font_face("Sans", cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_NORMAL)
    context.set_font_size(2 * font_size)
    context.move_to(2 * width_px * 0.1, 2 * height_px * 0.5)
    context.show_text(text)
    if oversample:
        # Raster downscale: paint the oversized surface onto one of the
        # target size, letting cairo filter during the copy.
        surface2 = cairo.ImageSurface(cairo.FORMAT_RGB24, width_px, height_px)
        context2 = cairo.Context(surface2)
        context2.scale(0.5, 0.5)
        context2.set_source_surface(surface, 0, 0)
        context2.paint()
        surface = surface2
    surface.write_to_png(output_path)

render(True, "scale_over.png")
render(False, "scale.png")

As you can see, the outputs are rather different (I don't know which scaling algorithm Cairo is using here; I assume it's just rounding to the nearest pixel):


The left one has oversample=True, the right one oversample=False. The right one, of course, is exactly the same as if no scaling had been applied for this trivial example, so it's the benchmark.

So which alternative better represents the facts, if any? I know it sounds as if I'm asking the same thing as at the beginning of this thread, but there is some ambiguity in saying "first upscaling, then downscaling" that I would like to eliminate. At first I was assuming that the downscaling would necessarily be distortive, but I'm not so sure this is the case. In my mind, clients were each rendering to a surface buffer and those surfaces were then transformed and composed into an output buffer, but maybe those intermediate buffers are "abstracted away", so there are no actual intermediate buffers, just the output buffer, and hence no raster downscaling at all (as seems to be the case with xrandr). Maybe all this is obvious and was implicit in your answers; sorry if that's the case, it's just a new mindset I have to grasp.

After reading through gdk and mutter code and also parts of the Wayland specification, I believe I can spare you the answer. It's pretty clear that the server can't preset any transformation in the surface or buffer the client renders to, whether it uses shm or drm to get it, so any downscaling must be done later by the server as a raster transformation from one buffer to another using some sort of filter. AFAICT cogl is configured to use a bilinear filter. I also found out that xrandr also uses bilinear, at least since version 1.5 (see [1]), with a fallback to nearest, so one shouldn't expect GNOME/Wayland to do better or worse in terms of output quality. Moreover, the left image in my previous comment was also filtered bilinearly; I checked that cairo was using that filter. I'm still concerned about the overhead and loss of quality for scaling factors near 1 (like 1.25). I've been experimenting with scaling up from 1x in those cases, but the results are pretty poor:
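To make the filter comparison concrete, here is a minimal pure-Python sketch of what a one-tap bilinear downscale does (my own illustration, not the cogl or xrandr code):

def bilinear_sample(img, x, y):
    """Bilinearly sample a grayscale image (list of rows) at (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def downscale(img, factor):
    """Downscale by `factor`, one bilinear tap per destination pixel."""
    h, w = int(len(img) / factor), int(len(img[0]) / factor)
    return [[bilinear_sample(img, x * factor, y * factor)
             for x in range(w)] for y in range(h)]

# Halfway between a black and a white pixel averages them:
print(bilinear_sample([[0, 255]], 0.5, 0))  # 127.5
# At an integer factor the taps land exactly on source pixels, so the
# filter degenerates to nearest neighbour:
print(downscale([[0, 255, 0, 255]] * 4, 2))  # [[0, 0], [0, 0]]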

All 125 %, from left to right: direct transformation; up 2, down 1.6, bilinear; up 2, down 1.6, nearest; up 1.25, bilinear. It's clear up-down bilinear is the winner here, but considering that scaling factors between 1 and 1.5 are to be expected for some years at least, it might be worth proposing an extension to the standard in order to leverage engines that can do fractional scaling (I'm not only thinking of Qt, but also of browsers and Electron).

[1] xrandr.c - xorg/app/xrandr - Primitive command line interface to RandR extension (mirrored from
