Posted by robotnikman 4 days ago
Why does refresh rate have such a large impact on power consumption? I understand that the control electronics are 60x more active at 60 Hz than 1 Hz, but shouldn't the light emission itself be the dominant source of power consumption by far?
High-pixel-density displays have disproportionately higher refresh power: it isn't just proportional to the total number of pixels, because the column-line capacitances have to be driven again for every row of pixels written. This was an important concern as high pixel densities came along.
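Back-of-envelope of that scaling (all numbers below are made up for illustration, nothing from the article):

    # Rough CV^2*f estimate of column-driver power: every row written
    # re-charges every column line. All values are illustrative guesses.
    rows, cols = 1800, 2880 * 3      # 2880x1800 panel, 3 subpixels per column
    c_col = 50e-12                   # assumed ~50 pF per column line
    v_swing = 5.0                    # assumed drive voltage swing
    f = 60                           # refresh rate in Hz

    # Order-of-magnitude energy to charge one column line once: C * V^2
    e_line = c_col * v_swing ** 2
    # Each frame drives every column line once per row written
    p = e_line * cols * rows * f
    print(f"~{p:.2f} W at {f} Hz, ~{p / 60:.3f} W at 1 Hz")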
Displays need fast refreshing not just because pixels would lose charge, but because a refresh can be visible or result in flicker. Some pixel technologies require flipping polarity on each refresh, but the curves are not exactly symmetric between polarities, and this can also vary across the panel. A fast enough refresh hides the mismatch.
Put simply, this means each pixel can hold its state longer between refreshes, so the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while generating heat fighting the reactance and resistance of various conductors. Dropping the entire chain to 1Hz is a meaningful power saving.
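Rough numbers for the link piece alone (the pJ/bit figure and resolution are my assumptions):

    # Back-of-envelope link energy: bits per frame * energy per bit * rate.
    pixels = 2880 * 1800
    bits_per_pixel = 24
    pj_per_bit = 10e-12              # assumed ~10 pJ/bit for the whole link

    bits_per_frame = pixels * bits_per_pixel
    for hz in (60, 1):
        watts = bits_per_frame * pj_per_bit * hz
        print(f"{hz:>2} Hz: ~{watts * 1000:.1f} mW just moving pixels")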
[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
So, no, there is a meaningful difference in the nature of the circuits.
And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.
But the article is about an OLED display, so the pixels themselves are emitting light.
The article is about an LCD display, actually.
The connection between the GPU and the display has been run-length encoded (or better) for ages, since that reduces the energy used to send the next frame to the display controller. Maybe by "1Hz" they mean they also only send diffs between frames? That would be a bigger win than "1Hz" for most use cases.
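Toy sketch of what "send diffs" could look like; to be clear, this is not how any real display link encodes frames:

    # Run-length encode only the spans of a row that changed between frames.
    def changed_row_runs(prev_row, cur_row):
        runs, start = [], None
        for x, (a, b) in enumerate(zip(prev_row, cur_row)):
            if a != b and start is None:
                start = x
            elif a == b and start is not None:
                runs.append((start, x))   # [start, end) span of changed pixels
                start = None
        if start is not None:
            runs.append((start, len(cur_row)))
        return runs

    # A static frame produces no runs, so (in this toy model) nothing is sent.
    print(changed_row_runs([0, 0, 7, 7, 0], [0, 0, 9, 9, 0]))  # [(2, 4)]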
But, to answer your question, the light emission and computation of the frames (which can be skipped for idle screen regions, regardless of frame rate) should dwarf the transmission cost of sending the frame from the GPU to the panel.
The more I think about this, the less sense it makes. (The next step in my analysis would involve computing the wattage requirements of the CPU, GPU, and light emission, then comparing that to the Wh capacity of the laptop battery and the advertised battery life.)
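For what it's worth, that back-of-envelope looks like this (both numbers are guesses for a typical thin-and-light, not Dell's specs):

    # Implied average power from battery size and claimed battery life.
    battery_wh = 55.0                # assumed battery capacity in Wh
    claimed_hours = 27.0             # assumed claimed battery life

    avg_watts = battery_wh / claimed_hours
    print(f"Implied average draw: ~{avg_watts:.1f} W")  # ~2 W total budget

    # If the whole machine averages ~2 W, a 48% saving means the refresh
    # chain was burning on the order of 1 W, not crazy for a dense panel.
    print(f"48% of that: ~{avg_watts * 0.48:.2f} W")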
The LG press release states that it's LCD/TFT.
https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
> LG Display is also preparing to begin mass production of a 1Hz OLED panel incorporating the same technology in 2027.
And yet, it's the fundamental technology enabling always-on phone and smartwatch displays.
The intent of this is to reduce the time that the CPU, GPU, and display controller are in an active state (as well as small reductions in the power of components in between those stages).
It would make a lot of sense in situations where the average light-generating energy is substantially smaller:
Pretend you are a single pixel on a screen (laptop, TV) emitting photons into a large cone of steradians, of which a viewer's pupil subtends a tiny pencil ray; 99.99% of the light simply misses the observer's pupils. In this case the technology seems to offer few benefits, since the energy consumed by the link (generating a clock and transmitting data over wires) is dwarfed by the energy consumed generating all this light, most of which never reaches a human pupil!
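To put a number on "mostly misses" (pupil size, distance, and emission cone are all assumptions):

    # What fraction of a pixel's light lands in one pupil?
    import math

    pupil_d = 0.004        # assumed 4 mm pupil diameter
    dist = 0.5             # assumed 0.5 m viewing distance
    emit_sr = 2 * math.pi  # assume emission into a full hemisphere

    pupil_sr = math.pi * (pupil_d / 2) ** 2 / dist ** 2  # small-angle approx
    frac = pupil_sr / emit_sr
    print(f"~{frac:.1e} of emitted light hits the pupil")  # ~8e-6, <0.001%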
Now consider smart glasses and HUDs: the display designer knows the approximate position of the viewer's eyes, so the optical train can be designed so that a significantly larger fraction of the generated photons arrive at the retina. Indeed, XReal's (formerly NReal) line of smart glasses consumes about 0.5 W! In such a scenario the link's energy consumption becomes a sizable proportion of the total, so a low-energy state that still presents content but updates less frequently makes sense.
One would have expected smart glasses to already outcompete smartphones and laptops on battery life alone. Or, splitting the difference, one could bank half of the energy saved (doubling battery life) while spending the other half on more intensive computation (GPU, CPU, etc.).
The Apple Watch Series 5 (2019) has a refresh rate down to 1Hz.
M4 iPad Pro lacks always-on display despite OLED panel with variable refresh rate (2024):
https://9to5mac.com/2024/05/09/m4-ipad-pro-always-on-display...
Assuming the XPS has the same size battery, and this really reduces power consumption by 48%, I'd expect 16 hours real-world, 32 in benchmarks, and 48 in some workload Dell can cherry-pick.
On the plus side, the battery life couldn't possibly get worse with newer models.
It's how I got mine about 6-7 years back anyway; still works great (except the battery). Never let Windows get its claws into the machine in the first place.
Edit: to add, I realized over time that a longer-lasting battery just can't beat my older laptop experience: being able to swap in a spare battery and have a full charge at will (without soldering and all that 'ish). In that sense I feel the future is coming full circle back to modularity, swappability, and repairability, to the point that they're becoming my primary considerations for the next portable computer I'll need to acquire.
I checked 10 seconds ago. The only models I can order in my country with Linux are the Pro Max and a Precision workstation.
If I pretend to be located in the US, an XPS 13 from 2024 becomes available at $200 more than the Windows variant, and with no OLED option.
What a weird marketing strategy from Dell...
Apparently they stopped making the Developer Edition, which came with Ubuntu, in 2022-2023 (it was definitely $100-200 or so cheaper than the Windows version with the exact same hardware; I recall the Developer Edition OS discount very clearly).
Now the XPS line has fallen as well: apparently even the SSD gets soldered to the motherboard now, so it's no longer serviceable with basic tools once it starts failing. My old 2018-ish XPS has an M.2 slot and a battery that is relatively simple, by modern standards, to replace with some screwdrivers and careful handling (something I think is vital for a workhorse computer, as batteries 'decimate' in capacity within 2-3 years or so in my experience).
I don't even know what's left out there among the major makers... when I have to look again, maybe Framework. I've been hearing about them for a while now and they seem quite relevant to the discussion; haven't seen one in person yet, to be fair.
Less of a problem for iPhones, which are unlikely to sit in the same place for a week, plugged in and unused.
Yeah, sure, if you buy it as a toy it may not be used for much, lol. Check your consumerism.
Brightness, uniformity, colour accuracy, etc. It is hard, as we take more and more features for granted. There are also cost issues, which is why you only see them in smaller screens.
I'm not sure there's really anything new here? 1Hz might be lower. Adoption might not be great. But this might just be iteration on something that many folks haven't taken good advantage of until now. Perhaps there are significant display-tech advancements needed to get the Hz that low without significant G-Sync-style screen buffers to support it.
One factor that might be interesting: I don't know if there's partial refresh anywhere. Having something moving on the screen while everything else stays stable would be neat to optimize for. I often have a video going in part of the screen, but that doesn't mean the whole screen needs to redraw.
CRTs needed to be refreshed to keep the phosphors glowing. But all screens are now digital: why is there a refresh rate at all?
Can’t we memory-map the actual hardware bits behind each pixel and just draw directly (using PCIe or whatever)?
> Source: https://www.pcworld.com/article/3096432 [2026-03-23]
---
> HKC has announced a new laptop display panel that supports adaptive refresh across a 1 to 60Hz range, including a 1Hz mode for static content. HKC says the panel uses an Oxide (metal-oxide TFT) backplane and its low leakage characteristics to keep the image stable even at 1Hz.
> Source: https://videocardz.com/newz/hkc-reveals-1hz-to-60hz-adaptive... [2025-12-29]
---
> History is always changing behind us, and the past changes a little every time we retell it. ~ Hilary Mantel
Ok, that makes some amount of sense. The article claims this is an OLED display, and I haven't heard of significant power gains from low-refresh-rate OLED (since the LEDs have to be signaled to stay on regardless of refresh rate).
However, do TFTs really use as much power as the rest of the laptop combined?
They're claiming a 48% improvement, so the old TFT (without backlight) would have to be equivalent to backlight + Wi-Fi + Bluetooth + CPU + GPU + keyboard backlight + ...
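To make that concrete, a quick sanity check (the 5 W light-load total is my guess, not Dell's number):

    # If a 48% saving applies to whole-system power, then new = 0.52 * old,
    # so the refresh chain alone was ~48% of the old total.
    old_total = 5.0                    # W, assumed light-load system power
    refresh_chain = 0.48 * old_total   # implied by a 48% overall saving
    everything_else = old_total - refresh_chain
    print(f"{refresh_chain:.1f} W refresh vs "
          f"{everything_else:.1f} W everything else")  # nearly equal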
The ability to vary it seems valuable. Significant portions of a screen remain fairly static for long periods, but other sections need to change more often, which would defeat a low rate if it's a whole-screen, all-or-nothing affair.
For example:
- reading an article with intermittent scrolling
- typing with periodic breaks
Ideally, I would be able to bind a keyboard shortcut to the refresh-rate switch, so the software doesn't have to figure out that I'm on YouTube and actually want the higher refresh rate, or that I'm on a mostly-text page and want the rate back down to 1 Hz. If I could control that with a simple Fn+F11 combination or something, that would be the ideal situation.
Not that any laptop manufacturers are likely to see this comment... but you never know.
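For what it's worth, on Linux you can get partway there today with xrandr, limited to whatever rates the panel's EDID actually advertises (the output name, mode, and rates below are placeholders; check your own xrandr output first):

    # Toggle between two advertised refresh rates via xrandr.
    import subprocess

    OUTPUT, MODE = "eDP-1", "1920x1200"   # placeholders
    LOW, HIGH = "48", "120"               # placeholders

    def set_rate(rate: str) -> None:
        subprocess.run(["xrandr", "--output", OUTPUT,
                        "--mode", MODE, "--rate", rate], check=True)

    # Bind set_rate(LOW) / set_rate(HIGH) to hotkeys in your desktop
    # environment. 1 Hz itself isn't exposed to xrandr on today's panels.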
I don't think you could divide vertically though.
Don't think anyone has done this yet. You could be the first.
So it makes sense you could cut the refresh time down to a second to save power...
Although one wonders if it's worth it when the backlight uses far more power than the control electronics...
Saving battery is nice, but I'm not leaving Linux for that misery any time soon
Apple already uses similar tech in its phones and watches.
With conventional PSR, I think the goal is to power off the link between the system framebuffer and the display controller and potentially power down the system framebuffer and GPU too. This may not be beneficial unless it can be left off long enough, and there may be substantial latency to fire it all back up. You do it around sleep modes where you are expecting a good long pause.
Targeting 1 Hz sounds like actually planning to clock down the link and the system framebuffer so they can sustain low bandwidth in a more steady-state fashion. Presumably you also want to clock down any app and GPU work, to avoid rendering screens nobody will see. This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
When you have display persistence, you can imagine a very different architecture where you address screen regions and send update packets all the way to the screen. The screen in effect becomes a compositor. But then you may also want transactional boundaries, so do you end up wanting the screen's embedded buffers to also support double or triple buffering and a buffer-swap command? Or do you just want a sufficiently fast and coordinated "blank and refill" command that can send a whole screen update as a fast burst, and require the full buffer to be composited upstream of the display link?
This persistence and selective addressing is actually a special feature of the MIP screens embedded in watches, etc. They have a link mode to address and update a small rectangular area of the framebuffer embedded in the screen, sending a smaller packet of pixel data over the link rather than the whole screen's worth of pixels again. Properly supporting this, with real power-efficiency benefits, requires a different application and graphics-driver structure. I.e. you don't want to just set a smaller viewport and have the app continue rendering into off-screen areas; you want it to render only the smaller updated pixel area.
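A sketch of what such a partial update can look like on the wire, modeled loosely on Sharp's memory LCDs (the exact bytes are illustrative, not the real datasheet protocol):

    # MIP-style partial update: send only the rows that changed, each
    # tagged with its line address, instead of a full frame.
    def build_update(changed_rows):
        """changed_rows: dict of {line_number: row_pixel_bytes}"""
        packet = bytearray([0x80])        # "data update" mode bit (illustrative)
        for line, pixels in sorted(changed_rows.items()):
            packet.append(line)           # line address
            packet.extend(pixels)         # 1 bpp row data
            packet.append(0x00)           # per-line trailer
        packet.append(0x00)               # end-of-transfer trailer
        return bytes(packet)

    # Updating 2 rows of a 240-row screen sends ~2 rows of data, not 240.
    pkt = build_update({10: b"\xff" * 30, 11: b"\x0f" * 30})
    print(len(pkt), "bytes instead of", 240 * 31)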
I was under the impression that modern compositors operated on a callback basis where they send explicit requests for new frames only when they are needed.
A compositor could request new frames when it needs them to composite, in order to reduce its own buffering. But how does it know one is needed? Only in a case like window management, where it decided to "reveal" a previously hidden application output area. This is like the older "damage" signals that tell an X application to draw its content again.
But for power-saving, display-persistence scenarios, an application would be the one that knows it needs to update screen content. It isn't because of a compositor event demanding pixels, it is because something in the domain logic of the app decided its display area (or a small portion of it) needs to change.
In the middle, naive apps that were written assuming isochronous input/process/output event loops are never going to be power efficient in this regard. They keep re-drawing into a buffer whether the compositor needs it or not, and they keep re-drawing whether their display area is actually different or not. They are not structured around diffs between screen updates.
It takes a completely different app architecture and mindset to try to exploit the extreme efficiency realms here. Ideally, the app should be completely idle until an async event wakes it, causes it to change its internal state, and it determines that a very small screen output change should be conveyed back out to the display-side compositor. Ironically, it is the oldest display pipelines that worked this way with immediate-mode text or graphics drawing primitives, with some kind of targeted addressing mode to apply mutations to a persistent screen state model.
Think of a graphics desktop that only updates the seconds digits of an embedded clock every second, and the minutes digits every minute. And an open text messaging app that only adds newly typed characters to the screen, rather than constantly re-rendering an entire text display canvas. But if it re-flows the text and has to move existing characters around, it addresses a larger screen region to do so. All the other screen areas aren't just showing static imagery; there is no application CPU, GPU, framebuffer, or display-link activity burning energy to maintain that static state.
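As a sketch of that architecture (the damage() hook is hypothetical, standing in for whatever partial-update API the stack exposes):

    # Event-driven clock: wake once per second, redraw only the seconds
    # digits, and report exactly which rectangle changed.
    import time

    SECONDS_RECT = (280, 40, 32, 24)   # x, y, w, h of the seconds digits

    def damage(rect, pixels):
        """Stand-in for submitting one dirty rectangle downstream."""
        print(f"update {rect} ({len(pixels)} bytes)")

    def render_seconds(s: int) -> bytes:
        return bytes(32 * 24)          # placeholder glyph rendering

    while True:
        now = time.localtime()
        damage(SECONDS_RECT, render_seconds(now.tm_sec))
        time.sleep(1 - (time.time() % 1))  # idle until the next tick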
But the other side of things - the driver, compositor, etc. supporting arbitrarily low frequencies - seems like it's already (largely?) solved in the real world. To your responsiveness point, I guess you wouldn't want to use such a scheme without variable refresh rate, but that seems to be a standard feature in ~all new consumer electronics at this point. Redrawing the entire panel when you could have gotten away with only a small patch is unfortunate, but certainly not the end of the world.