Also, can I hack the OS? I'm specifically interested in direct VR rendering (other headsets don't let you bypass the compositor).
Valve has put a focus on developer ease and very low software lockdown with their hardware in recent years, though, so I'd say the chances of direct rendering are quite good!
The best known is perhaps https://en.wikipedia.org/wiki/Project_Looking_Glass but there are many others, at varying stages of development. It will be interesting to see whether custom OS/UX development is made available for this device. We'd also need quite a bit of custom development to make the OS comprehensively usable with gamepad-like controllers alone (no mouse or keyboard required). The existing work on "10-foot" media-center interfaces can provide a useful starting point for this, but it's far from covering all possible uses.
That will help the hardware last longer, because non-removable lithium batteries suck.
Lower voltage, but a flatter discharge curve, so pretty much everything works with them.
Here's hoping it will be like the Deck and we get Frame OLED in a year or so.
Needs to run GNU Hurd for that.
Which, I can't help but wonder: what if this ran on the Frame, to aid in streaming from your desktop, in conjunction with Foveated Streaming...
What's even more bonkers to me is the idea that this could really be baked into the streaming, if you think about it.
Server generates 100 beautiful frames in one second.
The streaming hardware reminds the server that, with the current network conditions, it can only handle X bits per second.
The Frame says, "Hey, here's where the eyes were looking, here's where they are looking, here's where they're projected to look."
The Server knows how many actual frames it can send in a second.
But the Server can run the same Lossless Scaling & Lossless Frame Generation that the client will run.
The Server can then compare the predicted frames against the actual frames, and it can send the delta needed to correct the predictions. Especially in areas close to where the eyes are looking.
The Client consumes the stream, predicts frames, applies the delta, and ends up with better framerate and quality...
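To make that loop concrete, here's a toy Python sketch of the server side. Everything in it (the tile size, the stand-in predictor, the Gaussian gaze weighting, the raw-tile bit costing) is my own invention, not anything Valve has described; the idea is just to run the same predictor the client runs, score each tile's error weighted by gaze proximity, and spend the client's reported bit budget on the worst tiles first.

```python
# Hypothetical sketch of the "predict, diff, correct" server loop described
# above. None of this is Valve's protocol; it's just the idea in ~35 lines.
import numpy as np

TILE = 32  # send corrections as 32x32 tiles (arbitrary choice)

def foveation_weight(h, w, gaze_xy, sigma=0.25):
    """Weight each tile by proximity to the predicted gaze point,
    with gaze_xy in normalized [0, 1] screen coordinates."""
    ys, xs = np.mgrid[0:h:TILE, 0:w:TILE]
    centers = np.stack([(xs + TILE / 2) / w, (ys + TILE / 2) / h], axis=-1)
    d2 = ((centers - np.asarray(gaze_xy)) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def predict_next_frame(prev, prev2):
    """Stand-in for the shared frame-generation model: naive linear
    extrapolation. Client and server would run the same real model."""
    return np.clip(2 * prev.astype(np.int16) - prev2, 0, 255).astype(np.uint8)

def correction_tiles(actual, predicted, gaze_xy, bit_budget):
    """Pick the tiles whose foveation-weighted prediction error is worst,
    until the bit budget the client reported for this frame is spent."""
    h, w = actual.shape[:2]
    err = np.abs(actual.astype(np.int16) - predicted.astype(np.int16)).mean(axis=-1)
    per_tile = err.reshape(h // TILE, TILE, w // TILE, TILE).mean(axis=(1, 3))
    score = per_tile * foveation_weight(h, w, gaze_xy)
    order = np.column_stack(np.unravel_index(np.argsort(-score, axis=None),
                                             score.shape))
    deltas, spent = [], 0
    cost = TILE * TILE * 3 * 8  # naive: raw 8-bit RGB tile, no entropy coding
    for ty, tx in order:
        if spent + cost > bit_budget:
            break
        y, x = ty * TILE, tx * TILE
        delta = (actual[y:y + TILE, x:x + TILE].astype(np.int16)
                 - predicted[y:y + TILE, x:x + TILE])
        deltas.append((y, x, delta))
        spent += cost
    return deltas  # corrections end up densest where the eyes are looking
```

In practice the deltas would go through a video codec rather than raw tiles, but the gaze-weighted budgeting is the interesting part.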
Can you do all of this at low latency, too? I suspect so.
Add it all up, and this is pretty bonkers. I mean, it's kind of obvious that that's how streaming "should work," but I think I hadn't really put it all together in my mind yet. My mental model was too "off the shelf." It didn't account for how much work the client could do... and how the server could then pitch in to help the client... It's really a beautiful pairing, when you think about it.
What's really bonkers is that Steam could learn what final images are supposed to look like for a given game, and train an AI specifically for that game. Update the server and the client, and bam, everyone gets better quality, with zero developer interaction.
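As a sketch of what that per-game training might look like (hypothetical throughout: the model, the data capture, and the distribution mechanism are made up, with PyTorch used for illustration), you'd fine-tune a small upscaler on (downscaled, native) frame pairs captured from one game, then push the resulting weights to servers and clients:

```python
# Hypothetical per-game upscaler training. Nothing here is a real Steam API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """A deliberately small residual upscaler, since the client has to
    run it on every frame."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )

    def forward(self, x):
        # Bilinear base plus a learned residual, upsampled via pixel shuffle.
        base = F.interpolate(x, scale_factor=self.scale, mode="bilinear",
                             align_corners=False)
        return base + F.pixel_shuffle(self.body(x), self.scale)

def finetune_for_game(frame_pairs, steps=1000):
    """frame_pairs yields (low_res, native) image tensors of shape
    (1, 3, H, W) captured from a single game."""
    model = TinyUpscaler()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _, (lo, hi) in zip(range(steps), frame_pairs):
        loss = F.mse_loss(model(lo), hi)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()  # per-game weights Steam could push to clients
```

The per-game specialization is what would buy quality without developer involvement: the training data is just captured frames, so no game code has to change.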
That could be super easy for a developer to make, all in one process, with no networking requirement for the game logic...
Having my FoV dumbed down to 90° sounds like hell, especially in a game where we are looking for opponents.
Playing Doom on a widescreen monitor with the FoV modifications made it a lot less annoying. I want that even more today.
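For what it's worth, the math behind those Hor+ style FoV fixes is simple: hold the vertical FoV fixed and let the aspect ratio widen the horizontal one. A quick Python check with my own example numbers (nothing Doom-specific):

```python
import math

def hfov_from_vfov(vfov_deg, aspect):
    """Horizontal FoV implied by a vertical FoV at a given aspect ratio."""
    return math.degrees(2 * math.atan(math.tan(math.radians(vfov_deg) / 2) * aspect))

# 90 degrees horizontal at 4:3 corresponds to ~73.7 degrees vertical...
vfov = math.degrees(2 * math.atan(math.tan(math.radians(90) / 2) / (4 / 3)))
# ...which at 16:9 gives roughly 106.3 degrees horizontal.
print(hfov_from_vfov(vfov, 16 / 9))  # ~106.26
```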
I am a bit confused: you can see your shoulders while you are looking forward?
The only unusual thing is how much compute power my brain allows for peripheral vision. But it makes video games feel claustrophobic to an unpleasant degree.
I can just about see my shoulders when I look forward; I'd probably also describe my field of vision as "the plane of view defined by my shoulders".