It's essentially a poor man's hacked up DynamicLand - projector, camera, live agent. There are so many things you could do if you had a strong working baseline for this. My kids used it to create stories, learn how to draw various things, and watch safe videos they could hold in their hands.
There's something weirdly compelling and delightfully physical about holding a piece of paper that shows a live rocket launch, with the flames streaming down the page. It could also project targeted pieces of text, such as inline homework advice, or graphs next to data. It doesn't take long to imagine any number of other fun use cases, and it feels a lot more freeing and inspiring than keeping everything bound to a screen.
Github - https://github.com/Pugio/Orly (hacky minimal prototype that did the thing)
Video Pitch - https://youtu.be/-9l1x7GnmxU (filmed an hour before the deadline on an old phone with no sleep)
https://www.theverge.com/2022/10/20/23415167/amazon-glow-sup...
If you don't mind me asking, what hardware did you use? Especially for the projector: I'm guessing it needs quite a strong bulb to be visible in broad daylight?
The Folk Computer people have some incredible work they've been doing too, that's definitely worth looking at for anyone interested. Their integration of a novel display technology is really sweet too, allowing for good visibility in a variety of conditions, which I love. https://folkcomputer.substack.com/ https://folk.computer/ https://news.ycombinator.com/item?id=39241472 (165 points, 2 years ago, 53 comments)
I asked at some point if I could theoretically develop an application that could literally be controlled by a Fisher-Price toy, like a little plastic car console or something. Or even potentially have a real keyboard that isn’t connected to anything, but the Vision Pro can just see my keypresses and apply them as if I was actually pressing something. The former case is possible, but surprisingly difficult; the latter case isn’t really there yet (it requires too much precision, and the latency is worse than just using a Bluetooth keyboard).
Either way, the idea of a computing environment that meshes with and directly interacts with the real, physical objects around you is an interesting premise I’d like to see taken further with “Spatial Computing”/AR. Scanning and recording things I’m writing on a whiteboard or in a notebook by recognizing that I’ve picked up a pen and am writing something down would just be getting started.
Of course, if we’re ambiently recording everything you’re doing there will need to be some kind of regular process/interface to “sift” everything at the end of the day. This is the core of the Getting Things Done methodology. Everything goes into a big “intake list” and then you do periodic check-ins throughout the day where you review the list and decide whether to move those to a series of sub-lists to “do this now,” “do this soon,” or “do this someday.”
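That sifting step is simple enough to sketch. Here's a toy version in Python, where the hypothetical `classify` callback stands in for the human (or agent) decision at review time:

```python
def sift(intake, classify):
    """Empty the intake list into now/soon/someday buckets.

    classify(item) must return one of "now", "soon", or "someday";
    it models the periodic human review in the GTD workflow.
    """
    buckets = {"now": [], "soon": [], "someday": []}
    while intake:
        item = intake.pop(0)  # drain the big intake list, oldest first
        buckets[classify(item)].append(item)
    return buckets
```

For example, `sift(["file taxes", "read novel"], lambda i: "now" if "taxes" in i else "someday")` drains the intake list and returns the three sub-lists; the real work, of course, is in whatever plays the role of `classify`.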
Edit: you've unfortunately been breaking the site guidelines badly and frequently. Examples (among many others):
https://news.ycombinator.com/item?id=47706755
https://news.ycombinator.com/item?id=47603599
https://news.ycombinator.com/item?id=47476320
https://news.ycombinator.com/item?id=47068759
If you keep this up, we're going to have to ban you. I don't want to ban you, so if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
This reminds me of those predictions from 1900 about the year 2000, when they thought we'd all live in enormous skyscrapers and get around by flying cars. Instead we moved out to suburbs because improved logistics systems meant we could buy things from suburban shopping centres rather than having to go into city centres. Revolution, not evolution.
Surely the real advantage of an 'actually good AI' would be getting the AI to do the work itself, rather than just allowing the work to be done in a format with which the human is more comfortable. The underlying problem is that there are too many things vying for our attention.
It sounds like the author is on the same track and has the same mindset. And I like it.
I am also reminded of the Young Lady's Illustrated Primer from Neal Stephenson's Diamond Age. It is not exactly what the author describes but, if the book had a computer backend, it also divorces the user from the computer interface we have come to know. Perhaps for me some future (better) local LLM within such a book is what I want. A kind of companion I ask questions of…
(I mean I suppose I should just do what was posted a day or two ago to the Ask HN: and put a local LLM behind a messaging app, and I could just converse with it wherever I am. Tangent: I am kind of fascinated by the idea of a personal LLM that has context stretching back to my earliest days—were I to have started conversing with this synthetic companion at a young age. Imagine the lifetime of context where the LLM knows my habits, how I've changed over the years. I suppose this is nightmare fuel for a number of you.)
There are basically three versions of the book:
1) The ones developed for a few rich kids. These are partially automated, but backed by gig workers. They get what we might call (if you'll pardon the term) "Actually Indians" AI (augmented by the regular type).
2) The one our protagonist gets. This is one of the books from #1, but the distinctive feature here is that an early gig worker the protagonist draws (the book calls these "'ractors" when they're doing this kind of work) takes a special interest in her and intentionally keeps drawing jobs for her over a period of several years. This continuity and personal care by a single real person is what sets it apart and makes her experience so excellent.
3) The mass-market version that's entirely computerized, no human touch. This version brainwashes a fuckload of kids into becoming the "mouse army", and that's really all we see as far as what it can do: something really bad (if convenient for our protagonist).
The message of the book is 100% the opposite of "automated learning-books are amazing". It's "tech for learning sucks ass and/or is outright dangerous if you rely only on it, and a real human tutor who cares about a kid is the best thing around even in a crazy high-tech future-world".
What's the point? LLMs tend towards the mean/average --- I want better in my life and interactions --- it's useful when I need an example DXF or similar rote task, but my current project is a woodworking joint which has no precedent.
Yes, the skeuomorphism angle is an interesting one, and one which is surprisingly absent in the _ur_ description of a stylus equipped computing device, the slates/tablets from Larry Niven and Jerry Pournelle's _The Mote in God's Eye_ --- this sort of thing seems to be coming back around --- a recent Kindle Scribe firmware update added shape recognition. I'd be _very_ pleased if my new Kindle Scribe Colorsoft could fully become a replacement for my Newton....
Regardless, I have still found them useful. Diagnosing the problems with a car is maybe an esoteric example but is still useful.
For many months now I have been working through learning about and implementing a hobbyist analog computer with LLM as engineer-confidant. I already knew the basics of op-amps and analog computing but was surprised at a lot of the new things I discovered only by way of the LLM saying (for example), "Hey, here's a nice way to get your reference voltages…" and the project benefited from it (and I learned about a new chip/device/technique).
But it's only going to allow you to avail yourself of prior art/techniques.
Because it was a profit making venture for car companies. Suburbs are horrifically inefficient, they survive by the twisted "communism" of cannibalizing the dense urban tax bases to support the sprawling, expensive to service and maintain, isolating flatlands.
It was only later that the almighty combustion engine and tire companies forcibly replaced streetcars with buses and trucks, that cars began their hegemonic domination of suburbia. The National Highway System decrees didn't hurt, either, but highways were built in the USA with an ulterior motive of national defense.
Meanwhile, traffic and the stigma around drunk driving (which wasn’t nearly as strong or strictly enforced before the 90s), have quickly taken much of the bloom off the rose of car-dependent lifestyles. I predict the growth of micromobility options will continue to make cities even more attractive as well by improving coverage for areas where transit can’t go and generally improve the throughput of city streets and reduce the space needed for parking cars for people who live within “not-quite walking but feels silly to drive” distance.
The big gap in the US at least is simply a lack of cities! Everything is still concentrated in a handful of legacy urban centers that survived the waves of “urban renewal” and it’s simply too expensive to house all the people who want to live there without turning them into Hong Kong sized megalopolises, which starts to introduce new problems from overwhelming density. “Urban” development patterns need to expand out to more of the country to take demand pressure off the 5 or 6 American cities with decent mass transit.
They just want OpenClaw with printing and scanning privileges. Every morning OpenClaw prints out a task list or items that need action, the author writes notes/responses, and places it on the scanner. This is basically how my program director worked at my last job. Every morning the secretary would have his schedule printed out, he'd go to meetings and write notes, and would pass by his secretary and stick a note or two on her desk saying "set up a meeting with XYZ org/team within the next few days on ABC topic." The secretary would also print documents/presentations and he'd mark them up throughout the day with changes he wanted made, and he'd drop the documents off when he was done going through them, and the secretary would distribute the documents to their respective POCs to make the changes.
Basically the only thing the author hasn't mentioned that the secretary did is that the secretary also acted as a gatekeeper for access to the program director, either in real-time ("no, you can't go in, they are meeting with a higher level director") or would take a request for a meeting and have enough personal context on whether the director would want the meeting themself or want to see it go through a division chief first. Not sure if OpenClaw can do that, but just about everything else is totally do-able. Not sure if I really want to see someone wasting this much paper just to "feel analog" but I suppose it probably isn't a big deal since most people won't do it this way, and will stick to digital forms of communication with their OpenClaw secretary.
[0] https://www.youtube.com/watch?v=7wa3nm0qcfM [1] https://dynamicland.org/
UNIX Principle, anyone? Do one thing, and do it well. It seems like in this 'age of AI' the industry is rediscovering, by detour, decades-old best practices all over again.
But otherwise, having 'interfaces' printed out to you, and a multi-modal LLM later working from your notes on them, sounds really interesting and less stressful than modern 'computing'.
The Office's Michael Scott would be proud - Paper may just be the future of Digital after all!
Human picks up all the sheets out of the printer, writes out replies with pen
Human puts the stack of answered email sheets in a multi-page scanner
Scanner physically scans them, agent transcribes them and matches them back to the incoming emails via the unique ID on each sheet, sends replies
You could adjust this flow for anything where human input is just one part of a larger sequence: just add print -> write -> scan into your flow where you'd normally have a human type. It's kind of a rebirth of faxing
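The matching step in that loop is the only part with any subtlety: each printed sheet needs a unique ID so the scanned reply can be routed back to the right email thread. A minimal sketch in Python, with hypothetical `render_sheet`/`match_replies` helpers and OCR/printing treated as already solved (the "scanner output" here is just text):

```python
import uuid

REPLY_MARKER = "--- write your reply below ---"

def render_sheet(email):
    """Produce a printable sheet, tagged with a unique ID for matching."""
    sheet_id = uuid.uuid4().hex[:8]
    body = (
        f"ID: {sheet_id}\n"
        f"From: {email['from']}\n"
        f"Subject: {email['subject']}\n\n"
        f"{email['body']}\n\n"
        f"{REPLY_MARKER}\n"
    )
    return sheet_id, body

def match_replies(scanned_sheets, id_to_email):
    """Pair each transcribed sheet with its original email via the ID line."""
    replies = []
    for text in scanned_sheets:
        sheet_id = text.splitlines()[0].removeprefix("ID: ").strip()
        original = id_to_email.get(sheet_id)
        if original is not None:
            # Everything after the marker is the handwritten reply.
            handwriting = text.split(REPLY_MARKER, 1)[1].strip()
            replies.append({"to": original["from"], "reply": handwriting})
    return replies
```

In a real setup the ID would more likely be a QR code or barcode (far more robust to OCR errors than a text line), but the print -> write -> scan -> match shape is the same.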
When I showed her the reply button in Eudora (this was in 2001), she was so happy that she bought me a cake.
She struggled with IT but was tack sharp otherwise. So far she's the only boss I've ever really liked.
Before everyone just started using Docusign anyway, I'd bought houses with a phone "scanner". LOL.
I don't think I started with it, but for a very long time I've had an app called TinyScanner that's good-enough at edge detection, can de-noise or make a document entirely black & white, and can glue multiple pages together into a PDF. The results look better than plenty of flatbed scanner results I've seen, if not as good as the best of those.
Ditto with Forth dumping the memory and creating literal structures for numbers and whatnot. Also the 'see' command, besides dumping literal memory bytes.
Both being REPLs helps a lot. But Forth gets to a lower level than S9 itself.