Posted by david927 6/29/2025
Ask HN: What Are You Working On? (June 2025)
Reading the Terms of Service on websites is a pain. Most users skip reading them and click accept. The risk is that they enter a legally binding contract with a corporation without any idea of what they are getting themselves into.
How it started: I read news about Disney blocking a wrongful death lawsuit, since the victim had agreed to an arbitration clause when they signed up for a Disney+ trial.
I started looking into available options for services that can mitigate this and found the amazing https://tosdr.org/en project.
That project relies on the work of volunteers who have been diligently reading the TOS and providing information in understandable terms.
Light bulb moment: LLMs are good at reading and summarizing text, so why not use them for this? That's when I started building tosreview.org. I am also submitting it to the bolt.new hackathon.
Existing features:
- Input for user-entered URLs or text
- Translation available in 30+ languages

Planned features:
- Chrome/Firefox extension
- Structured extraction of key information (arbitration enforced, jurisdiction, etc.)
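A minimal sketch of what the planned structured extraction might look like. The field names, the prompt, and `call_llm` are all assumptions for illustration — `call_llm` stands in for whatever model API the project actually uses. The idea is to ask the model for a fixed JSON schema and validate the reply before showing anything to the user.

```python
import json

# Illustrative schema; the real feature's fields may differ.
SCHEMA_FIELDS = {
    "arbitration_enforced": bool,
    "jurisdiction": str,
    "data_sold_to_third_parties": bool,
}

PROMPT_TEMPLATE = """You are reviewing a website's Terms of Service.
Return ONLY a JSON object with exactly these keys:
- arbitration_enforced: true/false
- jurisdiction: governing jurisdiction as a string, or "unknown"
- data_sold_to_third_parties: true/false

Terms of Service:
{tos_text}
"""

def parse_extraction(raw: str) -> dict:
    """Validate the model's reply against the expected schema."""
    data = json.loads(raw)
    for key, typ in SCHEMA_FIELDS.items():
        if key not in data or not isinstance(data[key], typ):
            raise ValueError(f"missing or mistyped field: {key}")
    return {k: data[k] for k in SCHEMA_FIELDS}

def extract_key_terms(tos_text: str, call_llm) -> dict:
    """call_llm: any str -> str function that queries a model."""
    reply = call_llm(PROMPT_TEMPLATE.format(tos_text=tos_text))
    return parse_extraction(reply)

# Demo with a canned model reply (no real API call):
fake_llm = lambda prompt: (
    '{"arbitration_enforced": true, "jurisdiction": "Florida",'
    ' "data_sold_to_third_parties": false}'
)
print(extract_key_terms("...sample terms text...", fake_llm))
```

Validating against a fixed schema keeps a malformed or hallucinated model reply from reaching the user.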
Let me know if you have any feedback.
How does your product do in the age of AI?
I could imagine this could be sold to a whatever-legal-tech company, or maybe to a compliance company or similar.
AI, and specifically the summarization capabilities of LLMs, is what made this product feasible.
This is still a side project with no plans for monetization. There is no moat (yet) that an internal team in a legal tech company couldn't replicate. There are still a few interesting problems on the roadmap that I am eager to work on. Then I'll let life take its course.
Another Moby-Dick of mine is Kadessh, the SSH server plugin for Caddy, formerly known as caddy-ssh. This one is an itch. I wrote about it here: https://www.caffeinatedwonders.com/2022/03/28/new-ssh-server..., and the repo is here: https://github.com/kadeessh/kadeessh. As with the other one, feedback and helping hands are sorely needed.
They are both obsessions and itches of mine, but between my day job and school, I barely get the clear mind to give them the attention they require.
The library of public domain classics is courtesy of Standard Ebooks. I publish a book every Saturday, and refine the EPUB parser and styler whenever they choke on a book. I’m currently putting the finishing touches to endnote rendering (pop-up or margin notes depending on screen width) so that next Saturday’s publication of “The Federalist Papers” does justice to the punctilious Publius.
Obligatory landing page for the paid product:
First we built it as a tool to fix any bug. After talking to a few folks, we realized that was too broad. From my own experience, I knew how messy it is within organizations to address accessibility issues: everybody scrambles around at the last minute, nobody loves the work (developers, PMs, TPMs, etc.), and external contractors or auditors are often involved.
Now, with Workback, we are hoping to solve these issues using an agentic loop.
If you have personally experienced this problem, I would love to chat and learn from your experience.
- Create your own PDF editor with a custom UI using the public methods exposed by the web component.
- You can add dynamic variables/data to templates. For example, you create one certificate template with name and date as variables, upload a CSV/JSON of names and dates, and it generates the PDFs for you.
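The variable-substitution step described above can be sketched like this. The `{{name}}`-style placeholder syntax is an assumption for illustration, and the final PDF rendering is omitted — this only shows the CSV-row-to-document merge.

```python
import csv
import io
import re

def fill_template(template: str, row: dict) -> str:
    """Replace {{variable}} placeholders with values from one data row."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(row.get(m.group(1), "")),
                  template)

def generate_documents(template: str, csv_text: str) -> list:
    """One filled document per CSV row; each would then be rendered to PDF."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [fill_template(template, row) for row in rows]

docs = generate_documents(
    "Certificate awarded to {{name}} on {{date}}.",
    "name,date\nAda,2025-06-01\nGrace,2025-06-08",
)
print(docs[0])  # -> Certificate awarded to Ada on 2025-06-01.
```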
- It's framework-agnostic. You can use this library in any front-end framework.
It's still in early development, and I would love to connect with people who have some use cases around it.
I have integrated this library in one of our projects, Formester. You can see the details here https://formester.com/features/pdf-editor/
I have posted this demo video for reference https://www.youtube.com/watch?v=jorWjTOMjfs
Note: Right now it has limited capabilities: only text and image elements can be added. I will be adding more features going forward.
While Cursor stops after writing great code, Vide goes the extra mile with full runtime integration: it makes sure the UI looks on point, works on all screen configurations, and behaves correctly. It does this by being deeply integrated with Flutter's tooling; it can take screenshots, place widgets on a Figma-like canvas, and even interact with everything in an isolated, reproducible environment.
I currently have a web version of the IDE live but I'm going to launch a full native desktop IDE very soon.
My value proposition is to make developers more productive by skipping the boring stuff, while FlutterFlow is more of an "all-in-one" app platform.
I've been thinking a lot about the current field of AI research and wondering whether we're asking the right questions. I've watched some videos from Yann LeCun where he highlights some key limitations of current approaches, but I haven't seen anyone discussing or specifying all the major pieces that are believed to be currently missing. In general I feel like there are tons of events and presentations about AI-related topics, but the questions are disappointingly shallow and entry-level, so you have all these key figures repeating the same basic talking points over and over to different audiences. Where is the deeper content? Are all the interesting conversations just happening behind closed doors inside companies and research centers?
Recently I was watching a presentation from John Carmack where he talks about what Keen is up to, but I was a bit frustrated with where he finished. One of the key insights he mentions is that we need to be training models in real-time environments that operate independently from the agent, and the agent needs to be able to adapt. It seems like some of the work that he's doing is operating at too low of an abstraction level or that it's missing some key component for the model to reflect on what it's doing, but then there's no exploration of what that thing might be. Although maybe a presentation is the wrong place for this kind of question.
I keep thinking that we're formulating a lot of incoherent questions or failing to clearly state what key questions we are looking to answer, across multiple domains and socially.
RAG and/or Fine-tuning is not the way.
Another topic is security, which could mean running models locally with Ollama + Proxmox, for example. But since emergent intelligence is still early, we would have to wait 2-3 years for ~8B-parameter local models to be as good as ChatGPT o3 pro or Claude Opus 4.
I do believe that we are close to discovering a new interface. What now presents itself through IDEs and the command line (terminal) is, I strongly believe, 1-2 years away from becoming a new kind of interface that is not meant only for developers.
One that feels like an IDE, works like a CLI, but is as intuitive as Chrome is for browsing the web.
Tail calls between different VM functions are the next challenge. I'm going to have it allocate the callee's frame in the same space as the caller's (if the target's frame is larger than the source's, "alloca" the difference). The arguments have to be smuggled out somehow while the frame is reinitialized in place.
I might have a prefix instruction called tail which immediately precedes a call, apply, gcall or gapply. The VM dispatch loop will terminate when it encounters tail, similarly to the end instructions. The caller will notice that a tail instruction has been executed, and then drop into the tail-call logic, which interprets the prefixed instruction in a special way. The calling instruction has to pull the argument values out of whatever registers it refers to; they have to survive the in-place reinitialization somehow.
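A minimal Python sketch of this scheme, under stated assumptions: the opcode names and tuple encoding are invented for illustration, and for brevity the tail logic runs inside the dispatch loop here, rather than after the loop terminates as the planned design describes.

```python
# Toy register VM illustrating a "tail" prefix instruction.
# Opcode names and the tuple encoding are invented for this sketch.
# Each function is (code, nregs); arguments arrive in r0..r(k-1).

def run(funcs, name, args):
    code, nregs = funcs[name]
    frame = list(args) + [0] * (nregs - len(args))
    pc = 0
    while True:
        ins = code[pc]
        pc += 1
        op = ins[0]
        if op == "const":            # ("const", dst, value)
            frame[ins[1]] = ins[2]
        elif op == "add":            # ("add", dst, a, b)
            frame[ins[1]] = frame[ins[2]] + frame[ins[3]]
        elif op == "jnz":            # ("jnz", reg, target)
            if frame[ins[1]] != 0:
                pc = ins[2]
        elif op == "call":           # ("call", dst, fn, argregs) - ordinary call
            frame[ins[1]] = run(funcs, ins[2], [frame[r] for r in ins[3]])
        elif op == "tail":           # ("tail",) prefixes the next call
            _, _, fn, argregs = code[pc]   # the prefixed call instruction
            # Smuggle the argument values out of the registers *before*
            # the frame is reinitialized in place.
            saved = [frame[r] for r in argregs]
            code, nregs = funcs[fn]
            if nregs > len(frame):
                # Target frame is larger: "alloca" the difference.
                frame.extend([0] * (nregs - len(frame)))
            frame[:len(saved)] = saved
            for i in range(len(saved), nregs):
                frame[i] = 0
            pc = 0                   # re-enter the loop in the reused frame
        elif op == "ret":            # ("ret", reg)
            return frame[ins[1]]

# sum_loop(acc, n): returns acc + n + (n-1) + ... + 1, via self tail calls.
FUNCS = {
    "sum_loop": ([
        ("jnz", 1, 2),                    # 0: if n != 0 goto 2
        ("ret", 0),                       # 1: return acc
        ("add", 0, 0, 1),                 # 2: acc += n
        ("const", 2, -1),                 # 3: r2 = -1
        ("add", 1, 1, 2),                 # 4: n -= 1
        ("tail",),                        # 5: prefix
        ("call", 0, "sum_loop", [0, 1]),  # 6: tail-call sum_loop(acc, n)
    ], 3),
}

print(run(FUNCS, "sum_loop", [0, 100000]))  # runs in constant frame space
```

In the planned design, the dispatch loop would instead terminate on `tail` and an outer caller would perform the same save-arguments-then-reinitialize dance before re-entering; the invariant is identical either way.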
Been a freelance dev for years, now beginning a "sabbatical" (love that word).
Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.
I'm excited.
In Singapore, the system is heavily academic. You're expected to follow a rigid path (PSLE → JC → Uni → job), but no one teaches you how to think about what kind of life you want to live—or how to create it. That leaves many people feeling lost, even if they’re “on track.”
This platform flips that. It starts with the big picture: *“When you’re 90, what do you want your life to have looked like?”*
From there, users create a personal timeline of milestones across life domains: health, relationships, learning, impact—and now, *financial freedom.*
The app helps users:
1. Set long-term visions, then break them into clear, visual milestones
2. Use an AI assistant to suggest weekly actions and recalibrate as life evolves
3. Voice journal instead of typing; the AI transcribes and flags patterns (“You mentioned burnout 5x this week. Want to add a rest week or revise your work goals?”)
4. Track basic finances and align spending/saving to long-term goals (“You want to take a year off at 30. At this pace, you’ll have the runway by 32. Want to adjust?”)
5. Get matched with mentors or peer circles for guidance and accountability
The goal is not to “optimize” life like a spreadsheet. It’s to help people reflect, take control, and become someone they’re proud of.
If you’ve worked on anything in this space—journaling, goal tracking, financial wellness, coaching—I’d love to learn:
A. What made your tool stick long-term?
B. How did you balance simplicity with depth?
C. Any design or product traps I should avoid?
Appreciate any thoughts, questions, or brutal feedback.