Posted by david927 11/9/2025
Ask HN: What Are You Working On? (Nov 2025)
https://www.arthurcarabott.com/adc-2024/
As part of it I am building a code generator that produces shared type definitions in C++ and TypeScript (plus serialization, comparison, and cloning).
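To give an idea of the output, here is a rough sketch of what the TypeScript side of such generated code could look like for a hypothetical Note type (the type name, fields, and wire format are made up for illustration; the real generator's output will differ):

    // Hypothetical generated output for a schema type "Note"; the C++ side
    // would get a matching struct with equivalent helpers.
    export interface Note {
      id: number;
      title: string;
      tags: string[];
    }

    // Serialization: plain JSON here, purely for the sketch.
    export function serializeNote(n: Note): string {
      return JSON.stringify(n);
    }

    export function deserializeNote(data: string): Note {
      const obj = JSON.parse(data) as Note;
      if (typeof obj.id !== "number" || typeof obj.title !== "string" || !Array.isArray(obj.tags)) {
        throw new Error("invalid Note payload");
      }
      return obj;
    }

    // Structural comparison.
    export function equalsNote(a: Note, b: Note): boolean {
      return (
        a.id === b.id &&
        a.title === b.title &&
        a.tags.length === b.tags.length &&
        a.tags.every((t, i) => t === b.tags[i])
      );
    }

    // Deep clone.
    export function cloneNote(n: Note): Note {
      return { id: n.id, title: n.title, tags: [...n.tags] };
    }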
https://demo.replays.lol/clipper (recording the demo video today).
The idea is that a generic video message doesn't appeal to a fan of a video game streamer; what would really be cool is watching the streamer react to your best moment in a game.
Our software removes all friction from the journey. The fan doesn't even need to record their own gameplay: we have bots that can load up someone else's gameplay from just their username, record their highlight for them, and upload it to our platform. The streamer then just needs to come in, watch a ~60 sec clip, give a genuine reaction, press 'submit', and it's all done.
There are a few markets I'm trying to find product-market fit in: ~1-2 minute coaching sessions, sports-commentator-style commentary over your clip from influencers, hyped-up reactions from your favorite streamer, and a community-focused stream segment of watching a compilation of your fans' best moments.
We're ready to launch; we're just struggling to find the first few people to sign up.
It's a modern, open source, self-hosted customer support desk.
Each participating team (300 signups so far) will get a set of text tasks and a set of simulated APIs to solve them with.
For instance, a task (a typical chatbot task) could say something like: “Schedule a 30m knowledge exchange next week between the most experienced Python expert in the company and the 3-5 people most interested in learning it.”
The AI agent has to work through this by using a set of simulated APIs and playing a bit of calendar Tetris (in this case: Calendar API, Email API, SkillWill API).
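To make the mechanics concrete, here is a minimal sketch in TypeScript of the kind of API-call sequence an agent might end up making for that example task. Every base URL, route, and field below is a hypothetical stand-in; the real simulated Calendar, Email, and SkillWill APIs are the competition's own and will look different:

    // Sketch of one way to work the example task via the simulated APIs.
    // All endpoints, routes, and shapes here are hypothetical.
    const BASE = "https://sim.example/team-123/task-7";

    async function getJson<T>(path: string): Promise<T> {
      const res = await fetch(`${BASE}${path}`);
      if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
      return (await res.json()) as T;
    }

    interface SkillEntry { person: string; level: number; interest: number; }
    interface Slot { start: string; end: string; }

    async function scheduleKnowledgeExchange(): Promise<void> {
      // 1. Rank people by Python expertise and by learning interest (SkillWill API).
      const skills = await getJson<SkillEntry[]>("/skillwill/skills?skill=python");
      const expert = [...skills].sort((a, b) => b.level - a.level)[0];
      if (!expert) throw new Error("no Python skills found");
      const learners = skills
        .filter((s) => s.person !== expert.person)
        .sort((a, b) => b.interest - a.interest)
        .slice(0, 4);
      const attendees = [expert.person, ...learners.map((l) => l.person)];

      // 2. Find a free 30-minute slot next week for everyone (Calendar API).
      const slot = await getJson<Slot>(
        `/calendar/free-slots?duration=30&week=next&attendees=${encodeURIComponent(attendees.join(","))}`
      );

      // 3. Book the meeting and notify everyone (Calendar + Email APIs).
      await fetch(`${BASE}/calendar/events`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ title: "Python knowledge exchange", ...slot, attendees }),
      });
      await fetch(`${BASE}/email/send`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ to: attendees, subject: "Python knowledge exchange next week" }),
      });
    }

    scheduleKnowledgeExchange().catch(console.error);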
Since API instances are simulated and isolated (per team, per task), it becomes fairly easy to automatically check the correctness of each solution and rank different agents on a global leaderboard.
The agents' code stays external, but participants fill out and submit brief questionnaires about their architectures.
By benchmarking different agentic implementations on the same tasks, we get to see patterns in the performance, accuracy, and costs of various architectures.
The platform's codebase is written mostly in Go (to support thousands of concurrent simulations). I'm using coding agents (Claude Code and Codex) for exploration and easy coding tasks, but the core still has to be handcrafted.
Here is a screenshot of a test task: https://www.linkedin.com/posts/abdullin_ddd-ai-sgr-here-is-h...
Although… since I record all interactions, I could replay them all as if they were streamed.
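A replay like that can be as simple as re-emitting recorded events with their original pacing. A minimal sketch, assuming the recordings boil down to timestamped events (the Recorded shape and the sample data are made up):

    // Replay recorded interactions with their original pacing, as if they were live.
    // The Recorded type and the source of the events are assumptions for illustration.
    interface Recorded {
      atMs: number;    // timestamp relative to the start of the recording
      payload: string; // whatever was captured for this interaction
    }

    const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

    async function replayAsStream(
      events: Recorded[],
      emit: (payload: string) => void,
      speed = 1, // 2 = twice as fast, 0.5 = half speed
    ): Promise<void> {
      const start = Date.now();
      for (const ev of [...events].sort((a, b) => a.atMs - b.atMs)) {
        const wait = ev.atMs / speed - (Date.now() - start);
        if (wait > 0) await sleep(wait);
        emit(ev.payload);
      }
    }

    // Example: replay three recorded interactions to the console at double speed.
    replayAsStream(
      [
        { atMs: 0, payload: "hello" },
        { atMs: 800, payload: "first response chunk" },
        { atMs: 1500, payload: "second response chunk" },
      ],
      (p) => console.log(p),
      2,
    ).catch(console.error);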
And the host of bio libs required to do it: the sort of thing that is mature in Python, for example, but that I needed to build for Rust.
If you have any ideas or comments for improvement, feel free to reply anytime! (For reference, this service is designed for Korean users — I’m Korean myself.)