Posted by david927 3/30/2025
Ask HN: What are you working on? (March 2025)
[Sequential thinking](https://skeet.build/docs/integrations/sequentialthinking) - it’s like enabling thinking but without the 2x cost
[Memory](https://skeet.build/docs/integrations/memory) - I use this for repo- and project-specific prompts and workflows
[Linear](https://skeet.build/docs/integrations/linear) - find an issue, create a branch and do a first pass, then update Linear with a comment on progress
[GitHub](https://skeet.build/docs/integrations/github) - create a PR with a summary of what I just did
[Slack](https://skeet.build/docs/integrations/slack) - send a post to my team's channel with the Linear issue and GitHub PR links and a summary for review
[Postgres](https://skeet.build/docs/integrations/postgres) / [Redis](https://skeet.build/docs/integrations/redis) - connect my staging DBs and pull the schema to create my models and typings. I also use it to write tests or run quick one-off queries to check the Redis JSON I just saved.
[Sentry](https://skeet.build/docs/integrations/sentry) - pull an issue and its events, fix it, and create bug tickets in Linear/Jira
[Figma](https://skeet.build/docs/integrations/figma) - take a design and implement it in Cursor by right-clicking and copying the link to a selection
[OpenSearch](https://skeet.build/docs/integrations/opensearch) - query error logs when I'm fixing a bug
Video Clip Library is essentially a small search engine for videos that runs locally on your laptop. Previously, the system just extracted information about whole video files and maintained a searchable list of those files.
I'm putting the finishing touches on a major architectural change to address a request from someone who creates highlight reels for professional soccer matches. They needed to tag and search for specific moments within videos - like all goals scored by a particular player across dozens of camera angles from dozens of matches.
This required re-engineering the entire data model to support metadata entries that point to specific segments of a file entity rather than just the file itself.
Instead of treating each file as an atomic unit, I now separate the concept of "content" from the actual files that contain it. This distinction allows me to:
1. Create "virtual clips" - time segments within a video that have their own metadata, tags, and titles - without generating new files
2. Connect multiple files that represent the same underlying content (like a match highlight with different ad insertions for YouTube vs. Twitch)
3. Associate various resolution encodings with the same logical content
For example, a content creator might have multiple versions of the same video with different ad placements for different channels. In the old system, these were just separate, unrelated files. Now, they're explicitly modeled as variants of the same logical content, making organization and search much more powerful.
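To make the shape of that model concrete, here's a minimal sketch of the content/file/clip separation. All names and fields here are hypothetical illustrations, not the app's actual schema:

```cpp
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical sketch: "content" is the logical unit; files and clips hang off it.

struct FileVariant {          // a physical file that embodies some content
    std::string path;         // e.g. the YouTube cut vs. the Twitch cut
    std::string drive_label;  // where the (possibly offline) drive lives
    int width = 0, height = 0;
};

struct VirtualClip {          // a time segment with its own metadata; no new file
    double start_sec = 0, end_sec = 0;
    std::string title;
    std::vector<std::string> tags;  // e.g. {"goal", "player:23"}
};

struct Content {              // the logical video, independent of any one file
    std::string title;
    std::vector<FileVariant> variants;  // encodings, resolutions, ad insertions
    std::vector<VirtualClip> clips;     // clips may overlap freely
};

// Find all clips that overlap a given time window.
std::vector<VirtualClip> clips_overlapping(const Content& c, double from, double to) {
    std::vector<VirtualClip> out;
    std::copy_if(c.clips.begin(), c.clips.end(), std::back_inserter(out),
                 [&](const VirtualClip& v) { return v.start_sec < to && v.end_sec > from; });
    return out;
}
```

The point is that clips and file variants both hang off the logical content, so re-encoding or deleting a file never touches the annotations.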
I also completely reworked the job system, moving from code running in the Electron main process to spawning Temporal in a child process. I know it sounds like overkill, but it's been surprisingly perfect for a desktop app that's processing hundreds of terabytes of video.
When you're running complex ML pipelines on random people's home computers, things go wrong - custom FFmpeg builds that don't support certain codecs, permission issues with mounted volumes, local model servers crashing. With Temporal, when a job fails, I just show users a link to the Temporal Web UI where they can see exactly what failed and why. They can fix their local config and retry just that specific activity instead of starting over. It's cut my support burden dramatically.
My developer experience is so much better too. The face recognition pipeline has multiple stages (a small model looks for faces, a bigger model does pose detection, an even larger model generates embeddings) and takes minutes to run. With Temporal, I can iterate on just the activity that's failing without rerunning the entire pipeline. It makes developing these data-intensive workflows so much more manageable.
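Temporal's durable execution is what makes per-activity retry possible; as a language-agnostic illustration (this is not Temporal's API), the per-stage retry idea looks roughly like this:

```cpp
#include <cstdio>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Illustration only (not Temporal's API): each pipeline stage is an
// independently retryable unit, so a failure in stage N never forces
// stages 0..N-1 to rerun.
struct Stage {
    std::string name;
    std::function<void()> run;
};

void run_pipeline(const std::vector<Stage>& stages, int max_attempts = 3) {
    for (const auto& s : stages) {            // completed stages are never revisited
        for (int attempt = 1;; ++attempt) {
            try {
                s.run();
                break;                        // stage done, move on
            } catch (const std::exception& e) {
                std::fprintf(stderr, "stage '%s' failed (attempt %d): %s\n",
                             s.name.c_str(), attempt, e.what());
                if (attempt == max_attempts) throw;  // surface to the user
            }
        }
    }
}

// Usage: run_pipeline({{"detect-faces", ...}, {"pose", ...}, {"embed", ...}});
```

What Temporal adds on top of a sketch like this is persistence: completed stage results survive process restarts, so a retry can happen hours later, after the user has fixed their FFmpeg install or volume permissions.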
My perspective is that all those hours of raw footage are just raw materials waiting to be shaped into stories, highlights, or presentations. The value is concentrated in a few hotspots.
Jellyfin and Plex appear to have been built on fundamentally different technical assumptions than Video Clip Library. They expect media to remain connected and accessible to the server at all times - when drives disconnect, they often purge those entries from their databases, requiring full rescans when reconnected. It appears Jellyfin only fixed this in Oct 2024.
The reality for many isn't sleek network storage - it's often just a plastic container filled with labeled hard drives sitting in a closet.
Video Clip Library is architected specifically for the archival cold-storage workflow, where most media is physically offline. The database maintains complete metadata even when drives are disconnected. When you search for 'soccer highlights from 2018', it not only tells you which file contains that footage but also precisely where the physical drive is located: 'the blue SSD in Alice's desk, bottom drawer'. You can upload pictures of each drive, print out barcodes, and write detailed notes. Organization stuff.
This workflow doesn't necessarily make sense for full-time professionals with dedicated workstations, but it's ideal for the long-tail use cases that originally drove me to build this software - normal people with occasional video projects. Of course, as is often the case, people bring it to their day job and start pushing for more business-oriented features. But this software was born for the individual creator, the freelancer, or small teams of auteurs collaborating on creative projects. A tool to accommodate the stop-and-start reality of passion projects. A poor man's version of editing with proxies.
So far everyone is accumulating clip annotations on video files over time.
I'm thinking of clips as essentially write-only/append-only annotations. Labels or metadata attached to sections of videos rather than new files. The system is designed to support overlapping clips and allows you to filter/view all clips for a video.
To clarify, Video Clip Library is purely a search engine - it doesn't composite or edit videos, although it will let you re-encode to save space. I built it for scenarios like: "I have a catalog of shots from the last five years, and when working on a new project, I might want to reuse B-roll or footage I've already taken." A YouTuber making a "then and now" video can find footage from their first year.
For me personally, the virtual clips feature will improve my learning process. I'm not a professional videographer. Naturally I spend time studying work from more skilled creators, trying to understand what makes it effective. I'm excited to take notes on specific moments - "these are the places across many different videos where I feel afraid" or "interesting rack focus technique here" - with notes and tags scoped to their own clips. I was already taking these notes in Obsidian. But it wasn't great.
I find a beauty in the layering: I can create overlapping clips that represent different aspects of the same footage - one layer for emotional responses, another for technical observations. Note that I'm creating these manually, an hour here and an hour there over months, as time allows or interest waxes. I might only annotate a few thousand clips across a couple hundred films in my lifetime. That's OK. I don't need the computer to understand the videos perfectly frame by frame.
The professional use case that prompted this feature is different - teams collect footage, then editors assemble compilations and marketing materials months later. They will run AI models to annotate videos as they're ingested, or apply new models to existing catalogs. Then someone with a creative concept can quickly search: "Do we already have footage that supports this idea or do we need to shoot something new?"
This is at a very early stage, where I have a design sketch and some experiments that validate the design. Below is the README:
Rio is an experimental C++ async framework. The goal is to provide a lightweight framework for writing C++ server applications that is easy to use and provides consistently low latencies.
Today, async frameworks that focus on efficiency typically use one of two architectures:
1. Shared-nothing architectures, also called thread-per-core. This is used by frameworks such as Seastar and Boost.Asio. In a shared-nothing architecture, each worker thread runs its own event loop and is intended to run on its own dedicated core. The application is architected to shard its workload over multiple workers, with only infrequent communication between them. When a task performs CPU-bound work, that work needs to be explicitly run in a thread pool, as it would otherwise block other tasks from running on the current worker (often referred to as a "reactor stall").
2. Work-stealing architectures. This architecture is used by frameworks such as Tokio. Here too there are multiple worker threads, each running its own event loop. When a specific worker gets overloaded or runs a blocking task, other threads can execute its ready tasks. This goes some way toward preventing reactor stalls. However, even though other threads can steal ready tasks, they do not poll the stalled worker's event loop for new readiness events. This means a task that does not yield back to the runtime will increase latencies for other requests assigned to that worker.
The thesis for Rio is that in real-world server applications, it gets increasingly hard to ensure you yield back frequently to the event loop. In particular, there are many CPU-bound tasks that server applications commonly perform, such as parsing protocols or performing encryption and compression. If a task takes less than ~10 microseconds, it is often not worth offloading to a thread pool, as the synchronization and system-call overhead will take more time than the task itself. Additionally, sprinkling in thread offloads makes development harder, especially in larger teams with individuals of different experience levels. The result is that either too much work gets pushed to thread pools or too little, and the net effect is less consistent latencies.
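To get a feel for that threshold, here is a toy benchmark comparing a tiny task run inline versus round-tripped through a one-thread pool (illustrative only; absolute numbers vary widely by machine and by pool implementation):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Toy single-thread "pool": a queue guarded by a mutex + condition variable.
class OneThreadPool {
    std::queue<std::function<void()>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread t_;

public:
    OneThreadPool()
        : t_([this] {
              for (;;) {
                  std::function<void()> job;
                  {
                      std::unique_lock<std::mutex> lk(m_);
                      cv_.wait(lk, [this] { return stop_ || !q_.empty(); });
                      if (stop_ && q_.empty()) return;
                      job = std::move(q_.front());
                      q_.pop();
                  }
                  job();
              }
          }) {}

    ~OneThreadPool() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        t_.join();
    }

    void submit(std::function<void()> f) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(f)); }
        cv_.notify_one();
    }
};

int main() {
    constexpr int kIters = 10000;
    volatile long sink = 0;
    auto tiny_task = [&] { for (int i = 0; i < 100; ++i) sink = sink + i; };

    // 1) Run the tiny task inline on the calling thread.
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) tiny_task();
    auto inline_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                         std::chrono::steady_clock::now() - t0).count();

    // 2) Offload the same task to another thread and wait for completion.
    OneThreadPool pool;
    std::mutex m;
    std::condition_variable cv;
    int done = 0;
    t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        pool.submit([&] {
            tiny_task();
            { std::lock_guard<std::mutex> lk(m); ++done; }
            cv.notify_one();
        });
    }
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return done == kIters; });
    }
    auto offload_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                          std::chrono::steady_clock::now() - t0).count();

    std::printf("inline: %lld ns/task, offloaded: %lld ns/task\n",
                static_cast<long long>(inline_ns / kIters),
                static_cast<long long>(offload_ns / kIters));
}
```

The per-task cost of the offload path is dominated by locking, queueing, and wake-up, which is exactly the overhead that swamps sub-10-microsecond tasks.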
Rio is an experiment in a work-stealing architecture where completion events can also be stolen. The Rio runtime uses multiple worker threads to handle asynchronous tasks. Each worker thread runs its own io_uring, which is also registered to an eventfd. A central "stealer" thread listens on the eventfds for all workers. When a completion event becomes available, the stealer checks whether the corresponding worker is currently executing a task. If so, it signals an idle worker with a request to process the completion event and run any tasks that become ready as a result. The stealing logic is aware of the system topology and tries to wake a thread that shares a higher-level cache with the task.
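Here's a minimal sketch of the stealer's core loop as described above, using Linux eventfd + epoll (liburing would supply the per-worker rings; the worker bookkeeping and wake-up policy here are simplified placeholders):

```cpp
#include <atomic>
#include <cstdint>
#include <sys/epoll.h>
#include <unistd.h>
#include <vector>

// Simplified per-worker state; the real runtime tracks much more.
struct Worker {
    int event_fd;                   // eventfd(0, EFD_NONBLOCK), registered with
                                    // this worker's ring via io_uring_register_eventfd()
    std::atomic<bool> busy{false};  // set while the worker is inside a task
};

// Placeholder: hand the stalled worker's completions to an idle worker,
// preferring one that shares a higher-level cache (topology-aware).
void wake_idle_worker_for(Worker& /*stalled*/) { /* ... */ }

// Core loop of the central stealer thread.
void stealer_loop(std::vector<Worker>& workers) {
    int ep = epoll_create1(0);
    for (auto& w : workers) {
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.ptr = &w;
        epoll_ctl(ep, EPOLL_CTL_ADD, w.event_fd, &ev);
    }
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, /*timeout=*/-1);
        for (int i = 0; i < n; ++i) {
            auto& w = *static_cast<Worker*>(events[i].data.ptr);
            uint64_t count = 0;
            read(w.event_fd, &count, sizeof count);  // drain the eventfd counter
            if (w.busy.load(std::memory_order_acquire)) {
                // Owner is stuck in a long-running task: steal the completion.
                wake_idle_worker_for(w);
            }
            // Otherwise the owning worker will process its own CQEs.
        }
    }
}
```

The interesting policy decision lives in wake_idle_worker_for: per the design above, it should prefer an idle worker sharing a higher-level cache with the stalled one.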
Quite frankly, it feels great!
I'm going to speak this update, because that way the text comes out more naturally. So, since October last year - about six months now - I've been using Cursor as a non-coder. The last time I actually coded was, I guess, back in the days of Flash and ActionScript. I've used Cursor maybe 12 to 15 hours a day on average over those six months - some days off, more hours on others. And I've been working together with my co-founder, who's a developer, but he's also using AI - just less than I do. We've been working on several things, and the one closest to being released, maybe 80% done, is, as I said, a vibe coding platform.
This particular vibe coding platform won't compete with the likes of Replit, Bolt, Lovable, and the roughly 40 other platforms that are basically the same: chat + IDE + preview area. What we're doing is different, because we're rewriting a project we built about 10 years ago. In a way similar to how Replit and Bolt completely pivoted because of AI, the same thing is happening with this project of ours. It was originally about offering interactive widgets for people's websites, but we had to build those manually. We've now rewritten the platform from scratch using AI, and we offer zero widgets - because with AI we can offer an infinity of widgets. That's exciting.
This was supposed to stay a side project, and frankly I'm surprised there's so much demand and such amazing growth for all these AI codegen platforms. If that happens to this project, I don't know exactly what we'll do, because our plan is to tackle a bigger project we're also working on. The strategy is what we call compounding startups: instead of building one large project with many features, we split it into several projects, build them one at a time, and give each the ability to generate revenue on its own. At some point we'll likely raise capital. We had this plan before, but with AI things have completely changed - we can now do in a week what five developers used to do in six months.
The other projects are Talbo, Reclona, and Tradul. Talbo is what they compound into. But I'm not ready to say too much about them right now. :)