Posted by david927 6/29/2025
Ask HN: What Are You Working On? (June 2025)
Most recently I've been focused on better geographic visualizations in the public studio for people to experiment with: getting decent automatic lat/long detection, plus easy path visualizations (start/end, etc.). More AI-accelerated options as well, especially around model authoring.
Repo: https://github.com/trilogy-data/pytrilogy Studio: https://trilogydata.dev/studio-core/
Started as a very simple app for me to play around with OpenAI’s API last year then morphed into a portfolio project during my job search earlier this year. Now happily employed but still hacking on it.
Right now, a user can create a quiz, take it, save it, and share it with other people via a URL.
Demo: You can try out the full working application at https://quizknit.com
Github Links: Frontend: https://github.com/jibolash/quizknit-react , Backend: https://github.com/jibolash/quizknit-api
Here's the summary: it reads all your sources (public websites, docs, video), answers questions with a confidence score and citations instead of hallucinations, and cuts support time. It even integrates directly into customer-facing chatbots like Intercom.
Still deliberating on the business model. If anyone would be interested in taking a look, I would love to show you.
I store the chunks in a custom-built database (on top of riak_core and Bitcask), and it automatically produces an HLS stream as well. This involved remuxing the AAC chunks into MPEG-TS and creating the playlist dynamically.
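The dynamic-playlist step is straightforward to sketch. The poster's system is in Erlang; the following is just an illustrative Python sketch of building a live HLS media playlist over already-remuxed MPEG-TS segments (the segment names and durations are made up):

```python
def make_media_playlist(segments, target_duration=None):
    """Build a live HLS media playlist (.m3u8).

    `segments` is a list of (uri, duration_seconds) tuples for the
    MPEG-TS chunks produced by the remux step.
    """
    if target_duration is None:
        # EXT-X-TARGETDURATION must be >= the longest segment, rounded up
        target_duration = max(int(d + 0.999) for _, d in segments)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    # No #EXT-X-ENDLIST tag: a live playlist stays open-ended
    return "\n".join(lines) + "\n"

playlist = make_media_playlist([("chunk0.ts", 6.0), ("chunk1.ts", 5.94)])
```

For a live stream you'd regenerate this on every request, sliding the segment window forward and bumping EXT-X-MEDIA-SEQUENCE accordingly.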
It's also horizontally scalable, almost completely linearly. Everything is done with Erlang's internal messaging and riak_core, and I've done a few (I think) clever things to make sure everything stays fast no matter how many nodes you have and no matter how many concurrent streams are running.
I’ll probably release the code I wrote for the input radio station but that’s just a glorified script written in Rust and calling FFmpeg. The only fun part of that is I call OpenAI to get AI commercials and DJ chatter.
The thing is, we've been retrofitting software made for humans for machines, which creates unnecessary complications. It's not about model capability, which is already there for most processes I have tested; it's that systems designed for people are confusing to AI, don't fit their mental model, and make relying on agents to operate them a pipe dream from a reliability or success-rate perspective.
This led me to a realization: as agentic AI improves, companies need to be fully AI-native or lose to their more innovative competitors. Their edge will be granting AI agents access to their systems, or rather, leveraging systems that make life easy for their agents. So, focusing on greenfield SaaS projects/companies, I've spent the last few weeks crafting building blocks for small to medium-sized businesses that want to be AI-native from the get-go. What began as an API-friendly ERP evolved into something much bigger: for example, Cursor-like capabilities over multiple types of data (think semantic search on your codebase, but for any business data), or custom deep search into a product's documentation to answer a user question.
Now, an early version is powering my products, slashing implementation time by over 90%. I can launch a new product in hours supported by several internal agents, and my next focus is to possibly ship the first user-facing batch of agents this month to support these SaaS operations. A bit early to share something more concrete, but I hope by the next HN thread I will!
Happy to jam about these topics and the future of the agentic-driven economy, so feel free to hit me up!
It's called Heap. It's a macOS app for creating full-page local offline archives of webpages in various formats with a single click.
It creates an image screenshot, PDF, Markdown, HTML, and webarchive.
It can also be configured to archive videos, zip files, etc. using AppleScript. It can run JavaScript on the website before archiving, sign in with user accounts before archiving, and run an Apple Shortcut after archiving.
I feel like people who are into data hoarding and self-hosting would find this very helpful. If anyone wants to try it out:
https://apps.apple.com/ca/app/heap-website-full-page-image/i...
Surprisingly, the blocker has been identifying notes from the microphone input. I assumed that'd be a long-solved problem: just do an FFT and find the peaks of the spectrogram? But apparently that doesn't work well when there are harmonics and reverb and such, and you have to use AI models (Google and Spotify have some) to do it. And so far it still seems to fail if more than three notes are played simultaneously.
Now I'm baffled how song identification can work, if even identifying notes is so unreliable! Maybe I'm doing something wrong.
It's based on the assumption that the most common frequency difference in all pairs of spectrum peaks is the base frequency of the sound.
- For the FFT, use a Gaussian window, because then your peaks look like Gaussians. The logarithm of a Gaussian is a parabola, so you only need the three samples around a peak to calculate its exact frequency.
- Gather all the peaks along with their amplitudes. Pair all combinations.
- Create a histogram of the frequency differences in those pairs, weighted by the product of the amplitudes of the peaks.
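The steps above can be sketched in a few lines of numpy. The peak values here are made up (harmonics of a 220 Hz note with the fundamental missing), and the bin resolution is an arbitrary choice, but the parabolic refinement and the weighted-difference histogram are the same ideas:

```python
import numpy as np

def refine_peak(log_mag, k):
    """Parabolic interpolation around FFT bin k.

    With a Gaussian window, log-magnitude peaks are parabolas, so the
    three samples around a peak pin down its true (fractional) position:
    offset = 0.5 * (a - c) / (a - 2b + c).
    """
    a, b, c = log_mag[k - 1], log_mag[k], log_mag[k + 1]
    return k + 0.5 * (a - c) / (a - 2 * b + c)

def estimate_base_frequency(freqs, amps, resolution=1.0, max_freq=2000.0):
    """Estimate f0 as the most common pairwise difference of spectrum peaks.

    Every pair of peaks votes for |f_i - f_j|, weighted by amp_i * amp_j;
    the heaviest histogram bin is taken as the base frequency.
    """
    diffs, weights = [], []
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            d = abs(freqs[i] - freqs[j])
            if 0 < d < max_freq:
                diffs.append(d)
                weights.append(amps[i] * amps[j])
    bins = np.arange(0.0, max_freq + resolution, resolution)
    hist, edges = np.histogram(diffs, bins=bins, weights=weights)
    peak_bin = int(np.argmax(hist))
    return 0.5 * (edges[peak_bin] + edges[peak_bin + 1])  # bin center

# Harmonics of 220 Hz with a missing fundamental still vote for ~220 Hz,
# because adjacent harmonics are exactly f0 apart
f0 = estimate_base_frequency([440, 660, 880, 1100], [1.0, 0.8, 0.6, 0.4])
```

A nice property of the difference histogram is that it finds the base frequency even when the fundamental itself is weak or absent, since every adjacent-harmonic pair contributes a vote at f0.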
When you recognise a frequency, you can attenuate it with a comb filter and run the algorithm again to find another one.
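One way to do that attenuation (a sketch, not necessarily what the poster uses) is a feedforward comb filter whose delay matches the recognised frequency, which notches that frequency and all of its harmonics at once:

```python
import numpy as np

def comb_attenuate(x, fs, f0):
    """Notch out f0 and its harmonics with a feedforward comb filter.

    y[n] = 0.5 * (x[n] - x[n - D]) with D = round(fs / f0) has zeros at
    k * fs / D ~= k * f0, so the recognised note and its harmonics are
    attenuated while other frequencies pass mostly unchanged.
    """
    D = int(round(fs / f0))
    y = np.zeros_like(x)           # first D samples: filter warm-up, left at 0
    y[D:] = 0.5 * (x[D:] - x[:-D])
    return y
```

After filtering, you rerun the peak-pairing step on the residual to pick out the next note; repeating this peels off notes one at a time from a chord.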
I was thinking this would be a good project to learn AI stuff, but it seems like most of the work is better off being fully deterministic. Which is maybe the best AI lesson there is. (Though I do still think there's an opportunity to use AI to translate a teacher's notes, e.g. "pay attention to the rest in measure 19", into a deterministic ruleset to monitor while practicing.)
The idea is to take a fully weighted hammer-action keyboard with nothing else, such as the Arturia KeyLab 88 MkII, and add tiny LED lights above each key. Then pair it with a tablet that runs a tutor: it shows the notes plus a Guitar Hero-like display of the upcoming ones, the LEDs shine to show where to press, and it corrects for the timing and heaviness of each press, etc.