Posted by david927 3/30/2025

Ask HN: What are you working on? (March 2025)

What are you working on? Any new ideas that you're thinking about?
390 points | 992 comments | page 2
icy 3/31/2025|
We’ve been building a new social-enabled git collaboration platform on top of Bluesky’s AT Protocol: https://tangled.sh

You can read an intro here: https://blog.tangled.sh/intro (it’s publicly available now, not invite-only).

In short, at the core of Tangled is what we call “knots”: lightweight, headless servers that serve up your git repository, with the contents viewed and collaborated on via the “app view” at tangled.sh. All social data (issues, comments, PRs, other repo metadata) is stored “on-proto”, that is, in your AT Protocol PDS.

We don’t plan to just reimplement GitHub, but to rethink and improve on the status quo. For instance, we plan to make stacked PRs and stacked-diff-based reviews first-class citizens of the platform.

Right now, our pull request feature is rather simplistic but still very useful: you paste your git diff output and “resubmit” for a new round of review (if necessary). You can see an example here: https://tangled.sh/@tangled.sh/core/pulls/14

We’re fully open source: https://tangled.sh/@tangled.sh/core and are actively building. Come check it out!

carom 4/1/2025|
How is the support for LFS? Also, what backend language? I have some Go code for implementing an LFS server and auth, but did not want to build a full code forge. All of the major Git hosts have woefully bad LFS management (e.g. if you want to purge a file from history, you have to delete the whole repository).
icy 4/2/2025||
We don't support LFS at the moment. Everything is in Go.
janosch_123 3/31/2025||
I am sharing what I learnt building electric cars.

On YouTube: https://youtube.com/@foxev-content

In a learning app: https://foxev.io/academy/

On a physical board where people can explore electric car tech on their desk: https://foxev.io/ev-mastermind-kit/

Backstory: from 2018-2023 I converted London taxis to be electric and built three prototypes. We also raised £300k and were featured in The Guardian. I have a day job again these days and am persisting what I learnt and sharing it. YouTube is super interesting for me because of the writing; similar for the web app, actually, because the code isn't that complicated. It's about how I present it in a way that engages users, so I am thinking mostly about UX.

Actually why not, here is the intro to the first module (100 questions about batteries - ends in a 404): https://foxev.io/academy/modules/1/

farmin 3/31/2025||
Thanks for sharing. I watched some of the CAN bus video and came back to say it looks really interesting. I am building a prototype diesel-electric autonomous vehicle for farm applications at the moment, so the videos are somewhat relevant to me.
janosch_123 3/31/2025||
Thank you, means a lot :)
HeyLaughingBoy 3/31/2025|||
I really love the concept of the mastermind kit. An experimentation product that's geared to learning about a specific niche.
carom 4/1/2025||
What is the ETA on the kit shipping if I pre-order one?
janosch_123 4/1/2025||
Not taking payments yet. The ETA was Q1, but it's slipping to Q2 as my day job has kept me busy.

You can register your interest and I will send you an email when it's ready.

kamranjon 3/31/2025||
I started a small company selling accessories that I design, 3D print, and build for old 16mm film cameras. I recently released a crystal-synchronized motor for Arri cameras, which allows you to record sound and have it sync up properly later; it has actually been selling pretty well. My next goal is to get into CNC machining with metal and actually build a modern 16mm film camera.

For my day job I am currently working for an online education company. I have been learning about the concepts behind knowledge tracing and using knowledge components to get a fine-grained perspective on what types of skills someone has acquired throughout their learning path. It is hard because our company hasn't really had any sort of basis to start from, so I have been reading a lot of research papers and trying to understand, from first principles, how to approach this problem. It has been a fun challenge.

somethingsome 4/1/2025|
Hey! Seems like a nice job. Do you mind if I ask which company, and whether you found some interesting references on the subject?
kamranjon 4/1/2025||
Hi! I can't really share the company, but I do love the space and am happy to discuss what I've been reading.

So the idea of Knowledge Tracing originated, from my understanding, with a paper in 1994: http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/1... This introduced the idea that you could model and understand a student's learning as they progress through a set of materials.
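
To make that concrete, here is a rough sketch of the update rule that style of model boils down to (this is my own illustration, not code from the paper, and the slip/guess/learn parameter values are made up):

    # Rough sketch of a Bayesian Knowledge Tracing update.
    # p_know: current estimate that the student has learned the skill;
    # slip/guess/learn are per-skill parameters (values here are illustrative).
    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        if correct:
            evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
            posterior = p_know * (1 - p_slip) / evidence
        else:
            evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
            posterior = p_know * p_slip / evidence
        # Chance the student picked up the skill from this practice opportunity
        return posterior + (1 - posterior) * p_learn

    p = 0.3  # prior belief the skill is known
    for answer in [True, False, True, True]:
        p = bkt_update(p, answer)
        print(round(p, 3))

Each observed answer nudges the estimate up or down, and the estimate drives what to show the student next.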

The concept of Knowledge Components was started, I believe, at Carnegie Mellon and University of Pittsburgh with the Learn Lab: https://learnlab.org/learnlab-research/ - in 2012 they authored a paper defining KLI (Knowledge Learning Instruction framework): https://pact.cs.cmu.edu/pubs/KLI-KoedingerCorbettPerfetti201... which provided the groundwork for the concept of Knowledge Components.

This sort of kicked things off with regard to really studying these things at a finer-grained level. They have a Wiki which covers some key concepts: https://learnlab.org/wiki/index.php?title=Main_Page like the Knowledge Component: https://learnlab.org/wiki/index.php?title=Knowledge_componen...

Going forward a few years you have a Stanford paper, Deep Knowledge Tracing (DKT): https://stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing... which delves into utilizing RNNs (recurrent neural networks) to aid in the task of modelling student knowledge over time.

Jumping really far forward to 2024, we have another paper from Carnegie Mellon & University of Pittsburgh, Automated Generation and Tagging of Knowledge Components from Multiple-Choice Questions: https://arxiv.org/pdf/2405.20526 and a very similar paper that I really enjoyed from Switzerland, Using Large Multimodal Models to Extract Knowledge Components for Knowledge Tracing from Multimedia Question Information: https://arxiv.org/pdf/2409.20167

Overall, the concept I've been gathering is that if you can break the skills involved down into smaller and smaller components, you can make much more intelligent decisions about what is best for the student.

The other thing I've been gathering is that Skills Taxonomies are only useful insofar as they help you make decisions about students. If you build a very rigid Taxonomy that is unable to accommodate change, you can't easily adapt to new course material or make dynamic decisions about students. So the idea of a rigid Taxonomy is quickly becoming outdated. Large language models are being used to generate fine-grained skills (Knowledge Components) from existing course material to help model a student's development based on performance, in a way that can be easily updated when materials change.

I have worked through and replicated some of the findings in these later papers using local models, for example using the open Gemma 2 27b models from Google to generate Knowledge components and using Sentence Embedding models and K-means clustering to gather them together and create groups of related Knowledge Components. It's been a really fun project and I've been learning quite a bit.
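
If it helps anyone, a minimal sketch of that clustering step, assuming sentence-transformers and scikit-learn (the model name, example KCs, and cluster count are arbitrary placeholders):

    # Sketch: group LLM-generated Knowledge Components by semantic similarity.
    # Requires sentence-transformers and scikit-learn; the embedding model and
    # number of clusters are just example choices.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    knowledge_components = [
        "identify the slope of a linear equation",
        "interpret the y-intercept in context",
        "convert a fraction to a decimal",
        "compare two fractions with unlike denominators",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(knowledge_components)

    kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
    for kc, label in zip(knowledge_components, kmeans.labels_):
        print(label, kc)

The interesting work is in prompting the model to generate good KCs and in picking the number of clusters, but the pipeline itself really is this small.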

somethingsome 4/1/2025||
Thank you! I've had a similar idea for a long time and I'm interested in developing it, but never found the time to dig deeper. With those references I can jump-start into the subject and refine it.

It's nice to know I'm not the only one thinking about that.

The trick for me is that it's a path in a graph for each student, so even if some component is not as strong for one student, they can fill the gap by taking another route. A good framework would be resilient if it finds many possible paths to reach the same result, rather than forcing one path. But then, teaching in this way is more difficult.
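
Roughly what I have in mind, as a toy sketch with networkx (the prerequisite graph and node names are completely made up):

    # Toy sketch of the "many paths to the same result" idea, using networkx.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("counting", "addition"),
        ("addition", "multiplication"),
        ("addition", "repeated_addition"),
        ("repeated_addition", "multiplication"),
        ("multiplication", "fractions"),
    ])

    # All the ways a student could get from "counting" to "fractions";
    # a weak component on one route doesn't block the goal if another route exists.
    for path in nx.all_simple_paths(G, "counting", "fractions"):
        print(" -> ".join(path))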

benno128 3/30/2025||
Working on Runno (https://runno.dev/) as a side project. It's a tool for running code in the browser for educational use.

[Edit]: I wrote a re-introduction, "Runno: The WebComponent for Code", over the weekend (https://runno.dev/articles/web-component/)

I've been playing around with turning it into a sandbox for running code in Python (https://runno.dev/articles/sandbox-python/). This would allow you to safely execute AI-generated code.

Generally thinking about more ways to run code in sandbox environments, as I think this will become more important as we are generating a lot of "untrusted" code using Gen AI.

kmad 3/30/2025||
Awesome! Have you considered pyodide[1]? Pydantic uses this for sandboxing its AI agents [2].

1. https://pyodide.org/en/stable/ 2. https://ai.pydantic.dev/mcp/run-python/

benno128 3/31/2025||
Thanks! Yeah I'm very aware of Pyodide and interested in adopting some of their techniques.

A big difference between my approach and their approach is that Runno is generic across programming languages. Pyodide only works for Python (and can only work for Python).

A big interesting development in this space is the announcement of Endor at WASM IO, which I'd like to try out: https://endor.dev/

lhmiles 3/30/2025||
Amazing!
itwenty 3/31/2025||
I released an iOS app last October for users who use Apple Watch to record their workouts - https://mergefit.itwenty.me

It lets users merge two or more workouts into a single one. There have been times when I have been out riding, hiking, or whatever and accidentally ended the activity on my Apple Watch instead of pausing it. Starting a new workout means having your stats split across the two workouts.

The "usual" way to merge such workouts is to export all of them to individual FIT files, then use a tool like fitfiletools.com to merge the individual FIT files. You then have a merged FIT file, which is difficult to import back into Apple Health. This process also requires access to the internet, which is not always guaranteed when out in remote areas.

MergeFit makes this process easy by merging workouts right on device and without the need to deal with FIT files at all. It reads data directly from Apple Health and writes the merged data back to Apple Health.

The app reached a small milestone a few days ago: crossing $1,000 in total sales.

stefandesu 3/31/2025||
Great idea! One other thing that annoys me is the inability to trim workouts, mostly walking workouts which I forgot to stop. Even when the Apple Watch asks you if you want to end the workout, it doesn’t cut off the end where nothing happened anymore. It should be possible to do this (semi-)automatically using the heart rate and movement data.
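
Even something as crude as scanning for the last sample that looks like real effort would probably cover most cases; a rough sketch (the sample format and the idle threshold are just guesses for illustration):

    # Rough sketch: trim the idle tail of a workout using heart-rate samples.
    # Each sample is (timestamp_seconds, bpm); the idle threshold is a guess and
    # would really need to be relative to the user's resting heart rate.
    def trim_idle_tail(samples, idle_bpm=90):
        last_active = 0
        for i, (_, bpm) in enumerate(samples):
            if bpm >= idle_bpm:
                last_active = i
        return samples[: last_active + 1]

    samples = [(0, 120), (60, 135), (120, 128), (180, 82), (240, 78)]
    print(trim_idle_tail(samples))  # drops the two resting samples at the end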
itwenty 3/31/2025||
That's a good problem to solve. Another pet peeve of mine is the inability to modify different streams of fitness data in a single workout. If I record a workout on the watch but use a different device just for heart rate, there's no good way to replace the watch-recorded HR data with the other device's HR data.
andupotorac 3 days ago||
Does this allow installing dependencies with Node.js, for example? I know CodeSandbox did this a while back, so I imagine it's not something that browsers can just do.
Cyphase 3/30/2025||
Myself.

Been a freelance dev for years, now going on "sabbatical" (love that word) shortly.

Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.

I'm excited.

john_the_writer 4/2/2025||
Yeah... I'm starting a LitRPG-style blog for my personal growth. Skills/attributes tied to personal goals: Spirit as it relates to learning guitar (for instance), Mana as it relates to tech skills.

Right now it's all a flat index.html and simple.css hosted on GitHub Pages, but I'll get a site eventually. Writing a blog is part of my goals to become a profitable author. (Creativity ~= blogging, sketching, animation)

wes-k 4/2/2025|||
Been doing this the last 10 months. Lots of growth and self-discovery. Best wishes on your journey!
elvis10ten 3/31/2025||
Good luck. I plan on doing something similar once I get my permanent residence later this year.
rottytooth 3/30/2025||
I’m finishing several esolangs for the first artist’s monograph of programming languages, out in Sept: https://mitpress.mit.edu/9780262553087/forty-four-esolangs/ including a hands-free (and not dictated) language.

I recently completed Valence: a language with polysemantic programs https://danieltemkin.com/Esolangs/Valence on GitHub: https://github.com/rottytooth/Valence

Older work includes Folders: code written as a pattern of folders: https://github.com/rottytooth/Folders , Entropy: where data decays each time it’s read from or written to: http://entropy-lang.org/ and Olympus: where code is written as prayer to Greek gods, who may or may not carry out your wishes: https://github.com/rottytooth/Olympus (a way to reverse the power structure of code, among other things).

I have three more to complete in the next few months.

recursivedoubts 3/30/2025||
I'm working on an emulator for a 16-bit computer I have designed for teaching students. It's designed to make low-level computing more accessible for modern students by making things as visual as possible: for example, blinkenlights for the registers like with the old PDPs, color-coded memory that shows where the code and data segments are, where the stack is, etc., and a small frame buffer that drives a 64x64, 2-bit display that uses the same color palette as the original Game Boy. The instruction set is a mashup of MIPS, the Scott CPU, and JVM/Forth stack operations. I'm excited about it.

here's a screenshot:

https://gist.github.com/1cg/e99206f5d7b7b68ebcc8b813d54a0d38
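
To give a sense of how small the display side is, here is roughly what decoding that frame buffer comes down to; this is an illustrative Python sketch, not the emulator's actual code, and the 4-pixels-per-byte packing order is an assumption:

    # Illustrative sketch: turn a 64x64, 2-bit-per-pixel frame buffer into RGB
    # using the original Game Boy's four-shade green palette. Four pixels are
    # packed per byte, most significant bits first (an assumption for this sketch).
    PALETTE = [
        (155, 188, 15),   # lightest
        (139, 172, 15),
        (48, 98, 48),
        (15, 56, 15),     # darkest
    ]

    def decode_framebuffer(buf, width=64, height=64):
        pixels = []
        for byte in buf:
            for shift in (6, 4, 2, 0):
                pixels.append(PALETTE[(byte >> shift) & 0b11])
        return [pixels[y * width:(y + 1) * width] for y in range(height)]

    # 64*64 pixels at 4 pixels per byte = 1024 bytes of video memory
    frame = bytes(1024)
    print(decode_framebuffer(frame)[0][:4])  # first four pixels, all lightest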

Lerc 3/30/2025|
Nice. I made an 8-bit AVR thing along those lines: 240x180, 16-color, with an in-browser emulator and assembler.

Can load source from gists https://k8.fingswotidun.com/static/ide/?gist=ad96329670965dc...

Never really did much with it, but it was interesting and fun.

_cn0w 3/31/2025||
Working on [redacted], a Chrome extension to make YouTube more time-efficient (and way more fun).

Its main differentiator: hover any thumbnail (homepage, search, shorts, etc.) for an instant mini-summary, like Wikipedia link previews. Also includes detailed summaries w/ timestamps, chat w/ video, chat w/ entire channels, and comment summaries.

Hover & Detailed summaries are free if you plug in your own OpenAI API key ("free for nerds" mode).

Aiming to be the best YouTube-specific AI tool. Would love your feedback. No signup needed for free tier/BYOK. If you try it and email me ([redacted]), happy to give you extended Pro access!

alabhyajindal 3/31/2025||
Love the idea and the name! Do you think this will worsen the YouTube experience? Because it encourages consuming content at a faster pace than watching the videos themselves.
_cn0w 3/31/2025||
Thanks!

I think its impact on watch time depends on your goal for that session. When I'm in "looking for a specific answer" mode it does reduce my watch time, but there are plenty of times when I just want to watch YouTube, and when I do, it helps me find what to watch rather than reducing my watch time per se.

thefourthchime 3/31/2025||
nice! can you support safari?
_cn0w 3/31/2025||
Thanks! This may take a while to be honest, but I have added it to the list.
Saigonautica 3/31/2025|
I've got a self-hosted personal library management app more or less done here: https://github.com/seanboyce/ubiblio

An electronic board game similar to Settlers of Catan (https://github.com/seanboyce/Calculus-the-game); I just received the much better full-sized boards. Will assemble and test over the next few weeks, then document properly. I got the matte black PCBs; they look really cool.

A hardware quantum RNG. I made a mistake in the board's power supply, but it still works well with a cut trace and a bodge wire. Will probably fix the bug and put the results up in a few weeks. It can push out ~300 bytes of entropy a second, each as an MQTT message.
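
Consuming the entropy is just a normal MQTT subscribe; a quick sketch using paho-mqtt (2.x API), where the broker address and topic are placeholders rather than the real ones:

    # Sketch of a consumer for the entropy messages, using paho-mqtt 2.x.
    # Broker address and topic name are placeholders.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # msg.payload carries the raw entropy bytes published by the device
        print(f"{msg.topic}: {len(msg.payload)} bytes of entropy")

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("broker.local", 1883)
    client.subscribe("qrng/entropy")
    client.loop_forever()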

A hardware device that just does E2E encrypted chat (using Curve25519). Microcontrollers only, no OS, and nothing ever stored locally. HDMI output (1024x768), Wi-Fi, and USB keyboard support. I originally designed it to use a vanilla MQTT broker, but I'm probably going to move it to HTTP and just write a little self-hosted HTTP service that handles message routing and ephemeral key registration. Right now the encryption, video output, and USB host work -- but it was a tangle of wires to fix the initial bugs, so I ordered new boards. Got to put those through testing, then move on to writing the server.
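
The crypto itself is the standard Curve25519 box construction; a desktop sketch of the same shape in Python with PyNaCl (the actual firmware is microcontroller code, and key handling here is simplified, with fresh keys generated on the spot):

    # Desktop sketch of the Curve25519 "box" the device implements in firmware.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Each side only ever needs the other's *public* key to build the box.
    alice_box = Box(alice_key, bob_key.public_key)
    bob_box = Box(bob_key, alice_key.public_key)

    ciphertext = alice_box.encrypt(b"hello from the little HDMI terminal")
    print(bob_box.decrypt(ciphertext))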

Iterating on hardware stuff is pretty slow. I try to test more than one bugfix in parallel on the same board. Iteration time is 2-3 weeks and $8, if I have all the parts in stock. I don't have very much free time right now due to work, so this suits me fine. A rule I live by is that I must always be creating, so I think this is a reasonable compromise.
