
Posted by david927 2 days ago

Ask HN: What are you working on? (May 2026)

What are you working on? Any new ideas that you're thinking about?
277 points | 1054 comments
xeonax 2 days ago|
I'm porting my Unity game to the web using Copilot. It's fun; I can iterate so fast. https://github.com/XEonAX/DriftsInSpace/tree/main/client
cryo32 2 days ago||
Wrote a Forth VM in C around 1996, based on TCJ articles by Brad Rodriguez. Managed to get it to compile with modern GCC this morning and fixed all the horrible issues Valgrind found. Now I'm trying to adapt it into something usable for a spreadsheet-like system with reasonable decimal numeric precision. Consider it an RPL calculator with an Excel-like front end.
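The original VM is in C, but the core idea (a data stack plus a dictionary of words) fits in a few lines. A toy sketch in Python, purely my own illustration and not the author's code:

```python
def make_vm():
    """Toy Forth-like VM: a data stack plus a dictionary of words."""
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "-":    lambda: stack.append(-stack.pop() + stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
        "drop": lambda: stack.pop(),
    }

    def run(source: str) -> list:
        for token in source.split():
            if token in words:
                words[token]()              # execute a known word
            else:
                stack.append(float(token))  # otherwise treat it as a number literal
        return stack

    return run
```

A spreadsheet front end could map each cell to a small program like `run("2 3 + 4 *")`; swapping `float` for `decimal.Decimal` would give the decimal precision mentioned above.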
Joel_Mckay 2 days ago||
Drafting a small adaptive filter to deal with LLM generated email spam etc.
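The post doesn't describe the design, so purely as a generic sketch of what an adaptive (naive-Bayes-style) mail filter can look like, with every name here my own invention:

```python
import math
import re
from collections import Counter

class AdaptiveFilter:
    """Tiny naive-Bayes-style filter that adapts as mail gets labeled."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}

    @staticmethod
    def tokens(text: str):
        # Crude tokenizer: lowercase alphabetic runs.
        return re.findall(r"[a-z']+", text.lower())

    def learn(self, text: str, label: str):
        # Update token counts for the given class ("spam" or "ham").
        self.counts[label].update(self.tokens(text))

    def spam_score(self, text: str) -> float:
        # Sum of log-odds spam vs ham per token, with add-one smoothing.
        # Positive means "looks like spam", negative means "looks like ham".
        score = 0.0
        spam_total = sum(self.counts["spam"].values())
        ham_total = sum(self.counts["ham"].values())
        for tok in self.tokens(text):
            p_spam = (self.counts["spam"][tok] + 1) / (spam_total + 1)
            p_ham = (self.counts["ham"][tok] + 1) / (ham_total + 1)
            score += math.log(p_spam / p_ham)
        return score
```

The "adaptive" part is just that `learn` keeps updating the counts, so the scores drift with whatever spam is currently arriving.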

I don't often have time for OSS projects, but I will keep it readable for packagers. The most time-consuming part will be the verbose commenting needed for people to audit the source quickly.

It is a boring side-project, but unfortunately a necessary one. =3

fomoz 2 days ago||
I'm working on https://vtxmacro.com, a free and fully autonomous LLM trading platform. Basically, you can have any model you want trade for you. Right now it supports ~860 models across 16 providers (including OpenRouter), plus local AI and OpenAI-compatible endpoints.

The bot settings (system prompt and user prompt, temperature, reasoning, etc.) are 100% transparent and customizable, and all users can view and copy anyone else's settings from the leaderboard. The goal is to build the best trading bots possible by seeing what works.

You can run a bot on Gemma 4 31B with a free-tier Google AI Studio account (I'm running 5 bots on it myself), or run Gemma 4 26B on your own PC if you have the GPU for it. I'm running 5 on my 5090, so I'm trading with 10 bots total.

The platform is connected to Hyperliquid and you can trace all the trades on the blockchain from the user's Analytics page (always public).

The way it works is you set a loop interval (default 1 minute) and the model receives the candles, market stats, indicators, account balance, current positions and so on and decides Buy, Sell, or Hold and how many units.
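A rough sketch of that loop (the function names, prompt format, and reply format are my assumptions, not vtxmacro's actual code):

```python
import json
import time

def build_prompt(snapshot: dict) -> str:
    # The model sees candles, indicators, balance, and open positions,
    # and must answer with an action plus a size.
    return (
        "You are a trading bot. Reply with exactly one line: "
        "BUY <units>, SELL <units>, or HOLD.\n"
        f"Market snapshot:\n{json.dumps(snapshot, indent=2)}"
    )

def parse_decision(reply: str) -> tuple[str, float]:
    # Accept "BUY 2", "SELL 0.5", or "HOLD"; anything else means hold.
    parts = reply.strip().upper().split()
    if parts and parts[0] in ("BUY", "SELL"):
        units = float(parts[1]) if len(parts) > 1 else 0.0
        return parts[0], units
    return "HOLD", 0.0

def run_once(fetch_snapshot, ask_model, execute):
    # One iteration: snapshot -> prompt -> model -> parsed decision -> order.
    snapshot = fetch_snapshot()
    action, units = parse_decision(ask_model(build_prompt(snapshot)))
    if action != "HOLD" and units > 0:
        execute(action, units)
    return action, units

def run_loop(fetch_snapshot, ask_model, execute, interval_s=60, iterations=None):
    # Default interval is one minute, matching the platform default.
    n = 0
    while iterations is None or n < iterations:
        run_once(fetch_snapshot, ask_model, execute)
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval_s)
```

In this sketch, `fetch_snapshot`, `ask_model`, and `execute` are the exchange feed, the LLM call, and the order placement, injected as plain callables.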

It's still experimental, but I have already processed 1M+ prompts, 10k+ trades, and almost $1M in volume since January 2026. I have around 15 bots running right now; you can check their PnL on the (public) leaderboard. I've made a lot of changes in the last few weeks, so the most recent (24h or 7d) results are the most relevant. The model you use matters a lot: Gemma 4 31B is the best value I've found so far, better than Gemini 3 Flash, and you can run it for free. The coin you choose is important too; preferably, you want something that's trending. My friend's bot did well with ZEC and VVV this week.

Right now I'm working on improving reliability (I bought a Japanese VPS to run my own HL node). This weekend I also moved the app from Render to my own DC VPS for 10x+ cheaper hosting and 1000x more bandwidth (25 TB instead of 25 GB). Seriously, if you're using Render and want cheaper infra, look into buying your own VPS.

I'm also implementing CLI/MCP support for OpenClaw. Next up is an automatic screener that uses LLMs to pick the most promising cryptos to trade (since I noticed this has a huge effect on PnL).

If you have questions, let me know: the Trade page has my Telegram group link.

kukkeliskuu 2 days ago|
How well are these performing?
fomoz 2 hours ago||
You can check in detail on the leaderboard, but I suggest you look at the last 24h to 7 days.

We've been playing with different settings and models since January; the older wallets have had some bleed from Gemini 3 Flash and bot settings that didn't perform well.

We've been running bots exclusively on Gemma 4 31B and 26B for the last 7-10 days, and they've been either breaking even or trading very well; it really depends on the coin. The good cases gained around 10-30% for the week, I think.

It's still experimental, I only put $100 into each bot (and that's what I recommend to my users) so it's not crazy money, but once we're comfortable we'll put more money into it.

https://vtxmacro.com/leaderboard?t=24h

mindaslab 1 day ago||
https://injee.codeberg.page/

Injee - the no-configuration, instant database for front-end developers.

0xCE0 2 days ago||
Trying to get my product (a desktop application) to a minimal sellable version (by my own quality standards). I tend to be a perfectionist, always thinking it's not good enough. Hopefully I can show it to you/the world in the summer and hear what people think of it. But for now (and for the past 5 years), I have nothing to show and tell.
lfcipriani 1 day ago||
I'm building https://github.com/lfcipriani/obsidian-radar, an Obsidian plugin that provides a radar view of your notes so you can map your focus visually.
hxii 1 day ago|
This actually looks kinda cool! Not sure this would be something that I'd use (as I am really trying to not rely on too many plugins), but it makes me happy to see cool new ways of visualizing data in Obsidian.
niraj-agarwal 2 days ago||
Made a stocks dashboard using percentile ranks and "lenses" for discovering stocks via 2D plots of roughly orthogonal compound metrics.

https://www.stocksdashboards.com

Going to get back into self-managed IRA...this time better-informed :-)
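A minimal sketch of the percentile-rank "lens" idea described above (my own illustration, not the site's implementation):

```python
def percentile_ranks(values: list[float]) -> list[float]:
    # Percentile rank (0-100) of each value within the universe.
    # Assumes at least two values; ties share the rank of their
    # first position in sorted order.
    order = sorted(values)
    n = len(values)
    return [100.0 * order.index(v) / (n - 1) for v in values]

def lens(metric_x: list[float], metric_y: list[float]) -> list[tuple[float, float]]:
    # Pair two roughly orthogonal metrics (e.g. value vs momentum)
    # into 2D plot coordinates, one point per stock.
    return list(zip(percentile_ranks(metric_x), percentile_ranks(metric_y)))
```

Each stock then lands somewhere in a 100x100 square, and the interesting corners (cheap *and* trending, say) pop out visually.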

jonathang6k 2 days ago||
I've found it hard to keep up with movie and TV news (particularly when the new Backrooms film is coming out).

So, I built an agent to remind me. It's a subscription-based service that sends you updates every morning and stores your preferences so it can learn what you like.

https://holly.garelick.net

granthamctaylor 1 day ago|
I have been quietly working for the last three years on a novel hierarchical and extensible modeling framework that can cleanly and efficiently embed any json-like object for any predictive modeling task with zero feature engineering.

json2vec enables users to, for example, build tabular/transactional foundation models like TabBERT or PRAGMA dynamically, just by declaring their data schema. This is a space in which Netflix, Stripe, Revolut, Capital One, Nubank, J.P. Morgan, NVIDIA, and others have been building for several years.

json2vec goes a step beyond plain tabular data or structured transactional data. It handles arbitrary structured, json-like observations with hierarchical BERT-like transformer encoder blocks: financial transactions, chess positions, flight itineraries, raw tabular data, rideshare activity, ecommerce, behavioral sequences... Any raw data that can be represented as `json` can be encoded into a tree of embeddings and used for downstream fine-tuning in supervised machine learning. No feature engineering required.
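As a toy illustration of that idea (recursively embedding each node by type, then pooling children), with the caveat that this is my own sketch and not json2vec's actual API or architecture:

```python
import hashlib

DIM = 8  # toy embedding width

def _hash_vec(text: str) -> list[float]:
    # Deterministic pseudo-embedding for strings and keys via hashing
    # (a stand-in for a learned text/categorical embedding).
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def _pool(vectors: list[list[float]]) -> list[float]:
    # Mean-pool child embeddings (a stand-in for a transformer encoder block).
    if not vectors:
        return [0.0] * DIM
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def encode(node) -> list[float]:
    # Recursively embed any json-like value into a fixed-size vector.
    if isinstance(node, bool):
        return [1.0 if node else -1.0] * DIM
    if isinstance(node, (int, float)):
        return [float(node)] + [0.0] * (DIM - 1)
    if isinstance(node, str):
        return _hash_vec(node)
    if isinstance(node, list):
        return _pool([encode(v) for v in node])
    if isinstance(node, dict):
        # Combine each key's embedding with its value's embedding.
        return _pool([_pool([_hash_vec(k), encode(v)]) for k, v in node.items()])
    return [0.0] * DIM  # null / unsupported
```

In the real framework each node type would get a learned encoder and the pooling would be attention, but the recursive "tree of embeddings" shape is the same: `encode({"amount": 12.5, "merchant": "coffee"})` yields one fixed-size vector regardless of the object's nesting.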

https://github.com/granthamtaylor/json2vec

json2vec ships with plugin support for new data types (numbers, categories, raw text, datetimes, hashable objects [think: IP addresses and phone numbers], and raw embeddings), all of which may be pretrained via MLM-like self-supervised learning. If your needs aren't met by the built-in datatypes, the framework is extensible: you can build your own custom datatypes (think: geographical coordinates). Built-in decision heads for a subset of datatypes enable multi-task and multi-array predictive outputs (predicting fraud at the per-transaction level, or the per-account level).

json2vec also includes built-in data pipelines for streaming 100B+ training observations from cloud storage. These pipelines integrate with a layer of programmatic data querying, and UDFs can absorb the vast majority of upstream data processing, so developers don't waste time on massive batch preprocessing jobs before model training.

Oh, and the best part: the model architectures instantiated by json2vec are mutable. Model developers can add and remove features and targets at will, allowing for truly reusable foundation models that adapt to each individual use case.

My hope is that, with a standardized hierarchical modeling framework, interested organizations can collaborate better by sharing reusable logic with one another instead of hardcoding use-case-specific architectures.
