Posted by derek 1 day ago
I guess keep them on backend/library tasks for now. I'm sure the companies are already working on taking a snapshot of a browser page and feeding it back into a multimodal model so it can comprehend what "looking" means.
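That loop is already easy to prototype. A minimal sketch, assuming Playwright and the Anthropic SDK are installed; the URL, model name, and prompt are placeholders, not anyone's actual setup:

    import base64
    from playwright.sync_api import sync_playwright
    from anthropic import Anthropic

    # Grab a screenshot of the page the agent just touched...
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:3000")  # placeholder URL
        png = page.screenshot(full_page=True)
        browser.close()

    # ...and feed it back to a multimodal model so it can "look" at the result.
    client = Anthropic()
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": base64.b64encode(png).decode(),
                }},
                {"type": "text", "text": "Does this page render correctly? List any visual bugs."},
            ],
        }],
    )
    print(reply.content[0].text)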
What sort of subscription plan is that?
ccusage shows me getting over 10x the value this month so far, compared to what the same usage would cost via API tokens...
    npx ccusage@latest
Outputs a table of your token usage over the last few days, which it reads from the jsonl files that Claude Code leaves tucked away in the ~/.claude/ directory.
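If you're curious what it's actually reading, here's a minimal sketch of the same aggregation (the log paths and usage field names are my guess at the format, not a documented schema):

    import glob
    import json
    import os
    from collections import defaultdict

    # Tally per-day token usage from Claude Code's local JSONL logs.
    # Assumed layout: one JSON object per line, with assistant entries
    # carrying a message.usage block of input_tokens / output_tokens.
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for path in glob.glob(os.path.expanduser("~/.claude/**/*.jsonl"), recursive=True):
        with open(path) as f:
            for line in f:
                try:
                    entry = json.loads(line)
                except json.JSONDecodeError:
                    continue
                if not isinstance(entry, dict):
                    continue
                msg = entry.get("message")
                usage = msg.get("usage") if isinstance(msg, dict) else None
                if not usage:
                    continue
                day = entry.get("timestamp", "")[:10]  # ISO date prefix
                totals[day]["input"] += usage.get("input_tokens", 0)
                totals[day]["output"] += usage.get("output_tokens", 0)

    for day in sorted(totals):
        print(day, totals[day])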
I have a problem where half the time I see people talking about their AI workflow, I can't tell whether they're describing some kind of dream workflow they have in their head, or something they're actually using productively.

In my case it's more like developing a mindset and building a framework rather than pushing feature after feature. I would think it's like that for most companies. You can get an unpolished version of most apps easily, but polishing takes 3-5x the time.
Let's not even talk about development robustness, backend security, etc. AI just has way too many slip-ups for me in these areas.
That said, I'd still consider myself a heavy AI user, but I mainly use it to discuss plans (what Google used to be for) or to check whether I've forgotten anything.
For most features in my app I'm faster typing it out exactly the way I want it (with a bit of auto-complete). The whole brain coordination works better that way.
Long talk, I guess, but you're not alone; trust your instinct. You don't seem narrow-minded.
I've been on an impossibly steep learning curve this past year, and even though I should if anything be biased toward vibe coding, I still use less AI now to keep things more consistent.
I think the two camps honestly differ in skill, but also in needs. Of course you're faster vibe-coding a front-end than writing the code manually, but building a robust backend/processing system is a different tier entirely.
So instead of picking a side, it's usually best to stay as unbiased as possible and choose the right tool for the task.
To the author & anyone reading: publicly release your agent harnesses, even if they're shit or vibe-coded! I'm constantly iterating on my meta and looking to improve.
For example, an agent working on the dashboard for the Documents portion of my project has a completely different idea from the agent working on the dashboard for the Design portion. The design consistency is not there, not just visually, but architecturally. Database schema and API designs are inconsistent, for example. Even given the same input, the results are wildly different. It seems that if something can be different, it will be different.
You start to update instruction files to get things consistent, but then these end up being thousands of lines on a large project just to get the foundations right, eating into the context window.
I think ultimately we might need smaller language models trained on certain rules & schemas only, instead of on the universe of ideas that a prompt could result in. Small language models are likely the correct path.
> The design consistency is not there, not just visually, but architecturally.
Seniors always gonna have to senior. Doesn't matter if the coders are AI or humans. You have to provide enough structure for the agents to move in roughly the same direction, while allowing enough flexibility that you're not better off just writing the code yourself.
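One concrete shape that structure can take (just a sketch; the model and field names are invented for illustration): a single typed schema that every agent-generated endpoint must import and validate against, so the Documents and Design dashboards can't silently diverge.

    from pydantic import BaseModel

    # Single source of truth: agents import these models instead of
    # inventing their own response shapes per dashboard.
    class DashboardCard(BaseModel):
        id: str
        title: str
        updated_at: str  # ISO 8601 timestamp

    class DashboardResponse(BaseModel):
        section: str  # e.g. "documents" or "design"
        cards: list[DashboardCard]

    # A cheap consistency gate to run in CI over agent output:
    def validate_payload(payload: dict) -> DashboardResponse:
        return DashboardResponse.model_validate(payload)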
This is a very interesting concept
Could this be extended to the point of an LLM producing/improving itself?
If not, what are the current limitations to get to that point?
Check out the stats on aider writing aider here: https://aider.chat/HISTORY.html
Aider writing its own code is definitely cool and fits the same concept.
I'd love to see an LLM, or some sort of coding model, that modifies/trains itself.