
Posted by malgamves 16 hours ago

Files are the interface humans and agents interact with (madalitso.me)
174 points | 106 comments | page 4
tacitusarc 11 hours ago|
Does everyone just use AI to write these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much AI was used in its creation.
heavyset_go 9 hours ago||
I'd be embarrassed to put my name on AI prose without a disclaimer, and as a reader I'd be annoyed to encounter it.

IMO it's insulting to the audience: it says the reader's time and attention aren't worth the author spending their own time and attention putting their thoughts into their own words.

If you're going to do that, at least mention it's LLM output or just give me your outline prompts. I don't care what your LLM has to say; I'm capable of running your outline through my own model if I feel like it.

josephg 6 hours ago|||
> If you're going to do that at least mention it's LLM output

Yes, this! Please label AI-generated content. Pull request written by an AI? Label it as AI-generated. Blog post? Article generated with AI? Say so! It’s OK to use AI models, especially if English is your second language. But put a disclaimer in. Don’t make the reader guess.

E.g.:

> This content was partially generated by ChatGPT

Or

> Blog post text written entirely by human hand, code examples by Claude Code

fragmede 5 hours ago||||
Have any outlines you'd care to share?
coliveira 8 hours ago|||
I'm not a fan of AI and try to avoid it, but there is a difference between AI output published by someone knowledgeable and AI output that you run yourself. If an expert looked at the result and found it acceptable, then you have some assurance that it at least makes sense. Your own AI run doesn't mean anything; it could be 100% hallucination, and a non-expert will buy it as truth.
Joel_Mckay 8 hours ago||
Unfortunately, LLM slop now makes up >53% of the web, and is growing.

It is easy to spot the compacted token distribution unique to each model, but search engines still seem to promote nonsense content. =3

"Bad Bot Problem - Computerphile"

https://www.youtube.com/watch?v=AjQNDCYL5Rg

"A Day in the Life of an Ensh*ttificator "

https://www.youtube.com/watch?v=T4Upf_B9RLQ

sethev 10 hours ago|||
LLMs were trained on stuff that people wrote. I get there are "tells", but I don't really think people are as good at identifying AI-generated text as they think they are...
afro88 8 hours ago|||
I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of its output to figure out whether I could stand behind it. Now I see the tells everywhere; "It's not this. It's that." is particularly common, and I can't unsee it. (FWIW I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)

The problem, I think, with AI-generated posts is that you feel like you can't trust the content once you know it's AI. It could be partly hallucinated or misrepresented.

sethev 5 hours ago||
Yeah, but "it's not X. It's Y" is a common idiom that LLMs picked up from people. That's the point i was making. And it's starting to feel like every post has at least one comment claiming that it was AI generated.
antonvs 9 hours ago||||
Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.

adi_kurian 6 hours ago||||
Contractions
computably 6 hours ago|||
You don't have to be good at identifying AI generated text to detect low-effort slop.
malgamves 8 hours ago|||
As the author I can assure you there’s a human behind these words. Interesting times we live in, though. I often find myself questioning what’s AI and what’s not too, and at the moment we’ve offloaded that responsibility to the goodwill of authors or platform policy, which might have to change soon.
green-salt 5 hours ago|||
Nice dodge! Unfortunately, this made it more obvious.
jonmagic 4 hours ago||||
I thought it was a great post, tying together a lot of things I’ve been reading and thinking about. I couldn’t care less if you used AI, if it helps my brain expand and/or make connections I wouldn’t have made otherwise.
meindnoch 7 hours ago||||
"there’s a human behind these words"

That's a bit vague. Was the article written without the aid of LLMs? Yes or no.

torginus 7 hours ago||
Well, if the words were 100% hand-written, I assume he'd have said that.
lovecg 7 hours ago|||
As in, you used 0 AI to write or edit this text? Or some AI? I’d like to calibrate myself.
grey-area 7 hours ago||
We all know the answer to that.
q3k 11 hours ago|||
Everyone's trying to be the new thought leader enlightened technical essayist. So much fluff everywhere.
orsorna 11 hours ago||
What's wild is that with a few minutes of manual editing it would give exponential return. For instance, a lead sentence in your section saying "here's why X" that was already described by your subheading is unnecessary and could have been wholly removed.
amarant 9 hours ago|||
Exponential return? This is the front page of HN! What does exponential returns even look like?

Are you saying this post is a few edits away from becoming a New York Times bestseller?

orsorna 8 hours ago||
No, I guess I meant editing toward a text that doesn't look rushed (LLM generation being a subset of such poor writing).

But you're right, it did hit the front page, and that says more about my sensibilities not lining up with whoever is voting the article up.

gzread 10 hours ago||||
You'd have to have a good idea of how you want the document to read, which is half (or more) of the process of writing it.
antonvs 9 hours ago|||
IME many people aren't very capable of editing their own work effectively. It's why "editor" exists as a profession.
idiotsecant 10 hours ago|||
This doesn't seem particularly AI slopped to me.
einr 8 hours ago||
"Not bigger than databases. Different from databases.

It's not a website you go to — it's a little spirit that lives on your machine.

Not a chatbot. A tool that reads and writes files on your filesystem.

That's not a technical argument. It's a values argument."

goodmythical 10 hours ago|||
Does everyone just complain about people using the tools they like to use these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much whining was used in its creation.
panarky 7 hours ago||
Does everyone just easily accuse genuine, literate humans of "cheating" with AI when there's no way they could know that?

There are a lot of unique aspects of the writing in this post that LLMs don't typically generate on their own.

And there's not a "delve" or "tapestry" or even a bullet point to be found.

Also, accusations and complaints like this are off-topic and uninteresting.

We should be talking about filesystems here, not your gut instinct AI detector that has a sky-high false-positive rate.

I swear there needs to be some convention around throwing wild accusations at people you don't know based exclusively on vibes and with zero actual evidence.

bsenftner 9 hours ago|
I don't think this paradigm will last, or become the more common structure in the future. It still suffers from conflicts of persona and objective, plus individual apps will need protected file hierarchies to prevent malicious injections. I don't see this as a solution, just a deck chair shuffle.

I've been researching and building with a different paradigm, an inversion of the tool-calling concept that creates contextual agents of limited scope, but pipelines of them, with the user in control in three ways: as author of the agents, as operator of an application with a clear goal, and as a conversational collaborator on a task with one or more agents.

I create agents that live inside open source software, making that application "intelligent", and the user has the control to make the agent an expert in the type of work that human uses the software for. Imagine a word processor that, when used by a documentation author, has multiple documentation agents that co-work with the author, while that same word processor, when used by, for example, a romance novelist, has similar agents that are experts in a different literary / document goal. Do this with spreadsheets and project management software, and you get an intelligent office suite with amazing levels of user assistance.

In this structure, context/task-specific knowledge is placed inside other software, providing complex processes the user can conversationally request and compose on the fly, use and save as a new agent for repeated use, or discard as something built for the moment. The agents are inside other software, with full knowledge of that application in addition to task knowledge related to why the user is using it. It's a unified environment for agent creation, use, and live chain-of-thought editing, in context with what one is doing in other software.

I wrap the entire structure in a permission hierarchy that mirrors departments, projects, and project staff, creating an application suite structure more secure than this filesystem approach, with substantially more user controls and less exposure to malicious use. The agents are each built for a specific purpose, which limits their reach and potential for damage. Being purpose-built, the users (who are task-focused, not developers) can easily edit and enhance the agents they use, because that is the job/career they already know and continue to do, just with agent help.

visarga 7 hours ago|
Your project, while interesting as an approach, is orders of magnitude more complex than the proposition here, which is to rely on an agent's skills with file systems, bash, Python, sed, grep, and other CLI tools to find and organize data, but also to maintain its own skills and memories. LLMs have gained excellent capabilities with files and can generate code on the fly to process them. It's people realizing that you can use a coding agent for any cognitive work, and it's better because you own the file system while easily swapping the model or harness.
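As a rough illustration of that "generate code on the fly to process files" loop (this is a hedged sketch, not anyone's actual tooling; the function name and the *.txt layout are made up):

```python
from pathlib import Path

def find_notes(root: str, needle: str) -> list[str]:
    """grep-style, case-insensitive search over a tree of plain-text notes,
    the kind of throwaway file helper a coding agent writes on the fly."""
    hits = []
    for path in sorted(Path(root).rglob("*.txt")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if needle.lower() in line.lower():
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

The point is that the agent can run, read, and rewrite helpers like this against files you own, regardless of which model or harness produced them.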

I personally use a graph-like format organized as a simple text file, each node prefixed with [id] and inline-referencing other nodes by [id]; this works well with replace, diff, and git, and is navigable at larger scales without reading everything. Every time I start work I have the agent read it, and at the end update it. This ensures continuity over weeks and months of work. This is my take on filesystem-as-memory: make it a graph of nodes, but keep it simple, a flat text file. Don't prescribe structure, just node size. It grows organically as needed; I once got one to 500 nodes.
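A minimal sketch of reading such a file, following the [id] convention described above (the helper and field names are my own invention): a line beginning with [id] starts a node, and any [id] appearing in a node's body is treated as an edge to another node.

```python
import re

NODE_RE = re.compile(r"^\[([\w-]+)\]\s?(.*)$")   # "[id] text" starts a node
REF_RE = re.compile(r"\[([\w-]+)\]")             # "[id]" in a body is an edge

def parse_memory(text: str) -> dict[str, dict]:
    """Parse a flat memory file into {id: {"text": ..., "refs": [...]}}."""
    nodes, current = {}, None
    for line in text.splitlines():
        m = NODE_RE.match(line)
        if m:
            current = m.group(1)
            nodes[current] = {"text": m.group(2), "refs": []}
        elif current is not None:            # continuation line of the open node
            nodes[current]["text"] += "\n" + line
    for nid, node in nodes.items():          # resolve edges once all ids exist
        node["refs"] = [r for r in REF_RE.findall(node["text"])
                        if r != nid and r in nodes]
    return nodes
```

Because the file stays flat text, diff, git, and plain string replacement keep working; a parse like this is only needed when you want to walk the graph.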

bsenftner 4 hours ago||
It ends up being similar to how early PC software was written before people realized malicious software could be running alongside it. There used to be little to no memory safety between running programs, and this treatment of files as the contextual running memory is similar. It's a great idea until a security perspective is factored in. It will likely need to end up very much like closed applications and their way of writing proprietary files, with some security layer that is not there yet.