Top
Best
New

Posted by horseradish 14 hours ago

We should revisit literate programming in the agent era (silly.business)
239 points | 141 comments
palata 10 hours ago|
I am not convinced.

- Natural languages are ambiguous. That's the reason why we created programming languages. So the documentation around the code is generally ambiguous as well. Worse: it's not being executed, so it can get out of date (sometimes in subtle ways).

- LLMs are trained on tons of source code, which is arguably a smaller space than natural languages. My experience is that LLMs are really good at e.g. translating code between two programming languages. But translating my prompts to code is not working as well, because my prompts are in natural languages, and hence ambiguous.

- I wonder if it is a question of "natural languages vs programming languages" or "bad code vs good code". I could totally imagine that documenting bad code helps the LLMs (and the humans) understand the intent, while documenting good code actually adds ambiguity.

What I learned is that we write code for humans to read. Good code is code that clearly expresses the intent. If there is a need to comment the code all over the place, to me it means that the code is maybe not as good as it should be :-).

Of course there is an argument to make that the quality of code is generally getting worse every year, and therefore there is more and more a need for documentation around it because it's getting hard to understand what the hell the author wanted to do.

pdntspa 3 hours ago||
> because my prompts are in natural languages, and hence ambiguous.

Legalese developed specifically because natural language was too ambiguous. A similar level of specificity for prompting works wonders

One of the issues with specifying directions to the computer with code is that you are very narrowly describing how something can be done. But sometimes I don't always know the best 'how', I just know what I know. With natural language prompting the AI can tap into its training knowledge and come up with better ways of doing things. It still needs lots of steering (usually) but a lot of times you can end up with a superior result.

vnorilo 3 hours ago||
Yes. LLMs are search engines into the (latent) space of source code. Stuff you put into the context window is the "query". I've had some good results by minimizing the conversational aspect and thinking in terms of shaping the context: asking the LLM to analyze relevant files, not because I want the analysis, but because I want a good reading in the context. LLMs will work hard to stay in that "landscape", even with vague prompts. Often better than with weirdly specific or conflicting instructions.
bottd 10 hours ago|||
> If there is a need to comment the code all over the place, to me it means that the code is maybe not as good as it should be :-)

If good code was enough on its own we would read the source instead of documentation. I believe part of good software is good documentation. The prose of literate source is aimed at documentation, not line-level comments about implementation.

wvenable 5 hours ago|||
> If good code was enough on its own we would read the source instead of documentation.

That's 100% how I work -- reading the source. If the code is confusing, the code needs to be fixed.

kalaksi 2 hours ago||
Confusing code is one thing, but projects with more complex requirements or edge cases benefit from additional comments and documentation. Not everything is easily inferred from code or can be easily found in a large codebase. You can also describe e.g. chosen tradeoffs.
habinero 2 hours ago||
There's no way around just learning the codebase. I have never seen code documentation that was complete or correct, let alone both.
WillAdams 8 hours ago||||
https://diataxis.fr/

(originally developed at: https://docs.divio.com/documentation-system/) --- divides documentation along two axes:

- Action (Practical) vs. Cognition (Theoretical)

- Acquisition (Studying) vs. Application (Working)

which for my current project has resulted in:

- readme.md --- (Overview) Explanation (understanding-oriented)

- Templates (small source snippets) --- Tutorials (learning-oriented)

- Literate Source (pdf) --- How-to Guides (problem-oriented)

- Index (of the above pdf) --- Reference (information-oriented)

zenoprax 4 hours ago||
I've been trying to implement this as closely as possible from scratch in an existing FOSS project:

https://github.com/super-productivity/super-productivity/wik...

Even with a well-described framework it is still hard to maintain proper boundaries and there is always a temptation to mix things together.

AdieuToLogic 6 hours ago||||
> If good code was enough on its own we would read the source instead of documentation.

An axiom I have long held regarding documenting code is:

  Code answers what it does, how it does it, when it is used, 
  and who uses it.  What it cannot answer is why it exists.  
  Comments accomplish this.
eru 5 hours ago|||
An important addendum: code can sometimes, with a bit of extra thinking on the part of the reader, answer the 'why' question. But it's even harder for code to answer the 'why not' question, i.e. what were the other approaches that we tried and that didn't work? Or what business requirements preclude those other approaches?
AdieuToLogic 5 hours ago|||
> But it's even harder for code to answer the 'why not' question.

Great point. Well-placed documentation as to why an approach was not taken can be quite valuable.

For example, documenting that domain events are persisted in the same DB transaction as changes to corresponding entities and then picked up by a different workflow instead of being sent immediately after a commit.
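A sketch of how such a "why not" note might live next to the code. The table and function names below are hypothetical, and the transactional-outbox shape stands in for the workflow described above:

```python
import sqlite3

def place_order(conn: sqlite3.Connection, customer: str, amount: int) -> int:
    """Persist an order and its domain event atomically.

    WHY NOT send the event immediately after commit: a crash between
    the commit and the send would silently drop the event. Writing it
    to an outbox table in the same transaction guarantees a separate
    relay workflow picks it up later.
    """
    with conn:  # one transaction covers both writes
        cur = conn.execute(
            "INSERT INTO orders (customer, amount) VALUES (?, ?)",
            (customer, amount),
        )
        order_id = cur.lastrowid
        conn.execute(
            "INSERT INTO outbox (event, order_id) VALUES (?, ?)",
            ("OrderPlaced", order_id),
        )
    return order_id
```

The comment carries the decision; the code alone would read as if someone simply forgot to publish the event.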

1718627440 5 hours ago|||
I don't think this is enough to completely obsolete comments, but a good chunk of that information can be encoded in a VCS. It encodes all past approaches and also contains the reasoning and why not in annotation. You can also query this per line of your project.
eru 4 hours ago||
Git history is incredibly important, yes, but also limited.

Practically, it only encodes information that made it into `main`, not what an author merely mulled over in their head, briefly prototyped, or ran an unrelated toy simulation over.

1718627440 3 hours ago|||
If you throw away commit messages, that is on you; it is not a limitation of Git. If I am cleaning up before merging, I may rephrase things, but I am not throwing that information away. I regularly push branches under 'draft/...' or 'fail/...' to the central project repository.
kalaksi 2 hours ago||
Sounds easier (for everybody) to just use comments.
necovek 3 hours ago|||
In fairness to GP, they said VCS, not Git, even if they are somewhat synonymous today. Other VCSes did support graph histories.

Still, "3rd dimension" code reasoning (backwards in time) has never been merged well with code editing.

necovek 3 hours ago|||
Good naming and good tests can get you 90% of the way to "why" too.
necovek 3 hours ago||||
Having "grown up" on free software, I've always been quick to jump into code when documentation was dubious or lacking: there is only one canonical source of truth, and you need to be good at reading it.

Though I'd note two kinds of documentation: docs how software is built (seldom needed if you have good source code), and how it is operated. When it comes to the former, I jump into code even sooner as documentation rarely answers my questions.

Still, I do believe that literate programming is the best of both worlds, and I frequently lament the dead practice of doing "doctests" with Python (though I guess Jupyter notebooks are in a similar vein).

Usually, the automated tests are the best documentation you can have!
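For readers who never saw the practice: a minimal doctest sketch (the `slugify` function is invented for illustration). The examples in the docstring are prose and test at once, so the documentation cannot quietly drift from the behavior.

```python
import re

def slugify(title: str) -> str:
    """Convert a post title to a URL slug.

    The examples below are executable documentation: running
    `python -m doctest` on this file fails if the prose and the
    behavior drift apart.

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  CamelCase   and   spaces ")
    'camelcase-and-spaces'
    """
    # Lowercase, keep alphanumeric runs, join with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```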

habinero 2 hours ago|||
> If good code was enough on its own we would read the source instead of documentation.

Uh. We do. We, in fact, do this very thing. Lots of comments in code is a code smell. Yes, really.

If I see lots of comments in code, I'm gonna go looking for the intern who just put up their first PR.

> I believe part of good software is good documentation

It is not. Docs tell you how to use the software. If you need to know what it does, you read the code.

baq 3 hours ago|||
Docs and code work together as mutually error correcting codes. You can’t have the benefits of error detection and correction without redundant information.
ghywertelling 2 hours ago||
> With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?

I think this is true. Your point supports it. If either the explanation / intention or the code changes, the other can be brought into sync. Beautiful post. I always hated the fact that research papers don't read like novels, e.g. "OK, we tried this, which was unsuccessful, but then we found another adjacent approach and it helped."

Computer Scientist Explains One Concept in 5 Levels of Difficulty | WIRED

https://www.youtube.com/watch?v=fOGdb1CTu5c

Computer scientist Amit Sahai, PhD, is asked to explain the concept of zero-knowledge proofs to 5 different people; a child, a teen, a college student, a grad student, and an expert. Using a variety of techniques, Amit breaks down what zero-knowledge proofs are and why it's so exciting in the world of cryptography.

alkonaut 33 minutes ago|||
Maybe if we had a really terse and unambiguous form of English? Whenever there is ambiguity we insert parentheses and operators to really make it clear what we mean. We can enclose different sentences in brackets to make the scope of a logical condition clear, and so on. Oh wait
hosh 10 hours ago|||
I don’t have my LLMs generate literate programming. I do ask it to talk about tradeoffs.

I have full examples of something that is heavily commented and explained, including links to any schemas or docs. I have gotten good results when I ask an LLM to use that as a template, telling it that not everything in there needs to be used; it cuts down on hallucinations by quite a bit.

awesome_dude 8 hours ago|||
> Natural languages are ambiguous. That's the reason why we created programming languages. So the documentation around the code is generally ambiguous as well. Worse: it's not being executed, so it can get out of date (sometimes in subtle ways).

I loathe this take.

I have rocked up to codebases where there were specific rules banning comments because of this attitude.

Yes comments can lie, yes there are no guards ensuring they stay in lock step with the code they document, but not having them is a thousand times worse - I can always see WHAT code is doing, that's never the problem; the problem is WHY it was done in this manner.

I put comments like "This code runs in O(n) because there are only a handful of items ever going to be searched - update it when there are enough items to justify an O(log2 n) search"

That tells future developers that the author (me) KNOWS it's not the most efficient code possible, but it IS when you take into account things unknown by the person reading it

Edit: Tribal knowledge is the worst type of knowledge: it's assumed that everyone knows it and passes it along when new people onboard, but the reality (for me) has always been that the people doing the onboarding had fragments, or incorrect assumptions about what was being conveyed to them, and just like the children's game of "telephone" the passing of the knowledge always ends in disaster

AdieuToLogic 6 hours ago|||
> Yes comments can lie ...

Comments only lie if they are allowed to become lies.

Just like a method name can lie. Or a class name. Or ...

bonesss 4 hours ago||
Right.

The compiler ensures that the code is valid, but what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method? … some dude living in that module all day every day exercising monk-like discipline? That is unworkable for a few reasons, notably the routine failure of such efforts over time.

Module names and namespaces and function names can lie. But they are also corrected wholesale and en-masse when first fixed, those lies are made apparent when using them. If right_pad() is updated so it’s actually left_pad() it gets caught as an error source during implementation or as an independent naming issue in working code. If that misrepresentation is the source of an emergent error it will be visible and unavoidable in debugging if it’s in code, and the subsequent correction will be validated by the compiler (and therefore amenable to automated testing).

Lies in comments don’t reduce the potential for lies in code, but keeping inline comments minimal and focused on exceptional circumstances can meaningfully reduce the number of aggregate lies in a codebase.

deathanatos 3 hours ago||
> what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method?

And for that matter, what ensures it is even correct the first time it is written?

(I think this is probably the far more common problem when I'm looking at a bug, newly discovered: the logic was broken on day 1, hasn't changed since; the comment, when there is one, is as wrong as the day it was written.)

awesome_dude 2 hours ago||
But, you've still got an idea of why things were done the way they were - radio silence is....

Go ask Steve, he wrote it, oh, he left about 3 years ago... does anyone know what he was thinking?

larusso 5 hours ago|||
I don’t disagree here. I personally like to put the why into commit messages though. It’s my longtime fight to make people write better commit messages. Most devs I see describe what they did. And in most cases that is visible from the change-set. One has to be careful here as similar to line documentation etc everything changes with size. But I prefer if the why isn’t sprinkled between source. But I’m not dogmatic about it. It really depends.
awesome_dude 4 hours ago||
https://conventionalcommits.org/en/v1.0.0/

I <3 great (edit: improve clarity) commit comments, but I am leaning more heavily toward good comments at the same level as the dev is reading - right there in the code - rather than telling them to look at git blame and find the appropriate commit message (keeping in mind that there might have been changes to the line(s) of code, and commits might intertwine, making it a mission to find the commit holding the right message(s)).

edit: I forgot to add - commit messages are great, assuming the people merging the PR into main aren't squashing the commits (a lot of people do this because of a lack of understanding of our friend rebase)

k32k 8 hours ago|||
"But translating my prompts to code is not working as well, because my prompts are in natural languages, and hence ambiguous."

Not only that, but there's something very annoying and deeply dissatisfying about typing a bunch of text into a thing for which you have no control over how it's producing an output, nor can an output be reproduced even if the input is identical.

Agreed, natural language is very ambiguous and becoming more ambiguous by the day ("what exactly does 'vibe' mean?").

People spoke in a particular way, say 60 years ago, that left very little room for interpretation of what they meant. The same cannot be said today.

caseyohara 7 hours ago||
> People spoke in a particular way, say 60 years ago, that left very little room for interpretation of what they meant. The same cannot be said today.

Surely you don’t mean everyone in the 1960s spoke directly, free of metaphor or euphemism or nuance or doublespeak or dog whistle or any other kind of ambiguity? Then why are there people who dedicate their entire life to interpreting religious texts and the Constitution?

k32k 7 hours ago||
Compared with today, on average, they did.

There's a generation of people that 'typ lyk dis'.

So yes.

jyounker 4 hours ago|||
Your point is less persuasive than you intended. You complain about linguistic ambiguity, but then you show an example of sensible spelling reform.
ChrisGreenHeur 2 hours ago|||
that example is regarding syntax, and is actually no worse than any other
casey2 4 hours ago||
Programming languages are natural and ambiguous too: what does READ mean? You have to look it up to see the types. The power comes from the fact that it's auditable, and that you don't need to audit it every time you want to write some code. You think you write good code? Try to prove it after the compiler gets through with it.

Natural languages are richer in ideas. It may be harder to get working code going from a purely natural description than from code to code, but you don't gain much from just translating code. One is only limited by your imagination; the other already exists, and you could just call it as a routine.

You only have a SENSE for good code because it's a natural language with conventions and shared meaning. If the goal of programming is to learn to communicate better as humans then we should be fighting ambiguity not running from it. 100 years from now nobody is going to understand that your conventions were actually "good code".

musicale 4 hours ago||
> Programming languages are natural and ambiguous too

Programming languages work because they are artificial (small, constrained, often based on algebraic and arithmetic expressions, boolean logic, etc.) and have generally well-defined semantics. This is what enables reliable compilers and interpreters to be constructed.

s3anw3 6 minutes ago||
I think the tension between natural language and code is fundamentally about information compression. Code is maximally compressed intent — minimal redundancy, precise semantics. Prose is deliberately less compressed — redundant, contextual, forgiving — because human cognition benefits from that slack.

Literate programming asks you to maintain both compression levels in parallel, which has always been the problem: it's real work to keep a compressed and an uncompressed representation in sync, with no compiler to enforce consistency between them.

What's interesting about your observation is that LLMs are essentially compression/decompression engines. They're great at expanding code into prose (explaining) and condensing prose into code (implementing). The "fundamental extra labor" you describe — translating between these two levels — is exactly what they're best at.

So I agree with your conclusion: the economics have changed. The cost of maintaining both representations just dropped to near zero. Whether that makes literate programming practical at scale is still an open question, but the bottleneck was always cost, not value.

beernet 1 hour ago||
Literate programming sounds great in a blog post, but it falls apart the moment an agent starts hallucinating between the prose and the actual implementation. We’re already struggling with docstrings getting out of sync; adding a layer of philosophical "intent" just gives the agent more room to confidently output garbage. If you need a wall of text to make an agent understand your repo, your abstractions are probably just bad. It feels like we're trying to fix a lack of structural clarity with more tokens.
rorylaitila 41 minutes ago||
Even on the latest models, LLMs are not deterministic between "don't do this thing" and "do this thing". They are both related to "this thing" and depending on other content in the context and seed, may randomly do the thing or not. So to get the best results, I want my context to be the smallest possible truthful input, not the most elaborated. More is not better. I think good names on executable source code and tightest possible documentation is best for LLMs, and probably for people too.
rednafi 10 hours ago||
I think a lighter version of literate programming, coupled with languages that have a small API surface but are heavy on convention, is going to thrive in this age of agentic programming.

A lighter API footprint probably also means a higher amount of boilerplate code, but these models love cranking out boilerplate.

I’ve been doing a lot more Go instead of dynamic languages like Python or TypeScript these days. Mostly because if agents are writing the program, they might as well write it in a language that’s fast enough. Fast compilation means agents can quickly iterate on a design, execute it, and loop back.

The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things. Mostly because the language doesn’t prevent obvious footguns like nil pointer errors, subtle race conditions in concurrent code, or context cancellation issues. So people rely heavily on patterns, and agents are quite good at picking those up.

My version of literate programming is ensuring that each package has enough top-level docs and that all public APIs have good docstrings. I also point agents to read the Google Go style guide [1] each time before working on my codebase. This yields surprisingly good results most of the time.

[1] https://google.github.io/styleguide/go/

username223 6 hours ago|
> The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things.

Go was designed based on Rob Pike's contempt for his coworkers (https://news.ycombinator.com/item?id=16143918), so it seems suitable for LLMs.

rustybolt 12 hours ago||
I have noticed a trend recently that some practices (writing a decent README or architecture doc, being precise and unambiguous with language, providing context, literate programming) that were meant to help humans were not broadly adopted, with the argument that it's too much effort. But when done to help an LLM instead of a human, a lot of people suddenly seem to be a lot more motivated to put in the effort.
ptak_dev 4 minutes ago||
This is the pattern I keep noticing too. A lot of "good engineering hygiene" that got dismissed as overhead is now paying dividends specifically because agents can consume it.

Detailed commit messages: ignored by most humans, but an agent doing a git log to understand context reads every one. Architecture decision records: nobody updates them, but an agent asked to make a change that touches a core assumption will get it wrong without them.

The irony is that the practices that make code legible to agents are the same ones that make it legible to a new engineer joining the team. We just didn't have a strong enough forcing function before.

zdragnar 12 hours ago|||
In my years of programming, I find that humans rarely give documentation more than a cursory glance up until they have specific questions. Then they ask another person if one is available rather than read for the answer.

The biggest problem is that humans don't need the documentation until they do. I recall one project that extensively used docblock style comments. You could open any file in the project and find at least one error, either in the natural language or the annotations.

If the LLM actually uses the documentation in every task it performs- or if it isn't capable of adequate output without it- then that's a far better motivation to document than we actually ever had for day to day work.

1718627440 5 hours ago|||
I think this really depends on culture. If you target OS APIs or the libc, the documentation is stellar. You have several standards, then conceptual documentation and information about particular methods, all with historical and implementation notes; there is also an interactive hypertext system. I solve 80% of my questions by just looking at the official documentation, which is also installed on my computer. For the remaining questions I often try the WWW, but these are often so specific that it is more successful to just read the code.

Once I step out of that ecosystem, I wonder how people even cope with the lack of good documentation.

suzzer99 8 hours ago||||
The other problem is that documentation is always out of date, and one wrong answer can waste more time than 10 "I don't knows".
ijk 10 hours ago|||
I have discovered that the measure of good documentation is not whether your team writes documentation, but whether they read it.
hinkley 12 hours ago|||
Paraphrasing an observation I stole many years ago:

A bunch of us thought learning to talk to computers would get us out of learning to talk to humans, and so we spent 4 of the most important years of emotional growth engaging in that, only to graduate and discover we were even farther behind everyone else in that area.

analog31 5 hours ago||
This raises an interesting point. I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing, they're also going to have a hard time writing human-readable code. The two things are rooted in the same basic abilities. Writing documentation or comments in the code at least gives someone two slim chances at understanding them, instead of just one.

I have the opposite problem. Granted, I'm not a software developer, but only use code as a problem solving tool. But once again, adding comments to my code gives me two slim chances of understanding it later, instead of one.

hinkley 3 hours ago|||
I think there’s some of that, but it’s also probably a thing where people who make good tutors/mentors tend to write clearer code as well, and the Venn diagram for that is a bit complicated.

Concise code is going to be difficult if you can’t distill a concept. And that’s more than just verbal intelligence. Though I’m not sure how you’d manage it with low verbal intelligence.

1718627440 5 hours ago|||
> I've speculated that if someone has a hard time expressing themselves to other humans verbally or in writing

I don't think they actually have problems with expressing themselves; code is also just a language with a very formal grammar, and if you use that approach to structure your prose, it's also understandable. The struggle is more to mentally encode non-technical domain knowledge, like office politics or emotions.

jpollock 12 hours ago|||
Documentation rots a lot more quickly than the code - it doesn't need to be correct for the code to work. You are usually better off ignoring the comments (even more so the design document) and going straight to the code.
hinkley 12 hours ago||
I maintain that if this is the case on your project, you're either grossly misappropriating the time and energy of new and junior devs, or you have gone too long since hiring a new dev and your project is stagnating because of it.

New eyes don’t have the curse of knowledge. They don’t filter out the bullshit bits. And one of the advantages of creating reusable modules is you get more new eyes on your code regularly.

This may also be a place where AI can help. Some of the review tools are already calling us out on making the code not match the documentation.

habinero 2 hours ago||
No, they're 100% correct. This has been my experience at every place I've worked at in SV, from startup to FAANG.

You write the code so you can scan it easily, and you build tools to help, and you ask for help when you need it, but you still gotta build that mental map out

jimbokun 7 hours ago|||
Well maybe if those people were managing one or more programmers and not writing the code themselves, they would have worked similarly.
cmrdporcupine 10 hours ago|||
I've had LLMs proactively fix my inline documentation. Rather pleasant surprise: "I noticed the comment is out of date and does not reflect the actual implementation" even asking me if it should fix it.
jimbokun 7 hours ago||
I find LLMs more diligent about keeping documentation up to date than any human developer, including myself.
what 5 hours ago||
The difference is that they’re using the LLM to write those readmes and architecture and whatever else documents. They’re not putting any effort in.
perrygeo 12 hours ago||
Considering LLMs are models of language, investing in the clarity of the written word pays off in spades.

I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern.

I'd maybe reframe it as a focus on communication rather than full "literate programming". Notebooks, examples, scripts and such can go a long way toward reinforcing the patterns.

Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.

crazygringo 12 hours ago||
Yeah, I think what is needed is somewhere between docstrings+strategic comments, and literate programming.

Basically, it's incredibly helpful to document the higher-level structure of the code, almost like extensive docstrings at the file level and subdirectory level and project level.

The problem is that major architectural concepts and decisions are often cross-cutting across files and directories, so those aren't always the right places. And there's also the question of what properly belongs in code files, vs. what belongs in design documents, and how to ensure they are kept in sync.

amelius 12 hours ago||
Also:

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

-- Linus Torvalds

Swizec 11 hours ago|||
> "Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

If you get the architecture wrong, everyone complains. If you get it right, nobody notices it's there.

esafak 11 hours ago||
The SRE's Lament.
Terr_ 10 hours ago||
"Nothing needs fixing, so what do we pay you for?"

"Everything's broken! What do we even pay you for!?"

k32k 8 hours ago|||
Doesn't this apply to the hysteria around LLMs?

The question being - are LLMs 'good' at interpreting and making choices/decisions about data structures and relationships?

I do not write code for a living but I studied comp sci. My impression was always that the good software engineers did not worry about the code, not nearly as much as the data structures and so on.

skydhash 7 hours ago||
The only use of code is to process data, aka information. And any knowledge worker knows that the success of processing information mostly relies on how it's organized (try operating a library without an index).

Most of the time is spent researching what data is available and learning what data should be returned after the processing. Then you spend a bit of brain power to connect the two. The code is always trivial. I don't remember ever discussing code in the workplace since I started my career. It was always about plans (hypotheses), information (data inquiry), and specifications (especially when collaborating).

If the code is worrying you, it would be better to buy a book on whatever technology you're using and refresh your knowledge. I keep bookmarks in my web browser and have a few books on my shelf that I occasionally page through.

jimbokun 7 hours ago||
Notebooks are an example of literate programming.
cfiggers 12 hours ago||
Interesting and semi-related idea: use LLMs to flag when comments/docs have come out of sync with the code.

The big problem with documentation is that if it was accurate when it was written, it's just a matter of time before it goes stale compared to the code it's documenting. And while compilers can tell you if your types and your implementation have come out of sync, before now there's been nothing automated that can check whether your comments are still telling the truth.
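A rough sketch of the plumbing such a checker would need before any LLM is involved: extract comment/code pairs with line numbers, so a periodic job can ask "does this comment still describe this code?" and report findings like compiler warnings. This is a naive, Python-only heuristic invented for illustration:

```python
def stale_comment_candidates(source: str) -> list[tuple[int, str, str]]:
    """Pair each '#' comment line with the code line that follows it.

    Each tuple is (line number of the code, comment text, code text),
    ready to hand to an LLM or a reviewer. Naive by design: it ignores
    inline and block comments.
    """
    lines = source.splitlines()
    pairs = []
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped.startswith("#") and i + 1 < len(lines):
            nxt = lines[i + 1].strip()
            if nxt and not nxt.startswith("#"):
                pairs.append((i + 2, stripped, nxt))  # 1-based line numbers
    return pairs
```

The interesting part - judging whether the pair still agrees - is exactly the fuzzy-matching task LLMs are suited for.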

Somebody could make a startup out of this.

kaycebasques 11 hours ago||
I'm a technical writer. Off the top of my head I reckon at least 10 startups have … started up … in this space since 2023.
andyhasit 11 hours ago|||
I once had a mad idea of creating an automated documentation-driven paradigm where every directory/module/class/function has to have a DocString/JSDoc, with the higher-level ones (directory/module) essentially being the documentation of features and architecture.

A ticket starts by someone opening a PR with suggested changes to the docs, the idea being that a non-technical person like a PM or tester could do it. The PR then passes to a dev who changes the code to match the doc changes. Before merging, the tool shows the doc next to every modified piece of code and the reviewer must explicitly check a box to say it's still valid.

And docstrings would be able to link to other docstrings, so you could find out what other bits of code are connected to what you're working on (as that link doesn't always exist in code, e.g. across APIs) and read their docs to find the larger context and gotchas.
spawarotti 12 hours ago|||
There is at least one startup doing it already (I'm not affiliated with it in any way): https://promptless.ai/
cfiggers 9 hours ago||
Thanks for the pointer. That looks more to me like it's totally synthesizing the docs for me. I can see someone somewhere wanting that. I would want a UX more like a compiler warning. "Comment on line 447 may no longer be accurate." And then I go fix it my own dang self.
amelius 11 hours ago|||
Why would you need comments from an AI if you can just ask it what the code is doing?
jimbokun 6 hours ago|||
Because the human needs to tell the AI whether it’s the code or the comment that’s wrong.
melagonster 8 hours ago|||
Because only a human writer can explain why he resolved it that way. But nobody wants to update comments every time.
esafak 11 hours ago||
If you have CI hooked up to AI, you could just use an SLM to do that in a periodic job with https://github.github.com/gh-aw/ or https://www.continue.dev/. You could also have it detect architectural drift.
jph00 12 hours ago||
Nearly all my coding for the last decade or so has used literate programming. I built nbdev, which has let me write, document, and test my software using notebooks. Over the last couple of years we integrated LLMs with notebooks and nbdev to create Solveit, which everyone at our company uses for nearly all our work (even our lawyers, HR, etc).

It turns out literate programming is useful for a lot more than just programming!

mkl 2 hours ago|
This seems to be the best link? https://solve.it.com/

The name is quite hard to search for, as it's used by a lot of different things.

Jeremy, it's pretty hard to understand what this is from the descriptions, and the two videos are each ~1 hour long. Please consider showing screenshots and one or two short videos.

cadamsdotcom 12 hours ago|
Test code and production code in a symmetrical pair have lots of benefits. It's a bit like double-entry accounting: you can view the code's behavior through the lens of the code itself, or through the lens of the code that proves it does what it seems to do.

You can change the code by changing either tests or production code, and letting the other follow.

Code reviews are a breeze because if you’re confused by the production code, the test code often holds an explanation - and vice versa. So just switch from one to the other as needed.

Lots of benefits. The downside is how much extra code you end up with of course - up to you if the gains in readability make up for it.
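A toy illustration of the double-entry idea (names and numbers invented): the test restates the production code's behavior in concrete terms, so either side can explain the other.

```python
def proration(days_used: int, days_in_cycle: int, fee_cents: int) -> int:
    """Charge for the unused part of a billing cycle, rounding down."""
    remaining = days_in_cycle - days_used
    return fee_cents * remaining // days_in_cycle

# The test is the other ledger entry: it states the behavior in
# concrete numbers, so a reader confused by the arithmetic above can
# read the intent here instead (and vice versa).
def test_proration_mid_cycle():
    # 10 of 30 days used on a $30.00 plan -> $20.00 still owed
    assert proration(10, 30, 3000) == 2000

def test_proration_rounds_down():
    # 2/3 of 100 cents, floored
    assert proration(1, 3, 100) == 66
```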

More comments...