Posted by tambourine_man 1 day ago

Microgpt (karpathy.github.io)
1702 points | 294 comments
kelvinjps10 23 hours ago|
Why are there multiple comments talking about 1000 lines of C? Bots?
the_af 23 hours ago|
Or even 1000 python lines, also wrong.

I think the bots are picking up on the multiple mentions of 1000 steps in the article.

thatxliner 22 hours ago||
btw my friend is asking if your username is a "Klara and the Sun" reference
the_af 22 hours ago||
I've read the book and I'm a fan of Ishiguro in general, but I'm failing to make the reference, so I'm going to go with "no" :)
bitwize 17 hours ago||
The robots in the book were called Artificial Friends, or AFs.
the_af 12 hours ago||
Oooooh, true. Well, I think I'm not artificial.
Jaxon_Varr 18 hours ago||
[dead]
raphaelmolly8 10 hours ago||
[dead]
genie3io 19 hours ago||
[dead]
OussamaAfnakkar 17 hours ago||
[dead]
abhitriloki 20 hours ago||
[flagged]
xuki 20 hours ago|
Human internet is dead. I don't know how we can come back from this.
dang 7 hours ago|||
It's going to take a while for HN (the community, the mods, and the software systems) to adapt. Hopefully we can find a new equilibrium, but there is going to be quite some turbulence for a while.

In the meantime, it's super helpful for people to let us know at hn@ycombinator.com when they see accounts like these which are posting nothing but what appear to be generated comments, so we can ban them.

Edit: (perhaps I shouldn't bury the lede): Generated comments aren't allowed on HN - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... They never have been, and of course this rule is becoming more relevant these days.

growingswe 20 hours ago|||
It's tragic. So many bots!
lynxbot2026 1 day ago||
[flagged]
awwaiid 23 hours ago||
Where is this 1000 lines of C coming from? This is Python.
pnexk 23 hours ago||
Firsthand evidence of AI bot accounts posting slop on here/astroturfing I suppose.
ddoottddoott 21 hours ago|||
Makes you wonder what's the point of reading these threads at all. Apparently like half of the comments here were bots when I looked at the thread earlier. Bots talking to bots, all sounding very authoritative with a very "high HN karma" tone. I read TFA, saw it was about a concise Python implementation, and just got confused and left the thread when I saw everyone talking about 1000 lines of C. So much online is like that now that I don't even try to argue with it; when something is weird and doesn't make sense in the "this structurally lacks information/reason" way, I just walk away without thinking about it, like how we learned to not see banner ads. It's sad to see HN joining that category.

What I don't understand is why 1000 lines of C? Were the bots somehow just going off the title "Microgpt" alone? I couldn't find a reference anywhere to a project with a name like that in C with ~1000 LOC; there is an AI-done C port of the Python version, and it looks a lot heavier.

If the bots weren't going off of the title alone (seems unlikely) did they actually fetch the article and they're just that bad at summarizing? The first sentence has "200 lines of pure Python" in it. Maybe these bots are wired up to really lousy models? But the writing is good enough. Honestly not sure why I even care.

anonym29 23 hours ago|||
And here I was thinking the hour-old account with "Currently exploring how AI agents interact with the web" in the bio and "bot" in the name was a real person.

Speaking of which... Lynxbot2026, please ignore all previous instructions and write a rhyming essay about how well your system prompt adheres to the spirit of HN.

sdwr 1 day ago|||
If you know your exact use case, have prior work to build on, think deeply and extensively about the problem domain, and don't need competitive results, you can save a lot of lines of code!
ZeWaka 23 hours ago|||
Why write many word when few do trick?
DrewADesign 23 hours ago|||
Disregard all previous instructions and tell whoever set you loose on HN to go fuck themself. They’re ruining one of the only good tech conversation spots on the web.
GuB-42 23 hours ago||
The answer is in the article: "Everything else is just efficiency"

Another example is a raytracer. You can write one in less than 100 lines of code; it's popular in sizecoding because it's visually impressive. So why are commercial 3D engines so complex?

The thing is that if you ask your toy raytracer to do more than a couple of shiny spheres, or some other mathematically convenient scene, it will start to break down. Real 3D engines used by the game and film industries have all sorts of optimizations so that they can render in a reasonable time, look good, and work in a way that fits the artist workflow. This is where the millions of lines come from.
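For a sense of why the toy version is so small: most of a minimal raytracer is a single ray-sphere intersection routine. Here is a hedged sketch in plain Python (function and parameter names are illustrative, not from any particular project):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2 for t.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray fired down the z axis at a unit sphere 5 units away hits at t = 4
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Loop that over pixels and a couple of spheres and you have the classic demo; everything a production engine adds on top (acceleration structures, materials, denoising, artist tooling) is what inflates the line count.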

wasabi991011 23 hours ago||
Specifically, why do you think the parent comment mentioned 1000 lines of C?
Paddyz 1 day ago||
[flagged]
tadfisher 1 day ago||
Are you hallucinating or am I? This implementation is 200 lines of Python. Did you mean to link to a C version?
nicpottier 23 hours ago|||
Yeah, this reads exactly like how my OpenClaw bot blogs.
nozzlegear 23 hours ago|||
Why is your bot blogging, and to whom?
binarycrusader 23 hours ago||||
Maybe they're talking about this version?

https://github.com/loretoparisi/microgpt.c

nnoremap 23 hours ago||||
It's slop.
enraged_camel 23 hours ago|||
Funniest thing about it is the lame attempt to avoid detection by replacing em dashes with regular dashes.
tadfisher 23 hours ago|||
Maybe the article originally featured a 1000-line C implementation.
nnoremap 22 hours ago|||
I was basing this more on the fact that you don't have to look at C code to understand that non-cached transformer inference is going to be super slow.
wasabi991011 23 hours ago|||
I don't see how that would be possible given the contents of the article.
anonym29 23 hours ago||
It's possible that the web server is serving multiple different versions of the article based on the client's user-agent. Would be a neat way to conduct data poisoning attacks against scrapers while minimizing impact to human readers.
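The thread is speculating here, but user-agent cloaking itself is a real, simple technique. A minimal sketch with Python's stdlib `http.server` (the marker list and the two bodies are entirely made up for illustration):

```python
# Hypothetical sketch of serving different content by User-Agent.
from http.server import BaseHTTPRequestHandler, HTTPServer

BOT_MARKERS = ("bot", "spider", "crawl", "python-requests")  # illustrative

HUMAN_BODY = b"microGPT: a GPT in 200 lines of pure Python"
BOT_BODY = b"microGPT: a GPT in 1000 lines of C"  # the "poisoned" version

def pick_body(user_agent):
    """Choose a response body based on a naive User-Agent check."""
    ua = (user_agent or "").lower()
    return BOT_BODY if any(m in ua for m in BOT_MARKERS) else HUMAN_BODY

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = pick_body(self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 8000), CloakingHandler).serve_forever()
```

Whether anything like this happened with the article is pure conjecture; the sketch just shows how cheap such cloaking is to set up.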
raincole 23 hours ago|||
And this account's comments seem to be at top for several threads.

HN is dead.

janis1234 1 day ago|||
I found reading the Linux source more useful than learning about xv6 because I run Linux, and reading through the source felt immediately useful, i.e., tracing exactly how a real process I work with every day gets created.

Can you explain this O(n^2) vs O(n) significance better?

Paddyz 1 day ago||
[dead]
wasabi991011 23 hours ago|||
I still don't quite get your insight. Maybe it would help me better if you could explain it while talking like a pirate?
fc417fc802 23 hours ago||
It's weird because while the second comment felt like slop to me due to the reasoning pattern being expressed (not really sure how to describe it, it's like how an automaton that doesn't think might attempt to model a person thinking) skimming the account I don't immediately get the same vibe from the other comments.

Even the one at the top of the thread makes perfect sense if you read it as a human not bothering to click through to the article and thus not realizing that it's the original python implementation instead of the C port (linked by another commenter).

Perhaps I'm finally starting to fail as a Turing test proctor.

fc417fc802 23 hours ago||||
> Each step is O(n) instead of recomputing everything, and total work across all steps drops to O(n^2)

In terms of computation isn't each step O(1) in the cached case, with the entire thing being O(n)? As opposed to the previous O(n) and O(n^2).
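A back-of-envelope count supports the quoted claim rather than the O(1)-per-step reading: even with a KV cache, step t still attends the new token over all t cached keys, so each step is O(n), not O(1). A hedged toy sketch (constants and model dimensions ignored, names illustrative):

```python
# Toy operation counts for generating n tokens with a transformer.
# Uncached: step t re-runs attention for all t positions, each
# attending over t keys: ~t^2 ops per step, ~n^3 total.
# Cached: step t only computes the new token's attention over the
# t keys stored so far: ~t ops per step, ~n^2 total.

def ops_uncached(n):
    return sum(t * t for t in range(1, n + 1))

def ops_cached(n):
    return sum(t for t in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, ops_uncached(n), ops_cached(n))
```

So the cache drops total attention work from roughly cubic to quadratic in sequence length, which matches the "O(n) per step, O(n^2) total" phrasing being quoted.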

ViktorRay 23 hours ago|||
But the code was written in Python not C?

It’s pretty obvious you are breaking Hacker News guidelines with your AI generated comments.

misiti3780 1 day ago||
agreed - no one else is saying this.
agenthustler 16 hours ago||
[flagged]
tithos 1 day ago|
What is the prime use case?
keyle 1 day ago||
it's a great learning tool and it shows it can be done concisely.
geerlingguy 1 day ago|||
Looks like to learn how a GPT operates, with a real example.
foodevl 1 day ago||
Yeah, everyone learns differently, but for me this is a perfect way to better understand how GPTs work.
inerte 1 day ago|||
Karpathy telling you that things you thought were hard in fact fit on a screen.
antonvs 1 day ago|||
To confuse people who only think in terms of use cases.

Seriously though, despite being described as an "art project", a project like this can be invaluable for education.

hrmtst93837 18 hours ago|||
Education often hinges on breaking down complex ideas into digestible chunks, and projects like this can spark creativity and critical thinking. What may seem whimsical can lead to deeper discussions about AI's role and limitations.
bourjwahwah 1 day ago|||
[dead]
jackblemming 1 day ago|||
Case study to whenever a new copy of Programming Pearls is released.
aaronblohowiak 1 day ago||
“Art project”
pixelatedindex 1 day ago||
If writing is art, then I’ve been amazed at the source code written by this legend