Posted by kristianpaul 3 hours ago

Train Your Own LLM from Scratch (github.com)
148 points | 18 comments
JoeDaDude 11 minutes ago|
Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it's a good problem to have, deciding which learning resource to use.

[0] https://github.com/rasbt/LLMs-from-scratch

[1] https://www.manning.com/books/build-a-large-language-model-f...

[2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...

jvican 2 hours ago||
If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers the same curriculum in much more depth, and introduces you to the theoretical side (scaling laws, intuitions) and to systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
the_real_cher 1 hour ago|
How does one get the lectures? I don't see any option for them.
eftychis 1 hour ago|||
https://github.com/stanford-cs336/lectures
azangru 24 minutes ago|||
One goes to YouTube and searches for cs336?
antirez 12 minutes ago||
Context: he is one of the MLX developers, a skilled ML researcher.
NSUserDefaults 1 hour ago||
Been doing it since the day I was born. The beginnings were hard but I’m getting there.
steveharing1 13 minutes ago||
The documentation is helpful enough to get started.
ofsen 42 minutes ago||
This looks like an exact copy of Andrej Karpathy's video ( https://youtu.be/kCc8FmEb1nY ) but in written form. Am I wrong?
hiroakiaizawa 1 hour ago||
Nice. What scale does this realistically reach on a single machine?
lynx97 47 minutes ago||
Model: 36L/36H/576D, 144.2M params

Runs on a Blackwell 6000 Max-Q, using 86GB of VRAM. Training supposedly takes 3h40m.
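
As a sanity check on those numbers: here's a rough non-embedding parameter count for that config, assuming a standard GPT-style block with four d x d attention projections and a 4x MLP expansion (the repo's exact architecture may differ):

    # Rough parameter count for 36 layers at d_model=576, assuming a
    # standard GPT block: attention projections (Q, K, V, O) plus a
    # 4x-expansion MLP. Embeddings, biases, and layernorms are left out.
    n_layers, d_model = 36, 576
    attn = 4 * d_model * d_model      # four d x d projections
    mlp = 2 * 4 * d_model * d_model   # up- and down-projection
    total = n_layers * (attn + mlp)
    print(f"{total / 1e6:.1f}M")      # ~143.3M

That lands at ~143.3M, so the quoted 144.2M is consistent with a plain transformer at this width and depth plus a small (possibly tied) embedding.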

iamnotarobotman 2 hours ago||
This looks great for a first introduction to training LLMs, and it looks simple enough to try locally. Great job!
baalimago 2 hours ago|
Train your LM from scratch*

I doubt you have a machine big enough to make it "Large".

mips_avatar 1 hour ago||
You can fully train a 1.6B model on a single 3090. That's a reasonably big model.
electroglyph 47 minutes ago||
you can train it, but not fully
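
For a rough sense of why this is contested: a back-of-envelope VRAM estimate for training a 1.6B-parameter model with Adam in mixed precision (my assumptions; activations and batch size come on top):

    # Bytes per parameter for mixed-precision training with Adam:
    # fp16 weights (2) + fp16 grads (2) + fp32 master weights (4)
    # + two fp32 Adam moment buffers (8) = 16 bytes/param.
    params = 1.6e9
    bytes_per_param = 2 + 2 + 4 + 8
    print(f"{params * bytes_per_param / 2**30:.1f} GiB")  # ~23.8 GiB

That's right at the edge of a 3090's 24GB before any activations, which is why it only fits with tricks like an 8-bit optimizer (shrinking the Adam buffers) or activation checkpointing.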
nucleardog 1 hour ago||
Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

I could totally train a LLM! Or at least my family could... might need my kid to pick up and carry on the project.

But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

This is about learning concepts, and the rest of this is mostly moot.

On the pedantic or wrong notes: what is the documented cut-off for a "large" language model? Because GPT-2 was, and still is, described as a "large" language model. It had 1.5B parameters. These days you can get a consumer GPU capable of training that for about $400.

Malcolmlisk 21 minutes ago||
Then rewrite the title and call it "learn how to build an unusable LLM from scratch".
improbableinf 4 minutes ago||
Opus 4.7 is unusable for the tasks I have, but it's still considered an LLM.

And nothing stops anyone from tweaking a few parameters in this repo to go above 10M parameters.
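
To make that concrete: a minimal sketch of how the non-embedding parameter count grows as you widen and deepen the config (the variable names and configs here are illustrative, not the repo's actual settings):

    # Same rough 12 * n_layers * d_model^2 estimate as above; the
    # configs below are hypothetical examples, not the repo's defaults.
    def approx_params(n_layers: int, d_model: int) -> int:
        return 12 * n_layers * d_model * d_model

    for n_layers, d_model in [(6, 384), (12, 768), (24, 1024)]:
        print(n_layers, d_model, f"{approx_params(n_layers, d_model) / 1e6:.0f}M")
    # 6 layers at 384 is ~11M; 12 at 768 is ~85M; 24 at 1024 is ~302M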