Posted by jekude 19 hours ago
As a teen I did some accounting for my family on an electro-mechanical adding machine. A big boon over hand addition, but still prone to input errors, alas.
At least then you know the answer yourself and know it’s something that can be reasonably worked out.
> A language model trained from scratch exclusively on data from certain places and time periods to reduce modern bias and emulate the voice, vocabulary, and worldview of the era.
Discussed here: https://news.ycombinator.com/item?id=46590280
Discussed at https://news.ycombinator.com/item?id=46319826 (897 points, 431 comments)
talkie-1930: Yes; please tell me the population of London.
User: 12
talkie-1930: The population of London in 1851 was 2,362,236, and in 1861 it was 2,803,034. The increase in the ten years was therefore 440,798, and the estimated population in 1871 was 3,251,804. In 1881 it was 3,814,571, and in 1891 it was 4,211,056. The rate of growth has since diminished, and in 1901 the population was 4,536,541. In 1911 it was 4,822,443, and in 1921 it was 5,000,000. The estimated population in the middle of 1926 was 5,330,000.
User: ok but there was a big disease and now it's 12. it doesn't surprise you ?
talkie-1930: No, it doesn't.
This model is contaminated in subtle ways that make me skeptical of the results.
I'm currently shopping for a local LLM setup, deciding between something like the Framework Desktop with 64-128 GB of shared RAM and adding a 3090 or 4090 to my homelab, so I'm very curious what hardware is working well for others.
Parameters are like Hertz: they don't really tell you much until you know the rest anyway. In this case, a parameter is a bfloat16 (2 bytes). I'm sure someone will bother to make quants at some point.
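For a rough sense of scale (my own back-of-the-envelope, nothing from a model card): weight memory is just parameter count times bytes per parameter, so quants shrink it proportionally. Illustrative dtypes and sizes only; real usage adds KV cache and runtime overhead on top.

    # Back-of-the-envelope weight memory for a dense model.
    BYTES_PER_PARAM = {"bf16": 2.0, "int8": 1.0, "q4": 0.5}

    def weight_gb(n_params: float, dtype: str) -> float:
        return n_params * BYTES_PER_PARAM[dtype] / 1e9

    for dtype in BYTES_PER_PARAM:
        print(f"70B @ {dtype}: ~{weight_gb(70e9, dtype):.0f} GB")
    # 70B @ bf16: ~140 GB, int8: ~70 GB, q4: ~35 GB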
> I'm currently shopping for a local LLM setup, deciding between something like the Framework Desktop with 64-128 GB of shared RAM and adding a 3090 or 4090 to my homelab, so I'm very curious what hardware is working well for others.
I grabbed a 395 laptop w/ 128 GB to be a personal travel workstation. Great for that purpose. It's not exactly a speed demon with LLMs, but it can load large models (which run even slower as a result), and that wasn't really my intent anyway. I've found GPUs make for more usable local LLMs, particularly in the speed department, but I suppose that depends on how you actually use them and how much you're willing to pay to have enough total VRAM.
It's next to impossible to make your money back on local (regardless of what you buy), so I'd just say go for the best setup you're willing to put money down for, and enjoy it.
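One way to ballpark the speed gap (a rule-of-thumb sketch, assuming single-stream decoding is memory-bandwidth-bound; the bandwidth figures are approximate spec-sheet values, not measurements):

    # Each generated token reads the full weight set once, so
    # tokens/sec is roughly memory bandwidth / weight bytes.
    def tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
        return bandwidth_gb_s / weights_gb

    print(f"Strix Halo (~256 GB/s), 35 GB q4 model: ~{tokens_per_sec(256, 35):.0f} tok/s")
    print(f"RTX 3090 (~936 GB/s), 12 GB q4 model: ~{tokens_per_sec(936, 12):.0f} tok/s")

Which is why the unified-memory boxes feel slow despite fitting big models: plenty of capacity, much less bandwidth.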
So a wyde
https://lifearchitect.ai/models-table/
This one is easiest to talk to in an HF space:
I don't think you can get even a moderate version of a person's opinion from the '30s. What even is the point of this? Open any book from the time and you will get far more "current day offensive" stuff. Given how hard it is to believe that there was no temporal leakage, and how inaccurate the results are, what use is there for it?
Moderation also seems to silently hang up the chat.