Posted by mmayberry 1 day ago
https://www.reuters.com/business/meta-acquires-ai-agent-soci...
https://techcrunch.com/2026/03/10/meta-acquired-moltbook-the...
He should probably hire a proper "number 2" (not someone political like Sandberg) -- someone who "gets" the internet, the way he did back when he was a Harvard geek building a hot-or-not clone in his dorm room. I'm not sure acqui-hiring the Moltbook founders is the move.
That said, I think the one silver lining is that big tech now seems willing to hire people who actually ship things of value, like Peter Steinberger. Another nail in the coffin for LeetCode, I hope.
Eventually there may be a misstep big enough to bring down the company, but he’s never come close to date. He’s so good at making money from ads that he can afford to keep burning cash on fruitless projects, hiring people who don’t deliver, and building infrastructure he might not need. That’s a testament to his performance as a moneymaker.
Meta is an advertising machine. Not something I’d want to be associated with, but you cannot deny that he has built an incredible ad machine, probably the greatest ever built. And whereas Google had to deliver sophisticated, costly tech to sustain its machine (Maps, Search, Gmail), Meta’s only technical breakthrough has been hyperscaling a PHP website.
1. https://en.wikipedia.org/wiki/Social_bot#Meta
2. https://en.wikipedia.org/wiki/Dead_Internet_theory#Facebook
Nope, turns out it is just a bunch of out-of-touch execs throwing shit at the wall and hoping something sticks. Fudging Llama 4 benchmark scores. Hiring Alexandr Wang for $14 billion. Making outlandish offers to poach AI talent from OpenAI, Anthropic, and Google. Making dubious acquisitions like Manus. Now chasing the agent hype by acquiring a company that went viral for five minutes and has already been forgotten.
It is laughable how far out of the loop they are, and how desperate they are to fit in.
RoPE? The position encoding method published 2 years before Llama and already in models such as GPT-J-6B?
DPO, a method whose paper had no experiments with Llama?
QLoRA? The third in a series of quantization works by Tim Dettmers, the first two of which pre-dated Llama?
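For reference, RoPE is nothing exotic: each pair of query/key channels gets rotated by a position-dependent angle, so attention scores end up encoding relative position. A rough NumPy sketch (the channel pairing below follows the common "rotate-half" convention used in several open implementations; it's an illustration, not any particular model's code):

    import numpy as np

    def rope(x, base=10000.0):
        # x: (seq_len, dim) with dim even; base=10000 is the usual default from the RoFormer paper.
        seq_len, dim = x.shape
        half = dim // 2
        freqs = base ** (-np.arange(half) / half)        # per-pair rotation speed
        angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half): angle = position * freq
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[:, :half], x[:, half:]                # pair channel i with channel i + half
        return np.concatenate([x1 * cos - x2 * sin,
                               x1 * sin + x2 * cos], axis=-1)

That's essentially the whole trick, which is why it was already in GPT-J-6B and others well before Llama shipped.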