That's also why Apple is so worried about their App Store revenue above all else. The legal argument they make is that the 30% take is an IP licensing scheme, but that valuation of IP is Soviet-central-planning nonsense. Certainly, if the App Store were just there to take 30% from games, Apple wouldn't be defending it this fiercely[0], and they wouldn't have burned goodwill trying to impose the 30% on Patreon.
Likewise, the value of generative AI is not that the AI is going to give us post-scarcity mental labor or even that AI will augment human productivity. The former isn't happening and the latter is dwarfed by the fact that AI is a rules exploit to access a bunch of copyrighted information that would have otherwise cost lots of money. In that environment, it is unethical to evaluate the technology solely on its own merits. My opinion of your model and your thinly-veiled """research""" efforts will depend heavily on what the model is trained for and on, because that's the only intelligent way to evaluate such a thing.
Did you train on public domain or compensated and consensually provided data? Good for you.
Did you train an art generator on a bunch of artists' deviantART or Dribbble pages? Fuck off, slopmonger.
Did you train on a bunch of Elsevier journals? You know what? Fuck them, they deserve it, now please give me the weights for free.
Humans can smell exploitation a mile away, and the people shitting on AI are doing so because they smell the exploitation.
[0] As a company, Apple has always been mildly hostile to videogames. Like, strictly speaking, operating a videogame platform requires special attention to backwards compatibility that only Microsoft and console vendors have traditionally been willing to offer. The API stability guarantees Apple and Google provide - i.e. "we don't change things for dumb reasons, but when we do change them we expect you to move within X years" - are not acceptable to anything other than perpetually updated live-service games. The one-and-done model of most videogames is not economically compatible with the moving target that is Apple platforms.
I think this hazard extends up and down too; there's a balance each of us strikes between how much we regard possibility & value versus how much we default to looking for problems or denial. This becomes a pattern of perspective people adopt. And I worry so much about how doubt & denial pervade. In our hearts and… well… in the comments, everywhere.
I get it and I respect it; it's true: we need to be aware, alert, and on guard. Everything is very complicated. Hazards and bad patterns abound. But especially as a techie, finding possibility is enormously valuable to me. Being willing to believe in and amplify the maybe, even in a challenging situation. I cherish that so much.
Thank you very much Steve Yegge for the life-changing experience of Notes from the Mystery Machine Bus. I did not realize, did not have the framing to understand, the base human motivations behind tech & building & the comments. I see the world so differently for grokking the thesis there, and I see much more of the outlooks people come from than I did before. It has pushed me in life to look for higher possibility & reach, & to avoid closings of the mind, to avoid rejecting, to avoid fear, uncertainty, and doubt. https://gist.github.com/cornchz/3313150
It's one of the most Light Side vs Dark Side noospherically illuminating pieces I've ever read. The article here touches upon those who care, and what they see: it frames the world. Yegge's post, I think, reflects further, back at the techie, on what happens to caring, thoughtful people: Carlin's arc of idealist -> disappointed -> cynic. And to me Notes was a rallying cry to have fortitude, to keep a certain purity of hope close, and to work against thought-terminating fear, uncertainty, and doubt.
I find this argument even stranger. Every system can be reduced to its parts and made to sound trivial thereby. My brain is still just neurons firing. The world is just made up of atoms. Humans are just made up of cells.
> There’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
This shows that the author is not very curious, because it's easy to take the worst examples from the cheapest models and extrapolate. It's like asking a baby some questions and judging humanity's potential on that basis. What's the point of this?
> The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no.
I'll spill the real tension here for all of you. There are people who really like their comfy jobs and have become attached to their routine. Their status, self-worth, and everything else is attached to it. Anything that disrupts this routine is obviously worth opposing. It's quite easy to see how AI can make a person's life better - I have so many examples. But that's not what "leftists" care about - it's about the security of their jobs.
The rest of the article is pretty low quality and full of errors.
I find this line of reasoning compelling. Curiosity (and trying to break things) will get you a lot of fun. The issue I find is that people don't even try to break things (in interesting ways), but repeat common failure modes more as gospel than as an observed experiment. The fun thing is that even the strawberry issue tells us more about the limitations of LLMs than not. In other words, that error is useful...
<< Their status, self-worth, and everything else is attached to it. Anything that disrupts this routine is obviously worth opposing.
There is some of that, for sure. Of all days, today I had my manager argue against using AI for a use case that would affect his buddy's workflow. I let it go because I am not sure what it actually means, but some resistance is based on 'what we have always done'.
That's a fair way to look at it - failure modes tell us something useful about the underlying system. In this case it tells us something about how LLMs work at the token level.
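(As a side note, here is a minimal sketch of that token-level point, assuming the open-source tiktoken tokenizer and a GPT-4-era vocabulary; the exact split varies by tokenizer. The model operates on subword pieces rather than letters, so character counting is not a primitive operation for it, while it is trivial in ordinary code.)

    # Sketch: why letter-counting is awkward for a token-based model.
    # Assumes the `tiktoken` package is installed.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE vocabulary
    word = "strawberry"

    token_ids = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

    # The model "sees" subword pieces, e.g. ['str', 'aw', 'berry'],
    # so individual letter boundaries are never part of its input.
    print(pieces)

    # Counting characters is trivial once you operate on the string itself.
    print(word.count("r"))  # 3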
But if you go a step beyond that, you would realise that this problem is solved at a _general_ level by the reasoning models. OpenAI's o1 was internally codenamed Strawberry, as far as I remember. This would be a nice discussion to have, but instead we get a shallow dismissal of AI as a technology based on a failure mode that has been pretty much solved.
What really has not been solved is long context and continual learning (and world model stuff but I don't find that interesting).
I wonder about that. In a sense, the solution seems simple: allow more context. One of the issues, based on the progression of ChatGPT models, was that too much context allowed for a much easier jailbreak, and the fear most corporates have over that makes me question the service. Don't get me wrong, I am not one of those people missing 4o for telling me "I love you". I do miss its now-nerfed capability to go across all conversations. Working context has been made narrower now. For a paid sub, that kind of limitation is annoying.
My point is, I know there are some interesting trade-offs to be made (mostly because I am navigating them on a local inference machine), but with all those data centers one would think providers have enough capacity to solve that... if they so chose.
But the trivialization does not come from the reduction to parts; it comes from what parts you end up with.
It is like realizing that the toy that seems to be able to figure out a path around obstacles cannot actually "see", but works by a clever arrangement of gears.
In this case, can you come up with things that the toy can't do but a toy with eyes could do?