I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft and its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network, to make sure there is a good, solid ecosystem of AI options that corporations and people can use.
The gap was that workers were using their own implementation instead of the company's implementation.
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
These third-party apps drive huge token usage with agentic patterns. So losing them, and being forced to build more internal products tuned to specific use cases, is not something they want to take on.
The fact that it is mid is why they really need all the other lines of business to work: selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products littering the market and ruining AI's image as the new catch-all solution.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
Then you have Java and C#, where you need a whole IDE if you're writing more than 10 lines, because using anything brings the whole jungle with it.
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example: given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)", because getBlah has a return type of TypeA and baz is the only variable available in scope. But having all that context at that point requires a whole bunch of setup beforehand, like fine-grained types supported by a rich type system, explicit function signatures, and so on, which incentivizes verbosity that usually scales with the complexity of the app.
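A minimal sketch of that completion scenario in Java (TypeA, Bar, Baz, and getBlah are the hypothetical names from the example above, not any real API):

    // The declared type of foo lets completion narrow "bar." down to
    // methods returning TypeA; baz is the only in-scope value matching
    // getBlah's parameter type, so the IDE can fill in the whole call.
    class TypeA {}
    class Baz {}

    class Bar {
        TypeA getBlah(Baz baz) { return new TypeA(); }
    }

    class Demo {
        void example(Bar bar, Baz baz) {
            TypeA foo = bar.getBlah(baz); // completed from "bar."
        }
    }

None of that inference works without the explicit type on every declaration, which is exactly the verbosity being traded for tooling.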
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
Unfortunately, the languages came before such sophisticated tooling existed; when good tools did appear they were expensive, and even now that those tools are widely and freely available, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some aspects.)
So the current reputation of these languages encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
Same with Lisp. Take Emacs as an example: you have instant documentation on every function. Another example is Python, where there’s a help system embedded into the language.
Java is basically unwritable without a full indexer and completion. But it has a lot of guardrails and its verbosity discourages deviation.
And today we have Swift and Kotlin, which are barely better. They do a lot of magic behind the scenes to reduce verbosity, but you’re still reliant on the indexer, which is now coupled with a compiler for the magic stuff.
Better languages insist on documentation, contextual help, shorter programs, no magic unless created by the programmer, and visibility (inspection with a debugger, and traceability with the system source available, if possible).
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acqui-hired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
Imagining one negative spin doesn’t an imagination make. Imagine harder.
I don't think there is any money given, except travel costs for the first and last week.
> Thank you for your application. We will contact a select group of applicants in the coming weeks. If you are not contacted, we’d love to have you apply for the next cohort.
They can't even be bothered to ask ChatGPT to send a "no" email. Incredible.
This is the first time I'm hearing this term. It's a euphemism, like "pre-owned" cars instead of "used" cars.
What does this mean? People who do not yet have any idea? Weird.
Spoiler: it didn't go anywhere. The story on HN is still here:
https://news.ycombinator.com/item?id=3700712
but the link is 404
I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).
Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.
(I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)
The thing that makes me continually have ideas is the same thing that makes me not want to dedicate my life to implementing just one of them. It would be like picking a favourite child if I were producing offspring like a queen bee.
I think there is value in the effort to develop something; frequently, implementing something well is worth as much as, and sometimes much more than, a simple proof of concept. Someone has to build the things, and it should be the people who are good at that and feel rewarded by a job done well more than by a job done differently.
I do think a lack of perspective on the lives other people lead can cause odd side effects. Some people keep their ideas secret, or overvalue an idea because it was the one they had. That's a perspective I find hard to relate to. Most of the creative people I know are much happier when someone knows about their creations. Ideas are like grains of sand, each with its own details, and each can be evaluated in many different ways. A lot of intellectual property feels like watching a man jealously protecting his grain of sand while standing on a beach.
I believe that is why the intent of things like copyright is not to protect ideas themselves. You cannot copyright an idea, and as an ideas person (a rather horrid term) that feels appropriate. The thing you have built around the idea is the valuable thing you have contributed to the world. I think that is why copyrightable items are referred to as works. The value you bring comes from the work you did, not the idea you had; ideas just come to you (often at inconvenient times).
Mass media causes a bit of an aberration because of this. What makes someone wealthy from a popular work is not proportional to the work done to produce it, or even to its quality. Works that can be easily reproduced and distributed receive a reward disproportionate to their quality. A median-quality work in many fields can receive next to no reward, while the most popular works receive a massive one. The mechanism of controlling supply to reward work ends up shaping a supply-demand curve that gives huge rewards to a very few and very little to the majority. There is still an element of merit to the successes - the popular things are popular for a reason, and some of them really are the best. The question is whether they would still have been the best if everyone who worked to create things were rewarded more in proportion to quality: would that support enough development of ability and opportunity that the pool from which the best are selected becomes much larger?
[this might have gone off topic, but obviously my brain has things that have to come out]
It seems that there is a constant motive on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.
First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).
Then, when that lead evaporated thanks to Anthropic and others, OpenAI argued against AI regulations (so that they could catch up, and presumably argue for regulations again).
Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts.
Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models giving away everything for free when their competitor is Google, a huge for-profit company.
As for your claim that they are scammy: what about them is scammy?
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained so that it looks great in evaluations, but it is unusable in practice and far underperforms the actual open-weight models you can run via Ollama. That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.
It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.
I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.
It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.
[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Isn't that a good thing? The comments here are neither sponsored nor endorsed by YC.
I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry, YC included.
Lots of people come here for the forum, and leave the relationship with YC there.
Because our views are our own and not reflective of the feelings of the company that hosts the forum?
Big tech companies (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on the concept of copyright law, it is an indisputable fact that, to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent, or even the knowledge, of the original authors.
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I’d suggest some self-reflection: ask yourself why you need to carry water for them.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
This feels like a program to see what sticks.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
Would this have been viewed with skepticism if any other startup selling an API had done this five-plus years ago? If so, then how is it not even worse when it's done by a startup that is supposed to be providing access to what is pushed as a technical marvel, a panacea even?
Sometimes I feel like I'm taking crazy pills...
I literally help companies implement AI systems, so I'm not denying there's value... just... I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" when the same company was just reported to have inked a $300B deal with Oracle for infra... like, come on... the demand isn't there yet?!
Isn't that how we got (and eventually lost) most Google products?
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they would reach AGI imminently, at which point “all problems are solved” - in other words, the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
Exactly what I read between the lines on this.
The more such engineers they train, the more profitable it is for them, since those engineers spread the word.
For the first cohort, they're probably going to accept extroverted people with an active social presence.
Next up, we're funding prenatal individuals.