Posted by meetpateltech 1 day ago

Ex-GitHub CEO launches a new developer platform for AI agents (entire.io)
596 points | 559 comments
ajbajb 1 day ago|
[flagged]
TOMDM 1 day ago||
This is what Claude had to say about your comment if we're doing this now:

Imagine being so intellectually lazy that you can't even be bothered to form your own opinion about a product. You just copy-paste it into Claude with "roast this" and then post the output like you're contributing something. That's not criticism, that's outsourcing your personality to an API call. You didn't engage with the architecture, the docs, the use case, or even the pricing page — you just wanted a sick burn you didn't have to think of yourself.

dakolli 1 day ago||
Are people doing this thing now where they can't even judge a product or a website by themselves? Or read and analyze anything without asking an LLM to do it for them?

2026: The year everyone fried their brain with Think for Me SaaS.

beart 1 day ago|||
I know a guy who uses AI to answer every question in his life. It tells him how to raise his kids, how to spend time with his wife. He takes it to the park with him and asks it what he should do there (on his phone). When people ask him questions, he forwards those questions directly to his phone and uses the response.

For any new piece of technology, there is a subset of people whom it will completely and utterly destroy.

murukesh_s 1 day ago||||
Try driving without Google Maps. It's a slippery slope we common folks are on, and there is no coming back - except for a few purists...
dakolli 1 day ago||
I think there's a distribution of agency in humans, which is why we have insults like "npcs". It's probably not fair to use that word to describe people, but the cliche has some truth to it, and I think a lot of tech exploits this.

I personally rarely need to use Google Maps, and when I do, it's a glance at the beginning of a trip; I can find my way from there through normal navigation. I might look again if I get lost, whereas I have friends who use it for directions to go five blocks. I don't think sense of direction is innate either; it's a muscle you build, and some people choose not to work on that muscle and suffer the consequences, albeit minor ones.

I think we are seeing something similar with LLMs and the development and maintenance of reading, planning, creative, and critical thinking skills. While some people might have a higher baseline, I think everyone has the ability to strengthen those muscles, and the world implores us to in many situations. Now, however, we can pay Altman a tenth of a cent to offload that workout onto a GPU, much like people do with navigation and maps. Tech companies love to exploit the dopamine-driven response to taking shortcuts and getting somewhere quickly; it's no different here.

I think (/know) the implications of this are much more hazardous than the consequences of not exercising your navigational abilities, and at least with navigation there are fallbacks to assist people (signs, landmarks, etc.). There are no societal fallbacks for LLM-assisted thinking once someone becomes dependent on it for all aspects of analysis, planning, and creativity. Once it is taken away (or they can no longer afford the quality of output they previously had), where do those natural abilities stand? The implications are very terrifying, in my opinion.

I'm personally trying to stay as far away as possible from these things. I see where this is heading, and it's not as inconsequential as needing Maps to navigate five blocks. I do not want my critical thinking skills correlated 1:1 with the quality and quantity of tokens I can afford or have access to, any more than I want my navigational abilities correlated 1:1 with the quality of the Maps service available to me.

People will say that this is cope, it's the new calculator, whatever... Have fun. I promise you that not knowing trigonometry but having access to an LLM does not give you the ability to write CAD software. I actually think not using these things will give you a huge competitive advantage in the future. Someone who has great navigation skills will likely win a navigational competition in the mountains, or survive longer in certain situations. While the scope of those skills is narrow, it still proves a point[0]. The scope of your reading, critical thinking, creativity, and planning skills is not limited.

[0]: It should be noted that some of the world's most high-agency and successful people actually participate in navigation as a sport called orienteering, and spend boatloads of money on it... I wonder why that is?

ajbajb 1 day ago|||
[dead]
willmarquis 22 hours ago|
The thread is missing the forest for the trees. The interesting bet here isn't git checkpoints—it's that someone is finally building the observability layer for agent-generated code.

Most agent frameworks (LangChain, Swarm, etc.) obsessed over orchestration. But the actual pain point isn't "how do I chain prompts"—it's "what did the agent do, why, and how do I audit/reproduce it?"

The markdown-files-in-git crowd is right that simple approaches work. But they work at small scale. Once you have multiple agents across multiple sessions generating code in production, you hit the same observability problems every other distributed system hits: tracing, attribution, debugging failures across runs.

The $60M question is whether that problem is big enough to justify a platform vs. teams bolting on their own logging. I'm skeptical—but the underlying insight (agent observability > agent orchestration) seems directionally correct.

doctoboggan 21 hours ago||
@dang with the launch of open claw I have seen so many more LLM slop comments. I know meta comments like mine aren't usually encouraged, but I think we need to do something about this as a community. Is there anything we can do? (Either a ban or at least required full disclosure for bot comments would be nice.)

EDIT: I suspect the current "solution" is just to downvote (which I do!), but people who don't chat with LLMs daily might not recognize the telltale signs, so I often see these comments highly upvoted.

Maybe that means people want LLM comments here, but it severely changes the tone and vibe of this site and I would like to at least have the community make that choice consciously rather than just slowly slide into the slop era.

Zacharias030 21 hours ago|||
Parent comment has the rhythm of an AI comment. I caught myself not noticing until you mentioned it; it seems I am more attuned to LLM slop on Twitter, which is usually much worse. But on a second look it's clear here too: the comment takes no stance and is very generic.

@dang I would welcome a small secondary button for community-driven marking of comments as AI, just so we know.

gabriel-uribe 20 hours ago||||
The moltbook-ification of every online forum seems inevitable this year. I wish we had a counter to this.
neom 4 hours ago||
I've been thinking about this. One solution I wonder about: put a really hard problem in the sign-up flow that humans couldn't solve; if it gets solved during signup, it's a bot. Basically a flipped captcha. Not sure how tf to actually build that, and I suspect it would only work for so long.
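A toy sketch of that check, just to make the idea concrete (the problem and the 60-second threshold are invented for illustration):

    # Hypothetical "flipped captcha": a problem chosen so a human won't
    # solve it within the signup window, but an automated agent will.
    HARD_PROBLEM = "Factor 62773913 into its two prime factors."
    ANSWER = {7919, 7927}

    def looks_like_bot(submitted: set[int], seconds_taken: float) -> bool:
        # A fast, correct answer is evidence of automation,
        # not of an unusually quick human.
        return submitted == ANSWER and seconds_taken < 60.0

A bot that knows the trick just answers wrong or stalls, which is part of why it would only work for so long.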
sebmellen 21 hours ago||||
It's the dead internet theory in action. Every time I see slop I comment on it. I've found people don't always like it when you comment on it.
doctoboggan 21 hours ago||
Yes, I usually just bite my tongue and downvote, but with the launch of open claw I think the amount of slop has increased dramatically, and we need to deal with it sooner rather than later.
sebmellen 17 hours ago||
Do you really think openclaw is to blame? I shudder to think of how few protections HN has against bots like that.
fblp 13 hours ago||||
Thank you for pointing this out. I didn't catch that the parent comment was AI either, and upvoted it. I changed it to a downvote after seeing your comment and realizing the comment did indeed have many AI flags.
ijidak 21 hours ago|||
Nothing about the parent comment suggests AI, except the em dash, but that's just regular old punctuation that predates AI.
doctoboggan 21 hours ago|||
How much experience do you have interacting with LLM generated prose? The comment I replied to sets off so many red flags that I would be willing to stake a lot on it being completely LLM generated.

It's not just the em dashes - it's the cadence, tone, and structure of the whole comment.

toraway 16 hours ago||
Yeah, it's really frustrating how often I see kneejerk rebuttals assuming others are basing it solely on the presence of em dashes. That's usually a secondary data point. The obvious tells are more often structure and cadence, as you say, and, by far most importantly, a clear pattern of repeated, similar "AI smell" comments in their history that makes it 100% obvious.
clbrmbr 20 hours ago||||
I didn’t catch it until seeing these flag-raising comments… checking the other comments from the last 8 hours, it’s Claw for sure.
drc500free 20 hours ago|||
Punchy sentence. Punchy sentence. It's not A, it's B.

The actual insight isn't C, it's D.

slopbrain 21 hours ago|||
You're absolutely right! It's not the tooling, it's the platform.
SirensOfTitan 21 hours ago|||
This sounds awfully like an LLM generated comment.

I suppose it was just a matter of time before this kind of slop started taking over HN.

kristianc 21 hours ago|||
> Once you have multiple agents across multiple sessions generating code in production, you hit the same observability problems every other distributed system hits: tracing, attribution, debugging failures across runs.

This has been the story for every trend empowering developers since the year dot. Look back and you can find exactly the same said about CD, public cloud, containers, the works. The 'orchestration' (read: compliance) layers always get routed around. Always.

rockwotj 20 hours ago|||
I thought everyone was just using OpenTelemetry traces for this? This is a classic observability problem that isn't unique to agents. More important, yes, but not functionally unique.
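A minimal sketch of what I mean, assuming the Python OTel SDK (the span names and attributes are invented for illustration, not anything from entire.io):

    from opentelemetry import trace

    # Assumes a TracerProvider/exporter is configured elsewhere in the app.
    tracer = trace.get_tracer("agent-runner")

    def run_agent_step(session_id: str, step_name: str, execute):
        # One span per agent action: tracing across runs, attribution via
        # attributes, and failure debugging all come from the trace backend.
        with tracer.start_as_current_span("agent.step") as span:
            span.set_attribute("agent.session_id", session_id)
            span.set_attribute("agent.step", step_name)
            try:
                return execute()
            except Exception as exc:
                span.record_exception(exc)
                span.set_status(trace.Status(trace.StatusCode.ERROR))
                raise

Point the SDK at whatever collector you already run and you can query across sessions and runs without adopting a new platform.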
loveparade 19 hours ago||
Can you explain more how otel traces solve this problem? I don't understand how it's related.
Aeolun 22 hours ago|||
Ok, I'll grant you that if they can get agents to somehow connect to each other's reasoning in realtime, that would be useful. Right now it's me who has to play reasoning container.
kaicianflone 20 hours ago|||
This is interesting. I’m experimenting with something adjacent in an open source plugin, but focused less on orchestration and more on decision quality.

Instead of just wiring agents together, I require stake and structured review around outputs. The idea is simple: coordination without cost trends toward noise.

Curious how entire.io thinks about incentives and failure modes as systems scale.

baggy_trough 22 hours ago|||
It's not this, it's that?
jascha_eng 21 hours ago|||
Verbatim LLM output with little substance to it. HN mods don't want us to be negative, but if this is what we have to take seriously these days, it's hard to say anything else.

I guess I could not comment at all, but that feels like just letting the platform sink into the slopocalypse?

RiverCrochet 21 hours ago||||
A. B isn't C—it's D1.

E. But F, G: H1, H2...

I. J—but D2 seems K.

paodealho 21 hours ago|||
Yes—it is!
brunoborges 20 hours ago|||
I think we need an Agent EE Server Platform. :P
tjlanmp 21 hours ago|||
That is a sharp observation———it is the observability that matters! The question arises: Who observes the observers? Would you like me to create MetaEntire.ai———an agentic platform that observes Entire.io?
zack6849 20 hours ago||
I think you need a few more em-dashes there to be safe
backbay-machine 20 hours ago||
Wholeheartedly agree. We have been working hard on a solution to this and welcome any feedback and skepticism: https://github.com/backbay-labs/clawdstrike