Posted by mooreds 10 hours ago
I’m working on an open source project called consensus-tools that sits above systems like this and focuses on that gap. Agents do not just act; they stake on decisions. Multiple agents, or agents plus humans, evaluate actions independently, and bad decisions have real cost. This reduces guessing, slows risky actions, and forces higher confidence for security-sensitive decisions. Execution answers what an agent can do. Consensus answers how sure we are that it should do it.
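Roughly, the gate looks like the sketch below. This is a hypothetical illustration, not the consensus-tools API: the Vote type, the stake-weighted tally, and the risk-scaled threshold are all assumptions about how such a gate could be wired up.

```python
from dataclasses import dataclass

# Hypothetical sketch of a stake-weighted consensus gate; names and
# thresholds are illustrative, not the consensus-tools API.

@dataclass
class Vote:
    evaluator: str   # agent or human casting the vote
    approve: bool    # independent judgment on the proposed action
    stake: float     # amount the evaluator puts at risk on this call

def consensus_gate(votes: list[Vote], risk: float) -> bool:
    """Approve an action only if stake-weighted approval clears a threshold
    that rises with risk (0.0 = benign, 1.0 = security-sensitive)."""
    total = sum(v.stake for v in votes)
    if total == 0:
        return False  # nobody willing to stake anything -> do not act
    approval = sum(v.stake for v in votes if v.approve) / total
    # Benign actions need a simple majority; risky ones need near-unanimity.
    threshold = 0.5 + 0.45 * risk
    return approval >= threshold

if __name__ == "__main__":
    votes = [
        Vote("agent-a", approve=True, stake=10.0),
        Vote("agent-b", approve=True, stake=5.0),
        Vote("human-reviewer", approve=False, stake=20.0),
    ]
    # A security-sensitive action: the dissenting human's stake blocks it.
    print(consensus_gate(votes, risk=0.9))  # False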
[0] https://github.blog/changelog/2025-08-15-github-actions-poli...
Also, a reminder: if you run Codex/Claude Code/whatever directly inside a GitHub Action without strong guardrails, you risk leaking credentials or performing unsafe write actions.
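As a minimal illustration of one such guardrail (not any vendor's recommended setup), you can at least stop the agent process from inheriting credential-bearing environment variables; the variable prefixes and the command below are placeholders.

```python
import os
import subprocess

# Illustrative guardrail only: scrub credential-like variables from the
# environment before handing control to an agent process in CI. The agent
# command is a placeholder, not a specific vendor CLI invocation.

SENSITIVE_PREFIXES = ("GITHUB_TOKEN", "AWS_", "GCP_", "AZURE_", "NPM_TOKEN")

def scrubbed_env() -> dict[str, str]:
    """Return a copy of the environment with credential-like variables removed."""
    return {
        k: v
        for k, v in os.environ.items()
        if not k.startswith(SENSITIVE_PREFIXES)
    }

def run_agent(cmd: list[str]) -> int:
    """Run the agent with no inherited credentials and no shell."""
    return subprocess.run(cmd, env=scrubbed_env(), shell=False).returncode

if __name__ == "__main__":
    # Placeholder command; substitute whatever agent entry point you use.
    run_agent(["echo", "agent would run here"])
```

In a real workflow you would also scope the job's token permissions as tightly as possible rather than relying on environment scrubbing alone.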
Thanks, yes, this is crucial.
Two years, then we'll know if and how this industry has been completely revolutionized.
By then we'd probably have an AGI emulator, built out of agents.
As for the domain, this is the same account that has been hosting GitHub projects for more than a decade. Pretty sure it is legit. The org ID is 9,919, from 2008.
If you are changing your product for AI, you don’t understand AI. AI doesn’t need you to do this, and it doesn’t make you an AI company if you do.
AI companies like Anthropic, OpenAI, and maybe Google will simply integrate at a more human level and use the same tools humans used in the past, but do so with higher speed and reliability.
All this effort is wasted, as AI doesn’t need it, and your company is spending millions, maybe billions, to be an AI company that will likely be severely devalued as AI advances.
Or you can go full accelerationist and give an agent the role of standing up all the agents. But then you need someone whose job is to be angry when the $7000 cloud bill arrives.