Posted by CoffeeOnWrite 5 days ago
That said, I do think it would be nice for people to note in pull requests which files in the diff contain AI-generated code. It's still a good idea to review LLM-generated code with a slightly different lens than human code; the mistakes each makes tend to differ in flavor, and knowing which is which would save me time in a review. Has anyone seen this at a larger org, and is it of value to you as a reviewer? Maybe some toolsets can already do this automatically (I'd assume all these companies reporting the % of their code that is LLM-generated must have one, if they actually have such granular metrics?)
> The article opens with a statement saying the author isn't going to reword what others are writing, but the article reads as that and only that.
Hmm, I was just saying I hadn't seen much literature or discussion on trust dynamics in teams with LLMs. Maybe I'm just in the wrong spaces for such discussions but I haven't really come across it.
What is an oracle? That's a system that:
- Knows things - Has pre-crawled, indexed information about specific domains
- Answers authoritatively - Not just web search, but curated, verified data
- Connects isolated systems - Apps can query Gnosis instead of implementing their own crawling/search
- May have some practical use for blockchain actions (typically a crypto "oracle" bridges web data with chain data; in this context the "oracle" is AI + storage + transactions on the chain)
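The "apps query the oracle instead of implementing their own crawling/search" idea can be sketched as a small client helper. Everything below is hypothetical: the payload shape, field names, and `fake_transport` are illustrative assumptions, not the actual Gnosis API.

```python
import json

def build_oracle_query(question, domain=None, verified_only=True):
    """Build a query payload for a hypothetical oracle endpoint.
    (Payload shape is illustrative, not the real Gnosis API.)"""
    payload = {"q": question, "verified_only": verified_only}
    if domain:
        payload["domain"] = domain
    return json.dumps(payload)

def query_oracle(question, transport, **kw):
    """Send the query via an injected transport (in practice an HTTP POST
    to the local service); injecting it lets apps test without a server."""
    raw = transport(build_oracle_query(question, **kw))
    return json.loads(raw)

# Fake transport standing in for a POST to a local oracle/MCP service.
def fake_transport(body):
    q = json.loads(body)["q"]
    return json.dumps({"answer": f"indexed result for: {q}",
                       "source": "pre-crawled"})

print(query_oracle("alpine.js docs", fake_transport)["source"])
```

The point of the sketch is the inversion: the app sends a question and gets back curated, pre-crawled data, rather than shipping its own crawler.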
The Core Components:
- Evolve: Our tooling layer - manages the MCP servers, handles deployment, monitors health. Agentic tools.
- Wraith: Web crawler that fetches and processes content from URLs, handles JavaScript rendering, screenshots, and more. Agentic crawler.
- Alaya: Vector database (streaming projected dimensions) for storing and searching through all the collected information. Agentic storage.
- Gnosis-Docker: Container orchestration MCP server for managing these services locally. Agentic DevOps.
There's more coming.
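The Alaya piece, store vectors and search them by similarity, reduces to nearest-neighbor lookup. This is a minimal sketch of that idea only: the toy embeddings and document IDs are made up, and it ignores Alaya's actual streaming/projection machinery.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "store": doc id -> embedding (real systems use learned embeddings).
store = {
    "crawl-guide": [0.9, 0.1, 0.0],
    "mcp-setup":   [0.1, 0.8, 0.2],
    "vector-db":   [0.0, 0.2, 0.9],
}

def search(query_vec, k=2):
    """Return the k doc ids whose embeddings are closest to the query."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

print(search([0.05, 0.15, 0.95]))  # → ['vector-db', 'mcp-setup']
```

A production store would add an approximate index instead of the linear scan, but the query path (embed, rank by similarity, return top-k) is the same.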
https://github.com/kordless/gnosis-evolve
https://linkedin.com/in/kordless
https://github.com/kordless/gnosis-wraith (under heavy development)
There's also a complete MCP inspection and debugging system for Python here: https://github.com/kordless/gnosis-mystic
Sorry about the JS stuff; I wrote this while fooling around with alpine.js for fun and never expected it to make it to HN. I'll get a static version up and running.
Happy to answer any questions or hear other thoughts.
Edit: https://static.jaysthoughts.com/
Static version here with slightly wonky formatting, sorry for the hassle.
Edit2: Should work well on mobile now; added a quick breakpoint.
Real nation-state threat actors, on the other hand, would face no such limitations.
On a more general level, what concerns me isn't people using it to get utility out of it (opposing that would be silly), but the power imbalance in the hands of a few, a divide that widens as more people pour their questions into it. And it's not just direct AI use: every post online eventually gets used for training, so being against it would mean no longer producing digital content.
At the moment LLMs allow me to punch far above my weight class in Python, where I'm doing a short-term job. But then, I know all the concepts from decades of dabbling in other ecosystems. Let's all admit there is a huge amount of accidental complexity (h/t Brooks's "No Silver Bullet") in our world. For better or worse, skill silos are now breaking down.
Sure, we can ask it why it did something, but any reason it gives is just text generated to sound plausible.