Posted by evakhoury 12/16/2025
Large-scale software is built to be cheap and fast, not good.
Example: AI browser agents can be exploited via prompt injection (even Google's new "User Alignment Critic" only catches 90% of attacks).
For password management, we solved this with a zero-knowledge architecture: the AI navigates websites but never sees credentials. Credentials stay in the local Keychain; the AI just clicks buttons.
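Something like this is the shape of it (a minimal sketch, not our actual code; `Browser` is a stand-in for whatever driver you use, e.g. a Playwright page, and the credential comes from the OS keychain via the keyring library):

    import keyring

    class CredentialBroker:
        """Runs locally. The agent can only request a fill by reference;
        the secret never appears in anything returned to the model."""

        def __init__(self, browser):
            self.browser = browser  # stand-in for a real driver, e.g. a Playwright page

        def fill_credential(self, service: str, username: str, selector: str) -> str:
            # The secret is read here, inside the local broker process...
            secret = keyring.get_password(service, username)
            if secret is None:
                return f"no credential stored for {service}/{username}"
            # ...and written straight into the page, never into the prompt.
            self.browser.fill(selector, secret)
            return "filled"

The agent's tool schema only exposes fill_credential(service, username, selector) and click(selector), and the return value is a status string, so nothing secret ever enters the model's context.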
Formal verification would be amazing for proving these isolation guarantees. Has anyone worked on verifying AI agent sandboxes?
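Not formal verification by any stretch, but here is the invariant I'd want a proof of, written as a plain test against the broker sketch above (pytest's monkeypatch fills in a fake keychain and browser):

    import keyring

    class FakeBrowser:
        def __init__(self):
            self.filled = {}
        def fill(self, selector, value):
            self.filled[selector] = value

    def test_secret_never_reaches_agent(monkeypatch):
        monkeypatch.setattr(keyring, "get_password", lambda service, user: "hunter2")
        broker = CredentialBroker(FakeBrowser())  # the sketch above
        # Everything the agent ever observes is this return value...
        reply = broker.fill_credential("example.com", "eva", "#password")
        # ...so the isolation guarantee reduces to: the secret is not in it.
        assert "hunter2" not in reply

A proof would have to quantify that over every message the agent can observe, not just one test case, which is exactly where a verified sandbox boundary would help.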
I strongly dislike the general tone of the article. Formal Methods are not fringe; they are used all around the world by good teams building reliable systems. The fact that they are not mainstream has more to do with the poor ergonomics of the old tools, and with the corporate greed that stripped design activities out of the software development process to usher in the era of agile cowboy coding. They did this just to churn out products quickly at the expense of quality. It was never the correct, civilized way of working and never will be.
The future: AI generates incorrect code, and formal verification proves that the code faithfully performs that incorrect behaviour.