Posted by ibobev 3 days ago
However, one point of clarification:
> Several participants in that discussion have suggested that this method should be upgraded to a complete alternative to the standard token-based approaches. The OWASP maintainer was initially skeptical, but towards the end of the thread they appear to be warming up to the idea and in search of opinions from other leading security experts. So it is quite possible that this method will become mainstream in the near future.
The maintainer didn't just warm up to the idea - they came to accept it; otherwise the changes would never have landed. So the quoted section, perhaps unintentionally, calls the maintainer's integrity into question.
Though, I just noticed that the cheatsheet text has changed significantly from what we settled upon. Fetch Metadata has been relegated to defense-in-depth status again. Hopefully this is just a mistake.
It is pretty clear to me that the maintainer is cautious and is seeking other expert opinions before accepting the proposed upgrade to a full solution. To me, this shows integrity, not the lack of it. I apologize if my choice of words can somehow be interpreted any other way!
Our confusion might be due to the fact that an erroneous PR (seemingly by an AI-wielding student...) was recently accepted that completely reverted the changes we collectively worked on - the ones that upgraded Fetch Metadata to a full solution. So it is back to showing as defense in depth. I've raised an issue about it, which wouldn't have happened if I hadn't seen your article!
Here's the previous language:
> If your software targets only modern browsers, you may rely on [Fetch Metadata headers](#fetch-metadata-headers) together with the fallback options described below to block cross-site state-changing requests
We then detailed some fallbacks (e.g. the Origin header). The full text can be viewed in the original PR:
https://github.com/OWASP/CheatSheetSeries/pull/1875
or
https://github.com/OWASP/CheatSheetSeries/blob/7fc3e6b8fde65...
If, after reading that, you still think Fetch Metadata is not a viable full solution, I'd be curious to know why - the goal of that PR (and the preceding discussion that I instigated) was to upgrade it from defense in depth to a full solution (even if slightly less full than tokens, due to the possible need for some fallbacks).
Confession: I did not read the PR. I assumed that what is currently published in the cheatsheet is the same as the PR, and that assumption guided my analysis.
I will update my article to be in agreement with reality, now that I understand it. Thanks!
It's often easier to smuggle a same-origin request than to steal a CSRF token, so you're widening the set of things you're vulnerable to by hoping that this can protect state-mutating GETs.
The bugs mentioned in the GitHub issue are the sorts of issues that will hit you, but common things like open redirects also turn into a real problem.
Not that state-mutating GETs are a common pattern, but they are encoded as a test case in the blog post's web framework.
Please correct me if I have missed anything, but I designed this feature in my framework so that the default action when evaluating CSRF-related headers is to block; I then check for the specific conditions that warrant access. The idea is that any unexpected condition I'm not currently considering results in the request being blocked, which keeps security from being put at risk.
I expect there are some situations in which state-changing GET requests will not be allowed where they should be. I don't think the reverse is possible, though, which is the point of my security-first design. I can always revisit the logic and add more conditions around state-changing GET requests if I have to, but as you say, these are uncommon, so maybe this is fine as it is.
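To make the default-deny idea concrete, here is a minimal sketch (hypothetical names, not my framework's actual code): nothing is allowed unless a condition positively establishes it, so anything I haven't considered falls through to a block.

```python
# Hypothetical default-deny Fetch Metadata check; the function name and
# policy choices here are illustrative, not any framework's real API.
def allow_request(headers: dict) -> bool:
    """Return True only when the request is positively established as safe.

    Any condition not explicitly considered falls through to a block.
    """
    site = headers.get("Sec-Fetch-Site")
    if site == "same-origin":
        return True   # initiated by our own pages
    if site == "none":
        return True   # direct user action: bookmark, typed URL
    if site is None:
        return False  # old browser or non-browser client: block in this
                      # sketch; a real deployment might fall back to tokens
    return False      # "same-site", "cross-site", and anything unexpected
```

Whether `same-site` (subdomains) and missing headers should block is a policy choice; the point is that every unhandled case lands on `return False`.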
Likewise, if you could elaborate on the open redirects issue, that would be great.
I think these sorts of minor web-app issues are common enough that state-changing GETs should be explicitly discouraged if you are relying on Sec-Fetch-Site.
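One way to make that discouragement concrete (a hypothetical sketch, not any particular framework's API) is to refuse, at registration time, to bind a state-changing handler to a safe method at all, so an open redirect or an injected image can never reach one:

```python
# Hypothetical route registration that rejects state-changing handlers on
# safe methods outright. All names are illustrative.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def register(routes: dict, path: str, methods: set, handler, *, state_changing: bool):
    """Add a route, refusing state-changing handlers on GET/HEAD/OPTIONS."""
    bad = methods & SAFE_METHODS
    if state_changing and bad:
        raise ValueError(f"state-changing handler on safe method(s) {bad} for {path}")
    routes[path] = (methods, handler)
```

Failing loudly at startup is easier to live with than auditing every GET handler for side effects later.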
People do still allow third-party images/links on websites. It's much less common in typical software, but it does happen.
Playing with window.location and meta redirects in jsfiddle, they both seem to lose cross-site context when I link to them.
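That matches my understanding: a client-side redirect (window.location, meta refresh) starts a brand-new navigation whose initiator is the redirecting page itself, whereas server-side redirects keep the whole chain in view. A rough sketch of the derivation, simplified to a hostname suffix check where real browsers compare registrable domains:

```python
# Simplified sketch (assumption) of how a browser derives Sec-Fetch-Site
# for a single navigation. Real browsers compare registrable domains via
# the Public Suffix List; the two-label suffix check below is a stand-in.
from urllib.parse import urlsplit

def sec_fetch_site(initiator_url: str, target_url: str) -> str:
    i, t = urlsplit(initiator_url), urlsplit(target_url)
    if (i.scheme, i.hostname, i.port) == (t.scheme, t.hostname, t.port):
        return "same-origin"
    if i.hostname and t.hostname and \
            i.hostname.split(".")[-2:] == t.hostname.split(".")[-2:]:
        return "same-site"
    return "cross-site"

# The open-redirect endpoint itself sees the attacker's navigation as cross-site...
sec_fetch_site("https://evil.example/", "https://victim.example/redirect")   # "cross-site"
# ...but the client-side redirect it performs is a fresh, same-origin navigation:
sec_fetch_site("https://victim.example/redirect", "https://victim.example/delete")  # "same-origin"
```

Which is exactly why a client-side open redirect plus a state-changing GET defeats a Sec-Fetch-Site-only defense.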