Posted by ingve 1 day ago
https://www.youtube.com/watch?v=B2qePLeI-s8
From the HTTP must die thread a month ago. https://news.ycombinator.com/item?id=44915090
In a pure .Net world it's the norm to use strict input validation and tell clients to fix their bad requests, and this looks like one of those cultural blindspots. "We" wouldn't naturally consider the case of a server accepting a request that has not been strictly validated. With the move to .Net Core and a broadening of scope beyond just targeting enterprises, we'll find issues like this....
Mostly this stuff comes down to skill issues.
In jsonrpc I think 200 OK is correct, with an error payload that says “you are not authorized” or similar.
> what should the status be in case of error
A status code in the 400 to 500 range. You can't ignore the lower-level protocol because you're implementing a higher-level one.
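A minimal sketch of that position with an ASP.NET Core minimal API (the /rpc endpoint, the -32000 error code, and the payload shape are all invented for illustration): keep the JSON-RPC style error object in the body while still surfacing the failure at the HTTP layer.

```csharp
// Sketch only: maps an authorization failure to 403 while keeping the
// JSON-RPC style error object in the response body.
var app = WebApplication.CreateBuilder(args).Build();

app.MapPost("/rpc", () =>
{
    // Imagine the auth check for this call just failed.
    var error = new
    {
        jsonrpc = "2.0",
        error = new { code = -32000, message = "Not authorized" },
        id = (object?)null
    };
    return Results.Json(error, statusCode: StatusCodes.Status403Forbidden);
});

app.Run();
```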
If you're maintaining an old api you can publish new versions of endpoints that don't accept mangled requests. If it's important, you can give clients a deadline, say a few months, to update their software to use your updated endpoints before you remove the old ones.
It's really fun trying to test connectivity issues like this.
Dev, UAT / QA, Staging, PROD. This is the ideal setup in my eyes. It lets QA / UAT hold changes that are maybe not 100% ready, while not blocking testing that is meant to go into PROD ASAP, because it can sit in staging.
+ for the case of cost: lots of very large companies have prod environments that cost big $$$. The business will not double the prod cost for a staging environment mirroring prod. Take the example of any large bank you know. The online banking platform will cost tens if not hundreds of millions of dollars to run. Now consider that the bank will have hundreds of different platforms. It is just not economically feasible.
+ for the case of law: in some sectors, by law, only workers with "need to know" can access data. Any dev environment data cannot, by law, be a copy of prod. It has to be test data; even anonymized prod data is not allowed in dev/test because of de-anonymization risk.
Given this, consider a platform / app that is multi-tenant (and therefore data driven), e.g. a SaaS app in a legally regulated industry such as banking or health care. Or even something like Shopify or GMail for corporate, where the app hosts multiple organizations and the org to be used is picked based on data (user login credentials).
The app in this scenario is driven by data parameterization - the client site and content are data driven, e.g. when clientXYZ logs on, the site becomes https://clientXYZ.yourAppName.com and all data, config etc are "clientXYZ" specific. And you have hundreds or thousands of clientsAAA through clientZZZ on this platform.
In such a world, dev & test environments can never be matched with prod. Further, the behaviour of the client specific sites could be different even with the same code because data parameters drive app behaviour.
Long story short, mirroring staging and prod is just not feasible in large corporate tech.
In low throughput environments I see stuff like this. The problem is that in high throughput environments it doesn't tend to happen, because of the massive expense incurred.
You cannot do this if you're changing more than that one thing. The only way to really make this work is either dynamic environments that completely mirror everything, which tends to be time consuming or expensive, or continuous delivery to a production-like environment via feature flags and so forth.
Having a staging server that is a mirror of production[1] improves things a bit over doing nothing. You need the entire environment, including all your dependencies, to have a real test of anything, and that includes things that corporate IT departments typically hate.
[1]: Why is it so common to see "PROD" written as if it were an acronym?
If your staging environment is pointing to the exact same databases PROD is, and other similar dependencies, there's no reason you can't hotswap it with PROD itself; I mean, I've done something like this before.
It's much easier if your production deployment pipeline is set up for it, though. You'd want to scale down drastically for staging, but in my eyes, if you're not going to make staging as close to a carbon copy of PROD as you humanly can, you might as well not have that fourth environment and just suffer when you cannot reproduce bugs. The real gem of staging is that if something would break in PROD, it would definitely break in staging. In the few companies where we had a carbon copy of PROD set up as a staging environment, where key things are pulled from PROD itself, we've had way fewer bugs promoted to PROD when QA tests them in staging.
In theory the ROI is worth it, if you care about quality. Sadly most places do not care about quality nearly enough.
Why do you believe that?
Being liberal in what you accept doesn't mean you can't do input validation, or that you're forced to pass through unsupported parameters.
It's pretty obvious: you validate the input that is relevant to your own case, you don't throw errors if you stumble upon input parameters you don't support, and you ignore the irrelevant fields.
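A small sketch of that approach with System.Text.Json (the CreateOrder type and the field names are made up): unmapped properties are skipped during deserialization by default, and only the fields the handler actually relies on are validated.

```csharp
using System.Text.Json;

// "someFutureField" is not part of our model; by default it's simply ignored.
var json = """{ "productId": "abc", "quantity": 2, "someFutureField": true }""";

var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var order = JsonSerializer.Deserialize<CreateOrder>(json, options);

// Validate only the input that is relevant to this endpoint.
if (order is null || string.IsNullOrEmpty(order.ProductId) || order.Quantity <= 0)
    throw new ArgumentException("Invalid order payload.");

Console.WriteLine($"{order.ProductId} x{order.Quantity}");

record CreateOrder(string ProductId, int Quantity);
```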
The law is "be conservative in what you send, be liberal in what you accept". The first one is pretty obvious.
How do you add cost to the entire ecosystem by only using the fields you need to use?
I like to call it the "hardness principle". It makes your system take longer to break, but when it does it's more damaging than it would have been if you'd rejected malformed input in the first place.
I don't think that's true at all. The whole point of the law is that your interfaces should be robust, and still accept input that might be nonconforming in some way but can still be validated.
The principle still states that if you cannot validate input, you should not accept it.
I have always been a proponent for the exact opposite of Postel's law: If it's important for a service to be accommodating in what it accepts, then those accommodations should be explicit in the written spec. Services MUST NOT be liberal in what they accept; they should start from the position of accepting nothing at all, and then only begrudgingly accept inputs the spec tells them they have to, and never more than that.
HTML eventually found its way there after wandering blindly in the wilderness for a decade and dragging all of us behind it kicking and screaming the entire time; but at least it got there in the end.
No. Your claim expresses a critical misunderstanding of the principle. It's desirable that a browser be robust enough to support broken but still perfectly parseable HTML. Otherwise, it fails to be even usable when dealing with anything but perfectly compliant documents, which, mind you, means absolutely none whatsoever.
But just because a browser supports broken documents, that doesn't make them less broken. It just means that the severity of the issue is downgraded, and users of said browser have one less reason to migrate.
If browsers had conformed to a rigid specification and only accepted valid input from the start, then people wouldn't have produced all that broken html and we wouldn't be in this mess that we are in now.
For example, some JSON parsers extend the language to accept comments and trailing commas. That is not a change that creates vulnerability.
Other parsers extend the language by accepting duplicated keys and disambiguating them with some random rule. That is a vulnerability factory.
Being flexible by creating a well defined superlanguage is completely different from doing it with an ill-defined one that depends on heuristics and implementation details to be evaluated.
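For what it's worth, System.Text.Json exposes exactly that first kind of extension as explicit, documented opt-ins rather than silent leniency (the payload below is just an illustration):

```csharp
using System.Text.Json;

var options = new JsonSerializerOptions
{
    ReadCommentHandling = JsonCommentHandling.Skip, // accept // and /* */ comments
    AllowTrailingCommas = true,                     // accept [1, 2, 3,]
};

// Both extensions are part of a well defined superset, not heuristics.
var numbers = JsonSerializer.Deserialize<int[]>("[1, /* two */ 2, 3,]", options);
Console.WriteLine(string.Join(",", numbers!));
```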
I agree that there are better ways to design flexibility into protocols but that requires effort, forethought, and most of all imagination. You might not imagine that your little scientific document format would eventually become the world's largest application platform and plan accordingly.
Create a new project with the latest Spring version, and Maven will warn you.
At this point I consider this worthless noise.
You can either wait and accept being vulnerable or update the component yourself and therefore run an unsupported and untested configuration. Doomed if you do, doomed if you don't.
On Windows, if you have the "Install updates for other Microsoft products" option enabled, .NET [Core] runtimes will be updated through Windows Update.
If the domain's group policy won't let you turn it on from the UI (or if you want to turn it on programmatically for other reasons), the PowerShell 7 installer has a PowerShell script that can be adapted to do the trick: https://github.com/PowerShell/PowerShell/blob/ba02868d0fa1d7...
>= 6.0.0 <= 6.0.36
>= 8.0.0 <= 8.0.20
>= 9.0.0 <= 9.0.9
<= 10.0.0-rc.1
Microsoft.AspNetCore.Server.Kestrel.Core:
<= 2.3.0
Fixes for .NET 6 are available from HeroDevs' ongoing security support, called NES* for .NET.
*never ending support
I'm probably missing something, but I still don't get how this would work without a proxy unless my own code manually parses the request from scratch. Or maybe that is what the author means.
The vulnerability, as far as I understand it, relies on two components interpreting these chunks differently. So one of them has to read \r or \n as valid markers for the chunk end, and the other one must only allow \r\n as specified.
Kestrel used to allow \r and \n (and the fix is to not do that anymore). So only if my own code parses these chunks and uses \r\n would I be vulnerable, or?
The proxy version of the vulnerability seems quite clear to me, and pretty dangerous, as .NET parses the non-compliant chunks and would thereby be vulnerable behind any compliant proxy (if the proxy is relevant for security aspects).
But the single application version of the vulnerability seems to me to be very unlikely and to require essentially having a separate full HTTP parser in my own application code. Am I missing something here?
For example, let's say you have an HTTP API that checks a few headers and then makes another outgoing HTTP request. You might just send the stream along, using incomingHttpRequestStream.CopyTo(outgoingHttpRequestStream) / (or CopyToAsync). (https://learn.microsoft.com/en-us/dotnet/api/system.io.strea...)
That might be vulnerable, because it could trick your server into sending what appears to be two HTTP requests, where the 2nd one is whatever the malicious party wants it to be... But only if you allow incoming HTTP versions < 2. If you blanket disallow HTTP below 2.0, you aren't vulnerable.
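A hypothetical sketch of that kind of relay endpoint (the route, header name, and internal URL are invented; StreamContent does the CopyToAsync of the incoming body for you): it checks a header and then forwards the raw, unreparsed body downstream.

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapPost("/relay", async (HttpContext ctx, IHttpClientFactory factory) =>
{
    // "Checks a few headers" ...
    if (!ctx.Request.Headers.ContainsKey("X-Api-Key"))
        return Results.Unauthorized();

    // ... then streams the body along without re-framing it. If Kestrel and the
    // downstream parser disagree about chunk boundaries, this is where a second,
    // attacker-chosen request can be smuggled through.
    using var outgoing = new HttpRequestMessage(HttpMethod.Post, "http://internal-service/ingest")
    {
        Content = new StreamContent(ctx.Request.Body)
    };
    using var response = await factory.CreateClient().SendAsync(outgoing);
    return Results.StatusCode((int)response.StatusCode); // body relaying omitted for brevity
});

app.Run();
```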
---
But I agree that this seems to be more "much ado about nothing" and doesn't deserve 9.9:
> The python aiohttp and ruby puma servers, for example, give the vulnerability only a moderate severity rating in both cases. In netty it's even given a low severity.
I suspect the easiest way to handle this is to disallow HTTP < 2 and then update .Net on your own schedule. (Every minor release of .Net seemed to break something at my company, so we had to lock down to the patch version; otherwise our build was breaking every 2-3 months.)
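The protocol restriction itself is a real Kestrel setting (ListenOptions.Protocols); a minimal sketch, with the port as a placeholder and TLS via the default dev certificate:

```csharp
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);
builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ListenAnyIP(8443, listenOptions =>
    {
        listenOptions.Protocols = HttpProtocols.Http2; // no HTTP/1.x on this endpoint
        listenOptions.UseHttps();                      // h2 is negotiated via ALPN, so most clients need TLS
    });
});

var app = builder.Build();
app.MapGet("/", () => "h2 only");
app.Run();
```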
It wouldn't surprise me if Microsoft found a first-party or second-party (support contract) or open source/nuget Kestrel/ASP.NET Middleware somewhere in the wild that was affected by this vulnerability in a concerning way. In that case, it also somewhat makes sense that Microsoft doesn't necessarily want to victim blame the affected Middleware given that they recognized that Kestrel itself should have better handled the vulnerability before it ever passed to Middleware.
The CVE points out (and the article as well) some issue with user-land code using `HttpRequest.BodyReader` on the "parsed" request, it just doesn't include specifics of who was using it to do what. Plenty of Middleware may have reason to do custom BodyReader parsing, especially if it applies ahead of ASP.NET Model Binding.
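A hypothetical example of such Middleware (the inspection step is invented; `HttpRequest.BodyReader` and PipeReader are the real APIs): it peeks at the raw body ahead of model binding without consuming it, which means it implicitly trusts Kestrel's framing of that body.

```csharp
using System.IO.Pipelines;

var app = WebApplication.CreateBuilder(args).Build();

app.Use(async (context, next) =>
{
    PipeReader reader = context.Request.BodyReader;
    ReadResult result = await reader.ReadAsync();

    // ... inspect result.Buffer here (signatures, auditing, custom framing) ...

    // Consume nothing, so model binding downstream still sees the full body.
    reader.AdvanceTo(result.Buffer.Start);
    await next(context);
});

app.MapPost("/items", (Item item) => Results.Ok(item));
app.Run();

record Item(string Name);
```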
If you blanket disallow old HTTP, clients will fail to reach you.
Whether you can blanket disallow old HTTP depends on who is calling your web service: I don't think modern browsers need to fall back to HTTP/1, so the risk is if you have a web service that is called by scripts or other programs using old HTTP libraries.
Even then, it's pretty well established that no one remains compatible with old TLS libraries, so I don't see why we need to remain compatible with old HTTP libraries indefinitely.
I don't hate Postel's law, but I admit I try not to think about it, lest I get triggered by a phone call that such and such site doesn't work.