Posted by agwa 12 hours ago
Thought I'd make another protocol known: Web Application Socket (WAS). I designed it 16 years ago at my day job because I thought FastCGI still wasn't good enough.
Instead of packing bulk data inside frames on the main socket, WAS has a control socket plus two pipes (raw request+response body). Both the WAS application and the web server can use splice() to operate on a pipe, for example. No framing needed. Also, requests are cancellable and the three file descriptors can always be recovered.
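As a rough illustration only (this is not the actual libwas code, just a hypothetical Go sketch of the zero-copy idea on Linux, with error handling trimmed), forwarding a body pipe into a socket with splice() can look something like this:

    package main

    import (
        "fmt"
        "syscall"
    )

    // spliceAll moves up to n bytes from the read end of a pipe into dst (e.g.
    // a client socket) via splice(2), so the body never enters user space.
    func spliceAll(pipeRead, dst, n int) error {
        for n > 0 {
            moved, err := syscall.Splice(pipeRead, nil, dst, nil, n, 0)
            if err != nil {
                return err
            }
            if moved == 0 { // the writer closed its end of the pipe
                break
            }
            n -= int(moved)
        }
        return nil
    }

    func main() {
        // Demo plumbing: a pipe standing in for a WAS body pipe, and a
        // socketpair standing in for the connection to the client.
        var p [2]int
        syscall.Pipe(p[:])
        sp, _ := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
        syscall.Write(p[1], []byte("response body\n"))
        syscall.Close(p[1])
        spliceAll(p[0], sp[0], 1<<16)
        buf := make([]byte, 64)
        m, _ := syscall.Read(sp[1], buf)
        fmt.Printf("spliced through: %q\n", buf[:m])
    }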
Over the years, we used WAS for many of our internal applications, and for our web hosting environment, I even wrote a PHP SAPI for WAS. Quite a large number of web sites operate with WAS internally.
It's all open source:
- library: https://github.com/CM4all/libwas
- documentation: https://libwas.readthedocs.io/en/latest/
- non-blocking library: https://github.com/CM4all/libcommon/tree/master/src/was/asyn...
- our web server: https://github.com/CM4all/beng-proxy
- WebDAV: https://github.com/CM4all/davos
- PHP fork with WAS SAPI: https://github.com/CM4all/php-src
FastCGI and HTTP are at two different levels. HTTP is for data transfer between, say, a browser and a server. FastCGI is for handling that data between the server and an application.
Just now I glanced at the article, and it seems the author writes in a confusing way that implies HTTP and FastCGI are interchangeable; they are not.
fwiw, I used fcgi for a decade for all our web customers.
> FastCGI and HTTP are at two different levels. HTTP is for data transfer between, say, a browser and a server. FastCGI is for handling that data between the server and an application. Just now I glanced at the article, and it seems the author writes in a confusing way that implies HTTP and FastCGI are interchangeable; they are not.
That might be just you. The article is littered with the qualifier "for reverse proxies", including in the title and two section headers, and "as the protocol between reverse proxies and backends" in the second paragraph. I don't know how it could be any more clear on this point.
The max_k comment you've quoted includes "for these things"; context clues suggest by "these things" he also means to limit his comment to the reverse proxy <-> backend leg.
I think the author mentions HTTP because many people use it where they could be using FastCGI and just don’t.
Not entirely correct. A reverse proxy can either speak HTTP, or a different protocol such as FastCGI with the application server. The article is talking about that communication.
They are not interchangeable for the browser-to-server communication, but they are for the server-to-application piece.
The article points out that HTTP and FastCGI are both options for reverse proxies to communicate to the downstream server. I didn't find a reference to them being interchangeable outside of that context. If there is or was one please quote it.
Or you could use something like HAProxy's PROXY protocol (although that may not carry all the information you want, and doesn't work for multiplexing).
Edit: actually the "Forwarded" header kind of fills that niche, although you may want extensions for things like the client certificate.
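For reference (addresses made up), the PROXY protocol v1 prepends a single text line before the proxied bytes, while Forwarded (RFC 7239) is just an ordinary request header:

    PROXY TCP4 203.0.113.7 192.0.2.10 51531 443
    Forwarded: for=203.0.113.7;proto=https;by=192.0.2.10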
I remember the great FastCGI vs. SCGI vs. HTTP wars: I was founding a Web2.0 startup right at the time these technologies were gaining adoption, and so was responsible for setting up the frontend stack. HTTP won because of simplicity: instead of needing to introduce another protocol into your stack, you can just use HTTP, which you already needed to handle at the gateway. Now all sorts of complex network topologies became trivial: you could introduce multiple levels of reverse proxies if you ran out of capacity; you could have servers that specialized in authentication or session management or SSL termination or DDoS filtering or all the other cross-cutting concerns without them needing to know their position in the request chain; and you could use the same application servers for development, with a direct HTTP connection, as you did in production, where they'd sit behind a reverse proxy that handled SSL and authentication and abuse detection.
It also helped that nginx was a lot faster than most FastCGI/SCGI modules of the time, and more robust. I'd initially set up my startup's stack as HTTP -> Lighttpd -> FastCGI -> Django, but it was way slower than just using nginx.
The use of HTTP was basically the web equivalent of the End-to-End Principle [1] for TCP/IP. It's the idea that the network and its protocols should be agnostic to what's being transmitted, and all application logic should be in nodes of the network that filter and redirect packets accordingly. This has been a very powerful principle and shouldn't be discarded lightly.
The observation the article makes is that for security, it's often better to follow the Principle of Least Privilege [2] rather than blindly passing information along. Allowlist your communications to only what you expect, so that you aren't unwittingly contributing to a compromise elsewhere in the network.
And the article is highlighting - not explicitly, but it's there - the tension between these two principles. E2E gives you flexibility, but with flexibility comes the potential for someone to use that flexibility to cause harm. PoLP gives you security, but at the cost of inflexibility, where your system can only do what you designed it to do and cannot easily adapt to new requirements.
[1] https://en.wikipedia.org/wiki/End-to-end_principle
[2] https://en.wikipedia.org/wiki/Principle_of_least_privilege
I don't think the analogy works, not in the context of connection caching and multiplexing. An intermediate gateway multiplexing multiple HTTP requests over another HTTP channel, where that channel is the terminal leg directly to the listening service (i.e. requests aren't demultiplexed before hitting the application socket), fundamentally violates the logic of end-to-end in multiple ways. The analogy only works, if at all, if you preserve 1:1 connection symmetry.
All the reverse proxy exploits can be traced directly back to violating end-to-end.
If the analogy were true, then SMTP delivery across multiple MXs would be end-to-end as well. It's not, and you see many of the same issues as with reverse proxies, including message boundary desyncing.
I guess you're trying to analogize HTTP requests as messages, but it falls apart almost immediately in the context of all the hairy details. The nature of TCP and HTTP semantics and the various concrete protocol details throws a wrench into things, with predictable consequences.
The end-to-end principle doesn't permit playing fast and loose with semantics. It demands very hard, rigid boundaries regarding state management and transport layering. That's the whole point. "Mostly" end-to-end is not end-to-end, not even a little bit.
Google for example has long wrapped HTTP into their own Stubby protocol between their frontline web servers and applications; it’s much faster and more featureful than using the HTTP wire protocol. It’s something that a typical company doesn’t need, but once the scale increases it becomes worthwhile to justify using a different wire protocol and developing all the tooling around that new wire protocol.
Most of the arguments for using HTTP reverse proxying over FastCGI or SCGI came down to ubiquity. It let you do things (like connect directly to your app servers with a web browser) that you couldn't do with FastCGI.
HTTP/2 multiplexing is TCP-in-TCP; it's asking for trouble. Just open more connections and let TCP be your multiplexer. Depending on your connection rate, you can't really do 64k connections per frontend IP to each service ip:port, but if your rate isn't too high, 20-30k is feasible. Most HTTP-based applications don't need or benefit from anywhere near that level of concurrency between frontend and backend. But if it's not enough, you can add more IPs to the frontend or backend, or more ports to the backend.
I'm pretty sympathetic to the argument for FastCGI or something similar as the frontend-to-backend protocol, though; having client-set headers clearly separated from frontend-set headers is very nice, and having clear agreement on message boundaries is of obvious value. Unless you're just doing a straight TCP proxy, in which case the PROXY protocol is good enough to transfer the original IPs and then pass the data as-is.
Large organizations have a well-known pattern for how to handle this tension between the E2E principle and the PoLP. It's a firewall. As per the E2E principle, this is a node in the system, usually placed near the outside, which is responsible for inspecting and sanitizing every request that enters the system. The input is untrusted external requests that may have arbitrary binary data. The output is the particular subset of HTTP that form valid requests for the server, sanitized to a minimal grammar and now trusted because you reject every packet that wasn't a well-formed request for your particular service. As an added bonus, now you can collect stats on who is sending these malformed requests, which lets you do things like DDoS protection or calling their ISP or contacting the FBI.
The article even admits this: the right solution to untrusted headers is to strip out everything you aren't explicitly expecting at the reverse proxy. If you didn't know True-Client-IP exists, don't pass it on. Allowlist and block everything by default, don't blocklist and allow everything by default.
You're correct that if the proxy removes all unknown headers, you're safe (with HTTP/2). But that sounds extremely inconvenient - before your application can use a new header, you have to talk to the team who runs the proxy. And popular reverse proxy software doesn't do that by default so it remains a huge footgun for the unwary. All completely avoided with FastCGI.
https://serverfault.com/questions/1033131/filter-to-only-pas...
Set proxy_pass_request_headers off, and then explicitly proxy_set_header each individual header you want to forward, mapping it to the nginx variable that represents it.
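A minimal sketch of that allowlist approach (the specific headers here are just examples; adjust for your application):

    location / {
        proxy_pass http://backend;

        # drop all request headers from the client ...
        proxy_pass_request_headers off;

        # ... then re-add only the ones the application is allowed to see
        proxy_set_header Host            $host;
        proxy_set_header Accept          $http_accept;
        proxy_set_header Content-Type    $content_type;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

Depending on your setup you may also need to re-add things like Content-Length or Authorization; the point is that anything not listed never reaches the backend.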
Or just use CloudFlare Tunnel, which gives you a bunch of other DDoS and abuse protection and keeps your app server off the public Internet.
You describe an organizational failure, where different teams are allowed to do whatever they like instead of having a proper platform team, which can enforce security and standards for the benefit of interoperability. It's not an argument in favour of transparent end-to-end behaviour in datacenters.
Sadly httpd went the way of "let's make the configuration difficult"; I abandoned it when they suddenly changed the configuration format. I could have adjusted, but I switched to lighttpd instead (and past that point I let Ruby autogenerate any configuration format anyway, so technically I could return to httpd, but I don't want to). I think people who develop web servers need to think hard before forcing users to adjust to a new format. If switching the configuration format willy-nilly seems like a "simple" decision, perhaps enable e.g. YAML configuration in ADDITION, so that we don't suddenly have to work through new if-clause config statements.
Little tweak here, little tweak there...
I feel that if I can't work something out without asking a generative ML model, then I probably don't understand it well enough to properly assess the generated answer, and if I didn't understand the documentation well enough in the first place then “verify it against the documentation” is not a suitable answer, so I probably shouldn't be self-hosting that system on the open network.
It is quite irritating that the existence of generative models is apparently becoming an acceptable excuse for inadequate documentation. Rather than suggesting that I ask Copilot when the Azure documentation is lacking, perhaps MS should ask Copilot to generate some better documentation (and have their human domain experts review it for correctness) so we have good documentation to work from. It strikes me that them using a bunch of LLM crunching power up-front is likely to be more efficient than a great many of us spending smaller amounts of resources each (many of us asking the same questions) at the point of consumption.
The scenario is that we have our first-party task lists and data viewers, but users often want to customize them heavily. Say, build a Kanban view or a custom dashboard with data filters and charts.
The box has a coding agent, which means the user can code anything rather than us building traditional report-builder tools.
Go's stdlib has good support for both the server side and the CGI-program side. The coding agent generates a page-name/main.go that speaks CGI, and the server delegates requests to it.
It's all "person scale" data and page views, so there's no real need to even optimize with FastCGI.
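A rough sketch of what such a generated program can look like with nothing but the standard library (the page-name path comes from the comment above; the handler body is made up):

    // page-name/main.go - run by the web server as a CGI child per request
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "net/http/cgi"
    )

    func main() {
        // cgi.Serve reads the CGI environment and stdin, and writes the HTTP
        // response to stdout for the parent web server to relay.
        err := cgi.Serve(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "text/plain")
            fmt.Fprintf(w, "hello from %s\n", r.URL.Path)
        }))
        if err != nil {
            log.Fatal(err)
        }
    }

On the server side, net/http/cgi's Handler type does the delegation, e.g. http.Handle("/page-name/", &cgi.Handler{Path: "page-name/main", Root: "/page-name"}).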
What’s old is new again for agents!
Go's CGI server implementation doesn't set $HTTP_PROXY so you're safe from that, but I still don't love how CGI uses environment variables.
Neither do I. They really only make sense in the context of a request which was actually to a CGI script resident in a document root - they're an exceptionally awkward way of describing other HTTP requests, especially ones which aren't being served from a document root. And there's a lot of information lost in translation, like the order and original capitalization of HTTP headers. (Not that these things are supposed to matter, but still.)
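For anyone unfamiliar with the awkwardness being referred to: CGI flattens the request into environment variables, roughly like this for "GET /app/dashboard?user=42" (values illustrative; the SCRIPT_NAME/PATH_INFO split depends on server configuration):

    REQUEST_METHOD=GET
    SCRIPT_NAME=/app
    PATH_INFO=/dashboard
    QUERY_STRING=user=42
    HTTP_ACCEPT=text/html            # from the "Accept:" header
    HTTP_X_FORWARDED_FOR=203.0.113.7 # any header becomes HTTP_<NAME>...
    HTTP_PROXY=...                   # ...including a client-sent "Proxy:" header (the httpoxy issue)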
With widespread browser support for WHATWG streams, it's pretty easy to implement your own WebSockets over long-lived HTTP requests. Basically you just send a byte stream and prepend each message with a header, which can just be a size in many cases.
Advantages over WebSockets:
* No special path in your server layer like you need for WebSocket.
* Backpressure
* You get to take advantage of HTTP/2/3 improvements for free
* Lower framing overhead
Unfortunately, AFAIK browsers still don't support streaming the request body while receiving the response, so you need a pair of requests for full bidirectional streaming.
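As a sketch of that framing on the server side (endpoint name and sizes made up; the browser end would read the 4-byte length from its ReadableStream and then that many body bytes):

    package main

    import (
        "encoding/binary"
        "net/http"
    )

    // writeFrame sends one length-prefixed message and flushes it to the client.
    func writeFrame(w http.ResponseWriter, msg []byte) {
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
        w.Write(hdr[:])
        w.Write(msg)
        if f, ok := w.(http.Flusher); ok {
            f.Flush() // push the frame out immediately on the long-lived response
        }
    }

    func main() {
        http.HandleFunc("/stream", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/octet-stream")
            writeFrame(w, []byte("hello"))
            writeFrame(w, []byte("world"))
        })
        http.ListenAndServe(":8080", nil)
    }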
I don't know if anything else in the RHEL distributions uses FastCGI.
    $ rpm -qi php-fpm | grep ^Summary
    Summary : PHP FastCGI Process Manager

I don't really know anything about FastCGI.
Most of the stuff I've done for reverse proxies has been pretty straightforward and just using the stuff built into Nginx, but I have to admit that it wouldn't have even occurred to me to use FastCGI if I needed something more elaborate.
I used FastCGI a bit about ten years ago to "convert" some C++ code I wrote to work on the web, but admittedly I haven't used it much since then.
But even if you disagree with me, the point is that I can count on one hand the number of times I went "oh man, I need a FastCGI middle end".
In my experience, this isn't a good feature. It sounds nice, but it often means everything runs fine while your load is low, and then when load gets high you spawn more workers and run out of memory. It's much better to have a static number of workers, in my experience.
Crash recovery is handy when needed, though.
Can we just take a moment to appreciate the absurdity of HTTP headers? We have X-Forwarded-For, X-Real-IP, and each CDN has its own custom-flavored one. Some of them are comma-separated lists, and they usually end up with the IP of your own LB uselessly added in there (I know why; it's just not helpful). All of them might be inserted by a malicious user agent. I guess nobody could agree on how all the various trusted servers in the pipeline should convey the important bit.
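E.g. a typical chain by the time it reaches the application (addresses made up): the first entry is whatever the client claimed, the second is the real client IP appended by the edge proxy, and the last is your own LB's internal address appended by the next hop:

    X-Forwarded-For: 198.51.100.1, 203.0.113.7, 10.0.0.5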
I guess it fits in quite well with the absurdity of the User-Agent header, which has gone so far that Apple decided to fully kill it by just sending utterly fake nonsense (false OS version, etc.) in the name of "pRiVaCy."
It is less expressive than HTTP in ways that may or may not be important to your application; I prefer accurate URL handling.