On top of that, I keep running into unexpected roadblocks with Cloudflare. For example, when I was trying to set up the tunnel, they required me to dedicate a domain to it; you can't use a subdomain of an existing domain. That's probably fine if you're rolling it out as a production service, but for just testing it to make sure it even works (see IPv6 comments above), I wanted to set it up on a subdomain.
It's been relatively painless for me to set up tunnels secured by SSO to expose dashboards and other internal tools across my distributed team using the free plan. Yes, I need to get a little creative with my DNS records (to avoid nested subdomain restrictions), but this is not really much of a nuisance given all of the value they're giving me for free.
And after paying just a little bit ($10-20 per month), I'm getting geo-based routing through their load balancers to ensure that customers are getting the fastest connection to my infra. All with built-in failover in case a region goes down.
As someone who uses Cloudflare at a professional level, I don't. To me, every single service Cloudflare provides feels somewhere between not ready for production and lacking any semblance of a product manager. Everything feels unreliable and brittle, even the portal. I understand they are rushing to release a bunch of offerings, but that rush shows in the products.
One of my pet peeves is Cloudflare's Cache API in Cloudflare Workers, and how Cloudflare's sanctioned approach to caching POST requests is to play tricks with the request, such as manipulating the HTTP verb, URL, and headers, until it somehow works. It's ass-backwards. They own the caching infrastructure, they own the js runtime, they designed and are responsible for the DX, but all they choose to offer is a kludge.
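For context, here's a rough sketch of the kind of workaround I mean (not Cloudflare's exact example; the key derivation and URL shape are my own): hash the POST body, fabricate a synthetic GET request to use as the cache key, and only then call cache.put(), which rejects non-GET requests.

    // Hypothetical Worker sketch: cache a POST response by rewriting it
    // into a synthetic GET request, since cache.put() only accepts GET.
    export default {
      async fetch(request: Request): Promise<Response> {
        if (request.method !== "POST") return fetch(request);

        // Derive a stable key from the request body.
        const body = await request.clone().text();
        const digest = await crypto.subtle.digest(
          "SHA-256", new TextEncoder().encode(body));
        const hash = Array.from(new Uint8Array(digest))
          .map((b) => b.toString(16).padStart(2, "0")).join("");

        // The trick: a made-up GET request whose URL encodes the body hash
        // (ignoring any existing query string, for simplicity).
        const cacheKey = new Request(`${request.url}?bodyHash=${hash}`, { method: "GET" });
        const cache = caches.default;

        let cached = await cache.match(cacheKey);
        if (!cached) {
          cached = await fetch(request);
          await cache.put(cacheKey, cached.clone());
        }
        return cached;
      },
    };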
Also, Cloudflare Workers are pitched as customizable request pipelines, yet other Cloudflare products, such as the Cloudflare Images service, can't be used with Workers because it fails to support forwarding standard request headers.
I could go on and on, but ranting won't improve anything.
Now, I get it: things happen and you gotta do what you gotta do, but then you aren't on the happy path anymore and you can't have the same expectations.
That's simply wrong. Things like GraphQL, which sends read-only queries as POST requests, beg to differ. Anyone can scream this until they are red in the face, but the need to cache responses from non-GET requests is pervasive. If it weren't, why do you think Cloudflare recommends hacks to get around the limitation?
https://developers.cloudflare.com/workers/examples/cache-pos...
Your line of argument might have had a theoretical leg to stand on if Cloudflare hadn't gone out of its way to put together official examples of how to cache POST requests.
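To make the "pervasive need" concrete, here's a minimal sketch (with a made-up endpoint) of a typical GraphQL read: it's a pure read, yet by convention it travels as a POST, so a GET-only cache can never serve it.

    // Hypothetical example: a read-only GraphQL query that still travels as POST.
    async function fetchProduct(): Promise<unknown> {
      const response = await fetch("https://api.example.com/graphql", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ query: '{ product(id: "42") { name price } }' }),
      });
      return response.json();
    }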
We'll likely replace it at some point with a non-standard API that works better. People will then accuse us of trying to create lock-in. ¯\_(ツ)_/¯
That's perfectly fine, but it doesn't justify the lack of support for non-GET requests. The Cache API defines the interface, but you decide how to implement it. In fact, Cloudflare's Cache API docs already note places where Cloudflare chose to implement details its own way, and parts of the Cache API it chose not to implement at all.
https://developers.cloudflare.com/workers/runtime-apis/cache...
Also, the Cache API specification doesn't exclude support for non-GET requests.
https://w3c.github.io/ServiceWorker/#cache-put
If Cloudflare's Cache API implementation suddenly supported POST requests, the only observable behavior change would be that cache.put() would no longer throw an error for requests other than GET. This is hardly an unacceptable change.
E.g. presumably the body of the request matters for cache matching, but the body can be any arbitrary format the application chooses. The platform has no idea how to normalize it to compute a consistent cache key -- except perhaps to match the whole body byte-for-byte, but for many apps that would not produce the desired behavior. For example, if you had a trace ID in your requests, now none of your requests would hit cache because each one has a unique trace ID, but of course a trace ID is not intended to be considered for caching.
The Cache API can only implement the semantics that the HTTP standard specifies for caching, and the HTTP standard does not specify any semantics for caching POST requests.
That said, what we really should have done was left it up to the application to compute cache keys however they want, and only implemented the lookup from string cache key -> Response object. That's not what the standard says, though.
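If it helps, here's a purely hypothetical sketch of what that string-key design could look like; nothing like this exists in the Workers runtime today, and the trace-ID handling just mirrors the example above.

    // Hypothetical "string key -> Response" cache interface (not a real API).
    interface KeyedCache {
      match(key: string): Promise<Response | undefined>;
      put(key: string, response: Response): Promise<void>;
    }

    // The application decides what matters for caching: here the key ignores
    // a traceId field, so otherwise-identical requests still hit the cache.
    async function cacheKeyFor(request: Request): Promise<string> {
      const body = (await request.clone().json()) as Record<string, unknown>;
      delete body["traceId"]; // unique per request, irrelevant to caching
      const canonical = JSON.stringify(body);
      const digest = await crypto.subtle.digest(
        "SHA-256", new TextEncoder().encode(canonical));
      const hex = Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0")).join("");
      return `${request.url}:${hex}`;
    }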
It would be awesome if there was a way to view the console that actually reflected how a request routed through the system.
- They won't tell you at what point you will outgrow their $200/mo plan and have to buy their $5K+/mo plan. I've asked their support and they say "it almost never happens", but they won't say "It will never happen." HN comment threads are full of people saying they were unexpectedly called by sales saying they needed to go Enterprise.
- There are no logs available (or at least there weren't 6-9 months ago) for the service I proxy through Cloudflare at the $200/mo level; you have to go with Enterprise ($5K+, I've been told) to get logs of connections.
- I set up some test certs when I was migrating, and AFAICT there is no way to remove them now. It's been a year; my "Edge Certificates" page has 2 active certs and 6 "Timed Out Validation" certs, and I can't find a way to remove them.
- The tunnel issue I had on Friday when trying to set up my tunnel; more details in another comment here, but apparently the endpoint they gave me was IPv6-only and not accepting traffic.
- Inability to set up a tunnel, even to test, on a subdomain. You have to dedicate a domain to it, for no good reason that I can tell.
Tunnels are poorly documented.
I'd tend to agree with that, but I was able to find some YouTube videos of people setting them up. It was still a bit of a challenge, though, because they have moved the menus all around in the last few months, so even the most recent videos I could find were pointing to locations that didn't exist and I had to go hunting for them.
I would have preferred to just use Tailscale for this, but we are using Headscale and want to make a service available to our sister company, which doesn't have e-mails in the Google Workspace where we have OIDC for auth, so they can't be part of our tailnet without buying them logins or setting up accounts in Keycloak or similar.
I'm pretty sure you can use Cloud Identity Free accounts to do this. I've done something similar with OIDC and it didn't cost anything.
    $ host -t A 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com
    9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com has no A record
    $ host -t AAAA 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com
    9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com has IPv6 address fd10:aec2:5dae::
    $ telnet -6 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com 443
    Trying fd10:aec2:5dae::...
    telnet: Unable to connect to remote host: Connection timed out
I got cloudflared running fairly easily (though their Debian package repo seemed broken, and the setup page didn't list an option for downloading just the binary; I was able to find it after some searching). That part went smoothly, I just couldn't connect to the tunnel they provisioned.

For several years, ngrok was practically free; they only started monetizing it recently, once it gained popularity.
KV is super expensive - once you’re operating at non-trivial scale, reading another configuration value per-request in your worker starts to cost thousands per month. KV’s tail latencies are also surprisingly bad (I’ve seen over a minute), even for frequently read keys that should be easily cached.
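For scale, the pattern in question is just this (a minimal sketch, assuming a KV namespace I'm calling CONFIG, typed via Cloudflare's workers types): one extra KV read on every request, which is exactly what adds up at non-trivial volume.

    // Hypothetical Worker with a KV namespace bound as CONFIG.
    interface Env {
      CONFIG: KVNamespace;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Every incoming request pays for (and waits on) one KV read.
        const flag = await env.CONFIG.get("feature-flag");
        return new Response(flag === "on" ? "enabled" : "disabled");
      },
    };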
He wouldn't disclose any details to me, but from his point of view S3 was best in class.
In my experience even backblaze b2 performs (way) better.
Their community forums are full of such reports.
KV is so expensive that it’s barely usable, and like R2, is very slow.
Agree with the KV point, Upstash is the same. But I just use dragonflydb on a single VM. No point paying for transactions.
Hell, S3 could have 20ms latency and it wouldn't matter since I can't afford it.
1) It will never work. 2) The article is just advertising: jobs, products, whatever.
There is a third conclusion which is worrisome. That the leadership of the organization just doesn't get it.
I'm not advocating these as correct, just wondering if other readers share my instantaneous reaction of been-there, seen-that, know-how-it-ends.
But I might have encountered this problem or am about to, and such a post might resonate more.
It is like advertising in a way. But for knowledge. As long as people upvote it, it's resonating.
So, what's the threshold for what should be shared, given that most people don't know most things...?
I also think HN does some sort of deduplication if something has been posted recently (to count as upvote instead of new submission), but not sure of the details.
Isn’t that the whole benefit of sites like HN and Reddit?
https://blog.cloudflare.com/introducing-oxy/#relation-to
> Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective.
seems fine to me?