Posted by shaicoleman 15 hours ago
I currently work with several AWS serverless stacks that are challenging or even impossible to integration test locally. While Localstack provides a decent solution, it seems like a service that AWS should offer to enhance the developer experience. They'd also be in the best position to keep it current.
AWS don't want that support nightmare.
Great to see Localstack get some competition thanks to ... AI-driven, shift-left infrastructure tooling? This is a great trend.
You should build your software around abstractions and interfaces portable enough to work locally, in AWS, or in any other cloud, rather than directly against AWS-specific APIs.
For example, IAM/S3/SQS policy evaluations can have a profound impact on a running application, but an abstraction wouldn't help much here (assuming the developer is putting any thought into securing things). There just isn't an alternative to these. If you're rolling out an application using AWS-proprietary services, you have to get into vendor-specific functionality.
The only functional use of a tool like this to me would be to learn how to use AWS so that I can work for people who want me to use AWS. Would that not be to Amazon's benefit?
It could encourage more development and adoption and lead to being a net-positive for the revenue.
The myopia among us "online people" is assuming the number of voices here and elsewhere correlates to revenue.
It does not.
If you want to use that for unit testing, then I think it would be better to mock the calls to AWS services. That way you test only your implementation, in an environment you control.
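A minimal sketch of that approach in Python, using only the standard library's unittest.mock; the function, bucket, and key names are made up for illustration (botocore also ships a `Stubber` if you want stricter, schema-validated stubbing):

```python
from unittest import mock

def fetch_report(s3_client, bucket, key):
    # Code under test: reads an object body from S3.
    return s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()

# Stand-in for a boto3 S3 client; the method name and response shape
# mirror the real get_object API.
fake_s3 = mock.Mock()
fake_s3.get_object.return_value = {"Body": mock.Mock(read=lambda: b"hello")}

assert fetch_report(fake_s3, "reports", "daily.csv") == b"hello"
fake_s3.get_object.assert_called_once_with(Bucket="reports", Key="daily.csv")
```

This tests only your code's logic and the exact request it makes, with no network and no credentials involved.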
If you want to use that for local development, then I think it would be better to provision a test environment (using Terraform or any other IaC tool). That way you don't run the risk of a bug slipping into prod because the emulator has a different behaviour than the real service.
- I'm not sure this kind of product is really a foot in the door to create new customers. Someone who isn't willing to create an actual account because they have no money, or just don't want to enter their card details, is not someone who's going to become a six-figures-per-year customer, which is the level it takes to be noticed by those providers.
- The free tier of AWS is actually quite generous. For my own needs I spend less than $10/year total spread around dozens of accounts.
- If one wants to learn AWS, they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible. It's better to overspend $5 at the beginning of the journey than to overspend $5k when going to prod.
- The main interest of local cloud is actually to make things easier and iterate faster, because you don't have to deal with the whole security layer. Since everything is local, you focus on using the services, period. If you instead relied on actual dev accounts, you'd first need to make sure that everything is secure; with local cloud you can skip all of that. But then, if you decide to go live, you have to pay down that security debt, and more often than not it breaks things that "work on my computer".
- Localstack has the actual support of AWS; that's why they have so many features and are able to keep up with service releases. I doubt this FOSS alternative will have it.
Localstack does have IAM emulation as part of the paid product. I'm intrigued to see how well this does at the same thing.
When you're running hundreds of integration test suites per day in CI pipelines, the free tier is irrelevant. You need fast, deterministic, isolated environments that spin up and tear down in seconds, not real AWS calls that introduce network latency, eventual consistency flakiness, rate limits, and costs that compound with every merge request.
It'd be great to just use AWS, but in practice it doesn't happen. Even if billing doesn't hit you, service limits plus the lack of any notion of namespacing will, very quickly, in CI. It's also not practical to give every dev their own AWS account; I did it with 200 people and it was OK, but it always caused management pain. The free tier also doesn't cover organizations.
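One common workaround when everyone shares a CI account is to namespace resource names per pipeline run. A sketch, where the env var and prefix scheme are illustrative conventions, not anything AWS-defined:

```python
import os
import uuid

def run_scoped(name: str) -> str:
    # Prefix shared-account resource names with a unique per-run id so
    # parallel CI pipelines don't collide. CI_PIPELINE_ID is whatever id
    # your CI system exposes; fall back to a random id for local runs.
    run_id = os.environ.get("CI_PIPELINE_ID") or uuid.uuid4().hex[:8]
    return f"ci-{run_id}-{name}"

queue_name = run_scoped("orders-queue")  # e.g. "ci-4f1c2a9b-orders-queue"
```

This helps with collisions but not with the other problems (limits, latency, teardown cost), which is where emulators come in.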
> they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible
This is a bizarre take. "The best way to learn fire safety is to get burned." You can understand AWS billing without treating surprise charges as a rite of passage.
Security for dev accounts is not a big deal, just give each developer an individual account and set up billing alerts.
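For reference, a sketch of what such an alert can look like: the parameter names below are the real CloudWatch billing-alarm fields you'd pass to put_metric_alarm (billing metrics live only in us-east-1), but the threshold, alarm name, and SNS topic ARN are placeholders:

```python
# Parameters for cloudwatch.put_metric_alarm(**billing_alarm) in each dev
# account. Threshold, name, and SNS topic are illustrative placeholders.
billing_alarm = dict(
    AlarmName="dev-account-monthly-spend",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",          # account-wide estimated spend
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                           # 6h; the metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=20.0,                         # alarm once the month passes $20
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```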
If your only focus is spending, yes.
Otherwise, a "not a big deal" dev account can quickly become the door to your whole org for attackers.
RDS databases, DynamoDB, and S3? Much less so.
That's my point: I'm not the one setting it up and using it; it's the devs.
And I'm not expecting them to know how to navigate a cloud provider securely.
So it's either setting up the dev accounts with all the required guardrails in place, or using a "local cloud" on their computers.
Hiding bad system design behind another docker container will not push you in the right direction, but the opposite.
In addition, this is definitely vibe-coded (50k LOC in one week), so I don't see how one can trust it at all.
No pull-requests, no real issues, it smells like it was auto-generated which is disappointing. Makes it harder to trust if you're going to test with "real data", how do we know it won't be sent elsewhere?
> how do we know it won't be sent elsewhere?
In the past, open source meant you trusted, in theory, that someone else would notice and report these things. These days, though, you can just load up your LLM of choice and ask it to do a security audit. There are ways to try to game such an audit, and LLMs aren't magical, but it would be pretty hard to subvert this kind of audit reliably.
There is no "this is the core, then we add S3, then we add RDS, then we add ..." history to view, and that seems both unnatural and surprising. Over half the commits are messing around with GitHub Actions and documentation.
Although I love Localstack and am grateful for what they have done, I always thought that an open, community-driven solution would be much more suitable and would open a lot of doors for AWS engineers to contribute back. I'm certain that it's in their best interest to do so (especially as many of their popular products have local versions).
It’s a no-brainer to me as AI adoption continues to increase: local-first integration testing is a must and teams that are equipped to do so will be ahead of everyone else
So by the time you're ready to push to staging, you should be past the point of wanting to emulate AWS and instead be pushing to UAT/test/staging (whatever your naming convention) AWS accounts.
Ideally you would have multiple non-production environments in AWS and if your teams are well staffed then your dedicated Cloud Platform / DevOps team should be locking these non-prod environments from developers in the same way as they do to production too.
Bonus points if you can spin up ephemeral environments automatically for feature branches via CI/CD. But that’s often impractical / not pragmatic for your average cloud-based project.
But you can’t have every dev tweaking staging at the same time as they work. How can you debug things when the ground is shifting beneath you?
Ideally every dev has their own AWS account to play with, but that can be cost prohibitive.
A good middle ground is where 95% of work is done locally using emulators and staging is used for the remaining 5%.
One of the first things I do when building a new component is create a docker compose environment for it.
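For AWS-backed components that usually means an emulator in the mix. A minimal sketch of such a compose file, assuming LocalStack (its image name, edge port 4566, and SERVICES variable are its published defaults; the service list is illustrative):

```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"          # single edge port for all emulated APIs
    environment:
      - SERVICES=s3,sqs      # start only what the tests need
```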