
I've been in the AWS world for 4+ years now and my immediate feedback is: don't run any local emulators. None! Write unit tests for the internals and test your system end-to-end. I say that both because AWS services can have unpredictable behavior that you need to account for, and because local emulators are at best functional, but in reality far from emulating the AWS world on a 1:1 scale (especially the unpredictable behaviors I mentioned). So instead optimize for many local unit tests and many live end-to-end tests (which implies many deployments and parallel environments: prod, staging, dev, etc.)

When it comes to Lambdas, given the reasons above, there's only one thing that can improve the experience: a PROXY. Before I went on parental leave I had the idea of creating a proxy lambda which can be configured with an IP and port number. That IP and port point at your local dev environment. This way, when developing, you can instruct a live system to short-circuit and proxy calls to a local lambda listening on the respective port. Trigger end-to-end tests by invoking AWS services that will eventually call the proxy lambda, which then calls your local lambda with the same environment/context/input; your local lambda replies with output, which reaches the proxy lambda, which forwards the same content back unchanged.
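A minimal sketch of what such a proxy lambda could look like, assuming hypothetical DEV_HOST/DEV_PORT env vars pointing at your dev machine (or at a tunnel in front of it) and a local service that accepts the forwarded event on an /invoke path:

```python
import json
import os
import urllib.request

def handler(event, context):
    # Hypothetical configuration: where your local dev lambda listens.
    url = "http://{}:{}/invoke".format(
        os.environ["DEV_HOST"], os.environ["DEV_PORT"])
    # Forward the event plus the proxy lambda's own env vars, so the
    # local lambda can run with the same configuration/credentials.
    body = json.dumps(
        {"event": event, "env": dict(os.environ)}).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        # Relay the local lambda's output back unchanged, so callers
        # can't tell the difference from a normal invocation.
        return json.loads(resp.read().decode("utf-8"))
```

The interesting property is that the rest of the live system is untouched: triggers, IAM, and event shapes are all real, and only the business logic runs on your machine.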



Yes, unit tests are the way to go. Lambda being a single function with a defined output makes this really simple. We have unit tests for sublibraries, but the "integration test"-level tests are just calling the handler with the test payload and comparing it against the expected response.
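That pattern, sketched with a hypothetical handler (the event shape here mimics an API-Gateway-style payload; names are illustrative, not from any real project):

```python
import json

# Hypothetical handler under test.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": "hello " + name}),
    }

# "Integration test"-level test: invoke the handler directly with a
# test payload and compare against the expected response.
def test_handler():
    event = {"queryStringParameters": {"name": "hn"}}
    resp = handler(event, None)
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"]) == {"greeting": "hello hn"}
```

No emulator involved: the handler is just a function, so the test payload is the whole "environment".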


To check my understanding, you basically deploy a placeholder lambda which proxies events to your dev machine?

That's pretty neat if so, I might have to try that out. Are there any downsides to this approach do you think?

Thanks for your thoughts on emulators/tests too.



Correct. I'm very pedantic and usually I can't help but find flaws in all proposals :)) but I really can't think of anything here. I can see limitations, though: if you use lambda layers, or native code built specifically for the AWS architecture, etc., then you need to account for that when proxying, but... it's not impossible. You also need to account for having a public IP, or use a service that exposes a public IP/port and redirects to your local dev machine. The idea is not super simple in practice, but it is very simple compared to the emulators, and it gets you very close to 1:1 parity for all those corner cases.


Are you sure?

Getting your live stack to call your dev machine seems a surefire way to slip up and accidentally send customers a test email. Or accidentally order $100,000,000 widgets.

It sounds great on paper, but in fact it's incredibly risky and monumentally stupid.


Read my comments in https://news.ycombinator.com/item?id=26857198

When you write "your live stack", my brain freezes. Even a simple "webapp" project in my team had 4 common stacks (prod, staging, dev, git-master which had every push to master getting deployed) plus individual stacks, one per developer, with the ability to create extra stacks for specific feature development.

Only prod had real data. Obviously (to me) you wouldn't short-circuit lambdas in the prod env. Not even staging. Not even dev. Not even git-master. Short-circuit development environments where you explore. Very low risks there. We need to learn it's ok to fuck up, as long as we fuck up in a safe environment, nowhere near the nuclear button ;)


Not unless you have live secrets in your local application on your dev machine.

Just create a staging env, if you don't have one already, and point to staging.


For a similar relay/proxy setup for development, I've found that metadata-endpoint emulation like the one provided by aws-vault can make it easier to have any AWS SDK calls behave the same way as they would in AWS itself. You do have to assume the role that the lambda itself would normally assume, and you have to allow more than just service-linked execution, but you do get to test the whole IAM chain that way as well.
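For the record, a sketch of this kind of invocation (the profile name is hypothetical, and the exact flags may vary by aws-vault version):

```shell
# aws-vault's --server mode runs a local EC2-metadata-style credential
# server, so the SDK in handler.py resolves credentials the same way it
# would inside AWS, instead of reading local static keys.
# "lambda-dev-role" is an assumed example profile mapped to the role
# the lambda would normally assume.
aws-vault exec lambda-dev-role --server -- python handler.py
```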


Not sure that I follow you, but the setup I described would proxy the env vars as well, among them the AWS credentials, so you should be able to run locally with exactly the same liberties as the original AWS lambda.


Ha, I'm glad my simplification did not fall too short!

I did wonder with regards to the IP address. If you had a VPN set up for access to a VPC and ran the lambda in that VPC then that would be another option I think. You would just need to configure a static IP for your dev machine within the private network.


Right! I haven't explored that thought. My team kept everything serverless so we touched zero VPC/etc. From a generic perspective, having a setup to proxy into a LAN is more valuable I think - I can imagine for instance how my team had access to certain AWS services, but not all, etc. etc. But these are details - the idea would be worth exploring.


One "trick" I usually use with Lambdas - add a main() method that allows you to run the Lambda as a regular command line app. If you authenticate locally with a test environment in AWS, you can pretty much test everything in your code easily that way.

You can use this in your local dev env and even in integration tests. I don't need to do this a lot but sometimes it helps when troubleshooting.
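A minimal sketch of the trick, with a hypothetical handler (the point is only the shape: the same function serves as both the Lambda entry point and a CLI app):

```python
import argparse
import json

# Hypothetical lambda handler; in real code this is your actual one.
def handler(event, context):
    return {"ok": True, "received": event}

# main() turns the lambda into a regular command-line app: pass the
# event payload as a JSON argument and print the handler's response.
# With local AWS credentials for a test environment, any SDK calls
# inside the handler hit that environment directly.
def main(argv=None):
    parser = argparse.ArgumentParser(description="Run the lambda locally")
    parser.add_argument("event_json", help="event payload as JSON")
    args = parser.parse_args(argv)
    result = handler(json.loads(args.event_json), None)
    print(json.dumps(result))

if __name__ == "__main__":
    main()
```

Usage would be something like `python handler.py '{"key": "value"}'`, which also makes the same entry point reusable from integration tests.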

Also, you might actually be building a tool which can be used from the command line. I recently took an existing command line tool (written in Go) and ported it to Lambda to expose it as a chatbot.


Modern software development: man-in-the-middle attacks on AWS.


This, 10x!!! Personally I find these setups a complete design failure. Make it work, and then figure out how to develop within the constraints you actually find later (not the ones you have designed yourself).

So - reverse engineer and monitor in-transit behavior it is...


This is a brilliant idea I'm stealing for the next time I'm doing Lambda dev.


This is the approach I use. For the AWS-specific workflows (e.g. SNS inbound to an S3 bucket, processed and put in another bucket), getting the structure of a heterogeneous set of events can be a hassle.


> and test your system end-to-end.

How do you automate this?


Living in my team's bubble I thought everyone runs, or tries to run, parallel environments: prod, staging, dev, but also an individual (per-person) or feature env. Why? Because there's no emulator or documentation that will teach you real behavior. Like others have said, AWS seems out of this world. Just like GCP and Azure, I might add. Some things you don't expect, and they mesmerize you with how smart they are. Some you expect, and you can't fathom how come you're the "only" one screaming. Random thought: this is how I ended up logging everything I bumped into in "Fl-aws" https://github.com/andreineculau/fl-aws

Back to the point: the reality is that many build their AWS environment (prod) manually, maybe duplicate it once (dev), also manually, maybe use some automation for their "code" (lambda), but that's it. This makes it practically impossible to run end-to-end tests. You can't do that in prod for obvious reasons, and you can't do it in dev either: you have many devs queueing, maybe dev is not in sync with prod, etc.

My team ran cloudformation end-to-end. We actually orchestrated and wrapped cloudformation (why we didn't use terraform etc. is yet another topic) so that if something couldn't be done in CFN, it would still be automated and reproducible. Long story short, in 30 minutes (it took that long because we had to wait for cloudfront etc.) we had a new environment, ready to play with. A total sandbox. Every dev had their own, and it was easy to deploy from a release artifact or a git branch to this environment. Similarly, you could create a separate env for more elaborate changes to the architecture. And test in a live environment.
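The per-developer-stack part of this can be sketched with plain AWS CLI (template and naming are hypothetical; the original setup wrapped CFN in its own orchestration):

```shell
# One stack per developer, keyed on the local username, so every dev
# gets an isolated sandbox environment from the same template.
aws cloudformation deploy \
  --template-file template.yml \
  --stack-name myapp-dev-"$USER" \
  --parameter-overrides EnvName=dev-"$USER" \
  --capabilities CAPABILITY_IAM
```

`deploy` creates the stack if it doesn't exist and updates it via a change set if it does, which is what makes re-running it from CI or a laptop cheap.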

Finally to your question: how do you test end-to-end?

If we talk about lambdas, because that's where the business logic lives in a "serverless" architecture, then the answer is: by calling the system, which will eventually call your lambda(s) along the way. If your lambda is sitting behind API Gateway, then fire an HTTP request. Is it triggered when objects land on S3? Then push some object to S3. How do you assert? Just the same: HTTP response, S3 changes, etc. Not to mention you can also check cloudwatch for specific log entries (though they are not instant).
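Because those side effects are eventually consistent (the object shows up in the output bucket, or the log line in CloudWatch, some seconds later), end-to-end assertions usually need a polling helper along these lines (a generic sketch; the boto3 calls in the comment are illustrative):

```python
import time

# Poll until `condition` returns something truthy, or give up after
# `timeout` seconds. Returns whatever the condition returned.
def wait_for(condition, timeout=30.0, interval=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# In a real end-to-end test you would trigger the system and then
# poll for the outcome, e.g. with boto3 (assumed dependency):
#
#   s3.put_object(Bucket=IN_BUCKET, Key="in/x.json", Body=payload)
#   obj = wait_for(lambda: object_or_none(s3, OUT_BUCKET, "out/x.json"))
```

The same helper covers the CloudWatch case: the condition just filters recent log events for the expected entry.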

With this type of setup, which sounds complex but isn't, since it is 100% reproducible (also from project to project; I had several), adding this proxy-to-my-dev-machine lambda would mean I can make local changes and then fire unit AND end-to-end tests without pushing any changes to AWS, which is the main time/energy consumer imo.

PS: sorry for the wall of text. Like I said, I recently realized that development realities have huge discrepancies, so I tried to summarize my reality :)


A wall of text that's full of something interesting and useful is way more welcome than someone trying to do a funny reddit quip! Thanks!

We have the ability to spin up one-off environments per-project, including our serverless stuff - and we do it automatically on every CI run - so I guess the answer is to do that, and test against that.


Yes, this idea could save my team a ton of headache.

Do you have a git repo for your proxy code?


Unfortunately I had this idea just before I changed jobs to a "server-full" position, and now I'm on parental leave, so I don't have this proxy lambda implemented as a generic solution :(

I only had a PoC, and then told myself this would be brilliant as an abstraction.


Sounds a little like Tilt, Skaffold, or buildpacks?


More like telepresence than skaffold


I don't know them in depth, but yes, I'd also go for telepresence (which is something I discovered after I had my eureka moment and started googling for similar ideas or maybe even implementations).



