
It seems like you've traded a bunch of open source solutions for a walled garden of AWS and Amazon tools.


The trade-off is cost. The article even mentions they drove operational cost down by at least 70%. You can still run whatever open source library you need in the Lambda (though you still need to ask: is it worth the extra bytes?), but yeah, you are betting big on AWS. GCE serverless is way behind right now.
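For reference, a Lambda function is just a plain function handed an event. A minimal Python sketch (the event fields here are made up; any pip-installed library zipped into the deployment package can be imported alongside it, at the cost of those extra bytes):

```python
import json

def handler(event, context):
    # Lambda invokes this with the event payload; any open source
    # library bundled into the deployment zip can be imported above,
    # it just counts against the package-size limit.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```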


I wonder how much they could save in operational cost, and how much raw money that actually is, if they spent this time trying to optimise cost without rewriting the architecture. From my personal experience, saving 70% on AWS costs is really not that difficult.


I would argue that this is a poor trade-off. Servers are generally cheap for most projects while labor is expensive.

When using percentages you always have to ask 70% of what?


This has been my experience. Serverless is frustrating beyond belief. I haven’t deployed to production yet so maybe we’ll realize benefits enough to warrant the frustration at that time but so far I have serious doubts.


You can fire at least half of the labor after launch and maintenance can be done by fewer people or ad-hoc contractors.


This might fly in some niche software product, but for anyone who wants to run a permanent software business this is a recipe for disaster.


Sure, that's an immediate per-month savings.

What happens when/if Amazon changes their offering to something that makes your system incompatible overnight? What about your keys getting filched and you inadvertently running 100-GPU bitcoin clusters?

How much would you expend in doing an emergency mass migration somewhere else? Would your company even survive?

People who choose to use Amazon-exclusive APIs will get bitten. It's not an if, but a when. I'm not saying "don't buy EC2 instances or S3 storage"... those in the end are just VMs and storage that you can purchase elsewhere. But who else runs "Lambda"? What is your migration plan if they cancel your service, quit offering it, or stop supporting it?


> What happens when/if Amazon changes their offering to something that makes your system incompatible overnight?

They won't, or at least historically haven't. There are no guarantees, but this seems like a low risk.

More likely, AWS will raise their prices (or be undercut by a competitor) such that it makes financial sense to migrate to a new platform.


I may be overstating the "API change overnight" issue, but your comment does not address the "lose API keys" or "banned from being a customer" scenarios, or other events that would cause an org to lose service.

I remember something very similar happening to a Firebase customer, where a surprise billing change caused their bill to go from $10/mo to $1600/mo. That's the class of "oh shit" I'm talking about.


It's a real concern with AWS. I dealt with an incident where a dev-ops full-access API key accidentally got checked in to a public repo. Within an hour, there were hundreds of instances running at 100% CPU (presumably a bitcoin farm) in our production account.

We didn't get charged for the work, though we did have to talk to Amazon rep to alert them of what had happened.
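A cheap partial guard against this class of accident is scanning commits for key-shaped strings before they go public. A minimal sketch in Python (it only matches the classic AKIA-prefixed access key ID format, not every AWS credential type):

```python
import re

# Long-term AWS access key IDs classically start with "AKIA"
# followed by 16 uppercase letters or digits.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text):
    """Return any access-key-ID-shaped strings found in text."""
    return AWS_KEY_RE.findall(text)
```

Wiring something like this into a pre-commit hook won't catch secret access keys or session tokens, but it might have flagged a full-access key like the one above before it ever hit the public repo.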

It's good architectural design (these days) not to marry yourself to your underlying platform. As a core system design choice, Lambda worries me because of that vendor lock-in.
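One common way to hedge that lock-in is to keep the Lambda entry point as a thin adapter over platform-agnostic code. A sketch, with all names invented for illustration:

```python
def calculate_total(items):
    """Pure business logic: no AWS imports, runs on any platform."""
    return sum(qty * price for qty, price in items)

def lambda_handler(event, context):
    """Thin Lambda-specific adapter; the only part that needs
    rewriting if you migrate off Lambda."""
    return {"total": calculate_total(event["items"])}
```

Porting to another FaaS, or to a plain web server, then means replacing only the adapter, not the core.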


If he actually had scale, he would not be saving on cost. Ironic for a blog post advertising "unlimited scale". There's little chance the author has built anything that has scaled efficiently yet.


Nobody ever shows any love for Azure in these discussions.

Disclosure: I work on Project Riff at Pivotal.


I've used both Azure Functions and AWS Lambda in production environments. Azure Functions feel rushed out the door, with gotchas and problems around every corner, including major stability issues. They are mid-transition between v1 and v2: v1 is becoming outdated, with NuGet version lock-in and gotchas everywhere, while v2 is plagued with stability problems and breaking changes every other month.

AWS Lambdas have had more refinement done on them. For the time being I wouldn't recommend Azure Functions unless there are non-technical motivations.


OK, I'll do it, although this one requires that you have a live Kubernetes cluster to run your functions on.

I haven't heard much about it other than that it is more friendly open code from the lovely people that brought us Deis and Helm:

https://github.com/Azure/brigade

Hey, I bet you've heard of this, it sounds like Riff is absolutely in the same space :D

I think for most small enterprises today it's not too much to ask that you have a Kubernetes cluster with autoscaling provisioned somewhere. In 2018 you're not serious if you don't have at least that, or something comparable (I've heard "the war is over" and agree that people should just get comfortable with K8S already if they haven't yet).

There are enough managed offerings today that don't charge anything for masters, where you can simply push a button and get a cluster that is properly configured, and push another button to tear it down when you're done, or call an API and get the same effect.

I know that's not really "serverless" now, and it's all about the cost of running computers in the cloud on a 24/7 basis, so tell me if you've heard this one before...

I've never succeeded in standing up a Kubernetes cluster with an ASG for workers that will scale all the way down to zero when demand for worker nodes evaporates for a long enough period of time (10-30 mins?). Admittedly, I've never spent that much time trying, either... I'm privileged to have some real physical computers plugged into the wall that I don't have to turn off, so I guess I just don't have to think that way.

There's just not any technical reason that won't work though, is there? You'll need the master(s) to hang around, so it's possible to notice Pending pods and scale back up when the demand returns, right?

(So why am I not seeing this capability advertised or demoed by any managed Kubernetes provider? Is it really just the simple economic answer that, given a pricing model of no-cost masters, they don't make any money off you during periods when you aren't running any worker nodes?)


> Hey, I bet you've heard of this, it sounds like Riff is absolutely in the same space :D

I have, and I admire a lot of the work the Deis folk have been doing at Microsoft. I have different opinions about what the future looks like, but I could be wrong. And I'm not the only member of the riff team.

In terms of "scale to zero" for workers, I think your "two whys" need is containers on-demand, not workers on-demand. That need is going to be met by the various virtual kubelet efforts underway. Azure have been out front on this, actually, with AWS Fargate coming hot on their heels. I expect that as GKE matures it will hit this too.

As we move towards "five whys", it turns out that we are essentially re-treading the path that Cloud Foundry got to years ago (and Heroku before that): focus on making it easy to run code.

Containers are in themselves an almost-irrelevant implementation detail 99% of devs should never have to care about, just in the same way that most of us don't think about mallocs any more.

I call this the Onsi Haiku Test, after the `cf push` haiku that Onsi Fakhouri gave at a conference a few years back:

    Here is my source code.
    Run it on the cloud for me.
    I do not care how.
And coming into riff from the Cloud Foundry universe, one of my personal agenda items is that riff should pass the Onsi Haiku Test with flying colours.


I would love to hear more of this kind of talk.

I'd really like to get you in a room with a couple of architects and technology leadership at my office. (No, seriously, maybe a Zoom room.)

I'm on the Kubernetes train, but they are mostly still holding out hope for Fargate, having never made this leap, and I have a feeling I never would have got into the k8s world without the kind of help I got from Deis.

> Containers are in themselves an almost-irrelevant implementation detail 99% of devs should never have to care about

Couldn't agree more. Deis made this easy for me before it was on Kubernetes (CoreOS and Fleet), and when I was finally convinced to leave that stack behind, Deis made it easy for me again to do the same on Kubernetes. I'm the biggest fan of Deis anywhere.

(I've felt the loss of the Deis Workflow maintainers so badly that I'm personally working on a team to fork Deis! But the bus-factor risk is way too high for my place of work, which is a university; they want something they can understand and can support, or pay a vendor to support, if I'm not around anymore. That won't stop me, but it also means I need to keep an ear to the ground for something we can use to start doing CI/CD here.)

The technical leaders in my place of work have already made the leap to AWS, but are just testing the waters of e.g. the spot market and serverless (Lambda) to try to get the cost and reliability benefits to materialize, and they would really like to skip containers altogether and start building everything for Lambda. I know enough to say "whoa there, Icarus, that's no way to reach Lift-and-Shift", and I'm pretty sure from my experience you should start lower (but still with some higher abstraction than plain old Docker containers, and also not Compose or Fargate).

So I'm in a pickle because Deis is no longer offering support for end users, otherwise that's probably what I'd still be recommending.

I've been looking at possible replacements like Cloud Foundry (and Convox, and Empire), but your haiku hits me right in the feels and is the really important message I need to deliver. I am developing an application right now, and I need the kind of devops machinery and support that is appropriate for that kind of effort in 2018.

(and I definitely don't want to be embroiled in exploratory project to implement containers for the whole organization some time in the next 5 years, at least not before we can get something out the door for our customers across campus...)

I just don't think we do enough software development to justify spending on something like PCF but I'm not the one who would need to be convinced, either!


If you're using buildpacks, Cloud Foundry is the place to be. I obviously feel like PCF is the bee's knees, but there are OSS alternatives.

You can run OSS Cloud Foundry (now called Cloud Foundry Application Runtime or CFAR) using BOSH and cf-deployment. You can also run Kubernetes with the same operator tools if you use CF Container Runtime (CFCR), for people who need that capability.

SUSE sponsor an OSS GUI called Stratos.

For CI/CD, I am alllll about Concourse. Automation-as-a-Service is a secret gamechanger.

My work email is in my profile if you'd like me to hop on a call with anyone.


Hey, I just watched the Riff video and I'm a little blown away! Can't believe you've been downvoted


Azure: Fix the API Gateway. And the "managed database". Then I might go back to using you.


To be fair, everything is a walled garden, even open source solutions. You still need infrastructure to run your code, and unless you want to build your own servers you still need to pay AWS/Azure/GCP/DigitalOcean/etc. to rent that infra. So I really don't see the problem with using something like AWS exclusively. If anything, it makes your life easier.


Note that this discussion is about AWS Lambda, not AWS generally.

There are upsides and downsides to using AWS Lambda, but characterizing it as a walled garden is pretty reasonable. It's not the same as code you can run on any Linux server.


You really class DO with those other guys for lock-in?

Or is that the part you are blind to?


I mean, DigitalOcean provides infrastructure too. Nothing is stopping you from running something like OpenFaaS on DO.


Yeah, everything is a monopoly. Even a free market. You still need to buy stuff!

Wut?


Agreed here. If you can't run your code without AWS or whatever your vendor is, you've got bigger problems.

I'm talking about the core business code. Ops is important but replaceable.


To be fair, the Serverless framework supports several cloud providers besides AWS, but I'm not so sure how easily one can switch mid-project.


Did you exchange a walk-on part in a war for a lead role in a cage?


Wish you were here to tell you that the Pink Floyd reference doesn't quite fit.


You say it like it’s a bad thing.

I’m betting on Amazon being in business at least as long as IBM. The benefits far outweigh the costs of having to port my code in 100+ years. If the machines aren’t sentient by then...


I hope you enjoy lock-in pricing...



