
I used to work at Amazon in the late '90s, and this was the policy they followed. The Apache server module, written in C, leaked so much memory that the process had to be killed every 10 requests. The problem with the strategy was that it required a lot of CPU and RAM to start up a new process. Amazon's answer was to simply throw hardware at the problem: growing the company fast was more important than cleaning up RAM. They did get around to resolving the problems a few years later with better architectures. This too was an example of good engineering trade-offs.
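The comment doesn't say how the recycling was enforced; Apache's own prefork model exposes this knob as the MaxRequestsPerChild directive. A minimal sketch of the pattern in C, with handle_request and the limit of 10 as placeholders: the worker serves a fixed number of requests and exits, a supervisor re-forks it, and whatever the handler leaked is reclaimed by the OS on exit.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_REQUESTS 10   /* recycle the worker after this many requests */

    /* stand-in for the real (leaky) request handler */
    static void handle_request(int n) { printf("worker %d: request %d\n", getpid(), n); }

    static void worker_loop(void) {
        for (int served = 0; served < MAX_REQUESTS; served++)
            handle_request(served);
        _exit(0);             /* worker dies; leaked memory goes back to the OS */
    }

    int main(void) {
        for (;;) {            /* supervisor: respawn a worker whenever one exits */
            pid_t pid = fork();
            if (pid == 0) worker_loop();
            if (pid < 0) { perror("fork"); return 1; }
            waitpid(pid, NULL, 0);
        }
    }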


> The problem with the strategy was that it required a lot of CPU and RAM to start up a new process.

It's not really kosher, but why not just keep around a fresh process that they can continually fork new handlers from?
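Roughly the "zygote" idea: a template process does its expensive initialization once, then forks a cheap copy-on-write child per request (or per batch). A sketch, serialized for brevity, with expensive_init and handle_request as placeholder names:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void expensive_init(void)  { /* load config, warm caches, etc. */ }
    static void handle_request(void)  { printf("handled by %d\n", getpid()); }

    int main(void) {
        expensive_init();              /* pay the startup cost once */
        for (;;) {
            pid_t pid = fork();        /* child inherits the warm state via COW */
            if (pid == 0) {
                handle_request();
                _exit(0);              /* child exits, its leaks die with it */
            }
            if (pid < 0) { perror("fork"); exit(1); }
            waitpid(pid, NULL, 0);
        }
    }

A real server would accept connections and run children concurrently instead of waiting on each one.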


Setting it up was expensive, so there's a good chance it involved initializing libraries, launching threads, or otherwise creating state that isn't handled correctly by fork.
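The thread case is the classic one: only the calling thread survives fork(), so any helper thread started at init simply doesn't exist in the child, and any lock it held stays locked. A small illustration, with the background thread standing in for whatever the initialization started:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *background(void *arg) {       /* e.g. a cache refresher started at init */
        (void)arg;
        for (;;) { puts("background tick"); sleep(1); }
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, background, NULL);
        sleep(2);                              /* let the thread get going */

        if (fork() == 0) {
            /* The child has no background thread; if it depended on that
               thread's work, or on a mutex the thread held, it is stuck. */
            sleep(3);
            puts("child: no more ticks here");
            _exit(0);
        }
        pthread_join(t, NULL);                 /* parent keeps running */
    }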



