I think you could actually implement this with nginx + <insert evented/actor server (tornado, rainbows, mochiweb, yaws etc.) here> + your application quite easily. Nginx buffers the client's request before passing it to the backend (you cannot turn this off), but you can turn proxy_buffering off, which stops nginx from buffering the response to the client. In this middle tier you could immediately send the headers + loading JS and flush the buffer, then determine which pagelets to render and either render them in-process or delegate further to application servers over HTTP or your own protocol.
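A minimal sketch of that middle tier, modeled as a generator of response chunks (the `BigPipe.onArrive` loader name, the pagelet names, and `render_pagelet` are all made up for illustration, not Facebook's actual API):

```python
# Flush the page shell (headers + loading JS) immediately, then emit each
# pagelet's payload as soon as its renderer finishes.
from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import time


def render_pagelet(name):
    # Stand-in for an in-process render or a call to a backend app server.
    time.sleep(0.01)
    return {"id": name, "html": f"<div id='{name}'>…</div>"}


def stream_page(pagelets):
    """Yield response chunks in the order they become available."""
    # First chunk: the static shell plus the JS that will place pagelets.
    yield "<html><head><script src='/bigpipe.js'></script></head><body>\n"
    with ThreadPoolExecutor(max_workers=len(pagelets)) as pool:
        futures = [pool.submit(render_pagelet, p) for p in pagelets]
        for fut in as_completed(futures):
            # Each pagelet ships as a <script> payload the loader JS consumes.
            yield f"<script>BigPipe.onArrive({json.dumps(fut.result())})</script>\n"
    yield "</body></html>\n"


chunks = list(stream_page(["header", "feed", "chat"]))
```

In a real evented server (e.g. tornado) each `yield` would be a write + flush on the open connection, so the browser starts parsing the shell while the pagelets are still rendering.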
This does raise the question: why use nginx at all? It provides you with a lot of protection against malformed requests and general fuckery. If you need the speed you can push the middle layer back into nginx as a module, which would be screaming fast.
This reminds me of Heroku, who do something like this with their 'routing mesh': client -> nginx -> routing mesh (Erlang) -> Thin (Ruby app server). The Erlang process knows which EC2 instance has the Ruby process to serve the request and is basically a smart proxy. You could quite easily query six backends simultaneously, one per pagelet, pipe the JSON out to the client, then send the footer as well.
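The smart-proxy fan-out could be sketched like this: a routing table maps each pagelet to the backend instance that owns it, all six are queried concurrently, and each JSON payload is piped to the client the moment it arrives (the addresses and `fetch` helper are hypothetical stand-ins, not Heroku's actual mesh):

```python
# Query all pagelet backends at once; stream each JSON result as it completes.
from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import random
import time

# pagelet -> backend instance that serves it (hypothetical addresses)
ROUTES = {f"pagelet{i}": f"10.0.0.{i}:3000" for i in range(6)}


def fetch(pagelet, instance):
    # Stand-in for an HTTP request to the app server process on `instance`.
    time.sleep(random.uniform(0, 0.02))
    return json.dumps({"pagelet": pagelet, "from": instance})


def proxy(out):
    with ThreadPoolExecutor(max_workers=len(ROUTES)) as pool:
        futures = [pool.submit(fetch, p, inst) for p, inst in ROUTES.items()]
        for fut in as_completed(futures):  # fastest backend first
            out.append(fut.result())       # pipe the JSON straight through
    out.append('{"footer": true}')         # footer once all pagelets are out


received = []
proxy(received)
```

The key property is that a slow backend delays only its own pagelet, not the five others or the initial shell.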
Actually, you can implement this with nginx + client-side JavaScript to merge "sub-pages" right now; you don't need any server-side language or framework.
Guys from Taobao (http://www.taobao.com) have open-sourced pretty much everything you need to do that:
With all respect to the people at Taobao, this is different. In Facebook's case the whole page (without CSS, JavaScript and images) is generated through one HTTP request. In your case, they use Ajax to fetch content through several HTTP requests. IMHO the Facebook method is better: it avoids per-request HTTP overhead and parallelizes as many steps as possible.
As you can see, every "sub-page" is generated individually. With the presented configuration everything is chunked and flushed, so it is sent to the client right away. The response on the client side looks like this:
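The configuration itself isn't reproduced in this thread; a minimal nginx sketch of the same idea (not the actual Taobao config) is just unbuffered proxying, so each chunk a backend emits is flushed to the client immediately:

```nginx
# Hedged sketch: stream sub-page responses through without buffering.
location /pagelet/ {
    proxy_pass http://backends;    # upstream group of app servers (assumed)
    proxy_buffering off;           # flush backend output to the client as it arrives
    chunked_transfer_encoding on;  # default for HTTP/1.1 responses
}
```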
I think this deserves some experimentation.
nginx + Erlang + Rails (serving JSON).