
Cloud is basically an infinitely scalable mainframe. You have dedicated compute optimised for specific workloads, and you string these together in a configuration that makes sense for your requirements.

If you understand the benefits of cloud over generic x86 compute, then you understand mainframes.

Cloud is mainframes gone full circle.



> Cloud is mainframes gone full circle.

Except that now you need to develop the software that gives mainframes their famed reliability yourself. The applications are very different: software developed for the cloud always needs to assume that part of the system may become unavailable and work around that. A lot of the stack, from the cluster manager ensuring that a failed node gets replaced and the processes running on it are spun up elsewhere, all the way up to your own code retrying failed operations, needs to be there if you aim for highly reliable apps. With mainframes, you just pretend the computer is perfect and never fails (some exaggeration here, but not much).
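The "code that retries failed operations" part can be sketched roughly like this: a minimal retry helper with exponential backoff and jitter. The names (`retry`, `operation`, `base_delay`) are illustrative, not from any particular cloud SDK.

```python
import random
import time

def retry(operation, max_attempts=5, base_delay=0.1):
    """Retry a flaky zero-argument callable with exponential backoff.

    Illustrative sketch only; real cloud SDKs usually build this in,
    often with retry budgets and error-class filtering on top.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the failure
            # Exponential backoff with full jitter, so many clients
            # retrying at once don't hammer the service in lockstep.
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter matters as much as the backoff: without it, every client that saw the same failure retries at the same instant, which is exactly the thundering herd a mainframe never has to think about.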

Also, reliability is just one aspect; another impressive feature is observability. Mainframes were the cloud of their day, and you can trace resource usage in exquisite detail, because clients used to be billed by the CPU cycle. Add to that the built-in hardware reliability features (for instance, IBM mainframes organise memory in RAID-like arrays).


But latency

The cache design in IBM's Z series is very different from what cloud hardware offers for collaborative job processing.


While I agree with the sentiment, for me it feels more like UNIX gone full circle.

Instead of having everyone doing telnet, rsh and X Windows connections into the team's development server, we now use ssh and the browser alongside cloud IDEs.



