IIRC at one point Facebook was a 1GB+ executable transpiled from PHP to C++ using HipHop, and that certainly fits any reasonable definition of a monolith, so yes, monoliths can be scaled to an absurd degree.
Because Facebook's entire code base is compiled down to a single binary executable, the company's deployment process is quite different from what you'd normally expect in a PHP environment. Rossi told me that the binary, which represents the entire Facebook application, is approximately 1.5GB in size. When Facebook updates its code and generates a new build, the new binary has to be pushed to all of the company's servers.
Moving a 1.5GB binary blob to countless servers is a non-trivial technical challenge. After exploring several solutions, Facebook came up with the idea of using BitTorrent, the popular peer-to-peer file-sharing protocol. BitTorrent is very good at distributing large files across a large number of servers.
Rossi explained that Facebook created its own custom BitTorrent tracker, which is designed so that individual servers in Facebook's infrastructure will try to obtain slices from other servers that are on the same node or rack, thus reducing total latency.
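The article doesn't describe the tracker's internals, but the idea of preferring topologically close peers can be sketched roughly like this (a hypothetical illustration, not Facebook's actual code; all names and the rack/cluster tagging scheme are assumptions):

```python
# Hypothetical sketch of rack-aware peer selection. Each peer is tagged
# with its rack and cluster; when a server asks the tracker for peers,
# same-rack peers rank first, then same-cluster, then everything else.

def peer_priority(requester, peer):
    """Lower value = preferred. 0: same rack, 1: same cluster, 2: other."""
    if peer["rack"] == requester["rack"]:
        return 0
    if peer["cluster"] == requester["cluster"]:
        return 1
    return 2

def select_peers(requester, peers, limit=50):
    """Return up to `limit` candidate peers, topologically nearest first."""
    ranked = sorted(peers, key=lambda p: peer_priority(requester, p))
    return ranked[:limit]

if __name__ == "__main__":
    me = {"host": "web42", "rack": "r7", "cluster": "c2"}
    peers = [
        {"host": "web43", "rack": "r7", "cluster": "c2"},
        {"host": "web90", "rack": "r9", "cluster": "c2"},
        {"host": "db01", "rack": "r1", "cluster": "c5"},
    ]
    print([p["host"] for p in select_peers(me, peers)])
    # → ['web43', 'web90', 'db01']
```

Because most traffic stays inside a rack, the cross-rack and cross-datacenter links see far less of the 1.5GB push.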
Rolling out a Facebook update takes an average of 30 minutes—15 minutes to generate the binary executable and another 15 minutes to push the executable to most of Facebook's servers via BitTorrent.
Interesting that they call different services over loopback.
I've built sensible monoliths using in-process services: you invoke them with a plain function call. This requires keeping your business logic out of the webapp layer and in a "service" layer, but it's a wonderful way to decompose functionality and build nice monoliths that still have separation of concerns.
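A minimal sketch of what that looks like (all names hypothetical): the service class holds the business logic and knows nothing about HTTP, while the "web layer" is a thin handler that just translates a request into a function call.

```python
# In-process service layer: business logic lives in a plain class,
# reachable by an ordinary function call -- no loopback, no serialization.

class UserService:
    """Business logic, independent of any web framework."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def register(self, name):
        user = {"id": self._next_id, "name": name}
        self._users[self._next_id] = user
        self._next_id += 1
        return user

    def get(self, user_id):
        return self._users.get(user_id)


# "Web layer": only parses input and delegates to the service.
def handle_register(service, form):
    user = service.register(form["name"])
    return {"status": 201, "body": user}


if __name__ == "__main__":
    svc = UserService()
    resp = handle_register(svc, {"name": "Ada"})
    print(resp["status"], resp["body"]["id"])
    # → 201 1
```

The nice property is that the boundary is still there, so if one service later needs to scale independently, the function call can be swapped for a network call without rewriting the logic on either side.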