>The problem of "I have to get a large portion of the DB into service X" is one I've worked on, so I know the initial solution is more fragile. It doesn't deal with back pressure. If a service goes down, it "loses" writes and must be resynced from a good state. If, for whatever reason, data science sets up an HDFS cluster, I need to push writes there from my app.
It's hard for me to discuss this because the terms are loosely defined, but my feeling is that you may be making implicit false assumptions around the necessary design of the architecture.
>Sure if you only need to write to one DB, the Confluence method is overkill - however if that solution works for you, I'd imagine you haven't hit the volume and the latency requirements that would require you to seek out a solution like Confluence's anyways.
This explanation is probably the reason for the explosion in overengineering. People hear "Hey, if you're not making things really hard, you're just not important enough!"
Well, everyone thinks they're important, so of course, they must make things hard! If they don't, they're not important.
I work with an organization where most people insist we are at this scale. It's totally false. Our load could easily be handled by one well-tuned database replication setup per app and probably 3-4 app servers per app. But this isn't good enough, because, you see, we are very important.
That means that we have dozens of different types of data storage solutions scattered all over the place (including Mongo, Riak, and Dynamo in addition to a variety of SQL DBs), we have dozens of "microservices", and we have hundreds of app servers, even though the technical requirements could be fulfilled with much, much less.
So why do we have all that? Well, because we're "at scale", which is to say, we want to be important. We have a bunch of people sitting around an office all day who appreciate the feeling of importance more than the feeling of a well-engineered system.
Again, I'm not saying complicated architectures are never justified, but I think that in many if not most cases, complication arises due to organizational and personal psychology much more than any technical constraint that truly mandates it.