I started using it after getting frustrated with Gitlab's install process on RHEL (a year ago), and I used Gitolite before that. Gitblit has been refreshingly simple to deal with. Am I missing something other than the standard JVM hate?
I see a lovely list of all sorts of features, but then I get to the limitations: "Built-in access controls are not branch-based, they are repository-based."
CloudWatch does what you're referring to as well. It's more of a basic server monitoring system that happens to integrate with the load balancer.
You get a set of basic VM-level metrics, and you can feed it custom metrics from your app or log files, all of which can be configured to alarm. I don't think it's possible to run advanced statistics on the metrics for alarming (e.g., standard deviation over a 30-minute window exceeds N), but it might be. Usually it's just an event count, like more than N 500 errors over X time.
I do agree you need to think deeper than basic health checks, though; 'broken server' is always a hard boolean to nail down.
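The "more than N 500 errors over X time" style of alarm mentioned above boils down to a sliding-window count. A minimal sketch of that logic in plain Python (function and parameter names are my own, not CloudWatch's API):

```python
def should_alarm(error_timestamps, now, threshold=10, window_seconds=300):
    """Fire when more than `threshold` errors occurred in the
    trailing `window_seconds` before `now`.

    error_timestamps: iterable of Unix timestamps, one per 500 error.
    """
    recent = [t for t in error_timestamps if 0 <= now - t <= window_seconds]
    return len(recent) > threshold


# 11 errors inside a 5-minute window trips a threshold of 10:
print(should_alarm([100] * 11, now=200))          # True
# 3 old errors, all outside the window, do not:
print(should_alarm([0, 1, 2], now=400))           # False
```

In CloudWatch terms this corresponds roughly to a Sum statistic over a fixed period with a GreaterThanThreshold comparison; the point is that it's a simple count, not a statistical model of "broken".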
I think the problem is that clustering is still very much a duct-tape situation in PostgreSQL, with no real consensus on how to build out a cluster.
Postgres-XL looks great for scale-out, but you need 4 independent types of servers. Even with all those moving parts, it doesn't provide availability. If you want failover, you need Pacemaker for the data nodes with traditional sync replication, something like VRRP for your balancer, and something else to fail over the coordinator. Several of these pieces can be tricky to set up on a cloud provider.
BDR looks nice, but it looks like there could be lots of gotchas for consistency in there. Maybe it is a magic bullet though... I don't know much about it yet.
Contrast that with something like RethinkDB, MySQL Galera, Cassandra, etc.: you start up enough nodes for quorum, tell them about each other, and you're pretty much done. The clients can handle the balancing, or you can use a pooler/balancer.
In my perfect world, I'd install postgresql-awesome-cluster-edition on 3 nodes, add the 3 IPs (or turn on multicast discovery, if my environment supports it), and away we go with read scalability and availability. I do this today with MySQL Galera, and other than the fact that it's MySQL, it's awesome. For writes, once you add 4 or more nodes, there should be some sort of shard system like XL has.
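For comparison, the Galera setup described above really is about this small: a handful of wsrep settings in my.cnf on each node, then start them up. A sketch (paths, IPs, and cluster name are illustrative; the option names are Galera's real ones):

```ini
[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2

wsrep_on               = ON
wsrep_provider         = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name     = my_cluster
# Same three node IPs on every member; first node bootstraps with gcomm://
wsrep_cluster_address  = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address     = 10.0.0.1
```

That's the whole "tell them about each other" step; there's no separate coordinator or balancer tier to fail over.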
That said, PostgreSQL is still clearly the best SQL (and even NoSQL) single-node server out there; it's a really great piece of software.
If you honestly believe that all you have to do is stand up a bunch of instances of Mongo/Cassandra/whatever and you instantly get acceptable HA, then you need to read the [Jepsen series](https://aphyr.com/tags/jepsen).
It depends on what you consider "acceptable HA". There are many cases where I'm not trying to protect against a network partition (single data center, monitored batch data loads, etc.) and don't have a requirement for that level of tolerance. However, you're right that it's important to know that nearly every distributed system has edge cases where things might not behave as you thought. Elasticsearch has a section on their website detailing their resiliency efforts. I wish every company was as transparent about what they're doing on that front, so we could all plan and set expectations better.
I'm your counterpart at another agency. I'm glad to see other agencies are not doing FIPS on their websites (which would be RHEL with mod_nss only). I'm a bit confused, though: last I looked, FedRAMP still required it. Have the mandates been changed?
18Fer here. Before I answer in greater detail, why do you think FIPS requires RHEL with mod_nss only? I don't see why an OpenSSL in FIPS mode wouldn't fit the bill too.
Regardless of your detailed answer, FIPS crypto requirements are a topic of some amusement in professional cryptographic and security circles, and anything you do to push back on them will be a help basically to humanity.
"So at this moment we cannot say whether mod_ssl is going to be a valid crypto module in FIPS mode under RHEL-6 although this is the intent."
That may have changed, and it contradicts other sources on redhat.com. There are a lot more KB articles on FIPS than the last time I really dug into it, over a year ago.
Edit: yes, it looks like it was mod_nss only until the release of RHEL 5.9 last January. RHEL 6 support was still ongoing, but elsewhere in the knowledge base they now claim mod_ssl will work.
FIPS is just one area where there seems to be a lot of contradictory information for federal IT. After doing the FedRAMP dance and reading things to the letter, we stopped working towards it and partnered with one of the vendors that got it first. Their remote access was plaintext VNC with an 8-character password maximum. I would say I was surprised that the paperwork matters more than real security, but I wasn't.
So your post makes no sense. OpenSSL provides the FIPS portion directly. You can just download and compile it according to the instructions, and you are FIPS compliant, pending certification. You can do this yourself; you don't need Red Hat or Debian to do it for you.
This is one of the problems with government, and hopefully something that will change. All that gets done is piecing together bits that outside vendors have built, and even that piecing together is normally done by contractors.
So you think recompiling OpenSSL from scratch, deviating from the upstream vendor's supported binaries and creating dependency problems with every update, just to support a mostly smoke-and-mirrors standard, is a good idea? I don't really think that's a best practice in commercial or government IT.
Exactly what the American people have come to expect from the government. Unless it's been gift-wrapped by a contractor, they lack any ability to do anything technical.
You make an RPM and you deploy it like you would any other package. Yes, it is a best practice; in fact, the people at Red Hat do the _exact_ same thing. The difference is they have the technical capability to make those kinds of changes, as do most people in the commercial IT sector. The government is the one place where they call it IT when it's really just glorified procurement.
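The "make an RPM" step really is routine packaging work. A minimal spec-file sketch for shipping a locally built OpenSSL (the package name, version, install prefix, and `./config fips` flag here are illustrative assumptions, not Red Hat's actual spec):

```spec
Name:           openssl-fips-local
Version:        1.0.1e
Release:        1%{?dist}
Summary:        OpenSSL rebuilt with the FIPS object module linked in
License:        OpenSSL
Source0:        openssl-%{version}.tar.gz

%description
Locally built OpenSSL with FIPS mode enabled, installed to its own
prefix so it does not conflict with the distribution's openssl package.

%prep
%setup -q -n openssl-%{version}

%build
./config fips shared --prefix=/opt/openssl-fips
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
/opt/openssl-fips
```

Once built with `rpmbuild -ba`, it deploys through yum like any other package, which is the point being made: this is ordinary in-house packaging, not exotic engineering.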
However, that's not even the problem; as you stated, it's supported just fine, and has been for almost 9 years. The bigger issue is that there was a perception it wasn't, and instead of working to find out what the reality was, people just did nothing.
> Exactly what the American people have come to expect from the government. Unless it's been gift-wrapped by a contractor, they lack any ability to do anything technical.
Exactly the expectations that 18F would like to change.
So in other words, become a FB/Twitter/Play/App Store and put yourself at the top of the pyramid. Now you get to be the company changing the API and banning users, but we still haven't actually solved the problem for the users of web services.
Basically, the technique involves pulsing the current during (dis)charging and optimising the length and timing of the pulses in real time, because the optimal settings change over the life of the battery.
I'm kind of a battery nut. I've been building various pulse chargers and capacitive current-limiting chargers for years. Without fail, all of my most promising pulse-charge/bleed schemes that yielded faster charges and greater runtimes have done so at the expense of cycle count, usually by a factor of 10 or more. I hope they've outsmarted this, but I have my doubts.
I just posted a similar comment above. It seems like the time is right for an open source advanced chat server. Maybe there already is one and I just don't know of it yet.
Are any of these available for companies that can't ship all their internal conversations and files to the cloud? Are XMPP/IRC and overpriced 'enterprise' suites still the go-to for private chat?