I was hoping to see something on Kalman filters. But it was good to see info on state space analysis. Also good to see a simple example of why dynamic range compression is nonlinear. Would have been nice to see more info, with examples, on what makes a system time-variant (i.e., not time-invariant).
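Since the thread brought it up, here's a minimal sketch (my own illustration, not from the article) of why compression is nonlinear: a linear system must satisfy compress(a·x) = a·compress(x), and a compressor visibly doesn't. The threshold/ratio values and function name are made up.

```python
import numpy as np

# Hypothetical hard-knee compressor (made-up threshold/ratio): samples above
# the threshold are attenuated by `ratio`.
def compress(x, threshold=1.0, ratio=4.0):
    y = x.astype(float).copy()
    over = np.abs(y) > threshold
    y[over] = np.sign(y[over]) * (threshold + (np.abs(y[over]) - threshold) / ratio)
    return y

x = np.array([0.5, 2.0])
print(compress(2 * x))   # [1.   1.75]
print(2 * compress(x))   # [1.   2.5 ]  -> compress(2x) != 2*compress(x): not linear

# Time variance is simpler still: y[n] = g[n] * x[n] with a gain g that changes
# over time. Delaying the input does not merely delay the output, because the
# gain schedule doesn't move with it -> not time-invariant.
```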
In 2008 or 2009 Palm still had enough relevant legacy apps that they could have convinced me to stay with WebOS, but launching a tablet (no phone) in 2010? Forget it!
That ship has sailed and you're not on board!
By 2010, you were either Android/Java or iOS/ObjC. If they really wanted to present an alternative platform, they should have been giving away those 200k tablets and a compiler/SDK to CS majors. They weren't! It was a half-hearted effort. The acquisition was probably a buyout to bail out board members holding Palm stock.
These were my exact thoughts about HP's printer division. It should be studied in business schools as the definition of enshittification for the next 25 years.
The PC side of HP is a different story. Their high-end consumer laptops are crap compared to Dell's XPS line.
Commodity/enterprise gear is equivalent to Dell (their primary competitor) at the generic box and monitor level. Maybe a bit better on the power supply and management side. Worse if you bought into the VMware ecosystem.
So I thought: HP… meh… a dying company with a legendary history of innovation in the 80s and 90s.
Then I bought an HP Z840 workstation for my homelab. This thing is a beast. Engineered out the wazoo! Three PCIe x16 slots, 1+ TB RAM, 40+ cores. Documentation for days. Way better than similar-era Dells.
At least in the late 2010s they still had it, for the right price. For sure not unusable or anywhere near awful… even on 10+ year old kit.
I've got no idea about gear from the last 3 years or how they will do financially going forward. But if you are looking at the used market, the enterprise workstation gear from the late 2010s has tons of value.
I must have the last good HP printer or something. Mine is the "HP Color LaserJet Pro MFP M281fdw", a WiFi-enabled colour laser printer. It prints nicely, a set of cartridges lasts me for multiple years (low usage, of course), it has a built-in scanner that works with the drivers available for Linux (even over WiFi), and it is happily chuntering away on 3rd-party cartridges. No issues whatsoever.
Honestly I'm expecting it to suddenly stop working or something given all the horror stories I hear about HP, but so far ... working just fine.
I'm a bit sad that HP is the last resting place of the Digital Equipment Corporation, and that neither they nor the external company they licensed OpenVMS to offers any VAX/VMS hobbyist license, but that's for sure a niche thing to whine about.
> I must have the last good HP printer or something. Mine is the "HP Color LaserJet Pro MFP M281fdw"
I have some of those in my care. They perform fine but they are locked to chipped cartridges.
And when HP learned their customers were moving the chips to 3rd-party cartridges, HP worked out a method to cement the chips in place, to make it as hard on their customers as they possibly could.
When I referenced HP with the terms Hostility and Sabotage, it was the M281s I had in mind. Although crapware applies too. They're reason #4,009,175 to never buy HP.
I have a somewhat older but higher-end M475dn. Last year, the scanner calibration mechanism started failing, and the printer couldn't complete its init sequence anymore, so it can't be used as a printer anymore either.
It has only seen home-office use, and hadn't even run through its second set of toners.
No service shop wants to touch it either, so I've got a 30 kg paperweight.
This is why we need all software and firmware to be free software.
Current AI is like search. You still have to know the vocabulary and the right questions to ask. You also need the ability to differentiate a novel answer from a hallucination. It's not going to replace lawyers or doctors any time soon.
I'm just a small business & homelab guy, so I'll probably never use one of these big distributed file systems. But when people start talking petabytes, I always wonder: are these things actually backed up, and what do you use for backup and recovery?
It is common for the backup of these systems to be a secondary data center.
Remember that there are two purposes for backup: one is hardware failures, the other is fat fingers. Hardware failures are dealt with by redundancy, which always involves keeping redundant information across multiple failure domains. Those domains can be as small as a cache line or as big as a data center. These failures can be dealt with transparently and automagically in modern file systems.
With fat fingers, the failure domain has no natural boundaries other than time. As such, snapshots kept in the file system are the best choice, especially if you have a copy-on-write file system that can keep snapshots with very little overhead (toy sketch below).
There is also the special case of adversarial fat fingering which appears in ransomware. The answer is snapshots, but the core problem is timely detection since otherwise you may not have a single point in time to recover from.
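To make the "very little overhead" point concrete, here's a toy sketch of my own (not any specific filesystem): in a copy-on-write design, a snapshot just pins the current block map, and writes allocate new blocks instead of mutating old ones, so a fat-finger delete or a ransomware-style overwrite can't touch what a snapshot references.

```python
# Toy copy-on-write store: snapshots copy only metadata, never data blocks.
class CowStore:
    def __init__(self):
        self.blocks = {}      # block_id -> bytes (immutable once written)
        self.live = {}        # path -> block_id (the current view)
        self.snapshots = {}   # name -> frozen copy of the path->block map
        self._next = 0

    def write(self, path, data):
        self.blocks[self._next] = data   # never overwrite an existing block
        self.live[path] = self._next
        self._next += 1

    def delete(self, path):              # the "fat finger"
        self.live.pop(path, None)

    def snapshot(self, name):
        # O(#paths) metadata copy; zero data blocks are duplicated
        self.snapshots[name] = dict(self.live)

    def read(self, path, snapshot=None):
        table = self.snapshots[snapshot] if snapshot else self.live
        return self.blocks[table[path]]

s = CowStore()
s.write("/etc/config", b"v1")
s.snapshot("nightly")
s.delete("/etc/config")                            # oops
print(s.read("/etc/config", snapshot="nightly"))   # b'v1' still recoverable
```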
That's a form of hardware failure where the failure domain is a data center or region. It is always good to store enough redundant data outside of whatever failure domains you want to consider in your planning. That might be a device, a server, a rack, a room, a data center, a region or a country/regulatory domain.
Non-local storage is definitely worthwhile there. Snapshots can add value on top, because they may capture more coherent states.
Backup and recovery is a process with a non-zero failure rate. The more you test it, the lower the rate, but there is always a failure mode.
With these systems, the runtime guarantees of data integrity are very high and the failure rate is very low. And best of all, failure handling is constantly exercised as a normal activity of the system.
So once the data integrity guarantees of your runtime system are better than those of your backup process, why back up?
There are still reasons, but they become more specific to the data being stored and less important as a general datastore feature.
...and the "Disaster" in "Disaster recovery" may have been localized and extensive (fire, flooding, major earthquake, brownouts due to a faulty transformer, building collapse, a solvent tanker driving through the wall into the server room, a massive sinkhole, etc)
Yes, the dreaded fiber vs. backhoe. But if your distributed file system is geographically redundant, you're not exposed to that, at least from an integrity POV. It sucks that 1/3 or 1/5 or whatever of your serving fleet just disappeared, but backup won't help with that.
> But if your distributed file system is geographically redundant
Redundancy and backups are not the same thing! There's some overlap, but treating them as interchangeable will occasionally result in terrible outcomes, like when a config change results in all 5/5 datacenters fragmenting and failing to form a quorum, and you then find out your services have circular dependencies while trying to bootstrap foundational services. Local backups would solve this: each DC would load its last known good config. But rebuilding the consensus necessary for redundancy requires coordination from now-unreachable hosts.
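Not their setup, just a sketch of the "last known good" idea: persist every config you successfully applied, and fall back to the local copy when the quorum/config service is unreachable at bootstrap. The path and function names here are made up.

```python
import json, os

LKG_PATH = "last_known_good.json"   # hypothetical local path

def fetch_config_from_quorum():
    # Stand-in for the real config service; simulates the outage described.
    raise ConnectionError("quorum unreachable")

def save_last_known_good(cfg):
    tmp = LKG_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump(cfg, f)
    os.replace(tmp, LKG_PATH)       # atomic swap so the file is never torn

def load_config():
    try:
        cfg = fetch_config_from_quorum()
        save_last_known_good(cfg)   # every good fetch refreshes the backup
        return cfg
    except ConnectionError:
        with open(LKG_PATH) as f:   # boot on the last config that worked
            return json.load(f)

save_last_known_good({"replicas": 5, "quorum": 3})  # from a past good run
print(load_config())                                # falls back locally
```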
Well, for active data, the idea is that the replication within the system is enough to keep the data alive through instance failure (assuming you're doing proper maintenance and repairing hosts quickly after failure). Backup and recovery, in that case, is used more for saving yourself from fat-fingering an "rm -rf /" type command. Since it's just a file system, you should be able to use any backup and recovery solution that works with regular files.
Because of the replication factor here, I assume this filesystem is optimised for read throughput rather than capacity. Either way, there is a concept of "nearline" storage: a storage tier designed to be accessed only by a backup agent. The general idea is that it stores a snapshot of the main file system every n hours (sketched below).
After that you have as many snapshots as you can afford.
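A rough sketch of that backup-agent loop, assuming nothing about any particular filesystem: `make_snapshot`, `ship_to_nearline`, and `prune_oldest` are placeholders for whatever your filesystem and nearline tier actually provide.

```python
import time

INTERVAL_HOURS = 6   # "every n hours"
RETAIN = 28          # as many snapshots as you can afford

def make_snapshot(name):
    print(f"snapshotting main filesystem as {name}")

def ship_to_nearline(name):
    print(f"copying {name} to the nearline tier")

def prune_oldest(keep):
    print(f"pruning nearline snapshots beyond the newest {keep}")

def run_agent():
    while True:
        name = time.strftime("snap-%Y%m%d-%H%M%S")
        make_snapshot(name)
        ship_to_nearline(name)
        prune_oldest(keep=RETAIN)
        time.sleep(INTERVAL_HOURS * 3600)
```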
If you are selling to a non-technical user, phone calls give them a hint of your support. Email support is horrible; turnaround times are too slow. This is the reason I won't buy another Framework laptop.
Counterpoint: I recently dealt with a vendor at work and asked their support several highly technical questions, together with a bug report for an issue we were having.
They not only answered in 1 day, but also provided a real solution / workaround for our issue, as well as a technical answer to the questions and a technical analysis of why the bug occurs.
Outstanding support, and I would never have guessed it from their website.
I've had both great and terrible email support (great where L1 immediately involved L2 support and I got a straight-up solution in 15 minutes, for instance), but getting something done over a voice call has never been that great!
If L1 can solve things for you, a call can sometimes work. But if they can't, it means multiple calls with L1 and multiple calls with L2. In one recent example, it took 4 months for an issue to be resolved by internal support at BigCo: I was repeatedly asked for the same screenshot, including them recording me get to it a number of times, until I pinged their manager's manager via email, pointing out that they already had the solution in my emails if they'd only read them, and it got resolved 2 hours later.
To make it even more confusing, laryngology is the surgical specialty you would go see for a tracheal issue in the neck (where a trach is performed). And a head and neck surgeon (oncologic surgery) is the one who performs the most laryngectomies. Fricken otolaryngologists… isn't the one name complicated enough?
Laryngologists are head and neck surgeons, at least in the US and most Commonwealth countries, where the general training is "otolaryngology-head and neck surgery".
True, there is further subspecialization within head and neck for oncologic, reconstructive, and (micro)vascular procedures, usually just called "head and neck", which is confusing.