I've been using GrapheneOS for over a year now. With the Google Play compatibility layer I can use all my banking/credit card apps (4 different ones) and many more. I like the detailed documentation available online and the frequent updates. The first installation was pretty smooth too, using their web installer, and getting help via their Matrix rooms or forum is usually quick. I only have one government app where QR scanning doesn't work, though I'm not sure why; all other apps with QR scanning, including banking apps, work.
Same boat here: SpiderOak on Ubuntu (not many issues, fortunately), but it does seem abandoned and I'm not sure about its future.
As a general noob, I also don't understand how I should use rsync.net properly. Do I encrypt all my data files beforehand, or is that done by Borg/Restic, for example?
GP here. You'd have to encrypt the data before sending it to rsync.net. Borg and Restic can do that for you (but you'd have to manage the encryption keys safely). Encrypting everything yourself before Borg or Restic run the backup is unnecessarily complex.
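A rough Python sketch of what that looks like with Borg (the repo path and passphrase are placeholders; assumes Borg 1.x is installed):

    import os, subprocess

    # Placeholder rsync.net repo; adjust user/host/path for your account.
    repo = "user@user.rsync.net:backups/borg"
    env = {**os.environ, "BORG_PASSPHRASE": "use-a-real-passphrase-manager"}

    # "repokey" stores the (passphrase-protected) key inside the repo;
    # encryption happens client-side, so rsync.net only sees ciphertext.
    subprocess.run(["borg", "init", "--encryption=repokey-blake2", repo],
                   env=env, check=True)

    # Each backup run creates a new encrypted, deduplicated archive.
    subprocess.run(["borg", "create", repo + "::docs-{now}", "/home/me/docs"],
                   env=env, check=True)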
rsync.net offers a special "borg account" at a lower price than the standard one. It assumes you set up the retention policies for older versions in Borg itself, since rsync.net doesn't do any (additional) snapshots/retention for it (like it does for the standard, slightly costlier, accounts).
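With the borg account you'd handle retention yourself, something like this (same caveats as above; the keep counts are just an illustration):

    import subprocess

    repo = "user@user.rsync.net:backups/borg"

    # Thin out old archives: keep 7 daily, 4 weekly and 6 monthly ones.
    subprocess.run(["borg", "prune",
                    "--keep-daily", "7",
                    "--keep-weekly", "4",
                    "--keep-monthly", "6",
                    repo], check=True)

    # On Borg >= 1.2, pruning only marks space; "compact" reclaims it.
    subprocess.run(["borg", "compact", repo], check=True)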
The bigger issue is making sure others (in one’s circles) who may need the data are able to use the tool for backups and restores. That’s where I prefer GUI tools that are easy to use and easy to demonstrate, rather than CLI tools and scripts.
Correcting for confounding is probably the hardest part of building statistical models. There is no real consensus on how you should go about it.
Did you know that if you adjust for the wrong variable, you can actually introduce bias rather than remove it?
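The classic case is conditioning on a collider, a variable caused by both the exposure and the outcome. A minimal simulation sketch (variable names and sizes are just illustrative, assuming numpy) of two independent variables that become correlated once you stratify on their common effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)          # exposure
    y = rng.normal(size=n)          # outcome, truly independent of x
    c = x + y + rng.normal(size=n)  # collider: caused by both x and y

    # Crude analysis, no adjustment: correlation is ~0, as it should be.
    print(np.corrcoef(x, y)[0, 1])

    # "Adjust" for c by stratifying on it: within a slice of c,
    # x and y are now clearly negatively correlated -- pure collider bias.
    band = np.abs(c) < 0.5
    print(np.corrcoef(x[band], y[band])[0, 1])

Nothing causal links x and y here, yet the "adjusted" analysis finds a strong negative association.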
As a very modest medical researcher myself, I have become very careful about the conclusions I draw from any model I build. While it is very appealing to draw conclusions about causation, they are very often wrong.
That's why, when studies discover new correlations, the findings have to be replicated in different study designs that actually allow you to say something about causation.
In medicine, that would often mean a randomized trial comparing treatment A to a placebo.
However, with factors such as race or sex, this is obviously impossible: you can't randomize them.
It appears you're making the false assumption that the data themselves are unbiased and always factually correct. They are not. Data does not appear magically in a dataset. It is a human interpretation of the world and may therefore carry the original biases, intended or not, of the humans who produced it. This is why analysts have to think about what the data means whenever they do their analyses. I would say that is their ethical responsibility.
Of course they do that; that is literally their job.
But reverse-engineering the biases that may or may not exist in an original data set -- please explain how that should be accomplished, because I don't see how someone could accurately quantify the amount or degree of race/sex/age/religion/nationality-ism without introducing additional "bias" based on their own opinions.
> Data does not appear magically in a dataset.
Right, so why isn't the boss, exec, department head, or third party who sourced the data responsible for de-biasing it before even handing it off to the data scientist, so they can just do the job of data science-ing, not political science-ing? You're putting a whole lot of "ethical responsibility" on just one person (ironically, the one least likely to be good at interpreting human emotional tendencies) within a much larger ecosystem.
That's the point: it is very difficult to interpret analyses correctly. That's why you don't take conclusions for granted, and instead work from what you think is biologically plausible. There is usually a whole phase preceding the actual analysis: you can visualise potential relationships in a directed acyclic graph to see where bias might be introduced (a toy sketch below). However, that phase is very often skipped by medical researchers.
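A minimal sketch of such a DAG (the variables and edges are hypothetical, just to show the idea; assumes networkx and matplotlib are installed):

    import matplotlib.pyplot as plt
    import networkx as nx

    # Toy causal diagram: age is a confounder (common cause of treatment
    # and outcome), while hospital admission is a collider (common effect).
    dag = nx.DiGraph([
        ("age", "treatment"),
        ("age", "outcome"),
        ("treatment", "outcome"),
        ("treatment", "admission"),
        ("outcome", "admission"),
    ])
    assert nx.is_directed_acyclic_graph(dag)

    # Reading the paths off the graph tells you to adjust for "age"
    # (it closes a backdoor path) but NOT for "admission" (adjusting
    # for a collider opens a biasing path instead of closing one).
    nx.draw_networkx(dag)
    plt.show()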
I never said the analyst was the sole person bearing the responsibility. You are right: just as much responsibility must be expected from those who designed the information model and those who collected the data as from the person who ultimately analyses it and prepares it for whatever kind of dissemination. Everybody involved has to take their individual responsibility so that we achieve collective responsibility.