Musk is definitely trying to achieve something that many entrepreneurs would be shy to even attempt. I'm rooting for him, despite feeling that his business model is inherently difficult. I think, net-net, he adds to society with his vision and the sheer fact that he tries to do what practically everyone thought would not be possible. If anything, it inspires other would-be inventors and entrepreneurs to keep pushing.
Which DOJ official and/or senator is going to go after Facebook and Reddit? Unless they're openly promoting sex trafficking, I don't think the giants are going to have any issues.
Just to be safe, though, Reddit did take down any subreddit that discussed prostitution.
The situation is bad enough that Craigslist took down its personals section. And really, any dating site or service (classics like match.com and modern services like Tinder) is easy prey for this law. I saw a headline towards the top of HN that Facebook is dabbling in online dating, which puts the biggest social media company in history in the crosshairs.
In practice, all someone would have to do is post a few underage images to /r/gonewild or similar high-traffic subreddits. Or, for that matter, simply drop random links in unrelated subreddits.
SESTA is a weapon, not a defense. A metaphorical loose cannon.
For those who want to skip to the conclusion of this confrontation: the commissioner was asked to resign from the Port Authority after its internal investigation into her behaviour. She was formerly the head of the Government and Ethics committee.
I can relate to what the article says, and yet I find it hard to break away from the ecosystem, given that the tools I've come to use every day run so well on the Mac. Muscle memory with shortcuts and all.
I've found a quick workaround: simply let the machine cool down. E.g., when I was running Docker and the machine got hot, keyboard problems kept cropping up consistently. Letting it cool for about 20 minutes with a laptop cooler seemed to help.
I'm an iOS dev, so I have no other choice. But even if I did, I like pretty much every other aspect of the Mac (both this specific laptop and the ecosystem) and am happy to give Apple my money; I just expect a certain level of quality in return, which this generation of keyboards seems to lack.
Definitely agree with you. I feel underwhelmed by the 2017 MBPs (typing on one right now). And you're absolutely correct: if you're developing for certain platforms, you have no choice. I'm a Ruby developer, and I remember years ago people switched to the Mac simply so they could use TextMate. Some tools just work better on the system (for better or for worse).
I actually quite like the laptop otherwise - I walk to and from work with it every day, so the reduced weight and size are great - although the Touch Bar is meh and the USB-C/dongle situation is still a pain!
If we look at cancer as simply another organism that's trying to survive, we can't blame it for innovating new strategies. It's simply trying to carve out its own niche in the world (albeit at our expense). C'est la vie.
Much of radiology and pathology specimen interpretation comes down to reliable and consistent detection. Come to think of it, applying AI to this area is fantastic because it takes human fatigue, and the missed diagnoses that come with it, out of the equation. Neural nets seem well equipped for image recognition. At the very least, this could provide a first level of flagging of specimens.
My only concern is that, as with any system, human operators may come to rely too heavily on something that works great most of the time, and end up missing what would otherwise have been caught. This already happens every once in a while, e.g. with EKG machines spitting out a diagnosis based on electrical activity patterns.
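To make the first-level flagging idea concrete, here's a minimal sketch of what that triage step might look like (the function name, threshold, and scores are all hypothetical; the probabilities would come from whatever trained image model is in use), with the key property that it only reorders the reading queue and never signs anything out on its own:

    def triage(scored_specimens, flag_at=0.2):
        # scored_specimens: (specimen_id, model_probability) pairs produced by
        # a trained image model (hypothetical here). The threshold is kept low
        # on purpose: the goal is to miss as little as possible and let the
        # pathologist make the actual call on anything flagged.
        flagged = [(sid, p) for sid, p in scored_specimens if p >= flag_at]
        routine = [(sid, p) for sid, p in scored_specimens if p < flag_at]
        # Flagged cases jump to the top of the reading queue; nothing is
        # signed out automatically, so the human stays in the loop.
        return sorted(flagged, key=lambda x: -x[1]), routine

    queue, rest = triage([("S1", 0.91), ("S2", 0.05), ("S3", 0.33)])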
What's crazy is that if AI is better than doctors by some significant degree, what do we do when the doctor and AI disagree? Like if doctors are right 85% of the time, but the AI is 90%.
I guess we treat it as another doctor? Like if we have 4 opinions that agree, we go with that one regardless of the source of those opinions (as long as they meet some minimum competence threshold).
As others have said - perform more tests. I've worked around molecular pathology testing for cancer (which comes up with a diagnosis based on data analysis after running DNA or RNA sequencing). If the molecular report differs from what the surgical pathologist saw when looking under the microscope, it's typically not 'it's cancer' vs 'it is not cancer', but more 'this is molecularly non-small cell lung cancer' vs 'this appears to be prostate cancer'. So what will happen is they'll do more staining on the sample, specific to what the molecular report came out with - and a lot of times - bam. That tumor in someone's prostate is actually lung cancer and needs to be treated that way.
Currently, if a diagnostic test comes back suggesting something serious, say cancer, and the doctor does not pursue it, the doctor would be liable if it did turn out to be cancer.
So if a machine disagreed with a doctor, I would assume the doctor would grudgingly have to investigate further until there is enough evidence to rule out that diagnosis.
#headache
What I can see happening is that patients will go to this machine for a second opinion. And if the machine returns an opinion that contradicts the primary physician's, then an entire can of (legal) worms will be opened.
--
Addendum:
To elaborate further, there is sometimes what's called the benefit of history.
Say a patient visits 10 doctors. The 10th doctor has an unfair advantage over the first 9 simply because he/she will have prior knowledge of which diagnoses and treatments were incorrect.
Similarly, for an AI vs. human doctor situation, incorporating that additional information (for the AI) would require a considerable amount of training data in order to recognize prior history, failed treatments, and the like.
Image-specific diagnoses (e.g. recognizing melanoma or retinopathy) lend themselves to AI very nicely. Diagnoses that involve a significant amount of, shall we say, "human factors" do so much less.
Doctors aren't liable for failing to predict the future or for making an imperfect diagnosis.
If a doctor reviews the available data, reasonably concludes that it shouldn't be pursued further, and it later does turn out to be cancer, then that by itself does not mean that the doctor is liable for anything. Malpractice requires actual culpable negligence, such as missing something obvious, not interpreting a questionable situation in a manner that turns out to be wrong. The existence of a second, contrary opinion doesn't change that.
This isn't a new issue; there have been CAD systems that outperform average clinicians (on very specific tasks) since at least the mid-90s. At the end of the day, liability drives the resolution process in some jurisdictions, efficiency in others.
>> Like if doctors are right 85% of the time, but the AI is 90%.
There is always the possibility that the doctors and the device are both right 90% of the time, but not the same 90% of the time.
Or the doctors might be right most of the time on the most severe cases while the device is right only on the milder ones, or vice versa, etc.
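To put rough numbers on that, here's a quick simulation (all figures hypothetical) of a doctor and a device that are each right about 90% of the time but make their mistakes independently. They end up disagreeing on roughly 18% of cases, and agreeing while both being wrong on about 1%:

    import random

    random.seed(0)
    N = 100_000
    truth = [random.random() < 0.3 for _ in range(N)]  # say 30% of cases are malignant

    def noisy_reader(truth, accuracy):
        # A reader that matches the ground truth with the given accuracy and
        # flips the call on the remaining cases, independently per case.
        return [t if random.random() < accuracy else not t for t in truth]

    doctor = noisy_reader(truth, 0.90)
    device = noisy_reader(truth, 0.90)

    disagree = sum(d != m for d, m in zip(doctor, device))
    both_wrong = sum(d == m != t for d, m, t in zip(doctor, device, truth))
    print(f"disagree: {disagree / N:.1%}")                 # ~18%
    print(f"agree but both wrong: {both_wrong / N:.1%}")   # ~1%

(If their errors are correlated, as they likely would be on genuinely hard cases, the disagreement rate drops and the "agree but both wrong" rate goes up.)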
I know this is a totally whack-job comment, but the TV show "The Good Doctor" kind of leans this way. Instead of relying on a ton of personal bias, the main character diagnoses things more the way an ML model would; obviously there's no way to establish any foundation for this, since it's based on a TV show. But it offers a vision of what you're saying, except instead of an AI it's an individual with savant syndrome making better judgments than the rest of the doctors. That said, I'd imagine a savant being placed in a role like that is even less likely than the show portrays, so where does that leave AI?
This is definitely going to be an issue. Even in cases where you're measuring your tool against "expert consensus" (often 3-5 physicians), there's a reasonable likelihood that the consensus may be wrong in certain types of cases.
Though even in those cases, you might be looking to show that your tool agrees with physicians at least as often as physicians agree with each other. Malpractice is usually about failing to offer the standard of care, and if you can show a reasonable level of concurrence with the standard of care in research and trials, you may be able to move forward and reach those higher levels of accuracy.
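As a rough sketch of that kind of comparison (the reads and reader names below are made up): compute how often the physicians agree with each other pairwise, then check whether the tool's agreement with each physician falls in roughly the same range.

    from itertools import combinations

    # Hypothetical reads on 8 cases: 1 = malignant, 0 = benign.
    physicians = {
        "A": [1, 0, 1, 1, 0, 0, 1, 0],
        "B": [1, 0, 1, 0, 0, 0, 1, 0],
        "C": [1, 1, 1, 1, 0, 0, 0, 0],
    }
    tool = [1, 0, 1, 1, 0, 0, 1, 1]

    def agreement(x, y):
        # Fraction of cases on which two readers make the same call.
        return sum(a == b for a, b in zip(x, y)) / len(x)

    doc_vs_doc = [agreement(physicians[a], physicians[b])
                  for a, b in combinations(physicians, 2)]
    tool_vs_doc = [agreement(tool, reads) for reads in physicians.values()]

    print("physician vs physician:", doc_vs_doc)   # baseline inter-reader agreement
    print("tool vs physician:     ", tool_vs_doc)  # should be at least comparable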
a) Usage still requires the presence of the doctor.
b) The doctor does nothing but relay the AI’s message.
c) The doctor continues to charge the same and treat the same number of patients.
d) Everyone who expresses “hey isn’t the doctor redundant now? Shouldn’t we be treating more patients for cheaper” gets ridiculed as “one of those people”.
e) Edit: Also, the doctors' association devotes significant resources to coming up with memetically virulent reasons why the world would end if we took doctors out of the loop.
I mean, that’s how a lot of obviated jobs are currently treated...
Insurance companies assume the liability for the doctor's diagnoses. I'm not sure why they'd be unwilling to do the same for the software's diagnosis. Somewhere, an actuary is willing to estimate that risk.
Since this is publicly accessible, what are the chances that search engines indexed the files? In that case, would Googlebot be charged? Or if this were, say, Equifax or Facebook? I mean, in those situations, the companies were blamed for "the leak". It seems rather convenient to cherry-pick how the law is applied to this poor teenager.
Yes, my gut is telling me the same thing. The junior dev needs to be motivated out of the box to learn and improve. If that isn't there, no amount of coaching or pair programming will help.
It's depressing. Sometimes it's not the lack of mentorship but the lack of motivation on the part of the student. It's almost like stuffing a gold coin into someone's pocket and having them refuse to take it because reaching for it is too much effort, or they just can't be bothered.
Most often it's to try and help the other person become a better programmer. I've found that people who have little desire to improve end up sitting there regardless.
The underlying goal, of course, is to give the programmer we're investing in a chance to improve. It feels, however, like wasted effort.
I see. Often the person who has little desire to improve needs to be handled differently.
There are various ways to handle low performance, including talking to the person and finding out the reason behind it. Most often it is caused by external factors.
Pair programming or any such coaching tool is, IMO, effective only after you dig into the reason behind the low performance. Once the individual is ready to improve, that is when you can employ pair programming.