Although I understand your point, Shoshana Zuboff would say otherwise. We are the resource: not the product, nor the consumer.
According to her, "their" product used to be information. Then it became prediction. And now it's behavior modification, which they achieve by constantly mining us (the data we provide them).
At least IMHO, this is a more accurate depiction of the current state of affairs, although in the end it may amount to quite a similar metaphor either way.
This is a good addendum to my post; it expands upon it. I don't even take it as a correction, and I learned a bit. Thanks for this.
I don't know who Shoshana Zuboff is (I'll 'kagi' them), but I agree with the points made. It's been an evolving strategy for exploiting users as a resource, either for financial gain or at least to cover the costs of the free-tier service.
I don't see what's so difficult. If it were technically possible to run programs that have type errors in other languages, they would offer the option to ignore type errors too, because it's convenient.
TypeScript also ignores errors by default (it generates JS output for you to run anyway), so what's the point if it's not enforced?
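To make that concrete, here's a minimal sketch (file name and values invented for the example): by default, plain `tsc` reports the error below yet still emits runnable JS; only the `--noEmitOnError` flag (or the equivalent tsconfig option) turns the error into a hard build failure.

```typescript
// double.ts - contains a type error on purpose.
// `tsc double.ts` reports the error AND still writes double.js by default;
// `tsc --noEmitOnError double.ts` is what makes the error block the build.
function double(n: number): number {
  return n * 2;
}

// Error: Argument of type 'string' is not assignable to parameter of type 'number'.
// The emitted JS still runs, and "42" * 2 silently coerces to 84 at runtime.
console.log(double("42"));
```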
Could you expand on that? It is truly surprising to me that anyone would find code with type annotations significantly worse, for any reason whatsoever.
On the contrary, I joyfully read and write code with type annotations. It is obviously very useful knowing which object types a function expects and which it will return.
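As a toy illustration (names invented, not from any particular codebase): the signature alone tells you what goes in and what comes out, before you read a single line of the body.

```typescript
interface User {
  id: number;
  name: string;
}

// The annotations document the contract: an array of Users and an id go in,
// a User comes out - or undefined, if there is no match.
function findUser(users: User[], id: number): User | undefined {
  return users.find((user) => user.id === id);
}

const users: User[] = [{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }];
console.log(findUser(users, 2)?.name); // "Grace"
```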
Indeed it is, but the second body's mass may be neglected if it is orders of magnitude smaller than the larger body's. Or at least that's how I understand it (I'm not a physicist).
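For what it's worth, here is the standard reduced-mass argument as I understand it (a sketch, not a rigorous treatment): the relative motion of two gravitating bodies behaves like a single body of reduced mass orbiting a fixed center, and when one mass dwarfs the other the correction effectively disappears.

```latex
\mu = \frac{m_1 m_2}{m_1 + m_2} \approx m_2
\quad \text{and} \quad
G\,(m_1 + m_2) \approx G\,m_1
\qquad \text{when } m_2 \ll m_1 ,
```

so the smaller body can be treated as orbiting the larger one as if the latter were fixed.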
Compared to the invention of the Internet and/or the WWW, I'm of the opinion that calling Google (the search engine) or PageRank (the algorithm) one of the Great Inventions in Human History (capitalized, no less) is far-fetched. The former is an indexer for the web, and the latter is not so different from eigenvector centrality, with the added spice of directed random walks.
I do believe that the Internet/WWW is one of our greatest inventions, though, so this is just nit-picking. :)
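To illustrate what I mean by "eigenvector centrality with the added spice of directed random walks", here is a rough sketch of PageRank as plain power iteration (the graph, damping factor, and iteration count are made up for illustration; this is not Google's actual implementation):

```typescript
// PageRank as power iteration: repeatedly redistribute rank along outlinks,
// blended with a uniform "random jump" controlled by the damping factor d.
function pageRank(outlinks: number[][], d = 0.85, iterations = 50): number[] {
  const n = outlinks.length;
  let rank: number[] = new Array(n).fill(1 / n);

  for (let step = 0; step < iterations; step++) {
    const next: number[] = new Array(n).fill((1 - d) / n); // random-jump share
    for (let i = 0; i < n; i++) {
      const targets = outlinks[i];
      if (targets.length === 0) {
        // Dangling node (no outlinks): spread its rank uniformly over all nodes.
        for (let j = 0; j < n; j++) next[j] += (d * rank[i]) / n;
      } else {
        for (const j of targets) next[j] += (d * rank[i]) / targets.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Tiny example: node 0 links to 1 and 2, node 1 links to 2, node 2 links back to 0.
console.log(pageRank([[1, 2], [2], [0]]));
```

Drop the damping/jump term and you are left with ordinary power iteration for the dominant eigenvector of the link matrix, which is exactly eigenvector centrality.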
"the latter is not so different from eigenvector centrality, with the added spice of directed random walk" - I think the parent's point was the impact not that the algorithm itself is unprecedented. You could argue in the same way that the wheel is just a solid symmetric disk connected to a hub in the center that helps vehicles move around on a sufficiently even surface. Note that I don't imply that PageRank is as significant an invention as the wheel.
That's an interesting perspective, and although it might have contributed, I wouldn't put that much weight on crypto's legacy for AI, IMHO.
The first papers that used GPUs to train neural networks appeared at the end of the 2000s and the beginning of the 2010s, before the Bitcoin price hike of 2013. But years before that, Nvidia had already introduced the CUDA architecture for its GPUs in 2006 [1], which was used, among other things, to speed up algorithms that analyze seismic data for oil and gas exploration [2].
So with or without the "crypto fever", I believe the same advancements in GPU technology would have followed - though maybe not the scarcity brought on by investments in crypto mining. Because of that, we could also argue the opposite: that crypto got in the way of AI development and was one of the culprits behind the "GPU rich vs GPU poor" divide we hear/read about nowadays.
In a very similar fashion, though, I tend to believe that PC gaming mattered far more to the rise of both AI and crypto...
After thinking for a while about why that would be, I've come to the conclusion that the truth of this statement (yours) ultimately depends on whose pair of eyes is looking at it, IMHO.
Reminded me of a great article I read not too long ago, detailing some of the story behind the semiconductor manufacturer TSMC. Truly a marvelous read, if you are interested.
my guy what exactly are you expecting here? this is free as in beer code (apache license). no one is forcing you to use this and no one is asking anything of you for using it. i fully support people releasing their code (that took enormous amounts of blood, sweat, and tears to get working) absolutely however they want to. if i'm interested enough i'll figure it out and thank them.
so as i see it you have like three options if you are unhappy with that:
1. close the tab
2. dig into the impl and learn as you go
3. do 2 but also write docs
i just really believe i've covered literally all the options that any reasonable (not whiny, not entitled) person would concede exist.
> the first issue in the repo is a "Request for a more verbose README", which I agree with.
posted today - do you think it might have something to do with this post we find ourselves convening on? i.e. no one was so bothered about a lack of docs until now?
edit:
actually, i forgot something else you could do: email the author and ask nicely for some tips.