Many uses. There are plenty of situations where the heavy compute is already accelerated by tensor libraries, but the data input/output is still plain Python. Those pipelines would benefit from loading data in parallel before batching.
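A minimal sketch of the idea, with a hypothetical `load_sample` standing in for whatever I/O-bound loader you actually have (file read, HTTP fetch, decode):

```python
from concurrent.futures import ThreadPoolExecutor

def load_sample(path):
    # Hypothetical I/O-bound loader; a real one would read and
    # decode a file. Here we just transform the path.
    return {"path": path, "data": path.upper()}

def load_batch(paths, workers=8):
    # Overlap the I/O waits of many loads in a thread pool, then
    # hand the complete batch to the (already fast) tensor code.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_sample, paths))

batch = load_batch(["a.jpg", "b.jpg", "c.jpg"])
```

Threads work fine here despite the GIL because the workers spend their time blocked on I/O, not running Python bytecode.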
Multiprocessing, OS threads, and asyncio all solve different problems. Threads are fairly heavyweight compared to async coroutines (a.k.a. green threads). The big win with coroutines is that it is very cheap to put them to sleep while waiting on I/O. So a web server on a 4-core VM might have 4 worker processes, several threads per process, and dozens or hundreds of coroutines.
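To illustrate how cheap sleeping coroutines are, here is a toy example: 500 coroutines each "wait on I/O" (simulated with `asyncio.sleep`) concurrently on a single thread, finishing in roughly the time of one wait rather than 500:

```python
import asyncio

async def fake_request(i):
    # Sleeping a coroutine costs almost nothing: the event loop
    # just parks it until its timer (or socket) becomes ready.
    await asyncio.sleep(0.1)
    return i

async def main():
    # All 500 sleeps overlap on one thread, so the total wall
    # time is ~0.1s, not ~50s.
    return await asyncio.gather(*(fake_request(i) for i in range(500)))

results = asyncio.run(main())
```

Spawning 500 OS threads for the same job would work, but each thread carries its own stack and scheduler overhead; coroutines are just objects parked on the event loop.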
> So perhaps you can use this for slurping other people's IP in parallel and train the "AIs" that are supposed to make us redundant.
This has absolutely nothing to do with the technical merits of async or threads. For one, the two examples above come directly from work I did to combat deepfakes by identifying various "tells" in the media. Some of us are in fact using machine learning for good.