Interesting framework.
It seems to hail from:
> https://rise.cs.berkeley.edu/
Quite the bee-line from the authors' previous papers. It only seems to have been preceded by: https://arxiv.org/abs/1703.03924
However, the language in the [Ray] paper is completely different: different from the preceding paper, yet familiarly distinct. Is there any specification of the outside groups/companies/individuals who were consulted or collaborated with on this project? And who more or less led the development and design? Maybe I am jumping the gun, but it clearly wasn't any of the listed names.
Of all of the authors listed, none of their previous papers read like the Ray paper, nor does the proposal paper (Real-Time Machine Learning: The Missing Pieces). http://arxiv.org/abs/1703.03924 reads like a corporate/industry-grade requirements/proposal doc, which is a huge departure from all of the authors' prior papers... So, out of the blue, a corporate-level infrastructure project proposal and completion within the span of a year?
Can any of the paper's authors speak more clearly on who led this project, over what span of time, under what direction, and with which industry groups? I see no background or papers from the individuals priority-listed in the paper reflective of the sort that creates a formalized, industry-grade distributed computational framework such as this.
> Robert Nishihara
> Philipp Moritz
are priority-listed yet have no prior papers leading to such a development. To what degree did https://rise.cs.berkeley.edu/sponsors/ drive this?
The project is indeed driven by the authors listed on the paper, and also by the knowledge and experience that was accumulated in the AMPLab (the predecessor of the RISELab, see https://amplab.cs.berkeley.edu/). If you look at the GitHub history, we've been working on it for longer than a year and had various prototypes before that, so it doesn't come out of "thin air" ;)
The lab's sponsors are also helpful, some of them have been experimenting with the system internally and giving us feedback.
Thank you for responding. I indeed put time and effort into my reply because the paper caught my eye. I haven't fully parsed the paper, but I covered a number of pages that colored the nature of my inquiry. I in no way intended to take anything away from the authors of the paper, but wanted to get at what you yourself declared:
"was accumulated in the AMPLab (the predecessor of the RISELab, see https://amplab.cs.berkeley.edu/)" as the backstory behind the paper, since I clearly surmised there was one given its historical nature. I also wanted to understand how long this had been worked on, due to the nature of the language used in the paper and how the concepts and language familiarly fit in with other things I've seen. And this right here:
"The lab's sponsors are also helpful, some of them have been experimenting with the system internally and giving us feedback." Yes, I understand that the nature of this is more so for corporate use cases than for academic ones and furthering research therein. I was in search of names, but I already have a number of them I can surmise and a handful more that I will derive. I think it's interesting what is being done here, but there were choice words stated in the paper that limit it. At this juncture in the state of AI development, I will reserve any other commentary beyond stating that there are an incredible number of fundamental limits to approaching things this way, limits that fall on deaf ears due to the closed-off nature/sponsorship of such developments. I wish you guys the best and am sure there will be traction as it relates to RL.
Literally almost everything that happens, and certainly everything in the past few decades of tech, is "one for the history books".
History books contain a lot of inane details that no one besides readers of history books is ever charged with knowing. (Then there's the important big-picture stuff and the sub-arcs we're supposed to remember lest we repeat them, but most people have already stopped caring by that point because of the inane details.)
Assume for a minute that AGI is being developed, and that in no way, shape, or form does it function, or is it formed, in the manner that mainstream AI efforts focus on...
That hypothetical could very well be the reality on the horizon.
What of the safety/control research that the broad majority of these institutions and ventures are centered on, which has fundamentally nothing to do with such a system or even its philosophy? What of deep-learning-centric methodologies that are incompatible with it?
Safety/control software and systems development isn't a research topic. It's an engineering practice best suited to well-qualified and practiced engineers who design the safety-critical systems present all around you.
Safety/control engineering isn't a 'lab experiment'. If one were aiming to secure, control, and ensure the safety of a system, they'd likely hire a grey-bearded team of engineers who are experts with proven careers doing exactly that. A particular system's design can be imparted to well-qualified engineers. This happens every day.
Without a systems design, or even a systems philosophy, these efforts are just intellectual shots in the dark.
Furthermore, has anyone even stopped to consider that these problems would get worked out naturally during the development of such a technology?
Modern-day AI algorithms and solutions center on mathematical optimization.
AGI centers on far deeper and more elusive constructs. One can ignore this all-too-clear truth all they like.
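To make the first point concrete, here is a minimal sketch (plain Python/NumPy, an illustrative assumption on my part and nothing taken from the Ray codebase or the paper) of the optimization loop that sits at the core of most modern AI methods:

    import numpy as np

    # Toy linear model fit by gradient descent: the "learning" is nothing more
    # than iteratively minimizing a mean-squared-error loss.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)

    w = np.zeros(3)       # parameters to optimize
    lr = 0.1              # step size
    for _ in range(500):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the loss w.r.t. w
        w -= lr * grad                           # descend the loss surface

    print(w)  # converges toward true_w: optimization, not "understanding"

That loop, and distributed variants of it, is the shape of what frameworks like Ray are built to scale.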
So...
If one's real concern is the development of AGI and the understanding thereof, I think it's high time to admit that it might not come from the race horses everybody's betting on. As such, it is much more worth one's penny to start funding a diverse range of people and groups pursuing it who have sound ideas and solid approaches.
This advice can continue to be ignored, as it currently is and has been for a number of years. It can persist across rather narrow hiring practices...
The closed/open door will or won't swing both ways.