There are groups and companies exploring probabilistic programming as an alternative to CNNs and other deep learning techniques. Gamalon (www.gamalon.com) combines human and machine learning to provide more accurate results while requiring much, much less training data. The models it generates are also auditable and human readable/editable - solving the "opaque" issue with deep learning techniques. Uber is exploring some of the same techniques with its Pyro framework.
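To make the idea concrete, here's a minimal sketch of what "probabilistic programming" means, in plain Python rather than Gamalon's system or Pyro's actual API: the model is an ordinary program with random choices (a coin with unknown bias), and inference runs it "backwards" to explain observed data. All the names here are my own illustration.

```python
def posterior_grid(heads, tails, grid_size=101):
    """Grid-approximate the posterior over a coin's bias, uniform prior.

    Each grid point is a candidate bias; its posterior weight is just
    the likelihood of the observed flips under that bias, normalized.
    """
    thetas = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [t**heads * (1 - t)**tails for t in thetas]
    total = sum(weights)
    return [(t, w / total) for t, w in zip(thetas, weights)]

# Observing 8 heads in 10 flips shifts posterior mass toward high bias.
post = posterior_grid(heads=8, tails=2)
mean = sum(t * p for t, p in post)  # close to (8+1)/(10+2) = 0.75
```

The payoff the comment is pointing at: the "learned model" here is a small, readable program plus a posterior you can inspect and edit, not an opaque weight matrix.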
Having said all of this, we're not arguing that CNNs have no place - in fact, you can view CNNs as just a different type of program within an overall probabilistic programming framework.
What we're seeing is that the requirement for large labeled training sets becomes a huge barrier as complexity scales - which makes understanding complex, multi-intent language challenging.
My intuition is that the approach Gamalon is using has more potential than deep learning.
I've been playing with the concept for a while but failing to get any good results. Debugging probabilistic programs takes so damn long and is so difficult, since bugs can show up as subtle biases in the output instead of clear-cut deviations. (I described my approach here: https://www.quora.com/What-deep-learning-ideas-have-you-trie... ) For me, this is just a hobby.
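A toy illustration of that debugging pain (my own example, not from the linked post): mapping three random bits onto a six-sided die with `% 6` is a one-character bug that never crashes - it silently over-weights faces 1 and 2, and only aggregate statistics reveal it.

```python
import random
from collections import Counter

def biased_die(rng):
    # Bug: getrandbits(3) yields 0..7, and "% 6" folds 8 outcomes onto
    # 6 faces, so residues 0 and 1 (faces 1 and 2) occur twice as often.
    return rng.getrandbits(3) % 6 + 1

def fair_die(rng):
    return rng.randrange(1, 7)  # correct: uniform over 1..6

rng = random.Random(0)
counts = Counter(biased_die(rng) for _ in range(60_000))
# Faces 1 and 2 each absorb 2/8 of the mass, the others 1/8 - every
# individual run looks plausible; the defect only shows statistically.
```

This is exactly the failure mode described above: no clear-cut deviation, just a skew you have to go hunting for with a test against the expected distribution.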
Joshua B. Tenenbaum's group seems to call their approach program induction; I had called mine Bayesian Auto Programming. I see you are calling yours Bayesian Program Synthesis. Clearly we have similar intuitions about the essence of the solution to AI.
Very interesting report on your discoveries. It reminds me of Genetic Programming. I was thinking about using the same principles to generate a more declarative Bayesian program (like a subset of SVG).
Do you have some source available around your experiments?
By "human learning" do you mean there is a human in the loop, or do you mean the system mimics how we think humans learn? Looking at their website the latter appears to be the case.
A human can provide guidance to the system and edit models. As to whether it “thinks like a human” I think that’s always a dangerous and loaded statement - but what we believe is that a Bayesian probabilistic approach is closer to the way humans learn and make decisions.
Disclosure: I work for Gamalon