I am blown away but not scared for my job... yet. I suspect the AI is only as good as the training examples from GitHub. If so, then this AI will never generate novel algorithms. The AI is simply performing some really amazing pattern matching to suggest code based on other pieces of code.
But over the coming decades AI could dominate coding. I now believe in my lifetime it will be possible for an AI to win almost all coding competitions!
Humans are more than pattern matchers because we do not passively receive and imitate information. We learn cause and effect by perturbing our environment, something that cannot be done by passively examining data.
An AI agent can interact with an environment and learn from it through reinforcement learning. It is important to remember that pattern matching is different from higher forms of learning, like reinforcement learning.
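To make the distinction concrete, here is a minimal sketch (plain Python, with a toy environment of my own invention) of an agent that learns cause and effect only by acting on its environment, not by studying a fixed dataset:

    import random

    # A tiny chain world: states 0..4, reward only for reaching state 4.
    # The effect of each action is discovered by trying it, not by
    # passively examining data.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]  # step left, step right

    def step(state, action):
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for episode in range(500):
        state = 0
        while state != GOAL:
            # Explore occasionally, otherwise exploit current knowledge.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state

    # The learned policy heads right (+1) from every non-goal state.
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])

No amount of pattern matching over logged trajectories tells the agent what would have happened had it acted differently; the interaction is the point.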
To summarize, I think there are real limitations to this AI, but these limitations are solvable problems, and I anticipate significant future progress.
Fortunately, the environment for a coding AI is a compiler and a CPU, which is much faster and cheaper than physical robots and doesn't require humans for evaluation the way dialogue agents and GANs do.
Well, you still have to assess validity and code quality, which is a difficult task, but not an unsolvable one.
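As a sketch of what automated assessment could look like (the helper below is hypothetical, not an existing tool): approximate validity by executing each candidate against test cases, with the CPU as a cheap, fast judge:

    import subprocess, sys

    def score_candidate(source: str, tests: list) -> float:
        """Hypothetical reward: fraction of (stdin, expected-stdout)
        test cases that a candidate program passes."""
        passed = 0
        for stdin_data, expected in tests:
            try:
                result = subprocess.run(
                    [sys.executable, "-c", source],
                    input=stdin_data, capture_output=True,
                    text=True, timeout=2,  # kill non-terminating candidates
                )
                if result.stdout.strip() == expected.strip():
                    passed += 1
            except subprocess.TimeoutExpired:
                pass  # a hang simply scores zero on that test
        return passed / len(tests)

    # Example: a candidate that should echo the square of its input.
    print(score_candidate("print(int(input()) ** 2)",
                          [("3", "9"), ("5", "25")]))  # 1.0

This says nothing about code quality, of course; it only approximates functional validity.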
Also, the original formulation of Generative Adversarial Networks pits neural networks against each other to train them; they don't need human intervention.
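For reference, the adversarial setup is just two loss terms pulling against each other. A minimal PyTorch sketch of a single training step, on toy networks and made-up data:

    import torch
    import torch.nn as nn

    # Two tiny networks: G maps noise to fake samples, D scores realness.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.randn(32, 2) + 3.0  # stand-in "real" data
    noise = torch.randn(32, 8)

    # Discriminator step: label real samples 1, generated samples 0.
    fake = G(noise).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # No human in the loop: each network is the other's training signal.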
> They feed you all these algorithms in college and your brain suggests new algorithms based on those patterns.
Some come from the other end of the process.
I want to solve that problem -> Functionally, it'd mean this and that -> How would it work? -> What algorithms / patterns are out there that could help?
Usually people with less formal education and more hands on experience, I’d wager.
They're more prone to end up reinventing the wheel, and they spend more time searching for solutions too.
Most people I know who've been to college, or otherwise been educated in the area they work in, tend to solve problems using what they already know (not implying it's a hard limit; just an apparently widespread first instinct).
Which fits the pattern matching described by the grandparent.
A few people I know, most of whom haven't been to college, or done much formal learning at all, but are used to working outside of what they know (that's an important part), tend to solve problems with things they didn't know at the time they set out to solve said problems.
Which doesn’t really fit the pattern matching mentioned by the grandparent. At least not in the way it was meant.
Reducing intelligence to pattern matching raises the question: how do you know which patterns to match against which? By some magic we can answer questions of why, of what something means. Purpose and meaning might be slippery things to pin down, but they are real, we navigate them (usually) effortlessly, and we still have no idea how to even begin to get AI to do those things.
I think those are the distance metrics, which are what produce inductive bias, which is the core essence of what we consider 'intelligence'. Consider a more complicated metric, like a graph distance over a bit of interesting topology. That metric is the unit by which the feature space is uniformly reduced. Things which are not linearized by the metric get treated as noise, so the metric forms a heuristic that overlooks features which may in reality have been salient. That is what makes it an inductive bias.
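A toy illustration of that, in plain numpy: the same nearest-neighbour rule labels the same query differently depending solely on which metric is chosen, so the metric is where the bias lives:

    import numpy as np

    # Two labelled points and one query; only the distance metric varies.
    a = np.array([1.2, 0.0])      # label 0
    b = np.array([1.0, 1.0])      # label 1
    query = np.array([0.0, 0.0])

    euclidean = lambda u, v: np.sqrt(((u - v) ** 2).sum())
    chebyshev = lambda u, v: np.abs(u - v).max()

    for name, dist in [("euclidean", euclidean), ("chebyshev", chebyshev)]:
        label = 0 if dist(query, a) < dist(query, b) else 1
        print(name, "->", label)
    # euclidean -> 0  (a is closer: 1.2 vs ~1.414)
    # chebyshev -> 1  (b is closer: 1.0 vs 1.2)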
(Some call me heterodox, I prefer 'original thinker'.)
To generate new, generally useful algorithms, we need a different type of "AI", i.e. one that combines learning and formal verification. Algorithm design is a cycle: come up with an algorithm, prove what it can or can't do, and repeat until you are happy with the formal properties. Software can help, but we can't automate the math yet.
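A toy version of that cycle, with exhaustive small-case checking standing in for the proof step (which is exactly the part we can't automate yet):

    from itertools import product

    def check_sorts(candidate, max_len=4, values=range(3)):
        """Bounded 'verification': test all small inputs exhaustively.
        A stand-in for a proof; it can refute, but never fully verify."""
        for n in range(max_len + 1):
            for xs in product(values, repeat=n):
                if candidate(list(xs)) != sorted(xs):
                    return list(xs)  # counterexample: revise and repeat
        return None

    # Iteration 1: a plausible but wrong proposal (one bubble pass only).
    def broken_sort(xs):
        for i in range(len(xs) - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        return xs

    print(check_sorts(broken_sort))  # [1, 1, 0] -> refuted, go around again
    print(check_sorts(sorted))       # None -> survives the bounded check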
I see a different path forward based on the success of AlphaGo.
This looks like a clever example of supervised learning. But supervised learning doesn't get you cause and effect; it is just pattern matching.
To get at cause and effect, you need reinforcement learning, like AlphaGo. You can imagine an AI writing code that is then scored for performing correctly. Over time, the AI will learn to write code that performs as intended. I think coding can be used as a "playground" for AI to rapidly improve itself, much as AlphaGo could play Go over and over again.
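A crude sketch of such a playground, with random search standing in for actual learning (the task and search space here are made up for illustration): candidates are generated, executed, and scored, and only the score drives progress:

    import random

    # Target behaviour given as test cases rather than a specification.
    TESTS = [(0, 1), (1, 3), (2, 5), (3, 7)]  # i.e. f(x) = 2*x + 1

    def score(program: str) -> float:
        """Reward = fraction of tests the generated code passes."""
        try:
            f = eval("lambda x: " + program)  # run the code: CPU as judge
            return sum(f(x) == y for x, y in TESTS) / len(TESTS)
        except Exception:
            return 0.0  # code that crashes earns nothing

    def random_program() -> str:
        a, b = random.randint(-3, 3), random.randint(-3, 3)
        return f"{a} * x + {b}"

    best, best_score = None, -1.0
    for _ in range(1000):  # 'play the game' over and over, like self-play
        p = random_program()
        s = score(p)
        if s > best_score:
            best, best_score = p, s
    print(best, best_score)  # converges to '2 * x + 1' with score 1.0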
Imparting a sense of objective to the AI is surely important, and an architecture like AlphaGo might be useful for the general problem of helping a coder. I'm not seeing it, however, for this particular autocomplete-flavored idiom.
AlphaGo learns a game with fixed, well-defined, measurable objectives, by trying it a bazillion times. In this autocomplete idiom the AI's objective is constantly shifting, and conveyed by extremely partial information.
But you could imagine a different arrangement, where the coder expresses the problem in a more structured way -- hopefully involving dependent types, probably involving tests. That deeper encoding would enable a deeper AI understanding (if I can responsibly use that word). The human-provided spec would have to be extremely good, because AlphaGo needs to run a bazillion times, so you can't go the autocomplete route of expecting the human to actually read the code and determine what works.
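A sketch of what that more structured expression might look like, using properties rather than examples as the machine-checkable objective (the hypothesis library is assumed; candidate_sort stands in for the hypothetical AI output):

    from hypothesis import given, strategies as st

    def candidate_sort(xs):  # hypothetical AI-generated code under test
        return sorted(xs)

    # Properties, not examples: any candidate passing these satisfies the
    # spec, so no human has to read the code after a bazillion attempts.
    @given(st.lists(st.integers()))
    def prop_output_is_ordered(xs):
        out = candidate_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))

    @given(st.lists(st.integers()))
    def prop_output_is_permutation(xs):
        assert sorted(candidate_sort(xs)) == sorted(xs)

    prop_output_is_ordered()       # hypothesis runs many generated cases
    prop_output_is_permutation()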
This is more like automation-assisted theorem proving. It takes a lot of human work to get a problem to the point where automation can be useful.
It's like saying that calculators can solve complex math problems; it's true in a sense, but it's not strictly true. We solve the complex math problems using calculators.
And there's already GPT-f [0], a GPT-based automated theorem prover for the Metamath language, which apparently submitted novel short proofs that were accepted into Metamath's archive.
I would very much like a GPT-f for something like SMT; then it could actually make Dafny efficient to check (and probably avoid the need to help it out when it gets stuck!).
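For a flavour of the queries involved, here is a small verification condition of the kind a tool like Dafny discharges, checked with Z3's Python bindings (the condition itself is made up for illustration):

    from z3 import Int, Implies, And, Or, Not, Solver

    # Verification condition for a snippet like:
    #   if x < y { m := x } else { m := y }
    # with postcondition: m <= x && m <= y.
    x, y, m = Int("x"), Int("y"), Int("m")
    vc = Implies(
        Or(And(x < y, m == x), And(x >= y, m == y)),  # the two branches
        And(m <= x, m <= y),                          # the postcondition
    )

    s = Solver()
    s.add(Not(vc))    # the VC is valid iff its negation is unsatisfiable
    print(s.check())  # prints 'unsat' -> the condition holds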
> If so, then this AI will never generate novel algorithms.
This is true, but most programmers don't need to generate novel algorithms themselves anyway.
We would only reach a singularity with respect to coding. There are many important problems beyond computer coding, in engineering, biology, and so on.
Coding isn't chess playing; it's likely about as general as math or thinking itself. If you can write novel code, you can ultimately do biology or engineering or anything else.