Oh, certainly. I was just alluding to the futility of the comparison :).
Interestingly[1], the power of the type system seems to conflict somewhat with type inference. Or, at least, a Turing-complete type-level language like in Shen or Idris does require you to spell out increasing amounts of type information. Obviously, this has to do with the undecidability of Turing-complete languages, but I find it interesting that e.g. Haskell seems to occupy a kind of "sweet spot": inference mostly works, and the compiler can tell the programmer when it needs a type ascription. AFAICT that would be impossible in a language whose type system is a priori undecidable, correct? (Just to spell it out: in Haskell, the compiler knows that if you want features X, Y and Z, then constructs A, B and C become undecidable... and can tell you so.)
[1] Well, I found it interesting, but I'm a layman at best.
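A concrete (if tiny) illustration of that "sweet spot", not from the thread itself but a standard Haskell example: composing `read` and `show` leaves the intermediate type ambiguous, and GHC reports exactly where it needs an ascription rather than guessing or looping.

```haskell
-- A minimal sketch of Haskell asking for a type ascription.
-- Without an annotation this definition is ambiguous: 'read' and
-- 'show' both mention the intermediate type, but nothing pins it
-- down, so GHC rejects
--
--   roundTrip s = show (read s)
--
-- with an "Ambiguous type variable" error. The fix is an explicit
-- ascription on the intermediate value:
roundTrip :: String -> String
roundTrip s = show (read s :: Int)

main :: IO ()
main = putStrLn (roundTrip "42")  -- prints "42"
```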
Absolutely! The more information a type system can express, the less likely it is to guess correctly what you want based only on your value-level representation of intent.
Which is really fascinating when you think about it. It basically expresses (what we all know) that your code captures only a fragment of the intent of your efforts. Good types can capture much more intent.
So to that end, people explore interactive development. In this scenario, you provide a (potentially partial) type corresponding to the programmatic intent you desire and the compiler conducts proof search to find programs which satisfy that type!
The types are actually more informative than the programs and it provides some evidence that we'd really rather be writing at higher-level typed languages and letting the implementations fall out naturally rather than writing the implementations and hoping that inference can build the proper types.
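One standard way to see "types more informative than programs" is parametricity (my illustration, not something from the thread's tools): a sufficiently polymorphic type can pin down its implementation almost uniquely, which is exactly what makes proof search feasible.

```haskell
-- Parametricity at work: each of these types admits essentially one
-- total implementation, so a proof-search tool could derive the body
-- from the signature alone.
f :: a -> a
f x = x                -- the only total inhabitant: identity

swap' :: (a, b) -> (b, a)
swap' (x, y) = (y, x)  -- again forced entirely by the type

main :: IO ()
main = print (f (1 :: Int), swap' ('a', True))
```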
Yes, I've found some of the published Idris videos quite instructive in this regard. Sometimes it can even infer the program fragment that you were supposed to write (given unification and uniqueness constraints), but it seems limited only to demo-level code and frankly at this point I'd be wondering if I got the types wrong and let the compiler infer bad code! (As opposed to getting the program wrong!)
Agda's emacs-mode is probably the most developed form of this available today, although Coq's tactics are doing roughly the same thing.
Types can't be wrong in the same way that values can be. To be more specific: in any circumstance where program derivation could even remotely succeed, you will have had to be specific enough with your types that they cannot easily produce the wrong code. Anything wrong would simply fail to typecheck---at least eventually.
I wouldn't say that program derivation is limited to demo-level code; I would just say that you cannot expect proof search to produce large fragments of code. It's just too large a space!
Instead, it's better to think of it as an interactive game. You describe types as best you can, leaving holes where you haven't figured out what you want. The compiler responds with information and guesses about those holes. Furthermore, if you can nail down what you must prove sufficiently well to have confidence that it's the right thing, then the compiler can probably do the trivial final details for you.
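GHC's typed holes are a lightweight version of this game (the thread is about Agda and Idris, so this is just the nearest Haskell analogue): writing `_` where you're unsure makes the compiler report the hole's type and suggest candidate fills.

```haskell
-- Leave a hole with '_' and GHC reports what belongs there:
--
--   total :: [Int] -> Int
--   total = foldr _ 0
--
-- GHC responds roughly:
--   Found hole: _ :: Int -> Int -> Int
--   Valid hole fits include: (+), (*), subtract, max, min, ...
--
-- Picking one of the suggested fits completes the program:
total :: [Int] -> Int
total = foldr (+) 0
```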
If you look through Conor McBride's (admittedly vast) literature then you'll find many things discussing this idea with respect to his (now defunct) experimental dependently typed language called Epigram.
I really like this analogy to interactive games, thanks!
I kind of see, logically, that the types can't be wrong in quite as many ways as the code/values/terms, but that's still a (possibly) infinite amount of wrong to contend with! Yes, we can, and frequently do, narrow it down, but still... :)
I've been aware of Mr. McBride's literature for some time, but I haven't ventured in beyond a cursory look at that ridiculously-well-punned Sinatra one. ("Do be do"... something?). It's a bit beyond what I can manage, in theory terms, at the moment.
EDIT: Sorry to sound like an excited puppy, but I wish the time and place had been right for me to explore this field as deeply as I'd have liked back when I was an undergraduate.
The thing about being wrong with types is that the end result is an immediate compiler failure. It can be difficult to figure out the right types, but once you do they're certain to behave properly. Thus, the penalty for wrongness is nearly nothing at all.
You face the infinite amount of wrong by working really hard to understand the right.
If you're wrong in values it'll probably still work for a while until it suddenly doesn't for rather mysterious reasons. This can be way down the line, undetected for a long time if not forever. Thus, the pain of being wrong is potentially infinite as well.
So, programming in types is no easier. It's probably even harder today since so few people do it. But the consequences are amazingly different.
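The asymmetry can be made concrete with a standard example (mine, not from the thread): a value-level mistake like taking the head of a possibly-empty list compiles fine and explodes later, while encoding non-emptiness in the type moves the same failure to compile time.

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- Wrong in values: compiles, then fails at runtime, possibly much
-- later and far from the real cause.
firstUnsafe :: [Int] -> Int
firstUnsafe xs = head xs          -- 'head []' throws at runtime

-- Wrong in types: an empty list simply cannot be passed here, so the
-- same mistake becomes a type error at the call site instead.
firstSafe :: NonEmpty Int -> Int
firstSafe = NE.head

main :: IO ()
main = print (firstSafe (1 :| [2, 3]))  -- prints 1
```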
And yeah, with Conor's work it's "come for the puns, stay for the mind-blowing computer science".
> the penalty for wrongness is nearly nothing at all.
Well, I think Edwin Brady showed a couple of C++-template-style error messages during his talks, but having actually programmed semi-advanced C++ template code I can agree with the general thrust that type-level problems are in some sense "better problems to have".
Btw, thanks for the exchange. (This is getting too off-topic, so I'll leave it at that.)