> A much better approach in my opinion is to start with an unrestricted, effectful language and identify fragments with fewer effects via types.
This is precisely what an effect system is, and what the languages that I mentioned do. So it seems that you agree with the church of lambda on this point?
> We have been talking about large-scale software engineering, and whether Curry-Howard or program logic is a more scalable approach towards program verification. [...] I cannot see how a verification tool that I'm forced to use in prototype phases when there is no stable specification is cost-effective in large-scale software engineering.
I've explained several times that the assumption that you need to do full-on verification from the start with a CH-style system is incorrect. You can write your prototype without any verification and add lemmas and theorems about it when the spec stabilizes. This is not a fundamental difference between the CH style and the program logic style.
> Sheer expressiveness is not interesting. You can produce trivial typing systems where every program is typable.
Well yes, you also need "well typed programs don't go wrong" of course, which dependent type systems do have. I don't think decidability of type checking is desirable, by the way. What you really want is a system that checks statically as much as possible, but can check dynamically that which it can't verify statically. Rather than two possibilities, "correct" vs "incorrect", you want three: "definitely correct", "definitely incorrect", and "can't determine statically, so let's check dynamically". In the last case conventional systems just reject the program as incorrect, but I think that is not as helpful.
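To make the three-valued idea concrete, here is a minimal sketch (invented for illustration, not taken from any existing checker) of a hybrid checker for a toy non-negativity refinement: it either discharges the property statically, rejects it statically, or emits a residual runtime check instead of rejecting the program.

```python
from enum import Enum

class Verdict(Enum):
    OK = "definitely correct"
    BAD = "definitely incorrect"
    UNKNOWN = "check dynamically"

def check_nonneg(expr):
    """Statically decide, where possible, whether `expr` evaluates to >= 0.
    `expr` is a toy AST: an int literal, ("abs", e), or ("input",)."""
    if isinstance(expr, int):
        return Verdict.OK if expr >= 0 else Verdict.BAD
    if isinstance(expr, tuple) and expr[0] == "abs":
        return Verdict.OK          # |e| >= 0 for any e
    return Verdict.UNKNOWN         # value only known at runtime

def compile_guard(expr):
    """A conventional checker would reject UNKNOWN outright; a hybrid
    checker compiles it to a residual dynamic check instead."""
    verdict = check_nonneg(expr)
    if verdict is Verdict.BAD:
        raise TypeError("statically incorrect")
    if verdict is Verdict.UNKNOWN:
        return lambda value: value >= 0   # checked at runtime
    return lambda value: True             # statically discharged, no cost

# `("input",)` cannot be classified statically, so a guard is emitted:
guard = compile_guard(("input",))
assert guard(3) and not guard(-1)
```

The point of the sketch is only the three-way split: `BAD` is still rejected up front, and the dynamic check is confined to the cases the static analysis genuinely cannot decide.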
> Even if some kind of polished Haskell with dependent types can be built, whether that then generalises to state, concurrency, timing etc. in a clean way is another open question.
More open than a program logic system that does the same? I don't think the approaches are that different in this regard. With state/concurrency/etc. you work with a description of the computation (as in Haskell, where you work with a description of an IO computation). Those descriptions satisfy certain laws which you can use to reason about them. There is of course a huge design space to be explored here, but it isn't all that different from the design space of a program logic style system for the same features.
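The "description of the computation satisfying laws" idea can be sketched in a few lines. Here stateful programs are first-order data (in the spirit of Haskell's State/IO descriptions), an interpreter gives them meaning, and the usual state laws can be checked against the interpreter. The constructor names (`ret`, `get`, `put`) are invented for this sketch.

```python
# A stateful computation as a *description* (a first-order data structure):
# the program is a value you can interpret and reason about.

def ret(x):          return ("ret", x)
def get(k):          return ("get", k)           # k: state -> program
def put(s, next_p):  return ("put", s, next_p)   # set state, continue

def run(prog, state):
    """Interpret a description against an initial state;
    returns (result, final_state)."""
    tag = prog[0]
    if tag == "ret":
        return prog[1], state
    if tag == "get":
        return run(prog[1](state), state)
    if tag == "put":
        return run(prog[2], prog[1])

# One of the usual state laws: `put s1; put s2` behaves like `put s2`.
lhs = put(1, put(2, ret(())))
rhs = put(2, ret(()))
assert run(lhs, 0) == run(rhs, 0)

# Another: reading the state and writing it back is a no-op.
noop = get(lambda s: put(s, ret(())))
assert run(noop, 42) == run(ret(()), 42)
```

Laws like these are exactly the raw material for equational reasoning about effectful programs, whichever camp's notation you prefer.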
> The term is used with different meanings in different contexts.
I see, yes. They're sometimes called implicit parameters, sometimes implicit arguments, and there are differences in whether only unification is used, or also non-canonical inference in the style of type classes, canonical structures, or Scala implicits.
> That's not full type inference. Sage uses hybrid type checking and inserts dynamic checks where type inference fails.
The dynamic checks are for when type checking fails, not type inference. The type inference will always infer a type if the term is typeable. Of course it may sometimes infer a type that the type checker can't statically determine to be valid, so you need dynamic checks then. There is no incompleteness-defeating magic here: if the term is typeable, inference will produce a type for it, whether or not the checker can statically establish that the type is valid. I did say this was more of a theoretical curiosity.
> This is precisely what an effect system is, and what the languages that I mentioned do. So it seems that you agree with the church of lambda on this point?

No, Eff starts with a pure lambda calculus and adds effects on top.
> I've explained several times that the assumption that you need to do full-on verification from the start with a CH-style system is incorrect. You can write your prototype without any verification and add lemmas and theorems about it when the spec stabilizes.

As I said, I don't really see that. There is research going on, like the aforementioned Trellys, that seeks to build such systems, but they are not there yet.
> Well yes, you also need "well typed programs don't go wrong" of course, which dependent type systems do have.

Well-typed programs do go wrong! Here is a well-typed OCaml program:

```ocaml
List.hd [];;  (* type checks, but raises an exception at runtime *)
```

When I run it, an exception is thrown. The point is: what counts as "not going wrong" is a fairly arbitrary definition. And for the trivial typing system I can define the set of programs that go wrong as the empty set.
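That trivial typing system can be spelled out in a few lines (a deliberately degenerate toy, just to make the objection concrete): every program gets the single type `Any`, and "going wrong" is defined as the empty set of behaviours, so the soundness slogan holds vacuously.

```python
ANY = "Any"

def typecheck(program):
    """Every program is well typed: there is only one type."""
    return ANY

def goes_wrong(program):
    """Define 'going wrong' as the empty set of behaviours."""
    return False

# "Well-typed programs don't go wrong" is vacuously true here,
# which is why expressiveness alone proves nothing.
assert typecheck("List.hd [];;") == ANY
assert not goes_wrong("List.hd [];;")
```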
> I don't think decidability of type checking is desirable, by the way. What you really want is a system that checks statically as much as possible, but can check dynamically that which it can't verify statically. Rather than two possibilities, "correct" vs "incorrect", you want three: "definitely correct", "definitely incorrect" and "can't determine statically, so let's check dynamically". In the last case conventional systems just reject the program as incorrect, but I think that is not as helpful.

I don't think this will be useful, because then every typo will turn into a dynamic check, and you end up with a dynamically typed language for all practical purposes.
> More open than a program logic system that does the same?

Yes, vastly more open. Those problems are essentially solved for program logics.

> I don't think the approaches are that different in this regard. With state/concurrency/etc you work with a description of the computation

Yes, you can work by encoding, but then you pay the "encoding tax" in verification.
> The dynamic checks are for when type checking fails, not type inference. The type inference will always infer the type if the term is typeable.

That's not how I understand the Sage paper [1], which says: "Our implementation performs bidirectional type checking (Pierce and Turner 1998), allowing many types to be inferred locally, but does not yet perform full type inference (a much more technically challenging problem)."

> That paper itself says so in the first sentence?

But the paper also presents a complete inference algorithm for GADTs. The point is: it depends on exactly what you mean by GADTs. I don't know what OCaml does here. I stopped using OCaml just before GADTs were added.
> Are you sure?
That paper itself says so in the first sentence?