Paul Rubin

2015-11-28 20:40:52 UTC

[Note to whoever: this continues a discussion about functional

programming that started on comp.lang.ada but got off-topic for that

group so I've migrated it here. Please don't crosspost unless there's

significant Ada content in your post.]



> For example, if you wanted to explain the factorial to a student who
> has never been exposed to the concept, chances are that your first
> approach wouldn't be to write on a blackboard
>
> "fact 1 = 1 -- [order of these two lines switched by me -Paul]
> fact n = n * fact (n-1)"

But that's the mathematical definition of the factorial--this isn't
grade school. I assume I'm explaining it to a professional trying to do
high-assurance programming, so of course that's the definition I'd use.

Treating the above as Haskell code that purports to compute the
factorial, it's easier to reason about than the imperative loop
version. Let P(n) be the proposition "fact(n) = n!". We want to
verify that P(n) is true for all n >= 1, and we prove it by induction.
First, P(1) is true immediately from the first equation: fact(1) = 1 =
1!. Second, assume P(n) is true. By the second equation, fact(n+1) =
(n+1) * fact(n), and since fact(n) = n! by the induction hypothesis,
fact(n+1) = (n+1) * n!, which is (n+1)!. That proves P(n) => P(n+1),
so by induction, forall n >= 1. P(n). QED: we've verified the code,
and this was very easy.
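For concreteness, here's that definition as a small runnable Haskell
program. The cross-check against `product [1..n]` is my addition, not
part of the original post:

```haskell
-- The two-equation definition from the thread.
fact :: Integer -> Integer
fact 1 = 1
fact n = n * fact (n - 1)

-- Sanity check (my addition): fact should agree with product [1..n].
main :: IO ()
main = do
  print (fact 5)                                             -- 120
  print (and [fact n == product [1 .. n] | n <- [1 .. 10]])  -- True
```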

I'm out of my depth here, but I think doing the same thing with
imperative code is much messier: you have to put preconditions and
postconditions around every statement to prove the loop invariant that
you're stepping through the factorials, etc. I gather you can do that
with SPARK, but I don't know how it's done.
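To illustrate what the loop version has to carry around, here's a
sketch of mine (in Haskell, not SPARK) of the accumulator loop, with
the invariant a prover would need written out as comments:

```haskell
-- Loop-style factorial with its invariant made explicit (my sketch,
-- not how SPARK actually does it).
-- go i acc maintains:  acc == (i - 1)!  and  1 <= i <= n + 1.
factLoop :: Integer -> Integer
factLoop n = go 1 1
  where
    go i acc
      | i > n     = acc                   -- here i == n + 1, so acc == n!
      | otherwise = go (i + 1) (acc * i)  -- acc * i == i!, restoring the invariant
```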

At the level of everyday coding though, going from loops to recursion
is nothing to be bothered about, trust me. It's like when people freak
out about whitespace as syntax when they first see Python, or about
the parentheses when they first see Lisp. You get used to it quickly
and it stops mattering. Or anyway, if it's really an obstacle, FP
probably isn't for you.

> In any case, if you are using a machine which can only perform one
> binary operation at a time, you will always need an accumulator to
> compute the final result.

I think that's an advanced, low-level concept though. There's tons of

magic in Javascript: JIT compilation, garbage collection, built-in

objects that make RPC calls, etc. But there are zillions of
not-so-experienced Javascript programmers writing crappy web pages all
over the world, who never think about that stuff and have never

heard of an accumulator. A programmer with deep knowledge doing

bleeding edge systems has to understand what the machine is doing, but

someone just trying to bang out everyday tasks can usually stay at the

high level. No accumulators.

> I would gladly challenge this claim by picking two large groups of
> 10-year-olds who have never been exposed to imperative programming yet,

I don't really know about this and it doesn't concern me too much. It

might be easier to start one way or the other, but either way you pick

up more technique as you gain experience. By the time you're evaluating

different approaches to formal software verification (what this thread

was originally about), you're outside the range of what most

10-year-olds are capable of doing, so it's reasonable to assume more

fluency.

> I'm writing programs for massively parallel GPU processors, and last
> time I checked Blelloch's scan was still one of the most efficient
> algorithms

That's outside my area and maybe a niche, but I know some Haskell and FP

work has been done on this stuff (Data Parallel Haskell is inspired by

Blelloch's NESL nested parallel work):

http://conal.net/blog/posts/parallel-tree-scanning-by-composition

http://www.cs.cmu.edu/~rwh/papers/iolambda/short.pdf

https://wiki.haskell.org/GHC/Data_Parallel_Haskell/References

There is also an array library with GPU support:

http://hackage.haskell.org/package/accelerate
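For a flavor of what Blelloch's scan computes, here's a sequential
Haskell model of mine: a sketch showing the recursive pair-up/expand
structure of the exclusive prefix sum, not the actual parallel
execution, and assuming the input length is a power of two. It's
checked against the obvious scanl:

```haskell
-- Sequential model of Blelloch's exclusive scan (my sketch; the real
-- algorithm runs the pair-up and expand phases in parallel).
-- Assumes the input length is a power of two.
exScan :: Num a => [a] -> [a]
exScan [_] = [0]
exScan xs  = concat [[s, s + a] | (s, (a, _)) <- zip (exScan sums) pairs]
  where
    pairs = chunk2 xs                  -- pair up adjacent elements
    sums  = [a + b | (a, b) <- pairs]  -- "up-sweep": reduce each pair

-- Split a list into adjacent pairs.
chunk2 :: [a] -> [(a, a)]
chunk2 (x : y : rest) = (x, y) : chunk2 rest
chunk2 _              = []

main :: IO ()
main = do
  print (exScan [1, 2, 3, 4 :: Integer])      -- [0,1,3,6]
  print (exScan xs == init (scanl (+) 0 xs))  -- True
  where xs = [3, 1, 7, 0, 4, 1, 6, 3 :: Integer]
```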

> Well, I'm learning OCaml using a pretty nice MOOC which has been set
> up by my former university. If you're interested the address is

OCaml is close enough to what I've been doing already that I think I'm

ok with just the online docs and the IRC channel, plus hacking on code.

Thanks though. I might look at some online math classes sometime.