Discussion:
C++ vs Haskell Matrix Determinant
news.swissonline.ch
2003-11-19 21:57:22 UTC
Hello

For a week or two now I have been learning functional languages. I really
like features like lazy evaluation, and I think functional languages are
much cleaner than imperative ones. Today I did a little benchmark between
my matrix lib in C++ and Haskell. I was shocked! The C++ version needs 25
seconds to calculate the determinant of a 12x12 identity matrix. The
Haskell version needs 1min15sec for an 11x11 identity matrix. Maybe you
have some tricks for me to speed up the Haskell version. The Haskell
version is compiled with GHC (ghc -O).

Cheers, Tobias

----------
C++ source
----------
#include <iostream>
#include <stdexcept>

namespace math
{

template<typename T, unsigned int SIZE>
class vec;

template<typename T, unsigned int ROWS, unsigned int COLS>
class matrix {
private:
    T m_fields[ROWS][COLS];

    void checkindex(unsigned int row, unsigned int col) const {
        if(row >= ROWS) {
            throw std::out_of_range("row index is out of range");
        }
        if(col >= COLS) {
            throw std::out_of_range("col index is out of range");
        }
    }

public:
    matrix() {
    }

    // build the submatrix of c with row ai and column ak removed
    matrix(const matrix<T, ROWS+1, COLS+1>& c, unsigned int ai, unsigned int ak) {
        for(unsigned int i = 0, thisi = 0; i < ROWS+1; ++i) {
            if(i != ai) {
                for(unsigned int k = 0, thisk = 0; k < COLS+1; ++k) {
                    if(k != ak) {
                        m_fields[thisi][thisk] = c.at(i,k);
                        ++thisk;
                    }
                }
                ++thisi;
            }
        }
    }

    T& at(unsigned int row, unsigned int col) {
#ifdef _DEBUG
        checkindex(row, col);
#endif
        return m_fields[row][col];
    }

    const T& at(unsigned int row, unsigned int col) const {
#ifdef _DEBUG
        checkindex(row, col);
#endif
        return m_fields[row][col];
    }
};

// Laplace expansion along the first row
template<typename T, unsigned int ORDER>
T det(const matrix<T, ORDER, ORDER>& a)
{
    T d = T(0.0);
    for(unsigned int icol = 0; icol < ORDER; ++icol) {
        matrix<T, ORDER-1, ORDER-1> sub(a, 0, icol);
        T pre = (icol % 2) == 0 ? T(1.0) : T(-1.0);
        d += pre * a.at(0,icol) * det(sub);
    }
    return d;
}

template<typename T>
T det(const matrix<T, 1, 1>& a)
{
    return a.at(0,0);
}

template<typename T, unsigned int ORDER>
matrix<T, ORDER, ORDER> identity()
{
    matrix<T, ORDER, ORDER> r;
    for(unsigned int irow = 0; irow < ORDER; ++irow) {
        for(unsigned int icol = 0; icol < ORDER; ++icol) {
            r.at(irow,icol) = (icol == irow) ? T(1.0) : T(0.0);
        }
    }
    return r;
}

} // namespace math

int main()
{
    math::matrix<float,12,12> mi(math::identity<float,12>());
    int dummy;
    std::cin >> dummy;  // runtime input keeps the compiler from folding everything away
    if(dummy == 1) {
        mi.at(1,1) = 0.9f;
        mi.at(1,5) = 0.9f;
        mi.at(1,3) = 1.9f;
    }
    std::cout << det(mi);  // det is found via argument-dependent lookup
}

--------------
Haskell source
--------------

import Data.List (transpose)

-- n x n identity matrix as a list of rows
identity :: Int -> [[Float]]
identity 0 = []
identity n = (1.0 : replicate (n - 1) 0.0) : map (0.0 :) (identity (n - 1))

-- all versions of a list with one element removed
sub :: [a] -> [[a]]
sub [x] = [[]]
sub (x:xs) = xs : map (x:) (sub xs)

-- for each deleted row, the minors with one column deleted as well
submat :: [[Float]] -> [[[[Float]]]]
submat [[x]] = [[[[]]]]
submat x = [transpose (map sub subm) | subm <- sub x]

-- Laplace expansion along the first row
det :: [[Float]] -> Float
det [[x]] = x
det x = sum (zipWith (*) (zipWith (*) detmatrix sign) (head x))
  where detmatrix = [det subm | subm <- head (submat x)]
        sign = cycle [1.0, -1.0]

main :: IO ()
main = print (det (identity 11))
Oleg Trott
2003-11-20 03:53:18 UTC
Post by news.swissonline.ch
Today I did a little benchmark between my matrix lib in C++ and Haskell.
I was shocked! The C++ version needs 25 seconds to calculate the
determinant of a 12x12 identity matrix. The Haskell version needs
1min15sec for an 11x11 identity matrix. Maybe you have some tricks for
me to speed up the Haskell version.
<snip>

You should calculate the determinant of a non-sparse matrix by converting
the matrix to a triangular one, whose determinant is simply the product of
the diagonal elements. The complexity of such an algorithm is polynomial,
while your approach has N! complexity.
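
A minimal list-based sketch of that idea (my illustration, not code from
the thread; detGauss picks the first nonzero pivot, where production code
would pick the largest one in the column for numerical stability):

-- Eliminate the first column, recurse on the rest, multiply the pivots;
-- moving the pivot row to the front is accounted for in the sign.
detGauss :: [[Double]] -> Double
detGauss [] = 1.0
detGauss rows =
  case break ((/= 0.0) . head) rows of
    (_, []) -> 0.0  -- first column is all zeros: singular
    (above, p : below) ->
      let sign     = if even (length above) then 1.0 else -1.0
          pivot    = head p
          -- subtract the right multiple of the pivot row, drop column 0
          reduce r = zipWith (\x y -> y - (head r / pivot) * x)
                             (tail p) (tail r)
      in sign * pivot * detGauss (map reduce (above ++ below))

With this, the 12x12 case is effectively instantaneous, since the work is
O(N^3) rather than N!.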

HTH
--
Oleg Trott <***@columbia.edu>
Tobias Neukom
2003-11-20 06:17:47 UTC
Post by Oleg Trott
You should calculate the determinant of a non-sparse matrix by converting
the matrix to a triangular one, whose determinant is simply the product of
the diagonal elements. The complexity of such an algorithm is polynomial,
while your approach has N! complexity.
I use this lib normally for 4x4 non-sparse matrices. I could also use
another matrix (a rotation matrix, for example), but I think that would
not make a difference for the benchmark.

Thanks anyway, Tobias
Oleg Trott
2003-11-20 08:01:55 UTC
Post by Tobias Neukom
I use this lib normally for 4x4 non-sparse matrices. I could also use
another matrix (a rotation matrix, for example), but I think that would
not make a difference for the benchmark.
You are missing the point. You are using an N! algorithm where an N^3/3
one should be used.

4! / (4^3/3) = 1.125
40! / (40^3/3) = 38246028902245206297450528262317932544000000
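
Those figures are easy to reproduce (a throwaway check, not part of the
original post):

-- ratio of Laplace-expansion work (N!) to elimination work (N^3/3)
ratio :: Integer -> Double
ratio n = fromIntegral (product [1 .. n]) / (fromIntegral (n ^ 3) / 3)

-- exact integer form; 3 * n! is divisible by n^3 for n = 40
ratioExact :: Integer -> Integer
ratioExact n = 3 * product [1 .. n] `div` n ^ 3

main :: IO ()
main = do
  print (ratio 4)        -- 1.125
  print (ratioExact 40)  -- 38246028902245206297450528262317932544000000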
--
Oleg Trott <***@columbia.edu>
Jerzy Karczmarczuk
2003-11-20 07:53:01 UTC
Post by Oleg Trott
Post by news.swissonline.ch
... Today I did a little benchmark between my matrix lib in C++ and
Haskell. I was shocked! The C++ version needs 25 seconds to calculate the
determinant of a 12x12 identity matrix. The Haskell version needs
1min15sec for an 11x11 identity matrix. Maybe you have some tricks for
me to speed up the Haskell version.
You should calculate the determinant of a non-sparse matrix by converting
the matrix to a triangular one, whose determinant is simply the product of
the diagonal elements. The complexity of such an algorithm is polynomial,
while your approach has N! complexity.
This is - I suspect - a non-answer.
Obviously it is true that the recursive Laplace expansion is - typically -
*much more* costly than Gaussian elimination, etc., but I presume that the
original query is just a benchmark, not a package from a usable library.

The problem is that C++ uses random-access arrays, while Haskell uses
lists. This may introduce several penalties, horrible if the data
structures are processed randomly or restructured/copied instead of
modified in place. Here this problem may not be so dramatic, although it
is possible that the list algorithm easily picks up an additional factor
of N in its complexity.

Anyway, if I might suggest - use ARRAYS and not lists, and avoid too much
dynamic memory allocation. Then repeat the benchmark. Also, perhaps,
compute the dimension of your matrix first and monitor it while descending
recursively; when it becomes equal to 2 or 3, use the inline formula.
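
Something along these lines, perhaps (a sketch of that advice - my code,
not Tobias's: one immutable Data.Array, index bookkeeping instead of
submatrix copies, and inline 1x1/2x2 base cases):

import Data.Array

detArr :: Array (Int, Int) Float -> Float
detArr m = go 0 [0 .. hi]
  where
    (_, (hi, _)) = bounds m
    go row [c] = m ! (row, c)                       -- 1x1 base case
    go row [c0, c1] =                               -- inline 2x2 formula
      m ! (row, c0) * m ! (row + 1, c1) - m ! (row, c1) * m ! (row + 1, c0)
    -- general case: expand along the current row, passing only the list
    -- of still-active columns instead of building a submatrix
    go row cols =
      sum [ sign * m ! (row, c) * go (row + 1) (filter (/= c) cols)
          | (sign, c) <- zip (cycle [1, -1]) cols ]

identityArr :: Int -> Array (Int, Int) Float
identityArr n = array ((0, 0), (n - 1, n - 1))
  [ ((i, j), if i == j then 1 else 0) | i <- [0 .. n - 1], j <- [0 .. n - 1] ]

main :: IO ()
main = print (detArr (identityArr 11))

Still N! asymptotically, as Oleg points out, but it removes the list
restructuring and copying that dominate the original version's run time.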


Jerzy Karczmarczuk
Oleg Trott
2003-11-20 09:58:41 UTC
Post by Jerzy Karczmarczuk
This is - I suspect - a non-answer.
Obviously it is true that the recursive Laplace expansion is - typically -
*much more* costly than Gaussian elimination, etc., but I presume that the
original query is just a benchmark, not a package from a usable library.
He did call it a "lib".
Post by Jerzy Karczmarczuk
The problem is that C++ uses random-access arrays, while Haskell uses
lists. This may introduce several penalties, horrible if the data
structures are processed randomly or restructured/copied instead of
modified in place. Here this problem may not be so dramatic, although it
is possible that the list algorithm easily picks up an additional factor
of N in its complexity.
Anyway, if I might suggest - use ARRAYS and not lists, and avoid too much
dynamic memory allocation. Then repeat the benchmark. Also, perhaps,
compute the dimension of your matrix first and monitor it while descending
recursively; when it becomes equal to 2 or 3, use the inline formula.
You are advising him to use an INCORRECT solution! Not only does it have
N! instead of N^3 complexity, but it's also very numerically unstable: what
you'll get for anything but the most trivial matrices will have nothing to
do with the correct answer. As I always say, numerics is not for amateurs -
you don't get core dumps or uncaught exceptions, just silent wrong output
(that takes super-exponential time)

As to the validity of such non-programs as "silly little micro-benchmarks",
there must be a FAQ about this somewhere. Basically, they are meaningless
because the results are a product of arbitrary choices made by the authors.
If the question were "how does the best C++ program for calculating the
determinant (correctly) compare to its best Haskell analogue", then it
might be meaningful.

Cheers,
--
Oleg Trott <***@columbia.edu>
Jerzy Karczmarczuk
2003-11-20 10:39:45 UTC
...
Post by Oleg Trott
Post by Jerzy Karczmarczuk
Anyway, if I might suggest - use ARRAYS and not lists, and avoid too much
dynamic memory allocation. Then repeat the benchmark. Also, perhaps,
compute the dimension of your matrix first and monitor it while descending
recursively; when it becomes equal to 2 or 3, use the inline formula.
You are advising him to use an INCORRECT solution! Not only does it have
N! instead of N^3 complexity, but it's also very numerically unstable: what
you'll get for anything but the most trivial matrices will have nothing to
do with the correct answer. As I always say, numerics is not for amateurs -
you don't get core dumps or uncaught exceptions, just silent wrong output
(that takes super-exponential time)
Come on, old man, I have been living in a "numeric world" for some 30
years; I am not as amateurish as you (very politely) suggest.

Look, this is a functional language newsgroup. I *understood* that the
fellow is interested mainly in finding out why the Haskell solution seems
to be much less efficient than his C++ code.

I dismissed all the problems with ill-conditioned matrices, etc. (BTW,
Gaussian elimination may produce some abominations as well if you do not
control the pivoting, etc., in suspect cases...) I know, and I suspect
that the original poster knows as well, that for non-sparse matrices the
Laplace method is not suitable. Don't be a Jesuit; we are not talking
about numerical efficiency here. And even if we were, please don't shout
"INCORRECT", because you show that your definition of correctness is not
portable... Some symbolic packages use the Laplace expansion to compute
some symbolic determinants exactly, as polynomials, without introducing
the rational entities generated by the elimination algorithms.
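
A toy illustration of that point (my example, not taken from any
particular symbolic package): over Integer entries the division-free
Laplace expansion stays exact, while elimination would force a move to
Rational intermediates.

-- Division-free Laplace expansion: the result is an exact Integer.
detL :: [[Integer]] -> Integer
detL [[x]] = x
detL rows@(r : _) =
  sum [ sign * x * detL (minor c)
      | (sign, c, x) <- zip3 (cycle [1, -1]) [0 ..] r ]
  where
    -- drop the first row and column c from the remaining rows
    minor c = [ [ y | (j, y) <- zip [0 ..] row, j /= c ] | row <- tail rows ]

main :: IO ()
main = print (detL [[2, 3, 1], [4, 7, 5], [6, 8, 9]])  -- prints 18, exactly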

Others, even when "exact" algorithms are available, prefer "inexact"
iterative methods such as the Gauss-Seidel technique. So what? Shall I
fight against them? This is preposterous. Again, this is not a discussion
of tools suitable for, say, molecular biophysicists at Palmer Lab, if you
see what I am aiming at...
Post by Oleg Trott
As to the validity of such non-programs as "silly little micro-benchmarks",
there must be a FAQ about this somewhere. Basically, these are meaningless
because the results are a product of some arbitrary choices made by the
authors.
Perhaps you are right, but since I don't understand at all what you have
against private benchmarks, I leave you with your demons. I have done a
lot of private benchmarks based on my arbitrary choices in my life, and I
am satisfied. The original poster didn't claim that he produced a
benchmark suitable *for you*. If you wish to save his soul from damnation
[for me it is too late], then point him to those mythical FAQs instead of
suggesting that something must be somewhere.

Anyway, for some years yet people will still use the exponential algorithm
for Fibonacci sequences to benchmark the efficiency of recursive stack
processing, etc. So what? Will you tell them that this is a stupid
algorithm? They KNOW that.

==

Once more, I believe that the essence of the problem here is the
difference in efficiency between random-access data structures and
sequential lists, that's all. Over and out.
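
A toy demonstration of that difference (my example): indexing the last
element of a list walks the whole spine, while an array lookup is
constant-time.

import Data.Array

main :: IO ()
main = do
  let n   = 100000
      xs  = [0 .. n] :: [Int]
      arr = listArray (0, n) xs
  print (xs !! n)   -- O(n): walks every cons cell of the list
  print (arr ! n)   -- O(1): direct indexing into the array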


Jerzy Karczmarczuk
