My gripe with Project Euler is that it's made up of specific cases instead of generalized problems. Sometimes the solution that most people arrive at, and that the judge accepts, is one that fails in the general case. For example, problem 13: http://projecteuler.net/index.php?section=problems&id=13
The easiest solution, and the one given as an example, is to truncate all the numbers to their leading digits and add them as doubles. But that wouldn't work for all possible inputs of a generalized version of the problem, and it's not a very interesting solution either.
I think PE trains future programmers to ignore edge cases, which is exactly what causes the most bugs in real-world software.
EDIT: Googling for solutions to this problem that people have posted on their blogs, I find that most fall into one of two categories:
1. Trivial solutions using a bignum library or a language with built-in bignums. These are correct but nothing interesting was learned by writing them.
2. Solutions that assume the numbers can be truncated before adding. That works in this case but not in general, as I noted earlier. The really sad thing is that the people who solved it this way seem to be under the impression that it is a general solution, so the exercise has not only taught them to write buggy code, it has also helped them validate a false hypothesis.
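Both categories are easy to sketch in Python. Here are two 50-digit numbers of the kind the problem uses (stand-ins of my own choosing, not necessarily the official data), summed both ways:

```python
# Two 50-digit numbers of the kind problem 13 uses (stand-ins, not
# necessarily the official data).
numbers = [
    "37107287533902102798797998220837590246510135740250",
    "46376937677490009712648124896970078050417018260538",
]

# Category 1: exact sum with Python's built-in bignums -- trivially correct.
exact = str(sum(int(n) for n in numbers))[:10]

# Category 2: truncate each number to its 15 leading digits and add as
# doubles. This happens to work for benign inputs like these.
approx = str(int(sum(float(n[:15]) * 10 ** (len(n) - 15) for n in numbers)))[:10]

print(exact, approx)  # the same ten digits for these inputs
```

For well-behaved inputs the two agree, which is exactly why people come away believing the truncation trick is general.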
A double has about 15 to 16 decimal digits of precision. Even if the 16th digit of all 50 numbers were 9, it still would not be enough to affect the 11th digit of the sum.
If all the numbers were solid 9's, you would worry about the loss of precision from adding many small numbers to a large one, but with just 50 inputs you lose at worst 3 digits of precision, which is still not enough to affect the 11th digit.
Only 5 numbers, but you need all digits even to compute the first digit of the sum. There are many less obvious edge cases; any set of numbers that adds up to
60000000000...0000000000
with at least one carry from the last column will do.
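To make that concrete, here is a two-number instance I constructed along those lines: the exact sum is a 6 followed by 49 zeros, but truncating to 15 digits and adding as doubles gets even the first digit wrong.

```python
# Two 50-digit numbers (my own construction) whose exact sum is
# 6 followed by 49 zeros, reached only via a carry out of the low digits.
a = 3 * 10**49 - 1   # a 2 followed by forty-nine 9s
b = 3 * 10**49 + 1   # a 3, forty-eight 0s, then a 1

exact = str(a + b)[:10]

def truncated_float_sum(nums, keep=15):
    """Sum numbers after truncating each to its `keep` leading digits."""
    total = 0.0
    for n in nums:
        s = str(n)
        total += float(s[:keep]) * 10 ** (len(s) - keep)
    return total

approx = str(int(truncated_float_sum([a, b])))[:10]

print(exact)   # 6000000000
print(approx)  # 5999999999 -- even the first digit is wrong
```

The truncation throws away exactly the digits that carry up into the leading ones, so no amount of double precision saves it.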
Hmm, I guess I would split the numbers in half and sum each set of halves (making sure to set the exponent correctly). Then add the less significant half to the more significant half. I.e., basically add the numbers in order of magnitude. It doesn't have to be just two halves.
Did anyone post this as a solution?
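For what it's worth, here is one concrete reading of that idea, sketched with plain Python ints so the structure is clear: split every number into fixed-width digit chunks, sum each chunk column separately, then carry from the less significant columns into the more significant ones. The chunk width of 13 is chosen so that even fifty chunk sums would stay below 2^53 and thus remain exact in a double, though no floats are used here.

```python
CHUNK = 13  # digits per chunk: 50 * 10**13 < 2**53, so sums stay exact

def chunked_sum(numbers):
    """Sum decimal strings by columns of CHUNK-digit limbs, then carry."""
    width = max(len(n) for n in numbers)
    padded = [n.zfill(width) for n in numbers]
    ncols = -(-width // CHUNK)  # ceiling division
    # Column 0 is the least significant chunk of each number.
    cols = [0] * ncols
    for n in padded:
        for c in range(ncols):
            end = width - c * CHUNK
            start = max(end - CHUNK, 0)
            cols[c] += int(n[start:end])
    # Propagate carries from the least significant column upward.
    limbs, carry = [], 0
    for c in range(ncols):
        v = cols[c] + carry
        limbs.append(v % 10**CHUNK)
        carry = v // 10**CHUNK
    digits = str(carry) if carry else ""
    for v in reversed(limbs):
        digits += str(v).zfill(CHUNK)
    return digits.lstrip("0") or "0"

# The adversarial pair from above: the carry propagates correctly.
print(chunked_sum(["2999" + "9" * 46, "3" + "0" * 48 + "1"]))
```

This is essentially what a bignum library does internally, just with binary limbs instead of decimal chunks.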
I do remember learning this somewhere - it was a warning about loss of precision when adding lots of small numbers to a few large ones. And the solution was to do all the small numbers first.
Seems like this would be a perfect place to teach this lesson. Do you know if this site does that?
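I don't know whether the site teaches it, but the lesson is a one-minute demo (values contrived by me):

```python
# Adding many small floats to one large float: the small ones vanish if
# the large value is accumulated first, but survive if they go in first.
values = [1e16] + [1.0] * 1000

large_first = 0.0
for v in values:            # 1e16 enters first; each +1.0 rounds away
    large_first += v

small_first = 0.0
for v in sorted(values):    # the 1.0s accumulate to 1000.0 first
    small_first += v

print(small_first - large_first)  # 1000.0 -- the small terms were lost
```

Python's math.fsum computes an exactly rounded sum without needing the sort, if you just want the right answer rather than the lesson.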