When 64 bits isn’t enough
Google have been running a few “treasure hunt” challenges, and with a half hour to spare I decided to give one of them a spin. As the challenges are still live I’ll spare you the details of my solution, but naturally I worked a number of subsets of the problem in my head and immediately began to think about how I’d represent my solution as an algorithm. I’m most proficient with C#, so off I went hacking together a few lines of code and cheerfully submitted my answer. After a short wait I was informed of the real answer, and that mine was short by many orders of magnitude. I don’t like being wrong. Then it struck me… integer overflow! What a sucker I am!
When developing most line-of-business applications it’s standard to stick with 32-bit integers; after all, an unsigned 32-bit integer can represent values up to 4,294,967,295, which covers the vast majority of cases. The solution to this particular problem, however, can’t even be represented by an unsigned 64-bit integer. Which leads to two questions:
1. Why does C#/.NET not throw integer overflow exceptions?
It does throw integer overflow exceptions, but the check is not enabled by default. You can turn it on in two different ways: globally via a compiler switch (/checked), or locally via the checked keyword.
It isn’t the default because there is a significant performance penalty for carrying out the check. In reality, though, there is probably an argument for enabling it, at the very least in initial builds to QA/Test environments for applications doing serious number crunching.
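For illustration, a minimal sketch of both behaviours (a variable is needed because overflow in constant expressions is caught at compile time):

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        // Unchecked (the default): the addition silently wraps around.
        int wrapped = unchecked(max + 1);
        Console.WriteLine(wrapped); // -2147483648 (int.MinValue)

        // Checked: the same addition throws at runtime.
        try
        {
            int boom = checked(max + 1);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException thrown");
        }
    }
}
```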
2. What do you do when 64 bits isn’t enough?
Now this is the first time I’m genuinely disappointed with the C# language/.NET Framework: there is no support out of the box for dealing with more than 64 bits. Granted, this is an edge case for business applications, but it becomes much more important in cryptography and science. The BCL team had added a BigInteger class to System.Numerics for 3.5 and it was present throughout the betas. Unfortunately, it got pulled for performance and compatibility reasons. It doesn’t appear to be in the beta for .NET 3.5 SP1 released a few weeks ago, so we’re probably looking at 2009/2010.
Another suggestion is to use the BigInteger implementation supplied in the Visual J# runtime, and although that’s sensible enough, it sums up how idiotic it is that there isn’t a BigInteger implementation in the actual framework! I also read that F# has its own BigInt – talk about “red rag to a bull”.
Luckily, there is an answer from the community at Code Project.
Incidentally, I was able to confirm my approach was correct by using floating-point arithmetic instead (checking the most significant bits). My own mistake aside, this leads me to the conclusion that it’s all a bit poor by MS: even if I had recognised the integer overflow before submitting my answer, there is nothing in the framework that would have helped.
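The floating-point trick looks something like this (a sketch using made-up numbers, not the actual challenge: a double only keeps around 15–16 significant digits, but that is enough to sanity-check the magnitude and leading digits of a result that overflows 64-bit integers):

```csharp
using System;

class MagnitudeCheck
{
    static void Main()
    {
        ulong x = 10000000000UL; // 10^10

        // 10^20 doesn't fit in 64 bits, so this silently wraps mod 2^64.
        ulong wrapped = x * x;

        // The double result loses low-order bits but keeps the most
        // significant ones, so the leading digits are trustworthy.
        double approx = (double)x * (double)x;

        Console.WriteLine(wrapped); // 7766279631452241920 (garbage)
        Console.WriteLine(approx);  // ~1e20, the correct magnitude
    }
}
```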