When is C++ long double not long double?


When the compiler is Microsoft’s. The data type long double exists and can be used but there is no difference between long double and double.

Saturn and Titan have been developed on Linux using the g++ and clang++ compilers, where long double occupies 128 bits of storage (only 80 bits are actually used) and double occupies 64 bits. I used the long double data type because it allows deeper zooms than double before the generated picture degrades from lack of precision. I also introduced code to check the precision required to generate fractals correctly as the level of zoom increases; the initial shift away from long double works most of the time, but for some reason it doesn’t when long double is in fact double. Update: the example fractal illustrating the problem doesn’t shift to multi-precision because it is one of the fractals for which the automatic mechanism doesn’t work; zooming in further with the Linux version exhibits the same problem, it just does so later because of the greater precision of long double.

For Windows versions of Saturn and Titan the testing was cursory so I missed this problem which is present in all released versions of Saturn and Titan to date.

The problem is best illustrated using two screenshots of the yet-to-be-released version 4.0.0 of Saturn, one running on Windows and the other on Linux.

Saturn 4.0.0 on Windows 7

Saturn 4.0.0 on Linux

So I now have to add code to Saturn and Titan to account for Microsoft defining long double as double.
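Such a check can be made at compile time with std::numeric_limits, so the precision thresholds only assume the extra zoom depth when long double really is wider than double. The names below are illustrative, not Saturn’s actual code:

```cpp
#include <limits>

// True on g++/clang++ for x86 (64 bit mantissa vs 53),
// false with Microsoft's compiler, where long double == double.
constexpr bool long_double_is_wider =
    std::numeric_limits<long double>::digits >
    std::numeric_limits<double>::digits;

// Hypothetical threshold: the mantissa width available from hardware
// floating point before the program must switch to multi-precision.
constexpr int max_hardware_precision_bits =
    long_double_is_wider
        ? std::numeric_limits<long double>::digits   // 64 on Linux/x86
        : std::numeric_limits<double>::digits;       // 53 on MSVC
```

Because both values are constexpr, the shift-to-multi-precision logic can branch on them with no runtime cost on either platform.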


3 responses to “When is C++ long double not long double?”


  1. If you don’t mind a little assembly, it is not difficult to address the FPU and create at least an 80 bit double on Windows. You could then port it to gcc, etc. I did this in the 80’s for a program called FracTools. For a companion product called FracZooms, I slid from int to double to 80 bit to extended precision decimal as needed for the depth and resolution of a given image. Just a thought (and obviously not a new one 🙂 )

    –hsm

    • I have done experiments in the past with x86 assembler and fractals in an attempt to improve performance, which resulted in code that was actually slower. Implementing long double code in assembler would require an enormous amount of work, especially with respect to implementing functions such as square root, sin, cos, tan, asin, acos, atan, cosh, sinh, tanh, etc. Writing assembler is error prone, and since the problem only affects the Windows port of my program, it’ll just shift to using 80 bit multi-precision numbers instead of long double; I suspect the multi-precision library may use the FPU via assembly. I have built my software on Windows using g++, and while it does support 80 bit long doubles (128 bits are allocated for each long double), the resulting code is extremely slow.

      I find it strange that through all the enhancements to the x86 architecture the old stack based FPU has been retained. Additional instructions have been added to the CPU to support calculating 4 single precision or 2 double precision values at the same time, and the registers are 128 bits wide, but neither Intel nor AMD added support for 128 bit long doubles. I believe the Xeon Phi processors have 256 bit (and 512 bit?) registers for vector processing, but they still don’t support 128 bit values, and the 80 bit FPU has been removed.

      • Interesting. I was using math co-processors before Intel. At that time there were actually two: one for integers and the other for floating point. Your news that they’ve removed the 80 bit registers is yet another example of Intel’s long time misunderstanding of what would be useful to programmers. Did you know that they did the preliminary work on a string co-processor? They abandoned the project because they felt there was no need for it! I understand your remarks about the difficulty of working at the assembler level. In my case that is where I started coding: IBM 360/370 BAL, the 8080/8085/Z80, then 8086, etc. I was lucky enough to time it correctly so that C became available for 8 bit machines and later, of course, for 16/32 bit devices. Then of course C++; first Walter Bright’s work, then Borland, and, late to the party as usual, Microsoft. I have written low level code on virtually every machine I’ve used simply because it seemed to fit my abilities and interests. In some regard I’m more at home the lower I get 🙂 BTW, if you think the workload is large for a math function library, try it in 8080 code sometime. Like the Chinese curse, ‘may you live in interesting times’. The normal functions were fun enough, but later adding the complex versions was a real joy.

        Seems like you’ve got a handle on it, and I’m quite interested in your use of GCC. I’ve long been chained to VC because that is what my customers expect/demand. I’d like to hear more if you’ve the time…
