That's correct. Porting the C version of the app, based on the Linux variant, should be feasible, but note that the Linux, Mac OS and Windows versions use hand-optimized assembly code for the Intel x86 platform, so a straightforward port for HP-UX using only the generic C code would probably have quite disappointing performance even on modern hardware, let alone on legacy hardware.
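To make that concrete, here is a purely hypothetical sketch (not code from the actual E@H source tree) of how an optimized x86 path and a generic C fallback typically coexist in one code base; an HP-UX/PA-RISC build would only ever get the generic branch:

#include <stddef.h>
#if defined(__SSE__)
#include <xmmintrin.h>
#endif

/* Hypothetical example: same routine, two implementations.
   Only the plain C branch would be compiled on HP-UX / PA-RISC. */
float dot_product(const float *a, const float *b, size_t n)
{
#if defined(__SSE__)
    /* x86 builds: SIMD kernel, four floats per iteration */
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                         _mm_loadu_ps(b + i)));
    float partial[4];
    _mm_storeu_ps(partial, acc);
    float sum = partial[0] + partial[1] + partial[2] + partial[3];
    for (; i < n; i++)                 /* leftover elements */
        sum += a[i] * b[i];
    return sum;
#else
    /* generic C fallback: correct everywhere, but much slower */
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
#endif
}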
Happy crunching
Bikeman
I wouldn't be surprised if that HP 712 did better than expected even with a C version of E@H. One of the benefits of the RISC architecture was its superior number-crunching performance relative to the CISC architecture of the x86 family. That's why RISC held its position in the workstation market for so long, until the sheer volume of x86 chips and the advent of Linux pushed RISC aside.
You do have to be somewhat savvy though. I remember back in '92 or so taking an FFT routine that was a naive port of the textbook Cooley-Tukey algorithm and replacing it with something I found online that was optimized for RISC computers like the SPARCstation I had. I got a factor of *nine* speedup! Considering how much FFT processing our radar group did, it was a major advance! Similarly, I imagine a good deal of the E@H processing is FFT-like (the Arecibo pulsar search certainly must be); some web searching for an appropriate algorithm may be worthwhile, although I would like to think the E@H crowd looked for good C algorithms before they resorted to x86 extensions (MMX, SSE) or assembly code.
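For reference, the naive textbook version I started from looked roughly like this (a minimal radix-2 Cooley-Tukey sketch for illustration only, not the code we actually ran); the RISC-optimized replacements compute the same transform but restructure the memory access and twiddle-factor handling:

#include <complex.h>
#include <math.h>
#include <stdlib.h>

/* Naive recursive radix-2 Cooley-Tukey FFT, illustrative only.
   n must be a power of two; x and scratch each hold n samples. */
static void fft_rec(double complex *x, double complex *scratch, size_t n)
{
    if (n < 2)
        return;
    size_t half = n / 2;
    for (size_t i = 0; i < half; i++) {
        scratch[i]        = x[2 * i];       /* even-indexed samples */
        scratch[i + half] = x[2 * i + 1];   /* odd-indexed samples  */
    }
    fft_rec(scratch, x, half);              /* transform the evens */
    fft_rec(scratch + half, x, half);       /* transform the odds  */
    for (size_t k = 0; k < half; k++) {     /* butterflies + twiddles */
        double complex w = cexp(-2.0 * M_PI * I * (double)k / (double)n);
        x[k]        = scratch[k] + w * scratch[k + half];
        x[k + half] = scratch[k] - w * scratch[k + half];
    }
}

/* In-place FFT of x[0..n-1]; allocates the scratch buffer itself. */
void fft(double complex *x, size_t n)
{
    double complex *scratch = malloc(n * sizeof *scratch);
    if (scratch == NULL)
        return;
    fft_rec(x, scratch, n);
    free(scratch);
}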
Just a guess, YMMV.
"Better is the enemy of the good." - Voltaire (should be memorized by every requirements lead)
I compiled gcc 2.2, TeX 3.14 and GRASS GIS on a Bull/MIPS R6000 RISC around 1992 and cannot but agree with you. It was a fast box: the CPU was not a microprocessor but a processor board built from ECL logic, running UNIX System V with Berkeley extensions.
Tullio
I would not say that RISC was pushed aside; it was more like being assimilated. Starting with the Pentium Pro and comparable AMD CPUs, x86 processors kept the CISC instruction set for backward compatibility, but internally decode those instructions into a stream of RISC-like micro-instructions executed by a core that adopted many of the ideas of RISC.
Would the HP 712 be able to complete an S5R5 WU within the deadline with an app compiled from the generic C code? I think it would be close at best. There were even models of the 712 that had their floating point unit disabled, which would make it rather pointless to even try to run E@H on one.
CU
Bikeman
After compiling with GCC on my RISC, I was surprised by the different execution times depending on the optimization level I had chosen. On an x86 CPU it does not make much difference.
GCC did grow up in the Unix workstation environment before it got ported to x86, so I wouldn't be surprised if a number of its higher-level optimization stages are tuned to RISC architectures. I worked more with Sun CC than with gcc, but with that you'd compile with -g (debug) first, then once that was working you tried -O2 (most major optimizations), and if you wrote code that avoided dirty tricks with pointers and such you could go to -O4 and usually still have working code.
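If anyone wants to see the effect on their own machine, a trivial kernel like the one below (just an illustrative micro-benchmark, nothing to do with the E@H code) compiled at -O0, -O2 and -O3 and run under time makes the point; on a register-rich RISC box the spread between levels tends to be much wider than on x86, which matches what Tullio saw:

/* Tiny floating-point kernel for comparing optimization levels, e.g.:
     gcc -O0 bench.c -o bench && time ./bench
     gcc -O2 bench.c -o bench && time ./bench
     gcc -O3 bench.c -o bench && time ./bench */
#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    /* simple dependent divide-and-add chain; enough work to measure */
    for (long i = 1; i <= 50 * 1000 * 1000; i++)
        sum += (double)i / (double)(i + 1);
    /* print the result so the loop cannot be optimized away */
    printf("sum = %f\n", sum);
    return 0;
}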
"Better is the enemy of the good." - Voltaire (should be memorized by every requirements lead)
You can find it on the Einstein home page.
Tullio
Thanks Tullio, I didn't know that.
Bill