Unfortunately, the hard drive running Windows on my dual 580 system crashed a few days ago. Since then I have switched to a PXE boot of Linux until I can get a replacement disk, which will probably be an SSD this time around.
If you're already set up for diskless operation with Linux and PXE, have you tried booting Windows diskless as well? Although it's pretty limited compared to Linux (e.g. no shared /usr), it means you can create as many different copies as you like (or have disk space for :) While I pretty much only run Linux, I did try BOINC on a diskless Windows XP box and it worked fine.
Can you please recheck your list? The HD5850 value for 2 WUs seems to contain bad data; it should be less than 3600.
Are there any numbers available for the new HD7870XT?
Additional info for the A8 APU: 2 WUs --> blue screen under Win7 and Win8, not enough video RAM.
I wonder if that has any effect on PCIe 2.0 platforms that NVIDIA only let run at PCIe 1.0 speeds (e.g. Intel X38). Certainly the 295.xx drivers only run at PCIe 1.0 speed on some Intel PCIe 2.0 platforms (you can check by running the settings program and looking under the 'PowerMizer' link - 2.5GT/s for PCIe 1.0, 5.0GT/s for PCIe 2.0 and presumably 10GT/s for PCIe 3.0).
Thanks for pointing it out, I'll give it a go!
I am not too sure about the Intel X38 and PCI-E 2.0 support, although I recall NVIDIA placing a similar limitation back then, as they are doing now for LGA2011, because Intel did not officially support the new PCI-E spec. It may be worth trying this option to see if it works on the older platforms that NVIDIA also restricted.
The PCI-E 3.0 spec actually runs at 8 GT/s. However, thanks to its more efficient encoding (128b/130b instead of 8b/10b), PCI-E 3.0 should still allow approximately double the bandwidth of PCI-E 2.0. NVIDIA has a bandwidth test program called bandwidthTest and AMD one called bufferBandwidth, both of which can be used to measure bandwidth on the bus. When I ran the bandwidth test on each card type with PCI-E 3.0, the results were approximately double those of PCI-E 2.0.
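For anyone who wants a quick check without pulling in the full SDK samples, a stripped-down host-to-device measurement along the same lines as bandwidthTest might look like the sketch below (the buffer size and iteration count are arbitrary choices, and pinned memory is used so the copy is not limited by pageable-memory overhead):
[code]
/* Minimal host-to-device bandwidth check using the CUDA runtime API.
 * Compile with nvcc; swap the cudaMemcpy direction for device-to-host. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 64 << 20;   /* 64 MiB per transfer */
    const int    iters = 20;
    void *host, *dev;
    cudaEvent_t start, stop;
    float ms = 0.0f;
    int i;

    cudaMallocHost(&host, bytes);    /* pinned host buffer */
    cudaMalloc(&dev, bytes);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (i = 0; i < iters; i++)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);

    /* total bytes moved divided by elapsed time, reported in GB/s */
    printf("Host->Device: %.2f GB/s\n",
           (double)bytes * iters / (ms / 1000.0) / 1e9);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
[/code]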
I have just installed an Nvidia GeForce GT610, and it does one BRP4 work unit at a time, in about 10,950 seconds.
I have not looked into the possibility of running Windows diskless yet. I will see if I can find some info on setting that up. The fewer hard disks I have lying around, the better. :)
On the Linux side, the image I made for running BOINC stores the data on NFS and uses a file system called aufs to overlay NFS with the file system running in memory, so that my data is preserved across reboots.
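A minimal sketch of how that kind of union can be mounted through mount(2) is below; the paths are hypothetical, and it assumes the NFS branch is the writable layer over the read-only in-memory tree, which may differ from the actual image layout:
[code]
/* Sketch only: union-mount a writable NFS branch over a read-only
 * in-memory BOINC directory with aufs so task data survives a reboot
 * of the diskless client. Requires root and aufs support in the
 * running kernel; all paths here are made up for illustration. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    const char *opts = "br=/mnt/nfs/boinc=rw:/var/lib/boinc-ram=ro";

    if (mount("none", "/var/lib/boinc", "aufs", 0, opts) != 0) {
        perror("mount aufs");
        return 1;
    }
    printf("aufs union mounted on /var/lib/boinc\n");
    return 0;
}
[/code]
In practice the same thing would normally be done with a mount -t aufs line in the initramfs or fstab rather than from a program.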
The main reason I was running Windows was so that I could also run Seti Astropulse on my GPUs when available. Unfortunately, there is no GPU AP application for Linux yet. However, I may just stick with Linux on this system and set up another dedicated system for Seti AP.
The two approaches I've used are AoE and iSCSI. Both are supported by gPXE/iPXE, and there's a useful page guiding you through the boot setup.
I originally used the AoE approach as it's much simpler (a Linux server running 'vblade'), but on certain operations the disk I/O performance was lousy. A better AoE server might have helped, but I then tried iSCSI, which is more complex to set up but gives much better performance.
Here each client has its own root directory on the NFS server, but they all mount the same shared /usr. I have thought about running a filesystem from RAM for BOINC-specific clients, so I will have to look at aufs.
NVIDIA just recently announced a new consumer grade card, the Titan:
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan/specifications
This will be the first consumer grade card with FP64 performance similar to its Tesla counterpart, the K20x, while costing a fraction of the price. An option has been added to the NVIDIA control panel to enable full FP64 performance at 1/3 of the FP32 rate, at the expense of a lower clock frequency. Granted, the FP64 improvements will not help Einstein@home, but they should come in handy for a project like Milkyway@home.
The card has 2688 cores (14 active SMX units) with a turbo boost up to 876 MHz and overclocking support. It should be interesting to see how this card performs with BRP4. Estimated compute performance for FP32 is 4.7 TFLOPS and for FP64 is 1.3 TFLOPS.
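As a rough sanity check on those estimates, assuming each CUDA core does two floating-point operations per clock (one fused multiply-add):
\[ \text{FP32: } 2688 \times 2 \times 0.876\ \text{GHz} \approx 4.7\ \text{TFLOPS} \]
\[ \text{FP64 at } 1/3 \text{ rate: } 4.7 / 3 \approx 1.6\ \text{TFLOPS at the boost clock} \]
The quoted 1.3 TFLOPS FP64 figure is lower than the straight 1/3 ratio would give, which is consistent with the card dropping to a lower clock when full-rate FP64 is enabled.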
...and unfortunately it'll still be far out of most crunchers' reach w/ its $1,000 price tag... guess most of us will just have to wait until the next generation of cards appears and prices on the previous-gen stuff start to fall...
Updated list after a very long time ^^
[LINK]http://www.dskag.at/images/Research/EinsteinGPUperformancelist.pdf[/LINK]
Thx for the new values & for pointing out some typos :)
DSKAG Austria Research Team: [LINK]http://www.research.dskag.at[/LINK]
Very useful, thanks! Here's an example of the Titan running 5x:
http://einsteinathome.org/host/6889477/tasks&offset=0&show_names=1&state=3&appid=0