07-15-2019 02:33 AM
Hello, I wanted to know if an Nvidia Tesla K60 or a Quadro K6000 can work in an HP Z8 G4 Workstation. If not, can you suggest a better graphics card?
Applications: Matlab and Excel for financial modeling and simulation, which require a lot of processing power.
I appreciate your help, forum!
07-15-2019 11:17 AM - edited 07-15-2019 11:51 AM
I've not heard of a Tesla K60; there is a K80 and an M60. Please post a link to its specification.
Teslas are, in effect, GPU compute accelerators and would have to be paired with a display GPU such as a Quadro. They are typically designed to be deployed in multiples within a multi-node server environment. Only a couple of Teslas can connect a display (the M60 can't), and as far as I know only the C2075 has both a display connector and a fan, so as to survive in a workstation.
The K6000 was a very capable compute GPU in its era: it's PCIe 3.0 and has 12GB of memory on a 384-bit bus, 2,880 CUDA cores, up to 4K resolution, and the ability to connect multiple monitors, making it a very good prospect for compute use. However, for 3D CAD use it is easily bettered by a newer Quadro P-series card at a reasonable cost.
I was interested to learn a couple of years ago that my local linear accelerator facility was designing (in Siemens NX) its 10m+ supercooled accelerator modules, calibrated to the low thousandths of a mm, using K6000's in Dell T3500's(!). They also used Tesla C2075's, one of the very few Teslas with a display output. The simulations were run in parallel on 11X 4-CPU Xeon servers, each with 4 Teslas, though I think those were K80's. Those simulations could run at 100% for a couple of weeks.
However, there have been advancements, and for MATLAB and financial simulation, in my view both the system RAM and the GPU RAM should have ECC error correction. Extrapolations of small, high-resolution values can change dramatically with a single-bit error: extrapolate 1.0738 along a curve for 10,000 iterations and see. The analogy is that a spacecraft course that is 0.05 degrees off means missing Mars altogether. If the GPU budget is "ample", we can take that conversation further.
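To make the single-bit-error point concrete, here is a minimal Python sketch of the idea (my own illustration, not MATLAB code): it flips one mantissa bit in the 1.0738 growth factor and compounds both versions. The choice of bit 40 and the 5,000-step loop are arbitrary assumptions, picked just to keep the numbers inside double-precision range:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 binary64 encoding flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

rate = 1.0738                   # the growth factor from the example above
bad_rate = flip_bit(rate, 40)   # a single flipped mantissa bit

good = bad = 1.0
for _ in range(5_000):          # compound both factors step by step
    good *= rate
    bad *= bad_rate

# One bit error in one operand shifts the extrapolation by a large
# relative factor after thousands of compounding steps.
print(f"relative error after 5,000 steps: {abs(bad - good) / good:.3f}")
```

The flipped bit changes the factor by only about 0.02%, but because the error compounds every iteration, the two trajectories end up differing by far more than the final answer itself would tolerate, which is the case for ECC memory in long simulations.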
Another possibility might be the AMD Vega Frontier Edition, as those have 64 nCU compute units, run up to 13.1 TFLOPS of peak single-precision FP, and offer 4,096 shader units (similar but not equal to CUDA cores), 16GB of high-bandwidth (HBM2) memory, support for 10-bit color, etc. The clock speeds are not high, but it is among the most compute-oriented GPU's designed during and for the cryptomining craze.
I think they were about $1,200 new but are inexpensive at the moment (some have sold for under $400), and while I can't verify it, at that price it seems worthwhile to look into having a pair, if the power and heat can be addressed. Their disadvantage is that they can draw over 300W, and the blower cooler would not be advisable in a long simulation run. There is, however, a factory liquid-cooled version, but I don't know the details of its installation; most workstations are resistant to that kind of modification. This is one to look into very carefully, but there might be rewards on several levels.
If your system is to run at 100% CPU utilization continuously, an open-loop cooler would allow the CPU(s) and GPU(s) to all be water-cooled. Consider the Alphacool Eiswand (= "Ice wall") external open-loop liquid cooler, which has 6 fans and a 360mm radiator:
And there are waterblocks for most higher-end GPU's to add to the loop. I have an Eiswand, still in the box, for the office z620, so as to run the E5-1680 v2 at 4.6GHz on all 8 cores. I bought the fan / radiator / pump / reservoir unit separately and upgraded to an all-copper (Alphacool) waterblock and connectors.
I recommend running Passmark Performance Test on the current system as a baseline to compare improvements.
HP z620_2 (2017) (R7) > Xeon E5-1680 v2 (8-core@ 4.3GHz) / z420 Liquid Cooling / 64GB DDR3-1866 ECC Reg / Quadro P2000 5GB _ GTX 1070 Ti 8GB / HP Z Turbo Drive M.2 256GB AHCI + Samsung 970 EVO M.2 NVMe 500GB + HGST 7K6000 4TB / Focusrite Scarlett 2i4 sound interface + 2X Mackie MR824 / 825W PSU /> HP OEM Windows 7 Prof.’l 64-bit > 2X Dell Ultrasharp U2715H (2560 X 1440)
[Passmark Rating = 6280 / CPU rating = 17178 / 2D = 819 / 3D = 12629 / Mem = 3002 / Disk = 13751 / Single Thread Mark = 2368] [10.23.18]
07-17-2019 05:44 AM
Thanks for your initial response; if I may, I'll add more details regarding my requirements.
I am looking to use an Nvidia Tesla K80 and a GTX 1060 in an HP Z8 G4 Workstation with a Xeon Gold processor and ECC memory. The Tesla will be used for Matlab simulations and the GTX 1060 for display purposes. Alternatively, if you think the K80 won't work, I am considering the K6000, although it is less preferred since it is less capable than the K80 in terms of calculations.
The advantage of the K80 and the K6000 is that both offer ECC support for calculations.
Does anyone know if this configuration will work, or whether the K80, the K6000, or both are incompatible with the workstation since they are older cards?
Power-wise I have more than 1000W, so that should be enough for both, but do you think I should consider extra cooling if the setup works? And what do you suggest for the CPU and GPUs?
Help is much appreciated.
07-17-2019 01:25 PM - edited 07-17-2019 01:31 PM
Thank you for the clarification.
Certainly the Tesla is excellent for this use, and it's typical to run a modest GPU for display purposes. However, it is preferred that the display GPU be a Quadro of the same GPU architecture, in this example Kepler, to avoid driver conflicts. However, since the K80 is a dual GPU with 24GB (more properly 2X 12GB) and 4,992 CUDA cores, and draws 300W+, the heat dissipation required would overwhelm a workstation environment. If you are open to a server environment with the appropriate air volume, that would work; certainly a high portion of Xeon Scalables are at home there.
In 2015, I decided to experiment with a Tesla M2090 in a Dell Precision T5500 (2X Xeon X5680 / 48GB RAM / Quadro K2200). A friend was running aerodynamic simulations (which are very calculation-intensive) in Excel for a device he was developing, and I thought I might lend him one of the office secondary systems with an M2090 in it. However, even the modest-spec M2090 ran very hot. As an experiment, I added a "casual" USB fan (39 CFM) immediately behind the lower front case fan:
The results were promising, but in sustained use, it was running at over 100C.
A more efficient external cooling solution was considered that would increase and direct the airflow:
However, my friend contracted someone to do it all in Matlab.
I believe that a K80 in the Z8 would require this kind of ad hoc solution.
In summary, if the system could be in a 4U server chassis, I'd say K80, hot-swap drives, and all the trimmings; but in a workstation, consider a K6000, or two if the PSU is 1000W+, which would yield 24GB of RAM and 5,760 CUDA cores. Set the fans to ramp up to a fairly high level. The calculation density of 2X K6000 might equal the K80, as they are both Kepler architecture.
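A quick back-of-the-envelope sketch in Python of both the compute comparison and the power budget. Every wattage and clock below is a nominal spec-sheet figure I'm assuming (the display-GPU and Xeon numbers especially), not a measurement; real sustained clocks and draw will differ:

```python
# Rough K80 vs. 2X K6000 comparison plus a 1000W PSU budget check.
# All cores/clocks/TDPs are nominal spec-sheet assumptions.

K6000 = {"cores": 2880, "clock_ghz": 0.90, "tdp_w": 225}    # per card, base clock
K80   = {"cores": 4992, "clock_ghz": 0.875, "tdp_w": 300}   # dual-GPU card, boost clock

def peak_sp_tflops(gpu):
    """Peak single precision = 2 FLOPs per FMA x cores x clock (GHz) / 1000."""
    return 2 * gpu["cores"] * gpu["clock_ghz"] / 1000

print(f"2X K6000: {2 * peak_sp_tflops(K6000):.1f} TFLOPS peak SP, {2 * K6000['tdp_w']}W")
print(f"1X K80:   {peak_sp_tflops(K80):.1f} TFLOPS peak SP, {K80['tdp_w']}W")

# Budget with 2X K6000, a GTX 1060 for display, and two mid-range Xeons;
# the 120W, 150W, and 100W figures are placeholder assumptions.
psu_w = 1000
load_w = 2 * K6000["tdp_w"] + 120 + 2 * 150 + 100
print(f"estimated peak load: {load_w}W of {psu_w}W")
```

On these nominal numbers the two-K6000 option is in the same peak-TFLOPS ballpark as a K80 but spends more watts for it, and an all-out load on a 1000W supply leaves little headroom, which is another argument for generous cooling and a larger PSU.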
PS: the GTX 1060 is a very capable GPU. I use one in a second system running a 40" 4K monitor, used for image editing and small-scale 3D modeling:
HP z420_3: (2015) (R11) Xeon E5-1650 v2 (6C@ 4.3GHz) / z420 Liquid Cooling / 64GB (8X 8GB) DDR3-1866 ECC registered / NVIDIA GeForce GTX 1060 6GB / Samsung 860 EVO 500GB + HGST 7L6000 4TB / ASUS Essence STX / Logitech z2300 2.1 / 600W PSU > Windows 7 Professional 64-bit (HP OEM) > Samsung 40" 4K
[Passmark System Rating = 5644 / CPU = 15293 / 2D = 847 / 3D = 10952 / Mem = 2993 / Disk = 4858 / Single Thread Mark = 2384] [6.27.19]
07-18-2019 01:27 AM - edited 07-18-2019 01:39 AM
Provided the 3D duties are not extensive and the cooling for the K80 is adequate, it should work. One confluence of 3D and Matlab is 3D graphing:
I'm not sure how much 3D power is used; possibly plenty of memory would be an advantage if the dataset is large and the resulting topology complex. A concern with the Quadro P620 would be having only 2GB of RAM. I've found the P2000 to be very competent: 5GB, 1,024 CUDA cores, single height, and only 75W, so it does not need a PCIe power connector. I think it's possible that the Tesla could go in the primary GPU slot and whichever single-height Quadro in the secondary slot at the bottom of the case. NVIDIA used to recommend that configuration; I'm not sure if it still applies. Most dual-CPU systems have a bit of a squeeze at the lower GPU position. See the photo of the M2090 in the T5500.
Some Tesla / Matlab users apparently run quite low-powered display GPU's, sometimes a Quadro NVS, which has almost no 3D power. Also, the choice would depend on how versatile the system needs to be. The other question is whether both GPU's should be the same architecture, as I've read. I tried a Kepler Quadro with a Maxwell Tesla, and except for the prohibitive temperature problem it worked, and the Cinebench R15 mark was reasonably good.
> What is the concept of the Tesla cooling solution in the Z8?
I wish I were more current with Tesla; it was about 4 years ago that I last considered one. Current GPU's are so powerful, and the Teslas' heat problems have shifted the GPU-computing equation. Consider a pair of Quadro RTX 4000's (my next GPU):
2X Quadro RTX 4000 =
- CUDA parallel-processing cores: 4,608
- NVIDIA Tensor cores: 576
- NVIDIA RT cores: 72
- GPU memory: 16 GB GDDR6
Single height, lower power, and a simpler installation and configuration: a single driver and no exotic cooling necessary. However, the RTX 4000 does not have ECC memory; the P5000 and P6000 do.