03-13-2022 10:25 AM - edited 03-13-2022 10:47 AM
Hello there, today I got two used Xeon X5680s to upgrade my home server based on HP Z800 motherboard. That's the only part that is related to HP Z800 though, as PSU, cooling and case are all non-HP. My previous setup was something like this:
- 2 x Xeon E5620
- 24GB ECC 1333MHz RAM (6x4GB)
- Z800 MoBo
- Nvidia Quadro 600
- 500W AeroCool power supply
- BIOS v 3.61
That setup worked flawlessly for more than a year. Today I installed the X5680s and the weirdness began. First of all, with just the two CPUs swapped and nothing else changed, the PC would reboot every time it reached the Windows startup screen (past POST, basically). I tinkered for a while and decided to check the CMOS battery voltage (which was fine); pulling the battery reset the BIOS settings to defaults. After that the PC booted up just fine and worked for around 5 minutes. Then I noticed that Hyper-Threading was not enabled, so I turned the PC off to enable it. With HT on, it started rebooting on the Windows startup screen again, so I disabled Hyper-Threading and decided to check the RAM sticks. One thing that struck me as weird on the first boot after the upgrade was that DIMM5 was reporting an error (something like "rq rs rds", can't remember the exact letters). I thought the CPU might be faulty, but it could just as well be a MoBo problem, so I moved that DIMM from slot 5 to slot 4 and the error never appeared again. However, I was getting inconsistent results with different memory configurations: it rebooted with all 6 DIMMs, failed with 5, 4, and 2 DIMMs, sometimes worked fine with 3 DIMMs, and currently it's running with a single DIMM in the CPU 0 DIMM 1 slot.
Currently I think the problem might be the PSU, because it's a cheap 500W unit, but I don't have a power meter to check how much the system draws from the wall. Another possibility is a faulty CPU. I'll try to visit a local server parts seller and test my hardware on their MoBos to see if something's broken. Meanwhile, I'd be happy to hear your thoughts and suggestions; maybe I'm just missing something? If you need additional info, just ask and I'll post it here.
Update #1: Just ran a CPU-Z bench; the CPUs were 100% loaded and nothing blacked out, so the PSU is less likely to be the culprit.
03-26-2022 10:20 AM
Also, a friend of mine had a flash of insight and suggested that a used CPU might have microcracks in its PCB that could widen under the pressure of the CPU cooler and lead to instability. On a whim I loosened one of the heatsinks, and now the system boots up fine. I don't know for sure that this was the direct cause of the faults, so this thread might not even be useful to anybody. Still, I consider the issue solved. Thanks again to everyone who participated.
03-13-2022 11:34 AM - edited 03-13-2022 11:38 AM
Was the system sitting unused for a while? I had random memory issues with my Z800 that were resolved by cleaning the DIMM contacts and then doing the same for the motherboard slots.
This product is not cheap, but it does work quite well at removing any contaminant film that can form on the memory modules and memory sockets over time.
PS: it also works on the CPU pads, and you can simply spray the cleaner onto the CPU socket pins (don't touch the pins with anything like a swab!! just spray from the bottle approx. 4-6 in. away from the socket) to coat the pins.
Last, the difference in power draw between the E5620 and the X5680 CPUs is quite large; your custom power supply/wiring setup may simply be unable to deliver the necessary current to the CPUs and the memory DIMMs.
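To put a rough number on that gap, here's a back-of-the-envelope calculation using Intel's published TDP figures (TDP is a thermal spec, not an exact draw, so treat this as an approximation):

```python
# Published TDPs for the two Westmere-EP Xeons involved
e5620_tdp = 80   # W per Xeon E5620
x5680_tdp = 130  # W per Xeon X5680

# Extra worst-case CPU power after the dual-CPU swap
extra = 2 * (x5680_tdp - e5620_tdp)
print(f"Extra CPU power at full load: {extra} W")  # 100 W more than before
```

An extra ~100 W at full load can matter on a marginal PSU, especially on the +12V EPS wiring feeding the CPUs.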
03-18-2022 07:25 AM - edited 03-18-2022 07:26 AM
Well, I messed with it some more and got even weirder results. First of all, I cleaned the contacts and inspected the pins on the sockets: no problems there. I also experimented with mounting pressure (I don't have the original HP CPU coolers, just random LGA1366-compatible ones, which isn't ideal, but the E5620s worked like a charm with them). That didn't seem to be the issue either, because even with moderately tightened screws the problem persisted. Then I switched from testing combinations of populated RAM slots to testing the CPU sockets themselves, booting in different configurations:
1. I tried booting with one CPU. It went very well: I managed to log into Windows with as many as 5 RAM sticks (the sixth slot is blocked by the cooler) and it ran stably.
2. Then I moved the CPU to the second socket and everything was fine as well.
3. When I added a second CPU and reduced RAM sticks to just two (each of DIMM1 slots, one for each CPU) it also was able to log into Windows.
4. Then I added two more RAM sticks and was back to square one. I could get to the login screen, but a few seconds after logging in the system shut off.
After that, everything failed. I kept the CPUs in the swapped order (the chip that was in socket 1 is now in socket 2, and vice versa), but couldn't get it to run stably even with one RAM stick. I also tried clearing the CMOS; that didn't work. I tinkered with the BIOS and tried "memory interleave" as well as "NUMA separate mode", still no luck. Currently I plan on doing two things:
- booting up with the old chips to see whether this is a MoBo failure;
- taking the CPUs to a local service shop to check whether they boot normally on other MoBos.
I'm still open to suggestions and won't close this thread yet, so thanks in advance for anything you post here. Cheers.
03-18-2022 02:28 PM
Oh, that's a great reminder. I considered that before too, but when I got it running on 1 RAM stick with two CPUs, I stress tested it and it kept working at 100% load. I doubt that having 6 sticks of RAM instead of 1 draws more extra power than a pair of 130W CPUs at full load, but I'll look into it.
03-18-2022 02:39 PM
Not all consumer ATX power supplies are "single rail" (although most are). Does your adapter supply power to the motherboard RAM sockets? It should.
If using a single-rail supply, you will need approx. 800 watts for a low-power dual-CPU setup, and 1000 watts for high-power dual CPUs with more than one HD and a mid/high-end GPU.
Last, as power supplies age, their ability to output their full rated load can and will decrease somewhat.
03-19-2022 04:33 AM
My PSU is indeed a single-rail one. Although it's rated at just 500W, its declared +12V power is 450W, so I don't think it's a PSU problem. My theoretical MAX power consumption with the X5680s would be around 353W with everything at full load.
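For reference, here's roughly how that ~353W estimate breaks down. The per-component numbers are assumptions taken from spec sheets (TDP and max board power), not measurements:

```python
# Rough worst-case power budget for this build (spec-sheet assumptions)
cpu  = 2 * 130   # two Xeon X5680s at 130 W TDP each
ram  = 6 * 5     # ~5 W per DDR3 RDIMM is a common planning figure
gpu  = 40        # Quadro 600 max board power
misc = 23        # allowance for drives, fans, board overhead (my guess)

total = cpu + ram + gpu + misc
print(f"Estimated full-load draw: {total} W vs 450 W on the +12V rail")
```

Even with generous rounding, the estimate sits well under the 450W +12V rating, which is why I keep discounting the PSU.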
By the way, I put my old E5620s back in and the system started without any problems. So the problem is either in X5680s themselves or in compatibility with my MoBo. My last resort is to check the CPUs at local service shop, because if the chips work fine on other machine, then it's purely a compatibility issue (or a PSU issue, that I'm neglecting so hard, which is what I'd test after I'm sure CPUs are not the problem).
P.S. I relied heavily on this article from 2014, where the author runs dual X5680s on a v2 Z800 MoBo. The only difference from my build seems to be the power supply, so it's a shame that it didn't just work 😕
03-19-2022 04:46 AM
Oof. I just noticed there's a missing capacitor on the back of one of the X5680s... I don't think I broke it off, since I'm generally careful with hardware, but I doubt the seller would accept a return anyway. I don't know if that's what kept causing the reboots, since I don't know which CPU was installed when the system ran fine.