Did you check the kernel log with dmesg? Or any other logs?
PluMGMK wrote:À propos of nothing, I've been considering dumping VMWare altogether, and trying QEMU with KVM, and possibly passing my graphics card through to a Windows 10 virtual machine. It's already happened a few times that I've wanted to play something for which neither Wine nor Vista will cut it, and dual-booting is a pain in the proverbial. I know I tried something like that back in 2014, and failed rather miserably, but I've learned a lot more about Linux since then.
I succeeded! I bloody succeeded!

Thanks to this blog, I was able to get this set up with QEMU, KVM and LibVirt within the space of about seven hours. It would've been a lot quicker on a distro, since being on Linux from Scratch meant I had to compile QEMU and LibVirt myself, along with myriad dependencies!
Please note that to understand this post fully, you'll probably have to consult the blog I just mentioned! I found it quite accessible, though.

I figured the first thing I had to do was compile a kernel with the vfio-pci driver, so I waited for Linux 4.11 to come out, since I'd be compiling a new kernel for the occasion anyway. That was rather silly in hindsight: I could've at least compiled QEMU, LibVirt and their dependencies on Sunday, and had less work to do yesterday!

But anyway, I built that kernel, with the addition of the vfio-pci and i915 (Intel Graphics) drivers. I then rebooted, went into the BIOS (or UEFI, I guess) settings, and changed the main GPU from PCIe graphics to the integrated one. I also connected the VGA port on my motherboard to the secondary input of my monitor. The primary input is, and has always been, connected to the DVI port on my NVIDIA card.
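For the record, the kernel options involved are roughly these (a sketch based on the 4.x Kconfig names, not my exact .config, so double-check against your own tree):

[code]
# Relevant bits of the kernel .config (a sketch)
CONFIG_KVM=y
CONFIG_KVM_INTEL=y            # KVM itself, for an Intel CPU
CONFIG_INTEL_IOMMU=y          # VT-d / IOMMU support
CONFIG_IOMMU_SUPPORT=y
CONFIG_VFIO=y
CONFIG_VFIO_PCI=y             # the vfio-pci driver itself
CONFIG_VFIO_IOMMU_TYPE1=y
CONFIG_DRM_I915=y             # Intel integrated graphics for the host
[/code]

The IOMMU also has to be switched on at boot, with intel_iommu=on on the kernel command line.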
I noticed that I had two "firmware" options for my VM. One was a traditional BIOS, in which case I would need to use the kernel's VGA arbiter, and therefore apply an extra kernel patch to make that work with the Intel driver. The second option was UEFI, using an OVMF binary. I decided to go with the latter, and set about acquiring one of those binaries. The blog says that compiling one from source is a pain, and that it would be better to just get a prebuilt one. I decided I knew better, since the prebuilt ones come in RPMs, which I'm not set up to handle anyway!
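In LibVirt terms, the UEFI route ends up as something like this in the domain XML. The machine type and firmware paths here are just examples; they depend on where your OVMF files end up:

[code]
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- OVMF code is read-only flash; each VM gets its own copy of the variable store -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>
[/code]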

I set up the toolchain for compiling my OVMF binary, and went about doing it. It's supposed to be done with GCC 5 or earlier, but I was able to hack it to compile with my GCC 6.3!
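For anyone curious, the build itself goes roughly like this; the GCC 6 hack was mostly a matter of fiddling with the toolchain definitions, which I won't reproduce here, and given how my firmware turned out, maybe don't follow my example anyway!

[code]
git clone https://github.com/tianocore/edk2.git
cd edk2
make -C BaseTools          # build the EDK II build tools first
. edksetup.sh              # set up the build environment
build -a X64 -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc
# the resulting OVMF_CODE.fd and OVMF_VARS.fd end up under Build/OvmfX64/*/FV/
[/code]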
I went ahead and compiled QEMU and LibVirt and various dependencies, and tried out LibVirt's graphical Virtual Machine Manager. I was able to create a test VM, but it then failed to start the virtual network, complaining that it couldn't initialize a valid firewall backend. Since fixing that would require installing more dependencies, which could also screw about with the host network configuration (which I had built on the KISS principle), I decided to leave networking out for the time being.
After that, there were various other problems which led me to need to go back and compile more dependencies. Fine. Eventually I was able to start my test VM and look at it in the Virtual Machine Manager. But all I saw was black. I kept trying various things, but eventually I concluded that my GCC-6-compiled UEFI firmware was, in fact, braindead! I went back and installed a little tool to extract RPMs and downloaded a prebuilt OVMF binary. I changed the LibVirt settings to point to this new firmware, then recreated the VM from scratch. And… it booted! The first thing I tested it with was a CentOS ISO image, but once that worked, I was ready to try to get it to boot Windows!
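For anyone wondering about the mechanics of the firmware swap: rpm2cpio plus cpio will unpack the package, and then LibVirt just needs to be told where the files are. The package name and paths below are examples only:

[code]
rpm2cpio edk2-ovmf-*.rpm | cpio -idmv      # unpack the package in place

# then, in /etc/libvirt/qemu.conf, list the CODE:VARS pair so the
# Virtual Machine Manager can offer it as a UEFI firmware option:
# nvram = [ "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd" ]
[/code]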
Even though, as I said, dual-booting is a pain, my PC does have a hard drive with Windows 10 on it. I finally caved in and put it there last summer, when I bought the Solus Project and discovered that it couldn't run under Wine, and that VMWare's emulated GPU just wouldn't cut it! Anyway, it didn't take long to figure out how to attach this physical hard drive to my VM. This couldn't be done using the Virtual Machine Manager GUI, so I had to edit the VM's LibVirt XML file. No problem.
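The XML for the physical disk is only a few lines, something along these lines, with the source path obviously being whatever your Windows disk actually is:

[code]
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- use the stable /dev/disk/by-id/ name rather than /dev/sdX -->
  <source dev='/dev/disk/by-id/ata-EXAMPLE-DISK'/>
  <target dev='sda' bus='sata'/>
</disk>
[/code]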
I proceeded to try to boot Windows. It seemed to work, but it kept complaining of errors and going into recovery mode. At first I thought it was because the VM's firmware was only seeing the Recovery partition and not the main one, and I couldn't imagine how I'd crack that nut.* But then I tried the "Reset my PC" option, figuring it couldn't do that much harm. It informed me that the hard drive was locked. So I checked what partitions were mounted in Linux, and sure enough, there was the partition with my Steam library, which is on the same physical disk. I was quite alarmed, since I was sure I'd already unmounted it, knowing that attaching a mounted drive to a VM is very dangerous.
Anyway, I unmounted the partition, shut off the VM and started it again. This time, Windows booted! Great! The firmware and the OS were working!
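For future reference, something like this is enough to check before starting the VM (the device names are placeholders for my setup):

[code]
# is anything from the Windows disk still mounted on the host?
lsblk -o NAME,MOUNTPOINT /dev/sdb
# if so, let go of it before the VM touches the disk
umount /dev/sdb2
[/code]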
I figured that, before going for the big prize, namely passing through the graphics card, it would be best to figure out what was going on with the network. A bit of searching online revealed that for the firewall backend, it was sufficient to install iptables, ebtables and dnsmasq. I went ahead and compiled and installed those, but it still wasn't working. I needed to do a bit of debugging, including hacking the LibVirt source code itself, but I figured out that I needed to move the ebtables binary from /usr/sbin to /sbin, and that for LibVirt to find the dnsmasq binary, I needed to reconfigure, recompile and reinstall it. Once I did all that, I was able to set up a nice normal NAT virtual network!
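For anyone else stuck at this point: the NAT network itself is just LibVirt's stock "default" network, which looks roughly like this:

[code]
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
[/code]

Then virsh net-define default.xml, virsh net-autostart default and virsh net-start default bring it up; the net-start step is where iptables, ebtables and dnsmasq come in.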
I started up the VM again, confirmed that the network was usable from inside, briefly cursed the fragmentation of settings in Windows ≥ 8, and installed a TightVNC server. I then installed the TigerVNC client on Linux, and confirmed that I was able to use it to access the VM's user interface. Good. I'd definitely need that once the display was going directly to the NVIDIA card, rather than through the VM manager GUI.
I had been fretting earlier about how I'd go about assigning my NVIDIA card to the vfio-pci driver at boot-time. All the tutorials for doing it seemed to assume the use of a ramdisk, which I don't have since I'm on Linux from Scratch. There was a tutorial with a shell script on a Fedora wiki. I decided, what the hell, I'll just run those commands right now, and see if they work. Sure enough, they did! I was able to unbind the (unused) NVIDIA GPU from the nouveau driver, and all it did was cause the fan speed to change! No lockups, no kernel panics, no nothing! Seemingly the kernel's got more robust since some of these guides were written! So I was then able to just bind the NVIDIA card's GPU and audio device to the vfio-pci driver.
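In case it's useful to anyone, the commands boil down to roughly this. The PCI addresses are placeholders (check yours with lspci -nn), and remember the GPU and its HDMI audio are separate functions:

[code]
#!/bin/sh
# Rough sketch of the rebinding step; 01:00.0 / 01:00.1 are placeholders.
GPU=0000:01:00.0
AUDIO=0000:01:00.1

modprobe vfio-pci          # only needed if vfio-pci was built as a module

for dev in $GPU $AUDIO; do
    # detach from whatever currently owns it (nouveau, snd_hda_intel)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # let vfio-pci claim the device, then bind it explicitly
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
done
[/code]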
Now, before attaching the NVIDIA card to the VM, I made sure to go and edit the VM's XML file again, to hide KVM from the guest (the relevant snippet is below). This is because NVIDIA's Windows driver locks up when it detects KVM. Ostensibly it's an unintentional but WONTFIX bug, but it seems obvious that it's there to prevent people from using GeForces in this way when NVIDIA wants VM users to buy Quadros instead. But anyway, with the circumvention complete, I was able to attach the card to the VM, boot it, and switch the input on my monitor to DVI. And it worked! Well, actually, it booted up in 800×600 at first. It seemingly took the NVIDIA driver a few minutes to figure out that the card was there, but once it did, the VM operated in full HD! I guess the reason was that the PCIe bus position of the card in the VM was different from what it had been on the physical PC.
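As far as the XML goes, hiding KVM is just a few lines in the <features> section:

[code]
<features>
  <kvm>
    <hidden state='on'/>   <!-- hide the KVM signature from the guest -->
  </kvm>
</features>
[/code]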
I soon figured that the best way of interacting with this Windows system was to bring up TigerVNC on the host, put it in full screen, then switch the input on my monitor. Since the VNC window is full screen, and both systems are operating at 1920×1080, the keyboard and mouse are directed to exactly where I'd expect them to be on the "real" screen.
It's still not fully set up. Notably, I need to get the sound working. Currently the only audio device accessible to Windows, as far as I can tell, is the NVIDIA card's HDMI audio, which is obviously unsuitable since my monitor is connected via DVI. PulseAudio for Windows is apparently a thing, but when I see the words "Windows XP" mentioned on the site, I don't feel very encouraged. I think Jack is probably the best way to go; I read about using it for this purpose back in 2014. The other thing I want to do is write a shell script that automatically unmounts the partition I mentioned above, rebinds the graphics card to the VFIO driver, starts the VM and launches TigerVNC. That's about half-done, and shouldn't be too difficult to complete.
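For what it's worth, the plan for that script is roughly this. The mount point, PCI addresses, domain name and guest IP are all placeholders for my own setup:

[code]
#!/bin/sh
# Sketch of the launcher script described above; all names are placeholders.
set -e

STEAM_MNT=/mnt/steam            # partition on the same physical disk as Windows
VM=win10                        # LibVirt domain name
GUEST=192.168.122.100           # whatever address dnsmasq hands the guest

# 1. Make sure the host has let go of the shared disk
if mountpoint -q "$STEAM_MNT"; then
    umount "$STEAM_MNT"
fi

# 2. Hand the NVIDIA card (GPU + HDMI audio) over to vfio-pci
for dev in 0000:01:00.0 0000:01:00.1; do
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
done

# 3. Start the VM, give TightVNC a moment to come up, then go full screen
virsh start "$VM"
sleep 60                        # crude, but good enough for now
vncviewer -FullScreen "$GUEST":0
[/code]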
So right now, I've got Windows with access to my high-end graphics card, and Linux running on my integrated Intel GPU. This isn't ideal. I would prefer two dedicated GPUs, one for Windows and one for Linux. I had a separate card before, from when I tried (and failed) to do this in 2014, but it was low-end due to my misunderstanding of AMD's (then) new GPU naming scheme, and I can't seem to figure out where it is now (probably in an anti-static bag in a box somewhere, but which box?). The other problem with this is that my motherboard actually runs both PCIe slots from the CPU root port, which has no ACS isolation. Therefore if I installed two graphics cards they'd be in the same IOMMU group, and so it would be impossible to assign one to the VM without the other, which of course would be pointless!

It's possible to cheat and patch the kernel to force them into separate groups, but that's not really good practice.

I'll keep looking into it anyway. There's at least one old GPU lying around at home!
*Why have I a Recovery partition, you ask? Well, this Windows 10 installation was actually, er, copied from a hard drive in a laptop that my father has, but doesn't use anymore. So this particular copy of Windows has been moved around quite a lot!