While using a VM for one of my Coursera courses (The Hardware/Software Interface) today, I noticed some bizarre corruption issues when creating new VDI drive images on a second drive.
It seems that making significant changes to the VM was somehow corrupting the VDI files, and the VMs would then refuse to start: one processor would go to 100% utilization, and the VirtualBox splash screen would never appear when starting the VM.
At first, I assumed it was an issue with the VM image I had downloaded. After all, it was a VMware image that I had converted from the VMDK format to VDI. Later, however, I tried creating a brand new VM with a new disk image and found it doing the same thing. When I then tried creating the disk image on a different drive, the problem disappeared.
I ran a btrfs scrub on the problem drive, and it turned up no issues… and, other than this problem, I’ve not noticed any issue with the drive. I’ve had VM drive images on the volume for a long time, so I’m at a loss as to the cause, unless it is a btrfs change in the 3.16 kernel, to which I recently upgraded (when it hit the Ubuntu Utopic repos).
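For anyone wanting to run the same check, a scrub on a mounted btrfs volume looks roughly like this (the mount point here is just an example; substitute your own):

```shell
# Kick off a scrub on the mounted btrfs filesystem (example mount point)
sudo btrfs scrub start /mnt/data

# Once it finishes, check the results for any checksum or read errors
sudo btrfs scrub status /mnt/data
```

Note that a scrub verifies data against stored checksums, so it catches on-disk corruption but not necessarily problems introduced at write time through a misbehaving I/O path.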
[UPDATE]: Turns out others have seen this problem too. Some searches turned up a workaround: on the virtual SATA controller, enable the use of the host I/O cache.
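The same setting can also be flipped from the command line with VBoxManage instead of the GUI. A sketch, assuming a VM named "MyVM" with a controller named "SATA" (both are example names; check yours first):

```shell
# List the VM's storage controllers to find the right controller name
# ("MyVM" is an example VM name)
VBoxManage showvminfo "MyVM" | grep -i storage

# Enable the host I/O cache on that controller (VM must be powered off)
VBoxManage storagectl "MyVM" --name "SATA" --hostiocache on
```

With the host I/O cache enabled, guest writes go through the host page cache rather than hitting the file directly, which appears to sidestep whatever interaction is corrupting the VDI files on btrfs.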