In this article, we describe how an ordinary Windows 7 installation can be converted to boot from iSCSI. We cover the particularities of the Windows network boot process and elaborate on the differences from a normal boot. We then describe our solution, which uses a few registry modifications.
As pointed out in the last article, a Windows 7 installation that was installed on a locally attached disk cannot be booted from iSCSI directly. The reason lies in the Windows boot process, which needs to be configured to load the network driver at a very early stage in order to keep the connection to the iSCSI server alive.
But how can Windows load the network driver via network, if it doesn’t already have the connection? To untangle this apparent chicken and egg problem, we will have a closer look at the boot process in general, the involvement of the registry, and a method to enable the installation to boot via network.
Early (Network) Boot Process
If you recall the iPXE config file used to boot Windows from network, you will see that we hooked a drive to point to our iSCSI target location:
sanhook --drive 0x80 <target>
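For context, a minimal iPXE script around that command could look as follows. This is a sketch; the iSCSI server address and target IQN are placeholders that must be replaced with your own setup:

```shell
#!ipxe
# Obtain an address so the iSCSI connection can be established
dhcp
# Register the iSCSI target as BIOS drive 0x80 (the "first hard disk");
# server IP and IQN below are placeholder values
sanhook --drive 0x80 iscsi:192.168.1.10::::iqn.2010-04.org.example:windows7
# Hand control to the boot loader found on that drive
sanboot --drive 0x80
```

With `sanhook` in place, firmware disk reads for drive 0x80 are transparently served from the iSCSI target.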
This hints at how the first egg is hatched without any chicken present: with the help of another breed of egg-laying parent. In this case, the very first few files are transferred with support from the firmware. The operating system has not loaded any drivers yet, so it uses firmware calls to retrieve the most important files from the storage media. Usually, this would be the first hard disk in the system, which contains the boot loader code, which in turn loads further essential files. In our case, we have instructed iPXE to redirect these calls to the iSCSI target using the sanhook command (see the iPXE command reference) and to make it appear as though the files had come from a local hard drive.
Additionally, iPXE sets up the iBFT (we mentioned this in the Installation section of the previous article) to let the operating system know about the network-attached nature of the boot media as soon as it has booted far enough to be able to handle this information.
In order to investigate this further and as preparation for the next steps, we first need a suitable test setup. A construct that proved very useful here is to have two VMs in qemu:
- One with no disk, booting from iSCSI via iPXE, with one specific NIC model (e.g., virtio)
- One with the file backing the iSCSI target attached as normal storage, and another NIC model (e.g., e1000)
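The two VMs can be sketched as qemu invocations like the following. The image path, memory size, and network backends are placeholders for your own environment, and the iSCSI/iPXE details from above are assumed to be in place for the first VM:

```shell
# VM 1: no disk, virtio NIC, boots from the network ROM (iPXE),
# which then hooks the iSCSI target as its boot drive
qemu-system-x86_64 -m 2048 \
  -netdev user,id=n0 -device virtio-net-pci,netdev=n0 \
  -boot n

# VM 2: the file backing the iSCSI target attached as a local IDE disk,
# with an e1000 NIC so driver installations do not overlap with VM 1
qemu-system-x86_64 -m 2048 \
  -drive file=win7.img,format=raw,if=ide \
  -netdev user,id=n1 -device e1000,netdev=n1
```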
This allows us to quickly switch between a Windows that boots from disk (where we can make modifications) and the iSCSI boot candidate. To prevent driver installation from tampering with the results, the VMs use different kinds of network adapters.
Now, to get a better understanding of the boot process, we will find out what files Windows loads with the help of the firmware before taking over driving the devices. To do this, we observe the debug output of a failing boot attempt.
Assuming we have installed Windows on the disk VM, we then try to boot the same image via iSCSI in the second VM. We know this will not work, and we are bound to run into the 0x7B bluescreen. But what we can then do here is boot into “Safe Mode with Networking”. This will print all the files that are being loaded into memory first:
After a number of other files, the boot process stops abruptly. A bit later, the bluescreen appears. So this point in the boot procedure must be the crucial step in which the iSCSI connection is lost. At this moment, Windows takes over from the firmware to drive all devices on its own. You can even see that a driver named msiscsi.sys is loaded, which indicates that iSCSI support in general is already present at this stage. But because Windows does not have the driver for the new network adapter installed, it fails to re-establish the iSCSI connection after the firmware support was cut off.
First attempt: Installed driver
It seems obvious that Windows cannot boot from a network-attached device if it is unable to drive the network adapter. The simplest solution to this would be to have the network driver installed before attempting to boot from network. It is easy enough to simulate that in our qemu setup: we just need to swap out the virtio device for one that is natively supported by Windows 7. One such device is the generic Intel 82540EM (the e1000 model in qemu).
But even in this configuration, Windows is unable to boot from network, although the driver was indeed loaded, as can be verified by booting into Safe Mode. Unfortunately, simply loading the driver file is not enough; the driver has to be initialized and configured as well. In a standard installation, Windows only does this for devices it considers critical for booting, i.e., locally attached storage such as AHCI disks.
But then again, as we know, Windows is able to boot from an iSCSI disk if it was installed directly onto it. What is the difference? What convinces Windows to consider the network card critical?
The answer lies in the registry, in a structure called Critical Device Database.
The Critical Device Database (CDDB)
As the name suggests, this database holds entries for devices that are critical to the Windows boot process. Normally, these are mainly general hardware drivers (e.g., for the PCI subsystem) and locally attached storage. We can inspect this structure in an installed Windows with the registry editor regedit; it resides under the CriticalDeviceDatabase key.
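Assuming the standard location of the database under the SYSTEM hive, its entries can also be listed from an elevated command prompt:

```shell
:: List the Critical Device Database entries; CurrentControlSet
:: resolves to the control set used for the current boot.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\CriticalDeviceDatabase"
```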
Now, we inspect the same structure in an iSCSI installation. We find that there is an entry for the network card, which is not present in the local disk installation. Let’s take a closer look at the contents of this entry:
The entry for a critical device is keyed by its hardware ID, here PCI#VEN_8086&DEV_100E. In our case, this is a PCI vendor/device combination: the vendor 0x8086 means Intel, and the device 0x100e is the network adapter 82540EM Gigabit Ethernet Controller. This entire entry was created during the installation onto the iSCSI disk, and it is the important difference from a normal installation.
This registry key contains all the information needed to enumerate and initialize the driver during boot, allowing Windows to re-establish the connection to the iSCSI server after taking over control from the firmware.
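As a sketch, a minimal entry of this shape could be created by hand with reg add. Note the assumptions: the service name E1G60 is a hypothetical example and must match the name of the driver service actually installed on the system, while the GUID is the standard Windows device class for network adapters:

```shell
:: Sketch of a minimal CDDB entry for the Intel 82540EM. The backslashes
:: of the hardware ID are replaced by "#" in the key name; "&" must be
:: escaped as "^&" in cmd. "E1G60" is an ASSUMED service name.
set KEY=HKLM\SYSTEM\CurrentControlSet\Control\CriticalDeviceDatabase\pci#ven_8086^&dev_100e
reg add "%KEY%" /v Service /t REG_SZ /d E1G60 /f
:: {4d36e972-e325-11ce-bfc1-08002be10318} is the "Network adapter" device class
reg add "%KEY%" /v ClassGUID /t REG_SZ /d {4d36e972-e325-11ce-bfc1-08002be10318} /f
```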
Converting a local disk installation
Now that we have found the crucial element of an iSCSI installation, why not just take the CDDB entry from a working iSCSI installation and apply it to the fresh disk installation? As it turns out, this actually does the trick! We can just export the subtree in a running iSCSI instance, boot the new disk installation, and import the registry file.
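The export/import round trip can be done with reg.exe; the key and file names below are examples for the 82540EM case:

```shell
:: On the running iSCSI installation: export the CDDB entry for the NIC.
reg export "HKLM\SYSTEM\CurrentControlSet\Control\CriticalDeviceDatabase\pci#ven_8086^&dev_100e" cddb-e1000.reg
:: On the local-disk installation: import it before attempting the iSCSI boot.
reg import cddb-e1000.reg
```

Alternatively, the same subtree can be exported interactively from regedit as a .reg file and imported by double-clicking it on the target installation.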
For drivers that are included in Windows out of the box, this is all you need to successfully boot from iSCSI afterwards. Other drivers need to be installed before booting from the network disk.
We analyzed the boot process of Windows and found the crucial ingredient for a successful iSCSI boot, the CDDB. After transferring the necessary CDDB entry for the network adapter to the fresh disk installation, we were able to boot from iSCSI subsequently. We also briefly covered the possibility of enabling network adapters that are not supported by Windows out of the box.
Obviously, the method described in this post is quite cumbersome and limited in its applicability. It can be used to convert a local installation to be iSCSI-bootable, but the requirement of a previous complete iSCSI installation and the necessity to boot the disk image while the actual network hardware is present make it both time-consuming and of limited practical use.
In the next article, we will dive deeper into the Windows registry, and devise a method that only relies on the driver package, a Windows PE boot to obtain some important information, and a Python script to transform all this into a registry update that will enable a Windows installation to boot from iSCSI on potentially multiple different hardware configurations.