Azure Temp Storage and Page File Review
If you do any work with Azure Virtual Machines, you have probably run into VM SKUs that contain a temporary storage disk. These are VMs with a “d” in the name (e.g. – D4ds). Many people don’t know what they’re for or why they are even an option. As a result, I’ve seen many admins simply avoid using them. Sometimes this doesn’t matter, but depending on your use case, the temporary disk has advantages. For example, I use them in all my new AVD deployments. Before getting into the real reason for this blog post, let’s quickly take a look at what the temporary storage is and what it can be used for.
Temporary storage is non-persistent (or ephemeral) storage on an Azure VM that is located locally on that hypervisor host in the Azure datacenter. On the other hand, managed storage (think premium SSDs) attached to a VM is not locally connected to the host server itself. Because the temporary storage disks are locally attached to the host, they have latency and throughput advantages (sometimes MASSIVE advantages) over managed disk storage. The disadvantage of these disks is that they are ephemeral storage, so any data on these disks does not persist after the VM has been deallocated.
This makes temporary disks ideal for workloads that benefit from low-latency or high-throughput operations and don’t require data persistence through reboots. For example, the Windows paging file.
What is the page file (aka paging file)? The primary use of the page file is to act as an extension of the computer’s memory, but it resides on a disk. When memory utilization increases, Windows will offload least-used or inactive memory pages from RAM to the page file (also referred to as virtual memory in Windows) to free up RAM for more demanding tasks. If contents from the page file are called upon, they are loaded back into RAM, and other data may be swapped to the page file. Therefore, when the page file resides on a high-performance disk, you will experience improved performance in situations with high memory utilization. There is a hidden pagefile.sys file on the root of the drive where your page file resides. If desired, you can have multiple page files that reside on different drives.
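As a quick aside (not specific to this post’s script), you can inspect a machine’s current page file configuration with PowerShell:

```powershell
# Configured page files (returns nothing when "Automatically manage paging file size" is enabled)
Get-CimInstance -ClassName Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize

# Page file(s) actually in use right now, including current and peak usage (MB)
Get-CimInstance -ClassName Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage

# Whether Windows is managing the page file automatically
(Get-CimInstance -ClassName Win32_ComputerSystem).AutomaticManagedPagefile
```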
If you don’t have a page file, you run the risk of users receiving errors like this:

Performance difference – Managed Disk vs. Temp Storage
The Azure v6 VMs have high-performing NVMe direct-attached storage for their temp disks. In addition to the temp disk performance benefit, the v6 VMs use the latest CPU generations from Intel/AMD; Microsoft states up to 30% higher performance per core vs. previous-generation VMs. To give you an idea of the performance benefits of the v6 temp storage, I ran some benchmarks comparing a 128 GB P10 managed disk (common for an OS drive), the temp storage on a D8ds_v6 VM, and the temp storage on a D8ds_v5 VM. The results are clear. The v5 temp storage is around 4x faster than the P10 managed disk, but the v6 NVMe storage is up to 15x faster than the P10. I’ve included the complete results below for reference, but you get the point – these disks are significantly faster than managed disks and previous generations’ temp storage.
70% Read and 30% write test: (diskspd.exe -c1G -d30 -b4K -r -w30 -o32 -t4 -Sh D:\diskspd_test.dat)
128 GB P10:

V6 D: Temp Storage:

V5 D: Temp Storage:

Random 4k Read (diskspd.exe -c1G -d30 -b4K -r -o32 -t4 -Sh C:\diskspd_test.dat):
128 GB P10:

V6 D: Temp Storage:

V5 D: Temp Storage:

What’s interesting, and a change with the v6 SKUs, is how the temporary storage is presented. In v5 and previous generations, the temporary storage was presented to the VM as a formatted D: drive on every boot. Therefore, for something like the page file, you could set it to the D drive and it would always reside there, even after the machine was deallocated and started. However, Microsoft changed this with the v6 SKU VMs. The disks are presented to the VM as RAW, uninitialized, unformatted disks every time the VM starts from a deallocated state (NVMe – Temp NVMe Disks FAQ – Azure Virtual Machines | Microsoft Learn). Since AVD hosts usually have pretty volatile workloads, I always prefer to use “d” family VMs and use the temp storage for the page file.
The issue with pooled AVD hosts and v6 VMs
Now that we have reviewed the Azure VM sizes, performance, temp storage, and page file, we can move on to the issue. When we use a v6 SKU VM, as previously mentioned, the temp storage is not formatted when the host comes out of a deallocated state. It is always presented as a raw disk and will show up in Disk Management like the image below. This presents an issue for us, because pooled AVD hosts almost always use a scaling plan where machines are automatically deallocated and started throughout the day.

So, if you manually initialize the temp storage in Disk Management, format it, and set the page file to use it, users will get the error shown below when they sign in after the machine is deallocated and then started (which is how AVD scaling plans work). The page file configuration in Windows is looking for D: (or whatever letter was specified), but that drive can’t be found.

What ends up happening in this situation is that a fallback page file is created on the system disk, which we don’t want in this case. The whole reason for using the “d” family VMs in my situation was to get access to the higher-performing temp storage and have the paging file use it. In addition, we don’t want the pagefile.sys file taking up space on our system disk.
The Fix for the Problem
We know what the issue is, but the fix isn’t as simple as just initializing and formatting the disk and then setting the page file to use it. This is because of how the paging file is initialized in Windows. For example, we can script the initialization and formatting of the disk, and we can also script the required registry changes to set the page file to use D:. However, all of this happens after Windows has started, which doesn’t completely fix the issue.
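For reference, the initialization and formatting piece can be scripted roughly like this. This is a minimal sketch (not the script from this post) and assumes the temp disk is the only RAW disk on the VM:

```powershell
# Find the uninitialized (RAW) temp storage disk - assumes it's the only RAW disk on the VM
$rawDisk = Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1

if ($rawDisk) {
    # Initialize the disk, create one partition as D:, and format it NTFS
    Initialize-Disk -Number $rawDisk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $rawDisk.Number -UseMaximumSize -DriveLetter D |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'TempStorage' -Confirm:$false
}
```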
There are two primary registry values that control the page file. They are located in HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management and are named ExistingPageFiles and PagingFiles. PagingFiles is what Windows looks at when it starts up, and ExistingPageFiles is what is currently in use. In the screenshot below, we can see that I scripted PagingFiles to use D:\pagefile.sys 0 0 (the 0 0 means system managed), but since D did not exist when the OS started, it fell back to using \??\C:\pagefile.sys.

This poses an issue, because the pagefile.sys file, which is necessary for the OS to start paging on a drive, won’t be generated on D: since that drive doesn’t exist when Windows starts. Even if we try changing ExistingPageFiles to use d:\pagefile.sys, WMI will see this, but the OS will still be using c:\pagefile.sys since a pagefile.sys file was never generated on the D drive, and C was set by Windows at startup:

After playing around some more, I found that the location set at Windows startup is what is initially used. If we open the system properties and manually set the page file settings, a pagefile.sys file is generated on the desired drive, but the initial pagefile.sys file isn’t removed, so Windows will continue to use both locations. In addition, whatever happens in the background when clicking the ‘Set’ button in the image below, I couldn’t replicate it with PowerShell. You can see in that image that our reg values are correct and a pagefile.sys file exists on D, but C is also still present since it was used at startup. Since we want this automated, this isn’t a feasible solution anyway, but it shows that the reg values need to be in place when Windows starts for our page file settings and behavior to be exactly as we want them.

After a bunch of testing and R&D, I finally settled on a script that does what we need if deployed as a scheduled task on the VM that runs at startup. The key to this is understanding that as long as the VM in Azure is not deallocated, it will retain the temporary storage volume’s initialization, format, and drive letter. When a machine is stopped, it’s not the same as being deallocated. When the machine is deallocated, you’re no longer consuming the compute or local temp storage for that VM. In other words, restarting an Azure VM at the OS level does not deallocate the machine. An AVD scaling plan deallocates a VM and starts it from the deallocated state as session hosts scale up and down. The script is available on GitHub and essentially does this (a condensed sketch follows the list):
- Checks for a page file on D: or an existing D: drive. If either exists, it exits.
- Identifies the temp storage volume; exits if the temp storage drive can’t be identified.
- Initializes, partitions, and formats the temp storage volume.
- Sets the reg values appropriately so the page file uses D:.
- Checks the in-use page file again to verify it isn’t already set to D:; if it is, the script exits.
- Verifies the reg values are set the way we want. Reboots if the reg value check passes; exits if it doesn’t.
- The host reboots before all of the services required for AVD to accept connections have started, so there is no risk of users connecting and then the system rebooting on them.
- When the host reboots, the script runs again at startup, but it should see that there’s a D: drive and that the page file exists on D:, then exit.
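Here is a condensed, simplified sketch of that flow – not the actual script from GitHub – assuming D: is the target drive letter and the temp disk is the only RAW disk on the VM; the real script adds logging, the boot loop guard described below, and more thorough checks:

```powershell
$memMgmt = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'

# 1. If a D: drive or a page file on D: already exists, there's nothing to do
if ((Get-PSDrive -Name D -ErrorAction SilentlyContinue) -or (Test-Path 'D:\pagefile.sys')) { exit }

# 2. Identify the temp storage volume; exit if it can't be found (assumed to be the only RAW disk)
$rawDisk = Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1
if (-not $rawDisk) { exit }

# 3. Initialize, partition, and format it as D:
Initialize-Disk -Number $rawDisk.Number -PartitionStyle GPT
New-Partition -DiskNumber $rawDisk.Number -UseMaximumSize -DriveLetter D |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'TempStorage' -Confirm:$false | Out-Null

# 4. Point the page file at D: with a system-managed size ("0 0"). Automatic page file
#    management is turned off first so Windows doesn't overwrite the PagingFiles value.
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
if ($cs.AutomaticManagedPagefile) {
    Set-CimInstance -InputObject $cs -Property @{ AutomaticManagedPagefile = $false }
}
Set-ItemProperty -Path $memMgmt -Name 'PagingFiles' -Value @('D:\pagefile.sys 0 0') -Type MultiString

# 5. Verify the value stuck, then reboot so Windows creates pagefile.sys on D: at startup
$check = (Get-ItemProperty -Path $memMgmt -Name 'PagingFiles').PagingFiles
if ($check -contains 'D:\pagefile.sys 0 0') { Restart-Computer -Force } else { exit }
```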
We need to be careful with this since we are calling a reboot just after system startup. Without proper protection, we could end up in an endless boot loop. Therefore, even though there are a few other protections to exit the script if something goes wrong, there’s also a boot loop guard at the start of the script that checks the number of recent reboots before doing anything. If four or more reboots are detected in the last 12 minutes (one will always be detected at startup), the script will exit and log an event in the Event Viewer (Event ID 9876), so if you’re monitoring events with an RMM or another tool, you can get a notification that the boot loop guard tripped.
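For illustration, a guard like that could be approximated by counting Event Log service start events (ID 6005 in the System log, one per boot) over the last 12 minutes; the actual script on GitHub may detect reboots differently, and the 'PageFileScript' event source here is just a placeholder:

```powershell
# Boot loop guard: count boots in the last 12 minutes via the "Event log service was started" event
$recentBoots = @(Get-WinEvent -FilterHashtable @{
    LogName   = 'System'
    Id        = 6005
    StartTime = (Get-Date).AddMinutes(-12)
} -ErrorAction SilentlyContinue)

if ($recentBoots.Count -ge 4) {
    # Register a custom event source once, then log Event ID 9876 and stop before doing anything else
    if (-not [System.Diagnostics.EventLog]::SourceExists('PageFileScript')) {
        New-EventLog -LogName Application -Source 'PageFileScript'
    }
    Write-EventLog -LogName Application -Source 'PageFileScript' -EventId 9876 -EntryType Warning `
        -Message 'Boot loop guard triggered: 4 or more reboots detected in the last 12 minutes. Exiting.'
    exit
}
```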

Lastly, every time the script runs, a log file named with the date and time is generated in c:\temp\logs, and only 21 days of logs are kept. Here is an example of the log the first time the script runs after a host is started from a deallocated state:

And after the reboot, we see in the log that the script found a pagefile on the D drive and exited:

The scheduled task is pretty simple. If you’re manually configuring it, set it to run as system and at startup.


The action is to simply run the PowerShell script from wherever it’s located.
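If you’d rather create the task with PowerShell than click through the Task Scheduler UI, something like this would match those settings (the task name and script path are placeholders – adjust to your deployment):

```powershell
# Create the startup scheduled task: runs as SYSTEM at boot and launches the script
# (task name and script path below are placeholders - adjust to your environment)
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\mem\Set-TempStoragePageFile.ps1"'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest

Register-ScheduledTask -TaskName 'Configure Temp Storage Page File' `
    -Action $action -Trigger $trigger -Principal $principal -Force
```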

One quick side note about using system-managed page files. In general, using system-managed page files is fine, but keep in mind the limits and minimums on how they are sized from the MS Learn snip below:

Deploying with Intune
Most AVD environments are still deployed in an Active Directory environment. However, in this case, these session hosts were all Entra-joined and managed by Intune, so we can deploy this scheduled task using Intune as a Win32 app. The files to deploy this as a Win32 app are here. You can edit the install.ps1 file if you’d like to change the destination for the script file on the local device. This environment already had some Intune-related files in c:\mem, so I reused that location. Package the file as an .intunewin, or use the one supplied in the link, and deploy the Win32 app with the settings below:

Use the custom detection script from GitHub, which looks for both the file and the scheduled task. Remember to edit this if you change the location of the script on your hosts.
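For reference, the detection logic boils down to checking for both the script file and the scheduled task; a simplified example (the path and task name are placeholders for whatever your deployment uses) looks like this:

```powershell
# Intune Win32 app custom detection: exit 0 with output to STDOUT = detected, anything else = not detected
$scriptPath = 'C:\mem\Set-TempStoragePageFile.ps1'    # adjust if you changed the install location
$taskName   = 'Configure Temp Storage Page File'      # adjust to match your scheduled task name

$task = Get-ScheduledTask -TaskName $taskName -ErrorAction SilentlyContinue

if ((Test-Path $scriptPath) -and $task) {
    Write-Output 'Detected'
    exit 0
}
exit 1
```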

Lastly, deploy to your hosts, and on their next reboot, they’ll have the page file properly set to an initialized and formatted temp storage D drive.

That’s all. Hopefully this helped you understand how you can use the local temporary storage on your Azure VMs to your advantage, and that it can help with performance consistency and reliability in an AVD environment when used for the page file.