Transition AVD FSLogix Profiles from VHDLocations to Cloud Cache

Most AVD environments I’ve worked with rely on FSLogix with VHD Locations for profile storage. If you already have VHD Locations configured and want to move to Cloud Cache, this guide walks you through the process (along with a mistake I made so you don’t repeat it). I couldn’t find much information on the risks involved in transitioning an existing VHD Locations configuration to Cloud Cache, so I built a lab and tested it out myself.

There is far more to FSLogix VHD Locations and Cloud Cache, and how they work behind the scenes, than what I cover in this post. I plan on writing more FSLogix posts in the future, but here is a high-level overview of how the two solutions work.

VHD Locations is rather simple: you point your profile containers at a location (most likely Azure Files). When a user signs in, that share is checked for an existing profile. If one exists, it’s mounted; if not, a new one is created.

With Cloud Cache, you specify multiple locations (known as CCD locations) for your profiles, which allows you to replicate user profiles across multiple storage accounts (up to four). The first location specified is your primary location (hopefully in the same region as your hosts). If the primary is accessible, the user’s profile is read from it and a local cache (on the AVD host’s disk) is created to service profile reads/writes. All writes occur directly against the cache on the AVD host’s disk and are then asynchronously written to the specified CCD locations. This keeps your profiles up to date in multiple locations. You’ll also notice a metadata file created alongside each user’s VHDX file; Cloud Cache uses it to keep all locations in sync with the same profile data and to determine whether a location is missing data.
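At a glance, the difference shows up in the FSLogix registry values under HKLM\SOFTWARE\FSLogix\Profiles. This is only a sketch of the two shapes (the share paths here are the lab examples used later in this post; yours will differ):

```
; VHD Locations: profiles point at a single share
Enabled       = 1
VHDLocations  = \\avdfslogix.file.core.windows.net\labfslogix

; Cloud Cache: an ordered list of CCD locations (first entry = primary)
Enabled       = 1
CCDLocations  = type=smb,connectionString=\\avdcloudcache1.file.core.windows.net\profiles;type=smb,connectionString=\\avdcloudcachedr.file.core.windows.net\profilesdr
```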

The big use case for Cloud Cache is DR and resiliency. For example, you can use Cloud Cache to avoid an AVD outage if there is a storage account failure in one region. If you’re in North Central US, you can also have a storage account in East US. If NCUS storage accounts have service degradation, your users’ profiles are still available in East US, and any sessions already established will experience no loss of service. Cloud Cache won’t be able to make writes to the offline location, but after it comes back online the profile will be updated. Additionally, you can have a DR AVD environment on standby in a secondary region. In the event an entire Azure region goes down, you can power on your DR hosts and your users can connect to them with their up-to-date user profiles. What if your primary storage account goes down and the most up-to-date version of a user profile is in a secondary or tertiary location? What happens when the primary storage location comes back online? If the user is still signed into the same session when the location recovers, it is simply updated with the latest data. If the user signs out before the location comes back online, the location with the newest metadata file is read at the next sign-in, and the locations with stale data are updated. Here is a very simple visual of what a DR setup can look like with Cloud Cache:

Cloud Cache does come with some drawbacks:

  • User sign-in and log-off will take longer. This depends on a few factors, but it’s generally only a few seconds.
  • The performance of your host’s local disk is critical when using Cloud Cache. Instead of reads/writes happening directly against the storage account (as with VHD Locations), they now happen on the local AVD host disk and are then sent to the specified CCD locations.
  • Your slowest-performing storage account will be your bottleneck. If you are in Central US and you have a CCD location in Japan, the latency to complete writes in Japan will be much higher. When users log off, all pending writes need to be completed, so this can result in longer log-offs.

 Configuration to move from VHD Locations -> Cloud Cache

Let’s see how we can move from a traditional VHD Locations setup to Cloud Cache. It’s very simple, unless you make the same mistake I did (covered below, so hopefully you can avoid it). Here we have a typical VHD Locations setup deployed with GPO. We can also see some profile containers that already exist from using VHD Locations.

An FSLogix configuration can only use VHD Locations OR Cloud Cache, not both. So, we need to remove our GPO setting for VHD Locations and then set our CCD locations. To start, I’m going to use a single location to make sure the profile data is properly retrieved. We do this using type=smb,connectionString=<storageaccountshare>. In my example: type=smb,connectionString=\\avdfslogix.file.core.windows.net\labfslogix

Run a gpupdate on your hosts, and we can see our FSLogix registry keys are now using the Cloud Cache CCDLocations value instead of VHDLocations:

However, after signing out and back in, my profiles weren’t being mounted. I pulled up the FSLogix logs (C:\ProgramData\FSLogix\Logs\Profile) and found some interesting things. It sees the config and tries to connect to the CCD location: 

The log shows that the profile couldn’t be loaded because another process has locked a portion of a file:

  And we can see it creates a local profile and again states the file is locked: 

 I confirmed there were no locks on the VHD files but rebooted the host anyway, and still received the error: 

Then, I made a new host in a new host pool and tried again with my Cloud Cache GPO, but got the same result. Disabling AV also resulted in the same error. Since this was happening on a brand-new host, I knew I had something misconfigured. After reviewing everything in my FSLogix GPO, I found an error in my CCD locations string: I had a capital ‘T’ leading off the CCDLocations value, and it needed to be a lowercase ‘t’. These values are case sensitive, so keep this in mind (https://learn.microsoft.com/en-us/fslogix/reference-configuration-settings?tabs=ccd#ccdlocations).
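The takeaway: the keys in a CCDLocations string are case sensitive. As an illustration, a few lines of Python can lint a CCDLocations value for the exact mistake I made. This helper is my own sketch (not an FSLogix tool), and it only knows about the two keys used in this post:

```python
# Only the keys used in this post; the exact casing is required by FSLogix.
VALID_KEYS = {"type", "connectionString"}

def check_ccd_locations(value: str) -> list[str]:
    """Return a list of problems found in a CCDLocations string."""
    problems = []
    for i, location in enumerate(value.split(";"), start=1):
        for pair in location.split(","):
            key = pair.split("=", 1)[0].strip()
            if key in VALID_KEYS:
                continue
            # Suggest the correctly cased key if only the casing is wrong
            hint = next((k for k in VALID_KEYS if k.lower() == key.lower()), None)
            if hint:
                problems.append(f"location {i}: '{key}' should be '{hint}' (case sensitive)")
            else:
                problems.append(f"location {i}: unknown key '{key}'")
    return problems
```

Feeding it my broken value (leading capital ‘T’) flags the bad key, while the corrected lowercase string comes back clean.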

  So, now that this is corrected, here is how the entry should look: 

After fixing the issue, users who already had profiles in that same directory from VHD Locations signed in and had their profiles attached without issue. Next, we need to add our second SMB storage account as another Cloud Cache directory and then test replication. Multiple locations are separated with a single semicolon, as shown below:

type=smb,connectionString=\\avdcloudcache1.file.core.windows.net\profiles;type=smb,connectionString=\\avdcloudcachedr.file.core.windows.net\profilesdr 
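Since ordering matters (the first entry becomes the primary location), I find it easiest to think of the value as a join over an ordered list of shares. A small hypothetical Python helper to build the string:

```python
def build_ccd_locations(shares: list[str]) -> str:
    """Join an ordered list of SMB share paths into a CCDLocations value.

    Order matters: the first share becomes the primary location.
    Cloud Cache supports up to four locations.
    """
    if not 1 <= len(shares) <= 4:
        raise ValueError("Cloud Cache supports between 1 and 4 locations")
    return ";".join(f"type=smb,connectionString={share}" for share in shares)
```

Passing the two lab shares in order (primary first, DR second) produces exactly the string above.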

Run a gpupdate /force on your hosts to make sure they pull the new locations. You can verify by looking in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\FSLogix\Profiles, where you can see our GPO settings:
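For reference, the resulting values should look roughly like this (a sketch of the expected registry state; CCDLocations is a string value, and the old VHDLocations value should no longer be present):

```
HKEY_LOCAL_MACHINE\SOFTWARE\FSLogix\Profiles
    Enabled       REG_DWORD    0x1
    CCDLocations  REG_SZ       type=smb,connectionString=\\avdcloudcache1.file.core.windows.net\profiles;type=smb,connectionString=\\avdcloudcachedr.file.core.windows.net\profilesdr
```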

 Now, if we sign in, you can see in the animation below that the user’s profile is replicated over to our DR storage account in another region, and Cloud Cache is functioning properly: 

 Here’s another example after a user sign-out showing the VHDX profiles in two locations with identical timestamps:
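To spot-check replication without clicking through both shares, you could compare file timestamps programmatically. This is a rough sketch of the idea, not an FSLogix feature; the function name is mine, and it assumes both UNC paths are reachable from the machine running it:

```python
from pathlib import Path

def compare_replicas(primary: str, secondary: str) -> dict[str, bool]:
    """For each VHDX under the primary share, report whether the secondary
    share holds a copy with the same last-modified time (whole seconds)."""
    results = {}
    secondary_root = Path(secondary)
    for vhdx in Path(primary).rglob("*.vhdx"):
        match = secondary_root / vhdx.relative_to(primary)
        results[vhdx.name] = (
            match.exists()
            and int(match.stat().st_mtime) == int(vhdx.stat().st_mtime)
        )
    return results
```

Run against the primary and DR share paths, every profile should report True once Cloud Cache has flushed its pending writes.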