# microsoft-fslogix
j
• Watch out for the IO shift. You will have IO on both the local and remote storage locations. Make sure the VDA has fast disk (no brainer, but hey).
• If PVS, be careful with the FSLogix cache directory and PVS cache impacts.
• You don’t need to bake anything into the master unless you are changing the cache directories - even then, do it via GPO, but get it applied to the master with at least one reboot through the master.
• Make sure you update AV exclusions to include the cache directories on the local VDA (see the sketch below).
• Read the scale docs on Nutanix if you are using hybrid nodes (4 SSDs rather than 2 is preferred).
👍 1
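A minimal sketch of the AV-exclusion point, assuming Microsoft Defender and the default FSLogix Cloud Cache directories (adjust the paths if you redirect the cache via GPO; other AV products need their own exclusion mechanism):

```powershell
# Add Defender path exclusions for the FSLogix Cloud Cache directories on the
# local VDA. The paths below are the FSLogix defaults; change them if
# CacheDirectory/ProxyDirectory have been redirected.
$cacheDirs = @(
    "$env:ProgramData\FSLogix\Cache",
    "$env:ProgramData\FSLogix\Proxy"
)

foreach ($dir in $cacheDirs) {
    # Add-MpPreference is the built-in Microsoft Defender cmdlet.
    Add-MpPreference -ExclusionPath $dir
}
```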
s
Hot damn, this is great!!
With Nutanix and the PVS cache being 0, which is recommended, I'm guessing it would be better to move the FSLogix cache directly to a persistent drive to bypass the filter driver? Any idea on what the sizing is for the FSLogix cache?
j
I would put it on the persistent drive, yes. I would also enable the clear-cache-on-logoff setting, else it will fluff up your cache (unless users are connecting to the same box over and over).
👍 1
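A minimal sketch of the two settings above, assuming a hypothetical D:\FSLogixCache path on the persistent drive; in practice these would normally be pushed via GPO rather than set directly in the registry:

```powershell
# Point the Cloud Cache local cache at the persistent drive and clear it at
# sign-out. 'D:\FSLogixCache' is a placeholder - use your persistent disk path.
$profilesKey = 'HKLM:\SOFTWARE\FSLogix\Profiles'

New-ItemProperty -Path $profilesKey -Name 'CacheDirectory' `
    -Value 'D:\FSLogixCache' -PropertyType String -Force | Out-Null

# ClearCacheOnLogoff = 1 removes the local cache when the user signs out,
# which keeps the persistent drive from filling with stale caches.
New-ItemProperty -Path $profilesKey -Name 'ClearCacheOnLogoff' `
    -Value 1 -PropertyType DWord -Force | Out-Null
```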
Re sizing YMMV - suggest testing a range of users and seeing what you see
s
Thank you sir. Much appreciated.
o
Hey @James Kindon, with yesterday's deprecation of that sneaky little devil FRXTRAY, we don't have to worry about it killing our back end going forward after being left open in any master images. But you guys might have already fixed that skeleton in the closet for Files a ways back... It was just the first thing I thought of when I saw the deprecation announcements, ha.
j
@Oz Zy It's funny timing that it was announced actually - our Files team are all over it (also, that skeleton punished Azure Files and a couple of others too)
r
@Steve Noel, for example, this is what I do
👍 1
j
As previously stated, moving to Cloud Cache is a significant shift in I/O and overall behavior. Some additional things to consider:
• Read the new FSLogix docs for Cloud Cache (updated 3/2023).
• Ensure you have at least 2 storage providers configured and HealthyProvidersRequiredForRegister is set to at least 1 (a sketch follows this list).
• Keep in mind sign-out performance will be dictated by the slowest-performing storage provider.
• Native VHD Disk Compaction will significantly delay sign-out because the full profile must be brought local, then evaluated for compaction, optionally compacted, then written upstream to all storage providers.
• If you don't use native VHD Disk Compaction and use the Invoke-FSLShrink script instead, you must:
    ◦ Run the script against all storage locations.
• The increase in I/O on the local host may change how many users you stack on each VM.
⤴️ 1
👍 4
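A minimal sketch of the two-provider and shrink points above. The UNC paths are hypothetical placeholders, production settings would normally come from GPO, and the shrink loop assumes the community Invoke-FslShrinkDisk.ps1 script and its -Path parameter:

```powershell
# Two Cloud Cache storage providers, semicolon-separated in CCDLocations.
$profilesKey  = 'HKLM:\SOFTWARE\FSLogix\Profiles'
$ccdLocations = 'type=smb,connectionString=\\fileserver1\fslogix;' +
                'type=smb,connectionString=\\fileserver2\fslogix'

New-ItemProperty -Path $profilesKey -Name 'CCDLocations' `
    -Value $ccdLocations -PropertyType String -Force | Out-Null

# Allow sign-in as long as at least one provider is healthy.
New-ItemProperty -Path $profilesKey -Name 'HealthyProvidersRequiredForRegister' `
    -Value 1 -PropertyType DWord -Force | Out-Null

# If shrinking with the community Invoke-FslShrinkDisk.ps1 script rather than
# native VHD Disk Compaction, run it against every storage location so the
# providers stay consistent.
$storageLocations = '\\fileserver1\fslogix', '\\fileserver2\fslogix'
foreach ($location in $storageLocations) {
    .\Invoke-FslShrinkDisk.ps1 -Path $location
}
```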
m
Thank you for sharing, @Jason Parker!
👍 2