# microsoft-fslogix
a
Why FSLogix in this scenario and not Citrix Profile Management, particularly for active-active and published apps?
j
What’s the infrastructure?
j
Go for UPM
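For reference, a minimal sketch of what "go for UPM" looks like at the config level, assuming the documented UPM policy registry values; the share path is a made-up example, and in practice you'd set this via Citrix policy or GPO rather than a script:

```python
# Minimal sketch: point Citrix Profile Management (UPM) at a user store.
# ServiceActive and PathToUserStore are the documented UPM policy values;
# the share path is a placeholder.
import winreg

UPM_KEY = r"SOFTWARE\Policies\Citrix\UserProfileManager"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, UPM_KEY) as key:
    winreg.SetValueEx(key, "ServiceActive", 0, winreg.REG_DWORD, 1)
    # #sAMAccountName# and !CTX_OSNAME! are expanded by UPM at logon,
    # which keeps per-user, per-OS profiles separated in the store
    winreg.SetValueEx(
        key, "PathToUserStore", 0, winreg.REG_SZ,
        r"\\filer01.corp.example\upmstore$\#sAMAccountName#\!CTX_OSNAME!",
    )
```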
d
This isn't a one-time ask per se; in fact, I've been getting it a lot lately. But assume that it's either a shared SAN and VMware combo (XIO is what I'm seeing a lot of lately, though I did see one at this size attempting NetApp, to... limited success) or that they are ready to have a dedicated back-end discussion. So for UPM, same basic assumptions, but I'm wondering more about how people are scaling across datacenters at that number of user accounts. For example, if you are using Microsoft file servers, what does a successful infrastructure supporting multiple datacenters look like today? To be clear, I'm not looking for theory; I know that. I just want feedback from others who have been doing it, to see what is working in 2023 in their real-world scenarios. For example, are you splitting file servers into groups? Using services other than Windows file servers? That sort of thing.
j
Depends on size, but dedicated file server infrastructure pods, splitting into pods with workloads on the same infrastructure, etc. The majority of my experience lately is Nutanix Files.
d
Well, I assumed that, lol - I haven't really had customers yet at that scale on Nutanix so I don't have information on how well that's working for profiles.
BUT - let's say I have a CVAD site with 50k users, all in the same Delivery Group because they all have the same apps. How are you splitting up into pods in that scenario in a way that can be maintained and properly balanced (and replicated live across datacenters)?
j
I wouldn’t. I would break up in pods. That’s way too large of site.
You are going to hit other limitations.
d
Hence the question I have 😉
j
Typically you are going to build out in pods. Let's say 5k users each. Replication will depend on infrastructure and image provisioning (MCS vs. PVS); it could be automation. Then look at resource publishing, aggregation, etc. via StoreFront, with a resource assignment strategy based on groups and so on, depending on user needs and where the application back ends and data live.
Once you have your pod strategy it’s just rinse and repeat after that until you reach desired scale.
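The rinse-and-repeat math is easy to sanity-check. A quick sketch with the numbers from this thread (two datacenters assumed; all figures nominal):

```python
# Back-of-the-envelope pod math: 50k users, ~5k per pod, two DCs.
import math

total_users = 50_000
users_per_pod = 5_000
datacenters = 2

pods = math.ceil(total_users / users_per_pod)   # 10 pods
pods_per_dc = pods // datacenters               # 5 pods per DC

# Surviving a full-DC outage means the remaining DC absorbs everything,
# so each pod needs roughly 2x burst headroom:
burst_per_pod = users_per_pod * datacenters     # 10,000 users
print(pods, pods_per_dc, burst_per_pod)
```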
d
I am finding this to NOT be the case - managing which users fit into which pods and such just doesn't work. I am asking only about profiles for the time being because of that type of challenge. I'm fine on the workload front (for example, one of my current clients will be in pods active at 6k but able to scale to 12k to survive datacenter outages and upgrades). So for profiles, I /could/ recommend splitting users into, say, A/B/C groups with different file servers for each, spanning datacenters. I just don't like the administrative implications of that, and I don't think the clients would either. If there were a way to have 50-100k profiles successfully on a single file system it'd be great, but it seems like a better way to /manage/ it is what I need to focus on in these scenarios. So, how are people doing that (again, real world...)?
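One hedged alternative to hand-maintained A/B/C groups: derive the share assignment from a hash of the username, so no per-user administration is needed. A hypothetical sketch (share names are made up):

```python
# Deterministic user-to-share bucketing: each user always lands on the
# same profile share, with roughly even distribution across shares.
import hashlib

SHARES = [
    r"\\filer01.corp.example\profiles$",
    r"\\filer02.corp.example\profiles$",
    r"\\filer03.corp.example\profiles$",
]

def profile_share(sam_account_name: str) -> str:
    # md5 is used only as a stable, well-distributed hash, not for security
    digest = hashlib.md5(sam_account_name.lower().encode()).digest()
    return SHARES[int.from_bytes(digest[:4], "big") % len(SHARES)]

print(profile_share("jdoe"))  # same input -> same share, every time
```

The catch with plain modulo bucketing is that adding a share reshuffles most users, so either plan the share count up front or use consistent hashing.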
j
I was speaking generally about how we advise customers and the guidance we put out for customers to follow. Putting all those profiles in a single system is a huge failure domain.
@RSRevord may have some additional guidance
d
Agreed - but it becomes a 'how do we manage it' question that is not always easy to answer. You end up with app siloes and crap that makes things suck, especially when people start asking ChatGPT to fix their profile...
r
So personally, multi-DC UPM replicates better for me than FSLogix. If you can't break apart users logically, then FSLogix works best for having multiple shares, which I PREFER so I can reduce the blast radius per DC by using multiple file servers.
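A minimal sketch of that multiple-shares setup, assuming the standard FSLogix Profiles registry key (server names are made up; entries are tried in the order listed):

```python
# FSLogix profile container with multiple VHDLocations entries, so one
# failed share doesn't take out every profile (smaller blast radius).
import winreg

FSLOGIX_KEY = r"SOFTWARE\FSLogix\Profiles"

shares = [
    r"\\dc1-filer01.corp.example\fslogix$",
    r"\\dc1-filer02.corp.example\fslogix$",
]

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, FSLOGIX_KEY) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_MULTI_SZ, shares)
```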
Yes, I do know there are other techs, but I'm trying to keep solutions somewhat limited.....
You can replicate with a few different tools to get FSLogix multi-DC ready. Also, honestly, I haven't given Cloud Cache a shot in the last version or three....
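For comparison, the Cloud Cache variant swaps VHDLocations for CCDLocations and keeps the container synced to multiple providers, one per DC here. A sketch with made-up paths (when CCDLocations is set, VHDLocations is ignored):

```python
# FSLogix Cloud Cache: the container is written to every provider listed,
# at the cost of a local cache disk on each session host.
import winreg

FSLOGIX_KEY = r"SOFTWARE\FSLogix\Profiles"

providers = [
    r"type=smb,connectionString=\\dc1-filer01.corp.example\fslogix$",
    r"type=smb,connectionString=\\dc2-filer01.corp.example\fslogix$",
]

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, FSLOGIX_KEY) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "CCDLocations", 0, winreg.REG_MULTI_SZ, providers)
```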
Let me know if you have other questions - I imagine we've seen it.
d
Here's one example - they use Scale-Out File Server with ClusterFS (served from XtremIO LUNs, I think) on the back end to serve FSL VHDXs. While they seem to be okay, it doesn't really seem like that's what MS intended for SOFS in that mode... So it makes me wonder if there's a way to run ClusterFS on the back end but NOT use Scale-Out, just run traditional file shares. That way, as you said above and I agree, you're limiting the load on the servers themselves - but I question if it's viable. As long as users don't have sessions hitting the same Delivery Groups in multiple DCs, it seems viable, especially for UPM instead of FSL... The problem I am facing with this current client is becoming a common ask - they want to start supporting things like OneDrive/Outlook, so my default suggestion is to do a separate LUN/shares for it and not try to cram all of that into a single container. Fine in theory - but I don't like working in theory for things like this.
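The separate-LUN/shares idea maps to FSLogix's split between the profile container and the Office (ODFC) container. A sketch using the documented ODFC policy key, with a made-up share:

```python
# FSLogix Office container on its own share, separate from the profile
# container, so OST/OneDrive churn lives on its own LUN/shares.
import winreg

ODFC_KEY = r"SOFTWARE\Policies\FSLogix\ODFC"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, ODFC_KEY) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_MULTI_SZ,
                      [r"\\dc1-filer05.corp.example\odfc$"])
    winreg.SetValueEx(key, "IncludeOutlook", 0, winreg.REG_DWORD, 1)   # OST
    winreg.SetValueEx(key, "IncludeOneDrive", 0, winreg.REG_DWORD, 1)  # cache
```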
r
Well, honestly, depending on how they cut workloads, etc., I don't know that there is a one-size-fits-all.
Another thing to consider is OneDrive in mapped mode, or just online-only mode with zero data retention.
It won't work for all use cases, but I have several where we don't do profiles but wanted users to have options to save some stuff; we map OneDrive as a drive letter and redirect stuff to it.
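The exact mechanics of that map trick vary by deployment; one hypothetical way to do it is a subst drive over the local OneDrive sync root plus a known-folder redirect, roughly:

```python
# Hypothetical per-user logon script: expose OneDrive as O: and point the
# Documents known folder at it. One possible approach, not necessarily
# the one used above.
import os
import subprocess
import winreg

onedrive_root = os.environ["OneDrive"]           # local OneDrive sync folder
subprocess.run(["subst", "O:", onedrive_root], check=True)

SHELL = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SHELL, 0,
                    winreg.KEY_SET_VALUE) as key:
    # "Personal" is the value name Windows uses for the Documents folder
    winreg.SetValueEx(key, "Personal", 0, winreg.REG_EXPAND_SZ, r"O:\Documents")
```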
d
Yeah, for OneDrive, I'm finding a lot of them want to specify files for offline. It can be frustrating. And as far as no profile goes, I'm finding that more common too - but it's hard to get around things like Office.
r
That's where the map trick can make a difference. Sorry, but offline files in virtual environments make no sense....
Even if it's isolated, then it should be isolated: no internet/offline, so no MS 365 crap, period.
d
For things like OneNote it makes sense; downloading everything every time sucks. But I'm with you - I think they were just concerned about 20,000 people having to download certain documents daily or something like that. To me, the answer is to just stay on local file shares, but... there's still the OST factor. I have never had a company be okay without it, outside of being hosted on Azure, and even then a few complained. So regardless, you end up needing a container to have any kind of sanity, and that's an SMB channel to open; hence the concern.
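Rough, assumption-heavy math behind that concern, since every active session holds its container VHDX open over SMB (all constants are guesses to show the shape of the problem, not measured values):

```python
# Per-share load if 20k concurrent users' containers are spread evenly.
concurrent_users = 20_000
shares = 8                       # assumed number of container shares
iops_per_container = 10          # assumed steady-state IOPS per user

sessions_per_share = concurrent_users // shares              # 2,500 handles
iops_per_share = sessions_per_share * iops_per_container     # 25,000 IOPS
print(sessions_per_share, iops_per_share)
```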