# citrix-cloud
l
With our on-prem deployment, we have 2 data centers and use multi-site aggregation to load balance users between them. We don't care which site a user gets sent to on any given day, since profiles are replicated. Thinking about moving to Citrix Cloud, I'm having a difficult time figuring out how to keep users balanced between the 2 data centers, as there is only one site in Citrix Cloud (unless we had 2 OrgIDs, but yuck). Each data center has capacity for everyone, so it's basically hands-off DR as well: data center 1 goes down, everyone ends up on data center 2. But I can't figure out any easy way to do this in a single site. Thoughts?
c
2 separate hosting connections, 1 for each data center; 1 machine catalog per hosting connection, and then add both machine catalogs to the same delivery group. Something like this?
l
We have a machine catalog for each Nutanix cluster, and all machines from all catalogs are in the delivery group for each site right now. But with multi-site aggregation, I have 2 delivery groups that are identical, so it load balances. I can't do that with a single site and a single delivery group. Yes, I could have a single delivery group with machines from all catalogs and both data centers, but they would not be balanced equally between the 2 data centers as desired.
b
If each data center has capacity for all users, does it really matter if it's equal between both? Keep it simple, like you said, with a single delivery group. You could get fancy with zones and the like, but from what you're saying so far I'm not sure that's really needed.
r
In cloud, since it's zonal and not separate sites, you would use two catalogs, one in each zone, with one delivery group. Since you're A/A, that's all you need to do. If you wanted A/P, you would need to use zone preference. If you're using StoreFront, all of cloud goes in one entry for delivery controllers, and normally we would use VIPs for each set of cloud connectors and add in all the VIPs. But you need all the cloud connectors entered for all the zones in case you go into LHC, and you need advanced health check enabled on the store.
l
As far as rationale, it's both to prevent on-prem resources from being close to capacity (yes, unlikely even without specific balancing, but I'm just not a fan of 70% on one side, 30% on the other) as well as mgmt preference for an outage, e.g. "Why were 70% of users impacted when Site A went down? It should only be 50%." @Rob Zylowski, isn't zone preference for giving a user a preferred zone, rather than just normal balancing? I don't want user A to prefer a site; I just want user A to go to Site A, then user B to go to Site B, then user C to Site A, etc., similar to multi-site aggregation. I'm familiar with the zone/resource location configuration for proper LHC; Citrix released their article on it after we had an outage a few years back at my prior company lol.
r
I don't know of any way to get a 70/30 split without just provisioning the 70/30. And yes, zone pref means assigning a home zone. You could do that 70/30, but users will always go first to their home zone, and only if there are no VDAs available go to the alternate zone.
j
I think what Brian is asking is how to guarantee a 50/50 split when there is 100% available capacity at each side
💯 1
You could always get funky with StoreFront aggregation and treat each set of cloud connectors as a unique site, then aggregate the same as you would with controller configs and distribute launch requests. Technically it should work, but it probably has some availability drawbacks.
r
I think that would negatively affect advanced health check.
l
@James Kindon I had considered that, but the problem is then I need to have 2 delivery groups in the cloud site, and I wasn't sure how that would behave.
s
Does vertical load balancing go in numerical order? If so, you might be able to have odd machines be DC1 and even machines be DC2. Not the greatest solution, but worth a thought. Would love to see more flexibility in weighting and load management policies, not just for VDAs in the same DG, but for Site Aggregation as well.
j
@Rob Zylowski isn’t this what the AHC was kind of for? To create awareness of what is available in the RL during an outage and then ensure that enumeration and launch hit the appropriate CC? It’s been a while but I thought it would work in our favor in this scenario? @langsbr why two DGs? Just one, then dedupe it via agg? What am I missing there?
r
@James Kindon went back and read Sarah's blog on this, and I think you are right: it would still work as long as all cloud connectors exist in the store. Of course, if it needs the 70/30, StoreFront aggregation won't work well.
j
Sarah’s blog was a staple of my consulting diet - one of the most read articles ever I would think. It would be interesting to see if the model works for a straight 50/50 split, I’m struggling to see why it wouldn’t but could be missing something obvious
r
I think 50/50 is fine because StoreFront can do round robin. The 70/30 would be an issue.
j
70/30 isn't in the mix - that's what Brian is trying to kill off
r
I agree that it would work, though I really do prefer using zones and zone preference; that way, if they ever go to Workspace, it will still work.
l
Exactly - I don't want to end up in a 70/30 situation. I am not seeing how multi-site aggregation would work if I have the cloud site added twice pointing to the same DG. It's just going to grab a random VM from that DG, and that DG contains both data centers' VMs; there's no guarantee it will issue one from DC1, one from DC2, etc. So I figured I would need a DG with VMs from DC1 and a DG with VMs from DC2. But then how does aggregation work, as its goal is to aggregate identical items? Now I'd have 2 identical icons in the same site: how does aggregation know which to issue?
Rob, zone preference would work if we had 10 users and they never changed: give 5 zone preference for DC1, 5 for DC2. However, when you're talking thousands and thousands of users with relatively high turnover, now you're just managing the balance manually at another layer, aren't you? What happens if 100 people with a preference for DC1 leave, and 50 get hired with a DC2 preference? You have to go in and manually move people around.
I'm going to try it in dev, but I just don't see how multi-site aggregation would know to use icon 1 then icon 2 if they are coming from the same site, and I really don't know how it would balance if all the VMs are in one DG.
j
The more I think about this, the more I think you are right: it's ultimately still going to be random, and without any form of preference (home zone etc.) it all ends up in the same place. I think you are going to have to do home zone preference logic. Easy enough to do with AD groups, and easy enough to realign those groups each night with PowerShell: 2 groups, one for Site A, one for Site B, distribute users across the two. Then each night you could query the groups, get all members, split them in half, empty the groups and repopulate them in an even split. Just one option to play with.
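The nightly rebalance described above could be sketched roughly like this (shown in Python for clarity; the group names and user list are made up, and in practice the read/write steps would be AD PowerShell calls such as Get-ADGroupMember / Add-ADGroupMember or an LDAP library):

```python
# Hypothetical sketch: deal every entitled user alternately into two
# zone-preference groups so the split is even and deterministic.
# "Site A" / "Site B" group names and the user source are assumptions.

def split_users(users):
    """Return (site_a, site_b) lists whose sizes differ by at most one."""
    ordered = sorted(users)        # stable ordering, so reruns are consistent
    site_a = ordered[0::2]         # users at positions 0, 2, 4, ...
    site_b = ordered[1::2]         # users at positions 1, 3, 5, ...
    return site_a, site_b

if __name__ == "__main__":
    everyone = ["alice", "bob", "carol", "dave", "erin"]
    a, b = split_users(everyone)
    print(a, b)
```

Because the split is recomputed from the full user list each night, joiners and leavers rebalance automatically; nobody has to manually move preferences around as turnover happens.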
r
When you add the site entry, you can also associate zones in the advanced properties, and then I think it can work with aggregation. But again, I would go with zone preference. Doesn't need to be exactly equal.
j
I faced similar challenges with XenDesktop when I migrated from XA6.5; load balancing was somewhat easier then. I now use zone preferences (1 per data centre) with AD groups and balance my users using a script. Citrix should offer a better load balancing method, more flexible, with more options. You could limit the number of users to half your capacity per VM and tune it depending on the site, but changing that value later via policy is slow to apply, so it's not ideal to change in case of DR.