DEP 313: Windows Server 2003 Active Directory Branch Office Deployment
Lothar Zeitler, Senior Consultant, Microsoft Services Germany
AD Branch Office Scenario (diagram)
- Data-Center-Site
  - ROOTDC1 (GC, DNS) 10.0.0.1, corp.contoso.com
  - ROOTDC2 (FSMO, DNS) 10.0.0.2, corp.contoso.com
  - HQDC1 (FSMO, DNS) 10.0.0.3, hq.corp.contoso.com
  - HQDC2 (GC, DNS) 10.0.0.5, hq.corp.contoso.com
  - HUBDC1 (FSMO, DNS) 10.0.0.10, branches.corp.contoso.com
  - HUBDC2 (DNS) 10.0.0.11, branches.corp.contoso.com
  - BHDC1–BHDC4 (GC, DNS) 10.0.0.12–10.0.0.15, branches.corp.contoso.com
  - MOMSVR 10.0.0.26, MOM Server, corp.contoso.com
  - TOOLMGRSVR 10.0.0.4, Monitoring Server, corp.contoso.com
- Staging-Site
  - STAGINGDC1 (GC, DNS) 10.0.0.25, branches.corp.contoso.com
  - TSDCSERVER 10.0.0.20, ADS Server, corp.contoso.com
- Branch office sites BOSite1–BOSiten, each with one DC (BODC1–BODCn)
What Makes a Branch Office Design Interesting?
- IP connectivity, incl. WAN, link speed, dial-on-demand, routers, firewalls, IPSEC
- Name resolution, incl. DNS server, zone, and client configuration
- Active Directory replication to a large number of replication partners
- FRS replication
- Group policy implementation
- Happy users
Considerations
- Proper care of DNS name resolution will guarantee replication success
- IPSEC is the preferred firewall solution
Active Directory Branch Office Guide
- Windows 2000 version published: http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/ad/windows2000/deploy/adguide/default.asp
  - Recommends managing connection objects manually
  - Windows 2003 can be deployed using that guide
- Windows 2003 version in draft
  - All deployment steps are tested in the scalability lab
  - Biggest deployment so far: 2,400 sites; goal: 3,500 sites
  - Published September 2003
New Features in Windows 2003 for Branch Office Deployments
- KCC improvements
  - KCC/ISTG inter-site topology generation
  - Bridgehead server load-balancing and connection object load-balancing tool
  - KCC redundant connection object mode for branch offices
  - No more “keep connection objects” mode if the replication topology is not 100% closed
  - Better event logging to find disconnected sites
- Replication improvements
  - Linked-value replication
  - More replication priorities: intra-site before inter-site; NC priorities: Schema -> Config -> domain -> GC
  - Notification clean-up after site move
  - Lingering object detection
- No GC full sync
  - In Windows 2000, schema changes that changed the PAS triggered a GC full sync
  - Removed in Windows 2003
- Universal group caching
- DNS improvements
- Install from media
- FRS improvements
- Plus many more…
Active Directory Deployment For Branch Offices
Active Directory design
- Forest design
- Decide on centralized or decentralized deployment
- Domain design
- DNS design
- Site topology and replication design
- Capacity planning
- Monitoring design
Active Directory deployment
- Deploying and monitoring non-branch domains
- Deploying the branches domain in the hub site
- Deploying and monitoring a staging site
- Deploying and monitoring the branch sites
Forest Design
- Follow the recommendations in the Windows 2003 Deployment Kit (Chapter 2): http://www.microsoft.com/downloads/details.aspx?familyid=6cde6ee7-5df1-4394-92ed-2147c3a9ebbe&displaylang=en
- Reasons for having multiple forests
  - Political / organizational reasons — unlikely in branch office scenarios
  - Too many locations where domain controllers must be deployed (complexity of deployment)
  - Too many objects in the directory — should be partitioned at the domain level
  - GCs too big? Evaluate not deploying GCs to branch offices; Windows 2003 offers universal group caching
- Recommendation: deploy a single forest for branch offices
Centralized vs. Decentralized Domain Controller Deployment
- The number of sites with domain controllers defines the scope of the deployment
- Deployment options
  - Centralized deployment: domain controllers are located in datacenters / hub sites only; users in branches log on over the WAN link
  - Decentralized deployment: all branches have domain controllers; users can log on even if the WAN is down
  - Mixed model: some branches have DCs, some don’t
- Centralized deployment has a lower cost of ownership — easier to operate, monitor, and troubleshoot
Design Considerations for Domain Controller Placement
- A local DC requires physical security
- Domain controller management: monitoring, auditing, SP deployment, etc. must be guaranteed
- Required services – business drivers
  - File & print, email, database, mainframe; most of them require a Windows logon, and logon requires DC availability
  - Can the business still run even if the WAN is down?
  - Is the business in the branch focused on a LOB application that requires WAN access (mainframe)?
- Logon locally or over the WAN
  - WAN logon requires acceptable speed and line availability; it is only an option if the WAN is reliable
  - Cached credentials only work for local workstation logon
  - Terminal Services clients use local logon
- In many cases, network traffic is an important factor: weigh client logon traffic against directory replication traffic
Design Considerations for Global Catalog Placement
- Not a factor in a single-domain deployment: turn on the GC flag on all DCs — no extra cost associated
- In multi-domain deployments, a GC is no longer needed for user logon (universal group caching)
- In multi-domain deployments, GC placement is driven by application requirements (Exchange 2000 servers, Outlook)
Domain Design Recommendation for Branch Office Deployment
- Use a single domain
  - Typically only a single administration area with central administration (users and policies)
  - Replication traffic is higher, but the model is more flexible (roaming users, no GC dependencies)
  - Database size is no big concern
- If a high number of users work in the central location, create different domains for headquarters and branches
- If the number of users is very high (> 50,000), create geographical partitions
- A high number of domains is discouraged (examples: one domain per branch, one domain per state) — it increases the complexity of the deployment
DNS Design Recommendations
- DNS server placement: put a DNS server on all domain controllers
- DNS client (resolver) configuration
  - Primary DNS server: local machine
  - Secondary DNS server: same-site DNS server or hub DNS server
  - Windows 2000: different configuration for forest root DCs
- DNS zone configuration
  - Use AD-integrated zones (application partitions)
  - Use DNS forwarding
  - Configure zones so that no NS records are created for branch office DCs
DNS Design: Managing SRV (Locator) Records and AutoSiteCoverage
- SRV records are published by Netlogon in DNS, at the site level and at the domain/forest level
  - Clients search for services in the client site first, and fall back to the domain/forest level
- Branch office deployments require specific configuration
  - A large number of domain controllers creates a scalability problem for domain-level registration: if more than 850 branch office DCs try to register SRV records at the domain level, registration will fail
  - Registration at the domain/forest level is in most cases meaningless for branch DCs — the DC cannot be contacted over the WAN / DOD link anyway; if the local look-up in the branch fails, the client should always fall back to the hub only
- Disable AutoSiteCoverage; use group policy for configuration
Using GPOs for DNS Settings
- Create a new global group for hub DCs; add all non-branch-office DCs as group members
- Create a new GPO (BranchOfficeGPO)
  - Configure which DC locator records are not registered by branch DCs
  - Configure the refresh interval
- In the BranchOfficeGPO properties, deny “Apply Group Policy” to hub DCs
  - A negative list is easier to manage than a positive list: no damage if a DC is not added to the group, and there are fewer hub DCs than branch office DCs
- Edit the Default Domain Controllers Policy to disable automated site coverage
  - Important: this must be configured for ALL DCs, not only branch office DCs
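Underneath these policies, Netlogon reads registry values, which is useful to know when verifying a lab machine. A minimal sketch of the equivalent direct registry configuration (the value names are real Netlogon parameters, but the mnemonic list below is illustrative only, not a complete or tested recommendation):

```shell
REM Sketch (Windows cmd): registry values behind the Netlogon group policies.
REM DnsAvoidRegisterRecords suppresses the listed locator records on a branch DC;
REM the mnemonics shown here are an illustrative subset, not a vetted list.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters ^
    /v DnsAvoidRegisterRecords /t REG_MULTI_SZ ^
    /d "LdapIpAddress\0Ldap\0Gc\0GcIpAddress\0Kdc\0Dc\0DcByGuid" /f

REM AutoSiteCoverage=0 stops a DC from registering site records
REM for branch sites that have no DC of their own.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters ^
    /v AutoSiteCoverage /t REG_DWORD /d 0 /f
```

In production, follow the slide's recommendation and set these through the BranchOfficeGPO and the Default Domain Controllers Policy rather than per-machine registry edits.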
Netlogon Group Policy
Replication Planning: Improvements in Windows 2003
- Windows 2000: topology creation had scalability limits; connection objects had to be managed manually
- Windows 2003 has many improvements to fully automate topology management
  - New KCC / ISTG algorithm
  - Bridgehead server load-balancing
  - KCC redundant connection object mode, specifically developed for branch office deployments
Replication Planning: KCC/ISTG
- ISTG = Inter-Site Topology Generator
  - Computes the least-cost spanning tree for the inter-site replication topology
- Does not require the ISM service (Windows 2000: the ISTG uses the ISM service)
- Runs every 15 minutes by default
Replication Planning: KCC/ISTG
- Vastly improved inter-site topology generation (KCC/ISTG) scalability
  - Complexity: approximately O(d*s), where d = number of domains and s = number of sites (Windows 2000: approximately O(d*s²))
  - Scales to more than 5,000 sites
- Still single-threaded – uses only one CPU on SMP DCs
  - Performance: 4,000 sites in 10 secs (700 MHz test system); ongoing tests in the scalability lab
- Can generate a different topology than the Windows 2000 KCC/ISTG; requires Windows 2003 forest functional level
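A rough back-of-the-envelope comparison of the two complexity classes (an assumed simplification for illustration only — the real algorithms do far more than count sites):

```shell
# Compare approximate KCC/ISTG work for d domains and s sites.
d=1
s=1000
w2k=$(( d * s * s ))    # Windows 2000: ~O(d*s^2)
w2k3=$(( d * s ))       # Windows 2003: ~O(d*s)
echo "Windows 2000 : $w2k work units"
echo "Windows 2003 : $w2k3 work units"
```

At 1,000 sites, the new algorithm does on the order of one thousandth of the old algorithm's work, which is why topology generation now scales past 5,000 sites.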
Replication Planning: Inter-Site Messaging Service (ISM)
- Creates the cost matrix for inter-site replication
- Sends and receives SMTP messages if SMTP replication is used
- Recalculates the matrix
  - When the ISM service starts up
  - When changes happen in the site configuration (new sites, site-links, site-link bridges)
  - Every 24 hours
- The information is used by Netlogon (auto-site coverage), the load-balancing tool, universal group caching, and DFS
- Has a performance impact
Replication Planning: ISM – Performance Impact

  Sites    Runtime (secs)    Working set (MB)    Virtual bytes (MB)
    500          4                  18                   57
  1,000          7                  60                  100
  1,500         16                 130                  170
  2,000         22                 227                  275
  3,000         56                 456                  894
  4,000        174*                540                  893

Reference hardware: 1.7 GHz, 512 MB RAM
Replication Planning: ISM Recommendations
- Up to 2,000 sites: use the ISM service
- More than 2,000 sites
  - Evaluate the hardware platform (RAM, CPU); use ISM if the DCs do not start swapping
  - Evaluate which services are not available without the ISM service — is domain-based DFS needed?
Replication Planning: Bridgehead Server Selection
- Windows 2000: on a per-site basis, for each domain, one DC per NC is used as bridgehead
- Windows 2003: on a per-site basis, for each domain, all DCs per NC are used as bridgeheads
  - The KCC picks a DC randomly among the bridgehead candidates when a connection object is created, for both incoming and outgoing connection objects
Replication Planning: Bridgehead Server Load-Balancing
- The KCC/ISTG randomly chooses a bridgehead server for both incoming and outgoing replication
- Once a connection object is established, it is not rebalanced when changes happen — adding new servers does not affect existing connection objects
- Has to be used with care in branch office deployments: it is necessary to control which servers are used as bridgehead servers
- Recommendation: use a preferred bridgehead server list and the load-balancing tool
Replication Planning: Preferred Bridgehead Server List
- Some servers should not be used as bridgeheads: the PDC operations master, Exchange-facing GCs, authentication DCs, weak hardware
- Solution: the preferred bridgehead server list
  - Allows the administrator to restrict which DCs can be used as bridgehead servers
  - If a preferred bridgehead server list is defined for a site, the KCC/ISTG will only use members of the list as bridgeheads
- Warning: if a preferred bridgehead server list is defined, make sure there are at least 2 DCs per NC in the list
  - If there is no DC for a specific NC in the list, that NC will not replicate out of the site
  - Don’t forget application partitions
  - If branches have GCs, all bridgeheads should be GCs
Replication Planning: Active Directory Load Balancing Tool (ADLB)
- ADLB complements the KCC/ISTG
  - Real load balancing of connection objects
  - Staggers schedules using a 15-minute interval
  - Hub-outbound replication only (hub-inbound replication is serialized)
- Does not interfere with the KCC — the KCC is still needed / a prerequisite
  - The tool does not create manual connection objects, but modifies the “from-server” attribute on KCC-created connection objects
- Can create a preview, which allows using the tool as an advisor
- Single exe / command-line tool
  - Runs on a single server / workstation; uses the ISTG in the hub site to re-balance connection objects
- Not needed for fault tolerance, only as an optimization; can be run on any schedule
Replication Planning: KCC Redundant Connection Objects Mode
- Goal: create a stable, simple, and predictable replication topology (like the mkdsx scripts for Windows 2000)
- Enabled on a per-site level
- Implementation
  - Creates two redundant connection objects: each branch site replicates from two different bridgehead servers, and two different bridgehead servers replicate from each branch site
  - The replication schedule is staggered between the connection objects
  - Automatic fail-over is disabled — if replication from one bridgehead fails, the branch can still replicate from the other bridgehead
  - Schedule hashing is enabled: inbound connections start replication at a random time inside the replication window
  - Only DCs in the same site are used for redundant connection objects; demoting a DC causes the KCC to create a new connection object
Replication Planning: KCC Redundant Connection Objects Mode
- Schedule for redundant connection objects: use the schedule defined on the site-link
  - E.g., window open 8pm to 2am, replicate once every 180 minutes (= 2 replications)
  - Divide by 2 and stagger: connection object 1 replicates once between 8pm and 11pm; connection object 2 replicates once between 11pm and 2am
  - The second replication usually causes little network traffic
- Monitoring becomes even more critical: it is important to act quickly if a hub DC becomes unavailable
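The divide-and-stagger arithmetic above can be sketched directly (using the slide's example numbers as assumed inputs):

```shell
# Stagger a site-link schedule across two redundant connection objects.
WINDOW_MIN=360       # window open 8pm-2am = 360 minutes
INTERVAL_MIN=180     # replicate once every 180 minutes
REPLS=$(( WINDOW_MIN / INTERVAL_MIN ))   # replications per window = 2
HALF=$(( WINDOW_MIN / 2 ))               # each connection object gets half the window
echo "$REPLS replications per window"
echo "connection object 1: replicates once in minutes 0-$HALF of the window (8pm-11pm)"
echo "connection object 2: replicates once in minutes $HALF-$WINDOW_MIN (11pm-2am)"
```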
Replication Planning: KCC Redundant Connection Objects Mode (diagram)
- HUB site with bridgeheads BH1 and BH2
- Site Link 1 (HUB–Branch01) and Site Link 2 (HUB–Branch02): duration 8h, replicate every 240 min
- The replication window is open from 0:00 to 8:00 a.m.
- BranchDC01 (Branch01) replicates at 0:00–0:15, 2:00–2:15, 4:00–4:15, and 6:00–6:15
- BranchDC02 (Branch02) replicates at 0:16–0:30, 2:16–2:30, 4:16–4:30, and 6:16–6:30
Replication Planning: Recommendations for Sites, Site-Links, and Topology
- Create a single site for the hub; leverage KCC load-balancing between bridgehead servers
- Create site-links between branch office sites and the hub site — no redundant site-links or connection objects are needed
- Disable transitivity of site-links, not only for performance, but also to avoid branch-to-branch fail-over connection objects
- Disable auto-site coverage
- Use both the ISM and KCC/ISTG services
- Use KCC redundant connection objects mode
- Use ADLB to load-balance connection objects
- Use universal group caching to remove the requirement for a GC in the branch, unless a branch application requires a GC
Capacity Planning
- Branch office DCs: usually low load only — use minimum hardware
- Datacenter DCs: depends on usage; see the Windows 2003 Deployment Kit for DC capacity planning
- Bridgehead servers require planning
Capacity Planning: Formulas to Compute the Number of Bridgeheads
- Hub-outbound replication is multi-threaded; hub-inbound replication is single-threaded
- Hub outbound: OC = (H * O) / (K * T)
  - OC = outbound connections one bridgehead can serve
  - H = sum of hours available for outbound replication
  - O = concurrent connection objects
  - K = number of replications required per day
  - T = time necessary for one outbound replication (usually one hour)
- Hub inbound: IC = R / N
  - IC = inbound connections one bridgehead can serve
  - R = length of the replication window in minutes
  - N = time needed for one inbound replication, in minutes
Capacity Planning: Example
- Requirements
  - Replication twice a day (K = 2)
  - WAN available 8 hours (H = 8), i.e. 480 minutes (R = 480)
  - High-performance hardware: 100 concurrent connections (O = 100)
  - Outbound replication will always finish within 1 hour (T = 1)
  - DOD lines – conservative replication time hub-inbound = 2 minutes (N = 2)
- Applying the formulas
  - Outbound: OC = (H * O) / (K * T) = (8 * 100) / (2 * 1) = 400
  - If the number is too high / low, change the parameters: e.g., WAN available for 12 hours: 600 branches; replicating only once a day: 800 branches
  - Inbound: IC = R / N = 480 / 2 = 240 branches
- One bridgehead server can support 240 branches (the lower of the two numbers)
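The worked example can be checked with a few lines of arithmetic (same input values as the slide):

```shell
# Bridgehead capacity formulas with the slide's example values.
H=8     # hours per day the WAN window is open for outbound replication
O=100   # concurrent outbound connections (high-performance hardware)
K=2     # required replications per day
T=1     # hours needed for one outbound replication cycle
R=480   # length of the inbound replication window in minutes (8 hours)
N=2     # conservative minutes per inbound (serialized) replication

OC=$(( (H * O) / (K * T) ))   # outbound connections one bridgehead can serve
IC=$(( R / N ))               # inbound connections one bridgehead can serve
echo "outbound: $OC branches, inbound: $IC branches"
```

Since inbound replication is serialized, the inbound figure (240) is the binding limit per bridgehead.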
Capacity Planning: Bridgehead Server Overload
- Causes: unbalanced site-links, unbalanced connection objects, a replication schedule that is too aggressive, panic trouble-shooting
- Symptoms
  - The bridgehead cannot service replication requests as fast as they come in; replication queues are growing
  - Some DCs NEVER replicate from the bridgehead: once a server has successfully replicated from the bridgehead, its requests are prioritized higher than those of a server that has never successfully replicated
- Monitoring
  - Repadmin /showreps shows NEVER for the last successful replication
  - Repadmin /queue
Capacity Planning: Bridgehead Server Overload – Solution
- Turn off the ISTG — this prevents new connections from being generated
- Delete all inbound connection objects
- Correct the site-link balance and schedule
- Enable the ISTG again
- Monitor AD and FRS replication for recovery
Monitoring Design
- Monitoring is a must for any Active Directory deployment
  - DCs that are not replicating will be quarantined; DCs might have stale data
  - Not finding issues early can lead to more problems later — e.g., a DC does not replicate because of name resolution problems, and then its password expires
- Use MOM for the datacenter / hub site: monitor replication, name resolution, and performance
- The Windows Server 2003 Deployment Kit ships with Toolsmanager
  - A system to push and run scripts on branch DCs; results are copied to a central server
  - A web page presents a red/yellow/green state per server
- Evaluate available monitoring tools: MOM and third parties
Deployment – Overview
- Revisits the branch office scenario diagram from the beginning of the deck: data-center site (root, HQ, hub, and bridgehead DCs, plus the MOM and monitoring servers), staging site (STAGINGDC1, TSDCSERVER), and branch office sites BOSite1–BOSiten, each with one DC (BODC1–BODCn)
Deploying Non-Branch Domains
- Not different from a normal deployment; documented in the Windows 2003 Deployment Kit
- Build the forest root domain
- Create all sites (incl. branches)
- Build other non-branch domains as needed
Deploying the Branches Domain in the Hub Site
- Install the operations master
- Install the bridgehead servers
- Install and configure ADLB
- Modify the domain GPO for DNS settings (auto-site coverage)
- Configure the DNS zone for NS records
- Create the branches DNS GPO (SRV record registration)
Deploying the Staging Site
- The staging site has special characteristics
  - All replication topology must be created manually — the KCC is turned off, both inter- and intra-site (scripts will be provided)
  - Should not register DNS NS records
- Create manual connection objects between the staging site and production; the staging DC needs to be able to replicate 24/7
- Install Automated Deployment Services (ADS); create the image for branch DCs pre-promotion
Deploying Branch Sites
- Build branch DCs in the staging site from the image
- Run quality assurance scripts (provided)
- Move the branch DC into the branch site
- Ship the DC
General Considerations for Branch Office Deployments
- Ensure that the hub is a robust data center
- Monitor the deployment; use MOM for hub sites
- Do not deploy all branch office domain controllers simultaneously
  - Monitor the load on bridgehead servers as more and more branches come online
  - Verify DNS registrations and replication
- Balance the replication load between bridgehead servers
- Keep track of hardware and software inventory and versions
- Include operations in the planning process
  - Monitoring plans and procedures
  - Disaster recovery and troubleshooting strategy
  - Personnel assignment and training
Summary
- Windows 2003 has many improvements for branch office deployments
  - New KCC algorithm: no more scalability limit
  - KCC redundant connection object mode provides stability
  - Less replication traffic through LVR replication and DNS in application partitions
- Deployments are much easier to manage
  - No manual connection object management
  - GPO for DNS locator settings
  - No more island problem
  - Bridgehead servers are more scalable
- The Branch Office Guide will have step-by-step procedures and tools for deployment
- The total cost of deployment will be much lower
Branch Office Deployment on Windows Server 2003: AD Scale Lab
Goals for deployment testing
- Deploy 3,500 domain controllers
- Provide feedback for developing the Branch Office Guide for Windows Server 2003
- Automate deployment to achieve the least amount of manual intervention
- Use ADS for automated imaging across 3,500 machines
- Monitor the datacenter site using MOM
- Monitor all branch offices with a set of scripts to verify basic functionality and generate a report at a common location
- Monitor FRS
- Deploy in 6 weeks
Branch Office Deployment Servers (diagram)
- HUB site: root DCs, bridgehead DCs, datacenter TM server, MOM server
- Staging site: staging site TM server, ADS server, TSDC server, staging site DC, staged DCs
- Branch Office Site 1 and Branch Office Site 2: receive the staged DCs moved out of the staging site
Keys to a Successful Deployment
- Plan well
- Double-check to avoid simple human errors
- Progress with a controlled set of changes
- Monitoring is essential
Community Resources
- Community resources: http://www.microsoft.com/communities/default.mspx
- Most Valuable Professional (MVP): http://www.mvp.support.microsoft.com/
- Newsgroups — converse online with Microsoft newsgroups, including worldwide: http://www.microsoft.com/communities/newsgroups/default.mspx
- User groups — meet and learn with your peers: http://www.microsoft.com/communities/usergroups/default.mspx
© 2003 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.