Switch Datacenter

Introduction

Datacenter switchovers are a standard response to certain types of disasters, where traffic is shifted from one site to another. Technology organisations regularly practice them to ensure that tooling and hardware will respond appropriately in case of an emergency. Moreover, switching between datacenters makes room for potentially disruptive maintenance work on inactive servers, such as database upgrades/changes, hardware replacement etc. In other words, while we're serving traffic from an active datacentre, we are doing our regular upkeep work on the inactive one to maintain its efficiency and reliability.

What?

At Wikimedia, a datacentre switchover means switching over different components between our two main datacentres: eqiad and codfw.

When?

We perform two datacenter switchovers annually, during the week of the solar equinox:

  • Northward: ~21st March
  • Southward: ~21st September

Our switchover process is broken down into stages, where some can progress independently, while others need to progress in lockstep. This page documents all the steps needed for this work, broken down by component. SRE/Service_Operations is driving the process and maintains the software necessary to run the switchover, with a little help from their friends.

Impact

Impact of a switchover is expected to be 2-3 minutes of read-only for MediaWiki, including extensions. Any other services/features/infrastructure not participating directly in the Switchover will continue to work as normal. However, anything relying on MediaWiki indirectly (e.g. via some data pipeline) may experience some minor impact, for example a delay in receiving events. This is expected.

What does read-only mean?

Read-only is a two-step process: we first set MediaWiki itself read-only, and then the MediaWiki databases. We allow some time between the two to let the last in-flight edits land safely. All read-only functionality will continue to function as usual.

During read-only, any kind of write reaching our MediaWiki databases (UPDATE, DELETE, INSERT in SQL terms) will be denied. Additionally, any features ignoring the global MediaWiki read-only configuration will not function during this time window. This scheduled read-only period adds about 0.001% MediaWiki edit unavailability per year.
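
For illustration only (this is an assumption about the database layer, not part of the switchover tooling): a stray write hitting a read-only MariaDB master is rejected with an error along these lines, as seen by a non-privileged client such as the MediaWiki DB user. The exact wording varies by version, and accounts with SUPER may bypass read_only.

# As a non-privileged client; accounts with SUPER may bypass read_only
MariaDB [enwiki]> DELETE FROM page WHERE page_id = 0;
ERROR 1290 (HY000): The MariaDB server is running with the --read-only option so it cannot execute this statement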

Note: Non-MediaWiki databases are not part of the switchover.

High Level switchover flow

Scheduling details

Datacenter Switchovers take place during the work week of a Solar Equinox, where we assume that the Northward Solar Equinox happens on March 21st and the Southward Solar Equinox on September 21st. This intentionally does not match the astronomical event exactly.

A controlled switchover occurs in a span of 8 days:

Day 1 - Tuesday: Traffic+Services

The non-read-only parts of the Switchover always take place on Tuesday. This process is non-disruptive and lower risk; it may be scheduled @ 14:00 UTC, but that is not required.

  • Traffic: Disable caching in the origin datacenter - Switch_Datacenter#Traffic
    • ~20 minutes for disabling caching completely from origin dc_from datacentre
  • Services: Depool services in the origin datacenter to destination - Switch_Datacenter#Services
    • ~15-40 minutes to switchover services to destination dc_to
      • Leave active/active services pooled only to destination dc_to
      • Switchover active/passive services from origin dc_from to destination dc_to

Day 2 - Wednesday: MediaWiki

The MediaWiki Switchover (read-only) will always take place on the Wednesday of the above-mentioned week. During read-only (2-3 minutes), no wikis will be editable and editors will see a warning message asking them to try again later. Read-only starts @ 14:00 UTC. Readers should experience no changes for the entirety of the event.

  • Switch Mediawiki itself to destination datacentre Switch_Datacenter#MediaWiki
    • ~35 minutes for a complete run of the cookbook, from disabling puppet to re-enabling it, if timed right for the read-only part of the cookbook to fall at the start of the announced window. Doing it in an emergency can be done faster since there is no need to wait for a set time.

Note: For the next 7 calendar days after the MW read-only phase, traffic will flow solely to one datacentre (the destination), rendering the other datacenter effectively inactive.

Day 3 - Thursday: Deployment Server + Special cases

At your convenience, after coordinating with deployers, you may switch the Special cases.

Day 8 - Wednesday: Pool back inactive DC

A week later, we activate caching in the inactive datacenter again: we repool the inactive DC and traffic starts flowing in the normal Multi-DC mode. This period may be extended, depending on how maintenance work progresses at the inactive DC.

Note: As of September 2023, we are running each datacenter as primary for half of the year. The 2 data centers are considered coequal, alternating roles every 6 months.

Weeks in advance: communication, testing, and preparation

Communication - 10 weeks before

See Switch_Datacenter/Coordination, coordinate dates and communication plan with involved groups.

Testing - 3 weeks before

Run a "live test" of the MediaWiki cookbook, and a dry-run for everything.

Depending on what changes have occurred to our infrastructure/production since the previous switchover, code changes in the cookbooks are expected. The purpose of the live-test and the dry-run is to exercise most of the existing and updated codepaths and identify potential issues there.

Note: Always use the --dry-run flag when running cookbooks for testing purposes.

Live Test

A live test (the --live-test flag) will skip actions that could harm the primary DC, or perform them on the secondary DC instead, and is available only for the sre.switchdc.mediawiki cookbook. Be careful: in a live test we "switch" from the currently secondary DC to the currently primary DC. While the live-test process will log your actions to SAL, please remember to announce in #wikimedia-sre and #wikimedia-operations too that you will be running this test. Unless something goes really badly, this is a non-disruptive test.

For example, if currently our primary DC is codfw and for the upcoming switchover we will be switching to eqiad, the direction for a live test is eqiad→codfw:

cumin1002:~# cookbook --dry-run sre.switchdc.mediawiki
<entering cookbook menu>

> 00-disable-puppet --live-test eqiad codfw
> 00-reduce-ttl     --live-test eqiad codfw

Limitations: The 03-set-db-readonly cookbook will fail if circular replication is not already enabled everywhere. It can be skipped if the live-test is run before circular replication is enabled. Please check with Data Persistence if you need to run this test or not.

Dry Run

A dry-run is available for both cookbooks we use during a switchover; sre.switchdc.mediawiki and sre.discovery.datacenter. During a dry-run, the direction is the one we have announced.

For example, if we are currently on codfw, switching over to eqiad, a dry-run's direction would be codfw→eqiad, as follows:

cumin1002:~# cookbook --dry-run sre.switchdc.mediawiki
<entering cookbook menu>

> 00-disable-puppet codfw eqiad
> 00-reduce-ttl     codfw eqiad
cumin1002:~# cookbook --dry-run sre.discovery.datacenter depool codfw \
                      --all --reason "Datacenter services switchover dry-run" \
                      --task-id T357547

Preparation - a few days before

Data Persistence checklist:

  • There is no ongoing long-running maintenance that affects database availability or lag (schema changes, upgrades, hardware issues, etc.)
  • Replication is flowing from eqiad -> codfw and from codfw -> eqiad (a sample check is sketched after this list)
  • All database servers have their buffer pools filled up. This is taken care of automatically by the buffer pool warmup functionality. As a sanity check, some sample load can be sent to the MediaWiki application servers to check that requests complete as quickly as in the active datacenter.
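
A minimal manual sketch of the replication check, run on a core master in each DC; Data Persistence normally verifies this with their own tooling, so treat this only as an illustration:

# Confirm the replica thread from the other DC is running and caught up
sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'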

Service Operations checklist:

Per-service switchover instructions

Traffic

For general procedures see: Global traffic routing.

Day 1: Depool source datacentre

Make sure you have gone through Testing and Preparation, including patches.

GeoDNS (User-facing) Routing:

  1. Depool traffic from the source DC (DNS): C+2 and Submit the prepared patch (a sketch of the change follows this list)
  2. dns1004:~# authdns-update -- This will propagate the change to all nameservers
  3. !log Traffic: depool eqiad from user traffic - Log to AWQ
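
The DNS patch itself lives in the operations/dns repository. As a hedged sketch (see Global traffic routing for the authoritative procedure), depooling eqiad from user-facing GeoDNS amounts to marking it administratively DOWN in the admin_state file:

# operations/dns admin_state (resource name per Global traffic routing -- verify before use)
geoip/generic-map/eqiad => DOWN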

Day 8: Switch to Multi-DC again

Same procedure as above, after reverting the relevant commit.

Dashboards

Services

General procedure

For a global switchover we use the sre.discovery.datacenter cookbook to depool all services from a DC:

  • active-active services in DNS discovery will be depooled from said DC
  • active/passive ones will be switched over to the alternative DC, per user input

However, there are a few services we completely exclude from this process. These are hardcoded in the sre.discovery.datacenter cookbook.

What the cookbook does (a manual per-service sketch follows this list):

  1. Reduce the TTL of the DNS discovery records to 10 seconds
  2. Depool the datacenter we're moving away from in confctl / discovery
  3. Restore the original TTL
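
For orientation, the per-service state the cookbook manipulates lives in conftool's discovery objects. A hedged manual sketch for a single active/active service (the service name is a placeholder; in practice let the cookbook do this):

# Illustrative only -- the cookbook automates this for every service
sudo confctl --object-type discovery select 'dnsdisc=<service>,name=eqiad' get
sudo confctl --object-type discovery select 'dnsdisc=<service>,name=eqiad' set/pooled=false
dig +short <service>.discovery.wmnet    # should now resolve via the remaining DC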


Day 1: Depooling source DC

Make sure you have gone through Testing and Preparation, including patches.

Before depooling any service, do not forget to review (and copy/paste) the current status of all services by running:

sudo cookbook sre.discovery.datacenter status all

The following command will depool all active/active services from a DC, and will prompt to move or skip the active/passive ones.

# Switch all services to codfw
$ sudo cookbook sre.discovery.datacenter depool eqiad --all --reason "Datacenter Switchover" --task-id T12345

Day 8: Switch to Multi-DC again

The following command will repool all active/active services to a DC, and will prompt to move or skip the active/passive ones.

# Repool eqiad
$ sudo cookbook sre.discovery.datacenter pool eqiad --all --reason "Datacenter switch to Multi-DC" --task-id T12345


MediaWiki

We divide the process in logical phases that should be executed sequentially. Within any phase, top-level tasks can be executed in parallel to each other, while subtasks are to be executed sequentially to each other. The phase number is referred to in the names of the tasks in the operations/cookbooks repository, in the cookbooks/sre/switchdc/mediawiki/ path.

Day 2: MediaWiki Switchover

Make sure you have gone through Testing and Preparation, including patches.
Audible indicator: Put Listen to Wikipedia in the background during the switchover. Silence indicates read-only; when it starts to make sounds again, edits are back up.

Execution tip: The best way to run this multi-step cookbook is to start it in interactive mode from the cookbook root:
sudo cookbook sre.switchdc.mediawiki --ro-reason 'DC switchover (TXXXXXX)' codfw eqiad

and proceed through the steps.
Start the following steps about 30-60 minutes before the scheduled switchover time, in a tmux or a screen.

Phase 0 - preparation

  1. Manual StatusPage: Add a scheduled maintenance (Maintenances -> Schedule Maintenance)
  2. Manual scap lock: Add a scap lock on a separate tmux/screen on the deployment server. This will block any scap deployments, and it will stay there waiting for your input to unlock it. scap lock --all "Datacenter Switchover - T12345"
  3. 00-disable-puppet.py: Disables puppet on maintenance hosts in both eqiad and codfw
  4. 00-reduce-ttl.py: Reduces TTL for various DNS discovery entries. Make sure that at least 5 minutes (the old TTL) have passed before moving to Phase 1; the cookbook should force you to wait anyway (a quick check is sketched after this list).
  5. (Optional, can be skipped) 00-warmup-caches.py: Warms up APC by running the mediawiki-cache-warmup against the new site's clusters. The warmup queries will repeat automatically until the response times stabilize:
    • The global "urls-cluster" warmup against the appservers cluster
    • The "urls-server" warmup against all hosts in the appservers cluster.
    • The "urls-server" warmup against all hosts in the api-appservers cluster.
  6. 00-downtime-db-readonly-checks.py: Sets downtime for the read-only checks on the MariaDB masters changed in Phase 3 so they don't page.
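
Before asking for the GO, you can double-check that the lowered TTL has propagated; a minimal sketch, assuming the usual <name>.discovery.wmnet naming (the record shown is an example):

# The TTL column of the answer should read 10 (seconds) before moving on to Phase 1
dig +noall +answer appservers-rw.discovery.wmnet
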
Stop for GO/NOGO: Ask your peers for Go or NoGo

Phase 1 - stop maintenance

  • 01-stop-maintenance.py: Stops maintenance jobs in both datacenters and kills all the periodic jobs (systemd timers) on maintenance hosts in both datacenters. Keep in mind there is a chance a manual job is running. Check again with your peers; usually the way forward is to kill the job by force.
Final GO/NOGO before read-only: This is the point of no return. The following steps until Phase 7 need to be executed in quick succession to minimise read-only time.

Phase 2 - read-only mode

  • 02-set-readonly.py: Sets read-only mode by changing the ReadOnly conftool value

Phase 3 - lock down database masters

  • 03-set-db-readonly.py: Puts the origin DC's (DC_FROM) core DB masters (shards: s1-s8, x1, es4-es5) in read-only mode and waits for the destination DC's (DC_TO) databases to catch up with replication

Phase 4 - switch active datacenter configuration

  • 04-switch-mediawiki.py: Switches the discovery records and the MediaWiki active datacenter
    • Flips MEDIAWIKI_SERVICE (appservers-rw, api-rw, jobrunner, videoscaler, parsoid-php) to pooled=true in the destination DC
    • Flips WMFMasterDatacenter from DC_FROM to DC_TO
    • Flips MEDIAWIKI_SERVICE to pooled=false in the source DC

After this, DNS will be changed for the source DC and internal applications (except MediaWiki) will start hitting the new DC.

Phase 5 - DEPRECATED - Invert Redis replication for MediaWiki sessions

Phase 6 - Set new site's databases to read-write

  • 06-set-db-readwrite.py: Sets the destination DC's core DB masters (shards: s1-s8, x1, es4-es5) to read-write mode

Phase 7 - Set MediaWiki to read-write

  • 07-set-readwrite.py: Goes back to read-write mode by changing the ReadOnly conftool value

You are now out of read-only mode.

Take a breath, smile!

Phase 8 - Restore rest of MediaWiki

  1. 08-restart-envoy-on-jobrunners.py: Restarts pods on the (now) inactive jobrunners, triggering changeprop to re-resolve the DNS name and connect to the destination DC
    • A steady rate of 500s is expected until this step is completed, as changeprop may still be sending edits to the source DC, where the database master will reject them.
  2. 08-start-maintenance.py: Starts maintenance in the destination DC (a sanity check is sketched after this list)
    • Runs puppet on the maintenance hosts, which will reactivate systemd timers in destination DC
    • Most Wikidata-editing bots will restart once this is done and the "dispatch lag" has recovered. This should bring us back to 100% of editing traffic.
  3. (Manual) StatusPage: End the planned maintenance
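
As a sanity check for the maintenance restart, you can confirm the periodic jobs are back; a minimal sketch, assuming shell access to the maintenance host in the new primary DC:

# After puppet has run, the periodic jobs should reappear as active timers (exact unit names may differ)
systemctl list-timers | grep -i mediawiki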

Phase 9 - Post read-only

  1. Set the TTL for the DNS records to 300 seconds again: 09-restore-ttl.py
  2. Update DNS records for the new database masters, deploying eqiad->codfw and codfw->eqiad. This is not covered by the switchdc script. Please SAL-log it with: !log Phase 9.5 Update DNS records for new database masters
  3. Run Puppet on the database masters in both DCs, to update the expected read-only state: 09-run-puppet-on-db-masters.py. This will remove the downtimes set in Phase 0 (a quick check is sketched after this list).
  4. Make sure the CentralNotice banner informing users of read-only is removed. Keep in mind there is some minor HTTP caching involved (~5 minutes).
  5. Cancel the scap lock. You will need to go back to the terminal where you added the lock and press enter. This is not covered by the switchdc script.
  6. Re-order noc.wm.o's debug.json to have the primary servers listed first (see T289745) and backport it using scap. This will test scap2 deployment. This is not covered by the switchdc script.
  7. Update the maintenance server DNS records (eqiad -> codfw). This is not covered by the switchdc script.
  8. Reorder the data centers in the default stanza for geomaps. Make sure the new primary DC is set first. This is not covered by the switchdc script. This can happen days after the switchover.
    • Note: The default only affects a small portion of traffic, so this is mostly about logical consistency (when we have no idea what to do, we prefer the primary DC).
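
As an optional follow-up, you can confirm the read-only flag on the core masters themselves; a minimal sketch using the standard MariaDB global variable, run on a core master in each DC:

# Should return 0 on the new primary DC's masters and 1 on the old primary DC's masters
sudo mysql -e "SELECT @@global.read_only"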

Phase 10 - verification and troubleshooting

This is not covered by the switchdc script.

  1. Make sure reading & editing works! :)
  2. Make sure recent changes are flowing (see Special:RecentChanges, EventStreams, and the IRC feeds), e.g.: curl -s -H 'Accept: application/json' https://stream.wikimedia.org/v2/stream/recentchange | jq .
  3. Make sure email works (sudo -i; sudo exim4 -bp | exiqsumm | tail -n 5 on mx1001/mx2001; it should fluctuate between 0m and a few minutes) and test an email.


Dashboards

ElasticSearch

General context on how to switchover

CirrusSearch talks by default to the local datacenter ($wmgDatacenter). No special actions are required when disabling a datacenter.

Manually switching CirrusSearch to a specific datacenter can always be done. Point CirrusSearch to codfw by editing wgCirrusSearchDefaultCluster in ext-CirrusSearch.php.

To ensure coherence in case of lost updates, a reindex of the pages modified during the switch can be done by following Recovering from an Elasticsearch outage / interruption in updates.

Dashboards


Special cases

Exclusions

Exclusions have been implemented in the Switchover cookbook. The next section is still around for historical and informational purposes. While it will probably not be needed, it's still useful information to have around.

If you need to exclude specific services, using the old sre.switchdc.services cookbook is still necessary until exclusion is implemented:

# Switch all services to codfw, excluding parsoid and cxserver
$ sudo cookbook sre.switchdc.services --exclude parsoid cxserver -- eqiad codfw

Single service

If you are switching only one service, using the old sre.switchdc.services cookbook is still necessary:

# Switch the service "parsoid" to codfw-only
$ sudo cookbook sre.switchdc.services --services parsoid -- eqiad codfw

apt

In the March 2023 Switchover, we identified issues with apt.wikimedia.org being switched over. As of the September 2023 Switchover, those haven't been solved yet and apt.wikimedia.org won't participate in the Switchover.

apt.wikimedia.org needs a puppet change.

restbase-async

As of September 2023, this is no longer needed. We leave restbase-async pooled in both DCs from now on. This section is kept in the doc for historical purposes for now.

Restbase-async is a bit of a special case, being pooled active/passive with the active in the secondary datacenter. As such, it needs an additional step if we're just switching active traffic over and not simulating a complete failover:

  1. pool restbase-async everywhere
    sudo cookbook sre.discovery.service-route --reason T123456 pool --wipe-cache $dc_from restbase-async
    sudo cookbook sre.discovery.service-route --reason T123456 pool --wipe-cache $dc_to restbase-async
    
  2. depool restbase-async in the newly active dc, so that async traffic is separated from real-users traffic as much as possible.
    sudo cookbook sre.discovery.service-route --reason T123456 depool --wipe-cache $dc_to restbase-async
    

When simulating a complete failover, keep restbase pooled in $dc_to for as long as possible to test capacity, then switch it to $dc_from by using the above procedure.

As it is async, we trade the added latency from running it in the secondary datacenter for the lightened load on the primary datacenter's appservers.

Manual switch

These services require manual changes to be switched over and have not yet been included in service::catalog (a verification sketch follows this list):

  • planet.wikimedia.org
    • The DNS discovery name planet.discovery.wmnet needs to be switched from one backend to another as in example change gerrit:891369. No other change is needed.
  • people.wikimedia.org
    • In puppet hieradata the rsync_src and rsync_dst hosts need to be flipped as in example change gerrit:891382.
    • FIXME: manual rsync command has to be run
    • The DNS discovery name peopleweb.discovery.wmnet needs to be switched from one backend to another as in example change gerrit:891381.
  • noc.wikimedia.org: This is no longer applicable as of September 2023; noc.wikimedia.org is now active/active in mw-on-k8s.
    • The noc.wikimedia.org DNS name points to DNS discovery name mwmaint.discovery.wmnet that needs to be switched from one backend to another as in example change gerrit:896118. No other change is needed.
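
After merging one of the example changes above, a quick way to confirm the flip took effect is to resolve the discovery name; a hedged sketch using one of the records mentioned above:

# Should now return the address of the newly active backend (allow for the record's TTL)
dig +short planet.discovery.wmnet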

Dashboards

Databases

Main document: MariaDB/Switch Datacenter

Other miscellaneous

Predictable, Recurring Switchovers

A few months after the Switchback of 2023, and following a feedback-gathering process, a proposal to move to a predictable set of switchover dates, while also increasing the Switchover duration to 6 months, was adopted and turned into a process. The document can be found in the link below:

Recurring, Equinox-based, Data Center Switchovers

Upcoming Switches

See Switch Datacenter/Switchover Dates for a pre-calculated list up to 2050.

March 2024

Past Switches

2024 switches

March

  • Services + Traffic: Tuesday, March 19th, 2024 14:00 UTC
  • MediaWiki: Wednesday, March 20th, 2024 14:00 UTC
  • Read only: 3 minutes 8 seconds

2023 switches

September
February

Reports

  • Recap
  • Read only: 1 minute 59 seconds


Switching back:

Schedule

Reports

  • Read only: 3 minutes 1 second

2021 switches

Schedule
Reports

Switching back:

Reports

2020 switches

Schedule
  • Services: Monday, August 31st, 2020 14:00 UTC
  • Traffic: Monday, August 31st, 2020 15:00 UTC
  • MediaWiki: Tuesday, September 1st, 2020 14:00 UTC
Reports

Switching back:

  • Traffic: Thursday, September 17th, 2020 17:00 UTC
  • MediaWiki: Tuesday, October 27th, 2020 14:00 UTC
  • Services: Wednesday, October 28th, 2020 14:00 UTC

2018 switches

Schedule
  • Services: Tuesday, September 11th 2018 14:30 UTC
  • Media storage/Swift: Tuesday, September 11th 2018 15:00 UTC
  • Traffic: Tuesday, September 11th 2018 19:00 UTC
  • MediaWiki: Wednesday, September 12th 2018: 14:00 UTC
Reports

Switching back:

Schedule
  • Traffic: Wednesday, October 10th 2018 09:00 UTC
  • MediaWiki: Wednesday, October 10th 2018: 14:00 UTC
  • Services: Thursday, October 11th 2018 14:30 UTC
  • Media storage/Swift: Thursday, October 11th 2018 15:00 UTC
Reports

2017 switches

Schedule
  • Elasticsearch: elasticsearch is automatically following mediawiki switch
  • Services: Tuesday, April 18th 2017 14:30 UTC
  • Media storage/Swift: Tuesday, April 18th 2017 15:00 UTC
  • Traffic: Tuesday, April 18th 2017 19:00 UTC
  • MediaWiki: Wednesday, April 19th 2017 14:00 UTC (user visible, requires read-only mode)
  • Deployment server: Wednesday, April 19th 2017 16:00 UTC
Reports

Switching back:

Schedule
  • Traffic: Pre-switchback in two phases: Mon May 1 and Tue May 2 (to avoid cold-cache issues Weds)
  • MediaWiki: Wednesday, May 3rd 2017 14:00 UTC (user visible, requires read-only mode)
  • Elasticsearch: elasticsearch is automatically following mediawiki switch
  • Services: Thursday, May 4th 2017 14:30 UTC
  • Swift: Thursday, May 4th 2017 15:30 UTC
  • Deployment server: Thursday, May 4th 2017 16:00 UTC
Reports

2016 switches

Schedule
  • Deployment server: Wednesday, January 20th 2016
  • Traffic: Thursday, March 10th 2016
  • MediaWiki 5-minute read-only test: Tuesday, March 15th 2016, 07:00 UTC
  • Elasticsearch: Thursday, April 7th 2016, 12:00 UTC
  • Media storage/Swift: Thursday, April 14th 2016, 17:00 UTC
  • Services: Monday, April 18th 2016, 10:00 UTC
  • MediaWiki: Tuesday, April 19th 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
Reports

Switching back:

  • MediaWiki: Thursday, April 21st 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
  • Services, Elasticsearch, Traffic, Swift, Deployment server: Thursday, April 21st 2016, after the above is done

Monitoring Dashboards

Aggregated list of interesting dashboards