Perforce Public Knowledge Base - Failing over to a replica server



Failing over to a replica server




A replica server is being maintained as a standby server for disaster recovery. In the event the master server goes down, how do I failover to the standby server?


These steps assume that an unfiltered read-only replica or forwarding replica is in place. For details on setting up a replica server to use for failover, see the Helix Versioning Engine Administrator Guide for Multi-Site Deployment. If you are confident that the metadata and the versioned files are valid on the replica, you can skip steps 3 and 4.

To fail over to this standby server:

  1. Ensure that the replica server (soon to be the master) has a valid license.
  2. Disable P4AUTH and rpl.forward.login if set.
  3. Confirm the metadata if the master is available.
  4. Confirm the versioned files if the master is available.
  5. Determine the point where work must be resubmitted.
  6. Restart the replica server as the new master.
  7. Verify that replication is working.
  8. Point Perforce end users and other clients at the new master.
  9. Optionally perform end user tasks.
  10. Optionally convert the old master back into the master again.

  1. Ensure that the replica has a valid license

Ensure that the replica server has its own valid license file installed in its P4ROOT directory. To obtain a duplicate server license, please complete the duplicate server request form.
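As a quick sanity check before failing over, you can confirm that a license file is present in the replica's P4ROOT. The sketch below uses a temporary directory seeded with a dummy license file purely for illustration; on a real replica, point P4ROOT at the actual server root.

```shell
# Illustrative check for a license file in P4ROOT.
# For this sketch, P4ROOT is a temporary directory seeded with a dummy
# license file; substitute your replica's real P4ROOT in practice.
P4ROOT=$(mktemp -d)
touch "$P4ROOT/license"

if [ -f "$P4ROOT/license" ]; then
  status="present"
else
  status="MISSING"
fi
echo "license file $status"
```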

  2. Disable P4AUTH and rpl.forward.login if set

It is helpful to gather replica information before the replica is converted to a master. The replica may already be running with a super user logged in. If you cannot run p4 commands because no user with super privileges is logged in, disable any authorization server, disable rpl.forward.login, and then log in.

Disable an authorization server by removing the -a flag in the startup script or by unsetting the P4AUTH configurable.

Disable rpl.forward.login by using the "p4d -cunset" flag on the replica.

cd <Perforce replica root>
p4d -r . "-cshow"
p4d -r . "-cunset rpl.forward.login"
p4d -r . "-cunset <servername>#rpl.forward.login"
p4d -r . "-cshow"

  3. Check the replica metadata against the master metadata

Because Perforce stores metadata and versioned file data separately, each must be checked for consistency with the other.

Note: These steps should be carried out before restarting the standby server as the new master.

If the master is down
Log in as a super user. Even though you will receive a warning message, you can still run commands; for example, you can still run "p4 pull -lj" despite the warnings. You want the statefile to have a recent date and time.
$ p4 -u super -p gabriel:44108 login
Enter password:
Replica access to P4TARGET server failed.
Remote server refused request. Please verify that service user is correctly logged in to remote server, 
then retry.
TCP connect to failed.
connect: Connection refused

$ p4 pull -lj
Current replica journal state is:       Journal 24,     Sequence 767.
The statefile was last modified at:     2016/10/24 12:02:45.
The replica server time is currently:   2016/10/24 12:45:51 -0700 PDT
Replica access to P4TARGET server failed.
Remote server refused request. Please verify that service user is correctly logged in to remote server, 
then retry.
TCP connect to failed.
connect: Connection refused

If the master is up

If the master is up, the replica and master should be at the same number and the statefile last modified at a recent date.

$ p4 pull -lj
Current replica journal state is:       Journal 2836,   Sequence 53123.
Current master journal state is:        Journal 2836,   Sequence 53123.
The statefile was last modified at:     2017/12/06 16:09:48.
The replica server time is currently:   2017/12/06 16:09:49 -0800 PST

If integrity.csv was not set up, or to eliminate false alarms

Sometimes journaldbchecksums will show differences that can be explained, such as those caused by an upgrade. In that case the replica is actually fine even though differences are reported.

If the master can be started, run consistency checks on the master and replica to see if the two match.  For example, run on both master and the replica commands similar to:

p4 changes > changes.txt
p4 users > users.txt
p4 groups > groups.txt
p4 files //... > files.txt
p4 filelog //... > filelog.txt
p4 labels //... > labels.txt
p4 protect -o > protect.txt
p4 branches > branches.txt
p4 integrated > integrated.txt
p4 jobs > jobs.txt
p4 streams > streams.txt
p4 clients > clients.txt

If the output from the master and the replica matches, the replica is very likely consistent with the master and is an acceptable failover target. Note that "p4 clients" output is expected to differ in an edge/commit configuration, so only run that command against forwarding replicas. If the replica is filtered, take this into account as well.
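The pairwise comparison can be scripted rather than done by eye. The sketch below diffs each captured file between hypothetical master/ and replica/ directories; the sample files are created inline so the loop is runnable as-is.

```shell
# Sketch: diff consistency-check output captured on the master and replica.
# The master/ and replica/ directories and their contents are illustrative.
cd "$(mktemp -d)"
mkdir -p master replica
printf 'Change 100 on 2017/12/06\n' > master/changes.txt
printf 'Change 100 on 2017/12/06\n' > replica/changes.txt
printf 'alice\nbob\n'               > master/users.txt
printf 'alice\n'                    > replica/users.txt

mismatches=0
for f in changes users; do
  if diff -q "master/$f.txt" "replica/$f.txt" >/dev/null; then
    echo "$f: MATCH"
  else
    echo "$f: DIFFERS"
    mismatches=$((mismatches + 1))
  fi
done
echo "$mismatches file(s) differ"
```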

If integrity.csv was set up

Check the consistency of the replica database files.  Perform this step if integrity.csv was set up as described in Verifying Replica Integrity. Otherwise, you
may have to assume that the replica is an exact copy of the master.

Run p4 journaldbchecksums

The p4 journaldbchecksums command can be run on a regular basis against the master server adding journal notes pertaining to database table checksums
on the master. When a replica receives these journal notes, it performs the same checksum computations on its database files and writes the results to the
replica server log file. The entries in the replica server log will appear similar to the following:

Perforce server info:
    Table db.config checksums match. 2011/09/16 12:36:23 version 1: expected 0xB5D23219, actual 0xB5D23219.

In this case the "checksums match" and it can be assumed that the data in the replica server's db.config table is the same as the data on the master. An example
of output where the checksums do not match is as follows:

Perforce server info:
    Table db.working checksums DIFFER. 2011/09/10 22:58:42 version 9: expected 0x3201495D, actual 0x4BBE7670.

This tells us that the data in the replica server's db.working table is different than the data on the master. If "checksums DIFFER" output appears in the Perforce
log, contact Perforce support for assistance.
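Rather than reading the log by eye, a grep for the mismatch marker can be scripted. The log contents below are a sample modeled on the entries above; point LOG at your replica's actual server log.

```shell
# Sketch: scan a replica server log for journaldbchecksums mismatches.
# The log file here is a sample; use your replica's real log path.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Table db.config checksums match. 2011/09/16 12:36:23 version 1: expected 0xB5D23219, actual 0xB5D23219.
Table db.working checksums DIFFER. 2011/09/10 22:58:42 version 9: expected 0x3201495D, actual 0x4BBE7670.
EOF

if grep -q 'checksums DIFFER' "$LOG"; then
  result="DIFFER"
  grep 'checksums DIFFER' "$LOG"   # show the offending tables
else
  result="all match"
fi
```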

  4. Check the replica versioned files against the master versioned files

In addition to the metadata (for example, who the Perforce users are, how many revisions of each file exist, and so on), the versioned files (source or depot files) should also be checked, because versioned files may still have been in the process of transferring from the master to the replica.

Run "p4 pull -l"

Before taking the replica offline and switching it as the master, capture the output of the following command:

p4 pull -l

This command lists pending versioned file content transfers that never made it from the master to the replica. When switching to a master, this list can be used to determine which files and revisions are missing versioned file content on the replica. Be aware that these missing file archives may need to be replaced.

The output of this command looks similar to:

//depot/unicycle.txt 1.990 text new edit 1D346A0E3555561CA05C9ADB29D2C47B 287588 3573 2011/09/16 15:08:49 0
//depot/users.txt 1.1029 text new edit 8374C36CC6DD04821A5B7C52832CA632 699 3573 2011/09/16 15:08:49 0

At this point, check with end users to see whether they have these files available in their workspaces. If Perforce Proxy Servers (P4P) are in use, consider checking the P4PCACHE for these files.

It is likely that the metadata on the replica server will be more current than the versioned files. This is why Perforce strongly recommends running multiple "p4 pull -u" startup commands, as seen in "p4 configure show allservers"; doing so helps the replica stay as up to date as possible with versioned file data submitted to the server.

Check the replica versioned files

To determine the consistency of the replica's versioned file data, run the following command and capture the output:

p4 verify -q //... > verify.txt 2>&1
cat verify.txt
p4 verify -S //... > verifyshelve.txt 2>&1

Note: This command may take significant time to run. It gives a definitive list of versioned files that are missing based on the metadata available on the replica server. Details on how to handle MISSING errors are covered in MISSING! errors from p4 verify.
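As a sketch, the MISSING! entries can be pulled out of the captured verify output with grep. The verify.txt contents below are an illustrative sample, not real server output.

```shell
# Sketch: extract MISSING! revisions from captured verify output.
cd "$(mktemp -d)"
cat > verify.txt <<'EOF'
//depot/main/a.c#3 - add change 100 (text) MISSING!
//depot/main/b.c#1 - add change 101 (text) BAD!
EOF

grep 'MISSING!' verify.txt > missing.txt || true
missing_count=$(wc -l < missing.txt)
echo "$missing_count revision(s) missing"
```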

Since verifying the entire contents of the server can take significant time, an alternative approach is to verify a smaller subset of files: those submitted most recently. For example, if you determine that the latest change submitted on the replica happened on 2017/08/15, you can verify only the file revisions submitted in the last few days by running:

p4 verify -q "//...@>2017/08/13"
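If you prefer not to hard-code the cutoff, the date can be computed. The sketch below assumes GNU date's -d option (Linux); on other platforms, compute the date another way.

```shell
# Sketch: build a @> revision filter for the last three days.
# Assumes GNU date (-d option).
since=$(date -d '3 days ago' +%Y/%m/%d)
echo "p4 verify -q \"//...@>$since\""
```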

  5. Determine the point where work must be resubmitted

The replica may still have been catching up to the master when the master went down. In most cases, it is sufficient to simply let users know the time of the failure, and users can resubmit their work after that time.

But if desired, there are more sophisticated ways to find out exactly where the replica was at the time of failure.  

Save the state file

The replica server maintains a file named "state" containing a journal position token that indicates which journal records from the master have been processed. The token consists of a journal number and a byte (position) offset, for example:

cd <Perforce replica root>
cat state
22/6494


In this example the replica server has replicated metadata from the master through journal number 22 up to byte offset 6494.  Record this state file information in case you ever need to know exactly where the replica was at the time of failure.
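The token splits cleanly with shell parameter expansion. A minimal sketch using the sample value from this article:

```shell
# Sketch: split a saved journal position token into journal number and offset.
cd "$(mktemp -d)"
echo '22/6494' > state          # sample state file contents

position=$(cat state)
journal="${position%%/*}"
offset="${position#*/}"
echo "journal=$journal offset=$offset"
```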

Check p4 changes

Run commands against the replica server which will give data points to work from. Use p4 changes to determine the last submitted
change on the replica:

p4 changes -m1 -t -s submitted
Change 356967 on 2011/09/15 18:15:58 by p4@support 'Merge down from Release to Main'

The "-t" flag provides the date and time of the last submitted changelist. Advise users that any submissions made to the master server after this changelist (that is, after this date and time) will need to be resubmitted.
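The cutoff date and time can be extracted from that output for use in announcements or scripts; a sketch against the sample line above:

```shell
# Sketch: pull the date/time out of "p4 changes -m1 -t -s submitted" output.
line="Change 356967 on 2011/09/15 18:15:58 by p4@support 'Merge down from Release to Main'"
cutoff=$(echo "$line" | awk '{print $4, $5}')
echo "resubmit any work made after: $cutoff"
```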

Check rpl.pull.position in the replica log (optional)

If desired, the rpl.pull.position configurable can be enabled resulting in additional metadata replication status messages being written to the replica server log file each
time a metadata pull completes. The output will look something like the following in the replica server log file:

Debug 2011/07/15 06:12:10 pid 27375: MetaData Pull position 22/6494

You can use the contents of the state file and the metadata pull messages in the replica server log and compare the position token values to the journal files on the master server,
if available. It is advisable to contact Perforce Software technical support for advice about replaying these against the replica server.

Check p4 jobs

The "p4 jobs" data on the replica may not have caught up to the master at the time of failure. If Perforce jobs are used, or if Perforce jobs are integrated with an external defect tracking system, use "p4 jobs" to check the last job submitted:

p4 -Ztag jobs -m1 -r  | grep ReportedDate
... ReportedDate 2011/09/15 18:20:04

All jobs after this date will need to be resubmitted. If a third-party defect tracker is in use, it is advisable to re-seed it from this time onward; consult the defect tracker's documentation on how to do this.

  6. Restart the standby server as the new master

To make the standby server the new master, stop the replica, then restart the instance under a new name, thereby making the former replica into a master.

How this is accomplished depends on how the Perforce Server determines its name upon startup.

If the server name is determined by the startup script or the command line

Replica options can be set directly on the p4d command line used to start the replica server:

  • -t host:port - Sets the target address for the replica (default $P4TARGET); equivalent to the P4TARGET variable.
  • -M readonly - Indicates read-only replication of metadata; equivalent to db.replication=readonly.
  • -D readonly - Indicates read-only replication of depot contents; equivalent to lbr.replication=readonly.
  • -In name - Sets the name of the replica as seen in "p4 configure show allservers".

To make a replica into a master where the replica is defined by command line options, remove (or comment out) the replica options from the P4D startup command on the replica server.


Before:

p4d -t oldmaster:1666 -M readonly -D readonly -In replica1 -r /replica/p4root -p 1667 -L pathto/log -J pathto/journal -d

After:

p4d -r /replica/p4root -p 1667 -L pathto/log -J pathto/journal -In newmaster -d

This causes the (former) replica server to start as a new master server.

If the server name is set in "p4 configure show allservers"

Replica configurables may be defined by p4 configure set and seen in p4 configure show.

$ p4 configure show replica1
replica1: db.replication = readonly
replica1: lbr.replication = readonly
replica1: P4TARGET = master:1666
replica1: serviceUser = service
replica1: startup.1 = pull -i 5
replica1: startup.2 = pull -i 5 -u
replica1: startup.3 = pull -i 5 -u

To make a replica into a master where the replica is defined by -In or P4NAME, start the replica under a new name or without the previous name.

Unix Before

When the replica server starts using -In (or P4NAME):

p4d -In replica1 -r /replica/p4root -p 1667 -d

the replica name allows the replica server to pick up the appropriate options from the server configuration.

Unix After

p4d -In newmaster -r /replica/p4root -p 1667 -d

Windows Before

From a "Run as administrator" command prompt

p4 set -S Perforce

P4NAME=replica1 (set -S)

Windows After

p4 set -S Perforce P4NAME=newmaster
p4 set -S Perforce

P4NAME=newmaster (set -S)

Then restart the Windows service.

Choose a new name for your new master. Because the server has a new name, it will not use any of the configurables seen in "p4 configure show allservers".

If the -L and -J options are not in the startup script, then after the new master is restarted you can set the journal and log with:

p4 configure set "newmaster#P4JOURNAL=pathto/journal"
p4 configure set "newmaster#P4LOG=pathto/log"

If the server name is determined by a server spec and the server.id file

Later versions of the Perforce Server can use server specifications defined by the "p4 server" command. These can, in conjunction with the "server.id" file in the P4ROOT directory, define the replication role without any options being passed on the command line. If server specs are used, the "server.id" file must be removed or renamed before the replica is restarted as a master.

To make a replica into a master where the replica is defined by a server spec:

  1. Save off the "p4 servers" information:
p4 -ztag servers > serverinfo.txt
  2. Stop the replica and rename or erase the server.id file.

On Unix:

cd <Perforce root>
mv server.id server.id.old

On Windows:

cd <Perforce root>
ren server.id server.id.old
  3. Adjust the startup script to point to the running journal and current log. Make a backup of the startup script, then add the -J journal and -L log options on the command line, or set them with "p4 configure set", as needed.


Before:

p4d -In replica1 -r /replica/p4root -p 1667 -d

After:

p4d -In newmaster -r /replica/p4root -p 1667 -d

If the -L and -J options are not in the startup script, you can set the journal and log with:

p4 configure set "newmaster#P4JOURNAL=pathto/journal"
p4 configure set "newmaster#P4LOG=pathto/log"
  4. Start the new master and create a new server.id file containing a new name of your choice:
p4 serverid newmaster

Alternatively, create a new file named "server.id" containing the new name and place it in the replica server root.

  5. Create a new server spec for this new master:
p4 server newmaster

You can use the former master specification as a guide when creating the new master specification. You may want to keep the original replica specification if you plan to convert the new master back to a replica later; otherwise you can delete it.

  7. Verify that replication is working

Check that replication is working by running, with super privileges:

p4 pull -l -j

When replication is running properly, the replica journal number should catch up to the master journal number. A replica that now points to the new master will likely not resume replication immediately, for two reasons:

  1. The service user needs to be logged into the new master from the machine the replica is running on and the ticket needs to be in the tickets file defined for the replica.

Log in to the replica machine, log in to the new master as the service user, and copy the .p4tickets or p4tickets.txt file to the P4TICKETS location seen in "p4 configure show allservers". Then restart the replica.

  2. The offset that the replica currently has into the old master's journal is not correct for the new master's journal (see Why replicated journals are not identical); the replica log file will potentially contain 'Bad opcode' messages.

To fix this, copy the file named "state" for safekeeping. Then remove the "/<offset>" portion of the journal position token in the state file, leaving only the journal number.

For example, edit the file named "state", changing:

22/6494

to:

22
Then restart the replica. This will cause replication to start from the beginning of the journal specified in the state file and to continue until it is up to date. Check that replication is working by running "p4 pull -l -j" repeatedly.  If replication is still not working, check the replica log.
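The state file edit can also be done safely from the shell; a sketch using the sample journal position from this article:

```shell
# Sketch: back up the state file, then keep only the journal number so
# replication restarts from the beginning of that journal.
cd "$(mktemp -d)"
echo '22/6494' > state          # sample journal position

cp state state.save
cut -d/ -f1 state.save > state
cat state
```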

  8. Point Perforce end users and other clients at the new master server

Perforce clients, Perforce Proxy Servers, Perforce Brokers, and potentially third-party integrations or scripts that connect to the Perforce Server may need to be reconfigured so that their P4PORT setting points to the host:port of the new master server. The degree to which this is necessary depends on the particular setup. If no mechanism is in place to point clients, proxies, brokers, or third-party integrations at the new master server (such as a DNS entry for the Perforce server), update each Perforce instance accordingly.

  9. Optionally perform end user tasks

If end user workspaces are out of date compared to the metadata on the replica, users should consider following the steps outlined in Working Disconnected From The Perforce Server. This allows them to bring their workspaces back into line and potentially restore files that may be missing from the server. Users can run "p4 reconcile" to see how their workspace differs from the server.

  10. Optionally convert the old master back into the master again

Once the old server hardware is fixed, you may want to reseed the old master's database (db.*) files with a checkpoint from the "new" master machine. For guidance, follow the Creating the replica guidelines in the "Multi-Site" administrators guide. Most of the replication settings will already exist in the server configuration from the previous replica setup.

Note: The "P4TARGET" variable may have changed as the current master may exist on a different host than before. A new ticket for the service user will need to be created on the new replica machine as outlined in the System Administrator's Guide.

Once the new replica is up and running, run a "verify -t" to set up the transfer of any missing files. If the archive size is significant, first restore archives to the new replica location from a current backup, then schedule a verify. To schedule the transfer, run:

p4 verify -t //...
p4 verify -S //...

Any missing files will then be scheduled for transfer. The following command can be used for a summary of the remaining files to be transferred:

p4 pull -l -s
File transfers: 10 active/1327 total, bytes: 1551384 active/106561581 total.
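For monitoring scripts, the totals can be parsed out of that summary line; a sketch against the sample output above:

```shell
# Sketch: parse the total pending file transfers from a "p4 pull -l -s" summary.
line="File transfers: 10 active/1327 total, bytes: 1551384 active/106561581 total."
files_total=$(echo "$line" | sed -n 's/.*transfers: [0-9]* active\/\([0-9]*\) total.*/\1/p')
echo "files remaining to transfer: $files_total"
```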


  • This article applies to the case where both metadata and versioned file data are being replicated by p4 pull.
  • The following commands and configurables referenced in this article are available only with Perforce Server 2011.1 and later:
    • p4 journaldbchecksums
    • p4 verify -t
    • p4 pull -l -s (the '-s' option was added in the 2011.1 server release)
    • rpl.pull.position (if seen in the replica log file)