Replica Types and Use Cases

Problem

What capabilities do the different types of Perforce replica servers offer?

Solution

Common reasons for setting up a replica server include offloading work from the main server and providing better performance for users in remote locations. There are different types of replica servers, such as read-only replicas, forwarding replicas, build servers, and Edge Servers. See p4 help server for more details about server types and features, and p4 help replication for details on configuring and operating replicated servers.
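
For example, to see which replica types are deployed in an existing installation, you can list the server specs on the master and ask an individual server which services it is running. A minimal sketch; the host:port values below are placeholders:

    # List the server specs registered on the master, including each Services field
    p4 -p master:1666 servers

    # Ask a particular server what role it is running; on recent server
    # versions the output of p4 info includes a "Server services:" line
    p4 -p replica1:1667 info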
 

Edge Server

A distributed installation contains a Commit Server and one or more Edge Servers. Individual client workspaces are bound to the Edge Server on which they are created, and all work in progress for those workspaces resides only on their owning Edge Server. An Edge Server supports the full Perforce command set; however, there are a few differences in behavior which may affect applications. In summary, an Edge Server (a setup sketch follows the list):

  • Maintains a copy of commit server metadata and file content and a local copy of some workspace and work-in-progress information

  • Services most commands locally, other than p4 submit, and handles many operations with no reliance on the Commit Server

  • Offloads work from the Commit Server and reduces the overall data transmission between the Commit and Edge Servers

  • Requires a higher level of machine provisioning and administrative attention than a Perforce Proxy Server or a forwarding replica

  • Cannot be used as a warm standby server for disaster recovery; the associated Commit Server must be checkpointed and backed up separately

  • Can be used as a Build Server, allowing write commands as part of the build process
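
The following sketch outlines one common way to define and start a Commit/Edge pair. The server IDs, ports, and paths shown are placeholders, and a real deployment also needs a service user, protections, and a seed checkpoint, as described in p4 help replication:

    # On the master, create a server spec for each role; in the spec form,
    # set Services: to commit-server and edge-server respectively
    p4 server commit1
    p4 server edge1

    # Replication settings for the edge, stored centrally as configurables
    p4 configure set edge1#P4TARGET=commit1.example.com:1666
    p4 configure set edge1#journalPrefix=/p4/edge1/journals/jnl
    p4 configure set "edge1#startup.1=pull -i 1"      # metadata pull thread
    p4 configure set "edge1#startup.2=pull -u -i 1"   # archive pull thread
    p4 configure set edge1#db.replication=readonly
    p4 configure set edge1#lbr.replication=readonly

    # On the edge machine: record the server ID, then start the edge server
    p4d -r /p4/edge1 -xD edge1
    p4d -r /p4/edge1 -p 1667 -d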


Forwarding Replica

A server of type forwarding-replica is a replica which supports the full Perforce command set. A read-only command received by a forwarding replica is processed locally, without consuming any resources on the master server. An update command received by a forwarding replica is forwarded to the master server for processing, similar to the way a proxy or broker forwards commands to the server. Like a proxy, a forwarding replica acts as a cache of file content, so commands such as p4 sync or p4 resolve, although they are processed by the master server, offload their file transfer operations to the forwarding replica, thus reducing the load on the master server. The p4 login command on a forwarding replica automatically logs the user into both the forwarding replica and the master server. Since the database of an (unfiltered) forwarding replica is a superset of the database of the master server, an unfiltered forwarding replica can be used to recover from the catastrophic loss of the master server. In summary, a forwarding replica (a setup sketch follows the list):

  • If unfiltered, maintains complete replicated copies of master server metadata and file content

  • Services commands that only read metadata; forwards commands that update metadata to the master server

  • This behavior enables the forwarding replica to process many commands locally, with no reliance on the master server

  • Enables offline checkpoint operations to prevent master server downtime for checkpoint/backup

  • Requires a higher level of machine provisioning and administrative considerations compared to a Perforce Proxy Server

  • Can be used as a warm standby server for disaster recovery, provided no filtering is being done
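
A forwarding replica is configured much like the Edge Server above, but with a single server spec whose Services field is forwarding-replica. A minimal sketch, with placeholder names, paths, and ports:

    # On the master: create a server spec with Services: forwarding-replica
    p4 server fwd1

    p4 configure set fwd1#P4TARGET=master.example.com:1666
    p4 configure set fwd1#journalPrefix=/p4/fwd1/journals/jnl
    p4 configure set "fwd1#startup.1=pull -i 1"
    p4 configure set "fwd1#startup.2=pull -u -i 1"
    p4 configure set fwd1#db.replication=readonly
    p4 configure set fwd1#lbr.replication=readonly

    # On the replica machine, after seeding its root from a master checkpoint
    p4d -r /p4/fwd1 -xD fwd1
    p4d -r /p4/fwd1 -p 1667 -d

    # Verify that journal records are being pulled from the master
    p4 -p fwd1.example.com:1667 pull -lj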


Build Server

A server of type build-server is a replica which supports build farm integration. A build-server replica supports the same read-only commands that a simple replica supports. In addition, the p4 client command may be used to create or edit client workspaces on a build-server. Such workspaces may issue the p4 sync command, in addition to all read-only commands supported by the replica. The p4 sync command on a bound workspace is processed entirely by the build-server replica, relieving the master server of the computation, file transfer, networking, and database update work for those sync commands.
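
As an illustration of that division of labor, a build farm machine might create and sync a bound workspace entirely against the build server. The server name, port, and workspace name here are hypothetical:

    # Point the build machine at the build-server replica, not the master
    export P4PORT=build1.example.com:1668

    # Create a workspace on the build server; the workspace is bound to
    # this replica (its spec records the replica's server ID)
    p4 client build1_ws

    # The sync is serviced entirely by the build server
    p4 -c build1_ws sync //depot/main/...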

The build-server replica records view-mapping and have-list metadata in its local database, using the separate db.view.rp and db.have.rp database tables. Domain information for bound workspaces is recorded in the db.domain table, which is global to all servers in the installation. A workspace which is bound to a build-server must still have a globally-unique name; this is enforced by the p4 client command on the build-server. Since workspace mapping and have list information for bound workspaces is stored locally in the build-server database, the build-server should be checkpointed regularly.

Since the database of an unfiltered build-server is a superset of the database of the master server, an unfiltered build-server can be used to recover from the catastrophic loss of the master server. In summary, a build-server (a setup sketch follows the list):

  • Maintains local workspace metadata in addition to complete replicated copies of master server metadata and file content

  • Services commands that only read metadata, plus the p4 client and p4 sync commands

  • This behavior enables the build replica to process many commands locally, with no reliance on the master server

  • Offloads the workload of automated build processes from the master server by allowing the creation and use of client workspaces that are local to the build replica

  • Hosts its own local copies of the client data (db.have.rp, db.view.rp), allowing build clients to use p4 client and p4 sync
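
Configuration parallels the forwarding replica sketch above, with the server spec's Services field set to build-server instead. Because db.view.rp and db.have.rp exist only on the build server, its database should be checkpointed locally as well; the names and ports below are placeholders:

    # On the master: create a server spec with Services: build-server,
    # then apply the same P4TARGET, journalPrefix, startup.N pull threads,
    # and readonly db.replication/lbr.replication configurables shown above
    p4 server build1

    # Schedule a checkpoint on the build server itself; it is taken when
    # the replica processes the master's next journal rotation
    p4 -p build1.example.com:1668 admin checkpoint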


Read-Only Replica

  • Maintains complete replicated copies of master server metadata and file content

  • Services commands that only read metadata; rejects commands that would update metadata

  • Can be used in conjunction with a broker to offload read-only commands from the master server

  • Enables offline checkpoint operations to prevent master server downtime for checkpoint/backup (see the sketch after this list)

  • Can be used as a warm standby server for disaster recovery
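
A read-only replica uses the same replication configurables as the forwarding replica sketch above, but its server spec's Services field is simply replica, and update commands are rejected rather than forwarded. A sketch of the offline checkpoint use case, with placeholder names and ports:

    # Create a server spec with Services: replica, then apply the same
    # P4TARGET, journalPrefix, startup.N, db.replication, and
    # lbr.replication configurables shown above
    p4 server ro1

    # Offline checkpoint: schedule a checkpoint on the replica; it is taken
    # when the replica processes the master's next journal rotation, so the
    # master itself never pauses for checkpoint/backup
    p4 -p ro1.example.com:1667 admin checkpoint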
