Perforce Public Knowledge Base

Client Workspace and Global Metadata Locks

Problem

In Perforce Server version 2011.1, a general-purpose server locking facility was introduced to provide server-side client workspace and global metadata locking. This facility prevents concurrent Perforce commands from causing client workspace or Perforce database inconsistencies.

Solution

Server Lock Files

Server locks are implemented as file locks taken on simple lock files. The default location for lock files is the server.locks subdirectory of P4ROOT. The name and location of the lock file directory can be changed by setting the server.locks.dir configurable. For example, setting server.locks.dir as follows:
p4 configure set commit#server.locks.dir="/p4/data/locks"

configures the server to use the directory /p4/data/locks for creating and managing lock files. Note that the server.locks.dir configurable is dynamic; no restart of the Perforce Server is required, and the server starts using the new setting immediately.
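To verify the value currently in effect, p4 configure show can be used (a sketch; the exact output, including the source annotation, varies by server version and whether the value was set with a server-specific prefix):
$ p4 configure show server.locks.dir
server.locks.dir=/p4/data/locks (configure)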

Server lock files in the server.locks directory are hashed into buckets based on the setting of the spec.hashbuckets configurable, which starting with server version 2013.3 is enabled by default with a value of 99. For example:
server.locks/clients/87,d/bruno_ws
The server lock file for a specific client is created the first time that client runs one of the commands listed below that take a client workspace lock, and it persists until the client is deleted. Files under the server.locks directory are only used while the server is running, so it is safe to remove them when the server is stopped; they are recreated as needed after the server restarts.
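An administrator who wants to clear out the lock files can therefore do so safely only while the server is down; a sketch (the stop/start mechanism, P4ROOT path, and port below are assumptions for illustration):
$ p4 admin stop
$ rm -rf $P4ROOT/server.locks/*
$ p4d -r $P4ROOT -p 1666 -d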

Server-Side Client Workspace Locks

Server-side client workspace locks are taken when a client command will update metadata or client workspace files. They are specific to each named client workspace and are used to serialize certain Perforce commands issued from that workspace. The client workspace lock is removed upon completion of the command. The following commands are affected by server-side client workspace locks:
    p4 add
    p4 change [ -i | -d | -t | -U ]
    p4 copy
    p4 delete
    p4 edit
    p4 integ
    p4 labelsync
    p4 lock
    p4 merge
    p4 move (p4 rename)
    p4 reconcile (p4 status, p4 clean)
    p4 reload
    p4 reopen
    p4 resolve
    p4 revert
    p4 shelve
    p4 submit
    p4 sync (p4 flush, p4 update)
    p4 unload
    p4 unlock
    p4 unshelve

The server lock taken on p4 sync is a shared lock allowing only other p4 sync commands to run concurrently from the same client workspace. The server.locks.sync server configurable controls whether the sync command takes a client workspace lock. With Perforce Server 2013.2 through 2014.1, the default value for server.locks.sync is 1 and sync takes a client workspace lock. With 2014.2 and later servers, the default value for server.locks.sync is 0 and sync does not take a lock. All of the other server-side client workspace locks are exclusive locks and block other commands issued from the same client workspace that fall under the scope of server-side client workspace locks.
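For example, on a 2014.2 or later server where the pre-2014.2 behavior is preferred, the configurable can be turned back on (a sketch; apply the usual change-control caution when changing configurables):
p4 configure set server.locks.sync=1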

Server Global Metadata Locks

Server-side global metadata locks lock the entire logical repository:
server.locks/meta/0,d/db

Global metadata locks are taken when a command is run that will update metadata and/or versioned file data, and are used to serialize commands that would otherwise compromise Perforce Server database or versioned file consistency if run concurrently.

The following commands are affected by server global metadata locks:

    p4 archive
    p4 restore
    p4 obliterate
    p4 snap
    p4 retype
    p4 submit (shared lock)

The server lock taken on p4 submit is a shared lock. It will not block other p4 submit commands but does interact with exclusive locks taken by the other commands listed above. For example, a p4 archive command must wait for any active submits to finish, and a p4 submit command must wait for any active p4 archive commands to finish. With 2013.3 and later Perforce Servers where lockless reads are enabled, an additional set of commands take global shared locks:

    p4 changes (shared lock)
    p4 interchanges (shared lock)
    p4 integ (shared lock)
    p4 istat (shared lock)
    p4 sync (shared lock)

These commands follow the same rules as the p4 submit shared lock described above.
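Lockless reads are controlled by the db.peeking configurable (see also the note under Disabling Server Locks below), so a quick way to check whether these shared locks apply on a given server is to inspect it (a sketch; the output line is illustrative):
$ p4 configure show db.peeking
db.peeking=2 (default)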


Reporting on Client Workspace and Global Metadata Locks

As of 2013.2, end users can run p4 -ztag info, which reports the status of the client workspace lock:
... clientName bruno_ws
... clientRoot C:\P4DemoWorkspaces\bruno_ws
... clientLock exclusive
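In a script, the lock status field can be extracted on its own, for example (assuming a Unix shell with grep available):
$ p4 -ztag info | grep clientLock
... clientLock exclusive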

With Perforce Server 2012.2 and later, Perforce users of type operator or those with super privileges can run the p4 lockstat -C command to report on client workspace locks. For example, to report on all client workspaces:
$ p4 lockstat -C
Write: clients/bruno_ws
Write: clients/www-live

To report on a specific client workspace:

$ p4 lockstat -c bruno_ws
Write: clients/bruno_ws

With Perforce Server 2014.2 and later, the monitor.lsof configurable can be set on Unix platforms and enables the use of the p4 monitor show -L command to display a list of locked files:

$ p4 monitor show -aL
7991 R bruno      00:00:10 change -i [server.locks/clients/87,d/bruno_ws(W)]
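For the -L output to be available, monitor.lsof must point at the lsof executable; a typical setting (a sketch, assuming lsof is installed at /usr/bin/lsof) looks like:
p4 configure set monitor.lsof="/usr/bin/lsof -F pln"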


     

Disabling Server Locks

By default the locking facility is enabled. Although not recommended, locking can be disabled completely by setting the server.locks.dir configurable to a value of disabled:
p4 configure set chicago-commit#server.locks.dir="disabled"
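To re-enable locking later, the override can be removed, which reverts server.locks.dir to its default of server.locks under P4ROOT (a sketch; chicago-commit is the example server name used above):
p4 configure unset chicago-commit#server.locks.dir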

It is not always possible to disable metadata locking: disabling it has no effect when db.peeking is enabled (2013.3 and later) or on a commit server (2013.2 and later), because these features rely on metadata locking.