Perforce Public Knowledge Base - Maximizing Perforce Performance



Maximizing Perforce Performance




How can I maximize performance using workspace views, protections, MaxResults, MaxScanRows, MaxLockTime, MaxOpenFiles, and configurables?


The Perforce Helix Server's performance depends on the number of files you try to manipulate in a single command invocation, not on the size of the depot. For example, syncing a workspace view of 30 files from a 3,000,000-file depot is about as fast as syncing a client view of 30 files from a 30-file depot.

The number of files a single command affects is determined by:

  • Perforce command line (p4) arguments (or selected folders, in the case of P4V commands). Without arguments to limit its scope, a command operates on all files mapped by the workspace view.
  • Workspace views, branch views, label views, and protections. With unrestricted views and unlimited protections, a command operates on all files in the depot.

At sites where depots are very large, unrestricted views and unqualified commands make a Helix Server work much harder than it needs to. When the Helix Server answers a request, it locks the database for the duration of the computation phase. For normal operations, this is a successful strategy, as the Helix Server can process requests quickly enough to avoid requests piling up. But abnormally large requests can take seconds, sometimes even minutes. If frustrated users hit CTRL-C and retry, the problem gets worse -- the Helix Server starts using lots of memory and responds even more slowly.

Perforce users and administrators can prevent Helix Server thrashing in the following ways:

  • Set configurables.
  • Use "tight" workspace (and other) views.
  • Assign protections to limit access where appropriate.
  • Set MaxResults to limit the amount of server memory a single command can consume.
  • Set MaxScanRows to limit the number of database table rows scanned.
  • Set MaxLockTime to limit the time spent in data scans that hold table locks, blocking other commands.
  • Set MaxOpenFiles to limit the number of files users can open with any one command (add, edit, delete, and so on).
  • Enable parallel sync.

These options are described in more detail in the sections below.

Set configurables

Check configurables by running:

p4 configure show allservers

Change net.tcpsize and filesys.bufsize if needed:

p4 configure set filesys.bufsize=2M
p4 configure set net.tcpsize=2M
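To verify that a change took effect, you can query a single configurable by name:

p4 configure show net.tcpsize

Note that some changes take effect only for new connections or after a server restart; check 'p4 help configurables' for the behavior of each configurable.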

A full list of configurables is available in the online help:

p4 help configurables

Use "tight" views

Define workspace views (and branch and label views) so users access only the files they require. For example, the following "tight" workspace view is restricted to specific depot areas:

//depot/main/svr/devA/...       //ws/main/svr/devA/...
//depot/main/dvr/lport/...      //ws/main/dvr/lport/...
//depot/rel2.0/svr/devA/bin/... //ws/rel2.0/svr/devA/bin/...
//depot/qa/s6test/dvr/...       //ws/qa/s6test/dvr/...

By contrast, the following unrestricted view is easier to set up but invites trouble when depots are very large:

//depot/...   //ws/...

Workspace views, branch views, and label views are defined using the p4 client, p4 branch, and p4 label commands, respectively, by the users who created them. See Using Tight Views.

Note: Although using exclusionary mappings in a view reduces its scope, it also has significant performance implications, because the server must process more data to apply the exclusions. This is discussed in depth in a performance white paper written by our performance engineer Michael Shields. Although it dates to 2007, it is still relevant to server performance today.
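As an illustration of the pattern that note warns about, the following view (paths are hypothetical) maps the whole depot and then excludes one area. The client receives fewer files, but the server still evaluates the exclusion against every path matched by the first line:

//depot/...              //ws/...
-//depot/largeassets/... //ws/largeassets/...

A few tight positive mappings are usually cheaper for the server than one broad mapping plus exclusions.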

Assign protections

Protections are a type of view. Protections are set with the p4 protect command, and they control which depot files can be affected by the commands that users run. Unlike most other specifications, protections can be set only by superusers. Protections also control read and write permission to depot files, but permission levels themselves have no impact on Helix Server performance.

Protections can be assigned either to users or to groups. For example:

write user  sam        * //depot/admin/...
write group rocketdev  * //depot/rocket/main/...
write group rocketrel2 * //depot/rocket/rel2.0/...

Groups are created by superusers with the p4 group command. Groups make it easier to assign protections and enable you to set four performance-tuning options -- MaxResults, MaxScanRows, MaxLockTime, and MaxOpenFiles -- described next.
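A single group spec can carry all four limits at once. A hypothetical sketch of the form a superuser would fill in with 'p4 group rocketdev' (values and user names are illustrative):

Group:        rocketdev
MaxResults:   50000
MaxScanRows:  50000
MaxLockTime:  30000
MaxOpenFiles: 1000
Users:
        sam
        ruth

If a user belongs to several groups, the highest (least restrictive) value for each limit applies, and an "unlimited" entry in any of those groups removes that limit for the user.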

Set MaxResults to limit the amount of server memory used by a given command

Some commands have a much higher impact on server resources than others. The MaxResults limit helps prevent commands from taking up excessive server memory as the server buffers and caches the data needed to complete the command. A "result" is equivalent to a database row's worth of data, but it is not necessarily the same as the number of rows scanned, nor the number of lines in the final output presented to the user. It is possible, for example, for a command that would produce a single line of output to hit a MaxResults limit of 15,000.

Each group has a MaxResults value associated with it. The default value is "unset", but a superuser can use p4 group to limit MaxResults for a particular group. Users in the group cannot run any commands that require more memory than the MaxResults limit. For example:

Group: rocketdev
MaxResults: 50000

Ruth has an unrestricted client view. When she tries:

p4 sync 

her p4 sync command is rejected if the depot contains more than 50,000 files, and she sees an error message at her client such as:

Request too large (over 50000); see 'p4 help maxresults'.

She can get around this by syncing smaller sets of files at a time, for example:

p4 sync //depot/projA/...
p4 sync //depot/projB/...

She gets her files, without tying up the server to process a single, extremely large command.
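The same workaround is easy to script. A minimal shell sketch that syncs one project at a time (project names are hypothetical):

for proj in projA projB projC
do
    p4 sync //depot/$proj/...
done

Each iteration stays under the MaxResults limit, and the server can serve other users between commands instead of being tied up by one huge request.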

Set MaxScanRows to limit the number of database rows scanned

Each group also has a MaxScanRows value associated with it. The default value is "unset", but a superuser can use p4 group to limit MaxScanRows for a particular group. Users in the group cannot run any commands that scan more database rows in any Perforce database table than the MaxScanRows limit. This option is particularly useful for limiting wildcarded commands that would otherwise scan every revision. For example:

Group: rocketdev
MaxScanRows: 50000

User bill issues the command:

p4 files //

His command is rejected and the following message is seen at the client:

Too many rows scanned (over 50000); see 'p4 help maxscanrows'.

But if Bill narrows the scope of his command so it scans fewer than 50,000 revisions, it succeeds:

p4 files //depot/main/
//depot/main/foo#4986 - edit change 4994 (text)

Note: An easy way to see how a command hit the MaxScanRows limit is to have the user add the "-Ztrack" global option. This displays the same database tracking data that is sent to the Helix Server log:

p4 -Ztrack files //

In the above example it is likely the user hit the limit while scanning the db.rev table. Note that the reported count may slightly exceed the limit, because the server only checks the limit periodically.
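The -Ztrack output reports, per database table, how many rows were scanned. An abbreviated, illustrative excerpt (exact formatting and figures vary by server version):

--- db.rev
---   pages in+out+cached 1289+0+64
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+50000 0+0

The "scan" figure is the number that counts against MaxScanRows, so the table with the largest scan count shows where the limit was hit.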

Not all commands are subject to the MaxScanRows limit; see 'p4 help maxscanrows' for details.

Set MaxLockTime to limit the amount of time spent locking database tables

Each group also has a MaxLockTime value associated with it. The default value is "unset", but a superuser can use p4 group to limit MaxLockTime for a particular group. The MaxLockTime limit is entered in milliseconds. MaxLockTime starts a counter when the first table is read-locked, and the counter includes time spent waiting for any subsequent table read locks. Because the number of users and commands running at any one time varies, the practical effect of this option depends on server load. If the time a command spends holding or waiting for read locks exceeds the group's MaxLockTime, the command fails. For example:

Group: rocketdev
MaxLockTime: 30000

Sandy runs the command:

p4 opened ...

If the read locks taken exceed 30000 milliseconds (30 seconds) in duration, Sandy sees this error:

Operation took too long (over 30.00 seconds); see 'p4 help maxlocktime'.

The server log includes details about the command within the Trace output:

--- killed by MaxLockTime

Set MaxOpenFiles to limit the number of files opened by a user at once

Most of the limits above require some idea of how your users work. Typically a Helix Server administrator examines the Helix Server logs to determine what kind of usage is typical for the server, which can take some analysis and trial and error.

MaxOpenFiles, on the other hand, addresses a major performance issue in a very straightforward way: it limits the number of files that can be opened for a given operation (add, edit, delete, and so on) in a single user command.

Each group also has a MaxOpenFiles value associated with it. The default value is "unlimited", but a superuser can use p4 group to limit MaxOpenFiles for a particular group. The MaxOpenFiles limit is entered in number of files. For example:

Group: rocketdev
MaxOpenFiles: 1000

Ruth hasn't had a chance to get in trouble yet, so she runs the command:

p4 delete //depot/...

Since, even excluding files she doesn't have access to, there are likely many more than 1000 files accessible to her, she hits the MaxOpenFiles limit:

Opening too many files (over 1000); see 'p4 help maxopenfiles'.

Note: MaxOpenFiles is only available on Helix Server versions 2016.1 and later.

Testing MaxLimits on specific commands

Use the -z{MaxLimit} global option to immediately see the effect of any of these limits. For example, to check the impact of a MaxScanRows limit on a given command:

$ p4 -zmaxScanRows=100000 sync
Too many rows scanned (over 100000); see 'p4 help maxscanrows'.
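The same technique works for the other limits. For example, to see whether a sync would exceed a 10-second lock budget before imposing that budget on a group (the threshold and path here are just examples):

p4 -zmaxLockTime=10000 sync //depot/main/...

Because a -z flag can only tighten a limit for the one command (the smaller of the flag and any group limit applies), this is a safe way to experiment before editing group specs.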

Restore from checkpoint

Take a checkpoint, move the existing db.* files aside, then restore from the checkpoint. This rewrites the database files with their records stored sequentially. See Checkpoints for database tree rebalancing.

Use faster I/O

Perforce is highly dependent on fast I/O. Consider solid-state storage appliances or solid-state drives with TRIM support. See Recommended Server Hardware Configurations.

Turn on autotune

Upgrade to Perforce 2017.1 or later and try autotune to improve network performance:

p4 configure set net.autotune=1

Some systems do not work well with this setting, so after enabling it, verify that Perforce connections do not hang.

Enable parallel sync

Turn on parallel sync, and perhaps parallel submit, to transfer files faster:

p4 configure set net.parallel.max=8

See Parallel Sync and its benefits.
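Once net.parallel.max is set on the server, clients can request parallel transfers explicitly. For example, to sync with four threads (the thread count and path are illustrative starting points):

p4 sync --parallel=threads=4 //depot/main/...

Tune the thread count to your network and disk throughput; more threads are not always faster.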

For more information:

See Tuning Perforce for Performance.
