The Helix Server is light on CPU resources; available memory and disk performance are more likely to be bottlenecks.
The Helix Server employs a forking process model on Unix variants and a threading model on Windows. Each Helix client command is executed in its own process or thread, so the server takes advantage of systems with multiple CPU cores. Faster CPU clock speeds also help: they reduce the time spent holding locks on critical system resources during complex operations.
Server performance is highly dependent upon having sufficient memory. We suggest the following formula to approximate your server's memory requirements. This calculation is a conservative estimate and does not account for all possible usage patterns. It may not satisfy performance expectations in all cases.
1.5 KB × NUMBER OF FILES = ESTIMATED MEMORY REQUIREMENTS
Two bottlenecks are relevant in memory utilization. The first bottleneck can be avoided by ensuring that the server does not page when it runs large queries and the second by ensuring that the db.rev table (or at least as much of it as practical) can be cached in main memory:
- Determining memory requirements for large queries is fairly straightforward: the server requires about 1 kilobyte of RAM per file to avoid paging; 10,000 files will require 10 MB of RAM and so on.
- To cache db.rev, the size of the db.rev file in an existing installation can be observed and used as an estimate. New installations of Helix can expect db.rev to require about 150-200 bytes per revision, and roughly three revisions per file, or about 0.5 kilobytes of RAM per file.
Thus, if there are 1.5 kilobytes of RAM available per file, or 150 MB for 100,000 files, the server does not page, even when performing operations involving all files. It is still possible that multiple large operations can be performed simultaneously and thus require more memory to avoid paging. On the other hand, the vast majority of operations involve only a small subset of files.
For most installations, a system with 1.5 kilobytes of RAM per file in the depot suffices.
Please note: for the purposes of the above formula, lazy copies count as files. If a file has been branched twice, you have two lazy copies and one "real" copy, so you would count three files.
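The sizing rule above can be expressed as a short calculation. This is an illustrative sketch, not a Perforce tool; the function names are ours, while the 1.5 KB-per-file figure and the lazy-copy counting rule come from the text:

```python
def estimate_server_memory_kb(real_files, lazy_copies=0):
    """Conservative Helix Server memory estimate, per the 1.5 KB-per-file
    rule of thumb (about 1 KB per file to keep large queries from paging,
    plus roughly 0.5 KB per file to cache db.rev).
    Lazy copies count as files for this purpose."""
    total_files = real_files + lazy_copies
    return total_files * 1.5  # kilobytes

# 100,000 files with no lazy copies -> 150,000 KB, i.e. about 150 MB,
# matching the worked example in the text.
print(estimate_server_memory_kb(100_000))
```

A file branched twice would be passed as `estimate_server_memory_kb(1, lazy_copies=2)`, consistent with the counting rule above.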
Windows 32-bit note: Windows 32-bit platforms impose a 2 GB per-process memory limit. On Windows, the Helix Server runs as a single process, servicing each client request as a thread within that process. For sites with a very large transaction volume, this 2 GB limit can constrain performance and large operations. Windows 64-bit platforms do not have this limitation; large sites should consider a 64-bit Windows server.
The Helix Server stores metadata in a binary format on disk under the server root directory. For maximum Helix Server performance, directly attached disk storage is recommended.
The Helix Server stores repository data in the depot directory under the Server root directory. This location is configurable. Repository data can reside on directly attached disk storage or network attached disk storage.
While we do not want to recommend a specific filesystem, historical performance benchmark data suggests that Linux with XFS produces good results, though there may be a trade-off of poorer data recovery after a power failure. The BSD filesystem, also used by Solaris, is slower but much more reliable. The reliability of the Windows NTFS filesystem falls somewhere in between.
Total disk space usage must be judged based on the following factors, which are entirely dependent on your data and how heavily you use Perforce.
- Helix Server depot (librarian), where all revisions of all files under SCM control are stored. The total size depends largely on how you use the versioning system. Text files are stored economically in RCS format; a binary storage format is available for data files and for very large text files.
- Helix Server metadata, which can be roughly estimated at 0.5 KB per user per file. If you have 10,000 files and 50 users, you will need roughly 250 MB of disk for metadata. Some advanced server features, such as labels, may increase disk space requirements.
- Helix Server checkpoint and journal files, the product of good backup practices. These backups can be created in a compressed format, and are roughly one tenth the total size of the metadata files.
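The metadata and checkpoint estimates above can be sketched the same way. Again, the function names are hypothetical; the 0.5 KB-per-user-per-file figure and the roughly 10:1 checkpoint ratio come from the text:

```python
def estimate_metadata_kb(num_files, num_users):
    """Rough Helix Server metadata disk estimate:
    about 0.5 KB per user per file."""
    return 0.5 * num_files * num_users

def estimate_checkpoint_kb(metadata_kb):
    """Compressed checkpoints run roughly one tenth
    the total size of the metadata files."""
    return metadata_kb / 10

# 10,000 files and 50 users -> 250,000 KB, i.e. about 250 MB of metadata,
# matching the worked example in the text.
meta = estimate_metadata_kb(10_000, 50)
print(meta, estimate_checkpoint_kb(meta))
```

Note that these figures exclude the depot itself, whose size depends entirely on your revision history.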
Helix can be run on virtually any network. Recent releases have been improved to run over high-latency wide area networks, although some special tuning may be necessary; consult the Perforce Knowledge Base or contact Perforce Technical Support.
Running the Helix Server on a virtual machine has always been a supported configuration. Virtual machines do introduce additional layers of processing, with performance implications: historical data has shown a 5% loss of performance in the branchsubmit benchmark and a 15% loss in the browse benchmark. These numbers may vary in different environments and in use cases not covered by these benchmarks. A more comprehensive treatment of virtualized Helix Server performance can be found in the whitepaper "Perforce Versioning Service on VMWare vSphere" (PDF), produced jointly by VMWare and Perforce.
For more information on specific deployment strategies, please see a selection of Customer Case Studies and past conference papers.