Perforce Public Knowledge Base - Structured Server Logs
Structured Server Logs

Problem

Helix Server 2012.1 added a logging feature that allows additional information about server and user activity to be recorded in structured server logs. Structured logs are written in comma-separated value (CSV) format and are easily parsed using the p4 logparse command.

Structured server logs can be extremely verbose and grow rapidly, so a process for managing them should be put in place. Structured logs can be configured to auto-rotate based on size and to purge older logs automatically under a retention policy.

Structured server logs are meant to supplement standard Helix Server logs, not replace them.

Solution

Note: For detailed information on setting up and managing structured server logs, see the Helix Versioning Engine Administrator Guide.

This article supplements the main documentation, providing details and examples of the following:

  • Structured Server Log Records
  • Log File Format
  • Log Parsing
  • Using Custom Log Names

Basic Configuration

The basic setup for a structured log is to configure the log file name and optionally set an auto-rotation and retention policy. For example, to set up an errors.csv structured log to auto-rotate when it reaches 100MB and to retain the last 10 logs:

p4 configure set servername#serverlog.file.1=errors.csv
p4 configure set servername#serverlog.maxmb.1=100
p4 configure set servername#serverlog.retain.1=10

Auto-rotation and retention policies are optional but recommended as a best practice to manage the size of structured logs. If you do not specify a servername with your p4 configure set command, the configurable is set for all servers in a Helix distributed environment, including edge and replica servers, unless those servers override the global configuration with their own setting.
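
For example, a hedged illustration of setting a global default (no servername# prefix) and then checking what each server will use; the index 2 here is only illustrative:

p4 configure set serverlog.file.2=commands.csv
p4 configure show allservers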

Structured log files are automatically rotated on checkpoint, journal rotation, overflow of associated serverlog.maxmb.N limit if configured, and the p4 logrotate command. To disable structured log rotation after checkpoint or journal rotation, set the dm.rotatelogwithinjnl configurable to 0.
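
As a hedged example of managing rotation manually (see p4 help logrotate on your release for the exact options and required access level):

p4 logrotate
p4 configure set dm.rotatelogwithinjnl=0

The first command rotates the structured logs on demand; the second stops them from rotating automatically at checkpoint and journal rotation.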

Use the p4 -ztag logstat -s command to get a current summary of journal and log sizes:

p4 -F '%file% %fileSize% %fileNum%' -ztag logstat -s
journal 1789 44
log 86101111 
all.csv 729498 21
commands.csv 120266 8
errors.csv 1890 4
audit.csv 0 2
track.csv 607348 16
user.csv 0 0
events.csv 0 4
integrity.csv 0 1
auth.csv 0 0
route.csv 0 0

The configuration of structured logs is dynamic; after p4 configure set is run, no server restart is required. The server creates the log and starts writing to it when it processes the next command.

The name of a structured log automatically determines the events that log will capture. If you want to use custom log names, see the Using Custom Log Names section below for how to configure specific events for a custom log name.

Structured Server Log Records

All structured server log records contain an event type number (f_eventtype), written as the first field in the log record:

Number		Name	        	Description 
0			command-start		Command Start
1			command-compute		Command Compute
2			command-end			Command End
3			errors-all			All errors
4			errors-failed		Failed errors - user errors ('user' has not been enabled, Invalid option:, Submit failed)
5			errors-fatal		Fatal Errors - system errors (Database open errors, Operation 'open' failed)
6			audit				Audit Events (p4 sync, p4 archive)
7			track-usage			Tracking - rusage
8			track-rpc			Tracking - rpc
9			track-db			Tracking - db
10			user				User Events from 'p4 logappend'
11			trigger				Trigger Events
12			event				startup, shutdown, restart, jj, jc
13			purge				purge-rev (p4 obliterate)
14			network-estimates	Network Estimates
15			integrity			Replica integrity checking events
16			auth				Login events
17			route				network route of client connections
18			audit-size			Audit Events (including file revision's size)

The remaining fields in a log record are based on event type:

Field		Description			Event Types
eventtype		Event Type                All
timestamp		Unix timestamp            All
timestamp2		High-precision timestamp  All
date			Date                      All
pid				Process ID                All
cmdno			Command Number            All
user			User                      All
client			Client                    All
func			Function                  All
host			Host                      All
prog			Program                   All
version			Version                   All
args			Command Arguments         command-(start|compute|end)
severity		Severity Level            errors-(all|failed|fatal)
subsys			Subsystem                 errors-(all|failed|fatal)
subcode			Subsystem Code            errors-(all|failed|fatal)
text			Error Text                errors-(all|failed|fatal)
action			Audit action              audit
file			File                      audit
rev				Revision                  audit
trackType		Tracking type             track-(usage|rpc|db)
timer			Timer                     track-usage
utime			User time                 track-usage
stime			System time               track-usage
io_in			I/O reads                 track-usage
io_out			I/O writes                track-usage
net_in			IPC in                    track-usage
net_out			IPC out                   track-usage
maxrss			Max physical memory       track-usage
page_faults		Page Faults               track-usage
recvCount		Receive count             track-rpc
sendCount		Send count                track-rpc
recvBytes		Receive bytes             track-rpc
sendBytes		Send bytes                track-rpc
rpc_hi_mark_fwd	Highmark forward          track-rpc
rpc_hi_mark_rev	Highmark reverse          track-rpc
recvTime		Receive time              track-rpc
sendTime		Send time                 track-rpc
dbName			Database table name       track-db
pagesIn			Pages In                  track-db
pagesOut		Pages Out                 track-db
pagesCached		Pages Cached              track-db
reorderIntl		Reorder internal          track-db
reorderLeaf		Reorder leaf              track-db
readLocks		Read locks                track-db
writeLocks		Write locks               track-db
gets			Gets                      track-db
positions		Positions                 track-db
scans			Scans                     track-db
puts			Puts                      track-db
deletes			Deletes                   track-db
triggerAction	Trigger action            trigger
triggerData		Trigger data              trigger
eventCode		Event code                event
eventInfo		Event info                event
action			Action                    purge
file			File                      purge
rev				Revision                  purge
event			Event                     default

For a specific log record, use the event type and p4 logschema to get details on the specific fields in the record. For example, an event record (type 12) from the log:

12,1474054242,775358804,2016/09/16 12:30:42 775358804,21204,4,shutdown

p4 -F '%f_name%' logschema 12
f_eventtype
f_timestamp
f_timestamp2
f_date
f_pid
f_eventCode
f_eventInfo

Or you can simply run p4 logparse to show you the fields and the values:

p4 logparse events.csv 
... f_eventtype 12
... f_timestamp 1474054242
... f_timestamp2 775358804
... f_date 2016/09/16 12:30:42 775358804
... f_pid 21204
... f_eventCode 4
... f_eventInfo shutdown
... f_lognum 0
... f_logfile events.csv
... f_offset 71
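
Because structured logs are plain CSV files, they can also be summarized outside the server. As a rough sketch (it miscounts any record whose quoted field spans multiple lines), the following counts records per event type using the first CSV field:

awk -F, '{ count[$1]++ } END { for (t in count) print t, count[t] }' all.csv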

Notes on empty or unknown in integrity.csv

If you see an "empty" field, the table on the replica is empty. In this case, the table is db.changex so there are no pending changelists in the system.

15,1476985796,756730135,2016/10/20 13:33:16 756730135,37292,9,db.changex,,,0,0,0x0,0x0,,,empty

If you see an "unknown" field, this shows that Perforce was not able to verify the checksum. In the example below, Perforce was not able to verify db.user.rp on the replica. Replica-only tables like db.user.rp, db.have.rp, and db.view.rp do not logically exist on the master/commit server so this is acceptable.

15,1475860873,882557210,2016/10/06 09:34:33 882557210,37292,9,db.user.rp,,,0,0,0x0,0x9CF7B63E,,,unknown (master)

Log Parsing

The p4 logparse command is used to parse structured server logs. The filtering (-F) and formatting (-T) options are based on the field names of the record's event type. Here are some p4 logparse examples showing filtering and formatting usage:

Determine from commands.csv who ran p4 verify and how long it took to run

p4 logparse -F 'f_func=user-verify' -T 'f_eventtype f_user f_date f_func f_args f_pid' commands.csv 
... f_eventtype 0
... f_date 2016/09/16 10:11:55 357132000
... f_pid 89688
... f_user bruno
... f_func user-verify
... f_args -q://...

... f_eventtype 2
... f_date 2016/09/16 10:11:58 925980000
... f_pid 89688
... f_user bruno
... f_func user-verify
... f_args -q://...

The results show a command-start (0) and a command-end (2) event; the difference between their date (or timestamp) fields gives the verify runtime.
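
A hedged sketch of automating that calculation, assuming the logparse output lists f_eventtype and f_timestamp before f_pid within each record (as in the schema order shown above):

p4 logparse -F 'f_func=user-verify' -T 'f_eventtype f_timestamp f_pid' commands.csv |
awk '$2 == "f_eventtype" { ev = $3 }
     $2 == "f_timestamp" { ts = $3 }
     $2 == "f_pid"       { if (ev == 0) start[$3] = ts
                           else if (ev == 2 && ($3 in start))
                               printf "pid %s: verify ran for %d seconds\n", $3, ts - start[$3] }'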

Report from commands.csv who ran p4 obliterate and how long it took to run

p4 logparse -F 'f_func=user-obliterate' -T 'f_eventtype f_user f_date f_func f_args f_pid' commands.csv 
... f_eventtype 0
... f_date 2016/09/16 10:16:49 250325000
... f_pid 89725
... f_user shane
... f_func user-obliterate
... f_args -y://depot/www/dev/index.html

... f_eventtype 2
... f_date 2016/09/16 10:16:50 406227000
... f_pid 89725
... f_user shane
... f_func user-obliterate
... f_args -y://depot/www/dev/index.html

For a specific obliterate, using the audit.csv log, report which file revisions were purged

p4 logparse -F 'f_pid=89725 f_action=purge-rev' -T 'f_file f_rev' audit.csv 
... f_file //depot/www/dev/index.html
... f_rev #3
... f_file //depot/www/dev/index.html
... f_rev #2
... f_file //depot/www/dev/index.html
... f_rev #1

Report from errors.csv all errors containing the word 'permission' in the error text that were not generated by the user 'swarm'

p4 logparse -F 'f_text=*permission* ^f_user=swarm' -T 'f_eventtype f_user f_date f_args f_text' errors.csv 
... f_date 2016/09/16 13:10:43 775433162
... f_user scommon
... f_args -y://depot/www/...
... f_text You don't have permission for this operation.
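
As a further hedged sketch, the same filter can feed a quick per-user tally of permission errors:

p4 logparse -F 'f_text=*permission*' -T 'f_user' errors.csv |
awk '$2 == "f_user" { count[$3]++ } END { for (u in count) print u, count[u] }'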

Report from errors.csv all fatal system errors, displaying only the fields f_eventtype, f_user, and f_text

p4 logparse -F 'f_eventtype=5' -T 'f_eventtype f_user f_text' errors.csv 
... f_eventtype 5
... f_user vkanczes
... f_text Database open error on db.user!

Report from audit.csv all files user bruno has run p4 sync on

p4 logparse -F 'f_user=bruno' -T 'f_date f_file f_rev' audit.csv 
... f_date 2016/09/16 13:23:51 943341276
... f_file //depot/www/dev/Jam.html
... f_rev #2
... f_date 2016/09/16 13:23:51 980179840
... f_file //depot/www/dev/Jambase.html
... f_rev #2
... f_date 2016/09/16 13:23:51 985908351
... f_file //depot/www/dev/Jamfile.html
... f_rev #3
... f_date 2016/09/16 13:23:51 990579905
... f_file //depot/www/dev/Jamlang.html
... f_rev #2

Report from audit.csv all users that have run p4 sync on a specific depot file

p4 logparse -F 'f_file=//depot/www/dev/Jam.html' -T 'f_user f_date f_file f_rev' audit.csv 
... f_date 2016/09/16 13:23:51 943341276
... f_user bruno
... f_file //depot/www/dev/Jam.html
... f_rev #2
... f_date 2016/09/16 13:29:42 451445324
... f_user scommon
... f_file //depot/www/dev/Jam.html
... f_rev #2

The filtering options used with p4 logparse are similar to what can be used with p4 jobs -e when searching for jobs; see p4 help jobview for further details on the filtering capabilities.
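
For example, exact matches, negation (^), and implicit AND can be combined in a single filter; the user name builduser below is hypothetical:

p4 logparse -F 'f_eventtype=2 f_func=user-sync ^f_user=builduser' -T 'f_date f_user f_args' commands.csv

This reports the command-end records for every p4 sync not run by builduser.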

Using Custom Log Names ("undocumented" functionality)

You can use custom structured server log names. For a custom log name, configure the events the log captures using the serverlog.events.N configurable. For example:

p4 configure set serverlog.file.8=custom.csv
p4 configure set serverlog.events.8='command-start,command-end'

Provide serverlog.events.N with a comma-separated list of the events the custom log will capture.
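
A quick, hedged way to confirm the custom log is configured and receiving events (names taken from the example above):

p4 configure show | grep serverlog
p4 logparse custom.csv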
