When I heard what was coming in IBM i 7.2 TR3 and 7.1 TR11 I was excited to learn that two of the new introductions would be a SQL View and a SQL Table function to list the spool files in output queues. Retention of spool files has always been a contentious issue in all the IBM i shops I have worked in. Managing spool files is a thankless task, but without it I am always surprised how much disk space is lost to spool files after a few months.
I always wanted a quick and easy way to identify the following information about spool files:
- How old is the spool file?
- How big is it?
- Who generated it?
- What job generated it?
With the new View and Table function I can get my hands on this information in a couple of minutes at most.
To be able to get information back from the View and the Table function you will need one of the following authorities:
- *SPLCTL special authority
- If not, read authority to the output queue
- If not, *JOBCTL special authority, and the output queue has OPRCTL(*YES)
Both the View and the Table function reside in the library QSYS2, and both are called OUTPUT_QUEUE_ENTRIES.
What is the difference between a View and a Table function? A Table function requires parameters passed to it, a View does not.
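As a quick sketch of the difference (the output queue library and name here are just example values):

SELECT * FROM QSYS2.OUTPUT_QUEUE_ENTRIES

SELECT * FROM TABLE(QSYS2.OUTPUT_QUEUE_ENTRIES('QGPL','PRT01','*NO')) A

The first statement returns the entries of every output queue I am authorized to see; the second only returns the entries of the one output queue named in its parameters.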
When I looked through the available columns in this View I was tempted to call this post "Everything you wanted to know about Spool files, but never dared to ask". There are columns for things I don't think I would ever want to inquire about. As there are so many I am not going to list them all here; instead I will refer you to IBM's page in the Knowledge Center here.
The columns I care about when trying to establish the worst spool file offenders are:
| Column | Description |
| --- | --- |
| SIZE | Size of the spool file in kilobytes |
| SPOOLNAME | Spool file name |
| OUTQ | Output queue the spool file is in |
| OUTQLIB | Library the output queue is in |
| CREATED | When the spool file was created (timestamp) |
| USER_NAME | User profile that created the spool file |
| STATUS | Status of the spool file |
| PAGES | Number of pages |
| JOB_NAME | Qualified job name that produced the spool file |
| FILENUM | Spool file number |
To find the worst offenders I would want to sort the results of my SQL statement by size, in a descending order:
SELECT SIZE, SPOOLNAME, OUTQ, OUTQLIB, CREATED, USER_NAME,
       STATUS, PAGES, JOB_NAME, FILENUM
  FROM QSYS2/OUTPUT_QUEUE_ENTRIES
 ORDER BY SIZE DESC
The output I would get would look like this:
  SIZE  SPOOLNAME  OUTQ   OUTQLIB  CREATED
 3,144  QSYSPRT    HOLDQ  QGPL     2015-11-08-17.34.06.452000
   712  QPJOBLOG   PRT01  QGPL     2015-01-24-18.104.22.1686000

 USER_NAME  STATUS  PAGES  JOB_NAME                   FILENUM
 ALLANSK    READY     425  310442/ALLANSK/CREATE_INV        1
 SUSANB     HELD       90  066063/SUSANB/RACB1001           9
It would be better for the results to be in a file that I can then work through the list to eliminate the offenders. To do that I could create a simple CL program like this:
01  PGM
02  DLTF FILE(MYLIB/SPLFILES)
03  MONMSG MSGID(CPF2105)
04  RUNSQL SQL('CREATE TABLE MYLIB/SPLFILES AS +
                  (SELECT SIZE,SPOOLNAME,OUTQ,OUTQLIB,CREATED,+
                   USER_NAME,STATUS,PAGES,JOB_NAME,FILENUM +
                   FROM QSYS2/OUTPUT_QUEUE_ENTRIES +
                   ORDER BY SIZE DESC +
                   FETCH FIRST 200 ROWS ONLY) +
                   WITH DATA') +
           COMMIT(*NONE)
05  ENDPGM
Line 4: I have used CREATE TABLE to create an output table/file that contains the results of my SQL statement. The SELECT is similar to the one above; the only difference is that I only want the top 200 worst offenders.
Now I can review the contents of my table/file and decide which spool files can be deleted. Using the columns SPOOLNAME, JOB_NAME, and FILENUM I can use the Delete Spooled File command, DLTSPLF, to delete the spool files.
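For example, to delete the largest spool file shown in the output above I would take its SPOOLNAME, JOB_NAME, and FILENUM and use them like this:

DLTSPLF FILE(QSYSPRT) JOB(310442/ALLANSK/CREATE_INV) SPLNBR(1)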
Table function: OUTPUT_QUEUE_ENTRIES()
The Table function is more restrictive than the View, as you can only run it for one output queue at a time. That is also its advantage: if you only need the information from one output queue it is faster to use this than the View.
This table function has three parameters:
- Output queue library
- Output queue name
- Detailed (*YES) or summary (*NO) information
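Putting those three parameters together in that order, a call asking for the detailed information would look something like this (QGPL and PRT01 are just example values):

SELECT * FROM TABLE(QSYS2.OUTPUT_QUEUE_ENTRIES('QGPL','PRT01','*YES')) A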
The same columns are available in the detailed information of the Table function as are in the View. The exceptions are the columns for the output queue and its library, but you already know those to be able to call this function. The summary information contains all the columns I am interested in; therefore, most of the time I will be calling this function with the third parameter set to '*NO'. While I did not notice any difference in the time taken to return the results between the detailed and summary information, IBM recommends that if you do not need the extra columns available in the detail you call the summary version, as it will be quicker.
Unlike the View, where the columns have both short and long names, the column names returned by the Table function only have long names (which happen to be the same long names as the View's columns). I am not going to list all of the columns returned by this function here; instead I will refer you to the appropriate page in IBM's Knowledge Center here.
In this example I want to find the oldest spool files in the output queue QEZJOBLOG that are ready to print. As I am interested in the same columns as before I am going to use the summary information value.
SELECT CREATE_TIMESTAMP, USER_NAME, SIZE, SPOOLED_FILE_NAME,
       TOTAL_PAGES, JOB_NAME, FILE_NUMBER
  FROM TABLE(QSYS2.OUTPUT_QUEUE_ENTRIES('*LIBL','QEZJOBLOG','*NO')) A
 WHERE STATUS = 'READY'
 ORDER BY CREATE_TIMESTAMP
Notice the character, A, at the end of the FROM line. This is a correlation name, and it can be any character from the alphabet. If it is omitted the statement will not run.
The results look like:
CREAT00001                     USER_NAME  SIZE  SPOOL00001  TOTAL00001
2015-12-04-22.214.171.1249000  QPGMR        40  QPJOBLOG             2
2015-12-04-126.96.36.1993000   QSECOFR     520  QPJOBLOG            76
2015-12-04-22.58.06.645000     QSYS         32  QPJOBLOG             1
2015-12-04-188.8.131.522000    QSYS         32  QPJOBLOG             1

JOB_NAME                FILE_00001
320855/QPGMR/C********           1
318044/QSECOFR/Q******          15
320849/QSYS/S*********           1
320845/QSYS/S*********           1
I can see myself using both of these new offerings frequently as part of my regular disk usage reporting, as gathering the information about old and large spool files is so much simpler than it was before.
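For that reporting I will probably start from a summarized version of the View, something along these lines (my own sketch, built only from the View's columns described above):

SELECT OUTQLIB, OUTQ, COUNT(*) AS SPOOL_FILES, SUM(SIZE) AS TOTAL_KB
  FROM QSYS2/OUTPUT_QUEUE_ENTRIES
 GROUP BY OUTQLIB, OUTQ
 ORDER BY TOTAL_KB DESC

That gives me one row per output queue, showing how many spool files it contains and how many kilobytes they are taking up, with the worst queues at the top.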
You can learn more about this from the IBM website:
This article was written for IBM i 7.2 TR3 and 7.1 TR11, and will not work with earlier releases or TRs.