Thursday, May 29, 2014

Adding Column Headings to a file in the IFS, continued

I received a message from Gary Patterson in response to a comment I made on my post Adding Column Headings to a file in the IFS. I said that "I did try the SQL SYSCOLUMNS at first. The problem was with the way SQL interprets the Column Headings. Rather than break them out into 3 fields, it interprets them as 1 very long field. The very long field was just too unwieldy to use."

In his message Gary explained: "SYSCOLUMNS COLUMN_HEADING is a 60 byte column containing up to three rows of column heading information. Each row is exactly 20 bytes long and is populated by either the DDS COLHDG keyword, or the SQL LABEL ON DDL statement." He also included an example of the SQL code to "build a column heading row dynamically".
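The layout Gary describes can be carved up with SUBSTR. Here is a minimal sketch of that idea; the table and library names (MYFILE, MYLIB) are placeholders, and the offsets simply follow the three 20-byte rows he mentions:

```sql
-- COLUMN_HEADING holds up to three 20-byte heading rows;
-- SUBSTR splits them apart and TRIM removes the padding
SELECT COLUMN_NAME,
       TRIM(SUBSTR(COLUMN_HEADING, 1, 20))  AS HEADING_1,
       TRIM(SUBSTR(COLUMN_HEADING, 21, 20)) AS HEADING_2,
       TRIM(SUBSTR(COLUMN_HEADING, 41, 20)) AS HEADING_3
  FROM QSYS2.SYSCOLUMNS
 WHERE TABLE_NAME   = 'MYFILE'
   AND TABLE_SCHEMA = 'MYLIB'
 ORDER BY ORDINAL_POSITION
```

This gives the three heading lines as separate, manageable fields rather than one unwieldy 60-byte value.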

Wednesday, May 28, 2014

Adding Column Headings to a file in the IFS

ifs file column headings csv

In an earlier post I showed how it was possible to copy a file to the IFS in CSV format using the CPYTOIMPF command. Unless the recipients of the file know what the columns contain, they will come back to you asking what they are.

So what can you do to provide column headings?
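One common approach is to build the heading row yourself with a UNION, casting the data columns to character so the two halves are compatible. This is only a sketch, assuming a hypothetical file MYLIB.SALES with columns ORDNUM, CUSNAM, and AMOUNT; the result could then be copied to the IFS:

```sql
-- The SEQ column forces the heading row to sort first;
-- numeric columns are CHAR-cast to match the heading literals
SELECT ORDNUM, CUSNAM, AMOUNT
  FROM (SELECT 0 AS SEQ,
               'ORDER'    AS ORDNUM,
               'CUSTOMER' AS CUSNAM,
               'AMOUNT'   AS AMOUNT
          FROM SYSIBM.SYSDUMMY1
        UNION ALL
        SELECT 1, CHAR(ORDNUM), CUSNAM, CHAR(AMOUNT)
          FROM MYLIB.SALES) AS T
 ORDER BY SEQ
```

The trade-off is that every column comes out as character data, which is usually acceptable for a CSV file destined for a spreadsheet.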

Monday, May 26, 2014

Overview of IBM i 7.2 by Tim Rowe

7.2 v7r2

Tim Rowe, IBM i Business Architect for Application Development and Systems Management, has written an overview of the new IBM i release 7.2.

There is a wealth of new capabilities available with the IBM i 7.2 release. In this article, you will learn about some of the database, security, manageability, and mobile device integration enhancements that keep your IBM i solutions well-positioned for today and into the future.

In it he covers:

  • Information into insights - DB2 for i.
  • Solutions for a modern mobile world - mobile and web.
  • Resiliency without downtime – HyperSwap support in IBM PowerHA.
  • Security without compromise – includes new system value for password rules.
  • Strong integrated value – too much to mention here.

You will find his article here: IBM i 7.2 – The great beyond.

Wednesday, May 21, 2014

Capturing messages from Jar

java jar addenvvar rmvenvvar

In November I described how it was possible to zip files in the IFS using the Java Jar command. John King emailed me to let me know that there is a way to monitor messages coming from QShell by using the environment variables QIBM_QSH_CMD_OUTPUT and QIBM_QSH_CMD_ESCAPE_MSG.

I am going to use the same example I used before. I have a file, testfile.csv, that I want to zip using the Jar command. Then I will do the opposite and unzip the zipped file. All of these files will be in my folder in the IFS called MyFolder.

Monday, May 19, 2014

Moving from Open Query File to SQL

opnqryf sql

I was catching up on my reading this weekend when I found an IBM white paper discussing the advantages of moving from using the Open Query File command (OPNQRYF) to SQL.

When I first started programming on the AS/400 I used the OPNQRYF command frequently, but I do not use it any more. Everything I was doing with OPNQRYF I feel I can do so much better with SQL embedded in RPGLE and CL. For how to embed SQL statements in your CL, see Run SQL statements in your CL. There is also a performance advantage to using SQL: OPNQRYF uses the Classic Query Engine, which is slower than the SQL Query Engine used by SQL, see Query Engines: CQE versus SQE.
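To illustrate the kind of conversion involved, here is a rough sketch using a hypothetical file MYLIB/CUSTOMERS; the OPNQRYF selection and key field map directly onto a WHERE and an ORDER BY clause:

```sql
-- Roughly equivalent to:
--   OPNQRYF FILE((MYLIB/CUSTOMERS))
--           QRYSLT('STATUS *EQ "A"')
--           KEYFLD((CUSNAM))
SELECT *
  FROM MYLIB.CUSTOMERS
 WHERE STATUS = 'A'
 ORDER BY CUSNAM
```

Unlike OPNQRYF, there is no shared open or OVRDBF to manage; the statement can be embedded directly in an RPGLE program.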

This white paper discusses the major advantages that SQL has in comparison to the IBM i5/OS Open Query File (OPNQRYF) command. It also provides a methodology for converting OPNQRYF applications to an SQL-based model, as well as some key points to consider during the conversion process.

Even though the white paper was written in 2008, the reasons it gives for moving from OPNQRYF to SQL are still valid, perhaps more so with the enhancements added to SQL in subsequent releases of IBM i.

You can open the PDF file directly via this link: IBM white paper: Moving from OPNQRYF to SQL

Wednesday, May 14, 2014

Creating a secure temporary file in SQL

sql db2 declare global temporary table

After posting Creating a SQL table on the fly I received a communication from John Erps asking why I had not used the SQL DECLARE GLOBAL TEMPORARY TABLE statement to create a temporary SQL table.

The DB2 SQL section of IBM’s Infocenter website describes DECLARE GLOBAL TEMPORARY TABLE as:

The DECLARE GLOBAL TEMPORARY TABLE statement defines a declared temporary table for the current application process. The declared temporary table resides in the work file database and its description does not appear in the system catalog. It is not persistent and cannot be shared with other application processes. Each application process that defines a declared temporary table of the same name has its own unique description and instance of the temporary table. When the application process terminates, the temporary table is dropped.

So how does it work on the IBM i?
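As a taste of the statement itself, here is a minimal sketch; the table and column names are hypothetical, and on the IBM i the declared table is referenced through the SESSION qualifier:

```sql
-- WITH REPLACE recreates the table if it already exists in this job;
-- ON COMMIT PRESERVE ROWS keeps the rows across commits
DECLARE GLOBAL TEMPORARY TABLE SESSION.WORKTABLE
  (ORDNUM  DECIMAL(7, 0),
   CUSNAM  CHAR(30))
  WITH REPLACE
  ON COMMIT PRESERVE ROWS ;

INSERT INTO SESSION.WORKTABLE
  VALUES (1, 'EXAMPLE CUSTOMER') ;

SELECT * FROM SESSION.WORKTABLE ;
```

When the job ends the table disappears, which is exactly what you want from a work file.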

Wednesday, May 7, 2014

2 APIs to help with zip and unzipping files

qzipzip qzipunzip zip file

In November I discussed zipping files in the IFS using the Java Jar command in QShell, and Djurre Postma posted in the comments that two new APIs had been introduced in IBM i 7.1 to perform the same function without using QShell:

  • QzipZip – to zip files and folders.
  • QzipUnzip – to do the opposite and extract files and folders from a zipped file.

At first glance using them looks more complicated than using the Java Jar command, and the documentation provided by IBM shows the APIs being called from C. But once I determined how to code the equivalent in "all free" RPGLE, it was a lot easier than I had first assumed.

Thursday, May 1, 2014

When was object last used?

object last used dspobjd

Any company that has used an IBM i, or any other server, finds that over time the amount of free disk space becomes an issue. Most of the reduction in free disk space can probably be explained by the amount of data being added to files, which makes regular data purges a necessity. But how many companies also consider purging unused objects from their servers?

As part of the regular maintenance of the applications I am responsible for, I purge objects that have not been used in the past two years. I do this for two reasons:

  1. If I need to make a mass change I only have to be concerned with objects that are being used.
  2. By removing objects that are not used, the size of the library is kept to a minimum. Therefore, the time to save the library as part of the daily save is also kept to a minimum.

Once I identify an object that has not been used in that time, I move it from the "live" library to an "archive" library. If there is a source member for the object, I move it to a source file in the "archive" library too.

When the purging has been completed the "archive" library can be backed up to tape, and then the library can be deleted from the IBM i. The tape containing the save of the "archive" library is stored in a fireproof safe, or equivalent, on site. If any of the purged objects are needed they can be restored from the tape.

So how do you identify when objects were last used?
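Besides the DSPOBJD command, the last-used information can also be queried with SQL. This is a sketch, assuming the QSYS2.OBJECT_STATISTICS table function is available on your release; MYLIB is a placeholder library name:

```sql
-- List objects not used in the last two years;
-- LAST_USED_TIMESTAMP is null for objects that have never been used
SELECT OBJNAME, OBJTYPE, LAST_USED_TIMESTAMP
  FROM TABLE(QSYS2.OBJECT_STATISTICS('MYLIB', '*ALL')) AS T
 WHERE LAST_USED_TIMESTAMP IS NULL
    OR LAST_USED_TIMESTAMP < CURRENT TIMESTAMP - 2 YEARS
 ORDER BY LAST_USED_TIMESTAMP
```

Having the result as a query makes it easy to feed the candidate list straight into the move-to-archive step described above.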