SELECT CASE WHEN UPPER(BEHAVIOR) = 'NICE' THEN 'MERRY CHRISTMAS'
ELSE 'BAH HUMBUG!'
END AS "Wishing you"
FROM CONTACTS
WHERE PERSON = :You
AND THIS_YEAR = 2025
Wishing you
---------------
MERRY CHRISTMAS
Have you ever needed to determine if a number is even or odd? I have had to in various scenarios. This has been made a lot easier by a couple of scalar functions added as part of the latest Technology Refreshes.
In the past I would need to check the remainder of dividing a number by two.
If I was to do this in RPG my program could look like:
**free
dcl-s Number packed(1 : 0) ;
dcl-s Remainder packed(1 : 0) ;
Number = 6 ;
Remainder = %rem(Number : 2) ;
dsply ('1. Remainder = ' + %char(Remainder)) ;
Number = 7 ;
Remainder = %rem(Number : 2) ;
dsply ('2. Remainder = ' + %char(Remainder)) ;
*inlr = *on ;
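For comparison, the traditional remainder check can also be written in SQL with the MOD scalar function. This is only a sketch; the table and column names below are placeholders, and it shows the old approach rather than the new scalar functions:

SELECT NUMBER_COLUMN,
       CASE WHEN MOD(NUMBER_COLUMN, 2) = 0 THEN 'EVEN'
            ELSE 'ODD'
       END AS EVEN_OR_ODD
  FROM MYLIB.SOME_TABLE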
One thing that has always annoyed me within RPG compiler listings is how they handle long variable names. In the "Additional diagnostic messages" section of the listing it did not list the entire variable name, just the first seven characters followed by an ellipsis ( ... ). This could lead to some confusion if there was more than one variable with the same first seven characters.
I know this is an extremely simple piece of code, but it illustrates what happened:
01  **free
02  dcl-s Really_not_this_one char(1) ;
03  Really_long_variable_name = 'X' ;
04  *inlr = *on ;
Line 2: A variable is defined with the name Really_not_this_one.
Line 3: I then use a variable that has not been defined, Really_long_variable_name, which has the same first seven characters as the variable I defined on line 2.
When I compile this it fails with a level 30 error, as the variable on line 3 has not been defined.
Included within the new Technology Refreshes, IBM i 7.6 TR1 and IBM i 7.5 TR7, come three new date formats to help us with the 2040 date problem. All of the new date formats include the century in the date:
These, like the other date formats, can be used with the following built-in functions (BiFs) and operation codes:
Let me give some examples of using these new date formats:
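As the new format names are not shown above, here is a sketch using *ISO, an existing format that already includes the century, to illustrate how a date format is used with the %date and %char BiFs; the new formats are used in the same way:

**free
dcl-s WorkDate date ;
dcl-s DateString char(10) ;

WorkDate = %date('2041-02-28' : *iso) ;  // character value converted to a date using a century-inclusive format
DateString = %char(WorkDate : *iso) ;    // and back to character, century included

dsply DateString ;
*inlr = *on ;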
This is the final part of the following trilogy:
The earlier posts described how to manually perform the SQL statements needed. In this post I am going to show a program that combines both of the SQL statements, so it can be run time and again.
I am not going to repeat a lot of what I said in those posts, therefore, I recommend you read them before you start with this one.
I will show this program in three parts, as I think that will make it easier to explain and for you to understand. This is the first part:
This is the second part of the trilogy I started last week:
In this post I will be giving an example of how I chose to remove the deleted records from all of the files in a library.
When a record is deleted from a file its space is not available to be reused, unless the file is reusing deleted records. Over time this can result in files having a few active records and many deleted ones. This is a waste of the available storage.
The Reorganize Physical File Member command, RGZPFM, will remove the deleted records from a physical file. You need to be careful when using this command: if any of the files in the library are record address files, the reorganization could make it impossible to retrieve the expected records from the file, so do not reorganize them.
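As a minimal sketch, assuming MYFILE in MYLIB is a file that is safe to reorganize, the command is simply:

RGZPFM FILE(MYLIB/MYFILE)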
Recently I have been surprised that there have been several messages during the day-end process alerting that various files are full. The on-call system administrator has been answering the messages without issue, and the job continues. This is a sign that some "clean up" needs to be performed on the files this error happened to:
I could use the system reply list, which would automatically answer the message for me. But that would apply the same response every time the error happens, and in this case to all files, which is not what I want.
The changes I am going to suggest can easily be performed on a single-file basis. I want to be proactive and stop the error from happening for all the files in a library.
Rather than pack all of what I did into one post I am going make this a trilogy:
I do use global variables; maybe "a lot" is an exaggeration, but the phrase "many times" springs to mind. There are just some SQL statements I build in an RPG program that do not allow me to use a local (RPG) variable. I can use the value contained within a global variable in its place.
I do have a few global variables I use for a single purpose, account number, company number, etc. There are times I just want to have somewhere to put a value into so that I can use it in a SQL statement. To accommodate this general functionality, I have created three "generic" global variables, that I can put miscellaneous values into and then use. One of the advantages of using a global variable is that the value is "local" to the job, i.e., the value in the global variable cannot be seen or used by another job. Therefore, once I have a generic global variable any job can use it, without the danger of overlaying the value from another job.
I decided to create three generic global variables.
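As a sketch of what I mean, with MYLIB and all of the variable, table, and column names below being placeholders, creating and using a generic global variable looks like this:

-- Create a generic global variable (one time only)
CREATE OR REPLACE VARIABLE MYLIB.GENERIC_VARIABLE_1 VARCHAR(100) ;

-- Put a value into it, then use it in another statement
SET MYLIB.GENERIC_VARIABLE_1 = 'A12345' ;

SELECT *
  FROM MYLIB.SOME_TABLE
 WHERE ACCOUNT_NUMBER = MYLIB.GENERIC_VARIABLE_1 ;

Because the value is local to the job, two jobs running these statements at the same time each see only their own value.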
I had found that the Work Query command, WRKQRY, was not working on the IBM i 7.6 partition I have been using. I thought that, perhaps, Query had been uninstalled from the partition.
I have learned that IBM is aware of this issue, described as:
WRKQRY does nothing, even though the QU1 LPP is installed
There is a PTF to re-enable the command, which was released on November 19. As the latest Technology Refresh was released after that date, if you have already applied IBM i 7.6 TR1 I would check whether you have also installed this PTF.
The "Known Issues" page is here.
And the "Fix information" page for the correcting PTF, SJ05457, is here.
The Run Query command, RUNQRY, has been working.
This article was written for IBM i 7.6.
On Friday, along with the new Technology Refreshes, IBM announced there was a new version of ACS. When I checked my ACS for an update (Help > Check for updates) the day before I published this post, I was not alerted that there is a new version.
If you do not see the window telling you there is an update, you will need to go to IBM's ACS website, http://ibm.biz/IBMi_ACS (the URL is case sensitive), and download the install file from there.
Don't worry if you don't have an IBMid, you can create one in a couple of minutes.
Confirm your agreement with IBM's license.
You will then be presented with the "IBM i Access Client Solutions" page. The download for ACS's latest version is the first download.
The Technology Refresh PTFs for IBM i 7.6 TR1 and 7.5 TR7 become available today.
My recommendation is you download the latest CUM PTFs as that should include all of the PTFs for your release.
Do check if the Database (SQL) PTFs are included:
And the ones for RPG too:
If, after applying IBM i 7.5 TR7, you notice a performance degradation you need to follow the instructions on this page. It appears that this is not an issue for IBM i 7.6 TR1.
The question was posed: is it possible to retrieve the name of the printer that has been entered into a Query, for all the Queries in a library?
I already knew how to retrieve the SQL statement from a Query. Alas, the information about the printer is not found that way.
After some searching I found a SQL procedure that will give me the information I need: PRINT_QUERY_DEFINITION. For some reason there is no mention of this in IBM's documentation portal. I found reference to it in the IBM Support portal.
PRINT_QUERY_DEFINITION generates a spool file that lists all the information about the Query, including the choice of output.
My scenario is that I want a list of the Queries in a library and which printers they are defined to use.
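Producing the list of Query objects themselves is straightforward; as a sketch, with MYLIB as a placeholder library, OBJECT_STATISTICS returns the *QRYDFN objects:

SELECT OBJNAME, OBJTEXT
  FROM TABLE(QSYS2.OBJECT_STATISTICS('MYLIB', '*QRYDFN'))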
Not all Queries use printers. Some will only display, and others will output to an output file. For this example, I created four Queries. The files they are built over, fields selected, selection criteria, column formatting, etc. are irrelevant. All that matters is the output type. This can have three values:
A couple of weeks ago I wrote about deleting the System Audit Journal's receivers. The scenario had the journal's receivers not in the QSYS library, but in QGPL. Someone messaged me saying that in their IBM i partition QAUDJRN's receivers are in QSYS. When they ran the SQL procedure to delete old journal receivers, DELETE_OLD_JOURNAL_RECEIVERS, it returned no results.
CALL SYSTOOLS.DELETE_OLD_JOURNAL_RECEIVERS(
       DELETE_OLDER_THAN => CURRENT_TIMESTAMP,
       JOURNAL_RECEIVER_LIBRARY => 'QSYS',
       JOURNAL_RECEIVER => 'QAUD%',
       DELETE_UNSAVED => 'NO',
       PREVIEW => 'YES')
I checked the last save information for the journal receivers in QSYS. The SQL table function OBJECT_STATISTICS's SAVE_TIMESTAMP column was null. Using Display Object Description, DSPOBJD, the save date field, ODSDAT, was blank. How could I determine how old each of the receivers was?
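One possible way to judge their age, as the save information is blank, is to look at their creation timestamps; this sketch uses OBJECT_STATISTICS, with the QAUD% filter matching the receiver naming used above:

SELECT OBJNAME, OBJCREATED
  FROM TABLE(QSYS2.OBJECT_STATISTICS('QSYS', '*JRNRCV'))
 WHERE OBJNAME LIKE 'QAUD%'
 ORDER BY OBJCREATED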
In the Spring and Fall, IBM updates the IBM i Performance FAQ. The latest one was released on Friday.
It covers all aspects of the IBM i from hardware to software and programming. If you have not perused it I think you will find the parts that are relevant to your career interesting.
You can find it here.
For over a year we have had a SQL table function to retrieve information about SQL codes. Now we can do the same kind of thing for SQL states. The major difference is that we retrieve the information for SQL states from a SQL view: SQLSTATE_INFO in the library QSYS2.
When working with SQL codes and SQL states it is important to remember that they do not all have a one-to-one relationship. There are some SQL states that are associated with more than one SQL code, and some SQL codes that are associated with more than one SQL state.
The SQLSTATE_INFO view returns three columns:
If a SQL state is associated with more than one SQL code, each combination will have its own row.
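For a first look at the view's contents a simple statement is enough; no assumptions are made here about the column names:

SELECT *
  FROM QSYS2.SQLSTATE_INFO
 FETCH FIRST 10 ROWS ONLY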
QAUDJRN is the system's audit journal; it captures the various pieces of information that you want it to. A friend has a job that uses the Display Journal command, DSPJRN, to retrieve the data from QAUDJRN.
DSPJRN JRN(QSYS/QAUDJRN) RCVRNG(*CURCHAIN) +
       FROMTIME(&FROMTIME) TOTIME(&TOTIME) +
       ENTTYP(AF) +
       OUTPUT(*OUTFILE) OUTFILE(SOMELIB/JRN_AF)
One day his job failed, and he reached out to me for help as it had returned an error he had never seen before. He sent me the job log, and I found the following within it:
IBM i uses certificates for various functions, and certificates expire. I wanted a way to check the certificate store for any certificates that will be expiring soon. Whatever method I chose needed to be simple, so that I could use it on other partitions too.
Fortunately, there is a SQL Table Function that will give me this information, CERTIFICATE_INFO. It has two parameters:
To use this Table Function you must have *ALLOBJ and *SECADM authority.
Regular readers know that I always recommend, if this is the first time you are using a Table Function, that you view all of the columns it returns. To do that I would use the following statement:
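A sketch of such a statement, assuming the *SYSTEM store and that the password parameter is named CERTIFICATE_STORE_PASSWORD (do check the documentation for the exact parameter names):

SELECT *
  FROM TABLE(QSYS2.CERTIFICATE_INFO(
               CERTIFICATE_STORE_PASSWORD => '*NOPWD'))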
I have previously written about other types of constraints: unique, primary key, and referential. In this post I am going to describe the check constraint, which is a way I can "push" validation of data into the database, rather than have the logic in all the programs that insert, update, or delete the data from the file or table.
I am going to use a table called PARENT again. I have added a couple of additional columns to it:
CREATE TABLE MYLIB.PARENT (
  PARENT_ID INTEGER NOT NULL,
  LAST_NAME VARCHAR(30) NOT NULL,
  FIRST_NAME VARCHAR(20) NOT NULL,
  DATE_OF_BIRTH DATE NOT NULL,
  START_DATE DATE NOT NULL,
  STATUS CHAR(1) NOT NULL,
  PRIMARY KEY (PARENT_ID),
  CONSTRAINT PARENT_ID_CHECK CHECK(PARENT_ID > 0),
  CONSTRAINT START_DATE_CHECK CHECK(DATE_OF_BIRTH < START_DATE)
) ;
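With those constraints in place, a statement like the following (the values are just examples) would be rejected by the database, as PARENT_ID_CHECK requires PARENT_ID to be greater than zero:

INSERT INTO MYLIB.PARENT
       (PARENT_ID, LAST_NAME, FIRST_NAME, DATE_OF_BIRTH, START_DATE, STATUS)
  VALUES (0, 'SMITH', 'JANE', '1980-06-01', '2025-01-01', 'A')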
The germ of the idea for this post came from a question I was asked: could I make a screen that would show the top ten jobs consuming the most CPU, refreshing on a regular basis? In previous posts I have written about the parts needed to achieve the desired result; here I am going to put it all together.
How do I get the jobs that are consuming the most CPU? I can get the elapsed CPU percent and CPU time from one of my favorite Db2 for i Table functions, ACTIVE_JOB_INFO.
The statement I will be using is:
SELECT JOB_NAME,
       ELAPSED_CPU_PERCENTAGE,
       ELAPSED_CPU_TIME
  FROM TABLE(QSYS2.ACTIVE_JOB_INFO(
               RESET_STATISTICS => 'NO',
               DETAILED_INFO => 'NONE'))
 ORDER BY ELAPSED_CPU_PERCENTAGE DESC, ELAPSED_CPU_TIME DESC
 LIMIT 10
The latest round of Technology Refreshes for IBM i has been announced: 7.6 TR1 and 7.5 TR7. The planned availability date for these is November 21.
Having given the announcement and enhancement information a quick read, I see a lot of useful changes for us.
The Db2 for i enhancements that caught my eye were:
In my previous post I described how I could add several constraints to DDL Tables. Here I am going to describe how I can do the same with DDS physical files.
I will have two sets of parent and child files. For the first set I will add the constraints using SQL statements. For the second I will use the Add Physical File Constraint command, ADDPFCST.
I will be adding the following constraints to the physical files:
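As a rough sketch of the ADDPFCST approach mentioned above, adding a primary key constraint to the parent file could look like this, where the file, key, and constraint names are placeholders:

ADDPFCST FILE(MYLIB/PARENTPF) TYPE(*PRIKEY) KEY(PARENT_ID) CST(PARENTPF_PRIMARY_KEY)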
This is another one of those topics I did not realize I had not written about: database constraints.
Constraints allow me to move control of the integrity of my data into the database itself. I am going to give some simple examples of what can be achieved with constraints on DDL tables. Here I am going to cover the following:
There are other types of constraints, but for this post I am only going to write about these.
At this month's TUG event my friend, Sue Romano, Senior Software Engineer at IBM, showed where we could go to download the latest version of the "Db2 for i poster".
The link is found at the bottom of the "Db2 for i – Technology Updates" page. When I visited the page the bottom of the page was obscured by an "accept cookies" notice, which I had to close. Click on the link and the PDF of the poster will open.
If you want to go directly to this version of the poster click here.
When printing the poster I found that standard paper is not the best. If you are in the USA, use legal size paper.
Thank you to Sue for bringing this to my attention. And to the Db2 for i team for keeping it up to date.
What I have been predicting for a while happened on Tuesday September 16, 2025. IBM announced that the end of standard support for IBM i 7.4 will occur on September 30, 2026.
After that date, support for this release will only be offered as a Service Extension. In the past the Service Extension has been offered at twice the cost of standard support.
If you have Power9, Power10, or Power11 servers you can upgrade your IBM i release to one of the two remaining supported releases: 7.5 or 7.6.
If you have a Power8, 7.4 is the last release that can be run on that server. You will need to upgrade to a later Power server, such as Power11, to be able to upgrade to one of the newer releases.
IBM's announcement for the Withdrawal from marketing and change in service level: IBM i 7.4.
IBM's document of the IBM Power systems and the IBM i releases they can run.
The idea for this post came from a question I was asked. There were a number of program dump spool files, QPGMDMP, and the manager wanted to know the value of the same variable in all of the spool files. While others said there was no alternative but to browse each spool file for the information, I knew there was a better way using SQL.
In these examples I am not using program dump spool files as you would be unable to replicate what I am going to show. Therefore, I am using the output from the Work Output Queue command, WRKOUTQ, with it outputting to a spool file:
WRKOUTQ OUTPUT(*PRINT)
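The spool file's contents can then be read with SQL; here is a sketch using SYSTOOLS.SPOOLED_FILE_DATA, where the job name and spooled file name are placeholders for the spool file you want to read:

SELECT *
  FROM TABLE(SYSTOOLS.SPOOLED_FILE_DATA(
               JOB_NAME          => '123456/SIMON/QPADEV0001',
               SPOOLED_FILE_NAME => 'QPRTSPLQ'))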
I had a scenario where I needed to determine the remainder value following a division operation for all the rows in a SQL Table. Rather than performing the calculation myself to determine the remainder there is a SQL scalar function that will return it for me, its name is MOD.
IBM's documentation for MOD states that it "divides the first argument by the second argument and returns the remainder". Its syntax is just:
MOD( < dividend value or variable > , < divisor value or variable > )
For example:
VALUES MOD(9, 2)
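This returns 1, the remainder of dividing 9 by 2; MOD(8, 2) would return 0. Applied to a table it could be used like this sketch, where the table and column names are placeholders:

SELECT ORDER_ID, ORDER_QUANTITY
  FROM MYLIB.ORDER_DETAIL
 WHERE MOD(ORDER_QUANTITY, 2) <> 0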
Last week I was fortunate to be in Oslo, Norway, for COMMON Norge's "Professional day". I took this opportunity to hand out some of the "RPGPGM.COM-unity" ribbons. You can see who I gave them to here.
Norway is the fifth country I have handed ribbons out to. It is wonderful to see how the community is expanding to different areas of the world.
What is the RPGPGM.COM-unity? You will have to click on this link to learn about it.
In June I wrote about how to use SQL to check if a file existed in the IFS. I received a communication from Rich Diedrich offering an alternative, using the access C procedure. I have taken his code and developed it into this example.
I created an external procedure that I made into a service program to perform the checking, and then wrote a small RPG program to call it to validate if certain files exist.
I am going to start by showing the source code for the external procedure, which resides in a module I called TESTMOD1.
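Rich's actual code is not shown here; as a rough sketch of the same idea, reduced to a single program rather than a procedure in a service program, and with the path as a placeholder:

**free
ctl-opt dftactgrp(*no) ;

// Prototype for the C runtime's access() procedure
dcl-pr CheckFile int(10) extproc('access') ;
  Path pointer value options(*string) ;
  Mode int(10) value ;
end-pr ;

dcl-c F_OK 0 ;  // test for existence only

if CheckFile('/home/simon/somefile.csv' : F_OK) = 0 ;
  dsply 'File exists' ;
else ;
  dsply 'File not found' ;
endif ;

*inlr = *on ;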
This almost slipped past me: the information in the IBM i 7.6 documentation that says the Convert Date command, CVTDAT, can use the '1970 rule'.
Whether CVTDAT uses the '1940 rule' or the '1970 rule' depends upon the presence of the QIBM_QBASEYEAR environment variable. If its value is '1970' that rule is used; if it is not present then the '1940 rule' is used.
How can I tell if I have that Environment variable? There are two ways. I can use the Work with Environment Variables command, WRKENVVAR:
WRKENVVAR
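If the environment variable is not there and you want CVTDAT to use the '1970 rule', a sketch of adding it for the whole system would be:

ADDENVVAR ENVVAR(QIBM_QBASEYEAR) VALUE('1970') LEVEL(*SYS)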
SYSDISKSTAT returns information about disk and solid-state drives (SSDs). It comes in two forms:
I am not going to say much more about using these two as I gave a lot of detail when they were first introduced.
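As a reminder, the view form can be queried directly; a minimal example:

SELECT *
  FROM QSYS2.SYSDISKSTAT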
In the latest releases, IBM i 7.6 and 7.5 TR6, two new columns have been added to both forms of SYSDISKSTAT:
Before I delete a user profile I always want to identify which objects it owns, and then transfer them to another profile or profiles. I can see which objects it owns using the Work with Objects by Owner command, WRKOBJOWN. If there are a lot of objects in the results I do not find this user interface helpful.
Db2 for i offers us an alternative, the OBJECT_OWNERSHIP table function. I have written about the OBJECT_OWNERSHIP view before; it would appear I overlooked the table function.
The advantage of using a table function is that only the results for the parameters passed are returned, like calling an API. With a view the rows are selected according to the selection criteria, which is like searching a file or table. In various situations one may have advantages over the other. In this scenario I only want the information for one user profile, so the table function is more efficient.
The syntax for the OBJECT_OWNERSHIP is simple, as it only needs one parameter passed to it, the user profile:
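A sketch of the call, assuming the parameter is named USER_NAME and using SOMEUSER as a placeholder profile:

SELECT *
  FROM TABLE(QSYS2.OBJECT_OWNERSHIP(USER_NAME => 'SOMEUSER'))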
I get quite a few communications from readers of this website telling me they have bookmarked RPGPGM.COM in their browser, or they have made it their home page. I am very flattered that people appreciate my work enough to do that.
If you are someone who finds the content of this website useful, and want to have a way to list results from it more prominently in your Google search experience, Google has added a new feature to help: Google Preferred Sources.
At the time of publication this feature is available in the USA and India, for Google searches in English. I am keeping my fingers crossed that it will eventually be rolled out globally.
What does it do? This is what Google says in their blog post announcing the release of "Preferred sources":
When you select your preferred sources, you'll start to see more of their articles prominently displayed within Top Stories, when those sources have published fresh and relevant content for your search.
This is available for desktop, Android, iPhone, and iPad.
Rather than follow the steps described in the articles to activate Preferred sources, you can just click on the button below.
You must be signed into your Google Account before clicking.
More information can be found:
It is possible to submit jobs to batch that will run under another user profile by using the USER parameter of the Submit Job command, SBMJOB. For example:
SBMJOB CMD(DLTF FILE(MYLIB/INVHIST)) JOB(TEST) USER(SIMON1)
The above shows the current job trying to submit a job to run with the SIMON1 user profile. I would consider this to be a security risk. If SIMON1 has more authority than the current user, let's say that is NOTSIMON, then NOTSIMON could submit jobs that perform tasks they are not authorized to do, or do something bad that they will not be blamed for.
Fortunately in most cases NOTSIMON would receive the message:
SBMJOB CMD(DLTF FILE(MYLIB/INVHIST)) JOB(TEST) USER(SIMON1)

Not authorized to user profile SIMON1.
I have encountered various companies where all the SBMJOB commands in their programs would use a "special" user profile for batch jobs, one that had more authority than the average user. In this scenario, when NOTSIMON ran a program that performed the SBMJOB and I looked at the calling job's job log, I would see something like:
Someone sent me this link to an IBM Support document that maps DDS data types to SQL and ODBC data types. And I decided to share it with you as I find this information both interesting and useful.
The original contents of this page have become obsolete; go to this page for up-to-date information.
I understand why IBM i developers should not have all object authority, *ALLOBJ, but at times I am frustrated by my inability to find objects in my partition. I do not want to do anything to them, just know that they exist. I have to find someone with a security officer equivalent user profile and ask them to do a search for me.
I have found a way that this frustration can be removed. It works on all partitions that are IBM i 7.5 or higher. I think what I am going to describe was included in the initial release, but as none of the partitions I have access to are running just base 7.5, I cannot confirm that it did not come in a Technology Refresh, TR.
Function usages are a way to be granted access to perform certain higher-authorization functions, without being given that higher authorization. I was going through IBM's documentation about them when I came across:
How would I find if a source member with the same name exists in more than one source file? If it does, which one was modified most recently? I am sure those are questions many of us have asked ourselves. How can we make it easy to get this information? Fortunately, Db2 for i has everything I need to do it.
I start with the SYSMEMBERSTAT view. It was introduced a couple of Technology Refreshes ago, in IBM i 7.5 TR4 and 7.4 TR10, and it is used in place of SYSPARTITIONSTAT when I need information about members.
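As a sketch of the kind of statement this leads to, here is the idea written against SYSPARTITIONSTAT; the same approach applies with SYSMEMBERSTAT, although its exact column names should be checked in the documentation. MYLIB is a placeholder:

SELECT SYSTEM_TABLE_MEMBER, SYSTEM_TABLE_NAME,
       LAST_SOURCE_UPDATE_TIMESTAMP
  FROM QSYS2.SYSPARTITIONSTAT
 WHERE SYSTEM_TABLE_SCHEMA = 'MYLIB'
   AND SOURCE_TYPE IS NOT NULL
   AND SYSTEM_TABLE_MEMBER IN
         (SELECT SYSTEM_TABLE_MEMBER
            FROM QSYS2.SYSPARTITIONSTAT
           WHERE SYSTEM_TABLE_SCHEMA = 'MYLIB'
             AND SOURCE_TYPE IS NOT NULL
           GROUP BY SYSTEM_TABLE_MEMBER
          HAVING COUNT(*) > 1)
 ORDER BY SYSTEM_TABLE_MEMBER, LAST_SOURCE_UPDATE_TIMESTAMP DESC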
These are the columns I am interested in:
The announcement of IBM i 7.6 included notice that the following application development tool set commands were no longer available:
The web page with this information is here.
In the past week there were rumors circulating on social media that this is not true, and that it is still possible to use the STRSDA command in 7.6.
The germ for this post came from a message I received:
Is there any way to retrieve sources for all QMQRY objects available inside a library in one go? My objective is, there are 100's of QMQRY objects(SQL type) inside a library but they don't have predefined source members. I need search for particular string inside all those SQL queries used inside the QMQRY objects.
I decided to work out a way I could do this.
I did not create "100's" of Query Management, QM, queries, as what will work for two will work for many more too.
It is possible to retrieve the SQL statement from a QM query object by using the Retrieve Query Management Query command, RTVQMQRY. That command copies the retrieved statement into a source member.
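For a single QM query the retrieval looks something like this, where the library, query, and source file names are placeholders:

RTVQMQRY QMQRY(MYLIB/QUERY01) SRCFILE(MYLIB/QQMQRYSRC) SRCMBR(QUERY01)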
When the IBM Power11 servers were announced on July 8, several people contacted me asking if I had the CPW ratings for the new servers.
What is CPW? It is a number that is frequently mentioned, but I had never seen its definition. After some digging I found the following:
The CPW, Commercial Processing Workload, rating provides a measure to show how on-line transactions processing, OLTP, workloads perform on systems that run IBM i. The CPW rating is built using workloads that can utilize the full processing power of the system.
Below is a summary of the Power11 CPW ratings. I have included the top-of-range Power10 equivalent too; its column is gray.
I was asked if there was a simple way to check an output queue, and if there is a spool file of a certain name for one user, to move it to another output queue.
Fortunately this is not as complicated as it sounds as I can use a SQL Table function to retrieve a list of those spool files, and a scalar function to move the spool file.
In this example I am going to be the user, my profile is SIMON, and whenever I find a spool file QPQUPRFIL in the output queue MYOUTQ I want to move it to the output queue OUTQ2.
First I need to produce a list of eligible spool files. Here I can use the SPOOLED_FILE_INFO SQL Table function:
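A sketch of that first step, assuming the table function's USER_NAME parameter and that the result columns include the spooled file and output queue names (check the documentation for the exact names):

SELECT *
  FROM TABLE(QSYS2.SPOOLED_FILE_INFO(USER_NAME => 'SIMON'))
 WHERE SPOOLED_FILE_NAME = 'QPQUPRFIL'
   AND OUTPUT_QUEUE_NAME = 'MYOUTQ'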
I had been unable to find any information comparing the performance and efficiency of the new IBM Power11 to previous IBM Power servers. I reached out for this information to people I know and they shared with me an IBM document, whose information I have copied into this post.
Before we get started I need to briefly explain what rPerf is. It is a method to approximate the difference in performance between two Power servers. rPerf is only for AIX. For IBM i performance CPW is used. I found an IBM page explaining what rPerf is here.
All of the quotes I am giving below are from the document I received. They are divided into Performance and IT Efficiency; the document did not group them together, but I am doing so. I am not including the disclaimers with the quotes, as that would make it difficult to read. The disclaimers can be found at the bottom of this post.
I have found more links on IBM's websites about Power11 that I think are worth sharing:
Video of the announcement on Tuesday July 8, 2025:
Today is the day! The new IBM Power servers using the new Power11 chips have been announced.
Over the years IBM has developed increasingly more powerful Power chips, each a significant improvement over the previous one.
On May 20, 2025, IBM published a Redbook with the title "Modernization Techniques for IBM Power".
The abstract that accompanies the Redbook on their website states:
This IBM Redbook offers a high-level overview of modernization, including key concepts and terminology to guide your modernization journey. It explores the components and architectural layers of the IBM Power ecosystem, demonstrating how they create an ideal platform for running mission-critical applications in today's world. The content is designed for business leaders, architects, and application developers.
It devotes a chapter, chapter 9, to the IBM i. I think it is worth downloading this document even if you only read chapter 9.
You can download this Redbook from the link here.
This is one of the IBM i enhancements that was released in version 7.6, but not in 7.5 TR6. This new table function, PROGRAM_RESOLVED_IMPORTS, allows me to get a list of all the imports for an ILE program or service program.
This Table function has four parameters:
This looks like:
The question was: is there an easy way to identify "flat files" without having to use the DSPFD command? The answer, of course, is "Yes".
The questioner explained that a "flat file" was a file that was generated without the use of DDS or DDL. In other words, just with the Create Physical File command, CRTPF. For example:
CRTPF FILE(MYLIB/FLATFILE) RCDLEN(100)
The questioner was finding he could identify these "flat files" with the Display File Description command, DSPFD, like this:
DSPFD FILE(MYLIB/FLATFILE)
At the MiTec conference earlier this month more people joined the RPGPGM.COM-unity.
You can see photographs of these new members here.
If you see me at an IBM i event feel free to introduce yourself to me. In all likelihood I will have a RPGPGM.COM-unity ribbon on me, and you can become a member. All I ask in return is a photograph of you with it.
If you would like to learn more about the RPGPGM.COM-unity click here.
Tomorrow, Saturday June 21 2025, is the 37th anniversary of when the IBM AS/400 was first announced. You can watch the video of the announcement in the UK here.
What a wonderful journey we have all been on all these years, as the AS/400 (1988 – 2000) begat the iSeries (2000 – 2006), which in turn gave way to the System i (2006 – 2008), and finally the IBM Power server running the IBM i operating system (2008 – present). All the while it has been IBM's premier server and operating system, providing modern functionality, stability, and robustness to its customers.
Today the AS/400 looks dated, which it is. But IBM Power can hold its own compared to any comparable business system.
I think I did a good job describing this history for the 35th anniversary. If you are interested in learning more about what the AS/400 was, and what it has become, read the story here.
Happy birthday IBM Power and IBM i! May you have many more!