IDMS/SQL News              7.3

Vol 7.3    Technical Information for Mainframe and CA-IDMS(R)   September 1998

 In This Issue

Headlines

Don't Shoot the Messenger...
Hole in SQL support in Westwood!
1 in 500 Product Status for IDMS!
Watch out for '@$#' in APARs!
New Base and Maint Tape for Release 14.0

Potpourri

3390 Disks and Page Size
Transparencies in the changing World
LU6.2 Shortage!

SQL in Practice: How-to-do Corner

Book Review: Database Performance Tuning Handbook
SQL COMMIT - How Often? Guest article on DB2
How Many Indexes for a Table?
SQL RCM Issues
Unload/reload of SQL databases

Product News etc

Two Phase Commit for CA-IDMS
Year 2000 and IDMS 10.21


Located at http://geocities.datacellar.net/SiliconValley/Vista/4905/idms73.htm


Don't Shoot the Messenger...

We have reliable information that CA Norway has dragged IDMS/SQL News into the court case in Oslo. It is believed that copies of IDMS/SQL News have been produced in the court pleadings from CA. IDMS/SQL News is in its 7th year of publication in Scandinavia, and CA people themselves have distributed the publication at various user meetings in Scandinavia and elsewhere. We will publish the letter from CA asking for the newsletter! Now CA Scandinavia has got indigestion over some critical comments on the happenings here. We are just reporting what has been reported to us by various client sources. As they say, "Don't shoot the messenger if you don't like the message!"

Resignations Leave Gaps in SQL Support in Westwood!

--- R. Korhonen in Finland from client/IMC reports

We have confirmation from IMC'98 participants and Ian Hill that Carl Hess, IDMSDB Level II, Westwood, has said goodbye to Computer Associates. Carl Hess is well known to Scandinavian IDMS/SQL users in Norway (Bergen and Oslo) and Finland. After SQL guru Doug Inkster (whom we all met in Helsinki in 1993) left IDMS to join Ingres development, Carl Hess has been the key person who dived into the intricacies of the extremely complex world of IDMSAMC, the SQL optimizing compiler. Carl has been heavily involved in helping clients such as Norsk Hydro (Bergen), Finnish Broadcasting Co (Helsinki), now another site in Oslo, and most recently Novo Group Oy, in solving the biggest database corruption in the history of IDMS in Finland.

IDMS - Just a "1 in 500" Product!

Recently, in a public argument, CA Norway's Admin Director Stein Onsrud claimed that "IDMS is just one among 500 (1 in 500) products" CA is supporting, and that it is therefore not necessary to have support personnel for such a product in every country. In fact, as of today there are no IDMS technicians in CA Scandinavia (N, FIN, S, DK). Gopi Nathan has left and Ian Hill has moved to EPS Germany!

We are forced to ask: is this really just "1 in 500"?

A Product Support Directory from CA (CD-ROM version, 29/1/98) lists IDMS as a family of products consisting of:
     CA-IDMS(R) Family    10.214    Premium Support

     CA-ADS/Batch(R)                         ADS370
     CA-ADS/Online(R)                        ADS370
     CA-ADS/PC(TM)                           ADSPC
     CA-CULPRIT(TM)                          CULP
     CA-CV Monitor                           IDMS
     CA-DDS(TM)                              DDS
     CA-Dictionary Loader                    IDD
     CA-EDP/Auditor(TM)                      CULP
     CA-Escape(TM)/DBOMP                     EDBOMP
     CA-Escape(TM)/DLI Transparency          EDLI
     CA-Escape(TM)/TOTAL Transparency        ETOTAL
     CA-Escape(TM)/VSAM Transparency         EVSAM
     CA-FACOM Option                         IDMS
     CA-ICMS(TM)                             ICMS
     CA-IDD(R)                               IDD
     CA-IDMS/DC(R)                           IDMSDC
     CA-IDMS(R)                              IDMS
     CA-IDMS(R)/CMS                          IDMS
     CA-IDMS(R)/Compilers                    COMPLR
     CA-IDMS(R)/CV                           IDMS
     CA-IDMS(R)/Installs                     INST
     CA-IDMS(R)/Mapping                      MAPS
     CA-IDMS(R)/Send-Receive Option          ADS370
     CA-IDMS(R) Server (ODBC)                IDSERV
     CA-IDMS(R)/ADS Alive                    ADSLIV
     CA-IDMS(R)/ADS Trace                    ADSTRC
     CA-IDMS(R)/DB Analyzer                  IDMSAN
     CA-IDMS(R)/DB Audit                     AUDIT
     CA-IDMS(R)/DB Reorg                     REORG
     CA-IDMS(R)/DC Sort (IDMS/TP Sort)       TPSORT
     CA-IDMS(R)/Dictionary Migrator          DICMIG
     CA-IDMS(R)/Dictionary Module Editor     DME
     CA-IDMS(R)/Dictionary Query Facility    DQF
     CA-IDMS(R)/DML Online                   DMLO
     CA-IDMS(R)/Enforcer                     ENFRCR
     CA-IDMS(R)/Journal Analyzer             JNLA
     CA-IDMS(R)/Log Analyzer                 LOGA
     CA-IDMS(R)/MasterKey                    MSTKEY
     CA-IDMS(R)/Online Log Display           OLD
     CA-IDMS(R)/Online Test                  OLTEST
     CA-IDMS(R)/PC                           IDMSPC
     CA-IDMS(R)/QbyX(TM)                     IDQBYX
     CA-IDMS(R)/SASO                         SASO
     CA-IDMS(R)/Schema Mapper                SCHMAP
     CA-IDMS(R)/SP&G                         SPG
     CA-IDMS(R)/SQL Option                   SQLOPT
     CA-IDMS(R)/Task Analyzer                TASKA
     CA-IDMS(R)/Test Database Builder        TDB
     CA-IDMS(R)/UCF                          UCF
     CA-IDMS(R) Open Image                   IDOI
     CA-IDMS(R)/Presspack                    PRPACK
     CA-OLQ(R)                               OLQ
     CA-Perfmon                              PERF
That makes it 50+ different sub-products! 80% of them are in use in Scandinavia. Some of them are utility products, but all the big ones are in use here more than anywhere else in the world! Certain ones, like IDMS/DB, the IDMS/SQL Option and IDMSDC, are individually bigger than any product you can think of, from any vendor including IBM! Also LU6.2 support, which is not listed (it is part of IDMSDC), is an extremely time-consuming and important multiplatform feature - both for application development and for support. On top of everything, the same directory from CA lists every other product (like CA-SORT) separately for each operating system, while IDMS is supported on all three IBM operating systems and more - VM/ESA, MVS/ESA and VSE! Until recently we had a BS2K (Siemens) client in Denmark. So this is what you call a "1 in 500" product!

Newsgroup - IDMS-L

IDMS-L, the IDMS mailing list and CA-IDMS discussion forum, is based at the University of Georgia. At the moment, besides the technical discussions, there is a very interesting thread about IDMS history going on. For those interested, this is the right time to go back to the nostalgic years of the 70s and 80s!

Scandinavian Clients - Watch Out!

- From Helsinki Dispatches

It is quite common to download a PTF from Star-TCC. The downloaded PTF comes to your PC as a text file. This then has to be uploaded to TSO, normally as a text file, usually through FTP of some kind. The ASCII-to-EBCDIC conversion is handled by the FTP software.

For most ASCII characters this poses no problems, since mainframe PTFs usually contain only hexadecimal characters (0-9, A-F). But in recent times many 'C' modules have come into the picture, and many of the CSECT names associated with 'C' modules use the special characters '$', '@' and '#'. Because these characters are mapped on the mainframe to 'Å', 'Ö' and 'Ä' (Finland/Sweden), there can be problems after download/upload if the conversion is not handled properly. One user noted that FTP did not handle the conversion correctly, and the user ended up having '@' and '$' on the mainframe side after the upload! [One should never see such characters here on a 3270 emulation.] The net result was that the APAR did not verify: SMP gave errors on the CSECT names. The user had to manually correct these '@' and '$' to 'Ö' and 'Å' respectively on the mainframe. Note that certain FTP configurations do handle the upload/download correctly.
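The mechanics, as we understand them (code points quoted from memory of the US and Finland/Sweden EBCDIC code pages; verify against your own translation tables):

     EBCDIC code point   US (cp037/cp500)   Finland/Sweden (cp278)
     X'5B'               $                  Å
     X'7B'               #                  Ä
     X'7C'               @                  Ö

The CSECT name on the mainframe is the byte values on the left; whether you see '$#@' or 'ÅÄÖ' depends only on which code page the 3270 emulation assumes. A bad FTP translation changes the bytes themselves, and then SMP no longer finds a matching CSECT.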

APARs for IDMSAMC, IDMSDMLC, ADSOBCOM, etc. fall into this special category.

New 14.0 Base and Maint Tape - 9711

The 14.0 maintenance tape for MVS is available now. There is also a base tape, 9711, for MVS clients. Clients on 12.0 can go directly to the 9711 14.0 base tape. The last APAR on the tape is LO25803. Since then, about 200 APARs have been written. Only a handful of them are Hyper APARs, though it is advisable to stay current by applying the ones most relevant to your shop.

A Potpourri of Information: Technical and not so Technical

WRONG PAGE SIZES FOR 3390s

- Docup

Page sizes specified for 3390 devices in the CA-IDMS Database Design Guide are partially incorrect. Any value that is not divisible by 4 should be reduced until it is divisible by 4.
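To illustrate the arithmetic (the value below is invented, not quoted from the guide): if the Design Guide listed a page size of 11,466 bytes for a 3390 device, it should be reduced to 11,464, the nearest lower multiple of 4 (11,464 / 4 = 2,866).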


Crossfire Section

Novo Group Crisis - the biggest database corruption in Scandinavia in 20 years!


Transparencies - Why a DB2 Transparency won't work

IDMS customers are familiar with the transparency products, known originally as ESCAPE/DL1, ESCAPE/TOTAL and ESCAPE/VSAM. Starting in the early eighties, these products have been in production at many clients in Scandinavia for more than 10 years, and they worked rather well. But in recent times CA talked about a DB2 Transparency. On the IDMS side, this product did not take off.

In reality it is impossible to make a fully functional DB2 transparency! The old ESCAPE products worked fine because they were 'escaping' from old, stable products like DL/1 and VSAM. DL/1 and VSAM had been stabilized for a long time, and their usage is static. For example, in DL/1 it was easy for IDMS to duplicate the GU, GN and GNP calls (Get Unique, Get Next and Get Next within Parent). The IDMS owner-member construct was more powerful than the DL/1 parent-child hierarchy.

But in the case of DB2, the SQL calls can be static as well as dynamic, and both can be extremely complex. Even for static SQL calls, a transparency will find it very difficult to implement all the combinations. SQL calls are quite complex even for the optimizer of DB2 itself.

Another difficulty is introduced by the very fact that DB2 is not a static, stabilized product. From the primitive Version 1.x to the current Version 5 there have been tremendous changes - enough new features to call some of the versions totally new products!

Yet another problem is caused by the multi-platform nature of today's DB2. The commands and facilities in DB2 (MVS) are constantly changing to suit the multiplatform architecture (DRDA, ODBC Connections, TCP/IP Interfaces etc). It will be impossible for a transparency product to take care of all these situations and commands.

If a DB2 transparency is like this, how about an IDMS transparency? We run into the same problems, both against network and relational databases within IDMS. It might be possible to 'mimic' the network database calls, but applications might perform very poorly and erratically. As for SQL tables in IDMS, any product which tries to mimic them will run into the same optimization and SELECT issues we discussed earlier.

Wanted: IDMS/LU6.2 Know-how


This technical article, which appeared in IDMS/SQL News 7.2 as a tribute to the power of LU6.2/APPC in IDMSDC, is repeated here mainly because it is one of the articles produced by CA Norway in the court case against Gopi Nathan. We detest the move, and again ask readers to contemplate what in this article is not true, and what product (from CA or outside CA) other than IDMSDC can support APPC across IDMS/DC vs CICS, IMS-DC, OS/2, Tandem, etc.

In IDMS/SQL News 6.3 we had a discussion of some of the problems of LU6.2 when the conventions used by partner systems differ. In this short write-up, we look at a mounting problem on a worldwide basis: a shortage of LU6.2 people!

There have been many requests from the IDMS client base to give more importance to the LU6.2 part of IDMSDC, but the vendor kept downplaying it. In the early 90s, CA gave the impression that CA90s had the components necessary to provide the functionality of LU6.2 support in IDMSDC, so that no more development was needed for the then-current LU6.2 line driver. But alas! Which CA90s component can support program-to-program communication across CICS-IDMSDC, IDMSDC-OS/2 or IDMSDC-Tandem, in Assembler or ADS? None! Since a coherent approach was missing from the vendor, LU6.2 in IDMS has been supported by isolated consultants at various client sites all these years.

Who wants to go through VTAM traces in this "Mickey Mouse" era of computing? No one! Who wants to read PTERM (IDMS) traces? No one!! Who wants to go through that meticulous Assembler code of SEND/RECEIVE? No one!!! But then we have a problem: there is no one left to support a very powerful functionality which has always been a part of IDMSDC.

In spite of all the connectivity noise from various vendors, nothing has replaced the conventional LU6.2 protocol. Even in IBM's DRDA implementation, LU6.2 is used internally! IBM is quite serious about the importance of APPC even in this Internet era, proof of which is seen in publications (May 1998) like the Multiplatform APPC Configuration Guide (GG24-4485-00).

In Scandinavia, the following companies have production-level IDMSDC/LU6.2 knowledge (among other things): System Tjenester Services, Oslo, and Vegasoft Oy, Helsinki. There are also many independent consultants who are well versed in LU6.2.


SQL in Practice - How-to-do Corner

Database Performance Tuning Handbook

— Oslo Dispatch

Database Performance Tuning Handbook (book and CD-ROM) by Jeff Dunham [McGraw-Hill, 1998, ISBN 0-07-018244-2] covers all major database products and performance issues in one comprehensive volume.

This book contains tips for optimizing all major database products, including Microsoft SQL Server 6.5, DB2 4.x, CA-IDMS Release 14, Oracle 7.3 & 8.0 and Sybase 11. For IDMS readers it is satisfying to see that Release 14.0 and Sysplex are covered very well, and some mistakes in the IDMS manuals have been corrected too (for example, the 3390 page size information).

Another good point about the book is that it is written from a business data processing viewpoint. The author comes straight to the topic in simple words, cutting the unnecessary introductions and the extravagant descriptions you see so often in other books.

IDMS Issues Covered in Detail

Chapter 14 is entirely devoted to IDMS 12.x and 14.x tuning. Several examples in Chapter 12 on mainframe coding are based on IDMS. Chapters 1, 2 and 17 also contain many examples from the IDMS environment, and Appendices E, H, P and Q are based on IDMS as well.

Vegasoft Products Featured

Client/server and multiplatform products from Vegasoft Oy, Helsinki (IDMS/Gateway, IDMS/DB2, ADS/DB2 and DB-CICSLINK) are explained in section 12.20 onwards. Appendix D also contains a vendor list of the main tools distributors.

Table of Contents

Chapter 1: Statistics Collection (and monitors)
Chapter 2: Mainframe Blocking and Buffering
Chapter 3: PC Files and Buffers
Chapter 4: DB2 System Tuning
Chapter 5: Logical Database Design
Chapter 6: Microsoft SQL 6.5 Tuning
Chapter 7: Oracle 7.3 & 8.0 Tuning
Chapter 8: Windows NT 4.0 Tuning
Chapter 9: Windows 95 Tuning
Chapter 10: PC & Client/Server Program Coding
Chapter 11: Sybase 11 Tuning
Chapter 12: Mainframe Coding
Chapter 13: Mainframe Sorts
Chapter 14: CA-IDMS 12 & 14 Tuning
Chapter 15: Mainframe Multi-Level Caching
Chapter 16: PC & Client/Server Caching
Chapter 17: Communications and Networking
Chapter 18: Advanced Topics

The advanced chapter is really hot. It covers new technology and techniques which are right on the edge.

IDMS/SQL News has no reservations in recommending this as an excellent book. The author, Jeff Dunham, has a home page at http://www.announce.com/~jdunham

The Woes of Commitment

A few things you should know about SQL 'COMMIT' frequency.

Bonnie Baker, Bonnie Baker Corporation
Note: This article is reprinted from DB2 Magazine, Fall edition. The guidelines given with reference to DB2 are also relevant for IDMS on IBM mainframes. - /Editor

Lost loves, trains, faithful hound dogs, and impending tragedies are the stuff good country music is made of. But those besequined and cowboy-hatted folks never knew the real tragedy of having a program run for 10 hours, abend, and roll back for even more hours, straight into the tender heart of CICS prime time. When we hear songs about commitment, those of us in the business of writing programs think not about unfaithful loves, but rather about programs that never commit or commit too often, about concurrency issues, locking contention, and the evil necessity of coding restart logic.

Batch programs seem to come in two varieties. There are those that never explicitly commit, relying instead on faith and successful program termination. And there are those that commit with vigor, after every update, insert, or delete. Famine or feast. Or as my Texas-born father was prone to say, "too none or too 'nuff."

Back in the old days, when I had a real job, I had a rule that I tried to abide by: Any program that ran over 30 minutes had to do commits and have restart logic. I learned the hard way that all programs, over time, run longer than 30 minutes. I also learned that no one but me thought that my batch window was necessary and that retrofitting commit and restart logic into a program to avoid intolerable rollbacks and enable running concurrently with online programs was time-consuming and boring.

NO COMMITS

The negatives of not committing at all are the stuff of which IT legends are made. A client and friend called one night because he was watching a job that was engaged in a seemingly endless rollback -- from tape. He wanted advice and comfort. It seems that he had, after much calculation and deliberation, decided to effect a company-mandated change of store number (store number 16 was to become store 22) by doing one multiple-row update of 2 million rows. My friend did not make this decision lightly. While he knew he could lock the whole table, he worried about the possibility of an unexpected rollback. He knew that DB2 creates an undo log record (for rollbacks) and a redo log record (for recovery) for each changed row. He also knew that the log record would consist of a header plus all the data between the first changed byte and the last changed byte for each of his fixed-length rows. He carefully calculated the amount of log space that would be required for the 2 million updates. He decided that his active log data sets were sufficiently large, and he went for the easy, "I don't have to write a program with commit logic to do this" strategy.
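In SQL terms the change was a single set-level update of roughly this shape (table and column names are invented for illustration; the original statement is not quoted here):

     UPDATE STORE_SALES
        SET STORE_NO = 22
      WHERE STORE_NO = 16;

One statement, 2 million changed rows, one unit of recovery: every undo/redo log record for the rows and their indexes belongs to a single unit of work until it commits or rolls back.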

What my friend failed to consider was his undoing. There were four indexes on this table and three of them contained store number. Each update had to do a delete and an insert to each index, and DB2 logged undo and redo records for those changes as well. The active logs filled up and were archived to tape, again and again. Even so, all would have been fine, except for the fact that the inserts into the indexes were all in the same place and were causing index page splits of epic proportions. One of the index data sets hit 119 extents, winning (losing?) a tight race with the other two. And when the 120th extent was needed, a -904 SQLCODE (resource unavailable) caused the now legendary rollback -- not from memory, and not from DASD, but from tape, compressed tape.

He hasn't been the same since.

TOO MANY COMMITS

Equally legendary are the jobs that commit after every update, insert, or delete. Each commit, at a minimum, causes two CPU-intensive MVS cross-memory services calls: one to the Systems Services address space to write to the active logs and one to the IRLM address space to release page locks. Cursors without "with hold" syntax are closed and sortout files are destroyed. And if the bind parameter for the job is RELEASE COMMIT, each commit releases tablespace and table-level locks and destroys information needed for some labor-saving and performance-improving features of DB2.

For example, if a job is doing iterative, looping inserts against the same table, DB2 monitors this activity at run time. After the third insert, DB2 gets tired of repetitively building the code needed to do the insert, assumes that you are going to continue doing this insert a reasonable number of times, and decides to create an executable procedure. The procedure reduces CPU overhead and improves performance. However, you must use the insert procedure (IPROC) at least five times to pay for the build overhead. After the eighth insert (the fifth using the procedure), the SQL is faster and cheaper. If a job (bound with RELEASE COMMIT) commits after every insert, the IPROC is never built. If the job commits after every three or four inserts, the procedure barely gets built before its untimely demise. All cost, no payback. Not good.

Frequent commits, exacerbated by the impact of RELEASE COMMIT, also destroy or prevent the building of update procedures (UPROCs), erase the cache used by SEQUENTIAL DETECTION to turn on DYNAMIC PREFETCH, and erase the cache used by INDEX LOOKASIDE to avoid index probes. And to top it all off, tablespace and table-level locks are also released.

One fairly ugly example of too-frequent commits was in a standalone batch program that had the following logic:

1. Use "LOCK TABLE tablename IN EXCLUSIVE MODE" to lock the table that will be updated and to override the page-level locking normally used for the table. (Bypass the LOCK TABLE statement if a parameter so indicates.)

2. Read two million records from a transaction file.

3. As each transaction record is read, perform an update to a DB2 table.

4. Commit after every five updates (commit frequency chosen for restartability, because of the fear of long-running rollbacks, and in anticipation of the eventual need to run concurrently with online programs).

5. Bind with RELEASE COMMIT.

Why was this logic so bad?

First, the initial LOCK TABLE statement did indeed eliminate page-level locking -- but only for one unit of work. At the first commit, the X table lock and IX tablespace lock were released (because that's what RELEASE COMMIT does). When the second unit of work began, the locks acquired by the LOCK TABLE statement had been released. So the table resumed its normal page-level locking -- an IX lock on the tablespace, an IX lock on the table, and X locks on every single page updated. Each commit then released any page locks. (Page locks are always released at commit, regardless of the RELEASE parameter.) But each and every commit also released the table and tablespace intent locks. At the beginning of each new unit of work the intent locks on the tablespace and table had to be reacquired. Not good. Especially for a program that intended to acquire no page locks, just two big locks, one on the tablespace and one on the table, and keep them for the duration of the program run.

Second, because the commits were done after every five updates, the UPROC was barely built when it was destroyed by the commit (because that's what RELEASE COMMIT does). This job paid for the building of 400,000 UPROCs without enjoying the benefit of a single one. The premature demise of the UPROC was very expensive. Again, not good.

AND THIS IS JUST RIGHT

Commit frequency in batch programs must be artfully designed to balance the memory consumption, potential lock escalation problems, concurrency concerns, and never-ending rollbacks of too few commits with the CPU overhead and performance-destroying problems of too frequent commits. On the positive side, commits do release page locks, thereby releasing memory and avoiding lock escalation and timeouts. Commits shorten rollback durations, allow restartability, and let batch programs run concurrently with onlines. However, on the negative side, they also consume CPU and close cursors. And commits, if done too frequently in conjunction with RELEASE COMMIT, prevent the building or cause the destruction of UPROCs and IPROCS. They also erase the information needed for the invocation and continuation of Dynamic Prefetch and Index Lookaside.

How do we balance all these issues and have the concurrency, restartability, and reasonable rollback durations that a fast-disappearing batch window demands?

Consider building a program parameter table that has a key of program_id and contains a column called commit_frequency. You can read the table during program initialization to determine the current commit_frequency for the program. By updating the table, you can change commit_frequency to adapt to a changing mix of concurrently running programs and users without changing program code. In this way, you can maintain an appropriate balance between updates, commits, and lock timeouts.

The same program parameter table can have a column for a LOCK TABLE yes/no flag. Using the table, you can execute LOCK TABLE statements or not, depending upon the setting of the flag and the program and user mix.
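A minimal sketch of such a program parameter table in SQL (all names are illustrative, not from the article; a real program would SELECT INTO host variables):

     -- one row per batch program; update the row to re-tune
     -- commit behaviour without touching program code
     CREATE TABLE PROGRAM_PARMS
       (PROGRAM_ID       CHAR(8) NOT NULL,
        COMMIT_FREQUENCY INTEGER NOT NULL,
        LOCK_TABLE_FLAG  CHAR(1) NOT NULL);
     CREATE UNIQUE INDEX PROGPARM_IX
        ON PROGRAM_PARMS (PROGRAM_ID);

     -- read once during program initialization
     SELECT COMMIT_FREQUENCY, LOCK_TABLE_FLAG
       FROM PROGRAM_PARMS
      WHERE PROGRAM_ID = 'BATCH01';

     -- operations can raise or lower the frequency at any time
     UPDATE PROGRAM_PARMS
        SET COMMIT_FREQUENCY = 500
      WHERE PROGRAM_ID = 'BATCH01';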

Declare cursors "with hold" to avoid closing cursors that you plan to continue using after commit. Reopening and repositioning cursors closed at commit is very expensive, especially if the access path for the cursor includes a data sort or an RID sort.
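For example (cursor, column and table names are hypothetical):

     DECLARE C1 CURSOR WITH HOLD FOR
        SELECT ORDER_NO, ORDER_AMT
          FROM ORDER_TABLE
         ORDER BY ORDER_NO;

With WITH HOLD, the cursor stays open and positioned across commits, so the program avoids the expensive close/reopen/reposition cycle.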

Consider BINDing with RELEASE DEALLOCATE to avoid releasing tablespace- and table-level locks at every commit. Remember that page locks are always released at commit (or sooner), regardless of your RELEASE parameter. But tablespace and table-level locks are released either at commit or thread deallocation, according to the powerful RELEASE parameter. The IX and IS intent locks taken on tablespaces and tables for page-level lockers are generally benign and do not usually cause concurrency problems. But be aware that there are a few rare but notable exceptions that may conflict with your long duration intent locks. Watch out for concurrently running programs that do mass deletes (no where clause) to tables residing in segmented tablespaces. Watch out for concurrently running, selfish and unfriendly programs that exceed the maximum allowable lock limits and try to escalate from page to table or tablespace level locking. And finally, watch out for concurrently running programs that are issuing LOCK TABLE statements, including long-running CICS asynchronous tasks and IMS WFI programs.

Consider BINDing with RELEASE DEALLOCATE to retain the extremely beneficial UPROCs, IPROCs, Sequential Detection cache, and Index Lookaside cache across commits.
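On DB2 for MVS this is a bind-time choice; a sketch of the relevant BIND parameters (the plan name is invented):

     BIND PLAN(BATCH01) ACQUIRE(USE) RELEASE(DEALLOCATE)

With RELEASE(DEALLOCATE), the table and tablespace locks, the UPROCs and IPROCs, and the detection caches survive each commit and are given up only when the thread is deallocated.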

If you continue to bind with RELEASE COMMIT, and you are doing updates or inserts, try to commit well after the UPROCs or IPROCs are built, so that some benefit is derived from them before they are destroyed. If you are doing iterative SQL that is processing data sequentially using unique keyed reads and thereby benefiting from Dynamic Prefetch and Index Lookaside, try to lengthen your commit frequency to the longest tolerable interval to take advantage of these features. And use the largest tolerable locking level to reduce lock acquisition and release overhead.

IT'S ALL IN THE TIMING

Finding just the right balance between that "too none and too 'nuff" is not easy, but the payback is significant. One customer in Australia used the program-control table that I mentioned earlier to change commit frequency on a long-running program that was committing after every single update, insert, and delete. The program ran for 60 hours. It was not a daily job. But it was weekly, and it was scary. He changed the commit frequency to two, then three, and so on, incrementing the count until the job began to cause timeouts (-911s) for online users. He changed the frequency in the table until he achieved the right balance. With this fairly simple change, he was able to reduce the program run time from 60 hours to 20. And while 20 hours might not be "just right," it is definitely better than 60.

Like most things in life, successful commit strategy is based on optimal timing -- like love, crossing train tracks, finding just the right faithful old hound dog, and creating the perfect ingredients for another classic country song. •


Bonnie Baker is a consultant and educator specializing in application-performance issues. She is a five-time winner of the IDUG Best Speaker award and is best known for her series of seminars entitled "Things I Wish They'd Told Me Eight Years Ago." You can reach her through Bonnie Baker Corp. at 800-992-7192 or bkbaker@bonniebaker.com.

How Many Indexes for a Table?

— Helsinki Dispatch!

How many indexes, at most, should there be on a table? 5, 10, 15...? It all depends upon the size of the table, the processing requirements, etc. Nevertheless, there should be some guidelines which can be used to arrive at a meaningful design.

In the CODASYL world the same issue arose when some key records participated in too many sets. Later, some clients added indexes too, when integrated indexing was introduced with IDMS 10.0. Since linked sets do not exist in SQL (or can be avoided), it may appear that we have more freedom here. But the owner-member relationships are still present in a pure SQL database; this necessitates the use of more foreign keys, which in turn makes more indexes a requirement.
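A generic SQL illustration of how foreign keys multiply indexes (names are hypothetical, and exact CA-IDMS DDL differs slightly):

     CREATE TABLE DEPT
       (DEPT_ID   INTEGER  NOT NULL,
        DEPT_NAME CHAR(30) NOT NULL);
     CREATE UNIQUE INDEX DEPT_IX ON DEPT (DEPT_ID);

     CREATE TABLE EMP
       (EMP_ID  INTEGER NOT NULL,
        DEPT_ID INTEGER NOT NULL);
     CREATE UNIQUE INDEX EMP_IX ON EMP (EMP_ID);

     -- the foreign key column needs its own index for joins and
     -- referential checking, so every owner-member link adds one
     CREATE INDEX EMP_DEPT_IX ON EMP (DEPT_ID);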

An old IBM DB2 manual (V 1.2) says that there should not be more than 5 indexes on an OLTP SQL table. Since this was written in the mid-80s and there have been advances in technology since (12.0 features, ESA and Sysplex, for example), we can be a little more flexible now. Still, we believe there is nothing wrong in setting an upper limit of 10 indexes for an SQL table, with a strong recommendation to keep it at 5 if the table is going to be updated heavily online.

Recently one of us attended an SQL course in London (not on IDMS). A UNIX database was used as the example, and the instructor's recommendation was to use no more than 2 indexes on a table used online!!

RCMs and Access Modules

The Access Module (AM) is a very important component of SQL access. It is vital because it can be dynamically recreated at run time, depending upon the options of each access module. If 'AUTO RECREATE' is on, IDMS will recreate the AM at run time when needed, using the RCMs involved in the AM.

RCMs are loaded into the reentrant program pool for this reason. If for any reason the AM has to be recreated more than once, this can give rise to certain anomalies, again very much dependent on the various options used. One of those issues was discussed in Serious SQL Issues, Item 6 (IDMS/SQL News 7.1). Here is a related issue.

CREATE ACCESS MODULE uses the RCM in the reentrant (XA) program pool, if available, rather than a possibly newer one in the load area. The user can force the load area version to be used in any case by issuing a DCMT VARY PRO NCI (new copy) for the RCM.
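For example (the RCM name is hypothetical), after punching a fresh RCM to the load library and before recreating the access module:

     DCMT VARY PROGRAM EMPRCM NEW COPY

This marks the pooled copy as stale, so the next load picks up the load area version.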

Unload and Reload of an SQL database

First of all, SQL database areas can be unloaded and reloaded using the same utility used for network databases. There are, however, some differences and restrictions.

The Utility Manual says: "Using UNLOAD on an SQL-defined database: You can use the UNLOAD utility statement to unload and reload an SQL-defined database to make these changes:

- Change area page ranges
- Change page size
- Change the maximum number of records per page

The page changes are specified in the new DMCL definition. Note: The modified DMCL must be available during the unload operation as well as during the reload operation."

The syntax for SQL unload/reload becomes:

 UNLOAD SEGMENT segment-name
 (AREA  area-name)
 RELOAD INTO same-segment-name 
 DMCL dmcl-name
The DMCL named above is the new DMCL, which contains the same-named SQL segment, optionally with the limited changes described above. This DMCL is used for the RELOAD operation. The DMCL used for UNLOAD is the one given in the //SYSIDMS card. This subtle difference is not made clear in the manual.
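A minimal sketch of the two DMCL references (segment and DMCL names are hypothetical, and the input DD name may differ in your utility JCL). The old DMCL named in SYSIDMS drives the UNLOAD; the new DMCL named in the statement drives the RELOAD:

     //SYSIDMS  DD *
     DMCL=OLDDMCL
     /*
     //SYSIPT   DD *
     UNLOAD SEGMENT EMPSEG
            RELOAD INTO EMPSEG
            DMCL NEWDMCL
     /*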

For SQL databases, unload and reload must use the same segment name. So if one has to resize the areas, this can be done only by using a new DMCL. You cannot make changes to the table definitions of an SQL-defined database between the unload and reload steps.

For SQL-defined segments, area and table stamps are unloaded during the UNLOAD step and reloaded during RELOAD processing. This becomes a problem if one wants to unload an SQL data area from production and reload it into a test IDMS. Some clients do want to do this, in order to use the same data in testing (the actual database size in pages will be smaller in test). Such an operation is common with network databases.

This article is a reprint from IDMS/SQL News 7.2.

Product Experiences / Reviews

OCA-COMMIT - A Two-Phase Commit for CA-IDMS

-- Jean-Daniel Bégin, Aquisoft Inc.

More and more, CA-IDMS sites are also DB2, IMS, MQSeries and/or VSAM users, and new systems need to maintain integrity across all those repositories. Until now, this has been a non-issue for pure blue sites, because CICS supports two-phase commit through its SYNCPOINT command: when a task requests a SYNCPOINT, all repositories in use by the task either commit or none of them does. CICS acts as the coordinator in the two-phase commit process. CA-IDMS cannot participate in that process because it doesn't offer a PREPARE-TO-COMMIT DML verb.

Aquisoft will soon offer OCA-COMMIT. This product gives CICS-IDMS transactions the ability to write a PREPARE-TO-COMMIT journal record. When the CICS transaction requests a SYNCPOINT, a TRUE (task-related user exit) is invoked for each repository and a PREPARE-TO-COMMIT request is sent to all of them. OCA-COMMIT provides a TRUE that sends the request to all CVs that have an active run unit. Note that you can have multiple CA-IDMS run units against different back ends. Once all the PREPARE-TO-COMMITs are successful, the COMMITs, or their equivalents, are requested.

OCA-COMMIT provides all the components needed for synchronization with CICS at startup. In the event of a CA-IDMS crash, CA-IDMS will, upon startup, synchronize any INDOUBT run unit with CICS. Console messages are issued if CICS is not present. The roll-forward and rollback utilities are fully supported.

In the future, we will be looking at writing a coordinator within CA-IDMS. This would allow us to commit run units across multiple backends and/or MQSeries (using our OCA-MQSeries Interface).

The product availability date is Fall 98.

Jean-Daniel Bégin Aquisoft daniel@aquisoft.com (1)-(888) 518-9111 (1)-(414) 969-8073 Visit the home Page at http://www.aquisoft.com

Just One More!

Year 2000 Noise and IDMS Release 10.2

Heard on the net: Just for kicks I started one of our remaining 10.2 CVs up on our time machine, which was IPLed as 29/2/2000. To my amazement, it came up fine, and almost everything seemed to work. The PMONCIOD abend was there (but it was there in the 12.01 CV too), and the BIF problems were solved when we copied RHDCEV09 from the 12.01 loadlib. (Courtesy IUA Connections, Fall 1998.)

IDMS/SQL adds: The most time-consuming Y2K work will be on application coding, not on system software. Operating system issues, though critical, will obviously be taken care of, as will the IDMS startup and journal issues. Then one is left with the same old code in COBOL (or ADS), and database records where only 2 digits were allowed for the year! Even in the IDMS product code, most problems today are in the area of ADS/OLQ BIFs, which are not strictly database code.



IDMS/SQL News is published on behalf of the IDMS WatchDog Group, Helsinki-Hørsholm-Oslo, for free circulation among IDMS users worldwide. IDMS/SQL News is not a CA publication. CA-IDMS/DB, CA-IDMS/DC and CA-ADS are registered trademarks of Computer Associates International Inc. CICS, IMS-DB/DC and DB2 are registered trademarks of IBM Corporation. Technical examples are only guidelines and modification might be required in certain situations and operating systems. Opinions expressed are those of the authors and do not represent the views of IDMS clients or related vendors. © Permission is hereby granted to use or reproduce the information, only for IDMS customers and consultants, provided the material is distributed free of charge.


Comments to idmssql@geocities.com