IDMS/SQL News              7.4

Vol 7.4    Technical Information for Mainframe and CA-IDMS® Users   December 1998

* * * Preliminary Issue * * *
* * * We apologize for the spelling mistakes and other errors * * *
* * * There will be an update once Christmas is over * * *

 In This Issue

Headlines

Y2K Competition!
IDMS-L News Group
Core Features vs Special Enhancements
Easy Upgrade Syndrome!
Programmability!

Potpourri

COBOL Mess
A world without standards!

SQL in Practice : How to do Corner

IDMS vs DB2 Discussion
Row ID Access in DB2

Product News etc

Vegasoft's Multiplatform Product Line for IDMS/DB2


Located at http://geocities.datacellar.net/SiliconValley/Vista/4905/idms74.htm


Year 2000 Competition!

--- B. Gilbertson, DB2/IDMS Consultant, New York

As the millennium countdown continues, we are asking readers to participate in a simple quiz. In the SQL timestamp format, date/time values are represented internally by 64-bit binary numbers. The bits, numbered from the left starting with 0, have the following meaning:

  +----------------------+----------------------------------------------+
  | Bits                 | Meaning                                      |
  +----------------------+----------------------------------------------+
  | Bits 0 through 26    | Number of days since January 1, 0001         |
  +----------------------+----------------------------------------------+
  | Bits 27 through 43   | Number of seconds in the day since midnight  |
  +----------------------+----------------------------------------------+
  | Bits 44 through 63   | Number of microseconds since the last second |
  +----------------------+----------------------------------------------+
So Timestamp = X'0000000000000000' corresponds to midnight at the start of 0001-01-01. Note that, following the tradition of the Christian Era (CE, earlier known as AD), the calendar starts at 0001-01-01 and not at 0000-01-01! The largest date allowed in datetime arithmetic in SQL is 9999-12-31.
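As a warm-up (and emphatically not the answer to the competition!), here is how a value is composed under this layout. Take the hypothetical timestamp 0001-01-02 00:00:01.000001, i.e. one day, one second and one microsecond into the era. Days go in bits 0 through 26, so the day count is shifted left 37 bit positions; seconds go in bits 27 through 43, shifted left 20; microseconds occupy bits 44 through 63 as they are:

   days          1 shifted left 37  =  X'0000002000000000'
   seconds       1 shifted left 20  =  X'0000000000100000'
   microseconds  1                  =  X'0000000000000001'
                                       -------------------
   timestamp                           X'0000002000100001'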

Now the question is: what is the timestamp value (including seconds and microseconds) corresponding to this maximum date, beyond which IDMS will give an error? How did you find it? The results will be published in January 1999, and the winner (drawn by a program, if there is more than one correct entry) will be sent a small token gift from IDMS/SQL News.
IDMS/SQL News thanks B. Gilbertson for suggesting this competition.

Newsgroup - IDMS-L

IDMS-L, the CA-IDMS mailing list and discussion forum, is based at the University of Georgia. At the moment, besides the technical topics, there is a very interesting discussion about IDMS history going on. For those interested, this is the right time to go back to the nostalgic years of the 70s and 80s!

Core Features vs Special Enhancements

Like any other product, IDMS DB/DC has some components that are considered core products. IDMS-DB, DC/UCF and IDD belong to the core. Because of its wide popularity and usage, one can add ADS/Online here too, though ADS is not needed for a CICS/IDMS user or an IDMS-DC COBOL user. We now consider SQL a core option as well, though this directive should have come from the vendor.

Just like the core products, within each sub-product there are some features which can be considered core features. Then there are enhancements which were added to the core later. In the database area, LRF is such an added feature. The failed ADSBATCH was an added feature. The Generator was another 'CASE'-like tool, and it failed miserably. Within mapping, the pageable map is an added enhancement. Today, in the SQL architecture, the table procedure is an important added feature. Again, there are some facilities which are available but should be used with caution. SQL access against a network database is such a feature.

Overuse can kill your system

Users obviously want to make full use of the product, so there is in fact no upper limit on the usage of any feature. However, in the past certain features were used with caution, or avoided completely, by certain clients. The pageable map, though a nice enhancement, created confusion in update situations (with the ambiguous WAIT/NOWAIT options) and gave too many problems in its initial stages. There are many clients who do not use this feature at all, or use it only for browsing.

LRF, which once upon a time served as the foundation for ASF, has been heavily recommended by some technicians and heavily used by some clients. There are many instances where LRF subschemas almost took the form of a program, with tremendous embedded logic within them. This was simply asking for trouble. Many clients skipped LRF completely or used it only for query purposes. With the availability of SQL and SQL views, there is no reason to advocate LRF anymore. There is no SQL support for LRF records, either.

Today, SQL access against the network database is an added bonus, but care should be taken when mixing SQL access against the network, native DML and table procedures in the same program. There are various possibilities of deadlocks within the application thread and, still worse, of unpredictable abends. At the same time, SQL against an SQL table and SQL access against a network database can easily be incorporated into the same dialog/program. Other than the fact that the definition of the database comes from two locations (DML from IDD, SQL from the SQL catalog), at runtime the IDMS DBMS does not distinguish between these accesses. A single RCM and access module is created for the program.
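As a minimal sketch of how the two kinds of access meet in one program (all schema, dictionary and record names here are hypothetical, and the CREATE SCHEMA ... FOR NONSQL SCHEMA syntax should be verified against the SQL Reference): the network schema is first made visible to SQL, after which ordinary SELECTs can address network records and SQL tables side by side.

   -- Register the network (non-SQL) schema for SQL access:
   CREATE SCHEMA EMPNET FOR NONSQL SCHEMA APPLDICT.EMPSCHM VERSION 1;

   -- The same program can then read a network record and an SQL table:
   SELECT E.EMP_LAST_NAME, P.PAY_GRADE
   FROM   EMPNET.EMPLOYEE E, PAYSQL.PAYSCALE P
   WHERE  E.EMP_ID = P.EMP_ID;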

The table procedure is a facility given to simplify or "SQL-ize" access against complex network structures. But the feature has many pitfalls if complex logic is involved. A great deal depends on the logic and complexity within the table procedure, which is a COBOL program unless you go for Assembler code. And mixing table procedures (which appear as simple tables to the program logic) with complex SQL or DML coding in the final program is asking for trouble. Lately, LE/370 programs have been reported to give severe performance problems for table procedures involving large and/or complex databases. Again, we give a strong warning against portraying table procedures as a core feature. [In the DB2 world, stored procedures are never portrayed as a core feature, and their usage is more complex than that of table procedures in IDMS.]
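To the calling program, a table procedure really does look like any other table, which is exactly why it is so tempting to overuse. A small illustration (names invented for the example):

   -- NETPROC.DEPT_EMP is assumed to be a table procedure backed by a
   -- COBOL program that walks the DEPT-EMPLOYEE set in the network
   -- database. Nothing in the SQL reveals that a program runs behind it:
   SELECT EMP_ID, EMP_NAME
   FROM   NETPROC.DEPT_EMP
   WHERE  DEPT_ID = 3200;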

Within SQL, dynamic SQL is another feature which should be used with caution. [In the DB2 world, IBM and other experts strongly recommend against using it in production OLTP systems.]
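The reason for the caution is that a dynamic statement must be parsed and optimized at run time, whereas static SQL is optimized once, when the access module is created. A rough sketch (statement and host variable names invented):

   -- Static SQL: optimized once, at access-module creation.
   UPDATE PAYSQL.PAYSCALE
   SET    PAY_RATE = PAY_RATE * 1.05
   WHERE  PAY_GRADE = 7;

   -- Dynamic SQL: :STMT-TEXT holds text built by the program at run time,
   -- for example 'UPDATE PAYSQL.PAYSCALE SET PAY_RATE = PAY_RATE * 1.05
   -- WHERE PAY_GRADE = ?', and it must be PREPAREd before execution.
   PREPARE S1 FROM :STMT-TEXT;
   EXECUTE S1 USING :WS-PAYGRADE;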

Special Features are like Special Foods

Special foods are for special occasions. We all eat birthday cakes with unlimited calories and sugar on such occasions, but we don't make such cakes part of our daily diet! In the same way, special features are given for special purposes. Stretching them beyond what they are intended for is a blueprint for disaster. Again, guidelines regarding the usage of new features in production situations are missing today.

What is the Foundation?

In the multiplatform world of ODBC and TCP/IP access, SQL is a key basic feature which should be understood thoroughly. But marketing slogans give the feeling that SQL is not important, while ODBC, WWW access, Visual DBA, Quickbridge, ICE, the Gateway and so on are vital. All these wonderful new sub-products/features are dependent on SQL! If your foundation in the SQL implementation is not strong enough, the patchwork will crumble no matter how good the GUI presentation is! Earlier, IDMS/SQL News referred to a classic case where a client attempted to implement a complex Gateway solution without bothering to understand the pros, cons and features of mainframe SQL. Understandably, the project, done with the 'beta/ESP' version of the Gateway, collapsed in toto. In the DB2 world, way back in the late 80s, a well-known consultancy did a wonderful project using CSP (Cross System Product, a resource-hogging, substandard 4GL from IBM) with the full cooperation of IBM. The project collapsed with a $100 million loss. Again, the reason for the failure was not the base product DB2, but the overemphasis on the substandard 4GL, CSP! Many analysts later commented that the project would have been a success if they had had some good CICS/COBOL programmers!

Easy Upgrade Syndrome!

It is said that "Release 14.0 is so easy, it's almost like an upgrade tape!" The Features Guide 1.5 also says "Simple Upgrade Path"! This statement has been misinterpreted by marketing folks and has given rise to many problems. Management at many sites has been wondering why their DBAs and programmers have been spending months on the transition from 12.01 to 14.0, when the upgrade is supposedly as simple as a maintenance tape!

The problem lies in the difference between the "installation of the tape" and the "implementation of the tape". For a marketing person who doesn't know the ABC of the product, there is no difference between the two! When one says the IDMS 14.0 installation is like maintenance, it means only the install of the tape and the download/link of the software modules. Yes, there is no big migration like that from 10.2 to 12.0. Yet the implementation of 14.0 is not as simple as applying a 12.01 maintenance tape, because it involves steps of its own, such as converting each SQL catalog (see the How to do Corner below), reviewing the new features, and testing existing applications.

For an APAR tape, none of these steps is required. Most IDMS clients are running on 20 years of production data. Unlike with new products, a change has to be carefully planned; otherwise the disruption of production, or the corruption of databases, can have disastrous consequences. A database implementation is not like the install of a housekeeping product.

On top of everything, many Scandinavian clients have received the latest 12.01 tape along with a cover letter which says "not many people are running 14.0"! Well, why so? If the upgrade is so easy, and if 14.0 has better support for the LE/370 COBOL environment, why aren't many running R14.0?


Programmability!

- Guest dispatch

The late eighties saw the emergence of CASE tools claiming to cover the "start to finish" application development cycle. Having started as database design tools (LBMS Automate+, Bachman etc.), CASE tools ended up claiming to do everything including program generation: IEW, IEF, Bachman, Telon, the Generator etc. Then the whole thing ended just as dramatically as it had begun! There was no productivity gain. It was not easy to "specify" the business rules in these "higher-level" specifications; it was no easier than programming. Products like Bachman were very good at supporting database design, but the overclaim of A-Z application development and the AD/Cycle partnership killed the product. Today, no one talks about CASE tools.

What was the single factor that contributed to the downfall of CASE tools? Or rather, what is the single quality that is available in a programming language, be it COBOL or a 4GL, and sadly missing in a CASE tool?

The answer to this question evaded me until I recently saw a TV advertisement for a car! It said: "What you see is a car of drivability and dependability!" That's the word: drivability! The advertisement did not claim that you don't have to drive the car! But it is drivable and dependable!

For a programming language to survive, on the 370, UNIX or Windows, in batch, on 3270 or in a GUI, the required qualities are programmability and dependability! CASE tools had neither. CASE tools claimed too many "automatic" things; some even claimed that you don't have to program at all: just specify the rules and a program will be generated. This worked in a demo, not in practice. The outcome of the specification was unpredictable, which made it unreliable. On top of everything, today's applications are inter-related groups of programs, not standalone programs! Languages like COBOL, with all their drawbacks, were programmable, and so were 4GLs like ADS, Natural, Visual Basic, PowerBuilder and so on. The application programmer had the control!

"Cobol is alive, but 4GLs are dead!"

So by the nineties the industry awakened and shouted: "COBOL is still alive!" And some analysts went on to say: "4GLs are dead!" Some of the vendors (even those who had successful 4GLs) started repeating the new "mantra"! These "experts" didn't know the ABC of a 4GL. They put ADS, Natural, the Generator and CSP in the same basket. In reality, 4GLs were never dead. Some of them were reborn on new platforms. For example, PowerBuilder is almost like a GUI version of ADS. Even Visual Basic programs are similar to ADS constructs in structure (forms, and event-driven independent processes connected to keys or "clicks").

With object-oriented programming and GUIs, new terms and languages have evolved. There has been no revolution, but an evolutionary shift in paradigm: COBOL for batch (mainframes), and 4GLs (including the Windows-based GUI products, HTML and Java) for online applications.

Recently, another software company with a ready-made set of applications has been making headlines in Europe as well as in the USA. A-Z application development has given way to a new promise and slogan here: you don't have to know anything about the database or the programs; everything is given to you in modules. We won't go into the details here. Many companies have reported tremendous delays and a complete lack of control over what is going on in this case. In the rush for Y2K solutions, all is forgotten and forgiven. All we can say is that this product is a goldmine for consultants! (It is an irony that such a ready-made product needs hundreds of consultants for its implementation!) But in the long run we have no hesitation in predicting that many companies are going to get their fingers burned with this product! We will come back to this point in 2-3 years' time!
A Potpourri of Information: Technical and not so Technical

Cobol Mess!

- R. Korhonen, Kuopio, Finland

Where do we start? Online OLTP environments are facing a new problem, which should not have been there at all. The new problem is created by COBOL/370, which is nowadays known as LE/370. {Purists might say that LE is the common runtime for all: C, PL/1 and COBOL.} The problem can be stated in simple words: the LE run-time environment setup, a one-time cost in batch, becomes a per-task cost online, and that can kill an OLTP system (more on this below).

If anyone remembers the old history: COBOL was not IBM's preferred language in the late 50s or early 60s. The US Government had to force ANSI COBOL on IBM with a lawsuit. By introducing the powerful structured language PL/1 for the 370 machines, IBM tried to diminish the importance of COBOL. But for all its capabilities (pointer variables, dynamic allocation, bit manipulation, the BEGIN..END construct, string manipulation and so on), like many other IBM products, PL/1 was too unwieldy and ended up being clumsy! In short, COBOL survived the onslaught of PL/1. By the early 80s, VS COBOL had emerged victorious not only in batch but also in online environments like CICS/VSAM, CICS/DL1 and CICS/IDMS.

A language used to matter only when you coded the program. Apart from some restrictions on using certain COBOL verbs online (DISPLAY, STRING, INSPECT), at run time no one bothered about the language (we are talking about online here). In fact, with the old VS COBOL, IDMS-DC never invoked the COBOL runtime.

Then, in the late 80s, COBOL II entered the market as the 'XA COBOL'. A change in the language to support 31-bit addressing should have been transparent to application programming, but this was not to be the case. Some of the COBOL II routines starting with IGZ.. had to be active even in CICS and IDMS-DC, and they gave trouble in these online environments. Ironically, some IDMS-DC clients found out that by link-editing VS COBOL programs as AMODE 31, one could easily solve the XA issue. Features like the 'CALL' syntax in CICS and IDMS-DC didn't matter for existing programs.

In the 90s, COBOL II was almost stabilizing, though some continued to use VS COBOL. IBM didn't stop there. More reincarnations of COBOL followed: SAA AD/Cycle COBOL, COBOL/370, COBOL for MVS and VM, COBOL for OS/390 and VM, LE/370, and finally LE for OS/390 and VM.

In the 80s we also saw many 4GLs (ADS/Online, ADABAS/NATURAL...) entering the online scene with success. The industry analysts wrote of the demise of COBOL, without realizing that the 4GLs were unsuitable for batch programs! Even though the 4GLs challenged it in the online environment to some extent, there was no match for COBOL in commercial batch. COBOL didn't die; instead, it re-emerged with a vengeance. By now, COBOL had become an 'IBM mainframe' language.

Problems of LE 370 Environment

LE/370 was supposed to solve the problems of COBOL II and its various incarnations. It did indeed solve some problems, but it introduced new ones. LE/370 (common to COBOL, PL/1 and C) exposed the user to the existence of many new modules in the runtime, some of them very heavy. Every time an LE/370 program is started, there is something called environment setup. This, if done for every task, will definitely finish off CICS or IDMS-DC!

What went wrong?

First of all, COBOL is mainly a batch language, and LE/370 emphasizes that point. For a batch program the overhead of environment setup is not an issue, because it is done only once! For a job which takes minutes or hours to execute, an overhead of 0.5 seconds is immaterial. But for an OLTP transaction with sub-second response time, a 0.5-second overhead is disastrous. The elapsed time of such a transaction can multiply. If the COBOL program is called repeatedly from a non-COBOL program, performance will fly out of the window. IDMS-DC avoids such a disaster by restricting the environment setup to just once per application thread.
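Some back-of-the-envelope arithmetic, taking the 0.5-second setup figure above as a working assumption, shows why the same cost is invisible in batch and fatal online:

   batch job, 20 minutes elapsed   :  0.5 s setup  =  about 0.04 % overhead
   OLTP task, 0.2 s response time  :  0.5 s setup  =  250 % overhead
                                      (the response time becomes 0.7 s)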

Documentation (or lack of it) within IDMS

Documentation related to LE/370 support within IDMS-DC is scattered all over the place. Recently, an IDMS-L discussion pointed to LI29837, which is a PIB documenting the status of LE/370 and IDMS. But that's not all: the DBA also has to see PIBs LI18624 and LI23664.

Terminology Confusion

On top of everything, new terms have been introduced for familiar entities.
Comparison of Commonly-Used Terms
  © Copyright IBM Corp. 1991, 1997
 
In order to better understand the various terms used throughout the IBM
Language Environment for OS/390 & VM and IBM COBOL for OS/390 & VM
publications and what terms are meant to be equivalent, see the following table.
 
 +------------------------------------------------------------------------+
 | Figure 2. Comparison of Commonly-Used Language Environment and COBOL   |
 |           for OS/390 & VM Terms                                        |
 +------------------------------------------------------------------------+
 | Language Environment | COBOL for OS/390 & VM Equivalent                |
 | Term                 |                                                 |
 +----------------------+-------------------------------------------------+
 | Aggregate            | Group Item                                      |
 +----------------------+-------------------------------------------------+
 | Array                | A table created using the OCCURS clause         |
 +----------------------+-------------------------------------------------+
 | Array element        | Table element                                   |
 +----------------------+-------------------------------------------------+
 | Enclave              | Run Unit                                        |
 +----------------------+-------------------------------------------------+
 | External data        | Working-Storage data (defined with EXTERNAL     |
 |                      | clause)                                         |
 +----------------------+-------------------------------------------------+
 | Local data           | Working-Storage data (defined without EXTERNAL  |
 |                      | clause)                                         |
 +----------------------+-------------------------------------------------+
 | Pass parameters      | BY VALUE                                        |
 | directly, by value   |                                                 |
 +----------------------+-------------------------------------------------+
 | Pass parameters      | BY REFERENCE                                    |
 | indirectly, by       |                                                 |
 | reference            |                                                 |
 +----------------------+-------------------------------------------------+
 | Pass parameters      | BY CONTENT                                      |
 | indirectly, by value |                                                 |
 +----------------------+-------------------------------------------------+
 | Routine              | Program                                         |
 +----------------------+-------------------------------------------------+
 | Scalar               | Elementary Item                                 |
 +------------------------------------------------------------------------+
So the next time someone tells you that his enclave encountered a problem with local data, you know what it is!

A world without standards

- IDMS/SQL Special Dispatch, Oslo

One comes across the issue of standards almost daily: ANSI standards of various kinds, POSIX and its variations, ISO standards, HTML standards, the Java standard and so on, besides the well-known ASCII and IBM EBCDIC character representations! At least one thing is common to all these standards: no one follows them to any appreciable degree! The other day my 3-year-old refused to touch his cheeseburger until I guaranteed that, according to ISO standards, it was all right to have a slice of cheese on the hamburger!

For example, the SQL standards (SQL2 and SQL3) are not implemented in full by anyone. Even when a vendor claims partial implementation, its meaning is diluted by what everyone calls "extensions"! Invariably these extensions violate the very standard we are talking about! Take just one example: CREATE INDEX is not part of the SQL standard. Can you pick one site in the world running an SQL database in production without using indices?
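Yet every production shop codes something like the following, vendor "extension" or not (the table and column names are invented for the example):

   CREATE INDEX EMP_NAME_X ON EMPLOYEE (EMP_LAST_NAME);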

Many European languages have some extra letters besides the familiar 26 of A-Z. For the EBCDIC representation of these extra letters, IBM, for some mysterious reason, chose not to allocate unique hex values, though there were many empty slots among the available 256 representations. For example, in Finland the hex value of $ (X'5B') was used to represent 'Ä', the hex value of @ was used for 'Ö', and so on. But the same hex values were used for other letters in Germany, Denmark and so on.

We won't go through all these complex standards at all. Just take a simple example from everyday life. The country code is standardized on all motor vehicles. So we have 'USA', 'N', 'D', 'DK' and so on. It can be 1 letter, 2 or 3! There is no standard on how many characters to use! Nevertheless, if you are driving in Europe and you see 'CH', you can probably guess it is Switzerland (from Confoederatio Helvetica)!

In the evening you switch on Eurosport, and the country names take a new form. Germany is no longer 'D' but 'GER'! The United Kingdom is no longer 'UK' but 'GBR', and so on.

Then you log on to the Internet, and you see yet another form of country code, in the domain names. Here, Norway is no longer 'N' or 'NOR' but 'NO'! Ironically enough, many UNIX documents and books state that a DNS ending of '.COM' denotes a commercial enterprise in the USA! By now every child knows that '.COM' does not mean that the particular domain is in the USA! [Again, my 3-year-old tells me that '.COM' today means only one thing: the site is not British! Patriots there always suffix a .uk or .co.uk!]

The next day you are at the workplace and look at the IBM COBOL/370 manual. Here you see yet another country code! Though Norway is still 'NO', Germany is now 'DE' and the United States is just 'US'.

Basically, all this confirms only one thing. Even in this information age, there is no standard: neither for existing entities nor for new ones. It simply doesn't work the way you expect.

Tailpiece: I just got Internet access from a service provider which is also the telecom operator. They sent me a note in which my name was wrong. I thought I could send an email to correct this. But alas! There was no email address. Telephone 132456 (which is engaged all the time), fax, or write a letter!


SQL in Practice - How to do Corner

Convert Catalog?

It is stated in the IDMS R14.0 Conversion Notebook that "If the SQL Option was previously installed, you must run the catalog conversion utility against each 12.0 catalog in your environment, after you install CA-IDMS Release 14.0." This implies that you don't need to run this utility if you had no SQL before. But,

What about SQL as an add-on product?

The manual is not clear about this. Following the above guideline, one would be tempted to believe that you don't need to run CONVERT, since you didn't have SQL before. The manual says: if a conversion was not required, the following message is displayed:

DB002900 Catalog Conversion Completed -- NO CONVERSION REQUIRED

Experience shows that even if you did not have the SQL option before, you do not get this message if you are running CONVERT after doing an ADDON of SQL! This is because the utility makes physical changes to the following tables:

 -   SYSTEM.COLUMN
 -   SYSTEM.DBNAME
 -   SYSTEM.DBTABLE
 -   SYSTEM.DMCLFILE
 -   SYSTEM.INDEX
 -   SYSTEM.SCHEMA
 -   SYSTEM.TABLE
So if you run CONVERT after an SQL add-on, you actually get the following message:

DB002900 Catalog Conversion Completed CALC KEYS n TABLES n SYNTAX n

If all this confuses you, it is not surprising. But the bottom line is that you must run CONVERT if you have the SQL option on the current system! It doesn't matter whether you had the SQL option before or not. If anyone thinks we are wrong here, please let us know!

IDMS vs DB2!

Just when we thought that such discussions had no meaning, they have popped up in the IDMS-L forum: IDMS vs DB2 benchmarks! And even the question "Is IDMS relational?"

Before running into the topic, let's say openly that the outcome of such technical comparisons has little impact on today's data processing decisions. If performance had been the concern, no one (NO ONE) would have used DB2 in the mid-eighties.

A C++ FAQ site (http://www.cerfnet.com/~mpcline/c++-faq-lite/) has explained this point very well, in answer to the question: Is C++ better than Ada? (or Visual Basic, C, FORTRAN, Pascal, Smalltalk, or any other language?)

"This question generates much much more heat than light. In 99% of the cases, programming language selection is dominated by business considerations, not by technical considerations. Things that really end up mattering are things like availability of a programming environment for the development machine, availability of runtime environment(s) for the deployment machine(s), licensing/legal issues of the runtime and/or development environments, availability of trained developers, availability of consulting services, and corporate culture/politics. These business considerations generally play a much greater role than compile time performance, runtime performance, static vs. dynamic typing, static vs. dynamic binding, etc. Anyone who argues in favor of one language over another in a purely technical manner (i.e., who ignores the dominant business issues) exposes themself as a techie weenie, and deserves not to be heard."

Amazingly, the above comments are applicable to the IDMS vs DB2 comparison as well.

Now let us hear what Ken Paris has to say on IDMS and DB2:

Ken Paris Comments:

I've used IDMS for over 20 years (and, preceding that, DMS-1100, the Univac-based equivalent, for 4 years). I've also used DB2 for close to 12 years.

I feel comfortable making this statement: "if someone is building a new application using the most current release of either IDMS or DB2, the DB2 system will perform significantly better." In addition, if size is a consideration, DB2 totally overwhelms IDMS. An associate of mine worked on implementing a DB2 system where one table had 5.3 BILLION rows, and this is a production system, not a query-only database. And, from what I understand, this is not the largest table in existence.

As far as SQL: it has always supported updating. If you didn't know that, you shouldn't be criticizing the language; if you did, then why make the point? I have written over a thousand IDMS programs and I believe that every one would be a simpler program in the SQL world. In fact, a lot of them would not even be required, as they could be replaced by a single SQL statement. And if your application doesn't fit into a "two-dimensional" model, then you have a much better chance of dealing with it in DB2, which is beginning to support more than tabular data; this support is being significantly enhanced in Version 6 (which allows user-defined data types, user-defined functions, extenders to the language to deal with these, etc.).

I've always liked IDMS - I was on both the board and TAC (Technical Advisory Committee) of the IDMS Users Association. I still enjoy writing IDMS programs when there is a need to do so. Regardless, I've seen DB2 pass IDMS by and the distance between the two systems will only grow as time goes by.

William Allen Replies

William M. Allen Jr. of Arch Consulting strongly disagreed with Ken on these issues, with the following comments:

Hello Ken:

I totally disagree with your speed statement, and your 1 line SQL statement!

How many I/Os is it going to take to get one record out of that 5 billion row table, versus an IDMS CALC or INDEXED record?

As far as writing a program for a 1-line SQL statement: I have not seen a 1-line SQL statement in years, unless of course you are just talking about "SELECT * FROM TABLE". And what about OLQ for producing reports with no actual code written?

Some SQL statements can be quite complicated, with inner joins, outer joins, reflexive joins and nested subqueries.

The point is that there are many people who work on both IDMS and DB2. In defense of Jim: of course he knew SQL had update capabilities. He was just making a joke about Structured QUERY Language.

I also worked with IDS on the Univac and converted a few customers from IDS to IDMS; then I worked on IDMS, Oracle, SYBASE and DB2. I am of the opinion that IDMS kills DB2 for flat-out transaction processing. In my experience it always has, and it probably always will.

IDMS has not had many enhancements since CA took over, so sure, DB2 is gaining ground on IDMS. But where was DB2 20 years ago, when any database in IDMS could have been defined as standalone tables with foreign keys to relate records? IDMS was born with this feature; the only thing missing back then was the SQL language.

A lot of the 'NEW' technology today, such as single-schema architecture and data warehousing techniques, has always been available to users of IDMS. They just never knew it was a big deal; it was standard equipment.

I agree that the distance between the two systems will grow as time goes by and IDMS users will go to some other platform, but I also know a company that went to DB2 and came back to IDMS. I just don't think that either DB2 or IDMS is the database of the future; like IDMS's, DB2's time will come as well.

IDMS/SQL Comments

Both are veterans of IDMS, and there is some truth in both statements. We agree with Ken Paris on his last statement: "DB2 is moving ahead!" But the rest of Ken's concerns are not technically sound. Talk about billion-row tables and the biggest database has no meaning, since both products can support very large tables. Performance can only be measured under the same conditions. For example, on random access with CALC keys, IDMS/SQL can always outperform DB2 if the database is very large! But DB2 has rewritten its index mechanism recently (both were based on the B+tree earlier, and both SQLs were born out of the System R research prototype).
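Some hedged arithmetic behind that CALC claim, assuming for illustration a four-level B+tree index on a very large table and ignoring buffer hits:

   indexed access :  root + 2 intermediate levels + leaf + data page  =  5 I/Os
   CALC access    :  hash(key) points at the target page directly     =  1 I/O
                                                     (barring overflow)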

It seems Ken Paris is still talking about the CODASYL IDMS. This means Ken is comparing the IDMS of 1986 against the DB2 V5 of 1998, making an a priori assumption that IDMS does not support SQL, or that nobody is using SQL. Let us compare apples to apples. Our comparison shows that on a mainframe-only basis, IDMS 12.0 SQL was superior even to DB2 4.x in the 90s. And with the 14.0 features (OCF displays, table procedures and Sysplex), IDMS/SQL was clearly ahead of DB2. On the other hand, with V5.0 UDB, DB2 is no longer a mainframe-only database. With UDB 5.0 out and 6.0 ready for take-off, IBM is moving fast in a multiplatform direction.

Linda Campbell

Linda Campbell of Campbell Consulting, Inc. makes some great comments in this direction (we could not agree more!):

"Until DB2 Version 5 appeared, DB2 was NOT a transaction processing DBMS. Under V5, if designed properly and managed by a very good DB2 DBA (probably several), it can compete with IDMS in general. However, it is a VERY resource intensive DBMS and getting more so all the time.

On the subject of benchmarks: I worked in the Amdahl benchmark center as a performance analyst for MVS subsystems several years ago. I am also an experienced IDMS DBA and have some background in DB2 as a DBA, and what I can tell you is that benchmarks in general are useless. I am sure that vendors (IBM, Hitachi, Amdahl, etc.) will tell you otherwise, since they really have no other way to measure processor performance, but you can make a benchmark tell you anything you want it to. I know, because I have."

"In DB2, database maintenance is easier to manage.." On this, we believe Linda is talking about IDMS as a CODASYL database. DBA job is easier on SQL Tables. It is tedious on network databases because of the total size and the number of record types involved in a single operation (Changes, unload/reload..). SQL DBA usually has more freedom because he operates on table level. Now if we use IDMS/SQL Tables, the new IDMS DBA has all the advantages of a relational databases too. An overlloked fact in many discussions.

David Lorenzen of EDS

I would match IDMS up against DB2 any day. Several companies I can think of right away ("IT" San Antonio and "Bexar Appraisal" in San Antonio, just to name a few) regret having DB2 and wish they had stayed on IDMS. Our testing indicated that DB2 was (and still is) very I/O-intensive and is nothing more than a means to sell IBM DASD.

A Little History

DB2 as available in 1986 was a very primitive product, with very simple indexing and an SQL front end. Such an incomplete product conquered the market only because of Cullinet's blunders: the blunder of claiming IDMS as IDMS/R, based on ASF. Even a simple SQL front end based on system-owned indexing would have been a clean answer to DB2/SQL in those days. {William Allen touches upon this point when he says standalone tables with foreign keys could have been defined in IDMS in 1984 itself.} Of the VAX/UNIX SQL products available on the market in those days, only INGRES had a sound foundation. The more successful product did not even have an optimizer then. Again, Cullinet missed the point altogether and wasted the 1986-88 period concentrating on the VAX/Generator product!

SQL programming is not simple at all, though SQL SELECT is very powerful (and complex). In its early stages SQL was called Structured English Query Language (SEQUEL), the claim being that it was as simple as plain English. When we look at today's SQL programs, we do not see the simplicity advocated by Codd and Date at all. You just ask the database for what you want, and the DBMS will give it to you; you don't have to know anything about the complex database structure!? In reality, in order to give the joining criteria in an SQL statement you must know what those columns stand for, and more! But it is too late now to talk about the 'cons' of SQL, because SQL is the standard.
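A trivial example (names invented) shows how much the "you need not know the structure" claim glosses over. To write the join at all, you must know that DEPT_ID is the common column, and what it means:

   SELECT E.EMP_NAME, D.DEPT_NAME
   FROM   EMPLOYEE E, DEPARTMENT D
   WHERE  E.DEPT_ID = D.DEPT_ID      -- the 'hidden' structural knowledge
   AND    D.DIVISION = 'D02';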

DB2 has the great drawback of not having an online environment of its own! What is the preferred online environment of DB2? TSO? CICS? IMS-DC? Development activities are heavily TSO-based. CICS is basically a run-time environment. And the missing integrated dictionary is a big drawback; third-party dictionaries and the unusable Repository are no match.

On a mainframe-only basis, any SQL application can be developed faster with IDMS/ADS/COBOL than with CICS/DB2. Some of the 'SQL performance' concerns which were relevant in the 80s are no longer valid now, because of advances in hardware technology. The pros and cons of SQL apply to both IDMS and DB2 now. IDMS's foundation is very strong and integrated. What is missing today is the positioning, and some multiplatform front-end tools to take it into the 21st century. On the other hand, DB2's foundation is weak and will continue to be so, because you cannot change the foundation of a building once 10 stories have been built on top of it! But the enhancements are great. We get a vague feeling that OLTP concerns are giving way to more and more multiplatform front-end solutions. One cannot have OLTP transactions running together with a Windows front end, or with a WWW application running in update mode and locking the whole database with its interpreted code. They cannot co-exist, irrespective of the database you are talking about. In many marketing claims one sees such things on the projector, but in reality the data is very often duplicated, or "replicated" if you prefer a less damaging terminology!

SQL is here to Stay

All said and done, the SQL option in IDMS has not been highlighted by veterans inside or outside CA. This has been detrimental to the product. Many IDMS veterans have been overly concerned about performance figures. The author recently came across a lecture by an IDMS veteran who explicitly doubted the performance of SQL, forgetting that he was comparing against the IDMS 10.2 of old (where the maximum number of pages in an online buffer never exceeded 15), whereas IDMS 12.0/14.0 clients are running online with thousands of pages in the buffer! Another danger of these arguments is that management is not concerned with performance differences at the minute level our DBA is talking about! The net result of not using SQL in IDMS will always be the same: management will decide one day that they should go for SQL anyway, and since "SQL within IDMS does not perform", they will have to go for DB2 or a UNIX database! You don't have much choice here. SQL, with all its drawbacks, is here to stay. So the choice is: do you want to use it within IDMS, or do you want to switch to DB2 or a UNIX product?

Related Links:
IDMS/SQL News 6.1 - The Power of Release 14.0: Programmer Productivity Enhancements
IDMS/SQL News 6.1 - DML vs SQL Case Study: SQL performance matched that of DML, and more!

Row ID Access in DB2 for OS/390 V 6!

The database key has been an integral part of the IDMS CALC access mechanism. Relational gurus never liked this hashed access; Date's writings have always tried to belittle hashing. Hashed access was one thing missing in DB2. In recent releases (4.x), DB2 started using RID lists internally, because there was no other way to get real performance. One suspects that row-level locking could not have been possible without some way of identifying a "dbkey"! Now, with DB2 Release 6, there is no more hiding: the row ID is available to the programmers. It is unclear whether one can utilize a user-defined symbolic key which maps to a row ID at STORE time (like IDMS's CALC), but the row ID is available for the user to manipulate in just the same way we do an OBTAIN with DBKEY!

Direct row access

The user is given the option of including a ROWID column in the table. Later, if an application selects a row from a table that contains a ROWID column, the row ID value implicitly contains the location of the row. If you use that row ID value in the search condition of subsequent SELECTs, DB2 might be able to navigate directly to the row. This access method is called direct row access. Direct row access is very fast, because DB2 does not need to use an index or a table space scan to find the row. Direct row access can be used on any table that has a ROWID column. To use direct row access, you first select the values of a row into host variables. The value selected from the ROWID column contains the location of that row. Later, when you perform queries that access that row, you include the row ID value in the search condition. If DB2 determines that it can use direct row access, it uses the row ID value to navigate directly to the row.
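A hedged sketch of the mechanism (table, column and host variable names are invented, and the ROWID column syntax is as we read the DB2 V6 material, so verify it against the manuals):

   CREATE TABLE EMP
     (EMPNO      CHAR(6) NOT NULL,
      EMP_ROWID  ROWID NOT NULL GENERATED ALWAYS,
      SALARY     DECIMAL(9,2));

   -- Step 1: fetch the row once; the ROWID value records its location.
   SELECT EMP_ROWID, SALARY
   INTO   :HV-ROWID, :HV-SALARY
   FROM   EMP
   WHERE  EMPNO = '000010';

   -- Step 2: a later statement that passes the ROWID back in the search
   -- condition lets DB2 navigate straight to the row (direct row access),
   -- with no index access or table space scan.
   UPDATE EMP
   SET    SALARY = :HV-SALARY * 1.05
   WHERE  EMP_ROWID = :HV-ROWID;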

IBM on the Move!

When IBM published the IDMS-DB2 Conversion Guide in 1989, it was well aware of the shortcomings of DB2 and knew perfectly well that IDMS was the superior database! Even as late as 1991, when the IDMS 12.0 ESP was just coming out, DB2 was far behind IDMS/SQL even as a relational database. [Tragically, IDMS users and the vendor never realized this!] Even the SQL Reference manuals of DB2 V1.x and 2.x were smaller than the IDMS SQL Reference manual! Support for CICS was termed by some a "marriage made in heaven" in those days.

But an IBM product with a large client base cannot stay static for an indefinite period of time. Over the last 5 years, DB2 has been considerably enhanced. Versions 2.3, 4.x, 5.0 and now 6.0 have taken the product a long way. Several features available in IDMS 12.0 appeared in DB2 4.x and 5.0 (row-level locking, catalog reorg, check constraints etc.). Even then, restrictions on page size remained: either 4K or 32K. But now DB2 V6 allows 8K and 16K page sizes, at last. The new version, DB2 UDB for OS/390 Version 6, focuses on DB2 family compatibility, with rich new object-oriented function and triggers, performance improvements for utilities and queries, greater capacity, more built-in functions, the capability to alter partitions, and more powerful network computing.
Product Experiences / Reviews

Vega 90s Product Line for IDMS and DB2

--- compiled from WWW

Customers are familiar with Vegasoft's ADS/DB2 interface, with which several clients are running ADS applications accessing (and updating) both IDMS and DB2 from an IDMS-DC environment. No duplication of data, no extra overhead: clean, direct access to the real, live DB2 data for the IDMS user. For the DB2 user, the advantage is being able to use the power of a mainframe 4GL and dictionary (IDD is used for storing much of the DB2-related information).

Besides ADS/DB2, VEGA-90's supports several application interfaces:

Clients
  VG-IDMS Client (includes the VG-IDMS Gateway Base Option)
  VG-BATCH Client (includes the VG-BATCH Gateway Base Option)
  VG-Windows Client (includes the VG-APPC Server)

Servers
  VG-IDMS/DB2 Server
  VG-CICS Server
  VG-NT/ODBC Server
  VG-APPC Server
  VG-IDMS/WEB Server (VG-IDMS Gateway TCP/IP Option is required)

Gateways
  VG-IDMS Gateway Base Option (includes LU6.2 support)
  VG-IDMS Gateway TCP/IP Option
  VG-IDMS Gateway MQSeries Option
  VG-BATCH Gateway Base Option (includes LU6.2 support)
  VG-BATCH Gateway TCP/IP Option

Tools
  VG-ADS Precompiler
  VG-COBOL Precompiler
For detailed information, please go to the Vegasoft home page. Complete manuals are available in Acrobat format for you to download.

Just One more!

"Accomplishing the impossible means only the boss will add it to your regular duties"
Source: David S. Pinhasik, Tadiran Information Systems, Sheba Medical Center, Tel Hashomer, via the IDMS-L newsgroup



IDMS/SQL News is published on behalf of the IDMS WatchDog Group, Helsinki-Hørsholm-Oslo, for free circulation among IDMS users worldwide. IDMS/SQL News is not a CA publication. CA-IDMS/DB, CA-IDMS/DC and CA-ADS are registered trademarks of Computer Associates International, Inc. CICS, IMS-DB/DC and DB2 are registered trademarks of IBM Corporation. Technical examples are only guidelines, and modification might be required in certain situations and operating systems. Opinions expressed are those of the authors and do not represent the views of IDMS clients or related vendors. © Permission is hereby granted to use or reproduce the information, but only for IDMS customers and consultants, and provided the material is distributed free of charge.


Comments to idmssql@geocities.com