ERP in the Health Care Industry

The by-word of the latest marketing techniques is 'one-to-one marketing', wherein an individual is identified and individual-specific services and products are developed so that most, if not all, of that person's demands for products and services may be taken care of. This remit, however, is extremely high on promise and abominably low on delivery. In theory, all one needs to do is identify and analyse all the needs of a particular individual and then devise the various products and services that the individual might require. In practice, however, this is next to impossible to carry out in the health care sector, especially without any Enterprise Resource Planning (ERP) support.

Consider trying to build a complete personal profile of a customer, who might turn out to be merely a potential one. Since any organisation of decent size is bound to have anywhere from several hundred to quite possibly millions of customers, innumerable man-hours will have to be spent and copious quantities of midnight oil burnt as the organisation wrestles with building such an exhaustive profile. Unless a sound and comprehensive ERP system, together with a Customer Synchronised Resource Planning (CSRP) system of as extensive a reach as possible, is in place, efficient management of all the data and support for its analysis would be all but impracticable.

The ERP systems currently available are said to belong to the client-server era. These systems are built with a clear separation of the various functional components. Graphical User Interface (or GUI, as it is commonly referred to) concepts and techniques are implemented as the front-ends on the client machines, while powerful back-ends on the server machines host the databases and run the business logic, which is written as server procedures.

The databases are built using relational database technology, while the business logic is split, depending on the product architecture, to be executed on the client, the server, or both. Increasingly, software manufacturers are emphasising the concept of objects and implementing it using Object Oriented Programming (OOP). This has further improved performance in terms of efficiency, speed of transmission, data acquisition and manipulation, and, perhaps most important of all, data security - the one thing that gives maximum heartburn to doctor and patient alike.
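
To illustrate how object orientation can aid data security, here is a minimal sketch in Python - with entirely hypothetical class and field names - of a patient record that encapsulates its data behind a role check rather than exposing raw fields:

    class PatientRecord:
        """Hypothetical example: encapsulation hides raw data behind an access check."""

        def __init__(self, patient_id, diagnoses):
            self._patient_id = patient_id      # 'private' by convention
            self._diagnoses = list(diagnoses)  # reachable only via methods

        def diagnoses_for(self, requester_role):
            # Business logic guards every read; callers never touch the raw field.
            if requester_role not in ("doctor", "patient"):
                raise PermissionError("role '%s' may not view diagnoses" % requester_role)
            return tuple(self._diagnoses)      # return an immutable copy


    record = PatientRecord("P-001", ["gastritis"])
    print(record.diagnoses_for("doctor"))      # ('gastritis',)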

With suitable communications infrastructure, these systems can be deployed in a distributed environment, and the business processes may span multiple geographical locations.

The current generation of database systems is based on relational technology (RDBMS). These database systems support data retrieval using a standard query language known as Structured Query Language (or SQL, pronounced 'sequel'). Business logic, which specifies the set of actions that need to be performed (such as checking the stock situation, etc.), is written using SQL and is invoked when the user performs a particular action. These database systems support access to multiple data sources and allow synchronisation of data manipulation across these sources.
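
As a flavour of how such SQL-expressed business logic is invoked from an application, here is a minimal sketch using Python's built-in sqlite3 module; the stock table and its columns are invented for illustration:

    import sqlite3

    # Invented schema: a pharmacy stock table checked before dispensing.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stock (drug TEXT PRIMARY KEY, units INTEGER)")
    conn.execute("INSERT INTO stock VALUES ('amoxicillin', 40)")

    def check_stock(drug, needed):
        # The 'business logic' is a parameterised SQL query,
        # invoked when the user performs an action (here, prescribing).
        row = conn.execute(
            "SELECT units FROM stock WHERE drug = ?", (drug,)
        ).fetchone()
        return row is not None and row[0] >= needed

    print(check_stock("amoxicillin", 10))  # True
    print(check_stock("amoxicillin", 99))  # False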

ERP systems built on this technology will support organisations that need to set up distributed systems with considerably less dependence on a centralised information resource.

Use of Structured Query Language will enable organisations to perform post-implementation maintenance with confidence, since the systems in place are not tied to proprietary languages and are essentially cross-platform: they will work equally well on any database that supports the language. This spares the end-users the burden and frustration of hunting for the necessary information as quickly as possible.

The skills required to perform this activity will be at a premium in the marketplace. Scalability issues will need to be addressed, since the required hardware may have to be sized to cater to the particular business process activities performed at a specific location. The addition of new locations must not disrupt the existing ones.

In concert with simultaneous data warehousing on a global scale, ERP will allow analytical manipulation of the atomic data contained therein. This is possibly the biggest and most valuable payoff of all.


Datamarts and Data Warehousing in the Health Care Industry

A data warehouse has been defined as a collection of data, principally in support of the decision-making process, that is

  1. subject-oriented,
  2. integrated,
  3. time-variant, and
  4. non-volatile.

Essentially, a data warehouse solution ensures consistent and cleansed information with which to plan and to make everyday decisions for the smooth functioning of an enterprise. The challenges faced in accessing the information are:

  • Retrieving facts takes too long, and the facts retrieved are often out of date
  • Analysis disrupts daily operations and interferes with transaction performance
  • Data is raw and unrecognisable, not in an easily understandable format
  • Data is subject to constant change and is seldom consistent
  • Ad hoc queries are difficult to process

Decision support systems (DSS) have the ability to:

  • Provide a multi-dimensional, conceptual view of data
  • Create complex criteria sets which allow pinpoint access to required information
  • Provide rapid response to queries and the ability to support ad hoc queries
  • Support hierarchical consolidation of data, with the ability to "drill down" into detail
  • Leverage existing investments in information technology

In other words, the data warehouse is a database designed specifically to meet the needs of decision support systems (DSS), rather than those of transaction processing systems (TPS).
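
To make "hierarchical consolidation" and "drill down" concrete, here is a minimal Python sketch over some invented admission figures; the same facts are summarised at region level and then drilled down to hospital level:

    from collections import defaultdict

    # Invented sample facts: (region, hospital, admissions)
    facts = [
        ("East", "City General", 120),
        ("East", "Riverside", 80),
        ("West", "Hill Clinic", 60),
    ]

    def consolidate(rows, depth):
        """Sum admissions at a chosen level of the region -> hospital hierarchy."""
        totals = defaultdict(int)
        for row in rows:
            totals[row[:depth]] += row[2]
        return dict(totals)

    print(consolidate(facts, 1))  # summary: {('East',): 200, ('West',): 60}
    print(consolidate(facts, 2))  # drill-down to hospital level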

The broad differences between transaction processing and data warehousing follow from this distinction. Two building blocks of the warehousing environment deserve particular mention:

  1. A datamart holds data specific to a business area or department. It contains only a subset of the enterprise data - that which is of value to a specific business unit or department of the enterprise. The data may be captured from operational systems or from the enterprise data warehouse. Analysis is confined to a single business area, unlike an enterprise data warehouse, which can analyse data across multiple business areas of the enterprise.
  2. Metadata is the information repository of the datamart. The metadata stores the definitions of source data, target data and source-to-target mappings. Management of information about the enterprise data is as important as the data itself. Metadata is to a data warehouse what a road map is to a navigator. It is an integral part of the decision support system (DSS) and must be constructed alongside the datamart. (A sketch of such a mapping repository appears below.)
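
A minimal sketch, in Python, of what such a source-to-target mapping repository might look like (all table, column and rule names are invented):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Mapping:
        """One metadata entry: where a datamart column comes from and how."""
        source: str          # table.column in the operational system
        target: str          # table.column in the datamart
        transformation: str  # human-readable rule applied en route

    METADATA = [
        Mapping("opd_visits.visit_dt", "fact_visits.visit_date", "DD/MM/YYYY -> ISO 8601"),
        Mapping("opd_visits.dx_code",  "fact_visits.diagnosis",  "lookup in ICD table"),
    ]

    # The repository can itself be queried: "where does this field come from?"
    for m in METADATA:
        if m.target == "fact_visits.visit_date":
            print(m.source, "via", m.transformation)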

An important aspect of the data warehouse environment is the metadata. Simply stated, metadata is data about the data; it keeps track of what is where in the data warehouse. Typically, the metadata store tracks:

  • the structure of data in the transaction processing environment
  • the structure of data within the datamart
  • the source of the data fed into the datamart
  • how the data is transformed as it passes into the data warehouse
  • extraction information
  • update information in periodically updated datamarts

Extraction is the first phase of moving operational data into the datamart. The operational data may take the form of records in the tables of an RDBMS, or of flat files in which each field is separated by a delimiter. The datamart should be able to retrieve data selectively from a variety of disparate databases residing on incompatible RDBMSs and file systems. The datamart tools should make extracting data from the source a simple exercise.

Next comes transformation, which is carried out after the datamart schema has been designed. As data is moved into the datamart, this process converts it from a structure suited to transaction processing into one most suitable for DSS analysis. Datamart tools should be able to perform complex transformations - date, arithmetic, character, lookup, encoding, conditional, and multi-step - automatically, through a simple visual interface.

Loading (or populating, as it is usually termed) the datamart with the transformed data is an iterative process. The datamart has to be populated continually and incrementally to reflect the changes in the operational system. The datamart tools should be able to load records into the target tables automatically, schedule the start and end times of the load, and reconcile the number of rows loaded into the datamart with the changes in the operational data.
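
Putting the three phases together, the following toy extract-transform-load cycle, written in Python with sqlite3 standing in for both the operational system and the datamart (schema and names invented), shows the shape of the process:

    import sqlite3

    source = sqlite3.connect(":memory:")   # stands in for the operational system
    mart = sqlite3.connect(":memory:")     # stands in for the datamart

    source.execute("CREATE TABLE visits (patient TEXT, visit_dt TEXT, charge REAL)")
    source.execute("INSERT INTO visits VALUES ('P-001', '13/03/2001', 250.0)")
    mart.execute("CREATE TABLE fact_visits (patient TEXT, visit_date TEXT, charge REAL)")

    # Extract: pull rows from the operational store.
    rows = source.execute("SELECT patient, visit_dt, charge FROM visits").fetchall()

    # Transform: a simple date transformation, DD/MM/YYYY -> ISO 8601.
    def to_iso(ddmmyyyy):
        d, m, y = ddmmyyyy.split("/")
        return "%s-%s-%s" % (y, m, d)

    transformed = [(p, to_iso(dt), c) for (p, dt, c) in rows]

    # Load: populate the datamart incrementally.
    mart.executemany("INSERT INTO fact_visits VALUES (?, ?, ?)", transformed)
    mart.commit()
    print(mart.execute("SELECT * FROM fact_visits").fetchall())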

In a data warehouse environment, the data collected on a daily basis is analysed and a summarised snapshot of the information is also stored. Such summarised reports can be "drilled down" to as fundamental a level as required. Since most people need a summary report for most of their decision-making and only rarely need to refer to the detailed figures and information, such snapshots prove to be of more value than any other collection of data.

Need for Data Warehouses & Datamarts in Enterprises

The increase in the availability of data has created a challenge for organisations: to utilise it at the appropriate time for optimal decision making. Data warehousing technology helps in the effective management of scattered data by validating and organising it in one place - the data warehouse. Information needed by decision-makers to reply to ad hoc queries can thus be shared by authorised users at various levels of the enterprise.

Consequently, an increasing number of organisations are rapidly embracing data warehousing for faster solutions to ad hoc queries and business problems. The data warehousing technology significantly changes the way information systems (IS) departments function. Data warehouses shift the load of responding to user queries to the users themselves and allow the IS departments to concentrate mainly on storing data consistently and maintaining the systems. Users can frame their queries and retrieve data themselves even as they are thinking about the problem. The IS departments still retain the responsibility of monitoring user access, thus controlling access and protecting the data.

The advanced tools that all data warehousing systems must provide are flexible solutions for ad hoc reporting, multi-dimensional analysis, advanced metric computation, and collaborative information sharing, thus enabling informed decision making across the enterprise.

Use of Data Warehouses and Datamarts in the Health Care Industry

Let us now consider the following situation. A doctor wishes to see the details of a patient's past illnesses. He refers to past records, of course, but does he need to see each and every detail on every occasion? I should think not. Out of every ten times, perhaps, he needs to see them twice or three times at most. The remaining seven or eight times he might want a brief glimpse of the summary, or a look at the diagnosis. And the information is lying all over the place, for the patient, you see, is a frequent traveller who more often than not suffers from a tummy upset.

The poor doctor waits, waits and then waits some more as the data are collected from the various sources and presented to him in a meaningful form. (It is simply impractical to have a central server house the medical and other related data, and simultaneously support find-and-seek solutions, for anything beyond ten thousand patients: the costs and processing speeds required would be simply astronomical.) If the doctor then seeks some additional details, he has quite simply 'had it', as they say.

It would be far better and simpler if the data were available in a summarised form when first requested, with suitable options that allow linking to other data in a "drill-down" fashion. This summary is created as soon as the particulars of a patient are entered into the database. The report is generated as a web page which may be accessed with a suitable browser by anyone connected to the system, after undergoing various security checks. I describe below a possible sequence of data storage, retrieval and analysis using data warehousing and datamarting technology (a sketch of such a summary page generator follows the list):

  • Patient comes in and, after various security checks (through cross-checking of electronic fingerprinting/smartcard/optical data disk, etc.), is logged on.
  • The doctor/carer collects all the necessary details from the patient
  • He feeds this information into the system
  • The data gets physically stored on the server after being suitably broken down into its atomic state
  • A snapshot summary, in the form of web page(s) with suitable links for seeking out the data just collected and inserted, is compiled and stored immediately (hence the necessity of equipping such servers with fast processors)
  • This very summary is also displayed on the doctor's screen, and he may immediately use it for analysing the case presented by the patient.
  • He finds that he needs further information about some ailment that the patient suffered from around six months ago
  • He clicks on the link that takes him to the information he seeks; it is automatically found and presented, in web page form, on his screen, even though retrieving it involved crossing large geographical boundaries and establishing a communication link with the server housing the data.
  • The doctor requires further information about, say, the tracing of the ECG taken at that time, for the patient is making a complaint that can be explained only after a brief but proper review of the tracing.
  • He clicks on the link on the web page that displays the ECG tracing. This data is stored on that server, or in a data warehouse that collects every piece of data emanating from a locally connected server. Further links are available to help in the interpretation or display of the findings of the ECG tracing.
  • The doctor feels that he needs to see a graphical depiction of the blood pressure and pulse rate records over the past five years. A series of clicks on interconnected links gets him exactly the information he wants.
  • He is interested in the heart and chest sounds as they were auscultated six months ago. Another series of clicks presents the desired sounds "live". He wishes to replay a section of the sound, but after filtering out all extraneous noises and amplifying it by a factor of five. He makes another series of clicks and achieves the desired result without any difficulty.
  • A 2D echocardiography was also carried out a year ago. The patient does not remember the date, but the data stored within his smart card has it accurately noted. The doctor accesses the images and then, with the help of the telemedicine package, renders a 3D image from the 2D images. He is then ready to make his assessment.
  • He looks at the medications that had been prescribed before. He alters the dose of one, discontinues two and re-introduces three more.
  • After he is satisfied with the summary report, he stores it, together with the accompanying data, permanently on his server.
  • Thereafter the patient's smart card is updated with the newly obtained data. A new summary report is prepared specifically for the smart card by the software and this too is inserted into the card.
  • As one can see, the doctor is requesting the details of only a few particular items from the past records. There is always a possibility that he might want to see each and every little detail that was collected at the time, and this too is easily displayable if necessary. Most of the time, however, the doctor will be satisfied to see the principal complaints, the date and time of the complaints, the broad negative/positive physical and investigatory findings, the diagnosis made, the treatment administered and the various medicines prescribed.
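
As a flavour of how such a snapshot summary page with drill-down links might be generated, here is a minimal Python sketch; the record fields and URLs are invented for illustration:

    # Invented patient snapshot; in reality this would come from the datamart.
    snapshot = {
        "patient": "P-001",
        "diagnosis": "gastroenteritis",
        "visit_date": "2000-09-14",
        "details": {                      # each entry becomes a drill-down link
            "ECG tracing": "/records/P-001/ecg/2000-09-14",
            "Prescriptions": "/records/P-001/rx/2000-09-14",
        },
    }

    def summary_page(snap):
        """Render the snapshot as a minimal HTML page with drill-down links."""
        links = "".join(
            '<li><a href="%s">%s</a></li>' % (url, label)
            for label, url in snap["details"].items()
        )
        return (
            "<html><body><h1>Summary for %s</h1>"
            "<p>%s, seen %s</p><ul>%s</ul></body></html>"
            % (snap["patient"], snap["diagnosis"], snap["visit_date"], links)
        )

    print(summary_page(snapshot))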

The only detail that should normally be available without any further linking, and consequently without further delay, is the dosages of the various medications previously prescribed, since most doctors seek this when deciding on the management of the present complaints. Many a time certain medicines need to be avoided if they have previously been prescribed and found to be unsuitable (and I am not merely referring to allergic reactions to medication) or are absolutely contraindicated. Decision support systems are another important area of application of computers in medicine; a minimal sketch of such a prescription check appears below. Data warehouses and datamarts are vital for the sound development and implementation of such systems, which would go a long way towards providing the best of care.
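
Such a rule reduces to a simple membership check against the patient's history. A minimal Python sketch, with invented drug names and history fields:

    # Invented history: drugs previously found problematic for this patient.
    history = {
        "unsuitable": {"aspirin"},          # caused problems before
        "contraindicated": {"ibuprofen"},   # absolutely ruled out
    }

    def vet_prescription(drug, hist):
        """Return a warning string, or None if the drug appears safe to prescribe."""
        if drug in hist["contraindicated"]:
            return "BLOCK: %s is absolutely contraindicated" % drug
        if drug in hist["unsuitable"]:
            return "WARN: %s was previously found unsuitable" % drug
        return None

    for candidate in ("paracetamol", "aspirin", "ibuprofen"):
        print(candidate, "->", vet_prescription(candidate, history))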

Administrative Issues

Administration of the telemedicine network should, however, not be in the hands of the state. It must lie with a competent NGO in which the medical fraternity and the patients have complete trust. My recommendation is that this should be the World Health Organisation (WHO), since it has a presence at nearly all levels of nearly every state of the world. As the medical fraternity is in direct contact and constant interaction with them, practically on a daily basis, the state-level medical councils must act as the liaising bodies to such a network.

The WHO can set up a global taskforce to interact with the various medical councils responsible for licensing the medical practitioners who will have access to the network. The software manufacturer(s) who actually develop the system must build security features into the software so that these bodies have the freedom to lock or unlock the areas of information that the various end-users may access. The various payers will not necessarily be interested in every detailed bit of information that exists within the network, but might seek information on certain details from time to time. Hence, the information access requirements of every payer (whether first party or third party) should be kept in mind while detailing their accessibility and security clearances.

Otherwise, the telemedicine network may be built along the lines of the Internet, which no one owns. In fact, it can have a genesis parallel to the development of the Web: a dedicated telemedicine network may be developed and then opened up to the whole world. Alternatively, one can choose to use the Internet itself to connect the various hub nodes and exchange data. This would make use of the infrastructure already in place, and save the hassle and expense involved in developing a dedicated, telemedicine-only network.

Indeed, there are already a number of excellent application software packages for EDI on the market, and the various stakeholders of the health care industry, along with entities with a vested interest, may choose to ride piggy-back on the Internet of today to "get connected".

Legal Issues

There would, however, be the requirement of monitoring ultra-sensitive data, and the legal complications of treating a patient who not only resides in a foreign country but is actually a citizen of yet another. Say, for example, a Japanese subject residing in the USA is consulting a British doctor who happens at that very moment to be flying on an Australian plane over South Africa. Who will then be legally responsible in cases of negligence, etc.? In my opinion, the World Health Organisation (WHO) is the best agency to deal with, co-ordinate and administer this, albeit to a limited extent, as it sees fit, since it is absolutely vital that agreement amongst the various state governments and licensing authorities not only be reached but be effectively implemented as well. Even cross-state legislative requirements within the US have many repercussions that need to be addressed.

Networking the Data Warehouse/Datamart

The traffic movement of such a telemedicine network will have to be as fool-proof as possible. The various checks for authentication need to be equally water-tight. In this section I provide an insight into the possible modes of transmission and verification for a telemedicine network.

The Internet is probably the easiest mode of transmission, since the technology is already in place. However, since it is just as easily accessible to anyone armed with a PC, a modem and a telephone line connecting him to the nearest ISP (Internet Service Provider), it is dangerously open to hackers of all hues, who may breach several levels of security and gain access to restricted information of all sorts. Thus, the security of information has to be most carefully devised.

Smartcards with encrypted security codes in the form of passwords provide one level of security only. Such cards should hold only the vital statistics (name, a unique reference number valid for the telemedicine network, and the pulse rates, blood pressures, temperatures and respiration rates for the past year), plus only such information as is required for emergency purposes (drug allergies, presence of diabetes/haemophilia/hypertension, etc.). This improves the overall safety of the information contained, for no one can learn anything beyond what is already visible without the user's express consent. If, in addition, the users at either end, i.e., both the patient and the health care service provider (HCSP), are forced to use their own codes/passwords to gain access to the information, then the security depth is definitely increased.
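
To give a flavour of password-derived encryption of a card's emergency record, here is a minimal Python sketch using the third-party cryptography package; the key-derivation parameters and record fields are illustrative only, not a vetted design:

    import base64, json, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password, salt):
        # Derive a Fernet key from the card holder's password (parameters illustrative).
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    # Illustrative emergency record: only the minimum needed in an emergency.
    record = {"ref": "TMN-0042", "allergies": ["penicillin"], "diabetic": True}

    salt = os.urandom(16)                       # stored on the card alongside the blob
    token = Fernet(key_from_password("patient-pass", salt)).encrypt(
        json.dumps(record).encode()
    )

    # Reading the card back requires the same password; a wrong one fails to decrypt.
    plain = Fernet(key_from_password("patient-pass", salt)).decrypt(token)
    print(json.loads(plain))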

Further protection may be provided by the software manufacturer, even at the level of individual data items. Restricted viewing will allow ultra-sensitive data to be permanently masked from everyone except the person who originally entered the data or the person whose data it is.

Instead of a smartcard, one may use miniaturised flash-ROM chips capable of storing large amounts of information of a variegated nature, ranging from the details of a driving licence to a complete recording of one's favourite song. Such chips may be housed in objects that a person happens to wear daily and possibly continuously (e.g., a ring, a bracelet, a dog-tag) or even be implanted under the skin (e.g., in the left arm). Specialised equipment will, however, be required to read from and write to these chips. The good part is that the chance of loss, or of some unscrupulous person gaining wholly unwarranted access to the information contained therein, is almost negligible. The bad part is that you have allowed, consciously or inadvertently, "big brother" to watch you continuously.

The ultimate payoff has to be decided by society at large, but quite frankly I would be more comfortable with a card stuck to my lapel or a ring on my finger that I can take off at will, rather than have some foreign object stuck underneath my skin. Voila! With gurus like Mr. Nicholas Negroponte of the Media Lab, MIT, USA, et al. working hard at more innovative offerings at every conceivable time of the day, it is most likely that we shall be able to lay our hands on a safer and yet more accurate means of storing all our frank and dark little secrets, without fear or worry that someone might get them into their dirty clutches without our explicit as well as implicit consent. The world is truly awaiting some wonder chip to appear in this area and make our lives far less stressful.

I opine that the owner of the actual data is the person himself. After all, it is his information that is being put in, and he must have total control over it. It is he who must decide what he wishes to reveal and have the rest of the world know about. Since the physical possession of the data remains in the hands of the medical practitioner or the hospital, they are the possessors of the data.

The responsibility for verification and authentication of the data, however, lies with the person who actually collects, computes and inserts it. It is he who must make the final decision as to whether the data is representative and accurately reflects events as they actually occurred. He must also decide when and what to insert. Thus, it should be these two persons who have as complete control over the data as possible. Who else should be able to view the data must be decided by the patient in consultation with the doctor who examined him.

There is always the option of building up a separate network dedicated wholly, solely and exclusively to telemedicine. However, this would require a yeoman's effort, to say the least, and since too many players would have to agree on the various modalities and rules of the game, as it were, it could jolly well prove to be a non-starter. Theoretically, this is the best option, but practically? Well, direct person-to-person connection is possible, but the major flaw is: how could data then be shared amongst the various players? The whole concept of telemedicine, along with the full exploitation of its overall capabilities, would then collapse.


Network Connection

[Figure: Network Connection]


Connection-Transmission

[Figure: Connection & Transmission Sequence]


Smartcard Design

[Figure: Layout and Details of a Smart Card]


A more detailed write-up on a microprocessor-chip-enabled, smartcard-based medical data management system is available separately.


© Dr. S. B. Bhattacharyya



Copyright: Sudisa - 1997 - 2005.    Last Updated: Tuesday, March 13, 2001
