Category Archives: Clinical Information Systems

Imagining Healthcare–Some Secret Desires


I challenge you to look at the financial world and not come away depressed. Or take a look at the happenings in the political arena and tell me it doesn’t leave a bitter taste in your mouth. And then I invite you to turn your attention to what is happening in the technological and scientific world. I bet it will make even the most despondent among us a little optimistic.

Then there are those times when you encounter something that just puts a smile on your face. Witnessing IBM Watson’s virtuoso performance was one such moment. Today, TechCrunch carried news of a new way of interacting with computers that a group of researchers from Microsoft and Carnegie Mellon University have come up with.

Here is how Computing Now describes it:

A wearable projection system that Microsoft Research and Carnegie Mellon University (CMU) developed lets users create graphical-input interfaces on any surface. OmniTouch has a pico-projector that can display images of keyboards, keypads, or other traditional input devices onto any surface, even a human hand. It uses a depth-sensing camera—like that used in Microsoft’s Kinect motion-sensing input device for the Xbox 360 video game console—to track a user’s finger movements on any surface. The system is mounted on a user’s shoulder, but the researchers say it eventually could be reduced to the size of a deck of cards. Chris Harrison, Microsoft Research PhD Fellow and CMU doctoral student, said OmniTouch lets users have a wide range of input because it is optically-based. OmniTouch does not require calibration. According to the researchers, a user can begin utilizing the device without having to calibrate it. Work on the project will be presented 19 October at the Association for Computing Machinery Symposium on User Interface Software and Technology in Santa Barbara, California. (PhysOrg.com)(Chris Harrison website)(Carnegie Mellon University)

Fascinating!
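
For the technically curious, here is a minimal, hypothetical sketch of the core interaction trick the description above hints at: a depth camera reports how far the surface and the fingertip are from the sensor, and a fingertip counts as a “click” when its depth nearly matches that of the surface beneath it. This is not the researchers’ actual algorithm; every name and threshold below is illustrative only.

```python
# Hypothetical sketch of depth-based touch detection, in the spirit of OmniTouch.
# Not the actual OmniTouch algorithm; names and thresholds are illustrative only.

TOUCH_THRESHOLD_MM = 10.0  # fingertip within ~1 cm of the surface counts as a touch

def is_touch(finger_depth_mm: float, surface_depth_mm: float) -> bool:
    """A fingertip 'touches' the surface when its depth nearly equals the surface depth."""
    return abs(surface_depth_mm - finger_depth_mm) < TOUCH_THRESHOLD_MM

def hit_test(finger_xy, buttons):
    """Return the projected widget (if any) under the finger's x, y position."""
    x, y = finger_xy
    for button in buttons:
        bx, by, w, h = button["bounds"]  # bounds of the projected widget on the surface
        if bx <= x <= bx + w and by <= y <= by + h:
            return button
    return None

def on_frame(finger_xy, finger_depth_mm, surface_depth_mm, buttons):
    """Per depth-camera frame: if the finger rests on the surface over a widget, fire it."""
    if is_touch(finger_depth_mm, surface_depth_mm):
        button = hit_test(finger_xy, buttons)
        if button is not None:
            button["on_press"]()
```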

Combine this with Watson-like intelligence (or even something like Siri) and you have a powerful system. Shrink it down to a head-mountable size, small enough to fit on the front of a baseball cap, improve the computer vision algorithms it uses, and you have a technology that is, in terms of features, already ahead of HAL 9000 or the on-board computer of the Enterprise of Star Trek fame. I believe this could be made available in roughly this configuration within 2 to 3 years.

Now, why do I believe this has the potential to revolutionize healthcare? Because it addresses several prickly challenges peculiar to clinicians’ needs.

  • Clinicians almost always have to use both their hands (and sometimes their minds) for the procedure they are performing (measuring BP, performing a clinical examination, operating, etc.). Leaving the patient to access a computer (or even a tablet) is not convenient. Using a tablet-like device raises the issue of sterilization and whether such devices can tolerate sterilization procedures.
  • Tablet computers show their outputs only in a limited area – a tiny screen bounded by its bezel. A representation of the real world has to be recreated within these confines (something like augmented reality). With the OmniTouch approach, any surface, be it the wrist of the surgeon, the abdomen of the patient, or her pelvic cavity, not only becomes the input device; it is also transformed into a screen. Never before has computing been this close to the real world.
  • Most computers require data to be entered explicitly. However, is it possible that we go about our business and computers do the data capturing, without intruding? I think we are on the cusp of a technology convergence where this would not seem so far-fetched. OmniTouch (or more specifically Kinect) technology will have to be combined with some nifty activity recognition capabilities to achieve just that. Imagine: a nurse administers an antibiotic injection while the head-mounted device recognizes the drug, the patient, the nurse and the act of administering the drug, using a combination of bar-code/QR code recognition, facial recognition and activity recognition (a minimal sketch of such a pipeline follows this list). All data recorded, no keys pressed, no notes dictated!
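
As a thought experiment, here is the minimal sketch promised in the last bullet of the kind of hands-free documentation pipeline this would require. Everything in it is hypothetical: the recognizer functions are placeholders for barcode/QR, facial, and activity-recognition components, not any existing product’s API.

```python
# Hypothetical sketch of hands-free clinical documentation.
# The recognizer callables are placeholders for real barcode/QR, facial and
# activity-recognition components; nothing here is an actual product API.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MedicationEvent:
    patient_id: str    # e.g. from facial recognition or a wristband QR code
    clinician_id: str  # e.g. from facial recognition of the nurse
    drug_code: str     # e.g. from the barcode/QR code on the vial
    action: str        # e.g. "administered_injection", from activity recognition
    timestamp: str

def document_administration(frame, recognize_patient, recognize_clinician,
                            read_drug_code, classify_activity, record):
    """If the camera frame shows a drug being administered, chart it automatically."""
    action = classify_activity(frame)
    if action != "administered_injection":
        return None  # nothing chart-worthy is happening in this frame
    event = MedicationEvent(
        patient_id=recognize_patient(frame),
        clinician_id=recognize_clinician(frame),
        drug_code=read_drug_code(frame),
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    record(event)  # e.g. append to the patient's medication administration record
    return event
```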

Many possible uses come to mind.

Scenario 1

Just imagine a newbie surgeon getting guidance from this device. I can visualize him taking a peek at the textbook, with the next step of the surgery projected next to the incision site on the sterile drapes covering the patient. Combined with a Siri-like interaction capability, you could very well imagine a scenario like this:

(Let us call the smart tool made by melding these cool technologies, Annika*, and let the rookie surgeon’s appellation be Dr. Greenstick, for our little fantasy’s purposes)

Dr. Greenstick (muttering to self): This looks like the internal iliac artery.
Annika: Dr. Greenstick, I think it is the ureter. I would think twice before ligating it. Why don’t you clear away some of the fascia so that it is more visible?

(Dr. Greenstick teases off some of the fascia in the pelvic fossa)

Annika: I can see it is the ureter. Do you want me to point it out for you?

Dr. Greenstick: Yes, please.

(Annika projects a fluorescent green line, curving along the course of the ureter in the pelvis making it obvious)

– – – – end scene – – – –

Scenario 2

As the nurse adjusts the oxytocin infusion pump for a patient in labor with sluggish uterine contractions, Annika projects the recommended infusion rate right on the surface of the infusion pump console, calculated from an assessment of the contractions and the fetal heart rate.

Scenario 3

The neonatologist examining a baby indicates to Annika, by placing his index fingers on opposite sides of the baby’s head, the level at which he wants the head circumference measured. Annika obliges by displaying the circumference (and a graph showing whether it deviates from normal) right on the baby’s forehead.

Scenario 4

A diabetologist is monitoring the progress of a slow-healing foot ulcer of a patient on his return visit. Annika quietly displays, next to the ulcer, its image from the patient’s previous visit, allowing the diabetologist to compare it with its present state.

I could go on, but I am sure you get the picture.

Oh, OmniTouch, how you have spurred the imagination.

Somewhat incongruously, it reminds me of a couplet from a ghazal (a form of music and poetry popular in the Indian subcontinent). It goes like this:

Agar sharar hain to bhadkein, jo phool hain to khilein
Tarah-tarah ki talab tere rang-e-lab se hai

My rough translation (with apologies to the great poet, Faiz):

If they be embers let them burst into flames, if they be flowers let them blossom
So many be the desires that the color of your lips inspires

Yet another secret desire – IBM Watson and OmniTouch teams, please get together to bring these fantasies to life, for the larger good, eh? And come on now, what’s with the name OmniTouch!? Could you not think up a name that befits such a cool piece of technology?

*Annika – a human female who is ‘assimilated’ by the Borg, rendered into one of their own, enhanced by the technology and knowledge of all the civilizations previously assimilated, and given the designation Seven of Nine. Later, she was rescued by the Voyager team and inducted into the Starship Voyager’s crew: ruthlessly efficient, emotionally distant and yet very sexy. She goes through an agonizing process of rediscovering her humanity, but vestiges of the Borg remain a part of her.


Google Wave — Worth Saving for Health Systems


If you decide to build a new Electronic Medical Record (EMR) system or some other smart tool to make a difference for clinicians, would you build it up from scratch, or would you look around for a ready-made platform to build upon? What if I told you there exists just the technology for building that next-generation tool of yours? What’s more, it is free. Don’t believe me? Just read on.

  • The said technology has at its heart a well-thought-out and clearly documented protocol.
  • It allows plug-and-play applications that can be built with relative ease, because of exposed APIs that are easy to understand. Some of these might be automation tools, like agents, or even user-assistance tools. So you can build a feature-rich system just by assembling third-party gadgets. Suppose you want your coding done at run time as you type or dictate your clinical notes; you might be able to just “add” such a tool to your application.
  • It allows collaboration. This is not your mom’s collaboration (recipes over email), but real-time collaboration, not just on text but on other database operations and actions that every other participant can see and modify. One could even think of hundreds of participants working over a common artifact. I daresay the healthcare reform bill could have been written in 2 weeks if they had used Google Wave. (Not really. Technology can only help this much. Agreeing to agree, agreeing to disagree, disagreeing to agree, waffling, and grandstanding would still take as long.) The collaboration would extend not just to entry of clinical data (which itself could have multiple authors and sources: the clinicians, paramedical personnel, patients and families, medical devices, etc.) but also to metadata and the knowledge artifacts that make such applications truly clinical by changing their behavior based upon current medical knowledge. Imagine a group of cancer specialists and radiologists collaborating to create new rules for breast-cancer screening – rules that are executable, not text admonitions; rules that can be directly executed by the rules engine of your clinical application to advise your next patient whether she needs mammography, based upon current research (a toy sketch of such an executable rule follows this list). Collaborations can be changed easily, with participants included and dropped as needed, across organizational boundaries.
  • It automatically maintains a record of the actions of all participants, so one can tell what data was changed by whom, where and when. This means you can roll back any action if you want; you can even rewind and play forward. This also allows keeping an audit trail of clinical activity and makes provenance possible for the knowledge and metadata that are authored by experts to provide the clinical intelligence for the applications.
  • It has in-built capabilities for user authentication and data privacy.
  • It has a federated service architecture, which allows networks to be linked up flexibly and keeps data safe through redundancy. You may, of course, keep your network all to yourself if that is how you like it.
  • The specification for the technology is open and free, and so is much of the code for the service and the tools. Anyone can contribute towards improving the protocol and the API specification.
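
To make the idea of executable rules concrete, here is the toy sketch promised above: a deliberately simplified, hypothetical screening rule expressed as data plus code rather than prose. It is not GreEd’s or Google Wave’s actual rule format, and the thresholds are placeholders for whatever the authoring group agrees upon.

```python
# A toy illustration of an "executable" screening rule, as opposed to a textual guideline.
# This is NOT GreEd's rule language; the structure and thresholds are placeholders
# that a group of specialists would author and revise collaboratively.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    age: int
    sex: str
    months_since_last_mammogram: Optional[int]  # None if never screened

# The shared "knowledge artifact": plain data that experts can edit together and version.
MAMMOGRAPHY_RULE = {
    "min_age": 50,          # placeholder threshold, to be set by the authoring group
    "max_age": 74,          # placeholder threshold
    "interval_months": 24,  # placeholder screening interval
}

def needs_mammogram(patient: Patient, rule: dict = MAMMOGRAPHY_RULE) -> bool:
    """Execute the rule directly against a patient record."""
    if patient.sex != "female":
        return False
    if not rule["min_age"] <= patient.age <= rule["max_age"]:
        return False
    if patient.months_since_last_mammogram is None:
        return True
    return patient.months_since_last_mammogram >= rule["interval_months"]

# When the authoring group revises MAMMOGRAPHY_RULE, every application that shares the
# artifact starts advising the next patient according to the updated rule.
print(needs_mammogram(Patient(age=56, sex="female", months_since_last_mammogram=30)))  # True
```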

In short, it has much of the infrastructure taken care of so that you need to develop only the interesting stuff. Just a little bit of baking and some icing and you could have killer cakes going around.

I doubt it is the first thing that comes to your mind, but I am talking about Google Wave, of course (https://wave.google.com/).

I became aware of Google Wave when it was rumored to be Google’s next big project, to be released at Google IO 2009. And released it was, with some fanfare. It was received with matching enthusiasm by developers. In the world of technology, new tools are announced every day; most these days seem to be designed to facilitate teenage banter. Google Wave seemed like one more fun channel to help you make doodles while you chat. I kept checking it on and off and saw it progressively improve. However, I never quite saw its value beyond chatting and working on documents with someone else at the same time. Sure, some of the gadgets and robots that other developers created seemed clever, but nothing that would make me log in every day. Clearly, it was no alternative to email, as Google had made it out to be.

I was prodded into taking another look at Google Wave only recently, when the paper by two Googlers, Gaw and Shankar, was released. They propose the use of Google Wave to create Personal Health Records (PHRs). They emphasize the collaboration capabilities of the Wave technology and how it would allow aggregation of a patient’s clinical records from different sources. Spurred by it, we had started researching whether Google Wave is where we should be building authoring tools for Proteus and GreEd. We were really excited about its potential to provide a platform for collaborative development of executable clinical knowledge.

But soon we got the bad news that Google was pulling the plug on the product and on any further development of the technology.

I am not one to rush in to take up causes. But because you belong in a certain field, some causes are given to you, and you can’t just turn a blind eye to them. Google Wave is certainly one cause that seems worth fighting for. If enough thought leaders and developers make an appeal to Google, they may reverse their decision to kill Google Wave. Thus the campaign “Save the Wave”.

Please click on the following image and express your support.

Vote for Saving Google Wave

Some Clarifications:

  • The protocol and the APIs are still undergoing development but have already demonstrated their potential through the numerous applications that third-party developers (individuals and corporations, both large and small) have created.
  • As far as I know, no auto-coding tool currently exists, but it would not be too difficult to build one on top of Spelly, the semantic spell-check robot that Google’s NLP group has developed (a toy sketch of the idea follows).
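
Since no such tool exists yet, here is only a toy illustration of the run-time auto-coding idea, with a hand-made dictionary standing in for a real terminology service. It is not Spelly and not the Wave robot API; the codes are placeholders, not real ICD or SNOMED codes.

```python
# Toy illustration of run-time auto-coding of clinical text.
# The term-to-code map stands in for a real terminology service (SNOMED CT, ICD, etc.);
# the codes below are placeholders, not real codes, and this is not Spelly.

TERM_TO_CODE = {
    "hypertension": "CODE-HTN",
    "type 2 diabetes": "CODE-DM2",
    "asthma": "CODE-ASTHMA",
}

def suggest_codes(note_text: str):
    """Return (term, code) pairs for terms recognized in the note so far."""
    text = note_text.lower()
    return [(term, code) for term, code in TERM_TO_CODE.items() if term in text]

# Called on every keystroke or dictated phrase as the clinician documents:
print(suggest_codes("Known type 2 diabetes, now with poorly controlled hypertension"))
# -> [('hypertension', 'CODE-HTN'), ('type 2 diabetes', 'CODE-DM2')]
```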

Proteus and GreEd to Go Live in Henry Ford Health System


The Henry Ford Health System is one of the largest healthcare providers in the USA. It has also been at the forefront of many cutting-edge innovations in healthcare. One such Henry Ford effort has been in progress, silently and away from the limelight, for the last two years. However, it will soon lead to the deployment of Proteus, the unique clinical decision support technology, and GreEd, the clinical/business rules management system, to implement clinical guidelines that allow physicians to save time and yet make better decisions about their patients.

This effort is called the Semantic Data Capture Initiative. I have just added a new page to this blog to give you some idea of what this project is about.

The Semantic Data Capture Initiative page provides you with an overview of the project. I will keep posting updates from this project here. Stay tuned.


Triumph of Open Source?: Lessons from VistA


An interesting article in Washington Monthly, which highlights how VistA achieved its popularity, was referred to on the AMIA Clinical Information Systems LISTSERV by Scot Silverstein.

To my mind, the key elements of VistA’s success are:

  • Participation of clinicians at every stage of its development, including writing pieces of code and modules themselves
  • Continuous, ongoing evolution and innovation
  • High degree of adaptability to different needs, not in small measure due to its being open source
  • Starting small and growing outwards, organically, rather than with a grand plan in a top-down approach

The bottom line is that clinical information systems belong to clinicians. The sooner information technology finds a way to hand them over to clinicians, the better it will be for them and for healthcare.


The Power of the Checklist


We know that the most successful medical departments exploit the power of checklists to achieve their results. There are some interesting thoughts about this in a blog post here: Checklists aren’t just for pilots. Do watch the video clips and read Atul Gawande’s New Yorker article referred to in the blog. Gawande says:

If a new drug were as effective at saving lives as Peter Pronovost’s checklist, there would be a nationwide marketing campaign urging doctors to use it.

The question we need to ask is how best to integrate checklists into clinical information systems. The answer, to me, is obvious: by having a process-oriented approach. I think current clinical systems are based on the table paradigm, which does not lend itself to ordering the sequence of activities that need to be performed (the sketch below illustrates the contrast). Therefore, it stands to reason that the approach we are pursuing (Proteus), which has a process orientation at its core, is suited not just to clinical decision support but also to guiding all activities in the clinical world.
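
To illustrate the contrast in a stripped-down, hypothetical form (this is not Proteus’s actual model): a table-oriented record stores observations as unordered rows, whereas a process-oriented checklist is an ordered sequence of steps whose state can drive what the system prompts for next. The checklist items are loosely paraphrased from the central-line checklist discussed in Gawande’s article.

```python
# A stripped-down illustration of the contrast; not Proteus's actual data model.

# Table paradigm: a bag of facts about the patient, with no inherent ordering of work.
observations = [
    {"patient": "P1", "item": "central_line_inserted", "value": True},
    {"patient": "P1", "item": "hands_washed", "value": None},  # unknown whether it happened
]

# Process paradigm: an ordered checklist whose state drives what happens next
# (steps loosely paraphrased from the central-line checklist in Gawande's article).
CENTRAL_LINE_CHECKLIST = [
    "wash_hands",
    "clean_skin_with_antiseptic",
    "drape_patient_fully",
    "wear_sterile_gown_and_gloves",
    "apply_sterile_dressing",
]

def next_step(completed, checklist=CENTRAL_LINE_CHECKLIST):
    """Return the first step not yet done; the system can prompt for exactly this."""
    for step in checklist:
        if step not in completed:
            return step
    return None  # checklist complete

print(next_step({"wash_hands"}))  # -> "clean_skin_with_antiseptic"
```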
