Who let Ebola Out? The Computer did it!

Ebola made a sensational entry into the cognitive radar of Americans when an infected patient was allowed to go home even after having been to a hospital. If the hospital is to be believed, it is the EHR’s fault. The Politico report “Did a computer raise Ebola spread risk?” is one of several that finger the EHR. Was it really the EHR’s fault, or is it just a convenient scapegoat?

The good questions to ask at this stage are:

  1. Was it really the EHR’s fault?
  2. Could the result have been different with a different EHR?
  3. Could the EHR have been configured differently to prevent this blunder?
  4. Should the EHRs be updated each time an epidemic threat is on the horizon?
  5. Could the result have been different with paper-based charting?
  6. Should travel history be part of the routine history questions for all patients, and then be added to the “always visible” sections (e.g., Problem List) of the chart if they have traveled from a region with certain infectious diseases?

The truth is that the EHRs, as they exist today, reduce the clinician’s vision to a telescopic one. Clinicians may be scrutinizing a particular part of the data with great intensity while ignoring a much bigger elephant sitting right next to them. There is an urgent need to create an alternative way for clinicians to size up the data as a whole, or at least to automatically flag any piece of it that is suspicious or an outlier.

Imagining Healthcare–Some Secret Desires

I challenge you to look at the financial world and not come away depressed. Or take a look at the happenings in the political arena and tell me if it doesn’t leave a bitter taste in your mouth. And then I invite you to turn your attention to what is happening in the technological and scientific world. I bet it will turn even the most despondent among us a little bit optimistic.

Then there are those times when you encounter something which just puts a smile on your face. Witnessing IBM Watson’s virtuoso performance was one such moment. Today, there is this news on TechCrunch of a new way of interacting with computers that a group of researchers from Microsoft and Carnegie Mellon University have come up with. Take a look:

Here is how Computing Now describes it:

A wearable projection system that Microsoft Research and Carnegie Mellon University (CMU) developed lets users create graphical-input interfaces on any surface. OmniTouch has a pico-projector that can display images of keyboards, keypads, or other traditional input devices onto any surface, even a human hand. It uses a depth-sensing camera—like that used in Microsoft’s Kinect motion-sensing input device for the Xbox 360 video game console—to track a user’s finger movements on any surface. The system is mounted on a user’s shoulder, but the researchers say it eventually could be reduced to the size of a deck of cards. Chris Harrison, Microsoft Research PhD Fellow and CMU doctoral student, said OmniTouch lets users have a wide range of input because it is optically based. OmniTouch does not require calibration; according to the researchers, a user can begin utilizing the device without having to calibrate it. Work on the project will be presented 19 October at the Association for Computing Machinery Symposium on User Interface Software and Technology in Santa Barbara, California. (PhysOrg.com)(Chris Harrison website)(Carnegie Mellon University)


Combine this with Watson-like intelligence (or even something like Siri) and you have a powerful system. Shrink it down to a head-mountable size, small enough to fit on the front of a baseball cap, improve the computer-vision algorithms it uses, and you have a technology that is, in terms of features, already ahead of HAL 9000 or the on-board computer of the Enterprise of Star Trek fame. I believe this could be made available in roughly this configuration in 2 to 3 years.

Now, why do I believe this has the potential of revolutionizing healthcare? It addresses several prickly challenges peculiar to the doctors’ needs.

  • Clinicians almost always have to use both their hands (and sometimes their minds) for the procedure they are performing (measuring BP, performing a clinical examination, a surgical operation, etc.). Leaving the patient to access a computer (or even a tablet) is not convenient. Using a tablet-like device brings up the issue of sterilization and the ability of such devices to tolerate the sterilization procedures.
  • Tablet computers show their outputs only in a limited area – a tiny screen bounded by its bezel. A representation of the real world has to be recreated within these confines (something like augmented reality). With the OmniTouch approach, any surface, be it the wrist of the surgeon, the abdomen of the patient, or her pelvic cavity, does not just become the input device; it is also transformed into a screen. Never before has computing been this close to the real world.
  • Most computers require data to be entered explicitly. However, is it possible that we go about our business and computers do the data capturing without intruding? I think we are on the cusp of a technology convergence where this would not seem so far-fetched. OmniTouch (or more specifically Kinect) technology will have to be combined with some nifty activity-recognition capabilities to achieve just that. Imagine: a nurse administers an antibiotic injection while the head-mounted device recognizes the drug, the patient, the nurse, and the act of administering the drug, using a combination of bar-code/QR-code recognition, facial recognition, and activity recognition. All data recorded, no keys pressed, no notes dictated!

Many possible uses come to mind.

Scenario 1

Just imagine a newbie surgeon getting guidance from this device. I can visualize him taking a peek at the textbook page for the next step in the surgery, projected next to the incision site on the sterile drapes covering the patient. Combined with a Siri-like interaction capability, you could very well imagine a scenario like this:

(Let us call the smart tool made by melding these cool technologies, Annika*, and let the rookie surgeon’s appellation be Dr. Greenstick, for our little fantasy’s purposes)

Dr. Greenstick (muttering to self): This looks like the internal iliac artery.
Annika: Dr. Greenstick, I think it is the ureter. I would think twice before ligating it. Why don’t you clear away some of the fascia so that it is more visible?

(Dr. Greenstick teases off some of the fascia in the pelvic fossa)

Annika: I can see it is the ureter. Do you want me to point it out for you?

Dr. Greenstick: Yes, please.

(Annika projects a fluorescent green line, curving along the course of the ureter in the pelvis, making it obvious)

– – – – end scene – – – –

Scenario 2

As the nurse adjusts the oxytocin infusion pump for a patient in labor with tardy uterine contractions, Annika projects the recommended infusion rate, calculated from an assessment of the contractions and the fetal heart rate, right on the surface of the infusion pump console.

Scenario 3

The neonatologist examining a baby indicates to Annika, by placing both his index fingers on the opposite sides of the baby’s head, the level he wants the head-circumference to be measured. Annika obliges by displaying the circumference (and a graph to show if the circumference deviates from the normal) right on the forehead of the baby.

Scenario 4

A diabetologist is monitoring the progress of a slow-healing foot ulcer of a patient on his return visit. Annika quietly displays, next to the ulcer, its image from the patient’s previous visit, to allow the diabetologist to compare it with its present state.

I could go on, but I am sure you get the picture.

Oh, OmniTouch, how you have spurred the imagination.

Somewhat incongruously, it reminds me of a couplet from a ghazal (a form of music and poetry popular in the Indian subcontinent). It goes like this:

Agar sharar hain to bhadkein, jo phool hain to khilein
Tarah-tarah ki talab tere rang-e-lab se hai

My rough translation (with apologies to the great poet, Faiz):

If they be embers let them burst into flames, if they be flowers let them blossom
So many be the desires that the color of your lips inspires

Yet another secret desire – IBM Watson and OmniTouch teams, please get together to bring these fantasies to life, for the larger good, eh? And come on now, what’s with the name OmniTouch!? Can you not think up a name that befits such a cool piece of technology?

*Annika – a human female who was ‘assimilated’ by the Borg, rendered into one of their own, enhanced by the technology and knowledge of all the civilizations previously assimilated, and given the designation Seven of Nine. She was later rescued by the Voyager team and inducted into the Starship Voyager’s crew – ruthlessly efficient, emotionally distant, and yet very sexy. She goes through an agonizing process of rediscovering her humanity, but vestiges of the Borg still remain a part of her.


Dazzling, Dr. Watson!

So, IBM’s Watson won Jeopardy hands down, playing against Jennings and Rutter.

I am sure many NLP researchers will say there is nothing new in what IBM’s Watson project achieved – all this has been done before. It has also not received the kind of attention it deserves in the tech and mainstream media. The hoopla surrounding Kasparov’s loss to the chess-playing machine Deep Blue was much bigger.

An incremental advancement in the eyes of many, to me it is a momentous event. Watson is as much a research feat as an engineering marvel. It brought together many of the advancements in software and hardware to notch this singular, elegant success. I get a sense that IBM’s Watson research team understands the import of this achievement but is downplaying its implications a little bit. By all indications they did treat it as their own little Manhattan project.

Let me explain why Watson is a daunting technology for me – an informatics researcher’s view, if you will:

Search will never be the same again

Google’s search delivers dumb web pages, but Watson, without being connected to the Web, delivers answers. This is what the many search engines that keep cropping up have been promising but failing to deliver.

It learns

Watson learns the real connections between facts, and that too from the undisciplined ways we humans have been documenting them. If in a matter of two years it could beat the Jeopardy champs, imagine what it will be able to do down the line. It can certainly learn from its own mistakes (and successes) as well as from yours and mine. Sure, Google also learns, but it learns precious little from the same resources.

We do not need Structured Data anymore

One major challenge, in particular for healthcare, has been getting the data into a form that lets us use the existing powerful database operations on it – finding the right information, cross-referencing different items, and so on. It has always needed humans disciplined enough to enter structured data to work around this failing of computers – their inability to work with unstructured data. Humans have to adapt to computers if the computers are to be fully exploited. This is the reason most EMRs are such inadequate tools for the clinicians’ primary tasks; EMRs are willing to accept unstructured data but have little capability to do much with it. With Watson-like technology under the hood, you, as a clinician, can keep merrily describing the patient as you would to a resident or a colleague; the data will be ‘understood’ and stored properly in its memory, ready for all sorts of interesting questions about the patient at a later point.
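To make the contrast concrete, here is a toy sketch of turning a fragment of free-text narrative into structured data. This is not how Watson works – it relies on statistical NLP, not the brittle hand-written patterns below – but it illustrates the narrative-to-structure step that clinicians are currently asked to perform by hand:

```python
import re

def extract_facts(note: str) -> dict:
    """Toy extraction of structured facts from a free-text clinical note.

    Real systems use statistical NLP; these brittle regular expressions
    only illustrate the idea of narrative -> structured record.
    """
    facts = {}
    m = re.search(r"(\d+)[- ]year[- ]old (male|female)", note, re.I)
    if m:
        facts["age"] = int(m.group(1))
        facts["sex"] = m.group(2).lower()
    m = re.search(r"temperature (?:of )?([\d.]+)", note, re.I)
    if m:
        facts["temperature_f"] = float(m.group(1))
    if re.search(r"right iliac fossa tenderness", note, re.I):
        facts["rif_tenderness"] = True
    return facts

note = "A 23-year-old male with temperature of 101.2 and right iliac fossa tenderness."
print(extract_facts(note))
# {'age': 23, 'sex': 'male', 'temperature_f': 101.2, 'rif_tenderness': True}
```

The point of the sketch: once the facts are in a structured form, the familiar database operations (querying, cross-referencing) become possible, without the clinician ever filling in a form.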

In fact, if the technology is adapted to include action recognition from other cues (videos, bar codes, sensors etc.) even documentation by narration will become increasingly redundant.

We do not need to author Rules

Many business and clinical solutions get their smarts from rules engines, but the rules that provide the actual logic in them are authored by some human expert. Watson-like technology will make that redundant. If you tell it that the patient is a male presenting with an acute abdomen, has right iliac fossa tenderness and fever, it will, with a moderate level of confidence, tell you that it is acute appendicitis, and that you had better get blood counts and sonography to clinch the diagnosis. The thing is, no one would have fed the rules for the differential diagnosis of the acute abdomen anywhere into the system – it would have learned them from reading the surgery textbooks it was provided beforehand. For a while, I think, there will be back and forth between the doctor and Watson toward one diagnosis that their combined understanding can settle upon, much like discussing a case with a colleague. But Watson would learn and remember much more from those interactions than the doctor would, progressively diminishing the need for the latter.
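For contrast, this is roughly what the hand-authored alternative looks like: a toy rules engine, invented here for illustration (not any real product), whose clinical logic must all be written in by an expert up front – exactly the authoring burden a textbook-trained, Watson-like system would remove:

```python
# A miniature hand-authored rule base: the kind of expert-written logic
# that a system which learns from textbooks would make redundant.
RULES = [
    {
        "if": {"acute_abdomen", "rif_tenderness", "fever"},
        "then": "suspected acute appendicitis",
        "workup": ["blood counts", "abdominal sonography"],
    },
    {
        "if": {"acute_abdomen", "left_iliac_fossa_tenderness", "fever"},
        "then": "suspected diverticulitis",
        "workup": ["blood counts", "CT abdomen"],
    },
]

def advise(findings: set) -> list:
    """Return every rule whose conditions are all present in the findings."""
    return [r for r in RULES if r["if"] <= findings]

for rule in advise({"acute_abdomen", "rif_tenderness", "fever", "male"}):
    print(rule["then"], "->", ", ".join(rule["workup"]))
# suspected acute appendicitis -> blood counts, abdominal sonography
```

Every diagnosis this engine can ever suggest had to be typed in by a human expert; the engine itself never gets smarter.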

Information Retrieval Researchers can go home

The information retrieval researchers can also start packing up, or quickly find some other problems to solve. For nearly a decade, medical informatics researchers have been trying to develop ways of providing unsolicited information to clinicians that is highly pertinent to the patient on hand and his or her current problem – something like context-sensitive help. There have been several ideas, but all of them focused on tagging the data and resources themselves in a particular way to make this possible. These approaches will become redundant, since Watson-like tech will not just identify the context much better but also dip into its learning to deliver the right resources. And all of this without any new XML tags having to be created.

Knowledge Discovery will take a leap forward

Since it discovers patterns, parts of Watson’s technology suite can help pull out nuggets of unsuspected connections between facts. This will allow identifying things like new causal factors for diseases and unsuspected benefits and side effects of drugs, diets, and interventions.

High level professionals should start feeling the heat

First they came for the typists, and I didn’t say anything because I was not a typist, then they came for the clerks, and I didn’t say anything because I was not a clerk ….

In fact, I even caught myself smiling smugly because I was a highly educated medical specialist. I knew the computers posed no risk to me. Suddenly, I am not so sure. Tally all that I have written above and it will be obvious to you that we are about to cede much of the intellectual ground to the computers as well. What will remain will be the contact-based part of healthcare. Well, at least until the robots achieve a little more dexterity and are able to feign a better smile when they say, “And how are you today, Mrs. Patterson?” in a calm, reassuring, and friendly voice (think HAL 9000).

Get a glimpse of the DeepQA project, which resulted in Watson, from Dave Ferrucci, the PI of the project.

I look forward to the day when we will be able to deploy Watson as an inference tool for Proteus.

Now, if only they could bring Watson’s size down to fit into my smartphone, and teach it to understand the Indian accent.


Google Wave — Worth Saving for Health Systems

If you decide to build a new Electronic Medical Record (EMR) system or some other smart tool to make a difference for clinicians, would you build it up from scratch, or would you look around for a ready-made platform to build upon? What if I told you there exists just the technology for building that next-generation tool of yours? What’s more, it is free. Don’t believe me? Just read on.

  • The said technology has at its heart a well-thought out and clearly documented protocol
  • It allows plug-and-play applications that can be built with relative ease, because of exposed APIs that are easy to understand. Some of these might be automation tools, like agents, or even user-assistance tools. So you can build a feature-rich system just by assembling third-party gadgets. Suppose you want your coding done at run time as you type or dictate your clinical notes; you might be able to just “add” such a tool to your application.
  • It allows collaboration. This is not your mom’s collaboration (recipes over email), but real-time collaboration, not just on text but on other database operations and actions, which every other participant can see and modify. One could even think of hundreds of participants working on a common artifact. I daresay the healthcare reform bill could have been written in 2 weeks if they had used Google Wave. (Not really. Technology can only help this much. Agreeing to agree, agreeing to disagree, disagreeing to agree, waffling, and grandstanding would still take as long.) The collaboration would extend not just to the entry of clinical data (which itself could have multiple authors and sources: the clinicians, paramedical personnel, patients and families, medical devices, etc.) but also to metadata and the knowledge artifacts that make such applications truly clinical by changing their behavior based upon current medical knowledge. Imagine a group of cancer specialists and radiologists collaborating to create new rules for screening for breast cancer – rules that are executable, not textual admonitions. Rules that can be directly executed by the rules engine of your clinical application, to advise your next patient on whether she needs mammography or not based upon current research. The collaborations can be changed easily, with participants included and dropped as needed, across organizational boundaries.
  • It automatically maintains a record of the actions of all participants, so one can tell what data was changed by whom, where, and when. This means you can roll back any actions if you want; you can even rewind and play forward. This also allows keeping an audit trail of clinical activity, and makes provenance possible for the knowledge and metadata that experts author to provide the clinical intelligence for the applications.
  • It has in-built capabilities for user authentication and data privacy.
  • It has a federated service architecture, which allows for flexibly linking up networks and for data safety through redundancy. You may, of course, keep your network all to yourself, if that is how you like it.
  • The specification for the technology is open and free, and so is much of the code for the service and the tools. Anyone can contribute towards improving the protocol and the API specification.
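The audit-trail and rewind capabilities in the list above come down to an append-only event log that is replayed to reconstruct state. A minimal sketch of that idea, with field names of my own invention (not Wave’s actual data model):

```python
from datetime import datetime, timezone

class AuditedRecord:
    """Append-only event log: every change is attributed to a participant,
    and any past state can be rebuilt by replaying the log (rewind / play
    forward). Rollback is just replaying fewer events."""

    def __init__(self):
        self.log = []  # each entry: who, when, field, value

    def set(self, who, field, value):
        self.log.append({
            "who": who,
            "when": datetime.now(timezone.utc).isoformat(),
            "field": field,
            "value": value,
        })

    def state(self, upto=None):
        """Replay the log (optionally only the first `upto` events)."""
        snapshot = {}
        for event in self.log[:upto]:
            snapshot[event["field"]] = event["value"]
        return snapshot

rec = AuditedRecord()
rec.set("nurse_kelly", "bp_systolic", 140)
rec.set("dr_greenstick", "bp_systolic", 138)
print(rec.state())        # {'bp_systolic': 138}
print(rec.state(upto=1))  # {'bp_systolic': 140} -- the record, rewound one step
```

Because nothing is ever overwritten, the same log serves as the audit trail, the provenance record, and the undo history all at once.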

In short, it has much of the infrastructure taken care of so that you need to develop only the interesting stuff. Just a little bit of baking and some icing and you could have killer cakes going around.

I doubt if it is the first thing that comes to your mind, but I am talking of Google Wave of course. (https://wave.google.com/).

I became aware of Google Wave when it was rumored to be Google’s next big project, to be released at Google I/O 2009. And released it was, with some fanfare. It was received with matching enthusiasm by developers. In the world of technology, new tools are announced every day. Most, these days, seem to be designed to facilitate teenage banter. Google Wave seemed like one more fun channel to help you make doodles while you chat. I kept checking it on and off and saw it progressively improve. However, I never quite saw its value beyond chatting and working on documents with someone else at the same time. Sure, some of the gadgets and robots that other developers created seemed clever, but nothing that would make me log in every day. Clearly, it was no alternative to email, as it was made out to be by Google.

I was prodded into taking another look at Google Wave only recently, when the paper by two Googlers, Gaw and Shankar, was released. They propose the use of Google Wave to create Personal Health Records (PHRs). They emphasize the collaboration capabilities of the Wave technology and how it would allow aggregation of clinical records for a patient from different sources. Spurred by it, we had started researching whether Google Wave is where we should be building authoring tools for Proteus and GreEd. We were really excited about its potential to provide a platform for collaborative development of executable clinical knowledge.

But soon we got the bad news that Google is pulling the plug on the product and any further development of the technology.

I am not one to rush in to take up causes. But because you belong to a certain field, some causes are given to you, and you can’t just turn a blind eye to them. Google Wave is certainly one cause that seems worth fighting for. If enough thought leaders and developers make an appeal to Google, they may reverse their decision to kill Google Wave. Thus the campaign “Save the Wave”.

Please click on the following image and express your support.

Vote for Saving Google Wave

Some Clarifications:

  • The protocol and the APIs are still undergoing development, but they have already demonstrated their potential through the numerous applications that third-party developers (individuals and corporations, both large and small) have created.
  • As far as I know, no auto-coding tool currently exists, but it will not be too difficult to build one on top of Spelly, the semantic spell-check robot that Google’s NLP group has developed.

Healthcare Informatics Services via Cloud – IEEE Workshop

IEEE’s annual International Conference on Web Services and Cloud is featuring a special health informatics workshop this year.

Find more about the workshop and its call for papers here.

If you are interested in the use of Web Services or Cloud Computing to make a difference in healthcare, this will be an event to keep an eye on.

This is a great opportunity to present your ideas and experiences or demo some of the work you have already done.


The Knowledge Last Mile Problem

Colonel Jack O’Neill, the intrepid leader of the Stargate SG-1 taskforce of the eponymous popular sci-fi TV show, in his time took some remarkably astute decisions, oftentimes under great stress, saving our galaxy from being overrun by nasty alien races more than a few times. Now, O’Neill is a great generalist and thinks fast on his feet, but he never was a paragon of erudition, nor does he have any such pretensions. Indeed, he harbors a robust disdain for anything resembling scholarly pursuit. However, as we all know, in intergalactic matters our gut-level decisions are not always enough. From time to time we have to invoke a higher body of knowledge. When the Replicators were on the verge of exterminating humans, and no weapon in the possession of humans and their alien allies seemed to be having any lasting impact on the ferocious onslaught, it was clear that a new kind of weapon was desperately needed if the Replicators were to be thwarted. Fortunately for the earthlings, Colonel O’Neill had very recently downloaded into his brain the entire knowledge repository of an incredibly advanced race, the Ancients. Armed with this knowledge, he could quickly devise a weapon capable of annihilating the Replicators. Needless to say, once again the Milky Way galaxy was saved.

Note that O’Neill did not learn from the Ancients. Instead, his mind just imbibed the knowledge. He ‘acquired’ countless skills, including those needed to deal with the Replicator emergency, without having to go through the arduous process of learning. I shudder to think what would have happened if O’Neill had had to read countless PDFs and web pages before he was suitably equipped.

In short, the Ancients had developed the perfect technology to bridge the last-mile knowledge gap: the gap between existing knowledge and its translation into practice, the gap that every human institution that works with knowledge has struggled with. No matter how many conclusive findings clinical research throws up, if clinicians do not integrate them into their care delivery, is it worth anything?

Since the beginning there has been the gap: the gap between research findings and how they are applied in practice.


The well-intentioned knowledge-producing people and institutions reduced it by making it easier for people to access the results of research. This was done using journals, monographs, textbooks, and such, and later by making much of it available for little or no cost via the internet.


But soon the realization dawned upon the knowledge producers and providers that just providing easier access to research results was never going to be enough. The outputs of research efforts need to be sifted through to allow only the reliable information to impact practice. What came to be known as Evidence-Based Medicine was driven by such meta-analysis. Besides vetting research for quality, the research results have to be articulated in actionable terms, which, in the field of medicine, came to be known as clinical guidelines. The users of the knowledge, such as clinicians, do not usually have the time or the inclination to undertake such analyses and translations. So it seemed to make sense that pre-processing the research results into meta-research and guidelines before making them available would lead to improved application of the research in practice.

These efforts did narrow the gap, but not remarkably. The original enthusiasm for guidelines and meta-research seems to have lost steam.


What if a clinician’s mind had access to a knowledge repository just like the Ancients’, and had the ability to simply suck in, on demand, the most appropriate pieces of knowledge? That would surely eliminate the gap.

A clinician with access to such a knowledge repository would be able to manage almost any condition using the best evidence and recommended practices. For such a scenario to unfold will take a very long time, because we do not yet possess that level of ability to tinker with minds. However, we do possess reasonable knowledge of computers; after all, we created them. Can we then think of a way in which all human wisdom, expressed as actions, can be made available in a shared knowledge repository? The clinician’s computer could then access this knowledge on demand to get precise advice about what needs to be done next for the patient on hand. The wisdom in such a repository could be deposited, over the course of time, by clinicians themselves, to be shared with other clinicians. The clinicians would enhance the knowledge to work for themselves, but would also be able to share it and thereby make a significant difference for others.

What will be the form of the knowledge contained in such a repository? Remember, the knowledge should be able to modify the behavior of a computer system so that it can advise clinicians about individual patients. Software has long been the standard way of altering the behavior of computers. However, software is created by programmers with special skills using sophisticated development tools. If the knowledge repository is to acquire knowledge that evolves with the understanding of experts and clinicians, it will need to be modifiable by people with clinical expertise who normally do not possess software development skills.
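One widely used answer is to make the knowledge declarative: the clinical logic lives in a data structure that a generic engine interprets, so changing the advice means editing the structure (through a friendly authoring tool), not the program. A minimal sketch of the idea follows; the field names are invented for illustration and are not Proteus’s actual knowledge format:

```python
# Declarative knowledge: the logic is data, editable without programming,
# and a generic engine interprets it. Field names here are invented for
# illustration and are NOT the actual Proteus knowledge format.
screening_rule = {
    "name": "mammography screening",
    "conditions": [("age", ">=", 40), ("sex", "==", "female")],
    "advice": "recommend screening mammography",
}

OPS = {
    ">=": lambda a, b: a >= b,
    "==": lambda a, b: a == b,
}

def run(rule, patient):
    """Generic engine: fire the rule's advice if all conditions hold."""
    if all(OPS[op](patient[field], value)
           for field, op, value in rule["conditions"]):
        return rule["advice"]
    return None

print(run(screening_rule, {"age": 52, "sex": "female"}))
# recommend screening mammography
```

Updating the guidance means editing `screening_rule` (say, changing 40 to 50 when the evidence changes), which a clinician can do through an authoring interface; the engine itself never needs a programmer.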


This is where a knowledge form like Proteus comes in. Proteus knowledge is executable and yet modifiable by clinicians. We are in the process of building early mechanisms to allow clinicians to deposit such knowledge in a publicly available repository, which will allow clinicians to integrate, on demand, any piece of knowledge they need to modify the behavior of the Proteus Engine – a software module that interprets the knowledge and provides advice to clinicians about individual patients, based on the data from those patients.


SDCI Presents at Henry Ford Quality Expo

An annual fixture for the Henry Ford Health System is its Quality Expo. This is an opportunity for the hospitals, clinics, and many other departments of Henry Ford to present efforts aimed at improving the quality of our services. This year our Semantic Data Capture Initiative project was also on display. Team members Teresa Hantz and Patti Williams of the CSRI Department created the Flash video that was displayed next to the poster. We were all impressed by how quickly Teresa mastered the applications: Protean and the video-capturing tool. It is also noteworthy that she managed to highlight the essence of the tools in less than three minutes of video.

This introductory video provides you with a quick overview of knowledge editing in the Proteus environment, as well as how easy it is to edit a rule in GreEd.

You can check out the video here: quality_expo2009.swf. We suggest running the video in your browser’s full-screen mode (press F11).


Proteus Open Source Now!

This is to announce the availability of the source code for tools related to the clinical decision support guideline model Proteus, under an open-source license (EPL). The open-source development will proceed under the new Proteus Intelligent Processes (PIP) project.

With this announcement, we are also opening up the project for general participation. The code and related information can be found at http://kenai.com/projects/pip/.  The home for Proteus will remain at http://proteme.org. Introductory information about the rule authoring system GreEd is available at http://proteme.org/blog/greed/.

This also coincides with the release of version 2.7 (beta), which has several new features to make knowledge authoring easier and more exciting. Take the new application for a spin by downloading it from http://www.proteme.org/download3.html.

What’s New

I list some of the new features in Version 2.7 below:

Protean (Clinical Workflow Authoring Tool)

  • Sharing executable knowledge
  • Unlimited undo and redo
  • Promotion and demotion
  • Move an item from one location to another
  • Search your library of components

GreEd (Rule Authoring Tool)

  • Undo and Redo
  • Default Inference
  • Semantic Guidance and constraints
  • New operators for your expressions, like [N of M] and [Between]
  • Date Fields and Operations

Read more about the new features here: http://kenai.com/projects/pip/pages/WhatIsNew.

This is a major milestone for Proteus, made possible by contributions from many wonderful people. Much of the development for this version was done in the Semantic Data Capture Initiative project of the Henry Ford Health System, my employer. Besides Henry Ford, the Lister Hill Center of the National Library of Medicine played a critical role at the nascent stage of Proteus. Several ideas related to metadata usage and rule authoring were developed at the City of Hope National Medical Center.

We will be scheduling a web seminar to provide a quick introduction to Proteus, GreEd and the PIP project and demonstrate the tools. Please let me know if you are interested in participating.

I will be at the upcoming AMIA Annual Symposium in San Francisco and will be happy to meet you if you are planning to attend.

We welcome your participation and feedback.

Feel free to contact me.


Get to know GreEd Better

In one of my previous posts, I promised that I would share more information about GreEd. I did one better; I posted several pages about GreEd. You can find them here: http://proteme.org/blog/greed/

The same pages can also be accessed from the top menu of this blog.

Stay tuned; we will be adding some Flash demos and tutorials for GreEd in the near future. I will also keep you informed about the development of GreEd.

P.S. Do not worry about mispronouncing GreEd; it is pronounced the same as the good old human foible – greed. Either way, we wouldn’t be too offended.


Proteus and GreEd to Go Live in Henry Ford Health System

The Henry Ford Health System is one of the largest healthcare providers in the USA. It has also been at the forefront of many cutting-edge innovations in healthcare. One such Henry Ford effort has been in progress, silently and away from the limelight, for the last two years. Soon, however, it will lead to the deployment of Proteus, the unique clinical decision support technology, and GreEd, the clinical/business rules management system, to implement clinical guidelines that allow physicians to save time and yet make better decisions about their patients.

This effort is called the Semantic Data Capture Initiative project. I have just added a new page on this blog to give you some idea of what this project is about.

The Semantic Data Capture Initiative page provides you with an overview of the project. I will keep posting updates from this project here. Stay tuned.