The ACM TechNews email pointed me to a new entry in the New York Times Bits blog on the subject of brain-computer interfaces.
Human-computer interface (HCI) technologies are the means by which we tell computers what we want them to do and by which they respond to our commands, requests, and, in some cases, plaintive begging.
The computer-to-human direction is working better so far. Computers generally do fairly well at communicating with healthcare providers, including but not limited to physicians and nurses. Providers are mostly asking computers for information, and computers can easily display text and graphical information.
There are numerous caveats around that generalization, including but not limited to screen clutter, alert fatigue, and the proliferation of what Edward Tufte calls "chartjunk". But overall, humans are pretty forgiving when it comes to processing audiovisual information, and HCI has unarguably made order-of-magnitude improvements in the delivery of information over the past couple of decades.
The human-to-computer direction is in worse shape. We mostly still use keyboards and mouse-like pointing devices on the desktop, and crude touch-screen interfaces on mobile devices. All of these HCI technologies were introduced in the late 1960s and early '70s, before many members of the current healthcare workforce were even born. For decades we have been promised voice input, gesture recognition via haptic interfaces, and handwriting recognition, yet those promises remain largely unfulfilled.
There are some spectacular successes in human-to-computer interfaces, notably in areas such as robotic surgery and laparoscopic interventions. Those successes were costly to develop and remain costly to purchase and use, all of which translates into near-prohibitive cost to third-party payers and unquestionably prohibitive cost to "retail" consumers.
Skepticism toward optimistic stories like the one in the Bits blog is probably still justified. That said, I feel the time is ripe for a sea change in HCI, and the biggest arena for innovation is the neglected human-to-computer direction.
Sometime soon I hope to survey the proliferation of low-cost mobile health technologies, now in commercial release in areas as diverse as ultrasound, laboratory testing, auscultation, and vital-sign capture. These are extending the fairly mature market for sports-related monitoring devices and synergistically leveraging low-cost, widely adopted mobile computing platforms such as Apple iOS and Android devices.
Microsoft is playing catch-up in the mobile marketplace, with some interesting new technologies like the Surface tablet. Reviewers are saying it's not ready for prime time because of its weight and relatively short battery life, but given Microsoft's vast ongoing investment in health IT expertise and its well-established VAR network, it is too soon to write off the aging but still 800-pound gorilla.
A third source of synergistic momentum is the vast governmental push to promote health IT adoption. While it is true that free money is often misspent, subsidized health IT does not constitute free money, because adoption carries so many related costs, monetary and otherwise. What subsidies can do is push potential adopters over the edge, from spectators to players in the high-tech medical world.
Bottom line: remain skeptical, but be alert for a paradigm shift in health IT HCI.