Just about the time I got back from my vacation, HISTalk posted a very interesting interview with Novo Innovations CEO Robert Connely. In it, they discuss in some depth the way Novo's EMR synchronization solutions work. Novo has had what appear to be some remarkable successes integrating hospitals and labs with community clinics - not just one-offs, but across what sound like some mid-sized health systems. Check out the Company News items on the right side of their Website, and the one-pager about their implementation at Atlanta's Northside Hospital [pdf], a 455-bed community hospital with 1,450+ physicians on staff.
What caught my eye is that they are using an agent-oriented distributed architecture. This is one of the architectures that I believe will dominate the future of healthcare informatics, as I predicted in my 2001 scenario planning paper on application architectures. But - such architectures present challenges in terms of performance and security, and can be awfully hard to debug when things go bad. Still, Novo's example is worth studying if you are responsible for designing large-scale systems, particularly in the healthcare arena.
What Are Agent Architectures?
Here's a definition from a 2000 paper by Nicholas Jennings and Michael Wooldridge:
At present, there is a great deal of ongoing debate about exactly what constitutes an agent, yet there is nothing approaching a universal consensus. However, an increasing number of researchers find the following characterisation useful:
an agent is an encapsulated computer system that is situated in some environment, and that is capable of flexible, autonomous action in that environment in order to meet its design objectives
There are a number of points about this definition that require further explanation. Agents are: (i) clearly identifiable problem solving entities with well-defined boundaries and interfaces; (ii) situated (embedded) in a particular environment—they receive inputs related to the state of that environment through their sensors and they act on the environment through their effectors; (iii) designed to fulfil a specific role—they have particular objectives to achieve, that can either be explicitly or implicitly represented within the agents; (iv) autonomous—they have control both over their internal state and over their own behaviour; (v) capable of exhibiting flexible (context-dependent) problem solving behaviour—they need to be reactive (able to respond in a timely fashion to changes that occur in their environment in order to satisfy their design objectives) and proactive (able to opportunistically adopt new goals and take the initiative in order to satisfy their design objectives).
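To make that definition concrete, here is a minimal sketch in Java using the open-source JADE agent framework. To be clear, this is only an illustration of the vocabulary above - I have no idea whether Novo's platform looks anything like this, and the agent name and message content are invented:

    import jade.core.Agent;
    import jade.core.behaviours.CyclicBehaviour;
    import jade.lang.acl.ACLMessage;

    // A hypothetical lab-results agent. It is encapsulated (point i), situated
    // in a messaging environment (ii), built for one role (iii), autonomous in
    // deciding when and how it acts (iv), and reactive to new percepts (v).
    public class LabResultAgent extends Agent {

        protected void setup() {
            // Proactive: on startup the agent installs its own goal-directed
            // behaviour instead of waiting to be invoked like a passive object.
            addBehaviour(new CyclicBehaviour(this) {
                public void action() {
                    // Reactive: sense the environment via the message queue.
                    ACLMessage msg = myAgent.receive();
                    if (msg != null) {
                        // Act on the environment through an effector: a reply.
                        ACLMessage reply = msg.createReply();
                        reply.setPerformative(ACLMessage.INFORM);
                        reply.setContent("received: " + msg.getContent());
                        myAgent.send(reply);
                    } else {
                        block(); // yield until a new percept (message) arrives
                    }
                }
            });
        }
    }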
Agent architectures introduce no revolutionary new technologies - but the same could be said about the emergence of other paradigms for distributed enterprise architectures, for example J2EE and Microsoft.NET. Agent architectures are comparable, feature for feature, with both of these platforms and with earlier distributed object architectures. What's different? In an online monograph entitled Agent-Oriented Software Engineering, Dr. Jürgen Lind puts it this way:
After the sobering remarks about the basic similarities of the agent- and object-oriented approaches one may be tempted to conclude that agent-orientation is just the emperor's new clothes. But that is not what I was trying to say. Even if the technical contributions of agent-oriented software engineering are not really revolutionary, the conceptual contribution is nonetheless huge. Agent-oriented software engineering provides an epistemological framework for effective communication and reasoning about complex software systems on the basis of mental qualities. It provides a consistent new set of terms and relations that adequately capture complex systems and that support easier and more natural development of these systems.
In other words, these architectures require a new way of thinking about distributed object systems - but in so requiring, they break one's thinking out of the ingrained habits created by procedural languages inherently designed for a single machine. Agent architectures allow you to program the entire distributed system, spanning any number of machines running various operating systems on heterogeneous hardware platforms, as if it were a single intelligent entity.
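One concrete payoff is location transparency. In the hedged JADE sketch below (again, the agent names are invented, not anything of Novo's), the sender addresses a peer by logical name and lets the platform worry about where that peer actually runs:

    import jade.core.AID;
    import jade.core.Agent;
    import jade.lang.acl.ACLMessage;

    public class OrderingAgent extends Agent {

        protected void setup() {
            // Address the peer by logical name only. Whether "pharmacyAgent"
            // lives in this JVM, on another server, or at a remote clinic,
            // the platform routes the message; this code never changes.
            ACLMessage order = new ACLMessage(ACLMessage.REQUEST);
            order.addReceiver(new AID("pharmacyAgent", AID.ISLOCALNAME));
            order.setContent("refill request 12345"); // illustrative payload
            send(order);
        }
    }

The same code runs whether the two agents share a machine or sit in different buildings: the distribution plumbing has moved out of the application and into the platform.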
Where Do Agent Architectures Fit In?
Are agent architectures simply the Next Big Thing, soon to be surrounded by hype, the IT equivalent of the One Minute Manager "theory" of management? That's possible, but there are good reasons to believe that the time is ripe for a paradigm shift in enterprise architectures, and agent systems are a very logical candidate for the next dominant paradigm. In my mind, Novo's success is evidence that the shift may be underway.
In my scenario planning paper on application architectures, I described a scenario entitled Caffeine Nation, in which Microsoft's dominance over enterprise server-based applications and operating systems was diminished and high-bandwidth networking became ubiquitous. I predicted that agent technologies that leverage ubiquitous connectivity in creative new ways would be good business bets in such a scenario. That's Novo all over. I also predicted that high value-added enterprise framework applications operating on Open Source platforms would leverage the new world of ubiquitous high bandwidth, leapfrogging the client/server-oriented existing market leaders. That's exactly what Novo appears to be doing.
I predicted that in a Caffeine Nation scenario, most significant applications would employ continuous or intermittent connectivity architectures whose features and functions were shaped by the J2EE paradigm. Novo has taken that route, but my prediction was off by a fair margin: it turns out that the server platform ecosystem is more diverse than I anticipated, with both Microsoft.NET and lightweight stacks like LAMP (Linux, Apache, MySQL, PHP) giving J2EE a run for its money.
Stewart Brand has said that scenario planning isn't about always being right, it's about never being wrong. Caffeine Nation was only one of several scenarios I described, and we actually spent five years or so suffering through a reality I described in a different scenario, called Through A Glass Darkly. My scenarios were first written pre-9/11, at the very early stages of the dot-com implosion and the Nuclear Winter of IT. In hunker-down mode, the prevalence of Windows held tight in a risk-averse world. Other for-profit platforms suffered a lot during that period, as did purveyors of hardware and application software. We have emerged from those dark times into something resembling Caffeine Nation, but the corporate leaders who considered and hedged their bets to allow for any of my scenarios coming to pass were in the best shape - ready for the worst and also for the re-emergence of better times.
During those dark days, innovation never stopped. The whole universe of Web 2.0 was born during those times, and so were companies like Novo. With the US and global economies in much better shape, the innovations of those days are coming to fruition in new ways of doing business and, more broadly, new ways of thinking about how to exploit the possibilities of the Web.
I haven't done an application architectures scenario planning exercise since then, but the time may be right to take that on again. Agent architectures may play a role in one or more such scenarios. It is clear to me that another paradigm shift is about to take place, as we reach the edge of what's possible with the tools and techniques that dominate our thinking and actions today. For now, what is important for enterprise architects is that they keep their eyes on the horizon and anticipate the paradigm shift that is about to take place, whatever direction it may take.
Caveats for Healthcare System Architects
Agent-oriented programming is not a total bed of roses, of course. Here are some issues you need to think about before diving in:
- Security: To what degree can agents operating on their own initiative be trusted with sensitive information? How do you guarantee a reliable audit trail?
- Performance: Many healthcare systems must operate in or near real time. As with any other distributed architecture, tasks performed on agent architectures have potentially non-deterministic execution times.
- Robustness: Some healthcare systems are mission-critical at the life-or-death level. Agent architectures are vulnerable to communication-related disruptions, like any other distributed architecture. And as with client/server applications that execute using threaded processes, process synchronization is difficult to guarantee: deadlocks and race conditions are always a possibility. (See the defensive-messaging sketch after this list.)
- Customer perception: Healthcare institutions are risk-averse by nature. Before betting the farm on agent architectures, read Geoffrey Moore's Crossing the Chasm and think hard about its implications. Then go have a heart-to-heart talk with your best customers and see how they feel about being early adopters of the Next Big Thing.
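On the robustness point, one defensive habit translates directly into code: never wait on a peer without a deadline. Here is a rough JADE-flavored sketch - the agent name, conversation id, and timeout are all invented for illustration:

    import jade.core.AID;
    import jade.core.Agent;
    import jade.lang.acl.ACLMessage;
    import jade.lang.acl.MessageTemplate;

    public class CautiousRequester extends Agent {

        protected void setup() {
            ACLMessage req = new ACLMessage(ACLMessage.REQUEST);
            req.addReceiver(new AID("labAgent", AID.ISLOCALNAME));
            req.setConversationId("order-7831");
            req.setContent("status query");
            send(req);

            // Bound the wait so a lost peer or a partitioned network degrades
            // into a handled timeout instead of a hung process.
            MessageTemplate mt = MessageTemplate.MatchConversationId("order-7831");
            ACLMessage reply = blockingReceive(mt, 5000); // 5-second deadline
            if (reply == null) {
                // Timeout path: log it, retry, or escalate to a human operator.
                System.err.println("No reply within deadline; escalating.");
            }
        }
    }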
Dale,
Thanks much for the posting on this subject - you've provided a great explanation of this exciting new model.
I especially appreciate the Architectural Caveats - we constantly remind ourselves not to view agents as a hammer and all problems as a nail. The scenario I believe will likely play out is that agents will not replace SOA, C/S, the Web, or any previous architectural approach - but will live comfortably alongside these models, doing tasks particularly well suited for agents.
For example, in a mission-critical enterprise system, SOA and C/S applications may be the best bet...if remotely accessing information is the need, the Web is a great approach...but if collaborating across a disconnected community, automating the secure exchange of information, and achieving a level of interoperability between disparate applications is the goal - agent architectures are the best approach we've seen.
I hope you don't mind if we point people to your blog to get a better idea of what we do!
Posted by: Robert Connely | August 31, 2006 at 07:20 PM
Hi Robert,
What you say is totally in keeping with Marshall McLuhan's view of media - old media never die, they just become the content of the new media. In your case, you are encapsulating C/S applications (some that do exist and others that should exist) inside the agent architecture. SOA does the same thing, but is really just a variation on the C/S paradigm - a variation on remote procedure calls, which were commonplace on IBM mainframes as early as the 1960's.
I agree with your triage. Unfortunately, a lot of the C/S vendors don't get it yet. In my paper I talked a lot about the need for intermittent connectivity - applications that do best when connected to the grid, but that can operate without connectivity, at least for a time, just as well (or at least "good enough") as when they are connected. With mission-critical apps moving from the desktop to laptops, palmtops/PDAs, and even smartphones, it's not acceptable for the app to fail when the wireless LAN or WAN is unavailable.
Even wired networks go down at times, so it's not like this is a new issue - it's just one to which the app developers often appear to be blind.
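The core pattern isn't rocket science, either. Here's a rough, framework-neutral Java sketch of the store-and-forward idea - nothing Novo-specific, and a real implementation would persist the queue to disk and deal with ordering and duplicates:

    import java.util.LinkedList;
    import java.util.Queue;

    // A toy store-and-forward outbox: the application keeps queuing work
    // while offline and drains the backlog whenever the link comes back.
    public class Outbox {

        // Hypothetical transport interface for this sketch.
        public interface Transport {
            boolean trySend(String message);
        }

        private final Queue<String> pending = new LinkedList<String>();

        public synchronized void submit(String message) {
            pending.add(message); // always succeeds, connected or not
        }

        // Call this whenever the network layer reports the link is up.
        public synchronized void drain(Transport transport) {
            while (!pending.isEmpty()) {
                if (!transport.trySend(pending.peek())) {
                    return; // link dropped again; keep the rest queued
                }
                pending.remove();
            }
        }
    }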
I gotta go - yes, refer anyone you like here, the more the merrier. I'm going to be writing more about agent architectures and other emergent meta-technologies in days to come.
Posted by: Hunscher | September 01, 2006 at 10:53 AM