Article in EE Times
This article was published in September 1998.
In-building nets seek unifying force
Peter Dudley
Some day, most buildings will have a network that links appliances, heating-and-cooling systems, lighting, security, communications, entertainment devices, and computing equipment. Companies owning the technologies that form the foundation of these networks stand to be very successful; that's why every major technology vendor seems to be promoting a different standard, either for the network itself or for interoperability software. Because betting on one standard alone is risky, device manufacturers require a unified approach to let them forge ahead with development while the different specs slog it out for supremacy.
On the hardware and communications side, no fewer than eight organizations, and probably more, were developing standards at last count. Among the better known are the Home Automation Association, the Home Phoneline Networking Alliance, the HomeRF Working Group, the Video Electronics Standards Association, Bluetooth, and the CEBus Industry Council. Each has major industry backers and approaches the problem differently, from promoting wireless networks to new wiring to the use of existing copper phone lines. Because it's likely that more than one of these standards will gain acceptance, there may be a large market in the future for small hardware adapters, akin to the plethora of cabling pin-outs in the early years of PCs.
Software competition
The real problem for device vendors, though, is on the software side: it is a practical impossibility to support all competing software standards in a single product. Resource constraints in the device, coupled with the investment in development, maintenance, and support, preclude taking that path. Additionally, after one protocol is embedded, it becomes very difficult to provide software "adapters" for the shipping, embedded product. And there are multiple methods for interoperability with, access to, and control of these embedded network appliances.
Device vendors who were around early this decade know that getting caught up in protocol wars would be like arguing OS religion in 1991: a drain on resources and ultimately futile. Furthermore, in the embedded appliance market, the application is king; each device serves its purpose and is not expected to act as a general-purpose platform for multiple applications. A toaster toasts bread, a dishwasher washes dishes, and a thermostat controls the furnace and cooler. Which protocol each supports is unimportant to the appliance's operation; in fact, the protocol that comes out on top will be the one that best provides seamless access to, control of, and interoperability between the disparate devices. The inherent differences between the appliances dictate that no single protocol or system can predefine all possible functions; one size does not fit all.
In the fully networked building, if the lights on one floor suddenly lost power, the lighting system could easily check whether power was still coming into the building and, if so, trigger an alarm in the security system. This is possible only if the lights can interoperate with both the security system and the power meter at the software application level. But a sophisticated, complex security application cannot serve as the basis for a tiny information server inside a power meter. Similarly, should the light-fixture manufacturer design for interaction with the power meter or for interaction with the security system? If they are running different protocols, confusion results.
Thus, appliance and device manufacturers need to cover their bases and avoid betting on the wrong standard. They can either wait and see what happens in the market, or begin developing products now to take advantage of the market when it matures. To be successful, they need to do two things. First, they must focus on the core application and the particular data and controls within that application that must be exposed to external management and interoperability. Second, they must build in an abstraction layer that exposes those internal data elements and controls in a granular, systematic way. Let the abstraction layer handle all the different protocols so the core functionality of the appliance is not affected.
Two primary architectures exist for software abstraction layers: the embedded abstraction layer is contained entirely within the appliance, while the proxy-based layer requires another machine running translator software. Each method has both benefits and drawbacks.
A proxy-based system allows the software in the device to be smaller, reducing the per-unit cost of goods slightly. Additionally, if the exact configuration of the proxy server is known, a custom client-server communication system can be implemented between the proxy and the appliance. Finally, a proxy server can be anything from a dedicated device to a full-featured PC or workstation, allowing additional flexibility.
Because a proxy machine introduces a second point of failure and an additional level of complexity, an embedded system offers greater reliability, reduced overall cost of ownership, reduced cost and complexity of support, and greater control over the application. In addition, an embedded implementation can be operated in a stand-alone mode if the network connection is lost. Finally, although a proxy system can offer greater flexibility in upgrades, it also can introduce versioning complexities resulting in compatibility and support headaches.
The abstraction layer, a kind of backplane, works as a "data dictionary", mapping between external data references and internal application data elements. By writing a small amount of "glue code" to expose the deeply embedded data elements' GET and SET routines for backplane access, the engineer can push all the heavy lifting onto the backplane and out of the application.
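As a rough sketch of the idea (the names here, such as bp_entry_t, bp_lookup, and setpoint_get, are invented for illustration and are not taken from any particular product), the data dictionary can be little more than a table that maps an external name to the GET and SET glue routines wrapping the application's internal variables:

/* Hypothetical sketch of a backplane data dictionary in C; the types and
 * names are invented for illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Internal application state, private to the thermostat application. */
static int setpoint_deg_f = 68;

/* Glue code: thin GET/SET wrappers the backplane calls on behalf of
 * whichever external protocol made the request. */
static int setpoint_get(char *buf, size_t len)
{
    return snprintf(buf, len, "%d", setpoint_deg_f);
}

static int setpoint_set(const char *value)
{
    setpoint_deg_f = atoi(value);
    return 0;
}

/* One data-dictionary entry: an external name mapped to its glue routines. */
typedef struct {
    const char *external_name;              /* name seen by SNMP, HTTP, ... */
    int (*get)(char *buf, size_t len);
    int (*set)(const char *value);
} bp_entry_t;

static const bp_entry_t dictionary[] = {
    { "thermostat.setpoint", setpoint_get, setpoint_set },
};

/* The backplane resolves an external data reference to its entry. */
static const bp_entry_t *bp_lookup(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(dictionary) / sizeof(dictionary[0]); i++)
        if (strcmp(dictionary[i].external_name, name) == 0)
            return &dictionary[i];
    return NULL;
}

int main(void)
{
    char buf[16];
    const bp_entry_t *e = bp_lookup("thermostat.setpoint");
    if (e != NULL) {
        e->set("72");                /* e.g., an HTTP form post arrives */
        e->get(buf, sizeof(buf));    /* e.g., an SNMP GET arrives       */
        printf("setpoint = %s\n", buf);
    }
    return 0;
}

The application keeps sole ownership of its internal state; the dictionary is the only surface the outside world ever sees.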
Flexible access
The backplane can then accept plug-in modules for different technologies. This means the appliance can support today's access methods, such as SNMP and HTTP, while providing a path to supporting more in the future, such as Java and CORBA. Without the backplane architecture, the access-method code becomes integrated into the application itself, resulting in an ever-growing, ever-more-complex application containing one protocol stack and one body of access code for each protocol supported.
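Continuing the earlier sketch, and again with invented names (bp_module_t and the stub SNMP and HTTP handlers stand in for real protocol stacks), each access method can be wrapped in a module that presents the same small interface to the backplane:

/* Hypothetical sketch of plug-in access-method modules; the interface and
 * the stub handlers below are placeholders for real protocol stacks. */
#include <stdio.h>

/* Every access method implements the same small interface; the backplane
 * knows nothing about the wire protocol behind it. */
typedef struct {
    const char *name;
    int (*init)(void);    /* bring up the protocol stack  */
    int (*poll)(void);    /* service any pending requests */
} bp_module_t;

static int snmp_init(void) { printf("SNMP agent up\n"); return 0; }
static int snmp_poll(void) { /* decode PDUs, call the dictionary's get/set */ return 0; }

static int http_init(void) { printf("HTTP server up\n"); return 0; }
static int http_poll(void) { /* parse requests, call the dictionary's get/set */ return 0; }

/* Supporting a new protocol later (Java, CORBA, ...) means adding one
 * more entry here; the core application is untouched. */
static const bp_module_t modules[] = {
    { "snmp", snmp_init, snmp_poll },
    { "http", http_init, http_poll },
};

int main(void)
{
    size_t i;

    for (i = 0; i < sizeof(modules) / sizeof(modules[0]); i++)
        modules[i].init();

    for (;;)                                   /* appliance main loop */
        for (i = 0; i < sizeof(modules) / sizeof(modules[0]); i++)
            modules[i].poll();
}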
The backplane architecture also offers another benefit: it takes on the burden of the external protocols, so the engineering staff does not have to build expertise in those protocols or maintain stacks for them. A company building routers may know SNMP but lack HTTP or HTML expertise; it can rely on the backplane for HTTP support and outsource the HTML work.
In some cases, the application may be best served by housing the backplane in an external device and using a specialized, proprietary, lightweight protocol to communicate directly with the device itself. For example, a light switch may actually house the backplane for the bulbs and sensors it controls. In most cases, though, it makes more sense—economically and from a development and maintenance standpoint—to make sure the backplane is embedded into the appliance.
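As a purely hypothetical illustration of that arrangement, the link between the switch-resident backplane and the bulbs and sensors behind it could be as small as a fixed frame; the field layout and opcodes below are invented for the example:

/* Hypothetical fixed frame for a proprietary, lightweight link between a
 * switch-resident backplane and the bulbs/sensors it fronts. Field sizes
 * and opcodes are invented for illustration. */
#include <stdint.h>

enum { OP_GET = 1, OP_SET = 2, OP_REPORT = 3 };

typedef struct {
    uint8_t  dest;        /* which bulb or sensor on the local link    */
    uint8_t  opcode;      /* OP_GET, OP_SET, or OP_REPORT              */
    uint16_t element_id;  /* index into the switch's data dictionary   */
    uint16_t value;       /* payload for SET, result carried by REPORT */
} light_frame_t;

The bulb or sensor needs almost no parsing logic; the switch's backplane translates these frames into whatever SNMP or HTTP asked for.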