LUSTER: A Wireless Sensor Network for Environmental Research

Sub-System Details

We are creating, integrating, and testing new sub-systems, sensors, and interfaces in LUSTER. Descriptions and pictures of the various sub-systems within the network follow.

Sensor Queries and Data Extraction

SenQ is a flexible query system that provides access to streaming sensor data internally via a TinyOS API, and externally via an efficient network protocol. SenQ is used to query seven connected light sensors, the voltage bias for calibration, and the internal battery voltage once each second. As data values are received, they are combined into a single message for transmission to the base station and the nearest storage nodes. Messages are timestamped for sequencing and include the address of the originating node.

The sampling period is configurable at deployment time. If a sensor fails validation (due to poor calibration, transient failures, and so on), the operator can configure the SensorNode to omit the offending sensor permanently. The SensorNode application maintains failure and communication statistics to aid the operator in diagnosing problems. SenQ gives the deployment validator both snapshot and streaming access to all available sensors, including light, temperature, internal voltage, and signal strength (RSSI).
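As a rough illustration of the combined report described above, the following C sketch packs one second of samples into a single timestamped message. The struct name, field names, and field widths are our assumptions for illustration, not the actual SenQ wire format.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout of a combined sensor report; field names and
 * sizes are illustrative, not the actual SenQ message format. */
#define NUM_LIGHT_SENSORS 7

typedef struct {
    uint16_t origin_addr;      /* address of the originating node */
    uint32_t timestamp;        /* for sequencing at the base station */
    uint16_t light[NUM_LIGHT_SENSORS];
    uint16_t bias_voltage;     /* voltage bias for calibration */
    uint16_t battery_voltage;  /* internal battery voltage */
} sensor_report_t;

/* Combine one second's worth of samples into a single report. */
static void build_report(sensor_report_t *r, uint16_t addr, uint32_t now,
                         const uint16_t light[NUM_LIGHT_SENSORS],
                         uint16_t bias, uint16_t batt)
{
    r->origin_addr = addr;
    r->timestamp = now;
    memcpy(r->light, light, sizeof r->light);
    r->bias_voltage = bias;
    r->battery_voltage = batt;
}
```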

LiteTDMA MAC protocol

Wireless communications must be as efficient and low-power as possible to facilitate long-term operation of the system. Since LUSTER is schedule-driven rather than event-driven, and communication within a cluster is single-hop, we chose a TDMA MAC rather than a CSMA protocol such as B-MAC.

We designed and implemented a low-power TDMA network protocol, LiteTDMA, as part of LUSTER. The protocol is designed to be flexible and adjustable to the current system requirements. The number and duration of transmission slots in the schedule can be adjusted at runtime depending on the number of slave nodes and the sensing rate.

Coordination is managed by one active master node. There can be as many slave nodes as the LiteTDMA configuration permits. Dormant master nodes are also allowed; these periodically wake up and take over management if the active master is not functioning properly. Communication is organized into repeating superframes, shown in Figure 2. Each superframe has a Master slot, a Sleep slot, and a number of Slave slots.
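The superframe structure lends itself to a small timing calculation, sketched below in C. The parameter names and millisecond units are illustrative assumptions, not LiteTDMA's actual configuration values.

```c
#include <stdint.h>

/* Hypothetical timing parameters (ms); actual LiteTDMA values differ. */
typedef struct {
    uint32_t master_slot_ms;   /* duration of the Master slot */
    uint32_t slave_slot_ms;    /* duration of each Slave slot */
    uint16_t num_slave_slots;  /* number of Slave slots per superframe */
    uint32_t sleep_slot_ms;    /* duration of the Sleep slot */
} tdma_config_t;

/* Total superframe length: Master slot + all Slave slots + Sleep slot. */
static uint32_t superframe_len_ms(const tdma_config_t *c)
{
    return c->master_slot_ms
         + (uint32_t)c->num_slave_slots * c->slave_slot_ms
         + c->sleep_slot_ms;
}

/* Start offset of slave slot `id` (0-based) within the superframe. */
static uint32_t slave_slot_offset_ms(const tdma_config_t *c, uint16_t id)
{
    return c->master_slot_ms + (uint32_t)id * c->slave_slot_ms;
}
```

Because slot counts and durations are plain parameters, adjusting them at runtime (as the text describes) only requires recomputing these offsets.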

Figure 2: LiteTDMA superframe format and timing.

Time Synchronization LiteTDMA slaves synchronize on reception of the control message that begins every superframe. Each control message carries a 32-bit global time value, which updates the slave's clock.
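A minimal sketch of the slave-side synchronization step, assuming a fixed receive-path delay; the delay constant, variable, and function names here are hypothetical, not part of LiteTDMA's published interface.

```c
#include <stdint.h>

/* Assumed fixed receive-path delay (ticks); a real implementation
 * would calibrate this for the radio and processing pipeline. */
#define RX_DELAY_TICKS 2u

static uint32_t local_clock;  /* slave's copy of the global time */

/* On receiving the superframe control message, adopt the master's
 * 32-bit global time, compensating for the fixed receive delay. */
static void on_control_message(uint32_t master_time)
{
    local_clock = master_time + RX_DELAY_TICKS;
}
```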

New Node Registration LiteTDMA allows dynamic registration of new nodes with the network, as Figure 2 illustrates. A special “newbie” slot follows the Master slot at a predefined rate. During this slot, new, unregistered nodes are allowed to contend for registration in a CSMA fashion. They submit a registration request with their unique 64-bit hardware IDs. If the master has free slots available, it acknowledges a request with a slot ID assignment message containing the slave’s hardware ID.
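The master's side of this exchange can be sketched as a lookup over a small slot table. The table size, the use of 0 as a free-slot marker, and the function names are assumptions of this sketch, not LiteTDMA's actual data structures.

```c
#include <stdint.h>

#define MAX_SLAVES 8
#define NO_SLOT 0xFF

/* slot_table[i] holds the 64-bit hardware ID assigned to slot i;
 * 0 marks a free slot (this sketch assumes 0 is not a valid ID). */
static uint64_t slot_table[MAX_SLAVES];

/* Handle a registration request heard in the newbie slot.
 * Returns the assigned slot ID, or NO_SLOT if no slots are free.
 * A node that re-registers keeps its existing slot. */
static uint8_t register_newbie(uint64_t hw_id)
{
    uint8_t free_slot = NO_SLOT;
    for (uint8_t i = 0; i < MAX_SLAVES; i++) {
        if (slot_table[i] == hw_id)
            return i;  /* already registered */
        if (slot_table[i] == 0 && free_slot == NO_SLOT)
            free_slot = i;
    }
    if (free_slot != NO_SLOT)
        slot_table[free_slot] = hw_id;
    return free_slot;
}
```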

Dynamic Performance Optimization All of the LiteTDMA parameters can be adjusted at runtime and broadcast to the slaves, which reconfigure themselves with the new parameters immediately. This unique feature allows master nodes to dynamically choose optimal parameters (such as the superframe length, the number of slave nodes, and the new node admission rate) to achieve better performance.
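A slave-side parameter update might look like the following sketch; the struct layout and sanity bounds are our assumptions rather than LiteTDMA's actual checks.

```c
#include <stdint.h>

/* Hypothetical runtime-adjustable parameters (names are illustrative). */
typedef struct {
    uint16_t superframe_len_slots;  /* total slots per superframe */
    uint16_t num_slave_slots;       /* slave slots per superframe */
    uint16_t newbie_rate;           /* newbie slot every N superframes */
} tdma_params_t;

static tdma_params_t active;  /* parameters currently in use */

/* Apply a parameter update broadcast by the master. The slave
 * sanity-checks the values, then reconfigures immediately.
 * Returns 0 on success, -1 if the configuration is inconsistent. */
static int apply_params(const tdma_params_t *p)
{
    if (p->num_slave_slots == 0 ||
        p->num_slave_slots >= p->superframe_len_slots)
        return -1;
    if (p->newbie_rate == 0)
        return -1;
    active = *p;
    return 0;
}
```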

Performance Monitoring LiteTDMA captures internal performance-related events, for example: the number of messages sent and received successfully, failed sends, internal message buffer overflows, and a list of internal variables for debugging purposes. These statistics can be reset or reported on demand, or scheduled for periodic reporting. This has been very useful for debugging and performance evaluation of LiteTDMA in LUSTER deployments.
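Such counters could be kept in a simple struct that is updated on MAC events and cleared on a reset command; the field and function names below are illustrative, not LiteTDMA's internals.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative performance counters for the MAC layer. */
typedef struct {
    uint32_t msgs_sent;         /* messages sent successfully */
    uint32_t msgs_received;     /* messages received successfully */
    uint32_t failed_sends;      /* send attempts that failed */
    uint32_t buffer_overflows;  /* internal message buffer overflows */
} mac_stats_t;

static mac_stats_t stats;

/* Clear all counters, e.g. on an on-demand reset command. */
static void stats_reset(void) { memset(&stats, 0, sizeof stats); }

/* Record the outcome of a send attempt. */
static void stats_on_send(int ok)
{
    if (ok) stats.msgs_sent++;
    else    stats.failed_sends++;
}
```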

Reliable Distributed Storage

LUSTER provides a reliable distributed storage service that distinguishes itself from existing approaches developed for motes in two ways. First, it provides easy, non-intrusive access to the collected data, without disturbing or interacting with other elements of the deployed system. Second, it enables storage of gigabytes of data, whereas many existing file systems for on-board flash handle on the order of kilobytes of memory. Larger capacities are desirable for remote data logging.

Figure 3: Storage node software architecture.

Figure 3 shows the software architecture for a storage node. The Data Decoder component parses the sensor data report messages and delivers the sensor data to the storage manager component. The Storage Manager component executes the configured policies. The SDFileSys component provides a FAT16-compatible file system on the Secure Digital/MMC card. The StorageQ component receives and processes queries.

Storage Policies System storage policies control the behavior of the Storage Manager component, and may be configured by messages received from the back-end server. There are two types of policies. An organization policy defines the logical layout of data stored in the flash memory. An overwrite policy determines what to do with new data when the storage is full.
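As one concrete example of an overwrite policy, the sketch below treats storage as a circular log that discards the oldest record when full. The capacity, record type, and names are illustrative; LUSTER's actual configured policies may differ.

```c
#include <stdint.h>

/* Tiny capacity for illustration; real storage holds gigabytes. */
#define LOG_CAPACITY 4

static uint32_t log_buf[LOG_CAPACITY];  /* circular record log */
static uint32_t head;   /* index of the oldest record */
static uint32_t count;  /* number of records currently stored */

/* Circular-log overwrite policy: append the new record; once the
 * log is full, each append overwrites the oldest record. */
static void store_record(uint32_t rec)
{
    log_buf[(head + count) % LOG_CAPACITY] = rec;
    if (count < LOG_CAPACITY)
        count++;
    else
        head = (head + 1) % LOG_CAPACITY;  /* drop the oldest */
}
```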

Figure 4: Example sensor (SN) and storage (FN) node deployment topologies.

Coverage and Deployment Storage nodes may store data from any overheard sensor reports, or they may restrict their coverage to avoid overburdening their energy resources. At deployment time, the base station or validation mote may send a configuration message to the Data Decoder component with a node ID bitmask to determine coverage. Figure 4 shows two example storage node deployments: one in which the system uses a grid topology, and one in which sensor nodes are randomly distributed.
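The bitmask-based coverage check might be implemented as follows; the 32-bit mask width (node IDs 0 to 31) and the function names are assumptions of this sketch.

```c
#include <stdint.h>

/* Bitmask of sensor node IDs this storage node covers; bit i set
 * means reports from node i are stored. 32-bit width is assumed. */
static uint32_t coverage_mask;

/* Install the mask received in a configuration message. */
static void set_coverage(uint32_t mask) { coverage_mask = mask; }

/* Should an overheard report from `node_id` be stored? */
static int covers(uint16_t node_id)
{
    return node_id < 32 && (coverage_mask & (1u << node_id)) != 0;
}
```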

Delay Tolerant Networking

In LUSTER, we use Delay Tolerant Networking (DTN) techniques to increase reliability, particularly when connections to the gateway or the back-end server are lost. For example, in the Hog Island deployment, power to the wireless access point is lost at night, and the directional antenna at the gateway is subject to transient unreliability due to wind. There are two major parts to the DTN solutions in LUSTER. One is an overhearing-based logging technique (described in the Storage Component), and the other is delayed data retrieval.

Deployment Time Validation

Deployment Time Validation (DTV) in LUSTER is as important as it is challenging. The system is deployed on a remote island on the Eastern Shore of Virginia, reached only after considerable driving, boating, and finally hiking, so it is difficult to validate the whole system in the field using existing testing or debugging methods. What is really needed is a lightweight validation tool with long battery life.

We developed a deployment time validation approach, named SeeDTV, that consists of techniques and procedures for WSN verification, along with an in-situ user interface device, called SeeMote. SeeDTV has demonstrated the potential for early problem detection in three domains of WSNs: sensor node devices, wireless network physical and logical integrity, and connectivity to the back-end data server. SeeMote displays this information on its LCD screen in a series of test modes, as shown in Figure 5.

Figure 5: SeeDTV user interface for deployment time validation on a SeeMote device.