The vision of one man lends not its wings to another man.
-- Kahlil Gibran, The Prophet
Multi-Representation Modelling (MRM) -- the joint execution of different models of the same phenomenon -- has been explored in a number of domains, from multi-resolution graphics and battlefield simulations to climate models and molecular models. In most of these domains, MRM has proven beneficial for at least some applications, regardless of the MRM approach used. In §2.1, we present example applications that employ multi-models. In §2.2, using examples from battlefield simulations, we describe alternative MRM approaches, wherein all but one model may suspend execution. In §2.3, we describe work that has influenced our approach.
We present a sampling of domains in which MRM in some form has been employed. For these domains, MRM has been considered beneficial for many applications. A detailed discussion of domains employing MRM is in Appendix A along with evaluations of whether the MRM approaches satisfy R1, R2 and R3.
In multi-resolution graphical modelling, the system maintains multiple representations, or levels of detail, of an object and renders the appropriate representation depending on the object's distance from the viewer [Clark76]. Coarser levels of detail for an object employ fewer polygons, thus reducing the time required to render the object. Moreover, coarse levels of detail depict the object satisfactorily when the perceived size of the object relative to the viewing area is small, for example, when the object is distant from the viewer. In multi-resolution graphical models, researchers concentrate on generating levels of detail automatically before run-time; at run-time, an appropriate level is selected for visually-appealing rendering [Gar95] [Heck94] [Heck97] [Luebke97] [Puppo97]. A few applications permit a user to change a level of detail at run-time, thus requiring re-generation of other levels of detail [Berm94] [Lee98] [Zorin97].
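The distance-based selection described above can be sketched as follows. This is a minimal illustration, not code from any cited system; the thresholds and polygon counts are invented for the example.

```python
# Hypothetical sketch: selecting a level of detail (LOD) by viewer distance.
# Thresholds and polygon counts are illustrative, not from any cited system.

def select_lod(distance, lods):
    """Return the polygon count of the finest LOD whose threshold covers `distance`.

    `lods` is a list of (max_distance, polygon_count) pairs sorted by
    increasing max_distance; the last entry covers all remaining distances.
    """
    for max_distance, polygons in lods:
        if distance <= max_distance:
            return polygons
    return lods[-1][1]  # coarsest representation for very distant objects

# Three representations: 10,000 polygons up close, 1,000 mid-range, 50 far away.
LODS = [(10.0, 10_000), (100.0, 1_000), (float("inf"), 50)]

print(select_lod(5.0, LODS))    # nearby object rendered in full detail
print(select_lod(500.0, LODS))  # distant object rendered coarsely
```

The key property is that only one representation is rendered at a time; the others exist solely so that an appropriate one can be selected as the viewing distance changes.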
Hierarchical autonomous agents jointly execute multiple layers (e.g., a deliberative layer [Sacer74] and a reactive layer [Agre87]) in order to utilise the capabilities of each layer [Albus97] [Bon97] [Firby87] [Gat92] [Hanks90] [Laird91] [Sim94] [Was98a]. Multiple layers enable an agent to pre-plan some of the steps required to fulfill its goal yet exhibit robust behaviour when unexpected or urgent situations occur. Usually, each layer maintains a representation of the agent's goal or surroundings [Brooks86] [Brill98]. Eliminating inconsistencies among dependent parts of the representations for multiple layers is an open issue.
In blackboard systems such as Hearsay-II, many processes write to and read from a single data structure, called a blackboard [Erman80]. Hearsay-II translates spoken sentences into the corresponding alphabetic representation. Hearsay-II's blackboard is a multi-model; each layer is a different model of a spoken sentence. Layers corresponding to sentence fragments such as phonemes, words and phrases execute jointly to produce multiple interpretations of one sentence. Each interpretation is a consistent view of the sentence. Multiple interpretations are ranked by a credibility metric; the most credible interpretation is the best translation of the spoken sentence. However, maintaining multiple interpretations of a sentence is resource-intensive.
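The ranking of competing interpretations by credibility can be sketched as below. The shared list, the posting function, and the example sentences are all hypothetical; a real blackboard system maintains far richer structure across its layers.

```python
# Hypothetical sketch of a blackboard holding competing interpretations of
# one sentence, each ranked by a credibility score in [0, 1].

blackboard = []  # shared structure written by many knowledge sources

def post(interpretation, credibility):
    """A knowledge source writes a hypothesis to the blackboard."""
    blackboard.append((credibility, interpretation))

def best_interpretation():
    # The most credible consistent view is taken as the translation.
    return max(blackboard)[1]

post("recognize speech", 0.9)
post("wreck a nice beach", 0.4)
print(best_interpretation())
```

Maintaining every hypothesis until one wins is what makes the approach resource-intensive: nothing is discarded until the credibility ranking settles.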
In a multi-processor configuration, each processor may access a fast local cache in order to reduce accesses to slow main memory. Processors may read and modify copies of main memory data stored in their caches. Ensuring that processors access correct versions of cached data is the cache coherence problem [Henn96] [Arch86]. The main memory copy and each cache copy of a datum are concurrent representations of a variable. Processes issue interactions in the form of read and write operations to any copy. Caches and main memory copies bear simple relationships, such as equality, with one another.
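The coherence relationship between the concurrent copies can be illustrated with a write-invalidate sketch. The classes below are a toy model, not any real protocol implementation (real protocols such as MESI track per-block states in hardware).

```python
# Hypothetical sketch of write-invalidate coherence: the main-memory copy
# and each cache copy are concurrent representations of one variable.

class Memory:
    def __init__(self, value):
        self.value = value
        self.caches = []

class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.value = None          # no local copy yet
        memory.caches.append(self)

    def read(self):
        if self.value is None:     # miss: fetch from main memory
            self.value = self.memory.value
        return self.value

    def write(self, value):
        # Keep the representations consistent: update main memory and
        # invalidate every other cached copy of the datum.
        self.memory.value = value
        self.value = value
        for cache in self.memory.caches:
            if cache is not self:
                cache.value = None

mem = Memory(0)
c1, c2 = Cache(mem), Cache(mem)
c2.read()         # c2 caches the stale value 0
c1.write(42)      # invalidates c2's copy
print(c2.read())  # c2 re-fetches the up-to-date value
```

The simple equality relationship among the copies is what makes coherence tractable here; the MRM problem is harder because the relationships among representations are application-specific mappings rather than equality.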
In polymorphic languages, data may have multiple types [Card85]. Some languages support several forms of polymorphism, for example, parametric polymorphism, in which a function operates uniformly over a range of types, and ad hoc polymorphism, in which a function behaves differently for different types.
In relational database applications, data are abstracted into relations, which essentially are tables whose rows are tuples and columns are values for members of tuples [Codd70] [Astra76] [Stone76] [Linton84]. In object-oriented databases, data are abstracted as relationships among entities [Chen76] [Balzer85]. A view in a database is derivative, i.e., the view is a set of relations derived from existing relations or relationships [Cham75]. A view in an integrated environment is constructive, i.e., the database is constructed from individual views [Gar87]. Changes to a view must be translated to changes in the database [Ban81] [Hor86].
In nested climate modelling, Limited Area Models (LAMs), which predict regional climate, execute jointly with Global Circulation Models (GCMs), which predict wide-ranging climate changes. The joint execution produces more accurate predictions of the weather than either alone [Giorgi90] [Giorgi91] [Risbey96]. Typically, GCM data for large geographic areas are translated to LAM input. LAMs supplied with this input data perform further computations to predict weather for small geographic areas. Ideally, LAM data should be translated to GCM input as well, in order to account for local factors that may influence global climate; however, while translating GCM data to LAM input is common, the reverse translation is an open problem.
When theoretical studies on the potential energy surfaces for chemical reactions of a large system are carried out, low-computation low-detail models, such as molecular mechanics models, are used initially for most of the system, and high-computation high-detail models, such as molecular orbital methods, are used subsequently for a small part of the system [Matsu96] [Sven96a] [Humbel96] [Sven96b]. Such integrated models enable researchers to study interesting aspects of a reaction in detail without incurring the cost of modelling the entire reaction in detail. Integrated molecular models permit interactions at multiple levels, and the constituent models are remarkably consistent with one another. Also, reported resource consumption is low.
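Integration schemes in this family (e.g., the subtractive scheme of [Sven96a]) typically combine a high-detail treatment of a small "model" region with a low-detail treatment of the whole "real" system, subtracting the low-detail energy of the model region to avoid double counting. The sketch below shows only the arithmetic of the combination; the energy values are placeholders, not real chemistry.

```python
# Sketch of a subtractive multi-level energy combination: the whole system
# at low detail, plus a high-detail correction for the interesting region.
# All energy values are illustrative placeholders.

def combined_energy(e_low_real, e_high_model, e_low_model):
    """E = E_low(real) + E_high(model) - E_low(model).

    The subtraction removes the low-detail contribution of the model
    region, which is double-counted by the first two terms.
    """
    return e_low_real + e_high_model - e_low_model

print(combined_energy(e_low_real=-100.0, e_high_model=-12.5, e_low_model=-10.0))
```

The appeal for MRM is that both levels contribute to one well-defined quantity, so the representations remain consistent by construction.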
In a number of commercial computer games, players control characters inhabiting a world displayed at multiple resolutions. Usually, a player interacts at the most detailed resolution level, with the other resolution levels existing solely to provide the player with a wider or less-cluttered view of the game world. In a few games, the player may transition to less-detailed resolution levels and interact at those resolution levels. Typically, players can interact at only one resolution level at a time. In most games, all processing takes place at the most detailed resolution level.
A number of battlefield simulations require the joint execution of multiple models, for example, training models and analysis models [AMG95] [Davis93] [Davis98] [DIS93] [DoD94] [Reyn94]. Typically, battlefield simulations employ an approach called aggregation-disaggregation to ensure that entities interact at the same representation level. Aggregation-disaggregation enables many independently-designed simulations to execute jointly. However, aggregation-disaggregation scales poorly with large numbers of jointly-executing models or interacting entities; it can preclude concurrent multi-representation interactions, give rise to inconsistencies among the multiple representations, and increase resource consumption.
In Table 1, we evaluate the MRM approaches employed in the above domains with regard to our MRM requirements of multi-representation interactions (R1), multi-representation consistency (R2) and cost-effectiveness (R3). The evaluation here is intentionally brief; it is meant to highlight shortcomings of previous work. Detailed evaluations of these domains are in Appendix A. In Table 1, darkly-shaded cells signify that a domain satisfies a requirement. Lightly-shaded cells signify that a domain satisfies a requirement poorly. Unshaded cells signify that a domain does not satisfy a requirement. An ideal MRM approach for each domain will have all three cells shaded darkly.
MRM approaches such as selective viewing and aggregation-disaggregation execute only one model at a time. In selective viewing, only the most detailed model is executed. In aggregation-disaggregation, at any given time, only one model is executed; depending on the interactions among entities, the system may change the currently-executing model by transitioning among models. In Variable Resolution Modelling, processes are modelled at different resolution levels. At any time, a user may choose to model processes or sub-processes at higher or lower detail. The system transitions among multiple process models in order to satisfy the user's request. In the following sections, we critique each approach briefly. Most of the examples in these sections are from battlefield simulations because of our experience and familiarity with that domain.
With selective viewing, only the most detailed model is executed, and all other models are emulated by selecting information, or views, from the representation of the most detailed model [Davis93]. Selective viewing is employed when modelling a phenomenon in detail at all times is considered necessary. Low-resolution views of a multi-model are generated from the most detailed model. While this approach may be suitable for games because available processing resources can execute the most detailed model in near real time, for more complex models, selective viewing has many disadvantages.
First, executing the most detailed model incurs the highest resource usage cost. Proponents of selective viewing may argue that the smallest detail can affect the execution of the complete model (e.g., a butterfly flapping its wings in Colombia can affect the weather of Western Europe). While this argument may be valid in some cases, for most models, most of the details can be abstracted reasonably in order to conserve resources.
Second, the most detailed model is likely to be the most complex model. One of the main benefits of modelling is to make reasonable simplifications in order to study a phenomenon efficiently. Executing the most detailed model adds complexity instead of reducing it.
Third, executing the most detailed model may limit the opportunities for performing some types of analyses. Abstract models enable a user to make high-level decisions regarding the multi-model. These high-level decisions are likely to change the behaviour of many entities, thus enabling broad analyses of the multi-model. Enabling equivalent analyses in a detailed model requires making corresponding low-level decisions. These low-level decisions may not exist or may be difficult to make. Thus, the equivalent analyses in a detailed model may be impossible or infeasible.
Fourth, some multiple models may not bear hierarchical relationships with one another, i.e., none of them is the most detailed model. Selective viewing implies that the most detailed model is a monolithic model. For non-hierarchical models, the monolithic model must be created by capturing all the details of all the models. Such a monolithic model requires additional design effort and is likely to be very complex.
The philosophical question of what constitutes the most detailed model can entrap designers into adding ever-increasing detail to a model by refining its entities ever further. However, even assuming a designer can eventually escape this trap, selective viewing is not suitable for the execution of a multi-model because of the above disadvantages.
Inconsistencies can arise in a multi-model when a low resolution entity (LRE), e.g., a corps, interacts with a high resolution entity (HRE), e.g., a tank. A common MRM approach is to change the resolution of an entity dynamically to match the resolution of other interacting entities. This dynamic change is called aggregation (HREs → LRE) or disaggregation (LRE → HREs). Aggregation-disaggregation ensures that entities interact with one another at the same level by forcibly changing their representation levels [Smith94]. Typically, if an LRE interacts with an HRE, the LRE is disaggregated into its constituents, which interact at the HRE level. LRE-LRE interactions occur at the LRE level. A disaggregated LRE may be re-aggregated so that it can interact subsequently at the LRE level. Below, we critique the variations on aggregation-disaggregation [Nat96].
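The basic transition can be sketched as below. The classes, attribute names, and strength values are hypothetical, chosen only to show the representation change between an aggregate entity and its constituents.

```python
# Hypothetical sketch of aggregation-disaggregation: an LRE is replaced by
# its constituent HREs before a cross-level interaction, then re-aggregated.

class HRE:
    def __init__(self, name, strength):
        self.name, self.strength = name, strength

class LRE:
    def __init__(self, constituents):
        self.constituents = constituents
        self.disaggregated = False

    @property
    def strength(self):
        # Aggregate attribute derived from the constituent HREs.
        return sum(h.strength for h in self.constituents)

    def disaggregate(self):
        self.disaggregated = True
        return self.constituents   # HREs now interact individually

    def aggregate(self):
        self.disaggregated = False # constituents fold back into the LRE

platoon = LRE([HRE("tank-1", 10), HRE("tank-2", 8)])
print(platoon.strength)            # LRE-level attribute
tanks = platoon.disaggregate()     # forced down to the HRE level
print([t.name for t in tanks])
```

The scaling problems discussed below stem from this forced, whole-entity transition: every cross-level interaction requires changing representation levels rather than allowing both levels to coexist.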
Full disaggregation involves disaggregating an LRE into its constituent HREs. In Figure 2, LREs L1 and L2 are disaggregated when they interact with an HRE. Typically, full disaggregation occurs when an LRE establishes contact (e.g., sensor, line-of-sight) with an HRE. Full disaggregation ensures that all entities interact at the same representation levels. However, full disaggregation is often too aggressive -- although only some HREs that constitute an LRE may be involved in a particular interaction, all the constituent HREs will be disaggregated. Moreover, full disaggregation leads to chain disaggregation -- cascading disaggregation of interacting LREs when one of them interacts with an HRE (e.g., the disaggregation of LRE L3). The large number of entities instantiated in full disaggregation may place a high demand on system resources. Accordingly, full disaggregation is restricted to small-scale multi-models [Cald95a].
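Chain disaggregation is essentially a transitive closure over the interaction graph, which the following sketch makes concrete. The interaction map and entity names mirror Figure 2's L1, L2, L3 but are otherwise hypothetical.

```python
# Hypothetical sketch of chain disaggregation: disaggregating one LRE forces
# every LRE it interacts with to disaggregate, and so on transitively.

def chain_disaggregate(start, interacts_with):
    """Return the set of LREs forced to disaggregate, given an interaction map."""
    forced, frontier = set(), [start]
    while frontier:
        lre = frontier.pop()
        if lre not in forced:
            forced.add(lre)
            frontier.extend(interacts_with.get(lre, []))
    return forced

# L1 interacts with L2, and L2 with L3: disaggregating L1 cascades to all three.
interactions = {"L1": ["L2"], "L2": ["L3"]}
print(sorted(chain_disaggregate("L1", interactions)))
```

Because the cascade follows the interaction graph rather than the actual scope of any one interaction, a single HRE contact can instantiate constituents for a large fraction of the simulated force.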
Partial disaggregation attempts to overcome the main limitations of full disaggregation by disaggregating an LRE partly rather than entirely. As seen in Figure 3, a partition is created inside LRE L2 such that only a part of L2 is disaggregated into HREs that interact with the disaggregated constituents of LRE L1; the remaining part of L2 is left as an LRE to interact with LRE L3. For example, in one linkage of an aggregate-level simulation with a disaggregate-level simulation [Hardy94] [Burd95], an aggregate entity that engages a disaggregate entity is partitioned such that one part disaggregates and fights a disaggregate-level battle in the disaggregate-level world, while the other part remains aggregated and fights aggregate-level battles in the aggregate-level world.
As seen in Figure 3, partial disaggregation has the potential to control chain disaggregation. This potential depends on how easily a partition can be constructed inside an LRE. The criteria for constructing the partition must be chosen carefully to prevent partial disaggregation from degenerating into full disaggregation.
A common aggregation-disaggregation variant is to demarcate a pre-determined region of the simulated domain, called a playbox , within which only HREs can participate [Karr94]. Conceptually, the playbox may be defined in any domain, for example, a spatial domain such as a simulated battlefield. Entities inside the playbox are disaggregated while those outside remain aggregated. An LRE that crosses into the playbox must be disaggregated; likewise, when all the disaggregated constituent entities of an LRE leave the playbox, they are aggregated into the LRE. The playbox is typically static in terms of location and boundaries, although it can be dynamic.
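A static rectangular playbox reduces the disaggregation decision to a region-membership test, as the following hypothetical sketch shows. The coordinates and the rectangular shape are illustrative; a playbox may be defined over any domain.

```python
# Hypothetical sketch of a static rectangular playbox: entities inside it
# must be disaggregated (HRE level); entities outside remain aggregated.

def in_playbox(position, playbox):
    (x, y) = position
    (x_min, y_min, x_max, y_max) = playbox
    return x_min <= x <= x_max and y_min <= y <= y_max

def required_level(position, playbox):
    return "HRE" if in_playbox(position, playbox) else "LRE"

PLAYBOX = (0.0, 0.0, 100.0, 100.0)
print(required_level((50.0, 50.0), PLAYBOX))   # inside: must disaggregate
print(required_level((150.0, 50.0), PLAYBOX))  # outside: stays aggregated
```

Note that the test depends only on position, not on whether the entity actually interacts with anything, which is exactly why playboxes can force unnecessary disaggregation and thrash on boundary-crossing trajectories.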
Playboxes may force entities to disaggregate unnecessarily, for example, when an entity enters a playbox but does not interact with others in the playbox (e.g., LRE L2 in Figure 4). Furthermore, thrashing can occur when the trajectory of an entity causes it to enter and leave the playbox rapidly. Cross-level interactions across the boundary of the playbox (e.g., interactions between the disaggregated L2 and LRE L3 in Figure 4) must be addressed separately. Additionally, static playboxes artificially constrain the region in which LREs and HREs may interact meaningfully. Several battlefield simulation projects employ playboxes.
Consider a situation where an HRE requires the attributes of the constituent HREs of an LRE but does not interact with them. For example, an Unmanned Airborne Vehicle (UAV) may obtain aerial pictures that are processed for details of entities observed in an area. Since LREs are a modelling abstraction, any LRE in the UAV picture must be depicted as its constituent HREs. In this case, disaggregating the LRE is wasteful since only a perception of the constituent HREs is required. In pseudo-disaggregation, an HRE receives low-resolution information from LREs and
disaggregates the information locally to obtain high-resolution information. For example, in Figure 5, the UAV is an HRE that pseudo-disaggregates LREs L1 and L2. Pseudo-disaggregation is applicable when the interaction is unidirectional, i.e., L1 and L2 do not interact with the UAV. The algorithms used by the UAV to disaggregate L1 and L2 locally must be similar to the ones L1 and L2 would use to disaggregate themselves, if required. Each HRE must incorporate rules to disaggregate every LRE in the simulation. Pseudo-disaggregation is employed by a number of battlefield simulations [Weat93] [Allen96].
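The observer-local nature of pseudo-disaggregation can be sketched as follows. The even split of aggregate strength among constituents is an invented illustrative rule; as noted above, a real observer would apply the same algorithm the LRE itself would use.

```python
# Hypothetical sketch of pseudo-disaggregation: the observer receives only
# low-resolution LRE state and disaggregates it locally into a *perception*
# of constituent HREs; the LRE itself is never disaggregated.

def pseudo_disaggregate(lre_state):
    """Split an LRE's aggregate state evenly among its reported constituents.

    The even split is illustrative only; the rule must match the algorithm
    the LRE would use to disaggregate itself, if required.
    """
    n = lre_state["constituents"]
    share = lre_state["strength"] / n
    return [{"id": f"{lre_state['id']}-{i}", "strength": share}
            for i in range(n)]

perceived = pseudo_disaggregate({"id": "L1", "constituents": 4, "strength": 40.0})
print([h["strength"] for h in perceived])
```

Because only a perception is produced, no simulation state changes and no chain disaggregation can occur; the cost is that every observing HRE must carry disaggregation rules for every LRE type.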
In Davis's Variable Resolution Modelling (VRM), designers construct families of models that support dynamic changes in resolution [Davis92] [Davis93]. For example, a coarse model of weather prediction may include season and geographical location. A model at a finer resolution may include temperature variations, cloud patterns and wind directions. A model at yet finer resolution may include rates of temperature changes, range of temperatures and so on. Designing with VRM in mind facilitates the construction of models that can execute at any desired level of resolution.
VRM involves building tunable process hierarchies, while MRM involves making multiple models execute jointly. It is possible for a simulation to incorporate both philosophies. For example, in a multi-resolution simulation, various aggregate-level and disaggregate-level entities may interact with one another. Users may vary the resolution at which the simulation proceeds. There are two aspects to this variability: one, the interactions among entities, which is our focus, and two, the resolution of simulation processes, which is Davis's focus. We address issues that arise when aggregate-level entities interact with disaggregate-level entities. Davis addresses issues that arise when one wishes to observe phenomena such as invasions or stratagems at variable resolution. Designers may describe the movement of a single tank either by a very high-level process or by low-level sub-processes that involve factors like fine-grained terrain conditions and availability of fuel. Here, the motion of the tank is a VRM process, but the interaction of the tank with other tanks or platoons is an MRM issue.
VRM is related to MRM because a process at multiple resolution levels is likely to require multiple representations. Many VRM researchers argue for the existence of multiple resolutions [Davis98] [Harsh92] [Hill92a] [Hill92b] [Horr92]. However, in VRM, users are expected to transition among models during execution rather than execute multiple models concurrently. VRM complements MRM; the relationships among hierarchical resolution levels for a process are mapping functions that translate attributes among multiple representations.
We now briefly discuss work that has influenced our approach to MRM.
Our approach includes the concept of a Multiple Representation Entity (MRE), a technique for maintaining concurrent representations based on four fundamental observations about MRM [Reyn97]. MREs are internally consistent and interact at multiple representation levels concurrently. A Consistency Enforcer (CE), consisting of an Attribute Dependency Graph (ADG) and application-specific mapping functions, maintains consistency among the multiple representations in an MRE. An Interaction Resolver (IR), based on our taxonomy of interactions, resolves the effects of dependent concurrent interactions [Nat99]. MREs reduce simulation and consistency costs [Nat97].
Determining whether a multi-model is satisfactory is ultimately a form of the Turing test [Turing50] because only end-users can determine whether the multi-model meets their requirements. Crucial to a multi-model is the effective joint execution of its constituent models. We believe effective joint execution can be achieved by maintaining consistency among concurrent representations. Consistent concurrent representations enable consistent concurrent behaviour since behaviour is influenced by state [Hop79]. Approaches like Temporal Logic of Actions support the notion that behaviour is influenced by state [Lam94] [Abadi95]. The definition of consistency is application-dependent. For some applications, consistency may be bi-modal (i.e., the representations are consistent or inconsistent), whereas for others it may be multi-modal (i.e., the representations are consistent to some degree). For yet other applications, consistency may be similar to determining the effectiveness of a real-time system that schedules tasks according to their deadlines and their expected values [Burns98].
Dependency graphs similar to our ADGs have been used to capture cause-effect relationships in Petri Nets [Peter77] [Petri62], dataflow models [Dennis80] [Ack82] [Davis82] [Gajski82] [Grim93], object-oriented design [Rum91] [Shlaer92] and logical time systems [Lam78]. Since attribute relationships can be viewed as constraints [Allen92] [Hill92a] [Horr92], a CE may be implemented as a constraint solver. Typically, a constraint solver operates in the Herbrand universe [Jaffar94] [Saras91]. Although constraint solving in the Herbrand universe can be complex [Früh92a] [Früh92b] [Van96], constraint solving in other domains can be simplified [Marr93] [García93] [Free90] [Jaffar92] [Cormen89]. A CE may be implemented as a set of mediators. The relationships among attributes at multiple representation levels may be realized by mediators, which capture behavioral relationships in complex systems [Sull94]. A CE may be implemented as an attribute grammar, which is a means of propagating changes among dependent attributes [Knuth68] [Knuth71] [Reps84] [Besh85] [Demers85] [Reps86] [Hor86].
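The change-propagation role of a dependency graph with mapping functions can be sketched as below. The attribute names, the halving mapping, and the dictionary-based graph are all hypothetical; real ADGs and mapping functions are application-specific.

```python
# Hypothetical sketch of a Consistency Enforcer: an attribute dependency
# graph plus mapping functions propagates a change to one attribute through
# the dependent attributes at other representation levels.

dependencies = {
    # attribute -> list of (dependent attribute, mapping function)
    "platoon.strength": [
        ("tank1.strength", lambda s: s / 2),  # illustrative mapping: even split
        ("tank2.strength", lambda s: s / 2),
    ],
}

attributes = {"platoon.strength": 20.0,
              "tank1.strength": 10.0,
              "tank2.strength": 10.0}

def propagate(name, value):
    """Set an attribute and recursively update everything that depends on it."""
    attributes[name] = value
    for dependent, mapping in dependencies.get(name, []):
        propagate(dependent, mapping(value))

propagate("platoon.strength", 30.0)  # an LRE-level interaction changes the LRE
print(attributes["tank1.strength"], attributes["tank2.strength"])
```

This naive recursion assumes the graph is acyclic; cyclic dependencies (e.g., constituents that also determine the aggregate) require the constraint-solving or mediator machinery cited above.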
Interactions are common in many domains, for example, database transactions and operations [Eswa76]; processor interrupts; cache operations [Henn96]; reads and writes to shared memory in parallel processing systems; operations, events and actions in object-oriented and process modelling [Rum91] [Shlaer92] [Alhir98]; method invocations and function calls in object-oriented systems; messages in distributed processing systems and logical time systems [Lam78]; accesses to a blackboard [Erman80]; and exceptions in programming languages [Good75] [Barnes80] [Liskov79] [Strou91] [Yemini85]. Resolving the effects of interactions, transactions, events or operations that overlap in time is a well-known problem. The effects of concurrent interactions in MRM are similar to race conditions: in both cases, the outcome depends on how the effects of the overlapping occurrences are resolved.
A traditional policy for resolving concurrent events, operations, transactions or interactions is serialization -- imposing an order on them [Eswa76] [Haer83]. Serialization is often a valid policy when the concurrent events or transactions are logically independent. Traditionally, database systems serialize independent transactions [Bern81] [Papa86] [Brahma90]. Cache coherence models also serialize independent operations on cache blocks [Henn96] [Arch86]. Object and process modelling techniques either require that one action execute in a state at a time or recommend partitioning the states in which concurrent events can occur and then reflecting the effects of those events simultaneously [Alhir98] [Rum91] [Shlaer92]. Either approach assumes the concurrent events are independent. In logical time systems such as Lamport time [Lam78], virtual time [Jeff85], vector clocks [Matt89], PDES [Fuji90] and isotach systems [Will93], independence is tied to a notion of concurrence, i.e., two events are assumed independent if it cannot be determined that there exists a cause-effect relationship among them.
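Serialization reduces to applying the concurrent interactions in some imposed order, as the following hypothetical sketch shows. The timestamp ordering and the two example effects are invented for illustration.

```python
# Hypothetical sketch of serialization: concurrent interactions are applied
# in timestamp order. This is valid only when the interactions are logically
# independent; dependent interactions may have effects no serial order captures.

def serialize(state, interactions):
    """Apply interactions to `state` in increasing timestamp order."""
    for _, effect in sorted(interactions, key=lambda pair: pair[0]):
        state = effect(state)
    return state

# Two interactions on a shared value, issued concurrently with timestamps 1 and 2.
events = [(2, lambda s: s * 2), (1, lambda s: s + 10)]
print(serialize(0, events))  # (0 + 10) * 2 = 20
```

Note that the opposite order would yield 10 rather than 20: serialization picks one such outcome, whereas logically dependent interactions, discussed next, may require an effect that corresponds to no serial order at all.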
The effects of some concurrent interactions may not be captured by any serial order. For example, the semantics of one interaction may interfere with the semantics of another interaction such that one or the other or both may be fully or partially excluded, ignored, delayed or even enhanced. Some database schemes utilise semantic information about transactions to reorder concurrent transactions, possibly non-serializably [Badri92] [Barg91] [Garcia83] [Korth88] [Lynch83] [Munson96] [Weihl88] [Thom98]. However, even these approaches assume that the interactions are logically independent. Some concurrent interactions may be logically dependent, i.e., their concurrent occurrence is a factor in determining their effects. We classify such interactions and evaluate our approach based on criteria for a good taxonomy [Amo94] [How97].
After considering specification methodologies such as DFDs, PERT charts, IDEF0-3, UML [Alhir98] [Fowler97] [Texel97], OOA [Shlaer92] and Rumbaugh's Object Modelling Techniques [Rum91], we chose the High Level Architecture's Object Model Template (OMT) [OMT98] as a base for presenting our techniques in a manner useful to designers of multi-models. OMT permits designers to specify object classes and interactions [JPSD97] [JTFp97] [RPR97].
A number of domains employ some form of multi-representation modelling (MRM) with varying degrees of success. We presented some MRM applications and summarised their strengths and deficiencies using the metrics of multi-representation interactions (R1), multi-representation consistency (R2) and cost-effectiveness (R3). Common approaches for MRM involve executing the most detailed model or transitioning from one model to another. These approaches can make the multiple models inconsistent and incur high costs. Maintaining consistent representations of multiple models can be more effective than alternative approaches to MRM. We explore that thought in subsequent chapters.