Resource allocation is a negotiation process between resources and objects. Legion encompasses a variety of resources, including hosts, networks, vaults, and I/O devices such as printers and sensors. Users will require blocks of time on some or all of these resources. They must also be able to control the placement of their objects' active and persistent states, or Legion will be unsuitable for certain types of applications. Administrators, on the other hand, must regulate usage of and control access to their resources, as well as enforce local usage policies, security policies, and so on. Different applications may require certain types and levels of parallelism, storage and memory space, I/O support, communication, and communication topology. A single, generic resource scheduling arrangement is simply not feasible under these conditions: to give equal power to resource providers and consumers, both parties must be able to negotiate an arrangement.
Application- and object-specific resource allocation is crucial to achieving both user and administrator goals. Legion supports the development and use of allocation algorithms tailored to particular objects or applications, while resource objects can be tailored to suit the resource provider's interests. Each provider can create his or her own flavors of resource objects. Class objects enforce policies for their instances, and both parties can export interfaces to facilitate negotiations. This placement philosophy allows each allocation decision to be tailored to the application, so as to better match actual application run-time behavior and exploit application-specific knowledge.
In the allocation of resources for a specific task there are three steps: decision, enactment, and monitoring. In the decision stage, the task's characteristics, requirements, and run-time behavior, the resource's properties and policies, and users' preferences must all be considered. Legion provides an information infrastructure and a resource negotiation framework for this stage. The allocation decision selects target hosts and storage space, i.e., particular host objects and vaults. At the enactment stage, the allocation coordinator sends an activation request, including the desired host-vault mapping for the object, to the class object, which will carry out the request. The class object checks that the placement is acceptable and then coordinates with the host object and vault to create and start the new object. The monitoring stage ensures that the new object is operating correctly and that it is using the correct allocation of resources. An object-mandatory interface includes functions to establish and remove triggers that will monitor object status.
There are three special objects involved in Legion resource management: the Collection, the Scheduler, and the Enactor. The Collection collects resource information, constantly monitoring the system's host and vault objects to determine which resources are in use and which are available for what kind of tasks. The Scheduler determines possible resource use schedules for specific tasks and makes policy decisions about placing those tasks. The Enactor negotiates with resources to carry out those schedules and acquires reservation tokens from successful negotiations.
Suppose, for example, that a user wants ClassFoo to start instance Foo on another host. The user sends a call (Figure 20, step 1) to the basic Legion Scheduler.* The Scheduler then consults the Collection to determine what resources are appropriate and available for Foo (step 2) and builds a sample schedule or series of schedules (step 3). It then sends a sample schedule to the Enactor (step 4). The Enactor contacts each resource on the schedule and requests an allocation of time (step 5). Once it has contacted each resource and reserved the necessary time it confirms the schedule with the Scheduler (step 6), and then contacts ClassFoo and tells it to begin Foo on the appropriate resources (step 7). ClassFoo contacts the resource(s) and sends the order to create Foo (step 8).
Figure 20: Steps in Scheduling Instance Foo
Before the Scheduler approaches the Collection (step 2 in Figure 20), it must know Foo's specific needs: computing time, any dependency graphs, special requirements, etc. A class-specific Scheduler may know this information explicitly; otherwise, it can obtain descriptive information from the class via the attributes interface.
Figure 21: Collection and Scheduler
With this information, the Scheduler asks the Collection for a list of the correct and available resources (Figure 21, step 1). Individual resources may limit outside usage to certain types of users, objects, hosts, etc., so the Collection has information regarding which resources will accept instances of ClassFoo, on its particular host, started by a particular user. If the Scheduler requests a SPARC, for example, the Collection searches for a list of those SPARCs that are accessible, available, and have the proper amount of free space.
The Collection returns a list of matching resources to the Scheduler (Figure 21, step 2). The Scheduler builds a set of possible schedules (Figure 21, step 3), prioritizes them, and sends an ordered list of schedules off to the Enactor. This can vary, of course, depending on the individual scheduler. A different scheduler might send only one or several possible schedules.
When the Enactor is ready to request reservations of time and space on the resources of a chosen schedule, it methodically goes down its list and approaches each one individually. If any resource refuses the Enactor's request, the Enactor either moves on to the next schedule or, if all the schedules on its list have failed, informs the Scheduler that it needs a new schedule. In the scenario in Figure 20, host object Beta and vault object Beta are the only resources required, so the Enactor approaches each one with the make_reservation() function (which asks for an immediate reservation), requesting reservations for the host and the vault separately. Host object Beta then decides whether or not to accept the reservation. Note that Beta is free to refuse the reservation at any point during the entire scheduling procedure if previous reservations or internal considerations require a cancellation, regardless of its usual policies towards the user and the user's host. If it refuses, the Enactor tells the Scheduler that the schedule is not possible and requests another schedule, or, if the Scheduler has sent a list of schedules, the Enactor moves down to the next schedule. If this happens after other host objects on the failed schedule have been contacted and have agreed to make reservations, the Enactor contacts them again to release its reservations. The prototype Enactor allows a schedule to be expressed as a difference set from the previous schedule. Figure 22 shows the Scheduler data structure.
This system is intended to avoid "reservation thrashing," i.e., repetitive reservations when an Enactor releases a reservation and immediately re-reserves the resource as part of a new schedule.
Figure 22: Legion Scheduler data structure
In our example, however, host object Beta allows a reservation and sends the Enactor a reservation token. The Enactor then contacts vault object Beta. Once all resources on a schedule have been contacted and have sent in reservation tokens, the Enactor notifies the Scheduler that the task can be completed (Figure 23, step 1).
Figure 23: Legion resource management
The Scheduler then tells the Enactor to contact ClassFoo (Figure 23, step 2). The Enactor sends a create_instance(reserved host object name, reservation token) call to ClassFoo (Figure 23, step 3), which in turn sends host object Beta a start_object(reservation token) call (Figure 23, step 4), along with the name of the vault Beta should use. Beta then creates instance Foo (Figure 23, step 5).
Both ClassFoo and the resource have veto power throughout the reservation process: Legion's resource management system handles denial or failure at any stage. For example, if a resource object refuses to grant a reservation the Enactor asks the Scheduler for another schedule and tries again. A reserved resource can also reject a start_object() call. The two parties remain autonomous during this entire procedure, and are not obligated to honor each other's commitments.
Depending on the individual resource, a reservation may have a time-out period after which the reservation is released. The reservation guarantees that a block of time and space will remain open for a specific period, beginning with the creation of the token. Note that the token may expire by the time ClassFoo is ready to use it.
A system may have multiple Schedulers, for different processes, problems, levels of granularity, etc. For information on writing your own Scheduler, or using multiple Schedulers, please contact the Legion Research Group.
The Scheduler can be called directly from a user-level program or it can be called from a Class Object. Writers of schedulers should support the following interface. For complete information, consult $LEGION/src/ServiceObjects/Schedulers. Scheduler writers are encouraged to derive new schedulers from the existing ones, and to overload the placement generation methods.
virtual UVaL_Reference<LegionSchedulerResponse> scheduleObjects(
The Scheduler then works with the Collection and Enactor to negotiate and implement a schedule. The response indicates whether or not the scheduling action succeeded. For more information on the arguments and return values, see LegionSchedulerResponse and LegionRequestToScheduler in $LEGION/include/legion/LegionSchedule.h.
The getCandidatePlacements call returns a list of possible placements for a particular object. In future releases, this will be superseded by a new function which will generate a suite of schedules (the same schedules that would be passed to the Enactor if scheduleObjects() were called).
virtual UVaL_Reference<LegionLOID> getEnactor();
virtual UVaL_Reference<LegionLOID> setEnactor(
These functions set and/or return the current value of the Enactor that the Scheduler will use. The getEnactor() call returns the current value. The setEnactor() call sets the value to the argument and returns the prior value of this variable.
virtual UVaL_Reference<LegionLOID> getCollection();
virtual UVaL_Reference<LegionLOID> setCollection(
These functions set and/or return the current value of the Collection that the Scheduler will use. getCollection() returns the current value, and setCollection() sets the value to the argument and returns the prior value of this variable.
The primary interface to the Enactor is through the enact_schedule() call. There are additional functions for performing lower-level operations (such as making and canceling individual reservations, or activating and creating individual objects), but, from the point of view of a typical Scheduler, enact_schedule() is the most important method.
If a master schedule's placement fails, the Enactor looks for subordinate delta schedules with alternative placements for the failed mapping. It iterates over these, and for each one releases any current reservations that conflict with the delta schedule, then makes the delta schedule's new reservations. If these fail, it moves on to the next delta schedule.
As soon as a schedule is successful (i.e. reservations are obtained for all placements in the schedule), the schedule is returned to the caller with a success code. If all schedules fail, the return value includes error codes indicating which placements caused each schedule to fail.
For complete information on the data structures passed, schedule writers are encouraged to examine the header file $LEGION/include/legion/LegionSchedule.h and to look at the source code for the example schedulers in $LEGION/src/ServiceObjects/Schedulers.
The Collection supports five methods useful for resource management, although only one of them is used by Schedulers. The QueryCollection() method call passes in a string containing a logical expression describing the systems of interest.
The Collection then searches its database of system information and returns a set of system descriptions matching that request.
The Legion Collection object uses the MESSIAHS Interface Language (MIL). Collection queries can be constructed using the grammar below.
int-binop    -> + | - | / | * | mod | & | | | max | min
int-expr     -> int-expr int-binop int-expr | (int-expr) | integer
              | int(float-expr) | id
string-expr  -> string-expr + string-expr | (string-expr) | string | id
id           -> $attribute-name
float-binop  -> + | - | / | * | max | min
float-expr   -> float-expr float-binop float-expr | (float-expr) | float
              | float(int-expr) | id
comp         -> < | > | = | >= | <= | <>
bool-binop   -> and | or | xor
bool-expr    -> bool-expr bool-binop bool-expr | not bool-expr
              | int-expr comp int-expr | float-expr comp float-expr
              | string-expr comp string-expr
              | match(string-expr, string-expr) | (bool-expr)
              | true | false | id
In MIL, variables are of the form $VARNAME (e.g. $system_arch). The official variable names will be those exported by resource objects in their attributes. To get some idea of the current set of attributes, consult the documentation on resource objects (for the most up-to-date information, invoke the retrieve_all_attributes() method on a resource object and examine the results).
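As an example, a Scheduler might hand the Collection a query such as the following. The attribute names used here ($host_os_name, $host_memory_mb) are hypothetical; the actual names are whatever the resource objects export in their attributes:

```
match($host_os_name, "Linux") and $host_memory_mb >= 64
```

Per the grammar above, this combines a match() string test and an integer comparison with the and operator.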
* ClassFoo can also have an associated external Scheduler so that a user could call the class and the class would then call its Scheduler.