TAO’s Asynchronous Method Handling (AMH)

 

 

Table of Contents

 

  1. Motivation
  2. Design
  3. Proposed Implementation
    1. Implied-IDL
    2. Issues
      1. General
      2. Exceptions
      3. AMH_QuoterResponseHandler implied IDL
      4. Specifying AMH Skeletons
      5. Design Trade-offs
        1. Static / Runtime
        2. Per-POA / Per-Object
  4. Implementation Considerations

 

 

Motivation:

 

For many types of systems, CORBA AMI significantly improves concurrency, scalability, and responsiveness. Since AMI allows a client to invoke multiple two-way requests without waiting for responses, the client can use the time normally spent waiting for replies to perform other useful work. A similar capability on the server side would be quite useful for many classes of applications, such as multi-tier systems. In a multi-tier system, one or more “middle-tier” servers are placed between a source client and a sink server. A source client’s two-way request may visit multiple middle-tier servers before it reaches its sink server; the result then flows in reverse through these intermediary servers before arriving back at the source client (Fig 1).

 

 

Fig 1

 

 

Without AMH, the general behaviour of the system is as shown in Fig 2: the middle-tier server is blocked waiting for the reply to come back from the sink server. To improve its throughput, the middle-tier server could be made multi-threaded so that it can handle multiple requests concurrently, but threading is not a very scalable option. For further discussion of the other possible options and their trade-offs, please refer to [AMH.doc]. Thus a capability similar to AMI, where the middle-tier server can process new incoming requests without having to wait for responses from sink servers, could prove to be of great benefit.

 

 Fig 2

 

 

Design:

 

The proposed design for AMH follows that of AMI quite closely. When a new request comes into the server, the underlying ORB creates a ResponseHandler (RH) object. The RH object and the other ‘in’ parameters are then passed to the servant. The servant can use the RH to fill in the appropriate ‘out’ or return values, and the ORB takes care of sending them back to the client. The servant is free to do as it pleases with the RH: it can block waiting for the response to come back from the sink server and then use the RH to send that response back, or it can store the RH (for example, in a queue) and return control immediately to the ORB. Later, when the response arrives from the sink server, the middle-tier server can extract the right RH (from the queue) and send the response back to the client. An even more sophisticated approach is to store the RH in an AMI ReplyHandler object so that the response is sent back auto-magically. This is illustrated in Fig 3.

 Fig 3
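To make this concrete, a minimal sketch of a middle-tier servant is shown below. It borrows the Quoter example and the implied IDL introduced in the next section; the generated skeleton name (POA_Stock::AMH_Quoter) and the exact C++ signatures are assumptions for illustration only.

#include <queue>

// Sketch: an AMH servant that queues the ResponseHandler and returns to the
// ORB immediately; the reply is sent later, from whatever code eventually
// receives the sink server's answer.
class Middle_Tier_Quoter : public POA_Stock::AMH_Quoter  // assumed skeleton name
{
public:
  void get_quote (Stock::AMH_QuoterResponseHandler_ptr handler,
                  const char *stock_name)
  {
    // Option 1: block until the sink server answers, then reply right away:
    //   handler->get_quote (this->sink_->get_quote (stock_name));
    //
    // Option 2: remember the handler and return control to the ORB at once.
    this->pending_.push (Stock::AMH_QuoterResponseHandler::_duplicate (handler));
    // ... forward the request to the sink server asynchronously (e.g. via AMI) ...
  }

  // Invoked later (e.g. by an AMI reply handler) when the sink server replies.
  void reply_arrived (CORBA::Long quote)
  {
    Stock::AMH_QuoterResponseHandler_var handler = this->pending_.front ();
    this->pending_.pop ();
    handler->get_quote (quote);   // the ORB now sends the reply to the client
  }

private:
  std::queue<Stock::AMH_QuoterResponseHandler_ptr> pending_;
  Stock::Quoter_var sink_;        // reference to the sink server
};

The key point is that get_quote returns without a reply having been sent; the reply goes out only when the stored handler is eventually invoked.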

 

 

Proposed Implementation:

 

 

Consider the canonical example:

 

module Stock
{
  exception Invalid_Stock_Symbol {};

  interface Quoter
  {
    long get_quote (in string stock_name)
      raises (Invalid_Stock_Symbol);
  };
};

 

Our proposed implied IDL is then something like this:

 

module Stock
{
  interface AMH_QuoterResponseHandler;

  // Only the skeleton class is generated!!!!
  interface AMH_Quoter
  {
    void get_quote (in AMH_QuoterResponseHandler handler,
                    in string stock_name);
  };

  valuetype AMH_QuoterExceptionHolder  /* : ??? some base valuetype?? */
  {
    void raise_get_quote () raises (Invalid_Stock_Symbol);
  };

  interface AMH_QuoterResponseHandler
  {
    void get_quote (in long return_value);
    void get_quote_excep (in AMH_QuoterExceptionHolder holder);
  };

  /* OR:
  interface AMH_QuoterResponseHandler : AMI_QuoterReplyHandler {};
  */
};

 

The ResponseHandler is thus responsible for marshalling any ‘out’ or return values, and it collaborates with the ORB in this task (in the above example, the response is sent back to the client when the get_quote method of AMH_QuoterResponseHandler is called). When the ORB creates the ResponseHandler for an incoming request, it stores state such as the request ID, the connection used to receive the request, and possibly some state related to interceptors. This state is then used by the ORB to send the reply back to the client. The ResponseHandler also has the interface required for a servant to access (at any time) any information that it would have had in a normal upcall.
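The ‘sophisticated approach’ mentioned in the Design section, where the RH is stored inside an AMI ReplyHandler so that the reply is sent back automatically, might look roughly like the following sketch. It assumes the C++ mapping of the implied IDL above and the standard AMI callback names (AMI_QuoterHandler, sendc_get_quote); the exact names and exception-holder types are assumptions.

// Sketch: an AMI reply handler that owns the AMH ResponseHandler; when the
// sink server's reply arrives, it is forwarded straight to the original client.
class Quote_Forwarder : public POA_Stock::AMI_QuoterHandler
{
public:
  Quote_Forwarder (Stock::AMH_QuoterResponseHandler_ptr rh)
    : rh_ (Stock::AMH_QuoterResponseHandler::_duplicate (rh))
  {}

  // Upcall from the ORB when the sink server's reply arrives.
  void get_quote (CORBA::Long ami_return_val)
  {
    // Downcall into the ResponseHandler: the ORB marshals the value and
    // sends the reply back to the original client.
    this->rh_->get_quote (ami_return_val);
  }

  void get_quote_excep (Stock::AMI_QuoterExceptionHolder * /* holder */)
  {
    // Map the sink server's exception into an exception reply for the
    // original client (creation of the AMH exception holder is elided here).
  }

private:
  Stock::AMH_QuoterResponseHandler_var rh_;
};

// In the middle-tier servant's get_quote upcall, roughly:
//   Quote_Forwarder *fw = new Quote_Forwarder (handler);
//   sink_quoter->sendc_get_quote (fw->_this (), stock_name);
//   // return immediately; no thread blocks waiting for the sink server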

 

 

Issues:

General:

- The memory management rules for the ResponseHandler (RH) have yet to be fully concretized; for now it is assumed that the application is responsible for deallocating the RH after it is done with it.

- The RH also has special semantics: it can be ‘invoked’ only once. Once it has been invoked, invoking it again will raise a suitable exception on the server side (such as an “Operation_Not_Possible” exception).

 

Exceptions:

  The benefits of using a valuetype to hold an exception are:

- It is similar to the existing AMI design;

- It works with the standard mapping as well as with the "alternative mapping for dialects without native exceptions"; and

- It is illegal to pass exceptions as arguments in IDL; using a valuetype might keep the language lawyers at bay.

 

AMH_QuoterResponseHandler implied IDL:

- As shown above, the ResponseHandler could either be a new interface or derive from the already existing AMI ReplyHandler interface. Making the RH sub-class the ReplyHandler has certain advantages: it remains interface-compatible with the AMI class and requires less work from the IDL compiler. The disadvantage is that sharing the interface could be confusing, because what happens inside the get_quote methods differs radically between the two classes: in the ReplyHandler it is an upcall, while in the ResponseHandler it is a ‘downcall’.

 

Specifying AMH skeletons:

- It should be possible to generate only the skeleton-side files for AMH. The question then is how server developers specify whether they want AMH skeletons or non-AMH skeletons. The solution is to add a flag to the IDL compiler, such as -GH (compare -GC for AMI callbacks). The developer then derives the servant from either the POA class or the POA_AMH class to choose between the two, as sketched below.
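Under this scheme, choosing between the two programming models amounts to choosing the base class of the servant implementation. A sketch (the generated class names and signatures follow the implied IDL above and are assumptions):

// Ordinary (synchronous) servant, as generated today:
class Sync_Quoter_Impl : public POA_Stock::Quoter
{
public:
  CORBA::Long get_quote (const char *stock_name);   // returns the result directly
};

// AMH servant, generated only when the proposed -GH flag is given:
class AMH_Quoter_Impl : public POA_Stock::AMH_Quoter
{
public:
  // No return value; the ResponseHandler is used to deliver the result later.
  void get_quote (Stock::AMH_QuoterResponseHandler_ptr handler,
                  const char *stock_name);
};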

 

Design Trade-offs:

- Static / Runtime Selection:

o   One option is to determine entirely at run-time whether the application developer wants AMH or not. This would require the developer to implement twice the number of methods; it needs to be discussed whether this option makes sense and what its benefits would be.

o   Alternatively, the developer selects the ‘right’ servant (AMH-enabled or not) to instantiate with the POA at run-time. The servant implementation must then derive from the appropriate POA_* or POA_AMH_* class.

- Activation Granularity: Per-POA / Per-Object

o   One approach is to activate all AMH-enabled servants in a separate POA that has explicitly been set up with a policy to handle AMH; the granularity is then per-POA (see the sketch after this list).

o   The other approach is to build AMH capability into the POA itself, so that AMH-enabled and non-AMH servants can coexist in the same POA. This would be harder to implement, and the gains are not apparent. Programmatically, the per-POA approach can achieve any functionality needed, since servants can be activated in multiple POAs.
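For the per-POA approach, the application code might look roughly as follows. This is only a sketch: creating the child POA uses the standard PortableServer API, but the AMH-specific policy mentioned above does not exist yet and appears only as a comment, and AMH_Quoter_Impl is the hypothetical servant class from the earlier sketch.

// Sketch: all AMH-enabled servants live in their own child POA.
CORBA::ORB_var orb = CORBA::ORB_init (argc, argv);

CORBA::Object_var obj = orb->resolve_initial_references ("RootPOA");
PortableServer::POA_var root_poa = PortableServer::POA::_narrow (obj.in ());
PortableServer::POAManager_var mgr = root_poa->the_POAManager ();

CORBA::PolicyList policies;   // would also contain the (not yet specified) AMH policy
PortableServer::POA_var amh_poa =
  root_poa->create_POA ("AMH_POA", mgr.in (), policies);

// Non-AMH servants stay in the root POA; AMH servants go into the child POA.
AMH_Quoter_Impl amh_servant;
PortableServer::ObjectId_var oid = amh_poa->activate_object (&amh_servant);

mgr->activate ();
orb->run ();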

 

 

 

Implementation Considerations:

 

- Many optimizations currently in place in TAO are based on the assumption that a request is handled within a single activation record (receiving the request and sending back the reply). AMH breaks this basic assumption: the request could be received in one thread and the reply possibly sent from another thread.

- Some CORBA specifications also implicitly assume a single activation record; introducing AMH therefore requires rethinking much of the ORB’s functionality and how it is implemented. An example is the Interceptors specification. Details have yet to be worked out on how the current implementation of Interceptors may break, in which scenarios (request path or reply path), and how this could be avoided.

- All the interactions between the POA, skeleton, and servant need to be analyzed carefully with regard to scope and lifetime, since (C++) objects could now live on the heap. Reference counting of these objects is a potential solution.

- TAO also supports many concurrency models, such as single-threaded, thread-pool, and leader-follower. Again, analysis is needed to determine whether AMH will work with all of these concurrency models. However, because TAO is built using design patterns such as Factory and Strategy, it is foreseen that AMH could work seamlessly with the various concurrency models and may in fact be orthogonal to concurrency.

- Other miscellaneous issues include the interaction of AMH with Servant Managers/Locators/Activators and with the POA_Current.

- In an ideal implementation there should be no penalty for not using AMH, while still being able to enable AMH at run-time. A possible way to achieve this is to subset AMH into a separate library (as is done for RT-CORBA) and load it only when needed.