Theodolite was inspired by features offered by various ESB solutions such as Mule, but also by the desire to expand on some of their concepts.
At the moment I am just trying to jot down some ideas and start a tentative manual as soon as something works.
Some of the features it tries to implement are:
Interface-based service-oriented contracts by proxying orchestrated interfaces
Focus on pure native component implementation
Strict separation of component logic and data marshalling
Ability to use nested interfaces and have them orchestrated by the runtime
Complete abstraction of transport layer plumbing
Emphasis on strongly typed boundaries between components that are oblivious to how data gets passed in and out
Data marshalling is encapsulated in a dynamically loadable and unloadable driver mechanism, akin to dynamic module loading for I/O subsystems in most modern operating systems.
Transparent driver/component lifecycle
Emphasis on test driven development and non-intrusive orchestration of components (no impedance mismatch between component design and deployment)
Ability to use natural programming paradigms to structure component code and have the runtime handle transaction demarcation and data plumbing, e.g. transparently converting a producer/consumer pattern into a JMS-based deployment scenario.
These points will be discussed in greater detail in the following sections.
A central motivation of Theodolite is to allow programmers to code components in such a way that they can remain totally oblivious to how cross-cutting concerns such as transactions, data marshalling and dependency injection are achieved. It should be possible to write an application in a natural fashion without having to presuppose deployment considerations such as how data is to be moved, how processing is to be parallelized or how atomicity of operations is to be managed.
Current ESB implementations already address many of the above-mentioned issues. What Theodolite adds is an even higher level of abstraction over low-level data marshalling.
Just as the I/O subsystems of modern operating systems hide lower-level implementation details from higher-level code, Theodolite provides a contract-based service interface to application components that allows them to be developed in TDD fashion with no consideration of how the I/O is going to be handled by lower-level code.
Other approaches such as the Mule client or Spring's JMS template provide added abstraction over these I/O concerns, but their usage has to be coded into the application semantics. So although you inject the dependencies into a template via an IoC mechanism, you still have to include a stub in the client code, either explicitly or by implementing a domain-specific wrapper. Either way, you can't, for example, leverage JMS without coding at least an implicit call to a cross-cutting concern into your application.
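To illustrate the point, consider a typical Spring JmsTemplate usage (a rough sketch; the OrderPublisher class, queue name and payload below are made up for illustration). Even though the template itself is injected by the container, the messaging call is still written into the application logic:

import org.springframework.jms.core.JmsTemplate;

public class OrderPublisher {

    // Injected via an IoC container
    private JmsTemplate jmsTemplate;

    public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void publish(String order) {
        // The cross-cutting I/O concern (JMS) still leaks into the domain logic
        jmsTemplate.convertAndSend("orders.queue", order);
    }
}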
The producer/consumer pattern is a common programming idiom which can be written quite intuitively in normal code. In enterprise applications, however, you usually want to emulate this pattern using some form of message-oriented middleware (MOM). It would be possible, for example, to demarcate such a pattern in code, use the natural paradigm for end-to-end unit testing and have the pattern orchestrated into a JMS-based scenario at deployment time. Such demarcation could occur by convention (e.g. by using a BlockingQueue) or by annotations, as sketched below.
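As a rough sketch of the "convention" variant (the class and thread handling below are purely illustrative), the component code could simply use a BlockingQueue, which the runtime could later map onto a JMS destination at deployment time:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerSketch {

    public static void main(String[] args) throws InterruptedException {
        // The queue is the natural demarcation point that could be orchestrated onto JMS
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    queue.put("work item");       // natural in-process hand-off
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("consumed " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}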
A component's implementation should not have to be aware of how data is passed in and out of it. It should merely adhere to an interface definition whose semantics are independent of the actual data marshalling. For example, you could have the following component implementation:
// ...
private InnerInterface ii;
// ...
public void doFoo() {
    // ...
    ii.callSomeAsynchronousMethod(args);
    // ...
    Object result = ii.callSomeSynchronousMethod(args);
    // ...
}
In a unit test scenario you could simply use Spring DI to wire up the dependencies, but at deployment time you would still have to worry about how this data gets marshalled.
The aim of Theodolite is to preserve this interface usage and handle all of the data plumbing behind the scenes.
Component deployment should be as natural as dependency injected unit tests. There should be no impedance mismatch between component internals and the glue used to wire them up in an enterprise scenario.
The lifecycle of drivers and components is transparent and simple:
A component depends on a driver to handle its I/O requirements, so a component cannot function without the correct driver being loaded
A component can be enlisted or delisted from a driver
If a driver is unloaded, then any enlisted components need to be taken offline.
For each transport protocol there exists only one driver
Multiple components can enlist themselves with one driver
The advantage is a clear separation of concerns and the ability to unload an old driver and update to a new driver of a given protocol without having to take the ESB down.
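Purely as a hypothetical sketch (none of the names below are taken from the actual Theodolite code base), the lifecycle rules above could be captured by an interface along these lines:

// Hypothetical sketch only; it illustrates the lifecycle rules above,
// not Theodolite's actual driver API.
public interface TransportDriver {

    // Exactly one driver exists per transport protocol
    String getProtocol();

    // A component enlists itself so that this driver handles its I/O
    void enlist(Object component);

    // A component can be delisted again without unloading the driver
    void delist(Object component);

    // Unloading the driver must first take any still-enlisted components offline
    void unload();
}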
The 0.0.1 release only offers the option of embedding Theodolite into a running process. To use it in a standalone fashion, you need to create a main method which can instantiate the main Theodolite class.
Say you have an interface definition that you would like to expose, for example:
package fully.qualified.package.name;

public interface SimpleTestInterface {
    void doSomething();
}
for which you have written an implementation (e.g. fully.qualified.package.name.SimpleTestImpl), you can configure this as an interface-based component in a Theodolite XML configuration file as follows:
<config>
    <provider class="net.sf.theodolite.transport.provider.hessian.HessianProviderConfiguration">
        <endpoint>
            <address class="net.sf.theodolite.transport.protocol.HessianEndpointAddress">
                <identifier>http://localhost:28888/foo/</identifier>
            </address>
        </endpoint>
    </provider>
    <driver class="net.sf.theodolite.transport.driver.hessian.HessianDriverConfiguration" />
    <component>
        <interfaceDef>fully.qualified.package.name.SimpleTestInterface</interfaceDef>
        <implementation>fully.qualified.package.name.SimpleTestImpl</implementation>
        <initialState>STARTED</initialState>
        <protocol>hessian</protocol>
    </component>
</config>
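The implementation class referenced in the configuration could be as simple as the following sketch (the method body is illustrative only; the class and interface names are those used in the configuration above):

package fully.qualified.package.name;

public class SimpleTestImpl implements SimpleTestInterface {

    public void doSomething() {
        // Illustrative body only; the component is unaware of how it is invoked
        System.out.println("doSomething() called");
    }
}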
The following code snippet shows how you can create a Theodolite instance, provide it with the name of a config file in the classpath, create an endpoint for the Hessian transport protocol and create a client proxy in order to call methods on the interface definition:
public static void main(String[] args) {
    // Start the main class with a reference to your config file in the classpath
    Theodolite theodolite = new Theodolite("theodolite.conf.xml");

    // Create an endpoint for the Hessian transport protocol
    EndpointAddress sending = new HessianEndpointAddress("localhost", 28888, "foo");
    Endpoint<HessianProtocol> endpoint = new EndpointImpl<HessianProtocol>(sending);

    // Create a client proxy
    TheodoliteClientFactory<SimpleTestInterface> theodoliteFactory;
    theodoliteFactory = new TheodoliteClientFactory<SimpleTestInterface>(
            SimpleTestInterface.class, theodolite);
    SimpleTestInterface testInterface = theodoliteFactory.createProxy();

    // Call methods on the interface
    testInterface.doSomething();
}
The scenarios module provides an example test case (SimpleTheodoliteTest.java) showing how a simple interface gets orchestrated as a Hessian web service.