Tuesday, December 31, 2013

Migration of a Maven-based project to Gradle



Recently I've started noticing that some of the big names in Java open source have migrated their builds from Maven to Gradle.

For my own development needs, Maven was good enough, so for some time I remained sceptical about Gradle and carried on working with the former. Today I gave Gradle a chance, and so far it has been amazing.

As with any new technology, to gain adoption you can't simply be "better": for most people, changing their tools requires a lot of effort, so marginal improvements are not strong enough; you need to be radically better. I believe the creators of Gradle had this in mind when they created the tool.

The Learning Curve

Any tool has its own learning curve, so before I even tried to do something with it I started reading the documentation. At the very beginning it seemed to me that Gradle was just a pretty redo of Apache Ant, but written in Groovy, and I started wondering why I was wasting my time on it.

My advice for those who get the same initial feeling is: carry on reading, it's going to get good really soon.

After reading the quick-start chapter of the Gradle documentation I was able to see the power of this tool, but I still had the impression that it was an equivalent of Ant, just prettier and empowered by the fact that builds are scripted in Groovy. The breakthrough came when I started reading about the so-called "plugins": these plugins bring in a full set of build conventions (like Maven plugins do) but are extremely simple to configure (unlike Maven plugins).

After a little more than two hours of reading I already had a couple of ideas on how to use Gradle to build my apps, so I started migrating a simple tool I use for my day-to-day work. I have to say that the learning curve is very smooth.

The project I migrated

To start playing, I took a desktop app I created to help with my day-to-day work: a simple Java app with a Swing UI, but with an embedded instance of Mule ESB. The app is distributed as a zip file that anyone can execute with two clicks.



The pom.xml I used to build and distribute it was based on an archetype that easily gave me the executable jar with its dependencies; I had also configured the project to create a zip file that I can distribute without much effort.

In order to start migrating the app, first I needed to create the Gradle build file: a Groovy script called "build.gradle". This file is our new "pom.xml".

This is how the final file looks:
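In outline it comes down to this (a sketch: the group, main class and dependency coordinates here are illustrative placeholders, not the exact originals):

apply plugin: 'java'
apply plugin: 'application'

group = 'com.juancavallotti.tools'
version = '1.0'

//dependency versions kept as extra properties
ext.muleVersion = '3.4.0'
ext.slf4jVersion = '1.6.1'

dependencies {
    compile group: 'org.mule', name: 'mule-core', version: muleVersion
    compile group: 'org.slf4j', name: 'slf4j-api', version: slf4jVersion
}

//entry point used by the application plugin
mainClassName = 'com.juancavallotti.tools.Main'

repositories {
    mavenCentral()
    mavenLocal()
    maven { url 'http://repository.mulesoft.org/releases/' }
}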

The first line of the script applies the java plugin to the project; this imports a predefined build configuration that suits common Java projects.

My project directory structure already had the "/src/main/java", "/src/main/resources", etc. that typical Maven projects have. I am happy with that structure and simply didn't wish to change it; fortunately, the creators of the java plugin liked these conventions as well and made them the default.

The second line of the script applies the "application" plugin, a fantastic plugin that helps you distribute standalone Java apps.

Next, I define the naming properties of my project. This is the same concept as the coordinates we need to define for each Maven project; the main difference is that here they are optional. The name of the artifact will be taken from the name of the folder that contains the project.

Then I defined, as extra properties, the versions of each of my dependencies. My first attempt was to not use the 'ext.' object, but Gradle printed a warning at build time saying that this is deprecated. It was not obvious to me how to get this done properly, so I took a sneak peek at the Spring Framework build script for some inspiration.

After that, I defined the libraries my project depends on. Gradle provides a more concise syntax for this as well, but I stuck with the little verbosity I'm used to because of my Maven background.
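For comparison, the concise form packs the same coordinates into a single string per dependency (coordinates illustrative):

dependencies {
    compile 'org.mule:mule-core:3.4.0'
}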

I configured the main class as a project property as well; this is required so the application plugin knows the entry point of the app.

Finally, I configured my repos. Since Gradle can use Maven repos (but is not Maven), I need to specify that I want the Maven Central repo to be included, as well as my local repo (.m2/repository) and a couple of other repos I normally configure.

That is all for the migration of my build, let's now take a look at how to use it!

Building and Distributing

To build the project I can simply run:

$ gradle build

'build' is just one of the tasks the java plugin brings to Gradle. We have many more, e.g. clean, test, etc., just like we would expect from Maven out of the box. For a list of the tasks we can run, take a look at the java plugin chapter of the Gradle documentation.
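Gradle itself can also print every task available to the project:

$ gradle tasks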


Also I can use tasks from the application plugin to run my app:

$ gradle run

And this runs the app from the defined main class. The application plugin also has some very interesting options for distributing the app; for example, I can create a distribution zip file, a tar file, and even OS-specific startup scripts. For more information on what you can do with this plugin, take a look at the application plugin chapter of the Gradle documentation.
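For example, at the time of writing the zip and tar distributions come out of the following tasks, and both archives end up under build/distributions with the startup scripts bundled in:

$ gradle distZip
$ gradle distTar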


Conclusions

This configuration is just scratching the surface of what you can do with this very powerful build system, and since this is just my first ride there are probably things I could have done better; for example, maybe there is a way to define a default set of repositories for every build.

My overall feeling is that the configuration is more concise, and that complex setups which would normally require writing Mojos and a lot of reading can be done with a small amount of Groovy.

Even though I didn't get the exact same result in my distribution, I like what I got out of the box. My previous distribution was likewise based on accepting what an archetype gave me, so in this case I have to say that what comes out of the box is good enough.

What I didn't like: Maven does not regard itself as a mere build tool but aims to standardize projects, and I get the feeling that this idea has been lost in Gradle.

Gradle has surprised me with its simplicity, and yet it looks like you can tackle simple and complex things in the same way. This is, of course, thanks to the pre-built plugins, which are, I believe, what makes the difference for the adoption of this tool.

Saturday, May 18, 2013

How to build a cloud connector for Mule ESB

I'm working on a project which uses Mule ESB as a central piece. To add a new feature I need to consume Google's Custom Search API, so to make it beautiful I'm going to create a connector using Mule's DevKit, showing step by step the process of building a connector that consumes a REST API.

First, I create a DevKit module from the archetype:

mvn archetype:generate -DarchetypeGroupId=org.mule.tools.devkit \
    -DarchetypeArtifactId=mule-devkit-archetype-generic -DarchetypeVersion=3.4.0 \
    -DarchetypeRepository=http://repository.mulesoft.org/releases/ \
    -DgroupId=com.mycompany -DartifactId=mule-module-google-custom-search \
    -Dversion=1.0.0-SNAPSHOT -DmuleVersion=3.4.0 -DmuleModuleName=GoogleSearch \
    -Dpackage=com.mulesoft.module.googlesearch

Since I'm building a cloud connector for Google's Custom Search API, it is handy to have its reference at hand:

https://developers.google.com/custom-search/v1/cse/list 

You may also want to create a custom search engine:

http://www.google.com/cse/create/new

Lastly, here is the link to the source code of the project:

https://github.com/juancavallotti/mule-module-google-custom-search

Now let the coding begin! First of all I want to clean up the sample module, pretty much removing everything except the class and its annotations. I have chosen to build a module (instead of a proper connector) because it's fairly simple functionality which does not require setting up a connection. The class looks like this:

/**
 * Google custom search Module.
 *
 * @author Juan Alberto López Cavallotti.
 */
@Module(name="google-search", schemaVersion="1.0.0-SNAPSHOT")
public class GoogleSearchModule {

}

So first we will add the required configuration parameters as stated in the documentation: the API key, and the custom search engine ID or its URL (one of the two is required). The search query is also required, but we'll save it to pass as a parameter to the only operation this connector will have. I will give these parameters nicer names than the really short ones they currently have; this way the module is more self-documenting.


@Module(name="google-search", schemaVersion="1.0.0-SNAPSHOT")
public class GoogleSearchModule {

    @Configurable
    private String apiKey;

    @Configurable
    @Optional @Default("")
    private String searchEngineId;

    @Configurable
    @Optional @Default("")
    private String searchEngineUrl;

    //getters - setters
}


DevKit requires us to write Javadoc for each element of the class so it is able to generate the connector's documentation properly. I won't show this documentation in this post unless I have something to say about it; having said that, you might want to take a look at the final version of this module to get more insight into its development.

Next, I want to create the one and only operation as a method of this module definition. This method has a lot of parameters, most of them with default values. To me this is pretty ugly, and DevKit allows me to do better, so I will: I'll pick the most important parameters for direct setting and gather the others through a POJO. I will keep the return value as the JSON String the service returns (for the sake of simplicity), but it would be nicer to create an object structure and take advantage of the fact that Mule bundles the Jackson library.


/**
 * Perform a Google custom search.
 *
 * {@sample.xml ../../../doc/GoogleSearchModule-connector.xml.sample google-search:search}
 *
 * @param query The search query to send to google search.
 * @param siteSearch The site where to search.
 * @param searchType The type of the search to be performed.
 * @param searchConfiguration Configuration for this google search.
 *
 * @return The JSON result as returned by the google custom search API.
 */
@Processor
public String search(String query, @Optional @Default("") String siteSearch, 
                     @Optional SearchType searchType, @Optional SearchConfiguration searchConfiguration) {
    return null;
}


You need to consider the following:

  • The method must be annotated with one of the annotations provided by DevKit to generate the different types of message processors (each type has its own signature requirements).
  • Every operation should be documented properly.
  • You need to create a sample usage of the operation in the sample file (which luckily gets created by the archetype).
  • You need to document EVERY parameter if you wish the build to succeed.


The sample XML now looks like the following:

<!-- BEGIN_INCLUDE(google-search:search) -->
<google-search:search query="#[header:inbound:query]" />
<!-- END_INCLUDE(google-search:search) -->


I want to perform an initial configuration and validation when the module starts, just to make sure we're ready to go when making a search, so I create a method that takes advantage of the configuration element lifecycle:

private HashMap<String, String> connectorConfigs;
 
/**
 * Perform module initialization.
 */
@Start
public void initializeConfiguration() {
    connectorConfigs = new HashMap<String, String>();
 
    connectorConfigs.put("key", apiKey);

    if (StringUtils.isBlank(searchEngineId) && StringUtils.isBlank(searchEngineUrl)) {
        throw new IllegalArgumentException("You must configure either searchEngineId or searchEngineUrl");
    }
 
    if (StringUtils.isNotBlank(searchEngineId) && StringUtils.isNotBlank(searchEngineUrl)) {
        throw new IllegalArgumentException("You must configure either searchEngineId or searchEngineUrl, but not both.");
    }
 
    addIfNotBlank(connectorConfigs, "cx" , searchEngineId);
    addIfNotBlank(connectorConfigs, "cref", searchEngineUrl);
}


This basically creates a map with the base parameters for every request. Every request must carry at least the API key (which is how Google charges us for our searches) and a reference to our custom search engine (which we need to create manually in order to use the API).
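The addIfNotBlank helper is trivial; sketched, it just avoids putting empty values in the map:

private void addIfNotBlank(Map<String, String> params, String key, String value) {
    //only add the parameter when it carries an actual value
    if (StringUtils.isNotBlank(value)) {
        params.put(key, value);
    }
}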


Next, let's dig into the actual implementation. I want access to the MuleEvent in this message processor, so I will slightly modify the method's signature. What this implementation does is: create a map with the parameters to send to the API, convert that map into the API URL, and finally dispatch it through Mule's HTTPS connector. This is very convenient since it allows configuring the HTTPS connector parameters, though there is still room for improvement: I could have made the HTTP connector configurable, but I won't, just to keep it simple. The search method now looks like this:


@Processor
@Inject
@Mime("application/json")
public String search(MuleEvent event, String query, @Optional @Default("") String siteSearch,
                     @Optional @Default("WEB_SEARCH") SearchType searchType, @Optional SearchConfiguration searchConfiguration) {
 
    MuleContext context = event.getMuleContext();
 
    MuleMessage message = event.getMessage();
 
    HashMap<String, String> searchParams = buildSearchParams(query, siteSearch, searchType, searchConfiguration);
 
    String apiUrl = buildSearchUrl(searchParams);
 
    try {
        OutboundEndpoint endpoint = context.getEndpointFactory().getOutboundEndpoint(apiUrl);
 
        //configure the message.
        message.setOutboundProperty("http.method", "GET");
 
        MuleEvent responseEvent = endpoint.process(event);
        //return the payload.
        return responseEvent.getMessage().getPayload(String.class);
    } catch (MuleException e) {
        logger.error("Error while querying the google custom search API", e);
    }
    return null;
}

This is pretty straightforward, and I won't get into the implementation of the auxiliary methods because they are just boilerplate. I do want to highlight the @Inject annotation, which is used to get the actual MuleEvent (and distinguish this parameter from what the user needs to provide), and also the @Mime annotation, which will generate the appropriate message header.
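For the curious, buildSearchUrl amounts to little more than URL-encoding the parameter map onto the API endpoint; a sketch (not the exact original):

private static final String API_URL = "https://www.googleapis.com/customsearch/v1";

private String buildSearchUrl(HashMap<String, String> searchParams) {
    StringBuilder url = new StringBuilder(API_URL).append('?');
    try {
        for (Map.Entry<String, String> param : searchParams.entrySet()) {
            url.append(param.getKey()).append('=')
               .append(URLEncoder.encode(param.getValue(), "UTF-8")).append('&');
        }
    } catch (UnsupportedEncodingException ex) {
        throw new RuntimeException(ex); //UTF-8 is always supported
    }
    //drop the trailing '&'
    return url.substring(0, url.length() - 1);
}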

Now we just want to create the MuleStudio update site so we can install and try this module:

mvn clean package -Ddevkit.studio.package.skip=false

We can import this through the update site and use it! There are further configurations I can (and probably will) make to this connector; DevKit has annotations for customizing the Studio dialogs and much more!

Here is a sample project which uses the recently created connector:


<mule xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:google-search="http://www.mulesoft.org/schema/mule/google-search" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" version="EE-3.4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/google-search http://www.mulesoft.org/schema/mule/google-search/1.0.0-SNAPSHOT/mule-google-search.xsd">
    
    <google-search:config name="Google_Search" apiKey="<api key>" searchEngineId="<engine id>" doc:name="Google Search"/>
    
    <flow name="mule-configFlow1" doc:name="mule-configFlow1">
        <http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" doc:name="HTTP"/>
        <google-search:search config-ref="Google_Search" query="http connector" doc:name="Google Search" />
        <logger level="ERROR" message="Payload is: #[payload]" doc:name="Logger"/>
        <logger level="ERROR" doc:name="Logger"/>
    </flow>
</mule>

Please continue reading the DevKit cookbook for much more info:

http://www.mulesoft.org/documentation/display/current/Cloud+Connector+Devkit+Cookbook

Tuesday, May 14, 2013

How To Unit-Test an Annotation Processor

The Issue

Lately I have been developing an annotation processor, and I found that unit testing it is not as trivial as it might sound.

Some would say "You just need to mock all of the javax.lang.model.* packages and that's it!" Unfortunately, mocking these elements is not as easy as it might appear, and I don't really see a reason to do it other than speeding up the test suite; that alternative has to be taken really seriously, weighing test time against development effort.

The solution

The path I took is invoking the Java compiler within my unit tests. Doing this is fairly straightforward, but some considerations must be taken into account:

  • In order for the tests to be repeatable, make sure your working directory is invariant; Maven does a good job at this.
  • It's easier if your annotation processor can be auto-discovered. That is, place the appropriate "javax.annotation.processing.Processor" file in META-INF/services/.
  • Make sure you delete the junk files after the test has run. Since we're going to compile classes, some .class files will appear; deleting them is not mandatory, but they would get mixed in with the source code, which is not nice at all.

Now that we've taken care of the errands, I present a generic annotation-processing test case which you can use for your unit tests.

Please note that this is not the only (or strictly the best) way of doing this, but it is a way and I like it, so here is the code:


First, we have a simple interface which we will implement to build our test cases; implementations return the list of files to compile and may then verify whether the compilation failed or whether there were compilation messages.
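A minimal sketch of such an interface (the names here are mine, purely illustrative):

import java.io.File;
import java.util.List;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

public interface AnnotationProcessingTest {

    //the source files that should be compiled for this test case
    List<File> getClassesToCompile();

    //called after compilation with the outcome and the collected messages
    void assertCompilationResult(boolean success, List<Diagnostic<? extends JavaFileObject>> diagnostics);
}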

Next, we have a really simple sample implementation: a smoke test which verifies that no "critical warnings" or compiler errors are issued for a valid configuration.
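Sketched along the same lines (the sample source path is hypothetical):

import java.io.File;
import java.util.Arrays;
import java.util.List;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;
import static org.junit.Assert.assertNotSame;
import static org.junit.Assert.assertTrue;

public class ValidConfigSmokeTest implements AnnotationProcessingTest {

    @Override
    public List<File> getClassesToCompile() {
        //a source file known to have a valid configuration
        return Arrays.asList(new File("src/test/resources/ValidConfigSample.java"));
    }

    @Override
    public void assertCompilationResult(boolean success, List<Diagnostic<? extends JavaFileObject>> diagnostics) {
        assertTrue("compiling a valid source should succeed", success);
        for (Diagnostic<? extends JavaFileObject> diagnostic : diagnostics) {
            //neither compiler errors nor critical (mandatory) warnings are expected
            assertNotSame(Diagnostic.Kind.ERROR, diagnostic.getKind());
            assertNotSame(Diagnostic.Kind.MANDATORY_WARNING, diagnostic.getKind());
        }
    }
}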

Finally, we have the test class. For this I make use of the parameterized test runner that JUnit 4 provides, and the parameters are exactly the different implementations of our simple interface.

Please note that, in order to be completely generic, this unit test should obtain its parameters from configuration; nevertheless I made it like this for the sake of simplicity, just keeping note of the room for improvement.

After this it is just boilerplate code: get the compiler, configure the file manager, compile the relevant files, invoke the test, and finally clean up after ourselves.
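Putting it all together, the test class might look roughly like this:

import java.io.File;
import java.util.Arrays;
import java.util.Collection;
import javax.tools.DiagnosticCollector;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AnnotationProcessorTest {

    @Parameters
    public static Collection<Object[]> testCases() {
        //hard-coded for simplicity; reading these from configuration is the obvious improvement
        return Arrays.asList(new Object[][] {{ new ValidConfigSmokeTest() }});
    }

    private final AnnotationProcessingTest testCase;

    public AnnotationProcessorTest(AnnotationProcessingTest testCase) {
        this.testCase = testCase;
    }

    @Test
    public void runTestCase() throws Exception {
        //get the compiler and configure the file manager
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<JavaFileObject>();
        StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null);
        try {
            Iterable<? extends JavaFileObject> units =
                    fileManager.getJavaFileObjectsFromFiles(testCase.getClassesToCompile());
            //the processor is auto-discovered through META-INF/services
            boolean success = compiler.getTask(null, fileManager, diagnostics, null, null, units).call();
            //invoke the test case's own verification
            testCase.assertCompilationResult(success, diagnostics.getDiagnostics());
        } finally {
            fileManager.close();
            //clean up the .class junk generated next to the sources
            for (File source : testCase.getClassesToCompile()) {
                new File(source.getPath().replaceAll("\\.java$", ".class")).delete();
            }
        }
    }
}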

Please feel free to modify or upgrade this test case and I hope you find it as useful as it is to me. 

Monday, May 6, 2013

jDTO Binder gains experimental support for compile-time config validation

One of the most frustrating things that emerge when working with a framework is misconfiguration, especially if we find out about it at runtime or, worse, after moving the application into production.

Normally, when frameworks suffer from this kind of weakness, developers might decide to create IDE-specific plugins or extra tooling to prevent unwanted and unnoticed runtime misconfiguration. The object-to-object mapping framework I've developed (jDTO Binder) is a great tool but sadly suffers from this issue, so as part of release 1.5 I am developing a set of tools that aim to ease development. The first of these tools is a compile-time annotation processor that can validate the configuration of a given DTO.

Annotation Config Validation

The great thing about compile-time annotation processors is that they integrate with any IDE and they're able to fail the build if a misconfiguration is detected.

Here is how the jDTO Binder compile-time verifier looks out of the box in NetBeans IDE 7.2:


Currently this is experimental and I'm still working on the idea; the specific validations to be performed, and the way these validations should manifest, will emerge as the idea develops. For the time being only very experimental and limited support is implemented, but hopefully it will translate into improved productivity.

In its current state, the compile-time verifier checks and prints compile-time warnings when the source property of a given DTO field is mistyped, and it also encourages the user to place annotations on getters instead of fields, following a performance suggestion shown in the official documentation.
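For instance, assuming the usual @Source mapping annotation, a DTO like the following is what the verifier would flag (a sketch, not taken from the actual test suite):

import org.jdto.annotation.Source;

public class PersonDTO {

    //"firstNam" is a typo for a property of the source bean: the verifier
    //warns at compile time instead of letting it surface at runtime, and it
    //would also suggest moving the annotation to the getter
    @Source("firstNam")
    private String firstName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}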