First Steps to Arquillian

Overview

Arquillian is an integration testing framework that allows tests to be executed inside a managed container environment. In this blog post, Arquillian will be used to test persistence functionality within WildFly 8.

Installation

Simply add the following Maven dependencies (their versions are assumed to be managed elsewhere, e.g., by the Arquillian BOM in dependencyManagement):

    <!-- Make Arquillian work with JUnit -->
    <dependency>
      <groupId>org.jboss.arquillian.junit</groupId>
      <artifactId>arquillian-junit-container</artifactId>
      <scope>test</scope>
    </dependency>

    <dependency>
      <groupId>org.jboss.arquillian.protocol</groupId>
      <artifactId>arquillian-protocol-servlet</artifactId>
      <scope>test</scope>
    </dependency>

    <!-- For managing and wrapping of maven dependencies used by the application under test -->
    <dependency>
      <groupId>org.jboss.shrinkwrap.resolver</groupId>
      <artifactId>shrinkwrap-resolver-impl-maven</artifactId>
      <scope>test</scope>
    </dependency>

    <dependency>
      <groupId>org.jboss.arquillian.extension</groupId>
      <artifactId>arquillian-persistence-dbunit</artifactId>
      <version>1.0.0.Alpha7</version>
      <scope>test</scope>
    </dependency>

A managed container where Arquillian will execute the tests is required. To do this, provide a profile that is active by default with the managed container declared as a dependency. Since WildFly will be used, wildfly-arquillian-container-managed is declared as the dependency.

  <profiles>
    <profile>
      <id>arq-wildfly-managed</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <dependencies>
        <dependency>
          <groupId>org.wildfly</groupId>
          <artifactId>wildfly-arquillian-container-managed</artifactId>
          <version>${version.arquillian.container}</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </profile>
  </profiles>

Arquillian configuration is declared in the arquillian.xml file in the test resource path.

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://jboss.org/schema/arquillian
        http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

  <!-- Force the use of the Servlet 3.0 protocol with all containers, as 
    it is the most mature -->
  <defaultProtocol type="Servlet 3.0" />

  <!-- configuration for wildfly instance. Will look at JBOSS_HOME environment variable for wildfly location -->
  <container qualifier="jboss" default="true">
  </container>

  <extension qualifier="persistence">
    <property name="defaultDataSource">java:jboss/datasources/TestDS</property>
  </extension>
</arquillian>

Since the location of the WildFly server is not configured in the jboss container element, the managed container will look for the WildFly installation via the JBOSS_HOME environment variable, which should be declared as follows:
JBOSS_HOME=/path/to/wildfly

Deployment Declaration in JUnit

Arquillian tests do not differ much from ordinary JUnit tests; they just need some additional configuration.

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.persistence.Cleanup;
import org.jboss.arquillian.persistence.CleanupStrategy;
import org.jboss.arquillian.persistence.TestExecutionPhase;
import org.jboss.shrinkwrap.api.Archive;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
@Cleanup(phase = TestExecutionPhase.AFTER, strategy = CleanupStrategy.USED_ROWS_ONLY)
public class SampleModelTest {

  @Deployment
  public static Archive<?> createDeployment() {

    return ShrinkWrap
            .create(WebArchive.class)
            .addAsLibraries(
                    Maven.resolver().loadPomFromClassLoaderResource("persistence-test-pom.xml")
                            .importRuntimeDependencies().resolve().withTransitivity().asFile())
            .addPackages(true, "com.sample.model")
            .addPackages(true, "com.sample.data")
            .addAsWebInfResource("h2test-ds.xml")
            .addAsResource("test-persistence.xml", "META-INF/persistence.xml")
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
  }
  ...
}
  • Arquillian has a JUnit runner class that can be activated by annotating the test class with @RunWith(Arquillian.class).
  • The @Deployment annotation marks the method that describes how the test class should be packaged before being deployed to WildFly for testing. This method must be public and static.
  • ShrinkWrap is used to generate the archive file. For generating a WAR file, WebArchive.class must be specified in the create() method.
  • The libraries that the application under test will be using can be included in the package by calling addAsLibraries(). In this example, persistence-test-pom.xml was created to declare the dependencies that will be used within the test.
  • Packages or classes that will be used in the test can be added to the web archive via addPackages() and addClasses(), respectively.
  • A data source can also be declared and added to the WEB-INF resources of the web archive so that it is loaded by WildFly when the test archive is deployed to the server by Arquillian. Alternatively, the data source can be declared in WildFly via the admin console, in which case the h2test-ds.xml file can be omitted.
  • Since JPA is used by the application, META-INF/persistence.xml must be present in the classpath. For this example, the persistence unit is declared such that it uses the test data source.
  • Lastly, CDI is used by the example and thus beans.xml must be present in the web archive.

Only the basic dependencies need to be declared within the POM that will be used in the test.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <artifactId>persistence</artifactId>
  <groupId>com.test</groupId>
  <version>0.0.1-SNAPSHOT</version>
  <name>Sample</name>

  <properties>
    <deltaspike.version>1.2.1</deltaspike.version>
    <commons.beanutils.version>1.9.2</commons.beanutils.version>
  </properties>

  <dependencies>

    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>${commons.beanutils.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.core</groupId>
      <artifactId>deltaspike-core-api</artifactId>
      <version>${deltaspike.version}</version>
      <scope>compile</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-partial-bean-module-api</artifactId>
      <version>${deltaspike.version}</version>
      <scope>compile</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-partial-bean-module-impl</artifactId>
      <version>${deltaspike.version}</version>
      <scope>runtime</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-jpa-module-api</artifactId>
      <version>${deltaspike.version}</version>
      <scope>compile</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-jpa-module-impl</artifactId>
      <version>${deltaspike.version}</version>
      <scope>runtime</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.core</groupId>
      <artifactId>deltaspike-core-impl</artifactId>
      <version>${deltaspike.version}</version>
      <scope>runtime</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-data-module-api</artifactId>
      <version>${deltaspike.version}</version>
      <scope>compile</scope>
    </dependency>

    <dependency>
      <groupId>org.apache.deltaspike.modules</groupId>
      <artifactId>deltaspike-data-module-impl</artifactId>
      <version>${deltaspike.version}</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</project>
The h2test-ds.xml descriptor that gets bundled into WEB-INF defines the test data source:

<?xml version="1.0" encoding="UTF-8"?>
<datasources xmlns="http://www.jboss.org/ironjacamar/schema"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.jboss.org/ironjacamar/schema http://docs.jboss.org/ironjacamar/schema/datasources_1_0.xsd">
   <!-- The datasource is bound into JNDI at this location. We reference 
      this in META-INF/test-persistence.xml -->
   <datasource jndi-name="java:jboss/datasources/TestDS"
      pool-name="test" enabled="true"
      use-java-context="true">
      <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url>
      <driver>h2</driver>
      <security>
         <user-name>sa</user-name>
         <password>sa</password>
      </security>
   </datasource>
</datasources>

In this example, the data source was bundled with the WAR file, but it might be better to create a staging DB instance and declare the data source JNDI name inside the WildFly server. This way, the JPA models in the code can be validated against the latest DB schema that will be used in higher environments.

For this example, the Arquillian Persistence extension was used so that DBUnit can easily be leveraged to validate DB-related code.

@RunWith(Arquillian.class)
@Cleanup(phase = TestExecutionPhase.AFTER, strategy = CleanupStrategy.USED_ROWS_ONLY)
public class SampleModelTest {
  ...
  
  @Test
  @UsingDataSet("datasets/user-model.yml")
  @ShouldMatchDataSet("datasets/expected-user-model.yml")
  public void fieldsShouldMatch() {
  }
}
  • JUnit’s @Test annotation is used to indicate the test method.
  • @UsingDataSet is used to load data from a file. XML, JSON, and YAML formats are supported; YAML was used in this example.
  • @ShouldMatchDataSet is used to verify the database contents against the expected data set after the test.

The following are the YAML files (user-model.yml and expected-user-model.yml, respectively); a fuller test sketch follows them:

user:
  - id: 1
    name: "NEW"
    age: 30

user:
  - name: "NEW"
    age: 30
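Putting the annotations and data sets together, a complete test might look like the sketch below. Note that the User entity, its getters, and the injected EntityManager are illustrative assumptions and are not taken from the original project.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.persistence.ShouldMatchDataSet;
import org.jboss.arquillian.persistence.UsingDataSet;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class SampleModelTest {

  // ... @Deployment method as shown earlier ...

  // Injected by the container because the test runs inside WildFly
  @PersistenceContext
  private EntityManager em;

  @Test
  @UsingDataSet("datasets/user-model.yml")                  // seeds the user table before the test
  @ShouldMatchDataSet("datasets/expected-user-model.yml")   // compares the table contents afterwards
  public void fieldsShouldMatch() {
    // User is a hypothetical JPA entity mapped to the "user" table seeded above
    User user = em.find(User.class, 1L);
    Assert.assertEquals("NEW", user.getName());
  }
}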

Conclusion

Arquillian provides a safety net for checking whether the application is compatible with the environment it will be deployed to. The downside is that these tests take longer to run than plain unit tests.

Content-Based Router in SwitchYard using Camel

I have been working on a project which uses SwitchYard (SY) for some time now. Although I am not the main person responsible for coding in SY, I was able to make some modifications in that code base, which gave me a chance to see how SY is used in our project. Since making those modifications, I have always felt that we are not really using SY’s full potential. For example, I know that SY supports CDI and Camel, but I could not see references to these in our code. I think this is mainly due to our unfamiliarity with SY.

In this post, I would like to try a simple use case that hopefully shows how powerful SY is when used with CDI and Camel. To do this, a simple service that can be accessed via HTTP will be created. The service will accept an XML payload that can have one of two possible formats.

One format will be something like

<request>
  <person>
    <name></name>
  </person>
</request>

The other format will be something like

<request>
  <robot>
    <model></model>
  </robot>
</request>

The service will have to print out “Hello, ${name}” or “Hello, ${model}” depending on the XML format received. This service gives us a chance to implement the Content-Based Router pattern in Camel.

Before we build the service, please note that this post assumes that you know the basics of SY and that you already have SwitchYard 1.0 and the Eclipse IDE with the SY plugin configured on your machine.

To start building the service, we first have to create a SwitchYard project in Eclipse.

We then have to create a new Java interface with a hello() method. This interface will be the service that will take in incoming requests.

public interface HelloService {
	String hello();
}

After creating the service interface, we can then create a component that will handle service requests. In our case, the component will be a camel implementation and thus should extend from RouteBuilder.

public class CamelServiceRoute extends RouteBuilder {

	/**
	 * The Camel route is configured via this method.  The from endpoint is required to be a SwitchYard service.
	 */
	public void configure() {
		from("switchyard://HelloService")
			.streamCaching()
			.choice()
				.when(xpath("/request/person").booleanResult())
					.to("bean:person")
				.when(xpath("/request/robot").booleanResult())
					.to("bean:robot");			
	}
}

RouteBuilders allow the formulation of routing logic by exposing exchange and message details. In our configure() method above, the XML payload in the message body is inspected to determine which bean to call.

For routes in SY, the from() method should always point to a SwitchYard endpoint; that is, the scheme should always be "switchyard:". This is shown in the from("switchyard://HelloService") call, which points to the HelloService interface that we previously created. The streamCaching() call simply specifies that the stream should be cached so that it can be read again.

The bulk of the routing logic is in the choice(), when() and to() specifications. Depending on the XML, the appropriate bean is called for further processing. Camel provides the xpath() method to allow the XML payload to be inspected. If the XML received is request -> person, the “person” bean will be called. If the XML received is request -> robot, the “robot” bean will be called.

We will use CDI to define the beans used in the two to() calls above.

@Named("person")
public class PersonBean {
	
	public void greet(@XPath("/request/person/name/text()") String name) {
		System.out.println("Hello, " + name);
	}

}

@Named("robot")
public class RobotBean {

	public void greet(@XPath("/request/robot/model/text()") String name) {
		System.out.println("Hello, " + name);
	}
}

Both classes use the javax.inject.Named annotation to specify their bean names. Since each class declares only one public method, Camel will call that single public method when the bean is invoked from the RouteBuilder class (see Camel's bean binding documentation for more details). The method parameters are annotated with org.apache.camel.language.XPath; @XPath instructs Camel to evaluate the XPath expression against the XML payload and assign the result to the parameter.

After creating the beans, the last step is promoting and exposing HelloService by binding it to one of the available SY bindings. For our service, we can use the HTTP binding. Bindings can be configured in the visual editor (of the Eclipse IDE) or directly in switchyard.xml.

<?xml version="1.0" encoding="UTF-8"?>
<switchyard xmlns="urn:switchyard-config:switchyard:1.0" xmlns:camel="urn:switchyard-component-camel:config:1.0" xmlns:http="urn:switchyard-component-http:config:1.0" xmlns:sca="http://docs.oasis-open.org/ns/opencsa/sca/200912" name="sy-camel-hello" targetNamespace="urn:com.sy.camel:hello:1.0">
  <sca:composite name="sy-camel-hello" targetNamespace="urn:com.sy.camel:hello:1.0">
    <sca:component name="CamelServiceRoute">
      <camel:implementation.camel>
        <camel:java class="com.sy.camel.hello.CamelServiceRoute"/>
      </camel:implementation.camel>
      <sca:service name="HelloService">
        <sca:interface.java interface="com.sy.camel.hello.HelloService"/>
      </sca:service>
    </sca:component>
    <sca:service name="HelloService" promote="CamelServiceRoute/HelloService">
      <sca:interface.java interface="com.sy.camel.hello.HelloService"/>
      <http:binding.http name="helloHttp">
        <http:contextPath>http/hello</http:contextPath>
        <http:method>POST</http:method>
      </http:binding.http>
    </sca:service>
  </sca:composite>
</switchyard>

We can now test our application by issuing HTTP POST requests to http://localhost:8080/http/hello.
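One quick way to do that is with a small standalone client. The sketch below uses plain HttpURLConnection and assumes the default local WildFly HTTP port and the context path configured in the binding above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HelloClient {

	public static void main(String[] args) throws Exception {
		// Payload matching the "person" format; swap in the robot XML to hit the other branch
		String payload = "<request><person><name>Pam</name></person></request>";

		HttpURLConnection conn = (HttpURLConnection) new URL(
				"http://localhost:8080/http/hello").openConnection();
		conn.setRequestMethod("POST");
		conn.setDoOutput(true);
		conn.setRequestProperty("Content-Type", "text/xml");

		try (OutputStream out = conn.getOutputStream()) {
			out.write(payload.getBytes("UTF-8"));
		}

		// The greeting is printed on the server console; here we only check the HTTP status
		System.out.println("Response code: " + conn.getResponseCode());
	}
}

Posting the person and robot payloads should then produce console output like the following: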

19:22:06,834 INFO  [stdout] (http-localhost/127.0.0.1:8080-1) Hello, Robocop

19:22:14,796 INFO  [stdout] (http-localhost/127.0.0.1:8080-1) Hello, Pam

A Sample Usage of Java’s Future and Spring’s @Async

Yesterday, a colleague asked about Java Threading. We have this sending-of-files-via-FTP requirement that is already working. Our users wanted us to add functionality that will allow them to do the sending asynchronously. The initial plan was to put the sending of the files in another Thread manually. This means that we have to create a Runnable implementation and manage the Threads ourselves.

My initial thought when my colleague asked about this was that this is something that JavaScript’s Promises can solve very easily. And then it hit me that I had read about something similar in Java: the Future interface, which has been part of java.util.concurrent since Java 5. Futures allow the result of an asynchronous operation to be retrieved at a later time. Vanilla Java has implementations, such as FutureTask, that allow the task to be executed in a separate thread. Spring makes this easier and cleaner by making it declarative via the @Async annotation.
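To make the idea concrete before bringing Spring in, here is a minimal plain-JDK sketch using an ExecutorService; the Thread.sleep simply stands in for the FTP transfer:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PlainFutureExample {

	public static void main(String[] args) throws Exception {
		ExecutorService executor = Executors.newSingleThreadExecutor();

		// submit() returns a Future backed by a FutureTask running in the executor's thread
		Future<Boolean> result = executor.submit(new Callable<Boolean>() {
			public Boolean call() throws Exception {
				Thread.sleep(1000); // pretend to send the file via FTP
				return true;
			}
		});

		System.out.println("Main thread keeps running...");
		System.out.println("FTP result: " + result.get()); // blocks until the task finishes

		executor.shutdown();
	}
}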

Here’s some very simple code that uses Java’s Future and Spring’s @Async feature.

After downloading the required Spring dependencies (it’s in spring-context for Maven users), we have to enable Spring’s asynchronous tasks by using the @EnableAsync annotation.

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
public class FtpAsyncConfig {
}

We can then use the @Async annotation to tell Spring to execute the method asynchronously when called.

import java.io.File;
import java.util.concurrent.Future;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Component;

@Component
public class DefaultFtpApp implements FtpApp {
	@Async
	public Future<Boolean> sendAsync(final File file, final String host)
			throws InterruptedException {
		boolean result = false;

		//call the long running task here
		// this is in place of the actual sending of the file via FTP
		Thread.sleep(1000);

		// say sending succeeds and returns true
		result = true;

		System.out.println("Done running...");

		// Wrap the result in an AsyncResult
		return new AsyncResult<>(result);
	}
}

The other things to note above are the return type and the return value. Spring’s asynchronous tasks allow a void return type, but since we want the result of the operation to be queried by the users, we return a Future object, with the actual result type specified inside the angle brackets. The actual return value is wrapped in AsyncResult, an implementation of the Future interface, by passing it to the constructor.

We can now test this.

import java.io.File;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.AnnotationConfigContextLoader;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = AnnotationConfigContextLoader.class, classes = {
		FtpAsyncConfig.class, DefaultFtpApp.class })
public class FtpAppTest {

	@Autowired
	private FtpApp ftp;

	@Test
	public void test() throws InterruptedException, ExecutionException {

		System.out.println("About to run...");

		// Call to sendAsync
		// this will execute in another thread. The sysout below this line will
		// execute before the sysout within sendAsync
		Future<Boolean> ftpResult = ftp.sendAsync(new File("somefile"),
				"somehost");

		System.out.println("This will run immediately.");

		// Using get without a timeout will wait for the async task to finish.
		// If you want to wait for only a certain time for the result, you may
		// use the get(long timeout, TimeUnit unit) instead. This flavor of
		// get() will throw a TimeoutException if the result is not yet given
		// after the specified amount of time.
		Boolean result = ftpResult.get();

		System.out.println("And the result of get() is " + result);

		Assert.assertTrue(result);

	}
}

The get() method can be used on the Future interface to retrieve the result of the asynchronous operation. This method also has another flavor where a timeout can be specified.
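As a small illustration of that second flavor, the test class above could gain another test method like the sketch below; the two-second timeout is an arbitrary value chosen because sendAsync() sleeps for one second:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

	// An additional test method inside FtpAppTest
	@Test
	public void testWithTimeout() throws InterruptedException, ExecutionException {

		Future<Boolean> ftpResult = ftp.sendAsync(new File("somefile"), "somehost");

		try {
			// Wait at most two seconds for the asynchronous task to complete
			Boolean result = ftpResult.get(2, TimeUnit.SECONDS);
			Assert.assertTrue(result);
		} catch (TimeoutException e) {
			// Reached only if the task takes longer than the specified timeout
			Assert.fail("FTP send did not complete within the timeout");
		}
	}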

Introduction to CoffeeScript by Jeremy Ashkenas

Last weekend, I was fortunate to attend a CoffeeScript talk by none other than Jeremy Ashkenas. I was really lucky that Manila.js had organized the event so near my workplace that I was able to sneak out during my break (huge thanks to Manila.js for organizing it).

To be honest, I wasn’t really sure who Jeremy Ashkenas was before this event. So when I looked him up and found out he was the creator of CoffeeScript, that got me excited about this event. I had looked at CoffeeScript before and thought that it was just too much work for client-side scripting (CoffeeScript will need to be compiled to JavaScript). [I haven’t really done any server-side JS programming so my evaluation of CoffeeScript is based on my experience in client-side scripting.] But getting the introduction straight from the creator was still an eye-opener.

Here’s a summary of his talk:

Symbiotic Languages

Symbiotic Languages are languages that compile to another language’s bytecode or source code. There are more Symbiotic Languages nowadays due to more mature VMs or runtime environments.

One such example is the Java Virtual Machine (JVM). We’ve recently seen a proliferation of programming languages that run on top of the JVM. Some of these languages are not even semantically close to Java. Languages such as Clojure, JRuby, Jython and Scala have constructs that are alien to Java. Clojure and Scala support higher-order functions (although Project Lambda will allow lambda expressions in Java 8), while JRuby and Jython are JVM implementations of two of the most popular dynamic languages, Ruby and Python.

A symbiotic language’s strength is that it allows morphing the original language’s constructs (this can be a complete paradigm shift, though it still has limits) to suit a different application’s needs (or even another programmer’s preference) while still running the compiled code on top of a stable environment.

CoffeeScript is just JavaScript

We’ve seen huge strides in JavaScript engines in the last 5-10 years. Google’s launch of Chrome and the V8 JavaScript Engine started this browser performance war between major browsers like Firefox, Chrome and IE.

It truly is impressive how JavaScript has improved since its inception in 1995. For a language that was designed in 10 days, it’s amazing how it got a lot of things right. Unfortunately, it still has some bad parts (Douglas Crockford’s JavaScript: The Good Parts is a great reference for this)…

CoffeeScript’s inspiration is using only the good parts. CoffeeScript is a concise language whose syntax and structure are borrowed from Ruby, Python and Haskell. It compiles to equivalent JavaScript that is guaranteed to pass through JSLint. As such, existing JavaScript libraries can work seamlessly with it.

CoffeeScript Demo

Here are some examples used in the talk. CoffeeScript code and its translated JavaScript equivalent are provided in this section.

In CoffeeScript, everything is an expression.

print if one
  two
else
  three

In the code above, the return keyword can be omitted and the last line will be used as the value of the if-else expression that is passed to the print function (parentheses are also optional).

print(one ? two : three);

CoffeeScript is also smart enough to translate this piece of code to a ternary expression.

Expressions also allow the simplification of handling an error.

result = try
	missing.property
catch error
	error

console.log "The error is #{result}"

In the code above, result will be assigned the value of the valid expression if no error is encountered. It will be assigned the value of the error if there’s an error.

result = (function() {
  try {
    return missing.property;
  } catch (_error) {
    error = _error;
    return error;
  }
})();

console.log("The error is " + result);

CoffeeScript also supports list comprehensions, which allow concise code for transforming a list.

print (transform item for item in list)

This translates to multiple lines in JavaScript.

print((function() {
  var _j, _len1, _results;

  _results = [];
  for (_j = 0, _len1 = list.length; _j < _len1; _j++) {
    item = list[_j];
    _results.push(transform(item));
  }
  return _results;
})());

It’s also easier to declare a class in CoffeeScript.

class Pirate
	hello: ->
		console.log 'yarr'

The class keyword makes the intent of Pirate much clearer. It isn’t as obvious in pure JavaScript, where the function keyword is used for both function declarations and class-like constructor declarations. The equivalent JavaScript is:

var Pirate;

Pirate = (function() {
  function Pirate() {}

  Pirate.prototype.hello = function() {
    return console.log('yarr');
  };

  return Pirate;

})();

CoffeeScript also allows the dynamic declaration of functions for a class. In this example, the definition of the loot function is determined at runtime.

class Pirate
	if century > 1700
		loot: ->
			say "Give me gold"
	else
		loot: ->
			say "Arr"
This compiles to the following JavaScript:

var Pirate;

Pirate = (function() {
  function Pirate() {}

  if (century > 1700) {
    Pirate.prototype.loot = function() {
      return say("Give me gold");
    };
  } else {
    ({
      loot: function() {
        return say("Arr");
      }
    });
  }

  return Pirate;

})();

Lastly, CoffeeScript allows the lexical scoping of the this keyword by using the fat arrow (=>).

class Book
	save: ->
		jQuery.ajax this.url, this.data, (response) => 
			merge this.data, response.data

This is especially useful for callback functions when using AJAX. In the code above, the this keyword in the response callback still refers to the Book instance. In pure JavaScript, this is bound dynamically, depending on how the function is called.

This is how CoffeeScript handles or translates the this keyword within the fat arrow:

var Book;

Book = (function() {
  function Book() {}

  Book.prototype.save = function() {
    var _this = this;

    return jQuery.ajax(this.url, this.data, function(response) {
      return merge(_this.data, response.data);
    });
  };

  return Book;

})();

Conclusion

In this post, we’ve seen the strengths of CoffeeScript. It provides clear, concise and powerful constructs which allow us to write expressive code. CoffeeScript simply translates to plain, clean JavaScript code, which still gives us access to the whole gamut of JavaScript’s programming niceties. But will I use it?

I’d say yes and no. I really like how CoffeeScript shields us from JavaScript’s ugly parts and how it gives us extra powerful constructs that are sorely missing from JavaScript, but I still think that ease of debugging is very important. Imagine getting an error for a line number that does not directly map to the source code that you’ve written. Yes, CoffeeScript preserves the names you use in your code, but it still is easier if we can get direct feedback from our debugger, right? Of course, this can change in the (near?) future (check out source maps).

‘Hello, World’ for Spring-AMQP and RabbitMQ

RabbitMQ provides messaging capabilities for applications. It supports several operating systems and programming languages, which makes it adaptable to a wide range of applications.

For Java applications, Spring already provides support for RabbitMQ via the Spring-AMQP package.

RabbitMQ Configuration

To start using RabbitMQ, Erlang and the RabbitMQ server itself need to be installed.

After installation, the RabbitMQ server must be started and configured. To start the RabbitMQ server:

  1. Go to {RabbitMQ home directory}/sbin directory
  2. Run rabbitmq-server

RabbitMQ already provides a default username/password and a default virtual host, but you might want to organize your message queues by creating new users and virtual hosts. To do so, the following commands can be used:

  • To create a new user, run
    rabbitmqctl add_user {username} {password}
  • To create a virtual host, run
    rabbitmqctl add_vhost {vhost}
  • To give the new user permission to use the virtual host, run
    rabbitmqctl set_permissions -p {vhost} {username} ".*" ".*" ".*"

In addition, RabbitMQ provides a web-based management console. This has to be enabled first by running

rabbitmq-plugins enable rabbitmq_management

The management web console can then be accessed via http://{server}:15672/ after restarting the RabbitMQ server.

“Hello, World!” Java Application

For our simple application, the producer will publish a “hello {counter}” message every second, and the consumer should receive each message and echo it to the console.

Spring AMQP configuration

To start off, Spring AMQP must be downloaded. Here’s the Maven dependency for Spring AMQP’s RabbitMQ support:

<dependency>
 <groupId>org.springframework.amqp</groupId>
 <artifactId>spring-rabbit</artifactId>
 <version>1.1.3.RELEASE</version>
</dependency>

Spring AMQP gives us abstractions via configuration (either XML configuration or @Bean configuration). For our simple application, we will be using the XML configuration.

The first step is to configure the connection factory, exchange and queue that will be used by the application.

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:rabbit="http://www.springframework.org/schema/rabbit"
  xsi:schemaLocation="http://www.springframework.org/schema/rabbit
  http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">

  <rabbit:connection-factory id="connectionFactory" host="localhost" virtual-host="sample" username="admin" password="qwerty" />

  <rabbit:admin connection-factory="connectionFactory" />

  <rabbit:queue name="testqueue" />

  <rabbit:direct-exchange name="testexchange">
    <rabbit:bindings>
      <rabbit:binding queue="testqueue"></rabbit:binding>
    </rabbit:bindings>
  </rabbit:direct-exchange>

   <rabbit:template id="amqpTemplate" connection-factory="connectionFactory" exchange="testexchange" queue="testqueue" /> 
</beans>
  1. The rabbit:connection-factory element configures the connection factory. The virtual-host, username and password can be specified here; if they are not supplied, the default values of “/”, “guest” and “guest”, respectively, will be used.
  2. The rabbit:admin element gives this application admin rights for creating the queues and exchanges (if they do not exist yet).
  3. The rabbit:queue element declares a queue named “testqueue”. If a queue with that name does not exist, it will be created.
  4. The rabbit:direct-exchange element declares the exchange that will be used by the producer. Its binding tells it to route messages to “testqueue”.
  5. The rabbit:template element declares the template that will be used for sending and consuming messages. The template already provides several convenience methods for sending and getting messages, and its default connection factory, exchange and queue are configured here.
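As noted earlier, the same wiring can also be expressed with Java-based @Bean configuration instead of XML. The class below is a rough, hedged equivalent of the configuration above; the class name is made up, and the credentials and names simply mirror the XML:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

	@Bean
	public CachingConnectionFactory connectionFactory() {
		// Mirrors <rabbit:connection-factory .../>
		CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
		cf.setVirtualHost("sample");
		cf.setUsername("admin");
		cf.setPassword("qwerty");
		return cf;
	}

	@Bean
	public RabbitAdmin rabbitAdmin() {
		// Mirrors <rabbit:admin .../>, declaring queues and exchanges on the broker
		return new RabbitAdmin(connectionFactory());
	}

	@Bean
	public Queue testQueue() {
		return new Queue("testqueue");
	}

	@Bean
	public DirectExchange testExchange() {
		return new DirectExchange("testexchange");
	}

	@Bean
	public Binding binding() {
		// Binds the queue to the exchange, using the queue name as the routing key
		return BindingBuilder.bind(testQueue()).to(testExchange()).with("testqueue");
	}

	@Bean
	public RabbitTemplate amqpTemplate() {
		// Mirrors <rabbit:template .../>
		RabbitTemplate template = new RabbitTemplate(connectionFactory());
		template.setExchange("testexchange");
		template.setRoutingKey("testqueue");
		template.setQueue("testqueue");
		return template;
	}
}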

Producer Code

The producer can simply use the AMQP template to send messages.

package com.rabbit;

import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.scheduling.annotation.Scheduled;

public class Producer {

	@Autowired
	private AmqpTemplate messageQueue;
	
	private final AtomicInteger counter = new AtomicInteger();

	public static void main(String[] args) {
		new ClassPathXmlApplicationContext(
				"classpath:META-INF/spring/mq-producer-context.xml");
	}

	@Scheduled(fixedRate = 1000)
	public void execute() {
		System.out.println("execute...");
		messageQueue.convertAndSend("hello " + counter.incrementAndGet());
	}
}
  1. The template’s convertAndSend() method is called inside execute() to send messages to the queue.
  2. Spring’s @Scheduled annotation (with a fixedRate of 1000 ms) causes the execute() method to run every second.

Here’s the complete XML configuration of the producer:

<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:rabbit="http://www.springframework.org/schema/rabbit"
	xmlns:context="http://www.springframework.org/schema/context"
xmlns:task="http://www.springframework.org/schema/task"
	xsi:schemaLocation="http://www.springframework.org/schema/rabbit
		http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
		http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
		http://www.springframework.org/schema/task
		http://www.springframework.org/schema/task/spring-task-3.0.xsd">

	<import resource="mq-context.xml" />

	<task:scheduler id="myScheduler" pool-size="10" />
	<task:annotation-driven scheduler="myScheduler" />

	<bean id="producer" class="com.rabbit.Producer"></bean>
</beans>

Consumer code

For the consumer, we will be using asynchronous messaging to avoid polling the queue for messages. To use this, MessageListener must be implemented and a concrete implementation of the onMessage(Message) method must be supplied.

package com.rabbit;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Consumer implements MessageListener {

	public static void main(String[] args) {
		new ClassPathXmlApplicationContext(
				"classpath:META-INF/spring/mq-consumer-context.xml");
	}

	public void onMessage(Message message) {
		System.out.println(message);
		try {
			Thread.sleep(1500);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}

}

The listener must be registered to Spring AMQP via the XML configuration.

<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:rabbit="http://www.springframework.org/schema/rabbit"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:aop="http://www.springframework.org/schema/aop"
	xmlns:task="http://www.springframework.org/schema/task"
	xsi:schemaLocation="http://www.springframework.org/schema/rabbit
		http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
		http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
		http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
		http://www.springframework.org/schema/task
		http://www.springframework.org/schema/task/spring-task-3.0.xsd">
		
	<import resource="mq-context.xml"/>
	
	<rabbit:listener-container
		connection-factory="connectionFactory">
		<rabbit:listener ref="consumer" queue-names="testqueue" />
	</rabbit:listener-container>
	
	<context:annotation-config />
	<context:component-scan base-package="com.rabbit" />
	<aop:aspectj-autoproxy />

	<bean id="consumer" class="com.rabbit.Consumer"></bean>
</beans>

The rabbit:listener-container element sets up the consumer bean as the listener that will receive messages from “testqueue”.

Running both the consumer and producer above will send and receive several “hello…” messages.

Annotation-based HTML to Object Mapper using JSoup Parser

I’ve recently worked on a project that requires crawling and retrieving information from a website. After looking for open source Java HTML parsers, we found JSoup. JSoup is a library that provides jQuery-like selectors for extracting data from an HTML source.

JSoup is awesome, but it also left us with a lot of boilerplate code for parsing different HTML pages. To avoid verbose code, I tried playing around with annotations. The idea is to use annotations to map an HTML source to a Java object (sort of like JAXB). The basic code of what I came up with is discussed in this blog post. (Please do note that I used Spring and there may be some Spring APIs in the code.)

For the implementation, the annotations’ targets are the setters of the Java object’s fields.

The first annotation is the @Selector. This will store the CSS selector for retrieving the element that contains the value that will be set using the annotated setter. The value parameter should contain the CSS selector of the HTML element.

@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface Selector {
    String value();
}

@Selector needs one of the following annotations to determine how the value will be extracted from the selected element:

  • @TextValue – retrieve the text within the element (remove all HTML tags within the element)
  • @HtmlValue – retrieve the HTML within the element
  • @AttributeValue – retrieve the value from an attribute in the element. The name of the attribute can be specified in the name parameter.

@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface TextValue {
}

@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface HtmlValue {
}

@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface AttributeValue {
    String name();
}

The HTML parser just needs to read annotations from a Java bean’s methods and retrieve the different annotations above. When a @Selector is present in a method, the value of the @Selector will be used to retrieve the element. @TextValue, @HtmlValue or @AttributeValue will then be used to get the data from the element.

import java.io.InputStream;
import java.lang.reflect.Method;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
import org.springframework.core.convert.ConversionService;
import org.springframework.core.convert.support.DefaultConversionService;

import com.google.common.base.Preconditions;

public class JSoupHtmlParser<T> implements HtmlParser<T> {

    // host of the website that will be crawled (used as the base URI for relative links)
    private static final String HOST = "localhost:8080/sample";

    // Spring's ConversionService converts the selected String value to the type of the setter parameter
    private static final ConversionService conversion = new DefaultConversionService();

    private final Class<T> classModel;

    // Pass in the Java bean class that will contain the mapped data from the HTML source
    public JSoupHtmlParser(final Class<T> classModel) {
        this.classModel = classModel;
    }

    // Main method that translates HTML to an object
    public T parse(final InputStream is) throws HtmlParserException {
        try {
            final Document doc = Jsoup.parse(is, "UTF-8", HOST);
            T model = this.classModel.newInstance();

            for (Method m : this.classModel.getMethods()) {
                String value = null;
                // check if the Selector annotation is present on the method
                if (m.isAnnotationPresent(Selector.class)) {
                    value = parseValue(doc, m);
                }

                if (value != null) {
                    m.invoke(model, convertValue(value, m));
                }
            }

            return model;
        } catch (Exception e) {
            // wrap and rethrow (assumes HtmlParserException has a constructor accepting a cause)
            throw new HtmlParserException(e);
        }
    }

    private Object convertValue(final String value, final Method m) {
        Preconditions.checkArgument(m.getParameterTypes().length > 0);

        // Only the first parameter of the setter is set
        return conversion.convert(value, m.getParameterTypes()[0]);
    }

    private String parseValue(final Document doc, final Method m) {
        final String selector = m.getAnnotation(Selector.class).value();

        final Elements elems = doc.select(selector);

        if (elems.size() > 0) {
            // no support for multiple selected elements yet; just get the first element
            final Element elem = elems.get(0);

            // Check which value annotation is present and retrieve the data accordingly
            if (m.isAnnotationPresent(TextValue.class)) {
                return elem.text();
            } else if (m.isAnnotationPresent(HtmlValue.class)) {
                return elem.html();
            } else if (m.isAnnotationPresent(AttributeValue.class)) {
                return elem.attr(m.getAnnotation(AttributeValue.class).name());
            }
        }

        return null;
    }
}
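
To illustrate how the annotations and the parser fit together, here is a hedged usage sketch; the ArticlePage class and its CSS selectors are invented for this example and are not part of the original project:

// Hypothetical page model whose setters are mapped to elements of an imaginary page
public class ArticlePage {

    private String title;
    private String link;

    @Selector("h1.title")
    @TextValue
    public void setTitle(String title) {
        this.title = title;
    }

    @Selector("a.permalink")
    @AttributeValue(name = "href")
    public void setLink(String link) {
        this.link = link;
    }

    public String getTitle() {
        return title;
    }

    public String getLink() {
        return link;
    }
}

An instance can then be produced from an HTML stream with new JSoupHtmlParser<ArticlePage>(ArticlePage.class).parse(inputStream).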

Spring FactoryBean

While working on a Spring application, I noticed that there was a type discrepancy between the bean declaration in the application context and the class type of the to-be-injected property in Java.

The velocityEngine bean declared in the application context was of type VelocityEngineFactoryBean.

    <bean id="velocityEngine" 
        class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
        <property name="resourceLoaderPath" value="/email_templates/"/>
    </bean>

Meanwhile, the setVelocityEngine() method of the Java class accepts an object of type VelocityEngine:

import org.apache.velocity.app.VelocityEngine;

@Component
public class Emailer {

    private VelocityEngine velocityEngine;

    @Autowired
    public void setVelocityEngine(VelocityEngine velocityEngine) {
        this.velocityEngine = velocityEngine;
    }
}

As it turns out, FactoryBean is Spring’s way of instantiating more complex objects: when a FactoryBean is declared, Spring injects the object it produces rather than the factory itself, which explains the type discrepancy above. To create a FactoryBean properly, a class must implement org.springframework.beans.factory.FactoryBean<T>. There are three methods of note in this interface (a minimal sketch follows the list):

  • T getObject() should return an instance of the object. For the case of the example above, this is an instance of VelocityEngine
  • Class<?> getObjectType() should return the type of the object that will be created by the FactoryBean. This can be null if the type is not known in advance.
  • boolean isSingleton() should return true if there should only be one copy of the object. This allows Spring to cache the instance of the created object.
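
Here is that minimal sketch: a stripped-down factory for the example above. It is not Spring’s actual VelocityEngineFactoryBean source; the Velocity property keys are standard, but the class as a whole is a simplified illustration.

import org.apache.velocity.app.VelocityEngine;
import org.springframework.beans.factory.FactoryBean;

public class SimpleVelocityEngineFactoryBean implements FactoryBean<VelocityEngine> {

    private String resourceLoaderPath;

    public void setResourceLoaderPath(String resourceLoaderPath) {
        this.resourceLoaderPath = resourceLoaderPath;
    }

    public VelocityEngine getObject() throws Exception {
        // Build and configure the "complex" object; this is what actually gets injected
        VelocityEngine engine = new VelocityEngine();
        engine.setProperty("resource.loader", "file");
        engine.setProperty("file.resource.loader.path", resourceLoaderPath);
        engine.init();
        return engine;
    }

    public Class<?> getObjectType() {
        return VelocityEngine.class;
    }

    public boolean isSingleton() {
        // Allows Spring to cache the single VelocityEngine instance
        return true;
    }
}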