Arquillian Docker


For some time I have been thinking about integrating Arquillian with Docker. This would be something like moving integration testing to the next level.

I have done some PoCs, but nothing serious, only to see how we could approach the problem, whether it would be possible, and so on.

So the first thing we must take into consideration is whether this integration should come in the form of a new container adapter (like WildFly, TomEE, Tomcat, …) or as an extension.

My first idea was to implement it as a new container. It could work pretty well, but there are two problems:

  1. Each container can run any application server inside it, so we need to start one container (the Docker container) and then deploy to the application server within Docker using the remote adapter of that application server. Of course, another idea is to create one container adapter for each Docker/application-server combination.

To summarize: you can start a container which starts a WildFly, and then you need to use the WildFly remote adapter to deploy the application into that remote container (which is in fact running inside Docker).
It would then be great if, using the same Docker container adapter, you could use it with Apache TomEE by only configuring the remote adapter of TomEE.

  2. The second problem is time. You can spawn a new process (Docker) using Spacelift, but you need to know when the container is up and ready to receive incoming connections. Starting a Docker container is pretty fast, but then you need to add the time to start the server inside it, which can be several seconds as well. Moreover, some containers start the server as a daemon (basically those implemented as a Linux service); when the server finishes starting, the Docker process goes to the background (you know this has happened because the process returns the container id). In other cases, for example Docker images with Tomcat, the Docker process never returns; it stays in the foreground until you kill it.

So after looking at these requirements, one might think that implementing it as an extension is a better idea because we would have more control over everything. But of course I may be wrong.

In this area I think Arquillian Spacelift can help us a lot; I am currently working on it because of some missing scenarios I found while using it with MongoDB.

We must also think about the following scenario: we must be able to start as many Docker containers as an application requires, but not all of them necessarily need to have an application server installed. For example, you may want to start a container with MongoDB linked to another Docker container with WildFly, so you can execute the tests against an architecture similar to production.

As you can see, there are a lot of features we could offer; of course we can start with them one by one.

I think the first decision should be whether it is a container adapter or an extension.

Any thoughts?

My first idea around Docker was to support it similar to a Server in the new Arquillian 2.0 design plans. A Container can contain nested Containers, a deployable is optional and can be of any Type. The Type just depends on what the Container supports, e.g. a ShrinkWrap Archive, a Zip, an Image, etc…

A Server Container should then be able to ‘deploy’ a zipped version of WildFly, and the remote adapter simply inherits IP information from the ‘host’ Container.

But in retrospect, Docker wouldn’t really be used as a Deployable Container; it’s more a complete, finished image. The Docker Container would therefore most likely not ‘deploy’ the zipped version of WildFly, but simply have a WildFly remote adapter inherit its configuration.

I’m guessing you would need to do a port scan until the desired Container management/http port opens up, regardless of the state of the Docker process itself.
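That kind of port probe needs nothing beyond the JDK. A minimal sketch — the class name, timeouts, and retry interval below are illustrative choices, not part of any Arquillian API:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortWaiter {

    // Repeatedly tries to open a TCP connection to host:port until it
    // succeeds or timeoutMillis elapses. Returns true once something is
    // listening, false if the deadline passes first.
    public static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 500);
                return true; // port is open, server is accepting connections
            } catch (IOException notYet) {
                try {
                    Thread.sleep(250); // back off briefly before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

Note that an open port only proves the socket accepts connections; for an application server you may additionally want to poll its management/HTTP endpoint until it actually answers.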

I think as far as Arquillian 1.0 API goes an Extension is the best solution. With some manual configuration you can have SpaceLift start any Docker Image during BeforeSuite and the user can configure any Remote Adapter to use the Docker IP. (If you want to add a dash of magic you can attempt to configure the current container with the Docker Image IP, but the property name depends on the Container Adapter in use and is not generally typed to anything beyond String).

Yes, I think this is the best way to do it, because in this case the Container (Docker) will inherit the configuration of the adapter, which is exactly the case: Docker as a container of information, where the kind of information is not so important.

That said, I think that, depending on timing, we should wait for Arquillian 2.0 (Candidate Release) before starting with Docker, because otherwise we are going to spend some effort implementing it as an extension only to “throw it away” after Arquillian 2.0.

Yes, I have done this with MongoDB; we will need to provide something similar for Docker (probably using the Docker REST API).
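The Docker remote API does expose a `GET /_ping` endpoint that answers HTTP 200 with body `OK` when the daemon is reachable, so a basic availability check needs only the JDK. A minimal sketch (the class and method names are illustrative):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class DockerPing {

    // Sends GET /_ping to the Docker remote API (e.g. http://localhost:2375)
    // and reports whether the daemon answered with HTTP 200.
    public static boolean isDockerAvailable(String baseUri) {
        try {
            HttpURLConnection con = (HttpURLConnection)
                    new URL(baseUri + "/_ping").openConnection();
            con.setConnectTimeout(1000);
            con.setReadTimeout(1000);
            return con.getResponseCode() == 200;
        } catch (IOException unreachable) {
            return false;
        }
    }
}
```

The same pattern generalizes to any service that exposes a health endpoint, which is how the MongoDB check worked as well.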

Hey lordofthejars,

Thanks for your thoughts so far.
Here is what I think about it:

I definitely think we should go the way of the extension. This has a lot of advantages in my opinion:
We “just” have to start up the Docker container, forward the necessary ports, and all remote containers will work out of the box. This way we won’t need to write a new adapter for each container.
One thing we could do on top is write a managed-docker container (e.g. managed-docker-wildfly-8.1) that would get the Docker WildFly image from Docker Hub and also bundle the remote WildFly adapter.

One important thing about booting up multiple Docker images, in my opinion, is that we support the CoreOS system. We currently use it everywhere we can; it supports describing a complete cloud environment with Docker containers, helps isolate the different parts of the system, and keeps the cloud up and running.
What we could do now is create a CoreOS extension that boots up a CoreOS cloud and deploys the tests. This way we can provide real integration testing, from end to end, with the same infrastructure as in production.
To me, this would be more than just awesome.

What do you think? What about your PoCs? Are they already running something?

Best wishes,

Daniel Sachse

The time here is undefined. But the Docker integration would serve as a nice test ground for the new APIs.

Regardless, I don’t think the code base we’re talking about here will be massive, and the majority of it would be around integration with Docker/SpaceLift, and I don’t see that change between 1.0/2.0.

Cool, then let’s do an extension, no problem; in fact, as I said, for Arquillian 1.x an extension is the best approach.

Yes, I really like the idea. About the PoCs: they use Arquillian Spacelift; let me find them and I will share them with you, though there is no secret there after reading what Arquillian Spacelift does. You are also talking about CoreOS (which is a really amazing system), but it would be good to keep in mind that it must also run with “standalone local” Docker containers.

Yes, running just the Docker container is totally fine. I am just talking about a separate extension that would make it possible to boot up a CoreOS system that internally uses Docker containers, as this is what we mostly use.


You want to boot up CoreOS as a Docker container to control Docker containers?

I want to start by saying that this integration is going to be f*cking awesome! It fits perfectly with the grand vision we’ve had for Arquillian all along, to give developers the ability to do real testing. Nothing is more real than the whole operating system and all its services. With CoreOS, it’s even possible to bootstrap an entire infrastructure in a jiffy.

As an alternative to Spacelift, I want to point out that there’s a native Java API for Docker in the works. I’ll admit I haven’t used it, but it appears to be active and that’s promising, so we should explore it.

The benefit is that we might be able to tap into post-launch hooks to avoid the need to resort to port scanning, though in the prototype we certainly don’t have to get it perfect.

When I first thought about the integration, I was mapping it as a container in my mind, but I can certainly see the argument for making it an extension instead, particularly on Arquillian 1.x. One thing to keep in mind is that we’re going to learn a lot along the way, so anything we assume in the beginning might end up changing totally in a few months. We should encourage each other to think way outside the box because there is massive potential here.

To encourage the most innovation, I’d like to see the CoreOS extension implemented separately to start out. Its focus is on a different scale (infrastructure vs a single container) and therefore it may need a different type of thinking than what we want for the Docker extension. Of course, we should exchange ideas between them, and maybe we’ll discover that they are, in fact, a single extension.

To ensure that we start off on the right foot with this extension, I think we should take a page from the BDD playbook and write up a narrative & features so that we have a clear goal in mind. The features should set expectations of how exactly the extension will work so that people hacking on it know what we are trying to accomplish.

Here’s the template:

Title (one line describing the story)
As a [role]
I want [feature]
So that [benefit]
Acceptance Criteria: (presented as Scenarios)
Scenario 1: Title
Given [context]
  And [some more context]...
When  [event]
Then  [outcome]
  And [another outcome]...

Alex and Daniel, would you like to fill in the blanks with your vision (Alex for Docker, Daniel for CoreOS + Docker)?

Let’s do this!


Not quite. We use CoreOS to control our cloud system. We have several CoreOS nodes running Docker containers inside them. CoreOS takes total control of which Docker containers boot up on which node.
We migrated our complete infrastructure to CoreOS. We have containers for our Application Servers, DB Instances, Caches, Queues, etc.

The docker-java project is interesting. Yes, I think we can use it (or at least we can try) so we don’t have to deal with CLI problems. Anyway, in the future we can use Spacelift to install Docker, so the library’s requirement “that docker must be already installed” would not be a requirement for Arquillian, because we could install it using Spacelift. But that is another story.

About my vision of the Docker extension: I will for sure write it up following this template, but I need a week or so to finish some other work :smile:

I think, and this is just my opinion, that maybe we should start from the ground up; I mean, let’s start with the small part (Docker) and then grow towards CoreOS. Of course I say this without knowing CoreOS at all, but it seems to me that starting with Docker will help us get to CoreOS.

CoreOS seems like a good way forward. In the future, however, I envision that we would want to support other OSes as Docker hosts as well, to be closer to the production environment. So it would make sense to make the control protocol pluggable: either CoreOS or Spacelift or whatever else.

So, I’d rather think of it as a container than an extension. I have a similar concept in mind: a proxy cartridge on OpenShift that is able to spawn other cartridges with the containers you want to test. If you think about it, it’s architecturally very similar to the Docker one: you have a container/extension, you connect to it, and you deploy to a container inside the parent container/extension. Have you guys already thought about how to propagate information about the deployment from the inner container (deployment URL, etc.)? It could need a reverse proxy or something similar.

If extension is the way to go for 1.x version, ok then :wink:

As for Spacelift, please feel free to raise any questions or throw tomatoes :-). While executing native code and using non-Java third-party libraries through a fluent Java API has proven useful, it still has a long way to go.

Currently I am doing some PoCs with it, so at first it seems that Spacelift would not be required, but as I mentioned earlier we could use it to install Docker or to obtain some Docker-related information.

About the host: we can add a mapping between a local port and the container port, equivalent to -p 8080:8080, so the host is localhost, which redirects to the inner Docker container.

I am exploring some possibilities as well; let me finish my PoC and then I will share my results here.

Tomorrow I will share the snippet I have written against the Tomcat Docker image (basically because it was the Docker container I had installed on my computer). Then you will be able to see what problems can be found and, for sure, how we can face them and design the first prototype of the extension. Let me say for now that it will require some configuration parameters that cannot be obtained by default, like the container name and the port forwarding (although the latter could be avoided by querying for the container IP).

Ok, here is the example:

 @RunWith(Arquillian.class)
 public class DockerTest {

    @Deployment(managed = false, name = "helloworld", testable = false) //<1>
    public static WebArchive createDeployment() {
       return ShrinkWrap.create(WebArchive.class, "hello.war")
                   .addClass(HelloWorldServlet.class); //<2>
    }

    @ArquillianResource
    private Deployer deployer;

    @BeforeClass
    public static void startDockerContainer() {
       DockerClientConfigBuilder configBuilder = DockerClientConfig
             .createDefaultConfigBuilder();
       configBuilder.withVersion("1.12").withUri("http://localhost:2375"); //<3>

       final DockerClient dockerClient = DockerClientBuilder.getInstance(
             configBuilder.build()).build();

       ExposedPort jmxPort = ExposedPort.tcp(8089);
       ExposedPort tomcatPort = ExposedPort.tcp(8080);

       final CreateContainerResponse container = dockerClient
             .createContainerCmd("tomcat")
             .withExposedPorts(tomcatPort, jmxPort).exec(); //<4>

       Ports ports = new Ports(tomcatPort, Ports.Binding(8080));
       ports.bind(jmxPort, Ports.Binding(8089));

       dockerClient.startContainerCmd(container.getId())
             .withPortBindings(ports).exec(); //<5>

       // stop the container when the JVM dies
       Runtime.getRuntime().addShutdownHook(new Thread() {
          public void run() {
             dockerClient.stopContainerCmd(container.getId()).exec();
          }
       });

       // waitContainerCmd blocks until the container exits, so run it
       // in a separate thread to avoid blocking the test
       new Thread() {
          public void run() {
             dockerClient.waitContainerCmd(container.getId()).exec(); //<6>
          }
       }.start();
    }

    @Test
    public void executeTest() throws InterruptedException, IOException {
       System.out.println("Waiting for container");
       Thread.sleep(5000); //<7>
       deployer.deploy("helloworld");

       URL obj = new URL("http://localhost:8080/hello/HelloWorld");
       HttpURLConnection con = (HttpURLConnection) obj.openConnection();

       BufferedReader in = new BufferedReader(
               new InputStreamReader(con.getInputStream()));
       String inputLine;
       StringBuffer response = new StringBuffer();
       while ((inputLine = in.readLine()) != null) {
          response.append(inputLine);
       }
       in.close();
       // print result
       System.out.println(response.toString());
    }
 }
There are some hints to be discussed here.

<1> Because it is a plain test, not an extension, I need to decide when the container is ready to be used.
<2> No secret here, a standard package.
<3> Some configuration parameters for the Docker server (note that the Docker server could even be remote, which means that you can execute your tests against a remote Docker platform like OpenShift or Tutum).
<4> To create a container we may need to configure a lot of parameters, like exposed ports, name, quota, …
<5> And more parameters when you want to start the container.
<6> Note that some containers do not return control; for example in this case (and the same applies to an AS), if the server is not started as a Linux service, control is never returned to the caller, so we need to create a thread to avoid blocking the execution, and stop the container when the JVM dies.
<7> Because of <6>, we can know when the container has started, but not when the process inside the container has booted up. For this reason we will need to implement some mechanism to ensure that the AS is up and running.

So for me the first thing to discuss is the configuration. As you can see, every command has a lot of possible configuration parameters. A first approach could be to add them all to arquillian.xml, but I think it would be a headache for us and for the final users. What do you think about configuring the Docker server inside arquillian.xml, but using a YAML configuration file for the rest of the parameters? I also think it would make it easier to boot several Docker containers in the same test.
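For the sake of discussion only, such a YAML descriptor might look something like the sketch below; every key, image name, and value here is a made-up illustration, not a proposed format:

```yaml
# Hypothetical per-test descriptor (illustrative only)
tomcat:
  image: tomcat:7.0
  exposedPorts:
    - 8080/tcp
  portBindings:
    - 8080:8080
  links:
    - mongodb:db
mongodb:
  image: mongo:2.6
  exposedPorts:
    - 27017/tcp
```

Each top-level entry would map to one container to boot, while the Docker server URI itself would stay in arquillian.xml.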

Another problem is @ArquillianResource. If users use port forwarding, then there is no problem; if not, we will need to use the REST API to get the container IP and deploy the web archive. But this leads us to another problem: in arquillian.xml you configure the host of the remote server. If you don’t use port forwarding, then you need to get the IP that the Docker server has given to the container (which may change on every execution), so we need a way to dynamically modify the property of the remote adapter. Of course, we could make port forwarding mandatory.

A feature I think would be amazing to implement is letting you create your containers from a Dockerfile from Java. Users would then be able to create container images from the test and, more importantly, they would be able to materialize the ShrinkWrap deployment file inside the container itself.

I am sending you the pom file I used:

And arquillian.xml

 <?xml version="1.0"?>
 <arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <container qualifier="tomcat" default="true">
       <configuration>
          <property name="host">localhost</property>
          <property name="port">8080</property>
          <property name="user">admin</property>
          <property name="pass">mypass</property>
       </configuration>
    </container>
 </arquillian>

For me, the next step is to create some kind of BDD spec of what the extension should do.


I have written some features in BDD form.

Start a Docker container and Deploy artifacts

As a developer
I want to be able to execute Arquillian tests in/against a Docker container
So that my integration tests are executed in a production-like environment

Acceptance Criteria: 

Scenario 1: Docker container with an application server should be started
Given Arquillian test
  And Docker Server installed and TCP port open
  And Docker container with application server correctly configured
  And Arquillian remote adapter for application server
When  Test is started
Then  Docker container shall be started
  And Deployment file deployed inside Docker's application server

Scenario 2: An exception should be thrown if Docker server is not available
Given Arquillian test
  And Docker container with application server correctly configured
  And Arquillian remote adapter for application server
When Test is executed
Then An exception shall be thrown because of Docker server not running or TCP port not open

Scenario 3: Docker container parameters should be available during test
Given Arquillian test
  And Docker Server installed and TCP port open
  And Docker container with application server correctly configured
  And Arquillian remote adapter for application server
When Test is executed
Then Docker parameters like host, port, ... shall be available during test execution.

As a developer
I want to be able to create a Docker container from Dockerfile
So that integration tests can run against a new created container

Acceptance Criteria:

Scenario 1: A tar file with a Dockerfile can be used to create a new container containing an application server
Given Arquillian test
  And Docker Server installed and TCP port open
  And a Dockerfile that defines an application server, either directly or in a parent image
  And Arquillian remote adapter for application server
When Test is executed
Then New Docker container shall be created
  And Deployment file deployed inside Docker's application server

Scenario 2: A tar file with a Dockerfile can be loaded from a URL.
Given a Dockerfile URL (file, http, classpath, ...)
When Test is executed
Then Docker container shall be created from Dockerfile provided by URL

Scenario 3: A directory with a Dockerfile on its root can be used to create a new container containing an application server.
Given a Folder with Dockerfile on root
When Test is executed
Then Docker container shall be created from Dockerfile provided.

Scenario 4: An exception is thrown if the Dockerfile cannot be read
Given a Dockerfile
When Test is executed
Then An exception shall be thrown if Dockerfile is not readable

As a developer
I want to be able to orchestrate several Docker containers inside a test
So that I can reproduce complex scenarios

Acceptance Criteria:

Scenario 1: Multiple Docker containers can be spawned in the same test
Given Arquillian test
  And Docker Server installed and TCP port open
  And A definition of Docker containers with at least one having an application server
  And Arquillian remote adapter for application server
When  Test is started
Then  Docker containers shall be orchestrated
  And Deployment file deployed inside Docker's application server