Even simpler Arquillian Chameleon usage with Gradle

11 June 2018

In a previous post I have described how easy it has become to use Arquillian via the Chameleon extension. The only "complex" part that's left is the @Deployment-annotated method specifying the deployment via ShrinkWrap.

What exists for this is the @MavenBuild annotation. It allows triggering a Maven build and using the generated artifact, usually the regularly built EAR or WAR file, as the deployment; which is fine in a lot of situations. Unfortunately, there is no @GradleBuild annotation today. But there is the @File annotation to reference any EAR or WAR on the filesystem; assuming it was previously built by the Gradle build, we can simply reference that artifact.

@RunWith(ArquillianChameleon.class)
@File("build/libs/hello.war")
@ChameleonTarget(value = "wildfly:11.0.0.Final:managed")
public class HelloServiceIT {

    @Inject
    private HelloService service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello", service.hello());
    }
}

Note that there is no @Deployment-annotated method. The build/libs/hello.war is built by the normal Gradle build task. If we set up our integrationTest task like this, we can declare the build task as a dependency:

test {
    // Do not run integration-tests having suffix 'IT'
    include '**/*Test.class'
}

dependencies {
    testCompile 'org.arquillian.container:arquillian-chameleon-junit-container-starter:1.0.0.CR2'
    testCompile 'org.arquillian.container:arquillian-chameleon-file-deployment:1.0.0.CR2'
}

task integrationTest(type: Test) {
    group 'verification'
    description 'Run integration-tests'
    dependsOn 'build'
    include '**/*IT.class'
}

Run it with gradle integrationTest.

If you are wondering which other containers are supported and can be provided via the @ChameleonTarget annotation, see here for the list. The actual configuration of the supported containers is located in a file called containers.yaml.


The only disadvantage right now is that it will only work as expected when running a full gradle integrationTest. If you are e.g. in Eclipse and trigger a single test, it will simply use the already existing artifact instead of building it again. This is what @MavenBuild does; and I hope we will get the equivalent @GradleBuild soon.

Websphere Liberty, EclipseLink and Caching in the Cluster

04 June 2018

Cache Coordination

When using JPA, sooner or later the question of caching will arise to improve performance. Especially for data that is frequently read but only written/updated infrequently, it makes sense to enable the second-level cache via the shared-cache-mode element in the persistence.xml. See the Java EE 7 tutorial for details.
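For reference, enabling it for all entities could look like this in the persistence.xml (a minimal sketch; the persistence-unit name is made up):

```xml
<persistence-unit name="sample-pu">
    <!-- enable the second-level cache for all entities unless annotated otherwise -->
    <shared-cache-mode>ALL</shared-cache-mode>
</persistence-unit>
```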

By default, EclipseLink has the second-level cache enabled, as you can read here. Consider what will happen in a clustered environment: What happens if server one has the entity cached and server two updates the entity? Server one will have a stale cache entry, and by default no one will tell the server that its cache is out of date. How to deal with it? Define a hard-coded expiration? Or not use the second-level cache at all?
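A hard-coded expiration, for example, can be defined per entity via EclipseLink's @Cache annotation (a sketch; the entity and the 60-second value are just examples):

```java
// Example only (hypothetical entity): cached instances are
// invalidated after the given number of milliseconds.
@Entity
@Cache(expiry = 60000) // 60 seconds
public class Customer {
    // ...
}
```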

A better solution is to keep the second-level caches synchronized in the cluster. EclipseLink's vendor-specific feature for this is called cache-coordination. You can read more about it here, but in a nutshell you can use either JMS, RMI or JGroups to distribute cache invalidations/updates within the cluster. This post focuses on getting EclipseLink's cache-coordination working under Websphere Liberty via JGroups.

Application Configuration

From the application’s perspective, you only have to enable this feature in the persistence.xml via

<property name="eclipselink.cache.coordination.protocol" value="jgroups" />

Liberty Server Configuration with Global Library

Deploying this application on Websphere Liberty will lead to the following error:

Exception Description: ClassNotFound: [org.eclipse.persistence.sessions.coordination.jgroups.JGroupsTransportManager] specified in [eclipselink.cache.coordination.protocol] property.

Thanks to the great help on the openliberty.io mailing-list, I was able to solve the problem. You can read the full discussion here.

The short summary is that the cache-coordination feature of EclipseLink using JGroups is an extension, and Liberty does not ship this extension by default. RMI and JMS are supported out-of-the-box, but both have disadvantages:

  • RMI is a legacy technology that I have not worked with in years.

  • JMS in general is a great technology for asynchronous communication, but it requires a message broker like IBM MQ or ActiveMQ. This does not sound like a good fit for a caching mechanism.

This leaves us with JGroups. The preferred solution to get JGroups working is to replace the JPA implementation with our own. For us, this will simply be EclipseLink, but including the extension. In Liberty this is possible via the jpaContainer feature in the server.xml. The official documentation describes how to use your own JPA implementation. As there are still a few small mistakes you can make on the way, let me describe the working configuration here in detail:

  1. Assuming you are working with the javaee-7.0 feature in the server.xml (or specifically jpa-2.1), you will have to get EclipseLink 2.6 as this implements JPA 2.1. For javaee-8.0 (or specifically jpa-2.2) it would be EclipseLink 2.7.

    I assume javaee-7.0 here; that’s why I downloaded EclipseLink 2.6.5 OSGi Bundles Zip.

  2. Create a folder lib/global within your Liberty server-config folder, e.g. defaultServer/lib/global, and copy the following JARs from the zip (the same as referenced here, plus the extension):

    • org.eclipse.persistence.asm.jar

    • org.eclipse.persistence.core.jar

    • org.eclipse.persistence.jpa.jar

    • org.eclipse.persistence.antlr.jar

    • org.eclipse.persistence.jpa.jpql.jar

    • org.eclipse.persistence.jpa.modelgen.jar

    • org.eclipse.persistence.extension.jar

  3. If you used it like this, you would later find a ClassNotFoundException for the actual JGroups implementation classes. You need to get JGroups separately from here.

    If we look at the 2.6.5 tag in EclipseLink's Git repo, we see that we should use org.jgroups:jgroups:3.2.8.Final.

    Download it and copy the jgroups-3.2.8.Final.jar to the lib/global folder as well.

  4. The last step is to set up your server.xml like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <server description="new server">
        <!-- Enable features: everything from javaee-7.0 except jpa-2.1,
             plus jpaContainer-2.1 to bring our own JPA provider -->
        <featureManager>
            <feature>jpaContainer-2.1</feature>
            <!-- list the remaining javaee-7.0 features here, e.g. servlet-3.1, cdi-1.2, ... -->
        </featureManager>
        <basicRegistry id="basic" realm="BasicRealm"/>
        <httpEndpoint id="defaultHttpEndpoint"
                      httpPort="9080"
                      httpsPort="9443" />
        <applicationManager autoExpand="true"/>
        <jpa defaultPersistenceProvider="org.eclipse.persistence.jpa.PersistenceProvider"/>
    </server>

Some comments on the server.xml:

  • Note that we now have to explicitly list all of the features that are included in the javaee-7.0 feature, minus the jpa-2.1 feature, because we don't want the default JPA provider.

  • Instead of jpa-2.1 I added jpaContainer-2.1 to bring our own JPA-provider.

  • The defaultPersistenceProvider attribute makes Liberty use our JPA provider and is required by the jpaContainer feature.

Liberty Configuration without Global Library

Be aware that there are different ways to include our EclipseLink library. Above, I chose the way that requires the least configuration in the server.xml and also works for dropin applications: a global library. The official documentation instead defines an explicit library in the server.xml and references it for each individual application like this:

<bell libraryRef="eclipselink"/>
<library id="eclipselink">
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.asm.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.core.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.antlr.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.jpql.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.modelgen.jar"/>

	<file name="${server.config.dir}/jpa/org.eclipse.persistence.extension.jar"/>
	<file name="${server.config.dir}/jpa/jgroups.jar"/>
</library>

<application location="myapp.war">
    <classloader commonLibraryRef="eclipselink"/>
</application>

Also note that the JARs are this time in the defaultServer/jpa folder, not under defaultServer/lib/global, and that I removed all the version suffixes from the file names. Additionally, make sure to add <feature>bells-1.0</feature> to the featureManager.

Try it

As this post is already getting too long, I will not go into detail here on how to use this from your Java EE application; that will be for another post. But you can already get a working Java EE project to get your hands dirty from my GitHub repository. Start the Docker Compose environment and use the contained test.sh to invoke some cURL requests against the application on two different cluster nodes.


With either of the above approaches I was able to enable EclipseLink's cache-coordination feature on Websphere Liberty for Java EE 7.

I did not try it, but I would assume it works similarly for Java EE 8 on the latest OpenLiberty builds.

For sure it is nice that plugging in your own JPA provider is so easy in Liberty; but I don't like that I have to do this to get a feature of EclipseLink working under Liberty that I would expect to work out of the box. EclipseLink's cache-coordination feature is a quite useful extension, and it leaves me uncomfortable that I have configured my own snowflake Liberty instead of relying on the standard package. On the other hand, it works; and if I make sure to use the exact same version of EclipseLink as packaged with Liberty out of the box, I would hope the differences are minimal.

The approach I chose/prefer in the end is Liberty Server Configuration with Global Library instead of the approach that is also in the official documentation (Liberty Configuration without Global Library). The reason is that for the latter I would have to reference the library in the server.xml individually for each application, which does not work for applications I would like to throw into the dropins folder.

Deploying a Java EE 7 Application with Kubernetes to the Google Cloud

30 May 2018

In this post I am describing how to deploy a dockerized Java EE 7 application to the Google Cloud Platform (GCP) with Kubernetes.

My previous experience is only with AWS, specifically with EC2 and ECS. So this is not only my first exposure to the Google Cloud but also my first steps with Kubernetes.

The Application

The application I would like to deploy is a simple Java EE 7 application exposing a basic HTTP/Rest endpoint. The sources are located on GitHub and the Docker image can be found on Docker Hub. If you have Docker installed, you can easily run it locally via

docker run --rm --name hello -p 80:8080 38leinad/hello

Now, in your browser or via cURL, go to http://localhost/hello/resources/health. You should get UP as the response. A simple health-check endpoint. See here for the sources.

Let’s deploy it on the Google Cloud now.

Installation and Setup

Obviously, you will have to register at https://cloud.google.com/ for a free trial account first. It is valid for one year and also comes with a credit of $300. I am not sure yet what/when resources will cost credit. After four days of tinkering, $1 is gone.

Once you have signed up, you can do all of the configuration and management of your apps from the Google Cloud web-console. They even have an integrated terminal running in the browser. So, strictly speaking, it is not required to install any tooling on your local system if you are happy with this.

The only thing we will do from the web-console is the creation of a Kubernetes cluster (you can also do this via gcloud from the command line). For this, go to "Kubernetes Engine / Kubernetes clusters" and "Create Cluster". You can leave all the defaults; just make sure to remember the name of the cluster and the zone it is deployed to. We will need this later to correctly set up the kubectl command line locally. Note that it will also ask you to set up a project before creating the cluster. This allows grouping of resources in GCP based on different projects, which is quite useful.

Setting up the cluster is heavy lifting and thus can take some minutes. In the meantime, we can already install the tools.

  1. Install SDK / CLI (Centos): https://cloud.google.com/sdk/docs/quickstart-redhat-centos.

    I had to make sure to be logged out of my Google-account before running gcloud init. Without doing this, I received a 500 http-response.

    Also, when running gcloud init it will ask you for a default zone. Choose the one you used when setting up the cluster. Mine is europe-west1-b.

  2. Install the kubectl command:

    gcloud components install kubectl

    Note that you can also install kubectl independently. E.g. I already had it installed from here while using minikube.

  3. Now, you will need the name of the cluster you have created via the web-console. Configure the gcloud CLI-tool for your cluster:

    gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>

    You can easily get the full command with correct parameters when opening the cluster in the web-console and clicking the "Connect" button for the web-based CLI.

Run kubectl get pods just to see if the command works. You should see No resources found. At this point, we have configured our CLI/kubectl to interact with our Kubernetes cluster.


The next thing we will do is optional but makes life easier once you have multiple applications deployed on your cluster. You can create a namespace/context per application you are deploying to GCP. This allows you to always only see the resources of the namespace you are currently working with. It also allows you to delete the namespace with a cascading delete of all its resources. So, this is very nice for experimentation and for not leaving a big mess of resources.

kubectl create namespace hello-namespace
kubectl get namespaces

We create a namespace for our application and check if it actually was created.

You can now attach this namespace to a context. A context is not a resource on GCP but a configuration in your local <user-home>/.kube/config.

kubectl config set-context hello-context --namespace=hello-namespace \
  --cluster=<cluster-name> \
  --user=<user-name>

What are <cluster-name> and <user-name> that you have to put in? The easiest way is to get them from running

kubectl config view

Let’s activate this context. All operations will be done within the assigned namespace from now on.

kubectl config use-context hello-context

You can also double-check the activated context:

kubectl config current-context

Run the kubectl config view command again or even check in <user-home>/.kube/config. As said before, the current-context can be found here and is just a local setting.
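Such a context entry in <user-home>/.kube/config looks roughly like this (a sketch; the cluster/user names are examples of what GKE generates):

```yaml
contexts:
- context:
    cluster: gke_my-project_europe-west1-b_my-cluster
    user: gke_my-project_europe-west1-b_my-cluster
    namespace: hello-namespace
  name: hello-context
current-context: hello-context
```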

You can read more on namespaces here.

Deploying the Application

Deploying the application in Kubernetes requires three primitives to be created:

  • Deployment/Pods: These are the actual Docker containers that are running. A Pod can consist of multiple containers; think of e.g. side-car containers in a microservice architecture.

  • Service: The containers/Pods are hidden behind a service. Think of the Service as e.g. a load balancer: you never interact with the individual containers directly; the load balancer is the single service you as a client call.

  • Ingress: Our final goal is to access our application from the Internet. By default, this is not possible; you will have to set up an Ingress for incoming traffic. Basically, you will get an internet-facing IP address that you can call.

All these steps are quite nicely explained in the official doc on Setting up HTTP Load Balancing with Ingress. What you will find there is that Deployment, Service and Ingress are set up via individual calls to kubectl. You could put all these calls into a shell script to easily replay them, but there is another way in the Kubernetes world: what we will do here instead is define these resources in a YAML file.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: 38leinad/hello:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - port: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-service
    servicePort: 8080

We can now simply call kubectl apply -f hello.yml.

Get the public IP by running

kubectl get ingress hello-ingress

You can now try to open http://<ip>/hello/resources/health in your browser or with cURL. You should get an "UP" response. Note that it can actually take some minutes before this works.

Once it worked, you can check the application-server log as well like this:

kubectl get pods
kubectl logs -f <pod-name>

Note that the first command is to get the name of the Pod. The second command will give you the log-output of the container; you might know this from plain Docker already.

We successfully deployed a dockerized application to the Google Cloud via Kubernetes.

A final note on why namespaces are useful: to start over again, you can invoke

kubectl delete namespace hello-namespace

and all the resources in the cluster are gone.

Lastly, a cheat-sheet for some of the important kubectl commands can be found here. There you will also find how to get auto-completion in your shell, which is super useful. As I am using zsh, I created an alias for it:

alias kubeinit="source <(kubectl completion zsh)"

Websphere Liberty EclipseLink Logging

14 May 2018

Websphere Liberty uses EclipseLink as the default JPA implementation. How do you log the SQL commands from EclipseLink to the Websphere Liberty stdout/console?

First step is enabling the logging in the persistence.xml:

    <property name="eclipselink.logging.level.sql" value="FINE" />
    <property name="eclipselink.logging.level" value="FINE" />
    <property name="eclipselink.logging.level.cache" value="FINE" />

This is not sufficient to get any output on stdout.

Additionally, the following snippet needs to be added to the server.xml:

<logging traceSpecification="*=info:eclipselink.sql=all" traceFileName="stdout" traceFormat="BASIC"/>

Set traceFileName="trace.log" to get the statements printed to the trace.log instead.
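If you also want to see the bind-parameter values in the logged SQL (be aware that this may expose sensitive data in the log), EclipseLink offers an additional property for the persistence.xml:

```xml
<property name="eclipselink.logging.parameters" value="true" />
```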

Gradle and Docker Compose for System Testing

06 May 2018

Recently, I read this article on a nice Gradle plugin that allows using Docker Compose from Gradle. I wanted to try it out myself with a simple Java EE app deployed on Open Liberty. Specifically, the setup is as follows: the Java EE application (exposing a Rest endpoint) is deployed on OpenLiberty running within Docker. The system-tests invoke the Rest endpoint from outside the Docker environment via HTTP.

I had two requirements that I specifically wanted to verify:

  • Usually, when the containers are started from Docker's perspective, it does not mean that the deployed application is also fully up and running. Either you have to write some custom code that monitors the application log for some marker, or we can leverage the Docker health-check. Does the Docker Compose Gradle plugin provide any integration for this so that we only run the system-tests once the application is up?

  • System-tests will be running on the Jenkins server. Ideally, a lot of tests are running in parallel. For this, it is necessary to use dynamic ports; otherwise, there could be conflicts for the exposed HTTP ports of the different system-tests. Each system-test somehow needs to be aware of its dynamic ports. Does the Gradle plugin help us with this?

Indeed, the Gradle-plugin helps us with these two requirements.

Rest Service under Test

The Rest endpoint under test looks like this:

@Path("ping")
public class PingResource {

	static AtomicInteger counter = new AtomicInteger();

	@GET
	public Response ping() {
		if (counter.incrementAndGet() > 10) {
			System.out.println("++ UP");
			return Response.ok("UP@" + System.currentTimeMillis()).build();
		} else {
			System.out.println("++ DOWN");
			return Response.serverError().build();
		}
	}
}

I added some simple logic here to only return HTTP status code 200 after some number of requests. This is to verify that the health-check mechanism works as expected.

System Test

The system-test is a simple JUnit test using the JAX-RS client to invoke the ping endpoint.

public class PingST {

    @Test
    public void testMe() {
        Response response = ClientBuilder.newClient()
            .target("http://localhost:" + System.getenv("PING_TCP_9080") + "/ping")
            .path("resources/ping")
            .request()
            .get();

        assertThat(response.getStatus(), CoreMatchers.is(200));
        assertThat(response.readEntity(String.class), CoreMatchers.startsWith("UP"));
    }
}
You can already see here that we read the port from an environment variable. Also, the test should only succeed when we get the response UP.

Docker Compose

The docker-compose.yml looks as follows:

version: '3.4'
services:
  ping:
    image: openliberty/open-liberty:javaee7
    ports:
     - "9080"
    volumes:
     - "./build/libs/:/config/dropins/"
    healthcheck:
      test: wget --quiet --tries=1 --spider http://localhost:9080/ping/resources/ping || exit 1
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 30s

We are using the health-check feature here. If you run docker ps, the column STATUS will tell you the health of the container based on executing this command. The ping service should only show up as healthy after roughly 30 + 10 * 5 seconds: it will only start the health-checks after 30 seconds, and then the first 10 requests will return response-code 500. After this, it will flip to status-code 200 and return UP.

If the Gradle plugin makes sure to only run the tests once the health of the container is OK, PingST should pass successfully.

Gradle Build

The last part is the build.gradle that brings it all together:

plugins {
    id 'com.avast.gradle.docker-compose' version '0.7.1'(1)
}

apply plugin: 'war'
apply plugin: 'maven'
apply plugin: 'eclipse-wtp'

group = 'de.dplatz'
version = '1.0-SNAPSHOT'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    testCompile 'org.glassfish.jersey.core:jersey-client:2.25.1'
    testCompile 'junit:junit:4.12'
}

war {
	archiveName 'ping.war'
}

dockerCompose {(2)
    useComposeFiles = ['docker-compose.yml']
}

task systemTest( type: Test ) {(3)
    include '**/*ST*'
    doFirst {
        // expose the dynamically assigned container ports as environment variables
        dockerCompose.exposeAsEnvironment(systemTest)
    }
}
dockerCompose.isRequiredBy(project.tasks.systemTest)

test {
    exclude '**/*ST*'(4)
}
  1. The Docker Compose Gradle plugin

  2. The task to start the Docker environment based on the docker-compose.yml

  3. A separate task to run system-tests

  4. Don't run system-tests as part of the regular unit-test task

The tasks composeUp and composeDown can be used to manually start/stop the environment, but the system-test task (systemTest) has a dependency on the Docker environment via isRequiredBy(project.tasks.systemTest).

We also use dockerCompose.exposeAsEnvironment(systemTest) to expose the dynamic ports as environment variables to PingST. In the PingST class you can see that PING_TCP_9080 is the environment-variable name that contains the host port exposed for the container port 9080.

Please note that the way I chose to separate unit-tests and system-tests here in the build.gradle is very pragmatic but might not be ideal for bigger projects. Both kinds of tests share the same classpath. You might want to have a separate Gradle project for the system-tests altogether.

Wrapping it up

We can now run gradle systemTest to run our system-tests. It will first start the Docker environment and monitor the health of the containers. Only when the container is healthy (i.e. the application is fully up and running) will Gradle continue and execute PingST.

Also, ports are dynamically assigned, and PingST reads them from the environment. With this approach, we can safely run the tests on Jenkins, where other tests might already be using ports like 9080.

The com.avast.gradle.docker-compose plugin allows us to easily integrate system-tests for Java EE applications (using Docker) into our Gradle build. Doing it this way allows every developer who has Docker installed to run these tests locally as well, not only on Jenkins.

MicroProfile Metrics

11 April 2018

These are my personal notes on getting familiar with MicroProfile 1.3, specifically Metrics 1.1. As a basis, I have been using the tutorial on OpenLiberty.io. Not surprisingly, I am using OpenLiberty. The server.xml which serves as the starting-point is described here. I am just listing the used features here:

<featureManager>
    <feature>javaee-7.0</feature>
    <feature>microProfile-1.3</feature>
</featureManager>
Some differences:

  • javaee-7.0 is used, as Java EE 8 does not seem to be supported yet by the release builds.

  • microProfile-1.3 to enable all features as part of MicroProfile 1.3

As a starting-point for the actual project I am using my Java EE WAR template.

To get all MicroProfile 1.3 dependencies available in your Gradle build, you can add the following provided dependency:

providedCompile 'org.eclipse.microprofile:microprofile:1.3'

Now let's write a simple Rest service to produce some metrics.

@Path("magic")
public class MagicNumbersResource {

	static int magicNumber = 0;

	@POST
	@Consumes(MediaType.TEXT_PLAIN)
	@Counted(name = "helloCount", absolute = true, monotonic = true, description = "Number of times the hello() method is requested")
	@Timed(name = "helloRequestTime", absolute = true, description = "Time needed to get the hello-message")
	public void setMagicNumber(Integer num) throws InterruptedException {
		Thread.sleep(2000); // artificial delay; see the timer metrics below
		magicNumber = num;
	}

	@GET
	@Gauge(unit = MetricUnits.NONE, name = "magicNumberGuage", absolute = true, description = "Magic number")
	public int getMagicNumber() {
		return magicNumber;
	}
}

I am using:

  • A @Timed metric that records the percentiles and averages for the duration of the method-invocation

  • A @Counted metric that counts the number of invocations

  • A @Gauge metric that just takes the return-value of the annotated method as the metric-value.

Now deploy and invoke curl -X POST -H "Content-Type: text/plain" -d "42" http://localhost:9080/mptest/resources/magic. (This assumes the application/WAR is named mptest).

Now open http://localhost:9080/metrics in the browser. You should see the following prometheus-formatted metrics:

# TYPE application:hello_request_time_rate_per_second gauge
application:hello_request_time_rate_per_second 0.1672874737158507
# TYPE application:hello_request_time_one_min_rate_per_second gauge
application:hello_request_time_one_min_rate_per_second 0.2
# TYPE application:hello_request_time_five_min_rate_per_second gauge
application:hello_request_time_five_min_rate_per_second 0.2
# TYPE application:hello_request_time_fifteen_min_rate_per_second gauge
application:hello_request_time_fifteen_min_rate_per_second 0.2
# TYPE application:hello_request_time_mean_seconds gauge
application:hello_request_time_mean_seconds 2.005084111
# TYPE application:hello_request_time_max_seconds gauge
application:hello_request_time_max_seconds 2.005084111
# TYPE application:hello_request_time_min_seconds gauge
application:hello_request_time_min_seconds 2.005084111
# TYPE application:hello_request_time_stddev_seconds gauge
application:hello_request_time_stddev_seconds 0.0
# TYPE application:hello_request_time_seconds summary
# HELP application:hello_request_time_seconds Time needed to get the hello-message
application:hello_request_time_seconds_count 1
application:hello_request_time_seconds{quantile="0.5"} 2.005084111
application:hello_request_time_seconds{quantile="0.75"} 2.005084111
application:hello_request_time_seconds{quantile="0.95"} 2.005084111
application:hello_request_time_seconds{quantile="0.98"} 2.005084111
application:hello_request_time_seconds{quantile="0.99"} 2.005084111
application:hello_request_time_seconds{quantile="0.999"} 2.005084111 (1)
# TYPE application:magic_number_guage gauge
# HELP application:magic_number_guage Magic number
application:magic_number_guage 42 (3)
# TYPE application:hello_count counter
# HELP application:hello_count Number of times the hello() method is requested
application:hello_count 1 (2)
  1. This is one of the percentiles from @Timed. Due to the sleep, it is close to two seconds.

  2. This metric is based on @Counted. We invoked the method once via curl.

  3. This metric is based on the @Gauge. We did a post with curl to set the magicNumber to 42. So, this is what the gauge will get from getMagicNumber().

As a final note: I like the Java EE approach of having a single dependency to develop against (javax:javaee-api:7.0). I have used the same approach here for the MicroProfile. If you instead only want to enable the metrics feature in Liberty and only want to program against the related API, you could have used the following feature in the server.xml:

<feature>mpMetrics-1.1</feature>
And the following dependency in your build.gradle:

providedCompile 'org.eclipse.microprofile.metrics:microprofile-metrics-api:1.1'

I find this approach more cumbersome if multiple MicroProfile APIs are used; and the negligible difference in startup time of Liberty confirms that there is no disadvantage.

In a later post we will look at what can be done with the metrics.

Websphere Traditional, Docker and Auto-Deployment

10 April 2018

The software I work with on my job is portable across different application servers, including Websphere Traditional, Websphere Liberty and JBoss. In the past, it took considerable time for me to test/make sure a feature works as expected on Websphere; in part because it was hard for me to keep all the different Websphere versions installed on my machine and not mess them up over time.

Now, with the docker images provided by IBM, it has become very easy. Just fire up a container and test it.

To make testing/deployment very easy, I have enabled auto-deploy in my container image.

The image contains a Jython script so you don't have to apply this configuration manually:

import java.lang.System as sys

cell = AdminConfig.getid('/Cell:DefaultCell01/')
md = AdminConfig.showAttribute(cell, "monitoredDirectoryDeployment")
AdminConfig.modify(md, [['enabled', "true"]])
AdminConfig.modify(md, [['pollingInterval', "1"]])

print AdminConfig.show(md)

AdminConfig.save()

print 'Done.'

It allows me to work with VSCode and Gradle as I have described in this post.

Start the Docker container with the command below to mount the auto-deploy folder as a volume:

docker run --name was9 --rm -p 9060:9060 -p 9080:9080 -p 7777:7777 -v ~/junk/deploy:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/monitoredDeployableApps 38leinad/was-9

You can now copy a WAR file to ~/junk/deploy/servers/server1/ on your local system and it will get deployed automatically within the container.

After this post, I extended the was-9 container so you can directly mount /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/monitoredDeployableApps/servers/server1/. It even supports deployment of a WAR/EAR that is already in this folder when the container is started; this is not the default behaviour of Websphere. Basically, the container does a touch on any WAR/EAR in this folder once the auto-deploy service is watching the folder.
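The touch-trick described above can be sketched in shell like this (a hypothetical illustration; the DEPLOY_DIR variable and the script itself are assumptions, not the actual script from the image):

```shell
# Hypothetical sketch: re-touch every WAR/EAR already present so the
# monitored-directory service treats it as newly dropped in and deploys it.
DEPLOY_DIR="${DEPLOY_DIR:-/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/monitoredDeployableApps/servers/server1}"

for f in "$DEPLOY_DIR"/*.war "$DEPLOY_DIR"/*.ear; do
  # the glob stays unexpanded if nothing matches, hence the existence check
  if [ -e "$f" ]; then
    touch "$f"
  fi
done
```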

Gradle and Arquillian Chameleon even simpler

07 April 2018

In a previous post I have already described how to use Arquillian Chameleon to simplify the Arquillian config.

With the latest improvements that are described here in more detail, it is now possible to minimize the required configuration:

  • Only a single dependency

  • No arquillian.xml

As before, I assume Gradle 4.6 with enableFeaturePreview('IMPROVED_POM_SUPPORT') in the settings.gradle.

With this, we only have to add a single dependency to use Arquillian:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    testCompile 'org.arquillian.container:arquillian-chameleon-junit-container-starter:1.0.0.CR2'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

The container to use only needs to be defined via the @ChameleonTarget annotation. Also note the new @RunWith(ArquillianChameleon.class); this is not the regular @RunWith(Arquillian.class).

@RunWith(ArquillianChameleon.class)
@ChameleonTarget("wildfly:11.0.0.Final:managed")
public class GreetingServiceTest {

    @Deployment
    public static WebArchive deployService() {
        return ShrinkWrap.create(WebArchive.class)
                .addClass(Service.class);
    }

    @Inject
    private Service service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello world", service.hello());
    }
}

There is now also support for not having to write the @Deployment method at all. So far this exists for Maven builds and for referencing a local file.

Open Liberty with DerbyDB

13 March 2018

In this post I describe how to use Open Liberty with the lightweight Apache Derby database.

Here are the steps:

  1. Download Apache Derby.

  2. Configure the driver/datasource in the server.xml

        <!-- https://www.ibm.com/support/knowledgecenter/de/SS7K4U_liberty/com.ibm.websphere.wlp.zseries.doc/ae/twlp_dep_configuring_ds.html -->
        <variable name="DERBY_JDBC_DRIVER_PATH" value="/home/daniel/dev/tools/db-derby-"/>
        <library id="DerbyLib">
            <fileset dir="${DERBY_JDBC_DRIVER_PATH}"/>
        </library>
        <dataSource id="DefaultDerbyDatasource" jndiName="jdbc/defaultDatasource" statementCacheSize="10" transactional="true">
            <jdbcDriver libraryRef="DerbyLib"/>
            <properties.derby.embedded connectionAttributes="upgrade=true" createDatabase="create" databaseName="/var/tmp/sample.embedded.db" shutdownDatabase="false"/>
            <!--properties.derby.client databaseName="/var/tmp/sample.db" user="derbyuser" password="derbyuser" createDatabase="create" serverName="localhost" portNumber="1527" traceLevel="1"/-->
        </dataSource>

    Note that the database is embedded and file-based. This means no database server needs to be started manually. On application-server startup an embedded database is started that writes to the file given by databaseName. Use the memory: prefix to keep it purely in main memory instead of on the filesystem.

    As an alternative, you can also start the Derby network server separately and connect by using the properties.derby.client instead.

  3. In case you want to use the datasource with JPA, provide a persistence.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
        <persistence-unit name="prod" transaction-type="JTA">
            <jta-data-source>jdbc/defaultDatasource</jta-data-source>
            <properties>
                <property name="hibernate.show_sql" value="true" />
                <property name="eclipselink.logging.level" value="FINE" />
                <property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
                <property name="javax.persistence.schema-generation.scripts.action" value="drop-and-create" />
                <property name="javax.persistence.schema-generation.scripts.create-target" value="bootstrapCreate.ddl" />
                <property name="javax.persistence.schema-generation.scripts.drop-target" value="bootstrapDrop.ddl" />
            </properties>
        </persistence-unit>
    </persistence>

    With the default settings of Gradle’s war-plugin, you can place it under src/main/resources/META-INF and the build should package it under WEB-INF/classes/META-INF.

  4. You should now be able to inject the entity-manager via

    @PersistenceContext
    EntityManager em;

This blog has a similar guide on how to use PostgreSQL with Open Liberty.

Gradle and Arquillian for OpenLiberty

12 March 2018

In this post I describe how to use arquillian together with the container-adapter for Websphere-/Open-Liberty.

The dependencies are straight-forward as for any other container-adapter except the additional need for the tools.jar on the classpath:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'

    testCompile files("${System.properties['java.home']}/../lib/tools.jar")
    testCompile 'org.jboss.arquillian.container:arquillian-wlp-managed-8.5:1.0.0.CR1'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

A minimalistic arquillian.xml looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <engine>
        <property name="deploymentExportPath">build/deployments</property>
    </engine>

    <container qualifier="wlp-dropins-deployment" default="true">
        <configuration>
            <property name="wlpHome">${wlp.home}</property>
            <property name="deployType">dropins</property>
            <property name="serverName">server1</property>
        </configuration>
    </container>
</arquillian>


As there is no good documentation on the supported properties, I had to look into the sources over on GitHub.

Also, you might not want to hard-code the wlp.home here. Instead you can define it in your build.gradle like this:

test {
    systemProperty "arquillian.launch", "wlp-dropins-deployment"
    systemProperty "wlp.home", project.properties['wlp.home']
}

This will allow you to run gradle -Pwlp.home=<path-to-wlp> test.

Gradle and Arquillian for Wildfly

28 February 2018

In this post I describe how to set up Arquillian to test/deploy on Wildfly. Note that there is a managed and a remote adapter. Managed means that Arquillian manages the application-server and thus starts it itself. Remote means that the application-server was already started by some other means, and Arquillian will only connect and deploy the application into this remote server. Below you will find the dependencies for both types of adapters.

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'

    testCompile 'org.wildfly.arquillian:wildfly-arquillian-container-managed:2.1.0.Final'
    testCompile 'org.wildfly.arquillian:wildfly-arquillian-container-remote:2.1.0.Final'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

Note that the BOM-import will only work with Gradle 4.6+.

An arquillian.xml for both adapters looks like the following. The arquillian-wildfly-managed config is enabled here by default.

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <engine>
        <property name="deploymentExportPath">build/deployments</property>
    </engine>

    <!-- Start JBoss manually via:
        ./standalone.sh -Djboss.socket.binding.port-offset=100 --server-config=standalone-full.xml -->
    <container qualifier="arquillian-wildfly-remote">
        <configuration>
            <property name="managementPort">10090</property>
        </configuration>
    </container>

    <container qualifier="arquillian-wildfly-managed" default="true">
        <configuration>
            <property name="jbossHome">/home/daniel/dev/app-servers/jboss-eap-7.0-test</property>
            <property name="serverConfig">${jboss.server.config.file.name:standalone-full.xml}</property>
            <property name="allowConnectingToRunningServer">true</property>
        </configuration>
    </container>
</arquillian>

As an additional tip: I always set deploymentExportPath to a folder within Gradle’s build-directory, because sometimes it is helpful to have a look at the deployment generated by Arquillian/Shrinkwrap.

In case you don’t want to define a default adapter, or want to overwrite it (e.g. via a Gradle property from the command line), you can set the arquillian.launch system property within the test-configuration.

test {
    systemProperty "arquillian.launch", "arquillian-wildfly-managed"
}

Gradle and Arquillian Chameleon

26 February 2018

The latest Gradle 4.6 release candidates come with BOM-import support.

It can be enabled in the settings.gradle by defining enableFeaturePreview('IMPROVED_POM_SUPPORT').

With this, the Arquillian BOM can be easily imported, and the dependencies to use Arquillian with the Chameleon adapter look like the following:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'
    testCompile 'org.arquillian.container:arquillian-container-chameleon:1.0.0.Beta3'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

Chameleon allows you to easily manage the container adapters via simple configuration in the arquillian.xml. As of today, Wildfly and Glassfish are supported, but not Websphere Liberty.

To define Wildfly 11, the following arquillian.xml (place under src/test/resources) is sufficient:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <container qualifier="wildfly" default="true">
        <configuration>
            <property name="chameleonTarget">wildfly:11.0.0.Final:managed</property>
        </configuration>
    </container>
</arquillian>

With this little bit of Gradle and Arquillian magic, you should be able to run a test like below. The Wildfly 11 container will be downloaded on the fly.

@RunWith(Arquillian.class)
public class GreetingServiceTest {

    @Deployment
    public static WebArchive deployService() {
        return ShrinkWrap.create(WebArchive.class)
                .addClass(Service.class);
    }

    @Inject
    private Service service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello world", service.hello());
    }
}

Gradle: Automatic and IDE-independent redeployments on OpenLiberty

25 February 2018

The last weeks I have started to experiment with how well VSCode can be used for Java EE development. I have to say that it is quite exciting to watch what the guys at Microsoft and Red Hat are doing with the Java integration. The gist of it: it cannot replace a real Java IDE yet for the majority of heavy development, but I can see the potential, due to its light weight, in projects that also involve a JavaScript frontend. The experience of developing Java and JavaScript in this editor is quite nice compared to a beast like Eclipse.

One of my first goals for quick development: reproduce the automatic redeploy you get from IDEs like Eclipse (via JBoss Tools), i.e. changing a Java class automatically triggers a redeploy of the application. As long as you keep the WAR file small, this deploy task takes less than a second and allows for quick iterations.

Here are the steps to make this work in VS Code; actually, they are independent of VSCode and just leverage Gradle’s continuous-build feature.

Place this task in your build.gradle. It deploys your application to the dropins-folder of OpenLiberty if you have set up the environment variable wlpProfileHome.

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

Additionally, make sure to enable automatic redeploys in your server.xml whenever the contents of the dropins-folder change.

<!-- hot-deploy for dropins -->
<applicationMonitor updateTrigger="polled" pollingRate="500ms" dropins="dropins" dropinsEnabled="true"/>

Every time you run gradlew deployToWlp, this should trigger a redeploy of the latest code.

Now comes the next step: run gradlew deployToWlp -t for continuous builds. Every code-change should trigger a redeploy. This is independent of any IDE and thus works nicely together with VS Code in case you want this level of interactivity. If not, it is very easy to map a shortcut in VSCode to the gradle-command to trigger it manually.

Arquillian UI Testing from Gradle

24 February 2018

Let’s assume for this post that we want to test some Web UI that is already running somewhere, i.e. we don’t want Arquillian to start up the container with the web-app.

Arquillian heavily relies on BOMs to get the right dependencies. Gradle out of the box is not able to handle BOMs (import-scoped POMs are not supported at all), but we can use the nebula-plugin.

So, make sure you have the following in your build.gradle:

plugins {
    id 'nebula.dependency-recommender' version '4.1.2'
}

apply plugin: 'java'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencyRecommendations {
    mavenBom module: 'org.jboss.arquillian:arquillian-bom:1.2.0.Final'
}

dependencies {
    testCompile 'junit:junit:4.12'

    testCompile "org.jboss.arquillian.junit:arquillian-junit-container"
    testCompile "org.jboss.arquillian.graphene:graphene-webdriver:2.0.3.Final"
}

Now the test:

@RunWith(Arquillian.class)
public class HackerNewsIT {

    @Drone
    WebDriver browser;

    @Test
    public void name() {
        browser.get("https://news.ycombinator.com");
        String title = browser.getTitle();
        Assert.assertThat(title, CoreMatchers.is("Hacker News"));
    }
}


Run it with gradle test.

By default, HTMLUnit will be used as the browser. To use Chrome, download the ChromeDriver binary.

If you don’t want to put it on your PATH, point the webdriver extension to the binary in your arquillian.xml:

 <arquillian xmlns="http://jboss.com/arquillian" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <extension qualifier="webdriver">
        <property name="browser">chrome</property>
        <property name="chromeDriverBinary">/home/daniel/dev/tools/chromedriver</property>
    </extension>
</arquillian>


Checkstyle with Gradle

30 January 2018

Get a checkstyle.xml, e.g. the one with the Sun coding conventions, and place it in your Gradle project under config/checkstyle/checkstyle.xml.

Now add the following to your build.gradle:

apply plugin: 'checkstyle'

checkstyle {
    showViolations = true
    ignoreFailures = false
}

Run it with gradle check.

If there are violations, an HTML report will be written to build/reports/checkstyle.

OpenLiberty Java EE 8 Config

22 January 2018

I am working with the latest development builds of Open Liberty supporting Java EE 8. You can download them here under "Development builds".

When you create a new server in Websphere/Open Liberty via ${WLP_HOME}/bin/server create server1, the generated server.xml is not configured properly for SSL, Java EE, etc. Here is a minimal server.xml that works:

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <!-- Enable features -->
    <featureManager>
        <feature>javaee-8.0</feature>
    </featureManager>

    <!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
    <httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>

    <keyStore id="defaultKeyStore" password="yourpassword"/>

    <!-- Automatically expand WAR files and EAR files -->
    <applicationManager autoExpand="true"/>

    <quickStartSecurity userName="admin" userPassword="admin12!"/>

    <!-- hot-deploy for dropins -->
    <applicationMonitor updateTrigger="polled" pollingRate="500ms"
                    dropins="dropins" dropinsEnabled="true"/>
</server>

Together with this build.gradle file you can start developing Java EE 8 applications:

apply plugin: 'war'
apply plugin: 'maven'

group = 'de.dplatz'
version = '1.0-SNAPSHOT'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    providedCompile 'javax:javaee-api:8.0'
    testCompile 'junit:junit:4.12'
}

war {
    archiveName 'webapp.war'
}

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

OpenLiberty Debug Config

21 January 2018

You can run a Websphere/Open Liberty server in debug-mode via ${WLP_HOME}/bin/server debug server1. But this makes the server wait for a debugger to attach before it continues to start. How can you attach later instead?

Create a file ${WLP_HOME}/usr/servers/server1/jvm.options and add the debug configuration; listening on the default port 7777 with suspend=n means the server starts without waiting:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777

Now you can use ${WLP_HOME}/bin/server run server1.

Gradle deploy-task

20 January 2018

Deploy to e.g. Websphere liberty by adding this task to your build.gradle file:

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

Assuming you have the environment-variable set, you can now run gradlew deployToWlp.

Implementing JAX-RS-security via Basic-auth

31 October 2017

Basic-auth is the simplest and weakest protection you can add to your resources in a Java EE application. This post shows how to leverage it for JAX-RS-resources that are accessed by a plain HTML5/JavaScript app.

Additionally, I had the following requirements:

  • The JAX-RS-resource is requested from a pure JavaScript-based webapp via the fetch-API; I want to leverage the authentication-dialog from the browser within the webapp (no custom dialog, as the webapp should stay as simple as possible and use as much as possible of the standard offered by the browser).

  • But I don’t want the whole WAR (i.e. JavaScript app) to be protected. Just the request to the JAX-RS-endpoint should be protected via Basic-auth

  • At the server-side I want to be able to connect to my own/custom identity-store; i.e. I want to programmatically check the username/password myself. In other words: I don’t want the application-server’s internal identity-stores/authentication.
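Before looking at the filter code, it helps to see how little there is to the Basic scheme: it is nothing more than base64-encoding of username:password in the Authorization header. A quick shell sketch, using the same example credentials daniel/123 that appear below:

```shell
# What the browser sends after the login dialog (user "daniel", password "123"):
CREDS=$(printf 'daniel:123' | base64)
echo "Authorization: Basic $CREDS"

# What the server-side has to do to get username and password back:
printf '%s' "$CREDS" | base64 -d
```

This also makes obvious why Basic-auth without HTTPS amounts to sending the credentials in plaintext.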

Protecting the JAX-RS-endpoint at server-side is as simple as implementing a request-filter. I could have used a low-level servlet-filter, but instead decided to use the JAX-RS-specific equivalent:

import java.io.IOException;
import java.util.Base64;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class SecurityFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        String authHeader = requestContext.getHeaderString("Authorization");
        if (authHeader == null || !authHeader.startsWith("Basic")) {
            requestContext.abortWith(Response.status(401).header("WWW-Authenticate", "Basic").build());
            return;
        }

        String[] tokens = (new String(Base64.getDecoder().decode(authHeader.split(" ")[1]), "UTF-8")).split(":");
        final String username = tokens[0];
        final String password = tokens[1];

        if (username.equals("daniel") && password.equals("123")) {
            // all good
        } else {
            requestContext.abortWith(Response.status(401).header("WWW-Authenticate", "Basic").build());
        }
    }
}


If the Authorization header is not present, we request the authentication-dialog from the browser by sending the header WWW-Authenticate: Basic. If I directly open up the JAX-RS-resource in the browser, I get the authentication-dialog and can access the resource (if I provide the correct username and password).

Now the question is if this also works when the JAX-RS-resource is fetched via the JavaScript fetch-API. I tried this:

function handleResponse(response) {
	if (response.status == "401") {
		alert("not authorized!");
	} else {
		response.json().then(function(data) {
			console.log(data);
		});
	}
}

fetch("http://localhost:8080/service/resources/health").then(handleResponse);


It did not work; I was getting 401 from the server because the browser was not sending the "Authorization" header; but the browser also did not show the authentication-dialog.

A peek into the spec hinted that it should work:

  1. If request’s use-URL-credentials flag is unset or authentication-fetch flag is set, then run these subsubsteps: …​

  2. Let username and password be the result of prompting the end user for a username and password, respectively, in request’s window.

So, I added the credentials option to the fetch call:

fetch("http://localhost:8080/service/resources/health", {credentials: 'same-origin'}).then(handleResponse);

It worked. The browser shows the authentication-dialog after the first 401. On subsequent requests to the JAX-RS-resource, the Authorization header is always sent along. No need to re-enter the credentials every time (Chrome discards them as soon as the browser window is closed).

The only disadvantage I found so far is from a development-perspective. I usually run the JAX-RS-endpoint separately from my JavaScript app; i.e. the JAX-RS-endpoint is hosted as a WAR in the application-server, but the JavaScript-app is hosted via LiveReload or browser-sync. In this case, the JAX-RS-service and the webapp do not have the same origin (different port), and I have to use the CORS-header Access-Control-Allow-Origin: * to allow communication between the two. But with this header set, the Authorization-token collected by the JavaScript-app will not be shared with the JAX-RS-endpoint.

Github - Switch to fork

05 October 2017

Say you have just cloned a massive GitHub repository (like Netbeans) where cloning alone takes minutes, and now you decide to contribute. Will you fork the repo, then clone the fork and spend another X minutes waiting?

This sometimes seems like too much of an effort. Thankfully, there are a few steps that transform the already-cloned repo to use your fork:

  1. Fork the repo

  2. Rename origin to upstream (your fork will be origin)

    git remote rename origin upstream
  3. Set origin as your fork

    git remote add origin git@github...my-fork
  4. Fetch origin

    git fetch origin
  5. Make master track new origin/master

    git checkout -B master --track origin/master
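The whole switch can be rehearsed in a scratch repository; the URLs below are placeholders, not real remotes:

```shell
# A scratch repo standing in for the already-cloned upstream project:
git init -q /tmp/fork-demo
cd /tmp/fork-demo
git remote add origin https://example.com/big-project.git

# Steps 2 and 3: demote the original remote to "upstream", add the fork as "origin":
git remote rename origin upstream
git remote add origin git@example.com:me/big-project.git

# Both remotes are now in place:
git remote -v
```

From here, git fetch origin and retargeting master (steps 4 and 5) work exactly as listed above.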

Websphere Administration via JMX, JConsole and JVisualVM

25 September 2017

How to connect to the Websphere-specific MBean server to configure the environment and monitor the applications?

Start JConsole with the following script:


# Change me!
export HOST=swpsws16
export IIOP_PORT=9811

export WAS_HOME=/home/daniel/IBM/WebSphere/AppServer

export PROVIDER=-Djava.naming.provider.url=corbaname:iiop:$HOST:$IIOP_PORT

export CLASSPATH=$CLASSPATH:$WAS_HOME/java/lib/tools.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.admin.client_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.ejb.thinclient_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.orb_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/java/lib/jconsole.jar

export URL=service:jmx:iiop://$HOST:$IIOP_PORT/jndi/JMXConnector

$WAS_HOME/java/bin/java -classpath $CLASSPATH $PROVIDER sun.tools.jconsole.JConsole $URL

Even nicer: Install VisualWAS plugin for JVisualVM.

  • Use "Add JMX Connection"

  • Use Connection-Type "Websphere"

  • For port, use SOAP_CONNECTOR_ADDRESS (default 8880)

Websphere and JVisualVM

25 September 2017

How to inspect a Websphere server via JVisualVM?

Go to "Application servers > SERVER-NAME > Java and Process management > Process Definition > Java Virtual Machine > Generic JVM arguments" and add the following JVM settings:

-Djavax.management.builder.initial= \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.port=1099 \
-Djava.rmi.server.hostname=<external-ip-or-hostname>

Providing an external IP or hostname via java.rmi.server.hostname was important for it to work.

Select "Add JMX Connection" in JVisualVM and enter the host plus the JMX port configured above (1099).

Jenkins in Docker using Docker

23 September 2017

Say you want to run Jenkins itself in docker. But the Jenkins build-jobs also use docker!?

Either you have to install docker in docker, or you let the Jenkins docker-client access the host’s docker-daemon.

  1. Map the unix socket into the Jenkins container:

    -v /var/run/docker.sock:/var/run/docker.sock
  2. But the jenkins user will not have permissions to access the socket by default. So, first check the GID of the group that owns the socket:

    getent group dockerroot
  3. Now create a group (the name is irrelevant; let's name it "docker") in the Jenkins container with the same GID and assign the jenkins user to it:

    sudo groupadd -g 982 docker
    sudo usermod -aG docker jenkins
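The GID lookup from step 2 is easy to script; the same getent parsing is shown here on the root group, since dockerroot only exists on the docker host:

```shell
# getent prints "name:x:GID:member,member"; field 3 is the numeric GID
# that has to be passed to groupadd -g inside the Jenkins container:
GID=$(getent group root | cut -d: -f3)
echo "group GID: $GID"
```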

ES6 with Nashorn in JDK9

14 June 2017

JDK9 is planning to incrementally support the ES6 features of JavaScript. In the current early-access builds (tested with 9-ea+170), major features like classes are not supported yet; but keywords like let/const, arrow functions and string-interpolation already work:

#!jjs --language=es6
"use strict";

let hello = (from, to) => print(`Hello from ${from} to ${to}`);

if ($EXEC('uname -n')) {
    let hostname = $OUT.trim();
    hello(hostname, 'daniel');
}

For details on what’s included by now, read JEP 292.

AWS ECS: Push a docker container

28 May 2017

Steps to push a docker image to the AWS container registry (ECR):

  1. Create a docker-repository with the name de.dplatz/abc; you will get a page with all the steps and coordinates for docker login, docker tag and docker push.

  2. From CLI run:

    aws ecr get-login --region eu-central-1
    docker tag de.dplatz/abc:latest <my-aws-url>/de.dplatz/abc:latest
    docker push <my-aws-url>/de.dplatz/abc:latest

See here for starting the container.

JDK9 HttpClient

20 May 2017

Required some clarification from the JDK team on how to access the new HttpClient API (which actually is incubating now):

$ ./jdk-9_168/bin/jshell --add-modules jdk.incubator.httpclient
|  Welcome to JShell -- Version 9-ea
|  For an introduction type: /help intro

jshell> import jdk.incubator.http.*;

jshell> import static jdk.incubator.http.HttpResponse.BodyHandler.*;

jshell> URI uri = new URI("http://openjdk.java.net/projects/jigsaw/");
uri ==> http://openjdk.java.net/projects/jigsaw/

jshell> HttpRequest request = HttpRequest.newBuilder(uri).build();
request ==> http://openjdk.java.net/projects/jigsaw/ GET

jshell> HttpResponse response = HttpClient.newBuilder().build().send(request, discard(null));
response ==> jdk.incubator.http.HttpResponseImpl@133814f

jshell> response.statusCode();
$6 ==> 200

I really like the jshell-integration in Netbeans; unfortunately, it does not yet allow setting commandline-flags for the started shells. I filed an issue and got a workaround for now.

Websphere Liberty Admin Console

12 May 2017

Install the feature:

$ bin/installUtility install adminCenter-1.0

Enable it in the server.xml, together with a keystore and an admin user:

<!-- Enable features -->
<featureManager>
    <feature>adminCenter-1.0</feature>
    <!-- ... -->
</featureManager>

<keyStore id="defaultKeyStore" password="admin123" />

<basicRegistry id="basic" realm="BasicRealm">
    <user name="admin" password="admin123" />
</basicRegistry>

After a restart, the Admin Center is available at the URL shown in the log:

[AUDIT   ] CWWKT0016I: Web application available (default_host): http://localhost:9090/adminCenter/


01 May 2017

strace -f -e trace=open,read,close,fstat java -jar Test.jar

Docker Rest API

01 May 2017

SSL keys are at /cygdrive/c/Users/<username>/.docker/machine/machines/default

 curl --insecure -v --cert cert.pem --key key.pem -X GET

Docker JVM Memory Settings

01 May 2017

Read this, this and this.

  • JDK9 has -XX:+UseCGroupMemoryLimitForHeap

  • JDK8 pre 131: Always specify -Xmx1024m and -XX:MaxMetaspaceSize

  • JDK8 since 131: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

Stacktrace in Eclipse Debugger

12 April 2017

How to see the stacktrace for an exception-variable within the eclipse debugger?

Go to Preferences > Java > Debug > Detail Formatters and add this for Throwable:

java.io.Writer stackTrace = new java.io.StringWriter();
java.io.PrintWriter printWriter = new java.io.PrintWriter(stackTrace);
printStackTrace(printWriter);
return getMessage() + "\n" + stackTrace;

Java debug-flags

22 March 2017

// socket (all platforms)
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000

// shared-memory (windows only)
-agentlib:jdwp=transport=dt_shmem,server=y,suspend=n,address=javadebug


07 March 2017

Monitor filesystem-changes:

while inotifywait -qr /dir/to/monitor; do
    rsync -avz /dir/to/monitor/ /dir/to/sync/to
done

List classes in Jar

29 January 2017

List all classes in a jar-file:

$ unzip -l MyJar.jar "*.class" | tail -n+4 | head -n-2 | tr -s ' ' | cut -d ' ' -f5 | tr / . | sed 's/\.class$//'

rsync tricks

20 January 2017

This command removes files that have been removed from the source directory but will not overwrite newer files in the destination:

$ rsync -avu --delete sourcedir/ /cygwin/e/destdir/

To rsync to another system with ssh over the net:

$ rsync -avu --delete -e ssh sourcedir/ username@machine:~/destdir/

Shell Alias-Expansion

17 January 2017

Say, you have defined an alias:

$ alias gg='git log --oneline --decorate --graph'

But when typing 'gg', wouldn’t it be nice to be able to expand the alias, so you can make a small modification to the args?

$ gg<Ctrl+Alt+e>

Say, you want to easily clear the screen; there is a shortcut Ctrl+L. But maybe you also always want to print the contents of the current directory: you can rebind the shortcut:

$ bind -x '"\C-l": clear; ls -l'

Java Version Strings

16 January 2017

For what JDK version is a class compiled?

$ javap -verbose MyClass.class | grep "major"

  • Java 5: major version 49

  • Java 6: major version 50

  • Java 7: major version 51

  • Java 8: major version 52
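javap simply reads bytes 6-7 of the class-file header, so you can get at the number without a JDK as well. A sketch that forges a minimal Java 8 header and decodes it (the octal escapes spell CA FE BA BE 00 00 00 34):

```shell
# Write the magic number, minor version 0 and major version 0x34 (= 52, i.e. Java 8):
printf '\312\376\272\276\000\000\000\064' > /tmp/Fake.class

# The major version is a big-endian 16-bit value at offset 6:
od -An -j6 -N2 -tu1 /tmp/Fake.class | awk '{print $1 * 256 + $2}'
```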

SSH Keys

13 January 2017

To connect to a remote-host without password-entry (for scripting):

# generate ssh keys for local (if not already done)
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <remote-host>
$ ssh <remote-host>

Maven Fat & Thin Jar

12 January 2017

Building a fat and a thin jar in one go:

                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
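Only the manifest transformer of the original config survived above. For context, a minimal shade-plugin configuration that yields both jars in one go might look like the following sketch; the plugin version and com.example.Main are assumptions, not from the original post. With shadedArtifactAttached=true the thin jar stays the main artifact and the fat jar is attached with the shaded classifier:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <!-- keep the thin jar as main artifact, attach the fat jar as *-shaded.jar -->
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.example.Main</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```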

Commandline HTTP-Server

10 January 2017

A very simple http-server:

while true ; do echo -e  "HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\n\n $(cat index.html)" |  nc -l localhost 1500; done

Older posts are available in the archive.