Deploying Java EE 8 Applications to Heroku

01 November 2018

I am currently developing a simple web-app that will most likely only be used by myself and maybe some friends. It is using Java EE 8 and also has an HTML/JavaScript UI that gives me the possibility to tinker with some modern browser-APIs like WebComponents, Shadow-DOM, etc.

As I like to leverage such hobby-projects to also try and learn new stuff, I was looking for a simple (and cheap) way to host this application in the cloud. Obviously, AWS, Azure and Google Cloud would be options if my focus were on learning something new about these cloud platforms. But this time I wanted to focus on the actual application and thus use something slightly more developer-friendly. In this post I will show how to deploy a Java EE 8 application on Heroku using TomEE and OpenLiberty.

As there are not many references on the internet that describe how to deploy Java EE applications on Heroku (specifically not an application-server-based deployment), I think this write-up might also be helpful to others.

Procfile and Fat Jar Solutions

From past experience I know that Heroku makes it simple to deploy to the cloud. It integrates nicely with Git, and deploying can be as simple as typing git push heroku master. Literally. Basically, you define a Procfile that tells Heroku how to build and deploy the application. If I wanted to use a fat-jar solution like Payara Micro or Thorntail, or just repackage as a fat-jar, this would work easily. Heroku detects popular build-systems like Maven and Gradle and builds the application; the Procfile just needs to contain the command-line to run the Jar. See here for the steps.

This is not an option for me as I want to do the main development on a regular application-server; deploying to production with a different model than what is used in development does not sound like a great idea. Why do the main development on a regular application-server? Because the build is much faster than when it needs to download and package a 50 MB Jar-file.

Docker Container Registry

As Docker plays nicely with Java EE application-servers, the next logical step is to ask if you can somehow host a Docker container on Heroku. And you can. They have a Docker container registry where you can easily push images. Read the steps here. The "downside" for me is that it does not have as nice a workflow as you are accustomed to from Heroku. Instead of doing git push heroku master, you now have to build locally or on some other build-server and then basically do a docker push. This can easily lead to situations where you just start fiddling around and at some point end up with a deployed container that does not represent a specific commit. I am not saying that this has to be a big problem for a hobby-project, but why not aim for a better solution?

Docker-based Build and Deploy via heroku.yml

The service I finally opted for is still in public beta but promises to combine the easy workflow of git push heroku master with Docker. The idea is to use Docker for building and deploying your application. A heroku.yml is used to define which images to build and which containers to run. The heroku.yml can look as simple as this:

build:
  docker:
    web: Dockerfile

INFO: Note that you can find the whole project on my GitHub repository.

This just means that during the build-stage an image named web will be built based on the Dockerfile in the root of the project. What command will be used to run it? By default, whatever is defined via CMD in the Dockerfile.
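If you prefer to be explicit, heroku.yml also supports a run-section that defines the command per process-type. A minimal sketch, assuming the start-script that the Dockerfile further below packages:

build:
  docker:
    web: Dockerfile
run:
  web: /usr/local/run_tomee.sh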

How to set up the Dockerfile? As it is needed both to build our application (via Gradle or Maven) and to deploy it, multi-stage builds are the answer.

FROM openjdk:8-jdk-alpine as build
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN ./gradlew build

FROM tomee:8-jre-8.0.0-M1-plume
COPY src/main/tomee/run_tomee.sh /usr/local/
COPY src/main/tomee/config/server.xml /usr/local/tomee/conf/
COPY --from=0 /usr/src/app/build/libs/heroku-javaee-starter.war /usr/local/tomee/webapps/
CMD /usr/local/run_tomee.sh

In the first stage we use a plain OpenJDK-image to build our WAR-file with Gradle. The second stage is based on an official TomEE base-image and additionally contains the WAR-file built in the first stage. Note that we also package a dedicated shell-script to start TomEE; and the server.xml is mainly included to read the HTTP-port from an environment-variable.

Heroku works in the following way: when the container is started, an environment-variable named PORT is defined. It is the responsibility of the application to use this port. For TomEE, I was only able to do this by reading the environment-variable in the shell and then setting it as a Java system-property which is read in the server.xml. In contrast, OpenLiberty directly allows accessing environment-variables in its configuration-file (which is coincidentally also called server.xml).
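To illustrate, here is a minimal sketch of how such a start-script can look; the paths match the Dockerfile above, but the property-name http.port is an assumption and my actual run_tomee.sh may differ:

#!/bin/sh
# Read Heroku's PORT environment-variable (fall back to 8080 for local runs)
# and hand it to TomEE as a Java system-property.
export CATALINA_OPTS="-Dhttp.port=${PORT:-8080}"
# Start TomEE in the foreground so the container keeps running.
exec /usr/local/tomee/bin/catalina.sh run

The server.xml can then pick up the system-property in its HTTP-connector, as Tomcat/TomEE substitutes ${...} placeholders from system-properties: <Connector port="${http.port}" protocol="HTTP/1.1"/>.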

I will assume that you have a general understanding of how to build a Java EE WAR-file with Gradle or Maven; there is nothing special here.
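For reference, a minimal build.gradle for such a WAR-build could look like this (a sketch assuming the standard war plugin and a provided-scope dependency on the Java EE 8 API):

plugins {
    id 'war'
}

repositories {
    mavenCentral()
}

dependencies {
    // compile against the Java EE 8 API; the application-server provides it at runtime
    providedCompile 'javax:javaee-api:8.0'
}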

Deploy to TomEE on Heroku

Now let’s see how we can get this deployed to Heroku.

  1. Create an account for Heroku, download/install the Heroku CLI and run heroku login.

  2. Get the Heroku Java EE Starter Project from my GitHub Repo.

    git clone https://github.com/38leinaD/heroku-javaee-starter.git
    cd heroku-javaee-starter
  3. Create an application at Heroku and set the Stack so we can work with Docker and the heroku.yml.

    heroku create
    heroku stack:set container
  4. And now the only step that you will need to repeat later during development; and it is the reason why it is so nice to work with Heroku in the first place:

    git push heroku master

    This will push your code to Heroku and trigger the build and deployment of the application.

  5. You might remember from earlier that we gave the container the name web in the heroku.yml. By convention, the container with this name is automatically scaled to one instance. If you named the container differently (let’s assume myapp), you need to run heroku ps:scale myapp=1 manually. Either way, you can check with heroku ps what processes/containers are running for your application.

  6. If you want to see the actual stdout/log of the container starting up, you can use heroku logs --tail.

  7. Once the application-server is started, you can run heroku open and it will open the URL under which your application is deployed on Heroku in your default browser.

Deploy to OpenLiberty on Heroku

What changes are needed to deploy to a different application-server, e.g. OpenLiberty? For one, a different Dockerfile that packages the WAR into an OpenLiberty container. Which Dockerfile is used is referenced in the heroku.yml; you can simply change it to Dockerfile.liberty if you want to try it out. As already stated before, setting the HTTP-port from an environment-variable can easily be done from OpenLiberty’s server.xml.
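For illustration, the relevant part of such an OpenLiberty server.xml could look like this (a minimal sketch; my actual configuration contains more):

<server description="hello liberty server">
    <featureManager>
        <feature>javaee-8.0</feature>
    </featureManager>
    <!-- OpenLiberty resolves environment-variables via the ${env.NAME} syntax -->
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="${env.PORT}" />
</server>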

To try it out, simply change the heroku.yml and run:

git add heroku.yml
git commit -m "Deploy to OpenLiberty this time."
git push heroku master

You can monitor the startup of OpenLiberty with heroku logs --tail.

Summary

I hope I was able to convince you that using Heroku for deploying Java EE applications is an easy option, at least for hobby-projects. It only takes seconds to deploy an application and share it with family, friends or testers. :-)

The nice thing about the tight integration with Docker and Git is that you don’t have a lot of proprietary content in your project. Except for the heroku.yml there is nothing. If your application grows, you can easily move to AWS or another cloud-provider.

Building native Java Applications with GraalVM

20 October 2018

Introduction

GraalVM is an experimental JVM featuring a new just-in-time (JIT) compiler that might some day replace HotSpot. One notable feature is the ability to also use this compiler ahead-of-time to build native applications that do not require a JVM to be installed on the system. It is just a native application, like an .exe under Windows.

There are other solutions that allow you to bundle your Java application as a "kind of" native app (e.g. including the JRE in some bundled form), but the native application built by GraalVM has better performance with regard to startup-time. Where normal Java applications are slow on startup because the JIT needs to warm up and optimize the code, the native application built by GraalVM is orders of magnitude faster. In real numbers: on my system, the below application started via java -jar took 200 milliseconds, whereas the native application took only 1 millisecond.

Hello Native

Here are the steps to build and run a simple commandline-app via GraalVM.

Important
You need to have the native development-tools of your OS installed. For me on CentOS, this is:
  • glibc-devel

  • zlib-devel

  • gcc

  • glibc-static

  • zlib-static

Now the steps:

  1. Get GraalVM. I use SDKMan to download and manage my Java versions. Simply run:

    sdk install java 1.0.0-rc7-graal

    SDKMan will ask if it should set graal as the default Java-version. I would not do so; rather, set it manually in the current shell:

    export JAVA_HOME=/home/daniel/.sdkman/candidates/java/1.0.0-rc7-graal
    export PATH="$JAVA_HOME/bin:$PATH"
  2. Create a simple Java-project; e.g. via Gradle:

    mkdir graal-native && cd graal-native
    gradle init --type java-application
  3. Build the jar via Gradle:

    gradle build
  4. Build the native image/application with native-image utility from GraalVM.

    native-image \
        -cp build/libs/graal-native.jar \
        -H:+ReportUnsupportedElementsAtRuntime \
        --static --no-server App

    Note that the gradle-build built the standard Jar to build/libs/graal-native.jar. Also, the fully qualified class-name of the class with the main-method is App.

  5. A native executable with the same classname (only lower-case) should have been built. Run it with ./app.

Reflective access

Building a native image from your Java-application will limit the ability to use reflection. Read this for the limitations of GraalVM and for the cases where a special JSON-file with metadata is required.

Let’s create a small example in the App class:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class App {
	public String getGreeting() {
		return "Hello world.";
	}

	public static void main(String[] args) {
		App app = new App();
		try {
			Method greetMethod = App.class.getMethod("getGreeting", new Class[] {});
			System.out.println(greetMethod.invoke(app, new Object[] {}));
		} catch (NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
				| InvocationTargetException e) {
			System.err.println("Something went wrong...");
			e.printStackTrace();
		}

	}
}

Building the JAR and creating a native-image should work like before. Running the app should also work, due to the automatic detection feature: the compiler can intercept the reflection-calls and replace them with native calls, because "getGreeting" is a constant String.

Let’s see if it will still work when we provide the method-name as a commandline-argument to the application:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class App {
	public String getGreeting() {
		return "Hello world.";
	}

	public static void main(String[] args) {
		String methodName = args[0];
		System.out.println("Method accessed reflectively: " + methodName);

		App app = new App();
		try {
			Method greetMethod = App.class.getMethod(methodName, new Class[] {});
			System.out.println(greetMethod.invoke(app, new Object[] {}));
		} catch (NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
				| InvocationTargetException e) {
			System.err.println("Something went wrong...");
			e.printStackTrace();
		}

	}
}

We build the native image like before. But running the app will fail:

> ./app getGreeting
Method accessed reflectively: getGreeting
Something went wrong...
java.lang.NoSuchMethodException: App.getGreeting()
	at java.lang.Throwable.<init>(Throwable.java:265)
	at java.lang.Exception.<init>(Exception.java:66)
	at java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:56)
	at java.lang.NoSuchMethodException.<init>(NoSuchMethodException.java:51)
	at java.lang.Class.getMethod(Class.java:1786)
	at App.main(App.java:15)
	at com.oracle.svm.core.JavaMainWrapper.run(JavaMainWrapper.java:163)

Let’s create a file called reflectionconfig.json with the necessary meta-information for the App class:

[
  {
    "name" : "App",
    "methods" : [
      { "name" : "getGreeting", "parameterTypes" : [] }
    ]
  }
]

Build the native application with the meta-data file:

native-image \
    -cp build/libs/graal-native.jar \
    -H:ReflectionConfigurationFiles=reflectionconfig.json \
    -H:+ReportUnsupportedElementsAtRuntime \
    --static --no-server App

Run the application again, and you should see it works now:

> ./app getGreeting
Method accessed reflectively: getGreeting
Hello world.

Conclusion

GraalVM is certainly a nice piece of research. Actually, more than that: according to Top 10 Things To Do With GraalVM, it is used in production by Twitter. I will be trying out the native integration with JavaScript/NodeJS in a future post. As this post is mainly for my own records, I might have skimmed over some important details. You might want to read this excellent article on running Netty on GraalVM for a more thorough write-up.

Using Java Annotation Processors from Gradle and Eclipse

14 October 2018

This post describes how to use/reference a Java Annotation Processor from your Gradle-based Java project. The main challenge is the usage from within Eclipse which requires some additional steps.

Let’s assume we want to use Google’s auto-service annotation-processor, which generates META-INF/services/ files for service-providers annotated with @AutoService.
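As a quick illustration of what the processor does: annotating a service-provider like below lets auto-service generate the matching META-INF/services/com.example.Greeter file at compile-time (class- and interface-names here are made up):

package com.example;

import com.google.auto.service.AutoService;

// auto-service generates META-INF/services/com.example.Greeter
// containing the single line "com.example.EnglishGreeter".
@AutoService(Greeter.class)
public class EnglishGreeter implements Greeter {

    @Override
    public String greet() {
        return "Hello!";
    }
}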

Basic Setup

Adjust your build.gradle to reference the Gradle APT plugin and add a dependency.

plugins {
    id "net.ltgt.apt-eclipse" version "0.18"
}

dependencies {
    compileOnly 'com.google.auto.service:auto-service:1.0-rc4'
    annotationProcessor 'com.google.auto.service:auto-service:1.0-rc4'
}

The plugin net.ltgt.apt-eclipse will also pull in net.ltgt.apt (which is independent of any IDE) and the standard eclipse plugin.

The annotation-processor is now properly called during compilation if you run gradle build. The only problem left is how to run it from within Eclipse.

Eclipse Integration

If you carefully check the README.md, you will see that when using the Buildship plugin in Eclipse (which should be the default because Eclipse ships with it) you have to perform some manual steps:

When using Buildship, you’ll have to manually run the eclipseJdtApt and eclipseFactorypath tasks to generate the Eclipse configuration files, then either run the eclipseJdt task or manually enable annotation processing: in the project properties → Java Compiler → Annotation Processing, check Enable Annotation Processing. Note that while all those tasks are depended on by the eclipse task, that one is incompatible with Buildship, so you have to explicitly run the two or three aforementioned tasks and not run the eclipse task.

What you have to do, is run the following command on your project:

gradle eclipseJdtApt eclipseFactorypath eclipseJdt

From within Eclipse, you now have to right-click the project and select Gradle / Refresh Gradle Project; afterwards, Project / Clean. With this clean build, the annotation-processor should be running.

In case it does not work, you can double-check if the project was configured properly by right-clicking the project and going to Properties / Java Compiler / Annotation Processing / Factory Path; the auto-service JAR-file should be referenced here.

At this point, your annotation-processor should work fine, also from within Eclipse. But in case your annotation-processor generates Java classes, you will not see them in Eclipse because they are generated to build/generated/source/apt/main.

I have found two ways to deal with it.

  • Either generate them to src/main/generated in case you have some need to also check them into source-control.

    compileJava {
    	options.annotationProcessorGeneratedSourcesDirectory = file("${projectDir}/src/main/generated")
    }
  • Or, make the build-subfolder a source-folder in Eclipse:

    eclipse {
        classpath {
            file.beforeMerged { cp ->
                cp.entries.add( new org.gradle.plugins.ide.eclipse.model.SourceFolder('build/generated/source/apt/main', null) )
            }
        }
    }

In the future, I want to be able to quickly write an annotation-processor when needed. I have put a Gradle project containing a minimal annotation-processor, including a unit-test, in my GitHub repo.

JEP 330: Launch Single-File Source-Code Programs

24 September 2018

Java 11 includes JEP 330, which allows using Java source-files like shell-scripts.

Create a file named util with the following content:

#!/usr/bin/java --source 11

public class Util {
	public static void main (String[] args) {
		System.out.println("Hello " + args[0] + "!");
	}
}

Make sure it is executable by running chmod u+x util.

Running the script will compile it on the fly:

> ./util Daniel
Hello Daniel!

As of now, editors like Visual Studio Code don’t recognize such files as Java files automatically. This means code-completion and syntax highlighting do not work without manual steps. Let’s hope this gets fixed soon after the release of Java 11.

OpenJFX 11

23 September 2018

As of Java 11, JavaFX is no longer packaged with the runtime but is a separate module. Go to the OpenJFX website for "Getting Started" docs. In this post, I will provide a minimal setup for building and testing an OpenJFX 11 application. The purpose is not to describe the steps in detail, but to have some Gradle- and code-samples at hand for myself.

Of course, you will need Java 11. As of this writing, Java 11 is not released so you will need to get an early-access version.

The Application-class looks like this:

package sample;

import java.io.IOException;

import javafx.application.Application;
import javafx.fxml.FXML;
import javafx.fxml.FXMLLoader;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;
import javafx.stage.Stage;

public class HelloFX extends Application {

	public static class Controller {

		@FXML
		TextField inputField;

		@FXML
		Label label;

		@FXML
		Button applyButton;

		public void applyButtonClicked() {
			label.setText(inputField.getText());
		}
	}

	@Override
	public void start(Stage stage) throws IOException {
		Parent root = FXMLLoader.load(getClass().getResource("/sample.fxml"));
		Scene scene = new Scene(root, 640, 480);
		stage.setScene(scene);
		stage.show();
	}

	public static void main(String[] args) {
		launch();
	}
}

The controller is embedded to simplify the example. It is used from within the sample.fxml under src/main/resources.

<?xml version="1.0" encoding="UTF-8"?>

<?import javafx.scene.control.Button?>
<?import javafx.scene.control.Label?>
<?import javafx.scene.control.TextField?>
<?import javafx.scene.layout.ColumnConstraints?>
<?import javafx.scene.layout.GridPane?>
<?import javafx.scene.layout.RowConstraints?>

<GridPane alignment="center" hgap="10" vgap="10" xmlns="http://javafx.com/javafx/10.0.1" xmlns:fx="http://javafx.com/fxml/1" fx:controller="sample.HelloFX$Controller">
   <children>
            <TextField id="input" fx:id="inputField" layoutX="15.0" layoutY="25.0" />
            <Label id="output" fx:id="label" layoutX="15.0" layoutY="84.0" text="TEXT GOES HERE" GridPane.rowIndex="1" />
            <Button id="action" fx:id="applyButton" layoutX="124.0" layoutY="160.0" mnemonicParsing="false" onAction="#applyButtonClicked" text="Apply" GridPane.rowIndex="2" />
   </children>
   <columnConstraints>
      <ColumnConstraints />
   </columnConstraints>
   <rowConstraints>
      <RowConstraints />
      <RowConstraints minHeight="10.0" prefHeight="30.0" />
      <RowConstraints minHeight="10.0" prefHeight="30.0" />
   </rowConstraints>
</GridPane>

Of course, we want to write tested code. So, we can write a UI-test using TestFX.

package sample;

import java.io.IOException;

import org.junit.jupiter.api.Test;
import org.testfx.api.FxAssert;
import org.testfx.framework.junit5.ApplicationTest;
import org.testfx.matcher.control.LabeledMatchers;

import javafx.stage.Stage;

public class HelloFXTest extends ApplicationTest {

	@Override
	public void start(Stage stage) throws IOException {
		new HelloFX().start(stage);
	}

	@Test
	public void should_apply_input_to_label() {
		// given:
		clickOn("#input");
		write("123");

		// when:
		clickOn("#action");

		// then:
		FxAssert.verifyThat("#output", LabeledMatchers.hasText("123"));
	}
}

Now, the build.gradle that ties it all together.

apply plugin: 'application'

def currentOS = org.gradle.internal.os.OperatingSystem.current()
def platform
if (currentOS.isWindows()) {
    platform = 'win'
} else if (currentOS.isLinux()) {
    platform = 'linux'
} else if (currentOS.isMacOsX()) {
    platform = 'mac'
}

repositories {
    mavenCentral()
}

dependencies {
    // we need to depend on the platform-specific libraries of openjfx
    compile "org.openjfx:javafx-base:11:${platform}"
    compile "org.openjfx:javafx-graphics:11:${platform}"
    compile "org.openjfx:javafx-controls:11:${platform}"
    compile "org.openjfx:javafx-fxml:11:${platform}"

    // junit 5
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.3.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.3.1'

    // testfx with junit5 binding
    testImplementation 'org.testfx:testfx-core:4.0.14-alpha'
    testImplementation 'org.testfx:testfx-junit5:4.0.14-alpha'
}

// add javafx modules to module-path during compile and runtime
compileJava {
    doFirst {
        options.compilerArgs = [
                '--module-path', classpath.asPath,
                '--add-modules', 'javafx.controls,javafx.fxml'
        ]
    }
}

run {
    doFirst {
        jvmArgs = [
                '--module-path', classpath.asPath,
                '--add-modules', 'javafx.controls,javafx.fxml'
        ]
    }
}

test {
    // use junit5 engine in gradle
    useJUnitPlatform()
    // log all tests
    testLogging {
        events 'PASSED', 'FAILED', 'SKIPPED'
    }
    // log output of tests; enable when needed
    //test.testLogging.showStandardStreams = true
}

mainClassName='sample.HelloFX'

Some comments are given as part of the code, so no further explanation is given here.

Execute gradle test to run the tests. Execute gradle run to just run the application.

KumuluzEE for Standalone Java EE Microservices

09 September 2018

I have to admit that I have never been too excited about frameworks like KumuluzEE, Thorntail (previously Wildfly Swarm), Payara Micro, etc. Regular application-servers that offer a separation between platform and application-logic feel more natural; even more so now with Docker, as it can reduce the image-size significantly.

But in certain situations I can see that it is useful to have a standalone Java application which can be started with java -jar instead of requiring an application-server. For this reason, I felt the need to give these frameworks/platforms a try.

In this post, I would like to start with KumuluzEE, which advertises the easy migration of Java EE applications to cloud-native microservices on its website. The advantage, like with Thorntail, is that I can code against the regular Java EE APIs and thus do not have to learn a new framework.

Below, I will describe the main things that need to be done to a Maven-based Java EE project to migrate it to KumuluzEE. You can find the final version of the project in my Git repo.

Steps

As the generated artifact is an Uber-Jar and not a WAR-file, change the packaging-type to 'jar'.

<packaging>jar</packaging>

Add the dependencies to KumuluzEE and remove the dependency on the Java EE APIs (they will be included transitively). This is already what I don’t like at all: I have to fiddle with and include each Java EE spec individually; there is no way to just depend on all parts of the spec.

<dependencies>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-core</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-servlet-jetty</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-jsp-jetty</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-el-uel</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-jax-rs-jersey</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-cdi-weld</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-jsf-mojarra</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-jpa-eclipselink</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-bean-validation-hibernate-validator</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-json-p-jsonp</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-jta-narayana</artifactId>
    </dependency>
    <dependency>
        <groupId>com.kumuluz.ee</groupId>
        <artifactId>kumuluzee-microProfile-1.2</artifactId>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.kumuluz.ee</groupId>
            <artifactId>kumuluzee-bom</artifactId>
            <version>3.0.0-SNAPSHOT</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

As the application is packaged as a JAR-file and not as a WAR, a different structure is required in the build. Instead of having a src/main/webapp, you have to place it under src/main/resources/webapp. Also, files like beans.xml and persistence.xml have to be placed under src/main/resources/META-INF instead of src/main/resources/webapp/WEB-INF. Below you find the basic structure.

.
└── src
    └── main
        ├── java
        └── resources
            ├── META-INF
            │   └── beans.xml
            └── webapp
                ├── index.xhtml
                └── WEB-INF
                    ├── faces-config.xml
                    └── web.xml

I also had to remove the usage of EJBs as they are not available in KumuluzEE; which is understandable, as EJB is a big specification that is being replaced step-by-step by CDI-based mechanisms like @Transactional.
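For example, where I previously had a @Stateless EJB for transactional data-access, the CDI-based equivalent looks roughly like this (a sketch; the Car entity-name is made up):

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@ApplicationScoped
@Transactional // CDI-based replacement for the implicit EJB transaction-handling
public class CarRepository {

    @PersistenceContext
    private EntityManager em;

    public void save(Car car) {
        em.persist(car);
    }
}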

It took me quite some fiddling to get the app running; one of my main issues was that I had Jersey as a transitive dependency of KumuluzEE and also as a test-dependency (as a test-client to invoke the JAX-RS endpoint). The version difference influenced the versions in my Uber-Jar. In the end, I see this as a problem in Maven; but nevertheless, this would not have happened when just coding against the Java EE API and deploying on an app-server.

Before all the Maven fiddling, I also tried to create a KumuluzEE-compatible Uber-Jar with Gradle but gave up. I created an issue and moved on to Maven instead.

Once I had all my issues resolved, the application itself was running smoothly. Having gone through the motions once, I feel like it could be a viable alternative for developing small microservices or standalone-apps that can be sold/packaged as products but should not require the installation of an app-server.

I also appreciate the availability of extensions like service discovery with Consul, access-management with Keycloak, streaming with Kafka and full support for MicroProfile 1.2. For sure, I will consider it the next time I feel the need to develop a small/standalone Java application. Small is relative though; creating the Uber-Jar and using CDI, JAX-RS, JSF and JPA adds roughly 26 MB to the application.

Building Self-Contained and Configurable Java EE Applications

25 June 2018

In this post I would like to outline how to build a self-contained Java EE application (WAR), including JPA via a custom JDBC-driver, but with zero application-server configuration/customizing. The goal is to drop the Java EE application into a vanilla application-server. Zero configuration outside the WAR-archive. I will be using the latest Java EE 8-compliant application-servers but that does not mean you cannot use a Java EE 7-compliant server.

To achieve our goal, I will be leveraging a feature of Java EE 7 that I always found interesting but did not use very often due to its limitations: @DataSourceDefinition. It is a way of declaring a datasource and connection-pool within your application via annotation, instead of having to configure it outside the application via non-portable configuration-scripts for the application-server of your choice. E.g. on JBoss you would usually configure your datasource in the standalone*.xml; either directly or via a JBoss .cli-script. Below you find an example of how to define a datasource via annotation in a portable way:

@DataSourceDefinition(
        name = "java:app/jdbc/primary",
        className = "org.postgresql.xa.PGXADataSource",
        user = "postgres",
        password = "postgres",
        serverName = "localhost",
        portNumber = 5432,
        databaseName = "postgres")

To me, this was seldom useful because you hard-code your database-credentials. There was a proposal for Java EE 7 to support password-aliasing, but it never made it into the spec. In the past, I only used it for small applications and proofs-of-concept.

Until now! A Twitter-discussion led me to realize that at least Wildfly and Payara come with vendor-specific features to do variable-replacements in the annotation-values. But let’s start from the beginning.

Datasource-definition and JPA

Below you find a useful pattern to define and produce a datasource within your application:

@Singleton
@DataSourceDefinition(
        name = "java:app/jdbc/primary",
        className = "org.postgresql.xa.PGXADataSource",
        user = "postgres",
        password = "postgres",
        serverName = "postgres",
        portNumber = 5432,
        databaseName = "postgres",
        minPoolSize = 10,
        maxPoolSize = 50)
public class DatasourceProducer {

	@Resource(lookup="java:app/jdbc/primary")
	DataSource ds;

	@Produces
	public DataSource getDatasource() {
		return ds;
	}
}

The @DataSourceDefinition annotation is sufficient here to bind the datasource for PostgreSQL under the global JNDI-name java:app/jdbc/primary.

The usage of @Resource and @Produces is just additional code that exposes the datasource and makes it injectable in other managed beans via @Inject DataSource ds. But for JPA, this is not needed. What we need is a persistence.xml that uses the same JNDI-name:

<?xml version="1.0" encoding="UTF-8"?>
<persistence
    version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="DefaultPU" transaction-type="JTA">
        <jta-data-source>java:app/jdbc/primary</jta-data-source>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
            <property name="javax.persistence.schema-generation.scripts.action" value="drop-and-create" />
            <property name="javax.persistence.schema-generation.scripts.create-target" value="schemaCreate.ddl" />
            <property name="javax.persistence.schema-generation.scripts.drop-target" value="schemaDrop.ddl" />

            <property name="eclipselink.logging.level.sql" value="FINE" />
            <property name="eclipselink.logging.level" value="FINE" />

            <property name="hibernate.show_sql" value="true" />
            <property name="hibernate.format_sql" value="true" />
        </properties>
    </persistence-unit>
</persistence>

From here on, it is plain JPA: define some entity and inject the EntityManager via @PersistenceContext EntityManager em; to interact with JPA.
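To make this concrete, a minimal sketch of an entity plus a bean interacting with JPA could look like this (the demo-project referenced below uses a Car entity; the repository-bean here is simplified):

import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Car.java: a plain JPA-entity
@Entity
public class Car {

    @Id
    @GeneratedValue
    private Long id;

    private String model;

    // getters and setters omitted for brevity
}

// CarRepository.java: uses the EntityManager bound via the persistence.xml above
@Stateless
public class CarRepository {

    @PersistenceContext
    private EntityManager em;

    public Car create(Car car) {
        em.persist(car);
        return car;
    }
}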

Packaging of the JDBC-driver

You might have noticed that the @DataSourceDefinition references the JDBC-driver-class org.postgresql.xa.PGXADataSource. Obviously, it has to be available for the application so it can connect to the database. This can be achieved by placing the JDBC-driver in the application-server. E.g. under Wildfly, you register the JDBC-driver as a module. But what we want is a self-contained application where the JDBC-driver comes within the application’s web-archive (WAR). This is very simple to achieve by adding a runtime-dependency on the JDBC-driver. Your favorite build-tool should support it. In Gradle, it is done like this:

dependencies {
    providedCompile 'javax:javaee-api:8.0'
    runtime 'org.postgresql:postgresql:9.4.1212'
}

Dynamic Configuration

What we have now is a self-contained Java EE application-archive (WAR), but the connection to the database and the credentials are hard-coded in the annotation-properties. To make this really useful, we have to be able to overwrite these values for each stage and deployment. I.e. the database-credentials for the QA-environment’s database will be different than for production. Unfortunately, there is no portable/standard way. But if you are willing to commit to a specific application-server, it is possible. A Twitter-discussion led me to the documentation for Payara and Wildfly, both supporting this feature in some way.

Payara

So, for Payara we find the documentation here. Note that we will have to modify the annotation-values like this to read from environment variables:

@DataSourceDefinition(
        name = "java:app/jdbc/primary",
        className = "org.postgresql.xa.PGXADataSource",
        user = "${ENV=DB_USER}",
        password = "${ENV=DB_PASSWORD}",
        serverName = "${ENV=DB_SERVERNAME}",
        portNumber = 5432,
        databaseName = "${ENV=DB_DATABASENAME}",
        minPoolSize = 10,
        maxPoolSize = 50)

You can find this as a working Gradle-project plus Docker-Compose environment on GitHub. The steps are very simple:

git clone https://github.com/38leinaD/jee-samples.git
cd jee-samples/datasource-definition/cars
./gradlew build
docker-compose -f docker-compose.payara.yml up

When the server is started, you can send the below request to create a new row in a database-table:

curl -i -X POST -d '{"model": "tesla"}' -H "Content-Type: application/json" http://localhost:8080/cars/resources/cars

If you are wondering where the values like ${ENV=DB_USER} are set, check the docker-compose.payara.yml.

Wildfly

So, how about Wildfly?

For Wildfly, you can find it under "Annotation Property Replacement" in the admin-guide.

First, we have to enable the variable-replacement feature in the standalone*.xml; it is not enabled by default.

<subsystem xmlns="urn:jboss:domain:ee:4.0">
    <annotation-property-replacement>true</annotation-property-replacement>
    <!-- ... -->
</subsystem>

So, technically, we still have to modify the application-server’s standalone*.xml in this case.

But then, you can use annotation-properties in the format ${<environment-variable>:<default-value>}:

@DataSourceDefinition(
    name = "java:app/jdbc/primary",
    className = "org.postgresql.xa.PGXADataSource",
    user = "${DB_USER:postgres}",
    password = "${DB_PASSWORD:postgres}",
    serverName = "${DB_SERVERNAME:postgres}",
    portNumber = 5432,
    databaseName = "${DB_DATABASENAME:postgres}",
    minPoolSize = 10,
    maxPoolSize = 50)

If you try this, you might notice the following exception:

Caused by: org.postgresql.util.PSQLException: FATAL: role "${DB_USER:postgres}" does not exist
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
	at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2586)
	at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:113)
	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:52)
	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:216)
	at org.postgresql.Driver.makeConnection(Driver.java:404)
	at org.postgresql.Driver.connect(Driver.java:272)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:247)
	at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:86)
	at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:48)
	at org.jboss.jca.adapters.jdbc.xa.XAManagedConnectionFactory.getXAManagedConnection(XAManagedConnectionFactory.java:515)
	... 133 more

It seems there is a bug in the latest Wildfly that does not allow using variables for the user/password properties. For now, we will continue with user and password being hardcoded and only the serverName and databaseName as dynamic values:

@DataSourceDefinition(
    name = "java:app/jdbc/primary",
    className = "org.postgresql.xa.PGXADataSource",
    user = "postgres",
    password = "postgres",
    serverName = "${DB_SERVERNAME:postgres}",
    portNumber = 5432,
    databaseName = "${DB_DATABASENAME:postgres}",
    minPoolSize = 10,
    maxPoolSize = 50)

This works without any issues if the defaults match your environment. Explicitly overwriting these values can be achieved via Java’s system-properties, e.g. -DDB_SERVERNAME=postgres1 on the commandline. See docker-compose.wildfly.yml for a complete example. Before you can run this Wildfly-setup in the demo-application, you need to uncomment the right annotation in DatasourceProducer.java. The default setup is for Payara.

Liberty

Liberty does not have support for variables yet, but there is interest and an issue has been filed.

Conclusion

If you make a choice for either Payara or Wildfly, you are able to build a truly self-contained Java EE application. We have seen how to achieve this for a WAR-archive leveraging JPA or plain JDBC. The JDBC-driver is contained within the WAR-archive, and configuration for the datasource can be injected from the outside via environment-variables or Java system-properties.

Payara and Wildfly offer slightly different mechanisms and syntax. Payara shines because it does not require any additional application-server config. But we cannot specify defaults in the annotation-values and always need to provide environment-variables from the outside.

Wildfly allows setting default-values on the annotation-properties. This makes it possible to deploy e.g. in a development-environment without the need to set any environment-variables. A minor disadvantage is that the default configuration does not have the annotation-property-replacement enabled. So, the only vendor-specific config that is required is the enabling of this feature. Also, this mechanism is currently marred by a bug: overwriting the user/password is not working at the time of writing.

With this, both application-servers offer a useful feature for cloud-native applications. Unfortunately, you have to decide on a specific application-server to leverage it. But standardization-efforts are already on their way. The above discussion on Twitter has already been brought over to the Jakarta EE mailing-list. Feel free to join the discussion if you think this is a useful feature that should be standardized.

Post Mortem

Some time after writing this article, I noticed that the OmniFaces library comes with a nice workaround via a wrapper-datasource that reads all the wrapped datasource’s configuration from a config-file.

Arjan Tijms, who is one of the creators of the library, has described the implementation in detail on his blog.

Checkstyle Configuration from External JAR

23 June 2018

In a previous post I have described the minimal configuration to get Checkstyle working with Gradle. What I did not like is that I have to place the checkstyle.xml in my project. Assuming I stick with the standard checkstyle.xml from Google or Sun (or I have a corporate one), I do not want to place it in each and every repo.

What I found now is that Gradle supports referencing resources from within published artifacts. In the below configuration, the google_checks.xml is referenced directly from the published artifact com.puppycrawl.tools:checkstyle:8.10.1.

apply plugin: 'checkstyle'

configurations {
    checkstyleConfig
}
def versions = [
    checkstyle: '8.10.1',
]
dependencies {
    checkstyleConfig ("com.puppycrawl.tools:checkstyle:${versions.checkstyle}") {
        transitive = false
    }
}
checkstyle {
    showViolations = true
    ignoreFailures = false
    toolVersion = "${versions.checkstyle}"
    config = resources.text.fromArchiveEntry(configurations.checkstyleConfig, 'google_checks.xml')
}
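With this in place, the checks run as part of gradle check; the checkstyle plugin also adds per-source-set tasks that can be invoked directly:

gradle checkstyleMain checkstyleTest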

The example is derived from the official Gradle docs.

Even simpler Arquillian Chameleon usage with Gradle

11 June 2018

In a previous post I have described how easy it has become to use Arquillian via the Chameleon extension. The only "complex" part that’s left is the @Deployment-annotated method specifying the deployment via ShrinkWrap.

What exists for this is the @MavenBuild-annotation. It allows triggering a Maven-build and using the generated artifact. Usually, this would be the regularly built EAR or WAR-file as the deployment; which is fine in a lot of situations. Unfortunately, there is no @GradleBuild-annotation today. But there is the @File-annotation to just reference any EAR or WAR on the filesystem; assuming it was previously built by the Gradle-build, we can just reference the artifact.

@RunWith(ArquillianChameleon.class)
@File("build/libs/hello.war")
@ChameleonTarget(value = "wildfly:11.0.0.Final:managed")
public class HelloServiceIT {

    @Inject
    private HelloService service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello", service.hello());
    }
}

Note that there is no @Deployment-annotated method. The build/libs/hello.war is built with the normal Gradle build task. If we set up our integrationTest-task like this, we can require the build-task as a dependency:

test {
    // Do not run integration-tests having suffix 'IT'
    include '**/*Test.class'
}

dependencies {
    testCompile 'org.arquillian.container:arquillian-chameleon-junit-container-starter:1.0.0.CR2'
    testCompile 'org.arquillian.container:arquillian-chameleon-file-deployment:1.0.0.CR2'
}

task integrationTest(type: Test) {
    group 'verification'
    description 'Run integration-tests'
    dependsOn 'build'
    include '**/*IT.class'
}

Run it with gradle integrationTest.

If you are wondering what other containers are supported and can be provided via the @ChameleonTarget-annotation, see here for the list. The actual config of supported containers is located in a file called containers.yaml.

Conclusion

The only disadvantage right now is that it will only work as expected when running a full gradle integrationTest. If you are e.g. in Eclipse and trigger a single test, it will simply use the already existing artifact instead of building it again. This is what @MavenBuild solves; and I hope we will get an equivalent @GradleBuild soon.

Websphere Liberty, EclipseLink and Caching in the Cluster

04 June 2018

Cache Coordination

When using JPA, sooner or later the question of caching will arise to improve performance. Especially for data that is frequently read but only written/updated infrequently, it makes sense to enable the second-level cache via the shared-cache-mode element in the persistence.xml. See the Java EE 7 tutorial for details.
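For example, enabling the cache selectively in the persistence.xml looks like this (a generic sketch with an assumed datasource-name, not a complete file; only entities annotated with @Cacheable are then cached):

<persistence-unit name="DefaultPU" transaction-type="JTA">
    <jta-data-source>jdbc/myDS</jta-data-source>
    <!-- cache only entities explicitly annotated with @Cacheable -->
    <shared-cache-mode>ENABLED_SELECTIVE</shared-cache-mode>
</persistence-unit>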

By default, EclipseLink has the second-level cache enabled, as you can read here. Consider what will happen in a clustered environment: what if server one has the entity cached and server two updates the entity? Server one will have a stale cache-entry, and by default no one will tell the server that its cache is out-of-date. How to deal with it? Define a hard-coded expiration? Or not use the second-level cache at all?

A better solution is to get the second-level caches synchronized in the cluster. EclipseLink’s vendor-specific feature for this is called cache-coordination. You can read more about it here, but in a nutshell you can use either JMS, RMI or JGroups to distribute cache-invalidations/updates within the cluster. This post focuses on getting EclipseLink’s cache-coordination working under Websphere Liberty via JGroups.

Application Configuration

From the application’s perspective, you only have to enable this feature in the persistence.xml via

<property name="eclipselink.cache.coordination.protocol" value="jgroups" />

Liberty Server Configuration with Global Library

Deploying this application on Websphere Liberty will lead to the following error:

Exception Description: ClassNotFound: [org.eclipse.persistence.sessions.coordination.jgroups.JGroupsTransportManager] specified in [eclipselink.cache.coordination.protocol] property.

Thanks to the great help on the openliberty.io mailing-list, I was able to solve the problem. You can read the full discussion here.

The short summary is that the cache-coordination feature of EclipseLink using JGroups is an extension and Liberty does not ship this extension by default. RMI and JMS are supported out-of-the-box but both have disadvantages:

  • RMI is a legacy technology that I have not worked with in years.

  • JMS in general is a great technology for asynchronous communication, but it requires a message-broker like IBM MQ or ActiveMQ. This does not sound like a good fit for a caching-mechanism.

This leaves us with JGroups. The preferred solution to get JGroups working is to replace the JPA-implementation with our own. For us, this will simply be EclipseLink, but including the extension. In Liberty this is possible via the jpaContainer feature in the server.xml. The official documentation describes how to use your own JPA-implementation. As there are still a few small mistakes you can make along the way, let me describe the configuration that works here in detail:

  1. Assuming you are working with the javaee-7.0 feature in the server.xml (or specifically jpa-2.1), you will have to get EclipseLink 2.6, as this implements JPA 2.1. For javaee-8.0 (or specifically jpa-2.2) it would be EclipseLink 2.7.

    I assume javaee-7.0 here; that’s why I downloaded the EclipseLink 2.6.5 OSGi Bundles Zip.

  2. Create a folder lib/global within your Liberty server-config-folder, e.g. defaultServer/lib/global, and copy the following from the zip (same as referenced here, plus the extension):

    • org.eclipse.persistence.asm.jar

    • org.eclipse.persistence.core.jar

    • org.eclipse.persistence.jpa.jar

    • org.eclipse.persistence.antlr.jar

    • org.eclipse.persistence.jpa.jpql.jar

    • org.eclipse.persistence.jpa.modelgen.jar

    • org.eclipse.persistence.extension.jar

  3. If you used it like this, you would find a ClassNotFoundException later for the actual JGroups implementation-classes. You need to get the JGroups library separately from here.

    If we look at the 2.6.5-tag in EclipseLink’s Git Repo, we see that we should use org.jgroups:jgroups:3.2.8.Final.

    Download it and copy the jgroups-3.2.8.Final.jar to the lib/global folder as well.

  4. The last step is to set up your server.xml like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <server description="new server">
    
        <!-- Enable features -->
        <featureManager>
    		<feature>servlet-3.1</feature>
    		<feature>beanValidation-1.1</feature>
    		<feature>ssl-1.0</feature>
    		<feature>jndi-1.0</feature>
    		<feature>jca-1.7</feature>
    		<feature>jms-2.0</feature>
    		<feature>ejbPersistentTimer-3.2</feature>
    		<feature>appSecurity-2.0</feature>
    		<feature>j2eeManagement-1.1</feature>
    		<feature>jdbc-4.1</feature>
    		<feature>wasJmsServer-1.0</feature>
    		<feature>jaxrs-2.0</feature>
    		<feature>javaMail-1.5</feature>
    		<feature>cdi-1.2</feature>
    		<feature>jcaInboundSecurity-1.0</feature>
    		<feature>jsp-2.3</feature>
    		<feature>ejbLite-3.2</feature>
    		<feature>managedBeans-1.0</feature>
    		<feature>jsf-2.2</feature>
    		<feature>ejbHome-3.2</feature>
    		<feature>jaxws-2.2</feature>
    		<feature>jsonp-1.0</feature>
    		<feature>el-3.0</feature>
    		<feature>jaxrsClient-2.0</feature>
    		<feature>concurrent-1.0</feature>
    		<feature>appClientSupport-1.0</feature>
    		<feature>ejbRemote-3.2</feature>
    		<feature>jaxb-2.2</feature>
    		<feature>mdb-3.2</feature>
    		<feature>jacc-1.5</feature>
    		<feature>batch-1.0</feature>
    		<feature>ejb-3.2</feature>
    		<feature>json-1.0</feature>
    		<feature>jaspic-1.1</feature>
    		<feature>distributedMap-1.0</feature>
    		<feature>websocket-1.1</feature>
    		<feature>wasJmsSecurity-1.0</feature>
    		<feature>wasJmsClient-2.0</feature>
    
    		<feature>jpaContainer-2.1</feature>
        </featureManager>
    
    
        <basicRegistry id="basic" realm="BasicRealm">
        </basicRegistry>
    
        <httpEndpoint id="defaultHttpEndpoint"
                      httpPort="9080"
                      httpsPort="9443" />
    
    	<applicationManager autoExpand="true"/>
    
    	<jpa defaultPersistenceProvider="org.eclipse.persistence.jpa.PersistenceProvider"/>
    
    </server>

Some comments on the server.xml:

  • Note that we now have to explicitly list all of the features that are included in the javaee-7.0 feature, minus the jpa-2.1 feature, because we don’t want the default JPA-provider.

  • Instead of jpa-2.1 I added jpaContainer-2.1 to bring our own JPA-provider.

  • The defaultPersistenceProvider sets the JPA-provider to ours and is required by the jpaContainer feature.

Liberty Configuration without Global Library

Be aware that there are different ways to include our EclipseLink library. Above, I chose the way that requires the least configuration in the server.xml and also works for dropin-applications: a global library. The official documentation instead defines an explicit library in the server.xml and references it for each individual application like this:

<bell libraryRef="eclipselink"/>
<library id="eclipselink">
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.asm.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.core.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.antlr.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.jpql.jar"/>
	<file name="${server.config.dir}/jpa/org.eclipse.persistence.jpa.modelgen.jar"/>

	<file name="${server.config.dir}/jpa/org.eclipse.persistence.extension.jar"/>
	<file name="${server.config.dir}/jpa/jgroups.jar"/>
</library>

<application location="myapp.war">
    <classloader commonLibraryRef="eclipselink"/>
</application>

Also note that this time the JARs are in the defaultServer/jpa-folder, not under defaultServer/lib/global, and I removed all the version-suffixes from the file-names. Additionally, make sure to add <feature>bells-1.0</feature>.

Try it

As this post is already getting too long, I will not go into detail here on how to use this from your Java EE application. This will be for another post. But you can already get a working Java EE project to get your hands dirty from my GitHub repository. Start the Docker Compose environment and use the contained test.sh to invoke some cURL requests against the application on two different cluster-nodes.

Conclusion

With either of the above approaches I was able to enable EclipseLink’s cache-coordination feature on Websphere Liberty for Java EE 7.

I did not try it, but I would assume that it will work similar for Java EE 8 on the latest OpenLiberty builds.

For sure it is nice that plugging in your own JPA-provider is so easy in Liberty; but I don’t like that I have to do this to get a feature of EclipseLink working under Liberty which I would expect to work out of the box. EclipseLink’s cache-coordination feature is a quite useful extension, and it leaves me uncomfortable that I have configured my own snowflake Liberty instead of relying on the standard package. On the other hand, it works; and as long as I use the exact same version of EclipseLink as packaged with Liberty out of the box, I would hope the differences are minimal.

The approach I chose/prefer in the end is Liberty Server Configuration with Global Library instead of the approach from the official documentation (Liberty Configuration without Global Library). The reason is that for Liberty Configuration without Global Library I have to reference the library in the server.xml individually for each application. This will not work for applications I would like to throw into the dropins.

Deploying a Java EE 7 Application with Kubernetes to the Google Cloud

30 May 2018

In this post I am describing how to deploy a dockerized Java EE 7 application to the Google Cloud Platform (GCP) with Kubernetes.

My previous experience is only with AWS; in specific with EC2 and ECS. So, this is not only my first exposure to the Google Cloud but also my first steps with Kubernetes.

The Application

The application I would like to deploy is a simple Java EE 7 application exposing a basic HTTP/Rest endpoint. The sources are located on GitHub and the Docker image can be found on Docker Hub. If you have Docker installed, you can easily run it locally via

docker run --rm --name hello -p 80:8080 38leinad/hello

Now, in your browser or via cURL, go to http://localhost/hello/resources/health. You should get UP as the response. A simple health-check endpoint. See here for the sources.
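Such a health-check endpoint is only a few lines of JAX-RS. A sketch of how it might look in the sources (class-names are assumptions on my part):

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// App.java: maps all JAX-RS resources under /resources, matching the URL above
@ApplicationPath("resources")
public class App extends Application {
}

// HealthResource.java: static liveness-response that load-balancers and probes can call
@Path("health")
public class HealthResource {

    @GET
    public String health() {
        return "UP";
    }
}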

Let’s deploy it on the Google Cloud now.

Installation and Setup

Obviously, you will have to register on https://cloud.google.com/ for a free trial-account first. It is valid for one year and also comes with a credit of $300. I am not sure yet which resources cost how much credit; after four days of tinkering, $1 is gone.

Once you have signed up, you can do all of the configuration and management of your apps from the Google Cloud web-console. It even has an integrated terminal running in the browser. So, strictly, it is not required to install any tooling on your local system if you are happy with this.

The only thing we will do from the web-console is the creation of a Kubernetes cluster (you can also do this via gcloud from the commandline). For this, go to "Kubernetes Engine / Kubernetes clusters" and "Create Cluster". You can leave all the defaults; just make sure to remember the name of the cluster and the zone it is deployed to. We will need this later to correctly set up the kubectl commandline locally. Note that it will also ask you to set up a project before creating the cluster. This allows grouping of resources in GCP based on different projects, which is quite useful.

Setting up the cluster is heavy lifting and thus can take some minutes. In the meantime, we can already install the tools.

  1. Install SDK / CLI (Centos): https://cloud.google.com/sdk/docs/quickstart-redhat-centos.

    I had to make sure to be logged out of my Google-account before running gcloud init. Without doing this, I received a 500 http-response.

    Also, when running gcloud init it will ask you for a default zone. Choose the one you used when setting up the cluster. Mine is europe-west1-b.

  2. Install the kubectl command:

    gcloud components install kubectl

    Note that you can also install kubectl independently. E.g. I already had it installed from here while using minikube.

  3. Now, you will need the name of the cluster you have created via the web-console. Configure the gcloud CLI-tool for your cluster:

    gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>

    You can easily get the full command with correct parameters when opening the cluster in the web-console and clicking the "Connect" button for the web-based CLI.

Run kubectl get pods just to see if the command works; you should see No resources found. At this point, we have configured our CLI/kubectl to interact with our Kubernetes cluster.

Namespaces

The next thing we will do is optional but makes life easier once you have multiple applications deployed on your cluster. You can create a namespace/context per application you are deploying to GCP. This allows you to always only see the resources of the namespace you are currently working with. It also allows you to delete the namespace with a cascading delete of all its resources. So, this is very nice for experimentation without leaving a big mess of resources.

kubectl create namespace hello-namespace
kubectl get namespaces

We create a namespace for our application and check if it actually was created.

You can now attach this namespace to a context. A context is not a resource on GCP but is a configuration in your local <user-home>/.kube/config.

kubectl config set-context hello-context --namespace=hello-namespace \
  --cluster=<cluster-name> \
  --user=<user-name>

What are <cluster-name> and <user-name> that you have to put in? The easiest is to get them from running

kubectl config view

Let’s activate this context. All operations will be done within the assigned namespace from now on.

kubectl config use-context hello-context

You can also double-check the activated context:

kubectl config current-context

Run the kubectl config view command again or even check in <user-home>/.kube/config. As said before, the current-context can be found here and is just a local setting.

You can read more on namespaces here.

Deploying the Application

Deploying the application in Kubernetes requires three primitives to be created:

  • Deployment/Pods: These are the actual Docker containers that are running. A pod can actually consist of multiple containers; think of e.g. side-car containers in a microservice architecture.

  • Service: The containers/Pods are hidden behind a service. Think of the Service as e.g. a load-balancer: You never interact with the individual containers directly; the load-balancer is the single service you as a client call.

  • Ingress: Our final goal is to access our application from the Internet. By default, this is not possible; you will have to set up an Ingress for incoming traffic. Basically, you will get an internet-facing IP-address that you can call.

All these steps are explained quite nicely in the official doc on Setting up HTTP Load Balancing with Ingress. What you will find there is that Deployment, Service and Ingress are set up via individual calls to kubectl. You could put all these calls into a shell-script to easily replay them, but there is a more declarative way in the Kubernetes world: what we will be doing here instead is defining these resources in a YAML file.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: 38leinad/hello:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - port: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-service
    servicePort: 8080

We can now simply call kubectl apply -f hello.yml.

Get the public IP by running

kubectl get ingress hello-ingress
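
The output will look roughly like this (the address is what you are after; the values here are illustrative):

NAME            HOSTS     ADDRESS          PORTS     AGE
hello-ingress   *         35.190.xx.xx     80        3m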

You can now try to open http://<ip>/hello/resources/health in your browser or with cURL. You should get an "UP" response. Note that it can actually take some minutes before this works.

Once it works, you can also check the application-server log like this:

kubectl get pods
kubectl logs -f <pod-name>

Note that the first command is to get the name of the Pod. The second command will give you the log-output of the container; you might know this from plain Docker already.

We successfully deployed a dockerized application to the Google Cloud via Kubernetes.

A final note on why namespaces are useful: to start over again, you can now just invoke

kubectl delete namespace hello-namespace

and all the resources in the cluster are gone.

Lastly, a cheat-sheet for some of the important kubectl commands can be found here. There you will also find how to get auto-completion in your shell, which is super-useful. As I am using zsh, I created an alias for it:

alias kubeinit="source <(kubectl completion zsh)"

Websphere Liberty EclipseLink Logging

14 May 2018

Websphere Liberty uses EclipseLink as the default JPA-implementation. How to log the SQL-commands from EclipseLink in the Websphere Liberty stdout/console?

First step is enabling the logging in the persistence.xml:

<properties>
    <property name="eclipselink.logging.level.sql" value="FINE" />
    <property name="eclipselink.logging.level" value="FINE" />
    <property name="eclipselink.logging.level.cache" value="FINE" />
</properties>

This is not sufficient to get any output on stdout.

Additionally, the following snippet needs to be added to the server.xml:

<logging traceSpecification="*=info:eclipselink.sql=all" traceFileName="stdout" traceFormat="BASIC"/>

Set traceFileName="trace.log" to get the statements printed to the trace.log instead.

Gradle and Docker Compose for System Testing

06 May 2018

Recently, I read this article on a nice Gradle-plugin that allows using Docker Compose from Gradle. I wanted to try it out myself with a simple JavaEE app deployed on Open Liberty. Specifically, the setup is as follows: the JavaEE application (exposing a Rest endpoint) is deployed on OpenLiberty running within Docker. The system-tests are invoking the Rest endpoint from outside the Docker environment via HTTP.

I had two requirements that I wanted to verify in specific:

  • Usually, when the containers are started from the Docker perspective, it does not mean that the deployed application is also fully up and running. Either you have to write some custom code that monitors the application-log for some marker; or, we can leverage the Docker health-check. Does the Docker Compose Gradle-plugin provide any integration for this so we only run the system-tests once the application is up?

  • System-tests will be running on the Jenkins server. Ideally, a lot of tests run in parallel. For this, it is necessary to use dynamic ports; otherwise, there could be conflicts between the exposed HTTP ports of the different system-tests. Each system-test somehow needs to be aware of its dynamic ports. Does the Gradle-plugin help us with this?

Indeed, the Gradle-plugin helps us with these two requirements.

Rest Service under Test

The Rest endpoint under test looks like this:

@Stateless
@Path("ping")
public class PingResource {

	static AtomicInteger counter = new AtomicInteger();

	@GET
	public Response ping() {
		if (counter.incrementAndGet() > 10) {
			System.out.println("++ UP");
			return Response.ok("UP@" + System.currentTimeMillis()).build();
		}
		else {
			System.out.println("++ DOWN");
			return Response.serverError().build();
		}

	}
}

I added some simple logic here to only return HTTP status code 200 after some number of requests. This is to verify that the health-check mechanism works as expected.

System Test

The system-test is a simple JUnit test using the JAX-RS client to invoke the ping endpoint.

public class PingST {

    @Test
    public void testMe() {
        Response response = ClientBuilder.newClient()
            .target("http://localhost:"+ System.getenv("PING_TCP_9080") +"/ping")
            .path("resources/ping")
            .request()
            .get();

        assertThat(response.getStatus(), CoreMatchers.is(200));
        assertThat(response.readEntity(String.class), CoreMatchers.startsWith("UP"));
    }
}

You can already see here that we read the port from an environment variable. Also, the test should only succeed when we get the response UP.

Docker Compose

The docker-compose.yml looks as follows:

version: '3.4'
services:
  ping:
    image: openliberty/open-liberty:javaee7
    ports:
     - "9080"
    volumes:
     - "./build/libs/:/config/dropins/"
    healthcheck:
      test: wget --quiet --tries=1 --spider http://localhost:9080/ping/resources/ping || exit 1
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 30s

We are using the health-check feature here. If you run docker ps, the STATUS column will tell you the health of the container based on executing this command. The ping service should only show up as healthy after ~ 30 + 10 * 5 seconds: it will only start the health-checks after 30 seconds, and then the first 10 requests will return response-code 500. After this, it will flip to status-code 200 and return UP.
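
You can watch the status flip yourself; the container name is generated by Docker Compose from the directory- and service-name, so treat this output as illustrative:

$ docker ps --format "table {{.Names}}\t{{.Status}}"
NAMES          STATUS
ping_ping_1    Up About a minute (health: starting)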

If the Gradle-plugin makes sure to only run the tests once the health of the container is Ok, the PingST should pass successfully.

Gradle Build

The last part is the build.gradle that brings it all together:

plugins {
    id 'com.avast.gradle.docker-compose' version '0.7.1'(1)
}

apply plugin: 'war'
apply plugin: 'maven'
apply plugin: 'eclipse-wtp'

group = 'de.dplatz'
version = '1.0-SNAPSHOT'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    jcenter()
}

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    testCompile 'org.glassfish.jersey.core:jersey-client:2.25.1'
    testCompile 'junit:junit:4.12'
}

war {
	archiveName 'ping.war'
}

dockerCompose {(2)
    useComposeFiles = ['docker-compose.yml']
    isRequiredBy(project.tasks.systemTest)
}

task systemTest( type: Test ) {(3)
    include '**/*ST*'
    doFirst {
        dockerCompose.exposeAsEnvironment(systemTest)
    }
}

test {
    exclude '**/*ST*'(4)
}
  1. The Docker Compose gradle-plugin

  2. The plugin configuration: it starts the Docker environment based on the docker-compose.yml and ties it to the systemTest task

  3. A separate task to run the system-tests

  4. Don’t run system-tests as part of the regular unit-test task

The tasks composeUp and composeDown can be used to manually start/stop the environment, but the systemTest task has a dependency on the Docker environment via isRequiredBy(project.tasks.systemTest).

We also use dockerCompose.exposeAsEnvironment(systemTest) to expose the dynamic ports as environment variables to PingST. In the PingST class you can see that PING_TCP_9080 is the environment variable that contains the host-port mapped to container-port 9080.

Please note that the way I chose to separate unit-tests and system-tests here in the build.gradle is very pragmatic but might not be ideal for bigger projects. Both kinds of tests share the same classpath; you might want to have a separate Gradle-project for the system-tests altogether.

Wrapping it up

We can now run gradle systemTest to run our system-tests. It will first start the Docker environment and monitor the health of the containers. Only when the container is healthy (i.e. the application is fully up and running) will Gradle continue and execute PingST.

Also, ports are dynamically assigned and the PingST reads them from the environment. With this approach, we can safely run the tests on Jenkins where other tests might already be using ports like 9080.

The com.avast.gradle.docker-compose plugin allows us to easily integrate system-tests for JavaEE applications (using Docker) into our Gradle build. Doing it this way allows every developer that has Docker installed to run these tests locally as well, not only on Jenkins.

MicroProfile Metrics

11 April 2018

These are my personal notes on getting familiar with MicroProfile 1.3; specifically, Metrics 1.1. As a basis, I have been using the tutorial on OpenLiberty.io. Not surprisingly, I am using OpenLiberty (version 18.0.0.1). The server.xml which serves as the starting-point is described here. I am just listing the used features here:

server.xml
<featureManager>
    <feature>javaee-7.0</feature>
    <feature>localConnector-1.0</feature>
    <feature>microProfile-1.3</feature>
</featureManager>

Some differences:

  • javaee-7.0 is used, as Java EE 8 does not seem to be supported by the release builds yet.

  • microProfile-1.3 to enable all features as part of MicroProfile 1.3

As a starting-point for the actual project I am using my Java EE WAR template.

To get all MicroProfile 1.3 dependencies available in your gradle-build, you can add the following provided-dependency:

providedCompile 'org.eclipse.microprofile:microprofile:1.3'

Now let’s write a simple Rest-service to produce some metrics.

@Stateless
@Path("magic")
public class MagicNumbersResource {

	static int magicNumber = 0;

	@POST
	@Consumes("text/plain")
	@Counted(name = "helloCount", absolute = true, monotonic = true, description = "Number of times the hello() method is requested")
	@Timed(name = "helloRequestTime", absolute = true, description = "Time needed to get the hello-message")
	public void setMagicNumber(Integer num) throws InterruptedException {
		TimeUnit.SECONDS.sleep(2);
		magicNumber = num;
	}

	@Gauge(unit = MetricUnits.NONE, name = "magicNumberGuage", absolute = true, description = "Magic number")
	public int getMagicNumber() {
		return magicNumber;
	}
}

I am using:

  • A @Timed metric that records the percentiles and averages for the duration of the method-invocation

  • A @Counted metric that counts the number of invocations

  • A @Gauge metric that just takes the return-value of the annotated method as the metric-value.

Now deploy and invoke curl -X POST -H "Content-Type: text/plain" -d "42" http://localhost:9080/mptest/resources/magic. (This assumes the application/WAR is named mptest).

Now open http://localhost:9080/metrics in the browser. You should see the following prometheus-formatted metrics:

# TYPE application:hello_request_time_rate_per_second gauge
application:hello_request_time_rate_per_second 0.1672874737158507
# TYPE application:hello_request_time_one_min_rate_per_second gauge
application:hello_request_time_one_min_rate_per_second 0.2
# TYPE application:hello_request_time_five_min_rate_per_second gauge
application:hello_request_time_five_min_rate_per_second 0.2
# TYPE application:hello_request_time_fifteen_min_rate_per_second gauge
application:hello_request_time_fifteen_min_rate_per_second 0.2
# TYPE application:hello_request_time_mean_seconds gauge
application:hello_request_time_mean_seconds 2.005084111
# TYPE application:hello_request_time_max_seconds gauge
application:hello_request_time_max_seconds 2.005084111
# TYPE application:hello_request_time_min_seconds gauge
application:hello_request_time_min_seconds 2.005084111
# TYPE application:hello_request_time_stddev_seconds gauge
application:hello_request_time_stddev_seconds 0.0
# TYPE application:hello_request_time_seconds summary
# HELP application:hello_request_time_seconds Time needed to get the hello-message
application:hello_request_time_seconds_count 1
application:hello_request_time_seconds{quantile="0.5"} 2.005084111
application:hello_request_time_seconds{quantile="0.75"} 2.005084111
application:hello_request_time_seconds{quantile="0.95"} 2.005084111
application:hello_request_time_seconds{quantile="0.98"} 2.005084111
application:hello_request_time_seconds{quantile="0.99"} 2.005084111
application:hello_request_time_seconds{quantile="0.999"} 2.005084111 (1)
# TYPE application:magic_number_guage gauge
# HELP application:magic_number_guage Magic number
application:magic_number_guage 42 (3)
# TYPE application:hello_count counter
# HELP application:hello_count Number of times the hello() method is requested
application:hello_count 1 (2)
  1. This is one of the percentiles from @Timed. Due to the sleep, it is close to two seconds.

  2. This metric is based on @Counted. We invoked the method once via curl.

  3. This metric is based on the @Gauge. We did a post with curl to set the magicNumber to 42. So, this is what the gauge will get from getMagicNumber().

As a final note: I like the Java EE approach of having a single dependency to develop against (javax:javaee-api:7.0) and have used the same approach here for the MicroProfile. If you instead only want to enable the metrics-feature in Liberty and only program against the related API, you can use the following feature in the server.xml:

<feature>mpMetrics-1.1</feature>

And the following dependency in your build.gradle:

providedCompile 'org.eclipse.microprofile.metrics:microprofile-metrics-api:1.1'

I find this approach more cumbersome as soon as multiple MicroProfile APIs are used; and the negligible difference in Liberty’s startup-time confirms that there is no real disadvantage to enabling the full profile.

In a later post we will look at what can be done with the metrics.

Websphere Traditional, Docker and Auto-Deployment

10 April 2018

The software I work with on my job is portable across different application-servers, including Websphere Traditional, Websphere Liberty and JBoss. In the past, it took considerable time for me to test/make sure a feature works as expected on Websphere; in part because it was hard for me to keep all the different Websphere versions installed on my machine and not mess them up over time.

Now, with the Docker images provided by IBM, it has become very easy: just fire up a container and test it.

To make the testing/deployment very easy, I have enabled auto-deploy in my container-image.

The image contains a jython script so you don’t have to apply this configuration manually.

import java.lang.System as sys

cell = AdminConfig.getid('/Cell:DefaultCell01/')
md = AdminConfig.showAttribute(cell, "monitoredDirectoryDeployment")
AdminConfig.modify(md, [['enabled', "true"]])
AdminConfig.modify(md, [['pollingInterval', "1"]])

print AdminConfig.show(md)

AdminConfig.save()

print 'Done.'
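
In case you want to apply the same configuration to your own image or profile, such a script can be fed to wsadmin; the file name here is hypothetical, and -conntype NONE lets it run without connecting to a live server:

/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -conntype NONE -f enable-autodeploy.py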

This auto-deploy setup allows me to work with VSCode and Gradle as I have described in this post.

Start the docker container with the command below to mount the auto-deploy folder as a volume:

docker run --name was9 --rm -p 9060:9060 -p 9080:9080 -p 7777:7777 -v ~/junk/deploy:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/monitoredDeployableApps 38leinad/was-9

You can now copy a WAR file to ~/junk/deploy/servers/server1/ on your local system and it will get deployed automatically within the container.

Note
After this post, I have extended the was-9 container so you can directly mount /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/monitoredDeployableApps/servers/server1/. It even supports deployment of a WAR/EAR that is already in this folder when the container is started; this is not the default behaviour of Websphere. Basically, the container will do a touch on any WAR/EAR in this folder once the auto-deploy service is watching the folder.

Gradle and Arquillian Chameleon even simpler

07 April 2018

In a previous post I have already described how to use Arquillian Chameleon to simplify the Arquillian config.

With the latest improvements that are described here in more detail, it is now possible to minimize the required configuration:

  • Only a single dependency

  • No arquillian.xml

As before, I assume Gradle 4.6 with enableFeaturePreview('IMPROVED_POM_SUPPORT') in the settings.gradle.

With this, we only have to add a single dependency to use arquillian:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    testCompile 'org.arquillian.container:arquillian-chameleon-junit-container-starter:1.0.0.CR2'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

The used container only needs to be defined via the @ChameleonTarget annotation. Also note the new @RunWith(ArquillianChameleon.class); this is not the regular @RunWith(Arquillian.class).

@RunWith(ArquillianChameleon.class)
@ChameleonTarget("wildfly:11.0.0.Final:managed")
public class GreetingServiceTest {

    @Deployment
    public static WebArchive deployService() {
        return ShrinkWrap.create(WebArchive.class)
                .addClass(Service.class);
    }

    @Inject
    private Service service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello world", service.hello());
    }
}

There is now also support for omitting the @Deployment method; up to now, only for Maven-builds and for specifying a local file.

Open Liberty with DerbyDB

13 March 2018

In this post I describe how to use Open Liberty with the lightweight Apache Derby database.

Here are the steps:

  1. Download Apache Derby.

  2. Configure the driver/datasource in the server.xml

        <!-- https://www.ibm.com/support/knowledgecenter/de/SS7K4U_liberty/com.ibm.websphere.wlp.zseries.doc/ae/twlp_dep_configuring_ds.html -->
        <variable name="DERBY_JDBC_DRIVER_PATH" value="/home/daniel/dev/tools/db-derby-10.14.1.0-bin/lib"/>
        <library id="DerbyLib">
            <fileset dir="${DERBY_JDBC_DRIVER_PATH}"/>
        </library>
        <dataSource id="DefaultDerbyDatasource" jndiName="jdbc/defaultDatasource" statementCacheSize="10" transactional="true">
           <jdbcDriver libraryRef="DerbyLib"/>
           <properties.derby.embedded connectionAttributes="upgrade=true" createDatabase="create" databaseName="/var/tmp/sample.embedded.db" shutdownDatabase="false"/>
    	   <!--properties.derby.client databaseName="/var/tmp/sample.db" user="derbyuser" password="derbyuser" createDatabase="create" serverName="localhost" portNumber="1527" traceLevel="1"/-->
        </dataSource>

    Note that the database is embedded and file-based. This means no database-server needs to be started manually; on application-server startup, an embedded database is started and will write to the file under databaseName. Use the memory: prefix to just hold it in main-memory and not on the filesystem.

    As an alternative, you can also start the Derby network-server separately and connect by using properties.derby.client instead.

  3. In case you want to use the datasource with JPA, provide a persistence.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    
    	<persistence-unit name="prod" transaction-type="JTA">
    		<jta-data-source>jdbc/defaultDatasource</jta-data-source>
    		<properties>
    			<property name="hibernate.show_sql" value="true" />
    			<property name="eclipselink.logging.level" value="FINE" />
    			<property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
    			<property name="javax.persistence.schema-generation.scripts.action" value="drop-and-create" />
    			<property name="javax.persistence.schema-generation.scripts.create-target" value="bootstrapCreate.ddl" />
    			<property name="javax.persistence.schema-generation.scripts.drop-target" value="bootstrapDrop.ddl" />
    		</properties>
    	</persistence-unit>
    </persistence>

    With the default settings of Gradle’s war-plugin, you can place it under src/main/resources/META-INF and the build should package it under WEB-INF/classes/META-INF.

  4. You should now be able to inject the entity-manager via

    @PersistenceContext
    EntityManager em;

This blog has a similar guide on how to use PostgreSQL with Open Liberty.

Gradle and Arquillian for OpenLiberty

12 March 2018

In this post I describe how to use arquillian together with the container-adapter for Websphere-/Open-Liberty.

The dependencies are straight-forward as for any other container-adapter except the additional need for the tools.jar on the classpath:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'

    testCompile files("${System.properties['java.home']}/../lib/tools.jar")
    testCompile 'org.jboss.arquillian.container:arquillian-wlp-managed-8.5:1.0.0.CR1'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

A minimalistic arquillian.xml looks like the following:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://jboss.org/schema/arquillian"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <engine>
        <property name="deploymentExportPath">build/deployments</property>
    </engine>

    <container qualifier="wlp-dropins-deployment" default="true">
        <configuration>
            <property name="wlpHome">${wlp.home}</property>
            <property name="deployType">dropins</property>
            <property name="serverName">server1</property>
        </configuration>
    </container>

</arquillian>

As there is no good documentation on the supported properties, I had to look into the sources over on GitHub.

Also, you might not want to hard-code the wlp.home here. Instead you can define it in your build.gradle like this:

test {
    systemProperty "arquillian.launch", "wlp-dropins-deployment"
    systemProperty "wlp.home", project.properties['wlp.home']
}

This will allow you to run gradle -Pwlp.home=<path-to-wlp> test.

Gradle and Arquillian for Wildfly

28 February 2018

In this post I describe how to set up Arquillian to test/deploy on Wildfly. Note that there is a managed and a remote adapter. Managed means that Arquillian manages the application-server and thus starts it; remote means that the application-server was already started somehow and Arquillian will only connect and deploy the application within this remote server. Below you will find the dependencies for both types of adapters.

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'

    testCompile 'org.wildfly.arquillian:wildfly-arquillian-container-managed:2.1.0.Final'
    testCompile 'org.wildfly.arquillian:wildfly-arquillian-container-remote:2.1.0.Final'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}
Note
Note that the BOM-import will only work with Gradle 4.6+

An arquillian.xml for both adapters looks like the following. The arquillian-wildfly-managed config is enabled here by default.

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://jboss.org/schema/arquillian"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <engine>
        <property name="deploymentExportPath">build/deployments</property>
    </engine>

    <!-- Start JBoss manually via:
        ./standalone.sh -Djboss.socket.binding.port-offset=100 -server-config=standalone-full.xml
     -->
    <container qualifier="arquillian-wildfly-remote">
        <configuration>
            <property name="managementPort">10090</property>
        </configuration>
    </container>

    <container qualifier="arquillian-wildfly-managed" default="true">
        <configuration>
            <property name="jbossHome">/home/daniel/dev/app-servers/jboss-eap-7.0-test</property>
            <property name="serverConfig">${jboss.server.config.file.name:standalone-full.xml}</property>
            <property name="allowConnectingToRunningServer">true</property>
        </configuration>
    </container>
</arquillian>

As an additional tip: I always set deploymentExportPath to a folder within Gradle’s build-directory because sometimes it is helpful to have a look at the deployment generated by Arquillian/ShrinkWrap.

In case you don’t want to define a default adapter or want to overwrite it (e.g. via a Gradle-property from the commandline), you can define the arquillian.launch system property within the test-configuration.

test {
    systemProperty "arquillian.launch", "arquillian-wildfly-managed"
}

Gradle and Arquillian Chameleon

26 February 2018

The latest Gradle 4.6 release candidates come with BOM-import support.

It can be enabled in the settings.gradle by defining enableFeaturePreview('IMPROVED_POM_SUPPORT').

With this, the Arquillian BOM can be easily imported and the dependencies to use Arquillian with the Chameleon adapter look like the following:

dependencies {
    providedCompile 'javax:javaee-api:7.0'

    // this is the BOM
    testCompile 'org.jboss.arquillian:arquillian-bom:1.3.0.Final'
    testCompile 'org.jboss.arquillian.junit:arquillian-junit-container'
    testCompile 'org.arquillian.container:arquillian-container-chameleon:1.0.0.Beta3'

    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:2.10.0'
}

Chameleon allows you to easily manage the container adapters via simple configuration in the arquillian.xml. As of today, Wildfly and Glassfish are supported, but not Websphere Liberty.

To define Wildfly 11, the following arquillian.xml (place under src/test/resources) is sufficient:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://jboss.org/schema/arquillian"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <container qualifier="wildfly" default="true">
        <configuration>
            <property name="chameleonTarget">wildfly:11.0.0.Final:managed</property>
        </configuration>
    </container>
</arquillian>

With this little bit of Gradle and Arquillian magic, you should be able to run a test like below. The Wildfly 11 container will be downloaded on the fly.

@RunWith(Arquillian.class)
public class GreetingServiceTest {

    @Deployment
    public static WebArchive deployService() {
        return ShrinkWrap.create(WebArchive.class)
                .addClass(Service.class);
    }

    @Inject
    private Service service;

    @Test
    public void shouldGreetTheWorld() throws Exception {
        Assert.assertEquals("hello world", service.hello());
    }
}

Gradle: Automatic and IDE-independent redeployments on OpenLiberty

25 February 2018

The last weeks I have started to experiment with how well VSCode can be used for Java EE development. I have to say that it is quite exciting to watch what the guys at Microsoft and Red Hat are doing with the Java integration. The gist of it: it cannot replace a real Java IDE yet for the majority of heavy development, but I can see the potential due to its lightweightness in projects that also involve a JavaScript frontend. The experience of developing Java and JavaScript in this editor is quite nice compared to a beast like Eclipse.

One of my first goals for quick development: reproduce the automatic redeploy you get from IDEs like Eclipse (via JBoss Tools), i.e. changing a Java-class automatically triggers a redeploy of the application. As long as you make sure the WAR-file is small, this deploy task takes less than a second and allows for quick iterations.

Here are the steps to make this work in VS Code; actually, they are independent of VSCode and just leverage Gradle’s continuous-build feature.

Place this task in your build.gradle. It deploys your application to the dropins-folder of OpenLiberty if you have set up the environment variable wlpProfileHome.

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

Additionally, make sure to enable automatic redeploys in your server.xml whenever the contents of the dropins-folder change.

<!-- hot-deploy for dropins -->
<applicationMonitor updateTrigger="polled" pollingRate="500ms" dropins="dropins" dropinsEnabled="true"/>

Every time you run gradlew deployToWlp, this should trigger a redeploy of the latest code.

Now comes the next step: run gradlew deployToWlp -t for continuous builds. Every code-change should trigger a redeploy. This is independent of any IDE and thus works nicely together with VS Code in case you want this level of interactivity. If not, it is very easy to just map a shortcut to the gradle-command in VSCode to trigger it manually; see the task definition below.
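
For the manual variant, a minimal .vscode/tasks.json could look like this (the label is arbitrary; you can bind it to a keyboard shortcut via "Tasks: Run Task"):

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "deployToWlp",
            "type": "shell",
            "command": "./gradlew deployToWlp"
        }
    ]
}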

Arquillian UI Testing from Gradle

24 February 2018

Let’s assume for this post that we want to test some Web UI that is already running somehow; i.e. we don’t want to start up the container with the web-app from Arquillian.

Arquillian heavily relies on BOMs to get the right dependencies. Gradle out of the box is not able to handle BOMs (import-scoped POMs are not supported at all), but we can use the nebula-plugin.

So, make sure you have the following in your build.gradle:

plugins {
    id 'nebula.dependency-recommender' version '4.1.2'
}

apply plugin: 'java'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    jcenter()
}

dependencyRecommendations {
    mavenBom module: 'org.jboss.arquillian:arquillian-bom:1.2.0.Final'
}

dependencies {
    testCompile 'junit:junit:4.12'

    testCompile "org.jboss.arquillian.junit:arquillian-junit-container"
    testCompile "org.jboss.arquillian.graphene:graphene-webdriver:2.0.3.Final"
}

Now the test:

@RunAsClient
@RunWith(Arquillian.class)
public class HackerNewsIT {

    @Drone
    WebDriver browser;

    @Test
    public void name() {
        browser.get("https://news.ycombinator.com/");
        String title = browser.getTitle();
        Assert.assertThat(title, CoreMatchers.is("Hacker News"));
    }

}

Run it with gradle test.

By default, HTMLUnit will be used as the browser. To use Chrome, download the WebDriver from https://sites.google.com/a/chromium.org/chromedriver/.

If you don’t want to put it on your PATH, tie it to the WebDriver like this in your arquillian.xml:

 <arquillian xmlns="http://jboss.com/arquillian" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">

    <extension qualifier="webdriver">
        <property name="browser">chrome</property>
        <property name="chromeDriverBinary">/home/daniel/dev/tools/chromedriver</property>
    </extension>

</arquillian>

Checkstyle with Gradle

30 January 2018

Get a checkstyle.xml, e.g. from Sun, and place it in your gradle-project under config/checkstyle/checkstyle.xml.

Now add the following to your build.gradle:

apply plugin: 'checkstyle'

checkstyle {
    showViolations = true
    ignoreFailures = false
}

Run it with gradle check.

If there are violations, an HTML-report will be written to build/reports/checkstyle.

OpenLiberty Java EE 8 Config

22 January 2018

I am working with the latest development builds of Open Liberty supporting Java EE 8. You can download them here under "Development builds".

When you create a new server in Websphere/Open Liberty via ${WLP_HOME}/bin/server create server1, the generated server.xml is not configured properly for SSL, Java EE, etc. Here is a minimal server.xml that works:

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <!-- Enable features -->
    <featureManager>
        <feature>javaee-8.0</feature>
        <feature>localConnector-1.0</feature>
    </featureManager>

    <!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
    <httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>

    <keyStore id="defaultKeyStore" password="yourpassword"/>

    <!-- Automatically expand WAR files and EAR files -->
    <applicationManager autoExpand="true"/>

    <quickStartSecurity userName="admin" userPassword="admin12!"/>

    <!-- hot-deploy for dropins -->
    <applicationMonitor updateTrigger="polled" pollingRate="500ms"
                    dropins="dropins" dropinsEnabled="true"/>
</server>

Together with this build.gradle file you can start developing Java EE 8 applications:

apply plugin: 'war'
apply plugin: 'maven'

group = 'de.dplatz'
version = '1.0-SNAPSHOT'

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    jcenter()
}

dependencies {
    providedCompile 'javax:javaee-api:8.0'
    testCompile 'junit:junit:4.12'
}

war {
    archiveName 'webapp.war'
}

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

OpenLiberty Debug Config

21 January 2018

You can run a Websphere/Open Liberty server in debug-mode via ${WLP_HOME}/bin/server debug server1. But this makes the server wait for a debugger to attach. How to attach later?

Create a file ${WLP_HOME}/usr/servers/server1/jvm.options and add the debug-configuration:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777

Now you can use ${WLP_HOME}/bin/server run server1.
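
Any JDWP-capable debugger can now attach on port 7777 at any time; e.g. with the plain jdb that ships with the JDK:

jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=7777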

Gradle deploy-task

20 January 2018

Deploy to e.g. Websphere liberty by adding this task to your build.gradle file:

task deployToWlp(type: Copy, dependsOn: 'war') {
    dependsOn 'build'
    from war.archivePath
    into "${System.env.wlpProfileHome}/dropins"
}

Assuming you have the environment-variable set, you can now run gradlew deployToWlp.

Implementing JAX-RS-security via Basic-auth

31 October 2017

Basic-auth is the simplest and weakest protection you can add to your resources in a Java EE application. This post shows how to leverage it for JAX-RS-resources that are accessed by a plain HTML5/JavaScript app.

Additionally, I had the following requirements:

  • The JAX-RS-resource is requested from a pure JavaScript-based webapp via the fetch-API; I want to leverage the authentication-dialog from the browser within the webapp (no custom dialog, as the webapp should stay as simple as possible and use as much as possible the standard offered by the browser).

  • But I don’t want the whole WAR (i.e. the JavaScript app) to be protected; just the request to the JAX-RS-endpoint should be protected via Basic-auth.

  • At the server-side I want to be able to connect to my own/custom identity-store; i.e. I want to programmatically check the username/password myself. In other words: I don’t want the application-server’s internal identity-stores/authentication.

Protecting the JAX-RS-endpoint at server-side is as simple as implementing a request-filter. I could have used a low-level servlet-filter, but instead decided to use the JAX-RS-specific equivalent:

import java.io.IOException;
import java.util.Base64;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class SecurityFilter implements ContainerRequestFilter {

	@Override
	public void filter(ContainerRequestContext requestContext) throws IOException {
		String authHeader = requestContext.getHeaderString("Authorization");
		if (authHeader == null || !authHeader.startsWith("Basic")) {
			requestContext.abortWith(Response.status(401).header("WWW-Authenticate", "Basic").build());
			return;
		}

		String[] tokens = (new String(Base64.getDecoder().decode(authHeader.split(" ")[1]), "UTF-8")).split(":");
		final String username = tokens[0];
		final String password = tokens[1];

		if (username.equals("daniel") && password.equals("123")) {
			// all good
		}
		else {
			requestContext.abortWith(Response.status(401).build());
			return;
		}
	}

}

If the Authorization header is not present, we request the authentication-dialog from the browser by sending the header WWW-Authenticate: Basic. If I directly open up the JAX-RS-resource in the browser, I get the authentication-dialog from the browser and can access the resource (if I provide the correct username and password).

Now the question is whether this also works when the JAX-RS-resource is fetched via the JavaScript fetch-API. I tried this:

function handleResponse(response) {
	if (response.status == "401") {
		alert("not authorized!")
	} else {
		response.json().then(function(data) {
			console.log(data)
		});
	}
}

fetch("http://localhost:8080/service/resources/health").then(handleResponse);

It did not work; I was getting 401 from the server because the browser was not sending the "Authorization" header; but the browser also did not show the authentication-dialog.

A peek into the spec hinted that it should work:

  1. If request’s use-URL-credentials flag is unset or authentication-fetch flag is set, then run these subsubsteps: …​

  2. Let username and password be the result of prompting the end user for a username and password, respectively, in request’s window.

So, I added the credentials-option to the fetch:

fetch("http://localhost:8080/service/resources/health", {credentials: 'same-origin'}).then(handleResponse);

It worked. The browser shows the authentication-dialog after the first 401. In subsequent requests to the JAX-RS-resource, the "Authorization" header is always sent along; no need to reenter the credentials every time (Chrome discards them as soon as the browser window is closed).

The only disadvantage I found so far is from a development-perspective. I usually run the JAX-RS-endpoint separately from my JavaScript app; i.e. the JAX-RS-endpoint is hosted as a WAR in the application-server but the JavaScript-app is hosted via LiveReload or browser-sync. In this case, the JAX-RS-service and the webapp do not have the same origin (different port) and I have to use the CORS-header Access-Control-Allow-Origin: * to allow communication between the two. But with this header set, the Authorization-token collected by the JavaScript-app will not be shared with the JAX-RS-endpoint.
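
For completeness: the CORS-spec does offer a way around this, which I have not wired into my setup yet. The request has to opt in via credentials: 'include', and the server must then answer with the explicit origin (the wildcard * is not allowed in combination with credentials) plus an allow-credentials header. The origin below is just an example for a browser-sync instance:

fetch("http://localhost:8080/service/resources/health", {credentials: 'include'}).then(handleResponse);

// required response-headers on the JAX-RS side:
// Access-Control-Allow-Origin: http://localhost:3000
// Access-Control-Allow-Credentials: true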

Github - Switch to fork

05 October 2017

Say you have just cloned a massive GitHub repository (like Netbeans) where cloning already takes minutes, and now you decide to contribute. Will you fork the repo, then clone the fork and spend another X minutes waiting?

This sometimes seems like too much of an effort. Thankfully, there are steps to transform the already-cloned repo to use your fork.

  1. Fork the repo

  2. Rename origin to upstream (your fork will be origin)

    git remote rename origin upstream
  3. Set origin as your fork

    git remote add origin git@github...my-fork
  4. Fetch origin

    git fetch origin
  5. Make master track new origin/master

    git checkout -B master --track origin/master
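
A quick git remote -v afterwards should confirm the new wiring (URLs are illustrative):

$ git remote -v
origin    git@github.com:me/netbeans.git (fetch)
origin    git@github.com:me/netbeans.git (push)
upstream  https://github.com/apache/netbeans.git (fetch)
upstream  https://github.com/apache/netbeans.git (push)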

Websphere Administration via JMX, JConsole and JVisualVM

25 September 2017

How to connect to the Websphere-specific MBean server to configure the environment and monitor the applications?

Start JConsole with the following script:

#!/bin/bash

# Change me!
export HOST=swpsws16
# This is ORB_LISTENER_ADDRESS
export IIOP_PORT=9811

export WAS_HOME=/home/daniel/IBM/WebSphere/AppServer

export PROVIDER=-Djava.naming.provider.url=corbaname:iiop:$HOST:$IIOP_PORT

export CLASSPATH=
export CLASSPATH=$CLASSPATH:$WAS_HOME/java/lib/tools.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.admin.client_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.ejb.thinclient_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/runtimes/com.ibm.ws.orb_8.5.0.jar
export CLASSPATH=$CLASSPATH:$WAS_HOME/java/lib/jconsole.jar

export URL=service:jmx:iiop://$HOST:$IIOP_PORT/jndi/JMXConnector

$WAS_HOME/java/bin/java -classpath $CLASSPATH $PROVIDER sun.tools.jconsole.JConsole $URL

Even nicer: Install VisualWAS plugin for JVisualVM.

  • Use "Add JMX Connection"

  • Use Connection-Type "Websphere"

  • For port, use SOAP_CONNECTOR_ADDRESS (default 8880)

Websphere and JVisualVM

25 September 2017

How to inspect a Websphere server via JVisualVM?

Go to "Application servers > SERVER-NAME > Java and Process management > Process Defintion > Java Virtual Machine > Generic JVM arguments" and add the following JMV settings:

-Djavax.management.builder.initial= \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.port=1099 \
-Djava.rmi.server.hostname=10.226.2.64

Providing an external ip or hostname was important for it to work.

Select "Add JMX Connection" in JVisualVM and enter: 10.226.2.64:1099.

Jenkins in Docker using Docker

23 September 2017

Say you want to run Jenkins itself in Docker. But the Jenkins build-jobs also use Docker!?

Either you have to install docker in docker, or you let the Jenkins docker-client access the host’s docker-daemon.

  1. Map the unix socket into the Jenkins container:

    -v /var/run/docker.sock:/var/run/docker.sock
  2. But the jenkins user will not have permissions to access the socket by default. So, first check the GID of the group that owns the socket:

    getent group dockerroot
  3. Now create a group (name is irrelevant; lets name it "docker") in the Jenkins container with the same GID and assign the jenkins user to it:

    sudo groupadd -g 982 docker
    sudo usermod -aG docker jenkins

ES6 with Nashorn in JDK9

14 June 2017

JDK9 is planning to incrementally support the ES6 features of JavaScript. In the current early-access builds (tested with 9-ea+170), major features like classes are not supported yet; but keywords like let/const, arrow functions and string-interpolation already work:

#!jjs --language=es6
"use strict";

let hello = (from, to) => print(`Hello from ${from} to ${to}`);

if ($EXEC('uname -n')) {
    let hostname = $OUT.trim();
    hello(hostname, 'daniel');
}

For details on what’s included by now, read JEP 292.

AWS ECS: Push a docker container

28 May 2017

Steps to push docker containers to AWS ECS:

  1. Create a docker-repository with the name de.dplatz/abc; you will get a page with all the steps and coordinates for docker login, docker tag and docker push.

  2. From CLI run:

    aws ecr get-login --region eu-central-1
    docker tag de.dplatz/abc:latest <my-aws-url>/de.dplatz/abc:latest
    docker push <my-aws-url>/de.dplatz/abc:latest

See here for starting the container.

JDK9 HttpClient

20 May 2017

Required some clarification from the JDK team on how to access the new HttpClient API (which is actually incubating now):

$ ./jdk-9_168/bin/jshell --add-modules jdk.incubator.httpclient
|  Welcome to JShell -- Version 9-ea
|  For an introduction type: /help intro

jshell> import jdk.incubator.http.*;

jshell> import static jdk.incubator.http.HttpResponse.BodyHandler.*;

jshell> URI uri = new URI("http://openjdk.java.net/projects/jigsaw/");
uri ==> http://openjdk.java.net/projects/jigsaw/

jshell> HttpRequest request = HttpRequest.newBuilder(uri).build();
request ==> http://openjdk.java.net/projects/jigsaw/ GET

jshell> HttpResponse response = HttpClient.newBuilder().build().send(request, discard(null));
response ==> jdk.incubator.http.HttpResponseImpl@133814f

jshell> response.statusCode();
$6 ==> 200

I really like the jshell-integration in Netbeans; unfortunately, it does not yet allow setting commandline-flags for the started shells. I filed an issue and got a workaround for now.

Websphere Liberty Admin Console

12 May 2017

$ bin/installUtility install adminCenter-1.0
server.xml
<!-- Enable features -->
<featureManager>
    <!-- ... -->
    <feature>adminCenter-1.0</feature>
</featureManager>

<keyStore id="defaultKeyStore" password="admin123" />

<basicRegistry id="basic" realm="BasicRealm">
    <user name="admin" password="admin123" />
</basicRegistry>
[AUDIT   ] CWWKT0016I: Web application available (default_host): http://localhost:9090/adminCenter/

Docker JVM Memory Settings

01 May 2017

Read this, this and this.

  • JDK9 has -XX:+UseCGroupMemoryLimitForHeap

  • JDK8 pre 131: Always specify -Xmx1024m and -XX:MaxMetaspaceSize

  • JDK8 since 131: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap (see the example below)
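
For example, on a JDK8u131+ image (image and jar names are hypothetical):

docker run -m 512m myimage java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar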

Docker Rest API

01 May 2017

SSL keys are at /cygdrive/c/Users/<username>/.docker/machine/machines/default

 curl --insecure -v --cert cert.pem --key key.pem -X GET https://192.168.99.100:2376/images/json

strace

01 May 2017

strace -f -e trace=open,read,close,fstat java -jar Test.jar

Stacktrace in Eclipse Debugger

12 April 2017

How to see the stacktrace for an exception-variable within the eclipse debugger?

Go to Preferences / Java / Debug / Detail Formatters and add the following for Throwable:

java.io.Writer stackTrace = new java.io.StringWriter();
java.io.PrintWriter printWriter = new java.io.PrintWriter(stackTrace);
printStackTrace(printWriter);
return getMessage() + "\n" + stackTrace;

Java debug-flags

22 March 2017

-Xdebug
// shared-memory (windows only)
-agentlib:jdwp=transport=dt_shmem,address=eclipse,server=y,suspend=n
// socket
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=9999

inotifywait

07 March 2017

Monitor filesystem-changes:

while inotifywait -qr /dir/to/monitor; do
    rsync -avz /dir/to/monitor/ /dir/to/sync/to
done

List classes in Jar

29 January 2017

List all classes in a jar-file:

$ unzip -l MyJar.jar "*.class" | tail -n+4 | head -n-2 | tr -s ' ' | cut -d ' ' -f5 | tr / . | sed 's/\.class$//'

rsync tricks

20 January 2017

This command removes files that have been removed from the source directory but will not overwrite newer files in the destination:

$ rsync -avu --delete sourcedir/ /cygwin/e/destdir/

To rsync to another system with ssh over the net:

$ rsync -avu --delete -e ssh sourcedir/ username@machine:~/destdir/

Shell Alias-Expansion

17 January 2017

Say, you have defined an alias:

$ alias gg='git log --oneline --decorate --graph'

But when typing 'gg', wouldn’t it be nice to expand the alias so you can make a small modification to the args?

$ gg<Ctrl+Alt+e>

Say you want to easily clear the screen; there is the shortcut Ctrl+L. But maybe you also always want to print the contents of the current directory; you can rebind the shortcut:

$ bind -x '"\C-l": clear; ls -l'

Java Version Strings

16 January 2017

For what JDK version is a class compiled?

$ javap -verbose MyClass.class | grep "major"
  • Java 5: major version 49

  • Java 6: major version 50

  • Java 7: major version 51

  • Java 8: major version 52

SSH Keys

13 January 2017

To connect to a remote-host without password-entry (for scripting):

# generate ssh keys for local (if not already done)
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <remote-host>
$ ssh <remote-host>

Maven Fat & Thin Jar

12 January 2017

Building a fat and a thin jar in one go:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>all</shadedClassifierName>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.mycompany.myproduct.Main</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
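
After mvn package, both artifacts end up in target/; the -all classifier comes from shadedClassifierName above (the artifact names depend on your artifactId/version and are illustrative here):

$ mvn package
$ ls target/*.jar
target/myproduct-1.0.jar
target/myproduct-1.0-all.jar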

Commandline HTTP-Server

10 January 2017

A very simple http-server:

while true ; do echo -e  "HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\n\n $(cat index.html)" |  nc -l localhost 1500; done


Older posts are available in the archive.