
Logging Guide for Hibernate 4, 5 & 6 – Use the right config for dev and prod

by Thorben Janssen at January 25, 2022 04:01 PM

The post Logging Guide for Hibernate 4, 5 & 6 – Use the right config for dev and prod appeared first on Thorben Janssen.

Choosing the right logging configuration can make the difference between finding a performance issue during development or suffering from it on production. But it can…


JDK 1.0+: ASCII Progress Indicator

by admin at January 25, 2022 02:37 PM

This example implements a progress indicator as an animated rotating character:

public class Progress {

    private int counter = 0;
    private char[] sequence = {'-', '\\', '|', '/'};

    public void showProgress() {
        int slot = counter % sequence.length;
        char current = sequence[slot];
        // print the next character, then the backspace character ('\b')
        // so that the next call overwrites it in place
        System.out.print(current);
        System.out.print('\b');
        counter++;
    }
}

The test below will show a rotating character for 5 seconds. The animation may not work in your IDE or unit test, but should work in the console:

public class ProgressTest {
    public void rotate() {
        var progress = new Progress();
        for (int i = 0; i < 20; i++) { // 20 x 250 ms is roughly 5 seconds
            try {
                progress.showProgress();
                Thread.sleep(250);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}


Kafka Connect CLI, JFR Unit, OSS Archetypes and JPMS: a podcast

by admin at January 23, 2022 07:46 PM

Subscribe to the podcast via: Spotify | iTunes | RSS

Episode #174 with Gunnar Morling (@gunnarmorling) about kcctl (the Kafka Connect CLI), JfrUnit (assertions for JDK Flight Recorder events), Layrry (a launcher and API for modularized Java applications), and the OSS Quickstart
is available for download.


Hashtag Jakarta EE #108

by Ivar Grimstad at January 23, 2022 10:59 AM

Welcome to issue number one hundred and eight of Hashtag Jakarta EE!

The release reviews for the various specifications targeting Jakarta EE 10 are adding up. You can follow the progress of the reviews by looking at the Pull Requests with the materials and/or by following the ballot threads on the Public Jakarta EE Specification Committee mailing list.

There have been some discussions going on the various mailing lists regarding how to stage and release milestones and release candidates of the specification artifacts. To help with this, the Jakarta EE Platform project has written up guidelines for Milestones and Release Candidates on the Jakarta EE Platform Project Wiki.

The specification project for Jakarta RPC that I mentioned in #107 has been approved and is being set up as we speak. Join the Jakarta RPC mailing list if you are interested in participating in the project, or just want to know firsthand what’s being discussed.

Another discussion that came up on the mailing list is regarding JCache (JSR 107) and whether that could be a candidate for a Jakarta specification. Follow the email thread on the Jakarta EE Community mailing list.

If you want to learn more about what’s coming in Jakarta EE 10, do join my talk Get Ready for Jakarta EE 10! on Monday at 11:00 EST. It is streamed live on YouTube. The talk is part of the jChampions Conference, a conference where all the speakers are Java Champions.


[GERMAN] JAX-RS 3.1 – New Features Part 4 of 7 | Head Crashing Informatics 44 | Java

by Markus Karg at January 22, 2022 04:00 PM

#JAX-RS 3.1 finally delivers long-awaited new features. In this German video series, I show you the highlights! As a co-author of the JAX-RS specification and a committer on the #Jakarta REST API, you get the features presented straight from the source! I look forward to your questions in the video comments!

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my supporter. Thanks! 🙂

Stay safe and… Party On!


Migrating from Spring Boot to Quarkus with MTA

by F.Marchioni at January 21, 2022 05:48 PM

In this article we will walk through a sample migration of a Spring Boot REST application to Quarkus using the Red Hat Migration Toolkit for Applications (MTA). The Migration Toolkit for Applications (MTA) is an extensible tool which you can use to simplify the migration of several Java applications.

The post Migrating from Spring Boot to Quarkus with MTA appeared first on Mastertheboss.


What's New in the January 2022 Payara Platform Release?

by Priya Khaira-Hanks at January 19, 2022 02:00 PM

The January 2022 Payara Platform release is here!  Payara Platform Enterprise 5.35.0 includes 2 improvements and 7 bug fixes. 

Users can also use the newly updated IntelliJ IDEA Payara Platform Tools plugin, detailed below! 

Read more below to learn about the highlights of this Enterprise-only release.


Making Readable Code With Dependency Injection and Jakarta CDI

by otaviojava at January 18, 2022 03:53 PM

Learn more about dependency injection with Jakarta CDI and enhance the effectiveness and readability of your code.


Dynamic Inserts and Updates with Spring Data JPA

by Thorben Janssen at January 18, 2022 01:00 PM

The post Dynamic Inserts and Updates with Spring Data JPA appeared first on Thorben Janssen.

When you persist a new entity or update an existing one with Spring Data JPA, you might have recognized that you’re always executing the same…


How to code a Quarkus REST Client

by F.Marchioni at January 17, 2022 02:05 PM

This article is a walkthrough of the Quarkus REST Client API using the MicroProfile REST Client. We will develop a basic REST endpoint and then set up a simple client project with a service interface for our REST service. When developing a REST client API, Quarkus offers two options: the standard JAX-RS Web Client, and the MicroProfile REST Client.

The post How to code a Quarkus REST Client appeared first on Mastertheboss.


Hashtag Jakarta EE #107

by Ivar Grimstad at January 16, 2022 10:59 AM

Welcome to issue number one hundred and seven of Hashtag Jakarta EE!

My first conference of 2022 is a wrap! Read all about it in my write-up of CodeMash 2022. In-person conferences are possible in these pre-post-pandemic times as long as care is taken and local regulations are followed. The next in-person conference I am speaking at is SnowOne in Novosibirsk, Russia. There are some moving parts to be sorted out first with regard to visas, covid-regulations, and other disruptions that may or may not happen, but hopefully, I will know more in the coming days.

Before that, I will speak at the upcoming virtual jChampions Conference. This is a conference where all the speakers are Java Champions, and it is totally free! Imagine that!

So, what is going on with Jakarta EE 10? The minutes from the weekly platform call are always a good place to look for information. Another place is the Jakarta EE Platform project mailing list which is pretty active these days. There are still two weeks until the ballot for Jakarta Activation 2.1 closes. The usual ballot period for release reviews is 14 days, but for this one, it was extended to four weeks to compensate for the holiday period.

A new specification (Jakarta RPC) has been proposed and the creation review ballot is ongoing for approval by the Jakarta EE Specification Committee. The main goal of Jakarta RPC is to make gRPC easier to use within the Jakarta EE ecosystem. It is exciting to see new specifications like this one being proposed. It shows that the goal of establishing Jakarta EE as a platform for innovation is succeeding.


Payara at the JakartaOne Livestream

by Priya Khaira-Hanks at January 13, 2022 11:22 AM

The JakartaOne Livestream is a huge event in the Jakarta EE and MicroProfile calendar. Organised by the Eclipse Foundation, it is a one-day virtual conference for developers and technical business leaders.

It brings insight into the current state and future of Jakarta EE and related technologies focused on developing cloud-native Java applications. 


My First OpenJDK Contribution CODE DEEP-DIVE (Part 4 of 4) | Java 18 | Head Crashing Informatics 43

by Markus Karg at January 08, 2022 04:00 PM

My first contribution to the #OpenJDK open source project, hence to #Java18 itself, improves the performance of InputStream::transferTo by a factor of approximately 2 to 5 – at least for file-to-file transfers (socket-based transfers will follow in a later contribution). Get an in-depth introduction to this project, why I did it, how I did it, what tricks it does under the hood, and see each line of source code. A great way to learn about the difference between old-school and modern Java programming, BTW!

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patron. Thanks! 🙂

Stay safe and… Party On!


JConf Peru 2021 Conference Report

by Reza Rahman at January 01, 2022 09:31 PM

JConf Peru 2021 took place November 27th. This was the third time the event was held and due to the pandemic, the 2021 event was virtual and free. I am very proud to have participated as an invited speaker – the JConf series is an admirable effort by the Spanish speaking Java community to hold world-class events. My friend and Peru JUG leader Jose Diaz and his wife Miryan Ramirez have worked hard to make JConf Peru a reality. Jakarta EE had a strong presence at the event including talks on Quarkus and TomEE. I delivered three talks at the conference focused on Jakarta EE and Azure.

Powering Java on Azure with Open Liberty and OpenShift

Early in the morning I delivered my slide-free talk titled “Powering Java on Azure with Open Liberty and OpenShift”. The material covers the key work Microsoft and IBM are doing to enable Jakarta EE, MicroProfile, Open Liberty and OpenShift on Azure. I demo in real time how to stand up an OpenShift cluster on Azure quickly and deploy a realistic Java EE/Jakarta EE/MicroProfile application that integrates with cloud services such as a database. The essential material for the talk is available on the Microsoft documentation site as a how-to guide. A recording of the talk is now available on YouTube.

Effective Kubernetes for Jakarta EE and MicroProfile Developers

Towards mid-day, I delivered another entirely slide-free talk – “Effective Kubernetes for Jakarta EE and MicroProfile Developers”. The talk covers some of the key things Jakarta EE and MicroProfile developers need to know while using Kubernetes. This includes:

  • How Kubernetes primitives (such as deployments, services and ingress controllers) align with application server administration, clustering, auto-scaling, auto-discovery, and load-balancing.
  • How to add self-healing capabilities using Kubernetes probes and monitoring with open source tools like Prometheus/Grafana.
  • How Kubernetes can be extended using Operators to effectively manage application server clusters.
  • How the CI/CD pipeline of your application can be adapted to Kubernetes.

A recording of the talk is now available on YouTube.

All the material for the talk is available in self-paced workshop format on GitHub. The material will take you about a day to complete end-to-end (please reach out if you need any help).

Why Jakarta EE Developers are First-Class Citizens on Azure

I wrapped up the conference by delivering my talk titled “Why Jakarta EE Developers are First-Class Citizens on Azure”. This talk covers all the work Microsoft is doing by partnering with companies like Oracle, IBM and Red Hat to support Jakarta EE developers on Azure. This includes fully enabling runtimes such as WebLogic, WebSphere Traditional, WebSphere Liberty, Open Liberty and JBoss EAP on virtual machines, the Azure Kubernetes Service (AKS) and App Service (the premier PaaS platform for Azure). I also cover important work such as supporting JMS in Azure Service Bus, as well as the Jakarta EE on Azure roadmap.

There is a brief end-to-end demo that is part of the talk. You can run the demo yourself using step-by-step instructions available on GitHub to get a feel for what the Jakarta EE on Azure experience looks like (please reach out if you need help). The slides for the talk are available on Speaker Deck. The video for the talk is now posted on YouTube.

It is worth reminding that my team and I are always ready to work closely with Java/Jakarta EE developers on Azure migrations – completely for free. To take advantage of this, you simply need to fill out this survey or reach out to me directly.

Beautiful Peru

Peru is a country rich in heritage and natural beauty. It is one of the six cradles of civilization and the center of the mighty Inca empire. I am proud to say I got to see a bit of this amazing country as part of my brief trip for the first JConf Peru. Just check out the album below of photos I took (click this link to view the album if the embedded slideshow is not working)!

All in all, I am happy to have had the opportunity to speak at JConf Peru again. I am very glad the event continued in virtual format despite the pandemic. I hope to speak there again and hopefully visit beautiful Peru again in the future.


DOAG 2021 Conference Report

by Reza Rahman at December 28, 2021 08:05 PM

The DOAG 2021 conference took place November 16-18. This is the long-running annual event organized by the German Oracle technology user community – one of the largest in the world. Due to the pandemic, the 2021 event was virtual. There is always some Java related content in the event, including on Jakarta EE, WebLogic, Coherence and Helidon. I delivered two talks on Jakarta EE, WebLogic and Azure.

Running WebLogic on Azure Kubernetes and Virtual Machines

On the second day of the conference, I had the relatively rare opportunity to deliver a talk titled “Running WebLogic on Azure Kubernetes and Virtual Machines”. The material covers the key work Microsoft and Oracle are doing to power WebLogic on Azure. The work includes tools, solutions, guides, samples and scripts to fully enable WebLogic on both Azure Virtual Machines as well as the Azure Kubernetes Service (AKS). The solutions support simple use cases such as easily creating a single working WebLogic instance. They also support common use cases such as clustering, load-balancing, failover, disaster recovery, database connectivity, caching via Coherence, consolidated logging via ELK, and Azure Active Directory integration. The introductory session includes demos for VMs and AKS. I also cover the longer-term roadmap for WebLogic on Azure. The slides for the talk are available on SpeakerDeck.

It is worth reminding that my team and I are always ready to work closely with Java/Jakarta EE developers on Azure migrations – completely for free. To take advantage of this, you simply need to fill out this survey or reach out to me directly.

Jakarta EE – Present and Future

On the last day of the conference, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck.

I am very happy to have had the opportunity to speak at the DOAG conference. I hope to participate again in the future.


Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability

December 12, 2021 10:00 PM

Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
Until an official patch arrives, you can update the bundled Log4j to the latest version in a few simple steps:


cd /opt/infinispan-server-10.1.8.Final/lib/

rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./

Please note: the patch above is not official, but according to initial tests it works with no issues.


MicroProfile 5, MicroProfile Rest Client 3.0 and JPA enhancements in Open Liberty

November 30, 2021 12:00 AM

Open Liberty offers MicroProfile 5.0, which includes MicroProfile Rest Client 3.0 and aligns with Jakarta EE 9.1. This beta release also introduces the ability to declare "default" JPA persistence properties.

There is a single beta package for Open Liberty:

  • All Beta Features: this package contains all Open Liberty beta features and GA features and functions.

This means that you can try out our in-development Open Liberty features by just adding the relevant coordinates to your build tools.

If you give the beta package a try, let us know what you think.

All Beta Features package

The All Beta Features package includes the following beta features and enhancements:

MicroProfile 5.0

MicroProfile 5.0 enables applications to use MicroProfile APIs together with Jakarta EE 9.1. MicroProfile 5.0 does not provide any other functional updates except aligning with Jakarta EE 9.1. MicroProfile 5.0 includes the following features:

  • Config 3.0

  • Fault Tolerance 4.0

  • Rest Client 3.0

  • Health 4.0

  • Metrics 4.0

  • Open Tracing 3.0

  • Open API 3.0

  • JWT propagation 2.0

This beta driver will be used as a compatible implementation for releasing MicroProfile 5.0.

Include the following in the server.xml to enable the MicroProfile 5.0 feature in Open Liberty:
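As a sketch, assuming the convenience feature's short name follows Liberty's usual convention (microProfile-5.0), the server.xml entry would look like:

```xml
<featureManager>
    <!-- feature short name assumed: microProfile-5.0 -->
    <feature>microProfile-5.0</feature>
</featureManager>
```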


Alternatively, you can enable the individual MicroProfile features that you need, for example:
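For instance, a subset of the individual features might be enabled like this (the mp* short names are assumptions based on Liberty's usual feature naming; mpRestClient-3.0 is named later in this post):

```xml
<featureManager>
    <!-- individual MicroProfile features instead of the convenience feature -->
    <feature>mpConfig-3.0</feature>
    <feature>mpHealth-4.0</feature>
    <feature>mpRestClient-3.0</feature>
</featureManager>
```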


For more information about this update, check out the MicroProfile 5.0 Release on GitHub.

MicroProfile Rest Client 3.0

MicroProfile Rest Client is an API that helps developers write type-safe interfaces that abstract and invoke remote RESTful services. This is the 3.0 release of MicroProfile Rest Client, and it adds support for Jakarta EE 9.1 technologies. From a developer’s perspective, the only change from the previous release (2.0) is the package namespace change from javax. to jakarta.. However, another change is that the Open Liberty implementation has moved from Apache CXF to RESTEasy; this change brings with it some behavior and property changes (most of which are already documented as differences between jaxrs-2.1 and restfulWS-3.0).

To use this new feature, add mpRestClient-3.0 to the featureManager element in your server.xml. The code should be similar to previous versions of MP Rest Client, but the packages change from javax. to jakarta..
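A minimal sketch of that server.xml change, using the mpRestClient-3.0 feature name from the text:

```xml
<featureManager>
    <feature>mpRestClient-3.0</feature>
</featureManager>
```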


For more information, check out:

Define JPA persistence properties at server scope

This new JPA enhancement adds the ability to declare "default" JPA persistence properties to all container-managed persistence contexts as a Liberty server.xml configuration.

Previously, if a persistence property needed to be set for all persistence.xml configuration files, you had to manually update every persistence.xml file in every application. This could require hundreds of manual updates and/or rebuilding of applications. With this enhancement, you can specify persistence properties in the server.xml that propagate to all container-managed persistence units for applications installed on that server.

To start using the new feature, add the <defaultProperties> configuration element to the <jpa> configuration in your server.xml file. Specify the persistence properties that you want to apply to all container-managed persistence units, as shown in the following examples:

Example 1:

    <jpa defaultPersistenceProvider="org.hibernate.jpa.HibernatePersistenceProvider">
        <defaultProperties>
            <property name="javax.persistence.lock.timeout" value="4000"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle12cDialect"/>
        </defaultProperties>
    </jpa>

Example 2:

    <jpa>
        <defaultProperties>
            <property name="javax.persistence.lock.timeout" value="12345"/>
            <property name="eclipselink.cache.shared.default" value="false"/>
        </defaultProperties>
    </jpa>

Technical description

These defaultProperties are integration-level persistence properties that are supplied to the specified persistence provider when the PersistenceProvider.createContainerEntityManagerFactory method is called by the JPA Container.

According to the JPA specification:

If the same property or hint is specified more than once, the following order of overriding applies, in order of decreasing precedence:
  • argument to method of EntityManager, Query, or TypedQuery interface
  • specification to NamedQuery (annotation or XML)
  • argument to createEntityManagerFactory method
  • specification in persistence.xml

These defaultProperties persistence property values override any properties with the same name that are specified in a persistence.xml file. However, property values specified through the PersistenceContext annotation, the persistence-context-ref deployment descriptor element, or query hints will override these defaultProperties.

Try it now

To try out these features, just update your build tools to pull the Open Liberty All Beta Features package instead of the main release. The beta works with Java SE 17, Java SE 11, or Java SE 8.

If you’re using Maven, here are the coordinates:
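As a sketch, a Maven dependency mirroring the Gradle coordinates shown below (group and artifact taken from the Gradle snippet; the open version range is assumed to carry over):

```xml
<dependency>
    <groupId>io.openliberty.beta</groupId>
    <artifactId>openliberty-runtime</artifactId>
    <!-- open version range, matching the Gradle '[,)' below -->
    <version>[,)</version>
</dependency>
```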


Or for Gradle:

dependencies {
    libertyRuntime group: 'io.openliberty.beta', name: 'openliberty-runtime', version: '[,)'
}

Or take a look at our Downloads page.

Jakarta EE 9.1 Beta Features

Are you looking for our regular section regarding Jakarta EE 9.1 beta feature updates? Well, good news: the Jakarta EE 9.1 features are now out of beta and fully supported. That means that you can either use them in the official release, or continue to use them in the beta package. Just as before, you can enable the individual features you want or you can just add the Jakarta EE 9.1 convenience feature to enable all of the Jakarta EE 9.1 beta features at once:
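Assuming the convenience feature's short name is jakartaee-9.1 (per the renaming from javaee described later in this post), that looks like:

```xml
<featureManager>
    <!-- feature short name assumed: jakartaee-9.1 -->
    <feature>jakartaee-9.1</feature>
</featureManager>
```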


Or you can add the Web Profile convenience feature to enable all of the Jakarta EE 9.1 Web Profile beta features at once:
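A sketch of that configuration, assuming the Web Profile convenience feature's short name is webProfile-9.1:

```xml
<featureManager>
    <!-- feature short name assumed: webProfile-9.1 -->
    <feature>webProfile-9.1</feature>
</featureManager>
```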


Your feedback is welcome

Let us know what you think on our mailing list. If you hit a problem, post a question on StackOverflow. If you hit a bug, please raise an issue.


Jakarta EE 9.1 support and configurable response headers in Open Liberty

November 26, 2021 12:00 AM

Jakarta EE 9.1 support is now available as part of Open Liberty, alongside configurable response headers, which offer more granular control over response headers! Several significant bug fixes are also part of this release.

In Open Liberty

Along with the new features and functions added to the runtime, we’ve also made updates to our guides.

Run your apps using

If you’re using Maven, here are the coordinates:
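A sketch of the Maven dependency mirroring the Gradle coordinates shown below (group and artifact taken from the Gradle snippet; the open version range is assumed to carry over):

```xml
<dependency>
    <groupId>io.openliberty</groupId>
    <artifactId>openliberty-runtime</artifactId>
    <!-- open version range, matching the Gradle '[,)' below -->
    <version>[,)</version>
</dependency>
```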


Or for Gradle:

dependencies {
    libertyRuntime group: 'io.openliberty', name: 'openliberty-runtime', version: '[,)'
}

Or if you’re using Docker:

FROM open-liberty

Or take a look at our Downloads page.


Jakarta EE 9.1 support


Jakarta EE 9.1 support is now available in Open Liberty! This support allows you to run Jakarta EE 9.1 applications using Java 8, 11, or 17 with other Open Liberty value-add features that are updated to support Jakarta EE 9.1. Many of you have followed our progress for delivering Jakarta EE 9.1 via our beta releases and corresponding blogs, and we’d like to thank those who provided feedback along the way. If you’re targeting a new application for Jakarta EE 9.1, make sure to use the jakarta namespace. For existing applications that you’d like to move from Java EE (and its javax namespace) to Jakarta EE (and its jakarta namespace), we recommend trying the Eclipse Transformer, an open source project originally developed by members of the Open Liberty team and then contributed to the Eclipse Foundation.

With the RESTful Web Services 3.0 (formerly called JAX-RS) support in Open Liberty, there is a significant performance improvement for applications that use the RESTful Web Services function. This improvement was achieved by moving our RESTful Web Services implementation from Apache CXF to RESTEasy. With this new version, CDI is enabled by default, while JSON Binding is not enabled by default and must be specified as a feature in your server.xml.

Any Liberty features with API and/or SPI functions that use Jakarta EE APIs as part of their method signatures have been updated to have their package versions be reset to 10.0 when using those API / SPIs with Jakarta EE 9.1. Any bundles used for user features that depend on those packages will need to change the import package version range when updating the user feature to use Jakarta EE 9.1.

With the introduction of Jakarta EE 9.1, the Jakarta Enterprise Beans 4.0 specification includes a few minor changes over the prior version of the specification, Enterprise JavaBeans (EJB) 3.2, as follows:

  • Note the new names of the features; all of the same features exist, but the feature name prefix has changed from ejb to enterpriseBeans. For example, enterpriseBeansLite-4.0 is the new version of ejbLite-3.2.

  • The API package has changed from javax.ejb to jakarta.ejb

  • The @Schedule annotation is now repeatable

  • The following API methods have been removed:

    • javax.ejb.EJBContext.getCallerIdentity() → use getCallerPrincipal()

    • javax.ejb.EJBContext.getEnvironment() → use JNDI lookup in java:comp/env

    • javax.ejb.EJBContext.isCallerInRole(Identity) → use isCallerInRole(String)

    • javax.ejb.SessionContext.getMessageContext() (removed with JAX-RPC)

All other capabilities of Enterprise Beans remain the same as the prior specification version (3.2).

Although some of the Jakarta EE 9.1 features have only received a version update, the majority have also had their name changed. The following table lists the features for which both the short name and the version number changed. To update one of these features for Jakarta EE 9.1, you must change both the feature short name and version number in your server.xml file.

Table 1. Jakarta EE 9.1 features with changed short names and versions:

  • Jakarta Enterprise Beans
  • Jakarta Enterprise Beans Home Interfaces
  • Jakarta Enterprise Beans Lite
  • Jakarta Enterprise Beans Persistent Timers
  • Jakarta Enterprise Beans Remote
  • Jakarta Expression Language
  • Jakarta Authorization
  • Jakarta Authentication
  • Jakarta EE Platform
  • Jakarta EE Application Client
  • Jakarta Mail
  • Jakarta XML Binding
  • Jakarta RESTful Web Services
  • Jakarta RESTful Web Services Client
  • Jakarta XML Web Services
  • Jakarta Connectors
  • Jakarta Connectors Inbound Security
  • Jakarta Messaging
  • Jakarta Persistence
  • Jakarta Persistence Container
  • Jakarta Server Faces
  • Jakarta Server Faces Container
  • Jakarta Server Pages
  • Messaging Server Client
  • Messaging Server Security
  • Messaging Server
For a full overview of what has changed, visit the Jakarta EE 9.1 feature updates page.

To enable the Jakarta EE 9.1 features, add the corresponding feature to your server.xml. You can enable either the individual features you want or you can add the Jakarta EE 9.1 convenience features. For example, to enable all of the Jakarta EE 9.1 features at once add:
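For example, assuming the convenience feature's short name is jakartaee-9.1:

```xml
<featureManager>
    <!-- feature short name assumed: jakartaee-9.1 -->
    <feature>jakartaee-9.1</feature>
</featureManager>
```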


Or you can add the Web Profile convenience feature to enable all of the Jakarta EE 9.1 Web Profile features at once:
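A sketch of that configuration, assuming the Web Profile convenience feature's short name is webProfile-9.1:

```xml
<featureManager>
    <!-- feature short name assumed: webProfile-9.1 -->
    <feature>webProfile-9.1</feature>
</featureManager>
```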


For details regarding the APIs and SPIs, check out the Jakarta EE 9.1 javadoc.

Configurable Response Headers

You can now configure Open Liberty to modify response headers. The available configuration options allow headers to be appended, existing headers to be overwritten, missing headers to be added, and undesired headers to be removed from all responses serviced by an HTTP endpoint. This gives you more granular control over response headers without the need to change existing applications or filters.

To use configurable response headers, begin by defining a new element called <headers> in the server.xml. You can configure this for individual HTTP endpoints or for all endpoints at once.

Configuring for individual HTTP endpoints:

<httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443">
    <!-- port values and header entries are illustrative -->
    <headers add="foo:bar" set="customHeader:customValue" remove="Via"/>
</httpEndpoint>


Configuring for all HTTP endpoints:

<httpEndpoint id="defaultHttpEndpoint" httpPort="9080" headersRef="myHeadersID"/>

<httpEndpoint id="otherHttpEndpoint" httpPort="9081" headersRef="myHeadersID"/>

<!-- the headersRef attribute name and port values are assumed for illustration -->
<headers id="myHeadersID" add="foo:bar" set="customHeader:customValue" remove="Via"/>

The add attribute allows multiple headers with the same name to be added to a response, similar to the HttpServletResponse’s addHeader API. Similarly, the set attribute is analogous to the setHeader API, which sets a response header to the given name and value. This overwrites existing headers that share the same name. The setIfMissing attribute will only set the configured headers if they are not already present on the response. Lastly, the remove attribute will remove any response headers whose name matches a name defined by the configuration.
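The semantics of the four attributes can be sketched in plain Java. This is a toy model over a Map, not Liberty's implementation; it just mirrors the add/set/setIfMissing/remove behavior described above:

```java
import java.util.*;

public class HeaderRules {
    // toy model of response headers: name -> list of values
    static Map<String, List<String>> headers = new LinkedHashMap<>();

    static void add(String name, String value) {          // may create duplicates, like addHeader
        headers.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
    }
    static void set(String name, String value) {          // overwrites, like setHeader
        headers.put(name, new ArrayList<>(List.of(value)));
    }
    static void setIfMissing(String name, String value) { // only applies when the header is absent
        headers.computeIfAbsent(name, k -> new ArrayList<>(List.of(value)));
    }
    static void remove(String name) {                     // drops the header entirely
        headers.remove(name);
    }

    public static void main(String[] args) {
        add("foo", "bar");
        add("foo", "bar2");                     // two values for the same name
        set("customHeader", "customValue");
        setIfMissing("X-Forwarded-Proto", "https");
        set("Via", "proxy");
        remove("Via");                          // removal wins over the earlier set
        System.out.println(headers);
    }
}
```

Liberty's real implementation of course operates on the servlet response, but the precedence is the same as in this sketch.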

Each header entry for the add, set, and setIfMissing attributes can be provided as a stand-alone header name. Optionally, a header value can be added by appending the colon : character after every header name. Note, however, that the remove attribute only expects header names and not a header name:value pair.

As seen in the example above, one way to configure the <headers> element is to declare each individual header within its own add, set, setIfMissing, or remove attribute. In addition to this configuration, headers can be provided as a comma delimited list.

The following server.xml configuration declares individual headers within the desired configuration attributes:
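A sketch of such a configuration, with one header per attribute (header names and values reused from the comma-delimited example that follows):

```xml
<headers add="foo:bar"
         set="customHeader:customValue"
         setIfMissing="X-Forwarded-Proto:https"
         remove="Via"/>
```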


This configuration can also be declared as comma delimited lists, such as:

<headers add="foo:bar, foo:bar2" set="customHeader:customValue" setIfMissing="X-Forwarded-Proto:https" remove="Via"/>

There are three warning messages relating to misconfigurations for this feature. Note that if a configuration value is considered to be misconfigured, it will not be utilized. Furthermore, if the misconfigured value had a non-empty header name, any further configurations with this same name will also be ignored.

The first warning message, CWWKT0042W, will be logged whenever a header name is left empty. While header values are completely optional, the configuration does expect a non-empty header name.

The add configuration allows for multiple headers with the same name to be configured. However, it would be ambiguous to repeat a header name in any other configuration attribute. For instance, consider the set attribute option, which is meant to overwrite an existing header that shared the declared header name. If the set configuration contained two headers with the same name, it would be unclear which of the two values should be chosen. Similarly, if the same header name is present in two or more configurations, the same ambiguity is true. As such, and excluding repetitions in the add configuration, whenever a header name is found to be used more than once, the warning message CWWKT0043W will be logged.

The third warning message, CWWKT0044W, is logged if a header that has already been flagged as a duplicate by the CWWKT0043W warning message continues to be used in further configurations.

Warning Message Descriptions:

CWWKT0042W : An empty header name was found when the 'set' configuration was parsed. This value is ignored.

CWWKT0043W : A duplicate header name was found in the [foo] header using the set configuration. All configurations for the [foo] header are ignored. Any header that is defined by the remove, add, set, or setIfMissing configurations must be unique across all configurations.

CWWKT0044W : The [foo] header, which is marked as a duplicate header name, was found in the set configuration. The [foo] header is ignored. Any header that is defined by the set configuration must contain unique header names.


Open Liberty now provides a way to control response headers for a given HTTP endpoint. Headers can be appended, overwritten, added only if not already present, or removed from all responses. Try it out for yourself!

Notable bugs fixed in this release

We’ve spent some time fixing bugs. The following sections describe just some of the issues resolved in this release. If you’re interested, here’s the full list of bugs fixed in

  • Throughput performance degradation in eclipselink due to Thread.getStackTrace calls

    We discovered an issue where a change to the org.eclipse.persistence.internal.helper.ConcurrencyManager class caused a ~75% throughput performance degradation in EclipseLink. The lost throughput was caused by calls to Thread.getStackTrace(). This regression showed up for jpa-2.2 in and persistence-3.0 in This issue has now been fixed by removing the getStackTrace() calls.

  • MicroProfile OpenAPI 2.0 includes non-public fields in the generated documentation

    Previously, when a schema was created for a class which includes a private field, the private field would be listed in the generated OpenAPI document, for example:

    public class Example {
        private String field1;
        public String field2;
    }

    results in

    Example:
      type: object
      properties:
        field1:
          type: string
        field2:
          type: string

    The field field1 should not have appeared in the generated OpenAPI document as it is private. This issue has been fixed by setting the mp.openapi.extensions.smallrye.private-properties.enable property to disable non-public properties by default.
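If you do want non-public fields to appear in the generated document, the property named in the fix can presumably be set back to true through MicroProfile Config; a sketch, using the standard default config-file location:

```properties
# src/main/resources/META-INF/microprofile-config.properties
mp.openapi.extensions.smallrye.private-properties.enable=true
```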

  • Port bind skipped at server startup

    Previously, in an extremely rare scenario, configured ports could silently fail to bind - preventing Liberty from using them. This issue was caused by a subtle race condition in the code responsible for delaying the port bind until the server is ready to handle traffic.

    In the failing scenario, the port started message would not be emitted - for example the following message would be missing:

    CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv4) port 9080.

    and the following FFDC would be seen:

    Exception = java.lang.RuntimeException
    Source =
    probeid = 254
    Stack Dump = java.lang.RuntimeException: java.nio.channels.NotYetBoundException
            at java.base/
    Caused by: java.nio.channels.NotYetBoundException
            at java.base/
            at java.base/
            ... 2 more

    This issue has now been fixed so that all configured ports start or, if there is a problem, a meaningful error message is logged.

  • Application fails to restart in server.xml update scenario

    We discovered an issue where an application would fail to restart, due to a race condition during server reconfiguration when multiple apps are installed. The problem occurs when one app starts before another app is finished uninstalling. In theory this shouldn’t be a problem - however for this scenario these apps are sharing a VirtualHost configuration object, and in this case one app updates the parent VirtualHost as part of its uninstall process in such a way that the other gets into an invalid state. The server log will show a warning such as CWWKZ0020I: Application <app_name> not updated. This issue was fixed by eliminating the race condition that caused the failure.

  • HTTP upgrade to WebSocket can cause quiesce errors

    When a WebSocket connection is started, it starts as an HTTP connection. Previously, if an error occurred during the upgrade from an HTTP connection to a WebSocket connection, the error processing would neglect to decrement a connection counter, which then caused the server to believe there was an open connection during server stop. There were two scenarios where these quiesce errors would occur:

    • When a read error occurred during the transition between an HTTP and a WebSocket connection, the error processing neglected to decrement a connection counter, which then caused the server to believe there was an open connection during server stop.

    • If a client immediately closed the WebSocket connection after it was opened, the original upgrade request handling may not have had enough time to close properly on the server. Once again, the connection counter failed to decrement, leading the server to believe there was an open connection during server stop.

      This issue has been fixed by adding a new flag called decrementNeeded which helps to ensure that the decrement is not neglected.

  • Ensure ServletRequestListener#requestDestroyed is always called

    We discovered a bug where the ServletRequestListener#requestDestroyed call did not occur if an exception occurred during asynchronous servlet processing while an appSecurity-x.0 feature was enabled. For this bug to occur, two conditions must be met: the webContainer property deferServletRequestListenerDestroyOnError is set to true, and an appSecurity-x.0 feature is enabled. This issue has now been resolved.
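For reference, webContainer custom properties such as the one named above are set as attributes on the webContainer element in server.xml; a sketch:

```xml
<webContainer deferServletRequestListenerDestroyOnError="true"/>
```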

  • ClassCastException in JSP relating to JDT internal classes

    Open Liberty introduced a bug where the following error occurred for certain class lookups in JSP:

    Error 500: java.lang.ClassCastException: class org.eclipse.jdt.internal.compiler.lookup.PlainPackageBinding cannot be cast to class org.eclipse.jdt.internal.compiler.lookup.TypeBinding (org.eclipse.jdt.internal.compiler.lookup.PlainPackageBinding and org.eclipse.jdt.internal.compiler.lookup.TypeBinding are in unnamed module of loader org.eclipse.osgi.internal.loader.EquinoxClassLoader @3522bc53)

    This issue has now been fixed.

New and updated guides since the previous release

As Open Liberty features and functionality continue to grow, we continue to add new guides on those topics to make their adoption as easy as possible. Existing guides also receive updates to address any reported bugs/issues, keep their content current, and expand what their topics cover.

  • Creating a multi-module application

    • Previously the guide demonstrated how to build an application with multiple modules using Maven and Open Liberty. With this update, it now also introduces how to use the Liberty Maven plug-in to develop a multi-module application in development mode without having to prebuild the JAR and WAR files.

Get Open Liberty now

November 26, 2021 12:00 AM

Welcome AsiaInfo to the Jakarta EE Working Group!

by Tanja Obradovic at November 18, 2021 05:06 PM

It is great to see that our Chinese membership is growing! HuNan AsiaInfo AnHui is a Chinese software company, mainly engaged in basic software research and development, committed to industry applications for communication operators, artificial intelligence, 5G, Internet of Things software, cloud computing, and big data. Current products include the database product AntDB, the application middleware product FlyingServer, AI-RPA, and others. Their goal is to provide good basic software, connect with relevant global standards and technologies, and serve the huge software demand market in China.

Warm welcome to the HuNan AsiaInfo AnHui as a new member of the Jakarta EE Working Group! 

by Tanja Obradovic at November 18, 2021 05:06 PM

JPA query methods: influence on performance

by Vladimir Bychkov at November 18, 2021 07:22 AM

Specification JPA 2.2/Jakarta JPA 3.0 provides several methods to select data from a database. In this article we research how these methods affect performance.

by Vladimir Bychkov at November 18, 2021 07:22 AM

Welcome SOUJava to the Jakarta EE Working Group!

by Tanja Obradovic at November 17, 2021 04:50 PM

We have another Java User Group as a Guest member! I am extremely happy to let you know that SOUJava has joined Jakarta EE Working Group! 

SOUJava does not need an introduction in the Java community! Their members are already heavily involved in Jakarta EE-related projects, and having them as a Guest Member of the Jakarta EE Working Group additionally shows their commitment to advancing enterprise Java development.

Please join me in welcoming SOUJava to the Jakarta EE Working Group! It is great to see that the Jakarta EE Working Group and our community are continuously growing.

by Tanja Obradovic at November 17, 2021 04:50 PM

Eclipse Jetty Servlet Survey

by jesse at October 27, 2021 01:25 PM

This short 5-minute survey is being presented to the Eclipse Jetty user community to validate conjectures the Jetty developers have about how users will leverage Jakarta EE servlets and the Jetty project. We have some features we are gauging interest in.

by jesse at October 27, 2021 01:25 PM

Jersey 2.35, and Jersey 3.0.3

by Jan at October 21, 2021 11:25 PM

Jersey 2.35 and the corresponding Jersey 3.0.3 have been released recently. There are a number of new features. We support JDK 17, JDK 16, JDK 15, JDK 14, JDK 13, JDK 12, JDK 11, and JDK 8. It means Jersey is … Continue reading

by Jan at October 21, 2021 11:25 PM

EJB 3 Programming Notes (2)

October 01, 2021 09:00 AM

In EJB 3 Programming Notes (1), I mentioned that resource injection in an application client main class must target static fields or static methods. Now let's look at resource injection in an EJB bean. There is an error hidden in the following code: @Stateless public class HelloBean implements HelloIF { private static EchoIF echoBean; @EJB(beanName="EchoBean", name="ejb/echo...

October 01, 2021 09:00 AM

EJB 3 Programming Notes (1)

October 01, 2021 09:00 AM

I think the best way to learn something is through examples, especially by growing from other people's mistakes. I want to record my experiences with EJB 3 here for everyone's reference. To use EJB 3, you first need a Java EE 5 application server. The GlassFish preview release fully implements the latest features of the Java EE 5 platform, and it is completely free and open source. It is also best to use a handy IDE, such as NetBeans. Of course, EJB 3 is now much simpler, so it is not hard to create an EJB application with VIM. The application client program below uses @EJB to inject resources and reference helloBean. However, there is a small mistake. It compiles and deploys, but it does not run. Where is the error? import javax.ejb.EJB...

October 01, 2021 09:00 AM

Custom Identity Store with Jakarta Security in TomEE

by Jean-Louis Monteiro at September 30, 2021 11:42 AM

In the previous post, we saw how to use the built-in ‘tomcat-users.xml’ identity store with Apache TomEE. While this identity store is inherited from Tomcat and integrated into the Jakarta Security implementation in TomEE, it is usually good for development or simple deployments, but may prove too simple or restrictive for production environments.

This blog will focus on how to implement your own identity store. TomEE can use LDAP or JDBC identity stores out of the box. We will try them out next time.

Let’s say you have your own file store or your own data store like an in-memory data grid, then you will need to implement your own identity store.

What is an identity store?

An identity store is a database or a directory (store) of identity information about a population of users that includes an application’s callers.

In essence, an identity store contains all information such as caller name, groups or roles, and required information to validate a caller’s credentials.

How to implement my own identity store?

This is actually fairly simple with Jakarta Security. The only thing you need to do is create an implementation of `IdentityStore`. All methods in the interface have default implementations, so you only have to implement what you need.

public interface IdentityStore {

   default CredentialValidationResult validate(Credential credential) {
       // default implementation elided
   }

   default Set<String> getCallerGroups(CredentialValidationResult validationResult) {
       // default implementation elided
   }

   default int priority() {
       // default implementation elided
   }

   default Set<ValidationType> validationTypes() {
       // default implementation elided
   }

   enum ValidationType {
       VALIDATE, PROVIDE_GROUPS
   }
}

By default, an identity store is used for both validating user credentials and providing groups/roles for the authenticated user. Depending on what #validationTypes() returns, you will have to implement #validate(…) and/or #getCallerGroups(…).

#getCallerGroups(…) will receive the result of #validate(…). Let’s look at a very simple example:

public class TestIdentityStore implements IdentityStore {

   public CredentialValidationResult validate(Credential credential) {

       if (!(credential instanceof UsernamePasswordCredential)) {
           return INVALID_RESULT;
       }

       final UsernamePasswordCredential usernamePasswordCredential = (UsernamePasswordCredential) credential;
       if (usernamePasswordCredential.compareTo("jon", "doe")) {
           return new CredentialValidationResult("jon", new HashSet<>(asList("foo", "bar")));
       }

       if (usernamePasswordCredential.compareTo("iron", "man")) {
           return new CredentialValidationResult("iron", new HashSet<>(Collections.singletonList("avengers")));
       }

       return INVALID_RESULT;
   }
}


In this simple example, the identity store is hardcoded. Basically, it knows only 2 users, one of them has some roles, while the other has another set of roles.

You can easily extend this example and query a local file, or an in-memory data grid if you need. Or use JPA to access your relational database.

IMPORTANT: for TomEE to pick it up and use it in your application, the identity store must be a CDI bean.
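For example, adding a bean-defining annotation to the store from the example above would be enough; a sketch, using @ApplicationScoped from CDI (the javax package name assumes the pre-jakarta APIs used elsewhere in this post):

```java
import javax.enterprise.context.ApplicationScoped;

// A bean-defining annotation makes the identity store a CDI bean,
// so TomEE can discover and use it.
@ApplicationScoped
public class TestIdentityStore implements IdentityStore {
    // ... validate(...) as shown above ...
}
```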

The complete and runnable example is available under

The post Custom Identity Store with Jakarta Security in TomEE appeared first on Tomitribe.

by Jean-Louis Monteiro at September 30, 2021 11:42 AM

Book Review: Practical Cloud-Native Java Development with MicroProfile

September 24, 2021 12:00 AM

Practical Cloud-Native Java Development with MicroProfile cover

General information

  • Pages: 403
  • Published by: Packt
  • Release date: Aug 2021

Disclaimer: I received this book as a collaboration with Packt and one of the authors (Thanks Emily!)

A book about Microservices for the Java Enterprise-shops

Year after year, many enterprise companies struggle to embrace the Cloud Native practices that we tend to label as Microservices; however, Microservices is a metapattern that needs to follow a well-defined approach, like:

  • (We aim for) reactive systems
  • (Hence we need a methodology like) 12 Cloud Native factors
  • (Implementing) well-known design patterns
  • (Dividing the system by using) Domain Driven Design
  • (Implementing microservices via) Microservices chassis and/or service mesh
  • (Achieving deployments by) Containers orchestration

Many of these concepts require a considerable amount of context, but some books, tutorials, conferences and YouTube videos tend to focus on specific niche information, making it difficult to have a "cold start" in the microservices space if you have been developing regular/monolithic software. For me, that's the best thing about this book: it provides a holistic view to understand microservices with Java and MicroProfile for "cold start" developers.

About the book

Using a software architect's perspective, MicroProfile could be defined as a set of specifications (APIs) that many microservices chassis implement in order to solve common microservices problems through patterns, lessons learned from well-known Java libraries, and proposals for collaboration between Java Enterprise vendors.

Subsequently, if you think that it sounds a lot like Java EE, that's right: it's the same spirit but in the microservices space, with participation from many vendors, including vendors from the Java EE space -e.g. Red Hat, IBM, Apache, Payara-.

The main value of this book is the willingness to go beyond the APIs, providing four structured sections that have different writing styles, for instance:

  1. Section 1: Cloud Native Applications - Written as a didactical resource to learn fundamentals of distributed systems with Cloud Native approach
  2. Section 2: MicroProfile Deep Dive - Written as a reference book with code snippets to understand the motivation, functionality and specific details in MicroProfile APIs and the relation between these APIs and common Microservices patterns -e.g. Remote procedure invocation, Health Check APIs, Externalized configuration-
  3. Section 3: End-to-End Project Using MicroProfile - Written as a narrative workshop with source code already available, to understand the development and deployment process of Cloud Native applications with MicroProfile
  4. Section 4: The standalone specifications - Written as a reference book with code snippets, it describes the development of newer specs that could be included in the future under MicroProfile's umbrella

First section

This was by far my favorite section. This section presents a well-balanced overview about Cloud Native practices like:

  • Cloud Native definition
  • The role of microservices and the differences with monoliths and FaaS
  • Data consistency with event sourcing
  • Best practices
  • The role of MicroProfile

I enjoyed this section because my current role is to coach or act as a software architect at different companies, hence this is good material to explain the whole panorama to my coworkers and/or use this book as a quick reference.

My only concern with this section is the final chapter. This chapter presents an application called IBM Stock Trader that (as you probably guessed) IBM uses to demonstrate these concepts using MicroProfile with Open Liberty. The chapter by itself presents an application that combines data sources, front ends, and Kubernetes; however, the application becomes useful only in Section 3 (at least that was my perception). Hence you will be coming back to this section once you're executing the workshop.

Second section

This section divides the MicroProfile APIs into three levels; the division actually makes a lot of sense, but it became evident to me only during this review:

  1. The base APIs to create microservices (JAX-RS, CDI, JSON-P, JSON-B, Rest Client)
  2. Enhancing microservices (Config, Fault Tolerance, OpenAPI, JWT)
  3. Observing microservices (Health, Metrics, Tracing)

Additionally, this section also describes the need for Docker and Kubernetes, and how other common approaches -e.g. service mesh- overlap with microservice chassis functionality.

Currently I'm a MicroProfile user, hence I knew most of the APIs; however, I liked the actual description of the pattern/need that motivated the inclusion of each API. The descriptions could be useful for newcomers, along with the code snippets that are also available on GitHub.

If you're a Java/Jakarta EE developer, you will find the CDI section a little bit superficial; indeed, CDI by itself deserves a whole book/fascicle, but this chapter gives you the basics to start the development process.

Third section

This section switches the writing style to a workshop style. The first chapter is entirely focused on how to compile the sample microservices, how to fulfill the technical requirements and which MicroProfile APIs are used on every microservice.

Note that this is not a Java programming workshop; it's a Cloud Native workshop with ready-to-deploy microservices, hence the step-by-step guide covers compilation with Maven, Docker containers, scaling with Kubernetes, operators in OpenShift, etc.

You could explore and change the source code if you wish, but the section is written in a "descriptive" way that assumes the samples' existence.

Fourth section

This section is pretty similar to the second section in the reference book style, hence it also describes the pattern/need that motivated the discussion of the API and code snippets. The main focus of this section is GraphQL, Reactive Approaches and distributed transactions with LRA.

This section will probably change in future editions of the book because, at the time of publishing, the Cloud Native Computing Foundation revealed that some initiatives about observability will be integrated into the OpenTelemetry project, and MicroProfile is discussing its future approach.

Things that could be improved

As with any review, this is the most difficult section to write, but I think a second edition should:

  • Extend the CDI section due its foundational status
  • Switch the order of the Stock Tracer presentation
  • Extend the data consistency discussion -e.g. CQRS, Event Sourcing-, hopefully with advances from LRA

The last item is mostly a wish, since I'm always in need of better ways to integrate these common practices with buses like Kafka or Camel using MicroProfile. I know that some implementations -e.g. Helidon, Quarkus- already have extensions for Kafka or Camel, but data consistency is an entire discussion about patterns, tools, and best practices.

Who should read this book?

  • Java developers with strong SE foundations and familiarity with the enterprise space (Spring/Java EE)

September 24, 2021 12:00 AM

GlassFish & Payara Auto-Clustering: Running Jakarta EE Highly-Available Applications in the Cloud

by Tetiana Fydorenchyk at September 21, 2021 11:19 AM

Explore automatic clustering of GlassFish and Payara in one click, with no manual configuration required. The main advantage of this solution is the automatic interconnection of multiple application server instances upon application topology changes, which implements the commonly used clustering configuration. Find out how to get auto-clustered, highly available Java servers up and running in the cloud in a matter of minutes.

by Tetiana Fydorenchyk at September 21, 2021 11:19 AM

#156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE

by David Blevins at September 14, 2021 02:07 PM

New podcast episode with Adam Bien & David Blevins. Apple and EJB, @ApacheTomEE, @tomitribe, @JakartaEE, the benefits of code generation with bash, and over-engineering - the 156th

The post #156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE appeared first on Tomitribe.

by David Blevins at September 14, 2021 02:07 PM

Top Trends in the Jakarta EE Developer Survey Results

by Mike Milinkovich at September 14, 2021 11:00 AM

Our annual Jakarta EE Developer Survey results give everyone in the Java ecosystem insight into how the cloud native world for enterprise Java is unfolding and what the latest developments mean for their strategies and businesses. Here’s a brief look at the top technology trends revealed in this year’s survey.

For context, this year’s survey was completed by almost 950 software developers, architects, and decision-makers around the world. I’d like to sincerely thank everyone who took the time to complete the survey, particularly our survey partners, Jakarta EE Working Group members Fujitsu, IBM, Jelastic, Oracle, Payara, Red Hat, and Tomitribe, who shared the survey with their communities. Your support is crucial to help ensure the survey results reflect the viewpoints of the broadest possible Java developer audience.

Jakarta EE Continues to Deliver on Its Promise

Multiple data points from this year’s survey confirm that Jakarta EE is fulfilling its promise to accelerate business application development for the cloud.

As in the 2020 survey results, Jakarta EE emerged as the second-place cloud native framework with 47 percent of respondents saying they use the technologies. That’s an increase of 12 percent over the 2020 survey results, reflecting the industry’s increasing recognition that Jakarta EE delivers important strategic and technical benefits.

Almost half of the survey respondents have either already migrated to Jakarta EE or plan to within the next six to 24 months. Together, Java EE 8, Jakarta EE 8, and Jakarta EE 9 are now used by 75 percent of survey respondents. And Jakarta EE 9 usage reached nine percent despite the fact the software was only released in December 2020.

With the rise of Jakarta EE, it’s not surprising that developers are also looking for faster support from Java EE/Jakarta EE and cloud vendors.

Microservices Usage Continues to Increase

Interestingly, the survey revealed that monolithic approaches are declining in favor of hybrid architectures. Only 18 percent of respondents said they’re maintaining a monolithic approach, compared to 29 percent who have adopted a hybrid approach and 43 percent who are using microservices.

A little over a year ago, monolithic implementations were outpacing hybrid approaches, showing just how quickly the cloud native Java world is evolving. In alignment with these architectural trends, MicroProfile adoption is up five percent over last year to 34 percent.

Download the Complete Survey Results

For additional insight and access to all of the data collected in our 2021 Jakarta EE Developer survey, we invite everyone to download the survey results.

by Mike Milinkovich at September 14, 2021 11:00 AM

Tomcat and TomEE Clustering Automation

by Tetiana Fydorenchyk at August 18, 2021 11:29 AM

Explore tips on how to install automatically clustered Tomcat and TomEE servers to get a highly available solution that can efficiently serve a large number of users, process a lot of traffic, and remain reliable.
Tomcat TomEE Automatic Clustering

by Tetiana Fydorenchyk at August 18, 2021 11:29 AM

Jakarta Community Acceptance Testing (JCAT)

by javaeeguardian at July 28, 2021 05:41 AM

Today the Jakarta EE Ambassadors are announcing the start of the Jakarta EE Community Acceptance Testing (JCAT) initiative. The purpose of this initiative is to test Jakarta EE 9/9.1 implementations using your code and/or applications. Although Jakarta EE is extensively tested by the TCK, container-specific tests, and QA, the purpose of JCAT is for developers to test the implementations.

Jakarta EE 9/9.1 did not introduce any new features. In Jakarta EE 9 the APIs changed from javax to jakarta. Jakarta EE 9.1 raised the supported floor to Java 11 for compatible implementations. So what are we testing?

  • Testing individual spec implementations standalone with the new namespace. 
  • Deploying existing Java EE/Jakarta EE applications to EE 9/9.1.
  • Converting Java EE/Jakarta EE applications to the new namespace.
  • Running applications on Java 11 (Jakarta EE 9.1)

Participating in this initiative is easy:

  1. Download a Jakarta EE implementation:
    1. Java 8 / Jakarta EE 9 Containers
    2. Java 11+ / Jakarta EE 9.1 Containers
  2. Deploy code:
    1. Port or run your existing Jakarta EE application
    2. Test out a feature using a starter template

To join this initiative, please take a moment to fill out the form:

 Sign-up Form 

To submit results or feedback on your experiences with Jakarta EE 9/9.1:

  Jakarta EE 9 / 9.1 Feedback Form


Start Date: July 28, 2021

End Date: December 31st, 2021

by javaeeguardian at July 28, 2021 05:41 AM

Jakarta EE 9.1 Accelerates Open Source Enterprise Java

by Mike Milinkovich at May 26, 2021 11:05 AM

Just a little more than five months ago, I was sharing news about the Jakarta EE 9 platform release. Today, I’m very pleased to tell you that the Jakarta EE Working Group has released the Jakarta EE 9.1 Platform and Web Profile specifications and related Technology Compatibility Kits (TCKs). Congratulations and thanks to everyone in the Jakarta EE community who made this release possible.

The accelerated innovation we’re seeing in Jakarta EE, and the growing number of compatible implementations, are clear signs that enterprise Java is experiencing a renaissance.

Enterprises Have New Agility to Develop and Evolve Java Applications

Jakarta EE 9 opened the door to the next era of innovation using cloud native technologies for Java by delivering the “big bang” namespace change to jakarta.*. 

Jakarta EE 9.1 takes that rejuvenation to the next level. The release includes a number of updates and new options, and is compatible with Java SE 11, which is seeing increasing adoption. The 2020 Jakarta EE Developer Survey revealed that 28 percent of respondents were using Java SE 11, compared to 20 percent of respondents in 2019.

Together, the advances in Jakarta EE 9.1 give enterprises the flexibility to make more choices, and to mix and match technologies as needed to meet their unique application development and migration requirements. With Jakarta EE 9.1, enterprises can:

  • Develop and deploy Jakarta EE 9.1 applications on Java SE 11, the most current LTS release of Java SE, as well as Java SE 8
  • Leverage Java SE 11 features that have been added since Java SE 8 in their Jakarta EE 9.1 applications 
  • Take advantage of new technologies that support Java SE 11 in their Jakarta EE 9.1 applications
  • Move existing Jakarta EE 9 applications to Java SE 11 without changes
  • Migrate existing Java EE and Jakarta EE 8 applications to Jakarta EE 9.1 using the same straightforward process available for migration to Jakarta EE 9

With a variety of paths to choose from, every enterprise can develop and migrate Java applications in a way that aligns with their technical objectives and business goals.

There Are Already Five Jakarta EE 9.1-Compatible Applications

As we announce Jakarta EE 9.1, five products from global leaders in the Java ecosystem have already been certified as compatible with the release:

  • IBM’s Open Liberty
  • Eclipse Glassfish
  • Apache TomEE
  • Red Hat’s Wildfly
  • ManageCat’s ManageFish

These implementations are proof positive the Java ecosystem recognizes the value Jakarta EE brings to their business and the technologies they develop.

The rapid technology adoption we’re seeing with Jakarta EE is thanks to the openness of the Jakarta EE Specification Process. This simplified process dramatically lowers the barrier to entry, making it much easier for organizations of all sizes to have their products certified as a compatible implementation and leverage the Jakarta EE brand for their own business success.

The number of compatible implementations across Jakarta EE releases is growing all the time, so be sure to check the Jakarta EE compatible products webpage for the latest list. To be listed as a Jakarta EE-compatible product, follow the instructions here.

Learn More About Jakarta EE 9.1 and Get Involved

To learn more about the Jakarta EE 9.1 release contents, read the Jakarta EE 9.1 release plan and check out the specifications.

As the focus shifts to Jakarta EE 10, the Jakarta EE Working Group and community welcome all organizations and individuals who want to participate. To learn more and get involved in the conversation, explore the benefits of membership in the Jakarta EE Working Group and connect with the community.

by Mike Milinkovich at May 26, 2021 11:05 AM

Jakarta EE Ambassadors Joint Position on Jakarta EE and MicroProfile Alignment

by javaeeguardian at May 11, 2021 03:32 AM

The Jakarta EE Ambassadors are encouraged by the continued progress and relevance of both Jakarta EE and MicroProfile. We believe a clear, collaborative, and complementary relationship between Jakarta EE and MicroProfile is very important for the Java ecosystem. 

Unfortunately the relationship has been unclear to many in the Java community, sometimes appearing to be disconnected, overlapping and competitive. MicroProfile builds on top of some Jakarta EE specifications and many Jakarta EE applications now use MicroProfile APIs. For the success of both, it is imperative that the technology sets clarify alignment to ensure continuity and predictability. 

The Cloud Native for Java (CN4J) Alliance was recently formed to address these concerns. The alliance is composed of members from both the Jakarta EE and MicroProfile working groups. The Jakarta EE Ambassadors view this as a positive step.  This joint position statement and the additional slide deck linked below summarize what the Jakarta EE Ambassadors would like to see from CN4J as well as the alignment between Jakarta EE and MicroProfile. 

We see Jakarta EE and MicroProfile fulfilling distinctly important roles. Jakarta EE will continue to be the stable core for a very broad ecosystem. MicroProfile will continue to strongly focus on microservices, velocity, and innovation. Our perspective on each is as follows:

Jakarta EE

  • One major release per year
  • Targets monolithic applications, microservices and standalone (Java SE/command line) applications – both on premises and on the cloud
  • Maintains a stronger commitment to backwards compatibility
  • Enables specifications to be used independently
  • Enables the ecosystem to build on Jakarta EE technologies, including MicroProfile and Spring


MicroProfile

  • Multiple releases per year
  • Targets microservices and cloud native applications
  • Strongly focuses on innovation and velocity including domains such as OpenTelemetry, gRPC, and GraphQL
  • Depends on core technologies from Jakarta EE
  • Less stringent requirements on backwards compatibility

It does appear the majority of our community would like to see eventual convergence between the technology sets. It is nonetheless understood this may not be practical in the short term or without its drawbacks. It is also clear that some very mature MicroProfile specifications like Configuration need to be used by Jakarta EE. We believe the best way to meet this need is to move these specifications from MicroProfile to Jakarta EE. The specifications being moved should adopt the jakarta.* namespace. The transition should be a collaborative effort. This is in accordance with what we believe developers have said they want, including through multiple surveys over time.

Jakarta EE also needs to make some significant changes to better serve the needs of MicroProfile and the Java ecosystem.  One key aspect of this is enabling specifications to be used independently, including standalone TCKs. Another key aspect is focusing on Jakarta EE Profiles that make the most sense today:

Core Profile

  • Core Jakarta EE specifications needed by MicroProfile
  • Some specifications moved from MicroProfile such as Configuration
  • CDI Lite

Full Profile

  • All Jakarta EE specifications
  • Deprecate/make optional older technologies

Jakarta EE and MicroProfile are both critical to the continued success of Java. We are committed to working with all key stakeholders towards a strong alignment between these technology sets. We invite all developers to join us in ensuring a bright future for both Jakarta EE and MicroProfile.

Additional Material

by javaeeguardian at May 11, 2021 03:32 AM

An Overview Between Java 8 and Java 11

by otaviojava at May 10, 2021 07:33 PM

This tutorial covers the basics of Java 8 and Java 11; it is a start to prepare you for the next LTS: Java 17.

by otaviojava at May 10, 2021 07:33 PM

Your Voice Matters: Take the Jakarta EE Developer Survey

by dmitrykornilov at April 17, 2021 11:36 AM

The Jakarta EE Developer Survey is in its fourth year and is the industry’s largest open source developer survey. It’s open until April 30, 2021. I am encouraging you to add your voice. Why should you do it? Because the Jakarta EE Working Group needs your feedback. We need to know the challenges you are facing and the suggestions you have about how to make Jakarta EE better.

Last year’s edition surveyed developers to gain on-the-ground understanding and insights into how Jakarta solutions are being built, as well as identifying developers’ top choices for architectures, technologies, and tools. The 2021 Jakarta EE Developer Survey is your chance to influence the direction of the Jakarta EE Working Group’s approach to cloud native enterprise Java.

The results from the 2021 survey will give software vendors, service providers, enterprises, and individual developers in the Jakarta ecosystem updated information about Jakarta solutions and service development trends and what they mean for their strategies and businesses. Additionally, the survey results also help the Jakarta community at the Eclipse Foundation better understand the top industry focus areas and priorities for future project releases.

A full report based on the survey results will be made available to all participants.

The survey takes less than 10 minutes to complete. We look forward to your input. Take the survey now!

by dmitrykornilov at April 17, 2021 11:36 AM

Less is More? Evolving the Servlet API!

by gregw at April 13, 2021 06:19 AM

With the release of the Servlet API 5.0 as part of Eclipse Jakarta EE 9.0 the standardization process has completed its move from the now-defunct Java Community Process (JCP) to being fully open source at the Eclipse Foundation, including the

by gregw at April 13, 2021 06:19 AM

Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException

April 02, 2021 09:00 PM

WildFly provides great out-of-the-box load balancing support via the Undertow and modcluster subsystems.
Unfortunately, when HTTP header sizes grow large enough (close to 16K), which is quite common in the JWT era, an unpleasant error occurs:

ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.nio.BufferOverflowException
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(
 at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(
 at io.undertow.client.ajp.AjpClientConnection.initiateRequest(
 at io.undertow.client.ajp.AjpClientConnection.sendRequest(
 at io.undertow.server.handlers.proxy.ProxyHandler$
 at io.undertow.util.SameThreadExecutor.execute(
 at io.undertow.server.HttpServerExchange.dispatch(
Caused by: java.nio.BufferOverflowException
 at java.nio.Buffer.nextPutIndex(
 at java.nio.DirectByteBuffer.put(
 at io.undertow.protocols.ajp.AjpUtils.putString(
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(
 at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(

The same request sent directly to the backend server works well. I tried to play with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.

A possible solution here is to switch the protocol from AJP to HTTP, which can be a bit less efficient but works well with big headers:

/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)

April 02, 2021 09:00 PM

Jakarta EE Tutorial Russian translation

by Vladimir Bychkov at March 13, 2021 07:22 PM

Translation of the Jakarta EE Tutorial into Russian

by Vladimir Bychkov at March 13, 2021 07:22 PM

Oracle Joins MicroProfile Working Group

by dmitrykornilov at January 08, 2021 06:02 PM

I am very pleased to announce that, as of the beginning of 2021, Oracle is officially a part of the MicroProfile Working Group.

At Oracle we believe in standards and supporting them in our products. Standards are born in blood, toil, tears, and sweat. Standards are the result of collaboration between experts, vendors, customers and users. Standards bring the advantage of portability between different implementations, which makes standards-based solutions vendor-neutral.

We created Java EE which was the first enterprise Java standard. We opened it and moved it to the Eclipse Foundation to make its development truly open source and vendor neutral. Now we are joining MicroProfile which in the last few years has become a leading standard for cloud-native solutions.

We’ve been supporting MicroProfile for years before officially joining the Working Group. We created project Helidon which has supported MicroProfile APIs since MicroProfile version 1.1. Contributing to the evolution and supporting new versions of MicroProfile is one of our strategic goals.

I like the community driven and enjoyable approach of creating cloud-native APIs invented by MicroProfile. I believe that our collaboration will be effective and together we will push MicroProfile forward to a higher level.

by dmitrykornilov at January 08, 2021 06:02 PM

A fishing day with Jakarta EE and MicroProfile

by Edwin Derks at December 24, 2020 09:23 AM

Over the years, several implementations of the Jakarta EE and MicroProfile platforms have been developed by vendors. Some implementations are fully compatible with these platforms, others support a subset of specifications from the platforms or build on top of them. Implementations are often developed as open-source projects and shipped by a vendor as a product for their customers. One of the things I have noticed over the years is that these projects are often named after animals. More specifically, there are currently three Jakarta EE / MicroProfile runtimes available that refer to… fish. The members of the trio in question are:

  • Eclipse GlassFish
  • Payara
  • Piranha

Since they share a common aspect in their product names, does that mean they have something else in common, or is this just a coincidence? Let’s go over the high-level purpose and product definitions to find out.

Eclipse GlassFish

If we look at the Jakarta EE compatible products page, Eclipse GlassFish shows up as both a Jakarta EE 8 and a Jakarta EE 9 compatible application server. Looking at the history of Eclipse GlassFish, this is no surprise. Until the project was moved from Oracle to the Eclipse Foundation in 2017, it was the reference implementation application server for Java EE. After moving Java EE to the Eclipse Foundation and rebranding the platform as Jakarta EE, the official concept of a reference implementation was dropped, although, technically speaking, Eclipse GlassFish remains the "unofficial" reference implementation of new versions of Jakarta EE. This means that for future versions of Jakarta EE, Eclipse GlassFish can be used to test-drive updates to, or implementations of, new specifications that are going to be supported by Jakarta EE. In addition, hypothetically speaking, if no other vendors were around to implement Jakarta EE, the Eclipse Foundation would still have its own implementation of Jakarta EE under its own roof. This is important because, without any implementations, the Jakarta EE platform is just a set of specifications that can be used to build enterprise applications, but not to run them.

As a developer, you can easily download Eclipse GlassFish and use this application server to start a project and build enterprise applications. However, there are two noteworthy things you should know:

  • This application server only implements the Jakarta EE platform. It lacks the cloud-native capabilities that the MicroProfile specifications add when compared to other application servers that implement both Jakarta EE and MicroProfile;
  • There is currently no commercial support available for Eclipse GlassFish. If you want to use this application server for your projects in production, that is perfectly fine. However, without such a support contract, in case you run into problems and are in need of a patch or fix, you are at the mercy of the community. You can file an issue in the open-source project or provide a patch there yourself in order to eventually release a new version of Eclipse GlassFish containing the fix.


Payara

Simply put, Payara is a commercially supported product that builds on Eclipse GlassFish, adding its own commercial features on top. When we look at the Jakarta EE compatible products page, Payara shows up as a Jakarta EE 8 compatible application server. However, since Jakarta EE 9 was released this month, and with a compatible Eclipse GlassFish application server around the corner, we can expect a Jakarta EE 9 compatible version of Payara shortly. Over the years, Payara has built an ever-growing set of commercial features into its product. These features often aim at cloud-native development, which makes Payara a good fit for running instances in microservices architectures deployed on cloud environments. In addition, the company aims to support the latest LTS releases of Java, even providing support for various JVMs that you can use to run Payara. Speaking of running Payara, you also have the option of using the full-blown application server or a slimmed-down runtime in the form of Payara Micro. If you are a fan of Uber/Fat JARs, you even have the option of generating such artifacts with Payara Micro. In short, as a developer, you can use Payara for building and deploying enterprise applications in modern, cloud-native environments using some of your favorite flavors for packaging and running your applications. A few things noteworthy to mention for Payara:

  • Payara provides support for migration from another application server to Payara in case you are interested in such a migration;
  • Payara supports both Jakarta EE and MicroProfile in order to make it a fit for running in cloud-native environments;
  • Payara provides several features for optimizing resource consumption of running Payara instances.


Piranha

Although this product references a fish, it is a new kid on the block and doesn’t build on any particular existing Jakarta EE or MicroProfile codebase. Piranha is not compatible with Jakarta EE or MicroProfile (yet) but supports a large part of their specifications in enterprise applications that you can build and run on Piranha. Like some other newer runtimes on the market that support Jakarta EE and/or MicroProfile specifications, it uses best-of-breed implementations or provides its own. That said, what are Piranha’s goals? The product definition states that you can use Piranha to build Jakarta EE and MicroProfile based applications (among other frameworks or libraries), aiming for the smallest possible runtime to run them. Ship less, consume less, spend less seems to be the goal, which makes sense in cloud-native environments where resources cost money and spending less can be beneficial. When you are interested in using Piranha as a developer, you should know these things:

  • Piranha is brand new and, as far as I know, doesn’t provide commercial support yet. However, if you are in the situation of building a non-mission-critical application from the ground up with cost efficiency in mind, starting off with Piranha should not hurt. With your feedback, you can help shape and mature the product, which can benefit you in the long run;
  • Piranha supports or integrates with other frameworks and libraries that might be a good fit for your project. This even includes GUI’s and testing, so be sure to check these out!


Next to these “fishy” runtimes from the Jakarta EE and MicroProfile ecosystems, there are of course several other runtimes available that you can check and try out in order to see if these are a fit for your project. I’m curious if there will be any future implementations referring to a fish, and what the idea or vision behind the name would be. How would you name your “fishy” runtime? Please reach out to me on my Twitter when you have an idea, and who knows we can start a trend or project that makes it happen.

by Edwin Derks at December 24, 2020 09:23 AM

Very Merry Christmas with Jersey 2.33

by Jan at December 24, 2020 12:09 AM

Jersey 2.33 is out! As usual, right before Christmas we put together as many fixes and new features as possible to deliver a new Jersey for you, the Jersey customers. During the work on Jersey 2.33, we already delivered Jersey 3.0.0, … Continue reading

by Jan at December 24, 2020 12:09 AM

An introduction to MicroProfile GraphQL

by Jean-François James at November 14, 2020 05:05 PM

If you’re interested in MicroProfile and APIs, please check out my presentation Boost your APIs with GraphQL. I did it at EclipseCon 2020; thanks to the organizers for the invitation! The slide deck is on Slideshare. I’ve tried to stay high-level and explain how GraphQL differs from REST and how easy it is to implement […]

by Jean-François James at November 14, 2020 05:05 PM

General considerations on updating Enterprise Java projects from Java 8 to Java 11

September 23, 2020 12:00 AM


The purpose of this article is to consolidate all the difficulties and solutions that I've encountered while updating Java EE projects from Java 8 to Java 11 (and beyond). It's a known fact that Java 11 brings a lot of new characteristics that are revolutionizing how Java is used to create applications, despite being problematic under certain conditions.

This article is focused on Java/Jakarta EE but it could be used as basis for other enterprise Java frameworks and libraries migrations.

Is it possible to update Java EE/MicroProfile projects from Java 8 to Java 11?

Yes, absolutely. My team has been able to migrate at least two mature enterprise applications, each with more than three years in development:

A Management Information System (MIS)

Nabenik MIS

  • Time for migration: 1 week
  • Modules: 9 EJB, 1 WAR, 1 EAR
  • Classes: 671 and counting
  • Code lines: 39480
  • Project's beginning: 2014
  • Original platform: Java 7, Wildfly 8, Java EE 7
  • Current platform: Java 11, Wildfly 17, Jakarta EE 8, MicroProfile 3.0
  • Web client: Angular

Mobile POS and Geo-fence

Medmigo REP

  • Time for migration: 3 weeks
  • Modules: 5 WAR/MicroServices
  • Classes: 348 and counting
  • Code lines: 17160
  • Project's beginning: 2017
  • Original platform: Java 8, Glassfish 4, Java EE 7
  • Current platform: Java 11, Payara (Micro) 5, Jakarta EE 8, MicroProfile 3.2
  • Web client: Angular

Why should I ever consider migrating to Java 11?

As with everything in IT, the answer is "It depends…". However, there are a couple of good reasons to do it:

  1. Reduce attack surface by updating project dependencies proactively
  2. Reduce technical debt and most importantly, prepare your project for the new and dynamic Java world
  3. Take advantage of performance improvements on new JVM versions
  4. Take advantage from improvements of Java as programming language
  5. Sleep better by having a more secure, efficient and quality product

Why Java updates from Java 8 to Java 11 are considered difficult?

From my experience with many teams, because of this:

Changes in Java release cadence

Java Release Cadence

Currently, there are two big branches in JVMs release model:

  • Java LTS: a release line with a fixed lifetime (3 years) for long-term support, with Java 11 being the latest one
  • Java current: a fast-paced Java version available every 6 months on a predictable calendar, with Java 15 being the latest (at least at the time of publishing this article)

The rationale behind this decision is that Java needed more dynamism in delivering new characteristics to the language, APIs and JVM, with which I really agree.

Nevertheless, it is a known fact that most enterprise frameworks seek and use Java for stability. Consequently, most of these frameworks target Java 11 as the "certified" Java Virtual Machine for deployments.

Usage of internal APIs

Java 9

Errata: I fixed and simplified this section following an interesting discussion on reddit :)

Java 9 introduced changes in internal classes that weren't meant for usage outside the JVM, preventing/breaking the functionality of popular libraries -e.g. Hibernate, ASM, Hazelcast- that made use of these internals to gain performance.

Hence, to enforce this, internal APIs in JDK 9 are inaccessible at compile time (but accessible with --add-exports). They remain accessible at run time if they were accessible in JDK 8, but in a future release they will become inaccessible. In the long run, this change will reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these internal APIs.

Finally, with the introduction of JEP 260, internal APIs were classified as critical and non-critical. Consequently, critical internal APIs for which replacements were introduced in JDK 9 are deprecated in JDK 9 and will be either encapsulated or removed in a future release.

However, you are inside the danger zone if:

  1. Your project compiles against dependencies pre-Java 9 depending on critical internals
  2. You bundle dependencies pre-Java 9 depending on critical internals
  3. You run your applications over a runtime -e.g. Application Servers- that include pre Java 9 transitive dependencies

Any of these situations means that your application has a probability of not being compatible with JVMs above Java 8. At least not without updating your dependencies, which also could uncover breaking changes in library APIs creating mandatory refactors.

Removal of CORBA and Java EE modules from OpenJDK


Also during the Java 9 release, many Java EE and CORBA modules were marked as deprecated, and they were effectively removed in Java 11, specifically:

  • java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata)
  • java.xml.bind (JAXB)
  • java.activation (JAF)
  • java.xml.ws.annotation (Common Annotations)
  • java.corba (CORBA)
  • java.transaction (JTA)
  • java.se.ee (Aggregator module for the six modules above)
  • jdk.xml.ws (Tools for JAX-WS)
  • jdk.xml.bind (Tools for JAXB)

As JEP 320 states, many of these modules were included in Java 6 as a convenience to generate/support SOAP web services. But these modules eventually took off as independent projects that are already available at Maven Central. Therefore, it is necessary to include them as dependencies if our project implements services with JAX-WS and/or depends on any library/utility that previously relied on them.

IDEs and application servers


In the same way as libraries, Java IDEs had to catch up with the introduction of Java 9 on at least three levels:

  1. IDEs as Java programs should be compatible with Java Modules
  2. IDEs should support new Java versions as programming language -i.e. Incremental compilation, linting, text analysis, modules-
  3. IDEs are also basis for an ecosystem of plugins that are developed independently. Hence if plugins have any transitive dependency with issues over JPMS, these also have to be updated

Overall, none of the Java IDEs guaranteed that plugins would work on JVMs above Java 8. Therefore, you could possibly run your IDE on Java 11, but a legacy/deprecated plugin could prevent you from running your application.

How do I update?

Note that Java 9 launched three years ago, hence the situations previously described are mostly covered by now. However, you should perform the following verifications and actions to prevent failures in the process:

  1. Verify server compatibility
  2. Verify if you need a specific JVM due support contracts and conditions
  3. Configure your development environment to support multiple JVMs during the migration process
  4. Verify your IDE compatibility and update
  5. Update Maven and Maven projects
  6. Update dependencies
  7. Include Java/Jakarta EE dependencies
  8. Execute multiple JVMs in production

Verify server compatibility


Mike Loukides from O'Reilly affirms that there are two types of programmers. On one hand, we have the low-level programmers that create tools such as libraries or frameworks; on the other hand, we have developers that use these tools to create experiences, products and services.

Enterprise Java is mostly on the second side, the "productive world" resting on giants' shoulders. That's why you should first check whether your runtime or framework already has a version compatible with Java 11, and also whether you have the time/decision power to proceed with an update. If not, any further action from this point is useless.

The good news is that most of the popular servers in enterprise Java world are already compatible, like:

If you happen to depend on incompatible runtimes, this is where the road ends unless you support the maintainer in updating it.

Verify if you need a specific JVM


On the non-technical side, support contract conditions could oblige you to use a specific JVM version.

OpenJDK by itself is an open source project receiving contributions from many companies (Oracle being the most active contributor), but nothing prevents any other company from compiling, packaging and TCK-testing its own JVM distribution, as demonstrated by Amazon Corretto, Azul Zulu, Liberica JDK, etc.

In short, there is software that technically could run on any JVM distribution and version, but the support contract will ask you for a particular one. For instance:

Configure your development environment to support multiple JDKs

Since the jump from Java 8 to Java 11 is mostly an experimentation process, it is a good idea to install multiple JVMs on the development computer, with SDKMAN and jEnv being the common options:



SDKMAN is available for Unix-like environments (Linux, macOS, Cygwin, BSD) and, as the name suggests, acts as a package manager for Java tools.

It helps to install and manage JVM ecosystem tools -e.g. Maven, Gradle, Leiningen- and also multiple JDK installations from different providers.



Also available for Unix-like environments (Linux, macOS, Cygwin, BSD), jEnv is basically a script to manage and switch between multiple JVM installations per system, user and shell.

If you happen to install JDKs from different sources -e.g. Homebrew, a Linux repo, Oracle Technology Network- it is a good choice.

Finally, if you use Windows, the common alternative is to automate the switch using .bat files; however, I would appreciate any other suggestions since I don't use Windows that often.

Verify your IDE compatibility and update

Please remember that any IDE ecosystem is composed of three levels:

  1. The IDE acting as platform
  2. Programming language support
  3. Plugins to support tools and libraries

After updating your IDE, you should also verify that all of the plugins that are part of your development cycle work fine under Java 11.

Update Maven and Maven projects


Probably the most common build tool choice in enterprise Java is Maven, and many IDEs use it under the hood or explicitly. Hence, you should update it.

Besides the installation itself, please remember that Maven has a modular architecture and the versions of Maven modules can be forced in any project definition. So, as a rule of thumb, you should also update these modules in your projects to the latest stable version.

To verify this quickly, you could use versions-maven-plugin:
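A minimal plugin declaration could look like this (the version number is only illustrative; check Maven Central for the latest release):

```xml
<build>
  <plugins>
    <!-- Reports available updates for plugins and dependencies -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.8.1</version>
    </plugin>
  </plugins>
</build>
```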


It includes a specific goal to verify Maven plugin versions:

mvn versions:display-plugin-updates


After that, you also need to configure the Java source and target compatibility; generally this is achieved in two places.

As properties:
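For instance, a sketch using the standard maven.compiler.* properties:

```xml
<properties>
  <!-- Compile sources as Java 11 and emit Java 11 bytecode -->
  <maven.compiler.source>11</maven.compiler.source>
  <maven.compiler.target>11</maven.compiler.target>
</properties>
```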


As configuration on Maven plugins, especially in maven-compiler-plugin:
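A sketch of the plugin-based configuration (plugin version illustrative; the release option additionally validates API usage against the target Java version):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.8.1</version>
  <configuration>
    <!-- Equivalent to -source/-target, but also checks the Java 11 API -->
    <release>11</release>
  </configuration>
</plugin>
```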


Finally, some plugins need to "break" the barriers imposed by Java Modules, and the Java platform team knows about it. Hence the JVM has an argument called --illegal-access to allow this, at least as of Java 11.

This could be a good idea for plugins like surefire and failsafe, which also invoke runtimes that depend on this flag (like Arquillian tests):
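As a sketch (plugin version illustrative), the flag can be passed to the forked test JVM through surefire's argLine:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <!-- Permit illegal reflective access for test runtimes on Java 9-11 -->
    <argLine>--illegal-access=permit</argLine>
  </configuration>
</plugin>
```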


Update project dependencies

As mentioned before, you need to check for compatible versions of your Java dependencies. Sometimes these libraries introduce breaking changes with each major version -e.g. Flyway- and you should set aside time to refactor for these changes.

Again, if you use Maven, versions-maven-plugin has a goal to verify dependency versions. The plugin will inform you about available updates:

mvn versions:display-dependency-updates


In the particular case of Java EE, you already have an advantage. If you depend only on APIs -e.g. Java EE, MicroProfile- and not on particular implementations, many of these issues are already solved for you.

Include Java/Jakarta EE dependencies


Probably modern REST-based services won't need this; however, in projects with heavy usage of SOAP and XML marshalling it is mandatory to include the Java EE modules removed in Java 11. Otherwise your project won't compile and run.

You must include as dependencies:

  • API definition
  • Reference Implementation (if needed)

At this point it is also a good idea to evaluate whether you could move to Jakarta EE, the evolution of Java EE under the Eclipse Foundation.

Jakarta EE 8 is practically Java EE 8 under another name, retaining package and feature compatibility; most application servers are in the process of obtaining, or already have, Jakarta EE certified implementations:

We could swap the Java EE API:
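For reference, the Java EE 8 umbrella API dependency looks roughly like this (version illustrative):

```xml
<dependency>
  <groupId>javax</groupId>
  <artifactId>javaee-api</artifactId>
  <version>8.0.1</version>
  <scope>provided</scope>
</dependency>
```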


For Jakarta EE API:
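A sketch of the Jakarta EE 8 umbrella API dependency (version illustrative):

```xml
<dependency>
  <groupId>jakarta.platform</groupId>
  <artifactId>jakarta.jakartaee-api</artifactId>
  <version>8.0.0</version>
  <scope>provided</scope>
</dependency>
```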


After that, please include any of these dependencies (if needed):

Java Beans Activation

Java EE


Jakarta EE


JAXB (Java XML Binding)

Java EE


Jakarta EE
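A sketch of the Jakarta EE generation of the JAXB coordinates, pairing the API with a runtime (versions illustrative):

```xml
<dependency>
  <groupId>jakarta.xml.bind</groupId>
  <artifactId>jakarta.xml.bind-api</artifactId>
  <version>2.3.3</version>
</dependency>
<!-- A JAXB implementation is needed at run time -->
<dependency>
  <groupId>org.glassfish.jaxb</groupId>
  <artifactId>jaxb-runtime</artifactId>
  <version>2.3.3</version>
</dependency>
```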





JAX-WS (Java Web Services)

Java EE


Jakarta EE


Implementation (runtime)


Implementation (standalone)


Java Annotation

Java EE


Jakarta EE


Java Transaction

Java EE


Jakarta EE
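A sketch of the Jakarta EE coordinates for JTA (version illustrative):

```xml
<dependency>
  <groupId>jakarta.transaction</groupId>
  <artifactId>jakarta.transaction-api</artifactId>
  <version>1.3.3</version>
</dependency>
```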



In the particular case of CORBA, I'm aware of its adoption. There is an independent project at Eclipse to support CORBA, based on GlassFish CORBA, but this should be investigated further.

Multiple JVMs in production

If everything compiles, tests and executes, you did a successful migration.

Some deployments/environments run multiple application servers on the same Linux installation. If this is your case, it is a good idea to install multiple JVMs to allow stepped migrations instead of a big bang.

For instance, RHEL based distributions like CentOS, Oracle Linux or Fedora include various JVM versions:


Most importantly, if you install JVMs directly from RPMs outside the distribution's repositories (like Oracle HotSpot), Java alternatives will still give you support:


However, on modern deployments it would probably be better to use Docker, especially on Windows, which otherwise needs .bat scripts to automate this task. Most of the JVM distributions are also available on Docker Hub:


September 23, 2020 12:00 AM

Setting Up a Jakarta EE Development Environment with SDKMAN, Eclipse IDE and TomEE MicroProfile

July 29, 2020 12:00 AM

What’s up, folks?! In this post, I want to show you how to set up a Jakarta EE development environment on a clean Linux (Ubuntu) installation. We will set up Java and Maven via a version manager tool called SDKMAN, plus the Eclipse IDE and the TomEE application server. SDKMAN First of all, we need to download the Java Development Kit (JDK). Because Java and the Java Virtual Machine (JVM) are specifications, we have several implementations of them, like Amazon Corretto, OpenJDK, OracleJDK and many others; for this tutorial, we will use AdoptOpenJDK.

July 29, 2020 12:00 AM

Secure your JAX-RS APIs with MicroProfile JWT

by Jean-François James at July 13, 2020 03:55 PM

In this article, I want to illustrate in a practical way how to secure your JAX-RS APIs with MicroProfile JWT (JSON Web Token). It is illustrated by a GitHub project using Quarkus, Wildfly, Open Liberty and JWTenizr. A basic knowledge of MP JWT is needed and, if you don’t feel comfortable with that, I invite […]

by Jean-François James at July 13, 2020 03:55 PM

Jakarta EE: Multitenancy with JPA on WildFly, Part 1

by Rhuan Henrique Rocha at July 12, 2020 10:49 PM

In this two-part series, I demonstrate two approaches to multitenancy with the Jakarta Persistence API (JPA) running on WildFly. In the first half of this series, you will learn how to implement multitenancy using a database. In the second half, I will introduce you to multitenancy using a schema. I based both examples on JPA and Hibernate.

Because I have focused on implementation examples, I won’t go deeply into the details of multitenancy, though I will start with a brief overview. Note, too, that I assume you are familiar with Java persistence using JPA and Hibernate.

Multitenancy architecture

Multitenancy is an architecture that permits a single application to serve multiple tenants, also known as clients. Although tenants in a multitenancy architecture access the same application, they are securely isolated from each other. Furthermore, each tenant only has access to its own resources. Multitenancy is a common architectural approach for software-as-a-service (SaaS) and cloud computing applications. In general, clients (or tenants) accessing a SaaS are accessing the same application, but each one is isolated from the others and has its own resources.

A multitenant architecture must isolate the data available to each tenant. If there is a problem with one tenant’s data set, it won’t impact the other tenants. In a relational database, we use a database or a schema to isolate each tenant’s data. One way to separate data is to give each tenant access to its own database or schema. Another option, which is available if you are using a relational database with JPA and Hibernate, is to partition a single database for multiple tenants. In this article, I focus on the standalone database and schema options. I won’t demonstrate how to set up a partition.

In a server-based application like WildFly, multitenancy is different from the conventional approach. In this case, the server application works directly with the data source, initiating the connection and preparing the database to be used. The client application does not spend time opening the connection, which improves performance. On the other hand, using Enterprise JavaBeans (EJBs) with container-managed transactions can lead to problems. For example, something done by the server-based application could cause an error when committing or rolling back a transaction.

Implementation code

Two interfaces are crucial to implementing multitenancy in JPA and Hibernate:

  • MultiTenantConnectionProvider is responsible for connecting tenants to their respective databases and services. We will use this interface and a tenant identifier to switch between databases for different tenants.
  • CurrentTenantIdentifierResolver is responsible for identifying the tenant. We will use this interface to define what is considered a tenant (more about this later). We will also use this interface to provide the correct tenant identifier to MultiTenantConnectionProvider.

In JPA, we configure these interfaces using the persistence.xml file. In the next sections, I’ll show you how to use these two interfaces to create the first three classes we need for our multitenancy architecture: DatabaseMultiTenantProvider, MultiTenantResolver, and DatabaseTenantResolver.


DatabaseMultiTenantProvider is an implementation of the MultiTenantConnectionProvider interface. This class contains logic to switch to the database that matches the given tenant identifier. In WildFly, this means switching to different data sources. The DatabaseMultiTenantProvider class also implements the ServiceRegistryAwareService, which allows us to inject a service during the configuration phase.

Here’s the code for the DatabaseMultiTenantProvider class:

public class DatabaseMultiTenantProvider implements MultiTenantConnectionProvider, ServiceRegistryAwareService {
    private static final long serialVersionUID = 1L;
    private static final String TENANT_SUPPORTED = "DATABASE";
    private DataSource dataSource;
    private String typeTenancy;

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {

        typeTenancy = (String) ((ConfigurationService) serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.multiTenancy");

        dataSource = (DataSource) ((ConfigurationService) serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.connection.datasource");
    }

    @Override
    public boolean isUnwrappableAs(Class clazz) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> clazz) {
        return null;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        final Connection connection = dataSource.getConnection();
        return connection;
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {

        final Context init;
        //Just use the multi-tenancy if the hibernate.multiTenancy == DATABASE
        if (TENANT_SUPPORTED.equals(typeTenancy)) {
            try {
                init = new InitialContext();
                dataSource = (DataSource) init.lookup("java:/jdbc/" + tenantIdentifier);
            } catch (NamingException e) {
                throw new HibernateException("Error trying to get datasource ['java:/jdbc/" + tenantIdentifier + "']", e);
            }
        }
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        connection.close();
    }
}

As you can see, we call the injectServices method to populate the datasource and typeTenancy attributes. We use the datasource attribute to get a connection from the data source, and we use the typeTenancy attribute to find out if the class supports the multiTenancy type. We call the getConnection method to get a data source connection. This method uses the tenant identifier to locate and switch to the correct data source.


MultiTenantResolver is an abstract class that implements the CurrentTenantIdentifierResolver interface. This class aims to provide a setTenantIdentifier method to all CurrentTenantIdentifierResolver implementations:

public abstract class MultiTenantResolver implements CurrentTenantIdentifierResolver {

    protected String tenantIdentifier;

    public void setTenantIdentifier(String tenantIdentifier) {
        this.tenantIdentifier = tenantIdentifier;
    }
}
This abstract class is simple. We only use it to provide the setTenantIdentifier method.


DatabaseTenantResolver also implements the CurrentTenantIdentifierResolver interface. This class is the concrete class of MultiTenantResolver:

public class DatabaseTenantResolver extends MultiTenantResolver {

    private Map<String, String> regionDatasourceMap;

    public DatabaseTenantResolver() {
        regionDatasourceMap = new HashMap<>();
        regionDatasourceMap.put("default", "MyDataSource");
        regionDatasourceMap.put("america", "AmericaDB");
        regionDatasourceMap.put("europa", "EuropaDB");
        regionDatasourceMap.put("asia", "AsiaDB");
    }

    @Override
    public String resolveCurrentTenantIdentifier() {

        if (this.tenantIdentifier != null
                && regionDatasourceMap.containsKey(this.tenantIdentifier)) {
            return regionDatasourceMap.get(this.tenantIdentifier);
        }
        return regionDatasourceMap.get("default");
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}


Notice that DatabaseTenantResolver uses a Map to define the correct data source for a given tenant. The tenant, in this case, is a region. Note, too, that this example assumes we have the data sources java:/jdbc/MyDataSource, java:/jdbc/AmericaDB, java:/jdbc/EuropaDB, and java:/jdbc/AsiaDB configured in WildFly.
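As a sketch, one of those data sources could be registered through the WildFly CLI roughly as follows (the driver name, connection URL and credentials are illustrative assumptions, not part of the original example):

```
/subsystem=datasources/data-source=AmericaDB:add(jndi-name="java:/jdbc/AmericaDB", driver-name=postgresql, connection-url="jdbc:postgresql://localhost:5432/america", user-name=america, password=secret)
```

The same command would be repeated for MyDataSource, EuropaDB and AsiaDB with their respective URLs.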

Configure and define the tenant

Now we need to use the persistence.xml file to configure the tenant:

    <persistence-unit name="jakartaee8">

        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="none" />
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgresPlusDialect"/>
            <property name="hibernate.multiTenancy" value="DATABASE"/>
            <property name="hibernate.tenant_identifier_resolver" value="net.rhuanrocha.dao.multitenancy.DatabaseTenantResolver"/>
            <property name="hibernate.multi_tenant_connection_provider" value="net.rhuanrocha.dao.multitenancy.DatabaseMultiTenantProvider"/>
        </properties>
    </persistence-unit>


Next, we define the tenant in the EntityManagerFactory:

protected EntityManagerFactory emf;

protected EntityManager getEntityManager(String multitenancyIdentifier) {

    final MultiTenantResolver tenantResolver =
            (MultiTenantResolver) ((SessionFactoryImplementor) emf).getCurrentTenantIdentifierResolver();

    tenantResolver.setTenantIdentifier(multitenancyIdentifier);

    return emf.createEntityManager();
}

Note that we call the setTenantIdentifier before creating a new instance of EntityManager.


I have presented a simple example of multitenancy in a database using JPA with Hibernate and WildFly. There are many ways to use a database for multitenancy. My main point has been to show you how to implement the CurrentTenantIdentifierResolver and MultiTenantConnectionProvider interfaces. I’ve shown you how to use JPA’s persistence.xml file to configure the required classes based on these interfaces.

Keep in mind that for this example, I have assumed that WildFly manages the data source and connection pool and that EJB handles the container-managed transactions. In the second half of this series, I will provide a similar introduction to multitenancy, but using a schema rather than a database. If you want to go deeper with this example, you can find the complete application code and further instructions on my GitHub repository.

by Rhuan Henrique Rocha at July 12, 2020 10:49 PM

Jakarta EE Cookbook

by Elder Moraes at July 06, 2020 07:19 PM

About one month ago I had the pleasure to announce the release of the second edition of my book, now…

by Elder Moraes at July 06, 2020 07:19 PM

SonarQube Just Do It!

May 14, 2020 07:56 AM

Code transparency … Yes please…

Recently I gave a small seminar for my current client about Clean Code and Craftsmanship. The group I was talking to
consisted of developers of all levels from junior to senior.

To my complete consternation, when I started talking about tools like PMD / CheckStyle and SonarQube, I found out that
none of them had ever heard of these tools. Not even the Senior developers.

Well this is bad and needs to be fixed!

This article will give a short explanation about what SonarQube is.
It is also a quick guide on how to start working with it.
This will be done with the use of docker because I want to :-)


  • docker(-compose)
  • maven
  • java

What is SonarQube?

This is what SonarQube has to say about it.

Continuous Inspection
SonarQube provides the capability to not only show health of an application but also to highlight issues newly
introduced. With a Quality Gate in place, you can fix the leak and therefore improve code quality systematically.


SonarQube analyses your code statically. Without knowing the inner workings of your application, it will
“look” at your code and search for code constructions that are known to be fragile or just wrong. After the scan
it generates a report in the form of a browsable website. All issues are explained, and for most of them a
possible solution is suggested as well.


Noncompliant code: “for” loop increment clauses should modify the loops’ counters

for (i = 0; i < 10; j++) {  // Noncompliant
    // ...
}
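A compliant version modifies the loop's own counter in the increment clause; a minimal, runnable sketch:

```java
public class LoopFix {

    static int countIterations() {
        int iterations = 0;
        // Compliant: the increment clause modifies the loop's counter
        for (int i = 0; i < 10; i++) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(countIterations()); // prints 10
    }
}
```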

Lots of these language constructions can be checked statically, and at the time of this writing there are 423 such checks available.

Use it yourself

With docker it is very easy to start your own SonarQube service:

docker run --rm --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube:alpine

The command provided above will erase all history when stopped, but for learning purposes and demos that is fine.
Further below I will give some more permanent solutions for local use and some hints for enterprise use.

Now go to your Maven project and run:

mvn sonar:sonar

If Maven then tells you that the build was successful, you can go look at localhost:9000 and
be amazed at how good or bad your code is.

Take time

If you are willing to take the time to really look into the found issues (yes even the ‘info’ ones) and willing to
go and fix these issues, you are on the way to becoming a better developer.

Positive effect on a team

The beauty of using a tool like Sonar is that it will also keep history when configured correctly and therefore provide
you with a way of monitoring improvement. If you see improvements and also demonstrate this during sprint reviews, you
will start to notice a marked positive effect on your team. Team members will be more proud of the code they are writing
and become better as a group. Reviews will start to take code quality into account.

Quality Gate

When you have reached a level of quality that you are comfortable with, you don't want to lose it. This is the moment you
can introduce a quality gate: a level of quality you define for your project, and if the code does
not meet the requirements set by the Quality Gate, the build will fail. So that is the moment code quality might
fail the build. A very powerful thing, and one that raises the maturity level of the team significantly.

More advanced examples

Local with database

If you are done with the commands provided above and want a more permanent solution for your local projects, but don't
want to create a complete pipeline just for your hobby projects…

We need a database image and the sonar image.

Here is the docker-compose.yml file containing the two images and their connections:

version: "3"

services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://database:5432/sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins

  database:
    image: postgres
    networks:
      - sonarnet
    volumes:
      - postgresql:/var/lib/postgresql
      # This needs explicit mapping due to
      - postgresql_data:/var/lib/postgresql/data

networks:
  sonarnet:
    driver: bridge

volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:


This setup creates volumes for everything that needs to be saved.

Let's start it up.

docker-compose up -d

You can leave off the -d if you want to see what's happening. It will then start in the foreground instead of in 'detached' mode.

Now if you stop the containers it will preserve the state of your database in the volumes defined.

To stop the containers go back to your docker-compose.yml file and do:

docker-compose down

And if you want to lose all your data (volumes) too:

docker-compose down -v


If you use tools like Jenkins to control your pipeline, it is very useful to add SonarQube as part of the pipeline setup.
If you do not have control over the pipeline environment, you should ask the Ops guys to install Sonar for you. If you do have
control over the pipeline, it is a very good idea to make Sonar part of it: it will make code quality part of your
daily life. Jenkins has good integration for tools like Sonar, and configuring it is not the obstacle it might seem.

Just maven

You don't have to do anything special to have Sonar working with Maven. It is one of the default plugins (it is that important, yes!) and always available
for all projects. You just have to tell it where Sonar ‘lives’, which can be done on the command line (see above).
If you want to be able to just run mvn sonar:sonar, you can tell Maven where Sonar lives by adding the following
piece of configuration to your $HOME/.m2/settings.xml file:
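A minimal profile for this could look like the following sketch (the property names follow the standard SonarQube Scanner for Maven setup; the JDBC properties are only needed for a native installation and are therefore commented out):

```
<settings>
  <profiles>
    <profile>
      <id>sonar</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <!-- where the SonarQube server lives -->
        <sonar.host.url>http://localhost:9000</sonar.host.url>
        <!-- only needed for a native (non-docker-compose) installation -->
        <!--
        <sonar.jdbc.url>jdbc:postgresql://localhost:5432/sonar</sonar.jdbc.url>
        <sonar.jdbc.username>sonar</sonar.jdbc.username>
        <sonar.jdbc.password>sonar</sonar.jdbc.password>
        -->
      </properties>
    </profile>
  </profiles>
</settings>
```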


Some lines have been commented out. This is because, in the form provided above (docker-compose.yml), you don't need to tell Maven
where the database is; Sonar already knows. If you choose to install a database and Sonar natively on your machine (without Docker),
you can enable these lines and adjust them to your needed settings.


Not doing these kinds of code checks as a developer is robbing you of a learning experience and the opportunity of an extra
review. The static code checker does not get tired and is never under pressure. It will just check your code and help you,
and your code, become better.

So SonarQube … Just do it!

May 14, 2020 07:56 AM

Workshops: Reactive Apps with Quarkus and OpenShift

by Niklas Heidloff at May 11, 2020 01:19 PM

In the context of cloud-native applications the topic ‘reactive’ becomes more and more important, since more efficient applications can be built and user experiences can be improved. If you want to learn more about reactive functionality in Java applications, read on and try out the sample application and the two new workshops.

Benefits of reactive Applications

In order to demonstrate benefits of reactive applications, I’ve developed a sample application with a web interface that is updated automatically when new data is received rather than pulling for updates. This is more efficient and improves the user experience.

The animation shows how articles can be created via curl commands in the terminal at the bottom. The web application receives a notification and adds the new article to the page.

Another benefit of reactive systems and reactive REST endpoints is efficiency. This scenario describes how to use reactive systems and reactive programming to achieve faster response times. Especially in public clouds where costs depend on CPU, RAM and compute durations this model saves money.

The project contains a sample endpoint which reads data from a database in two different versions, one uses imperative code, the other one reactive code. The reactive stack of this sample provides response times that take less than half of the time compared to the imperative stack: Reactive: 793 ms – Imperative: 1956 ms.


I’ve written two workshops which demonstrate and explain how to build reactive functionality with Quarkus and MicroProfile and how to deploy and run it on OpenShift. You can use Red Hat OpenShift on IBM Cloud or you can run OpenShift locally via Code Ready Containers.

The sample used in the workshops heavily leverages Quarkus, which is “a Kubernetes Native Java stack […] crafted from the best of breed Java libraries and standards”. Additionally, Eclipse MicroProfile, Eclipse Vert.x, Apache Kafka, PostgreSQL, Eclipse OpenJ9 and Kubernetes are used.

Workshop: Reactive Endpoints with Quarkus on OpenShift

This workshop focusses on how to provide reactive REST APIs and how to invoke services reactively. After you have completed this workshop, you’ll understand the following reactive functionality:

  • Reactive REST endpoints via CompletionStage
  • Exception handling in chained reactive invocations
  • Timeouts via CompletableFuture
  • Reactive REST invocations via MicroProfile REST Client
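The first three bullets can be sketched with plain JDK types: a JAX-RS resource method may return a CompletionStage, and CompletableFuture supplies the timeout and fallback handling. The method name and payloads below are illustrative, not taken from the workshop:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;

public class ReactiveSketch {

    // In JAX-RS, a resource method may return CompletionStage<T>;
    // the request is suspended and completed when the stage completes.
    static CompletionStage<String> articles() {
        return CompletableFuture
                .supplyAsync(() -> "[]")                        // simulate asynchronous data access
                .orTimeout(2, TimeUnit.SECONDS)                 // JDK 9+: fail the stage if it takes too long
                .exceptionally(t -> "{\"error\":\"timeout\"}"); // map failures/timeouts to a fallback payload
    }

    public static void main(String[] args) {
        System.out.println(articles().toCompletableFuture().join());
    }
}
```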

Open Workshop

Workshop: Reactive Messaging with Quarkus on OpenShift

This workshop focusses on how to do messaging with Kafka and MicroProfile. After you have completed this workshop, you’ll understand the following reactive functionality:

  • Sending and receiving Kafka messages via MicroProfile
  • Sending events from microservices to web applications via Server Sent Events
  • Sending in-memory messages via MicroProfile and Vert.x Event Bus

Open Workshop

Next Steps

To learn more, check out the other articles of this blog series:

The post Workshops: Reactive Apps with Quarkus and OpenShift appeared first on Niklas Heidloff.

by Niklas Heidloff at May 11, 2020 01:19 PM

Monitoring REST APIs with Custom JDK Flight Recorder Events

January 29, 2020 02:30 PM

The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.

In this blog post we're going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests and more. We'll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
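A custom event of the kind described can be declared with the jdk.jfr API that ships with the JDK; the event name and field below are illustrative, not the article's actual code:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class JfrSketch {

    // A custom event type: annotated fields become event attributes in recordings
    @Name("demo.RestRequest")
    @Label("REST Request")
    static class RestRequestEvent extends Event {
        @Label("Path")
        String path;
    }

    public static void main(String[] args) {
        RestRequestEvent event = new RestRequestEvent();
        event.path = "/api/orders";
        event.begin();  // start the event's clock before handling the request
        // ... handle the request ...
        event.end();    // stop the clock
        event.commit(); // written to the recording only if JFR is enabled
    }
}
```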

January 29, 2020 02:30 PM

Enforcing Java Record Invariants With Bean Validation

January 20, 2020 04:30 PM

Record types are one of the most awaited features in Java 14; they promise to "provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data". One example where records should be beneficial is data transfer objects (DTOs), as e.g. found in the remoting layer of enterprise applications. Typically, certain rules should be applied to the attributes of such a DTO, e.g. in terms of allowed values. The goal of this blog post is to explore how such invariants can be enforced on record types, using annotation-based constraints as provided by the Bean Validation API.

January 20, 2020 04:30 PM

Jakarta EE 8 CRUD API Tutorial using Java 11

by Philip Riecks at January 19, 2020 03:07 PM

As part of the Jakarta EE Quickstart Tutorials on YouTube, I've now created a five-part series to create a Jakarta EE CRUD API. Within the videos, I'm demonstrating how to start using Jakarta EE for your next application. Using the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed with the TDD (Test-Driven Development) technique.

The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5

Part I: Introduction to the application setup

This part covers the following topics:

  • Introduction to the Maven project skeleton
  • Flyway setup for Open Liberty
  • Derby JDBC connection configuration
  • Basic MicroShed Testing setup for TDD

Part II: Developing the endpoint to create entities

This part covers the following topics:

  • First JAX-RS endpoint to create Person entities
  • TDD approach using MicroShed Testing and the Liberty Maven Plugin
  • Store the entities using the EntityManager

Part III: Developing the endpoints to read entities

This part covers the following topics:

  • Develop two JAX-RS endpoints to read entities
  • Read all entities and a single entity by its id
  • Handle non-present entities with a different HTTP status code

Part IV: Developing the endpoint to update entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to update entities
  • Update existing entities using HTTP PUT
  • Validate the client payload using Bean Validation

Part V: Developing the endpoint to delete entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to delete entities
  • Enhance the test setup for deterministic and repeatable integration tests
  • Remove the deleted entity from the database

The source code for the Maven CRUD API application is available on GitHub.

For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.

Have fun developing Jakarta EE CRUD API applications,



The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.

by Philip Riecks at January 19, 2020 03:07 PM

Deploy a Jakarta EE application to the root context

by Philip Riecks at January 07, 2020 06:24 AM

With the presence of Docker, Kubernetes and cheaper hardware, the deployment model of multiple applications inside one application server is a thing of the past. Now, you deploy one Jakarta EE application to one application server. This eliminates the need for different context paths. You can use the root context / for your Jakarta EE application. With this blog post, you'll learn how to achieve this for each Jakarta EE application server.

The default behavior for Jakarta EE application server

Without any further configuration, most of the Jakarta EE application servers deploy the application to a context path based on the filename of your .war. If you e.g. deploy your my-banking-app.war application, the server will use the context prefix /my-banking-app for your application. All your JAX-RS endpoints, Servlets, .jsp and .xhtml content are then available below this context, e.g. /my-banking-app/resources/customers.

This was important in the past, where you deployed multiple applications to one application server. Without the context prefix, the application server wouldn't be able to route the traffic to the correct application.

As of today, the deployment model changed with Docker, Kubernetes and cheaper infrastructure. You usually deploy one .war within one application server running as a Docker container. Given this deployment model, the context prefix is irrelevant. Mapping the application to the root context / is more convenient.

If you configure a reverse proxy or an Ingress controller (in the Kubernetes world), you are happy if you can just route to / instead of remembering the actual context path (error-prone).
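For example, with the application on the root context, a reverse proxy rule stays trivial (an illustrative nginx sketch; the upstream host and port are assumptions):

```
location / {
    proxy_pass http://my-banking-app:8080/;
}
```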

Deploying to root context: Payara & Glassfish

As Payara is a fork of Glassfish, the configuration for both is quite similar. The most convenient way for Glassfish is to place a glassfish-web.xml file in the src/main/webapp/WEB-INF folder of your application:

<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN"
  "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
  <context-root>/</context-root>
</glassfish-web-app>

For Payara the filename is payara-web.xml:

<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "">
<payara-web-app>
  <context-root>/</context-root>
</payara-web-app>

Both also support configuring the context path of the application within their admin console. IMHO this is less convenient than the .xml file solution.

Deploying to root context: Open Liberty

Open Liberty also parses a proprietary deployment descriptor within src/main/webapp/WEB-INF: ibm-web-ext.xml

<web-ext xmlns="http://websphere.ibm.com/xml/ns/javaee" version="1.0">
  <context-root uri="/"/>
</web-ext>

Furthermore, you can also configure the context of your application within your server.xml:


<server>
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

  <webApplication location="app.war" contextRoot="/" name="app"/>
</server>

Deploying to root context: WildFly

WildFly also has two simple ways of configuring the root context for your application. First, you can place a jboss-web.xml within src/main/webapp/WEB-INF:

<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.4//EN" "">
<jboss-web>
  <context-root>/</context-root>
</jboss-web>

Second, while copying your .war file to your Docker container, you can name it ROOT.war:

FROM jboss/wildfly
ADD target/app.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

For more tips & tricks for each application server, have a look at my cheat sheet.

Have fun deploying your Jakarta EE applications to the root context,


The post Deploy a Jakarta EE application to the root context appeared first on rieckpil.

by Philip Riecks at January 07, 2020 06:24 AM

Jakarta EE: Creating an Enterprise JavaBeans Timer

by Rhuan Henrique Rocha at December 17, 2019 03:33 AM

Enterprise JavaBeans (EJB) has many interesting and useful features, some of which I will be highlighting in this and upcoming articles. In this article, I’ll show you how to create an EJB timer programmatically and with annotation. Let’s go!

The EJB timer feature allows us to schedule tasks to be executed according to a calendar configuration. It is very useful because we can execute scheduled tasks using the power of the Jakarta context. When we run tasks based on a timer, we need to answer some questions about concurrency, about which node the task was scheduled on (in the case of an application in a cluster), about what to do if the task does not execute, and others. When we use the EJB timer, we can delegate many of these concerns to the Jakarta context and care more about the business logic. It is interesting, isn't it?

Creating an EJB timer programmatically

We can schedule an EJB timer to run according to business logic using a programmatic approach. This method can be used when we want dynamic behavior, according to the parameter values passed to the process. Let's look at an example of an EJB timer:

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import java.util.logging.Logger;

@Stateless
public class MyTimer {

    private Logger logger = Logger.getLogger(MyTimer.class.getName());

    @Resource
    private SessionContext context;

    public void initTimer(String message) {
        context.getTimerService().createTimer(10000, message);
    }

    @Timeout
    public void execute() {
        logger.info("Starting");

        // Log the info attached to each timer that is still scheduled
        context.getTimerService().getAllTimers().stream()
                .forEach(timer -> logger.info(String.valueOf(timer.getInfo())));
    }
}

To schedule this EJB timer, inject the bean and call the initTimer method:

@Inject
private MyTimer myTimer;
...
myTimer.initTimer("Hello World");

After 10000 milliseconds have passed, the method annotated with @Timeout will be called.

Scheduling an EJB timer using annotation

We can also create an EJB timer that is automatically scheduled to run according to an annotation configuration. Look at this example:

@Stateless
public class MyTimerAutomatic {

    private Logger logger = Logger.getLogger(MyTimerAutomatic.class.getName());

    @Schedule(hour = "*", minute = "*", second = "0,10,20,30,40,50", persistent = false)
    public void execute() {
        logger.info("Automatic timer executing");
    }
}


As you can see, to configure an automatic EJB timer schedule, you can annotate the method using @Schedule and configure the calendar attributes. For example:

@Schedule(hour = "*", minute = "*",second = "0,10,20,30,40,50",persistent = false)

With this configuration, the execute method is called every 10 seconds. You can also configure whether the timer is persistent.


The EJB timer is a good EJB feature that is helpful in solving many problems. Using the EJB timer feature, we can schedule tasks to be executed, thereby delegating some responsibilities to the Jakarta context to solve for us. Furthermore, we can create persistent timers, control concurrent execution, and work with timers in a clustered environment. If you want to see the complete example, visit this repository on GitHub.

This post was originally released on the Red Hat Developer blog.


by Rhuan Henrique Rocha at December 17, 2019 03:33 AM

Modernizing our GitHub Sync Toolset

November 19, 2019 08:10 PM

I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.

We are not expecting any disruption of service but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this 1 hour maintenance window.

This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories and, on top of that, this new release will start synchronizing contributors.

In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.

Eclipse Committers are responsible for maintaining a list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).

To become an Eclipse contributor on GitHub for a project, please make sure to tell us your GitHub username in your Eclipse account.

November 19, 2019 08:10 PM

Building Microservices with Jakarta EE and MicroProfile @ EclipseCon 2019

by Edwin Derks at November 01, 2019 09:02 AM

This year's EclipseCon was my second time visiting, and simultaneously speaking at, this conference. Aside from all the amazing projects that are active within the Eclipse Foundation, this year's edition contained a long anticipated present: the release of Jakarta EE 8. Reason enough for me and two colleagues to provide a workshop where attendees could actually get hands-on with Jakarta EE 8 and its microservices-enabling cousin: Eclipse MicroProfile.

This workshop focusses not only on the various APIs provided by Jakarta EE and MicroProfile, but also on development with the Payara application server and how this all fits into a containerised environment.

The slides of the workshop can be found here:

Of course, we hope to evolve this workshop in order to get hands on with new Jakarta EE and MicroProfile features in the near future. Stay tuned!

by Edwin Derks at November 01, 2019 09:02 AM

Java EE - Jakarta EE Initializr

October 25, 2019 10:07 PM

Getting started with Jakarta EE just became even easier!

Get started


New Archetype with Jakarta EE 8

Jakarta EE 8 + Payara 5.193.1 + MicroProfile 3.1 running on Java 11


A Tool for Jakarta EE Package Renaming in Binaries

by BJ Hargrave at October 17, 2019 09:26 PM

In a previous post, I laid out my thinking on how to approach the package renaming problem which the Jakarta EE community now faces. Regardless of whether the community chooses big bang or incremental, there are still existing artifacts in the world using the Java EE package names that the community will need to use together with the new Jakarta EE package names.

Tools are always important to take the drudgery away from developers. So I have put together a tool prototype which can be used to transform binaries such as individual class files and complete JARs and WARs to rename uses of the Java EE package names to their new Jakarta EE package names.

The tool is rule-driven, which is nice since the Jakarta EE community still needs to define the actual package renames for Jakarta EE 9. The rules also allow users to control which class files in a JAR/WAR are transformed. Different users may want different rules depending upon their specific needs. And the tool can be used for any package-renaming challenge, not just the specific Jakarta EE package renames.
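The post doesn’t show the rule syntax, but conceptually a rename rule set is just a map from old package prefixes to new ones. A hypothetical rule file (the prototype’s actual format may differ) could look like this:

```
# Hypothetical rename rules: old package prefix = new package prefix
javax.servlet=jakarta.servlet
javax.persistence=jakarta.persistence
javax.ws.rs=jakarta.ws.rs
```

Any class file whose constant pool references a package on the left-hand side would be rewritten to use the right-hand side.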

The tool provides an API allowing it to be embedded in a runtime to dynamically transform class files during the class loader definition process. The API also supports transforming JAR files. A CLI is also provided to allow use from the command line. Ultimately, the tool can be packaged as Gradle and Maven plugins to incorporate in a broader tool chain.

Given that the tool is a prototype, and that there is much work to be done in the Jakarta EE community regarding the package renames, I have started a list of TODOs in the project’s issues for known work items.

Please try out the tool and let me know what you think. I am hoping that tooling such as this will ease the community cost of dealing with the package renames in Jakarta EE.

PS. Package renaming in source code is also something the community will need to deal with. But most IDEs are pretty good at this sort of thing, so I think there is probably sufficient tooling in existence for handling the package renames in source code.


Deploying MicroProfile Microservices with Tekton

by Niklas Heidloff at August 08, 2019 02:48 PM

This article describes Tekton, an open-source framework for creating CI/CD systems, and explains how to deploy microservices built with Eclipse MicroProfile on Kubernetes and OpenShift.

What is Tekton?

Kubernetes is the de-facto standard for running cloud-native applications. While Kubernetes is very flexible and powerful, deploying applications is sometimes challenging for developers. That’s why several platforms and tools have evolved that aim to make deployments of applications easier, for example Cloud Foundry’s ‘cf push’ experience, OpenShift’s source to image (S2I), various Maven plugins and different CI/CD systems.

Just as Kubernetes has evolved to become the standard for running containers, and just as Knative is evolving to become the standard for serverless platforms, the goal of Tekton is to become the standard for continuous integration and delivery (CI/CD) platforms.

The biggest companies engaged in this project are, at this point, Google, CloudBees, IBM and Red Hat. Because of its importance, the project has been split off from Knative, which is focused on scale-to-zero capabilities.

Tekton comes with a set of custom resources to define and run pipelines:

  • Pipeline: Pipelines can contain several tasks and can be triggered by events or manually
  • Task: Tasks can contain multiple steps. Typical tasks are 1. source to image and 2. deploy via kubectl
  • PipelineRun: This resource is used to trigger pipelines and to pass parameters like location of Dockerfiles to pipelines
  • PipelineResource: This resource is used, for example, to pass links to GitHub repos

MicroProfile Microservice Implementation

I’ve created a simple microservice which is available as open source as part of the cloud-native-starter repo.

The microservice contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

Setup of the Tekton Pipeline

I’ve created five yaml files that define the pipeline to deploy the sample authors microservice.

1) The file task-source-to-image.yaml defines how to build the image within the Kubernetes cluster and how to push it to a registry.

For building the image, kaniko is used rather than Docker. For application developers this is almost transparent, though. As usual, images are defined via Dockerfiles. The only difference I ran into is how access rights are handled. For some reason I couldn’t write the ‘server.xml’ file into the ‘/config’ directory. To fix this, I had to manually assign access rights in the Dockerfile first: ‘RUN chmod 777 /config/’.
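As an illustration, a Dockerfile with that workaround might look like the following sketch; the base image and file names are assumptions, not taken from the post:

```
# Illustrative only: base image and file names are assumptions
FROM open-liberty:kernel-java11
# assign access rights first so the kaniko build can write into /config
RUN chmod 777 /config/
COPY server.xml /config/
COPY target/authors.war /config/dropins/
```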

The source to image task is the first task in the pipeline and has only one step. The screenshot shows a representation of the task in the Tekton dashboard.
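The post doesn’t inline task-source-to-image.yaml. A minimal sketch of such a kaniko-based build-and-push task could look like this (step name and exact wiring are assumptions; the kaniko executor image and its --dockerfile/--context/--destination flags are standard):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: source-to-image
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToContext
      - name: pathToDockerFile
      - name: imageUrl
      - name: imageTag
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      command: ["/kaniko/executor"]
      args:
        - "--dockerfile=${inputs.params.pathToDockerFile}"
        - "--context=/workspace/git-source/${inputs.params.pathToContext}"
        - "--destination=${inputs.params.imageUrl}:${inputs.params.imageTag}"
```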

2) The file task-deploy-via-kubectl.yaml contains the second task of the pipeline which essentially only runs kubectl commands to deploy the service. Before this can be done, the template yaml file is changed to contain the full image name for the current user and environment.

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-via-kubectl
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToContext
      - name: imageUrl
      - name: imageTag
      - name: pathToDeploymentYamlFile
        description: The path to the yaml file with Deployment resource to deploy within the git source
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;authors:1;${inputs.params.imageUrl}:${inputs.params.imageTag};g"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
    - name: run-kubectl-deployment
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
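Outside the cluster, the update-yaml step boils down to a plain sed substitution. Here is a self-contained illustration (the file content and registry path are made up for the example):

```shell
# Create a throwaway deployment snippet containing the placeholder image name
printf '        image: authors:1\n' > /tmp/deployment-snippet.yaml

# Stand-ins for ${inputs.params.imageUrl} and ${inputs.params.imageTag}
IMAGE_URL=us.icr.io/mynamespace/authors   # illustrative registry path
IMAGE_TAG=1

# The same substitution the task runs inside the alpine container
sed -i -e "s;authors:1;${IMAGE_URL}:${IMAGE_TAG};g" /tmp/deployment-snippet.yaml
cat /tmp/deployment-snippet.yaml   # prints: image: us.icr.io/mynamespace/authors:1
```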

3) The file pipeline.yaml basically only defines the order of the two tasks as well as how to pass parameters between the different tasks.

The screenshot shows the pipeline after it has been run. The output of the third and last step of the second task ‘deploy to cluster’ is displayed.
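The post doesn’t inline pipeline.yaml. Based on the two tasks above, a sketch of the ordering and parameter passing could look like this (the exact wiring is an assumption; the parameter lists are shortened):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline
spec:
  resources:
    - name: git-source
      type: git
  params:
    - name: pathToContext
    - name: imageUrl
    - name: imageTag
    - name: pathToDeploymentYamlFile
  tasks:
    - name: source-to-image
      taskRef:
        name: source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source
      params:
        - name: pathToContext
          value: "${params.pathToContext}"
        - name: imageUrl
          value: "${params.imageUrl}"
        - name: imageTag
          value: "${params.imageTag}"
    - name: deploy-to-cluster
      taskRef:
        name: deploy-via-kubectl
      runAfter:
        - source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source
      params:
        - name: imageUrl
          value: "${params.imageUrl}"
        - name: imageTag
          value: "${params.imageTag}"
        - name: pathToDeploymentYamlFile
          value: "${params.pathToDeploymentYamlFile}"
```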

4) The file resource-git-cloud-native-starter.yaml only contains the address of the GitHub repo.

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: resource-git-cloud-native-starter
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url

5) The file pipeline-account.yaml is necessary to define access rights from Tekton to the container registry.

Here are the complete steps to set up the pipeline on the IBM Cloud Kubernetes service. Except for the login commands, the same instructions should work as well for Kubernetes services on other clouds and for the Kubernetes distribution OpenShift.

First get an IBM lite account. It’s free and there is no time restriction. In order to use the Kubernetes service you need to enter your credit card information, but there is a free Kubernetes cluster. After this create a new Kubernetes cluster.

To create the pipeline, invoke these commands:

$ git clone
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)
$ REGISTRY_NAMESPACE=<your-namespace>
$ CLUSTER_NAME=<your-cluster-name>
$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export <output-from-previous-command>
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ kubectl apply -f deployment/tekton/resource-git-cloud-native-starter.yaml 
$ kubectl apply -f deployment/tekton/task-source-to-image.yaml 
$ kubectl apply -f deployment/tekton/task-deploy-via-kubectl.yaml 
$ kubectl apply -f deployment/tekton/pipeline.yaml
$ ibmcloud iam api-key-create tekton -d "tekton" --file tekton.json
$ cat tekton.json | grep apikey 
$ kubectl create secret generic ibm-cr-push-secret --type="" --from-literal=username=iamapikey --from-literal=password=<your-apikey>
$ kubectl annotate secret ibm-cr-push-secret
$ kubectl apply -f deployment/tekton/pipeline-account.yaml

Execute the Tekton Pipeline

In order to invoke the pipeline, a sixth yaml file pipeline-run-template.yaml is used. As stated above, this file needs to be modified first to contain the exact image name.

The pipeline-run resource is used to define input parameters like the Git repository, location of the Dockerfile, name of the image, etc.

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: pipeline-run-cns-authors-
spec:
  pipelineRef:
    name: pipeline
  resources:
    - name: git-source
      resourceRef:
        name: resource-git-cloud-native-starter
  params:
    - name: pathToContext
      value: "authors-java-jee"
    - name: pathToDeploymentYamlFile
      value: "deployment/deployment.yaml"
    - name: pathToServiceYamlFile
      value: "deployment/service.yaml"
    - name: imageUrl
      value: <ip:port>/<namespace>/authors
    - name: imageTag
      value: "1"
    - name: pathToDockerFile
      value: "DockerfileTekton"
  trigger:
    type: manual
  serviceAccount: pipeline-account

Invoke the following commands to trigger the pipeline and to test the authors service:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment/tekton
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" pipeline-run-template.yaml > pipeline-run-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" pipeline-run-template.yaml.1 > pipeline-run-template.yaml.2
$ sed "s+<tag>+1+g" pipeline-run-template.yaml.2 > pipeline-run.yaml
$ cd ${ROOT_FOLDER}/authors-java-jee
$ kubectl create -f deployment/tekton/pipeline-run.yaml
$ kubectl describe pipelinerun pipeline-run-cns-authors-<output-from-previous-command>
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

After running the pipeline you’ll see two Tekton pods and one authors pod in the Kubernetes dashboard.

Try out this sample yourself!

The post Deploying MicroProfile Microservices with Tekton appeared first on Niklas Heidloff.

