
Java EE 6: Migration: From Application Servers over MicroProfile to Serverless AWS Lambda

by admin at June 03, 2023 05:56 AM

At the JDD 2012 conference, more than ten years ago, I delivered a session with the title: Java EE: The Future Is Now, But Is Not Evenly Distributed Yet.

Ten years later, in 2022, I was invited again and delivered a session with the title "Java EE 6 and the Future Is Now, and now we got.. clouds", in which I discussed a migration path to a serverless architecture:

During the talk, I didn't manage to migrate the application to modern Java and clouds, but I promised to record a screencast, which is also available:


JPrime 2023

by Ivar Grimstad at June 01, 2023 04:57 PM

JPrime is a very friendly conference that is a pleasure to speak at. This year it attracted around 1250 attendees, and with two parallel tracks, all speakers get a decent crowd in their presentations.

The venue is great, with an outdoor area to escape the crowds in the exhibition area and enjoy the Sofia sun in between sessions.

My talk Modern and Lightweight Cloud Application Development with Jakarta EE 10 went well. I had a lot of demos and even added an extra demo right before the talk. Check out the slides on SpeakerDeck.

No conference without #runWithJakartaEE. JPrime was no exception. We had a morning run before each of the two conference days in addition to the day after the conference. The runners were Yarden, Emily, Grace, Heinz, Tagir, and me as you can see in the gallery below.


Monoliths, Microservices, Auth, API Gateways, Schedulers-111th

by admin at June 01, 2023 01:00 PM

The 2023.6 / 111th edition, with the following topics, is ready to watch:
"Microservices vs. Monoliths, Schedulers, Lightweight vs. Heavyweight, Time Derived Properties, Groovy, JavaScript"

See you every first Monday of the month at 8pm CET (UTC+1:00). The show is also announced at:

Are you nice? :-) Then check out the airhacks discord server

Any questions left? Ask now and get the answers at the next episode. Some questions are also answered with a short video: 60 seconds or less with Java


Exploring Java Records In A Jakarta EE Context

by Luqman Saeed at May 30, 2023 09:55 AM

Java Records, one of the major highlights of the Java 16 release, provides a concise and immutable way to define classes for modelling data. This conciseness lends itself useful in a typical Jakarta EE application that can have a number of layers that need to share data. For example the data layer might want to return a subset of a given data set to a calling client through a data projection object. The REST layer might want to have separate entities for server and client side among others. This blog post explores the adoption of Java Records in a Jakarta EE application as a data transfer and projection object.
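As a quick hedged sketch of the idea (the Product names are made up, not from the post), a record gives you a compact, immutable projection object:

```java
// A record as a projection/transfer object: the compiler generates the
// canonical constructor, accessors, equals, hashCode, and toString.
record ProductSummary(long id, String name) {

    // A compact constructor can validate the immutable state.
    ProductSummary {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
    }
}
```

In a JPA-backed data layer, such a record can be the target of a constructor expression, e.g. `select new com.example.ProductSummary(p.id, p.name) from Product p`, so only the subset of fields a client needs ever leaves the persistence layer.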


MicroProfile 6 and Jakarta EE 10 guide updates in Open Liberty

May 30, 2023 12:00 AM

Concurrent with the Open Liberty release, 44 of the Open Liberty guides have been updated to make use of the latest MicroProfile 6 and Jakarta EE 10 specifications. Various bugs have been fixed as part of this release.

In Open Liberty

Run your apps using

If you’re using Maven, here are the coordinates:
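Mirroring the Gradle coordinates just below (the open-ended version range is kept as the placeholder used there; the `zip` packaging type is how the Liberty runtime is usually pulled in, but treat it as an assumption here), the dependency is presumably:

```xml
<dependency>
    <groupId>io.openliberty</groupId>
    <artifactId>openliberty-runtime</artifactId>
    <version>[,)</version>
    <type>zip</type>
</dependency>
```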


Or for Gradle:

dependencies {
    libertyRuntime group: 'io.openliberty', name: 'openliberty-runtime', version: '[,)'
}

Or if you’re using container images:


Or take a look at our Downloads page.

Ask a question on Stack Overflow

44 guides updated to use MicroProfile 6 and Jakarta EE 10

As Open Liberty features and functionality continue to grow, we continue to add new guides on those topics to make their adoption as easy as possible. Existing guides also receive updates to address reported bugs and issues, keep their content current, and expand what their topics cover.

Concurrent with the release, the following 44 guides have been updated to use the latest MicroProfile 6 and Jakarta EE 10 specifications:

For the full list of Open Liberty guides, refer to the guides page.

Notable bugs fixed in this release

We’ve spent some time fixing bugs. The following sections describe just some of the issues resolved in this release. If you’re interested, here’s the full list of bugs fixed in

  • Memory Leak in MicroProfile OpenAPI’s SchemaRegistry.current

    A user reported a memory leak that occurred with each application restart, where 100MB of additional memory was used each time. The culprit ended up coming from MicroProfile OpenAPI’s SchemaRegistry class.

    This issue has been reported upstream to SmallRye, and has also been fixed directly in Liberty.

  • HTTP/2 max frame size exceeded when compression is used

    When compression is configured in the server.xml on an httpEndpoint and HTTP/2 is used, the HTTP/2 max frame size may be exceeded, leading to a FRAME_SIZE_ERROR appearing in the server's log.

    This issue has been resolved and the http/2 response data is now split into multiple data frames to avoid sending a data frame larger than the http/2 max frame size of the client.

  • EntryNotFoundException thrown in federated registries when using custom input/output configuration

    When running with federatedRegistries-1.0, it is possible to get an EntryNotFoundException when defining a non-identifier type property for the federated registries input/output mapping. This exception can occur in any of the *Bridge classes, but the key detail is that it originates from a BridgeUtils.getEntityByIdentifier call.

    The following is an example stack:

    CWIML1010E: The user registry operation could not be completed. The uniqueId = null and uniqueName = null attributes of the identifier object are either not valid or not defined in the back-end repository.
            at web.UserRegistryServlet.handleMethodRequest(
            at web.UserRegistryServlet.doGet(
            at javax.servlet.http.HttpServlet.service(
            at javax.servlet.http.HttpServlet.service(

    This issue has been resolved and the method no longer throws EntryNotFoundException.

  • requestTiming-1.0 causes elevated (or spiking) CPU performance due to the SlowRequestManager

    When using the requestTiming-1.0 feature in Open Liberty, CPU usage is elevated. The CPU impact correlates with CPU capacity.

    This is more obvious when a lower value is set for the "slow request" threshold (e.g. <= 15s). Even so, you may not encounter a noticeable impact depending on CPU capacity.

    This is also more obvious if the request has a high hung-request threshold or if the request hangs indefinitely and cannot be terminated by the interruptHungRequest attribute. This allows a bigger window of opportunity to witness any CPU spikes or elevation.

    This issue has been resolved and the elevated CPU usage no longer occurs.

  • Request Timing metrics not showing up with mpMetrics-5.0 (when used with requestTiming-1.0 feature).

    When using the mpMetrics-5.0 and requestTiming-1.0 features, the request timing metrics are not being provided.

    This issue has been resolved and the expected request timing metrics are now provided.

Get Open Liberty now


How to persist additional attributes for an association with JPA and Hibernate

by Thorben Janssen at May 29, 2023 04:35 PM

The post How to persist additional attributes for an association with JPA and Hibernate appeared first on Thorben Janssen.

JPA and Hibernate allow you to define associations between entities with just a few annotations, and you don’t have to care about the underlying table…
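The article itself is only teased here; the usual pattern it refers to, sketched with hypothetical Book/Author names (those entities omitted), is to map the join table as its own entity so the association can carry extra columns:

```java
import jakarta.persistence.Embeddable;
import jakarta.persistence.EmbeddedId;
import jakarta.persistence.Entity;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.MapsId;
import java.io.Serializable;

// The join table becomes an entity with a composite key, so the
// association itself can store an additional attribute (here: role).
@Entity
public class BookAuthor {

    @EmbeddedId
    private BookAuthorId id;

    @ManyToOne
    @MapsId("bookId")
    private Book book;

    @ManyToOne
    @MapsId("authorId")
    private Author author;

    private String role; // the extra attribute on the association
}

@Embeddable
class BookAuthorId implements Serializable {
    private Long bookId;
    private Long authorId;
    // equals() and hashCode() are required for a composite key
}
```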



Hashtag Jakarta EE #178

by Ivar Grimstad at May 28, 2023 09:59 AM

Welcome to issue number one hundred and seventy-eight of Hashtag Jakarta EE!

The due date for submitting plan reviews for specifications that are candidates for Jakarta EE 11 is May 30. And that date is fast approaching! The pull requests for those that are ready are labeled as Plan Review in the Jakarta EE Specification Committee’s GitHub repository.

A great way to get involved in specification work is to participate in the discussions happening in the GitHub issue trackers for the various specifications. An example is the discussion about HTTP status codes going on in Jakarta REST. Please chime in if you have an opinion or any relevant industry experience that can help guide the decision.

May is a busy month for conferences. Next week, I am going to Sofia to speak at JPrime 2023. It’s been a while since I was at JPrime, so I am very much looking forward to it! Check out the cool speaker promo they created for me.

Relax, I won’t bug you with the link to the course I created for LinkedIn in every Hashtag Jakarta EE, but bear with me for a while. If you are new to Jakarta EE or just want to complete a course for the Jakarta EE skill on LinkedIn, I have just published an overview course of Jakarta EE on LinkedIn Learning. Check it out, and tell me what you think!


More Performance! | OpenJDK Contrib | Java Coding | Head Crashing Informatics 78

by Markus Karg at May 27, 2023 04:00 PM

And AGAIN: I improved the I/O performance of #OpenJDK 21 even more! If your #Java program transfers bytes from a socket to a file, then you MUST watch this video!
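The optimization lives inside the JDK's transfer path; a minimal sketch of the pattern it speeds up (the method name here is hypothetical, the input stream would typically come from a socket):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// InputStream.transferTo() lets the JDK choose an optimized copy path
// instead of a hand-written read/write loop.
static long copyToFile(InputStream in, Path target) throws IOException {
    try (OutputStream out = Files.newOutputStream(target)) {
        return in.transferTo(out); // returns the number of bytes copied
    }
}
```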

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patron. Thanks! 🙂


Payara Monthly Catch: May 2023

by Priya Khaira-Hanks at May 25, 2023 09:32 AM

Welcome to our May selection of the best blogs, videos, podcasts and tutorials from the world of  Java, Jakarta EE, cloud computing and open source. 


Enterprise Kotlin - Kotlin and Jakarta EE

May 25, 2023 12:00 AM

Note: this blog post is also published on the Computas blog. [Image: The Jakarta EE logo, by the Eclipse Foundation]
If you look at the documentation on the Kotlin web page (


Coding Microservice From Scratch (Part 12) | JAX-RS Done Right! | Head Crashing Informatics 77

by Markus Karg at May 13, 2023 04:00 PM

Write a pure-Java microservice from scratch, without an application server or any third-party frameworks, tools, or IDE plugins — just using the JDK, Maven, and JAX-RS aka Jakarta REST 3.1. This video series shows you the essential steps!

Switching from Jersey to RESTEasy in five minutes, without touching the source code — how cool is that!

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patron. Thanks! 🙂


Podman Desktop: A Beginner’s Guide to Containerization

by F.Marchioni at May 12, 2023 08:47 AM

Podman is a popular containerization tool that allows users to manage containers, images, and other related resources. The Podman Desktop Tool is an easy-to-use graphical interface for managing Podman containers on your desktop. In this tutorial, we’ll go over how to use the Podman Desktop Tool to manage the WildFly container image, covering some of its ... Read more

The post Podman Desktop: A Beginner’s Guide to Containerization appeared first on Mastertheboss.


The Jakarta EE 2023 Developer Survey is now open!

by Tanja Obradovic at March 29, 2023 09:24 PM


It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25.

I would like to invite you to take this year's six-minute survey for the chance to share your thoughts and ideas for future Jakarta EE releases, and to help us discover the uptake of the latest Jakarta EE versions and the trends that inform industry decision-makers.

Please share the survey link and reach out to your contacts: Java developers, architects, and stakeholders in the enterprise Java ecosystem, and invite them to participate in the 2023 Jakarta EE Developer Survey!




How to filter through a JSON Document using Java 8 Stream API

by F.Marchioni at March 26, 2023 07:58 AM

Filtering through a JSON document using Java 8 Stream API involves converting the JSON document into a stream of objects and then using the filter method to select the objects that match a given condition. Here are the steps to filter through a JSON document using Java 8 Stream API. Plain Java Streams filtering Java ... Read more
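The "plain Java" variant the post mentions can be sketched like this (the Customer record is hypothetical; it stands in for objects a JSON binding library has already produced from the document):

```java
import java.util.List;

// Once the JSON document is bound to Java objects (by JSON-B, Jackson,
// or by hand), selecting matching entries is plain Stream API work.
record Customer(String name, int age) {}

static List<String> adultNames(List<Customer> customers) {
    return customers.stream()
            .filter(c -> c.age() >= 18)   // keep objects matching the condition
            .map(Customer::name)          // project to the field we need
            .toList();
}
```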

The post How to filter through a JSON Document using Java 8 Stream API appeared first on Mastertheboss.


8 things you need to know when migrating to Hibernate 6.x

by Thorben Janssen at March 21, 2023 01:00 PM

The post 8 things you need to know when migrating to Hibernate 6.x appeared first on Thorben Janssen.

Hibernate 6 has been released for a while, and the latest Spring Data JPA version includes it as a dependency. So, it’s no surprise that…



Openshift Cheatsheet for DevOps

by F.Marchioni at March 06, 2023 02:05 PM

In this article you will find a comprehensive Openshift Container Platform cheat sheet for System Administrators and Developers. Login and Configuration Firstly, let’s check the most common commands for Login and Configuration in OpenShift: #login with a user oc login -u developer -p developer #login as system admin oc login -u system:admin #User Information ... Read more

The post Openshift Cheatsheet for DevOps appeared first on Mastertheboss.


Quarkus Reactive REST made easy

by F.Marchioni at February 25, 2023 10:10 AM

Quarkus' JAX-RS implementation has improved a lot since its first release. In this tutorial we will show some new features available in Quarkus, starting with the new reactive REST paradigm. Quarkus uses SmallRye Mutiny as its main reactive library. In our first tutorial, we have discussed how to use Mutiny to deliver ... Read more

The post Quarkus Reactive REST made easy appeared first on Mastertheboss.


How to change Quarkus default HTTP Port?

by F.Marchioni at February 17, 2023 09:23 PM

Quarkus includes the “undertow” extension which is triggered when you include a JAXRS dependency in your project. We will see in this tutorial which are the most common settings you can apply to a Quarkus application to configure the embedded Undertow server. First of all let’s specify how you can set configuration parameters on Quarkus. ... Read more
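For reference, the port setting itself is a one-liner in application.properties (the 8081 value is just an example; the default is 8080):

```properties
# src/main/resources/application.properties
# Override the default HTTP port (8080).
quarkus.http.port=8081
# Profile-specific override, e.g. while running under the test profile.
%test.quarkus.http.port=8081
```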

The post How to change Quarkus default HTTP Port? appeared first on Mastertheboss.


What is Apache Camel and how does it work?

by Rhuan Henrique Rocha at February 16, 2023 11:14 PM

In this post, I will talk to you about what Apache Camel is. It is a brief introduction before I start posting practical content. So, let's understand what this framework is.

Apache Camel is an open source Java integration framework that allows different applications to communicate with each other efficiently. It provides a platform for integrating heterogeneous software systems. Camel is designed to make application integration easy, simplifying the complexity of communication between different systems.

Apache Camel is written in Java and can be run on a variety of platforms, including Jakarta EE application servers and OSGi-based application containers, and can run inside cloud environments using Spring Boot or Quarkus. Camel also supports a wide range of network protocols and message formats, including HTTP, FTP, SMTP, JMS, SOAP, XML, and JSON.

Camel uses the Enterprise Integration Patterns (EIP) pattern to define the different forms of integration. EIP is a set of commonly used design patterns in system integration. Camel implements many of these patterns, making it a powerful tool for integration solutions.

Additionally, Camel has a set of components that allow it to integrate with different systems. The components can be used to access different resources, such as databases, web services, and message systems. Camel also supports content-based routing, which means it can route messages based on their content.
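The components and the content-based routing described above come together in Camel's Java DSL; a hedged sketch (endpoint URIs and the route are hypothetical, not from the post):

```java
import org.apache.camel.builder.RouteBuilder;

// Content-based router (an EIP): messages are routed by inspecting
// their content, here via an XPath predicate on an order document.
public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:orders/in")                      // file component: watch a directory
            .choice()                               // EIP: content-based router
                .when(xpath("/order/@priority = 'high'"))
                    .to("jms:queue:urgentOrders")   // JMS component: message queue
                .otherwise()
                    .to("file:orders/normal");
    }
}
```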

Camel is highly configurable and extensible, allowing developers to customize its functionality to their needs. It also supports the creation of integration routes at runtime, which means that routes can be defined and changed without the need to restart the system.

In summary, Camel is a powerful and flexible tool for software system integration. It allows different applications to communicate efficiently and effectively, simplifying the complexity of system integration. Camel is a reliable and widely used framework that can help improve the efficiency and effectiveness of system integration in a variety of environments.

If you want to start using this framework, you can access the documentation at the site. This is my first post about Apache Camel, and I will post more practical content about this amazing framework.


Jersey 3.1.1 released – focused on performance

by Jan at February 03, 2023 11:50 PM

Jersey 2.38 (Jakarta REST 2.1 compatible release) and Jersey 3.0.9 (Jakarta REST 3.0 compatible) have been released before Christmas. Jersey 3.1.1 is aligned with these releases. Apart from minor features (JDK 20 support, less repetitive warnings) and fixes, the big … Continue reading


How to use a Datasource in Quarkus

by F.Marchioni at February 02, 2023 10:13 AM

Agroal is a connection pool implementation that can be used with Quarkus to manage database connections. In this tutorial, we will go over how to use the DataSource in a Quarkus application. First, you’ll need to add the Agroal extension to your Quarkus application. You can do this by adding the following dependency to your ... Read more
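As a hedged sketch of what such a configuration typically looks like (database name and credentials are made up):

```properties
# application.properties: Agroal pools the JDBC connections.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus
quarkus.datasource.password=quarkus
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydb
# Optional Agroal pool tuning:
quarkus.datasource.jdbc.max-size=16
```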

The post How to use a Datasource in Quarkus appeared first on Mastertheboss.


Jakarta Persistence 3.1 new features

by F.Marchioni at February 01, 2023 11:12 AM

This tutorial introduces Jakarta Persistence API 3.1 as a standard for management of persistence and O/R mapping in Java environments. We will discuss the headlines with a simple example that you can test on a Jakarta EE 10 runtime. New features added in Jakarta Persistence 3.1 There are several new features available in Jakarta Persistence ... Read more
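Two of the 3.1 additions can be sketched as follows (entity and field names are hypothetical; the full feature list is in the article):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import java.time.LocalDate;
import java.util.UUID;

// Jakarta Persistence 3.1 adds java.util.UUID as a basic/ID type with
// its own generation strategy, plus new JPQL functions.
@Entity
public class Payment {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID) // new in 3.1
    private UUID id;

    private double amount;

    private LocalDate createdOn;
}

// A query using 3.1 JPQL additions (numeric functions, LOCAL DATE):
// em.createQuery("select ceiling(p.amount) from Payment p " +
//                "where p.createdOn = local date", Double.class);
```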

The post Jakarta Persistence 3.1 new features appeared first on Mastertheboss.


Jakarta EE track at Devnexus 2023!!!!

by Tanja Obradovic at January 31, 2023 08:25 PM


We have great news to share with you!

For the very first time at Devnexus 2023, we will have a Jakarta EE track with 10 sessions, and we will take this opportunity, whenever possible, to celebrate all we have accomplished in the Jakarta EE community.

Jakarta EE track sessions

You may not be aware, but this year (yes, time flies!!) marks 5 years of Jakarta EE, so we will be celebrating throughout the year! Devnexus 2023 looks like a great place to mark this milestone as well! So stay tuned for details, but in the meantime, please help us out: register for the event, come to see us, and spread the word.

Help us spread the word about the Jakarta EE track @Devnexus 2023: just re-share posts you see from us on various social platforms!
To make it easier for you to spread the word on socials, we have also prepared a social kit document to help with promotion of the Jakarta EE track @Devnexus 2023, its sessions, and speakers. The social kit document will be updated with missing sessions and speakers, so visit often and promote far and wide.

Note: Organizers wanted to do something for people impacted by the recent tech layoffs and decided to offer a 50% discount for any conference pass (valid for a limited time). Please use code DN-JAKARTAEE for the @JakartaEE Track to get an additional 20% discount!

In addition, there will be an IBM workshop highlighting Jakarta EE; look for "Thriving in the cloud: Venturing beyond the 12 factors". Please use the promo code ($100 off) JAKARTAEEATDEVNEXUS that the organizers prepared for you (valid for a limited time).

I hope to see you all at Devnexus 2023!



REST Crud Application using Quarkus and Vue.js

by F.Marchioni at January 25, 2023 03:01 PM

This article shows how to run a Quarkus 3 application using Jakarta REST Service and a Vue.js front-end. The example application will wrap the CRUD method of the endpoint with equivalent Vue.js functions. Let’s get started ! Pre-requisites: You should be familiar with REST Services and VueJS Web interfaces. If you are new to that, ... Read more

The post REST Crud Application using Quarkus and Vue.js appeared first on Mastertheboss.


Comparing Jackson vs JSONB

by F.Marchioni at January 22, 2023 07:06 PM

JSON-B and Jackson are both libraries that can be used for parsing and generating JSON data in Java. However, they have some differences in their functionality and usage. This tutorial will discuss them in detail. Jackson and JSON-B in a nutshell Firstly, if you are new to JSON parsing, let’s give an overview to these ... Read more
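The entry points of the two libraries can be sketched side by side (the Person record is hypothetical; both calls shown are the libraries' standard APIs):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;

public class MappingComparison {

    public record Person(String name, int age) {}

    // JSON-B: the Jakarta standard API; the implementation (e.g. Yasson)
    // is pluggable behind JsonbBuilder.
    static String withJsonb(Person p) throws Exception {
        try (Jsonb jsonb = JsonbBuilder.create()) {
            return jsonb.toJson(p);
        }
    }

    // Jackson: ObjectMapper is the entry point; configuration happens on
    // the mapper instance rather than through a spec-defined config type.
    static String withJackson(Person p) throws Exception {
        return new ObjectMapper().writeValueAsString(p);
    }
}
```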

The post Comparing Jackson vs JSONB appeared first on Mastertheboss.


Getting started with Quarkus 3

by F.Marchioni at January 19, 2023 12:47 PM

This article introduces some of the new features of the upcoming Quarkus 3 release which is, at the time of writing, in Alpha state. We will cover the main highlights and some tools you can use to upgrade existing Quarkus applications. Quarkus 3 highlights Firstly, let’s discuss Quarkus 3 main highlights: An example Quarkus 3 ... Read more

The post Getting started with Quarkus 3 appeared first on Mastertheboss.


Jakarta EE and MicroProfile at EclipseCon Community Day 2022

by Reza Rahman at November 19, 2022 10:39 PM

Community Day at EclipseCon 2022 was held in person on Monday, October 24 in Ludwigsburg, Germany. Community Day has always been a great event for Eclipse working groups and project teams, including Jakarta EE/MicroProfile. This year was no exception. A number of great sessions were delivered from prominent folks in the community. The following are the details including session materials. The agenda can still be found here. All the materials can be found here.

Jakarta EE Community State of the Union

The first session of the day was a Jakarta EE community state of the union delivered by Tanja Obradovic, Ivar Grimstad and Shabnam Mayel. The session included a quick overview of Jakarta EE releases, how to get involved in the work of producing the specifications, a recap of the important Jakarta EE 10 release, as well as a view of what’s to come in Jakarta EE 11. The slides are embedded below and linked here.

Jakarta Concurrency – What’s Next

Payara CEO Steve Millidge covered Jakarta Concurrency. He discussed the value proposition of Jakarta Concurrency, the innovations delivered in Jakarta EE 10 (including CDI based @Asynchronous, @ManagedExecutorDefinition, etc) and the possibilities for the future (including CDI based @Schedule, @Lock, @MaxConcurrency, etc). The slides are embedded below and linked here. There are some excellent code examples included.
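The CDI-based @Asynchronous mentioned above can be sketched like this (bean and method names are hypothetical; the annotation and Result helper are from the Jakarta Concurrency 3.0 API):

```java
import jakarta.enterprise.concurrent.Asynchronous;
import jakarta.enterprise.context.ApplicationScoped;
import java.util.concurrent.CompletableFuture;

// In Jakarta EE 10, @Asynchronous works on plain CDI beans
// (previously asynchronous methods were an EJB-only feature).
@ApplicationScoped
public class ReportService {

    @Asynchronous // runs on a managed thread; the caller gets the future
    public CompletableFuture<String> generate(String id) {
        String result = "report-" + id; // stand-in for slow work
        return Asynchronous.Result.complete(result);
    }
}
```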

Jakarta Security – What’s Next

Werner Keil covered Jakarta Security. He discussed what’s already done in Jakarta EE 10 (including OpenID Connect support) and everything that’s in the works for Jakarta EE 11 (including CDI based @RolesAllowed). The slides are embedded below and linked here.

Jakarta Data – What’s Coming

IBM’s Emily Jiang kindly covered Jakarta Data. This is a brand new specification aimed towards Jakarta EE 11. It is a higher level data access abstraction similar to Spring Data and DeltaSpike Data. It encompasses both Jakarta Persistence (JPA) and Jakarta NoSQL. The slides are embedded below and linked here. There are some excellent code examples included.

MicroProfile Community State of the Union

Emily also graciously delivered a MicroProfile state of the union. She covered what was delivered in MicroProfile 5, including alignment with Jakarta EE 9.1. She also discussed what’s coming soon in MicroProfile 6 and beyond, including very clear alignment with the Jakarta EE 10 Core Profile. The slides are embedded below and linked here. There are some excellent technical details included.

MicroProfile Telemetry – What’s Coming

Red Hat’s Martin Stefanko covered MicroProfile Telemetry. Telemetry is a brand new specification being included in MicroProfile 6. The specification essentially supersedes MicroProfile Tracing and possibly MicroProfile Metrics too in the near future. This is because the OpenTracing and OpenCensus projects merged into a single project called OpenTelemetry. OpenTelemetry is now the de facto standard defining how to collect, process, and export telemetry data in microservices. It makes sense that MicroProfile moves forward with supporting OpenTelemetry. The slides are embedded below and linked here. There are some excellent technical details and code examples included.

See You There Next Time?

Overall, it was an honor to organize the Jakarta EE/MicroProfile agenda at EclipseCon Community Day one more time. All speakers and attendees should be thanked. Perhaps we will see you at Community Day next time? It is a great way to hear from some of the key people driving Jakarta EE and MicroProfile. You can attend just Community Day even if you don’t attend EclipseCon. The fee is modest and includes lunch as well as casual networking.


Jersey 3.1.0 is finally released

by Jan at November 15, 2022 03:31 PM

We were waiting so long! But it is here, Jakarta EE 10 is released, all the implementations Jersey depends on are final, and so Jersey 3.1.0, the final release compatible with Jakarta REST 3.1 is finally released! There are a … Continue reading


JFall 2022

November 04, 2022 09:56 AM

An impression of JFall by yours truly.


Sold out!

Packed room!

Very nice first keynote by Saby Sengupta about the path to transform.
He is a really nice storyteller. He had us going.

Dutch people, wooden shoes, wooden hat, would not listen

  • Saby


Get the answer to three "why" questions. If the answers stop after the first why, it may not be a good idea.

This great first keynote was followed by the very well-known Venkat Subramaniam with The Art of Simplicity.

The question is not: what can we add? But: what can we remove?

Simple fails less

Simple is elegant

All in all a great keynote! Loved it.

Design Patterns in the light of Lambdas

By Venkat Subramaniam

The GoF are kind of the grandparents of our industry. The worst thing they have done is write the damn book.
— Venkat

The quote is in the context that writing down grandma's fantastic recipe does not work, as it is based on the skill of grandma and not the exact amounts of the ingredients.

The cleanup is the responsibility of the Resource class. Much better than asking developers to take care of it. It will be forgotten!

The more powerful a language becomes the less we need to talk about patterns. Patterns become practices we use. We do not need to put in extra effort.

I love his way of presenting, but this is one of those times, I guess, that he is hampered by his own success. The talk did not go deep into the material. During his talk he just about covered 5 not-too-difficult subjects. I missed his speed and depth.

Still a great talk though.


Was actually very nice!

NLJUG update keynote

The Java Magazine was mentioned; we (as editors) had to shout for that!

Please contact me (@ivonet) if you have ambitions to be an author or maybe even a fellow editor of the magazine. We are searching for a new editor now.

Then the voting for the Innovation Awards.

I kind of missed the next keynote by ING because I was playing with a Rubik's cube, and I did not really like his talk.

Jakarta EE 10 Platform

by Ivar Grimstad

Ivar talks about the specification of Jakarta EE.

To create a lite version of CDI, it is possible to start doing things at build time and facilitate other tools like GraalVM and Quarkus.

He gives nice demos on how to migrate code to work in the jakarta namespace.

To start your own Jakarta EE application, just go to the site and follow the very simple UI instructions.

I am very proud to be the creator of that UI. Thanks, Ivar for giving me a shoutout for that during your talk. More cool stuff will follow soon.

Be prepared to do some namespace changes when moving from Java EE 8 to Jakarta EE.
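The change is largely a mechanical package rename, but not every javax package moved (javax.sql, for example, stays in the JDK), so real migrations use tooling such as the Eclipse Transformer. A toy sketch of the idea (the list of moved packages here is deliberately incomplete):

```java
// Toy illustration of the javax -> jakarta rename: only the namespaces
// that actually moved may be rewritten; others must be left alone.
static String migrateImport(String line) {
    // A few of the moved namespaces (not exhaustive):
    String[] moved = { "javax.ws.rs", "javax.persistence", "javax.servlet" };
    for (String prefix : moved) {
        if (line.contains("import " + prefix)) {
            return line.replace("import javax.", "import jakarta.");
        }
    }
    return line; // e.g. "import javax.sql.DataSource;" stays untouched
}
```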

All slides here


I had a fantastic day. For me, it is mainly about the community and seeing all the people I know in the community. I totally love the vibe of the conference and I think it is one of the best organized venues.

See you at JSpring.



How to make your own scraper and then forget about it?

October 28, 2022 12:00 AM

So you've found a web page that changes frequently, and you want to follow the changes, but they don't provide a changelog? Then you might want to track the changes yourself. I've done that on a couple of pages - most notably tracking how bonus point awards change on the Norwegian bonus point system Viatrumf. Feel free to check it out. This solution


Survey Says: Confidence Continues to Grow in the Jakarta EE Ecosystem

by Mike Milinkovich at September 26, 2022 01:00 PM

The results of the 2022 Jakarta EE Developer Survey are very telling about the current state of the enterprise Java developer community. They point to increased confidence about Jakarta EE and highlight how far Jakarta EE has grown over the past few years.

Strong Turnout Helps Drive Future of Jakarta EE

The fifth annual survey is one of the longest running and best-respected surveys of its kind in the industry. This year’s turnout was fantastic: From March 9 to May 6, a total of 1,439 developers responded. 

This is great for two reasons. First, obviously, these results help inform the Java ecosystem stakeholders about the requirements, priorities and perceptions of enterprise developer communities. The more people we hear from, the better picture we get of what the community wants and needs. That makes it much easier for us to make sure the work we’re doing is aligned with what our community is looking for. 

The other reason is that it helps us better understand how the cloud native Java world is progressing. By looking at what community members are using and adopting, what their top goals are and what their plans are for adoption, we can better understand not only what we should be working on today, but tomorrow and for the future of Jakarta EE. 

Findings Indicate Growing Adoption and Rising Expectations

Some of the survey’s key findings include:

  • Jakarta EE is the basis for the top frameworks used for building cloud native applications.
  • The top three frameworks for building cloud native applications, respectively, are Spring/Spring Boot, Jakarta EE and MicroProfile, though Spring/Spring Boot lost ground this past year. It’s important to note that Spring/Spring Boot relies on Jakarta EE developments for its operation and is not competitive with Jakarta EE. Both are critical ingredients to the healthy enterprise Java ecosystem. 
  • Jakarta EE 9/9.1 usage increased year-over-year by 5%.
  • Java EE 8, Jakarta EE 8, and Jakarta EE 9/9.1 hit the mainstream with 81% adoption. 
  • While over a third of respondents planned to adopt, or already had adopted Jakarta EE 9/9.1, nearly a fifth of respondents plan to skip Jakarta EE 9/9.1 altogether and adopt Jakarta EE 10 once it becomes available. 
  • Most respondents said they have migrated to Jakarta EE already or planned to do so within the next 6-24 months.
  • The top three community priorities for Jakarta EE are:
    • Native integration with Kubernetes (same as last year)
    • Better support for microservices (same as last year)
    • Faster support from existing Java EE/Jakarta EE or cloud vendors (new this year)

Two of the results, when combined, highlight something interesting:

  • 19% of respondents planned to skip Jakarta EE 9/9.1 and go straight to 10 once it’s available 
  • The new community priority — faster support from existing Java EE/Jakarta EE or cloud vendors — really shows the growing confidence the community has in the ecosystem

After all, you wouldn’t wait for a later version and skip the one that’s already available, unless you were confident that the newer version was not only going to be coming out on a relatively reliable timeline, but that it was going to be an improvement. 

And this growing hunger from the community for faster support really speaks to how far the ecosystem has come. When we release a new version, like when we released Jakarta EE 9, it takes some time for the technology implementers to build the product based on those standards or specifications. The community is becoming more vocal in requesting those implementers to be more agile and quickly pick up the new versions. That’s definitely an indication that developer demand for Jakarta EE products is growing in a healthy way. 

Learn More

If you’d like to learn more about the project, there are several Jakarta EE mailing lists to sign up for. You can also join the conversation on Slack. And if you want to get involved, start by choosing a project, sign up for its mailing list and start communicating with the team.

by Mike Milinkovich at September 26, 2022 01:00 PM

Jakarta EE 10 Brings Java Development Into the Modern Cloud Native Era

by Mike Milinkovich at September 22, 2022 04:00 PM

Jakarta EE, a Working Group hosted by the Eclipse Foundation, released Jakarta EE 10 today. 

This achievement was only possible because of a global community of contributors. Congratulations and thank you to everyone who played a part in this release. 

There are many new and innovative features added by the Jakarta EE community.

Jakarta EE 10 Enables Modern, Lightweight Java Applications and Microservices

Let’s start with some of the key updates in Jakarta EE 10 — updates that plant Jakarta EE firmly in the modern era of open source microservices and containers. 

Most prominently, Jakarta EE 10 includes a new profile specification: Jakarta EE Core Profile. The Core Profile includes a subset of Jakarta EE specifications that target the smaller, lightweight runtimes needed for microservices development. This is the first new Profile added to the enterprise Java specifications in over a decade.

In addition, new functionality has been added to more than 20 component specifications. For example:

Jakarta EE 10 also broadens support for annotations so it’s easier to build modularized applications and there’s better integration across component APIs.

Finally, I want to point out that Jakarta EE 10 gives enterprises the flexibility to leverage Java in the way that’s best for their organization. They can:

  • Develop and deploy Jakarta EE 10 applications on Java SE 11 as well as Java SE 17, the most current long-term support (LTS) release of Java SE
  • Take advantage of new features, including the modular system, that were introduced in Java SE 9 and supported in Java SE 11

The Jakarta EE Gamble Is Paying Off

This is all great news for Jakarta EE. But to understand how significant this release is, we need to go back to the Java EE days.

Java EE was the bedrock of application development for the Fortune 1000 for 20 years before it moved to the Eclipse Foundation as Jakarta EE. But the first Jakarta EE releases didn’t add new functionality. Then, Jakarta EE 9 introduced a major breaking change: the move to the jakarta.* namespace.

It’s hard to overstate what a gamble that was. Java EE had been basically backwards-compatible for more than two decades. We asked enterprises to change the fundamentals of applications they’d been relying on for a long time. We asked the enterprise Java ecosystem to re-align their products and open source projects on a new namespace. Oftentimes, when you try to make such a radical change, your ecosystem says no, it’s too much work. And quite a few people thought the Jakarta EE gamble could fail for exactly that reason. 

But it didn’t. IBM, Red Hat, Payara, Spring, the Apache Tomcat and TomEE projects, and Eclipse Jetty, to name a few, all moved to the new namespace with us. 

Now, with new support for modern microservices architectures and containers, Jakarta EE 10 paves the way for Jakarta EE to drive the innovative, multi-vendor standards needed for the future of our industry. 

Get Involved in the Future of Jakarta EE

The momentum around Jakarta EE 10 is well underway. Eclipse GlassFish has released a compatible implementation, and other enterprises and project teams — including Fujitsu, IBM, Oracle, Payara, Red Hat and Tomitribe — are already working towards certifying Jakarta EE 10 compatible products.

Jakarta EE has an exciting future ahead, and we want everyone to participate and contribute. To learn more, connect with the global community. If enterprise Java is important to your business strategy, join the Jakarta EE Working Group. Learn more about the benefits and advantages of membership here.

by Mike Milinkovich at September 22, 2022 04:00 PM

Jakarta EE 10 has Landed!

by javaeeguardian at September 22, 2022 03:48 PM

The Jakarta EE Ambassadors are thrilled to see Jakarta EE 10 being released! This is a milestone release that bears great significance to the Java ecosystem. Jakarta EE 8 and Jakarta EE 9.x were important releases in their own right in the process of transitioning Java EE to a truly open environment in the Eclipse Foundation. However, these releases did not deliver new features. Jakarta EE 10 changes all that and begins the vital process of delivering long pending new features into the ecosystem at a regular cadence.

There are quite a few changes that were delivered – here are some key themes and highlights:

  • CDI Alignment
    • @Asynchronous in Concurrency
    • Better CDI support in Batch
  • Java SE Alignment
    • Support for Java SE 11, Java SE 17
    • CompletionStage, ForkJoinPool, parallel streams in Concurrency
    • Bootstrap APIs for REST
  • Closing standardization gaps
    • OpenID Connect support in Security, @ManagedExecutorDefinition, UUID as entity keys, more SQL support in Persistence queries, multipart/form-data support in REST, @ClientWindowScoped in Faces, pure Java Faces views
    • CDI Lite/Core Profile to enable next generation cloud native runtimes – MicroProfile will likely align with CDI Lite/Jakarta EE Core
  • Deprecation/removal
    • @Context annotation in REST, EJB Entity Beans, embeddable EJB container, deprecated Servlet/Faces/CDI features

While there are many features that we identified in our Jakarta EE 10 Contribution Guide that did not make it yet, this is still a very solid release that everyone in the Java ecosystem will benefit from, including Spring, MicroProfile and Quarkus. You can see here what was delivered, what’s on the way and what gaps still remain. You can try Jakarta EE 10 out now using compatible implementations like GlassFish, Payara, WildFly and Open Liberty. Jakarta EE 10 is proof that the community, including major stakeholders, has not only made it through the transition to the Eclipse Foundation but is now beginning to thrive once again.

Many Ambassadors helped make this release a reality such as Arjan Tijms, Werner Keil, Markus Karg, Otavio Santana, Ondro Mihalyi and many more. The Ambassadors will now focus on enabling the community to evangelize Jakarta EE 10 including speaking, blogging, trying out implementations, and advocating for real world adoption. We will also work to enable the community to continue to contribute to Jakarta EE by producing an EE 11 Contribution Guide in the coming months. Please stay tuned and join us.

Jakarta EE is truly moving forward – the next phase of the platform’s evolution is here!

by javaeeguardian at September 22, 2022 03:48 PM

Java Reflections unit-testing

by Vladimir Bychkov at July 13, 2022 09:06 PM

How can you make Java code that uses reflection more stable? Unit tests can help with this problem. This article introduces the annotations @CheckConstructor, @CheckField, @CheckMethod to create such unit tests automatically
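The article's annotations generate such tests automatically; as a rough, stdlib-only illustration of the underlying idea (the names below are made up for this sketch, not the article's API), a test can assert that the members a framework looks up reflectively actually exist:

```java
// Hand-rolled sketch (not the article's @CheckField/@CheckMethod API):
// assert at test time that the members a reflection-based framework
// looks up by name actually exist, so refactorings fail fast.
class ReflectionContract {

    static boolean hasField(Class<?> type, String name) {
        try {
            type.getDeclaredField(name);
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    static boolean hasMethod(Class<?> type, String name, Class<?>... params) {
        try {
            type.getDeclaredMethod(name, params);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```

A unit test can then call these checks for every member name a reflective framework depends on, turning a silent runtime failure into a compile-like test failure.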

by Vladimir Bychkov at July 13, 2022 09:06 PM

The Power of Enum – Take advantage of it to make your code more readable and efficient

by otaviojava at July 06, 2022 06:51 AM

Like any other language, Java has the enum feature that allows us to enumerate items. It is helpful to list delimited items in your code, such as the seasons. And we can go beyond it with Java! It permits clean code design. Indeed, we can apply several patterns such as VO from DDD, Singleton, and […]
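As a quick illustration of the Singleton use the post alludes to (this example is mine, not taken from the article):

```java
import java.util.HashMap;
import java.util.Map;

// Enum-based Singleton: the JVM guarantees exactly one INSTANCE,
// with thread-safe initialization and serialization safety for free.
enum AppConfig {
    INSTANCE;

    private final Map<String, String> settings = new HashMap<>();

    public void put(String key, String value) {
        settings.put(key, value);
    }

    public String get(String key) {
        return settings.get(key);
    }
}
```

Callers simply use AppConfig.INSTANCE, with no getInstance() boilerplate or double-checked locking.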

by otaviojava at July 06, 2022 06:51 AM

Java EE - Jakarta EE Initializr

May 05, 2022 02:23 PM

Getting started with Jakarta EE just became even easier!

Get started

Hot new Update!

Moved from the Apache 2 license to the Eclipse Public License v2 for the newest version of the archetype, as described below, as a start for a possible collaboration with the Eclipse starter project.

New Archetype with JakartaEE 9

JakartaEE 9 + Payara 5.2022.2 + MicroProfile 4.1 running on Java 17

  • And the docker image is also ready for x86_64 (amd64) AND aarch64 (arm64/v8) architectures!

May 05, 2022 02:23 PM

FOSDEM 2022 Conference Report

by Reza Rahman at February 21, 2022 12:24 AM

FOSDEM took place February 5-6. The Europe-based event is one of the most significant gatherings worldwide focused on all things Open Source. In recent years the event has added a devroom/track dedicated to Java, named the “Friends of OpenJDK”. The effort is led by my friend and former colleague Geertjan Wielenga. Due to the pandemic, the 2022 event was virtual once again. I delivered a couple of talks on Jakarta EE as well as Diversity & Inclusion.

Fundamentals of Diversity & Inclusion for Technologists

I opened the second day of the conference with my newest talk titled “Fundamentals of Diversity and Inclusion for Technologists”. I believe this is an overdue and critically important subject. I am very grateful to FOSDEM for accepting the talk. The reality for our industry remains that many people either have not yet started or are at the very beginning of their Diversity & Inclusion journey. This talk aims to start the conversation in earnest by explaining the basics. Concepts covered include unconscious bias, privilege, equity, allyship, covering and microaggressions. I punctuate the topic with experiences from my own life and examples relevant to technologists. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

Jakarta EE – Present and Future

Later the same day, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

I am very happy to have had the opportunity to speak at FOSDEM. I hope to contribute again in the future.

by Reza Rahman at February 21, 2022 12:24 AM

Making Readable Code With Dependency Injection and Jakarta CDI

by otaviojava at January 18, 2022 03:53 PM

Learn more about dependency injection with Jakarta CDI and enhance the effectiveness and readability of your code. Link:

by otaviojava at January 18, 2022 03:53 PM

Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability

December 12, 2021 10:00 PM

Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
So, until an official patch arrives, you can update the bundled logger to the latest version in a few simple steps:


cd /opt/infinispan-server-10.1.8.Final/lib/

rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./

Please note: the patch above is not official, but according to initial tests it works with no issues

December 12, 2021 10:00 PM

JPA query methods: influence on performance

by Vladimir Bychkov at November 18, 2021 07:22 AM

The JPA 2.2/Jakarta Persistence 3.0 specification provides several methods to select data from a database. In this article we examine how these methods affect performance

by Vladimir Bychkov at November 18, 2021 07:22 AM

Eclipse Jetty Servlet Survey

by Jesse McConnell at October 27, 2021 01:25 PM

This short 5-minute survey is being presented to the Eclipse Jetty user community to validate conjectures the Jetty developers have about how users will leverage Jakarta EE servlets with the Jetty project. We have some features we are gauging interest in before supporting them in Jetty 12, and your responses will help shape its forthcoming release.

We will summarize results in a future blog.

by Jesse McConnell at October 27, 2021 01:25 PM

Custom Identity Store with Jakarta Security in TomEE

by Jean-Louis Monteiro at September 30, 2021 11:42 AM

In the previous post, we saw how to use the built-in ‘tomcat-users.xml’ identity store with Apache TomEE. While this identity store is inherited from Tomcat and integrated into the Jakarta Security implementation in TomEE, it is usually good for development or simple deployments, but may be too simple or restrictive for production environments. 

This blog will focus on how to implement your own identity store. TomEE can use LDAP or JDBC identity stores out of the box. We will try them out next time.

Let’s say you have your own file store or your own data store like an in-memory data grid, then you will need to implement your own identity store.

What is an identity store?

An identity store is a database or a directory (store) of identity information about a population of users that includes an application’s callers.

In essence, an identity store contains all information such as caller name, groups or roles, and required information to validate a caller’s credentials.

How to implement my own identity store?

This is actually fairly simple with Jakarta Security. The only thing you need to do is create an implementation of `IdentityStore`. All methods in the interface have default implementations, so you only have to implement what you need.

public interface IdentityStore {

   default CredentialValidationResult validate(Credential credential) {
       // ...
   }

   default Set<String> getCallerGroups(CredentialValidationResult validationResult) {
       // ...
   }

   default int priority() {
       // ...
   }

   default Set<ValidationType> validationTypes() {
       // ...
   }

   enum ValidationType {
       VALIDATE, PROVIDE_GROUPS
   }
}

By default, an identity store is used both for validating user credentials and for providing the groups/roles of the authenticated user. Depending on what #validationTypes() returns, you will have to implement #validate(…) and/or #getCallerGroups(…).

#getCallerGroups(…) will receive the result of #validate(…). Let’s look at a very simple example:

public class TestIdentityStore implements IdentityStore {

   @Override
   public CredentialValidationResult validate(Credential credential) {

       if (!(credential instanceof UsernamePasswordCredential)) {
           return INVALID_RESULT;
       }

       final UsernamePasswordCredential usernamePasswordCredential = (UsernamePasswordCredential) credential;
       if (usernamePasswordCredential.compareTo("jon", "doe")) {
           return new CredentialValidationResult("jon", new HashSet<>(asList("foo", "bar")));
       }

       if (usernamePasswordCredential.compareTo("iron", "man")) {
           return new CredentialValidationResult("iron", new HashSet<>(Collections.singletonList("avengers")));
       }

       return INVALID_RESULT;
   }
}


In this simple example, the identity store is hardcoded. Basically, it knows only 2 users, one of them has some roles, while the other has another set of roles.

You can easily extend this example and query a local file, or an in-memory data grid if you need. Or use JPA to access your relational database.
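To make the #validationTypes() split concrete, here is a validate-only store sketched with simplified stand-in types so it compiles on its own; the real IdentityStore, CredentialValidationResult and ValidationType come from the Jakarta Security API:

```java
import java.util.EnumSet;
import java.util.Set;

// Stand-ins for the Jakarta Security types (illustration only).
enum ValidationType { VALIDATE, PROVIDE_GROUPS }

interface SimpleStore {
    // Declares which duties this store performs.
    Set<ValidationType> validationTypes();
}

// A store that only validates credentials; a second store declaring
// PROVIDE_GROUPS would supply the caller's groups.
class ValidateOnlyStore implements SimpleStore {

    boolean validate(String user, String password) {
        return "jon".equals(user) && "doe".equals(password);
    }

    @Override
    public Set<ValidationType> validationTypes() {
        return EnumSet.of(ValidationType.VALIDATE);
    }
}
```

Splitting duties like this lets you, for example, validate passwords against one system while fetching groups from another.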

IMPORTANT: for TomEE to pick it up and use it in your application, the identity store must be a CDI bean.

The complete and runnable example is available under

The post Custom Identity Store with Jakarta Security in TomEE appeared first on Tomitribe.

by Jean-Louis Monteiro at September 30, 2021 11:42 AM

Book Review: Practical Cloud-Native Java Development with MicroProfile

September 24, 2021 12:00 AM

Practical Cloud-Native Java Development with MicroProfile cover

General information

  • Pages: 403
  • Published by: Packt
  • Release date: Aug 2021

Disclaimer: I received this book as a collaboration with Packt and one of the authors (Thanks Emily!)

A book about Microservices for Java Enterprise shops

Year after year, many enterprise companies struggle to embrace the Cloud Native practices that we tend to label as Microservices. However, Microservices is a metapattern that needs a well-defined approach, like:

  • (We aim for) reactive systems
  • (Hence we need a methodology like) 12 Cloud Native factors
  • (Implementing) well-known design patterns
  • (Dividing the system by using) Domain Driven Design
  • (Implementing microservices via) Microservices chassis and/or service mesh
  • (Achieving deployments by) Containers orchestration

Many of these concepts require a considerable amount of context, but some books, tutorials, conferences and YouTube videos tend to focus on specific niche information, making it difficult to have a "cold start" in the microservices space if you have been developing regular/monolithic software. For me, that's the best thing about this book: it provides a holistic view of microservices with Java and MicroProfile for "cold start" developers.

About the book

From a software architect's perspective, MicroProfile could be defined as a set of specifications (APIs) that many microservices chassis implement in order to solve common microservices problems through patterns, lessons learned from well-known Java libraries, and proposals for collaboration between Java Enterprise vendors.

So if you think that sounds a lot like Java EE, that's right: it's the same spirit, but in the microservices space, with participation from many vendors, including vendors from the Java EE space -e.g. Red Hat, IBM, Apache, Payara-.

The main value of this book is the willingness to go beyond the APIs, providing four structured sections that have different writing styles, for instance:

  1. Section 1: Cloud Native Applications - Written as a didactical resource to learn fundamentals of distributed systems with Cloud Native approach
  2. Section 2: MicroProfile Deep Dive - Written as a reference book with code snippets to understand the motivation, functionality and specific details in MicroProfile APIs and the relation between these APIs and common Microservices patterns -e.g. Remote procedure invocation, Health Check APIs, Externalized configuration-
  3. Section 3: End-to-End Project Using MicroProfile - Written as a narrative workshop with source code already available, to understand the development and deployment process of Cloud Native applications with MicroProfile
  4. Section 4: The standalone specifications - Written as a reference book with code snippets, it describes the development of newer specs that could be included in the future under MicroProfile's umbrella

First section

This was by far my favorite section. This section presents a well-balanced overview about Cloud Native practices like:

  • Cloud Native definition
  • The role of microservices and the differences with monoliths and FaaS
  • Data consistency with event sourcing
  • Best practices
  • The role of MicroProfile

I enjoyed this section because my current role is to coach or act as a software architect at different companies, hence this is good material to explain the whole panorama to my coworkers and/or use this book as a quick reference.

My only concern with this section is the final chapter, which presents an application called IBM Stock Trader that (as you can probably guess) IBM uses to demonstrate these concepts using MicroProfile with Open Liberty. The chapter presents an application that combines data sources, front ends and Kubernetes; however, the application becomes useful only in Section 3 (at least that was my perception). Hence you will be going back to this section once you're executing the workshop.

Second section

This section divides the MicroProfile APIs into three levels. The division actually makes a lot of sense, but it became evident to me only during this review:

  1. The base APIs to create microservices (JAX-RS, CDI, JSON-P, JSON-B, Rest Client)
  2. Enhancing microservices (Config, Fault Tolerance, OpenAPI, JWT)
  3. Observing microservices (Health, Metrics, Tracing)
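For a flavor of the second level: MicroProfile Fault Tolerance lets you declare retries with an annotation (@Retry); what that automates boils down to a loop like this hand-rolled, stdlib-only sketch (not the MicroProfile API itself):

```java
import java.util.function.Supplier;

// Hand-rolled retry loop, illustrating what MicroProfile Fault
// Tolerance's @Retry annotation automates declaratively.
class Retry {

    static <T> T withRetry(Supplier<T> action, int maxRetries) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;  // remember the failure and try again
            }
        }
        throw last;  // all attempts failed
    }
}
```

With the spec, this loop (plus delays, jitter and abort conditions) disappears behind a single annotation on the business method.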

Additionally, this section describes the need for Docker and Kubernetes, and how other common approaches -e.g. service mesh- overlap with microservice chassis functionality.

Currently I'm a MicroProfile user, so I already knew most of the APIs. However, I liked the description of the pattern/need that motivated the inclusion of each API; the descriptions should be useful for newcomers, along with the code snippets, which are also available on GitHub.

If you're a Java/Jakarta EE developer you will find the CDI section a little bit superficial, indeed CDI by itself deserves a whole book/fascicle but this chapter gives the basics to start the development process.

Third section

This section switches the writing style to a workshop style. The first chapter is entirely focused on how to compile the sample microservices, how to fulfill the technical requirements and which MicroProfile APIs are used on every microservice.

You must notice that this is not a Java programming workshop; it's a Cloud Native workshop with ready-to-deploy microservices, hence the step-by-step guide covers compilation with Maven, Docker containers, scaling with Kubernetes, operators in OpenShift, etc.

You could explore and change the source code if you wish, but the section is written in a "descriptive" way, assuming the samples' existence.

Fourth section

This section is pretty similar to the second section in the reference book style, hence it also describes the pattern/need that motivated the discussion of the API and code snippets. The main focus of this section is GraphQL, Reactive Approaches and distributed transactions with LRA.

This section will probably change in future editions of the book, because at the time of publishing the Cloud Native Computing Foundation revealed that some observability initiatives will be integrated into the OpenTelemetry project, and MicroProfile is discussing its future approach.

Things that could be improved

As with any review, this is the most difficult section to write, but I think that a second edition should:

  • Extend the CDI section due its foundational status
  • Switch the order of the Stock Tracer presentation
  • Extend the data consistency discussion -e.g. CQRS, Event Sourcing-, hopefully with advances from LRA

The last item is mostly a wish, since I'm always in need of better ways to integrate these common practices with buses like Kafka or Camel using MicroProfile. I know that some implementations -e.g. Helidon, Quarkus- already have extensions for Kafka or Camel, but data consistency is an entire discussion about patterns, tools and best practices.

Who should read this book?

  • Java developers with strong SE foundations and familiarity with the enterprise space (Spring/Java EE)

September 24, 2021 12:00 AM

#156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE

by David Blevins at September 14, 2021 02:07 PM

New podcast episode with Adam Bien & David Blevins: Apple and EJB, @ApacheTomEE, @tomitribe, @JakartaEE, the benefits of code generation with bash, and "over-engineering" - the 156th episode.

The post #156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE appeared first on Tomitribe.

by David Blevins at September 14, 2021 02:07 PM

Jakarta Community Acceptance Testing (JCAT)

by javaeeguardian at July 28, 2021 05:41 AM

Today the Jakarta EE Ambassadors are announcing the start of the Jakarta EE Community Acceptance Testing (JCAT) initiative. The purpose of this initiative is to test Jakarta EE 9/9.1 implementations using your code and/or applications. Although Jakarta EE is extensively tested by the TCK, container-specific tests, and QA, the purpose of JCAT is for developers themselves to test the implementations.

Jakarta EE 9/9.1 did not introduce any new features. In Jakarta EE 9 the APIs changed from javax to jakarta. Jakarta EE 9.1 raised the supported floor to Java 11 for compatible implementations. So what are we testing?

  • Testing individual spec implementations standalone with the new namespace. 
  • Deploying existing Java EE/Jakarta EE applications to EE 9/9.1.
  • Converting Java EE/Jakarta EE applications to the new namespace.
  • Running applications on Java 11 (Jakarta EE 9.1)
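For the namespace-conversion item, the mechanical part of the javax-to-jakarta rename can be sketched with sed (illustrative only; real projects should prefer a dedicated tool such as the Eclipse Transformer, and note that some javax.* packages stay in the JDK and must not be renamed):

```shell
# Illustrative javax -> jakarta rename on a scratch file
# (real migrations should use a dedicated tool).
mkdir -p /tmp/jcat-demo
cat > /tmp/jcat-demo/Hello.java <<'EOF'
import javax.servlet.http.HttpServlet;

public class Hello extends HttpServlet {
}
EOF

# Rename only the EE package prefix, keeping a .bak backup.
sed -i.bak 's/javax\.servlet/jakarta.servlet/g' /tmp/jcat-demo/Hello.java
cat /tmp/jcat-demo/Hello.java
```

After the rewrite the import reads jakarta.servlet.http.HttpServlet, which is exactly the change Jakarta EE 9 introduced.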

Participating in this initiative is easy:

  1. Download a Jakarta EE implementation:
    1. Java 8 / Jakarta EE 9 Containers
    2. Java 11+ / Jakarta EE 9.1 Containers
  2. Deploy code:
    1. Port or run your existing Jakarta EE application
    2. Test out a feature using a starter template

To join this initiative, please take a moment to fill-out the form:

 Sign-up Form 

To submit results or feedback on your experiences with Jakarta EE 9/9.1:

  Jakarta EE 9 / 9.1 Feedback Form


Start Date: July 28, 2021

End Date: December 31st, 2021

by javaeeguardian at July 28, 2021 05:41 AM

Your Voice Matters: Take the Jakarta EE Developer Survey

by dmitrykornilov at April 17, 2021 11:36 AM

The Jakarta EE Developer Survey is in its fourth year and is the industry’s largest open source developer survey. It’s open until April 30, 2021. I am encouraging you to add your voice. Why should you do it? Because the Jakarta EE Working Group needs your feedback. We need to know the challenges you are facing and the suggestions you have about how to make Jakarta EE better.

Last year’s edition surveyed developers to gain on-the-ground understanding and insights into how Jakarta solutions are being built, as well as identifying developers’ top choices for architectures, technologies, and tools. The 2021 Jakarta EE Developer Survey is your chance to influence the direction of the Jakarta EE Working Group’s approach to cloud native enterprise Java.

The results from the 2021 survey will give software vendors, service providers, enterprises, and individual developers in the Jakarta ecosystem updated information about Jakarta solutions and service development trends and what they mean for their strategies and businesses. Additionally, the survey results also help the Jakarta community at the Eclipse Foundation better understand the top industry focus areas and priorities for future project releases.

A full report based on the survey results will be made available to all participants.

The survey takes less than 10 minutes to complete. We look forward to your input. Take the survey now!

by dmitrykornilov at April 17, 2021 11:36 AM

Less is More? Evolving the Servlet API!

by gregw at April 13, 2021 06:19 AM

With the release of the Servlet API 5.0 as part of Eclipse Jakarta EE 9.0, the standardization process has completed its move from the now-defunct Java Community Process (JCP) to being fully open source at the Eclipse Foundation, including the new Jakarta EE Specification Process (JESP) and the transition of the APIs from the javax.* to the jakarta.* namespace. The move represents a huge amount of work from many parties, but ultimately it was all meta work, in that the Servlet 5.0 API is identical to the 4.0 API in all regards but name, licenses, and process, i.e. nothing functional has changed.

But with the transition behind us, the Servlet API project is now free to develop the standard into a 5.1 or 6.0 release. So in this blog I will put forward my ideas for how we should evolve the Servlet specification; specifically, I think that before we add new features to the API, it is time to remove some.

Backward Compatibility

Version 1.0 was created in 1997, and it is amazing that, over two decades later, a Servlet written against that version should still run in the very latest EE container. So why, with such a great record of backward compatibility, should we even contemplate introducing breaking changes to future versions of the Servlet API specification? Let’s consider some of the reasons a developer might choose to use EE Servlets over other available technologies:

Not all web applications need high performance, and when they do, it is seldom the Servlet container itself that is the bottleneck. Yet pure performance remains a key selection criterion for containers, as developers either wish to keep open the possibility of high request rates or need every spare cycle available to help their application meet an acceptable quality of service. There is also the environmental impact of the carbon footprint of unnecessary cycles wasted in the trillions upon trillions of HTTP requests executed. Thus application containers always compete on performance, but unfortunately many of the features added over the years have had detrimental effects on overall performance, as they often break the “No Taxation without Representation” principle: there should not be a cost imposed on all requests for a feature used by <1% of them.
Developers seek to have current best-practice features available in their container. This may be as simple as changing from byte[] to ByteBuffers or Collections, or it may be more fundamental integration of things such as dependency injection, coding by convention, asynchronous, reactive, etc. The specification has done a reasonable job of supporting such features over the years, but mistakes have been made and some features now clash, causing ambiguity and complexity. Ultimately, feature integration can be an N² problem, so reducing or simplifying existing features can greatly reduce the complexity of introducing new ones.
The availability of multiple implementations of the Servlet specification is a key selling point. However, the very same issues of poor integration of many features have resulted in too many dark corners of the specification where the expected behavior of a container is simply not defined, so portability is by no means guaranteed. Too often we find ourselves needing to be bug-for-bug compatible with other implementations rather than following the actual specification.
Any radical departure from the core Servlet API will force developers away from what they know and push them to evaluate alternatives. But there are many non-core features in the API, and this blog will make the case that some features can be removed and/or simplified while hardly being noticed by the bulk of applications. My aim with this blog is that your typical Servlet developer will think: “why is he making such a big fuss about something I didn’t know was there”, whilst your typical Servlet container implementer will think “Exactly! That feature is such a PITA!!!”.

If the Servlet API is to continue to be relevant, then it needs to be able to compete with state-of-the-art HTTP servers that do not support decades of EE legacy. Legacy can be both a strength and a weakness, and I believe now is the time to focus on the former. The namespace break from java.* to jakarta.* has already introduced a discontinuity in backward compatibility. Keeping 5.0 identical in all but name to 4.0 was the right thing to do to support automatic porting of applications. However, it has also given developers a reason to consider alternatives, so now is the time to act to ensure that Servlet 6.0 is a good basis for the future of EE Servlets.

Getting Cross about Cross-Context Dispatch

Let’s just all agree upfront, without going into the details, that cross-context dispatch is a bad thing. For the purposes of the rest of this blog, I’m ignoring the many issues of cross-context dispatch.  I’ll just say that every issue I will discuss below becomes even more complex when cross-context dispatch is considered, as it introduces: additional class loaders; different session values in the same session ID space; different authentication realms; authorization bypass. Don’t even get me started on the needless mind-bending complexities of a context that forwards to another then forwards back to the original…

Modern web applications are now often broken up into many microservices, so the concept of one webapp invoking another is not in itself bad, but the assumption that those services are co-located in the same container instance is neither general nor flexible. By all means, the Servlet API should support a mechanism to forward to or include other resources, but ideally this should be done in a way that works equally for co-resident, co-located, and remote resources.

So let’s just assume cross-context dispatch is already dead.

Exclude Include

The concept of including another resource in a response should be straightforward, but the specification of RequestDispatcher.include(...) is just bizarre!

@WebServlet(urlPatterns = {"/servletA/*"})
public static class ServletA extends HttpServlet {
    @Override protected void doGet(HttpServletRequest request,
                                   HttpServletResponse response) throws ServletException, IOException {
        request.getRequestDispatcher("/servletB/infoB").include(request, response);
    }
}

The ServletA above includes ServletB in its response. However, whilst within ServletB, any calls to getServletPath() or getPathInfo() will still return the original values used to call ServletA, rather than the “/servletB” or “/infoB” values for the target Servlet (as is done for a call to forward(...)). Instead, the container must set an ever-growing list of request attributes to describe the target of the include, and any non-trivial Servlet that acts on the actual URI path must do something like:

protected void doGet(HttpServletRequest request, HttpServletResponse response)
    throws ServletException, IOException {
    String servletPath;
    String pathInfo;
    if (request.getAttribute(RequestDispatcher.INCLUDE_REQUEST_URI) != null) {
        servletPath = (String)request.getAttribute(RequestDispatcher.INCLUDE_SERVLET_PATH);
        pathInfo = (String)request.getAttribute(RequestDispatcher.INCLUDE_PATH_INFO);
    } else {
        servletPath = request.getServletPath();
        pathInfo = request.getPathInfo();
    }
    String pathInContext = URIUtil.addPaths(servletPath, pathInfo);
    // ...
}

Most Servlets do not do this, so they cannot correctly be the target of an include. The Servlets that do correctly check are more often than not wasting CPU cycles needlessly for the vast majority of requests that are not included.

Meanwhile, the container itself must set (and then reset) at least 5 attributes, just in case the target resource might look up one of them. Furthermore, the container must disable most of the APIs on the response object during an include, to prevent the included resource from setting headers. So the included Servlet must be trusted to know that it is being included in order to serve the correct resource, but is then not trusted to avoid calling APIs that are inconsistent with that knowledge. Servlets should not need to know the details of how they were invoked in order to generate a response; they should just use the paths and parameters of the request passed to them, regardless of how that response will be used.

Ultimately, there is no need for an include API given that the specification already has a reasonable forward mechanism that supports wrapping. The ability to include one resource in the response of another can be provided with a basic wrapper around the response:

@WebServlet(urlPatterns = {"/servletA/*"})
public static class ServletA extends HttpServlet {
    @Override protected void doGet(HttpServletRequest request,
                                   HttpServletResponse response) throws ServletException, IOException {
        request.getRequestDispatcher("/servletB/infoB")
            .forward(request, new IncludeResponseWrapper(response));
    }
}

Such a response wrapper could also do useful things like ensuring the included content-type is correct and better dealing with error conditions, rather than ignoring an attempt to send a 500 status. To assist with porting, the include can be deprecated and its implementation replaced with a request wrapper that reinstates the deprecated request attributes:

default void include(ServletRequest request, ServletResponse response)
    throws ServletException, IOException {
    forward(new Servlet5IncludeAttributesRequestWrapper(request),
            new IncludeResponseWrapper(response));
}
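As a rough sketch of what such an IncludeResponseWrapper could look like (hypothetical code, using a minimal stand-in interface rather than the real jakarta.servlet types): content written by the included resource passes through, attempts to change headers are ignored, and an attempt to send an error status is surfaced as an exception instead of being silently swallowed.

```java
import java.util.Map;

// Minimal stand-in for the parts of HttpServletResponse relevant here.
interface Response {
    void setStatus(int sc);
    void setHeader(String name, String value);
    int getStatus();
    Map<String, String> getHeaders();
}

// Hypothetical include wrapper: the outer response owns status and headers.
class IncludeResponseWrapper implements Response {
    private final Response wrapped;

    IncludeResponseWrapper(Response wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void setStatus(int sc) {
        // a failed include should not be silently ignored
        if (sc >= 500)
            throw new IllegalStateException("included resource failed: " + sc);
        // otherwise ignore: the outer response owns the status
    }

    @Override
    public void setHeader(String name, String value) {
        // ignore: the outer response owns the headers
    }

    @Override
    public int getStatus() {
        return wrapped.getStatus();
    }

    @Override
    public Map<String, String> getHeaders() {
        return wrapped.getHeaders();
    }
}
```

A real implementation would instead extend HttpServletResponseWrapper and make similar decisions for sendError, reset, content-type and flushing.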

Dispatch the DispatcherType

The inclusion of the method Request.getDispatcherType() in the Servlet API is almost an admission of defeat: the specification got it wrong in so many ways that a Servlet is required to know how and/or why it is being invoked in order to function correctly. Why must a Servlet know its DispatcherType? Probably so it knows it has to check the attributes for the corresponding values. But what if an error page is generated asynchronously by including a resource that forwards to another? In such a pathological case, the request will contain attributes for ERROR, ASYNC, and FORWARD, yet the type will just be FORWARD.

The concept of DispatcherType should be deprecated and it should always return REQUEST.  Backward compatibility can be supported by optionally applying a wrapper that determines the deprecated DispatcherType only if the method is called.
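A sketch of that opt-in compatibility idea (stand-in types are used here, not the real servlet API): the wrapper derives the deprecated DispatcherType lazily from the request's dispatch history (assumed to be available from the container), so unwrapped requests pay no cost and always report REQUEST.

```java
import java.util.List;
import java.util.function.Supplier;

// Stand-in for jakarta.servlet.DispatcherType.
enum DispatcherType { REQUEST, FORWARD, INCLUDE, ASYNC, ERROR }

// Stand-in for the request: by default always reports REQUEST.
class Request {
    DispatcherType getDispatcherType() {
        return DispatcherType.REQUEST;
    }
}

// Opt-in compatibility wrapper: computes the deprecated answer only when
// asked, from a supplier of the dispatch history.
class LegacyDispatcherTypeRequest extends Request {
    private final Supplier<List<DispatcherType>> history;

    LegacyDispatcherTypeRequest(Supplier<List<DispatcherType>> history) {
        this.history = history;
    }

    @Override
    DispatcherType getDispatcherType() {
        List<DispatcherType> h = history.get();   // computed lazily
        return h.isEmpty() ? DispatcherType.REQUEST : h.get(h.size() - 1);
    }
}
```

As in the pathological ERROR/ASYNC/FORWARD case above, the last dispatch in the history wins.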

Unravelling Wrappers

A key feature that really needs to be revised is 6.2.2 Wrapping Requests and Responses, introduced in Servlet 2.3. The core concept of wrappers is sound, but the requirement of Wrapper Object Identity (see Object Identity Crisis below) has significant impacts. But first let’s look at a simple example of a request wrapper:

public static class ForcedUserRequest extends HttpServletRequestWrapper {
    private final Principal forcedUser;
    public ForcedUserRequest(HttpServletRequest request, Principal forcedUser) {
        super(request);
        this.forcedUser = forcedUser;
    }
    @Override public Principal getUserPrincipal() {
        return forcedUser;
    }
    @Override public boolean isUserInRole(String role) {
        return forcedUser.getName().equals(role);
    }
}

This request wrapper overrides the existing getUserPrincipal() and isUserInRole(String) methods to force a user identity. The wrapper can be applied in a filter or in a Servlet as follows:

@WebServlet(urlPatterns = {"/servletA/*"})
public static class ServletA extends HttpServlet {
    @Override protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
        request.getRequestDispatcher("/servletB" + request.getPathInfo())
            .forward(new ForcedUserRequest(request, new UserPrincipal("admin")), response);
    }
}

Such wrapping is an established pattern in many APIs and is mostly without significant problems. For Servlets there are some issues: it should be better documented whether the wrapped user identity is propagated if ServletB makes any EE calls (I think not?); and some APIs have become too complex to sensibly wrap (e.g. ServletInputStream with non-blocking IO). But even with these issues, there are good safe usages of this wrapping to override existing methods.

Object Identity Crisis!

The Servlet specification allows for wrappers to do more than just override existing methods! In 6.2.2, the specification says that:

“… the developer not only has the ability to override existing methods on the request and response objects, but to provide new API… “

So the example above could introduce new API to access the original user principal:

public static class ForcedUserRequest extends HttpServletRequestWrapper {
    // ... constructor, getUserPrincipal & isUserInRole as above
    public Principal getOriginalUserPrincipal() {
        return super.getUserPrincipal();
    }
    public boolean isOriginalUserInRole(String role) {
        return super.isUserInRole(role);
    }
}

In order for targets to be able to use these new APIs then they must be able to downcast the passed request/response to the known wrapper type:

@WebServlet(urlPatterns = {"/servletB/*"})
public static class ServletB extends HttpServlet {
    @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
        ForcedUserRequest fur = (ForcedUserRequest)req;
        resp.getWriter().printf("user=%s orig=%s wasAdmin=%b%n",
            fur.getUserPrincipal().getName(),
            fur.getOriginalUserPrincipal().getName(),
            fur.isOriginalUserInRole("admin"));
    }
}

This downcast will only work if the wrapped object is passed through the container without any further wrapping, thus the specification requires “wrapper object identity”:

… the container must ensure that the request and response object that it passes to the next entity in the filter chain, or to the target web resource if the filter was the last in the chain, is the same object that was passed into the doFilter method by the calling filter. The same requirement of wrapper object identity applies to the calls from a Servlet or a filter to RequestDispatcher.forward  or  RequestDispatcher.include, when the caller wraps the request or response objects.

This “wrapper object identity” requirement means that the container is unable to itself wrap requests and responses as they are passed to filters and servlets. This restriction has, directly and indirectly, a huge impact on the complexity, efficiency, and correctness of Servlet container implementations, all for very dubious and redundant benefits:

Bad Software Components
In the example above, ServletB is a very bad software component, as it cannot be invoked simply by respecting the signature of its methods. The caller must have a priori knowledge that the passed request will be downcast, and any other caller will be met with a ClassCastException. This defeats the whole point of an API specification like Servlets, which is to define good software components that can be variously assembled according to their API contracts.
No Multiple Concerns
It is not possible for multiple concerns to wrap request/responses. If another filter applies its own wrappers, then the downcast will fail. The requirement for “wrapper object identity” requires the application developer to have total control over all aspects of the application, which can be difficult with discovered web fragments and ServletContainerInitializers.
Mutable Requests
By far the biggest impact of “wrapper object identity” is that it forces requests to be mutable! Since the container is not allowed to do its own wrapping within RequestDispatcher.forward(...), the container must make the original request object mutable so that it can change the value returned from getServletPath() to reflect the target of the dispatch. It is this mutability that has significant impacts on complexity, efficiency, and correctness:

  • Mutating the underlying request makes the example implementation of isOriginalUserInRole(String) incorrect, because it calls super.isUserInRole(String), whose result can be mutated if the target Servlet has a run-as configuration. Thus this method will inadvertently check against the target identity rather than the original one.
  • There is the occasional need for a target Servlet to know details of the original request (often for debugging), but since the original request can mutate, it cannot be used. Instead, an ever-growing list of request attributes must be set and then cleared on the original request, just in case the target needs one of them. A trivial forward of a request can thus require at least 12 Map operations just to make the original state available, even though it is very seldom required. Also, some aspects of the event history of a request are not recoverable from the attributes: the isUserInRole method; the original target of an include that does another include.
  • Mutable requests cannot be safely passed to asynchronous processes, because there will be a race between the other thread's calls to request methods and any mutations required as the request propagates through the Servlet container (see the “Off to the Races” example below). As a result, asynchronous applications SHOULD copy all the values from the request that they MIGHT later need. More often than not they don't, and many work by good luck but may fail if timing on the server changes.
  • Using immutable objects can have significant benefits, by allowing the JVM optimizer and GC to know that field values will not change. By forcing containers to use mutable request implementations, the specification removes the opportunity to access these benefits. Worse still, the complexity of the resulting request objects makes them rather heavyweight, and thus they are often recycled in object pools to save the cost of creation. Such pooled objects used in asynchronous environments can be a recipe for disaster, as asynchronous processes may reference a request object after it has been recycled into another request.
Note that new APIs can instead be passed on objects set as request attribute values: such objects pass through any number of other wrappers, can coexist with other new APIs in attributes, and do not require the core request methods to have mutable returns.
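To illustrate that attribute-based alternative (a hedged sketch: OriginalIdentity and the attribute name are invented for this example, and a plain Map stands in for the request's attribute map), the caller attaches the extra API as an attribute value and the target looks it up, so the mechanism survives any number of intervening wrappers and degrades gracefully to null when absent.

```java
import java.security.Principal;
import java.util.Map;

// Hypothetical extra API, carried as an attribute value instead of a
// wrapper downcast.
record OriginalIdentity(Principal principal) {
    static final String ATTRIBUTE = "com.example.originalIdentity";

    boolean isInRole(String role) {
        return principal.getName().equals(role);
    }
}

class AttributeApiDemo {
    // Caller side: attach the extra API before dispatching.
    static void beforeForward(Map<String, Object> attributes, Principal original) {
        attributes.put(OriginalIdentity.ATTRIBUTE, new OriginalIdentity(original));
    }

    // Target side: look the API up. This works through any number of
    // wrappers because attributes are shared, and returns null when the
    // extra API simply is not present.
    static OriginalIdentity lookup(Map<String, Object> attributes) {
        return (OriginalIdentity) attributes.get(OriginalIdentity.ATTRIBUTE);
    }
}
```

With the real API, the caller would use ServletRequest.setAttribute and the target ServletRequest.getAttribute, with no downcast of the request itself.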

The “wrapper object identity” requirement has little utility yet significant impact on the correctness and performance of implementations. It significantly complicates the container for a feature that can be rendered unusable by a wrapper applied by another filter. It should be removed from Servlet 6.0, and requests passed in by the container should be immutable.

Asynchronous Life Cycle

A bit of history

Jetty continuations were a non-standard feature introduced in Jetty-6 (around 2005) to support thread-less waiting for asynchronous events (e.g. typically another HTTP request in a chat room). Because the Servlet API had not been designed for thread-safe access from asynchronous processes, the continuations feature did not attempt to let arbitrary threads call the Servlet API.  Instead, it has a suspend/resume model that once the asynchronous wait was over, the request was re-dispatched back into the Servlet container to generate a response, using the normal blocking Servlet API from a well-defined context.

When the continuation feature was standardized in the Servlet 3.0 specification, the Jetty suspend/resume model was supported with the ServletRequest.startAsync() and AsyncContext.dispatch() methods. However (against our strongly given advice), a second asynchronous model was also enabled, represented by ServletRequest.startAsync() followed by AsyncContext.complete(). With the start/complete model, instead of generating the response by dispatching a container-managed thread, serialized on the request, to the Servlet container, arbitrary asynchronous threads could generate the response by directly accessing the request/response objects, calling the AsyncContext.complete() method once the response had been fully generated to end the cycle. The result is that the entire API, designed not to be thread-safe, was now exposed to concurrent calls. Unfortunately, there was (and is) very little in the specification to help resolve the many races and ambiguities that resulted.

Off to the Races

The primary race introduced by start/complete is the one described above, caused by the mutable requests that “wrapper object identity” forces. Consider the following asynchronous Servlet:

@WebServlet(urlPatterns = {"/async/*"}, asyncSupported = true)
public static class AsyncServlet extends HttpServlet {
    @Override protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
        AsyncContext async = request.startAsync();
        PrintWriter out = response.getWriter();
        async.start(() ->
            out.printf("path=%s special=%b%n",
                request.getServletPath(),
                request.isUserInRole("special")));
    }
}

If invoked via a RequestDispatcher.forward(...), then the result produced by this Servlet is a race: will the thread dispatched to execute the lambda run before or after the calling thread returns from the doGet method (and any applied filters), restoring the pre-forward values for the path and role? Not only could the path and role be reported for either the target or the caller, but the race could even split them so they are reported inconsistently. To avoid this race, asynchronous Servlets must copy any value that they may use from the request before starting the asynchronous thread, which is needless complexity and expense. Many Servlets do not actually do this and just rely on happenstance to work correctly.

This problem is the result of the start/complete lifecycle of asynchronous Servlets permitting and encouraging arbitrary threads to call existing APIs that were not designed to be thread-safe. The issue is avoided if the request object passed to doGet is immutable: if it is the target of a forward, it will always act as that target. However, there are other issues of the asynchronous lifecycle that cannot be resolved by immutability alone.

Out of Time

The example below shows a very typical race that exists in many applications between a timeout and asynchronous processing:

protected void doGet(HttpServletRequest request,
                     HttpServletResponse response) throws IOException {
    AsyncContext async = request.startAsync();
    PrintWriter out = response.getWriter();
    async.addListener(new AsyncListener() {
        @Override public void onTimeout(AsyncEvent asyncEvent) throws IOException {
            out.printf("Request %s timed out!%n", request.getServletPath());
            out.printf("timeout=%dms%n", async.getTimeout());
            async.complete();
        }
        // ... other AsyncListener methods omitted
    });
    CompletableFuture<String> logic = someBusinessLogic();
    logic.thenAccept(answer -> {
        out.printf("Request %s handled OK%n", request.getServletPath());
        out.printf("The answer is %s%n", answer);
        async.complete();
    });
}

Because the handling of the result of the business logic may be executed by a non-container-managed thread, it may run concurrently with the timeout callback. The result can be an incorrect status code and/or interleaved response content. Even if both lambdas grab a lock to mutually exclude each other, the results are sub-optimal, as both will eventually execute and one will ultimately throw an IllegalStateException, causing extra processing and a spurious exception that may confuse developers/deployers.

The current specification of the asynchronous life cycle gives container implementations the worst of both worlds. On one hand, they must implement the complexity of request-serialized events, so that for a given request there can only be a single container-managed thread in service(...), doFilter(...), onWritePossible(), onDataAvailable(), onAllDataRead() and onError(); yet on the other hand, an arbitrary application thread is permitted to concurrently call the API, requiring additional thread-safety complexity. All the benefits of request-serialized threads are lost by the ability of arbitrary other threads to call the Servlet APIs.

Request Serialized Threads

The fix is twofold: firstly, make more of the Servlet API immutable (as discussed above) so it is safe to call from other threads; secondly, and most importantly, any API that does mutate state should only be callable from request-serialized threads! The latter might seem a bit draconian, as it will make the lambda passed to thenAccept in the example above throw an IllegalStateException when it tries to setStatus(int) or call complete(). However, there are huge benefits in complexity and correctness, and only some simple changes are needed to rework existing code.

Any code running within a call to service(...), doFilter(...), onWritePossible(), onDataAvailable(), onAllDataRead() and onError() is already in a request-serialized thread, and thus requires no change. It is only code executed by threads managed by other asynchronous components (e.g. the lambda passed to thenAccept() above) that needs to be scoped. There is already the method AsyncContext.start(Runnable), which allows a non-container thread to access the context (i.e. classloader) associated with the request. An additional, similar method AsyncContext.dispatch(Runnable) can be provided that not only scopes the execution but mutually excludes it and serializes it against any call to the methods listed above and any other dispatched Runnable. The Runnables passed may be executed within the scope of the dispatch call if possible (making the thread momentarily container-managed and request-serialized) or scheduled for later execution. Thus calls that mutate the state of a request can only be made from threads that are serialized.

To make accessing the dispatch(Runnable) method more convenient, an executor with the same semantics can be provided via AsyncContext.getExecutor(). The example above can now be simply updated:

protected void doGet(HttpServletRequest request,
                     HttpServletResponse response) throws IOException {
    AsyncContext async = request.startAsync();
    PrintWriter out = response.getWriter();
    async.addListener(new AsyncListener() {
        @Override public void onTimeout(AsyncEvent asyncEvent) throws IOException {
            out.printf("Request timed out after %dms%n", async.getTimeout());
            async.complete();
        }
        // ... other AsyncListener methods omitted
    });
    CompletableFuture<String> logic = someBusinessLogic();
    logic.thenAcceptAsync(answer -> {
        out.printf("The answer is %s%n", answer);
        async.complete();
    }, async.getExecutor());
}

Because the AsyncContext.getExecutor() executor is used to invoke the business logic consumer, the timeout and business logic response methods are mutually excluded. Moreover, because they are serialized by the container, the request state can be checked between them, so that if the business logic has completed the request, the timeout callback will never be called, even if the underlying timer expires while the response is being generated. Conversely, if the business logic result arrives after the timeout, the lambda that generates the response will never be called. Because both of the tasks in this example call complete(), only one of them will ever be executed.
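The request-serialization guarantee behind dispatch(Runnable) and getExecutor() could be implemented along these lines (a minimal sketch, not a real container implementation): submitted tasks are queued, and the thread that finds the executor idle drains the queue, so tasks never run concurrently and execute in submission order.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Sketch of a "serialized" executor: tasks never run concurrently and
// execute in submission order. The submitting thread runs the queue if it
// is the one that found the executor idle; otherwise the task is just
// enqueued for the draining thread to pick up.
class SerializedExecutor implements Executor {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private boolean running;

    @Override
    public void execute(Runnable task) {
        synchronized (this) {
            tasks.add(task);
            if (running)
                return;        // an earlier caller is draining the queue
            running = true;
        }
        drain();
    }

    private void drain() {
        while (true) {
            Runnable task;
            synchronized (this) {
                task = tasks.poll();
                if (task == null) {
                    running = false;
                    return;
                }
            }
            task.run();        // momentarily "request serialized"
        }
    }
}
```

A task submitted while another is running is simply enqueued; a container could use the same mechanism to serialize onTimeout and onDataAvailable callbacks against application lambdas, checking the request state between tasks.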

And Now You’re Complete

In the example below, a non-blocking read listener has been set on the request input stream, thus a callback to onDataAvailable() has been scheduled to occur at some time in the future.  In parallel, an asynchronous business process has been initiated that will complete the response:

protected void doGet(HttpServletRequest request,
                     HttpServletResponse response) throws IOException {
    AsyncContext async = request.startAsync();
    request.getInputStream().setReadListener(new MyReadListener());
    CompletableFuture<String> logicB = someBusinessLogicB();
    PrintWriter out = response.getWriter();
    logicB.thenAcceptAsync(b -> {
        out.printf("The answer for %s is B=%s%n", request.getServletPath(), b);
        async.complete();
    }, async.getExecutor());
}

The example uses the APIs proposed above, so that any call to complete() is mutually excluded and serialized with the calls to doGet and onDataAvailable(...). Even so, the current specification is unclear whether complete() should prevent any future callback to onDataAvailable(...), or whether its effect should be delayed until the callback is made (or times out). Given that the actions can now be request-serialized, the spec should require that once a request-serialized thread that has called complete() returns, the request cycle is complete and there will be no callbacks other than onComplete(...), thus cancelling any pending non-blocking IO callbacks.

To Be Removed

Before extending the Servlet specification, I believe the following existing features should be removed or deprecated:

  • Cross-context dispatch is deprecated and the existing methods return null. Once a request is matched to a context, it will only ever be associated with that context, and the getServletContext() method will return the same value no matter what state the request is in.
  • The “Wrapper Object Identity” requirement is removed, and the request object is required to be immutable with regard to the methods affected by a dispatch, so that it may safely be referenced by asynchronous threads.
  • RequestDispatcher.include(...) is deprecated and replaced with utility response wrappers; its implementation is changed to use a request wrapper that simulates the existing attributes.
  • The special attributes for FORWARD, INCLUDE and ASYNC are removed from normal dispatches. Utility wrappers will be provided that can simulate these attributes if needed for backward compatibility.
  • The getDispatcherType() method is deprecated and returns REQUEST, unless a utility wrapper is used to replicate the old behavior.
  • Servlet API methods that mutate state will only be callable from request-serialized container-managed threads and will otherwise throw IllegalStateException. New AsyncContext.dispatch(Runnable) and AsyncContext.getExecutor() methods will provide access to request serialization for arbitrary threads/lambdas/Runnables.

With these changes, I believe that many web applications will not be affected, and most of the remainder could be updated with minimal effort. Furthermore, utility filters can be provided that apply wrappers to restore almost all deprecated behaviors other than wrapper object identity. In return for the slight break in backward compatibility, these changes would yield significantly simpler and more efficient Servlet container implementations. I believe that only with such simplifications can we have a stable base on which to build new features into the Servlet specification. If we can’t take out the cruft now, then when?

The plan is to follow this blog up with another proposing some more rationalisation of features (I’m looking at you, sessions and authentication), before a further blog proposing some new features and future directions.

by gregw at April 13, 2021 06:19 AM

Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException

April 02, 2021 09:00 PM

Wildfly provides great out-of-the-box load balancing support via the Undertow and modcluster subsystems.
Unfortunately, when HTTP header sizes are large enough (close to 16K), which is common in the JWT era, the following error occurs:

ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.nio.BufferOverflowException
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(
 at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(
 at io.undertow.client.ajp.AjpClientConnection.initiateRequest(
 at io.undertow.client.ajp.AjpClientConnection.sendRequest(
 at io.undertow.server.handlers.proxy.ProxyHandler$
 at io.undertow.util.SameThreadExecutor.execute(
 at io.undertow.server.HttpServerExchange.dispatch(
Caused by: java.nio.BufferOverflowException
 at java.nio.Buffer.nextPutIndex(
 at java.nio.DirectByteBuffer.put(
 at io.undertow.protocols.ajp.AjpUtils.putString(
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(
 at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(
 at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(

The same request sent directly to the backend server works well. I tried to play with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.

A possible solution here is to switch the protocol from AJP to HTTP, which can be a bit less efficient but works well with big headers:

/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)

April 02, 2021 09:00 PM

Oracle Joins MicroProfile Working Group

by dmitrykornilov at January 08, 2021 06:02 PM

I am very pleased to announce that since the beginning of 2021 Oracle is officially a part of MicroProfile Working Group. 

In Oracle, we believe in standards and in supporting them in our products. Standards are born in blood, toil, tears, and sweat. Standards are the result of collaboration among experts, vendors, customers and users. Standards bring the advantage of portability between different implementations, which makes standards-based solutions vendor-neutral.

We created Java EE which was the first enterprise Java standard. We opened it and moved it to the Eclipse Foundation to make its development truly open source and vendor neutral. Now we are joining MicroProfile which in the last few years has become a leading standard for cloud-native solutions.

We’ve been supporting MicroProfile for years before officially joining the Working Group. We created project Helidon which has supported MicroProfile APIs since MicroProfile version 1.1. Contributing to the evolution and supporting new versions of MicroProfile is one of our strategic goals.

I like the community-driven and enjoyable approach to creating cloud-native APIs that MicroProfile invented. I believe that our collaboration will be effective and that together we will push MicroProfile forward to a higher level.

by dmitrykornilov at January 08, 2021 06:02 PM

An introduction to MicroProfile GraphQL

by Jean-François James at November 14, 2020 05:05 PM

If you’re interested in MicroProfile and APIs, please check out my presentation Boost your APIs with GraphQL. I did it at EclipseCon 2020. Thanks to the organizers for the invitation! The slide deck is on Slideshare. I’ve tried to be high-level and explain how GraphQL differentiates from REST and how easy it is to implement a […]

by Jean-François James at November 14, 2020 05:05 PM

General considerations on updating Enterprise Java projects from Java 8 to Java 11

September 23, 2020 12:00 AM


The purpose of this article is to consolidate all difficulties and solutions that I've encountered while updating Java EE projects from Java 8 to Java 11 (and beyond). It's a known fact that Java 11 has a lot of new characteristics that are revolutionizing how Java is used to create applications, despite being problematic under certain conditions.

This article is focused on Java/Jakarta EE but it could be used as basis for other enterprise Java frameworks and libraries migrations.

Is it possible to update Java EE/MicroProfile projects from Java 8 to Java 11?

Yes, absolutely. My team has been able to migrate at least two mature enterprise applications, each with more than three years in development:

A Management Information System (MIS)

Nabenik MIS

  • Time for migration: 1 week
  • Modules: 9 EJB, 1 WAR, 1 EAR
  • Classes: 671 and counting
  • Code lines: 39480
  • Project's beginning: 2014
  • Original platform: Java 7, Wildfly 8, Java EE 7
  • Current platform: Java 11, Wildfly 17, Jakarta EE 8, MicroProfile 3.0
  • Web client: Angular

Mobile POS and Geo-fence

Medmigo REP

  • Time for migration: 3 weeks
  • Modules: 5 WAR/MicroServices
  • Classes: 348 and counting
  • Code lines: 17160
  • Project's beginning: 2017
  • Original platform: Java 8, Glassfish 4, Java EE 7
  • Current platform: Java 11, Payara (Micro) 5, Jakarta EE 8, MicroProfile 3.2
  • Web client: Angular

Why should I ever consider migrating to Java 11?

As with everything in IT, the answer is "It depends . . .". However, there are a couple of good reasons to do it:

  1. Reduce attack surface by updating project dependencies proactively
  2. Reduce technical debt and most importantly, prepare your project for the new and dynamic Java world
  3. Take advantage of performance improvements on new JVM versions
  4. Take advantage from improvements of Java as programming language
  5. Sleep better by having a more secure, efficient and quality product

Why Java updates from Java 8 to Java 11 are considered difficult?

From my experience with many teams, because of this:

Changes in Java release cadence

Java Release Cadence

Currently, there are two big branches in the JVM release model:

  • Java LTS: With a fixed lifetime (3 years) of long-term support, Java 11 being the latest one
  • Java current: A fast-paced Java version that is available every 6 months on a predictable calendar, Java 15 being the latest (at least at the time of publishing this article)

The rationale behind this decision is that Java needed dynamism in delivering new characteristics to the language, API, and JVM, with which I really agree.

Nevertheless, it is a known fact that most enterprise frameworks seek out Java for stability. Consequently, most of these frameworks target Java 11 as the "certified" Java Virtual Machine for deployments.

Usage of internal APIs

Java 9

Errata: I fixed and simplified this section following an interesting discussion on reddit :)

Java 9 introduced changes to internal classes that weren't meant for usage outside the JVM, preventing/breaking the functionality of popular libraries -e.g. Hibernate, ASM, Hazelcast- that made use of these internals to gain performance.

Hence, to avoid this, internal APIs in JDK 9 are inaccessible at compile time (but still accessible with --add-exports), and they remain accessible at run time if they were accessible in JDK 8; in a future release they will become inaccessible. In the long run this change will reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these internal APIs.

Finally, during the introduction of JEP-260, internal APIs were classified as critical and non-critical. Consequently, critical internal APIs for which replacements are introduced in JDK 9 are deprecated in JDK 9 and will be either encapsulated or removed in a future release.

However, you are inside the danger zone if:

  1. Your project compiles against pre-Java 9 dependencies that rely on critical internals
  2. You bundle pre-Java 9 dependencies that rely on critical internals
  3. You run your applications over a runtime -e.g. Application Servers- that includes pre-Java 9 transitive dependencies

Any of these situations means that your application has a probability of not being compatible with JVMs above Java 8. At least not without updating your dependencies, which could also uncover breaking changes in library APIs, creating mandatory refactors.

Removal of CORBA and Java EE modules from OpenJDK


Also during the Java 9 release, many Java EE and CORBA modules were marked as deprecated, being effectively removed in Java 11, specifically:

  • java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata)
  • java.xml.bind (JAXB)
  • java.activation (JAF)
  • java.xml.ws.annotation (Common Annotations)
  • java.corba (CORBA)
  • java.transaction (JTA)
  • java.se.ee (Aggregator module for the six modules above)
  • jdk.xml.ws (Tools for JAX-WS)
  • jdk.xml.bind (Tools for JAXB)

As JEP-320 states, many of these modules were included in Java 6 as a convenience to generate/support SOAP Web Services, but they eventually took off as independent projects that are already available at Maven Central. Therefore, it is necessary to include them as dependencies if your project implements services with JAX-WS and/or depends on any library/utility that was previously included.

IDEs and application servers


In the same way as libraries, Java IDEs had to catch up with the introduction of Java 9 on at least three levels:

  1. IDEs as Java programs should be compatible with Java Modules
  2. IDEs should support new Java versions as programming language -i.e. Incremental compilation, linting, text analysis, modules-
  3. IDEs are also the basis for an ecosystem of plugins that are developed independently. Hence, if a plugin has any transitive dependency with issues over JPMS, it also has to be updated

Overall, none of the Java IDEs guaranteed that plugins would work on JVMs above Java 8. Therefore you could possibly run your IDE on Java 11, but a legacy/deprecated plugin could prevent you from running your application.

How do I update?

Note that Java 9 launched three years ago, hence the situations previously described are mostly covered by now. However, you should do the following verifications and take the following actions to prevent failures in the process:

  1. Verify server compatibility
  2. Verify if you need a specific JVM due to support contracts and conditions
  3. Configure your development environment to support multiple JVMs during the migration process
  4. Verify your IDE compatibility and update
  5. Update Maven and Maven projects
  6. Update dependencies
  7. Include Java/Jakarta EE dependencies
  8. Execute multiple JVMs in production

Verify server compatibility


Mike Loukides from O'Reilly affirms that there are two types of programmers. On one hand we have the low-level programmers who create tools such as libraries or frameworks, and on the other hand we have developers who use these tools to create experiences, products, and services.

Java Enterprise mostly belongs to the second group, the "productive world" resting on giants' shoulders. That's why you should first check whether your runtime or framework already has a version compatible with Java 11, and also whether you have the time/decision power to proceed with an update. If not, any other action from this point is useless.

The good news is that most of the popular servers in enterprise Java world are already compatible, like:

If you happen to depend on non-compatible runtimes, this is where the road ends, unless you help the maintainer update it.

Verify if you need a specific JVM


On the non-technical side, under support contract conditions you could be obligated to use a specific JVM version.

OpenJDK by itself is an open source project receiving contributions from many companies (with Oracle being the most active contributor), but nothing prevents any other company from compiling, packaging, and TCK-certifying its own JVM distribution, as demonstrated by Amazon Corretto, Azul Zulu, Liberica JDK, etc.

In short, there is software that technically could run over any JVM distribution and version, but the support contract will ask you for a particular version. For instance:

Configure your development environment to support multiple JDKs

Since the jump from Java 8 to Java 11 is mostly an experimentation process, it is a good idea to install multiple JVMs on the development computer, with SDKMan and jEnv being the common options:



SDKMan is available for Unix-Like environments (Linux, Mac OS, Cygwin, BSD) and as the name suggests, acts as a Java tools package manager.

It helps to install and manage JVM ecosystem tools -e.g. Maven, Gradle, Leiningen- and also multiple JDK installations from different providers.



Also available for Unix-Like environments (Linux, Mac OS, Cygwin, BSD), jEnv is basically a script to manage and switch between multiple JVM installations per system, user, and shell.

If you happen to install JDKs from different sources -e.g Homebrew, Linux Repo, Oracle Technology Network- it is a good choice.

Finally, if you use Windows, the common alternative is to automate the switch using .bat files; however, I would appreciate any other suggestions since I don't use Windows so often.

Verify your IDE compatibility and update

Please remember that any IDE ecosystem is composed of three levels:

  1. The IDE acting as platform
  2. Programming language support
  3. Plugins to support tools and libraries

After updating your IDE, you should also verify that all of the plugins that are part of your development cycle work fine under Java 11.

Update Maven and Maven projects


Probably the most common build choice in Enterprise Java is Maven, and many IDEs use it under the hood or explicitly. Hence, you should update it.

Besides installation, please remember that Maven has a modular architecture and Maven module versions can be forced in any project definition. So, as a rule of thumb, you should also update these modules in your projects to the latest stable version.

To verify this quickly, you could use versions-maven-plugin:
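A minimal declaration might look like this (a sketch; check Maven Central for the latest plugin release):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>versions-maven-plugin</artifactId>
    <version>2.7</version>
</plugin>
```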


The plugin includes a specific goal to verify Maven plugin versions:

mvn versions:display-plugin-updates


After that, you also need to configure Java source and target compatibility; generally this is achieved at two points.

As properties:
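For instance (a sketch, targeting Java 11):

```xml
<properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>
```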


As configuration on Maven plugins, especially in maven-compiler-plugin:
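A sketch of the plugin configuration (the plugin version is an example):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.1</version>
    <configuration>
        <source>11</source>
        <target>11</target>
    </configuration>
</plugin>
```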


Finally, some plugins need to "break" the barriers imposed by Java Modules, and the Java Platform team knows about it. Hence, the JVM has an argument called --illegal-access to allow this, at least during Java 11.

This could be a good idea for plugins like surefire and failsafe, which also invoke runtimes that depend on this flag (like Arquillian tests):
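A sketch for surefire (the plugin version is an example; failsafe is configured analogously):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
    <configuration>
        <!-- pass the relaxed illegal-access mode to forked test JVMs -->
        <argLine>--illegal-access=permit</argLine>
    </configuration>
</plugin>
```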


Update project dependencies

As mentioned before, you need to check for compatible versions of your Java dependencies. Sometimes these libraries introduce breaking changes on each major version -e.g. Flyway- and you should allocate time to refactor for these changes.

Again, if you use Maven, versions-maven-plugin has a goal to verify dependency versions. The plugin will inform you about available updates:

mvn versions:display-dependency-updates


In the particular case of Java EE, you already have an advantage. If you depend only on APIs -e.g. Java EE, MicroProfile- and not on particular implementations, many of these issues are already solved for you.

Include Java/Jakarta EE dependencies


Probably modern REST-based services won't need this; however, in projects with heavy usage of SOAP and XML marshalling it is mandatory to include the Java EE modules removed in Java 11. Otherwise your project won't compile and run.

You must include as dependency:

  • API definition
  • Reference Implementation (if needed)

At this point it is also a good idea to evaluate if you could move to Jakarta EE, the evolution of Java EE under the Eclipse Foundation.

Jakarta EE 8 is practically Java EE 8 with another name, retaining package and feature compatibility; most application servers are in the process of obtaining, or already have, Jakarta EE certified implementations:

We could swap the Java EE API:
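For instance, the javax coordinates look like this (the version shown is an example of a compatible release):

```xml
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0.1</version>
    <scope>provided</scope>
</dependency>
```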


For Jakarta EE API:
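A sketch of the equivalent Jakarta EE coordinates (version is an example):

```xml
<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>
```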


After that, please include any of these dependencies (if needed):

Java Beans Activation

Java EE
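A sketch of the javax activation API coordinates (version is an example):

```xml
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>javax.activation-api</artifactId>
    <version>1.2.0</version>
</dependency>
```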


Jakarta EE
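A sketch of the Jakarta activation API coordinates (version is an example):

```xml
<dependency>
    <groupId>jakarta.activation</groupId>
    <artifactId>jakarta.activation-api</artifactId>
    <version>1.2.2</version>
</dependency>
```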


JAXB (Java XML Binding)

Java EE
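A sketch of the javax JAXB API coordinates (version is an example):

```xml
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
```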


Jakarta EE
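A sketch of the Jakarta JAXB API coordinates (version is an example):

```xml
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.3</version>
</dependency>
```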



JAX-WS (Java Web Services)

Java EE
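For JAX-WS, the javax API coordinates might be (a sketch; version is an example):

```xml
<dependency>
    <groupId>javax.xml.ws</groupId>
    <artifactId>jaxws-api</artifactId>
    <version>2.3.1</version>
</dependency>
```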


Jakarta EE
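For JAX-WS, the Jakarta API coordinates might be (a sketch; version is an example):

```xml
<dependency>
    <groupId>jakarta.xml.ws</groupId>
    <artifactId>jakarta.xml.ws-api</artifactId>
    <version>2.3.3</version>
</dependency>
```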


Implementation (runtime)


Implementation (standalone)


Java Annotation

Java EE
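A sketch of the javax annotation API coordinates (version is an example):

```xml
<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.2</version>
</dependency>
```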


Jakarta EE
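A sketch of the Jakarta annotation API coordinates (version is an example):

```xml
<dependency>
    <groupId>jakarta.annotation</groupId>
    <artifactId>jakarta.annotation-api</artifactId>
    <version>1.3.5</version>
</dependency>
```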


Java Transaction

Java EE
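A sketch of the javax transaction API coordinates (version is an example):

```xml
<dependency>
    <groupId>javax.transaction</groupId>
    <artifactId>javax.transaction-api</artifactId>
    <version>1.3</version>
</dependency>
```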


Jakarta EE
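A sketch of the Jakarta transaction API coordinates (version is an example):

```xml
<dependency>
    <groupId>jakarta.transaction</groupId>
    <artifactId>jakarta.transaction-api</artifactId>
    <version>1.3.3</version>
</dependency>
```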



In the particular case of CORBA, I'm not aware of its adoption level. There is an independent project at Eclipse to support CORBA, based on Glassfish CORBA, but this should be investigated further.

Multiple JVMs in production

If everything compiles, tests, and executes, you did a successful migration.

Some deployments/environments run multiple application servers over the same Linux installation. If this is your case, it is a good idea to install multiple JVMs to allow stepped migrations instead of a big bang.

For instance, RHEL-based distributions like CentOS, Oracle Linux or Fedora include various JVM versions:


Most importantly, if you install JVMs directly from RPMs (like Oracle HotSpot), Java alternatives will still give you support:


However, on modern deployments it would probably be better to use Docker, especially on Windows, which otherwise also needs .bat scripts to automate this task. Most of the JVM distributions are also available on Docker Hub:


September 23, 2020 12:00 AM

Secure your JAX-RS APIs with MicroProfile JWT

by Jean-François James at July 13, 2020 03:55 PM

In this article, I want to illustrate in a practical way how to secure your JAX-RS APIs with MicroProfile JWT (JSON Web Token). It is illustrated by a GitHub project using Quarkus, Wildfly, Open Liberty and JWTenizr. A basic knowledge of MP JWT is needed and, if you don’t feel comfortable with that, I invite […]

by Jean-François James at July 13, 2020 03:55 PM

Jakarta EE: Multitenancy with JPA on WildFly, Part 1

by Rhuan Henrique Rocha at July 12, 2020 10:49 PM

In this two-part series, I demonstrate two approaches to multitenancy with the Jakarta Persistence API (JPA) running on WildFly. In the first half of this series, you will learn how to implement multitenancy using a database. In the second half, I will introduce you to multitenancy using a schema. I based both examples on JPA and Hibernate.

Because I have focused on implementation examples, I won’t go deeply into the details of multitenancy, though I will start with a brief overview. Note, too, that I assume you are familiar with Java persistence using JPA and Hibernate.

Multitenancy architecture

Multitenancy is an architecture that permits a single application to serve multiple tenants, also known as clients. Although tenants in a multitenancy architecture access the same application, they are securely isolated from each other. Furthermore, each tenant only has access to its own resources. Multitenancy is a common architectural approach for software-as-a-service (SaaS) and cloud computing applications. In general, clients (or tenants) accessing a SaaS are accessing the same application, but each one is isolated from the others and has its own resources.

A multitenant architecture must isolate the data available to each tenant. If there is a problem with one tenant’s data set, it won’t impact the other tenants. In a relational database, we use a database or a schema to isolate each tenant’s data. One way to separate data is to give each tenant access to its own database or schema. Another option, which is available if you are using a relational database with JPA and Hibernate, is to partition a single database for multiple tenants. In this article, I focus on the standalone database and schema options. I won’t demonstrate how to set up a partition.

In a server-based application like WildFly, multitenancy is different from the conventional approach. In this case, the server application works directly with the data source by initiating a connection and preparing the database to be used. The client application does not spend time opening the connection, which improves performance. On the other hand, using Enterprise JavaBeans (EJBs) for container-managed transactions can lead to problems. As an example, the server-based application could do something to generate an error to commit or roll the application back.

Implementation code

Two interfaces are crucial to implementing multitenancy in JPA and Hibernate:

  • MultiTenantConnectionProvider is responsible for connecting tenants to their respective databases and services. We will use this interface and a tenant identifier to switch between databases for different tenants.
  • CurrentTenantIdentifierResolver is responsible for identifying the tenant. We will use this interface to define what is considered a tenant (more about this later). We will also use this interface to provide the correct tenant identifier to MultiTenantConnectionProvider.

In JPA, we configure these interfaces using the persistence.xml file. In the next sections, I’ll show you how to use these two interfaces to create the first three classes we need for our multitenancy architecture: DatabaseMultiTenantProvider, MultiTenantResolver, and DatabaseTenantResolver.


DatabaseMultiTenantProvider is an implementation of the MultiTenantConnectionProvider interface. This class contains logic to switch to the database that matches the given tenant identifier. In WildFly, this means switching to different data sources. The DatabaseMultiTenantProvider class also implements the ServiceRegistryAwareService, which allows us to inject a service during the configuration phase.

Here’s the code for the DatabaseMultiTenantProvider class:

public class DatabaseMultiTenantProvider implements MultiTenantConnectionProvider, ServiceRegistryAwareService {
    private static final long serialVersionUID = 1L;
    private static final String TENANT_SUPPORTED = "DATABASE";
    private DataSource dataSource;
    private String typeTenancy;

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public void injectServices(ServiceRegistryImplementor serviceRegistry) {
        typeTenancy = (String) ((ConfigurationService) serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.multiTenancy");
        dataSource = (DataSource) ((ConfigurationService) serviceRegistry
                .getService(ConfigurationService.class))
                .getSettings().get("hibernate.connection.datasource");
    }

    @Override
    public boolean isUnwrappableAs(Class clazz) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> clazz) {
        return null;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        final Connection connection = dataSource.getConnection();
        return connection;
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        final Context init;
        // Just use the multi-tenancy if the hibernate.multiTenancy == DATABASE
        if (TENANT_SUPPORTED.equals(typeTenancy)) {
            try {
                init = new InitialContext();
                dataSource = (DataSource) init.lookup("java:/jdbc/" + tenantIdentifier);
            } catch (NamingException e) {
                throw new HibernateException("Error trying to get datasource ['java:/jdbc/" + tenantIdentifier + "']", e);
            }
        }
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        connection.close();
    }
}

As you can see, we call the injectServices method to populate the datasource and typeTenancy attributes. We use the datasource attribute to get a connection from the data source, and we use the typeTenancy attribute to find out if the class supports the multiTenancy type. We call the getConnection method to get a data source connection. This method uses the tenant identifier to locate and switch to the correct data source.


MultiTenantResolver is an abstract class that implements the CurrentTenantIdentifierResolver interface. This class aims to provide a setTenantIdentifier method to all CurrentTenantIdentifierResolver implementations:

public abstract class MultiTenantResolver implements CurrentTenantIdentifierResolver {

    protected String tenantIdentifier;

    public void setTenantIdentifier(String tenantIdentifier) {
        this.tenantIdentifier = tenantIdentifier;
    }
}
This abstract class is simple. We only use it to provide the setTenantIdentifier method.


DatabaseTenantResolver also implements the CurrentTenantIdentifierResolver interface. This class is the concrete class of MultiTenantResolver:

public class DatabaseTenantResolver extends MultiTenantResolver {

    private Map<String, String> regionDatasourceMap;

    public DatabaseTenantResolver() {
        regionDatasourceMap = new HashMap<>();
        regionDatasourceMap.put("default", "MyDataSource");
        regionDatasourceMap.put("america", "AmericaDB");
        regionDatasourceMap.put("europa", "EuropaDB");
        regionDatasourceMap.put("asia", "AsiaDB");
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        if (this.tenantIdentifier != null
                && regionDatasourceMap.containsKey(this.tenantIdentifier)) {
            return regionDatasourceMap.get(this.tenantIdentifier);
        }
        return regionDatasourceMap.get("default");
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return false;
    }
}

Notice that DatabaseTenantResolver uses a Map to define the correct data source for a given tenant. The tenant, in this case, is a region. Note, too, that this example assumes we have the data sources java:/jdbc/MyDataSource, java:/jdbc/AmericaDB, java:/jdbc/EuropaDB, and java:/jdbc/AsiaDB configured in WildFly.
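The fallback behavior can be sketched in isolation as a small runnable snippet (class and method names here are illustrative, not part of the original article):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the tenant-to-datasource lookup performed by
// DatabaseTenantResolver: unknown or null tenants fall back to "default".
public class TenantLookup {
    private static final Map<String, String> REGION_DATASOURCE_MAP = new HashMap<>();
    static {
        REGION_DATASOURCE_MAP.put("default", "MyDataSource");
        REGION_DATASOURCE_MAP.put("america", "AmericaDB");
        REGION_DATASOURCE_MAP.put("europa", "EuropaDB");
        REGION_DATASOURCE_MAP.put("asia", "AsiaDB");
    }

    static String resolve(String tenantIdentifier) {
        if (tenantIdentifier != null && REGION_DATASOURCE_MAP.containsKey(tenantIdentifier)) {
            return REGION_DATASOURCE_MAP.get(tenantIdentifier);
        }
        return REGION_DATASOURCE_MAP.get("default");
    }

    public static void main(String[] args) {
        System.out.println(resolve("america")); // AmericaDB
        System.out.println(resolve(null));      // MyDataSource
    }
}
```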

Configure and define the tenant

Now we need to use the persistence.xml file to configure the tenant:

    <persistence-unit name="jakartaee8">
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="none"/>
            <property name="hibernate.dialect" value="org.hibernate.dialect.PostgresPlusDialect"/>
            <property name="hibernate.multiTenancy" value="DATABASE"/>
            <property name="hibernate.tenant_identifier_resolver" value="net.rhuanrocha.dao.multitenancy.DatabaseTenantResolver"/>
            <property name="hibernate.multi_tenant_connection_provider" value="net.rhuanrocha.dao.multitenancy.DatabaseMultiTenantProvider"/>
        </properties>
    </persistence-unit>

Next, we define the tenant in the EntityManagerFactory:

protected EntityManagerFactory emf;

protected EntityManager getEntityManager(String multitenancyIdentifier) {

    final MultiTenantResolver tenantResolver =
            (MultiTenantResolver) ((SessionFactoryImplementor) emf).getCurrentTenantIdentifierResolver();

    tenantResolver.setTenantIdentifier(multitenancyIdentifier);

    return emf.createEntityManager();
}

Note that we call the setTenantIdentifier before creating a new instance of EntityManager.


I have presented a simple example of multitenancy in a database using JPA with Hibernate and WildFly. There are many ways to use a database for multitenancy. My main point has been to show you how to implement the CurrentTenantIdentifierResolver and MultiTenantConnectionProvider interfaces. I’ve shown you how to use JPA’s persistence.xml file to configure the required classes based on these interfaces.

Keep in mind that for this example, I have assumed that WildFly manages the data source and connection pool and that EJB handles the container-managed transactions. In the second half of this series, I will provide a similar introduction to multitenancy, but using a schema rather than a database. If you want to go deeper with this example, you can find the complete application code and further instructions on my GitHub repository.

by Rhuan Henrique Rocha at July 12, 2020 10:49 PM

Jakarta EE Cookbook

by Elder Moraes at July 06, 2020 07:19 PM

About one month ago I had the pleasure to announce the release of the second edition of my book, now called “Jakarta EE Cookbook”. By that time I had recorded a video about it, which you can watch here:

And then came a crazy month and just now I had the opportunity to write a few lines about it! 🙂

So, straight to the point, what you should know about the book (in case you have any interest in it).

Target audience

Java developers working on enterprise applications who would like to get the best from the Jakarta EE platform.

Topics covered

I’m sure this is one of the most complete books in this field, and I’m saying that based on the topics covered:

  • Server-side development
  • Building services with RESTful features
  • Web and client-server communication
  • Security in the enterprise architecture
  • Jakarta EE standards (and how they save you time on a daily basis)
  • Deployment and management using some of the best Jakarta EE application servers
  • Microservices with Jakarta EE and Eclipse MicroProfile
  • CI/CD
  • Multithreading
  • Event-driven for reactive applications
  • Jakarta EE, containers & cloud computing

Style and approach

The book has the word “cookbook” in its name for a reason: it follows a 100% practical approach, with almost all working code available in the book (we only omitted the imports for the sake of space).

And speaking of the source code being available, it is really available on my GitHub:

PRs and stars are welcome! 🙂

Bonus content

The book has an appendix that would be worthy of another book! I tell the readers how sharing knowledge has changed my career for the better and how you can apply what I’ve learned to your own career.

Surprise, surprise

In the first 24 hours of its release, this book reached 1st place on Amazon among other Java releases! Wow!

Of course, I’m more than happy and honored for such a warm welcome given to my baby… 🙂

If you are interested in it, we are in the very last days of the special price in celebration of its release. You can take a look here

Leave your comments if you need any clarification about it. See you!

by Elder Moraes at July 06, 2020 07:19 PM

Monitoring REST APIs with Custom JDK Flight Recorder Events

January 29, 2020 02:30 PM

The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.

In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests, and more. We’ll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
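As a taste of the approach, a minimal custom event might look like this (names and fields are illustrative assumptions, not taken from the post; requires JDK 11+):

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Minimal sketch of an application-specific JFR event for REST requests.
@Name("demo.RestRequest")
@Label("REST Request")
public class RestRequestEvent extends Event {

    @Label("Path")
    String path;

    @Label("Status Code")
    int status;

    public static void main(String[] args) {
        RestRequestEvent event = new RestRequestEvent();
        event.begin();            // start timing the "request"
        event.path = "/customers";
        event.status = 200;
        event.commit();           // recorded only while a flight recording is active
    }
}
```

In a real service, begin() would be called when the request enters a filter and commit() when the response leaves it.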

January 29, 2020 02:30 PM

Enforcing Java Record Invariants With Bean Validation

January 20, 2020 04:30 PM

Record types are one of the most awaited features in Java 14; they promise to "provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data". One example where records should be beneficial are data transfer objects (DTOs), as e.g. found in the remoting layer of enterprise applications. Typically, certain rules should be applied to the attributes of such DTO, e.g. in terms of allowed values. The goal of this blog post is to explore how such invariants can be enforced on record types, using annotation-based constraints as provided by the Bean Validation API.

January 20, 2020 04:30 PM

Jakarta EE 8 CRUD API Tutorial using Java 11

by Philip Riecks at January 19, 2020 03:07 PM

As part of the Jakarta EE Quickstart Tutorials on YouTube, I've now created a five-part series to create a Jakarta EE CRUD API. Within the videos, I'm demonstrating how to start using Jakarta EE for your next application. Using the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed using the TDD (Test Driven Development) technique.

The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5

Part I: Introduction to the application setup

This part covers the following topics:

  • Introduction to the Maven project skeleton
  • Flyway setup for Open Liberty
  • Derby JDBC connection configuration
  • Basic MicroShed Testing setup for TDD

Part II: Developing the endpoint to create entities

This part covers the following topics:

  • First JAX-RS endpoint to create Person entities
  • TDD approach using MicroShed Testing and the Liberty Maven Plugin
  • Store the entities using the EntityManager

Part III: Developing the endpoints to read entities

This part covers the following topics:

  • Develop two JAX-RS endpoints to read entities
  • Read all entities, or a single entity by its id
  • Handle non-present entities with a different HTTP status code

Part IV: Developing the endpoint to update entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to update entities
  • Update existing entities using HTTP PUT
  • Validate the client payload using Bean Validation

Part V: Developing the endpoint to delete entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to delete entities
  • Enhance the test setup for deterministic and repeatable integration tests
  • Remove the deleted entity from the database

The source code for the Maven CRUD API application is available on GitHub.

For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.

Have fun developing Jakarta EE CRUD API applications,



The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.

by Philip Riecks at January 19, 2020 03:07 PM

Deploy a Jakarta EE application to the root context

by Philip Riecks at January 07, 2020 06:24 AM

With the presence of Docker, Kubernetes, and cheaper hardware, the deployment model of multiple applications inside one application server has passed. Now you deploy one Jakarta EE application to one application server. This eliminates the need for different context paths: you can use the root context / for your Jakarta EE application. With this blog post, you'll learn how to achieve this for each Jakarta EE application server.

The default behavior for Jakarta EE application server

Without any further configuration, most of the Jakarta EE application servers deploy the application to a context path based on the filename of your .war. If you e.g. deploy your my-banking-app.war application, the server will use the context prefix /my-banking-app for your application. All your JAX-RS endpoints, Servlets, .jsp, and .xhtml content is then available below this context, e.g. /my-banking-app/resources/customers.

This was important in the past, where you deployed multiple applications to one application server. Without the context prefix, the application server wouldn't be able to route the traffic to the correct application.

As of today, the deployment model changed with Docker, Kubernetes and cheaper infrastructure. You usually deploy one .war within one application server running as a Docker container. Given this deployment model, the context prefix is irrelevant. Mapping the application to the root context / is more convenient.

If you configure a reverse proxy or an Ingress controller (in the Kubernetes world), you are happy if you can just route to / instead of remembering the actual context path (which is error-prone).

Deploying to root context: Payara & Glassfish

As Payara is a fork of Glassfish, the configuration for both is quite similar. The most convenient way for Glassfish is to place a glassfish-web.xml file in the src/main/webapp/WEB-INF folder of your application:

<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN"
  "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
  <context-root>/</context-root>
</glassfish-web-app>

For Payara the filename is payara-web.xml:

<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "">
<payara-web-app>
  <context-root>/</context-root>
</payara-web-app>

Both also support configuring the context path of the application within their admin console. IMHO this is less convenient than the .xml file solution.

Deploying to root context: Open Liberty

Open Liberty also parses a proprietary deployment descriptor within src/main/webapp/WEB-INF: ibm-web-ext.xml

  <context-root uri="/"/>

Furthermore, you can also configure the context of your application within your server.xml:


  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

  <webApplication location="app.war" contextRoot="/" name="app"/>

Deploying to root context: WildFly

WildFly also has two simple ways of configuring the root context for your application. First, you can place a jboss-web.xml within src/main/webapp/WEB-INF:

<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.4//EN" "">
<jboss-web>
  <context-root>/</context-root>
</jboss-web>

Second, while copying your .war file to your Docker container, you can name it ROOT.war:

FROM jboss/wildfly
 ADD target/app.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

For more tips & tricks for each application server, have a look at my cheat sheet.

Have fun deploying your Jakarta EE applications to the root context,


The post Deploy a Jakarta EE application to the root context appeared first on rieckpil.

by Philip Riecks at January 07, 2020 06:24 AM

Modernizing our GitHub Sync Toolset

November 19, 2019 08:10 PM

I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.

We are not expecting any disruption of service, but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this one-hour maintenance window.

This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories, and on top of that, this new release will start synchronizing contributors.

In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.

Eclipse committers are responsible for maintaining a list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).

To become an Eclipse contributor on GitHub for a project, please make sure to tell us your GitHub username in your Eclipse account.

November 19, 2019 08:10 PM
