the beginnings of JAX-RS, OAuth, OIDC, Authentication, Authorization and Quarkus is available for download.
October 01, 2023
Hashtag Jakarta EE #196
by Ivar Grimstad at October 01, 2023 09:59 AM
Welcome to issue number one hundred and ninety-six of Hashtag Jakarta EE!
The details for a Milestone 1 release (M1) of Jakarta EE 11 were nailed down in this week’s Jakarta EE Platform call. After discussing who the target audience of an M1 is, we concluded that this release is primarily for the Platform Project itself and the vendors implementing Jakarta EE. This doesn’t prevent others from checking it out, but it helps define the scope of the milestone.
Jakarta EE consists of multiple component specifications in addition to the Platform itself. All the specifications that are updated for Jakarta EE 11 are required to participate in M1 by producing the following artifacts:
– Specification Document
– API JAR in Maven Central
– JavaDoc
– XML Schemas (if the specification defines these)
An implementation of the specification and a Technology Compatibility Kit (TCK) are optional for M1 but will be required for the next milestone release.
The next three weeks will be filled by four conferences back-to-back. First out is Devoxx Belgium in Antwerp. Then I will go across the ocean to Halifax for Community Over Code and back again to Devoxx Morocco in Agadir before ending the trip in Ludwigsburg at EclipseCon. I hope to see as many of you there as possible. Remember that the hallway track is the most important one to attend!
I will be bringing my running gear to each of these conferences, so keep an eye out for messages and posts tagged with #runWithJakartaEE if you would like to join. I’ll bring a limited number of Jakarta EE running shirts if that is your motivation to join. At EclipseCon, I will let Gesine guide us on a 3K, 5K, or 8K Morning Run around Ludwigsburg. Join us in the Nestor Hotel Lobby on Wednesday, October 18 at 06:30.
Talking about EclipseCon, I hope you are aware of the Community Day for Java Developers on October 16. You can still register for only €40 which includes lunch and refreshments. Not a bad deal if you ask me…
On the topic of registering, I encourage you to register for JakartaOne Livestream 2023 which will happen on December 5, 2023. We have a great show planned around the celebration of the 5-year anniversary of Jakarta EE.
There is a lot going on in the Open Source Community these days. One of the more disruptive things is the Cyber Resilience Act (CRA) by the European Union (EU). Twelve organizations, including the Eclipse Foundation, have written an Open Letter addressed to policymakers proposing a solution for OSS projects developed under the governance of foundations like the Eclipse Foundation.
Feel free to share this Open Letter, for example using the hashtag #ModifyTheCRA.
September 27, 2023
Navigating the Shift From Drupal 7 to Drupal 9/10 at the Eclipse Foundation
September 27, 2023 02:30 PM
We’re currently in the middle of a substantial transition as we are migrating mission-critical websites from Drupal 7 to Drupal 9, with our sights set on Drupal 10. This shift has been motivated by several factors, including the announcement of Drupal 7 end-of-life which is now scheduled for January 5, 2025, and our goal to reduce technical debt that we accrued over the last decade.
To provide some context, we’re migrating a total of six key websites:
- projects.eclipse.org: The Eclipse Project Management Infrastructure (PMI) consolidates project management activities into a single consistent location and experience.
- accounts.eclipse.org: The Eclipse Account website is where our users go to manage their profiles and sign essential agreements, like the Eclipse Contributor Agreement (ECA) and the Eclipse Individual Committer Agreement (ICA).
- blogs.eclipse.org: Our official blogging platform for Foundation staff.
- newsroom.eclipse.org: The Eclipse Newsroom is our content management system for news, events, newsletters, and valuable resources like case studies, market reports, and whitepapers.
- marketplace.eclipse.org: The Eclipse Marketplace empowers users to discover solutions that enhance their Eclipse IDE.
- eclipse.org/downloads/packages: The Eclipse Packaging website is our platform for managing the publication of download links for the Eclipse Installer and Eclipse IDE Packages on our websites.
The Progress So Far
We’ve made substantial progress this year with our migration efforts. The team successfully completed the migration of Eclipse Blogs and Eclipse Newsroom. We are also in the final stages of development with the Eclipse Marketplace, which is currently scheduled for a production release on October 25, 2023. Next year, we’ll focus our attention on completing the migration of our more substantial sites, such as Eclipse PMI, Eclipse Accounts, and Eclipse Packaging.
More Than a Simple Migration: Decoupling Drupal APIs With Quarkus
This initiative isn’t just about moving from one version of Drupal to another. Simultaneously, we’re undertaking the task of decoupling essential APIs from Drupal in the hope that future migration or upgrade won’t impact as many core services at the same time. For this purpose, we’ve chosen Quarkus as our preferred platform. In Q3 2023, the team successfully migrated the GitHub ECA Validation Service and the Open-VSX Publisher Agreement Service from Drupal to Quarkus. In Q4 2023, we’re planning to continue down that path and deploy a Quarkus implementation of several critical APIs such as:
- Account Profile API: This API offers user information, covering ECA status and profile details like bios.
- User Deletion API: This API monitors user deletion requests ensuring the right to be forgotten.
- Committer Paperwork API: This API keeps tabs on the status of ongoing committer paperwork records.
- Eclipse USS: The Eclipse User Storage Service (USS) allows Eclipse projects to store user-specific project information on our servers.
Conclusion: A Forward-Looking Transition
Our migration journey from Drupal 7 to Drupal 9, with plans for Drupal 10, represents our commitment to providing a secure, efficient, and user-friendly online experience for our community. We are excited about the possibilities this migration will unlock for us, advancing us toward a more modern web stack.
Finally, I’d like to take this moment to highlight that this project is a monumental team effort, thanks to the exceptional contributions of Eric Poirier and Théodore Biadala, our Drupal developers; Martin Lowe and Zachary Sabourin, our Java developers implementing the API decoupling objective; and Frederic Gurr, whose support has been instrumental in deploying our new apps on the Eclipse Infrastructure.
How to store JSON in MySQL Database
by Guest Author at September 27, 2023 08:24 AM
Developers use MySQL databases in every corner of the world to create cloud-based applications. As they continually look for tools that offer better scalability, performance, and flexibility, many are pairing MySQL with the JSON data format.
Combined, these provide a wealth of benefits for developers. We’re going to briefly examine the ins and outs of MySQL and JSON to get you up to speed, then take a look at some of the things you can achieve using them together.

September 26, 2023
Contributing to OpenJDK | The Two Minutes Tuesday 039 | Open Source
by Markus Karg at September 26, 2023 07:30 PM
I really adore being part of #OpenJDK!
If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patreon https://www.patreon.com/mkarg. Thanks!
JCON Europe Cologne 2023: "Serverless" Is What J2EE Was Meant To Be
by admin at September 26, 2023 01:11 PM
Live from JCON Europe Cologne 2023, discussing cloud native, cloud costs, "scale to zero", green IT, productivity, and the relationship between application servers and serverless architectures:
MicroProfile 6.1, Java 21, and fast startup times for Spring Boot apps on Open Liberty 23.0.0.10-beta
September 26, 2023 12:00 AM
This Open Liberty beta is packed full of the team’s latest standards implementation work with previews of MicroProfile 6.1 (Metrics, Telemetry, and OpenAPI), Java 21, and Jakarta Data (Beta 3) on Open Liberty. It also introduces faster startup times for your Spring Boot applications with little or no extra effort by using Liberty InstantOn; if you have any Spring apps to hand, give it a try. And there are a couple of updates that make it easier to manage security configurations in containerized environments.
The Open Liberty 23.0.0.10-beta includes the following beta features (along with all GA features):
See also previous Open Liberty beta blog posts.
Faster startup of Spring Boot apps (Spring Boot 3.0 InstantOn with CRaC)
Open Liberty InstantOn provides fast startup times for MicroProfile and Jakarta EE applications. With InstantOn, your applications can start in milliseconds, without compromising on throughput, memory, development-production parity, or Java language features. InstantOn uses the Checkpoint/Restore In Userspace (CRIU) feature of the Linux kernel to take a checkpoint of the JVM that can be restored later.
The Spring Framework (version 6.1) is adding support for Coordinated Restore at Checkpoint (CRaC), which also uses CRIU to provide Checkpoint and Restore for Java applications. The Spring Boot version 3.2 will use Spring Framework version 6.1, enabling Spring Boot applications to also use CRaC to achieve rapid startup times.
The recent addition of the Open Liberty springBoot-3.0 feature allows Spring Boot 3.x-based applications to be deployed with Open Liberty. And now, with the new Open Liberty crac-1.3 beta feature, a Spring Boot 3.2-based application can be deployed with Liberty InstantOn to achieve rapid startup times for your Spring Boot application.
To use the CRaC 1.3 functionality with the springBoot-3.0 feature, you must be running with Java 17 or higher and use the crac-1.3 feature. Additionally, if your application uses Servlet, it needs to use the servlet-6.0 feature. These features are configured in the server.xml file as follows:
<features>
<feature>springBoot-3.0</feature>
<feature>servlet-6.0</feature>
<feature>crac-1.3</feature>
</features>
With these features enabled, you can containerize your Spring Boot 3.2 application with Liberty InstantOn support by following the Liberty InstantOn documentation, along with the Liberty recommendations for containerizing Spring Boot applications in the Liberty Spring Boot guide.
For more information and an example Spring Boot application using the Liberty InstantOn crac-1.3 feature, see the How to containerize your Spring Boot application for rapid startup blog post.
You can also use the crac-1.3 feature with other applications, such as applications using Jakarta EE or MicroProfile. Such applications can register resources with CRaC to get notifications for checkpoint and restore. This allows applications to perform actions necessary to prepare for a checkpoint as well as perform necessary actions when the application is restored. For more information on the org.crac APIs, see the org.crac Javadoc.
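As a rough sketch of what such a registration can look like, the class below implements the org.crac Resource interface and registers itself with the global CRaC context; the class name and the work done in the callbacks are illustrative, not taken from Open Liberty.
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class CacheWarmupResource implements Resource {

    // Register this resource with the global CRaC context so the JVM notifies it
    // before a checkpoint is taken and after the process image is restored.
    public CacheWarmupResource() {
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Release state that must not be captured in the checkpoint,
        // for example open sockets or cached credentials.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Re-acquire resources after restore, for example reopen connections
        // or refresh time-sensitive state.
    }
}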
Java 21 support
Java 21 is finally here, the first long term support (LTS) release since Java 17 was released two years ago. It offers some new functionality and changes that you’ll want to check out for yourself.
As it is a milestone release of Java, we thought you might like to try it out a little early (we have been testing against Java 21 build 35 ourselves). Take advantage of trying out the new changes in Java 21 now and get more time to review your applications, microservices, and runtime environments.
Just:
- Get the 23.0.0.10-beta version of Open Liberty.
- Edit your Liberty server’s server.env file to point JAVA_HOME to your Java 21 installation.
- Start testing!
Here are some highlights from new JEP changes in Java 18-21:
- 400: UTF-8 by Default
- 408: Simple Web Server
- 422: Linux/RISC-V Port
- 439: Generational ZGC
- 440: Record Patterns
But perhaps the most anticipated one of all is the introduction of Virtual Threads in Java 21:
- 444: Virtual Threads
Will the impact of Virtual Threads live up to the anticipation? Find out for yourself by experimenting with them, or with any of the other new features in Java 21, by trying them out in your applications run on the best Java runtime, Open Liberty!
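If you want a quick way to experiment, the following standalone sketch uses the Java 21 virtual-thread executor from JEP 444; it is not tied to Open Liberty, and the task workload is purely illustrative.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // Each submitted task runs on its own lightweight virtual thread (JEP 444).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int taskId = i;
                executor.submit(() -> {
                    // Blocking is cheap here; the carrier thread is released while sleeping.
                    Thread.sleep(100);
                    return taskId;
                });
            }
        } // close() waits for all submitted tasks to complete
    }
}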
For more information on Java 21, see:
As we work toward full Java 21 support, please bear with us if some functionality is not yet 100% ready.
MicroProfile 6.1 support
MicroProfile 6.1 is a minor release and is backwards-compatible with MicroProfile 6.0. It brings in the Jakarta EE 10 Core Profile APIs along with updated MicroProfile component specifications.
The following three specifications have minor updates, while the other five specifications remain unchanged:
- MicroProfile Metrics 5.1
- MicroProfile Telemetry 1.1
- MicroProfile Config 3.1 (mainly some TCK updates to ensure the tests run against either CDI 3.x or CDI 4.0 Lite)
See the following sections for more details about each of these features and how to try them out.
MicroProfile Metrics 5.1: configure statistics tracked by Histogram and Timer metrics
MicroProfile Metrics 5.1 includes new MicroProfile Config properties that are used for configuring the statistics that the Histogram and Timer metrics track and output. In MicroProfile Metrics 5.0, the Histogram and Timer metrics only track and output the max recorded value, the sum of all values, the count of the recorded values, and a static set of percentiles for the 50th, 75th, 95th, 98th, 99th, and 99.9th percentile. These values are emitted to the /metrics endpoint in Prometheus format.
The new properties introduced in MicroProfile Metrics 5.1 allow you to define a custom set of percentiles as well as a custom set of histogram buckets for the Histogram and Timer metrics. There are also additional configuration properties for enabling a default set of histogram buckets, including properties for defining an upper and lower bound for the bucket set.
The properties in the following table allow you to define a semicolon-separated list of value definitions using the syntax:
metric_name=value_1[,value_2…value_n]
Property | Description |
---|---|
mp.metrics.distribution.percentiles | Defines a custom set of percentiles for matching Histogram and Timer metrics to track and output. |
mp.metrics.distribution.histogram.buckets | Defines a custom set of (cumulative) buckets for matching Histogram metrics. |
mp.metrics.distribution.timer.buckets | Defines a custom set of (cumulative) duration buckets for matching Timer metrics. |
mp.metrics.distribution.percentiles-histogram.enabled | Enables or disables a default set of histogram buckets for matching Histogram and Timer metrics. |
mp.metrics.distribution.histogram.max-value | Defines an upper bound for the default bucket set of matching Histogram metrics. |
mp.metrics.distribution.histogram.min-value | Defines a lower bound for the default bucket set of matching Histogram metrics. |
mp.metrics.distribution.timer.max-value | Defines an upper bound for the default bucket set of matching Timer metrics. |
mp.metrics.distribution.timer.min-value | Defines a lower bound for the default bucket set of matching Timer metrics. |
Some properties can accept multiple values for a given metric name while some can only accept a single value.
You can use an asterisk (i.e., *) as a wild card at the end of the metric name.
For example, the mp.metrics.distribution.percentiles property can be defined as:
mp.metrics.distribution.percentiles=alpha.timer=0.5,0.7,0.75,0.8;alpha.histogram=0.8,0.85,0.9,0.99;delta.*=
This example configures the alpha.timer timer metric to track and output the 50th, 70th, 75th, and 80th percentile values. The alpha.histogram histogram metric outputs the 80th, 85th, 90th, and 99th percentile values. Percentiles are disabled for any Histogram or Timer metric that matches delta.*.
We’ll expand on the previous example and define histogram buckets for the alpha.timer timer metric using the mp.metrics.distribution.timer.buckets property:
mp.metrics.distribution.timer.buckets=alpha.timer=100ms,200ms,1s
This configuration tells the metrics runtime to track and output the count of durations that fall within 0-100ms, 0-200ms, and 0-1s. These values are ranges because the histogram buckets work cumulatively.
The corresponding Prometheus output for the alpha.timer metric at the /metrics REST endpoint is:
# HELP alpha_timer_seconds_max
# TYPE alpha_timer_seconds_max gauge
alpha_timer_seconds_max{scope="application",} 5.633
# HELP alpha_timer_seconds
# TYPE alpha_timer_seconds histogram (1)
alpha_timer_seconds{scope="application",quantile="0.5",} 0.67108864
alpha_timer_seconds{scope="application",quantile="0.7",} 5.603590144
alpha_timer_seconds{scope="application",quantile="0.75",} 5.603590144
alpha_timer_seconds{scope="application",quantile="0.8",} 5.603590144
alpha_timer_seconds_bucket{scope="application",le="0.1",} 0.0 (2)
alpha_timer_seconds_bucket{scope="application",le="0.2",} 0.0 (2)
alpha_timer_seconds_bucket{scope="application",le="1.0",} 1.0 (2)
alpha_timer_seconds_bucket{scope="application",le="+Inf",} 2.0 (2) (3)
alpha_timer_seconds_count{scope="application",} 2.0
alpha_timer_seconds_sum{scope="application",} 6.333
1 | The Prometheus metric type is histogram. Both the quantiles (percentiles) and the buckets are represented under this type. |
2 | The le tag means "less than or equal to" and is used for the defined buckets, which are converted to seconds. |
3 | Prometheus requires a +Inf bucket, which counts all hits. |
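For context, a Timer metric such as alpha.timer would typically come from application code. The hypothetical JAX-RS resource below assumes the standard MicroProfile Metrics @Timed annotation; the resource path and the simulated work are illustrative only.
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/alpha")
public class AlphaResource {

    // Each request is recorded in a Timer metric named "alpha.timer", which the
    // mp.metrics.distribution.* properties above can then target.
    @GET
    @Timed(name = "alpha.timer", absolute = true)
    public String get() throws InterruptedException {
        Thread.sleep(200); // simulate some work so the timer records a duration
        return "ok";
    }
}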
For more information about MicroProfile Metrics, see:
MicroProfile Telemetry 1.1: updated OpenTelemetry implementation
MicroProfile Telemetry 1.1 provides developers with the latest OpenTelemetry technology; the feature now consumes OpenTelemetry 1.29.0, updated from 1.19.0. Consequently, many of the dependencies are now stable.
To enable the MicroProfile Telemetry 1.1 feature, add the following configuration to your server.xml:
<features>
<feature>mpTelemetry-1.1</feature>
</features>
Additionally, you must make third-party APIs visible for your application in the server.xml:
<webApplication location="demo-microprofile-telemetry-inventory.war" contextRoot="/">
<!-- enable visibility to third party apis -->
<classloader apiTypeVisibility="+third-party"/>
</webApplication>
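With the feature enabled and third-party APIs visible, application code can use the OpenTelemetry API directly. The sketch below assumes tracing has been enabled for the application (for example, with otel.sdk.disabled=false) and uses an injected Tracer to add a custom span; the resource and span names are illustrative.
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/inventory")
public class InventoryResource {

    // MicroProfile Telemetry makes the configured Tracer available for CDI injection.
    @Inject
    Tracer tracer;

    @GET
    public String list() {
        // Incoming JAX-RS requests are traced automatically; this adds a custom child span.
        Span span = tracer.spanBuilder("load-inventory").startSpan();
        try {
            return "inventory";
        } finally {
            span.end();
        }
    }
}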
For more information about MicroProfile Telemetry, see:
MicroProfile OpenAPI 3.1: OpenAPI doc endpoint path configuration
MicroProfile OpenAPI generates and serves OpenAPI documentation for JAX-RS applications that are deployed to the Open Liberty server. The OpenAPI documentation is served from /openapi and a user interface for browsing this documentation is served from /openapi/ui.
With MicroProfile OpenAPI 3.1, you can configure the paths for these endpoints by adding configuration to your server.xml. For example:
<mpOpenAPI docPath="/my/openapi/doc/path" uiPath="/docsUi" />
When you set this configuration on a local test server, you can then access the OpenAPI document at localhost:9080/my/openapi/doc/path and the UI at localhost:9080/docsUi.
This is particularly useful if you want to expose the OpenAPI documentation through a Kubernetes ingress which routes requests to different services based on the path. For example, with this ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /appA
        pathType: Prefix
        backend:
          service:
            name: appA
            port:
              number: 9080
You could use the following server.xml configuration to ensure that the OpenAPI UI is available at /appA/openapi/ui:
<mpOpenAPI docPath="/appA/openapi" />
When uiPath is not set, it defaults to the value of docPath with /ui appended.
For more information about MicroProfile OpenAPI, see:
Jakarta Data beta 3: configure the data source used to query and persist data
Jakarta Data is a new Jakarta EE specification being developed in the open that aims to standardize the popular data repository pattern across a variety of providers. Open Liberty includes the Jakarta Data 1.0 beta 3 release, which adds the ability to configure the data source that a Jakarta Data repository uses to query and persist data.
The Open Liberty beta includes a test implementation of Jakarta Data that we are using to experiment with proposed specification features so that developers can try out these features and provide feedback to influence the specification as it is being developed. The test implementation currently works with relational databases and operates by redirecting repository operations to the built-in Jakarta Persistence provider. In preparation for Jakarta EE 11 (not yet generally available), which will require a minimum of Java 21, it runs on Java 17 and simulates the entirety of the Jakarta Data beta 3 release, plus some additional proposed features that are under consideration.
Jakarta Data beta 3 allows the use of multiple data sources, with a specification-defined mechanism for choosing which data source a repository will use.
To use Jakarta Data, you start by defining an entity class that corresponds to your data. With relational databases, the entity class corresponds to a database table and the entity properties (public methods and fields of the entity class) generally correspond to the columns of the table. You can define an entity class in one of the following ways:
- Annotate the class with jakarta.persistence.Entity and related annotations from Jakarta Persistence.
- Define a Java class without entity annotations, in which case the primary key is inferred from an entity property named id or ending with Id.
You define one or more repository interfaces for an entity, annotate those interfaces with @Repository, and inject them into components using @Inject. The Jakarta Data provider supplies the implementation of the repository interface for you.
Here’s a simple entity:
public class Product { // entity
public long id;
public String name;
public float price;
}
The following example shows a repository that defines operations relating to the entity. It opts to specify the JNDI name of a data source where the entity data is to be stored and found:
@Repository(dataStore = "java:app/jdbc/my-example-data")
public interface Products extends CrudRepository<Product, Long> {
// query-by-method name pattern:
Page<Product> findByNameIgnoreCaseContains(String searchFor, Pageable pageRequest);
// query via JPQL:
@Query("UPDATE Product o SET o.price = o.price - (?2 * o.price) WHERE o.id = ?1")
boolean discount(long productId, float discountRate);
}
In the following example, we have chosen to define the data source with the @DataSourceDefinition annotation, which we can place on a web component, such as the following example servlet. We can then inject the repository and use it:
@DataSourceDefinition(name = "java:app/jdbc/my-example-data",
className = "org.postgresql.xa.PGXADataSource",
databaseName = "ExampleDB",
serverName = "localhost",
portNumber = 5432,
user = "${example.database.user}",
password = "${example.database.password}")
public class MyServlet extends HttpServlet {
@Inject
Products products;
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
String searchFor = req.getParameter("searchFor"); // obtain the search string from the request
// Request only the first 20 results on a page, ordered by price, then name, then id:
Pageable pageRequest = Pageable.size(20).sortBy(Sort.desc("price"), Sort.asc("name"), Sort.asc("id"));
Page<Product> page1 = products.findByNameIgnoreCaseContains(searchFor, pageRequest);
}
}
The dataStore field of @Repository can also point at the id of a databaseStore element or the id or jndiName of a dataSource element from server configuration, or the name of a resource reference that is available to the application.
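As a hedged illustration of the resource-reference option, the sketch below declares a resource reference on a servlet and points a repository’s dataStore at it; all names (reference, data source, servlet, repository) are hypothetical and not from the original post.
import jakarta.annotation.Resource;
import jakarta.data.repository.CrudRepository;
import jakarta.data.repository.Repository;
import jakarta.servlet.http.HttpServlet;
import javax.sql.DataSource;

// Declares a resource reference that the repository below refers to by name.
@Resource(name = "java:app/env/jdbc/productsRef",
          type = DataSource.class,
          lookup = "jdbc/ProductsDataSource")
public class ProductAdminServlet extends HttpServlet {
}

// The Jakarta Data provider resolves the backing data source through that reference.
@Repository(dataStore = "java:app/env/jdbc/productsRef")
interface ProductAdminRepository extends CrudRepository<Product, Long> {
}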
For more information about Jakarta Data, see:
Your feedback is welcome on all of the Jakarta Data features and will be helpful as the specification develops further. Let us know what you think and/or get involved directly in the specification on GitHub.
Support LTPA keys rotation without a planned outage
Open Liberty can now automatically generate new primary LTPA keys files while continuing to use validation keys files to validate LTPA tokens. This update enables you to rotate LTPA keys without any disruption to the application’s user experience. Previously, application users had to log in to their applications again after the Liberty server LTPA keys were rotated, which is no longer necessary.
Primary keys are the LTPA keys in the specified keys file, which is ltpa.keys by default. Primary keys are used both for generating new LTPA tokens and for validating LTPA tokens. There can only be one primary keys file per Liberty runtime.
Validation keys are LTPA keys in any .keys files other than the primary keys file. The validation keys are used only for validating LTPA tokens. They are not used for generating new LTPA tokens. All validation keys must be located in the same directory as the primary keys file.
There are two ways to enable LTPA keys rotation without a planned outage: monitoring the primary keys file directory or specifying the validation keys file.
Monitor the directory of the primary keys file for any new validation keys files.
Enable the monitorDirectory and monitorInterval attributes. For example, add the following configuration to the server.xml:
<ltpa monitorDirectory="true" monitorInterval="5m"/>
The monitorDirectory attribute monitors the ${server.config.dir}/resources/security/ directory by default, but can monitor any directory the primary keys file is specified in. The directory monitor looks for any LTPA keys files with the .keys extension. The Open Liberty server reads these LTPA keys and uses them as validation keys.
If the monitorInterval attribute is set to 0, the default value, the directory is not monitored.
The ltpa.keys file can be renamed, for example, to validation1.keys, and then Liberty automatically regenerates a new ltpa.keys file with new primary keys that are used for all newly created LTPA tokens. The keys in validation1.keys continue to be used for validating existing LTPA tokens.
When the validation1.keys keys are no longer needed, remove them by deleting the file or by setting monitorDirectory to false. It is recommended to remove unused validation keys as it can improve performance.
Specify the validation keys file and optionally specify a date-time to stop using the validation keys.
- Copy the primary keys file (ltpa.keys) to a validation keys file, for example validation1.keys.
- Modify the server configuration to use the validation keys file by specifying a validationKeys server configuration element inside the ltpa element. For example, add the following configuration to the server.xml file:
<ltpa>
<validationKeys fileName="validation1.keys" password="{xor}Lz4sLCgwLTs=" notUseAfterDate="2024-01-02T12:30:00Z"/>
</ltpa>
The validation1.keys file can be removed from use at a specified date-time in the future with the optional notUseAfterDate attribute. It is recommended to use notUseAfterDate to ignore validation keys after a given period as it can improve performance.
The fileName and password attributes are required in the validationKeys element, but notUseAfterDate is optional.
After the validation keys file is loaded from the server configuration update, the original primary keys file (ltpa.keys) can be deleted, which triggers new primary keys to be created while continuing to use validation1.keys for validation.
Specifying validation keys in this way can be combined with enabling directory monitoring, so that validation keys that are not specified in the server.xml configuration are also used at the same time. For example:
<ltpa monitorDirectory="true" monitorInterval="5m">
<validationKeys fileName="validation1.keys" password="{xor}Lz4sLCgwLTs=" notUseAfterDate="2024-01-02T12:30:00Z"/>
</ltpa>
To see all of the Liberty <ltpa> server configuration options, see the LTPA configuration docs.
Include all files in a specified directory in your server configuration
You can use the include element in your server.xml file to specify the location of files to include in your server configuration. In previous releases, you had to specify the location of each included file individually. Now, you can place all the included files in a directory and just specify the directory location in the include element.
This is important because, when running on Kubernetes, mounting secrets as a whole folder is the only way to reflect changes to a secret dynamically in the running pod.
In the location attribute of the include element in the server.xml file, enter the directory that contains your configuration files. For example:
<include location="./common/"/>
After you make the changes, you can see the following output in the log:
[AUDIT ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/a.xml
[AUDIT ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/b.xml
[AUDIT ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/c.xml
The files in the directory are processed in alphabetical order and subdirectories are ignored.
For more information about Liberty configuration includes, see Include configuration docs.
Try it now
To try out these features, update your build tools to pull the Open Liberty All Beta Features package instead of the main release. The beta works with Java SE 21, Java SE 17, Java SE 11, and Java SE 8.
If you’re using Maven, you can install the All Beta Features package using:
<plugin>
<groupId>io.openliberty.tools</groupId>
<artifactId>liberty-maven-plugin</artifactId>
<version>3.8.2</version>
<configuration>
<runtimeArtifact>
<groupId>io.openliberty.beta</groupId>
<artifactId>openliberty-runtime</artifactId>
<version>23.0.0.10-beta</version>
<type>zip</type>
</runtimeArtifact>
</configuration>
</plugin>
You must also add dependencies to your pom.xml file for the beta version of the APIs that are associated with the beta features that you want to try. For example, for Jakarta Data Beta 3, you would include:
<dependency>
<groupId>jakarta.data</groupId>
<artifactId>jakarta-data-api</artifactId>
<version>1.0.0-b3</version>
</dependency>
For Gradle, you can install the All Beta Features package using:
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath 'io.openliberty.tools:liberty-gradle-plugin:3.6.2'
}
}
apply plugin: 'liberty'
dependencies {
libertyRuntime group: 'io.openliberty.beta', name: 'openliberty-runtime', version: '[23.0.0.10-beta,)'
}
Or if you’re using container images:
FROM icr.io/appcafe/open-liberty:beta
Or take a look at our Downloads page.
For more information on using a beta release, refer to the Installing Open Liberty beta releases documentation.
We welcome your feedback
Let us know what you think on our mailing list. If you hit a problem, post a question on StackOverflow. If you hit a bug, please raise an issue.
September 24, 2023
JAX-RS, OAuth, OpenID Connect (OIDC), Authentication, Authorization and Quarkus--airhacks.fm podcast
by admin at September 24, 2023 06:57 PM
Subscribe to airhacks.fm podcast via: spotify| iTunes| RSS
Hashtag Jakarta EE #195
by Ivar Grimstad at September 24, 2023 09:59 AM
Welcome to issue number one hundred and ninety-five of Hashtag Jakarta EE!
Home again after a couple of busy weeks on the road. Read all about it in North America JUG Tour 2023. Now, I’ll be home for a week before my next trip which will be Devoxx Belgium. I can’t believe it is almost October already.
I got an article titled Simplifying data access with MySQL and Jakarta Data published in Oracle Java Magazine this week. Check it out, or even better, try it out. It contains a step-by-step guide for how to test out Jakarta Data, which will be included in Jakarta EE 11.
JakartaOne Livestream 2023 is approaching. The event will be on December 5, 2023, and the format will be the same as the previous couple of years. Currently, the program committee is reviewing proposals. I expect the first speakers will be announced shortly. Until then, the registration is open, so I encourage you to get registered and mark your calendar. This year’s edition will be special since we will be celebrating the 5-year anniversary of Jakarta EE. I am pretty sure there will be cake!
September 22, 2023
Simplifying data access with MySQL and Jakarta Data
September 22, 2023 12:00 AM
Many applications, especially in the enterprise domain, persist or access data in some form. Relational databases are still by far the most used persistence mechanism even though they are being challenged by technologies such as NoSQL databases. This article explores some concepts for data access and looks at how the new Jakarta Data specification makes data access simpler than ever for application developers.
September 21, 2023
The Payara Monthly Catch September 2023
by Priya Khaira-Hanks at September 21, 2023 09:39 AM
It's time for the September Payara Monthly Catch - our monthly news roundup from the world of Java, Jakarta EE, MicroProfile, and open source.
The big news is: Java 21 is now out! This is the latest long-term support release of Java, so it will be supported in Payara Platform. Payara Community Server and Micro will run with Java 21 by mid-October, with Payara Enterprise supporting Java 21 by mid-December. Find our pick of great Java 21 content below, including our own articles on the subject, focusing on what it will mean for Jakarta EE users.
Watch out for us next month at Devoxx Belgium! See below to meet our team at Devoxx and join our talk, Elementary Full-stack Development with Hypermedia and Java 21. We hope to see lots of you there.
The Jakarta EE Developer Survey 2023 also came out this month; make sure to read it to find out the most used technologies and trends. Our CEO and Founder Steve Millidge commented: "The future looks bright for Jakarta EE and Payara, as we note with pride that the percentage of respondents using Payara has also increased!"
And finally, free trials of our fully managed Jakarta EE cloud native application runtime, Payara Cloud, are in full swing! Join those trying it out for free, with 15 days available to you at no charge. Sign up here.

September 20, 2023
New Jetty 12 Maven Coordinates
by Joakim Erdfelt at September 20, 2023 09:42 PM
Now that Jetty 12.0.1 is released to Maven Central, we’ve started to get a few questions about where some artifacts are, or when we intend to release them (as folks cannot find them).
Things have changed with Jetty, starting with the 12.0.0 release.
First, our historical versioning of <servlet_support>.<major>.<minor> is no longer being used. With Jetty 12, we are now using a more traditional <major>.<minor>.<patch> versioning scheme for the first time.
Also new in Jetty 12 is that the Servlet layer has been separated away from the Jetty Core layer.
The Servlet layer has been moved to the new Environments concept introduced with Jetty 12.
Environment | Jakarta EE | Servlet | Jakarta Namespace | Jetty GroupID |
---|---|---|---|---|
ee8 | EE8 | 4 | javax.servlet | org.eclipse.jetty.ee8 |
ee9 | EE9 | 5 | jakarta.servlet | org.eclipse.jetty.ee9 |
ee10 | EE10 | 6 | jakarta.servlet | org.eclipse.jetty.ee10 |
This means the old Servlet specific artifacts have been moved to environment specific locations both in terms of Java namespace and also their Maven Coordinates.
Example:
Jetty 11 – Using Servlet 5
Maven Coord: org.eclipse.jetty:jetty-servlet
Java Class: org.eclipse.jetty.servlet.ServletContextHandler
Jetty 12 – Using Servlet 6
Maven Coord: org.eclipse.jetty.ee10:jetty-ee10-servlet
Java Class: org.eclipse.jetty.ee10.servlet.ServletContextHandler
We have a migration document which lists all of the migrated locations from Jetty 11 to Jetty 12.
The new versioning scheme and the environment features built into Jetty 12 mean that new major versions of Jetty should not be needed as often as they have been in the past.
Running MicroProfile reactive with Helidon Nima and Virtual Threads
by Jean-François James at September 20, 2023 05:29 PM
September 19, 2023
New Survey: How Do Developers Feel About Enterprise Java in 2023?
by Mike Milinkovich at September 19, 2023 01:00 PM
The results of the 2023 Jakarta EE Developer Survey are now available! For the sixth year in a row, we’ve reached out to the enterprise Java community to ask about their preferences and priorities for cloud native Java architectures, technologies, and tools, their perceptions of the cloud native application industry, and more.
From these results, it is clear that open source cloud native Java is on the rise following the release of Jakarta EE 10. The number of respondents who have migrated to Jakarta EE continues to grow, with 60% saying they have already migrated or plan to do so within the next 6-24 months. These results indicate steady growth in the use of Jakarta EE and a growing interest in cloud native Java overall.
When comparing the survey results to 2022, usage of Jakarta EE to build cloud native applications has remained steady at 53%. Spring/Spring Boot, which relies on some Jakarta EE specifications, continues to be the leading Java framework in this category, with usage growing from 57% to 66%.
Since the September 2022 release, Jakarta EE 10 usage has grown to 17% among survey respondents. This community-driven release is attracting a growing number of application developers to adopt Jakarta EE 10 by offering new features and updates to Jakarta EE. An equal number of developers are running Jakarta EE 9 or 9.1 in production, while 28% are running Jakarta EE 8. That means the increase we are seeing in the migration to Jakarta EE is mostly due to the adoption of Jakarta EE 10, as compared to Jakarta EE 9/9.1 or Jakarta EE 8.
The Jakarta EE Developer Survey also gives us a chance to get valuable feedback on features from the latest Jakarta EE release, as well as what direction the project should take in the future.
Respondents are most excited about the Jakarta EE Core Profile, which was introduced in the Jakarta EE 10 release as a subset of Web Profile specifications designed for microservices and ahead-of-time compilation. When it comes to future releases, the community is prioritizing better support for Kubernetes and microservices, as well as adapting Java SE innovations to Jakarta EE, a priority that has grown in popularity since 2022. This is a good indicator that the Jakarta EE 11 release plan is heading in the right direction by adopting new Java SE 21 features.
2,203 developers, architects, and other tech professionals participated in the survey, a 53% increase from last year. This year’s survey was also available in Chinese, Japanese, Spanish & Portuguese, making it easier for Java enthusiasts around the world to share their perspectives. Participation from the Chinese Jakarta EE community was particularly strong, with over 27% of the responses coming from China. By hearing from more people in the enterprise Java space, we’re able to get a clearer picture of what challenges developers are facing, what they’re looking for, and what technologies they are using. Thank you to everyone who participated!
Learn More
We encourage you to download the report for a complete look at the enterprise Java ecosystem.
If you’d like to get more information about Jakarta EE specifications and our open source community, sign up for one of our mailing lists or join the conversation on Slack. If you’d like to participate in the Jakarta EE community, learn how to get started on our website.
September 18, 2023
How to upgrade to Quarkus 3
by F.Marchioni at September 18, 2023 04:34 PM
This article discusses how to upgrade your existing Quarkus 2.x applications to Quarkus 3.x using the Quarkus CLI tool. We will first look at the impact of the upgrade on a Quarkus 2 application. Then, we will show how to perform the upgrade with just a single command line! What is new in Quarkus ... Read more
The post How to upgrade to Quarkus 3 appeared first on Mastertheboss.
September 16, 2023
Addressing CVE-2023-4853 in Quarkus
by F.Marchioni at September 16, 2023 08:17 AM
The CVE-2023-4853 vulnerability impacts the Quarkus framework’s HTTP Security Policy. This policy provides access control to various endpoints within an application, enabling developers to secure access based on path-based configurations. However, a critical flaw has been identified in how the HTTP Security Policy handles request paths containing multiple adjacent forward-slash characters. Issue Summary ... Read more
The post Addressing CVE-2023-4853 in Quarkus appeared first on Mastertheboss.
September 14, 2023
GlassFish Embedded – a simple way to run Jakarta EE apps
by Ian Blavins at September 14, 2023 09:49 PM
I’ve been asked by the Eclipse GlassFish project to say a few words about how I use GlassFish Embedded. And since they are working on a series of complex issues that I have raised I guess that is fair. The OmniFish team is one of the main contributors to the GlassFish project and I allowed them to post my article on their blog too.
Running GlassFish Embedded is pretty straightforward – you start GlassFish within a client application and deploy the server application to the running embedded GlassFish. The embedded server is created and destroyed by the tool on each tool session.
I started using GlassFish while GlassFish was in Oracle’s hands as a reference implementation of Java EE. Some time ago, there were suggestions that GlassFish was not being actively maintained. Since Oracle’s donation of GlassFish to the Eclipse Foundation, with support from the Foundation’s GlassFish team, and the OmniFish team (that also provides commercial support), the GlassFish project is very active and the community around it is certainly present and responsive. That’s one more reason for me to continue using GlassFish in the future.

Overview of the APILoader Project
My project is APILoader. APILoader is, I believe (quite possibly wrongly), the seed for the next generation of software performance testing tools. I won’t say a lot about APILoader since it hasn’t been released yet and there is IP to protect. But I can say a bit about how it uses GlassFish Embedded.
Some software performance engineers work in teams and need to share artefacts. For them, a server-based tool is appropriate. Others work individually. For them, a server-based tool is an overkill, implying, as it does, server administration. So APILoader has a server component and a client component but they are deployed differently depending on the needs of their users. In all deployments, the database remains external.
For teams, I intend that APILoader be deployed as a server installation with multiple clients. The engineers share the server and the database for artefacts, and use the clients for isolation. APILoader supports accounts and projects. Accounts are hermetically sealed sets of projects. Projects are separated sets of artefacts, but with the option to copy selected artefact types between them. So engineers can work completely separately by using different accounts. Or they can work in the same account, sharing account level resources, but with separate sets of project-level artefacts. Or they can work on the same project and share account and project-level resources. A team might use different accounts for testing different products where artefact sharing is unlikely. That team might use a different project for performance testing of each release of one product, initially populating each project selectively from its predecessor.
For an individual, I intend that APILoader be deployed using GlassFish Embedded. This obviates the server administration, as the embedded server is created and destroyed by the tool on each tool session. The individual can still use accounts and projects to separate the artefact sets for different pieces of work.
There is potentially a hybrid approach where each engineer runs an embedded GlassFish instance but they choose to share a single (networked) database. The issue is that the ‘database’ in APILoader is distributed, with some data held in a relational database and some held in files associated with the server. So, in this scenario, those artefacts that are held in the database would be shared but those held by the server would not be (since each engineer has their own embedded server). This scenario doesn’t appear useful as it stands because the relational database is used to access the file-based artefacts held by the server and only a subset of file-based artefacts would be reachable by each engineer. It could be made an installation option that the file-based artefacts be held in one repository, independent of the servers. Then all artefacts would be shared. The result would be an installation that is shared by the team but with a greater degree of isolation for each engineer since the server isn’t shared.
Simple Setup with GlassFish Embedded
Running GlassFish Embedded is pretty straightforward – you start GlassFish within a client application and deploy the server application to the running embedded GlassFish. There is very little to do in the server application to cater for being runnable both as a remote server or embedded. (Or maybe there was more than I remember but it is a once-only thing.)
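For readers who have not seen it, a minimal bootstrap might look like the sketch below, which uses the org.glassfish.embeddable API; the WAR path and lifecycle details are illustrative and not APILoader’s actual code.
import java.io.File;
import org.glassfish.embeddable.Deployer;
import org.glassfish.embeddable.GlassFish;
import org.glassfish.embeddable.GlassFishRuntime;

public class EmbeddedLauncher {
    public static void main(String[] args) throws Exception {
        // Boot an embedded GlassFish instance inside this JVM.
        GlassFish glassfish = GlassFishRuntime.bootstrap().newGlassFish();
        glassfish.start();

        // Deploy the server-side application into the embedded instance.
        Deployer deployer = glassfish.getDeployer();
        deployer.deploy(new File("target/apiloader-server.war"));

        // ... run the client session against the embedded server ...

        // The embedded server goes away when the tool session ends.
        glassfish.stop();
        glassfish.dispose();
    }
}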
However, there is one major consideration. GlassFish Embedded runs in the same JVM as the client. In remote server mode it doesn’t – it runs in a separate JVM process, often on a remote machine. This has significant implications for static resources. In embedded use, a static resource is shared between the client application and the embedded server. This allows some tempting shortcuts in coding that won’t work in non-embedded deployment.
Benefits of Remote EJBs as a communication method
The APILoader client is a (very) fat GUI. It started life as a web client, but I found myself spending inordinate amounts of time on the minutiae of HTML presentation. So now it’s a GUI. As such, communication with the server presents new options. I have chosen to use remote EJBs. These work just as well against a remote or embedded GlassFish server. Once you overcome the issue of making the remote class definitions available to the client application, server EJBs are pretty straightforward to use. And, with a GUI client, they are simpler to use than HTTP-based messaging. The APILoader server and client communicate complex objects. With EJBs, the serialisation is done automatically. With HTTP-based communication, it would have to be done explicitly via SOAP, XML, or Gson/JSON.
Note that the APILoader client is not an enterprise client. So it isn’t deployed to the server to run, and the EJBs aren’t injected. Instead, the client gets access to the server’s remotely accessible methods by doing context lookup() calls.
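A simplified sketch of such a lookup is shown below; the JNDI name and the ProjectService interface are placeholders (APILoader’s real remote interfaces are not published), and a standalone client would also need the GlassFish client libraries on its classpath.
import javax.naming.InitialContext;

// Hypothetical remote business interface; used only to make the sketch compile.
interface ProjectService {
    java.util.List<String> listProjects();
}

public class ServerConnector {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Portable global JNDI name; application, bean, and interface names are placeholders.
        ProjectService projects = (ProjectService) ctx.lookup(
                "java:global/apiloader-server/ProjectServiceBean!ProjectService");
        System.out.println(projects.listProjects());
    }
}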
Simplified Distribution and Support
The other benefit of GlassFish Embedded is simplified distribution and support of APILoader for those clients that select it. Packaging and distributing APILoader only has to cater for one brand of server, and one release of that server. On the other hand, support for the server option is easier in bigger teams because the server environment is usually better understood by infrastructure teams.
September 12, 2023
Welcome to the Liquibase Community | The Two Minutes Tuesday 038 | Open Source
by Markus Karg at September 12, 2023 07:30 PM
I have fixed a lot of bugs in Liquibase, but it was worth it!
If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patreon https://www.patreon.com/mkarg. Thanks!
September 10, 2023
How to set a custom initial value for Ids in JPA
by F.Marchioni at September 10, 2023 04:41 PM
In the Java Persistence API (JPA), entities require unique identifiers for database records. JPA provides several strategies for generating these identifiers, such as IDENTITY, SEQUENCE, and TABLE. However, there are cases where you might need to set a custom initial value for these identifiers using the TABLE and SEQUENCE strategies. In this tutorial, we will explore ... Read more
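As a minimal sketch of the idea (not taken from the linked tutorial), a sequence-based generator with a custom initial value can be declared like this; the entity, sequence, and field names are illustrative.
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.SequenceGenerator;

@Entity
public class Customer {

    @Id
    // The database sequence starts handing out values at 1000 instead of the default.
    @SequenceGenerator(name = "customer_seq", sequenceName = "customer_seq",
            initialValue = 1000, allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_seq")
    private Long id;

    private String name;
}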
The post How to set a custom initial value for Ids in JPA appeared first on Mastertheboss.
Quarkus CRUD Example with Panache Data
by F.Marchioni at September 10, 2023 02:05 PM
In this tutorial we will learn how to create a REST CRUD application in Quarkus, starting from a Hibernate Panache Entity. We will show two different approaches: in the first one we will create a REST Resource to map the CRUD methods. Then, we will show how to use REST Data Panache to generate automatically ... Read more
The post Quarkus CRUD Example with Panache Data appeared first on Mastertheboss.
August 30, 2023
Best Practices for Effective Usage of Contexts Dependency Injection (CDI) in Java Applications
by Rhuan Henrique Rocha at August 30, 2023 10:55 PM
Looking around the web, we don’t see many articles about Contexts and Dependency Injection best practices. Hence, I have decided to discuss the use of Contexts and Dependency Injection (CDI) following best practices, providing a comprehensive guide to its implementation.
CDI is a Jakarta specification in the Java ecosystem that allows developers to use dependency injection, manage contexts, and inject components in an easier way. The article https://www.baeldung.com/java-ee-cdi defines CDI as follows:
CDI turns DI into a no-brainer process, boiled down to just decorating the service classes with a few simple annotations, and defining the corresponding injection points in the client classes.
If you want to learn the CDI concepts you can read Baeldung’s post and Otavio Santana’s post. Here, in this post, we will focus on the best practices topic.
In fact, CDI is a powerful framework that allows developers to use Dependency Injection (DI) and Inversion of Control (IoC). However, we have one question here: how tightly do we want our application to be coupled to the framework? Note that I’m not saying you cannot couple your application to a framework, but you should think about it, think about the coupling level, and think about the tradeoffs. For me, coupling an application to a framework is not wrong, but doing it without thinking about the coupling level and the cost and tradeoffs is wrong.
It is impossible to add a framework to your application without coupling your application to it at least minimally. Even if your application does not have coupling expressed in the code, it probably has behavioral coupling, that is, a behavior in your application depends on a framework’s behavior, and in some cases you cannot guarantee that another framework will provide similar behavior if you need to change.
Best Practices for Injecting Dependencies
When writing code in Java, we often create classes that rely on external dependencies to perform their tasks. To achieve this using CDI, we employ the @Inject annotation, which allows us to inject these dependencies. However, it’s essential to be mindful of whether we are making the class overly dependent on CDI for its functionality, as it may limit its usability without CDI. Hence, it’s crucial to carefully consider the tightness of this dependency. As an illustration, let’s examine the code snippet below. Here, we encounter a class that is tightly coupled to CDI in order to carry out its functionality.
public class ImageRepository {
@Inject
private StorageProvider storageProvider;
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
As you can see, the ImageRepository class has a dependency on StorageProvider, which is injected via a CDI annotation. However, the storageProvider field is private and we don’t have a setter method or a constructor that allows us to pass this dependency in. This means the class cannot work without a CDI context, that is, the ImageRepository is tightly coupled to CDI.
This coupling doesn’t provide any benefits for the application; instead, it only causes harm, both to the application itself and potentially to the testing of this class.
Look at the code refactored to reduce the coupling to CDI.
public class ImageRepository implements Serializable {
private StorageProvider storageProvider;
@Inject
public ImageRepository(StorageProvider storageProvider){
this.storageProvider = storageProvider;
}
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
As you can see, the ImageRepository class has a constructor that receives the StorageProvider as a constructor argument. This approach follows what is said in the Clean Code book:
“True Dependency Injection goes one step further. The class takes no direct steps to resolve its dependencies; it is completely passive. Instead, it provides setter methods or constructor arguments (or both) that are used to inject the dependencies.”
(from “Clean Code: A Handbook of Agile Software Craftsmanship” by Martin Robert C.)
Without a constructor or a setter method, the injection depends on the CDI. However, we still have one question about this class: the class has a CDI annotation and depends on CDI to be compiled. I’m not saying it is always a problem, but it can be a problem, especially if you are writing a framework. Coupling a framework with another framework can be a problem in cases where you want to use your framework with another, mutually exclusive one. In general, it should be avoided by frameworks. Thus, how can we fully decouple the ImageRepository class from CDI?
CDI Producer Method
A CDI producer is a source of objects that can be injected by CDI. It is like a factory for a type of object. Look at the code below:
public class ImageRepositoryProducer {
@Produces
public ImageRepository createImageRepository(){
StorageProvider storageProvider = CDI.current().select(StorageProvider.class).get();
return new ImageRepository(storageProvider);
}
}
Please note that we are constructing just one object; the StorageProvider object is obtained from CDI. You should avoid constructing more than one object within a producer method, as this interlinks the construction of these objects and may lead to complications if you intend to designate distinct scopes for them. You can create a separate producer method to produce the StorageProvider, as sketched below.
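A possible shape for that separate producer is sketched below; FileSystemStorageProvider is a hypothetical implementation used only to illustrate the idea.
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

public class StorageProviderProducer {

    @Produces
    @ApplicationScoped
    public StorageProvider createStorageProvider() {
        // FileSystemStorageProvider is a hypothetical implementation used only
        // to illustrate the idea; return whichever implementation the app needs.
        return new FileSystemStorageProvider();
    }
}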
This is the ImageRepository class refactored.
public class ImageRepository implements Serializable {
private StorageProvider storageProvider;
public ImageRepository(StorageProvider storageProvider){
this.storageProvider = storageProvider;
}
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
Please note that the ImageRepository class does not know anything about CDI and is fully decoupled from it. The CDI-related code is inside the ImageRepositoryProducer, which can be extracted to another module if needed.
CDI Interceptor
The CDI interceptor is a very cool feature of CDI that provides a nice CDI-based way to work with cross-cutting tasks (such as auditing). Here is a short definition from my book:
“A CDI interceptor is a class that wraps the call to a method — this method is called target method — that runs its logic and proceeds the call either to the next CDI interceptor if it exists, or the target method.”
(from “Jakarta EE for Java Developers” by Rhuan Rocha.)
The purpose of this article is not to discuss what a CDI interceptor is, but to discuss CDI best practices. So if you want to read more about CDI interceptor, check out the book Jakarta EE for Java Developers.
As said, the CDI interceptor is very interesting. I am quite fond of this feature and have incorporated it into numerous projects. However, using this feature comes with certain trade-offs for the application.
When you use a CDI interceptor, you couple the class to CDI, because you have to annotate the class with a custom annotation that is an interceptor binding. Look at the example below, shown in the Jakarta EE for Java Developers book:
@ApplicationScoped
public class SecuredBean{
@Authentication
public String generateText(String username) throws AutenticationException{
return "Welcome "+username;
}
}
As you can see we should define a scope, as it should be a bean managed by CDI, and you should be annotating the class with the interceptor binding. Hence, if you eliminate CDI from your application, the interceptor’s logic won’t execute, and the class won’t be compiled. With this, your application has a behavioral coupling, and a dependency on the CDI lib jar to compile.
Again, this is not necessarily bad, but you should consider whether it is a problem in your context.
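For context, the @Authentication annotation used above is an interceptor binding, and its logic lives in a separate interceptor class. A minimal sketch of both pieces (my own illustration, with a placeholder check rather than real authentication logic) could look like this:

// Authentication.java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import jakarta.interceptor.InterceptorBinding;

@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface Authentication {
}

// AuthenticationInterceptor.java
import jakarta.annotation.Priority;
import jakarta.interceptor.AroundInvoke;
import jakarta.interceptor.Interceptor;
import jakarta.interceptor.InvocationContext;

@Authentication
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class AuthenticationInterceptor {

    // Wraps the call to the target method: runs its own logic, then proceeds.
    @AroundInvoke
    public Object authenticate(InvocationContext context) throws Exception {
        // placeholder for the real authentication check
        return context.proceed();
    }
}

Note how the binding annotation, the interceptor and the intercepted bean all live in the CDI world, which is exactly the coupling discussed above.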
CDI Event
The CDI Event is a great feature of CDI that I have used extensively in various applications. It provides an implementation of the Observer pattern, enabling us to emit events that are then handled by observers, optionally asynchronously. However, if we put CDI code inside our class to emit events, we couple the class to CDI. Again, this is not an error, but you should be sure it is not a problem for your solution. Look at the example below.
import jakarta.enterprise.event.Event;

public class User {

    private Event<Email> emailEvent;

    public User(Event<Email> emailEvent) {
        this.emailEvent = emailEvent;
    }

    public void register() {
        //logic
        //from, to, subject and content come from the registration logic (omitted here)
        emailEvent.fireAsync(Email.of(from, to, subject, content));
    }
}
Note that we are receiving the Event class, which comes from CDI, to emit the event. This means the class is coupled to CDI and depends on it to work. One way to avoid this is to create your own abstraction to emit the event, hiding which mechanism (CDI or another) actually emits it. Look at the example below.
import net.rhuan.example.EventEmitter;

public class User {

    private EventEmitter<Email> emailEventEmitter;

    public User(EventEmitter<Email> emailEventEmitter) {
        this.emailEventEmitter = emailEventEmitter;
    }

    public void register() {
        //logic
        emailEventEmitter.emit(Email.of(from, to, subject, content));
    }
}
Now your class is agnostic about how the event is emitted. You can use CDI or anything else, depending on the EventEmitter implementation.
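One possible CDI-backed implementation of that abstraction, as a sketch (assuming the EventEmitter interface simply declares a void emit(T payload) method), could be:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Event;
import jakarta.inject.Inject;

@ApplicationScoped
public class CdiEmailEventEmitter implements EventEmitter<Email> {

    // Only this class knows that CDI is the underlying event mechanism.
    @Inject
    private Event<Email> emailEvent;

    @Override
    public void emit(Email email) {
        emailEvent.fireAsync(email);
    }
}

Swapping CDI for another mechanism (a message broker, for example) then only requires another EventEmitter implementation, without touching the User class.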
Conclusion
CDI is an amazing Jakarta EE specification that is widely used in many Java frameworks and applications. Carefully determining the degree of coupling between your application and the framework matters a great deal. This deliberate decision helps to proactively mitigate challenges as the solution evolves, especially when you are developing a framework yourself.
If you have a question or want to share your thoughts, feel free to add comments or send me messages about it.
August 22, 2023
How to create Jobs in Kubernetes
by F.Marchioni at August 22, 2023 12:52 PM
This article discusses how to automate tasks in Kubernetes and OpenShift using Jobs and Cron Jobs. We will show some examples of how to create and manage them. Then we will discuss best practices for using Jobs in Kubernetes. In a Kubernetes environment, you can use Jobs to automate tasks that need to run ... Read more
The post How to create Jobs in Kubernetes appeared first on Mastertheboss.
August 10, 2023
TimezoneStorageType – Hibernate’s improved timezone mapping
by Thorben Janssen at August 10, 2023 05:55 AM
Working with timestamps with timezone information has always been a struggle. Since Java 8 introduced the Date and Time API, OffsetDateTime and ZonedDateTime have become the most obvious and commonly used types to model a timestamp with timezone information. And you might expect that choosing one of them should be the only thing you need...
The post TimezoneStorageType – Hibernate’s improved timezone mapping appeared first on Thorben Janssen.
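As a quick taste of the feature discussed in the post (my own sketch based on Hibernate 6, not code from the original article), the storage strategy can be selected per attribute with the @TimeZoneStorage annotation:

import java.time.OffsetDateTime;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.hibernate.annotations.TimeZoneStorage;
import org.hibernate.annotations.TimeZoneStorageType;

@Entity
public class Meeting {

    @Id
    @GeneratedValue
    private Long id;

    // Keeps the offset in an extra column next to the timestamp column.
    @TimeZoneStorage(TimeZoneStorageType.COLUMN)
    private OffsetDateTime startsAt;

    // Normalizes the value to a single zone and stores a plain timestamp.
    @TimeZoneStorage(TimeZoneStorageType.NORMALIZE)
    private OffsetDateTime createdAt;
}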
August 04, 2023
Upgrade to Jakarta EE 10 – part 3: Transform incompatible Dependencies
by Ondro Mihályi at August 04, 2023 05:43 AM
In this article, we’ll address upgrading the individual libraries used by your applications. This solves two problems. First, it improves the build time of your application during development by removing the overhead of transforming the final binary after each build. And second, it solves compilation problems you can face with some libraries after you adjust the source code of your application for Jakarta EE 10.
Earlier, we described how to transform an application binary, e.g. a WAR file, to make it compatible with Jakarta EE 10 so that it can be deployed to GlassFish 7. But this transformation is slow and needs to be done with every build. This doesn’t make developers happy because it increases the time to build and deploy the application after they make changes to the source code.
We also described how to automate transforming the application’s source code to compile it with the Jakarta EE 10 APIs. However, after doing this, there’s a high chance your application won’t compile. This is because some of the libraries used by your application may not be compatible with Jakarta EE 10. They are only transformed after the application is built, which is too late for compilation.
In this article, we’ll explain a few simple approaches to making sure that external libraries are compatible with Jakarta EE 10, so that everything compiles correctly and no transformation is necessary after each build.
What’s the problem, really?
Libraries in your application fall into these categories:
- The library doesn’t use Java EE APIs at all – no problem here, just continue using it as before
- There’s a version of the library compatible with Jakarta EE 10 and you can update to this version – just update it.
- The library doesn’t have a version compatible with Jakarta EE 10 or you can’t update it for some reason. It needs to be transformed.
- The library depends on features removed in Jakarta EE 10 and cannot be updated to a version compatible with Jakarta EE 10. Tough luck, no simple solution here, though this category is very rare.
While, obviously, the first category doesn’t cause any problems, libraries that use APIs not compatible with Jakarta EE 10 need some treatment. Libraries that have a version compatible with Jakarta EE 10 can be simply updated to this version. We’ll describe some examples of such widely used libraries below.
Some other libraries have not yet been updated for Jakarta EE 10. Or you cannot afford to update them to a new version for whatever reason (risk of regression, missing feature in the new version, etc.). Then you’ll need to transform them yourself into a version compatible with Jakarta EE 10. This can be done outside of your application project so that you transform the libraries once and then use the transformed library when building the application.
In some rare cases, you may come across a library, which is not compatible with Jakarta EE 10 even after the transformation, because it depends on some old APIs removed in Jakarta EE 10. Each such library may require specific treatment, and describing the techniques which can be used would be for another whole article. Therefore we’ll not address these rare cases now.
Update libraries to a Jakarta EE 10 version
Most of the libraries widely used in enterprise projects already support Jakarta EE 10 so it’s easy to just update them to a newer version.
For example, to update the Hibernate library, just increase the version number. Here’s an example for the version 6.2.7.Final, the latest version at this moment:
<dependency>
<groupId>org.hibernate.orm</groupId>
<artifactId>hibernate-core</artifactId>
<version>6.2.7.Final</version>
</dependency>
However, some libraries maintain support for both Jakarta EE 9+ and older Jakarta EE and Java EE versions. Those libraries have two variants for the same library version. In that case, their Maven artifact for Jakarta EE 9+ is usually published with the same coordinates as before but with the jakarta
classifier. You need to specify an additional <classifier>jakarta</classifier>
configuration in the dependency definition, so that the correct variant is downloaded and used in your application. For example, to update the Primefaces library to version 13 and the Jakarta EE 10 variant (jakarta
classifier):
<dependency>
<groupId>org.primefaces</groupId>
<artifactId>primefaces</artifactId>
<version>13.0.0</version>
<classifier>jakarta</classifier>
</dependency>
Some other libraries also provide both variants but the Maven artifact is published under different coordinates. One example is the Jackson library, which publishes the artifacts in a completely different groupId and artifactId which contain jakarta
in the name:
<dependency>
<groupId>com.fasterxml.jackson.jakarta.rs</groupId>
<artifactId>jackson-jakarta-rs-json-provider</artifactId>
<version>2.15.2</version>
</dependency>
Popular libraries which support Jakarta EE 9+
Here are some examples of popular libraries that support Jakarta EE 9+ in their recent versions (either the main artifact or variant with the jakarta
classifier):
Library name | Maven dependency definition |
Hibernate | org.hibernate.orm:hibernate-core |
Omnifaces | org.omnifaces:omnifaces |
Jackson | com.fasterxml.jackson.jakarta.rs:jackson-jakarta-rs-json-provider |
Apache Deltaspike | org.apache.deltaspike.modules (all artifacts with the jakarta classifier) |
Primefaces | org.primefaces:primefaces (jakarta classifier) |
Spring Framework 6 | https://spring.io/ |
Spring Boot 3 | https://spring.io/projects/spring-boot |
Transform libraries for Jakarta EE 10
When it’s not possible to upgrade a library, we can transform individual libraries with the Eclipse Transformer, using a technique similar to transforming the whole application WAR, which we explained in a previous article. You can use the Eclipse Transformer also on individual library JARs and then use the transformed JARs during the build. However, in modern Maven or Gradle based projects, this isn’t natural because of transitive dependencies. There’s currently no tooling that would properly transform all the transitive dependencies and install them correctly to a local repository. Therefore we’ll use a trick – we’ll merge all JARs that need to be transformed into a single JAR (Uber JAR), with all the transitive dependencies, then transform it, and then install this single JAR into a Maven repository. Then we’ll change the application to depend on this single artifact instead of depending on all the individual artifacts.
First, here’s an example list of dependencies from pom.xml
file of a Maven project which aren’t compatible with Jakarta EE 10:
File pom.xml
in the “application
” project:
<groupId>ee.omnifish</groupId>
<artifactId>jakarta-app</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<dependencies>
<!-- Jakarta EE 8 API -->
<dependency>
<groupId>jakarta.platform</groupId>
<artifactId>jakarta.jakartaee-api</artifactId>
<version>8.0.0</version>
<scope>provided</scope>
</dependency>
<!-- incompatible with Jakarta EE 10 API -->
<dependency>
<groupId>net.sf.jasperreports</groupId>
<artifactId>jasperreports</artifactId>
<version>6.20.1</version>
</dependency>
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>2.3.2</version>
</dependency>
</dependencies>
With this as a starting point, we’ll create a new Maven project next to our existing project. For convenience, we can move both projects into a Maven POM project as modules so that we can build everything together if needed. We will move all the dependencies we need to transform into this new project. We will remove all those dependencies from the original project and replace them with a single dependency on the new project.
The pom.xml
of the new project would look like this:
Snippet of the pom.xml
file in a new “transform-dependencies
” project:
<groupId>ee.omnifish.transformed</groupId>
<artifactId>transform-dependencies</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<dependencies>
<!-- dependencies not compatible with Jakarta EE 10+
- will be transformed in this JAR artifact -->
<dependency>
<groupId>net.sf.jasperreports</groupId>
<artifactId>jasperreports</artifactId>
<version>6.20.1</version>
</dependency>
<dependency>
<groupId>org.quartz-scheduler</groupId>
<artifactId>quartz</artifactId>
<version>2.3.2</version>
</dependency>
</dependencies>
In the final WAR file, instead of having each JAR file separately in the WAR, like this:
- WEB-INF
  - lib
    - jasperreports.jar
    - quartz.jar
  - classes
We will end up with a single transformed JAR, like this:
- WEB-INF
  - lib
    - transform-dependencies.jar
  - classes
This transform-dependencies.jar
file will contain all the artifacts merged into it – it will contain all classes and files from all the artifacts.
In order to achieve this, we can use the Maven Shade plugin, which merges multiple JAR files into a single artifact produced by the project:
Snippet of the pom.xml
file in a new “transform-dependencies
” project:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.5.0</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<shadedClassifierName>jakarta</shadedClassifierName>
<shadedArtifactAttached>true</shadedArtifactAttached>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"/>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
This plugin takes all the dependencies defined in the project, merges them into a single Uber JAR and attaches the JAR to the project as an artifact with the jakarta classifier. It would be nicer if it attached the JAR as the main artifact, without the classifier, but that would cause a conflict with the Transformer plugin we need to run on the Uber JAR. Therefore we use the extra jakarta classifier here, and we’ll need to use this classifier also in the original project when we define the dependency on this new project.
Now we add the Transformer plugin to transform the Uber JAR to make it compatible with Jakarta EE 9+. We need to configure the Transformer plugin with the following:
- execute the goal “jar”
- use the “jakartaDefaults” rule to apply transformations for Jakarta EE 9
- define the artifact with the classifier "jakarta" produced by the Maven Shade plugin. This will have the same groupId, artifactId and version as the current project
Snippet of the pom.xml
file in a new “transform-dependencies
” project:
<plugin>
<groupId>org.eclipse.transformer</groupId>
<artifactId>transformer-maven-plugin</artifactId>
<version>0.5.0</version>
<executions>
<execution>
<id>jar</id>
<phase>package</phase>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
<configuration>
<rules>
<jakartaDefaults>true</jakartaDefaults>
</rules>
<artifact>
<groupId>${project.groupId}</groupId>
<artifactId>${project.artifactId}</artifactId>
<classifier>jakarta</classifier>
</artifact>
</configuration>
</plugin>
We can now build the transform-dependencies
project with the standard Maven command:
mvn install
The original pom.xml
file should now only depend on this new artifact:
<groupId>ee.omnifish</groupId>
<artifactId>jakarta-app</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>
<dependencies>
<!-- Jakarta EE 8 API -->
<dependency>
<groupId>jakarta.platform</groupId>
<artifactId>jakarta.jakartaee-api</artifactId>
<version>8.0.0</version>
<scope>provided</scope>
</dependency>
<!-- Uber JAR artifact that includes classes from all
the dependencies that need to be transformed -->
<dependency>
<groupId>ee.omnifish.transformed</groupId>
<artifactId>transform-dependencies</artifactId>
<version>1.0-SNAPSHOT</version>
<classifier>jakarta</classifier>
</dependency>
</dependencies>
When we now build the original application project, it will use the transformed Uber JAR in the build and add it to the final WAR instead of all individual (untransformed) JARs. We can now deploy this WAR to a Jakarta EE 10 application server like GlassFish 7. If none of the original JARs depends on APIs removed between Jakarta EE 9 and 10, the application should now work as expected!
A full example of this approach is here: https://github.com/OmniFish-EE/upgrading-jakarta-ee-applications/tree/main/javax-jakarta-transform-dependencies-uberjar
Evolve the project in the future
In the future, it’s likely that some of the libraries we need to transform with the transform-dependencies
project will become compatible with Jakarta EE 10. We can then easily update them and stop transforming them. We just need to add the dependency back to the original “application” pom.xml file with a new version number, and remove it from the dependencies in the transform-dependencies
project. The application will thus start depending on an official version of the library without transformation. Eventually, if you’re able to update all of the libraries, you can discard the transform-dependencies
project and keep a single native Jakarta EE 10 project. This approach with a helper transform-dependencies
project is only a temporary solution until the time you can update all the libraries to Jakarta EE 10.
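For example, once a hypothetical Jakarta EE 10 compatible release of one of the transformed libraries appears (the version below is a placeholder, not a real release), the dependency simply moves back into the application pom.xml:

<!-- back in the "application" pom.xml, using a hypothetical
     Jakarta EE 10 compatible version -->
<dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>x.y.z</version>
</dependency>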
Conclusion
In this article, we described the last piece of the migration process which can be fully automated with very little effort. If you combine all the steps described here and in the previous articles in this series, you’ll be able to migrate your older Java EE project to Jakarta EE 9. In case your project doesn’t depend on any APIs dropped in Jakarta EE 10, your migration will be completed and you can start using new features in Jakarta EE 10. If you’re less fortunate, there’s still some work to do to refactor your code or adjust libraries that use the removed APIs to start using newer alternative APIs in Jakarta EE 10 that replace them. Most of these refactorings can be automated with custom Eclipse Transformer rules but some are more complicated and hard to automate. We’ll deal with them in future articles.
Resources
A Github repository with sample applications: https://github.com/OmniFish-EE/upgrading-jakarta-ee-applications/#readme
July 30, 2023
How to Upload and Download Files with a Servlet
by F.Marchioni at July 30, 2023 08:50 AM
This article will illustrate how to upload files using the Jakarta Servlet API. We will also learn how to use a Servlet to download a File that is available remotely on the Server. Uploading Files with a Servlet By using Jakarta Servlet API it is pretty simple to upload a File without the need of ... Read more
The post How to Upload and Download Files with a Servlet appeared first on Mastertheboss.
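As a rough sketch of the upload side covered in the post (this is my own minimal example, not code from the original article), a Jakarta Servlet can accept a multipart request like this:

import java.io.IOException;
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.MultipartConfig;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import jakarta.servlet.http.Part;

@WebServlet("/upload")
@MultipartConfig // enables multipart/form-data handling for this servlet
public class UploadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // "file" is the form field name; adjust it to match your upload form.
        Part filePart = request.getPart("file");
        // Writes the uploaded content relative to the configured multipart location.
        filePart.write(filePart.getSubmittedFileName());
        response.getWriter().println("Uploaded " + filePart.getSubmittedFileName());
    }
}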
July 24, 2023
How to upgrade WildFly JSF version with Galleon
by F.Marchioni at July 24, 2023 08:10 AM
This article will teach you how to add MyFaces 4 (or newer) support to your WildFly installation using Galleon features pack. At the end of it, you will be able to complete the upgrade of your JSF implementation in no time! Choosing a different JSF Implementation in WildFly Out of the box, WildFly ships with ... Read more
The post How to upgrade WildFly JSF version with Galleon appeared first on Mastertheboss.
July 14, 2023
How to run standalone Jakarta Batch Jobs
by F.Marchioni at July 14, 2023 10:16 AM
Jakarta Batch, formerly known as Java Batch, is a specification that provides a standardized approach for implementing batch processing in Java applications. It offers a robust and scalable framework for executing large-scale, long-running, and data-intensive tasks. In this tutorial, we will explore the process of running Jakarta Batch Jobs as standalone Java applications, discussing the ... Read more
The post How to run standalone Jakarta Batch Jobs appeared first on Mastertheboss.
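As a hint of what running a job standalone looks like (a sketch of my own, assuming a job definition named simpleJob under META-INF/batch-jobs and a batch runtime on the classpath), the JobOperator can be driven from a plain main method:

import java.util.Properties;
import jakarta.batch.operations.JobOperator;
import jakarta.batch.runtime.BatchRuntime;

public class StandaloneJobLauncher {

    public static void main(String[] args) {
        // Obtains the JobOperator from whatever batch runtime is on the classpath.
        JobOperator jobOperator = BatchRuntime.getJobOperator();
        // Starts the job described in META-INF/batch-jobs/simpleJob.xml.
        long executionId = jobOperator.start("simpleJob", new Properties());
        System.out.println("Started job execution " + executionId);
    }
}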
June 20, 2023
Designing Quarkus Front-Ends with Vaadin made easy
by F.Marchioni at June 20, 2023 08:26 AM
Vaadin Flow provides a comprehensive set of UI components and tools for creating rich and interactive user interfaces, while Quarkus offers a lightweight and efficient Java framework for developing cloud-native applications. In this article, we will explore how to combine the strengths of Vaadin and Quarkus to build web applications with ease. What is Vaadin? ... Read more
The post Designing Quarkus Front-Ends with Vaadin made easy appeared first on Mastertheboss.
June 09, 2023
Quarkus Transaction Timeout configuration
by F.Marchioni at June 09, 2023 12:50 PM
Quarkus is a popular framework for building efficient and scalable Java applications. One critical aspect of application development is managing transactions, and Quarkus provides flexible ways to configure transaction timeouts. In this article, we’ll explore how to configure transaction timeouts in Quarkus. Default Transaction Timeout The Narayana JTA transaction manager lets you coordinate and expose ... Read more
The post Quarkus Transaction Timeout configuration appeared first on Mastertheboss.
May 29, 2023
How to persist additional attributes for an association with JPA and Hibernate
by Thorben Janssen at May 29, 2023 04:35 PM
JPA and Hibernate allow you to define associations between entities with just a few annotations, and you don’t have to care about the underlying table model in the database. Even join tables for many-to-many associations are hidden behind a @JoinTable annotation, and you don’t need to model the additional table as an entity. That changes...
The post How to persist additional attributes for an association with JPA and Hibernate appeared first on Thorben Janssen.
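As a rough illustration of the problem the article tackles (my own sketch with hypothetical Student and Course entities, not code from the original post), the join table becomes an entity of its own as soon as it carries extra columns:

import java.io.Serializable;
import java.time.LocalDate;
import jakarta.persistence.Embeddable;
import jakarta.persistence.EmbeddedId;
import jakarta.persistence.Entity;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.MapsId;

@Embeddable
class EnrollmentId implements Serializable {
    Long studentId;
    Long courseId;
}

@Entity
public class Enrollment {

    // Composite key built from the two foreign keys of the former join table.
    @EmbeddedId
    private EnrollmentId id;

    @ManyToOne
    @MapsId("studentId")
    @JoinColumn(name = "student_id")
    private Student student;

    @ManyToOne
    @MapsId("courseId")
    @JoinColumn(name = "course_id")
    private Course course;

    // The additional attribute that forces the join table to become an entity.
    private LocalDate enrolledOn;
}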
May 25, 2023
Enterprise Kotlin - Kotlin and Jakarta EE
May 25, 2023 12:00 AM
If you look at the documentation on the Kotlin web page (
March 29, 2023
The Jakarta EE 2023 Developer Survey is now open!
by Tanja Obradovic at March 29, 2023 09:24 PM
It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25th.
I would like to invite you to take this year’s six-minute survey, share your thoughts and ideas for future Jakarta EE releases, and help us discover the uptake of the latest Jakarta EE versions and the trends that inform industry decision-makers.
Please share the survey link, reach out to your contacts (Java developers, architects and stakeholders in the enterprise Java ecosystem), and invite them to participate in the 2023 Jakarta EE Developer Survey!
February 16, 2023
What is Apache Camel and how does it work?
by Rhuan Henrique Rocha at February 16, 2023 11:14 PM
In this post, I will talk about what Apache Camel is. It is a brief introduction before I start posting practical content. So, let’s understand what this framework is.
Apache Camel is an open source Java integration framework that allows different applications to communicate with each other efficiently. It provides a platform for integrating heterogeneous software systems. Camel is designed to make application integration easy, simplifying the complexity of communication between different systems.
Apache Camel is written in Java and can be run on a variety of platforms, including Jakarta EE application servers and OSGi-based application containers, and it can run inside cloud environments using Spring Boot or Quarkus. Camel also supports a wide range of network protocols and message formats, including HTTP, FTP, SMTP, JMS, SOAP, XML, and JSON.
Camel uses Enterprise Integration Patterns (EIP) to define the different forms of integration. EIPs are a set of design patterns commonly used in system integration. Camel implements many of these patterns, making it a powerful tool for integration solutions.
Additionally, Camel has a set of components that allow it to integrate with different systems. The components can be used to access different resources, such as databases, web services, and message systems. Camel also supports content-based routing, which means it can route messages based on their content.
Camel is highly configurable and extensible, allowing developers to customize its functionality to their needs. It also supports the creation of integration routes at runtime, which means that routes can be defined and changed without the need to restart the system.
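For a first impression of what a route looks like (a minimal sketch of my own; the endpoint URIs are illustrative and require the corresponding Camel components on the classpath), the Java DSL describes integrations as chains of endpoints:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileCopyRoute {

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Picks up files from an input folder and copies them to an output folder.
                from("file:orders/in")
                    .log("Processing ${file:name}")
                    .to("file:orders/out");
            }
        });
        context.start();
        Thread.sleep(10_000); // keep the route running for a short demo
        context.stop();
    }
}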
In summary, Camel is a powerful and flexible tool for software system integration. It allows different applications to communicate efficiently and effectively, simplifying the complexity of system integration. Camel is a reliable and widely used framework that can help improve the efficiency and effectiveness of system integration in a variety of environments.
If you want to start using this framework, you can access the documentation on the project site. This is my first post about Apache Camel, and I will post more practical content about this amazing framework.
February 03, 2023
Jersey 3.1.1 released – focused on performance
by Jan at February 03, 2023 11:50 PM
January 31, 2023
Jakarta EE track at Devnexus 2023!!!!
by Tanja Obradovic at January 31, 2023 08:25 PM
We have great news to share with you!
For the very first time, Devnexus 2023 will have a Jakarta EE track with 10 sessions, and we will take this opportunity to celebrate, whenever possible, all we have accomplished in the Jakarta EE community.
Jakarta EE track sessions
- 5 years of Jakarta EE Panel: a look into the future (hosted by Ivar and Tanja)
- Deep Dive MicroProfile 6.0 with Jakarta EE 10 Core Profile
- From javax to jakarta, the path paved with pitfalls
- Jakarta EE 10 and Beyond
- Jakarta EE and MicroProfile Highlights
- Jakarta EE for Spring Developers
- Jakarta EE integration testing
- Jakarta EE or Spring? Real world testimonies
- Let's take a look at how a Jakarta EE cloud-native application should look!
- Upgrading a Legacy Java EE App with Style
You may not be aware, but this year (yes, time flies!!) marks 5 years of Jakarta EE, so we will be celebrating throughout the year! Devnexus 2023 looks like a great place to mark this milestone as well! So stay tuned for details, but in the meantime please help us out: register for the event, come to see us, and spread the word.
Help us out in spreading the word about Jakarta EE track @Devnexus 2023, just re-share posts you see from us on various social platforms!
To make it easier for you to spread the word on socials, we also have prepared a social kit document to help us with promotion of the Jakarta EE track @Devnexus 2023, sessions and speakers. The social kit document is going to be updated with missing sessions and speakers, so visit often and promote far and wide.
Note: Organizers wanted to do something for people impacted by the recent tech layoffs, and decided to offer a 50% discount for any conference pass (valid for a limited time). Please use code DN-JAKARTAEE for @JakartaEE Track to get additional 20% discount!
In addition, there will be an IBM workshop that will be highlighting Jakarta EE; look for "Thriving in the cloud: Venturing beyond the 12 factors". Please use the promo code ($100 off): JAKARTAEEATDEVNEXUS the organizers prepared for you (valid for a limited time).
I hope to see you all at Devnexus 2023!
November 19, 2022
Jakarta EE and MicroProfile at EclipseCon Community Day 2022
by Reza Rahman at November 19, 2022 10:39 PM
Community Day at EclipseCon 2022 was held in person on Monday, October 24 in Ludwigsburg, Germany. Community Day has always been a great event for Eclipse working groups and project teams, including Jakarta EE/MicroProfile. This year was no exception. A number of great sessions were delivered from prominent folks in the community. The following are the details including session materials. The agenda can still be found here. All the materials can be found here.

Jakarta EE Community State of the Union
The first session of the day was a Jakarta EE community state of the union delivered by Tanja Obradovic, Ivar Grimstad and Shabnam Mayel. The session included a quick overview of Jakarta EE releases, how to get involved in the work of producing the specifications, a recap of the important Jakarta EE 10 release, as well as a view of what’s to come in Jakarta EE 11. The slides are embedded below and linked here.
Jakarta Concurrency – What’s Next
Payara CEO Steve Millidge covered Jakarta Concurrency. He discussed the value proposition of Jakarta Concurrency, the innovations delivered in Jakarta EE 10 (including CDI based @Asynchronous, @ManagedExecutorDefinition, etc) and the possibilities for the future (including CDI based @Schedule, @Lock, @MaxConcurrency, etc). The slides are embedded below and linked here. There are some excellent code examples included.
Jakarta Security – What’s Next
Werner Keil covered Jakarta Security. He discussed what’s already done in Jakarta EE 10 (including OpenID Connect support) and everything that’s in the works for Jakarta EE 11 (including CDI based @RolesAllowed). The slides are embedded below and linked here.
Jakarta Data – What’s Coming
IBM’s Emily Jiang kindly covered Jakarta Data. This is a brand new specification aimed towards Jakarta EE 11. It is a higher level data access abstraction similar to Spring Data and DeltaSpike Data. It encompasses both Jakarta Persistence (JPA) and Jakarta NoSQL. The slides are embedded below and linked here. There are some excellent code examples included.
MicroProfile Community State of the Union
Emily also graciously delivered a MicroProfile state of the union. She covered what was delivered in MicroProfile 5, including alignment with Jakarta EE 9.1. She also discussed what’s coming soon in MicroProfile 6 and beyond, including very clear alignment with the Jakarta EE 10 Core Profile. The slides are embedded below and linked here. There are some excellent technical details included.
MicroProfile Telemetry – What’s Coming
Red Hat’s Martin Stefanko covered MicroProfile Telemetry. Telemetry is a brand new specification being included in MicroProfile 6. The specification essentially supersedes MicroProfile Tracing and possibly MicroProfile Metrics too in the near future. This is because the OpenTracing and OpenCensus projects merged into a single project called OpenTelemetry. OpenTelemetry is now the de facto standard defining how to collect, process, and export telemetry data in microservices. It makes sense that MicroProfile moves forward with supporting OpenTelemetry. The slides are embedded below and linked here. There are some excellent technical details and code examples included.
See You There Next Time?
Overall, it was an honor to organize the Jakarta EE/MicroProfile agenda at EclipseCon Community Day one more time. All speakers and attendees should be thanked. Perhaps we will see you at Community Day next time? It is a great way to hear from some of the key people driving Jakarta EE and MicroProfile. You can attend just Community Day even if you don’t attend EclipseCon. The fee is modest and includes lunch as well as casual networking.
November 15, 2022
Jersey 3.1.0 is finally released
by Jan at November 15, 2022 03:31 PM
November 04, 2022
JFall 2022
November 04, 2022 09:56 AM
An impression of JFall by yours truly.
keynote
Sold out!
Packed room!
Very nice first keynote speaker by Saby Sengupta about the path to transform.
He is a really nice storyteller. He had us going.
Dutch people, wooden shoes, wooden hat, would not listen
- Saby
lol
Get the answer to three why questions. If the answers stop after the first why, it may not be a good idea.
This great first keynote is followed by the very well known Venkat Subramaniam about The Art of Simplicity.
The question is not what can we add? But What can we remove?
Simple fails less
Simple is elegant
All in all a great keynote! Loved it.
Design Patterns in the light of Lambdas
By Venkat Subramaniam
The GOF are kind of the grand parents of our industry. The worst thing they have done is write the damn book.
— Venkat
The quote is in the context that writing down grandma’s fantastic recipe does not work, as it is based on the skill of grandma and not the exact amounts of the ingredients.
The cleanup is the responsibility of the Resource class. Much better than asking developers to take care of it. It will be forgotten!
The more powerful a language becomes the less we need to talk about patterns. Patterns become practices we use. We do not need to put in extra effort.
I love his way of presenting, but this is one of those times, I guess, that he is hampered by his own success. The talk did not go deep into stuff. During his talk he just about covered 5 not too difficult subjects. I missed his speed and depth.
Still a great talk though.
lunch
Was actually very nice!
NLJUG update keynote
The Java Magazine was mentioned; we (as Editors) had to shout for that!
Please contact me (@ivonet) if you have ambitions to either be an author or maybe even as a fellow editor of the magazine. We are searching for a new Editor now.
Then the voting for the Innovation Awards.
I kinda missed the next keynote by ING because I was playing with a Rubik’s cube, and I did not really like the talk.
jakarta EE 10 platform
by Ivar Grimstad
Ivar talks about the specification of Jakarta EE.
To create a lite version of CDI it is possible to start doing things at build time and facilitate other tools like GraalVM and Quarkus.
He gives nice demos on how to migrate code to work in the jakarta namespace.
To start your own Jakarta EE application, just go to start.jakarta.ee and follow the very simple UI instructions.
I am very proud to be the creator of that UI. Thanks, Ivar for giving me a shoutout for that during your talk. More cool stuff will follow soon.
Be prepared to do some namespace changes when moving from Java EE 8 to Jakarta EE.
All slides here
conclusion
I had a fantastic day. For me, it is mainly about the community and seeing all the people I know in the community. I totally love the vibe of the conference and I think it is one of the best organized venues.
See you at JSpring.
Ivo.
October 28, 2022
How to make your own scraper and then forget about it?
October 28, 2022 12:00 AM
September 26, 2022
Survey Says: Confidence Continues to Grow in the Jakarta EE Ecosystem
by Mike Milinkovich at September 26, 2022 01:00 PM
The results of the 2022 Jakarta EE Developer Survey are very telling about the current state of the enterprise Java developer community. They point to increased confidence about Jakarta EE and highlight how far Jakarta EE has grown over the past few years.
Strong Turnout Helps Drive Future of Jakarta EE
The fifth annual survey is one of the longest running and best-respected surveys of its kind in the industry. This year’s turnout was fantastic: From March 9 to May 6, a total of 1,439 developers responded.
This is great for two reasons. First, obviously, these results help inform the Java ecosystem stakeholders about the requirements, priorities and perceptions of enterprise developer communities. The more people we hear from, the better picture we get of what the community wants and needs. That makes it much easier for us to make sure the work we’re doing is aligned with what our community is looking for.
The other reason is that it helps us better understand how the cloud native Java world is progressing. By looking at what community members are using and adopting, what their top goals are and what their plans are for adoption, we can better understand not only what we should be working on today, but tomorrow and for the future of Jakarta EE.
Findings Indicate Growing Adoption and Rising Expectations
Some of the survey’s key findings include:
- Jakarta EE is the basis for the top frameworks used for building cloud native applications.
- The top three frameworks for building cloud native applications, respectively, are Spring/Spring Boot, Jakarta EE and MicroProfile, though Spring/Spring Boot lost ground this past year. It’s important to note that Spring/SpringBoot relies on Jakarta EE developments for its operation and is not competitive with Jakarta EE. Both are critical ingredients to the healthy enterprise Java ecosystem.
- Jakarta EE 9/9.1 usage increased year-over-year by 5%.
- Java EE 8, Jakarta EE 8, and Jakarta EE 9/9.1 hit the mainstream with 81% adoption.
- While over a third of respondents planned to adopt, or already had adopted Jakarta EE 9/9.1, nearly a fifth of respondents plan to skip Jakarta EE 9/9.1 altogether and adopt Jakarta EE 10 once it becomes available.
- Most respondents said they have migrated to Jakarta EE already or planned to do so within the next 6-24 months.
- The top three community priorities for Jakarta EE are:
- Native integration with Kubernetes (same as last year)
- Better support for microservices (same as last year)
- Faster support from existing Java EE/Jakarta EE or cloud vendors (new this year)
Two of the results, when combined, highlight something interesting:
- 19% of respondents planned to skip Jakarta EE 9/9.1 and go straight to 10 once it’s available
- The new community priority — faster support from existing Java EE/Jakarta EE or cloud vendors — really shows the growing confidence the community has in the ecosystem
After all, you wouldn’t wait for a later version and skip the one that’s already available, unless you were confident that the newer version was not only going to be coming out on a relatively reliable timeline, but that it was going to be an improvement.
And this growing hunger from the community for faster support really speaks to how far the ecosystem has come. When we release a new version, like when we released Jakarta EE 9, it takes some time for the technology implementers to build the product based on those standards or specifications. The community is becoming more vocal in requesting those implementers to be more agile and quickly pick up the new versions. That’s definitely an indication that developer demand for Jakarta EE products is growing in a healthy way.
Learn More
If you’d like to learn more about the project, there are several Jakarta EE mailing lists to sign up for. You can also join the conversation on Slack. And if you want to get involved, start by choosing a project, sign up for its mailing list and start communicating with the team.
September 22, 2022
Jakarta EE 10 has Landed!
by javaeeguardian at September 22, 2022 03:48 PM
The Jakarta EE Ambassadors are thrilled to see Jakarta EE 10 being released! This is a milestone release that bears great significance to the Java ecosystem. Jakarta EE 8 and Jakarta EE 9.x were important releases in their own right in the process of transitioning Java EE to a truly open environment in the Eclipse Foundation. However, these releases did not deliver new features. Jakarta EE 10 changes all that and begins the vital process of delivering long pending new features into the ecosystem at a regular cadence.
There are quite a few changes that were delivered – here are some key themes and highlights:
- CDI Alignment
- @Asynchronous in Concurrency
- Better CDI support in Batch
- Java SE Alignment
- Support for Java SE 11, Java SE 17
- CompletionStage, ForkJoinPool, parallel streams in Concurrency
- Bootstrap APIs for REST
- Closing standardization gaps
- OpenID Connect support in Security, @ManagedExecutorDefinition, UUID as entity keys, more SQL support in Persistence queries, multipart/form-data support in REST, @ClientWindowScoped in Faces, pure Java Faces views
- CDI Lite/Core Profile to enable next generation cloud native runtimes – MicroProfile will likely align with CDI Lite/Jakarta EE Core
- Deprecation/removal
- @Context annotation in REST, EJB Entity Beans, embeddable EJB container, deprecated Servlet/Faces/CDI features
While there are many features that we identified in our Jakarta EE 10 Contribution Guide that did not make it yet, this is still a very solid release that everyone in the Java ecosystem will benefit from, including Spring, MicroProfile and Quarkus. You can see here what was delivered, what’s on the way and what gaps still remain. You can try Jakarta EE 10 out now using compatible implementations like GlassFish, Payara, WildFly and Open Liberty. Jakarta EE 10 is proof in the pudding that the community, including major stakeholders, has not only made it through the transition to the Eclipse Foundation but now is beginning to thrive once again.
Many Ambassadors helped make this release a reality such as Arjan Tijms, Werner Keil, Markus Karg, Otavio Santana, Ondro Mihalyi and many more. The Ambassadors will now focus on enabling the community to evangelize Jakarta EE 10 including speaking, blogging, trying out implementations, and advocating for real world adoption. We will also work to enable the community to continue to contribute to Jakarta EE by producing an EE 11 Contribution Guide in the coming months. Please stay tuned and join us.
Jakarta EE is truly moving forward – the next phase of the platform’s evolution is here!
July 13, 2022
Java Reflections unit-testing
by Vladimir Bychkov at July 13, 2022 09:06 PM
July 06, 2022
The Power of Enum – Take advantage of it to make your code more readable and efficient
by otaviojava at July 06, 2022 06:51 AM
May 05, 2022
Java EE - Jakarta EE Initializr
May 05, 2022 02:23 PM
Getting started with Jakarta EE just became even easier!
Get started
Hot new Update!
Moved from the Apache 2 license to the Eclipse Public License v2 for the newest version of the archetype as described below.
As a start for a possible collaboration with the Eclipse start project.
New Archetype with JakartaEE 9
JakartaEE 9 + Payara 5.2022.2 + MicroProfile 4.1 running on Java 17
- And the docker image is also ready for x86_64 (amd64) AND aarch64 (arm64/v8) architectures!
February 21, 2022
FOSDEM 2022 Conference Report
by Reza Rahman at February 21, 2022 12:24 AM
FOSDEM took place February 5-6. The European-based event is one of the most significant gatherings worldwide focused on all things Open Source. In recent years the event has added a devroom/track dedicated to Java, named the “Friends of OpenJDK”. The effort is led by my friend and former colleague Geertjan Wielenga. Due to the pandemic, the 2022 event was virtual once again. I delivered a couple of talks on Jakarta EE as well as Diversity & Inclusion.

Fundamentals of Diversity & Inclusion for Technologists
I opened the second day of the conference with my newest talk titled “Fundamentals of Diversity and Inclusion for Technologists”. I believe this is an overdue and critically important subject. I am very grateful to FOSDEM for accepting the talk. The reality for our industry remains that many people either have not yet started or are at the very beginning of their Diversity & Inclusion journey. This talk aims to start the conversation in earnest by explaining the basics. Concepts covered include unconscious bias, privilege, equity, allyship, covering and microaggressions. I punctuate the topic with experiences from my own life and examples relevant to technologists. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.
Jakarta EE – Present and Future
Later the same day, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.
I am very happy to have had the opportunity to speak at FOSDEM. I hope to contribute again in the future.
January 18, 2022
Making Readable Code With Dependency Injection and Jakarta CDI
by otaviojava at January 18, 2022 03:53 PM
December 12, 2021
Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability
December 12, 2021 10:00 PM
Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
So, until an official patch arrives, you can update the bundled logger version to the latest in a few simple steps:
- Download Log4j version 2.15.0: https://www.apache.org/dyn/closer.lua/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
- Unpack the distribution
- Replace the affected libraries
wget https://downloads.apache.org/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
unzip apache-log4j-2.15.0-bin.zip
cd /opt/infinispan-server-10.1.8.Final/lib/
rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./
Please note that the patch above is not official, but according to initial tests it works with no issues.
November 18, 2021
JPA query methods: influence on performance
by Vladimir Bychkov at November 18, 2021 07:22 AM
October 27, 2021
Eclipse Jetty Servlet Survey
by Jesse McConnell at October 27, 2021 01:25 PM
This short 5-minute survey is being presented to the Eclipse Jetty user community to validate conjectures the Jetty developers have about how users will leverage Jakarta EE servlets and the Jetty project. We are gauging interest in some features before supporting them in Jetty 12, and your responses will help shape its forthcoming release.
We will summarize results in a future blog.
September 30, 2021
Custom Identity Store with Jakarta Security in TomEE
by Jean-Louis Monteiro at September 30, 2021 11:42 AM
In the previous post, we saw how to use the built-in ‘tomcat-users.xml’ identity store with Apache TomEE. This identity store is inherited from Tomcat and integrated into the Jakarta Security implementation in TomEE; it is usually good for development or simple deployments, but may be too simple or restrictive for production environments.
This blog will focus on how to implement your own identity store. TomEE can use LDAP or JDBC identity stores out of the box. We will try them out next time.
Let’s say you have your own file store or your own data store like an in-memory data grid, then you will need to implement your own identity store.
What is an identity store?
An identity store is a database or a directory (store) of identity information about a population of users that includes an application’s callers.
In essence, an identity store contains all information such as caller name, groups or roles, and required information to validate a caller’s credentials.
How to implement my own identity store?
This is actually fairly simple with Jakarta Security. The only thing you need to do is create an implementation of `jakarta.security.enterprise.identitystore.IdentityStore`. All methods in the interface have default implementations. So you only have to implement what you need.
public interface IdentityStore {

    Set<ValidationType> DEFAULT_VALIDATION_TYPES = EnumSet.of(VALIDATE, PROVIDE_GROUPS);

    default CredentialValidationResult validate(Credential credential) {
    }

    default Set<String> getCallerGroups(CredentialValidationResult validationResult) {
    }

    default int priority() {
    }

    default Set<ValidationType> validationTypes() {
    }

    enum ValidationType {
        VALIDATE, PROVIDE_GROUPS
    }
}
By default, an identity store is used for both validating user credentials and providing groups/roles for the authenticated user. Depending on what #validationTypes() returns, you will have to implement #validate(…) and/or #getCallerGroups(…).
#getCallerGroups(…) will receive the result of #validate(…). Let’s look at a very simple example:
import static jakarta.security.enterprise.identitystore.CredentialValidationResult.INVALID_RESULT;
import static java.util.Arrays.asList;

import java.util.Collections;
import java.util.HashSet;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.credential.Credential;
import jakarta.security.enterprise.credential.UsernamePasswordCredential;
import jakarta.security.enterprise.identitystore.CredentialValidationResult;
import jakarta.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class TestIdentityStore implements IdentityStore {

    public CredentialValidationResult validate(Credential credential) {
        if (!(credential instanceof UsernamePasswordCredential)) {
            return INVALID_RESULT;
        }
        final UsernamePasswordCredential usernamePasswordCredential = (UsernamePasswordCredential) credential;
        if (usernamePasswordCredential.compareTo("jon", "doe")) {
            return new CredentialValidationResult("jon", new HashSet<>(asList("foo", "bar")));
        }
        if (usernamePasswordCredential.compareTo("iron", "man")) {
            return new CredentialValidationResult("iron", new HashSet<>(Collections.singletonList("avengers")));
        }
        return INVALID_RESULT;
    }
}
In this simple example, the identity store is hardcoded. Basically, it knows only 2 users, one of them has some roles, while the other has another set of roles.
You can easily extend this example and query a local file, or an in-memory data grid if you need. Or use JPA to access your relational database.
IMPORTANT: for TomEE to pick it up and use it in your application, the identity store must be a CDI bean.
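As a small companion sketch (not part of the original post), the custom store can then be exercised with one of the built-in authentication mechanisms, for example HTTP BASIC, declared on any CDI bean in the application:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;

// Declares HTTP BASIC authentication; Jakarta Security validates the incoming
// credentials against the available IdentityStore beans, including TestIdentityStore.
@BasicAuthenticationMechanismDefinition(realmName = "test-realm")
@ApplicationScoped
public class SecurityConfiguration {
}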
The complete and runnable example is available under https://github.com/apache/tomee/tree/master/examples/security-custom-identitystore
The post Custom Identity Store with Jakarta Security in TomEE appeared first on Tomitribe.
September 24, 2021
Book Review: Practical Cloud-Native Java Development with MicroProfile
September 24, 2021 12:00 AM
General information
- Pages: 403
- Published by: Packt
- Release date: Aug 2021
Disclaimer: I received this book as a collaboration with Packt and one of the authors (Thanks Emily!)
A book about Microservices for the Java Enterprise-shops
Year after year, many enterprise companies are struggling to embrace Cloud Native practices that we tend to call Microservices; however, Microservices is a metapattern that needs to follow a well-defined approach, like:
- (We aim for) reactive systems
- (Hence we need a methodology like) 12 Cloud Native factors
- (Implementing) well-known design patterns
- (Dividing the system by using) Domain Driven Design
- (Implementing microservices via) Microservices chassis and/or service mesh
- (Achieving deployments by) Containers orchestration
Many of these concepts require a considerable amount of context, but some books, tutorials, conferences and YouTube videos tend to focus on specific niche information, making it difficult to have a "cold start" in the microservices space if you have been developing regular/monolithic software. For me, that's the best thing about this book: it provides a holistic view of microservices with Java and MicroProfile for "cold starter developers".
About the book
Using a software architect perspective, MicroProfile could be defined as a set of specifications (APIs) that many microservices chassis implement in order to solve common microservices problems through patterns, lessons learned from well known Java libraries, and proposals for collaboration between Java Enterprise vendors.
So if you think that sounds a lot like Java EE, that's right: it's the same spirit, but in the microservices space, with participation from many vendors, including vendors from the Java EE space -e.g. Red Hat, IBM, Apache, Payara-.
The main value of this book is the willingness to go beyond the APIs, providing four structured sections that have different writing styles, for instance:
- Section 1: Cloud Native Applications - Written as a didactical resource to learn fundamentals of distributed systems with Cloud Native approach
- Section 2: MicroProfile Deep Dive - Written as a reference book with code snippets to understand the motivation, functionality and specific details in MicroProfile APIs and the relation between these APIs and common Microservices patterns -e.g. Remote procedure invocation, Health Check APIs, Externalized configuration-
- Section 3: End-to-End Project Using MicroProfile - Written as a narrative workshop with source code already available, to understand the development and deployment process of Cloud Native applications with MicroProfile
- Section 4: The standalone specifications - Written as a reference book with code snippets, it describes the development of newer specs that could be included in the future under MicroProfile's umbrella
First section
This was by far my favorite section. This section presents a well-balanced overview about Cloud Native practices like:
- Cloud Native definition
- The role of microservices and the differences with monoliths and FaaS
- Data consistency with event sourcing
- Best practices
- The role of MicroProfile
I enjoyed this section because my current role is to coach or act as a software architect at different companies, hence this is good material to explain the whole panorama to my coworkers and/or use this book as a quick reference.
My only concern with this section is the final chapter: it presents an application called IBM Stock Trader that (as you probably guessed) IBM uses to demonstrate these concepts using MicroProfile with OpenLiberty. The chapter presents an application that combines data sources, front ends and Kubernetes; however, the application becomes useful only in Section 3 (at least that was my perception). Hence you will be going back to this section once you're executing the workshop.
Second section
This section divides the MicroProfile APIs in three levels, the division actually makes a lot of sense but was evident to me only during this review:
- The base APIs to create microservices (JAX-RS, CDI, JSON-P, JSON-B, Rest Client)
- Enhancing microservices (Config, Fault Tolerance, OpenAPI, JWT)
- Observing microservices (Health, Metrics, Tracing)
Additionally, the section also describes the need for Docker and Kubernetes and how other common approaches -e.g. Service mesh- overlap with Microservice Chassis functionality.
Currently I'm a MicroProfile user, hence I knew most of the APIs, however I liked the actual description of the pattern/need that motivated the inclusion of the APIs, and the description could be useful for newcomers, along with the code snippets also available on GitHub.
If you're a Java/Jakarta EE developer you will find the CDI section a little bit superficial, indeed CDI by itself deserves a whole book/fascicle but this chapter gives the basics to start the development process.
Third section
This section switches the writing style to a workshop style. The first chapter is entirely focused on how to compile the sample microservices, how to fulfill the technical requirements and which MicroProfile APIs are used on every microservice.
You must notice that this is not a Java programming workshop, it's a Cloud Native workshop with ready to deploy microservices, hence the step by step guide is about compilation with Maven, Docker containers, scaling with Kubernetes, operators in Openshift, etc.
You could explore and change the source code if you wish, but the section is written in a "descriptive" way assuming the samples existence.
Fourth section
This section is pretty similar to the second section in the reference book style, hence it also describes the pattern/need that motivated the discussion of the API and code snippets. The main focus of this section is GraphQL, Reactive Approaches and distributed transactions with LRA.
This section will probably change in future editions of the book because, at the time of publishing, the Cloud Native Computing Foundation revealed that some observability initiatives will be integrated into the OpenTelemetry project, and MicroProfile is discussing its future approach.
Things that could be improved
As any review this is the most difficult section to write, but I think that a second edition should:
- Extend the CDI section due its foundational status
- Switch the order of the Stock Tracer presentation
- Extend the data consistency discussion -e.g. CQRS, Event Sourcing-, hopefully with advances from LRA
The last item is mostly a wish, since I'm always in need of better ways to integrate these common practices with buses like Kafka or Camel using MicroProfile. I know that some implementations -e.g. Helidon, Quarkus- already have extensions for Kafka or Camel, but data consistency is an entire discussion about patterns, tools and best practices.
Who should read this book?
- Java developers with strong SE foundations and familiarity with the enterprise space (Spring/Java EE)
September 14, 2021
#156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE
by David Blevins at September 14, 2021 02:07 PM
New podcast episode with Adam Bien & David Blevins. Apple and EJB, @ApacheTomEE, @tomitribe, @JakartaEE, the benefits of code generation with bash, and over-engineering – the 156th http://airhacks.fm
The post #156 Bash, Apple and EJB, TomEE, Geronimo and Jakarta EE appeared first on Tomitribe.
July 28, 2021
Jakarta Community Acceptance Testing (JCAT)
by javaeeguardian at July 28, 2021 05:41 AM
Today the Jakarta EE Ambassadors are announcing the start of the Jakarta EE Community Acceptance Testing (JCAT) initiative. The purpose of this initiative is to test Jakarta EE 9/9.1 implementations using your code and/or applications. Although Jakarta EE is extensively tested by the TCK, container-specific tests, and QA, the purpose of JCAT is for developers themselves to test the implementations.
Jakarta EE 9/9.1 did not introduce any new features. In Jakarta EE 9 the APIs changed from javax to jakarta. Jakarta EE 9.1 raised the supported floor to Java 11 for compatible implementations. So what are we testing?
- Testing individual spec implementations standalone with the new namespace.
- Deploying existing Java EE/Jakarta EE applications to EE 9/9.1.
- Converting Java EE/Jakarta EE applications to the new namespace.
- Running applications on Java 11 (Jakarta EE 9.1)
Participating in this initiative is easy:
- Download a Jakarta EE implementation:
- Deploy code:
- Port or run your existing Jakarta EE application
- Test out a feature using a starter template
To join this initiative, please take a moment to fill out the form:
To submit results or feedback on your experiences with Jakarta EE 9/9.1:
Jakarta EE 9 / 9.1 Feedback Form
Resources:
- Jakarta EE Ambassadors Google Group List
- Jakarta EE Ambassadors Twitter
- Jakarta EE Starter
- Jakarta EE 9 Boilerplate
- Jakarta EE Migration
Start Date: July 28, 2021
End Date: December 31st, 2021
April 17, 2021
Your Voice Matters: Take the Jakarta EE Developer Survey
by dmitrykornilov at April 17, 2021 11:36 AM

The Jakarta EE Developer Survey is in its fourth year and is the industry’s largest open source developer survey. It’s open until April 30, 2021, and I encourage you to add your voice. Why should you do it? Because the Jakarta EE Working Group needs your feedback. We need to know the challenges you are facing and your suggestions for making Jakarta EE better.
Last year’s edition surveyed developers to gain on-the-ground understanding and insights into how Jakarta solutions are being built, as well as identifying developers’ top choices for architectures, technologies, and tools. The 2021 Jakarta EE Developer Survey is your chance to influence the direction of the Jakarta EE Working Group’s approach to cloud native enterprise Java.
The results from the 2021 survey will give software vendors, service providers, enterprises, and individual developers in the Jakarta ecosystem updated information about Jakarta solutions and service development trends and what they mean for their strategies and businesses. Additionally, the survey results also help the Jakarta community at the Eclipse Foundation better understand the top industry focus areas and priorities for future project releases.
A full report based on the survey results will be made available to all participants.
The survey takes less than 10 minutes to complete. We look forward to your input. Take the survey now!
April 02, 2021
Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException
April 02, 2021 09:00 PM
WildFly provides great out-of-the-box load balancing support through the Undertow and modcluster subsystems.
Unfortunately, when HTTP headers are large enough (close to 16K), which is quite common in the JWT era, the following error occurs:
ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.io.IOException: java.nio.BufferOverflowException
at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:771)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:646)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:561)
at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(AjpClientExchange.java:203)
at io.undertow.client.ajp.AjpClientConnection.initiateRequest(AjpClientConnection.java:288)
at io.undertow.client.ajp.AjpClientConnection.sendRequest(AjpClientConnection.java:242)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction.run(ProxyHandler.java:561)
at io.undertow.util.SameThreadExecutor.execute(SameThreadExecutor.java:35)
at io.undertow.server.HttpServerExchange.dispatch(HttpServerExchange.java:815)
...
Caused by: java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:521)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:297)
at io.undertow.protocols.ajp.AjpUtils.putString(AjpUtils.java:52)
at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(AjpClientRequestClientStreamSinkChannel.java:176)
at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(AjpClientRequestClientStreamSinkChannel.java:290)
at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:39)
at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:32)
at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(AbstractFramedChannel.java:603)
at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(AbstractFramedChannel.java:742)
at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(AbstractFramedChannel.java:735)
at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(AbstractFramedStreamSinkChannel.java:267)
at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(AbstractFramedStreamSinkChannel.java:244)
at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(DetachableStreamSinkChannel.java:79)
at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:754)
The same request sent directly to the backend server works fine. I tried playing with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.
A possible solution is to switch the proxy protocol from AJP to HTTP, which can be slightly less efficient but works well with big headers:
/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)
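For reference, here is a minimal sketch of applying that change through the JBoss CLI; the controller address and the full-ha profile name are assumptions that depend on your own domain configuration:
# connect to the domain controller (adjust host/port to your installation)
./bin/jboss-cli.sh --connect controller=localhost:9990
# point the mod_cluster proxy at the HTTP listener named "default" instead of the AJP listener
/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)
# afterwards, reload the affected servers so the balancer picks up the new listener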
January 08, 2021
Oracle Joins MicroProfile Working Group
by dmitrykornilov at January 08, 2021 06:02 PM

I am very pleased to announce that since the beginning of 2021 Oracle is officially a part of MicroProfile Working Group.
In Oracle we believe in standards and supporting them in our products. Standards are born in blood, toil, tears, and sweat. Standards are a result of collaboration of experts, vendors, customers and users. Standards bring the advantages of portability between different implementations that make standard-based solutions vendor-neutral.
We created Java EE which was the first enterprise Java standard. We opened it and moved it to the Eclipse Foundation to make its development truly open source and vendor neutral. Now we are joining MicroProfile which in the last few years has become a leading standard for cloud-native solutions.
We’ve been supporting MicroProfile for years before officially joining the Working Group. We created project Helidon which has supported MicroProfile APIs since MicroProfile version 1.1. Contributing to the evolution and supporting new versions of MicroProfile is one of our strategic goals.
I like the community-driven and enjoyable approach to creating cloud-native APIs that MicroProfile has established. I believe that our collaboration will be effective and that together we will push MicroProfile forward to a higher level.
November 14, 2020
An introduction to MicroProfile GraphQL
by Jean-François James at November 14, 2020 05:05 PM
September 23, 2020
General considerations on updating Enterprise Java projects from Java 8 to Java 11
September 23, 2020 12:00 AM
The purpose of this article is to consolidate the difficulties and solutions I've encountered while updating Java EE projects from Java 8 to Java 11 (and beyond). Java 11 brings many new characteristics that change how Java applications are built, but it can be problematic under certain conditions.
This article is focused on Java/Jakarta EE, but it could be used as a basis for migrations of other enterprise Java frameworks and libraries.
Is it possible to update Java EE/MicroProfile projects from Java 8 to Java 11?
Yes, absolutely. My team has been able to migrate at least two mature enterprise applications, each with more than three years of development behind it:
A Management Information System (MIS)
- Time for migration: 1 week
- Modules: 9 EJB, 1 WAR, 1 EAR
- Classes: 671 and counting
- Code lines: 39480
- Project's beginning: 2014
- Original platform: Java 7, Wildfly 8, Java EE 7
- Current platform: Java 11, Wildfly 17, Jakarta EE 8, MicroProfile 3.0
- Web client: Angular
Mobile POS and Geo-fence
- Time for migration: 3 weeks
- Modules: 5 WAR/MicroServices
- Classes: 348 and counting
- Code lines: 17160
- Project's beginning: 2017
- Original platform: Java 8, Glassfish 4, Java EE 7
- Current platform: Java 11, Payara (Micro) 5, Jakarta EE 8, MicroProfile 3.2
- Web client: Angular
Why should I ever consider migrating to Java 11?
As with everything in IT, the answer is "It depends . . .". However, there are a couple of good reasons to do it:
- Reduce attack surface by updating project dependencies proactively
- Reduce technical debt and most importantly, prepare your project for the new and dynamic Java world
- Take advantage of performance improvements in new JVM versions
- Take advantage of improvements to Java as a programming language
- Sleep better by having a more secure, efficient, and higher-quality product
Why are updates from Java 8 to Java 11 considered difficult?
In my experience with many teams, it comes down to the following:
Changes in Java release cadence
Currently, there are two big branches in the JVM release model:
- Java LTS: a long-term-support release with a fixed lifetime (3 years), Java 11 being the latest one
- Java current: a fast-paced Java version released every 6 months on a predictable calendar, Java 15 being the latest (at the time of writing this article)
The rationale behind this decision is that Java needed more dynamism in delivering new characteristics to the language, APIs, and JVM, with which I completely agree.
Nevertheless, it is a known fact that most enterprise frameworks seek and use Java for stability. Consequently, most of these frameworks target Java 11 as the "certified" Java Virtual Machine for deployments.
Usage of internal APIs
Errata: I fixed and simplified this section following an interesting discussion on reddit :)
Java 9 introduced changes to internal classes that weren't meant for usage outside the JVM, preventing or breaking the functionality of popular libraries that made use of these internals -e.g. Hibernate, ASM, Hazelcast- to gain performance.
To mitigate this, internal APIs in JDK 9 are inaccessible at compile time (but still accessible with --add-exports; see the sketch after the list below). APIs that were accessible in JDK 8 remain accessible at run time, but in a future release they will become inaccessible. In the long run this change reduces the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these internal APIs.
Finally, with the introduction of JEP-260, internal APIs were classified as critical and non-critical; critical internal APIs for which replacements were introduced in JDK 9 are deprecated in JDK 9 and will be either encapsulated or removed in a future release.
However, you are inside the danger zone if:
- Your project compiles against dependencies pre-Java 9 depending on critical internals
- You bundle dependencies pre-Java 9 depending on critical internals
- You run your applications over a runtime -e.g. Application Servers- that include pre Java 9 transitive dependencies
Any of these situations means that your application may not be compatible with JVMs above Java 8, at least not without updating your dependencies, which in turn could uncover breaking changes in library APIs and force refactoring.
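As an illustration only, this is how the --add-exports escape hatch looks on the command line; the jdk.internal.misc package below is just an example of a JDK-internal package, not a recommendation to depend on it:
# open an internal JDK package to code on the class path at compile time
javac --add-exports java.base/jdk.internal.misc=ALL-UNNAMED MyClass.java
# the same flag must be repeated at run time
java --add-exports java.base/jdk.internal.misc=ALL-UNNAMED MyClass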
Removal of CORBA and Java EE modules from OpenJDK
Also during the Java 9 release, many Java EE and CORBA modules were marked as deprecated and were effectively removed in Java 11, specifically:
- java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata)
- java.xml.bind (JAXB)
- java.activation (JAF)
- java.xml.ws.annotation (Common Annotations)
- java.corba (CORBA)
- java.transaction (JTA)
- java.se.ee (Aggregator module for the six modules above)
- jdk.xml.ws (Tools for JAX-WS)
- jdk.xml.bind (Tools for JAXB)
As JEP-320 states, many of these modules were included back in Java 6 as a convenience for generating and supporting SOAP web services, but they eventually took off as independent projects that are already available on Maven Central. Therefore, it is necessary to include them as dependencies if your project implements services with JAX-WS and/or depends on any library/utility that previously relied on them.
IDEs and application servers
In the same way as libraries, Java IDEs had to catch up with the introduction of Java 9, at least on three levels:
- IDEs as Java programs should be compatible with Java Modules
- IDEs should support new Java versions as a programming language -i.e. incremental compilation, linting, text analysis, modules-
- IDEs are also the basis for an ecosystem of plugins that are developed independently. Hence, if plugins have any transitive dependency with issues under JPMS, these also have to be updated
Overall, none of the Java IDEs guarantee that plugins will work on JVMs above Java 8. You could therefore run your IDE on Java 11, yet a legacy or deprecated plugin could still prevent you from running your application.
How do I update?
Note that Java 9 launched three years ago, so the situations described above are mostly covered by now. However, you should perform the following verifications and actions to prevent failures in the process:
- Verify server compatibility
- Verify if you need a specific JVM due to support contracts and conditions
- Configure your development environment to support multiple JVMs during the migration process
- Verify your IDE compatibility and update
- Update Maven and Maven projects
- Update dependencies
- Include Java/Jakarta EE dependencies
- Execute multiple JVMs in production
Verify server compatibility
Mike Loukides from O'Reilly observes that there are two types of programmers: on one hand, the low-level programmers who create tools such as libraries and frameworks, and on the other hand, the developers who use those tools to create experiences, products, and services.
Enterprise Java is mostly on the second hand, the "productive world" resting on giants' shoulders. That's why you should first check whether your runtime or framework already has a version compatible with Java 11, and whether you have the time and decision power to proceed with an update. If not, any further action is useless.
The good news is that most of the popular servers in enterprise Java world are already compatible, like:
- Apache Tomcat
- Apache Maven
- Spring
- Oracle WebLogic
- Payara
- Apache TomEE
... among others
If you happen to depend on non-compatible runtimes, this is where the road ends, unless you help the maintainer to update them.
Verify if you need a specific JVM
On the non-technical side, support contract conditions could oblige you to use a specific JVM version.
OpenJDK by itself is an open source project receiving contributions from many companies (with Oracle being the most active contributor), but nothing prevents any other company from compiling, packaging, and TCK-certifying its own JVM distribution, as demonstrated by Amazon Corretto, Azul Zulu, Liberica JDK, etc.
In short, some software could technically run on any JVM distribution and version, but the support contract will require a particular one. For instance:
- WebLogic is only certified for Oracle HotSpot and GraalVM
- SAP Netweaver includes by itself SAP JVM
Configure your development environment to support multiple JDKs
Since the jump from Java 8 to Java 11 is mostly an experimentation process, it is a good idea to install multiple JVMs on the development machine, with SDKMan and jEnv being the common options:
SDKMan
SDKMan is available for Unix-like environments (Linux, macOS, Cygwin, BSD) and, as the name suggests, acts as a package manager for Java tools.
It helps to install and manage JVM ecosystem tools -e.g. Maven, Gradle, Leiningen- as well as multiple JDK installations from different providers.
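A minimal sketch of the usual SDKMan workflow; the version identifier below is a placeholder, so pick a real one from the list command:
# list every Java distribution/version known to SDKMan
sdk list java
# install a Java 11 build (identifier is an example)
sdk install java 11.0.2-open
# use it in the current shell, or make it the default
sdk use java 11.0.2-open
sdk default java 11.0.2-open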
jEnv
Also available for Unix-like environments (Linux, macOS, Cygwin, BSD), jEnv is basically a script to manage and switch between multiple JVM installations per system, per user, and per shell.
If you happen to install JDKs from different sources -e.g. Homebrew, Linux repositories, Oracle Technology Network- it is a good choice.
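A short sketch of typical jEnv usage; the JDK path is an example, and the version aliases shown by jenv versions may differ on your machine:
# register an installed JDK with jEnv (macOS path used as an example)
jenv add /Library/Java/JavaVirtualMachines/jdk-11.0.2.jdk/Contents/Home
# list the registered versions and their aliases
jenv versions
# select a JDK globally or only for the current project directory
jenv global 11.0
jenv local 11.0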
Finally, if you use Windows, the common alternative is to automate the switch using .bat files; I would appreciate any other suggestions, since I don't use Windows that often.
Verify your IDE compatibility and update
Please remember that any IDE ecosystem is composed of three levels:
- The IDE acting as platform
- Programming language support
- Plugins to support tools and libraries
After updating your IDE, you should also verify that all of the plugins that are part of your development cycle work fine under Java 11.
Update Maven and Maven projects
Maven is probably the most common build tool choice in enterprise Java, and many IDEs use it under the hood or explicitly. Hence, you should update it.
Besides the installation itself, remember that Maven has a modular architecture and the versions of Maven's own plugins can be pinned in any project definition. As a rule of thumb, you should also update these plugins in your projects to the latest stable version.
To verify this quickly, you could use versions-maven-plugin:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>versions-maven-plugin</artifactId>
<version>2.8.1</version>
</plugin>
It includes a specific goal to verify Maven plugin versions:
mvn versions:display-plugin-updates
After that, you also need to configure the Java source and target compatibility; generally this is achieved in two places.
As properties:
<properties>
...
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
</properties>
As configuration on specific Maven plugins, especially maven-compiler-plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
Finally, some plugins need to "break" the barriers imposed by Java Modules, and the Java Platform team knows about it. Hence the JVM has an argument called --illegal-access to allow this, at least during Java 11.
This can be useful for plugins like surefire and failsafe, which also invoke runtimes that depend on this flag (like Arquillian tests):
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<argLine>
--illegal-access=permit
</argLine>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<argLine>
--illegal-access=permit
</argLine>
</configuration>
</plugin>
Update project dependencies
As mentioned before, you need to check for compatible versions of your Java dependencies. Sometimes these libraries introduce breaking changes in each major version -e.g. Flyway- and you should plan time to refactor accordingly.
Again, if you use Maven, versions-maven-plugin has a goal to verify dependency versions; the plugin will report any available updates:
mvn versions:display-dependency-updates
In the particular case of Java EE, you already have an advantage. If you depend only on APIs -e.g. Java EE, MicroProfile- and not particular implementations, many of these issues are already solved for you.
Include Java/Jakarta EE dependencies
Modern REST-based services probably won't need this, but in projects with heavy usage of SOAP and XML marshalling it is mandatory to include the Java EE modules removed in Java 11; otherwise your project won't compile or run.
You must include as dependencies:
- API definition
- Reference Implementation (if needed)
At this point it is also a good idea to evaluate whether you could move to Jakarta EE, the evolution of Java EE under the Eclipse Foundation.
Jakarta EE 8 is practically Java EE 8 under another name, retaining package and feature compatibility, and most application servers either already have certified Jakarta EE implementations or are in the process of obtaining them.
We could swap the Java EE API:
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>8.0.1</version>
<scope>provided</scope>
</dependency>
For Jakarta EE API:
<dependency>
<groupId>jakarta.platform</groupId>
<artifactId>jakarta.jakartaee-api</artifactId>
<version>8.0.0</version>
<scope>provided</scope>
</dependency>
After that, please include any of these dependencies (if needed):
Java Beans Activation
Java EE
<dependency>
<groupId>javax.activation</groupId>
<artifactId>javax.activation-api</artifactId>
<version>1.2.0</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.activation</groupId>
<artifactId>jakarta.activation-api</artifactId>
<version>1.2.2</version>
</dependency>
JAXB (Java XML Binding)
Java EE
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.3.1</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.xml.bind</groupId>
<artifactId>jakarta.xml.bind-api</artifactId>
<version>2.3.3</version>
</dependency>
Implementation
<dependency>
<groupId>org.glassfish.jaxb</groupId>
<artifactId>jaxb-runtime</artifactId>
<version>2.3.3</version>
</dependency>
JAX-WS
Java EE
<dependency>
<groupId>javax.xml.ws</groupId>
<artifactId>jaxws-api</artifactId>
<version>2.3.1</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.xml.ws</groupId>
<artifactId>jakarta.xml.ws-api</artifactId>
<version>2.3.3</version>
</dependency>
Implementation (runtime)
<dependency>
<groupId>com.sun.xml.ws</groupId>
<artifactId>jaxws-rt</artifactId>
<version>2.3.3</version>
</dependency>
Implementation (standalone)
<dependency>
<groupId>com.sun.xml.ws</groupId>
<artifactId>jaxws-ri</artifactId>
<version>2.3.2-1</version>
<type>pom</type>
</dependency>
Java Annotation
Java EE
<dependency>
<groupId>javax.annotation</groupId>
<artifactId>javax.annotation-api</artifactId>
<version>1.3.2</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.annotation</groupId>
<artifactId>jakarta.annotation-api</artifactId>
<version>1.3.5</version>
</dependency>
Java Transaction
Java EE
<dependency>
<groupId>javax.transaction</groupId>
<artifactId>javax.transaction-api</artifactId>
<version>1.3</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.transaction</groupId>
<artifactId>jakarta.transaction-api</artifactId>
<version>1.3.3</version>
</dependency>
CORBA
In the particular case of CORBA, I'm aware of its adoption. There is an independent project at Eclipse to support CORBA, based on GlassFish CORBA, but this should be investigated further.
Multiple JVMs in production
If everything compiles, tests, and executes, you did a successful migration.
Some deployments/environments run multiple application servers on the same Linux installation. If this is your case, it is a good idea to install multiple JVMs to allow a stepped migration instead of a big bang.
For instance, RHEL-based distributions like CentOS, Oracle Linux, or Fedora include various JVM versions in their repositories, which can be installed side by side; see the sketch below.
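A minimal sketch, assuming a RHEL/Fedora-style system where OpenJDK packages follow the java-<version>-openjdk naming convention:
# install two OpenJDK versions side by side (use yum instead of dnf on older CentOS/RHEL)
sudo dnf install java-1.8.0-openjdk-devel
sudo dnf install java-11-openjdk-devel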
Most importantly, if you install JVMs from RPMs outside your distribution's repositories (like Oracle HotSpot), the alternatives mechanism will still let you manage them, as sketched below.
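For reference, a sketch of managing the default JVM with the alternatives tool; the install path and priority below are placeholders:
# interactively choose which registered JDK is the system default
sudo alternatives --config java
# register a manually installed JDK (e.g. an Oracle HotSpot RPM) with alternatives
sudo alternatives --install /usr/bin/java java /opt/jdk-11/bin/java 1100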
However, on modern deployments it is probably better to use Docker, especially on Windows, which would otherwise need .bat scripts to automate this task. Most JVM distributions are also available on Docker Hub; for example:
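For instance, a quick sanity check with one of the OpenJDK images published on Docker Hub (the image tag is an example; pick whichever distribution you prefer):
# run a throwaway container and print the JVM version
docker run --rm openjdk:11 java -version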
July 06, 2020
Jakarta EE Cookbook
by Elder Moraes at July 06, 2020 07:19 PM
About one month ago I had the pleasure of announcing the release of the second edition of my book, now called “Jakarta EE Cookbook”. At the time I recorded a video about it, which you can watch here:
And then came a crazy month and just now I had the opportunity to write a few lines about it!
So, straight to the point, what you should know about the book (in case you have any interest in it).
Target audience
Java developers working on enterprise applications who would like to get the best from the Jakarta EE platform.
Topics covered
I’m sure this is one of the most complete books in this field, and I say that based on the topics it covers:
- Server-side development
- Building services with RESTful features
- Web and client-server communication
- Security in the enterprise architecture
- Jakarta EE standards (and how they save you time on a daily basis)
- Deployment and management using some of the best Jakarta EE application servers
- Microservices with Jakarta EE and Eclipse MicroProfile
- CI/CD
- Multithreading
- Event-driven for reactive applications
- Jakarta EE, containers & cloud computing
Style and approach
The book has the word “cookbook” in its name for a reason: it follows a 100% practical approach, with almost all working code available in the book (we only omitted the imports for the sake of space).
Speaking of the source code, it is all available on my GitHub: https://github.com/eldermoraes/javaee8-cookbook
PRs and Stars are welcomed!
Bonus content
The book has an appendix that would be worthy of another book! I tell the readers how sharing knowledge has changed my career for good and how you can apply what I’ve learned in your own career.
Surprise, surprise
In the first 24 hours of its release, this book simply reached the 1st place at Amazon among other Java releases! Wow!
Of course, I’m more than happy and honored for such a warm welcome given to my baby…
If you are interested in it, we are in the very last days of the special price in celebration of its release. You can take a look here http://book.eldermoraes.com
Leave your comments if you need any clarification about it. See you!
January 29, 2020
Monitoring REST APIs with Custom JDK Flight Recorder Events
January 29, 2020 02:30 PM
The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.
In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests, and more. We’ll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
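As a hedged illustration of the idea (not the post's actual code), a custom JFR event for a REST endpoint could look roughly like the sketch below; the event name, labels, and fields are assumptions:
import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical event recorded once per handled REST request
@Name("com.example.RestRequest")
@Label("REST Request")
@Category("REST API")
public class RestRequestEvent extends Event {
    @Label("HTTP Method")
    String method;

    @Label("Path")
    String path;

    @Label("Status Code")
    int status;
}

// Usage inside a (hypothetical) request filter:
// RestRequestEvent event = new RestRequestEvent();
// event.begin();
// ... handle the request ...
// event.method = "GET";
// event.path = "/orders/123";
// event.status = 200;
// event.commit();  // the duration between begin() and commit() is captured automatically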
January 20, 2020
Enforcing Java Record Invariants With Bean Validation
January 20, 2020 04:30 PM