JUG Vienna, Ya!vaConf, airhacks.tv and Airhacks Workshops

by admin at December 02, 2023 03:49 AM

  1. Java User Group (JUG) Vienna: Reasonable Cloud Practices with Java on AWS
    session Java User Group (JUG) Vienna vienna 4 Dec 2023
    https://www.meetup.com/java-vienna/events/296033339/
  2. airhacks.tv: 117th airhacks.tv Questions and Answers [online event]
    live Q & A stream 5 Dec 2023
    https://www.youtube.com/@bienadam/streams
  3. airhacks.live: AWS Security, Authentication and Authorization for Java [online event]
    airhacks.live workshop 7 Dec 2023
    https://airhacks.live
  4. Ya!vaConf: Cloudy Patterns for Enterprise Java [online event]
    session 8 Dec 2023
    https://yavaconf.com/#agenda-section
  5. airhacks.live: Serverless Java Patterns and Best Practices on AWS [online event]
    airhacks.live workshop 14 Dec 2023
    https://airhacks.live


Kafka Streams Tutorial

by F.Marchioni at November 28, 2023 04:42 PM

Kafka Streams is a powerful and lightweight library provided by Apache Kafka for building real-time streaming applications and microservices. In this tutorial we will walk through a simple Kafka Streams example with Quarkus, showing how to perform stream processing tasks directly within the Kafka ecosystem, leveraging the familiar Kafka infrastructure to process and transform data ... Read more
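As a taste of what such an application looks like, here is a minimal, framework-agnostic Kafka Streams topology (the topic names and the uppercase transformation are illustrative, not taken from the tutorial):

```java
import java.util.Locale;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class UppercaseTopology {
    // Builds a topology that reads records from "words-in", uppercases each
    // value, and writes the result to "words-out".
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("words-in", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(v -> v.toUpperCase(Locale.ROOT))
               .to("words-out", Produced.with(Serdes.String(), Serdes.String()));
        return builder.build();
    }
}
```

In a Quarkus application you would typically expose such a Topology from a CDI producer method and let the quarkus-kafka-streams extension start and manage it.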

The post Kafka Streams Tutorial appeared first on Mastertheboss.



Preview of Jakarta Data Milestone 1 in Open Liberty 23.0.0.12-beta

November 28, 2023 12:00 AM

Open Liberty 23.0.0.12-beta contains a preview of the new Jakarta Data specification for Jakarta EE, as it currently stands at Milestone 1. Milestone 1 provides the capability to annotate lifecycle methods such as Insert, Delete, and more. You can try it out and give feedback on the specification so far.

Also in this beta, you can configure the quiesce stage when the Liberty runtime shuts down to be longer than the default 30 seconds. This update is useful for services that need more time to finish processing requests.

The Open Liberty 23.0.0.12-beta includes the following beta features (along with all GA features):

Preview of Jakarta Data (Milestone 1)

Jakarta Data is a new Jakarta EE specification being developed in the open that aims to standardize the popular data repository pattern across a variety of providers. Open Liberty includes the Jakarta Data 1.0 Milestone 1 release, which adds the ability to annotatively compose custom lifecycle methods, covering Insert, Update, Save, and Delete operations. The Open Liberty beta includes a test implementation of Jakarta Data that we are using to experiment with proposed specification features so that developers can try out these features and provide feedback to influence the Jakarta Data 1.0 specification as it continues to be developed after Milestone 1. The test implementation currently works with relational databases and operates by redirecting repository operations to the built-in Jakarta Persistence provider.

Jakarta Data 1.0 Milestone 1 introduces the concept of annotated lifecycle methods. To use these methods, you need an entity and a repository.

Start by defining an entity class that corresponds to your data. With relational databases, the entity class corresponds to a database table and the entity properties (public methods and fields of the entity class) generally correspond to the columns of the table. An entity class can be:

  • annotated with jakarta.persistence.Entity and related annotations from Jakarta Persistence

  • a Java class without entity annotations, in which case the primary key is inferred from an entity property named id or ending with Id and an entity property named version designates an automatically incremented version column.
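For illustration, a hypothetical annotation-free entity could look like this (the class and property names are ours, not from the beta):

```java
// Hypothetical annotation-free entity: Jakarta Data infers "id" as the
// primary key and "version" as the automatically incremented version column.
public class Inventory {
    public long id;       // inferred primary key (property named "id")
    public String item;
    public long version;  // inferred version property
}
```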

You define one or more repository interfaces for an entity, annotate those interfaces with @Repository, and inject them into components via @Inject. The Jakarta Data provider supplies the implementation of the repository interface for you.

The following example shows a simple entity:

@Entity
public class Product {
    @Id
    public long id;

    public String name;

    public float price;

    @Version
    public long version;
}

The following example shows a repository that defines operations that relate to the entity. Your repository interface can inherit from built-in interfaces, such as BasicRepository and CrudRepository, to gain a variety of general purpose repository methods for inserting, updating, deleting, and querying for entities. However, in this case, we will define all of the methods ourselves by using the new lifecycle annotations:

@Repository(dataStore = "java:app/jdbc/my-example-data")
public interface Products {
    @Insert
    Product add(Product newProduct);

    @Update
    boolean modify(Product product);

    @Delete
    boolean remove(Product product);

    // parameter based query that requires compilation with -parameters to preserve parameter names
    Optional<Product> find(long id);

    // query-by-method name pattern:
    Page<Product> findByNameIgnoreCaseContains(String searchFor, Pageable pageRequest);

    // query via JPQL:
    @Query("UPDATE Product o SET o.price = o.price - (?2 * o.price) WHERE o.id = ?1")
    boolean discount(long productId, float discountRate);
}

The following example shows the repository being used:

@DataSourceDefinition(name = "java:app/jdbc/my-example-data",
                      className = "org.postgresql.xa.PGXADataSource",
                      databaseName = "ExampleDB",
                      serverName = "localhost",
                      portNumber = 5432,
                      user = "${example.database.user}",
                      password = "${example.database.password}")
public class MyServlet extends HttpServlet {
    @Inject
    Products products;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Insert:
        Product prod = ...
        prod = products.add(prod);

        // Update:
        prod.price = prod.price + 1.00f;
        if (products.modify(prod))
            System.out.println("Successfully increased the price.");
        else {
            // someone else either removed the product or updated its version before we could
            prod = products.find(prod.id).orElseThrow();
            ...
        }

        // Request only the first 20 results on a page, ordered by price, then name, then id:
        Pageable pageRequest = Pageable.size(20).sortBy(Sort.desc("price"), Sort.asc("name"), Sort.asc("id"));
        Page<Product> page1 = products.findByNameIgnoreCaseContains(searchFor, pageRequest);
        ...
    }
}

Configurable quiesce timeout for the Liberty runtime

Liberty has a quiesce stage when shutting down the Liberty runtime, which prevents services from accepting new requests and allows time for services to process existing requests. The quiesce stage has always been a fixed 30-second time period. This quiesce time period is now configurable.

Previously, in some cases, the 30-second quiesce period was not long enough for services to finish processing existing requests, so you can now increase the quiesce timeout if necessary.

To configure the quiesce timeout, add the new quiesceTimeout attribute to the executor element in the server.xml file:

<executor quiesceTimeout="1m30s"/>

The timeout value is a positive integer followed by a unit of time, which can be hours (h), minutes (m), or seconds (s). For example, specify 30 seconds as 30s. You can include multiple units in a single entry. For example, 1m30s is equivalent to 90 seconds. The minimum quiesceTimeout value is 30 seconds. If you specify a shorter length of time, the value 30s is used.

Try it now

To try out these features, update your build tools to pull the Open Liberty All Beta Features package instead of the main release. The beta works with Java SE 21, Java SE 17, Java SE 11, and Java SE 8.

If you’re using Maven, you can install the All Beta Features package using:

<plugin>
    <groupId>io.openliberty.tools</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>3.9</version>
    <configuration>
        <runtimeArtifact>
          <groupId>io.openliberty.beta</groupId>
          <artifactId>openliberty-runtime</artifactId>
          <version>23.0.0.12-beta</version>
          <type>zip</type>
        </runtimeArtifact>
    </configuration>
</plugin>

You must also add dependencies to your pom.xml file for the beta version of the APIs that are associated with the beta features that you want to try. For example, for Jakarta EE 10 and MicroProfile 6, you would include:

<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>6.0-RC3</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>10.0.0</version>
    <scope>provided</scope>
</dependency>

Or for Gradle:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'io.openliberty.tools:liberty-gradle-plugin:3.7'
    }
}
apply plugin: 'liberty'
dependencies {
    libertyRuntime group: 'io.openliberty.beta', name: 'openliberty-runtime', version: '[23.0.0.12-beta,)'
}

Or if you’re using container images:

FROM icr.io/appcafe/open-liberty:beta

Or take a look at our Downloads page.

If you’re using IntelliJ IDEA, Visual Studio Code or Eclipse IDE, you can also take advantage of our open source Liberty developer tools to enable effective development, testing, debugging and application management all from within your IDE.

For more information on using a beta release, refer to the Installing Open Liberty beta releases documentation.

We welcome your feedback

Let us know what you think on our mailing list. If you hit a problem, post a question on StackOverflow. If you hit a bug, please raise an issue.



Not Injectable Principals, Quarkus, MicroProfile and Smallrye--airhacks.fm podcast

by admin at November 26, 2023 02:44 PM

Subscribe to airhacks.fm podcast via: spotify| iTunes| RSS

The #270 airhacks.fm episode with Martin Stefanko (@xstefank) about:
promotions, the definition of titles, the importance of MicroProfile and standards, Quarkus and SmallRye.
is available for download.


Hashtag Jakarta EE #204

by Ivar Grimstad at November 26, 2023 10:59 AM

Welcome to issue number two hundred and four of Hashtag Jakarta EE!

Last week I spoke at Javaforum Malmö and JCON WORLD ONLINE 2023. It was amazing to see so many people showing up at Javaforum Malmö (the Malmö Java User Group) as it was our first event in two years. Hopefully, it won’t be two years until the next one…

We are fast approaching the Milestone 1 release of Jakarta EE 11. So far, the following specifications have released a Milestone 1 (M1) to Maven Central:

Jakarta Annotations 3.0.0-M1
Jakarta Data 1.0.0-M1
Jakarta Expression Language 6.0.0-M1
Jakarta Pages 4.0.0-M1
Jakarta Persistence 3.2.0-M1
Jakarta Servlet 6.1.0-M1
Jakarta WebSocket 2.2.0-M1

The rest of the specifications are expected to release a milestone by Tuesday, November 28 so we have time to assemble milestone releases of the Jakarta EE 11 Platform, Jakarta EE 11 Web Profile, and Jakarta EE 11 Core Profile in time for JakartaOne Livestream on December 5.

Speaking of JakartaOne Livestream. There is only a week left, so register today and mark it in your calendar NOW. And while you’re at it, bring out your best baking skills and create the best Jakarta EE-themed gingerbread cookie. There will be great prizes!



JCON WORLD ONLINE 2023

by Ivar Grimstad at November 24, 2023 09:48 AM

Kind of out of the blue, I was invited to speak at JCON WORLD ONLINE 2023. And not just a regular talk, but the keynote of the EclipseStore Summit on the last day of the conference.

It was kind of a last-minute thing and I didn’t have time to create a brand new talk, so I revamped my Responsible Open Source talk to fit with the keynote format. You can check out the slides of Open Source – A Journey of Contribution and Collaboration here.

At the end of the keynote, I invited Markus Kett, the CEO of MicroStream, on the stage (or in this case, the screen) to talk about EclipseStore.

EclipseStore is a persistence layer for databaseless persistence, built for cloud-native microservices and serverless systems. It is a new project under the Eclipse Foundation. It has just released its first version, so go ahead and try it out. And, even better, take a look at the code in GitHub and start contributing.



Coding Microservice From Scratch (Part 16) | JAX-RS Done Right! | Head Crashing Informatics 83

by Markus Karg at November 19, 2023 05:00 PM

Write a pure-Java microservice from scratch, without an application server or any third-party frameworks, tools, or IDE plugins, just using the JDK, Maven, and JAX-RS aka Jakarta REST 3.1. This video series shows you the essential steps!

You asked why I am not simply using the Jakarta EE 10 Core API. There are many answers in this video!

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patreon https://www.patreon.com/mkarg. Thanks! 🙂



Writing JPA applications using Java Records

by F.Marchioni at November 17, 2023 06:42 PM

In this article, we’ll explore how Java Records, available since Java 16, can be used in the context of a JPA application. We’ll uncover how Java Records, renowned for their simplicity and immutability, complement the flexibility and expressive querying abilities offered by the Criteria API. Straight question: can you make a JPA entity using Java Records? ... Read more
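As a sketch of one common pattern (the names below are illustrative, not from the article): since records cannot be entities themselves, a record makes a natural immutable DTO target for a JPQL or Criteria constructor projection.

```java
// Illustrative immutable projection DTO; Product is a hypothetical JPA entity.
public record ProductSummary(String name, double price) {
    // A JPQL constructor expression can populate it directly, e.g.:
    // SELECT new com.example.ProductSummary(p.name, p.price) FROM Product p
}
```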

The post Writing JPA applications using Java Records appeared first on Mastertheboss.



What’s New In The Nov 2023 Payara Platform Release?

by Luqman Saeed at November 16, 2023 02:33 PM

Splashing onto the scene with a tidal wave of updates, the November 2023 release of the Payara Platform is here. This release brings enhancements, security fixes, and bug fixes, ensuring a more robust and efficient environment for your mission-critical workloads. Payara Enterprise 6.8.0 comes with 4 improvements, 3 bug fixes, 1 security fix, and 1 component upgrade. Payara Community 6.2023.11 also comes with 4 improvements, 3 bug fixes, 1 security fix, and 1 component upgrade.



Virtual Payara Conference: Full Schedule

by Priya Khaira-Hanks at November 14, 2023 10:42 AM

Our pioneering virtual business and technology conference will take place on December 14th.  Gain unique insight into Jakarta EE from the best in the business!

We have designed the programme to cater to all levels of Jakarta EE knowledge - learn as a leader! The day-long program has a focus on educating Java professionals and business leaders about the power and potential of Jakarta EE. 

The conference is totally virtual, so you can join from anywhere in the world. You can also pick and choose which sessions to join, and all those who registered will be able to access recordings to watch at their leisure.

Read on for the full schedule...



Jersey Performance Improvement (Step One) | Code Review | Head Crashing Informatics 82

by Markus Karg at November 04, 2023 05:00 PM

Let’s take a deep dive into the source code of #Jersey (the heart of GlassFish, Payara and Helidon) to learn how we can make our own I/O code run faster on modern Java.

In this first step, we apply NIO APIs from #Java 7 and 8 to process data more efficiently, and most notably: outside of the JVM.
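One representative NIO technique of this kind (assumed here for illustration, not taken from the video) is zero-copy transfer with FileChannel.transferTo, which lets the kernel move bytes between channels without routing them through JVM heap buffers:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopy {
    // Copies src to dst via FileChannel.transferTo: the data can be moved by
    // the kernel directly, without passing through JVM heap buffers.
    public static void copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                     StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            long position = 0;
            long size = in.size();
            // transferTo may transfer fewer bytes than requested, so loop.
            while (position < size) {
                position += in.transferTo(position, size - position, out);
            }
        }
    }
}
```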

If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patreon https://www.patreon.com/mkarg. Thanks! 🙂



Redis Integration with Quarkus made simple

by F.Marchioni at November 01, 2023 12:19 PM

This tutorial will guide you through accessing the Redis in-memory store from a Quarkus application. We will show the key interfaces for storing data structures in Redis and how Quarkus Dev Services greatly simplifies setting up a dev environment for our Redis application. Redis Overview Redis is an open-source in-memory data structure store which ... Read more

The post Redis Integration with Quarkus made simple appeared first on Mastertheboss.



Preventing Security Vulnerabilities in a Web Application – Alexius Diakogiannis – Devoxx Morocco 2023

by Alexius Dionysius Diakogiannis at October 16, 2023 12:56 AM

This is a speech I gave at Devoxx Morocco 2023.

In today’s digital age, web applications are a crucial part of our lives. However, with great power comes great responsibility. Companies are constantly under threat from malicious users and hackers, which is why it’s essential to safeguard your web applications.

Topics Covered:

  1. Software Development Life Cycle (SDLC) – The Shield of Defense
    • Discover the importance of implementing a robust SDLC to fortify your web application against security vulnerabilities.
  2. Secure Code Writing – The Foundation of Web Application Security
    • Understand the significance of secure coding practices and how they form the bedrock of web application security.
  3. DAST, SCA and SAST tools 
    • Usage and comparison
  4. AI in Development – A Futuristic Approach
    • Explore how artificial intelligence can be harnessed to enhance web application development security.
  5. Code Monitoring in Production – Staying Vigilant
    • Learn the strategies and tools for monitoring your code in a production environment to promptly detect and mitigate vulnerabilities.

📽 Watch the Video

📄 Find the Presentation Slides

Explore the presentation slides to get an in-depth look at the concepts discussed during the session: Speaker Deck

 



Moving from javax to jakarta namespace

by Jean-Louis Monteiro at October 12, 2023 02:32 PM

This blog aims to give some pointers to address the challenge of switching from the `javax` to the `jakarta` namespace. This is one of the biggest changes in Java of the last 20 years. No doubt. The entire ecosystem is impacted: not only Java EE or Jakarta EE application servers, but also libraries of all kinds (Jackson, CXF, Hibernate, and Spring, to name a few). For instance, it took Apache TomEE about a year to convert all of its source code and dependencies to the new `jakarta` namespace.

This blog is written from the user perspective, because the shift from `javax` to `jakarta` is as impactful for application providers as it is for libraries or application servers. There have been a couple of attempts to study the impact and investigate possible paths to make the change as smooth as possible.

The problem is harder than it appears. The `javax` package is of course in the import section of a class, but it can also appear in Strings, for instance if you use the Java Reflection API. Byte code tools like ASM make the problem more complex, as do service loader mechanisms and many more. We will see that there are many ways to approach the problem, using byte code or converting the sources directly, but none are perfect.
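A tiny, contrived example of why: the package name below lives in the class file's constant pool as a plain string, so a tool that only rewrites import statements never sees it.

```java
public class ReflectiveLookup {
    // The old namespace hides in a string constant, invisible to tools that
    // only rewrite import statements. (javax.xml.bind moved to jakarta.xml.bind.)
    static final String CONTEXT_CLASS = "javax.xml.bind.JAXBContext";

    // On a converted classpath this lookup fails at runtime, because only
    // jakarta.xml.bind.JAXBContext exists there.
    public static Class<?> load() throws ClassNotFoundException {
        return Class.forName(CONTEXT_CLASS);
    }
}
```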

Bytecode enhancement approach

The first legitimate approach that comes to our mind is the byte code approach. The goal is to keep the `javax` namespace as much as possible and use bytecode enhancement to convert binaries.

Compile time

It is possible to do a post-treatment on the libraries and packages to transform archives so that they are converted to the `jakarta` namespace.

  • Maven Shade Plugin: https://maven.apache.org/plugins/maven-shade-plugin/

The Maven Shade Plugin has the ability to relocate packages. While its primary purpose isn’t to move from the `javax` to the `jakarta` package, it is possible to use it to relocate small libraries when they aren’t ready yet. We used this approach in TomEE itself and in third-party libraries such as Apache Johnzon (JSON-B/P implementation).

Here is an example in TomEE where we use Maven Shade Plugin to transform the Apache ActiveMQ Client library https://github.com/apache/tomee/blob/main/deps/activemq-client-shade/pom.xml

This approach is not perfect, especially when you have a multi-module library. For instance, if you have a project with two modules where A depends on B, you can use the Shade plugin to convert the two modules and publish them using a classifier. The issue then is that when you need A, you have to exclude B so that you can include it manually with the right classifier.

We’d say it works fine, but only for simple cases, because it breaks dependency management in Maven, especially with transitive dependencies. It also breaks IDE integration, because sources and javadoc won’t match.
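A minimal sketch of the relocation configuration (the package pattern and classifier are illustrative; see the linked TomEE pom for a real example):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>javax.json</pattern>
                        <shadedPattern>jakarta.json</shadedPattern>
                    </relocation>
                </relocations>
                <!-- publish alongside the javax artifact under a classifier -->
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>jakarta</shadedClassifierName>
            </configuration>
        </execution>
    </executions>
</plugin>
```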

  • Eclipse Transformer: https://projects.eclipse.org/projects/technology.transformer

The Eclipse Transformer is also a generic tool, but it has been heavily developed for the `javax` to `jakarta` namespace change. It operates on resources such as:

Simple resources:

  • Java class files
  • OSGi feature manifest files
  • Properties files
  • Service loader configuration files
  • Text files (of several types: java source, XML, TLD, HTML, and JSP)

Container resources:

  • Directories
  • Java archives (JAR, WAR, RAR, and EAR files)
  • ZIP archives

It can be configured using Java properties files to properly convert Java modules, classes, and test resources. This is the approach we used for Apache TomEE 9.0.0-M7 when we first tried to convert to `jakarta`. It had limitations, so we had to find tricks to solve issues. Because it converted the final distribution and not the individual artifacts, it was impossible for users to use Arquillian or the Maven plugin; they were not converted.

  • Apache Tomcat Migration Tool: https://github.com/apache/tomcat-jakartaee-migration

This tool can operate on a directory or an archive (zip, ear, jar, war). It can quite easily migrate an application based on the set of specifications supported in Tomcat and a few more. It has the notion of profiles, so you can ask it to convert more.

You can run it using the Ant task (within Maven or not), and there is also a command-line interface to run it easily.

Deploy time

When using an application server, it is sometimes possible to step into the deployment process and convert the binaries prior to their deployment.

  • Apache Tomcat/TomEE migration tool: https://github.com/apache/tomcat-jakartaee-migration

Mind that, by default, the tool converts only what is supported by Apache Tomcat and a couple of other APIs. It does not convert all the specifications supported in TomEE, like JAX-RS for example. And Tomcat does not yet provide any way to configure it.

Runtime

We haven’t seen any working solution in this area. Of course, we could imagine a JavaAgent approach that converts the bytecode right when it gets loaded by the JVM. However, the startup time is seriously impacted, and the conversion has to be done every time the JVM restarts or loads a class in a classloader. Remember that a class can be loaded multiple times in different classloaders.

Source code enhancement approach

This may sound like the most impacting approach, but it is probably also the most secure one. We also strongly believe that embracing the change sooner rather than later is preferable. As mentioned, this is one of the biggest breaking changes in Java of the last 20 years. Since Java EE moved to Eclipse to become Jakarta, we have noticed a change in the release cadence: releases are now more frequent, and more changes are going to happen. Killing the technical debt as soon as possible is probably best when the change is so impacting.

There are a couple of tools we tried. There are probably more in the ecosystem, and also some in-house developments.

IMPORTANT: This is usually a one-shot operation. It won’t be perfect, and no doubt it will require adjustments, because there is no perfect tool that can handle all cases.

IntelliJ IDEA

IntelliJ IDEA added a refactoring capability to its IDE to convert sources to the new `jakarta` namespace. I haven’t tested it myself, but it may help with the first big step when you don’t really master the scripting approach below.

Scripting approach

For simple cases, and we used this approach to do most of the conversion in TomEE, you can create your own simple tool to convert sources. For instance, SmallRye does that with their MicroProfile implementations. Here is an example: https://github.com/smallrye/smallrye-config/blob/main/to-jakarta.sh

Using basic Linux commands, it converts from the `javax` to the `jakarta` namespace, and the result is then pushed to a dedicated branch. The benefit is that they have two source trees with different artifacts, so dependency management isn’t broken.

One source tree is the reference, and they add to the script the commands necessary to convert additional things on demand.
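A minimal sketch of such a script (the package list and the src/ layout are assumptions; real scripts like SmallRye's handle many more packages):

```shell
#!/bin/sh
# Rewrite a few javax.* EE package prefixes to jakarta.* in all Java sources.
# The packages are listed one by one on purpose: JDK packages such as
# javax.sql must NOT be rewritten, so a blanket javax->jakarta replace is wrong.
find src -name '*.java' -print0 | xargs -0 -r sed -i \
    -e 's/javax\.servlet/jakarta.servlet/g' \
    -e 's/javax\.persistence/jakarta.persistence/g' \
    -e 's/javax\.ws\.rs/jakarta.ws.rs/g' \
    -e 's/javax\.enterprise/jakarta.enterprise/g'
```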

  • Eclipse Transformer: https://projects.eclipse.org/projects/technology.transformer

Because the Eclipse Transformer can operate on text files, it can be easily used to migrate the sources from `javax` to `jakarta` namespace.

Producing converted artifacts for applications to consume

Whether you are working on open source or not, someone will consume your artifacts. If you are using Maven, for example, you may ask yourself which option is best, especially if you maintain the two branches, `javax` and `jakarta`.

NOTE: It does not matter whether you use the bytecode or the source code approach.

Updating version or artifactId

This is probably the most practical solution. Some projects, like Arquillian, decided to use a different artifact name (a -jakarta suffix) because the artifact is the same and solves the same problem, so why bring a technical concern into the name? I’m more in favor of using the version to mark the namespace change. It is in effect a major API change that I’d rather emphasize with a major version update.

[IMPORTANT]

Mind that this only works if the `javax` and `jakarta` APIs are backward compatible. Otherwise, it won’t work.

Using Maven classifiers

This is not an option we would recommend. Unfortunately, some of our dependencies use this approach, and it has many drawbacks. It’s fine for a quick test but, as mentioned previously, it badly impacts how Maven works. If you pull a transformed artifact, you may get a transitive dependency that was not transformed. This is the case for multi-module projects as well.

Another painful side effect is that the javadoc and sources are still linked to the original artifact, so you will have a hard time debugging in the IDE.

Conclusion

We tried the bytecode approach ourselves in TomEE in the hope of avoiding maintaining two source trees, one for the `javax` and one for the `jakarta` namespace. Unfortunately, as we have seen, the risk is too great and there are too many edge cases left uncovered. Apache TomEE runs about 60k tests (including the TCK), and our confidence wasn’t good enough. Even though the approach has some benefits and can work for simple use cases, like converting a small utility library, in our opinion it does not fit real applications.

 

The post Moving from javax to jakarta namespace appeared first on Tomitribe.



Choosing Connector in Jersey

by Jan at October 02, 2023 01:49 PM

Jersey uses the JDK’s HttpURLConnection for sending HTTP requests by default. However, there are cases where the default HttpURLConnection cannot be used, or where another available HTTP client suits the customer’s needs better. For this, Jersey comes with … Continue reading
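The pattern looks roughly like this (sketched from the Jersey connector API; the Apache connector is one of several providers and requires the jersey-apache-connector module):

```java
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;

public class ConnectorExample {
    // Builds a JAX-RS client that sends requests through Apache HttpClient
    // instead of the default HttpUrlConnection-based connector.
    public static Client apacheClient() {
        ClientConfig config = new ClientConfig();
        config.connectorProvider(new ApacheConnectorProvider());
        return ClientBuilder.newClient(config);
    }
}
```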


Virtual Threads with Quarkus made easy

by F.Marchioni at October 02, 2023 01:43 PM

In our first tutorial about virtual threads, Mastering Virtual Threads: A Comprehensive Tutorial, we covered the basics of Virtual Threads. In this article we will learn how Quarkus simplifies the implementation and debugging of virtual threads in your application through the @RunOnVirtualThread annotation. The @RunOnVirtualThread annotation tells Quarkus to do ... Read more
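Under the hood this builds on plain JDK virtual threads (Java 21). A framework-free sketch of the basic mechanism the annotation delegates to:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class VirtualThreadDemo {
    // Runs a task on a virtual thread and reports whether the thread the
    // task observed really was virtual.
    public static boolean ranOnVirtualThread() throws InterruptedException {
        AtomicBoolean virtual = new AtomicBoolean(false);
        Thread t = Thread.ofVirtual().start(
                () -> virtual.set(Thread.currentThread().isVirtual()));
        t.join();
        return virtual.get();
    }
}
```

A Quarkus REST method annotated with @RunOnVirtualThread is dispatched on such a virtual thread instead of a worker pool thread.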

The post Virtual Threads with Quarkus made easy appeared first on Mastertheboss.



Navigating the Shift From Drupal 7 to Drupal 9/10 at the Eclipse Foundation

September 27, 2023 02:30 PM

We’re currently in the middle of a substantial transition as we are migrating mission-critical websites from Drupal 7 to Drupal 9, with our sights set on Drupal 10. This shift has been motivated by several factors, including the announcement of Drupal 7 end-of-life which is now scheduled for January 5, 2025, and our goal to reduce technical debt that we accrued over the last decade.

To provide some context, we’re migrating a total of six key websites:

  • projects.eclipse.org: The Eclipse Project Management Infrastructure (PMI) consolidates project management activities into a single consistent location and experience.
  • accounts.eclipse.org: The Eclipse Account website is where our users go to manage their profiles and sign essential agreements, like the Eclipse Contributor Agreement (ECA) and the Eclipse Individual Committer Agreement (ICA).
  • blogs.eclipse.org: Our official blogging platform for Foundation staff.
  • newsroom.eclipse.org: The Eclipse Newsroom is our content management system for news, events, newsletters, and valuable resources like case studies, market reports, and whitepapers.
  • marketplace.eclipse.org: The Eclipse Marketplace empowers users to discover solutions that enhance their Eclipse IDE.
  • eclipse.org/downloads/packages: The Eclipse Packaging website is our platform for managing the publication of download links for the Eclipse Installer and Eclipse IDE Packages on our websites.

The Progress So Far

We’ve made substantial progress this year with our migration efforts. The team successfully completed the migration of Eclipse Blogs and Eclipse Newsroom. We are also in the final stages of development with the Eclipse Marketplace, which is currently scheduled for a production release on October 25, 2023. Next year, we’ll focus our attention on completing the migration of our more substantial sites, such as Eclipse PMI, Eclipse Accounts, and Eclipse Packaging.

More Than a Simple Migration: Decoupling Drupal APIs With Quarkus

This initiative isn’t just about moving from one version of Drupal to another. Simultaneously, we’re undertaking the task of decoupling essential APIs from Drupal in the hope that future migration or upgrade won’t impact as many core services at the same time. For this purpose, we’ve chosen Quarkus as our preferred platform. In Q3 2023, the team successfully migrated the GitHub ECA Validation Service and the Open-VSX Publisher Agreement Service from Drupal to Quarkus. In Q4 2023, we’re planning to continue down that path and deploy a Quarkus implementation of several critical APIs such as:

  • Account Profile API: This API offers user information, covering ECA status and profile details like bios.
  • User Deletion API: This API monitors user deletion requests ensuring the right to be forgotten.
  • Committer Paperwork API: This API keeps tabs on the status of ongoing committer paperwork records.
  • Eclipse USS: The Eclipse User Storage Service (USS) allows Eclipse projects to store user-specific project information on our servers.

Conclusion: A Forward-Looking Transition

Our migration journey from Drupal 7 to Drupal 9, with plans for Drupal 10, represents our commitment to providing a secure, efficient, and user-friendly online experience for our community. We are excited about the possibilities this migration will unlock for us, advancing us toward a more modern web stack.

Finally, I’d like to take this moment to highlight that this project is a monumental team effort, thanks to the exceptional contributions of Eric Poirier and Théodore Biadala, our Drupal developers; Martin Lowe and Zachary Sabourin, our Java developers implementing the API decoupling objective; and Frederic Gurr, whose support has been instrumental in deploying our new apps on the Eclipse Infrastructure.


September 27, 2023 02:30 PM

MicroProfile 6.1, Java 21, and fast startup times for Spring Boot apps on Open Liberty 23.0.0.10-beta

September 26, 2023 12:00 AM

This Open Liberty beta is packed full of the team’s latest standards implementation work with previews of MicroProfile 6.1 (Metrics, Telemetry, and OpenAPI), Java 21, and Jakarta Data (Beta 3) on Open Liberty. It also introduces faster startup times for your Spring Boot applications with little or no extra effort by using Liberty InstantOn; if you have any Spring apps to hand, give it a try. And there are a couple of updates that make it easier to manage security configurations in containerized environments.

The Open Liberty 23.0.0.10-beta includes the following beta features (along with all GA features):

Faster startup of Spring Boot apps (Spring Boot 3.0 InstantOn with CRaC)

Open Liberty InstantOn provides fast startup times for MicroProfile and Jakarta EE applications. With InstantOn, your applications can start in milliseconds, without compromising on throughput, memory, development-production parity, or Java language features. InstantOn uses the Checkpoint/Restore In Userspace (CRIU) feature of the Linux kernel to take a checkpoint of the JVM that can be restored later.

The Spring Framework (version 6.1) is adding support for Coordinated Restore at Checkpoint (CRaC), which also uses CRIU to provide checkpoint and restore for Java applications. Spring Boot 3.2 will use Spring Framework 6.1, enabling Spring Boot applications to use CRaC to achieve rapid startup times.

The recent addition of the Open Liberty springBoot-3.0 feature allows Spring Boot 3.x-based applications to be deployed with Open Liberty. And now, with the new Open Liberty crac-1.3 beta feature, a Spring Boot 3.2-based application can be deployed with Liberty InstantOn to achieve rapid startup times for your Spring Boot application.

To use the CRaC 1.3 functionality with the springBoot-3.0 feature, you must be running with Java 17 or higher and use the crac-1.3 feature. Additionally, if your application uses Servlet, it needs to use the servlet-6.0 feature. These features are configured in the server.xml file as follows:

<features>
   <feature>springBoot-3.0</feature>
   <feature>servlet-6.0</feature>
   <feature>crac-1.3</feature>
</features>

With these features enabled, you can containerize your Spring Boot 3.2 application with Liberty InstantOn support by following the Liberty InstantOn documentation, along with the Liberty recommendations for containerizing Spring Boot applications in the Liberty Spring Boot guide.

For more information and an example Spring Boot application using the Liberty InstantOn crac-1.3 feature, see the How to containerize your Spring Boot application for rapid startup blog post.

You can also use the crac-1.3 feature with other applications, such as applications using Jakarta EE or MicroProfile. Such applications can register resources with CRaC to get notifications for checkpoint and restore. This allows applications to perform actions necessary to prepare for a checkpoint as well as perform necessary actions when the application is restored. For more information on the org.crac APIs, see the org.crac Javadoc.

Java 21 support

Java 21 is finally here: the first long-term support (LTS) release since Java 17 was released two years ago. It offers some new functionality and changes that you’ll want to check out for yourself.

As it is a milestone release of Java, we thought you might like to try it out a little early (we have been testing against Java 21 build 35 ourselves). Take advantage of trying out the new changes in Java 21 now and get more time to review your applications, microservices, and runtime environments.

Just:

  1. Download the latest release of Java 21.

  2. Get the 23.0.0.10-beta version of Open Liberty.

  3. Edit your Liberty server’s server.env file to point JAVA_HOME to your Java 21 installation.

  4. Start testing!
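Step 3 above amounts to a single line in your Liberty server's server.env file (the JDK installation path shown here is illustrative; use the location of your own Java 21 install):

```
JAVA_HOME=/opt/jdk-21
```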

Java 18-21 brought a number of new JEP changes, but perhaps the most anticipated of all is the introduction of Virtual Threads in Java 21.

Will the impact of Virtual Threads live up to the anticipation? Find out for yourself by experimenting with them, or with any of the other new features in Java 21, in your applications running on the best Java runtime, Open Liberty!

For more information on Java 21, see the OpenJDK project page for JDK 21.

As we work toward full Java 21 support, please bear with us if some functionality is not yet 100% ready.

MicroProfile 6.1 support

MicroProfile 6.1 is a minor release and is backwards-compatible with MicroProfile 6.0. It brings in the Jakarta EE 10 Core Profile APIs along with the MicroProfile component specifications.

The following three specifications have minor updates, while the other five remain unchanged:

  • MicroProfile Metrics 5.1

  • MicroProfile Telemetry 1.1

  • MicroProfile Config 3.1 (mainly some TCK updates to ensure the tests run against either CDI 3.x or CDI 4.0 Lite)

See the following sections for more details about each of these features and how to try them out.

MicroProfile Metrics 5.1: configure statistics tracked by Histogram and Timer metrics

MicroProfile Metrics 5.1 includes new MicroProfile Config properties that are used for configuring the statistics that the Histogram and Timer metrics track and output. In MicroProfile Metrics 5.0, the Histogram and Timer metrics only track and output the max recorded value, the sum of all values, the count of the recorded values, and a static set of percentiles for the 50th, 75th, 95th, 98th, 99th, and 99.9th percentile. These values are emitted to the /metrics endpoint in Prometheus format.

The new properties introduced in MicroProfile Metrics 5.1 allow you to define a custom set of percentiles as well as a custom set of histogram buckets for the Histogram and Timer metrics. There are also additional configuration properties for enabling a default set of histogram buckets, including properties for defining an upper and lower bound for the bucket set.

The properties in the following table allow you to define a semicolon-separated list of value definitions using the syntax:

metric_name=value_1[,value_2…value_n]
Each property is described below.

mp.metrics.distribution.percentiles

  • Defines a custom set of percentiles for matching Histogram and Timer metrics to track and output.

  • Accepts a set of integer and decimal values for a metric name pairing.

  • Can be used to disable percentile output if no value is provided with a metric name pairing.

mp.metrics.distribution.histogram.buckets

  • Defines a custom set of (cumulative) histogram buckets for matching Histogram metrics to track and output.

  • Accepts a set of integer and decimal values for a metric name pairing.

mp.metrics.distribution.timer.buckets

  • Defines a custom set of (cumulative) histogram buckets for matching Timer metrics to track and output.

  • Accepts a set of decimal values with a time unit appended (e.g., ms, s, m, h) for a metric name pairing.

mp.metrics.distribution.percentiles-histogram.enabled

  • Configures any matching Histogram or Timer metric to provide a large set of default histogram buckets to allow for percentile configuration with a monitoring tool.

  • Accepts a true/false value for a metric name pairing.

mp.metrics.distribution.histogram.max-value

  • When percentile-histogram is enabled for a Histogram, this property defines an upper bound for the buckets reported.

  • Accepts a single integer or decimal value for a metric name pairing.

mp.metrics.distribution.histogram.min-value

  • When percentile-histogram is enabled for a Histogram, this property defines a lower bound for the buckets reported.

  • Accepts a single integer or decimal value for a metric name pairing.

mp.metrics.distribution.timer.max-value

  • When percentile-histogram is enabled for a Timer, this property defines an upper bound for the buckets reported.

  • Accepts a single decimal value with a time unit appended (e.g., ms, s, m, h) for a metric name pairing.

mp.metrics.distribution.timer.min-value

  • When percentile-histogram is enabled for a Timer, this property defines a lower bound for the buckets reported.

  • Accepts a single decimal value with a time unit appended (e.g., ms, s, m, h) for a metric name pairing.

Some properties accept multiple values for a given metric name, while others accept only a single value. You can use an asterisk (*) as a wildcard at the end of a metric name. For example, the mp.metrics.distribution.percentiles property can be defined as:

mp.metrics.distribution.percentiles=alpha.timer=0.5,0.7,0.75,0.8;alpha.histogram=0.8,0.85,0.9,0.99;delta.*=

This example configures the alpha.timer timer metric to track and output the 50th, 70th, 75th, and 80th percentile values. The alpha.histogram histogram metric outputs the 80th, 85th, 90th, and 99th percentile values. Percentiles are disabled for any Histogram or Timer metric that matches delta.*.
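The semicolon/comma syntax used by these properties can be parsed mechanically. The following is an illustrative sketch (not Liberty's actual implementation) of how the example above breaks down into per-metric value lists:

```java
import java.util.*;

// Illustrative parser (not Liberty's implementation) for the
// mp.metrics.distribution.* property syntax:
//   metric_name=value_1[,value_2...value_n];metric_name=...
public class DistributionConfigParser {
    static Map<String, List<Double>> parse(String property) {
        Map<String, List<Double>> result = new LinkedHashMap<>();
        for (String pairing : property.split(";")) {
            String[] parts = pairing.split("=", 2);
            List<Double> values = new ArrayList<>();
            if (parts.length > 1 && !parts[1].isEmpty()) {
                for (String v : parts[1].split(",")) {
                    values.add(Double.parseDouble(v));
                }
            }
            // An empty value list (e.g. "delta.*=") disables percentile
            // output for matching metrics.
            result.put(parts[0], values);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<Double>> cfg = parse(
            "alpha.timer=0.5,0.7,0.75,0.8;alpha.histogram=0.8,0.85,0.9,0.99;delta.*=");
        System.out.println(cfg.get("alpha.timer")); // [0.5, 0.7, 0.75, 0.8]
        System.out.println(cfg.get("delta.*"));     // []
    }
}
```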

We’ll expand on the previous example and define histogram buckets for the alpha.timer timer metric using the mp.metrics.distribution.timer.buckets property:

mp.metrics.distribution.timer.buckets=alpha.timer=100ms,200ms,1s

This configuration tells the metrics runtime to track and output the count of durations that fall within 0-100 ms, 0-200 ms, and 0-1 s. These values are ranges because histogram buckets work cumulatively.

The corresponding Prometheus output for the alpha.timer metric at the /metrics REST endpoint is:

# HELP alpha_timer_seconds_max
# TYPE alpha_timer_seconds_max gauge
alpha_timer_seconds_max{scope="application",} 5.633
# HELP alpha_timer_seconds
# TYPE alpha_timer_seconds histogram (1)
alpha_timer_seconds{scope="application",quantile="0.5",} 0.67108864
alpha_timer_seconds{scope="application",quantile="0.7",} 5.603590144
alpha_timer_seconds{scope="application",quantile="0.75",} 5.603590144
alpha_timer_seconds{scope="application",quantile="0.8",} 5.603590144
alpha_timer_seconds_bucket{scope="application",le="0.1",} 0.0 (2)
alpha_timer_seconds_bucket{scope="application",le="0.2",} 0.0 (2)
alpha_timer_seconds_bucket{scope="application",le="1.0",} 1.0 (2)
alpha_timer_seconds_bucket{scope="application",le="+Inf",} 2.0  (2) (3)
alpha_timer_seconds_count{scope="application",} 2.0
alpha_timer_seconds_sum{scope="application",} 6.333
1 The Prometheus metric type is histogram. Both the quantiles (percentiles) and the buckets are represented under this type.
2 The le tag stands for less than or equal to and marks the defined buckets, which are converted to seconds.
3 Prometheus requires a +Inf bucket, which counts all hits.
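The cumulative bucket counts in the output above can be reproduced with plain Java. Assuming the two recorded durations were 0.7 s and 5.633 s (consistent with the reported count 2.0, sum 6.333, and max 5.633), each bucket counts every value less than or equal to its upper bound:

```java
import java.util.*;

// Illustrative sketch of cumulative histogram bucketing, matching the
// Prometheus output above (durations are assumed from the reported sum/max).
public class CumulativeBuckets {
    static long[] bucketCounts(double[] valuesSeconds, double[] upperBounds) {
        long[] counts = new long[upperBounds.length + 1]; // last slot is +Inf
        for (int i = 0; i < upperBounds.length; i++) {
            for (double v : valuesSeconds) {
                if (v <= upperBounds[i]) counts[i]++; // cumulative: le = "less than or equal"
            }
        }
        counts[upperBounds.length] = valuesSeconds.length; // +Inf counts all hits
        return counts;
    }

    public static void main(String[] args) {
        double[] durations = {0.7, 5.633};
        double[] le = {0.1, 0.2, 1.0}; // the 100ms, 200ms, 1s buckets
        System.out.println(Arrays.toString(bucketCounts(durations, le)));
        // [0, 0, 1, 2] -- matching the le="0.1", "0.2", "1.0", and "+Inf" lines above
    }
}
```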

For more information, see the MicroProfile Metrics documentation.

MicroProfile Telemetry 1.1: updated OpenTelemetry implementation

MicroProfile Telemetry 1.1 provides developers with the latest OpenTelemetry technology; the feature now consumes OpenTelemetry 1.29.0, updated from 1.19.0. Consequently, many of the dependencies are now stable.

To enable the MicroProfile Telemetry 1.1 feature, add the following configuration to your server.xml:

<features>
   <feature>mpTelemetry-1.1</feature>
</features>

Additionally, you must make third-party APIs visible to your application in the server.xml:

<webApplication location="demo-microprofile-telemetry-inventory.war" contextRoot="/">
    <!-- enable visibility to third party apis -->
    <classloader apiTypeVisibility="+third-party"/>
</webApplication>

For more information, see the MicroProfile Telemetry documentation.

MicroProfile OpenAPI 3.1: OpenAPI doc endpoint path configuration

MicroProfile OpenAPI generates and serves OpenAPI documentation for JAX-RS applications that are deployed to the Open Liberty server. The OpenAPI documentation is served from /openapi and a user interface for browsing this documentation is served from /openapi/ui.

With MicroProfile OpenAPI 3.1, you can configure the paths for these endpoints by adding configuration to your server.xml. For example:

<mpOpenAPI docPath="/my/openapi/doc/path" uiPath="/docsUi" />

When you set this configuration on a local test server, you can then access the OpenAPI document at localhost:9080/my/openapi/doc/path and the UI at localhost:9080/docsUi.

This is particularly useful if you want to expose the OpenAPI documentation through a Kubernetes ingress which routes requests to different services based on the path. For example, with this ingress configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /appA
        pathType: Prefix
        backend:
          service:
            name: appA
            port:
              number: 9080

You could use the following server.xml configuration to ensure that the OpenAPI UI is available at /appA/openapi/ui:

<mpOpenAPI docPath="/appA/openapi" />

When uiPath is not set, it defaults to the value of docPath with /ui appended.
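That defaulting rule can be sketched in plain Java (illustrative only, not Open Liberty's implementation; the method name is hypothetical):

```java
// Sketch of the uiPath defaulting rule described above: when uiPath is not
// configured, it is derived from docPath by appending "/ui".
public class OpenApiPaths {
    static String resolveUiPath(String docPath, String uiPath) {
        return (uiPath != null) ? uiPath : docPath + "/ui";
    }

    public static void main(String[] args) {
        // Matches the server.xml examples above:
        System.out.println(resolveUiPath("/appA/openapi", null));          // /appA/openapi/ui
        System.out.println(resolveUiPath("/my/openapi/doc/path", "/docsUi")); // /docsUi
    }
}
```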

For more information, see the MicroProfile OpenAPI documentation.

Jakarta Data beta 3: configure the data source used to query and persist data

Jakarta Data is a new Jakarta EE specification being developed in the open that aims to standardize the popular data repository pattern across a variety of providers. Open Liberty includes the Jakarta Data 1.0 beta 3 release, which adds the ability to configure the data source that a Jakarta Data repository uses to query and persist data.

The Open Liberty beta includes a test implementation of Jakarta Data that we are using to experiment with proposed specification features, so that developers can try them out and provide feedback that influences the specification as it is developed. The test implementation currently works with relational databases and operates by redirecting repository operations to the built-in Jakarta Persistence provider. In preparation for Jakarta EE 11 (not yet generally available), which will require a minimum of Java 21, it runs on Java 17 and simulates the entirety of the Jakarta Data beta 3 release, plus some additional proposed features that are under consideration.

Jakarta Data beta 3 allows the use of multiple data sources, with a specification-defined mechanism for choosing which data source a repository will use.

To use Jakarta Data, you start by defining an entity class that corresponds to your data. With relational databases, the entity class corresponds to a database table and the entity properties (public methods and fields of the entity class) generally correspond to the columns of the table. You can define an entity class in one of the following ways:

  • Annotate the class with jakarta.persistence.Entity and related annotations from Jakarta Persistence.

  • Define a Java class without entity annotations, in which case the primary key is inferred from an entity property named id or ending with Id.
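The primary-key inference rule for unannotated entities can be expressed as a one-line check. This is an illustrative sketch (the class and method names are hypothetical, not part of the Jakarta Data API):

```java
// Sketch of the primary-key inference rule described above: a property
// qualifies as the ID if it is named "id" or its name ends with "Id".
public class IdInference {
    static boolean isIdProperty(String propertyName) {
        return propertyName.equals("id") || propertyName.endsWith("Id");
    }

    public static void main(String[] args) {
        System.out.println(isIdProperty("id"));        // true
        System.out.println(isIdProperty("productId")); // true
        System.out.println(isIdProperty("name"));      // false
    }
}
```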

You define one or more repository interfaces for an entity, annotate those interfaces with @Repository, and inject them into components using @Inject. The Jakarta Data provider supplies the implementation of the repository interface for you.

Here’s a simple entity:

public class Product { // entity
    public long id;
    public String name;
    public float price;
}

The following example shows a repository that defines operations relating to the entity. It opts to specify the JNDI name of a data source where the entity data is to be stored and found:

@Repository(dataStore = "java:app/jdbc/my-example-data")
public interface Products extends CrudRepository<Product, Long> {
    // query-by-method name pattern:
    Page<Product> findByNameIgnoreCaseContains(String searchFor, Pageable pageRequest);

    // query via JPQL:
    @Query("UPDATE Product o SET o.price = o.price - (?2 * o.price) WHERE o.id = ?1")
    boolean discount(long productId, float discountRate);
}

In the following example, we have chosen to define the data source with the @DataSourceDefinition annotation, which we can place on a web component, such as the following example servlet. We can then inject the repository and use it:

@DataSourceDefinition(name = "java:app/jdbc/my-example-data",
                      className = "org.postgresql.xa.PGXADataSource",
                      databaseName = "ExampleDB",
                      serverName = "localhost",
                      portNumber = 5432,
                      user = "${example.database.user}",
                      password = "${example.database.password}")
public class MyServlet extends HttpServlet {
    @Inject
    Products products;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String searchFor = req.getParameter("searchFor"); // fix: searchFor was undefined in the original snippet
        // Request only the first 20 results on a page, ordered by price, then name, then id:
        Pageable pageRequest = Pageable.size(20).sortBy(Sort.desc("price"), Sort.asc("name"), Sort.asc("id"));
        Page<Product> page1 = products.findByNameIgnoreCaseContains(searchFor, pageRequest);
    }
}

The dataStore field of @Repository can also point at the id of a databaseStore element or the id or jndiName of a dataSource element from server configuration, or the name of a resource reference that is available to the application.

For more information, see the Jakarta Data specification project.

Your feedback is welcome on all of the Jakarta Data features and will be helpful as the specification develops further. Let us know what you think, or get involved directly in the specification on GitHub.

Support LTPA keys rotation without a planned outage

Open Liberty can now automatically generate new primary LTPA keys files while continuing to use validation keys files to validate LTPA tokens. This update enables you to rotate LTPA keys without any disruption to the application’s user experience. Previously, application users had to log in to their applications again after the Liberty server LTPA keys were rotated, which is no longer necessary.

Primary keys are the LTPA keys in the configured keys file, which is ltpa.keys by default. Primary keys are used both for generating new LTPA tokens and for validating LTPA tokens. There can be only one primary keys file per Liberty runtime.

Validation keys are LTPA keys in any .keys files other than the primary keys file. The validation keys are used only for validating LTPA tokens. They are not used for generating new LTPA tokens. All validation keys must be located in the same directory as the primary keys file.

There are two ways to enable LTPA keys rotation without a planned outage: monitor the directory of the primary keys file, or specify the validation keys file.

Monitor the directory of the primary keys file for any new validation keys files.

Enable the monitorDirectory and monitorInterval attributes. For example, add the following configurations to the server.xml:

<ltpa monitorDirectory="true" monitorInterval="5m"/>

The monitorDirectory attribute monitors the ${server.config.dir}/resources/security/ directory by default, but can monitor any directory the primary keys file is specified in. The directory monitor looks for any LTPA keys files with the .keys extension. The Open Liberty server reads these LTPA keys and uses them as validation keys.

If the monitorInterval is set to 0, the default value, the directory is not monitored.

The ltpa.keys file can be renamed, for example to validation1.keys; Liberty then automatically generates a new ltpa.keys file with new primary keys, which are used for all newly created LTPA tokens. The keys in validation1.keys continue to be used for validating existing LTPA tokens.

When the keys in validation1.keys are no longer needed, remove them by deleting the file or by setting monitorDirectory to false. Removing unused validation keys is recommended because it can improve performance.

Specify the validation keys file and optionally specify a date-time to stop using the validation keys.

  1. Copy the primary keys file (ltpa.keys) to a validation keys file, for example validation1.keys.

  2. Modify the server configuration to use the validation keys file by specifying a validationKeys server configuration element inside the ltpa element. For example, add the following configuration to the server.xml file:

<ltpa>
    <validationKeys fileName="validation1.keys" password="{xor}Lz4sLCgwLTs=" notUseAfterDate="2024-01-02T12:30:00Z"/>
</ltpa>

The validation1.keys file can be removed from use at a specified future date-time with the optional notUseAfterDate attribute. Using notUseAfterDate to ignore validation keys after a given period is recommended because it can improve performance.

The fileName and password attributes are required in the validationKeys element, but notUseAfterDate is optional.

After the validation keys file is loaded from the server configuration update, the original primary keys file (ltpa.keys) can be deleted, which triggers new primary keys to be created while continuing to use validation1.keys for validation.

Specifying validation keys in this way can be combined with directory monitoring, so that validation keys that are not listed in the server.xml configuration can be used at the same time. For example:

<ltpa monitorDirectory="true" monitorInterval="5m">
    <validationKeys fileName="validation1.keys" password="{xor}Lz4sLCgwLTs=" notUseAfterDate="2024-01-02T12:30:00Z"/>
</ltpa>

To see all of the Liberty <ltpa> server configuration options, see the LTPA configuration docs.

Include all files in a specified directory in your server configuration

You can use the include element in your server.xml file to specify the location of files to include in your server configuration. In previous releases, you had to specify the location for each include file individually. Now, you can place all the included files in a directory and just specify the directory location in the include element.

This is important because, when running on Kubernetes, mounting a secret as a whole folder is the only way to dynamically reflect changes to the secret in the running pod.

In the location attribute of the include element of the server.xml file, enter the directory that contains your configuration files. For example:

    <include location="./common/"/>

After you make the changes, you can see the following output in the log:

[AUDIT   ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/a.xml
[AUDIT   ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/b.xml
[AUDIT   ] CWWKG0028A: Processing included configuration resource: /Users/rickyherget/libertyGit/open-liberty/dev/build.image/wlp/usr/servers/com.ibm.ws.config.include.directory/common/c.xml

The files in the directory are processed in alphabetical order and subdirectories are ignored.
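The processing rule above can be sketched in plain Java (illustrative only, not Liberty's implementation): regular files are sorted alphabetically and subdirectories are skipped.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Illustrative sketch of the include-directory rule described above:
// files are processed in alphabetical order and subdirectories are ignored.
public class IncludeDirectory {
    static List<String> includeOrder(Path dir) throws IOException {
        try (Stream<Path> entries = Files.list(dir)) {
            return entries.filter(Files::isRegularFile)  // subdirectories are ignored
                          .map(p -> p.getFileName().toString())
                          .sorted()                      // alphabetical processing order
                          .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("common");
        for (String name : new String[] {"b.xml", "c.xml", "a.xml"}) {
            Files.createFile(dir.resolve(name));
        }
        Files.createDirectory(dir.resolve("sub")); // ignored
        System.out.println(includeOrder(dir)); // [a.xml, b.xml, c.xml]
    }
}
```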

For more information about Liberty configuration includes, see Include configuration docs.

Try it now

To try out these features, update your build tools to pull the Open Liberty All Beta Features package instead of the main release. The beta works with Java SE 21, Java SE 17, Java SE 11, and Java SE 8.

If you’re using Maven, you can install the All Beta Features package using:

<plugin>
    <groupId>io.openliberty.tools</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>3.8.2</version>
    <configuration>
        <runtimeArtifact>
          <groupId>io.openliberty.beta</groupId>
          <artifactId>openliberty-runtime</artifactId>
          <version>23.0.0.10-beta</version>
          <type>zip</type>
        </runtimeArtifact>
    </configuration>
</plugin>

You must also add dependencies to your pom.xml file for the beta version of the APIs that are associated with the beta features that you want to try. For example, for Jakarta Data Beta 3, you would include:

<dependency>
    <groupId>jakarta.data</groupId>
    <artifactId>jakarta-data-api</artifactId>
    <version>1.0.0-b3</version>
</dependency>

For Gradle, you can install the All Beta Features package using:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'io.openliberty.tools:liberty-gradle-plugin:3.6.2'
    }
}
apply plugin: 'liberty'
dependencies {
    libertyRuntime group: 'io.openliberty.beta', name: 'openliberty-runtime', version: '[23.0.0.10-beta,)'
}

Or if you’re using container images:

FROM icr.io/appcafe/open-liberty:beta

Or take a look at our Downloads page.

If you’re using IntelliJ IDEA, Visual Studio Code, or Eclipse IDE, try our open source Liberty developer tools for efficient development, testing, debugging, and application management, all within your IDE.

For more information on using a beta release, refer to the Installing Open Liberty beta releases documentation.

We welcome your feedback

Let us know what you think on our mailing list. If you hit a problem, post a question on StackOverflow. If you hit a bug, please raise an issue.


September 26, 2023 12:00 AM

New Jetty 12 Maven Coordinates

by Joakim Erdfelt at September 20, 2023 09:42 PM

Now that Jetty 12.0.1 is released to Maven Central, we’ve started to get a few questions about where some artifacts are, or when we intend to release them (as folks cannot find them).

Things have changed with Jetty, starting with the 12.0.0 release.

First, is that our historical versioning of <servlet_support>.<major>.<minor> is no longer being used.

With Jetty 12, we are now using a more traditional <major>.<minor>.<patch> versioning scheme for the first time.

Also new in Jetty 12 is that the Servlet layer has been separated away from the Jetty Core layer.

The Servlet layer has been moved to the new Environments concept introduced with Jetty 12.

Environment | Jakarta EE | Servlet | Jakarta Namespace | Jetty GroupID
----------- | ---------- | ------- | ----------------- | -------------
ee8         | EE8        | 4       | javax.servlet     | org.eclipse.jetty.ee8
ee9         | EE9        | 5       | jakarta.servlet   | org.eclipse.jetty.ee9
ee10        | EE10       | 6       | jakarta.servlet   | org.eclipse.jetty.ee10

Jetty Environments

This means the old Servlet-specific artifacts have been moved to environment-specific locations, both in terms of Java namespace and their Maven coordinates.

Example:

Jetty 11 – Using Servlet 5
Maven Coord: org.eclipse.jetty:jetty-servlet
Java Class: org.eclipse.jetty.servlet.ServletContextHandler

Jetty 12 – Using Servlet 6
Maven Coord: org.eclipse.jetty.ee10:jetty-ee10-servlet
Java Class: org.eclipse.jetty.ee10.servlet.ServletContextHandler

We have a migration document which lists all of the migrated locations from Jetty 11 to Jetty 12.

This new versioning scheme, together with the environment features built into Jetty, means that new major versions of Jetty will not be as common as they have been in the past.





by Joakim Erdfelt at September 20, 2023 09:42 PM

Running MicroProfile reactive with Helidon Nima and Virtual Threads

by Jean-François James at September 20, 2023 05:29 PM

I recently became interested in Helidon as part of my investigations into Java Loom. Indeed, version 4 is natively based on Virtual Threads. Before going any further, let’s quickly introduce Helidon. Helidon is an Open Source project (source on GitHub, Apache V2 licence) managed by Oracle that enables you to develop lightweight cloud-native Java applications with fast […]

by Jean-François James at September 20, 2023 05:29 PM

New Survey: How Do Developers Feel About Enterprise Java in 2023?

by Mike Milinkovich at September 19, 2023 01:00 PM

The results of the 2023 Jakarta EE Developer Survey are now available! For the sixth year in a row, we’ve reached out to the enterprise Java community to ask about their preferences and priorities for cloud native Java architectures, technologies, and tools, their perceptions of the cloud native application industry, and more.

From these results, it is clear that open source cloud native Java is on the rise following the release of Jakarta EE 10. The number of respondents who have migrated to Jakarta EE continues to grow, with 60% saying they have already migrated or plan to do so within the next 6-24 months. These results indicate steady growth in the use of Jakarta EE and a growing interest in cloud native Java overall.

When comparing the survey results to 2022, usage of Jakarta EE to build cloud native applications has remained steady at 53%. Spring/Spring Boot, which relies on some Jakarta EE specifications, continues to be the leading Java framework in this category, with usage growing from 57% to 66%. 

Since the September 2022 release, Jakarta EE 10 usage has grown to 17% among survey respondents. This community-driven release is attracting a growing number of application developers to adopt Jakarta EE 10 by offering new features and updates to Jakarta EE. An equal number of developers are running Jakarta EE 9 or 9.1 in production, while 28% are running Jakarta EE 8. That means the increase we are seeing in the migration to Jakarta EE is mostly due to the adoption of Jakarta EE 10, as compared to Jakarta EE 9/9.1 or Jakarta EE 8.

The Jakarta EE Developer Survey also gives us a chance to get valuable feedback on features from the latest Jakarta EE release, as well as what direction the project should take in the future. 

Respondents are most excited about the Jakarta EE Core Profile, which was introduced in the Jakarta EE 10 release as a subset of Web Profile specifications designed for microservices and ahead-of-time compilation. When it comes to future releases, the community is prioritizing better support for Kubernetes and microservices, as well as adapting Java SE innovations to Jakarta EE, a priority that has grown in popularity since 2022. This is a good indicator that the Jakarta EE 11 release plan is headed in the right direction by adopting new Java SE 21 features.

2,203 developers, architects, and other tech professionals participated in the survey, a 53% increase from last year. This year’s survey was also available in Chinese, Japanese, Spanish & Portuguese, making it easier for Java enthusiasts around the world to share their perspectives.  Participation from the Chinese Jakarta EE community was particularly strong, with over 27% of the responses coming from China. By hearing from more people in the enterprise Java space, we’re able to get a clearer picture of what challenges developers are facing, what they’re looking for, and what technologies they are using. Thank you to everyone who participated! 

Learn More

We encourage you to download the report for a complete look at the enterprise Java ecosystem. 

If you’d like to get more information about Jakarta EE specifications and our open source community, sign up for one of our mailing lists or join the conversation on Slack. If you’d like to participate in the Jakarta EE community, learn how to get started on our website.


by Mike Milinkovich at September 19, 2023 01:00 PM

How to upgrade to Quarkus 3

by F.Marchioni at September 18, 2023 04:34 PM

This article discusses how to upgrade your existing Quarkus 2.x applications to Quarkus 3.x using the Quarkus CLI tool. We will first look at the impact of the upgrade on a Quarkus 2 application. Then, we will show how to perform the upgrade with just a single command! What is new in Quarkus ... Read more

The post How to upgrade to Quarkus 3 appeared first on Mastertheboss.


by F.Marchioni at September 18, 2023 04:34 PM

Addressing CVE-2023-4853 in Quarkus

by F.Marchioni at September 16, 2023 08:17 AM

The CVE-2023-4853 vulnerability impacts the Quarkus framework’s HTTP Security Policy. This policy provides access control to various endpoints within an application, enabling developers to secure access based on path-based configurations. However, a critical flaw has been identified in how the HTTP Security Policy handles request paths containing multiple adjacent forward-slash characters. Issue Summary ... Read more

The post Addressing CVE-2023-4853 in Quarkus appeared first on Mastertheboss.


by F.Marchioni at September 16, 2023 08:17 AM

GlassFish Embedded – a simple way to run Jakarta EE apps

by Ian Blavins at September 14, 2023 09:49 PM

I’ve been asked by the Eclipse GlassFish project to say a few words about how I use GlassFish Embedded. And since they are working on a series of complex issues that I have raised, I guess that is fair. The OmniFish team is one of the main contributors to the GlassFish project and I allowed them to post my article on their blog too.


I started using GlassFish while GlassFish was in Oracle’s hands as a reference implementation of Java EE. Some time ago, there were suggestions that GlassFish was not being actively maintained. Since Oracle’s donation of GlassFish to the Eclipse Foundation, with support from the Foundation’s GlassFish team, and the OmniFish team (that also provides commercial support), the GlassFish project is very active and the community around it is certainly present and responsive. That’s one more reason for me to continue using GlassFish in the future.

Overview of the APILoader Project

My project is APILoader. APILoader is, I believe (quite possibly wrongly), the seed for the next generation of software performance testing tools. I won’t say a lot about APILoader since it hasn’t been released yet and there is IP to protect. But I can say a bit about how it uses GlassFish Embedded.


Some software performance engineers work in teams and need to share artefacts. For them, a server-based tool is appropriate. Others work individually. For them, a server-based tool is overkill, implying, as it does, server administration. So APILoader has a server component and a client component, but they are deployed differently depending on the needs of their users. In all deployments, the database remains external.

For teams, I intend that APILoader be deployed as a server installation with multiple clients. The engineers share the server and the database for artefacts, and use the clients for isolation. APILoader supports accounts and projects. Accounts are hermetically sealed sets of projects. Projects are separated sets of artefacts, but with the option to copy selected artefact types between them. So engineers can work completely separately by using different accounts. Or they can work in the same account, sharing account level resources, but with separate sets of project-level artefacts. Or they can work on the same project and share account and project-level resources. A team might use different accounts for testing different products where artefact sharing is unlikely. That team might use a different project for performance testing of each release of one product, initially populating each project selectively from its predecessor.

For an individual, I intend that APILoader be deployed using GlassFish Embedded. This obviates the server administration, as the embedded server is created and destroyed by the tool on each tool session. The individual can still use accounts and projects to separate the artefact sets for different pieces of work.

There is potentially a hybrid approach where each engineer runs an embedded GlassFish instance but they choose to share a single (networked) database. The issue is that the ‘database’ in APILoader is distributed, with some data held in a relational database and some held in files associated with the server. So, in this scenario, artefacts held in the database would be shared, but artefacts held by the server would not be (since each engineer has their own embedded server). This scenario doesn’t appear useful as it stands, because the relational database is used to access the file-based artefacts held by the server, and only a subset of file-based artefacts would be reachable by each engineer. It could be made an installation option that the file-based artefacts be held in one repository, independent of the servers. Then all artefacts would be shared. The result would be an installation shared by the team, but with a greater degree of isolation for each engineer, since the server isn’t shared.

Simple Setup with GlassFish Embedded

Running GlassFish Embedded is pretty straightforward – you start GlassFish within a client application and deploy the server application to the running embedded GlassFish. There is very little to do in the server application to cater for being runnable both as a remote server or embedded. (Or maybe there was more than I remember but it is a once-only thing.)

However, there is one major consideration. GlassFish Embedded runs in the same JVM as the client. In remote server mode it doesn’t – it runs in a separate JVM process, often on a remote machine. This has significant implications for static resources. In embedded use, a static resource is shared between the client application and the embedded server. This allows some tempting shortcuts in coding that won’t work in non-embedded deployment.
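This shortcut is easy to demonstrate in plain Java (all names here are invented for illustration): a static registry written by the client is visible to server-side code only because both run in one JVM. Against a remote server, the server has its own copy of the class and sees none of these writes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedConfig {
    // Static, JVM-wide state: the "tempting shortcut".
    private static final Map<String, String> SETTINGS = new ConcurrentHashMap<>();

    public static void put(String key, String value) { SETTINGS.put(key, value); }

    public static String get(String key) { return SETTINGS.get(key); }

    public static void main(String[] args) {
        // "Client" code writes a value...
        SharedConfig.put("report.dir", "/tmp/reports");
        // ...and "server" code in the same JVM reads it back. Embedded: works.
        // Remote: the server's own SharedConfig.SETTINGS would still be empty.
        System.out.println(SharedConfig.get("report.dir")); // prints /tmp/reports
    }
}
```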


Benefits of Remote EJBs as a communication method

The APILoader client is a (very) fat GUI. It started life as a web client but I found myself spending inordinate amounts of time on the minutiae of HTML presentation. So now it’s a GUI. As such, communication with the server presents new options. I have chosen to use remote EJBs. These work just as well against a remote or embedded GlassFish server. Once you overcome the issue of making the remote class definitions available to the client application, remote EJBs are pretty straightforward to use. And, with a GUI client, they are simpler to use than http-based messaging. The APILoader server and client communicate complex objects. With EJBs, the serialisation is done automatically. With http-based communication, it would have to be done explicitly via SOAP, XML, or Gson/JSON.

Note that the APILoader client is not an enterprise client. So it isn’t deployed to the server to run, and the EJBs aren’t injected. Instead, the client gets access to the server’s remotely accessible methods by doing context lookup() calls.
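As a sketch (the bean and interface names are invented), a standalone client obtains the remote business interface with a plain JNDI lookup. The `java:global/<app>/<module>/<bean>!<interface>` portable name syntax comes from the EJB specification; only the helper that builds the name is executed here, since the lookup itself needs a running server.

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteEjbClient {

    // Pure string logic: build the portable global JNDI name for a bean.
    static String globalJndiName(String app, String module, String bean, String iface) {
        return "java:global/" + app + "/" + module + "/" + bean + "!" + iface;
    }

    // The actual lookup. It requires a running (embedded or remote) server,
    // so it is deliberately not invoked from main here.
    @SuppressWarnings("unchecked")
    static <T> T lookup(String jndiName) throws NamingException {
        return (T) new InitialContext().lookup(jndiName);
    }

    public static void main(String[] args) {
        // prints java:global/apiloader/apiloader-server/ProjectService!com.example.ProjectServiceRemote
        System.out.println(globalJndiName(
                "apiloader", "apiloader-server", "ProjectService",
                "com.example.ProjectServiceRemote"));
    }
}
```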

Simplified Distribution and Support

The other benefit of GlassFish Embedded is simplified distribution and support for APILoader to those clients that select it. Packaging and distributing APILoader then only has to cater for one brand of server, and one release of that server. On the other hand, support for the server option is easier in bigger teams, because the server environment is usually better understood by infrastructure teams.



by Ian Blavins at September 14, 2023 09:49 PM

How to set a custom initial value for Ids in JPA

by F.Marchioni at September 10, 2023 04:41 PM

In the Java Persistence API (JPA), entities require unique identifiers for database records. JPA provides several strategies for generating these identifiers, such as IDENTITY, SEQUENCE, and TABLE. However, there are cases where you might need to set a custom initial value for these identifiers using the TABLE and SEQUENCE strategies. In this tutorial, we will explore ... Read more

The post How to set a custom initial value for Ids in JPA appeared first on Mastertheboss.


by F.Marchioni at September 10, 2023 04:41 PM

Quarkus CRUD Example with Panache Data

by F.Marchioni at September 10, 2023 02:05 PM

In this tutorial we will learn how to create a REST CRUD application in Quarkus, starting from a Hibernate Panache Entity. We will show two different approaches: in the first one we will create a REST Resources to map the CRUD methods. Then, we will show how to use REST Data Panache to generate automatically ... Read more

The post Quarkus CRUD Example with Panache Data appeared first on Mastertheboss.


by F.Marchioni at September 10, 2023 02:05 PM

Openshift Cheatsheet for DevOps

by F.Marchioni at September 06, 2023 01:05 PM

Whether you’re a beginner exploring OpenShift for the first time or an experienced user looking for quick references, this cheat sheet is designed to provide you with a concise reference of OpenShift commands, concepts, and best practices. From managing pods and services to setting up routes and exploring advanced deployment strategies, we’ve got you covered. Login ... Read more

The post Openshift Cheatsheet for DevOps appeared first on Mastertheboss.


by F.Marchioni at September 06, 2023 01:05 PM

Best Practices for Effective Usage of Contexts and Dependency Injection (CDI) in Java Applications

by Rhuan Henrique Rocha at August 30, 2023 10:55 PM

Looking at the web, we don’t see many articles talking about best practices for Contexts and Dependency Injection. Hence, I have decided to discuss the usage of Contexts and Dependency Injection (CDI) following best practices, providing a comprehensive guide on its implementation.

CDI is a Jakarta EE specification in the Java ecosystem that allows developers to use dependency injection, manage contexts, and inject components in an easier way. The article https://www.baeldung.com/java-ee-cdi defines CDI as follows:

CDI turns DI into a no-brainer process, boiled down to just decorating the service classes with a few simple annotations, and defining the corresponding injection points in the client classes.

If you want to learn the CDI concepts you can read Baeldung’s post and Otavio Santana’s post. Here, in this post, we will focus on the best practices topic.

In fact, CDI is a powerful framework that enables Dependency Injection (DI) and Inversion of Control (IoC). However, we have one question here: how tightly do we want our application to be coupled to the framework? Note that I’m not saying you cannot couple your application to a framework, but you should think about the coupling level and the tradeoffs. For me, coupling an application to a framework is not wrong; doing it without thinking about the coupling level, the cost, and the tradeoffs is.

It is impossible to add a framework to your application without some minimal coupling. Even if your application has no coupling expressed in the code, it probably has behavioral coupling: a behavior in your application depends on a framework’s behavior, and in some cases you cannot guarantee that another framework will provide similar behavior if you switch.

Best Practices for Injecting Dependencies

When writing code in Java, we often create classes that rely on external dependencies to perform their tasks. To achieve this using CDI, we employ the @Inject annotation, which allows us to inject these dependencies. However, it’s essential to be mindful of whether we are making the class overly dependent on CDI for its functionality, as it may limit its usability without CDI. Hence, it’s crucial to carefully consider the tightness of this dependency. As an illustration, let’s examine the code snippet below. Here, we encounter a class that is tightly coupled to CDI in order to carry out its functionality.

public class ImageRepository {
    @Inject
    private StorageProvider storageProvider;

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

As you can see, the ImageRepository class has a dependency on StorageProvider, which is injected via a CDI annotation. However, the storageProvider field is private and we have neither a setter method nor a constructor that allows us to pass this dependency in. This means the class cannot work without a CDI context; that is, ImageRepository is tightly coupled to CDI.

This coupling doesn’t provide any benefits for the application; instead, it only causes harm, both to the application itself and to the testing of this class.

Look at the code refactored to reduce the coupling to CDI.

public class ImageRepository implements Serializable {

    private StorageProvider storageProvider;

    @Inject
    public ImageRepository(StorageProvider storageProvider){
        this.storageProvider = storageProvider;
    }

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

As you can see, the ImageRepository class has a constructor that receives the StorageProvider as a constructor argument. This approach follows what is said in the Clean Code book.

“True Dependency Injection goes one step further. The class takes no direct steps to resolve its dependencies; it is completely passive. Instead, it provides setter methods or constructor arguments (or both) that are used to inject the dependencies.”

(from “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin.)

Without a constructor or a setter method, the injection depends on CDI. However, we still have one issue with this class: it has a CDI annotation and depends on CDI to compile. I’m not saying this is always a problem, but it can be, especially if you are writing a framework. Coupling a framework to another framework can be a problem in cases where you want your framework to be usable alongside a mutually exclusive one. In general, frameworks should avoid it. So, how can we fully decouple the ImageRepository class from CDI?

CDI Producer Method

A CDI producer is a source of objects that can be injected by CDI. It is like a factory for a type of object. Look at the code below:

public class ImageRepositoryProducer {

    @Produces
    public ImageRepository createImageRepository(){
        StorageProvider storageProvider = CDI.current().select(StorageProvider.class).get();
        return new ImageRepository(storageProvider);
    }
}

Please note that we are constructing just one object; the StorageProvider object is retrieved from CDI. You should avoid constructing more than one object within a producer method, as this interlinks the construction of these objects and may lead to complications if you intend to give them distinct scopes. You can create a separate producer method to produce the StorageProvider.
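Such a split could look like the sketch below, which uses CDI’s producer-method parameter injection instead of the CDI.current() lookup. This is an illustration only: FileStorageProvider is an invented implementation, ImageRepository and StorageProvider are the classes from this article, and the code only runs inside a CDI container.

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

public class Producers {

    // One producer per object; the StorageProvider bean gets its own scope.
    @Produces
    @ApplicationScoped
    public StorageProvider createStorageProvider() {
        return new FileStorageProvider(); // invented implementation
    }

    // CDI injects the StorageProvider parameter, so no CDI.current() lookup
    // is needed and the two constructions stay independent.
    @Produces
    public ImageRepository createImageRepository(StorageProvider storageProvider) {
        return new ImageRepository(storageProvider);
    }
}
```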

This is the ImageRepository class refactored.

public class ImageRepository implements Serializable {

    private StorageProvider storageProvider;

    public ImageRepository(StorageProvider storageProvider){
        this.storageProvider = storageProvider;
    }

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

Please note that the ImageRepository class does not know anything about CDI and is fully decoupled from it. The CDI-related code lives in ImageRepositoryProducer, which can be extracted to another module if needed.
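The decoupling pays off immediately in tests. The following self-contained sketch (with condensed copies of the article’s classes and an invented in-memory stub) exercises ImageRepository in plain Java, with no container at all:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

interface StorageProvider {
    void save(File file);
}

// Invented test stub: records what was "saved" instead of touching storage.
class InMemoryStorageProvider implements StorageProvider {
    final List<File> saved = new ArrayList<>();
    @Override public void save(File file) { saved.add(file); }
}

// Condensed copy of the article's class: no CDI annotations anywhere.
class ImageRepository {
    private final StorageProvider storageProvider;

    ImageRepository(StorageProvider storageProvider) {
        this.storageProvider = storageProvider;
    }

    void saveImage(File image) {
        storageProvider.save(image);
    }
}

public class ImageRepositoryDemo {
    public static void main(String[] args) {
        InMemoryStorageProvider storage = new InMemoryStorageProvider();
        new ImageRepository(storage).saveImage(new File("logo.png"));
        System.out.println(storage.saved.size()); // prints 1, no CDI involved
    }
}
```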

CDI Interceptor

The CDI interceptor is a very cool feature of CDI that provides a nice CDI-based way to handle cross-cutting tasks (such as auditing). Here is a short definition from my book:

“A CDI interceptor is a class that wraps the call to a method — this method is called target method — that runs its logic and proceeds the call either to the next CDI interceptor if it exists, or the target method.”

(from “Jakarta EE for Java Developers” by Rhuan Rocha.)

The purpose of this article is not to discuss what a CDI interceptor is, but to discuss CDI best practices. So if you want to read more about CDI interceptor, check out the book Jakarta EE for Java Developers.

As said, the CDI interceptor is very interesting. I am quite fond of this feature and have incorporated it into numerous projects. However, using this feature comes with certain trade-offs for the application.

When you use a CDI interceptor you couple the class to CDI, because you must annotate the class with a custom annotation that is an interceptor binding. Look at the example below, shown in the Jakarta EE for Java Developers book:

@ApplicationScoped
public class SecuredBean {

   @Authentication
   public String generateText(String username) throws AuthenticationException {
       return "Welcome " + username;
   }
}

As you can see, we must define a scope, since it must be a bean managed by CDI, and we must annotate the class with the interceptor binding. Hence, if you eliminate CDI from your application, the interceptor’s logic won’t execute and the class won’t compile. With this, your application has a behavioral coupling and a dependency on the CDI jar in order to compile.

As said, this is not necessarily bad; however, you should consider whether it is a problem in your context.
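For completeness, here is a sketch of what the @Authentication binding and its interceptor could look like. The names follow the book’s example, but the body of the check is invented; the code compiles only against the CDI APIs, which is exactly the coupling discussed above.

```java
import jakarta.interceptor.AroundInvoke;
import jakarta.interceptor.Interceptor;
import jakarta.interceptor.InterceptorBinding;
import jakarta.interceptor.InvocationContext;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The custom annotation that binds target methods to the interceptor.
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface Authentication {}

// The interceptor wraps each call to an @Authentication target method.
@Authentication
@Interceptor
class AuthenticationInterceptor {

    @AroundInvoke
    public Object authenticate(InvocationContext ctx) throws Exception {
        // Invented check: verify the caller before proceeding to the target method.
        if (!isAuthenticated()) {
            throw new SecurityException("Caller is not authenticated");
        }
        return ctx.proceed();
    }

    private boolean isAuthenticated() {
        return true; // placeholder for a real authentication check
    }
}
```

Note that in a real application the interceptor also has to be enabled, either in beans.xml or via a @Priority annotation.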

CDI Event

The CDI Event is a great feature within the CDI framework that I have employed extensively in various applications. This functionality provides an implementation of the Observer Pattern, enabling us to emit events that are then observed by observers who execute tasks asynchronously. However, if we add CDI code inside our class to emit events, we couple the class to CDI. Again, this is not an error, but you should be sure it is not a problem for your solution. Look at the example below.

import jakarta.enterprise.event.Event;

public class User{

 private Event<Email> emailEvent;

 public User(Event<Email> emailEvent){
   this.emailEvent = emailEvent;
 }

 public void register(){
   //logic
   emailEvent.fireAsync(Email.of(from, to, subject, content));
 }
}

Note that we are receiving the Event class, which comes from CDI, to emit the event. This means the class is coupled to CDI and depends on it to work. One way to avoid this is to create your own class to emit the event and abstract away the mechanism (CDI or otherwise) that actually emits it. Look at the example below.

import net.rhuan.example.EventEmitter;

public class User {

  private EventEmitter<Email> emailEventEmitter;

  public User(EventEmitter<Email> emailEventEmitter){
    this.emailEventEmitter = emailEventEmitter;
  }

  public void register(){
    //logic
    emailEventEmitter.emit(Email.of(from, to, subject, content));
  }
}

Now your class is agnostic to the emitter of the event. You can use CDI or another mechanism, depending on the EventEmitter implementation.
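A minimal sketch of this abstraction (spelled EventEmitter here; all implementation details are invented): the domain class depends only on the interface, a CDI-backed implementation would simply delegate to Event.fireAsync, and a plain in-memory implementation shows the approach working with no CDI at all.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// The framework-agnostic abstraction the domain code depends on.
interface EventEmitter<T> {
    void emit(T event);
}

// Plain-Java implementation: hands each event to registered observers.
// A CDI-backed variant would instead wrap jakarta.enterprise.event.Event
// and call fireAsync(event) in emit().
class InMemoryEventEmitter<T> implements EventEmitter<T> {
    private final List<Consumer<T>> observers = new ArrayList<>();

    void subscribe(Consumer<T> observer) { observers.add(observer); }

    @Override public void emit(T event) { observers.forEach(o -> o.accept(event)); }
}

public class EventEmitterDemo {
    public static void main(String[] args) {
        InMemoryEventEmitter<String> emitter = new InMemoryEventEmitter<>();
        emitter.subscribe(email -> System.out.println("sending: " + email));
        emitter.emit("welcome@example.com"); // prints: sending: welcome@example.com
    }
}
```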

Conclusion

CDI is an amazing Jakarta EE specification, widely used in many Java frameworks and applications. Carefully determining the degree of integration between our application and the framework is very important. This intentional decision helps proactively mitigate challenges as the solution evolves, especially when developing a framework.

If you have a question or want to share your thoughts, feel free to add comments or send me messages about it. 🙂


by Rhuan Henrique Rocha at August 30, 2023 10:55 PM

Are Java Application Servers Dead?

by Alexius Dionysius Diakogiannis at August 27, 2023 09:50 PM

In the past, application servers were essential for running Java applications. They provided a number of features that were necessary for complex applications, such as:

  • Dependency management
  • Transaction management
  • Security
  • Caching
  • Messaging

However, with the rise of microservices, many developers are moving away from application servers.

As a result, many developers believe that application servers are no longer necessary for microservices. However, there are still a number of reasons why application servers can be beneficial for Java applications, even in a microservices architecture.

Benefits of not using an application server

Configuring and running one application server for each service can lead to complex configuration and maintenance. By using an embedded server we benefit from:

  • Faster deployment.
  • Easier continuous integration tasks.
  • Minimum maintenance effort.

But can we use Application Servers for serving and managing Microservices? Indeed we can!

Benefits of Application Servers for Microservices

Centralized configuration: Application servers provide a centralized configuration for all of your microservices. This can make it easier to manage your applications and ensure that they are all configured correctly.

Service discovery: Application servers can provide service discovery for your microservices. This means that microservices can find each other without having to know each other’s addresses.

Load balancing: Application servers can load balance your microservices. This can help to improve the performance of your applications by distributing traffic evenly across all of your microservices.

Health monitoring: Application servers can monitor the health of your microservices. This can help you to identify and fix problems before they impact your users.

In addition to the benefits mentioned above, application servers can also provide a number of other features that can be helpful for Java applications, such as:

Security: Application servers can provide a number of security features, such as authentication, authorization, and encryption.

Performance: Application servers can be optimized for performance, which can help to improve the speed and responsiveness of your applications.

Scalability: Application servers can be scaled horizontally to support more users and traffic.

Reliability: Application servers can be made highly available to ensure that your applications are always up and running.

If you are looking for a robust and scalable platform for running your Java applications, then an application server is a good option to consider.

The History of Application Servers

The first application servers were developed in the mid-1990s. They were designed to provide a platform for running Java applications in a distributed environment. The first application servers were monolithic, meaning that they were all-in-one solutions that provided a wide range of features.

As Java applications became more complex, the need for specialized application servers arose. For example, some application servers focused on performance, while others focused on security. In the early 2000s, the Java EE platform was standardized, which led to the development of a number of open source application servers.

The Rise of Microservices

Microservices are a relatively new architectural style for developing software applications. In a microservices architecture, an application is broken down into a number of small, independent services. Each service is responsible for a specific task, and the services communicate with each other over a network.

Microservices have a number of advantages over traditional monolithic applications. They are more scalable, flexible, and agile. They are also easier to develop and maintain.

The rise of microservices has led to a decline in the use of traditional application servers. Many developers believe that microservices do not need an application server, as each service can be deployed and managed independently.

Conclusion

So, are application servers dead? Not quite!

While they are not as widely used as they once were, they still have a number of advantages that make them a good choice for certain types of applications. They can provide a number of benefits that can make it easier to develop, deploy, and manage your Java applications. If you are considering moving to a microservices architecture, you should carefully consider whether or not an application server is right for you.


by Alexius Dionysius Diakogiannis at August 27, 2023 09:50 PM

Quarkus vs. Micronaut: A Comparative Analysis

by F.Marchioni at August 26, 2023 05:46 PM

In the world of modern Java microservices, developers are faced with a variety of frameworks and tools to choose from. Two popular options in this space are Quarkus and Micronaut. Both of these frameworks offer unique features and advantages, making the choice between them a significant decision for developers. In this article, we’ll delve into ... Read more

The post Quarkus vs. Micronaut: A Comparative Analysis appeared first on Mastertheboss.


by F.Marchioni at August 26, 2023 05:46 PM

How to create Jobs in Kubernetes

by F.Marchioni at August 22, 2023 12:52 PM

This article discusses how to automate tasks in Kubernetes and OpenShift using Jobs and Cron Jobs. We will show some examples of how to create and manage them. Then, we will discuss best practices for using Jobs in Kubernetes. In a Kubernetes environment, you can use Jobs to automate tasks that need to run ... Read more

The post How to create Jobs in Kubernetes appeared first on Mastertheboss.


by F.Marchioni at August 22, 2023 12:52 PM

TimezoneStorageType – Hibernate’s improved timezone mapping

by Thorben Janssen at August 10, 2023 05:55 AM

Working with timestamps with timezone information has always been a struggle. Since Java 8 introduced the Date and Time API, OffsetDateTime and ZonedDateTime have become the most obvious and commonly used types to model a timestamp with timezone information. And you might expect that choosing one of them should be the only thing you need...

The post TimezoneStorageType – Hibernate’s improved timezone mapping appeared first on Thorben Janssen.


by Thorben Janssen at August 10, 2023 05:55 AM

Upgrade to Jakarta EE 10 – part 3: Transform incompatible Dependencies

by Ondro Mihályi at August 04, 2023 05:43 AM

In this article, we’ll address upgrading individual libraries used by your applications. This solves two problems. First, it improves the build time of your application during development, by removing the need to transform the final binary after each build. Second, it solves compilation problems you can face with some libraries after you adjust the source code of your application for Jakarta EE 10.

Earlier, we described how to transform an application binary, e.g. a WAR file, to make it compatible with Jakarta EE 10 so that it can be deployed to GlassFish 7. But this transformation is slow and needs to be done with every build. This doesn’t make developers happy because it increases the time to build and deploy the application after they made changes in the source code.

We also described how to automate transforming the application’s source code to compile it with the Jakarta EE 10 APIs. However, after doing this, there’s a high chance your application won’t compile. This is because some of the libraries used by your application may not be compatible with Jakarta EE 10. They are transformed after the application is built but that’s too late.

In this article, we’ll explain a few simple approaches to make sure that external libraries are compatible with Jakarta EE 10, so that everything compiles correctly and no transformation is necessary after each build.

What’s the problem, really?

Libraries in your application fall into these categories:

  • The library doesn’t use Java EE APIs at all – no problem here, just continue using it as before
  • There’s a version of the library compatible with Jakarta EE 10 and you can update to this version – just update it.
  • The library doesn’t have a version compatible with Jakarta EE 10 or you can’t update it for some reason. It needs to be transformed.
  • The library depends on features removed in Jakarta EE 10 and cannot be updated to a version compatible with Jakarta EE 10. Tough luck, no simple solution here, though this category is very rare.

While, obviously, the first category doesn’t cause any problems, libraries that use APIs not compatible with Jakarta EE 10 need some treatment. Libraries that have a version compatible with Jakarta EE 10 can be simply updated to this version. We’ll describe some examples of such widely used libraries below.

Some other libraries have not yet been updated for Jakarta EE 10. Or you cannot afford to update them to a new version for whatever reason (risk of regression, missing feature in the new version, etc.). Then you’ll need to transform them yourself into a version compatible with Jakarta EE 10. This can be done outside of your application project so that you transform the libraries once and then use the transformed library when building the application.

In some rare cases, you may come across a library, which is not compatible with Jakarta EE 10 even after the transformation, because it depends on some old APIs removed in Jakarta EE 10. Each such library may require specific treatment, and describing the techniques which can be used would be for another whole article. Therefore we’ll not address these rare cases now.

Update libraries to a Jakarta EE 10 version

Most of the libraries widely used in enterprise projects already support Jakarta EE 10 so it’s easy to just update them to a newer version.

For example, to update the Hibernate library, just increase the version number. Here’s an example for the version 6.2.7.Final, the latest version at this moment:

<dependency>
  <groupId>org.hibernate.orm</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>6.2.7.Final</version>
</dependency>

However, some libraries maintain support for both Jakarta EE 9+ and older Jakarta EE and Java EE versions. Those libraries have two variants for the same library version. In that case, their Maven artifact for Jakarta EE 9+ is usually published with the same coordinates as before but with the jakarta classifier. You need to specify an additional <classifier>jakarta</classifier> configuration in the dependency definition so that the correct variant is downloaded and used in your application. For example, to update the Primefaces library to version 13 and the Jakarta EE 10 variant (jakarta classifier):

<dependency>
  <groupId>org.primefaces</groupId>
  <artifactId>primefaces</artifactId>
  <version>13.0.0</version>
  <classifier>jakarta</classifier>
</dependency>

Some other libraries also provide both variants but the Maven artifact is published under different coordinates. One example is the Jackson library, which publishes the artifacts in a completely different groupId and artifactId which contain jakarta in the name:

<dependency>
  <groupId>com.fasterxml.jackson.jakarta.rs</groupId>
  <artifactId>jackson-jakarta-rs-json-provider</artifactId>
  <version>2.15.2</version>
</dependency>

Popular libraries which support Jakarta EE 9+

Here are some examples of popular libraries that support Jakarta EE 9+ in their recent versions (either the main artifact or variant with the jakarta classifier):

  • Hibernate: org.hibernate.orm:hibernate-core
  • Omnifaces: org.omnifaces:omnifaces
  • Jackson: com.fasterxml.jackson.jakarta.rs:jackson-jakarta-rs-json-provider
  • Apache Deltaspike: org.apache.deltaspike.modules (all artifacts with the jakarta classifier)
  • Primefaces: org.primefaces:primefaces (jakarta classifier)
  • Spring Framework 6: https://spring.io/
  • Spring Boot 3: https://spring.io/projects/spring-boot

Transform libraries for Jakarta EE 10

When it's not possible to upgrade a library, we can transform individual libraries with the Eclipse Transformer, using a technique similar to transforming the whole application WAR, which we explained in a previous article. The Eclipse Transformer can also be applied to individual library JARs, whose transformed versions are then used during the build. However, in modern Maven or Gradle based projects this isn't natural because of transitive dependencies: there's currently no tooling that would properly transform all the transitive dependencies and install them correctly into a local repository. Therefore we'll use a trick: we'll merge all JARs that need to be transformed, together with their transitive dependencies, into a single JAR (Uber JAR), transform it, and install this single JAR into a Maven repository. Then we'll change the application to depend on this single artifact instead of all the individual artifacts.
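
For intuition, the Transformer's jakartaDefaults rule is essentially a set of package-rename mappings (javax.servlet to jakarta.servlet, and so on) applied to class files, descriptors and text resources. The following is only a simplified conceptual sketch in plain Java, not the actual Transformer implementation, and the rule map is a tiny illustrative subset:

```java
// Simplified conceptual sketch of the javax -> jakarta package renaming
// that the Eclipse Transformer applies (NOT the real Transformer code).
import java.util.Map;

public class RenameSketch {

    // A tiny illustrative subset of the jakartaDefaults rename rules.
    static final Map<String, String> RULES = Map.of(
            "javax.servlet", "jakarta.servlet",
            "javax.persistence", "jakarta.persistence",
            "javax.ws.rs", "jakarta.ws.rs");

    // Rename a fully qualified class name if it matches a rule prefix.
    static String transform(String className) {
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            if (className.startsWith(rule.getKey() + ".")) {
                return rule.getValue() + className.substring(rule.getKey().length());
            }
        }
        return className; // unaffected packages are left as-is
    }

    public static void main(String[] args) {
        System.out.println(transform("javax.servlet.http.HttpServlet"));
        System.out.println(transform("java.util.List"));
    }
}
```

The real tool does this consistently across bytecode, service loader files and configuration, which is why it can fix a binary JAR without access to its sources.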

First, here's an example list of dependencies from the pom.xml file of a Maven project that aren't compatible with Jakarta EE 10:

File pom.xml in the “application” project:

<groupId>ee.omnifish</groupId>
<artifactId>jakarta-app</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>

<dependencies>
 
  <!-- Jakarta EE 8 API -->
  <dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
  </dependency>

  <!-- incompatible with Jakarta EE 10 API -->
  <dependency>
    <groupId>net.sf.jasperreports</groupId>
    <artifactId>jasperreports</artifactId>
    <version>6.20.1</version>
  </dependency>
  <dependency>
    <groupId>org.quartz-scheduler</groupId>
    <artifactId>quartz</artifactId>
    <version>2.3.2</version>
  </dependency>

</dependencies>

With this as a starting point, we'll create a new Maven project next to our existing one. For convenience, we can make both projects modules of a common Maven POM project so that we can build everything together if needed. We'll move all the dependencies that need to be transformed into this new project, remove them from the original project, and replace them there with a single dependency on the new project.

The pom.xml of the new project would look like this:

Snippet of the pom.xml file in a new “transform-dependencies” project:

<groupId>ee.omnifish.transformed</groupId>
<artifactId>transform-dependencies</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>

<dependencies>
    <!-- dependencies not compatible with Jakarta EE 10+
    - will be transformed in this JAR artifact -->
    <dependency>
        <groupId>net.sf.jasperreports</groupId>
        <artifactId>jasperreports</artifactId>
        <version>6.20.1</version>
    </dependency>
    <dependency>
        <groupId>org.quartz-scheduler</groupId>
        <artifactId>quartz</artifactId>
        <version>2.3.2</version>
    </dependency>
</dependencies>

In the final WAR file, instead of having each JAR file separately in the WAR, like this:

  • WEB-INF
    • lib
      • jasperreports.jar
      • quartz.jar

We will end up with a single transformed JAR, like this:

  • WEB-INF
    • lib
      • transform-dependencies.jar

This transform-dependencies.jar file will contain the classes and resources from all the merged artifacts.

In order to achieve this, we can use the Maven Shade plugin, which merges multiple JAR files into a single artifact produced by the project:

Snippet of the pom.xml file in a new “transform-dependencies” project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.5.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <shadedClassifierName>jakarta</shadedClassifierName>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

This plugin takes all the dependencies defined in the project, merges them into a single Uber JAR and attaches it to the project as an artifact with the jakarta classifier. It would be nicer to attach the JAR as the main artifact, without a classifier, but that would conflict with the Transformer plugin we need to run on the Uber JAR. Therefore we use the extra jakarta classifier here, and we'll also need to use this classifier in the original project when we define the dependency on this new project.

Now we add the Transformer plugin to transform the Uber JAR to make it compatible with Jakarta EE 9+. We need to configure the Transformer plugin with the following:

  • execute the goal “jar”
  • use the “jakartaDefaults” rule to apply transformations for Jakarta EE 9
  • define the artifact with the classifier "jakarta" produced by the Maven Shade plugin; it has the same groupId, artifactId and version as the current project

Snippet of the pom.xml file in a new “transform-dependencies” project:

<plugin>
  <groupId>org.eclipse.transformer</groupId>
  <artifactId>transformer-maven-plugin</artifactId>
  <version>0.5.0</version>
  <executions>
    <execution>
      <id>jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <rules>
      <jakartaDefaults>true</jakartaDefaults>
    </rules>
    <artifact>
      <groupId>${project.groupId}</groupId>
      <artifactId>${project.artifactId}</artifactId>
      <classifier>jakarta</classifier>
    </artifact>
  </configuration>
</plugin>

We can now build the transform-dependencies project with the standard Maven command:

mvn install

The pom.xml file of the original project should now depend only on this new artifact:

<groupId>ee.omnifish</groupId>
<artifactId>jakarta-app</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>war</packaging>

<dependencies>
 
  <!-- Jakarta EE 8 API -->
  <dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
  </dependency>

  <!-- Uber JAR artifact that includes classes from all 
        the dependencies that need to be transformed -->
  <dependency>
    <groupId>ee.omnifish.transformed</groupId>
    <artifactId>transform-dependencies</artifactId>
    <version>1.0-SNAPSHOT</version>
    <classifier>jakarta</classifier>
  </dependency>

</dependencies>

When we now build the original application project, it will use the transformed Uber JAR and add it to the final WAR instead of all the individual (untransformed) JARs. We can then deploy this WAR to a Jakarta EE 10 application server like GlassFish 7. If none of the original JARs depends on APIs removed between Jakarta EE 9 and 10, the application should work as expected!

A full example of this approach is here: https://github.com/OmniFish-EE/upgrading-jakarta-ee-applications/tree/main/javax-jakarta-transform-dependencies-uberjar

Evolve the project in the future

In the future, it's likely that some of the libraries we transform in the transform-dependencies project will become compatible with Jakarta EE 10. We can then easily update them and stop transforming them: we just add the dependency back to the original "application" pom.xml file with a new version number and remove it from the dependencies of the transform-dependencies project. The application will thus start depending on an official version of the library, without transformation. Eventually, if you're able to update all of the libraries, you can discard the transform-dependencies project and keep a single native Jakarta EE 10 project. The helper transform-dependencies project is only a temporary solution until you can update all the libraries to Jakarta EE 10.

Conclusion

In this article, we described the last piece of the migration process, which can be fully automated with very little effort. If you combine all the steps described here and in the previous articles in this series, you'll be able to migrate your older Java EE project to Jakarta EE 9. If your project doesn't depend on any APIs dropped in Jakarta EE 10, your migration is complete and you can start using the new features in Jakarta EE 10. If you're less fortunate, there's still some work to do to refactor your code, or to adjust libraries that use the removed APIs so that they use the newer alternative APIs in Jakarta EE 10. Most of these refactorings can be automated with custom Eclipse Transformer rules, but some are more complicated and hard to automate. We'll deal with them in future articles.

Resources

A Github repository with sample applications: https://github.com/OmniFish-EE/upgrading-jakarta-ee-applications/#readme


by Ondro Mihályi at August 04, 2023 05:43 AM

How to persist additional attributes for an association with JPA and Hibernate

by Thorben Janssen at May 29, 2023 04:35 PM

The post How to persist additional attributes for an association with JPA and Hibernate appeared first on Thorben Janssen.

JPA and Hibernate allow you to define associations between entities with just a few annotations, and you don’t have to care about the underlying table model in the database. Even join tables for many-to-many associations are hidden behind a @JoinTable annotation, and you don’t need to model the additional table as an entity. That changes...



by Thorben Janssen at May 29, 2023 04:35 PM

Enterprise Kotlin - Kotlin and Jakarta EE

May 25, 2023 12:00 AM

Note: this blog post is also published on the Computas blog. [Image: The Jakarta EE logo, by the Eclipse Foundation]
If you look at the documentation on the Kotlin web page (

May 25, 2023 12:00 AM

The Jakarta EE 2023 Developer Survey is now open!

by Tanja Obradovic at March 29, 2023 09:24 PM

The Jakarta EE 2023 Developer Survey is now open!

It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25th.


I would like to invite you to take this year's six-minute survey and get the chance to share your thoughts and ideas for future Jakarta EE releases, help us discover the uptake of the latest Jakarta EE versions, and identify trends that inform industry decision-makers.

Please share the survey link and reach out to your contacts: Java developers, architects and stakeholders in the enterprise Java ecosystem, and invite them to participate in the 2023 Jakarta EE Developer Survey!

 

Tanja Obradovic Wed, 2023-03-29 17:24

by Tanja Obradovic at March 29, 2023 09:24 PM

What is Apache Camel and how does it work?

by Rhuan Henrique Rocha at February 16, 2023 11:14 PM

In this post, I will talk to you about what Apache Camel is. It is a brief introduction before I start posting practical content. So, let's understand what this framework is.

Apache Camel is an open source Java integration framework that allows different applications to communicate with each other efficiently. It provides a platform for integrating heterogeneous software systems. Camel is designed to make application integration easy, simplifying the complexity of communication between different systems.

Apache Camel is written in Java and can be run on a variety of platforms, including Jakarta EE application servers and OSGi-based application containers, and it can run inside cloud environments using Spring Boot or Quarkus. Camel also supports a wide range of network protocols and message formats, including HTTP, FTP, SMTP, JMS, SOAP, XML, and JSON.

Camel uses Enterprise Integration Patterns (EIP) to define the different forms of integration. EIP is a set of design patterns commonly used in system integration. Camel implements many of these patterns, making it a powerful tool for integration solutions.

Additionally, Camel has a set of components that allow it to integrate with different systems. The components can be used to access different resources, such as databases, web services, and message systems. Camel also supports content-based routing, which means it can route messages based on their content.
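
To make the content-based routing idea concrete, here is a framework-free sketch in plain Java. The class and channel names are invented for illustration; real Camel expresses this with its RouteBuilder DSL (from(...).choice().when(...).otherwise(...)):

```java
// Framework-free sketch of the content-based router EIP that Camel
// implements: inspect each message and send it to a matching channel.
// Names here are invented for illustration; this is not the Camel API.
import java.util.ArrayList;
import java.util.List;

public class ContentBasedRouterSketch {

    final List<String> orders = new ArrayList<>();     // "orders" channel
    final List<String> invoices = new ArrayList<>();   // "invoices" channel
    final List<String> deadLetter = new ArrayList<>(); // unmatched messages

    // Route a message by its content, like Camel's choice()/when()/otherwise().
    void route(String message) {
        if (message.startsWith("<order>")) {
            orders.add(message);
        } else if (message.startsWith("<invoice>")) {
            invoices.add(message);
        } else {
            deadLetter.add(message);
        }
    }

    public static void main(String[] args) {
        ContentBasedRouterSketch router = new ContentBasedRouterSketch();
        router.route("<order>42</order>");
        router.route("<invoice>7</invoice>");
        router.route("hello");
        System.out.println(router.orders.size() + " "
                + router.invoices.size() + " " + router.deadLetter.size());
    }
}
```

Camel's value is that the predicates, channels and error handling in such a router are declarative and pluggable rather than hand-written as above.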

Camel is highly configurable and extensible, allowing developers to customize its functionality to their needs. It also supports the creation of integration routes at runtime, which means that routes can be defined and changed without the need to restart the system.

In summary, Camel is a powerful and flexible tool for software system integration. It allows different applications to communicate efficiently and effectively, simplifying the complexity of system integration. Camel is a reliable and widely used framework that can help improve the efficiency and effectiveness of system integration in a variety of environments.

If you want to start using this framework, you can access the documentation on the project site. This is my first post about Apache Camel, and I will post more practical content about this amazing framework.


by Rhuan Henrique Rocha at February 16, 2023 11:14 PM

Jersey 3.1.1 released – focused on performance

by Jan at February 03, 2023 11:50 PM

Jersey 2.38 (Jakarta REST 2.1 compatible release) and Jersey 3.0.9 (Jakarta REST 3.0 compatible) have been released before Christmas. Jersey 3.1.1 is aligned with these releases. Apart from minor features (JDK 20 support, less repetitive warnings) and fixes, the big … Continue reading

by Jan at February 03, 2023 11:50 PM

Jakarta EE track at Devnexus 2023!!!!

by Tanja Obradovic at January 31, 2023 08:25 PM

Jakarta EE track at Devnexus 2023!!!!

We have great news to share with you!

For the very first time at Devnexus 2023 we will have Jakarta EE track with 10 sessions and we will take this opportunity, to whenever possible, celebrate all we have accomplished in Jakarta EE community.

Jakarta EE track sessions

You may not be aware, but this year (yes, time flies!!) marks 5 years of Jakarta EE, so we will be celebrating throughout the year! Devnexus 2023 looks like a great place to mark this milestone as well! So stay tuned for details, but in the meantime please help us out: register for the event, come to see us and spread the word.

Help us out in spreading the word about Jakarta EE track @Devnexus 2023, just re-share posts you see from us on various social platforms!
To make it easier for you to spread the word on socials, we have also prepared a social kit document to help promote the Jakarta EE track @Devnexus 2023, its sessions and speakers. The social kit document will be updated with missing sessions and speakers, so visit often and promote far and wide.

Note: Organizers wanted to do something for people impacted by the recent tech layoffs, and decided to offer a 50% discount for any conference pass (valid for a limited time). Please use code DN-JAKARTAEE for @JakartaEE Track to get additional 20% discount!

In addition, there will be an IBM workshop highlighting Jakarta EE; look for "Thriving in the cloud: Venturing beyond the 12 factors". Please use the promo code ($100 off) JAKARTAEEATDEVNEXUS that the organizers prepared for you (valid for a limited time).

I hope to see you all at Devnexus 2023!

Tanja Obradovic Tue, 2023-01-31 15:25

by Tanja Obradovic at January 31, 2023 08:25 PM

Jakarta EE and MicroProfile at EclipseCon Community Day 2022

by Reza Rahman at November 19, 2022 10:39 PM

Community Day at EclipseCon 2022 was held in person on Monday, October 24 in Ludwigsburg, Germany. Community Day has always been a great event for Eclipse working groups and project teams, including Jakarta EE/MicroProfile. This year was no exception. A number of great sessions were delivered from prominent folks in the community. The following are the details including session materials. The agenda can still be found here. All the materials can be found here.

Jakarta EE Community State of the Union

The first session of the day was a Jakarta EE community state of the union delivered by Tanja Obradovic, Ivar Grimstad and Shabnam Mayel. The session included a quick overview of Jakarta EE releases, how to get involved in the work of producing the specifications, a recap of the important Jakarta EE 10 release and as well as a view of what’s to come in Jakarta EE 11. The slides are embedded below and linked here.

Jakarta Concurrency – What’s Next

Payara CEO Steve Millidge covered Jakarta Concurrency. He discussed the value proposition of Jakarta Concurrency, the innovations delivered in Jakarta EE 10 (including CDI based @Asynchronous, @ManagedExecutorDefinition, etc) and the possibilities for the future (including CDI based @Schedule, @Lock, @MaxConcurrency, etc). The slides are embedded below and linked here. There are some excellent code examples included.

Jakarta Security – What’s Next

Werner Keil covered Jakarta Security. He discussed what’s already done in Jakarta EE 10 (including OpenID Connect support) and everything that’s in the works for Jakarta EE 11 (including CDI based @RolesAllowed). The slides are embedded below and linked here.

Jakarta Data – What’s Coming

IBM’s Emily Jiang kindly covered Jakarta Data. This is a brand new specification aimed towards Jakarta EE 11. It is a higher level data access abstraction similar to Spring Data and DeltaSpike Data. It encompasses both Jakarta Persistence (JPA) and Jakarta NoSQL. The slides are embedded below and linked here. There are some excellent code examples included.

MicroProfile Community State of the Union

Emily also graciously delivered a MicroProfile state of the union. She covered what was delivered in MicroProfile 5, including alignment with Jakarta EE 9.1. She also discussed what’s coming soon in MicroProfile 6 and beyond, including very clear alignment with the Jakarta EE 10 Core Profile. The slides are embedded below and linked here. There are some excellent technical details included.

MicroProfile Telemetry – What’s Coming

Red Hat’s Martin Stefanko covered MicroProfile Telemetry. Telemetry is a brand new specification being included in MicroProfile 6. The specification essentially supersedes MicroProfile Tracing and possibly MicroProfile Metrics too in the near future. This is because the OpenTracing and OpenCensus projects merged into a single project called OpenTelemetry. OpenTelemetry is now the de facto standard defining how to collect, process, and export telemetry data in microservices. It makes sense that MicroProfile moves forward with supporting OpenTelemetry. The slides are embedded below and linked here. There are some excellent technical details and code examples included.

See You There Next Time?

Overall, it was an honor to organize the Jakarta EE/MicroProfile agenda at EclipseCon Community Day one more time. All speakers and attendees should be thanked. Perhaps we will see you at Community Day next time? It is a great way to hear from some of the key people driving Jakarta EE and MicroProfile. You can attend just Community Day even if you don’t attend EclipseCon. The fee is modest and includes lunch as well as casual networking.


by Reza Rahman at November 19, 2022 10:39 PM

JFall 2022

November 04, 2022 09:56 AM

An impression of JFall by yours truly.

keynote

Sold out!

Packed room!

A very nice first keynote by Saby Sengupta about the path to transform.
He is a really nice storyteller. He had us going.

Dutch people, wooden shoes, wooden hat, would not listen

  • Saby

lol

Get the answers to three "why" questions. If the answers stop after the first why, it may not be a good idea.

This great first keynote was followed by the well-known Venkat Subramaniam with The Art of Simplicity.

The question is not what can we add? But What can we remove?

Simple fails less

Simple is elegant

All in all, a great keynote! Loved it.

Design Patterns in the light of Lambdas

By Venkat Subramaniam

The GOF are kind of the grandparents of our industry. The worst thing they have done is write the damn book.
— Venkat

The quote is in the context that writing down grandma's fantastic recipe does not work, as the recipe is based on grandma's skill and not the exact amounts of the ingredients.

The cleanup is the responsibility of the Resource class. Much better than asking developers to take care of it. It will be forgotten!

The more powerful a language becomes the less we need to talk about patterns. Patterns become practices we use. We do not need to put in extra effort.

I love his way of presenting, but this is one of those times - I guess - that he is hampered by his own success. The talk did not go deep into the material; it covered about five not-too-difficult subjects. I missed his speed and depth.

Still a great talk though.

lunch

Was actually very nice!

NLJUG update keynote

The Java Magazine was mentioned; we (as editors) had to shout for that!

Please contact me (@ivonet) if you have ambitions to either be an author or maybe even as a fellow editor of the magazine. We are searching for a new Editor now.

Then the voting for the Innovation Awards.

I kinda missed the next keynote by ING because I was playing with a Rubik's cube, and I did not really like his talk.

jakarta EE 10 platform

by Ivar Grimstad

Ivar talks about the specification of Jakarta EE.

To create a lite version of CDI it is possible to start doing things at build time and facilitate other tools like GraalVM and Quarkus.

He gives nice demos on how to migrate code to work in the jakarta namespace.

To start your own Jakarta EE application, just go to start.jakarta.ee and follow the very simple UI instructions.

I am very proud to be the creator of that UI. Thanks, Ivar for giving me a shoutout for that during your talk. More cool stuff will follow soon.

Be prepared to do some namespace changes when moving from Java EE 8 to Jakarta EE.

All slides here

conclusion

I had a fantastic day. For me, it is mainly about the community and seeing all the people I know in the community. I totally love the vibe of the conference and I think it is one of the best organized venues.

See you at JSpring.

Ivo.


November 04, 2022 09:56 AM

How to make your own scraper and then forget about it?

October 28, 2022 12:00 AM

So you've found a web page that changes frequently, and you want to follow the changes, but they don't provide a changelog? Then you might want to track the changes yourself. I've done that on a couple of pages - most notably tracking how bonus point awards change on the Norwegian bonus point system Viatrumf. Feel free to check it out. This solution

October 28, 2022 12:00 AM

Survey Says: Confidence Continues to Grow in the Jakarta EE Ecosystem

by Mike Milinkovich at September 26, 2022 01:00 PM

The results of the 2022 Jakarta EE Developer Survey are very telling about the current state of the enterprise Java developer community. They point to increased confidence about Jakarta EE and highlight how far Jakarta EE has grown over the past few years.

Strong Turnout Helps Drive Future of Jakarta EE

The fifth annual survey is one of the longest running and best-respected surveys of its kind in the industry. This year’s turnout was fantastic: From March 9 to May 6, a total of 1,439 developers responded. 

This is great for two reasons. First, obviously, these results help inform the Java ecosystem stakeholders about the requirements, priorities and perceptions of enterprise developer communities. The more people we hear from, the better picture we get of what the community wants and needs. That makes it much easier for us to make sure the work we’re doing is aligned with what our community is looking for. 

The other reason is that it helps us better understand how the cloud native Java world is progressing. By looking at what community members are using and adopting, what their top goals are and what their plans are for adoption, we can better understand not only what we should be working on today, but tomorrow and for the future of Jakarta EE. 

Findings Indicate Growing Adoption and Rising Expectations

Some of the survey’s key findings include:

  • Jakarta EE is the basis for the top frameworks used for building cloud native applications.
  • The top three frameworks for building cloud native applications, respectively, are Spring/Spring Boot, Jakarta EE and MicroProfile, though Spring/Spring Boot lost ground this past year. It’s important to note that Spring/SpringBoot relies on Jakarta EE developments for its operation and is not competitive with Jakarta EE. Both are critical ingredients to the healthy enterprise Java ecosystem. 
  • Jakarta EE 9/9.1 usage increased year-over-year by 5%.
  • Java EE 8, Jakarta EE 8, and Jakarta EE 9/9.1 hit the mainstream with 81% adoption. 
  • While over a third of respondents planned to adopt, or already had adopted Jakarta EE 9/9.1, nearly a fifth of respondents plan to skip Jakarta EE 9/9.1 altogether and adopt Jakarta EE 10 once it becomes available. 
  • Most respondents said they have migrated to Jakarta EE already or planned to do so within the next 6-24 months.
  • The top three community priorities for Jakarta EE are:
    • Native integration with Kubernetes (same as last year)
    • Better support for microservices (same as last year)
    • Faster support from existing Java EE/Jakarta EE or cloud vendors (new this year)

Two of the results, when combined, highlight something interesting:

  • 19% of respondents planned to skip Jakarta EE 9/9.1 and go straight to 10 once it’s available 
  • The new community priority — faster support from existing Java EE/Jakarta EE or cloud vendors — really shows the growing confidence the community has in the ecosystem

After all, you wouldn’t wait for a later version and skip the one that’s already available, unless you were confident that the newer version was not only going to be coming out on a relatively reliable timeline, but that it was going to be an improvement. 

And this growing hunger from the community for faster support really speaks to how far the ecosystem has come. When we release a new version, like when we released Jakarta EE 9, it takes some time for the technology implementers to build the product based on those standards or specifications. The community is becoming more vocal in requesting those implementers to be more agile and quickly pick up the new versions. That’s definitely an indication that developer demand for Jakarta EE products is growing in a healthy way. 

Learn More

If you’d like to learn more about the project, there are several Jakarta EE mailing lists to sign up for. You can also join the conversation on Slack. And if you want to get involved, start by choosing a project, sign up for its mailing list and start communicating with the team.


by Mike Milinkovich at September 26, 2022 01:00 PM

Jakarta EE 10 has Landed!

by javaeeguardian at September 22, 2022 03:48 PM

The Jakarta EE Ambassadors are thrilled to see Jakarta EE 10 being released! This is a milestone release that bears great significance to the Java ecosystem. Jakarta EE 8 and Jakarta EE 9.x were important releases in their own right in the process of transitioning Java EE to a truly open environment in the Eclipse Foundation. However, these releases did not deliver new features. Jakarta EE 10 changes all that and begins the vital process of delivering long pending new features into the ecosystem at a regular cadence.

There are quite a few changes that were delivered – here are some key themes and highlights:

  • CDI Alignment
    • @Asynchronous in Concurrency
    • Better CDI support in Batch
  • Java SE Alignment
    • Support for Java SE 11, Java SE 17
    • CompletionStage, ForkJoinPool, parallel streams in Concurrency
    • Bootstrap APIs for REST
  • Closing standardization gaps
    • OpenID Connect support in Security, @ManagedExecutorDefinition, UUID as entity keys, more SQL support in Persistence queries, multipart/form-data support in REST, @ClientWindowScoped in Faces, pure Java Faces views
    • CDI Lite/Core Profile to enable next generation cloud native runtimes – MicroProfile will likely align with CDI Lite/Jakarta EE Core
  • Deprecation/removal
    • @Context annotation in REST, EJB Entity Beans, embeddable EJB container, deprecated Servlet/Faces/CDI features
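
As a flavor of the Java SE alignment items above, here is a plain Java SE sketch (no Jakarta APIs required) of the CompletionStage and parallel stream primitives that Jakarta Concurrency now integrates with; inside a container, a ManagedExecutorService would supply the threads instead of the common pool:

```java
// Plain Java SE illustration of the async primitives Jakarta EE 10
// Concurrency aligns with: CompletionStage pipelines and parallel streams.
// In a container, a ManagedExecutorService would supply the threads.
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    public static void main(String[] args) {
        CompletableFuture<Integer> total = CompletableFuture
                .supplyAsync(() -> List.of(1, 2, 3, 4))   // async producer
                .thenApply(list -> list.parallelStream()  // parallel stream
                        .mapToInt(Integer::intValue)
                        .sum());                          // non-blocking compose
        System.out.println(total.join()); // block only at the very end
    }
}
```

The point of the alignment is that these familiar Java SE types now work with container-managed thread pools and context propagation.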

While there are many features that we identified in our Jakarta EE 10 Contribution Guide that did not make it yet, this is still a very solid release that everyone in the Java ecosystem will benefit from, including Spring, MicroProfile and Quarkus. You can see here what was delivered, what’s on the way and what gaps still remain. You can try Jakarta EE 10 out now using compatible implementations like GlassFish, Payara, WildFly and Open Liberty. Jakarta EE 10 is proof in the pudding that the community, including major stakeholders, has not only made it through the transition to the Eclipse Foundation but now is beginning to thrive once again.

Many Ambassadors helped make this release a reality such as Arjan Tijms, Werner Keil, Markus Karg, Otavio Santana, Ondro Mihalyi and many more. The Ambassadors will now focus on enabling the community to evangelize Jakarta EE 10 including speaking, blogging, trying out implementations, and advocating for real world adoption. We will also work to enable the community to continue to contribute to Jakarta EE by producing an EE 11 Contribution Guide in the coming months. Please stay tuned and join us.

Jakarta EE is truly moving forward – the next phase of the platform’s evolution is here!


by javaeeguardian at September 22, 2022 03:48 PM

Java Reflections unit-testing

by Vladimir Bychkov at July 13, 2022 09:06 PM

How can you make Java code that uses reflection more stable? Unit tests can help with this problem. This article introduces the annotations @CheckConstructor, @CheckField and @CheckMethod to create such unit tests automatically.

by Vladimir Bychkov at July 13, 2022 09:06 PM

The Power of Enum – Take advantage of it to make your code more readable and efficient

by otaviojava at July 06, 2022 06:51 AM

Like any other language, Java has the enum feature that allows us to enumerate items. It is helpful to list delimited items in your code, such as the seasons. And we can go beyond it with Java! It permits clean code design. Indeed, we can apply several patterns such as VO from DDD, Singleton, and […]

by otaviojava at July 06, 2022 06:51 AM

Java EE - Jakarta EE Initializr

May 05, 2022 02:23 PM

Getting started with Jakarta EE just became even easier!

Get started

Hot new Update!

Moved from the Apache 2 license to the Eclipse Public License v2 for the newest version of the archetype described below, as a start for a possible collaboration with the Eclipse start project.

New Archetype with JakartaEE 9

JakartaEE 9 + Payara 5.2022.2 + MicroProfile 4.1 running on Java 17

  • And the docker image is also ready for x86_64 (amd64) AND aarch64 (arm64/v8) architectures!

May 05, 2022 02:23 PM

FOSDEM 2022 Conference Report

by Reza Rahman at February 21, 2022 12:24 AM

FOSDEM took place February 5-6. The European based event is one of the most significant gatherings worldwide focused on all things Open Source. Named the “Friends of OpenJDK”, in recent years the event has added a devroom/track dedicated to Java. The effort is lead by my friend and former colleague Geertjan Wielenga. Due to the pandemic, the 2022 event was virtual once again. I delivered a couple of talks on Jakarta EE as well as Diversity & Inclusion.

Fundamentals of Diversity & Inclusion for Technologists

I opened the second day of the conference with my newest talk titled “Fundamentals of Diversity and Inclusion for Technologists”. I believe this is an overdue and critically important subject. I am very grateful to FOSDEM for accepting the talk. The reality for our industry remains that many people either have not yet started or are at the very beginning of their Diversity & Inclusion journey. This talk aims to start the conversation in earnest by explaining the basics. Concepts covered include unconscious bias, privilege, equity, allyship, covering and microaggressions. I punctuate the topic with experiences from my own life and examples relevant to technologists. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

Jakarta EE – Present and Future

Later the same day, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

I am very happy to have had the opportunity to speak at FOSDEM. I hope to contribute again in the future.



Making Readable Code With Dependency Injection and Jakarta CDI

by otaviojava at January 18, 2022 03:53 PM

Learn more about dependency injection with Jakarta CDI and enhance the effectiveness and readability of your code. Link: https://dzone.com/articles/making-readable-code-with-dependency-injection-and-jakarta-cdi


Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability

December 12, 2021 10:00 PM

Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
So, until an official patch arrives, you can update the bundled logger to the latest version in a few simple steps

wget https://downloads.apache.org/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
unzip apache-log4j-2.15.0-bin.zip

cd /opt/infinispan-server-10.1.8.Final/lib/

rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./

Please note: the patch above is not official, but according to initial tests it works with no issues



JPA query methods: influence on performance

by Vladimir Bychkov at November 18, 2021 07:22 AM

The JPA 2.2/Jakarta JPA 3.0 specification provides several methods for selecting data from the database. In this article we examine how these methods affect performance


Eclipse Jetty Servlet Survey

by Jesse McConnell at October 27, 2021 01:25 PM

This short 5-minute survey is being presented to the Eclipse Jetty user community to validate the Jetty developers' assumptions about how users will leverage Jakarta EE servlets and the Jetty project. We are gauging interest in some features before supporting them in Jetty 12, and your responses will help shape its forthcoming release.

We will summarize results in a future blog.



Custom Identity Store with Jakarta Security in TomEE

by Jean-Louis Monteiro at September 30, 2021 11:42 AM

In the previous post, we saw how to use the built-in 'tomcat-users.xml' identity store with Apache TomEE. While this identity store is inherited from Tomcat and integrated into the Jakarta Security implementation in TomEE, it is usually fine for development or simple deployments, but may prove too simple or restrictive for production environments.

This blog will focus on how to implement your own identity store. TomEE can use LDAP or JDBC identity stores out of the box. We will try them out next time.

Let’s say you have your own file store or your own data store like an in-memory data grid, then you will need to implement your own identity store.

What is an identity store?

An identity store is a database or a directory (store) of identity information about a population of users that includes an application’s callers.

In essence, an identity store contains all information such as caller name, groups or roles, and required information to validate a caller’s credentials.

How to implement my own identity store?

This is actually fairly simple with Jakarta Security. The only thing you need to do is create an implementation of `jakarta.security.enterprise.identitystore.IdentityStore`. All methods in the interface have default implementations. So you only have to implement what you need.

public interface IdentityStore {
   Set<ValidationType> DEFAULT_VALIDATION_TYPES = EnumSet.of(VALIDATE, PROVIDE_GROUPS);

   default CredentialValidationResult validate(Credential credential) {
       // default implementation omitted
   }

   default Set<String> getCallerGroups(CredentialValidationResult validationResult) {
       // default implementation omitted
   }

   default int priority() {
       // default implementation omitted
   }

   default Set<ValidationType> validationTypes() {
       // default implementation omitted
   }

   enum ValidationType {
       VALIDATE, PROVIDE_GROUPS
   }
}

By default, an identity store is used both for validating user credentials and for providing the groups/roles of the authenticated user. Depending on what #validationTypes() returns, you will have to implement #validate(…) and/or #getCallerGroups(…).

#getCallerGroups(…) will receive the result of #validate(…). Let's look at a very simple example:

@ApplicationScoped
public class TestIdentityStore implements IdentityStore {

   public CredentialValidationResult validate(Credential credential) {

       if (!(credential instanceof UsernamePasswordCredential)) {
           return INVALID_RESULT;
       }

       final UsernamePasswordCredential usernamePasswordCredential = (UsernamePasswordCredential) credential;
       if (usernamePasswordCredential.compareTo("jon", "doe")) {
           return new CredentialValidationResult("jon", new HashSet<>(asList("foo", "bar")));
       }

       if (usernamePasswordCredential.compareTo("iron", "man")) {
           return new CredentialValidationResult("iron", new HashSet<>(Collections.singletonList("avengers")));
       }

       return INVALID_RESULT;
   }

}

In this simple example, the identity store is hardcoded. Basically, it knows only two users, each with a different set of roles.

You can easily extend this example and query a local file, or an in-memory data grid if you need. Or use JPA to access your relational database.
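For illustration, here is what such a lookup could look like with plain maps standing in for the real store. The user data is made up and the Jakarta Security types are left out so the snippet stays self-contained; a real store would wrap the result in a CredentialValidationResult, as above:

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class MapBackedStore {

    // Stand-in for a file, an in-memory data grid, or a JPA query result.
    private static final Map<String, String> PASSWORDS = Map.of(
            "jon", "doe",
            "iron", "man");
    private static final Map<String, Set<String>> GROUPS = Map.of(
            "jon", Set.of("foo", "bar"),
            "iron", Set.of("avengers"));

    // Returns the caller's groups when the credentials match, empty otherwise.
    static Optional<Set<String>> validate(String caller, String password) {
        return password.equals(PASSWORDS.get(caller))
                ? Optional.of(GROUPS.get(caller))
                : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(validate("jon", "nope")); // Optional.empty
    }
}
```

Swapping the two maps for a file read, a data grid call, or a JPA query keeps validate(...) short as the user base grows.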

IMPORTANT: for TomEE to pick it up and use it in your application, the identity store must be a CDI bean.

The complete and runnable example is available under https://github.com/apache/tomee/tree/master/examples/security-custom-identitystore

The post Custom Identity Store with Jakarta Security in TomEE appeared first on Tomitribe.



Book Review: Practical Cloud-Native Java Development with MicroProfile

September 24, 2021 12:00 AM

Practical Cloud-Native Java Development with MicroProfile cover

General information

  • Pages: 403
  • Published by: Packt
  • Release date: Aug 2021

Disclaimer: I received this book as a collaboration with Packt and one of the authors (Thanks Emily!)

A book about Microservices for the Java Enterprise-shops

Year after year, many enterprise companies struggle to embrace the Cloud Native practices that we tend to call Microservices; however, Microservices is a metapattern that needs to follow a well-defined approach, like:

  • (We aim for) reactive systems
  • (Hence we need a methodology like) 12 Cloud Native factors
  • (Implementing) well-known design patterns
  • (Dividing the system by using) Domain Driven Design
  • (Implementing microservices via) Microservices chassis and/or service mesh
  • (Achieving deployments by) Containers orchestration

Many of these concepts require a considerable amount of context, but some books, tutorials, conferences and YouTube videos tend to focus on specific niche information, making it difficult to have a "cold start" in the microservices space if you have been developing regular/monolithic software. For me, that's the best thing about this book: it provides a holistic view of microservices with Java and MicroProfile for "cold start" developers.

About the book

From a software architect's perspective, MicroProfile could be defined as a set of specifications (APIs) that many microservices chassis implement in order to solve common microservices problems through patterns, lessons learned from well-known Java libraries, and proposals for collaboration between Java Enterprise vendors.

If you think that sounds a lot like Java EE, that's right: it's the same spirit but in the microservices space, with participation from many vendors, including vendors from the Java EE space -e.g. Red Hat, IBM, Apache, Payara-.

The main value of this book is the willingness to go beyond the APIs, providing four structured sections that have different writing styles, for instance:

  1. Section 1: Cloud Native Applications - Written as a didactical resource to learn fundamentals of distributed systems with Cloud Native approach
  2. Section 2: MicroProfile Deep Dive - Written as a reference book with code snippets to understand the motivation, functionality and specific details in MicroProfile APIs and the relation between these APIs and common Microservices patterns -e.g. Remote procedure invocation, Health Check APIs, Externalized configuration-
  3. Section 3: End-to-End Project Using MicroProfile - Written as a narrative workshop with source code already available, to understand the development and deployment process of Cloud Native applications with MicroProfile
  4. Section 4: The standalone specifications - Written as a reference book with code snippets, it describes the development of newer specs that could be included in the future under MicroProfile's umbrella

First section

This was by far my favorite section. This section presents a well-balanced overview about Cloud Native practices like:

  • Cloud Native definition
  • The role of microservices and the differences with monoliths and FaaS
  • Data consistency with event sourcing
  • Best practices
  • The role of MicroProfile

I enjoyed this section because my current role is to coach or act as a software architect at different companies, hence this is good material to explain the whole panorama to my coworkers and/or use this book as a quick reference.

My only concern with this section is the final chapter: it presents an application called IBM Stock Trader that (as you probably guessed) IBM uses to demonstrate these concepts using MicroProfile with OpenLiberty. The chapter presents an application that combines data sources, front-ends and Kubernetes; however, the application becomes useful only in Section 3 (at least that was my perception). Hence you will be going back to this section once you're executing the workshop.

Second section

This section divides the MicroProfile APIs in three levels, the division actually makes a lot of sense but was evident to me only during this review:

  1. The base APIs to create microservices (JAX-RS, CDI, JSON-P, JSON-B, Rest Client)
  2. Enhancing microservices (Config, Fault Tolerance, OpenAPI, JWT)
  3. Observing microservices (Health, Metrics, Tracing)

Additionally, the section describes the need for Docker and Kubernetes, and how other common approaches -e.g. service mesh- overlap with microservice chassis functionality.

Currently I'm a MicroProfile user, hence I knew most of the APIs, however I liked the actual description of the pattern/need that motivated the inclusion of the APIs, and the description could be useful for newcomers, along with the code snippets also available on GitHub.

If you're a Java/Jakarta EE developer you will find the CDI section a little bit superficial, indeed CDI by itself deserves a whole book/fascicle but this chapter gives the basics to start the development process.

Third section

This section switches the writing style to a workshop style. The first chapter is entirely focused on how to compile the sample microservices, how to fulfill the technical requirements and which MicroProfile APIs are used on every microservice.

You must notice that this is not a Java programming workshop; it's a Cloud Native workshop with ready-to-deploy microservices, hence the step-by-step guide covers compilation with Maven, Docker containers, scaling with Kubernetes, operators in OpenShift, etc.

You could explore and change the source code if you wish, but the section is written in a "descriptive" way, assuming the samples' existence.

Fourth section

This section is pretty similar to the second section in the reference book style, hence it also describes the pattern/need that motivated the discussion of the API and code snippets. The main focus of this section is GraphQL, Reactive Approaches and distributed transactions with LRA.

This section will probably change in future editions of the book because, at the time of publishing, the Cloud Native Computing Foundation revealed that some observability initiatives will be integrated into the OpenTelemetry project, and MicroProfile is discussing its future approach.

Things that could be improved

As with any review, this is the most difficult section to write, but I think that a second edition should:

  • Extend the CDI section due to its foundational status
  • Switch the order of the Stock Trader presentation
  • Extend the data consistency discussion -e.g. CQRS, Event Sourcing-, hopefully with advances from LRA

The last item is mostly a wish, since I'm always in need of better ways to integrate these common practices with buses like Kafka or Camel using MicroProfile. I know that some implementations -e.g. Helidon, Quarkus- already have extensions for Kafka or Camel, but data consistency is an entire discussion about patterns, tools and best practices.

Who should read this book?

  • Java developers with strong SE foundations and familiarity with the enterprise space (Spring/Java EE)


Jakarta Community Acceptance Testing (JCAT)

by javaeeguardian at July 28, 2021 05:41 AM

Today the Jakarta EE Ambassadors are announcing the start of the Jakarta EE Community Acceptance Testing (JCAT) initiative. The purpose of this initiative is to test Jakarta EE 9/9.1 implementations using your code and/or applications. Although Jakarta EE is extensively tested by the TCK, container-specific tests, and QA, the purpose of JCAT is for developers to test the implementations.

Jakarta EE 9/9.1 did not introduce any new features. In Jakarta EE 9 the APIs changed from javax to jakarta. Jakarta EE 9.1 raised the supported floor to Java 11 for compatible implementations. So what are we testing?

  • Testing individual spec implementations standalone with the new namespace. 
  • Deploying existing Java EE/Jakarta EE applications to EE 9/9.1.
  • Converting Java EE/Jakarta EE applications to the new namespace.
  • Running applications on Java 11 (Jakarta EE 9.1)

Participating in this initiative is easy:

  1. Download a Jakarta EE implementation:
    1. Java 8 / Jakarta EE 9 Containers
    2. Java 11+ / Jakarta EE 9.1 Containers
  2. Deploy code:
    1. Port or run your existing Jakarta EE application
    2. Test out a feature using a starter template

To join this initiative, please take a moment to fill-out the form:

 Sign-up Form 

To submit results or feedback on your experiences with Jakarta EE 9/9.1:

  Jakarta EE 9 / 9.1 Feedback Form

Resources:

Start Date: July 28, 2021

End Date: December 31st, 2021



Your Voice Matters: Take the Jakarta EE Developer Survey

by dmitrykornilov at April 17, 2021 11:36 AM

The Jakarta EE Developer Survey is in its fourth year and is the industry's largest open source developer survey. It's open until April 30, 2021. I encourage you to add your voice. Why should you do it? Because the Jakarta EE Working Group needs your feedback. We need to know the challenges you are facing and the suggestions you have for making Jakarta EE better.

Last year’s edition surveyed developers to gain on-the-ground understanding and insights into how Jakarta solutions are being built, as well as identifying developers’ top choices for architectures, technologies, and tools. The 2021 Jakarta EE Developer Survey is your chance to influence the direction of the Jakarta EE Working Group’s approach to cloud native enterprise Java.

The results from the 2021 survey will give software vendors, service providers, enterprises, and individual developers in the Jakarta ecosystem updated information about Jakarta solutions and service development trends and what they mean for their strategies and businesses. Additionally, the survey results also help the Jakarta community at the Eclipse Foundation better understand the top industry focus areas and priorities for future project releases.

A full report based on the survey results will be made available to all participants.

The survey takes less than 10 minutes to complete. We look forward to your input. Take the survey now!



Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException

April 02, 2021 09:00 PM

WildFly provides great out-of-the-box load balancing support via the Undertow and modcluster subsystems.
Unfortunately, when the HTTP headers grow large enough (close to 16K), which is quite common in the JWT era, this pesky error happens:

ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.io.IOException: java.nio.BufferOverflowException
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:771)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:646)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:561)
 at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(AjpClientExchange.java:203)
 at io.undertow.client.ajp.AjpClientConnection.initiateRequest(AjpClientConnection.java:288)
 at io.undertow.client.ajp.AjpClientConnection.sendRequest(AjpClientConnection.java:242)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction.run(ProxyHandler.java:561)
 at io.undertow.util.SameThreadExecutor.execute(SameThreadExecutor.java:35)
 at io.undertow.server.HttpServerExchange.dispatch(HttpServerExchange.java:815)
...
Caused by: java.nio.BufferOverflowException
 at java.nio.Buffer.nextPutIndex(Buffer.java:521)
 at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:297)
 at io.undertow.protocols.ajp.AjpUtils.putString(AjpUtils.java:52)
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(AjpClientRequestClientStreamSinkChannel.java:176)
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(AjpClientRequestClientStreamSinkChannel.java:290)
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:39)
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:32)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(AbstractFramedChannel.java:603)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(AbstractFramedChannel.java:742)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(AbstractFramedChannel.java:735)
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(AbstractFramedStreamSinkChannel.java:267)
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(AbstractFramedStreamSinkChannel.java:244)
 at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(DetachableStreamSinkChannel.java:79)
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:754)

The same request sent directly to the backend server works well. I tried to play with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.

A possible solution here is to switch the protocol from AJP to HTTP, which can be a bit less efficient but works well with big headers:

/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)


Oracle Joins MicroProfile Working Group

by dmitrykornilov at January 08, 2021 06:02 PM

I am very pleased to announce that since the beginning of 2021, Oracle has officially been a part of the MicroProfile Working Group.

In Oracle we believe in standards and supporting them in our products. Standards are born in blood, toil, tears, and sweat. Standards are a result of collaboration of experts, vendors, customers and users. Standards bring the advantages of portability between different implementations that make standard-based solutions vendor-neutral.

We created Java EE which was the first enterprise Java standard. We opened it and moved it to the Eclipse Foundation to make its development truly open source and vendor neutral. Now we are joining MicroProfile which in the last few years has become a leading standard for cloud-native solutions.

We’ve been supporting MicroProfile for years before officially joining the Working Group. We created project Helidon which has supported MicroProfile APIs since MicroProfile version 1.1. Contributing to the evolution and supporting new versions of MicroProfile is one of our strategic goals.

I like the community driven and enjoyable approach of creating cloud-native APIs invented by MicroProfile. I believe that our collaboration will be effective and together we will push MicroProfile forward to a higher level.



An introduction to MicroProfile GraphQL

by Jean-François James at November 14, 2020 05:05 PM

If you’re interested in MicroProfile and APIs, please checkout my presentation Boost your APIs with GraphQL. I did it at EclipseCon 2020. Thanks to the organizers for the invitation! The slide deck is on Slideshare. I’ve tried to be high-level and explain how GraphQL differentiates from REST and how easy it is to implement a […]


General considerations on updating Enterprise Java projects from Java 8 to Java 11

September 23, 2020 12:00 AM

shell11

The purpose of this article is to consolidate all difficulties and solutions that I've encountered while updating Java EE projects from Java 8 to Java 11 (and beyond). It's a known fact that Java 11 has a lot of new characteristics that are revolutionizing how Java is used to create applications, despite being problematic under certain conditions.

This article is focused on Java/Jakarta EE but it could be used as basis for other enterprise Java frameworks and libraries migrations.

Is it possible to update Java EE/MicroProfile projects from Java 8 to Java 11?

Yes, absolutely. My team has been able to migrate at least two mature enterprise applications, each with more than three years in development:

A Management Information System (MIS)

Nabenik MIS

  • Time for migration: 1 week
  • Modules: 9 EJB, 1 WAR, 1 EAR
  • Classes: 671 and counting
  • Code lines: 39480
  • Project's beginning: 2014
  • Original platform: Java 7, Wildfly 8, Java EE 7
  • Current platform: Java 11, Wildfly 17, Jakarta EE 8, MicroProfile 3.0
  • Web client: Angular

Mobile POS and Geo-fence

Medmigo REP

  • Time for migration: 3 weeks
  • Modules: 5 WAR/MicroServices
  • Classes: 348 and counting
  • Code lines: 17160
  • Project's beginning: 2017
  • Original platform: Java 8, Glassfish 4, Java EE 7
  • Current platform: Java 11, Payara (Micro) 5, Jakarta EE 8, MicroProfile 3.2
  • Web client: Angular

Why should I ever consider migrating to Java 11?

As with everything in IT, the answer is "It depends . . .". However, there are a couple of good reasons to do it:

  1. Reduce attack surface by updating project dependencies proactively
  2. Reduce technical debt and most importantly, prepare your project for the new and dynamic Java world
  3. Take advantage of performance improvements on new JVM versions
  4. Take advantage from improvements of Java as programming language
  5. Sleep better by having a more secure, efficient and quality product

Why Java updates from Java 8 to Java 11 are considered difficult?

From my experience with many teams, it comes down to the following:

Changes in Java release cadence

Java Release Cadence

Currently, there are two big branches in JVMs release model:

  • Java LTS: With a fixed lifetime (3 years) for long-term support, Java 11 being the latest one
  • Java current: A fast-paced Java version that is available every 6 months on a predictable calendar, Java 15 being the latest (at least at the time of publishing this article)

The rationale behind this decision is that Java needed dynamism in providing new characteristics to the language, API and JVM, with which I really agree.

Nevertheless, it is a known fact that most enterprise frameworks seek and use Java for stability. Consequently, most of these frameworks target Java 11 as the "certified" Java Virtual Machine for deployments.

Usage of internal APIs

Java 9

Errata: I fixed and simplified this section following an interesting discussion on reddit :)

Java 9 introduced changes to internal classes that weren't meant for use outside the JVM, breaking the functionality of popular libraries that made use of these internals -e.g. Hibernate, ASM, Hazelcast- to gain performance.

Hence, internal APIs in JDK 9 are inaccessible at compile time (but accessible with --add-exports). APIs that were accessible in JDK 8 remain accessible at run time, but in a future release they will become inaccessible. In the long run this change will reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these internal APIs.

Finally, during the introduction of JEP-260 internal APIs were classified as critical and non-critical, consequently critical internal APIs for which replacements are introduced in JDK 9 are deprecated in JDK 9 and will be either encapsulated or removed in a future release.

However, you are inside the danger zone if:

  1. Your project compiles against dependencies pre-Java 9 depending on critical internals
  2. You bundle dependencies pre-Java 9 depending on critical internals
  3. You run your applications over a runtime -e.g. Application Servers- that include pre Java 9 transitive dependencies

Any of these situations means that your application has a probability of not being compatible with JVMs above Java 8. At least not without updating your dependencies, which also could uncover breaking changes in library APIs creating mandatory refactors.
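A hedged way to see where your own runtime stands (this snippet is illustrative, not from the article): try to open a well-known JDK internal reflectively and observe whether the JVM still allows it. The outcome varies with the JDK version and with --add-opens/--add-exports flags, which is exactly the migration risk described above:

```java
import java.lang.reflect.Field;

public class InternalApiProbe {

    // Attempts to open sun.misc.Unsafe's singleton field reflectively.
    // Returns false when the module system (or the JDK) refuses access.
    static boolean unsafeIsOpen() {
        try {
            Class<?> unsafe = Class.forName("sun.misc.Unsafe");
            Field theUnsafe = unsafe.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true); // may throw InaccessibleObjectException
            return theUnsafe.get(null) != null;
        } catch (Throwable t) {
            return false; // internal API is encapsulated or absent on this JVM
        }
    }

    public static void main(String[] args) {
        System.out.println("sun.misc.Unsafe reachable: " + unsafeIsOpen());
    }
}
```

Running such probes against every JDK you target quickly tells you whether a dependency's internal-API usage will survive the upgrade.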

Removal of CORBA and Java EE modules from OpenJDK

JEP230

Also in the Java 9 release, many Java EE and CORBA modules were marked as deprecated and were effectively removed in Java 11, specifically:

  • java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata)
  • java.xml.bind (JAXB)
  • java.activation (JAF)
  • java.xml.ws.annotation (Common Annotations)
  • java.corba (CORBA)
  • java.transaction (JTA)
  • java.se.ee (Aggregator module for the six modules above)
  • jdk.xml.ws (Tools for JAX-WS)
  • jdk.xml.bind (Tools for JAXB)

As JEP-320 states, many of these modules were included in Java 6 as a convenience to generate/support SOAP Web Services. But these modules eventually took off as independent projects already available at Maven Central. Therefore it is necessary to include these as dependencies if our project implements services with JAX-WS and/or depends on any library/utility that was included previously.
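For example, a project that still uses JAXB on Java 11 would declare it explicitly in the POM; a sketch with the commonly used coordinates (verify the versions against your stack):

```xml
<!-- JAXB API and runtime, no longer shipped with the JDK as of Java 11 -->
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.3</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.3</version>
    <scope>runtime</scope>
</dependency>
```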

IDEs and application servers

Eclipse

In the same way as libraries, Java IDEs had to catch-up with the introduction of Java 9 at least in three levels:

  1. IDEs as Java programs should be compatible with Java Modules
  2. IDEs should support new Java versions as programming language -i.e. Incremental compilation, linting, text analysis, modules-
  3. IDEs are also basis for an ecosystem of plugins that are developed independently. Hence if plugins have any transitive dependency with issues over JPMS, these also have to be updated

Overall, none of the Java IDEs guaranteed that plugins would work on JVMs above Java 8. Therefore you could possibly run your IDE on Java 11, but a legacy/deprecated plugin could prevent you from running your application.

How do I update?

Note that Java 9 launched three years ago, hence the situations previously described are mostly covered by now. However, you should perform the following verifications and actions to prevent failures in the process:

  1. Verify server compatibility
  2. Verify if you need a specific JVM due to support contracts and conditions
  3. Configure your development environment to support multiple JVMs during the migration process
  4. Verify your IDE compatibility and update
  5. Update Maven and Maven projects
  6. Update dependencies
  7. Include Java/Jakarta EE dependencies
  8. Execute multiple JVMs in production

Verify server compatibility

Tomcat

Mike Loukides from O'Reilly affirms that there are two types of programmers. On one hand we have the low-level programmers who create tools such as libraries or frameworks, and on the other hand we have developers who use these tools to create experiences, products and services.

Enterprise Java mostly falls into the second group, the "productive world" standing on giants' shoulders. That's why you should first check whether your runtime or framework already has a version compatible with Java 11, and whether you have the time/decision power to proceed with an update. If not, any other action from this point is useless.

The good news is that most of the popular servers in enterprise Java world are already compatible, like:

If you happen to depend on incompatible runtimes, this is where the road ends, unless you support the maintainer in updating it.

Verify if you need a specific JVM

FixesJDK15

On the non-technical side, support contract conditions could obligate you to use a specific JVM version.

OpenJDK by itself is an open source project receiving contributions from many companies (Oracle being the most active contributor), but nothing prevents any other company from compiling, packaging and TCK-certifying its own JVM distribution, as demonstrated by Amazon Corretto, Azul Zulu, Liberica JDK, etc.

In short, there is software that technically could run over any JVM distribution and version, but the support contract will ask you for a particular version. For instance:

Configure your development environment to support multiple JDKs

Since the jump from Java 8 to Java 11 is mostly an experimentation process, it is a good idea to install multiple JVMs on the development computer, SDKMan and jEnv being the common options:

SDKMan

sdkman

SDKMan is available for Unix-Like environments (Linux, Mac OS, Cygwin, BSD) and as the name suggests, acts as a Java tools package manager.

It helps to install and manage JVM ecosystem tools -e.g. Maven, Gradle, Leiningen- and also multiple JDK installations from different providers.

jEnv

Also available for Unix-like environments (Linux, macOS, Cygwin, BSD), jEnv is basically a script to manage and switch between multiple JVM installations per system, user and shell.

If you happen to install JDKs from different sources (e.g. Homebrew, a Linux repository, Oracle Technology Network), it is a good choice.
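With jEnv you register JDKs that are already installed and then pick one per machine, directory or shell (the path below is an example from a macOS installation; yours will differ):

```shell
# Register an existing JDK installation with jEnv
jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home
# Set the default version globally...
jenv global 11.0
# ...or only for the current project directory...
jenv local 11.0
# ...or only for the current shell session
jenv shell 11.0
```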

Finally, if you use Windows, the common alternative is to automate the switch using .bat files; however, I would appreciate any other suggestion, since I don't use Windows that often.

Verify your IDE compatibility and update

Please remember that any IDE ecosystem is composed of three levels:

  1. The IDE acting as platform
  2. Programming language support
  3. Plugins to support tools and libraries

After updating your IDE, you should also verify that all of the plugins that are part of your development cycle work fine under Java 11.

Update Maven and Maven projects

Probably the most common build-tool choice in enterprise Java is Maven, and many IDEs use it under the hood or explicitly. Hence, you should update it.

Besides the installation itself, please remember that Maven has a modular architecture and plugin versions can be pinned in any project definition. So, as a rule of thumb, you should also update these plugins in your projects to the latest stable version.

To verify this quickly, you can use the versions-maven-plugin:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>versions-maven-plugin</artifactId>
    <version>2.8.1</version>
</plugin>

It includes a specific goal to verify Maven plugin versions:

mvn versions:display-plugin-updates

After that, you also need to configure the Java source and target compatibility; generally this can be done in two places.

As properties:

<properties>
    ...
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>

As configuration on specific Maven plugins, especially maven-compiler-plugin, where the release flag (available since JDK 9) replaces source/target and also verifies that your code only uses APIs from that platform version:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.0</version>
    <configuration>
        <release>11</release>
    </configuration>
</plugin>

Finally, some plugins need to "break" the barriers imposed by the Java Module System, and the Java Platform team knows about it. Hence the JVM has an argument called --illegal-access to allow this (relaxed access is still the default as of Java 11; JDK 17 ignores the flag entirely).

This could be a good idea for plugins like surefire and failsafe, which also invoke runtimes that depend on this flag (like Arquillian tests):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.0</version>
    <configuration>
        <argLine>
            --illegal-access=permit
        </argLine>
    </configuration>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.22.0</version>
    <configuration>
        <argLine>
            --illegal-access=permit
        </argLine>
    </configuration>
</plugin>

Update project dependencies

As mentioned before, you need to check for compatible versions of your Java dependencies. Some of these libraries introduce breaking changes in each major version (e.g. Flyway), so you should set aside time to refactor for these changes.

Again, if you use Maven, the versions-maven-plugin has a goal to verify dependency versions; the plugin will inform you about available updates:

mvn versions:display-dependency-updates

In the particular case of Java EE, you already have an advantage: if you depend only on APIs (e.g. Java EE, MicroProfile) and not on particular implementations, many of these issues are already solved for you.

Include Java/Jakarta EE dependencies

Modern REST-based services probably won't need this; however, in projects with heavy usage of SOAP and XML marshalling it is mandatory to include the Java EE modules removed in Java 11. Otherwise your project won't compile and run.
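If you are unsure whether your JDK still bundles these modules, you can ask the JVM directly; on JDK 11+ none of them appear in the module list (a small sketch, works on JDK 9+ only):

```shell
# Check whether the removed Java EE modules are still bundled with this JDK
# (on JDK 11+ none match, so the fallback message is printed)
java --list-modules | grep -E '^java\.(xml\.bind|xml\.ws|activation|corba)@' \
  || echo "EE modules not in the JDK - add them as Maven dependencies"
```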

You must include as dependencies:

  • API definition
  • Reference Implementation (if needed)

At this point it is also a good idea to evaluate whether you could move to Jakarta EE, the evolution of Java EE under the Eclipse Foundation.

Jakarta EE 8 is practically Java EE 8 under another name, retaining package and feature compatibility; most application servers are in the process of obtaining, or already have, Jakarta EE certified implementations.

We could swap the Java EE API:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0.1</version>
    <scope>provided</scope>
</dependency>

For Jakarta EE API:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>

After that, please include any of these dependencies (if needed):

JavaBeans Activation

Java EE

<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>javax.activation-api</artifactId>
    <version>1.2.0</version>
</dependency>

Jakarta EE

<dependency>
    <groupId>jakarta.activation</groupId>
    <artifactId>jakarta.activation-api</artifactId>
    <version>1.2.2</version>
</dependency>

JAXB (Java Architecture for XML Binding)

Java EE

<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>

Jakarta EE

<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.3</version>
</dependency>

Implementation

<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.3</version>
</dependency>

JAX-WS

Java EE

<dependency>
    <groupId>javax.xml.ws</groupId>
    <artifactId>jaxws-api</artifactId>
    <version>2.3.1</version>
</dependency>

Jakarta EE

<dependency>
    <groupId>jakarta.xml.ws</groupId>
    <artifactId>jakarta.xml.ws-api</artifactId>
    <version>2.3.3</version>
</dependency>

Implementation (runtime)

<dependency>
    <groupId>com.sun.xml.ws</groupId>
    <artifactId>jaxws-rt</artifactId>
    <version>2.3.3</version>
</dependency>

Implementation (standalone)

<dependency>
    <groupId>com.sun.xml.ws</groupId>
    <artifactId>jaxws-ri</artifactId>
    <version>2.3.2-1</version>
    <type>pom</type>
</dependency>

Java Annotation

Java EE

<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.2</version>
</dependency>

Jakarta EE

<dependency>
    <groupId>jakarta.annotation</groupId>
    <artifactId>jakarta.annotation-api</artifactId>
    <version>1.3.5</version>
</dependency>

Java Transaction

Java EE

<dependency>
    <groupId>javax.transaction</groupId>
    <artifactId>javax.transaction-api</artifactId>
    <version>1.3</version>
</dependency>

Jakarta EE

<dependency>
    <groupId>jakarta.transaction</groupId>
    <artifactId>jakarta.transaction-api</artifactId>
    <version>1.3.3</version>
</dependency>

CORBA

In the particular case of CORBA, I'm not sure about its current adoption. There is an independent project at Eclipse to support CORBA, based on GlassFish CORBA, but this should be investigated further.

Multiple JVMs in production

If everything compiles, tests and executes, you did a successful migration.

Some deployments/environments run multiple application servers on the same Linux installation. If this is your case, it is a good idea to install multiple JVMs to allow stepped migrations instead of a big bang.

For instance, RHEL-based distributions like CentOS, Oracle Linux or Fedora package various JVM versions.

Most importantly, if you install JVMs directly from standalone RPMs (like Oracle HotSpot), the Java alternatives mechanism will still support you.
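Registering and switching a manually installed JDK with alternatives looks roughly like this (the installation path and priority are examples, not the RPM's actual layout):

```shell
# Register a manually installed JDK with the alternatives system (RHEL family)
sudo alternatives --install /usr/bin/java java /usr/java/jdk-11.0.8/bin/java 2
# Interactively pick which registered JDK /usr/bin/java points to
sudo alternatives --config java
```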

However, on modern deployments it would probably be better to use Docker, especially on Windows, which otherwise also needs .bat scripts to automate this task. Most of the JVM distributions are also available on Docker Hub.
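For example, a throwaway container is enough to try a given JDK without touching the host (the image tag below is one that was published on Docker Hub at the time of writing):

```shell
# Try JDK 11 from Docker Hub without installing anything on the host
docker run --rm adoptopenjdk:11-jre-hotspot java -version
```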


September 23, 2020 12:00 AM
