
Not Your Java Package Handler--airhacks.fm podcast

June 21, 2025 06:52 PM

Subscribe to airhacks.fm podcast via: spotify| iTunes| RSS

The #351 airhacks.fm episode with Billy Korando (@BillyKorando) about:
microservices, monoliths, automated testing and metrics, developer relations and focus on observability
is available for download.

June 21, 2025 06:52 PM

Summer 2025: User Groups, Conferences and Livestreams

June 19, 2025 10:07 AM

  1. AWS User Group Silesia: Hardcore Serverless Java
    session AWS User Group Silesia Future Processing, Gliwice 24 June 2025
    https://www.meetup.com/airhacks/events/308303637/
  2. CloudLand: Hardcore Serviceful + Serverless Java #slideless
    session CloudLand Heide Park Soltau 4 July 2025
    https://meine.doag.org/events/cloudland/2025/agenda/#agendaId.5637
  3. LLM / GenAI Patterns, Architectures and Use Cases with Java [online event]
    airhacks.live workshop 10 July 2025
    https://airhacks.live
  4. Hardcore Serverless Java on AWS [online event]
    airhacks.live workshop 17 July 2025
    https://airhacks.live
  5. airhacks.tv Questions and Answers #livestream [online event]
    live streaming show first Monday of the month, 8pm CET
    https://www.meetup.com/airhacks/

June 19, 2025 10:07 AM

Hashtag Jakarta EE #285

by Ivar Grimstad at June 15, 2025 09:59 AM

Welcome to issue number two hundred and eighty-five of Hashtag Jakarta EE!

We’re finally there! The release review for the Jakarta EE 11 Platform specification is ongoing. All the members of the Jakarta EE Specification Committee have voted, so as soon as the minimum duration of 7 days is over, the release of the specification will be approved. Public announcements and celebrations will follow in the weeks to come.

With Jakarta EE 11 out the door, the Jakarta EE Platform project can focus entirely on Jakarta EE 12. A project Milestone 0 is being planned as we speak. One of the activities of that milestone will be to get all CI Jobs and configurations set up for the new way of releasing to Maven Central due to the end-of-life of OSSRH. There will be a new release of the EE4J Parent POM to support this.

Next week, almost all of the Eclipse Foundation staff will come together at our annual All-Hands meeting. Since we are a remote organization, these events where we all get together are extremely important for aligning on all the various activities that take place in the organization. I am looking forward to catching up with my colleagues over a couple of days next week.


by Ivar Grimstad at June 15, 2025 09:59 AM

Improving ECA Renewals with Automated Notifications

June 10, 2025 01:48 PM

To make it easier for our community to maintain an active contributor status, we’re introducing a new notification service for the Eclipse Contributor Agreement (ECA).

Starting June 11, 2025, we will begin sending email reminders before a standalone ECA is set to expire. For those who need to take action, the email will have a subject line of “Action Required: Your Eclipse Contributor Agreement (ECA) is Expiring Soon” and will contain a link to renew the agreement.

If you are an Eclipse committer who has signed an Individual Committer Agreement (ICA), or an employee of a member organization that has signed the Member Company Committer Agreement (MCCA), you do not need to renew the standalone ECA, as both agreements already include it. If you are covered by one of these agreements, an expiring standalone ECA will not affect your ability to contribute. In this case, you will receive a separate informational email with the subject: “No Action Required: Your Eclipse Contributor Agreement (ECA) is Expiring Soon” to confirm your status.

For those covered only by a standalone ECA, if it expires, you won’t be allowed to contribute to open source projects at Eclipse until you sign it again. Specifically:

  • You will no longer be able to submit a merge request to Eclipse project repositories hosted on the Eclipse Foundation GitLab.
  • Your commits included in a GitHub Pull Request will fail our automated ECA validation check.

If this happens, you can always restore your ability to contribute by visiting https://accounts.eclipse.org/user/eca and signing the ECA. Your contributor status will be restored once the new agreement is processed, which may take 5 to 15 minutes for our system caches to update.

For any questions or feedback, please join the discussion on our HelpDesk issue.


June 10, 2025 01:48 PM

Hashtag Jakarta EE #284

by Ivar Grimstad at June 08, 2025 09:59 AM

Welcome to issue number two hundred and eighty-four of Hashtag Jakarta EE!

I am now on my way home from Tokyo and JJUG CCC 2025 Spring. You can read about that trip in my post from yesterday.

Time to start celebrating! All the materials for the release review of the Jakarta EE 11 Platform have been provided, and as the Specification Committee mentor, I will have the privilege of starting the release review ballot on Monday. That means that the specification will be ready to be released on the 24th of June at the latest. I hope there will be cake…

With Jakarta EE 11 out the door, all focus from now on will be on Jakarta EE 12. The plan reviews have concluded and the platform project has started with the definition of project milestones. The plan is to define a Milestone 0, which will contain steps to ensure that the specification projects are ready to get going.


by Ivar Grimstad at June 08, 2025 09:59 AM

Switching blog engine

June 03, 2025 12:00 AM

I recently realized that the engine that used to run this blog hadn't been updated since 2020

June 03, 2025 12:00 AM

Foundation Laid for Faster Text I/O

by Markus Karg at May 17, 2025 06:06 PM

Hey guys! How’s it going?

Another six months are over and you might wonder what the old shaggy prepared this time? 🤔 No wonder, it certainly is another OpenJDK contribution! 🤩 This time, it speeds up reading from CharSequence, and it will allow faster Writer::append.

Never heard of CharSequence? Well, it’s the common interface of String, StringBuilder, CharBuffer, and quite a few custom classes out there. But wait! There are other text classes than String? 😯 Maybe you never thought about that… 🤨 And why would you want to speed that up? Because a program is never fast enough, but more urgently, because it reduces power consumption! So, once more, we gain fun from faster apps, plus saving the climate. 🌴 Ain’t that great? So read on!

So for a long time, you did not think about what happens “under the hood” when you concatenated Strings, like "abc" + "def". But then someone came and told you not to do “+” but use StringBuilder::append, as that would be way faster (which it was). And then someone else came and told you that this is an urban legend, as javac meanwhile does exactly that for you (which it does). But in fact, what happens still is that (directly or indirectly) memory is allocated which is the size of "abc" plus the size of "def" (even worse, it is not even stack memory but heap memory, but let’s put that aside for today). Actually, there is even more work done: As Strings are compressed internally, a compression algorithm chimes in. And yes, that needs time and memory, and energy, too. 😓 Indeed there is even more going on internally, but more or less we could say: Concatenating Strings is effectively making a compressed copy, then throwing away both original values, even in the Java 25 age. And “throwing away” means leaving behind holes in the linear memory space. So besides pure Garbage Collection (“vacuuming”), we need memory defragmentation (“waste grinding”), which is another nice word for: moving even more bytes around in memory. And that costs even more time and power. And guess what: Your app concatenates Strings a lot, right? And guess what: The Java Runtime (JRE) itself internally concatenates even more Strings! So copy, reallocate, compress, deallocate, GC, defrag all the time. But for what? For nothing! 😔 Sigh.

For nothing? Yes, for nothing. Because you could spare a lot of that – when further using StringBuilder instead of String. Ok, you have known that for a long time, so you do that, and so does javac (it replaces String + String by a StringBuilder “under good conditions”), and so does the JRE itself. But here comes the bad news: In the end, you, just like javac, just like the JRE itself, are calling toString() eventually. Don’t you? You do! And that means… right: Pointless power consumption, as toString() produces another temporary copy on the heap!

So why not omit toString()? Just directly pass around your StringBuilder everywhere, instead of toString()’ing it! This spares lots of toString(). (cheer) So all is fine now? Nope. (cheer stops). Once you want to output your StringBuilder, or once you want to input text into a StringBuilder, you’re facing a problem: Your surrounding frameworks do not accept StringBuilder! Typically these all work with String, or, like in the case we’re talking about today (to finally come to the topic of today’s posting), they do accept CharSequence – but they internally toString() it. 😒

For example: Java’s Writer classes (you know, like good old BufferedWriter, PrintWriter, and all those) just pretend to accept not only Strings (like in write(String)), but also any other kind of CharSequence (like in append(myStringBuilder)). That looks like just what we want to spare us that toString() heap clutter. But wait! 🫷 Take a look at the implementation first… it does… tada… toString()! 🥳 So that nice trick that javac internally uses a StringBuilder to implement “+” is good for nothing, as finally you end up with another time-squandering copy as soon as you output the result. 😭

But stop crying! I am here to help! 🦸 The other day Oracle kindly adopted my latest OpenJDK contribution, and this finally paves the way to fix these troubles. To understand my solution, let’s dive deeper into Writer, and why it does toString() on any CharSequence (even on Strings themselves). The cause is: Performance. Shocking! 🫢 If you copy text “char-by-char” this would be totally slow, as computers can pass around much larger clusters of information with a single command. So what Writer::write internally does is, it asks the String to put all its characters into a char array with a single command (which on top is lightning-fast machine code ⚡). That command is String::getChars(int, int, char[], int). So why do that only with Strings, but not with other CharSequences? Because CharSequence does not have that command. It’s as embarrassing as that! CharSequence can only be asked for one character at a time, so you need a loop in Java – which means, in 99% of your cases, an interpreted loop (unless you do it 10,000 times to get it hot-spotted eventually). And that is not just slow, it is even super-slow! 🐌

So what I did is that I added that exact getChars(int, int, char[], int) method signature that String always had to the CharSequence interface. Sounds easy, but it took six months of discussions and a lot of convincing (for example, I had to prove that “not much” code exists on earth that already has such a method but where that method does something else – as that code would silently do the wrong thing once executed on Java 25). This foundational change is now found in Java 25, and if you download a pre-release build, you can play with it right now – or already prepare your application if it currently does a “char-by-char” loop or a temporary toString().
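To make that concrete, here is a minimal sketch of mine (not code from the actual OpenJDK change) showing the difference between the old per-character copy and the new bulk call:

// Copy any CharSequence into a char[] without knowing its concrete type.
static char[] copyOut(CharSequence cs) {
    char[] buffer = new char[cs.length()];

    // Before Java 25, the only portable way was a slow per-character loop:
    // for (int i = 0; i < cs.length(); i++) {
    //     buffer[i] = cs.charAt(i);
    // }

    // Since Java 25, one bulk call does it, just like String::getChars always did:
    cs.getChars(0, cs.length(), buffer, 0);
    return buffer;
}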

So what about Writer? I’m working on it. Just today I filed the first of a set of pull requests towards getting this new method used in all Writers in OpenJDK. So while nothing will get faster “magically” in JDK 25, the foundation is laid, and over time your code will eventually run more efficiently, without recompiling it. So… stay tuned…! 😅


by Markus Karg at May 17, 2025 06:06 PM

Building AI Assistant Application in Java

by dmitrykornilov at May 16, 2025 12:57 PM

In my previous article, I discussed how Helidon integrates with LangChain4J. While the article provided a solid foundation, some readers pointed out the lack of a complete, hands-on example. This time, we’ll fix that by building a fully functional, practical AI-powered Java application.

We’ll create the Helidon Assistant — a chatbot with a web UI trained to answer questions about the Helidon framework. By “trained,” I mean it will be capable of answering questions based on the full Helidon documentation.

In this article, we’ll explore how to preprocess AsciiDoc files for ingestion into an embedding store and how to make the application stateless using an AI-powered summarization mechanism. Let’s dive in!

The Project Overview

The Helidon Assistant is built with the following technologies:

  • Java 21
  • Helidon SE 4 as the runtime
  • LangChain4J for AI integration
  • In-memory Embedding Store for storing and retrieving document embeddings
  • OpenAI GPT-4o as the default chat model
  • Helidon SE static content feature for serving the web UI
  • Bulma CSS framework for clean and minimalistic UI styling

The application is organized into three main layers:

  • RESTful Service Layer – defines the public REST API and serves the web UI.
  • AI Services Layer – defines LangChain4J AI services: one for answering user questions, and another for summarizing the conversation.
  • RAG Layer – handles ingestion: reading AsciiDoc files, preprocessing content, creating embeddings, and storing them in the embedding store.

The architecture diagram is shown below:

Building and Running the Project

The project is available on GitHub here. You can either clone the repository or browse the sources directly on GitHub. I’ll refer to it throughout the article.

Before building and running the project, you need to configure the path to the AsciiDoc documentation the assistant will work with. If you already have the documentation locally—great! If not, you can clone the Helidon repository from GitHub:

git clone https://github.com/helidon-io/helidon.git

The documentation files are located in docs/src/main/asciidoc/mp.

Next, update the application configuration file located at src/main/resources/application.yaml with the path to your AsciiDoc files:

app:
  root: "//home/dmitry/github/helidon/docs/src/main/asciidoc/mp"
  inclusions: "*.adoc"

Make sure to adjust the root path to match your local environment. You can also use the inclusions and exclusions properties to filter which files under the root directory should be included during ingestion.

Now you’re ready to build the application:

mvn clean package

And launch it:

java -jar target/helidon-assistant.jar

Once running, open your browser and go to http://localhost:8080. You’ll be greeted with the assistant’s web interface, where you can start asking questions.

Here are a few example questions to get you started:

  • How to use metrics with Helidon?
  • What does the @Retry annotation do?
  • How can I configure a web server?
  • How can I connect to a database?

In the next section, we’ll take a closer look at the build script and project dependencies.

Dependencies

The project uses a standard Maven pom.xml configuration recommended for Helidon SE applications, with several additional dependencies specific to this use case. Below is a commented snippet explaining the purpose of each dependency:

<dependencies>
  <!-- Helidon integration with LangChain4J -->
  <dependency>
    <groupId>io.helidon.integrations.langchain4j</groupId>
    <artifactId>helidon-integrations-langchain4j</artifactId>
  </dependency>

  <!-- OpenAI provider: required for using the GPT-4o chat model.
       Replace this with another provider if using a different LLM. -->
  <dependency>
    <groupId>io.helidon.integrations.langchain4j.providers</groupId>
    <artifactId>helidon-integrations-langchain4j-providers-open-ai</artifactId>
  </dependency>

  <!-- LangChain4J embeddings model used for RAG functionality -->
  <dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-embeddings-all-minilm-l6-v2</artifactId>
  </dependency>

  <!-- AsciidoctorJ: used to parse and process AsciiDoc documentation files -->
  <dependency>
    <groupId>org.asciidoctor</groupId>
    <artifactId>asciidoctorj</artifactId>
    <version>${version.lib.asciidoctorj}</version>
  </dependency>

  <!-- Various Helidon dependencies needed for the application's
       proper functionality -->
  <dependency>
    <groupId>io.helidon.webserver</groupId>
    <artifactId>helidon-webserver</artifactId>
  </dependency>
  <dependency>
    <groupId>io.helidon.webserver</groupId>
    <artifactId>helidon-webserver-static-content</artifactId>
  </dependency>
  <dependency>
    <groupId>io.helidon.http.media</groupId>
    <artifactId>helidon-http-media-jsonp</artifactId>
  </dependency>
  <dependency>
    <groupId>io.helidon.config</groupId>
    <artifactId>helidon-config-yaml</artifactId>
  </dependency>

  <!-- Logging -->
  <dependency>
    <groupId>io.helidon.logging</groupId>
    <artifactId>helidon-logging-jul</artifactId>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

You can use these dependencies for other AI-powered projects with minimal changes.

Application main class

The ApplicationMain class serves as the application’s entry point. It contains the main method, which performs the following steps:

  1. Enables runtime logging.
  2. Loads the application configuration.
  3. Ingests documentation into the embedding store.
  4. Sets up web server routing to serve static web pages and handle user requests.
  5. Starts the web server.

Below is a snippet of the main method:

public static void main(String[] args) {
    // Make sure logging is enabled as the first thing
    LogConfig.configureRuntime();

    var config = Services.get(Config.class);

    // Initialize embedding store
    Services.get(DocsIngestor.class)
            .ingest();

    // Static content setup
    var staticContentFeature = StaticContentFeature.builder()
            .addClasspath(cl -> cl.location("WEB")
                    .context("/ui")
                    .welcome("index.html"))
            .build();

    // Initialize and start web server
    WebServerConfig.builder()
            .addFeature(staticContentFeature)
            .config(config.get("server"))
            .routing(ApplicationMain::routing)
            .build()
            .start();
}

In the next section, I’ll explain how the embedding store is created and initialized.

Preparing AsciiDoc Files for Embeddings

Although AsciiDoc is lightweight and human-readable, the source content isn’t immediately ready for use in AI-powered retrieval. AsciiDoc files often contain structural directives like include statements, developer comments, attribute substitutions, and variables intended for conditional rendering or reuse. These elements are meaningful for human readers or documentation generators but can confuse or mislead a language model if left unprocessed. Additionally, formatting artifacts and metadata can introduce noise. Without proper preprocessing, the resulting embeddings might be irrelevant or misleading, which degrades the quality and accuracy of the assistant’s responses.

To address this, we apply a structured preprocessing pipeline.

  • AsciidoctorJ Integration: We use the official AsciidoctorJ parser to fully parse AsciiDoc documents. This library resolves include directives automatically and gives us a structured representation of the content.
  • Section-Based Chunking: We group content elements by their surrounding section and generate one embedding per section. This preserves logical and thematic boundaries and helps ensure responses remain relevant.
  • Preserve Atomic Elements: We make sure that tables and code snippets are not split across chunks. This is critical to retain the contextual meaning of examples and structured content.
  • Attach Metadata: Each chunk is enriched with metadata such as document title, relative file path, and section index. This helps reconstruct the document context when presenting answers.
  • Repeat for Each File: This process is repeated for each .adoc file identified by the inclusion pattern in the configuration.

This preprocessing ensures that the AI retrieves precise, coherent documentation segments in response to user queries, resulting in more accurate and helpful answers.

Here’s how the implementation is structured:

  • AsciiDocPreprocessor.java parses the file and produces a list of document chunks.
  • ChunkGrouper.java groups chunks into section-based logical units.
  • FileLister.java reads the directory path and applies inclusion/exclusion patterns.
  • DocsIngestor.java orchestrates the overall process: listing files, extracting and grouping chunks, converting them to TextSegment objects, and storing the resulting embeddings.

A simplified snippet from DocsIngestor.java demonstrates the ingestion logic:

public void ingest() {
    var files = FileLister.listFiles(root, exclusions, inclusions);
    var processor = new AsciiDocPreprocessor();
    var grouper = new ChunkGrouper(1000);

    for (Path path : files) {
        var chunks = processor.extractChunks(path.toFile());
        var groupedChunks = grouper.groupChunks(chunks);

        List<TextSegment> segments = new ArrayList<>();
        for (int i = 0; i < groupedChunks.size(); i++) {
            var chunk = groupedChunks.get(i);
            var metadata = new Metadata()
                    .put("source", path.toFile().getAbsolutePath())
                    .put("chunk", String.valueOf(i + 1))
                    .put("type", chunk.type().name())
                    .put("section", chunk.sectionPath());

            segments.add(TextSegment.from(chunk.text(), metadata));
        }

        var embeddings = embeddingModel.embedAll(segments);
        embeddingStore.addAll(embeddings.content(), segments);
    }
}
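As an illustration of the grouping step, a simplified grouper could look roughly like the following. This is a hedged sketch rather than the actual ChunkGrouper.java from the repository; the nested Chunk record and ChunkType enum are stand-ins inferred from the accessors used in the snippet above (text(), type(), sectionPath()):

import java.util.ArrayList;
import java.util.List;

public class ChunkGrouper {

    // Stand-ins for the real chunk types, inferred from the DocsIngestor snippet.
    enum ChunkType { TEXT, TABLE, CODE }
    record Chunk(String text, ChunkType type, String sectionPath) { }

    private final int maxChars;

    public ChunkGrouper(int maxChars) {
        this.maxChars = maxChars;
    }

    public List<Chunk> groupChunks(List<Chunk> chunks) {
        List<Chunk> grouped = new ArrayList<>();
        StringBuilder buffer = new StringBuilder();
        Chunk first = null;

        for (Chunk chunk : chunks) {
            boolean sameSection = first != null
                    && first.sectionPath().equals(chunk.sectionPath());
            boolean fits = buffer.length() + chunk.text().length() <= maxChars;

            if (sameSection && fits) {
                // Grow the current group; each chunk is appended whole, so
                // atomic elements like tables and code snippets are never split.
                buffer.append('\n').append(chunk.text());
            } else {
                if (first != null) {
                    grouped.add(new Chunk(buffer.toString(), first.type(), first.sectionPath()));
                }
                buffer = new StringBuilder(chunk.text());
                first = chunk;
            }
        }
        if (first != null) {
            grouped.add(new Chunk(buffer.toString(), first.type(), first.sectionPath()));
        }
        return grouped;
    }
}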

Serving static web pages

The UI consists of a single index.html file located in the resources/WEB directory. It’s styled using the Bulma CSS framework, which is designed to be JavaScript free.

But there is a small piece of JavaScript code anyway. It sends user messages to the backend when the Send button is clicked, updates the chat window with the response, and manages conversation summary state.

To serve this page, we register a StaticContentFeature during the web server startup. The code below demonstrates how it’s done in the main method.

// Static content setup
var staticContentFeature = StaticContentFeature.builder()
        .addClasspath(cl -> cl.location("WEB")
                .context("/ui")
                .welcome("index.html"))
        .build();

// Initialize and start web server
WebServerConfig.builder()
        .addFeature(staticContentFeature)
        ...

The /ui path is registered to serve the static content. If a user tries to open another path, they will be redirected to /ui. This is done in the routing method.

static void routing(HttpRouting.Builder routing) {
    routing.any("/", (req, res) -> {
                // showing the capability to run on any path, and redirecting from root
                res.status(Status.MOVED_PERMANENTLY_301);
                res.headers().set(UI_REDIRECT);
                res.send();
            })
            .register("/chat", Services.get(ChatBotService.class));
}

Processing user requests

When the user clicks the Send button in the UI, a server call to the /chat endpoint is initiated. This request sends the user’s message along with a conversation summary to the server. We’ll discuss conversation summaries in a later section—let’s first focus on how the request is processed on the server side.

User requests to /chat are handled by the ChatBotService.java class. This class is registered during web server initialization, as shown in ApplicationMain.java. Below is a simplified snippet that demonstrates how it’s done:

static void routing(HttpRouting.Builder routing) {
    routing.register("/chat", Services.get(ChatBotService.class));
    ...
}

The ChatBotService class contains the chatWithAssistant method, which handles incoming requests. It performs the following steps:

  1. Extracts the user’s message and conversation summary from the request.
  2. Invokes ChatAiService, passing the message and summary to generate a response.
  3. Uses SummaryAiService to create an updated conversation summary.
  4. Builds a JSON object containing the response and the updated summary, and sends it back to the client.

Here’s the simplified code for the chatWithAssistant method:

private void chatWithAssistant(ServerRequest req, ServerResponse res) {
    var json = req.content().as(JsonObject.class);
    var message = json.getString("message");
    var summary = json.getString("summary");

    var answer = chatAiService.chat(message, summary);
    var updatedSummary = summaryAiService.chat(summary, message, answer);

    var returnObject = JSON.createObjectBuilder()
                .add("message", answer)
                .add("summary", updatedSummary)
                .build();
    res.send(returnObject);
}

The ChatAiService.java class is implemented as a Helidon AI service. You can learn more about AI services and how to implement them in my previous article.

Here’s the relevant code:

@Ai.Service
public interface ChatAiService {

    @SystemMessage("""
            You are Frank, a helpful Helidon expert.

            Only answer questions related to Helidon and its components. If a question is not relevant to Helidon, 
            politely decline.

            Use the following conversation summary to keep context and maintain continuity:
            {{summary}}
            """)
    String chat(@UserMessage String question, @V("summary") String previousConversationSummary);
}

Making the Application Stateless

In a typical chat application, the backend must maintain the full history of the conversation in order to understand the user’s intent. This is because language models like OpenAI’s GPT rely heavily on context — they need to see the dialogue leading up to the current question to provide an accurate and helpful answer. The longer and more complex the conversation, the more memory is required to hold that context.

However, storing chat history introduces challenges. If you’re running a single backend instance, you might store this state in memory. But in a production environment, especially in cloud-native deployments, applications often scale horizontally — meaning multiple instances of the backend may be running behind a load balancer. In such setups, traditional in-memory storage for chat history doesn’t work: the next request from the same user might be routed to a different backend instance that has no access to prior state.

This is where statelessness becomes critical. Stateless services are inherently scalable, easier to maintain, and more resilient. But to make a chatbot stateless without sacrificing conversation quality, we need a way to preserve and compress context — and that’s where AI-powered summarization comes in.

By summarizing the chat history into a compact form after every message, we replace a long list of messages with a lightweight, synthetic memory that still captures the user’s intent and context. This summary is sent along with the next message, enabling consistent, relevant responses while allowing each request to be handled independently.

The Helidon Assistant uses this technique to remain stateless and cloud-native, ensuring it can scale easily while maintaining meaningful conversations with users.
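To make the round trip concrete, a single exchange with the /chat endpoint could look like this (the message and summary field names match the chatWithAssistant method shown earlier; the values are purely illustrative):

Request:

{
  "message": "How can I configure a web server?",
  "summary": "User is exploring Helidon configuration topics."
}

Response:

{
  "message": "You can configure the Helidon web server through application.yaml ...",
  "summary": "User asked about web server configuration after exploring general Helidon topics."
}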

The summarizer is implemented as an AI service. You can read more about AI services and how to implement them in my previous article.

@Ai.Service
public interface SummaryAiService {

    @SystemMessage("""
        You are a conversation summarizer for an AI assistant. Your job is to keep a concise summary of the
        ongoing conversation to preserve context.
        Given the previous summary, the latest user message, and the AI's response, update the summary so it
        reflects the current state of the conversation.
        Keep it short, factual, and focused on what the user is doing or trying to achieve. Avoid rephrasing the
        entire response or repeating long parts verbatim.
        """)
    @UserMessage("""
        Previous Summary: 
        {{summary}}

        Last User Message:
        {{lastUserMessage}}

        Last AI Response:
        {{aiResponse}}
        """
    )
    String chat(@V("summary") String previousSummary,
                @V("lastUserMessage") String latestUserMessage,
                @V("aiResponse") String aiResponse);
}

Wrapping Up

That’s it — we’ve built a fully working, stateless AI assistant powered by Helidon and LangChain4J. Hopefully, everything is clear and nothing important was left out. But if something feels confusing or needs more explanation, I’d love to hear your thoughts. Feedback is always welcome — whether it’s a bug, a missing step, or just a better way to do things.

Want to dive into the code or try it yourself? You’ll find everything here:

GitHub: Helidon Assistant

Thanks for reading — and happy coding!


by dmitrykornilov at May 16, 2025 12:57 PM

Performance Best Practice no. 2: Implement Caching

by Ondro Mihályi at May 12, 2025 02:54 PM

Jakarta EE applications can achieve better performance with a layered caching setup that combines client-side and server-side caching [1].

Caching Methods

Caching is a powerful technique used to improve the performance and scalability of applications. By storing frequently accessed data closer to where it’s needed, caching can significantly reduce latency and system load. Let’s take a look at some common caching levels and best practices:

Caching Level | Best Practice                   | Impact
Client-side   | Browser cache for static assets | Reduces server requests
Application   | In-memory caching for data      | Lowers database load
Database      | ResultSet caching               | Cuts down query execution times
Distributed   | Cross-server cache sharing      | Enhances scalability

At the client side, web browsers store static assets like images and stylesheets locally, reducing the need to re-download them on future visits. Most browsers try to guess which resources should be cached and for how long. However, it is best practice to control caching explicitly via HTTP headers in responses.
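As a hedged Jakarta EE illustration (the resource, path, and values below are made up for the example), a JAX-RS endpoint can set such headers explicitly through CacheControl:

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.CacheControl;
import jakarta.ws.rs.core.Response;

@Path("/assets/logo")
public class LogoResource {

    @GET
    public Response logo() {
        CacheControl cacheControl = new CacheControl();
        cacheControl.setMaxAge(86400); // let browsers cache this asset for one day

        return Response.ok(loadLogoBytes(), "image/png")
                .cacheControl(cacheControl)
                .build();
    }

    private byte[] loadLogoBytes() {
        return new byte[0]; // placeholder for reading the actual static asset
    }
}

This sends a Cache-Control: max-age=86400 header, so repeat visits are served from the browser cache instead of hitting the server again.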

At the application level, in-memory caches such as Redis or Memcached store frequently accessed data closer to the app, minimizing the need for repeated processing or database queries. On top

[…]

The post Performance Best Practice no. 2: Implement Caching appeared first on OmniFish - Modern Jakarta EE Runtimes.


by Ondro Mihályi at May 12, 2025 02:54 PM

Performance Best Practice no. 1: Optimize database operations

by Ondro Mihályi at May 05, 2025 11:59 AM

Database operations are a critical part of most applications with regard to performance. There are multiple reasons why database operations can significantly contribute to lower performance:

  • The database often runs on a remote server, which slows down communication and data transfer
  • Establishing individual connections to a database can take a significant portion of time compared to running the whole database query
  • Database queries can run for a long time
  • Network communication is unstable and may require restarting queries in case of network failures

You can address the above issues and boost Jakarta EE database performance by leveraging the following best practices.

  • Adjust connection pool sizes to align with workload requirements
    • 🛈 Tip: Thread pool max size should usually be bigger than connection pool max size.
    • 🛈 Tip: Connection pool max size should reflect the maximum number of connections allowed by the database.
    • 🛈 Tip: Connection idle timeout (time after which unused connections are closed) should be shorter than on the database side to avoid reusing stale connections if the database already closed them.
  • Use Prepared Statements and reuse them when calling the same query to avoid repetitive SQL parsing
    • 🛈 Tip: When using Jakarta Persistence (JPA) queries, prepared statements are used automatically by the persistence provider
  • Implement statement
[…]
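Even from this excerpt, the pool-sizing tips can be sketched in portable Jakarta EE code. The class and all values below are illustrative assumptions; in practice, pools are frequently configured in the application server rather than in code:

import jakarta.annotation.sql.DataSourceDefinition;
import jakarta.enterprise.context.ApplicationScoped;

@DataSourceDefinition(
        name = "java:app/jdbc/AppDS",
        className = "org.postgresql.ds.PGSimpleDataSource",
        url = "jdbc:postgresql://db.example.com:5432/app",
        user = "app",
        password = "secret",
        minPoolSize = 5,
        maxPoolSize = 20,  // should not exceed what the database allows
        maxIdleTime = 120  // seconds; keep below the database-side idle timeout
)
@ApplicationScoped
public class DataSourceConfig {
}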

The post Performance Best Practice no. 1: Optimize database operations appeared first on OmniFish - Modern Jakarta EE Runtimes.


by Ondro Mihályi at May 05, 2025 11:59 AM

The Payara Monthly Catch - April 2025

by Chiara Civardi (chiara.civardi@payara.fish) at April 28, 2025 10:19 AM

April was a month of rich fishing grounds: we reeled in top-tier blog posts, made waves at international conferences, and navigated exciting new waters in both AI and Java development. From sharpening your developer toolkit to charting safer courses for application security, we've caught some prize-worthy resources you won't want to throw back.

Drop anchor and take a look at the finest catches from our voyages this past month!


by Chiara Civardi (chiara.civardi@payara.fish) at April 28, 2025 10:19 AM

Replace simple sub-GitHub Actions with plain run

April 23, 2025 12:00 AM

Use the GH CLI directly instead of wrappers you don't need

April 23, 2025 12:00 AM

How to run WildFly applications with JBang

by F.Marchioni at April 13, 2025 02:39 PM

In this article we will learn how to provision a WildFly Server using the JBang scripting tool. Starting from a basic example using a REST endpoint, we will show how to enable additional features on the application server with the WildFly Glow tooling. Why Run WildFly in your Java scripts? WildFly is an application server ... Read more

The post How to run WildFly applications with JBang appeared first on Mastertheboss.


by F.Marchioni at April 13, 2025 02:39 PM

The Payara Monthly Catch - March 2025

by Nastasija Trajanova at March 31, 2025 08:00 AM

March was quite eventful indeed! ✨ This month, our Payarans were on the move — bringing their energy, insights, and expertise to not one, but two major Java conferences: Devnexus and JavaOne. With expert-led talks, exciting conversations at our booths, and the opportunity to meet the community in person, it was a month to remember.

But that’s not all! We’re not slowing down — JavaLand in Germany is just around the corner. If you’re attending, make sure to meet up with our brilliant team members Dominika and Chiara, who will be representing Payara on the ground. Don’t be shy — come say hi!


by Nastasija Trajanova at March 31, 2025 08:00 AM

Install VirtualBox over Fedora with SecureBoot enabled

March 24, 2025 12:00 AM

Not too long ago, I upgraded my computer and got a new Lenovo ThinkPad X1 Carbon (a great machine so far!).

Lenovo

Since I was accustomed to working on a gaming rig (Ryzen 7, 64GB RAM, 4TB) that I had set up about five years ago, I completely missed the Secure Boot and TPM trends—these weren’t relevant for my fixed workstation.

However, my goal with this new laptop is to work with both Linux and Windows on the go, making encryption mandatory. As a non-expert Windows user, I enabled encryption via BitLocker on Windows 11, which worked perfectly... until it didn’t.

The Issue with Secure Boot and VirtualBox/VMware

This week, I discovered that BitLocker leverages TPM (the encryption chip) and Secure Boot if they’re enabled during encryption. While this is beneficial for Windows users, it created an unexpected problem for me: virtualization on Linux.

Let me explain. Secure Boot is:

...an enhancement of the security of the pre-boot process of a UEFI system. When enabled, the UEFI firmware verifies the signature of every component used in the boot process. This results in boot files that are easily readable but tamper-evident.

This means components like the kernel, kernel modules, and firmware must be signed with a recognized signature, which must be installed on the computer.
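You can quickly confirm whether Secure Boot is active with mokutil (the exact output wording may vary between versions):

$ mokutil --sb-state
SecureBoot enabled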

This creates a tricky situation for Linux because virtualization software like VMware or VirtualBox typically compiles kernel modules on the user’s machine. These modules are unsigned by default, leading to errors when loading them:

# modprobe vboxdrv
modprobe: ERROR: could not insert 'vboxdrv': Key was rejected by service

A good way to diagnose this is to check dmesg for messages like:

[47921.605346] Loading of unsigned module is rejected
[47921.664572] Loading of unsigned module is rejected
[47932.035014] Loading of unsigned module is rejected
[47932.056838] Loading of unsigned module is rejected
[47947.224484] Loading of unsigned module is rejected
[47947.257641] Loading of unsigned module is rejected
[48291.102147] Loading of unsigned module is rejected

How to Fix the Issue with VirtualBox Using RPMFusion and Akmods

Oracle is aware of this issue, but their documentation is lacking. To quote:

If you are running on a system using UEFI (Unified Extensible Firmware Interface) Secure Boot, you may need to sign the following kernel modules before you can load them: vboxdrv, vboxnetadp, vboxnetflt, vboxpci. See your system documentation for details of the kernel module signing process.

Fedora’s documentation is sparse, so I spent a lot of time researching manual kernel module signing (Fedora docs) and following user guides until I discovered that VirtualBox is available in RPMFusion with akmods support.

Some definitions:

  1. RPM Fusion is a community repository for Enterprise Linux (Fedora, RHEL, etc.) that provides packages not included in official distributions.
  2. Akmods automates the process of building and signing kernel modules.

Here’s the step-by-step solution:

1. Enable RPM Fusion (Free Repo)

Install the RPM Fusion free repository:

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm

RPM Fusion Install

2. Install VirtualBox (with Akmods)

Ensure VirtualBox is installed from RPMFusion (akmods will be a dependency):

sudo dnf install virtualbox

VirtualBox Install

VirtualBox Akmods

3. Start Akmods to Generate Keys

Akmods will automatically sign the modules with a key stored in /etc/pki/akmods/certs:

sudo systemctl start akmods.service

Akmods Start

4. Enroll the Key with Mokutil

Use mokutil to register the key in Secure Boot:

sudo mokutil --import /etc/pki/akmods/certs/public_key.der

Mokutil Key Import

You’ll be prompted for a case-sensitive password—remember it for the next step.

5. Reboot and Enroll the Key

After rebooting, the UEFI firmware will prompt you to enroll the new key.

MOK Enrollment

MOK Enrollment 3

If needed, you can also inspect the key contents on that screen.

MOK Enrollment 2

6. Start VirtualBox Kernel Modules

The modules are now signed and can be loaded. Enable these at boot:

sudo systemctl start vboxdrv
sudo systemctl enable vboxdrv

Verify they’re loaded:

lsmod | grep vbox

Output:

vboxnetadp             32768  0
vboxnetflt             40960  0
vboxdrv               708608  2 vboxnetadp,vboxnetflt

Now, VirtualBox runs on Fedora with Secure Boot and TPM enabled, without disabling BitLocker on Windows.

VirtualBox Running


March 24, 2025 12:00 AM

Developing AI-Powered Applications with Helidon and LangChain4J

by dmitrykornilov at March 13, 2025 01:33 PM

Introduction

The rise of Large Language Models (LLMs) has opened new doors for AI-powered applications, enabling dynamic interactions, natural language processing, and retrieval-augmented generation (RAG). However, integrating these powerful models into Java applications can be challenging. This is where LangChain4J comes in – a framework designed to simplify AI development in Java.

To take things a step further, in version 4.2, Helidon introduced seamless LangChain4J integration, making it easier to build AI-driven applications while leveraging Helidon’s programming model and style. In this blog post, we’ll explore how this integration simplifies AI application development and how you can use it in your projects.

What is LangChain4J?

LangChain4J is a Java framework that facilitates building AI-powered applications using LLMs from providers like OpenAI, Cohere, Hugging Face, and others. It provides:

  • AI Services: A declarative and type-safe API to interact with models.
  • Retrieval-Augmented Generation (RAG): Enhancing responses with external knowledge sources.
  • Embeddings and Knowledge Retrieval: Working with vector-based search systems.
  • Memory and Context: Managing conversational memory for intelligent interactions.

However, integrating LangChain4J manually into an application requires configuring components, managing dependencies, and handling injections manually. This is where Helidon’s integration module provides significant advantages.

How Helidon Simplifies LangChain4J Integration

Before we proceed, note that Helidon’s LangChain4J integration is a preview feature in Helidon 4.2. This means that while it’s production-ready, the Helidon team reserves the right to modify APIs in minor versions.

Helidon’s LangChain4J integration introduces:

  • Helidon Inject Support: LangChain4J components are automatically created and registered in the Helidon service registry based on configuration.
  • Convention Over Configuration: Reduces boilerplate by using sensible defaults.
  • Declarative AI Services: Uses annotations to define AI services in a clean, structured manner.
  • CDI Integration: Components work seamlessly in Helidon MP.

These features significantly reduce the complexity of incorporating AI into Helidon applications.

Setting Up LangChain4J in Helidon

To use LangChain4J with Helidon, add the following dependency to your Maven project:

<dependency>
    <groupId>io.helidon.integrations.langchain4j</groupId>
    <artifactId>helidon-integrations-langchain4j</artifactId>
</dependency>

Include the necessary annotation processors in the <build><plugins> section of your pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>io.helidon.codegen</groupId>
                <artifactId>helidon-codegen-apt</artifactId>
            </path>
            <path>
                <groupId>io.helidon.integrations.langchain4j</groupId>
                <artifactId>helidon-integrations-langchain4j-codegen</artifactId>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>

Using different LLM providers may require additional dependencies. For example, using OpenAI models requires adding a dependency to the OpenAI provider, while using Ollama requires adding the Ollama provider. This modular approach helps keep applications lightweight. For more information about providers and dependencies, refer to the documentation.

Creating AI Components in Helidon

When I refer to AI components, I mean the public API classes provided by LangChain4J. Examples include various models such as OpenAiChatModel and OllamaChatModel, embedding stores like InMemoryEmbeddingStore, content retrievers such as EmbeddingStoreContentRetriever, and ingestors like EmbeddingStoreIngestor, among others.

Some components are natively supported by the Helidon LangChain4J integration and can be automatically created when the corresponding configuration is present. The currently supported components include:

  • LangChain4J Core
    • EmbeddingStoreContentRetriever
    • MessageWindowChatMemory
  • Open AI
    • OpenAiChatModel
    • OpenAiStreamingChatModel
    • OpenAiEmbeddingModel
    • OpenAiImageModel
    • OpenAiLanguageModel
    • OpenAiModerationModel
  • Ollama
    • OllamaChatModel
    • OllamaStreamingChatModel
    • OllamaEmbeddingModel
    • OllamaLanguageModel
  • Cohere
    • CohereEmbeddingModel
    • CohereScoringModel
  • Oracle
    • OracleEmbeddingStore

For example, the OpenAI chat model can be automatically created by defining the following configuration in application.yaml:

langchain4j:
  open-ai:
    chat-model:
      enabled: true
      api-key: "demo"
      model-name: "gpt-4o-mini"

With this configuration, an instance of OpenAiChatModel is automatically created and can be injected into other application components.

A key element in this setup is the enabled property. The component is only created if this property is set to true. This provides an easy way to disable component creation while retaining its configuration in the file for future use.

If you need to create a component that is not in the list above and register it in the Helidon service registry, you can use a Supplier Factory:

@Service.Singleton
@Service.Named("MyChatModel")
class ChatModelFactory implements Supplier<ChatLanguageModel> {
    @Override
    public ChatLanguageModel get() {
        return OpenAiChatModel.builder()
                .apiKey("demo")
                .build();
    }
}

This method allows you to register custom embedding models and other AI components dynamically. The @Service.Named("MyChatModel") annotation is optional; you can add it to give your component a name for future reference.

Using AI Components

Helidon Inject makes it easy to use AI components within your application:

@Service.Singleton
public class MyService {
    private final ChatLanguageModel chatModel;

    @Service.Inject
    public MyService(ChatLanguageModel chatModel) {
        this.chatModel = chatModel;
    }
}

For named components, use:

@Service.Inject
public MyService(@Service.Named("MyChatModel") ChatLanguageModel chatModel) {
    this.chatModel = chatModel;
}

Alternatively, you can manually retrieve a component from the service registry as follows:

var chatModel = Services.get(OpenAiChatModel.class);

AI Services

Most often, AI-powered applications require a combination of different components working together. For example, a simple chat assistant requires a chat model to communicate with users, an embedding store to store data, an embedding model to retrieve and query the data, chat memory for keeping conversation context, etc. LangChain4J AI Services provide a way of combining different kinds of AI functionality behind a simple API, which significantly reduces the boilerplate code.

Helidon’s LangChain4J integration introduces a declarative Helidon Inject-based approach for creating AI Services. It supports the following components:

  • Chat Model:
    • dev.langchain4j.model.chat.ChatLanguageModel
  • Streaming Chat Model:
    • dev.langchain4j.model.chat.StreamingChatLanguageModel
  • Chat Memory:
    • dev.langchain4j.memory.ChatMemory
  • Chat Memory Provider:
    • dev.langchain4j.memory.chat.ChatMemoryProvider
  • Moderation Model:
    • dev.langchain4j.model.moderation.ModerationModel
  • RAG:
    • Content Retriever: dev.langchain4j.rag.content.retriever.ContentRetriever
    • Retrieval Augmentor: dev.langchain4j.rag.RetrievalAugmentor
  • Callback Functions:
    • Methods annotated with dev.langchain4j.agent.tool.Tool

Helidon makes it simple to define AI services using annotations:

@Ai.Service
public interface ChatAiService {
    String chat(String question);
}

In this scenario, all LangChain4J components from the list above are taken from the service registry. Users can still manually control the process by applying any of the following annotations, each of which specifies the name of a service to use for that particular function instead of discovering it automatically.

  • Ai.ChatModel: Specifies the name of a service in the service registry that implements ChatModel to be used in the annotated AI Service. Mutually exclusive with Ai.StreamingChatModel.
  • Ai.StreamingChatModel: Specifies the name of a service in the service registry that implements StreamingChatModel to use in the annotated AI Service. Mutually exclusive with Ai.ChatModel.
  • Ai.ChatMemory: Specifies the name of a service in the service registry that implements ChatMemory to use in the annotated AI Service. Mutually exclusive with Ai.ChatMemoryWindow and Ai.ChatMemoryProvider.
  • Ai.ChatMemoryWindow: Adds a MessageWindowChatMemory with the specified window size to the annotated AI Service. Mutually exclusive with Ai.ChatMemory and Ai.ChatMemoryProvider.
  • Ai.ChatMemoryProvider: Specifies the name of a service in the service registry that implements ChatMemoryProvider to use in the annotated AI Service. Mutually exclusive with Ai.ChatMemory and Ai.ChatMemoryWindow.
  • Ai.ModerationModel: Specifies the name of a service in the service registry that implements ModerationModel to use in the annotated AI Service.
  • Ai.ContentRetriever: Specifies the name of a service in the service registry that implements ContentRetriever to use in the annotated AI Service. Mutually exclusive with Ai.RetrievalAugmentor.
  • Ai.RetrievalAugmentor: Specifies the name of a service in the service registry that implements RetrievalAugmentor to use in the annotated AI Service. Mutually exclusive with Ai.ContentRetriever.

For example, in the snippet below a service named “MyChatModel” will be used as the chat model, and all other components are discovered automatically.

@Ai.Service
@Ai.ChatModel("MyChatModel")
public interface ChatAiService {
    String chat(String question);
}

Automatic discovery can be switched off by using @Ai.Service(autodiscovery=false). In this case the service components are not discovered automatically, and users must add components manually using the annotations listed above; an @Ai.ChatModel or @Ai.StreamingChatModel annotation is required, as in the sketch below.
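A minimal sketch of such a fully manual service, using the annotations listed above (the service name and window size are illustrative):

@Ai.Service(autodiscovery = false)
@Ai.ChatModel("MyChatModel")
@Ai.ChatMemoryWindow(10)
public interface ManualChatAiService {
    String chat(String question);
}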

Tools: Enhancing AI with Custom Logic

LangChain4J tools enable AI models to invoke external functions during execution. This is useful when an LLM needs to perform an action during its conversation with a user, such as retrieving data, calling an external service, or executing code.

LangChain4J provides the @Tool annotation, which, when applied to a method, makes that method accessible for the LLM to call. The integration code scans the project for classes containing @Tool-annotated methods and automatically adds them to AI Services. The only requirement is that these classes must be Helidon Inject services.

@Service.Singleton
public class OrderService {
    @Tool("Get order details for specified order number")
    public Order getOrderDetails(String orderNumber) {
        // Business logic here
    }
}

If you are using Helidon MP, an additional step is required. You must annotate the service containing tools with the @Ai.Tool annotation. Additionally, the integration supports tools within CDI beans.

Samples

We have created several sample applications for you to explore. These samples demonstrate all aspects of using LangChain4J in Helidon applications.

Coffee Shop Assistant

The Coffee Shop Assistant is a demo application that showcases how to build an AI-powered assistant for a coffee shop. This assistant can answer questions about the menu, provide recommendations, and create orders. It utilizes an embedding store initialized from a JSON file.

Key features:

  • Integration with OpenAI chat models
  • Utilization of embedding models, an embedding store, an ingestor, and a content retriever
  • Helidon Inject for dependency injection
  • Embedding store initialization from a JSON file
  • Support for callback functions to enhance interactions

Check it out:

Hands-on Lab

We also offer a Hands-on Lab with step-by-step instructions on how to build the Coffee Shop Assistant:

Useful Resources

Ready to get started? Here are some useful resources:


by dmitrykornilov at March 13, 2025 01:33 PM

Jakarta EE Ambassadors Gather at Devnexus 2025 to Deliver Sessions on Jakarta EE

by mpredli01 at March 03, 2025 04:54 PM

Devnexus 2025 will be held Tuesday-Thursday, March 4-6, 2025 at the Georgia World Congress Center in Atlanta, Georgia. Members of the Jakarta EE Ambassadors will be in attendance to deliver presentations on Jakarta EE.

On March 4, developers will be able to choose from five workshops:

AI-Driven Development: Enhancing Java with the latest AI Innovations by Brian Benz

Developer to Architect by Nate Schutta

Migration Engineering with OpenRewrite: The Recipe for Success by Jonathan Schneider

Practical AI Lab for Enterprise Java Developers: From Zero to Hero by Daniel Oh, Eric Deandrea and James Falkner


Stream Processing As You’ve Never Seen Before (Seriously): Apache Flink for Java Developers by Viktor Gamov and Sandon Jacobs

Concurrent with the workshops are two summits for Java User Group leaders and Java Champions as they discuss issues that affect their respective Java User Groups and the roles of a Java Champion within the Java community.

On March 5-6, developers can expect many sessions on topics such as AI, Architecture, Jakarta EE, and Core Java (among many others).

The Jakarta EE track, in particular, includes 10 sessions, namely:

A Developer’s Guide to Jakarta EE 11 by Michael Redlich

AI Tools for Jakarta EE by Gaurav Gupta

Foundations of Modern Java Server Apps by Kito Mann

Java + LLMs: A hands-on guide to building LLM Apps in Java with JakartaEE by Bazlur Rahman and Shaaf Syed

Migrating from Java EE – to Spring Boot or something else? by Ondro Mihályi

Jakarta EE meets AI: Beyond the chatbot with LangChain4j by Jorge Cajas

Duke on CRaC with Jakarta EE by Ivar Grimstad and Rustam Mehmandarov

Concurrency redefined: what’s new in Jakarta Concurrency 3.1 by Chuck Bridgham and Harry Hoots III

Jakarta EE: Connected Industries with an Edge by Petr Aubrecht

Case Study: Journey to Cloud with Jakarta EE and MicroProfile by Julian Ortiz

Developers can also expect keynote addresses throughout the conference and peruse the vendors that will display their wares. The Eclipse Foundation will also have a booth where folks from the Java community involved in Jakarta EE and MicroProfile will be on hand for conversation and for example applications that demonstrate new features offered in Jakarta EE 11.


by mpredli01 at March 03, 2025 04:54 PM

Hibernate Performance Tuning – 2025 Edition

by Thorben Janssen at January 15, 2025 09:15 AM

Most Hibernate performance issues are caused by fetching related entities you don’t use in your business code. To make it even worse, this happens automatica...

by Thorben Janssen at January 15, 2025 09:15 AM

Scheduled Maintenance for accounts.eclipse.org Drupal 10 Migration

December 17, 2024 07:22 PM

We’re excited to announce that accounts.eclipse.org will migrate from Drupal 7 to Drupal 10 on January 19, 2025. This is a significant milestone in our ongoing effort to modernize the Eclipse Foundation’s web infrastructure, following the successful migration of the Project Management Infrastructure (PMI) earlier this year. This migration aligns with our plans that we outlined in last year’s post, Navigating the Shift From Drupal 7 to Drupal 9/10 at the Eclipse Foundation.

To ensure a smooth transition, we’ve scheduled a maintenance window on January 19, 2025, from 02:00 am CET to 06:30 pm CET, during which accounts.eclipse.org will be in read-only mode.

During this period, users will be able to log in and access other Eclipse Foundation websites, such as projects.eclipse.org, gitlab.eclipse.org, and marketplace.eclipse.org. However, some functionality, such as the profile edit form and the account creation form, will be temporarily disabled to prevent data loss while we migrate your profile data to a new database.

We’re excited about this migration and hope it will provide a better user experience for all. If you have any feedback or encounter issues, please visit our dedicated issue, which includes details on how to access the staging environment to preview the changes and how to share feedback with us!


December 17, 2024 07:22 PM

Date and Time Mappings with Hibernate and JPA

by Thorben Janssen at December 04, 2024 11:55 AM

The post Date and Time Mappings with Hibernate and JPA appeared first on Thorben Janssen.

Databases support various data types to store date and time information. The most commonly used ones are: You can map all of them with JPA and Hibernate. But you need to decide to which Java type you want to map your database column. The Java language supports a bunch of classes to represent date and...

The post Date and Time Mappings with Hibernate and JPA appeared first on Thorben Janssen.


by Thorben Janssen at December 04, 2024 11:55 AM

A practical guide to implement OpenTelemetry in Spring Boot

December 01, 2024 12:00 AM

In this tutorial I want to consolidate some practical ideas regarding OpenTelemetry and how to use it with Spring Boot.

This tutorial is composed of four sections:

  1. OpenTelemetry practical concepts
  2. Setting up an observability stack with OpenTelemetry Collector, Grafana, Loki, Tempo and Podman
  3. Instrumenting Spring Boot applications for OpenTelemetry
  4. Testing and E2E sample

By the end of the tutorial, you should be able to implement the following architecture:

Arch

OpenTelemetry practical concepts

As the official documentation states, OpenTelemetry is

  • An Observability framework and toolkit designed to create and manage telemetry data such as traces, metrics, and logs.
  • Vendor and tool-agnostic, meaning that it can be used with a broad variety of Observability backends.
  • Focused on the generation, collection, management, and export of telemetry. A major goal of OpenTelemetry is that you can easily instrument your applications or systems, no matter their language, infrastructure, or runtime environment.

Monitoring, observability and MELT (metrics, events, logs, traces)

To keep things short, monitoring is the process of collecting, processing and analyzing data to track the state of an (information) system. Observability then takes this to the next level: actually understanding the information that is being collected and doing something with it, like defining alerts for a given system.

To achieve both goals it is necessary to collect three dimensions of data, specifically:

  • Logs: Records of processes and applications, with useful data like timestamps and context
  • Metrics: Numerical data about the performance of applications and application modules
  • Traces: Data that allows you to establish the complete route that a given operation traverses through a series of dependent applications

Hence, when the state of a given system is altered in some way, we have an Event, which correlates these dimensions and ideally generates data on all three.

Why is OpenTelemetry important and which problem does it solve?

Developers recognize from experience that monitoring and observability are important, whether to evaluate the actual state of a system or to do post-mortem analysis after disasters. Hence, it is natural that observability has been implemented in various ways. For example, if we consider a system built with Java, we have at least the following collection points:

  • Logs: Systemd, /var/log, /opt/tomcat, FluentD
  • Metrics: Java metrics via JMX, OS Metrics, vendor specific metrics via Spring Actuator
  • Tracing: Data via Jaeger or Zipkin tooling in our Java workloads

This variety in turn imposes a great deal of complexity when instrumenting our systems: the information (a) comes in different formats, from (b) technology that is difficult to implement, often with (c) solutions that are too tied to a given provider or, in the worst cases, (d) technologies that only work with certain languages/frameworks.

And that's the magic of the OpenTelemetry proposal: by creating a working group under the CNCF umbrella, the project is able to provide useful things like:

  1. Common protocols that vendors and communities can implement to talk to each other
  2. Standards for software communities to implement instrumentation in libraries and frameworks to provide data in OpenTelemetry format
  3. A collector able to retrieve/receive data from diverse origins compatible with OpenTelemetry, process it and send it to ...
  4. Analysis platforms, databases and cloud vendors able to receive the data and provide added value over it

In short, OpenTelemetry is the union of various great monitoring ideas that overlapping software communities can implement to ease the burden of monitoring implementations.

OpenTelemetry data pipeline

For me, the easiest way to think about OpenTelemetry concepts is as a data pipeline. In this data pipeline you need to:

  1. Instrument your workloads to push (or offer) the telemetry data to a processing/collecting element -i.e. the OpenTelemetry Collector-
  2. Configure the OpenTelemetry Collector to receive or pull the data from diverse workloads
  3. Configure the OpenTelemetry Collector to process the data -e.g. adding special tags or filtering data-
  4. Configure the OpenTelemetry Collector to push (or offer) the data to compatible backends
  5. Configure and use the backends to receive (or pull) the data from the collector, enabling analysis, alarms, AI ... pretty much any use case that you can think of with data

Otel Pipeline

Setting up an observability stack with OpenTelemetry Collector, Grafana, Prometheus, Loki, Tempo and Podman

Collectorarch

As OpenTelemetry became popular, various vendors implemented support for it. To mention a few:

Self-hosted platforms

Cloud platforms

Hence, for development purposes, it is always useful to know how to bootstrap a quick observability stack able to receive and display OpenTelemetry data.

For this purpose we will use the following elements:

  • Prometheus as time-series database for metrics
  • Loki as logs platform
  • Tempo as a tracing platform
  • Grafana as a web UI

And of course OpenTelemetry collector. This example is based on various Grafana examples, with a little bit of tweaking to demonstrate the different ways of collecting, processing and sending data to backends.

OpenTelemetry collector

As stated previously, the OpenTelemetry Collector acts as an intermediary that receives/pulls information from data sources, processes this information, and forwards it to destinations like analysis platforms or even other collectors. The collector is able to do this either with compliant workloads or via plugins that talk to the workloads using proprietary formats.

As the plugin collection can be extended or trimmed, vendors have created their own distributions of the OpenTelemetry Collector; for reference, I've used several successfully in the real world:

You can find a complete list directly on the OpenTelemetry website.

For this demonstration, we will create a data pipeline using the contrib version of the reference implementation, which provides a good selection of receivers, exporters and processors. In our case the Otel configuration is designed to:

  • Receive data from Spring Boot workloads (ports 4317 and 4318)
  • Process the data, adding a new tag to metrics
  • Expose an endpoint for Prometheus scraping (port 8889)
  • Send logs to Loki (port 3100) using the otlphttp format
  • Send traces to Tempo (port 4317) using the otlp format
  • Expose a rudimentary dashboard from the collector, called zPages (very useful for debugging)

otel-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
processors:
  attributes:
    actions:
      - key: team
        action: insert
        value: vorozco
exporters:
  debug:
  prometheus:
    endpoint: "0.0.0.0:8889"
  otlphttp:
    endpoint: http://loki:3100/otlp
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true
service:
  extensions: [zpages]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug,prometheus]
    traces:
      receivers: [otlp]
      exporters: [debug, otlp]
    logs:
      receivers: [otlp]
      exporters: [debug, otlphttp]
extensions:
  zpages:
    endpoint: "0.0.0.0:55679"

Prometheus

Prometheus is a well-known analysis platform that, among other things, offers dimensional data and performant time-series storage.

By default it works as a metrics scraper: workloads provide an HTTP endpoint offering data in the Prometheus format. For our example we configured Otel to expose metrics for the Prometheus host on port 8889.

prometheus:
    endpoint: "0.0.0.0:8889"

Then, we need to configure Prometheus to scrape the metrics from the Otel host. You will notice two ports: the one that we defined for the active workload data (8889) and another exposing metrics for the collector itself (8888).

prometheus.yml

scrape_configs:
- job_name: "otel"
  scrape_interval: 10s
  static_configs:
    - targets: ["otel:8889"]
    - targets: ["otel:8888"]

It is worth highlighting that Prometheus also offers a way to ingest information instead of scraping it, and official support for OpenTelemetry ingestion is coming in newer versions.

Loki

As described on its website, Loki is a specific solution for log aggregation heavily inspired by Prometheus, with the particular design decision NOT to format the log contents in any way, leaving that responsibility to the query system.

To configure the project for local environments, the project offers a configuration that is usable for most development purposes. The following configuration is an adaptation that preserves the bare minimum to work with temporary files and memory.

loki.yaml

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

limits_config:
  allow_structured_metadata: true

Then, we configure an exporter to deliver the data to the Loki host using the otlphttp format.

otlphttp:
  endpoint: http://loki:3100/otlp

Tempo

In a similar fashion to Loki, Tempo is an open source project created by Grafana that aims to provide a distributed tracing backend. On a personal note, besides performance it shines for its compatibility: it can ingest not only OpenTelemetry but also Zipkin and Jaeger formats.

To configure the project for local environments, the project offers a configuration that is usable for most development purposes. The following configuration is an adaptation that removes the metrics generation and simplifies the setup; however, with this we lose the service graph feature.

tempo.yaml

stream_over_http_enabled: true
server:
  http_listen_port: 3200
  log_level: info

query_frontend:
  search:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
    metadata_slo:
      duration_slo: 5s
      throughput_bytes_slo: 1.073741824e+09
  trace_by_id:
    duration_slo: 5s

distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

storage:
  trace:
    backend: local                     # backend configuration to use
    wal:
      path: /var/tempo/wal             # where to store the wal locally
    local:
      path: /var/tempo/blocks

Then, we configure an exporter to deliver the data to the Tempo host using the otlp/gRPC format.

otlp:
    endpoint: tempo:4317
    tls:
      insecure: true

Grafana

Loki, Tempo and (to some extent) Prometheus are data stores, but we still need to show this data to the user. Here, Grafana enters the scene.

Grafana offers a good selection of analysis tools, plugins, dashboards, alarms, connectors and a great community that empowers observability. Besides great compatibility with Prometheus, it of course offers perfect compatibility with Grafana's other offerings.

To configure Grafana you just need to plug in compatible data sources; the rest of the work happens in the web UI.

grafana.yaml

apiVersion: 1

datasources:
  - name: Otel-Grafana-Example
    type: prometheus
    url: http://prometheus:9090
    editable: true
  - name: Loki
    type: loki
    access: proxy
    orgId: 1
    url: http://loki:3100
    basicAuth: false
    isDefault: true
    version: 1
    editable: false
  - name: Tempo
    type: tempo
    access: proxy
    orgId: 1
    url: http://tempo:3200
    basicAuth: false
    version: 1
    editable: false
    apiVersion: 1
    uid: tempo

Podman (or Docker)

At this point you may have noticed that I've referred to the backends using single names; this is because I intend to set these names in a Podman Compose deployment.

otel-compose.yml

version: '3'
services:
  otel:
    container_name: otel
    image: otel/opentelemetry-collector-contrib:latest
    command: [--config=/etc/otel-config.yaml]
    volumes:
      - ./otel-config.yaml:/etc/otel-config.yaml
    ports:
      - "4318:4318"
      - "4317:4317"
      - "55679:55679"
  prometheus:
    container_name: prometheus
    image: prom/prometheus
    command: [--config.file=/etc/prometheus/prometheus.yml]
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9091:9090"
  grafana:
    container_name: grafana
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    image: grafana/grafana
    volumes:
      - ./grafana.yaml:/etc/grafana/provisioning/datasources/default.yaml
    ports:
      - "3000:3000"
  loki:
    container_name: loki
    image: grafana/loki:3.2.0
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./loki.yaml:/etc/loki/local-config.yaml
    ports:
      - "3100"
  tempo:
    container_name: tempo
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
    ports:
      - "4317"  # otlp grpc
      - "4318"

At this point the compose description is pretty self-descriptive, but I would like to highlight some things:

  • Some ports are published to the host -e.g. 4318:4318- while others are reachable only on the default network that Compose creates among the containers -e.g. 3100-
  • This stack is designed to avoid any permanent data. Again, this is my personal way to quickly boot an observability stack to allow tests during development. To make it production-ready, you would probably want to preserve the data in volumes

Once the configuration is ready, you can launch it using the compose file

cd podman
podman compose -f otel-compose.yml up

If the configuration is ok, you should have five containers running without errors.

Podman Otel

Instrumenting Spring Boot applications for OpenTelemetry

Springbootarch

As part of my daily activities I was in charge of a major implementation of all these concepts, hence it was natural for me to create a proof of concept, which you can find on my GitHub.

For demonstration purposes we have two services with different HTTP endpoints:

  • springboot-demo:8080 - Useful to demonstrate local and database tracing, performance, logs and OpenTelemetry instrumentation
    • /books - A books CRUD using Spring Data
    • /fibo - A Naive Fibonacci implementation that generates CPU load and delays
    • /log - Generates log messages using the different SLF4J levels
  • springboot-client-demo:8081 - Useful to demonstrate tracing capabilities, Micrometer instrumentation and Micrometer Tracing instrumentation
    • /trace-demo - A quick OpenFeign client that invokes the books service's Get All Books endpoint

Instrumentation options

Given the popularity of OpenTelemetry, developers can also expect multiple instrumentation options.

First of all, the OpenTelemetry project offers framework-agnostic instrumentation that uses bytecode manipulation; for this instrumentation to work you need to attach a Java agent to the JVM via the -javaagent flag. In my experience this instrumentation is preferred if you don't control the workload or if your platform does not offer OpenTelemetry support at all.
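For instance, a typical agent invocation looks roughly like this (paths and the service name are illustrative; the agent jar comes from the opentelemetry-java-instrumentation releases):

java -javaagent:./opentelemetry-javaagent.jar \
     -Dotel.service.name=springboot-demo \
     -jar target/app.jar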

However, instrumentation of workloads can become really specific -e.g. instrumentation of a database pool wired through a particular IoC mechanism-. For this, the Java world provides a good ecosystem, for example:

And of course Spring Boot.

Spring Boot is a special case with TWO major instrumentation options

  1. OpenTelemetry's Spring Boot starter
  2. Micrometer and Micrometer Tracing

Both options use Spring concepts like decorators and interceptors to capture and send information to the destinations. The only rule is to create the clients/services/objects in the Spring way (hence via Spring IoC).

I've used both successfully and my heavily opinionated conclusion is the following:

  • Micrometer collects more information about Spring metrics. Besides the OpenTelemetry backend, it supports a plethora of backends directly without any collector intervention. If you cannot afford a collector, this is the way; from the Micrometer perspective, OpenTelemetry is just another backend
  • Micrometer Tracing is the evolution of Spring Cloud Sleuth; hence if you have workloads on both Spring Boot 2 and 3, you have to support both tools (or maybe migrate everything to Spring Boot 3?)
  • The Micrometer family does not offer a way to collect logs and send them to a backend, so devs have to solve this with an appender specific to their logging library. On the other hand, the OpenTelemetry Spring Boot starter offers this out of the box if you use the Spring Boot default (SLF4J over Logback)

As these libraries are mutually exclusive, if the decision were mine, I would pick the OpenTelemetry Spring Boot starter. It offers log support out of the box and also a bridge for Micrometer metrics.

Instrumenting springboot-demo with the OpenTelemetry Spring Boot starter

As always, it is also good to consider the official documentation.

Otel instrumentation with the Spring Boot starter is activated in three steps:

  1. You need to include both the OpenTelemetry BOM and the OpenTelemetry starter dependency. If you are planning to also use Micrometer metrics, it is a good idea to include Spring Actuator as well:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.opentelemetry.instrumentation</groupId>
            <artifactId>opentelemetry-instrumentation-bom</artifactId>
            <version>2.10.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-spring-boot-starter</artifactId>
</dependency>
  2. There is a set of optional libraries and adapters that you can configure if your workloads have already diverged from the "Spring way"

  3. You need to activate (or not) the dimensions of observability (metrics, traces and logs). You can also fine-tune the exporting parameters, like ports, URLs or export intervals, either by using Spring properties or environment variables:

#Configure exporters
otel.logs.exporter=otlp
otel.metrics.exporter=otlp
otel.traces.exporter=otlp

#Configure metrics generation
otel.metric.export.interval=5000 #Export metrics every five seconds
otel.instrumentation.micrometer.enabled=true #Enable the Micrometer metrics bridge
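With the Micrometer bridge enabled as above, custom metrics registered through Micrometer flow to the collector alongside the automatic ones. A minimal sketch, assuming a hypothetical service class in the demo application:

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class FiboMetrics {

    private final MeterRegistry registry;

    public FiboMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public void recordCall(int n) {
        // Custom counter exported through the OTLP pipeline via the Micrometer bridge
        registry.counter("fibo.requests", "size", n > 30 ? "large" : "small").increment();
    }
}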

Instrumenting springboot-client-demo with Micrometer and Micrometer Tracing

Again, this instrumentation does not support log exporting. Also, it is a good idea to check the latest documentation for Micrometer and Micrometer Tracing.

  1. As in the previous example, you need to enable Spring Actuator (which includes Micrometer). As OpenTelemetry is just a backend from the Micrometer perspective, you just need to enable the corresponding OTLP registry, which will export metrics to localhost by default:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-otlp</artifactId>
</dependency>
  2. In a similar way to metrics, once Actuator is enabled you just need to add support for the tracing backend:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
  3. Finally, you can fine-tune the configuration using Spring properties. For example, you can decide whether 100% of traces are reported or how often the metrics are reported to the backend:
management.otlp.tracing.endpoint=http://localhost:4318/v1/traces
management.otlp.tracing.timeout=10s
management.tracing.sampling.probability=1

management.otlp.metrics.export.url=http://localhost:4318/v1/metrics
management.otlp.metrics.export.step=5s
management.opentelemetry.resource-attributes."service-name"=${spring.application.name}
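Beyond the automatic instrumentation, custom spans can be created with the Micrometer Tracing API. A minimal sketch, assuming a hypothetical service class (the downstream call is illustrative):

import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import org.springframework.stereotype.Service;

@Service
public class TraceDemoService {

    private final Tracer tracer;

    public TraceDemoService(Tracer tracer) {
        this.tracer = tracer;
    }

    public String fetchBooks() {
        Span span = tracer.nextSpan().name("fetch-books");
        try (Tracer.SpanInScope ws = tracer.withSpan(span.start())) {
            span.tag("client", "openfeign");
            return callBooksService(); // hypothetical downstream call
        } finally {
            span.end();
        }
    }

    private String callBooksService() {
        return "[]";
    }
}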

Testing and E2E sample

Generating workload data

The POC provides the following structure

├── podman # Podman compose config files
├── springboot-client-demo #Spring Boot client instrumented with Actuator, Micrometer and Micrometer Tracing
└── springboot-demo #Spring Boot service instrumented with OpenTelemetry Spring Boot Starter

  1. The first step is to boot the observability stack we created previously.
cd podman
podman compose -f otel-compose.yml up

This will provide you an instance of Grafana on port 3000

Grafana

Then it is time to boot the first service! You only need Java 21 on the active shell:

cd springboot-demo
mvn spring-boot:run

If the workload is properly configured, you will see the following information in the OpenTelemetry container's standard output, which basically confirms that you are successfully reporting data:

[otel]       | 2024-12-01T22:10:07.730Z info    Logs    {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 24}
[otel]       | 2024-12-01T22:10:10.671Z info    Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 64, "data points": 90}
[otel]       | 2024-12-01T22:10:10.672Z info    Traces  {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 5}
[otel]       | 2024-12-01T22:10:15.691Z info    Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 65, "data points": 93}
[otel]       | 2024-12-01T22:10:15.833Z info    Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 65, "data points": 93}
[otel]       | 2024-12-01T22:10:15.835Z info    Logs    {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 5}

The data is being reported over the OpenTelemetry ports (4317 and 4318) which are open from Podman to the host. By default all telemetry libraries report to localhost, but this can be configured for other cases like FaaS or Kubernetes.
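For example, if the collector were running on another host, the OTLP endpoint could be overridden with the standard exporter property (host illustrative):

#Point the OTLP exporters at a remote collector
otel.exporter.otlp.endpoint=http://otel-collector.example:4317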

Also, you can verify the reporting status in zPages:

Zpages

Finally, let's do the same with the Spring Boot client:

cd springboot-client-demo
mvn spring-boot:run

As described in the previous section, I created a set of interactions to:

Generate CPU workload using Naive fibonacci

curl http://localhost:8080/fibo\?n\=45

Generate logs in different levels

curl http://localhost:8080/log

Persist data using a CRUD

curl -X POST --location "http://localhost:8080/books" \
-H "Content-Type: application/json" \
-d '{
"author": "Miguel Angel Asturias",
"title": "El señor presidente",
"isbn": "978-84-376-0494-7",
"publisher": "Editorial planeta"
}'

And then retrieve the data using a secondary service

curl http://localhost:8081/trace-demo 

This asciicast shows the interaction:

asciicast

Grafana results

Once the data is accessible to Grafana, what to do with it is up to you. Again, you could:

  • Create dashboards
  • Configure alarms
  • Configure notifications from alarms

The quickest way to verify that the data is reported correctly is to look directly in Grafana Explore.

First, we can check a metric like system_cpu_usage and filter by service name. In this case I used springboot-demo, which hosts the CPU demo using naive Fibonacci; I can even filter by my own tag (which was added by the Otel processor):

Grafana Metrics

In the same way, logs are already stored in Loki:

Grafana Logs

Finally, we can check a whole trace, including both services and the interaction with the H2 RDBMS:

Grafana Traces


December 01, 2024 12:00 AM

What is new in Jakarta Persistence 3.2

by F.Marchioni at November 24, 2024 05:08 PM

This tutorial provides an overview of some of the new features available in the upcoming Jakarta Persistence API 3.2, which is part of the Jakarta EE 11 bundle. Jakarta Persistence defines a standard for the management of persistence and object/relational mapping in Java(R) Enterprise environments. If you are new to the Jakarta Persistence API we recommend checking this ... Read more

The post What is new in Jakarta Persistence 3.2 appeared first on Mastertheboss.


by F.Marchioni at November 24, 2024 05:08 PM

Enhanced message validation for XML Web Services in 24.0.0.12-beta

November 19, 2024 12:00 AM

The 24.0.0.12-beta release enhances inbound SOAP message validation in XML Web Services to simplify message debugging and make your web services and clients more resilient.

Fine-tuning XML Web Services inbound SOAP message validation

Open Liberty’s XML Web Services features now support fine-grained message validation for inbound SOAP messages. This enhancement provides more control over message validation options. In Open Liberty 24.0.0.12-beta, you can configure message validation using new attributes in the server.xml file. These attributes are available for the webService and webServiceClient elements.

The new attributes are:

  • enableSchemaValidation: Enable full validation against the XML schema
  • enableDefaultValidation: Enable or disable default validation for JAXB
  • ignoreUnexpectedElements: Use default validation while ignoring UnmarshalException: Unknown Element errors

The default value for enableDefaultValidation in the webServiceClient element is true. The rest of the attributes default to false in both the webServiceClient and webService elements.

These attributes require one of the following XML Web Services features to be enabled in your server.xml file:

By using these attributes, you can tailor message validation to your specific needs and improve the security and reliability of your SOAP-based web services. You can apply the configuration to web services (webService) or web service clients (webServiceClient), either globally, or to an individual client or web service implementation.

XML schema validation

You can set the enableSchemaValidation=true attribute to provide more insight into JAXB unmarshalling exceptions and make painful message debugging easier. This option is the highest level of XML validation, which provides faster debugging and the most thorough checks on inbound message contents. But it comes with a tradeoff: higher performance cost.

Global XML schema validation

The following example shows how to enable XML schema validation for web services globally for your Open Liberty runtime:

<webService enableSchemaValidation="true" />

To enable XML schema validation globally for web service clients, set the same attribute on the webServiceClient element:

<webServiceClient enableSchemaValidation="true" />

Targeted XML schema validation

The following example shows how to enable XML schema validation for a particular web service by using the web service port:

<webService portName="<web service port name>"  enableSchemaValidation="true" />

The value of portName is the port name of the web service implementation you're configuring. This name comes from your @WebService(portName = "<web service port name>") annotated class. Alternatively, you can check the <wsdl:port ... name="<web service port name>"> line in your WSDL file for the port name.
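For reference, a minimal sketch of such an annotated class might look like this (names illustrative; assuming the Jakarta EE package names):

import jakarta.jws.WebService;

@WebService(portName = "SayHelloService")
public class SayHello {
    public String hello(String name) {
        return "Hello, " + name;
    }
}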

The following example shows how to enable XML schema validation for a specific client service:

<webServiceClient serviceName="<client service name>"  enableSchemaValidation="true" />

The value of serviceName is the name of the web service client you're configuring. This name comes from your @WebServiceClient(serviceName = "<client service name>") annotated stub class for managed clients. For unmanaged clients, you can check the <wsdl:service name="<client service name>"> line in your WSDL file.

Default validation

You can configure the default level of JAXB validation of inbound SOAP messages with the enableDefaultValidation attribute. Default validation is much more efficient than XML schema validation, so enabling it provides basic message validation with lower overhead, while disabling it lets you ignore various unmarshalling errors for problematic messages. Default validation is enabled by default for web service clients, but disabled for web services.

Global default validation

The following example shows how to enable default validation globally for web services in your Open Liberty runtime:

<webService enableDefaultValidation="true"/>

Default validation is enabled by default for web service clients. To disable default validation globally for web service clients, set the enableDefaultValidation="false" attribute on the webServiceClient element.

<webServiceClient enableDefaultValidation="false"/>

Targeted default validation

The following example shows how to enable default validation for a specific web service:

<webService  portName="SayHelloService" enableDefaultValidation="true"/>

Default validation is enabled by default for web service clients. To disable default validation for a specific web service client, set the enableDefaultValidation="false" attribute on the webServiceClient element and use the serviceName attribute to specify the client service.

<webServiceClient serviceName="<client service name>"  enableDefaultValidation="false" />

Ignore unexpected elements

Inbound SOAP messages often contain extra elements in the SOAP body when a web service is updated but the client is not. When a message contains an unknown element, Open Liberty throws an UnmarshalException: Unknown Element. By enabling ignoreUnexpectedElements, you can keep validation enabled while ignoring unknown elements.

Global configuration

The following example shows how to ignore unexpected elements globally for web services on your Open Liberty runtime:

<webService  ignoreUnexpectedElements="true"/>

To ignore unexpected elements globally for web service clients, set the ignoreUnexpectedElements attribute on the webServiceClient element.

Targeted configuration

The following example shows how to ignore unexpected elements for a specific web service:

<webService  portName="SayHelloService" ignoreUnexpectedElements="true"/>

To ignore unexpected elements for a specific web service client, set the same attribute on the webServiceClient element and use the serviceName attribute to specify the client service.

Try it now

To try out these features, update your build tools to pull the Open Liberty All Beta Features package instead of the main release. The beta works with Java SE 23, 21, 17, 11, and 8.

If you’re using Maven, you can install the All Beta Features package by using:

<plugin>
    <groupId>io.openliberty.tools</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>3.11.1</version>
    <configuration>
        <runtimeArtifact>
          <groupId>io.openliberty.beta</groupId>
          <artifactId>openliberty-runtime</artifactId>
          <version>24.0.0.12-beta</version>
          <type>zip</type>
        </runtimeArtifact>
    </configuration>
</plugin>

You must also add dependencies to your pom.xml file for the beta version of the APIs that are associated with the beta features that you want to try. For example, the following block adds dependencies for two example beta APIs:

<dependency>
    <groupId>org.example.spec</groupId>
    <artifactId>exampleApi</artifactId>
    <version>7.0</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>example.platform</groupId>
    <artifactId>example.example-api</artifactId>
    <version>11.0.0</version>
    <scope>provided</scope>
</dependency>

Or for Gradle:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'io.openliberty.tools:liberty-gradle-plugin:3.9.1'
    }
}
apply plugin: 'liberty'
dependencies {
    libertyRuntime group: 'io.openliberty.beta', name: 'openliberty-runtime', version: '[24.0.0.12-beta,)'
}

Or if you’re using container images:

FROM icr.io/appcafe/open-liberty:beta

Or take a look at our Downloads page.

If you’re using IntelliJ IDEA, Visual Studio Code or Eclipse IDE, you can also take advantage of our open source Liberty developer tools to enable effective development, testing, debugging, and application management all from within your IDE.

For more information on using a beta release, refer to the Installing Open Liberty beta releases documentation.

We welcome your feedback

Let us know what you think on our mailing list. If you hit a problem, post a question on StackOverflow. If you hit a bug, please raise an issue.


November 19, 2024 12:00 AM

Rethinking microservices

November 12, 2024 12:00 AM

There are many misconceptions about microservices, which have been around for over a decade. Some people wonder whether microservices are going to die, especially in an IT industry that is quickly moving toward the cloud. With serverless becoming a hot topic, will microservices survive in the serverless era?

In this blog, we take a step back and rethink microservices. We start with the history of microservices, then misconceptions about microservices, best practices for microservices, and finally the future of microservices.


November 12, 2024 12:00 AM

Virtual Threads (Project Loom) – Revolutionizing Concurrency in Java

by Alexius Dionysius Diakogiannis at November 08, 2024 03:55 PM

Introduction

Concurrency has always been a cornerstone of Java, but as applications scale and demands for high throughput and low latency increase, traditional threading models show their limitations. Project Loom, with its groundbreaking introduction of virtual threads, redefines how we approach concurrency in Java, making applications more scalable and development more straightforward.

In this post, we’ll go deep into virtual threads, exploring how they work, their impact on scalability, and how they simplify backend development. We’ll provide both simple and complex code examples to illustrate these concepts in practice.

Project Loom Virtual Threads in Java

The Limitations of Traditional Threads

In Java, each thread maps to an operating system (OS) thread. While this model is straightforward, it comes with significant overhead:

  • Resource Consumption: OS threads are heavy-weight, consuming considerable memory (~1MB stack size by default).
  • Context Switching: The OS has to manage context switching between threads, which can degrade performance when thousands of threads are involved.
  • Scalability Issues: Blocking operations (e.g., I/O calls) tie up OS threads, limiting scalability.

Traditional solutions involve complex asynchronous programming models or reactive frameworks, which can make code harder to read and maintain.

Introducing Virtual Threads

Virtual threads are “lightweight” threads that aim to solve these problems:

  • Lightweight: Thousands of virtual threads can be created without significant overhead.
  • Efficient Scheduling: Managed by the JVM rather than the OS, leading to more efficient context switching.
  • Simplified Concurrency: Enable writing straightforward, blocking code without sacrificing scalability.

Virtual threads decouple the application thread from the OS thread, allowing the JVM to manage threading more efficiently.

How Virtual Threads Work

Under the hood, virtual threads are scheduled by the JVM onto a pool of OS threads. Key aspects include:

  • Continuation-Based: Virtual threads use continuations to save and restore execution state.
  • Non-Blocking Operations: When a virtual thread performs a blocking operation, it yields control, allowing the JVM to schedule another virtual thread.
  • Efficient Utilization: The JVM reuses OS threads, minimizing the cost of context switches.

Here’s a simplified diagram:

[plant uml diagram]
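A quick way to observe this yielding behavior is a minimal sketch (assuming Java 21): start ten thousand virtual threads that each block for one second. They finish in roughly one second of wall-clock time, because a sleeping virtual thread releases its carrier OS thread:

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class YieldingDemo {
    public static void main(String[] args) {
        Instant start = Instant.now();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofSeconds(1)); // parks the virtual thread, frees the carrier
                return i;
            }));
        } // close() waits for all submitted tasks to finish
        System.out.println("Elapsed: " + Duration.between(start, Instant.now()).toMillis() + " ms");
    }
}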

Benefits of Virtual Threads

  • Scalability: Handle millions of concurrent tasks with minimal resources.
  • Simplified Code: Write blocking code without complex asynchronous patterns.
  • Performance: Reduced context switching overhead and better CPU utilization.
  • Integration: Works seamlessly with existing Java code and libraries.

Simple Examples

Example 1: Spawning Virtual Threads

public class VirtualThreadExample {
    public static void main(String[] args) throws InterruptedException {
        Thread.startVirtualThread(() -> {
            System.out.println("Hello from a virtual thread!");
        });

        // Alternatively, using the Thread.ofVirtual() builder
        Thread thread = Thread.ofVirtual()
                .name("my-virtual-thread")
                .start(() -> System.out.println("Another virtual thread"));

        thread.join();
    }
}

Explanation:

  • Thread.startVirtualThread creates and starts a virtual thread.
  • Virtual threads behave like regular threads but are lightweight.

Example 2: Migrating from Traditional to Virtual Threads

Traditional threading:

ExecutorService executor = Executors.newFixedThreadPool(10);

for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        // Perform task
    });
}

executor.shutdown();

Using virtual threads:

ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        // Perform task
    });
}

executor.shutdown();

Explanation:

  • Executors.newVirtualThreadPerTaskExecutor() creates an executor that uses virtual threads.
  • We can submit a large number of tasks without worrying about thread exhaustion.

Complex Examples

Example 1: High-Throughput Server with Virtual Threads

Let’s build a server that handles a massive number of connections using virtual threads.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();

                Thread.startVirtualThread(() -> handleClient(clientSocket));
            }
        }
    }

    private static void handleClient(Socket clientSocket) {
        try (clientSocket) {
            // Read from and write to the client
            clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello World".getBytes());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Explanation:

  • Each incoming connection is handled by a virtual thread.
  • The server can handle a vast number of simultaneous connections efficiently.

Performance Considerations:

  • Blocking I/O operations in virtual threads do not block OS threads.
  • The JVM efficiently manages the scheduling of virtual threads.

Example 2: Custom Virtual Thread Executor Service

Creating a custom executor service that manages virtual threads with specific configurations.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class CustomVirtualThreadExecutor {
    public static void main(String[] args) {
        ThreadFactory factory = Thread.ofVirtual()
                .name("virtual-thread-", 0)
                .factory();

        ExecutorService executor = Executors.newThreadPerTaskExecutor(factory);

        for (int i = 0; i < 1000; i++) {
            int taskNumber = i;
            executor.submit(() -> {
                System.out.println(Thread.currentThread().getName() + " executing task " + taskNumber);
                // Simulate work
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        executor.shutdown();
    }
}

Explanation:

  • Using Thread.ofVirtual(), we create a custom thread factory for virtual threads with a naming pattern.
  • The executor service uses this factory to create virtual threads per task.
  • This setup allows for customized thread creation and better debugging.

Example 3: Structured Concurrency with Virtual Threads

Structured concurrency helps manage multiple concurrent tasks as a single unit. Note that StructuredTaskScope is a preview API in current JDKs, so this example requires --enable-preview.

import java.time.Duration;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

public class StructuredConcurrencyExample {
    public static void main(String[] args) {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var future1 = scope.fork(() -> fetchDataFromServiceA());
            var future2 = scope.fork(() -> fetchDataFromServiceB());

            scope.join();           // Wait for all tasks
            scope.throwIfFailed();  // Propagate exceptions

            String resultA = future1.get();
            String resultB = future2.get();

            System.out.println("Results: " + resultA + ", " + resultB);
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    private static String fetchDataFromServiceA() throws InterruptedException {
        Thread.sleep(Duration.ofSeconds(2));
        return "Data from Service A";
    }

    private static String fetchDataFromServiceB() throws InterruptedException {
        Thread.sleep(Duration.ofSeconds(3));
        return "Data from Service B";
    }
}

Explanation:

  • StructuredTaskScope allows grouping tasks and managing them collectively.
  • ShutdownOnFailure ensures that if one task fails, all others are canceled.
  • Virtual threads make this pattern efficient and practical.

Benefits:

  • Simplifies error handling in concurrent code.
  • Improves readability and maintainability.

Impact on Backend Development

Virtual threads have profound implications for backend development:

Simplified Codebases

In traditional Java, we often use non-blocking I/O to achieve concurrency, which can complicate code structure. With virtual threads, we can use blocking code without the performance penalties associated with OS threads.

Example Without Virtual Threads (Using Asynchronous I/O):

CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    try {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
              .thenAccept(response -> System.out.println(response.body()));
    } catch (Exception e) {
        e.printStackTrace();
    }
});

Simplified with Virtual Threads:

Thread.startVirtualThread(() -> {
    try {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    } catch (Exception e) {
        e.printStackTrace();
    }
});

With virtual threads, we can use synchronous client.send() directly, making the code simpler and more readable, while still benefiting from concurrency.

Elimination of Callback Hell

Asynchronous programming often leads to nested callbacks, which make the code harder to read and debug. Virtual threads allow us to write code in a linear, blocking style, avoiding callback hell.

Example Using Callbacks (Without Virtual Threads):

fetchDataAsync("https://example.com/data", result -> {
    processAsync(result, processed -> {
        saveAsync(processed, saved -> {
            System.out.println("Data saved successfully!");
        });
    });
});

Simplified with Virtual Threads:

Thread.startVirtualThread(() -> {
    String data = fetchData("https://example.com/data");
    String processed = process(data);
    save(processed);
    System.out.println("Data saved successfully!");
});

With virtual threads, we can write sequential, synchronous code while retaining concurrency, eliminating the need for nested callbacks.

Enhanced Performance

Handling many concurrent requests with traditional threads can quickly lead to memory exhaustion. Virtual threads allow us to handle a large number of connections concurrently with minimal resource overhead.

Example: High-Concurrency Server with Virtual Threads

try (var serverSocket = new ServerSocket(8080)) {
    while (true) {
        var clientSocket = serverSocket.accept();
        Thread.startVirtualThread(() -> handleClient(clientSocket));
    }
}

private static void handleClient(Socket clientSocket) {
    try (clientSocket) {
        clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello, World!".getBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
}

This server can handle thousands of simultaneous connections without exhausting system resources, as each connection runs on a virtual thread.

Compatibility with Existing Libraries and Frameworks

Since virtual threads are part of the standard Java threading API, they are compatible with most existing libraries and frameworks, allowing developers to integrate virtual threads without extensive refactoring.

Example: Using Virtual Threads with ExecutorService

You can replace traditional thread pools with virtual thread-based executors to use existing code with minimal changes.

ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        System.out.println("Running task on a virtual thread.");
    });
}

executor.shutdown();

Any code that works with ExecutorService will continue to work seamlessly with virtual threads, enhancing compatibility.

Reduced Need for Reactive Frameworks

Virtual threads allow developers to use blocking code patterns without the overhead associated with OS threads, making it possible to achieve high concurrency with simpler code structures, reducing the need for reactive frameworks.

Example: Synchronous Data Fetching with Virtual Threads Instead of Reactive Patterns

Reactive (Without Virtual Threads):

Mono<String> data = WebClient.create("https://example.com")
    .get()
    .retrieve()
    .bodyToMono(String.class);

data.subscribe(System.out::println);

Simplified with Virtual Threads:

Thread.startVirtualThread(() -> {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
    String response = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    System.out.println(response);
});

Virtual threads allow us to use blocking code directly, making reactive patterns unnecessary for some use cases. This reduces complexity, especially for applications that don’t require the full power of reactive programming.

Considerations

When implementing virtual threads with Project Loom, developers must consider various technical and architectural implications. Below are some detailed considerations to keep in mind:

Memory Usage and Stack Management

Virtual threads are lightweight compared to traditional OS threads, but they still consume memory, especially if the virtual threads are highly stacked (deep call stacks).

  • Stack Size: Virtual threads start with a small stack and can expand as needed, which can potentially reduce memory consumption compared to OS threads. However, developers should monitor stack usage to avoid excessive memory consumption.
  • Memory Monitoring: Although virtual threads are efficient, monitoring the JVM’s memory usage becomes essential as thousands of virtual threads may be active concurrently.
  • JVM Configuration: Tuning the JVM’s garbage collection and memory settings is important when handling millions of threads, as they may put unexpected pressure on the heap.

Blocking vs. Non-Blocking Code Patterns

Virtual threads make blocking I/O efficient, but there are some nuances:

  • Blocking I/O Operations: With virtual threads, you can use blocking calls like Socket or File I/O without performance penalties. However, the JVM handles only traditional blocking I/O efficiently, so libraries must be updated for Loom support.
  • Non-Blocking I/O: If your project is already using non-blocking I/O, switching to virtual threads might simplify the code structure but won’t necessarily bring significant performance gains, as non-blocking code is already optimized.
  • Thread Pool Alternatives: In traditional models, a common technique is to use a pool of threads to limit the number of concurrent operations. With virtual threads, this might no longer be necessary, allowing a model where each task gets its own virtual thread without causing bottlenecks.

Concurrency Limitations

While virtual threads allow for a high degree of concurrency, they are not a silver bullet. Certain scenarios, such as CPU-bound tasks, still require careful handling to avoid performance degradation.

  • CPU-Bound Tasks: Virtual threads are designed for I/O-bound workloads. If the application has CPU-intensive tasks, virtual threads might not yield the same benefits, as they do not reduce CPU time requirements.
  • Parallelism Control: For tasks that require controlled parallelism, developers may still benefit from combining virtual threads with task-limiting mechanisms (e.g., limiting the number of CPU-bound threads), as sketched after this list.
  • Thread Priority and Scheduling: Virtual threads are managed by the JVM and may not respect OS-level thread priorities. If your application requires fine-grained control over thread priority, virtual threads might not be ideal.
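As a sketch of that parallelism-control point (the compute method is hypothetical), a plain Semaphore can bound CPU-bound work while each task still gets its own virtual thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedParallelism {
    // Allow at most 8 CPU-bound tasks to run at the same time
    private static final Semaphore permits = new Semaphore(8);

    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int task = i;
                executor.submit(() -> {
                    permits.acquire();   // cheap to block: only the virtual thread parks
                    try {
                        compute(task);   // hypothetical CPU-bound work
                    } finally {
                        permits.release();
                    }
                    return task;
                });
            }
        } // close() waits for all tasks
    }

    private static void compute(int n) {
        // placeholder for real CPU-bound work
    }
}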

Error Handling and Exception Propagation

Error handling becomes crucial, especially with the simplicity of launching thousands of threads.

  • Propagating Exceptions: Uncaught exceptions in a virtual thread do not terminate the JVM process; they are logged or can be handled via an uncaught-exception handler (see the sketch after this list).
  • Graceful Shutdowns: Virtual threads simplify concurrency, but managing error states across thousands of threads can be challenging. Structured concurrency (a model for grouping and managing threads introduced alongside Loom) helps manage error propagation and task cancellation.
  • Task Scopes: When using structured concurrency with virtual threads, grouping tasks with scopes (e.g., ShutdownOnFailure in Java’s StructuredTaskScope) ensures that if one task in a group fails, other tasks can be canceled or handled appropriately.
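A minimal sketch of the handler-based approach to uncaught exceptions (names illustrative):

Thread.ofVirtual()
      .name("worker")
      .uncaughtExceptionHandler((thread, throwable) ->
              System.err.println(thread.getName() + " failed: " + throwable))
      .start(() -> { throw new IllegalStateException("boom"); });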

Impact on Debugging and Profiling

With potentially millions of threads, debugging and profiling virtual-threaded applications introduce unique challenges.

  • Thread Explosion in Debuggers: Debuggers might struggle with applications using millions of virtual threads, leading to overwhelming output. It may be helpful to add application-level logging or selectively enable virtual threads for debugging.
  • Profiling Complexity: Traditional thread profilers may not provide granular insights for virtual threads. Consider using JVM flight recording or Loom-aware profiling tools to trace virtual thread usage accurately.
  • Stack Trace Analysis: Virtual threads make it possible to have more granular and descriptive stack traces, but interpreting large volumes of stack traces could require additional tooling or filtering strategies.

Interplay with Synchronization and Locks

Though virtual threads alleviate many concurrency issues, developers still need to be cautious with shared resources.

  • Contention on Shared Resources: Virtual threads do not inherently solve issues related to contention on shared resources. If two virtual threads try to acquire the same lock, they may still face contention, potentially leading to bottlenecks.
  • Thread Safety: Existing synchronized code will generally work with virtual threads. However, in a highly concurrent environment, developers should consider using java.util.concurrent locks (e.g., ReentrantLock with try-lock mechanisms) or lock-free data structures to avoid contention; a lock-based sketch follows this list.
  • Deadlock Risks: While virtual threads reduce many resource-related problems, deadlocks can still occur if resources are mismanaged. Deadlock analysis tools can help in identifying potential deadlock situations.
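To make the lock recommendation concrete, here is a minimal sketch of a lock-guarded counter. Unlike a synchronized block, which on current JDKs can pin a virtual thread to its carrier while it blocks, a java.util.concurrent lock lets the virtual thread unmount:

import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value;

    public void increment() {
        lock.lock();   // if we block here, only the virtual thread parks
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }

    public long value() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}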

Structured Concurrency for Task Management

Structured concurrency in Project Loom allows developers to group threads and manage them collectively, making error handling and task cancellation more intuitive.

  • Parent-Child Relationships: Structured concurrency introduces a parent-child relationship between tasks, simplifying lifecycle management and error propagation.
  • Graceful Cancellation: If a parent task is canceled, all child tasks are automatically canceled, making it easier to handle scenarios where one task failure requires the cancellation of other related tasks.
  • Scope Lifecycle Management: With StructuredTaskScope, developers can define a task group’s lifecycle. This ensures that resources are managed properly, and all tasks in the scope are completed, failed, or canceled together.

Interfacing with Existing Thread-Based Libraries

Virtual threads integrate well with many libraries but may require attention with those heavily reliant on OS-level threads or specialized thread management.

  • OS-Threaded Libraries: Libraries that rely on low-level OS-thread management (e.g., JNI-based libraries) may not benefit directly from virtual threads, as they may require actual OS threads for certain operations.
  • External Thread Pools: If your application integrates with external thread pools (e.g., database connection pools), consider switching to Loom-compatible connection handling, as some third-party libraries may not yet support virtual threads.
  • Task Executors: Replacing ThreadPoolExecutor with Executors.newVirtualThreadPerTaskExecutor() allows easier adaptation to existing thread-based code, but testing is recommended to ensure compatibility and performance stability.

Performance Profiling and Resource Management

Virtual threads reduce context-switching overhead and make high-concurrency applications more feasible, but monitoring and optimizing performance remain crucial.

  • Avoiding Thread Overuse: Although virtual threads are lightweight, overusing them (e.g., starting a new thread for every small task) can still degrade performance. Consider batching or grouping tasks when feasible.
  • Heap Pressure and Garbage Collection: Large numbers of virtual threads may generate considerable garbage, adding pressure on the JVM’s garbage collection. Profiling and tuning the GC for high-throughput applications with virtual threads is crucial.
  • Application Profiling: Java Flight Recorder (JFR) or other profiling tools with virtual-thread awareness can help understand the application’s runtime characteristics, especially in production environments.

Testing and Migration Strategies

Testing and planning migration from traditional to virtual threads require thorough analysis and validation.

  • Gradual Migration: Virtual threads allow a gradual migration. Developers can begin by converting specific thread-heavy sections of the application to virtual threads while retaining traditional threads elsewhere.
  • Testing for Loom Compatibility: While most Java libraries are expected to be compatible, rigorous testing is recommended, particularly for libraries with complex threading requirements or blocking operations.
  • Load Testing and Performance Validation: Applications utilizing virtual threads should undergo load testing to validate that the new threading model provides the expected concurrency improvements without introducing regressions or bottlenecks.

Patterns and Anti-Patterns

With all that said, let's examine some common patterns and anti-patterns.

Patterns

Task-Per-Thread Pattern

One of the primary use cases for virtual threads is to simplify concurrency by using a “task-per-thread” model. In this pattern, each concurrent task is assigned to a separate virtual thread, which avoids the complex management typically needed for thread pools with OS threads.

Example: A server handling multiple incoming connections, with each connection running in its own virtual thread.

try (ServerSocket serverSocket = new ServerSocket(8080)) {
    while (true) {
        Socket clientSocket = serverSocket.accept();
        Thread.startVirtualThread(() -> handleClient(clientSocket));
    }
}

private static void handleClient(Socket clientSocket) {
    try (clientSocket) {
        clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello, World!".getBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Each connection is handled by a virtual thread, providing simple and scalable concurrency without the need for a complex thread pool. This pattern would be inefficient with OS threads but is efficient with virtual threads.

Structured Concurrency Pattern

Structured concurrency organizes concurrent tasks as a structured unit, making it easier to manage lifecycles, cancellations, and exceptions. This pattern is particularly useful in request-based applications where tasks are interdependent.

Example: Fetching data from multiple services concurrently and consolidating the results.

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var future1 = scope.fork(() -> fetchDataFromServiceA());
    var future2 = scope.fork(() -> fetchDataFromServiceB());

    scope.join();           // Wait for all tasks to complete
    scope.throwIfFailed();  // Handle exceptions

    String resultA = future1.resultNow();
    String resultB = future2.resultNow();
    System.out.println("Results: " + resultA + ", " + resultB);
} catch (Exception e) {
    e.printStackTrace();
}

Structured concurrency ensures that if any task fails, all other tasks in the same scope are canceled. It simplifies error handling and improves resource management, providing better control over concurrent flows.

Blocking Code in Virtual Threads

With virtual threads, developers can safely use blocking calls, such as Thread.sleep() or blocking I/O operations, as the JVM handles scheduling efficiently. This pattern contrasts with the traditional approach of avoiding blocking calls to prevent thread starvation.

Example: Running blocking I/O operations without impacting overall performance.

Thread.startVirtualThread(() -> {
    try {
        Thread.sleep(1000);  // Blocking call
        System.out.println("Virtual thread finished sleeping");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

Virtual threads make blocking operations efficient, as the JVM schedules other virtual threads to run while the blocking call is in progress. This allows developers to write simpler, more readable code without performance trade-offs.

Task-Based Executors with Virtual Threads

Using Executors.newVirtualThreadPerTaskExecutor() creates a virtual-thread-based executor that simplifies parallel task execution. This pattern allows developers to leverage the ExecutorService interface, making it easy to transition from traditional threads to virtual threads.

Example: Running multiple tasks concurrently with a virtual-thread-based executor.

ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

for (int i = 0; i < 10; i++) {
    executor.submit(() -> {
        System.out.println("Running task on virtual thread: " + Thread.currentThread().getName());
    });
}

executor.shutdown();

This executor allows each task to run on a virtual thread, making it efficient to create a new thread per task without the need for traditional thread pooling.

Anti-Patterns

Overuse of Virtual Threads

While virtual threads are lightweight, they are not “free.” Creating excessive numbers of virtual threads for very short-lived tasks can introduce overhead in terms of scheduling and garbage collection, which may impact performance.

Anti-Pattern Example: Creating a new virtual thread for every small task, such as iterating over a list.

List<String> items = List.of("A", "B", "C");
for (String item : items) {
    Thread.startVirtualThread(() -> processItem(item));  // Inefficient
}

Better Approach: Instead, batch tasks together if they are very short-lived to avoid excessive thread creation.

Thread.startVirtualThread(() -> items.forEach(VirtualThreadsExample::processItem));

By batching the tasks within a single virtual thread, you avoid creating unnecessary threads, optimizing resource usage and reducing scheduling overhead.

Blocking OS Resources in Virtual Threads

Blocking calls that involve resources controlled by the OS, such as file locks or certain low-level network operations, may still tie up OS threads when used with virtual threads, leading to potential bottlenecks.

Anti-Pattern Example: Locking a file for extended periods in a virtual thread.

Thread.startVirtualThread(() -> {
    try (var fileChannel = FileChannel.open(Path.of("file.txt"), StandardOpenOption.WRITE)) {
        fileChannel.lock();  // This can block an OS thread, not recommended
    } catch (IOException e) {
        e.printStackTrace();
    }
});

Better Approach: Avoid blocking virtual threads on OS resources that do not release quickly. Use asynchronous approaches for tasks involving external resources, as sketched below.
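
For the file-lock case above, one asynchronous alternative is AsynchronousFileChannel, sketched here under the same file-name assumption:

import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

try (AsynchronousFileChannel channel =
         AsynchronousFileChannel.open(Path.of("file.txt"), StandardOpenOption.WRITE)) {
    Future<FileLock> pending = channel.lock();  // returns immediately; no OS thread blocks
    FileLock lock = pending.get();              // parks only this (virtual) thread
    try {
        // write to the channel while holding the lock
    } finally {
        lock.release();
    }
} catch (Exception e) {
    e.printStackTrace();
}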

Improper Exception Handling in Virtual Threads

Uncaught exceptions terminate only the virtual thread in which they occur, not the JVM, and they may not be logged as prominently as exceptions in traditional threads, leading to undetected errors.

Anti-Pattern Example: Ignoring exceptions in virtual threads.

Thread.startVirtualThread(() -> {
    int result = 10 / 0;  // Unhandled exception
    System.out.println("Result: " + result);
});

Better Approach: Use structured concurrency or set up explicit exception handling within virtual threads to capture and handle errors effectively.

Thread.startVirtualThread(() -> {
    try {
        int result = 10 / 0;
        System.out.println("Result: " + result);
    } catch (Exception e) {
        System.err.println("Error in virtual thread: " + e.getMessage());
    }
});

Proper error handling in virtual threads helps in identifying and managing issues without causing untracked failures.

Manual Management of Thread Lifecycles

With virtual threads, the need for manually managing thread lifecycles or using traditional thread pooling mechanisms decreases. Creating a custom virtual-thread pool or managing virtual threads directly as a group is often unnecessary and counterproductive.

Anti-Pattern Example: Manually creating a virtual-thread pool.

List<Thread> virtualThreadPool = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    virtualThreadPool.add(Thread.startVirtualThread(() -> System.out.println("Task in virtual pool")));
}
virtualThreadPool.forEach(t -> {
    try {
        t.join();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

Better Approach: Use Executors.newVirtualThreadPerTaskExecutor() to manage tasks rather than manually handling virtual threads.

ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

for (int i = 0; i < 10; i++) {
    executor.submit(() -> System.out.println("Task in virtual thread executor"));
}

executor.shutdown();

Manually creating and managing virtual-thread pools contradicts the efficiency of built-in virtual-thread executors, which are optimized for this purpose.

Over-Reliance on Virtual Threads for CPU-Bound Tasks

Virtual threads excel in I/O-bound tasks, but for CPU-bound tasks, they offer limited benefits. Virtual threads do not reduce the CPU time required, so heavy reliance on virtual threads for CPU-bound operations can lead to high contention and degraded performance.

Anti-Pattern Example: Running a CPU-intensive task on a high number of virtual threads.

ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 100; i++) {
    executor.submit(() -> performCPUIntensiveTask());
}

Better Approach: Use a fixed-size thread pool for CPU-bound tasks to limit the number of concurrent CPU-intensive operations.

int cores = Runtime.getRuntime().availableProcessors();
ExecutorService cpuExecutor = Executors.newFixedThreadPool(cores);  // size the pool to the available CPU cores
for (int i = 0; i < 100; i++) {
    cpuExecutor.submit(() -> performCPUIntensiveTask());
}
cpuExecutor.shutdown();

Using a fixed-size pool for CPU-bound tasks helps manage CPU usage and avoids overwhelming the CPU with excessive task scheduling.

Relying on Global State in Virtual Threads

Virtual threads can be short-lived and numerous, so relying on shared global state can lead to contention and potential race conditions, especially when many virtual threads attempt to access or modify the state concurrently.

Anti-Pattern Example: Modifying shared global state from multiple virtual threads.

public static int counter = 0;

for (int i = 0; i < 1000; i++) {
    Thread.startVirtualThread(() -> counter++);
}

Better Approach: Use thread-safe data structures or local variables to reduce contention. For counters, consider using AtomicInteger or other concurrent collections.

AtomicInteger counter = new AtomicInteger();

for (int i = 0; i < 1000; i++) {
    Thread.startVirtualThread(() -> counter.incrementAndGet());
}

Avoiding shared global state or using thread-safe structures reduces contention and prevents data corruption, especially in high-concurrency environments.

Conclusion

Project Loom’s virtual threads bring a groundbreaking shift to Java’s concurrency model, allowing developers to write more intuitive, efficient, and scalable concurrent code. By making virtual threads lightweight and capable of handling blocking operations without tying up OS resources, Project Loom simplifies complex concurrency patterns, allowing developers to write straightforward, blocking code that performs well under high concurrency.

Key patterns, such as task-per-thread, structured concurrency, and asynchronous handling of OS-bound tasks, demonstrate how virtual threads can enhance both code simplicity and application performance. These patterns, combined with new APIs, like StructuredTaskScope, make it easier to handle interdependent tasks, manage cancellations, and propagate exceptions in a cohesive way. At the same time, understanding anti-patterns—such as avoiding excessive thread creation for short-lived tasks or blocking on OS-level resources—is essential to prevent bottlenecks and ensure efficient resource usage.

Virtual threads encourage developers to rethink their approach to concurrency, moving away from complex reactive frameworks or callback-heavy asynchronous code toward a more synchronous and readable model. However, for CPU-bound tasks or specific I/O operations that require OS thread involvement, traditional approaches like fixed-thread pools and asynchronous task delegation remain relevant.

In essence, virtual threads make concurrency accessible and manageable, even for complex applications, while allowing developers to focus on the core logic rather than threading intricacies. As virtual threads become standard, Java developers can embrace a more flexible and high-performing concurrency model that scales efficiently and integrates smoothly with existing libraries and frameworks, setting the stage for a new era in Java application development.


by Alexius Dionysius Diakogiannis at November 08, 2024 03:55 PM

Reader.of(CharSequence)

by Markus Karg at November 02, 2024 01:21 PM

Hi guys, how’s it going? Long time no see! 😅

In fact, I had been very silent in the past months, and as you can imagine, there is a reason: I just had no time to share all the great stuff with you that I was involved with recently. In particular, creating video content for Youtube is so time-consuming that I decided to stop with that by the end of 2023, at least “for some time”, until my personal stress level is “normalized” again. Unfortunately, now by the end of 2024, it still is at 250%… Anyways!

Having said that, I decided to restart my blog. While many people told me that blogging is @deprecated since the invention of vlogs, I have to say it is just so much easier for me to write a blog article that I decided to ignore them and write some lines about my latest Open Source contribution. So here it is: my first blog entry in years!

But enough about me. What I actually wanted to tell you today is that I am totally happy these days. The reason is that since this week, JDK 24 EA b22 is available for download, and as you can see in the preliminary JavaDocs, it contains my very first addition to the official Java API: Reader.of(CharSequence) 🚀!

You might wonder what’s so crazy about that, because (as you certainly know) I have been a contributor to OpenJDK for many years. Well, yes, I did contribute to OpenJDK (alias “to Java”) for a long time, but all my contributions were just “under the hood”. I have optimized execution performance, I improved JavaDocs, I added unit tests, and I refactored code. But this time, I added a completely new feature to the public API. It really feels amazing to see that my work of the past few weeks will help Java developers in their future projects to save some microseconds and some kilobytes per call, and in sum, across those approx. ten million developers (according to Oracle marketing), my invention will spare considerable amounts of CO2! 🌞

Okay, so what actually is Reader.of(CharSequence) all about, how can you use it, and how does it spare resources?

I think you all know what the class StringReader is, and what to use it for: You need (for whatever reason) an implementation of the Reader interface, and the source is a String. At least that is what it was made for decades ago. In fact, looking at the actual uses in 2024, more often than not the source isn’t a String, but is (mostly for performance reasons) a StringBuilder or StringBuffer, and sometimes (mostly for technical reasons) a CharBuffer. These classes all share one common interface, CharSequence, which is “the” interface of the class String, too. Unfortunately, StringReader is unable to accept CharSequence; it only accepts String. That’s too bad, because it means most uses of StringReader actually perform an intermediate toString() operation, which creates a temporary copy of the full content on the heap, just to throw it away later! 🤦 Creating this copy is anything but free! It takes time to find a free place on the heap, to copy the content onto the heap, and to later GC (dispose and defragment) the otherwise unused copy in turn. Time is not just money: this operation costs power, and power costs (even these days) CO2!

On top of that, most (possibly all) uses of StringReader are single-threaded. I investigated for some time but could not find a single reason for accessing a StringReader in a multi-threaded fashion. Unfortunately, StringReader is thread-safe: It internally uses the synchronized keyword in every single method. Each time. For each single read in possibly a loop of a thousand iterations! And yes, you guessed right: synchronized is anything but fast. It slows down code considerably, for zero benefit! And no, the JVM has no trick to speed this up in the single-threaded use cases; that trick (“Biased Locking”) went away years ago, and the result is that synchronized is slow again!

Imagine you are writing a RESTful web server which returns JSON on GET. JSON is nothing else but a character sequence. You build it from a non-trivial algorithm using a StringBuilder. That particular JSON unfortunately is non-cacheable, as it contains information sent with the request, changing over time, or provided by the real physical world. So the server possibly produces tens of thousands of StringBuilders per second and reads each of them using a StringReader. Can you imagine what happens to your performance? Thanks to the combination of both effects described earlier, you’re losing money with every other customer contacting your server. YOUR money, BTW.

This is exactly what happened in my latest commercial project, so I tried to get rid of StringReader. My first idea was to use Apache IO’s CharSequenceReader, which looks a bit old-school, but it immediately got rid of both effects! The problem with Apache IO is that it is rather huge. Pulling in lots of KBs of transitive dependencies just for one single use case didn’t sound like a smart option (but yes, this is the code actually in production still, at least until JDK 24 is published in Q1/25). Also, the customer was not very pleased to adopt another third-party library into the game. And finally, the code of Apache IO is not really eagerly maintained; they do bug fixes, but they abstain from using modern Java APIs (not even using multi-release JARs). Some will hate me for writing this, but the actual change rate didn’t look “stable”, it looked “dead”; agreed, this is subject to my personal interpretation. 🥴

Being an enthusiastic Open Source committer for decades, and an OpenJDK contributor for years, I had the idea to tackle the problem at its root: StringReader. So I proposed to provide a PR for a new public API, which was very much appreciated by the OpenJDK team. It was Alan Bateman himself (Group Lead of JDK Core Libraries) who came up with the proposal to have a static factory, which culminated in me posting a PR on Github about adding Reader.of(CharSequence). After the mandatory CSR was accepted, it recently got merged, and since JDK 24’s Early Access Build 22 it is publicly available. 🚀

BTW, look at the implementation of that Reader’s bulk-read method. There is an ugly sequence of tricks to speed up performance. I will address this in a subsequent PR. Stay tuned!

So if you want to gain the performance benefit, here is what you need to do:

  • Run your app on Java 24 EA b22+.
  • Replace all occurrences of new StringReader(x) and new CharSequenceReader(x) by Reader.of(x) (see the before/after sketch below).
  • If x ends with .toString() then remove that trailing .toString() – unless the left side of x is effectively not a CharSequence.
  • Note: If you actually use multiple threads to access the Reader, don’t stick with StringReader, but simply surround your calls by a modern means of synchronization, like a Lock – locks are faster than synchronized.
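
To make the migration concrete, here is a minimal before/after sketch, assuming a StringBuilder named json that was built elsewhere:

import java.io.Reader;
import java.io.StringReader;

// Before: copies the whole content into a temporary String, then synchronizes every read
Reader before = new StringReader(json.toString());

// After (JDK 24+): reads the CharSequence directly; no copy, no synchronization
Reader after = Reader.of(json);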

Please exterminate StringReader and adopt Reader.of() ASAP!

I would be happy if you could report the results. Just leave a comment!

So far for today! PARTY ON! 🤘


by Markus Karg at November 02, 2024 01:21 PM

Rising Momentum in Enterprise Java: Insights from the 2024 Jakarta EE Developer Survey Report

by Tatjana Obradovic at October 15, 2024 03:47 PM

The seventh annual Jakarta EE Developer Survey Report is now available! Each year, this report delivers crucial insights into the state of enterprise Java and its trajectory, providing a comprehensive view of how developers, architects, and technology leaders are adopting Java to meet the growing demands of modern cloud-native applications.

The 2024 survey, which gathered input from over 1400 participants, paints a clear picture of the current state of enterprise Java and where it may be headed in the future.

Jakarta EE continues to be at the forefront of this evolution, as adoption continues to accelerate across the enterprise landscape. Our survey finds that usage of Jakarta EE for building cloud native Java applications has grown from 53% to 60% since last year. While Spring/Spring Boot remains the leading Java framework for cloud native applications, both Jakarta EE and MicroProfile have seen notable growth, highlighting a healthy diversity of choices for developers building modern enterprise Java applications. 

32% of respondents have now migrated to Jakarta EE from Java EE, up from 26% in 2023. This marks a clear trend as enterprises shift towards more modern, cloud-friendly architectures. The transition to Jakarta EE 10, in particular, has been rapid, with adoption doubling to 34% from the previous year. 

We’re also seeing a gradual shift away from older versions of Java in favour of more recent LTS versions. Usage of Java 17 has grown to 56%, up from 37% in 2023, and Java 21 has achieved a notable adoption rate of 30% in its first year of availability. Meanwhile, usage of the older Java EE 8 has declined. 

Looking to the Future of Jakarta EE

The 2024 Jakarta EE Developer Survey Report not only provides a clear picture of the current challenges and priorities of enterprise Java developers, but also shows us what they hope to see from Jakarta EE in the future.

The survey highlights several key priorities for the Jakarta EE community moving forward:

  • Enhanced support for Kubernetes and microservices architectures
  • Better alignment with Java SE features
  • Improvements in testing support
  • Faster innovation to keep pace with enterprise needs

These priorities reflect the real-world challenges that developers and enterprises face as they build and scale cloud native applications. With the release of Jakarta EE 11 fast approaching, work is already underway on future Jakarta EE releases, and these insights are crucial to the direction of this effort.

We invite you to take a look at the full report and discover more critical findings. Don’t miss the opportunity to see how the future of enterprise Java is unfolding before your eyes.

Learn more about Jakarta EE and the Jakarta EE Working Group at jakarta.ee 

 

 


by Tatjana Obradovic at October 15, 2024 03:47 PM

The Generational Z Garbage Collector (ZGC)

by Alexius Dionysius Diakogiannis at June 17, 2024 05:51 AM

The Generational Z Garbage Collector (GenZGC) in JDK 21 represents a significant evolution in Java’s approach to garbage collection, aiming to enhance application performance through more efficient memory management. This advancement builds upon the strengths of the Z Garbage Collector (ZGC) by introducing a generational approach to garbage collection within the JVM.

Design Rationale and Operational Mechanics

Generational Hypothesis: Generational ZGC leverages the “weak generational hypothesis,” which posits that most objects die young. By dividing the heap into young and old regions, GenZGC can focus its efforts on the young region where most objects become unreachable, thereby optimizing garbage collection efficiency and reducing CPU overhead.

Heap Division and Collection Cycles: The heap is divided into two logical parts: the young generation and the old generation. Newly allocated objects are placed in the young generation, which is frequently scanned for garbage collection. Objects that survive several collection cycles are then promoted to the old generation, which is scanned less often. This division allows for more frequent collection of short-lived objects while reducing the overhead of collecting long-lived objects.

Performance Implications

Throughput and Latency: Internal performance tests have shown that Generational ZGC offers about a 10% improvement in throughput over its single-generation predecessors in both JDK 17 and JDK 21, despite a slight regression in average latency measured in microseconds. However, the most notable improvement is observed in maximum pause times, with a 10-20% improvement in P99 pause times. This reduction in pause times significantly enhances the predictability and responsiveness of applications, particularly those requiring low latency.

Allocation Stalls

A crucial advantage of Generational ZGC is its ability to mitigate allocation stalls, which occur when the rate of object allocation outpaces the garbage collector’s ability to reclaim memory. This capability is particularly beneficial in high-throughput applications, such as those using Apache Cassandra, where Generational ZGC maintains performance stability even under high concurrency levels.

Practical Considerations and Adoption

Transition and Adoption: While JDK 21 introduces Generational ZGC, single-generation ZGC remains the default for now. Developers can opt into using Generational ZGC through JVM arguments (`-XX:+UseZGC -XX:+ZGenerational`). The plan is for Generational ZGC to eventually become the default, with single-generation ZGC being deprecated and removed. This phased approach allows developers to gradually adapt to the new system.
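
As a quick way to verify which collector is active, the sketch below lists the garbage collector beans registered by the running JVM; the exact bean names differ between collectors and JDK builds, so treat the expected output as an assumption:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {
    public static void main(String[] args) {
        // Launch with: -XX:+UseZGC -XX:+ZGenerational
        // With Generational ZGC you should see separate minor/major collector entries
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount() + " collections");
        }
    }
}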

Diagnostic and Profiling Tools: For those evaluating or transitioning to Generational ZGC, tools like GC logging and JDK Flight Recorder (JFR) offer valuable insights into GC behavior and performance. GC logging, accessible via the `-Xlog` argument, and JFR data can be analyzed in JDK Mission Control (JMC) to assess garbage collection behavior and application performance implications.

Conclusion

Generational ZGC represents a significant step forward in Java’s garbage collection technology, offering improved throughput, reduced pause times, and enhanced overall application performance. Its design reflects a deep understanding of application memory management needs, particularly the efficient collection of short-lived objects. As Java applications continue to grow in complexity and scale, the adoption of Generational ZGC could be a pivotal factor in achieving the performance goals of modern, high-demand applications.

The transition from Java 17 to Java 21 heralds a new era of Java development, characterized by significant improvements in performance, security, and developer-friendly features. The API changes and enhancements discussed above are just the tip of the iceberg, with Java 21 offering a wealth of other features and improvements designed to cater to the evolving needs of modern application development.

As developers, embracing Java 21 and leveraging its new features and improvements can significantly impact the efficiency, performance, and security of Java applications. Whether it’s through the enhanced I/O capabilities, improved serialization exception handling, or the new Unicode support in the `Character` class, Java 21 offers a compelling upgrade path from Java 17, promising to enhance the Java ecosystem for years to come.

In conclusion, the evolution from Java 17 to Java 21 is a testament to the ongoing commitment to advancing Java as a language and platform. By exploring and adopting these new features, developers can ensure their Java applications remain cutting-edge, secure, and performant in the face of future challenges.


by Alexius Dionysius Diakogiannis at June 17, 2024 05:51 AM

Back to the Future with Cross-Context Dispatch

by gregw at May 16, 2024 01:31 AM

Cross-Context Dispatch reintroduced to Jetty-12

With the release of Jetty 12.0.8, we’re excited to announce the (re)implementation of a somewhat maligned and deprecated feature: Cross-Context Dispatch. This feature, while having been part of the Servlet specification for many years, has seen varied levels of use and support. Its re-introduction in Jetty 12.0.8, however, marks a significant step forward in our commitment to supporting the diverse needs of our users, especially those with complex legacy and modern web applications.

Understanding Cross-Context Dispatch

Cross-Context Dispatch allows a web application to forward requests to or include responses from another web application within the same Jetty server. Although it has been available as part of the Servlet specification for an extended period, it was deemed optional with Servlet 6.0 of EE10, reflecting its status as a somewhat niche feature.
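
In Servlet API terms, a cross-context dispatch looks roughly like the sketch below (shown here with the jakarta namespace). The "/reports" context path and "/render" target are illustrative assumptions, and getContext() may return null when cross-context dispatch is not enabled in the server configuration:

import java.io.IOException;
import jakarta.servlet.RequestDispatcher;
import jakarta.servlet.ServletContext;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class ReportProxyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Look up another web application deployed on the same server
        ServletContext reports = request.getServletContext().getContext("/reports");
        if (reports == null) {
            response.sendError(HttpServletResponse.SC_NOT_FOUND, "Cross-context dispatch unavailable");
            return;
        }
        RequestDispatcher dispatcher = reports.getRequestDispatcher("/render");
        dispatcher.forward(request, response);
    }
}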

Initially, Jetty 12 moved away from supporting Cross-Context Dispatch, driven by a desire to simplify the server architecture amidst substantial changes, including support for multiple environments (EE8, EE9, and EE10). These updates mean Jetty can now deploy web applications using either the javax namespace (EE8) or the jakarta namespace (EE9 and EE10), all using the latest optimized jetty core implementations of HTTP: v1, v2 or v3.

Reintroducing Cross-Context Dispatch

The decision to reintegrate Cross-Context Dispatch in Jetty 12.0.8 was influenced significantly by the needs of our commercial clients, some of whom are still leveraging this feature in their legacy applications. Our commitment to supporting our clients’ requirements, including the need to maintain and extend legacy systems, remains a top priority.

One of the standout features of the newly implemented Cross-Context Dispatch is its ability to bridge applications across different environments. This means a web application based on the javax namespace (EE8) can now dispatch requests to, or include responses from, a web application based on the jakarta namespace (EE9 or EE10). This functionality opens up new pathways for integrating legacy applications with newer, modern systems.

Looking Ahead

The reintroduction of Cross-Context Dispatch in Jetty 12.0.8 is more than just a nod to legacy systems; it can be used as a bridge to the future of Java web development. By allowing for seamless interactions between applications across different Servlet environments, Jetty-12 opens the possibility of incremental migration away from legacy web applications.


by gregw at May 16, 2024 01:31 AM

HTTP Patch with Jersey Client on JDK 16+

by Jan at April 26, 2024 11:26 AM

Jakarta REST provides a Client API, implemented by Jersey Client. The default implementation is based on the Java HttpUrlConnection. Unfortunately, the HttpUrlConnection supports only HTTP methods defined in the original HTTP/1.1 RFC 2616. It will never support for instance HTTP … Continue reading

by Jan at April 26, 2024 11:26 AM

Monitoring Java Virtual Threads

by Jean-François James at January 10, 2024 05:14 PM

Introduction In my previous article, we’ve seen what Virtual Threads (VTs) are, how they differ from Platform Threads (PTs), and how to use them with Helidon 4. In simple terms, VTs bring in a new concurrency model. Instead of using many PTs that can get blocked, we use a few of them that hardly ever […]

by Jean-François James at January 10, 2024 05:14 PM

Choosing Connector in Jersey

by Jan at October 02, 2023 01:49 PM

Jersey is using JDK HttpUrlConnection for sending HTTP requests by default. However, there are cases where the default HttpUrlConnection cannot be used, or where using any other HTTP Client available suits the customer’s needs better. For this, Jersey comes with … Continue reading

by Jan at October 02, 2023 01:49 PM

New Jetty 12 Maven Coordinates

by Joakim Erdfelt at September 20, 2023 09:42 PM

Now that Jetty 12.0.1 is released to Maven Central, we’ve started to get a few questions about where some artifacts are, or when we intend to release them (as folks cannot find them).

Things have changed with Jetty, starting with the 12.0.0 release.

First, is that our historical versioning of <servlet_support>.<major>.<minor> is no longer being used.

With Jetty 12, we are now using a more traditional <major>.<minor>.<patch> versioning scheme for the first time.

Also new in Jetty 12 is that the Servlet layer has been separated away from the Jetty Core layer.

The Servlet layer has been moved to the new Environments concept introduced with Jetty 12.

Environment  Jakarta EE  Servlet  Jakarta Namespace  Jetty GroupID
ee8          EE8         4        javax.servlet      org.eclipse.jetty.ee8
ee9          EE9         5        jakarta.servlet    org.eclipse.jetty.ee9
ee10         EE10        6        jakarta.servlet    org.eclipse.jetty.ee10

Jetty Environments

This means the old Servlet specific artifacts have been moved to environment specific locations both in terms of Java namespace and also their Maven Coordinates.

Example:

Jetty 11 – Using Servlet 5
Maven Coord: org.eclipse.jetty:jetty-servlet
Java Class: org.eclipse.jetty.servlet.ServletContextHandler

Jetty 12 – Using Servlet 6
Maven Coord: org.eclipse.jetty.ee10:jetty-ee10-servlet
Java Class: org.eclipse.jetty.ee10.servlet.ServletContextHandler

We have a migration document which lists all of the migrated locations from Jetty 11 to Jetty 12.

These new versioning and environment features built into Jetty mean that new major versions of Jetty will not be as common as they have been in the past.





by Joakim Erdfelt at September 20, 2023 09:42 PM

Running MicroProfile reactive with Helidon Nima and Virtual Threads

by Jean-François James at September 20, 2023 05:29 PM

I recently became interested in Helidon as part of my investigations into Java Loom. Indeed, version 4 is natively based on Virtual Threads. Before going any further, let’s introduce quickly Helidon. Helidon is an Open Source (source on GitHub, Apache V2 licence) managed by Oracle that enables to develop lightweight cloud-native Java application with fast […]

by Jean-François James at September 20, 2023 05:29 PM

New Survey: How Do Developers Feel About Enterprise Java in 2023?

by Mike Milinkovich at September 19, 2023 01:00 PM

The results of the 2023 Jakarta EE Developer Survey are now available! For the sixth year in a row, we’ve reached out to the enterprise Java community to ask about their preferences and priorities for cloud native Java architectures, technologies, and tools, their perceptions of the cloud native application industry, and more.

From these results, it is clear that open source cloud native Java is on the rise following the release of Jakarta EE 10. The number of respondents who have migrated to Jakarta EE continues to grow, with 60% saying they have already migrated, or plan to do so within the next 6-24 months. These results indicate steady growth in the use of Jakarta EE and a growing interest in cloud native Java overall.

When comparing the survey results to 2022, usage of Jakarta EE to build cloud native applications has remained steady at 53%. Spring/Spring Boot, which relies on some Jakarta EE specifications, continues to be the leading Java framework in this category, with usage growing from 57% to 66%. 

Since the September 2022 release, Jakarta EE 10 usage has grown to 17% among survey respondents. This community-driven release is attracting a growing number of application developers to adopt Jakarta EE 10 by offering new features and updates to Jakarta EE. An equal number of developers are running Jakarta EE 9 or 9.1 in production, while 28% are running Jakarta EE 8. That means the increase we are seeing in the migration to Jakarta EE is mostly due to the adoption of Jakarta EE 10, as compared to Jakarta EE 9/9.1 or Jakarta EE 8.

The Jakarta EE Developer Survey also gives us a chance to get valuable feedback on features from the latest Jakarta EE release, as well as what direction the project should take in the future. 

Respondents are most excited about the Jakarta EE Core Profile, which was introduced in the Jakarta EE 10 release as a subset of Web Profile specifications designed for microservices and ahead-of-time compilation. When it comes to future releases, the community is prioritizing better support for Kubernetes and microservices, as well as adapting Java SE innovations to Jakarta EE — a priority that has grown in popularity since 2022. This is a good indicator that the Jakarta EE 11 release plan is heading in the right direction by adopting new Java SE 21 features.

2,203 developers, architects, and other tech professionals participated in the survey, a 53% increase from last year. This year’s survey was also available in Chinese, Japanese, Spanish & Portuguese, making it easier for Java enthusiasts around the world to share their perspectives.  Participation from the Chinese Jakarta EE community was particularly strong, with over 27% of the responses coming from China. By hearing from more people in the enterprise Java space, we’re able to get a clearer picture of what challenges developers are facing, what they’re looking for, and what technologies they are using. Thank you to everyone who participated! 

Learn More

We encourage you to download the report for a complete look at the enterprise Java ecosystem. 

If you’d like to get more information about Jakarta EE specifications and our open source community, sign up for one of our mailing lists or join the conversation on Slack. If you’d like to participate in the Jakarta EE community, learn how to get started on our website.


by Mike Milinkovich at September 19, 2023 01:00 PM

Best Practices for Effective Usage of Contexts and Dependency Injection (CDI) in Java Applications

by Rhuan Henrique Rocha at August 30, 2023 10:55 PM

Looking at the web, we don’t see many articles talking about best practices for Contexts and Dependency Injection. Hence, I have decided to discuss the utilization of Contexts and Dependency Injection (CDI) using best practices, providing a comprehensive guide on its implementation.

CDI is a Jakarta specification in the Java ecosystem that allows developers to use dependency injection, manage contexts, and inject components in an easier way. The article https://www.baeldung.com/java-ee-cdi defines CDI as follows:

CDI turns DI into a no-brainer process, boiled down to just decorating the service classes with a few simple annotations, and defining the corresponding injection points in the client classes.

If you want to learn the CDI concepts you can read Baeldung’s post and Otavio Santana’s post. Here, in this post, we will focus on the best practices topic.

In fact, CDI is a powerful framework that allows developers to use Dependency Injection (DI) and Inversion of Control (IoC). However, we have one question here: How tightly do we want our application to be coupled to the framework? Note that I’m not saying you cannot couple your application to a framework, but you should think about it, think about the coupling level, and think about the tradeoffs. For me, coupling an application to a framework is not wrong, but doing it without thinking about the coupling level and the costs and tradeoffs is wrong.

It is impossible to add a framework to your application without coupling your application at least minimally. Even if your application does not have a coupling expressed in the code, you probably have a behavioral coupling; that is, a behavior in your application depends on a framework’s behavior, and in some cases you cannot guarantee that another framework will provide similar behavior if things change.

Best Practices for Injecting Dependencies

When writing code in Java, we often create classes that rely on external dependencies to perform their tasks. To achieve this using CDI, we employ the @Inject annotation, which allows us to inject these dependencies. However, it’s essential to be mindful of whether we are making the class overly dependent on CDI for its functionality, as it may limit its usability without CDI. Hence, it’s crucial to carefully consider the tightness of this dependency. As an illustration, let’s examine the code snippet below. Here, we encounter a class that is tightly coupled to CDI in order to carry out its functionality.

public class ImageRepository {
    @Inject
    private StorageProvider storageProvider;

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

As you can see, the class ImageRepository has a dependency on StorageProvider, which is injected via a CDI annotation. However, the storageProvider field is private and we have neither a setter method nor a constructor that allows us to pass this dependency in. This means the class cannot work without a CDI context; that is, ImageRepository is tightly coupled to CDI.

This coupling doesn’t provide any benefits for the application, instead, it only causes harm both to the application itself and potentially to the testing of this class.

Look at the code refactored to reduce the coupling to CDI.

public class ImageRepository implements Serializable {

    private StorageProvider storageProvider;

    @Inject
    public ImageRepository(StorageProvider storageProvider){
        this.storageProvider = storageProvider;
    }

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

As you can see, the ImageRepository class has a constructor that receives the StorageProvider as a constructor argument. This approach follows what is said in the Clean Code book.

“True Dependency Injection goes one step further. The class takes no direct steps to resolve its dependencies; it is completely passive. Instead, it provides setter methods or constructor arguments (or both) that are used to inject the dependencies.”

(from “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin)

Without a constructor or a setter method, the injection depends on CDI. However, we still have one question about this class: it has a CDI annotation and depends on CDI to compile. I’m not saying it is always a problem, but it can be one, especially if you are writing a framework. Coupling a framework to another framework can be a problem in cases where you want to use your framework with another, mutually exclusive one. In general, it should be avoided by frameworks. Thus, how can we fully decouple the ImageRepository class from CDI?

CDI Producer Method

A CDI producer is a source of objects that can be injected by CDI. It is like a factory for a type of object. Look at the code below:

public class ImageRepositoryProducer {

    @Produces
    public ImageRepository createImageRepository(){
        StorageProvider storageProvider = CDI.current().select(StorageProvider.class).get();
        return new ImageRepository(storageProvider);
    }
}

Please note that we are constructing just one object, while the StorageProvider object is resolved by CDI. You should avoid constructing more than one object within a producer method, as this interlinks the construction of these objects and may lead to complications if you intend to designate distinct scopes for them. You can create a separate producer method to produce the StorageProvider, as sketched below.
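
For illustration, a separate producer for the StorageProvider might look like the sketch below; FileSystemStorageProvider and its constructor argument are hypothetical stand-ins for a real implementation:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

public class StorageProviderProducer {

    @Produces
    @ApplicationScoped
    public StorageProvider createStorageProvider() {
        // Hypothetical implementation; replace with your real provider
        return new FileSystemStorageProvider("/var/images");
    }
}

With such a producer in place, createImageRepository could also receive the StorageProvider as a method parameter instead of calling CDI.current(), since producer method parameters are injection points.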

This is the ImageRepository class refactored.

public class ImageRepository implements Serializable {

    private StorageProvider storageProvider;

    public ImageRepository(StorageProvider storageProvider){
        this.storageProvider = storageProvider;
    }

    public void saveImage(File image){
        //Validate the file to check if it is an image.
        //Apply some logic if needed
        storageProvider.save(image);
    }
}

Please note that the ImageRepository class does not know anything about CDI and is fully decoupled from it. The CDI-specific code is inside the ImageRepositoryProducer, which can be extracted to another module if needed.

CDI Interceptor

The CDI Interceptor is a very cool feature of CDI that provides a nice CDI-based way to work with cross-cutting tasks (such as auditing). Here is a short definition from my book:

“A CDI interceptor is a class that wraps the call to a method — this method is called target method — that runs its logic and proceeds the call either to the next CDI interceptor if it exists, or the target method.”

(from “Jakarta EE for Java Developers” by Rhuan Rocha.)

The purpose of this article is not to discuss what a CDI interceptor is, but to discuss CDI best practices. So if you want to read more about CDI interceptor, check out the book Jakarta EE for Java Developers.

As said, the CDI interceptor is very interesting. I am quite fond of this feature and have incorporated it into numerous projects. However, using this feature comes with certain trade-offs for the application.

When you use a CDI interceptor you couple the class to CDI, because you annotate the class with a custom annotation that is an interceptor binding. Look at the example below, from the Jakarta EE for Java Developers book:

@ApplicationScoped
public class SecuredBean {

   @Authentication
   public String generateText(String username) throws AuthenticationException {
       return "Welcome " + username;
   }
}

As you can see, we should define a scope, as it should be a bean managed by CDI, and we should annotate the class with the interceptor binding. Hence, if you eliminate CDI from your application, the interceptor’s logic won’t execute, and the class won’t compile. With this, your application has a behavioral coupling and a compile-time dependency on the CDI library jar.

As said, it is not necessarily bad; however, you should consider whether it is a problem in your context.
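
For completeness, here is a minimal sketch (two separate types) of what the @Authentication binding and its interceptor could look like; the names are taken from the example above, and the actual authentication logic is omitted:

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import jakarta.interceptor.InterceptorBinding;

@InterceptorBinding
@Target({TYPE, METHOD})
@Retention(RUNTIME)
public @interface Authentication {
}

import jakarta.annotation.Priority;
import jakarta.interceptor.AroundInvoke;
import jakarta.interceptor.Interceptor;
import jakarta.interceptor.InvocationContext;

@Authentication
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
public class AuthenticationInterceptor {

    @AroundInvoke
    public Object authenticate(InvocationContext context) throws Exception {
        // Verify the caller here and throw AuthenticationException on failure
        return context.proceed();  // continue to the next interceptor or the target method
    }
}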

CDI Event

The CDI Event is a great feature within the CDI framework that I have employed extensively in various applications. This functionality provides an implementation of the Observer Pattern, enabling us to emit events that are then observed by observers that can execute tasks asynchronously. However, if we add CDI code inside our class to emit events, we couple the class to CDI. Again, this is not an error, but you should be sure it is not a problem for your solution. Look at the example below.

import jakarta.enterprise.event.Event;

public class User{

 private Event<Email> emailEvent;

 public User(Event<Email> emailEvent){
   this.emailEvent = emailEvent;
 }

 public void register(){
   //logic
   emailEvent.fireAsync(Email.of(from, to, subject, content));
 }
}

Note that we are receiving the Event class, which comes from CDI, to emit the event. This means the class is coupled to CDI and depends on it to work. One way to avoid this is to create your own class to emit the event and abstract away which mechanism (CDI or otherwise) actually emits it. Look at the example below.

import net.rhuan.example.EventEmitter;

public class User{

 private EventEmitter<Email> emailEventEmitter;

 public User(EventEmitter<Email> emailEventEmitter){
   this.emailEventEmitter = emailEventEmitter;
 }

 public void register(){
   //logic
   emailEventEmitter.emit(Email.of(from, to, subject, content));
 }
}

Now, your class is agnostic to the emitter of the event. You can use CDI or another mechanism, according to the EventEmitter implementation; a CDI-based implementation is sketched below.
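
A CDI-based implementation of that abstraction could look like the following sketch, assuming EventEmitter declares a single emit method as used above:

import jakarta.enterprise.event.Event;
import jakarta.inject.Inject;

public class CdiEmailEventEmitter implements EventEmitter<Email> {

    private final Event<Email> emailEvent;

    @Inject
    public CdiEmailEventEmitter(Event<Email> emailEvent) {
        this.emailEvent = emailEvent;
    }

    @Override
    public void emit(Email email) {
        // Delegate to CDI; swapping this class out removes the CDI dependency entirely
        emailEvent.fireAsync(email);
    }
}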

Conclusion

CDI is an amazing Jakarta EE specification, widely used in many Java frameworks and applications. Carefully determining the degree of integration between our application and the framework holds immense significance. This intentional decision becomes an important factor in proactively mitigating challenges during the solution’s evolution, especially when working on the development of a framework.

If you have a question or want to share your thoughts, feel free to add comments or send me messages about it. 🙂


by Rhuan Henrique Rocha at August 30, 2023 10:55 PM

The Jakarta EE 2023 Developer Survey is now open!

by Tatjana Obradovic at March 29, 2023 09:24 PM

It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25th.


I would like to invite you to take this year’s six-minute survey for the chance to share your thoughts and ideas for future Jakarta EE releases, help us discover the uptake of the latest Jakarta EE versions, and identify trends that inform industry decision-makers.

Please share the survey link with your contacts: Java developers, architects, and stakeholders in the enterprise Java ecosystem, and invite them to participate in the 2023 Jakarta EE Developer Survey!

 



by Tatjana Obradovic at March 29, 2023 09:24 PM

What is Apache Camel and how does it work?

by Rhuan Henrique Rocha at February 16, 2023 11:14 PM

In this post, I will talk to you about what Apache Camel is. It is a brief introduction before I start posting practical content. So, let’s understand what this framework is.

Apache Camel is an open source Java integration framework that allows different applications to communicate with each other efficiently. It provides a platform for integrating heterogeneous software systems. Camel is designed to make application integration easy, simplifying the complexity of communication between different systems.

Apache Camel is written in Java and can be run on a variety of platforms, including Jakarta EE application servers and OSGi-based application containers, and can run inside cloud environments using Spring Boot or Quarkus. Camel also supports a wide range of network protocols and message formats, including HTTP, FTP, SMTP, JMS, SOAP, XML, and JSON.

Camel uses Enterprise Integration Patterns (EIP) to define the different forms of integration. EIPs are a set of design patterns commonly used in system integration. Camel implements many of these patterns, making it a powerful tool for integration solutions.

Additionally, Camel has a set of components that allow it to integrate with different systems. The components can be used to access different resources, such as databases, web services, and message systems. Camel also supports content-based routing, which means it can route messages based on their content (see the sketch below).
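
As a hedged illustration of content-based routing, here is a minimal Camel route in the Java DSL; the endpoint URIs and the XPath expression are illustrative assumptions, and the file and JMS components must be available on the classpath:

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Route incoming orders to different queues based on message content
        from("file:orders/inbox")
            .choice()
                .when(xpath("/order/@priority = 'high'"))
                    .to("jms:queue:urgent-orders")
                .otherwise()
                    .to("jms:queue:standard-orders");
    }
}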

Camel is highly configurable and extensible, allowing developers to customize its functionality to their needs. It also supports the creation of integration routes at runtime, which means that routes can be defined and changed without the need to restart the system.

In summary, Camel is a powerful and flexible tool for software system integration. It allows different applications to communicate efficiently and effectively, simplifying the complexity of system integration. Camel is a reliable and widely used framework that can help improve the efficiency and effectiveness of system integration in a variety of environments.

If you want to start using this framework, you can access the documentation at the site. This is my first post about Apache Camel, and I will post more practical content about this amazing framework.


by Rhuan Henrique Rocha at February 16, 2023 11:14 PM

Jakarta EE and MicroProfile at EclipseCon Community Day 2022

by Reza Rahman at November 19, 2022 10:39 PM

Community Day at EclipseCon 2022 was held in person on Monday, October 24 in Ludwigsburg, Germany. Community Day has always been a great event for Eclipse working groups and project teams, including Jakarta EE/MicroProfile. This year was no exception. A number of great sessions were delivered from prominent folks in the community. The following are the details including session materials. The agenda can still be found here. All the materials can be found here.

Jakarta EE Community State of the Union

The first session of the day was a Jakarta EE community state of the union delivered by Tanja Obradovic, Ivar Grimstad and Shabnam Mayel. The session included a quick overview of Jakarta EE releases, how to get involved in the work of producing the specifications, a recap of the important Jakarta EE 10 release and as well as a view of what’s to come in Jakarta EE 11. The slides are embedded below and linked here.

Jakarta Concurrency – What’s Next

Payara CEO Steve Millidge covered Jakarta Concurrency. He discussed the value proposition of Jakarta Concurrency, the innovations delivered in Jakarta EE 10 (including CDI based @Asynchronous, @ManagedExecutorDefinition, etc) and the possibilities for the future (including CDI based @Schedule, @Lock, @MaxConcurrency, etc). The slides are embedded below and linked here. There are some excellent code examples included.

Jakarta Security – What’s Next

Werner Keil covered Jakarta Security. He discussed what’s already done in Jakarta EE 10 (including OpenID Connect support) and everything that’s in the works for Jakarta EE 11 (including CDI based @RolesAllowed). The slides are embedded below and linked here.

Jakarta Data – What’s Coming

IBM’s Emily Jiang kindly covered Jakarta Data. This is a brand new specification aimed towards Jakarta EE 11. It is a higher level data access abstraction similar to Spring Data and DeltaSpike Data. It encompasses both Jakarta Persistence (JPA) and Jakarta NoSQL. The slides are embedded below and linked here. There are some excellent code examples included.

MicroProfile Community State of the Union

Emily also graciously delivered a MicroProfile state of the union. She covered what was delivered in MicroProfile 5, including alignment with Jakarta EE 9.1. She also discussed what’s coming soon in MicroProfile 6 and beyond, including very clear alignment with the Jakarta EE 10 Core Profile. The slides are embedded below and linked here. There are some excellent technical details included.

MicroProfile Telemetry – What’s Coming

Red Hat’s Martin Stefanko covered MicroProfile Telemetry. Telemetry is a brand new specification being included in MicroProfile 6. The specification essentially supersedes MicroProfile Tracing and possibly MicroProfile Metrics too in the near future. This is because the OpenTracing and OpenCensus projects merged into a single project called OpenTelemetry. OpenTelemetry is now the de facto standard defining how to collect, process, and export telemetry data in microservices. It makes sense that MicroProfile moves forward with supporting OpenTelemetry. The slides are embedded below and linked here. There are some excellent technical details and code examples included.

See You There Next Time?

Overall, it was an honor to organize the Jakarta EE/MicroProfile agenda at EclipseCon Community Day one more time. All speakers and attendees should be thanked. Perhaps we will see you at Community Day next time? It is a great way to hear from some of the key people driving Jakarta EE and MicroProfile. You can attend just Community Day even if you don’t attend EclipseCon. The fee is modest and includes lunch as well as casual networking.


by Reza Rahman at November 19, 2022 10:39 PM

JFall 2022

November 04, 2022 09:56 AM

An impression of JFall by yours truly.

keynote

Sold out!

Packed room!

Very nice first keynote by Saby Sengupta about the path to transform.
He is a really nice storyteller. He had us going.

Dutch people, wooden shoes, wooden hat, would not listen

  • Saby

lol

Get the answer to three why questions. If the answers stop after the first why, it may not be a good idea.

This great first keynote is followed by the very well known Venkat Subramaniam about The Art of Simplicity.

The question is not what can we add? But What can we remove?

Simple fails less

Simple is elegant

All in all a great keynote! Loved it.

Design Patterns in the light of Lambdas

By Venkat Subramaniam

The GOF are kind of the grandparents of our industry. The worst thing they have done is write the damn book.
— Venkat

The quote is in the context that writing down grandma’s fantastic recipe does not work, as it is based on the skill of grandma and not the exact amounts of the ingredients.

The cleanup is the responsibility of the Resource class. Much better than asking developers to take care of it. It will be forgotten!

The more powerful a language becomes the less we need to talk about patterns. Patterns become practices we use. We do not need to put in extra effort.

I love his way of presenting, but this is one of those times - I guess - that he is hampered by his own success. The talk did not go deep into stuff. It covered just about 5 not-too-difficult subjects. I missed his speed and depth.

Still a great talk though.

lunch

Was actually very nice!

NLJUG update keynote

The Java Magazine was mentioned; we (as editors) had to shout for that!

Please contact me (@ivonet) if you have ambitions to either be an author or maybe even as a fellow editor of the magazine. We are searching for a new Editor now.

Then the voting for the Innovation Awards.

I kinda missed the next keynote by ING because I was playing with a Rubik’s cube, and I did not really like his talk.

jakarta EE 10 platform

by Ivar Grimstad

Ivar talks about the specification of Jakarta EE.

To create a lite version of CDI it is possible to start doing things at build time and facilitate other tools like GraalVM and Quarkus.

He gives nice demos on how to migrate code to work in the jakarta namespace.

To start your own Jakarta EE application, just go to start.jakarta.ee and follow the very simple UI instructions.

I am very proud to be the creator of that UI. Thanks, Ivar for giving me a shoutout for that during your talk. More cool stuff will follow soon.

Be prepared to do some namespace changes when moving from Java EE 8 to Jakarta EE.

All slides here

conclusion

I had a fantastic day. For me, it is mainly about the community and seeing all the people I know in the community. I totally love the vibe of the conference and I think it is one of the best organized venues.

See you at JSpring.

Ivo.


November 04, 2022 09:56 AM

Survey Says: Confidence Continues to Grow in the Jakarta EE Ecosystem

by Mike Milinkovich at September 26, 2022 01:00 PM

The results of the 2022 Jakarta EE Developer Survey are very telling about the current state of the enterprise Java developer community. They point to increased confidence about Jakarta EE and highlight how far Jakarta EE has grown over the past few years.

Strong Turnout Helps Drive Future of Jakarta EE

The fifth annual survey is one of the longest running and best-respected surveys of its kind in the industry. This year’s turnout was fantastic: From March 9 to May 6, a total of 1,439 developers responded. 

This is great for two reasons. First, obviously, these results help inform the Java ecosystem stakeholders about the requirements, priorities and perceptions of enterprise developer communities. The more people we hear from, the better picture we get of what the community wants and needs. That makes it much easier for us to make sure the work we’re doing is aligned with what our community is looking for. 

The other reason is that it helps us better understand how the cloud native Java world is progressing. By looking at what community members are using and adopting, what their top goals are and what their plans are for adoption, we can better understand not only what we should be working on today, but tomorrow and for the future of Jakarta EE. 

Findings Indicate Growing Adoption and Rising Expectations

Some of the survey’s key findings include:

  • Jakarta EE is the basis for the top frameworks used for building cloud native applications.
  • The top three frameworks for building cloud native applications, respectively, are Spring/Spring Boot, Jakarta EE and MicroProfile, though Spring/Spring Boot lost ground this past year. It’s important to note that Spring/Spring Boot relies on Jakarta EE developments for its operation and does not compete with Jakarta EE. Both are critical ingredients to the healthy enterprise Java ecosystem. 
  • Jakarta EE 9/9.1 usage increased year-over-year by 5%.
  • Java EE 8, Jakarta EE 8, and Jakarta EE 9/9.1 hit the mainstream with 81% adoption. 
  • While over a third of respondents planned to adopt, or already had adopted Jakarta EE 9/9.1, nearly a fifth of respondents plan to skip Jakarta EE 9/9.1 altogether and adopt Jakarta EE 10 once it becomes available. 
  • Most respondents said they have migrated to Jakarta EE already or planned to do so within the next 6-24 months.
  • The top three community priorities for Jakarta EE are:
    • Native integration with Kubernetes (same as last year)
    • Better support for microservices (same as last year)
    • Faster support from existing Java EE/Jakarta EE or cloud vendors (new this year)

Two of the results, when combined, highlight something interesting:

  • 19% of respondents planned to skip Jakarta EE 9/9.1 and go straight to 10 once it’s available 
  • The new community priority — faster support from existing Java EE/Jakarta EE or cloud vendors — really shows the growing confidence the community has in the ecosystem

After all, you wouldn’t wait for a later version and skip the one that’s already available, unless you were confident that the newer version was not only going to be coming out on a relatively reliable timeline, but that it was going to be an improvement. 

And this growing hunger from the community for faster support really speaks to how far the ecosystem has come. When we release a new version, like when we released Jakarta EE 9, it takes some time for the technology implementers to build the product based on those standards or specifications. The community is becoming more vocal in requesting those implementers to be more agile and quickly pick up the new versions. That’s definitely an indication that developer demand for Jakarta EE products is growing in a healthy way. 

Learn More

If you’d like to learn more about the project, there are several Jakarta EE mailing lists to sign up for. You can also join the conversation on Slack. And if you want to get involved, start by choosing a project, sign up for its mailing list and start communicating with the team.


by Mike Milinkovich at September 26, 2022 01:00 PM

Jakarta EE 10 has Landed!

by javaeeguardian at September 22, 2022 03:48 PM

The Jakarta EE Ambassadors are thrilled to see Jakarta EE 10 being released! This is a milestone release that bears great significance to the Java ecosystem. Jakarta EE 8 and Jakarta EE 9.x were important releases in their own right in the process of transitioning Java EE to a truly open environment in the Eclipse Foundation. However, these releases did not deliver new features. Jakarta EE 10 changes all that and begins the vital process of delivering long pending new features into the ecosystem at a regular cadence.

There are quite a few changes that were delivered – here are some key themes and highlights:

  • CDI Alignment
    • @Asynchronous in Concurrency
    • Better CDI support in Batch
  • Java SE Alignment
    • Support for Java SE 11, Java SE 17
    • CompletionStage, ForkJoinPool, parallel streams in Concurrency
    • Bootstrap APIs for REST
  • Closing standardization gaps
    • OpenID Connect support in Security, @ManagedExecutorDefinition, UUID as entity keys, more SQL support in Persistence queries, multipart/form-data support in REST, @ClientWindowScoped in Faces, pure Java Faces views
    • CDI Lite/Core Profile to enable next generation cloud native runtimes – MicroProfile will likely align with CDI Lite/Jakarta EE Core
  • Deprecation/removal
    • @Context annotation in REST, EJB Entity Beans, embeddable EJB container, deprecated Servlet/Faces/CDI features
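
As one concrete taste of the CDI alignment, here is a minimal sketch of the new @Asynchronous annotation from Jakarta Concurrency 3.0 on a plain CDI bean (the service class and its payload are made up):

import jakarta.enterprise.concurrent.Asynchronous;
import jakarta.enterprise.context.ApplicationScoped;
import java.util.concurrent.CompletionStage;

@ApplicationScoped
public class ReportService {

    // The container intercepts the call and runs the method
    // on a managed executor; no EJB required
    @Asynchronous
    public CompletionStage<String> generateReport(String id) {
        // ... long-running work ...
        return Asynchronous.Result.complete("report-" + id);
    }
}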

While there are many features that we identified in our Jakarta EE 10 Contribution Guide that did not make it yet, this is still a very solid release that everyone in the Java ecosystem will benefit from, including Spring, MicroProfile and Quarkus. You can see here what was delivered, what’s on the way and what gaps still remain. You can try Jakarta EE 10 out now using compatible implementations like GlassFish, Payara, WildFly and Open Liberty. Jakarta EE 10 is proof that the community, including major stakeholders, has not only made it through the transition to the Eclipse Foundation but is now beginning to thrive once again.

Many Ambassadors helped make this release a reality such as Arjan Tijms, Werner Keil, Markus Karg, Otavio Santana, Ondro Mihalyi and many more. The Ambassadors will now focus on enabling the community to evangelize Jakarta EE 10 including speaking, blogging, trying out implementations, and advocating for real world adoption. We will also work to enable the community to continue to contribute to Jakarta EE by producing an EE 11 Contribution Guide in the coming months. Please stay tuned and join us.

Jakarta EE is truly moving forward – the next phase of the platform’s evolution is here!


by javaeeguardian at September 22, 2022 03:48 PM

Java Reflections unit-testing

by Vladimir Bychkov at July 13, 2022 09:06 PM

How can Java code that uses reflection be made more stable? Unit tests can help with this problem. This article introduces the annotations @CheckConstructor, @CheckField and @CheckMethod to create such unit tests automatically.
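
The article's annotations generate such tests for you; as a heavily simplified sketch of the underlying idea (plain JUnit 5, with a hypothetical Person class), a hand-written version could look like this:

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.api.Test;

class ReflectionContractTest {

    // Fails at build time if the constructor that is looked up
    // reflectively elsewhere in the code base is renamed or removed
    @Test
    void personKeepsItsStringConstructor() {
        assertDoesNotThrow(() -> Person.class.getDeclaredConstructor(String.class));
    }
}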

by Vladimir Bychkov at July 13, 2022 09:06 PM

Java EE - Jakarta EE Initializr

May 05, 2022 02:23 PM

Getting started with Jakarta EE just became even easier!

Get started

Hot new Update!

Moved from the Apache 2 license to the Eclipse Public License v2 for the newest version of the archetype, as described below, as a start for a possible collaboration with the Eclipse start project.

New Archetype with Jakarta EE 9

Jakarta EE 9 + Payara 5.2022.2 + MicroProfile 4.1 running on Java 17

  • And the docker image is also ready for x86_64 (amd64) AND aarch64 (arm64/v8) architectures!

May 05, 2022 02:23 PM

FOSDEM 2022 Conference Report

by Reza Rahman at February 21, 2022 12:24 AM

FOSDEM took place February 5-6. The European-based event is one of the most significant gatherings worldwide focused on all things Open Source. In recent years the event has added a devroom/track dedicated to Java, named "Friends of OpenJDK". The effort is led by my friend and former colleague Geertjan Wielenga. Due to the pandemic, the 2022 event was virtual once again. I delivered a couple of talks on Jakarta EE as well as Diversity & Inclusion.

Fundamentals of Diversity & Inclusion for Technologists

I opened the second day of the conference with my newest talk titled “Fundamentals of Diversity and Inclusion for Technologists”. I believe this is an overdue and critically important subject. I am very grateful to FOSDEM for accepting the talk. The reality for our industry remains that many people either have not yet started or are at the very beginning of their Diversity & Inclusion journey. This talk aims to start the conversation in earnest by explaining the basics. Concepts covered include unconscious bias, privilege, equity, allyship, covering and microaggressions. I punctuate the topic with experiences from my own life and examples relevant to technologists. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

Jakarta EE – Present and Future

Later the same day, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.

I am very happy to have had the opportunity to speak at FOSDEM. I hope to contribute again in the future.


by Reza Rahman at February 21, 2022 12:24 AM

Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability

December 12, 2021 10:00 PM

Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
So, until an official patch arrives, you can update the bundled logger version to the latest in a few simple steps:

# assumption: the commands below are run from ~/Downloads, so the
# unzip output matches the cp paths used further down
wget https://downloads.apache.org/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
unzip apache-log4j-2.15.0-bin.zip

# replace the Log4j jars bundled with the Infinispan server
cd /opt/infinispan-server-10.1.8.Final/lib/

rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./

Please note: the patch above is not official, but according to initial tests it works with no issues.


December 12, 2021 10:00 PM

JPA query methods: influence on performance

by Vladimir Bychkov at November 18, 2021 07:22 AM

The JPA 2.2/Jakarta Persistence 3.0 specification provides several methods to select data from the database. In this article we examine how these methods affect performance.
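
For orientation, these are the kinds of query methods the article compares; a minimal sketch using the Jakarta Persistence namespace (the Person entity and its named query are assumptions of mine):

import jakarta.persistence.EntityManager;
import jakarta.persistence.criteria.CriteriaBuilder;
import jakarta.persistence.criteria.CriteriaQuery;
import java.util.List;

public class PersonQueries {

    public List<Person> findAll(EntityManager em) {
        // 1. JPQL
        List<Person> jpql = em
                .createQuery("SELECT p FROM Person p", Person.class)
                .getResultList();

        // 2. Criteria API
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Person> cq = cb.createQuery(Person.class);
        cq.select(cq.from(Person.class));
        List<Person> criteria = em.createQuery(cq).getResultList();

        // 3. Named query declared on the entity
        List<Person> named = em
                .createNamedQuery("Person.findAll", Person.class)
                .getResultList();

        return jpql; // all three return equivalent results
    }
}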

by Vladimir Bychkov at November 18, 2021 07:22 AM

Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException

April 02, 2021 09:00 PM

WildFly provides great out-of-the-box load balancing support via the Undertow and mod_cluster subsystems.
Unfortunately, when the HTTP header size grows large enough (close to 16K), which is quite common in the JWT era, this pesky error happens:

ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.io.IOException: java.nio.BufferOverflowException
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:771)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:646)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:561)
 at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(AjpClientExchange.java:203)
 at io.undertow.client.ajp.AjpClientConnection.initiateRequest(AjpClientConnection.java:288)
 at io.undertow.client.ajp.AjpClientConnection.sendRequest(AjpClientConnection.java:242)
 at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction.run(ProxyHandler.java:561)
 at io.undertow.util.SameThreadExecutor.execute(SameThreadExecutor.java:35)
 at io.undertow.server.HttpServerExchange.dispatch(HttpServerExchange.java:815)
...
Caused by: java.nio.BufferOverflowException
 at java.nio.Buffer.nextPutIndex(Buffer.java:521)
 at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:297)
 at io.undertow.protocols.ajp.AjpUtils.putString(AjpUtils.java:52)
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(AjpClientRequestClientStreamSinkChannel.java:176)
 at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(AjpClientRequestClientStreamSinkChannel.java:290)
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:39)
 at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:32)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(AbstractFramedChannel.java:603)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(AbstractFramedChannel.java:742)
 at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(AbstractFramedChannel.java:735)
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(AbstractFramedStreamSinkChannel.java:267)
 at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(AbstractFramedStreamSinkChannel.java:244)
 at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(DetachableStreamSinkChannel.java:79)
 at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:754)

The same request sent directly to the backend server works fine. I tried to play with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.

A possible solution here is to switch the protocol from AJP to HTTP, which can be a bit less efficient but works well with big headers:

/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)

April 02, 2021 09:00 PM

Jakarta EE Cookbook

by Elder Moraes at July 06, 2020 07:19 PM

About one month ago I had the pleasure to announce the release of the second edition of my book, now called “Jakarta EE Cookbook”. By that time I had recorded a video about it, which you can watch here:

And then came a crazy month, and only now have I had the opportunity to write a few lines about it! 🙂

So, straight to the point, what you should know about the book (in case you have any interest in it).

Target audience

Java developers working on enterprise applications and that would like to get the best from the Jakarta EE platform.

Topics covered

I’m sure this is one of the most complete books of this field, and I’m saying it based on the covered topics:

  • Server-side development
  • Building services with RESTful features
  • Web and client-server communication
  • Security in the enterprise architecture
  • Jakarta EE standards (and how they save you time on a daily basis)
  • Deployment and management using some of the best Jakarta EE application servers
  • Microservices with Jakarta EE and Eclipse MicroProfile
  • CI/CD
  • Multithreading
  • Event-driven for reactive applications
  • Jakarta EE, containers & cloud computing

Style and approach

The book has the word “cookbook” in its name for a reason: it follows a 100% practical approach, with almost all working code available in the book (we only omitted the imports for the sake of space).

And speaking of the source code being available, it really is available on my GitHub: https://github.com/eldermoraes/javaee8-cookbook

PRs and stars are welcome! 🙂

Bonus content

The book has an appendix that would be worthy of another book! I tell readers how sharing knowledge has changed my career for the better and how you can apply what I’ve learned to your own career.

Surprise, surprise

In the first 24 hours of its release, the book reached 1st place on Amazon among other Java releases! Wow!

Of course, I’m more than happy and honored for such a warm welcome given to my baby… 🙂

If you are interested, we are in the very last days of the special price celebrating its release. You can take a look here: http://book.eldermoraes.com

Leave your comments if you need any clarification about it. See you!


by Elder Moraes at July 06, 2020 07:19 PM

Monitoring REST APIs with Custom JDK Flight Recorder Events

January 29, 2020 02:30 PM

The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.

In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests and more. We’ll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
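
As a rough sketch of the approach (the event name and fields are mine, not necessarily the post's actual code), a custom event for a single REST request could look like this:

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("com.example.RestRequest")
@Label("REST Request")
public class RestRequestEvent extends Event {

    @Label("Path")
    String path;

    @Label("HTTP Method")
    String method;
}

A JAX-RS filter would then create one event per request: call event.begin() before handling, fill in the fields, and call event.commit() afterwards; JFR records the duration between the two calls.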


January 29, 2020 02:30 PM

Enforcing Java Record Invariants With Bean Validation

January 20, 2020 04:30 PM

Record types are one of the most awaited features in Java 14; they promise to "provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data". One example where records should be beneficial is data transfer objects (DTOs), as e.g. found in the remoting layer of enterprise applications. Typically, certain rules should be applied to the attributes of such a DTO, e.g. in terms of allowed values.
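
A minimal sketch of the combination (my example, not the post's; shown with the jakarta.validation namespace, while the post's Java 14 timeframe would use javax.validation):

import jakarta.validation.Validation;
import jakarta.validation.Validator;
import jakarta.validation.constraints.Email;
import jakarta.validation.constraints.NotBlank;

// Constraints on record components are propagated to the generated
// fields, so a Validator can enforce the DTO's invariants
public record CustomerDto(@NotBlank String name, @Email String email) {
}

class Demo {
    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        validator.validate(new CustomerDto("", "not-an-email"))
                 .forEach(v -> System.out.println(v.getPropertyPath() + " " + v.getMessage()));
    }
}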

January 20, 2020 04:30 PM

Jakarta EE 8 CRUD API Tutorial using Java 11

by Philip Riecks at January 19, 2020 03:07 PM

As part of the Jakarta EE Quickstart Tutorials on YouTube, I’ve now created a five-part series to create a Jakarta EE CRUD API. Within the videos, I’m demonstrating how to start using Jakarta EE for your next application. Using the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed with the TDD (Test-Driven Development) technique.

The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5

Part I: Introduction to the application setup

This part covers the following topics:

  • Introduction to the Maven project skeleton
  • Flyway setup for Open Liberty
  • Derby JDBC connection configuration
  • Basic MicroShed Testing setup for TDD

Part II: Developing the endpoint to create entities

This part covers the following topics:

  • First JAX-RS endpoint to create Person entities
  • TDD approach using MicroShed Testing and the Liberty Maven Plugin
  • Store the entities using the EntityManager
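
As a rough sketch of what such a create endpoint looks like (my reconstruction under Jakarta EE 8's javax namespace, with an assumed Person entity, not the tutorial's actual code):

import javax.enterprise.context.RequestScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("people")
@RequestScoped
public class PersonResource {

    @PersistenceContext
    private EntityManager entityManager;

    // Stores the JSON payload as a new Person entity
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Transactional
    public Response create(Person person) {
        entityManager.persist(person);
        return Response.status(Response.Status.CREATED).build();
    }
}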

Part III: Developing the endpoints to read entities

This part covers the following topics:

  • Develop two JAX-RS endpoints to read entities
  • Read all entities and a single entity by its id
  • Handle non-present entities with a different HTTP status code

Part IV: Developing the endpoint to update entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to update entities
  • Update existing entities using HTTP PUT
  • Validate the client payload using Bean Validation

Part V: Developing the endpoint to delete entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to delete entities
  • Enhance the test setup for deterministic and repeatable integration tests
  • Remove the deleted entity from the database

The source code for the Maven CRUD API application is available on GitHub.

For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.

Have fun developing Jakarta EE CRUD API applications,

Phil

 

The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.


by Philip Riecks at January 19, 2020 03:07 PM

Deploy a Jakarta EE application to the root context

by Philip Riecks at January 07, 2020 06:24 AM

With the presence of Docker, Kubernetes and cheaper hardware, the deployment model of multiple applications inside one application server is a thing of the past. Now, you deploy one Jakarta EE application to one application server. This eliminates the need for different context paths. You can use the root context / for your Jakarta EE application. With this blog post, you’ll learn how to achieve this for each Jakarta EE application server.

The default behavior of Jakarta EE application servers

Without any further configuration, most of the Jakarta EE application servers deploy the application to a context path based on the filename of your .war. If you e.g. deploy your my-banking-app.war application, the server will use the context prefix /my-banking-app for your application. All your JAX-RS endpoints, Servlets, .jsp and .xhtml content are then available below this context, e.g. /my-banking-app/resources/customers.

This was important in the past, where you deployed multiple applications to one application server. Without the context prefix, the application server wouldn’t be able to route the traffic to the correct application.

As of today, the deployment model changed with Docker, Kubernetes and cheaper infrastructure. You usually deploy one .war within one application server running as a Docker container. Given this deployment model, the context prefix is irrelevant. Mapping the application to the root context / is more convenient.

If you configure a reverse proxy or an Ingress controller (in the Kubernetes world), you are happy if you can just route to / instead of remembering the actual context path (error-prone).

Deploying to root context: Payara & Glassfish

As Payara is a fork of Glassfish, the configuration for both is quite similar. The most convenient way for Glassfish is to place a glassfish-web.xml file in the src/main/webapp/WEB-INF folder of your application:

<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN"
  "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
  <context-root>/</context-root>
</glassfish-web-app>

For Payara the filename is payara-web.xml:

<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "https://raw.githubusercontent.com/payara/Payara-Server-Documentation/master/schemas/payara-web-app_4.dtd">
<payara-web-app>
	<context-root>/</context-root>
</payara-web-app>

Both also support configuring the context path of the application within their admin console. IMHO this is less convenient than the .xml file solution.

Deploying to root context: Open Liberty

Open Liberty also parses a proprietary deployment descriptor within src/main/webapp/WEB-INF: ibm-web-ext.xml

<web-ext
  xmlns="http://websphere.ibm.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-web-ext_1_0.xsd"
  version="1.0">
  <context-root uri="/"/>
</web-ext>

Furthermore, you can also configure the context of your application within your server.xml:

<server>
  <featureManager>
    <feature>servlet-4.0</feature>
  </featureManager>

  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

  <webApplication location="app.war" contextRoot="/" name="app"/>
</server>

Deploying to root context: WildFly

WildFly also has two simple ways of configuring the root context for your application. First, you can place a jboss-web.xml within src/main/webapp/WEB-INF:

<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.4//EN" "http://www.jboss.org/j2ee/dtd/jboss-web_4_0.dtd">
<jboss-web>
  <context-root>/</context-root>
</jboss-web>

Second, while copying your .war file to your Docker container, you can name it ROOT.war:

FROM jboss/wildfly
ADD target/app.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

For more tips & tricks for each application server, have a look at my cheat sheet.

Have fun deploying your Jakarta EE applications to the root context,

Phil

The post Deploy a Jakarta EE application to the root context appeared first on rieckpil.


by Philip Riecks at January 07, 2020 06:24 AM

Specification Scope in Jakarta EE

by waynebeaton at April 08, 2019 02:56 PM

With the Eclipse Foundation Specification Process (EFSP) a single open source specification project has a dedicated project team of committers to create and maintain one or more specifications. The cycle of creation and maintenance extends across multiple versions of the specification, and so while individual members may come and go, the team remains and it is that team that is responsible for every version of that specification that is created.

The first step in managing how intellectual property rights flow through a specification is to define the range of the work encompassed by the specification. Per the Eclipse Intellectual Property Policy, this range of work (referred to as the scope) needs to be well-defined and captured. Once defined, the scope is effectively locked down (changes to the scope are possible but rare, and must be carefully managed; the scope of a specification can be tweaked and changed, but doing so requires approval from the Jakarta EE Working Group’s Specification Committee).

Regarding scope, the EFSP states:

Among other things, the Scope of a Specification Project is intended to inform companies and individuals so they can determine whether or not to contribute to the Specification. Since a change in Scope may change the nature of the contribution to the project, a change to a Specification Project’s Scope must be approved by a Super-majority of the Specification Committee.

As a general rule, a scope statement should not be too precise. Rather, it should describe the intention of the specification in broad terms. Think of the scope statement as an executive summary or “elevator pitch”.

Elevator pitch: You have fifteen seconds before the elevator doors open on your floor; tell me about the problem your specification addresses.

The scope statement must answer the question: what does an implementation of this specification do? The scope statement must be aspirational rather than attempt to capture any particular state at any particular point-in-time. A scope statement must not focus on the work planned for any particular version of the specification, but rather, define the problem space that the specification is intended to address.

For example:

Jakarta Batch describes a means for executing and managing batch processes in Jakarta EE applications.

and:

Jakarta Message Service describes a means for Jakarta EE applications to create, send, and receive messages via loosely coupled, reliable asynchronous communication services.

For the scope statement, you can assume that the reader has a rudimentary understanding of the field. It’s reasonable, for example, to expect the reader to understand what “batch processing” means.

I should note that the two examples presented above are just examples of form. I’m pretty sure that they make sense, but defer to the project teams to work with their communities to sort out the final form.

The scope is “sticky” for the entire lifetime of the specification: it spans versions. The plan for any particular development cycle must describe work that is in scope; and at the checkpoint (progress and release) reviews, the project team must be prepared to demonstrate that the behavior described by the specifications (and tested by the corresponding TCK) cleanly falls within the scope (note that the development life cycle of specification project is described in Eclipse Foundation Specification Process Step-by-Step).

In addition to the specification scope, which is required by the Eclipse Intellectual Property Policy and the EFSP, the specification project that owns and maintains the specification needs a project scope. The project scope is, I think, pretty straightforward: a particular specification project defines and maintains a specification.

For example:

The Jakarta Batch project defines and maintains the Jakarta Batch specification and related artifacts.

Like the specification scope, the project scope should be aspirational. In this regard, the specification project is responsible for the particular specification in perpetuity. Further, related artifacts, like APIs and TCKs, can be in scope without actually being managed by the project right now.

Today, for example, most of the TCKs for the Jakarta EE specifications are rolled into the Jakarta EE TCK project. But, over time, this single monster TCK may be broken up and individual TCKs moved to corresponding specification projects. Or not. The point is that regardless of where the technical artifacts are currently maintained, they may one day be part of the specification project, so they are in scope.

I should back up a bit and say that our intention right now is to turn the “Eclipse Project for …” projects that we have managing artifacts related to various specifications into actual specification projects. As part of this effort, we’ll add Git repositories to these projects to provide a home for the specification documents (more on this later). A handful of these proto-specification projects currently include artifacts related to multiple specifications, so we’ll have to sort out what we’re going to do about those project scope statements.

We might consider, for example, changing the project scope of the Jakarta EE Stable APIs (note that I’m guessing a future new project name) to something simple like:

Jakarta EE Stable APIs provides a home for stable (legacy) Jakarta EE specifications and related artifacts which are no longer actively developed.

But, all that talk about specification projects aside, our initial focus needs to be on describing the scope of the specifications themselves. With that in mind, the EE4J PMC has created a project board with issues to track this work and we’re going to ask the project teams to start working with their communities to put these scope statements together. If you have thoughts regarding the scope statements for a particular specification, please weigh in.

Note that we’re in a bit of a weird state right now. As we engage in a parallel effort to rename the specifications (and corresponding specification projects), it’s not entirely clear what we should call things. You’ll notice that the issues that have been created all use the names that we guess we’re going to end up using (there’s more information about that in Renaming Java EE Specifications for Jakarta EE).


by waynebeaton at April 08, 2019 02:56 PM

Renaming Java EE Specifications for Jakarta EE

by waynebeaton at April 04, 2019 02:17 PM

It’s time to change the specification names…

When we first moved the APIs and TCKs for the Java EE specifications over to the Eclipse Foundation under the Jakarta EE banner, we kept the existing names for the specifications in place, and adopted placeholder names for the open source projects that hold their artifacts. As we prepare to engage in actual specification work (involving an actual specification document), it’s time to start thinking about changing the names of the specifications and the projects that contain their artifacts.

Why change? For starters, it’s just good form to leverage the Jakarta brand. But, more critically, many of the existing specification names use trademarked terms that make it either very challenging or impossible to use those names without violating trademark rules. Motivation for changing the names of the existing open source projects that we’ll turn into specification projects is, I think, a little easier: “Eclipse Project for …” is a terrible name. So, while the current names for our proto-specification projects have served us well to-date, it’s time to change them. To keep things simple, we recommend that we just use the name of the specification as the project name. 

With this in mind, we’ve come up with a naming pattern that we believe can serve as a good starting point for discussion. To start with, in order to keep things as simple as possible, we’ll have the project use the same name as the specification (unless there is a compelling reason to do otherwise).

The naming rules are relatively simple:

  • Replace “Java” with “Jakarta” (e.g. “Java Message Service” becomes “Jakarta Message Service”);
  • Add a space in cases where names are mashed together (e.g. “JavaMail” becomes “Jakarta Mail”);
  • Add “Jakarta” when it is missing (e.g. “Expression Language” becomes “Jakarta Expression Language”); and
  • Rework names to consistently start with “Jakarta” (“Enterprise JavaBeans” becomes “Jakarta Enterprise Beans”).

This presents us with an opportunity to add even more consistency to the various specification names. Some, for example, are more wordy or descriptive than others; some include the term “API” in the name, and others don’t; etc.

We’ll have to sort out what we’re going to do with the Eclipse Project for Stable Jakarta EE Specifications, which provides a home for a small handful of specifications which are not expected to change. I’ll personally be happy if we can at least drop the “Eclipse Project for” from the name (“Jakarta EE Stable”?). We’ll also have to sort out what we’re going to do about the Eclipse Mojarra and Eclipse Metro projects which hold the APIs for some specifications; we may end up having to create new specification projects as homes for development of the corresponding specification documents (regardless of how this ends up manifesting as a specification project, we’re still going to need specification names).

Based on all of the above, here is my suggested starting point for specification (and most project) names (I’ve applied the rules described above and have suggested tweaks for consistency by strikeout):

  • Jakarta APIs for XML Messaging
  • Jakarta Architecture for XML Binding
  • Jakarta API for XML-based Web Services
  • Jakarta Common Annotations
  • Jakarta Enterprise Beans
  • Jakarta Persistence API
  • Jakarta Contexts and Dependency Injection
  • Jakarta EE Platform
  • Jakarta API for JSON Binding
  • Jakarta Servlet
  • Jakarta API for RESTful Web Services
  • Jakarta Server Faces
  • Jakarta API for JSON Processing
  • Jakarta EE Security API
  • Jakarta Bean Validation
  • Jakarta Mail
  • Jakarta Beans Activation Framework
  • Jakarta Debugging Support for Other Languages
  • Jakarta Server Pages Standard Tag Library
  • Jakarta EE Platform Management
  • Jakarta EE Platform Application Deployment
  • Jakarta API for XML Registries
  • Jakarta API for XML-based RPC
  • Jakarta Enterprise Web Services
  • Jakarta Authorization Contract for Containers
  • Jakarta Web Services Metadata
  • Jakarta Authentication Service Provider Interface for Containers
  • Jakarta Concurrency Utilities
  • Jakarta Server Pages
  • Jakarta Connector Architecture
  • Jakarta Dependency Injection
  • Jakarta Expression Language
  • Jakarta Message Service
  • Jakarta Batch
  • Jakarta API for WebSocket
  • Jakarta Transaction API

We’re going to couple renaming with an effort to capture proper scope statements (I’ll cover this in my next post). The Eclipse EE4J PMC Lead, Ivar Grimstad, has blogged about this recently and has created a project board to track the specification and project renaming activity (as of this writing, it has only just been started, so watch that space). We’ll start reaching out to the “Eclipse Project for …” teams shortly to start engaging in this process. When we’ve collected all of the information (names and scopes), we’ll engage in a restructuring review per the Eclipse Development Process (EDP) and make it all happen (more on this later).

Your input is requested. I’ll monitor comments on this post, but it would be better to collect your thoughts in the issues listed on the project board (after we’ve taken the step to create them, of course), on the related issue, or on the EE4J PMC’s mailing list.

 


by waynebeaton at April 04, 2019 02:17 PM

Top 20 Jakarta EE Experts to Follow on Twitter

by Elder Moraes at January 08, 2019 12:47 AM

This is the most viewed post of this blog, so I believe it deserves an update now in 2020! Its first version was written back in 2017.

There are a lot of different opinions on these kinds of lists, and there will always be somebody or something missing… just don’t be too passionate or take things personally, ok?! 😉

******************************************************

We all have to agree: there are tons and tons of information shared through social media. It’s no different on Twitter.

When we talk about staying tuned with some technology, it’s important to have some kind of focus. Otherwise, you could end up confused or, worse, getting bad and/or wrong information.

For these reasons I have a small but incredible list of people/accounts that I follow on Twitter to get really good information about Jakarta EE.

If you are passionate about Jakarta EE like me, I truly hope this list may be helpful to you. If you are not, I hope you enjoy as well! 😉

Important: the list isn’t a ranking, so don’t judge the account by the position at the list. In fact, don’t judge anyone by anything…

Jakarta EE – @JakartaEE

The official Jakarta EE handle.

Wayne Beaton – @waynebeaton

Wayne is a Director of Open Source Projects at the Eclipse Foundation. Undoubtedly, Jakarta EE is one of the biggest projects at Eclipse today (if not the biggest one), so it’s important to stay tuned with what Wayne has to say.

Adam Bien – @AdamBien

Adam has worked as a freelancer with Java since JDK 1.0, with Servlets/EJB since 1.0, and before the advent of J2EE on several large-scale applications. He is an architect and developer (with usually a 20/80 distribution) in Java (SE / EE / FX) projects. He has written several books about JavaFX, J2EE, and Java EE, and he is the author of Real World Java EE Patterns—Rethinking Best Practices and Real World Java EE Night Hacks—Dissecting The Business Tier.

He is also a Java Champion, NetBeans Dream Team Founding Member, Oracle ACE Director, Java Developer of the Year 2010 and has numerous JavaOne Rock Star nominations.

Kevin Sutter – @kwsutter

Kevin Sutter is the lead architect for the Jakarta EE and JPA solutions for WebSphere Application Server and the WebSphere Liberty Profile. He is also very active with Java and open-source strategies as they relate to IBM’s application middleware.

Ivar Grimstad – @ivar_grimstad

Ivar Grimstad is Java Champion, JUG Leader, JCP Spec Lead, EC and EG Member, NetBeans Dream Team and International Speaker.

He is the PMC (Project Management Committee) Lead of EE4J and Jakarta EE Developer Advocate at Eclipse Foundation.

David Blevins – @dblevins

David Blevins is the founder of Tomitribe and a veteran of open source Jakarta EE. He has been both implementing and defining Enterprise Java specifications for more than 10 years and has a strong drive to see it simple, testable, and as light as Java SE. Blevins is cofounder of OpenEJB (1999), Geronimo (2003), and TomEE (2011). He is a member of the EJB 3.0, EJB 3.1, EJB 3.2, Java EE 6, Java EE 7, and Java EE 8 Security Expert Groups, and a member of the Apache Software Foundation. Blevins is a contributing author to Component-Based Software Engineering: Putting the Pieces Together (Addison Wesley). Blevins is also a regular speaker at JavaOne, Devoxx, ApacheCon, OSCon, JAX, and other Java-focused conferences.

Otavio Santana – @otaviojava

Otávio Santana is a developer and enthusiast of open source. He is an evangelist and practitioner of agile philosophy and polyglot development in Brazil. Santana is a JUG leader of JavaBahia and SouJava, and a strong supporter of Java communities in Brazil, where he also leads the BrasilJUGs initiative to incorporate Brazilian JUGs into joint activities.

He is also the co-creator of Jakarta NoSQL, a Java framework that streamlines the integration of Java applications with NoSQL databases. It defines a set of APIs to interact with NoSQL databases and provides a standard implementation for most NoSQL databases. This helps to achieve very low coupling with the underlying NoSQL technologies used.

Java Champions – @Java_Champions

Well, why the Java Champions handle is here? Because many Java Champions are Jakarta EE experts, so following the official Twitter is nice to be in touch with what they are saying about it.

Alex Theedom – @alextheedom

Trainer, Java Champion, Jakarta EE spec committee, author of Jakarta EE books and courses. Conference speaker and blogger.

OmniFaces – @OmniFaces

OmniFaces is a utility library for JSF 2 that focuses on utilities that ease everyday tasks with the standard JSF API. OmniFaces is a response to frequently recurring problems encountered over years of professional JSF development and from questions being asked on Stack Overflow.

Dmitry Kornilov – @m0mus

Dmitry has over 20 years of experience in design and implementation of complex software systems, defining systems architecture, team leading and project management. He has worked as project leader of EclipseLink and Yasson and as spec lead of JSON-P and JSON-B specifications.

Steve Millidge – @l33tj4v4

Steve is the Founder and Director of Payara and C2B2 Consulting. Having used Java extensively since pre1.0, Steve has over 15 years’ experience as a field-based Professional Service Consultant along with extensive knowledge of Java middleware.

Before running his own business, Steve worked for Oracle as a Principal Consultant in Oracle’s Technology Architecture Practice, specializing in Business Process Integration and scalable n-tier component architectures.

Emily Jiang – @emilyfhjiang

Emily is a Java Champion and has been working on MicroProfile since 2016. She also leads the specifications of MicroProfile Config, Fault Tolerance and Service Mesh. She works for IBM as Liberty Architect for MicroProfile and CDI and is heavily involved in Java EE implementation in Liberty releases.

Arjan Tijms – @arjan_tijms

Arjan is a Jakarta EE committer and member of the Jakarta EE Steering Committee.

MicroProfile – @MicroProfileIO

MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes. The initially planned baseline is JAX-RS + CDI + JSON-P, with the intent of the community having an active role in the MicroProfile definition and roadmap.

Sebastian Daschner – @DaschnerS

Sebastian has been working with Java enterprise software development for many years. Besides his work for clients, he sets a high priority on educating developers through conference presentations, video courses, and training. He believes that teaching others not only greatly improves their situation but also educates yourself.

David Heffelfinger – @ensode

David R. Heffelfinger is an independent consultant based in the greater Washington DC area. He has authored several books on Java EE and related technologies. Heffelfinger has been architecting, designing, and developing software professionally since 1995. He has been using Java as his primary programming language since 1996. He has worked on many large-scale projects for several clients, including the US Department of Homeland Security, Freddie Mac, Fannie Mae, and the US Department of Defense. He has a master’s degree in software engineering from Southern Methodist University, Dallas, Texas. Heffelfinger is a frequent speaker at Java conferences such as Oracle Code One (formerly JavaOne).

John Clingan – @jclingan

John is a Product Manager at Red Hat and an ex-Java EE PM. He is also a MicroProfile co-founder.

Josh Juneau – @javajuneau

Josh Juneau works as an application developer, system analyst, and database administrator. He is active in many fields of application development but primarily focuses on Jakarta EE. Juneau is a technical writer for Oracle Technology Network, Java Magazine, and Apress. He is a member of the NetBeans Dream Team, the JCP, and a part of the JSR 372 Expert Group. He enjoys working with the Java community—he is the director of meetings for the Chicago Java User Group.

Tanja Obradovic – @TanjaEclipse

Tanja is the Jakarta EE Program Manager at Eclipse Foundation.

 


by Elder Moraes at January 08, 2019 12:47 AM
