serverless Java with AWS Lambda, Java's performance on Lambda compared to Node.js, AWS SnapStart, Lambda functions and microservices is available for download.
November 13, 2024
JUG Milano 2024
by Ivar Grimstad at November 13, 2024 09:27 AM
On Monday, I visited JUG Milano together with my colleague Rosaria Rossini. She gave a short talk about the soft and hard skills needed for Open Source titled Java & Skills: a research activities in research@eclipse. After that, I did a talk titled Jakarta EE Meets AI where I showed different ways to integrate AI into Jakarta EE applications.
JUG Milano has frequent meetings and they are all streamed live on their YouTube channel. There were comments and questions both from those present and from the online viewers.
Before the event, I went for a run in the city and also had some opportunity to do a little sightseeing. After the event, we went out for a delicious dinner where I got to experience the traditional food of the Lombardy region. Let me just say this: I did not go to bed hungry!
November 11, 2024
1Z0-1113 - I passed!
November 11, 2024 12:00 AM
November 10, 2024
From XML-Driven Enterprise Java to Serverless AWS Lambdas--airhacks.fm podcast
November 10, 2024 02:39 PM
Subscribe to the airhacks.fm podcast via: Spotify | iTunes | RSS
Hashtag Jakarta EE #254
by Ivar Grimstad at November 10, 2024 10:59 AM
Welcome to issue number two hundred and fifty-four of Hashtag Jakarta EE!
I am in the middle of a trip across Europe that started at DevCon in Bucharest, Romania and continued to SFSCon in Bolzano, Italy. Right now, I am on my way to Milan and Turin to talk at the Java User Groups there. Check out the agenda for JUG Milano on Monday and JUG Torino on Tuesday. Rosaria from Research at Eclipse will join me at both of these events and give an opening talk titled Java & Skills: a research activities in research@eclipse.
Jakarta EE 11 Core Profile is just about ready for release review. Everything is ready and all artefacts are staged or published according to the Jakarta EE Specification Process (JESP).
The Jakarta EE TCK Project is working heroically to finalize the TCK so that the release reviews for Jakarta EE 11 Platform and Jakarta EE 11 Web Profile can get underway in the beginning of December. The goal is to have them completed, or at least ongoing, when JakartaOne Livestream takes place on December 3, 2024.
I’ll end here with some pictures from the last couple of days. Until next week…
November 09, 2024
Ringesentralen - say what?
November 09, 2024 12:00 AM
November 08, 2024
Virtual Threads (Project Loom) – Revolutionizing Concurrency in Java
by Alexius Dionysius Diakogiannis at November 08, 2024 03:55 PM
Introduction
Concurrency has always been a cornerstone of Java, but as applications scale and demands for high throughput and low latency increase, traditional threading models show their limitations. Project Loom, with its groundbreaking introduction of virtual threads, redefines how we approach concurrency in Java, making applications more scalable and development more straightforward.
In this post, we’ll go deep into virtual threads, exploring how they work, their impact on scalability, and how they simplify backend development. We’ll provide both simple and complex code examples to illustrate these concepts in practice.
The Limitations of Traditional Threads
In Java, each thread maps to an operating system (OS) thread. While this model is straightforward, it comes with significant overhead:
- Resource Consumption: OS threads are heavyweight, consuming considerable memory (~1 MB stack size by default).
- Context Switching: The OS has to manage context switching between threads, which can degrade performance when thousands of threads are involved.
- Scalability Issues: Blocking operations (e.g., I/O calls) tie up OS threads, limiting scalability.
Traditional solutions involve complex asynchronous programming models or reactive frameworks, which can make code harder to read and maintain.
Introducing Virtual Threads
Virtual threads are “lightweight” threads that aim to solve these problems:
- Lightweight: Thousands of virtual threads can be created without significant overhead.
- Efficient Scheduling: Managed by the JVM rather than the OS, leading to more efficient context switching.
- Simplified Concurrency: Enable writing straightforward, blocking code without sacrificing scalability.
Virtual threads decouple the application thread from the OS thread, allowing the JVM to manage threading more efficiently.
How Virtual Threads Work
Under the hood, virtual threads are scheduled by the JVM onto a pool of OS threads. Key aspects include:
- Continuation-Based: Virtual threads use continuations to save and restore execution state.
- Non-Blocking Operations: When a virtual thread performs a blocking operation, it yields control, allowing the JVM to schedule another virtual thread.
- Efficient Utilization: The JVM reuses OS threads, minimizing the cost of context switches.
Here’s the simplified picture: the JVM multiplexes many virtual threads onto a small pool of carrier (OS) threads.
Benefits of Virtual Threads
- Scalability: Handle millions of concurrent tasks with minimal resources.
- Simplified Code: Write blocking code without complex asynchronous patterns.
- Performance: Reduced context switching overhead and better CPU utilization.
- Integration: Works seamlessly with existing Java code and libraries.
Simple Examples
Example 1: Spawning Virtual Threads
public class VirtualThreadExample {
    public static void main(String[] args) throws InterruptedException {
        Thread.startVirtualThread(() -> {
            System.out.println("Hello from a virtual thread!");
        });

        // Alternatively, using the Thread.Builder API
        Thread thread = Thread.ofVirtual()
                .start(() -> System.out.println("Another virtual thread"));
        thread.join();
    }
}
Explanation:
- Thread.startVirtualThread creates and starts a virtual thread.
- Virtual threads behave like regular threads but are lightweight.
Example 2: Migrating from Traditional to Virtual Threads
Traditional threading:
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        // Perform task
    });
}
executor.shutdown();
Using virtual threads:
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        // Perform task
    });
}
executor.shutdown();
Explanation:
- Executors.newVirtualThreadPerTaskExecutor() creates an executor that starts a new virtual thread per task.
- We can submit a large number of tasks without worrying about thread exhaustion.
Complex Examples
Example 1: High-Throughput Server with Virtual Threads
Let’s build a server that handles a massive number of connections using virtual threads.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket clientSocket = serverSocket.accept();
                Thread.startVirtualThread(() -> handleClient(clientSocket));
            }
        }
    }

    private static void handleClient(Socket clientSocket) {
        try (clientSocket) {
            // Read from and write to the client
            clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello World".getBytes());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Explanation:
- Each incoming connection is handled by a virtual thread.
- The server can handle a vast number of simultaneous connections efficiently.
Performance Considerations:
- Blocking I/O operations in virtual threads do not block OS threads.
- The JVM efficiently manages the scheduling of virtual threads.
Example 2: Custom Virtual Thread Executor Service
Creating a custom executor service that manages virtual threads with specific configurations.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class CustomVirtualThreadExecutor {
    public static void main(String[] args) {
        ThreadFactory factory = Thread.ofVirtual()
                .name("virtual-thread-", 0)
                .factory();

        ExecutorService executor = Executors.newThreadPerTaskExecutor(factory);
        for (int i = 0; i < 1000; i++) {
            int taskNumber = i;
            executor.submit(() -> {
                System.out.println(Thread.currentThread().getName() + " executing task " + taskNumber);
                // Simulate work
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        executor.shutdown();
    }
}
Explanation:
- Using Thread.ofVirtual(), we create a custom thread factory for virtual threads with a naming pattern.
- The executor service uses this factory to create a virtual thread per task.
- This setup allows for customized thread creation and better debugging.
Example 3: Structured Concurrency with Virtual Threads
Structured concurrency helps manage multiple concurrent tasks as a single unit.
import java.time.Duration;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

// Note: StructuredTaskScope is a preview API (run with --enable-preview on JDK 21+)
public class StructuredConcurrencyExample {
    public static void main(String[] args) {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var subtask1 = scope.fork(() -> fetchDataFromServiceA());
            var subtask2 = scope.fork(() -> fetchDataFromServiceB());

            scope.join();          // Wait for all tasks
            scope.throwIfFailed(); // Propagate exceptions

            String resultA = subtask1.get();
            String resultB = subtask2.get();
            System.out.println("Results: " + resultA + ", " + resultB);
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    private static String fetchDataFromServiceA() throws InterruptedException {
        Thread.sleep(Duration.ofSeconds(2));
        return "Data from Service A";
    }

    private static String fetchDataFromServiceB() throws InterruptedException {
        Thread.sleep(Duration.ofSeconds(3));
        return "Data from Service B";
    }
}
Explanation:
- StructuredTaskScope allows grouping tasks and managing them collectively.
- ShutdownOnFailure ensures that if one task fails, all others are canceled.
- Virtual threads make this pattern efficient and practical.
Benefits:
- Simplifies error handling in concurrent code.
- Improves readability and maintainability.
Impact on Backend Development
Virtual threads have profound implications for backend development:
Simplified Codebases
In traditional Java, we often use non-blocking I/O to achieve concurrency, which can complicate code structure. With virtual threads, we can use blocking code without the performance penalties associated with OS threads.
Example Without Virtual Threads (Using Asynchronous I/O):
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    try {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
              .thenAccept(response -> System.out.println(response.body()));
    } catch (Exception e) {
        e.printStackTrace();
    }
});
Simplified with Virtual Threads:
Thread.startVirtualThread(() -> {
    try {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    } catch (Exception e) {
        e.printStackTrace();
    }
});
With virtual threads, we can use the synchronous client.send() directly, making the code simpler and more readable, while still benefiting from concurrency.
Elimination of Callback Hell
Asynchronous programming often leads to nested callbacks, which make the code harder to read and debug. Virtual threads allow us to write code in a linear, blocking style, avoiding callback hell.
Example Using Callbacks (Without Virtual Threads):
fetchDataAsync("https://example.com/data", result -> {
    processAsync(result, processed -> {
        saveAsync(processed, saved -> {
            System.out.println("Data saved successfully!");
        });
    });
});
Simplified with Virtual Threads:
Thread.startVirtualThread(() -> {
    String data = fetchData("https://example.com/data");
    String processed = process(data);
    save(processed);
    System.out.println("Data saved successfully!");
});
With virtual threads, we can write sequential, synchronous code while retaining concurrency, eliminating the need for nested callbacks.
Enhanced Performance
Handling many concurrent requests with traditional threads can quickly lead to memory exhaustion. Virtual threads allow us to handle a large number of connections concurrently with minimal resource overhead.
Example: High-Concurrency Server with Virtual Threads
try (var serverSocket = new ServerSocket(8080)) {
    while (true) {
        var clientSocket = serverSocket.accept();
        Thread.startVirtualThread(() -> handleClient(clientSocket));
    }
}

private static void handleClient(Socket clientSocket) {
    try (clientSocket) {
        clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello, World!".getBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This server can handle thousands of simultaneous connections without exhausting system resources, as each connection runs on a virtual thread.
Compatibility with Existing Libraries and Frameworks
Since virtual threads are part of the standard Java threading API, they are compatible with most existing libraries and frameworks, allowing developers to integrate virtual threads without extensive refactoring.
Example: Using Virtual Threads with ExecutorService
You can replace traditional thread pools with virtual thread-based executors to use existing code with minimal changes.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        System.out.println("Running task on a virtual thread.");
    });
}
executor.shutdown();
Any code that works with ExecutorService will continue to work seamlessly with virtual threads, enhancing compatibility.
Reduced Need for Reactive Frameworks
Virtual threads allow developers to use blocking code patterns without the overhead associated with OS threads, making it possible to achieve high concurrency with simpler code structures, reducing the need for reactive frameworks.
Example: Synchronous Data Fetching with Virtual Threads Instead of Reactive Patterns
Reactive (Without Virtual Threads):
Mono<String> data = WebClient.create("https://example.com")
        .get()
        .retrieve()
        .bodyToMono(String.class);
data.subscribe(System.out::println);
Simplified with Virtual Threads:
Thread.startVirtualThread(() -> {
    try {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        String response = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(response);
    } catch (Exception e) {
        e.printStackTrace();
    }
});
Virtual threads allow us to use blocking code directly, making reactive patterns unnecessary for some use cases. This reduces complexity, especially for applications that don’t require the full power of reactive programming.
Considerations
When implementing virtual threads with Project Loom, developers must consider various technical and architectural implications. Below are some detailed considerations to keep in mind:
Memory Usage and Stack Management
Virtual threads are lightweight compared to traditional OS threads, but they still consume memory, especially when they carry deep call stacks.
- Stack Size: Virtual threads start with a small stack and can expand as needed, which can potentially reduce memory consumption compared to OS threads. However, developers should monitor stack usage to avoid excessive memory consumption.
- Memory Monitoring: Although virtual threads are efficient, monitoring the JVM’s memory usage becomes essential as thousands of virtual threads may be active concurrently (one way to capture a virtual-thread-aware dump is sketched after this list).
- JVM Configuration: Tuning the JVM’s garbage collection and memory settings is important when handling millions of threads, as they may put unexpected pressure on the heap.
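As a hedged sketch of such monitoring (my addition; assumes a JDK 21+ runtime, and 12345 is a placeholder PID), the jcmd Thread.dump_to_file command introduced together with virtual threads produces a dump that includes virtual threads, which classic jstack-style dumps omit:

# 12345 is a placeholder PID; the JSON format groups threads by their container
jcmd 12345 Thread.dump_to_file -format=json /tmp/threads.json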
Blocking vs. Non-Blocking Code Patterns
Virtual threads make blocking I/O efficient, but there are some nuances:
- Blocking I/O Operations: With virtual threads, you can use blocking calls like socket or file I/O without performance penalties. However, the JVM handles only traditional blocking I/O efficiently, so libraries must be updated for Loom support.
- Non-Blocking I/O: If your project is already using non-blocking I/O, switching to virtual threads might simplify the code structure but won’t necessarily bring significant performance gains, as non-blocking code is already optimized.
- Thread Pool Alternatives: In traditional models, a common technique is to use a pool of threads to limit the number of concurrent operations. With virtual threads, this might no longer be necessary, allowing a model where each task gets its own virtual thread without causing bottlenecks.
Concurrency Limitations
While virtual threads allow for a high degree of concurrency, they are not a silver bullet. Certain scenarios, such as CPU-bound tasks, still require careful handling to avoid performance degradation.
- CPU-Bound Tasks: Virtual threads are designed for I/O-bound workloads. If the application has CPU-intensive tasks, virtual threads might not yield the same benefits, as they do not reduce CPU time requirements.
- Parallelism Control: For tasks that require controlled parallelism, developers may still benefit from combining virtual threads with task-limiting mechanisms (e.g., limiting the number of CPU-bound threads); a Semaphore-based sketch follows this list.
- Thread Priority and Scheduling: Virtual threads are managed by the JVM and may not respect OS-level thread priorities. If your application requires fine-grained control over thread priority, virtual threads might not be ideal.
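As a sketch of such a task-limiting mechanism (my addition, not from the original post; the permit count of 4 is an arbitrary assumption), a Semaphore can cap how many virtual threads execute a CPU-heavy section at once:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class LimitedParallelism {

    // Arbitrary cap of 4 permits; size this to your CPU core count
    private static final Semaphore CPU_PERMITS = new Semaphore(4);

    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                executor.submit(LimitedParallelism::boundedCpuWork);
            }
        } // close() shuts down and waits for tasks (ExecutorService is AutoCloseable since Java 19)
    }

    private static void boundedCpuWork() {
        try {
            CPU_PERMITS.acquire(); // Blocks only the virtual thread, not an OS thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        try {
            // ... CPU-intensive work goes here ...
        } finally {
            CPU_PERMITS.release();
        }
    }
}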
Error Handling and Exception Propagation
Error handling becomes crucial, especially with the simplicity of launching thousands of threads.
- Propagating Exceptions: Virtual threads handle exceptions differently; uncaught exceptions in a virtual thread do not terminate the JVM process but are logged or can be handled asynchronously (a handler sketch follows this list).
- Graceful Shutdowns: Virtual threads simplify concurrency, but managing error states across thousands of threads can be challenging. Structured concurrency (a model for grouping and managing threads introduced alongside Loom) helps manage error propagation and task cancellation.
- Task Scopes: When using structured concurrency with virtual threads, grouping tasks with scopes (e.g., ShutdownOnFailure in Java’s StructuredTaskScope) ensures that if one task in a group fails, other tasks can be canceled or handled appropriately.
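As a sketch of the handler mentioned above (my addition, not from the original post), Thread.Builder accepts an uncaught-exception handler for every virtual thread created from it:

Thread thread = Thread.ofVirtual()
        .name("worker")
        // Log uncaught exceptions instead of letting them disappear silently
        .uncaughtExceptionHandler((t, ex) ->
                System.err.println(t.getName() + " failed: " + ex))
        .start(() -> { throw new IllegalStateException("boom"); });
thread.join();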
Impact on Debugging and Profiling
With potentially millions of threads, debugging and profiling virtual-threaded applications introduce unique challenges.
- Thread Explosion in Debuggers: Debuggers might struggle with applications using millions of virtual threads, leading to overwhelming output. It may be helpful to add application-level logging or selectively enable virtual threads for debugging.
- Profiling Complexity: Traditional thread profilers may not provide granular insights for virtual threads. Consider using JVM flight recording or Loom-aware profiling tools to trace virtual thread usage accurately.
- Stack Trace Analysis: Virtual threads make it possible to have more granular and descriptive stack traces, but interpreting large volumes of stack traces could require additional tooling or filtering strategies.
Interplay with Synchronization and Locks
Though virtual threads alleviate many concurrency issues, developers still need to be cautious with shared resources.
- Contention on Shared Resources: Virtual threads do not inherently solve issues related to contention on shared resources. If two virtual threads try to acquire the same lock, they may still face contention, potentially leading to bottlenecks.
- Thread Safety: Existing synchronized code will generally work with virtual threads. However, in a highly concurrent environment, developers should consider using java.util.concurrent locks (e.g., ReentrantLock with try-lock mechanisms; see the sketch after this list) or lock-free data structures to avoid contention.
- Deadlock Risks: While virtual threads reduce many resource-related problems, deadlocks can still occur if resources are mismanaged. Deadlock analysis tools can help in identifying potential deadlock situations.
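As a sketch of the try-lock approach mentioned above (my addition; the 100 ms timeout is an arbitrary assumption), ReentrantLock.tryLock lets a virtual thread back off instead of queueing on a contended lock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {

    private static final ReentrantLock LOCK = new ReentrantLock();

    static void updateSharedState() throws InterruptedException {
        // Wait at most 100 ms for the lock instead of blocking indefinitely
        if (LOCK.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                // ... mutate the shared resource ...
            } finally {
                LOCK.unlock();
            }
        } else {
            // Back off: retry later, degrade gracefully, or fail fast
        }
    }
}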
Structured Concurrency for Task Management
Structured concurrency in Project Loom allows developers to group threads and manage them collectively, making error handling and task cancellation more intuitive.
- Parent-Child Relationships: Structured concurrency introduces a parent-child relationship between tasks, simplifying lifecycle management and error propagation.
- Graceful Cancellation: If a parent task is canceled, all child tasks are automatically canceled, making it easier to handle scenarios where one task failure requires the cancellation of other related tasks.
- Scope Lifecycle Management: With StructuredTaskScope, developers can define a task group’s lifecycle. This ensures that resources are managed properly and that all tasks in the scope are completed, failed, or canceled together.
Interfacing with Existing Thread-Based Libraries
Virtual threads integrate well with many libraries but may require attention with those heavily reliant on OS-level threads or specialized thread management.
- OS-Threaded Libraries: Libraries that rely on low-level OS-thread management (e.g., JNI-based libraries) may not benefit directly from virtual threads, as they may require actual OS threads for certain operations.
- External Thread Pools: If your application integrates with external thread pools (e.g., database connection pools), consider switching to Loom-compatible connection handling, as some third-party libraries may not yet support virtual threads.
- Task Executors: Replacing ThreadPoolExecutor with Executors.newVirtualThreadPerTaskExecutor() allows easier adaptation of existing thread-based code, but testing is recommended to ensure compatibility and performance stability.
Performance Profiling and Resource Management
Virtual threads reduce context-switching overhead and make high-concurrency applications more feasible, but monitoring and optimizing performance remain crucial.
- Avoiding Thread Overuse: Although virtual threads are lightweight, overusing them (e.g., starting a new thread for every small task) can still degrade performance. Consider batching or grouping tasks when feasible.
- Heap Pressure and Garbage Collection: Large numbers of virtual threads may generate considerable garbage, adding pressure on the JVM’s garbage collection. Profiling and tuning the GC for high-throughput applications with virtual threads is crucial.
- Application Profiling: Java Flight Recorder (JFR) or other profiling tools with virtual-thread awareness can help understand the application’s runtime characteristics, especially in production environments.
Testing and Migration Strategies
Testing and planning migration from traditional to virtual threads require thorough analysis and validation.
- Gradual Migration: Virtual threads allow a gradual migration. Developers can begin by converting specific thread-heavy sections of the application to virtual threads while retaining traditional threads elsewhere.
- Testing for Loom Compatibility: While most Java libraries are expected to be compatible, rigorous testing is recommended, particularly for libraries with complex threading requirements or blocking operations.
- Load Testing and Performance Validation: Applications utilizing virtual threads should undergo load testing to validate that the new threading model provides the expected concurrency improvements without introducing regressions or bottlenecks.
Patterns and Anti-Patterns
With all that said, let’s examine some common patterns and anti-patterns.
Patterns
Task-Per-Thread Pattern
One of the primary use cases for virtual threads is to simplify concurrency by using a “task-per-thread” model. In this pattern, each concurrent task is assigned to a separate virtual thread, which avoids the complex management typically needed for thread pools with OS threads.
Example: A server handling multiple incoming connections, with each connection running in its own virtual thread.
try (ServerSocket serverSocket = new ServerSocket(8080)) {
    while (true) {
        Socket clientSocket = serverSocket.accept();
        Thread.startVirtualThread(() -> handleClient(clientSocket));
    }
}

private static void handleClient(Socket clientSocket) {
    try (clientSocket) {
        clientSocket.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\nHello, World!".getBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Each connection is handled by a virtual thread, providing simple and scalable concurrency without the need for a complex thread pool. This pattern would be inefficient with OS threads but is efficient with virtual threads.
Structured Concurrency Pattern
Structured concurrency organizes concurrent tasks as a structured unit, making it easier to manage lifecycles, cancellations, and exceptions. This pattern is particularly useful in request-based applications where tasks are interdependent.
Example: Fetching data from multiple services concurrently and consolidating the results.
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var subtask1 = scope.fork(() -> fetchDataFromServiceA());
    var subtask2 = scope.fork(() -> fetchDataFromServiceB());

    scope.join();          // Wait for all tasks to complete
    scope.throwIfFailed(); // Handle exceptions

    String resultA = subtask1.get();
    String resultB = subtask2.get();
    System.out.println("Results: " + resultA + ", " + resultB);
} catch (Exception e) {
    e.printStackTrace();
}
Structured concurrency ensures that if any task fails, all other tasks in the same scope are canceled. It simplifies error handling and improves resource management, providing better control over concurrent flows.
Blocking Code in Virtual Threads
With virtual threads, developers can safely use blocking calls, such as Thread.sleep() or blocking I/O operations, as the JVM handles scheduling efficiently. This pattern contrasts with the traditional approach of avoiding blocking calls to prevent thread starvation.
Example: Running blocking I/O operations without impacting overall performance.
Thread.startVirtualThread(() -> {
    try {
        Thread.sleep(1000); // Blocking call
        System.out.println("Virtual thread finished sleeping");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
Virtual threads make blocking operations efficient, as the JVM schedules other virtual threads to run while the blocking call is in progress. This allows developers to write simpler, more readable code without performance trade-offs.
Task-Based Executors with Virtual Threads
Using Executors.newVirtualThreadPerTaskExecutor() creates a virtual-thread-based executor that simplifies parallel task execution. This pattern allows developers to leverage the ExecutorService interface, making it easy to transition from traditional threads to virtual threads.
Example: Running multiple tasks concurrently with a virtual-thread-based executor.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 10; i++) {
    executor.submit(() -> {
        System.out.println("Running task on virtual thread: " + Thread.currentThread().getName());
    });
}
executor.shutdown();
This executor allows each task to run on a virtual thread, making it efficient to create a new thread per task without the need for traditional thread pooling.
Anti-Patterns
Overuse of Virtual Threads
While virtual threads are lightweight, they are not “free.” Creating excessive numbers of virtual threads for very short-lived tasks can introduce overhead in terms of scheduling and garbage collection, which may impact performance.
Anti-Pattern Example: Creating a new virtual thread for every small task, such as iterating over a list.
List<String> items = List.of("A", "B", "C");
for (String item : items) {
    Thread.startVirtualThread(() -> processItem(item)); // Inefficient for tiny tasks
}
Better Approach: Instead, batch tasks together if they are very short-lived to avoid excessive thread creation.
Thread.startVirtualThread(() -> items.forEach(VirtualThreadsExample::processItem));
By batching the tasks within a single virtual thread, you avoid creating unnecessary threads, optimizing resource usage and reducing scheduling overhead.
Blocking OS Resources in Virtual Threads
Blocking calls that involve resources controlled by the OS, such as file locks or certain low-level network operations, may still tie up OS threads when used with virtual threads, leading to potential bottlenecks.
Anti-Pattern Example: Locking a file for extended periods in a virtual thread.
Thread.startVirtualThread(() -> {
    try (var fileChannel = FileChannel.open(Path.of("file.txt"), StandardOpenOption.WRITE)) {
        fileChannel.lock(); // This can block an OS thread, not recommended
    } catch (IOException e) {
        e.printStackTrace();
    }
});
Better Approach: Avoid blocking virtual threads on OS resources that do not release quickly. Use asynchronous approaches for tasks involving external resources.
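As a sketch of such an asynchronous approach (my addition, not from the original post), AsynchronousFileChannel requests the lock without parking a thread on it:

import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncFileLockExample {
    public static void main(String[] args) throws Exception {
        try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Path.of("file.txt"), StandardOpenOption.WRITE)) {
            Future<FileLock> pending = channel.lock(); // Returns immediately; no OS thread is blocked
            FileLock lock = pending.get();             // Wait only when the lock is actually needed
            try {
                // ... write while holding the lock ...
            } finally {
                lock.release();
            }
        }
    }
}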
Improper Exception Handling in Virtual Threads
Exceptions in virtual threads do not terminate the JVM, and uncaught exceptions in virtual threads may not be logged as prominently as in traditional threads, leading to undetected errors.
Anti-Pattern Example: Ignoring exceptions in virtual threads.
Thread.startVirtualThread(() -> {
    int result = 10 / 0; // Unhandled exception
    System.out.println("Result: " + result);
});
Better Approach: Use structured concurrency or set up explicit exception handling within virtual threads to capture and handle errors effectively.
Thread.startVirtualThread(() -> {
    try {
        int result = 10 / 0;
        System.out.println("Result: " + result);
    } catch (Exception e) {
        System.err.println("Error in virtual thread: " + e.getMessage());
    }
});
Proper error handling in virtual threads helps in identifying and managing issues without causing untracked failures.
Manual Management of Thread Lifecycles
With virtual threads, the need for manually managing thread lifecycles or using traditional thread pooling mechanisms decreases. Creating a custom virtual-thread pool or managing virtual threads directly as a group is often unnecessary and counterproductive.
Anti-Pattern Example: Manually creating a virtual-thread pool.
List<Thread> virtualThreadPool = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    virtualThreadPool.add(Thread.startVirtualThread(() ->
            System.out.println("Task in virtual pool")));
}
virtualThreadPool.forEach(t -> {
    try {
        t.join();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
Better Approach: Use Executors.newVirtualThreadPerTaskExecutor() to manage tasks rather than manually handling virtual threads.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 10; i++) {
    executor.submit(() -> System.out.println("Task in virtual thread executor"));
}
executor.shutdown();
Manually creating and managing virtual-thread pools contradicts the efficiency of built-in virtual-thread executors, which are optimized for this purpose.
Over-Reliance on Virtual Threads for CPU-Bound Tasks
Virtual threads excel in I/O-bound tasks, but for CPU-bound tasks, they offer limited benefits. Virtual threads do not reduce the CPU time required, so heavy reliance on virtual threads for CPU-bound operations can lead to high contention and degraded performance.
Anti-Pattern Example: Running a CPU-intensive task on a high number of virtual threads.
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 100; i++) {
    executor.submit(() -> performCPUIntensiveTask());
}
Better Approach: Use a fixed-size thread pool for CPU-bound tasks to limit the number of concurrent CPU-intensive operations.
ExecutorService cpuExecutor = Executors.newFixedThreadPool(4); // Adjust pool size for CPU cores
for (int i = 0; i < 100; i++) {
    cpuExecutor.submit(() -> performCPUIntensiveTask());
}
cpuExecutor.shutdown();
Using a fixed-size pool for CPU-bound tasks helps manage CPU usage and avoids overwhelming the CPU with excessive task scheduling.
Relying on Global State in Virtual Threads
Virtual threads can be short-lived and numerous, so relying on shared global state can lead to contention and potential race conditions, especially when many virtual threads attempt to access or modify the state concurrently.
Anti-Pattern Example: Modifying shared global state from multiple virtual threads.
public static int counter = 0;

for (int i = 0; i < 1000; i++) {
    Thread.startVirtualThread(() -> counter++); // Race condition: ++ is not atomic
}
Better Approach: Use thread-safe data structures or local variables to reduce contention. For counters, consider using AtomicInteger or other concurrent utilities.
AtomicInteger counter = new AtomicInteger();
for (int i = 0; i < 1000; i++) {
    Thread.startVirtualThread(() -> counter.incrementAndGet());
}
Avoiding shared global state or using thread-safe structures reduces contention and prevents data corruption, especially in high-concurrency environments.
Conclusion
Project Loom’s virtual threads bring a groundbreaking shift to Java’s concurrency model, allowing developers to write more intuitive, efficient, and scalable concurrent code. By making virtual threads lightweight and capable of handling blocking operations without tying up OS resources, Project Loom simplifies complex concurrency patterns, allowing developers to write straightforward, blocking code that performs well under high concurrency.
Key patterns, such as task-per-thread, structured concurrency, and asynchronous handling of OS-bound tasks, demonstrate how virtual threads can enhance both code simplicity and application performance. These patterns, combined with new APIs like StructuredTaskScope, make it easier to handle interdependent tasks, manage cancellations, and propagate exceptions in a cohesive way. At the same time, understanding anti-patterns—such as avoiding excessive thread creation for short-lived tasks or blocking on OS-level resources—is essential to prevent bottlenecks and ensure efficient resource usage.
Virtual threads encourage developers to rethink their approach to concurrency, moving away from complex reactive frameworks or callback-heavy asynchronous code toward a more synchronous and readable model. However, for CPU-bound tasks or specific I/O operations that require OS thread involvement, traditional approaches like fixed-thread pools and asynchronous task delegation remain relevant.
In essence, virtual threads make concurrency accessible and manageable, even for complex applications, while allowing developers to focus on the core logic rather than threading intricacies. As virtual threads become standard, Java developers can embrace a more flexible and high-performing concurrency model that scales efficiently and integrates smoothly with existing libraries and frameworks, setting the stage for a new era in Java application development.
by Alexius Dionysius Diakogiannis at November 08, 2024 03:55 PM
November 04, 2024
k8s Costs, UUIDs, EJBs to CDI migration--Questions for 128th airhacks.tv
November 04, 2024 06:39 PM
- k8s cost estimation
- UUID generation for persistence
- Thoughts on: How to migrate from EJB to CDI?
- The 100-episodes-back time machine. Questions from the 28th airhacks.tv:
"Java EE 8 News; New server (the real hardware): the part list; Oracle says it is 'committed' to Java EE 8; microprofile.io announcement; WildFly Swarm, Payara Micro and the relation to microservices; Dynamic injection into @Stateless EJBs; Handling ViewExpiredException in JSF; Managing JAX-RS clients on servers; Accessing GlassFish / Payara logfiles from the browser; Is overusing CDI a code smell?; JAX-RS MessageBodyWriter and Singleton challenges; How to approach logging in microservices?; Monitoring Java EE methods"
Any questions left? Ask now: and get the answers at the next airhacks.tv. Some questions are also answered with a short video: 60 seconds or less with Java
Ask questions during the show via Twitter, mentioning me: https://twitter.com/AdamBien (@AdamBien), using the hashtag #airhacks, or via the built-in chat at airhacks.tv. You can join the Q&A session live on the first Monday of each month, 8 P.M., at airhacks.tv.
November 02, 2024
Reader.of(CharSequence)
by Markus Karg at November 02, 2024 01:21 PM
Hi guys, how’s it going? Long time no see!
In fact, I have been very silent in the past months, and as you can imagine, there is a reason: I simply had no time to share all the great stuff I was involved with recently. In particular, creating video content for YouTube is so time-consuming that I decided to stop with it by the end of 2023, at least “for some time”, until my personal stress level “normalized” again. Unfortunately, now at the end of 2024, it still is at 250%… Anyways!
Having said that, I decided to restart my blog. While many people told me that blogging is @deprecated since the invention of vlogs, I have to say it is just so much easier for me to write a blog article that I decided to ignore them and write some lines about my latest Open Source contribution. So here it is: my first blog entry in years!
But enough about me. What I actually wanted to tell you today is that I am totally happy these days. The reason is that since this week, JDK 24 EA b22 is available for download, and as you can see in the preliminary JavaDocs, my very first addition to the official Java API is included: Reader.of(CharSequence)!
You might wonder what’s so crazy about that, because (as you certainly know) I have been a contributor to OpenJDK for many years. Well, yes, I did contribute to OpenJDK (alias “to Java”) for a long time, but all my contributions were just “under the hood”. I optimized execution performance, improved JavaDocs, added unit tests, and refactored code. But this time, I added a completely new feature to the public API. It feels amazing to see that my work of the past few weeks will help Java developers in their future projects to spare some microseconds and some kilobytes per call; in sum, those approximately ten million developers (according to Oracle marketing) will spare a considerable amount of CO2!
Okay, so what actually is Reader.of(CharSequence) all about, how can you use it, and how does it spare resources?
I think you all know what the class StringReader is and what to use it for: you need (for whatever reason) an implementation of Reader, and the source is a String. At least, that is what it was made for decades ago. In fact, looking at actual uses in 2024, more often than not the source isn’t a String, but (mostly for performance reasons) a StringBuilder or StringBuffer, and sometimes (mostly for technical reasons) a CharBuffer. These classes all share one common interface, CharSequence, which is “the” interface of the class String, too. Unfortunately, StringReader is unable to accept a CharSequence; it only accepts String. That’s too bad, because it means most uses of StringReader actually perform an intermediate toString() operation, which creates a temporary copy of the full content on the heap, just to throw it away later! Creating this copy is anything but free: it takes time to find a free place on the heap, to copy the content onto the heap, and to later GC (dispose and defragment) the otherwise unused copy. Time is not just money; this operation costs power, and power costs (even these days) CO2!
On top of that, most (possibly all) uses of StringReader are single-threaded. I investigated for some time but could not find a single reason for accessing a StringReader in a multi-threaded fashion. Unfortunately, StringReader is thread-safe: it internally uses the synchronized keyword in every single method. Each time. For each single read, possibly in a loop of a thousand iterations! And yes, you guessed right: synchronized is anything but fast. It slows down code considerably, for zero benefit! And no, the JVM has no trick to speed this up in single-threaded use cases; that trick (“Biased Locking”) went away years ago, and the result is that synchronized is slow again!
Imagine you are writing a RESTful web server which returns JSON on GET. JSON is nothing else but a character sequence. You build it with a non-trivial algorithm using a StringBuilder. That particular JSON unfortunately is non-cacheable, as it contains information sent with the request, changing over time, or provided by the real physical world. So the server possibly produces tens of thousands of StringBuilders per second and reads each of them using a StringReader. Can you imagine what happens to your performance? Thanks to the combination of both effects described earlier, you’re losing money with every other customer contacting your server. YOUR money, BTW.
This is exactly what happened in my latest commercial project, so I tried to get rid of StringReader. My first idea was to use Apache IO’s CharSequenceReader, which looks a bit old-school but immediately got rid of both effects! The problem with Apache IO is that it is rather huge. Pulling in lots of KBs of transitive dependencies just for one single use case didn’t sound like a smart option (but yes, this is the code actually in production still, at least until JDK 24 is published in Q1/25). Also, the customer was not very pleased to adopt another third-party library into the game. And finally, the code of Apache IO is not really eagerly maintained; they do bug fixes, but they abstain from using modern Java APIs (not even multi-release JARs). Some will hate me for writing this, but the actual change rate didn’t look “stable”, it looked “dead”. Agreed, this is subject to my personal interpretation.
Being an enthusiastic Open Source committer for decades, and an OpenJDK contributor for years, I had the idea to tackle the problem at its root: StringReader. So I proposed to provide a PR for a new public API, which was very much appreciated by the OpenJDK team. It was Alan Bateman himself (Group Lead of JDK Core Libraries) who came up with the proposal to have a static factory, which culminated in me posting a PR on GitHub about adding Reader.of(CharSequence). After the mandatory CSR was accepted, it recently got merged, and since JDK 24’s Early Access Build 22 it is publicly available.
BTW, look at the implementation of that Reader’s bulk-read method. There is an ugly sequence of tricks to speed up performance. I will address this in an upcoming PR. Stay tuned!
So if you want to gain the performance benefit, here is what you need to do:
- Run your app on Java 24 EA b22+.
- Replace all occurrences of new StringReader(x) and new CharSequenceReader(x) with Reader.of(x).
- If x ends with .toString(), remove that trailing .toString(), unless the left side of x is effectively not a CharSequence.
- Note: If you actually use multiple threads to access the Reader, don’t stick with StringReader; simply surround your calls with a modern means of synchronization, like a Lock (locks are faster than synchronized; see the sketch after this list).
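As a sketch of that Lock-based synchronization (my wording and class layout, not the author’s), a ReentrantLock can guard a shared Reader when multiple threads really must read from it:

import java.io.IOException;
import java.io.Reader;
import java.util.concurrent.locks.ReentrantLock;

public class SharedReader {

    private final Reader reader = Reader.of(new StringBuilder("shared content")); // JDK 24 EA b22+
    private final ReentrantLock lock = new ReentrantLock();

    int readOne() throws IOException {
        lock.lock(); // A Lock avoids the per-call cost of synchronized
        try {
            return reader.read();
        } finally {
            lock.unlock();
        }
    }
}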
Please exterminate StringReader and adopt Reader.of() ASAP!
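Here is a minimal before/after sketch of that migration (my illustration, assuming a JDK 24 EA b22+ runtime):

import java.io.Reader;
import java.io.StringReader;

public class MigrationExample {

    void before(StringBuilder json) throws Exception {
        // Old: toString() copies the whole content onto the heap,
        // and StringReader synchronizes on every read
        Reader reader = new StringReader(json.toString());
        reader.read();
    }

    void after(StringBuilder json) throws Exception {
        // New (JDK 24+): reads the CharSequence directly, with no copy and no synchronization
        Reader reader = Reader.of(json);
        reader.read();
    }
}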
I would be happy if you could report the results. Just leave a comment!
So far for today! PARTY ON!
October 30, 2024
The Payara Monthly Catch - October 2024
by Chiara Civardi (chiara.civardi@payara.fish) at October 30, 2024 06:45 AM
Ahoy ghoulish crew! The October edition of Payara’s Monthly Catch is truly spooktacular! Check out this month's highlights – this month’s edition is packed with must-read guides, bewitching tutorials and technical tricks (and treats!) to keep your enterprise Java/Jakarta EE skills sharp as ever.
by Chiara Civardi (chiara.civardi@payara.fish) at October 30, 2024 06:45 AM
October 18, 2024
Connecting to Redis from a Jakarta EE application
by F.Marchioni at October 18, 2024 05:28 PM
In this tutorial, we’ll learn how to integrate Redis into a Jakarta EE application using the Lettuce client library. Redis is a powerful in-memory data structure store, often used as a cache, message broker, or database. Lettuce is a popular Java client for Redis, providing both synchronous and asynchronous capabilities. We will use a simple ... Read more
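As a quick taste of what the tutorial covers (a minimal sketch against the public Lettuce API; the localhost URI is an assumption):

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class RedisQuickstart {
    public static void main(String[] args) {
        // Assumes a Redis server listening on localhost:6379
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> commands = connection.sync();
            commands.set("greeting", "Hello from Jakarta EE");
            System.out.println(commands.get("greeting"));
        } finally {
            client.shutdown();
        }
    }
}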
The post Connecting to Redis from a Jakarta EE application appeared first on Mastertheboss.
October 15, 2024
Rising Momentum in Enterprise Java: Insights from the 2024 Jakarta EE Developer Survey Report
by Tatjana Obradovic at October 15, 2024 03:47 PM
The seventh annual Jakarta EE Developer Survey Report is now available! Each year, this report delivers crucial insights into the state of enterprise Java and its trajectory, providing a comprehensive view of how developers, architects, and technology leaders are adopting Java to meet the growing demands of modern cloud-native applications.
The 2024 survey, which gathered input from over 1400 participants, paints a clear picture of the current state of enterprise Java and where it may be headed in the future.
Jakarta EE continues to be at the forefront of this evolution, as adoption continues to accelerate across the enterprise landscape. Our survey finds that usage of Jakarta EE for building cloud native Java applications has grown from 53% to 60% since last year. While Spring/Spring Boot remains the leading Java framework for cloud native applications, both Jakarta EE and MicroProfile have seen notable growth, highlighting a healthy diversity of choices for developers building modern enterprise Java applications.
32% of respondents have now migrated to Jakarta EE from Java EE, up from 26% in 2023. This marks a clear trend as enterprises shift towards more modern, cloud-friendly architectures. The transition to Jakarta EE 10, in particular, has been rapid, with adoption doubling to 34% from the previous year.
We’re also seeing a gradual shift away from older versions of Java in favour of more recent LTS versions. Usage of Java 17 has grown to 56%, up from 37% in 2023, and Java 21 has achieved a notable adoption rate of 30% in its first year of availability. Meanwhile, usage of the older Java EE 8 has declined.
Looking to the Future of Jakarta EE
The 2024 Jakarta EE Developer Survey Report not only provides a clear picture of the current challenges and priorities of enterprise Java developers, but also shows us what they hope to see from Jakarta EE in the future.
The survey highlights several key priorities for the Jakarta EE community moving forward:
- Enhanced support for Kubernetes and microservices architectures
- Better alignment with Java SE features
- Improvements in testing support
- Faster innovation to keep pace with enterprise needs
These priorities reflect the real-world challenges that developers and enterprises face as they build and scale cloud native applications. With the release of Jakarta EE 11 fast approaching, work is already underway on future Jakarta EE releases, and these insights are crucial to the direction of this effort.
We invite you to take a look at the full report and discover more critical findings. Don’t miss the opportunity to see how the future of enterprise Java is unfolding before your eyes.
Learn more about Jakarta EE and the Jakarta EE Working Group at jakarta.ee
September 30, 2024
The Payara Monthly Catch - September 2024
by Chiara Civardi (chiara.civardi@payara.fish) at September 30, 2024 08:20 AM
Ahoy crew! Welcome aboard the September edition of Payara’s Monthly Catch! As we chart our course toward the final quarter of the year, we’ve got a boatload of exciting updates, news and highlights from the Payara community and beyond. Here's our brief overview of must-read guides, in-depth tutorials, technical insights, and expert advice from September that will help you elevate your software development and deployment for enterprise Java applications!
by Chiara Civardi (chiara.civardi@payara.fish) at September 30, 2024 08:20 AM
September 12, 2024
GlassFish is rolling forward. What’s New?
by Ondro Mihályi at September 12, 2024 09:01 AM
The Evolution Continues. GlassFish, which used to be a popular application server, free to use and reliable, is evolving again. If you’ve been holding onto your old GlassFish instances, there’s good news—things have gotten a lot more exciting recently.
Since we created the OmniFish company and started improving GlassFish in 2022 (read about that story in Oh, What Did You Do to GlassFish?!), a lot has happened around GlassFish. We joined the Eclipse GlassFish project and joined the Jakarta EE Working Group as well. We announced the start of our enterprise support services for Eclipse GlassFish and have been helping our customers and the community of GlassFish users since then. So you may wonder, what’s been happening in the past 2 years and what’s up with GlassFish right now?
What’s the Buzz? First off, GJULE—a completely rewritten logging engine—now makes logging faster and more reliable. If logging felt sluggish before, GJULE breathes new life into it. You can enable even the lowest log levels and catch every detail without bogging down your server and applications. To find out more about the new GlassFish logging, watch this video about Changes in GlassFish 7 Logging System on YouTube.
For those dabbling in microservices, GlassFish now natively supports MicroProfile Config, REST Client, and JWT Authentication. It means your applications can be more modular, easier to configure, and more secure without jumping through hoops. Of course, all these MicroProfile APIs are useful in traditional enterprise applications too, which allows you to modernize your existing apps, simplify or even remove some of the boilerplate code.
A Developer’s Best Friend. Gone are the days of clunky setups. The GlassFish Embedded runtime now simplifies starting up your applications. GlassFish 7.0 brought a lot of fixes to GlassFish Embedded, making it a slick way to run Jakarta EE apps, with faster startup times, whether GlassFish is embedded in the app or started with a Maven plugin. There’s an Arquillian container too, so you can easily run your tests without launching the whole GlassFish server. Since then, GlassFish Embedded has received a few improvements, among them a simple API to run a plain JAR in the embedded server without a separate WAR file and an easier-to-use Maven plugin.
And yes, GlassFish runs on the latest Java versions—even those still in pre-release—keeping you on the cutting edge without the usual headaches. Although GlassFish Embedded still requires a few --add-opens JVM arguments with recent Java versions to bypass the Java module system restrictions, the number of them has been reduced, with the aim of avoiding any need for --add-opens arguments in the future.
Stay Secure and Auditable. Security-conscious? GlassFish has you covered. Vulnerabilities are being regularly fixed. GlassFish is kept up to date with recent versions of dependencies, which contain latest security fixes. Besides that, Admin Command Logger is a new tool that logs all administrative commands, whether from the Admin Console or Asadmin CLI. This feature ensures transparency and security in every admin operation. You can simply enable the logger with a button in the Admin Console or set a system property using the asadmin create-system-properties command. The logs will then record all the configuration changes, with the time and the author’s username. Useful for auditing, or for reverting changes if something goes wrong, right? As a bonus, this feature also gives you a way to find out which asadmin command to use in your setup scripts. Simply launch the server, make changes in the Admin Console UI, check the logs, and, voilà, copy the command to your script.
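As a rough sketch of the system-property route (my addition; the property key below is hypothetical, so check the GlassFish documentation for the actual name):

# The property key here is hypothetical and shown for illustration only
asadmin create-system-properties glassfish.admin-command-logger.enabled=true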
But that’s not all. If you’re eyeing the future, work is well underway on GlassFish 8 to support Jakarta EE 11. This forward-looking approach means you won’t be left behind when modernizing your apps or you’ll find some new Jakarta EE 11 features useful and would like to use them as soon as possible. If you want to try out those features, just grab the latest milestone of GlassFish 8 and get a feeling of what you can expect from the final release of GlassFish 8!
Need to keep a low profile? While GlassFish is getting improvements, new features and Jakarta EE 11 support, Piranha Cloud has emerged as a fast and lightweight Jakarta EE runtime based on many GlassFish components. It now provides most of the Jakarta EE Web Profile features but differs significantly from traditional application servers. It’s extremely modular and allows you to build a custom runtime with only the components you need, from just a servlet container that can programmatically turn request objects into response objects, to a full runtime JAR with all the components that can run traditional WAR files. It’s up to you to choose the proper balance between simplicity and control. Piranha Cloud can replace GlassFish Server if you just need a lightweight Jakarta EE runtime to run web apps, or it can replace Tomcat or GlassFish Embedded if you want to embed Jakarta EE technologies and have full control over them.
Community-Driven Power-Ups. Thanks to OmniFish, GlassFish isn’t just an application server. Our contributions include Docker images optimized for faster start and lower memory use, an enhanced Arquillian container with simpler configuration and debugging options, and even an Eclipse IDE plugin that supports the latest GlassFish versions.
Commercial support gets you covered. Eclipse GlassFish is an open source server maintained by project members in the Eclipse Foundation, but it’s not supported only by a community of volunteers. Several companies offer commercial services related to GlassFish. OmniFish provides full high-quality support and consultancy services to cover your back in case you run into trouble, or if you need new features in GlassFish to make your work more efficient. OmniFish actively contributes to the development of the GlassFish project, more than all other companies and individual contributors together. OmniFish engineers have years of experience solving production issues, improving the performance and usability of GlassFish, and adding new features to GlassFish and its ecosystem of tooling. On top of that, we at OmniFish have a vision of turning GlassFish into a productive development platform and production runtime to write and run Java apps like a god!
Final Thoughts. So, what’s next for GlassFish? It’s not about survival anymore—it’s about thriving in a modern development landscape. OmniFish and the whole GlassFish team have more plans for the future. More features to simplify app development and configuration. More ways to run apps in different environments and architectures. And, of course, even more reliable, more secure, and faster production runtime.
Whether you’re maintaining legacy systems or embracing the latest in Jakarta EE, GlassFish has the tools and updates you need to stay ahead.
The post GlassFish is rolling forward. What’s New? appeared first on OmniFish.
August 19, 2024
Jakarta EE 10 Application with an Angular Frontend
by F.Marchioni at August 19, 2024 09:20 AM
In this tutorial, we’ll walk through the process of creating a simple Jakarta EE 10 application that exposes a RESTful endpoint to fetch a list of customers. We’ll then build an Angular front end that consumes this endpoint and displays the list of customers in a user-friendly format. Prerequisites Before starting, ensure you have the ... Read more
The post Jakarta EE 10 Application with an Angular Frontend appeared first on Mastertheboss.
August 13, 2024
Simplify your configuration with versionless features in 24.0.0.8
August 13, 2024 12:00 AM
This release introduces versionless features for the Jakarta EE, Java EE, and MicroProfile platforms. It also includes updates to eliminate unnecessary audit records.
In Open Liberty 24.0.0.8:
Along with the new features and functions added to the runtime, we’ve also added a new guide to using MicroProfile Config.
View the list of fixed bugs in 24.0.0.8.
Check out previous Open Liberty GA release blog posts.
Develop and run your apps using 24.0.0.8
If you’re using Maven, include the following code in your pom.xml file:
<plugin>
<groupId>io.openliberty.tools</groupId>
<artifactId>liberty-maven-plugin</artifactId>
<version>3.10.3</version>
</plugin>
Or for Gradle, include the following in your build.gradle file:
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath 'io.openliberty.tools:liberty-gradle-plugin:3.8.3'
}
}
apply plugin: 'liberty'
Or if you’re using container images:
FROM icr.io/appcafe/open-liberty
Or take a look at our Downloads page.
If you’re using IntelliJ IDEA, Visual Studio Code or Eclipse IDE, you can also take advantage of our open source Liberty developer tools to enable effective development, testing, debugging and application management all from within your IDE.
Streamline feature selection with versionless Jakarta EE, Java EE, and MicroProfile features
With Open Liberty, you configure only the features at the specific versions that your application needs. This composable design pattern minimizes runtime resource requirements and accelerates application startup times. However, you might not always know which version of a feature is compatible with the rest of your application configuration. In previous releases, determining the correct version typically required a mix of experimentation, guesswork, and digging deep into feature documentation. In 24.0.0.8 and later, versionless features automate version selection, enabling you to focus on application development without worrying about compatibility issues.
For example, instead of specifying servlet-6.0 in your server.xml file and having to figure out which other feature versions are compatible with Servlet 6.0, you can specify a platform version and servlet. The platform that you specify resolves all versionless features to a compatible version.
The following server.xml file configuration uses a Java EE platform of javaee-8.0 with associated versionless features defined for servlet, jpa, and jaxrs:
<!-- Enable features -->
<featureManager>
<platform>javaee-8.0</platform>
<feature>servlet</feature>
<feature>jpa</feature>
<feature>jaxrs</feature>
</featureManager>
This example enables versionless MicroProfile features with microProfile-5.0 specified as the platform element:
<!-- Enable features -->
<featureManager>
<platform>microProfile-5.0</platform>
<feature>mpHealth</feature>
<feature>mpMetrics</feature>
</featureManager>
Note: The Liberty Maven and Gradle build plugins do not yet support versionless features or platform definitions.
Learn more and check out the full collection of available platforms and versionless features in the Open Liberty docs. Stay tuned for more versionless features and platforms in future releases.
Use the Audit 2.0 feature to avoid generating unnecessary REST Handler records
The 24.0.0.8 release introduces the Audit 2.0 feature (audit-2.0). The feature is designed for users who are not using REST Handler applications. It provides the same audit records as the Audit 1.0 feature (audit-1.0) but does not generate records for REST Handler applications. If you need to keep audit records for REST Handler applications, you can continue to use the Audit 1.0 feature.
To enable the Audit 2.0 feature in your application, add the following code to your server.xml file:
<featureManager>
<feature>audit-2.0</feature>
</featureManager>
New guide: Externalizing environment-specific microservice configuration for CI/CD
A new guide is available under the Configuration category: Externalizing environment-specific microservice configuration for CI/CD. You’ll learn how to use MicroProfile Config’s configuration profiles to externalize configurations for different phases of the CI/CD lifecycle.
Get Open Liberty 24.0.0.8 now
Available through Maven, Gradle, Docker, and as a downloadable archive.
August 08, 2024
Boost Performance and Developer Productivity with Jakarta EE 11 – World Congress slides
by Arjan Tijms at August 08, 2024 08:18 PM
The post Boost Performance and Developer Productivity with Jakarta EE 11 – World Congress slides appeared first on OmniFish.
August 06, 2024
Recognize your cloud-native Java development skills with the Liberty Developer Essentials Badge
August 06, 2024 12:00 AM
In a world where in-demand skills are critical for job success, security, and progression, it’s vital that we, as developers, ensure we are showcasing our skills to the wider world. This can include qualifications, courses, and badges, all of which help us to advertise our skills and highlight our professional experiences and expertise.
So, to help make this easier (and free!) for Java developers, the Open Liberty team created the first-ever Liberty badge: Liberty Developer Essentials!
This badge enables developers to showcase their ability to use open source technologies, such as Open Liberty, Jakarta EE, and MicroProfile, to effectively create a cloud-native Java application.
If you’re not familiar with Open Liberty, it is an open application framework that is designed for the cloud. It’s small, lightweight, and designed with modern cloud-native application development in mind. It supports the full MicroProfile and Jakarta EE APIs and is composable, meaning that you can use only the features that you need and keep everything lightweight, which is great for microservices. It also deploys to every major cloud platform, including Docker, Kubernetes, and Cloud Foundry. You can check out more about why developers love Liberty in this article on IBM Developer.
Who should apply for this badge?
New Java developers and experienced Java developers can both benefit from this badge!
New developers
If you’re new to the world of developing cloud-native Java applications and you’re looking for a good place to start, this badge and its corresponding course are a great starting point. By completing this course, you’ll learn practical, hands-on skills to effectively develop a basic Java application. You’ll then be able to apply these skills and be recognised for them through the associated badge that you can advertise on your CV, LinkedIn profile, and elsewhere.
Experienced developers
Alternatively, if you already have experience developing cloud-native Java applications, you can benefit from this badge as a way to showcase and advertise your skills externally. If you're a developer who is already using Liberty, this is a great way to easily show your experience and your ability to use Liberty and other enterprise-level, open source technologies and standards to effectively create cloud-native Java applications.
On the other hand, if you’re experienced in developing cloud-native Java applications but have not used Liberty before, this course and badge offer you an opportunity to showcase your transferable skills, add Liberty to your tool belt, and widen the range of proven platforms that you can apply your development skills to.
How can I get this badge?
To earn the badge, there are two core components:
- A hands-on course
- An exam that tests the skills and knowledge learnt through the course
Hands-on Course
Developers who complete the Essentials for Cloud-Native Java Application Development beginner-level course on cognitiveclass.ai can earn this badge.
Note: If you’re already an experienced Liberty user, you’re also welcome to skip straight to the end exam.
This course teaches you the essential skills and technologies to create a basic cloud-native Java application with Open Liberty. It is composed of 5 modules that all involve hands-on coding experience using some of the Open Liberty interactive guides.
Course modules:
- Getting started with Open Liberty
- Creating a RESTful web service
- Consuming a RESTful web service
- Injecting dependencies into microservices
- Configuring microservices
By completing these modules, you’ll learn about REST applications, contexts and dependency injection (CDI), externalizing application configuration, and more. All of these skills are essential for developing a basic cloud-native Java application. These modules utilise enterprise, open source, industry standards, including MicroProfile and Jakarta EE - skills that are especially important for developers working on enterprise applications.
There are no hard requirements to be able to take this course. However, a basic knowledge of Java, Maven, and microservices will be useful. It’s also worth noting that this is a self-paced course and can be taken at any time.
End Exam
At the end of the course, you'll be presented with an exam to complete. To pass this end exam, you must score at least 80% or higher. The exam consists of 20 multiple-choice questions based on the skills and knowledge you gained by completing the course.
Once you successfully pass this final exam, you’ll receive the Liberty Developer Essentials badge from Credly. You can then share this badge through social media sites like LinkedIn, or add it to things like your CV or email footer.
The first of many…
This badge is what we hope will be the first of many Liberty badges, enabling developers to learn and be recognised for various skills that are required for effective cloud-native Java app development. In the future, we aim to create badges that go beyond the beginner level into deeper, more challenging topics. Keep your eyes peeled for updates. If you have suggestions for badges you’d like to see, share them with us by creating an issue on the Open Liberty GitHub repository.
Get your Liberty Developer Essentials Badge today!
So, whether you’re new to Java development or a seasoned pro, get your Liberty Developer Essentials badge today and showcase your cloud-native Java application development skills! Once you’ve been awarded the badge, we’d love to see them on social media - please do tag us on X (@OpenLibertyIO) and LinkedIn (Open Liberty) so we can celebrate with you!
June 17, 2024
The Generational Z Garbage Collector (ZGC)
by Alexius Dionysius Diakogiannis at June 17, 2024 05:51 AM
The Generational Z Garbage Collector (ZGC)
The Generational Z Garbage Collector (GenZGC) in JDK 21 represents a significant evolution in Java’s approach to garbage collection, aiming to enhance application performance through more efficient memory management. This advancement builds upon the strengths of the Z Garbage Collector (ZGC) by introducing a generational approach to garbage collection within the JVM.
Design Rationale and Operational Mechanics
Generational Hypothesis: Generational ZGC leverages the "weak generational hypothesis," which posits that most objects die young. By dividing the heap into young and old regions, GenZGC can focus its efforts on the young region, where most objects become unreachable, thereby optimizing garbage collection efficiency and reducing CPU overhead.
Heap Division and Collection Cycles: The heap is divided into two logical parts: the young generation and the old generation. Newly allocated objects are placed in the young generation, which is frequently scanned for garbage collection. Objects that survive several collection cycles are then promoted to the old generation, which is scanned less often. This division allows for more frequent collection of short-lived objects while reducing the overhead of collecting long-lived objects.
Performance Implications
Throughput and Latency: Internal performance tests have shown that Generational ZGC offers about a 10% improvement in throughput over its single-generation predecessors in both JDK 17 and JDK 21, despite a slight regression in average latency measured in microseconds. However, the most notable improvement is observed in maximum pause times, with a 10-20% improvement in P99 pause times. This reduction in pause times significantly enhances the predictability and responsiveness of applications, particularly those requiring low latency.
Allocation Stalls
A crucial advantage of Generational ZGC is its ability to mitigate allocation stalls, which occur when the rate of object allocation outpaces the garbage collector’s ability to reclaim memory. This capability is particularly beneficial in high-throughput applications, such as those using Apache Cassandra, where Generational ZGC maintains performance stability even under high concurrency levels.
Practical Considerations and Adoption
Transition and Adoption: While JDK 21 introduces Generational ZGC, single-generation ZGC remains the default for now. Developers can opt into using Generational ZGC through JVM arguments (`-XX:+UseZGC -XX:+ZGenerational`). The plan is for Generational ZGC to eventually become the default, with single-generation ZGC being deprecated and removed. This phased approach allows developers to gradually adapt to the new system.
Diagnostic and Profiling Tools: For those evaluating or transitioning to Generational ZGC, tools like GC logging and JDK Flight Recorder (JFR) offer valuable insights into GC behavior and performance. GC logging, accessible via the `-Xlog` argument, and JFR data can be analyzed in JDK Mission Control (JMC) to assess garbage collection behavior and application performance implications.
Conclusion
Generational ZGC represents a significant step forward in Java’s garbage collection technology, offering improved throughput, reduced pause times, and enhanced overall application performance. Its design reflects a deep understanding of application memory management needs, particularly the efficient collection of short-lived objects. As Java applications continue to grow in complexity and scale, the adoption of Generational ZGC could be a pivotal factor in achieving the performance goals of modern, high-demand applications.
The transition from Java 17 to Java 21 heralds a new era of Java development, characterized by significant improvements in performance, security, and developer-friendly features. The API changes and enhancements discussed above are just the tip of the iceberg, with Java 21 offering a wealth of other features and improvements designed to cater to the evolving needs of modern application development.
As developers, embracing Java 21 and leveraging its new features and improvements can significantly impact the efficiency, performance, and security of Java applications. Whether it's through the enhanced I/O capabilities, improved serialization exception handling, or the new Unicode support in the `Character` class, Java 21 offers a compelling upgrade path from Java 17, promising to enhance the Java ecosystem for years to come.
In conclusion, the evolution from Java 17 to Java 21 is a testament to the ongoing commitment to advancing Java as a language and platform. By exploring and adopting these new features, developers can ensure their Java applications remain cutting-edge, secure, and performant in the face of future challenges.
June 01, 2024
Migrating a Spring Boot project to Helidon (Helidon Petclinic)
by dmitrykornilov at June 01, 2024 12:35 PM
In this article, you will learn about:
- The motivation behind writing this article
- The architecture of the Spring Petclinic Rest project
- The architecture of the Helidon Petclinic project
- How to migrate the data module
- How to migrate REST controllers
- How to write tests
- Interesting issues and pitfalls, along with their solutions
Motivation
Initially, I anticipated the article would be short, but it ended up being quite lengthy. You could think of it as a missing chapter from the “Beginning Helidon” book I co-authored with my colleagues. Yes, it’s a bit cheesy to advertise the book in the first paragraph. You might assume that promoting the book was my sole motivation for writing this article. Admittedly, it’s a significant motivator, but not the only one.
In the Helidon support channels, we frequently encounter questions about migrating Spring Boot applications to Helidon. To address these inquiries, we concluded that creating a Helidon version of the well-known Spring Petclinic demo application and documenting the migration strategies and challenges would be the best approach. I volunteered for the task because of my previous experience with Spring programming and because I hadn’t engaged in real programming for quite some time, and I wanted to demonstrate that I still could. Whether I succeeded or not, you readers can decide after reviewing the work. Perhaps I shouldn’t have taken it on. You can find the result here. Anyway, enough philosophical musings; let’s get down to business.
Original Spring Petclinic Rest project
When I began my research, I started with the original Spring Petclinic. However, it seemed a bit outdated to me, so I explored other Petclinic forks available at https://github.com/spring-petclinic. One that caught my attention was the Spring Petclinic Rest, which functions as a RESTful service. Its architecture aligns well with the concepts of Helidon. Additionally, it features an Angular-based UI in a separate project. The plan was to develop a Helidon-based backend to complement the frontend project for demonstration purposes.
The Spring Petclinic Rest project is thoroughly documented in its README.md. For convenience, I’ll provide some basic design concepts here:
- It operates as a RESTful service, serving as the backend for the Spring Petclinic Angular project.
- The project adopts an API-first approach, with all endpoints and interfaces described in an OpenAPI document. Java sources for the corresponding model classes and services are generated by the openapi-generator-maven-plugin.
- Data is stored in a database, with support for HSQL, MySQL, and PostgreSQL.
- The project supports Basic Authentication security and manages users and roles in the database.
There are two data models:
- Data model – It consists of JPA entities reflecting the database structure.
- DTO (Data Transfer Objects) – These define data used to communicate with the public RESTful service. They are generated from the OpenAPI document.
And three layers:
- Presentation layer – This layer contains RESTful controllers that implement the public API. It operates using DTOs and serves as a thin layer. Its purpose is to retrieve data, map it to entities (data model) using a mapper, send it to the service layer for processing, convert the result to DTO, and then return it to the client.
- Business layer – Also known as the service layer. This layer contains the business logic and operates using entities (data model). It communicates with the database layer.
- Database layer – Utilizing Spring Data JPA implementing the repository pattern. This layer consists of data repository interfaces, which perform operations on data such as querying, adding, and deleting. The database structure diagram is provided below.
Helidon Petclinic
Helidon Petclinic is a project I created. You can check it out here. The README.md contains information about how to build and run it.
My goal was to preserve the design and structure of the application as much as possible. In the end, I achieved a layer structure very similar to the original. The only significant change is an optimization related to the database layer: I perform database operations in the service layer, effectively integrating it with the database layer. The original application's service layer methods typically involved only a single-line call to the data repository, so this made sense to me. I drew a picture (see below) to help you understand the difference.
Spring and Helidon are different frameworks. Spring is built around the proprietary Spring Injection. Although it integrates with some open-source libraries and standards, most of its features are Spring-specific. On the other hand, Helidon (Helidon MP) is built on top of Enterprise Java standards such as Jakarta EE and MicroProfile, which technically makes it more open. There is some overlap in third-party libraries. Both Spring and Helidon use JPA, which will make our database layer migration easier. Additionally, Jackson and Mockito can be used in both frameworks. But I would recommend using Jakarta equivalents of Jackson for better Jakarta compatibility. In the table below, I have listed Spring components and their corresponding Helidon equivalents, which I’ll use in my project.
| Spring | Helidon |
| --- | --- |
| Spring Injection | Jakarta Contexts and Dependency Injection (CDI) |
| Spring REST Controller | Jakarta RESTful Web Services (JAX-RS) |
| Spring Data | Jakarta Persistence (JPA) |
| Spring Open API generator | Helidon Open API generator |
| Spring Tests Framework | Helidon Tests |
| Jackson | Jakarta JSON Processing (JSON-P) and Jakarta JSON Binding (JSON-B) |
| Spring Configuration | MicroProfile Config |
I had to make some compromises. For simplicity’s sake, I only added support for HSQL. I also omitted security support. Basic Authentication with passwords stored in the database is not what I would recommend using; it falls far short of modern security standards. I may consider supporting security in future versions of the project if there is demand for it.
JPA Model
The JPA model comprises a set of JPA entities. JPA is a standard, and one of the beauties of standards is that you don't need to alter things when migrating code to another runtime. I made only minor adjustments to these classes, mainly replacing Spring-specific features with plain JDK equivalents. I replaced the usage of ToStringCreator in toString() methods with code generated by my IDE (IntelliJ IDEA rules!). Additionally, I replaced instances of PropertyComparator.sort() with standard JDK sorting, as shown below.
Spring code:
public List<Pet> getPets() {
List<Pet> sortedPets = new ArrayList<>(getPetsInternal());
PropertyComparator.sort(sortedPets,
new MutableSortDefinition("name", true, true));
return Collections.unmodifiableList(sortedPets);
}
Helidon code:
public List<Pet> getPets() {
var sortedPets = new ArrayList<>(getPetsInternal());
sortedPets.sort(Comparator.comparing(Pet::getName));
return Collections.unmodifiableList(sortedPets);
}
Another change I made was adding named queries to serve as a replacement for the Spring Data repositories. This is what I’ll discuss in the next section.
Replacing Spring Data repositories
Spring Data is the part of the Spring framework designed for working with databases. Spring Data implements the repository pattern, which involves defining interfaces containing methods to retrieve data (entities or collections of entities). The framework automatically generates the implementation of these interfaces. Operations on data and search criteria are defined by method names. For instance, the Collection<Pet> findAll() method retrieves all pets, while the void save(Pet pet) method updates the given Pet object in the database. This abstraction eliminates the need to understand the underlying query language. While some may find it convenient, I personally prefer working with SQL statements as they are more intuitive to me. I suppose I'm just too old-fashioned.
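For illustration, here is a hedged sketch of the kind of Spring Data repository interface being replaced; it assumes the project's Pet entity, and the exact interfaces in the Petclinic sources may differ:
import java.util.Collection;
import org.springframework.data.repository.Repository;
// Hypothetical repository sketch: Spring Data generates the implementation
// automatically, deriving the queries from the method names.
public interface PetRepository extends Repository<Pet, Integer> {
    // Retrieves all pets.
    Collection<Pet> findAll();
    // Inserts or updates the given pet.
    void save(Pet pet);
}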
To do this the Helidon way, I used JPQL queries, which I find preferable. In the service layer, rather than calling methods from repositories, I utilize the pure JPA API to achieve the same results.
The first step is injecting the entity manager into ClinicServiceImpl:
@PersistenceContext(unitName = "pu1")
private EntityManager entityManager;
The entity manager will handle all database operations, which were previously handled by the repositories.
Next, I went through all methods and replaced repository usage with equivalent code using the entity manager, following certain patterns.
Replacing find methods
For find* methods, a named query has to be created in the corresponding entity class and executed in the body of the method.
public List<Pet> findAllPets() {
return entityManager.createNamedQuery("findAllPets",
Pet.class).getResultList();
}
@NamedQueries({
@NamedQuery(name = "findAllPets",
query = "SELECT p FROM Pet p")
})
public class Pet extends NamedEntity {
...
}
Save methods
The ClinicService.save* methods combine 'create' and 'update' functionality. However, JPA provides separate methods for creating and updating entities. Therefore, I needed to separate these operations in the code, as shown in the sample below.
@Transactional
public void saveOwner(Owner owner) {
if (owner.isNew()) {
entityManager.persist(owner);
} else {
entityManager.merge(owner);
}
}
Delete methods
The replacement of delete methods is straightforward, but there is one pitfall that users need to understand and know how to avoid. Here is a sample of how it’s done for most entities:
@Transactional
public void deleteOwner(Owner owner) {
entityManager.remove(owner);
}
However, for some entities, it needs to be handled differently. There are cases where an entity is dependent on and managed by its parent entity. Such a relationship exists between the Owner and Pet entities: Owner contains a set of Pet entities annotated with cascade = CascadeType.ALL.
public class Owner extends Person {
@OneToMany(cascade = CascadeType.ALL,
mappedBy = "owner",
fetch = FetchType.EAGER,
orphanRemoval = true)
private Set<Pet> pets;
...
}
In this case, to remove a pet, instead of invoking entityManager.remove(pet), you need to delete it from the Owner.pets set and call entityManager.merge(owner).
@Transactional
public void deletePet(Pet pet) {
var owner = pet.getOwner();
owner.deletePet(pet);
entityManager.merge(owner);
entityManager.flush();
}
This is a legitimate use case. You can read more about it here.
Transactions
The handling of transactions differs between Helidon and Spring. Spring utilizes Spring Transactions, while Helidon uses Jakarta Transactions (JTA). However, these two APIs are similar. The project employs declarative transactions, defined by placing the @Transactional annotation on methods that need to be executed within a transaction. I only replaced the import from org.springframework.transaction.annotation.Transactional to jakarta.transaction.Transactional. Additionally, since JTA does not support read-only transactions, I removed the readOnly = true parameter from Spring's @Transactional annotation when present. Furthermore, I optimized the code by removing transaction support from methods that do not require transactions, such as methods that only read data from the database without writing to it.
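As a hedged before/after sketch (the class and method names are invented for illustration), the change amounts to swapping the annotation's import and dropping the read-only attribute:
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import jakarta.transaction.Transactional;
// Sketch of a migrated service method. In the Spring version the annotation
// came from org.springframework.transaction.annotation and could carry
// readOnly = true, which JTA's @Transactional does not support.
public class ClinicServiceSketch {
    @PersistenceContext(unitName = "pu1")
    private EntityManager entityManager;
    @Transactional // was a Spring @Transactional in the original
    public void save(Object entity) {
        entityManager.persist(entity);
    }
}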
Updating REST Controllers
Migrating the REST controllers to Helidon is one of the more complicated tasks. The source code is different because Helidon uses Jakarta RESTful Web Services (JAX-RS) while Spring uses proprietary libraries. I had to rewrite all REST controllers manually. There are some patterns which can be used, but it's not as mechanical as it is with the data repositories.
@RequestScoped (1)
public class PetResource implements PetService { (2)
@Context (3)
UriInfo uriInfo;
private final ClinicService clinicService;
private final PetMapper petMapper;
@Inject (4)
public PetResource(ClinicService clinicService,
PetMapper petMapper) {
this.clinicService = clinicService;
this.petMapper = petMapper;
}
@Override
public Response addPet(PetDto petDto) { (5)
var owner = clinicService.findOwnerById(
petDto.getOwnerId()).orElseThrow();
var pet = petMapper.toPet(petDto);
pet.setOwner(owner);
clinicService.savePet(pet);
var location = UriBuilder
.fromUri(uriInfo.getBaseUri())
.path("api/pets/{id}")
.build(pet.getId());
return Response.created(location)
.entity(petMapper.toPetDto(pet))
.build();
}
...
}
<1> – Makes this class a request-scoped CDI bean
<2> – Implements the generated PetService interface, which contains JAX-RS annotations defining paths, HTTP methods, etc.
<3> – Injects the UriInfo class using the JAX-RS @Context annotation
<4> – Constructor injection of ClinicService and PetMapper
<5> – JAX-RS method used to add a pet
Remember that this project is API-first: the REST resources and the model classes they operate on are generated from the OpenAPI document. All REST controllers implement these generated interfaces, and these interfaces are different for Spring and Helidon.
I’ll explain the OpenAPI plugin configuration in the next section.
OpenAPI
To generate code from the OpenAPI document, I used the OpenAPI plugin provided by Helidon. I couldn't use the plugin used in the original project because it generates Spring code, which is not compatible with Helidon. You can find the plugin configuration in the project pom.xml file. I won't go deep into the configuration; the only thing I'll mention is an option to skip generating data model tests, which I couldn't find in the documentation. The option is <generateModelTests>false</generateModelTests>.
There are some differences in how these two generators work. The first difference I noticed is the handling of read-only fields. The Spring generator doesn't do anything special and treats read-only fields like all other fields, which means they are mutable and not actually read-only. The Helidon generator behaves differently: it treats read-only fields as immutable. Their values are passed as constructor parameters and there are no setters, which makes them truly immutable. On the other hand, this adds some complications when migrating from the Spring approach. The major issue is that the no-args constructor can no longer be used when converting entities to DTOs. I'll explain how to deal with it in the Mapstruct section below.
Another issue I faced was the different handling of tags. The Helidon plugin uses tags to group operations into services.
In the Open API document tags are defined like this:
tags:
- name: owner
description: Endpoints related to pet owners.
- name: pet
description: Endpoints related to pets.
...
If you look at the paths, there is an addPetToOwner operation at /owners/{ownerId}/pets with the pet tag. All other Owner-related operations are tagged with owner.
paths:
/owners/{ownerId}:
get:
tags:
- owner
operationId: getOwner
/owners/{ownerId}/pets:
post:
tags:
- pet
operationId: addPetToOwner
summary: Adds a pet to an owner
description: Records the details of a new pet.
As a result, the Helidon generator places the addPetToOwner method in PetService and all other Owner-related methods in OwnerService. This causes a path collision, because the base path of OwnerService is /owners and it is supposed to handle all of its sub-paths too, yet the /owners/{ownerId}/pets handler lives in PetService.
I fixed it by changing the tag of the /owners/{ownerId}/pets path to owner.
Mapstruct
Mapstruct is a library for generating converters between Java beans. It's based on an annotation processor and does the generation at build time. It's used in the project to generate mappers between entities and DTOs. It can be configured for usage in Spring projects as well as in CDI-based Jakarta EE projects. I used the second option because the Helidon project is a CDI-based application. To do this, add the following to the Mapstruct Maven plugin configuration:
<compilerArgs>
<compilerArg>
-Amapstruct.defaultComponentModel=jakarta-cdi
</compilerArg>
...
</compilerArgs>
One of the issues I had to solve with Mapstruct was making it work with the DTOs generated by the Helidon Open API generator. As mentioned above, the Helidon generator doesn't generate setters for fields marked as read-only in the OpenAPI document; it generates a constructor with parameters through which the initial values of all read-only fields must be passed. This requires special treatment using object factories in Mapstruct. Technically, it requires creating a factory class which describes how objects are created using the non-default constructor.
The sample below contains two methods annotated with the @ObjectFactory annotation. These methods will be used to create the objects specified as their return types: createVetDto creates VetDto, and createOwnerDto creates OwnerDto. Constructor arguments are passed as method parameters.
@ApplicationScoped
public class DtoFactory {
@ObjectFactory
public VetDto createVetDto(Vet vet) {
return new VetDto(vet.getId());
}
@ObjectFactory
public OwnerDto createOwnerDto(Owner owner) {
return new OwnerDto(owner.getId(), new ArrayList<>());
}
...
}
The object factory must be specified in the uses parameter of the @Mapper annotation on the interface that creates that particular object, as shown in the snippet below.
@Mapper(uses = {DtoFactory.class, SpecialtyMapper.class})
public interface VetMapper {
VetDto toVetDto(Vet vet);
...
}
Testing
The original project was nicely covered with tests, so my task was to do the same for my port. I rewrote the original tests and added tests for mappers as well as integration tests. The tests, just like the REST controllers, needed to be reworked. The rework is smaller for the service tests and much bigger for the REST controller tests.
Service layer tests
All service layer tests are located in io.helidon.samples.petclinic.service. My project supports only one database rather than the three databases supported by the original project, which makes the task easier. The tests perform real database operations, allowing us to test all aspects of database access, including JPQL queries and CRUD operations. The tests look very similar to the original project's tests; I copy/pasted most of the code. The difference is that Spring supports transaction rollback after test method execution if the test method is annotated with @Transactional. It's very convenient because the database is always kept unchanged. Helidon doesn't have this feature, but I managed to simulate it by starting a user transaction before each method call and rolling it back afterwards. I collected all 'transactional' tests in the ClinicServiceTransactionalTest class. All other tests which don't change the data are in the ClinicServiceTest class.
REST controller tests
In the original REST controller tests, mocking is utilized. Spring provides a nice MockMVC testing framework, which is Spring-specific. Consequently, I opted for pure Mockito. Personally, I'm not a huge fan of mocking because sometimes tests using mocks end up testing the mocks themselves rather than the actual logic. However, this isn't true for all cases; mock tests run faster, and developers are accustomed to them.
The typical Helidon mocking test class is demonstrated in the code snippet below. I utilize the @HelidonTest annotation to initiate a CDI container, in conjunction with the Mockito extension @ExtendWith(MockitoExtension.class).
Bootstrapping Mockito is slightly tricky because my JAX-RS resource uses constructor injection, and I also need to mock the field-injected UriInfo class. Mockito poorly supports the use case where both constructor and field injection are used. Consequently, I had to manually create mocks for the constructor-injected classes and use declarative mocking with the @Mock annotation for the field-injected UriInfo.
Despite these challenges, the test methods look like typical Mockito tests.
@HelidonTest
@ExtendWith(MockitoExtension.class)
public class PetResourceTest {
ClinicService clinicService;
@Inject
PetMapper petMapper;
@Mock
UriInfo uriInfo;
@InjectMocks
PetResource petResource;
@BeforeEach
void setup() {
clinicService = Mockito.mock(ClinicService.class);
petResource = new PetResource(clinicService, petMapper);
MockitoAnnotations.openMocks(this);
}
@Test
void testAddPet() {
var petDto = createPetDto();
var owner = createOwner();
Mockito.when(uriInfo.getBaseUri())
.thenReturn(URI.create("http://localhost:9966/petclinic"));
Mockito.when(clinicService.findOwnerById(1))
.thenReturn(Optional.of(owner));
var response = petResource.addPet(petDto);
assertThat(response.getStatus(), is(201));
assertThat(response.getLocation().toString(),
equalTo("http://localhost:9966/petclinic/api/pets/1"));
var pet = (PetDto) response.getEntity();
assertThat(pet.getId(), equalTo(petDto.getId()));
assertThat(pet.getName(), equalTo(petDto.getName()));
assertThat(pet.getOwnerId(), equalTo(petDto.getOwnerId()));
}
...
}
Integration tests
I've decided to include integration tests in the project. You can find them in the io.helidon.samples.petclinic.integration package. Ultimately, it's the most robust way to test all application layers. I've utilized the Maven Failsafe plugin to execute the integration tests. Below is the failsafe plugin configuration:
<plugin>
<artifactId>maven-failsafe-plugin</artifactId>
<version>3.2.5</version>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
<goal>verify</goal>
</goals>
</execution>
</executions>
<configuration>
<classesDirectory>${project.build.directory}/classes</classesDirectory>
</configuration>
</plugin>
Please pay attention to the <classesDirectory>${project.build.directory}/classes</classesDirectory> configuration option. The plugin won't function properly without it.
The typical integration test is illustrated in the snippet below. I'm using the @HelidonTest annotation to initiate a CDI container and the Helidon web server, and I'm injecting a web target pointing to the running web server. This is a feature of the Helidon testing framework. In the test methods, I call a REST endpoint, retrieve the result, and verify its correctness. This tests the entire chain of layers from the REST resource to the database.
@HelidonTest
class PetResourceIT {
@Inject
private WebTarget target;
@Test
void testListPets() {
var pets = target
.path("/petclinic/api/pets")
.request()
.get(JsonArray.class);
assertThat(pets.size(), greaterThan(0));
assertThat(pets.getJsonObject(0).getInt("id"),
is(1));
assertThat(pets.getJsonObject(0).getString("name"),
equalTo("Jacka"));
}
...
}
You can execute all integration tests using the mvn integration-test command.
Summary
I composed my summary as a small FAQ.
Is it possible to migrate a Spring project to Helidon?
Definitely yes.
Is it difficult?
Typically no, but it depends on the project. I spent about a day migrating the database and service layers, about two days migrating the REST controllers, about a week migrating the tests, and more than a week writing this article. In the end, the testing and documentation work was more time-consuming than the development.
Is Helidon different from Spring?
Yes it is, but there are the same or similar components/frameworks/specs used in both, so if you know Spring, Helidon doesn't look like an alien, and vice versa.
What are the advantages of Helidon?
Helidon is based on Java 21 Virtual Threads and is very fast. It supports Jakarta EE and MicroProfile, so it’s a great choice if you are standards-minded.
Where can I find additional information about Helidon?
https://helidon.io and https://medium.com/helidon
If you’re thinking, “Hey, Dmitry! This is a brilliant article, I enjoyed reading it!” you can share a link to this article on social networks as an act of appreciation.
Thank you!
May 16, 2024
Back to the Future with Cross-Context Dispatch
by gregw at May 16, 2024 01:31 AM
Cross-Context Dispatch reintroduced to Jetty-12
With the release of Jetty 12.0.8, we’re excited to announce the (re)implementation of a somewhat maligned and deprecated feature: Cross-Context Dispatch. This feature, while having been part of the Servlet specification for many years, has seen varied levels of use and support. Its re-introduction in Jetty 12.0.8, however, marks a significant step forward in our commitment to supporting the diverse needs of our users, especially those with complex legacy and modern web applications.
Understanding Cross-Context Dispatch
Cross-Context Dispatch allows a web application to forward requests to or include responses from another web application within the same Jetty server. Although it has been available as part of the Servlet specification for an extended period, it was deemed optional with Servlet 6.0 of EE10, reflecting its status as a somewhat niche feature.
Initially, Jetty 12 moved away from supporting Cross-Context Dispatch, driven by a desire to simplify the server architecture amidst substantial changes, including support for multiple environments (EE8, EE9, and EE10). These updates mean Jetty can now deploy web applications using either the javax namespace (EE8) or the jakarta namespace (EE9 and EE10), all on top of the latest optimized Jetty core implementations of HTTP/1.1, HTTP/2, and HTTP/3.
Reintroducing Cross-Context Dispatch
The decision to reintegrate Cross-Context Dispatch in Jetty 12.0.8 was influenced significantly by the needs of our commercial clients, some of whom still leverage this feature in their legacy applications. Our commitment to supporting our clients' requirements, including the need to maintain and extend legacy systems, remains a top priority.
One of the standout features of the newly implemented Cross-Context Dispatch is its ability to bridge applications across different environments. This means a web application based on the javax namespace (EE8) can now dispatch requests to, or include responses from, a web application based on the jakarta namespace (EE9 or EE10). This functionality opens up new pathways for integrating legacy applications with newer, modern systems.
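For readers who haven't used the feature, here is a minimal, hedged sketch of what a cross-context dispatch looks like in plain Servlet API terms; the context path /modern-app and target path /hello are invented for illustration:
import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
// An EE8 (javax) servlet forwarding a request to another webapp deployed
// on the same server.
public class LegacyDispatchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // getContext() returns null unless the container permits
        // cross-context access to the target webapp.
        ServletContext other = getServletContext().getContext("/modern-app");
        RequestDispatcher dispatcher = other.getRequestDispatcher("/hello");
        dispatcher.forward(req, resp);
    }
}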
Looking Ahead
The reintroduction of Cross-Context Dispatch in Jetty 12.0.8 is more than just a nod to legacy systems; it can be used as a bridge to the future of Java web development. By allowing for seamless interactions between applications across different Servlet environments, Jetty-12 opens the possibility of incremental migration away from legacy web applications.
April 26, 2024
HTTP Patch with Jersey Client on JDK 16+
by Jan at April 26, 2024 11:26 AM
January 10, 2024
Monitoring Java Virtual Threads
by Jean-François James at January 10, 2024 05:14 PM
November 19, 2023
Coding Microservice From Scratch (Part 16) | JAX-RS Done Right! | Head Crashing Informatics 83
by Markus Karg at November 19, 2023 05:00 PM
Write a pure-Java microservice from scratch, without an application server or any third-party frameworks, tools, or IDE plugins — just using the JDK, Maven and JAX-RS aka Jakarta REST 3.1. This video series shows you the essential steps!
You asked why I am not simply using the Jakarta EE 10 Core API. There are many answers in this video!
If you like this video, please give it a thumbs up, share it, subscribe to my channel, or become my patreon https://www.patreon.com/mkarg. Thanks!
October 12, 2023
Moving from javax to jakarta namespace
by Jean-Louis Monteiro at October 12, 2023 02:32 PM
This blog aims at giving some pointers to address the challenge related to the switch from the `javax` to the `jakarta` namespace. This is one of the biggest changes in Java of the last 20 years. No doubt. The entire ecosystem is impacted: not only Java EE or Jakarta EE application servers, but also libraries of any kind (Jackson, CXF, Hibernate, and Spring, to name a few). For instance, it took Apache TomEE about a year to convert all the source code and dependencies to the new `jakarta` namespace.
This blog is written from the user perspective, because the shift from `javax` to `jakarta` is as impactful for application providers as it is for libraries or application servers. There have been a couple of attempts to study the impact and investigate possible paths to make the change as smooth as possible.
The problem is harder than it appears to be. The `javax` package is of course in the import section of a class, but it can appear in Strings as well if you use the Java Reflection API, for instance. Byte code tools like ASM make the problem more complex, as do service loader mechanisms and many more. We will see that there are many ways to approach the problem, using byte code or converting the sources directly, but none are perfect.
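As a hedged illustration (the lookup below is just an example), a converter that only rewrites import statements would miss a namespace reference hidden in a string literal:
public class ReflectiveLookup {
    public static void main(String[] args) throws Exception {
        // No `javax` import appears in this class, yet the string literal
        // still pins it to the old namespace; running it against a
        // jakarta-only classpath throws ClassNotFoundException.
        Class<?> servletClass = Class.forName("javax.servlet.http.HttpServlet");
        System.out.println("Loaded: " + servletClass.getName());
    }
}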
Bytecode enhancement approach
The first legitimate approach that comes to our mind is the byte code approach. The goal is to keep the `javax` namespace as much as possible and use bytecode enhancement to convert binaries.
Compile time
It is possible to do a post-treatment on the libraries and packages to transform archives so that they are converted to the `jakarta` namespace.
- https://maven.apache.org/plugins/maven-shade-plugin/[Maven Shade plugin]
The Maven Shade plugin has the ability to relocate packages. While its primary purpose isn't to move from the `javax` to the `jakarta` package, it is possible to use it to relocate small libraries when they aren't ready yet. We used this approach in TomEE itself and in third-party libraries such as Apache Johnzon (JSONB/P implementation).
Here is an example in TomEE where we use Maven Shade Plugin to transform the Apache ActiveMQ Client library https://github.com/apache/tomee/blob/main/deps/activemq-client-shade/pom.xml
This approach is not perfect, especially when you have a multi-module library. For instance, if you have a project with two modules where A depends on B, you can use the Shade plugin to convert the two modules and publish them using a classifier. The issue then is that when you need A, you have to exclude B so that you can include it manually with the right classifier.
We'd say it works fine, but only for simple cases, because it breaks the dependency management in Maven, especially with transitive dependencies. It also breaks IDE integration because sources and javadoc won't match.
- https://projects.eclipse.org/projects/technology.transformer[Eclipse Transformer]
The Eclipse Transformer is also a generic tool, but it’s been heavily developed for the `javax` to `jakarta` namespace change. It operates on resources such as
Simple resources:
- Java class files
- OSGi feature manifest files
- Properties files
- Service loader configuration files
- Text files (of several types: java source, XML, TLD, HTML, and JSP)
Container resources:
- Directories
- Java archives (JAR, WAR, RAR, and EAR files)
- ZIP archives
It can be configured using Java properties files to properly convert Java modules, classes, and test resources. This is the approach we used for Apache TomEE 9.0.0-M7 when we first tried to convert to `jakarta`. It had limitations, so we then had to find tricks to solve issues. As it was converting the final distribution and not the individual artifacts, it was impossible for users to use Arquillian or the Maven plugin; they were not converted.
- https://github.com/apache/tomcat-jakartaee-migration[Apache Tomcat Migration tool]
This tool can operate on a directory or an archive (zip, ear, jar, war). It can quite easily migrate an application based on the set of specifications supported in Tomcat and a few more. It has the notion of profiles, so you can ask it to convert more.
You can run it using the ANT task (within Maven or not), and there is also a command line interface to run it easily.
Deploy time
When using an application server, it is sometimes possible to step into the deployment process and convert the binaries prior to their deployment.
- https://github.com/apache/tomcat-jakartaee-migration[Apache Tomcat/TomEE migration tool]
Mind that, by default, the tool converts only what's supported by Apache Tomcat and a couple of other APIs. It does not convert all specifications supported in TomEE, like JAX-RS for example. And Tomcat does not yet provide any way to configure it.
Runtime
We haven't seen any working solution in this area. Of course, we could imagine a Java agent approach that converts the bytecode right when it gets loaded by the JVM. The startup time is seriously impacted, and it has to be done every time the JVM restarts or loads a class in a classloader. Remember that a class can be loaded multiple times in different classloaders.
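A hedged sketch of what such an agent could look like (the transformer body is left as a stub, since a real javax-to-jakarta rewrite would need a bytecode library such as ASM):
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;
public class RelocatingAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Rewrite javax.* references to jakarta.* here, for every
                // class, every time a classloader loads it.
                // Returning null leaves the class unchanged.
                return null;
            }
        });
    }
}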
Source code enhancement approach
This may sound like the most impactful approach, but it is probably also the most secure one. We also strongly believe that embracing the change sooner is preferable to later. As mentioned, this is one of the biggest breaking changes in Java of the last 20 years. Since Java EE moved to Eclipse to become Jakarta, we have noticed a change in the release cadence: releases are now more frequent, and more changes are going to happen. Killing the technical debt as soon as possible is probably best when the change is so impacting.
There are a couple of tools we tried. There are probably more in the ecosystem, and also some in-house developments.
[IMPORTANT]
This is usually a one-shot operation. It won't be perfect, and no doubt it will require adjustments, because there is no perfect tool that can handle all cases.
IntelliJ IDEA
IntelliJ IDEA added a refactoring capability to its IDE to convert sources to the new `jakarta` namespace. I haven't tested it myself, but it may help with the first big step if you don't really master the scripting approach below.
Scripting approach
For simple cases (and we used this approach to do most of the conversion in TomEE), you can create your own simple tool to convert sources. For instance, SmallRye does this with their MicroProfile implementations. Here is an example: https://github.com/smallrye/smallrye-config/blob/main/to-jakarta.sh
Using basic Linux commands, it converts from the `javax` to the `jakarta` namespace, and the result is then pushed to a dedicated branch. The benefit is that they have two source trees with different artifacts, so the dependency management isn't broken.
One source tree is the reference and they add to the script the necessary commands to convert additional things on demand.
- https://projects.eclipse.org/projects/technology.transformer[Eclipse Transformer]
Because the Eclipse Transformer can operate on text files, it can be easily used to migrate the sources from `javax` to `jakarta` namespace.
Producing converted artifacts for applications to consume
Whether you are working on open source or not, someone will consume your artifacts. If you are using Maven, for example, you may ask yourself which option is the best, especially if you maintain the two branches, `javax` and `jakarta`.
[NOTE]
It does not matter if you use the bytecode or the source code approach.
Updating version or artifactId
This is probably the most practical solution. Some projects, like Arquillian, decided to go with a different artifact name (a -jakarta suffix) because the artifact is the same and solves the same problem, so why bring a technical concern into the name? I'm more in favor of using the version to mark the namespace change. It is in effect a major API change, which I'd rather emphasize with a major version update.
[IMPORTANT]
Mind that this only works if the `javax` and `jakarta` APIs are otherwise backward compatible. If they aren't, it won't work.
Using Maven classifiers
This is not an option we would recommend. Unfortunately, some of our dependencies use this approach and it has many drawbacks. It's fine for a quick test but, as mentioned previously, it badly impacts how Maven works. If you pull a transformed artifact, you may get a transitive, non-transformed dependency. This is the case for multi-module projects as well.
Another painful side effect is that javadoc and sources are still linked to the original artifact, so you will have a hard time debugging in the IDE.
Conclusion
We tried the bytecode approach ourselves in TomEE with the hope that we could avoid maintaining two source trees, one for the `javax` and the other for the `jakarta` namespace. Unfortunately, as we have seen, the risk is too high and there are too many edge cases not covered. Apache TomEE runs about 60k tests (including the TCK) and our confidence wasn't good enough. Even though the approach has some benefits and can work for simple use cases, like converting a small utility tool, in our opinion it does not fit real applications.
The post Moving from javax to jakarta namespace appeared first on Tomitribe.
October 02, 2023
Choosing Connector in Jersey
by Jan at October 02, 2023 01:49 PM
September 27, 2023
Navigating the Shift From Drupal 7 to Drupal 9/10 at the Eclipse Foundation
September 27, 2023 02:30 PM
We're currently in the middle of a substantial transition as we migrate mission-critical websites from Drupal 7 to Drupal 9, with our sights set on Drupal 10. This shift has been motivated by several factors, including the announcement of the Drupal 7 end-of-life, now scheduled for January 5, 2025, and our goal of reducing the technical debt we accrued over the last decade.
To provide some context, we’re migrating a total of six key websites:
- projects.eclipse.org: The Eclipse Project Management Infrastructure (PMI) consolidates project management activities into a single consistent location and experience.
- accounts.eclipse.org: The Eclipse Account website is where our users go to manage their profiles and sign essential agreements, like the Eclipse Contributor Agreement (ECA) and the Eclipse Individual Committer Agreement (ICA).
- blogs.eclipse.org: Our official blogging platform for Foundation staff.
- newsroom.eclipse.org: The Eclipse Newsroom is our content management system for news, events, newsletters, and valuable resources like case studies, market reports, and whitepapers.
- marketplace.eclipse.org: The Eclipse Marketplace empowers users to discover solutions that enhance their Eclipse IDE.
- eclipse.org/downloads/packages: The Eclipse Packaging website is our platform for managing the publication of download links for the Eclipse Installer and Eclipse IDE Packages on our websites.
The Progress So Far
We’ve made substantial progress this year with our migration efforts. The team successfully completed the migration of Eclipse Blogs and Eclipse Newsroom. We are also in the final stages of development with the Eclipse Marketplace, which is currently scheduled for a production release on October 25, 2023. Next year, we’ll focus our attention on completing the migration of our more substantial sites, such as Eclipse PMI, Eclipse Accounts, and Eclipse Packaging.
More Than a Simple Migration: Decoupling Drupal APIs With Quarkus
This initiative isn’t just about moving from one version of Drupal to another. Simultaneously, we’re undertaking the task of decoupling essential APIs from Drupal in the hope that future migration or upgrade won’t impact as many core services at the same time. For this purpose, we’ve chosen Quarkus as our preferred platform. In Q3 2023, the team successfully migrated the GitHub ECA Validation Service and the Open-VSX Publisher Agreement Service from Drupal to Quarkus. In Q4 2023, we’re planning to continue down that path and deploy a Quarkus implementation of several critical APIs such as:
- Account Profile API: This API offers user information, covering ECA status and profile details like bios.
- User Deletion API: This API monitors user deletion requests ensuring the right to be forgotten.
- Committer Paperwork API: This API keeps tabs on the status of ongoing committer paperwork records.
- Eclipse USS: The Eclipse User Storage Service (USS) allows Eclipse projects to store user-specific project information on our servers.
Conclusion: A Forward-Looking Transition
Our migration journey from Drupal 7 to Drupal 9, with plans for Drupal 10, represents our commitment to providing a secure, efficient, and user-friendly online experience for our community. We are excited about the possibilities this migration will unlock for us, advancing us toward a more modern web stack.
Finally, I’d like to take this moment to highlight that this project is a monumental team effort, thanks to the exceptional contributions of Eric Poirier and Théodore Biadala, our Drupal developers; Martin Lowe and Zachary Sabourin, our Java developers implementing the API decoupling objective; and Frederic Gurr, whose support has been instrumental in deploying our new apps on the Eclipse Infrastructure.
September 20, 2023
New Jetty 12 Maven Coordinates
by Joakim Erdfelt at September 20, 2023 09:42 PM
Now that Jetty 12.0.1 is released to Maven Central, we’ve started to get a few questions about where some artifacts are, or when we intend to release them (as folks cannot find them).
Things have changed with Jetty, starting with the 12.0.0 release.
First, our historical versioning of <servlet_support>.<major>.<minor> is no longer being used. With Jetty 12, we are now using a more traditional <major>.<minor>.<patch> versioning scheme for the first time.
Also new in Jetty 12 is that the Servlet layer has been separated from the Jetty Core layer.
The Servlet layer has been moved to the new Environments concept introduced with Jetty 12.
| Environment | Jakarta EE | Servlet | Jakarta Namespace | Jetty GroupID |
| --- | --- | --- | --- | --- |
| ee8 | EE8 | 4 | javax.servlet | org.eclipse.jetty.ee8 |
| ee9 | EE9 | 5 | jakarta.servlet | org.eclipse.jetty.ee9 |
| ee10 | EE10 | 6 | jakarta.servlet | org.eclipse.jetty.ee10 |
This means the old Servlet-specific artifacts have been moved to environment-specific locations, both in terms of Java namespace and their Maven coordinates.
Example:
Jetty 11 – Using Servlet 5
Maven Coord: org.eclipse.jetty:jetty-servlet
Java Class: org.eclipse.jetty.servlet.ServletContextHandler
Jetty 12 – Using Servlet 6
Maven Coord: org.eclipse.jetty.ee10:jetty-ee10-servlet
Java Class: org.eclipse.jetty.ee10.servlet.ServletContextHandler
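As a hedged sketch (not from the original post), the new coordinates and package look like this in a minimal embedded Jetty program using the ee10 environment; it assumes the org.eclipse.jetty.ee10:jetty-ee10-servlet artifact is on the classpath:
import org.eclipse.jetty.ee10.servlet.ServletContextHandler;
import org.eclipse.jetty.server.Server;
public class EmbeddedJetty12 {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        // The Servlet layer now lives in the environment-specific package.
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        server.setHandler(context);
        server.start();
        server.join();
    }
}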
We have a migration document which lists all of the migrated locations from Jetty 11 to Jetty 12.
These new versioning and environment features built into Jetty mean that new major versions of Jetty will not be as common as they have been in the past.
Running MicroProfile reactive with Helidon Nima and Virtual Threads
by Jean-François James at September 20, 2023 05:29 PM
September 19, 2023
New Survey: How Do Developers Feel About Enterprise Java in 2023?
by Mike Milinkovich at September 19, 2023 01:00 PM
The results of the 2023 Jakarta EE Developer Survey are now available! For the sixth year in a row, we’ve reached out to the enterprise Java community to ask about their preferences and priorities for cloud native Java architectures, technologies, and tools, their perceptions of the cloud native application industry, and more.
From these results, it is clear that open source cloud native Java is on the rise following the release of Jakarta EE 10. The number of respondents who have migrated to Jakarta EE continues to grow, with 60% saying they have already migrated, or plan to do so within the next 6-24 months. These results indicate steady growth in the use of Jakarta EE and a growing interest in cloud native Java overall.
When comparing the survey results to 2022, usage of Jakarta EE to build cloud native applications has remained steady at 53%. Spring/Spring Boot, which relies on some Jakarta EE specifications, continues to be the leading Java framework in this category, with usage growing from 57% to 66%.
Since the September 2022 release, Jakarta EE 10 usage has grown to 17% among survey respondents. This community-driven release is attracting a growing number of application developers to adopt Jakarta EE 10 by offering new features and updates to Jakarta EE. An equal number of developers are running Jakarta EE 9 or 9.1 in production, while 28% are running Jakarta EE 8. That means the increase we are seeing in the migration to Jakarta EE is mostly due to the adoption of Jakarta EE 10, as compared to Jakarta EE 9/9.1 or Jakarta EE 8.
The Jakarta EE Developer Survey also gives us a chance to get valuable feedback on features from the latest Jakarta EE release, as well as what direction the project should take in the future.
Respondents are most excited about Jakarta EE Core Profile, which was introduced in the Jakarta EE 10 release as a subset of Web Profile specifications designed for microservices and ahead-of-time compilation. When it comes to future releases, the community is prioritizing better support for Kubernetes and microservices, as well as adapting Java SE innovations to Jakarta EE, a priority that has grown in popularity since 2022. This is a good indicator that the Jakarta EE 11 release plan is headed in the right direction by adopting new Java SE 21 features.
2,203 developers, architects, and other tech professionals participated in the survey, a 53% increase from last year. This year’s survey was also available in Chinese, Japanese, Spanish & Portuguese, making it easier for Java enthusiasts around the world to share their perspectives. Participation from the Chinese Jakarta EE community was particularly strong, with over 27% of the responses coming from China. By hearing from more people in the enterprise Java space, we’re able to get a clearer picture of what challenges developers are facing, what they’re looking for, and what technologies they are using. Thank you to everyone who participated!
Learn More
We encourage you to download the report for a complete look at the enterprise Java ecosystem.
If you’d like to get more information about Jakarta EE specifications and our open source community, sign up for one of our mailing lists or join the conversation on Slack. If you’d like to participate in the Jakarta EE community, learn how to get started on our website.
August 30, 2023
Best Practices for Effective Usage of Contexts Dependency Injection (CDI) in Java Applications
by Rhuan Henrique Rocha at August 30, 2023 10:55 PM
Looking at the web, we don’t see many articles talking about best practices for Contexts and Dependency Injection. Hence, I have decided to discuss the utilization of Contexts and Dependency Injection (CDI) using best practices, providing a comprehensive guide on its implementation.
CDI is a Jakarta specification in the Java ecosystem that allows developers to use dependency injection, manage contexts, and inject components in an easier way. The article https://www.baeldung.com/java-ee-cdi defines CDI as follows:
CDI turns DI into a no-brainer process, boiled down to just decorating the service classes with a few simple annotations, and defining the corresponding injection points in the client classes.
If you want to learn the CDI concepts you can read Baeldung’s post and Otavio Santana’s post. Here, in this post, we will focus on the best practices topic.
In fact, CDI is a powerful framework that enables Dependency Injection (DI) and Inversion of Control (IoC). However, we have one question here: how tightly do we want our application to be coupled to the framework? Note that I’m not saying you cannot couple your application to a framework, but you should think about it: think about the coupling level, and think about the costs and tradeoffs. For me, coupling an application to a framework is not wrong, but doing it without thinking about the coupling level, costs, and tradeoffs is.
It is impossible to add a framework to your application without some minimal coupling. Even if your application does not have coupling expressed in the code, you probably have behavioral coupling; that is, a behavior in your application depends on a framework’s behavior, and in some cases you cannot guarantee that another framework will provide similar behavior if things change.
Best Practices for Injecting Dependencies
When writing code in Java, we often create classes that rely on external dependencies to perform their tasks. To achieve this with CDI, we employ the @Inject annotation, which allows us to inject these dependencies. However, it’s essential to be mindful of whether we are making the class overly dependent on CDI for its functionality, as that may limit its usability without CDI. Hence, it’s crucial to carefully consider the tightness of this dependency. As an illustration, let’s examine the code snippet below, where a class is tightly coupled to CDI in order to carry out its functionality.
public class ImageRepository {
@Inject
private StorageProvider storageProvider;
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
As you can see, the ImageRepository class has a dependency on StorageProvider, which is injected via the CDI annotation. However, the storageProvider variable is private, and we have no setter method or constructor that allows us to pass this dependency in. This means the class cannot work without a CDI context; that is, ImageRepository is tightly coupled to CDI.
This coupling doesn’t provide any benefit to the application; it only causes harm, both to the application itself and potentially to the testing of this class.
Look at the code, refactored to reduce the coupling to CDI.
public class ImageRepository implements Serializable {
private StorageProvider storageProvider;
@Inject
public ImageRepository(StorageProvider storageProvider){
this.storageProvider = storageProvider;
}
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
As you can see, the ImageRepository class has a constructor that receives the StorageProvider as an argument. This approach follows what is said in the Clean Code book.
“True Dependency Injection goes one step further. The class takes no direct steps to resolve its dependencies; it is completely passive. Instead, it provides setter methods or constructor arguments (or both) that are used to inject the dependencies.”
(from “Clean Code: A Handbook of Agile Software Craftsmanship” by Martin Robert C.)
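A practical payoff of this refactoring is plain unit testing without a CDI container. A hypothetical JUnit test could look like the sketch below, with a handwritten fake standing in for a real StorageProvider:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.File;
import org.junit.jupiter.api.Test;

public class ImageRepositoryTest {

    // Hypothetical fake implementing the StorageProvider interface from the example
    static class FakeStorageProvider implements StorageProvider {
        File saved;

        @Override
        public void save(File image) {
            this.saved = image;
        }
    }

    @Test
    public void savesImageToStorage() {
        // No CDI container needed: the dependency is passed in directly
        FakeStorageProvider fake = new FakeStorageProvider();
        ImageRepository repository = new ImageRepository(fake);

        File image = new File("cat.png");
        repository.saveImage(image);

        assertEquals(image, fake.saved);
    }
}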
Without a constructor or a setter method, the injection depends on CDI. However, we still have one question about this class: it has a CDI annotation and depends on CDI to compile. I’m not saying this is always a problem, but it can be, especially if you are writing a framework. Coupling a framework to another framework can be a problem when you want your framework to be usable alongside another, mutually exclusive one. In general, frameworks should avoid it. Thus, how can we fully decouple the ImageRepository class from CDI?
CDI Producer Method
A CDI producer is a source of objects that can be injected by CDI. It is like a factory for a type of object. Look at the code below:
public class ImageRepositoryProducer {
@Produces
public ImageRepository createImageRepository(){
StorageProvider storageProvider = CDI.current().select(StorageProvider.class).get();
return new ImageRepository(storageProvider);
}
}
Please note that we are constructing just one object; the StorageProvider’s object is resolved by CDI. You should avoid constructing more than one object within a producer method, as this interlinks the construction of these objects and may lead to complications if you intend to assign distinct scopes to them. You can create a separate producer method to produce the StorageProvider, as shown below.
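A sketch of what such a separate producer could look like (FileSystemStorageProvider is a hypothetical implementation used only for illustration):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

public class StorageProviderProducer {

    @Produces
    @ApplicationScoped
    public StorageProvider createStorageProvider() {
        // Each producer builds exactly one object, so scopes stay independent
        return new FileSystemStorageProvider();
    }
}

With a producer like this in place, the ImageRepositoryProducer could also receive the StorageProvider as a producer method parameter instead of looking it up via CDI.current().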
This is the ImageRepository class refactored:
public class ImageRepository implements Serializable {
private StorageProvider storageProvider;
public ImageRepository(StorageProvider storageProvider){
this.storageProvider = storageProvider;
}
public void saveImage(File image){
//Validate the file to check if it is an image.
//Apply some logic if needed
storageProvider.save(image);
}
}
Please note that the ImageRepository class now knows nothing about CDI and is fully decoupled from it. The CDI-related code lives inside ImageRepositoryProducer, which can be extracted to another module if needed.
CDI Interceptor
The CDI Interceptor is a very cool feature of CDI that provides a nice CDI-based way to handle cross-cutting tasks (such as auditing). Here is a short definition from my book:
“A CDI interceptor is a class that wraps the call to a method — this method is called target method — that runs its logic and proceeds the call either to the next CDI interceptor if it exists, or the target method.”
(from “Jakarta EE for Java Developers” by Rhuan Rocha.)
The purpose of this article is not to discuss what a CDI interceptor is, but to discuss CDI best practices. So if you want to read more about CDI interceptor, check out the book Jakarta EE for Java Developers.
As said, the CDI interceptor is very interesting. I am quite fond of this feature and have incorporated it into numerous projects. However, using this feature comes with certain trade-offs for the application.
When you use a CDI interceptor, you couple the class to CDI, because you must annotate the class with a custom annotation that is an interceptor binding. Look at the example below, from the Jakarta EE for Java Developers book:
@ApplicationScoped
public class SecuredBean{
@Authentication
public String generateText(String username) throws AutenticationException{
return "Welcome "+username;
}
}
As you can see, we must define a scope, as the class should be a bean managed by CDI, and we must annotate the class with the interceptor binding. Hence, if you eliminate CDI from your application, the interceptor’s logic won’t execute and the class won’t compile. With this, your application has a behavioral coupling as well as a compile-time dependency on the CDI library jar.
As said, this is not necessarily bad; however, you should consider whether it is a problem in your context.
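For context, here is a minimal sketch of what the @Authentication interceptor binding and a matching interceptor could look like; the names and the logic are illustrative, not taken from the book:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import jakarta.annotation.Priority;
import jakarta.interceptor.AroundInvoke;
import jakarta.interceptor.Interceptor;
import jakarta.interceptor.InterceptorBinding;
import jakarta.interceptor.InvocationContext;

@InterceptorBinding
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@interface Authentication {
}

// @Priority enables the interceptor without a beans.xml entry
@Authentication
@Interceptor
@Priority(Interceptor.Priority.APPLICATION)
class AuthenticationInterceptor {

    @AroundInvoke
    public Object authenticate(InvocationContext context) throws Exception {
        // Cross-cutting logic runs here, then the call proceeds to the
        // next interceptor in the chain or to the target method
        return context.proceed();
    }
}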
CDI Event
The CDI Event is a great feature within the CDI framework that I have employed extensively in various applications. This functionality provides an implementation of the Observer Pattern, enabling us to emit events that are then picked up by observers that execute tasks asynchronously. However, if we add CDI code inside our class to emit events, we couple the class to CDI. Again, this is not an error, but you should be sure it is not a problem for your solution. Look at the example below.
import jakarta.enterprise.event.Event;
public class User{
private Event<Email> emailEvent;
public User(Event<Email> emailEvent){
this.emailEvent = emailEvent;
}
public void register(){
//logic
emailEvent.fireAsync(Email.of(from, to, subject, content));
}
}
Note that we are receiving the Event class, which comes from CDI, to emit the event. This means the class is coupled to CDI and depends on it to work. One way to avoid this is to create your own class to emit the event, abstracting away which mechanism (CDI or another) actually emits it. Look at the example below.
import net.rhuan.example.EventEmitter;
public class User{
private EventEmitter<Email> emailEventEmitter;
public User(EventEmitter<Email> emailEventEmitter){
this.emailEventEmitter = emailEventEmitter;
}
public void register(){
//logic
emailEventEmitter.emit(Email.of(from, to, subject, content));
}
}
Now, your class is agnostic about what emits the event. You can use CDI or anything else, according to the EventEmitter implementation.
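A CDI-backed implementation of this abstraction could look like the sketch below, keeping all CDI-specific code in a single adapter class that can live in a separate module:

import jakarta.enterprise.event.Event;
import jakarta.inject.Inject;

// The abstraction used by the User class above
interface EventEmitter<T> {
    void emit(T payload);
}

// The only class that touches CDI; User stays framework-agnostic
class CdiEmailEventEmitter implements EventEmitter<Email> {

    private final Event<Email> emailEvent;

    @Inject
    public CdiEmailEventEmitter(Event<Email> emailEvent) {
        this.emailEvent = emailEvent;
    }

    @Override
    public void emit(Email email) {
        // Delegates to the CDI event mechanism; swapping in another
        // mechanism would not require touching the User class
        emailEvent.fireAsync(email);
    }
}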
Conclusion
CDI is an amazing Jakarta EE specification, widely used in many Java frameworks and applications. Carefully determining the degree of integration between our application and the framework is extremely important. This intentional decision helps proactively mitigate challenges during the solution’s evolution, especially when you are developing a framework.
If you have a question or want to share your thoughts, feel free to add comments or send me messages about it.
March 29, 2023
The Jakarta EE 2023 Developer Survey is now open!
by Tatjana Obradovic at March 29, 2023 09:24 PM
It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25.
I would like to invite you to take this year’s six-minute survey and share your thoughts and ideas for future Jakarta EE releases, helping us discover the uptake of the latest Jakarta EE versions and the trends that inform industry decision-makers.
Please share the survey link and reach out to your contacts: Java developers, architects, and stakeholders in the enterprise Java ecosystem, and invite them to participate in the 2023 Jakarta EE Developer Survey!
February 16, 2023
What is Apache Camel and how does it work?
by Rhuan Henrique Rocha at February 16, 2023 11:14 PM
In this post, I will talk about what Apache Camel is. It is a brief introduction before I start posting practical content. So, let’s understand what this framework is.
Apache Camel is an open source Java integration framework that allows different applications to communicate with each other efficiently. It provides a platform for integrating heterogeneous software systems. Camel is designed to make application integration easy, simplifying the complexity of communication between different systems.
Apache Camel is written in Java and can be run on a variety of platforms, including Jakarta EE application servers and OSGi-based application containers, and it can run inside cloud environments using Spring Boot or Quarkus. Camel also supports a wide range of network protocols and message formats, including HTTP, FTP, SMTP, JMS, SOAP, XML, and JSON.
Camel uses Enterprise Integration Patterns (EIP) to define the different forms of integration. EIP is a set of design patterns commonly used in system integration. Camel implements many of these patterns, making it a powerful tool for building integration solutions.
Additionally, Camel has a set of components that allow it to integrate with different systems. The components can be used to access different resources, such as databases, web services, and message systems. Camel also supports content-based routing, which means it can route messages based on their content.
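To make the content-based routing idea concrete, here is a minimal sketch in Camel’s Java DSL; the endpoints and the XPath expression are made up for illustration:

import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Route incoming orders to different queues based on message content
        from("file:orders/inbox")
            .choice()
                .when(xpath("/order[@priority = 'high']"))
                    .to("jms:queue:highPriorityOrders")
                .otherwise()
                    .to("jms:queue:standardOrders");
    }
}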
Camel is highly configurable and extensible, allowing developers to customize its functionality to their needs. It also supports the creation of integration routes at runtime, which means that routes can be defined and changed without the need to restart the system.
In summary, Camel is a powerful and flexible tool for software system integration. It allows different applications to communicate efficiently and effectively, simplifying the complexity of system integration. Camel is a reliable and widely used framework that can help improve the efficiency and effectiveness of system integration in a variety of environments.
If you want to start using this framework, you can access the documentation on the site. This is my first post about Apache Camel; I will post more practical content about this amazing framework.
November 19, 2022
Jakarta EE and MicroProfile at EclipseCon Community Day 2022
by Reza Rahman at November 19, 2022 10:39 PM
Community Day at EclipseCon 2022 was held in person on Monday, October 24 in Ludwigsburg, Germany. Community Day has always been a great event for Eclipse working groups and project teams, including Jakarta EE/MicroProfile. This year was no exception. A number of great sessions were delivered by prominent folks in the community. The following are the details, including session materials. The agenda can still be found here. All the materials can be found here.
Jakarta EE Community State of the Union
The first session of the day was a Jakarta EE community state of the union delivered by Tanja Obradovic, Ivar Grimstad and Shabnam Mayel. The session included a quick overview of Jakarta EE releases, how to get involved in the work of producing the specifications, a recap of the important Jakarta EE 10 release, as well as a view of what’s to come in Jakarta EE 11. The slides are embedded below and linked here.
Jakarta Concurrency – What’s Next
Payara CEO Steve Millidge covered Jakarta Concurrency. He discussed the value proposition of Jakarta Concurrency, the innovations delivered in Jakarta EE 10 (including CDI based @Asynchronous, @ManagedExecutorDefinition, etc) and the possibilities for the future (including CDI based @Schedule, @Lock, @MaxConcurrency, etc). The slides are embedded below and linked here. There are some excellent code examples included.
Jakarta Security – What’s Next
Werner Keil covered Jakarta Security. He discussed what’s already done in Jakarta EE 10 (including OpenID Connect support) and everything that’s in the works for Jakarta EE 11 (including CDI based @RolesAllowed). The slides are embedded below and linked here.
Jakarta Data – What’s Coming
IBM’s Emily Jiang kindly covered Jakarta Data. This is a brand new specification aimed towards Jakarta EE 11. It is a higher level data access abstraction similar to Spring Data and DeltaSpike Data. It encompasses both Jakarta Persistence (JPA) and Jakarta NoSQL. The slides are embedded below and linked here. There are some excellent code examples included.
MicroProfile Community State of the Union
Emily also graciously delivered a MicroProfile state of the union. She covered what was delivered in MicroProfile 5, including alignment with Jakarta EE 9.1. She also discussed what’s coming soon in MicroProfile 6 and beyond, including very clear alignment with the Jakarta EE 10 Core Profile. The slides are embedded below and linked here. There are some excellent technical details included.
MicroProfile Telemetry – What’s Coming
Red Hat’s Martin Stefanko covered MicroProfile Telemetry. Telemetry is a brand new specification being included in MicroProfile 6. The specification essentially supersedes MicroProfile Tracing and possibly MicroProfile Metrics too in the near future. This is because the OpenTracing and OpenCensus projects merged into a single project called OpenTelemetry. OpenTelemetry is now the de facto standard defining how to collect, process, and export telemetry data in microservices. It makes sense that MicroProfile moves forward with supporting OpenTelemetry. The slides are embedded below and linked here. There are some excellent technical details and code examples included.
See You There Next Time?
Overall, it was an honor to organize the Jakarta EE/MicroProfile agenda at EclipseCon Community Day one more time. All speakers and attendees should be thanked. Perhaps we will see you at Community Day next time? It is a great way to hear from some of the key people driving Jakarta EE and MicroProfile. You can attend just Community Day even if you don’t attend EclipseCon. The fee is modest and includes lunch as well as casual networking.
November 04, 2022
JFall 2022
November 04, 2022 09:56 AM
An impression of JFall by yours truly.
keynote
Sold out!
Packed room!
A very nice first keynote by Saby Sengupta about the path to transform.
He is a really nice storyteller. He had us going.
Dutch people, wooden shoes, wooden hat, would not listen
- Saby
lol
Get the answers to three why questions. If the answers stop after the first why, it may not be a good idea.
This great first keynote was followed by the very well-known Venkat Subramaniam with The Art of Simplicity.
The question is not "What can we add?" but "What can we remove?"
Simple fails less
Simple is elegant
All in all a great keynote! Loved it.
Design Patterns in the light of Lambdas
By Venkat Subramaniam
The GOF are kind of the grand parents of our industry. The worst thing they have done is write the damn book.
— Venkat
The quote is in the context that writing down grandma’s fantastic recipe does not work, as it is based on grandma’s skill and not the exact amounts of the ingredients.
The cleanup is the responsibility of the Resource class. Much better than asking developers to take care of it. It will be forgotten!
The more powerful a language becomes the less we need to talk about patterns. Patterns become practices we use. We do not need to put in extra effort.
I love his way of presenting, but this is one of those times - I guess - that he is hampered by his own success. The talk did not go deep into the material; he covered just about 5 not-too-difficult subjects. I missed his speed and depth.
Still a great talk though.
lunch
Was actually very nice!
NLJUG update keynote
The Java Magazine was mentioned; we (as editors) had to shout for that!
Please contact me (@ivonet) if you have ambitions to either be an author or maybe even as a fellow editor of the magazine. We are searching for a new Editor now.
Then the voting for the Innovation Awards.
I kinda missed the next keynote by ING because I was playing with a Rubik’s cube, and I did not really like his talk.
jakarta EE 10 platform
by Ivar Grimstad
Ivar talks about the specification of Jakarta EE.
To create a lite version of CDI, it is possible to move work to build time, which facilitates other tools like GraalVM and Quarkus.
He gives nice demos on how to migrate code to work in the jakarta namespace.
To start your own Jakarta EE application, just go to start.jakarta.ee and follow the very simple UI instructions.
I am very proud to be the creator of that UI. Thanks, Ivar for giving me a shoutout for that during your talk. More cool stuff will follow soon.
Be prepared to do some namespace changes when moving from Java EE 8 to Jakarta EE.
All slides here
conclusion
I had a fantastic day. For me, it is mainly about the community and seeing all the people I know in the community. I totally love the vibe of the conference and I think it is one of the best organized venues.
See you at JSpring.
Ivo.
September 26, 2022
Survey Says: Confidence Continues to Grow in the Jakarta EE Ecosystem
by Mike Milinkovich at September 26, 2022 01:00 PM
The results of the 2022 Jakarta EE Developer Survey are very telling about the current state of the enterprise Java developer community. They point to increased confidence about Jakarta EE and highlight how far Jakarta EE has grown over the past few years.
Strong Turnout Helps Drive Future of Jakarta EE
The fifth annual survey is one of the longest running and best-respected surveys of its kind in the industry. This year’s turnout was fantastic: From March 9 to May 6, a total of 1,439 developers responded.
This is great for two reasons. First, obviously, these results help inform the Java ecosystem stakeholders about the requirements, priorities and perceptions of enterprise developer communities. The more people we hear from, the better picture we get of what the community wants and needs. That makes it much easier for us to make sure the work we’re doing is aligned with what our community is looking for.
The other reason is that it helps us better understand how the cloud native Java world is progressing. By looking at what community members are using and adopting, what their top goals are and what their plans are for adoption, we can better understand not only what we should be working on today, but tomorrow and for the future of Jakarta EE.
Findings Indicate Growing Adoption and Rising Expectations
Some of the survey’s key findings include:
- Jakarta EE is the basis for the top frameworks used for building cloud native applications.
- The top three frameworks for building cloud native applications, respectively, are Spring/Spring Boot, Jakarta EE and MicroProfile, though Spring/Spring Boot lost ground this past year. It’s important to note that Spring/Spring Boot relies on Jakarta EE developments for its operation and is not in competition with Jakarta EE. Both are critical ingredients of a healthy enterprise Java ecosystem.
- Jakarta EE 9/9.1 usage increased year-over-year by 5%.
- Java EE 8, Jakarta EE 8, and Jakarta EE 9/9.1 hit the mainstream with 81% adoption.
- While over a third of respondents planned to adopt, or already had adopted Jakarta EE 9/9.1, nearly a fifth of respondents plan to skip Jakarta EE 9/9.1 altogether and adopt Jakarta EE 10 once it becomes available.
- Most respondents said they have migrated to Jakarta EE already or planned to do so within the next 6-24 months.
- The top three community priorities for Jakarta EE are:
- Native integration with Kubernetes (same as last year)
- Better support for microservices (same as last year)
- Faster support from existing Java EE/Jakarta EE or cloud vendors (new this year)
Two of the results, when combined, highlight something interesting:
- 19% of respondents planned to skip Jakarta EE 9/9.1 and go straight to 10 once it’s available
- The new community priority — faster support from existing Java EE/Jakarta EE or cloud vendors — really shows the growing confidence the community has in the ecosystem
After all, you wouldn’t wait for a later version and skip the one that’s already available, unless you were confident that the newer version was not only going to be coming out on a relatively reliable timeline, but that it was going to be an improvement.
And this growing hunger from the community for faster support really speaks to how far the ecosystem has come. When we release a new version, like when we released Jakarta EE 9, it takes some time for the technology implementers to build the product based on those standards or specifications. The community is becoming more vocal in requesting those implementers to be more agile and quickly pick up the new versions. That’s definitely an indication that developer demand for Jakarta EE products is growing in a healthy way.
Learn More
If you’d like to learn more about the project, there are several Jakarta EE mailing lists to sign up for. You can also join the conversation on Slack. And if you want to get involved, start by choosing a project, sign up for its mailing list and start communicating with the team.
September 22, 2022
Jakarta EE 10 has Landed!
by javaeeguardian at September 22, 2022 03:48 PM
The Jakarta EE Ambassadors are thrilled to see Jakarta EE 10 being released! This is a milestone release that bears great significance to the Java ecosystem. Jakarta EE 8 and Jakarta EE 9.x were important releases in their own right in the process of transitioning Java EE to a truly open environment in the Eclipse Foundation. However, these releases did not deliver new features. Jakarta EE 10 changes all that and begins the vital process of delivering long pending new features into the ecosystem at a regular cadence.
There are quite a few changes that were delivered – here are some key themes and highlights:
- CDI Alignment
- @Asynchronous in Concurrency
- Better CDI support in Batch
- Java SE Alignment
- Support for Java SE 11, Java SE 17
- CompletionStage, ForkJoinPool, parallel streams in Concurrency
- Bootstrap APIs for REST
- Closing standardization gaps
- OpenID Connect support in Security, @ManagedExecutorDefinition, UUID as entity keys, more SQL support in Persistence queries, multipart/form-data support in REST, @ClientWindowScoped in Faces, pure Java Faces views
- CDI Lite/Core Profile to enable next generation cloud native runtimes – MicroProfile will likely align with CDI Lite/Jakarta EE Core
- Deprecation/removal
- @Context annotation in REST, EJB Entity Beans, embeddable EJB container, deprecated Servlet/Faces/CDI features
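To give a flavor of the CDI alignment theme above, here is a minimal sketch of the new CDI-based @Asynchronous from Jakarta Concurrency; the ReportService bean and its method are illustrative:

import java.util.concurrent.CompletableFuture;
import jakarta.enterprise.concurrent.Asynchronous;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ReportService {

    // Runs on a managed executor and works on any CDI bean,
    // unlike the old EJB-only @Asynchronous
    @Asynchronous
    public CompletableFuture<String> generateReport(String id) {
        String report = "report-" + id; // placeholder for real work
        return Asynchronous.Result.complete(report);
    }
}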
While there are many features that we identified in our Jakarta EE 10 Contribution Guide that did not make it yet, this is still a very solid release that everyone in the Java ecosystem will benefit from, including Spring, MicroProfile and Quarkus. You can see here what was delivered, what’s on the way and what gaps still remain. You can try Jakarta EE 10 out now using compatible implementations like GlassFish, Payara, WildFly and Open Liberty. Jakarta EE 10 is proof that the community, including major stakeholders, has not only made it through the transition to the Eclipse Foundation but is now beginning to thrive once again.
Many Ambassadors helped make this release a reality such as Arjan Tijms, Werner Keil, Markus Karg, Otavio Santana, Ondro Mihalyi and many more. The Ambassadors will now focus on enabling the community to evangelize Jakarta EE 10 including speaking, blogging, trying out implementations, and advocating for real world adoption. We will also work to enable the community to continue to contribute to Jakarta EE by producing an EE 11 Contribution Guide in the coming months. Please stay tuned and join us.
Jakarta EE is truly moving forward – the next phase of the platform’s evolution is here!
July 13, 2022
Java Reflections unit-testing
by Vladimir Bychkov at July 13, 2022 09:06 PM
May 05, 2022
Java EE - Jakarta EE Initializr
May 05, 2022 02:23 PM
Getting started with Jakarta EE just became even easier!
Get started
Hot new Update!
Moved from the Apache 2 license to the Eclipse Public License v2 for the newest version of the archetype as described below.
As a start for a possible collaboration with the Eclipse start project.
New Archetype with JakartaEE 9
JakartaEE 9 + Payara 5.2022.2 + MicroProfile 4.1 running on Java 17
- And the docker image is also ready for x86_64 (amd64) AND aarch64 (arm64/v8) architectures!
February 21, 2022
FOSDEM 2022 Conference Report
by Reza Rahman at February 21, 2022 12:24 AM
FOSDEM took place February 5-6. The European-based event is one of the most significant gatherings worldwide focused on all things Open Source. In recent years the event has added a devroom/track dedicated to Java, named the “Friends of OpenJDK”. The effort is led by my friend and former colleague Geertjan Wielenga. Due to the pandemic, the 2022 event was virtual once again. I delivered a couple of talks on Jakarta EE as well as Diversity & Inclusion.
Fundamentals of Diversity & Inclusion for Technologists
I opened the second day of the conference with my newest talk titled “Fundamentals of Diversity and Inclusion for Technologists”. I believe this is an overdue and critically important subject. I am very grateful to FOSDEM for accepting the talk. The reality for our industry remains that many people either have not yet started or are at the very beginning of their Diversity & Inclusion journey. This talk aims to start the conversation in earnest by explaining the basics. Concepts covered include unconscious bias, privilege, equity, allyship, covering and microaggressions. I punctuate the topic with experiences from my own life and examples relevant to technologists. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.
Jakarta EE – Present and Future
Later the same day, I delivered my fairly popular talk – “Jakarta EE – Present and Future”. The talk is essentially a state of the union for Jakarta EE. It covers a little bit of history, context, Jakarta EE 8, Jakarta EE 9/9.1 as well as what’s ahead for Jakarta EE 10. One key component of the talk is the importance and ways of direct developer contributions into Jakarta EE, if needed with help from the Jakarta EE Ambassadors. Jakarta EE 10 and the Jakarta Core Profile should bring an important set of changes including to CDI, Jakarta REST, Concurrency, Security, Faces, Batch and Configuration. The slides for the talk are available on SpeakerDeck. The video for the talk is now posted on YouTube.
I am very happy to have had the opportunity to speak at FOSDEM. I hope to contribute again in the future.
December 12, 2021
Infinispan Apache Log4j 2 CVE-2021-44228 vulnerability
December 12, 2021 10:00 PM
Infinispan 10+ uses Log4j version 2.0+ and can be affected by vulnerability CVE-2021-44228, which has a 10.0 CVSS score. The first fixed Log4j version is 2.15.0.
So, until the official patch arrives, you can update the bundled logger version to the latest in a few simple steps:
- Download Log4j version 2.15.0: https://www.apache.org/dyn/closer.lua/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
- Unpack the distribution
- Replace the affected libraries
wget https://downloads.apache.org/logging/log4j/2.15.0/apache-log4j-2.15.0-bin.zip
unzip apache-log4j-2.15.0-bin.zip
cd /opt/infinispan-server-10.1.8.Final/lib/
rm log4j-*.jar
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-api-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-core-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-jul-2.15.0.jar ./
cp ~/Downloads/apache-log4j-2.15.0-bin/log4j-slf4j-impl-2.15.0.jar ./
Please note that the patch above is not official, but according to initial tests it works with no issues.
November 18, 2021
JPA query methods: influence on performance
by Vladimir Bychkov at November 18, 2021 07:22 AM
September 30, 2021
Custom Identity Store with Jakarta Security in TomEE
by Jean-Louis Monteiro at September 30, 2021 11:42 AM
In the previous post, we saw how to use the built-in ‘tomcat-users.xml’ identity store with Apache TomEE. While this identity store is inherited from Tomcat and integrated into the Jakarta Security implementation in TomEE, it is usually good for development or simple deployments, but may be too simple or restrictive for production environments.
This blog will focus on how to implement your own identity store. TomEE can use LDAP or JDBC identity stores out of the box. We will try them out next time.
Let’s say you have your own file store or your own data store, such as an in-memory data grid; then you will need to implement your own identity store.
What is an identity store?
An identity store is a database or a directory (store) of identity information about a population of users that includes an application’s callers.
In essence, an identity store contains all information such as caller name, groups or roles, and required information to validate a caller’s credentials.
How to implement my own identity store?
This is actually fairly simple with Jakarta Security. The only thing you need to do is create an implementation of `jakarta.security.enterprise.identitystore.IdentityStore`. All methods in the interface have default implementations. So you only have to implement what you need.
public interface IdentityStore {
Set<ValidationType> DEFAULT_VALIDATION_TYPES = EnumSet.of(VALIDATE, PROVIDE_GROUPS);
default CredentialValidationResult validate(Credential credential) {
// default implementation elided for brevity
}
default Set<String> getCallerGroups(CredentialValidationResult validationResult) {
// default implementation elided for brevity
}
default int priority() {
// default implementation elided for brevity
}
default Set<ValidationType> validationTypes() {
// default implementation elided for brevity
}
enum ValidationType {
VALIDATE, PROVIDE_GROUPS
}
}
By default, an identity store is used for both validating user credentials and providing groups/roles for the authenticated user. Depending on what #validationTypes() returns, you will have to implement #validate(…) and/or #getCallerGroups(…).
#getCallerGroups(…) will receive the result of #validate(…). Let’s look at a very simple example:
@ApplicationScoped
public class TestIdentityStore implements IdentityStore {
public CredentialValidationResult validate(Credential credential) {
if (!(credential instanceof UsernamePasswordCredential)) {
return INVALID_RESULT;
}
final UsernamePasswordCredential usernamePasswordCredential = (UsernamePasswordCredential) credential;
if (usernamePasswordCredential.compareTo("jon", "doe")) {
return new CredentialValidationResult("jon", new HashSet<>(asList("foo", "bar")));
}
if (usernamePasswordCredential.compareTo("iron", "man")) {
return new CredentialValidationResult("iron", new HashSet<>(Collections.singletonList("avengers")));
}
return INVALID_RESULT;
}
}
In this simple example, the identity store is hardcoded. Basically, it knows only 2 users; one of them has some roles, while the other has a different set of roles.
You can easily extend this example to query a local file or an in-memory data grid if needed, or use JPA to access your relational database.
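As an illustration, a JPA-backed #validate(…) could look like the sketch below. The UserAccount entity, its named query, and its accessors are hypothetical, and a real store would compare hashed passwords rather than plain text:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import jakarta.security.enterprise.credential.Credential;
import jakarta.security.enterprise.credential.UsernamePasswordCredential;
import jakarta.security.enterprise.identitystore.CredentialValidationResult;
import jakarta.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class JpaIdentityStore implements IdentityStore {

    @PersistenceContext
    private EntityManager em;

    @Override
    public CredentialValidationResult validate(Credential credential) {
        if (!(credential instanceof UsernamePasswordCredential)) {
            return CredentialValidationResult.INVALID_RESULT;
        }
        UsernamePasswordCredential login = (UsernamePasswordCredential) credential;
        // Hypothetical entity and named query
        UserAccount account = em.createNamedQuery("UserAccount.byName", UserAccount.class)
                .setParameter("name", login.getCaller())
                .getResultStream().findFirst().orElse(null);
        // A real implementation would hash the password before comparing
        if (account == null || !login.compareTo(account.getName(), account.getPassword())) {
            return CredentialValidationResult.INVALID_RESULT;
        }
        return new CredentialValidationResult(account.getName(), account.getGroups());
    }
}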
IMPORTANT: for TomEE to pick it up and use it in your application, the identity store must be a CDI bean.
The complete and runnable example is available under https://github.com/apache/tomee/tree/master/examples/security-custom-identitystore
The post Custom Identity Store with Jakarta Security in TomEE appeared first on Tomitribe.
September 24, 2021
Book Review: Practical Cloud-Native Java Development with MicroProfile
September 24, 2021 12:00 AM
General information
- Pages: 403
- Published by: Packt
- Release date: Aug 2021
Disclaimer: I received this book as a collaboration with Packt and one of the authors (Thanks Emily!)
A book about Microservices for the Java Enterprise-shops
Year after year, many enterprise companies struggle to embrace the Cloud Native practices that we tend to call Microservices; however, Microservices is a metapattern that needs to follow a well-defined approach, like:
- (We aim for) reactive systems
- (Hence we need a methodology like) 12 Cloud Native factors
- (Implementing) well-known design patterns
- (Dividing the system by using) Domain Driven Design
- (Implementing microservices via) Microservices chassis and/or service mesh
- (Achieving deployments by) Containers orchestration
Many of these concepts require a considerable amount of context, but some books, tutorials, conferences and YouTube videos tend to focus on specific niche information, making it difficult to have a "cold start" in the microservices space if you have been developing regular/monolithic software. For me, that's the best thing about this book: it provides a holistic view to understand microservices with Java and MicroProfile for "cold start" developers.
About the book
From a software architect's perspective, MicroProfile could be defined as a set of specifications (APIs) that many microservices chassis implement in order to solve common microservices problems through patterns, lessons learned from well-known Java libraries, and proposals for collaboration between Java Enterprise vendors.
Consequently, if you think that sounds a lot like Java EE, you're right: it's the same spirit but in the microservices space, with participation from many vendors, including vendors from the Java EE space -e.g. Red Hat, IBM, Apache, Payara-.
The main value of this book is the willingness to go beyond the APIs, providing four structured sections that have different writing styles, for instance:
- Section 1: Cloud Native Applications - Written as a didactical resource to learn fundamentals of distributed systems with Cloud Native approach
- Section 2: MicroProfile Deep Dive - Written as a reference book with code snippets to understand the motivation, functionality and specific details in MicroProfile APIs and the relation between these APIs and common Microservices patterns -e.g. Remote procedure invocation, Health Check APIs, Externalized configuration-
- Section 3: End-to-End Project Using MicroProfile - Written as a narrative workshop with source code already available, to understand the development and deployment process of Cloud Native applications with MicroProfile
- Section 4: The standalone specifications - Written as a reference book with code snippets, it describes the development of newer specs that could be included in the future under MicroProfile's umbrella
First section
This was by far my favorite section. It presents a well-balanced overview of Cloud Native practices like:
- Cloud Native definition
- The role of microservices and the differences with monoliths and FaaS
- Data consistency with event sourcing
- Best practices
- The role of MicroProfile
I enjoyed this section because my current role is to coach or act as a software architect at different companies, hence this is good material to explain the whole panorama to my coworkers and/or use this book as a quick reference.
My only concern with this section is about the final chapter: it presents an application called IBM Stock Trader that (as you probably guessed) IBM uses to demonstrate these concepts using MicroProfile with OpenLiberty. The chapter by itself presents an application that combines data sources, front ends, and Kubernetes; however, the application becomes useful only in Section 3 (at least that was my perception). Hence, you will be going back to this section once you're executing the workshop.
Second section
This section divides the MicroProfile APIs into three levels; the division actually makes a lot of sense, but it became evident to me only during this review:
- The base APIs to create microservices (JAX-RS, CDI, JSON-P, JSON-B, Rest Client)
- Enhancing microservices (Config, Fault Tolerance, OpenAPI, JWT)
- Observing microservices (Health, Metrics, Tracing)
Additionally, this section also describes the need for Docker and Kubernetes and how other common approaches -e.g. Service mesh- overlap with Microservice Chassis functionality.
Currently I'm a MicroProfile user, hence I knew most of the APIs; however, I liked the description of the pattern/need that motivated the inclusion of each API. That description should be useful for newcomers, along with the code snippets, which are also available on GitHub.
If you're a Java/Jakarta EE developer, you will find the CDI section a little superficial; indeed, CDI by itself deserves a whole book/fascicle, but this chapter gives you the basics to start the development process.
Third section
This section switches to a workshop writing style. The first chapter is entirely focused on how to compile the sample microservices, how to fulfill the technical requirements, and which MicroProfile APIs are used in every microservice.
You must notice that this is not a Java programming workshop; it's a Cloud Native workshop with ready-to-deploy microservices, hence the step-by-step guide is about compilation with Maven, Docker containers, scaling with Kubernetes, operators in OpenShift, etc.
You could explore and change the source code if you wish, but the section is written in a "descriptive" way that assumes the samples' existence.
Fourth section
This section is pretty similar to the second section in the reference book style, hence it also describes the pattern/need that motivated the discussion of the API and code snippets. The main focus of this section is GraphQL, Reactive Approaches and distributed transactions with LRA.
This section will probably change in future editions of the book, because at the time of publishing the Cloud Native Computing Foundation revealed that some initiatives about observability will be integrated into the OpenTelemetry project, and MicroProfile is discussing its future approach.
Things that could be improved
As with any review, this is the most difficult section to write, but I think a second edition should:
- Extend the CDI section due to its foundational status
- Switch the order of the Stock Trader presentation
- Extend the data consistency discussion -e.g. CQRS, Event Sourcing-, hopefully with advances from LRA
The last item is mostly a wish, since I'm always in need of better ways to integrate these common practices with buses like Kafka or Camel using MicroProfile. I know that some implementations -e.g. Helidon, Quarkus- already have extensions for Kafka or Camel, but data consistency is an entire discussion about patterns, tools, and best practices.
Who should read this book?
- Java developers with strong SE foundations and familiarity with the enterprise space (Spring/Java EE)
July 28, 2021
Jakarta Community Acceptance Testing (JCAT)
by javaeeguardian at July 28, 2021 05:41 AM
Today the Jakarta EE Ambassadors are announcing the start of the Jakarta EE Community Acceptance Testing (JCAT) initiative. The purpose of this initiative is to test Jakarta EE 9/9.1 implementations using your code and/or applications. Although Jakarta EE is extensively tested by the TCK, container-specific tests, and QA, the purpose of JCAT is for developers themselves to test the implementations.
Jakarta EE 9/9.1 did not introduce any new features. In Jakarta EE 9 the APIs changed from javax to jakarta. Jakarta EE 9.1 raised the supported floor to Java 11 for compatible implementations. So what are we testing?
- Testing individual spec implementations standalone with the new namespace.
- Deploying existing Java EE/Jakarta EE applications to EE 9/9.1.
- Converting Java EE/Jakarta EE applications to the new namespace.
- Running applications on Java 11 (Jakarta EE 9.1)
Participating in this initiative is easy:
- Download a Jakarta EE implementation:
- Deploy code:
- Port or run your existing Jakarta EE application
- Test out a feature using a starter template
To join this initiative, please take a moment to fill out the form:
To submit results or feedback on your experiences with Jakarta EE 9/9.1:
Jakarta EE 9 / 9.1 Feedback Form
Resources:
- Jakarta EE Ambassadors Google Group List
- Jakarta EE Ambassadors Twitter
- Jakarta EE Starter
- Jakarta EE 9 Boilerplate
- Jakarta EE Migration
Start Date: July 28, 2021
End Date: December 31, 2021
April 17, 2021
Your Voice Matters: Take the Jakarta EE Developer Survey
by dmitrykornilov at April 17, 2021 11:36 AM
The Jakarta EE Developer Survey is in its fourth year and is the industry’s largest open source developer survey. It’s open until April 30, 2021. I am encouraging you to add your voice. Why should you do it? Because the Jakarta EE Working Group needs your feedback. We need to know the challenges you are facing and the suggestions you have for making Jakarta EE better.
Last year’s edition surveyed developers to gain on-the-ground understanding and insights into how Jakarta solutions are being built, as well as identifying developers’ top choices for architectures, technologies, and tools. The 2021 Jakarta EE Developer Survey is your chance to influence the direction of the Jakarta EE Working Group’s approach to cloud native enterprise Java.
The results from the 2021 survey will give software vendors, service providers, enterprises, and individual developers in the Jakarta ecosystem updated information about Jakarta solutions and service development trends and what they mean for their strategies and businesses. Additionally, the survey results also help the Jakarta community at the Eclipse Foundation better understand the top industry focus areas and priorities for future project releases.
A full report based on the survey results will be made available to all participants.
The survey takes less than 10 minutes to complete. We look forward to your input. Take the survey now!
April 02, 2021
Undertow AJP balancer. UT005028: Proxy request failed: java.nio.BufferOverflowException
April 02, 2021 09:00 PM
WildFly provides great out-of-the-box load balancing support via the Undertow and modcluster subsystems.
Unfortunately, when HTTP headers are large enough (close to 16K), which is quite common in the JWT era, this pesky error happens:
ERROR [io.undertow.proxy] (default I/O-10) UT005028: Proxy request to /ee-jax-rs-examples/clusterdemo/serverinfo failed: java.io.IOException: java.nio.BufferOverflowException
at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:771)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:646)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction$1.completed(ProxyHandler.java:561)
at io.undertow.client.ajp.AjpClientExchange.invokeReadReadyCallback(AjpClientExchange.java:203)
at io.undertow.client.ajp.AjpClientConnection.initiateRequest(AjpClientConnection.java:288)
at io.undertow.client.ajp.AjpClientConnection.sendRequest(AjpClientConnection.java:242)
at io.undertow.server.handlers.proxy.ProxyHandler$ProxyAction.run(ProxyHandler.java:561)
at io.undertow.util.SameThreadExecutor.execute(SameThreadExecutor.java:35)
at io.undertow.server.HttpServerExchange.dispatch(HttpServerExchange.java:815)
...
Caused by: java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:521)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:297)
at io.undertow.protocols.ajp.AjpUtils.putString(AjpUtils.java:52)
at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.createFrameHeaderImpl(AjpClientRequestClientStreamSinkChannel.java:176)
at io.undertow.protocols.ajp.AjpClientRequestClientStreamSinkChannel.generateSendFrameHeader(AjpClientRequestClientStreamSinkChannel.java:290)
at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:39)
at io.undertow.protocols.ajp.AjpClientFramePriority.insertFrame(AjpClientFramePriority.java:32)
at io.undertow.server.protocol.framed.AbstractFramedChannel.flushSenders(AbstractFramedChannel.java:603)
at io.undertow.server.protocol.framed.AbstractFramedChannel.flush(AbstractFramedChannel.java:742)
at io.undertow.server.protocol.framed.AbstractFramedChannel.queueFrame(AbstractFramedChannel.java:735)
at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.queueFinalFrame(AbstractFramedStreamSinkChannel.java:267)
at io.undertow.server.protocol.framed.AbstractFramedStreamSinkChannel.shutdownWrites(AbstractFramedStreamSinkChannel.java:244)
at io.undertow.channels.DetachableStreamSinkChannel.shutdownWrites(DetachableStreamSinkChannel.java:79)
at io.undertow.server.handlers.proxy.ProxyHandler$HTTPTrailerChannelListener.handleEvent(ProxyHandler.java:754)
The same request sent directly to the backend server works well. I tried to play with the ajp-listener and mod-cluster filter "max-*" parameters, but had no luck.
A possible solution here is to switch the protocol from AJP to HTTP, which can be a bit less efficient but works well with big headers:
/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=listener, value=default)
September 23, 2020
General considerations on updating Enterprise Java projects from Java 8 to Java 11
September 23, 2020 12:00 AM
The purpose of this article is to consolidate all difficulties and solutions that I've encountered while updating Java EE projects from Java 8 to Java 11 (and beyond). It's a known fact that Java 11 has a lot of new characteristics that are revolutionizing how Java is used to create applications, despite being problematic under certain conditions.
This article is focused on Java/Jakarta EE but it could be used as basis for other enterprise Java frameworks and libraries migrations.
Is it possible to update Java EE/MicroProfile projects from Java 8 to Java 11?
Yes, absolutely. My team has been able to migrate at least two mature enterprise applications, each with more than three years in development:
A Management Information System (MIS)
- Time for migration: 1 week
- Modules: 9 EJB, 1 WAR, 1 EAR
- Classes: 671 and counting
- Code lines: 39480
- Project's beginning: 2014
- Original platform: Java 7, Wildfly 8, Java EE 7
- Current platform: Java 11, Wildfly 17, Jakarta EE 8, MicroProfile 3.0
- Web client: Angular
Mobile POS and Geo-fence
- Time for migration: 3 weeks
- Modules: 5 WAR/MicroServices
- Classes: 348 and counting
- Code lines: 17160
- Project's beginning: 2017
- Original platform: Java 8, Glassfish 4, Java EE 7
- Current platform: Java 11, Payara (Micro) 5, Jakarta EE 8, MicroProfile 3.2
- Web client: Angular
Why should I ever consider migrating to Java 11?
As with everything in IT, the answer is "It depends . . .". However, there are a couple of good reasons to do it:
- Reduce attack surface by updating project dependencies proactively
- Reduce technical debt and most importantly, prepare your project for the new and dynamic Java world
- Take advantage of performance improvements on new JVM versions
- Take advantage from improvements of Java as programming language
- Sleep better by having a more secure, efficient and quality product
Why are updates from Java 8 to Java 11 considered difficult?
From my experience with many teams, it is because of the following:
Changes in Java release cadence
Currently, there are two big branches in the JVM release model:
- Java LTS: With a fixed lifetime (3 years) for long-term support, Java 11 being the latest one
- Java current: A fast-paced Java version that is available every 6 months over a predictable calendar, Java 15 being the latest (at least at the time of publishing this article)
The rationale behind this decision is that Java needed dynamism in providing new characteristics to the language, API, and JVM, with which I really agree.
Nevertheless, it is a known fact that most enterprise frameworks seek and use Java for stability. Consequently, most of these frameworks target Java 11 as the "certified" Java Virtual Machine for deployments.
Usage of internal APIs
Errata: I fixed and simplified this section following an interesting discussion on reddit :)
Java 9 introduced changes in internal classes that weren't meant for use outside the JVM, preventing/breaking the functionality of popular libraries that made use of these internals -e.g. Hibernate, ASM, Hazelcast- to gain performance.
Hence, to mitigate this, internal APIs in JDK 9 became inaccessible at compile time (though still accessible with --add-exports), remaining accessible at run time if they were accessible in JDK 8; in a future release they will become inaccessible. In the long run, this change will reduce the costs borne by the maintainers of the JDK itself and by the maintainers of libraries and applications that, knowingly or not, make use of these internal APIs.
Finally, during the introduction of JEP-260, internal APIs were classified as critical and non-critical. Consequently, critical internal APIs for which replacements were introduced in JDK 9 are deprecated in JDK 9 and will be either encapsulated or removed in a future release.
However, you are inside the danger zone if:
- Your project compiles against dependencies pre-Java 9 depending on critical internals
- You bundle dependencies pre-Java 9 depending on critical internals
- You run your applications over a runtime -e.g. Application Servers- that include pre Java 9 transitive dependencies
Any of these situations means that your application has a chance of not being compatible with JVMs above Java 8, at least not without updating your dependencies, which could in turn uncover breaking changes in library APIs, forcing mandatory refactors.
Removal of CORBA and Java EE modules from OpenJDK
Also during the Java 9 release, many Java EE and CORBA modules were marked as deprecated, being effectively removed in Java 11, specifically:
- java.xml.ws (JAX-WS, plus the related technologies SAAJ and Web Services Metadata)
- java.xml.bind (JAXB)
- java.activation (JAF)
- java.xml.ws.annotation (Common Annotations)
- java.corba (CORBA)
- java.transaction (JTA)
- java.se.ee (Aggregator module for the six modules above)
- jdk.xml.ws (Tools for JAX-WS)
- jdk.xml.bind (Tools for JAXB)
As JEP-320 states, many of these modules were included in Java 6 as a convenience to generate/support SOAP Web Services. But these modules eventually took off as independent projects that are already available on Maven Central. Therefore, it is necessary to include them as dependencies if your project implements services with JAX-WS and/or depends on any library/utility that was previously included.
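For reference, commonly used standalone replacements on Maven Central include, for example, jakarta.xml.bind:jakarta.xml.bind-api together with org.glassfish.jaxb:jaxb-runtime for JAXB, jakarta.xml.ws:jakarta.xml.ws-api together with com.sun.xml.ws:jaxws-rt for JAX-WS, and jakarta.activation:jakarta.activation-api for JAF.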
IDEs and application servers
In the same way as libraries, Java IDEs had to catch up with the introduction of Java 9 on at least three levels:
- IDEs as Java programs should be compatible with Java Modules
- IDEs should support new Java versions as programming language -i.e. Incremental compilation, linting, text analysis, modules-
- IDEs are also basis for an ecosystem of plugins that are developed independently. Hence if plugins have any transitive dependency with issues over JPMS, these also have to be updated
Overall, none of the Java IDEs guaranteed that plugins would work on JVMs above Java 8. Therefore, you could possibly run your IDE on Java 11, but a legacy/deprecated plugin could prevent you from running your application.
How do I update?
You must notice that Java 9 launched three years ago, hence the situations previously described are mostly resolved by now. However, you should perform the following verifications and actions to prevent failures in the process:
- Verify server compatibility
- Verify if you need a specific JVM due support contracts and conditions
- Configure your development environment to support multiple JVMs during the migration process
- Verify your IDE compatibility and update
- Update Maven and Maven projects
- Update dependencies
- Include Java/Jakarta EE dependencies
- Execute multiple JVMs in production
Verify server compatibility
Mike Loukides from O’Reilly affirms that there are two types of programmers: on one hand, the low-level programmers who create tools such as libraries and frameworks; on the other, the developers who use these tools to create experiences, products and services.
Enterprise Java belongs mostly to the second group, the “productive world” resting on giants’ shoulders. That’s why you should first check whether your runtime or framework already has a version compatible with Java 11, and whether you have the time and the decision power to proceed with an update. If not, any further action from this point is useless.
The good news is that most of the popular servers and tools in the enterprise Java world are already compatible, such as:
- Apache Tomcat
- Apache Maven
- Spring
- Oracle WebLogic
- Payara
- Apache TomEE
... among others
If you happen to depend on an incompatible runtime, this is where the road ends, unless you help the maintainer update it.
Verify if you need a specific JVM
On the non-technical side, support contract conditions could obligate you to use a specific JVM version.
OpenJDK itself is an open source project receiving contributions from many companies (Oracle being the most active contributor), but nothing prevents any other company from compiling, packaging and TCK-certifying its own JVM distribution, as demonstrated by Amazon Corretto, Azul Zulu, Liberica JDK, etc.
In short, there is software that technically could run on any JVM distribution and version, but the support contract will require a particular one. For instance:
- WebLogic is only certified for Oracle HotSpot and GraalVM
- SAP Netweaver includes by itself SAP JVM
Configure your development environment to support multiple JDKs
Since the jump from Java 8 to Java 11 is mostly an experimentation process, it is a good idea to install multiple JVMs on your development machine; SDKMan and jEnv are the common options:
SDKMan
SDKMan is available for Unix-Like environments (Linux, Mac OS, Cygwin, BSD) and as the name suggests, acts as a Java tools package manager.
It helps to install and manage JVM ecosystem tools -e.g. Maven, Gradle, Leiningen- and also multiple JDK installations from different providers.
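A typical SDKMan session might look like this (the version identifier is only an example; sdk list java shows what is currently available):
sdk list java
sdk install java 11.0.2-open
sdk use java 11.0.2-open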
jEnv
Also available for Unix-Like environments (Linux, Mac OS, Cygwin, BSD), jEnv is basically a script to manage and switch multiple JVM installations per system, user and shell.
If you happen to install JDKs from different sources -e.g Homebrew, Linux Repo, Oracle Technology Network- it is a good choice.
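For instance, a minimal jEnv setup could look like the following (the JDK paths are illustrative and depend on where your JDKs live; the aliases accepted by jenv global and jenv local are the ones reported by jenv versions):
jenv add /usr/lib/jvm/java-8-openjdk
jenv add /usr/lib/jvm/java-11-openjdk
jenv versions
jenv global 11.0
jenv local 1.8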
Finally, if you use Windows, the common alternative is to automate the switch using .bat files; I would appreciate any other suggestions, since I don’t use Windows that often.
Verify your IDE compatibility and update
Please remember that any IDE ecosystem is composed of three levels:
- The IDE acting as platform
- Programming language support
- Plugins to support tools and libraries
After updating your IDE, you should also verify that all of the plugins that are part of your development cycle work fine under Java 11.
Update Maven and Maven projects
Probably the most common build choice in enterprise Java is Maven, and many IDEs use it either under the hood or explicitly. Hence, you should update it.
Besides updating the installation, please remember that Maven has a modular architecture and the versions of Maven’s own plugins can be pinned in any project definition. As a rule of thumb, you should also update these plugins in your projects to the latest stable versions.
To verify this quickly, you could use versions-maven-plugin:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>versions-maven-plugin</artifactId>
<version>2.8.1</version>
</plugin>
This plugin includes a specific goal to check Maven plugin versions:
mvn versions:display-plugin-updates
After that, you also need to configure the Java source and target compatibility; this is generally done in one of two places.
As properties:
<properties>
...
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
</properties>
Or as configuration for Maven plugins, especially maven-compiler-plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
Finally, some plugins need to "break" the barriers imposed by Java Modules, and the Java platform team knows it. Hence the JVM offers the --illegal-access option to allow this, at least as of Java 11.
This could be a good idea for plugins like surefire and failsafe, which also launch runtimes that depend on this flag (like Arquillian tests):
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<argLine>
--illegal-access=permit
</argLine>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<argLine>
--illegal-access=permit
</argLine>
</configuration>
</plugin>
Update project dependencies
As mentioned before, you need to check for Java 11-compatible versions of your dependencies. Some libraries introduce breaking changes on each major version -e.g. Flyway- so you should plan time to refactor for these changes.
Again, if you use Maven, versions-maven-plugin has a goal to check dependency versions; the plugin will report the available updates:
mvn versions:display-dependency-updates
In the particular case of Java EE you already have an advantage: if you depend only on APIs -e.g. Java EE, MicroProfile- and not on particular implementations, many of these issues are already solved for you.
Include Java/Jakarta EE dependencies
Modern REST-based services probably won't need this, but in projects with heavy usage of SOAP and XML marshalling it is mandatory to include the Java EE modules removed in Java 11. Otherwise your project won't compile and run.
You must include as dependency:
- API definition
- Reference Implementation (if needed)
At this point it is also a good idea to evaluate whether you could move to Jakarta EE, the evolution of Java EE under the Eclipse Foundation.
Jakarta EE 8 is practically Java EE 8 under another name; it retains package and feature compatibility, and most application servers either already have Jakarta EE certified implementations or are in the process of certifying them.
We could swap the Java EE API:
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>8.0.1</version>
<scope>provided</scope>
</dependency>
For Jakarta EE API:
<dependency>
<groupId>jakarta.platform</groupId>
<artifactId>jakarta.jakartaee-api</artifactId>
<version>8.0.0</version>
<scope>provided</scope>
</dependency>
After that, please include any of these dependencies (if needed):
Java Beans Activation
Java EE
<dependency>
<groupId>javax.activation</groupId>
<artifactId>javax.activation-api</artifactId>
<version>1.2.0</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.activation</groupId>
<artifactId>jakarta.activation-api</artifactId>
<version>1.2.2</version>
</dependency>
JAXB (Java XML Binding)
Java EE
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.3.1</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.xml.bind</groupId>
<artifactId>jakarta.xml.bind-api</artifactId>
<version>2.3.3</version>
</dependency>
Implementation
<dependency>
<groupId>org.glassfish.jaxb</groupId>
<artifactId>jaxb-runtime</artifactId>
<version>2.3.3</version>
</dependency>
JAX-WS
Java EE
<dependency>
<groupId>javax.xml.ws</groupId>
<artifactId>jaxws-api</artifactId>
<version>2.3.1</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.xml.ws</groupId>
<artifactId>jakarta.xml.ws-api</artifactId>
<version>2.3.3</version>
</dependency>
Implementation (runtime)
<dependency>
<groupId>com.sun.xml.ws</groupId>
<artifactId>jaxws-rt</artifactId>
<version>2.3.3</version>
</dependency>
Implementation (standalone)
<dependency>
<groupId>com.sun.xml.ws</groupId>
<artifactId>jaxws-ri</artifactId>
<version>2.3.2-1</version>
<type>pom</type>
</dependency>
Java Annotation
Java EE
<dependency>
<groupId>javax.annotation</groupId>
<artifactId>javax.annotation-api</artifactId>
<version>1.3.2</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.annotation</groupId>
<artifactId>jakarta.annotation-api</artifactId>
<version>1.3.5</version>
</dependency>
Java Transaction
Java EE
<dependency>
<groupId>javax.transaction</groupId>
<artifactId>javax.transaction-api</artifactId>
<version>1.3</version>
</dependency>
Jakarta EE
<dependency>
<groupId>jakarta.transaction</groupId>
<artifactId>jakarta.transaction-api</artifactId>
<version>1.3.3</version>
</dependency>
CORBA
In the particular case of CORBA, I'm not aware of its current adoption. There is an independent project at Eclipse to support CORBA, based on GlassFish CORBA, but this should be investigated further.
Multiple JVMs in production
If everything compiles, tests and executes, you have completed a successful migration.
Some deployments/environments run multiple application servers on the same Linux installation. If this is your case, it is a good idea to install multiple JVMs to allow stepped migrations instead of a big-bang one.
For instance, RHEL-based distributions like CentOS, Oracle Linux or Fedora ship various JVM versions in their repositories.
Most importantly, if you install JVMs directly from RPMs outside the distribution repositories (like Oracle HotSpot), the Java alternatives mechanism will still support switching between them.
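On such systems, switching the system-wide default typically comes down to a single command (some distributions ship it as update-alternatives):
sudo alternatives --config java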
However, on modern deployments it is probably better to use Docker, especially on Windows, which otherwise needs .bat scripts to automate this task. Most of the JVM distributions are also available on Docker Hub.
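For instance, smoke-testing your build under several JVM distributions becomes a one-liner per vendor (the image tags are illustrative; check Docker Hub for the current ones):
docker run --rm openjdk:11 java -version
docker run --rm amazoncorretto:11 java -version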
July 06, 2020
Jakarta EE Cookbook
by Elder Moraes at July 06, 2020 07:19 PM
About one month ago I had the pleasure of announcing the release of the second edition of my book, now called “Jakarta EE Cookbook”. At the time I recorded a video about it, which you can watch here:
Then came a crazy month, and only now have I had the opportunity to write a few lines about it!
So, straight to the point, here is what you should know about the book (in case you have any interest in it).
Target audience
Java developers working on enterprise applications who would like to get the best from the Jakarta EE platform.
Topics covered
I’m sure this is one of the most complete books in this field, and I’m saying that based on the topics covered:
- Server-side development
- Building services with RESTful features
- Web and client-server communication
- Security in the enterprise architecture
- Jakarta EE standards (and how they save you time on a daily basis)
- Deployment and management using some of the best Jakarta EE application servers
- Microservices with Jakarta EE and Eclipse MicroProfile
- CI/CD
- Multithreading
- Event-driven for reactive applications
- Jakarta EE, containers & cloud computing
Style and approach
The book has the word “cookbook” in its name for a reason: it follows a 100% practical approach, with almost all of the working code available in the book (we only omitted the imports for the sake of space).
And speaking of source code, it is all available on my GitHub: https://github.com/eldermoraes/javaee8-cookbook
PRs and stars are welcome!
Bonus content
The book has an appendix that would be worthy of another book! In it I tell readers how sharing knowledge has changed my career for the better, and how you can apply what I’ve learned to your own career.
Surprise, surprise
In the first 24 hours of its release, the book reached 1st place on Amazon among other Java releases! Wow!
Of course, I’m more than happy and honored for such a warm welcome given to my baby…
If you are interested, we are in the very last days of the special price celebrating its release. You can take a look here: http://book.eldermoraes.com
Leave your comments if you need any clarification about it. See you!
January 29, 2020
Monitoring REST APIs with Custom JDK Flight Recorder Events
January 29, 2020 02:30 PM
The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.
In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests and more. We’ll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
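To give a flavor of the custom event API, here is a minimal sketch of what such an event could look like (the event, its fields and the demo class are illustrative assumptions, not the code from the post):
import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("com.example.RestRequest")
@Label("REST Request")
@Category("REST API")
class RestRequestEvent extends Event {
    @Label("Path")
    String path;
    @Label("HTTP Method")
    String method;
    @Label("Status Code")
    int status;
}

public class RestRequestDemo {
    public static void main(String[] args) {
        RestRequestEvent event = new RestRequestEvent();
        event.begin();               // start timing the request
        // ... handle the request here ...
        event.path = "/customers";
        event.method = "GET";
        event.status = 200;
        event.commit();              // ends timing and records the event if enabled
    }
}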
January 20, 2020
Enforcing Java Record Invariants With Bean Validation
January 20, 2020 04:30 PM
January 19, 2020
Jakarta EE 8 CRUD API Tutorial using Java 11
by Philip Riecks at January 19, 2020 03:07 PM
As part of the Jakarta EE Quickstart Tutorials on YouTube, I've now created a five-part series on building a Jakarta EE CRUD API. In the videos, I demonstrate how to start using Jakarta EE for your next application. Using the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed following the TDD (Test-Driven Development) technique.
The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5
Part I: Introduction to the application setup
This part covers the following topics:
- Introduction to the Maven project skeleton
- Flyway setup for Open Liberty
- Derby JDBC connection configuration
- Basic MicroShed Testing setup for TDD
Part II: Developing the endpoint to create entities
This part covers the following topics:
- First JAX-RS endpoint to create Person entities (a minimal sketch of such an endpoint follows after this list)
- TDD approach using MicroShed Testing and the Liberty Maven Plugin
- Store the entities using the EntityManager
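To give an idea of the shape of such a create endpoint, here is a minimal sketch (the resource class, the persons path and the Person entity are illustrative assumptions, not the code from the videos):
import javax.enterprise.context.ApplicationScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("persons")
@ApplicationScoped
public class PersonResource {

    @PersistenceContext
    private EntityManager entityManager;

    // Persist the incoming Person and answer with 201 Created
    @POST
    @Transactional
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(Person person) {
        entityManager.persist(person);
        return Response.status(Response.Status.CREATED).build();
    }
}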
Part III: Developing the endpoints to read entities
This part covers the following topics:
- Develop two JAX-RS endpoints to read entities
- Read all entities, and a single entity by its id
- Handle non-present entities with a different HTTP status code
Part IV: Developing the endpoint to update entities
This part covers the following topics:
- Develop the JAX-RS endpoint to update entities
- Update existing entities using HTTP PUT
- Validate the client payload using Bean Validation
Part V: Developing the endpoint to delete entities
This part covers the following topics:
- Develop the JAX-RS endpoint to delete entities
- Enhance the test setup for deterministic and repeatable integration tests
- Remove the deleted entity from the database
The source code for the Maven CRUD API application is available on GitHub.
For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.
Have fun developing Jakarta EE CRUD API applications,
Phil
The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.
January 07, 2020
Deploy a Jakarta EE application to the root context
by Philip Riecks at January 07, 2020 06:24 AM
With the presence of Docker, Kubernetes and cheaper hardware, the deployment model of multiple applications inside one application server is a thing of the past. Now, you deploy one Jakarta EE application to one application server. This eliminates the need for different context paths: you can use the root context / for your Jakarta EE application. With this blog post, you'll learn how to achieve this for each Jakarta EE application server.
The default behavior of Jakarta EE application servers
Without any further configuration, most Jakarta EE application servers deploy the application to a context path based on the filename of your .war. If you e.g. deploy your my-banking-app.war application, the server will use the context prefix /my-banking-app for your application. All your JAX-RS endpoints, Servlets, .jsp and .xhtml content is then available below this context, e.g. /my-banking-app/resources/customers.
This was important in the past, where you deployed multiple applications to one application server. Without the context prefix, the application server wouldn't be able to route the traffic to the correct application.
As of today, the deployment model has changed with Docker, Kubernetes and cheaper infrastructure. You usually deploy one .war within one application server running as a Docker container. Given this deployment model, the context prefix is irrelevant, and mapping the application to the root context / is more convenient.
If you configure a reverse proxy or an Ingress controller (in the Kubernetes world), you are happy if you can just route to / instead of remembering the actual context path (error-prone).
Deploying to root context: Payara & Glassfish
As Payara is a fork of Glassfish, the configuration for both is quite similar. The most convenient way for Glassfish is to place a glassfish-web.xml file in the src/main/webapp/WEB-INF folder of your application:
<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN" "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
  <context-root>/</context-root>
</glassfish-web-app>
For Payara the filename is payara-web.xml:
<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "https://raw.githubusercontent.com/payara/Payara-Server-Documentation/master/schemas/payara-web-app_4.dtd">
<payara-web-app>
  <context-root>/</context-root>
</payara-web-app>
Both also support configuring the context path of the application within their admin console, but IMHO this is less convenient than the .xml file solution.
Deploying to root context: Open Liberty
Open Liberty also parses a proprietary ibm-web-ext.xml file within src/main/webapp/WEB-INF:
<web-ext xmlns="http://websphere.ibm.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-web-ext_1_0.xsd"
         version="1.0">
  <context-root uri="/"/>
</web-ext>
Furthermore, you can also configure the context of your application within your server.xml:
<server>
  <featureManager>
    <feature>servlet-4.0</feature>
  </featureManager>
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
  <webApplication location="app.war" contextRoot="/" name="app"/>
</server>
Deploying to root context: WildFly
WildFly also has two simple ways of configuring the root context for your application. First, you can place a jboss-web.xml within src/main/webapp/WEB-INF:
<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.4//EN" "http://www.jboss.org/j2ee/dtd/jboss-web_4_0.dtd">
<jboss-web>
  <context-root>/</context-root>
</jboss-web>
Second, while copying your .war file to your Docker container, you can name it ROOT.war:
FROM jboss/wildfly
ADD target/app.war /opt/jboss/wildfly/standalone/deployments/ROOT.war
For more tips & tricks for each application server, have a look at my cheat sheet.
Have fun deploying your Jakarta EE applications to the root context,
Phil
The post Deploy a Jakarta EE application to the root context appeared first on rieckpil.
November 19, 2019
Modernizing our GitHub Sync Toolset
November 19, 2019 08:10 PM
I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.
We are not expecting any disruption of service but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this 1 hour maintenance window.
This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories and, on top of that, this new release will start synchronizing contributors.
In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.
Eclipse committers are responsible for maintaining the list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).
To become an Eclipse contributor on a project's GitHub repositories, please make sure to tell us your GitHub username in your Eclipse account.
May 28, 2019
Jakarta EE, A de facto standard in the making
by David R. Heffelfinger at May 28, 2019 10:06 PM
I’ve been involved in Java EE since the very beginning, having written one of the first ever books on Java EE. My involvement in Java EE / Jakarta EE has been in an education and advocacy role: writing books, articles and blog posts, and giving conference talks about the technology. I advocate Jakarta EE not because I’m paid to do so, but because I really believe it is a great technology. I’m a firm believer that the fact that Jakarta EE is a standard with multiple competing implementations results in very high quality implementations, since Jakarta EE avoids vendor lock-in and encourages competition, benefiting developers.
Oracle’s donation of Java EE to the Eclipse Foundation was well received and celebrated by the Java EE community. Many prominent community members had been advocating for a more open process for Java EE, which is exactly what Jakarta EE, under the stewardship from the Eclipse Foundation provides.
There are some fundamental changes in how Jakarta EE is managed, compared to Java EE, that benefit the Jakarta EE community greatly.
Fundamental differences between Java EE and Jakarta EE Management
Some of the differences in the way Jakarta EE is managed as opposed to Java EE are that there is no single vendor controlling the technology, there is free access to the TCK and there is no reference implementation.
No single company controls the standard
First and foremost, we no longer have a single company as the steward of Jakarta EE. Instead, we have several companies who have a vested interest in the success of the technology working together to develop the standard. This has the benefit that the technology is not subject to the whims of any one vendor, and, if any of the vendors loses interest in Jakarta EE, others can easily pick up the slack. The fact that there is no single vendor behind the technology makes Jakarta EE very resilient; it is here to stay.
TCK freely accessible
Something those of us involved heavily in Jakarta EE (and Java EE before it) take for granted, but that may not be clear to others, is that Jakarta EE is a set of specifications with multiple implementations. Since the APIs are defined in a specification, they don’t change across Jakarta EE implementations, making Jakarta EE compliant code portable across implementations. For example, a Jakarta EE compliant application should run with minimal or no modifications on popular Jakarta EE implementations such as Apache TomEE, Payara, IBM’s Open Liberty or Red Hat’s Thorntail.
One major change from Java EE is that the Jakarta EE Technology Compatibility Kit (TCK) is open source and free. The TCK is a set of tests that verify that a Jakarta EE implementation is 100% compliant with all Jakarta EE specifications. With Java EE, organizations wanting to create a Java EE implementation had to pay large sums of money to gain access to the TCK; once their implementation passed all the tests, it was certified as Java EE compatible. The fact that the TCK was not freely accessible became a barrier to innovation, as smaller organizations and open source developers did not always have the funds to get access to the TCK. Now that the TCK is freely accessible, the floodgates will open, and we should see a lot more quality implementations of Jakarta EE.
No reference implementation
Another major change between Java EE and Jakarta EE is that Java EE had the concept of a reference implementation. The idea behind having a Java EE reference implementation was to prove that suggested API specifications were actually feasible to implement. Having a reference implementation, however, had a side effect: if the reference implementation implemented something that wasn’t properly defined in the specification, many developers expected all Java EE implementations to behave the same way, making the reference implementation a de-facto Java EE specification of sorts. Jakarta EE does away with the concept of a reference implementation and will have multiple compatible implementations instead. The absence of a reference implementation will result in more complete specifications: differences in behavior between implementations will bring deficiencies in the specifications to light, and those deficiencies can then be addressed by the community.
Conclusion
With multiple organizations having a vested interest in Jakarta EE’s success, a lowered barrier of entry for new Jakarta EE implementations, and better specifications, Jakarta EE will become the de-facto standard in server-side Java development.
April 08, 2019
Specification Scope in Jakarta EE
by waynebeaton at April 08, 2019 02:56 PM
With the Eclipse Foundation Specification Process (EFSP) a single open source specification project has a dedicated project team of committers to create and maintain one or more specifications. The cycle of creation and maintenance extends across multiple versions of the specification, and so while individual members may come and go, the team remains, and it is that team that is responsible for every version of that specification that is created.
The first step in managing how intellectual property rights flow through a specification is to define the range of the work encompassed by the specification. Per the Eclipse Intellectual Property Policy, this range of work (referred to as the scope) needs to be well-defined and captured. Once defined, the scope is effectively locked down: changes are possible but rare, must be carefully managed, and require approval from the Jakarta EE Working Group’s Specification Committee.
Regarding scope, the EFSP states:
Among other things, the Scope of a Specification Project is intended to inform companies and individuals so they can determine whether or not to contribute to the Specification. Since a change in Scope may change the nature of the contribution to the project, a change to a Specification Project’s Scope must be approved by a Super-majority of the Specification Committee.
As a general rule, a scope statement should not be too precise. Rather, it should describe the intention of the specification in broad terms. Think of the scope statement as an executive summary or “elevator pitch”.
Elevator pitch: You have fifteen seconds before the elevator doors open on your floor; tell me about the problem your specification addresses.
The scope statement must answer the question: what does an implementation of this specification do? The scope statement must be aspirational rather than attempt to capture any particular state at any particular point-in-time. A scope statement must not focus on the work planned for any particular version of the specification, but rather, define the problem space that the specification is intended to address.
For example:
Jakarta Batch describes a means for executing and managing batch processes in Jakarta EE applications.
and:
Jakarta Message Service describes a means for Jakarta EE applications to create, send, and receive messages via loosely coupled, reliable asynchronous communication services.
For the scope statement, you can assume that the reader has a rudimentary understanding of the field. It’s reasonable, for example, to expect the reader to understand what “batch processing” means.
I should note that the two examples presented above are just examples of form. I’m pretty sure that they make sense, but defer to the project teams to work with their communities to sort out the final form.
The scope is “sticky” for the entire lifetime of the specification: it spans versions. The plan for any particular development cycle must describe work that is in scope; and at the checkpoint (progress and release) reviews, the project team must be prepared to demonstrate that the behavior described by the specifications (and tested by the corresponding TCK) cleanly falls within the scope (note that the development life cycle of a specification project is described in Eclipse Foundation Specification Process Step-by-Step).
In addition to the specification scope required by the Eclipse Intellectual Property Policy and the EFSP, the specification project that owns and maintains the specification needs a project scope. The project scope is, I think, pretty straightforward: a particular specification project defines and maintains a specification.
For example:
The Jakarta Batch project defines and maintains the Jakarta Batch specification and related artifacts.
Like the specification scope, the project scope should be aspirational. In this regard, the specification project is responsible for the particular specification in perpetuity. Further, related artifacts like APIs and TCKs can be in scope without actually being managed by the project right now.
Today, for example, most of the TCKs for the Jakarta EE specifications are rolled into the Jakarta EE TCK project. But, over time, this single monster TCK may be broken up and individual TCKs moved to corresponding specification projects. Or not. The point is that regardless of where the technical artifacts are currently maintained, they may one day be part of the specification project, so they are in scope.
I should back up a bit and say that our intention right now is to turn the “Eclipse Project for …” projects that we have managing artifacts related to various specifications into actual specification projects. As part of this effort, we’ll add Git repositories to these projects to provide a home for the specification documents (more on this later). A handful of these proto-specification projects currently include artifacts related to multiple specifications, so we’ll have to sort out what we’re going to do about those project scope statements.
We might consider, for example, changing the project scope of the Jakarta EE Stable APIs (note that I’m guessing a future new project name) to something simple like:
Jakarta EE Stable APIs provides a home for stable (legacy) Jakarta EE specifications and related artifacts which are no longer actively developed.
But, all that talk about specification projects aside, our initial focus needs to be on describing the scope of the specifications themselves. With that in mind, the EE4J PMC has created a project board with issues to track this work and we’re going to ask the project teams to start working with their communities to put these scope statements together. If you have thoughts regarding the scope statements for a particular specification, please weigh in.
Note that we’re in a bit of a weird state right now. As we engage in a parallel effort to rename the specifications (and corresponding specification projects), it’s not entirely clear what we should call things. You’ll notice that the issues that have been created all use the names that we guess we’re going to end up using (there’s more information about that in Renaming Java EE Specifications for Jakarta EE).
April 04, 2019
Renaming Java EE Specifications for Jakarta EE
by waynebeaton at April 04, 2019 02:17 PM
It’s time to change the specification names…
When we first moved the APIs and TCKs for the Java EE specifications over to the Eclipse Foundation under the Jakarta EE banner, we kept the existing names for the specifications in place, and adopted placeholder names for the open source projects that hold their artifacts. As we prepare to engage in actual specification work (involving an actual specification document), it’s time to start thinking about changing the names of the specifications and the projects that contain their artifacts.
Why change? For starters, it’s just good form to leverage the Jakarta brand. But, more critically, many of the existing specification names use trademarked terms that make it either very challenging or impossible to use those names without violating trademark rules. Motivation for changing the names of the existing open source projects that we’ll turn into specification projects is, I think, a little easier: “Eclipse Project for …” is a terrible name. So, while the current names for our proto-specification projects have served us well to-date, it’s time to change them. To keep things simple, we recommend that we just use the name of the specification as the project name.
With this in mind, we’ve come up with a naming pattern that we believe can serve as a good starting point for discussion. To start with, in order to keep things as simple as possible, we’ll have the project use the same name as the specification (unless there is a compelling reason to do otherwise).
The naming rules are relatively simple:
- Replace “Java” with “Jakarta” (e.g. “Java Message Service” becomes “Jakarta Message Service”);
- Add a space in cases where names are mashed together (e.g. “JavaMail” becomes “Jakarta Mail”);
- Add “Jakarta” when it is missing (e.g. “Expression Language” becomes “Jakarta Expression Language”); and
- Rework names to consistently start with “Jakarta” (“Enterprise JavaBeans” becomes “Jakarta Enterprise Beans”).
This presents us with an opportunity to add even more consistency to the various specification names. Some, for example, are more wordy or descriptive than others; some include the term “API” in the name, and others don’t; etc.
We’ll have to sort out what we’re going to do with the Eclipse Project for Stable Jakarta EE Specifications, which provides a home for a small handful of specifications which are not expected to change. I’ll personally be happy if we can at least drop the “Eclipse Project for” from the name (“Jakarta EE Stable”?). We’ll also have to sort out what we’re going to do about the Eclipse Mojarra and Eclipse Metro projects which hold the APIs for some specifications; we may end up having to create new specification projects as homes for development of the corresponding specification documents (regardless of how this ends up manifesting as a specification project, we’re still going to need specification names).
Based on all of the above, here is my suggested starting point for specification (and most project) names. I’ve applied the rules described above and have suggested tweaks for consistency by striking words out (shown here between ~~double tildes~~):
- Jakarta ~~APIs for~~ XML Messaging
- Jakarta ~~Architecture for~~ XML Binding
- Jakarta ~~API for~~ XML-based Web Services
- Jakarta Common Annotations
- Jakarta Enterprise Beans
- Jakarta Persistence ~~API~~
- Jakarta Contexts and Dependency Injection
- Jakarta EE Platform
- Jakarta ~~API for~~ JSON Binding
- Jakarta Servlet
- Jakarta ~~API for~~ RESTful Web Services
- Jakarta Server Faces
- Jakarta ~~API for~~ JSON Processing
- Jakarta ~~EE~~ Security ~~API~~
- Jakarta Bean Validation
- Jakarta Mail
- Jakarta Beans Activation ~~Framework~~
- Jakarta Debugging Support for Other Languages
- Jakarta Server Pages Standard Tag Library
- Jakarta EE Platform Management
- Jakarta EE Platform Application Deployment
- Jakarta ~~API for~~ XML Registries
- Jakarta ~~API for~~ XML-based RPC
- Jakarta Enterprise Web Services
- Jakarta Authorization ~~Contract for Containers~~
- Jakarta Web Services Metadata
- Jakarta Authentication ~~Service Provider Interface for Containers~~
- Jakarta Concurrency Utilities
- Jakarta Server Pages
- Jakarta Connector Architecture
- Jakarta Dependency Injection
- Jakarta Expression Language
- Jakarta Message Service
- Jakarta Batch
- Jakarta ~~API for~~ WebSocket
- Jakarta Transaction ~~API~~
We’re going to couple renaming with an effort to capture proper scope statements (I’ll cover this in my next post). The Eclipse EE4J PMC Lead, Ivar Grimstad, has blogged about this recently and has created a project board to track the specification and project renaming activity (as of this writing, it has only just been started, so watch that space). We’ll start reaching out to the “Eclipse Project for …” teams shortly to start engaging this process. When we’ve collected all of the information (names and scopes), we’ll engage in a restructuring review per the Eclipse Development Process (EDP) and make it all happen (more on this later).
Your input is requested. I’ll monitor comments on this post, but it would be better to collect your thoughts in the issues listed on the project board (after we’ve taken the step to create them, of course), on the related issue, or on the EE4J PMC’s mailing list.
January 08, 2019
Top 20 Jakarta EE Experts to Follow on Twitter
by Elder Moraes at January 08, 2019 12:47 AM
This is the most viewed post of this blog, so I believe it deserves an update now in 2020! Its first version was written back in 2017.
There are a lot of different opinions about this kind of list, and there will always be somebody or something missing… just don’t be too passionate or take things personally, ok?!
******************************************************
We all have to agree: there is a ton of information shared through social media, and it’s no different on Twitter.
When we talk about staying tuned to some technology, it’s important to have some kind of focus. Otherwise, you could end up confused or, worse, getting bad and/or wrong information.
For these reasons I keep a small but incredible list of people/accounts that I follow on Twitter to get really good information about Jakarta EE.
If you are passionate about Jakarta EE like me, I truly hope this list may be helpful to you. If you are not, I hope you enjoy as well!
Important: the list isn’t a ranking, so don’t judge an account by its position in the list. In fact, don’t judge anyone by anything…
Jakarta EE – @JakartaEE
The official Jakarta EE handle.
Wayne Beaton – @waynebeaton
Wayne is a Director of Open Source Projects at the Eclipse Foundation. Undoubtedly, Jakarta EE is one of the biggest projects at Eclipse today (if not the biggest one), so it’s important to stay tuned to what Wayne has to say.
Adam Bien – @AdamBien
Adam has worked as a freelancer with Java since JDK 1.0, with Servlets/EJB since 1.0, and before the advent of J2EE on several large-scale applications. He is an architect and developer (with usually a 20/80 distribution) on Java (SE / EE / FX) projects. He has written several books about JavaFX, J2EE, and Java EE, and he is the author of Real World Java EE Patterns—Rethinking Best Practices and Real World Java EE Night Hacks—Dissecting The Business Tier.
He is also a Java Champion, NetBeans Dream Team Founding Member, Oracle ACE Director, Java Developer of the Year 2010 and has a huge amount of nominations as JavaOne Rockstar.
Kevin Sutter – @kwsutter
Kevin Sutter is the lead architect for the Jakarta EE and JPA solutions for WebSphere Application Server and the WebSphere Liberty Profile. He is also very active with Java and open-source strategies as they relate to IBM’s application middleware.
Ivar Grimstad – @ivar_grimstad
Ivar Grimstad is a Java Champion, JUG Leader, JCP Spec Lead, EC and EG Member, NetBeans Dream Team member, and international speaker.
He is the PMC (Project Management Committee) Lead of EE4J and Jakarta EE Developer Advocate at Eclipse Foundation.
David Blevins – @dblevins
David Blevins is the founder of Tomitribe and a veteran of open source Jakarta EE. He has been both implementing and defining Enterprise Java specifications for more than 10 years and has a strong drive to see it simple, testable, and as light as Java SE. Blevins is cofounder of OpenEJB (1999), Geronimo (2003), and TomEE (2011). He is a member of the EJB 3.0, EJB 3.1, EJB 3.2, Java EE 6, Java EE 7, and Java EE 8 Security Expert Groups, and a member of the Apache Software Foundation. Blevins is a contributing author to Component-Based Software Engineering: Putting the Pieces Together (Addison Wesley). Blevins is also a regular speaker at JavaOne, Devoxx, ApacheCon, OSCon, JAX, and other Java-focused conferences.
Otavio Santana – @otaviojava
Otávio Santana is a developer and enthusiast of open source. He is an evangelist and practitioner of agile philosophy and polyglot development in Brazil. Santana is a JUG leader of JavaBahia and SouJava, and a strong supporter of Java communities in Brazil, where he also leads the BrasilJUGs initiative to incorporate Brazilian JUGs into joint activities.
He is also the co-creator of Jakarta NoSQL, a Java framework that streamlines the integration of Java applications with NoSQL databases. It defines a set of APIs to interact with NoSQL databases and provides a standard implementation for most NoSQL databases. This helps to achieve very low coupling with the underlying NoSQL technologies used.
Java Champions – @Java_Champions
Well, why is the Java Champions handle here? Because many Java Champions are Jakarta EE experts, so following the official account is a nice way to keep in touch with what they are saying about it.
Alex Theedom – @alextheedom
Trainer, Java Champion, Jakarta EE spec committee, author of Jakarta EE books and courses. Conference speaker and blogger.
OmniFaces – @OmniFaces
OmniFaces is a utility library for JSF 2 that focusses on utilities that ease everyday tasks with the standard JSF API. OmniFaces is a response to frequently recurring problems encountered during ages of professional JSF development and from questions being asked on Stack Overflow.
Dmitry Kornilov – @m0mus
Dmitry has over 20 years of experience in design and implementation of complex software systems, defining systems architecture, team leading and project management. He has worked as project leader of EclipseLink and Yasson and as spec lead of JSON-P and JSON-B specifications.
Steve Millidge – @l33tj4v4
Steve is the Founder and Director of Payara and C2B2 Consulting. Having used Java extensively since pre-1.0, Steve has over 15 years’ experience as a field-based professional services consultant, along with extensive knowledge of Java middleware.
Before running his own business, Steve worked for Oracle as a Principal Consultant in Oracle’s Technology Architecture Practice, specializing in Business Process Integration and scalable n-tier component architectures.
Emily Jiang – @emilyfhjiang
Emily is a Java Champion and has been working on MicroProfile since 2016. She also leads the specifications of MicroProfile Config, Fault Tolerance and Service Mesh. She works for IBM as Liberty Architect for MicroProfile and CDI and is heavily involved in Java EE implementation in Liberty releases.
Arjan Tijms – @arjan_tijms
Arjan is a Jakarta EE committer and member of the Jakarta EE Steering Committee.
MicroProfile – @MicroProfileIO
MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes. The initially planned baseline is JAX-RS + CDI + JSON-P, with the intent of the community having an active role in the MicroProfile definition and roadmap.
Sebastian Daschner – @DaschnerS
Sebastian has been working with Java enterprise software development for many years. Besides his work for clients, he puts a high priority on educating developers through conference presentations, video courses, and training. He believes that teaching others not only greatly improves their situation but is also a way of educating yourself.
David Heffelfinger – @ensode
David R. Heffelfinger is an independent consultant based in the greater Washington DC area. He has authored several books on Java EE and related technologies. Heffelfinger has been architecting, designing, and developing software professionally since 1995. He has been using Java as his primary programming language since 1996. He has worked on many large-scale projects for several clients, including the US Department of Homeland Security, Freddie Mac, Fannie Mae, and the US Department of Defense. He has a master’s degree in software engineering from Southern Methodist University, Dallas, Texas. Heffelfinger is a frequent speaker at Java conferences such as Oracle Code One (formerly JavaOne).
John Clingan – @jclingan
John is a Product Manager at Red Hat and an ex-Java EE PM. He is also a MicroProfile co-founder.
Josh Juneau – @javajuneau
Josh Juneau works as an application developer, system analyst, and database administrator. He is active in many fields of application development but primarily focuses on Jakarta EE. Juneau is a technical writer for Oracle Technology Network, Java Magazine, and Apress. He is a member of the NetBeans Dream Team, the JCP, and a part of the JSR 372 Expert Group. He enjoys working with the Java community—he is the director of meetings for the Chicago Java User Group.
Tanja Obradovic – @TanjaEclipse
Tanja is the Jakarta EE Program Manager at Eclipse Foundation.