
Jakarta EE 2020 Developer Survey

by Ivar Grimstad at April 09, 2020 11:55 AM

It’s finally here! The annual Jakarta EE 2020 Developer Survey is out. Make sure to use this opportunity to make your voice heard!

This is the third Jakarta EE Developer Survey. Last year’s survey had more than 1,700 responses from individuals around the world. Let’s beat that number this year! The survey provides valuable insight into the state of the community and helps identify the top priorities for future Jakarta EE releases.

The survey takes less than 10 minutes to complete, so don’t hesitate! Take the Jakarta EE 2020 Survey now!


by Ivar Grimstad at April 09, 2020 11:55 AM

Making Graph Databases Fun Again With Java

by otaviojava at April 09, 2020 10:16 AM

Graph databases need to be made fun again! Not to worry — the open-source TinkerPop from Apache is here to do just that. Ref: https://dzone.com/articles/have-a-fun-moment-with-graph-and-java

by otaviojava at April 09, 2020 10:16 AM

Web Components with Boundary Control Entity, lit-html and redux--an application walk through

by admin at April 08, 2020 08:43 AM

A walk through the "events" application from the Building Applications with native Web Components, lit-html and redux online workshop, also featuring the "Boundary Control Entity" (BCE) structure.

by admin at April 08, 2020 08:43 AM

2020 Jakarta EE Developer Survey Available

by Will Lyons at April 07, 2020 02:44 PM

The 2020 Jakarta EE Developer Survey, sponsored by the Jakarta EE Working Group, is now available. The survey will be open until April 30, and I encourage you to participate.

This is the third year of this survey, which is intended to help the Java developer community better understand the trends, requirements, and priorities for development of enterprise applications. Last year we had over 1,700 respondents to the survey, whose results were reported here. This year's survey focuses heavily on trends in cloud native development, and we expect it to provide helpful insights for developers, enterprises, and vendors as they plan the evolution of their applications and application infrastructure. It will certainly help the Jakarta EE community set priorities for the coming year.

So please complete the survey by April 30, and make your voice heard. It should take less than 10 minutes to complete, may be interesting for you, and your participation will be greatly appreciated. Thanks!


by Will Lyons at April 07, 2020 02:44 PM

Add Your Voice to the 2020 Jakarta EE Developer Survey

by Mike Milinkovich at April 07, 2020 12:02 PM

Our third annual Jakarta EE Developer Survey is now open and I encourage everyone to take a few minutes and complete the survey before the April 30 deadline. Your input is extremely important.

With your feedback, the entire Java ecosystem will have a better understanding of the requirements, priorities, and perceptions in the global Java developer community. This understanding enables a clearer view of the Java industry landscape, the challenges Java developers are facing, and the opportunities for enterprise Java stakeholders in the cloud native era.

The Jakarta EE Developer Survey is one of the Java industry’s largest developer surveys. Since the survey’s inception, we’ve received thousands of responses from developers around the world, including 1,700 responses in 2019 — a clear indication the Java developer community recognizes the value of the survey results.

Last year, we were able to share critical insight into the state of cloud native innovation for enterprise Java development globally, including expected growth rates for Java apps in the cloud as well as leading architectures, applications, and technologies. We were also able to share the community’s top priorities for Jakarta EE.

This year, we’re asking developers to tell us more about their next steps for Java and cloud native development and their choices for architectures, technologies, and tools as cloud native resources for Java mature.

With this updated information, platform vendors, enterprises, and individual developers in the Java ecosystem will have a better understanding of how the cloud native world for enterprise Java is unfolding and what that means for their strategies and businesses. And the Jakarta EE community at the Eclipse Foundation will have a better understanding of the community’s top priorities for future Jakarta EE releases.

The Jakarta EE Developer Survey is your opportunity to add your voice to the global Java ecosystem and we’re counting on our entire community to help us gain the broadest possible view of the state of cloud native technologies in the context of enterprise Java. Best of all, this year we’ve organized the survey so it takes less than 10 minutes to complete!

To access the survey, click here.


by Mike Milinkovich at April 07, 2020 12:02 PM

Just Write Code and Keep It Forever--an airhacks.fm Podcast

by admin at April 07, 2020 11:16 AM

Subscribe to airhacks.fm podcast via: Spotify | iTunes | RSS

The #82 airhacks.fm episode with Markus Karg (@mkarg) about:
JAX-RS and the importance of APIs in the real world
is available for download.

by admin at April 07, 2020 11:16 AM

AWS Cognito, JVM vs. native, Quarkus vs. WildFly, DataSources, DI, MP metrics, GraalVM--or 73rd airhacks.tv

by admin at April 06, 2020 06:01 AM

Topics (https://gist.github.com/AdamBien/6fd5e96c8f22da6a9df86dcd28ad4b27) for the 73rd airhacks.tv (always first Monday of the month, 8pm CET / CEST):
  1. Interactive code review of BCE-based web component / redux / lit-html application
  2. JSF integration with AWS cognito
  3. wrk vs. vegeta [blog comment]
  4. JVM vs. native GraalVM performance [blog comment]
  5. @manytomany with JSON-B
  6. What is the magic behind Quarkus?
  7. Use Cases for GraalVM native images
  8. Application Servers vs. Quarkus, Helidon and Co.
  9. A standard way to define DataSources
  10. Field Injection vs. Constructor Injection
  11. Exposing MP metrics to authenticated users only
  12. DI without reflection
  13. Hooking into MP config
  14. Java 6 to Java 11 migrations
  15. OpenAM for authorization
  16. Is @Stateless equivalent to @RequestScoped and @Transactional
  17. Database tables naming conventions
Any questions left? Ask now: https://gist.github.com/AdamBien/6fd5e96c8f22da6a9df86dcd28ad4b27 and get the answers at the next airhacks.tv.

by admin at April 06, 2020 06:01 AM

Building Web Apps with Web Components, redux and lit-html--Online Workshop

by admin at April 05, 2020 05:21 PM

In the online workshops webcomponents.training and effectiveweb.training I used vanilla web standards APIs to build simple apps.

However, in my projects, workshops and meetups I gathered a lot of questions like:

  • How to structure a serious app?
  • How to deal with state?
  • How do the components communicate with each other?
  • How do you test a serious app?
  • Are you productive without a framework?
  • What about data binding?
  • Can you use 3rd-party routers?
  • How to modularize a complex app?
  • Are custom elements too fine grained?
  • Are you using CSS frameworks?
  • How to integrate 3rd party components?
  • How to deal with long running server requests?
  • How to handle errors?
  • Do you need a build process?
  • (...)

I answered many such questions during the more than five-hour, 100-episode continuous coding workshop: Building apps with Web Components, redux and lit-html.

Building web applications with Web Components

This time, I coded what is reasonable "from scratch" and used next-gen, standards-based libraries and tools (Web Components, redux, and lit-html), with the goal of "develop fast, never migrate" in mind.


by admin at April 05, 2020 05:21 PM

Hashtag Jakarta EE #14

by Ivar Grimstad at April 05, 2020 09:59 AM

Welcome to the fourteenth issue of Hashtag Jakarta EE!

It has been an interesting week in the Jakarta EE community. The work on Jakarta EE 9 is making progress, but we still need all the help we can get from the community. Make sure you tune in to the Jakarta EE Update Call on Wednesday for more information about how you can help!

The ongoing thread on the Jakarta EE Community mailing list regarding creating a fork of MicroProfile Config as a basis for a Jakarta Config specification continues. Please make sure to chime in with your opinion there.

I think this discussion shows how important it is for Jakarta EE to state what the technical alignment with MicroProfile (as well as other potential candidates for standardization) should look like from a Jakarta EE standpoint. The MicroProfile community selected a Pull approach, which in plain words means that it will not initiate any standardization efforts with Jakarta EE, or anywhere else. The Jakarta EE working group should come up with a similar strategy, or statement, for how the technical alignment should work from the Jakarta EE side in order to end this confusion.

In the end, a reminder that the nomination period for the Jakarta EE Working Group election ends on April 10, 2020.


by Ivar Grimstad at April 05, 2020 09:59 AM

To fork, or not to fork

by Ivar Grimstad at April 03, 2020 12:50 PM

There is a very interesting discussion ongoing on the Jakarta EE Community mailing list about forking Eclipse MicroProfile Config as Jakarta Configuration. While the discussion on the mailing list is initially about the MicroProfile Config specification specifically, it also raises the question of what Jakarta EE’s strategy for technical alignment with Eclipse MicroProfile should be.

Background

A couple of weeks ago, there was a vote in the Eclipse MicroProfile community about how to handle technical alignment with downstream consumers. Two approaches, the Pull Model and the Push Model, were voted on. The Pull Model got the most votes and was thus selected as the strategy for technical alignment.

In parallel, the discussions regarding the creation of a MicroProfile working group have been going on since October last year. Several suggestions have been explored, including separate working groups, a combined working group with Jakarta EE, or a combination of the two with two working groups linked together with an umbrella working group.

The latest development in this working group discussion is that the Eclipse Foundation has asked the MicroProfile Community to define a charter for a MicroProfile working group independent from Jakarta EE.

Why Fork?

Whether there is a single working group, a group of working groups under an umbrella working group, or just independent working groups is actually not relevant to this discussion. The technical alignment will still be the same. The MicroProfile community has chosen the Pull Model, and this is something Jakarta EE needs to figure out how to relate to.

In my mind, it is pretty simple. Jakarta EE should create a fork of the specifications that make sense to include in the Jakarta EE Platform. There are several reasons for this. I have touched upon a couple of them here.

1. Stability

MicroProfile wants to move fast and break things, while Jakarta EE wants to maintain a certain level of backward compatibility. By creating a fork, Jakarta EE won’t have to address this concern since it will then control the life cycle of its specifications.

2. Cohesion

Java EE, and by succession Jakarta EE, has always been a very cohesive platform. A central piece like configuration will be used throughout the entire platform and it only makes sense that it is in the same namespace as the rest of the platform.

3. Flexibility

By maintaining a fork of the specification, Jakarta EE is free to make modifications that are relevant for Jakarta EE, but maybe wouldn’t be relevant for MicroProfile.

Some Thoughts

As I pointed out in Hashtag Jakarta EE #11, the headache of diverging tines of the fork lands on the vendors that support both Jakarta EE and MicroProfile in the same product. Therefore, it is kind of interesting to see that the vendors that were most in favor of the Pull approach are all shipping products that are both Jakarta EE and MicroProfile compliant. So, clearly, they must have thought about this and have a solution in place.

Some other reads on the same topic

MicroProfile and Jakarta EE Technical Alignment – Steve Millidge
Proposal on Jakarta EE’s innovation & relationship with MicroProfile – Sebastian Daschner


by Ivar Grimstad at April 03, 2020 12:50 PM

Add Payara Server 5 to the Visual Studio Code Tutorial

by Gaurav Gupta at April 03, 2020 11:30 AM

In this tutorial, I will explain how to add Payara Server to Visual Studio Code and deploy a Maven web application to Payara Server.


by Gaurav Gupta at April 03, 2020 11:30 AM

Election Time

by Ivar Grimstad at April 01, 2020 01:14 PM

The election for positions on the various committees in the Jakarta EE working group has started. There are seats up for election in the Steering Committee, the Specification Committee, and the Marketing and Brand Committee.

This election consists of one seat for Participant Members and one seat for Committer Members in each of the committees, which means that there are six positions open.

In the following Studio Jakarta EE episode, I have recorded a little information regarding the election.

To sum up, the nomination period is open until April 10, 2020. So don’t hesitate to nominate yourself!


by Ivar Grimstad at April 01, 2020 01:14 PM

The Payara Monthly Catch for March 2020

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 01, 2020 11:00 AM

I was very determined this month to try and get this blog out on time! A lot has happened this month! Java 14 was released. Some virus thingy means everyone is working from home. Conferences and events are almost all cancelled or delayed, globally. Some have transitioned to virtual events.

Microsoft Teams reported a 775 percent increase in cloud services usage in regions that have enforced social distancing. Users generated over 900 million meeting and calling minutes on Teams daily in a single week.

Survey time! Please help us with our research into how open source is helping organisations overcome the Coronavirus pandemic!

Below you will find a curated list of some of the most interesting news, articles, and videos from this month. Can't wait until the end of the month? Then visit our Twitter page, where we post all these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 01, 2020 11:00 AM

Free e-book about cloud, Jakarta EE and MicroProfile available from Payara and Platform.sh

by otaviojava at April 01, 2020 09:06 AM

All companies are software companies, and businesses will always experience the challenge of keeping integrations between users and applications scalable, productive, fast, and of high quality. To combat this, cloud, microservices, and other modern solutions come up more and more in architectural decisions. Here is the question: Is Java prepared to deal with these diverse […]

by otaviojava at April 01, 2020 09:06 AM

Java: It’s Time to Move Your Application to Java 11

by otaviojava at March 30, 2020 09:16 AM

In this article, we discuss the need for current Java 8 users to begin migrating to Java 11 and provide an easy solution for migration. https://dzone.com/articles/java-its-time-to-move-your-application-to-java-11

by otaviojava at March 30, 2020 09:16 AM

Strip The Cow To The Skeleton--an airhacks.fm Podcast

by admin at March 29, 2020 02:51 PM

Subscribe to airhacks.fm podcast via: Spotify | iTunes | RSS

The #81 airhacks.fm episode with Arjan Tijms (@arjan_tijms) about:
piranha.cloud, building servlet engines from scratch, Jakarta EE Security, Maven as filesystem, MicroProfile and Cloud Events
is available for download.

by admin at March 29, 2020 02:51 PM

Hashtag Jakarta EE #13

by Ivar Grimstad at March 29, 2020 01:11 PM

Welcome to the thirteenth issue of Hashtag Jakarta EE!

EclipseCon 2020 has been added to the long list of events transforming into virtual events this year. Those with an eye for detail will notice that the conference has changed its name from EclipseCon Europe to simply EclipseCon.

The name change has nothing to do with the decision to go virtual this year. EclipseCon is a global event and has been so for years, so removing the Europe part of the name just makes sense.

The seemingly never-ending story of creating a working group for Eclipse MicroProfile took an interesting turn at the end of this week. In an email to the MicroProfile mailing list, Mike Milinkovich tasked the MicroProfile community with coming up with a proposal for a MicroProfile Working Group Charter. This means that the efforts of creating a common working group or an umbrella working group structure in relation to Jakarta EE have been put on hold.

I think it is a good thing that the discussions can now be about how to get the MicroProfile Working Group up and running, so we can all focus our energy on technical challenges rather than governance.


by Ivar Grimstad at March 29, 2020 01:11 PM

Architecting cloud computing solutions with Java webinar

by otaviojava at March 26, 2020 12:19 PM

Cloud-native has become a big buzzword around the world. But what does it mean? What advantages does it bring to your application and to your life as a software developer or architect? What’s new in the Java world, and what are the steps to follow for a native cloud app? In this webinar you’ll learn […]

by otaviojava at March 26, 2020 12:19 PM

Payara Platform 201 : New Release Roundup

by Debbie Hoffman at March 26, 2020 10:51 AM

We’ve published a number of blogs about the updates and new features in the Payara Platform 5.201 release, but here are the highlights in case you’ve missed them:


by Debbie Hoffman at March 26, 2020 10:51 AM

ORMs: Heroes or Villains Inside the Architecture?

by otaviojava at March 25, 2020 01:15 PM

In the information age, with new technologies, frameworks, and programming languages, there is an aspect of technology that never changes. Ref: https://dzone.com/articles/orms-heroes-or-villains-indoors-the-architecture

by otaviojava at March 25, 2020 01:15 PM

Load huge amounts of data with Jakarta EE Batch

March 24, 2020 10:00 PM

Processing huge amounts of data is a challenge for every enterprise system. Jakarta EE provides a useful approach to get this done through the Jakarta Batch specification (JSR-352):

Batch processing is a pervasive workload pattern, expressed by a distinct application organization and execution model. It is found across virtually every industry, applied to such tasks as statement generation, bank postings, risk evaluation, credit score calculation, inventory management, portfolio optimization, and on and on. Nearly any bulk processing task from any business sector is a candidate for batch processing.
Batch processing is typified by bulk-oriented, non-interactive, background execution. Frequently long-running, it may be data or computationally intensive, execute sequentially or in parallel, and may be initiated through various invocation models, including ad hoc, scheduled, and on-demand.
Batch applications have common requirements, including logging, checkpointing, and parallelization. Batch workloads have common requirements, especially operational control, which allow for initiation of, and interaction with, batch instances; such interactions include stop and restart.

One of the typical use cases is importing data from different sources and formats into an internal database. Below we will design a sample application that imports data, for example, from JSON and XML files into the database, and see how well structured it can be.

Using the Red Hat CodeReady Studio Eclipse plugin, we can easily design our solution diagram:
[Diagram: the import batch job flow]

The Jakarta Batch descriptor in this case, META-INF/batch-jobs/hugeImport.xml, will look like this:

<?xml version="1.0" encoding="UTF-8"?>
<job id="hugeImport" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/jobXML_1_0.xsd" version="1.0">
    <step id="fileSelector" next="decider">
        <batchlet ref="fileSelectorBatchlet">
            <properties>
                <property name="path" value="/tmp/files2import"/>
            </properties>
        </batchlet>
    </step>
    <decision id="decider" ref="myDecider">
        <next on="xml" to="xmlParser"/>
        <next on="json" to="jsonParser"/>
    </decision>
    <step id="xmlParser" next="chunkProcessor">
        <batchlet ref="xmlParserBatchlet"/>
    </step>
    <step id="jsonParser" next="chunkProcessor">
        <batchlet ref="jsonParserBatchlet"/>
    </step>
    <step id="chunkProcessor">
        <chunk>
            <reader ref="itemReader"/>
            <processor ref="itemMockProcessor"/>
            <writer ref="itemJpaWriter"/>
        </chunk>
        <partition>
            <plan partitions="5"></plan>
        </partition>
    </step>
</job>

So, now we need to implement each brick above, trying to keep each batchlet as independent as possible. As you can see from the descriptor above, our sample job consists of:

  • fileSelector - a batchlet that selects a file based on the file extensions supported by the configuration
  • decider - the decision maker, responsible for choosing the right parser
  • xml/jsonParser - parser batchlets, responsible for parsing the file into a list of items
  • chunkProcessor - an item-processing chunk (reader, optional processor, and writer) with partitioning to boost performance

Before starting with the implementation, let's design a useful solution to share state between steps. Unfortunately, the Jakarta Batch specification does not provide job-scoped CDI beans yet (the JBeret implementation does, the specification doesn't). But we are able to use JobContext.set/getTransientUserData() to deal with the current batch context. In our case we want to share the File and a Queue of items for processing:

@Named
public class ImportJobContext {
    @Inject
    private JobContext jobContext;

    private Optional<File> file = Optional.empty();
    private Queue<ImportItem> items = new ConcurrentLinkedQueue<>();

    public Optional<File> getFile() {
        return getImportJobContext().file;
    }
    public void setFile(Optional<File> file) {
        getImportJobContext().file = file;
    }
    public Queue<ImportItem> getItems() {
        return getImportJobContext().items;
    }

    // The first injected instance registers itself as the job's transient user
    // data; every later instance delegates to that shared instance, so all
    // batchlets in the job see the same state.
    private ImportJobContext getImportJobContext() {
        if (jobContext.getTransientUserData() == null) {
            jobContext.setTransientUserData(this);
        }
        return (ImportJobContext) jobContext.getTransientUserData();
    }
}

Now we can inject our custom ImportJobContext to share type-safe state between batchlets. The first step is to search for a file to process at the path provided in the step properties:

@Named
public class FileSelectorBatchlet extends AbstractBatchlet {

    @Inject
    private ImportJobContext jobContext;

    @Inject
    @BatchProperty
    private String path;

    @Override
    public String process() throws Exception {
        // Close the stream returned by Files.walk to release the directory handles
        try (Stream<Path> paths = Files.walk(Paths.get(path))) {
            Optional<File> file = paths.filter(Files::isRegularFile).map(Path::toFile).findAny();
            if (file.isPresent()) {
                jobContext.setFile(file);
            }
        }
        return BatchStatus.COMPLETED.name();
    }
}

Next, we need to make a decision about the parser, for example, based on the file extension. The decider just returns the file extension as a string, and the batch runtime then gives control to the corresponding parser batchlet. Please check the <decision id="decider" ref="myDecider"> section in the XML batch descriptor above.

@Named
public class MyDecider implements Decider {

    @Inject
    private ImportJobContext jobContext;

    @Override
    public String decide(StepExecution[] ses) throws Exception {
        if (!jobContext.getFile().isPresent()) {
            throw new FileNotFoundException();
        }
        String name = jobContext.getFile().get().getName();
        String extension = name.substring(name.lastIndexOf(".")+1);
        return extension;
    }
}

The parser batchlet, in turn, should parse the file using JSON-B or JAXB, depending on the type, and fill the Queue with ImportItem objects. I would like to use a ConcurrentLinkedQueue to share items between partitions, but if you need some other behavior here, you can provide your own implementation of javax.batch.api.partition.PartitionMapper (see the sketch after the parser code below).

@Named
public class JsonParserBatchlet extends AbstractBatchlet {

    @Inject
    ImportJobContext importJobContext;

    @Override
    public String process() throws Exception {
        // Close the input stream once the file has been parsed
        try (InputStream in = new FileInputStream(importJobContext.getFile().get())) {
            List<ImportItem> items = JsonbBuilder.create().fromJson(
                    in,
                    new ArrayList<ImportItem>(){}.getClass().getGenericSuperclass());
            importJobContext.getItems().addAll(items);
        }
        return BatchStatus.COMPLETED.name();
    }
}
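
Here is a minimal, hypothetical sketch of such a mapper (it is not part of the original sample, and the sizing rule is purely illustrative). It computes the partition plan at runtime instead of using the static <plan partitions="5"/> element from the descriptor:

import javax.batch.api.partition.PartitionMapper;
import javax.batch.api.partition.PartitionPlan;
import javax.batch.api.partition.PartitionPlanImpl;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class ImportPartitionMapper implements PartitionMapper {

    @Inject
    ImportJobContext importJobContext;

    @Override
    public PartitionPlan mapPartitions() throws Exception {
        PartitionPlan plan = new PartitionPlanImpl();
        // Illustrative sizing rule: one partition per 1,000 queued items, capped at 5
        int partitions = Math.min(5, 1 + importJobContext.getItems().size() / 1000);
        plan.setPartitions(partitions);
        return plan;
    }
}

To use it, replace the <plan> element of the chunkProcessor step with <mapper ref="importPartitionMapper"/>.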

The ItemReader then looks as simple as possible; it just polls items from the Queue:

@Named
public class ItemReader extends AbstractItemReader {

    @Inject
    ImportJobContext importJobContext;

    @Override
    public ImportItem readItem() throws Exception {

        return importJobContext.getItems().poll();
    }
}

And now it is time to persist:

@Named
public class ItemJpaWriter extends AbstractItemWriter {

    @PersistenceContext
    EntityManager entityManager;

    @Override
    public void writeItems(List<Object> list) throws Exception {
        for (Object obj : list) {
            ImportItem item = (ImportItem) obj;
            entityManager.merge(item);
        }
    }
}
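
For reference, ImportItem is assumed to be a plain JPA entity; the reader polls it from the queue and the writer merges it. A minimal, hypothetical sketch (the real class with its actual fields is in the GitHub repository linked below):

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class ImportItem {

    @Id
    private Long id;

    private String name; // illustrative payload field

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}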

Actually, that is it! Now we are able to easily extend our application with new parsers, processors, and writers without any changes to existing code; we just describe new (or update existing) flows in the Jakarta Batch descriptor.
Of course, the Jakarta Batch specification provides much more helpful functionality than I have covered in this post (checkpoints, exception handling, listeners, flow control, failed job restarting, etc.), but even this is enough to see how simple, powerful, and well structured it can be.

Note! The WildFly application server implements the Jakarta Batch specification through the batch-jberet subsystem. By default, it is configured to use only 10 threads.

<subsystem xmlns="urn:jboss:domain:batch-jberet:2.0">
    ...
    <thread-pool name="batch">
        <max-threads count="10"/>
        <keepalive-time time="30" unit="seconds"/>
    </thread-pool>
</subsystem>

So, if you are planning intensive usage of the batch runtime, feel free to increase this parameter:

/subsystem=batch-jberet/thread-pool=batch/:write-attribute(name=max-threads, value=100)

The source code of the described sample application is available on GitHub.


March 24, 2020 10:00 PM

Navigating jakarta.ee

by Ivar Grimstad at March 24, 2020 11:51 AM

A new Studio Jakarta EE recording is available!

In this snippet, I go through the new navigation elements added to the specifications part of the Jakarta EE website. Specifically, links to the Eclipse project pages and to an external webpage if one exists.

Some trivia at the end: in which year did we get the shirt I am wearing in the video at JavaOne? Tweet me the reply if you remember it. A photo of yourself in the same shirt is a bonus! No prizes other than fame and glory 🙂


by Ivar Grimstad at March 24, 2020 11:51 AM

Microservices in the Cloud series in English

by otaviojava at March 23, 2020 02:52 PM

Check out the Microservices in the Cloud series in English: https://dzone.com/articles/microservices-in-the-cloud-part-one https://dzone.com/articles/microservices-in-the-cloud-part-two

by otaviojava at March 23, 2020 02:52 PM

Studio Jakarta EE

by Ivar Grimstad at March 23, 2020 12:40 PM

As I wrote in my latest Hashtag Jakarta EE, I just started a new YouTube channel called Studio Jakarta EE. Since I totally forgot to introduce myself in the first video, I uploaded a new one today to correct that. This short video also features Duke as my distinguished guest.

I hope you enjoy this effort as it is a completely new experience for me. These first recordings are probably not up for any Academy Awards yet…

But if you bear with me and my silly stuff now in the beginning as I take my first steps as a YouTuber, my promise to you is that the videos will get better and maybe even contain some relevant information in the near future. Stay tuned!


by Ivar Grimstad at March 23, 2020 12:40 PM

Hot Deploy Feature in Payara Platform 5.201

by Gaurav Gupta at March 23, 2020 11:36 AM

Being productive gives developers a sense of satisfaction and fulfillment. That's why increasing developer productivity is always our priority and we are consistently working towards improving the Payara Platform developer tools and the developer experience.


In this blog, we will show you how to configure a Project in the Apache NetBeans IDE to enable Auto Deploy and Hot Deploy mode.

The Auto Deploy and Hot Deploy modes help developers run and test an application immediately after making changes to its sources, without restarting the server or manually redeploying, to maximize productivity. Auto Deploy is a feature of the Apache NetBeans IDE, while Hot Deploy is a feature of Payara Server. Hot Deploy mode is currently supported only in the Apache NetBeans IDE, as an experimental feature.


by Gaurav Gupta at March 23, 2020 11:36 AM

Hashtag Jakarta EE #12

by Ivar Grimstad at March 22, 2020 11:48 AM

Welcome to the twelfth issue of Hashtag Jakarta EE!

This week, I should have been speaking at JavaLand, one of my favorite conferences.

But as you are aware, this conference was added to the long list of cancelled events this spring.

Being a Developer Advocate normally involves a lot of travel and interacting with people face-to-face. Now that we’re all grounded in one way or the other, I have been exploring the various options for creating video content, either by live-streaming or prerecorded sessions. It is a jungle! The rest of this post describes some of the efforts we have started up this week.

I am super happy with the free Crowdcast channel for JUGs that we were able to set up with funding from Jakarta EE. Make sure to add it to your bookmarks and follow the channel for updates on upcoming events.

This morning I created the Studio Jakarta EE YouTube channel and uploaded the first video, so I can now officially call myself a YouTuber 🙂

At the Eclipse Foundation, we will also start streaming a series of interviews, discussions and live events on the Eclipse Foundation Crowdcast channel.

If you are still hungry for more, take a look at the recordings from last year’s JakartaOne Livestream.


by Ivar Grimstad at March 22, 2020 11:48 AM

LazyInitializationException – What it is and the best way to fix it

by Thorben Janssen at March 20, 2020 08:01 PM


The LazyInitializationException is one of the most common exceptions when working with Hibernate. There are a few easy ways to fix it. But unfortunately, you can also find lots of bad advice online. The proclaimed fixes often replace the exception with a hidden problem that will cause trouble in production. Some of them introduce performance issues, and others might create inconsistent results.

In the following paragraphs, I will explain to you what the LazyInitializationException is, which advice you should ignore, and how to fix the exception instead.

When does Hibernate throw a LazyInitializationException

Hibernate throws the LazyInitializationException when it needs to initialize a lazily fetched association to another entity without an active session context. That’s usually the case if you try to use an uninitialized association in your client application or web layer.

Here you can see a test case with a simplified example.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

TypedQuery<Author> q = em.createQuery(
		"SELECT a FROM Author a",
		Author.class);
List<Author> authors = q.getResultList();
em.getTransaction().commit();
em.close();

for (Author author : authors) {
	List<Book> books = author.getBooks();
	log.info("... the next line will throw LazyInitializationException ...");
	books.size();
}

The database query returns an Author entity with a lazily fetched association to the books this author has written. Hibernate initializes the books attribute with its own List implementation, which handles the lazy loading. When you try to access an element in that List or call a method that operates on its elements, Hibernate’s List implementation recognizes that no active session is available and throws a LazyInitializationException.
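
For context, the example assumes an entity mapping along these lines. This is only a sketch; the attribute names follow the snippets in this article:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // FetchType.LAZY is the default for to-many associations
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    private List<Book> books = new ArrayList<>();

    public String getName() { return name; }
    public List<Book> getBooks() { return books; }
}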

How to NOT fix the LazyInitializationException

As I wrote at the beginning, you can find lots of bad advice on how to fix the LazyInitializationException. Let me quickly explain which suggestions you should ignore.

Don’t use FetchType.EAGER

Some developers suggest changing the FetchType of the association to EAGER. This, of course, fixes the LazyInitializationException, but it introduces performance problems that will show up in production.

When you set the FetchType to EAGER, Hibernate will always fetch the association, even if you don’t use it in your use case. That obviously causes an overhead that slows down your application. But it gets even worse if you don’t use the EntityManager.find method and don’t reference the association in a JOIN FETCH clause. Hibernate then executes an additional query to fetch the association. This often results in the n+1 select issue, which is the most common cause of performance issues.

So please, don’t use FetchType.EAGER. As explained in various articles on this blog, you should always prefer FetchType.LAZY.

Avoid the Open Session in View anti-pattern

When using the Open Session in View anti-pattern, you open and close the EntityManager or Hibernate Session in your view layer. You then call the service layer, which opens and commits a database transaction. Because the Session is still open after the service layer returns the entity, the view layer can then initialize the lazily fetched association.

But after the service layer committed the database transaction, there is no active transaction. Because of that, Hibernate executes each SQL statement triggered by the view layer in auto-commit mode. This increases the load on the database server because it has to handle an extra transaction for each SQL statement. At the end of each of these transactions, the database has to write the transaction log to the disc, which is an expensive operation.

The increased pressure on your database isn’t the only downside of this anti-pattern. It can also produce inconsistent results because you are now using 2 or more independent transactions. As a result, the lazily fetched association might return different data than your service layer used to perform the business logic. Your view layer then presents both pieces of information together, and it might seem as if your application manages inconsistent data.

Unfortunately, Spring Boot uses the Open Session in View anti-pattern by default. It only logs a warning message.

2020-03-06 16:18:21.292  WARN 11552 --- [  restartedMain] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning

You can deactivate it by setting the spring.jpa.open-in-view parameter in your application.properties file to false.
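
In other words, a single line in application.properties turns Open Session in View off:

# application.properties: disable the Open Session in View anti-pattern
spring.jpa.open-in-view=false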

Don’t use hibernate.enable_lazy_load_no_trans

Another suggestion you should avoid is to set the hibernate.enable_lazy_load_no_trans configuration parameter in the persistence.xml file to true. This parameter tells Hibernate to open a temporary Session when no active Session is available to initialize the lazily fetched association. This increases the number of used database connections, database transactions and the overall load on your database.
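
For reference, this is the persistence.xml property the advice warns against, shown only so you can recognize and remove it:

<!-- persistence.xml: avoid this setting -->
<property name="hibernate.enable_lazy_load_no_trans" value="true"/>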

OK, so what should you do instead?

How to fix the LazyInitializationException

The right way to fix a LazyInitializationException is to fetch all required associations within your service layer. The best option for that is to load the entity with all required associations in one query. Or you can use a DTO projection, which doesn’t support lazy loading and needs to be fully initialized before you return it to the client.

Let’s take a closer look at the different options to initialize lazily fetched associations and at the best way to use DTO projections.

Initializing associations with a LEFT JOIN FETCH clause

The easiest way to load an entity with all required associations is to perform a JPQL or Criteria Query with one or more LEFT JOIN FETCH clauses. That tells Hibernate to not only fetch the entity referenced in the projection but also to fetch all associated entities referenced in the LEFT JOIN FETCH clause.

Here you can see a simple example of such a query.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

TypedQuery<Author> q = em.createQuery("SELECT a FROM Author a LEFT JOIN FETCH a.books", Author.class);
List<Author> authors = q.getResultList();

em.getTransaction().commit();
em.close();

for (Author a : authors) {
	log.info(a.getName() + " wrote the books " 
		+ a.getBooks().stream().map(b -> b.getTitle()).collect(Collectors.joining(", "))
	);
}

The query selects Author entities, and the LEFT JOIN FETCH clause tells Hibernate to also fetch the associated Book entities. As you can see in the generated SQL statement, Hibernate not only joins the 2 corresponding tables in the FROM clause, it also adds all columns mapped by the Book entity to the SELECT clause.

select
	author0_.id as id1_0_0_,
	books1_.id as id1_2_1_,
	author0_.name as name2_0_0_,
	author0_.version as version3_0_0_,
	books1_.author_id as author_i7_2_1_,
	books1_.authorEager_id as authorEa8_2_1_,
	books1_.publisher as publishe2_2_1_,
	books1_.publishingDate as publishi3_2_1_,
	books1_.sells as sells4_2_1_,
	books1_.title as title5_2_1_,
	books1_.version as version6_2_1_,
	books1_.author_id as author_i7_2_0__,
	books1_.id as id1_2_0__ 
from
	Author author0_ 
left outer join
	Book books1_ 
		on author0_.id=books1_.author_id

As you can see in the log messages, the query returned an Author entity with an initialized books association.

16:56:23,169 INFO  [org.thoughtsonjava.lazyintitializationexception.TestLazyInitializationException] - Thorben Janssen wrote the books Hibernate Tips - More than 70 solutions to common Hibernate problems

Use a @NamedEntityGraph to initialize an association

You can do the same using a @NamedEntityGraph. The main difference is that the definition of the graph is independent of the query. That enables you to use the same query with different graphs or to use the same graph with various queries.

I explained @NamedEntityGraphs in great detail in a previous article, so I’ll keep the explanation short here. You can define the graph by annotating one of your entity classes with a @NamedEntityGraph annotation. Within this annotation, you can provide multiple @NamedAttributeNode annotations to specify the attributes that Hibernate shall fetch.

@NamedEntityGraph(
    name = "graph.authorBooks",
    attributeNodes = @NamedAttributeNode("books")
)
@Entity
public class Author { ... }

To use this graph, you first need to get a reference to it from your EntityManager. In the next step, you can set it as a hint on your query.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

EntityGraph<?> entityGraph = em.createEntityGraph("graph.authorBooks");
TypedQuery<Author> q = em.createQuery("SELECT a FROM Author a", Author.class)
		.setHint("javax.persistence.fetchgraph", entityGraph);
List<Author> authors = q.getResultList();

em.getTransaction().commit();
em.close();

for (Author a : authors) {
	log.info(a.getName() + " wrote the books " 
		+ a.getBooks().stream().map(b -> b.getTitle()).collect(Collectors.joining(", "))
	);
}

If you look at the generated SQL statement, you can see that there is no difference between a LEFT JOIN FETCH clause and a @NamedEntityGraph. Both approaches result in a query that selects all columns mapped by the Author and the Book entity and return Author entities with an initialized books association.

select
	author0_.id as id1_0_0_,
	books1_.id as id1_2_1_,
	author0_.name as name2_0_0_,
	author0_.version as version3_0_0_,
	books1_.author_id as author_i7_2_1_,
	books1_.authorEager_id as authorEa8_2_1_,
	books1_.publisher as publishe2_2_1_,
	books1_.publishingDate as publishi3_2_1_,
	books1_.sells as sells4_2_1_,
	books1_.title as title5_2_1_,
	books1_.version as version6_2_1_,
	books1_.author_id as author_i7_2_0__,
	books1_.id as id1_2_0__ 
from
	Author author0_ 
left outer join
	Book books1_ 
		on author0_.id=books1_.author_id

Using an EntityGraph to initialize an association

The EntityGraph API provides you with the same functionality as the @NamedEntityGraph annotation. The only difference is that you use a Java API instead of annotations to define the graph. That enables you to adjust the graph definition dynamically.

As you can see in the code snippet, the API-based definition of the graph follows the same concepts as the annotation-based definition. You first create the graph by calling the createEntityGraph method. In the next step, you can add multiple attribute nodes and subgraphs to the graph. I explain all of that in great detail in JPA Entity Graphs: How to Dynamically Define and Use an EntityGraph.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

EntityGraph<Author> entityGraph = em.createEntityGraph(Author.class);
entityGraph.addAttributeNodes("books");
TypedQuery<Author> q = em.createQuery("SELECT a FROM Author a", Author.class)
		.setHint("javax.persistence.fetchgraph", entityGraph);
List<Author> authors = q.getResultList();

em.getTransaction().commit();
em.close();

for (Author a : authors) {
	log.info(a.getName() + " wrote the books " 
		+ a.getBooks().stream().map(b -> b.getTitle()).collect(Collectors.joining(", "))
	);
}

After you have defined the graph, you can use it in the same way as a @NamedEntityGraph, and Hibernate generates an identical query for both of them.

select
	author0_.id as id1_0_0_,
	books1_.id as id1_2_1_,
	author0_.name as name2_0_0_,
	author0_.version as version3_0_0_,
	books1_.author_id as author_i7_2_1_,
	books1_.authorEager_id as authorEa8_2_1_,
	books1_.publisher as publishe2_2_1_,
	books1_.publishingDate as publishi3_2_1_,
	books1_.sells as sells4_2_1_,
	books1_.title as title5_2_1_,
	books1_.version as version6_2_1_,
	books1_.author_id as author_i7_2_0__,
	books1_.id as id1_2_0__ 
from
	Author author0_ 
left outer join
	Book books1_ 
		on author0_.id=books1_.author_id

Using a DTO projection

Fetching all required associations when you load the entity fixes the LazyInitializationException. But there is an alternative that’s an even better fit for all read operations. As I showed in a previous article, DTO projections provide significantly better performance if you don’t want to change the retrieved information.

In these situations, you can use a constructor expression to tell Hibernate to instantiate a DTO object for each record in the result set.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

TypedQuery<AuthorDto> q = em.createQuery(
		"SELECT new org.thoughtsonjava.lazyintitializationexception.dto.AuthorDto(a.name,b.title) FROM Author a JOIN a.books b",
		AuthorDto.class);
List<AuthorDto> authors = q.getResultList();

em.getTransaction().commit();
em.close();

for (AuthorDto author : authors) {
	log.info(author.getName() + " wrote the book " + author.getBookTitle());
}
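
The AuthorDto used above is a plain value class whose constructor matches the constructor expression in the query. A minimal sketch:

public class AuthorDto {

    private final String name;
    private final String bookTitle;

    public AuthorDto(String name, String bookTitle) {
        this.name = name;
        this.bookTitle = bookTitle;
    }

    public String getName() { return name; }
    public String getBookTitle() { return bookTitle; }
}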

Hibernate then generates an SQL statement that only selects the columns that are mapped by the attributes that you reference in the constructor call. This often reduces the number of selected columns and improves the performance even further.

select
	author0_.name as col_0_0_,
	books1_.title as col_1_0_ 
from
	Author author0_ 
inner join
	Book books1_ 
		on author0_.id=books1_.author_id

Conclusion

If you have used Hibernate for a while, you probably had to fix at least one LazyInitializationException. It’s one of the most common ones when working with Hibernate.

As I explained in this article, you can find lots of advice online on how to fix this exception. But a lot of these suggestions only replace the exception with problems that will show up in production.

There are only 2 good solutions to this problem:

  1. You initialize all required associations when you load the entity, using a LEFT JOIN FETCH clause, a @NamedEntityGraph, or the EntityGraph API.
  2. You use a DTO projection instead of entities. DTOs don’t support lazy loading, and you need to fetch all required information within your service layer.

The post LazyInitializationException – What it is and the best way to fix it appeared first on Thoughts on Java.


by Thorben Janssen at March 20, 2020 08:01 PM

What’s New With Jakarta NoSQL? (Part II)

by otaviojava at March 20, 2020 09:54 AM

This article covers the concept of cloud-native and how to run an application with this concept using the latest milestone version of Jakarta NoSQL. Ref: https://dzone.com/articles/whats-new-with-jakarta-nosql-part-ii

by otaviojava at March 20, 2020 09:54 AM

Slow SQL logging with JPA and Wildfly

March 19, 2020 10:00 PM

Recently I wrote about Logging for JPA SQL queries with Wildfly. In this post I'll show you how to configure logging for slow SQL queries.

WildFly uses Hibernate as its JPA provider. So, to enable the slow SQL feature, you just need to provide the hibernate.session.events.log.LOG_QUERIES_SLOWER_THAN_MS property in your persistence.xml:

<properties>
    ...
    <property name="hibernate.session.events.log.LOG_QUERIES_SLOWER_THAN_MS" value="25"/>
    ...
</properties>    

To log slow queries to a separate file, configure logging like this:

/subsystem=logging/periodic-rotating-file-handler=slow_sql_handler:add(level=INFO, file={"path"=>"slowsql.log"}, append=true, autoflush=true, suffix=.yyyy-MM-dd,formatter="%d{yyyy-MM-dd HH:mm:ss,SSS}")
/subsystem=logging/logger=org.hibernate.SQL_SLOW:add(use-parent-handlers=false,handlers=["slow_sql_handler"])

Note!
The functionality described above has been available since Hibernate version 5.4.5, but the current WildFly release (WildFly 19) uses Hibernate 5.3. Fortunately, if you can't wait to enjoy the latest version of Hibernate, you can use WildFly feature packs to create a custom server with a different version of Hibernate ORM in a few simple steps:

Create a provisioning configuration file (provision.xml):

<server-provisioning xmlns="urn:wildfly:server-provisioning:1.1" copy-module-artifacts="true">
    <feature-packs>
	<feature-pack
		groupId="org.hibernate"
		artifactId="hibernate-orm-jbossmodules"
		version="${hibernate-orm.version}" />
	<feature-pack
		groupId="org.wildfly"
		artifactId="wildfly-feature-pack"
		version="${wildfly.version}" />
    </feature-packs>
</server-provisioning>

Create a Gradle build file (build.gradle):

plugins {
  id "org.wildfly.build.provision" version '0.0.6'
}
repositories {
    mavenLocal()
    mavenCentral()
    maven {
        name 'jboss-public'
        url 'https://repository.jboss.org/nexus/content/groups/public/'
    }
}
provision {
    //Optional destination directory:
    destinationDir = file("wildfly-custom")

    //Update the JPA API:
    override( 'org.hibernate.javax.persistence:hibernate-jpa-2.1-api' ) {
        groupId = 'javax.persistence'
        artifactId = 'javax.persistence-api'
        version = '2.2'
    }
    configuration = file( 'provision.xml' )
    //Define variables which need replacing in the provisioning configuration!
    variables['wildfly.version'] = '17.0.0.Final'
    variables['hibernate-orm.version'] = '5.4.5.Final'
}

Build the custom WildFly version:

gradle provision

Switch to the new Hibernate ORM slot in your persistence.xml:

<properties>
    <property name="jboss.as.jpa.providerModule" value="org.hibernate:5.4"/>
</properties>

Enjoy!


March 19, 2020 10:00 PM

Hashtag Jakarta EE #11

by Ivar Grimstad at March 15, 2020 10:59 AM

Welcome to the eleventh issue of Hashtag Jakarta EE!

It’s been a special week. But at least, in our industry, we are pretty well equipped and used to remote working. Let’s just cross our fingers and hope that the measures taken will get the situation under control and we can get back to normal as soon as possible.

Supporting the Community

At the end of the week, we launched an initiative sponsored by the Jakarta EE Working Group to enable Java User Groups around the world to stream live events for free using our Crowdcast account. This will be available for at least as long as physical meetups are put on hold due to the Covid-19 situation.

Over to some updates about what is going on in the MicroProfile community.

Push vs Pull

The hangout this Tuesday was devoted in its entirety to discussing the Pull vs Push approach for technical alignment with other standardization bodies, such as Jakarta EE. The vote is ongoing and will close on Tuesday, March 17. Check out the MicroProfile Calendar for details about how to join the MicroProfile hangouts.

The current status of the voting indicates that the decision will be to go for a Pull model. What will this mean for Jakarta EE?

Implications of Pull

The obvious consequence of the Pull model is that if Jakarta EE decides to pull in a MicroProfile specification, it will essentially mean a fork. For those runtimes supporting either MicroProfile or Jakarta EE, but not both, it will be business as usual.

Those supporting both will have the headache of figuring out how to implement this. It will probably not be a hard nut to crack until one or both tines of the fork start evolving.

A possible Scenario

Let’s say that Jakarta EE decides to pull in MicroProfile Config and create a specification called Jakarta Config. The base package is changed from org.eclipse.microprofile.config to jakarta.config and the specification is added to the Jakarta EE Full and/or Web Profile.
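
To make the package change concrete, here is an illustrative snippet; jakarta.config did not exist at the time of writing, so the second import is purely hypothetical:

import javax.inject.Inject;
// Today: MicroProfile Config
import org.eclipse.microprofile.config.inject.ConfigProperty;
// After a hypothetical fork: the same API under the jakarta.config namespace
// import jakarta.config.inject.ConfigProperty;

public class GreetingService {

    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;
}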

Other Jakarta specifications are now free to reference Jakarta Config and implementations are required to implement it in order to be Jakarta EE Compatible. Products out there supporting both (e.g. OpenLiberty, WildFly, and Payara to mention a few open-source implementations), will now have two configuration options that are more or less identical.

Let’s say, then, that Jakarta Config adds a nifty feature. Should this feature be back-ported to MicroProfile Config? Or should MicroProfile Config be abandoned and Jakarta Config added to the base Jakarta specs required for MicroProfile?

I think these questions need to be addressed somehow, and it is up to the vendors behind these initiatives to figure out a strategy that is in the best interest of their customers. It is too easy a way out to say that the community will self-regulate this.


by Ivar Grimstad at March 15, 2020 10:59 AM

Payara Platform Supports TLS 1.3 on JDK 8

by Susan Rai at March 13, 2020 11:28 AM

Transport Layer Security (TLS) was introduced as a replacement for Secure Sockets Layer (SSL). TLS is a cryptographic protocol which provides secure communication between a client and a server. It also provides a mechanism by which information is not tampered with, falsified or read by anyone other than the intended receiver. TLS 1.3 was released in August 2018 to replace the widely used TLS 1.2. TLS 1.3 comes with stronger cryptographic algorithms and brings in major improvements in performance, security and privacy, which will be discussed in this blog.


by Susan Rai at March 13, 2020 11:28 AM

Jakarta EE Community Update March 2020

by Tanja Obradovic at March 11, 2020 04:47 PM

Welcome to the latest Jakarta EE update. We have a number of initiatives underway and many great opportunities to get involved with Jakarta EE, so I’ll get right to it.

The Adopt-A-Spec Program for Java User Groups Is Now Live

We encourage all Java User Groups (JUGs) to adopt a Jakarta EE spec. You’ll find the instructions to sign up, along with more information about the program, here.

We’re very pleased to tell you that the Madras JUG in India and the SouJava JUG in Brazil are the first JUGs to adopt a Jakarta EE specification.

It’s Now Even Easier to Get Involved with Jakarta EE!

We welcome you to get involved and we made it simpler for you to join! Please see below to learn more about the steps to become a contributor and a committer. 

For details about specification project committer agreements, check out Wayne Beaton’s blog post on the topic.

We welcome everyone who wants to get involved with Jakarta EE!

Great progress on Jakarta EE 9

Work on Jakarta EE 9 is now underway, and you can check the progress we’re making here. The team will attempt to get a release candidate out this week for the Platform and Web Profile APIs!

For additional insight into Jakarta EE 9, check out the:

·  Jakarta EE Platform specification page

·  GitHub page

Alignment With MicroProfile Specs Is up to Us

After a recent MicroProfile Hangout discussion, it was decided that MicroProfile will produce specs and other communities, including Jakarta EE, can determine how they want to adopt them.

You can find a summary of the discussion by John Clingan in the thread MicroProfile Working Group discussion – Push vs pull on the MicroProfile mailing list.

If you’d like to join MicroProfile discussions, check out the calendar here.

CN4J Day at KubeCon Amsterdam Is Postponed

With the postponement of the KubeCon + CloudNativeCon event, we’ve also postponed CN4J Day, which was originally planned for March 30. We’ll let you know when the event is rescheduled as soon as we can.

In the meantime, you can follow updates about KubeCon rescheduling and get more information about what the postponement means for your involvement here.

Jakartification of Oracle Specs: We always welcome your help!

Thanks to everyone who has been helping us Jakartify the Oracle specifications. We’re making progress, but we still need helping hands. Now that we have the copyright for all of the specifications that Oracle contributed, there’s a lot of work to do.

To help you get started:

·  The Specification Committee created a document that explains how to convert Java EE specifications to Jakarta EE.

·  Ivar Grimstad provided a demo during the community call in October. You can view it here.

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

Our next call is Wednesday, March 18 at 10:00 a.m. EST using this meeting link.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·  The complete playlist.

·  February 12 call and presentation, featuring Wayne Beaton’s update on enabling individual participation in Jakarta EE, Shabnam Mayel’s update on enabling JUG participating, and Ivar Grimstad’s update on Jakarta EE 9.

February Event Summary

February was a busy month for events:

·  FOSDEM. Eclipse Foundation Executive Director, Mike Milinkovich, presented to a full room at this unique, free event in Brussels. For more insight, read Ivar’s blog.

·  Devnexus. We hosted a Cloud Native for Java Meetup for more than 100 participants at this conference organized by the Atlanta JUG. We also had a Jakarta EE booth in the community corner of the exhibition hall. This is an awesome event for Java developers with 2,400 attendees and world-class speakers. Here’s a photo to inspire you to attend next year.

 

·  JakartaOne Livestream - Japan. The first Livestream event in Japanese was a success with 211 registered participants. You can watch the replay here.

·  ConFoo. Ivar spoke at the 18th edition of this event in Montreal, Canada. For more information, read Ivar’s blog.

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

Bookmark the Jakarta EE Community Calendar to learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk.

 


by Tanja Obradovic at March 11, 2020 04:47 PM

Payara Platform Roadmap Planning for 2020

by Steve Millidge at March 11, 2020 11:57 AM

Starting with the latest Payara Platform 201 release, we've made changes to how we build and report our future platform roadmap. We recently introduced the Payara Reef initiative to enhance our communication with the Payara community, and as part of the Reef initiative, we are also introducing the Open Roadmap for the Payara Platform.


by Steve Millidge at March 11, 2020 11:57 AM

The Payara Monthly Catch for Feb 2020

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at March 10, 2020 12:00 PM

It's been a little while since the last update. Your humble author has been on the road, most recently at DevNexus in Atlanta, where we met many awesome people and had a great time. We also just published our latest release, Payara Platform 5.201. We won't lament further; as usual, we have kept our eyes open and have been squirrelling away some great content.

Below you will find a curated list of some of the most interesting news, articles and videos from this month. Can't wait until the end of the month? Then visit our Twitter page where we post all these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at March 10, 2020 12:00 PM

What’s New With Jakarta NoSQL? (Part 1): Intro to Document With MongoDB

by otaviojava at March 10, 2020 08:50 AM

This post will talk about the newest milestone version of this new specification and more. https://dzone.com/articles/whats-new-with-jakarta-nosql-part-i-introduction-t

by otaviojava at March 10, 2020 08:50 AM

JakartaOne Livestream Japan 2020

by Kenji Hasunuma at March 09, 2020 03:54 PM

I spoke at JakartaOne Livestream Japan, held on February 26th, 2020, on behalf of the Payara Services Team!


by Kenji Hasunuma at March 09, 2020 03:54 PM

JakartaOne Livestream Japan 2020 (Japanese)

by Kenji Hasunuma at March 09, 2020 04:15 AM

This event was a virtual conference organized by volunteers from the Japanese Java community who were inspired by the JakartaOne Livestream held last September. As a member of the conference's program committee, I was involved in both the advance preparations and the operation on the day itself. Events like this are still unprecedented in Japan, and we had to feel our way forward (even while the conference was in progress), but we did our best to introduce Jakarta EE and MicroProfile to the wider Japanese Java community.


by Kenji Hasunuma at March 09, 2020 04:15 AM

Hashtag Jakarta EE #10

by Ivar Grimstad at March 08, 2020 08:33 PM

Welcome to the tenth issue of Hashtag Jakarta EE!

Join me in celebrating the 10th issue of this series! I can’t believe we’re already at 10.

In the MicroProfile Hangout this week, the discussion around technical alignment with related technologies and standardization efforts (such as Jakarta EE) continued. Two approaches have crystallized themselves and the goal is to come to a decision at the hangout next week (Tuesday, March 10). The models discussed are push and pull.

The Pull Model implies that MicroProfile creates and evolves specifications without regard to downstream consumer requirements (e.g. Jakarta).

With the Push Model, MicroProfile specifications, when mature/stable, are transferred to external organizations (e.g. Jakarta EE).

As I mentioned above, the goal is to settle which approach to go for at the MicroProfile Hangout next week. Make sure to tune in to that one. Refer to the MicroProfile Calendar for details about the call.

The Virus

Due to Covid-19, a large number of events are being canceled. So far, those I had scheduled to speak at are dev.next, Red Hat Summit, and IBM Think. All of these will be replaced by virtual events.

While virtual events are better than nothing, I don’t think they will ever be able to fully replace face-to-face events. The hallway discussions and the social benefit of actually meeting someone physically are what make conferences irreplaceable. I’m looking forward to continuing to meet you all on the conference circuit once the dust has settled. Meanwhile, I will participate in virtual events as well as smaller gatherings and meetups where they are possible to arrange.


by Ivar Grimstad at March 08, 2020 08:33 PM

Firebase push notifications with Eclipse Microprofile Rest Client

March 04, 2020 10:00 PM

Nowadays, push notifications are a must-have feature for any trending application. Firebase Cloud Messaging (FCM) is a free (at least at the moment) cross-platform solution for messages and notifications for Android, iOS, and Web applications.


To enable push notifications on the client side, you should create a Firebase project and follow the manual or examples. From the server-side perspective, all you need to send a push notification is:

  • Server key - will be created for your firebase project
  • Instance ID token - id of specific subscribed instance (instance destination id)

Firebase provides the https://fcm.googleapis.com/fcm/send endpoint with a very simple HTTP API, for example:

{
    "to": "<Instance ID token>",
    "notification": {
        "title": "THIS IS MP REST CLIENT!",
        "body": "The quick brown fox jumps over the lazy dog."
    }
}

So, let's design a simple MicroProfile REST Client to deal with the above:

@Path("/")
@RegisterRestClient(configKey = "push-api")
public interface PushClientService {

    @POST
    @Path("/fcm/send")
    @Produces("application/json")
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    void send(PushMessage msg);

    default String generateAuthHeader() {
        return "key=" + ConfigProvider.getConfig().getValue("firebase.server_key", String.class);
    }
}
public class PushMessage {

    public String to;
    public PushNotification notification;

    public static class PushNotification {
        public String title;
        public String body;
    }
}

and application.properties

# firebase server key
firebase.server_key=<SERVER_KEY>
# rest client
push-api/mp-rest/url=https://fcm.googleapis.com/

Actually, this is it! Now you are able to @Inject the PushClientService and enjoy push notifications.

@Inject
@RestClient
PushClientService pushService;
...
pushService.send(message);
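For completeness, a hypothetical way to assemble the message before sending it could look like this (the instance ID token value is a placeholder):

PushMessage message = new PushMessage();
message.to = "<Instance ID token>"; // token of the subscribed client instance
message.notification = new PushMessage.PushNotification();
message.notification.title = "THIS IS MP REST CLIENT!";
message.notification.body = "The quick brown fox jumps over the lazy dog.";
pushService.send(message);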

If you would like to test how it works from the client-side perspective, feel free to use the test web application to generate an instance ID token and check notification delivery.

The source code of the described sample application, with a swagger-ui endpoint and firebase.server_key, is available on GitHub


March 04, 2020 10:00 PM

What’s New in Payara Platform 201?

by Jan Bernitt at March 02, 2020 02:03 PM

The first Payara Platform release in 2020 is out. Highlighted updates include new End-to-End data encryption and the extension of the Monitoring Console. The release includes 31 bug fixes, 5 new features, 12 improvements and 21 component upgrades!

Check out the release notes here.


by Jan Bernitt at March 02, 2020 02:03 PM

Hashtag Jakarta EE #9

by Ivar Grimstad at March 02, 2020 12:02 AM

Welcome to the ninth issue of Hashtag Jakarta EE!

This week, I had the pleasure of speaking at ConFoo in Montreal. It was my second time speaking at this conference. This was the 18th edition and the number of attendees has increased every year. 839 registered attendees this year!

My first talk was a live coding session where I demoed most aspects of Eclipse MicroProfile.

On the second day, I did the Microservice Patterns talk where I go through a list of microservice patterns and show how each of them is implemented with Eclipse MicroProfile.

An interesting observation regarding the strength of the various brands is that when I did a poll at the beginning of both my talks, about 5% had heard about MicroProfile, about 50% about Jakarta EE and 100% had heard about Spring Boot. It should be noted that ConFoo is originally a PHP conference that has extended out to include more technologies, so the audience was not necessarily 100% hardcore server-side Java developers. But still interesting to see how the awareness of Jakarta EE is growing.

To round off with something sweet: after the conference, we went on a trip to a sugar shack outside of Montreal.

The favorite at the table where I was seated was bacon with maple syrup!


by Ivar Grimstad at March 02, 2020 12:02 AM

Jersey 2.30.1 has been released

by Jan at March 01, 2020 11:21 PM

It has been a while since we have released Jersey 2.30. On the client-side, we introduced new PreInvocationInterceptor, PostInvocationInterceptor, and InvocationBuilderListener interfaces. We made the default Rx client using the AsyncInvoker (unlike the RxInvokerProvider). We worked hard to make the Apache HttpClient … Continue reading

by Jan at March 01, 2020 11:21 PM

Optimize your code for Quarkus

by Jean-François James at February 24, 2020 05:18 PM

My previous article was about running a Jakarta EE/MicroProfile application with minimum changes. My purpose was to keep the Java code as standard as possible so that it can keep running on other implementations such as OpenLiberty, Payara, KumuluzEE, and TomEE. This article proposes an alternative: how to optimize your code for Quarkus? It turns out that […]

by Jean-François James at February 24, 2020 05:18 PM

Well secured and documented REST API with Eclipse Microprofile and Quarkus

February 19, 2020 10:00 PM

The Eclipse MicroProfile specification provides many helpful sections about building well-designed microservice-oriented applications. OpenAPI, JWT Propagation, and JAX-RS are some of them.
To see how it works in practice, let's design two typical REST resources: an unsecured token resource to generate a JWT, and a secured user resource, based on the Quarkus MicroProfile implementation.

The easiest way to bootstrap a Quarkus application from scratch is to generate the project structure with the provided starter page, code.quarkus.io. Just select the build tool you like and the extensions you need. In our case these are:

  • SmallRye JWT
  • SmallRye OpenAPI

I prefer Gradle, and my build.gradle looks pretty simple:

group 'org.kostenko'
version '1.0.0'
plugins {
    id 'java'
    id 'io.quarkus'
}
repositories {
     mavenLocal()
     mavenCentral()
}
dependencies {
    implementation 'io.quarkus:quarkus-smallrye-jwt'
    implementation 'io.quarkus:quarkus-smallrye-openapi'
    implementation 'io.quarkus:quarkus-resteasy-jackson'    
    implementation 'io.quarkus:quarkus-resteasy'
    implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}")
    testImplementation 'io.quarkus:quarkus-junit5'
    testImplementation 'io.rest-assured:rest-assured'
}
compileJava {
    options.compilerArgs << '-parameters'
}

Now we are ready to improve a standard JAX-RS service with OpenAPI and JWT features:

@RequestScoped
@Path("/user")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Tags(value = @Tag(name = "user", description = "All the user methods"))
@SecurityScheme(securitySchemeName = "jwt", type = SecuritySchemeType.HTTP, scheme = "bearer", bearerFormat = "jwt")
public class UserResource {

    @Inject
    @Claim("user_name")
    Optional<JsonString> userName;

    @POST
    @PermitAll
    @Path("/token/{userName}")
    @APIResponses(value = {
        @APIResponse(responseCode = "400", description = "JWT generation error"),
        @APIResponse(responseCode = "200", description = "JWT successfuly created.", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Create JWT token by provided user name")
    public User getToken(@PathParam("userName") String userName) {
        User user = new User();
        user.setJwt(TokenUtils.generateJWT(userName));
        return user;    
    }

    @GET
    @RolesAllowed("user")
    @Path("/current")
    @SecurityRequirement(name = "jwt", scopes = {})
    @APIResponses(value = {
        @APIResponse(responseCode = "401", description = "Unauthorized Error"),
        @APIResponse(responseCode = "200", description = "Return user data", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Return user data by provided JWT token")
    public User getUser() {
        User user = new User();
        user.setName(userName.get().toString());
        return user;
    }
}

First, let's take a brief look at the OpenAPI annotations used:

  • @Tags(value = @Tag(name = "user", description = "All the user methods")) - Represents a tag. A tag is meta-information you can use to help organize your API endpoints.
  • @SecurityScheme(securitySchemeName = "jwt", type = SecuritySchemeType.HTTP, scheme = "bearer", bearerFormat = "jwt") - Defines a security scheme that can be used by the operations.
  • @APIResponse(responseCode = "401", description = "Unauthorized Error") - Corresponds to the OpenAPI response model object, which describes a single response from an API operation.
  • @Operation(summary = "Return user data by provided JWT token") - Describes an operation, typically an HTTP method against a specific path.
  • @Schema(implementation = User.class) - Allows the definition of input and output data types.

For more details about OpenAPI annotations, please refer to the MicroProfile OpenAPI Specification.

After starting the application, you will be able to get your OpenAPI description in .yaml format at http://0.0.0.0:8080/openapi, or even enjoy the Swagger UI at http://0.0.0.0:8080/swagger-ui/.

Note: by default, swagger-ui is available in dev mode only. If you would like to keep Swagger in production, add the following property to your application.properties:

quarkus.swagger-ui.always-include=true

The second part of this post covers JWT role-based access control (RBAC) for microservice endpoints. JSON Web Tokens are an open, industry-standard (RFC 7519) method for representing claims securely between two parties, and below we will see how easily they can be integrated into your application with Eclipse MicroProfile.

As JWT relies on cryptography, we need to generate a public/private key pair before we start coding:

# Generate a private key
openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048

# Derive the public key from the private key
openssl rsa -pubout -in private_key.pem -out public_key.pem

Now we are able to generate a JWT and sign the data with our private key, for example like this:

public static String generateJWT(String userName) throws Exception {

    // current time in seconds, used by the time-based claims below
    long currentTimeInSecs = System.currentTimeMillis() / 1000;

    Map<String, Object> claimMap = new HashMap<>();
    claimMap.put("iss", "https://kostenko.org");
    claimMap.put("sub", "jwt-rbac");
    claimMap.put("exp", currentTimeInSecs + 300);
    claimMap.put("iat", currentTimeInSecs);
    claimMap.put("auth_time", currentTimeInSecs);
    claimMap.put("jti", UUID.randomUUID().toString());
    claimMap.put("upn", "UPN");
    claimMap.put("groups", Arrays.asList("user"));
    claimMap.put("raw_token", UUID.randomUUID().toString());
    claimMap.put("user_name", userName);

    return Jwt.claims(claimMap).jws().signatureKeyId("META-INF/private_key.pem").sign(readPrivateKey("META-INF/private_key.pem"));
}
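The readPrivateKey helper is not shown in the snippet above. A minimal sketch, assuming Java 11+ and that the PKCS#8 PEM file produced by openssl genpkey is on the classpath, could look like this:

public static PrivateKey readPrivateKey(String resource) throws Exception {
    // read the PEM file from the classpath
    String pem = new String(TokenUtils.class.getClassLoader()
            .getResourceAsStream(resource).readAllBytes(), StandardCharsets.UTF_8);
    // strip the PEM header/footer and whitespace to get the Base64 body
    String base64 = pem.replace("-----BEGIN PRIVATE KEY-----", "")
            .replace("-----END PRIVATE KEY-----", "")
            .replaceAll("\\s", "");
    byte[] der = Base64.getDecoder().decode(base64);
    // 'openssl genpkey' emits PKCS#8, which KeyFactory can parse directly
    return KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(der));
}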

For additional information about the JWT structure, please refer to https://jwt.io

Time to review our application's security annotations:
@RequestScoped - not a security annotation as such, but since the JWT is request scoped, we need it for the injection to work correctly;
@PermitAll - specifies that all security roles are allowed to invoke the specified method;
@RolesAllowed("user") - specifies the list of roles permitted to access the method;
@Claim("user_name") - allows us to inject a field provided by the JWT.

To configure JWT in your application.properties, please add:

quarkus.smallrye-jwt.enabled=true
mp.jwt.verify.publickey.location=META-INF/public_key.pem
mp.jwt.verify.issuer=https://kostenko.org

# quarkus.log.console.enable=true
# quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE
# quarkus.log.category."io.undertow.request.security".level=TRACE

And actually, that is it: if you try to reach the /user/current service without a JWT token (or with a bad one) in the Authorization header, you will get an HTTP 401 Unauthorized error.

curl example:

curl -X GET "http://localhost:8080/user/current" -H "accept: application/json" -H "Authorization: Bearer eyJraWQiOiJNRVRBLUlORi9wcml2YXRlX2tleS5wZW0iLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJqd3QtcmJhYyIsInVwbiI6IlVQTiIsInJhd190b2tlbiI6IjQwOWY3MzVkLTQyMmItNDI2NC1iN2UyLTc1YTk0OGFjMTg3MyIsInVzZXJfbmFtZSI6InNlcmdpaSIsImF1dGhfdGltZSI6MTU4MjE5NzM5OSwiaXNzIjoiaHR0cHM6Ly9rb3N0ZW5rby5vcmciLCJncm91cHMiOlsidXNlciJdLCJleHAiOjkyMjMzNzIwMzY4NTQ3NzU4MDcsImlhdCI6MTU4MjE5NzM5OSwianRpIjoiMzNlMGMwZjItMmU0Yi00YTczLWJkNDItNDAzNWQ4NTYzODdlIn0.QteseKKwnYJWyj8ccbI1FuHBgWOk98PJuN0LU1vnYO69SYiuPF0d9VFbBada46N_kXIgzw7btIc4zvHKXVXL5Uh3IO2v1lnw0I_2Seov1hXnzvB89SAcFr61XCtE-w4hYWAOaWlkdTAmpMSUt9wHtjc0MwvI_qSBD3ol_VEoPv5l3_W2NJ2YBnqkY8w68c8txL1TnoJOMtJWB-Rpzy0XrtiO7HltFAz-Gm3spMlB3FEjnmj8-LvMmoZ3CKIybKO0U-bajWLPZ6JMJYtp3HdlpsiXNmv5QdIq1yY7uOPIKDNnPohWCgOhFVW-bVv9m-LErc_s45bIB9djwe13jFTbNg"
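For reference, a token for the request above can first be obtained from the unsecured endpoint described earlier (the user name is arbitrary):

curl -X POST "http://localhost:8080/user/token/sergii" -H "accept: application/json"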

The source code of the described sample application is available on GitHub


February 19, 2020 10:00 PM

Scope + Communication – The magic formula of microservices

by Thorben Janssen at February 19, 2020 08:30 AM

The post Scope + Communication – The magic formula of microservices appeared first on Thoughts on Java.

For quite some time, finding the right scope of a microservice was proclaimed to solve all problems. If you do it right, implementing your service is supposed to be easy, your services are independent of each other, and you don’t need to worry about any communication between your services.

Unfortunately, reality didn’t hold up to this promise all too well. Don’t get me wrong, finding the right scope of a service helps. Implementing a few right-sized services is much easier than creating lots of services that are too small and that depend on each other. Unfortunately, that doesn’t mean that all problems are solved or that there is no communication between your services.

But let’s take a step back and discuss what “the right scope” means and why it’s so important.

What is the right scope of a microservice?

Finding the right scope of a service is a lot harder than it might seem. It requires a good understanding of your business domain. That’s why most architects agree that a bounded context, as it’s defined by Domain-Driven Design, represents a proper scope of a microservice.

Interestingly enough, when we talk about a bounded context, we don’t talk about size. We talk about the goal of keeping the model of a bounded context internally consistent. That means that there is only one exact definition of each concept. If you try to model the whole business domain, that’s often hard to achieve.

A customer in an order management application, for example, is different from a customer in an online store. The customer in the store browses around and might or might not decide to buy something. We have almost no information about that person. A customer in an order management application, on the other hand, has bought something, and we know the name and their payment information. We also know which other things that person bought before.

If you try to use the same model of a customer for both subsystems, your definition of a customer loses a lot of precision. If you talk about customers, nobody exactly knows which kind of customer you mean.

All of that gets a lot easier and less confusing if you split that model into multiple bounded contexts. That enables you to have 2 independent definitions of a customer: one for the order management and one for the online store. Within each context, you can precisely define what a customer is.

The same is true for monolithic and microservice applications. A monolith is often confusing, and there might be different definitions or implementations of the same concept within the application. That is confusing and makes the monolith hard to understand and maintain. But if you split it into multiple microservices, this gets a lot easier. If you do it right, there are no conflicting implementations or definitions of the same concept within one microservice.

Bounded contexts and microservices are connected

As you can see, there is an apparent similarity between microservices and bounded contexts. And that’s not the only one. There is another similarity that often gets ignored. Bounded contexts in DDD can be connected to other services. You’re probably not surprised if I tell you that the same is true for microservices.

These connections are necessary, and you can’t avoid them. You might use different definitions of a customer in your online store and your order management application. But for each customer in your order management system, there needs to be a corresponding customer in the online store system. And sooner or later, someone will ask you to connect this information.

Let’s take a closer look at a few situations in which we need to share data between microservices.

Data replication

The most obvious example of services that need to exchange data are services that provide different functionalities on the same information. Typical examples of services that use data owned by other services are management dashboards, recommendation engines, and any other kind of application that needs to aggregate information.

The functionality provided by these services shouldn’t become part of the services that are owning the data. By doing that, you would implement 2 or more separate bounded contexts within the same application. That will cause the same issues as we had with unstructured monoliths.

It’s much better to replicate the required information asynchronously instead. As an example, the order, store, and inventory service replicate their data asynchronously, and the management dashboard aggregates them to provide the required statistics to the managers.

When you implement such a replication, it’s important to ensure that you don’t introduce any direct dependencies between your services. In general, this is achieved by exchanging messages or events via a message broker or an event streaming platform.

There are various patterns that you can use to replicate data and decouple your services. In my upcoming Data and Communication Patterns for Microservices course, I recommend using the Outbox Pattern. It’s relatively easy to implement, enables great decoupling of your services, scales well, and ensures a reasonable level of consistency.
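To make the idea of the Outbox Pattern a bit more concrete, here is a rough sketch (OrderService, OutboxEvent, and toJson are hypothetical names, not code from a specific framework): the service writes its business data and an outbox record in the same local transaction, and a separate relay publishes the outbox records to the message broker.

@ApplicationScoped
public class OrderService {

    @PersistenceContext
    EntityManager em;

    @Transactional
    public void placeOrder(Order order) {
        // 1. persist the business data
        em.persist(order);

        // 2. persist the event in an "outbox" table within the SAME transaction,
        //    so the event is stored if and only if the order is stored
        em.persist(new OutboxEvent("Order", order.getId(), "OrderPlaced", toJson(order)));

        // a separate relay (e.g., a polling job or change data capture) reads the
        // outbox table and publishes the events to the message broker asynchronously
    }
}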

Coordinate complex operations

Another example is a set of services that need to work together to perform a complex business operation. In the case of an online store, that might be the order management service, the payment service, and the inventory service. All 3 of them model independent contexts, and there are lots of good reasons to keep them separate.

But when a customer orders something, all 3 services need to work together. The order management service needs to receive and handle the order. The payment service processes the payment and the inventory service reserves and ships the products.

Each service can be implemented independently, and it provides its part of the overall functionality. But you need some form of coordination to make sure that each order gets paid before you ship the products or that you only accept orders that you can actually fulfill.

As you can see, this is another example of services that need to communicate and exchange data. The only alternative would be to merge these services into one and to implement a small monolith. But that’s something we decided to avoid.

You can implement such operations using different patterns. If you do it right, you can avoid any direct dependencies between your services. I recommend using one of the 2 forms of the SAGA patterns, which I explain in great detail in my Data and Communication Patterns for Microservices course.

You need the right scope and the right communication

To sum it up, finding the proper scope for each service is important. It makes the implementation of each service easier and avoids any unnecessary communication or dependencies between your services.

But that’s only the first step. After you carefully defined the scope of your services, there will be some services that are connected to other services. Using the right patterns, you can implement these connections in a reliable and scalable way without introducing direct dependencies between your services.

The post Scope + Communication – The magic formula of microservices appeared first on Thoughts on Java.


by Thorben Janssen at February 19, 2020 08:30 AM

Simple note about using JPA relation mappings

February 13, 2020 10:00 PM

There are a lot of typical examples of how to build JPA @OneToMany and @ManyToOne relationships in your Jakarta EE application, and usually they look like this:

@Entity
@Table(name = "author")
public class Author {
    @OneToMany
    private List<Book> book;
    ...
}
@Entity
@Table(name = "book")
public class Book {
    @ManyToOne
    private Author author;
    ...
}

This code looks pretty clear, but in my opinion you should NOT use this style in a real-world application. From years of experience with JPA, I can definitely say that sooner or later your project will get stuck on well-known performance issues and holy-war questions: N+1 queries, LazyInitializationException, unidirectional @OneToMany, cascade types, LAZY vs EAGER, JOIN FETCH, entity graphs, fetching lots of unneeded data, extra queries (for example, selecting an Author by id before persisting a Book), et cetera. Even if you have answers for each potential issue above, the proposed solution will usually add unreasonable complexity to the project.

To avoid potential issues, I recommend following these rules:

  • Avoid using @OneToMany at all
  • Use @ManyToOne to build constraints, but work with the ID instead of the entity

Unfortunately, the simple snippet below does not work as expected when persisting:

@ManyToOne(targetEntity = Author.class)
private long authorId;

But we can use the next one instead:

@JoinColumn(name = "authorId", insertable = false, updatable = false)
@ManyToOne(targetEntity = Author.class)
private Author author;

private long authorId;

public long getAuthorId() {
    return authorId;
}

public void setAuthorId(long authorId) {
    this.authorId = authorId;
}
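With this mapping, persisting a Book only requires the author's id. A minimal usage sketch (em is assumed to be an injected EntityManager):

Book book = new Book();
book.setAuthorId(authorId); // no need to load the Author entity first
em.persist(book);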

I hope these two simple rules help you enjoy all the power of JPA while keeping things simple and decreasing complexity.


February 13, 2020 10:00 PM

Distributed Transactions – Don’t use them for Microservices

by Thorben Janssen at February 12, 2020 02:09 PM

The post Distributed Transactions – Don’t use them for Microservices appeared first on Thoughts on Java.

Since I started talking about microservices and the challenges that you have to solve whenever you want to exchange data between your services, I hear 3 things:

  1. You only need to model the scope of your services “the right way” to avoid these problems.
  2. We use multiple local transactions, and everything works fine. It’s really not that big of a deal.
  3. We have always used distributed transactions to ensure data consistency. We will keep doing that for our microservice architecture.

Let’s quickly address the first 2 answers before we get to the main part of this article.

Designing services the right way

It’s a popular myth that you can solve all problems by designing the scope of your services the right way. That might be the case for highly scalable “hello” world applications that you see in demos. But it doesn’t work that way in the real world.

Don’t get me wrong; designing the scope of your services is important, and it makes the implementation of your application easier. But you will not be able to avoid communication between your services completely. You always have some services that offer their functionality based on other services.

An example of that is an OrderInfo service in an online bookstore. It shows the customer the current status of their order based on the information managed by the Order service, the Inventory service, and the Book service.

Another example is an Inventory service, which needs to reserve a book for a specific order and prepare it for delivery after the Order and the Payment service processed the order.

In these cases, you either:

  • Implement some form of data exchange between these services or
  • Move all the logic to the frontend, which in the end is the same approach as option 1, or
  • Merge all the services into 1, which gets you a monolithic application.

As you can see, there are several situations in which you need to design and implement some form of communication and exchange data between your services. And that’s OK if you do it intentionally. There are several patterns and tools for that. I explain the most important and popular ones in my upcoming course Data and Communication Patterns for Microservices. It launches in just a few days. I recommend joining the waitlist now so that you don’t miss it.

Using multiple local transactions

If teams accepted that they need to exchange data between their services, quite a few decide to use multiple, independent, local transactions. This is a risky decision because sooner or later, it will cause data inconsistencies.

By using multiple local transactions, you create a situation that’s called a dual write. I explained it in great detail in a previous article. To summarize that article, you can’t handle the situation in which you try to commit 2 independent transactions, and the 2nd commit fails. You might try to implement workarounds that try to revert the first transaction. But you can’t guarantee that they will always work.

Distributed transactions and their problems in a microservice application

In a monolithic application or older distributed applications, we often used transactions that span over multiple external systems. Typical examples are transactions that include one or more databases or a database and a message broker. These transactions are called global or distributed transactions. They enable you to apply the ACID principle to multiple systems.

Unfortunately, they are not a good fit for a microservice architecture. They use a pattern called 2-phase commit. This pattern describes a complex process that requires multiple steps and locks.

2-phase commit protocol

As you might have guessed from the name, the main difference between a local and distributed transaction that uses the two-phase commit pattern is the commit operation. As soon as more than one system is involved, you can’t just send a commit message to each of them. That would create the same problems as we discussed for dual writes.

The two-phase commit avoids these problems by splitting the commit into 2 steps (see the sketch after this list):

  1. The transaction coordinator first sends a prepare command to each involved system. Each system then checks if it could commit the transaction.
  2. If that's the case, each system responds with "prepared", and the transaction coordinator sends a commit command to all systems. The transaction was successful, and all changes get committed. If any of the systems doesn't answer the prepare command or responds with "failed", the transaction coordinator sends an abort command to all systems. This rolls back all the changes performed within the transaction.
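Purely as an illustration of the protocol (this is a toy sketch, not a real transaction manager; the Participant interface is made up for the example):

interface Participant {
    boolean prepare(); // returns true if the system can commit
    void commit();
    void abort();
}

void twoPhaseCommit(List<Participant> participants) {
    // phase 1: ask every involved system to prepare and vote
    boolean allPrepared = participants.stream().allMatch(Participant::prepare);

    // phase 2: commit everywhere only if every system voted "prepared",
    // otherwise roll back everywhere
    if (allPrepared) {
        participants.forEach(Participant::commit);
    } else {
        participants.forEach(Participant::abort);
    }
}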

As you can see, a two-phase commit is more complicated than the simple commit of a local transaction. But it gets even worse when you take a look at the systems that need to prepare and commit the transaction.

The problem of a 2-phase commit

After a system confirmed the prepare command, it needs to make sure that it will be able to commit the transaction when it receives the commit command. That means nothing is allowed to change until that system gets the commit or abort command.

The only way to ensure that is to lock all the information that you changed in the transaction. As long as this lock is active, no other transaction can use this information. These locks can become a bottleneck that slows down your system and should obviously be avoided.

This problem also existed in a distributed, monolithic application. But the small scope of a microservice and the huge number of services that are often deployed make it worse.

A 2-phase commit between a transaction coordinator and 2 external systems is already bad enough. But the complexity and the performance impact of the required locks increase with each additional external system that takes part in the transaction.

Due to that, a distributed transaction is no longer an easy to use approach to ensure data consistency that, in the worst case, might slow down your application a little bit. In a microservice architecture, a distributed transaction is an outdated approach that causes severe scalability issues. Modern patterns that rely on asynchronous data replication or model distributed write operations as orchestrated or choreographed SAGAs avoid these problems. I explain all of them in great detail in my upcoming course Data and Communication Patterns for Microservices. It launches in just a few days. If you’re building microservices, I recommend joining the waitlist now so that you don’t miss it.

The post Distributed Transactions – Don’t use them for Microservices appeared first on Thoughts on Java.


by Thorben Janssen at February 12, 2020 02:09 PM

Jakarta EE Working Group is Open to Forming a Working Relationship with MicroProfile

by Will Lyons at February 11, 2020 09:25 AM

Hello - 

Last week the following resolution was passed unanimously by the Jakarta EE Steering Committee:

The Jakarta EE Working Group Steering Committee would be open to proposals for collaborative processes that may achieve a consensus approach to a joint Working Group, or working with a standalone MicroProfile Working Group.  If a single Cloud Native for Java (CN4J) Working Group is preferred by both communities then the Jakarta EE Working Group is open to considering the possibility of forming a joint Working Group with the MicroProfile community.   We recognize that forming a joint Working Group would require significant modifications to the current Jakarta EE Working Group charter, and are open to that prospect.   We are open to considering the current CN4J Working Group proposal, and/or evolving that proposal, and potentially other proposals, together with the MicroProfile community, in an effort to best meet the needs of MicroProfile and Jakarta EE, and to create more opportunities for synergy between the two efforts.  We are open to discuss whatever approach works best, and would welcome MicroProfile community feedback.

There is active discussion on this topic at the MicroProfile sandbox, with a proposal for an alternative option as well.  Please join in.  We're hoping these two efforts can be combined in a way that maximizes synergies between them for market success.  We're sure there will be more discussion on this topic in coming days and weeks.

 

 


by Will Lyons at February 11, 2020 09:25 AM

Jakarta EE Community Update February 2020

by Tanja Obradovic at February 05, 2020 07:21 PM

With the Jan 16 announcement that we’re targeting a mid-2020 release for Jakarta EE 9, the year is off to a great start for the entire Jakarta EE community. But, the Jakarta EE 9 release announcement certainly wasn’t our only achievement in the first month of 2020.

Here’s a look at the great progress the Jakarta EE community made in January, along with some important reminders and details about events you won’t want to miss.

____________________________________________________________

The Java EE Guardians Are Now the Jakarta EE Ambassadors

The rebranding is complete and the website has been updated to reflect the evolution. Also note that the group’s:

·  Twitter handle has been renamed to @jee_ambassadors

·  Google Group has been renamed to https://groups.google.com/forum/#!forum/jakartaee-ambassadors

Everyone at the Eclipse Foundation and in the Jakarta EE community is thrilled the Java EE Guardians took the time and effort to rebrand themselves for Jakarta EE. I’d like to take this opportunity to thank everyone involved with the Jakarta EE Ambassadors for their contributions to the advancement of Jakarta EE. As Eclipse Foundation Executive Director, Mike Milinkovich, noted, “The new Jakarta EE Ambassadors are an important part of our community, and we very much appreciate their support and trust.”

I look forward to collaborating with the Jakarta EE Ambassadors to drive the success and growth of the Jakarta EE community. I’d also like to encourage all Jakarta EE Ambassadors to start using the new logo to associate themselves with the group.

____________________________________________________________

Java User Groups Will Be Able to Adopt a Jakarta EE Specification

We’re working to enable Java User Groups (JUGs) to become actively involved in evolving the Jakarta EE Specification through our adopt-a-spec program.

In addition to being Jakarta EE contributors and committers, JUG members that adopt-a-spec will be able to:

·  Blog about the Specification

·  Tweet about the Specification

·  Write an end-to-end test web application, such as Pet Store for Jakarta EE

·  Review the specification and comment on unclear content

·  Write additional tests to supplement those we already have

We’ll share more information and ideas for JUG groups, organizers, and individuals to get involved as we finalize the adopt-a-spec program details and sign up process.

____________________________________________________________ 

We’re Improving Opportunities for Individuals in the Jakarta EE Working Group

Let me start by saying we welcome everyone who wants to get involved with Jakarta EE! We’re fully aware there’s always room for improvement, and that there are issues we don’t yet know about. If you come across a problem, please get in touch and we’ll be happy to help.

We recently realized we’ve made it very difficult (read impossible) for individuals employed by companies that are neither Jakarta EE Working Group participants nor Eclipse Foundation Members to become committers in Jakarta EE Specification projects.

We’re working to address the problem for these committers and are aiming to have a solution in place in the coming weeks. In the meantime, these individuals can become contributors.

We’ve provided the information below to help people understand the paperwork that must be completed to become a Jakarta EE contributor or a committer. Please look for announcements in the next week or so.

 ______________________________________________________ 

It’s Time to Start Working on Jakarta EE 9               

Now that the Jakarta EE 9 Release Plan is approved, it’s time for everyone in the Jakarta EE community to come together and start working on the release.

Here are links that can help you get informed and motivate you to get involved!

·  Start with the Jakarta EE Platform specification page.

·  Access the files you need on the Jakarta EE Platform GitHub page.

·  Monitor release progress here.

____________________________________________________________

All Oracle Contributed Specifications Are Now Available for Jakartification

We now have the copyright for all of the Java EE specifications that Oracle contributed so we need the community’s help with Jakartification more than ever. This is the only way the Java EE specifications can be contributed to Jakarta EE. 

To help you get started:

·      The Specification Committee has created a document that explains how to convert Java EE specifications to Jakarta EE.

·      Ivar Grimstad provided a demo during the community call in October. You can view it here.

______________________________________________

Upcoming Events

Here’s a brief look at two upcoming Jakarta EE events to mark on your calendar.

·      JakartaOne Livestream – Japan, February 26

This event builds on the success of the JakartaOne Livestream event in September 2019. Registration for JakartaOne Livestream – Japan is open and can be accessed here. Please keep in mind this entire event will be presented in Japanese. For all the details, be sure to follow the event on Twitter @JakartaOneJPN.

·  Cloud Native for Java Day at KubeCon + CloudNativeCon Amsterdam, March 30

Cloud Native for Java (CN4J) Day will be the first time the best and brightest minds from the Java ecosystem and the Kubernetes ecosystem come together at one event to collaborate and share their expertise. And, momentum is quickly building.

To learn more about this ground-breaking event, get a sense of the excitement surrounding it, and access the registration page, check out these links:

o   Eclipse Foundation’s official announcement

o   Mike Milinkovich’s blog

o   Reza Rahman’s blog

In addition to CN4J Day at KubeCon, the Eclipse Foundation will have a booth (#S73) featuring innovations from our Jakarta EE and Eclipse MicroProfile communities. Be sure to drop by to meet community experts in person and check out the demos.

________________________________________________________

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

Our next call is Wednesday, February 12, at 11:00 a.m. EST using this meeting ID.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations from previous calls:

·      The complete playlist

·      January 15 call and presentation, featuring:

o   Updates on the Jakarta EE 9 release from Steve Millidge

o   A call for action to help with Jakartifying specifications from Ivar Grimstad

o   A review of the Jakarta EE 2020 Marketing Plan and budget from Neil Paterson

o   A retrospective on Jakarta EE 8 from Ed Bratt

o   A heads up for the CN4J and JakartaOne Livestream events from Tanja Obradovic

  ____________________________________________________________

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

• Social media: Twitter, Facebook, LinkedIn Group

• Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

• Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

• Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 



by Tanja Obradovic at February 05, 2020 07:21 PM

ISPN000299: Unable to acquire lock after 15 seconds for key

February 04, 2020 10:00 PM

A distributed cache is a widely used technology that provides useful ways to share state whenever necessary. WildFly supports distributed caching through the Infinispan subsystem, and it works well, but under high load and concurrent data access you may run into some issues like:

  • ISPN000299: Unable to acquire lock after 15 seconds for key
  • ISPN000924: beforeCompletion() failed for SynchronizationAdapter
  • ISPN000160: Could not complete injected transaction.
  • ISPN000136: Error executing command PrepareCommand on Cache
  • ISPN000476: Timed out waiting for responses for request
  • ISPN000482: Cannot create remote transaction GlobalTx

and others.

In my case, I had a two-node cluster with the following Infinispan configuration:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata:add()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata/component=transaction:add(mode=BATCH)

The distributed-cache above means that a fixed number of copies is maintained; however, this is typically less than the number of nodes in the cluster. On the other hand, to provide redundancy and fault tolerance you should configure a sufficient number of owners, and 2 is obviously the necessary minimum here. So, in the case of a small cluster, and keeping the mentioned bug in mind, I recommend using a replicated-cache (all nodes in a cluster hold all keys).

Please compare "Which cache mode should I use?" with your needs.

Solution:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata:remove()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata:add()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata/component=transaction:add(mode=NON_DURABLE_XA, locking=OPTIMISTIC)

Note: NON_DURABLE_XA doesn't keep any transaction recovery information, and if you are still getting "Unable to acquire lock" errors on application-critical data, you can try to resolve them with a retry policy and fail-fast transactions:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata/component=locking:write-attribute(name=acquire-timeout, value=0)
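On the application side, a matching retry policy can be as simple as the following sketch (the cache reference, key, and retry count are assumptions):

// with acquire-timeout=0 the put fails fast, so retry a few times before giving up
int attempts = 3;
while (true) {
    try {
        cache.put(key, value);
        break;
    } catch (Exception e) {
        if (--attempts == 0) {
            throw e;
        }
    }
}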

February 04, 2020 10:00 PM

Projections with JPA and Hibernate

by Thorben Janssen at January 31, 2020 10:00 AM

The post Projections with JPA and Hibernate appeared first on Thoughts on Java.

Choosing the right projection when selecting data with JPA and Hibernate is incredibly important. When I’m working with a coaching client to improve the performance of their application, we always work on slow queries. At least 80% of them can be tremendously improved by either adjusting the projection or by using the correct FetchType.

Unfortunately, changing the projection of an existing query always requires a lot of refactoring in your business code. So, better make sure to pick a good projection in the beginning. That’s relatively simple if you follow a few basic rules that I will explain in this article.

But before we do that, let’s quickly explain what a projection is.

What is a projection?

The projection describes which columns you select from your database and in which form Hibernate provides them to you. Or in other words, if you’re writing a JPQL query, it’s everything between the SELECT and the FROM keywords.

em.createQuery("SELECT b.title, b.publisher, b.author.name FROM Book b");

What projections do JPA and Hibernate support?

JPA and Hibernate support 3 groups of projections:

  1. Scalar values
  2. Entities
  3. DTOs

SQL only supports scalar projections, like table columns or the return value of a database function. So, how can JPA and Hibernate support more projections?

Hibernate first checks which information it needs to retrieve from the database and generates an SQL statement with a scalar value projection for it. It then executes the query and returns the result if you used a scalar value projection in your code. If you requested a DTO or entity projection, Hibernate applies an additional transformation step. It iterates through the result set and instantiates an entity or a DTO object for each record.

Let’s take a closer look at all 3 projections and discuss when you should use which of them.

Entity projections

For most teams, entities are the most common projection. They are very easy to use with JPA and Hibernate.

You can either use the find method on your EntityManager or write a simple JPQL or Criteria query that selects one or more entities. Spring Data JPA can even derive a query that returns an entity from the name of your repository method.

TypedQuery<Book> q = em.createQuery("SELECT b FROM Book b", Book.class);
List<Book> books = q.getResultList();

All entities that you load from the database or retrieve from one of Hibernate’s caches are in the lifecycle state managed. That means that your persistence provider, e.g., Hibernate, will automatically update or remove the corresponding database record if you change the value of an entity attribute or decide to remove the entity.

b.setTitle("Hibernate Tips - More than 70 solutions to common Hibernate problems");

Entities are the only projection that has a managed lifecycle state. Whenever you want to implement a write operation, you should fetch entities from the database. They make the implementation of write operations much easier and often even provide performance optimizations.

But if you implement a read-only use case, you should prefer a different projection. Managing the lifecycle state, ensuring that there is only 1 entity object for each mapped database record within a session, and all the other features provided by Hibernate create an overhead. This overhead makes the entity projection slower than a scalar value or DTO projection.

Scalar value projections

Scalar value projections avoid the management overhead of entity projections, but they are not very comfortable to use. Hibernate doesn’t transform the result of the query. You, therefore, get an Object or an Object[] as the result of your query.

Query q = em.createQuery("SELECT b.title, b.publisher, b.author.name FROM Book b");
List<Object[]> books = (Object[]) q.getResultList();

In the next step, you then need to iterate through each record in your result set and cast each Object to its specific type before you can use it. That makes your code error-prone and hard to read.

Instead of an Object[], you can also retrieve a scalar projection as a Tuple interface. The interface is a little easier to use than the Object[].

TypedQuery<Tuple> q = em.createQuery("SELECT b.title as title, b.publisher as publisher, b.author.name as author FROM Book b", Tuple.class);
List<Tuple> books = q.getResultList();

for (Tuple b : books) {
	log.info(b.get("title"));
}

But don’t expect too much. It only provides a few additional methods to retrieve an element, e.g., by its alias. But the returned values are still of type Object, and your code is still as error-prone as it is if you use an Object[].

Database functions in scalar value projections

Scalar value projections are not limited to singular entity attributes. You can also include the return values of one or more database functions.

TypedQuery<Tuple> q = em.createQuery("SELECT AVG(b.sales) as avg_sales, SUM(b.sales) as total_sales, COUNT(b) as books, b.author.name as author FROM Book b GROUP BY b.author.name", Tuple.class);
List<Tuple> authors = q.getResultList();

for (Tuple a : authors) {
	log.info("author:" + a.get("author")
			+ ", books:" + a.get("books")
			+ ", AVG sales:" + a.get("avg_sales")
			+ ", total sales:" + a.get("total_sales"));
}

This is a huge advantage compared to an entity projection. If you used an entity projection in the previous example, you would need to select all Book entities with their associated Author entity. In the next step, you would then need to count the number of books each author has written, and calculate the total and average sales values.

As you can see in the code snippet, using a database function is easier, and it also provides better performance.

DTO projections

DTO projections are the best kind of projection for read-only operations. Hibernate instantiates the DTO objects as a post-processing step after it retrieved the query result from the database. It then iterates through the result set and executes the described constructor call for each record.

Here you can see a simple example of a JPQL query that returns the query result as a List of BookDTO objects. By using the keyword new and providing the fully qualified class name of your DTO class and an array of references to entity attributes, you can define a constructor call. Hibernate will then use reflection to call this constructor.

TypedQuery<BookDTO> q = em.createQuery("SELECT new org.thoughtsonjava.projection.dto.BookDTO(b.title, b.author.name, b.publisher) FROM Book b", BookDTO.class);
List<BookDTO> books = q.getResultList();

In contrast to the entity projection, the overhead of a DTO projection is minimal. The objects are not part of the current persistence context and don’t follow any managed lifecycle. Due to that, Hibernate will not generate any SQL UPDATE statements if you change the value of a DTO attribute. But it also doesn’t have to spend any management effort, which provides significant performance benefits.

Database functions in DTO projections

Similar to a scalar value projection, you can also use database functions in a DTO projection. As explained earlier, the instantiation of the DTO object is a post-processing step after Hibernate retrieved the query result. At that phase, it doesn’t make any difference if a value was stored in a database column or if it was calculated by a database function. Hibernate simply gets it from the result set and provides it as a constructor parameter.

Conclusion

JPA and Hibernate support 3 groups of projections:

  1. Entities are the easiest and most common projection. They are a great fit if you need to change data, but they are not the most efficient ones for read-only use cases.
  2. Scalar projections are returned as Object[]s or instances of the Tuple interface. Both versions don’t provide any type-information and are hard to use. Even though they are very efficient for read-only operations, you should avoid them in your application.
  3. DTO projections provide similar performance as scalar value projections but are much easier to use. That makes them the best projection for read-only operations.

The post Projections with JPA and Hibernate appeared first on Thoughts on Java.


by Thorben Janssen at January 31, 2020 10:00 AM

Monitoring REST APIs with Custom JDK Flight Recorder Events

January 29, 2020 02:30 PM

The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.

In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing to track request counts, identify long-running requests and more. We’ll also discuss how the JFR Event Streaming API new in Java 14 can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
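To give a flavor of what such a custom event can look like (the event name and fields below are illustrative assumptions, not the post's actual code), a custom JFR event is a small class extending jdk.jfr.Event; a JAX-RS filter can then instantiate it, call begin() before handling a request, and commit() afterwards:

import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("demo.RestRequest")
@Label("REST Request")
@Category("REST API")
public class RestRequestEvent extends Event {

    @Label("Path")
    String path;

    @Label("HTTP Method")
    String method;
}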


January 29, 2020 02:30 PM

Plans for 2020 and key lessons from 2019

by Thorben Janssen at January 28, 2020 01:00 PM

The post Plans for 2020 and key lessons from 2019 appeared first on Thoughts on Java.

It’s almost February 2020, and I still haven’t published my end of 2019 review or shared my plans for this year. But I have good excuses for that. So far, January has been extremely busy. I already did a code review, started a new coaching project, taught an in-house workshop, recorded multiple online course lectures and YouTube videos, and wrote blog articles. Not too bad for only 3 weeks.

But I still want to share what I learned in 2019 and what’s planned for 2020. So, here we go …

What I learned in 2019

The last year was incredibly successful:

  • The blog suffered from an issue with an SEO plugin, but in the end, traffic grew to almost 4 Million views in 2019.
  • We got to more than 17000 subscribers on YouTube.
  • I spoke at several conferences and JUGs across Europe.
  • I did more in-house workshops and had more students in my online courses than ever before.
  • I hosted my first in-person workshops in Düsseldorf (Germany).
  • With the JPA for Beginners Online Training, I also published a new course.
  • For the first year since I was a teenager, I established a relatively consistent workout routine.
  • And I learned that traveling by train doesn’t have to take much longer than flying, and it isn’t as stressful.

But I also had to learn that too much of something that I enjoy is still too much.

Sometimes too much fun is still too much

In the beginning, traveling from one in-house workshop to the next was fun. But that changed after a while. It started to wear me out. You might have recognized that I didn’t publish new articles and videos as consistently as I had planned. Doing too many in-house workshops and attending too many conferences was the main reason for that. I either was traveling and speaking, or I tried to catch up with all the things I wasn’t able to do while traveling.

This year, I want to make sure that this doesn’t happen again. I plan to not speak at more than 1 in-house workshop per month and not more than 6 conferences per year. That’s still 1.5 events per month.

If you add onsite and remote coaching engagements to the mix, my schedule still looks pretty busy. But it’s hopefully more sustainable and gives me some extra time to work on new online courses and products.

Hosting my own workshop isn’t complicated or scary

Another thing that I learned in 2019 was that it’s not too complicated to host and promote my own in-person workshops. Sure, it was a little stressful in the beginning, but the result was totally worth it.

In December, I offered an Advanced Hibernate Workshop and a Hibernate Performance Tuning Workshop at the Lindner Congress Hotel in Düsseldorf. Their team did an amazing job and took care of all the logistics. I had booked a meeting room with drinks, snacks, and lunch. So, the only thing that I had to do was to be there on time and teach the workshops.

In the end, I liked these workshops much better than the ones that I did with different training companies in the past. From now on, I will host my workshops myself.

I already planned 3 of them for this year. But more about that in the next section.

What to expect in 2020

OK, so 2019 was great, and I learned a few things. What does that mean for this year? Am I happy with the achievements of last year, and will I keep everything as it is?

Of course not!

I want to grow the team, improve the site, create new courses, and offer more in-person workshops.

One or two new online courses

I’m currently working on my new Data and Communication Patterns for Microservices Online Training. It’s inspired by several coaching projects in which I helped teams to model the persistence layers of their microservices and to exchange data between services in a reliable and scalable way.

The first of these coaching projects started shortly after microservices became popular. Most teams had to recognize that exchanging data and ensuring data consistency had become an issue. They no longer implemented their logic in 1 application and ensured data consistency with a simple transaction. They now did that in multiple services and needed to handle the downsides of a distributed system.

There are several patterns and tools that help you to handle these issues. Even if you use them correctly, exchanging data in a consistent and scalable way adds complexity to your system. But it becomes a manageable task, and you will be able to enjoy the advantages of a microservice architecture.

I will show you the most important and most popular patterns in the Data and Communication Patterns for Microservices Online Training. It will launch on February 28th. You can join the early-bird notification list here.

And that might not be the only new course in 2020. I have 1-2 more ideas for new courses, but it’s still too early to share them.

3 in-person workshops

As I said earlier, I also planned 3 in-person workshops for this year.

  1. In the JPA for Beginners workshop, you will learn all you need to know to use JPA with Hibernate or EclipseLink. I will teach you all the important concepts, JPA’s mapping annotations, and the JPQL query language. After these 2 days, you will be able to implement a basic persistence layer on your own or to join a team that’s working on a huge and complex one.
    The JPA for Beginners workshop will take place on June 30th – July 1st, 2020. Make sure to enroll before March 28th to get the best price.
  2. The Data and Communication Patterns for Microservices workshop is the in-person workshop version of the new online course. You will learn how to exchange data between your services in a scalable and reliable way. I will show you different patterns for synchronous service calls, asynchronous data replication, and distributed write operations.
    The Data and Communication Patterns for Microservices workshop will take place on September 15th-17th, 2020. Make sure to enroll before June 12th to get the best price.
  3. The Advanced Hibernate workshop was my most popular in-person workshop in 2019. In this workshop, you will learn to implement complex domain mappings, create dynamic and type-safe queries, support custom data types, use Hibernate’s multi-tenancy features, and much more.
    The Advanced Hibernate workshop will take place on December 8th – 10th, 2020. Make sure to enroll before August 30th to get the best price.

Growing the team

In addition to all of that, I also want to consistently post new tutorials here on the blog and on my YouTube channel. I also teach in-house workshops and help development teams as a coach to use Hibernate more efficiently and to fix issues in their current projects.

So far, we have done all of that with a team of 2.

For the last few years, Rayhan has helped me as a contractor. He takes care of all the important tasks in the background and keeps everything up and running while I’m on the road. He edits videos, creates images, updates WordPress plugins, and lots of other things. To be honest, without his help, there wouldn’t be any YouTube channel, and I would probably still be working on my 2nd course.

But at the end of last year, I had to realize that there is just too much work for such a small team. I decided to hire Khalifa to help me prepare articles, update code samples, and do other Java-related things.

I hope that that’s just the beginning. I’m planning to add another person to the team as soon as the 3 of us have gotten used to each other and found a good rhythm.

I hope I can share more about that soon. Until then, I hope you find our articles and videos helpful, and I’m looking forward to meeting you in person at a conference or workshop.

The post Plans for 2020 and key lessons from 2019 appeared first on Thoughts on Java.


by Thorben Janssen at January 28, 2020 01:00 PM

Cloud Native for Java Day @ KubeCon EU

by Mike Milinkovich at January 28, 2020 12:00 PM

Cloud Native for Java (CN4J) Day at KubeCon + CloudNativeCon Europe will be the first time the best and brightest minds from the Java ecosystem and the Kubernetes ecosystem come together at one event to collaborate and share their expertise.

The all-day event on March 30 includes expert talks, demos, and thought-provoking sessions focused on building cloud native enterprise applications using Jakarta EE-based microservices on Kubernetes. CN4J Day is a proud moment for all of us at the Eclipse Foundation as it confirms the Jakarta EE and MicroProfile communities are at the forefront of fulfilling the promise of cloud native Java. We’re excited to be working with our friends at the CNCF to offer this event co-located with KubeCon Europe.

A Unique Opportunity to Engage With Global Experts

The timing of CN4J Day could not be better. With momentum toward the Jakarta EE 9 release building, this event gives all of us an important and truly unique opportunity to:

  •     Learn more about the future of cloud native Java development from industry and community leaders
  •     Gain deeper insight into key aspects of Jakarta EE, MicroProfile, and Kubernetes technologies
  •     Meet and share ideas with global Java and Kubernetes ecosystem innovators

The global Java ecosystem has embraced CN4J Day, and several of its leading minds will be on hand to share their insights. Along with keynote addresses from my colleague Tanja Obradovic and IBM Java CTO Tim Ellison, CN4J Day features informative technical talks from Java experts and Eclipse Foundation community members, such as:

  •     Adam Bien, an internationally recognized Java architect, developer, workshop leader, and author
  •     Sebastian Daschner, lead Java developer advocate at IBM
  •     Clement Escoffier, principal software engineer at Red Hat
  •     Ken Finnegan, senior principal engineer at Red Hat
  •     Emily Jiang, Liberty architect for MicroProfile and CDI at IBM
  •     Dmitry Kornilov, Jakarta EE and Helidon Team Leader at Oracle
  •     Tomas Langer, Helidon Architect & Developer at Oracle

Major Industry and Ecosystem Endorsement

Leading industry players in the Java ecosystem are also showing their support for CN4J Day through sponsorship. Our sponsors include:

  •     Cloud Native Computing Foundation (CNCF)
  •     IBM
  •     Oracle
  •     Red Hat

The event is being coordinated by an independent program committee composed of Arun Gupta, principal technologist at Amazon Web Services, Reza Rahman, principal program manager for Java on Azure at Microsoft, and Tanja Obradovic, program manager for Jakarta EE at the Eclipse Foundation.

Register Today

To register today, simply add the event to your KubeCon + CloudNativeCon Europe registration. Thanks to the generous support of our sponsors, a limited number of discounted CN4J Day add-on registrations will be made available to Jakarta EE and MicroProfile community members on a first-come, first-served basis.

For more details about CN4J Day and a link to the registration page, click here. For additional questions regarding this event, please reach out to events-info@eclipse.org.

As additional speakers and sponsors come onboard, we’ll keep you posted, so watch for updates in our blogs and newsletters.


by Mike Milinkovich at January 28, 2020 12:00 PM

Dual Writes – The Unknown Cause of Data Inconsistencies

by Thorben Janssen at January 23, 2020 01:06 PM

The post Dual Writes – The Unknown Cause of Data Inconsistencies appeared first on Thoughts on Java.

Since a lot of new applications are built as a system of microservices, dual writes have become a widespread issue. They are one of the most common reasons for data inconsistencies. To make it even worse, I had to learn that a lot of developers don’t even know what a dual write is.

Dual writes seem to be an easy solution to a complex problem. If you’re not familiar with distributed systems, you might wonder why people worry about them at all.

That’s because everything seems to be totally fine … until it isn’t.

So, let’s talk about dual writes and make sure that you don’t use them in your applications. And if you want to dive deeper into this topic and learn various patterns that help you to avoid this kind of issue, please take a look at my upcoming Data and Communication Patterns for Microservices course.

What is a dual write?

A dual write describes the situation in which you change data in 2 systems, e.g., a database and Apache Kafka, without an additional layer that ensures data consistency across both systems. That’s typically the case if you use a local transaction with each of the external systems.

Take, as an example, a service that changes data in its database and then sends an event to Apache Kafka.

As long as both operations are successful, everything is OK. Even if the first transaction fails, it’s still fine. But if you successfully committed the 1st transaction and the 2nd one fails, you have a problem. Your system is now in an inconsistent state, and there is no easy way to fix it.
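
To make the failure mode concrete, here is a minimal sketch of the anti-pattern with plain JPA and the Kafka producer API. The Order entity, the topic name, and the JSON handling are illustrative assumptions.

import javax.persistence.EntityManager;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderService {

    private final EntityManager em;
    private final KafkaProducer<String, String> producer;

    public OrderService(EntityManager em, KafkaProducer<String, String> producer) {
        this.em = em;
        this.producer = producer;
    }

    public void updateAndPublish(Order order, String orderJson) { // Order is an assumed JPA entity
        // 1st system: local database transaction
        em.getTransaction().begin();
        em.merge(order);
        em.getTransaction().commit(); // commit succeeds

        // 2nd system: an independent operation without a shared transaction.
        // If this send fails, the database change is already committed,
        // and the two systems are now inconsistent.
        producer.send(new ProducerRecord<>("order-events",
                String.valueOf(order.getId()), orderJson));
    }
}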

Distributed transactions are no longer an option

In the past, when we built monoliths, we used distributed transactions to avoid this situation. Distributed transactions use the 2-phase commit protocol. It splits the commit process of the transaction into 2 steps and ensures the ACID principles for all systems.

But we don’t use distributed transactions if we’re building a system of microservices. These transactions require locks and don’t scale well. They also need all involved systems to be up and running at the same time.

So what shall you do instead?

3 “solutions” that don’t work

When I discuss this topic with attendees at a conference talk or during one of my workshops, I often hear one of the following 3 suggestions:

  1. Yes, we are aware of this issue, and we don’t have a solution for it. But it’s not that bad. So far, nothing has happened. Let’s keep it as it is.
  2. Let’s move the interaction with Apache Kafka to an after commit listener.
  3. Let’s write the event to the topic in Kafka before you commit the database transaction.

Well, it should be obvious that suggestion 1 is a rather risky one. It probably works most of the time. But sooner or later, you will create more and more inconsistencies between the data that’s stored by your services.

So, let’s focus on options 2 and 3.

Post the event in an after commit listener

Publishing the event in an after commit listener is a pretty popular approach. It ensures that the event only gets published if the database transaction was successful. But it’s difficult to handle the situation in which Kafka is down or any other reason prevents you from publishing the event.
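
For illustration, such a listener could be implemented as a CDI observer that only fires after a successful commit. The OrderChangedEvent class, the topic name, and the producer wiring are assumptions of this sketch.

import javax.enterprise.event.Observes;
import javax.enterprise.event.TransactionPhase;
import javax.inject.Inject;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderChangedListener {

    @Inject
    KafkaProducer<String, String> producer; // assumed to be provided by a CDI producer method

    // Only invoked after the database transaction committed successfully.
    // If the send fails here, the commit can no longer be rolled back.
    public void onOrderChanged(
            @Observes(during = TransactionPhase.AFTER_SUCCESS) OrderChangedEvent event) {
        producer.send(new ProducerRecord<>("order-events",
                event.getOrderId(), event.getPayload()));
    }
}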

You already committed the database transaction. So, you can’t easily revert these changes. Other transactions might have already used and modified that data while you were trying to publish the event to Kafka.

You might try to persist the failure in your database and run regular cleanup jobs that seek to recover the failed events. This might look like a logical solution, but it has a few flaws:

  1. It only works if you can persist the failed event in your database. If the database transaction fails, or your application or the database crash before you can store the information about the failed event, you will lose it.
  2. It only works if the event itself didn’t cause the problem.
  3. If another operation creates an event for that business object before the cleanup job recovers the failed event, your events get out of order.

These might seem like hypothetical scenarios, but that’s what we’re preparing for. The main idea of local transactions, distributed transactions, and approaches that ensure eventual consistency is to be absolutely sure that you can’t create any (permanent) inconsistencies.

An after commit listener can’t ensure that. So, let’s take a look at the other option.

Post the event before committing the database transaction

This approach often gets suggested after discussing why the after commit listener doesn’t work. If publishing the event after the commit creates a problem, you simply publish it before you commit the transaction, right?

Well, no … Let me explain …

Publishing the event before you commit the transaction enables you to roll back the transaction if you can’t publish the event. That’s right.

But what do you do if your database transaction fails?

Your operations might violate a unique constraint, or there might have been 2 concurrent updates on the same database record. All database constraints get checked during the commit, and you can’t be sure that none of them fails. Your database transactions are also isolated from each other, so you can’t prevent concurrent updates without using locks. But that creates new scalability issues. To make it short, your database transaction might fail, and there is nothing you can, or want to, do about it.

If that happens, your event is already published. Other microservices probably already observed it and triggered some business logic. You can’t take the event back.

Undo operations fail for the same reasons that we discussed before. You might be able to build a solution that works most of the time. But you are not able to create something that’s absolutely failsafe.

How to avoid dual writes?

You can choose between a few approaches that help you to avoid dual writes. But you need to be aware that without using a distributed transaction, you can only build an eventually consistent system.

The general idea is to split the process into multiple steps. Each of these steps only operates on one data store, e.g., the database or Apache Kafka. That enables you to use local transactions, asynchronous communication between the involved systems, and an asynchronous, potentially endless retry mechanism.

If you only want to replicate data between your services or inform other services that an event has occurred, you can use the outbox pattern together with a change data capture implementation like Debezium. I explained this approach in great detail in separate articles on this blog.
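
As a minimal sketch of the outbox pattern, assuming an OutboxEvent entity whose table Debezium (or another change data capture tool) tails and publishes to Kafka, the business data and the outbox record are written in one local transaction:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class OutboxEvent {

    @Id
    @GeneratedValue
    private Long id;

    private String aggregateType;
    private String aggregateId;
    private String type;
    private String payload;

    protected OutboxEvent() {
    }

    public OutboxEvent(String aggregateType, String aggregateId, String type, String payload) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.type = type;
        this.payload = payload;
    }
}

public class OrderPlacementService {

    private final EntityManager em;

    public OrderPlacementService(EntityManager em) {
        this.em = em;
    }

    public void placeOrder(Order order, String orderJson) { // Order is an assumed JPA entity
        // Business data and outbox record share one local transaction:
        // both rows are committed or rolled back together. A CDC tool
        // publishes the outbox row to Kafka asynchronously afterwards.
        em.getTransaction().begin();
        em.persist(order);
        em.persist(new OutboxEvent("Order", String.valueOf(order.getId()), "OrderPlaced", orderJson));
        em.getTransaction().commit();
    }
}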

And if you need to implement a consistent write operation that involves multiple services, you can use the SAGA pattern. I will explain it in more detail in one of the following articles.

Conclusion

Dual writes are often underestimated, and a lot of developers aren’t even aware of the potential data inconsistencies.

As explained in this article, writing to 2 or more systems without a distributed transaction or an algorithm that ensures eventual consistency can cause data inconsistencies. If you work with multiple local transactions, you can’t handle all error scenarios.

The only way to avoid that is to split the communication into multiple steps and only write to one external system during each step. The SAGA pattern and change data capture implementations, like Debezium, use this approach to ensure consistent write operation to multiple systems or to send events to Apache Kafka.

The post Dual Writes – The Unknown Cause of Data Inconsistencies appeared first on Thoughts on Java.


by Thorben Janssen at January 23, 2020 01:06 PM

Enforcing Java Record Invariants With Bean Validation

January 20, 2020 04:30 PM

Record types are one of the most awaited features in Java 14; they promise to "provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data". One example where records should be beneficial is data transfer objects (DTOs), as found, e.g., in the remoting layer of enterprise applications. Typically, certain rules should be applied to the attributes of such a DTO, e.g. in terms of allowed values. The goal of this blog post is to explore how such invariants can be enforced on record types, using annotation-based constraints as provided by the Bean Validation API.
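
To give a flavor of the idea, here is a minimal sketch, assuming Java 14 with --enable-preview: constraint annotations placed on record components are propagated to the generated fields, so a standard Validator can check an instance after construction. The CustomerDto record and its constraints are illustrative, not the example from the post.

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Email;
import javax.validation.constraints.NotBlank;

// Constraints on the record components end up on the generated fields.
public record CustomerDto(@NotBlank String name, @Email String email) {
}

class CustomerDtoExample {

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<CustomerDto>> violations =
                validator.validate(new CustomerDto("", "not-an-email"));
        violations.forEach(v ->
                System.out.println(v.getPropertyPath() + " " + v.getMessage()));
    }
}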

January 20, 2020 04:30 PM

Jakarta EE 8 CRUD API Tutorial using Java 11

by rieckpil at January 19, 2020 03:07 PM

As part of the Jakarta EE Quickstart Tutorials on YouTube, I’ve now created a five-part series to create a Jakarta EE CRUD API. Within the videos, I’m demonstrating how to start using Jakarta EE for your next application. Using the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed following the TDD (Test-Driven Development) technique.

The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5

Part I: Introduction to the application setup

This part covers the following topics:

  • Introduction to the Maven project skeleton
  • Flyway setup for Open Liberty
  • Derby JDBC connection configuration
  • Basic MicroShed Testing setup for TDD

Part II: Developing the endpoint to create entities

This part covers the following topics:

  • First JAX-RS endpoint to create Person entities (see the sketch after this list)
  • TDD approach using MicroShed Testing and the Liberty Maven Plugin
  • Store the entities using the EntityManager
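
As a rough illustration of what the create endpoint from Part II can look like (the Person entity, the resource path, and the getter are assumptions of this sketch; the actual code from the series may differ):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("persons")
public class PersonResource {

    @PersistenceContext
    private EntityManager entityManager;

    @POST
    @Transactional
    public Response createPerson(Person person, @Context UriInfo uriInfo) {
        entityManager.persist(person); // store the entity
        var location = uriInfo.getAbsolutePathBuilder()
                .path(String.valueOf(person.getId()))
                .build();
        return Response.created(location).build(); // 201 Created with Location header
    }
}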

Part III: Developing the endpoints to read entities

This part covers the following topics:

  • Develop two JAX-RS endpoints to read entities
  • Read all entities and a single entity by its id
  • Handle non-present entities with a different HTTP status code

Part IV: Developing the endpoint to update entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to update entities
  • Update existing entities using HTTP PUT
  • Validate the client payload using Bean Validation

Part V: Developing the endpoint to delete entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to delete entities
  • Enhance the test setup for deterministic and repeatable integration tests
  • Remove the deleted entity from the database

The source code for the Maven CRUD API application is available on GitHub.

For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.

Have fun developing Jakarta EE CRUD API applications,

Phil

The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.


by rieckpil at January 19, 2020 03:07 PM

Moving Forward With Jakarta EE 9

by Mike Milinkovich at January 16, 2020 05:06 PM

On behalf of the Jakarta EE Working Group, I am excited to announce the unanimous approval of the plan for Jakarta EE 9, with an anticipated mid-2020 release. Please note that the project team believes this timeline is aggressive, so think of this as a plan of intent with early estimate dates. The milestone dates will be reviewed and possibly adjusted at each release review.

If you have any interest at all in the past, present, or future of Java, I highly recommend that you read that plan document, as Jakarta EE 9 represents a major inflection point in the platform.

The key elements of this Jakarta EE 9 release plan are to:

  • move all specification APIs to the jakarta namespace (sometimes referred to as the “big bang”);
  • remove unwanted or deprecated specifications;
  • make minor enhancements to a small number of specifications;
  • add no new specifications, apart from specifications pruned from Java SE 8 where appropriate; and
  • support Java SE 11.

What is not in the plan is the addition of any significant new functionality. That is because the goals of this Jakarta EE 9 release plan are to:

  • lower the barrier of entry to new vendors and implementations to achieve compatibility;
  • make the release available rapidly as a platform for future innovation; and
  • provide a platform that developers can use as a stable target for testing migration to the new namespace.

Moving a platform and ecosystem the size and scale of Jakarta EE takes time and careful planning. After a great deal of discussion, the community consensus was that using EE 9 to provide a clear transition to the jakarta namespace and to pare down the platform would be the best path to future success. While work on the EE 9 platform release is proceeding, individual component specification teams are encouraged to innovate in their individual specifications, which will hopefully lead to a rapid iteration towards the Jakarta EE 10 release.

Defining this release plan has been an enormous community effort. A lot of time and energy went into its development. It has been exciting to watch the … ummm, passionate … discussions evolve towards a pretty broad consensus on this approach. I would like to particularly recognize the contributions of Steve Millidge, Kevin Sutter, Bill Shannon, David Blevins, and Scott Stark for their tireless and occasionally thankless work in guiding this process.

The Jakarta EE Working Group has been busy working on creating a Program Plan, Marketing Plan, and Budget for 2020. The team has also been very busy with creating a plan for the Jakarta EE 9 release. The Jakarta EE Platform project team, as requested, delivered a proposal plan to the Steering Committee; with its endorsement, the plan was then voted on by the Specification Committee at its first meeting in January 2020.

Retrospective

The Jakarta EE 9 release is going to be an important step in the evolution of the platform, but it is important to recognize the many accomplishments that happened in 2019 that made this plan possible.

First, the Eclipse Foundation and Oracle successfully completed some very complex negotiations about how Java EE would be evolved under the community-led Jakarta EE process. Although the Jakarta EE community cannot evolve the specifications under the javax namespace, we were still able to fully transition the Java EE specifications to the Eclipse Foundation. That transition led to the second major accomplishment in 2019: the first release of Jakarta EE. Those two milestones were, in my view, absolutely key accomplishments. They were enabled by a number of other large efforts, such as creating the Eclipse Foundation Specification Process, significant revisions to our IP Policy, and establishing the Jakarta EE compatibility program. But ultimately, the most satisfying result of all of this effort is the fact that we have seven fully compatible Jakarta EE 8 products, with more on the way.

The Jakarta EE community was also incredibly active in 2019.

2019 was a very busy year, and it laid the foundation for a very successful 2020. I, and the entire Jakarta EE community, look forward to the exciting progress and innovation coming in 2020.


by Mike Milinkovich at January 16, 2020 05:06 PM

Naming Strategies in Hibernate 5

by Thorben Janssen at January 16, 2020 01:00 PM

The post Naming Strategies in Hibernate 5 appeared first on Thoughts on Java.

JPA and Hibernate provide a default mapping that maps each entity class to a database table with the same name. Each of its attributes gets mapped to a column with the same name. But what if you want to change this default, e.g., because it doesn’t match your company’s naming conventions?

You can, of course, specify the table name for each entity and the column name for each attribute. That requires a @Table annotation on each class and a @Column annotation on each attribute. This is called explicit naming.

That’s a good approach if you want to change the mapping for one attribute. But doing that for lots of attributes requires a lot of work. Adapting Hibernate’s naming strategy is then often a better approach.

In this article, I will show you how to use it to adjust the mapping of all entities and attributes. But before we do that, we first need to talk about the difference between Hibernate’s logical and physical naming strategy.




A 2-step approach

Hibernate splits the mapping of the entity or attribute name to the table or column name into 2 steps:

  1. It first determines the logical name of an entity or attribute. You can explicitly set the logical name using the @Table and @Column annotations. If you don’t do that, Hibernate uses one of its implicit naming strategies.
  2. It then maps the logical name to a physical name. By default, Hibernate uses the logical name as the physical name. But you can also provide a PhysicalNamingStrategy that maps the logical name to a physical one that follows your internal naming convention.

So, why does Hibernate differentiate between a logical and a physical naming strategy, but the JPA specification doesn’t?

JPA’s approach works, but if you take a closer look at it, you recognize that Hibernate’s approach provides more flexibility. By splitting the process into 2 steps, Hibernate allows you to implement a conversion that gets applied to all attributes and classes.

If your naming conventions, for example, require you to add “_TBL” to all table names, you can do that in your PhysicalNamingStrategy. It then doesn’t matter if you explicitly specify the table name in a @Table annotation or if you define it implicitly based on the entity name. In both cases, Hibernate will add “_TBL” to the end of your table name.

Because of the added flexibility, I like Hibernate’s approach a little better.

Logical naming strategy

As explained earlier, you can either define the logical name explicitly or implicitly. Let’s take a look at both options.

Explicit naming strategy

The explicit naming strategy is very easy to use. You probably already used it yourself. The only thing you need to do is to annotate your entity class with @Table or your entity attribute with @Column and provide your preferred name as a value to the name attribute.

@Entity
@Table(name = "AUTHORS")
public class Author {

    @Column(name = "author_name")
    private String name;

    ...
}

If you then use this entity in your code and activate the logging of SQL statements, you can see that Hibernate uses the provided names instead of the default ones.

15:55:52,525 DEBUG [org.hibernate.SQL] - insert into AUTHORS (author_name, version, id) values (?, ?, ?)

Implicit naming strategy

If you don’t set the table or column name in an annotation, Hibernate uses one of its implicit naming strategies. You can choose between 4 different naming strategies and 1 default strategy:

  • default
    By default, Hibernate uses the implicit naming strategy defined by the JPA specification. This value is an alias for jpa.
  • jpa
    This is the naming strategy defined by the JPA 2.0 specification.
    The logical name of an entity class is either the name provided in the @Entity annotation or the unqualified class name. For basic attributes, it uses the name of the attribute as the logical name. To get the logical name of a join column of an association, this strategy concatenates the name of the referencing attribute, an “_” and the name of the primary key attribute of the referenced entity. The logical name of a join column of an element collection consists of the name of the entity that owns the association, an “_” and the name of the primary key attribute of the referenced entity. And the logical name of a join table starts with the physical name of the owning table, followed by an “_” and the physical name of the referencing table.
  • legacy-hbm
    This is Hibernate’s original naming strategy. It doesn’t recognize any of JPA’s annotations. But you can use Hibernate’s proprietary configuration file and annotations to define a column or entity name.
    In addition to that, there are a few other differences to the JPA specification:
    • The logical name of a join column is only its attribute name.
    • For join tables, this strategy concatenates the name of the physical table that owns the association, an “_” and the name of the attribute that owns the association.
  • legacy-jpa
    The legacy-jpa strategy implements the naming strategy defined by JPA 1.0.
    The main differences to the jpa strategy are:
    • The logical name of a join table consists of the physical table name of the owning side of the association, an “_” and either the physical name of the referencing side of the association or the owning attribute of the association.
    • To get the logical name of the join column of an element collection, the legacy-jpa strategy uses the physical table name instead of the entity name of the referenced side of the association. That means the logical name of the join column consists of the physical table name of the referenced side of the association, an “_” and the name of the referenced primary key column.
  • component-path
    This strategy is almost identical to the jpa strategy. The only difference is that it includes the name of the composite in the logical attribute name.

You can configure the logical naming strategy by setting the hibernate.implicit_naming_strategy attribute in your configuration.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.implicit_naming_strategy"
                      value="jpa" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Physical naming strategy

Implementing your own physical naming strategy isn’t complicated. You can either implement the PhysicalNamingStrategy interface or extend Hibernate’s PhysicalNamingStrategyStandardImpl class.

I prefer extending Hibernate’s PhysicalNamingStrategyStandardImpl. In the following examples, I use it to create a naming strategy that adds the postfix “_TBL” to each table name and a naming strategy that converts camel case names into snake case.

Table postfix strategy

The only thing I want to change in this naming strategy is the handling of the table name. So, extending Hibernate’s PhysicalNamingStrategyStandardImpl class is the easiest way to achieve that.

Implementing a custom strategy

I override the toPhysicalTableName method, add a static postfix to the name, and convert it into an Identifier.

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

public class TablePostfixPhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    private final static String POSTFIX = "_TBL";

    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        if (identifier == null) {
            return null;
        }

        // Append the postfix to the logical name and wrap it in a new Identifier.
        final String newName = identifier.getText() + POSTFIX;
        return Identifier.toIdentifier(newName);
    }

}

In the next step, you need to activate the naming strategy. You do that by setting the hibernate.physical_naming_strategy attribute to the fully qualified class name of the strategy.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.physical_naming_strategy"
                      value="org.thoughtsonjava.naming.config.TablePostfixPhysicalNamingStrategy" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Using the table postfix strategy

Let’s try this mapping using this basic Author entity. I don’t specify a logical name for the entity. So, it defaults to the name of the class, which is Author. Without our custom naming strategy, Hibernate would map this entity to the Author table.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Version
    private int version;

    private String name;

    @ManyToMany(mappedBy = "authors", fetch = FetchType.LAZY)
    private Set<Book> books;

    ...
}

When I persist this entity, you can see in the log file that Hibernate mapped it to the Author_TBL table.

14:05:56,619 DEBUG [org.hibernate.SQL] - insert into Author_TBL (name, version, id) values (?, ?, ?)

Names in snake case instead of camel case

In Java, we prefer to use camel case for our class and attribute names. By default, Hibernate uses the logical name as the physical name. So, the entity attribute LocalDate publishingDate gets mapped to the database column publishingDate.

Some companies use naming conventions that require you to use snake case for your table and column names. That means that your publishingDate attribute needs to be mapped to the publishing_date column.

As explained earlier, you could use the explicit naming strategy and annotate each attribute with a @Column annotation. But for most persistence layers, that’s a lot of work, and it’s easy to forget.

So, let’s implement a naming strategy that does that for us.

Implementing a custom strategy

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;

public class SnakeCasePhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    @Override
    public Identifier toPhysicalCatalogName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalCatalogName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalColumnName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalSchemaName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalSchemaName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalSequenceName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalSequenceName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalTableName(toSnakeCase(name), context);
    }

    // Inserts an "_" between each group of lower-case letters and the
    // following group of capital letters, then lowercases the result.
    private Identifier toSnakeCase(Identifier id) {
        if (id == null)
            return id;

        String name = id.getText();
        String snakeName = name.replaceAll("([a-z]+)([A-Z]+)", "$1\\_$2").toLowerCase();
        if (!snakeName.equals(name))
            return new Identifier(snakeName, id.isQuoted());
        else
            return id;
    }
}

The interesting part of this naming strategy is the toSnakeCase method. I call it in all methods that return a physical name to convert the provided name to snake case.

If you’re familiar with regular expressions, the implementation of the toSnakeCase method is pretty simple. By calling replaceAll(“([a-z]+)([A-Z]+)”, “$1\\_$2”), we add an “_” between each group of lower-case letters and the following group of capital letters. After that is done, we only need to change all characters to lower case.

In the next step, we need to set the strategy in the persistence.xml file.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.physical_naming_strategy"
                      value="org.thoughtsonjava.naming.config.SnakeCasePhysicalNamingStrategy" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Using the snake case strategy

When I now persist this Book entity, Hibernate will use the custom strategy to map the publishingDate attribute to the database column publishing_date.

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    @Version
    private int version;

    private String title;

    private LocalDate publishingDate;

    @ManyToMany
    private Set<Author> authors;

    @ManyToOne
    private Publisher publisher;

    ...
}

As you can see in the log file, the naming strategy worked as expected and changed the name of the publishingDate column to publishing_date.

14:28:59,337 DEBUG [org.hibernate.SQL] - insert into books (publisher_id, publishing_date, title, version, id) values (?, ?, ?, ?, ?)



Conclusion

Hibernate’s naming strategy provides you with lots of flexibility. It consists of 2 parts: the mapping of the logical name and the mapping of the physical name.

You can explicitly define the logical name using the @Table and @Column annotation. If you don’t do that, Hibernate uses one of its implicit naming strategies. The default one is compliant with JPA 2.0.

After the logical name has been determined, Hibernate applies a physical naming strategy. By default, it returns the logical name. But you can use it to implement a conversion that gets applied to all logical names. As you have seen in the examples, this provides an easy way to fulfill your internal naming conventions.

The post Naming Strategies in Hibernate 5 appeared first on Thoughts on Java.


by Thorben Janssen at January 16, 2020 01:00 PM

Summary of Community Retrospective on Jakarta EE 8 Release

by Will Lyons at January 13, 2020 09:42 PM

One of the topics we will cover at the Jakarta EE Update call on January 15 at 11:00 a.m. EST (please use this meeting ID) is a retrospective on the Jakarta EE 8 release that was conducted by the Jakarta EE Steering Committee. The process included solicitation of input from contributors to specifications, implementers of specifications, users of specifications, and those observing from the outside. The goal of the retrospective was to collect feedback on the Jakarta EE 8 delivery process that can be turned into action items to improve our delivery of subsequent releases and enable as much of the community to participate as possible.

The full retrospective document is published here.   Some of the retrospective highlights include:

1) Areas that went well

  • Excellent progress after clearing legal hurdles
  • Excellent press and analyst coverage
  • Livestream and Code One announcements went well. 

2) Areas for improvement

  • Spec process and release management
    • Jakarta EE 8 seemed mostly Oracle driven.   Need more visible participation from others.
    • Need better documentation on the spec process and related processes
    • Need to reduce the number of meetings required to get out the next release
  • Communications
    • Need to communicate what we are doing and the progress we have made more.
    • Review the approach to Jakarta EE update calls
    • Need more even engagement via social media from all Strategic members.
  • Organization
    • Clarify the roles of the various committees
  • Other/general
    • We need to understand the outside view of the box we (the Jakarta EE/Eclipse Foundation) live in, and communicate accordingly.
    • Need a clear roadmap

3) In response to this feedback, the Steering Committee plans to take the following actions:

  • Proactive communication of CY2020 Jakarta EE Working Group Program Plan
    • This plan itself addresses much of the feedback above
    • Recommend sharing it now
    • Share budgets when confirmed/approved

We followed up on this topic during the December 11 Jakarta EE Update call.

  • Proactive communication of the Jakarta EE 9 release plan
    • Addresses some feedback on near term issues (e.g. move to jakarta namespace)
    • Should be placed in context of post Jakarta EE 9 goals
    • Participate in Jakarta EE update call on Nov 13 (planned)
    • Share when approved by Steering Committee

We are following up in all of these areas, both soliciting input and communicating through Jakarta EE Update calls. We expect to announce formal approval for the release plan after this week’s Spec Committee meeting.

  • Proactive communication of significant Jakarta EE initiatives in general
    • Build into any significant planning event

We are building this into planning for JakartaOne Livestream Japan on Feb. 26, and Cloud Native for Java Day in Amsterdam on March 30.

  • Review approach to Jakarta EE Update calls (request Eclipse Foundation staff to drive review).

Your feedback on this topic is welcome – what do you want included in these calls?

  • Communication from Specification Committee on plan for addressing retrospective findings above after appropriate review and consensus.

The Spec Committee has not done an independent review per se, but is making a strong effort in conjunction with the Jakarta EE Platform Project to communicate proactively about the direction of Jakarta EE 9. Look for more information on Jakarta EE 9 this week.

The process of responding to our experiences and feedback is ongoing. We hope to continue to hear from you, so that we can continue to improve our release process for the community.


by Will Lyons at January 13, 2020 09:42 PM
