
Well-secured and documented REST API with Eclipse MicroProfile and Quarkus

February 19, 2020 10:00 PM

The Eclipse MicroProfile specification provides many helpful sections about building well-designed, microservice-oriented applications. OpenAPI, JWT Propagation, and JAX-RS are some of them.
To see how it works in practice, let's design two typical REST resources based on the Quarkus MicroProfile implementation: an unsecured token resource to generate a JWT, and a secured user resource.

The easiest way to bootstrap a Quarkus application from scratch is to generate the project structure with the provided starter page, code.quarkus.io. Just select the build tool you like and the extensions you need. In our case these are:

  • SmallRye JWT
  • SmallRye OpenAPI

I prefer Gradle, and my build.gradle looks pretty simple:

group 'org.kostenko'
version '1.0.0'
plugins {
    id 'java'
    id 'io.quarkus'
}
repositories {
     mavenLocal()
     mavenCentral()
}
dependencies {
    implementation 'io.quarkus:quarkus-smallrye-jwt'
    implementation 'io.quarkus:quarkus-smallrye-openapi'
    implementation 'io.quarkus:quarkus-resteasy-jackson'    
    implementation 'io.quarkus:quarkus-resteasy'
    implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}")
    testImplementation 'io.quarkus:quarkus-junit5'
    testImplementation 'io.rest-assured:rest-assured'
}
compileJava {
    options.compilerArgs << '-parameters'
}

Now we are ready to enrich a standard JAX-RS service with OpenAPI and JWT support:

@RequestScoped
@Path("/user")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Tags(value = @Tag(name = "user", description = "All the user methods"))
@SecurityScheme(securitySchemeName = "jwt", type = SecuritySchemeType.HTTP, scheme = "bearer", bearerFormat = "jwt")
public class UserResource {

    @Inject
    @Claim("user_name")
    Optional<JsonString> userName;

    @POST
    @PermitAll
    @Path("/token/{userName}")
    @APIResponses(value = {
        @APIResponse(responseCode = "400", description = "JWT generation error"),
        @APIResponse(responseCode = "200", description = "JWT successfuly created.", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Create JWT token by provided user name")
    public User getToken(@PathParam("userName") String userName) throws Exception {
        User user = new User();
        user.setJwt(TokenUtils.generateJWT(userName));
        return user;    
    }

    @GET
    @RolesAllowed("user")
    @Path("/current")
    @SecurityRequirement(name = "jwt", scopes = {})
    @APIResponses(value = {
        @APIResponse(responseCode = "401", description = "Unauthorized Error"),
        @APIResponse(responseCode = "200", description = "Return user data", content = @Content(schema = @Schema(implementation = User.class)))})
    @Operation(summary = "Return user data by provided JWT token")
    public User getUser() {
        User user = new User();
        user.setName(userName.get().getString());
        return user;
    }
}
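
The User class returned by both methods is a plain POJO carrying the user name and the generated token; the original post does not show it, so here is a minimal sketch of what it might look like:

public class User {

    private String name;
    private String jwt;

    // getters and setters are used by Jackson to serialize the JSON response
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getJwt() { return jwt; }
    public void setJwt(String jwt) { this.jwt = jwt; }
}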

First, let's take a brief look at the OpenAPI annotations used:

  • @Tags(value = @Tag(name = "user", description = "All the user methods")) - Represents a tag. A tag is meta-information you can use to help organize your API endpoints.
  • @SecurityScheme(securitySchemeName = "jwt", type = SecuritySchemeType.HTTP, scheme = "bearer", bearerFormat = "jwt") - Defines a security scheme that can be used by the operations.
  • @APIResponse(responseCode = "401", description = "Unauthorized Error") - Corresponds to the OpenAPI response model object, which describes a single response from an API operation.
  • @Operation(summary = "Return user data by provided JWT token") - Describes an operation, typically an HTTP method against a specific path.
  • @Schema(implementation = User.class) - Allows the definition of input and output data types.

For more details about OpenAPI annotations, please refer to the MicroProfile OpenAPI Specification.

After starting the application, you will be able to get your OpenAPI description in .yaml format at http://0.0.0.0:8080/openapi, or even enjoy the Swagger UI at http://0.0.0.0:8080/swagger-ui/.

Note: by default, the Swagger UI is available in dev mode only. If you would like to keep Swagger in production, add the following property to your application.properties:

quarkus.swagger-ui.always-include=true

The second part of this post covers JWT role-based access control (RBAC) for microservice endpoints. JSON Web Tokens are an open, industry-standard (RFC 7519) method for representing claims securely between two parties, and below we will see how easily they can be integrated into your application with Eclipse MicroProfile.

As JWT relies on cryptography, we need to generate a public/private key pair before we start coding:

# Generate a private key
openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048

# Derive the public key from the private key
openssl rsa -pubout -in private_key.pem -out public_key.pem

Now we are able to generate a JWT and sign it with our private key, for example, like this:

public static String generateJWT(String userName) throws Exception {

    long currentTimeInSecs = System.currentTimeMillis() / 1000;

    Map<String, Object> claimMap = new HashMap<>();
    claimMap.put("iss", "https://kostenko.org");
    claimMap.put("sub", "jwt-rbac");
    claimMap.put("exp", currentTimeInSecs + 300);
    claimMap.put("iat", currentTimeInSecs);
    claimMap.put("auth_time", currentTimeInSecs);
    claimMap.put("jti", UUID.randomUUID().toString());
    claimMap.put("upn", "UPN");
    claimMap.put("groups", Arrays.asList("user"));
    claimMap.put("raw_token", UUID.randomUUID().toString());
    claimMap.put("user_name", userName);

    return Jwt.claims(claimMap).jws().signatureKeyId("META-INF/private_key.pem").sign(readPrivateKey("META-INF/private_key.pem"));
}
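
The readPrivateKey helper used above is not part of the MicroProfile JWT API. A minimal sketch of it could load the PKCS#8 PEM file (as generated by the openssl commands earlier) from the classpath and decode it; the resource handling below is an assumption:

// Hypothetical helper; requires java.io.InputStream, java.nio.charset.StandardCharsets,
// java.security.KeyFactory, java.security.PrivateKey, java.security.spec.PKCS8EncodedKeySpec, java.util.Base64.
public static PrivateKey readPrivateKey(String resourceName) throws Exception {
    try (InputStream is = Thread.currentThread().getContextClassLoader().getResourceAsStream(resourceName)) {
        String pem = new String(is.readAllBytes(), StandardCharsets.UTF_8);
        String base64 = pem
                .replace("-----BEGIN PRIVATE KEY-----", "")
                .replace("-----END PRIVATE KEY-----", "")
                .replaceAll("\\s", "");
        byte[] der = Base64.getDecoder().decode(base64);
        return KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(der));
    }
}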

For additional information about the JWT structure, please refer to https://jwt.io.

Time to review our application's security annotations:

  • @RequestScoped - not a security annotation as such, but since the JWT is request scoped, we need it for the resource to work correctly;
  • @PermitAll - specifies that all security roles are allowed to invoke the specified method;
  • @RolesAllowed("user") - specifies the list of roles permitted to access the method;
  • @Claim("user_name") - allows us to inject a claim provided by the JWT.

To configure JWT in your application.properties, please add

quarkus.smallrye-jwt.enabled=true
mp.jwt.verify.publickey.location=META-INF/public_key.pem
mp.jwt.verify.issuer=https://kostenko.org

# quarkus.log.console.enable=true
# quarkus.log.category."io.quarkus.smallrye.jwt".level=TRACE
# quarkus.log.category."io.undertow.request.security".level=TRACE

And that is basically it: if you try to reach the /user/current service without a JWT token in the Authorization header, or with a bad one, you will get an HTTP 401 Unauthorized error.

curl example:

curl -X GET "http://localhost:8080/user/current" -H "accept: application/json" -H "Authorization: Bearer eyJraWQiOiJNRVRBLUlORi9wcml2YXRlX2tleS5wZW0iLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJqd3QtcmJhYyIsInVwbiI6IlVQTiIsInJhd190b2tlbiI6IjQwOWY3MzVkLTQyMmItNDI2NC1iN2UyLTc1YTk0OGFjMTg3MyIsInVzZXJfbmFtZSI6InNlcmdpaSIsImF1dGhfdGltZSI6MTU4MjE5NzM5OSwiaXNzIjoiaHR0cHM6Ly9rb3N0ZW5rby5vcmciLCJncm91cHMiOlsidXNlciJdLCJleHAiOjkyMjMzNzIwMzY4NTQ3NzU4MDcsImlhdCI6MTU4MjE5NzM5OSwianRpIjoiMzNlMGMwZjItMmU0Yi00YTczLWJkNDItNDAzNWQ4NTYzODdlIn0.QteseKKwnYJWyj8ccbI1FuHBgWOk98PJuN0LU1vnYO69SYiuPF0d9VFbBada46N_kXIgzw7btIc4zvHKXVXL5Uh3IO2v1lnw0I_2Seov1hXnzvB89SAcFr61XCtE-w4hYWAOaWlkdTAmpMSUt9wHtjc0MwvI_qSBD3ol_VEoPv5l3_W2NJ2YBnqkY8w68c8txL1TnoJOMtJWB-Rpzy0XrtiO7HltFAz-Gm3spMlB3FEjnmj8-LvMmoZ3CKIybKO0U-bajWLPZ6JMJYtp3HdlpsiXNmv5QdIq1yY7uOPIKDNnPohWCgOhFVW-bVv9m-LErc_s45bIB9djwe13jFTbNg"
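
Since the generated project already contains quarkus-junit5 and rest-assured, the same negative check can be automated in a test. A minimal sketch, assuming the resource paths shown above:

import static io.restassured.RestAssured.given;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

@QuarkusTest
public class UserResourceTest {

    @Test
    public void currentUserWithoutTokenIsUnauthorized() {
        // no Authorization header at all, so the secured endpoint must answer 401
        given()
            .accept("application/json")
        .when()
            .get("/user/current")
        .then()
            .statusCode(401);
    }
}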

The source code of the described sample application is available on GitHub.


February 19, 2020 10:00 PM

Scope + Communication – The magic formula of microservices

by Thorben Janssen at February 19, 2020 08:30 AM

The post Scope + Communication – The magic formula of microservices appeared first on Thoughts on Java.

For quite some time, finding the right scope of a microservice was proclaimed to solve all problems. If you do it right, implementing your service is supposed to be easy, your services are independent of each other, and you don’t need to worry about any communication between your services.

Unfortunately, reality didn’t hold up to this promise all too well. Don’t get me wrong, finding the right scope of a service helps. Implementing a few right-sized services is much easier than creating lots of services that are too small and that depend on each other. Unfortunately, that doesn’t mean that all problems are solved or that there is no communication between your services.

But let’s take a step back and discuss what “the right scope” means and why it’s so important.

What is the right scope of a microservice?

Finding the right scope of a service is a lot harder than it might seem. It requires a good understanding of your business domain. That’s why most architects agree that a bounded context, as it’s defined by Domain-Driven Design, represents a proper scope of a microservice.

Interestingly enough, when we talk about a bounded context, we don’t talk about size. We talk about the goal that the model of a bounded context is internally consistent. That means that there is only one exact definition of each concept. If you try to model the whole business domain, that’s often hard to achieve.

A customer in an order management application, for example, is different from a customer in an online store. The customer in the store browses around and might or might not decide to buy something. We have almost no information about that person. A customer in an order management application, on the other hand, has bought something, and we know the name and their payment information. We also know which other things that person bought before.

If you try to use the same model of a customer for both subsystems, your definition of a customer loses a lot of precision. If you talk about customers, nobody exactly knows which kind of customer you mean.

All of that gets a lot easier and less confusing if you split that model into multiple bounded contexts. That enables you to have 2 independent definitions of a customer: one for the order management and one for the online store. Within each context, you can precisely define what a customer is.

The same is true for monolithic and microservice applications. A monolith is often confusing, and there might be different definitions or implementations of the same concept within the application. That is confusing and makes the monolith hard to understand and maintain. But if you split it into multiple microservices, this gets a lot easier. If you do it right, there are no conflicting implementations or definitions of the same concept within one microservice.

Bounded contexts and microservices are connected

As you can see, there is an apparent similarity between microservices and bounded contexts. And that’s not the only one. There is another similarity that often gets ignored. Bounded contexts in DDD can be connected to other services. You’re probably not surprised if I tell you that the same is true for microservices.

These connections are necessary, and you can’t avoid them. You might use different definitions of a customer in your online store and your order management application. But for each customer in your order management system, there needs to be a corresponding customer in the online store system. And sooner or later, someone will ask you to connect this information.

Let’s take a closer look at a few situations in which we need to share data between microservices.

Data replication

The most obvious example of services that need to exchange data are services that provide different functionalities on the same information. Typical examples of services that use data owned by other services are management dashboards, recommendation engines, and any other kind of application that needs to aggregate information.

The functionality provided by these services shouldn’t become part of the services that are owning the data. By doing that, you would implement 2 or more separate bounded contexts within the same application. That will cause the same issues as we had with unstructured monoliths.

It’s much better to replicate the required information asynchronously instead. As an example, the order, store, and inventory service replicate their data asynchronously, and the management dashboard aggregates them to provide the required statistics to the managers.

When you implement such a replication, it’s important to ensure that you don’t introduce any direct dependencies between your services. In general, this is achieved by exchanging messages or events via a message broker or an event streaming platform.

There are various patterns that you can use to replicate data and decouple your services. In my upcoming Data and Communication Patterns for Microservices course, I recommend using the Outbox Pattern. It’s relatively easy to implement, enables great decoupling of your services, scales well, and ensures a reasonable level of consistency.

Coordinate complex operations

Another example is a set of services that need to work together to perform a complex business operation. In the case of an online store, that might be the order management service, the payment service, and the inventory service. All 3 of them model independent contexts, and there are lots of good reasons to keep them separate.

But when a customer orders something, all 3 services need to work together. The order management service needs to receive and handle the order. The payment service processes the payment and the inventory service reserves and ships the products.

Each service can be implemented independently, and it provides its part of the overall functionality. But you need some form of coordination to make sure that each order gets paid before you ship the products or that you only accept orders that you can actually fulfill.

As you can see, this is another example of services that need to communicate and exchange data. The only alternative would be to merge these services into one and to implement a small monolith. But that’s something we decided to avoid.

You can implement such operations using different patterns. If you do it right, you can avoid any direct dependencies between your services. I recommend using one of the 2 forms of the SAGA patterns, which I explain in great detail in my Data and Communication Patterns for Microservices course.

You need the right scope and the right communication

To sum it up, finding the proper scope for each service is important. It makes the implementation of each service easier and avoids any unnecessary communication or dependencies between your services.

But that’s only the first step. After you carefully defined the scope of your services, there will be some services that are connected to other services. Using the right patterns, you can implement these connections in a reliable and scalable way without introducing direct dependencies between your services.

The post Scope + Communication – The magic formula of microservices appeared first on Thoughts on Java.


by Thorben Janssen at February 19, 2020 08:30 AM

The Lord of the Jars--airhacks.fm podcast

by admin at February 16, 2020 02:48 PM

Subscribe to airhacks.fm podcast via: spotify| iTunes| RSS

The #75 airhacks.fm episode with Alex Soto (@alexsotob), about the journey from HTML over Servlets to Java EE, is available for download.

by admin at February 16, 2020 02:48 PM

Hashtag Jakarta EE #7

by Ivar Grimstad at February 16, 2020 10:59 AM

Welcome to the seventh issue of Hashtag Jakarta EE!

One of the most common questions I get when talking about Jakarta EE is: “How can I help?”. Well, here is a suggestion: The post “Help wanted: improved signature test tool” by Bill Shannon on the Jakarta EE Community mailing list asks for help improving the signature test tool used by the TCK.

In short, what we need help with is to fix the features of this tool to make it possible to either limit the recorded signatures to the API being tested or exclude the signatures for the JDK classes. See the GitHub issue for details.

MicroProfile will produce specs and it is up to others how they adopt or consume them

This week’s MicroProfile Hangout was dedicated to Working Group discussions. The agenda was, as always, set by the participants and the topic this week quickly became technical alignment between MicroProfile and related technologies, such as Jakarta EE. The result of this discussion is summarized by John Clingan in the thread MicroProfile Working Group discussion – Push vs pull on the MicroProfile mailing list. Basically, what this approach means is that MicroProfile will produce specs and it is up to others how they adopt or consume them.

There is an interesting Twitter discussion going on around Quarkus, CDI and the requirements to claim MicroProfile compatibility. This discussion has moved over to various threads on the MicroProfile mailing list. The disagreement within the MicroProfile community is whether the Java EE specifications (JAX-RS, JSON-B, JSON-P, and CDI) are a part of MicroProfile or just referenced APIs. Why this distinction is important is worth a blog post on its own, but the gist of it is that if CDI is a part of the platform, a product cannot be claimed to be compatible with MicroProfile unless the CDI TCK is passed.

For reference, I have included the graphics describing the content of the first (1.0) and the current (3.2) release of MicroProfile below.


by Ivar Grimstad at February 16, 2020 10:59 AM

Migration from @Stateless (BCE) to Quarkus

by admin at February 14, 2020 02:56 PM

A typical BCE Jakarta EE application comprises a JAX-RS resource:

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
    
@Path("ping")
public class MessageResource {

    @Inject
    MessageFetcher messageFetcher;

    @GET
    public String ping() {
        return "hello, " + this.messageFetcher.getMessage();
    }

}
and a corresponding implementation in the boundary package:

package com.airhacks.ping.boundary;
import com.airhacks.ping.control.MessageConfigurator;
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class MessageFetcher {

    @Inject
    MessageConfigurator configurator;

    public String getMessage() {
        return this.configurator.message();
    }

}

The boundary acts as a facade and usually coordinates multiple controls:


package com.airhacks.ping.control;

import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

public class MessageConfigurator {

    @Inject
    @ConfigProperty(name = "message")
    String message;

    public String message() {
        return this.message + " generated at: " + System.currentTimeMillis();
    }
}    

To run the code on Quarkus, you will have to replace @Stateless with @Transactional and @RequestScoped, or a stereotype which combines both:


@Stereotype
@Transactional
@RequestScoped
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface Boundary {}    

Now the MessageFetcher looks like:


@Boundary
public class MessageFetcher {    }    

All controls have to be annotated to be injectable on Quarkus. A @Dependent (default) scope could be used for that purpose, or the following stereotype:


import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Stereotype;

@Stereotype
@Dependent
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface Control {}

The MessageConfigurator control has to be annotated with the @Control stereotype and looks like:


@Control    
public class MessageConfigurator { }
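
One more detail when running this on Quarkus: the @ConfigProperty(name = "message") injection shown earlier only resolves if the key is present, for example in src/main/resources/application.properties (the value below is just a placeholder):

message=hello, quarkus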

Both stereotypes are used in the Web Push gateway: gatelink.


by admin at February 14, 2020 02:56 PM

Simple note about using JPA relation mappings

February 13, 2020 10:00 PM

There are a lot of typical examples of how to build JPA @OneToMany and @ManyToOne relationships in your Jakarta EE application. Usually it looks like this:

@Entity
@Table(name = "author")
public class Author {
    @OneToMany
    private List<Book> book;
    ...
}
@Entity
@Table(name = "book")
public class Book {
    @ManyToOne
    private Author author;
    ...
}

This code looks pretty clear, but in my opinion you should NOT use this style in your real-world application. From years of experience with JPA, I can definitely say that sooner or later your project will get stuck with well-known performance issues and holy-war questions about: N+1 selects, LazyInitializationException, unidirectional @OneToMany, cascade types, LAZY vs EAGER, JOIN FETCH, entity graphs, fetching a lot of unneeded data, extra queries (for example: selecting an Author by ID before persisting a Book), et cetera. Even if you have answers for each potential issue above, the proposed solution will usually add unreasonable complexity to the project.

To avoid potential issues, I recommend following these rules:

  • Avoid using @OneToMany at all
  • Use @ManyToOne to build constraints, but work with the ID instead of the Entity

Unfortunately, the simple snippet below does not work as expected when persisting:

@ManyToOne(targetEntity = Author.class)
private long authorId;

But we can use the following one instead:

@JoinColumn(name = "authorId", insertable = false, updatable = false)
@ManyToOne(targetEntity = Author.class)
private Author author;

private long authorId;

public long getAuthorId() {
    return authorId;
}

public void setAuthorId(long authorId) {
    this.authorId = authorId;
}
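
With this mapping a Book can be persisted for an existing Author by setting only the foreign key value, without loading the Author entity first. A minimal sketch (the title field and the EntityManager handling are assumptions):

// no extra SELECT of Author is needed; the @JoinColumn still enforces the FK constraint
Book book = new Book();
book.setTitle("JPA without surprises"); // hypothetical field on the Book entity
book.setAuthorId(authorId);             // plain FK value of an existing Author
entityManager.persist(book);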

I hope these two simple rules help you enjoy all the power of JPA while keeping things simple and decreasing complexity.


February 13, 2020 10:00 PM

Distributed Transactions – Don’t use them for Microservices

by Thorben Janssen at February 12, 2020 02:09 PM

The post Distributed Transactions – Don’t use them for Microservices appeared first on Thoughts on Java.

Since I started talking about microservices and the challenges that you have to solve whenever you want to exchange data between your services, I hear 3 things:

  1. You only need to model the scope of your services “the right way” to avoid these problems.
  2. We use multiple local transactions, and everything works fine. It’s really not that big of a deal.
  3. We have always used distributed transactions to ensure data consistency. We will keep doing that for our microservice architecture.

Let’s quickly address the first 2 answers before we get to the main part of this article.

Designing services the right way

It’s a popular myth that you can solve all problems by designing the scope of your services the right way. That might be the case for highly scalable “hello” world applications that you see in demos. But it doesn’t work that way in the real world.

Don’t get me wrong; designing the scope of your services is important, and it makes the implementation of your application easier. But you will not be able to avoid communication between your services completely. You always have some services that offer their functionality based on other services.

An example of that is an OrderInfo service in an online bookstore. It shows the customer the current status of their order based on the information managed by the Order service, the Inventory service, and the Book service.

Another example is an Inventory service, which needs to reserve a book for a specific order and prepare it for delivery after the Order and the Payment service processed the order.

In these cases, you either:

  • Implement some form of data exchange between these services or
  • Move all the logic to the frontend, which in the end is the same approach as option 1, or
  • Merge all the services into 1, which gets you a monolithic application.

As you can see, there are several situations in which you need to design and implement some form of communication and exchange data between your services. And that’s OK if you do it intentionally. There are several patterns and tools for that. I explain the most important and popular ones in my upcoming course Data and Communication Patterns for Microservices. It launches in just a few days. I recommend joining the waitlist now so that you don’t miss it.

Using multiple local transactions

If teams accepted that they need to exchange data between their services, quite a few decide to use multiple, independent, local transactions. This is a risky decision because sooner or later, it will cause data inconsistencies.

By using multiple local transactions, you create a situation that’s called a dual write. I explained it in great detail in a previous article. To summarize that article, you can’t handle the situation in which you try to commit 2 independent transactions, and the 2nd commit fails. You might try to implement workarounds that try to revert the first transaction. But you can’t guarantee that they will always work.

Distributed transactions and their problems in a microservice application

In a monolithic application or older distributed applications, we often used transactions that span over multiple external systems. Typical examples are transactions that include one or more databases or a database and a message broker. These transactions are called global or distributed transactions. They enable you to apply the ACID principle to multiple systems.

Unfortunately, they are not a good fit for a microservice architecture. They use a pattern called 2-phase commit. This pattern describes a complex process that requires multiple steps and locks.

2-phase commit protocol

As you might have guessed from the name, the main difference between a local and distributed transaction that uses the two-phase commit pattern is the commit operation. As soon as more than one system is involved, you can’t just send a commit message to each of them. That would create the same problems as we discussed for dual writes.

The two-phase commit avoids these problems by splitting the commit into 2 steps (a simplified code sketch follows the list below):

  1. The transaction coordinator first sends a prepare command to each involved system.
    Each system then checks if they could commit the transaction.
  2. If that’s the case, they respond with “prepared” and the transaction coordinator sends a commit command to all systems. The transaction was successful, and all changes get committed.
    If any of the systems doesn’t answer the prepare command or responds with “failed”, the transaction coordinator sends an abort command to all systems. This rolls back all the changes performed within the transaction.
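
To make the two phases a bit more tangible, here is a deliberately simplified coordinator sketch in Java. It only illustrates the protocol described above; it is not how any real transaction manager is implemented:

import java.util.List;

// Simplified two-phase commit coordinator: phase 1 collects "prepared" votes,
// phase 2 commits only if every participant voted yes, otherwise aborts all of them.
interface Participant {
    boolean prepare();   // phase 1 vote: true = "prepared", false = "failed"
    void commit();
    void abort();
}

class TwoPhaseCommitCoordinator {

    boolean execute(List<Participant> participants) {
        // Phase 1: ask every involved system to prepare
        for (Participant p : participants) {
            if (!p.prepare()) {
                // one "failed" vote (or missing answer) aborts the whole transaction
                participants.forEach(Participant::abort);
                return false;
            }
        }
        // Phase 2: every system voted "prepared", so commit everywhere
        participants.forEach(Participant::commit);
        return true;
    }
}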

As you can see, a two-phase commit is more complicated than the simple commit of a local transaction. But it gets even worse when you take a look at the systems that need to prepare and commit the transaction.

The problem of a 2-phase commit

After a system confirmed the prepare command, it needs to make sure that it will be able to commit the transaction when it receives the commit command. That means nothing is allowed to change until that system gets the commit or abort command.

The only way to ensure that is to lock all the information that you changed in the transaction. As long as this lock is active, no other transaction can use this information. These locks can become a bottleneck that slows down your system and should obviously be avoided.

This problem also existed in a distributed, monolithic application. But the small scope of a microservice and the huge number of services that are often deployed make it worse.

A 2-phase commit between a transaction coordinator and 2 external systems is already bad enough. But the complexity and the performance impact of the required locks increase with each additional external system that takes part in the transaction.

Due to that, a distributed transaction is no longer an easy to use approach to ensure data consistency that, in the worst case, might slow down your application a little bit. In a microservice architecture, a distributed transaction is an outdated approach that causes severe scalability issues. Modern patterns that rely on asynchronous data replication or model distributed write operations as orchestrated or choreographed SAGAs avoid these problems. I explain all of them in great detail in my upcoming course Data and Communication Patterns for Microservices. It launches in just a few days. If you’re building microservices, I recommend joining the waitlist now so that you don’t miss it.

The post Distributed Transactions – Don’t use them for Microservices appeared first on Thoughts on Java.


by Thorben Janssen at February 12, 2020 02:09 PM

February / March 2020 Java and Web Events

by admin at February 12, 2020 03:52 AM

  1. Cloud Native Patterns und Approaches mit Jakarta EE + MicroProfile #lowslides #cloudful
    JUG Oberpfalz OTH Weiden 2020-02-19 https://www.meetup.com/JUG-Oberpfalz/events/267274608/
  2. Microprofile Productivity with Quarkus
    JUG BMW MUC 2020-02-20 https://www.meetup.com/de-DE/Java-User-Group-Munchen-JUGM/events/268625315/
  3. Kickass Web Components with a bit of Quarkus
    Free Meetup Mayflower MUC 2020-03-10 https://www.meetup.com/techNoid/events/268420210/
  4. Keynote: What Should Happen in 2020 with Java and Web
    Voxxed Days Bucharest Sheraton Hotel Bucharest 2020-03-13 https://romania.voxxeddays.com/bucharest/voxxed-days-bucharest-2020/
  5. Session: 2020 Predictions Interactive On-Stage Hacking #slideless #nomigrations
    Voxxed Days Bucharest Sheraton Hotel Bucharest 2020-03-13 https://romania.voxxeddays.com/bucharest/voxxed-days-bucharest-2020/
  6. From Java Developer to Web Guru in 1 hour - No slides
    JUG Session Google MUC Munich 2020-03-20 https://www.meetup.com/OpenValueMuenchen/events/268323924/
  7. MicroProfile with Quarkus
    Airhacks Workshops Airport MUC 2020-03-24 http://workshops.adam-bien.com/quarkus.htm
  8. Micro Frontends with Web Components
    Airhacks Workshops Airport MUC 2020-03-25 http://workshops.adam-bien.com/microfrontends.htm
  9. Jakarta EE + MicroProfile + Kubernetes = The Productivity Dream #slideless
    Kubecon Cloud Native for Java (CN4J) RAI Amsterdam 2020-03-30 https://eclipse-5413615.hs-sites.com/cn4j-day

by admin at February 12, 2020 03:52 AM

Payara Platform 201 Release - More Updates to Monitoring Console

by Jan Bernitt at February 11, 2020 12:26 PM

In Payara Platform 5.194 we introduced the monitoring console. The upcoming 5.201 release now offers numerous improvements and additions. We continued to follow our vision of a monitoring tool that users can configure to their needs, ranging from new tools such as watches and alerts, to new colour themes and settings users can tweak to match their individual preferences.


by Jan Bernitt at February 11, 2020 12:26 PM

Jakarta EE Working Group is Open to Forming a Working Relationship with MicroProfile

by Will Lyons at February 11, 2020 09:25 AM

Hello - 

Last week the following resolution was passed unanimously by the Jakarta EE Steering Committee:

The Jakarta EE Working Group Steering Committee would be open to proposals for collaborative processes that may achieve a consensus approach to a joint Working Group, or working with a standalone MicroProfile Working Group.  If a single Cloud Native for Java (CN4J) Working Group is preferred by both communities then the Jakarta EE Working Group is open to considering the possibility of forming a joint Working Group with the MicroProfile community.   We recognize that forming a joint Working Group would require significant modifications to the current Jakarta EE Working Group charter, and are open to that prospect.   We are open to considering the current CN4J Working Group proposal, and/or evolving that proposal, and potentially other proposals, together with the MicroProfile community, in an effort to best meet the needs of MicroProfile and Jakarta EE, and to create more opportunities for synergy between the two efforts.  We are open to discuss whatever approach works best, and would welcome MicroProfile community feedback.

There is active discussion on this topic at the MicroProfile sandbox, with a proposal for an alternative option as well.  Please join in.  We're hoping these two efforts can be combined in a way that maximizes synergies between them for market success.  We're sure there will be more discussion on this topic in coming days and weeks.

 

 


by Will Lyons at February 11, 2020 09:25 AM

Paths "Subtraction" with Path#relativize

by admin at February 10, 2020 06:37 AM

The method relativize computes the difference between a path (src/main/java/com/airhacks) and its root (src/main/java). This is useful for subtracting a common path prefix:

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

import java.nio.file.Path;
import java.nio.file.Paths;
import org.junit.Test;

public class PathSubtractionTest {

    @Test
    public void subtractCommonPrefix() {
        Path fromProjectRoot = Paths.get("src/main/java/com/airhacks");
        Path projectRoot = Paths.get("src/main/java");
        Path subtracted = projectRoot.relativize(fromProjectRoot);
        String actual = subtracted.toString();
        String expected = "com/airhacks";
        assertThat(actual, is(expected));
    }
}


by admin at February 10, 2020 06:37 AM

String#split with a "dot"

by admin at February 09, 2020 04:04 PM

String#split with a dot "." returns an empty array:

    String packages[] = "com.airhacks".split(".");
    assertThat(packages.length, is(0));

The split method parameter is a regular expression, and a "dot" matches any character (it may or may not match line terminators). Every character of the input therefore acts as a delimiter, all the resulting substrings are empty, and trailing empty strings are removed from the returned array - hence the length of 0.

Escaping the dot solves the "problem":


    String packages[] = "com.airhacks".split("\\.");
    assertThat(packages.length, is(2));
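
An alternative to escaping by hand is java.util.regex.Pattern#quote, which turns the literal string into a regular expression that matches it verbatim:

    // Pattern.quote(".") produces "\Q.\E", a regex that matches the literal dot
    String packages[] = "com.airhacks".split(Pattern.quote("."));
    assertThat(packages.length, is(2));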

by admin at February 09, 2020 04:04 PM

Hashtag Jakarta EE #6

by Ivar Grimstad at February 09, 2020 10:59 AM

Welcome to the sixth issue of Hashtag Jakarta EE!

On the fun side, I was made aware that my shoutout for the Hashtag series may be a little confusing as you can see in my conversation with Ronnie Zolverda.

I didn’t want to use #5 to indicate number 5 since Twitter would then interpret the hashtag (#) as if I were tagging the number 5. Also interesting that nobody reacted on the first 4 posts…

From this week on, I will tweet that “Hashtag Jakarta EE number X is out!” to avoid confusion in the future 🙂

So, over to the technical side. Gunnar Morling referred me to a recent article of his where he describes how to use the JDK Flight Recorder to monitor REST APIs.

We didn’t have any Jakarta Tech Talks or Update calls this week, but the work with Jakarta EE 9 proceeds as planned. The status is best followed by checking out the project board. We have now passed the deadline for individual component release plans. These are Java Activation Framework 2.0 and Jakarta Enterprise Beans 4.0. The rest will follow the release plan for the full platform.

The discussions regarding establishing a working group for MicroProfile, mentioned in Hashtag Jakarta EE #1 and #3, continue with weekly MicroProfile hangouts as well as being a recurring topic in the Jakarta EE Steering Committee.

So far, there are two proposals on the table; a joint working group or two separate working groups. While the structure of the working group(s) is important, another aspect is the technical alignment of Jakarta EE and MicroProfile. A couple of weeks ago David Blevins put forward a couple of proposals to bootstrap the discussions. A third proposal was presented by Steve Millidge where he proposes that profiles in Jakarta EE are promoted to individual brands and that MicroProfile becomes a profile of Jakarta EE. Interesting thoughts!


by Ivar Grimstad at February 09, 2020 10:59 AM

MicroProfile and Jakarta EE Technical Alignment

by Steve Millidge at February 06, 2020 12:15 PM

The transition of Java EE to the Eclipse Foundation is now complete with the release of the Jakarta EE 8 Platform Specification and the compatible implementations, including Payara Server. The release plan for Jakarta EE 9 is also approved, and will move all the Java EE APIs to the jakarta namespace - providing a future platform for Jakarta EE 10 and beyond.


by Steve Millidge at February 06, 2020 12:15 PM

Jakarta EE Community Update February 2020

by Tanja Obradovic at February 05, 2020 07:21 PM

With the Jan 16 announcement that we’re targeting a mid-2020 release for Jakarta EE 9, the year is off to a great start for the entire Jakarta EE community. But, the Jakarta EE 9 release announcement certainly wasn’t our only achievement in the first month of 2020.

Here’s a look at the great progress the Jakarta EE community made in January, along with some important reminders and details about events you won’t want to miss.

____________________________________________________________

The Java EE Guardians Are Now the Jakarta EE Ambassadors

The rebranding is complete and the website has been updated to reflect the evolution. Also note that the group’s:

·  Twitter handle has been renamed to @jee_ambassadors

·  Google Group has been renamed to https://groups.google.com/forum/#!forum/jakartaee-ambassadors

Everyone at the Eclipse Foundation and in the Jakarta EE community is thrilled the Java EE Guardians took the time and effort to rebrand themselves for Jakarta EE. I’d like to take this opportunity to thank everyone involved with the Jakarta EE Ambassadors for their contributions to the advancement of Jakarta EE. As Eclipse Foundation Executive Director, Mike Milinkovich, noted, “The new Jakarta EE Ambassadors are an important part of our community, and we very much appreciate their support and trust.”

I look forward to collaborating with the Jakarta EE Ambassadors to drive the success and growth of the Jakarta EE community. I’d also like to encourage all Jakarta EE Ambassadors to start using the new logo to associate themselves with the group.

____________________________________________________________

Java User Groups Will Be Able to Adopt a Jakarta EE Specification

We’re working to enable Java User Groups (JUGs) to become actively involved in evolving the Jakarta EE Specification through our adopt-a-spec program.

In addition to being Jakarta EE contributors and committers, JUG members that adopt-a-spec will be able to:

·  Blog about the Specification

·  Tweet about the Specification

·  Write an end-to-end test web application, such as Pet Store for Jakarta EE

·  Review the specification and comment on unclear content

·  Write additional tests to supplement those we already have

We’ll share more information and ideas for JUG groups, organizers, and individuals to get involved as we finalize the adopt-a-spec program details and sign up process.

____________________________________________________________ 

We’re Improving Opportunities for Individuals in the Jakarta EE Working Group

Let me start by saying we welcome everyone who wants to get involved with Jakarta EE! We’re fully aware there’s always room for improvement, and that there are issues we don’t yet know about. If you come across a problem, please get in touch and we’ll be happy to help.

We recently realized we’ve made it very difficult (read impossible) for individuals employed by companies that are neither Jakarta EE Working Group participants nor Eclipse Foundation Members to become committers in Jakarta EE Specification projects.

We’re working to address the problem for these committers and are aiming to have a solution in place in the coming weeks. In the meantime, these individuals can become contributors.

We’ve provided the information below to help people understand the paperwork that must be completed to become a Jakarta EE contributor or a committer. Please look for announcements in the next week or so.

 ______________________________________________________ 

It’s Time to Start Working on Jakarta EE 9               

Now that the Jakarta EE 9 Release Plan is approved, it’s time for everyone in the Jakarta EE community to come together and start working on the release.

Here are links that can help you get informed and motivate you to get involved!

·  Start with the Jakarta EE Platform specification page.

·  Access the files you need on the Jakarta EE Platform GitHub page.

·  Monitor release progress here.

____________________________________________________________

All Oracle Contributed Specifications Are Now Available for Jakartification

We now have the copyright for all of the Java EE specifications that Oracle contributed so we need the community’s help with Jakartification more than ever. This is the only way the Java EE specifications can be contributed to Jakarta EE. 

To help you get started:

·      The Specification Committee has created a document that explains how to convert Java EE specifications to Jakarta EE.

·      Ivar Grimstad provided a demo during the community call in October. You can view it here.

______________________________________________

Upcoming Events

Here’s a brief look at two upcoming Jakarta EE events to mark on your calendar.

·      JakartaOne Livestream – Japan, February 26

This event builds on the success of the JakartaOne Livestream event in September 2019. Registration for JakartaOne Livestream – Japan is open and can be accessed here. Please keep in mind this entire event will be presented in Japanese. For all the details, be sure to follow the event on Twitter @JakartaOneJPN.

·  Cloud Native for Java Day at KubeCon + CloudNativeCon Amsterdam, March 30

Cloud Native for Java (CN4J) Day will be the first time the best and brightest minds from the Java ecosystem and the Kubernetes ecosystem come together at one event to collaborate and share their expertise. And, momentum is quickly building.

To learn more about this ground-breaking event, get a sense of the excitement surrounding it, and access the registration page, check out these links:

o   Eclipse Foundation’s official announcement

o   Mike Milinkovich’s blog

o   Reza Rahman’s blog

In addition to CN4J day at KubeCon, the Eclipse Foundation will have a booth #S73 featuring innovations from our Jakarta EE and Eclipse MicroProfile communities. Be sure to drop by to meet community experts in person and check out the demos.

________________________________________________________

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

Our next call is Wednesday, February 12, at 11:00 a.m. EST using this meeting ID.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations from previous calls:

·      The complete playlist

·      January 15 call and presentation, featuring:

o   Updates on the Jakarta EE 9 release from Steve Millidge

o   A call for action to help with Jakartifying specifications from Ivar Grimstad

o   A review of the Jakarta EE 2020 Marketing Plan and budget from Neil Paterson

o   A retrospective on Jakarta EE 8 from Ed Bratt

o   A heads up for the CN4J and JakartaOne Livestream events from Tanja Obradovic

  ____________________________________________________________

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

• Social media: Twitter, Facebook, LinkedIn Group

• Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

• Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

• Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 



by Tanja Obradovic at February 05, 2020 07:21 PM

ISPN000299: Unable to acquire lock after 15 seconds for key

February 04, 2020 10:00 PM

Distributed caching is a widely used technology that provides useful possibilities to share state whenever necessary. WildFly supports distributed caching through the Infinispan subsystem, and it generally works well, but under high load and concurrent data access you may run into issues like:

  • ISPN000299: Unable to acquire lock after 15 seconds for key
  • ISPN000924: beforeCompletion() failed for SynchronizationAdapter
  • ISPN000160: Could not complete injected transaction.
  • ISPN000136: Error executing command PrepareCommand on Cache
  • ISPN000476: Timed out waiting for responses for request
  • ISPN000482: Cannot create remote transaction GlobalTx

and others.

In my case I had a two-node cluster with the following Infinispan configuration:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata:add()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata/component=transaction:add(mode=BATCH)

A distributed cache, as configured above, means that a configured number of copies is maintained, typically fewer than the number of nodes in the cluster. On the other hand, to provide redundancy and fault tolerance you should configure a sufficient number of owners, and obviously 2 is the necessary minimum here. So, for a small cluster, and keeping the mentioned bug in mind, I recommend using a replicated-cache (all nodes in the cluster hold all keys).

Please compare "Which cache mode should I use?" with your needs.

Solution:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata:remove()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata:add()
/profile=full-ha/subsystem=infinispan/cache-container=myCache/replicated-cache=userdata/component=transaction:add(mode=NON_DURABLE_XA, locking=OPTIMISTIC)

Note! NON_DURABLE_XA doesn't keep any transaction recovery information, and if you are still getting "Unable to acquire lock" errors on application-critical data, you can try to resolve this with a retry policy and fail-fast transactions:

/profile=full-ha/subsystem=infinispan/cache-container=myCache/distributed-cache=userdata/component=locking:write-attribute(name=acquire-timeout, value=0)
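
Such a retry policy is not part of the subsystem configuration; in application code it can be as simple as a bounded retry loop around the failing cache write. A minimal sketch, where the cache reference and the exception type are assumptions (Infinispan caches implement java.util.Map):

// Hypothetical bounded retry around a fail-fast cache write (acquire-timeout = 0).
static <K, V> void putWithRetry(java.util.Map<K, V> cache, K key, V value, int maxAttempts) {
    RuntimeException lastFailure = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            cache.put(key, value);
            return;
        } catch (RuntimeException e) {   // e.g. a lock acquisition timeout surfaced by the cache API
            lastFailure = e;
        }
    }
    throw lastFailure;
}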

February 04, 2020 10:00 PM

Reflect, Relax, Recharge at Devnexus 2020

by Ivar Grimstad at February 02, 2020 12:36 PM

The next conference I will be attending is Devnexus in Atlanta which is organized by the Atlanta Java User Group.

This year, Devnexus is extended with a JUG Leader Summit scheduled the day before the conference where there will be more than 40 Java User Groups represented.

In my session, What’s going on with Jakarta EE, I will provide an update about the ongoing work with Jakarta EE 9. This talk is also an opportunity to come forward and have a dialogue about everything going on in and around the Jakarta EE Working Group at the Eclipse Foundation.

During the conference, you will probably find me around the Jakarta EE booth with Tanja when I am not attending talks by all the amazing speakers. Please visit us there for an informal chat about open source or to pick up some of our Jakarta EE swag!


by Ivar Grimstad at February 02, 2020 12:36 PM

Hashtag Jakarta EE #5

by Ivar Grimstad at February 02, 2020 10:59 AM

Welcome to the fifth issue of Hashtag Jakarta EE!

This weekend, I attended my first FOSDEM. This is a free event that takes place in Brussels every year. It is quite an experience with fully packed rooms and crowded corridors. Sessions are short (25 mins) and focused. Absolutely a recommendation!

Mike Milinkovich presented Free at Last! The Tale of Jakarta EE for a full Java Dev Room.

Check out the minutes from the weekly Jakarta EE Platform call to follow the progress of Jakarta EE 9.

Earlier this week, I presented Microservices in Practice with Eclipse MicroProfile at Javaforum Malmö.

This demo-heavy talk was well received. There has been a steady increase of participants in our JUG over the last year, a strong indication for the continued popularity of Java.

If you have made it this far, I want to end this hashtag with an encouragement to check out these articles by Nicolas Frankel. He provides some very useful tips and advice about Tricky Servlet Mappings and Creative use of Filters.


by Ivar Grimstad at February 02, 2020 10:59 AM

Projections with JPA and Hibernate

by Thorben Janssen at January 31, 2020 10:00 AM

The post Projections with JPA and Hibernate appeared first on Thoughts on Java.

Choosing the right projection when selecting data with JPA and Hibernate is incredibly important. When I’m working with a coaching client to improve the performance of their application, we always work on slow queries. At least 80% of them can be tremendously improved by either adjusting the projection or by using the correct FetchType.

Unfortunately, changing the projection of an existing query always requires a lot of refactoring in your business code. So, better make sure to pick a good projection in the beginning. That’s relatively simple if you follow a few basic rules that I will explain in this article.

But before we do that, let’s quickly explain what a projection is.

What is a projection?

The projection describes which columns you select from your database and in which form Hibernate provides them to you. Or in other words, if you’re writing a JPQL query, it’s everything between the SELECT and the FROM keywords.

em.createQuery("SELECT b.title, b.publisher, b.author.name FROM Book b");

What projections do JPA and Hibernate support?

JPA and Hibernate support 3 groups of projections:

  1. Scalar values
  2. Entities
  3. DTOs

SQL only supports scalar projections, like table columns or the return value of a database function. So, how can JPA and Hibernate support more projections?

Hibernate first checks which information it needs to retrieve from the database and generates an SQL statement with a scalar value projection for it. It then executes the query and returns the result if you used a scalar value projection in your code. If you requested a DTO or entity projection, Hibernate applies an additional transformation step. It iterates through the result set and instantiates an entity or a DTO object for each record.

Let’s take a closer look at all 3 projections and discuss when you should use which of them.

Entity projections

For most teams, entities are the most common projection. They are very easy to use with JPA and Hibernate.

You can either use the find method on your EntityManager or write a simple JPQL or Criteria query that selects one or more entities. Spring Data JPA can even derive a query that returns an entity from the name of your repository method.

TypedQuery<Book> q = em.createQuery("SELECT b FROM Book b", Book.class);
List<Book> books = q.getResultList();

All entities that you load from the database or retrieve from one of Hibernate’s caches are in the lifecycle state managed. That means that your persistence provider, e.g., Hibernate, will automatically update or remove the corresponding database record if you change the value of an entity attribute or decide to remove the entity.

b.setTitle("Hibernate Tips - More than 70 solutions to common Hibernate problems");

Entities are the only projection that has a managed lifecycle state. Whenever you want to implement a write operation, you should fetch entities from the database. They make the implementation of write operations much easier and often even provide performance optimizations.

But if you implement a read-only use case, you should prefer a different projection. Managing the lifecycle state, ensuring that there is only 1 entity object for each mapped database record within a session, and all the other features provided by Hibernate create an overhead. This overhead makes the entity projection slower than a scalar value or DTO projection.

Scalar value projections

Scalar value projections avoid the management overhead of entity projections, but they are not very comfortable to use. Hibernate doesn’t transform the result of the query. You, therefore, get an Object or an Object[] as the result of your query.

Query q = em.createQuery("SELECT b.title, b.publisher, b.author.name FROM Book b");
List<Object[]> books = q.getResultList();

In the next step, you then need to iterate through each record in your result set and cast each Object to its specific type before you can use it. That makes your code error-prone and hard to read.

Instead of an Object[], you can also retrieve a scalar projection as a Tuple interface. The interface is a little easier to use than the Object[].

TypedQuery<Tuple> q = em.createQuery("SELECT b.title as title, b.publisher as publisher, b.author.name as author FROM Book b", Tuple.class);
List<Tuple> books = q.getResultList();

for (Tuple b : books) {
	log.info(b.get("title"));
}

But don’t expect too much. It only provides a few additional methods to retrieve an element, e.g., by its alias. But the returned values are still of type Object, and your code is still as error-prone as it is if you use an Object[].

Database functions in scalar value projections

Scalar value projections are not limited to singular entity attributes. You can also include the return values of one or more database functions.

TypedQuery<Tuple> q = em.createQuery("SELECT AVG(b.sales) as avg_sales, SUM(b.sales) as total_sales, COUNT(b) as books, b.author.name as author FROM Book b GROUP BY b.author.name", Tuple.class);
List<Tuple> authors = q.getResultList();

for (Tuple a : authors) {
	log.info("author:" + a.get("author")
			+ ", books:" + a.get("books")
			+ ", AVG sales:" + a.get("avg_sales")
			+ ", total sales:" + a.get("total_sales"));
}

This is a huge advantage compared to an entity projection. If you used an entity projection in the previous example, you would need to select all Book entities with their associated Author entity. In the next step, you would then need to count the number of books each author has written, and calculate the total and average sales values.

As you can see in the code snippet, using a database function is easier, and it also provides better performance.

DTO projections

DTO projections are the best kind of projection for read-only operations. Hibernate instantiates the DTO objects as a post-processing step after it retrieved the query result from the database. It then iterates through the result set and executes the described constructor call for each record.

Here you can see a simple example of a JPQL query that returns the query result as a List of BookDTO objects. By using the keyword new and providing the fully qualified class name of your DTO class and an array of references to entity attributes, you can define a constructor call. Hibernate will then use reflection to call this constructor.

TypedQuery<BookDTO> q = em.createQuery("SELECT new org.thoughtsonjava.projection.dto.BookDTO(b.title, b.author.name, b.publisher) FROM Book b", BookDTO.class);
List<BookDTO> books = q.getResultList();

In contrast to the entity projection, the overhead of a DTO projection is minimal. The objects are not part of the current persistence context and don’t follow any managed lifecycle. Due to that, Hibernate will not generate any SQL UPDATE statements if you change the value of a DTO attribute. But it also doesn’t have to spend any management effort, which provides significant performance benefits.

Database functions in DTO projections

Similar to a scalar value projection, you can also use database functions in a DTO projection. As explained earlier, the instantiation of the DTO object is a post-processing step after Hibernate retrieved the query result. At that phase, it doesn’t make any difference if a value was stored in a database column or if it was calculated by a database function. Hibernate simply gets it from the result set and provides it as a constructor parameter.
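
As a hypothetical illustration (the AuthorSummaryDTO class and its constructor are assumptions, not part of the article's example project), the aggregate query from the scalar value section could return DTO objects instead:

// assumes a constructor AuthorSummaryDTO(String author, Long books, Long totalSales, Double avgSales)
TypedQuery<AuthorSummaryDTO> q = em.createQuery(
        "SELECT new org.thoughtsonjava.projection.dto.AuthorSummaryDTO("
        + "b.author.name, COUNT(b), SUM(b.sales), AVG(b.sales)) "
        + "FROM Book b GROUP BY b.author.name", AuthorSummaryDTO.class);
List<AuthorSummaryDTO> authors = q.getResultList();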

Conclusion

JPA and Hibernate support 3 groups of projections:

  1. Entities are the easiest and most common projection. They are a great fit if you need to change data, but they are not the most efficient ones for read-only use cases.
  2. Scalar projections are returned as Object[]s or instances of the Tuple interface. Both versions don’t provide any type-information and are hard to use. Even though they are very efficient for read-only operations, you should avoid them in your application.
  3. DTO projections provide similar performance as scalar value projections but are much easier to use. That makes them the best projection for read-only operations.

The post Projections with JPA and Hibernate appeared first on Thoughts on Java.


by Thorben Janssen at January 31, 2020 10:00 AM

Contributing to Jakarta EE

by Ivar Grimstad at January 30, 2020 01:14 PM

This post is meant to clear up some misunderstandings that occurred during a discussion thread on the Jakarta EE Community mailing list. Some of this is a repetition of what I described in Jakarta EE 9 Shaping Up in December, but such an important topic cannot be stressed enough.

The only thing you need in order to contribute to Jakarta EE specifications is a signed ECA!

First of all, to contribute to any open source project at the Eclipse Foundation, you will need to create an account and sign the Eclipse Contributor Agreement (ECA). See below for a visualization of this process.

Steps to create an Eclipse Account

That’s it!

You can now start contributing by submitting Pull Requests to the projects you are interested in, including Jakarta EE specification projects. It doesn’t cost anything. No signatures from your employer are necessary. Just the ECA. The only thing you need in order to contribute to Jakarta EE specifications is a signed ECA!

The more you contribute, the more likely it is that you will be proposed to become a committer to the project. I will describe the zero-cost way of becoming a committer in a follow-up post to this one.


by Ivar Grimstad at January 30, 2020 01:14 PM

Monitoring REST APIs with Custom JDK Flight Recorder Events

January 29, 2020 02:30 PM

The JDK Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications. Open-sourced in JDK 11, JFR provides a low-overhead framework for collecting events from Java applications, the JVM and the operating system.

In this blog post we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing you to track request counts, identify long-running requests, and more. We’ll also discuss how the JFR Event Streaming API, new in Java 14, can be used to export live events, making them available for monitoring and alerting via tools such as Prometheus and Grafana.
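
As a hedged sketch of what such an application-specific event could look like, the following class uses the jdk.jfr API; the event name, its fields, and the place where it gets committed (e.g., a JAX-RS filter) are assumptions and not taken from the original post:

import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical custom event, recorded once per REST request
@Name("com.example.RestRequest")
@Label("REST Request")
@Category("REST API Monitoring")
public class RestRequestEvent extends Event {

    @Label("Path")
    String path;

    @Label("HTTP Method")
    String method;

    @Label("Status Code")
    int status;
}

Such an event would typically be created at the start of a request, with begin() and commit() called around the request processing, so that JFR also captures its duration.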


January 29, 2020 02:30 PM

Plans for 2020 and key lessons from 2019

by Thorben Janssen at January 28, 2020 01:00 PM

The post Plans for 2020 and key lessons from 2019 appeared first on Thoughts on Java.

It’s almost February 2020, and I still haven’t published my end of 2019 review or shared my plans for this year. But I have good excuses for that. So far, January has been extremely busy. I already did a code review, started a new coaching project, taught an in-house workshop, recorded multiple online course lectures and YouTube videos, and wrote blog articles. Not too bad for only 3 weeks.

But I still want to share what I learned in 2019 and what’s planned for 2020. So, here we go …

What I learned in 2019

The last year was incredibly successful:

  • The blog suffered from an issue with an SEO plugin, but in the end, traffic grew to almost 4 Million views in 2019.
  • We got to more than 17000 subscribers on YouTube.
  • I spoke at several conferences and JUGs across Europe.
  • I did more in-house workshops and had more students in my online courses than ever before.
  • I hosted my first in-person workshops in Düsseldorf (Germany).
  • With the JPA for Beginners Online Training, I also published a new course.
  • For the first year since I was a teenager, I established a relatively consistent workout routine.
  • And I learned that traveling by train doesn’t have to take much longer than flying but it isn’t as stressful.

But I also had to learn that too much of something that I enjoy is still too much.

Sometimes too much fun is still too much

In the beginning, traveling from one in-house workshop to the next was fun. But that changed after a while. It started to wear me out. You might have recognized that I didn’t publish new articles and videos as consistently as I had planned. Doing too many in-house workshops and attending too many conferences was the main reason for that. I either was traveling and speaking, or I tried to catch up with all the things I wasn’t able to do while traveling.

This year, I want to make sure that this doesn’t happen again. I plan to not speak at more than 1 in-house workshop per month and not more than 6 conferences per year. That’s still 1.5 events per month.

If you add onsite and remote coaching engagements to the mix, my schedule still looks pretty busy. But it’s hopefully more sustainable and gives me some extra time to work on new online courses and products.

Hosting my own workshop isn’t complicated or scary

Another thing that I learned in 2019 was that it’s not too complicated to host and promote my own in-person workshops. Sure, it was a little stressful in the beginning, but the result was totally worth it.

In December, I offered an Advanced Hibernate Workshop and a Hibernate Performance Tuning Workshop at the Lindner Congress Hotel in Düsseldorf. Their team did an amazing job and took care of all the logistics. I had booked a meeting room with drinks, snacks, and lunch. So, the only thing that I had to do was to be there on time and teach the workshops.

In the end, I liked these workshops much better than the ones that I did with different training companies in the past. From now on, I will host my workshops myself.

I already planned 3 of them for this year. But more about that in the next section.

What to expect in 2020

OK, so 2019 was great, and I learned a few things. What does that mean for this year? Am I happy with the achievements of last year and keep everything as it is?

Of course not!

I want to grow the team, improve the site, create new courses, and offer more in-person workshops.

One or two new online courses

I’m currently working on my new Data and Communication Patterns for Microservices Online Training. It’s inspired by several coaching projects in which I helped teams to model the persistence layers of their microservices and to exchange data between services in a reliable and scalable way.

The first of these coaching projects started shortly after microservices became popular. Most teams had to recognize that exchanging data and ensuring data consistency had become an issue. They no longer implemented their logic in 1 application and ensured data consistency with a simple transaction. They now did that in multiple services and needed to handle the downsides of a distributed system.

There are several patterns and tools that help you to handle these issues. Even if you use them correctly, exchanging data in a consistent and scalable way still adds complexity to your system. But it becomes a manageable task, and you will be able to enjoy the advantages of a microservice architecture.

I will show you the most important and most popular patterns in the Data and Communication Patterns for Microservices Online Training. It will launch on February 28th. You can join the early-bird notification list here.

And that might not be the only new course in 2020. I have 1-2 more ideas for new courses, but it’s still too early to share them.

3 in-person workshops

As I said earlier, I also planned 3 in-person workshops for this year.

  1. In the JPA for Beginners workshop, you will learn all you need to know to use JPA with Hibernate or EclipseLink. I will teach you all the important concepts, JPA’s mapping annotations, and the JPQL query language. After these 2 days, you will be able to implement a basic persistence layer on your own or to join a team that’s working on a huge and complex one.
    The JPA for Beginners workshop will take place on June 30th – July 1st, 2020. Make sure to enroll before March 28th to get the best price.
  2. The Data and Communication Patterns for Microservices workshop is the in-person workshop version of the new online course. You will learn how to exchange data between your services in a scalable and reliable way. I will show you different patterns for synchronous service calls, asynchronous data replication, and distributed write operations.
    The Data and Communication Patterns for Microservices workshop will take place on September 15th-17th, 2020. Make sure to enroll before June 12th to get the best price.
  3. The Advanced Hibernate workshop was my most popular in-person workshop in 2019. In this workshop, you will learn to implement complex domain mappings, create dynamic and type-safe queries, support custom data types, use Hibernate’s multi-tenancy features, and much more.
    The Advanced Hibernate workshop will take place on December 8th – 10th, 2020. Make sure to enroll before August 30th to get the best price.

Growing the team

In addition to all of that, I also want to consistently post new tutorials here on the blog and on my YouTube channel. I also teach in-house workshops and help development teams as a coach to use Hibernate more efficiently and to fix issues in their current projects.

So far, we have done all of that with a team of 2.

For the last few years, Rayhan has helped me as a contractor. He takes care of all the important tasks in the background and keeps everything up and running while I’m on the road. He edits videos, creates images, updates WordPress plugins, and lots of other things. To be honest, without his help, there wouldn’t be any YouTube channel, and I would probably still be working on my 2nd course.

But at the end of last year, I had to realize that there is just too much work for such a small team. I decided to hire Khalifa to help me prepare articles, update code samples, and do other Java-related things.

I hope that that’s just the beginning. I’m planning to add another person to the team as soon as the 3 of us have gotten used to each other and found a good rhythm.

I hope I can share more about that soon. Until then, I hope you find our articles and videos helpful, and I’m looking forward to meeting you in person at a conference or workshop.

The post Plans for 2020 and key lessons from 2019 appeared first on Thoughts on Java.


by Thorben Janssen at January 28, 2020 01:00 PM

Cloud Native for Java Day @ KubeCon EU

by Mike Milinkovich at January 28, 2020 12:00 PM

Cloud Native for Java (CN4J) Day at KubeCon + CloudNativeCon Europe will be the first time the best and brightest minds from the Java ecosystem and the Kubernetes ecosystem come together at one event to collaborate and share their expertise.

The all-day event on March 30 includes expert talks, demos, and thought-provoking sessions focused on building cloud native enterprise applications using Jakarta EE-based microservices on Kubernetes. CN4J Day is a proud moment for all of us at the Eclipse Foundation as it confirms the Jakarta EE and MicroProfile communities are at the forefront of fulfilling the promise of cloud native Java. We’re excited to be working with our friends at the CNCF to offer this event co-located with KubeCon Europe.

A Unique Opportunity to Engage With Global Experts

The timing of CN4J Day could not be better. With momentum toward the Jakarta EE 9 release building, this event gives all of us an important and truly unique opportunity to:

  •     Learn more about the future of cloud native Java development from industry and community leaders
  •     Gain deeper insight into key aspects of Jakarta EE, MicroProfile, and Kubernetes technologies
  •     Meet and share ideas with global Java and Kubernetes ecosystem innovators

The global Java ecosystem has embraced CN4J day and several of its leading minds will be on-hand to share their insights. Along with keynote addresses from my colleague Tanja Obradovic and IBM Java CTO, Tim Ellison, CN4J Day features informative technical talks from Java experts and Eclipse Foundation community members, such as:

  •     Adam Bien, an internationally recognized Java architect, developer, workshop leader, and author
  •     Sebastian Daschner, lead Java developer advocate at IBM
  •     Clement Escoffier, principal software engineer at Red Hat
  •     Ken Finnegan, senior principal engineer at Red Hat
  •     Emily Jiang, liberty architect for MicroProfile and CDI at IBM
  •     Dmitry Kornilov, Jakarta EE and Helidon Team Leader at Oracle
  •     Tomas Langer, Helidon Architect & Developer at Oracle

Major Industry and Ecosystem Endorsement

Leading industry players in the Java ecosystem are also showing their support for CN4J Day through sponsorship. Our sponsors include:

  •     Cloud Native Computing Foundation (CNCF)
  •     IBM
  •     Oracle
  •     Red Hat

The event is being coordinated by an independent program committee composed of Arun Gupta, principal technologist at Amazon Web Services, Reza Rahman, principal program manager for Java on Azure at Microsoft, and Tanja Obradovic, program manager for Jakarta EE at the Eclipse Foundation.

Register Today

To register today, simply add the event to your KubeCon + CloudNativeCon Europe registration. Thanks to the generous support of our sponsors, a limited amount of discounted CN4J Day add-on registrations will be made available to Jakarta EE and MicroProfile community members on a first-come, first-served basis.

For more details about CN4J Day and a link to the registration page, click here. For additional questions regarding this event, please reach out to events-info@eclipse.org.

As additional speakers and sponsors come onboard, we’ll keep you posted, so watch for updates in our blogs and newsletters.


by Mike Milinkovich at January 28, 2020 12:00 PM

Jakarta EE 9 Release Plan Approved

by Steve Millidge at January 27, 2020 04:35 PM

The approval of the Jakarta EE 9 Release Plan is a great milestone for the Jakarta EE project and a stepping stone towards the evolution of Jakarta EE into a project that meets the community's needs and wants. View the approved Jakarta EE 9 Release Plan here.


by Steve Millidge at January 27, 2020 04:35 PM

Hashtag Jakarta EE #4

by Ivar Grimstad at January 26, 2020 10:59 AM

Welcome to the fourth issue of Hashtag Jakarta EE!

We’re on a roll here! I can’t believe it is already four weeks since I started this series!

Stay tuned for announcements about Jakarta MVC!

A little on the side of Jakarta EE, but still related is that the MVC 1.0 specification (JSR 371) is finally final! We have been working with this for a long time, and special thanks to Christian for his work in getting the release out the door! Without him, I doubt there would be a release of MVC!

So, why isn’t MVC already moved over to the Eclipse Foundation and Jakarta EE? The short answer to that is that we wanted to finish the first release under the JCP in order to have a released project to transfer. We have already transferred Krazo, the reference implementation and the plan is to start the transfer of the specification and the TCK shortly. Stay tuned for announcements about Jakarta MVC!

Jakarta EE 9 is moving forward with great progress. The status of all the work is tracked on the Jakarta EE 9 tracking board. If you are involved in one of the specifications in the Plan Review column, you are encouraged to take a look and see how you can help move these specifications forward to the In Progress column. Instructions can be found in notes at the top of the columns.


by Ivar Grimstad at January 26, 2020 10:59 AM

Dual Writes – The Unknown Cause of Data Inconsistencies

by Thorben Janssen at January 23, 2020 01:06 PM

The post Dual Writes – The Unknown Cause of Data Inconsistencies appeared first on Thoughts on Java.

Since a lot of new applications are built as a system of microservices, dual writes have become a widespread issue. They are one of the most common reasons for data inconsistencies. To make it even worse, I had to learn that a lot of developers don’t even know what a dual write is.

Dual writes seem to be an easy solution to a complex problem. If you’re not familiar with distributed systems, you might even wonder why people even worry about it.

That’s because everything seems to be totally fine … until it isn’t.

So, let’s talk about dual writes and make sure that you don’t use them in your applications. And if you want to dive deeper into this topic and learn various patterns that help you to avoid this kind of issue, please take a look at my upcoming Data and Communication Patterns for Microservices course.

What is a dual write?

A dual write describes the situation when you change data in 2 systems, e.g., a database and Apache Kafka, without an additional layer that ensures data consistency over both services. That’s typically the case if you use a local transaction with each of the external systems.

Here you can see a diagram of an example in which I want to change data in my database and send an event to Apache Kafka:

As long as both operations are successful, everything is OK. Even if the first transaction fails, it’s still fine. But if you successfully committed the 1st transaction and the 2nd one fails, you are having an issue. Your system is now in an inconsistent state, and there is no easy way to fix it.
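
To make the problem concrete, here is a hedged sketch of such a dual write; the Order entity, the topic name, and the injected resources are illustrative only and not part of the original article:

import javax.persistence.EntityManager;
import javax.transaction.Transactional;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderService {

    private EntityManager em;                        // local JPA/database transaction
    private Producer<String, String> kafkaProducer;  // completely independent system

    @Transactional
    public void placeOrder(Order order) {
        // Write 1: goes to the database and only becomes durable when the transaction commits
        em.persist(order);

        // Write 2: goes to Kafka and knows nothing about the database transaction.
        // If this send fails, or the database commit at the end of the method fails,
        // the two systems end up in an inconsistent state.
        kafkaProducer.send(new ProducerRecord<>("orders", order.getId().toString(), "OrderPlaced"));
    }
}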

Distributed transactions are no longer an option

In the past, when we built monoliths, we used distributed transactions to avoid this situation. Distributed transactions use the 2-phase commit protocol, which splits the commit process of the transaction into 2 steps and ensures the ACID principles for all systems.

But we don’t use distributed transactions if we’re building a system of microservices. These transactions require locks and don’t scale well. They also need all involved systems to be up and running at the same time.

So what shall you do instead?

3 “solutions” that don’t work

When I discuss this topic with attendees at a conference talk or during one of my workshops, I often hear one of the following 3 suggestions:

  1. Yes, we are aware of this issue, and we don’t have a solution for it. But it’s not that bad. So far, nothing has happened. Let’s keep it as it is.
  2. Let’s move the interaction with Apache Kafka to an after commit listener.
  3. Let’s write the event to the topic in Kafka before you commit the database transaction.

Well, it should be obvious that suggestion 1 is a rather risky one. It probably works most of the time. But sooner or later, you will create more and more inconsistencies between the data that’s stored by your services.

So, let’s focus on options 2 and 3.

Post the event in an after commit listener

Publishing the event in an after commit listener is a pretty popular approach. It ensures that the event only gets published if the database transaction was successful. But it’s difficult to handle the situation in which Kafka is down or some other reason prevents you from publishing the event.

You already committed the database transaction. So, you can’t easily revert these changes. Other transactions might have already used and modified that data while you tried to publish the event in Kafka.

You might try to persist the failure in your database and run regular cleanup jobs that seek to recover the failed events. This might look like a logical solution, but it has a few flaws:

  1. It only works if you can persist the failed event in your database. If the database transaction fails, or your application or the database crash before you can store the information about the failed event, you will lose it.
  2. It only works if the event itself didn’t cause the problem.
  3. If another operation creates an event for that business object before the cleanup job recovers the failed event, your events get out of order.

These might seem like hypothetical scenarios, but that’s what we’re preparing for. The main idea of local transactions, distributed transactions, and approaches that ensure eventual consistency is to be absolutely sure that you can’t create any (permanent) inconsistencies.

An after commit listener can’t ensure that. So, let’s take a look at the other option.

Post the event before committing the database transaction

This approach often gets suggested after we discuss why the after commit listener doesn’t work. If publishing the event after the commit creates a problem, you simply publish it before you commit the transaction, right?

Well, no … Let me explain …

Publishing the event before you commit the transaction enables you to roll back the transaction if you can’t publish the event. That’s right.

But what do you do if your database transaction fails?

Your operations might violate a unique constraint, or there might have been 2 concurrent updates on the same database record. All database constraints get checked during the commit, and you can’t be sure that none of them fails. Your database transactions are also isolated from each other so that you can’t prevent concurrent updates without using locks. But that creates new scalability issues. To make it short, your database transaction might fail and there is nothing you can, or want to do about it.

If that happens, your event is already published. Other microservices probably already observed it and triggered some business logic. You can’t take the event back.

Undo operations fail for the same reasons, as we discussed before. You might be able to build a solution that works most of the time. But you are not able to create something that’s absolutely failsafe.

How to avoid dual writes?

You can choose between a few approaches that help you to avoid dual writes. But you need to be aware that without using a distributed transaction, you can only build an eventually consistent system.

The general idea is to split the process into multiple steps. Each of these steps only operates with one data store, e.g., the database or Apache Kafka. That enables you to use a local transaction, asynchronous communication between the involved systems and an asynchronous, potentially endless retry mechanism.
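
A hedged sketch of what such a step could look like, using an outbox-style approach in which the event is first stored in a table of the same database; the OutboxEvent entity and the toJson helper are illustrative assumptions:

@Transactional
public void placeOrder(Order order) {
    // Both writes target the SAME database and share ONE local transaction
    em.persist(order);
    em.persist(new OutboxEvent("Order", order.getId(), "OrderPlaced", toJson(order)));
    // A separate process (e.g., a change data capture tool reading the outbox table)
    // publishes the stored event to Apache Kafka and can retry asynchronously if needed.
}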

If you only want to replicate data between your services or inform other services that an event has occurred, you can use the outbox pattern with a change data capture implementation like Debezium. I explained this approach in great detail in the following articles:

And if you need to implement a consistent write operation that involves multiple services, you can use the SAGA pattern. I will explain it in more detail in one of the following articles.

Conclusion

Dual writes are often underestimated, and a lot of developers aren’t even aware of the potential data inconsistencies.

As explained in this article, writing to 2 or more systems without a distributed transaction or an algorithm that ensures eventual consistency can cause data inconsistencies. If you work with multiple local transactions, you can’t handle all error scenarios.

The only way to avoid that is to split the communication into multiple steps and only write to one external system during each step. The SAGA pattern and change data capture implementations, like Debezium, use this approach to ensure consistent write operation to multiple systems or to send events to Apache Kafka.

The post Dual Writes – The Unknown Cause of Data Inconsistencies appeared first on Thoughts on Java.


by Thorben Janssen at January 23, 2020 01:06 PM

Back to Basics - Installing Payara Server 5 on Ubuntu

by Jonathan Coustick at January 22, 2020 12:12 PM

This is Part 1 of our 'Payara Server- Back to Basics' series, where we will show you a step-by-step overview of how to install Payara Server 5 on Ubuntu. 


by Jonathan Coustick at January 22, 2020 12:12 PM

Enforcing Java Record Invariants With Bean Validation

January 20, 2020 04:30 PM

Record types are one of the most awaited features in Java 14; they promise to "provide a compact syntax for declaring classes which are transparent holders for shallowly immutable data". One example where records should be beneficial are data transfer objects (DTOs), as e.g. found in the remoting layer of enterprise applications. Typically, certain rules should be applied to the attributes of such DTO, e.g. in terms of allowed values. The goal of this blog post is to explore how such invariants can be enforced on record types, using annotation-based constraints as provided by the Bean Validation API.
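
A hedged sketch of the idea, assuming a Java 14 record used as a DTO with Bean Validation constraints declared on its components; the record name and its constraints are illustrative and not taken from the post:

import javax.validation.constraints.Email;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Positive;

// Hypothetical DTO; the constraints are declared directly on the record components
public record CustomerDto(
        @NotBlank String name,
        @Email String email,
        @Positive int loyaltyPoints) {
}

An instance could then be checked like any other bean, for example with a Validator obtained from Validation.buildDefaultValidatorFactory().getValidator().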

January 20, 2020 04:30 PM

Jakarta EE 8 CRUD API Tutorial using Java 11

by rieckpil at January 19, 2020 03:07 PM

As part of the Jakarta EE Quickstart Tutorials on YouTube, I’ve now created a five-part series to create a Jakarta EE CRUD API. Within the videos, I’m demonstrating how to start using Jakarta EE for your next application. Given the Liberty Maven Plugin and MicroShed Testing, the endpoints are developed using the TDD (Test Driven Development) technique.

The following technologies are used within this short series: Java 11, Jakarta EE 8, Open Liberty, Derby, Flyway, MicroShed Testing & JUnit 5

Part I: Introduction to the application setup

This part covers the following topics:

  • Introduction to the Maven project skeleton
  • Flyway setup for Open Liberty
  • Derby JDBC connection configuration
  • Basic MicroShed Testing setup for TDD

Part II: Developing the endpoint to create entities

This part covers the following topics:

  • First JAX-RS endpoint to create Person entities (a minimal sketch follows this list)
  • TDD approach using MicroShed Testing and the Liberty Maven Plugin
  • Store the entities using the EntityManager
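
As a hedged sketch of what the endpoint from Part II might look like (the Person entity, the resource path, and the field names are assumptions, not taken from the videos):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("persons")
public class PersonResource {

    @PersistenceContext
    private EntityManager entityManager;

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Transactional
    public Response createPerson(Person person, @Context UriInfo uriInfo) {
        // Store the new entity and point to it in the Location header of the 201 response
        entityManager.persist(person);
        return Response
                .created(uriInfo.getAbsolutePathBuilder()
                        .path(String.valueOf(person.getId()))
                        .build())
                .build();
    }
}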

Part III: Developing the endpoints to read entities

This part covers the following topics:

  • Develop two JAX-RS endpoints to read entities
  • Read all entities and by its id
  • Handle non-present entities with a different HTTP status code

Part IV: Developing the endpoint to update entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to update entities
  • Update existing entities using HTTP PUT
  • Validate the client payload using Bean Validation

Part V: Developing the endpoint to delete entities

This part covers the following topics:

  • Develop the JAX-RS endpoint to delete entities
  • Enhance the test setup for deterministic and repeatable integration tests
  • Remove the deleted entity from the database

The source code for the Maven CRUD API application is available on GitHub.

For more quickstart tutorials on Jakarta EE, have a look at the overview page on my blog.

Have fun developing Jakarta EE CRUD API applications,

Phil

 

The post Jakarta EE 8 CRUD API Tutorial using Java 11 appeared first on rieckpil.


by rieckpil at January 19, 2020 03:07 PM

Hashtag Jakarta EE #3

by Ivar Grimstad at January 19, 2020 10:59 AM

Welcome to the third issue of Hashtag Jakarta EE!

Most of this week was spent at the Eclipse Foundation office in Ottawa. It is great to have the opportunity to meet the people I work with daily in person once in a while.

On Tuesday, January 14, there was an off-week MicroProfile Hangout dedicated to the discussions around the Working Group proposals for MicroProfile. The hangout was pretty well attended with the usual suspects doing most of the talking. See the presentation and meeting minutes for more details. The session was even recorded.

This week, we also had a Jakarta Tech talk by Adam Bien. He gave a great presentation about Jakarta EE application servers and Docker.

We even had a Jakarta EE Community Update Call where we talked about Jakarta EE 9, the Jakartification of the specification documents as well as the 2020 program and budget.

In the blog space, there is a lot being written about Jakarta EE. I will point to a couple of posts here. Will Lyons summarises The Jakarta EE 8 Community Retrospective and you can read about the approval of the Jakarta EE 9 release plan in Mike Milinkovich’s post Moving Forward with Jakarta EE 9.


by Ivar Grimstad at January 19, 2020 10:59 AM

Moving Forward With Jakarta EE 9

by Mike Milinkovich at January 16, 2020 05:06 PM

On behalf of the Jakarta EE Working Group, I am excited to announce the unanimous approval of the plan for Jakarta EE 9, with an anticipated mid-2020 release. Please note that the project team believes this timeline is aggressive, so think of this as a plan of intent with early estimate dates. The milestone dates will be reviewed and possibly adjusted at each release review.

If you have any interest at all in the past, present, or future of Java, I highly recommend that you read that plan document, as Jakarta EE 9 represents a major inflection point in the platform.

The key elements of this Jakarta EE 9 release plan are to:

  • move all specification APIs to the jakarta namespace (sometimes referred to as the “big bang”);
  • remove unwanted or deprecated specifications;
  • make minor enhancements to a small number of specifications;
  • add no new specifications, apart from specifications pruned from Java SE 8 where appropriate; and
  • provide Java SE 11 support.

What is not in the plan is the addition of any significant new functionality. That is because the goals of this Jakarta EE 9 release plan are to:

  • lower the barrier of entry to new vendors and implementations to achieve compatibility;
  • make the release available rapidly as a platform for future innovation; and
  • provide a platform that developers can use as a stable target for testing migration to the new namespace.

Moving a platform and ecosystem the size and scale of Jakarta EE takes time and careful planning. After a great deal of discussion the community consensus was that using EE 9 to provide a clear transition to the jakarta namespace, and to pare down the platform would be the best path to future success. While work on the EE 9 platform release is proceeding, individual component specification teams are encouraged to innovate in their individual specifications, which will hopefully lead to a rapid iteration towards the Jakarta EE 10 release.

Defining this release plan has been an enormous community effort. A lot of time and energy went into its development. It has been exciting to watch the … ummm passionate…. discussions evolve towards a pretty broad consensus on this approach. I would like to particularly recognize the contributions of Steve Millidge, Kevin Sutter, Bill Shannon, David Blevins, and Scott Stark for their tireless and occasionally thankless work in guiding this process.

The Jakarta EE Working Group has been busy working on creating a Program Plan, Marketing Plan and Budget for 2020. The team has also been very busy with creating a plan for the Jakarta EE 9 release. The Jakarta EE Platform project team, as requested, has delivered a proposal plan to the Steering Committee. With their endorsement, it will be voted on by the Specification Committee at their first meeting in January 2020.

Retrospective

The Jakarta EE 9 release is going to be an important step in the evolution of the platform, but it is important to recognize the many accomplishments that happened in 2019 that made this plan possible.

First, the Eclipse Foundation and Oracle successfully completed some very complex negotiations about how Java EE would be evolved under the community-led Jakarta EE process. Although the Jakarta EE community cannot evolve the specifications under the javax namespace, we were still able to fully transition the Java EE specifications to the Eclipse Foundation. That transition led to the second major accomplishment in 2019: the first release of Jakarta EE. Those two milestones were, in my view, absolutely key accomplishments. They were enabled by a number of other large efforts, such as creating the Eclipse Foundation Specification Process, significant revisions to our IP Policy, and establishing the Jakarta EE compatibility program. But ultimately, the most satisfying result of all of this effort is the fact that we have seven fully compatible Jakarta EE 8 products, with more on the way.

The Jakarta EE community was also incredibly active in 2019. Here are just a few of the highlights:

2019 was a very busy year, and it laid the foundation for a very successful 2020. I, and the entire Jakarta EE community, look forward to the exciting progress and innovation coming in 2020.


by Mike Milinkovich at January 16, 2020 05:06 PM

Naming Strategies in Hibernate 5

by Thorben Janssen at January 16, 2020 01:00 PM

The post Naming Strategies in Hibernate 5 appeared first on Thoughts on Java.

JPA and Hibernate provide a default mapping that maps each entity class to a database table with the same name. Each of its attributes gets mapped to a column with the same name. But what if you want to change this default, e.g., because it doesn’t match your company’s naming conventions?

You can, of course, specify the table name for each entity and the column name for each attribute. That requires a @Table annotation on each class and a @Column annotation on each attribute. This is called explicit naming.

That’s a good approach if you want to change the mapping for one attribute. But doing that for lots of attributes requires a lot of work. Adapting Hibernate’s naming strategy is then often a better approach.

In this article, I will show you how to use it to adjust the mapping of all entities and attributes. But before we do that, we first need to talk about the difference between Hibernate’s logical and physical naming strategy.




A 2-step approach

Hibernate splits the mapping of the entity or attribute name to the table or column name into 2 steps:

  1. It first determines the logical name of an entity or attribute. You can explicitly set the logical name using the @Table and @Column annotations. If you don’t do that, Hibernate will use one of its implicit naming strategies.
  2. It then maps the logical name to a physical name. By default, Hibernate uses the logical name as the physical name. But you can also implement a PhysicalNamingStrategy that maps the logical name to a physical one that follows your internal naming convention.

So, why does Hibernate differentiate between a logical and a physical naming strategy, but the JPA specification doesn’t?

JPA’s approach works, but if you take a closer look at it, you recognize that Hibernate’s approach provides more flexibility. By splitting the process into 2 steps, Hibernate allows you to implement a conversion that gets applied to all attributes and classes.

If your naming conventions, for example, require you to add “_TBL” to all table names, you can do that in your PhysicalNamingStrategy. It then doesn’t matter if you explicitly specify the table name in a @Table annotation or if you do it implicitly based on the entity name. In both cases, Hibernate will add “_TBL” to the end of your table name.

Because of the added flexibility, I like Hibernate’s approach a little better.

Logical naming strategy

As explained earlier, you can either define the logical name explicitly or implicitly. Let’s take a look at both options.

Explicit naming strategy

The explicit naming strategy is very easy to use. You probably already used it yourself. The only thing you need to do is to annotate your entity class with @Table or your entity attribute with @Column and provide your preferred name as a value to the name attribute.

@Entity
@Table(name = "AUTHORS")
public class Author {

    @Column(name = "author_name")
    private String name;

    ...
}

If you then use this entity in your code and activate the logging of SQL statements, you can see that Hibernate uses the provided names instead of the default ones.

15:55:52,525 DEBUG [org.hibernate.SQL] - insert into AUTHORS (author_name, version, id) values (?, ?, ?)

Implicit naming strategy

If you don’t set the table or column name in an annotation, Hibernate uses one of its implicit naming strategies. You can choose between 4 different naming strategies and 1 default strategy:

  • default
    By default, Hibernate uses the implicit naming strategy defined by the JPA specification. This value is an alias for jpa.
  • jpa
    This is the naming strategy defined by the JPA 2.0 specification.
    The logical name of an entity class is either the name provided in the @Entity annotation or the unqualified class name. For basic attributes, it uses the name of the attribute as the logical name. To get the logical name of a join column of an association, this strategy concatenates the name of the referencing attribute, an “_” and the name of the primary key attribute of the referenced entity. The logical name of a join column of an element collection consists of the name of the entity that owns the association, an “_” and the name of the primary key attribute of the referenced entity. And the logical name of a join table starts with the physical name of the owning table, followed by an “_” and the physical name of the referencing table. (A short illustration of these defaults follows this list.)
  • legacy-hbm
    This is Hibernate’s original naming strategy. It doesn’t recognize any of JPA’s annotations. But you can use Hibernate’s proprietary configuration file and annotations to define a column or entity name.
    In addition to that, there are a few other differences to the JPA specification:
    • The logical name of a join column is only its attribute name.
    • For join tables, this strategy concatenates the name of the physical table that owns the association, an “_” and the name of the attribute that owns the association.
  • legacy-jpa
    The legacy-jpa strategy implements the naming strategy defined by JPA 1.0.
    The main differences to the jpa strategy are:
    • The logical name of a join table consists of the physical table name of the owning side of the association, an “_” and either the physical name of the referencing side of the association or the owning attribute of the association.
    • To get the logical name of the join column of an element collection, the legacy-jpa strategy uses the physical table name instead of the entity name of the referenced side of the association. That means the logical name of the join column consists of the physical table name of the referenced side of the association, an “_” and the name of the referenced primary key column.
  • component-path
    This strategy is almost identical to the jpa strategy. The only difference is that it includes the name of the composite in the logical attribute name.
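
To make these defaults more tangible, here is a hedged illustration of the column names the default (jpa) strategy derives for a simple entity, assuming an Author entity whose primary key attribute is a Long id:

@Entity
public class Book {

    @Id
    private Long id;        // mapped to column "id"

    private String title;   // mapped to column "title"

    @ManyToOne
    private Author author;  // join column "author_id" (referencing attribute + "_" + referenced primary key)
}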

You can configure the logical naming strategy by setting the hibernate.implicit_naming_strategy attribute in your configuration.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.implicit_naming_strategy"
                      value="jpa" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Physical naming strategy

Implementing your own physical naming strategy isn’t complicated. You can either implement the PhysicalNamingStrategy interface or extend Hibernate’s PhysicalNamingStrategyStandardImpl class.

I prefer extending Hibernate’s PhysicalNamingStrategyStandardImpl. In the following examples, I use it to create a naming strategy that adds the postfix “_TBL” to each table name and another one that converts camel case names into snake case.

Table postfix strategy

The only thing I want to change in this naming strategy is the handling of the table name. So, extending Hibernate’s PhysicalNamingStrategyStandardImpl class is the easiest way to achieve that.

Implementing a custom strategy

I override the toPhysicalTableName method, add a static postfix to the name, and convert it into an Identifier.

public class TablePostfixPhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    private final static String POSTFIX = "_TBL";
    
    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        if (identifier == null) {
            return null;
        }

        final String newName = identifier.getText() + POSTFIX;
        return Identifier.toIdentifier(newName);
    }

}

In the next step, you need to activate the naming strategy. You do that by setting the hibernate.physical_naming_strategy attribute to the fully qualified class name of the strategy.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.physical_naming_strategy"
                      value="org.thoughtsonjava.naming.config.TablePostfixPhysicalNamingStrategy" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Using the table postfix strategy

Let’s try this mapping using this basic Author entity. I don’t specify a logical name for the entity. So, it defaults to the name of the class, which is Author. Without our custom naming strategy, Hibernate would map this entity to the Author table.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Version
    private int version;

    private String name;

    @ManyToMany(mappedBy = "authors", fetch = FetchType.LAZY)
    private Set<Book> books;

    ...
}

When I persist this entity, you can see in the log file that Hibernate mapped it to the Author_TBL table.

14:05:56,619 DEBUG [org.hibernate.SQL] - insert into Author_TBL (name, version, id) values (?, ?, ?)

Names in snake case instead of camel case

In Java, we prefer to use camel case for our class and attribute names. By default, Hibernate uses the logical name as the physical name. So, the entity attribute LocalDate publishingDate gets mapped to the database column publishingDate.

Some companies use naming conventions that require you to use snake case for your table and column names. That means that your publishingDate attribute needs to be mapped to the publishing_date column.

As explained earlier, you could use the explicit naming strategy and annotate each attribute with a @Column annotation. But for most persistence layers, that’s a lot of work, and it’s easy to forget.

So, let’s implement a naming strategy that does that for us.

Implementing a custom strategy

public class SnakeCasePhysicalNamingStrategy extends PhysicalNamingStrategyStandardImpl {

    @Override
    public Identifier toPhysicalCatalogName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalCatalogName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalColumnName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalSchemaName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalSchemaName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalSequenceName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalSequenceName(toSnakeCase(name), context);
    }

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
        return super.toPhysicalTableName(toSnakeCase(name), context);
    }
    
    private Identifier toSnakeCase(Identifier id) {
        if (id == null)
            return id;
            
        String name = id.getText();
        String snakeName = name.replaceAll("([a-z]+)([A-Z]+)", "$1\\_$2").toLowerCase();
        if (!snakeName.equals(name))
            return new Identifier(snakeName, id.isQuoted());
        else
            return id;
    }
}

The interesting part of this naming strategy is the toSnakeCase method. I call it in all methods that return a physical name to convert the provided name to snake case.

If you’re familiar with regular expressions, the implementation of the toSnakeCase method is pretty simple. By calling replaceAll("([a-z]+)([A-Z]+)", "$1\\_$2"), we insert an "_" between each lowercase sequence and the uppercase letters that follow it. After that is done, we only need to change all characters to lower case.

In the next step, we need to set the strategy in the persistence.xml file.

<persistence>
    <persistence-unit name="naming">
        ...
        <properties>
            <property name="hibernate.physical_naming_strategy"
                      value="org.thoughtsonjava.naming.config.SnakeCasePhysicalNamingStrategy" />
            ...
        </properties>
    </persistence-unit>
</persistence>

Using the snake case strategy

When I now persist this Book entity, Hibernate will use the custom strategy to map the publishingDate attribute to the database column publishing_date.

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    @Version
    private int version;

    private String title;

    private LocalDate publishingDate;

    @ManyToMany
    private Set<Author> authors;

    @ManyToOne
    private Publisher publisher;

    ...
}

As you can see in the log file, the naming strategy worked as expected and changed the name of the publishingDate column to publishing_date.

14:28:59,337 DEBUG [org.hibernate.SQL] - insert into books (publisher_id, publishing_date, title, version, id) values (?, ?, ?, ?, ?)



Conclusion

Hibernate’s naming strategy provides you with lots of flexibility. It consists of 2 parts, the mapping of the logical and the physical name.

You can explicitly define the logical name using the @Table and @Column annotation. If you don’t do that, Hibernate uses one of its implicit naming strategies. The default one is compliant with JPA 2.0.

After the logical name got determined, Hibernate applies a physical naming strategy. By default, it returns the logical name. But you can use it to implement a conversion that gets applied to all logical names. As you have seen in the examples, this provides an easy way to fulfill your internal naming conventions.

The post Naming Strategies in Hibernate 5 appeared first on Thoughts on Java.


by Thorben Janssen at January 16, 2020 01:00 PM

Summary of Community Retrospective on Jakarta EE 8 Release

by Will Lyons at January 13, 2020 09:42 PM

One of the topics we will cover at the Jakarta EE Update call on January 15 at 11:00 a.m. EST (please use this meeting ID) is a retrospective on the Jakarta EE 8 release that was conducted by the Jakarta EE Steering Committee. The process included soliciting input from contributors to specifications, implementers of specifications, users of specifications, and those observing from the outside. The goal of the retrospective was to collect feedback on the Jakarta EE 8 delivery process that can be turned into action items to improve our delivery of subsequent releases, and to enable as much of the community as possible to participate.

The full retrospective document is published here.   Some of the retrospective highlights include:

1) Areas that went well

  • Excellent progress after clearing legal hurdles
  • Excellent press and analyst coverage
  • Livestream and Code One announcements went well. 

2) Areas for improvement

  • Spec process and release management
    • Jakarta EE 8 seemed mostly Oracle driven.   Need more visible participation from others.
    • Need better documentation on the spec process and related processes
    • Need to reduce the number of meetings required to get out the next release
  • Communications
    • Need to communicate what we are doing and the progress we have made more.
    • Review the approach to Jakarta EE update calls
    • Need more even engagement via social media from all Strategic members.
  • Organization
    • Clarify the roles of the various committees
  • Other/general
    • We need to understand the outside view of the box we (the Jakarta EE/Eclipse Foundation) live in, and communicate accordingly.
    • Need a clear roadmap

3) In response to this feedback, the Steering Committee plans to take the following actions:

  • Proactive communication of CY2020 Jakarta EE Working Group Program Plan
    • This plan itself addresses much of the feedback above
    • Recommend sharing it now
    • Share budgets when confirmed/approved

We have followed up on this topic during the December 11 Jakarta EE Update call.

  • Proactive communication of the Jakarta EE 9 release plan
    • Addresses some feedback on near term issues (e.g. move to jakarta namespace)
    • Should be placed in context of post Jakarta EE 9 goals
    • Participate in Jakarta EE update call on Nov 13 (planned)
    • Share when approved by Steering Committee

We are following up in all of these areas, both soliciting input and communicating through Jakarta EE Update calls. We expect to announce formal approval for the release plan after this week’s Spec Committee meeting.

  • Proactive communication of significant Jakarta EE initiatives in general
    • Build into any significant planning event

We are building this into planning for JakartaOne Livestream Japan on Feb. 26, and Cloud Native for Java Day in Amsterdam on March 30.

  • Review approach to Jakarta EE Update calls (request Eclipse Foundation staff to drive review).

Your feedback on this topic is welcome – what do you want included in these calls?

  • Communication from Specification Committee on plan for addressing retrospective findings above after appropriate review and consensus.

The Spec Committee has not done an independent review per se, but are making a strong effort in conjunction with the Jakarta EE Platform Project to communicate proactively about the direction of Jakarta EE 9.   Look for more information on Jakarta EE 9 this week.

The process of responding to our experiences and feedback is ongoing.    We hope to continue to hear from you, so that we can continue to improve our release process for the community.

 

 

 


by Will Lyons at January 13, 2020 09:42 PM

Payara Platform 2019 Community Survey Results

by Debbie Hoffman at January 13, 2020 01:55 PM

We're proud to announce that our 2019 Community Survey results are now available! We conducted a survey between September and November 2019 to determine how organizations are using the Payara Platform and what ecosystem components are most commonly used with the platform. Thank you for contributing and helping us gain insight into which features and enhancements the community would most like to see in future releases of the Payara Platform.


by Debbie Hoffman at January 13, 2020 01:55 PM

New Features in Jersey Client

by Jan at January 13, 2020 09:29 AM

Jersey 2.30 comes with multiple new features, and this post describes three new interfaces on the client. They are PreInvocationInterceptor, PostInvocationInterceptor, and InvocationBuilderListener. Suppose a case in which the start of the request is to be logged and even measured. This … Continue reading
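
As a hedged sketch of how such a pre-invocation hook might look, based on the interface names mentioned in the excerpt; the exact callback signature and registration details should be verified against the Jersey 2.30 documentation:

import java.util.logging.Logger;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.ClientRequestContext;
import org.glassfish.jersey.client.spi.PreInvocationInterceptor;

public class RequestStartLogger implements PreInvocationInterceptor {

    private static final Logger LOG = Logger.getLogger(RequestStartLogger.class.getName());

    @Override
    public void beforeRequest(ClientRequestContext requestContext) {
        // Record the start of the request, e.g., for logging or measuring latency
        LOG.info(() -> "Request started: " + requestContext.getUri());
    }

    public static Client newClient() {
        return ClientBuilder.newClient().register(RequestStartLogger.class);
    }
}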

by Jan at January 13, 2020 09:29 AM

Hashtag Jakarta EE #2

by Ivar Grimstad at January 12, 2020 10:59 AM

Welcome to the second issue of Hashtag Jakarta EE!

This week, I have been in Sandusky, Ohio for CodeMash. This awesome conference is organized by an Ohio Non-Profit Organization and is held at the Kalahari resort in Sandusky, Ohio.

In addition to meeting a bunch of awesome people, I had a talk about Eclipse MicroProfile.

Attending conferences like CodeMash that are not entirely Java-centric is a great reminder that outside our bubble, most developers haven’t heard about Jakarta EE or MicroProfile at all!

It is both challenging and rewarding to give a talk where you are not preaching to the choir. I encourage you all to do this more often. This is how we spread the message! This is how we grow our community!

In the Jakarta EE space, we started the week with an EE4J PMC meeting followed by the Jakarta EE Steering and Specification Committee meetings. The most important agenda item discussed is the ongoing ballot for approval of the Jakarta EE 9 release plan in the Specification Committee. You can follow the ongoing ballot on the Jakarta EE Specification Committee mailing list.

At the end of last year, the Java EE Guardians completed their rebranding to become the Jakarta EE Ambassadors. I am really happy that I was able to help in the process of getting the new logo created. This is the first approved usage of the Jakarta EE brand and logo outside of the Jakarta EE Working Group. A milestone in itself!


by Ivar Grimstad at January 12, 2020 10:59 AM

Logging for JPA SQL queries with Wildfly

January 09, 2020 10:00 PM

Logging the real SQL queries is very important when using any ORM solution, as you can never be sure which statements, and how many of them, the JPA provider will send to perform a find, merge, query, or other operation.

Wildfly uses Hibernate as its JPA provider and offers a few standard ways to enable SQL logging:

1. Add the hibernate.show_sql property to your persistence.xml:

<properties>
    ...
    <property name="hibernate.show_sql" value="true" />
    ...
</properties>
INFO  [stdout] (default task-1) Hibernate: insert into blogentity (id, body, title) values (null, ?, ?)
INFO  [stdout] (default task-1) Hibernate: select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?

2. Enable the ALL log level for the org.hibernate category:

/subsystem=logging/periodic-rotating-file-handler=sql_handler:add(level=ALL, file={"path"=>"sql.log"}, append=true, autoflush=true, suffix=.yyyy-MM-dd,formatter="%d{yyyy-MM-dd HH:mm:ss,SSS}")
/subsystem=logging/logger=org.hibernate.SQL:add(use-parent-handlers=false,handlers=["sql_handler"])
/subsystem=logging/logger=org.hibernate.type.descriptor.sql.BasicBinder:add(use-parent-handlers=false,handlers=["sql_handler"])
DEBUG [o.h.SQL] insert into blogentity (id, body, title) values (null, ?, ?)
TRACE [o.h.t.d.s.BasicBinder] binding parameter [1] as [VARCHAR] - [this is body]
TRACE [o.h.t.d.s.BasicBinder] binding parameter [2] as [VARCHAR] - [title]
DEBUG [o.h.SQL] select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?
TRACE [o.h.t.d.s.BasicBinder] binding parameter [1] as [VARCHAR] - [title]

3. Enable spying of SQL statements:

/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=spy,value=true)
/subsystem=logging/logger=jboss.jdbc.spy/:add(level=DEBUG)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [DataSource] getConnection()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [Connection] prepareStatement(insert into blogentity (id, body, title) values (null, ?, ?), 1)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(1, this is body)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(2, title)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] executeUpdate()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] getGeneratedKeys()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getMetaData()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getLong(1)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] close()
...
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [DataSource] getConnection()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [Connection] prepareStatement(select blogentity0_.id as id1_0_, blogentity0_.body as body2_0_, blogentity0_.title as title3_0_ from blogentity blogentity0_ where blogentity0_.title=?)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] setString(1, title)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [PreparedStatement] executeQuery()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getLong(id1_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getString(body2_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] getString(title3_0_)
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] wasNull()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] next()
DEBUG [j.j.spy] java:jboss/datasources/ExampleDS [ResultSet] close()

So, from the above we can see that variants 2 and 3 are the most useful, as they allow you to log queries together with their parameters. On the other hand, SQL logging can generate a lot of unneeded debug information in production. To avoid garbage data in your log files, feel free to use Filter Expressions for Logging.
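
As a hedged sketch (the exact filter-spec syntax and quoting may need adjusting for your WildFly version), such a filter expression could, for example, suppress all SELECT statements on the org.hibernate.SQL logger:

/subsystem=logging/logger=org.hibernate.SQL:write-attribute(name=filter-spec, value="not(match(\"^select\"))")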


January 09, 2020 10:00 PM

Jakarta EE Community Update January 2020

by Tanja Obradovic at January 09, 2020 02:26 PM

Welcome to our first update of 2020. As we start the new year, I want to wish you all the best. We have a lot to look forward to in 2020 as it will be the first year with multiple Jakarta EE releases. What a bright and optimistic way to start a new year!

As we ramp up our 2020 activities, here’s a look back at Jakarta EE highlights from the last few weeks in November and December of 2019. 

The Jakarta EE 8 Retrospective Is Complete

As we mentioned in our last update, the Jakarta EE Steering Committee asked the community to provide feedback on the Jakarta EE 8 release so we can make improvements in Jakarta EE 9 and future releases.

We’re happy to tell you that feedback has now been collected, analyzed, and summarized. 

To learn more about the results of the retrospective, keep an eye out for a blog post on the topic, but also, please join us for the community call on January 15 at 11:00 a.m. EST using this meeting ID, where this topic will be on the agenda.

 The Jakarta EE 9 Release Plan Is Ready for Specification Committee Review

The Jakarta EE Platform Project team, led by Kevin Sutter and Steve Millidge, has been working very hard to create the detailed Jakarta EE 9 release plan. As promised, the plan has been created, presented, and endorsed by the Jakarta EE Steering Committee. The next step is for the Specification Committee to review it and vote on it early this year.

You can read the minutes of the Jakarta EE Platform Project team meetings here.

For the latest updates on Jakarta EE 9 release planning, listen to the recording of the December Jakarta EE update call here.

Reminder: We Need Your Help to Jakartify Specifications

Just a reminder that we now have the copyright for quite a number of Java EE specifications and we need the community to Jakartify them so they can be contributed to Jakarta EE.

To help you get started:

·      The Specification Committee has created a document that explains how to convert Java EE specifications to Jakarta EE.

·      Ivar Grimstad provided a demo during the community call in October. You can view it here.

And, here’s the list of specifications that are ready for the community to Jakartify.

• Jakarta Annotations

• Jakarta Enterprise Beans

• Jakarta Expression Language

• Jakarta Security

• Jakarta Server Faces

• Jakarta Interceptors

• Jakarta Authorization

• Jakarta Activation

• Jakarta Managed Beans

 

• Jakarta Deployment

• Jakarta XML RPC

• Jakarta Authentication

• Jakarta Mail

• Jakarta XML Binding

• Jakarta RESTful Web Services

• Jakarta Web Services Metadata

• Jakarta XML Web Services

• Jakarta Connectors

 

• Jakarta Persistence

• Jakarta JSON Binding

• Jakarta JSON Processing

• Jakarta Debugging Support for Other Languages

• Jakarta Server Pages

• Jakarta Transactions

• Jakarta WebSocket

 

Jakarta EE 2020 Marketing Plan Finalized

The Jakarta EE Marketing Operations Report for Q3 2019 was presented during the December Marketing Committee call. The report includes market activities and metrics for planned versus actual activities in Q3 and was generally well received by the committee.

You can read the meeting minutes here

Upcoming Events

Here’s a brief look at two upcoming Jakarta EE events to mark on your calendar:

·      JakartaOne Livestream – Japan, February 26

With the success of the JakartaOne Livestream event in September 2019, we’re expanding the initiative with more virtual conferences in more languages. Our next event is JakartaOne Livestream - Japan scheduled for February 26. Please follow the event on Twitter @JakartaOneJPN for all the details. Registration will open shortly. Keep in mind this entire event will be presented in Japanese.

·      Cloud Native for Java (CN4J) Day at KubeCon Amsterdam, March 30

CN4J day on March 30 is a full day (9:00 a.m.-5:00 p.m.) of expert talks, demos, and thought-provoking sessions focused on enterprise applications implemented using Jakarta EE and Eclipse MicroProfile on Kubernetes. This event is a great opportunity to meet with industry and community leaders, to better understand key aspects of Jakarta EE and MicroProfile technologies, and to share your ideas with ecosystem innovators. Learn more about the event here.

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

 Our next call is Wednesday, January 15 at 11:00 a.m. EST using this meeting ID.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·      The complete playlist.

·      December 11 call and presentation, featuring Steve Millidge discussing Jakarta EE 9 release planning, an update on Jakartifying specifications from Ivar Grimstad, and a call for action to help with the Jakarta EE 2020 Program Plan and budget from Jakarta EE Steering Committee chair, Will Lyons.

November and December Event Summary

We ended a very exciting year for Jakarta EE by participating in three major events:  

·      Devoxx BE

·      KubeCon NA

·      Java2Days

We had members presenting at all three events and we’d like to sincerely thank them for their time and effort. A special shout-out to our Jakarta EE Developer Advocate, Ivar Grimstad, who presented multiple times and participated in a panel discussion at Java2Days.

Our members were also on hand at our booths at Devoxx BE and KubeCon NA to share their expertise, provide demos, and engage with attendees. Again, a huge thank you to everyone involved for their enthusiastic participation. Your engagement helps drive the success of the entire Jakarta EE community.

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

•  Social media: Twitter, Facebook, LinkedIn Group

•  Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

•  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

•  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 

 


by Tanja Obradovic at January 09, 2020 02:26 PM

Key annotations you need to know when working with JPA and Hibernate

by Thorben Janssen at January 09, 2020 10:33 AM


When you start learning and using Hibernate and JPA, the number of annotations might be overwhelming. But as long as you rely on the defaults, you can implement your persistence layer using only a small subset of them.

After you have mastered the basic annotations, you can take a look at additional customization options. You can, for example, customize the join tables of many-to-many associations, use composite primary keys, or share a primary key value between 2 associated entities.

But please be careful with any mapping that tries to handle a significant difference between your table model and your domain model. Quite often, the simpler mappings are better than the complex ones. They provide better performance and are much easier to understand by all developers in your team.

You only need the more advanced mappings if you need to map a legacy database or use various kinds of performance optimizations. But especially when you are new to JPA and Hibernate, you should ignore these features and focus on the basic concepts.

So, let’s take a look at the most important annotations and their attributes. For each annotation, I will explain which attributes you really need and which ones you should better avoid.

And if you want to dive deeper into JPA and make sure you have a solid understanding of all the basic concepts, I recommend enrolling in my JPA for Beginners online course.

Define an Entity Class

JPA entities don’t need to implement any interface or extend a superclass. They are simple POJOs. But you still need to identify a class as an entity class, and you might want to adapt the default table mapping.

@Entity

The JPA specification requires the @Entity annotation. It identifies a class as an entity class.

@Entity
public class Author { ... }

You can use the name attribute of the @Entity annotation to define the name of the entity. It has to be unique for the persistence unit, and you use it to reference the entity in your JPQL queries.
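For example, a minimal sketch (the entity name AuthorEntity is purely illustrative, and em is assumed to be an injected EntityManager):

@Entity(name = "AuthorEntity")
public class Author { ... }

// JPQL references the entity name, not the class or table name
List<Author> authors = em
        .createQuery("SELECT a FROM AuthorEntity a", Author.class)
        .getResultList();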

@Table

By default, each entity class maps a database table with the same name in the default schema of your database. You can customize this mapping using the name, schema, and catalog attributes of the @Table annotation.

@Entity
@Table(name = "AUTHORS", schema = "STORE")
public class Author {

The name attribute enables you to change the name of the database table which your entity maps. The schema attribute specifies the name of the database schema in which the table is located. And the catalog attribute describes the name of the database catalog that stores the metadata information of the table.

The @Table annotation also defines 2 attributes that enable you to influence the generation of the database table. These are called indexes and uniqueConstraints. I don’t recommend using them. External scripts and tools like Liquibase or Flyway are a much better option to create and update your database.




Basic Column Mappings

By default, all JPA implementations map each entity attribute to a database column with the same name and a compatible type. The following annotations enable you to perform basic customizations of these mappings. You can, for example, change the name of the column, adapt the type mapping, identify primary key attributes, and generate unique values for them.

@Column

Let’s start with the @Column annotation. It is an optional annotation that enables you to customize the mapping between the entity attribute and the database column.

@Entity
public class Book {

    @Column(name = "title", updatable = false, insertable = true)
    private String title;

    ...
}

You can use the name attribute to specify the name of the database column to which the entity attribute maps. The attributes updatable and insertable enable you to exclude the attribute from insert or update statements.

You should only use the table attribute if you map your entity to 2 database tables. In general, I don’t recommend using this mapping. But you sometimes need it to work with a legacy database or as a temporary step during a complex refactoring.
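If you really need it, a hedged sketch of such a mapping could combine @SecondaryTable with the table attribute (the table names are illustrative):

@Entity
@Table(name = "BOOKS")
@SecondaryTable(name = "BOOK_DETAILS")
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    // mapped to the primary BOOKS table
    private String title;

    // mapped to the secondary BOOK_DETAILS table
    @Column(table = "BOOK_DETAILS")
    private String summary;

    ...
}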

All other attributes only affect the generated CREATE TABLE statement, and I don’t recommend using them. These are:

  • The columnDefinition attribute that allows you to define an SQL fragment that’s used during table definition.
  • The length attribute, which defines the length of a String-valued database column.
  • The attributes scale and precision, which specify the scale and precision of a decimal column.
  • The unique attribute that defines a unique constraint on the mapped column.

@Id

JPA and Hibernate require you to specify at least one primary key attribute for each entity. You can do that by annotating an attribute with the @Id annotation.

@Entity
public class Author {

    @Id
    private Long id;

    ...
}

@GeneratedValue

When we’re talking about primary keys, we also need to talk about sequences and auto-incremented database columns. These are the 2 most common database features to generate unique primary key values.

If you annotate your primary key attribute with the @GeneratedValue annotation, you can use a database sequence by setting the strategy attribute to GenerationType.SEQUENCE. Or, if you want to use an auto-incremented database column to generate your primary key values, you need to set the strategy to GenerationType.IDENTITY.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    ...
}

The generator attribute of the @GeneratedValue annotation enables you to reference a custom generator. You can use it to customize a standard generator, e.g., to use a custom database sequence, or to implement your own generator.
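For example, a minimal sketch that references a custom database sequence (the generator and sequence names are just illustrations):

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "author_generator")
    @SequenceGenerator(name = "author_generator", sequenceName = "author_sequence", allocationSize = 50)
    private Long id;

    ...
}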

I explain the primary key generation strategies and their performance impacts in more detail in How to generate primary keys with JPA and Hibernate.

@Enumerated

The @Enumerated annotation enables you to define how an enum attribute gets persisted in the database. By default, all JPA implementations map the ordinal value of the enum to a numeric database column.

As I explained in more detail in my guide on enum mappings, the ordinal makes it hard to add values to or remove values from the enum. The mapping as a String is more robust and much easier to read. You can activate this mapping by passing EnumType.STRING to the @Enumerated annotation.

@Entity
public class Author {

    @Enumerated(EnumType.STRING)
    private AuthorStatus status;

    ...
}

@Temporal

If you’re still using java.util.Date or java.util.Calendar as your attribute types, you need to annotate the attribute with @Temporal. Using this annotation, you can define if the attribute shall be mapped as an SQL DATE, TIME, or TIMESTAMP.

@Entity
public class Author {
	
    @Temporal(TemporalType.DATE)
    private Date dateOfBirth;

    ...
}

This mapping works really well, but I recommend using the classes of the Date and Time API instead. These classes are much easier to use in your business code, and they provide all the required mapping information. That means that they don’t require any annotations.
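For example, since JPA 2.2 you can map a java.time.LocalDate attribute without any mapping annotation at all:

@Entity
public class Author {

    // maps to an SQL DATE column without a @Temporal annotation
    private LocalDate dateOfBirth;

    ...
}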

@Lob

In Java, there is almost no limit to the size of a String or a byte[]. But that’s not the case for relational databases. They provide specific data types for large objects. These are BLOB for binary large objects and CLOB for character large objects.

Using JPA’s @Lob annotation, you can map a BLOB to a byte[] and a CLOB to a String. Your persistence provider then fetches the whole BLOB or CLOB when it initializes the entity attribute.

@Entity
public class Book {
     
    @Lob
    private byte[] cover;
 
    ...
}

In addition to that, Hibernate also supports mappings to java.sql.Blob and java.sql.Clob. These are not as easy to use as a byte[] or a String, but they can provide better performance. I explained that mapping in great detail in Mapping BLOBs and CLOBs with Hibernate and JPA.

Association Mappings

You can also map associations between your entities. In the table model, these are modeled as foreign key columns. In your domain model, these associations are mapped as attributes of the type of the associated entity or as a Collection of associated entities.

In both cases, you need to describe the association mapping. You can do that using a @ManyToMany, @ManyToOne, @OneToMany, or @OneToOne annotation.

@ManyToMany

Many-to-many associations are very common in relational table models. A typical example is an association between books and authors.

In your domain model, you can map this association in a uni- or bidirectional way using attributes of type List, Set or Map, and a @ManyToMany annotation.

@Entity
@Table(name = "BOOKS")
public class Book {

    @ManyToMany
    private Set<Author> authors;

    ...
}

Here you can see a typical example of the owning side of the association. You can use it to model a unidirectional many-to-many association. Or you can use it as the owning side of a bidirectional mapping. In both cases, Hibernate uses an association table that contains foreign key columns that reference both ends of the association.

When you’re using this annotation, you should also be familiar with JPA’s FetchTypes. The fetch attribute of the @ManyToMany annotation allows you to define the FetchType that shall be used for this association. The FetchType defines when the persistence provider fetches the referenced entities from the database. By default, a many-to-many association uses the FetchType.LAZY. This tells your persistence provider to fetch the associated entities when you use them. That’s the most efficient approach, and you shouldn’t change it.

By setting the cascade attribute, you can also tell your persistence provider which entity operations it shall cascade to all associated entities. This can make working with graphs of entities much easier. But you should avoid CascadeType.REMOVE for all many-to-many associations. It removes much more data than you would expect.

If you want to model the association in a bidirectional way, you need to implement a similar mapping on the referenced entity. But this time, you also need to set the mappedBy attribute of the @ManyToMany annotation to the name of the attribute that owns the association. To your persistence provider, this identifies the mapping as a bidirectional one.

@Entity
public class Author {

    @ManyToMany(mappedBy = "authors")
    private Set<Book> books;

    ...
}

You use the same @ManyToMany annotation to define the referencing side of the association, as you use to specify the owning side of it. So, you can use the same cascade and fetch attributes, as I described before.

@ManyToOne and @OneToMany

Many-to-one and one-to-many associations represent the same association from 2 different perspectives. So, it’s no surprise that you can use them together to define a bidirectional association. You can also use each of them on their own to create a unidirectional many-to-one or one-to-many association. But you should avoid unidirectional one-to-many associations. Hibernate handles them very inefficiently.

@ManyToOne

Let’s take a closer look at the @ManyToOne annotation. It defines the owning side of a bidirectional many-to-one/one-to-many association. You do that on the entity that maps the database table that contains the foreign key column.

@Entity
public class Book {

    @ManyToOne(fetch = FetchType.LAZY)
    private Publisher publisher;

    ...
}

When you’re using a @ManyToOne annotation, you should be familiar with its fetch and cascade attributes.

The fetch attribute enables you to define the FetchType that shall be used for this association. The default value is FetchType.LAZY, and you shouldn’t change it.

You can set the cascade attribute to define which operations on this entity shall get cascaded to all associated entities. That is often used to cascade an operation from a parent to a child entity. So, it’s mostly used on a @OneToMany association, and I will show it in the next section.

You can also set the optional attribute to false to indicate that this association is mandatory.
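For example, building on the mapping above:

@Entity
public class Book {

    // optional = false marks the association as mandatory
    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    private Publisher publisher;

    ...
}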

@OneToMany

You can use the @OneToMany annotation to define the referencing side of a bidirectional many-to-one/one-to-many association. As explained before, you shouldn’t use it to model a unidirectional one-to-many association. Hibernate handles these associations very inefficiently.

Similar to the referencing side of a bidirectional many-to-many association, you can reference the name of the attribute that owns the association in the mappedBy attribute. That tells your persistence provider that this is the referencing side of a bidirectional association, and it reuses the association mapping defined by the owning side.

@Entity
public class Publisher {

    @OneToMany(mappedBy = "publisher", cascade = CascadeType.ALL)
    private Set<Book> books;

    ...
}

I already explained the fetch and cascade attributes for the @ManyToMany and @ManyToOne annotations. You can use them in the same way with the @OneToMany annotation.

In addition to these 2 attributes, you should also know the orphanRemoval attribute. If you set it to true, Hibernate removes an entity from the database when it gets removed from the association. That’s often used for parent-child associations in which the child can’t exist without its parent. A typical example would be the item of an order. The item can’t exist without the order. So, it makes sense to remove it as soon as the association to the order gets removed.
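A sketch of the order/item example could look like this (the class and attribute names are illustrative):

@Entity
public class PurchaseOrder {

    // removing an item from this collection also deletes it from the database
    @OneToMany(mappedBy = "order", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<OrderItem> items;

    ...
}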

@OneToOne

One-to-one associations are only rarely used in relational table models. You can map them using a @OneToOne annotation.

Similar to the previously discussed association mappings, you can model a uni- or bidirectional one-to-one association. The attribute defined on the entity that maps the database table containing the foreign key column owns the association.

@Entity
public class Manuscript {

    @OneToOne(fetch = FetchType.LAZY)
    private Book book;

    ...
}

The @OneToOne annotation supports the fetch, cascade, and optional attributes that I already explained in the previous sections.

And if you model it as a bidirectional association, you need to set the mappedBy attribute of the referencing side of the association to the attribute name that owns the association.

@Entity
public class Book {

    @OneToOne(mappedBy = "book")
    private Manuscript manuscript;

    ...
}



Conclusion

As you have seen, you only need a relatively small number of annotations to define your domain model. In most cases, you only need to annotate your entity class with @Entity and your primary key attribute with @Id and @GeneratedValue.

If the names of your entity class or one of its attributes don’t match the table or column names, you can adjust the mapping using a @Table or @Column annotation. You can also change the type mappings using an @Enumerated, @Temporal, or @Lob annotation.

One of the key features of any object-relational mapper is the handling of associations. With JPA and Hibernate, you can map one-to-one, one-to-many, many-to-one, and many-to-many associations in a uni- or bidirectional way. All association mappings require an additional annotation that describes the association mapping and that you can use to define its fetching and cascading behavior.

The post Key annotations you need to know when working with JPA and Hibernate appeared first on Thoughts on Java.


by Thorben Janssen at January 09, 2020 10:33 AM

Payara Platform Monitoring Console

by Jan Bernitt at January 08, 2020 05:30 AM

Great news: starting with Payara Platform 5.194, Payara Server ships with a built-in monitoring console that lets you visualize the state of the server.


by Jan Bernitt at January 08, 2020 05:30 AM

Jersey Apache Connector Hangs …?

by Jan at January 07, 2020 09:59 PM

Jersey comes with various connectors to third-party HTTP clients. The way a connector is used is simple: put the connector and the third-party client on the classpath, and tell the client to use it. For the Apache Connector, use: ClientConfig clientConfig … Continue reading
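A minimal sketch of that setup, assuming the jersey-apache-connector artifact and the Apache HttpClient are on the classpath, might look like this:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.glassfish.jersey.apache.connector.ApacheConnectorProvider;
import org.glassfish.jersey.client.ClientConfig;

ClientConfig clientConfig = new ClientConfig();
// tell the JAX-RS client to delegate HTTP calls to the Apache HTTP client
clientConfig.connectorProvider(new ApacheConnectorProvider());
Client client = ClientBuilder.newClient(clientConfig);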

by Jan at January 07, 2020 09:59 PM

Deploy a Jakarta EE application to the root context

by rieckpil at January 07, 2020 06:24 AM

With the presence of Docker, Kubernetes and cheaper hardware, the deployment model of running multiple applications inside one application server is a thing of the past. Now, you deploy one Jakarta EE application to one application server. This eliminates the need for different context paths. You can use the root context / for your Jakarta EE application. With this blog post, you’ll learn how to achieve this for each Jakarta EE application server.

The default behavior for Jakarta EE application server

Without any further configuration, most of the Jakarta EE application servers deploy the application to a context path based on the filename of your .war. If you e.g. deploy your my-banking-app.war application, the server will use the context prefix /my-banking-app for your application. All your JAX-RS endpoints, Servlets, .jsp and .xhtml content are then available below this context, e.g. /my-banking-app/resources/customers.

This was important in the past, where you deployed multiple applications to one application server. Without the context prefix, the application server wouldn’t be able to route the traffic to the correct application.

As of today, the deployment model changed with Docker, Kubernetes and cheaper infrastructure. You usually deploy one .war within one application server running as a Docker container. Given this deployment model, the context prefix is irrelevant. Mapping the application to the root context / is more convenient.

If you configure a reverse proxy or an Ingress controller (in the Kubernetes world), you are happy if you can just route to / instead of remembering the actual context path (error-prone).

Deploying to root context: Payara & Glassfish

As Payara is a fork of Glassfish, the configuration for both is quite similar. The most convenient way for Glassfish is to place a glassfish-web.xml file in the src/main/webapp/WEB-INF folder of your application:

<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN"
  "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app>
  <context-root>/</context-root>
</glassfish-web-app>

For Payara the filename is payara-web.xml:

<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "https://raw.githubusercontent.com/payara/Payara-Server-Documentation/master/schemas/payara-web-app_4.dtd">
<payara-web-app>
	<context-root>/</context-root>
</payara-web-app>

Both also support configuring the context path of the application within their admin console. IMHO this is less convenient than the .xml file solution.

Deploying to root context: Open Liberty

Open Liberty also parses a proprietary web.xml file within src/main/webapp/WEB-INF: ibm-web-ext.xml

<web-ext
  xmlns="http://websphere.ibm.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-web-ext_1_0.xsd"
  version="1.0">
  <context-root uri="/"/>
</web-ext>

Furthermore, you can also configure the context of your application within your server.xml:

<server>
  <featureManager>
    <feature>servlet-4.0</feature>
  </featureManager>

  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

  <webApplication location="app.war" contextRoot="/" name="app"/>
</server>

Deploying to root context: WildFly

WildFly also has two simple ways of configuring the root context for your application. First, you can place a jboss-web.xml within src/main/webapp/WEB-INF:

<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 2.4//EN" "http://www.jboss.org/j2ee/dtd/jboss-web_4_0.dtd">
<jboss-web>
  <context-root>/</context-root>
</jboss-web>

Second, while copying your .war file to your Docker container, you can name it ROOT.war:

FROM jboss/wildfly
ADD target/app.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

For more tips & tricks for each application server, have a look at my cheat sheet.

Have fun deploying your Jakarta EE applications to the root context,

Phil

The post Deploy a Jakarta EE application to the root context appeared first on rieckpil.


by rieckpil at January 07, 2020 06:24 AM

Hashtag Jakarta EE #1

by Ivar Grimstad at January 05, 2020 10:59 AM

For a while now, I have been thinking of posting more regular updates about stuff going on in the Jakarta EE community. Kind of what Josh Long does with his “This Week in Spring” series. Being a big fan of Josh and the work he is doing in the community, I am not ashamed of copying him.

The goal is weekly updates, but being realistic I leave out the cadence from the title. So welcome to the first issue of Hashtag Jakarta EE!

The year 2020 is still young and pristine, with most members enjoying a well-deserved vacation after a busy 2019.

Worth mentioning, though, is the ongoing discussion regarding the establishment of a Working Group for MicroProfile. There are currently two proposals on the table, the MicroProfile Working Group Proposal and the Cloud Native for Java Working Group Proposal.

At the end of last year, a release plan for Jakarta EE 9 was made available by the Jakarta EE Platform Project. The work with this release will start up again next week with weekly calls. These calls are open for anyone. The details can be found in the EE4J PMC Calendar.

Hope you enjoyed this short update!


by Ivar Grimstad at January 05, 2020 10:59 AM

2019 Summary

by Ivar Grimstad at December 31, 2019 10:48 AM

It is time for my yearly summary of conferences and community activities. In addition to numerous local meetups and online events, I had the opportunity to speak at the following major developer conferences:

Jfokus, Stockholm
Devnexus, Atlanta
ConFoo, Montreal
JavaLand, Brühl
JEEConf, Kiev
QCon, São Paulo
DevTalks, Bucharest
Java Cloud Africa, Johannesburg
J4K, Orlando
Oracle CodeOne, San Francisco
EclipseCon Europe, Ludwigsburg
Trondheim Developer Day, Trondheim
Devoxx Ukraine, Kiev
Devoxx Belgium, Antwerp
Devoxx Morocco, Agadir
Java2Days, Sofia

The biggest change for me this year happened in October when I joined the Eclipse Foundation as the Jakarta EE Developer Advocate. This means that I am able to dedicate all my time to community building and contributions to open source.

My new role enables me to be even more present at conferences and events around the world, so I expect 2020 to be at least as busy. I already have conferences lined up for the first quarter and more in the planning.

With Jakarta EE 8 out the door in 2019, we are now in full planning for Jakarta EE 9. The biggest impact of this release will be the namespace change from javax.* to jakarta.*. We will also prune some not very widely used technologies in order to lighten the burden for new implementations of the platform. This work will probably continue in subsequent releases.

All in all 2019 was a great year for Jakarta EE and I expect 2020 to be even better!


by Ivar Grimstad at December 31, 2019 10:48 AM

Why Jakarta EE beats other java solutions from security point of view

December 26, 2019 10:00 PM

No one cares about security until a security incident happens. In enterprise development, such an incident can cost too much, so any preventive step helps. A significant part of the OWASP Application Security Verification Standard (ASVS) reads:

10.2.1 Verify that the application source code and third party libraries do not contain unauthorized phone home or data collection capabilities. Where such functionality exists, obtain the user's permission for it to operate before collecting any data.
10.2.3 Verify that the application source code and third party libraries do not contain back doors, such as hard-coded or additional undocumented accounts or keys, code obfuscation, undocumented binary blobs, rootkits, or anti-debugging, insecure debugging features, or otherwise out of date, insecure, or hidden functionality that could be used maliciously if discovered.
10.2.4 Verify that the application source code and third party libraries does not contain time bombs by searching for date and time related functions.
10.2.5 Verify that the application source code and third party libraries does not contain malicious code, such as salami attacks, logic bypasses, or logic bombs.
10.2.6 Verify that the application source code and third party libraries do not contain Easter eggs or any other potentially unwanted functionality.
10.3.2 Verify that the application employs integrity protections, such as code signing or sub-resource integrity. The application must not load or execute code from untrusted sources, such as loading includes, modules, plugins, code, or libraries from untrusted sources or the Internet.
14.2.4 Verify that third party components come from pre-defined, trusted and continually maintained repositories.

In other words, that means you should: "Verify all code including third-party binaries, libraries, frameworks are reviewed for hardcoded credentials (backdoors)."

When developing according to the Jakarta EE specification, you shouldn't need to use poorly controlled third-party libraries, as everything you need already comes with the application server. In turn, the application server is responsible for timely security updates, usage of verified libraries and much more...


December 26, 2019 10:00 PM

How to setup Darcula LAF on Netbeans 11

December 25, 2019 10:00 PM

It is a pity, but the Apache NetBeans IDE still comes without support for a default dark mode. Enabling the NetBeans 8.2 Plugin Portal does not have any effect, so to use plugins from previous versions we need to add a new provider (Tools->Plugins) with the following URL:

http://plugins.netbeans.org/nbpluginportal/updates/8.2/catalog.xml.gz

add provider

After that you should be able to set up the Darcula LAF in the standard way

add provider


December 25, 2019 10:00 PM

Developing Custom Annotations in Java

by Hüseyin Akdogan at December 23, 2019 06:00 AM

Every Java developer of any level who has worked with JDK 1.5 or later has used annotations. There is probably no one who has never seen or used @Inject, @Entity, @Autowired, or at the very least @Override. Annotations, one of the strengths of the Java language, work like decorators and let us attach metadata to program constructs such as classes, constructors, methods and fields. By letting the developer effectively label a specific action or behavior and inject it wherever the application needs it, annotations reduce duplicated code and improve code readability.

Do Java developers take full advantage of the power of annotations by creating their own custom ones? Based on my own development experience, my personal observations, and various posts, articles and published surveys, my impression is that Java developers like using annotations much more than writing them. My goal with this article is to help you get to know annotations more closely, show how they work, and, through a simple example of writing and using your own annotation, draw your attention to the advantages custom annotations provide.

How Does It Work?

An annotation is a marker which associates information with a program construct, but has no effect at run time. JLS Section 9.7

As stated in the Java Language Specification, annotations on their own have no effect at run time. More precisely, an annotation by itself does not change the runtime behavior of the program it is added to. In fact, an annotation acts as a marker ("an annotation is a marker" – JLS) that declares the behavior desired at the point where it is applied. For this reason, annotations must be processed by runtime frameworks or by the compiler in order to carry out the behavior or action the developer intends. At compile time, annotations are processed by annotation processors, which can be considered a kind of plugin to the Java compiler; at run time, they are processed via the Java Reflection API. In this article, we will look at an example of an annotation processed at run time with the Reflection API.

How Is It Defined?

Defining an annotation is similar to defining an interface. The only syntactic difference is that the @ symbol is placed in front of the interface keyword.

public @interface Monitor {}

To create an annotation, you basically need to provide two pieces of information: one is the retention policy, the other is the target. The retention policy defines when the annotation can be accessed, while the target defines which constructs (class, method, field) it can be applied to.

The retention policy is defined with RetentionPolicy. RetentionPolicy is an enum type and lives in the java.lang.annotation package.

package java.lang.annotation;
/**
* Annotation retention policy.  The constants of this enumerated type
* describe the various policies for retaining annotations.  They are used
* in conjunction with the {@link Retention} meta-annotation type to specify
* how long annotations are to be retained.
*
* @author  Joshua Bloch
* @since 1.5
*/
public enum RetentionPolicy {
/**
* Annotations are to be discarded by the compiler.
*/
SOURCE,
/**
* Annotations are to be recorded in the class file by the compiler
* but need not be retained by the VM at run time.  This is the default
* behavior.
*/
CLASS,
/**
* Annotations are to be recorded in the class file by the compiler and
* retained by the VM at run time, so they may be read reflectively.
*
* @see java.lang.reflect.AnnotatedElement
*/
RUNTIME
}

As you can see, there are 3 retention policies.

  • SOURCE: The annotation is discarded by the compiler.
  • CLASS: The annotation is recorded in the class file by the compiler but need not be retained by the JVM. This is the default behavior.
  • RUNTIME: The annotation is recorded in the class file by the compiler and retained by the JVM at run time, so it can be read via reflection.

RUNTIME is the best-known and most commonly used retention policy because it lets applications access annotations and their associated data via reflection and execute code accordingly.

Since Java 9, there are 11 targets; in other words, we can apply annotations to 11 kinds of constructs.

  • TYPE: Targets types such as classes, interfaces, annotations and enums
  • FIELD: Targets fields such as class variables and enum constants
  • METHOD: Targets class methods
  • MODULE: Introduced in Java 9, targets modules
  • PARAMETER: Targets method and constructor parameters
  • CONSTRUCTOR: Targets constructors
  • LOCAL_VARIABLE: Targets local variables
  • ANNOTATION_TYPE: Targets annotations, adding one annotation to another
  • PACKAGE: Targets packages
  • TYPE_PARAMETER: Introduced in Java 1.8, targets generic type parameters such as the T in MyClass<T>
  • TYPE_USE: Introduced in Java 1.8, targets any use of a type (for example creation with new, interface implementation, a cast, etc.)

Annotation Parameters and Annotation Types

Annotations attach metadata to program constructs through their parameters. Whether an annotation has parameters determines its type. Annotations that declare no parameters form the marker annotation type, annotations with a single parameter declaration form the single-element annotation type, and annotations with more than one parameter declaration form the complex annotation type. Primitive types, String, Class, Enum, another annotation, and arrays of these types can be declared as annotation parameters.
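An illustrative sketch of the three kinds (the annotation names are made up for this example):

// Marker annotation type: no parameters
public @interface Audited {}

// Single-element annotation type: a single parameter named value
public @interface Timeout {
    long value();
}

// Complex annotation type: more than one parameter
public @interface Retry {
    int attempts();
    long delayMillis() default 100;
}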

Our Scenario

Suppose that, for profiling purposes, our code contains methods whose execution times we want to measure. We want to measure not every method but only some of them, and new methods may be added to this set later. Writing an annotation for this is a good idea because, as noted at the beginning, annotations are a perfect fit for obtaining the same behavior again and again at whatever points we choose (for example, in new methods added to our code) without hurting code readability.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Monitor {
    MonitorPolicy value();
}

Since we already discussed the retention policy and the target, it is not hard to guess that the annotation shown above can be processed at run time with the Reflection API and can only be applied to methods (applying it to any other construct produces a compile-time error). At this point, let's also share one more detail about Target: you can define more than one target.

@Target({ElementType.FIELD, ElementType.METHOD})

In the body of our annotation, you can see that a parameter of the enum type MonitorPolicy is declared. We will access the enum value at run time via the value() method, and we can also use value as the key when passing a value to our annotation.

@Monitor(value = MonitorPolicy.SHORT)

However, using the value key to pass the value is not mandatory. The usage below is equivalent to the one above.

@Monitor(MonitorPolicy.SHORT)

If we had used a name other than value, we would have to spell it out, because value is the default key name in annotations. MonitorPolicy has the values SHORT and DETAILED. With these values, we determine the format of the information returned about the methods whose execution time we measure: detailed or short.

enum MonitorPolicy {
    SHORT, DETAILED
}

When we want to assign a default value to the parameter we defined, we use the default keyword.

MonitorPolicy value() default MonitorPolicy.SHORT;

In that case, with a usage like the one below, the monitor policy will be SHORT.

@Monitor
public String getUserInfo(){
   //Make HTTP request
}

Our Processor

As we said before, annotations do not execute any code by themselves. Since the Monitor annotation defines RUNTIME as its retention policy, it needs to be processed at run time using the Java Reflection API. The following method performs this task.

public static String executor(final Object object, final Object... passedParameters) {
    checkObjectReference(object);
    final StringBuilder result = new StringBuilder();
    final Method[] methods = object.getClass().getDeclaredMethods();
    for(Method method :methods){
        if(method.isAnnotationPresent(Monitor.class)){
            if(passedParameters.length > 0){
                result.append(invoker(method, object, passedParameters));
            } else {
                result.append(invoker(method, object));
            }
        }
    }
    return result.toString();
}

The executor method takes two arguments: a reference to an object of the class that uses the Monitor annotation, and a varargs parameter holding the arguments of the methods to be measured.

First, via the object reference, we fetch the declared methods of the object and then, iterating over them in a loop, we check whether the Monitor annotation is applied using the Reflection API's isAnnotationPresent() method (note that we pass the annotation class to the method). If a method applies the Monitor annotation, we call the invoker method to run it.

private static String invoker(final Method method, Object object, Object... passedParameters) {

    method.setAccessible(true);
    final StringBuilder result = new StringBuilder();
    final MonitorPolicy policy = method.getAnnotation(Monitor.class).value();
    long start = System.currentTimeMillis();

    try {
        if (passedParameters.length > 0) {
            checkTypeMismatch(method.getName(), method.getParameters(), passedParameters);
            method.invoke(object, passedParameters);
        } else if(method.getGenericParameterTypes().length > 0){
            throw new MissingArgumentException("The parameter(s) of the " + method.getName() + " method are missing");
        } else {
            method.invoke(object);
        }

        final long end = System.currentTimeMillis();
        if(policy.equals(MonitorPolicy.SHORT)){
            result.append(end - start).append(" ms");
            logger.info( "{} ms", (end - start));
        } else {
            result.append("Total execution time of ")
                    .append(method.getName())
                    .append(" method is ")
                    .append(end - start)
                    .append(" ms");
            logger.info("Total execution time of {} method is {} ms",  method.getName(), (end - start));
        }

    } catch ...

To be able to access methods that are not declared public, we first configure access control in the invoker method with method.setAccessible(true); otherwise, we would get an IllegalAccessException for private methods. In the next step, we retrieve the value defined on our annotation using the Reflection API's getAnnotation method, which returns the annotation of the specified type (note that we pass the annotation class to the method) associated with the construct it is called on (a method in our example). We will use this value to decide how to output the execution time information. After this stage, we store the current timestamp in milliseconds in the start variable, and the timestamp obtained after running the method that applies the Monitor annotation via the Reflection API's invoke method in the end variable. Finally, we output the execution time depending on the MonitorPolicy.

Below you can see the likely output we would get with a usage like the shared example.

@Monitor(MonitorPolicy.DETAILED)
public String getUserInfo(){
   //Make HTTP request
}
Profiler.executor(new User());

Total execution time of getUserInfo method is 1459 ms

You can find the application containing the Monitor annotation here.

Conclusion

Java annotations are one of the strengths of the Java language: they let us attach metadata to Java program constructs such as classes, constructors, methods and fields, thereby reducing duplicated code and improving code readability. Annotations do not execute any code on their own; instead, they act as markers declaring the behavior desired at the point of application for the compiler or the framework they belong to, and they are processed by Java annotation processors at compile time and by the Java Reflection API at run time.


by Hüseyin Akdogan at December 23, 2019 06:00 AM

Run your Jakarta EE/MicroProfile application on Quarkus with minimum changes

by Jean-François James at December 17, 2019 04:58 PM

Quarkus 1.0.0.Final has been released, so I think it’s time to go beyond the “Get started” guides and to give it a try. In this article, I’m going to illustrate how to run an existing Jakarta EE/MicroProfile application on Quarkus, both in JVM and native modes, with minimum changes on the code and the configuration. I’ve […]

by Jean-François James at December 17, 2019 04:58 PM

The Payara Monthly Catch for Nov/Dec 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at December 17, 2019 10:30 AM

For the festive season we decided to change it up a bit and create a mega bumper edition of the monthly catch by combining November and December, serving it to you before many head off for their Christmas holidays.

Below you will find a curated list of some of the most interesting news, articles and videos from this month. Can't wait until the end of the month? Then visit our Twitter page, where we post all these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at December 17, 2019 10:30 AM

Jakarta EE: Creating an Enterprise JavaBeans Timer

by Rhuan Henrique Rocha at December 17, 2019 03:33 AM

Enterprise JavaBeans (EJB) has many interesting and useful features, some of which I will be highlighting in this and upcoming articles. In this article, I’ll show you how to create an EJB timer programmatically and with annotation. Let’s go!

The EJB timer feature allows us to schedule tasks to be executed according to a calendar configuration. It is very useful because we can execute scheduled tasks using the power of the Jakarta context. When we run tasks based on a timer, we need to answer some questions about concurrency, which node the task was scheduled on (in the case of a clustered application), what to do if the task does not execute, and others. When we use the EJB timer, we can delegate many of these concerns to the Jakarta context and focus more on business logic. It is interesting, isn’t it?

Creating an EJB timer programmatically

We can schedule an EJB timer to run according to business logic using a programmatic approach. This approach can be used when we want dynamic behavior, based on the parameter values passed to the process. Let’s look at an example of an EJB timer:

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import java.util.logging.Logger;

@Stateless
public class MyTimer {

    private Logger logger = Logger.getLogger(MyTimer.class.getName());
    @Resource
    private SessionContext context;

    public void initTimer(String message){
        context.getTimerService().createTimer(10000, message);
    }

    @Timeout
    public void execute(){
        logger.info("Starting");

        context.getTimerService().getAllTimers().stream().forEach(timer -> logger.info(String.valueOf(timer.getInfo())));
        

        logger.info("Ending");
    }    
}

To schedule this EJB timer, call this method:

@Inject
private MyTimer myTimer;
....
myTimer.initTimer(message);

After 10000 milliseconds have passed, the method annotated with @Timeout will be called.

Scheduling an EJB timer using annotation

We can also create an EJB timer that is automatically scheduled to run according to an annotation configuration. Look at this example:

@Singleton
public class MyTimerAutomatic {

    private Logger logger = Logger.getLogger(MyTimerAutomatic.class.getName());

    @Schedule(hour = "*", minute = "*",second = "0,10,20,30,40,50",persistent = false)
    public void execute(){

        logger.info("Automatic timer executing");

    }
}

As you can see, to configure an automatic EJB timer schedule, you can annotate the method using @Schedule and configure the calendar attributes. For example:

@Schedule(hour = "*", minute = "*",second = "0,10,20,30,40,50",persistent = false)

As you can see, the method execute is configured to be called every 10 seconds. You can configure whether the timer is persistent as well.

Conclusion

EJB timer is a good EJB feature that is helpful in solving many problems. Using the EJB timer feature, we can schedule tasks to be executed, thereby delegating some responsibilities to Jakarta context to solve for us. Furthermore, we can create persistent timers, control the concurrent execution, and work with it in a clustered environment.  If you want to see the complete example, visit this repository on GitHub.

This post was originally published on the Red Hat Developer blog, and you can read it here.

 


by Rhuan Henrique Rocha at December 17, 2019 03:33 AM

Jakarta EE Quickstart Guides for each application server

by rieckpil at December 16, 2019 03:08 PM

Read about my latest YouTube series: Jakarta EE Quickstart Guides. It targets both Jakarta EE newcomers and experienced developers to provide tutorials for common tasks. This page works as an entry page to query for the tutorial you are looking for.

About the Jakarta EE Quickstart Guides series

Jakarta EE Quickstart Guides Blog Logo

The goal of this series is to provide short (5 to 15 minutes) and easy-to-understand videos on YouTube for common  Jakarta EE topics. Next to the videos, I’m uploading the source code for the examples on GitHub.

Furthermore, I’ll use all compatible Jakarta EE application servers for these guides to vary them and fulfill everyone’s needs.

» Jakarta EE Quickstart Guides YouTube Playlist

I. Jakarta EE project setup

II. Jakarta EE tooling

III. Resource connection quickstart guides

IV. Testing a Jakarta EE project

V. Development

VI. Deployment

VII. more to come

There are more topics to come, e.g.: integration with Eclipse MicroProfile, deployment to Kubernetes, best practices, etc.

Further Jakarta EE resources

Is the Jakarta EE guide you are searching for missing?

I’m looking forward to creating a quick start guide for your topic. Either add a comment to an existing video on YouTube or create an issue on the corresponding GitHub repository.

Furthermore, have a look at all Jakarta EE Tutorials on my blog for more content about Jakarta EE.

Have fun with these Jakarta EE Tutorials,

Phil

The post Jakarta EE Quickstart Guides for each application server appeared first on rieckpil.


by rieckpil at December 16, 2019 03:08 PM

Jakarta EE & React file up- and download using Java 11 and TypeScript

by rieckpil at December 12, 2019 05:55 AM

Given the latest release of Payara, we can now officially use it with Java 11 and Jakarta EE. I’m using this occasion to demonstrate how to create a Jakarta EE backend with a React frontend using TypeScript to up- and download a file. This example also includes a solution to create and bundle a React application with Maven and serve the result with Payara.

The final result will look like the following:

Jakarta EE React File application

Jakarta EE project setup

The sample project uses Maven, Java 11 and the following dependencies:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>de.rieckpil.blog</groupId>
  <artifactId>jakarta-ee-react-file-handling</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <jakarta.jakartaee-api.version>8.0.0</jakarta.jakartaee-api.version>
    <jersey.version>2.29.1</jersey.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>jakarta.platform</groupId>
      <artifactId>jakarta.jakartaee-api</artifactId>
      <version>${jakarta.jakartaee-api.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jersey.media</groupId>
      <artifactId>jersey-media-multipart</artifactId>
      <version>${jersey.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jersey.core</groupId>
      <artifactId>jersey-server</artifactId>
      <version>${jersey.version}</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  
  <!-- more -->
</project>

As the JAX-RS specification does not provide a standard for handling file upload as multipart data, I’m including proprietary Jersey dependencies for this. We can mark them with scope provided, as they are bundled with Payara and therefore don’t need to be part of the .war file.

For this project, I’ll demonstrate a solution to build the frontend application with Maven. Furthermore, the Payara Server will then serve the static files for the Single Page Application. We can achieve this by configuring the build section of our project:

<project>

  <!-- dependencies like seen above -->

  <build>
    <finalName>jakarta-ee-react-file-handling</finalName>
    <plugins>
      <plugin>
        <groupId>com.github.eirslett</groupId>
        <artifactId>frontend-maven-plugin</artifactId>
        <version>1.8.0</version>
        <executions>
          <execution>
            <id>install node and npm</id>
            <goals>
              <goal>install-node-and-npm</goal>
            </goals>
            <phase>generate-resources</phase>
          </execution>
          <execution>
            <id>npm install</id>
            <goals>
              <goal>npm</goal>
            </goals>
            <phase>generate-resources</phase>
            <configuration>
              <arguments>install</arguments>
            </configuration>
          </execution>
          <execution>
            <id>npm test</id>
            <goals>
              <goal>npm</goal>
            </goals>
            <phase>generate-resources</phase>
            <configuration>
              <environmentVariables>
                <CI>true</CI>
              </environmentVariables>
              <arguments>test</arguments>
            </configuration>
          </execution>
          <execution>
            <id>npm build</id>
            <goals>
              <goal>npm</goal>
            </goals>
            <phase>generate-resources</phase>
            <configuration>
              <environmentVariables>
                <CI>true</CI>
              </environmentVariables>
              <arguments>run build</arguments>
            </configuration>
          </execution>
        </executions>
        <configuration>
          <workingDirectory>src/main/frontend</workingDirectory>
          <nodeVersion>v12.13.1</nodeVersion>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.1</version>
      </plugin>
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.22.2</version>
      </plugin>
      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.2.3</version>
        <configuration>
          <webResources>
            <resource>
              <directory>${project.basedir}/src/main/frontend/build</directory>
            </resource>
          </webResources>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

First, the frontend-maven-plugin takes care of installing all npm dependencies, executing the frontend tests, and building the React Single Page Application. It also downloads the correct Node version if it is not already present. You can configure this with nodeVersion in the configuration section of the plugin, which ensures each team member uses the same version to build the project.

Setting CI=true is specific to create-react-app. It ensures that the tests and the frontend build don’t run in interactive mode and instead finish so that the remaining Maven plugins can proceed.

Finally, we can configure the maven-war-plugin to include our frontend resources as web resources. When we now build the project with mvn package, it will build the frontend and backend application and bundle the frontend resources within our .war file.

Handling files with the Jakarta EE backend

Next, let’s have a look at how to handle the file up- and download with our Jakarta EE backend. The backend provides two endpoints: one to upload a file and another to download a random file.

In the first place, we have to register the MultiPartFeature of Jersey for our application. There are multiple ways to do this. I’m using a JAX-RS configuration class to achieve this:

@ApplicationPath("resources")
public class JAXRSConfiguration extends ResourceConfig {
  public JAXRSConfiguration() {
    packages("de.rieckpil.blog").register(MultiPartFeature.class);
  }
}

As a result of this, we can now use the proprietary Jersey feature for our JAX-RS resource.

For the file upload, we’ll store the file in a list and include the original filename:

@Path("files")
@ApplicationScoped
public class FileResource {

  private List<FileContent> inMemoryFileStore = new ArrayList<>();

  @POST
  @Consumes(MediaType.MULTIPART_FORM_DATA)
  public Response uploadNewFile(@FormDataParam("file") InputStream inputStream,
                                @FormDataParam("file") FormDataContentDisposition fileDetails,
                                @Context UriInfo uriInfo) throws IOException {

    this.inMemoryFileStore.add(new FileContent(fileDetails.getFileName(), inputStream.readAllBytes()));

    return Response.created(uriInfo.getAbsolutePath()).build();
  }
}

The FileContent class is a simple POJO to store the relevant information:

public class FileContent {

  private String fileName;
  private byte[] content;

  // constructors, getters & setters
}

Our frontend application can request a random file on the same endpoint but has to use HTTP GET. To properly download a file, we have to set the content type to application/octet-stream and configure some HTTP headers:

@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Response downloadRandomFile() {

  if (inMemoryFileStore.size() == 0) {
    return Response.noContent().build();
  }

  FileContent randomFile = inMemoryFileStore.get(
    ThreadLocalRandom.current().nextInt(0, inMemoryFileStore.size()));

  return Response
    .ok(randomFile.getContent())
    .type(MediaType.APPLICATION_OCTET_STREAM)
    .header(HttpHeaders.CONTENT_LENGTH, randomFile.getContent().length)
    .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=" + randomFile.getFileName())
    .build();
}

React project setup

Next, let’s have a look at the React project setup. With create-react-app we have a great solution for bootstrapping new React applications. I’m using this to place the frontend within the src/main folder with the following command:

npx create-react-app frontend --template typescript

To actually create a TypeScript based project, we can use the argument --template typescript.

For proper components and styling, I’m adding semantic-ui-react and semantic-ui-css to this default project:

{
  "name": "frontend",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@testing-library/jest-dom": "^4.2.4",
    "@testing-library/react": "^9.3.2",
    "@testing-library/user-event": "^7.1.2",
    "@types/jest": "^24.0.23",
    "@types/node": "^12.12.14",
    "@types/react": "^16.9.15",
    "@types/react-dom": "^16.9.4",
    "react": "^16.12.0",
    "react-dom": "^16.12.0",
    "react-scripts": "3.3.0",
    "semantic-ui-css": "^2.4.1",
    "semantic-ui-react": "^0.88.2",
    "typescript": "^3.7.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },
  "eslintConfig": {
    "extends": "react-app"
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

File up- and download from React

Finally, let’s have a look at the React application written in TypeScript. It contains three components: FileUploadComponent, FileDownloadComponent, and App to wrap everything. All of them use React’s functional component approach.

Let’s start with the FileUploadComponent:

interface FileUploadComponentProps {
  uploadFile: (file: File) => void
}

const FileUploadComponent: React.FC<FileUploadComponentProps> = ({uploadFile}) => {

  const [fileContent, setFileContent] = useState<File | null>();

  return <React.Fragment>
    <Header as='h4'>Upload your file</Header>
    <Form onSubmit={event => fileContent && uploadFile(fileContent)}>
      <Form.Group widths='equal'>
        <Form.Field>
          <input placeholder='Select a file'
                 type='file'
                 onChange={event => event.target.files && setFileContent(event.target.files.item(0))}/>
        </Form.Field>
        <Button type='submit'>Upload</Button>
      </Form.Group>
    </Form>
  </React.Fragment>;
};

export default FileUploadComponent;

Once we select a file and submit the HTML form, the component passes the selected file to the uploadFile function.

The FileDownloadComponent is even simpler, as it executes a download function whenever someone clicks the button:

interface FileDownloadComponentProps {
  downloadFile: () => void
}

const FileDownloadComponent: React.FC<FileDownloadComponentProps> = ({downloadFile}) => {
  return <React.Fragment>
    <Header as='h4'>Download a random file</Header>
    <Form>
      <Button type='submit' onClick={downloadFile}>Download</Button>
    </Form>
  </React.Fragment>;
};

export default FileDownloadComponent;

Last but not least the App component orchestrates everything:

const App: React.FC = () => {

  const [statusInformation, setStatusInformation] = useState<StatusInformation>();

  return <Container text style={{marginTop: 10}}>
    <Image src='/jakartaEELogo.png' size='small' centered/>
    <Header as='h2' textAlign='center'>Jakarta EE & React File Handling</Header>
    {statusInformation && <Message color={statusInformation.color}>{statusInformation.message}</Message>}
    <FileUploadComponent uploadFile={file => {
      setStatusInformation(undefined);
      new ApiClient()
        .uploadFile(file)
        .then(response => setStatusInformation(
          {
            message: 'Successfully uploaded the file',
            color: 'green'
          }))
        .catch(error => setStatusInformation({
          message: 'Error occurred while uploading file',
          color: 'red'
        }))
    }}/>
    <Divider/>
    <FileDownloadComponent
      downloadFile={() => {
        setStatusInformation(undefined);
        new ApiClient()
          .downloadRandomFile()
          .then(response => {
            setStatusInformation({
              message: 'Successfully downloaded a random file',
              color: 'green'
            });
            downloadFileInBrowser(response);
          })
          .catch(error => setStatusInformation({
            message: 'Error occurred while downloading file',
            color: 'red'
          }))
      }}/>
  </Container>;
};

export default App;

Our ApiClient uses the fetch API for making HTTP requests:

export class ApiClient {

  uploadFile(file: File) {
    let data = new FormData();
    data.append('file', file);

    return fetch('http://localhost:8080/resources/files', {
      method: 'POST',
      body: data
    });
  }

  downloadRandomFile() {
    return fetch('http://localhost:8080/resources/files');
  }
}

To actually download the incoming file from the backend directly, I’m using the following solution:

const downloadFileInBrowser = (response: Response) => {
  const filename = response.headers.get('Content-Disposition')!.split('filename=')![1];
  response.blob().then(blob => {
    let url = window.URL.createObjectURL(blob);
    let a = document.createElement('a');
    a.href = url;
    a.download = filename;
    a.click();
  });
};

File handling for other project setups

Similar to this example, I’ve created several guides for handling files with different project setups. Besides Jakarta EE and React file handling, find other examples here:

The source code, with instructions on how to run this example, is available on GitHub.

Have fun up- and downloading files with Jakarta EE and React,

Phil

The post Jakarta EE & React file up- and download using Java 11 and TypeScript appeared first on rieckpil.


by rieckpil at December 12, 2019 05:55 AM

The Road to Jakarta EE Compatibility

by Patrik Duditš at December 11, 2019 06:06 PM

Payara Platform 5.194 was released recently, and just like the previous release, it is a certified Jakarta EE implementation. The request for certification can be seen on Jakarta EE platform project issue tracker.


by Patrik Duditš at December 11, 2019 06:06 PM

Joyful Open Liberty Developer Experience with Liberty Maven Plugin

by rieckpil at November 30, 2019 08:59 AM

Short feedback cycles during development are essential for your productivity. If you practice TDD, you’ll agree on this even more. In the past, I’ve used several IDE plugins to achieve this for developing Jakarta EE applications. They all worked, but it wasn’t joyful. Adam Bien’s Watch and Deploy was a light at the end of the tunnel for a simple & generic solution. Recently I stumbled over the Liberty Maven Plugin from the Open Liberty team. It is a real gamechanger for the development experience with Liberty profile servers.

TL;DR: You’ll achieve the following with this Maven plugin:

  • Hot-reloading of code/configuration/dependency changes
  • TDD joy for developing Jakarta EE and MicroProfile applications
  • Easily debug your deployed application
  • Start your Open Liberty server without manual effort

Integrate the Liberty Maven Plugin to your project

Integrating the Open Liberty Maven Plugin into an existing project is as simple as the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>de.rieckpil.blog</groupId>
    <artifactId>open-maven-plugin-review</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <jakarta.jakartaee-api.version>8.0.0</jakarta.jakartaee-api.version>
        <microprofile.version>3.2</microprofile.version>
        <junit-jupiter.version>5.5.0</junit-jupiter.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>${jakarta.jakartaee-api.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>${microprofile.version}</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>${junit-jupiter.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>open-maven-plugin-review</finalName>
        <plugins>
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.1</version>
            </plugin>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
            </plugin>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
            </plugin>
        </plugins>
    </build>
</project>

Open Liberty server configuration

By default, the plugin makes use of your configuration within src/main/liberty. To now configure the server, we can place our server.xml within src/main/liberty/config:

<?xml version="1.0" encoding="UTF-8"?>
<server description="rieckpil">

    <featureManager>
        <feature>cdi-2.0</feature>
        <feature>jpa-2.2</feature>
        <feature>jaxrs-2.1</feature>
        <feature>mpConfig-1.3</feature>
        <feature>mpHealth-2.0</feature>
        <feature>mpRestClient-1.3</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

    <!-- ${databaseName} is configured in the pom.xml -->
    <dataSource id="DefaultDataSource" jndiName="jdbc/h2">
        <jdbcDriver libraryRef="h2-library"/>
        <properties URL="jdbc:h2:mem:${databaseName}"/>
    </dataSource>

    <library id="h2-library">
        <file name="${server.config.dir}/h2.jar"/>
    </library>
</server>

If you later start the Open Liberty server, you’ll find this file within target/liberty/wlp/usr/servers/defaultServer. To provide further resources, e.g. a JDBC driver like in the example above, just place them in the same directory as your server.xml. You can also create new folder structures to separate the files from each other.

Similarly, you can configure server parameters such as environment variables, JVM options, or Liberty-specific variables with this plugin. Since plugin version 3.1, you can achieve this within the properties section of your pom.xml:

<properties>
   <maven.compiler.source>11</maven.compiler.source>
   <maven.compiler.target>11</maven.compiler.target>
   <failOnMissingWebXml>false</failOnMissingWebXml>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
   <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
   <jakarta.jakartaee-api.version>8.0.0</jakarta.jakartaee-api.version>
   <microprofile.version>3.2</microprofile.version>
   <junit-jupiter.version>5.5.0</junit-jupiter.version>
   <!-- Configuration for Open Liberty -->
   <liberty.jvm.minHeap>-Xms512m</liberty.jvm.minHeap>
   <liberty.env.MY_MESSAGE>Hello World from Maven pom.xml!</liberty.env.MY_MESSAGE>
   <liberty.var.databaseName>jakarta</liberty.var.databaseName>
</properties>

The liberty.env entries can then be consumed, for example, via MicroProfile Config. The liberty.var variables are available within your server.xml for further configuration, as shown above with the name of the H2 database.
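
To illustrate, here is a minimal sketch of how the MY_MESSAGE environment variable from the pom.xml above could be consumed via MicroProfile Config; the resource name and path are made up for this example:

@Path("message")
@ApplicationScoped
public class MessageResource {

    // MicroProfile Config also resolves environment variables,
    // so MY_MESSAGE from the pom.xml is picked up here
    @Inject
    @ConfigProperty(name = "MY_MESSAGE", defaultValue = "no message configured")
    private String myMessage;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getMessage() {
        return myMessage;
    }
}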

Find a list of all possible server configurations here.

Open Liberty in development mode = joy

Now comes the joyful part…

With this plugin you can launch the Open Liberty server on your machine in development mode:

mvn liberty:dev

If you execute this for the first time, the plugin will download the openliberty-kernel and all the required features you need. These files are stored within your local Maven repository folder (usually ~/.m2/repository) and can be therefore used for multiple projects.

Once all artifacts are downloaded, your Open Liberty server starts in development mode. If you now change the source code of your application, the plugin compiles and deploys your changes to the running server in a matter of seconds.

This is not limited to changes to your code. You can also add new dependencies to your Maven project or adjust your server configuration. Even if you add new features to your server.xml, they will be installed automatically.

Furthermore, you can run your (unit & integration) tests on demand by hitting Enter. In addition, if you start the development mode with -DhotTests, your tests are executed on each change:

mvn liberty:dev -DhotTests

This is a gamechanger for the TDD (Test Driven Development) experience with Open Liberty.

Once Open Liberty is running in this development mode, you can also debug your application. The default port is 7777 and you can easily attach your IDE to the running JVM and debug your application.

To exit the development mode, either enter q in your terminal and hit enter or exit with CTRL+C.

Other Maven plugin goals

Besides this development mode, you can also start and stop your Open Liberty server with this plugin. The Maven goals for this are mvn liberty:start and mvn liberty:stop. The server content is located at target/liberty. There is no need to manually download Open Liberty anymore, as the plugin takes care of this.

If you want to start Open Liberty in the foreground, replace the start goal with run.

There are more useful goals for this Maven plugin to explore, e.g. dump, debug, and create. The full list of goals is available on GitHub.

Summary of the Open Liberty Maven Plugin

With this Maven plugin, your Jakarta EE or MicroProfile development experience will take off. Given this plugin, there is no need to download the latest server anymore. The conventions for the server configuration also make it more convenient to use compared to achieving the same with Docker. This plugin will definitely be part of all my future Open Liberty projects, and I can’t imagine developing applications without it anymore.

For a hands-on example, I’ve recorded a YouTube video:

Clone the example application for this review from GitHub or quickly add the plugin to your own project to see how joyful it makes developing with Open Liberty. There is also a recording available at the Open Liberty blog.

If you are looking for a solution to achieve hot reloading for other application servers, have a look at Adam Bien’s WAD.

Have fun using the Open Liberty Maven Plugin,

Phil

The post Joyful Open Liberty Developer Experience with Liberty Maven Plugin appeared first on rieckpil.


by rieckpil at November 30, 2019 08:59 AM

Don’t expose your JPA entities in your REST API

by Thorben Janssen at November 28, 2019 05:34 PM

The post Don’t expose your JPA entities in your REST API appeared first on Thoughts on Java.

Should you expose your entities in your REST API, or should you prefer to serialize and deserialize DTO classes?
That’s one of the most commonly asked questions when I’m talking to developers or when I’m coaching teams who are working on a new application.

There are two main reasons for these questions and all the discussions that arise from them:

  1. Entities are POJOs. It often seems like they can get easily serialized and deserialized to JSON documents. If it really works that easily, the implementation of your REST endpoints would become pretty simple.
  2. Exposing your entities creates a strong coupling between your API and your persistence model. Any difference between the 2 models introduces extra complexity, and you need to find a way to bridge the gap between them. Unfortunately, there are always differences between your API and your persistence model. The most obvious ones are the handling of associations between your entities.

There is an obvious conflict. It seems like exposing entities makes implementing your use cases easier, but it also introduces new problems. So, what has a bigger impact on your implementation? And are there any other problems that might not be that obvious?

I have seen both approaches in several projects, and over the years, I’ve formed a pretty strong opinion on this. Even though it’s tempting to expose your entities, you should avoid it for all applications with at least moderate complexity and for all applications that you need to support for a long time. Exposing your entities in your API makes it impossible to fulfill a few best practices when designing your API; it reduces the readability of your entity classes, slows down your application, and makes it hard to implement a true REST architecture.

You can avoid all of these issues by designing DTO classes, which you then serialize and deserialize on your API. That requires you to implement a mapping between the DTOs and your internal data structures. But that’s worth it if you consider all the downsides of exposing entities in your API.

Let me explain …

Don’t want to read? You can watch it here!

Hide implementation details

As a general best practice, your API shouldn’t expose any implementation details of your application. The structure that you use to persist your data is such a detail. Exposing your entities in your API obviously doesn’t follow this best practice.

Almost every time I bring up this argument in a discussion, someone skeptically raises an eyebrow or directly asks if that is really that big of a deal.

Well, it’s only a big deal if you want to be able to add, remove or change any attributes of your entities without changing your API or if you’re going to change the data returned by a REST endpoint without changing your database.

In other words: Yes, separating your API from your persistence layer is necessary to implement a maintainable application. If you don’t do it, every change of your REST API will affect your entity model and vice versa. That means your API and your persistence layer can no longer evolve independently of each other.




Don’t bloat your entities with additional annotations

And if you consider only exposing entities when they are a perfect match for the input or return value of a REST endpoint, then please be aware of the additional annotations you will need to add for JSON serialization and deserialization.

Most entity mappings already require several annotations. Adding additional ones for your JSON mapping makes the entity classes even harder to understand. Better keep it simple and separate the entity class from the class you use to serialize and deserialize your JSON documents.

Different handling of associations

Another argument to not expose your entities in your API is the handling of associations between entities. Your persistence layer and your API treat them differently. That’s especially the case if you’re implementing a REST API.

With JPA and Hibernate, you typically use managed associations that are represented by an entity attribute. That enables you to join the entities in your queries easily and to use the entity attribute to traverse the association in your business code. Depending on the configured fetch type and your query, this association is either fully initialized, or lazily fetched on the first access.

In your REST API, you handle these associations differently. The correct way would be to provide a link for each association. Roy Fielding described that as HATEOAS. It’s one of the essential parts of a REST architecture. But most teams decide to either not model the associations at all or to only include id references.

Links and id references provide a similar challenge. When you serialize your entity to a JSON document, you need to fetch the associated entities and create references for each of them. And during deserialization, you need to take the references and fetch entities for them. Depending on the number of required queries, this might slow down your application.

That’s why teams often exclude associations during serialization and deserialization. That might be OK for your client applications, but it creates problems if you try to merge an entity that you created by deserializing a JSON object. Hibernate expects that managed associations either reference other entity objects or dynamically created proxy objects or a Hibernate-specific List or Set implementation. But if you deserialize a JSON object and ignore the managed associations on your entity, the associations get set to null. You then either need to set them manually, or Hibernate will delete the association from your database.
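
A minimal sketch of the kind of manual fix-up this requires before merging a deserialized entity (the Book/Author model and the Jackson call are only for illustration):

// the JSON payload did not contain the authors association,
// so it is null on the deserialized Book entity
Book deserializedBook = objectMapper.readValue(json, Book.class);

// reload the managed association and set it manually before merging,
// otherwise Hibernate would remove the association in the database
Book managedBook = entityManager.find(Book.class, deserializedBook.getId());
deserializedBook.setAuthors(managedBook.getAuthors());

entityManager.merge(deserializedBook);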

As you can see, managing associations can be tricky. Don’t get me wrong; these issues can be solved. But that requires extra work, and if you forget just one of them, you will lose some of your data.

Design your APIs

Another drawback of exposing your entities is that most teams use it as an excuse not to design the responses of their REST endpoints. They only return serialized entity objects.

But if you’re not implementing a very simple CRUD operation, your clients will most likely benefit from carefully designed responses. Here are a few examples for a basic bookstore application:

  • When you return the result of a search for a book, you might only want to return the title and price of the book, the names of its authors and the publisher, and an average customer rating. With a specifically designed JSON document, you can avoid unnecessary information and embed the information of the authors, the publisher, and the average rating instead of providing links to them.
  • When the client requests detailed information about a book, the response will most likely be pretty similar to a serialized representation of the entity. But there will be some important differences. Your JSON document might contain the title, blurb, additional description, and other information about the book. But there is some information you don’t want to share, like the wholesale price or the current inventory of the book. You might also want to exclude the associations to the authors and reviews of this book.

Creating these different representations based on use case specific DTO classes is pretty simple. But doing the same based on a graph of entity objects is much harder and most likely requires some manual mappings.
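
As a rough sketch of what such a use case specific DTO and mapping could look like for the search result example above (the class and field names are made up for illustration, assuming Book and Author entities with the usual getters):

public class BookSearchResultDto {

    private String title;
    private BigDecimal price;
    private List<String> authorNames;

    // constructors, getters & setters
}

// simple manual mapping in the resource or service layer
public BookSearchResultDto toDto(Book book) {
    BookSearchResultDto dto = new BookSearchResultDto();
    dto.setTitle(book.getTitle());
    dto.setPrice(book.getPrice());
    dto.setAuthorNames(book.getAuthors().stream()
            .map(Author::getName)
            .collect(Collectors.toList()));
    return dto;
}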

Support multiple versions of your API

If your application gets used for a while, you will need to add new REST endpoints and change existing ones. If you can’t always update all clients at the same time, this will force you to support multiple versions of your API.

Doing that while exposing your entities in your API is a tough challenge. Your entities then become a mix of currently used and old, deprecated attributes that are annotated with @Transient so that they don’t get persisted in the database.

Supporting multiple versions of an API is much easier if you’re exposing DTOs. That separates the persistence layer from your API, and you can introduce a migration layer to your application. This layer separates all the operations required to map the calls from your old API to the new one. That allows you to provide a simple and efficient implementation of your current API. And whenever you deactivate the old API, you can remove the migration layer.




Conclusion

As you can see, there are several reasons why I don’t like to expose entities in my APIs. But I also agree that none of them creates unsolvable problems. That’s why there are still so many discussions about this topic.

If you’re having this discussion in your team, you need to ask yourself: Do you want to spend the additional effort to fix all these issues to avoid the very basic mapping between entity and DTO classes?

In my experience, it’s just not worth the effort. I prefer to separate my API from my persistence layer and implement a few basic entity to DTO mappings. That keeps my code easy to read and gives me the flexibility to change all internal parts of my application without worrying about any clients.

The post Don’t expose your JPA entities in your REST API appeared first on Thoughts on Java.


by Thorben Janssen at November 28, 2019 05:34 PM

Jakarta EE integration tests with MicroShed Testing

by rieckpil at November 22, 2019 06:57 AM

Integration tests for your Jakarta EE application are essential. Testing the application in a full setup will ensure all of your components can work together. The Testcontainers project provides a great approach to use Docker containers for such tests. With MicroShed Testing you get a convenient way to use Testcontainers for writing integration tests for your Jakarta EE application. This is also true for Java EE or standalone MicroProfile applications.

Learn how to use this framework with this blog post. I’m using a Jakarta EE 8, MicroProfile 3.0, Java 11 application running on Open Liberty for the example.

Project setup for MicroShed Testing

For creating this project I’m using the following Maven Archetype:

mvn archetype:generate \
  -DarchetypeGroupId=de.rieckpil.archetypes \
  -DarchetypeArtifactId=jakartaee8 \
  -DarchetypeVersion=1.0.0 \
  -DgroupId=de.rieckpil.blog \
  -DartifactId=review-microshed-testing \
  -DinteractiveMode=false

Besides the basic dependencies for Jakarta EE and Eclipse MicroProfile, we also need some test dependencies. As MicroShed Testing uses Testcontainers under the hood, we can add the official Testcontainers dependencies for starting a PostgreSQL and a MockServer container.

Furthermore, we need JUnit 5 and the MicroShed Testing Open Liberty dependency itself. The Log4j 1.2 SLF4J binding dependency is optional but improves the logging output of our tests.

As a result, the full pom.xml looks like the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>de.rieckpil.blog</groupId>
    <artifactId>review-microshed-testing</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <jakarta.jakartaee-api.version>8.0.0</jakarta.jakartaee-api.version>
        <microprofile.version>3.0</microprofile.version>
        <mockito-core.version>3.1.0</mockito-core.version>
        <junit-jupiter.version>5.5.0</junit-jupiter.version>
        <microshed-testing.version>0.6.1.1</microshed-testing.version>
        <testcontainers.version>1.12.4</testcontainers.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>${jakarta.jakartaee-api.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>${microprofile.version}</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.microshed</groupId>
            <artifactId>microshed-testing-liberty</artifactId>
            <version>${microshed-testing.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>${junit-jupiter.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.29</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>postgresql</artifactId>
            <version>${testcontainers.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>mockserver</artifactId>
            <version>${testcontainers.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mock-server</groupId>
            <artifactId>mockserver-client-java</artifactId>
            <version>5.5.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>review-microshed-testing</finalName>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
            </plugin>
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.1</version>
            </plugin>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
            </plugin>
            <!-- Plugin to run integration tests -->
            <plugin>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>3.0.0-M3</version>
                <executions>
                    <execution>
                        <id>integration-test</id>
                        <goals>
                            <goal>integration-test</goal>
                        </goals>
                        <configuration>
                            <trimStackTrace>false</trimStackTrace>
                        </configuration>
                    </execution>
                    <execution>
                        <id>verify</id>
                        <goals>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

With MicroShed Testing, you can either provide your own Dockerfile or, in the case of Open Liberty, use the extension to have almost zero setup tasks. MicroShed Testing searches for a Dockerfile in either the root folder of your project or within src/main/docker.

For this application, I’m using the official Open Liberty extension from MicroShed Testing and therefore just have to configure the Open Liberty server within src/main/liberty. This folder contains a custom server.xml and the JDBC driver:

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>cdi-2.0</feature>
        <feature>jpa-2.2</feature>
        <feature>jaxrs-2.1</feature>
        <feature>mpConfig-1.3</feature>
        <feature>mpHealth-2.0</feature>
        <feature>mpRestClient-1.3</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
    
    <dataSource id="DefaultDataSource">
        <jdbcDriver libraryRef="postgresql-library"/>
        <properties.postgresql serverName="${POSTGRES_HOSTNAME}"
                               portNumber="${POSTGRES_PORT}"
                               databaseName="users"
                               user="${POSTGRES_USERNAME}"
                               password="${POSTGRES_PASSWORD}"/>
    </dataSource>

    <library id="postgresql-library">
        <fileset dir="${server.config.dir}/postgres"/>
    </library>
</server>

Jakarta EE application walkthrough

As I don’t want to provide a simple Hello World application for this example, I’m using an application with JPA & PostgreSQL and an external REST API as a dependency. This should mirror 80% of the basic applications out there.

The application contains two JAX-RS resources: SampleResource and PersonResource.

First, you can retrieve a MicroProfile Config property from the SampleResource and a quote of the day. This quote of the day is fetched from a public REST API using MicroProfile RestClient:

@RegisterRestClient(baseUri = "https://quotes.rest")
public interface QuoteRestClient {

    @GET
    @Path("/qod")
    @Consumes(MediaType.APPLICATION_JSON)
    JsonObject getQuoteOfTheDay();
}

Using this public REST API, I’ll later demonstrate how to mock this call in your Jakarta EE integration test. The full resource looks like this:

@Path("sample")
@Produces(MediaType.TEXT_PLAIN)
public class SampleResource {

    @Inject
    @ConfigProperty(name = "message")
    private String message;

    @Inject
    @RestClient
    private QuoteRestClient quoteRestClient;

    @GET
    @Path("/message")
    public Response getMessage() {
        return Response.ok(message).build();
    }

    @GET
    @Path("/quotes")
    public Response getQuotes() {
        var quoteOfTheDayPointer = Json.createPointer("/contents/quotes/0/quote");
        var quoteOfTheDay = quoteOfTheDayPointer.getValue(quoteRestClient.getQuoteOfTheDay()).toString();
        return Response.ok(quoteOfTheDay).build();
    }
}

Second, the PersonResource allows clients to create and read person resources. We’ll store the persons in the PostgreSQL database using JPA:

@Path("/persons")
@ApplicationScoped
@Transactional(TxType.REQUIRED)
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {

    @PersistenceContext
    private EntityManager entityManager;

    @GET
    public List<Person> getAllPersons() {
        return entityManager.createQuery("SELECT p FROM Person p", Person.class).getResultList();
    }

    @GET
    @Path("/{id}")
    public Person getPersonById(@PathParam("id") Long id) {
        var personById = entityManager.find(Person.class, id);

        if (personById == null) {
            throw new NotFoundException();
        }

        return personById;
    }

    @POST
    public Response createNewPerson(@Context UriInfo uriInfo, @RequestBody Person personToStore) {
        entityManager.persist(personToStore);

        var headerLocation = uriInfo.getAbsolutePathBuilder()
                .path(personToStore.getId().toString())
                .build();

        return Response.created(headerLocation).build();
    }
}
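
The Person entity itself is not shown here; a minimal JPA sketch matching the fields used above could look like this:

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String firstName;
    private String lastName;

    public Long getId() {
        return id;
    }

    // getters & setters for firstName and lastName
}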

Integration test setup with MicroShed Testing

Next, we can set up the integration tests for our Jakarta EE (or Java EE or standalone MicroProfile) application. With MicroShed Testing, we can create a SharedContainerConfiguration to share the Docker containers between integration tests. As our system requires a running application, a PostgreSQL database, and a remote system, I’m creating three containers:

public class SampleApplicationConfig implements SharedContainerConfiguration {

    @Container
    public static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>()
            .withNetworkAliases("mypostgres")
            .withExposedPorts(5432)
            .withUsername("duke")
            .withPassword("duke42")
            .withDatabaseName("users");

    @Container
    public static MockServerContainer mockServer = new MockServerContainer()
            .withNetworkAliases("mockserver");

    @Container
    public static ApplicationContainer app = new ApplicationContainer()
            .withEnv("POSTGRES_HOSTNAME", "mypostgres")
            .withEnv("POSTGRES_PORT", "5432")
            .withEnv("POSTGRES_USERNAME", "duke")
            .withEnv("POSTGRES_PASSWORD", "duke42")
            .withEnv("message", "Hello World from MicroShed Testing")
            .withAppContextRoot("/")
            .withReadinessPath("/health/ready")
            .withMpRestClient(QuoteRestClient.class, "http://mockserver:" + MockServerContainer.PORT);

}

The PostgreSQLContainer and MockServerContainer are part of the Testcontainers dependencies. ApplicationContainer is the first MicroShed Testing class we make use of. You can use this wrapper for plain Jakarta EE or Java EE applications.

We can set up the ApplicationContainer with any environment variables we want and provide MicroProfile Config properties this way. Next, you can define the readiness path of your application. For this, we can make use of MicroProfile Health and specify the readiness path /health/ready.
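
The readiness check itself is not part of this post; a minimal MicroProfile Health sketch that answers on /health/ready could look like this:

@Readiness
@ApplicationScoped
public class ReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // report the application as ready; a real check might verify
        // the database connection or other required resources instead
        return HealthCheckResponse.named("application-ready").up().build();
    }
}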

Furthermore, MicroShed Testing is capable of overriding the base URL for our MicroProfile RestClient. This allows us to use the MockServer for our integration tests.

Using the network aliases, our application is able to communicate with the MockServer and the PostgreSQL database within the Docker network.

Writing integration tests for Jakarta EE applications

Given this setup, we can finally start writing integration tests for our Jakarta EE application. MicroShed Testing can be enabled for a JUnit 5 test using @MicroShedTest. With @SharedContainerConfig you can reference the common system setup.

To test the SampleResource class, I’m expecting to get the MicroProfile Config property configured in the SharedContainerConfiguration. Furthermore, the quotes endpoint should return the quote of the day properly:

@MicroShedTest
@SharedContainerConfig(SampleApplicationConfig.class)
public class SampleResourceIT {
    
    @RESTClient
    public static SampleResource sampleEndpoint;
    
    @Test
    public void shouldReturnSampleMessage() {
        assertEquals("Hello World from MicroShed Testing",  
                sampleEndpoint.getMessage());
    }
    
    @Test
    public void shouldReturnQuoteOfTheDay() {

        var resultQuote = Json.createObjectBuilder()
                .add("contents",
                        Json.createObjectBuilder().add("quotes",
                                Json.createArrayBuilder().add(Json.createObjectBuilder()
                                        .add("quote", "Do not worry if you have built your castles in the air. " +
                                                "They are where they should be. Now put the foundations under them."))))
                .build();

        new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort())
                .when(request("/qod"))
                .respond(response().withBody(resultQuote.toString(), com.google.common.net.MediaType.JSON_UTF_8));

        var result = sampleEndpoint.getQuotes();

        System.out.println("Quote of the day: " + result);

        assertNotNull(result);
        assertFalse(result.isEmpty());
    }
}

Injecting the JAX-RS resource with @RESTClient within the integration test allows us to make the HTTP call like using a JAX-RS client.

The integration test for PersonResource is more advanced. I’m testing the flow of creating a new person and then querying for it:

@MicroShedTest
@SharedContainerConfig(SampleApplicationConfig.class)
public class PersonResourceIT {
    
    @RESTClient
    public static PersonResource personsEndpoint;
    
    @Test
    public void shouldCreatePerson() {
        Person duke = new Person();
        duke.setFirstName("duke");
        duke.setLastName("jakarta");
        
        Response result = personsEndpoint.createNewPerson(null, duke);

        assertEquals(Response.Status.CREATED.getStatusCode(), result.getStatus());
        var createdUrl = result.getHeaderString("Location");
        assertNotNull(createdUrl);

        var id = Long.valueOf(createdUrl.substring(createdUrl.lastIndexOf('/') + 1));
        assertTrue(id > 0, "Generated ID should be greater than 0 but was: " + id);
        
        var newPerson = personsEndpoint.getPersonById(id);
        assertNotNull(newPerson);
        assertEquals("duke", newPerson.getFirstName());
        assertEquals("jakarta", newPerson.getLastName());
    }
}

Summary of MicroShed Testing for Jakarta EE integration tests

Even though the project is in its early stages, it provides excellent support for writing Jakarta EE integration tests using Testcontainers. There are also dedicated dependencies available for Open Liberty and Payara, which make it even simpler to use. You should give it a try and provide feedback to further evolve it.

There are also good examples available for different application setups.

For more information, follow this guide on Open Liberty. Furthermore, visit the GitHub repository or the official project homepage. You can also find this application on GitHub.

Have fun writing Jakarta EE integration tests with MicroShed Testing,

Phil

The post Jakarta EE integration tests with MicroShed Testing appeared first on rieckpil.


by rieckpil at November 22, 2019 06:57 AM

Hibernate Tip: How to control cache invalidation for native queries

by Thorben Janssen at November 21, 2019 01:00 PM

The post Hibernate Tip: How to control cache invalidation for native queries appeared first on Thoughts on Java.

Hibernate Tips is a series of posts in which I describe a quick and easy solution for common Hibernate questions. If you have a question for a future Hibernate Tip, please post a comment below.

Question:

“I was told that native queries remove all entities from my 2nd level cache. But you’re still recommending them. Don’t they negatively affect the performance?”

Don’t want to read? You can watch it here!

Solution:

Yes, some native queries invalidate the 2nd level cache. But no, if you do it correctly, it doesn’t have any negative performance impacts, and it doesn’t change my recommendation to use them.

To answer this question in more detail, we first need to discuss which kinds of native queries invalidate the 2nd level cache before we talk about finetuning this process.

Which native queries invalidate the cache?

Native SQL SELECT statements don’t affect the 2nd level cache, and you don’t need to worry about any negative performance impacts. But Hibernate invalidates the 2nd level cache if you execute an SQL UPDATE or DELETE statement as a native query. This is necessary because the SQL statement changed data in the database, and by that, it might have invalidated entities in the cache. By default, Hibernate doesn’t know which records were affected. Due to this, Hibernate can only invalidate the entire 2nd level cache.

Let’s take a look at an example.

Before I execute the following test, the Author entity with id 1 is already in the 2nd level cache. Then I run an SQL UPDATE statement as a native query in one transaction. In the following transaction, I check if the Author entity is still in the cache.
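
The Author entity is assumed to be cacheable already; a minimal sketch of such a mapping could look like this (the configuration of the cache provider itself is omitted):

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    private String firstName;
    private String lastName;

    @Version
    private int version;

    // getters & setters
}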

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

log.info("Before native update");
log.info("Author 1 in Cache? " + em.getEntityManagerFactory().getCache().contains(Author.class, 1L));

Query q = em.createNativeQuery("UPDATE Book SET title = title || ' - changed'");
q.executeUpdate();

em.getTransaction().commit();
em.close();



em = emf.createEntityManager();
em.getTransaction().begin();

log.info("After native update");
log.info("Author 1 in Cache? " + em.getEntityManagerFactory().getCache().contains(Author.class, 1L));

a = em.find(Author.class, 1L);
log.info(a);

em.getTransaction().commit();
em.close();

If you don’t provide additional information, Hibernate invalidates the 2nd level cache and removes all entities from it. You can see that in the log messages written by the 2nd transaction. The Author entity with id 1 is no longer in the cache, and Hibernate has to use a query to get it from the database.

06:32:02,752 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Before native update
06:32:02,752 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author 1 in Cache? true
06:32:02,779 DEBUG [org.hibernate.SQL] - UPDATE Book SET title = title || ' - changed'
06:32:02,782 INFO  [org.hibernate.engine.internal.StatisticalLoggingSessionEventListener] - Session Metrics {
    14800 nanoseconds spent acquiring 1 JDBC connections;
    22300 nanoseconds spent releasing 1 JDBC connections;
    201400 nanoseconds spent preparing 1 JDBC statements;
    1356000 nanoseconds spent executing 1 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    0 nanoseconds spent executing 0 flushes (flushing a total of 0 entities and 0 collections);
    17500 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}
06:32:02,782 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - After native update
06:32:02,782 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author 1 in Cache? false
06:32:02,783 DEBUG [org.hibernate.SQL] - select author0_.id as id1_0_0_, author0_.firstName as firstNam2_0_0_, author0_.lastName as lastName3_0_0_, author0_.version as version4_0_0_ from Author author0_ where author0_.id=?
06:32:02,784 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author firstName: Joshua, lastName: Bloch
06:32:02,785 INFO  [org.hibernate.engine.internal.StatisticalLoggingSessionEventListener] - Session Metrics {
    11900 nanoseconds spent acquiring 1 JDBC connections;
    15300 nanoseconds spent releasing 1 JDBC connections;
    18500 nanoseconds spent preparing 1 JDBC statements;
    936400 nanoseconds spent executing 1 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    256700 nanoseconds spent performing 1 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    114600 nanoseconds spent performing 1 L2C misses;
    107100 nanoseconds spent executing 1 flushes (flushing a total of 1 entities and 0 collections);
    0 nanoseconds spent executing 0 partial-flushes (flushing a total of 0 entities and 0 collections)
}

Only invalidate affected regions

But that doesn’t have to be the case. You can tell Hibernate which entity classes are affected by the query. You just need to unwrap the Query object to get the Hibernate-specific NativeQuery and call its addSynchronizedEntityClass method with a class reference.

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

log.info("Before native update");
log.info("Author 1 in Cache? " + em.getEntityManagerFactory().getCache().contains(Author.class, 1L));

Query q = em.createNativeQuery("UPDATE Book SET title = title || ' - changed'");
q.unwrap(NativeQuery.class).addSynchronizedEntityClass(Book.class);
q.executeUpdate();

em.getTransaction().commit();
em.close();



em = emf.createEntityManager();
em.getTransaction().begin();

log.info("After native update");
log.info("Author 1 in Cache? " + em.getEntityManagerFactory().getCache().contains(Author.class, 1L));

a = em.find(Author.class, 1L);
log.info(a);

em.getTransaction().commit();
em.close();

My SQL UPDATE statement changes records in the Book table, which gets mapped by the Book entity. After providing this information to Hibernate, it only invalidates the region of the Book entity, and the Author entities stay in the 2nd level cache.

06:30:51,985 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Before native update
06:30:51,985 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author 1 in Cache? true
06:30:52,011 DEBUG [org.hibernate.SQL] - UPDATE Book SET title = title || ' - changed'
06:30:52,014 INFO  [org.hibernate.engine.internal.StatisticalLoggingSessionEventListener] - Session Metrics {
    18400 nanoseconds spent acquiring 1 JDBC connections;
    19900 nanoseconds spent releasing 1 JDBC connections;
    86000 nanoseconds spent preparing 1 JDBC statements;
    1825400 nanoseconds spent executing 1 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    0 nanoseconds spent executing 0 flushes (flushing a total of 0 entities and 0 collections);
    19400 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}
06:30:52,015 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - After native update
06:30:52,015 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author 1 in Cache? true
06:30:52,015 INFO  [org.thoughts.on.java.model.Test2ndLevelCache] - Author firstName: Joshua, lastName: Bloch
06:30:52,016 INFO  [org.hibernate.engine.internal.StatisticalLoggingSessionEventListener] - Session Metrics {
    10000 nanoseconds spent acquiring 1 JDBC connections;
    25700 nanoseconds spent releasing 1 JDBC connections;
    0 nanoseconds spent preparing 0 JDBC statements;
    0 nanoseconds spent executing 0 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    86900 nanoseconds spent performing 1 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    104700 nanoseconds spent executing 1 flushes (flushing a total of 1 entities and 0 collections);
    0 nanoseconds spent executing 0 partial-flushes (flushing a total of 0 entities and 0 collections)
}

Get this Hibernate Tip as a printable PDF!

Join the free Thoughts on Java Library to get access to lots of member-only content, like a printable PDF for this post, lots of cheat sheets and 2 ebooks about Hibernate.


Learn more:

If you want to learn more about native queries or Hibernate’s caches, you will find the following articles interesting:

 

Hibernate Tips Book


Get more recipes like this one in my new book Hibernate Tips: More than 70 solutions to common Hibernate problems.

It gives you more than 70 ready-to-use recipes for topics like basic and advanced mappings, logging, Java 8 support, caching and statically and dynamically defined queries.

Get it now as a paperback, ebook or PDF.

The post Hibernate Tip: How to control cache invalidation for native queries appeared first on Thoughts on Java.


by Thorben Janssen at November 21, 2019 01:00 PM

Coming Soon: Payara Platform Monitoring Console

by Jan Bernitt at November 21, 2019 12:09 PM

We are happy to announce that from the Payara Platform 5.194 release onwards Payara Server ships with a built-in monitoring console that allows a visual peek under the hood of the server.


by Jan Bernitt at November 21, 2019 12:09 PM

Modernizing our GitHub Sync Toolset

November 19, 2019 08:10 PM

I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.

We are not expecting any disruption of service but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this 1 hour maintenance window.

This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories, and on top of that, this new release will start synchronizing contributors.

In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.

Eclipse Committers are responsible for maintaining a list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).

To become an Eclipse contributor on GitHub for a project, please make sure to tell us your GitHub username in your Eclipse account.


November 19, 2019 08:10 PM

Jakarta EE Community Update November 2019

by Tanja Obradovic at November 19, 2019 06:52 PM

Now that Jakarta EE 8 has been available for a couple of months, I want to share news about some of the great committee work that’s been happening. I also want to tell you about our latest Jakarta EE-compatible product, and make sure you have links to the recordings of our Jakarta EE community calls and presentations.

 Due to the timing of this update, I’ve included news about activities in the first half of November as well as October.

 

Another Jakarta EE-Compatible Product

I’m very pleased to tell you that Payara Server is now fully certified as a Jakarta EE 8-compatible implementation. If you’re not familiar with Payara Server, take a few minutes to learn more about this innovative, cloud native application server.

 The Payara team told us they found the compatibility process smooth and easy. To learn more about the benefits of being certified as a Jakarta EE-compatible product and the process to get listed, click here.

 

Jakarta EE 8 Feedback Will Drive Improvements

The Jakarta EE Steering Committee started a community retrospective on the Jakarta EE 8 release, sharing this document to drive the process.

You’ll also see retrospectives from each of the other Jakarta EE Working Group committees as they look to gather community input on improvements for the next release. Once all of the input is collected, we’ll summarize and publish the findings.

 

Jakarta EE 9 Delivery Plan to Be Ready December 9

Jakarta EE 9 planning is underway, and the Steering Committee has published a resolution requesting Jakarta EE Platform Project leaders to deliver a Jakarta EE 9 Delivery Plan, including a release date, to the Steering Committee no later than December 9, 2019.

 According to the resolution, the Jakarta EE 9 Delivery Plan should:

  • Implement the “big bang”
  • Include an explicit means to identify specifications that are unnecessary or unwanted, and enable them to be deprecated or removed
  • Move all remaining specification APIs to the Jakarta namespace
  • Add no new specifications, apart from those pruned from Java SE 8 where appropriate, unless those specifications will not impact the target delivery date

The resolution is now with the Jakarta EE Platform Project team, which is actively looking into the Steering Committee requests. The Platform Project team will put a higher priority on meeting the Steering Committee resolution requests as soon as possible rather than adding more functionality to the release.

 You can read the minutes of the Jakarta EE Platform Project team meetings here.

 

New Chair for the Jakarta EE Specification Committee

We welcome Paul Buck as the non-voting Chair of the Jakarta EE Specification Committee. Paul is Vice President of Community Development at the Eclipse Foundation, and was unanimously elected to his new role.

 

Jakarta EE 8 Restructuring Continues

In the push to complete Jakarta EE 8, a number of planned improvements were deferred. Here’s a brief summary of the improvements the Jakarta EE Specification Committee is currently discussing:

  • Updating project ids and technical namespaces (an illustrative import sketch follows this list). For example:
    • ee4j.jms becomes ee4j.messaging
    • https://github.com/eclipse-ee4j/jms-api becomes https://github.com/eclipse-ee4j/messaging-api
    • javax.jms becomes jakarta.messaging
  • Updating project names. For example:
    • Jakarta Server Faces becomes Jakarta Faces
  • Whether top-level projects should be changed to include both specifications and compatible implementations as subpages
  • How to address the decision to rename TCK files from “eclipse-” to “jakarta-”
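
For application code, the proposed namespace updates would mainly surface as changed package prefixes in imports. Below is a minimal before-and-after sketch, using the jakarta.messaging name from the example above purely for illustration; the final package name is still up to the Platform Project and may differ.

// Before (Java EE 8 / Jakarta EE 8): the javax namespace
// import javax.jms.ConnectionFactory;
// import javax.jms.JMSContext;

// After the proposed move to the Jakarta namespace
// (package name taken from the example above; the final name may differ)
// import jakarta.messaging.ConnectionFactory;
// import jakarta.messaging.JMSContext;

public class MessagingClientSketch {
    // The API surface is expected to stay the same; only the package prefix changes,
    // so migrating is typically a mechanical update of import statements.
}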

 

Time to Jakartify More Specifications

When Jakarta EE 8 was released, we provided specifications for the Jakarta EE Full Profile and Jakarta EE Web Profile. Now that we’ve acquired the copyright for additional specifications, it’s time for the community to Jakartify them so they can be contributed to Jakarta EE.

 

To help you get started:

And, here’s the list of specifications that are ready for the community to "Jakartify":

  • Jakarta Annotations
  • Jakarta Enterprise Beans
  • Jakarta Expression Language
  • Jakarta Security
  • Jakarta Server Faces
  • Jakarta Interceptors
  • Jakarta Authorization
  • Jakarta Activation
  • Jakarta Managed Beans
  • Jakarta Deployment
  • Jakarta XML RPC
  • Jakarta Authentication
  • Jakarta Mail
  • Jakarta XML Binding
  • Jakarta RESTful Web Services
  • Jakarta Web Services Metadata
  • Jakarta XML Web Services
  • Jakarta Connectors
  • Jakarta Persistence
  • Jakarta JSON Binding
  • Jakarta JSON Processing
  • Jakarta Debugging Support for Other Languages
  • Jakarta Server Pages
  • Jakarta Transactions
  • Jakarta WebSocket

 

 

On a related note, the Specification Committee is also working to further define compatibility testing requirements for the full platform and web profile specifications for subsequent releases of Jakarta EE-compatible products.

 

Jakarta EE Marketing Plan Nearly Finalized

We expect the Jakarta EE 2020 marketing plan and budget to be approved by the end of November.

The Marketing Committee is also looking to choose a Committee Chair very soon. In the meantime, the Eclipse Foundation will be actively participating in KubeCon, NA. If you’re there, be sure to drop by booth S5 to talk to our technical experts and check out the demos on the cloud native Java projects.

 

Join Community Update Calls

Every month, the Jakarta EE community holds a Community Call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

 We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

 

A Look Back at October Events

October was another busy month of Jakarta EE and cloud native Java events. Besides EclipseCon Europe 2019, we were present at the Trondheim Developer Conference in Norway, Open Source Summit EU in France, SpringOne Platform, Think London in the UK, and Joker<?> in Russia.

In addition to the many reports and blogs you may find on our participation, I would like to point out the ECE 2019 Community Day collaboration between the IoT and Cloud Native Java teams. Even though it was just a starting point for both teams to work on a solution together, it was great seeing two Eclipse Foundation communities working together! I am looking forward to more of these collaborations in the future. Please look for a blog post from Jens Reimann (@ctron) of Red Hat on this topic.

 

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

•      Social media: Twitter, Facebook, LinkedIn Group

•      Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

•      Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

•      Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

 

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

 

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.


by Tanja Obradovic at November 19, 2019 06:52 PM
