
MicroProfile 3.0 Support Comes to Helidon

by dmitrykornilov at September 14, 2019 04:37 PM

We are proud to announce a new version of Helidon 1.3. The main feature of this release is MicroProfile 3.0 support, but it also includes additional features, bug fixes and performance improvements. Let’s take a closer look.

MicroProfile 3.0

About a month ago we released Helidon 1.2 with MicroProfile 2.2 support. Since then we have taken another step forward and added MicroProfile 3.0 support.

For those who don’t know, MicroProfile is a set of cloud-native Java APIs. It’s supported by most modern Java vendors, such as Oracle, IBM, Red Hat, Payara and Tomitribe, which makes it a de facto standard in this area. One of the goals of the Helidon project is to support the latest versions of MicroProfile. The Helidon MicroProfile implementation is called Helidon MP; along with the reactive, non-blocking framework called Helidon SE, it forms the core of Helidon.

MicroProfile 3.0 is a major release. It contains the updated Metrics 2.0 with some backwards-incompatible changes, HealthCheck 2.0, and Rest Client 1.3 with minor updates.

Although MicroProfile 3.0 is not backwards compatible with MicroProfile 2.2, we didn’t want to introduce that incompatibility into Helidon. Helidon 1.3 supports both MicroProfile 2.2 and MicroProfile 3.0. Helidon MP applications select the MicroProfile version by depending on one (and only one) of the following bundles.

For compatibility with MicroProfile 2.2:

<dependency>
    <groupId>io.helidon.microprofile.bundles</groupId>    
    <artifactId>helidon-microprofile-2.2</artifactId>
</dependency>

For compatibility with MicroProfile 3.0:

<dependency>
    <groupId>io.helidon.microprofile.bundles</groupId>
    <artifactId>helidon-microprofile-3.0</artifactId>
</dependency>

Backward compatibility with MicroProfile 2.2 implies that every existing Helidon application that depends on helidon-microprofile-2.2 will continue to run without any changes. New applications created from the latest archetypes in Helidon 1.3 will depend on helidon-microprofile-3.0.

Metrics 2.0 Support

As mentioned above, MicroProfile Metrics 2.0 introduces a number of new features as well as some backward incompatible changes. The following is a summary of the changes:

  • Existing counters have been limited to always be monotonic
  • A new metric called a concurrent gauge is now supported
  • Tags are now part of MetricID instead of Metadata
  • Metadata is now immutable
  • Minor changes to JSON format
  • Prometheus format is now OpenMetrics format (with a few small updates)

The reader is referred to https://github.com/eclipse/microprofile-metrics/releases/tag/2.0.1 for more information.

Note: There have been some disruptive signature changes in the MetricRegistry class. Several getter methods now return maps whose keys are of type MetricID instead of String. Applications upgrading to MicroProfile Metrics 2.0 should review these uses to ensure the correct type is passed and thus prevent metric lookup failures.
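
To see what this change looks like in application code, here is a plain-Java sketch with no MicroProfile dependency; MetricIdLike is a made-up stand-in for the real MetricID type (a metric name plus tags, compared by value):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MetricLookup {

    // Hypothetical stand-in for org.eclipse.microprofile.metrics.MetricID:
    // a name plus tags, compared by value.
    static final class MetricIdLike {
        final String name;
        final Map<String, String> tags;

        MetricIdLike(String name, Map<String, String> tags) {
            this.name = name;
            this.tags = tags;
        }

        @Override public boolean equals(Object o) {
            return o instanceof MetricIdLike
                && name.equals(((MetricIdLike) o).name)
                && tags.equals(((MetricIdLike) o).tags);
        }

        @Override public int hashCode() { return Objects.hash(name, tags); }
    }

    public static void main(String[] args) {
        // Before (Metrics 1.x): registry maps were keyed by plain String names.
        Map<String, Long> oldStyle = new HashMap<>();
        oldStyle.put("requestCount", 42L);

        // After (Metrics 2.0): keys carry the tags, so the same name can
        // appear once per tag combination.
        Map<MetricIdLike, Long> newStyle = new HashMap<>();
        Map<String, String> tags = new HashMap<>();
        tags.put("endpoint", "/users");
        newStyle.put(new MetricIdLike("requestCount", tags), 42L);

        // Looking up by a bare String no longer matches; the full id does.
        System.out.println(newStyle.get(new MetricIdLike("requestCount", tags))); // 42
    }
}
```

The point is that a lookup by bare name silently returns null once the registry is keyed by id, which is exactly the class of bug the note above warns about.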

Helidon SE applications that use metrics can also take advantage of the new features in Helidon 1.3. For example, MicroProfile Metrics 2.0 introduced the notion of concurrent gauges which are now also available in Helidon SE. To use any of these new features, Helidon SE applications can depend on:

<dependency>
    <groupId>io.helidon.metrics</groupId>
    <artifactId>helidon-metrics2</artifactId>
</dependency>

Existing Helidon SE applications can continue to build using the older helidon-metrics dependency.

HealthCheck 2.0 Support

HealthCheck 2.0 contains some breaking changes. The message body of the health check response was modified: outcome and state were replaced by status. Also, readiness (/health/ready) and liveness (/health/live) endpoints were introduced for smoother integration with Kubernetes.
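
To make the payload change concrete, here is a plain-Java sketch (no MicroProfile dependency; HealthPayload and response are made-up names for illustration) that builds a 2.0-style message body using the new status field:

```java
// Illustrative only: MicroProfile Health 2.0 reports "status" (UP/DOWN)
// where 1.0 used "outcome" and "state".
public class HealthPayload {

    static String response(String checkName, boolean up) {
        String status = up ? "UP" : "DOWN";
        // Overall status plus one check entry, as in the 2.0 message body
        return String.format(
            "{\"status\":\"%s\",\"checks\":[{\"name\":\"%s\",\"status\":\"%s\"}]}",
            status, checkName, status);
    }

    public static void main(String[] args) {
        System.out.println(response("database", true));
    }
}
```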

The original /health endpoint has not been removed, so your old applications will still work without any changes.

The new specification introduces two new annotations: @Liveness and @Readiness. In Helidon SE we introduced two corresponding methods, addLiveness and addReadiness, and deprecated the original add method.

JPA and JTA are Production Ready

In earlier versions of Helidon we introduced an early access version of JPA and JTA integration. We received feedback from our users, fixed some issues and improved performance. In version 1.3 we are moving JPA and JTA support from Early Access to Production Ready.

We also created a guide helping users to get familiar with this feature.

Hibernate Support

With 1.3.0 you can now use Hibernate as the JPA provider, or you can continue using EclipseLink. It’s up to you. The difference is one <dependency> element in your pom.xml.

For EclipseLink:

<dependency>
    <groupId>io.helidon.integrations.cdi</groupId>
    <artifactId>helidon-integrations-cdi-eclipselink</artifactId>
    <scope>runtime</scope>
</dependency>

For Hibernate:

<dependency>
    <groupId>io.helidon.integrations.cdi</groupId>
    <artifactId>helidon-integrations-cdi-hibernate</artifactId>
    <scope>runtime</scope>
</dependency>

As with our EclipseLink support, Helidon’s Hibernate JPA integration features full Java EE-mode compatibility, including support for EJB-free extended persistence contexts, JTA transactions and bean validation. It works just like the application servers you may be used to, but inside Helidon’s lightweight MicroProfile environment.

GraalVM Improvements

Supporting GraalVM is one of our goals, and with each release we continuously improve GraalVM support in Helidon SE. This version brings support for GraalVM 19.2.0. You can now also use the Jersey client in a Helidon SE application and build a native image for it.

Example code:

private void outbound(ServerRequest request, ServerResponse response) {
    // reactive Jersey client call
    webTarget.request()
        .rx()
        .get(String.class)
        .thenAccept(response::send)
        .exceptionally(throwable -> {
            // process exception
            response.status(Http.Status.INTERNAL_SERVER_ERROR_500);
            response.send("Failed with: " + throwable);
            return null;
        });
}

We also added a guide explaining how to build a GraalVM native image from your Helidon SE application. Check it out.

New Guides

To simplify the Helidon adoption process, we added plenty of new guides explaining how to use various Helidon features.

Getting Started

Basics

Persistence

Build and Deploy

Tutorials

Other features

This release includes many bug fixes, performance improvements and minor updates. You can find more information about the changes in the release notes.

Helidon on OOW/CodeOne 2019

Next week (Sep 16, 2019) Oracle OpenWorld and CodeOne open their doors to all attendees, and Helidon is well covered there. There are Helidon-related talks from the Helidon team, where we will introduce new features such as the Helidon DB Client coming soon to Helidon, as well as talks from our users covering different Helidon use cases. Here is the full list:

  • Non-blocking Database Access in Helidon SE [DEV5365]
    Monday, September 16, 09:00 AM — 09:45 AM
  • Migrating a Single Monolithic Application to Microservices [DEV5112]
    Thursday, September 19, 12:15 PM — 01:00 PM
  • Hands on Lab: Building Microservices with Helidon
    Monday, September 16, 05:00 PM — 07:00 PM
  • Building Cloud Native Applications with Helidon [CON5124]
    Wednesday, September 18, 09:00 AM — 09:45 AM
  • Helidon Flies Faster on GraalVM [DEV5356]
    September 16, 01:30 PM — 02:15 PM
  • Helidon MicroProfile: Managing Persistence with JPA [DEV5376]
    Thursday, September 19, 09:00 AM — 09:45 AM

See you at CodeOne!



From PHP to Transactions--airhacks.fm Podcast

by admin at September 14, 2019 04:53 AM

Subscribe to the airhacks.fm podcast via: Spotify | iTunes | RSS

The #53 airhacks.fm episode with Ondrej Chaloupka (@_chalda), about the journey from PHP to the inner workings of local and distributed transactions (2PC / XA) on a Jakarta EE / Java EE server, is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.



JakartaONE: Live Coding with Jakarta EE and MicroProfile #slideless

by admin at September 13, 2019 06:08 AM

In this slideless JakartaONE conference session I used OpenLiberty 19.0.8 and Payara Full servers. OpenLiberty 19.0.6 passed the Jakarta EE 8 TCK (see results) and is therefore Jakarta EE 8 compatible.

...this is probably the very first live coding demo which uses a certified Jakarta EE 8 runtime:

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.



Java EE 8 to Jakarta EE 8 Migration

by admin at September 12, 2019 04:28 PM

To migrate a Java EE 8 project to Jakarta EE 8, replace the following dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

...with the Jakarta EE 8 API:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency> 

The resulting ThinWAR pom is:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.airhacks</groupId>
    <artifactId>jakarta</artifactId>
    <version>0.0.1</version>
    <packaging>war</packaging>
    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>         
    </dependencies>
    <build>
        <finalName>jakarta</finalName>
    </build>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
    </properties>
</project>    

...and can be conveniently built with wad.sh and deployed to all Java EE 8 and Jakarta EE 8 runtimes.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


#HOWTO: Bootstrap your first Jakarta EE 8 application

by rieckpil at September 11, 2019 07:05 AM

As Jakarta EE 8 was finally released on the 10th of September 2019, we can start using it. This is the first release of Jakarta EE and a big accomplishment, as everything is now hosted at the Eclipse Foundation. The Eclipse Foundation hosted an online conference (JakartaOne) on release day with a lot of interesting talks about the future of Jakarta EE. Stay tuned, as the talks will be published on YouTube soon! For now, there are no new features compared to Java EE 8, but the plan is to have new features in Jakarta EE 9. With this blog post, I’ll show you how to bootstrap your first Jakarta EE 8 application using Java 11 with either Maven or Gradle.

Use Maven to bootstrap your Jakarta EE application

To start with your first Maven Jakarta EE 8 project, your pom.xml now needs the following jakartaee-api dependency:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.rieckpil.blog</groupId>
    <artifactId>bootstrap-jakarta-ee-8-application</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <finalName>bootstrap-jakarta-ee-8-application</finalName>
    </build>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>
</project>

During the online conference, Adam Bien also announced a new Maven archetype for bootstrapping Jakarta EE 8 applications with a single command.

Use Gradle to bootstrap your Jakarta EE application

If you use Gradle to build your application, you can start with the following build.gradle:

apply plugin: 'war'

group = 'de.rieckpil.blog'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}
dependencies {
    providedCompile 'jakarta.platform:jakarta.jakartaee-api:8.0.0'
}

compileJava {
    targetCompatibility = '11'
    sourceCompatibility = '11'
}

war{
    archiveName 'bootstrap-jakarta-ee-8-application.war'
}

Sample Jakarta EE 8 application

As there are no new features in Jakarta EE 8 yet, you can make use of everything you know from Java EE 8. As a sample project, I’ll create a JAX-RS application which fetches data from an external service and exposes it:

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {
}

@Path("users")
@ApplicationScoped
public class UserResource {

    @Inject
    private UserProvider userProvider;

    @GET
    public Response getAllUsers() {
        return Response.ok(userProvider.getAllUsers()).build();
    }
}

public class UserProvider {

    private WebTarget webTarget;
    private Client client;

    @PostConstruct
    public void init() {
        this.client = ClientBuilder
                .newBuilder()
                .readTimeout(2, TimeUnit.SECONDS)
                .connectTimeout(2, TimeUnit.SECONDS)
                .build();

        this.webTarget = this.client.target("https://jsonplaceholder.typicode.com/users");
    }

    public JsonArray getAllUsers() {
        return this.webTarget
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .get()
                .readEntity(JsonArray.class);
    }

    @PreDestroy
    public void tearDown() {
        this.client.close();
    }
}

For now, everything is still in the javax.* namespace as there are no modifications, but once the specifications evolve, they will move to jakarta.*.

Deploy your application

At the time of writing, three application servers are already officially Jakarta EE 8 Full Platform compatible: GlassFish 5.1, Open Liberty 19.0.0.6 and WildFly 17.0.1.Final. You can get an overview of all compatible products here.

For a quick example, I’ll deploy the sample application to WildFly 17.0.1.Final running inside a Docker container:

FROM jboss/wildfly:17.0.1.Final

# Gradle
# COPY build/libs/bootstrap-jakarta-ee-8-application.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

# Maven
COPY target/bootstrap-jakarta-ee-8-application.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

To run the Jakarta EE 8 application on WildFly you don’t have to configure anything else; it just works 😉

So if you are already familiar with Java EE 8, it will be easy for you to adopt Jakarta EE 8. As we now have the Eclipse Foundation Specification Process (EFSP), we should see new features and platform releases far more often.

For further information about Jakarta EE, have a look at the official website.

The code for this example is available on GitHub.

Have fun using Jakarta EE,

Phil

The post #HOWTO: Bootstrap your first Jakarta EE 8 application appeared first on rieckpil.



#WHATIS?: Eclipse MicroProfile Fault Tolerance

by rieckpil at September 11, 2019 04:43 AM

With the current trend toward building distributed systems, it is increasingly important to build fault-tolerant services. Fault tolerance is about using different strategies to handle failures in a distributed system. Moreover, services should be resilient and able to keep operating when a failure occurs in an external service, rather than cascading the failure and bringing the whole system down. There is a set of common patterns to achieve fault tolerance within your system. These patterns are all available in the MicroProfile Fault Tolerance specification.

Learn more about the MicroProfile Fault Tolerance specification, its annotations, and how to use it in this blog post. This post covers all available interceptor bindings as defined in the specification:

  • Fallback
  • Timeout
  • Retry
  • CircuitBreaker
  • Asynchronous
  • Bulkhead

Specification profile: MicroProfile Fault Tolerance

  • Current version: 2.0 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a set of strategies to build resilient and fault-tolerant services

Provide a fallback method

First, let’s cover the @Fallback interceptor binding of the MicroProfile Fault Tolerance specification. With this annotation, you can provide fallback behavior for your method in case of an exception. Assume your service fetches data from other microservices and the call might fail due to network issues or downtime of the target. If your service can recover from the failure and you can provide meaningful fallback behavior for your domain, the @Fallback annotation saves you.

A good example might be the checkout process of your webshop where you rely on a third-party service for handling e.g. credit card payments. If this service fails, you might fall back to a default payment provider and recover gracefully from the failure.

For a simple example, I’ll demonstrate it with a JAX-RS client request to a placeholder REST API and provide a fallback method:

@Fallback(fallbackMethod = "getDefaultPost")
public JsonObject getPostById(Long id) {
    return this.webTarget
        .path(String.valueOf(id))
        .request()
        .accept(MediaType.APPLICATION_JSON)
        .get(JsonObject.class);
}

public JsonObject getDefaultPost(Long id) {
    return Json.createObjectBuilder()
        .add("comment", "Lorem ipsum")
        .add("postId", id)
        .build();
}

With the @Fallback annotation you can specify the method name of the fallback method which must share the same response type and method arguments as the annotated method.

In addition, you can also specify a dedicated class to handle the fallback. This class is required to implement the FallbackHandler<T> interface where T is the response type of the targeted method:

@Fallback(PlaceHolderApiFallback.class)
public JsonObject getPostById(Long id) {
    return this.webTarget
        .path(String.valueOf(id))
        .request()
        .accept(MediaType.APPLICATION_JSON)
        .get(JsonObject.class);
}
public class PlaceHolderApiFallback implements FallbackHandler<JsonObject> {

    @Override
    public JsonObject handle(ExecutionContext context) {
        return Json.createObjectBuilder()
                .add("comment", "Lorem ipsum")
                .add("postId", Long.valueOf(context.getParameters()[0].toString()))
                .build();
    }
}

As you’ll see in the upcoming chapters, the @Fallback annotation can be used in combination with other MicroProfile Fault Tolerance interceptor bindings.

Add timeouts to limit the duration of a method execution

For some operations in your system, you might have a strict response time target. If you make use of the JAX-RS client or the client of MicroProfile Rest Client you can specify read and connect timeouts to avoid long-running requests. But what about use cases where you can’t declare timeouts easily? The MicroProfile Fault Tolerance specification defines the @Timeout annotation for such problems.

With this interceptor binding, you can specify the maximum duration of a method. If the computation time within the method exceeds the limit, a TimeoutException is thrown.

@Timeout(4000)
@Fallback(fallbackMethod = "getFallbackData")
public String getDataFromLongRunningTask() throws InterruptedException {
    Thread.sleep(4500);
    return "duke";
}

The default unit is milliseconds, but you can configure a different ChronoUnit:

@Timeout(value = 4, unit = ChronoUnit.SECONDS)
@Fallback(fallbackMethod = "getFallbackData")
public String getDataFromLongRunningTask() throws InterruptedException {
    Thread.sleep(4500);
    return "duke";
}

Define retry policies for method calls

A valid fallback behavior for an external system call might simply be to retry it. Retrying the request immediately, however, might not always be the best solution: you may want to add a delay before the next retry and perhaps some randomness. We can configure all of this with the @Retry annotation:

@Retry(maxDuration = 5000, maxRetries = 3, delay = 500, jitter = 200)
@Fallback(fallbackMethod = "getFallbackData")
public String accessFlakyService() {

    System.out.println("Trying to access flaky service at " + LocalTime.now());

    if (ThreadLocalRandom.current().nextLong(1000) < 50) {
        return "flaky duke";
    } else {
        throw new RuntimeException("Flaky service not accessible");
    }
}

In this example, we would try to execute the method three times with a delay of 500 milliseconds and 200 milliseconds of randomness (called jitter). The effective delay lies in the range [delay - jitter, delay + jitter] (in our example 300 to 700 milliseconds).

Furthermore, endless retrying might also be counter-productive. That’s why we can specify maxDuration, which is quite similar to the @Timeout annotation above. If the whole retry sequence takes more than 5 seconds, it fails with a TimeoutException.
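
The effective-delay formula above can be sketched in plain Java (RetryDelay and effectiveDelay are made-up names, purely for illustration, not part of the specification):

```java
import java.util.concurrent.ThreadLocalRandom;

public class RetryDelay {

    // Picks a delay uniformly from [delay - jitter, delay + jitter].
    static long effectiveDelay(long delayMs, long jitterMs) {
        return ThreadLocalRandom.current()
                .nextLong(delayMs - jitterMs, delayMs + jitterMs + 1);
    }

    public static void main(String[] args) {
        // With delay = 500 and jitter = 200, every value falls in [300, 700].
        long d = effectiveDelay(500, 200);
        System.out.println(d >= 300 && d <= 700); // prints true
    }
}
```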

Add a Circuit Breaker around a method invocation to fail fast

Once an external system you call is down or returns 503 because it is currently unable to process further requests, you might not want to access it again for a given timeframe. This gives the other system a chance to recover, and your methods can fail fast since you already know the expected response from past requests. For this scenario, the Circuit Breaker pattern comes into play.

The Circuit Breaker offers a way to fail fast by directly failing the method execution, preventing further overloading of the target system and indefinite waits or timeouts. With MicroProfile Fault Tolerance we have an annotation to achieve this with ease: @CircuitBreaker.

There are three states a Circuit Breaker can be in: closed, open and half-open.

In the closed state, the operation is executed as expected. If a failure occurs, e.g. while calling an external service, the Circuit Breaker records such an event. If a particular threshold of failures is met, it switches to the open state.

Once the Circuit Breaker enters the open state, further calls fail immediately. After a given delay the circuit enters the half-open state, in which trial executions are allowed. If such a trial execution fails, the circuit transitions back to the open state. When a predefined number of these trial executions succeed, the circuit returns to the original closed state.

Let’s have a look at the following example:

@CircuitBreaker(successThreshold = 10, requestVolumeThreshold = 5, failureRatio = 0.5, delay = 500)
@Fallback(fallbackMethod = "getFallbackData")
public String getRandomData() {
    if (ThreadLocalRandom.current().nextLong(1000) < 300) {
        return "random duke";
    } else {
        throw new RuntimeException("Random data not available");
    }
}

In the example above I define a Circuit Breaker which enters the open state once 50% (failureRatio = 0.5) of five consecutive executions (requestVolumeThreshold = 5) fail. After a delay of 500 milliseconds in the open state, the circuit transitions to half-open. Once ten trial executions (successThreshold = 10) in the half-open state succeed, the circuit is back in the closed state.
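
The state transitions described above can be modelled as a small, plain-Java state machine. This is an illustrative sketch only (SimpleCircuitBreaker is a made-up class, not how a MicroProfile implementation works internally), using the same thresholds as the example:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SimpleCircuitBreaker {

    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int requestVolumeThreshold;
    private final double failureRatio;
    private final int successThreshold;
    private final Deque<Boolean> window = new ArrayDeque<>(); // true = failure
    private int halfOpenSuccesses;
    private State state = State.CLOSED;

    SimpleCircuitBreaker(int requestVolumeThreshold, double failureRatio, int successThreshold) {
        this.requestVolumeThreshold = requestVolumeThreshold;
        this.failureRatio = failureRatio;
        this.successThreshold = successThreshold;
    }

    State state() { return state; }

    // Called once the configured delay has elapsed: allow trial executions.
    void delayElapsed() {
        if (state == State.OPEN) {
            state = State.HALF_OPEN;
            halfOpenSuccesses = 0;
        }
    }

    void record(boolean failure) {
        switch (state) {
            case OPEN:
                throw new IllegalStateException("calls fail fast while open");
            case HALF_OPEN:
                if (failure) {
                    state = State.OPEN;            // one trial failure reopens
                } else if (++halfOpenSuccesses >= successThreshold) {
                    state = State.CLOSED;          // enough trials close it
                    window.clear();
                }
                break;
            case CLOSED:
                window.addLast(failure);
                if (window.size() > requestVolumeThreshold) window.removeFirst();
                long failures = window.stream().filter(f -> f).count();
                if (window.size() == requestVolumeThreshold
                        && failures >= Math.ceil(failureRatio * requestVolumeThreshold)) {
                    state = State.OPEN;
                }
                break;
        }
    }
}
```

With new SimpleCircuitBreaker(5, 0.5, 10), three failures among five recorded calls open the circuit, delayElapsed() moves it to half-open, and ten successful trials close it again.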

Execute a method asynchronously with MicroProfile Fault Tolerance

Some use cases in your system might not require synchronous and in-order execution of different tasks. For instance, you can fetch data for a customer (purchased orders, contact information, invoices) from different services in parallel. The MicroProfile Fault Tolerance specification offers a convenient way to achieve such asynchronous method executions: @Asynchronous:

@Asynchronous
public Future<String> getConcurrentServiceData(String name) {
    System.out.println(name + " is accessing the concurrent service");
    return CompletableFuture.completedFuture("concurrent duke");
}

With this annotation, the execution happens on a separate thread, and the method has to return either a Future or a CompletionStage.
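
The parallel-fetch idea can be illustrated with the JDK’s own CompletableFuture (ParallelFetch and fetch are made-up names; in a real application the container would run an @Asynchronous method on its own thread for you):

```java
import java.util.concurrent.CompletableFuture;

public class ParallelFetch {

    // Stand-in for a call to a downstream service (orders, invoices, ...)
    static CompletableFuture<String> fetch(String what) {
        return CompletableFuture.supplyAsync(() -> what + "-data");
    }

    public static void main(String[] args) {
        // Both calls run concurrently; thenCombine joins their results.
        String combined = fetch("orders")
                .thenCombine(fetch("invoices"), (o, i) -> o + "," + i)
                .join();
        System.out.println(combined); // orders-data,invoices-data
    }
}
```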

Apply Bulkheads to limit the number of concurrent calls

The Bulkhead pattern is a way of isolating failures in your system while the rest can still function. It’s named after the sectioned parts (bulkheads) of a ship. If one bulkhead of a ship is damaged and filled with water, the other bulkheads aren’t affected, which prevents the ship from sinking.

Imagine a scenario where all your threads are occupied by requests to a (slow-responding) external system and your application can’t process other tasks. To prevent such a scenario, we can apply the @Bulkhead annotation and limit concurrent calls:

@Bulkhead(5)
@Asynchronous
public Future<String> getConcurrentServiceData(String name) throws InterruptedException {
    Thread.sleep(1000);
    System.out.println(name + " is accessing the concurrent service");
    return CompletableFuture.completedFuture("concurrent duke");
}

In this example, only five concurrent calls can enter this method; further calls have to wait. If this annotation is used together with @Asynchronous, as in the example above, it means thread isolation. In addition, and only for asynchronous methods, we can specify the length of the waiting queue with the waitingTaskQueue attribute. For non-async methods, the specification defines semaphore-based isolation.
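
The semaphore-based isolation mentioned for non-async methods can be sketched with the JDK’s Semaphore (SemaphoreBulkhead is a made-up illustration, not the actual interceptor; note that the real non-async bulkhead rejects excess calls with a BulkheadException, whereas this simplified sketch blocks):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreBulkhead {

    // At most five callers may run the guarded section at once,
    // mirroring @Bulkhead(5) on a non-async method.
    private final Semaphore permits = new Semaphore(5);

    public String guardedCall() {
        permits.acquireUninterruptibly(); // blocks when five calls are in flight
        try {
            return "concurrent duke";
        } finally {
            permits.release();            // always free the permit
        }
    }

    int availablePermits() {
        return permits.availablePermits();
    }
}
```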

MicroProfile Fault Tolerance integration with MicroProfile Config

Above all, the MicroProfile Fault Tolerance specification provides tight integration with the MicroProfile Config specification. You can configure every attribute of the different interceptor bindings with an external config source like the microprofile-config.properties file.

The pattern for external configuration is the following: <classname>/<methodname>/<annotation>/<parameter>:

de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/maxRetries=10
de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/delay=300
de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/maxDuration=5000

YouTube video for using MicroProfile Fault Tolerance 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Fault Tolerance in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile Fault Tolerance,

Phil

The post #WHATIS?: Eclipse MicroProfile Fault Tolerance appeared first on rieckpil.



How to pack Angular 8 applications on regular war files

September 11, 2019 12:00 AM

Maven

From time to time it is necessary to distribute SPA applications using war files as containers. In my experience this is necessary when:

  • You don't have control over deployment infrastructure
  • You're dealing with rigid deployment standards
  • IT people are reluctant to publish a plain old web server

Anyway, as described in Oracle’s documentation, one of the benefits of using war files is the ability to include static (HTML/JS/CSS) files in the deployment, hence it is safe to assume that you could distribute any SPA application using a war file as a wrapper (with special considerations).

Creating a POC with Angular 8 and Java War

To demonstrate this, I will create a project that:

  1. Is compatible with the big three Java IDEs (NetBeans, IntelliJ, Eclipse) and VSCode
  2. Allows you to use the IDEs as JavaScript development IDEs
  3. Allows you to create a SPA modern application (With all npm, ng, cli stuff)
  4. Allows you to combine Java(Maven) and JavaScript(Webpack) build systems
  5. Allows you to distribute a minified and ready for production project

Bootstrapping a simple Java web project

To bootstrap the Java project, you can use the plain old maven-archetype-webapp as a basis:

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DarchetypeVersion=1.4

The interactive shell will ask you for your project characteristics, including groupId, artifactId (project name) and base package.

Java Bootstrap

In the end you should have the following structure as result:

demo-angular-8$ tree
.
├── pom.xml
└── src
    └── main
        └── webapp
            ├── WEB-INF
            │   └── web.xml
            └── index.jsp

4 directories, 3 files

Now you should be able to open your project in any IDE. By default the pom.xml will include locked-down versions of the Maven plugins; you can safely get rid of those since we won’t customize the entire Maven lifecycle, just a couple of hooks.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.nabenik</groupId>
  <artifactId>demo-angular-8</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <name>demo-angular-8 Maven Webapp</name>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
  </properties>
</project>

Besides that, index.jsp is not necessary; just delete it.

Bootstrapping a simple Angular JS project

As an opinionated approach, I suggest isolating the Angular project in its own directory (src/main/frontend). In the past, with simpler frameworks (AngularJS, Knockout, Ember), it was possible to bootstrap an entire project with a couple of includes in the index.html file. Nowadays, however, most modern front-end projects use some kind of bundler/linter to enable modern (>=ES6) features like modules; in the case of Angular, it uses Webpack under the hood.

For this guide I assume that you have already installed the Angular CLI tools, so we can go inside our source code structure and bootstrap the Angular project.

demo-angular-8$ cd src/main/
demo-angular-8/src/main$ ng new frontend

This will bootstrap a vanilla Angular project, and in fact you could consider the src/main/frontend folder a separate root (you could even open it directly in VSCode). The final structure will look like this:

JS Structure

As a first POC I started the application directly from the CLI using IntelliJ IDEA and ng serve --open; everything worked as expected.

Angular run

Invoking Webpack from Maven

One of the useful plugins for this task is frontend-maven-plugin, which allows you to:

  1. Download common JS package managers (npm, cnpm, bower, yarn)
  2. Invoke JS build systems and tests (grunt, gulp, webpack or npm itself, karma)

By default, Angular projects come with npm hooks that delegate to ng, but we need to add a hook in package.json to create a production-quality build (buildProduction). Please double-check the base-href parameter, since I’m using the default context root from Java conventions (same as the project name):

...
"scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "buildProduction": "ng build --prod --base-href /demo-angular-8/",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  }
...

To test this build, execute npm run buildProduction at the web project's root (src/main/frontend); the output should look like this:

NPM Hook

Finally, it is necessary to invoke our new target with Maven, hence our configuration should:

  1. Install NodeJS (and NPM)
  2. Install JS dependencies
  3. Invoke our new hook
  4. Copy the result to our final distributable war

To achieve this, the following configuration should be enough:

<build>
    <finalName>demo-angular-8</finalName>
    <plugins>
        <plugin>
            <groupId>com.github.eirslett</groupId>
            <artifactId>frontend-maven-plugin</artifactId>
            <version>1.6</version>

            <configuration>
                <workingDirectory>src/main/frontend</workingDirectory>
            </configuration>

            <executions>
                <execution>
                    <id>install-node-and-npm</id>
                    <goals>
                        <goal>install-node-and-npm</goal>
                    </goals>
                    <configuration>
                        <nodeVersion>v10.16.1</nodeVersion>
                    </configuration>
                </execution>
                <execution>
                    <id>npm install</id>
                    <goals>
                        <goal>npm</goal>
                    </goals>
                    <configuration>
                        <arguments>install</arguments>
                    </configuration>
                </execution>
                <execution>
                    <id>npm build</id>
                    <goals>
                        <goal>npm</goal>
                    </goals>
                    <configuration>
                        <arguments>run buildProduction</arguments>
                    </configuration>
                    <phase>generate-resources</phase>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-war-plugin</artifactId>
            <version>3.2.2</version>
            <configuration>
                <failOnMissingWebXml>false</failOnMissingWebXml>

                <!-- Add frontend folder to war package -->
                <webResources>
                    <resource>
                        <directory>src/main/frontend/dist/frontend</directory>
                    </resource>
                </webResources>

            </configuration>
        </plugin>
    </plugins>
</build>

And that's it! Once you execute mvn clean package, you will obtain a portable WAR file that will run on any Servlet container runtime. For instance, I tested it with Payara Full 5 and it worked as expected.

Payara


September 11, 2019 12:00 AM

Jersey 2.29.1 has been released!

by Jan at September 10, 2019 09:51 PM

What a busy summer! Jersey 2.29 has been released in June and Jakarta EE 8 release was the next goal to be achieved before Oracle Code One. It has been a lot of work. Jakarta EE 8 contains almost 30 … Continue reading

by Jan at September 10, 2019 09:51 PM

Jakarta EE 8 Released: The New Era of Java EE

by Rhuan Henrique Rocha at September 10, 2019 01:29 PM

Java EE is a fantastic project, but it was created in 1999 under the J2EE name; it is 20 years old, and its evolution process is not suited to the new enterprise scenario. So Java EE needed to change too.

Java EE has a new home and a new brand, and is being released today, September 10th. Java EE was migrated from Oracle to the Eclipse Foundation and is now Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project. Today the Eclipse Foundation is releasing Jakarta EE 8, and in this post we'll see what that means.

Java EE was a very strong project, highly used in many kinds of enterprise Java applications and by many big frameworks like Spring and Struts. Some developers have questioned its features and evolution process, but looking at its high usage and time in the market, its success is unquestionable. But the enterprise world doesn't stop, and new challenges are emerging all the time. The pace of change keeps growing in the enterprise world because companies must be increasingly prepared to answer market challenges. Thus, technologies should follow these changes in the enterprise world and adapt to provide better solutions.

With that in mind, the IT world promoted many changes and solutions as well, to be able to provide a better answer to the enterprise world. One of these solutions was cloud computing. Summarizing the concept in a few words, cloud computing provides computing resources as a service (IaaS, PaaS, SaaS). This allows you to use only the resources you need and to scale up and down when needed.


Jakarta EE Goals

Jakarta EE 8 has the same set of specifications as Java EE 8, without changes to its features. The only change is the new process used to evolve these specifications.

The Java ecosystem has a new focus, putting its power at the service of cloud computing, and Jakarta EE is key to that.

Jakarta EE's goal is to accelerate business application development for cloud computing (cloud native applications), working from specifications developed by many vendors. The project starts from Java EE 8, whose specifications, TCKs, and Reference Implementations (RIs) were migrated from Oracle to the Eclipse Foundation. But to evolve these specifications for cloud computing, we cannot work with the same process used on the Java EE project, because it is too slow for current enterprise challenges. Thus, the first action of the Eclipse Foundation was to change the process used to evolve Jakarta EE.

This makes Jakarta EE 8 a milestone in Java enterprise history, because it places these specifications into a new process designed to push them toward the cloud native application approach.

Jakarta EE Specification Process

The Jakarta EE Specification Process (JESP) is the new process that will be used by the Jakarta EE Working Group to evolve Jakarta EE. JESP replaces the JCP process previously used for Java EE.

JESP is based on the Eclipse Foundation Specification Process (EFSP) with some changes, which are listed at https://jakarta.ee/about/jesp/:

  • Any modification to or revision of this Jakarta EE Specification Process, including the adoption of a new version of the EFSP, must be approved by a Super-majority of the Specification Committee, including a Super-majority of the Strategic Members of the Jakarta EE Working Group, in addition to any other ballot requirements set forth in the EFSP.
  • All specification committee approval ballot periods will have the minimum duration as outlined below (notwithstanding the exception process defined by the EFSP, these periods may not be shortened)
    • Creation Review: 7 calendar days;
    • Plan Review: 7 calendar days;
    • Progress Review: 14 calendar days;
    • Release Review: 14 calendar days;
    • Service Release Review: 14 calendar days; and
    • JESP Update: 7 calendar days.
  • A ballot will be declared invalid and concluded immediately in the event that the Specification Team withdraws from the corresponding review.
  • Specification Projects must engage in at least one Progress or Release Review per year while in active development.

The goal of JESP is to be as lightweight as possible, with a design closer to open source development and with code-first development in mind. This process promotes a new culture focused on experimentation, evolving the specifications based on the experience gained from it.

Jakarta EE 9

Jakarta EE 8 focuses on updating the process; the first feature updates will come in Jakarta EE 9. The main update expected in Jakarta EE 9 is the birth of the Jakarta NoSQL specification.

Jakarta NoSQL is a specification promoting easy integration between Java applications and NoSQL databases, providing a standard, high-level abstraction for connecting to them. It is a big step toward bringing the Java platform closer to the cloud native approach, because NoSQL databases are widely used in cloud environments. Jakarta NoSQL is based on JNoSQL, which will be its reference implementation.

Another expected update concerns the namespace. Oracle gave the Java EE project to the Eclipse Foundation, but Oracle retains the trademark, which means the Eclipse Foundation cannot use java or javax in project names or namespaces for new features coming to Jakarta EE. Thus, the community is discussing the transition from the old names to the jakarta.* namespace. You can see this thread here.

Conclusion

Jakarta EE opens a new era in the Java ecosystem, taking Java EE, which was and still is a very important project, and putting it under a very good open source process. Although this Jakarta EE version comes without feature updates, it opens the gate to the new features coming to Jakarta EE in the future. So we'll soon see many specification-based solutions for working in the cloud in the next versions of Jakarta EE.


by Rhuan Henrique Rocha at September 10, 2019 01:29 PM

Welcome to the Future of Cloud Native Java

by Mike Milinkovich at September 10, 2019 11:00 AM

Today, with the release of Jakarta EE 8, we’ve entered a new era in Java innovation.

Under an open, vendor-neutral process, a diverse community of the world’s leading Java organizations, hundreds of dedicated developers, and Eclipse Foundation staff have delivered the Jakarta EE 8 Full Platform, Web Profiles, and related TCKs, as well as Eclipse GlassFish 5.1 certified as a Jakarta EE 8 compatible implementation.

To say this is a big deal is an understatement. With 18 different member organizations, over 160 new committers, 43 projects, and a codebase of over 61 million lines of code in 129 Git repositories, this was truly a massive undertaking — even by the Eclipse community’s standards. There are far too many people to thank individually here, so I’ll say many thanks to everyone in the Jakarta EE community who played a role in achieving this industry milestone.

Here are some of the reasons I’m so excited about this release.

For more than two decades, Java EE has been the platform of choice across industries for developing and running enterprise applications. According to IDC, 90 percent of Fortune 500 companies rely on Java for mission-critical workloads. Jakarta EE 8 gives software vendors, more than 10 million Java developers, and thousands of enterprises the foundation they need to migrate Java EE applications and workloads to a standards-based, vendor-neutral, open source enterprise Java stack.

As a result of the tireless efforts of the Jakarta EE Working Group’s Specification Committee, specification development follows the Jakarta EE Specification Process and Eclipse Development Process, which are open, community-driven successors to the Java Community Process (JCP) for Java EE. This makes for a fully open, collaborative approach to generating specifications, with every decision made by the community — collectively. Combined with open source TCKs and an open process of self-certification, Jakarta EE significantly lowers the barriers to entry and participation for independent implementations.

The Jakarta EE 8 specifications are fully compatible with Java EE 8 specifications and include the same APIs and Javadoc using the same programming model developers have been using for years. The Jakarta EE 8 TCKs are based on and fully compatible with Java EE 8 TCKs. That means enterprise customers will be able to migrate to Jakarta EE 8 without any changes to Java EE 8 applications.

In addition to GlassFish 5.1 (which you can download here), IBM’s Open Liberty server runtime has also been certified as a Jakarta EE 8 compatible implementation. All of the vendors in the Jakarta EE Working Group plan to certify that their Java EE 8 implementations are compatible with Jakarta EE 8.

 All of this represents an unprecedented opportunity for Java stakeholders to participate in advancing Jakarta EE to meet the modern enterprise’s need for cloud-based applications that resolve key business challenges. The community now has an open source baseline that enables the migration of proven Java technologies to a world of containers, microservices, Kubernetes, service mesh, and other cloud native technologies that have been adopted by enterprises over the last few years.

As part of the call to action, we’re actively seeking new members for the Jakarta EE Working Group. I encourage everyone to explore the benefits and advantages of membership. If Java is important to your business, and you want to ensure the innovation, growth, and sustainability of Jakarta EE within a well-governed, vendor-neutral ecosystem that benefits everyone, now is the time to get involved.

Also, if you’re interested in learning more about our community’s perspective on what cloud native Java is, why it matters so much to many enterprises, and where Jakarta EE technologies are headed, download our new free eBook, Fulfilling the Vision for Open Source, Cloud Native Java. Thank you to Adam Bien, Sebastian Daschner, Josh Juneau, Mark Little, and Reza Rahman for contributing their insights and expertise to the eBook.

Finally, if you’ll be at Oracle Code One at the Moscone Center in San Francisco next week, be sure to stop by booth #3228, where the Eclipse community will be showcasing Jakarta EE 8, GlassFish 5.1, Eclipse MicroProfile, Eclipse Che, and more of our portfolio of cloud native Java open source projects.

 


by Mike Milinkovich at September 10, 2019 11:00 AM

Jakarta EE 8 Specifications Released by The Eclipse Foundation, Payara Platform Compatibility Coming Soon

by Debbie Hoffman at September 10, 2019 11:00 AM

The Jakarta EE 8 Full Platform, Web Profile specifications and related TCKs have been officially released today (September 10th, 2019). This release completes the transition of Java EE to an open and vendor-neutral process and provides a foundation for migrating mission-critical Java EE applications to a standard enterprise Java stack for a cloud native world. 


by Debbie Hoffman at September 10, 2019 11:00 AM

Jakarta EE and the great naming debate

September 10, 2019 03:46 AM

At JavaOne 2017 Oracle announced that they would start the difficult process of moving Java EE to the Eclipse Software Foundation. This has been a massive effort on behalf of Eclipse, Oracle and many others, and we are getting close to having a specification process and a Jakarta EE 8 platform. We are looking forward to being able to certify Open Liberty against it soon. While that is excellent news, on Friday last week Mike Milinkovich from Eclipse informed the community that Eclipse and Oracle could not come to an agreement that would allow Jakarta EE to evolve using the existing javax package prefix. This has caused a flurry of discussion on Twitter, ranging from panic and confusion to, in some cases, outright FUD.

To say that everyone is disappointed with this outcome would be a massive understatement. Yes, this is disappointing, but it is not the end of the world. First of all, despite what some people are implying, Java EE applications are not suddenly broken today when they were working a week ago. Similarly, your Spring apps are not going to break (yes, the Spring Framework has 2545 Java EE imports, let alone all the upstream dependencies). It just means that we will have a constraint on how Jakarta EE evolves to add new function.

We have a lot of experience with managing migration in the Open Liberty team. We have a zero migration promise for Open Liberty, which is why we are the only application server that supports Java EE 7 and 8 in the same release stream. This means that if you are on Open Liberty, your existing applications are totally shielded from any class name changes in Jakarta EE 9. We do this through our versioned features, which provide the exact API and runtime required by the specification as it was originally defined. We are optimistic about the future because we have been doing this with Liberty since it was created in 2012.

The question for the community is "how should we move forward from here?" It seems that many in the Jakarta EE spec group at Eclipse are leaning towards quickly renaming everything in a Jakarta EE 9 release. There are advantages and disadvantages to this approach, but it appears favoured by David Blevins, Ian Robinson, Kevin Sutter, and Steve Millidge. While I can see the value of just doing a rename now (after all, it is better to pull a band-aid off fast than slow), I think it would be a mistake if at the same time we do not invest in making the migration from Java EE package names to Jakarta EE package names cost nothing. Something in Liberty we call "zero migration".

Jakarta EE will only succeed if developers have a seamless transition from Java EE to Jakarta EE. I think there are four aspects to pulling off zero migration with a rename:

  1. Existing application binaries need to continue to work without change.

  2. Existing application source needs to continue to work without change.

  3. Tools must be provided to quickly and easily change the import statements for Java source.

  4. Applications that are making use of the new APIs must be able to call binaries that have not been updated.

The first two are trivial to do: Java class files have a constant pool that contains all referenced class and method names. Updating the constant pool when the class is loaded will be technically easy, cheap at runtime, and safe. We are literally talking about changing javax.servlet to jakarta.servlet; no method changes.

The third one is also relatively simple; as long as class names do not change, switching import statements from javax.servlet.* to jakarta.servlet.* is easy to automate.
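To illustrate how mechanical the third point is, here is a minimal sketch of an import rewrite as a plain string transformation. This is a hypothetical helper, not any existing tool; a real migration tool would operate on whole source trees and also handle fully qualified names in code bodies.

```java
import java.util.regex.Pattern;

// Minimal sketch: rewrite javax.servlet import statements to jakarta.servlet.
public class ImportRewriter {

    private static final Pattern SERVLET_IMPORT =
            Pattern.compile("(?m)^import\\s+javax\\.servlet\\.");

    public static String rewrite(String source) {
        // Only touches import lines; code referencing javax.servlet
        // via fully qualified names would need a smarter tool.
        return SERVLET_IMPORT.matcher(source).replaceAll("import jakarta.servlet.");
    }

    public static void main(String[] args) {
        String source = "import javax.servlet.http.HttpServlet;\n"
                + "public class MyServlet extends HttpServlet { }\n";
        System.out.println(rewrite(source));
    }
}
```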

The last one is the most difficult because you have existing binaries using the javax.servlet package and new source using the jakarta.servlet package. Normally this would produce a compilation error because you cannot pass a jakarta.servlet class somewhere that takes a javax.servlet class. In theory we could reuse the approach used to support existing apps and apply it at compile time to the downstream dependencies, but this will depend on the build tools being able to support this behaviour. You could add something to the Maven build to run prior to compilation to make sure this works, but that might be too much work for some users to contemplate, and perhaps is not close enough to zero migration.

I think if the Jakarta EE community pulls together to deliver this kind of zero migration approach prior to making any break, the future will be bright for Jakarta EE. The discussion has already started on the jakarta-platform-dev mailing list, kicked off by David Blevins. If you are not a member, you can join now on eclipse.org. I am also happy to hear your thoughts via Twitter.


September 10, 2019 03:46 AM

Update for Jakarta EE community: September 2019

by Tanja Obradovic at September 09, 2019 03:12 PM

We hope you’re enjoying the Jakarta EE monthly email update, which seeks to highlight news from various committee meetings related to this platform. There’s a lot happening in the Jakarta EE ecosystem so if you want to get richer insight into the work that has been invested in Jakarta EE so far and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in August: 

EclipseCon Europe 2019: Register for Community Day 

We’re gearing up for EclipseCon Europe 2019! If you’ve already booked your ticket, make sure to sign up for Community Day happening on October 21; this day is jam-packed with peer-to-peer interaction and community-organized meetings that are ideal for Eclipse Working Groups, Eclipse projects, and similar groups that form the Eclipse community. As always, Community Day is accompanied by an equally interesting Community Evening, where like-minded attendees can share ideas, experiences and have fun! 

That said, in order to make this event a success, we need your help. What would you like Community Day & Evening to be all about? Check out this wiki, give us your suggestions, and let us know if you plan to attend by signing up at the bottom of the wiki. Also, make sure to go over what we did last year. And don’t forget to register for Community Day and Evening! 

EclipseCon Europe will take place in Ludwigsburg, Germany on October 21 - 24, 2019. 

JakartaOne Livestream: There’s still time to register!

JakartaOne Livestream, taking place on September 10, is the fall virtual conference spanning multiple time zones. Plus, the date coincides with the highly anticipated Jakarta EE 8 release so make sure to save the date; you’re in for a treat! 

We hope you’ll attend this all-day virtual conference as it unfolds; this way, you get the chance to interact with renowned speakers, participate in interesting interactions and have all your questions answered during the interactive sessions. More than 500 people have already signed up to participate in JakartaOne Livestream so register now to secure your spot! 

Once you’ve registered, you will have the opportunity to post questions and/or comments for the talks you’re interested in.  We encourage all participants to make the most out of this virtual event by sharing your questions and chiming in with your suggestions/comments! 

No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more. Check out the schedule here.

Jakarta EE 8 release

The moment we've all been waiting for is almost upon us. The expected due date of Jakarta EE 8 is September 10, and we’re in the final stages of preparing the specifications. Eclipse GlassFish 5.1 and Open Liberty 19.0.0.6, both open source Jakarta EE compatible implementations, are expected to be released the same day, and other compatible implementations are expected to follow suit. 

Keep an eye out for the Jakarta EE 8 page, which will include all the necessary information and updates related to the Jakarta EE 8 release, including links to specifications, compatible products, Eclipse Cloud Native Java eBook and more.  

If you’d like to learn more about cloud native Java and Jakarta EE, the Eclipse Foundation will be at Oracle Code One, so come armed with questions. Stop by our booth (number 3228) to say hi, ask questions, or chat with our experts. Here’s what you can expect to see in our booth at Oracle Code One:

  • A lot of community participation at the booth; after all, this is an open source, community-driven project!

  • Pods dedicated to Jakarta EE, MicroProfile and Eclipse Che

  • A lot of information and discussion about the Jakarta EE 8 release and related Compatible Implementations 

Jakarta EE Community Update: August video call

The most recent Jakarta EE Community Update meeting took place on August 27; the conversation included topics such as the progress and latest status of the Jakarta EE 8 release as well as details about JakartaOne Livestream and EclipseCon Europe 2019.   

The materials used in the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Fulfilling the Vision for Open Source, Cloud Native Java: Coming soon! 

What does cloud native Java really mean to developers? What does the cloud native Java future look like? Where is Jakarta EE headed? Which technologies should be part of your toolkit for developing cloud native Java applications? 

All these questions (and more!) will be answered soon; we’re developing a downloadable eBook called Fulfilling the Vision for Open Source, Cloud Native Java on the community's definition and vision for cloud native Java, which will become available shortly before Jakarta EE 8 is released. Stay tuned!

Cloud Native Java & Jakarta EE presence at events and conferences: August overview 

We’d like to give kudos to Otávio Santana for his hard work this past summer on his 11+ conference session tour on “Jakarta on the Cloud America Latina 2019”. It’s great to see the success of your sessions and we are happy to promote your community participation. 

Informally, the community, including the Eclipse Foundation team, provides feedback via this repo, plus the Jakarta Community forum. Anyone with upcoming Java EE & Jakarta sessions is welcome to submit them via issues, to broaden the message rather than keep it exclusive to committee access. 

Links you may want to bookmark!

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. Don’t forget to follow us on Twitter to get the latest news and updates!

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  

The Jakarta EE community promises to be a very active one, especially given the various channels that can be used to stay up to date with all the latest and greatest. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan.

Note: If you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud native Java and see when is the next Jakarta Tech Talk, please bookmark the Jakarta EE Community Calendar


by Tanja Obradovic at September 09, 2019 03:12 PM

MicroProfile, Business Constraints, Outbox, lit-html, OData, ManagedExecutorService, Effective Java EE, Minishift, Quarkus-the 66th airhacks.tv

by admin at September 09, 2019 07:19 AM

The 66th airhacks.tv episode covering:

MicroProfile polyfills, ensuring consistency in business constraints, outbox transactional pattern, lit-html, minishift and okd, parsing images from stream, odata and backend for frontend, quarkus and bulkheads, JWTenizr on CI/CD, WARs on Docker, and recent podcasts

...is available:

Any questions left? Ask now: https://gist.github.com/AdamBien/1a227df3f1701e4a12a751d3f7d1633e and get the answers at the next airhacks.tv.

See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).

by admin at September 09, 2019 07:19 AM

#WHATIS?: Eclipse MicroProfile JWT Auth

by rieckpil at September 06, 2019 04:31 AM

In today’s microservice architectures, security is usually based on the following protocols: OAuth2, OpenID Connect, and SAML. These protocols use security tokens to propagate the security state from client to server. This stateless approach is usually achieved by passing a JWT alongside every client request. For convenient use of this kind of token-based authentication, MicroProfile JWT Auth evolved. The specification ensures that the security token is extracted from the request and validated, and that a security context is created from the extracted information.

Learn more about the MicroProfile JWT Auth specification, its annotations, and how to use it in this blog post.
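As a quick reminder of what actually travels over the wire: a JWT consists of three Base64URL-encoded segments (header, payload, signature) separated by dots, and the claims live in the payload. A minimal JDK-only sketch for inspecting a token (the token and claim values below are made up):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {

    // Decode the payload (second segment) of a JWT without verifying
    // the signature -- for inspection only, never for authentication.
    public static String decodePayload(String jwt) {
        String[] segments = jwt.split("\\.");
        byte[] json = Base64.getUrlDecoder().decode(segments[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical token with header {"alg":"none"} and payload {"iss":"rieckpil"}
        String header = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"iss\":\"rieckpil\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".";
        System.out.println(decodePayload(jwt)); // prints {"iss":"rieckpil"}
    }
}
```

The MicroProfile implementation does this extraction and the signature validation for you; the sketch only shows the token anatomy.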

Specification profile: MicroProfile JWT Auth

  • Current version: 1.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide JWT token-based authentication for your application

Securing a JAX-RS application

First, we have to instruct our JAX-RS application that we’ll use JWTs for authentication and authorization. You can configure this with the @LoginConfig annotation:

@ApplicationPath("resources")
@LoginConfig(authMethod = "MP-JWT")
public class JAXRSConfiguration extends Application {
}

Once an incoming request carries a valid JWT as a Bearer token in the HTTP Authorization header, the groups in the JWT are mapped to roles. We can now limit access to a resource to specific roles and achieve authorization with the Common Security Annotations (JSR-250): @RolesAllowed, @PermitAll, @DenyAll:

@GET
@RolesAllowed("admin")
public Response getBook() {

    JsonObject secretBook = Json.createObjectBuilder()
        .add("title", "secret")
        .add("author", "duke")
        .build();

    return Response.ok(secretBook).build();
}

Furthermore, we can inject the actual JWT (alongside the Principal) with CDI, as well as any individual claim of the JWT:

@Path("books")
@RequestScoped
@Produces(MediaType.APPLICATION_JSON)
public class BookResource {

    @Inject
    private Principal principal;

    @Inject
    private JsonWebToken jsonWebToken;

    @Inject
    @Claim("administrator_id")
    private JsonNumber administrator_id;

    @GET
    @RolesAllowed("admin")
    public Response getBook() {

        System.out.println("Secret book for " + principal.getName()
                + " with roles " + jsonWebToken.getGroups());
        System.out.println("Administrator level: "
                + jsonWebToken.getClaim("administrator_level").toString());
        System.out.println("Administrator id: " + administrator_id);

        JsonObject secretBook = Json.createObjectBuilder()
                .add("title", "secret")
                .add("author", "duke")
                .build();

        return Response.ok(secretBook).build();
    }

}

In this example, I’m injecting the claim administrator_id and accessing the claim administrator_level via the JWT. These are not part of the standard JWT claims, but you can add any additional metadata to your token.

Always make sure to only inject the JWT and its claims into @RequestScoped CDI beans, as you’ll get a DeploymentException otherwise:

javax.enterprise.inject.spi.DeploymentException: CWWKS5603E: The claim cannot be injected into the [BackedAnnotatedField] @Inject @Claim private de.rieckpil.blog.BookResource.administrator_id injection point for the ApplicationScoped or SessionScoped scopes.
        at com.ibm.ws.security.mp.jwt.cdi.JwtCDIExtension.processInjectionTarget(JwtCDIExtension.java:92)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)

HINT: Depending on the application server you deploy this example to, you might first have to declare the available roles with @DeclareRoles({"admin", "chief", "duke"}).

Required configuration for MicroProfile JWT Auth

Validating the JWT signature requires the public key. Since MicroProfile JWT Auth 1.1, we can configure this with MicroProfile Config (previously it was vendor-specific). The JWT Auth specification allows the following public key formats:

  • PKCS#8 (Public Key Cryptography Standards #8 PEM)
  • JWK (JSON Web Key)
  • JWKS (JSON Web Key Set)
  • JWK Base64 URL encoded
  • JWKS Base64 URL encoded

For this example, I’m using the PKCS#8 format and specifying the path of the .pem file containing the public key in the microprofile-config.properties file:

mp.jwt.verify.publickey.location=/META-INF/publicKey.pem
mp.jwt.verify.issuer=rieckpil

The configuration of the issuer is also required and has to match the iss claim in the JWT. A valid publicKey.pem file might look like the following:

-----BEGIN RSA PUBLIC KEY-----
YOUR_PUBLIC_KEY
-----END RSA PUBLIC KEY-----
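Under the hood, consuming such a file boils down to stripping the PEM header and footer and Base64-decoding the body into a key object. The following plain-JDK sketch illustrates that round trip; the key pair is generated on the fly purely for illustration (in practice the public key comes from your identity provider), and the body is assumed to be the key’s X.509/SPKI encoding:

```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class PemDemo {

    // Wrap an encoded public key in a PEM envelope
    static String toPem(PublicKey key) {
        return "-----BEGIN RSA PUBLIC KEY-----\n"
                + Base64.getMimeEncoder().encodeToString(key.getEncoded())
                + "\n-----END RSA PUBLIC KEY-----";
    }

    // Strip header/footer, decode the Base64 body, rebuild the key object
    static PublicKey fromPem(String pem) throws Exception {
        String body = pem.replaceAll("-----(BEGIN|END) (RSA )?PUBLIC KEY-----", "")
                         .replaceAll("\\s", "");
        byte[] encoded = Base64.getDecoder().decode(body);
        return KeyFactory.getInstance("RSA").generatePublic(new X509EncodedKeySpec(encoded));
    }

    // Generates a throwaway key pair and checks the PEM round trip
    static boolean roundTrips() {
        try {
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(2048);
            KeyPair pair = generator.generateKeyPair();
            return fromPem(toPem(pair.getPublic())).equals(pair.getPublic());
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrips()); // true
    }
}
```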

Using JWTEnizer to create tokens for testing

Usually, the JWT is issued by an identity provider (e.g. Keycloak). For quick testing, we can use the JWTenizer tool from Adam Bien. It provides a simple way to create valid JWT tokens and generates the corresponding public and private keys. Once you’ve downloaded jwtenizer.jar, you can run it for the first time with the following command:

java -jar jwtenizer.jar

This creates a jwt-token.json file in the folder where you executed the command. We can adjust this .json file to our needs and model a sample JWT:

{
  "iss": "rieckpil",
  "jti": "42",
  "sub": "duke",
  "upn": "duke",
  "groups": [
    "chief",
    "hacker",
    "admin"
  ],
  "administrator_id": 42,
  "administrator_level": "HIGH"
}
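A JWT itself is just three Base64URL-encoded segments (header.payload.signature) joined by dots, so you can inspect the claims of any generated token without a library. A small plain-Java sketch; the token below is hand-assembled and unsigned, purely for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {

    // Base64URL-encode one JWT segment, without padding (as the JWT spec requires)
    static String encodeSegment(String json) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    // Decode the middle (payload) segment of a JWT back into its JSON claims
    static String decodePayload(String jwt) {
        String[] segments = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(segments[1]), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = "{\"alg\":\"RS256\",\"typ\":\"JWT\"}";
        String payload = "{\"iss\":\"rieckpil\",\"sub\":\"duke\",\"groups\":[\"admin\"]}";
        String token = encodeSegment(header) + "." + encodeSegment(payload) + ".fake-signature";

        System.out.println(decodePayload(token));
        // {"iss":"rieckpil","sub":"duke","groups":["admin"]}
    }
}
```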

Once you’ve adjusted the raw jwt-token.json, you can run java -jar jwtenizer.jar again; this second run will pick up the existing .json file for creating the JWT. Alongside the token, the tool generates a microprofile-config.properties file, from which we can copy the public key and paste it into our publicKey.pem file.

Furthermore, the shell output of running jwtenizer.jar contains a cURL command we can use to hit our resources:

curl -i -H'Authorization: Bearer GENERATED_JWT' http://localhost:9080/resources/books

With a valid Bearer header you should get the following response from the backend:

HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Fri, 06 Sep 2019 03:24:16 GMT
Content-Language: en-US
Content-Length: 34

{"title":"secret","author":"duke"}

You can now adjust jwt-token.json again, remove the admin group, and generate a new JWT. With this token you should receive a 403 Forbidden response from the backend, as you are authenticated but don’t have the required role.

For further instructions on how to use this tool, have a look at the README on GitHub or the following video of Adam Bien.

YouTube video for using MicroProfile JWT Auth 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile JWT Auth in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile JWT Auth,

Phil

The post #WHATIS?: Eclipse MicroProfile JWT Auth appeared first on rieckpil.


by rieckpil at September 06, 2019 04:31 AM

Following the Deep Learning 101 meetup

by Hüseyin Akdogan at September 05, 2019 06:00 AM

Last Tuesday (03.09.2019), together with dear Hakan Özler, we hosted Orkun Susuz from n11 at the Deep Learning 101 meetup. Deep Learning is a popular topic, yet for many of us "What is Deep Learning?" is naturally still a valid question. So that is where we started. Dear Orkun’s answer was: "Deep Learning is a branch of Machine Learning algorithms." He explained that the key factor that makes Deep Learning so popular compared to other Machine Learning algorithms is its success when working with big data: "the more data, the better the result; that is the fundamental difference from the other algorithms."

Orkun went on to give various examples of how Deep Learning today touches almost every area of our lives, in fields such as health, banking, insurance, and security, across a spectrum that ranges from autonomous vehicles to identification from human images, from retina-scan auto-login to cancerous cell analysis, and from credit scoring to voice analysis and deep fakes.

From that point on, we talked about the right first step for anyone who wants to get into Deep Learning. Hakan underlined the importance of community support when choosing the right programming language and mentioned how helpful a lively interest and curiosity about what happens behind the scenes can be. On the language question, Orkun and Hakan agreed that Python is definitely a good starting choice. Orkun highlighted Python’s advantages for scientific coding and noted that, compared to its rivals, Python’s libraries leave the developer more freedom. Since this is something that intimidates beginners, he stressed that coding Deep Learning, especially with Python, does not require advanced language skills.

On the question of contributions to Deep Learning in and from Turkey, Orkun mentioned Koray Kavukçuoğlu, who has played an active role in the DeepMind project and currently works as VP of Research at DeepMind. Orkun also mentioned Ethem Alpaydın and his Machine Learning book published by MIT Press. Hakan talked about the Deep Learning Türkiye community and its work, adding that the community’s activities can be followed on Medium and Twitter.

Finally, we gathered resource recommendations for beginners. Orkun recommended the Machine Learning A-Z Udemy course and https://machinelearningmastery.com/.

Hakan, in turn, recommended https://mlcourse.ai, https://www.kaggle.com and https://medium.com/deep-learning-turkiye.

We thank dear Orkun for his time and the valuable insights he shared. By the way, the podcast channel we announced in the post about the Cloud Native meetup is now live. You can follow this meetup and the previous ones on the JUG İstanbul podcast channel on iTunes and Spotify.

See you at another event…


by Hüseyin Akdogan at September 05, 2019 06:00 AM

The Page Visibility API

by admin at September 04, 2019 02:33 PM

The Page Visibility API is useful for removing listeners (or stopping background processes) from hidden tabs or pages.

You only have to register a visibilitychange listener:


document.addEventListener('visibilitychange', _ => { 
    const state = document.visibilityState;
    console.log('document is: ',state);
})    

Hiding the page / making it visible again prints:


document is:  hidden
document is:  visible    


See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).


by admin at September 04, 2019 02:33 PM

The Payara Monthly Catch for August 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at September 03, 2019 11:44 AM

August felt a little quieter than previous months, with many people gearing up for the busy conference season. However, there were still plenty of juicy pieces of content to be found.

 

Below you will find a curated list of some of the most interesting news, articles, and videos from this month. Can’t wait until the end of the month? Then visit our Twitter page where we post these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at September 03, 2019 11:44 AM

#WHATIS?: Eclipse MicroProfile Rest Client

by rieckpil at September 03, 2019 06:14 AM

In a distributed system your services usually communicate via HTTP and expose REST APIs. External clients or other services in your system consume these endpoints on a regular basis to e.g. fetch data from a different part of the domain. If you are using Java EE, you can utilize the JAX-RS WebTarget and Client for this kind of communication. With the MicroProfile Rest Client specification, you get a more advanced and simpler way of creating these RESTful clients: you just declare interfaces and use a more declarative approach (as you might already know from the Feign library).

Learn more about the MicroProfile Rest Client specification, its annotations, and how to use it in this blog post.

Specification profile: MicroProfile Rest Client

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a type-safe approach to invoke RESTful services over HTTP.

Defining the RESTful client

For defining the Rest Client you just need a Java interface and model the remote REST API using JAX-RS annotations:

public interface JSONPlaceholderClient {

    @GET
    @Path("/posts")
    JsonArray getAllPosts();

    @POST
    @Path("/posts")
    Response createPost(JsonObject post);

}

You can specify the response type with a specific POJO (JSON-B will then try to deserialize the HTTP response body) or use the generic Response class of JAX-RS.

Furthermore, you can indicate an asynchronous execution, if you use CompletionStage<T> as the method return type:

@GET
@Path("/posts/{id}")
CompletionStage<JsonObject> getPostById(@PathParam("id") String id);
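A caller of such an asynchronous client method typically chains further processing onto the returned stage instead of blocking. The following sketch simulates the client call with a CompletableFuture; getPostById here is a hypothetical stand-in, since the real Rest Client implementation provides the stage and performs the HTTP request on a separate thread:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncClientDemo {

    // Hypothetical stand-in for the generated Rest Client call
    static CompletionStage<String> getPostById(String id) {
        return CompletableFuture.supplyAsync(() -> "{\"id\":\"" + id + "\"}");
    }

    public static void main(String[] args) {
        String result = getPostById("42")
                .thenApply(String::toUpperCase) // chain work without blocking
                .toCompletableFuture()
                .join();                        // block only at the edge, e.g. in a test
        System.out.println(result); // {"ID":"42"}
    }
}
```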

Path variables and query parameters for the remote endpoint can be specified with @PathParam and @QueryParam:

@GET
@Path("/posts")
JsonArray getAllPosts(@QueryParam("orderBy") String orderDirection);

@GET
@Path("/posts/{id}/comments")
JsonArray getCommentsForPostByPostId(@PathParam("id") String id);

You can define the media type of the request and the expected media type of the response on either interface level or for each method separately:

@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public interface JSONPlaceholderClient {

    @GET
    @Produces(MediaType.APPLICATION_XML) // overrides the JSON media type only for this method
    @Path("/posts/{id}")
    CompletionStage<JsonObject> getPostById(@PathParam("id") String id);

}

If you have to declare specific HTTP headers (e.g. for authentication), you can pass them either to the method with @HeaderParam or define them with @ClientHeaderParam (static value or refer to a method):

@ClientHeaderParam(name = "X-Application-Name", value = "MP-blog")
public interface JSONPlaceholderClient {

    @PUT
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    @Path("/posts/{id}")
    Response updatePostById(@PathParam("id") String id, JsonObject post, 
                            @HeaderParam("X-Request-Id") String requestIdHeader);

    default String generateAuthHeader() {
        return "Basic " + new String(Base64.getEncoder().encode("duke:SECRET".getBytes()));
    }

}
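The generateAuthHeader default method above builds a standard HTTP Basic value. To make the result tangible, here is the same logic isolated in plain Java, using the dummy duke:SECRET credentials from the snippet:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDemo {

    // Same idea as the default method in the client interface:
    // "user:password" Base64-encoded and prefixed with "Basic "
    static String basicAuth(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(basicAuth("duke", "SECRET"));
        // Basic ZHVrZTpTRUNSRVQ=
    }
}
```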

Using the client interface

Once you have defined your Rest Client interface, you have two ways of using it. First, there is the programmatic approach using the RestClientBuilder. With this builder we can set the base URI, define timeouts, and register JAX-RS features/providers like ClientResponseFilter, MessageBodyReader, ReaderInterceptor, etc.:

JSONPlaceholderClient jsonApiClient = RestClientBuilder.newBuilder()
                .baseUri(new URI("https://jsonplaceholder.typicode.com"))
                .register(ResponseLoggingFilter.class)
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build(JSONPlaceholderClient.class);

jsonApiClient.getPostById("1").thenAccept(System.out::println);

In addition to this, we can use CDI to inject the Rest Client. To register the interface as a CDI managed bean at runtime, it requires the @RegisterRestClient annotation:

@RegisterRestClient
@RegisterProvider(ResponseLoggingFilter.class)
public interface JSONPlaceholderClient {

}

With @RegisterProvider you can register further JAX-RS providers and features, as you’ve seen in the programmatic approach. If you don’t specify any scope for the interface, the @Dependent scope is used by default. With this scope, your Rest Client bean is bound (dependent) to the lifecycle of the injecting class.

You can now use it as any other CDI bean and inject it to your classes. Make sure to add the CDI qualifier @RestClient to the injection point:

@ApplicationScoped
public class PostService {

    @Inject
    @RestClient
    JSONPlaceholderClient jsonPlaceholderClient;

}

Further configuration for the Rest Client

If you use the CDI approach, you can make use of MicroProfile Config to further configure the Rest Client. You can specify the following properties with MicroProfile Config:

  • Base URL (.../mp-rest/url)
  • Base URI (.../mp-rest/uri)
  • The CDI scope of the client as a fully qualified class name (.../mp-rest/scope)
  • JAX-RS providers as a comma-separated list of fully qualified class names (.../mp-rest/providers)
  • The priority of a registered provider (.../mp-rest/providers/com.acme.MyProvider/priority)
  • Connect and read timeouts (.../mp-rest/connectTimeout and .../mp-rest/readTimeout)

You can specify these properties for each client individually, as you have to prefix each property with the fully qualified class name of the Rest Client:

de.rieckpil.blog.JSONPlaceholderClient/mp-rest/url=https://jsonplaceholder.typicode.com
de.rieckpil.blog.JSONPlaceholderClient/mp-rest/connectTimeout=3000
de.rieckpil.blog.JSONPlaceholderClient/mp-rest/readTimeout=3000

YouTube video for using MicroProfile Rest Client 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Rest Client in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile Rest Client,

Phil

The post #WHATIS?: Eclipse MicroProfile Rest Client appeared first on rieckpil.


by rieckpil at September 03, 2019 06:14 AM

The First Line of Quarkus--airhacks.fm Podcast

by admin at September 03, 2019 05:21 AM

Subscribe to airhacks.fm podcast via: spotify, iTunes, RSS

The #52 airhacks.fm episode with Emmanuel Bernard (@emmanuelbernard) about:

learning programming, ORM mappers, JPA, Hibernate contributions, bean validations, extending Hibernate to NoSQL, Java optimizations, GraalVM, Kubernetes, next generation Java EE application servers and Quarkus
is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at September 03, 2019 05:21 AM

MicroProfile, Business Constraints, Outbox, lit-html, OData, ManagedExecutorService, Effective Java EE--or 66th airhacks.tv

by admin at September 02, 2019 10:05 AM

Topics for 66th airhacks.tv episode (https://gist.github.com/AdamBien/a47834f9c6dc4f85fe2de58084ac0246):
  1. The MicroProfile polyfills
  2. Dealing with business constraints -- or pessimistic vs. optimistic locks
  3. Implementing the Outbox Transactional pattern with or without dependencies
  4. Browsersync, lit-html and document.write
  5. Minishift vs. OKD
  6. Parsing (loading) an image from an InputStream
  7. OData and backend for frontend
  8. ManagedExecutorService and Quarkus
  9. "The Effective Java" book in Java EE projects
  10. Forking GPL3 projects
  11. Using JWTenizr in CI/CD pipelines
  12. EAR / WAR deployments on docker
  13. What should a JavaEE developer know in 2019?
  14. Blog comment: Jakarta EE archetype
  15. Blog comment: Quarkus Scheduler and Kubernetes
  16. Blog comment: Abstract Data Access Object (or not)

Any questions left? Ask now: https://gist.github.com/AdamBien/a47834f9c6dc4f85fe2de58084ac0246 and get the answers at the next airhacks.tv.


by admin at September 02, 2019 10:05 AM

Java EE - Jakarta EE Initializr

August 30, 2019 07:14 AM

Getting started with Jakarta EE just became even easier!

Get started

Update!

Version 1.3 now has:

Payara 5.193 running on Java 11


August 30, 2019 07:14 AM

#WHATIS?: Eclipse MicroProfile OpenTracing

by rieckpil at August 30, 2019 04:51 AM

Tracing method calls in a monolith to identify slow parts is simple. Everything happens in one application (context), and you can easily add metrics to gather information about e.g. the elapsed time for fetching data from the database. Once you have a microservice environment with service-to-service communication, tracing needs more effort. If a business operation requires your service to call other services (which might then also call others) to gather data, identifying the source of a bottleneck is hard. Over the past years, several solutions evolved to tackle distributed tracing (e.g. Jaeger and Zipkin). As these solutions did not rely on a single, standard mechanism for trace description and propagation, a vendor-neutral standard for distributed tracing was needed: OpenTracing. With Eclipse MicroProfile we get a dedicated specification to make use of this standard: MicroProfile OpenTracing.

Learn more about the MicroProfile OpenTracing specification, its annotations, and how to use it in this blog post.

Specification profile: MicroProfile OpenTracing

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide distributed tracing for your JAX-RS application using the OpenTracing standard

Basics about distributed tracing

Once the flow of a request touches multiple service boundaries, you need to somehow correlate each incoming call with the same business flow. To accomplish this with distributed tracing, each service is instrumented to log messages with a correlation id that may have been propagated from an upstream service. These messages are then collected in a storage system and aggregated as they share the same correlation id.

A so-called trace represents the full journey of a request and contains multiple spans. A span covers a single operation within the request, with both start- and end-time information. Distributed tracing systems (e.g. Jaeger or Zipkin) then usually provide a visual timeline representation for a given trace and its spans.
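Conceptually, a span is little more than an operation name plus start and end timestamps, and spans belong to the same trace when they share a correlation id. The following toy model is not the OpenTracing API, just a sketch of the data a tracer collects:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class TraceDemo {

    // Minimal span: one timed operation, correlated by a shared trace id
    public record Span(String traceId, String operation, long startNanos, long endNanos) {
        public long durationNanos() { return endNanos - startNanos; }
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString(); // propagated across service calls
        List<Span> collected = new ArrayList<>();

        long t0 = System.nanoTime();
        // ... imagine the call to the book-store happening here ...
        long t1 = System.nanoTime();
        collected.add(new Span(traceId, "GET /resources/books", t0, t1));

        long t2 = System.nanoTime();
        // ... and the follow-up call for a single price here ...
        long t3 = System.nanoTime();
        collected.add(new Span(traceId, "GET /resources/prices/1", t2, t3));

        // A tracing backend groups spans by trace id to render the timeline
        long spansInTrace = collected.stream()
                .filter(span -> span.traceId().equals(traceId))
                .count();
        System.out.println(spansInTrace); // 2
    }
}
```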

Enabling distributed tracing with MicroProfile OpenTracing

The MicroProfile OpenTracing specification does not address the problem of defining, implementing or configuring the underlying distributed tracing system. It assumes an environment where all services use a common OpenTracing implementation.

The MicroProfile specification defines two operation modes:

  • Without instrumentation of application code (distributed tracing is enabled for JAX-RS applications by default)
  • With explicit code instrumentation (using the @Traced annotation)

So once a request arrives at a JAX-RS endpoint, the Tracer instance extracts the SpanContext (if given) from the inbound request and starts a new span. If there is no SpanContext yet, e.g. the request is coming from a frontend application, the MicroProfile application has to create one.

Every outgoing request (with either the JAX-RS Client or the MicroProfile Rest Client) then needs to contain the SpanContext and propagate it downstream. Tracing for the JAX-RS Client might need to be explicitly enabled (depending on the implementation), for the MicroProfile Rest Client it is globally enabled by default.

Besides the no-instrumentation mode, you can add the @Traced annotation to a class or method to explicitly start a new span at the beginning of a method.

Sample application setup for MicroProfile OpenTracing

To give you an example, I’m using the following two services to simulate a microservice architecture: book-store and book-store-client. Both are MicroProfile applications without further dependencies. The book-store-client has one public endpoint to retrieve books together with their prices:

@Path("books")
public class BookResource {

    @Inject
    private BookProvider bookProvider;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getBooks() {
        return Response.ok(bookProvider.getBooksFromBookStore()).build();
    }

}

For gathering information about the book and its price, the book-store-client communicates with the book-store:

@RequestScoped
public class BookProvider {

    @Inject
    private PriceCalculator priceCalculator;

    private WebTarget bookStoreTarget;

    @PostConstruct
    public void setup() {
        Client client = ClientBuilder
                .newBuilder()
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build();

        this.bookStoreTarget = client.target("http://book-store:9080/resources/books");
    }

    public JsonArray getBooksFromBookStore() {

        JsonArray books = this.bookStoreTarget
                .request()
                .get()
                .readEntity(JsonArray.class);

        List<JsonObject> result = new ArrayList<>();

        for (JsonObject book : books.getValuesAs(JsonValue::asJsonObject)) {
            result.add(Json.createObjectBuilder()
                    .add("title", book.getString("title"))
                    .add("price", priceCalculator.getPriceForBook(book.getInt("id")))
                    .build());
        }

        return result
                .stream()
                .collect(JsonCollectors.toJsonArray());
    }
}

So there will be at least one outgoing call to fetch all available books, and for each book an additional request to get its price:

@RequestScoped
public class PriceCalculator {

    private WebTarget bookStorePriceTarget;
    private Double discount = 1.5;

    @PostConstruct
    public void setUp() {
        Client client = ClientBuilder
                .newBuilder()
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build();

        this.bookStorePriceTarget = client.target("http://book-store:9080/resources/prices");
    }

    public Double getPriceForBook(int id) {
        Double bookPrice = this.bookStorePriceTarget
            .path(String.valueOf(id))
            .request()
            .get()
            .readEntity(Double.class);
        return Math.round((bookPrice - discount) * 100.0) / 100.0;
    }

}
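The getPriceForBook method rounds the discounted price to two decimal places with the common scale-round-unscale idiom. Isolated, the logic looks like this:

```java
public class PriceRounding {

    // Round to two decimal places: scale up, round to the nearest long, scale down
    static double roundToCents(double value) {
        return Math.round(value * 100.0) / 100.0;
    }

    public static void main(String[] args) {
        double bookPrice = 30.99;
        double discount = 1.5;
        System.out.println(roundToCents(bookPrice - discount)); // 29.49
    }
}
```

For real monetary values, BigDecimal with an explicit rounding mode is usually the safer choice, but the idiom above is fine for a demo.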

On the book-store side, there is a random Thread.sleep() when fetching the prices, so we can later see different traces. Without further instrumentation on both sides, we are ready for distributed tracing. We could add additional @Traced annotations to the involved methods to create a span for each method call and narrow down the tracing.

Using the Zipkin implementation on Open Liberty

For this example, I’m using Open Liberty to deploy both applications. With Open Liberty, we have to add the OpenTracing implementation as a user feature to the server image and configure it in server.xml:

FROM open-liberty:kernel-java11
COPY --chown=1001:0  target/microprofile-open-tracing-server.war /config/dropins/
COPY --chown=1001:0  server.xml /config/
COPY --chown=1001:0  extension /opt/ol/wlp/usr/extension

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>microProfile-3.0</feature>
        <feature>usr:opentracingZipkin-0.31</feature>
    </featureManager>

    <opentracingZipkin host="zipkin" port="9411"/>

    <mpMetrics authentication="false"/>

    <ssl id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="jdkTrustStore"/>
    <keyStore id="jdkTrustStore" location="${java.home}/lib/security/cacerts" password="changeit"/>

    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443" />
</server>

The OpenTracing Zipkin implementation is provided by IBM and can be downloaded from the following tutorial.

For the book-store DNS resolution you saw in the previous code snippets, and to start Zipkin as the distributed tracing system, I’m using docker-compose:

version: '3.6'
services:
  book-store-client:
    build: book-store-client/
    ports:
      - "9080:9080"
      - "9443:9443"
    links:
      - zipkin
      - book-store
  book-store:
    build: book-store/
    links:
      - zipkin
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"

Once both services and Zipkin are running, you can visit http://localhost:9080/resources/books to fetch all available books from the book-store-client application. Hit this endpoint several times, then switch to http://localhost:9411/zipkin/ and query for all available traces:

Once you click on a specific trace, you’ll get a timeline showing which operation took the most time:

YouTube video for using MicroProfile Open Tracing 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile OpenTracing in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile OpenTracing,

Phil

The post #WHATIS?: Eclipse MicroProfile OpenTracing appeared first on rieckpil.


by rieckpil at August 30, 2019 04:51 AM

What's New In Payara Platform 5.193?

by Cuba Stanley at August 29, 2019 01:00 PM

With the summer season coming to a close, the time has come for a new release of the Payara Platform! Here's a quick list of the new features you'll have to look forward to with the Payara Platform 5.193 release:


by Cuba Stanley at August 29, 2019 01:00 PM

Authentication and Authorisation as Joy - airhacks.fm Podcast

by admin at August 25, 2019 07:37 AM

Subscribe to airhacks.fm podcast via: spotify| iTunes| RSS

The #51 airhacks.fm episode with Sebastien Blanc (@sebi2706) about:
Logo, authorisation and authentication, Java EE, Keycloak features and integrations, Oauth, JWT and OIDC.
is available for download.



by admin at August 25, 2019 07:37 AM

#WHATIS?: Eclipse MicroProfile OpenAPI

by rieckpil at August 24, 2019 09:31 AM

Exposing REST endpoints usually requires documentation for your clients. This documentation typically includes the accepted media types, HTTP methods, path variables, query parameters, and the request and response schemas. With the OpenAPI v3 specification, we have a standard way to document APIs. Using MicroProfile OpenAPI, you can generate this kind of API documentation from your JAX-RS classes out-of-the-box. In addition, you can customize the result with additional metadata like detailed descriptions, error codes and their reasons, and information about the security mechanisms in use.

Learn more about the MicroProfile OpenAPI specification, its annotations and how to use it in this blog post.

Specification profile: MicroProfile OpenAPI

  • Current version: 1.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a unified Java API for the OpenAPI v3 specification to expose API documentation

Customize your API documentation with MicroProfile OpenAPI

Without any additional annotation or configuration, you get your API documentation with MicroProfile OpenAPI out-of-the-box. Your JAX-RS classes are scanned for @Produces, @Consumes, @Path, @GET, etc. annotations to extract the information required for the documentation.

If you have external clients accessing your endpoints, you usually add further metadata for them to understand what each endpoint is about. Fortunately, the MicroProfile OpenAPI specification defines a bunch of annotations you can use to customize the API documentation.

The following example shows some of the available annotations you can use to add further information:

@GET
@Operation(summary = "Get all books", description = "Returns all available books of the book store XYZ")
@APIResponse(responseCode = "404", description = "No books found")
@APIResponse(responseCode = "418", description = "I'm a teapot")
@APIResponse(responseCode = "500", description = "Server unavailable")
@Tag(name = "BETA", description = "This API is currently in beta state")
@Produces(MediaType.APPLICATION_JSON)
public Response getAllBooks() {
   System.out.println("Get all books...");
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

In this example, I’m adding a summary and description to the endpoint to tell the client what this endpoint is about. Furthermore, you can specify the different response codes this endpoint returns and give them a description if they are somehow different from the HTTP spec.

Another important part of your API documentation is the request and response body schema. With JSON as the current de-facto standard format for exchanging data, you need to know the expected and accepted formats. The same is true for the response, as your client needs information about the contract of the API to further process the result. This can be achieved with an additional MicroProfile OpenAPI annotation:

@GET
@APIResponse(description = "Book",
             content = @Content(mediaType = "application/json",
                    schema = @Schema(implementation = Book.class)))
@Path("/{id}")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response getBookById(@PathParam("id") Long id) {
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

Within the @APIResponse annotation, we can reference the response object with the schema attribute, which can point to your data transfer object class. The aforementioned Java class can then carry further annotations to specify which fields are required and to provide example values:

@Schema(name = "Book", description = "POJO that represents a book.")
public class Book {

    @Schema(required = true, example = "MicroProfile")
    private String title;

    @Schema(required = true, example = "Duke")
    private String author;

    @Schema(required = true, readOnly = true, example = "1")
    private Long id;

}

Access the created documentation

The MicroProfile OpenAPI specification defines a fixed endpoint for accessing the documentation: /openapi:

openapi: 3.0.0
info:
  title: Deployed APIs
  version: 1.0.0
servers:
- url: http://localhost:9080
- url: https://localhost:9443
tags:
- name: BETA
  description: This API is currently in beta state
paths:
  /resources/books/{id}:
    get:
      operationId: getBookById
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
          format: int64
      responses:
        default:
          description: Book
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Book'

This endpoint returns your generated API documentation in the OpenAPI v3 specification format as text/plain.

Moreover, if you are using Open Liberty, you get a nice-looking user interface for your API documentation at http://localhost:9080/openapi/ui/. It looks similar to the Swagger UI and offers your clients a way to explore your API and trigger requests to your endpoints via this user interface:

(Screenshots: the OpenAPI UI on Open Liberty, executing an API call, and the model explorer.)

YouTube video for using MicroProfile OpenAPI 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile OpenAPI in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile OpenAPI,

Phil

The post #WHATIS?: Eclipse MicroProfile OpenAPI appeared first on rieckpil.


by rieckpil at August 24, 2019 09:31 AM

New Challenges Ahead

by Ivar Grimstad at August 20, 2019 06:27 PM

I am super excited to announce that October 1st, I will become the first Jakarta EE Developer Advocate at Eclipse Foundation!

So, What’s new? Hasn’t this guy been doing this for years already?

Well, yes, and no. My day job has always been working as a consultant even if I have been fortunate that Cybercom Sweden (my employer of almost 15 years) has given me the freedom to also work on open source projects, community building and speaking at conferences and meetups.

What’s different then?

Even if I have had this flexibility, it has still been part-time work which has rippled into my spare time. It’s only so much a person can do and there are only 24 hours a day. As a full-time Jakarta EE Developer Advocate, I will be able to focus entirely on community outreach around Jakarta EE.

The transition of the Java EE technologies from Oracle to Jakarta EE at Eclipse Foundations has taken a lot longer than anticipated. The community around these technologies has taken a serious hit as a result of that. My primary focus for the first period as Jakarta EE Developer Advocate is to regain the trust and help enable participation of the strong community around Jakarta EE. The timing of establishing this position fits perfectly with the upcoming release of Jakarta EE 8. From that release and forward, it is up to us as a community to bring the technology forward.

I think I have been pretty successful with being vendor-neutral throughout the years. This will not change! Eclipse Foundation is a vendor-neutral organization and I will represent the entire Jakarta EE working group and community as the Jakarta EE Developer Advocate. This is what distinguishes this role from the vendor’s own developer advocates.

I hope to see you all very soon at a conference or meetup near you!


by Ivar Grimstad at August 20, 2019 06:27 PM

#WHATIS?: Eclipse MicroProfile Health

by rieckpil at August 19, 2019 05:16 AM

Once your application is deployed to production you want to ensure it’s up and running. To determine the health and status of your application you can use monitoring based on different metrics, but this requires further knowledge and takes time. Usually, you just want a quick answer to the question: Is my application up? The same is true if your application is running e.g. in a Kubernetes cluster, where the cluster regularly performs health probes to terminate unhealthy pods. With MicroProfile Health you can write both readiness and liveness checks and expose them via an HTTP endpoint with ease.

Learn more about the MicroProfile Health specification and how to use it in this blog post.

Specification profile: MicroProfile Health

  • Current version: 2.0.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Add liveness and readiness checks to determine the application’s health

Determine the application’s health with MicroProfile Health

With MicroProfile Health you get three new endpoints to determine both the readiness and liveness of your application:

  • /health/ready: Returns the result of all readiness checks and determines whether or not your application can process requests
  • /health/live: Returns the result of all liveness checks and determines whether or not your application is up and running
  • /health: In previous versions of MicroProfile Health there was no distinction between readiness and liveness; this endpoint remains for backwards compatibility and returns the result of both health check types.

To determine your readiness and liveness you can have multiple checks. The overall status is constructed with a logical AND of all your checks of that specific type (liveness or readiness). If, for example, one liveness check fails, the overall liveness status is DOWN and the HTTP status is 503:

$ curl -v http://localhost:9080/health/live


< HTTP/1.1 503 Service Unavailable
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"DOWN"}

In case of an overall UP status, you’ll receive the HTTP status 200:

$ curl -v http://localhost:9080/health/ready

< HTTP/1.1 200 OK
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"UP"}
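The logical-AND rule described above can be sketched in plain Java. This is a hypothetical illustration of the aggregation, not the actual runtime’s implementation; the check names are made up:

```java
import java.util.List;
import java.util.Map;

public class HealthAggregation {

    // Mirrors the spec's rule: the overall status is UP only if
    // every individual check of that type (liveness or readiness) is UP.
    static String overallStatus(List<Map.Entry<String, Boolean>> checks) {
        boolean allUp = checks.stream().allMatch(Map.Entry::getValue);
        return allUp ? "UP" : "DOWN";
    }

    public static void main(String[] args) {
        // One failing liveness check makes the overall status DOWN (HTTP 503)
        List<Map.Entry<String, Boolean>> liveness = List.of(
                Map.entry("disk", true),
                Map.entry("heap", false));
        System.out.println(overallStatus(liveness)); // prints DOWN
    }
}
```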

Create a readiness check

To create a readiness check you have to implement the HealthCheck interface and add @Readiness to your class:

@Readiness
public class ReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.builder()
                .name("readiness")
                .up()
                .build();
    }
}

As you can add multiple checks, you need to give every check a dedicated name. In general, all your readiness checks should determine whether your application is ready to accept traffic or not. Therefore a quick response is preferable.

If your application is about exposing and accepting data using REST endpoints and does not rely on other services to work, the readiness check above should be good enough, as it returns 200 once the JAX-RS runtime is up and running:

{
   "checks":[
      {
         "data":{},
         "name":"readiness",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Furthermore, once /health/ready returns 200, the application is considered ready; from then on /health/live is used and no further readiness checks are required.
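In a Kubernetes deployment, the two endpoints map naturally onto the cluster’s probes. The following is a minimal sketch, assuming the application listens on port 9080; the paths come from the specification, but the timings are illustrative placeholders:

```yaml
# Hypothetical probe configuration for a container running this application.
# Adjust port, delays, and periods to your deployment.
livenessProbe:
  httpGet:
    path: /health/live
    port: 9080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9080
  initialDelaySeconds: 5
  periodSeconds: 5
```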

Create liveness checks

Creating liveness checks is as simple as creating readiness checks. The only difference is the @Liveness annotation at class level:

@Liveness
public class DiskSizeCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {

        File file = new File("/");
        long freeSpace = file.getFreeSpace() / 1024 / 1024;

        return HealthCheckResponse.builder()
                .name("disk")
                .withData("remainingSpace", freeSpace)
                .state(freeSpace > 100)
                .build();
    }
}

In this example, I’m checking for free disk space, as a service might rely on storage to persist files. With the .withData() method of the HealthCheckResponseBuilder you can add further metadata to your response.

In addition, you can also combine the @Readiness and @Liveness annotation and reuse a health check class for both checks:

@Readiness
@Liveness
public class MultipleHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .builder()
                .name("generalCheck")
                .withData("foo", "bar")
                .withData("uptime", 42)
                .withData("isReady", true)
                .up()
                .build();
    }
}

This check now appears for /health/ready and /health/live:

{
   "checks":[
      {
         "data":{
            "remainingSpace":447522
         },
         "name":"disk",
         "status":"UP"
      },
      {
         "data":{

         },
         "name":"liveness",
         "status":"UP"
      },
      {
         "data":{
            "foo":"bar",
            "isReady":true,
            "uptime":42
         },
         "name":"generalCheck",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Other possible liveness checks might be: checking for active JDBC connections, connections to queues, CPU usage, or custom metrics (with the help of MicroProfile Metrics).

YouTube video for using MicroProfile Health 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Health in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Health,

Phil

The post #WHATIS?: Eclipse MicroProfile Health appeared first on rieckpil.


by rieckpil at August 19, 2019 05:16 AM

#WHATIS?: Eclipse MicroProfile Metrics

by rieckpil at August 18, 2019 08:03 AM

Ensuring a stable operation of your application in production requires monitoring. Without monitoring, you have no insights about the internal state and health of your system and have to work with a black-box. MicroProfile Metrics gives you the ability to not only monitor pre-defined metrics like JVM statistics but also create custom metrics to monitor e.g. key figures of your business. These metrics are then exposed via HTTP and ready to visualize on a dashboard and create appropriate alarms.

Learn more about the MicroProfile Metrics specification and how to use it in this blog post.

Specification profile: MicroProfile Metrics

  • Current version: 2.0 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Add custom metrics (e.g. timer or counter) to your application and expose them via HTTP

Default MicroProfile metrics defined in the specification

The specification defines one endpoint with three subresources to collect metrics from a MicroProfile application:

  • The endpoint to collect all available metrics: /metrics
  • Base (pre-defined by the specification) metrics: /metrics/base
  • Application metrics: /metrics/application (optional)
  • Vendor-specific metrics: /metrics/vendor (optional)

So you can either use the main /metrics endpoint and get all available metrics for your application or one of the subresources.

The default media type for these endpoints is text/plain using the OpenMetrics format. You are also able to get them as JSON if you specify the Accept header in your request as application/json.

In the specification, you find a list of base metrics every MicroProfile Metrics compliant application server has to offer. These are mainly JVM, GC, memory, and CPU related metrics to monitor the infrastructure. The following output shows the required set of base metrics:

{
    "gc.total;name=scavenge": 393,
    "gc.time;name=global": 386,
    "cpu.systemLoadAverage": 0.92,
    "thread.count": 85,
    "classloader.loadedClasses.count": 11795,
    "classloader.unloadedClasses.total": 21,
    "jvm.uptime": 985206,
    "memory.committedHeap": 63111168,
    "thread.max.count": 100,
    "cpu.availableProcessors": 12,
    "classloader.loadedClasses.total": 11816,
    "thread.daemon.count": 82,
    "gc.time;name=scavenge": 412,
    "gc.total;name=global": 14,
    "memory.maxHeap": 4182573056,
    "cpu.processCpuLoad": 0.0017964831879557087,
    "memory.usedHeap": 34319912
}

In addition, you are able to add metadata and tags to your metrics like in the output above for gc.time where name=global is a tag. You can use these tags to further separate a metric for multiple use cases.

Create a custom metric with MicroProfile Metrics

There are two ways for defining a custom metric with MicroProfile Metrics: using annotations or programmatically. The specification offers five different metric types:

  • Timer: samples the duration of e.g. a method call
  • Counter: monotonically counts e.g. invocations of a method
  • Gauge: samples the value of an object, e.g. the current size of a JMS queue
  • Meter: tracks the throughput of e.g. a JAX-RS endpoint
  • Histogram: calculates the distribution of a value, e.g. the variance of incoming user agents

For simple use cases, you can make use of annotations and just add them to a method you want to monitor. Each annotation offers attributes to configure tags and metadata for the metric:

@Counted(name = "bookCommentClientInvocations",
         description = "Counting the invocations of the constructor",
         displayName = "bookCommentClientInvoke",
         tags = {"usecase=simple"})
public BookCommentClient() {
}

If your monitoring use case requires a more dynamic configuration, you can programmatically create/update your metrics. For this, you just need to inject the MetricRegistry to your class:

public class BookCommentClient {

    @Inject
    @RegistryType(type = MetricRegistry.Type.APPLICATION)
    private MetricRegistry metricRegistry;

    // JAX-RS client target for the comments API (initialization omitted)
    private WebTarget bookCommentsWebTarget;

    public String getBookCommentByBookId(String bookId) {
        Response response = this.bookCommentsWebTarget.path(bookId).request().get();
        this.metricRegistry.counter("bookCommentApiResponseCode" + response.getStatus()).inc();
        return response.readEntity(JsonObject.class).getString("body");
    }
}

Create a timer metric

If you want to track and sample the duration for a method call, you can make use of timers. You can add them with the @Timer annotation or using the MetricRegistry. A good use case might be tracking the time for a call to an external service:

@Timed(name = "getBookCommentByBookIdDuration")
public String getBookCommentByBookId(String bookId) {
   Response response = this.bookCommentsWebTarget.path(bookId).request().get();
   return response.readEntity(JsonObject.class).getString("body");
}

While using the timer metric type you’ll also get a count of method invocations and mean/max/min/percentile calculations out-of-the-box:

 "de.rieckpil.blog.BookCommentClient.getBookCommentByBookIdDuration": {
        "fiveMinRate": 0.000004243196464475842,
        "max": 3966817891,
        "count": 13,
        "p50": 737218798,
        "p95": 3966817891,
        "p98": 3966817891,
        "p75": 997698383,
        "p99": 3966817891,
        "min": 371079671,
        "fifteenMinRate": 0.005509550587308515,
        "meanRate": 0.003936521878196718,
        "mean": 1041488167.7031761,
        "p999": 3966817891,
        "oneMinRate": 1.1484886591525709e-24,
        "stddev": 971678361.3592016
}

Be aware that the JSON output reports durations in nanoseconds, while the OpenMetrics format reports seconds:

getBookCommentByBookIdDuration_rate_per_second 0.003756880727820997
getBookCommentByBookIdDuration_one_min_rate_per_second 7.980095572816848E-26
getBookCommentByBookIdDuration_five_min_rate_per_second 2.4892551645230856E-6
getBookCommentByBookIdDuration_fifteen_min_rate_per_second 0.004612201440656351
getBookCommentByBookIdDuration_mean_seconds 1.0414881677031762
getBookCommentByBookIdDuration_max_seconds 3.9668178910000003
getBookCommentByBookIdDuration_min_seconds 0.371079671
getBookCommentByBookIdDuration_stddev_seconds 0.9716783613592016
getBookCommentByBookIdDuration_seconds_count 13
getBookCommentByBookIdDuration_seconds{quantile="0.5"} 0.737218798
getBookCommentByBookIdDuration_seconds{quantile="0.75"} 0.997698383
getBookCommentByBookIdDuration_seconds{quantile="0.95"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.98"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.99"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.999"} 3.9668178910000003
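The two representations differ by a constant factor of 10^9. A quick sketch of the conversion, using the mean value from the JSON output above:

```java
public class TimerUnits {

    // The JSON view reports timer durations in nanoseconds,
    // the OpenMetrics view in seconds: a factor of 1e9.
    static double nanosToSeconds(double nanos) {
        return nanos / 1_000_000_000.0;
    }

    public static void main(String[] args) {
        // mean duration from the JSON output: 1041488167.7031761 ns
        System.out.println(nanosToSeconds(1041488167.7031761)); // ≈ 1.0414881677 s
    }
}
```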

Create a counter metric

The next metric type is the simplest one: a counter. With the counter, you can track e.g. the number of invocations of a method:

@Counted
public String doFoo() {
  return "Duke";
}

In one of the previous MicroProfile Metrics versions you were able to decrease the counter, i.e. have a non-monotonic counter. As this caused confusion with the gauge metric type, the current specification defines this metric as a monotonic counter which can only increase.

If you use the programmatic approach, you are also able to define the amount of increase for the counter on each invocation:

public void checkoutItem(String item, Long amount) {
   this.metricRegistry.counter(item + "Count").inc(amount);
   // further business logic
}

Create a metered metric

The meter type is perfect if you want to measure the throughput of something and get the one-, five- and fifteen-minute rates. As an example I’ll monitor the throughput of a JAX-RS endpoint:

@GET
@Metered(name = "getBookCommentForLatestBookRequest", tags = {"spec=JAX-RS", "level=REST"})
@Produces(MediaType.TEXT_PLAIN)
public Response getBookCommentForLatestBookRequest() {
   String latestBookRequestId = bookRequestProcessor.getLatestBookRequestId();
   return Response.ok(this.bookCommentClient.getBookCommentByBookId(latestBookRequestId)).build();
}

After several invocations, the result looks like the following:

"de.rieckpil.blog.BookResource.getBookCommentForLatestBookRequest": {
       "oneMinRate;level=REST;spec=JAX-RS": 1.1363013189791909e-24,
       "fiveMinRate;level=REST;spec=JAX-RS": 0.0000042408326224725166,
       "meanRate;level=REST;spec=JAX-RS": 0.003936520624021342,
       "fifteenMinRate;level=REST;spec=JAX-RS": 0.0055092085268208186,
       "count;level=REST;spec=JAX-RS": 13
}

Create a gauge metric

To monitor a value which can increase and decrease over time, you should use the gauge metric type. Imagine you want to visualize the current disk size or the remaining messages to process in a queue:

@Gauge(unit = "amount")
public Long remainingBookRequestsToProcess() {
  // monitor e.g. current size of a JMS queue
  return ThreadLocalRandom.current().nextLong(0, 1_000_000);
}

The unit attribute of the annotation is required and has to be explicitly configured. There is a MetricUnits class which you can use for common units like seconds or megabytes.

In contrast to all other metrics, the @Gauge annotation can only be used on a single-instance bean (e.g. @ApplicationScoped), as otherwise it would not be clear which instance represents the actual value. There is a @ConcurrentGauge if you need to count parallel invocations.

The outcome is the current value of the gauge, which might increase or decrease over time:

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 990120

// invocation of /metrics 5 minutes later

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 11003

YouTube video for using MicroProfile Metrics 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Metrics in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Metrics,

Phil

The post #WHATIS?: Eclipse MicroProfile Metrics appeared first on rieckpil.


by rieckpil at August 18, 2019 08:03 AM

#WHATIS?: Eclipse MicroProfile Config

by rieckpil at August 17, 2019 12:00 PM

Injecting configuration properties like JDBC URLs, passwords, usernames or hostnames from external sources is a common requirement for every application. Inspired by the twelve-factor app principles you should store configuration in the environment (e.g. OS environment variables or config maps in Kubernetes). These external configuration properties can then be replaced for your different stages (dev/prod/test) with ease. Using MicroProfile Config you can achieve this in a simple and extensible way.

Learn more about the MicroProfile Config specification and how to use it in this blog post.

Specification profile: MicroProfile Config

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Inject configuration properties from external sources (like property files, environment or system variables)

Injecting configuration properties

At several parts of your application, you might want to inject configuration properties to configure for example the base URL of a JAX-RS Client. With MicroProfile Config you can inject a Config object using CDI and fetch a specific property by its key:

public class BasicConfigurationInjection {

    @Inject
    private Config config;

    public void init(@Observes @Initialized(ApplicationScoped.class) Object init) {
        System.out.println(config.getValue("message", String.class));
    }

}

In addition, you can inject a property value to a member variable with the @ConfigProperty annotation and also specify a default value:

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "message", defaultValue = "Hello World")
    private String message;

}

If you don’t specify a defaultValue, and the application can’t find a property value in the configured ConfigSources, your application will throw an error during startup:

The [BackedAnnotatedField] @Inject @ConfigProperty private de.rieckpil.blog.BasicConfigurationInjection.value InjectionPoint dependency was not resolved. Error: java.util.NoSuchElementException: CWMCG0015E: The property not.existing.value was not found in the configuration.
       at com.ibm.ws.microprofile.config.impl.AbstractConfig.getValue(AbstractConfig.java:175)
       at [internal classes]

For a more resilient behaviour, or if the config property is optional, you can wrap the value with Java’s Optional<T> class and check its existence during runtime:

public class BasicConfigurationInjection {
    
    @Inject
    @ConfigProperty(name = "my.app.password")
    private Optional<String> password;
    
}

Furthermore, you can wrap the property in a Provider<T> for more dynamic injection. This ensures that each invocation of Provider.get() resolves the latest value from the underlying Config, so you are able to change it at runtime.

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "my.app.timeout")
    private Provider<Long> timeout;

    public void init(@Observes @Initialized(ApplicationScoped.class) Object init) {
        System.out.println(timeout.get());
    }

}

For configuration property keys you might use the dot notation to prevent conflicts and separate domains: my.app.passwords.twitter.

Configuration sources

The default ConfigSources are the following:

  • System property (default ordinal: 400): passed with -Dmessage=Hello to the application
  • Environment variables (default ordinal: 300): OS variables like export MESSAGE=Hello
  • Property file (default ordinal: 100): file META-INF/microprofile-config.properties

If the MicroProfile Config runtime finds a property in two places (e.g. property file and environment variable), the value from the source with the higher ordinal is chosen.
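The ordinal-based resolution rule can be sketched in plain Java. This is a simplified model for illustration, not the actual implementation; the source names and ordinals follow the defaults listed above:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class OrdinalResolution {

    // Simplified model of a ConfigSource: a key->value map plus an ordinal.
    record Source(int ordinal, Map<String, String> properties) {}

    // Among all sources that define the key, the one with the
    // highest ordinal wins.
    static Optional<String> resolve(String key, List<Source> sources) {
        return sources.stream()
                .filter(s -> s.properties().containsKey(key))
                .max(Comparator.comparingInt(Source::ordinal))
                .map(s -> s.properties().get(key));
    }

    public static void main(String[] args) {
        List<Source> sources = List.of(
                new Source(100, Map.of("message", "from properties file")),
                new Source(300, Map.of("message", "from environment")));
        // The environment variable (ordinal 300) overrides the file (100)
        System.out.println(resolve("message", sources).orElseThrow());
        // prints: from environment
    }
}
```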

These default configuration sources should cover most of the use cases and support writing cloud-native applications. However, if you need any additional custom ConfigSource, you can plug-in your own (e.g. fetch configurations from a database or external service).

To give you an example of a custom ConfigSource, I’m creating a static source which serves just two properties. You just need to implement the ConfigSource interface and its methods:

public class CustomConfigSource implements ConfigSource {

    public static final String CUSTOM_PASSWORD = "CUSTOM_PASSWORD";
    public static final String MESSAGE = "Hello from custom ConfigSource";

    @Override
    public int getOrdinal() {
        return 500;
    }

    @Override
    public Map<String, String> getProperties() {
        Map<String, String> properties = new HashMap<>();
        properties.put("my.app.password", CUSTOM_PASSWORD);
        properties.put("message", MESSAGE);
        return properties;
    }

    @Override
    public String getValue(String key) {
        if (key.equalsIgnoreCase("my.app.password")) {
            return CUSTOM_PASSWORD;
        } else if (key.equalsIgnoreCase("message")) {
            return MESSAGE;
        }
        return null;
    }

    @Override
    public String getName() {
        return "randomConfigSource";
    }
}

To register this new ConfigSource you can either bootstrap a custom Config object with this source:

Config config = ConfigProviderResolver
                .instance()
                .getBuilder()
                .addDefaultSources()
                .withSources(new CustomConfigSource())
                .addDiscoveredConverters()
                .build();

or add the fully-qualified name of the class of the configuration source to the org.eclipse.microprofile.config.spi.ConfigSource file in /src/main/resources/META-INF/services:

de.rieckpil.blog.CustomConfigSource

Using the file approach, the custom source is now part of the ConfigSources by default.

Configuration converters

Internally, the mechanism of MicroProfile Config is purely String-based; type safety is achieved with Converter classes. The specification provides default Converters for converting configuration properties into the known Java types: Integer, Long, Float, Boolean, etc. In addition, there are built-in converters for Arrays, Lists, Optional<T> and Provider<T>.

If the default Converter doesn’t match your requirements and you want e.g. to convert a property into a domain object, you can plug-in a custom Converter<T>.

For example, I’ll convert a config property into a Token instance:

public class Token {

    private String name;
    private String payload;

    public Token(String name, String payload) {
        this.name = name;
        this.payload = payload;
    }

    // getter & setter
}

The custom converter needs to implement the Converter<Token> interface. Its convert method accepts a raw string value and returns the custom domain object, in this case an instance of Token:

public class CustomConfigConverter implements Converter<Token> {

    @Override
    public Token convert(String value) {
        String[] chunks = value.split(",");
        Token result = new Token(chunks[0], chunks[1]);
        return result;
    }
}
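The split logic of the converter can be tried standalone. Note that the sample property value used later has a space after the comma, so trimming each chunk (an addition of mine, not in the original converter) avoids a leading space in the payload:

```java
public class TokenParsing {

    // Standalone sketch of the converter's parsing step:
    // split on commas and trim surrounding whitespace.
    static String[] parse(String value) {
        String[] chunks = value.split(",");
        for (int i = 0; i < chunks.length; i++) {
            chunks[i] = chunks[i].trim();
        }
        return chunks;
    }

    public static void main(String[] args) {
        String[] token = parse("TOKEN_1337, SUPER_SECRET_VALUE");
        System.out.println(token[0]); // prints TOKEN_1337
        System.out.println(token[1]); // prints SUPER_SECRET_VALUE
    }
}
```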

To register this converter you can either build your own Config instance and add the converter manually:

int PRIORITY = 100;

Config config = ConfigProviderResolver
                .instance()
                .getBuilder()
                .addDefaultSources()
                .addDiscoveredConverters()
                .withConverter(Token.class, PRIORITY, new CustomConfigConverter())
                .build();

or you can add the fully-qualified name of the class of the converter to the org.eclipse.microprofile.config.spi.Converter file in /src/main/resources/META-INF/services:

de.rieckpil.blog.CustomConfigConverter

Once your converter is registered, you can start using it:

my.app.token=TOKEN_1337, SUPER_SECRET_VALUE

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "my.app.token")
    private Token token;

}

YouTube video for using MicroProfile Config 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Config in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Config,

Phil

The post #WHATIS?: Eclipse MicroProfile Config appeared first on rieckpil.


by rieckpil at August 17, 2019 12:00 PM

Request Tracing in Payara Platform 5.192

by Andrew Pielage at August 15, 2019 12:22 PM

Request tracing has been a feature in Payara Platform for a number of years now, and over time it has evolved and changed in a number of ways. The crux of what the feature is remains the same, however: tracing requests through various parts of your applications and the Payara Platform to provide details about their travels.


by Andrew Pielage at August 15, 2019 12:22 PM

Building Microservices with Jakarta EE and MicroProfile

by Edwin Derks at August 09, 2019 12:45 PM

Jakarta EE is the successor to the established Java EE platform. What does this actually mean, what are the differences and how does Jakarta EE compare to similar frameworks like Spring? Even more important, can Jakarta EE be a fit for your projects, even if scalability is a requirement? There are certainly possibilities for you there, thanks to the addition of Eclipse MicroProfile. How this all works, even without having to roll out a pure microservices architecture, will be revealed to you in this #JakartaTechTalk.


by Edwin Derks at August 09, 2019 12:45 PM

Deploying MicroProfile Microservices with Tekton

by Niklas Heidloff at August 08, 2019 02:48 PM

This article describes Tekton, an open-source framework for creating CI/CD systems, and explains how to deploy microservices built with Eclipse MicroProfile on Kubernetes and OpenShift.

What is Tekton?

Kubernetes is the de-facto standard for running cloud-native applications. While Kubernetes is very flexible and powerful, deploying applications is sometimes challenging for developers. That’s why several platforms and tools have evolved that aim to make deployments of applications easier, for example Cloud Foundry’s ‘cf push’ experience, OpenShift’s source to image (S2I), various Maven plugins and different CI/CD systems.

Just as Kubernetes has evolved to be the standard for running containers, and Knative is evolving to become the standard for serverless platforms, the goal of Tekton is to become the standard for continuous integration and delivery (CI/CD) platforms.

The biggest companies engaged in this project at this point are Google, CloudBees, IBM and Red Hat. Because of its importance, the project has been split off from Knative, which is focused on scale-to-zero capabilities.

Tekton comes with a set of custom resources to define and run pipelines:

  • Pipeline: Pipelines can contain several tasks and can be triggered by events or manually
  • Task: Tasks can contain multiple steps. Typical tasks are 1. source to image and 2. deploy via kubectl
  • PipelineRun: This resource is used to trigger pipelines and to pass parameters like location of Dockerfiles to pipelines
  • PipelineResource: This resource is used, for example, to pass links to GitHub repos

MicroProfile Microservice Implementation

I’ve created a simple microservice which is available as open source as part of the cloud-native-starter repo.

The microservice contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

Setup of the Tekton Pipeline

I’ve created five yaml files that define the pipeline to deploy the sample authors microservice.

1) The file task-source-to-image.yaml defines how to build the image within the Kubernetes cluster and how to push it to a registry.

For building the image kaniko is used, rather than Docker. For application developers this is almost transparent though. As usual images are defined via Dockerfiles. The only difference I ran into is how access rights are handled. For some reason I couldn’t write the ‘server.xml’ file into the ‘/config’ directory. To fix this, I had to manually assign access rights in the Dockerfile first: ‘RUN chmod 777 /config/’.

The source to image task is the first task in the pipeline and has only one step. The screenshot shows a representation of the task in the Tekton dashboard.

2) The file task-deploy-via-kubectl.yaml contains the second task of the pipeline which essentially only runs kubectl commands to deploy the service. Before this can be done, the template yaml file is changed to contain the full image name for the current user and environment.

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-via-kubectl
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDeploymentYamlFile
        description: The path to the yaml file with Deployment resource to deploy within the git source
      ...
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;authors:1;${inputs.params.imageUrl}:${inputs.params.imageTag};g"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
    - name: run-kubectl-deployment
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
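The sed invocation in the update-yaml step can be tried locally to see what the substitution does. The registry URL below is a hypothetical example, not the repo’s actual value:

```shell
# Stand-in for the update-yaml step: replace the template image name
# "authors:1" with a concrete registry URL and tag.
echo "image: authors:1" | sed -e "s;authors:1;us.icr.io/mynamespace/authors:latest;g"
# prints: image: us.icr.io/mynamespace/authors:latest
```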

3) The file pipeline.yaml basically only defines the order of the two tasks as well as how to pass parameters between the different tasks.

The screenshot shows the pipeline after it has been run. The output of the third and last steps of the second task ‘deploy to cluster’ is displayed.

4) The file resource-git-cloud-native-starter.yaml only contains the address of the GitHub repo.

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: resource-git-cloud-native-starter
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: https://github.com/IBM/cloud-native-starter

5) The file pipeline-account.yaml is necessary to define access rights from Tekton to the container registry.

Here are the complete steps to set up the pipeline on the IBM Cloud Kubernetes service. Except for the login steps, the same instructions should work for Kubernetes services on other clouds as well as for the Kubernetes distribution OpenShift.

First get an IBM lite account. It’s free and there is no time restriction. In order to use the Kubernetes service you need to enter your credit card information, but there is a free Kubernetes cluster. After this, create a new Kubernetes cluster.

To create the pipeline, invoke these commands:

$ git clone https://github.com/ibm/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)
$ REGISTRY_NAMESPACE=<your-namespace>
$ CLUSTER_NAME=<your-cluster-name>
$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a cloud.ibm.com -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export <output-from-previous-command>
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ kubectl apply -f deployment/tekton/resource-git-cloud-native-starter.yaml 
$ kubectl apply -f deployment/tekton/task-source-to-image.yaml 
$ kubectl apply -f deployment/tekton/task-deploy-via-kubectl.yaml 
$ kubectl apply -f deployment/tekton/pipeline.yaml
$ ibmcloud iam api-key-create tekton -d "tekton" --file tekton.json
$ cat tekton.json | grep apikey 
$ kubectl create secret generic ibm-cr-push-secret --type="kubernetes.io/basic-auth" --from-literal=username=iamapikey --from-literal=password=<your-apikey>
$ kubectl annotate secret ibm-cr-push-secret tekton.dev/docker-0=us.icr.io
$ kubectl apply -f deployment/tekton/pipeline-account.yaml

Execute the Tekton Pipeline

In order to invoke the pipeline, a sixth yaml file pipeline-run-template.yaml is used. As stated above, this file needs to be modified first to contain the exact image name.

The pipeline-run resource is used to define input parameters like the Git repository, location of the Dockerfile, name of the image, etc.

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: pipeline-run-cns-authors-
spec:
  pipelineRef:
    name: pipeline
  resources:
    - name: git-source
      resourceRef:
        name: resource-git-cloud-native-starter
  params:
    - name: pathToContext
      value: "authors-java-jee"
    - name: pathToDeploymentYamlFile
      value: "deployment/deployment.yaml"
    - name: pathToServiceYamlFile
      value: "deployment/service.yaml"
    - name: imageUrl
      value: <ip:port>/<namespace>/authors
    - name: imageTag
      value: "1"
    - name: pathToDockerFile
      value: "DockerfileTekton"
  trigger:
    type: manual
  serviceAccount: pipeline-account

Invoke the following commands to trigger the pipeline and to test the authors service:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment/tekton
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" pipeline-run-template.yaml > pipeline-run-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" pipeline-run-template.yaml.1 > pipeline-run-template.yaml.2
$ sed "s+<tag>+1+g" pipeline-run-template.yaml.2 > pipeline-run.yaml
$ cd ${ROOT_FOLDER}/authors-java-jee
$ kubectl create -f deployment/tekton/pipeline-run.yaml
$ kubectl describe pipelinerun pipeline-run-cns-authors-<output-from-previous-command>
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

After running the pipeline you’ll see two Tekton pods and one authors pod in the Kubernetes dashboard.

Try out this sample yourself!

The post Deploying MicroProfile Microservices with Tekton appeared first on Niklas Heidloff.


by Niklas Heidloff at August 08, 2019 02:48 PM

[EN] Using ConfigProperty in Jakarta MicroProfile

by Altuğ Bilgin Altıntaş at August 08, 2019 12:28 PM

We are trying to convert a small Spring app into Jakarta EE. In that process, I would like to share my experience with you in a short format. Let’s say you have a property file in resources.
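The screenshot of the property file did not survive here; judging from the keys read below, it presumably contains something like the following (the URL values are made up for illustration):

```properties
newfromconnectionscontroller.connectionsUrl=http://localhost:8080/connections
newfromconnectionscontroller.postsUrl=http://localhost:8080/posts
```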

In Spring Framework you can read and assign the key values in properties files like this :

@Value("${newfromconnectionscontroller.connectionsUrl}")
private String connectionsUrl;

@Value("${newfromconnectionscontroller.postsUrl}")
private String postsUrl;

What is the equivalent annotation in Jakarta EE? Here is the answer:

@Inject
@ConfigProperty(name = "newfromconnectionscontroller.connectionsUrl")
private String connectionsUrl;

@Inject
@ConfigProperty(name = "newfromconnectionscontroller.postsUrl")
private String postsUrl;

But you have to add the org.eclipse.microprofile dependency to your pom.xml file:

<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>1.3</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

Bye !


by Altuğ Bilgin Altıntaş at August 08, 2019 12:28 PM

Helidon brings MicroProfile 2.2+ support

by dmitrykornilov at August 08, 2019 10:06 AM

We are pleased to announce the 1.2.0 release of Helidon. This release adds support for MicroProfile 2.2 and includes additional bug and performance fixes. Let’s take a closer look at what’s in the release.

MicroProfile

MicroProfile is now a de-facto standard for Java cloud-native APIs. One of the main goals of project Helidon is to deliver support for the latest MicroProfile APIs. The Helidon MicroProfile implementation is called Helidon MP and along with the reactive, non-blocking framework called Helidon SE it builds the core of Helidon.

We have been adding support for newer MicroProfile specifications one by one during the last few releases. The 1.2.0 release brings MicroProfile REST Client 1.2.1 and Open Tracing 1.3. With these pieces in place we now have full MicroProfile 2.2 support.

The full list of supported MicroProfile and Java EE APIs is shown in this image:

As you can see, we added support for two more Java EE APIs: JPA (Persistence) and JTA (Transaction). It’s in early access at the moment and you should consider it a preview. We are still working on it, and the implementation and configuration are subject to change.

Here are some examples of using new APIs added in Helidon 1.2.0.

MicroProfile REST Client sample

Register a rest client interface (can be the same one that is implemented by the JAX-RS resource). Note that the URI can be overridden using configuration.

@RegisterRestClient(baseUri = "http://localhost:8081/greet")
public interface GreetResource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    JsonObject getDefaultMessage();
    
    @Path("/{name}")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    JsonObject getMessage(@PathParam("name") String name);
}
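As noted above, the base URI can be overridden in configuration. MicroProfile REST Client does this via a config key built from the interface's fully qualified name plus `/mp-rest/url`; assuming the interface lives in a package such as io.helidon.examples (a hypothetical name for this sketch), the entry would look like:

```properties
# Override the @RegisterRestClient baseUri without recompiling
io.helidon.examples.GreetResource/mp-rest/url=http://localhost:8081/greet
```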

Declare the rest client in a class using it (such as a JAX-RS resource in a different microservice):

@Inject
@RestClient
private GreetResource greetService;

And simply use the field to invoke the remote service (this example proxies the request to the remote service):

@GET
@Produces(MediaType.APPLICATION_JSON)
public JsonObject getDefaultMessage() {
    return greetService.getDefaultMessage();
}

Health Check 2.0 sample

Health Check 2.0 has two types of checks (in previous versions a single type existed):

  • Readiness — used by clients (such as Kubernetes readiness check) to check if the service has started and can be used
  • Liveness — used by clients (such as Kubernetes liveness checks) to check if the service is still up and running

Simply annotate an application-scoped bean with the appropriate annotation (@Readiness or @Liveness) to create a health check:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class GreetHealthcheck implements HealthCheck {
    private GreetingProvider provider;
  
    @Inject
    public GreetHealthcheck(GreetingProvider provider) {
        this.provider = provider;
    }

    @Override
    public HealthCheckResponse call() {
        String message = provider.getMessage();
        return HealthCheckResponse.named("greeting")
            .state("Hello".equals(message))
            .withData("greeting", message)
            .build();
    }
}
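With this check registered, the /health endpoint defined by the specification returns an aggregated JSON document in the Health 2.0 response format. Assuming the configured greeting is currently "Hello", the payload would look roughly like this:

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "greeting",
      "status": "UP",
      "data": {
        "greeting": "Hello"
      }
    }
  ]
}
```

Note that Health 2.0 reports `status` at the top level and per check, replacing the `outcome`/`state` fields of Health 1.0.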

Open Tracing Sample

MP Open Tracing adds a single annotation: @Traced, to add tracing to CDI beans.

Tracing of JAX-RS resources is automated (and can be disabled with the @Traced annotation). Here is an example of the bean used in the health check sample (the method getMessage is traced):

@ApplicationScoped
public class GreetingProvider {
    private final AtomicReference<String> message = new AtomicReference<>();
    
    /**
     * Create a new greeting provider, reading the message 
     * from configuration.
     *
     * @param message greeting to use
     */
    @Inject
    public GreetingProvider(
        @ConfigProperty(name = "app.greeting") String message) {
        this.message.set(message);
    }
    
    @Traced(operationName = "GreetingProvider.getMessage")
    String getMessage() {
        return message.get();
    }
    
    ...
}

Other Enhancements

In addition to MicroProfile 2.2, Helidon 1.2.0 contains a couple other enhancements:

  • HTTP Access Log support for Helidon MP and SE.
  • Early Access: Oracle Universal Connection Pool support: this lets you configure and inject the Oracle UCP JDBC driver as a DataSource in your Helidon MP application.

More to Come

With MicroProfile 2.2 support, Helidon has caught up with most of the other main MicroProfile implementations. We are now pushing Helidon towards MicroProfile 3.0, and we’ve already taken the first steps. That’s why we put a plus after 2.2 in the title. We already have support for Health Check 2.0 (and we’ll support it in a backwards compatible way). That leaves Metrics 2.0 and REST Client 1.3, and we are working hard to deliver them next month.

Stay tuned!

Thanks to Tomas Langer for helping with samples and to Joe Di Pol for great conversational style.


by dmitrykornilov at August 08, 2019 10:06 AM

After the Cloud Native meetup

by Hüseyin Akdogan at August 08, 2019 06:00 AM

On Tuesday (06.08.2019), together with Altuğ and Aykut Bulgu from Red Hat, we held an online meetup on Cloud Native. Cloud Native is a current and popular concept. As a developer, you have most likely come across Cloud Native in many articles, at conferences you attended, in presentations you watched and in podcasts you listened to. Still, it is fair to say that for a considerable number of people a cloud of uncertainty still hangs over the concept.

For this reason, we started with the question of what Cloud Native actually is and asked how it can be defined. Aykut said that, above all, he sees Cloud Native as an adjective, and pointed out that at its core the topic is related to Agile/agility. He noted that today we talk about the agility of technologies rather than of processes, that Cloud Native brings the ability to respond quickly to change, and that this increases productivity and minimizes technological uncertainty.

Emphasizing that Cloud Native reduces costs, Aykut Bulgu used a conceptualization I find very important: he spoke of technology waste, and described a general attitude he observes in the market as the desire to run before crawling. Asked what kinds of applications or frameworks are best suited for developing Cloud Native applications, he said that Cloud Native applications can be built with platforms, frameworks and tools that follow the single responsibility principle, can quickly spin up the services they depend on, and are stable, observable (monitoring) and fail-safe; what matters is meeting these requirements rather than the specific tool. He also underlined the importance of Kubernetes and OpenShift support.

Finally, we asked Aykut which resources/references he would recommend to those who want to learn Cloud Native. He shared the following references:

We thank Aykut Bulgu for his time and his valuable insights. On this occasion, I would like to mention that our podcast channel will launch soon and that this meetup, as well as our future online meetups, will be available on that channel. We will announce it on our social media accounts when the channel goes live.

See you at another event…


by Hüseyin Akdogan at August 08, 2019 06:00 AM

Update for Jakarta EE community: August 2019

by Tanja Obradovic at August 06, 2019 03:55 PM

We hope you’re enjoying the Jakarta EE monthly email update, which seeks to highlight news from various committee meetings related to this platform. There’s a lot happening in the Jakarta EE ecosystem so if you want to get a richer insight into the work that has been invested in Jakarta EE so far and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in July: 

EclipseCon Europe 2019: Record-high talk submissions

With your help, EclipseCon Europe 2019 reported record-high talk submissions. Thank you all for proposing so many interesting talks! This is not the only record, though: it seems that the track with the biggest number of submissions is Cloud Native Java so if you want to learn how to develop applications and cloud native microservices using Java, EclipseCon Europe 2019 is the place to be. The program will be announced the week of August 5th. 

Speaking of EclipseCon Europe, you don’t want to miss the Community Day happening on October 21; this day is jam-packed with peer-to-peer interaction and community-organized meetings that are ideal for Eclipse Working Groups, Eclipse projects, and similar groups that form the Eclipse community. Plus, there’s also a Community Evening planned for you, where like-minded attendees can share ideas, experiences and have fun! That said, in order to make this event a success, we need your help. What would you like the Community Day & Evening to be all about? Check out this wiki first, then make sure to go over what we did last year. And don’t forget to register for the Community Day and/or Community Evening! 

EclipseCon Europe will take place in Ludwigsburg, Germany on October 21 - 24, 2019. 

JakartaOne Livestream: Registration is open!

Given the huge interest in the Cloud Native Java track at EclipseCon Europe 2019, it’s safe to say that JakartaOne Livestream, taking place on September 10, is the fall virtual conference to attend, spanning multiple time zones. Plus, the date coincides with the highly anticipated Jakarta EE 8 release, so make sure to save the date; you’re in for a treat!

We hope you’ll attend this all-day virtual conference as it unfolds; this way, you get the chance to interact with renowned speakers, participate in interesting interactions and have all your questions answered during the interactive sessions. Registration is now open so make sure to secure your spot at JakartaOne Livestream! 

No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more. The program will be published soon. Stay tuned!

Jakarta EE 8 release

On September 10, join us in celebrating the Jakarta EE 8 release at JakartaOne Livestream!  

That being said, head over to GitHub to keep track of all the Eclipse EE4J projects. Noticeable progress has been made on Final Specifications Releases, Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so make sure to check out the progress and contribute!  

Jakarta EE Trademark guidelines: Updates 

Version 1.1 of the Jakarta EE Trademark Guidelines is out! This document supplements the Eclipse Foundation Guidelines for Eclipse Logos & Trademarks Policy to address the permitted usage of the Jakarta EE Marks, including the following names and/or logos: 

  • Jakarta EE

  • Jakarta EE Working Group

  • Jakarta EE Member 

  • Jakarta EE Compatible 

The full guidelines on the usage of the Jakarta EE Marks are described in the Jakarta EE Brand Usage Handbook.

EFSP: Updates

Version 1.2 of the Eclipse Foundation Specification Process was approved on June 30, 2019. The EFSP leverages and augments the Eclipse Development Process (EDP), which defines important concepts, including the Open Source Rules of Engagement, the organizational framework for open source projects and teams, releases, reviews, and more.

JESP: Updates

Jakarta EE Specification Process v1.2 was approved on July 16, 2019. The JESP has undergone a few modifications, including 

  • changed ballot periods for the progress and release (including service releases) reviews from 30 to 14 days

  • the Jakarta EE Specification Committee now adopts the EFSP v1.2 as the Jakarta EE Specification Process

TCK process finalized 

The TCK process has been finalized. The document sheds light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them and more.     

This document defines:

  • Materials a TCK MUST possess to be considered suitable for delivering portability

  • Process for challenging tests and how these challenges are resolved

  • Means of excluding released TCK tests from certification requirements

  • Policy on improving TCK tests for released specifications

  • Process for self-certification

Jakarta EE Community Update: July video call

The most recent Jakarta EE Community Update meeting took place in mid-July; the conversation included topics such as Jakarta EE 8 release, status on progress and plans, Jakarta EE TCK process update, brief update re. transitioning from javax namespace to the jakarta namespace, as well as details about JakartaOne Livestream and EclipseCon Europe 2019.   

The materials used in the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Please make sure to join us for the August 14 community call.

Cloud Native Java eBook: Coming soon!

What does cloud native Java really mean to developers? What does the cloud native Java future look like? Where is Jakarta EE headed? Which technologies should be part of your toolkit for developing cloud native Java applications? 

All these questions (and more!) will be answered soon; we’re developing a downloadable eBook on the community's definition and vision for cloud native Java, which will become available shortly before Jakarta EE 8 is released. Stay tuned!

Eclipse Newsletter: Jakarta EE edition  

The Jakarta community has made great progress this year and the culmination of all this hard work is the Jakarta EE 8 release, which will be celebrated on September 10 at JakartaOne Livestream.

In honor of this milestone, the next issue of the Eclipse Newsletter will focus entirely on Jakarta EE 8. If you’re not subscribed to the Eclipse Newsletter, make sure to do that before the Jakarta EE issue is released - on August 22! 

Meet the Jakarta EE Working Group Committee Members 

It takes a village to create a successful project and the Jakarta EE Working Group is no different. We’d like to honor all those who have demonstrated their commitment to Jakarta EE by presenting the members of all the committees that work together toward a common goal: steer Jakarta EE toward its exciting future. As a reminder, Strategic members appoint their representatives, while the representatives for Participant and Committer members were elected in June.

The list of all Committee Members can be found here.

Steering Committee 

Will Lyons (chair) - Oracle, Ed Bratt - alternate

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Ian Robinson - alternate

Steve Millidge - Payara, Mike Croft - alternate

Mark Little - Red Hat, Scott Stark - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Martijn Verburg - London Java Community - Elected Participant Representative

Ivar Grimstad - Elected Committer Representative

Specifications Committee 

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Kevin Sutter - alternate

Bill Shannon - Oracle, Ed Bratt - alternate

Steve Millidge - Payara, Arjan Tijms - alternate

Scott Stark - Red Hat, Mark Little - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Ivar Grimstad - PMC Representative

Alex Theedom - London Java Community - Elected Participant Representative

Werner Keil - Elected Committer Representative

Paul Buck - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Marketing and Brand Committee

Michael DeNicola - Fujitsu, Kenji Kazumura - alternate 

Dan Bandera - IBM, Neil Patterson - alternate

Ed Bratt - Oracle, David Delabassee - alternate

Dominika Tasarz - Payara, Jadon Orglepp - alternate

Cesar Saavedra - Red Hat, Paul Hinz - alternate

David Blevins - Tomitribe, Jonathan Gallimore - alternate

Theresa Nguyen - Microsoft - Elected Participant Representative

VACANT - Elected Committer Representative

Thabang Mashologu - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Jakarta EE presence at events and conferences: July overview 

Cloud native was the talk of the town in July. Conferences such as JCrete, J4K, and Java Forum Stuttgart, to name a few, were all about open source and cloud native and how to tap into this key approach for IT modernization success. The Eclipse Foundation and the Jakarta EE Working Group members were there to take the pulse of the community to better understand the adoption of cloud native technologies.

For example, IBM’s Graham Charters and Steve Poole featured Jakarta EE and Eclipse MicroProfile in demonstrations at the IBM Booth at OSCON; Open Source Summit 2019 participants should expect another round of Jakarta EE and Eclipse MicroProfile demonstrations from IBM representatives. 



Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. Don’t forget to follow us on Twitter to get the latest news and updates!

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


Links you may want to bookmark!

The Jakarta EE community promises to be a very active one, especially given the various channels that can be used to stay up-to-date with all the latest and greatest. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan, which includes


Note: All the upcoming Jakarta Tech Talks can be found in the calendar. If you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud native Java, please bookmark the Jakarta EE Community Calendar.



by Tanja Obradovic at August 06, 2019 03:55 PM

Payara Platform 2019 Roadmap - Update

by Steve Millidge at August 06, 2019 10:02 AM

It's been 6 months since I posted our last roadmap update. The team has been working hard to deliver what we promised at the beginning of the year, releasing both our 191 and 192 releases since then. I therefore thought it was a good time to reflect on what we've delivered so far and what we still have to do.

 


by Steve Millidge at August 06, 2019 10:02 AM

#HOWTO: Write Java EE applications with Kotlin

by rieckpil at August 04, 2019 06:33 PM

The precise and clean style of writing code with Kotlin is tempting as a Java developer. In addition, switching to Kotlin with a Java background is rather simple. But what about using Kotlin for an existing Java EE application or start using it for a new project? Read this blog post to get a first impression of what it takes to use Kotlin alongside Java EE in your projects.

The following technologies are used in this example: Kotlin 1.3.41, Java 11, Java EE 8, MicroProfile 2.2 running on a dockerized Open Liberty 19.0.0.7 with Eclipse OpenJ9.

Maven project setup for Kotlin

The classic Java EE Maven pom.xml needs just two adjustments: the kotlin-stdlib dependency and the kotlin-maven-plugin:

(The full project is available at https://github.com/rieckpil/blog-tutorials/tree/master/java-ee-with-kotlin.)

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.rieckpil.blog</groupId>
    <artifactId>java-ee-with-kotlin</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>

        <kotlin.version>1.3.41</kotlin.version>
        <kotlin.compiler.incremental>true</kotlin.compiler.incremental>

        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>2.2</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-stdlib</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>java-ee-with-kotlin</finalName>

        <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
        <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>

        <plugins>
            <plugin>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-maven-plugin</artifactId>
                <version>${kotlin.version}</version>
                <configuration>
                    <compilerPlugins>
                        <plugin>jpa</plugin>
                    </compilerPlugins>
                    <jvmTarget>11</jvmTarget>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>org.jetbrains.kotlin</groupId>
                        <artifactId>kotlin-maven-noarg</artifactId>
                        <version>${kotlin.version}</version>
                    </dependency>
                </dependencies>
                <executions>
                    <execution>
                        <id>compile</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>test-compile</id>
                        <phase>test-compile</phase>
                        <goals>
                            <goal>test-compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

For JPA we have to configure the kotlin-maven-noarg plugin to add parameterless constructors to our JPA entities. In addition, I’m activating the incremental Kotlin compiler in the properties section to decrease repetitive build times, which is optional.

Pitfalls for Java EE using Kotlin

The two most common pitfalls for Java EE with Kotlin are non-final classes & methods and parameterless constructors. Whilst Kotlin makes classes final by default, we need non-final classes for some specifications to work. This is required, for example, to create proxies and enrich the functionality of public methods (e.g. EJB’s transaction management).

With the following sample project, you’ll see where and how to solve these requirements. The project provides a REST interface to retrieve books. The JAX-RS resource class looks like the following:

@Path("books")
@Produces(MediaType.APPLICATION_JSON)
class BookResource {

    @Inject
    private lateinit var bookService: BookService

    @GET
    fun getAllBooks(): Response = Response.ok(bookService.getAllBooks()).build()

    @GET
    @Path("/{id}")
    fun getBookById(@PathParam("id") id: Long): Response {
        var book: Book? = 
           bookService.getBookById(id) ?: 
           return Response.status(Response.Status.NOT_FOUND).build()
        return Response.ok(book).build()
    }
}

The first thing to pay attention to is the injection of other beans. Here I’m injecting the BookService to retrieve both all books and a book by its id. To avoid cumbersome null-checks, we have to use lateinit to tell the Kotlin compiler that this instance is injected at runtime. Normally, properties declared as having a non-null type must be initialized in the constructor in Kotlin.

Next, I’m using Book? as the return type of the method to get a book by its id. This is required, as there might be no book in the database for the provided id. The Elvis operator ?: makes the required null-check and returns an HTTP 404 status in case of null.

The corresponding BookService is a singleton EJB and delegates access to the database using the EntityManager:

@Singleton
@Startup
open class BookService {

    @PersistenceContext
    private lateinit var entityManager: EntityManager

    @PostConstruct
    open fun setUp() {
        println("Initializing books ...")
        entityManager.persist(Book(null, "Java EE 8", "Duke"))
        entityManager.persist(Book(null, "Jakarta EE 8", "Duke"))
        entityManager.persist(Book(null, "MicroProfile 2.2", "Duke"))
        println("... finished initializing books")
    }

    open fun getAllBooks(): List<Book> = entityManager
         .createQuery("SELECT b FROM Book b", Book::class.java).resultList
    open fun getBookById(id: Long): Book? = entityManager.find(Book::class.java, id)
}

The open keyword on class- and method-level is required to make both non-final.

Writing JPA entities

For writing JPA entities we can utilize Kotlin’s data classes. The only thing to keep in mind is that we need a public no-arg constructor for JPA to work. Enabling the JPA plugin in the Maven build section makes sure we get a parameterless constructor for all our @Entity classes and don’t have to worry:

@Entity
@Table(name = "books")
data class Book(
        @Id
        @GeneratedValue
        var id: Long?,

        @Column(nullable = false, unique = true)
        val title: String,

        @Column(nullable = false)
        val author: String) {
}

Final thoughts for Java EE with Kotlin

Converting an existing Java EE application written in Java to Kotlin might not be that simple. You have to make sure that dependency injection, interceptors in general and your JPA entities are working. With this blog post, you might get a first impression of where to pay attention when using Kotlin. If the precise and elegant style of writing source code with Kotlin outweighs the additional pitfalls for you, then give it a try!

You can find the whole example on GitHub.

Further resources on this topic:

  • Kotlin and Java EE article on DZone
  • Working with Kotlin and JPA on Baeldung
  • Kotlin EE: Boost your Productivity by Marcus Fihlon talk

Have fun using Kotlin for your Java EE project,

Phil

The post #HOWTO: Write Java EE applications with Kotlin appeared first on rieckpil.


by rieckpil at August 04, 2019 06:33 PM

The Payara Monthly Catch for July 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at August 01, 2019 02:09 PM

Another great month in the bag. There were awards, conferences, out-of-incubation releases, competitions, surveys and lots more going on. Below you will find a curated list of some of the most interesting news, articles and videos from this month. Can’t wait until the end of the month? Then visit our Twitter page, where we post all these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at August 01, 2019 02:09 PM

A preview of MicroProfile GraphQL

by Jean-François James at July 25, 2019 01:05 PM

This blog post is related to my GitHub project mpql-preview. Objectives This project aims at providing a preview of the future Eclipse MicroProfile GraphQL specification. If you don’t know what MicroProfile GraphQL is about, please have a look on my previous post and on  GitHub. In short, it aims at making GraphQL a first-class citizen […]

by Jean-François James at July 25, 2019 01:05 PM

Understanding the Current Java Moment

by Rhuan Henrique Rocha at July 21, 2019 05:48 PM

The Java platform has been one of the most widely used platforms in recent years and has the largest ecosystem in the technology world. It lets us develop applications for many targets, such as Windows, Linux, embedded systems and mobile. However, Java has received plenty of criticism: Java is fat, Java takes a lot of memory, Java is verbose. In fact, Java was created to solve big problems, not small ones, although it can be used for small problems too. You can solve small problems with Java, but you see its real benefit when you have a big problem, especially in enterprise environments. If you compare a hello-world application written in Java to one written in another language, you may see higher memory use and more lines of code. But when you build a large application that integrates with other applications and resources, that is where the real benefit of the Java platform shows.

Java is great for the enterprise environment because of its power to solve complex problems and its multi-platform nature, but also because it offers more security to the business by promising backward compatibility and solutions based on specifications. The business has a stronger guarantee that a Java update won’t break its systems, and gets solutions decoupled from vendors, allowing it to change vendors when needed.

Java has a big ecosystem, with emphasis on Java EE (now Jakarta EE), which provides several specifications to solve common problems in the enterprise environment. Some of these specifications are EJB, JPA, JMS, JAX-RS and JAX-WS. Furthermore, we have Spring, which energized the Java ecosystem; although it is not based on specifications, it uses some specifications from Java EE.

Cloud Computing and Microservices

Cloud computing is a concept that has grown steadily over the years and has changed how developers architect, write and think about applications. Cloud computing is a set of principles and approaches that aims to provide computing resources as a service (IaaS, PaaS, SaaS). With this, we can use only the resources an application actually needs and scale when needed, optimizing resource usage and consequently cost. This is fantastic, but to benefit from cloud computing, applications have to follow this approach. The microservice architecture emerged as a good way to architect and think about applications for cloud computing (cloud-native applications).

The microservice architecture is an approach that breaks a big application (a monolith) into many micro-applications or microservices, generally split by business domain. With this, we can scale only the business domains that really need it instead of scaling everything; we gain fault tolerance, because if one business domain fails the others do not fall with it; and we gain resilience, because a failed microservice can be restored. The microservice architecture therefore lets us exploit the benefits of cloud computing and optimize the use of computing resources.

Java and Cloud Computing

As said above, “Java was created to solve big problems, not small problems, although it could be used to solve small problems”. But the cloud-native approach breaks a big, complex application into many small, simpler applications (such as microservices). Furthermore, the life cycle of an application is much shorter in a microservice architecture than in a monolith. Besides that, in the cloud-native approach the complexity is not inside the applications but in the communication between them (their integrations), and in their management and monitoring. In other words, the complexity lies in how these applications (microservices) interact with each other and in how quickly we can identify a problem in one of them. Given this, the Java platform and its ecosystem had several gaps to close, described below:

Fat JVM: Many Java applications started with libraries that were never used, and the JVM loaded many things the application didn’t need. That is acceptable for a big application solving complex problems, but for small applications (like microservices) it is not so good.

JVM JIT optimization: The JVM’s JIT compiler optimizes a running application over time; the longer an application lives, the more optimization it receives. The JVM is therefore better suited to applications that run for a long time than to short-lived ones. In cloud computing, applications are born and die all the time, and their life cycles are short.

Java applications have a long boot time: Many Java applications boot slowly compared to applications written in other languages, because they commonly do a lot of work at startup.

Java generates fat packages (war, ear, jar): Many Java applications have a large package size, mainly when they bundle libraries (in the lib folder). This can increase delivery time and degrade the delivery process.

Java EE had no standard solutions for microservices: Java EE has many important specs for enterprise problems, but it had no specs for the problems introduced by the microservice architecture and cloud computing.

Updates to Java and Java EE were slow: Java and Java EE had a slow process for updating existing features and creating new ones. That is bad, because the enterprise environment changes continuously and presents new challenges all the time.

In response, the Java ecosystem produced several changes and initiatives to close each gap opened by cloud computing, and to put Java back on top.

Java On Top Again

The Java platform is robust and offers solutions for many things, but to me that is not its best trait. The best of the Java world is the community, which is very strong and hard-working. In a short time, the Java community launched many actions and initiatives that pushed the Java platform toward the cloud computing approach, bringing Java ever closer to the cloud-native model. Many people call this Cloud Native Java. The principal actions and initiatives in the Java ecosystem are: Jakarta EE, MicroProfile, the new Java release cycle, improvements to the Java language, improvements to the JVM, and Quarkus. I’ll explain how each of these has impacted the Java ecosystem.

Jakarta EE: Java EE was one of the most important projects in the Java ecosystem, providing standard solutions to enterprise problems. The project was migrated from Oracle to the Eclipse Foundation, underwent many changes to its working structure, and is now called Jakarta EE.

Jakarta EE is an umbrella project that promotes standard solutions (specifications) for the enterprise world, with a new process for approving new features and evolving existing ones. With this, Jakarta EE can evolve faster and keep improving its enterprise solutions. This matters because enterprises nowadays change very fast and face new challenges all the time; as technology is a tool for innovation, it needs to be able to change quickly when needed.

MicroProfile: Java EE, and now Jakarta EE, has many good solutions for the enterprise world, but it does not have standard solutions for many of the problems of the microservice architecture. That does not mean you cannot implement such solutions, but you would need to implement them yourself and maintain them yourself.

MicroProfile is an umbrella project that promotes standard solutions (specifications) for microservice-architecture problems. MicroProfile is compatible with Java EE and lets developers build applications with a microservice architecture in an easier way. Some of these specifications are MicroProfile Config, MicroProfile OpenTracing, MicroProfile Rest Client and MicroProfile Fault Tolerance.

Java release cycle: The Java release cycle changed, and nowadays a new Java version is released every six months. It is an excellent change because it lets the Java platform respond quickly to new challenges and evolve faster.

Improvements to the Java language: Java has had several changes that improved its features, such as its functional programming support. Besides that, the Jigsaw project introduced modularity to Java, allowing us to create thinner Java applications that can be scaled more easily.

Improvements to the JVM: The JVM had some issues when used in containers, mainly around measuring memory and CPU. That was bad because containers are central to cloud computing: with containers we deliver not only the application but the whole environment with its dependencies.

Since Java 9, the JVM has received many updates that improved how it interacts with containers, bringing it closer to the needs of cloud computing.
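As an illustration of these container-awareness features (an assumption-flagged sketch: the UseContainerSupport and MaxRAMPercentage flags appeared in JDK 10 and were backported to 8u191, and on JDK 10+ container support is enabled by default; the percentage is an example value, and application.jar is a placeholder):

```shell
# Size the heap relative to the container's cgroup memory limit
# instead of the host's physical RAM.
java -XX:+UseContainerSupport \
     -XX:MaxRAMPercentage=75.0 \
     -jar application.jar
```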

Quarkus: Quarkus is the latest addition to the Java ecosystem and has been a hot topic. Quarkus is a Kubernetes-native Java stack tailored to GraalVM and OpenJDK HotSpot that lets developers write cloud applications using best-of-breed Java libraries and standards. With Quarkus we can write applications with very fast boot times, incredibly low RSS memory usage, and an amazing set of tools that make developers’ lives easier.

Quarkus is really an amazing project that points to a new future for the Java platform. It follows a container-first philosophy and uses compile-time boot techniques to speed up Java applications. If you want to know more about Quarkus, click here.

All of these projects and initiatives bring Java back into focus and start a new era for the Java platform. With this, Java enters cloud computing with its specification-driven way of working, promoting standardized solutions for the cloud. That is great for Java and for cloud computing, because from these standardized solutions many enterprise products will emerge, backed by many companies, making their adoption safer.


by Rhuan Henrique Rocha at July 21, 2019 05:48 PM

Using Jakarta Security on Tomcat and the Payara Platform

by Arjan Tijms at July 18, 2019 11:06 AM

Java EE Security API is one of the new APIs in Java EE 8. With Java EE currently being transferred and rebranded to Jakarta EE, this API will soon be rebranded to Jakarta Security, which is the term we'll use in this article. Jakarta Security is part of the Jakarta APIs, included and active in the Payara Platform by default with no configuration required in order to use it. With some effort, Jakarta Security can be used with Tomcat, as well.  


by Arjan Tijms at July 18, 2019 11:06 AM

Update for Jakarta EE community: July 2019

by Tanja Obradovic at July 15, 2019 04:19 PM

Two months ago, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. There are a few ways to get richer insight into the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in June: 

JakartaOne LiveStream: All eyes on Cloud Native Java

Are you interested in the current state and future of Jakarta EE? Would you like to explore other related technologies that should be part of your toolkit for developing Cloud Native Java applications? Then JakartaOne Livestream is for you! No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more.  

You should join the JakartaOne Livestream speaker lineup if you want to 

  • Show the world how you and/or your organization are using Jakarta EE technologies to develop cutting-edge solutions. 

  • Demonstrate how Jakarta EE and Java EE features can be used today to develop cloud native solutions. 

This one-day virtual conference, which takes place September 10, 2019, is currently accepting submissions from speakers so if you have an idea for a talk that will educate and inspire the Jakarta community, now’s the time to submit your pitch!  The deadline for submissions is today, July 15, 2019. 

Note: All the JakartaOne Livestream sessions and keynotes are chosen by an independent program committee made up of volunteers from the Jakarta EE and Cloud Native Java community: Reza Rahman, who is also the program chair, Adam Bien, Arun Gupta, Ivar Grimstad, Josh Juneau, and Tanja Obradovic.

*As this inaugural event is a one-day event only, the number of accepted sessions is limited. Submit your talk now!  

All the talks will be recorded and made available later on the Jakarta EE website, but make sure to attend the virtual conference in order to interact directly with the speakers. We do hope you will attend “live”, as it will lead to more questions and more interactive sessions.


Jakarta EE 8 release and progress

Are you keeping track of the Eclipse EE4J projects on GitHub? Have you noticed that the Jakarta EE Platform specifications are now available in GitHub? If not, please do! Also, please check out the creation and progress of the specification projects, which will be used to follow the process of converting the "Eclipse Project for ..." projects into specification projects, setting them up for specification work as defined by the Eclipse Foundation Specification Process, and the Specification Document Names.

Noticeable progress has been made on Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so head over to GitHub to discover all the improvements and all the bits and pieces that have already been resolved.  

Work on the TCK process is in progress, with Scott Stark, Vice President of Architecture at Red Hat, leading the effort. The TCK process document v 1.0 is expected to be completed in the very near future. The document will shed light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them and more. 

Jakarta EE 8 is expected to be released on September 10, 2019, just in time for JakartaOne Livestream.  

Javax package namespace discussions

The specification committee has put out two approaches regarding restrictions on javax package namespace use for the community to consider, namely Big Bang and Incremental. 

Based on the input we got from the community and discussions within the Working Group, the specification committee has not yet reached consensus on the approach to be taken until work on binary compatibility is further explored. With that in mind, the Working Group members will invest time in the technical approach for binary compatibility and then propose and decide on the option that is best for customers, vendors, and developers.

Please refer to David Blevins’ presentation from the Jakarta EE Update call on June 12th, 2019.

If you want to dive deeper into this topic, David Blevins has written a helpful analysis of the javax package namespace matter, in which he answers questions like "If we rename javax.servlet, what else has to be renamed?" 

 JCP Copyright Licensing request: Your assistance in this matter is greatly appreciated

As part of Java EE’s transfer to the Eclipse Foundation under the Jakarta EE name, it is essential to ensure that the Foundation has the necessary rights so that the specifications can be evolved under the new Jakarta EE Specification Process. For this, we need your help!

We are currently requesting copyright licenses from all past contributors to Java EE specifications under the JCP; we are reaching out to all companies and individuals who made contributions to Java EE in the past to help out, execute the agreements and return them back to the Eclipse Foundation. As the advancement of the specifications and the technology is at stake, we greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

For more information about this topic, read Tanja Obradovic’s blog. If you have questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org.

 Election results for Jakarta EE working group committees

The nomination period for elections to the Jakarta EE committees is now closed. 

Almost all positions have been filled, with the exception of the Committer representative on the Marketing Committee, due to lack of nominees.   

The representatives for 2019-20 on the committees, starting July 1, 2019, are: 

Participant Representative:

STEERING COMMITTEE - Martijn Verburg (London Java Community)

SPECIFICATIONS COMMITTEE - Alex Theedom (London Java Community)

MARKETING COMMITTEE - Theresa Nguyen (Microsoft)

Committer Representative:

STEERING COMMITTEE - Ivar Grimstad

SPECIFICATIONS COMMITTEE - Werner Keil

MARKETING COMMITTEE - Vacant

 Jakarta EE Community Update: June video call

The most recent Jakarta EE Community Update meeting took place in June; the conversation included topics such as Jakarta EE 8 progress and plans, headway with specification name changes/ specification scope definitions, TCK process update, copyright license agreements, PMC/ Projects update, and more. 

The materials used on the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Please make sure to join us for the July 17th call.

 EclipseCon Europe 2019: Call for Papers open until July 15

You can still submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The Call for Papers (CFP) is closing soon so if you have an idea for a talk that will educate and inspire the Eclipse community, now’s the time to submit your talk! The final submission deadline is July 15. 

The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. 


Jakarta EE presence at events and conferences: June overview


Eclipse DemoCamp Florence 2019

Tomitribe: presence at JNation in Portugal 


Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. 

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  



by Tanja Obradovic at July 15, 2019 04:19 PM

#HOWTO: Intercept method calls using CDI interceptors

by rieckpil at July 14, 2019 10:05 AM

If you have cross-cutting concerns for several parts of your application, you usually don’t want to copy and paste the code. For Java EE applications, the CDI (Contexts and Dependency Injection) spec contains the concept of interceptors, which are defined in the Java Interceptors specification. With these CDI interceptors, you can intercept business methods, EJB timer timeouts, and lifecycle events.

With this blog post, I’ll demonstrate where and how to use the interceptors for a Java EE 8 application, using Java 8 and running on Payara 5.192.

Injection points for interceptors

Even though interceptors are part of the CDI spec, they can intercept EJBs (session beans and message-driven beans) as well as CDI managed beans. The Java Interceptors 1.2 release (the latest) is part of the maintenance release JSR 318, and the CDI spec builds upon its basic functionality.

The specification defines five types of injection points for interceptors:

  • @AroundInvoke: intercepts a basic method call
  • @AroundTimeout: intercepts timeout methods of EJB timers
  • @AroundConstruct: wraps the constructor invocation of the target class
  • @PostConstruct: intercepts post-construct lifecycle events
  • @PreDestroy: intercepts pre-destroy lifecycle events

For most of these injection points, I’ll provide an example in the following sections.

Writing CDI interceptors

Writing a CDI interceptor is as simple as the following:

@Interceptor
public class SecurePaymentInterceptor {

    @AroundInvoke
    public Object securePayment(InvocationContext invocationContext) throws Exception {
        return invocationContext.proceed();
    }
}

You just annotate a class with @Interceptor and add methods for intercepting your desired injection points. Within an interceptor method, you have access to the InvocationContext. With this object, you can retrieve the name of the intercepted method and its parameters, and you can also manipulate them. Make sure to call the .proceed() method if you want to continue with the execution of the original method.
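Conceptually, each interceptor wraps the next one, and proceed() hands control down the chain until the business method itself runs. The following self-contained sketch illustrates that mechanic in plain Java; it is an illustration only, not the actual javax.interceptor API:

```java
import java.util.List;
import java.util.concurrent.Callable;

public class InterceptorChainDemo {

    // Minimal stand-in for InvocationContext: proceed() calls the next element.
    public interface Invocation {
        Object proceed() throws Exception;
    }

    public interface Interceptor {
        Object intercept(Invocation ctx) throws Exception;
    }

    // Builds a chain where each interceptor wraps the next;
    // the innermost proceed() finally runs the target method.
    public static Object invoke(List<Interceptor> interceptors, Callable<Object> target)
            throws Exception {
        Invocation chain = target::call;
        for (int i = interceptors.size() - 1; i >= 0; i--) {
            Interceptor interceptor = interceptors.get(i);
            Invocation next = chain;
            chain = () -> interceptor.intercept(next);
        }
        return chain.proceed();
    }

    public static void main(String[] args) throws Exception {
        Interceptor logging = ctx -> "logged(" + ctx.proceed() + ")";
        Interceptor security = ctx -> "secured(" + ctx.proceed() + ")";
        Object result = invoke(List.of(logging, security), () -> "business-method");
        System.out.println(result); // logged(secured(business-method))
    }
}
```

The loop wraps from the last interceptor inward, so the first interceptor in the list runs first, which matches the ordering semantics described for real interceptor chains.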

As an example I’m going to intercept the following EJB:

@Startup
@Singleton
@ManipulatedPayment
public class PaymentProvider {

    @PostConstruct
    public void setUpPaymentProvider() {
        System.out.println("Setting up payment provider ...");
    }

    public void withdrawMoneyFromCustomer(String customer, BigDecimal amount) {
        System.out.println("Withdrawing money from " + customer + " - amount: " + amount);
    }
}

To demonstrate how to manipulate method parameters, I’m going to change the amount passed to .withdrawMoneyFromCustomer(String customer, BigDecimal amount) if the customer name is duke. In addition, I’m logging a single line to the console when the lifecycle event interceptors are triggered:

@Interceptor
public class PaymentManipulationInterceptor {

    @Inject
    private PaymentManipulator paymentManipulator;

    @AroundInvoke
    public Object manipulatePayment(InvocationContext invocationContext) throws Exception {

        if (invocationContext.getParameters()[0] instanceof String) {
            if (((String) invocationContext.getParameters()[0]).equalsIgnoreCase("duke")) {
                paymentManipulator.manipulatePayment();
                invocationContext.setParameters(new Object[]{
                        "Duke", new BigDecimal(999.99).setScale(2, RoundingMode.HALF_UP)
                });
            }
        }

        return invocationContext.proceed();
    }

    @AroundConstruct
    public void aroundConstructInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           invocationContext.getConstructor().getDeclaringClass() + " will be manipulated");
        invocationContext.proceed();
    }

    @PostConstruct
    public void postConstructInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           invocationContext.getMethod().getDeclaringClass() + " is ready for manipulation");
        invocationContext.proceed();
    }

    @PreDestroy
    public void preDestroyInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           "Stopped manipulating of class " + invocationContext.getMethod().getDeclaringClass());
        invocationContext.proceed();
    }
}
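A side note on the monetary amounts in these examples: new BigDecimal(999.99) goes through a double and therefore captures a binary approximation of the value, which is why the .setScale(2, RoundingMode.HALF_UP) calls matter. A small standalone snippet illustrating the difference:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalPitfall {
    public static void main(String[] args) {
        // The double constructor captures the binary approximation of 999.99:
        BigDecimal fromDouble = new BigDecimal(999.99);
        System.out.println(fromDouble); // long tail of digits, not exactly 999.99

        // setScale(2, HALF_UP) rounds it back to the intended two decimal places:
        System.out.println(fromDouble.setScale(2, RoundingMode.HALF_UP)); // 999.99

        // The String constructor represents the decimal value exactly:
        System.out.println(new BigDecimal("999.99")); // 999.99
    }
}
```

In production code, the String constructor (or BigDecimal.valueOf) avoids the approximation issue entirely.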

For a more realistic example, I’m creating an interceptor that intercepts JAX-RS methods and checks whether a required HTTP header is set. If the header is not present, the server responds with HTTP status 400:

@Interceptor
public class SecurePaymentInterceptor {

    @Context
    private HttpHeaders headers;

    @AroundInvoke
    public Object securePayment(InvocationContext invocationContext) throws Exception {

        String requiredHttpHeader = invocationContext
                .getMethod()
                .getAnnotation(SecurePayment.class)
                .requiredHttpHeader();

        if (headers.getRequestHeaders().containsKey(requiredHttpHeader)) {
            return invocationContext.proceed();
        } else {
            throw new WebApplicationException(
             "Missing HTTP header: " + requiredHttpHeader, Response.Status.BAD_REQUEST);
        }

    }
}

The required HTTP header is stored in the annotation @SecurePayment(requiredHttpHeader="X-Duke"), which is used to bind an interceptor to a method/class, as you’ll see in the next chapter.
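Reading such an annotation attribute reflectively, as the interceptor above does through invocationContext.getMethod(), can be tried outside a container. A self-contained sketch; the annotation here is a simplified, hypothetical stand-in without the CDI-specific @InterceptorBinding and @Nonbinding:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookupDemo {

    // Simplified stand-in for the @SecurePayment binding used in the post.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface SecurePayment {
        String requiredHttpHeader() default "X-Duke";
    }

    static class PaymentResource {
        @SecurePayment(requiredHttpHeader = "X-Secure-Payment")
        public String getPaymentForCustomer(String customerName) {
            return "Payment was withdrawn from customer " + customerName;
        }
    }

    public static void main(String[] args) throws Exception {
        // Mirrors invocationContext.getMethod().getAnnotation(...).requiredHttpHeader()
        Method method = PaymentResource.class.getMethod("getPaymentForCustomer", String.class);
        String requiredHeader = method.getAnnotation(SecurePayment.class).requiredHttpHeader();
        System.out.println(requiredHeader); // X-Secure-Payment
    }
}
```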

Binding interceptors to methods and classes

Up until now we just created CDI interceptors but did not bind them to a specific method or class. For this, we’ll use a custom annotation marked with @InterceptorBinding. The simplest such annotation looks like the following:

@InterceptorBinding
@Target({TYPE, METHOD})
@Retention(RUNTIME)
public @interface ManipulatedPayment {
}

We can also add custom attributes to the annotation, as we need for our @SecurePayment binding to specify the required HTTP header:

@InterceptorBinding
@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface SecurePayment {
    
   @Nonbinding String requiredHttpHeader() default "X-Duke";
   
}

Once this annotation is in place, we have to add it to the interceptor class and to the method or class (to include all methods of this class) we want to intercept:

@Interceptor
@SecurePayment
public class SecurePaymentInterceptor {

    // ...

}
@Path("payments")
public class PaymentResource {

    @Inject
    private PaymentProvider paymentProvider;

    @GET
    @Path("/{customerName}")
    @SecurePayment(requiredHttpHeader = "X-Secure-Payment")
    public Response getPaymentForCustomer(@PathParam("customerName") String customerName) {

        paymentProvider
                .withdrawMoneyFromCustomer(customerName,
                        new BigDecimal(42.00).setScale(2, RoundingMode.HALF_UP));

        return Response.ok("Payment was withdrawn from customer " + customerName).build();
    }

}

Activating CDI interceptors

CDI interceptors are inactive by default, so we have to activate them first. There are currently two ways to do this:

  1. Add the fully qualified class name of the interceptor to the beans.xml file
  2. Add the @Priority(int priority) annotation to the interceptor

To show you how both work, I’m using the first approach for the first interceptor:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
<interceptors>
    <class>de.rieckpil.blog.PaymentManipulationInterceptor</class>
</interceptors>
</beans>

The second interceptor is activated using the annotation. The priority number specifies the execution order when multiple interceptors apply to the same method:

@Priority(42)
@Interceptor
@SecurePayment
public class SecurePaymentInterceptor {

    // ...

}

Putting it all together and hitting /resources/payments/mike and /resources/payments/duke with the X-Secure-Payment header results in the following log output:

[Payara 5.192] [INFO] [[ Clustered CDI Event bus initialized]]
[Payara 5.192] [INFO] [[ class de.rieckpil.blog.PaymentProvider will be manipulated]]
[Payara 5.192] [INFO] [[ class de.rieckpil.blog.PaymentProvider is ready for manipulation]]
[Payara 5.192] [INFO] [[ Setting up payment provider ...]]
[Payara 5.192] [INFO] [[ Initializing Soteria 1.1-b01 for context '']]
[Payara 5.192] [INFO] [[ Loading application [intercept-methods-with-cdi-interceptors] at [/]]]
[Payara 5.192] [INFO] [[intercept-methods-with-cdi-interceptors was successfully deployed in 1,678 milliseconds.]]
[Payara 5.192] [INFO] [[ Context path from ServletContext:  differs from path from bundle: /]]
[Payara 5.192] [INFO] [[ Withdrawing money from mike - amount: 42.00]]
[Payara 5.192] [INFO] [[ Manipulating payment...]]
[Payara 5.192] [INFO] [[ Withdrawing money from Duke - amount: 999.99]]

For more information, visit the Weld documentation (the CDI reference implementation) or the website of the CDI spec.

You can find the source code for this example on GitHub.

Have fun using CDI interceptors,

Phil

The post #HOWTO: Intercept method calls using CDI interceptors appeared first on rieckpil.


by rieckpil at July 14, 2019 10:05 AM

Recording of Jakarta Tech Talk ‘How to develop Microservices’

by Niklas Heidloff at July 10, 2019 08:08 AM

Yesterday I presented in a Jakarta Tech Talk ‘How to develop your first cloud-native Applications with Java’. Below is the recording and the slides.

In the talk I described the key cloud-native concepts and explained how to develop your first microservices with Java EE/Jakarta EE and Eclipse MicroProfile and how to deploy the services to Kubernetes and Istio.

For the demos I used our end-to-end example application cloud-native-starter which is available as open source. There are instructions and scripts so that everyone can setup and run the demos locally in less than an hour.

I demonstrated key cloud-native functionality:

Here is the recording.

The slides are on SlideShare.

This is the summary page with links to resources to find out more:

The post Recording of Jakarta Tech Talk ‘How to develop Microservices’ appeared first on Niklas Heidloff.


by Niklas Heidloff at July 10, 2019 08:08 AM

#HOWTO: MicroProfile Rest Client for RESTful communication

by rieckpil at July 08, 2019 05:28 AM

In one of my recent blog posts, I presented Spring’s WebClient for RESTful communication. With Java EE, we can utilize the JAX-RS Client and WebTarget classes to achieve the same. However, if you add the MicroProfile API to your project, you can make use of the MicroProfile Rest Client specification, which targets the following goal:

The MicroProfile Rest Client provides a type-safe approach to invoke RESTful services over HTTP. As much as possible the MP Rest Client attempts to use JAX-RS 2.0 APIs for consistency and easier re-use.

With MicroProfile 2.2 you’ll get the latest version of the Rest Client, which is 1.3. RESTful communication becomes quite easy with this specification, as you just define the access to an external REST API with an interface definition and JAX-RS annotations:

@Path("/movies")
public interface MovieReviewService {

    @GET
    Set<Movie> getAllMovies();

    @POST
    @Path("/{movieId}/reviews")
    String submitReview(@PathParam("movieId") String movieId, Review review);

    @PUT
    @Path("/{movieId}/reviews/{reviewId}")
    Review updateReview(@PathParam("movieId") String movieId, 
                        @PathParam("reviewId") String reviewId, Review review);
}

In this blog post, I’ll demonstrate an example usage of the MicroProfile Rest Client using Java EE 8, MicroProfile 2.2, and Java 8, running on Payara 5.192.

System architecture

The project contains two services: an order application and a user management application. Alongside order data, the order application also stores the id of the user who created the order. To resolve a username from a user id and to create new orders, the application accesses the user management application’s REST interface:

(Architecture diagram: the order application calls the user management application’s REST interface.)

For simplicity, the implementations store the objects in-memory without an underlying database.
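As a sketch, the in-memory storage could look like the following (InMemoryOrderStore and its String-based order value are illustrative assumptions; the post does not show the actual service class):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the in-memory storage; the real OrderService
// in the sample project may look different.
class InMemoryOrderStore {

    private final Map<Integer, String> orders = new ConcurrentHashMap<>();
    private final AtomicInteger idSequence = new AtomicInteger();

    // Stores the order and returns its newly assigned id.
    int createNewOrder(String order) {
        int id = idSequence.incrementAndGet();
        orders.put(id, order);
        return id;
    }

    String getOrderById(int id) {
        return orders.get(id);
    }
}
```

A ConcurrentHashMap plus an AtomicInteger keeps the store safe under concurrent JAX-RS requests without needing a database.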

MicroProfile project setup

Both applications were created with my Java EE 8 and MicroProfile Maven archetype and contain just these two API dependencies:

<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>8.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.eclipse.microprofile</groupId>
        <artifactId>microprofile</artifactId>
        <version>2.2</version>
        <type>pom</type>
        <scope>provided</scope>
    </dependency>
</dependencies>

The order application has two JAX-RS endpoints, one for reading orders by their id and one for creating an order:

@Path("orders")
@Produces("application/json")
@Consumes("application/json")
public class OrderResource {

    @Inject
    private OrderService orderService;

    @GET
    @Path("/{id}")
    public JsonObject getOrderById(@PathParam("id") Integer id) {
        return orderService.getOrderById(id);
    }

    @POST
    public Response createNewOrder(JsonObject order, @Context UriInfo uriInfo) {
        Integer newOrderId = this.orderService.createNewOrder(new Order(order));
        UriBuilder uriBuilder = uriInfo.getAbsolutePathBuilder();
        uriBuilder.path(Integer.toString(newOrderId));

        return Response.created(uriBuilder.build()).build();
    }
}

The user management application also has two JAX-RS endpoints: one to resolve a username by its id and one to create a new user. Both endpoints are required for the order application to work properly and are called synchronously:

@Path("users")
@Consumes("application/json")
@Produces("application/json")
@ApplicationScoped
public class UserResource {

    private ConcurrentHashMap<Integer, String> userDatabase;
    private Faker randomUser;

    @PostConstruct
    public void init() {
        this.userDatabase = new ConcurrentHashMap<>();
        this.userDatabase.put(1, "Duke");
        this.userDatabase.put(2, "John");
        this.userDatabase.put(3, "Tom");

        this.randomUser = new Faker();
    }

    @GET
    @Path("/{userId}")
    public JsonObject getUserById(@PathParam("userId") Integer userId,
                                  @HeaderParam("X-Request-Id") String requestId,
                                  @HeaderParam("X-Application-Name") String applicationName) {

        System.out.println(
                String.format("External system with name '%s' " +
                        "and request id '%s' trying to access " +
                        "user with id '%s'", applicationName, requestId, userId));

        return Json
                .createObjectBuilder()
                .add("username", this.userDatabase.getOrDefault(userId, "Default User"))
                .build();
    }

    @POST
    @RolesAllowed("ADMIN")
    public void createNewUser(JsonObject user) {
        this.userDatabase
                .put(user.getInt("userId"), this.randomUser.name().firstName());
    }
}

For a more advanced use case, I’m tracking access to /users/{userId} by printing two custom HTTP headers: X-Request-Id and X-Application-Name. In addition, posting a new user requires authentication and authorization (HTTP basic auth), for which I’m using the Java EE 8 Security API.

Invoke RESTful services over HTTP with Rest Client

The REST access to the user management app is specified with a Java interface:

@RegisterRestClient
@Path("/resources/users")
@Produces("application/json")
@Consumes("application/json")
@ClientHeaderParam(name = "X-Application-Name", value = "ORDER-MGT-APP")
public interface UserManagementApplicationClient {

    @GET
    @Path("/{userId}")
    JsonObject getUserById(@HeaderParam("X-Request-Id") String requestIdHeader, 
                           @PathParam("userId") Integer userId);

    @POST
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    Response createUser(JsonObject user);

    default String generateAuthHeader() {
        return "Basic " + new String(Base64.getEncoder().encode("duke:SECRET".getBytes()));
    }
}

Every method of this interface represents one REST endpoint of the external service. With the common JAX-RS annotations like @GET, @POST, @Path, and @PathParam you can specify the HTTP method and URL parameters. The return type of the method represents the HTTP response body, which is deserialized using a MessageBodyReader that makes use of JSON-B for application/json. For sending data alongside the HTTP request body, you can add a POJO as a method argument.

Furthermore, you can add HTTP headers to your calls by using either @HeaderParam or @ClientHeaderParam. With @HeaderParam you mark a method argument as an HTTP header and can pass its value to the Rest Client from outside. @ClientHeaderParam, on the other hand, does not add an argument to the method signature; it retrieves its value from configuration, from a hardcoded string, or by calling a method. In this example, I’m using it to add the X-Application-Name header to every HTTP request and to set the Authorization header required for basic auth. You can use this annotation on both class and method level.
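The value that generateAuthHeader() returns is just the standard Basic scheme: "Basic " followed by the Base64-encoded user:password pair. As a standalone sketch (BasicAuthHeader is a hypothetical helper; duke:SECRET are the example's hardcoded credentials):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical helper illustrating the value generateAuthHeader() produces:
// the Basic scheme is "Basic " plus Base64(user + ":" + password).
class BasicAuthHeader {

    static String of(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(of("duke", "SECRET")); // prints "Basic ZHVrZTpTRUNSRVQ="
    }
}
```

Note that Basic auth only encodes, it does not encrypt, so it should always travel over TLS.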

Rest Client configuration and CDI integration

To integrate this Rest Client with CDI and make it injectable, you can register the client with @RegisterRestClient. Any other bean can now inject the Rest Client with the following code:

@Inject
@RestClient
private UserManagementApplicationClient userManagementApplicationClient;

The URL of the remote service is configured either with the @RegisterRestClient(baseUri="http://somedomain/api") annotation or using the MicroProfile Config API. For this example, I’m using the configuration approach with a microprofile-config.properties file:

de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/url=http://user-management-application:8080
de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/connectTimeout=3000
de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/readTimeout=3000

Besides the URL, you can configure the HTTP connect and read timeouts for this Rest Client and specify JAX-RS providers to intercept requests/responses. For more information, have a look at the specification document.
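The property keys above follow a simple pattern: the fully qualified name of the client interface plus a "/mp-rest/<property>" suffix. A minimal sketch of that key construction (RestClientConfigKey is a hypothetical helper, not part of the spec):

```java
// Illustrative sketch: MicroProfile Rest Client config keys are the fully
// qualified interface name followed by "/mp-rest/" and the property name.
class RestClientConfigKey {

    static String of(Class<?> clientInterface, String property) {
        return clientInterface.getName() + "/mp-rest/" + property;
    }

    public static void main(String[] args) {
        System.out.println(of(Runnable.class, "url")); // prints "java.lang.Runnable/mp-rest/url"
    }
}
```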

As I’m using docker-compose to deploy the two applications, the order application can reach the user application via its service name:

version: '3'
services:
  order-application:
    build: order-application
    ports:
      - "8080:8080"
    links: 
      - user-management-application
  user-management-application:
    build: user-management-application

For further information have a look at the GitHub repository of this specification and the release page to get the latest specification documents.

You can find the source code with a step-by-step guide to start the two applications on GitHub.

Have fun using the MicroProfile Rest Client API,

Phil

The post #HOWTO: MicroProfile Rest Client for RESTful communication appeared first on rieckpil.



#HOWTO: Deploy Java EE applications to Kubernetes

by rieckpil at July 06, 2019 08:08 AM

Kubernetes is currently the de-facto standard for deploying applications in the cloud. Every major cloud provider offers a dedicated Kubernetes service (e.g. Google Cloud with GKE, AWS with EKS) to deploy applications in a Kubernetes cluster. Once your stateless Java EE application is dockerized (statelessness is important, as your application will run with multiple instances), you are ready to deploy the application to Kubernetes.

In this blog post, I’ll show you how to deploy a sample Java EE 8 and MicroProfile 2.2 application running on Payara 5.192 to a local Kubernetes cluster. You can apply the same steps to any Kubernetes cluster in the cloud (with small adjustments for the container registry).

Setup Java EE backend

The sample application contains only the Java EE 8 and MicroProfile 2.2 API dependencies and is built with Maven:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>java-ee-kubernetes-deployment</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.2</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>java-ee-kubernetes-deployment</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The application contains just one JAX-RS resource, which returns a message injected via the MicroProfile Config API:

@Path("sample")
public class SampleResource {

  @Inject
  @ConfigProperty(name = "message")
  private String message;

  @GET
  public Response message() {
    return Response.ok(message).build();
  }

}

Furthermore, I’ve added a HealthResource which implements the HealthCheck interface of the MicroProfile Health 1.0 API. This class is optional, but nice to have, as Kubernetes will later need to determine whether your application is ready for traffic. In this example, the implementation is rather simple, as it just returns the UP status, but you could add further business logic to mark your application as ready. There can also be multiple implementations of the HealthCheck interface in one application to check for multiple things, e.g. free disk space, database availability, etc. The results of all health checks are then combined under /health.

@Health
@ApplicationScoped
public class HealthResource implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .name("java-ee")
                .builder()
                .up()
                .build();
    }
}

With MicroProfile 3.0 there will also be dedicated @Readiness and @Liveness annotations to differentiate between these two states.
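The aggregation rule for multiple health checks can be illustrated in plain Java (this sketch only models the "UP if and only if all checks are UP" semantics of the combined /health response; it is not the MicroProfile Health API itself):

```java
import java.util.List;
import java.util.function.Supplier;

// Plain-Java illustration (not the MicroProfile Health API): the overall
// /health state is UP only if every registered check reports UP.
class HealthAggregator {

    enum State { UP, DOWN }

    static State overall(List<Supplier<State>> checks) {
        boolean allUp = checks.stream()
                .map(Supplier::get)
                .allMatch(state -> state == State.UP);
        return allUp ? State.UP : State.DOWN;
    }
}
```

With checks for e.g. disk space and database availability, a single DOWN result turns the whole endpoint DOWN, which in turn makes the Kubernetes probes fail.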

Prepare Kubernetes deployment

First, we need to create a Docker image for our Kubernetes deployment. With Java EE applications this is pretty straightforward, as we just copy the .war file to the deployment folder of the targeted application server. For this example, I’ve chosen the Payara server-full 5.192 base image.

FROM payara/server-full:5.192
COPY target/java-ee-kubernetes-deployment.war $DEPLOY_DIR

Next, we create a so-called Kubernetes Deployment. With this Kubernetes object, we define metadata for our application: which image to use, which port to expose, and where to find the health endpoint. In addition, we define how many pods (containers) should run in parallel:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: java-ee-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-ee-kubernetes
  template:
    metadata:
      labels:
        app: java-ee-kubernetes
    spec:
      containers:
        - name: java-ee-kubernetes
          image: localhost:5000/java-ee-kubernetes
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 45
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 45
      restartPolicy: Always

In this example, I’m using a local Docker registry and therefore reference localhost:5000/java-ee-kubernetes as the application’s Docker image. If you plan to deploy your Java EE application to a Kubernetes cluster running in the cloud, you have to push your Docker image to that registry and replace localhost:5000 accordingly, e.g. with eu.gcr.io/java-ee-kubernetes (gcr stands for Google’s container registry service).

To give the pod some time to start up, I’ve added the initialDelaySeconds attribute so that the readiness and liveness probes are only requested after 45 seconds.

With just the deployment, we wouldn’t be able to access our application from outside. Therefore, we have to specify the access with a so-called Kubernetes Service. The service references our previous deployment and uses the type NodePort. With this configuration, we specify a port on our Kubernetes nodes which forwards to our application’s port. For our example, we use port 31000 on the Kubernetes node and forward to port 8080 of our Java EE application:

kind: Service
apiVersion: v1
metadata:
  name: java-ee-kubernetes
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      nodePort: 31000
  selector:
    app: java-ee-kubernetes

For a more advanced configuration, have a look at the Kubernetes Ingress Controllers.

Deploy to a local Kubernetes cluster

In this example, I’m deploying the application to a local Kubernetes cluster. The Kubernetes add-on is available in the latest Docker for Mac/Windows version and creates a simple cluster on-demand.

With the following steps you deploy the application to your local Kubernetes cluster:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
mvn clean package
docker build -t java-ee-kubernetes .
docker tag java-ee-kubernetes localhost:5000/java-ee-kubernetes
docker push localhost:5000/java-ee-kubernetes
kubectl apply -f deployment.yml

After the two pods are up and running, you can access the application at http://localhost:31000/resources/sample to get a greeting from the Java EE application.

You can find the source code with a step-by-step guide on GitHub.

Have fun deploying your Java EE application to Kubernetes,

Phil

The post #HOWTO: Deploy Java EE applications to Kubernetes appeared first on rieckpil.



The Payara Monthly Catch for June 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at July 04, 2019 11:03 AM

Another very busy month for the Payara team! We had our annual "Payara Week" where we fly everyone in the company to our UK HQ for a week of close collaboration, celebration, review and fun! We also announced our new partner program "Payara Radiate".



How to develop Open Liberty Microservices on OpenShift

by Niklas Heidloff at July 04, 2019 09:23 AM

Open Liberty is a flexible Java application server. It comes with Eclipse MicroProfile, a set of tools and APIs to build microservices. With these technologies I have created a simple hello world microservice which can be used as a template for your own microservices. In this article I describe how to deploy it to OpenShift.

The sample microservice is available as open source.

Microservice Implementation

The microservice contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

Deployment Options

The microservice can be run in different environments:

Deployment to Red Hat OpenShift on the IBM Cloud

The following instructions should work for OpenShift no matter where you run it. However, I’ve only tested them on the IBM Cloud.

IBM provides a managed Red Hat OpenShift offering on the IBM Cloud (beta). You can get a free IBM Cloud Lite account.

After you’ve created a new cluster, open the OpenShift console. From the dropdown menu in the upper right of the page, click ‘Copy Login Command’. Paste the copied command in your local terminal, for example ‘oc login https://c1-e.us-east.containers.cloud.ibm.com:23967 --token=xxxxxx’.

Get the code:

$ git clone https://github.com/nheidloff/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)

Push the code and build the image:

$ cd ${ROOT_FOLDER}/authors-java-jee
$ oc login ...
$ oc new-project cloud-native-starter
$ oc new-build --name authors --binary --strategy docker
$ oc start-build authors --from-dir=.
$ oc get istag

Wait until the image has been built. Then deploy the microservice:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment
$ sed "s+<namespace>+cloud-native-starter+g" deployment-template.yaml > deployment-template.yaml.1
$ sed "s+<ip:port>+docker-registry.default.svc:5000+g" deployment-template.yaml.1 > deployment-template.yaml.2
$ sed "s+<tag>+latest+g" deployment-template.yaml.2 > deployment-os.yaml
$ oc apply -f deployment-os.yaml
$ oc apply -f service.yaml
$ oc expose svc/authors
$ open http://$(oc get route authors -o jsonpath={.spec.host})/openapi/ui/
$ curl -X GET "http://$(oc get route authors -o jsonpath={.spec.host})/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

Rather than using ‘oc apply’ (which is essentially ‘kubectl apply’), you can also use ‘oc new-app’. In this case you don’t have to create yaml files, which makes things easier. At the same time, you lose some flexibility and capabilities that kubectl with yaml files provides.

$ oc new-app authors

After this you’ll be able to open the API explorer and invoke the endpoint:

After the deployment the application will show up in the OpenShift Web Console:

Note that there are several other options to deploy applications to OpenShift/OKD. Check out my previous article Deploying Open Liberty Microservices to OpenShift.

This sample is part of the GitHub repo cloud-native-starter. Check it out to learn how to develop cloud-native applications with Java EE/Jakarta EE, Eclipse MicroProfile, Kubernetes and Istio and how to deploy these applications to Kubernetes, Minikube, OpenShift and Minishift.

The post How to develop Open Liberty Microservices on OpenShift appeared first on Niklas Heidloff.



[EN] Send custom error messages to client via JAX-RS

by Altuğ Bilgin Altıntaş at June 27, 2019 01:25 PM

Would you like to show custom error messages to your clients via JAX-RS? Assume that we have a String field in an entity class and we want to validate this field according to business rules. Here is the code:

import javax.json.bind.annotation.JsonbCreator;
import javax.json.bind.annotation.JsonbProperty;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.validation.constraints.Size;

/**
 * @author altuga
 */
@Entity
@NamedQuery(name = "all" , query = "select flight from Flight flight")
public class Flight {

  @Id
  @GeneratedValue
  private long id;

  @Size(min = 3, max = 10, message = "stupid users")
  public String number;

  public int numberOfSeats;

  @JsonbCreator
  public Flight(@JsonbProperty("number") String number,
            @JsonbProperty("numberOfSeats") int numberOfSeats) {
   this.number = number;
   this.numberOfSeats = numberOfSeats;
  }

  public Flight() {
  }

  public long getId() {
   return id;
  }
}

We used the @Size annotation to apply our business rule: if the user enters a value shorter than 3 or longer than 10 characters, the system will throw a validation exception with our custom message.
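The @Size rule itself is easy to restate in plain Java as an illustration (FlightNumberRule is only a hypothetical sketch; in the application, the Bean Validation runtime performs this check automatically):

```java
// Hand-rolled sketch of the @Size(min = 3, max = 10) rule on the flight
// number; Bean Validation applies this constraint for us at runtime.
class FlightNumberRule {

    static boolean isValid(String number) {
        return number != null && number.length() >= 3 && number.length() <= 10;
    }
}
```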

import java.net.URI;
import java.util.List;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.validation.Valid;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

/**
 * @author airhacks.com
 */
@Path("ping")
@Stateless
public class FlightResource {

   @Inject
   FlightCoordinator coordinator;

   @POST
   public Response save(@Context UriInfo info, @Valid Flight flight) {
    this.coordinator.save(flight);
    URI uri = info.getAbsolutePathBuilder().path(String.valueOf(flight.getId())).build();
    return Response.created(uri).build();
  }

}

In order to trigger the validation process, we have to put the @Valid annotation just before the parameter. In order to show our custom validation message to JAX-RS clients, we have to implement an ExceptionMapper. Here we go:

import java.util.List;
import java.util.stream.Collectors;
import javax.ejb.Singleton;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Singleton
@Provider
public class ConstraintViolationMapper implements ExceptionMapper<ConstraintViolationException> {

    @Override
    public Response toResponse(ConstraintViolationException e) {
        List<String> messages = e.getConstraintViolations().stream()
                .map(ConstraintViolation::getMessage)
                .collect(Collectors.toList());

        return Response.status(Status.BAD_REQUEST).entity(messages).build();
    }
}

The ConstraintViolationMapper class catches the exception and transforms it according to our custom messages. In this case, the “stupid users” message will be sent to the client.

Jersey 2.29 has been released!

by Jan at June 25, 2019 08:58 PM

It is a pleasure to announce that Jersey 2.29 has been released. It is rather a large release. While Jersey 2.27 was the last non-Jakarta EE release and Jersey 2.28 was the first release of Jersey under a Jakarta EE … Continue reading


Source to Image Builder for Open Liberty Apps on OpenShift

by Niklas Heidloff at June 25, 2019 09:02 AM

OKD, the open source upstream Kubernetes distribution embedded in OpenShift, provides several ways to make it easy for developers to deploy applications to Kubernetes. I’ve open sourced a project that demonstrates how to deploy local Open Liberty applications via two simple commands, ‘oc new-app’ and ‘oc start-build’.

$ oc new-app s2i-open-liberty:latest~/. --name=<service-name>
$ oc start-build --from-dir . <service-name>

Get the code from GitHub.

Source-to-Image

OKD allows developers to deploy applications without having to understand Docker and Kubernetes in depth. Similarly to the Cloud Foundry ‘cf push’ experience, developers can deploy applications easily via terminal commands and without having to build Docker images. Source-to-Image is used to achieve this.

Source-to-Image (S2I) is a toolkit for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image.

In order to use S2I, builder images are needed. These builder images create the actual images containing the applications and are similar to Cloud Foundry buildpacks.

OKD provides several builder images out of the box. In order to support other runtimes, for example Open Liberty, custom builder images can be built and deployed.

Sample Application running locally in Docker Desktop

The repo contains a S2I builder image which creates an image running Java web applications on Open Liberty. Additionally the repo comes with a simple sample application which has been implemented with Java/Jakarta EE and Eclipse MicroProfile.

The Open Liberty builder image can be used in two different environments:

  • OpenShift or MiniShift via ‘oc new-app’ and ‘oc start-build’
  • Local Docker runtime via ‘s2i’

This is how to run the sample application locally with Docker and S2I:

$ cd ${ROOT_FOLDER}/sample
$ mvn package
$ s2i build . nheidloff/s2i-open-liberty authors
$ docker run -it --rm -p 9080:9080 authors
$ open http://localhost:9080/openapi/ui/

To use “s2i” or “oc new-app”/“oc start-build” you need to build the application with Maven first. The server configuration file and the war file need to be in these directories:

  • server.xml in the root directory
  • *.war file in the target directory

Sample Application running on Minishift

First the builder image needs to be built and deployed:

$ cd ${ROOT_FOLDER}
$ eval $(minishift docker-env)
$ oc login -u developer -p developer
$ oc new-project cloud-native-starter
$ docker login -u developer -p $(oc whoami -t) $(minishift openshift registry)
$ docker build -t nheidloff/s2i-open-liberty .
$ docker tag nheidloff/s2i-open-liberty:latest $(minishift openshift registry)/cloud-native-starter/s2i-open-liberty:latest
$ docker push $(minishift openshift registry)/cloud-native-starter/s2i-open-liberty

After the builder image has been deployed, Open Liberty applications can be deployed:

$ cd ${ROOT_FOLDER}/sample
$ mvn package
$ oc new-app s2i-open-liberty:latest~/. --name=authors
$ oc start-build --from-dir . authors 
$ oc expose svc/authors
$ open http://authors-cloud-native-starter.$(minishift ip).nip.io/openapi/ui/
$ curl -X GET "http://authors-cloud-native-starter.$(minishift ip).nip.io/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

After the sample application has been deployed, it shows up in the console.

The post Source to Image Builder for Open Liberty Apps on OpenShift appeared first on Niklas Heidloff.



Free and self paced workshop: How to develop microservices with Java

by Niklas Heidloff at June 24, 2019 09:01 AM

Over the last weeks I’ve worked on an example application that demonstrates how to develop your first cloud-native applications with Java/Jakarta EE, Eclipse MicroProfile, Kubernetes and Istio. Based on this example my colleague Thomas Südbröcker wrote a workshop which explains how to implement resilient microservices, how to expose and consume REST APIs, how to do traffic management and more.

Check out the workshop on GitHub.

The workshop contains the following labs:

The example application is a simple application that displays blog entries and links to the profiles of the authors.

There are three microservices: Web-API, Articles, and Authors. The microservice Web-API has two versions to demonstrate traffic management. The web application has been developed with Vue.js and is hosted via nginx.

The workshop utilizes the IBM Cloud Kubernetes Service. You can get a free lite account and a free cluster as documented in our repo.

The post Free and self paced workshop: How to develop microservices with Java appeared first on Niklas Heidloff.



Update for Jakarta EE community: June 2019

by Tanja Obradovic at June 20, 2019 02:01 PM

Last month, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. We have also decided to publish these updates as blogs and share the information that way as well. There are a few ways to get a grip on the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud native Java, read on.

Without further ado, let’s have a look at what has happened in May:

Jakarta EE 8 release and progress

Jakarta EE 8 will be fully compatible with Java EE 8, including use of the javax namespace. The process of driving the Jakarta EE 8 specifications, as well as delivery of the Jakarta EE 8 TCKs, and Jakarta EE 8 compatible implementations will be transparent.

Mike Milinkovich recently published a FAQ about Jakarta EE 8, in which he offered answers to questions such as:

  • Will Jakarta EE 8 break existing Java EE applications that rely upon javax APIs?

  • What will Jakarta EE 8 consist of?

  • Will there be Jakarta EE 8 compatible implementations?

  • What is the process for delivery of Jakarta EE 8?

  • When will Jakarta EE 8 be delivered?

Read Mike’s blog to find out what to expect from the Jakarta EE 8 release.

We need your help with the work on the Jakarta EE 8 release. Project teams, please get involved in the Eclipse EE4J projects and help out with Jakarta Specification Project Names and Jakarta Specification Scope Statements.

If you’d like to get involved in the work for the Jakarta EE Platform, there are a few projects that require your attention, namely the Jakarta EE 8 Platform Specification, which is meant to keep track of the work involved with creating the platform specification for Jakarta EE 8, Jakarta EE 9 Platform Specification, intended to keep track of the work involved with creating the platform specification for Jakarta EE 9 and Jakarta EE.Next Roadmap Planning, which seeks to define a roadmap and plan for the Jakarta EE 9 release.

Right now, the fastest way to have a say in the planning and preparation for the Jakarta EE 9 release is by getting involved in the Jakarta EE.Next Roadmap Planning.

Election schedule for Jakarta EE working group committees

The various facets of the Jakarta EE Working Group are driven by three key committees for which there are elected positions to be filled: the Steering Committee, the Specification Committee, and the Marketing and Brand Committee. The elected positions are to represent each of the Enterprise Members, Participant Members, and Committer Members. Strategic Members each have a representative appointed to these committees.  

The Eclipse Foundation is holding elections on behalf of the Jakarta EE Working Group using the following proposed timetable:  

Nomination period:  May 24 - June 4 (self-nominations are welcome)

Election period:  June 11 - June 25

Winning candidates announced:  June 27

All members are encouraged to consider nominating someone for the positions, and self-nominations are welcome. The period for nominations runs through June 4th.  Nominations may be sent to elections@eclipse.org.

Once nominations are closed, all working group members will be informed about the candidates and ballots will be distributed via email to those eligible to vote.  The election process will follow the Eclipse “Single Transferable Vote” method, as defined in the Eclipse Bylaws.  

The winning candidates will be announced on this mailing list immediately after the elections are concluded.  

The following positions will be filled as part of this election:

Steering Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Specification Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Marketing and Brand Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Transitioning Jakarta EE to the jakarta namespace

The process of migrating Java EE to the Eclipse Foundation has been a collaborative effort between the Eclipse Foundation staff and the many contributors, committers, members, and stakeholders that are participating. Last month, it was revealed that the javax package namespace will not be evolved by the Jakarta EE community and that Java trademarks such as the existing specification names will not be used by Jakarta EE specifications. While these restrictions were not what was originally expected, it might be in Jakarta EE’s best interest as the modification of javax would always have involved long-term legal and trademark restrictions.

In order to evolve Jakarta EE, we must transition to a new namespace. In an effort to bootstrap the conversation, the Jakarta EE Specification Committee has prepared two proposals (Big-bang Jakarta EE 9, Jakarta EE 10 new features and incremental change in Jakarta EE 9 and beyond) on how to make the move into the new namespace smoother. These proposals represent a starting point, but the community is warmly invited to submit more proposals.

Community discussion on how to transition to the jakarta namespace concluded Sunday, June 9th, 2019.

We invite you to read the blogs the community has published on this topic.

2019 Jakarta EE Developer Survey Results

The Eclipse Foundation recently released the results of the 2019 Jakarta EE developer survey that canvassed nearly 1,800 Java developers about trends in enterprise Java programming and their adoption of cloud native technologies. The aim of the survey, which was conducted by the Foundation in March of 2019 in cooperation with member companies and partners, including the London Java Community and Java User Groups, was to help Java ecosystem stakeholders better understand the requirements, priorities, and perceptions of enterprise Java developer communities.

A third of developers surveyed are currently building cloud native architectures and another 30 percent are planning to within the next year. Furthermore, the number of Java applications running in the cloud is expected to increase significantly over the next two years, with 32 percent of respondents hoping to run nearly two-thirds of their Java applications in the cloud in two years’ time. Also, over 40 percent of respondents are using the microservices architecture to implement Java in the cloud.

Access the full findings of the 2019 Java Community Developer Survey here.

Community engagement

The Jakarta EE community promises to be a very active one, especially given the various channels that can be used to stay up-to-date with all the latest and greatest. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan.

For more information about community engagement, read Tanja Obradovic’s blog.

Jakarta EE Wiki

Have you checked out the Jakarta EE Wiki yet? It includes important information such as process guidelines, documentation, Eclipse guides and mailing lists, Jakarta EE Working Group essentials and more.  

Keep in mind that this page is a work in progress and is expected to evolve in the upcoming weeks and months. The community’s input and suggestions are welcome and appreciated!

Jakarta EE Community Update: May video call

The most recent Jakarta EE Community Update meeting took place in May. The conversation covered topics such as Jakarta EE progress so far; Jakarta EE rights to Java trademarks; the transition from the javax namespace to the jakarta namespace (mapping javax to jakarta, when repackaging is required, and when migration to the new namespace is not required); how to maximize compatibility between Java EE 8 and Jakarta EE in future versions without stifling innovation; the Jakarta EE 8 release; a PMC/projects update; and more.

The minutes of the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Jakarta EE presence at conferences: May overview

Cloud native was the talk of the town in May. Conferences such as JAX 2019, Red Hat Summit 2019, and KubeCon + CloudNativeCon Europe 2019 were all about cloud native and how to tap into this key approach for IT modernization success, and the Eclipse Foundation was there to take the pulse of the community and better understand the adoption of cloud native technologies.

Don’t forget to check out Tanja Obradovic’s video interview about the future of Jakarta EE at JAX 2019.  

EclipseCon Europe 2019: Call for Papers open until July 15

It’s that time of year again! You can now submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. Early bird submissions are due July 1, and the final deadline is July 15. Check out Jameka's blog and submit your talk today!

We are also working on the JakartaOne Livestream conference, scheduled for September 10th. The Call for Papers is open until July 1st.

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group.

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


by Tanja Obradovic at June 20, 2019 02:01 PM

Recording of Talk ‘How to develop your first cloud-native Applications with Java’

by Niklas Heidloff at June 18, 2019 06:58 AM

At WeAreDevelopers, Harald Uebele and I gave a 30-minute talk, ‘How to develop your first cloud-native Applications with Java’. Below are the recording and the slides.

In the talk we described the key cloud-native concepts and explained how to develop your first microservices with Java EE/Jakarta EE and Eclipse MicroProfile and how to deploy the services to Kubernetes and Istio.

For the demos we used our end-to-end example application cloud-native-starter which is available as open source. There are instructions and scripts so that everyone can setup and run the demos locally in less than an hour.

We demonstrated key cloud-native functionality.

Here is the video.

The slides are on SlideShare. There is also another deck for a one hour presentation with more details.

Picture from the big stage:

Get the code of the sample application from GitHub.

The post Recording of Talk ‘How to develop your first cloud-native Applications with Java’ appeared first on Niklas Heidloff.


by Niklas Heidloff at June 18, 2019 06:58 AM

JCP Copyright Licensing request

by Tanja Obradovic at June 17, 2019 06:20 PM

The open source community has welcomed Oracle’s contribution of Java EE to the Eclipse Foundation, under the new name Jakarta EE. As part of this huge effort and transfer, we want to ensure that we have the necessary rights to evolve the specifications under the new Jakarta EE Specification Process. For this, we need your help!

We must request copyright licenses from all past contributors to Java EE specifications under the JCP. Hence, we are reaching out to all companies and individuals who made contributions to Java EE in the past to help out, execute the agreements, and return them to the Eclipse Foundation. As the advancement of the specifications and the technology is at stake, we would greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

The request is addressed to JCP contributors to Java EE specifications: once you receive an email from the Eclipse Foundation regarding this, please get back to us as soon as you can!

Should you have any questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org who is leading us all through this process.

Many thanks!


by Tanja Obradovic at June 17, 2019 06:20 PM

How to build and run a Hello World Java Microservice

by Niklas Heidloff at June 17, 2019 02:02 PM

The repo cloud-native-starter contains an end-to-end sample application that demonstrates how to develop your first cloud-native applications. Two of the microservices have been developed with Java EE and MicroProfile. To simplify the creation of new Java EE microservices, I’ve added another very simple service that can be used as a template for new services.

Get the code.

The template contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

The microservice can be run in different environments:

  • Docker
  • Minikube
  • IBM Cloud Kubernetes Service

In all cases get the code first:

$ git clone https://github.com/nheidloff/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)

Run in Docker

The microservice can be run in Docker Desktop.

$ cd ${ROOT_FOLDER}/authors-java-jee
$ mvn package
$ docker build -t authors .
$ docker run -i --rm -p 3000:3000 authors
$ open http://localhost:3000/openapi/ui/

Run in Minikube

These are the instructions to run the microservice in Minikube.

$ cd ${ROOT_FOLDER}/authors-java-jee
$ mvn package
$ eval $(minikube docker-env)
$ docker build -t authors:1 .
$ kubectl apply -f deployment/deployment.yaml
$ kubectl apply -f deployment/service.yaml
$ minikubeip=$(minikube ip)
$ nodeport=$(kubectl get svc authors --ignore-not-found --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${minikubeip}:${nodeport}/openapi/ui/

Run in IBM Cloud Kubernetes Service

IBM provides the managed IBM Cloud Kubernetes Service. You can get a free IBM Cloud account. Check out the instructions for how to create a Kubernetes cluster.

Set your namespace and cluster name, for example:

$ REGISTRY_NAMESPACE=niklas-heidloff-cns
$ CLUSTER_NAME=niklas-heidloff-free

Build the image:

$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a cloud.ibm.com -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export ... // for example: export KUBECONFIG=/Users/$USER/.bluemix/plugins/container-service/clusters/niklas-heidloff-free/kube-config-hou02-niklas-heidloff-free.yml
$ mvn package
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ ibmcloud cr build --tag $REGISTRY/$REGISTRY_NAMESPACE/authors:1 .

Deploy microservice:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" deployment-template.yaml > deployment-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" deployment-template.yaml.1 > deployment-template.yaml.2
$ sed "s+<tag>+1+g" deployment-template.yaml.2 > deployment-iks.yaml
$ kubectl apply -f deployment-iks.yaml
$ kubectl apply -f service.yaml
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

Swagger UI

Once deployed, the Swagger UI can be opened; it shows the APIs of the authors service.

The post How to build and run a Hello World Java Microservice appeared first on Niklas Heidloff.


by Niklas Heidloff at June 17, 2019 02:02 PM

#REVIEW: What’s new in MicroProfile 3.0

by rieckpil at June 16, 2019 01:36 PM

With the MicroProfile release cycle of three releases every year, in February, June, and October, we got MicroProfile 3.0 on June 11th, 2019. This version is based on MicroProfile 2.2 and updates the Rest Client, Metrics, and Health Check APIs, which I’ll show you in this blog post today.

The current API landscape for MicroProfile 3.0 looks like the following:

 

As you can see in this image, no new APIs were added with this release. The MicroProfile Rest Client API was updated from 1.2 to 1.3 with no breaking changes. The Metrics API got a major version update from 1.1 to 2.0, introducing some breaking changes, which you’ll learn about below. The same is true for the Health Check API, which is now available in version 2.0 and also introduces breaking API changes.

Changes with Metrics 2.0: Counters ftw!

Important links:

  • Official changelog on GitHub
  • Current API specification document as pdf
  • Release information on GitHub

Breaking changes:

  • Refactoring of Counters, as the old @Counted was misleading in practice (you can find migration hints in the API specification document)
  • Removed deprecated org.eclipse.microprofile.metrics.MetricRegistry.register(String name, Metric, Metadata)
  • Metadata is now immutable and built via a MetadataBuilder.
  • Metrics are now uniquely identified by a MetricID (a combination of the metric’s name and tags).
  • JSON output format for GET requests now appends tags along with the metric in metricName;tag=value;tag=value format.
  • JSON format for OPTIONS requests has been modified such that the ‘tags’ attribute is a list of nested lists holding the tags of the different metrics that are associated with the metadata.
  • The default value of the reusable attribute for metric objects created programmatically (not via annotations) is now true.
  • Some base metrics’ names have changed to follow the convention of ending the names of accumulating counters with total.
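The tag-appended key format can be sketched in plain Java. Note that the helper below is hypothetical and not part of the Metrics API; the metric and tag names are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricKeyFormat {

    // Hypothetical helper mirroring the Metrics 2.0 JSON key format,
    // where tags are appended to the metric name as name;tag=value;tag=value
    static String jsonKey(String name, Map<String, String> tags) {
        StringBuilder key = new StringBuilder(name);
        tags.forEach((tag, value) -> key.append(';').append(tag).append('=').append(value));
        return key.toString();
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("method", "GET");
        tags.put("endpoint", "authors");
        // prints requestCount;method=GET;endpoint=authors
        System.out.println(jsonKey("requestCount", tags));
    }
}
```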

Other important changes:

  • Removed unnecessary @InterceptorBinding annotation from org.eclipse.microprofile.metrics.annotation.Metric
  • Tag key names for labels are restricted to match the regex [a-zA-Z_][a-zA-Z0-9_]*.
  • MetricFilter modified to filter with MetricID instead of the name
  • Tag values defined through MP_METRICS_TAGS must escape equal signs (=) and commas (,) with a backslash (\).
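For example, a tag value containing a comma would have to be escaped like this (the tag names and values are made up for illustration):

```shell
# Hypothetical global tags: the comma inside the "region" value is escaped with a backslash
export MP_METRICS_TAGS='app=shop,region=eu\,west'
echo "$MP_METRICS_TAGS"
```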

Changes with Health Check 2.0: Kubernetes here we come!

Important links:

  • Current API specification document as pdf
  • All changes with this release
  • Release information on GitHub

Breaking changes:

  • The message body of the health check response was modified: outcome and state were replaced by status
  • Introduction of Health checks for @Liveness and @Readiness on /health/ready and /health/live endpoints (nice for Kubernetes)
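Under the new format, a response from e.g. /health/live looks roughly like this (the check name is illustrative; note status where outcome and state used to be):

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "app-live",
      "status": "UP"
    }
  ]
}
```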

Other important changes:

  • Deprecation of @Health annotation
  • Correction and enhancement of response JSON format
  • TCK enhancement and cleanup
  • Enhance examples in spec (introduce Health check procedures producers)

Changes with Rest Client 1.3: Improved config and security!

Important changes:

  • Spec-defined SSL support via new RestClientBuilder methods and MP Config properties.
  • Allow client proxies to be cast to Closeable/AutoCloseable.
  • Simpler configuration using configKeys.
  • Defined application/json to be the default MediaType if none is specified in @Produces/@Consumes.
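With configKeys, a client interface annotated with e.g. @RegisterRestClient(configKey = "authors") can be configured under that short key instead of its fully-qualified class name. The key, URL, and truststore values below are illustrative, not taken from a real project:

```properties
authors/mp-rest/url=http://localhost:3000
authors/mp-rest/trustStore=classpath:/client-truststore.jks
authors/mp-rest/trustStorePassword=changeit
```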

For more details, you can visit the official announcement post on the MicroProfile page.

I’m planning to give you code-based examples for MicroProfile 3.0 once the first application server supports it (for Payara this will be version 5.193). Stay tuned!

Have fun with MicroProfile 3.0,

Philip

The post #REVIEW: What’s new in MicroProfile 3.0 appeared first on rieckpil.


by rieckpil at June 16, 2019 01:36 PM

#HOWTO: Send emails with Java EE using Payara

by rieckpil at June 09, 2019 05:34 PM

Sending emails to your application’s clients or customers is a common enterprise use case. The emails usually contain invoices, reports or confirmations for a given business transaction. With Java, we have a mature and robust API for this: The JavaMail API.

The JavaMail API standard has a dedicated website providing official documentation and quickstart examples. The API is part of both the Java Standard Edition (Java SE) and the Java Enterprise Edition (Java EE) and can therefore also be used without Java EE.

In this blog post, I’ll show you how you can send an email with an attachment to an email address of your choice using this API and Java EE 8, MicroProfile 2.0, Payara 5.192, Java 8, Maven and Docker.

Let’s get started.

Setting up the backend

For the backend, I’ve created a straightforward Java EE 8 Maven project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>java-ee-sending-mails</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.0.1</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>2.23.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>java-ee-sending-mails</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The email transport is triggered by a JAX-RS endpoint (definitely not a best practice, but good enough for this example):

@Path("mails")
public class MailingResource {

  @Inject
  private MailingService mailingService;

  @GET
  public Response sendSimpleMessage() {
    mailingService.sendSimpleMail();
    return Response.ok("Mail was successfully delivered").build();
  }

}

The actual logic for creating and sending the email is provided by the MailingService EJB which is injected via CDI in the JAX-RS class:

@Stateless
public class MailingService {

    @Inject
    @ConfigProperty(name = "email")
    private String emailAddress;


    @Resource(name = "mail/localsmtp")
    private Session mailSession;

    public void sendSimpleMail() {

        Message simpleMail = new MimeMessage(mailSession);

        try {
            simpleMail.setSubject("Hello World from Java EE!");
            simpleMail.setRecipient(Message.RecipientType.TO, new InternetAddress(emailAddress));

            MimeMultipart mailContent = new MimeMultipart();

            MimeBodyPart mailMessage = new MimeBodyPart();
            mailMessage.setContent(
               "<p>Take a look at the <b>secretMessage.txt</b> file</p>", "text/html; charset=utf-8");
            mailContent.addBodyPart(mailMessage);

            MimeBodyPart mailAttachment = new MimeBodyPart();
            DataSource source = new ByteArrayDataSource(
               "This is a secret message".getBytes(), "text/plain");
            mailAttachment.setDataHandler(new DataHandler(source));
            mailAttachment.setFileName("secretMessage.txt");

            mailContent.addBodyPart(mailAttachment);
            simpleMail.setContent(mailContent);

            Transport.send(simpleMail);

            System.out.println("Message successfully sent to: " + emailAddress);
        } catch (MessagingException e) {
            e.printStackTrace();
        }

    }
}

First, the EJB requires an instance of the javax.mail.Session class, which is injected with @Resource and looked up via its unique JNDI name. The connection setup for this email session is done either in the Payara admin console or with asadmin, as you’ll see in the next section.

The Session object is then used to create a MimeMessage instance which represents the actual email. Setting the email recipient and the subject of the email is pretty straightforward. In this example, I’m injecting the recipient’s email address via the MicroProfile Config API with a microprofile-config.properties file:

email=duke@java.ee

For the attachment and the email body I’m using dedicated MimeBodyPart instances and add both to the MimeMultipart object. Finally, the email is sent via the static Transport.send(Message msg) method over SMTP.

Providing an SMTP server

In a real-world scenario, you would connect to your company’s internal SMTP server to send the emails to e.g. your customers. To provide a running example without relying on an external SMTP server, I’m using a Docker container to start a local one. The whole infrastructure (SMTP server and Java EE backend) for this example is combined in a simple docker-compose.yml file:

version: '3'
services:
  app:
    build: ./
    ports:
      - "8080:8080"
      - "4848:4848"
    links:
      - smtp
  smtp:
    image: namshi/smtp
    ports:
      - "25:25"

With this setup, the Docker container with the Payara application server can reach the SMTP server via its service name smtp and I don’t need to hardcode any IP address.

The email session is then configured (connection settings and JNDI name) in Payara with a post-boot asadmin script:

# Connecting to the SMTP server within the Docker Compose environment
create-javamail-resource --mailhost smtp --mailuser duke --fromaddress duke@java.ee mail/localsmtp

# For connecting to e.g. Gmail's SMTP server you have to specify further parameters (check e.g. https://medium.com/@swhp/sending-email-with-payara-and-gmail-56b0b5d56882)

deploy /opt/payara/deployments/java-ee-sending-mails.war

For the sake of completeness, this is the Dockerfile for the backend:

FROM payara/server-full:5.192
COPY create-mail-session.asadmin $CONFIG_DIR
COPY target/java-ee-sending-mails.war $DEPLOY_DIR
ENV POSTBOOT_COMMANDS $CONFIG_DIR/create-mail-session.asadmin

You can find the source code alongside a docker-compose.yml file to bootstrap the application and an SMTP server for local development on GitHub.

Have fun sending emails with Java EE,

Phil

The post #HOWTO: Send emails with Java EE using Payara appeared first on rieckpil.


by rieckpil at June 09, 2019 05:34 PM

Back to the top

Submit your event

If you have a community event you would like listed on our events page, please fill out and submit the Event Request form.

Submit Event