
#WHATIS?: Eclipse MicroProfile OpenAPI

by rieckpil at August 24, 2019 09:31 AM

Exposing REST endpoints usually requires documentation for your clients. This documentation usually includes the following: accepted media types, HTTP method, path variables, query parameters, and the request and response schema. With the OpenAPI v3 specification we have a standard way to document APIs. You can generate this kind of API documentation from your JAX-RS classes using MicroProfile OpenAPI out-of-the-box. In addition, you can customize the result with additional metadata like detailed descriptions, error codes and their reasons, and further information about the security mechanisms used.

Learn more about the MicroProfile OpenAPI specification, its annotations and how to use it in this blog post.

Specification profile: MicroProfile OpenAPI

  • Current version: 1.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a unified Java API for the OpenAPI v3 specification to expose API documentation

Customize your API documentation with MicroProfile OpenAPI

Without any additional annotation or configuration, you get your API documentation with MicroProfile OpenAPI out-of-the-box. To achieve this, your JAX-RS classes are scanned for annotations like @Produces, @Consumes, @Path, @GET etc. to extract the required information for the documentation.

If you have external clients accessing your endpoints, you usually add further metadata for them to understand what each endpoint is about. Fortunately, the MicroProfile OpenAPI specification defines a bunch of annotations you can use to customize the API documentation.

The following example shows some of the available annotations you can use to add further information:

@GET
@Operation(summary = "Get all books", description = "Returns all available books of the book store XYZ")
@APIResponse(responseCode = "404", description = "No books found")
@APIResponse(responseCode = "418", description = "I'm a teapot")
@APIResponse(responseCode = "500", description = "Server unavailable")
@Tag(name = "BETA", description = "This API is currently in beta state")
@Produces(MediaType.APPLICATION_JSON)
public Response getAllBooks() {
   System.out.println("Get all books...");
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

In this example, I’m adding a summary and description to the endpoint to tell the client what this endpoint is about. Furthermore, you can specify the different response codes this endpoint returns and give them a description if their meaning deviates from the plain HTTP spec.

Another important part of your API documentation is the request and response body schema. With JSON as the current de-facto standard format for exchanging data, you need to document the expected and accepted formats. The same is true for the response, as your client needs information about the contract of the API to further process the result. This can be achieved with an additional MicroProfile OpenAPI annotation:

@GET
@APIResponse(description = "Book",
             content = @Content(mediaType = "application/json",
                    schema = @Schema(implementation = Book.class)))
@Path("/{id}")
@Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response getBookById(@PathParam("id") Long id) {
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

Within the @APIResponse annotation we can reference the response object with the schema attribute. This can point to your data transfer object class. This Java class can then also carry further annotations to specify which fields are required and what example values look like:

@Schema(name = "Book", description = "POJO that represents a book.")
public class Book {

    @Schema(required = true, example = "MicroProfile")
    private String title;

    @Schema(required = true, example = "Duke")
    private String author;

    @Schema(required = true, readOnly = true, example = "1")
    private Long id;

    public Book(String title, String author, Long id) {
        this.title = title;
        this.author = author;
        this.id = id;
    }

    // getters and setters omitted
}

Access the created documentation

The MicroProfile OpenAPI specification defines a fixed endpoint to access the documentation: /openapi:

openapi: 3.0.0
info:
  title: Deployed APIs
  version: 1.0.0
servers:
- url: http://localhost:9080
- url: https://localhost:9443
tags:
- name: BETA
  description: This API is currently in beta state
paths:
  /resources/books/{id}:
    get:
      operationId: getBookById
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
          format: int64
      responses:
        default:
          description: Book
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Book'

This endpoint returns your generated API documentation in the OpenAPI v3 specification format as text/plain.

If you are using Open Liberty, you’ll get a nice-looking user interface for your API documentation. You can access it at http://localhost:9080/openapi/ui/. It looks similar to the Swagger UI and offers your clients a way to explore your API and also trigger requests to your endpoints via this user interface:

(Screenshots: the OpenAPI UI in Open Liberty, the execution of an API call, and the model explorer.)

YouTube video for using MicroProfile OpenAPI 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile OpenAPI in action:

coming soon

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile OpenAPI,

Phil




Shipping Fallback Scripts with "nomodule"

by admin at August 22, 2019 05:57 AM

All recent browsers come with good support for WebComponents, ES 6, and ES 6 modules, so you can develop and ship a web application without any build or transpilation steps (even without npm). A fallback script can be generated on e.g. Jenkins and delivered with the script nomodule attribute:
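The setup described above might look like the following snippet (file names are hypothetical): module-capable browsers load the ES 6 module and skip the nomodule script, while legacy browsers do the opposite:

```html
<!-- Modern browsers load the untranspiled ES 6 module... -->
<script type="module" src="app.js"></script>

<!-- ...and skip this one. Legacy browsers ignore type="module"
     above and execute only the transpiled fallback bundle. -->
<script nomodule src="app.es5.js"></script>
```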

See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).


New Challenges Ahead

by Ivar Grimstad at August 20, 2019 06:27 PM

I am super excited to announce that October 1st, I will become the first Jakarta EE Developer Advocate at Eclipse Foundation!

So, what’s new? Hasn’t this guy been doing this for years already?

Well, yes, and no. My day job has always been working as a consultant even if I have been fortunate that Cybercom Sweden (my employer of almost 15 years) has given me the freedom to also work on open source projects, community building and speaking at conferences and meetups.

What’s different then?

Even if I have had this flexibility, it has still been part-time work which has rippled into my spare time. There is only so much a person can do, and there are only 24 hours in a day. As a full-time Jakarta EE Developer Advocate, I will be able to focus entirely on community outreach around Jakarta EE.

The transition of the Java EE technologies from Oracle to Jakarta EE at the Eclipse Foundation has taken a lot longer than anticipated. The community around these technologies has taken a serious hit as a result. My primary focus for the first period as Jakarta EE Developer Advocate is to regain that trust and help enable participation of the strong community around Jakarta EE. The timing of establishing this position fits perfectly with the upcoming release of Jakarta EE 8. From that release and forward, it is up to us as a community to bring the technology forward.

I think I have been pretty successful with being vendor-neutral throughout the years. This will not change! Eclipse Foundation is a vendor-neutral organization and I will represent the entire Jakarta EE working group and community as the Jakarta EE Developer Advocate. This is what distinguishes this role from the vendor’s own developer advocates.

I hope to see you all very soon at a conference or meetup near you!



Jakarta EE and the great naming debate

August 20, 2019 03:45 AM

At JavaOne 2017 Oracle announced that they would start the difficult process of moving Java EE to the Eclipse Foundation. This has been a massive effort on behalf of Eclipse, Oracle, and many others, and we are getting close to having a specification process and a Jakarta EE 8 platform. We are looking forward to being able to certify Open Liberty to it soon. While that is excellent news, on Friday last week Mike Milinkovich from Eclipse informed the community that Eclipse and Oracle could not come to an agreement that would allow Jakarta EE to evolve using the existing javax package prefix. This has caused a flurry of discussion on Twitter, ranging from panic and confusion to, in some cases, outright FUD.

To say that everyone is disappointed with this outcome would be a massive understatement of how people feel. Yes, this is disappointing, but it is not the end of the world. First of all, despite what some people are implying, Java EE applications are not suddenly broken today when they were working a week ago. Similarly, your Spring apps are not going to break (yes, the Spring Framework has 2545 Java EE imports, let alone all the upstream dependencies). It just means that we will have a constraint on how Jakarta EE evolves to add new function.

We have a lot of experience with managing migration in the Open Liberty team. We have a zero migration promise for Open Liberty, which is why we are the only application server that supports Java EE 7 and 8 in the same release stream. This means that if you are on Open Liberty, your existing applications are totally shielded from any class name changes in Jakarta EE 9. We do this through our versioned features, which provide the exact API and runtime required by the specification as it was originally defined. We are optimistic about the future because we have been doing this with Liberty since it was created in 2012.

The question for the community is: how should we move forward from here? It seems that many in the Jakarta EE spec group at Eclipse are leaning towards quickly renaming everything in a Jakarta EE 9 release. There are advantages and disadvantages to this approach, but it appears to be favoured by David Blevins, Ian Robinson, Kevin Sutter, and Steve Millidge. While I can see the value of just doing a rename now (after all, it is better to pull a band-aid off fast than slow), I think it would be a mistake if at the same time we do not invest in making the migration from Java EE package names to Jakarta EE package names cost nothing. Something we call "zero migration" in Liberty.

Jakarta EE will only succeed if developers have a seamless transition from Java EE to Jakarta EE. I think there are four aspects to pulling off zero migration with a rename:

  1. Existing application binaries need to continue to work without change.

  2. Existing application source needs to continue to work without change.

  3. Tools must be provided to quickly and easily change the import statements for Java source.

  4. Applications that are making use of the new APIs must be able to call binaries that have not been updated.

The first two are trivial to do: Java class files have a constant pool that contains all the referenced class and method references. Updating the constant pool when the class is loaded will be technically easy, cheap at runtime, and safe. We are literally talking about changing javax.servlet to jakarta.servlet, no method changes.

The third one is also relatively simple; as long as class names do not change, switching import statements from javax.servlet.* to jakarta.servlet.* is easy to automate.
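To illustrate how mechanical that rename is, here is a minimal sketch of such an import rewrite; the class name and the (abbreviated) package list are illustrative, not part of any official tooling:

```java
import java.util.List;

public class JakartaImportRewriter {

    // Subset of the affected package prefixes; the real rename would
    // cover every javax.* package that moves to Jakarta EE.
    private static final List<String> EE_PACKAGES =
            List.of("javax.servlet", "javax.ws.rs", "javax.persistence", "javax.ejb");

    // Rewrites only import statements, e.g.
    // "import javax.servlet.*;" -> "import jakarta.servlet.*;"
    public static String rewrite(String source) {
        for (String pkg : EE_PACKAGES) {
            source = source.replaceAll(
                    "import\\s+" + pkg.replace(".", "\\."),
                    "import " + pkg.replace("javax.", "jakarta."));
        }
        return source;
    }
}
```

Because only the package prefix changes and no method signatures are touched, plain JDK imports such as java.util.* pass through untouched.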

The last one is the most difficult because you have existing binaries using the javax.servlet package and new source using the jakarta.servlet package. Normally this would produce a compilation error because you cannot pass a jakarta.servlet class somewhere that takes a javax.servlet class. In theory we could reuse the approach used to support existing apps and apply it at compile time to the downstream dependencies, but this will depend on the build tools being able to support this behaviour. You could add something to the Maven build to run prior to compilation to make sure this works, but that might be too much work for some users to contemplate, and perhaps is not close enough to zero migration.

I think if the Jakarta EE community pulls together to deliver this kind of zero migration approach prior to making any break, the future will be bright for Jakarta EE. The discussion has already started on the jakarta-platform-dev mailing list, kicked off by David Blevins. If you are not a member you can join now on eclipse.org. I am also happy to hear your thoughts via Twitter.



#WHATIS?: Eclipse MicroProfile Health

by rieckpil at August 19, 2019 05:16 AM

Once your application is deployed to production you want to ensure it’s up and running. To determine the health and status of your application you can use monitoring based on different metrics, but this requires further knowledge and takes time. Usually, you just want a quick answer to the question: Is my application up? The same is true if your application is running e.g. in a Kubernetes cluster, where the cluster regularly performs health probes to terminate unhealthy pods. With MicroProfile Health you can write both readiness and liveness checks and expose them via an HTTP endpoint with ease.

Learn more about the MicroProfile Health specification and how to use it in this blog post.

Specification profile: MicroProfile Health

  • Current version: 2.0.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Add liveness and readiness checks to determine the application’s health

Determine the application’s health with MicroProfile Health

With MicroProfile Health you get three new endpoints to determine both the readiness and liveness of your application:

  • /health/ready: Returns the result of all readiness checks and determines whether or not your application can process requests
  • /health/live: Returns the result of all liveness checks and determines whether or not your application is up and running
  • /health: In previous versions of MicroProfile Health there was no distinction between readiness and liveness; this endpoint is kept for backward compatibility and returns the result of both health check types.

To determine your readiness and liveness you can have multiple checks. The overall status is constructed with a logical AND of all your checks of that specific type (liveness or readiness). If, for example, one liveness check fails, the overall liveness status is DOWN and the HTTP status is 503:

$ curl -v http://localhost:9080/health/live


< HTTP/1.1 503 Service Unavailable
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"DOWN"}

In case of an overall UP status, you’ll receive the HTTP status 200:

$ curl -v http://localhost:9080/health/ready

< HTTP/1.1 200 OK
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"UP"}
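The aggregation rule behind these responses (a logical AND over all checks of one type) can be sketched in plain Java; the class and method names here are illustrative, not spec API:

```java
import java.util.List;

public class HealthAggregator {

    // A health check result reduced to its name and UP/DOWN status
    public record CheckResult(String name, boolean up) {}

    // The overall status is UP only if every single check reports UP
    public static boolean overallUp(List<CheckResult> checks) {
        return checks.stream().allMatch(CheckResult::up);
    }

    // The HTTP status code the /health endpoints derive from the overall status
    public static int httpStatus(List<CheckResult> checks) {
        return overallUp(checks) ? 200 : 503;
    }
}
```

So a single failing check of a given type is enough to turn the whole endpoint for that type to DOWN and 503.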

Create a readiness check

To create a readiness check you have to implement the HealthCheck interface and add @Readiness to your class:

@Readiness
public class ReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.builder()
                .name("readiness")
                .up()
                .build();
    }
}

As you can add multiple checks, you need to give every check a dedicated name. In general, all your readiness checks should determine whether your application is ready to accept traffic or not. Therefore a quick response is preferable.

If your application is about exposing and accepting data using REST endpoints and does not rely on other services to work, the readiness check above should be good enough, as it returns 200 once the JAX-RS runtime is up and running:

{
   "checks":[
      {
         "data":{},
         "name":"readiness",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Furthermore, once /health/ready returns 200, the application is considered ready, and from then on the platform primarily relies on /health/live to decide whether the application is still healthy.
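In a Kubernetes cluster, the two endpoints map naturally onto the probe types. A hypothetical fragment of a container spec (port and timings are illustrative, not prescribed by the specification):

```yaml
# Fragment of a container spec in a Kubernetes Deployment
livenessProbe:
  httpGet:
    path: /health/live
    port: 9080
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9080
  initialDelaySeconds: 5
  periodSeconds: 10
```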

Create liveness checks

Creating liveness checks is as simple as creating readiness checks. The only difference is the @Liveness annotation at class level:

@Liveness
public class DiskSizeCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {

        File file = new File("/");
        long freeSpace = file.getFreeSpace() / 1024 / 1024;

        return HealthCheckResponse.builder()
                .name("disk")
                .withData("remainingSpace", freeSpace)
                .state(freeSpace > 100)
                .build();
    }
}

In this example, I’m checking for free disk space, as a service might rely on storage to persist e.g. files. With the .withData() method of the HealthCheckResponseBuilder you can add further metadata to your response.

In addition, you can also combine the @Readiness and @Liveness annotations and reuse a health check class for both checks:

@Readiness
@Liveness
public class MultipleHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .builder()
                .name("generalCheck")
                .withData("foo", "bar")
                .withData("uptime", 42)
                .withData("isReady", true)
                .up()
                .build();
    }
}

This check now appears for /health/ready and /health/live:

{
   "checks":[
      {
         "data":{
            "remainingSpace":447522
         },
         "name":"disk",
         "status":"UP"
      },
      {
         "data":{

         },
         "name":"liveness",
         "status":"UP"
      },
      {
         "data":{
            "foo":"bar",
            "isReady":true,
            "uptime":42
         },
         "name":"generalCheck",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Other possible liveness checks might be: checking for active JDBC connections, connections to queues, CPU usage, or custom metrics (with the help of MicroProfile Metrics).

YouTube video for using MicroProfile Health 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Health in action:

coming soon

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Health,

Phil




#WHATIS?: Eclipse MicroProfile Metrics

by rieckpil at August 18, 2019 08:03 AM

Ensuring a stable operation of your application in production requires monitoring. Without monitoring, you have no insights into the internal state and health of your system and are left with a black box. MicroProfile Metrics gives you the ability to not only monitor pre-defined metrics like JVM statistics but also create custom metrics to monitor e.g. key figures of your business. These metrics are then exposed via HTTP, ready to be visualized on a dashboard and to feed appropriate alarms.

Learn more about the MicroProfile Metrics specification and how to use it in this blog post.

Specification profile: MicroProfile Metrics

  • Current version: 2.0 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Add custom metrics (e.g. timer or counter) to your application and expose them via HTTP

Default MicroProfile metrics defined in the specification

The specification defines one endpoint with three subresources to collect metrics from a MicroProfile application:

  • The endpoint to collect all available metrics: /metrics
  • Base (pre-defined by the specification) metrics: /metrics/base
  • Application metrics: /metrics/application (optional)
  • Vendor-specific metrics: /metrics/vendor (optional)

So you can either use the main /metrics endpoint and get all available metrics for your application or one of the subresources.

The default media type for these endpoints is text/plain using the OpenMetrics format. You are also able to get them as JSON if you specify the Accept header in your request as application/json.

In the specification, you find a list of base metrics every MicroProfile Metrics compliant application server has to offer. These are mainly JVM, GC, memory, and CPU related metrics to monitor the infrastructure. The following output shows the required set of base metrics:

{
    "gc.total;name=scavenge": 393,
    "gc.time;name=global": 386,
    "cpu.systemLoadAverage": 0.92,
    "thread.count": 85,
    "classloader.loadedClasses.count": 11795,
    "classloader.unloadedClasses.total": 21,
    "jvm.uptime": 985206,
    "memory.committedHeap": 63111168,
    "thread.max.count": 100,
    "cpu.availableProcessors": 12,
    "classloader.loadedClasses.total": 11816,
    "thread.daemon.count": 82,
    "gc.time;name=scavenge": 412,
    "gc.total;name=global": 14,
    "memory.maxHeap": 4182573056,
    "cpu.processCpuLoad": 0.0017964831879557087,
    "memory.usedHeap": 34319912
}

In addition, you are able to add metadata and tags to your metrics, like in the output above for gc.time, where name=global is a tag. You can use these tags to further separate a metric for multiple use cases.

Create a custom metric with MicroProfile Metrics

There are two ways of defining a custom metric with MicroProfile Metrics: using annotations or programmatically. The specification offers five different metric types:

  • Timer: samples the time for e.g. a method call
  • Counter: monotonically counts e.g. invocations of a method
  • Gauge: samples the value of an object, e.g. the current size of a JMS queue
  • Meter: tracks the throughput of e.g. a JAX-RS endpoint
  • Histogram: calculates the distribution of a value, e.g. the variance of incoming user agents

For simple use cases, you can make use of annotations and just add them to a method you want to monitor. Each annotation offers attributes to configure tags and metadata for the metric:

@Counted(name = "bookCommentClientInvocations",
         description = "Counting the invocations of the constructor",
         displayName = "bookCommentClientInvoke",
         tags = {"usecase=simple"})
public BookCommentClient() {
}

If your monitoring use case requires a more dynamic configuration, you can create/update your metrics programmatically. For this, you just need to inject the MetricRegistry into your class:

public class BookCommentClient {

    @Inject
    @RegistryType(type = MetricRegistry.Type.APPLICATION)
    private MetricRegistry metricRegistry;

    // JAX-RS client target for the comments API, initialized elsewhere
    private WebTarget bookCommentsWebTarget;

    public String getBookCommentByBookId(String bookId) {
        Response response = this.bookCommentsWebTarget.path(bookId).request().get();
        this.metricRegistry.counter("bookCommentApiResponseCode" + response.getStatus()).inc();
        return response.readEntity(JsonObject.class).getString("body");
    }
}

Create a timer metric

If you want to track and sample the duration of a method call, you can make use of timers. You can add them with the @Timed annotation or using the MetricRegistry. A good use case might be tracking the time for a call to an external service:

@Timed(name = "getBookCommentByBookIdDuration")
public String getBookCommentByBookId(String bookId) {
   Response response = this.bookCommentsWebTarget.path(bookId).request().get();
   return response.readEntity(JsonObject.class).getString("body");
}

While using the timer metric type you’ll also get a count of method invocations and mean/max/min/percentile calculations out-of-the-box:

 "de.rieckpil.blog.BookCommentClient.getBookCommentByBookIdDuration": {
        "fiveMinRate": 0.000004243196464475842,
        "max": 3966817891,
        "count": 13,
        "p50": 737218798,
        "p95": 3966817891,
        "p98": 3966817891,
        "p75": 997698383,
        "p99": 3966817891,
        "min": 371079671,
        "fifteenMinRate": 0.005509550587308515,
        "meanRate": 0.003936521878196718,
        "mean": 1041488167.7031761,
        "p999": 3966817891,
        "oneMinRate": 1.1484886591525709e-24,
        "stddev": 971678361.3592016
}

Be aware that the JSON output reports durations in nanoseconds, while the OpenMetrics format uses seconds:

getBookCommentByBookIdDuration_rate_per_second 0.003756880727820997
getBookCommentByBookIdDuration_one_min_rate_per_second 7.980095572816848E-26
getBookCommentByBookIdDuration_five_min_rate_per_second 2.4892551645230856E-6
getBookCommentByBookIdDuration_fifteen_min_rate_per_second 0.004612201440656351
getBookCommentByBookIdDuration_mean_seconds 1.0414881677031762
getBookCommentByBookIdDuration_max_seconds 3.9668178910000003
getBookCommentByBookIdDuration_min_seconds 0.371079671
getBookCommentByBookIdDuration_stddev_seconds 0.9716783613592016
getBookCommentByBookIdDuration_seconds_count 13
getBookCommentByBookIdDuration_seconds{quantile="0.5"} 0.737218798
getBookCommentByBookIdDuration_seconds{quantile="0.75"} 0.997698383
getBookCommentByBookIdDuration_seconds{quantile="0.95"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.98"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.99"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.999"} 3.9668178910000003

Create a counter metric

The next metric type is the simplest one: a counter. With the counter, you can track e.g. the number of invocations of a method:

@Counted
public String doFoo() {
  return "Duke";
}

In one of the previous MicroProfile Metrics versions, you were able to decrease the counter and thus have a non-monotonic counter. As this caused confusion with the gauge metric type, the current specification version defines this metric type as a monotonic counter which can only increase.

If you use the programmatic approach, you are also able to define the amount by which the counter increases on each invocation:

public void checkoutItem(String item, Long amount) {
   this.metricRegistry.counter(item + "Count").inc(amount);
   // further business logic
}

Create a metered metric

The meter type is perfect if you want to measure the throughput of something and get the one-, five-, and fifteen-minute rates. As an example, I’ll monitor the throughput of a JAX-RS endpoint:

@GET
@Metered(name = "getBookCommentForLatestBookRequest", tags = {"spec=JAX-RS", "level=REST"})
@Produces(MediaType.TEXT_PLAIN)
public Response getBookCommentForLatestBookRequest() {
   String latestBookRequestId = bookRequestProcessor.getLatestBookRequestId();
   return Response.ok(this.bookCommentClient.getBookCommentByBookId(latestBookRequestId)).build();
}

After several invocations, the result looks like the following:

"de.rieckpil.blog.BookResource.getBookCommentForLatestBookRequest": {
       "oneMinRate;level=REST;spec=JAX-RS": 1.1363013189791909e-24,
       "fiveMinRate;level=REST;spec=JAX-RS": 0.0000042408326224725166,
       "meanRate;level=REST;spec=JAX-RS": 0.003936520624021342,
       "fifteenMinRate;level=REST;spec=JAX-RS": 0.0055092085268208186,
       "count;level=REST;spec=JAX-RS": 13
}

Create a gauge metric

To monitor a value which can increase and decrease over time, you should use the gauge metric type. Imagine you want to visualize the current disk size or the remaining messages to process in a queue:

@Gauge(unit = "amount")
public Long remainingBookRequestsToProcess() {
  // monitor e.g. current size of a JMS queue
  return ThreadLocalRandom.current().nextLong(0, 1_000_000);
}

The unit attribute of the annotation is required and has to be explicitly configured. There is a MetricUnits class which you can use for common units like seconds or megabytes.

In contrast to all other metrics, the @Gauge annotation can only be used in combination with a single instance (e.g. @ApplicationScoped), as otherwise it would not be clear which instance represents the actual value. There is a @ConcurrentGauge if you need to count parallel invocations.

The outcome is the current value of the gauge, which might increase or decrease over time:

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 990120

// invocation of /metrics 5 minutes later

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 11003

YouTube video for using MicroProfile Metrics 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Metrics in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Metrics,

Phil




A Lean "Core Insurance As A Service" (Java EE / Jakarta EE / MicroProfile) Startup--airhacks.fm podcast

by admin at August 18, 2019 04:55 AM

Subscribe to airhacks.fm podcast via: Spotify | iTunes | RSS

The #50 airhacks.fm episode with Matthias Reining (@MatthiasReining) about:

the tech11 "Core Insurance Platform as a Service" startup, the technology choices in product development, and productivity with Java EE, Jakarta EE, MicroProfile and WebComponents / WebStandards
is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


#WHATIS?: Eclipse MicroProfile Config

by rieckpil at August 17, 2019 12:00 PM

Injecting configuration properties like JDBC URLs, passwords, usernames or hostnames from external sources is a common requirement for every application. Inspired by the twelve-factor app principles you should store configuration in the environment (e.g. OS environment variables or config maps in Kubernetes). These external configuration properties can then be replaced for your different stages (dev/prod/test) with ease. Using MicroProfile Config you can achieve this in a simple and extensible way.

Learn more about the MicroProfile Config specification and how to use it in this blog post.

Specification profile: MicroProfile Config

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Inject configuration properties from external sources (like property files, environment or system variables)

Injecting configuration properties

In several parts of your application, you might want to inject configuration properties, for example to configure the base URL of a JAX-RS client. With MicroProfile Config you can inject a Config object using CDI and fetch a specific property by its key:

public class BasicConfigurationInjection {

    @Inject
    private Config config;

    public void init(@Observes @Initialized(ApplicationScoped.class) Object init) {
        System.out.println(config.getValue("message", String.class));
    }

}

In addition, you can inject a property value to a member variable with the @ConfigProperty annotation and also specify a default value:

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "message", defaultValue = "Hello World")
    private String message;

}

If you don’t specify a defaultValue, and the application can’t find a property value in the configured ConfigSources, your application will throw an error during startup:

The [BackedAnnotatedField] @Inject @ConfigProperty private de.rieckpil.blog.BasicConfigurationInjection.value InjectionPoint dependency was not resolved. Error: java.util.NoSuchElementException: CWMCG0015E: The property not.existing.value was not found in the configuration.
       at com.ibm.ws.microprofile.config.impl.AbstractConfig.getValue(AbstractConfig.java:175)
       at [internal classes]

For a more resilient behaviour, or if the config property is optional, you can wrap the value in Java's Optional<T> class and check for its presence at runtime:

public class BasicConfigurationInjection {
    
    @Inject
    @ConfigProperty(name = "my.app.password")
    private Optional<String> password;
    
}
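How such an optional value might be checked at runtime can be sketched in plain Java; the helper name resolve and the fallback string are made up for illustration:

```java
import java.util.Optional;

public class OptionalConfigCheck {

    // Falls back to a default when the config property is absent
    static String resolve(Optional<String> password) {
        return password.orElse("<not set>");
    }

    public static void main(String[] args) {
        // Simulates a missing and a present my.app.password property
        System.out.println(resolve(Optional.empty()));
        System.out.println(resolve(Optional.of("s3cret")));
    }
}
```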

Furthermore, you can wrap the property in a Provider<T> for more dynamic injection. This ensures that each invocation of Provider.get() resolves the latest value from the underlying Config, so you are able to change it at runtime.

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "my.app.timeout")
    private Provider<Long> timeout;

    public void init(@Observes @Initialized(ApplicationScoped.class) Object init) {
        System.out.println(timeout.get());
    }

}

For configuration property keys you might use the dot notation to prevent conflicts and to separate domains: my.app.passwords.twitter.

Configuration sources

The default ConfigSources are the following:

  • System property (default ordinal: 400): passed with -Dmessage=Hello to the application
  • Environment variables (default ordinal: 300): OS variables like export MESSAGE=Hello
  • Property file (default ordinal: 100): file META-INF/microprofile-config.properties

If the MicroProfile Config runtime finds a property in more than one place (e.g. in the property file and as an environment variable), the value from the source with the higher ordinal wins.
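The ordinal precedence can be made concrete with a small sketch (the value strings are made up):

```properties
# META-INF/microprofile-config.properties (default ordinal 100)
message=Hello from the file

# Launching the application with -Dmessage=Hello (system property,
# default ordinal 400) makes config.getValue("message", String.class)
# return "Hello", because 400 > 100.
```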

These default configuration sources should cover most of the use cases and support writing cloud-native applications. However, if you need any additional custom ConfigSource, you can plug-in your own (e.g. fetch configurations from a database or external service).

To give you an example of a custom ConfigSource, I'm creating a static source which serves just two properties. You just need to implement the ConfigSource interface and its methods:

public class CustomConfigSource implements ConfigSource {

    public static final String CUSTOM_PASSWORD = "CUSTOM_PASSWORD";
    public static final String MESSAGE = "Hello from custom ConfigSource";

    @Override
    public int getOrdinal() {
        return 500;
    }

    @Override
    public Map<String, String> getProperties() {
        Map<String, String> properties = new HashMap<>();
        properties.put("my.app.password", CUSTOM_PASSWORD);
        properties.put("message", MESSAGE);
        return properties;
    }

    @Override
    public String getValue(String key) {
        if (key.equalsIgnoreCase("my.app.password")) {
            return CUSTOM_PASSWORD;
        } else if (key.equalsIgnoreCase("message")) {
            return MESSAGE;
        }
        return null;
    }

    @Override
    public String getName() {
        return "randomConfigSource";
    }
}

To register this new ConfigSource you can either bootstrap a custom Config object with this source:

Config config = ConfigProviderResolver
                .instance()
                .getBuilder()
                .addDefaultSources()
                .withSources(new CustomConfigSource())
                .addDiscoveredConverters()
                .build();

or add the fully-qualified name of the class of the configuration source to the org.eclipse.microprofile.config.spi.ConfigSource file in /src/main/resources/META-INF/services:

de.rieckpil.blog.CustomConfigSource

Using the file approach, the custom source is now part of the ConfigSources by default.

Configuration converters

Internally, the mechanism of MicroProfile Config is purely String-based; type safety is achieved with Converter classes. The specification provides default Converters for the well-known Java types: Integer, Long, Float, Boolean etc. In addition, there are built-in converters for Arrays, Lists, Optional<T> and Provider<T>.
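The collection support can be illustrated with a plain-Java sketch of what the built-in converter roughly does for a comma-separated property value; the property name and values below are made up, and the real implementation additionally handles escaped commas:

```java
import java.util.Arrays;
import java.util.List;

public class ListConversionSketch {

    // Roughly what the built-in converter does for a property like
    // my.app.hosts=alpha,beta,gamma injected as List<String>
    static List<String> toList(String rawValue) {
        return Arrays.asList(rawValue.split(","));
    }

    public static void main(String[] args) {
        System.out.println(toList("alpha,beta,gamma"));
    }
}
```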

If the default Converter doesn’t match your requirements and you want e.g. to convert a property into a domain object, you can plug-in a custom Converter<T>.

For example, I’ll convert a config property into a Token instance:

public class Token {

    private String name;
    private String payload;

    public Token(String name, String payload) {
        this.name = name;
        this.payload = payload;
    }

    // getter & setter
}

The custom converter needs to implement the Converter<Token> interface. The converter method accepts a raw string value and returns the custom domain object, in this case, an instance of Token:

public class CustomConfigConverter implements Converter<Token> {

    @Override
    public Token convert(String value) {
        // split "name, payload" on the comma and trim surrounding whitespace
        String[] chunks = value.split(",");
        return new Token(chunks[0].trim(), chunks[1].trim());
    }
}

To register this converter you can either build your own Config instance and add the converter manually:

int PRIORITY = 100;

Config config = ConfigProviderResolver
                .instance()
                .getBuilder()
                .addDefaultSources()
                .addDiscoveredConverters()
                .withConverter(Token.class, PRIORITY, new CustomConfigConverter())
                .build();

or you can add the fully-qualified name of the converter class to the org.eclipse.microprofile.config.spi.Converter file in /src/main/resources/META-INF/services:

de.rieckpil.blog.CustomConfigConverter

Once your converter is registered, you can start using it. Given this entry in microprofile-config.properties:

my.app.token=TOKEN_1337, SUPER_SECRET_VALUE

the property can be injected as a Token:

public class BasicConfigurationInjection {

    @Inject
    @ConfigProperty(name = "my.app.token")
    private Token token;

}

YouTube video for using MicroProfile Config 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Config in action:

You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Config,

Phil

The post #WHATIS?: Eclipse MicroProfile Config appeared first on rieckpil.


by rieckpil at August 17, 2019 12:00 PM

Request Tracing in Payara Platform 5.192

by Andrew Pielage at August 15, 2019 12:22 PM

Request tracing has been a feature in Payara Platform for a number of years now, and over time it has evolved and changed in a number of ways. The crux of what the feature is remains the same, however: tracing requests through various parts of your applications and the Payara Platform to provide details about their travels.


by Andrew Pielage at August 15, 2019 12:22 PM

Quarkus JAX-RS Service With CORS Support

by admin at August 14, 2019 05:29 AM

quarkus.io ships with the undertow web server and built-in Cross-Origin Resource Sharing (CORS) support. To activate the CORS filter you only have to set a single property (quarkus.http.cors=true).
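The property belongs in src/main/resources/application.properties:

```properties
quarkus.http.cors=true
```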

In this screencast I built a web application from scratch and accessed the backend using the Fetch API with and without activated CORS:



by admin at August 14, 2019 05:29 AM

Payara, "java.lang.NoSuchMethodError: com.google.common.collect..." Problem and Solution

by admin at August 13, 2019 07:37 AM

Payara Server loads its own libraries / JARs, e.g. Google's Guava, first. If your application uses a recent feature (e.g. a method) from a newer version of such a library, you will get an exception like:

Caused by: java.lang.NoSuchMethodError: com.google.common.collect.Sets$SetView.iterator()Lcom/google/common/collect/UnmodifiableIterator;

Turning off the class loader delegation solves the problem:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE payara-web-app PUBLIC "-//Payara.fish//DTD Payara Server 4 Servlet 3.0//EN" "https://docs.payara.fish/schemas/payara-web-app_4.dtd">
<payara-web-app error-url="">
    <class-loader delegate="false"/>
</payara-web-app>

However, refactoring the code, removing the dependency from the pom.xml and using only Java 8+ / Java EE features will make your WARs thinner and Docker deployments faster.



by admin at August 13, 2019 07:37 AM

Productive Java EE, MicroProfile, AI and Deep Learning--airhacks.fm podcast

by admin at August 11, 2019 04:52 AM

Subscribe to airhacks.fm podcast via: spotify, iTunes, RSS

The #49 airhacks.fm episode with Pavel Pscheidl (@PavelPscheidl), covering Java EE's productivity, successful projects and an overview of supervised and unsupervised AI learning algorithms, is available for download.



by admin at August 11, 2019 04:52 AM

Building Microservices with Jakarta EE and MicroProfile

by Edwin Derks at August 09, 2019 12:45 PM

Jakarta EE is the successor to the established Java EE platform. What does this actually mean, what are the differences, and how does Jakarta EE compare to similar frameworks like Spring? Even more importantly, can Jakarta EE be a fit for your projects, even if scalability is a requirement? There are certainly possibilities for you there, thanks to the addition of Eclipse MicroProfile. How this all works, even without having to roll out a pure microservices architecture, will be revealed to you in this #JakartaTechTalk.


by Edwin Derks at August 09, 2019 12:45 PM

JAX-RS Client / Jersey: HTTP Tracing

by admin at August 09, 2019 07:10 AM

To log the HTTP traffic in a JAX-RS client (e.g. in a System Test) with Jersey, you will have to register an instance of LoggingFeature at the Client:


import java.util.logging.Level;
import java.util.logging.Logger;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.glassfish.jersey.logging.LoggingFeature;

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import org.junit.Before;
import org.junit.Test;

public class WorkshopsIT {

    private Client client;
    private WebTarget tut;

    @Before
    public void init() {
        this.client = ClientBuilder.newClient().register(logging());
        this.tut = this.client.target("http://localhost:8080/...");
    }

    LoggingFeature logging() {
        Logger logger = Logger.getLogger(this.getClass().getName());
        return new LoggingFeature(logger, Level.INFO, null, null);
    }

    @Test
    public void request() {
        Response response = this.tut.request(MediaType.APPLICATION_JSON).get();
        assertThat(response.getStatus(), is(200));
        //...
    }
}

The System Test yields:

Running com.airhacks.WorkshopsIT
Aug 09, 2019 8:55:20 AM com.airhacks.WorkshopsIT logging
Aug 09, 2019 8:55:20 AM org.glassfish.jersey.logging.LoggingInterceptor log
INFO: 1 * Sending client request on thread main
1 > GET http://localhost:8080/airhacks/resources/workshops
1 > Accept: application/json

Aug 09, 2019 8:55:20 AM org.glassfish.jersey.logging.LoggingInterceptor log
INFO: 1 * Client response received on thread main
1 < 200
1 < Connection: keep-alive
1 < Content-Length: 23
1 < Content-Type: application/json
1 < Date: Fri, 09 Aug 2019 06:55:20 GMT
{"airhacks":{"workshops":["PWAs","clouds","microservices"]}}



by admin at August 09, 2019 07:10 AM

Deploying MicroProfile Microservices with Tekton

by Niklas Heidloff at August 08, 2019 02:48 PM

This article describes Tekton, an open-source framework for creating CI/CD systems, and explains how to deploy microservices built with Eclipse MicroProfile on Kubernetes and OpenShift.

What is Tekton?

Kubernetes is the de-facto standard for running cloud-native applications. While Kubernetes is very flexible and powerful, deploying applications is sometimes challenging for developers. That’s why several platforms and tools have evolved that aim to make deployments of applications easier, for example Cloud Foundry’s ‘cf push’ experience, OpenShift’s source to image (S2I), various Maven plugins and different CI/CD systems.

Just as Kubernetes has evolved to be the standard for running containers, and just as Knative is evolving to become the standard for serverless platforms, the goal of Tekton is to become the standard for continuous integration and delivery (CI/CD) platforms.

The biggest companies engaged in this project are, at this point, Google, CloudBees, IBM and Red Hat. Because of its importance, the project has been split out of Knative, which is focused on scale-to-zero capabilities.

Tekton comes with a set of custom resources to define and run pipelines:

  • Pipeline: Pipelines can contain several tasks and can be triggered by events or manually
  • Task: Tasks can contain multiple steps. Typical tasks are 1. source to image and 2. deploy via kubectl
  • PipelineRun: This resource is used to trigger pipelines and to pass parameters like location of Dockerfiles to pipelines
  • PipelineResource: This resource is used, for example, to pass links to GitHub repos

MicroProfile Microservice Implementation

I’ve created a simple microservice which is available as open source as part of the cloud-native-starter repo.

The microservice contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

Setup of the Tekton Pipeline

I’ve created five yaml files that define the pipeline to deploy the sample authors microservice.

1) The file task-source-to-image.yaml defines how to 1. build the image within the Kubernetes cluster and 2. push it to a registry.

For building the image, kaniko is used rather than Docker. For application developers this is almost transparent, though. As usual, images are defined via Dockerfiles. The only difference I ran into is how access rights are handled: for some reason I couldn't write the 'server.xml' file into the '/config' directory. To fix this, I had to assign access rights manually in the Dockerfile first: 'RUN chmod 777 /config/'.
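The workaround can be sketched in the Dockerfile. Only the chmod line is taken from the text; the base image and artifact names below are assumptions for illustration:

```dockerfile
FROM open-liberty:kernel
# kaniko could not write into /config without this
RUN chmod 777 /config/
COPY server.xml /config/
COPY target/authors.war /config/dropins/
```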

The source to image task is the first task in the pipeline and has only one step. The screenshot shows a representation of the task in the Tekton dashboard.

2) The file task-deploy-via-kubectl.yaml contains the second task of the pipeline which essentially only runs kubectl commands to deploy the service. Before this can be done, the template yaml file is changed to contain the full image name for the current user and environment.

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-via-kubectl
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDeploymentYamlFile
        description: The path to the yaml file with Deployment resource to deploy within the git source
      ...
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;authors:1;${inputs.params.imageUrl}:${inputs.params.imageTag};g"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
    - name: run-kubectl-deployment
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"

3) The file pipeline.yaml basically only defines the order of the two tasks as well as how to pass parameters between the different tasks.
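A sketch of such a pipeline.yaml under the tekton.dev/v1alpha1 schema; the task and resource names follow the files described above, and parameter wiring is omitted:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline
spec:
  resources:
    - name: git-source
      type: git
  tasks:
    - name: source-to-image
      taskRef:
        name: source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source
    - name: deploy-to-cluster
      taskRef:
        name: deploy-via-kubectl
      runAfter:
        - source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source
```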

The screenshot shows the pipeline after it has been run. The output of the third and last steps of the second task ‘deploy to cluster’ is displayed.

4) The file resource-git-cloud-native-starter.yaml only contains the address of the GitHub repo.

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: resource-git-cloud-native-starter
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: https://github.com/IBM/cloud-native-starter

5) The file pipeline-account.yaml is necessary to define access rights from Tekton to the container registry.

Here are the complete steps to set up the pipeline on the IBM Cloud Kubernetes service. Except for the login steps, the same instructions should work for Kubernetes services on other clouds and for the Kubernetes distribution OpenShift as well.

First get an IBM lite account. It’s free and there is no time restriction. In order to use the Kubernetes service you need to enter your credit card information, but there is a free Kubernetes cluster. After this create a new Kubernetes cluster.

To create the pipeline, invoke these commands:

$ git clone https://github.com/ibm/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)
$ REGISTRY_NAMESPACE=<your-namespace>
$ CLUSTER_NAME=<your-cluster-name>
$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a cloud.ibm.com -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export <output-from-previous-command>
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ kubectl apply -f deployment/tekton/resource-git-cloud-native-starter.yaml 
$ kubectl apply -f deployment/tekton/task-source-to-image.yaml 
$ kubectl apply -f deployment/tekton/task-deploy-via-kubectl.yaml 
$ kubectl apply -f deployment/tekton/pipeline.yaml
$ ibmcloud iam api-key-create tekton -d "tekton" --file tekton.json
$ cat tekton.json | grep apikey 
$ kubectl create secret generic ibm-cr-push-secret --type="kubernetes.io/basic-auth" --from-literal=username=iamapikey --from-literal=password=<your-apikey>
$ kubectl annotate secret ibm-cr-push-secret tekton.dev/docker-0=us.icr.io
$ kubectl apply -f deployment/tekton/pipeline-account.yaml

Execute the Tekton Pipeline

In order to invoke the pipeline, a sixth yaml file pipeline-run-template.yaml is used. As stated above, this file needs to be modified first to contain the exact image name.

The pipeline-run resource is used to define input parameters like the Git repository, location of the Dockerfile, name of the image, etc.

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: pipeline-run-cns-authors-
spec:
  pipelineRef:
    name: pipeline
  resources:
    - name: git-source
      resourceRef:
        name: resource-git-cloud-native-starter
  params:
    - name: pathToContext
      value: "authors-java-jee"
    - name: pathToDeploymentYamlFile
      value: "deployment/deployment.yaml"
    - name: pathToServiceYamlFile
      value: "deployment/service.yaml"
    - name: imageUrl
      value: <ip:port>/<namespace>/authors
    - name: imageTag
      value: "1"
    - name: pathToDockerFile
      value: "DockerfileTekton"
  trigger:
    type: manual
  serviceAccount: pipeline-account

Invoke the following commands to trigger the pipeline and to test the authors service:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment/tekton
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" pipeline-run-template.yaml > pipeline-run-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" pipeline-run-template.yaml.1 > pipeline-run-template.yaml.2
$ sed "s+<tag>+1+g" pipeline-run-template.yaml.2 > pipeline-run.yaml
$ cd ${ROOT_FOLDER}/authors-java-jee
$ kubectl create -f deployment/tekton/pipeline-run.yaml
$ kubectl describe pipelinerun pipeline-run-cns-authors-<output-from-previous-command>
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

After running the pipeline you’ll see two Tekton pods and one authors pod in the Kubernetes dashboard.

Try out this sample yourself!

The post Deploying MicroProfile Microservices with Tekton appeared first on Niklas Heidloff.


by Niklas Heidloff at August 08, 2019 02:48 PM

[EN] Using ConfigProperty in Jakarta Microprofile

by Altuğ Bilgin Altıntaş at August 08, 2019 12:28 PM

We are converting a small Spring app to Jakarta EE. In that process, I would like to share my experience with you in a short format. Let's say you have a property file in resources:

In Spring Framework you can read and assign the key values in properties files like this :

@Value("${newfromconnectionscontroller.connectionsUrl}")
private String connectionsUrl;

@Value("${newfromconnectionscontroller.postsUrl}")
private String postsUrl;

What is the equivalent annotation in Jakarta EE? Here is the answer:

  
@Inject
@ConfigProperty(name = "newfromconnectionscontroller.connectionsUrl")
private String connectionsUrl;

@Inject
@ConfigProperty(name = "newfromconnectionscontroller.postsUrl")
private String postsUrl;

But you have to add the org.eclipse.microprofile dependency to your pom.xml file:

<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>1.3</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

Bye !


by Altuğ Bilgin Altıntaş at August 08, 2019 12:28 PM

Helidon brings MicroProfile 2.2+ support

by dmitrykornilov at August 08, 2019 10:06 AM

We are pleased to announce the 1.2.0 release of Helidon. This release adds support for MicroProfile 2.2 and includes additional bug and performance fixes. Let’s take a closer look at what’s in the release.

MicroProfile

MicroProfile is now a de-facto standard for Java cloud-native APIs. One of the main goals of project Helidon is to deliver support for the latest MicroProfile APIs. The Helidon MicroProfile implementation is called Helidon MP; together with the reactive, non-blocking framework Helidon SE, it forms the core of Helidon.

We have been adding support for newer MicroProfile specifications one by one during the last few releases. The 1.2.0 release brings MicroProfile REST Client 1.2.1 and Open Tracing 1.3. With these pieces in place we now have full MicroProfile 2.2 support.

The full list of supported MicroProfile and Java EE APIs is listed on this image:

As you can see, we added support for two more Java EE APIs: JPA (Persistence) and JTA (Transaction). This is in early access at the moment and you should consider it a preview. We are still working on it, and the implementation and configuration are subject to change.

Here are some examples of using new APIs added in Helidon 1.2.0.

MicroProfile REST Client sample

Register a rest client interface (can be the same one that is implemented by the JAX-RS resource). Note that the URI can be overridden using configuration.

@RegisterRestClient(baseUri = "http://localhost:8081/greet")
public interface GreetResource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    JsonObject getDefaultMessage();
    
    @Path("/{name}")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    JsonObject getMessage(@PathParam("name") String name);
}

Declare the rest client in a class using it (such as a JAX-RS resource in a different microservice):

@Inject
@RestClient
private GreetResource greetService;

And simply use the field to invoke the remote service (this example proxies the request to the remote service):

@GET
@Produces(MediaType.APPLICATION_JSON)
public JsonObject getDefaultMessage() {
    return greetService.getDefaultMessage();
}

Health Check 2.0 sample

Health Check 2.0 has two types of checks (in previous versions a single type existed):

  • Readiness — used by clients (such as Kubernetes readiness check) to check if the service has started and can be used
  • Liveness — used by clients (such as Kubernetes liveness checks) to check if the service is still up and running

Simply annotate an application scoped bean with the appropriate annotation (@Readiness or @Liveness) to create a health check:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class GreetHealthcheck implements HealthCheck {
    private GreetingProvider provider;
  
    @Inject
    public GreetHealthcheck(GreetingProvider provider) {
        this.provider = provider;
    }

    @Override
    public HealthCheckResponse call() {
        String message = provider.getMessage();
        return HealthCheckResponse.named("greeting")
            .state("Hello".equals(message))
            .withData("greeting", message)
            .build();
    }
}
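With this check registered, the liveness endpoint returns a payload along these lines (MicroProfile Health 2.0 response format; the actual values depend on the configured greeting):

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "greeting",
      "status": "UP",
      "data": {
        "greeting": "Hello"
      }
    }
  ]
}
```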

Open Tracing Sample

MicroProfile OpenTracing adds a single annotation, @Traced, to add tracing to CDI beans.

Tracing of JAX-RS resources is automatic (and can be disabled with @Traced(false)). Here is the bean used in the health check example (the method getMessage is traced):

@ApplicationScoped
public class GreetingProvider {
    private final AtomicReference<String> message = new AtomicReference<>();
    
    /**
     * Create a new greeting provider, reading the message 
     * from configuration.
     *
     * @param message greeting to use
     */
    @Inject
    public GreetingProvider(
        @ConfigProperty(name = "app.greeting") String message) {
        this.message.set(message);
    }
    
    @Traced(operationName = "GreetingProvider.getMessage")
    String getMessage() {
        return message.get();
    }
    
    ...
}

Other Enhancements

In addition to MicroProfile 2.2, Helidon 1.2.0 contains a couple other enhancements:

  • HTTP Access Log support for Helidon MP and SE.
  • Early Access: Oracle Universal Connection Pool support: this lets you configure and inject the Oracle UCP JDBC driver as a DataSource in your Helidon MP application.

More to Come

With MicroProfile 2.2 support, Helidon has caught up with most of the other main MicroProfile implementations. We are now pushing Helidon towards MicroProfile 3.0, and we've already taken the first steps. That's why we put a plus after 2.2 in the title. We already have support for Health Check 2.0 (and we'll support it in a backwards-compatible way). That leaves Metrics 2.0 and REST Client 1.3, and we are working hard to deliver them next month.

Stay tuned!

Thanks to Tomas Langer for helping with samples and to Joe Di Pol for great conversational style.


by dmitrykornilov at August 08, 2019 10:06 AM

After the Cloud Native meetup

by Hüseyin Akdogan at August 08, 2019 06:00 AM

On Tuesday (06.08.2019), together with Altuğ and Aykut Bulgu from Red Hat, we held an online meetup on Cloud Native. Cloud Native is a current, popular concept. As a developer, you have very likely encountered Cloud Native in many articles, at conferences you attended, in presentations you watched and podcasts you listened to. Still, for a considerable number of people, a cloud of uncertainty hangs over the concept.

For that reason we started with the question "What is Cloud Native?" and asked how it can be defined. Aykut said that, above all, he sees Cloud Native as an adjective, and pointed out that at its core the topic is related to Agile/agility. He noted that today we no longer talk about the agility of processes but of technologies, that Cloud Native gives you the ability to respond to change quickly, which increases productivity and minimizes technological uncertainty.

Emphasizing that Cloud Native reduces costs, Aykut Bulgu used a conceptualization I value highly, speaking of technology waste, and described a general attitude he observes in the industry: the desire to run before learning to crawl. Asked which kinds of applications or frameworks are better suited to developing Cloud Native applications, he said that Cloud Native applications can be built with platforms/frameworks and tools that follow the single-responsibility principle, can quickly spin up the services they depend on, and are stable, observable (monitoring) and fail-safe; what matters is meeting these properties rather than the specific tool. He also underlined the importance of Kubernetes and OpenShift support.

Finally, we asked Aykut which resources/references he would recommend to those who want to learn Cloud Native. He shared the references below:

We thank Aykut Bulgu for his time and the valuable insights he shared. On this occasion I would also like to mention that our podcast channel will launch soon, and that you will be able to reach this meetup as well as our future online meetups through that channel. We will announce it on our social media accounts when the channel goes live.

See you at another event…


by Hüseyin Akdogan at August 08, 2019 06:00 AM

Update for Jakarta EE community: August 2019

by Tanja Obradovic at August 06, 2019 03:55 PM

We hope you’re enjoying the Jakarta EE monthly email update, which seeks to highlight news from various committee meetings related to this platform. There’s a lot happening in the Jakarta EE ecosystem so if you want to get a richer insight into the work that has been invested in Jakarta EE so far and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in July: 

EclipseCon Europe 2019: Record-high talk submissions

With your help, EclipseCon Europe 2019 reported record-high talk submissions. Thank you all for proposing so many interesting talks! This is not the only record, though: it seems that the track with the biggest number of submissions is Cloud Native Java so if you want to learn how to develop applications and cloud native microservices using Java, EclipseCon Europe 2019 is the place to be. The program will be announced the week of August 5th. 

Speaking of EclipseCon Europe, you don’t want to miss the Community Day happening on October 21; this day is jam-packed with peer-to-peer interaction and community-organized meetings that are ideal for Eclipse Working Groups, Eclipse projects, and similar groups that form the Eclipse community. Plus, there’s also a Community Evening planned for you, where like-minded attendees can share ideas, experiences and have fun! That said, in order to make this event a success, we need your help. What would you like the Community Day & Evening to be all about? Check out this wiki first, then make sure to go over what we did last year. And don’t forget to register for the Community Day and/or Community Evening! 

EclipseCon Europe will take place in Ludwigsburg, Germany on October 21 - 24, 2019. 

JakartaOne Livestream: Registration is open!

Given the huge interest in the Cloud Native Java track at EclipseCon Europe 2019, it's safe to say that JakartaOne Livestream, taking place on September 10, is the fall virtual conference to attend, spanning multiple time zones. Plus, the date coincides with the highly anticipated Jakarta EE 8 release, so make sure to save the date; you're in for a treat!

We hope you’ll attend this all-day virtual conference as it unfolds; this way, you get the chance to interact with renowned speakers, participate in interesting interactions and have all your questions answered during the interactive sessions. Registration is now open so make sure to secure your spot at JakartaOne Livestream! 

No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more. The program will be published soon. Stay tuned!

Jakarta EE 8 release

On September 10, join us in celebrating the Jakarta EE 8 release at JakartaOne Livestream!  

That being said, head over to GitHub to keep track of all the Eclipse EE4J projects. Noticeable progress has been made on Final Specifications Releases, Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so make sure to check out the progress and contribute!  

Jakarta EE Trademark guidelines: Updates 

Version 1.1 of the Jakarta EE Trademark Guidelines is out! This document supplements the Eclipse Foundation Guidelines for Eclipse Logos & Trademarks Policy to address the permitted usage of the Jakarta EE Marks, including the following names and/or logos: 

  • Jakarta EE

  • Jakarta EE Working Group

  • Jakarta EE Member 

  • Jakarta EE Compatible 

The full guidelines on the usage of the Jakarta EE Marks are described in the Jakarta EE Brand Usage Handbook.

EFSP: Updates

Version 1.2 of the Eclipse Foundation Specification Process was approved on June 30, 2019. The EFSP leverages and augments the Eclipse Development Process (EDP), which defines important concepts, including the Open Source Rules of Engagement, the organizational framework for open source projects and teams, releases, reviews, and more.

JESP: Updates

Jakarta EE Specification Process v1.2 was approved on July 16, 2019. The JESP has undergone a few modifications, including:

  • changed ballot periods for the progress and release (including service releases) reviews from 30 to 14 days

  • the Jakarta EE Specification Committee now adopts the EFSP v1.2 as the Jakarta EE Specification Process

TCK process finalized 

The TCK process has been finalized. The document defines:

  • Materials a TCK MUST possess to be considered suitable for delivering portability

  • Process for challenging tests and how these challenges are resolved

  • Means of excluding released TCK tests from certification requirements

  • Policy on improving TCK tests for released specifications

  • Process for self-certification

Jakarta EE Community Update: July video call

The most recent Jakarta EE Community Update meeting took place in mid-July; the conversation included topics such as Jakarta EE 8 release, status on progress and plans, Jakarta EE TCK process update, brief update re. transitioning from javax namespace to the jakarta namespace, as well as details about JakartaOne Livestream and EclipseCon Europe 2019.   

The materials used in the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Please make sure to join us for the August 14 community call.

Cloud Native Java eBook: Coming soon!

What does cloud native Java really mean to developers? What does the cloud native Java future look like? Where is Jakarta EE headed? Which technologies should be part of your toolkit for developing cloud native Java applications? 

All these questions (and more!) will be answered soon; we’re developing a downloadable eBook on the community's definition and vision for cloud native Java, which will become available shortly before Jakarta EE 8 is released. Stay tuned!

Eclipse Newsletter: Jakarta EE edition  

The Jakarta community has made great progress this year and the culmination of all this hard work is the Jakarta EE 8 release, which will be celebrated on September 10 at JakartaOne Livestream.

In honor of this milestone, the next issue of the Eclipse Newsletter will focus entirely on Jakarta EE 8. If you’re not subscribed to the Eclipse Newsletter, make sure to do that before the Jakarta EE issue is released - on August 22! 

Meet the Jakarta EE Working Group Committee Members 

It takes a village to create a successful project and the Jakarta EE Working Group is no different. We’d like to honor all those who have demonstrated their commitment to Jakarta EE by presenting the members of all the committees that work together toward a common goal: steering Jakarta EE toward its exciting future. As a reminder, Strategic members appoint their representatives, while the representatives for Participant and Committer members were elected in June.

The list of all Committee Members can be found here.

Steering Committee 

Will Lyons (chair) - Oracle, Ed Bratt - alternate

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Ian Robinson - alternate

Steve Millidge - Payara, Mike Croft - alternate

Mark Little - Red Hat, Scott Stark - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Martijn Verburg - London Java Community - Elected Participant Representative

Ivar Grimstad - Elected Committer Representative

Specifications Committee 

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Kevin Sutter - alternate

Bill Shannon - Oracle, Ed Bratt - alternate

Steve Millidge - Payara, Arjan Tijms - alternate

Scott Stark - Red Hat, Mark Little - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Ivar Grimstad - PMC Representative

Alex Theedom - London Java Community - Elected Participant Representative

Werner Keil - Elected Committer Representative

Paul Buck - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Marketing and Brand Committee

Michael DeNicola - Fujitsu, Kenji Kazumura - alternate 

Dan Bandera - IBM, Neil Patterson - alternate

Ed Bratt - Oracle, David Delabassee - alternate

Dominika Tasarz - Payara, Jadon Orglepp - alternate

Cesar Saavedra - Red Hat, Paul Hinz - alternate

David Blevins - Tomitribe, Jonathan Gallimore - alternate

Theresa Nguyen - Microsoft - Elected Participant Representative

VACANT - Elected Committer Representative

Thabang Mashologu - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Jakarta EE presence at events and conferences: July overview 

Cloud native was the talk of the town in July. Conferences such as JCrete, J4K, and Java Forum Stuttgart, to name a few, were all about open source and cloud native and how to tap into this key approach for IT modernization success. The Eclipse Foundation and the Jakarta EE Working Group members were there to take the pulse of the community and better understand the adoption of cloud native technologies.

For example, IBM’s Graham Charters and Steve Poole featured Jakarta EE and Eclipse MicroProfile in demonstrations at the IBM Booth at OSCON; Open Source Summit 2019 participants should expect another round of Jakarta EE and Eclipse MicroProfile demonstrations from IBM representatives. 



Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. Don’t forget to follow us on Twitter to get the latest news and updates!

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  

 


by Tanja Obradovic at August 06, 2019 03:55 PM

Payara Platform 2019 Roadmap - Update

by Steve Millidge at August 06, 2019 10:02 AM

It's been 6 months since I posted our last roadmap update. The team has been working hard to deliver what we promised at the beginning of the year, releasing both our 191 and 192 releases since then. I therefore thought it was a good time to reflect on what we've delivered so far and what we've still got to do.

 


by Steve Millidge at August 06, 2019 10:02 AM

#HOWTO: Write Java EE applications with Kotlin

by rieckpil at August 04, 2019 06:33 PM

The precise and clean style of writing code with Kotlin is tempting to a Java developer. In addition, switching to Kotlin with a Java background is rather simple. But what about using Kotlin for an existing Java EE application, or starting with it on a new project? Read this blog post to get a first impression of what it takes to use Kotlin alongside Java EE in your projects.

The following technologies are used in this example: Kotlin 1.3.41, Java 11, Java EE 8, MicroProfile 2.2 running on a dockerized Open Liberty 19.0.0.7 with Eclipse OpenJ9.

Maven project setup for Kotlin

The classic Java EE Maven pom.xml needs just two adjustments: the kotlin-stdlib dependency and the kotlin-maven-plugin:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.rieckpil.blog</groupId>
    <artifactId>java-ee-with-kotlin</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>

        <kotlin.version>1.3.41</kotlin.version>
        <kotlin.compiler.incremental>true</kotlin.compiler.incremental>

        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>2.2</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-stdlib</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>java-ee-with-kotlin</finalName>

        <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
        <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>

        <plugins>
            <plugin>
                <groupId>org.jetbrains.kotlin</groupId>
                <artifactId>kotlin-maven-plugin</artifactId>
                <version>${kotlin.version}</version>
                <configuration>
                    <compilerPlugins>
                        <plugin>jpa</plugin>
                    </compilerPlugins>
                    <jvmTarget>11</jvmTarget>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>org.jetbrains.kotlin</groupId>
                        <artifactId>kotlin-maven-noarg</artifactId>
                        <version>${kotlin.version}</version>
                    </dependency>
                </dependencies>
                <executions>
                    <execution>
                        <id>compile</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>test-compile</id>
                        <phase>test-compile</phase>
                        <goals>
                            <goal>test-compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

For JPA, we add the kotlin-maven-noarg dependency and enable the jpa compiler plugin, which generates parameterless constructors for our JPA entities. In addition, I’m activating the incremental Kotlin compiler in the properties section to decrease repetitive build times, which is optional.

Pitfalls for Java EE using Kotlin

The two most common pitfalls for Java EE with Kotlin are non-final classes and methods, and parameterless constructors. While Kotlin makes classes final by default, several specifications require non-final classes to work. This is required, for example, to create proxies and enrich the functionality of public methods (e.g. EJB’s transaction management).
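To see this pitfall in isolation, here is a minimal, standalone sketch (with hypothetical class names, not taken from the sample project) of why final-by-default matters for container-generated proxies:

```kotlin
// Kotlin classes and functions are final by default. Without the `open`
// modifier on both the class and the method, the subclass below would not
// compile -- and a container could not generate its runtime proxies either.
open class GreetingService {
    open fun greet(): String = "hello"
}

// Stand-in for the kind of proxy subclass a CDI/EJB container generates,
// e.g. to wrap a business method call in a transaction:
class TransactionalProxy : GreetingService() {
    override fun greet(): String = "tx:" + super.greet()
}

fun main() {
    // Client code only sees the base type; the proxy decorates the call.
    val service: GreetingService = TransactionalProxy()
    println(service.greet()) // prints "tx:hello"
}
```

Removing either `open` turns this into a compile error, which is exactly what happens to non-open beans when the container tries to proxy them.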

The following sample project shows where these requirements arise and how to solve them. The project provides a REST interface to retrieve books. The JAX-RS resource class looks like the following:

@Path("books")
@Produces(MediaType.APPLICATION_JSON)
class BookResource {

    @Inject
    private lateinit var bookService: BookService

    @GET
    fun getAllBooks(): Response = Response.ok(bookService.getAllBooks()).build()

    @GET
    @Path("/{id}")
    fun getBookById(@PathParam("id") id: Long): Response {
        val book: Book = bookService.getBookById(id)
            ?: return Response.status(Response.Status.NOT_FOUND).build()
        return Response.ok(book).build()
    }
}

The first thing to pay attention to is the injection of other beans. Here I’m injecting the BookService to retrieve both all books and a book by its id. To avoid cumbersome null-checks, we use lateinit to tell the Kotlin compiler that this instance is injected at runtime. Normally, Kotlin requires properties declared with a non-null type to be initialized in the constructor.

Next, I’m using Book? as the return type of the method that gets a book by its id, as there might be no book in the database for the provided id. The Elvis operator ?: performs the required null-check and returns an HTTP 404 status in case of null.
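Both features can be tried outside of any container. The following standalone sketch (hypothetical names, no JAX-RS involved) shows lateinit and the Elvis operator in isolation:

```kotlin
// `lateinit` defers initialization of a non-null property; without it,
// Kotlin would force us to initialize the property in the constructor.
class BookHolder {
    lateinit var owner: String // in the real resource, the container injects this

    // A nullable lookup, like getBookById(): returns null when nothing is found.
    fun findTitle(id: Long): String? = if (id == 1L) "Java EE 8" else null
}

fun main() {
    val holder = BookHolder()
    holder.owner = "Duke" // simulating the container's injection step

    // The Elvis operator `?:` evaluates its right-hand side only on null.
    val found = holder.findTitle(1L) ?: "404"
    val missing = holder.findTitle(42L) ?: "404"
    println("$found / $missing") // prints "Java EE 8 / 404"
}
```

Reading a lateinit property before it is assigned throws an UninitializedPropertyAccessException, which is why it should only be used where initialization is guaranteed, such as container injection.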

The corresponding BookService is a singleton EJB and delegates access to the database using the EntityManager:

@Singleton
@Startup
open class BookService {

    @PersistenceContext
    private lateinit var entityManager: EntityManager

    @PostConstruct
    open fun setUp() {
        println("Initializing books ...")
        entityManager.persist(Book(null, "Java EE 8", "Duke"))
        entityManager.persist(Book(null, "Jakarta EE 8", "Duke"))
        entityManager.persist(Book(null, "MicroProfile 2.2", "Duke"))
        println("... finished initializing books")
    }

    open fun getAllBooks(): List<Book> = entityManager
         .createQuery("SELECT b FROM Book b", Book::class.java).resultList
    open fun getBookById(id: Long): Book? = entityManager.find(Book::class.java, id)
}

The open keyword on class- and method-level is required to make both non-final.

Writing JPA entities

For writing JPA entities, we can utilize Kotlin’s data classes. The only thing to keep in mind is that JPA needs a public no-arg constructor. Enabling the jpa compiler plugin in the Maven build section makes sure we get a parameterless constructor for all our @Entity classes, so we don’t have to worry:

@Entity
@Table(name = "books")
data class Book(
        @Id
        @GeneratedValue
        var id: Long?,

        @Column(nullable = false, unique = true)
        val title: String,

        @Column(nullable = false)
        val author: String)

Final thoughts for Java EE with Kotlin

Converting an existing Java EE application written in Java to Kotlin might not be that simple. You have to make sure that dependency injection, interceptors in general, and your JPA entities keep working. This blog post should give you a first impression of where to pay attention when using Kotlin. If the precise and elegant style of writing source code with Kotlin outweighs the additional pitfalls for you, give it a try!

You can find the whole example on GitHub.

Further resources on this topic:

  • Kotlin and Java EE article on DZone
  • Working with Kotlin and JPA on Baeldung
  • Kotlin EE: Boost your Productivity by Marcus Fihlon talk

Have fun using Kotlin for your Java EE project,

Phil

The post #HOWTO: Write Java EE applications with Kotlin appeared first on rieckpil.


by rieckpil at August 04, 2019 06:33 PM

The Payara Monthly Catch for July 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at August 01, 2019 02:09 PM

Another great month in the bag. There were awards, conferences, out-of-incubation releases, competitions, surveys, and lots more going on. Below you will find a curated list of some of the most interesting news, articles, and videos from this month. Can't wait until the end of the month? Then visit our Twitter page, where we post all these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at August 01, 2019 02:09 PM

A preview of MicroProfile GraphQL

by Jean-François James at July 25, 2019 01:05 PM

This blog post is related to my GitHub project mpql-preview. Objectives: This project aims at providing a preview of the future Eclipse MicroProfile GraphQL specification. If you don’t know what MicroProfile GraphQL is about, please have a look at my previous post and on GitHub. In short, it aims at making GraphQL a first-class citizen […]

by Jean-François James at July 25, 2019 01:05 PM

Understanding the Current Java Moment

by Rhuan Henrique Rocha at July 21, 2019 05:48 PM

The Java platform is one of the most used platforms of recent years and has the largest ecosystem in the world of technology. It lets us develop applications for several platforms, such as Windows, Linux, embedded systems, and mobile. However, Java has received many complaints: Java is fat, Java takes a lot of memory, Java is verbose. In fact, Java was created to solve big problems, not small ones, although it can be used for small problems too. You can solve small problems with Java, but you see the real benefit of Java when you have a big problem, mainly one involving enterprise environments. When you create a hello-world application in Java and compare it to one written in another language, you may see greater memory use and more lines of code. But when you create a big application that integrates with other applications and resources, you see the real benefit of the Java platform.

Java is great for the enterprise environment because of its power to solve complex problems and its multi-platform nature, but also because it gives the business more security by providing backward compatibility and solutions based on specifications. The business has more of a guarantee that a new Java update won’t break its systems, and it gets vendor-decoupled solutions, allowing it to change vendors when needed.

Java has a big ecosystem, with emphasis on Java EE (now Jakarta EE), which promotes several specifications to solve common problems in the enterprise environment. Some of these specifications are EJB, JPA, JMS, JAX-RS, and JAX-WS. Furthermore, we have Spring, which has tuned the Java ecosystem; although it is not based on specifications, it uses some specifications from Java EE.

Cloud Computing and Microservices

Cloud computing is a concept that has grown steadily over the years and has changed how developers architect, write, and think about applications. Cloud computing is a set of principles and approaches that aims to provide computing resources as a service (PaaS, IaaS, SaaS). With this, we can use only the resources an application actually needs, and scale when required. We can thus optimize computing resource usage and consequently optimize cost for the business. That is fantastic, but to benefit from cloud computing, applications must follow this approach. Microservice architecture emerged as a good way to architect and think about applications for cloud computing (cloud-native applications).

Microservice architecture is an approach that breaks a big application (a monolith) into many micro-applications or microservices, generally along business domains. With this, we can scale only the business domains that really need it, without scaling all of them; we gain fault tolerance, because if one business domain fails, the others do not fail with it; and we gain resilience, because a microservice that fails can be restored. Microservice architecture thus lets us exploit the benefits of cloud computing and optimize the use of computing resources.

Java and Cloud Computing

As said above, “In fact, Java was created to solve big problems, not small ones.” But the cloud-native application approach breaks a big, complex application into many small, simpler applications (such as microservices). Furthermore, the life cycle of an application is much shorter in a microservice architecture than in a monolith. Besides that, in the cloud-native approach the complexity is not in the applications but in the communication between them (their integrations), their management, and their monitoring. In other words, the complexity lies in how these applications (microservices) interact with each other and how quickly we can identify a problem in one of them. With this, the Java platform and its ecosystem had several gaps to close, shown below:

Fat JVM: Many Java applications started with libraries that were never used, and the JVM loaded several things the application didn’t need. That is okay for a big application that solves complex problems, but for small applications (like microservices) it is not so good.

JVM JIT Optimization: The JVM has JIT optimization, which optimizes a running application over time. In other words, a longer application life cycle means more optimization. The JVM is therefore better suited to running an application for a long time than for a short time. In cloud computing, applications are born and die all the time, and their life cycles are shorter.

Java Applications Have a Longer Boot Time: Many Java applications have a long boot time compared to applications written in other languages, because they commonly do a lot of work at boot time.

Java Generates a Fat Package (war, ear, jar): Many Java applications have a large package size, mainly when they bundle libraries (in the lib folder). This can increase delivery time, degrading the delivery process.

Java EE Has No Standard Solutions for Microservices: Java EE has many important specs for solving enterprise problems, but it has no specs for the problems that come with microservice architecture and cloud computing.

Updates to Java and Java EE Are Slow: Java and Java EE had a slow process for updating existing features and creating new ones. That is bad, because the enterprise environment is continuously changing and faces new challenges all the time.

With this, the Java ecosystem saw several changes and initiatives to close each gap created by cloud computing and put Java on top again.

Java On Top Again

The Java platform is robust and offers solutions for many things, but to me that is not the best of Java. To me, the best of the Java world is the community, which is very strong and hard-working. In a short time, the Java community promoted many actions and initiatives that boosted the Java platform toward cloud computing, bringing Java ever closer to the cloud-native application approach. Many people call this Cloud Native Java. The principal actions and initiatives in the Java ecosystem are: Jakarta EE, MicroProfile, the new Java release cycle, improvements to the Java language, improvements to the JVM, and Quarkus. I’ll explain how each of these has impacted the Java ecosystem.

Jakarta EE: Java EE was one of the most important projects in the Java ecosystem. Java EE promoted many standard solutions to enterprise problems, but the project was migrated from Oracle to the Eclipse Foundation, underwent many changes in its working structure, and is now called Jakarta EE.

Jakarta EE is an umbrella project that promotes standard solutions (specifications) for the enterprise world, with a new process for approving new features and evolving existing ones. With this, Jakarta EE can evolve fast and keep improving its enterprise solutions. That is nice, because nowadays the enterprise changes very fast and faces new challenges all the time. As technology is a tool for innovation, it needs to be able to change quickly when required.

MicroProfile: Java EE, and now Jakarta EE, has many good solutions for the enterprise world, but it does not have standard solutions for many problems of microservice architecture. That does not mean you cannot implement solutions for a microservice architecture, but you would need to implement them yourself, and those solutions would be entirely in your hands.

MicroProfile is an umbrella project that promotes standard solutions (specifications) to microservice architecture problems. MicroProfile is compatible with Java EE and lets developers build applications with a microservice architecture more easily. Some of these specifications are MicroProfile Config, MicroProfile OpenTracing, MicroProfile Rest Client, and MicroProfile Fault Tolerance.

Java Release Cycle: The Java release cycle changed, and Java releases now ship every six months. It’s an excellent change, because it lets the Java platform respond quickly to new challenges. Besides that, it promotes faster evolution of the platform.

Improvements to the Java Language: Java has had several changes that improved features such as its functional programming support. Besides that, the language gained the Jigsaw project, which introduced modularity to Java. With this, we can create thinner Java applications that can be easily scaled.

Improvements to the JVM: The JVM had some issues when used in containers, mainly around measuring memory and CPU. That was bad, because containers are very important to cloud computing: with containers we don’t deliver just the application, we deliver the whole environment with its dependencies.

Since Java 9, the JVM has had many updates that improved how it behaves in containers. With this, the JVM is closer to the necessities of cloud computing.

Quarkus: Quarkus is the latest news in the Java ecosystem and has been at the top of the talks. Quarkus is a project tailored to GraalVM and OpenJDK HotSpot that promotes a Kubernetes-native Java application stack, letting developers write cloud applications using best-of-breed Java libraries and standards. With Quarkus we can write applications with very fast boot times, incredibly low RSS memory usage, and an amazing set of tools that make developers’ lives easier.

Quarkus is really an amazing project that points to a new future for the Java platform. It works with a container-first concept and uses compile-time boot to speed up Java applications. If you want to know more about Quarkus, click here.

All of these projects and initiatives bring Java back into focus and start a new era for the Java platform. With this, Java enters cloud computing with its way of working through specifications, promoting standardized solutions for the cloud. That is great for Java and for cloud computing, because from these standardized solutions many enterprise solutions will emerge, backed by many companies, making their adoption safer.

by Rhuan Henrique Rocha at July 21, 2019 05:48 PM

Using Jakarta Security on Tomcat and the Payara Platform

by Arjan Tijms at July 18, 2019 11:06 AM

Java EE Security API is one of the new APIs in Java EE 8. With Java EE currently being transferred and rebranded to Jakarta EE, this API will soon be rebranded to Jakarta Security, which is the term we'll use in this article. Jakarta Security is part of the Jakarta APIs, included and active in the Payara Platform by default with no configuration required in order to use it. With some effort, Jakarta Security can be used with Tomcat, as well.  


by Arjan Tijms at July 18, 2019 11:06 AM

Update for Jakarta EE community: July 2019

by Tanja Obradovic at July 15, 2019 04:19 PM

Two months ago, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. There are a few ways to get richer insight into the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in June: 

JakartaOne LiveStream: All eyes on Cloud Native Java

Are you interested in the current state and future of Jakarta EE? Would you like to explore other related technologies that should be part of your toolkit for developing Cloud Native Java applications? Then JakartaOne Livestream is for you! No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more.  

You should join the JakartaOne Livestream speaker lineup if you want to:

  • Show the world how you and/or your organization are using Jakarta EE technologies to develop cutting-edge solutions. 

  • Demonstrate how Jakarta EE and Java EE features can be used today to develop cloud native solutions. 

This one-day virtual conference, which takes place September 10, 2019, is currently accepting submissions from speakers so if you have an idea for a talk that will educate and inspire the Jakarta community, now’s the time to submit your pitch!  The deadline for submissions is today, July 15, 2019. 

Note: All the JakartaOne Livestream sessions and keynotes are chosen by an independent program committee made up of volunteers from the Jakarta EE and Cloud Native Java community: Reza Rahman, who is also the program chair, Adam Bien, Arun Gupta, Ivar Grimstad, Josh Juneau, and Tanja Obradovic.

As this inaugural event is a one-day event only, the number of accepted sessions is limited. Submit your talk now!

Even though all the talks will be recorded and made available later on the Jakarta EE website, make sure to attend the virtual conference so you can interact directly with the speakers. We do hope you will attend “live”, as it will lead to more questions and more interactive sessions.


Jakarta EE 8 release and progress

Are you keeping track of Eclipse EE4J projects on GitHub? Have you noticed that the Jakarta EE Platform Specifications are now available on GitHub? If not, please do! Also, check out the creation and progress of the specification projects, which track the conversion of the "Eclipse Project for ..." projects into specification projects, setting them up for specification work as defined by the Eclipse Foundation Specification Process, as well as the Specification Document Names.

Noticeable progress has been made on Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so head over to GitHub to discover all the improvements and all the bits and pieces that have already been resolved.  

Work on the TCK process is in progress, with Scott Stark, Vice President of Architecture at Red Hat, leading the effort. The TCK process document v 1.0 is expected to be completed in the very near future. The document will shed light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them and more. 

Jakarta EE 8 is expected to be released on September 10, 2019, just in time for JakartaOne Livestream.  

Javax package namespace discussions

The specification committee has put out two approaches regarding restrictions on javax package namespace use for the community to consider, namely Big Bang and Incremental. 

Based on the input we received from the community and discussions within the Working Group, the specification committee has not yet reached consensus on the approach to be taken; it first wants the work on binary compatibility to be explored further. With that in mind, the Working Group members will invest time in the technical approach for binary compatibility and then propose and decide on the option that is best for customers, vendors, and developers.

Please refer to David Blevins’ presentation from the Jakarta EE Update call on June 12th, 2019.

If you want to dive deeper into this topic, David Blevins has written a helpful analysis of the javax package namespace matter, in which he answers questions like "If we rename javax.servlet, what else has to be renamed?" 

JCP Copyright Licensing request: Your assistance in this matter is greatly appreciated

As part of Java EE’s transfer to the Eclipse Foundation under the Jakarta EE name, it is essential to ensure that the Foundation has the necessary rights so that the specifications can be evolved under the new Jakarta EE Specification Process. For this, we need your help!

We are currently requesting copyright licenses from all past contributors to Java EE specifications under the JCP; we are reaching out to all companies and individuals who made contributions to Java EE in the past to help out, execute the agreements and return them back to the Eclipse Foundation. As the advancement of the specifications and the technology is at stake, we greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

For more information about this topic, read Tanja Obradovic’s blog. If you have questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org.

Election results for Jakarta EE working group committees

The nomination period for elections to the Jakarta EE committees is now closed. 

Almost all positions have been filled, with the exception of the Committer representative on the Marketing Committee, due to lack of nominees.   

The representatives for 2019-20 on the committees, starting July 1, 2019, are: 

Participant Representative:

STEERING COMMITTEE - Martijn Verburg (London Java Community)

SPECIFICATIONS COMMITTEE - Alex Theedom (London Java Community)

MARKETING COMMITTEE - Theresa Nguyen (Microsoft)

Committer Representative:

STEERING COMMITTEE - Ivar Grimstad

SPECIFICATIONS COMMITTEE - Werner Keil

MARKETING COMMITTEE - Vacant

Jakarta EE Community Update: June video call

The most recent Jakarta EE Community Update meeting took place in June; the conversation included topics such as Jakarta EE 8 progress and plans, headway with specification name changes and specification scope definitions, a TCK process update, copyright license agreements, a PMC/projects update, and more.

The materials used in the Jakarta EE community update meeting are available here, and the recorded Zoom conversation can be found here.

Please make sure to join us for the July 17th call.

EclipseCon Europe 2019: Call for Papers open until July 15

You can still submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The Call for Papers (CFP) is closing soon so if you have an idea for a talk that will educate and inspire the Eclipse community, now’s the time to submit your talk! The final submission deadline is July 15. 

The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. 


Jakarta EE presence at events and conferences: June overview


Eclipse DemoCamp Florence 2019

Tomitribe: presence at JNation in Portugal 

 

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. 

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


 

 


by Tanja Obradovic at July 15, 2019 04:19 PM

#HOWTO: Intercept method calls using CDI interceptors

by rieckpil at July 14, 2019 10:05 AM

If you have cross-cutting concerns that apply to several parts of your application, you usually don’t want to copy and paste the code. For Java EE applications, the CDI (Contexts and Dependency Injection) spec includes the concept of interceptors, which are defined in the Java Interceptors specification. With these CDI interceptors, you can intercept business methods, EJB timer timeouts, and lifecycle events.

With this blog post, I’ll demonstrate where and how to use the interceptors for a Java EE 8 application, using Java 8 and running on Payara 5.192.

Injection points for interceptors

Even though interceptors are defined in their own specification, they can intercept EJBs (session beans and message-driven beans) as well as CDI managed beans. The latest Java Interceptors release, 1.2, is part of the JSR-318 maintenance release, and the CDI spec builds upon its basic functionality.

The specification defines five types of injection points for interceptors:

  • @AroundInvoke intercepts a business method call
  • @AroundTimeout intercepts the timeout method of an EJB timer
  • @AroundConstruct wraps the construction of the target class
  • @PostConstruct intercepts the post-construct lifecycle event
  • @PreDestroy intercepts the pre-destroy lifecycle event

For most of these injection points, I’ll provide an example in the following sections.

Writing CDI interceptors

Writing a CDI interceptor is as simple as the following:

@Interceptor
public class SecurePaymentInterceptor {

    @AroundInvoke
    public Object securePayment(InvocationContext invocationContext) throws Exception {
        return invocationContext.proceed();
    }
}

You just annotate a class with @Interceptor and add methods for intercepting your desired injection points. Within an interceptor method, you have access to the InvocationContext. With this object, you can retrieve the name of the intercepted method and its parameters, and you can also manipulate them. Make sure to call the .proceed() method if you want to continue with the execution of the original method.
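The mechanics behind @AroundInvoke can be illustrated without a container. The following self-contained sketch uses a JDK dynamic proxy in place of the CDI proxy (a simplification, not how CDI implements interception internally; all class and method names are invented for this demo):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterceptionDemo {

    interface PaymentProvider {
        String withdraw(String customer, BigDecimal amount);
    }

    // Wraps the target in a dynamic proxy whose handler plays the role of an
    // @AroundInvoke method: it inspects the parameters, may replace them, and
    // finally "proceeds" by invoking the original method.
    static PaymentProvider intercepted(PaymentProvider target) {
        InvocationHandler aroundInvoke = (proxy, method, params) -> {
            if ("duke".equalsIgnoreCase((String) params[0])) {
                params[0] = "Duke";
                params[1] = new BigDecimal("999.99").setScale(2, RoundingMode.HALF_UP);
            }
            return method.invoke(target, params); // like invocationContext.proceed()
        };
        return (PaymentProvider) Proxy.newProxyInstance(
                PaymentProvider.class.getClassLoader(),
                new Class<?>[]{PaymentProvider.class},
                aroundInvoke);
    }

    public static void main(String[] args) {
        PaymentProvider provider = (customer, amount) ->
                "Withdrawing " + amount + " from " + customer;
        System.out.println(intercepted(provider).withdraw("duke", new BigDecimal("42.00")));
        // prints: Withdrawing 999.99 from Duke
    }
}
```

A real CDI container additionally resolves interceptor bindings and priorities before building such a chain.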

As an example I’m going to intercept the following EJB:

@Startup
@Singleton
@ManipulatedPayment
public class PaymentProvider {

    @PostConstruct
    public void setUpPaymentProvider() {
        System.out.println("Setting up payment provider ...");
    }

    public void withdrawMoneyFromCustomer(String customer, BigDecimal amount) {
        System.out.println("Withdrawing money from " + customer + " - amount: " + amount);
    }
}

To demonstrate how to manipulate method parameters, I’m going to change the amount passed to .withdrawMoneyFromCustomer(String customer, BigDecimal amount) if the customer name is duke. In addition, I’m logging a single line to the console once the lifecycle event interceptors are triggered:

@Interceptor
public class PaymentManipulationInterceptor {

    @Inject
    private PaymentManipulator paymentManipulator;

    @AroundInvoke
    public Object manipulatePayment(InvocationContext invocationContext) throws Exception {

        if (invocationContext.getParameters()[0] instanceof String) {
            if (((String) invocationContext.getParameters()[0]).equalsIgnoreCase("duke")) {
                paymentManipulator.manipulatePayment();
                invocationContext.setParameters(new Object[]{
                        "Duke", new BigDecimal("999.99").setScale(2, RoundingMode.HALF_UP)
                });
            }
        }

        return invocationContext.proceed();
    }

    @AroundConstruct
    public void aroundConstructInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           invocationContext.getConstructor().getDeclaringClass() + " will be manipulated");
        invocationContext.proceed();
    }

    @PostConstruct
    public void postConstructInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           invocationContext.getMethod().getDeclaringClass() + " is ready for manipulation");
        invocationContext.proceed();
    }

    @PreDestroy
    public void preDestroyInterception(InvocationContext invocationContext) throws Exception {
        System.out.println(
           "Stopped manipulating of class " + invocationContext.getMethod().getDeclaringClass());
        invocationContext.proceed();
    }
}

For a more realistic example, I’m creating an interceptor that intercepts JAX-RS methods and checks whether a required HTTP header is set. If the header is not present, the server responds with HTTP status 400:

@Interceptor
public class SecurePaymentInterceptor {

    @Context
    private HttpHeaders headers;

    @AroundInvoke
    public Object securePayment(InvocationContext invocationContext) throws Exception {

        String requiredHttpHeader = invocationContext
                .getMethod()
                .getAnnotation(SecurePayment.class)
                .requiredHttpHeader();

        if (headers.getRequestHeaders().containsKey(requiredHttpHeader)) {
            return invocationContext.proceed();
        } else {
            throw new WebApplicationException(
             "Missing HTTP header: " + requiredHttpHeader, Response.Status.BAD_REQUEST);
        }

    }
}

The required HTTP header is stored in the annotation @SecurePayment(requiredHttpHeader="X-Duke"), which is used to bind an interceptor to a method or class, as you’ll see in the next section.
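The getAnnotation(...) lookup the interceptor performs is plain annotation reflection. Here is a container-free sketch of the same mechanism, with the annotation and resource class redefined locally for illustration (all names invented for the demo):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookupDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    @interface SecurePayment {
        String requiredHttpHeader() default "X-Duke";
    }

    static class PaymentResource {
        @SecurePayment(requiredHttpHeader = "X-Secure-Payment")
        public String getPayment() {
            return "ok";
        }
    }

    // Mirrors invocationContext.getMethod().getAnnotation(SecurePayment.class)
    static String requiredHeader(Method method) {
        SecurePayment annotation = method.getAnnotation(SecurePayment.class);
        return annotation != null ? annotation.requiredHttpHeader() : null;
    }

    static String demoLookup() {
        try {
            return requiredHeader(PaymentResource.class.getMethod("getPayment"));
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demoLookup()); // prints: X-Secure-Payment
    }
}
```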

Binding interceptors to methods and classes

Up until now we have just created CDI interceptors but did not bind them to a specific method or class. For this we’ll use a custom annotation marked with @InterceptorBinding. The simplest annotation for this looks like the following:

@InterceptorBinding
@Target({TYPE, METHOD})
@Retention(RUNTIME)
public @interface ManipulatedPayment {
}

We can also add custom attributes to the annotation, as we need for our @SecurePayment binding to specify the required HTTP header:

@InterceptorBinding
@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface SecurePayment {
    
   @Nonbinding String requiredHttpHeader() default "X-Duke";
   
}

Once this annotation is in place, we have to add it to the interceptor class and to the method or class (to include all methods of this class) we want to intercept:

@Interceptor
@SecurePayment
public class SecurePaymentInterceptor {

    // ...

}

@Path("payments")
public class PaymentResource {

    @Inject
    private PaymentProvider paymentProvider;

    @GET
    @Path("/{customerName}")
    @SecurePayment(requiredHttpHeader = "X-Secure-Payment")
    public Response getPaymentForCustomer(@PathParam("customerName") String customerName) {

        paymentProvider
                .withdrawMoneyFromCustomer(customerName,
                        new BigDecimal("42.00").setScale(2, RoundingMode.HALF_UP));

        return Response.ok("Payment was withdrawn from customer " + customerName).build();
    }

}

Activating CDI interceptors

CDI interceptors are inactive by default, so we have to activate them first. Currently there are two ways to activate them:

  1. Add the fully qualified class name of the interceptor to the beans.xml file
  2. Add the @Priority(int priority) annotation to the interceptor

To show you how both approaches work, I’m using the first one for the first interceptor:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
<interceptors>
    <class>de.rieckpil.blog.PaymentManipulationInterceptor</class>
</interceptors>
</beans>

The second interceptor is activated using the annotation. With the priority value we can specify the execution order when multiple interceptors apply to the same method:

@Priority(42)
@Interceptor
@SecurePayment
public class SecurePaymentInterceptor {

    // ...

}

Putting it all together and hitting /resources/payments/mike and /resources/payments/duke with the X-Secure-Payment header results in the following log output:

[Payara 5.192] [INFO] [[ Clustered CDI Event bus initialized]]
[Payara 5.192] [INFO] [[ class de.rieckpil.blog.PaymentProvider will be manipulated]]
[Payara 5.192] [INFO] [[ class de.rieckpil.blog.PaymentProvider is ready for manipulation]]
[Payara 5.192] [INFO] [[ Setting up payment provider ...]]
[Payara 5.192] [INFO] [[ Initializing Soteria 1.1-b01 for context '']]
[Payara 5.192] [INFO] [[ Loading application [intercept-methods-with-cdi-interceptors] at [/]]]
[Payara 5.192] [INFO] [[intercept-methods-with-cdi-interceptors was successfully deployed in 1,678 milliseconds.]]
[Payara 5.192] [INFO] [[ Context path from ServletContext:  differs from path from bundle: /]]
[Payara 5.192] [INFO] [[ Withdrawing money from mike - amount: 42.00]]
[Payara 5.192] [INFO] [[ Manipulating payment...]]
[Payara 5.192] [INFO] [[ Withdrawing money from Duke - amount: 999.99]]

For more information visit the Weld documentation (CDI reference implementation) or the website of the CDI spec.

You can find the source code for this example on GitHub.

Have fun using CDI interceptors,

Phil

The post #HOWTO: Intercept method calls using CDI interceptors appeared first on rieckpil.


by rieckpil at July 14, 2019 10:05 AM

Recording of Jakarta Tech Talk ‘How to develop Microservices’

by Niklas Heidloff at July 10, 2019 08:08 AM

Yesterday I presented in a Jakarta Tech Talk ‘How to develop your first cloud-native Applications with Java’. Below is the recording and the slides.

In the talk I described the key cloud-native concepts and explained how to develop your first microservices with Java EE/Jakarta EE and Eclipse MicroProfile and how to deploy the services to Kubernetes and Istio.

For the demos I used our end-to-end example application cloud-native-starter which is available as open source. There are instructions and scripts so that everyone can setup and run the demos locally in less than an hour.

I demonstrated key cloud-native functionality:

Here is the recording.

The slides are on SlideShare.

This is the summary page with links to resources to find out more:

The post Recording of Jakarta Tech Talk ‘How to develop Microservices’ appeared first on Niklas Heidloff.


by Niklas Heidloff at July 10, 2019 08:08 AM

#HOWTO: MicroProfile Rest Client for RESTful communication

by rieckpil at July 08, 2019 05:28 AM

In one of my recent blog posts, I presented Spring’s WebClient for RESTful communication. With Java EE we can utilize the JAX-RS Client and WebTarget classes to achieve the same. However, if you add the MicroProfile API to your project, you can make use of the MicroProfile Rest Client specification. This API targets the following goal:

The MicroProfile Rest Client provides a type-safe approach to invoke RESTful services over HTTP. As much as possible the MP Rest Client attempts to use JAX-RS 2.0 APIs for consistency and easier re-use.

With MicroProfile 2.2 you’ll get the latest version of the Rest Client, which is 1.3. RESTful communication becomes quite easy with this specification, as you just define the access to an external REST API with an interface definition and JAX-RS annotations:

@Path("/movies")
public interface MovieReviewService {

    @GET
    Set<Movie> getAllMovies();

    @POST
    @Path("/{movieId}/reviews")
    String submitReview(@PathParam("movieId") String movieId, Review review);

    @PUT
    @Path("/{movieId}/reviews/{reviewId}")
    Review updateReview(@PathParam("movieId") String movieId, 
                        @PathParam("reviewId") String reviewId, Review review);
}

In this blog post, I’ll demonstrate an example usage of the MicroProfile Rest Client using Java EE 8, MicroProfile 2.2, and Java 8, running on Payara 5.192.

System architecture

The project contains two services: an order application and a user management application. Alongside the order data, the order application also stores the id of the user who created the order. To resolve a username from a user id and to create new users, the order application accesses the user management application’s REST interface:

(Architecture diagram: the order application calls the user management application over REST.)

For simplicity, I keep the implementation simple and store the objects in-memory without an underlying database.

MicroProfile project setup

Both applications were created with my Java EE 8 and MicroProfile Maven archetype and contain just the two APIs:

<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>8.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.eclipse.microprofile</groupId>
        <artifactId>microprofile</artifactId>
        <version>2.2</version>
        <type>pom</type>
        <scope>provided</scope>
    </dependency>
</dependencies>

The order application has two JAX-RS endpoints, one for reading orders by their id and one for creating an order:

@Path("orders")
@Produces("application/json")
@Consumes("application/json")
public class OrderResource {

    @Inject
    private OrderService orderService;

    @GET
    @Path("/{id}")
    public JsonObject getOrderById(@PathParam("id") Integer id) {
        return orderService.getOrderById(id);
    }

    @POST
    public Response createNewOrder(JsonObject order, @Context UriInfo uriInfo) {
        Integer newOrderId = this.orderService.createNewOrder(new Order(order));
        UriBuilder uriBuilder = uriInfo.getAbsolutePathBuilder();
        uriBuilder.path(Integer.toString(newOrderId));

        return Response.created(uriBuilder.build()).build();
    }
}

The user management application also has two JAX-RS endpoints: one to resolve a username by its id and one to create a new user. Both endpoints are required for the order application to work properly and are called synchronously:

@Path("users")
@Consumes("application/json")
@Produces("application/json")
@ApplicationScoped
public class UserResource {

    private ConcurrentHashMap<Integer, String> userDatabase;
    private Faker randomUser;

    @PostConstruct
    public void init() {
        this.userDatabase = new ConcurrentHashMap<>();
        this.userDatabase.put(1, "Duke");
        this.userDatabase.put(2, "John");
        this.userDatabase.put(3, "Tom");

        this.randomUser = new Faker();
    }

    @GET
    @Path("/{userId}")
    public JsonObject getUserById(@PathParam("userId") Integer userId,
                                  @HeaderParam("X-Request-Id") String requestId,
                                  @HeaderParam("X-Application-Name") String applicationName) {

        System.out.println(
                String.format("External system with name '%s' " +
                        "and request id '%s' trying to access " +
                        "user with id '%s'", applicationName, requestId, userId));

        return Json
                .createObjectBuilder()
                .add("username", this.userDatabase.getOrDefault(userId, "Default User"))
                .build();
    }

    @POST
    @RolesAllowed("ADMIN")
    public void createNewUser(JsonObject user) {
        this.userDatabase
                .put(user.getInt("userId"), this.randomUser.name().firstName());
    }
}

For a more advanced use case, I’m tracking access to /users/{userId} by printing two custom HTTP headers, X-Request-Id and X-Application-Name. In addition, posting a new user requires authentication and authorization; here this is basic authentication, for which I’m using the Java EE 8 Security API.

Invoke RESTful services over HTTP with Rest Client

The REST access to the user management app is specified with a Java interface:

@RegisterRestClient
@Path("/resources/users")
@Produces("application/json")
@Consumes("application/json")
@ClientHeaderParam(name = "X-Application-Name", value = "ORDER-MGT-APP")
public interface UserManagementApplicationClient {

    @GET
    @Path("/{userId}")
    JsonObject getUserById(@HeaderParam("X-Request-Id") String requestIdHeader, 
                           @PathParam("userId") Integer userId);

    @POST
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    Response createUser(JsonObject user);

    default String generateAuthHeader() {
        return "Basic " + new String(Base64.getEncoder().encode("duke:SECRET".getBytes()));
    }
}

Every method of this interface represents one REST endpoint of the external service. With the common JAX-RS annotations like @GET, @POST, @Path, and @PathParam you can specify the HTTP method and URL parameters. The return type of the method represents the HTTP response body, which is deserialized using a MessageBodyReader that makes use of JSON-B for application/json. To send data alongside the HTTP request body, you can add a POJO as a method argument.

Furthermore, you can add HTTP headers to your calls by using either @HeaderParam or @ClientHeaderParam. With @HeaderParam you mark a method argument as an HTTP header and can pass its value to the Rest Client from outside. @ClientHeaderParam, on the other hand, does not add an argument to the method signature; it retrieves its value either from config, from a hardcoded string, or by calling a method. In this example, I’m using it to add the X-Application-Name header to every HTTP request and to supply the authorization header required for basic auth. You can use this annotation on both class and method level.
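The generateAuthHeader() default method above boils down to standard Base64 encoding of user:password with the Basic scheme prefix. A runnable illustration (class and method names are my own):

```java
import java.util.Base64;

public class BasicAuthHeaderDemo {

    // Builds the value for the HTTP Authorization header used in basic auth.
    static String basicAuthHeader(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder().encodeToString(credentials.getBytes());
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("duke", "SECRET"));
        // prints: Basic ZHVrZTpTRUNSRVQ=
    }
}
```

Note that basic auth credentials are only encoded, not encrypted, so TLS is a must in any real deployment.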

Rest Client configuration and CDI integration

To integrate this Rest Client with CDI and make it injectable, you can register the client with @RegisterRestClient. Any other bean can now inject the Rest Client with the following code:

@Inject
@RestClient
private UserManagementApplicationClient userManagementApplicationClient;

The URL of the remote service is configured either with the @RegisterRestClient(baseUri="http://somedomain/api") annotation or using the MicroProfile Config API. For this example, I’m using the configuration approach with a microprofile-config.properties file:

de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/url=http://user-management-application:8080
de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/connectTimeout=3000
de.rieckpil.blog.order.control.UserManagementApplicationClient/mp-rest/readTimeout=3000

Besides the URL you can configure the HTTP connect and read timeouts for this Rest Client and specify JAX-RS providers to intercept the requests/responses. For more information have a look at the specification document.

As I’m using docker-compose to deploy the two applications, the order application can reach the user management application via its service name:

version: '3'
services:
  order-application:
    build: order-application
    ports:
      - "8080:8080"
    links: 
      - user-management-application
  user-management-application:
    build: user-management-application

For further information have a look at the GitHub repository of this specification and the release page to get the latest specification documents.

You can find the source code with a step-by-step guide to start the two applications on GitHub.

Have fun using the MicroProfile Rest Client API,

Phil

The post #HOWTO: MicroProfile Rest Client for RESTful communication appeared first on rieckpil.


by rieckpil at July 08, 2019 05:28 AM

#HOWTO: Deploy Java EE applications to Kubernetes

by rieckpil at July 06, 2019 08:08 AM

Kubernetes is currently the de-facto standard for deploying applications in the cloud. Every major cloud provider offers a dedicated Kubernetes service (e.g. Google Cloud with GKE, AWS with EKS) to deploy applications in a Kubernetes cluster. Once your stateless Java EE application is dockerized (stateless is important, as your application will run with multiple instances), you are ready to deploy it to Kubernetes.

In this blog post, I’ll show you how to deploy a sample Java EE 8 and MicroProfile 2.2 application running on Payara 5.192 to a local Kubernetes cluster. You can apply the same for every Kubernetes cluster in the cloud (with small adjustments for the container registry).

Setup Java EE backend

The sample application contains only the Java EE 8 and MicroProfile 2.2 API dependencies and is built with Maven:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>java-ee-kubernetes-deployment</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.2</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>java-ee-kubernetes-deployment</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The application contains just one JAX-RS resource that returns a message injected via the MicroProfile Config API:

@Path("sample")
public class SampleResource {

  @Inject
  @ConfigProperty(name = "message")
  private String message;

  @GET
  public Response message() {
    return Response.ok(message).build();
  }

}

Furthermore, I’ve added a HealthResource which implements the HealthCheck interface of the MicroProfile Health 1.0 API. This class is optional, but nice to have, as Kubernetes will later need to identify whether your application is ready for traffic. In this example, the implementation is rather simple, as it just returns the UP status, but you could also add business logic to decide on readiness. There can also be multiple implementations of the HealthCheck interface in one application to check multiple things, e.g. free disk space, database availability, etc. The results of all health checks are then combined under /health.

@Health
@ApplicationScoped
public class HealthResource implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .named("java-ee")
                .up()
                .build();
    }
}
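With this check in place, a GET on /health returns a combined payload roughly like the following (the MicroProfile Health 1.0 wire format uses outcome and state; later versions renamed both fields to status):

```json
{
  "outcome": "UP",
  "checks": [
    {
      "name": "java-ee",
      "state": "UP"
    }
  ]
}
```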

With MicroProfile 3.0 there will also be dedicated annotations, @Readiness and @Liveness, to differentiate between these two states.

Prepare Kubernetes deployment

First, we need to create a Docker image for our Kubernetes deployment. With Java EE applications this is pretty straightforward, as we just copy the .war file to the deployment folder of the target application server. For this example, I’ve chosen the Payara server-full 5.192 base image.

FROM payara/server-full:5.192
COPY target/java-ee-kubernetes-deployment.war $DEPLOY_DIR

Next, we create a so-called Kubernetes Deployment. With this Kubernetes object, we define metadata for our application: which image to use, which port to expose, and where to find the health endpoint. In addition, we define how many pods (containers) should run in parallel:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: java-ee-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-ee-kubernetes
  template:
    metadata:
      labels:
        app: java-ee-kubernetes
    spec:
      containers:
        - name: java-ee-kubernetes
          image: localhost:5000/java-ee-kubernetes
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 45
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 45
      restartPolicy: Always

In this example, I’m using a local Docker registry and therefore reference localhost:5000/java-ee-kubernetes for the application’s Docker image. If you plan to deploy your Java EE application to a Kubernetes cluster running in the cloud, you have to push your Docker image to that cloud’s registry and replace localhost:5000 with e.g. eu.gcr.io/java-ee-kubernetes (gcr is Google’s container registry service).

To give the pod some time to start up, I’ve added the initialDelaySeconds attribute so that the readiness and liveness probes are first requested after 45 seconds.

With just the Deployment we wouldn’t be able to access our application from outside. Therefore we have to expose it with a so-called Kubernetes Service. The Service references our previous Deployment and uses the type NodePort. With this configuration, we open a port on our Kubernetes nodes which forwards to our application’s port. For our example we use port 31000 on the Kubernetes node and forward it to port 8080 of our Java EE application:

kind: Service
apiVersion: v1
metadata:
  name: java-ee-kubernetes
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      nodePort: 31000
  selector:
    app: java-ee-kubernetes

For a more advanced configuration, have a look at the Kubernetes Ingress Controllers.

Deploy to a local Kubernetes cluster

In this example, I’m deploying the application to a local Kubernetes cluster. The Kubernetes add-on is available in the latest Docker for Mac/Windows version and creates a simple cluster on-demand.

With the following steps you deploy the application to your local Kubernetes cluster:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
mvn clean package
docker build -t java-ee-kubernetes .
docker tag java-ee-kubernetes localhost:5000/java-ee-kubernetes
docker push localhost:5000/java-ee-kubernetes
kubectl apply -f deployment.yml

After the two pods are up and running, you can access the application at http://localhost:31000/resources/sample to get a greeting from the Java EE application.

You can find the source code with a step-by-step guide on GitHub.

Have fun deploying your Java EE application to Kubernetes,

Phil

The post #HOWTO: Deploy Java EE applications to Kubernetes appeared first on rieckpil.


by rieckpil at July 06, 2019 08:08 AM

The Payara Monthly Catch for June 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at July 04, 2019 11:03 AM

Another very busy month for the Payara team! We had our annual "Payara Week" where we fly everyone in the company to our UK HQ for a week of close collaboration, celebration, review and fun! We also announced our new partner program "Payara Radiate".


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at July 04, 2019 11:03 AM

How to develop Open Liberty Microservices on OpenShift

by Niklas Heidloff at July 04, 2019 09:23 AM

Open Liberty is a flexible Java application server. It comes with Eclipse MicroProfile which is a set of tools and APIs to build microservices. With these technologies I have created a simple hello world microservice which can be used as template for your own microservices. In this article I describe how to deploy it to OpenShift.

The sample microservice is available as open source.

Microservice Implementation

The microservice contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

Deployment Options

The microservice can be run in different environments:

Deployment to Red Hat OpenShift on the IBM Cloud

The following instructions should work for OpenShift, no matter where you run it. However, I’ve only tested it on the IBM Cloud.

IBM provides a managed Red Hat OpenShift offering on the IBM Cloud (beta). You can get a free IBM Cloud Lite account.

After you’ve created a new cluster, open the OpenShift console. From the dropdown menu in the upper right of the page, click ‘Copy Login Command’. Paste the copied command in your local terminal, for example ‘oc login https://c1-e.us-east.containers.cloud.ibm.com:23967 –token=xxxxxx’.

Get the code:

$ git clone https://github.com/nheidloff/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)

Push the code and build the image:

$ cd ${ROOT_FOLDER}/authors-java-jee
$ oc login ...
$ oc new-project cloud-native-starter
$ oc new-build --name authors --binary --strategy docker
$ oc start-build authors --from-dir=.
$ oc get istag

Wait until the image has been built. Then deploy the microservice:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment
$ sed "s+<namespace>+cloud-native-starter+g" deployment-template.yaml > deployment-template.yaml.1
$ sed "s+<ip:port>+docker-registry.default.svc:5000+g" deployment-template.yaml.1 > deployment-template.yaml.2
$ sed "s+<tag>+latest+g" deployment-template.yaml.2 > deployment-os.yaml
$ oc apply -f deployment-os.yaml
$ oc apply -f service.yaml
$ oc expose svc/authors
$ open http://$(oc get route authors -o jsonpath={.spec.host})/openapi/ui/
$ curl -X GET "http://$(oc get route authors -o jsonpath={.spec.host})/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

Rather than using ‘oc apply’ (which is essentially ‘kubectl apply’), you can also use ‘oc new-app’. In this case you don’t have to create yaml files, which makes it easier. At the same time you lose some of the flexibility and capabilities that kubectl with yaml files provides.

$ oc new-app authors

After this you’ll be able to open the API explorer and invoke the endpoint:

After the deployment the application will show up in the OpenShift Web Console:

Note that there are several other options to deploy applications to OpenShift/OKD. Check out my previous article Deploying Open Liberty Microservices to OpenShift.

This sample is part of the GitHub repo cloud-native-starter. Check it out to learn how to develop cloud-native applications with Java EE/Jakarta EE, Eclipse MicroProfile, Kubernetes and Istio and how to deploy these applications to Kubernetes, Minikube, OpenShift and Minishift.

The post How to develop Open Liberty Microservices on OpenShift appeared first on Niklas Heidloff.


by Niklas Heidloff at July 04, 2019 09:23 AM

[EN] Send custom error messages to client via JAX-RS

by Altuğ Bilgin Altıntaş at June 27, 2019 01:25 PM

Would you like to show custom error messages to your clients via JAX-RS?  Assume that we have a String field in an entity java class and we want to control this String field according to business rules.  Here is the code :

import javax.json.bind.annotation.JsonbCreator;
import javax.json.bind.annotation.JsonbProperty;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.validation.constraints.Size;

/**
*
* @author altuga
*/
@Entity
@NamedQuery(name = "all" , query = "select flight from Flight flight")
public class Flight {

  @Id
  @GeneratedValue
  private long id;

  @Size(min = 3, max = 10, message = "stupid users")
  public String number;

  public int numberOfSeats;

  @JsonbCreator
  public Flight(@JsonbProperty("number") String number,
            @JsonbProperty("numberOfSeats") int numberOfSeats) {
   this.number = number;
   this.numberOfSeats = numberOfSeats;
  }

  public Flight() {
  }

  public long getId() {
   return id;
  }
}

We use the @Size annotation to enforce our business rule: if the user enters a number shorter than 3 or longer than 10 characters, the system throws a validation exception with a custom message.

import java.net.URI;
import java.util.List;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.validation.Valid;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

/**
*
* @author airhacks.com
*/
@Path("ping")
@Stateless
public class FlightResource {

   @Inject
   FlightCoordinator coordinator;

   @POST
   public Response save(@Context UriInfo info, @Valid Flight flight) {
    this.coordinator.save(flight);
    URI uri = info.getAbsolutePathBuilder().path(String.valueOf(flight.getId())).build();
    return Response.created(uri).build();
  }

}

In order to trigger the validation process, we have to put the @Valid annotation just before the parameter. In order to show our custom validation messages to JAX-RS clients, we have to implement an ExceptionMapper. Here we go:

import java.util.List;
import java.util.stream.Collectors;
import javax.inject.Singleton;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Singleton
@Provider
public class ConstraintViolationMapper implements ExceptionMapper<ConstraintViolationException> {

  @Override
  public Response toResponse(ConstraintViolationException e) {
    List<String> messages = e.getConstraintViolations().stream()
        .map(ConstraintViolation::getMessage)
        .collect(Collectors.toList());

    return Response.status(Status.BAD_REQUEST).entity(messages).build();
  }
}

The ConstraintViolationMapper class catches the exception and transforms it into a response containing your custom messages. In this case, the “stupid users” message will be sent to the client.
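A client that sends an invalid flight number would then receive a response along these lines (the exact headers depend on the server):

```
HTTP/1.1 400 Bad Request
Content-Type: application/json

["stupid users"]
```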


by Altuğ Bilgin Altıntaş at June 27, 2019 01:25 PM

Jersey 2.29 has been released!

by Jan at June 25, 2019 08:58 PM

It is a pleasure to announce that Jersey 2.29 has been released. It is rather a large release. While Jersey 2.27 was the last non-Jakarta EE release and Jersey 2.28 was the first release of Jersey under a Jakarta EE … Continue reading

by Jan at June 25, 2019 08:58 PM

Source to Image Builder for Open Liberty Apps on OpenShift

by Niklas Heidloff at June 25, 2019 09:02 AM

OKD, the open source upstream Kubernetes distribution embedded in OpenShift, provides several ways to make deployments of applications to Kubernetes for developers easy. I’ve open sourced a project that demonstrates how to deploy local Open Liberty applications via two simple commands ‘oc new-app’ and ‘oc start-build’.

$ oc new-app s2i-open-liberty:latest~/. --name=<service-name>
$ oc start-build --from-dir . <service-name>

Get the code from GitHub.

Source-to-Image

OKD allows developers to deploy applications without having to understand Docker and Kubernetes in depth. Similarly to the Cloud Foundry ‘cf push’ experience, developers can deploy applications easily via terminal commands and without having to build Docker images. In order to do this, Source-to-Image is used.

Source-to-Image (S2I) is a toolkit for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image.

In order to use S2I, builder images are needed. These builder images create the actual images with the applications. The builder images are similar to Cloud Foundry buildpacks.

OKD provides several builder images out of the box. In order to support other runtimes, for example Open Liberty, custom builder images can be built and deployed.

Sample Application running locally in Docker Desktop

The repo contains an S2I builder image which creates an image running Java web applications on Open Liberty. Additionally, the repo comes with a simple sample application which has been implemented with Java/Jakarta EE and Eclipse MicroProfile.

The Open Liberty builder image can be used in two different environments:

  • OpenShift or MiniShift via ‘oc new-app’ and ‘oc start-build’
  • Local Docker runtime via ‘s2i’

This is how to run the sample application locally with Docker and S2I:

$ cd ${ROOT_FOLDER}/sample
$ mvn package
$ s2i build . nheidloff/s2i-open-liberty authors
$ docker run -it --rm -p 9080:9080 authors
$ open http://localhost:9080/openapi/ui/

To use “s2i” or “oc new-app/oc start-build” you need to build the application with Maven. The server configuration file and the war file need to be in these directories:

  • server.xml in the root directory
  • *.war file in the target directory
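Assuming the layout of the sample project (the war file name depends on your pom.xml), the builder image therefore expects a structure like:

```
sample/
├── server.xml        <- Open Liberty server configuration
├── pom.xml
└── target/
    └── sample.war    <- built by 'mvn package'
```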

Sample Application running on Minishift

First the builder image needs to be built and deployed:

$ cd ${ROOT_FOLDER}
$ eval $(minishift docker-env)
$ oc login -u developer -p developer
$ oc new-project cloud-native-starter
$ docker login -u developer -p $(oc whoami -t) $(minishift openshift registry)
$ docker build -t nheidloff/s2i-open-liberty .
$ docker tag nheidloff/s2i-open-liberty:latest $(minishift openshift registry)/cloud-native-starter/s2i-open-liberty:latest
$ docker push $(minishift openshift registry)/cloud-native-starter/s2i-open-liberty

After the builder image has been deployed, Open Liberty applications can be deployed:

$ cd ${ROOT_FOLDER}/sample
$ mvn package
$ oc new-app s2i-open-liberty:latest~/. --name=authors
$ oc start-build --from-dir . authors 
$ oc expose svc/authors
$ open http://authors-cloud-native-starter.$(minishift ip).nip.io/openapi/ui/
$ curl -X GET "http://authors-cloud-native-starter.$(minishift ip).nip.io/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

After the sample application has been deployed, it shows up in the console.

The post Source to Image Builder for Open Liberty Apps on OpenShift appeared first on Niklas Heidloff.


by Niklas Heidloff at June 25, 2019 09:02 AM

Free and self paced workshop: How to develop microservices with Java

by Niklas Heidloff at June 24, 2019 09:01 AM

Over the last weeks I’ve worked on an example application that demonstrates how to develop your first cloud-native applications with Java/Jakarta EE, Eclipse MicroProfile, Kubernetes and Istio. Based on this example my colleague Thomas Südbröcker wrote a workshop which explains how to implement resilient microservices, how to expose and consume REST APIs, how to do traffic management and more.

Check out the workshop on GitHub.

The workshop contains the following labs:

The example application is a simple application that displays blog entries and links to the profiles of the authors.

There are three microservices: Web-API, Articles, and Authors. The microservice Web-API has two versions to demonstrate traffic management. The web application has been developed with Vue.js and is hosted via nginx.

The workshop utilizes the IBM Cloud Kubernetes Service. You can get a free lite account and a free cluster as documented in our repo.

The post Free and self paced workshop: How to develop microservices with Java appeared first on Niklas Heidloff.


by Niklas Heidloff at June 24, 2019 09:01 AM

Update for Jakarta EE community: June 2019

by Tanja Obradovic at June 20, 2019 02:01 PM

Last month, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. We have also decided to publish these updates as blogs and share the information that way as well. There are a few ways to get a grip on the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud native Java, read on.

Without further ado, let’s have a look at what has happened in May:

Jakarta EE 8 release and progress

Jakarta EE 8 will be fully compatible with Java EE 8, including use of the javax namespace. The process of driving the Jakarta EE 8 specifications, as well as delivery of the Jakarta EE 8 TCKs, and Jakarta EE 8 compatible implementations will be transparent.

Mike Milinkovich recently published a FAQ about Jakarta EE 8, in which he offered answers to questions such as  

  • Will Jakarta EE 8 break existing Java EE applications that rely upon javax APIs?

  • What will Jakarta EE 8 consist of?

  • Will there be Jakarta EE 8 compatible implementations?

  • What is the process for delivery of Jakarta EE 8?

  • When will Jakarta EE 8 be delivered?

Read Mike’s blog to find out what to expect from the Jakarta EE 8 release.

We need your help with the work on the Jakarta EE 8 release. Project teams, please get involved in the Eclipse EE4J projects and help out with Jakarta Specification Project Names and Jakarta Specification Scope Statements.

If you’d like to get involved in the work for the Jakarta EE Platform, there are a few projects that require your attention:

  • Jakarta EE 8 Platform Specification, which keeps track of the work involved with creating the platform specification for Jakarta EE 8
  • Jakarta EE 9 Platform Specification, which keeps track of the work involved with creating the platform specification for Jakarta EE 9
  • Jakarta EE.Next Roadmap Planning, which seeks to define a roadmap and plan for the Jakarta EE 9 release

Right now, the fastest way to have a say in the planning and preparation for the Jakarta EE 9 release is by getting involved in the Jakarta EE.Next Roadmap Planning.

Election schedule for Jakarta EE working group committees

The various facets of the Jakarta EE Working Group are driven by three key committees for which there are elected positions to be filled: the Steering Committee, the Specification Committee, and the Marketing and Brand Committee. The elected positions are to represent each of the Enterprise Members, Participant Members, and Committer Members. Strategic Members each have a representative appointed to these committees.  

The Eclipse Foundation is holding elections on behalf of the Jakarta EE Working Group using the following proposed timetable:  

Nomination period:  May 24 - June 4 (self-nominations are welcome)

Election period:  June 11 - June 25

Winning candidates announced:  June 27

All members are encouraged to consider nominating someone for the positions, and self-nominations are welcome. The period for nominations runs through June 4th.  Nominations may be sent to elections@eclipse.org.

Once nominations are closed, all working group members will be informed about the candidates and ballots will be distributed via email to those eligible to vote.  The election process will follow the Eclipse “Single Transferable Vote” method, as defined in the Eclipse Bylaws.  

The winning candidates will be announced on this mailing list immediately after the elections are concluded.  

The following positions will be filled as part of this election:

Steering Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Specification Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Marketing and Brand Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Transitioning Jakarta EE to the jakarta namespace

The process of migrating Java EE to the Eclipse Foundation has been a collaborative effort between the Eclipse Foundation staff and the many contributors, committers, members, and stakeholders that are participating. Last month, it was revealed that the javax package namespace will not be evolved by the Jakarta EE community and that Java trademarks such as the existing specification names will not be used by Jakarta EE specifications. While these restrictions were not what was originally expected, it might be in Jakarta EE’s best interest as the modification of javax would always have involved long-term legal and trademark restrictions.

In order to evolve Jakarta EE, we must transition to a new namespace. In an effort to bootstrap the conversation, the Jakarta EE Specification Committee has prepared two proposals (Big-bang Jakarta EE 9, Jakarta EE 10 new features and incremental change in Jakarta EE 9 and beyond) on how to make the move into the new namespace smoother. These proposals represent a starting point, but the community is warmly invited to submit more proposals.

Community discussion on how to transition to the jakarta namespace concluded Sunday, June 9th, 2019.

We invite you to read a few blogs on this topic:

2019 Jakarta EE Developer Survey Results

The Eclipse Foundation recently released the results of the 2019 Jakarta EE developer survey that canvassed nearly 1,800 Java developers about trends in enterprise Java programming and their adoption of cloud native technologies. The aim of the survey, which was conducted by the Foundation in March of 2019 in cooperation with member companies and partners, including the London Java Community and Java User Groups, was to help Java ecosystem stakeholders better understand the requirements, priorities, and perceptions of enterprise Java developer communities.

A third of developers surveyed are currently building cloud native architectures and another 30 percent are planning to within the next year. Furthermore, the number of Java applications running in the cloud is expected to increase significantly over the next two years, with 32 percent of respondents hoping to run nearly two-thirds of their Java applications in the cloud in two years’ time. Also, over 40 percent of respondents are using the microservices architecture to implement Java in the cloud.

Access the full findings of the 2019 Java Community Developer Survey here.

Community engagement

The Jakarta EE community promises to be a very active one, especially given the various channels that can be used to stay up-to-date with all the latest and greatest. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan, which includes

For more information about community engagement, read Tanja Obradovic’s blog.

Jakarta EE Wiki

Have you checked out the Jakarta EE Wiki yet? It includes important information such as process guidelines, documentation, Eclipse guides and mailing lists, Jakarta EE Working Group essentials and more.  

Keep in mind that this page is a work in progress and is expected to evolve in the upcoming weeks and months. The community’s input and suggestions are welcome and appreciated!

Jakarta EE Community Update: May video call

The most recent Jakarta EE Community Update meeting took place in May; the conversation included topics such as the Jakarta EE progress so far, Jakarta EE Rights to Java Trademarks, the transition from javax namespace to the jakarta namespace (mapping javax to jakarta, when repackaging is required and when migration to namespaces is not required) and how to maximize compatibility with Java EE 8 and Jakarta EE for future versions without stifling innovation, the Jakarta EE 8 release, PMC/ Projects update and more.

The minutes of the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Jakarta EE presence at conferences: May overview

Cloud native was the talk of the town in May. Conferences such as JAX 2019, Red Hat Summit 2019 and KubeCon + CloudNativeCon Europe 2019 were all about cloud native and how to tap into this key approach for IT modernization success and the Eclipse Foundation was there to take a pulse of the community to better understand the adoption of cloud native technologies.

Don’t forget to check out Tanja Obradovic’s video interview about the future of Jakarta EE at JAX 2019.  

EclipseCon Europe 2019: Call for Papers open until July 15

It’s that time of year again! You can now submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. Early bird submissions are due July 1, and the final deadline is July 15. Check out Jameka's blog and submit your talk today!

We are also working on the JakartaOne Livestream conference scheduled for September 10th. The Call for Papers is open until July 1st.

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group.

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


by Tanja Obradovic at June 20, 2019 02:01 PM

Recording of Talk ‘How to develop your first cloud-native Applications with Java’

by Niklas Heidloff at June 18, 2019 06:58 AM

At WeAreDevelopers Harald Uebele and I gave a 30 minutes talk ‘How to develop your first cloud-native Applications with Java’. Below is the recording and the slides.

In the talk we described the key cloud-native concepts and explained how to develop your first microservices with Java EE/Jakarta EE and Eclipse MicroProfile and how to deploy the services to Kubernetes and Istio.

For the demos we used our end-to-end example application cloud-native-starter which is available as open source. There are instructions and scripts so that everyone can setup and run the demos locally in less than an hour.

We demonstrated key cloud-native functionality:

Here is the video.

The slides are on SlideShare. There is also another deck for a one hour presentation with more details.

Picture from the big stage:

Get the code of the sample application from GitHub.

The post Recording of Talk ‘How to develop your first cloud-native Applications with Java’ appeared first on Niklas Heidloff.


by Niklas Heidloff at June 18, 2019 06:58 AM

JCP Copyright Licensing request

by Tanja Obradovic at June 17, 2019 06:20 PM

The open source community has welcomed Oracle’s contribution of Java EE into Eclipse Foundation, under the new name Jakarta EE. As part of this huge effort and transfer, we want to ensure that we have the necessary rights so we can evolve the specifications under the new  Jakarta EE Specification Process. For this, we need your help!

We must request copyright licenses from all past contributors to Java EE specifications under the JCP. Hence, we are reaching out to all companies and individuals who made contributions to Java EE in the past to help out, execute the agreements and return them back to Eclipse Foundation. As the advancement of the specifications and the technology is in question, we would greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

The request is for JCP contributors to Java EE specifications, once you receive an email from the Eclipse Foundation regarding this, please get back to us as soon as you can!

Should you have any questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org who is leading us all through this process.

Many thanks!


by Tanja Obradovic at June 17, 2019 06:20 PM

How to build and run a Hello World Java Microservice

by Niklas Heidloff at June 17, 2019 02:02 PM

The repo cloud-native-starter contains an end-to-end sample application that demonstrates how to develop your first cloud-native applications. Two of the microservices have been developed with Java EE and MicroProfile. To simplify the creation of new Java EE microservices, I’ve added another very simple service that can be used as template for new services.

Get the code.

The template contains the following functionality:

If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.

The microservice can be run in different environments:

  • Docker
  • Minikube
  • IBM Cloud Kubernetes Service

In all cases get the code first:

$ git clone https://github.com/nheidloff/cloud-native-starter.git
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)

Run in Docker

The microservice can be run in Docker Desktop.

$ cd ${ROOT_FOLDER}/authors-java-jee
$ mvn package
$ docker build -t authors .
$ docker run -i --rm -p 3000:3000 authors
$ open http://localhost:3000/openapi/ui/

Run in Minikube

These are the instructions to run the microservice in Minikube.

$ cd ${ROOT_FOLDER}/authors-java-jee
$ mvn package
$ eval $(minikube docker-env)
$ docker build -t authors:1 .
$ kubectl apply -f deployment/deployment.yaml
$ kubectl apply -f deployment/service.yaml
$ minikubeip=$(minikube ip)
$ nodeport=$(kubectl get svc authors --ignore-not-found --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${minikubeip}:${nodeport}/openapi/ui/

Run in IBM Cloud Kubernetes Service

IBM provides the managed IBM Cloud Kubernetes Service. You can get a free IBM Cloud account. Check out the instructions for how to create a Kubernetes cluster.

Set your namespace and cluster name, for example:

$ REGISTRY_NAMESPACE=niklas-heidloff-cns
$ CLUSTER_NAME=niklas-heidloff-free

Build the image:

$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a cloud.ibm.com -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export ... // for example: export KUBECONFIG=/Users/$USER/.bluemix/plugins/container-service/clusters/niklas-heidloff-free/kube-config-hou02-niklas-heidloff-free.yml
$ mvn package
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry  /  {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ ibmcloud cr build --tag $REGISTRY/$REGISTRY_NAMESPACE/authors:1 .

Deploy microservice:

$ cd ${ROOT_FOLDER}/authors-java-jee/deployment
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" deployment-template.yaml > deployment-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" deployment-template.yaml.1 > deployment-template.yaml.2
$ sed "s+<tag>+1+g" deployment-template.yaml.2 > deployment-iks.yaml
$ kubectl apply -f deployment-iks.yaml
$ kubectl apply -f service.yaml
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"

Swagger UI

Once deployed, the Swagger UI can be opened; it shows the APIs of the authors service:

The post How to build and run a Hello World Java Microservice appeared first on Niklas Heidloff.


by Niklas Heidloff at June 17, 2019 02:02 PM

#REVIEW: What’s new in MicroProfile 3.0

by rieckpil at June 16, 2019 01:36 PM

With the MicroProfile release cycle of three releases every year in February, June, and October we got MicroProfile 3.0 on June 11th, 2019. This version is based on MicroProfile 2.2 and updates the Rest Client, Metrics, and Health Check API which I’ll show you in this blog post today.

The current API landscape for MicroProfile 3.0 looks like the following:


As you can see in this image, there were no new APIs added with this release. The MicroProfile Rest Client API was updated from 1.2 to 1.3 with no breaking changes included. The Metrics API got a new major version update from 1.1 to 2.0, introducing some breaking changes, which you’ll learn about below. The same is true for the Health Check API, which is now available with version 2.0 and also introduces breaking API changes.

Changes with Metrics 2.0: Counters ftw!

Important links:

  • Official changelog on GitHub
  • Current API specification document as pdf
  • Release information on GitHub

Breaking changes:

  • Refactoring of Counters, as the old @Counted was misleading in practice (you can find migration hints in the API specification document)
  • Removed deprecated org.eclipse.microprofile.metrics.MetricRegistry.register(String name, Metric, Metadata)
  • Metadata is now immutable and built via a MetadataBuilder.
  • Metrics are now uniquely identified by a MetricID (a combination of the metric’s name and tags).
  • JSON output format for GET requests now appends tags along with the metric in metricName;tag=value;tag=value format
  • JSON format for OPTIONS requests has been modified such that the ‘tags’ attribute is a list of nested lists which holds tags from different metrics that are associated with the metadata
  • The default value of the reusable attribute for metric objects created programmatically (not via annotations) is now true
  • Some base metrics’ names have changed to follow the convention of ending the names of accumulating counters with total
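For illustration (the metric name, tags, and value are made up), a counter with two tags now shows up in the JSON output of a GET request roughly like this, with the tags appended to the metric name:

```json
{
  "requestCount;store=webshop;region=eu": 29382
}
```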

Other important changes:

  • Removed unnecessary @InterceptorBinding annotation from org.eclipse.microprofile.metrics.annotation.Metric
  • Tag key names for labels are restricted to match the regex [a-zA-Z_][a-zA-Z0-9_]*.
  • MetricFilter modified to filter with MetricID instead of the name
  • Tag values defined through  MP_METRICS_TAGS must escape equal signs = and commas, with a backslash \.
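The key and value rules above can be illustrated with a small, hypothetical helper class (not part of the MicroProfile API):

```java
import java.util.regex.Pattern;

// Hypothetical helper illustrating the Metrics 2.0 rules for
// tag key names and for escaping tag values in MP_METRICS_TAGS
public class TagEscaper {

    private static final Pattern VALID_KEY =
            Pattern.compile("[a-zA-Z_][a-zA-Z0-9_]*");

    // Tag key names are restricted to [a-zA-Z_][a-zA-Z0-9_]*
    static boolean isValidKey(String key) {
        return VALID_KEY.matcher(key).matches();
    }

    // Tag values in MP_METRICS_TAGS must escape '=' and ',' with a backslash
    static String escape(String value) {
        return value.replace("=", "\\=").replace(",", "\\,");
    }

    public static void main(String[] args) {
        System.out.println(isValidKey("app_name"));      // true
        System.out.println(isValidKey("9lives"));        // false
        System.out.println(escape("env=dev,region=eu")); // env\=dev\,region\=eu
    }
}
```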

Changes with Health Check 2.0: Kubernetes here we come!

Important links:

  • Current API specification document as pdf
  • All changes with this release
  • Release information on GitHub

Breaking changes:

  • The message body of the Health check response was modified: outcome and state were replaced by status
  • Introduction of Health checks for @Liveness and @Readiness on /health/ready and /health/live endpoints (nice for Kubernetes)
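A response of the new /health/live endpoint now looks roughly like this (the check name is made up); note the status field that replaced outcome and state:

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "heap-memory",
      "status": "UP"
    }
  ]
}
```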

Other important changes:

  • Deprecation of @Health annotation
  • Correction and enhancement of response JSON format
  • TCK enhancement and cleanup
  • Enhance examples in spec (introduce Health check procedures producers)

Changes with Rest Client 1.3: Improved config and security!

Important links:

Important changes:

  • Spec-defined SSL support via new RestClientBuilder methods and MP Config properties.
  • Allow client proxies to be cast to Closeable/AutoCloseable.
  • Simpler configuration using configKeys.
  • Defined application/json to be the default MediaType if none is specified in @Produces/@Consumes.
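As a sketch of the configKey feature (the interface and key names are assumptions): a client interface annotated with @RegisterRestClient(configKey = "my-client") can then be configured with the short key instead of its fully-qualified class name:

```properties
# MP Config properties for the hypothetical client with configKey "my-client"
my-client/mp-rest/url=http://localhost:8080
# SSL support added in 1.3 (the store location and password are placeholders)
my-client/mp-rest/trustStore=classpath:/client-truststore.jks
my-client/mp-rest/trustStorePassword=changeit
```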

For more details, you can visit the official announcement post on the MicroProfile page.

I’m planning to give you code-based examples for MicroProfile 3.0 once the first application server supports it (for Payara this will be version 5.193). Stay tuned!

Have fun with MicroProfile 3.0,

Philip

The post #REVIEW: What’s new in MicroProfile 3.0 appeared first on rieckpil.


by rieckpil at June 16, 2019 01:36 PM

#HOWTO: Send emails with Java EE using Payara

by rieckpil at June 09, 2019 05:34 PM

Sending emails to your application’s clients or customers is a common enterprise use case. The emails usually contain invoices, reports or confirmations for a given business transaction. With Java, we have a mature and robust API for this: The JavaMail API.

The JavaMail API standard has a dedicated website providing official documentation and quickstart examples. The API is part of the Java Standard Edition (Java SE) and Java Enterprise Edition (Java EE) and can therefore also be used without Java EE.

In this blog post, I’ll show you how you can send an email with an attachment to an email address of your choice using this API and Java EE 8, MicroProfile 2.0, Payara 5.192, Java 8, Maven and Docker.

Let’s get started.

Setting up the backend

For the backend, I’ve created a straightforward Java EE 8 Maven project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>java-ee-sending-mails</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.0.1</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>2.23.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>java-ee-sending-mails</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The email transport is triggered by a JAX-RS endpoint (definitely no best practice, but good enough for this example):

@Path("mails")
public class MailingResource {

  @Inject
  private MailingService mailingService;

  @GET
  public Response sendSimpleMessage() {
    mailingService.sendSimpleMail();
    return Response.ok("Mail was successfully delivered").build();
  }

}

The actual logic for creating and sending the email is provided by the MailingService EJB which is injected via CDI in the JAX-RS class:

@Stateless
public class MailingService {

    @Inject
    @ConfigProperty(name = "email")
    private String emailAddress;


    @Resource(name = "mail/localsmtp")
    private Session mailSession;

    public void sendSimpleMail() {

        Message simpleMail = new MimeMessage(mailSession);

        try {
            simpleMail.setSubject("Hello World from Java EE!");
            simpleMail.setRecipient(Message.RecipientType.TO, new InternetAddress(emailAddress));

            MimeMultipart mailContent = new MimeMultipart();

            MimeBodyPart mailMessage = new MimeBodyPart();
            mailMessage.setContent(
               "<p>Take a look at the <b>scecretMessage.txt</b> file</p>", "text/html; charset=utf-8");
            mailContent.addBodyPart(mailMessage);

            MimeBodyPart mailAttachment = new MimeBodyPart();
            DataSource source = new ByteArrayDataSource(
               "This is a secret message".getBytes(), "text/plain");
            mailAttachment.setDataHandler(new DataHandler(source));
            mailAttachment.setFileName("secretMessage.txt");

            mailContent.addBodyPart(mailAttachment);
            simpleMail.setContent(mailContent);

            Transport.send(simpleMail);

            System.out.println("Message successfully sent to: " + emailAddress);
        } catch (MessagingException e) {
            e.printStackTrace();
        }

    }
}

First, the EJB requires an instance of the javax.mail.Session class, which is injected with @Resource and looked up via its unique JNDI name. The connection setup for this email session is done either with the Payara admin web console or using asadmin, as you’ll see in the next section.

The Session object is then used to create a MimeMessage instance which represents the actual email. Setting the email recipient and the subject of the email is pretty straightforward. In this example, I’m injecting the recipient’s email address via the MicroProfile Config API with a microprofile-config.properties file:

email=duke@java.ee

For both the email body and the attachment I’m using a dedicated MimeBodyPart instance and add both to the MimeMultipart object. Finally, the email is sent via the static Transport.send(Message msg) method over SMTP.

Providing an SMTP server

In a real-world scenario, you would connect to your company’s internal SMTP server to send emails to e.g. your customers. To provide a running example without using an external SMTP server, I’m using a Docker container to start a local SMTP server. The whole infrastructure (SMTP server and Java EE backend) for this example is combined in a simple docker-compose.yml file:

version: '3'
services:
  app:
    build: ./
    ports:
      - "8080:8080"
      - "4848:4848"
    links:
      - smtp
  smtp:
    image: namshi/smtp
    ports:
      - "25:25"

With this setup, the Docker container with the Payara application server can reach the SMTP server via its service name smtp and I don’t need to hardcode any IP address.

The email session is then configured (connection settings and JNDI name) in Payara with a post-boot asadmin script:

# Connecting to the SMTP server within the Docker Compose environment
create-javamail-resource --mailhost smtp --mailuser duke --fromaddress duke@java.ee mail/localsmtp

# For connecting to e.g. Gmail's SMTP server you have to specify further parameters (check e.g. https://medium.com/@swhp/sending-email-with-payara-and-gmail-56b0b5d56882)

deploy /opt/payara/deployments/java-ee-sending-mails.war

For the sake of completeness, this is the Dockerfile for the backend:

FROM payara/server-full:5.192
COPY create-mail-session.asadmin $CONFIG_DIR
COPY target/java-ee-sending-mails.war $DEPLOY_DIR
ENV POSTBOOT_COMMANDS $CONFIG_DIR/create-mail-session.asadmin

You can find the source code alongside a docker-compose.yml file to bootstrap the application and an SMTP server for local development on GitHub.

Have fun sending emails with Java EE,

Phil

The post #HOWTO: Send emails with Java EE using Payara appeared first on rieckpil.


by rieckpil at June 09, 2019 05:34 PM

[EN] Using Logger with @Produces

by Altuğ Bilgin Altıntaş at June 06, 2019 12:23 PM

Logging events is crucial, so how can we use a logging mechanism in our Jakarta EE projects? Here we go:

import java.util.logging.Logger;
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;

/**
*
* @author altuga
*/
public class LogManager {

   @Produces
   public Logger configure(InjectionPoint point) {
     // use the declaring class name so each class gets its own logger
     // (point.getMember().getName() would return the injected field's name instead)
     String name = point.getMember().getDeclaringClass().getName();
     return Logger.getLogger(name);
   }
}

In order to use the Logger in your project, just inject it and start using it like this:

import java.util.List;
import java.util.logging.Logger;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import org.eclipse.microprofile.metrics.annotation.Metered;


@Path("/ping")
@Stateless
public class PingResource {

   @Inject
   Pingy pingy;

   @Inject
   Logger logger;

   @GET
   public String ping() {
     return pingy.pingMe();
   }

   @POST
   @Metered
   public void save(Ping ping) {
    logger.info("save method called...");
    pingy.save(ping);
  }

   @GET
   @Path("/all")
   public List<Ping> getAll() {
     return pingy.getAll();
   }
}

After that, you should be able to see the logs in the application server’s log management system.

Enjoy


by Altuğ Bilgin Altıntaş at June 06, 2019 12:23 PM

Java EE - Jakarta EE Initializr

June 03, 2019 01:37 PM

Getting started with Jakarta EE just became even easier!

Get started

Update!

version 1.2 now has…

Payara 5.192 and JDK 11.0.3


June 03, 2019 01:37 PM

The Payara Monthly Roundup for May 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at June 03, 2019 10:31 AM

This was a big month for the Payara Team. We just released Payara Platform 5.192, we toured all across Japan and there has been plenty going on in the Java world with the announcement by the Eclipse Foundation regarding the continued use of the javax namespace. 


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at June 03, 2019 10:31 AM

The Power of the Application Server

by Edwin Derks at May 31, 2019 07:48 AM

Presently, there is a battle of frameworks going on that compete for being the best “rightsizing” framework, tailored for building and running microservices as efficiently as possible. However, there is an older, yet proven concept available to run enterprise applications, that doesn’t apply the concept of rightsizing. On the contrary, it provides access to the full feature-rich specifications set of Jakarta EE (previously Java EE), without the need to right-size your enterprise applications.

The concept I’m referring to is that of the Application Server, commonly used in the Jakarta EE ecosystem. Because most developers are focusing on the rightsizing frameworks today, the concept of an application server is sometimes perceived to be outdated and not suitable for building (micro)services. This is not necessarily true, and the application server still has a part to play. Therefore I would like to address these perceptions by revisiting the concept of the application server and highlight its perks in building and running Java-based enterprise applications.

Application Server Characteristics

Currently, there are several application servers available, built by vendors and often made available as open-source projects. Each of these application servers implements a certain version of the Jakarta EE specification set and is used to run your Jakarta EE enterprise applications. Present-day application servers implement the Jakarta EE specifications, which are not yet officially released but are compatible with Java EE 8. Although the various application servers are likely to share some implementation details, they are also very different: if you compare their internals, you will see that each applies its own approach to implementing the specifications and running the server. Some servers also provide vendor-specific commercial features that aim to make using the application server a more optimal experience.

Eclipse MicroProfile

Note that next to the Jakarta EE specifications, several application servers also implement the specifications from Eclipse MicroProfile. Like Jakarta EE, this is a specifications-based framework developed under the stewardship of the Eclipse Foundation. MicroProfile provides you, among other features, with the ability to run your application server in a scalable environment like a cloud. This is a good fit for orchestration and monitoring tools, because they can operate on the MicroProfile endpoints exposed for health checks and metrics.
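
The health-check endpoint maps naturally onto Kubernetes probes. A sketch of such a probe configuration (the port is an assumption; newer MicroProfile versions additionally split the endpoint into /health/live and /health/ready):

```
livenessProbe:
  httpGet:
    path: /health
    port: 9080
```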

Lean and tidy enterprise applications

If you look at the picture above, you can see that the application server runs on Java and exposes the various Jakarta EE, and possibly MicroProfile, specifications to run Java-based enterprise applications. This means you can compile your enterprise applications against the matching specifications to make them compatible with the application server. The advantage is that you don’t have to include the implementations of the specifications in your build artifacts, often packaged as a WAR (Web ARchive). That leads to enterprise applications that contain only your business logic and are therefore often just a few kilobytes in size. Your enterprise application essentially becomes an extension of the application server itself.

Easy development

To start developing Jakarta EE-based enterprise applications, you only have to create a Maven-based Java project that contains the following dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

This is currently a legacy Java EE 8 dependency, but it will soon be replaced by a Jakarta EE counterpart. Since Jakarta EE and Java EE 8 are to remain compatible with regard to the specifications, the dependencies should be drop-in replacements.

In case your application server also supports MicroProfile, you can also include this dependency in your Maven project:

<dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>2.2</version>
    <type>pom</type>
</dependency>

These dependencies alone provide you with full and seamless access to the complete specification sets of Jakarta EE and MicroProfile in your project. If you start coding in your IDE, you should be able to access and compile against the Java classes, interfaces and annotations from the specifications that are exposed by the Maven dependencies.

Deploying your enterprise application

By using Maven’s Jakarta EE and MicroProfile dependencies for building enterprise applications, you can also easily build and deploy your application on your application server. Usually, you only have to tell Maven that you want your application built as a WAR (Web ARchive) which implicitly contains the internal file structure that is understood by the application server.

<packaging>war</packaging>

When you have chosen an application server to run your enterprise application on, you can download and unzip it before configuring your IDE to deploy your enterprise application onto the application server.

When you want to deploy your code on the application server, you can easily do so by using the (re)deploy option in your IDE. Maven will then (re)build your enterprise application and deploy it on the (running) application server. Since the WAR is usually only kilobytes in size, this should only take milliseconds. This should save you lots of development time and provide you with a pleasant development experience at the same time.

Additional Powers

But that is not all. Using an application server should provide you with a machine that has been optimized by a vendor to handle high loads of requests and data. As I explained earlier, the whole application server consists of several components. These are all tuned and optimized to run in harmony, letting the application server run with optimal performance and stability on the available resources. Yes, this can take lots of RAM and CPU cycles, but you should ask yourself if this really is a problem. This phenomenon has not yet been solved by the rightsizing frameworks either, and it should be justified when it provides you with the desired performance and stability.

Another benefit is patching and upgrading the application server. Since the internals of the whole application server are available to you, details like what’s in it and how it works are perfectly transparent. You can use this to your advantage, because it allows you to tune the application server to your needs, or to replace parts of it when patches or upgrades are made available by the vendor.

Conclusion

An application server is, of course, no silver bullet, but neither are other concepts, including the “rightsizing” frameworks. However, there is still a place for application servers in the modern world.

The question you should ask yourself when you are considering using application servers is: do you really need microservices? I personally don’t think that is necessarily true. Sometimes you just want scalable services. And that’s an angle of approach where Jakarta EE and the appliance of application servers really shine through.

Due to the monolithic approach of building and running enterprise applications, an application server should relieve you of the architectural, infrastructural and operational overhead that you implicitly get when you build on a pure microservices-based architecture.

By using application servers, you can still build scalable enterprise applications, and with the addition of MicroProfile, these are also a fit for scalable environments. And because an application server is tuned with performance, stability and ease of development in mind, it should be a solid base for your project when it aligns with Jakarta EE’s angle of approach to building present-day enterprise applications.


by Edwin Derks at May 31, 2019 07:48 AM

Jakarta EE, A de facto standard in the making

by David R. Heffelfinger at May 28, 2019 10:06 PM

I’ve been involved in Java EE since the very beginning, having written one of the first ever books on Java EE. My involvement in Java EE / Jakarta EE has been in an education and advocacy role: I have written books, articles and blog posts, and given talks at conferences about the technology. I advocate Jakarta EE not because I’m paid to do so, but because I really believe it is a great technology. I’m a firm believer that the fact that Jakarta EE is a standard with multiple competing implementations results in very high-quality implementations, since Jakarta EE avoids vendor lock-in and encourages competition, benefiting developers.


Oracle’s donation of Java EE to the Eclipse Foundation was well received and celebrated by the Java EE community. Many prominent community members had been advocating for a more open process for Java EE, which is exactly what Jakarta EE, under the stewardship from the Eclipse Foundation provides.


There are some fundamental changes in how Jakarta EE is managed, compared to Java EE, that benefit the Jakarta EE community greatly.

Fundamental differences between Java EE and Jakarta EE Management


Some of the differences in the way Jakarta EE is managed as opposed to Java EE are that there is no single vendor controlling the technology, there is free access to the TCK and there is no reference implementation.

No single company controls the standard

First and foremost, we no longer have a single company as a steward of Jakarta EE. Instead, we have several companies who have a vested interest in the success of the technology working together to develop the standard. This has the benefit that the technology is not subject to the whims of any one vendor, and, if any of the vendors loses interest in Jakarta EE, others can easily pick up the slack. The fact that there is no single vendor behind the technology makes Jakarta EE very resilient, it is here to stay.

TCK freely accessible

Something those of us heavily involved in Jakarta EE (and Java EE before) take for granted, but that may not be clear to others, is that Jakarta EE is a set of specifications with multiple implementations. Since the APIs are defined in a specification, they don’t change across Jakarta EE implementations, making Jakarta EE compliant code portable across implementations. For example, a Jakarta EE compliant application should run with minimal or no modifications on popular Jakarta EE implementations such as Apache TomEE, Payara, IBM’s Open Liberty or Red Hat’s Thorntail.


One major change from Java EE is that the Technology Compatibility Kit (TCK) is open source and free. The TCK is a set of tests to verify that a Jakarta EE implementation is 100% compliant with all Jakarta EE specifications. With Java EE, organizations wanting to create a Java EE implementation had to pay large sums of money to gain access to the TCK; once their implementation passed all the tests, it was certified as Java EE compatible. The fact that the TCK was not freely accessible became a barrier to innovation, as smaller organizations and open-source developers did not always have the funds to get access to it. Now that the TCK is freely accessible, the floodgates will open, and we should see many more quality implementations of Jakarta EE.

No reference implementation

Another major change between Java EE and Jakarta EE is that Java EE had the concept of a reference implementation. The idea behind having a Java EE reference implementation was to prove that suggested API specifications were actually feasible to implement. Having a reference implementation, however, had a side effect: if the reference implementation implemented something that wasn’t properly defined in the specification, many developers expected all Java EE implementations to behave the same way, making the reference implementation a de facto Java EE specification of sorts. Jakarta EE does away with the concept of a reference implementation and will have multiple compatible implementations instead. The absence of a reference implementation will result in more complete specifications, as differences in behavior between implementations will bring deficiencies in the specifications to light, and these deficiencies can then be addressed by the community.

Conclusion

With multiple organizations having a vested interest in Jakarta EE’s success, a lowered barrier of entry for new Jakarta EE implementations, and better specifications, Jakarta EE will become the de facto standard in server-side Java development.



by David R. Heffelfinger at May 28, 2019 10:06 PM

Election time for Jakarta EE Working Group Committees!

by Tanja Obradovic at May 24, 2019 10:02 AM

The Jakarta EE Working Group charter identifies three key committees to drive the various facets of the working group for which there are elected positions to be filled: the Steering Committee, the Specification Committee, and the Marketing and Brand Committee.

The elected positions are to represent each of the Enterprise Members, Participant Members, and Committer Members.  Note that Strategic Members each have a representative appointed to these committees.

With this announcement, the Foundation will hold elections on behalf of the working group using the proposed timetable listed below. This mimics the process used by other working groups, as well as the process used by the Eclipse Foundation itself for filling the elected positions on its Board.

All members are encouraged to consider nominating someone for the positions, and self-nominations are welcome. The period for nominations will open later this week and will run through June 4th.  Nominations may be sent to elections@eclipse.org.

Once nominations are closed, we will inform all working group members of the candidates and will distribute ballots via email to those eligible to vote.  The election process will follow the Eclipse “Single Transferable Vote” method, as defined in the Eclipse Bylaws.  

The winning candidates will be announced on this mailing list immediately after the elections are concluded.  

Election Schedule

Nomination period:  May 24 - June 4 (self-nominations are welcome)

Election period:  June 11 - June 25

Winning candidates announced:  June 27

 

The following positions will be filled as part of this election:

 

Steering Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Specification Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members

Marketing and Brand Committee

Two seats allocated for Enterprise Members

One seat allocated for Participant Members

One seat allocated for Committer Members


by Tanja Obradovic at May 24, 2019 10:02 AM

Persistence for Java Microservices in Kubernetes via JPA

by Niklas Heidloff at May 21, 2019 03:14 PM

Over the last weeks I’ve worked on an example application that demonstrates how Java EE developers can get started with microservices. The application is a full end-to-end sample which includes a web application, business logic, authentication and now also persistence. It runs on Kubernetes and Istio and there are scripts to easily deploy it.

Get the cloud-native-starter code from GitHub.

Java Persistence API

In the example I use a full open source Java stack with OpenJ9, OpenJDK, Open Liberty and MicroProfile. In order to deploy the microservices to Kubernetes, I’ve created an image. Read my article Dockerizing Java MicroProfile Applications for details.

Open Liberty provides some pretty good guides. One guide is specifically about JPA: Accessing and persisting data in microservices. I don’t want to repeat everything here, but only highlight the changes I had to make to run this functionality in a container rather than via a local Open Liberty installation.

Here is a short description of JPA:

JPA is a Java EE specification for representing relational database table data as Plain Old Java Objects (POJO). JPA simplifies object-relational mapping (ORM) by using annotations to map Java objects to tables in a relational database. In addition to providing an efficient API for performing CRUD operations, JPA also reduces the burden of having to write JDBC and SQL code when performing database operations and takes care of database vendor-specific differences.

Configuration of the Sample Application

The following diagram shows the simplified architecture of the cloud-native-starter example. A web application invokes the Web-API service through Ingress; the Web-API service implements a backend-for-frontend pattern and invokes the Articles service, which stores data in a SQL database on the IBM Cloud. Obviously you can use any other SQL database instead.

In order to access Db2 on the IBM Cloud, the driver first needs to be downloaded via Maven. Note that the driver does not go into the war file together with the business logic of the microservice; instead it needs to be copied into a specific Open Liberty directory: /opt/ol/wlp/usr/shared/resources/jcc-11.1.4.4.jar
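
In a container image this can be done with a COPY instruction. A sketch, assuming the driver jar lies in the Docker build context and an Open Liberty base image is used:

```
FROM open-liberty:full
# place the Db2 JCC driver in the shared resources directory mentioned above
COPY jcc-11.1.4.4.jar /opt/ol/wlp/usr/shared/resources/jcc-11.1.4.4.jar
```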

Next you need to define the driver and the data source in server.xml.

<server description="OpenLiberty Server">
    <featureManager>
        <feature>webProfile-8.0</feature>
        <feature>microProfile-2.1</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="8080" httpsPort="9443"/>

    <library id="DB2JCCLib">
        <fileset dir="${shared.resource.dir}" includes="jcc*.jar"/>
    </library>

    <dataSource id="articlejpadatasource"
              jndiName="jdbc/articlejpadatasource">
        <jdbcDriver libraryRef="DB2JCCLib" />
        <properties.db2.jcc databaseName="BLUDB"
            portNumber="50000"
            serverName="DB2-SERVER"         
            user="DB2-USER" 
            password="DB2-PASSWORD" />
  </dataSource>
</server>

Next, the persistence unit needs to be defined in persistence.xml.

The tricky part for me was figuring out the right location for this file. In order for all Maven versions to build it correctly, I put it in ‘src/main/resources/META-INF/persistence.xml’. This produces an articles.war file with the internal structure ‘classes/META-INF/persistence.xml’.

<persistence version="2.2"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence 
                        http://xmlns.jcp.org/xml/ns/persistence/persistence_2_2.xsd">
    <persistence-unit name="jpa-unit" transaction-type="JTA">
        <jta-data-source>jdbc/articlejpadatasource</jta-data-source>
        <properties>
            <property name="eclipselink.ddl-generation" value="create-tables"/>
            <property name="eclipselink.ddl-generation.output-mode" value="both" />
        </properties>
    </persistence-unit>
</persistence>

Usage of JPA in Java

Once all the configuration has been done, writing the Java code is simple.

First you need to define the Java class which represents the entries in a table. Check out the code of ArticleEntity.java with the five columns id, title, url, author and creation date. As defined in persistence.xml this table is created automatically.
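
Such an entity could look roughly like the following sketch (field names are illustrative; the actual code is in ArticleEntity.java in the repository):

```java
@Entity
public class ArticleEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String title;
    private String url;
    private String author;
    private Timestamp creationDate;

    // getters and setters omitted for brevity
}
```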

The CRUD operations for articles are defined in ArticleDao.java. The code is pretty straightforward. The only thing that confused me was that I have to begin and commit transactions manually for the create operation. In the Open Liberty sample this was not necessary; I’m trying to find out what the difference is.
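
Manual transaction demarcation for such a create operation looks roughly like this (a sketch with illustrative names; see ArticleDao.java in the repository for the actual code):

```java
@ApplicationScoped
public class ArticleDao {

    @PersistenceContext(name = "jpa-unit")
    private EntityManager entityManager;

    @Resource
    private UserTransaction userTransaction;

    public void createArticle(ArticleEntity article) throws Exception {
        // begin and commit the JTA transaction explicitly
        userTransaction.begin();
        entityManager.persist(article);
        userTransaction.commit();
    }
}
```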

In JPADataAccess.java the logic to add and read articles is implemented; the ArticleDao is injected. Again, the code looks simple. The lesson I learned here is that dependency injection only seems to work when the upper layers that invoke this code use dependency injection and @ApplicationScoped as well.

How to run the Example

I’ve written scripts to create the SQL database and create the Articles service. Check out the documentation to run the sample yourself on Minikube or the IBM Cloud Kubernetes Service.

Once installed the OpenAPI API Explorer can be used to create a new article.

The table is displayed in the Db2 console.

The data in the table can be displayed in the console as well.

To learn more about microservices built with Java and MicroProfile, check out the cloud-native-starter repo.

The post Persistence for Java Microservices in Kubernetes via JPA appeared first on Niklas Heidloff.


by Niklas Heidloff at May 21, 2019 03:14 PM

Authorization in Microservices with MicroProfile

by Niklas Heidloff at May 20, 2019 08:55 AM

I’ve been working on an example that demonstrates how to get started with cloud-native applications as a Java developer. The example is supposed to be a full end-to-end sample application which includes the topics of authentication and authorization, since that functionality is required by most applications. This article describes how to check authorization in microservices implemented with Java EE and Eclipse MicroProfile.

Get the code of the example application cloud-native-starter from GitHub.

Previously I blogged about how to do authorization with Istio. In general as much as possible of the functionality of the Kubernetes and Istio platforms should be leveraged when building microservices. However, some functionality needs to be implemented in the business logic of the application. One such scenario is fine grained authorization that only the application can determine. For example in a project management application only the application knows the owners of specific tasks.

OpenID Connect and JWT

Before I describe the sample, let me give you some background and explain my requirements.

In order to authenticate and authorize users, I’d like to use the standards OpenID Connect 1.0 and OAuth 2.0, which can be used with many existing identity providers. In the context of enterprise applications you want to leverage existing organization directories. IBM App ID, for example, acts as an identity provider or identity provider proxy. For simple tests you can define test users in a cloud directory. For production usage App ID can be configured to work against third-party providers such as Active Directory Federation Services via SAML.

The other nice thing about OpenID Connect for developers is that you don’t have to understand the internals of the different identity providers, but can use standardized and easy APIs. As the response of a successful OAuth dance, you get access tokens and user tokens as JSON Web Tokens (JWT).
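
Because a JWT is just three Base64url-encoded segments (header.payload.signature) separated by dots, the claims can be inspected with nothing but the JDK. A self-contained sketch (the class name and token content are made up; in production the signature must of course be verified before trusting any claim):

```java
import java.util.Base64;

public class JwtPayloadDemo {

    // decode the middle (claims) segment of a JWT
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\."); // [0]=header, [1]=payload, [2]=signature
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        String claims = "{\"email\":\"admin@demo.email\"}";
        // build a dummy unsigned token: {"alg":"none"} header, empty signature
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(claims.getBytes());
        String token = "eyJhbGciOiJub25lIn0." + payload + ".";
        System.out.println(decodePayload(token)); // prints {"email":"admin@demo.email"}
    }
}
```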

Cloud-Native Sample Application

There are five services in the application. The services, except for the managed one, are available as open source and run in Kubernetes clusters with Istio. In my case I utilize Minikube locally or the IBM Cloud Kubernetes Service.

  • Web-App: Simple web application built with Vue.js which provides login functionality for users and stores tokens locally
  • Web-App Hosting: Nginx based hosting of the Vue.js resources
  • Authentication: Node.js microservice which handles the OAuth dance and returns tokens to the web application
  • Web-API: Provides an unprotected endpoint ‘/getmultiple’ to read articles and a protected endpoint ‘create’ to create a new article
  • IBM App ID: Contains a cloud directory with test users and acts as an OpenID identity provider

Check out my other article Authenticating Web Users with OpenID and JWT that explains how the tokens are retrieved and how they are stored in the web application.

Authorization via MicroProfile

Eclipse MicroProfile supports controlling user and role access to microservices with JSON Web Token. Let’s take a look at the sample.

From the web application the endpoint ‘/manage’ of the Web API service can be invoked. Only the user ‘admin@demo.email’ is allowed to invoke this endpoint.

For the user ‘user@demo.email’ an error is thrown.

Watch the animated gif to see the flow in action.

This is the Java code that checks authorization:

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.json.Json;
import javax.json.JsonObject;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.eclipse.microprofile.jwt.JsonWebToken;

@RequestScoped
public class Manage {
   // The caller's validated JWT, injected by MicroProfile JWT
   @Inject
   private JsonWebToken jwtPrincipal;

   @POST
   @Path("/manage")
   @Produces(MediaType.APPLICATION_JSON)
   public Response manage() {
      String principalEmail = this.jwtPrincipal.getClaim("email");
      // Compare with the constant first to avoid a NullPointerException
      // if the token does not contain an 'email' claim
      if ("admin@demo.email".equalsIgnoreCase(principalEmail)) {
         JsonObject output = Json.createObjectBuilder().add("message", "success").build();
         return Response.ok(output).build();
      }
      else {
         JsonObject output = Json.createObjectBuilder().add("message", "failure").build();
         return Response.status(Status.FORBIDDEN).entity(output).type(MediaType.APPLICATION_JSON).build();
      }
   }
}

The tricky part to get this working was the configuration of the Liberty server. Thanks a lot to Chunlong Liang for figuring this out.

In order to validate the JWT token, MicroProfile needs to contact App ID via ‘https’. The same is necessary when using Istio to check authorization. The difference is that Istio already comes with the public key of App ID. For MicroProfile applications running on Open Liberty, the key needs to be imported into the validation keystore first. It sounds like this is not necessary for WebSphere Liberty, but I haven’t tested it.

<server description="OpenLiberty Server">
    <featureManager>
        <feature>webProfile-8.0</feature>
        <feature>microProfile-2.1</feature>
        <feature>mpJwt-1.1</feature>
        <feature>appSecurity-3.0</feature>
    </featureManager>

    <mpJwt id="jwt"
        issuer="https://us-south.appid.cloud.ibm.com/oauth/v4/xxx"
        jwksUri="https://us-south.appid.cloud.ibm.com/oauth/v4/xxx/publickeys"
        userNameAttribute="sub"
        audiences="ALL_AUDIENCES"/>

    <sslDefault sslRef="RpSSLConfig"/>
    <ssl id="RpSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="validationKeystore"/>
    <keyStore id="defaultKeyStore" location="keystore.jceks" type="JCEKS" password="secret" />
    <keyStore id="validationKeystore" location="/config/key.jks" type="jks" password="changeit"/>
</server>

In order to import the public App ID key, you need to download the key first. After this you can use keytool to import it:

keytool -import -file /Users/nheidloff/Desktop/wildcardbluemixnet.crt -alias certificate_alias -keystore /Users/nheidloff/git/cloud-native-starter/auth-java-jee/keystore.jks -storepass keyspass

The example application contains this key already. If you want to use another OpenID Connect provider, you need to import its key as just described.

The other issue I ran into was that App ID doesn’t return a JWT token in exactly the format MicroProfile expects. Fortunately, the claims ‘sub’ (subject/user) and ‘audiences’ can be configured in the server.xml too.

To run the example yourself, follow these instructions.

The post Authorization in Microservices with MicroProfile appeared first on Niklas Heidloff.


by Niklas Heidloff at May 20, 2019 08:55 AM

Update for Jakarta EE community: May 2019

by Tanja Obradovic at May 18, 2019 10:46 AM

The Jakarta EE community is the driving force behind the future of cloud-native Java. Active participation represents the best way to drive the vendor-neutral and rapid innovation necessary to modernize enterprise systems for cloud use cases. That said, we’d like to make sure that the community is kept up-to-speed with the latest developments in the Jakarta EE ecosystem.

We’re launching a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. There are a few ways to get a grip on the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud-native Java, read on. We’d also like to use this opportunity to invite you to get involved in EE4J projects and join the conversation around the Jakarta EE Platform.

Without further ado, let’s have a look at what has happened this month:

Update on Jakarta EE Rights to Java Trademarks

The process of migrating Java EE to the Eclipse Foundation has been a collaborative effort between the Eclipse Foundation staff and the many contributors, committers, members, and stakeholders that are participating. The Eclipse Foundation and Oracle have agreed that the javax package namespace will not be evolved by the Jakarta EE community. Furthermore, Java trademarks such as the existing specification names will not be used by Jakarta EE specifications.

Since the ratified Jakarta EE specifications will be available under a different license (the Eclipse Foundation Specification License), we recommend that you update your contributor and committer agreements.

Read more about the implications and what’s next for the Jakarta EE Working Group in Mike Milinkovich’s latest blog.

In order to evolve Jakarta EE, we must transition to a new namespace. In an effort to bootstrap the conversation, the Jakarta EE Specification Committee has prepared two proposals (Big-bang Jakarta EE 9, Jakarta EE 10 new features and incremental change in Jakarta EE 9 and beyond) on how to make the move into the new namespace smoother. These proposals represent a starting point, but the community is warmly invited to submit more proposals.

Community discussion on how to transition to the jakarta namespace will conclude Sunday, June 9th, 2019.

EFSP v1.1

Version 1.1 of the Eclipse Foundation Specification Process was approved on March 20, 2019. The EFSP leverages and augments the Eclipse Development Process (EDP), which defines important concepts, including the Open Source Rules of Engagement, the organizational framework for open source projects and teams, releases, reviews, and more.
 

JESP v1.0

Jakarta EE Specification Process v1.0 was approved on April 3, 2019. Therefore, the Jakarta EE Specification Committee now adopts the EFSP v1.1 as the Jakarta EE Specification Process with a few modifications, including the fact that any changes or revisions of the Jakarta EE Specification Process must be approved by a Super-majority of the Specification Committee.

 

TCK process:

Work on the TCK process is in progress, with Tomitribe CEO David Blevins leading the effort. The TCK process is expected to be completed in the near future. The document will shed light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them, and more.
 

Jakarta EE 8 release

Jakarta EE 8 is a highly-anticipated release, especially since it represents the first release that’s completely based on Java EE to ensure backward compatibility. It relies on four pillars of work, namely specifications for the full platform, TCKs, including documents on how to use them, a compatible implementation for the release of Jakarta EE 8, and marketing aspects such as branding, logo usage guidelines, and marketing and PR activities.

All parties involved are far along with the planning process and work on specifications has already started. Please look at Wayne Beaton’s blogs on the work in progress with regard to specification project names and specification scopes.

 

EE4J GitHub

Get involved in Eclipse EE4J! There are currently three projects that you can be a part of, namely Specification Document Names, Jakarta Specification Project Names, and Jakarta Specification Scope Statements (for the specifications). Furthermore, there are plenty of repos that require your attention and involvement.

But before you dive right in, you should read the latest blog from the Jakarta EE Specification committee, which recently approved a handful of naming standards for Jakarta EE Specification projects. While you’re at it, you should read Wayne Beaton’s blog on why changing the names of the specifications and the projects that contain their artifacts is a necessary step.

Head over to GitHub and join the conversation!
 

Jakarta EE Platform

There’s no better time to get involved in the work for the Jakarta EE Platform than the present. As of now, the projects that demand the community’s attention are the Jakarta EE 8 Platform Specification, which is meant to keep track of the work involved with creating the platform specification for Jakarta EE 8, Jakarta EE 9 Platform Specification, intended to keep track of the work involved with creating the platform specification for Jakarta EE 9 and Jakarta EE.Next Roadmap Planning, which seeks to define a roadmap and plan for the Jakarta EE 9 release.

Community Engagement

Speaking of community engagement, there are a few ways to get a grip on the work that has been invested in Jakarta EE so far, learn more about Jakarta EE-related plans and get involved in shaping the future of cloud-native Java. One way to do that is by reading Tanja Obradovic’s blog series on how to get involved.

You should also be aware of the newly-created Jakarta EE community calendar, which is now open to the public and offers an overview of all the activities surrounding Jakarta EE. The community is invited to participate in Jakarta Tech Talks, which take place on a monthly basis, attend Jakarta EE Update monthly calls (the next one is on May 8), help build the Jakarta EE wiki with all relevant links and look for opportunities to engage and become part of the community.

Last but not least, the Jakarta EE Developer Survey will be released in the next few days. Head over to jakarta.ee to discover the latest trends, the community’s top priorities regarding the future of Jakarta EE and more. Stay tuned!

Conclusion:

Thank you for your interest in Jakarta EE. To help us build tomorrow’s enterprise Java platform, join the Jakarta EE community now or get involved by becoming a contributor or committer to one of the EE4J projects.   

Help steer Jakarta EE toward its exciting future by joining the Jakarta EE working group!


by Tanja Obradovic at May 18, 2019 10:46 AM

I am an Incrementalist: Jakarta EE and package renaming

by BJ Hargrave (noreply@blogger.com) at May 17, 2019 05:11 PM


Eclipse Jakarta EE has been placed in the position that it may not evolve the enterprise APIs under their existing package names. That is, the package names starting with java or javax. See Update on Jakarta EE Rights to Java Trademarks for the background on how we arrived at this state.

So this means that after Jakarta EE 8 (which is API identical to Java EE 8 from which it descends), whenever an API in Jakarta EE is to be updated for a new specification version, the package names used by the API must be renamed away from java or javax. (Note: some other things will also need to be renamed such as system property names, property file names, and XML schema namespaces if those things start with java or javax. For example, the property file META-INF/services/javax.persistence.PersistenceProvider.) But this also means that if an API does not need to be changed, then it is free to remain in its current package names. Only a change to the signature of a package, that is, adding or removing types in the package or adding or removing members in the existing types in the package, will require a name change to the package.
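The META-INF/services mechanism mentioned above illustrates why such resources are affected: java.util.ServiceLoader looks providers up by the interface's fully qualified name, which doubles as the resource file name, so renaming the package also renames the file. A minimal sketch, assuming a hypothetical PersistenceSpi interface standing in for the real javax.persistence.PersistenceProvider:

```java
import java.util.ServiceLoader;

// Hypothetical SPI interface -- a stand-in for javax.persistence.PersistenceProvider.
interface PersistenceSpi {
    String vendor();
}

class ServiceLoaderDemo {
    public static void main(String[] args) {
        // ServiceLoader scans for META-INF/services/<fully-qualified-interface-name>.
        // Renaming the interface's package renames that file name, too. No such
        // file exists on this classpath, so no providers are found.
        ServiceLoader<PersistenceSpi> loader = ServiceLoader.load(PersistenceSpi.class);
        System.out.println("providers found: " + loader.stream().count());
    }
}
```

Any artifact that ships such a provider-configuration file under a javax name would have to be rebuilt for the renamed API.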

There has been much discussion on the Jakarta EE mail lists and in blogs about what to do given the above constraint and David Blevins has kindly summed up the two main choices being discussed by the Jakarta EE Specification Committee: https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html.

In a nutshell, the two main choices are (1) “Big Bang” and (2) Incremental. Big Bang says: Let’s rename all the packages in all the Jakarta EE specifications all at once for the Jakarta EE release after Jakarta EE 8. Incremental says: Let’s rename packages only when necessary such as when, in the normal course of specification innovation, a Jakarta EE specification project wants to update its API.

I would like to argue that Jakarta EE should chose the Incremental option.

Big Bang has no technical value and large, up-front community costs.

The names of the packages are of little technical value in and of themselves. They just need to be unique and descriptive to programmers. In source code, developers almost never see the package names. They are generally in import statements at the top of the source file and most IDEs kindly collapse the view of the import statements so they are not “in the way” of the developer. So, a developer will generally not really know or care if the Jakarta EE API being used in the source code is a mix of package names starting with java or javax, unchanged since Jakarta EE 8, and updated API with package names starting with jakarta. That is, there is little mental cost to such a mixture. The Jakarta EE 8 API are already spread across many, many package names and developers can easily deal with this. That some will start with java or javax and some with jakarta is largely irrelevant to a developer. The developer mostly works with type and member names which are not subject to the package rename problem.

But once source code is compiled into class files, packaged into artifacts, and distributed to repositories, the package names are baked into the artifacts and play an important role in interoperation between artifacts: binary compatibility. Modern Java applications generally include many third-party open source artifacts from public repositories such as Maven Central, and there are many such artifacts in Maven Central which use the current package names. If Jakarta EE 9 were to rename all packages, then the corpus of existing artifacts is no longer usable in Jakarta EE 9 and later. At least not without some technical “magic” in builds, deployments, and/or runtimes to attempt to rename package references on-the-fly. Such magic may be incomplete, will break jar signatures, and will complicate builds and tool chains. It will not be transparent.

Jakarta EE must minimize the inflection point/blast radius on the Java community caused by the undesired constraint to rename packages if they are changed. The larger the inflection point, the more reason you give to developers to consider alternatives to Jakarta EE and to Java in general. The Incremental approach minimizes the inflection point providing an evolutionary approach to the package naming changes rather than the revolutionary approach of the Big Bang.

Some Jakarta EE specifications may never be updated. They have long been stable in the Java EE world and will likely remain so in Jakarta EE. So why rename their packages? The Big Bang proposal even recognizes this by indicating that some specifications will be “frozen” in their current package names. But, of course, there is the possibility that one day, Jakarta EE will want to update a frozen specification. And then the package names will need to be changed. The Incremental approach takes this approach to all Jakarta EE specifications. Only rename packages when absolutely necessary to minimize the impact on the Java community.

Renaming packages incrementally, as needed, does not reduce the freedom of action for Jakarta EE to innovate. It is just a necessary part of the first innovation of a Jakarta EE specification.

A Big Bang approach does not remove the need to run existing applications on earlier platform versions. It increases the burden on customers since they must update all parts of their application for the complete package renaming when they need to access a new innovation in a single updated Jakarta EE specification, even when none of the other Jakarta EE specifications they use have any new innovations. Just package renames for no technical reason. It also puts a large burden on all application server vendors. Rather than having to update parts of their implementations to support the package name changes of a Jakarta EE specification when the specification is updated for some new innovation, they must spend a lot of resources to support both old and new package names for the implementations of all Jakarta EE specifications.

There are some arguments in favor of a Big Bang approach. It “gets the job done” once and for all and for new specifications and implementations the old java or javax package names will fade from collective memories. In addition, the requirement to use a certified Java SE implementation licensed by Oracle to claim compliance with Eclipse Jakarta EE evaporates once there are no longer any java or javax package names in a Jakarta EE specification. However, these arguments do not seem sufficient motivation to disrupt the ability of all existing applications to run on a future Jakarta EE 9 platform.

In general, lazy evaluation is a good strategy in programming. Don’t do a thing until the thing needs to be done. We should apply that strategy in Jakarta EE to package renaming and take the Incremental approach. Finally, I am reminded of Æsop’s fable, The Tortoise & the Hare. “The race is not always to the swift.”


by BJ Hargrave (noreply@blogger.com) at May 17, 2019 05:11 PM

Renaming Jetty from javax.* to jakarta.*

by gregw at May 13, 2019 07:20 AM

The Issue The Eclipse Jakarta EE project has not obtained the rights from Oracle to extend the Java EE APIs living in the javax.* package. As such, the Java community is faced with a choice between continuing to use the

by gregw at May 13, 2019 07:20 AM

The Cloud Native Imperative — Results from the 2019 Jakarta EE Developer Survey

by Mike Milinkovich at May 10, 2019 03:17 PM

The results of the 2019 Jakarta EE Developer Survey are out. Almost 1,800 Java developers from around the world have spoken. Taken together with the engagement and response to my recent posts on the future of Jakarta EE (see my latest blog here), the survey makes clear the developer community is focused on charting a new course for a cloud native future, beginning with delivering Jakarta EE 8. The Java ecosystem has a strong desire to see Jakarta EE, as the successor to Java EE, continue to evolve to support microservices, containers, and multi-cloud portability.

Organized by the Jakarta EE Working Group, the survey was conducted over three weeks in March 2019. Just like last year (see the 2018 results here), Jakarta EE member companies promoted the survey in partnership with the London Java Community, Java User Groups, and other community stakeholders. Thank you to everyone who took the time to participate. Access the full findings of the survey here.

Some of the highlights from this year’s survey include:

  • The top three community priorities for Jakarta EE are: better support for microservices, native integration with Kubernetes (tied at 61 percent), followed by production quality reference implementations (37 percent). To move mission-critical Java EE applications and workloads to the cloud, developers will need specifications, tools, and products backed by a diverse vendor community. Jakarta EE Working Group members have committed to deliver multiple compatible implementations of the Jakarta EE 8 Platform when the Jakarta EE 8 specifications are released.
  • With a third of developers reporting they are currently building cloud native architectures and another 30 percent planning to within the next year, cloud native is critically important today and will continue to be so;
  • The number of Java applications running in the cloud is projected to substantially increase, with 32 percent of respondents expecting that they will be running nearly two-thirds of their Java applications in the cloud within the next two years;
  • Microservices dominates as the architecture approach to implementing Java in the cloud, according to 43 percent of respondents;
  • Spring/Spring Boot again leads as the framework chosen by most developers for building cloud native applications in Java;
  • Eclipse MicroProfile’s adoption has surged, with usage growing from 13 percent in 2018 to 28 percent today;
  • Java continues to dominate when it comes to deploying applications in production environments. It comes as no surprise that most companies are committed to protecting their past strategic investments in Java.

Once again, thanks to everyone who completed the survey and to the community members for their help with the promotion.

Let me know what you think about this year’s survey findings. We are open to suggestions on how we can improve the survey in the future, so please feel free to share your feedback.


by Mike Milinkovich at May 10, 2019 03:17 PM

The Cloud is Driving the Future of the Java Ecosystem and Jakarta EE: Eclipse Foundation Survey Results

by Debbie Hoffman at May 09, 2019 02:54 PM

The 2019 Jakarta EE developer survey results are in - and they show cloud deployments have increased since last year with 62% of Java developers currently building or planning cloud native architectures within the year.


by Debbie Hoffman at May 09, 2019 02:54 PM

Frequently Asked Questions About Jakarta EE 8

by Mike Milinkovich at May 08, 2019 12:00 PM

I’d like to thank the community for the level of engagement we’ve seen in response to my post from last week.   This post, which again represents the consensus view of the Jakarta EE Steering Committee, answers some questions about Jakarta EE 8, which is planned as the initial release of Jakarta EE, and is intended to be fully compatible with Java EE 8, including use of the javax namespace.   We thought it would be useful to reiterate the messages we have been delivering about this release.

Note that this post is not about future Jakarta releases where the namespace will be changed. There is a vigorous discussion going on right now on the jakarta-platform-dev@eclipse.org list (archive), so if you are interested in that topic, I would suggest you participate there. We expect that it will be about a month before the Jakarta EE Spec Committee will determine the next steps in the Jakarta EE roadmap.

Will Jakarta EE 8 break existing Java EE applications that rely upon javax APIs?

No, Jakarta EE 8 will not break existing Java EE applications that rely upon javax APIs. We expect Jakarta EE 8 to be completely compatible with Java EE 8. We expect Jakarta EE 8 to specify the same javax namespace, the same javax APIs, and the same behavior as is specified in Java EE 8. We expect that implementations that pass the Java EE 8 TCKs will also pass the Jakarta EE 8 TCKs, because the Jakarta EE 8 TCKs will be based on the same sources as the Java EE 8 TCKs. Jakarta EE 8 will not require any changes to Java EE 8 applications or their use of javax APIs.

What will Jakarta EE 8 consist of?

The Jakarta EE 8 specifications will:

  • Be fully compatible with Java EE 8 specifications
  • Include the same APIs and Javadoc using the same javax namespace
  • Provide open source licensed Jakarta EE 8 TCKs that are based on, and fully compatible with, the Java EE 8 TCKs.
  • Include a Jakarta EE 8 Platform specification that will describe the same platform integration requirements as the Java EE 8 Platform specification.
  • Reference multiple compatible  implementations of the Jakarta EE 8 Platform when the Jakarta EE 8 specifications are released.
  • Provide a compatibility and branding process for demonstrating that implementations are Jakarta EE 8 compatible.

Will there be Jakarta EE 8 compatible implementations?

Yes.  Multiple compatible implementations of the Jakarta EE 8 Platform will be available when the Jakarta EE 8 specifications are released.  We expect that any Java EE 8 compatible implementation would also be Jakarta EE 8 compatible, and the vendors in the Jakarta EE Working Group intend to certify their Java EE 8 compatible implementations as Jakarta EE 8 compatible.  In addition, because the Jakarta EE TCKs are available under an open source license, we will “lower the bar” for other technology providers to demonstrate Jakarta EE compatibility for their implementations. The lower cost and more liberal Jakarta EE trademark licensing will allow more technology providers to leverage and strengthen the Jakarta EE brand in the Enterprise Java community.  Jakarta EE 8 will provide a new baseline for the evolution of the Jakarta EE technologies, under an open, vendor-neutral community-driven process.

What is the process for delivery of Jakarta EE 8

The process for delivery of Jakarta EE 8 specifications will be fully transparent and will follow the Jakarta EE Specification Process.  Expect to see in coming weeks the delivery of initial, draft Jakarta EE 8 component specifications corresponding to Java EE 8 component specifications.  These will contain Javadoc defining the relevant APIs, and TCKs for compatibility testing. To publish specification text, we need to acquire copyright licenses for this text.  We have obtained Oracle and IBM’s copyright licenses for their  contributions, and intend to obtain the remaining copyright licenses required to publish the text of the Jakarta EE 8 Platform specification, and as much as possible of the component specifications. If you contributed to the Java EE specifications at the JCP in the past, expect to be contacted by the Eclipse Foundation to provide a license to use your contributions in Jakarta EE going forward. Providing such a license will be an important step in supporting the new specification process and the Jakarta EE community.  You will see these draft specifications evolve to final specifications in an open community process. Join the specification projects and participate!

When will Jakarta EE 8 be delivered?

The Jakarta EE Working Group intends to release final Jakarta EE 8 specifications by the fall of 2019.    This is an open community-driven effort, so there will be transparency into the process of driving the Jakarta EE 8 specifications, delivery of the Jakarta EE 8 TCKs, and Jakarta EE 8 compatible implementations.


by Mike Milinkovich at May 08, 2019 12:00 PM

Transitioning Jakarta EE to the “jakarta” namespace

by Ivar Grimstad at May 07, 2019 11:55 AM

As described in Jakarta Going Forward, we need to transition the Jakarta EE specifications to the jakarta.* namespace/base package. After long and intense discussions in the Jakarta EE Specification Committee, we have proposed two possible ways forward to kick-start the discussions on this thread.

In this post, I am highlighting some of the content of the initial post to the mailing list for reference.

Proposal 1: Big-bang Jakarta EE 9, Jakarta EE 10 New Features

The heart of this proposal is to do a one-time move of API source from the javax.* namespace to the jakarta.* namespace with the primary goal of not prolonging industry cost and pain associated with the transition.

https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html

Proposal 2: Incremental Change in Jakarta EE 9 and beyond

Evolve API source from javax.* to the jakarta.* namespace over time on an as-needed basis. The most active specifications would immediately move in Jakarta EE 9. Every Jakarta EE release, starting with version 10 and beyond may involve some javax.* to jakarta.* namespace transition.

https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html

Other Proposals

Other proposals should incorporate the following considerations and goals:

The new namespace will be jakarta.*

APIs moved to the jakarta.* namespace maintain class names and method signatures compatible with equivalent class names and method signatures in the javax.* namespace.

Even a small maintenance change to an API would require a javax.* to jakarta.* change of that entire specification. Examples include:
– Adding a value to an enum
– Overriding/adding a method signature
– Adding default methods in interfaces
– Compensating for Java language changes
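To make those bullets concrete, here is a hedged sketch (GreetingService is a made-up interface, not a real Jakarta API): adding the default method below changes the signature of the interface's package, so under the rule above its whole specification would have to move to jakarta.*.

```java
// Made-up example interface -- not a real Jakarta EE API.
interface GreetingService {
    String greet(String name);

    // Adding this default method is a small maintenance change, but it alters
    // the package signature and would trigger a javax.* -> jakarta.* rename.
    default String greetAll(String... names) {
        StringBuilder sb = new StringBuilder();
        for (String n : names) {
            sb.append(greet(n)).append("; ");
        }
        return sb.toString().trim();
    }
}

class DefaultMethodDemo {
    public static void main(String[] args) {
        GreetingService s = name -> "Hello " + name;
        System.out.println(s.greetAll("Duke", "Jakarta"));
    }
}
```

Existing implementations still compile against the extended interface, which is exactly why default methods are a tempting, yet rename-triggering, way to evolve an API.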

Binary compatibility for existing applications in the javax.* namespace is an agreed goal by the majority of existing vendors in the Jakarta EE Working Group and would be a priority in their products. However, there is a strong desire not to deter new implementers of the jakarta.* namespace from entering the ecosystem by requiring they also implement an equivalent javax.* legacy API.

There is no intention to change Jakarta EE 8 goals or timeline.

Community discussion on how to transition to the jakarta.* namespace will conclude Sunday, June 9th, 2019.

https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html

Contribute

There are already a lot of contributions and lively discussions going on. Please make sure you join the Jakarta EE Platform Developer Discussions mailing list https://accounts.eclipse.org/mailing-list/jakartaee-platform-dev to take part in the conversation. At the time of writing this post, the number of subscribers to the list has more than doubled! Another proof of the passion and commitment in the Jakarta EE community!


by Ivar Grimstad at May 07, 2019 11:55 AM

Transitioning Java EE to the Jakarta EE namespace

by Edwin Derks at May 07, 2019 04:10 AM

A few days ago, the Eclipse Foundation announced on their blog that the Java brand is no longer allowed in future versions of the Jakarta EE platform. That means that Oracle allows Jakarta EE to use the Java EE related code and artifacts as-is; as soon as any of these items is modified, it must be rebranded to the Jakarta EE brand. Knowing the situation of Oracle moving away from Java EE while keeping the brand’s name and logo, this really didn’t come as a surprise to me. Actually, I think it was a job well done by both the negotiators from Eclipse and Oracle to get this out of the negotiations.

What did surprise me is mostly the reactions to this on Twitter. Several people were against this, sharing their disagreement, some even calling it a ‘debacle’. I have only seen a few people who, like me, were relieved that a decision was made in the first place and want to take it up from there. In my personal opinion, this situation was inevitable and would surface at some moment in time, which apparently was the moment of the blog post’s announcement. Am I really the only one who was not surprised?

In any case, from my point of view, what does it mean for me as a developer transitioning to Jakarta EE? It roughly means that I (maybe among other changes) will have to change my Maven artifact names to the Jakarta-branded versions and change the imports of my Java code to the jakarta.* namespace. The Jakarta platform will at that time probably be in some hybrid situation, supporting both Java EE and Jakarta EE specifications. Of course, this is not going to be an ideal situation. But remember that Jakarta EE is going to be compatible with Java EE 8 for a while. Performing the transition during this period should have the least impact.

This is a hurdle that I, and my fellow developers alike, will have to overcome. Simply because that’s the decision that has been made, and we have to live with it. But if you look at the big picture, is this transition actually such an interesting phenomenon? I don’t think so, because after the transition, Jakarta EE will fall completely in line with any other Java-based enterprise development framework, which is also not allowed to use the Java brand in its own code and branding. Jakarta EE will therefore become a self-maturing framework that happens to run on Java, and it will be subject to the same evolutionary consequences of that language as any other Java-based framework.

So please, dear Jakarta EE community, unite and embrace this transition. Let's pick up the glove and make this happen. We have work to do…


by Edwin Derks at May 07, 2019 04:10 AM

Jakarta Going Forward

by Ivar Grimstad at May 05, 2019 06:03 AM

The agreement between the Eclipse Foundation and Oracle regarding rights to Java trademarks has been signed! This is truly an important milestone for Jakarta EE since we will now be able to move forward with Jakarta EE.

As outlined in https://eclipse-foundation.blog/jakarta-ee-java-trademarks, there are two major areas of impact on the Jakarta EE projects:

  • Java Trademarks
  • The javax.* Namespace

Java Trademarks

One part of the agreement concerns the use of Java trademarks. The implication for Jakarta EE is that we have to rename the specifications and the specification projects. This work is ongoing and is tracked in our specification renaming board on GitHub. The EE4J PMC has published the following Naming Standard for Jakarta EE Specifications in order to comply with the trademark agreement.

The javax Namespace

The major topic of the agreement is the use of the javax namespace. The agreement permits Jakarta EE specifications to use the javax namespace only as-is; any changes to the APIs must be made in another namespace.

While the name changes can be considered cosmetic changes, the restrictions on the use of the javax.* namespace come with some technical challenges. For example, how are we going to preserve backwards compatibility for applications written using the javax.* namespace?

The Jakarta EE Specifications Committee has come up with the following guiding principle for Jakarta EE.next:

Maximize compatibility with Jakarta EE 8 for future versions without stifling innovation.

With the restrictions on the use of the javax.* namespace, it is necessary to transition the Jakarta EE specifications to a new namespace. The Jakarta EE Specification Committee has decided that the new namespace will be jakarta.*.
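For application code, the transition boils down to rewriting EE-related imports. A minimal, hypothetical sketch of such a rewrite is shown below; note that only EE packages move, while JDK packages such as javax.crypto stay where they are, so a blanket javax. → jakarta. rewrite would be wrong:

```shell
# Hypothetical sketch: rewrite one EE package (JAX-RS) to the new namespace.
# A real migration would enumerate each EE package explicitly.
echo 'import javax.ws.rs.core.Response;' \
  | sed 's/^import javax\.ws\.rs/import jakarta.ws.rs/'
# prints: import jakarta.ws.rs.core.Response;
```

Tooling support for this kind of source rewrite is one of the topics the community discussion will need to address.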

How and when this transition should happen is now the primary decision for the Jakarta EE community to make. There are several possible ways forward and the forum for these discussions is the Jakarta EE Platform mailing list.

Mailing List: jakartaee-platform-dev@eclipse.org

Please make sure that you subscribe to the mailing list and join in on the discussion. We hope that we will be able to reach some form of consensus within a month that can be presented to the Specification Committee for approval.

An Opportunity

While the restrictions on the use of Java trademarks and the javax.* namespace impose both practical as well as technical challenges, it is also an opportunity to perform some housekeeping.

By renaming the specifications, we can get a uniform, homogeneous naming structure for the specifications that makes more sense and is easier on the tongue than the existing one. By having clear and concise names, we may even get rid of the need for abbreviations and acronyms.

The shift from javax.* to jakarta.* opens up the possibility of differentiating the stable (or legacy) specifications that have played their role from the ones currently being developed.

