
The Simplest Possible Web Component (CustomElement)

by admin at October 18, 2019 05:14 AM

A WebComponent (CustomElement) is an ES6 class:

class HelloWorld extends HTMLElement { 
    connectedCallback() { 
        const message = "world";
        this.innerText = `
            hello, ${message}
        `;
    }
}
customElements.define('hello-world', HelloWorld);

which renders itself after being included in an HTML page:


<!DOCTYPE html>
<html>
<body>
    <hello-world></hello-world>
    <script src="HelloWorld.js"></script>
</body>
</html>    

WebComponents are supported in all recent browsers.

See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).


by admin at October 18, 2019 05:14 AM

A Tool for Jakarta EE Package Renaming in Binaries

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

In a previous post, I laid out my thinking on how to approach the package renaming problem which the Jakarta EE community now faces. Regardless of whether the community chooses big bang or incremental, there are still existing artifacts in the world using the Java EE package names that the community will need to use together with the new Jakarta EE package names.

Tools are always important to take the drudgery away from developers. So I have put together a tool prototype which can be used to transform binaries such as individual class files and complete JARs and WARs to rename uses of the Java EE package names to their new Jakarta EE package names.

The tool is rule driven, which is convenient since the Jakarta EE community still needs to define the actual package renames for Jakarta EE 9. The rules also allow users to control which class files in a JAR/WAR are transformed. Different users may want different rules depending upon their specific needs. And the tool can be used for any package renaming challenge, not just the specific Jakarta EE package renames.

The tool provides an API allowing it to be embedded in a runtime to dynamically transform class files during the class loader definition process. The API also supports transforming JAR files. A CLI is also provided to allow use from the command line. Ultimately, the tool can be packaged as Gradle and Maven plugins to incorporate it in a broader tool chain.
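The post doesn't include code, but as an illustration of the "transform during class loader definition" idea, here is a sketch using the JDK's java.lang.instrument API. RenamingTool.rename is a hypothetical stand-in for the tool's transform call, not its actual API:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class RenamingAgent {

    // registered via the Premain-Class manifest attribute of an agent JAR
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // RenamingTool.rename is a hypothetical stand-in for the
                // tool's transform API; returning null leaves a class as-is
                return RenamingTool.rename(classfileBuffer);
            }
        });
    }
}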

Given that the tool is a prototype, and there is much work to be done in the Jakarta EE community regarding the package renames, I have started a list of TODOs in the project's issues for known work items.

Please try out the tool and let me know what you think. I am hoping that tooling such as this will ease the community cost of dealing with the package renames in Jakarta EE.

PS. Package renaming in source code is also something the community will need to deal with. But most IDEs are pretty good at this sort of thing, so I think there is probably sufficient tooling in existence for handling the package renames in source code.

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

Custom Map Updates without Null Checks: Map#merge

by admin at October 17, 2019 05:07 AM

The Java 1.8+ Map#merge method is useful for upserts into a Map with custom behavior. Null checks are not required:

import java.util.HashMap;
import java.util.Map;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import org.junit.Test;

public class MapMergeTest {

    @Test
    public void mergeInMap() {
        Map<String, Integer> map = new HashMap<>();
        int initial = 1;
	
        int result = map.merge("key", initial, (oldValue, newValue) -> oldValue + newValue);
        assertThat(result, is(1));

        int update = 42;
        int expected = initial + update;
        result = map.merge("key", update, (oldValue, newValue) -> oldValue + newValue);
        assertThat(result, is(expected));
    }
}

Map#merge is equivalent to:


V oldValue = map.get(key);
V newValue = (oldValue == null) ? value
        : remappingFunction.apply(oldValue, value);
if (newValue == null) {
    map.remove(key);
} else {
    map.put(key, newValue);
}
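For example, Map#merge makes counting occurrences a one-liner; a short illustrative snippet (not from the original post):

Map<String, Integer> counts = new HashMap<>();
counts.merge("duke", 1, Integer::sum);                // key absent: puts 1
counts.merge("duke", 1, Integer::sum);                // key present: 1 + 1 = 2
counts.merge("duke", 1, (oldValue, ignored) -> null); // null result: removes the key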

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at October 17, 2019 05:07 AM

Relationship between Payara Platform, MicroProfile and Java EE/Jakarta EE

by Rudy De Busscher at October 15, 2019 11:27 AM

Maybe you've already heard about Eclipse MicroProfile, or maybe you don't know what benefits it offers you in your current project. Perhaps you don't see the relationship with Java EE/Jakarta EE - or how you can use it with Payara Server or Payara Micro.

In this blog, I'll give you a short overview of all of the above questions so that you can start using MicroProfile in your next project on the Payara Platform.


by Rudy De Busscher at October 15, 2019 11:27 AM

Installing and Deploying Swagger UI

by admin at October 15, 2019 03:57 AM

The following dependency:


<dependency>
    <groupId>org.microprofile-ext.openapi-ext</groupId>
    <artifactId>swagger-ui</artifactId>
    <version>1.0.2</version>
</dependency>

installs the Swagger UI "website" and makes it available directly from a configurable URI (in our case: http://localhost:8080/openapiui/resources/openapi-ui/).

The example was deployed with wad.sh to payara.fish. Warning: most application servers / runtimes already ship with Swagger UI.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at October 15, 2019 03:57 AM

Jakarta EE and the great naming debate

October 15, 2019 03:48 AM

At JavaOne 2017, Oracle announced that they would start the difficult process of moving Java EE to the Eclipse Foundation. This has been a massive effort on behalf of Eclipse, Oracle and many others, and we are getting close to having a specification process and a Jakarta EE 8 platform. We are looking forward to being able to certify Open Liberty to it soon. While that is excellent news, on Friday last week Mike Milinkovich from Eclipse informed the community that Eclipse and Oracle could not come to an agreement that would allow Jakarta EE to evolve using the existing javax package prefix. This has caused a flurry of discussion on Twitter, ranging from panic and confusion to, in some cases, outright FUD.

To say that everyone is disappointed with this outcome would be a massive understatement of how people feel. Yes this is disappointing, but this is not the end of the world. First of all, despite what some people are implying, Java EE applications are not suddenly broken today, when they were working a week ago. Similarly, your Spring apps are not going to be broken (yes, the Spring Framework has 2545 Java EE imports, let alone all the upstream dependencies). It just means that we will have a constraint on how Jakarta EE evolves to add new function.

We have a lot of experience with managing migration in the Open Liberty team. We have a zero migration promise for Open Liberty, which is why we are the only application server that supports Java EE 7 and 8 in the same release stream. This means that if you are on Open Liberty, your existing applications are totally shielded from any class name changes in Jakarta EE 9. We do this through our versioned features, which provide the exact API and runtime required by the specification as it was originally defined. We are optimistic about the future because we have been doing this with Liberty since it was created in 2012.

The question for the community is "how should we move forward from here?" It seems that many in the Jakarta EE spec group at Eclipse are leaning towards quickly renaming everything in a Jakarta EE 9 release. There are advantages and disadvantages to this approach, but it appears favoured by David Blevins, Ian Robinson, Kevin Sutter, and Steve Millidge. While I can see the value of just doing a rename now (after all, it is better to pull a band-aid off fast than slow), I think it would be a mistake if at the same time we do not invest in making the migration from Java EE package names to Jakarta EE package names cost nothing. This is something we in Liberty call "zero migration".

Jakarta EE will only succeed if developers have a seamless transition from Java EE to Jakarta EE. I think there are four aspects to pulling off zero migration with a rename:

  1. Existing application binaries need to continue to work without change.

  2. Existing application source needs to continue to work without change.

  3. Tools must be provided to quickly and easily change the import statements for Java source.

  4. Applications that are making use of the new APIs must be able to call binaries that have not been updated.

The first two are trivial to do: Java class files have a constant pool that contains all the referenced class and method names. Updating the constant pool when the class is loaded will be technically easy, cheap at runtime, and safe. We are literally talking about changing javax.servlet to jakarta.servlet, with no method changes.
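As an illustration (not any product's actual code), a constant-pool rewrite of this kind can be sketched with the ASM library's Remapper; the class name here is made up:

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.commons.ClassRemapper;
import org.objectweb.asm.commons.Remapper;

public class JakartaRenamer {

    public static byte[] rename(byte[] originalClass) {
        ClassReader reader = new ClassReader(originalClass);
        ClassWriter writer = new ClassWriter(0);
        // rewrites internal names in the constant pool; method bodies are
        // untouched apart from the type references they contain
        Remapper remapper = new Remapper() {
            @Override
            public String map(String internalName) {
                return internalName.startsWith("javax/servlet/")
                        ? "jakarta/servlet/" + internalName.substring("javax/servlet/".length())
                        : internalName;
            }
        };
        reader.accept(new ClassRemapper(writer, remapper), 0);
        return writer.toByteArray();
    }
}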

The third one is also relatively simple; as long as class names do not change, switching import statements from javax.servlet.* to jakarta.servlet.* is easy to automate.

The last one is the most difficult because you have existing binaries using the javax.servlet package and new source using the jakarta.servlet package. Normally this would produce a compilation error because you cannot pass a jakarta.servlet class somewhere that takes a javax.servlet class. In theory we could reuse the approach used to support existing apps and apply it at compile time to the downstream dependencies, but this will depend on the build tools being able to support this behaviour. You could add something to the Maven build to run prior to compilation to make sure this works, but that might be too much work for some users to contemplate, and perhaps is not close enough to zero migration.

I think if the Jakarta EE community pulls together to deliver this kind of zero migration approach prior to making any break, the future will be bright for Jakarta EE. The discussion has already started on the jakarta-platform-dev mailing list, kicked off by David Blevins. If you are not a member you can join now on eclipse.org. I am also happy to hear your thoughts via Twitter.


October 15, 2019 03:48 AM

Deploy Jakarta EE application to Kubernetes

by Hayri Cicek at October 14, 2019 06:42 AM

In this tutorial, I will show you how to deploy Jakarta EE application to Kubernetes.
Kubernetes doesn't run containers directly; instead it uses something called a pod, which is a group of containers deployed together on the same host.
To follow this tutorial, you will need Docker, Kubernetes, Maven and of course Java installed on your machine.
I will use my custom maven archetype to generate the Jakarta EE application.


by Hayri Cicek at October 14, 2019 06:42 AM

Threads, Transactions, EntityManager, Fluid Logic,Quarkus, AMQP and Jakarta EE -- the 67th airhacks.tv

by admin at October 14, 2019 03:01 AM

The 67th airhacks.tv episode covering:

"Jakarta EE without Docker and FatJARs, Fulltext Search, Connection Pools, Password Management, Fluid Logic, Thread-Safety and EntityManager, Quarkus productivity, AMQP, Jakarta EE"

...is available:

Any questions left? Ask now: https://gist.github.com/AdamBien/2735e9c8845fe1eba40720281d9c2c09 and get the answers at the next airhacks.tv.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at October 14, 2019 03:01 AM

Change Data Capture, Debezium, Streaming and Kafka--airhacks.fm Podcast

by admin at October 13, 2019 11:27 AM

Subscribe to the airhacks.fm podcast via: Spotify | iTunes | RSS

The #58 airhacks.fm episode with Gunnar Morling (@gunnarmorling) about:

Change Data Capture with Debezium, Streaming, Kafka and Use Cases
is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at October 13, 2019 11:27 AM

Jakarta EE and MicroProfile applications with React and PostgreSQL

by rieckpil at October 12, 2019 02:37 PM

As now all major application server vendors are Jakarta EE 8 certified, we are ready to start a new era of enterprise Java. Most of the examples on the internet lack a full-stack approach and just focus on the backend. With this post, I want to share a simple full-stack example following best practices with Jakarta EE, MicroProfile,  React, and PostgreSQL. This includes a Flyway setup to migrate the database schema and TypeScript for the frontend application. At the end of this blog post, you’ll be able to connect your React application to a Jakarta EE & MicroProfile backend to display data from PostgreSQL.

Setup the backend for Jakarta EE and MicroProfile

The backend uses Java 11 and Maven to build the project. Next to the Jakarta EE and MicroProfile dependencies, I'm adding the PostgreSQL driver and Flyway for the schema migration:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.rieckpil.blog</groupId>
    <artifactId>guide-to-jakarta-ee-with-react-and-postgresql</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <!-- configure the version numbers -->
    </properties>

    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>${jakarta.jakartaee-api.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>${microprofile.version}</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>${postgresql.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.flywaydb</groupId>
            <artifactId>flyway-core</artifactId>
            <version>${flyway-core.version}</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>guide-to-jakarta-ee-with-react-and-postgresql</finalName>
    </build>
</project>

The backend exposes one REST endpoint to retrieve all available books inside the database. To show you the MicroProfile integration inside a Jakarta EE application, I’m injecting a config property to limit the number of books:

@Path("books")
public class BookResource {

    @Inject
    @ConfigProperty(name = "book_list_size", defaultValue = "10")
    private Integer bookListSize;

    @PersistenceContext
    private EntityManager entityManager;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getAllBooks() {

        List<Book> allBooks = this.entityManager
                .createQuery("SELECT b FROM Book b", Book.class)
                .setMaxResults(bookListSize)
                .getResultList();

        return Response.ok(allBooks).build();
    }
}

The JPA Book entity looks like the following:

@Entity
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    private String title;
    private String author;
    private String excerpt;
    private String isbn;
    private String genre;
    private LocalDateTime published;
 
    // ... getters & setters
}

Furthermore, as the frontend application will, later on, run on a different port, we have to add a CORS filter for the JAX-RS resource:

@Provider
public class CorsFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders()
          .add("Access-Control-Allow-Origin", "*");
        responseContext.getHeaders()
          .add("Access-Control-Allow-Credentials", "true");
        responseContext.getHeaders()
          .add("Access-Control-Allow-Headers", "origin, content-type, accept, authorization");
        responseContext.getHeaders()
          .add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS, HEAD");
    }
}

Prepare the PostgreSQL database schema with Flyway

Flyway is a Java library to version your database schema and evolve it over time. For the schema migration I’m using a singleton EJB with the @Startup annotation to make sure the schema is updated only once:

@Startup
@Singleton
@TransactionManagement(TransactionManagementType.BEAN)
public class FlywayUpdater {

    @Resource(lookup = "jdbc/postgresql")
    private DataSource dataSource;

    @PostConstruct
    public void initFlyway() {
        System.out.println("Starting to migrate the database schema with Flyway");
        Flyway flyway = Flyway.configure().dataSource(dataSource).load();
        flyway.migrate();
        System.out.println("Successfully applied latest schema changes");
    }
}

It’s important to set the transaction management for this class to TransactionManagementType.Bean. This will delegate the transaction handling to the bean and not the container. Besides the normal Jakarta EE transaction management, where the application server takes care of committing and rollback, Flyway will do it itself.

With this setup, Flyway will evaluate the schema version on every application startup. New schema changes are then applied to PostgreSQL if required. There are further ways to migrate the database schema with Flyway using a CLI or Maven Plugin.

For this simple example, I'm using two database scripts: one to create the table for the Book entity:

CREATE TABLE book (
    id BIGINT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    excerpt TEXT,
    author VARCHAR(255) NOT NULL,
    isbn VARCHAR (20) NOT NULL,
    genre VARCHAR(255),
    published TIMESTAMP
);

… and another script to populate some books:

INSERT INTO book VALUES (1, 'Jakarta EE 8', 'All you need to know about Jakarta EE 8', 'Duke', '...', 'Java', '...');
INSERT INTO book VALUES (2, 'React 16', 'Effective Frontend Development with React', 'Duke', '...', 'React', '...');
INSERT INTO book VALUES (3, 'MicroProfile 3', 'All you need to know about Jakarta EE 8', 'Duke', '...', 'Java', '...');
INSERT INTO book VALUES (4, 'Jakarta EE 9', 'All you need to know about Jakarta EE 8', 'Duke', '...', 'Java', null);
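Assuming Flyway's default naming convention, these scripts could live at src/main/resources/db/migration/V1__create_book_table.sql and V2__insert_books.sql (the file names here are illustrative); Flyway uses the V<version>__<description>.sql pattern to order and track applied migrations.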

Important note: Please make sure to add an empty flyway.location file to the folder of your migration scripts (in this example /src/main/resources/db/migration) to use Flyway with Open Liberty.

Prepare Open Liberty for PostgreSQL

The Open Liberty application server recently announced (since 19.0.0.7) its first-class support for PostgreSQL. With this, the configuration of the JDBC data source requires a little bit less XML code.

We still have to provide the JDBC driver for PostgreSQL and link to it in the server.xml file. As I’m using Docker for this example, I’m extending the default Open Liberty image to add the custom server.xml and the JDBC driver. You can find the latest JDBC driver for PostgreSQL on Maven Central.

FROM open-liberty:kernel-java11
COPY --chown=1001:0 postgresql-42.2.8.jar /opt/ol/wlp/lib/
COPY --chown=1001:0 target/guide-to-jakarta-ee-with-react-and-postgresql.war /config/dropins/
COPY --chown=1001:0 server.xml /config

Next, we configure Open Liberty for MicroProfile 3.0 & Jakarta EE 8 (the javaee-8.0 feature works for Jakarta EE as well) and our PostgreSQL data source:

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>javaee-8.0</feature>
        <feature>microProfile-3.0</feature>
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>

    <quickStartSecurity userName="duke" userPassword="dukeduke"/>

    <dataSource id="DefaultDataSource" jndiName="jdbc/postgresql">
        <jdbcDriver libraryRef="postgresql-library"/>
        <properties.postgresql serverName="book-store"
                               portNumber="5432"
                               databaseName="postgres"
                               user="postgres"
                               password="postgres"/>
    </dataSource>

    <library id="postgresql-library">
        <fileset dir="/opt/ol/wlp/lib"/>
    </library>
</server>

Make sure to add the DefaultDataSource as an id for the dataSource configuration, so JPA will take it without any further configuration.
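For reference, a minimal persistence.xml relying on that default could look like the following sketch (the persistence unit name is an assumption; the original project may differ):

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.2">
    <persistence-unit name="book-store-pu">
        <!-- no <jta-data-source> element: the container falls back to
             java:comp/DefaultDataSource, which Liberty resolves to the
             dataSource with id="DefaultDataSource" from server.xml -->
    </persistence-unit>
</persistence>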

PS: If you are looking for a reference guide to set up the JDBC data source on a different application server, have a look at this cheat sheet.

Create the React application with TypeScript

For this example, React and TypeScript are used to create the frontend application. To bootstrap a new React application with TypeScript you can use create-react-app as follows:

npx create-react-app my-app --typescript

This creates a new React project with everything you need to start. In addition, I’ve added semantic-ui-react to get some pre-built components:

npm install semantic-ui-react
npm install semantic-ui-css

The TypeScript types are already included in the packages above and you don’t have to install anything else.

Next, we can start by creating the React components for this application. To reduce complexity, I’ll just use the already provided App component and a BookTable component. The App is going to fetch the data from our Jakarta EE backend and pass it to the BookTable component to render the data inside a table.

Both components are functional components and the data is fetched with a React Effect Hook and the Fetch browser API:

const App: React.FC = () => {

    const [data, setData] = useState<Array<Book> | Error>();
    useEffect(() => {
        fetch('http://localhost:9080/resources/books')
            .then(response => response.json() as Promise<Book[]>)
            .then(data => setData(data))
            .catch(error => setData(new Error(error.statusText)))
    }, []);

    let content;

    if (!data) {
        content = <Message>Loading</Message>;
    } else if (data instanceof Error) {
        content = <Message negative>An error occurred while fetching the data</Message>;
    } else {
        content = <BookTable books={data}></BookTable>;
    }

    return (
        <Container>
            <Header as='h2'>Available Books</Header>
            {content}
        </Container>
    );
}

export default App;

Inside the BookTable we can use the Table component from semantic-ui-react to render the result:

export const BookTable: React.FC<BookTableProps> = ({books}) => {
    return (
        <Table celled>
            <Table.Header>
                <Table.Row>
                    <Table.HeaderCell>ID</Table.HeaderCell>
                    <Table.HeaderCell>Title</Table.HeaderCell>
                    <Table.HeaderCell>Genre</Table.HeaderCell>
                    <Table.HeaderCell>Excerpt</Table.HeaderCell>
                    <Table.HeaderCell>ISBN</Table.HeaderCell>
                    <Table.HeaderCell>Published</Table.HeaderCell>
                </Table.Row>
            </Table.Header>
            <Table.Body>
                {books.map(book =>
                    <Table.Row key={book.id}>
                        <Table.Cell>{book.id}</Table.Cell>
                        <Table.Cell>{book.title}</Table.Cell>
                        <Table.Cell>{book.genre}</Table.Cell>
                        <Table.Cell>{book.excerpt}</Table.Cell>
                        <Table.Cell>{book.isbn}</Table.Cell>
                        <Table.Cell>{book.published}</Table.Cell>
                    </Table.Row>
                )}
            </Table.Body>
        </Table>
    );
};

For the frontend deployment, I’m using an nginx Docker image and copy the static files to it:

FROM nginx:1.17.4
COPY build /usr/share/nginx/html

The final result of Jakarta EE, MicroProfile, React and PostgreSQL

Once everything is up and running, the frontend should look like the following:

[Screenshot: React table filled by the Jakarta EE backend]

You can find the whole source code on GitHub and instructions to deploy the example on your local machine.

If you are looking for further quickstart examples like this, have a look at my Common Enterprise Use Case overview.

Have fun creating Jakarta EE & MicroProfile applications with React and PostgreSQL,

Phil

The post Jakarta EE and MicroProfile applications with React and PostgreSQL appeared first on rieckpil.


by rieckpil at October 12, 2019 02:37 PM

Signing in…

by Ivar Grimstad at October 12, 2019 12:49 PM

It’s been a week since I started as the Jakarta EE Developer Advocate at the Eclipse Foundation!

As you may have noticed, I am involved in almost every committee around Jakarta EE and enterprise Java in general, and my new role has some implications for these engagements. I have listed them below and tried to give a reasonable explanation for each of them.

EE4J PMC

I have been a member of the EE4J PMC since its inception back in 2017, and for practical reasons served as the PMC Lead the entire time. According to the charter, we are supposed to rotate the leadership among the non-foundation staff members of the PMC. In order to minimize overhead, the PMC decided to stick with me as the lead until otherwise decided.

If the PMC wants me to continue in the PMC Lead position, the “non-Foundation staff” phrase will have to be removed from the charter. This has been put on the agenda for the PMC meeting on November 5th, so then we will know…

Jakarta EE Working Group Steering Committee

I have withdrawn from my elected Committer Representative seat in the Steering Group as this seat should not be held by anyone from the Eclipse Foundation. This position is currently up for election (hint hint: if you want to be involved, nominate yourself…).

Jakarta EE Working Group Specification Committee

The position I have in the Specification Committee is the PMC Representative. It is up to the PMC whether I should continue or withdraw. This will also be handled at the next PMC meeting on November 5th.

Java Community Process (JCP) Executive Committee

I have withdrawn from my Associate Seat at the JCP since the Eclipse Foundation is already on the committee. However, I will still be lurking around here as I will be the alternate representative for the Eclipse Foundation.

The JCP Elections have started. Remember to cast your vote!

by Ivar Grimstad at October 12, 2019 12:49 PM

Back from Oracle Code One 2019

by Jean-François James at October 10, 2019 03:01 PM

From September 15 to 19, I had the chance to participate in the event Oracle Code One San Francisco as a speaker. Here is my feedback. What is Oracle Code One? Oracle Code One San Francisco is one of the leading annual international meeting for developers. It is organized in parallel to another major event, […]

by Jean-François James at October 10, 2019 03:01 PM

Jakarta EE Community Update October 2019

by Tanja Obradovic at October 09, 2019 05:07 PM

Welcome to the latest Jakarta EE community update. In this edition, we highlight key opportunities to participate in upcoming community events, explore the Jakarta EE 8 release, and learn more about the very bright future of cloud native Java.

EclipseCon Europe 2019: Register for Community Day

Community Day, which will be held Monday, October 21 at EclipseCon Europe, is a must for everyone who’s interested in our cloud native projects. The day is dedicated to community-organized meetings to discuss projects and technologies, provide workshops and coding sessions, hold working group gatherings, and more. Lunch and breaks are included, and the day ends with a casual reception.

There’s already a gathering planned for anyone interested in Jakarta EE, MicroProfile, Eclipse Jemo, Eclipse Che, Eclipse Codewind, and other cloud-related topics. To see the agenda so far, and add your ideas for discussion topics, check the EclipseCon Europe Community Day wiki.

 And, don’t forget to attend our new event this year — Community Evening on Tuesday, October 22. This is your opportunity to participate in more casual, interactive events and enjoy a beverage with your community colleagues. A bar offering beer, wine, water, and juice will be available. 

To register for EclipseCon Europe and for Community Day, click here.

__________________________

 JakartaOne Livestream Wrap-Up

The first-ever JakartaOne Livestream event was a huge success with more than 1,400 registered attendees. The online conference, held September 10, marked the release of the first vendor-neutral, Java EE 8-compatible release of Jakarta EE following the new Jakarta EE Specification Process.

We first knew this event would be bigger than expected when several well-respected leaders in the Java EE community graciously and enthusiastically agreed to join the JakartaOne Livestream Program Committee. Led by Committee Chair, Reza Rahman, committee members Adam Bien, Ivar Grimstad, Arun Gupta, Josh Juneau, along with Tanja Obradovic from the Eclipse Foundation, put in a huge effort to plan the conference.

One of the committee's toughest jobs was selecting 16 conference papers for presentation from among the more than 50 high-quality submissions. Participants enjoyed a great mixture of introductory and overview sessions, including sessions on particular specifications, cloud native topics, keynotes from Mike Milinkovich and James Gosling, as well as industry keynotes from Jakarta EE Working Group Steering Committee members IBM, Fujitsu, Oracle, Payara, Red Hat, and Tomitribe. Demos, panel discussions, and Q&A sessions rounded out the 18 hours of program material that were delivered.

To see a list of the topics presented and access the session recordings, visit jakartaone.org.    

__________________________

Jakarta EE 8 Release Highlights

The Jakarta EE 8 release is now available with 43 projects, more than 60 million lines of code, and full compatibility with Java EE 8.

 With the delivery of the Jakarta EE 8 Platform, the entire ecosystem — from software vendors to developers and enterprises — has all of the pieces needed to shape the future of cloud native Java and meet the modern enterprise’s need for cloud-based applications that resolve key business challenges.

To ensure that cloud native Java applications are portable, secure, stable, and resilient, product compatibility certifications are underway. We already have three products that are certified as compatible with the full Jakarta EE 8 platform:

  • Eclipse GlassFish application server, version 5.1
  • IBM Open Liberty server runtime, version 19.0
  • Red Hat WildFly application server, version 17.0

Eclipse GlassFish and Open Liberty are also certified as Jakarta EE 8 web profile-compatible products.

Almost three dozen Jakarta EE specifications are also available. Jakarta projects are listed here and are included in our main project repository. It’s time for everyone to get involved in the cloud native Java community and engage in turning the huge potential for cloud native Java into reality.

__________________________

Our Free Cloud Native Java E-Book Is Now Available

To mark the significance of the Jakarta EE 8 release, we also released a free e-book, Fulfilling the Vision for Open Source, Cloud Native Java, on September 10. The e-book includes insights from some of the leading voices in enterprise Java and the Jakarta EE Working Group. It explores:

  • Why the world needs open source, cloud native Java
  • The common vision for cloud native Java that has emerged
  • The many benefits of cloud native Java for software vendors, developers, and enterprises
  • Why it's time for all Java stakeholders to get involved in the Jakarta EE Working Group
  • Priorities for evolving cloud native Java in the short- and long-term
  • The vital role of the Eclipse Foundation in supporting cloud native Java evolution

 Download the e-book today.

__________________________

A Look Back at September Events

September was a busy month for Jakarta EE and cloud native Java events as we participated in the JakartaOne Livestream event, described earlier, as well as Oracle Code One and HeapCon. Check out the blogs about Oracle Code One by Payara and Tomitribe.

 A quick note about Oracle Code One: Everyone at the Eclipse Foundation was extremely proud when our executive director, Mike Milinkovich, accepted the Duke’s Choice Award on behalf of the Jakarta EE community at the conference.

__________________________

Stay Connected With the Jakarta EE Community

The Jakarta EE community promises to be very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan, which includes:

  • Social media: Twitter, Facebook, LinkedIn Group
  • Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org
  • Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on "how are you involved with Jakarta EE"
  • Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences, such as EclipseCon Europe

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java. To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.


by Tanja Obradovic at October 09, 2019 05:07 PM

Payara Server is Jakarta EE 8 Compatible!

by Patrik Duditš at October 09, 2019 11:00 AM

We are very happy to report that we've successfully passed all of the nearly 50,000 tests of the Jakarta EE 8 TCK, and Payara Server 5.193.1 is Jakarta EE 8 Full Profile compatible!


by Patrik Duditš at October 09, 2019 11:00 AM

#WHATIS?: Contexts and Dependency Injection (CDI)

by rieckpil at October 07, 2019 09:07 AM

Dependency Injection (DI) is one of the central techniques in today's applications and targets separation of concerns. Not only does this make testing easier, but you are also not in charge of knowing how to construct the instance of a requested class. With Java/Jakarta EE we have a specification which (besides other topics) covers this: Contexts and Dependency Injection (CDI for short). CDI is also part of the Eclipse MicroProfile project, and many other Java/Jakarta EE specifications already use it internally or plan to use it.

Learn more about the Contexts and Dependency Injection (CDI) specification, its annotations and how to use it in this blog post. Please note that I won’t cover every aspect of this spec and rather concentrate on the most important parts. For more in-depth knowledge, have a look at the following book.

Specification profile: Contexts and Dependency Injection (CDI)

  • Current version: 2.0 in Java/Jakarta EE 8 and 2.0 in MicroProfile 3.0
  • GitHub repository
  • Specification homepage
  • Basic use case: provide a typesafe dependency injection mechanism

Basic dependency injection with CDI

The main use case for CDI is to provide a typesafe dependency injection mechanism. To make a Java class injectable and managed by the CDI container, you just need a default no-args constructor or a constructor with an @Inject annotation.

If you use no further annotations, you have to tell CDI to scan your project for all available beans. You can achieve this with a beans.xml file inside src/main/webapp/WEB-INF using the bean-discovery-mode:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
</beans>

Using this setup, the following BookService can inject an instance of the IsbnValidator class:

public class IsbnValidator {
    public boolean validateIsbn(String isbn) {
        return isbn.replace("-", "").length() < 13;
    }
}

public class BookService {

    @Inject
    private IsbnValidator isbnValidator;

    // work with the instance
}

You can inject beans via field, setter, or constructor injection, or request a bean manually from the CDI runtime:

public void storeBook(String bookName, String isbn) {
   if (CDI.current().select(IsbnValidator.class).get().validateIsbn(isbn)) {
        logger.info("Store book with name: " + bookName);
   }
}
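The constructor-injection variant of the BookService above could look like this minimal sketch:

public class BookService {

    private final IsbnValidator isbnValidator;

    @Inject
    public BookService(IsbnValidator isbnValidator) {
        // CDI resolves and passes the IsbnValidator instance
        this.isbnValidator = isbnValidator;
    }

    // work with the instance
}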

Once CDI manages a bean, its instances have a well-defined lifecycle and are bound to a scope. You can hook into the lifecycle of a bean with e.g. @PostConstruct or @PreDestroy. The default scope, if you don't specify any (like in the example above), is the pseudo-scope @Dependent. With this scope, an instance of your bean is bound to the scope of the bean it gets injected into and won't be shared.

However, you can specify the scope of your bean using the available scopes in CDI (a short example follows the list):

  • @RequestScoped – bound to an HTTP request
  • @SessionScoped – bound to the HTTP session of a user
  • @ApplicationScoped – like a Singleton, one instance per application
  • @ConversationScoped – bound to a conversation context e.g. wizard-like web app
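As an example of the above, an application-scoped bean with lifecycle callbacks (the class name is made up):

@ApplicationScoped
public class BookCache {

    private Map<String, String> cache;

    @PostConstruct
    public void init() {
        // called once, after the single instance is created
        this.cache = new ConcurrentHashMap<>();
    }

    @PreDestroy
    public void cleanUp() {
        // called before the instance is destroyed with the application context
        this.cache.clear();
    }
}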

If you need a more dynamic approach for creating a bean that is managed by CDI, you can use the @Produces annotation. This gives you access to the InjectionPoint, which contains metadata about the class that requested an instance:

public class LoggerProducer {

    @Produces
    public Logger produceLogger(InjectionPoint injectionPoint) {
        return Logger.getLogger(injectionPoint.getMember().getDeclaringClass().getName());
    }
}
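Any bean can then inject the produced type directly (the class name here is made up):

public class BookArchiver {

    @Inject
    private Logger logger; // supplied by LoggerProducer#produceLogger
}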

Using qualifiers to specify beans

In the previous chapter, we looked at the simplest scenario, where we just have one possible bean to inject. Imagine the following scenario, where we have multiple implementations of an interface:

public interface BookDistributor {
    void distributeBook(String bookName);
}

public class BookPlaneDistributor implements BookDistributor {

    @Override
    public void distributeBook(String bookName) {
        System.out.println("Distributing book by plane");
    }
}

public class BookShipDistributor implements BookDistributor {

    @Override
    public void distributeBook(String bookName) {
        System.out.println("Distributing book by ship");
    }
}

If we now request a bean of the type BookDistributor, which instance do we get? The BookPlaneDistributor or an instance of BookShipDistributor?

public class BookStorage {

    @Inject // this will fail
    private BookDistributor bookDistributor;

}

… well, we get nothing but an exception, as the CDI runtime doesn’t know which implementation to inject:

WELD-001409: Ambiguous dependencies for type BookDistributor with qualifiers @Default
  at injection point [BackedAnnotatedField] @Inject private de.rieckpil.blog.qualifiers.BookStorage.bookDistributors
  at de.rieckpil.blog.qualifiers.BookStorage.bookDistributors(BookStorage.java:0)
  Possible dependencies: 
  - Managed Bean [class de.rieckpil.blog.qualifiers.BookShipDistributor] with qualifiers [@Any @Default],
  - Managed Bean [class de.rieckpil.blog.qualifiers.BookPlaneDistributor] with qualifiers [@Any @Default]

The stack trace contains an important hint on how to fix such a scenario. If we don't qualify a bean further, it has the default qualifiers @Any and @Default. In the scenario above, the BookStorage class requests a BookDistributor and does not specify anything else, meaning it will get the @Default bean. As there are two beans with this default behavior, dependency injection is not possible here without further adjustments.

To fix the error above, we have to introduce qualifiers and specify which concrete bean we want. A qualifier is a Java annotation that is itself annotated with @Qualifier:

@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface PlaneDistributor {
}

Once we have this annotation, we can use it both for the implementation and at the injection point:

@PlaneDistributor
public class BookPlaneDistributor implements BookDistributor {

    @Override
    public void distributeBook(String bookName) {
        System.out.println("Distributing book by plane");
    }
}

@Inject
@PlaneDistributor
private BookDistributor bookPlaneDistributor;

… and now have a proper injection of our requested bean.

In addition, you can always request all instances matching a Java type using the Instance<T> wrapper class:

public class BookStorage {

    @Inject
    private Instance<BookDistributor> bookDistributors;

    public void distributeBookToCustomer(String bookName) {
        bookDistributors.forEach(b -> b.distributeBook(bookName));
    }
}

Enrich functionality with decorators & interceptors

With CDI we have two mechanisms to enrich/extend the functionality of a class without changing the implementation: Decorators and Interceptors.

Decorators offer a type-safe way to decorate your actual implementation. Given the following example of an Account interface and one implementation:

public interface Account {
    Double getBalance();
    void withdrawMoney(Double amount);
}

public class CustomerAccount implements Account {

    @Override
    public Double getBalance() {
        return 42.0;
    }

    @Override
    public void withdrawMoney(Double amount) {
        System.out.println("Withdraw money from customer: " + amount);
    }
}

We can now write a decorator to make special checks if the amount of money to withdraw meets a threshold:

@Decorator
public abstract class LargeWithdrawDecorator implements Account {

    @Inject
    @Delegate
    private Account account;

    @Override
    public void withdrawMoney(Double amount) {
        if (amount >= 100.0) {
            System.out.println("A large amount of money gets withdrawn!!!");
            // e.g. do further checks
        }
        account.withdrawMoney(amount);
    }
}

With interceptors, we get a more generic approach: instead of sharing the method signature of the intercepted class, an interceptor receives an InvocationContext. This offers more flexibility, as we can reuse our interceptor on multiple classes/methods. A lot of cross-cutting logic in Java/Jakarta EE, like transactions and security, is actually implemented with interceptors. For an example of how to write interceptors, have a look at one of my previous blog posts, or at the sketch below.
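A minimal sketch of an interceptor binding plus an @AroundInvoke interceptor (the names are made up, following the annotation style used above):

@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
public @interface Logged {
}

@Logged
@Interceptor
@Priority(Interceptor.Priority.APPLICATION) // activates the interceptor (see below)
public class LoggingInterceptor {

    @AroundInvoke
    public Object logInvocation(InvocationContext context) throws Exception {
        System.out.println("Entering: " + context.getMethod().getName());
        try {
            return context.proceed(); // invoke the intercepted method
        } finally {
            System.out.println("Leaving: " + context.getMethod().getName());
        }
    }
}

Annotating any bean class or method with @Logged then routes its invocations through logInvocation.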

Both decorators and interceptors are inactive by default. To activate them, you either have to specify them in your beans.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    <decorators>
        <class>de.rieckpil.blog.decorators.LargeWithdrawDecorator</class>
    </decorators>
</beans>

or use the @Priority annotation and specify a priority value:

@Decorator
@Priority(100)
public abstract class LargeWithdrawDecorator implements Account {
}

Decouple components with CDI events

Last but not least, the CDI specification provides a sophisticated event notification model. You can use this to decouple your components and use the Observer pattern to notify all listeners once a new event is available.

The event notification in CDI is available both in a synchronous and asynchronous way. The payload of the event can be any Java class and you can use qualifiers to further specialize an event. Firing an event is as simple as the following:

public class BookRequestPublisher {

    @Inject
    private Event<BookRequest> bookRequestEvent;

    public void publishNewRequest() {
        this.bookRequestEvent.fire(new BookRequest("MicroProfile 3.0", 1));
   }
}

Observing such an event requires the @Observes annotation on the receiver-side:

public class BookRequestListener {
    public void onBookRequest(@Observes BookRequest bookRequest) {
        System.out.println("New book request incoming: " + bookRequest.toString());
    }
}

Using the asynchronous way, you receive a CompletionStage<T> as a result and can add further processing steps or handle errors:

public class BookRequestPublisher {

    @Inject
    private Event<BookRequest> bookRequestEvent;

    public void publishNewRequest() {

        this.bookRequestEvent
                .fireAsync(new BookRequest("MicroProfile 3.0", 1))
                .handle((request, error) -> {
                    if (error == null) {
                        System.out.println("Successfully fired async event");
                        return request;
                    } else {
                        System.out.println("Error occured during async event");
                        return null;
                    }
                })
                .thenAccept(r -> System.out.println(r));
    }

}

Listening to async events requires the @ObservesAsync annotation instead of @Observes:

public void onBookRequestAsync(@ObservesAsync BookRequest bookRequest) {
   System.out.println("New book request incoming async: " + bookRequest.toString());
}

YouTube video for using CDI 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see CDI 2.0 in action:

coming soon

If you are looking for resources to learn more advanced CDI concepts in-depth, have a look at this book.

You can find the source code with further instructions to run this example on GitHub.

Have fun using CDI,

Phil

The post #WHATIS?: Contexts and Dependency Injection (CDI) appeared first on rieckpil.


by rieckpil at October 07, 2019 09:07 AM

Jakarta EE without Docker, Fulltext Search, Connection Pools, Passwords, Fluid Logic, Thread-Safety and EntityManager--or 67th airhacks.tv

by admin at October 07, 2019 07:23 AM

Topics for 67th airhacks.tv episode (https://gist.github.com/AdamBien/1a227df3f1701e4a12a751d3f7d1633e):
  1. Quarkus JSF
  2. The best approach to deploy Jakarta EE applications without Docker / containers
  3. Fulltext search with JPA and EclipseLink
  4. Integration testing and databases
  5. Configuring the number of DB connections in a connection pool
  6. Quarkus datasource configuration
  7. Thoughts on "Fluid Logic" pattern implementation
  8. EntityManager: transactions vs. thread-safeness
  9. Integration tests with JPA and auto-registration
  10. Jakarta EE and Java EE specifications
  11. JAX-RS and the added value of @Stateless
  12. The easy setup of Jakarta EE

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at October 07, 2019 07:23 AM

Jason's Binding and Fast, Greek Birds--airhacks.fm Podcast

by admin at October 06, 2019 11:40 AM

Subscribe to the airhacks.fm podcast via: Spotify | iTunes | RSS

The #56 airhacks.fm episode with Dmitry Kornilov (@m0mus) about:

JPA-RS, EclipseLink, JSON-B and the road to helidon.io
is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at October 06, 2019 11:40 AM

#REVIEW: Pro CDI 2 in Java EE 8 (book)

by rieckpil at October 05, 2019 10:39 AM

When it comes to creating an application with Java (now Jakarta) EE, the first specification you usually get in touch with is CDI (Contexts and Dependency Injection). Starting with this specification, you'll probably know it for its dependency injection (DI) capabilities: @Inject. Yet besides DI, the CDI spec offers a lot more: events, decorators, interceptors, etc. Well-known books about Java EE (find some of them here) sometimes only cover basic CDI functionality and rarely dedicate more than one chapter to it. Fortunately, Jan Beernink and Arjan Tijms recently published a book that covers the CDI spec in-depth: Pro CDI 2 in Java EE 8 – An In-Depth Guide to Context and Dependency Injection.

The whole book consists of 241 pages and is available both as a digital and a print version (e.g. on Amazon). I got the PDF version from Arjan and was able to read it in four days while trying out some examples in the IDE in parallel. My CDI knowledge before reading the book was rather basic: I was able to use the concepts, but never touched advanced topics and still had some knowledge gaps to connect all the dots. This changed a lot after reading the book!

Let’s start.

The history of CDI

The book starts with a look back at important people and events in the history of the CDI specification. This was extremely valuable for me, as I wasn't aware of how everything started and who drove the success of CDI (btw. I am 24 and started with Java EE 7). You'll learn about how Rod Johnson, Gavin King, and Bob Lee influenced CDI. Furthermore, this chapter covers the evolution of the major Java EE specifications that had their own dependency injection mechanisms (e.g. JSF, JAX-RS…) alongside CDI, and you'll realize that CDI is a rather young spec compared to EJB or JPA.

Next, the authors describe the influence of past proprietary DI frameworks on CDI and how the AtInject JSR is related to CDI. This chapter is a nice recap of how the specification evolved into the more or less platform-wide DI framework for Java EE, as more and more specifications plan to remove their own injection mechanisms.

CDI beans, scopes and qualifiers

In the following chapters of the Pro CDI 2 book, you get a detailed introduction to what the component model and a CDI bean actually are. This includes a lot of the basic knowledge you need to understand CDI fully. You'll learn about the different ways to identify and retrieve a CDI bean. Next, the authors present all the different scopes and when to use them. In addition, they provide two good examples of writing your own CDI scope and integrating it with a CDI extension.

All of this was really important for me to connect the missing dots, as I always mixed up what @Dependent, @Any, @Qualifier, @Named etc. actually do.

CDI events in the Pro CDI 2 book

Another important aspect of the CDI specification is events. With CDI it's really easy to emit and observe events to decouple parts of your application. The authors dedicate a whole chapter to this topic and explain it in detail. You'll also get to know how to apply CDI qualifiers to your events to distinguish them. Both synchronous and asynchronous events are covered. For the asynchronous part, you'll also learn how to efficiently use the returned CompletionStage<T> to further process results or handle errors.

Decorators and interceptors with CDI 2

Next, decorators and interceptors are covered in the Pro CDI 2 book. Even though interceptors originate from their own spec, they integrate tightly with CDI. For both of these concepts, the authors provide good examples of when to use them and what the main differences are. Aside from creating them, you'll also learn the ways to activate them and how to put them in order.

For an introduction to CDI interceptors, you can have a look at one of my previous blog posts.

Dynamic beans and CDI 2 in Java SE/FX

One of the major advanced CDI topics is dynamic beans. The book has its own chapter covering this, and you get examples both for implementing the Bean<T> interface and for using a CDI extension alongside a configurator. The authors also emphasize that dynamic CDI beans are rarely a requirement for typical business applications, but they provide good examples of where they could make sense.

The final chapter of the book gives you an introduction to bootstrapping CDI in a Java SE and also a JavaFX (replacing JavaFX injection) environment. CDI implementations (e.g. Weld or Apache OpenWebBeans) have long offered APIs to bootstrap the CDI container in a plain Java SE app, but this manual bootstrap API was not standardized until CDI 2. You'll learn how to include it in your project, scan for beans, and also the differences compared to a Java EE environment. This section also covers how to use this approach for unit testing with JUnit 5.

Summary

To summarize, I can definitely recommend buying and reading this book. The authors did great work producing such an in-depth guide. After reading it you'll know everything needed to use CDI effectively. They balance theoretical, practical, and historical explanations in a remarkable way. Their examples are built around real-world business problems rather than artificial ones (like foo & bar). Not only is CDI one of the most central specifications in Java (now Jakarta) EE, but using it correctly also makes your development life easier.

Feel free to add a comment if you plan to read it or have something to add.

PS: If you are looking for a reference for basic CDI knowledge, take a look at this introduction to CDI.

Have fun reading this excellent book about CDI 2,

Phil

The post #REVIEW: Pro CDI 2 in Java EE 8 (book) appeared first on rieckpil.


by rieckpil at October 05, 2019 10:39 AM

Configuring Jersey Application

by Jan at October 04, 2019 10:57 PM

It is a common issue, I am asked about every now and then, how to pass the user data to a user application? The user needs to change the code behavior depending on the configuration data. This is where the … Continue reading

by Jan at October 04, 2019 10:57 PM

The Payara Monthly Catch for September 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at October 04, 2019 10:00 AM

This month Oracle Code One dominated the lion's share of everyone's attention with talks and announcements, so you will notice I have included more than my usual number of videos, featuring some of the talks and panels from the event. Shortly afterwards came the first JakartaOne virtual conference, which finally announced Jakarta EE 8! That explains the large rise in Jakarta EE and MicroProfile content.

Below you will find a curated list of some of the most interesting news, articles and videos from this month. Can't wait until the end of the month? Then visit our Twitter page, where we post these articles as we find them!


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at October 04, 2019 10:00 AM

JAX-RS Client ThreadPool leak

October 03, 2019 09:00 PM

I recently got a resource (ThreadPool/Thread) leak with the JAX-RS Client implementation on WildFly 10.0.1 (RESTEasy).
[Screenshot: thread dump of the leaking JAX-RS client thread pools]

From the dump above we can see that the pool number is extremely high while the thread number is always 1. That means some code uses Executors.new*, which returns a java.util.concurrent.ThreadPoolExecutor using the DefaultThreadFactory.

Actually, in this situation that is ALL we can see from thread and heap dumps when debugging a leak like the one above. If the classes containing these executors were garbage collected, the executors become orphaned (but are still alive and uncollectable), making it difficult or impossible to detect from a heap dump where the executors came from.

Lesson #1 is: when calling Executors.new*, it would be nice to think a little about the people who will support your code and provide non-default thread names with a custom ThreadFactory, like :)

ExecutorService es = Executors.newCachedThreadPool(new CustomThreadFactory());

...

class CustomThreadFactory implements ThreadFactory {

    @Override
    public Thread newThread(final Runnable r) {
        return new Thread(r, "nice_place_for_helpful_name");
    }
}

So, after much investigation and "heap walking" (paths to GC roots), I found a few Executors$DefaultThreadFactory instances like

[Screenshot: heap dump paths to GC roots for Executors$DefaultThreadFactory]

which pointed me to the code with the REST service invocations. Something like:

public void doCall() {
    // a new Client per call that is never closed; its async thread pool leaks
    Client client = ClientBuilder.newClient();
    Future<Response> future = client.target("http://...")
                                 .request()
                                 .async().get();
}

In the WildFly 10 JAX-RS implementation, each newClient() call builds a ResteasyClient that uses an ExecutorService (asyncInvocationExecutor) to perform requests, and this can potentially be the source of the leak.

Lesson #2: always close() the client after usage. Check that the implementation closes the connection and shuts down the thread pool in case of errors (timeouts, socket resets, etc.).
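A minimal sketch of lesson #2; the try/finally guarantees the pool is shut down even when the call fails (the target URL is a placeholder):

Client client = ClientBuilder.newClient();
try {
    Response response = client.target("http://...")
                              .request()
                              .get();
    // work with the response ...
} finally {
    client.close(); // releases connections and shuts down the async thread pool
}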

Lesson #3: try to construct only a small number of Client instances in the application. This last one is still a bit unclear from a pure Jakarta EE application point of view, as a single client does not always work well in a multi-threaded environment. (Invalid use of BasicClientConnManager: connection still allocated. Make sure to release the connection before allocating another one.)
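And a sketch of lesson #3 under container management: one CDI-managed client shared by the application and closed on shutdown (whether a single instance is safe for your workload depends on the underlying connection manager, as the error message above shows):

@ApplicationScoped
public class SharedRestClient {

    private Client client;

    @PostConstruct
    void init() {
        client = ClientBuilder.newClient();
    }

    public Client client() {
        return client;
    }

    @PreDestroy
    void shutdown() {
        // one close() for the whole application lifetime
        client.close();
    }
}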

P.S. Many thanks to the JProfiler trial version for making thread-dump walking a pleasure.


October 03, 2019 09:00 PM

JakartaOne Livestream Wrap up

by Tanja Obradovic at October 03, 2019 11:12 AM

September 10th, 2019 was a big day in global Jakarta EE circles. Not only did we release the very first version of Jakarta EE, we also got very ambitious and organized the very first JakartaOne Livestream conference. What an experience that was!

The intention was to mark an important milestone: the release of the very first vendor-neutral, Java EE 8 compatible release of Jakarta EE using the new Jakarta EE Specification Process.

Almost two years after Oracle's announcement of the Java EE contribution to the Eclipse Foundation, on September 10th we finally have the base-level release that will enable existing products certified on Java EE 8 to move easily and seamlessly to the Jakarta EE 8 release. The release is a result of cross-community collaboration at the Eclipse Foundation: it has the same Java EE 8 APIs using the javax namespace, its Jakarta EE 8 TCKs are fully compatible with the Java EE 8 TCKs, and a new compatibility/branding process is available for compatible products.

It was obvious from the beginning that this might be bigger than we expected, and here is why. We approached a few well-respected leaders in the Java EE community to help out and serve on the JakartaOne Livestream Program Committee. What a thrill that was! Reza Rahman graciously accepted to be the Program Committee Chair, and the rest of the members of the committee were equally eager and interested to help out: Josh Juneau, Ivar Grimstad, Adam Bien, Arun Gupta and, from the Eclipse Foundation, Tanja Obradovic. It was a great pleasure working with this team.

We started the work on the conference in early June and, with all members having busy schedules and summer vacations, we made the most of the time we had: we made a plan and put out the CFP for the conference. The response was another indicator of interest in the wider community for something like this. We got well over 50 talk submissions of great quality, and we had a difficult task selecting the best. We had a great mixture of introductory/overview talks, specification-specific talks, cloud-native focused talks, keynotes from Mike Milinkovich and James Gosling, industry keynotes from Jakarta EE Working Group Steering Committee members (Tomitribe, IBM, Fujitsu, Oracle, Payara, Red Hat), demos and panel discussions. We selected 16 talks, which meant 18 hours of program including keynotes and panel discussions. Adam Bien accepted to be MC of the event, and I was available to help out. In the end, the Program Committee chose the following collection of talks; as a reminder, the talks themselves can still be viewed via jakartaone.org.

You may call it beginner's luck, but on the day of the event we had well over 1350 registered attendees, and questions were almost immediately answered in the chat by someone from the community. The level of positivity and the sense of the community coming together was overwhelming, illustrating the true power of open source. There were very few technical difficulties; talks went one after another, and even when we had issues, Reza Rahman was yet again available to save the day with an additional Q&A session.

 

And to leave the best for last: in terms of running the event, the work done by my colleagues on the Eclipse Foundation staff was impressive. I could not ask for better help than working with Stephanie Swart, Laura Tran and Shabnam Mayel, and our marketing and web teams made a great effort ahead of the event. The level of dedication, professionalism and readiness to improvise in order to deal with issues was amazing. And did I mention that for all four of us this was the very first conference we had worked on? Let's take a deep breath and enjoy the success for another moment before we start working on the next Jakarta EE release.


by Tanja Obradovic at October 03, 2019 11:12 AM

After the "How do e-commerce sites use search engines?" meetup

by Hüseyin Akdogan at October 03, 2019 06:00 AM

On Tuesday, October 1st, 2019, together with dear Hakan Özler, we hosted Hasan Emre Erkek and Hüseyin Çelik from n11 at the "How do e-commerce sites use search engines?" meetup. The e-commerce market keeps growing rapidly in our country, just as it does worldwide. Since search engines are the key enablers for presenting products and services to customers accurately and quickly, we thought it would be useful to cover topics such as search engines, the data structures and algorithms they use, and the presentation of optimal query results.

First, we discussed how search engines are used at the most fundamental level, from the end user's perspective. Hasan Emre Erkek described the core goal as understanding the context the user is searching in, matching the search with the most relevant product, and presenting it to the user. Hüseyin Çelik explained that, beyond matching, they try to understand what the user means by the search phrase and exactly which product or service they are trying to reach, and to guide the user accordingly. At this point the importance of detecting synonyms was emphasized. An interesting example mentioned at this stage of the conversation can be seen in a tweet by an attendee who followed our meetup live 🙂

After talking about spell checking, corpora, and the Levenshtein and word-break algorithms, Hasan Emre Erkek said that building your own corpus from existing data is one of the most value-adding things you can do. Deep learning was also among the topics discussed. In this context, we covered building a word directory from user searches and generating a list of the words closest to the searched term in vector space. Hüseyin Çelik mentioned work still in progress that starts from the data of users who corrected their own search mistakes and aims to suggest the corrected version to others making the same mistake.

Hüseyin Çelik went on to say that they try to shorten the user's steps to reach a product; for example, if users have been using a particular filter heavily, they suggest it to other users as well. At this point Hasan Emre Erkek stated that guiding the user, especially in generic searches, is also one of their core missions.

We also asked which search engine technology is used at n11; the answer was Apache Solr. To the question of whether Solr meets their needs, our guests answered "apart from domain-specific matters where we have to step in, very much yes". After talking about Lucene at the core of Solr, boosting, TF-IDF scores and the other advantages it provides, Hakan Özler finally asked where and how one should start learning and researching these topics. Hasan Emre Erkek and Hüseyin Çelik emphasized the importance of acquiring basic knowledge of information retrieval and natural language processing, and then recommended learning and working on an open source library such as Lucene. Here, the book Lucene in Action was a resource our guests recommended.

Speaking for myself, it was an enjoyable and productive conversation that I learned from. We thank our guests Hasan Emre Erkek and Hüseyin Çelik once again. Before I forget, I would like to remind you that you can follow this meetup and our previous ones on the JUG İstanbul podcast channel on iTunes and Spotify.

See you at another event…


by Hüseyin Akdogan at October 03, 2019 06:00 AM

Authentication and Authorization with MicroProfile JWT and Payara Server

by admin at October 02, 2019 04:28 AM

Authentication and authorization with MicroProfile JWT and Payara:

Tokens were generated with jwtenizr.sh; the application was deployed with wad.sh.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at October 02, 2019 04:28 AM

A simple MicroProfile JWT token provider with Payara realms and JAX-RS

October 02, 2019 12:00 AM

Armor

In this tutorial I will demonstrate how to create a "simple" (yet practical) token provider using Payara realms as the user/group store. With a couple of tweaks it is applicable to any MicroProfile implementation (since all implementations support JAX-RS).

In short this guide will:

  • Create a public/private key pair in RSASSA-PKCS1-v1_5 format to sign tokens
  • Create user, password and fixed groups on Payara file realm (groups will be web and mobile)
  • Create a vanilla JakartaEE + MicroProfile project
  • Generate tokens that are compatible with MicroProfile JWT specification using Nimbus JOSE

Create a public/private pair

MicroProfile JWT establishes that tokens should be signed using the RSASSA-PKCS1-v1_5 signature with the SHA-256 hash algorithm (RS256).

The general idea is to generate a private key that will be used by the token provider; the clients then only need the public key to verify the signature. One of the "simple" ways to do this is by generating an RSA key pair using OpenSSL.

First it is necessary to generate a base RSA key:

openssl genrsa -out baseKey.pem

From the base key generate the PKCS#8 private key:

openssl pkcs8 -topk8 -inform PEM -in baseKey.pem -out privateKey.pem -nocrypt

Using the base key you can generate a public (and distributable) key:

openssl rsa -in baseKey.pem -pubout -outform PEM -out publicKey.pem

Finally, some crypto libraries like Bouncy Castle only accept traditional RSA keys, so it is convenient to convert the key, again using OpenSSL:

openssl rsa -in privateKey.pem -out myprivateKey.pem

In the end, myprivateKey.pem can be used to sign the tokens and publicKey.pem can be distributed to any potential consumer.

Create user, password and groups on Payara realm

According to the GlassFish documentation, the general idea of realms is to provide a security policy for domains: they contain users and groups, and users are assigned to groups. These realms can be created using:

  • File containers
  • Certificates databases
  • LDAP directories
  • Plain old JDBC
  • Solaris
  • Custom realms

For the purposes of this tutorial a file realm will be used, but any properly configured realm should work.

On vanilla GlassFish installations, domain1 uses the server-config configuration. To create the realm, go to server-config -> Security -> Realms and add a new realm. In this tutorial a realm named burgerland will be created with the following configuration:

  • Name: burgerland
  • Class name: com.sun.enterprise.security.auth.realm.file.FileRealm
  • JAAS Context: fileRealm
  • Key file: ${com.sun.aas.instanceRoot}/config/burgerlandkeyfile

Realm Creation

Once the realm is ready, we can add two users with different roles (web, mobile), named ronald and king. The final result should look like this:

Users Creation
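If you prefer the command line over the admin console, the same realm and users can be created with asadmin (a sketch; quoting of the property value may vary by shell, and asadmin will prompt for each user's password):

asadmin create-auth-realm \
  --classname com.sun.enterprise.security.auth.realm.file.FileRealm \
  --property 'file=${com.sun.aas.instanceRoot}/config/burgerlandkeyfile:jaas-context=fileRealm' \
  burgerland

asadmin create-file-user --authrealmname burgerland --groups web ronald
asadmin create-file-user --authrealmname burgerland --groups mobile king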

Create a vanilla JakartaEE project

In order to generate the tokens, we need to create a greenfield application. This can be achieved using the javaee8-essentials-archetype with the following command:

mvn archetype:generate -Dfilter=com.airhacks:javaee8-essentials-archetype -Dversion=0.0.4

As usual, the archetype assistant will ask for project details; the project will be named microjwt-provider:

Project Creation

Now it is necessary to copy the myprivateKey.pem file generated in section 1 to the project's classpath using the Maven structure, specifically to src/main/resources. To avoid any confusion I also renamed this file to privateKey.pem. The final structure will look like this:

microjwt-provider$ tree
.
├── buildAndRun.sh
├── Dockerfile
├── pom.xml
├── README.md
└── src
    └── main
        ├── java
        │   └── com
        │       └── airhacks
        │           ├── JAXRSConfiguration.java
        │           └── ping
        │               └── boundary
        │                   └── PingResource.java
        ├── resources
        │   ├── META-INF
        │   │   └── microprofile-config.properties
        │   └── privateKey.pem
        └── webapp
            └── WEB-INF
                └── beans.xml

You can get rid of the generated source code, since the application will be bootstrapped using a different package structure :-).

Generating MP compliant tokens from Payara realm

In order to create a provider, we will create a project with a central JAX-RS resource named TokenProviderResource with the following characteristics:

  • Receives a POST request with form params at /auth
  • The resource creates and signs a token using the privateKey.pem key
  • Returns the token in the response body
  • Roles are established using the web.xml file
  • Roles are mapped to the Payara realm using the glassfish-web.xml file
  • User, password and roles are checked using the Servlet 3+ API

Nimbus JOSE and Bouncy Castle are needed as dependencies in order to read the key and sign tokens; these should be added to the pom.xml file:

<dependency>
    <groupId>com.nimbusds</groupId>
    <artifactId>nimbus-jose-jwt</artifactId>
    <version>5.7</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.53</version>
</dependency>

Next, an enum will be used to describe the fixed roles in a type-safe way:

public enum RolesEnum {
	WEB("web"),
	MOBILE("mobile");

	private String role;

	public String getRole() {
		return this.role;
	}

	RolesEnum(String role) {
		this.role = role;
	}
}

Once the dependencies and roles are in the project, we will implement a plain old Java bean in charge of token creation. First, to be compliant with the MicroProfile token structure, an MPJWTToken bean is created; it also contains a quick object-to-JSON-string converter, but you could use any other marshaller implementation.

public class MPJWTToken {
	private String iss; 
    private String aud;
    private String jti;
    private Long exp;
    private Long iat;
    private String sub;
    private String upn;
    private String preferredUsername;
    private List<String> groups = new ArrayList<>();
    private List<String> roles;
    private Map<String, String> additionalClaims;

    //Gets and sets go here

    public String toJSONString() {

        JSONObject jsonObject = new JSONObject();
        jsonObject.appendField("iss", iss);
        jsonObject.appendField("aud", aud);
        jsonObject.appendField("jti", jti);
        jsonObject.appendField("exp", exp / 1000);
        jsonObject.appendField("iat", iat / 1000);
        jsonObject.appendField("sub", sub);
        jsonObject.appendField("upn", upn);
        jsonObject.appendField("preferred_username", preferredUsername);

        if (additionalClaims != null) {
            for (Map.Entry<String, String> entry : additionalClaims.entrySet()) {
                jsonObject.appendField(entry.getKey(), entry.getValue());
            }
        }

        JSONArray groupsArr = new JSONArray();
        for (String group : groups) {
            groupsArr.appendElement(group);
        }
        jsonObject.appendField("groups", groupsArr);

        return jsonObject.toJSONString();
    }
}

Once the JWT structure is complete, a CypherService is implemented to create and sign the token. This service implements the JWT generator and also a key "loader" that reads the private key file from the classpath using Bouncy Castle.

public class CypherService {

	public static String generateJWT(PrivateKey key, String subject, List<String> groups) {
        JWSHeader header = new JWSHeader.Builder(JWSAlgorithm.RS256)
                .type(JOSEObjectType.JWT)
                .keyID("burguerkey")
                .build();

        MPJWTToken token = new MPJWTToken();
        token.setAud("burgerGt");
        token.setIss("https://burger.nabenik.com");
        token.setJti(UUID.randomUUID().toString());

        token.setSub(subject);
        token.setUpn(subject);

        token.setIat(System.currentTimeMillis());
        token.setExp(System.currentTimeMillis() + 7*24*60*60*1000); // 1 week expiration!

        token.setGroups(groups);

        JWSObject jwsObject = new JWSObject(header, new Payload(token.toJSONString()));

        // Apply the Signing protection
        JWSSigner signer = new RSASSASigner(key);

        try {
            jwsObject.sign(signer);
        } catch (JOSEException e) {
            e.printStackTrace();
        }

        return jwsObject.serialize();
    }

    public PrivateKey readPrivateKey() throws IOException {

        InputStream inputStream = CypherService.class.getResourceAsStream("/privateKey.pem");

        PEMParser pemParser = new PEMParser(new InputStreamReader(inputStream));
        JcaPEMKeyConverter converter = new JcaPEMKeyConverter().setProvider(new BouncyCastleProvider());
        Object object = pemParser.readObject();
        KeyPair kp = converter.getKeyPair((PEMKeyPair) object);
        return kp.getPrivate();
    }	
}

CypherService will be used from TokenProviderResource as an injectable CDI bean. One of my motivations for separating key reading from the signing process is that key reading should respect the resource lifecycle; hence the key is loaded in the CDI @PostConstruct callback.

Here is the full resource code:

@Singleton
@Path("/auth")
public class TokenProviderResource {

    @Inject
    CypherService cypherService;

    private PrivateKey key;

    @PostConstruct
    public void init() {
        try {
            key = cypherService.readPrivateKey();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response doTokenLogin(@FormParam("username") String username, @FormParam("password")String password,
                               @Context HttpServletRequest request){

        List<String> target = new ArrayList<>();
        try {
            request.login(username, password);

            if(request.isUserInRole(RolesEnum.MOBILE.getRole()))
                target.add(RolesEnum.MOBILE.getRole());

            if(request.isUserInRole(RolesEnum.WEB.getRole()))
                target.add(RolesEnum.WEB.getRole());

        }catch (ServletException ex){
            ex.printStackTrace();
            return Response.status(Response.Status.UNAUTHORIZED)
                    .build();
        }

        String token = cypherService.generateJWT(key, username, target);

            return Response.status(Response.Status.OK)
                    .header(AUTHORIZATION, "Bearer ".concat(token))
                    .entity(token)
                    .build();

    }

}

JAX-RS endpoints are, in the end, abstractions over the Servlet API; consequently you can inject the HttpServletRequest or HttpServletResponse object into any method (here doTokenLogin). In this case it is useful since I'm triggering a manual login using the Servlet 3+ login method.

As noticed by many users, the Servlet API does not allow reading user roles in a portable way, hence I'm just checking whether the given user is included in the fixed roles using the previously defined enum and adding those roles to the target ArrayList.

In this code the parameters were declared as @FormParam consuming x-www-form-urlencoded data, making it useful for plain HTML forms, but this configuration is completely optional.

Mapping project to Payara realm

The main motivation to use the Servlet login method is that it is already integrated with the Java EE security schemes; using the realm is then a simple two-step configuration:

  • Add the realm/roles configuration at web.xml file in the project
  • Map Payara groups to application roles using glassfish-web.xml file

If you want to know the full description of this mapping, I found a useful post here.

First, I need to map the application to the burgerland realm and declare the two roles. Since I'm not selecting an auth method, the project will fall back to BASIC; however, I'm not protecting any resource, so credentials won't be explicitly required on any HTTP request:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">

    <login-config>
        <realm-name>burgerland</realm-name>
    </login-config>
    <security-role>
        <role-name>web</role-name>
    </security-role>
    <security-role>
        <role-name>mobile</role-name>
    </security-role>
</web-app>

Payara groups and Java web application roles are not the same concept, but they can be mapped using the GlassFish descriptor glassfish-web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN" "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app error-url="">
    <security-role-mapping>
        <role-name>mobile</role-name>
        <group-name>mobile</group-name>
    </security-role-mapping>
    <security-role-mapping>
        <role-name>web</role-name>
        <group-name>web</group-name>
    </security-role-mapping>
</glassfish-web-app>

Finally the new application is deployed, and a simple test demonstrates the functionality of the token provider:

Postman test
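If you prefer the command line over Postman, a hypothetical curl call would look like this (assuming a context root of microjwt-provider, the archetype's resources application path, and the password chosen when the realm user was created):

curl -i -X POST \
  -d "username=ronald" -d "password=yourRealmPassword" \
  http://localhost:8080/microjwt-provider/resources/auth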

The token can be explored using any JWT tool, like the popular jwt.io; here you can see that the token is a compatible JWT implementation:

JWT test

And, as stated previously, the signature can be checked using only the PUBLIC key:

JWT test 2
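For a programmatic check, here is a minimal sketch with Nimbus JOSE, assuming the public key has already been loaded into an RSAPublicKey (e.g. with the same Bouncy Castle PEM parsing used in CypherService):

SignedJWT signedJWT = SignedJWT.parse(serializedToken);
JWSVerifier verifier = new RSASSAVerifier(rsaPublicKey);
boolean valid = signedJWT.verify(verifier); // true only if the signature is untampered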

As always, full implementation is available at GitHub.


October 02, 2019 12:00 AM

Payara Services at Oracle Code One 2019

by Ondrej Mihályi at October 01, 2019 11:42 AM

This year marked the second edition of the Oracle Code One conference, formerly known as JavaOne. The conference is one of the most important Java conferences in the world, and rightly so for many reasons, which means that we at Payara couldn't miss being there. We were extraordinarily busy at the conference, so we want to share a short summary of what happened and what it meant for Payara and for the whole Java community in general.


by Ondrej Mihályi at October 01, 2019 11:42 AM

Oracle CodeOne 2019

by Ivar Grimstad at September 30, 2019 07:49 AM

I am on my way back home from this year’s Oracle CodeOne. As always, this week is so filled with content and activities that it just flies by.

In previous years, I have always had the sort of empty feeling on Thursday (the last day of the conference); the exhibition hall and Groundbreakers Hub is packed away, lunch in the hallways, lots of people walking around with luggage just catching a couple of sessions before heading home.

This year was different. Ed Burns said in the talk he held together with Phillip Krüger that he found Thursday to be the best day of the conference, because you can just attend sessions without all the other distractions. And I agree! This year, I listened to great talks from 9 in the morning until 3 in the afternoon with only 15-minute breaks between the sessions. No distractions, other than the usual short hallway discussions between the sessions.


Jakarta EE 8 was launched the week before CodeOne. Another important milestone for the community! We had a lot of great talks, BOFs and hallway discussions.

The general impression of this edition of CodeOne is that it was smaller than last year, both in the number of attendees and in the number of exhibitors. The community spirit, however, was as strong as always!


by Ivar Grimstad at September 30, 2019 07:49 AM

Migration from JEE to JakartaEE

September 29, 2019 09:00 PM

As you probably know, Java EE was moved from Oracle to the Eclipse Foundation, where it will evolve under the Jakarta EE brand. On September 10, 2019 the Jakarta EE Full Platform and Web Profile specifications were released by the Eclipse Foundation during the JakartaOne Livestream. A few days later, WildFly declared that WildFly 17.0.1 had passed the Jakarta EE 8 TCK and that the certification request had been approved by the Jakarta EE Spec Committee. So WildFly is now a Jakarta EE Full Platform compatible implementation.

Let's migrate a typical Gradle EE project to Jakarta EE and see how hard it is. The current Jakarta EE version 8.0.0 is fully compatible with Java EE version 8.0, which means there is no need to change the project sources; just update the dependency from javax:javaee-api:8.0 to jakarta.platform:jakarta.jakartaee-api:8.0.0.

updated build.gradle:

apply plugin: 'war'
dependencies {
    providedCompile "jakarta.platform:jakarta.jakartaee-api:8.0.0"
}

That is it! The application builds and works well under WildFly 17.0.1.
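If your project uses Maven instead of Gradle, the migration is the same one-line dependency swap in pom.xml (a sketch; the provided scope mirrors Gradle's providedCompile):

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>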

The source code of the demo application is available on GitHub.


September 29, 2019 09:00 PM

#WHATIS?: Jakarta RESTful Web Services (JAX-RS)

by rieckpil at September 29, 2019 10:03 AM

The REST architectural pattern is widely adopted when it comes to creating web services. The term was first introduced by Roy Fielding in his dissertation and describes a way for clients to query and manipulate the resources of a server. With Jakarta RESTful Web Services (JAX-RS), formerly known as Java API for RESTful Web Services, we have a standardized approach to create such web services. This specification is also part of the MicroProfile project since day one.

Learn more about the Jakarta RESTful Web Services (JAX-RS) specification, its annotations, and how to use it in this blog post. Please note that I won’t cover every aspect of this spec (as it is quite large) and will rather concentrate on the most important parts.

Specification profile: Jakarta RESTful Web Services (JAX-RS)

  • Current version: 2.1 in Java/Jakarta EE 8 and 2.1 in MicroProfile 3.0
  • GitHub repository
  • Specification homepage
  • Basic use case: develop web services following the Representational State Transfer (REST) pattern

Bootstrap a JAX-RS application

Bootstrapping a JAX-RS application is simple. The main mechanism is to provide a subclass of javax.ws.rs.core.Application on your classpath:

@ApplicationPath("resources")
public class JAXRSApplication extends Application {
}

With @ApplicationPath you can specify the path prefix all of your REST endpoints should share. This might be /api or /resources. Furthermore, you can override the methods of Application and register, for example, all your resource classes, providers and features manually (getClasses() method), but you don’t have to.
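A minimal sketch of such a manual registration (assuming a BookResource class like the one in the next chapter; with an overridden getClasses() only the listed classes are registered, no classpath scanning happens):

@ApplicationPath("resources")
public class JAXRSApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<>();
        classes.add(BookResource.class);
        return classes;
    }
}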

Create REST endpoints

Most of the time you’ll use JAX-RS to expose resources of your server on a given path and for a specific HTTP method. The specification provides an annotation to map each HTTP method (GET, PUT, POST, DELETE …) to a Java method. Using the @Path annotation you can specify which path to map and also declare path variables:

@Path("books")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class BookResource {


    @GET
    @Path("/{id}")
    public Response getBookById(@PathParam("id") Long id, 
                                @QueryParam("title") @DefaultValue("") String title) {
       // ...
    }

    @POST
    public Response createBook(Book bookToStore, @Context UriInfo uriInfo) {
        // ...
    }

    @DELETE
    @Path("/{id}")
    public Response deleteBook(@PathParam("id") Long id, 
                               @HeaderParam("User-Agent") String userAgent) {
       // ...
    }

}

In the example above you see that the whole class is mapped to the path /books with different HTTP methods. @PathParam is used to get the value of a path variable and @QueryParam retrieves query parameters of a URL (e.g. ?order=DESC). In addition, you can inject further classes into your JAX-RS method and get access to e.g. the HttpServletRequest, UriInfo and HTTP headers of the request (@HeaderParam("nameOfHeader")).

Next, JAX-RS offers annotations for content negotiation: @Consumes and @Produces. In the example above, I’m adding these annotations at class level, so all methods (which don’t specify their own @Produces/@Consumes) inherit the rules to accept only JSON requests and produce only JSON responses.

In case your client sends a payload in the HTTP body (e.g. creating a new book: @POST in the example above), you can map the payload to a Java POJO. For JSON payloads, JSON-B is used in the background; for non-default payload types (e.g. binary protobuf payload to POJO) you have to register your own MessageBodyReader and MessageBodyWriter. The specification defines a standard set of entity providers which are supported out-of-the-box (e.g. String for text/plain, byte[] for */*, File for */*, MultivaluedMap<String, String> for application/x-www-form-urlencoded, etc.).
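As an illustration, a hedged sketch of a custom MessageBodyWriter that renders a Book as CSV (the text/csv media type and the field choice are my assumptions, not part of the spec; since JAX-RS 2.1, getSize() has a default implementation and can be omitted):

@Provider
@Produces("text/csv")
public class BookCsvWriter implements MessageBodyWriter<Book> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
                               Annotation[] annotations, MediaType mediaType) {
        return Book.class.isAssignableFrom(type);
    }

    @Override
    public void writeTo(Book book, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> httpHeaders,
                        OutputStream entityStream) throws IOException {
        // one CSV line per entity
        entityStream.write((book.getTitle() + "," + book.getAuthor() + "\n")
                .getBytes(StandardCharsets.UTF_8));
    }
}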

Alongside synchronous and blocking REST endpoints, the specification also supports asynchronous ones:

@GET
@Path("async")
public void getBooksAsync(@Suspended final AsyncResponse asyncResponse) {
    // do long-running task with e.g. @Asynchronous annotation
    // from MicroProfile Fault Tolerance or from EJB
    asyncResponse.resume(this.bookStore);
}

If you don’t specify any other lifecycle (e.g. with @Singleton from EJB or a CDI scope), the JAX-RS runtime instantiates a new instance of the resource class for each request.
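A small sketch of opting out of the per-request lifecycle with a CDI scope (the hit-counter resource is a made-up example):

@ApplicationScoped
@Path("hits")
public class HitCounterResource {

    // shared across all requests because the resource is application-scoped
    private final AtomicLong hits = new AtomicLong();

    @GET
    public long count() {
        return hits.incrementAndGet();
    }
}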

Access external resources

The JAX-RS specification also provides a convenient way to access external resources (e.g. REST endpoints of other services) as a client. We can construct such a client with the ClientBuilder from JAX-RS:

@PostConstruct
public void initClient() {
    ClientBuilder clientBuilder = ClientBuilder.newBuilder()
        .connectTimeout(5, TimeUnit.SECONDS)
        .readTimeout(5, TimeUnit.SECONDS)
        .register(UserAgentClientFilter.class)
        .register(ClientLoggingResponseFilter.class);

    this.client = clientBuilder.build();
}

@PreDestroy
public void tearDown() {
    this.client.close();
}

This ClientBuilder allows you to specify metadata like the connect and read timeouts, but also to register several features (as you’ll see in the next chapter). Make sure not to construct a new Client for every request, as they are heavyweight objects:

Clients are heavy-weight objects that manage the client-side communication infrastructure. Initialization as well as disposal of a {@code Client} instance may be a rather expensive operation. It is therefore advised to construct only a small number of {@code Client} instances in the application. Client instances must be {@link #close() properly closed} before being disposed to avoid leaking resources.

Javadoc of the Client class

Once you have an instance of a Client, you can now specify the external resources and create a WebTarget instance for each target you want to access:

WebTarget quotesApiTarget = client.target("https://quotes.rest").path("qod");

With this WebTarget instance you can now perform any HTTP operation, set additional headers/cookies, set the request body and specify the response type:

JsonObject quoteApiResult = this.quotesApiTarget
    .request()
    .header("X-Foo", "bar")
    .accept(MediaType.APPLICATION_JSON)
    .get()
    .readEntity(JsonObject.class);

Furthermore, JAX-RS offers reactive support for requesting external resources with .rx():

CompletionStage<JsonObject> rxQuoteApiResult = this.quotesApiTarget
    .request()
    .header("X-Foo", "bar")
    .accept(MediaType.APPLICATION_JSON)
    .rx()
    .get(JsonObject.class);

Intercept the request and response flow

There are various entry points to intercept the flow of a JAX-RS resource, including client requests. To give you an idea of what the overall architecture looks like, have a look at the following image:

jaxRsRequestResponseFlow

As you can see in the image above, there are several ways to apply cross-cutting logic to your JAX-RS resource methods or client. I’ll not cover all of the filters/readers/interceptors in this blog post, as you’ll find excellent documentation in the Jersey user guide. I’ll just have a look at the most common ones.

To register your implementations, you either do it manually with your JAX-RS configuration class (see the first chapter), use the .register() method of the ClientBuilder, or use @Provider to register them globally.

First, you can apply a filter that executes before JAX-RS maps an incoming request to your resource method. For this, you need the @PreMatching annotation and can do evil things like the following:

@Provider
@PreMatching
public class HttpMethodModificationFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {

        if(requestContext.getMethod().equalsIgnoreCase("DELETE")) {
            requestContext.setMethod("GET");
        }

    }
}

Next, you can add e.g. common headers to the response of your resource method with a ContainerResponseFilter:

@Priority(100)
@Provider
public class XPoweredByResponseHeaderFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        responseContext.getHeaders().add("X-Powered-By", "MicroProfile");
    }
}

With @Priority you can set the order of your filters if you use multiple ones and rely on their execution order.

For the client side, we can add a ClientResponseFilter to log all HTTP headers of the incoming response:

@Provider
public class ClientLoggingResponseFilter implements ClientResponseFilter {

    @Override
    public void filter(ClientRequestContext requestContext, 
                       ClientResponseContext responseContext) throws IOException {
        System.out.println("Response filter for JAX-RS Client");
        responseContext.getHeaders().forEach((k, v) -> System.out.println(k + ":" + v));
    }
}

YouTube video for using JAX-RS 2.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see JAX-RS 2.1 in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using JAX-RS,

Phil

 

The post #WHATIS?: Jakarta RESTful Web Services (JAX-RS) appeared first on rieckpil.


by rieckpil at September 29, 2019 10:03 AM

#WHATIS?: JSON Processing (JSON-P)

by rieckpil at September 26, 2019 04:37 AM

Besides binding and converting JSON from and to Java objects with JSON-B, the Java EE specification (now Jakarta EE) offers a spec to process JSON data: JSON Processing (JSON-P). With this spec, you can easily create, write, read, stream, transform and query JSON objects. This specification is also part of the Eclipse MicroProfile project and provides a simple API to handle and further process JSON data structures, as you’ll see in the following examples.

Learn more about the JSON Processing (JSON-P) specification and how to use it in this blog post.

Specification profile: JSON Processing (JSON-P)

  • Current version: 1.1 in Java/Jakarta EE 8 and 1.1 in MicroProfile 3.0
  • GitHub repository
  • Specification homepage
  • Basic use case: Process JSON messages (parse, generate, transform and query)

Construct JSON objects using JSON-P

With JSON-P you can easily build JSON objects on-demand. You can create a JsonObjectBuilder using the Json class and build the JSON object while adding new attributes to the object:

JsonObject json = Json.createObjectBuilder()
    .add("name", "Duke")
    .add("age", 42)
    .add("skills",
        Json.createArrayBuilder()
        .add("Java SE")
        .add("Java EE").build())
    .add("address",
        Json.createObjectBuilder()
        .add("street", "Mainstreet")
        .add("city", "Jakarta")
        .build())
    .build();

If you print this object, you already have valid JSON and can return it, e.g. from a JAX-RS endpoint, or use it as an HTTP request body:

{"name":"Duke","age":42,"skills":["Java SE","Java EE"],"address":{"street":"Mainstreet","city":"Jakarta"}}

You are not limited to creating JSON objects; you can also request a JsonArrayBuilder and start constructing your JSON array:

JsonArray jsonArray = Json.createArrayBuilder()
    .add("foo")
    .add("bar")
    .add("duke")
    .build();

Write JSON objects

Given a JSON object, you can also write it to a different destination using JSON-P and its JsonWriterFactory. As an example, I’m writing a JSON object to a File with pretty-printing:

private void prettyPrintJsonToFile(JsonObject json) throws IOException {
    Map<String,Boolean> config = new HashMap<>();
    config.put(JsonGenerator.PRETTY_PRINTING, true);

    JsonWriterFactory writerFactory = Json.createWriterFactory(config);
    try (OutputStream outputStream = new FileOutputStream(new File("/tmp/output.json")); 
         JsonWriter jsonWriter = writerFactory.createWriter(outputStream)) {

        jsonWriter.write(json);
    }
}

The JsonWriterFactory accepts any Writer or OutputStream to instantiate the JsonWriter:

private void prettyPrintJsonToConsole(JsonObject json) throws IOException {
    Map<String,Boolean> config = new HashMap<>();
    config.put(JsonGenerator.PRETTY_PRINTING, true);

    JsonWriterFactory writerFactory = Json.createWriterFactory(config);
    try (Writer stringWriter = new StringWriter(); 
         JsonWriter jsonWriter = writerFactory.createWriter(stringWriter)) {
        jsonWriter.write(json);
        System.out.println(stringWriter);
    }
}

Using the JSON object from the chapter above, the output on the console will look like the following:

{
    "name": "Duke",
    "age": 42,
    "skills": [
        "Java SE",
        "Java EE"
    ],
    "address": {
        "street": "Mainstreet",
        "city": "Jakarta"
    }
}

Read JSON with JSON-P

The specification also provides a convenient way to read and parse JSON from a given source (e.g. File or String). To create a JsonReader instance, you have to provide either an InputStream or a Reader. As an example, I’m reading from both a String and a File on the classpath:

private void readFromString() {
    JsonReader jsonReader = Json.createReader(
        new StringReader("{\"name\":\"duke\",\"age\":42,\"skills\":[\"Java SE\", \"Java EE\"]}"));
    JsonObject jsonObject = jsonReader.readObject();
    System.out.println(jsonObject);
}

private void readFromFile() {
    JsonReader jsonReader = Json.createReader(this.getClass().getClassLoader()
        .getResourceAsStream("books.json"));
    JsonArray jsonArray = jsonReader.readArray();
    System.out.println(jsonArray);
}

If the JSON is not valid, the JsonReader throws a JsonParsingException while parsing and will give a hint about what is wrong, e.g. Invalid token=SQUARECLOSE at (line no=1, column no=54, offset=53). Expected tokens are: [COLON].

Stream JSON data

For use cases where you have to process big JSON objects (which might not fit into memory), you should have a look at the streaming options of JSON-P. The specification says the following about its streaming capabilities:

Unlike the Object model this offers more generic access to JSON strings that may change more often with attributes added or similar structural changes. Streaming API is also the preferred method for very large JSON strings that could take more memory reading them altogether through the Object model API.

Streaming works for both parsing and generating JSON objects. To parse and process a big JSON object, the spec provides the JsonParser:

String jsonString = "{\"name\":\"duke\",\"isRetired\":false,\"age\":42,\"skills\":[\"Java SE\", \"Java EE\"]}";
try (JsonParser parser = Json.createParser(new StringReader(jsonString))) {
    while (parser.hasNext()) {
        final Event event = parser.next();
        switch (event) {
            case START_ARRAY:
                System.out.println("Start of array");
                break;
            case END_ARRAY:
                System.out.println("End of array");
                break;
            case KEY_NAME:
                System.out.println("Key found " + parser.getString());
                break;
            case VALUE_STRING:
                System.out.println("Value found " + parser.getString());
                break;
            case VALUE_NUMBER:
                System.out.println("Number found " + parser.getLong());
                break;
            case VALUE_TRUE:
                System.out.println(true);
                break;
            case VALUE_FALSE:
                System.out.println(false);
                break;
        }
    }
}

This offers rather low-level access to the JSON object: you can react to all Event objects (e.g. START_ARRAY, KEY_NAME, VALUE_STRING) while parsing.

For creating a JSON object in a streaming-fashion, you can use the JsonGenerator class and write to any source using a Writer or OutputStream:

StringWriter stringWriter = new StringWriter();

try (JsonGenerator jsonGenerator = Json.createGenerator(stringWriter)) {
    jsonGenerator.writeStartArray()
        .writeStartObject()
        .write("name", "duke")
        .writeEnd()
        .writeStartObject()
        .write("name", "jakarta")
        .writeEnd()
        .writeEnd();
    jsonGenerator.flush();
}

System.out.println(stringWriter.toString());

Transform JSON with JsonPointer, JsonPatch and JsonMergePatch

Since JSON-P 1.1, the specification offers a great way to query and transform JSON structures using the following standardized JSON operations:

Identify a specific value with JSON Pointer

If your JSON object contains several sub-objects and arrays and you have to find the value of a specific attribute, iterating over the whole object is cumbersome. With JSON Pointer you can specify an expression pointing to a specific attribute and access it directly. The expression syntax is defined in the official RFC.

Once you have a JSON Pointer in place, you can get the value, remove it, replace it, add a new one, and check for existence with JSON-P and its JsonPointer class:

String jsonString = "{\"name\":\"duke\",\"age\":42,\"skills\":[\"Java SE\", \"Java EE\"]}";

JsonObject jsonObject = Json.createReader(new StringReader(jsonString)).readObject();

JsonPointer arrayElementPointer = Json.createPointer("/skills/1");
JsonPointer agePointer = Json.createPointer("/age");
JsonPointer namePointer = Json.createPointer("/name");
JsonPointer addressPointer = Json.createPointer("/address");
JsonPointer tagsPointer = Json.createPointer("/tags");

System.out.println("Get array element with pointer: " 
          + arrayElementPointer.getValue(jsonObject).toString());
System.out.println("Remove age with pointer: " 
          + agePointer.remove(jsonObject));
System.out.println("Replace name with pointer: " 
          + namePointer.replace(jsonObject, Json.createValue("john")));
System.out.println("Check address with pointer: " 
          + addressPointer.containsValue(jsonObject));
System.out.println("Add tags with pointer: " 
          + tagsPointer.add(jsonObject, Json.createArrayBuilder().add("nice").build()));

Define a sequence of operations to apply using JSON Patch

Similar to the JSON Pointer in the example above, you can define a set of operations to apply to a given JSON with JSON Patch. The possible operations are defined in the official RFC. As an example, I’m modifying an existing JSON with JsonPatch like the following:

String jsonString = "{\"name\":\"duke\",\"age\":42,\"skills\":[\"Java SE\", \"Java EE\"]}";

JsonObject jsonObject = Json.createReader(new StringReader(jsonString)).readObject();

JsonPatch patch = Json.createPatchBuilder()
    .add("/isRetired", false)
    .add("/skills/2", "Jakarta EE")
    .remove("/age")
    .replace("/name", "duke two")
    .build();

JsonObject patchedJson = patch.apply(jsonObject);
System.out.println("Patched JSON: " + patchedJson);

The patched JSON object looks like the following:

Patched JSON: {"name":"duke two","skills":["Java SE","Java EE","Jakarta EE"],"isRetired":false}

Merge two JSON objects with JSON Merge Patch

If you want to merge a given JSON object with another JSON, you can make use of JSON Merge Patch. With this, you first define what the merge JSON object looks like and can then apply it to a target JSON structure.

String jsonString = "{\"name\":\"duke\",\"age\":42,\"skills\":[\"Java SE\", \"Java EE\"]}";

JsonObject jsonObject = Json.createReader(new StringReader(jsonString)).readObject();

JsonObject merge = Json.createObjectBuilder()
    .add("name", "duke2")
    .add("isEmployee", true)
    .add("skills", Json.createArrayBuilder()
        .add("CSS")
        .add("HTML")
        .add("JavaScript")
        .build())
    .build();

JsonMergePatch mergePatch = Json.createMergePatch(merge);
JsonValue mergedJson = mergePatch.apply(jsonObject);
System.out.println("Merged JSON: " + mergedJson);

The merged JSON in this example looks like the following:

Merged JSON: {"name":"duke2","age":42,"skills":["CSS","HTML","JavaScript"],"isEmployee":true}

For more information about the JSON Merge Patch, have a look at the official RFC.

YouTube video for using JSON-P 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see JSON-P 1.1 in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using JSON-P,

Phil

The post #WHATIS?: JSON Processing (JSON-P) appeared first on rieckpil.


by rieckpil at September 26, 2019 04:37 AM

Jakarta EE 8 Release @ Eclipse Foundation, August 2019

by Tanja Obradovic at September 24, 2019 10:30 AM

As I am marking one year at the Eclipse Foundation as the Jakarta EE Program Manager, I am looking forward to the major milestone we are all eagerly waiting for - the Jakarta EE 8 release. We now have an official release date: September 10th, 2019! It feels like the whole (very large) Java EE community and enthusiasts have been waiting for this way too long! There were many ups and downs since Oracle announced they would be contributing Java EE to the Eclipse Foundation and a lot of work has been put into planning and making sure we have all the groundwork done for Jakarta EE 8 and beyond.

So let's look back at everything that has been done so far. October 2017: Oracle announces the contribution of Java EE to the Eclipse Foundation. The Jakarta EE Working Group is established with the goal to

  • deliver more frequent releases
  • lower barriers to participation
  • develop the community
  • manage the Jakarta EE brand on behalf of the community

The following milestones were reached over the course of one year:

We are also working on ensuring the Compatibility Trademark Guidelines and License agreements are in place: Jakarta EE Trademark Guidelines and License Agreements.

More progress:

  • TCK process - link to be available shortly, finished July 10th, 2019
  • JESP Operations Guide, finished July 10th, 2019
  • Jakarta EE 8 Release Guide, finished July 10th, 2019

With all the above laid out and defined, and with a clear understanding of the Jakarta EE rights to Java trademarks (see Mike Milinkovich's blog from May 2019), the Jakarta EE Working Group has been extremely busy making sure we have the Jakarta EE 8 release ready for the community.

The Jakarta EE 8 release will

  • Be fully compatible with Java EE 8 specifications
  • Include specifications that are fully transparent and follow the Jakarta EE Specification Process
  • Include the same APIs and Javadoc using the same javax namespace
  • Provide Jakarta EE 8 TCKs under an open source license based on and fully compatible with the Java EE 8 TCKs.
  • Include a Jakarta EE 8 Platform specification that will describe the same platform integration requirements as the Java EE 8 Platform specification.
  • Reference multiple compatible implementations of the Jakarta EE 8 Platform when the Jakarta EE 8 specifications are released.
  • Provide a compatibility and branding process for demonstrating that implementations are Jakarta EE 8 compatible.

Jakarta EE at a glance

  • There has been a strong commitment from the Jakarta EE Working Group to
    • Deliver Jakarta EE 8
    • Keep evolving Jakarta EE and deliver new versions
    • Further plans are being evolved by the Jakarta EE community via the Jakarta EE Platform Project
  • The ongoing evolution of Jakarta EE is the only way to ensure that developers and software vendors can continue to meet the modern enterprise's need for cloud-based applications that resolve key business challenges.

As you can see, a lot of work has been put into Jakarta EE already and a lot more is ahead of us! While the current focus is on the Jakarta EE 8 release, we are looking into making sure all steps are taken so that we can grow the community and direct our attention to innovation in future releases. On a personal level, I am looking forward to seeing the evolution of Jakarta EE and witnessing Java's dominance in the cloud native era!


by Tanja Obradovic at September 24, 2019 10:30 AM

Eclipse Foundation Receives 2019 Duke's Choice Award for Jakarta EE

by Debbie Hoffman at September 23, 2019 04:20 PM

On September 16, 2019, the Eclipse Foundation received the 2019 Duke's Choice Award for Jakarta EE in recognition for open source contributions to the Java ecosystem and the community-driven achievement of moving Java EE from Oracle to Jakarta EE.


by Debbie Hoffman at September 23, 2019 04:20 PM

#WHATIS?: JSON Binding (JSON-B)

by rieckpil at September 22, 2019 10:16 AM

JSON is the current de-facto standard data format for exposing data via APIs. The Java ecosystem offers a bunch of libraries to create JSON from Java objects and vice versa (GSON, Jackson, etc.). With the release of Java EE 8 and JSR-367, we now have a standardized approach for this: JSON-B. With the transition of Java EE to the Eclipse Foundation, this specification has been renamed to Jakarta JSON Binding (JSON-B). In addition, this spec is also part of the Eclipse MicroProfile project.

Learn more about the JSON Binding (JSON-B) specification, its annotations, and how to use it in this blog post.

Specification profile: JSON Binding (JSON-B)

  • Current version: 1.0 in Java/Jakarta EE 8 and 1.0 in MicroProfile 3.0
  • GitHub repository
  • Specification homepage
  • Basic use case: Convert Java objects from and to JSON

Map objects from and to JSON

The central use case for JSON-B is mapping Java objects to and from JSON strings. To provide you an example, I’m using the following POJO:

public class Book {

    private String title;
    private LocalDate creationDate;
    private long pages;
    private boolean isPublished;
    private String author;
    private BigDecimal price;

    // constructors, getters & setters

}

Mapping between Java objects and JSON messages requires an instance of Jsonb. The specification defines a builder to create such an object. This instance can then be used for mapping Java objects both from and to JSON:

Book book = new Book("Java 11", LocalDate.now(), 1, false, "Duke", new BigDecimal(44.444));

Jsonb jsonb = JsonbBuilder.create();

String resultJson = jsonb.toJson(book);

Book serializedBook = jsonb.fromJson(resultJson, Book.class);

With no further configuration or adjustments, the JSON result contains all Java member variables (ignoring null values) as attributes in camel case.

Furthermore, you can also map a collection of Java objects to and from JSON arrays in a type-safe manner:

List<Book> bookList = new ArrayList<>();
bookList.add(new Book("Java 11", LocalDate.now(), 100, true, "Duke", new BigDecimal(39.95)));
bookList.add(new Book("Java 15", LocalDate.now().plus(365, ChronoUnit.DAYS), 110, false, "Duke", new BigDecimal(50.50)));

Jsonb jsonb = JsonbBuilder.create();

String result = jsonb.toJson(bookList);

List<Book> serializedBookList = jsonb
    .fromJson(result, new ArrayList<Book>(){}.getClass().getGenericSuperclass());

Configure the mapping of attributes

Sometimes the default mapping strategy of JSON-B might not fit your requirements and you want to e.g. customize the JSON attribute name or the date/number format. The specification offers a set of annotations to override the default mapping behavior, which can be applied to your Java POJO class.

With @JsonbProperty you can adjust the JSON attribute name. If you use this annotation at field level, it affects both serialization and deserialization. On getter methods it affects only serialization, and on setters only deserialization back to Java objects:

@JsonbProperty("book-title")
private String title;

Next, you can use @JsonbTransient to exclude a specific attribute from JSON serialization entirely:

@JsonbTransient
private boolean isPublished;

If you plan to override the default behavior of not including null values in the JSON message, @JsonbNillable offers a way to do this. This annotation can only be used at class level and affects all attributes:

@JsonbNillable
public class Book {
}

For those use cases where you want just one attribute to be serialized when it is null, you can use @JsonbProperty(nillable=true) on fields/getters/setters.

In addition, you are able to adjust the format of dates and numbers with @JsonbDateFormat and @JsonbNumberFormat and specify your custom format:

@JsonbDateFormat("dd.MM.yyyy")
private LocalDate creationDate;

@JsonbNumberFormat("#0.00")
private BigDecimal price;

Finally, if you don’t want JSON-B to use the default no-arg constructor to deserialize JSON to Java objects, you can specify a custom constructor and use the @JsonbCreator annotation:

public class Book {

    // ...

    @JsonbCreator
    public Book(@JsonbProperty("book-title") String title) {
        this.title = title;
    }

}

Make sure you use this annotation only once per class.

Define metadata for mapping JSON objects

Applying e.g. @JsonbDateFormat to all your POJOs so they all comply with your custom date format might be cumbersome and error-prone. Furthermore, if you use the annotations above to customize the mapping, you are not able to provide multiple representations if different clients require their own.

You can solve such requirements with a JsonbConfig instance and define global metadata for the mapping. Together with this configuration class, you can create a configured Jsonb instance and apply the mapping rules to all mappings of this instance:

Book book = new Book("Java 11", LocalDate.now(), 1, false, null, new BigDecimal(50.50));

JsonbConfig config = new JsonbConfig()
    .withNullValues(false)
    .withFormatting(true)
    .withPropertyOrderStrategy(PropertyOrderStrategy.LEXICOGRAPHICAL)
    .withPropertyNamingStrategy(PropertyNamingStrategy.LOWER_CASE_WITH_UNDERSCORES)
    .withDateFormat("dd-MM-YYYY", Locale.GERMAN);

Jsonb jsonb = JsonbBuilder.create(config);

String jsonString = jsonb.toJson(book);

Using this JsonbConfig, you are also able to configure things you can’t with the annotations of the previous chapter: pretty-printing, locale information, naming strategies, ordering of attributes, encoding information, binary data strategies, etc. Have a look at the official user guide for all configuration attributes.

Provide a custom JSON-B mapping strategy

If all of the above solutions don’t meet your requirements for mapping Java objects from and to JSON, you can implement your own JsonbAdapter and get full access to serialization and deserialization:

public class BookAdapter implements JsonbAdapter<Book, JsonObject> {

    @Override
    public JsonObject adaptToJson(Book book) throws Exception {
        return Json.createObjectBuilder()
                .add("title", book.getTitle() + " - " + book.getAuthor())
                .add("creationDate", book.getCreationDate().toEpochDay())
                .add("pages", book.getPages())
                .add("price", book.getPrice().multiply(BigDecimal.valueOf(2l)))
                .build();
    }

    @Override
    public Book adaptFromJson(JsonObject jsonObject) throws Exception {
        Book book = new Book();
        book.setTitle(jsonObject.getString("title").split("-")[0].trim());
        book.setAuthor(jsonObject.getString("title").split("-")[1].trim());
        book.setPages(jsonObject.getInt("pages"));
        book.setPublished(false);
        book.setPrice(BigDecimal.valueOf(jsonObject.getJsonNumber("price").longValue()));
        book.setCreationDate(LocalDate.ofEpochDay(jsonObject.getInt("creationDate")));
        return book;
    }
}

With this adapter, you have full access to manage your JSON representation and to the deserialization logic. In this example, I’m using both the title and author for the final book title and concatenate both. Keep in mind that with a custom adapter, your JSON-B annotations on your POJO are overruled.

To make use of this JsonbAdapter, you have to register it using a custom JsonbConfig:

JsonbConfig config = new JsonbConfig()
    .withAdapters(new BookAdapter());

Jsonb jsonb = JsonbBuilder.create(config);

String jsonString = jsonb.toJson(book);

Book serializedBook = jsonb.fromJson(jsonString, Book.class);

If you need more low-level access to the serialization and deserialization, have a look at the JsonbSerializer and JsonbDeserializer interfaces (an example can be found in the official user guide).
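
As a rough sketch (assuming the Book class from above; this example is mine, not from the original post), a JsonbSerializer writes JSON events directly through a JsonGenerator:

public class BookSerializer implements JsonbSerializer<Book> {

    @Override
    public void serialize(Book book, JsonGenerator generator, SerializationContext ctx) {
        // write a minimal JSON object for the book
        generator.writeStartObject();
        generator.write("title", book.getTitle());
        generator.writeEnd();
    }
}

Such a serializer would be registered like the adapter above, e.g. via new JsonbConfig().withSerializers(new BookSerializer()).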

YouTube video for using JSON-B 1.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see JSON-B 1.0 in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using JSON-B,

Phil

The post #WHATIS?: JSON Binding (JSON-B) appeared first on rieckpil.


by rieckpil at September 22, 2019 10:16 AM

JUG İstanbul Java Ecosystem Report 2019

by Hüseyin Akdogan at September 16, 2019 06:00 AM

A few days ago, we shared a survey with our community through JUG İstanbul's meetup and Twitter accounts. Before getting into what we aimed for with this survey, I would like to briefly share the story behind it.

 

OUR INSPIRATION

In recent years, London has become an important hub for the IT world, with visibly increasing vibrancy and activity. It would not be an exaggeration to call it one of the capitals of start-ups. London, which also hosts many events, has a large and dynamic Java community. As JUG İstanbul, we try to follow other JUGs and Java communities so as not to miss what is happening in the Java world. In this context, the LJC (London Java Community) is one of the communities we follow. In August, the LJC shared a survey with its community through its meetup pages, essentially asking for opinions on the new versions of Java and various topics on the Java world's agenda. Inspired by such a useful initiative, we wanted to carry out a similar study with our community.

 

WHAT RESULTS DID WE AIM FOR?

In our survey, excluding the questions requesting information about the participants' profiles, we asked a total of 27 questions. With these 27 questions, we were fundamentally seeking answers to 4 main questions:

  • Are you adopting the new versions of Java?
  • Which Java distribution do you use?
  • Are you ready to pay for support?
  • What is your level of engagement with Enterprise Java?

With the answers we received, we believe we have taken an important snapshot of our community regarding these 4 fundamental questions.

 

ANALYSIS OF THE ANSWERS

Before moving on to the answers, let's take a look at the profile of the survey participants.

We see that our participants are predominantly software developers, while participants in team lead, project manager and engineer roles account for a combined share of 32.8%. We kept this distribution in mind while analyzing the answers.

 

CARGO CULT

The term cargo cult refers to trying to reproduce a process or system without understanding it, by repeating and imitating its most external, most superficial appearances. I must say that, while examining the survey answers, we occasionally encountered a cargo cult effect. For example, the first question we asked the participants was one of the questions where we saw this effect.

To the question "Which JDK are your applications currently running on?", 57.8% of the participants answered Oracle JDK. The reason we see a cargo cult effect in this answer is that 89.1% answered no to "Are you currently paying a vendor for JDK support?", while 57.8% answered no and 18.8% answered maybe to "Would you consider paying for JDK support in the future?". Among those who said categorically that they would not consider paying for JDK support in the future, the share of those whose role is not software developer is 37.83%.

Among those who stated that they might pay for JDK support in the future, the share of those whose role is not software developer is 50%. Within this group, the dominant role is engineer.

 

LET'S GIVE EACH OTHER TIME

To the question "Which Java SE version do you use in production for your applications?", 71.9% of our participants answered Java 1.8. It seems Java 8 is still the dominant version. While 6.3% stated that they use Java 9, we see that 7.8% use Java 11. The usage rate for Java 10, which sits between these two versions, remained at 1.6%. We think this can be explained by the limited number of innovations in Java 10 that directly affect developers. In conclusion, what these rates show is that the new Java versions have not yet been fully adopted. Another notable point in the answers is that the option Java 1.6 or older received 6.3%. None of those who gave this answer have the role of software developer.

As the reason for not moving to a newer version, the predominant answer (54.7%) was that the current version works just fine. A strong commitment to the "if it works, don't touch it" principle can be observed here; in addition, we can note that a considerable share cited the cost of upgrading or said that the new version does not bring any innovation they actually need.

 

HAVET

Software security is undoubtedly an important topic for everyone in the industry, from developers to team leads and project managers, but looking at the answers in our survey, our participants seem to have answered a hidden question like "is security important to you?" with havet (a Turkish blend of "yes" and "no").

To the question "What is your attitude toward adopting new JDK versions in production?", 53.1% answered staying with long-term support (LTS) releases. The next highest answer, at 32.8%, was deciding on a per-release basis, depending on the features.

These answers are the "yes" part of havet; the "no" part shows up in the answers to "How quickly do you apply critical JDK security updates?".

Yes, you are reading that correctly: 34.4% (the highest share) answered we don't apply them, and 22.72% of those who gave this answer currently have a role other than software developer. Apart from the 17.2% who answered within 1 week of the release date, the rest stated that they postpone critical security updates by at least 1 month. We also asked our participants "Where and when do you review your dependencies for known vulnerabilities?", and the answers we received appear to parallel the above.

Of course, every answer and its share tells us something, but the fact that the never and I don't know answers add up to 42.2% is thought-provoking, even taking into account that participants could choose more than one option for this question.

 

NO ONE BUT YOU FOR ME

The JVM was initially designed to support only the Java programming language, but over time more and more languages have been adapted or designed for the JVM. Today the JVM supports many languages such as Scala, Kotlin, Ceylon and Groovy. For this reason we asked "What is your main JVM language for your applications?", and 96.9% of the answers were Java.

As you can see in the chart above, the only JVM language mentioned other than Java was Kotlin, at 3.1%. It seems that, were it not for mobile development on the Android platform, we would have no dealings with a different JVM language at all. Another point we found surprising here is that none of those who answered Kotlin currently have the role of software developer; 50% of them are team leads and project managers.

 

I WON'T CALL IT A MODULE UNLESS IT'S A MAVEN MODULE

We mentioned that one of the fundamental questions we sought to answer in our survey was the extent to which the new versions of Java are being adopted. In this context, we asked "Do you use, or plan to use, Java modules in your Java applications?".

The we use them answer came in at 51.6%, yet only 6.3% had answered Java 9 to "Which Java SE version do you use in production for your applications?". We suspect that Java 9 modules were probably confused with Maven modules here. What makes this result even more striking for us is the high share of non-developer roles in both questions: while 50% of those who stated that they use Java 9 in production are software developers, among those who answered the modules question the share of those whose role is not software developer is 45.45%.

 

I'VE TIED YOUR GOLDEN HAIR TO MY CRAZY HEART AND IT WON'T COME UNDONE

It is not hard to guess that the Spring Framework enjoys widespread use in our community, but we must still admit that the degree of dominance revealed by our survey surprised us somewhat. 82.8% of our participants answered yes to "Do you use the Spring Framework?".

Among those who said yes to the question, 58.49% have the role of software developer, while among those who said no, 63.63% have the same role.

To trace our community's relationship with Enterprise Java, along with a possible cargo cult effect in this widespread use of Spring, we asked "Do you use Enterprise Java?" and added "Yes, through Spring or another framework" among the answer options.

As can be seen, in the answers we received, our community's relationship with Enterprise Java appears to be established predominantly through Spring or another framework. Of the participants who answered yes, directly, 45.45% have a role other than software developer.

 

NEITHER INSIDE TIME, NOR ALTOGETHER OUTSIDE IT

How engaged is our community with current and trending topics in the Java world? To find out, we asked how they reacted to the discussions around the javax namespace.

It appears that a considerable share of participants, 45.3%, were not aware of the topic. Among those who answered I'm not aware of the topic, 65.52% currently have the role of software developer. From the answers to the next question, it appears that another considerable share does not have a sound idea of the possible effects of the potential changes around the javax namespace.

Among those who answered I don't think so to this question, 68% have the role of software developer.

 

INTELLIJ, MAVEN, JENKINS, YOU ARE ONE OF A KIND IN THIS WORLD

We also asked our participants which IDE, build and CI tools, and code repository they use. Here are the answers we received.

Although our participants largely neither pay nor plan to pay for JDK support, the fact that 60.9% answered IntelliJ IDEA Ultimate (paid) to "Which IDE do you use?" tells us that they do not, and will not, hesitate to pay for the development environment they need. While we were not surprised that Maven dominated the answers to "Which build tool do you use for your applications?" and Jenkins dominated "Which CI server do you use?", we had to read the combined share of only 9.4% for GitHub (public) and GitLab (public) in "Which code repository do you use for your applications?" as an expression of our shortcomings when it comes to open source culture.

 

CONCLUSION

We are a large and dynamic community; we have strengths, but we also have weak sides that need to be developed. We believe this study has taken an important snapshot in that regard. In some of the answers we saw a cargo cult effect: we observed behaviors and preferences based on repetition and imitation, for which we doubt a valid reason could be offered if we asked why. We think the lack of a test-driven development process shows itself clearly in the security-related questions. It is also plain to see that we have a depth problem in the questions focused on the JVM and the new Java versions. Our relationship with Enterprise Java being mediated predominantly through frameworks can be included in this scope as well. The results whisper to us that we need to take a closer look at what is happening in the Java world and, finally, that we need to be more visible in the open source world.

 


by Hüseyin Akdogan at September 16, 2019 06:00 AM

MicroProfile 3.0 Support Comes to Helidon

by dmitrykornilov at September 14, 2019 04:37 PM

We are proud to announce a new version of Helidon: 1.3. The main feature of this release is MicroProfile 3.0 support, but it also includes additional features, bug fixes and performance improvements. Let's take a closer look.

MicroProfile 3.0

About a month ago we released Helidon 1.2 with MicroProfile 2.2 support. Since then, we have taken a step forward and added MicroProfile 3.0 support.

For those who don't know, MicroProfile is a set of cloud-native Java APIs. It's supported by most of the modern Java vendors, like Oracle, IBM, Red Hat, Payara and Tomitribe, which makes it a de-facto standard in this area. One of the goals of the Helidon project is to support the latest versions of MicroProfile. The Helidon MicroProfile implementation is called Helidon MP, and along with the reactive, non-blocking framework called Helidon SE it forms the core of Helidon.

MicroProfile 3.0 is a major release. It contains updated Metrics 2.0 with some backwards incompatible changes, HealthCheck 2.0 and Rest Client 1.3 with minor updates.

Although MicroProfile 3.0 is not backwards compatible with MicroProfile 2.2, we didn’t want to bring backwards incompatibility to Helidon. Helidon Version 1.3 supports both MicroProfile 2.2 and MicroProfile 3.0. Helidon MP applications can select the MicroProfile version by depending on one (and only one) of the following bundles.

For compatibility with MicroProfile 2.2:

<dependency>
    <groupId>io.helidon.microprofile.bundles</groupId>    
    <artifactId>helidon-microprofile-2.2</artifactId>
</dependency>

For compatibility with MicroProfile 3.0:

<dependency>
    <groupId>io.helidon.microprofile.bundles</groupId>
    <artifactId>helidon-microprofile-3.0</artifactId>
</dependency>

Backward compatibility with MicroProfile 2.2 implies that every existing Helidon application that depends on helidon-microprofile-2.2 will continue to run without any changes. New applications created from the latest archetypes in Helidon 1.3 will depend on helidon-microprofile-3.0.

Metrics 2.0 Support

As mentioned above, MicroProfile Metrics 2.0 introduces a number of new features as well as some backward incompatible changes. The following is a summary of the changes:

  • Existing counters have been limited to always be monotonic
  • A new metric called a concurrent gauge is now supported
  • Tags are now part of MetricID instead of Metadata
  • Metadata is now immutable
  • Minor changes to JSON format
  • Prometheus format is now OpenMetrics format (with a few small updates)

The reader is referred to https://github.com/eclipse/microprofile-metrics/releases/tag/2.0.1 for more information.

Note: There have been some disruptive signature changes in class MetricRegistry. Several getter methods now return maps whose keys are of type MetricID instead of String. Applications upgrading to the latest version of MicroProfile Metrics 2.0 should review these uses to ensure the correct type is passed and thus prevent metric lookup failures.
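
For illustration, a lookup now goes through MetricID; a minimal sketch (the registry variable and the counter name are assumptions):

// Metrics 2.0: registry maps are keyed by MetricID instead of String
SortedMap<MetricID, Counter> counters = registry.getCounters();
Counter requestCount = counters.get(new MetricID("requestCount"));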

Helidon SE applications that use metrics can also take advantage of the new features in Helidon 1.3. For example, MicroProfile Metrics 2.0 introduced the notion of concurrent gauges which are now also available in Helidon SE. To use any of these new features, Helidon SE applications can depend on:

<dependency>
    <groupId>io.helidon.metrics</groupId>
    <artifactId>helidon-metrics2</artifactId>
</dependency>

Existing Helidon SE applications can continue to build using the older helidon-metrics dependency.

HealthCheck 2.0 Support

HealthCheck 2.0 contains some breaking changes: the message body of the health check response was modified, and outcome and state were replaced by status. Also, readiness (/health/ready) and liveness (/health/live) endpoints were introduced for smoother integration with Kubernetes.

The original /health endpoint is not removed, so your old application will still work without any changes.

The new specification introduces two new annotations: @Liveness and @Readiness. In Helidon SE we introduced two corresponding methods, addLiveness and addReadiness, and deprecated the original add method.
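
As a minimal sketch of the new annotations on the MicroProfile side (the check name is an assumption), a liveness check could look like this:

@Liveness
@ApplicationScoped
public class AppLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // reported under the new /health/live endpoint
        return HealthCheckResponse.named("app-alive").up().build();
    }
}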

JPA and JTA are Production Ready

In earlier versions of Helidon we introduced an early access version of JPA and JTA integration. We received feedback from our users, fixed some issues and improved performance. In version 1.3 we are moving JPA and JTA support from Early Access to Production Ready.

We also created a guide helping users to get familiar with this feature.

Hibernate Support

With 1.3.0 you can now use Hibernate as the JPA provider, or you can continue using EclipseLink. It’s up to you. The difference is one <dependency> element in your pom.xml.

For EclipseLink:

<dependency>
    <groupId>io.helidon.integrations.cdi</groupId>
    <artifactId>helidon-integrations-cdi-eclipselink</artifactId>
    <scope>runtime</scope>
</dependency>

For Hibernate:

<dependency>
    <groupId>io.helidon.integrations.cdi</groupId>
    <artifactId>helidon-integrations-cdi-hibernate</artifactId>
    <scope>runtime</scope>
</dependency>

As with our EclipseLink support, Helidon’s Hibernate JPA integration features full Java EE-mode compatibility, including support for EJB-free extended persistence contexts, JTA transactions and bean validation. It works just like the application servers you may be used to, but inside Helidon’s lightweight MicroProfile environment.
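
As with any JPA deployment, the persistence unit is described in META-INF/persistence.xml; a minimal sketch (the unit and data source names are assumptions) could look like this:

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.2">
    <persistence-unit name="bookPU" transaction-type="JTA">
        <jta-data-source>bookDataSource</jta-data-source>
    </persistence-unit>
</persistence>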

GraalVM Improvements

Supporting GraalVM is one of our goals. In each release we continuously improve GraalVM support in Helidon SE. This version brings support for GraalVM version 19.2.0. Also, you can now use the Jersey client in a Helidon SE application and build a native image for it.

Example code:

private void outbound(ServerRequest request, ServerResponse response) {
    // and reactive jersey client call
    webTarget.request()
        .rx()
        .get(String.class)
        .thenAccept(response::send)
        .exceptionally(throwable -> {
            // process exception
            response.status(Http.Status.INTERNAL_SERVER_ERROR_500);
            response.send("Failed with: " + throwable);
            return null;
        });
}

We also added a guide explaining how to build a GraalVM native image from your Helidon SE application. Check it out.

New Guides

To simplify the Helidon adoption process, we added plenty of new guides explaining how to use various Helidon features.

Getting Started

Basics

Persistence

Build and Deploy

Tutorials

Other features

This release includes many bug fixes, performance improvements and minor updates. You can find more information about the changes in the release notes.

Helidon on OOW/CodeOne 2019

Next week (Sep 16, 2019) Oracle Open World and CodeOne open their doors for all attendees. Helidon is well covered there. There are Helidon-related talks from the Helidon team, where we will introduce new features like the Helidon DB Client coming soon to Helidon, as well as talks from our users covering different Helidon use cases. Here is the full list:

  • Non-blocking Database Access in Helidon SE [DEV5365]
    Monday, September 16, 09:00 AM — 09:45 AM
  • Migrating a Single Monolithic Application to Microservices [DEV5112]
    Thursday, September 19, 12:15 PM — 01:00 PM
  • Hands on Lab: Building Microservices with Helidon
    Monday, September 16, 05:00 PM — 07:00 PM
  • Building Cloud Native Applications with Helidon [CON5124]
    Wednesday, September 18, 09:00 AM — 09:45 AM
  • Helidon Flies Faster on GraalVM [DEV5356]
    September 16, 01:30 PM — 02:15 PM
  • Helidon MicroProfile: Managing Persistence with JPA [DEV5376]
    Thursday, September 19, 09:00 AM — 09:45 AM

See you at CodeOne!


by dmitrykornilov at September 14, 2019 04:37 PM

JakartaONE: Live Coding with Jakarta EE and MicroProfile #slideless

by admin at September 13, 2019 04:08 AM

In this slideless JakartaONE conference session I used Open Liberty 19.0.8 and Payara Full servers. Open Liberty 19.0.6 passed the Jakarta EE 8 TCK (see results) and therefore is Jakarta EE 8 compatible.

...this is probably the very first live coding demo which uses a certified Jakarta EE 8 runtime:

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


by admin at September 13, 2019 04:08 AM

Java EE 8 to Jakarta EE 8 Migration

by admin at September 12, 2019 02:28 PM

To migrate a Java EE 8 project to Jakarta EE 8, replace the following dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

...with Jakarta EE 8 API

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency> 

The pom.xml of the resulting ThinWAR:

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.airhacks</groupId>
    <artifactId>jakarta</artifactId>
    <version>0.0.1</version>
    <packaging>war</packaging>
    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>         
    </dependencies>
    <build>
        <finalName>jakarta</finalName>
    </build>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
    </properties>
</project>    

...can be conveniently built with wad.sh and deployed to all Java EE 8 and Jakarta EE 8 runtimes.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at September 12, 2019 02:28 PM

#HOWTO: Bootstrap your first Jakarta EE 8 application

by rieckpil at September 11, 2019 07:05 AM

As Jakarta EE 8 was finally released on the 10th of September 2019, we can start using it. This is the first release of Jakarta EE and a big accomplishment, as everything is now hosted at the Eclipse Foundation. The Eclipse Foundation hosted an online conference (JakartaOne) on release day with a lot of interesting talks about the future of Jakarta EE. Stay tuned, as the talks will be published on YouTube soon! For now, there will be no new features compared to Java EE 8, but the plan is to have new features in Jakarta EE 9. With this blog post, I'll show you how to bootstrap your first Jakarta EE 8 application using Java 11 with either Maven or Gradle.

Use Maven to bootstrap your Jakarta EE application

To start with your first Maven Jakarta EE 8 project, your pom.xml now needs the following jakartaee-api dependency:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.rieckpil.blog</groupId>
    <artifactId>bootstrap-jakarta-ee-8-application</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>
    <dependencies>
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <finalName>bootstrap-jakarta-ee-8-application</finalName>
    </build>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>
</project>

During the online conference, Adam Bien also announced plans for a new Maven archetype to bootstrap Jakarta EE 8 applications with a single command.

Use Gradle to bootstrap your Jakarta EE application

If you use Gradle to build your application, you can start with the following build.gradle:

apply plugin: 'war'

group = 'de.rieckpil.blog'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}
dependencies {
    providedCompile 'jakarta.platform:jakarta.jakartaee-api:8.0.0'
}

compileJava {
    targetCompatibility = '11'
    sourceCompatibility = '11'
}

war{
    archiveName 'bootstrap-jakarta-ee-8-application.war'
}

Sample Jakarta EE 8 application

As there are no new features in Jakarta EE 8 yet, you can make use of everything you know from Java EE 8. For a sample project, I'll create a JAX-RS application which fetches data from an external service and exposes it:

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {
}

@Path("users")
@ApplicationScoped
public class UserResource {

    @Inject
    private UserProvider userProvider;

    @GET
    public Response getAllUsers() {
        return Response.ok(userProvider.getAllUsers()).build();
    }
}

// a bean-defining annotation ensures CDI discovers this class in annotated mode
@ApplicationScoped
public class UserProvider {

    private WebTarget webTarget;
    private Client client;

    @PostConstruct
    public void init() {
        this.client = ClientBuilder
                .newBuilder()
                .readTimeout(2, TimeUnit.SECONDS)
                .connectTimeout(2, TimeUnit.SECONDS)
                .build();

        this.webTarget = this.client.target("https://jsonplaceholder.typicode.com/users");
    }

    public JsonArray getAllUsers() {
        return this.webTarget
                .request()
                .accept(MediaType.APPLICATION_JSON)
                .get()
                .readEntity(JsonArray.class);
    }

    @PreDestroy
    public void tearDown() {
        this.client.close();
    }
}

For now, everything is still in the javax.* namespace as there are no modifications, but once the specifications evolve, they will move to jakarta.*.

Deploy your application

At the time of writing there are already three application servers officially Jakarta EE 8 Full Platform compatible: GlassFish 5.1, Open Liberty 19.0.0.6 and WildFly 17.0.1.Final. You can get an overview of all compatible products here.

For a quick example, I’ll deploy the sample application to WildFly 17.0.1.Final running inside a Docker container:

FROM jboss/wildfly:17.0.1.Final

# Gradle
# COPY build/libs/bootstrap-jakarta-ee-8-application.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

# Maven
COPY target/bootstrap-jakarta-ee-8-application.war /opt/jboss/wildfly/standalone/deployments/ROOT.war

To run the Jakarta EE 8 application on WildFly, you don't have to configure anything in addition; it just works 😉
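
Assuming the Dockerfile above, building the image and running the container could look like this (the image tag is my own choice; 8080 is WildFly's default HTTP port):

docker build -t bootstrap-jakarta-ee-8-application .
docker run -p 8080:8080 bootstrap-jakarta-ee-8-application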

So if you are already familiar with Java EE 8, it's easy for you to adopt Jakarta EE 8. As we now have the Eclipse Foundation Specification Process (EFSP), we should see new features and platform releases way more often.

For further information about Jakarta EE, have a look at the official website.

The code for this example is available on GitHub.

Have fun using Jakarta EE,

Phil

The post #HOWTO: Bootstrap your first Jakarta EE 8 application appeared first on rieckpil.


by rieckpil at September 11, 2019 07:05 AM

#WHATIS?: Eclipse MicroProfile Fault Tolerance

by rieckpil at September 11, 2019 04:43 AM

With the current trend to build distributed systems, it is increasingly important to build fault-tolerant services. Fault tolerance is about using different strategies to handle failures in a distributed system. Moreover, services should be resilient and able to keep operating if a failure occurs in an external service, rather than cascading the failure and bringing the system down. There is a set of common patterns to achieve fault tolerance within your system. These patterns are all available within the MicroProfile Fault Tolerance specification.

Learn more about the MicroProfile Fault Tolerance specification, its annotations, and how to use it in this blog post. This post covers all available interceptor bindings as defined in the specification:

  • Fallback
  • Timeout
  • Retry
  • CircuitBreaker
  • Asynchronous
  • Bulkhead

Specification profile: MicroProfile Fault Tolerance

  • Current version: 2.0 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a set of strategies to build resilient and fault-tolerant services

Provide a fallback method

First, let’s cover the @Fallback interceptor binding of the MicroProfile Fault Tolerance specification. With this annotation, you can provide a fallback behavior of your method in case of an exception. Assume your service fetches data from other microservices and the call might fail due to network issues or downtime of the target. In case your service could recover from the failure and you can provide meaningful fallback behavior for your domain, the @Fallback annotation saves you.

A good example might be the checkout process of your webshop where you rely on a third-party service for handling e.g. credit card payments. If this service fails, you might fall back to a default payment provider and recover gracefully from the failure.

For a simple example, I’ll demonstrate it with a JAX-RS client request to a placeholder REST API and provide a fallback method:

@Fallback(fallbackMethod = "getDefaultPost")
public JsonObject getPostById(Long id) {
    return this.webTarget
        .path(String.valueOf(id))
        .request()
        .accept(MediaType.APPLICATION_JSON)
        .get(JsonObject.class);
}

public JsonObject getDefaultPost(Long id) {
    return Json.createObjectBuilder()
        .add("comment", "Lorem ipsum")
        .add("postId", id)
        .build();
}

With the @Fallback annotation you can specify the method name of the fallback method which must share the same response type and method arguments as the annotated method.

In addition, you can also specify a dedicated class to handle the fallback. This class is required to implement the FallbackHandler<T> interface where T is the response type of the targeted method:

@Fallback(PlaceHolderApiFallback.class)
public JsonObject getPostById(Long id) {
    return this.webTarget
        .path(String.valueOf(id))
        .request()
        .accept(MediaType.APPLICATION_JSON)
        .get(JsonObject.class);
}

public class PlaceHolderApiFallback implements FallbackHandler<JsonObject> {

    @Override
    public JsonObject handle(ExecutionContext context) {
        return Json.createObjectBuilder()
                .add("comment", "Lorem ipsum")
                .add("postId", Long.valueOf(context.getParameters()[0].toString()))
                .build();
    }
}

As you’ll see it in the upcoming chapters, the @Fallback annotation can be used in combination with other MicroProfile Fault Tolerance interceptor bindings.

Add timeouts to limit the duration of a method execution

For some operations in your system, you might have a strict response time target. If you make use of the JAX-RS client or the client of MicroProfile Rest Client you can specify read and connect timeouts to avoid long-running requests. But what about use cases where you can’t declare timeouts easily? The MicroProfile Fault Tolerance specification defines the @Timeout annotation for such problems.

With this interceptor binding, you can specify the maximum duration of a method. If the computation time within the method exceeds the limit, a TimeoutException is thrown.

@Timeout(4000)
@Fallback(fallbackMethod = "getFallbackData")
public String getDataFromLongRunningTask() throws InterruptedException {
    Thread.sleep(4500);
    return "duke";
}

The default unit is milliseconds, but you can configure a different ChronoUnit:

@Timeout(value = 4, unit = ChronoUnit.SECONDS)
@Fallback(fallbackMethod = "getFallbackData")
public String getDataFromLongRunningTask() throws InterruptedException {
    Thread.sleep(4500);
    return "duke";
}

Define retry policies for method calls

A valid fallback behavior for an external system call might be simply to retry it. Retrying immediately might not always be the best solution, so you may want to add a delay before the next retry and maybe some randomness. We can configure such a requirement with the @Retry annotation:

@Retry(maxDuration = 5000, maxRetries = 3, delay = 500, jitter = 200)
@Fallback(fallbackMethod = "getFallbackData")
public String accessFlakyService() {

    System.out.println("Trying to access flaky service at " + LocalTime.now());

    if (ThreadLocalRandom.current().nextLong(1000) < 50) {
        return "flaky duke";
    } else {
        throw new RuntimeException("Flaky service not accessible");
    }
}

In this example, we would try to execute the method three times with a delay of 500 milliseconds and 200 milliseconds of randomness (called jitter). The effective delay is the following: [delay – jitter, delay + jitter] (in our example 300 to 700 milliseconds).

Furthermore, endless retrying might also be counter-productive.  That’s why we can specify the maxDuration which is quite similar to the @Timeout annotation above. If the whole retrying takes more than 5 seconds, it will fail with a TimeoutException.

Add a Circuit Breaker around a method invocation to fail fast

Once an external system you call is down or returning 503 because it is currently unable to process further requests, you might not want to access it again for a given timeframe. This may help the other system recover, and your methods can fail fast, as you already know the expected response from past requests. For this scenario, the Circuit Breaker pattern comes into play.

The Circuit Breaker offers a way to fail fast by directly failing the method execution to prevent further overloading of the target system and indefinite wait or timeouts. With MicroProfile Fault Tolerance we have an annotation to achieve this with ease: @CircuitBreaker

There are three different states a Circuit Breaker can have: closed, opened, half-open.

In the closed state, the operation is executed as expected. If a failure occurs while e.g. calling an external service, the Circuit Breaker records such an event. If a particular threshold of failures is met, it will switch to the open state.

Once the Circuit Breaker enters the open state, further calls will fail immediately.  After a given delay the circuit enters the half-open state. Within the half-open state, trial executions will happen. Once such a trial execution fails, the circuit transitions to the open state again. When a predefined number of these trial executions succeed, the circuit enters the original closed state.

Let’s have a look at the following example:

@CircuitBreaker(successThreshold = 10, requestVolumeThreshold = 5, failureRatio = 0.5, delay = 500)
@Fallback(fallbackMethod = "getFallbackData")
public String getRandomData() {
    if (ThreadLocalRandom.current().nextLong(1000) < 300) {
        return "random duke";
    } else {
        throw new RuntimeException("Random data not available");
    }
}

In the example above I define a Circuit Breaker which enters the open state once 50% (failureRatio=0.5) of five consecutive executions (requestVolumeThreshold=5) fail. After a delay of 500 milliseconds in the open state,  the circuit transitions to half-open. Once ten trial executions (successThreshold=10) in the half-open state succeed, the circuit will be back in the closed state.

Execute a method asynchronously with MicroProfile Fault Tolerance

Some use cases of your system might not require synchronous and in-order execution of different tasks. For instance, you can fetch data for a customer (purchased orders, contact information, invoices) from different services in parallel.  The MicroProfile Fault Tolerance specification offers a convenient way for achieving such asynchronous method executions: @Asynchronous:

@Asynchronous
public Future<String> getConcurrentServiceData(String name) {
    System.out.println(name + " is accessing the concurrent service");
    return CompletableFuture.completedFuture("concurrent duke");
}

With this annotation, the execution will happen on a separate thread, and the method has to return either a Future or a CompletionStage.
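
A CompletionStage-returning variant could look like this (a sketch; this method is not part of the original example):

@Asynchronous
public CompletionStage<String> getConcurrentServiceDataStage(String name) {
    // the container executes this method on a separate thread
    return CompletableFuture.completedFuture("concurrent duke");
}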

Apply Bulkheads to limit the number of concurrent calls

The Bulkhead pattern is a way of isolating failures in your system while the rest can still function. It’s named after the sectioned parts (bulkheads) of a ship. If one bulkhead of a ship is damaged and filled with water, the other bulkheads aren’t affected, which prevents the ship from sinking.

Imagine a scenario where all your threads are occupied for a request to a (slow-responding) external system and your application can’t process other tasks. To prevent such a scenario, we can apply the @Bulkhead annotation and limit concurrent calls:

@Bulkhead(5)
@Asynchronous
public Future<String> getConcurrentServiceData(String name) throws InterruptedException {
    Thread.sleep(1000);
    System.out.println(name + " is accessing the concurrent service");
    return CompletableFuture.completedFuture("concurrent duke");
}

In this example, only five concurrent calls can enter this method, and further callers have to wait. If this annotation is used together with @Asynchronous, as in the example above, it means thread isolation. In addition, and only for asynchronous methods, we can specify the length of the waiting queue with the attribute waitingTaskQueue. For non-async methods, the specification defines semaphore-based isolation.
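
As a sketch, an asynchronous bulkhead with an explicit waiting queue could be declared like this (the method name and sizes are assumptions):

// at most 5 concurrent executions, up to 10 additional tasks queued
@Bulkhead(value = 5, waitingTaskQueue = 10)
@Asynchronous
public Future<String> getQueuedServiceData(String name) {
    return CompletableFuture.completedFuture("queued duke");
}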

MicroProfile Fault Tolerance integration with MicroProfile Config

In addition, the MicroProfile Fault Tolerance specification provides tight integration with the MicroProfile Config specification. You can configure every attribute of the different interceptor bindings with an external config source like the microprofile-config.properties file.

The pattern for external configuration is the following: <classname>/<methodname>/<annotation>/<parameter>:

de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/maxRetries=10
de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/delay=300
de.rieckpil.blog.RandomDataProvider/accessFlakyService/Retry/maxDuration=5000

YouTube video for using MicroProfile Fault Tolerance 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Fault Tolerance in action:

coming soon

You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile Fault Tolerance,

Phil

The post #WHATIS?: Eclipse MicroProfile Fault Tolerance appeared first on rieckpil.


by rieckpil at September 11, 2019 04:43 AM

How to pack Angular 8 applications on regular war files

September 11, 2019 12:00 AM

Maven

From time to time it is necessary to distribute SPA applications using war files as containers. In my experience this is necessary when:

  • You don't have control over deployment infrastructure
  • You're dealing with rigid deployment standards
  • IT people are reluctant to publish a plain old web server

Anyway, as described in Oracle's documentation, one of the benefits of using war files is the possibility of including static (HTML/JS/CSS) files in the deployment, hence it is safe to assume that you could distribute any SPA application using a war file as a wrapper (with special considerations).

Creating a POC with Angular 8 and Java War

To demonstrate this I will create a project that:

  1. Is compatible with the big three Java IDEs (NetBeans, IntelliJ, Eclipse) and VSCode
  2. Allows you to use the IDEs as JavaScript development IDEs
  3. Allows you to create a SPA modern application (With all npm, ng, cli stuff)
  4. Allows you to combine Java(Maven) and JavaScript(Webpack) build systems
  5. Allows you to distribute a minified and ready for production project

Bootstrapping a simple Java web project

To bootstrap the Java project, you could use the plain old maven-archetype-webapp as a basis:

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DarchetypeVersion=1.4

The interactive shell will ask you for your project characteristics, including groupId, artifactId (project name) and base package.

Java Bootstrap

In the end you should have the following structure as a result:

demo-angular-8$ tree
.
├── pom.xml
└── src
    └── main
        └── webapp
            ├── WEB-INF
            │   └── web.xml
            └── index.jsp

4 directories, 3 files

Now you should be able to open your project in any IDE. By default the pom.xml will include locked-down versions for the Maven plugins; you can safely get rid of those, since we won't customize the entire Maven lifecycle, just a couple of hooks.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.nabenik</groupId>
  <artifactId>demo-angular-8</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>

  <name>demo-angular-8 Maven Webapp</name>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
  </properties>
</project>

Besides that, index.jsp is not necessary; just delete it.

Bootstrapping a simple Angular JS project

As an opinionated approach, I suggest isolating the Angular project in its own directory (src/main/frontend). In the past, with simple frameworks (AngularJS, Knockout, Ember), it was possible to bootstrap the entire project with a couple of includes in the index.html file; however, nowadays most modern front end projects use some kind of bundler/linter to enable modern (>=ES6) features like modules, and in the case of Angular, Webpack is used under the hood for this.

For this guide I assume that you have already installed all the Angular CLI tools, hence we can go inside our source code structure and bootstrap the Angular project.

demo-angular-8$ cd src/main/
demo-angular-8/src/main$ ng new frontend

This will bootstrap a vanilla Angular project, and in fact you could consider the src/main/frontend folder as a separate root (you could also open it directly from VSCode). The final structure will look like this:

JS Structure

As a first POC I started the application directly from the CLI using IntelliJ IDEA and ng serve --open; all worked as expected.

Angular run

Invoking Webpack from Maven

One of the useful plugins for this task is frontend-maven-plugin which allows you to:

  1. Download common JS package managers (npm, cnpm, bower, yarn)
  2. Invoke JS build systems and tests (grunt, gulp, webpack or npm itself, karma)

By default, Angular projects come with hooks from npm to ng, but we need to add a hook in package.json to create a production-quality build (buildProduction). Please double check the base-href parameter, since I'm using the default context root from Java conventions (same as the project name):

...
"scripts": {
    "ng": "ng",
    "start": "ng serve",
    "build": "ng build",
    "buildProduction": "ng build --prod --base-href /demo-angular-8/",
    "test": "ng test",
    "lint": "ng lint",
    "e2e": "ng e2e"
  }
...

To test this build we can execute npm run buildProduction at the web project's root (src/main/frontend); the output should look like this:

NPM Hook

Finally, it is necessary to invoke our new target with Maven, hence our configuration should:

  1. Install NodeJS (and NPM)
  2. Install JS dependencies
  3. Invoke our new hook
  4. Copy the result to our final distributable war

To achieve this, the following configuration should be enough:

<build>
<finalName>demo-angular-8</finalName>
    <plugins>
    <plugin>
            <groupId>com.github.eirslett</groupId>
            <artifactId>frontend-maven-plugin</artifactId>
            <version>1.6</version>

            <configuration>
                <workingDirectory>src/main/frontend</workingDirectory>
            </configuration>

            <executions>

                <execution>
                    <id>install-node-and-npm</id>
                    <goals>
                        <goal>install-node-and-npm</goal>
                    </goals>
                    <configuration>
                        <nodeVersion>v10.16.1</nodeVersion>
                    </configuration>
                </execution>

                <execution>
                    <id>npm install</id>
                    <goals>
                        <goal>npm</goal>
                    </goals>
                    <configuration>
                        <arguments>install</arguments>
                    </configuration>
                </execution>
                <execution>
                    <id>npm build</id>
                    <goals>
                        <goal>npm</goal>
                    </goals>
                    <configuration>
                        <arguments>run buildProduction</arguments>
                    </configuration>
                    <phase>generate-resources</phase>
                </execution>
            </executions>
    </plugin>
    <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.2.2</version>
        <configuration>
            <failOnMissingWebXml>false</failOnMissingWebXml>

            <!-- Add frontend folder to war package -->
            <webResources>
                <resource>
                    <directory>src/main/frontend/dist/frontend</directory>
                </resource>
            </webResources>

        </configuration>
    </plugin>
    </plugins>
</build>

And that's it! Once you execute mvn clean package you will obtain a portable war file that will run on any Servlet container runtime. For instance, I tested it with Payara Full 5, and it worked as expected.

Payara


September 11, 2019 12:00 AM

Jersey 2.29.1 has been released!

by Jan at September 10, 2019 09:51 PM

What a busy summer! Jersey 2.29 has been released in June and Jakarta EE 8 release was the next goal to be achieved before Oracle Code One. It has been a lot of work. Jakarta EE 8 contains almost 30 … Continue reading

by Jan at September 10, 2019 09:51 PM

Jakarta EE 8 Released: The New Era of Java EE

by Rhuan Henrique Rocha at September 10, 2019 01:29 PM


Java EE has a new home and a new brand, and it is being released today, September 10th. Java EE was migrated from Oracle to the Eclipse Foundation and is now Jakarta EE, under the Eclipse Enterprise for Java (EE4J) project. Today the Eclipse Foundation is releasing Jakarta EE 8, and in this post we'll see what that means.

Java EE was a very strong project, highly used in many kinds of enterprise Java applications and in many big frameworks like Spring and Struts. Some developers have questioned its features and its process for evolving, but looking at its high usage and time in the market, its success is unquestionable. But the enterprise world doesn't stop, and new challenges are emerging all the time. The speed of change has grown more and more in the enterprise world, because companies must be ever better prepared to answer market challenges. Thus, technologies should follow these changes in the enterprise world and adapt themselves to provide better solutions.

With that in mind, the IT world promoted many changes and solutions too, to be able to provide a better answer to the enterprise world. One of these solutions was Cloud Computing. Summarizing the concept in a few words, Cloud Computing is a solution to provide computing resources as a service (IaaS, PaaS, SaaS). This allows you to use only the resources you need and to scale up and down when needed.

Java EE is a fantastic project, but it was created in 1999 under the J2EE name; it is 20 years old, and its process for evolving is not appropriate for the new enterprise scenario. So Java EE needed to change too.

Jakarta EE Goals


The Java ecosystem has a new focus, putting its power at the service of cloud computing, and Jakarta EE is a key to that.

Jakarta EE's goal is to accelerate business application development for Cloud Computing (cloud native applications), working on the basis of specifications developed by many vendors. The project starts from Java EE 8, whose specifications, TCKs and Reference Implementations (RIs) were migrated from Oracle to the Eclipse Foundation. But to evolve these specifications to address Cloud Computing, we cannot work with the same process used for the Java EE project, because it is too slow for current enterprise challenges. Thus, the Eclipse Foundation's first action was to change the process by which Jakarta EE evolves.

Jakarta EE 8 has the same set of specifications as Java EE 8, without changes to its features. The only change is the new process used to evolve these specifications. With this, Jakarta EE 8 is a milestone in enterprise Java history, because it places these specifications in a new process designed to push them toward a cloud native application approach.

Jakarta EE Specification Process

The Jakarta EE Specification Process (JESP) is the new process that will be used by the Jakarta EE Working Group to evolve Jakarta EE. The JESP replaces the JCP process used previously for Java EE.

The JESP is based on the Eclipse Foundation Specification Process (EFSP) with some changes, which are listed at https://jakarta.ee/about/jesp/:

  • Any modification to or revision of this Jakarta EE Specification Process, including the adoption of a new version of the EFSP, must be approved by a Super-majority of the Specification Committee, including a Super-majority of the Strategic Members of the Jakarta EE Working Group, in addition to any other ballot requirements set forth in the EFSP.
  • All specification committee approval ballot periods will have the minimum duration as outlined below (notwithstanding the exception process defined by the EFSP, these periods may not be shortened)
    • Creation Review: 7 calendar days;
    • Plan Review: 7 calendar days;
    • Progress Review: 14 calendar days;
    • Release Review: 14 calendar days;
    • Service Release Review: 14 calendar days; and
    • JESP Update: 7 calendar days.
  • A ballot will be declared invalid and concluded immediately in the event that the Specification Team withdraws from the corresponding review.
  • Specification Projects must engage in at least one Progress or Release Review per year while in active development.

The goal of the JESP is to be as lightweight a process as possible, with a design closer to open source development and with code-first development in mind. With this, the process promotes a new culture focused on experimentation, evolving these specifications based on the experience gained from experimenting.

Jakarta EE 9

Jakarta EE 8 focuses on updating the process by which the platform evolves, and the first feature updates will come in Jakarta EE 9. The main update expected in Jakarta EE 9 is the birth of the Jakarta NoSQL specification.

Jakarta NoSQL is a specification to promote easy integration between Java applications and NoSQL databases, providing a standard solution to connect Java applications to NoSQL databases with a high-level abstraction. It is fantastic, and it is a big step in bringing the Java platform closer to the Cloud Native approach, because NoSQL databases are widely used in Cloud environments and their adoption is expected to grow. Jakarta NoSQL is based on JNoSQL, which will be its reference implementation.

Another expected update in Jakarta EE concerns the namespace. Basically, Oracle gave the Java EE project to the Eclipse Foundation, but the trademark still belongs to Oracle. This means the Eclipse Foundation cannot use java or javax in project names or namespaces for new features coming to Jakarta EE. Thus, the community is discussing the transition from the old name to the jakarta.* name. You can see this thread here.

Conclusion

Jakarta EE opens a new era in the Java ecosystem, taking Java EE, which was and is a very important project, and putting it to work under a very good open source process, open to improvements. Although this Jakarta EE version comes without feature updates, it opens the gate to the new features coming to Jakarta EE in the future. So we'll soon see many specification-based solutions for working in the cloud, in the next versions of Jakarta EE.


by Rhuan Henrique Rocha at September 10, 2019 01:29 PM

Welcome to the Future of Cloud Native Java

by Mike Milinkovich at September 10, 2019 11:00 AM

Today, with the release of Jakarta EE 8, we’ve entered a new era in Java innovation.

Under an open, vendor-neutral process, a diverse community of the world’s leading Java organizations, hundreds of dedicated developers, and Eclipse Foundation staff have delivered the Jakarta EE 8 Full Platform, Web Profiles, and related TCKs, as well as Eclipse GlassFish 5.1 certified as a Jakarta EE 8 compatible implementation.

To say this is a big deal is an understatement. With 18 different member organizations, over 160 new committers, 43 projects, and a codebase of over 61 million lines of code in 129 Git repositories, this was truly a massive undertaking, even by the Eclipse community's standards. There are far too many people to thank individually here, so I'll say many thanks to everyone in the Jakarta EE community who played a role in achieving this industry milestone.

Here are some of the reasons I’m so excited about this release.

For more than two decades, Java EE has been the platform of choice across industries for developing and running enterprise applications. According to IDC, 90 percent of Fortune 500 companies rely on Java for mission-critical workloads. Jakarta EE 8 gives software vendors, more than 10 million Java developers, and thousands of enterprises the foundation they need to migrate Java EE applications and workloads to a standards-based, vendor-neutral, open source enterprise Java stack.

As a result of the tireless efforts of the Jakarta EE Working Group’s Specification Committee, specification development follows the Jakarta EE Specification Process and Eclipse Development Process, which are open, community-driven successors to the Java Community Process (JCP) for Java EE. This makes for a fully open, collaborative approach to generating specifications, with every decision made by the community — collectively. Combined with open source TCKs and an open process of self-certification, Jakarta EE significantly lowers the barriers to entry and participation for independent implementations.

The Jakarta EE 8 specifications are fully compatible with Java EE 8 specifications and include the same APIs and Javadoc using the same programming model developers have been using for years. The Jakarta EE 8 TCKs are based on and fully compatible with Java EE 8 TCKs. That means enterprise customers will be able to migrate to Jakarta EE 8 without any changes to Java EE 8 applications.

In addition to GlassFish 5.1 (which you can download here), IBM’s Open Liberty server runtime has also been certified as a Jakarta EE 8 compatible implementation. All of the vendors in the Jakarta EE Working Group plan to certify that their Java EE 8 implementations are compatible with Jakarta EE 8.

 All of this represents an unprecedented opportunity for Java stakeholders to participate in advancing Jakarta EE to meet the modern enterprise’s need for cloud-based applications that resolve key business challenges. The community now has an open source baseline that enables the migration of proven Java technologies to a world of containers, microservices, Kubernetes, service mesh, and other cloud native technologies that have been adopted by enterprises over the last few years.

As part of the call to action, we’re actively seeking new members for the Jakarta EE Working Group. I encourage everyone to explore the benefits and advantages of membership. If Java is important to your business, and you want to ensure the innovation, growth, and sustainability of Jakarta EE within a well-governed, vendor-neutral ecosystem that benefits everyone, now is the time to get involved.

Also, if you’re interested in learning more about our community’s perspective on what cloud native Java is, why it matters so much to many enterprises, and where Jakarta EE technologies are headed, download our new free eBook, Fulfilling the Vision for Open Source, Cloud Native Java. Thank you to Adam Bien, Sebastian Daschner, Josh Juneau, Mark Little, and Reza Rahman for contributing their insights and expertise to the eBook.

Finally, if you’ll be at Oracle Code One at the Moscone Center in San Francisco next week, be sure to stop by booth #3228, where the Eclipse community will be showcasing Jakarta EE 8, GlassFish 5.1, Eclipse MicroProfile, Eclipse Che, and more of our portfolio of cloud native Java open source projects.

 


by Mike Milinkovich at September 10, 2019 11:00 AM

Jakarta EE 8 Specifications Released by The Eclipse Foundation, Payara Platform Compatibility Coming Soon

by Debbie Hoffman at September 10, 2019 11:00 AM

The Jakarta EE 8 Full Platform, Web Profile specifications and related TCKs have been officially released today (September 10th, 2019). This release completes the transition of Java EE to an open and vendor-neutral process and provides a foundation for migrating mission-critical Java EE applications to a standard enterprise Java stack for a cloud native world. 


by Debbie Hoffman at September 10, 2019 11:00 AM

Update for Jakarta EE community: September 2019

by Tanja Obradovic at September 09, 2019 03:12 PM

We hope you’re enjoying the Jakarta EE monthly email update, which seeks to highlight news from various committee meetings related to this platform. There’s a lot happening in the Jakarta EE ecosystem so if you want to get richer insight into the work that has been invested in Jakarta EE so far and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in August: 

EclipseCon Europe 2019: Register for Community Day 

We’re gearing up for EclipseCon Europe 2019! If you’ve already booked your ticket, make sure to sign up for Community Day happening on October 21; this day is jam-packed with peer-to-peer interaction and community-organized meetings that are ideal for Eclipse Working Groups, Eclipse projects, and similar groups that form the Eclipse community. As always, Community Day is accompanied by an equally interesting Community Evening, where like-minded attendees can share ideas, experiences and have fun! 

That said, in order to make this event a success, we need your help. What would you like Community Day & Evening to be all about? Check out this wiki, give us your suggestions, and let us know if you plan to attend by signing up at the bottom of the wiki. Also, make sure to go over what we did last year. And don’t forget to register for Community Day and Evening! 

EclipseCon Europe will take place in Ludwigsburg, Germany on October 21 - 24, 2019. 

JakartaOne Livestream: There’s still time to register!

JakartaOne Livestream, taking place on September 10, is the fall virtual conference spanning multiple time zones. Plus, the date coincides with the highly anticipated Jakarta EE 8 release so make sure to save the date; you’re in for a treat! 

We hope you’ll attend this all-day virtual conference as it unfolds; this way, you get the chance to interact with renowned speakers, participate in interesting interactions and have all your questions answered during the interactive sessions. More than 500 people have already signed up to participate in JakartaOne Livestream so register now to secure your spot! 

Once you’ve registered, you will have the opportunity to post questions and/or comments for the talks you’re interested in.  We encourage all participants to make the most out of this virtual event by sharing your questions and chiming in with your suggestions/comments! 

No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more. Check out the schedule here

Jakarta EE 8 release

The moment we've all been waiting for is almost upon us. The expected due date of Jakarta EE 8 is September 10 and we’re in the final stages of preparation for specifications. Eclipse GlassFish 5.1, as well as Open Liberty 19.0.0.6, open source Jakarta EE compatible implementations, are expected to be released on the same day, and other compatible implementations are expected to follow suit. 

Keep an eye out for the Jakarta EE 8 page, which will include all the necessary information and updates related to the Jakarta EE 8 release, including links to specifications, compatible products, Eclipse Cloud Native Java eBook and more.  

If you’d like to learn more about cloud native Java and Jakarta EE, the Eclipse Foundation will be at Oracle Code One, so come armed with questions. Stop by our booth -number 3228- to say hi, ask questions or chat with our experts. Here’s who you can expect to see in our booth for Oracle Code One:

  • A lot of community participation at the booth; after all, this is an open source, community-driven project!

  • Pods dedicated to Jakarta EE, MicroProfile and Eclipse Che

  • A lot of information and discussion about the Jakarta EE 8 release and related Compatible Implementations 

Jakarta EE Community Update: August video call

The most recent Jakarta EE Community Update meeting took place on August 27; the conversation included topics such as the progress and latest status of the Jakarta EE 8 release as well as details about JakartaOne Livestream and EclipseCon Europe 2019.   

The materials used in the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Fulfilling the Vision for Open Source, Cloud Native Java: Coming soon! 

What does cloud native Java really mean to developers? What does the cloud native Java future look like? Where is Jakarta EE headed? Which technologies should be part of your toolkit for developing cloud native Java applications? 

All these questions (and more!) will be answered soon; we’re developing a downloadable eBook called Fulfilling the Vision for Open Source, Cloud Native Java on the community's definition and vision for cloud native Java, which will become available shortly before Jakarta EE 8 is released. Stay tuned!

Cloud Native Java & Jakarta EE presence at events and conferences: August overview 

We’d like to give kudos to Otávio Santana for his hard work this past summer on his 11+ conference session tour on “Jakarta on the Cloud America Latina 2019”. It’s great to see the success of your sessions and we are happy to promote your community participation. 

Links you may want to bookmark!

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. Don’t forget to follow us on Twitter to get the latest news and updates!

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  

The Jakarta EE community promises to be a very active one, especially given the various channels that can be used to stay up-to-date with all the latest and greatest. Tanja Obradovic’s blog offers a sneak peek at the community engagement plan.

Note: If you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of cloud native Java and see when is the next Jakarta Tech Talk, please bookmark the Jakarta EE Community Calendar


by Tanja Obradovic at September 09, 2019 03:12 PM

MicroProfile, Business Constraints, Outbox, lit-html, OData, ManagedExecutorService, Effective Java EE, Minishift, Quarkus-the 66th airhacks.tv

by admin at September 09, 2019 05:19 AM

The 66th airhacks.tv episode covering:

MicroProfile polyfills, ensuring consistency in business constraints, outbox transactional pattern, lit-html, minishift and okd, parsing images from stream, odata and backend for frontend, quarkus and bulkheads, JWTenizr on CI/CD, WARs on Docker, and recent podcasts

...is available:

Any questions left? Ask now: https://gist.github.com/AdamBien/1a227df3f1701e4a12a751d3f7d1633e and get the answers at the next airhacks.tv.

See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).

by admin at September 09, 2019 05:19 AM

#WHATIS?: Eclipse MicroProfile JWT Auth

by rieckpil at September 06, 2019 04:31 AM

In today’s microservice architectures, security is usually based on the following protocols: OAuth2, OpenID Connect, and SAML. These security protocols use security tokens to propagate the security state from client to server. This stateless approach is usually achieved by passing a JWT alongside every client request. For convenient use of this kind of token-based authentication, MicroProfile JWT Auth evolved. The specification ensures that the security token is extracted from the request and validated, and that a security context is created from the extracted information.

Learn more about the MicroProfile JWT Auth specification, its annotations, and how to use it in this blog post.

Specification profile: MicroProfile JWT Auth

  • Current version: 1.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide JWT token-based authentication for your application

Securing a JAX-RS application

First, we have to instruct our JAX-RS application, that we’ll use the JWTs for authentication and authorization. You can configure this with the @LoginConfig annotation:

@ApplicationPath("resources")
@LoginConfig(authMethod = "MP-JWT")
public class JAXRSConfiguration extends Application {
}

Once an incoming request contains a valid JWT in the Authorization: Bearer header, the groups in the JWT are mapped to roles. We can now limit the access for a resource to specific roles and achieve authorization with the Common Security Annotations (JSR-250) (@RolesAllowed, @PermitAll, @DenyAll):

@GET
@RolesAllowed("admin")
public Response getBook() {

    JsonObject secretBook = Json.createObjectBuilder()
        .add("title", "secret")
        .add("author", "duke")
        .build();

    return Response.ok(secretBook).build();
}

Furthermore, we can inject the actual JWT token (alongside the Principal) with CDI and inject any claim of the JWT in addition:

@Path("books")
@RequestScoped
@Produces(MediaType.APPLICATION_JSON)
public class BookResource {

    @Inject
    private Principal principal;

    @Inject
    private JsonWebToken jsonWebToken;

    @Inject
    @Claim("administrator_id")
    private JsonNumber administrator_id;

    @GET
    @RolesAllowed("admin")
    public Response getBook() {

        System.out.println("Secret book for " + principal.getName()
                + " with roles " + jsonWebToken.getGroups());
        System.out.println("Administrator level: "
                + jsonWebToken.getClaim("administrator_level").toString());
        System.out.println("Administrator id: " + administrator_id);

        JsonObject secretBook = Json.createObjectBuilder()
                .add("title", "secret")
                .add("author", "duke")
                .build();

        return Response.ok(secretBook).build();
    }

}

In this example, I’m injecting the claim administrator_id and accessing the claim administrator_level via the JWT token. These are not part of the standard JWT claims, but you can add any additional metadata to your token.

Always make sure to only inject the JWT token and the claims into @RequestScoped CDI beans, as you’ll get a DeploymentException otherwise:

javax.enterprise.inject.spi.DeploymentException: CWWKS5603E: The claim cannot be injected into the [BackedAnnotatedField] @Inject @Claim private de.rieckpil.blog.BookResource.administrator_id injection point for the ApplicationScoped or SessionScoped scopes.
        at com.ibm.ws.security.mp.jwt.cdi.JwtCDIExtension.processInjectionTarget(JwtCDIExtension.java:92)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)

HINT: Depending on the application server you deploy this example to, you might have to first declare the available roles with @DeclareRoles({"admin", "chief", "duke"}).

Required configuration for MicroProfile JWT Auth

Achieving validation of the JWT signature requires the public key. Since MicroProfile JWT Auth 1.1, we can configure this with MicroProfile Config (previously it was vendor-specific). The JWT Auth specification allows the following public key formats:

  • PKCS#8 (Public Key Cryptography Standards #8 PEM)
  • JWK (JSON Web Key)
  • JWKS (JSON Web Key Set)
  • JWK Base64 URL encoded
  • JWKS Base64 URL encoded

For this example, I’m using the PKCS#8 format and specify the path of the .pem file containing the public key in the microprofile-config.properties file:

mp.jwt.verify.publickey.location=/META-INF/publicKey.pem
mp.jwt.verify.issuer=rieckpil

The configuration of the issuer is also required and has to match the iss claim in the JWT. A valid publicKey.pem file might look like the following:

-----BEGIN RSA PUBLIC KEY-----
YOUR_PUBLIC_KEY
-----END RSA PUBLIC KEY-----

Using JWTEnizer to create tokens for testing

Usually, the JWT is issued by an identity provider (e.g. Keycloak). For quick testing, we can use the JWTenizr tool from Adam Bien. This provides a simple way to create a valid JWT and generates the corresponding public and private key. Once you have downloaded the jwtenizer.jar, you can run it for the first time with the following command:

java -jar jwtenizer.jar

This will now create a jwt-token.json file in the folder where you executed the command above. We can adjust this .json file to our needs and model a sample JWT token:

{
  "iss": "rieckpil",
  "jti": "42",
  "sub": "duke",
  "upn": "duke",
  "groups": [
    "chief",
    "hacker",
    "admin"
  ],
  "administrator_id": 42,
  "administrator_level": "HIGH"
}

Once you’ve adjusted the raw jwt-token.json, you can run java -jar jwtenizer.jar again; this second run will pick up the existing .json file for creating the JWT. Alongside the JWT token, the tool generates a microprofile-config.properties file, from which we can copy the public key and paste it into our publicKey.pem file.

Furthermore, the shell output of running jwtenizer.jar contains a cURL command we can use to hit our resources:

curl -i -H'Authorization: Bearer GENERATED_JWT' http://localhost:9080/resources/books

With a valid Bearer header you should get the following response from the backend:

HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Fri, 06 Sep 2019 03:24:16 GMT
Content-Language: en-US
Content-Length: 34

{"title":"secret","author":"duke"}

You can now adjust the jwt-token.json again, remove the admin group and generate a new JWT. With this generated token you shouldn’t be able to get a response from the backend anymore and will receive 403 Forbidden, as you are authenticated but don’t have the correct role.

For further instructions on how to use this tool, have a look at the README on GitHub or the following video of Adam Bien.

YouTube video for using MicroProfile JWT Auth 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile JWT Auth in action:


You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile JWT Auth,

Phil

The post #WHATIS?: Eclipse MicroProfile JWT Auth appeared first on rieckpil.


by rieckpil at September 06, 2019 04:31 AM

After the Deep Learning 101 meetup

by Hüseyin Akdogan at September 05, 2019 06:00 AM

Last Tuesday (03.09.2019), together with dear Hakan Özler, we hosted Orkun Susuz from n11 at the Deep Learning 101 meetup. Deep Learning is a popular topic, but naturally for many of us "What is Deep Learning?" is still a valid question. So we started with this question, and dear Orkun’s answer was that "Deep Learning is a branch of Machine Learning algorithms." Orkun explained that the key factor making Deep Learning so popular compared to other Machine Learning algorithms is its success especially when working with big data, putting it as "the more data, the better the results; that is the fundamental difference from the other algorithms."

Orkun went on to illustrate with various examples that Deep Learning today touches almost every part of our lives, in fields such as healthcare, banking, insurance, and security: from autonomous vehicles to identification based on human images, from auto-login via retina scans to the analysis of cancerous cells, from credit scoring to voice analysis and deep fakes.

From that point on, we talked about the right first step for those who want to get into Deep Learning. Hakan underlined the importance of community support when choosing the right programming language and mentioned the benefit of a lively interest and curiosity about what is going on behind the scenes. On the language question, Orkun and Hakan agreed that Python is definitely a good starting choice. Orkun underlined the advantages Python has for scientific coding and stated that, compared to its rivals, Python’s libraries leave the developer more freedom. Addressing a concern that intimidates developers at the beginning, he also emphasized that coding Deep Learning with Python does not require advanced language knowledge.

On the question of contributions to Deep Learning in and from Turkey, Orkun mentioned Koray Kavukçuoğlu, who plays an active role in the DeepMind project. Koray currently works as VP of Research at DeepMind. Orkun also mentioned Ethem Alpaydın and his Machine Learning book published by MIT Press. At this point, Hakan talked about the Deep Learning Türkiye community and its work, noting that the community’s activities can be followed on Medium and Twitter.

Finally, we collected resource recommendations for beginners. Orkun recommended the Machine Learning A-Z course on Udemy and https://machinelearningmastery.com/.

Hakan recommended https://mlcourse.ai, https://www.kaggle.com and https://medium.com/deep-learning-turkiye.

We thank dear Orkun for the time he devoted and the valuable knowledge he shared. By the way, the podcast channel we announced in the post about the Cloud Native meetup is now live. You can follow this meetup and the previous ones on the JUG İstanbul podcast channel on iTunes and Spotify.

See you at another event…


by Hüseyin Akdogan at September 05, 2019 06:00 AM

The Page Visibility API

by admin at September 04, 2019 12:33 PM

The Page Visibility API is useful for removing listeners (or stopping background processes) from hidden tabs or pages.

You only have to register a visibilitychange listener:


document.addEventListener('visibilitychange', _ => { 
    const state = document.visibilityState;
    console.log('document is: ',state);
})    

Hiding the page / making it visible again prints:


document is:  hidden
document is:  visible    

See it in action:

See you at "Build to last" effectively progressive applications with webstandards only -- the "no frameworks, no migrations" approach, at Munich Airport, Terminal 2 or effectiveweb.training (online).


by admin at September 04, 2019 12:33 PM

The Payara Monthly Catch for August 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at September 03, 2019 11:44 AM

August felt a little bit quieter than previous months, with many people gearing up for the busy conference season. However there were still plenty of juicy pieces of content to be found.

 

Below you will find a curated list of some of the most interesting news, articles and videos from this month. Can’t wait until the end of the month? Then visit our Twitter page, where we post all these articles as we find them! 


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at September 03, 2019 11:44 AM

#WHATIS?: Eclipse MicroProfile Rest Client

by rieckpil at September 03, 2019 06:14 AM

In a distributed system your services usually communicate via HTTP and expose REST APIs. External clients or other services in your system consume these endpoints on a regular basis to e.g. fetch data from a different part of the domain. If you are using Java EE you can utilize the JAX-RS WebTarget and Client for this kind of communication. With the MicroProfile Rest Client specification, you’ll get a more advanced and simpler way of creating these RESTful clients. You just declare interfaces and use a more declarative approach (as you might already know from the Feign library).

Learn more about the MicroProfile Rest Client specification, its annotations, and how to use it in this blog post.

Specification profile: MicroProfile Rest Client

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a type-safe approach to invoke RESTful services over HTTP.

Defining the RESTful client

For defining the Rest Client you just need a Java interface and model the remote REST API using JAX-RS annotations:

public interface JSONPlaceholderClient {

    @GET
    @Path("/posts")
    JsonArray getAllPosts();

    @POST
    @Path("/posts")
    Response createPost(JsonObject post);

}

You can specify the response type with a specific POJO (JSON-B will then try to deserialize the HTTP response body) or use the generic Response class of JAX-RS.
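For instance, a minimal sketch returning a POJO instead of a generic JSON type might look like this (the Post class is hypothetical and would have to match the JSON structure of the remote API):

@GET
@Path("/posts/{id}")
Post getPost(@PathParam("id") String id);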

Furthermore, you can indicate an asynchronous execution, if you use CompletionStage<T> as the method return type:

@GET
@Path("/posts/{id}")
CompletionStage<JsonObject> getPostById(@PathParam("id") String id);

Path variables and query parameters for the remote endpoint can be specified with @PathParam and @QueryParam:

@GET
@Path("/posts")
JsonArray getAllPosts(@QueryParam("orderBy") String orderDirection);

@GET
@Path("/posts/{id}/comments")
JsonArray getCommentsForPostByPostId(@PathParam("id") String id);

You can define the media type of the request and the expected media type of the response on either interface level or for each method separately:

@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public interface JSONPlaceholderClient {

    @GET
    @Produces(MediaType.APPLICATION_XML) // overrides the JSON media type only for this method
    @Path("/posts/{id}")
    CompletionStage<JsonObject> getPostById(@PathParam("id") String id);

}

If you have to declare specific HTTP headers (e.g. for authentication), you can pass them either to the method with @HeaderParam or define them with @ClientHeaderParam (static value or refer to a method):

@ClientHeaderParam(name = "X-Application-Name", value = "MP-blog")
public interface JSONPlaceholderClient {

    @PUT
    @ClientHeaderParam(name = "Authorization", value = "{generateAuthHeader}")
    @Path("/posts/{id}")
    Response updatePostById(@PathParam("id") String id, JsonObject post, 
                            @HeaderParam("X-Request-Id") String requestIdHeader);

    default String generateAuthHeader() {
        return "Basic " + new String(Base64.getEncoder().encode("duke:SECRET".getBytes()));
    }

}

Using the client interface

Once you have defined your Rest Client interface, there are two ways of using it. First, you can make use of the programmatic approach using the RestClientBuilder. With this builder we can set the base URI, define timeouts and register JAX-RS features/providers like ClientResponseFilter, MessageBodyReader, ReaderInterceptor etc.

JSONPlaceholderClient jsonApiClient = RestClientBuilder.newBuilder()
                .baseUri(new URI("https://jsonplaceholder.typicode.com"))
                .register(ResponseLoggingFilter.class)
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build(JSONPlaceholderClient.class);

 jsonApiClient.getPostById("1").thenAccept(System.out::println);

In addition to this, we can use CDI to inject the Rest Client. To register the interface as a CDI managed bean during runtime, the interface requires the @RegisterRestClient annotation:

@RegisterRestClient
@RegisterProvider(ResponseLoggingFilter.class)
public interface JSONPlaceholderClient {

}

With @RegisterProvider you can register further JAX-RS providers and features, as you’ve seen in the programmatic approach. If you don’t specify any scope for the interface, the @Dependent scope will be used by default. With this scope, your Rest Client bean is bound (dependent) to the lifecycle of the injector class.

You can now use it as any other CDI bean and inject it into your classes. Make sure to add the CDI qualifier @RestClient to the injection point:

@ApplicationScoped
public class PostService {

    @Inject
    @RestClient
    JSONPlaceholderClient jsonPlaceholderClient;

}

Further configuration for the Rest Client

If you use the CDI approach, you can make use of MicroProfile Config to further configure the Rest Client. You can specify the following properties with MicroProfile Config:

  • Base URL (.../mp-rest/url)
  • Base URI (.../mp-rest/uri)
  • The CDI scope of the client as a fully qualified class name (.../mp-rest/scope)
  • JAX-RS provider as a comma-separated list of fully qualified class names (../mp-rest/providers)
  • The priority of a registered provider (.../mp-rest/providers/com.acme.MyProvider/priority)
  • Connect and read timeouts (.../mp-rest/connectTimeout and .../mp-rest/readTimeout)

You can specify these properties for each client individually, as you have to specify the fully qualified class name of the Rest Client for each property:

de.rieckpil.blog.JSONPlaceholderClient/mp-rest/url=https://jsonplaceholder.typicode.com
de.rieckpil.blog.JSONPlaceholderClient/mp-rest/connectTimeout=3000
de.rieckpil.blog.JSONPlaceholderClient/mp-rest/readTimeout=3000

YouTube video for using MicroProfile Rest Client 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Rest Client in action:


You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile Rest Client,

Phil

The post #WHATIS?: Eclipse MicroProfile Rest Client appeared first on rieckpil.


by rieckpil at September 03, 2019 06:14 AM

The First Line of Quarkus--airhacks.fm Podcast

by admin at September 03, 2019 03:21 AM

Subscribe to airhacks.fm podcast via: spotify, iTunes, RSS

The #52 airhacks.fm episode with Emmanuel Bernard (@emmanuelbernard) about:

learning programming, ORM mappers, JPA, Hibernate contributions, bean validations, extending Hibernate to NoSQL, Java optimizations, GraalVM, Kubernetes, next generation Java EE application servers and Quarkus
is available for download.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at September 03, 2019 03:21 AM

Java EE - Jakarta EE Initializr

August 30, 2019 07:14 AM

Getting started with Jakarta EE just became even easier!

Get started

Update!

Version 1.3 now has:

Payara 5.193 running on Java 11


August 30, 2019 07:14 AM

#WHATIS?: Eclipse MicroProfile OpenTracing

by rieckpil at August 30, 2019 04:51 AM

Tracing method calls in a monolith to identify slow parts is simple. Everything is happening in one application (context) and you can easily add metrics to gather information about e.g. the elapsed time for fetching data from the database. Once you have a microservice environment with service-to-service communication, tracing needs more effort. If a business operation requires your service to call other services (which might then also call others) to gather data, identifying the source of a bottleneck is hard. Over the past years, several vendors evolved to tackle this issue of distributed tracing (e.g. Jaeger, Zipkin etc.). As the different solutions did not rely on a single, standard mechanism for trace description and propagation, a vendor-neutral standard for distributed tracing was due: OpenTracing. With Eclipse MicroProfile we get a dedicated specification to make use of this standard: MicroProfile OpenTracing.

Learn more about the MicroProfile OpenTracing specification, its annotations, and how to use it in this blog post.

Specification profile: MicroProfile OpenTracing

  • Current version: 1.3 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide distributed tracing for your JAX-RS application using the OpenTracing standard

Basics about distributed tracing

Once the flow of a request touches multiple service boundaries, you need to somehow correlate each incoming call with the same business flow. To accomplish this with distributed tracing, each service is instrumented to log messages with a correlation id that may have been propagated from an upstream service. These messages are then collected in a storage system and aggregated as they share the same correlation id.

A so-called trace represents the full journey of a request containing multiple spans. A span contains a single operation within the request with both start and end-time information. The distributed tracing systems (e.g. Jaeger or Zipkin) then usually provide a visual timeline representation for a given trace with its spans.

Enabling distributed tracing with MicroProfile OpenTracing

The MicroProfile OpenTracing specification does not address the problem of defining, implementing or configuring the underlying distributed tracing system. It assumes an environment where all services use a common OpenTracing implementation.

The MicroProfile specification defines two operation modes:

  • Without instrumentation of application code (distributed tracing is enabled for JAX-RS applications by default)
  • With explicit code instrumentation (using the @Traced annotation)

So once a request arrives at a JAX-RS endpoint, the Tracer instance extracts the SpanContext (if given) from the inbound request and starts a new span. If there is no SpanContext yet, e.g. the request is coming from a frontend application, the MicroProfile application has to create one.

Every outgoing request (with either the JAX-RS Client or the MicroProfile Rest Client) then needs to contain the SpanContext and propagate it downstream. Tracing for the JAX-RS Client might need to be explicitly enabled (depending on the implementation); for the MicroProfile Rest Client it is globally enabled by default.

Besides the no instrumentation mode, you can add the @Traced annotation to a class or method to explicitly start a new span at the beginning of a method.
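As a minimal sketch (class and method names are made up for illustration), such explicit instrumentation could look like this:

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.opentracing.Traced;

@ApplicationScoped
public class PriceAuditor {

    // starts a new span with the given operation name for every invocation
    @Traced(operationName = "audit-price-calculation")
    public void auditPriceCalculation(int bookId, Double price) {
        // business logic that should show up as a dedicated span in the trace
    }

    // tracing can also be switched off explicitly for noisy methods
    @Traced(value = false)
    public void heartbeat() {
    }
}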

Sample application setup for MicroProfile OpenTracing

To provide you with an example, I’m using the following two services to simulate a microservice architecture setup: book-store and book-store-client. Both are MicroProfile applications and have no further dependencies. The book-store-client has one public endpoint to retrieve books together with their price:

@Path("books")
public class BookResource {

    @Inject
    private BookProvider bookProvider;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getBooks() {
        return Response.ok(bookProvider.getBooksFromBookStore()).build();
    }

}

For gathering information about the book and its price, the book-store-client communicates with the book-store:

@RequestScoped
public class BookProvider {

    @Inject
    private PriceCalculator priceCalculator;

    private WebTarget bookStoreTarget;

    @PostConstruct
    public void setup() {
        Client client = ClientBuilder
                .newBuilder()
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build();

        this.bookStoreTarget = client.target("http://book-store:9080/resources/books");
    }

    public JsonArray getBooksFromBookStore() {

        JsonArray books = this.bookStoreTarget
                .request()
                .get()
                .readEntity(JsonArray.class);

        List<JsonObject> result = new ArrayList<>();

        for (JsonObject book : books.getValuesAs(JsonValue::asJsonObject)) {
            result.add(Json.createObjectBuilder()
                    .add("title", book.getString("title"))
                    .add("price", priceCalculator.getPriceForBook(book.getInt("id")))
                    .build());
        }

        return result
                .stream()
                .collect(JsonCollectors.toJsonArray());
    }
}

So there will be at least one outgoing call to fetch all available books and, for each book, an additional request to get the price of the book:

@RequestScoped
public class PriceCalculator {

    private WebTarget bookStorePriceTarget;
    private Double discount = 1.5;

    @PostConstruct
    public void setUp() {
        Client client = ClientBuilder
                .newBuilder()
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(2, TimeUnit.SECONDS)
                .build();

        this.bookStorePriceTarget = client.target("http://book-store:9080/resources/prices");
    }

    public Double getPriceForBook(int id) {
        Double bookPrice = this.bookStorePriceTarget
            .path(String.valueOf(id))
            .request()
            .get()
            .readEntity(Double.class);
        return Math.round((bookPrice - discount) * 100.0) / 100.0;
    }

}

On the book-store side, when fetching the prices, there is a random Thread.sleep(), so we can later see different traces. Without further instrumentation on both sides, we are ready for distributed tracing. We could add additional @Traced annotations to the involved methods to create a span for each method call and narrow down the tracing.

Using the Zipkin implementation on Open Liberty

For this example, I’m using Open Liberty to deploy both applications. With Open Liberty we have to add a feature for the OpenTracing implementation to the server and configure it in server.xml:

FROM open-liberty:kernel-java11
COPY --chown=1001:0  target/microprofile-open-tracing-server.war /config/dropins/
COPY --chown=1001:0  server.xml /config/
COPY --chown=1001:0  extension /opt/ol/wlp/usr/extension

<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>microProfile-3.0</feature>
        <feature>usr:opentracingZipkin-0.31</feature>
    </featureManager>

    <opentracingZipkin host="zipkin" port="9411"/>

    <mpMetrics authentication="false"/>

    <ssl id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="jdkTrustStore"/>
    <keyStore id="jdkTrustStore" location="${java.home}/lib/security/cacerts" password="changeit"/>

    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443" />
</server>

The OpenTracing Zipkin implementation is provided by IBM and can be downloaded from the following tutorial.

For the book-store DNS resolution you saw in the previous code snippets, and to start Zipkin as the distributed tracing system, I’m using docker-compose:

version: '3.6'
services:
  book-store-client:
    build: book-store-client/
    ports:
      - "9080:9080"
      - "9443:9443"
    links:
      - zipkin
      - book-store
  book-store:
    build: book-store/
    links:
      - zipkin
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"

Once both services and Zipkin are running, you can visit http://localhost:9080/resources/books to fetch all available books from the book-store-client application. You can now hit this endpoint several times and then switch to http://localhost:9411/zipkin/ and query for all available traces:

Once you click on a specific trace, you’ll get a timeline to see what operation took the most time:

YouTube video for using MicroProfile Open Tracing 1.3

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile OpenTracing in action:


You can find the source code with further instructions to run this example on GitHub.

Have fun using MicroProfile OpenTracing,

Phil

The post #WHATIS?: Eclipse MicroProfile OpenTracing appeared first on rieckpil.


by rieckpil at August 30, 2019 04:51 AM

What's New In Payara Platform 5.193?

by Cuba Stanley at August 29, 2019 01:00 PM

With the summer season coming to a close, the time has come for a new release of the Payara Platform! Here's a quick list of the new features you'll have to look forward to with the Payara Platform 5.193 release:


by Cuba Stanley at August 29, 2019 01:00 PM

#WHATIS?: Eclipse MicroProfile OpenAPI

by rieckpil at August 24, 2019 09:31 AM

Exposing REST endpoints usually requires documentation for your clients. This documentation usually includes the following: accepted media types, HTTP method, path variables, query parameters, and the request and response schema. With the OpenAPI v3 specification we have a standard way to document APIs. You can generate this kind of API documentation from your JAX-RS classes using MicroProfile OpenAPI out-of-the-box. In addition, you can customize the result with additional metadata like detailed description, error codes and their reasons, and further information about the used security mechanism.

Learn more about the MicroProfile OpenAPI specification, its annotations and how to use it in this blog post.

Specification profile: MicroProfile OpenAPI

  • Current version: 1.1 in MicroProfile 3.0
  • GitHub repository
  • Latest specification document
  • Basic use case: Provide a unified Java API for the OpenAPI v3 specification to expose API documentation

Customize your API documentation with MicroProfile OpenAPI

Without any additional annotation or configuration, you get your API documentation with MicroProfile OpenAPI out-of-the-box. Therefore your JAX-RS classes are scanned for your @Produces, @Consumes, @Path, @GET etc. annotations to extract the required information for the documentation.

If you have external clients accessing your endpoints, you usually add further metadata for them to understand what each endpoint is about. Fortunately, the MicroProfile OpenAPI specification defines a bunch of annotations you can use to customize the API documentation.

The following example shows a part of the available annotations you can use to add further information:

@GET
@Operation(summary = "Get all books", description = "Returns all available books of the book store XYZ")
@APIResponse(responseCode = "404", description = "No books found")
@APIResponse(responseCode = "418", description = "I'm a teapot")
@APIResponse(responseCode = "500", description = "Server unavailable")
@Tag(name = "BETA", description = "This API is currently in beta state")
@Produces(MediaType.APPLICATION_JSON)
public Response getAllBooks() {
   System.out.println("Get all books...");
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

In this example, I’m adding a summary and description to the endpoint to tell the client what this endpoint is about. Furthermore, you can specify the different response codes this endpoint returns and give them a description if they are somehow different from the HTTP spec.

Another important part of your API documentation is the request and response body schema. With JSON as the current de-facto standard format for exchanging data, you need to know the expected and accepted formats. The same is true for the response, as your client needs information about the contract of the API to further process the result. This can be achieved with an additional MicroProfile OpenAPI annotation:

@GET
@APIResponse(description = "Book",
             content = @Content(mediaType = "application/json",
                    schema = @Schema(implementation = Book.class)))
@Path("/{id}")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public Response getBookById(@PathParam("id") Long id) {
   return Response.ok(new Book("MicroProfile", "Duke", 1L)).build();
}

Within the @APIResponse annotation we can reference the response object with the schema attribute. This can point to your data transfer object class. The aforementioned Java class can then also have further annotations to specify which fields are required and what example values look like:

@Schema(name = "Book", description = "POJO that represents a book.")
public class Book {

    @Schema(required = true, example = "MicroProfile")
    private String title;

    @Schema(required = true, example = "Duke")
    private String author;

    @Schema(required = true, readOnly = true, example = "1")
    private Long id;

}

Access the created documentation

The MicroProfile OpenAPI specification defines a pre-defined endpoint to access the documentation: /openapi:

openapi: 3.0.0
info:
  title: Deployed APIs
  version: 1.0.0
servers:
- url: http://localhost:9080
- url: https://localhost:9443
tags:
- name: BETA
  description: This API is currently in beta state
paths:
  /resources/books/{id}:
    get:
      operationId: getBookById
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
          format: int64
      responses:
        default:
          description: Book
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Book'

This endpoint returns your generated API documentation in the OpenAPI v3 specification format as text/plain.
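For example, assuming the application runs on port 9080 as in the other examples of this series, you can fetch the document with:

curl http://localhost:9080/openapi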

Moreover, if you are using Open Liberty you’ll get a nice-looking user interface for your API documentation. You can access it at http://localhost:9080/openapi/ui/. This looks similar to the Swagger UI and offers your client a way to explore your API and also trigger requests to your endpoints via this user interface:

(Screenshots: the OpenAPI UI on Open Liberty, the execution of an API call, and the model explorer.)

YouTube video for using MicroProfile OpenAPI 1.1

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile OpenAPI in action:


You can find the source code for this blog post on GitHub.

Have fun using MicroProfile OpenAPI,

Phil

The post #WHATIS?: Eclipse MicroProfile OpenAPI appeared first on rieckpil.


by rieckpil at August 24, 2019 09:31 AM

New Challenges Ahead

by Ivar Grimstad at August 20, 2019 06:27 PM

I am super excited to announce that October 1st, I will become the first Jakarta EE Developer Advocate at Eclipse Foundation!

So, What’s new? Hasn’t this guy been doing this for years already?

Well, yes, and no. My day job has always been working as a consultant even if I have been fortunate that Cybercom Sweden (my employer of almost 15 years) has given me the freedom to also work on open source projects, community building and speaking at conferences and meetups.

What’s different then?

Even if I have had this flexibility, it has still been part-time work which has rippled into my spare time. It’s only so much a person can do and there are only 24 hours a day. As a full-time Jakarta EE Developer Advocate, I will be able to focus entirely on community outreach around Jakarta EE.

The transition of the Java EE technologies from Oracle to Jakarta EE at Eclipse Foundations has taken a lot longer than anticipated. The community around these technologies has taken a serious hit as a result of that. My primary focus for the first period as Jakarta EE Developer Advocate is to regain the trust and help enable participation of the strong community around Jakarta EE. The timing of establishing this position fits perfectly with the upcoming release of Jakarta EE 8. From that release and forward, it is up to us as a community to bring the technology forward.

I think I have been pretty successful with being vendor-neutral throughout the years. This will not change! Eclipse Foundation is a vendor-neutral organization and I will represent the entire Jakarta EE working group and community as the Jakarta EE Developer Advocate. This is what distinguishes this role from the vendor’s own developer advocates.

I hope to see you all very soon at a conference or meetup near you!


by Ivar Grimstad at August 20, 2019 06:27 PM

#WHATIS?: Eclipse MicroProfile Health

by rieckpil at August 19, 2019 05:16 AM

Once your application is deployed to production you want to ensure it’s up and running. To determine the health and status of your application you can use monitoring based on different metrics, but this requires further knowledge and takes time. Usually, you just want a quick answer to the question: Is my application up? The same is true if your application is running e.g. in a Kubernetes cluster, where the cluster regularly performs health probes to terminate unhealthy pods. With MicroProfile Health you can write both readiness and liveness checks and expose them via an HTTP endpoint with ease.

Learn more about the MicroProfile Health specification and how to use it in this blog post.

Specification profile: MicroProfile Health

  • Current version: 2.1 in MicroProfile 3.1
  • GitHub repository
  • Latest specification document
  • Basic use case: Add liveness and readiness checks to determine the application’s health

Determine the application’s health with MicroProfile Health

With MicroProfile Health you get three new endpoints to determine both the readiness and liveness of your application:

  • /health/ready: Returns the result of all readiness checks and determines whether or not your application can process requests
  • /health/live: Returns the result of all liveness checks and determines whether or not your application is up and running
  • /health: In previous versions of MicroProfile Health there was no distinction between readiness and liveness, so this endpoint remains for backwards compatibility. It returns the result of both health check types.

To determine your readiness and liveness you can have multiple checks. The overall status is constructed with a logical AND of all your checks of that specific type (liveness or readiness). If e.g. one liveness check fails, the overall liveness status is DOWN and the HTTP status is 503:

$ curl -v http://localhost:9080/health/live


< HTTP/1.1 503 Service Unavailable
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"DOWN"}

In case of an overall UP status, you’ll receive the HTTP status 200:

$ curl -v http://localhost:9080/health/ready

< HTTP/1.1 200 OK
< X-Powered-By: Servlet/4.0
< Content-Type: application/json; charset=UTF-8
< Content-Language: en-US

{"checks":[...],"status":"UP"}

Create a readiness check

To create a readiness check you have to implement the HealthCheck interface and add @Readiness to your class:

@Readiness
public class ReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.builder()
                .name("readiness")
                .up()
                .build();
    }
}

As you can add multiple checks, you need to give every check a dedicated name. In general, all your readiness checks should determine whether your application is ready to accept traffic or not. Therefore a quick response is preferable.

If your application is about exposing and accepting data using REST endpoints and does not rely on other services to work, the readiness check above should be good enough, as it returns 200 once the JAX-RS runtime is up and running:

{
   "checks":[
      {
         "data":{},
         "name":"readiness",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Furthermore, once /health/ready returns 200, readiness is established; from then on /health/live is used and no further readiness checks are required.
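Coming back to the Kubernetes scenario from the introduction: a deployment could wire these two endpoints into its probes roughly like the following sketch (port and timing values are just assumptions):

livenessProbe:
  httpGet:
    path: /health/live
    port: 9080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9080
  periodSeconds: 5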

Create liveness checks

Creating liveness checks is as simple as creating readiness checks. The only difference is the @Liveness annotation at class level:

@Liveness
public class DiskSizeCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {

        File file = new File("/");
        long freeSpace = file.getFreeSpace() / 1024 / 1024;

        return HealthCheckResponse.builder()
                .name("disk")
                .withData("remainingSpace", freeSpace)
                .state(freeSpace > 100)
                .build();
    }
}

In this example, I’m checking for free disk space, as a service might rely on storage to persist e.g. files. With the .withData() method of the HealthCheckResponseBuilder you can add further metadata to your response.

In addition, you can also combine the @Readiness and @Liveness annotation and reuse a health check class for both checks:

@Readiness
@Liveness
public class MultipleHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse
                .builder()
                .name("generalCheck")
                .withData("foo", "bar")
                .withData("uptime", 42)
                .withData("isReady", true)
                .up()
                .build();
    }
}

This check now appears for /health/ready and /health/live:

{
   "checks":[
      {
         "data":{
            "remainingSpace":447522
         },
         "name":"disk",
         "status":"UP"
      },
      {
         "data":{

         },
         "name":"liveness",
         "status":"UP"
      },
      {
         "data":{
            "foo":"bar",
            "isReady":true,
            "uptime":42
         },
         "name":"generalCheck",
         "status":"UP"
      }
   ],
   "status":"UP"
}

Other possible liveness checks might be: checking for active JDBC connections, connections to queues, CPU usage, or custom metrics (with the help of MicroProfile Metrics).
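A sketch of such a JDBC-based liveness check could look like the following (the JNDI name jdbc/bookstore is a made-up example):

import java.sql.Connection;
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;
import javax.sql.DataSource;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class DatabaseConnectionCheck implements HealthCheck {

    // "jdbc/bookstore" is a hypothetical JNDI name for this sketch
    @Resource(lookup = "jdbc/bookstore")
    private DataSource dataSource;

    @Override
    public HealthCheckResponse call() {
        try (Connection connection = dataSource.getConnection()) {
            // isValid() pings the database with the given timeout in seconds
            return HealthCheckResponse.builder()
                    .name("database")
                    .state(connection.isValid(2))
                    .build();
        } catch (Exception e) {
            return HealthCheckResponse.builder()
                    .name("database")
                    .down()
                    .build();
        }
    }
}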

YouTube video for using MicroProfile Health 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Health in action:


You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Health,

Phil

The post #WHATIS?: Eclipse MicroProfile Health appeared first on rieckpil.


by rieckpil at August 19, 2019 05:16 AM

Wildfly JMX connection problems (so slow and terminates)

August 18, 2019 09:00 PM

JMX (Java Management Extensions) is a technology that provides the ability to monitor applications (application servers) through MBean (Managed Bean) objects.
The list of supported MBeans can be obtained with the JConsole tool that is already included in the JDK. As JMX does not define a strongly specified communication protocol, implementations can differ depending on the vendor.
For example, to connect to the Wildfly Application Server you need to use the jconsole.sh script included in the distribution:

<WFLY_HOME>/bin/jconsole.sh

or add <WFLY_HOME>/bin/client/jboss-client.jar to classpath:

jconsole -J-Djava.class.path=$JAVA_HOME\lib\tools.jar;$JAVA_HOME\lib\jconsole.jar;jboss-client.jar

By default, Wildfly uses a timeout of 60s for remote JMX connections, after which the connection is terminated:
(Screenshot: JConsole reporting a terminated connection.)
To change the default timeout value, use the org.jboss.remoting-jmx.timeout property:

./jconsole.sh -J-Dorg.jboss.remoting-jmx.timeout=300

But increasing timeouts is not always a good solution, so let's search for the reason for the slowness. To construct the list of MBeans, JConsole recursively requests ALL MBeans, which can be extremely slow in case of many deployments and many loggers. (Reported issue: WFCORE-3186.) A partial solution is to reduce the number of log files by changing the rotation type from periodic-size-rotating-file-handler to size-rotating-file-handler, as sketched below.
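A size-rotating-file-handler in the logging subsystem of standalone.xml could look roughly like the following sketch (file name, size and backup limits are just example values):

<size-rotating-file-handler name="FILE" autoflush="true">
    <formatter>
        <named-formatter name="PATTERN"/>
    </formatter>
    <file relative-to="jboss.server.log.dir" path="server.log"/>
    <rotate-size value="10m"/>
    <max-backup-index value="5"/>
    <append value="true"/>
</size-rotating-file-handler>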

Another reason for extreme slowness can be the Batch subsystem (JBeret). It stores a lot of bookkeeping information in its tables (in memory or in a remote DB, depending on the configuration). If these tables grow big enough, they can negatively affect the performance of the server. So, if you have no need for this data, just clean it up periodically (for example, on every redeploy in case you redeploy often enough):

TRUNCATE TABLE PARTITION_EXECUTION CASCADE;
TRUNCATE TABLE STEP_EXECUTION CASCADE;
TRUNCATE TABLE JOB_EXECUTION CASCADE;
TRUNCATE TABLE JOB_INSTANCE CASCADE;  

From another point of view, obtaining ALL MBeans is not a good approach either. So, just use tooling that allows you to find MBeans by path.


August 18, 2019 09:00 PM

#WHATIS?: Eclipse MicroProfile Metrics

by rieckpil at August 18, 2019 08:03 AM

Ensuring a stable operation of your application in production requires monitoring. Without monitoring, you have no insights about the internal state and health of your system and have to work with a black-box. MicroProfile Metrics gives you the ability to not only monitor pre-defined metrics like JVM statistics but also create custom metrics to monitor e.g. key figures of your business. These metrics are then exposed via HTTP and ready to visualize on a dashboard and create appropriate alarms.

Learn more about the MicroProfile Metrics specification and how to use it in this blog post.

Specification profile: MicroProfile Metrics

  • Current version: 2.1 in MicroProfile 3.1
  • GitHub repository
  • Latest specification document
  • Basic use case: Add custom metrics (e.g. timer or counter) to your application and expose them via HTTP

Default MicroProfile metrics defined in the specification

The specification defines one endpoint with three subresources to collect metrics from a MicroProfile application:

  • The endpoint to collect all available metrics: /metrics
  • Base (pre-defined by the specification) metrics: /metrics/base
  • Application metrics: /metrics/application (optional)
  • Vendor-specific metrics: /metrics/vendor (optional)

So you can either use the main /metrics endpoint and get all available metrics for your application or one of the subresources.

The default media type for these endpoints is text/plain using the OpenMetrics format. You are also able to get them as JSON if you specify the Accept header in your request as application/json.
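For example, the following request (assuming the default port 9080 used throughout this series) returns the base metrics as JSON:

curl -H "Accept: application/json" http://localhost:9080/metrics/base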

In the specification, you find a list of base metrics every MicroProfile Metrics compliant application server has to offer. These are mainly JVM, GC, memory, and CPU related metrics to monitor the infrastructure. The following output shows the required set of base metrics:

{
    "gc.total;name=scavenge": 393,
    "gc.time;name=global": 386,
    "cpu.systemLoadAverage": 0.92,
    "thread.count": 85,
    "classloader.loadedClasses.count": 11795,
    "classloader.unloadedClasses.total": 21,
    "jvm.uptime": 985206,
    "memory.committedHeap": 63111168,
    "thread.max.count": 100,
    "cpu.availableProcessors": 12,
    "classloader.loadedClasses.total": 11816,
    "thread.daemon.count": 82,
    "gc.time;name=scavenge": 412,
    "gc.total;name=global": 14,
    "memory.maxHeap": 4182573056,
    "cpu.processCpuLoad": 0.0017964831879557087,
    "memory.usedHeap": 34319912
}

In addition, you are able to add metadata and tags to your metrics like in the output above for gc.time where name=global is a tag. You can use these tags to further separate a metric for multiple use cases.

Create a custom metric with MicroProfile Metrics

There are two ways for defining a custom metric with MicroProfile Metrics: using annotations or programmatically. The specification offers five different metric types:

  • Timer: sampling the duration of e.g. a method call
  • Counter: monotonically counting e.g. invocations of a method
  • Gauge: sampling the value of an object, e.g. the current size of a JMS queue
  • Meter: tracking the throughput of e.g. a JAX-RS endpoint
  • Histogram: calculating the distribution of values, e.g. the variance of incoming user agents

For simple use cases, you can make use of annotations and just add them to a method you want to monitor. Each annotation offers attributes to configure tags and metadata for the metric:

@Counted(name = "bookCommentClientInvocations",
         description = "Counting the invocations of the constructor",
         displayName = "bookCommentClientInvoke",
         tags = {"usecase=simple"})
public BookCommentClient() {
}

If your monitoring use case requires a more dynamic configuration, you can create/update your metrics programmatically. For this, you just need to inject the MetricRegistry into your class:

import javax.inject.Inject;
import javax.json.JsonObject;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.metrics.MetricRegistry;
import org.eclipse.microprofile.metrics.annotation.RegistryType;

public class BookCommentClient {

    @Inject
    @RegistryType(type = MetricRegistry.Type.APPLICATION)
    private MetricRegistry metricRegistry;

    // JAX-RS client target, initialized elsewhere (e.g. in a @PostConstruct method)
    private WebTarget bookCommentsWebTarget;

    public String getBookCommentByBookId(String bookId) {
        Response response = this.bookCommentsWebTarget.path(bookId).request().get();
        // creates the counter on first use, afterwards increments the existing one
        this.metricRegistry.counter("bookCommentApiResponseCode" + response.getStatus()).inc();
        return response.readEntity(JsonObject.class).getString("body");
    }
}
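
The metadata and tags from the annotation example above can also be attached programmatically; a small sketch, assuming MicroProfile Metrics 2.x and the injected MetricRegistry from above:

import org.eclipse.microprofile.metrics.Metadata;
import org.eclipse.microprofile.metrics.MetricType;
import org.eclipse.microprofile.metrics.Tag;

Metadata metadata = Metadata.builder()
        .withName("bookCommentClientInvocations")
        .withDescription("Counting the invocations of the client")
        .withType(MetricType.COUNTER)
        .build();
// the Tag mirrors the tags attribute of the @Counted annotation above
this.metricRegistry.counter(metadata, new Tag("usecase", "simple")).inc();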

Create a timer metric

If you want to track and sample the duration of a method call, you can make use of timers. You can add them with the @Timed annotation or using the MetricRegistry. A good use case might be tracking the time for a call to an external service:

@Timed(name = "getBookCommentByBookIdDuration")
public String getBookCommentByBookId(String bookId) {
   Response response = this.bookCommentsWebTarget.path(bookId).request().get();
   return response.readEntity(JsonObject.class).getString("body");
}

While using the timer metric type you’ll also get a count of method invocations and mean/max/min/percentile calculations out-of-the-box:

"de.rieckpil.blog.BookCommentClient.getBookCommentByBookIdDuration": {
        "fiveMinRate": 0.000004243196464475842,
        "max": 3966817891,
        "count": 13,
        "p50": 737218798,
        "p95": 3966817891,
        "p98": 3966817891,
        "p75": 997698383,
        "p99": 3966817891,
        "min": 371079671,
        "fifteenMinRate": 0.005509550587308515,
        "meanRate": 0.003936521878196718,
        "mean": 1041488167.7031761,
        "p999": 3966817891,
        "oneMinRate": 1.1484886591525709e-24,
        "stddev": 971678361.3592016
}

Be aware that you get the durations in nanoseconds if you request the JSON format, while in the OpenMetrics format you get seconds:

getBookCommentByBookIdDuration_rate_per_second 0.003756880727820997
getBookCommentByBookIdDuration_one_min_rate_per_second 7.980095572816848E-26
getBookCommentByBookIdDuration_five_min_rate_per_second 2.4892551645230856E-6
getBookCommentByBookIdDuration_fifteen_min_rate_per_second 0.004612201440656351
getBookCommentByBookIdDuration_mean_seconds 1.0414881677031762
getBookCommentByBookIdDuration_max_seconds 3.9668178910000003
getBookCommentByBookIdDuration_min_seconds 0.371079671
getBookCommentByBookIdDuration_stddev_seconds 0.9716783613592016
getBookCommentByBookIdDuration_seconds_count 13
getBookCommentByBookIdDuration_seconds{quantile="0.5"} 0.737218798
getBookCommentByBookIdDuration_seconds{quantile="0.75"} 0.997698383
getBookCommentByBookIdDuration_seconds{quantile="0.95"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.98"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.99"} 3.9668178910000003
getBookCommentByBookIdDuration_seconds{quantile="0.999"} 3.9668178910000003

Create a counter metric

The next metric type is the simplest one: a counter. With the counter, you can track e.g. the number of invocations of a method:

@Counted
public String doFoo() {
  return "Duke";
}

In one of the previous MicroProfile Metrics versions, you were able to decrease the counter and thus have a non-monotonic counter. As this caused confusion with the gauge metric type, the current specification version defines this metric type as a monotonic counter which can only increase.

If you use the programmatic approach, you can also define the amount by which the counter is increased on each invocation:

public void checkoutItem(String item, Long amount) {
   this.metricRegistry.counter(item + "Count").inc(amount);
   // further business logic
}

Create a metered metric

The meter type is perfect if you want to measure the throughput of something and get the one-, five- and fifteen-minute rates. As an example, I'll monitor the throughput of a JAX-RS endpoint:

@GET
@Metered(name = "getBookCommentForLatestBookRequest", tags = {"spec=JAX-RS", "level=REST"})
@Produces(MediaType.TEXT_PLAIN)
public Response getBookCommentForLatestBookRequest() {
   String latestBookRequestId = bookRequestProcessor.getLatestBookRequestId();
   return Response.ok(this.bookCommentClient.getBookCommentByBookId(latestBookRequestId)).build();
}

After several invocations, the result looks like the following:

"de.rieckpil.blog.BookResource.getBookCommentForLatestBookRequest": {
       "oneMinRate;level=REST;spec=JAX-RS": 1.1363013189791909e-24,
       "fiveMinRate;level=REST;spec=JAX-RS": 0.0000042408326224725166,
       "meanRate;level=REST;spec=JAX-RS": 0.003936520624021342,
       "fifteenMinRate;level=REST;spec=JAX-RS": 0.0055092085268208186,
       "count;level=REST;spec=JAX-RS": 13
}

Create a gauge metric

To monitor a value that can increase and decrease over time, you should use the gauge metric type. Imagine you want to visualize the current disk size or the number of remaining messages to process in a queue:

@Gauge(unit = "amount")
public Long remainingBookRequestsToProcess() {
  // monitor e.g. current size of a JMS queue
  return ThreadLocalRandom.current().nextLong(0, 1_000_000);
}

The unit attribute of the annotation is required and has to be explicitly configured. There is a MetricUnits class which you can use for common units like seconds or megabytes.
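
For instance, a gauge could report its value in megabytes using one of those constants (a sketch; the cache field and its helper method are hypothetical):

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Gauge;

@Gauge(unit = MetricUnits.MEGABYTES)
public Long currentCacheSize() {
    return this.cache.getSizeInMegabytes(); // hypothetical helper returning the current size
}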

In contrast to all other metrics, the @Gauge annotation can only be used in combination with a single instance (e.g. @ApplicationScoped) as otherwise, it would not be clear which instance represents the actual value. There is also a @ConcurrentGauge annotation if you need to count parallel invocations (see the sketch after the following sample output).

The outcome is the current value of the gauge, which might increase or decrease over time:

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 990120

// invocation of /metrics 5 minutes later

# TYPE application_..._remainingBookRequestsToProcess_amount
application_..._remainingBookRequestsToProcess_amount 11003
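
For the @ConcurrentGauge mentioned above, a minimal sketch could look like this; the metric reports how many invocations of the method are currently running in parallel:

import org.eclipse.microprofile.metrics.annotation.ConcurrentGauge;

@ConcurrentGauge(name = "concurrentBookCommentRequests")
public String getBookCommentByBookId(String bookId) {
    // the gauge is incremented on entry and decremented on exit of this method
    Response response = this.bookCommentsWebTarget.path(bookId).request().get();
    return response.readEntity(JsonObject.class).getString("body");
}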

YouTube video for using MicroProfile Metrics 2.0

Watch the following YouTube video of my Getting started with Eclipse MicroProfile 3.0 series to see MicroProfile Metrics in action:


You can find the source code for this blog post on GitHub.

Have fun using MicroProfile Metrics,

Phil

The post #WHATIS?: Eclipse MicroProfile Metrics appeared first on rieckpil.


by rieckpil at August 18, 2019 08:03 AM
