
#HOWTO: Messaging with JMS using Payara with embedded OpenMQ broker

by rieckpil at April 25, 2019 01:26 PM

Messaging is a key concept for distributed enterprise applications. There are a lot of use cases where you don’t want or need the synchronous response you get with e.g. a REST call and can use asynchronous messaging instead: IoT (sensor data), event streaming, data replication, etc. With the hype around Kafka and other highly-distributed messaging solutions, you might forget that there is already a sophisticated and proven messaging standard within Java/Jakarta EE: JMS (Java Message Service).

With this blog post, I’m providing a simple introduction to sending and receiving JSON messages with this Java EE standard. The technology setup is the following: Java 8, Java EE 8, Payara 5.191 and H2 for storing the messages.

JMS prerequisites

As Payara already ships with OpenMQ, which implements the Java Message Service (JMS) standard, you don’t have to set up an external JMS broker (e.g. ActiveMQ or RabbitMQ) for this example and can use the embedded version (think twice before using this in production).

The connection pool for the embedded OpenMQ is preconfigured, and we can directly make use of it via its JNDI name jms/__defaultConnectionFactory. If you want to connect to an external broker, you have to set up the connection manually (take a look at this excellent example for ActiveMQ from Steve Millidge himself).

With JMS you can make use of two different concepts for delivering messages: topics (publish & subscribe) and queues (point-to-point). In this example I’m using a javax.jms.Queue, but the code would look quite similar for a Topic.

The JMS queue or topic first has to be configured within Payara as a JMS Destination Resource, either via the Payara admin panel (Resources – JMS Resources – Destination Resources) or using asadmin:

asadmin create-jms-resource --restype javax.jms.Queue --property Name=STOCKS jms/stocks

You have to specify the resource type (Topic or Queue), the physical name of the destination within the broker (it will be created if it doesn’t exist) and the JNDI name.
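As an aside (not part of the original setup): since JMS 2.0 the same destination can also be declared in code with the @JMSDestinationDefinition annotation on any managed component, and it is created on deployment. A minimal sketch, mirroring the JNDI and physical names from the asadmin command above (the class name JmsResourceConfig is a hypothetical placeholder):

```java
import javax.ejb.Singleton;
import javax.jms.JMSDestinationDefinition;

// The annotation alone triggers creation of the destination on deployment;
// the class body can stay empty.
@JMSDestinationDefinition(
    name = "jms/stocks",               // JNDI name of the resource
    interfaceName = "javax.jms.Queue", // use javax.jms.Topic for pub/sub
    destinationName = "STOCKS")        // physical name within the broker
@Singleton
public class JmsResourceConfig {
}
```

This keeps the destination definition inside the application instead of the server configuration, which can be handy for self-contained deployments.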

Let’s start coding

The Maven project for this showcase is a simple and thin Java EE project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>messaging-with-jms-using-payara</artifactId>
  <version>1.0-SNAPSHOT</version>

  <packaging>war</packaging>

  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

  <build>
    <finalName>messaging-with-jms-using-payara</finalName>
  </build>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

In this showcase I’m sending random stock information (as JSON, built with JSON-P) every two seconds using a simple EJB timer:

@Singleton
public class StockPublisher {

  @Resource(lookup = "jms/__defaultConnectionFactory")
  private ConnectionFactory jmsFactory;

  @Resource(lookup = "jms/stocks")
  private Queue jmsQueue;

  private String[] stockCodes = { "MSFT", "GOOGL", "AAPL", "AMZN" };

  @Schedule(second = "*/2", minute = "*", hour = "*", persistent = false)
  public void sendStockInformation() {

    TextMessage message;

    try (Connection connection = jmsFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(jmsQueue)) {

      JsonObject stockInformation = Json.createObjectBuilder()
          .add("stockCode", stockCodes[ThreadLocalRandom.current().nextInt(stockCodes.length)])
          .add("price", ThreadLocalRandom.current().nextDouble(1.0, 150.0))
          .add("timestamp", Instant.now().toEpochMilli()).build();

      message = session.createTextMessage();
      message.setText(stockInformation.toString());

      producer.send(message);

    } catch (JMSException e) {
      e.printStackTrace();
    }
  }
}

For sending messages, you first have to create a Connection via the JMS ConnectionFactory, then create a Session and finally a MessageProducer for the concrete topic or queue. You can access the required resources via their JNDI names as configured before.
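The Connection/Session/MessageProducer triple above is the classic JMS 1.1 API. As a side note, JMS 2.0 (part of Java EE 7 and later) offers a simplified API that collapses these steps into a single JMSContext. A sketch of how the timer bean could look with it, assuming the same injected resources (the class name StockPublisherJms20 is a hypothetical placeholder):

```java
import javax.annotation.Resource;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Singleton
public class StockPublisherJms20 {

  @Resource(lookup = "jms/__defaultConnectionFactory")
  private ConnectionFactory jmsFactory;

  @Resource(lookup = "jms/stocks")
  private Queue jmsQueue;

  @Schedule(second = "*/2", minute = "*", hour = "*", persistent = false)
  public void sendStockInformation() {
    // JMSContext replaces Connection + Session; JMSException becomes the
    // unchecked JMSRuntimeException, so no catch block is required here.
    try (JMSContext context = jmsFactory.createContext()) {
      // send(Destination, String) creates the TextMessage for us
      context.createProducer().send(jmsQueue, "{\"stockCode\":\"MSFT\"}");
    }
  }
}
```

The payload string above is a stand-in; in the showcase it would be the JSON-P stockInformation.toString() from the original code.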

Receiving messages is achieved with so-called Message-Driven Beans (MDB), which implement the MessageListener interface and are configured (which topic or queue to listen to) using the @MessageDriven annotation. In this example I’m sending and receiving the message within the same application (for simplicity) and store the provided information in the embedded H2 database of Payara:

@MessageDriven(name = "stockmdb", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/stocks"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class StockListener implements MessageListener {

  @PersistenceContext
  private EntityManager em;

  @Override
  public void onMessage(Message message) {

    TextMessage textMessage = (TextMessage) message;

    try {
      System.out.println("A new stock information arrived: " + textMessage.getText());

      JsonReader jsonReader = Json.createReader(new StringReader(textMessage.getText()));
      JsonObject stockInformation = jsonReader.readObject();

      em.persist(new StockHistory(stockInformation));
    } catch (JMSException e) {
      e.printStackTrace();
    }
  }

}

Once deployed to Payara, the console output looks like the following:

[#|2019-04-25T13:04:08.007+0000|INFO|Payara 5.191||_ThreadID=186;_ThreadName=orb-thread-pool-1 (pool #1): worker-2;_TimeMillis=1556197448007;_LevelValue=800;|
  A new stock information arrived: {"stockCode":"MSFT","price":148.55312721701924,"timestamp":1556197448002}|#]

[#|2019-04-25T13:04:10.012+0000|INFO|Payara 5.191||_ThreadID=188;_ThreadName=orb-thread-pool-1 (pool #1): worker-3;_TimeMillis=1556197450012;_LevelValue=800;|
  A new stock information arrived: {"stockCode":"AMZN","price":77.60891905653475,"timestamp":1556197450003}|#]

[#|2019-04-25T13:04:12.009+0000|INFO|Payara 5.191||_ThreadID=190;_ThreadName=orb-thread-pool-1 (pool #1): worker-4;_TimeMillis=1556197452009;_LevelValue=800;|
  A new stock information arrived: {"stockCode":"MSFT","price":8.593186941846369,"timestamp":1556197452002}|#]

The JPA entity StockHistory looks like the following:

@Entity
public class StockHistory {

  @Id
  @GeneratedValue
  private Long id;

  @Column(nullable = false)
  private String stockCode;

  @Column(nullable = false)
  private Double price;

  @Column(nullable = false)
  private Instant timestamp;

  public StockHistory(JsonObject json) {
    this.stockCode = json.getString("stockCode");
    this.price = json.getJsonNumber("price").doubleValue();
    this.timestamp = Instant.ofEpochMilli(json.getJsonNumber("timestamp").longValue());
  }

  // further constructors, getters & setters
}

For the sake of completeness, this is the persistence.xml for this small application:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.2" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_2.xsd">
  <persistence-unit name="prod" transaction-type="JTA">
    <properties>
      <property
        name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
    </properties>
  </persistence-unit>
</persistence>

You can find the full code on GitHub with a step-by-step guide to run this example locally on your machine with Docker. If you are looking for a simple JMS quickstart with Open Liberty, have a look at one of my previous posts.

Keep sending/receiving messages,

Phil



MicroProfile and Jakarta EE -- The Lightweight Stuff Session

by admin at April 24, 2019 04:54 AM

An itkonekt session for building a MicroProfile / Java EE application from scratch. Metrics, OpenAPI, FaultTolerance, Configuration, multi-server deployments, ThinWARs, and even reactive (easter) eggs included:

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.



Jakarta EE + MicroProfile Kickstarter

by admin at April 23, 2019 08:14 AM

To create a Jakarta EE (Java EE) + MicroProfile Maven project replace both placeholders GROUP_ID and PROJECT_NAME and execute:

mvn archetype:generate -o -DarchetypeGroupId=com.airhacks -DarchetypeArtifactId=javaee8-essentials-archetype -DarchetypeVersion=0.0.4 -Darchetype.interactive=false --batch-mode -Dversion=0.0.1 -DgroupId=GROUP_ID -DartifactId=PROJECT_NAME

The Maven archetype is available from Maven Central; the sources from: https://github.com/AdamBien/javaee8-essentials-archetype

To continuously build and deploy the project, also to multiple servers, see wad.sh (a self contained JAR-application).

Checkout both in 3:27 minutes:

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.



MicroProfile JWT with Keycloak

by Hayri Cicek at April 23, 2019 06:26 AM

In this tutorial, we will learn how to secure our services using MicroProfile JWT and Keycloak. Go to https://www.keycloak.org/downloads.html and download the latest Standalone server distribution. Unzip the file, open a new terminal window and navigate to the keycloak folder.


Kafka vs. JMS/MQ--airhacks.fm podcast

by admin at April 22, 2019 08:00 AM

Subscribe to the airhacks.fm podcast via: Spotify | iTunes | RSS

An airhacks.fm conversation with Andrew Schofield, Chief Architect, Event Streams at IBM about:

1982, Dragon 32 and Basic Programming with 12, starting with JDK 1.0, writing a JMS provider for WebSphere v6, no ceremony JMS, Apache Kafka considered simple, why writing a Kafka application is harder than a JMS application, there is a big architectural difference between Kafka and JMS, or message queuing and event stores, Kafka remembers historical data, JMS is about forwarding messages, with Kafka it is harder to write conversational systems, clustering singletons is hard, running Kafka on a single node is easy, "deliver once and only once" is the killer feature of persistent JMS queues, JMS topics are nicer - you can send messages to unknown receivers, the killer use cases for JMS and Kafka, JMS is good for system coordination and transaction integrity, Kafka is well suited for (IoT) event buffering and re-processability, 2PC, XA and the advantages of middleware, in distributed transactions everyone has to remember everything, we only need distributed and rock-solid persistence, Kubernetes pods are stateless, challenges of using Kafka, setting up for production can take months for an average Java programmer with JMS background, restarting Kafka brokers can be challenging, in Kafka you are communicating with the cluster, MQ is a collection of individual queue managers, in MQ there is a directory of resources which knows where the queues are hosted.
Andrew on github, and LinkedIn.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.


The state of Kotlin for Jakarta EE/MicroProfile traditional applications

April 22, 2019 12:00 AM

Kotlin EE

This Easter I had the opportunity to catch up with some R&D tasks in my main job; in short, I had to evaluate the feasibility of using Kotlin in regular Jakarta EE/MicroProfile projects, including:

  • Language benefits
  • Jakarta EE/MicroProfile API support, specifically CDI, JPA, JAX-RS
  • Maven support
  • Compatibility with my regular libraries (Flyway, SLF4J, Arquillian)
  • Regular application server support (e.g. Payara)
  • Tooling support (deployment, debugging, project import)
  • Cloud deployment by Docker

TL;DR version

Kotlin works as a language and plays well with Maven, and I’m able to use many of the EE APIs for "services"; however, the roadblock is not the language or the libraries, it is the tooling support.

The experience is superb on IntelliJ IDEA and everything works as expected; however, the IDE is a barrier if you’re not an IntelliJ user: Kotlin doesn’t play well with WTP on Eclipse (hence it doesn’t deploy to app servers), and Kotlin support for NetBeans is mostly dead.

About language

If you have a couple of years in the JVM space, you probably remember that Scala, Ceylon and Kotlin have all been considered "better Javas". I do a lot of development in different languages, including Java for backends, Kotlin for mobile, JavaScript on front-ends, and Bash for almost every automation task, so I knew the strengths of Kotlin from the mobile space, where it is especially important: most Java development on Android is currently a Java 7 + lambdas experience.

My top five features that could help you in your EE tasks are:
* One line functions
* Public by default
* Multiline Strings
* Companion objects
* Type inference

I'll try to exemplify these in a regular application.

The Jakarta EE/MicroProfile application

The demo application follows a simple structure, it includes MicroProfile Config, CDI, EJB, JPA, and JAX-RS, focused on a simple phrase collector/retrieval service. Interesting Kotlin features are highlighted.

Source code is available at GitHub repo https://github.com/tuxtor/integrum-ee.

Project configuration

To enable Kotlin support I basically followed Kotlin's Maven guide; it is not very explanatory, but if you have a little bit of experience with Maven this won't be a problem.

Besides adding the Kotlin dependencies to a regular EE pom.xml, a special configuration is needed for the all-open plugin, because Jakarta EE works with proxies that inherit from your original classes. To keep it simple, all important CDI, EJB and JAX-RS annotations were included as "open activators".

<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <version>${kotlin.version}</version>
    <executions>
        <execution>
            <id>compile</id>
            <goals> <goal>compile</goal> </goals>
            <configuration>
                <sourceDirs>
                    <sourceDir>${project.basedir}/src/main/kotlin</sourceDir>
                    <sourceDir>${project.basedir}/src/main/java</sourceDir>
                </sourceDirs>
            </configuration>
        </execution>
        <execution>
            <id>test-compile</id>
            <goals> <goal>test-compile</goal> </goals>
            <configuration>
                <sourceDirs>
                    <sourceDir>${project.basedir}/src/test/kotlin</sourceDir>
                    <sourceDir>${project.basedir}/src/test/java</sourceDir>
                </sourceDirs>
            </configuration>
        </execution>
    </executions>
    <configuration>
        <compilerPlugins>
            <plugin>all-open</plugin>
        </compilerPlugins>

        <pluginOptions>
            <option>all-open:annotation=javax.ws.rs.Path</option>
            <option>all-open:annotation=javax.enterprise.context.RequestScoped</option>
            <option>all-open:annotation=javax.enterprise.context.SessionScoped</option>
            <option>all-open:annotation=javax.enterprise.context.ApplicationScoped</option>
            <option>all-open:annotation=javax.enterprise.context.Dependent</option>
            <option>all-open:annotation=javax.ejb.Singleton</option>
            <option>all-open:annotation=javax.ejb.Stateful</option>
            <option>all-open:annotation=javax.ejb.Stateless</option>
        </pluginOptions>
    </configuration>

    <dependencies>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-maven-allopen</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>
</plugin>

Model

Table models are easily created using Kotlin's data classes. Note the default values on the parameters and the nullable type for the autogenerated ID, which uses an incrementing value from a table.

@Entity
@Table(name = "adm_phrase")
@TableGenerator(name = "admPhraseIdGenerator", table = "adm_sequence_generator", pkColumnName = "id_sequence", valueColumnName = "sequence_value", pkColumnValue = "adm_phrase", allocationSize = 1)
data class AdmPhrase(
        @Id
        @GeneratedValue(strategy = GenerationType.TABLE, generator = "admPhraseIdGenerator")
        @Column(name = "phrase_id")
        var phraseId:Long? = null,
        var author:String = "",
        var phrase:String = ""
)

After that, I also need to provide a repository. The repository is a classic CRUD component injectable with CDI; one-line methods keep it concise. The interesting part, however, is that Kotlin's nullability system plays well with CDI by using lateinit.

The most pleasant part is creating JPQL queries with multiline string declarations; in general I dislike the + signs polluting my queries in Java :).

@RequestScoped
class AdmPhraseRepository @Inject constructor() {

    @Inject
    private lateinit var em:EntityManager

    @PostConstruct
    fun init() {
        println ("Booting repository")
    }

    fun create(admPhrase:AdmPhrase) = em.persist(admPhrase)

    fun update(admPhrase:AdmPhrase) = em.merge(admPhrase)

    fun findById(phraseId: Long) = em.find(AdmPhrase::class.java, phraseId)

    fun delete(admPhrase: AdmPhrase) = em.remove(admPhrase)

    fun listAll(author: String, phrase: String): List<AdmPhrase> {

        val query = """SELECT p FROM AdmPhrase p
            where p.author LIKE :author
            and p.phrase LIKE :phrase
        """

        return em.createQuery(query, AdmPhrase::class.java)
                .setParameter("author", "%$author%")
                .setParameter("phrase", "%$phrase%")
                .resultList
    }

}

Controller

The model needs to be exposed through a controller, hence a JAX-RS activator is needed:

@ApplicationPath("/rest")
class RestApplication : Application()

That's all the code.

On the other side, implementing the controller looks a lot more like Java, especially if the right HTTP status codes are needed. Note that to pass the Java class to the UriBuilder, the special Kotlin syntax this::class.java is mandatory.

The Elvis operator can also be seen in action (in DELETE): if the entity is not found, the alternative return fires immediately, a nice idiomatic resource.

@Path("/phrases")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
class AdmPhraseController{

    @Inject
    private lateinit var admPhraseRepository: AdmPhraseRepository

    @Inject
    private lateinit var logger: Logger

    @GET
    fun findAll(@QueryParam("author") @DefaultValue("%") author: String,
                @QueryParam("phrase") @DefaultValue("%") phrase: String) =
                admPhraseRepository.listAll(author, phrase)

    @GET
    @Path("/{id:[0-9][0-9]*}")
    fun findById(@PathParam("id") id:Long) = admPhraseRepository.findById(id)

    @PUT
    fun create(phrase: AdmPhrase): Response {
        admPhraseRepository.create(phrase)
        return Response.created(UriBuilder.fromResource(this::class.java)
                .path(phrase.phraseId.toString()).build()).build()
    }

    @POST
    @Path("/{id:[0-9][0-9]*}")
    fun update(@PathParam("id") id: Long?, phrase: AdmPhrase): Response {
        if(id != phrase.phraseId) return Response.status(Response.Status.NOT_FOUND).build()

        val updatedEntity = admPhraseRepository.update(phrase)
        return Response.ok(updatedEntity).build()
    }

    @DELETE
    @Path("/{id:[0-9][0-9]*}")
    fun delete(@PathParam("id") id: Long): Response {
        val updatedEntity = admPhraseRepository.findById(id) ?:
            return Response.status(Response.Status.NOT_FOUND).build()
        admPhraseRepository.delete(updatedEntity)
        return Response.ok().build()
    }
}

To try MicroProfile, a second controller is created that tries to read JAVA_HOME:

@Path("/hello")
class HelloController{

    @Inject
    @ConfigProperty(name ="JAVA_HOME", defaultValue = "JAVA_HOME")
    lateinit var javaHome:String

    @GET
    fun doHello() = "There is no place like $javaHome"

}

Utilities

To create a real test, four "advanced" components were included: an entity manager producer for CDI components, a Flyway bootstrapper with a @Startup EJB, a log producer for SLF4J, and a "simple" test with Arquillian.

The producer itself is pretty similar to its Java equivalent; it only takes advantage of one-line methods.

@ApplicationScoped
class EntityManagerProducer {

    @PersistenceUnit
    private lateinit var entityManagerFactory: EntityManagerFactory

    @Produces
    @Default
    @RequestScoped
    fun create(): EntityManager = this.entityManagerFactory.createEntityManager()

    fun dispose(@Disposes @Default entityManager: EntityManager) {
        if (entityManager.isOpen) {
            entityManager.close()
        }
    }
}

The Flyway migration is implemented as an EJB so it fires every time this application is deployed (and, of course, to test EJB). Since Kotlin doesn't have a try-with-resources construct, the resource management is implemented with a let block, making it really readable. Besides this, if there is a problem the data source will be null and the let block won't be executed.

@ApplicationScoped
@Singleton
@Startup
class FlywayBootstrapper{

    @Inject
    private lateinit var logger:Logger

    @Throws(EJBException::class)
    @PostConstruct
    fun init() {

        val ctx = InitialContext()
        val dataSource = ctx.lookup("java:app/jdbc/integrumdb") as? DataSource

        dataSource?.let {
            val flywayConfig = Flyway.configure()
                    .dataSource(it)
                    .locations("db/postgresql")

            val flyway = flywayConfig.load()
            val migrationInfo = flyway.info().current()

            if (migrationInfo == null) {
                logger.info("No existing database at the actual datasource")
            }
            else {
                logger.info("Found a database with the version: ${migrationInfo.version} : ${migrationInfo.description}")
            }

            flyway.migrate()
            logger.info("Successfully migrated to database version: {}", flyway.info().current().version)
            it.connection.close()
        }
        ctx.close()
    }
}

To create a non-empty database, a PostgreSQL migration was created at db/postgresql in the project resources.

CREATE TABLE IF NOT EXISTS public.adm_sequence_generator (
    id_sequence VARCHAR(75) DEFAULT '' NOT NULL,
    sequence_value BIGINT DEFAULT 0 NOT NULL,
    CONSTRAINT adm_sequence_generator_pk PRIMARY KEY (id_sequence)
);
COMMENT ON COLUMN public.adm_sequence_generator.id_sequence IS 'normal_text - people name, items short name';
COMMENT ON COLUMN public.adm_sequence_generator.sequence_value IS 'integuer_qty - sequences, big integer qty';


CREATE TABLE IF NOT EXISTS public.adm_phrase
(
    phrase_id BIGINT DEFAULT 0 NOT NULL,
    author varchar(25) DEFAULT '' NOT NULL,
    phrase varchar(25) DEFAULT '' NOT NULL,
    CONSTRAINT adm_phrase_pk PRIMARY KEY (phrase_id)
);

insert into adm_phrase values (1, 'Twitter','Kotlin is cool');
insert into adm_phrase values (2, 'TIOBE','Java is the king');

The log producer also benefits from one-line methods:

open class LogProducer{

    @Produces
    fun produceLog(injectionPoint: InjectionPoint): Logger =
            LoggerFactory.getLogger(injectionPoint.member.declaringClass.name)
}

Finally, a test class is also implemented. Since Kotlin doesn't have static methods, a companion object with the @JvmStatic annotation is created in the class; otherwise the test won't be executed. This is probably one of the cases where a Kotlin program gets bigger than its Java equivalent.

@RunWith(Arquillian::class)
class AdmPhraseRepositoryIT {

    @Inject
    private lateinit var admPhraseRepository: AdmPhraseRepository


    companion object ArquillianTester{

        @JvmStatic
        @Deployment
        fun bootstrapTest(): WebArchive {
            val war = createBasePersistenceWar()
                    .addClass(AdmPhraseRepository::class.java)
                    .addAsWebInfResource("test-beans.xml", "beans.xml")
            println(war.toString(true))

            return war
        }
    }

    @Test
    fun testPersistence() {
        val phrase = AdmPhrase( author = "Torvalds", phrase = "Intelligence is the ability to avoid doing work, yet getting the work done")
        admPhraseRepository.create(phrase)
        assertNotNull(phrase.phraseId)
    }
}

Testing the application with IntelliJ IDEA and Payara 5

If the application is executed/debugged from IntelliJ IDEA, everything works as expected. Honestly, I wasn't expecting an easy road, but this worked really well. For instance, a debugging session is initiated with Payara 5:

Debugging as expected

I could also retrieve the results from the RDBMS:

Json Phrases

And my hello world with MicroProfile works too:

Hello Microprofile.

Testing the application on Oracle Cloud

As an Oracle Groundbreaker Ambassador I have access to instances on Oracle Cloud, hence I created and deployed the same application just to check whether there are any other caveats.

Packaging applications with Payara Micro is very easy: basically, you copy your application to a predefined location:

FROM payara/micro
COPY target/integrum-ee.war $DEPLOY_DIR

A scalable Docker Compose descriptor is needed to provide a simple load balancer and an RDBMS; this step is also applicable to Kubernetes, Docker Swarm, Rancher, etc.

version: '3.1'
services:
  db:
    image: postgres:9.6.1
    restart: always
    environment:
      POSTGRES_PASSWORD: informatica
      POSTGRES_DB: integrum
    networks:
      - webnet
  web:
    image: "integrum-ee:latest"
    restart: always
    environment:
      JDBC_URL: 'jdbc:postgresql://db:5432/integrum'
      JAVA_TOOL_OPTIONS: '-Xmx64m'
      POSTGRES_PASSWORD: informatica
      POSTGRES_DB: integrum
    ports:
      - 8080
    networks:
      - webnet
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
    ports:
      - "4000:4000"
    networks:
      - webnet
networks:
  webnet:

A simple nginx.conf file is created just to balance access to the (eventually scalable) Payara workers:

user  nginx;

events {
    worker_connections   1000;
}
http {
        server {
              listen 4000;
              location / {
                proxy_pass http://web:8080;
              }
        }
}

The JTA resource is created via glassfish-resources.xml, expressing the RDBMS credentials with environment variables:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Resource Definitions//EN" "http://glassfish.org/dtds/glassfish-resources_1_5.dtd">
<resources>
    <jdbc-connection-pool name="postgres_appPool" allow-non-component-callers="false" associate-with-thread="false" connection-creation-retry-attempts="0" connection-creation-retry-interval-in-seconds="10" connection-leak-reclaim="false" connection-leak-timeout-in-seconds="0" connection-validation-method="table" datasource-classname="org.postgresql.ds.PGSimpleDataSource" fail-all-connections="false" idle-timeout-in-seconds="300" is-connection-validation-required="false" is-isolation-level-guaranteed="true" lazy-connection-association="false" lazy-connection-enlistment="false" match-connections="false" max-connection-usage-count="0" max-pool-size="200" max-wait-time-in-millis="60000" non-transactional-connections="false" ping="false" pool-resize-quantity="2" pooling="true" res-type="javax.sql.DataSource" statement-cache-size="0" statement-leak-reclaim="false" statement-leak-timeout-in-seconds="0" statement-timeout-in-seconds="-1" steady-pool-size="8" validate-atmost-once-period-in-seconds="0" wrap-jdbc-objects="true">
        <property name="URL" value="${ENV=JDBC_URL}"/>
        <property name="User" value="postgres"/>
        <property name="Password" value="${ENV=POSTGRES_PASSWORD}"/>
        <property name="DatabaseName" value="${ENV=POSTGRES_DB}"/>
        <property name="driverClass" value="org.postgresql.Driver"/>
    </jdbc-connection-pool>
    <jdbc-resource enabled="true" jndi-name="java:app/jdbc/integrumdb" object-type="user" pool-name="postgres_appPool"/>
</resources>

After that, invoking the compose file will bring the application and its infrastructure to life. Now it's time to test it on a real cloud. First, the image is published to Oracle's Container Registry:

Oracle Container Registry

Once the container image is available at Oracle Cloud, it becomes usable for any kind of orchestration. For simplicity I'm running the compose file directly on a bare CentOS VM image (the Wercker+Docker+Kubernetes setup is an entirely different tutorial).

Oracle Container Registry

To use it on Oracle's infrastructure, the image ID should be switched to the full name image: iad.ocir.io/tuxtor/microprofile/integrum-ee:1. In the end the image is pulled, and the final result is our Kotlin EE application running on Oracle Cloud.

Oracle Cloud

Testing the application with NetBeans

JetBrains dropped Kotlin support for NetBeans in 2017. I tried the plugin just for fun on NetBeans 11, but it hangs NetBeans at startup while loading the Kotlin support, so NetBeans tests were not possible.

Kotlin NetBeans

Testing the application with Eclipse for Java EE

JetBrains currently develops a Kotlin plugin for Eclipse, and it seems to work fine with pure Kotlin projects; however, the story is very different for Jakarta EE.

After importing the Maven project (previously created in IntelliJ IDEA), many default facets are included in the project, CDI being the most problematic: the CDI builder step takes a lot of time when building the project.

CDI slow

Besides that, the Jakarta EE plugins/parsers work on the source code and not the class files, hence the "advanced" menus don't show content, a rather cosmetic detail anyway. If Kotlin sources are located under src/main/kotlin, as suggested by JetBrains' Maven guide, they are ignored by default. Hence, I took the easy road and moved all the code to src/main/java.

Eclipse facets

Kotlin syntax highlighting works fine, as expected from any language on Eclipse.

Kotlin syntax

However, if you try to deploy the application it simply doesn't work, because the Eclipse compiler does not produce class files for Kotlin sources. A bug was raised in 2017, and many other users report issues with Tomcat and WildFly. Basically, Eclipse WTP is not compatible with Kotlin, and deploying/debugging Kotlin code won't work on an application server.

Final considerations

In the end I was a little disappointed. Kotlin has great potential for Jakarta EE, but it only works if you use IntelliJ IDEA. I'm not sure about IntelliJ CE, but as stated on the JetBrains website, EE support is only included in the Ultimate Edition. Maybe this could change with more community involvement, but I'm not sure that is the right direction considering project Amber.


April 22, 2019 12:00 AM

#HOWTO: RESTEasy (WildFly – JAX-RS 2.1) file up- and downloading

by rieckpil at April 21, 2019 03:38 PM

In my latest blog post, I demonstrated a solution for up- and downloading files with Jersey (JAX-RS 2.1) on Payara. As WildFly does not rely on Jersey, the JAX-RS reference implementation, but uses RESTEasy instead, I'll show you a quick example for up- and downloading files with RESTEasy on WildFly.

Setting up the backend

For this blog post, I'm using a classic Java EE 8 Maven application with RESTEasy's multipart-provider as an additional dependency in provided scope (no need to package the library within the .war, as WildFly already contains this lib):

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>rest-easy-file-uploading-and-downloading</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-multipart-provider</artifactId>
      <version>3.6.3.Final</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>rest-easy-file-uploading-and-downloading</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The JAX-RS configuration is pretty straightforward:

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {

}

For storing the uploaded file I’m using a simple JPA entity called FileUpload:

@Entity
public class FileUpload {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  private String fileName;

  private String contentType;

  @Lob
  private byte[] data;

       // constructors, getters & setters
}

The file handling within the JAX-RS endpoint method is not as straightforward as with Jersey, as you have to work with the MultipartFormDataInput interface and manually extract the filename and content type:

@Path("files")
@Stateless
public class FileUploadResource {

  @PersistenceContext
  private EntityManager em;


  @POST
  @Consumes(MediaType.MULTIPART_FORM_DATA)
  public void uploadFile(MultipartFormDataInput incomingFile) throws IOException {

    InputPart inputPart = incomingFile.getFormDataMap().get("file").get(0);
    InputStream uploadedInputStream = inputPart.getBody(InputStream.class, null);

    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int len;

    while ((len = uploadedInputStream.read(buffer)) != -1) {
      byteArrayOutputStream.write(buffer, 0, len);
    }

    FileUpload upload = new FileUpload(
        getFileNameOfUploadedFile(inputPart.getHeaders().getFirst("Content-Disposition")),
        getContentTypeOfUploadedFile(inputPart.getHeaders().getFirst("Content-Type")),
        byteArrayOutputStream.toByteArray());

    em.persist(upload);
  }

}
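The copy loop in the endpoint above, which drains the uploaded InputStream into a byte[], can be tried in isolation in plain Java SE. The following is a minimal sketch; the class name and sample data are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamToBytesDemo {

    // Same pattern as in the resource: read 1024-byte chunks until EOF
    static byte[] toBytes(InputStream uploadedInputStream) throws IOException {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;
        while ((len = uploadedInputStream.read(buffer)) != -1) {
            byteArrayOutputStream.write(buffer, 0, len);
        }
        return byteArrayOutputStream.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = toBytes(new ByteArrayInputStream("hello upload".getBytes()));
        System.out.println(data.length); // 12
    }
}
```

On Java 9 and later you could replace the loop with InputStream.readAllBytes(), but this post targets Java 8, where the manual buffer loop is the idiomatic approach.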

For extracting the filename I wrote a small helper method which parses the HTTP header Content-Disposition:

/**
 * Parses the HTTP Content-Disposition header to extract the filename. If the
 * header is missing or does not contain a filename attribute, 'unknown' is
 * returned.
 *
 * Sample HTTP Content-Disposition header:
 *
 * Content-Disposition=[form-data;filename="foo.txt"]
 *
 * @param contentDispositionHeader
 * @return the name of the uploaded file
 */
private String getFileNameOfUploadedFile(String contentDispositionHeader) {

  if (contentDispositionHeader == null || contentDispositionHeader.isEmpty()) {
    return "unknown";
  } else {
    String[] contentDispositionHeaderTokens = contentDispositionHeader.split(";");

    for (String contentDispositionHeaderToken : contentDispositionHeaderTokens) {
      if (contentDispositionHeaderToken.trim().startsWith("filename")) {
        return contentDispositionHeaderToken.split("=")[1].trim().replaceAll("\"", "");
      }
    }
    return "unknown";
  }
}
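The parsing logic can be exercised outside the server as well. Here is a hedged standalone sketch (class name and sample header are invented for illustration) that mirrors the helper above:

```java
public class ContentDispositionDemo {

    // Mirrors the helper above: split the header on ';' and pick the filename token
    static String extractFileName(String contentDispositionHeader) {
        if (contentDispositionHeader == null || contentDispositionHeader.isEmpty()) {
            return "unknown";
        }
        for (String token : contentDispositionHeader.split(";")) {
            if (token.trim().startsWith("filename")) {
                return token.split("=")[1].trim().replaceAll("\"", "");
            }
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(extractFileName("form-data; name=\"file\"; filename=\"report.pdf\"")); // report.pdf
        System.out.println(extractFileName(null)); // unknown
    }
}
```

Note that the simple split("=") would truncate a filename that itself contains an '=' character; for the typical upload scenario of this example that limitation is acceptable.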

Extracting the content type of the file is even simpler:

/**
* Parses the HTTP Content-Type header to extract the type (e.g. image/jpeg) of
* the file. If the header is missing or empty, 'unknown' is returned.
* 
* Sample HTTP Content-Type header:
* 
* Content-Type=[image/jpeg]
* 
* @param contentTypeHeader
* @return the type of the uploaded file
*/
private String getContentTypeOfUploadedFile(String contentTypeHeader) {
  if (contentTypeHeader == null || contentTypeHeader.isEmpty()) {
    return "unknown";
  } else {
    return contentTypeHeader.replace("[", "").replace("]", "");
  }
}

The required code for downloading a file does not depend on any proprietary RESTEasy API and is plain JAX-RS and Java EE standard (in the example I'm offering a random file for the user to download):

@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Response getRandomFile() {

  Long amountOfFiles = em.createQuery("SELECT COUNT(f) FROM FileUpload f", Long.class).getSingleResult();
  Long randomPrimaryKey;

  if (amountOfFiles == null || amountOfFiles == 0) {
    return Response.ok().build();
  } else if (amountOfFiles == 1) {
    randomPrimaryKey = 1L;
  } else {
    randomPrimaryKey = ThreadLocalRandom.current().nextLong(1, amountOfFiles + 1);
  }

  FileUpload randomFile = em.find(FileUpload.class, randomPrimaryKey);

  return Response.ok(randomFile.getData(), MediaType.APPLICATION_OCTET_STREAM)
      .header("Content-Disposition", "attachment; filename=" + randomFile.getFileName()).build();
}
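The random-key selection relies on ThreadLocalRandom.nextLong(origin, bound) being inclusive of the origin and exclusive of the bound, which is why the code passes amountOfFiles + 1. A small standalone sketch (class name invented for illustration) confirms the range:

```java
import java.util.concurrent.ThreadLocalRandom;

public class RandomKeyDemo {

    // Same selection logic as in getRandomFile: IDENTITY keys start at 1,
    // and nextLong(origin, bound) excludes the bound, hence the + 1
    static long randomPrimaryKey(long amountOfFiles) {
        if (amountOfFiles == 1) {
            return 1L;
        }
        return ThreadLocalRandom.current().nextLong(1, amountOfFiles + 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            long key = randomPrimaryKey(5);
            if (key < 1 || key > 5) {
                throw new AssertionError("key out of range: " + key);
            }
        }
        System.out.println("all keys stayed within [1, 5]");
    }
}
```

Keep in mind that this scheme only works reliably while the IDENTITY keys are contiguous; once rows have been deleted, em.find may hit a gap and return null.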

You’ll find the full code base on GitHub with an HTML page to upload files and test the implementation.

Have fun up- and downloading files on WildFly,

Phil


by rieckpil at April 21, 2019 03:38 PM

Sending an InputStream to JAX-RS Resource

by admin at April 19, 2019 10:46 AM

A JAX-RS resource accepting a plain InputStream:

@Path("uploads")
public class UploadsResource {

    @POST
    @Consumes("*/*")
    public void upload(InputStream stream) throws IOException {
        //consume input stream
        System.out.println("Read: " + stream.read());

    }    
}

...will consume any binary stream (e.g. file upload) of data as:

import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import org.junit.Before;
import org.junit.Test;

public class UploadsResourceIT {

    private WebTarget tut;

    @Before
    public void init() {
        Client client = ClientBuilder.newClient();
        this.tut = client.target("http://localhost:8080/jaxrs-streaming/resources/uploads");
    }

    @Test
    public void sendStream() {
        InputStream stream = //...

        Response response = this.tut.
                request().
                post(Entity.entity(stream, MediaType.APPLICATION_OCTET_STREAM));
        assertThat(response.getStatus(), is(204));
    }    
}
The System Test is a Java SE client and therefore requires a JAX-RS API implementation (in our example: Apache CXF):

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>3.3.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-extension-providers</artifactId>
    <version>3.3.1</version>
    <scope>test</scope>
</dependency>  

Project created with javaee8-essentials-archetype, the 3kB ThinWAR was built and deployed with: wad.sh in 2329ms

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 19, 2019 10:46 AM

Asynchronous Returns with CompletableFuture with JAX-RS 2.1 / Java EE 8

by admin at April 18, 2019 07:32 AM

An expensive method:

public class Messenger {
    public String hello() {
        //heavy lifting
        return "hello!";
    }
}    
...can be directly published asynchronously via a JAX-RS resource:

import java.util.concurrent.CompletableFuture;
import static java.util.concurrent.CompletableFuture.supplyAsync;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("messages")
public class MessagesResource {

    @Inject
    Messenger messenger;

    @GET
    public CompletableFuture<String> ping() {
        return supplyAsync(this.messenger::hello);
    }
}    
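Returning a CompletableFuture lets the JAX-RS runtime suspend the request and complete the response when the future does. The behavior of supplyAsync itself can be observed in plain Java SE; the following is a minimal sketch with an invented class name. Note that supplyAsync without an executor argument uses the common ForkJoinPool; inside a container you may prefer passing a container-managed executor such as a ManagedExecutorService.

```java
import java.util.concurrent.CompletableFuture;
import static java.util.concurrent.CompletableFuture.supplyAsync;

public class SupplyAsyncDemo {

    // Stands in for the expensive Messenger#hello method
    static String hello() {
        return "hello!";
    }

    public static void main(String[] args) {
        // supplyAsync schedules the supplier on the common ForkJoinPool;
        // join() blocks the caller until the value is available
        CompletableFuture<String> future = supplyAsync(SupplyAsyncDemo::hello);
        System.out.println(future.join()); // hello!
    }
}
```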

Project created with javaee8-essentials-archetype, the 4kB ThinWAR was built and deployed with: wad.sh in 2937ms

Big thanks to @OndroMih for the hint during the continuous @czjug hacking.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 18, 2019 07:32 AM

Helidon flies faster with GraalVM

by dmitrykornilov at April 17, 2019 02:55 PM

GraalVM is an open source, high-performance, polyglot virtual machine developed by Oracle Labs. GraalVM offers multiple features, including the ability to compile Java code ahead-of-time into a native executable binary. The binary can run natively on the operating system, without a Java runtime.

A native executable offers important benefits, like shorter startup time and lower memory footprint. In addition, when a native executable runs within a container, the size of the container image is reduced (when compared with the same Java application running in a traditional JVM), because the container image doesn’t include a Java runtime. An optimized container size is critical for deploying apps to the cloud.

We are pleased to announce that, starting with version 1.0.3, Helidon supports the GraalVM native-image capability. Now you can easily compile your Helidon application into a native executable with all the advantages described earlier. For example, the sample application described in this article has a startup time measured in tens of milliseconds and a macOS executable size of only 21 MB. Both of those numbers are significantly higher when a traditional JVM is used.

On the other hand, everything is always a tradeoff. Long-running applications on traditional JVMs still demonstrate better performance than GraalVM native executables due to runtime optimization. The key word here is long-running; for short-running applications like serverless functions, native executables have a performance advantage. So, you need to decide between fast startup time and small size (plus the additional step of building the native executable) versus better performance for long-running applications.

Compiling to a native binary introduces certain restrictions for your application. Because of native compilation, you must identify all the points in your code where reflection is used, which is not possible if CDI runtime injection is used. Helidon MP supports MicroProfile standards, which require CDI 2.0, and the Helidon CDI cloud extensions are implemented as CDI plugins. We didn't want to limit or complicate the user experience with Helidon MP, so GraalVM is supported only in Helidon SE. It is a perfect fit, because Helidon SE is designed to build small, reactive microservices. It doesn't use dependency injection, annotations and other such magic. All the Helidon SE features and components (WebServer, Config, Security, Metrics, and Health Checks) are compatible with GraalVM native images.

Helidon supports two convenient GraalVM profiles:

  • The local profile is for users who have GraalVM installed locally and want to build a native executable for the same OS that they work on.
  • The Docker profile is for users who don't have GraalVM installed locally or want to build a native executable for Linux while using macOS locally.

Due to the possibility of backwards incompatible changes in GraalVM before the final release, the GraalVM support in Helidon is experimental. It’s been tested on GraalVM version RC13; it is not guaranteed to work on other GraalVM versions.

To get you started, Helidon has a quickstart sample, which you can use as a template for your Helidon-on-GraalVM application.

Generate the project using a Maven archetype:

mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.0.3 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart \
    -Dpackage=io.helidon.examples.quickstart

To build using the local profile, you need to download GraalVM, extract it to some folder on your computer, and define the GRAALVM_HOME environment variable pointing to it.

If you use macOS, GRAALVM_HOME must point to the Contents/Home directory inside the GraalVM root, as shown below.

export GRAALVM_HOME=~/graalvm-ce-1.0.0-rc13/Contents/Home

Build the project:

mvn package -Pnative-image

Run it:

./target/helidon-quickstart

If you want to use the Docker profile, a GraalVM installation is not required. Use the following command to build a Docker image:

docker build -t helidon-native -f Dockerfile.native .

Run it as:

docker run --rm -p 8080:8080 helidon-native:latest

Helidon is a Java framework designed from scratch for writing microservices. It provides everything that you need to create small and efficient Java applications. It offers two programming models: reactive and imperative, the latter supporting MicroProfile.

With GraalVM support, Helidon is one of the best solutions in the market for developing cloud-native microservices!

Do join our open-source community! We care about our users, and are ready to support you and answer your questions.

Update

We tested it with the newly released GraalVM RC16 (both CE and EE versions) and it works perfectly.


by dmitrykornilov at April 17, 2019 02:55 PM

EE Security in Relation to JASPIC, JACC and LoginModules/Realms

by Arjan Tijms at April 16, 2019 01:17 PM

Java EE 8 introduced a new API called the Java EE Security API (see JSR 375) or "EE Security" in short.


This new API, perhaps unsurprisingly given its name, deals with security in Java EE.  Security in Java EE is obviously not a new thing though, and in various ways it has been part of the platform since its inception.


So what is exactly the difference between EE Security and the existing security facilities in Java EE? In this article we'll take a look at that exact question.



by Arjan Tijms at April 16, 2019 01:17 PM

Payara On Tour in Japan!

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 15, 2019 05:30 AM

We are extremely excited to announce that we will be touring Japan in just a few short weeks. We have teamed up with some of the most prominent Java User Groups in the country and will be delivering a range of talks.


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 15, 2019 05:30 AM

Set Your Proxy Client On Synology NAS Through cli

April 14, 2019 06:11 PM

You want to be able to configure your proxy client settings with the terminal but don’t know how…

Challenge

I use a couple of different proxy servers I've created with Docker, and I can (de)activate them through the command line at will. The trouble came when I tried to do the same with the proxy client settings, as shown in the screenshot.

Normally on Linux machines you can just set environment variables like http_proxy and https_proxy, but it seems to work a bit differently on a Synology NAS.

So I was able to start and stop the Docker proxies easily on my iPhone with a simple SSH app, but not the proxy client of Synology itself.

Finally I got tired of it and dug my heels in and did some reverse engineering :-)

Solution

Do it with the synowebapi.

There is an /etc/proxy.conf file where DSM saves its data to be retrieved when needed, but when I tried to just edit that file and change some values, DSM did not pick it up. Hmmm, not what I wanted.

OK, let's do it through the web interface and use the developer view to watch the network traffic…
Yup, that seemed to work, and it resulted in the script below.

~/bin/set-proxy:

#!/bin/sh
PORT=8118
HOST=localhost
ENABLE=true
while getopts ":p:h:ad?" opt; do
case ${opt} in
h) HOST=$OPTARG
;;
p) PORT=$OPTARG
;;
a) ENABLE=true
;;
d) ENABLE=false
;;
\?) echo "Usage: set-proxy [-p PORT_NUMBER] [-h HOSTNAME] [-a] [-d] [-?]"
echo " -p PORT_NUMBER : sets the portnumber (default: 8118)"
echo " -h HOSTNAME : sets the hostname (default: localhost)"
echo " -a : activates the proxy client (=default)"
echo " -d : disables the proxy client"
echo " -? : this help message"
exit 0
;;
esac
done
/usr/syno/bin/synowebapi --exec \
"api"="SYNO.Core.Network.Proxy" \
"method"="set" \
"version"="1" \
"enable"=${ENABLE:-true} \
"http_host"="${HOST}" \
"http_port"="${PORT}" \
"enable_different_host"=false \
"enable_auth"=false \
"enable_bypass"=false \
"https_host"="${HOST}" \
"https_port"="${PORT}"
/usr/syno/bin/synowebapi --exec \
"api"="SYNO.Core.Network.Proxy" \
"method"="get"

Note that my ~/bin folder is always on my PATH.

When I saved the above script in set-proxy and ran it without params

set-proxy

the result was something like:

[Line 259] Exec WebAPI: api=SYNO.Core.Network.Proxy, version=1, method=set, param={"enable":true,"enable_auth":false,"enable_bypass":false,"enable_different_host":false,"http_host":"localhost","http_port":8118,"https_host":"localhost","https_port":8118}, runner=
{
"httpd_restart" : false,
"success" : true
}
[Line 259] Exec WebAPI: api=SYNO.Core.Network.Proxy, version=1, method=get, param={}, runner=
{
"data" : {
"enable" : true,
"enable_auth" : false,
"enable_bypass" : false,
"enable_different_host" : false,
"http_host" : "localhost",
"http_port" : "8118",
"https_host" : "localhost",
"https_port" : "8118",
"password" : "\t\t\t\t\t\t\t\t",
"username" : ""
},
"httpd_restart" : false,
"success" : true
}

Other command examples:

Use localhost with port 3128 and enable it

set-proxy -p 3128 -a

disable the proxy:

set-proxy -d

Use some other host on port 92000 and enable it

set-proxy -a -p 92000 -h www.example.com

Note that you can pass -a explicitly, but as enabling is the default you can drop it too.

Testing

I did quite a few tests with this and it seems to work just fine.

With the following command you can see your current external IP address:

curl -s https://ipecho.net/plain

Start with the above command to see what your IP is before setting a proxy, then start the proxy and try again…

An example test was to run my ivonet/nordvpn-tor-provoxy docker image on port 8118 and to set-proxy to it and then do
the curl as described above.

It all worked just as hoped.

Goal

Well, my goal of being able to configure the proxy client through a CLI interface has been achieved. The commands need to be executed as root (sudo), which makes scripting a bit more difficult, but not insurmountable, as I have already written a blog post about how to Run A Script As root Without sudo.

If you have questions or comments, leave them below.
They are always welcome.


April 14, 2019 06:11 PM

Quarkus and ThinJARs--airhacks.fm podcast

by admin at April 14, 2019 04:15 AM

Subscribe to airhacks.fm podcast via: RSS iTunes

An airhacks.fm conversation with Stuart Douglas (@stuartwdouglas) about:

starting with Visual Basic in high school with the goal to build games, then a quick transition to C++, creating Tetris from scratch in weeks in C++, building his first commercial financial planning application with PHP, starting with Java 1.5 and annotations in 2007, Java is popular in Australia, building Seam applications with JBoss 4, contributing to Weld in spare time, improving the performance and startup performance of Weld, working for RedHat as a JBoss AS 7 developer, JBoss is more than the application server and the advent of WildFly, the branding cleanup, creating Undertow, WildFly was shipped with deactivated EJB pooling, too small EJB pools can cause performance issues, how to kill EJBs with CDI, EJB vs. CDI, interview with Pavel Samolysov and EJB vs. CDI performance comparison, quarkus is not using reflection for injection, a small team (8 people) started quarkus to leverage GraalVM, the goal of quarkus is to make a subset of Java EE natively compilable to an "unikernel", updating the cloud platform without recompiling the app, serverless with quarkus, serverless without the function disadvantage, 20MB self-contained native images, building Java EE / Jakarta EE unikernels, extremely fast start times for Java EE applications, native images are running with a fraction of the RAM, at one point in time quarkus might be commercially supported by RedHat, CDI portable extensions are not supported for now, the quarkus wizard is not based on a Maven archetype, which is a great idea, Maven is only one of many possible entry points to quarkus, a really good developer experience was always the main goal, hot reload is a must, currently the classloader with the "business" part is just dropped and the app is reloaded, adding dependencies via CLI or pom.xml, quarkus ThinJARs are compatible with the ThinWAR idea, FatJAR builds have to be slower, packaging all dependencies into a single JAR, using the Chrome DevTools Protocol for hot reloading the browser, misusing quarkus for building command line tools, community extensions are on the roadmap, quarkus is going to be well integrated with OpenShift, the quarkus forum.
Stuart on Twitter: @stuartwdouglas, and on GitHub.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 14, 2019 04:15 AM

#HOWTO: Up- and download files with Java EE and Web Components

by rieckpil at April 13, 2019 08:51 AM

In one of my last blog posts, I showed you how to upload and download files with React and Spring Boot. Today I want to give you a quickstart example of how to achieve the same with Java EE and Web Components (standards ftw!). In this blog post, I'll be using Java 8, Java EE 8, Payara 5.191 as the application server, and no framework for the frontend (except Bootstrap for some styling).


The final result will look like the following:


Setting up the backend

As the JAX-RS specification currently doesn't provide a convenient way to work with multipart data (except working with the HttpServletRequest directly), I'm using some proprietary Jersey code. Similar annotations/methods are available for Apache CXF and RESTEasy as well.
The pom.xml for the backend looks like the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>backend</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jersey.core</groupId>
      <artifactId>jersey-server</artifactId>
      <version>2.27</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jersey.media</groupId>
      <artifactId>jersey-media-multipart</artifactId>
      <version>2.27</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>backend</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

The two Jersey dependencies are required for the proprietary annotations, which you'll see in the next step. They are marked with scope provided, as they are already packaged within Payara and therefore don't pollute the .war file, keeping its size thin.
To register the MultipartFeature of Jersey for our JAX-RS endpoints, I’m using the programmatic way like the following:

@ApplicationPath("resources")
public class JAXRSConfiguration extends ResourceConfig {

  public JAXRSConfiguration() {
    packages("de.rieckpil.blog").register(MultiPartFeature.class);
  }

}

The ResourceConfig class is part of the jersey-server dependency. You can achieve the same with registering the feature in the web.xml.
With this feature enabled, we can now make use of the @FormDataParam annotation to parse and access the incoming FormData. In addition, Jersey provides metadata of the uploaded file with the class FormDataContentDisposition. As the content type of the incoming request won't be the classic application/json, we have to add the correct MediaType to the method:

@Path("files")
@Stateless
public class FileUploadResource {

    @PersistenceContext
    private EntityManager em;

    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public void uploadFile(@FormDataParam("file") InputStream uploadedInputStream,
                @FormDataParam("file") FormDataContentDisposition fileDetail) throws IOException {

        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;

        while ((len = uploadedInputStream.read(buffer)) != -1) {
            byteArrayOutputStream.write(buffer, 0, len);
        }

        FileUpload upload = new FileUpload(fileDetail.getFileName(), fileDetail.getType(),
                byteArrayOutputStream.toByteArray());

        em.persist(upload);
    }

    // ...
}

Within the method, I'm converting the incoming InputStream to a byte[] and storing it in the embedded H2 database of Payara using this JPA entity:

@Entity
public class FileUpload {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  private String fileName;

  private String contentType;

  @Lob
  private byte[] data;

    // constructors, getters and setters ...
}

Besides uploading, I’m also providing an endpoint to download a file. In this example, a random file is retrieved from the database and returned as MediaType.APPLICATION_OCTET_STREAM. In addition, the Content-Disposition header contains the name of the file.

@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Response getRandomFile() {

  Long amountOfFiles = em.createQuery(
         "SELECT COUNT(f) FROM FileUpload f", Long.class).getSingleResult();
  Long randomPrimaryKey;

  if (amountOfFiles == null || amountOfFiles == 0) {
    return Response.ok().build();
  } else if (amountOfFiles == 1) {
    randomPrimaryKey = 1L;
  } else {
    randomPrimaryKey = ThreadLocalRandom.current().nextLong(1, amountOfFiles + 1);
  }

  FileUpload randomFile = em.find(FileUpload.class, randomPrimaryKey);

  return Response.ok(randomFile.getData(), MediaType.APPLICATION_OCTET_STREAM)
      .header("Content-Disposition", "attachment; filename=" + randomFile.getFileName()).build();
}

To make the interaction with the frontend work, we have to enable CORS, as the browser will otherwise block the request. This is achieved with a JAX-RS @Provider which intercepts the HTTP response and adds custom HTTP headers. For extracting the filename of the downloaded file in the frontend later on, the header Access-Control-Expose-Headers is important, as we otherwise won't have access to the HTTP header Content-Disposition:

@Provider
public class CorsFilter implements ContainerResponseFilter {

  @Override
  public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext)
      throws IOException {
    responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
    responseContext.getHeaders().add("Access-Control-Allow-Credentials", "true");
    responseContext.getHeaders().add("Access-Control-Allow-Headers", "origin, content-type, accept, authorization");
    responseContext.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS, HEAD");

    // Required to be able to access the Content-Disposition header with Fetch API
    responseContext.getHeaders().add("Access-Control-Expose-Headers", "content-disposition");
  }
}

That’s everything for the backend.

Setting up the frontend

As I'm using no framework for the frontend, the setup is pretty straightforward. In addition to the Bootstrap CSS and JS libraries, I'm adding a custom app.js to the index.html. Within the <body> of the HTML file, you can find a basic page layout and two unknown HTML tags, <upload-component> and <download-component>, which are both custom Web Components:

<!DOCTYPE html>
<html>
<head>
    <title>File Upload with Web Components</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <!-- Bootstrap CSS library -->
</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-9  offset-md-1">
                <h3 style="text-align:center">Up- and downloading 
                  files with Java EE and Web Components</h3>
                <div id="message" role="alert"></div>
                <upload-component caption="Upload"></upload-component>
                <br/>
                <download-component caption="Download random file"></download-component>
            </div>
        </div>
    </div>

    <script src="app.js" type="module"></script>
    <!-- Bootstrap JS libraries -->
</body>
</html>

Each component is defined in its own file (DownloadComponent.js and UploadComponent.js) and is kept really simple for this quickstart (no real configuration from outside to make it reusable), but it should give you a good insight into Web Components. To show you how you could configure the component from outside, I'm passing the text of the button as a component attribute (caption):

export default class UploadComponent extends HTMLElement {

    constructor() {
        super();
        this.message = document.querySelector("#message");
        this.innerHTML = `<form>
            <input required type="file"></input>
            <button type="submit">${this.getAttribute('caption')}</button>
        </form>`;
        this.form = this.querySelector('form');
        this.form.onsubmit = e => this.uploadFile(e);
        this.file = this.querySelector('input');
    }

    uploadFile(e) {
        e.preventDefault();
        const formData = new FormData();
        formData.append('file', this.file.files[0]);

        fetch('http://localhost:8080/resources/files', {
            method: 'POST',
            body: formData
        }).then(response => {
            this.message.innerHTML = 'File upload was successful!';
            this.message.className = 'alert alert-success';
            this.form.reset();
        }).catch(error => {
            this.message.innerHTML = 'Something went wrong while uploading a file :(';
            this.message.className = 'alert alert-danger';
        });
    }
}

customElements.define('upload-component', UploadComponent);

To create a Web Component, you have to extend the HTMLElement class and register a custom element with its own HTML tag. Within the constructor of the component, I’m creating the HTML layout of the component with a simple JavaScript template literal (have a look at lit-html for a more advanced way of doing this). In addition, references to some of the HTML elements are stored as fields, and an onsubmit handler is defined for the form.
The file for the upload is wrapped in a FormData object and the Fetch API is used to make the POST HTTP call and upload the file.
The <download-component> is even simpler than the previous one. The component contains just a button and calls the JAX-RS endpoint to download a random file each time the button is clicked:

export default class DownloadComponent extends HTMLElement {

    constructor() {
        super();
        this.message = document.querySelector("#message");
        this.innerHTML = `<button>${this.getAttribute('caption')}</button>`;
        this.button = this.querySelector('button');
        this.button.onclick = e => this.downloadRandomFile(e);
    }

    downloadRandomFile(e) {
        e.preventDefault();
        fetch('http://localhost:8080/resources/files')
            .then(response => {
                if (!response.ok) {
                    throw new Error(`Download failed with HTTP status ${response.status}`);
                }
                const filename = response.headers.get('Content-Disposition').split('filename=')[1];
                response.blob().then(blob => {
                    let url = window.URL.createObjectURL(blob);
                    let a = document.createElement('a');
                    a.href = url;
                    a.download = filename;
                    a.click();
                    window.URL.revokeObjectURL(url);
                });
            }).catch(error => {
                this.message.innerHTML = 'Something went wrong while downloading a random file :(';
                this.message.className = 'alert alert-danger';
            });
    }

}

customElements.define('download-component', DownloadComponent);

Finally, app.js imports both components so that they are available within the index.html (importing each module runs its customElements.define call as a side effect):

import DownloadComponent from './DownloadComponent.js';
import UploadComponent from './UploadComponent.js';

You can find the example in my GitHub repository, including a docker-compose.yml file for local testing on your machine.

Have fun up- and downloading files with Web/Java EE standards only,
Phil


by rieckpil at April 13, 2019 08:51 AM

Specs, RIs, APIs - What Does it all Mean?

by Mark Wareham at April 12, 2019 12:48 PM

There are many acronyms in the Java world. Here's a list of some commonly used acronyms and what they mean.

 

JCP - Java Community Process

The Java Community Process is the process by which Java technology standards, including Java EE, have been developed. It's an open process that anyone can apply to become a part of. To find out more about the JCP, visit their website: https://www.jcp.org


by Mark Wareham at April 12, 2019 12:48 PM

Specification Scope in Jakarta EE

by waynebeaton at April 08, 2019 02:56 PM

With the Eclipse Foundation Specification Process (EFSP) a single open source specification project has a dedicated project team of committers to create and maintain one or more specifications. The cycle of creation and maintenance extends across multiple versions of the specification, and so while individual members may come and go, the team remains, and it is that team that is responsible for every version of that specification that is created.

The first step in managing how intellectual property rights flow through a specification is to define the range of the work encompassed by the specification. Per the Eclipse Intellectual Property Policy, this range of work (referred to as the scope) needs to be well-defined and captured. Once defined, the scope is effectively locked down (changes to the scope are possible but rare, must be carefully managed, and require approval from the Jakarta EE Working Group’s Specification Committee).

Regarding scope, the EFSP states:

Among other things, the Scope of a Specification Project is intended to inform companies and individuals so they can determine whether or not to contribute to the Specification. Since a change in Scope may change the nature of the contribution to the project, a change to a Specification Project’s Scope must be approved by a Super-majority of the Specification Committee.

As a general rule, a scope statement should not be too precise. Rather, it should describe the intention of the specification in broad terms. Think of the scope statement as an executive summary or “elevator pitch”.

Elevator pitch: You have fifteen seconds before the elevator doors open on your floor; tell me about the problem your specification addresses.

The scope statement must answer the question: what does an implementation of this specification do? The scope statement must be aspirational rather than attempting to capture any particular state at any particular point in time. A scope statement must not focus on the work planned for any particular version of the specification, but rather define the problem space that the specification is intended to address.

For example:

Jakarta Batch describes a means for executing and managing batch processes in Jakarta EE applications.

and:

Jakarta Message Service describes a means for Jakarta EE applications to create, send, and receive messages via loosely coupled, reliable asynchronous communication services.

For the scope statement, you can assume that the reader has a rudimentary understanding of the field. It’s reasonable, for example, to expect the reader to understand what “batch processing” means.

I should note that the two examples presented above are just examples of form. I’m pretty sure that they make sense, but defer to the project teams to work with their communities to sort out the final form.

The scope is “sticky” for the entire lifetime of the specification: it spans versions. The plan for any particular development cycle must describe work that is in scope; and at the checkpoint (progress and release) reviews, the project team must be prepared to demonstrate that the behavior described by the specifications (and tested by the corresponding TCK) cleanly falls within the scope (note that the development life cycle of a specification project is described in Eclipse Foundation Specification Process Step-by-Step).

In addition to the specification scope, which is required by the Eclipse Intellectual Property Policy and the EFSP, the specification project that owns and maintains the specification needs a project scope. The project scope is, I think, pretty straightforward: a particular specification project defines and maintains a specification.

For example:

The Jakarta Batch project defines and maintains the Jakarta Batch specification and related artifacts.

Like the specification scope, the project scope should be aspirational. In this regard, the specification project is responsible for the particular specification in perpetuity. Further, related artifacts like APIs and TCKs can be in scope without actually being managed by the project right now.

Today, for example, most of the TCKs for the Jakarta EE specifications are rolled into the Jakarta EE TCK project. But, over time, this single monster TCK may be broken up and individual TCKs moved to corresponding specification projects. Or not. The point is that regardless of where the technical artifacts are currently maintained, they may one day be part of the specification project, so they are in scope.

I should back up a bit and say that our intention right now is to turn the “Eclipse Project for …” projects that we have managing artifacts related to various specifications into actual specification projects. As part of this effort, we’ll add Git repositories to these projects to provide a home for the specification documents (more on this later). A handful of these proto-specification projects currently include artifacts related to multiple specifications, so we’ll have to sort out what we’re going to do about those project scope statements.

We might consider, for example, changing the project scope of the Jakarta EE Stable APIs (note that I’m guessing a future new project name) to something simple like:

Jakarta EE Stable APIs provides a home for stable (legacy) Jakarta EE specifications and related artifacts which are no longer actively developed.

But, all that talk about specification projects aside, our initial focus needs to be on describing the scope of the specifications themselves. With that in mind, the EE4J PMC has created a project board with issues to track this work and we’re going to ask the project teams to start working with their communities to put these scope statements together. If you have thoughts regarding the scope statements for a particular specification, please weigh in.

Note that we’re in a bit of a weird state right now. As we engage in a parallel effort to rename the specifications (and corresponding specification projects), it’s not entirely clear what we should call things. You’ll notice that the issues that have been created all use the names that we guess we’re going to end up using (there’s more information about that in Renaming Java EE Specifications for Jakarta EE).


by waynebeaton at April 08, 2019 02:56 PM

Jakarta EE, MicroProfile, OpenLiberty: Better Than Ice Hockey--airhacks.fm podcast

by admin at April 08, 2019 02:50 AM

Subscribe to airhacks.fm podcast via: RSS or iTunes

An airhacks.fm conversation with Andrew Guibert (@andrew_guibert) about:

old IBM PCs and old school Legos, starting programming in elementary school to write video games, the market for enterprise software is better than the market for video games, World of Warcraft is good for practicing team work, ice hockey, snowboarding and baseball, getting a job at IBM by pitching Nintendo Wii hacking, why Java EE is exciting for young developers, OpenLiberty is a dream team at IBM, providing Java EE support for WebSphere Liberty and WebSphere "traditional" customers, Java EE 8 was good, and MicroProfile is a good platform for innovation, quick MicroProfile iterations, sprinkling MicroProfile goodness into existing applications, MicroProfile helps glue things together, OpenLiberty strictly follows the Java EE standards, how OpenLiberty knows what Java EE 8 is, OpenLiberty is built on an OSGi runtime, features are modules with dependencies, OpenLiberty comprises public and internal features, Java EE 8 is a convenience feature which pulls in other modules / features, OpenLiberty also supports user features, OpenLiberty works with EclipseLink as well as Hibernate, OpenLiberty comes with generic JPA support with transaction integration, Erin Schnabel fixes OpenLiberty configuration at the JavaOne IBM booth with vi in a few seconds, Erin Schnabel is a 10x-er, IBM MQ / MQS could be the best possible developer experience as JMS provider, Liberty Bikes - a Java EE 8 / MicroProfile Tron-like game, scaling websockets with session affinity, tiny ThinWARs, there is a MicroProfile discussion for JWT token production, controlling OpenLiberty from Minecraft, testing JDBC connections, bulkheads with porcupine, all concurrency in OpenLiberty runs on a single, self-tuning ThreadPool
Andy on Twitter: @andrew_guibert and GitHub.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 08, 2019 02:50 AM

What’s in a Name?

by Ivar Grimstad at April 07, 2019 06:47 AM

As Wayne Beaton wrote about in his blog Renaming Java EE Specifications for Jakarta EE, the process of renaming the Java EE specifications as a part of the path towards Jakarta EE 8 has started.

Naming is hard, and changing names that we have become attached to is even harder. Most of us also have this inherent feeling that change is bad and should be avoided. At the same time, we live in an ever-changing industry where embracing change is not only a mantra but also a requirement to survive. Add to this that some of the names we are talking about in this context are pretty bad. Some of them are so long that we have to google the acronym to be able to describe what it actually abbreviates. Some of the acronyms we use daily aren’t even acronyms, but artificially created letter combinations following some sort of standard. A couple of examples:

Jakarta Authentication

I think most of us agree that Jakarta Authentication is far easier to remember than The Java Authentication Service Provider Interface for Containers. It may not be as precise and descriptive, but should that really be the requirement for the name? Wouldn’t it be better to have a name that we can remember and use in daily talk?

Jakarta REST

Think about Jakarta REST versus Java™ API for RESTful Web Services. I don’t think anyone ever uses the long name, but rather the abbreviation/acronym JAX-RS. Which isn’t even an acronym! It is just some collection of letters to fit into the JAX-* naming standard used by Sun back in the J2EE days.

Jakarta Messaging

The Java™ Message Service specification covers creating, sending, receiving, and reading messages. So it is much more than just a message service. In this case, Jakarta Messaging would be a much more descriptive name.

Participate!

We have created an issue in each of the specification GitHub projects, as well as a Specification Project Renaming board in the EE4J GitHub organization, to track the process.

Please join the discussion by commenting on the issue related to the particular specification you are interested in. If you are the first to comment on an issue, please move it to the In Progress column. I will try to keep the status updated, but all help is welcome.

Final Thoughts

I am positive about renaming the specifications, even if it is a lot of work and may seem like an unnecessary use of resources. It is an opportunity to change something for the better in one go, and this is probably the perfect time to do it.


by Ivar Grimstad at April 07, 2019 06:47 AM

Renaming Java EE Specifications for Jakarta EE

by waynebeaton at April 04, 2019 02:17 PM

It’s time to change the specification names…

When we first moved the APIs and TCKs for the Java EE specifications over to the Eclipse Foundation under the Jakarta EE banner, we kept the existing names for the specifications in place, and adopted placeholder names for the open source projects that hold their artifacts. As we prepare to engage in actual specification work (involving an actual specification document), it’s time to start thinking about changing the names of the specifications and the projects that contain their artifacts.

Why change? For starters, it’s just good form to leverage the Jakarta brand. But, more critically, many of the existing specification names use trademarked terms that make it either very challenging or impossible to use those names without violating trademark rules. Motivation for changing the names of the existing open source projects that we’ll turn into specification projects is, I think, a little easier: “Eclipse Project for …” is a terrible name. So, while the current names for our proto-specification projects have served us well to-date, it’s time to change them. To keep things simple, we recommend that we just use the name of the specification as the project name. 

With this in mind, we’ve come up with a naming pattern that we believe can serve as a good starting point for discussion. To start with, in order to keep things as simple as possible, we’ll have the project use the same name as the specification (unless there is a compelling reason to do otherwise).

The naming rules are relatively simple:

  • Replace “Java” with “Jakarta” (e.g. “Java Message Service” becomes “Jakarta Message Service”);
  • Add a space in cases where names are mashed together (e.g. “JavaMail” becomes “Jakarta Mail”);
  • Add “Jakarta” when it is missing (e.g. “Expression Language” becomes “Jakarta Expression Language”); and
  • Rework names to consistently start with “Jakarta” (“Enterprise JavaBeans” becomes “Jakarta Enterprise Beans”).
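The four rules are mechanical enough to sketch as a small string transformation. This is purely illustrative (the class name, method name, and regex are my own invention, not part of any Jakarta EE tooling), and it deliberately ignores the many edge cases the real renaming discussion has to handle:

```java
// Purely illustrative sketch of the four renaming rules above;
// not part of any Jakarta EE tooling.
public class RenamingRules {

    static String rename(String name) {
        // Rule: add a space where names are mashed together ("JavaMail" -> "Java Mail")
        name = name.replaceAll("Java(?=[A-Z])", "Java ");
        // Rule: replace "Java" with "Jakarta"
        name = name.replace("Java", "Jakarta");
        // Rule: rework names to consistently start with "Jakarta"
        if (name.contains("Jakarta") && !name.startsWith("Jakarta")) {
            name = "Jakarta " + name.replace("Jakarta ", "");
        }
        // Rule: add "Jakarta" when it is missing
        if (!name.startsWith("Jakarta")) {
            name = "Jakarta " + name;
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(rename("Java Message Service")); // Jakarta Message Service
        System.out.println(rename("JavaMail"));             // Jakarta Mail
        System.out.println(rename("Expression Language"));  // Jakarta Expression Language
        System.out.println(rename("Enterprise JavaBeans")); // Jakarta Enterprise Beans
    }
}
```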

This presents us with an opportunity to add even more consistency to the various specification names. Some, for example, are more wordy or descriptive than others; some include the term “API” in the name, and others don’t; etc.

We’ll have to sort out what we’re going to do with the Eclipse Project for Stable Jakarta EE Specifications, which provides a home for a small handful of specifications which are not expected to change. I’ll personally be happy if we can at least drop the “Eclipse Project for” from the name (“Jakarta EE Stable”?). We’ll also have to sort out what we’re going to do about the Eclipse Mojarra and Eclipse Metro projects which hold the APIs for some specifications; we may end up having to create new specification projects as homes for development of the corresponding specification documents (regardless of how this ends up manifesting as a specification project, we’re still going to need specification names).

Based on all of the above, here is my suggested starting point for specification (and most project) names (I’ve applied the rules described above and have suggested tweaks for consistency by strike-out):

  • Jakarta APIs for XML Messaging
  • Jakarta Architecture for XML Binding
  • Jakarta API for XML-based Web Services
  • Jakarta Common Annotations
  • Jakarta Enterprise Beans
  • Jakarta Persistence API
  • Jakarta Contexts and Dependency Injection
  • Jakarta EE Platform
  • Jakarta API for JSON Binding
  • Jakarta Servlet
  • Jakarta API for RESTful Web Services
  • Jakarta Server Faces
  • Jakarta API for JSON Processing
  • Jakarta EE Security API
  • Jakarta Bean Validation
  • Jakarta Mail
  • Jakarta Beans Activation Framework
  • Jakarta Debugging Support for Other Languages
  • Jakarta Server Pages Standard Tag Library
  • Jakarta EE Platform Management
  • Jakarta EE Platform Application Deployment
  • Jakarta API for XML Registries
  • Jakarta API for XML-based RPC
  • Jakarta Enterprise Web Services
  • Jakarta Authorization Contract for Containers
  • Jakarta Web Services Metadata
  • Jakarta Authentication Service Provider Interface for Containers
  • Jakarta Concurrency Utilities
  • Jakarta Server Pages
  • Jakarta Connector Architecture
  • Jakarta Dependency Injection
  • Jakarta Expression Language
  • Jakarta Message Service
  • Jakarta Batch
  • Jakarta API for WebSocket
  • Jakarta Transaction API

We’re going to couple renaming with an effort to capture proper scope statements (I’ll cover this in my next post). The Eclipse EE4J PMC Lead, Ivar Grimstad, has blogged about this recently and has created a project board to track the specification and project renaming activity (as of this writing, it has only just been started, so watch that space). We’ll start reaching out to the “Eclipse Project for …” teams shortly to engage this process. When we’ve collected all of the information (names and scopes), we’ll engage in a restructuring review per the Eclipse Development Process (EDP) and make it all happen (more on this later).

Your input is requested. I’ll monitor comments on this post, but it would be better to collect your thoughts in the issues listed on the project board (after we’ve taken the step to create them, of course), on the related issue, or on the EE4J PMC’s mailing list.

 


by waynebeaton at April 04, 2019 02:17 PM

Unikernels, Quarkus.io, SPA vs. Websites, JMS, shared deployments, fighting Sonar, Testing--61st airhacks.tv

by admin at April 04, 2019 04:21 AM

61st edition of airhacks.tv with the following topics:

"Unikernels, Quarkus.io, SPA vs. Document Oriented Model, Service to Service Communication with JMS, shared deployments, shared entities between microservices, service discovery and WARs, fighting Sonar, Unit-, Integration-, System Tests with or without Arquillian, File Uploads with or without dependencies, Sockets on Java EE"

Any questions left? Ask now: https://gist.github.com/AdamBien/001c1bad3f868a8508783569e192ea3d and get the answers at the next airhacks.tv.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 04, 2019 04:21 AM

How to participate in advancing Jakarta EE Specification: Technical and Collateral material related work

by Tanja Obradovic at April 03, 2019 07:13 PM

Technical Work

We will need a lot of help on this front as well.

  • Jakarta EE specifications: specification documents and APIs

We have heard from members of the community some suggestions on what they need from the specification, but we can always use more. Get involved in the discussion on Github (https://github.com/eclipse-ee4j/jakartaee-platform/issues).

  • Jakarta EE TCK

It’s a goliath and inconvenient to work with, and we want to slowly begin to break it up into separate TCKs for each specification. This won’t happen for the very first release of Jakarta EE, but we need to start planning and discussing the approach.

  • Compatible Implementations

To bring a final version of a specification to life, we need implementations of it. Whether an implementation is hosted at the Eclipse Foundation or not is not the focus; we need you to implement the specification.

 

 

Collateral material related work

While we encourage everyone to participate in specification development, please keep in mind that this isn’t limited to coding. Of equal importance is the need for collateral material related to the specification(s): documentation, presentations, videos, demos, examples, blogs, tech talks, etc. This is the type of content we can circulate through the community and use to educate and spread the news about the new specifications. Presenting this material at conferences is yet another way you can help out!


by Tanja Obradovic at April 03, 2019 07:13 PM

Using MicroProfile Rest Client For System Testing

by admin at April 02, 2019 04:26 AM

A JAX-RS resource:

@Path("ping")
public class PingResource {

    @GET
    public String ping() {
        return "Enjoy Java EE 8!";
    }

}    
...can be system tested (check out http://javaeetesting.com) with MicroProfile Rest Client by describing the resource with an interface:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
        
@Path("/restclient/resources/ping")
public interface PingClient {

    @GET
    String ping();
}
    
...and creating a proxy implementation with RestClientBuilder:

import java.net.MalformedURLException;
import java.net.URI;
import org.eclipse.microprofile.rest.client.RestClientBuilder;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;


public class RestClientTest {

    @Test
    public void init() throws MalformedURLException {
        URI baseURI = URI.create("http://localhost:8080");
        PingClient client = RestClientBuilder.newBuilder().
                baseUri(baseURI).
                build(PingClient.class);                
        assertNotNull(client);
        String result = client.ping();
        assertNotNull(result);
    }
}

You will need the following dependencies to run the system test from your IDE / CLI:


<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>8.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.cxf</groupId>
        <artifactId>cxf-rt-rs-mp-client</artifactId>
        <version>3.3.1</version>
        <scope>test</scope>
    </dependency>
</dependencies>    

Project created with the javaee8-essentials-archetype; the 3kB ThinWAR was built and deployed with wad.sh in 2940ms.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 02, 2019 04:26 AM

The Payara Monthly Roundup for March 2019

by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 01, 2019 03:05 PM

Hello and welcome to the first issue of our monthly round up, where we feature a curated list of interesting articles and videos created by the community over the last month that we have enjoyed.


by Jadon Ortlepp (Jadon.Ortlepp@Payara.fish) at April 01, 2019 03:05 PM

Unikernels, Quarkus.io, SPA vs. Document Oriented Model, Service to Service Communication with JMS--61st airhacks.tv

by admin at April 01, 2019 07:07 AM

Agenda for the 61st https://gist.github.com/AdamBien/606e7a0c27ebd6457515741320ff037f airhacks.tv:
  1. quarkus.io and unikernels
  2. SPA vs. Request / Response
  3. Reading TCP sockets with Java EE
  4. Message Brokers and distributed JMS communication
  5. Transactions between Microservices in Java EE
  6. Flyway vs. Liquibase
  7. Multi-lingual content with entities and REST
  8. @Singleton with Bean concurrency with or without locks
  9. Unit Tests, Integration tests, System Tests
  10. How to deal with shared JPA entities between microservices
  11. Coupling vs. cohesion and Boundary Control Entity
See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at April 01, 2019 07:07 AM

Serverless without Functions, OpenShift with a bit Istio--airhacks.fm podcast

by admin at March 31, 2019 07:00 PM

Subscribe to airhacks.fm podcast via: RSS or iTunes

An airhacks.fm conversation with Sebastian Daschner (@DaschnerS) about:

being chief Enterprise Service Bus Officer at IBM (not true), Lead Java Advocate for Java at IBM (now true), Sebastian still likes Java EE, the definition of Serverless, there is no need for functions in serverless computing, a reference to the episode with Bruno Borges "Jakarta EE / MicroProfile in the Clouds: Runtimes not Servers", the difference between servers and runtimes, focusing on ThinWARs is serverless, immutable infrastructures with immutable layers, pushing a ThinWAR to the cloud 50 times a day, Payara Configured as an example for intermediary layers, Payara s2i, misusing the Docker Registry as "FTP", a ThinWAR upload triggers a hook and rebuilds a server, ultra productive Java EE, servers do not matter, using FaaS to trigger server re-configuration, functions are too fine grained for the implementation of stock applications, implement the added value of clouds by injecting cloud services, cloud bootstrap / initialization code looks like it is from 1945, externalizing cloud libraries to immutable images, the added value of Istio on OpenShift, cross-cutting concerns with Istio, canary releases, routes and observability, Istio adds additional configuration overhead, Istio adds technical features on top of OpenShift, a possible killer feature of Istio, monitoring database traffic with Istio, Istio as "feel good factor", some technical dashboards are as usable as lava lamps, monitoring external services, artificially slowing down connections in tests, MQS, hello worlds with Kafka are great, two lines to send a JMS event and one annotation to receive a message, Kafka is great as a managed service, the next killer feature of MQS, killer runtimes with MicroProfile and Java EE, you can find us at jakartaee.blog and this blog is not usable as a source for articles

Meet Sebastian on Twitter (@DaschnerS), at https://jakartablogs.ee and on his blog https://blog.sebastian-daschner.com/.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at March 31, 2019 07:00 PM

Monitoring HTTP Requests with MicroProfile Metrics

by admin at March 27, 2019 05:53 AM

A Java EE 8 HttpFilter, configured to map all URLs:

import java.io.IOException;
import javax.inject.Inject;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.microprofile.metrics.Meter;
import org.eclipse.microprofile.metrics.MetricRegistry;
import org.eclipse.microprofile.metrics.annotation.RegistryType;

@WebFilter(urlPatterns = "/*")
public class FilterEverything extends HttpFilter {

    @Inject
    @RegistryType(type = MetricRegistry.Type.APPLICATION)
    MetricRegistry registry;

    @Override
    protected void doFilter(HttpServletRequest req, HttpServletResponse res, FilterChain chain) throws IOException, ServletException {
        registry.counter(req.getPathInfo()).inc();
        Meter meter = registry.meter("meter_" + req.getPathInfo());
        chain.doFilter(req, res);
        meter.mark();
    }

}

...will generate for the invocation of the PingResource:


@Path("ping")
public class PingResource {

    @GET
    @Path("slow")
    public String slow() {
        //heavy lifting
        return "42";
    }

    @GET
    @Path("fast")
    public String fast() {
        return "21";
    }

}

...the following Prometheus metrics (curl http://localhost:8080/metrics/application):


# TYPE application:/ping/slow counter
application:/ping/slow 1
# TYPE application:/ping/fast counter
application:/ping/fast 1
# TYPE application:meter_/ping/fast_total counter
application:meter_/ping/fast_total 1
# TYPE application:meter_/ping/fast_rate_per_second gauge
application:meter_/ping/fast_rate_per_second 0.044544529211015586
# TYPE application:meter_/ping/fast_one_min_rate_per_second gauge
application:meter_/ping/fast_one_min_rate_per_second 0.155760156614281
# TYPE application:meter_/ping/fast_five_min_rate_per_second gauge
application:meter_/ping/fast_five_min_rate_per_second 0.1902458849001428
# TYPE application:meter_/ping/fast_fifteen_min_rate_per_second gauge
application:meter_/ping/fast_fifteen_min_rate_per_second 0.1966942907643235
# TYPE application:meter_/ping/slow_total counter
application:meter_/ping/slow_total 1
# TYPE application:meter_/ping/slow_rate_per_second gauge
application:meter_/ping/slow_rate_per_second 0.05214608959864638
# TYPE application:meter_/ping/slow_one_min_rate_per_second gauge
application:meter_/ping/slow_one_min_rate_per_second 0.16929634497812282
# TYPE application:meter_/ping/slow_five_min_rate_per_second gauge
application:meter_/ping/slow_five_min_rate_per_second 0.1934432200964012
# TYPE application:meter_/ping/slow_fifteen_min_rate_per_second gauge
application:meter_/ping/slow_fifteen_min_rate_per_second 0.19779007785878447

...also available as JSON (curl -H"Accept: application/json" http://localhost:8080/metrics/application):

{
    "/ping/slow": 1,
    "/ping/fast": 1,
    "meter_/ping/fast": {
        "count": 1,
        "fiveMinRate": 0.14571471450227858,
        "oneMinRate": 0.04105793151598188,
        "fifteenMinRate": 0.17996489624183667,
        "meanRate": 0.009531711523994577
    },
    "meter_/ping/slow": {
        "count": 1,
        "fiveMinRate": 0.14571471450227858,
        "oneMinRate": 0.04105793151598188,
        "fifteenMinRate": 0.17996489624183667,
        "meanRate": 0.009838528938613995
    }
}
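The one-minute, five-minute, and fifteen-minute rates in the output above are exponentially weighted moving averages (EWMA). As a rough sketch of the idea (this mirrors the common Dropwizard-style scheme with 5-second ticks, not the exact MicroProfile Metrics internals; the class and method names here are mine):

```java
// Illustrative sketch only: how a *_one_min_rate_per_second gauge can be
// computed as an exponentially weighted moving average. This is NOT the
// actual MicroProfile Metrics implementation.
public class OneMinuteRate {

    private static final double TICK_SECONDS = 5.0; // rate is re-weighted every 5s
    // Weight chosen so that events older than ~1 minute contribute little
    private static final double ALPHA = 1 - Math.exp(-TICK_SECONDS / 60.0);

    private double ratePerSecond;
    private boolean initialized;

    /** Called once per tick with the number of events observed during that tick. */
    public void tick(long eventsInTick) {
        double instantRate = eventsInTick / TICK_SECONDS;
        if (!initialized) {
            ratePerSecond = instantRate; // first tick seeds the average
            initialized = true;
        } else {
            ratePerSecond += ALPHA * (instantRate - ratePerSecond);
        }
    }

    public double getRatePerSecond() {
        return ratePerSecond;
    }

    public static void main(String[] args) {
        OneMinuteRate rate = new OneMinuteRate();
        // Simulate a steady 2 requests/second: 10 events per 5-second tick
        for (int i = 0; i < 100; i++) {
            rate.tick(10);
        }
        System.out.println(rate.getRatePerSecond()); // converges to 2.0
    }
}
```

After a burst stops, each zero-event tick pulls the rate down by the factor (1 - ALPHA), which is why the gauges above decay gradually instead of dropping to zero.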

Project was created with the javaee8-essentials-archetype; the 5kB ThinWAR was built and deployed with wad.sh in 2963ms and tested with Payara 5.

See you at Web, MicroProfile and Java EE Workshops at Munich Airport, Terminal 2 or Virtual Dedicated Workshops / consulting. Is Munich's airport too far? Learn from home: airhacks.io.

by admin at March 27, 2019 05:53 AM

How to participate in advancing Jakarta EE Specification: Marketing and Promotional Work

by Tanja Obradovic at March 25, 2019 06:01 PM

We all know in this day and age, marketing and social media are essential.  Social media is an effective way to stay connected with the Jakarta EE community. By informing the community about project milestones, progress, news around Jakarta EE, and how to become a contributor, you can help build and maintain this active community. Here are some ways in which you can help spread the word and participate:

  • Follow Jakarta EE on social media

    • Twitter: @JakartaEE

    • Facebook: Jakarta EE

    • LinkedIn Group: Jakarta.EE

  • Write your own Jakarta EE related posts and repost our content

  • Add your blog to jakartablogs.ee by following these guidelines

  • Subscribe to our mailing-lists https://jakarta.ee/connect/

If you have any blogs, websites or other content related to Jakarta EE, let us know! We can promote your work and increase its reach within the community.

Same goes for any events you may be planning or event ideas you have. If it’s related to Jakarta EE, chances are we’d love a piece of it, just make sure to submit an event form with all the details.

 


by Tanja Obradovic at March 25, 2019 06:01 PM

Optimizing for Humans, Not Machines--airhacks.fm podcast

by admin at March 24, 2019 07:09 PM

Subscribe to airhacks.fm podcast via: RSS iTunes

An airhacks.fm conversation with Simon Harrer (@simonharrer) about:
Amstrad Laptop, first line of VB code was a commercial one, customers two desks away, Scheme is an excellent language to learn programming, Java is great - mainly because of the tool support, Eclipse was the first opensource IDE with decent refactoring support, Bamberg is the home of Schlenkerla, teaching is the best way to learn, therefore teaching is selfish, building portals for students with PHP and Joomla, building e-commerce shops for students with Ruby on Rails, 2006 everything had to be Rails, PhD about choreography and distributed transactions, too high expectations on workflow and rule engines, workflow engines are for developers and not for business people, central process view is still desirable, startups with Bosch, in Germany it is hard to find developers who are willing to join startups, Simon works for InnoQ and Stefan Tilkov, Computer Science University of Bamberg, the pragmatic book: Java by Comparison by The Pragmatic Bookshelf, in streams there are no exceptions, over-abstractions cause trouble, reviewing the code of thousands of students for six years, it is unusual for universities to promote pragmatic code, be strict about adding external libraries, clear separation between infrastructure and business logic helps with clean code, moving domain specific libraries into the infrastructure, human centered code, optimizing for machines, not for humans is problematic, writing bad code is often not intentional, "Abstract, If, Impl, Default, Bean Conventions - Just For Lazy Developers", don't write for reuse, reuse rarely happens, reuse as motivator for bad abstractions, do repeat yourself, then refactor, "How To Comment With JavaDoc", Book: Java by Comparison, Become a Java Craftsman in 70 Examples.
Simon Harrer on twitter: (@simonharrer).



by admin at March 24, 2019 07:09 PM

#CHEATSHEET: Java/Jakarta EE application servers

by rieckpil at March 23, 2019 10:00 AM

As Java/Jakarta EE developers, we can rely on the javax standards to work the same way on every certified application server. When it comes to managing the application server, however, every vendor has a different, proprietary API for managing its resources. To use a different application server, you first have to get comfortable with its CLI and config files.

To flatten the learning curve for a new application server, I’ve created a #CHEATSHEET for every major Java/Jakarta EE application server (Payara, Open Liberty, WildFly and TomEE). With this cheat sheet, you will have a reference for starting/stopping the server, finding the relevant log files, getting general server information, customizing the server configuration, adding custom libraries and creating a JDBC data source, with PostgreSQL as an example.

Payara

Starting and stopping the server:

# Starting on Windows - default domain is domain1
$PAYARA_HOME/bin/asadmin.bat start-domain [domain_name]

# Starting on Mac & Linux
$PAYARA_HOME/bin/asadmin start-domain [domain_name]

# Stopping on Windows
$PAYARA_HOME/bin/asadmin.bat stop-domain [domain_name]

# Stopping on Mac & Linux
$PAYARA_HOME/bin/asadmin stop-domain [domain_name]
  • Server logs: $PAYARA_HOME/glassfish/domains/[domain_name]/logs/server.log
  • Important ports:
    • 8080 – HTTP listener
    • 8181 – HTTPS listener
    • 4848 – HTTPS admin listener
    • 9009 – Debug port
  • Admin panel: http://localhost:4848
  • Central configuration file: domain.xml in $PAYARA_HOME/glassfish/domains/[domain_name]/config
  • Embedded database: H2 (default) and Derby
  • Auto-deploy folder: $PAYARA_HOME/glassfish/domains/[domain_name]/autodeploy
  • Available domains per default: domain1 (default) & production
  • Add custom libraries to the application server:
# For Windows
$PAYARA_HOME/bin/asadmin.bat add-library /path/to/download/your-jar.jar

# For Mac & Linux
$PAYARA_HOME/bin/asadmin add-library /path/to/download/your-jar.jar

# e.g. adding PostgreSQL JDBC driver
$PAYARA_HOME/bin/asadmin add-library /home/username/.m2/repository/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar
  • Configure a new data source:
    • Configure a JDBC connection pool (pick one):
      • via admin panel: Resources -> JDBC -> JDBC Connection Pool  -> New -> Follow the wizard
      • via asadmin:
# For Windows
$PAYARA_HOME/bin/asadmin.bat create-jdbc-connection-pool --datasourceclassname org.postgresql.ds.PGPoolingDataSource --restype javax.sql.DataSource --property portNumber=5432:serverName=localhost:user=postgres:password=postgres:databaseName=postgres PostgresPool

# For Mac & Linux
$PAYARA_HOME/bin/asadmin create-jdbc-connection-pool --datasourceclassname org.postgresql.ds.PGPoolingDataSource --restype javax.sql.DataSource --property portNumber=5432:serverName=localhost:user=postgres:password=postgres:databaseName=postgres PostgresPool
      • via adding an entry in $PAYARA_HOME/glassfish/domains/[domain_name]/config/domain.xml:
<domain>
  <resources>
    <jdbc-connection-pool
      name="PostgresPool"
      res-type="javax.sql.DataSource"
      datasource-classname="org.postgresql.ds.PGPoolingDataSource">
      <property name="Url" value="jdbc:postgresql://localhost:5432/"/>
      <property name="Password" value="postgres"/>
      <property name="User" value="postgres"/>
      <property name="databaseName" value="postgres"/>
      <!-- far more can be configured for use in production -->
    </jdbc-connection-pool>
  </resources>
</domain>
    • Create a JDBC resource (pick one):
      • via admin panel: Resources -> JDBC -> JDBC Resources -> New -> Follow the wizard
      • via asadmin:
# For Windows
$PAYARA_HOME/bin/asadmin.bat create-jdbc-resource --connectionpoolid PostgresPool jdbc/postgres

# For Mac & Linux
$PAYARA_HOME/bin/asadmin create-jdbc-resource --connectionpoolid PostgresPool jdbc/postgres
      • via adding an entry in $PAYARA_HOME/glassfish/domains/[domain_name]/config/domain.xml:
<domain>
  <resources>
     <jdbc-resource pool-name="NameOfTheConnectionPool" jndi-name="jdbc/postgres"></jdbc-resource>
     <!-- e.g.
     <jdbc-resource pool-name="PostgresPool" jndi-name="jdbc/postgres"></jdbc-resource>
     -->
  </resources>
</domain>
  • Run as a Docker container, e.g. with the following Dockerfile:
FROM payara/server-full:latest
COPY target/your-war.war $DEPLOY_DIR

# docker build -t mypayaraapp .
# docker run -p 8080:8080 -p 4848:4848 -d mypayaraapp
  • CDI implementation provider: Weld
  • JAX-RS implementation provider: Jersey
  • JPA implementation provider: EclipseLink
  • JSF implementation provider: Mojarra

 

Open Liberty

Starting and stopping the server:

# Starting on Windows - default server name is defaultServer
$WLP_HOME/bin/server.bat start [server_name]

# Starting on Mac & Linux
$WLP_HOME/bin/server start [server_name]

# Stopping on Windows
$WLP_HOME/bin/server.bat stop [server_name]

# Stopping on Mac & Linux
$WLP_HOME/bin/server stop [server_name]
  • Server logs: $WLP_HOME/usr/servers/[server_name]/logs/console.log and $WLP_HOME/usr/servers/[server_name]/logs/messages.log
  • Important ports:
    • 9080 – HTTP listener
    • 9443 – HTTPS listener
  • Admin panel: none
  • Central configuration file: server.xml in $WLP_HOME/usr/servers/[server_name] (hot reloading of the configuration is possible)
  • Embedded database: none, Derby can be configured like the following:
    • Download the derby.jar
    • Update the server.xml and reference to the downloaded .jar file:
<server>
    <!-- other settings omitted -->
    <dataSource id="DefaultDataSource">
        <jdbcDriver libraryRef="DERBY_JDBC_LIB" />
        <properties.derby.embedded databaseName="test" createDatabase="create" />
    </dataSource>

    <library id="DERBY_JDBC_LIB">
        <file name="/home/user/derby/lib/derby.jar" />
    </library>
</server>
  • Auto-deploy folder: $WLP_HOME/usr/servers/[server_name]/dropins
  • Available servers per default: defaultServer
  • Add custom libraries to the application server:
    • The following <library> elements are available for the server.xml:
<library>
   <folder dir="..." />
   <file name="..." />
   <fileset dir="..." includes="*.jar" scanInterval="5s" />
</library>
    • either reference the library from your filesystem or place it in ${server.config.dir}/lib (specific libs for the server e.g. $WLP_HOME/usr/servers/defaultServer/lib) or ${shared.config.dir}/lib/global for global libs
  • Configure a new data source:
    • Reference the JDBC driver within the server.xml and give it a unique name:
<server>
   <library id="POSTGRES_JDBC_LIB">
        <file name="/home/user/.m2/repository/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar" />
    </library>
</server>
    • Create a new <dataSource> element within the server.xml and configure the access to the data source and JNDI name:
<server>    
  <dataSource id="postgres" jndiName="jdbc/postgres" type="javax.sql.DataSource">
    <jdbcDriver javax.sql.DataSource="org.postgresql.ds.PGPoolingDataSource" 
                libraryRef="POSTGRES_JDBC_LIB"/>
    <properties databaseName="postgres" serverName="localhost" password="postgres" 
                portNumber="5432" user="postgres"/>
  </dataSource>
</server>
  • Run as a Docker container, e.g. with the following Dockerfile:
FROM open-liberty:kernel
COPY server.xml /config/
COPY target/your-war.war /config/dropins

# docker build -t myopenlibertyapp .
# docker run -p 9080:9080 -p 9443:9443 -d myopenlibertyapp
  • CDI implementation provider: Weld
  • JAX-RS implementation provider: ApacheCXF
  • JPA implementation provider: EclipseLink (customizable via  server.xml to use Hibernate)
  • JSF implementation provider: Apache MyFaces

 

WildFly

Starting and stopping the server:

# Starting on Windows - standalone mode for running a single instance
$WILDFLY_HOME/bin/standalone.bat

# Starting on Mac & Linux
$WILDFLY_HOME/bin/standalone.sh

# Stopping on Windows
$WILDFLY_HOME/bin/jboss-cli.bat --connect command=:shutdown

# Stopping on Mac & Linux
$WILDFLY_HOME/bin/jboss-cli.sh --connect command=:shutdown
  • Server logs: $WILDFLY_HOME/standalone/log/server.log for running in standalone mode
  • Important ports:
    • 8080 – HTTP listener
    • 8443 – HTTPS listener
    • 9990 – Admin panel
  • Admin panel: http://localhost:9990
    • To log in, you first have to create a user via:
# Windows
$WILDFLY_HOME/bin/add-user.bat user userPassword

# Mac & Linux
$WILDFLY_HOME/bin/add-user.sh user userPassword
  • Central configuration file: standalone.xml in $WILDFLY_HOME/standalone/configuration
  • Embedded database: H2
  • Auto-deploy folder: $WILDFLY_HOME/standalone/deployments
  • Available server modes per default: standalone (single instance) and domain (cluster) mode
  • Add custom libraries to the application server: Follow the steps as described for the JDBC driver and create a module
  • Configure a new data source:
    • Create a new folder org/postgresql/main within $WILDFLY_HOME/modules
    • Copy the JDBC driver (e.g. postgresql-42.2.5.jar) to this folder
    • Create a module.xml file within this folder:
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.postgresql">
  <resources>
    <resource-root path="postgresql-42.2.5.jar" />
  </resources>
  <dependencies>
    <module name="javax.api" />
    <module name="javax.transaction.api" />
  </dependencies>
</module>
    • Connect to your running WildFly with $WILDFLY_HOME/bin/jboss-cli.bat --connect or $WILDFLY_HOME/bin/jboss-cli.sh --connect
    • Execute:
# First create the new module for the JDBC driver
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql, driver-class-name=org.postgresql.Driver, driver-datasource-class-name=org.postgresql.ds.PGPoolingDataSource)

# Create a data source
/subsystem=datasources/data-source=PostgresDS:add(jndi-name=java:jboss/datasources/postgres, driver-name=postgresql, connection-url=jdbc:postgresql://localhost:5432/postgres, user-name=postgres, password=postgres)
  • Run as a Docker container, e.g. with the following Dockerfile:
FROM jboss/wildfly:16.0.0.Final
COPY target/your-war.war /opt/jboss/wildfly/standalone/deployments/

# docker build -t mywildflyapp .
# docker run -p 8080:8080 -p 8443:8443 -p 9990:9990 -d mywildflyapp
  • CDI implementation provider: Weld
  • JAX-RS implementation provider: RESTEasy
  • JPA implementation provider: Hibernate
  • JSF implementation provider: Mojarra

 

TomEE

Starting and stopping the server:

# Starting on Windows
$CATALINA_HOME/bin/catalina.bat start

# Starting on Mac & Linux
$CATALINA_HOME/bin/catalina.sh start

# Stopping on Windows
$CATALINA_HOME/bin/catalina.bat stop

# Stopping on Mac & Linux
$CATALINA_HOME/bin/catalina.sh stop
  • Server logs: $CATALINA_HOME/logs/catalina.DATE.log
  • Important ports:
    • 8080 – HTTP listener
    • 8443 – HTTPS listener
  • Admin panel: http://localhost:8080/manager/html
    • To log in, you first have to create a user in $CATALINA_HOME/conf/tomcat-users.xml:
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
  <role rolename="tomee-admin" />
  <role rolename="manager-gui" />
  <user username="tomee" password="tomee" roles="tomee-admin,manager-gui" />
</tomcat-users>
  • Central configuration file: server.xml, context.xml and tomee.xml in $CATALINA_HOME/conf
  • Embedded database: HSQL
  • Auto-deploy folder: $CATALINA_HOME/webapps
  • Add custom libraries to the application server: Copy the library to $CATALINA_HOME/lib
  • Configure a new data source:
    • Copy the JDBC driver (e.g. postgresql-42.2.5.jar) to $CATALINA_HOME/lib
    • Add the following config to $CATALINA_HOME/conf/tomee.xml:
<tomee>
  <Resource id="jdbc/postgres" type="javax.sql.DataSource">
      jdbcDriver org.postgresql.Driver
      jdbcUrl jdbc:postgresql://127.0.0.1:5432/postgres
      userName postgres
      password postgres
  </Resource>
</tomee>
  • Run as a Docker container, e.g. with the following Dockerfile:
FROM tomee:8-jre-8.0.0-M2-plume
COPY target/your-war.war /usr/local/tomee/webapps/
# docker build -t mytomeeapp .
# docker run -p 8080:8080 -p 8443:8443 -d mytomeeapp
  • CDI implementation provider: Apache OpenWebBeans
  • JAX-RS implementation provider: Apache CXF
  • JPA implementation provider: EclipseLink
  • JSF implementation provider: Mojarra
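No matter which of the servers above you configure, application code consumes the data source the same way: through its JNDI name. The following is a minimal sketch using the jdbc/postgres name from the examples above (outside a running application server the lookup simply fails, because no JNDI provider is available):

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceClient {

    // Resolve the container-managed data source by the JNDI name configured above
    static DataSource lookup(String jndiName) throws NamingException {
        return (DataSource) new InitialContext().lookup(jndiName);
    }

    public static void main(String[] args) {
        try {
            DataSource dataSource = lookup("jdbc/postgres");
            try (Connection connection = dataSource.getConnection()) {
                System.out.println("connected: " + connection.isValid(2));
            }
        } catch (Exception e) {
            // outside a container there is no JNDI provider, so the lookup fails
            System.out.println("lookup failed: " + e.getMessage());
        }
    }
}
```

Inside the application server you would typically let the container inject the data source (e.g. via @Resource) instead of doing the lookup by hand.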

 

Want to have this cheatsheet always at hand as a nice PDF? Subscribe to my newsletter and get it for free.


by rieckpil at March 23, 2019 10:00 AM

ThinJARs with quarkus.io on Docker

by admin at March 21, 2019 10:07 AM

quarkus.io separates business logic from the infrastructure out-of-the-box and therefore keeps the ThinJAR (see: ThinWAR) small and the deployments fast.

The base docker image was pushed into docklands.



by admin at March 21, 2019 10:07 AM

Payara Server or Payara Micro?

by Edwin Derks at March 19, 2019 10:14 PM

Payara is a Java EE 8 / Jakarta EE 8 compliant application server, also implementing MicroProfile. Next to the traditional Payara Server, the company also provides you with the option of running your enterprise applications with Payara Micro. Since both editions of Payara can run your Java EE based enterprise applications, which edition should you use, and when? This depends on your situation, but before you are able to answer this question for yourself, let’s take a look at how both editions are being used.

Payara Server

The traditional application server allows you to run your Java EE based enterprise applications (WAR or EAR). It is possible to deploy, undeploy or manage your applications when the server is running. The application server can also be configured and tailored to your environment. This means that the application server can take the lion’s share of the work and configuration, while you can keep your applications clean, small and tidy. This is the idea behind Java EE: always try to keep your applications as small as possible, only containing your Java EE based business logic compiled against the various API’s whose implementations are provided by the application server.

Payara Micro

In contrast to Payara Server, this edition of Payara is provided as a “hollow” JAR. This means that Payara ships as a single JAR file containing everything needed to run the application server with a plain java -jar command. As a parameter of this command, you provide the application to run in this process, like this:

java -jar payara-micro-5.191.jar --deploy my-javaee-app.war

Choosing your Payara

Now that you know how to use both editions of Payara, let’s consider which edition to use in what situation. You see, running either edition of Payara on your development, test or production environment should work. The clients of your running servers should in no way notice, or be impacted by the edition that you have chosen.

What can matter, though, is how you ship and run your applications on Payara. It depends on whether you are running Payara in a microservices environment, possibly using Docker and even an orchestrator like Kubernetes, or whether you are traditionally deploying your application on a Payara Server running on a machine that is fully managed by you. Either way, both editions of Payara are tailored for their runtime environments, but it is not necessarily true that you can’t use Payara Server for building and deploying microservices. Nor is it necessarily true that you can’t run a “monolith” with Payara Micro. I would like to dedicate another blog post to these considerations, but they are way out of scope for this one. For now, let’s focus on where you can definitely benefit from one of them: your local development environment.

I have shown how to start Payara Micro on the command line, pointing to the exact location of the application to deploy. However, this is not very efficient during development for several reasons:

  • You have to manually build the application with Maven every time, and possibly (manually) move it to the location where Payara can find it on the command line
  • You have to start and stop Payara every time that you want to deploy a new version of your application

This can add up to several seconds of development time, over and over again for every new deployment that you want to do. Luckily, there is a fairly easy way to (re)deploy your application on Payara, but it applies only to Payara Server, because this edition supports (re)deploying an application on a running instance.

You can easily run your Payara Server integrated in your IDE like Eclipse IDE, Netbeans or IntelliJ. For example, if you configure and manage Payara Server in your IDE like shown below, every time that you want to deploy a new version of your application, your IDE will take care of this process. It will compile and build a new WAR with Maven and deploy it on the managed Payara Server.

It often takes only a few seconds from building the application to redeploying it, which increases your development speed. And since it doesn’t feel like you are waiting for the deployment to finish, you won’t lose your focus every time a deployment is running. This can save you a lot of personal energy and keep programming fun. Also very important. 🙂

So if you are using Payara Micro in production, you can consider using Payara Server for optimizing your development experience until your application has to be shipped.

Download

Interested in using Payara yourself? You can download both versions at https://www.payara.fish.


by Edwin Derks at March 19, 2019 10:14 PM

Jakarta EE 8 Status

by Ivar Grimstad at March 18, 2019 05:30 PM

Those of you following Jakarta EE probably know that the upcoming Jakarta EE 8 release will be functionally equivalent to Java EE 8. The reason for this is that we want to prove that the transfer from Oracle is complete and that we are able to produce the processes, specifications, test suites and a compatible implementation through the Eclipse Foundation.

So far Eclipse GlassFish 5.1 has been released and certified as Java EE 8 compatible. The next step is to set everything up for Jakarta EE 8 and release Eclipse GlassFish 5.2 as Jakarta EE 8 compatible.

One of the tasks that need to be done in order to release Jakarta EE 8 is to transform the existing Java EE specification documents to Jakarta EE. This will involve renaming the specifications according to the trademark agreement between Oracle and Eclipse Foundation*.

In addition to this, the scope for the existing EE4J projects containing the APIs for the individual specs will need to be updated and the projects themselves will be converted into Jakarta EE specification projects as defined in the Eclipse Foundation Specification Process (EFSP). The Jakarta EE Specification process will be a specialization of the EFSP.

To keep track of all of this, we have created a planning board in the Jakarta EE Platform GitHub project.

Planning board for transitioning Java EE 8 specifications to Jakarta EE 8

What I have described in this post is just a couple of the things that need to be done regarding the specifications in order to get Jakarta EE 8 out the door. There are a lot of other activities involving the TCK and not the least Eclipse GlassFish 5.2 that need to be done as well. But for now, the most critical item is to get through the legal hurdles of the trademark agreement and the transfer of the specification documents over to Eclipse Foundation.

*) The details of this agreement are yet to be defined at the time this blog post is published.


by Ivar Grimstad at March 18, 2019 05:30 PM

How to participate in advancing Jakarta EE Specification

by Tanja Obradovic at March 18, 2019 04:04 PM

 

It is March already, and excitement for Jakarta EE is growing. The migration to Jakarta EE is almost complete! In the wake of the successful launch of the Java EE 8-compatible Eclipse GlassFish 5.1 in January, we are keeping our sights set firmly forward. We will continue to work towards the full transition from Java EE and JCP to Jakarta EE and the new specification process. All projects related to Jakarta EE will need to be examined and established into new Eclipse Foundation Specification projects, but it’s important that we agree on specifications and project names and ensure each has a well-documented project scope before we can begin advancing the specifications.

Yes, it’s set to be another busy year for the Eclipse Foundation. Having welcomed aboard new specification proposals such as Jakarta EE NoSQL and Jakarta Batch, we’re going to need all hands on deck! I’ve created this blog post with the aim of making it easier for individuals to contribute to the Jakarta EE Specification.

 The Scope of the Work

All of this can be daunting for people who are just starting to take an interest in Jakarta EE and open source, but everyone can contribute in some easy ways to streamline the ongoing transition and advancement of the technology. If you are wondering what you can do to participate, this blog series will outline some areas where involvement is needed. I will group the work into three major groups:

  • Marketing

  • Technical

  • Collateral

In the next couple of weeks, please expect more details on each.

Committer and Contributor Paperwork

Before you start the work, please ensure all required agreements are executed.

 


by Tanja Obradovic at March 18, 2019 04:04 PM

Jakarta EE Developer Survey 2019

by Ivar Grimstad at March 15, 2019 03:35 PM

The Jakarta EE 2019 Developer Survey is available!

Take the survey today and help the community gain a better understanding of what’s in store for Java innovation. This is your chance to share your thoughts and experiences and help shape the future for Jakarta EE!

Jakarta EE 2019 Developer Survey
https://www.surveymonkey.com/r/JakartaEEMkt

Responses will be collected until March 25, 2019, at 11:59 PM Pacific Time


by Ivar Grimstad at March 15, 2019 03:35 PM

GraphQL with KumuluzEE

by Jean-François James at March 08, 2019 11:31 AM

In the context of my contribution to the recently started MicroProfile GraphQL initiative, I’ve decided to give KumuluzEE GraphQL a try in order to  get some interesting inputs. I first heard of KumuluzEE in 2015 when it won Java Duke’s Choice Award.  I must admit that I didn’t pay much attention to it so far. […]

by Jean-François James at March 08, 2019 11:31 AM

MicroProfile Health Check

by Hayri Cicek at March 03, 2019 08:53 AM

Health checks are used to determine whether a service is up and running, or whether there are problems such as a lack of disk space or issues with the database connection. In this tutorial, we will use the Eclipse MicroProfile Starter to generate a new project.

Go to https://start.microprofile.io/ and follow the steps below to generate a new project.
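The generated service exposes its health status at the /health endpoint. A minimal check with the MicroProfile Health 1.0 API looks roughly like this (a sketch, not part of the generated project: it assumes the MicroProfile dependency is on the classpath, and the class and check names are illustrative):

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

@Health
@ApplicationScoped
public class DiskSpaceHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        long usableBytes = new java.io.File(".").getUsableSpace();
        // report DOWN when less than roughly 100 MB of disk space is left
        return HealthCheckResponse.named("disk-space")
                .withData("usableBytes", usableBytes)
                .state(usableBytes > 100L * 1024 * 1024)
                .build();
    }
}
```

Once deployed, a GET request to /health returns the aggregated UP/DOWN status of all registered checks, including any data attached via withData.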

by Hayri Cicek at March 03, 2019 08:53 AM

#REVIEW: Improved Java/Jakarta EE productivity with Adam Bien’s WAD (Watch and Deploy)

by rieckpil at February 28, 2019 07:58 PM

The productivity of your developers is crucial for the success of your project. Without fast deployments and short feedback cycles for a new feature, you lose a lot of time just “idling”. As a Java/Jakarta EE developer, you will most likely have a local installation of the target application server and deploy the application several times a day during development. In the past, I’ve used Eclipse/IntelliJ/NetBeans with various vendor plugins to start the application server and deploy on every new code change. This was always quite a fiddly task, as I sometimes ran into issues with the plugins, had to wait for a new release, or it took longer than expected until I could see my changes. Luckily, Adam Bien (@AdamBien) solved this with a small Java project called WAD (Watch and Deploy), which deploys thin .war files on every change to any server and improves your productivity.

In this blog post, I’ll review the project and show you my personal setup.

Prerequisites

First, make sure you have Maven installed on your machine and that the bin folder of Maven is on your PATH. In addition, you have to set the MAVEN_HOME environment variable for Linux/Mac and M2_HOME for Windows to the path of the Maven root folder e.g. C:\Users\rieckpil\development\maven-3.5.3

Getting started

To start with WAD you first have to download the .jar file from GitHub and place it into your Java/Jakarta EE project folder next to your pom.xml (for quick Java EE 8 application bootstrapping, have a look at my or Adam’s Maven archetype).  With the .jar file in place, you can now start it with:

java -jar wad.jar [DEPLOYMENT_DIR_1] [DEPLOYMENT_DIR_2] [DEPLOYMENT_DIR_3]

WAD will now watch for changes within src/main. On every change, the project is built and deployed to the folders you specified when launching WAD. You can manually specify application server auto-deployment folders (e.g. for Payara: .../Payara/glassfish/domains/domain1/autodeploy) separated by spaces, or use a global configuration as described in the following section.
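This watch-and-deploy cycle can be sketched with plain JDK APIs. The following is a simplified illustration of the mechanism, not WAD’s actual implementation; all paths and names are made up:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class WatchAndDeploy {

    // Copy the freshly built WAR into every configured deployment folder,
    // which is essentially the copy step after each successful rebuild.
    static void deploy(Path war, List<Path> deploymentDirs) throws IOException {
        for (Path dir : deploymentDirs) {
            Files.createDirectories(dir);
            Files.copy(war, dir.resolve("myapp.war"), StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        Path watched = Files.createTempDirectory("src-main");    // stands in for src/main
        Path war = Files.createTempFile("thin", ".war");         // stands in for target/*.war
        List<Path> targets = List.of(Files.createTempDirectory("autodeploy"));

        WatchService watcher = FileSystems.getDefault().newWatchService();
        watched.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        Files.createFile(watched.resolve("Hello.java"));         // simulate a source change

        WatchKey key = watcher.poll(10, TimeUnit.SECONDS);       // wait for the change event
        if (key != null) {
            // a real tool would run the Maven build here before copying
            deploy(war, targets);
            key.reset();
        }
        System.out.println("deployed: " + Files.exists(targets.get(0).resolve("myapp.war")));
    }
}
```

WAD additionally runs the Maven build between detecting the change and copying the WAR, which is omitted in this sketch.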

With the latest release of WAD, you can even configure the deployment folders in one place with a .wadrc located in your home/user directory. This file specifies all deployment folders separated by a new line like:

C:\Development\Server\Payara\payara-5.183\glassfish\domains\domain1\autodeploy
C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\usr\servers\defaultServer\dropins
C:\Development\Server\Wildfly\wildfly-16.0.0\standalone\deployments
C:\Development\Server\TomEE\apache-tomee-plume-8.0.0-M2\webapps

With this setup, you can now launch WAD without an additional parameter like:

java -jar wad.jar

and it will detect all deployment folders automatically:

$ java -jar wad.jar
wad 0.1.1-SNAPSHOT
 'C:\Development\Server\TomEE\apache-tomee-plume-8.0.0-M2\webapps'  from ~/.wadrc
 'C:\Development\Server\Wildfly\wildfly-16.0.0\standalone\deployments'  from ~/.wadrc
 'C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\usr\servers\defaultServer\dropins'  from ~/.wadrc
 'C:\Development\Server\Payara\payara-5.183\glassfish\domains\domain1\autodeploy'  from ~/.wadrc
resulting deployment folders are:
 'C:\Development\Server\TomEE\apache-tomee-plume-8.0.0-M2\webapps'
 'C:\Development\Server\Wildfly\wildfly-16.0.0\standalone\deployments'
 'C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\usr\servers\defaultServer\dropins'
 'C:\Development\Server\Payara\payara-5.183\glassfish\domains\domain1\autodeploy'
WAD is watching .\src\main, deploying target\improved-java-ee-productivity-with-wad.war to [C:\Development\Server\TomEE\apache-tomee-plume-8.0.0-M2\webapps\myapp.war, C:\Development\Server\Wildfly\wildfly-16.0.0\standalone\deployments\myapp.war, C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\usr\servers\defaultServer\dropins\myapp.war, C:\Development\Server\Payara\payara-5.183\glassfish\domains\domain1\autodeploy\myapp.war]

During the first launch, your project is initially built and deployed:

[10:51:02][1] built in 2455 ms
Copying 4kB ThinWAR to  (...)app.war
Copying 4kB ThinWAR to  (...)app.war
Copying 4kB ThinWAR to  (...)app.war
Copying 4kB ThinWAR to (...)app.war
copied in 6 ms

You can now start the application server of your choice or start all in parallel (when running TomEE/Payara/WildFly in parallel make sure they all start on a different port and not all on 8080) and test your application against every vendor if you want to.

In addition, I personally launch a tail -f server.log (on Windows I use either Git Bash or MobaXterm for that) for the application server I’m using to see possible errors within the console. Furthermore, I defined aliases to start and run every application server on my machine:

alias startPayara583='C:\Development\Server\Payara\payara-5.183\bin\asadmin.bat start-domain'
alias stopPayara583='C:\Development\Server\Payara\payara-5.183\bin\asadmin.bat stop-domain'
alias startOpenLiberty1901='C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\bin\server.bat start'
alias stopOpenLiberty1901='C:\Development\Server\OpenLiberty\openliberty-19.0.0.1\bin\server.bat stop'
alias startWildFly16='C:\Development\Server\Wildfly\wildfly-16.0.0\bin\standalone.bat -Djboss.http.port=8888'
alias startTomEEPlume='C:\Development\Server\TomEE\apache-tomee-plume-8.0.0-M2\bin\catalina.bat run'

Final thoughts

With this tooling, you can now pick any editor (Visual Studio Code, Atom, Notepad++ ...) or IDE of your choice and start coding. When you save your changes, they will be automatically deployed and you can see the results after a short delay (1–3 seconds, depending on how thin your .war is) in your browser.

If you are looking for a Docker-based solution in combination with WAD, have a look at this excellent video by Sebastian Daschner.

For new releases of WAD have a look at the official homepage wad.sh or at GitHub.

You can find a quick example of using WAD in my GitHub repository.

Happy Java EE hacking,

Phil


by rieckpil at February 28, 2019 07:58 PM

Eclipse Foundation Contributor Validation Service

February 25, 2019 11:20 PM

In an effort to provide a more robust solution to our Contributor Validation Service on GitHub, we created the Eclipse ECA Validation GitHub App that can be installed on any GitHub account, organization or repository.

The goal of this new GitHub App is to make sure that every contributor is covered by the necessary legal agreements in order to contribute to all Eclipse Foundation Projects including specification projects.

For example, all contributors must be covered by the Eclipse Foundation Contributor Agreement (ECA) and they must include a “Signed-off-by” footer in commit messages. When contributing to an Eclipse Foundation Specification Project, contributors must be covered with version 3.0.0 or greater of the ECA.

We created a GitHub App to improve the following problems:

  1. Reduce our maintenance burden by simplifying the installation process.
  2. Increase our API rate limit.
  3. Create a better experience for users by allowing the App to be installed on non-Eclipse project repositories such as the Eclipse IoT website and the Jakarta EE website.

Finally, we made some improvements to our “details” page. We added a “revalidate” button to allow Eclipse users to trigger a revalidation without pushing new changes to the pull-request and we added some useful links to allow users to return to GitHub or to sign the ECA.

We are planning to install our new Eclipse ECA Validation GitHub App on all our Eclipse Projects on GitHub this week, and I am hoping that these changes will improve the way our users contribute via GitHub.

If you are using our new GitHub App and you wish to contribute feedback, please do so on Bug 540694 - Github IP validation needs to be more robust.


February 25, 2019 11:20 PM

To BOM or not to BOM

by Edwin Derks at February 24, 2019 08:54 PM

Maven lets you define the artifacts that you want to include in your Java-based project. Whether you want to include the artifacts in your own build artifacts, or just want to have them available to compile against, you can decide what is applicable for your situation. In a Java EE / Jakarta EE / MicroProfile context, you will likely do the latter because of the nature of these platforms. I will explain later why this is such a good fit.

However, MicroProfile’s 2.2 release includes a BOM-style dependency for MicroProfile artifacts in addition to the conventional dependency that provides MicroProfile’s APIs for that specific version. You might wonder what the difference is and in which situation each of these options should be used. In case you’re interested, read on and you should be able to choose the applicable option when the situation calls for it.

Dependency management (non-BOM-style)

First, let’s look at the conventional and by far easiest way to get started with Java EE / Jakarta EE / MicroProfile using Maven. Below is a snippet that you should include in your pom.xml; after that, you are ready to start coding.

<dependencies>
  <dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.eclipse.microprofile</groupId>
    <artifactId>microprofile</artifactId>
    <version>2.2</version>
    <type>pom</type>
    <scope>provided</scope>
  </dependency>
</dependencies>

When you want to use an application server that is Java EE 8 compliant, you only have to include the javaee-api artifact dependency as shown in the snippet. However, if you choose an application server that also implements MicroProfile, you can add the microprofile artifact dependency in addition to the javaee-api artifact dependency in order to seamlessly use both in your enterprise application.

In the context of a Java EE 8 and optionally MicroProfile compliant application server, you want to have transparent access to all of the Java EE 8 and MicroProfile API artifacts because the application server provides the implementation. Due to the <scope>provided</scope> definition of the dependencies for javaee-api and microprofile artifacts, they provide you with exactly that because all of these API artifacts are defined in their <dependencies> sections.

Dependency management (BOM-style)

Now that we have a clear definition of what our “conventional” dependency management for using Java EE / Jakarta EE / MicroProfile looks like, let’s compare it with doing the same in BOM-style. BOM stands for Bill of Materials, which in the Maven context means that you have a list of artifact dependencies available, but not transparently. You explicitly cherry-pick which of these to use in your project by defining them as <dependencies> in your own pom.xml. This is the exact opposite of transparently having access to every artifact as described in the non-BOM-style section.

So to switch to a BOM-style dependency, we can change our previous pom.xml snippet to the following:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.2</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

If you look closely, two things have changed. First, the microprofile dependency has moved to the <dependencyManagement> section, but the javaee-api dependency has not. The second thing to notice is that the microprofile dependency’s scope has changed to <scope>import</scope>.

So what does this exactly mean? Well, remember that in the non-BOM-style section, I mentioned that all the <dependencies> of an artifact defined with <scope>provided</scope> were transparently available in your project? By changing the microprofile dependency to <scope>import</scope>, we are no longer transparently accessing microprofile‘s <dependencies>, but microprofile‘s <dependencyManagement>. This effectively means that the microprofile dependency makes no artifacts transparently available to our pom.xml, but only the associated versions! So if we now expand our snippet with an explicit dependency on microprofile‘s Health specification API, for example, we will have access to that particular API, using the version that has been defined in microprofile‘s <dependencyManagement>.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.2</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.eclipse.microprofile.health</groupId>
    <artifactId>microprofile-health-api</artifactId>
    <scope>provided</scope>
  </dependency>
</dependencies>

One question that remains is why I haven’t moved our javaee-api to create a BOM-style dependency. Well, this is simply because this artifact doesn’t define a <dependencyManagement> section in its pom.xml. For that reason, it is just not possible to create a BOM-style dependency for javaee-api, because there would be nothing to “dependencyManage“. In case you try anyway ;), it is perfectly valid to define the javaee-api with <scope>import</scope>, but this just doesn’t do anything.

Conclusion

BOM-style dependency management can be applicable for your situation, but in the context of a Java EE / Jakarta EE / MicroProfile application server, it makes little sense. However, the BOM-style could make sense when you are using a microservices framework that implements MicroProfile. In that case, you want or need to right-size your application by defining a <dependency> on the artifacts that you explicitly want to include in or exclude from the build artifacts of your project.

When I have more information and experience on whether or not to use BOM-style dependency management in the Java EE / Jakarta EE / MicroProfile ecosystem, I will of course be happy to share it with you. In any case, I hope this concept now makes sense and that you now can decide for yourself when to apply it in your own projects.


by Edwin Derks at February 24, 2019 08:54 PM

#HOWTO: Generate PDFs (Apache PDFBox) including Charts (XChart) with Java EE

by rieckpil at February 24, 2019 02:28 PM

Generating documents for e.g. invoices or reports is a central use case for enterprise applications. As a Java developer, you have a wide range of possible libraries to manipulate and create Word, Excel or PDF documents. To help you choose the right library, I’m going to present a running example for generating PDF documents with Apache PDFBox containing charts created with XChart. The example is based on Java 8 and Java EE 8, and is deployed to Open Liberty.

The pom.xml looks like the following:

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.rieckpil.blog</groupId>
  <artifactId>charts-in-pdf-java-ee</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>8.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.eclipse.microprofile</groupId>
      <artifactId>microprofile</artifactId>
      <version>2.0.1</version>
      <type>pom</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.pdfbox</groupId>
      <artifactId>pdfbox</artifactId>
      <version>2.0.13</version>
    </dependency>
    <dependency>
      <groupId>org.knowm.xchart</groupId>
      <artifactId>xchart</artifactId>
      <version>3.5.4</version>
    </dependency>
  </dependencies>
  <build>
    <finalName>charts-in-pdf-java-ee</finalName>
  </build>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
</project>

I’ve chosen Apache PDFBox (GitHub) as the PDF library because it is actively maintained, open source, easy to learn, and good enough for basic use cases. The charting library XChart (GitHub) is a lightweight Java library for plotting data with an intuitive developer API; it provides really good example charts and is capable of plotting every important chart type (XY, bar, pie, histogram, dial, radar, stick chart …).

For this simple showcase, the additional dependencies are packed within the .war but they could and should be part of the application server to have a thin war with quick deployment cycles.

The JAX-RS configuration for this application is quite simple:

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {

}

For downloading the generated PDF document I’ve added a simple JAX-RS endpoint:

@Path("reports")
@Stateless
public class ReportResource {

  @Inject
  private PdfGenerator pdfGenerator;

  @GET
  @Produces(MediaType.APPLICATION_OCTET_STREAM)
  public Response createSimplePdfWithChart() throws IOException {
    return Response.ok(pdfGenerator.createPdf(), MediaType.APPLICATION_OCTET_STREAM)
        .header("Content-Disposition", "attachment; filename=\"simplePdf.pdf\"").build();

  }

}

To notify the client about the content type of the response, the JAX-RS annotation @Produces(MediaType.APPLICATION_OCTET_STREAM) is required. In addition, I’ve added the Content-Disposition header so that regular browsers will directly download the incoming file with the given filename simplePdf.pdf.
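From the command line you can exercise the endpoint and see the header in action. This is a sketch, not part of the application: the host, port and context root below are assumptions for a local Open Liberty run, so adjust them to your setup. curl’s -O -J combination saves the response under the filename announced in Content-Disposition:

```shell
# Download the PDF; -J honors the Content-Disposition filename.
# Hypothetical URL -- adjust host, port and context root to your deployment:
#   curl -O -J http://localhost:9080/charts-in-pdf-java-ee/resources/reports
#
# The filename a client picks comes straight out of the header value:
header='attachment; filename="simplePdf.pdf"'
printf '%s\n' "$header" | sed -n 's/.*filename="\([^"]*\)".*/\1/p'
```

The sed line mirrors what a browser does: it extracts the quoted filename from the header value and uses it for the saved file.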

The injected EJB PdfGenerator is responsible for generating the PDF document as a byte array:

@Stateless
public class PdfGenerator {

  public byte[] createPdf() throws IOException {
    try (PDDocument document = new PDDocument()) {
      PDPage page = new PDPage(PDRectangle.A4);
      page.setRotation(90);

      float pageWidth = page.getMediaBox().getWidth();
      float pageHeight = page.getMediaBox().getHeight();

      PDPageContentStream contentStream = new PDPageContentStream(document, page);

      PDImageXObject chartImage = JPEGFactory.createFromImage(document,
          createChart((int) pageHeight, (int) pageWidth));

      contentStream.transform(new Matrix(0, 1, -1, 0, pageWidth, 0));
      contentStream.drawImage(chartImage, 0, 0);
      contentStream.close();

      document.addPage(page);

      ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
      document.save(byteArrayOutputStream);
      return byteArrayOutputStream.toByteArray();
    }

  }

  // ...
}

PDDocument is the central class for creating new PDF documents with Apache PDFBox. In this example, I’m using a try-with-resources block to create a new document and close it afterward. The PDDocument object can contain several PDPage objects, each representing a physical PDF page. To display the chart later on in landscape mode, the page is rotated by 90 degrees. For writing content to the PDPage, I’m also opening a PDPageContentStream. To include the chart as an image on the PDF page, I’m creating a PDImageXObject from a BufferedImage, transforming it (to also rotate the image) and drawing it to the page at position x=0 and y=0 (the page origin is the bottom-left corner).
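The transform is easier to follow with concrete numbers. Matrix(0, 1, -1, 0, pageWidth, 0) maps a point (x, y) to (pageWidth - y, x), so the landscape-sized image lands exactly on the rotated portrait page. A quick sketch, assuming the usual A4 media box of roughly 595 x 842 points (remember the chart image is created pageHeight wide and pageWidth tall):

```shell
# Matrix(0, 1, -1, 0, pw, 0) applies x' = pw - y, y' = x
pw=595   # approximate A4 page width in points; page height is ~842
for point in "0,0" "842,0" "0,595"; do
  x=${point%,*}; y=${point#*,}
  echo "($x,$y) -> ($((pw - y)),$x)"
done
```

The image corners (0,0), (842,0) and (0,595) map to (595,0), (595,842) and (0,0), i.e. the 842 x 595 image exactly covers the 595 x 842 page.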

The chart is created as the following:

private BufferedImage createChart(int width, int height) {
  XYChart chart = new XYChartBuilder().xAxisTitle("X").yAxisTitle("Y").width(width).height(height)
      .theme(ChartTheme.Matlab).build();
  XYSeries series = chart.addSeries("Random", null, getRandomNumbers(200));
  series.setMarker(SeriesMarkers.NONE);
  return BitmapEncoder.getBufferedImage(chart);
}

private double[] getRandomNumbers(int numPoints) {
  double[] y = new double[numPoints];
  for (int i = 0; i < y.length; i++) {
    y[i] = ThreadLocalRandom.current().nextDouble(0, 1000);
  }
  return y;
}

I’m using a simple XYChart with randomly generated numbers, creating it as high and wide as the underlying PDF page.

The resulting PDF document looks like this: simplePdf.pdf

There are quite a lot more possibilities with Apache PDFBox and XChart as the plotting engine, but this example should give you a quick start to get in touch with these libraries.

You can find the whole code base with instructions on how to run it on your machine on GitHub.

Have fun generating PDF documents,

Phil


by rieckpil at February 24, 2019 02:28 PM

Building an API Backend with MicroProfile (ebook)

by Hayri Cicek at February 24, 2019 10:02 AM

A few weeks ago, I started working on an ebook called Building an API Backend with MicroProfile, in which you will learn how to create a simple RESTful API backend with MicroProfile.
The project is open source and hosted on GitHub and everybody can contribute to the project or download the ebook and start coding.

by Hayri Cicek at February 24, 2019 10:02 AM

Java EE - Jakarta EE Initializr

February 20, 2019 11:08 PM

Getting started with Jakarta EE just became even easier!

Get started


February 20, 2019 11:08 PM

Eclipse MicroProfile Starter

by Hayri Cicek at February 19, 2019 09:32 AM

The new Eclipse MicroProfile Starter is live and it's really easy to generate a new project.
Go to https://start.microprofile.io/ and follow the steps below to generate a new project.

by Hayri Cicek at February 19, 2019 09:32 AM

Running GraphQL spqr and JNoSQL on GlassFish 5.1

by Jean-François James at February 15, 2019 06:51 PM

As you may know, GlassFish 5.1 has recently been released. This is the first Jakarta EE release of GlassFish. I’ve taken this opportunity to deploy an application mixing GraphQL spqr and JNoSQL (backed by MongoDB) on it. And guess what? It works! This is preliminary work for the MicroProfile GraphQL initiative that has just […]

by Jean-François James at February 15, 2019 06:51 PM

Helidon 1.0 is Released

by dmitrykornilov at February 14, 2019 10:47 PM

I am proud to announce that Helidon 1.0 is released. This version brings full MicroProfile 1.2 support in Helidon MP, support for Yasson and Jackson in Helidon SE, and contains bug fixes and performance improvements. We have finished the API changes that we’ve been working on over the last few months. From this point on we will have much greater API stability.

More details are in release notes.


by dmitrykornilov at February 14, 2019 10:47 PM

JavaEE 8 + Payara 5 + Microprofile 2.1 + Docker In about a minute

February 14, 2019 10:04 PM

Thin Wars to the rescue

It can be really easy to start on your JavaEE / JakartaEE application. It’ll take you about a minute…

In this minute you will get a project with:

  • JavaEE 8
  • MicroProfile 2.1
  • Preconfigured Payara 5 Full server docker container - ivonet/payara:5.184
  • Maven essential setup
  • Run and build scripts for all of this

The minute has started…

Enter the command below in a terminal where you want to create your project and press Enter:

mvn archetype:generate \
-DarchetypeGroupId=nl.ivonet \
-DarchetypeArtifactId=javaee8-payara-microprofile-archetype \
-DarchetypeVersion=1.0 -U

The first time you run this command, it will take a bit more time, as it will download everything needed from the Maven Central repository.

You will be asked these questions:

Define value for property 'groupId': com.example
Define value for property 'artifactId': helloworld
Define value for property 'version' 1.0-SNAPSHOT: :
Define value for property 'package' com.example: :
Define value for property 'docker-hub-name': example

Just follow the instructions and your project will be created:

cd helloworld
./run

This will start the project in a Docker container. The container image will be downloaded the first time, which might take more than a minute depending on the speed of your internet connection. After the first time it will only take seconds.

Now go to http://localhost:8080//rest/example and you will have a working
example HelloWorld application.

Done 😄

After burner

Now you can load it into your favorite IDE and start building your own stuff.
Don’t forget to read the README.md of the project to learn more about the available commands.

Have fun.



February 14, 2019 10:04 PM

Payara 5 in docker with autodeploy function

February 12, 2019 09:32 PM

I wanted to play around with going back to the roots of Java (EE) and take the Docker part a step further. Why not have the platform as a Docker image in such a way that, when building the project, the image takes care of the deployment automatically after each build?

Goal

  • Building a Java EE application as easy as possible
  • Have all the cool stuff available from MicroProfile
  • Stay as close to the core as possible.

Prerequisites

  • Docker installed
  • Java 8 installed
  • IDE
  • A sense of adventure :-)

Steps

Payara docker image

I wanted to make a Docker image with Payara 5 on it. See this blog for more information on it.
It has to be configurable in such a way that it does not need to be rebuilt every time I create an artifact.
Assume that the war I create will be deployed on a similar server and that I don’t need to deploy the
whole platform every time. So I don’t want to build my Docker image every time either
when programming and building my application.

So I need:

  • Payara as a docker image -> ivonet/payara:5.184
  • a way to deploy automatically after each maven build
  • have it exposed to the local machine

In the image, a link has been created between /autodeploy and the actual autodeploy directory (see the Dockerfile for more information). This is done for convenience and ease of reading. Now you can mount this directory as a volume.

How to get auto deploy working?

In order to do this, I have chosen to add a very small part to my Maven Java project to facilitate the autodeploy feature.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...
  <build>
    <finalName>${artifactId}</finalName>
    <plugins>
      ...
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.1.0</version>
        <configuration>
          <failOnMissingWebXml>false</failOnMissingWebXml>
          <warName>${project.build.finalName}</warName>
          <outputDirectory>artifact</outputDirectory>
        </configuration>
      </plugin>
      ...
    </plugins>
  </build>
  ...
</project>

When executing mvn package it will build the war file but also copy the final artifact to the
<project>/artifact directory.

When starting the created docker container (from the project directory) we can now make sure
that it will mount the <project>/artifact directory to the /autodeploy container directory.

docker run -d --name payara \
-p 8080:8080 \
-p 4848:4848 \
-v $(pwd)/artifact:/autodeploy \
ivonet/payara:5.184

This command will run the Payara 5 Full profile server in daemon mode, with the <project>/artifact folder mounted to its internal /autodeploy directory.

Now if you build your project:

mvn clean package

it will build the war in the target folder, and the plugin defined in the pom.xml file will make sure that the artifact is copied to the <project>/artifact directory. The Payara server is monitoring that folder, because it has been mounted to its autodeploy directory, so it will see the new artifact and autodeploy it with the artifact name as the root context.

So if your project is called HelloWorld, the artifact will be called ./artifact/HelloWorld.war and the URL for Payara will become
http://localhost:8080/HelloWorld.
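In other words, the context root is just the war filename minus its extension. A quick sketch of the mapping (paths taken from the example above):

```shell
# Payara derives the context root from the deployed war's base name
war=artifact/HelloWorld.war
ctx=$(basename "$war" .war)
echo "http://localhost:8080/$ctx"   # -> http://localhost:8080/HelloWorld
```

If you want a different context root, rename the war (e.g. via <finalName> in the pom.xml) before it is copied to the autodeploy folder.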

Every time you rebuild the project with the Maven commands, Payara will redeploy the artifact, as it recognises that it has changed.
Now you can build and test your software without having to build a new image every time, while still testing against the final environment.

Pretty cool if I do say so myself 😄

Have fun,

Ivo.

Note

While creating the image I had started out with a small Alpine OpenJDK 1.8 Docker image,
but to my dismay it threw the error java.lang.NoClassDefFoundError: sun/security/ssl/SupportedEllipticCurvesExtension
when I tried to enable the remote admin console. In the end I got it to work on CentOS with Java 1.8.0_191
installed on it.
I must say it was a big surprise to me that this bug was there. I will try again with standard
OpenJDK versions at a later date, as I also want to migrate to Java 11+, but for now it is what it is 😄

Maven Archetype

This feature is also used in projects generated with this Maven Archetype.


February 12, 2019 09:32 PM

#OSSRRR at Devnexus 2019

by Ivar Grimstad at February 09, 2019 12:24 PM

Devnexus 2019

Devnexus 2019 is happening in Atlanta next month. This is truly an awesome tech conference run by the Atlanta Java Users Group and I am so happy to be part of it as a speaker for the third time.

My talk this year is a presentation of patterns commonly used in microservice architectures. Each of the patterns will be explained and demoed live using Eclipse MicroProfile.

Microservice patterns in Eclipse Microprofile

Another thing I look forward to at Devnexus is meeting up with all the people participating in Jakarta EE who are present at the conference.

The party theme this year is R3, which stands for Reflect, Relax, Recharge, and it is definitely the place to be to meet all the awesome community members, Java Champions and Groundbreaker Ambassadors present! Everybody will be there, and so should YOU!

There is a limited number of super cool t-shirts exclusively made for this party. One of them could be yours simply by writing a blog post!


by Ivar Grimstad at February 09, 2019 12:24 PM

Eclipse GlassFish 5.1 Officially Released

by Kenji Hasunuma at January 29, 2019 04:33 PM

Eclipse GlassFish 5.1 has finally been officially released.


by Kenji Hasunuma at January 29, 2019 04:33 PM

Glassfish 5.1 Release Marks Major Milestone for Java EE Transfer

by Arjan Tijms at January 29, 2019 04:29 PM

 

Today Eclipse GlassFish 5.1 has been released, and unlike the modest increase in version number might suggest, this truly marks a major milestone. Not just for the GlassFish project itself, but for Java EE and moving Jakarta EE forward even more.

 

A Look at The History of GlassFish

 

GlassFish goes back a long way. It at least goes back to the Kiva Enterprise Server, a Java application server which was released in January 1996 (for comparison, Java 1.0 itself was also released in that month!)

 

A year later, Netscape acquired Kiva, and the Kiva Enterprise Server became known as Netscape Application Server (NAS), which had its own pre-J2EE proprietary Java web APIs (such as the AppLogic framework, which was similar to Servlets, and DAE for DB access). NAS 2.1, which was available from early 1998, was a particularly popular version. Application servers were quite pricey back then; Netscape Application Server cost around $35,000 per CPU.

 

In 1999, Sun and Netscape (later AOL) formed an alliance, and Netscape Application Server 4, which was released later that year, included support for an early version of J2EE (Servlets, EJBs, JSPs, and JDBC). For example, JSP support was for the early version 0.92. 

 

Netscape Application Server 4 was chosen by the alliance to continue development on instead of merging it with the NetDynamics 5.01 application server Sun had acquired earlier. The name was once again changed, this time into iPlanet Application Server (iAS). It was part of the iPlanet suite of products jointly developed by Sun and AOL (Netscape). 

 

iAS Version 6, from around the year 2000, was a J2EE 1.2 compatible server supporting things such as Servlets 2.2, EJB 1.1, JSP 1.1 (based on Jasper) and JTA 1.0 (based on the Encina transaction monitor).

 

For version 7, the name was once again changed, now in full to "Stanford University Network Open Net Environment Application Server", aka Sun ONE Application Server (S1AS, or SOAS).  S1AS 7 was made available for no cost when it was released in late 2002, although it was still closed source. It included a modified Tomcat 4, which has a long history as well. Version 8 once again saw a name change, when it became Sun Java System Application Server (SJSAS) 8, which was J2EE 1.4 compatible. Around this time period, Sun had also split off a derived version called the J2EE SDK (Reference Implementation, or RI) which was essentially the core of the full application server, but later on this became the Platform Edition of SJSAS and the pure RI was only made available for TCK testing.

 

Open Source GlassFish Project

 

In 2005 the open source GlassFish project was started, which was essentially formed by the donation of the source code for SJSAS 9 by Sun and the TopLink persistence source code by Oracle (for the new JPA implementation in EE 5). In May 2006, the Java EE 5 compatible and fully open source GlassFish 1.0 was released. After some intermediate versions, a major re-architectured version of GlassFish was released in December 2009; GlassFish 3.0. In the GlassFish source code internally, there are still many references to "V3", which refer to this major milestone.

 

Payara Server is Born

 

After Oracle acquired Sun, it still released a version 3.1 of GlassFish in early 2011 with production features such as clustering and load balancing, but after that release it got relatively silent. In November 2013, Oracle announced they would still support the open source GlassFish but ended commercial support. In true open-source fashion, this led to the Payara Server, which started as a fork of GlassFish and added commercial support, regular bug fixes, and regular component updates. GlassFish 4 was released to support Java EE 7, but from a server architecture point of view, it was a relatively minor update with mostly the components being updated to their EE 7 versions.

 

GlassFish is Transferred to the Eclipse Foundation

 

Late 2015/early 2016 it becomes more quiet on the GlassFish front, and several articles appear questioning Oracle's interest in Java EE and specifically GlassFish. In August 2017, Oracle indeed announces not wanting to be primarily responsible for Java EE and GlassFish anymore. A little later it's announced that Java EE and all the GlassFish code (GlassFish itself and all its constituent components) will be transferred to the Eclipse Foundation.  The name of the project would become "EE4J" and early 2018 the source code starts transferring to the "eclipse-ee4j" repo on GitHub.

 

As part of the deal between Oracle and Eclipse, it's decided to release a GlassFish 5.1 that's completely built by the Eclipse organisation from the transferred and relicensed components, and that is fully Java EE 8 certified.

 

Payara Services Involvement

 

Altogether this transfer has taken a lot of work. Payara Services (the company as well as individuals working for Payara) have supported this process from its early stages. It included help with the initial cleaning of several projects for the vetting of the transfer. For instance, in the Mojarra project there was quite a bit of ancient code and other artefacts that were removed one by one, as they would have been difficult to vet. After the bare source code transfer, several adjustments were needed to make the projects build cleanly, and later on to make them work on the Eclipse Jenkins instances (https://jenkins.eclipse.org). For this to happen, a large number of jobs had to be created for each project, to build, stage and finally release them to Maven Central. The component tracker at https://wiki.eclipse.org/Eclipse_GlassFish_5.1_Components_Release_Tracker gives some idea of which projects were involved.

 

Payara specifically contributed to the transfer of the following API projects and their associated implementations:

 

#                    Leading        EE4J Impl
JSF                  ☑️             Mojarra
Expression Language  ☑️             EL-RI
EE Security          -              Soteria
JACC                 ☑️             GlassFish
JASPIC               ☑️             GlassFish
Interceptors         ☑️ (shared)    -
JAX-RS                              Jersey
JMS                                 OpenMQ
JSP                                 GlassFish
Servlet                             GlassFish
WebSocket                           Tyrus
EE Concurrency       ☑️             Concurrency RI

 

Altogether it was a great experience working on this transfer, but at times it was also quite a bit of work, especially when in the beginning it wasn’t clear at all how to proceed with certain things.

 

Now, however, this work is finally done! GlassFish is fully built on Eclipse infrastructure, and today’s release by Eclipse marks another major step in GlassFish’s long journey: starting at Kiva, passing through Netscape, growing up at Sun, passing through Oracle, and now landing at Eclipse.

One Step Closer to Jakarta EE 9

But it's not only about GlassFish itself. With this transfer completed, and both GlassFish and its components available via the jakarta.* Maven coordinates, we are one major step closer to starting the work for Jakarta EE 9.

GlassFish 5.1 can be downloaded here: https://projects.eclipse.org/projects/ee4j.glassfish/downloads

The implementation components are available from the org.glassfish Maven coordinates as before: https://repo1.maven.org/maven2/org/glassfish/main/distributions/glassfish/5.1.0

The API jars now live under the new jakarta.* Maven coordinates: https://repo1.maven.org/maven2/jakarta/

We at Payara would like to thank all partners from Jakarta EE who helped with the transfer. Special thanks go to Dmitry Kornilov for his tireless help and advice whenever we got stuck on something. Thanks Dmitry! :)


by Arjan Tijms at January 29, 2019 04:29 PM

Eclipse GlassFish 5.1 is released

by dmitrykornilov at January 29, 2019 02:00 PM

I am very excited to bring you some great news. Today Eclipse GlassFish 5.1 has finally been released and is available on Maven Central, or can be downloaded from the Eclipse web site.

A huge milestone has been reached. Eclipse GlassFish 5.1 is a pure Eclipse release. All components formerly supplied by Oracle have been transferred to the Eclipse Foundation from Oracle Java EE repositories, have passed the Eclipse release review, and have been released to Maven Central with new licensing terms. Eclipse GlassFish 5.1 has passed all CTS/TCK tests (run on Oracle infrastructure) and has been certified as Java EE 8 compatible.

CTS test results (copied from Oracle infrastructure):

This release doesn’t contain any new features. The feature set is the same as in Oracle GlassFish 5.0.1. The main goal was to demonstrate that GlassFish and all other components transferred from Oracle to Eclipse are buildable, functional and usable.

Another significant change is the license. It's the first version of GlassFish released under EPL 2.0 + GPL 2.0 with the Classpath Exception.

And the third change is the modification of Maven coordinates of GlassFish components. To distinguish Jakarta APIs from Java EE APIs we changed their Maven coordinates from javax to jakarta and released new versions. The full list of components used in GlassFish 5.1 can be found here.
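For illustration, the coordinate change means a pom.xml dependency moves from the old javax groupId to the new jakarta one. The Servlet API is used here as an example, and the version numbers are illustrative only; check Maven Central for the actual releases:

```xml
<!-- Old Java EE coordinates -->
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>4.0.1</version>
</dependency>

<!-- New Jakarta coordinates -->
<dependency>
    <groupId>jakarta.servlet</groupId>
    <artifactId>jakarta.servlet-api</artifactId>
    <version>4.0.2</version>
</dependency>
```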

In addition to delivering the software we have also learned:

  • How to use Eclipse Development Process to elect committers, submit projects for release reviews, etc.
  • How to use the Eclipse build infrastructure to set up build jobs and release components to the staging repository and to Maven Central
  • How to communicate with different project teams and work together to achieve shared goals
  • And much more…

The next step for the Jakarta EE community is to complete the Jakarta EE specification process, create Jakarta EE specifications corresponding to all the Java EE specifications, and approve all of them through that process. The new process will no longer require a Reference Implementation, but it will require at least one Compatible Implementation. We hope the community will ensure that Eclipse GlassFish is Jakarta EE 8 compatible and will remain a compatible implementation as the Jakarta EE specification evolves in the future.

It took us more than a year to deliver this release. A huge amount of work has been done, and we wouldn't have completed it without the community's help and support. I would like to say thanks to all the people who participated in this release.



Eclipse GlassFish 5.1 is here!

by Ivar Grimstad at January 29, 2019 01:52 PM

The release of Eclipse GlassFish 5.1 is an important milestone for Jakarta EE!

First of all, it is confirmation that the GlassFish source code contributed by Oracle can be built and assembled on Eclipse infrastructure.

Second, by passing the Java EE 8 compatibility tests, it verifies that the contributed code follows the Java EE 8 specifications and hence is Java EE 8 Compatible.

Download Eclipse GlassFish 5.1 and give it a try!

And while you’re at it, why don’t you try it out with Apache NetBeans as I have shown below.

Eclipse GlassFish 5.1 in Apache NetBeans 10



Jersey 2.28 has been released

by Jan at January 28, 2019 11:23 AM

Jersey 2.28 has been released and is available in Maven Central! Jersey 2.28, the first Jakarta EE implementation of JAX-RS 2.1, has finally been released. Please let us know how you like it! After the whole of Java EE has been contributed … Continue reading


Eclipse Glassfish 5.1.0.RC2 has been released!

by Jan at January 26, 2019 01:14 AM

That’s right, Eclipse GlassFish 5.1.0.RC2, the Jakarta EE implementation, has been published in Maven Central. The Eclipse GlassFish 5.1.0 release is terribly close. The latest GlassFish in Maven Central, Eclipse GlassFish 5.1.0.RC1, publicly available for a couple of months, is mostly identical to Oracle … Continue reading


Jersey 2.28-RC4 has been released

by Jan at January 22, 2019 02:20 PM

Jersey 2.28-RC4 is available in Maven Central! Jersey 2.28-RC4, the first publicly available Jakarta EE version of Jersey, is an almost final version of Jersey 2.28. Jersey 2.28 is to be released soon and will be part of Eclipse GlassFish … Continue reading


EFSP: The Specification Committee Votes

by waynebeaton at January 21, 2019 05:40 PM

One key difference between Eclipse open source software projects as defined by the Eclipse Development Process (EDP), and open source specification projects as defined by the Eclipse Foundation Specification Process (EFSP) is that specification projects must be aligned with exactly one specification committee. More generally, specification projects are aligned with an Eclipse working group and are governed (in part) by the working group’s specification committee.

The specification committee is required to vote to approve key milestones in the lifecycle of their specification projects:

  • Specification project creation;
  • Release plan;
  • Revision to the scope;
  • Progress and release reviews;
  • Service releases; and
  • Designation of a profile or platform.

The most frequent votes occur when specification projects engage in the progress and release reviews that occur during the development cycle.


To succeed, a vote requires positive responses from a super-majority (defined as two-thirds) of the members of the specification committee. Votes to designate a specification as a profile or platform require positive responses from a super-majority of the specification committee members who represent the interests of Strategic Members of the Eclipse Foundation. It’s worth noting that there is no veto.
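The two-thirds rule can be sketched as a quick calculation (a hypothetical helper for illustration, not part of the EFSP itself; it assumes "two-thirds" means at least two-thirds of the committee, rounded up to a whole number of members):

```python
import math

def vote_passes(positive_responses: int, committee_size: int) -> bool:
    """Return True when positive responses reach a super-majority,
    defined here as at least two-thirds of the committee members."""
    threshold = math.ceil(committee_size * 2 / 3)
    return positive_responses >= threshold

# A 9-member committee needs 6 positive responses; a 10-member one needs 7.
print(vote_passes(6, 9))   # → True
print(vote_passes(6, 10))  # → False
```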

The criteria by which representatives decide how to vote vary by individual, according to their own values and (if applicable) those of the organization they represent. Minimally, the specification committee is expected to use its vote to ensure that specification projects stay within scope. In the case of a progress review, the voters will need to consider whether the project is progressing in a manner that will eventually result in a successful vote on the release review that gates the ratification of the final specification.

The EFSP is silent on what happens in the event of a failed vote. In the event of a failure, we expect that feedback regarding the reason for the failure will be provided to the project team, who will work to mitigate issues and then re-engage.

Please see:



Glassfish 5.1.0 is to be released!

by Jan at January 20, 2019 09:29 PM

GlassFish Application Server has been the reference implementation of Java EE since Java EE 5. In those days, it was GlassFish version 2. The Java EE 6 reference implementation was GlassFish 3.x, the Java EE 7 reference implementation was GlassFish 4.0, Java EE … Continue reading


Jakarta EE 9 - 2019 Outlook

by Arjan Tijms at January 18, 2019 03:09 PM

As is presumably well known by now, Java EE is in the process of being transferred to the Eclipse Foundation. A lot of work, partially behind the scenes, has been done to make this happen. This work included discussions between vendors and other interested individuals, the vetting of the code in the Java EE repo at GitHub, actually transferring the code from the Java EE repo to the Eclipse repo, and most recently the preparation of the transferred code to be buildable on Eclipse Foundation infrastructure, along with changing the Maven coordinates over from javax.* to jakarta.*


