How to configure a GlassFish instance running on AWS Elastic Beanstalk

Published 2020-03-15 08:35

Question:

I'm using GlassFish to serve up a Java EE web app. Things are working fine on my local dev machine. I have

  • copied the PostgreSQL JDBC driver into the right place
  • configured a connection pool and JDBC resource in the GlassFish admin console
  • deployed a web-app that uses said connection
  • seen the results in my browser

I'm trying to deploy that same app to a GlassFish instance hosted on AWS Elastic Beanstalk. AWS EB uses Docker to deploy the GlassFish instance. I can do only the third step above (deploy a web app), and am completely at a loss as to how to do the first two.

What I'd love to do is have web access to the GlassFish admin console, but that doesn't seem to work at any level. An alternative would be to use the GlassFish asadmin tool on my local machine to configure the remote GlassFish instance, but I can't make that happen either.
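
Roughly what I was hoping to run from my local machine is something like the following (hostname and credentials are placeholders; as far as I know this also needs secure remote admin enabled on the server, via enable-secure-admin, which is exactly the part I can't reach):

asadmin --host my-eb-env.elasticbeanstalk.com --port 4848 --user admin list-jdbc-connection-pools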

How does one configure a GlassFish instance hosted on AWS EB? Is it even possible?

I've made some observations, but I'd appreciate confirmation or otherwise:

  • it appears that AWS has a command in their CLI called 'asadmin', which is about autoscaling and happens to share its name with the 'asadmin' that ships with GlassFish. Apart from making Google searches hard, the two seem to have nothing to do with each other
  • if I connect to the AWS EC2 instance containing the Docker container and GlassFish instance, the following things happen
    • sudo docker ps shows ports 4848/tcp, 8080/tcp and 8181/tcp, none of which are mapped to the host
    • wget localhost:8080 - connection refused
    • the same for 8181 and 4848
    • wget localhost:80 returns the GlassFish home page
  • on that same instance, docker inspect gives me an internal IP address (call it 1.2.3.4); then, from that EC2 instance
    • wget 1.2.3.4:8080 (and 4848, 8181) all return HTML pages
    • wget 1.2.3.4:80 - connection refused
  • if I run a bash shell in the Docker container, the following things appear to be true
    • wget localhost:8080 (and 4848, 8181) all return well-formed pages
    • wget localhost:80 - connection refused

So maybe I need to tell the EC2 instance to forward from localhost to 1.2.3.4, but how can I make that happen when the EB load balancer scales it out?
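
For what it's worth, the port publishing that seems to be missing is the sort of thing a plain docker run would get from -p flags, roughly as sketched below (the image name is a placeholder); but since EB owns the docker run invocation, I don't see where I could add them:

docker run -d -p 8080:8080 -p 4848:4848 -p 8181:8181 some-glassfish-image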

Any advice would be greatly appreciated.

Answer 1:

What follows is something that works for me - but I have a feeling I'm missing something. Any edits/comments would be most welcome.

There are various hooks in the EB/Docker deployment that allow post-deployment scripts to be run against a GlassFish instance, within a Docker container, within an EB instance. I used post-deployment hooks to set up a connection pool. Here's what the final install looks like, just for reference:

|  | |  \_WAR_/  | | |
|  | \_Glassfish_/ | |
|  \____Docker____/  |
\____EC2 Instance____/

The overall desired outcome is that, after the app is deployed, the asadmin commands to create a JDBC connection pool, and to turn that pool into a JDBC resource, are run inside the Docker container. On my local machine, the commands would be

asadmin create-jdbc-connection-pool \
    --datasourceclassname org.postgresql.ds.PGConnectionPoolDataSource \
    --restype javax.sql.ConnectionPoolDataSource \
    --property user=USERNAME:password=PASSWORD:serverName=DBHOST:portNumber=5432:databaseName=DBNAME \
    poolName

asadmin create-jdbc-resource --connectionpoolid poolName jdbc/dev

Where 'jdbc/dev' is the name the Java code needs to know in order to get a connection in the usual manner, i.e.

// needs javax.naming.InitialContext and javax.sql.DataSource
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/dev");
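
As an aside, once the pool exists it can be sanity-checked locally with the standard asadmin ping subcommand:

asadmin ping-connection-pool poolName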

We want the commands to run inside the Docker container, because the container has access to the environment variables declared in the AWS admin console; that lets me pass configuration information in without baking it into my build scripts.
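
If you want to confirm that those environment properties really are visible inside the container, a quick check from the EC2 instance looks something like this (assuming a single running container):

sudo docker exec $(sudo docker ps -q | head -n 1) env | grep PARAM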

To achieve this outcome, we need a file to be created on the EC2 instance during installation, in my case called /opt/elasticbeanstalk/hooks/appdeploy/post/99_configure_jdbc.sh. This file is executed post-deployment, as root, on the EC2 instance. I'll refer to it as the ec2-post-deploy-hook.

We're going to create that file using a .config file in .ebextensions, as documented here:

  • http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
  • http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html

My .config file has the following contents:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_configure_jdbc.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      date > /tmp/post 2>&1
      dockerid=`docker ps | grep latest | cut -d" " -f1`
      echo $dockerid >> /tmp/post 2>&1
      docker ps >> /tmp/post 2>&1
      docker exec $dockerid /var/app/WEB-INF/classes/setup_pool.sh >> /tmp/post 2>&1

Everything after 'content: |' ends up in the ec2-post-deploy-hook.

I learned this idea from http://junkheap.net/blog/2013/05/20/elastic-beanstalk-post-deployment-scripts.

Strictly, only the last line and the fourth-last line (the dockerid= assignment and the docker exec) are needed; the other lines are useful for debugging. Output ends up in /tmp/post on the EC2 instance.
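
For orientation, here is roughly how the pieces sit in the bundle that gets deployed, assuming the WAR itself is what is handed to Elastic Beanstalk; the name jdbc.config is just what I happen to call my .config file:

my-app.war
  .ebextensions/
    jdbc.config          (generates the ec2-post-deploy-hook above)
  WEB-INF/
    classes/
      setup_pool.sh      (the docker-post-deploy-hook described below)
  ... rest of the web app ...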

The one trick in that file is that we can always get the ID of the Docker container with

sudo docker ps | grep latest | cut -d" " -f1

because after deployment there will be only one Docker container running, and (in this setup) its docker ps line contains "latest" in the image column, which is what the grep picks up.
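
A slightly more defensive way to grab that ID, leaning only on the "single container" assumption rather than on the image tag, would be something like:

dockerid=$(docker ps -q | head -n 1)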

The last line of the ec2-post-deploy-hook uses docker to run, inside the Docker container, the commands I originally wanted run - that is, the asadmin commands. I deploy a file called setup_pool.sh inside my .war file, so it ends up in a known location during deployment. My setup_pool.sh looks like this (I call it the docker-post-deploy-hook):

#!/bin/bash
dbuser=$PARAM1
dbpass=$PARAM2
dbhost=$PARAM3
dbname=$PARAM4

date > /tmp/setup_connections
echo '*********' >> /tmp/setup_connections
asadmin create-jdbc-connection-pool \
    --datasourceclassname org.postgresql.ds.PGConnectionPoolDataSource \
    --restype javax.sql.ConnectionPoolDataSource \
    --property user=${dbuser}:password=${dbpass}:serverName=${dbhost}:portNumber=5432:databaseName=${dbname} \
    ei-connection-pool >> /tmp/setup_connections 2>&1
echo '*********' >> /tmp/setup_connections
asadmin create-jdbc-resource --connectionpoolid ei-connection-pool jdbc/dev >> /tmp/setup_connections 2>&1
echo '*********' >> /tmp/setup_connections

This file is run inside the Docker container. The two asadmin commands are the meat; the rest is debugging output, written to /tmp/setup_connections inside the container.
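
One small assumption in the ec2-post-deploy-hook's docker exec line is that setup_pool.sh keeps its executable bit after being unpacked from the WAR; if it doesn't, invoking it through bash explicitly works just as well:

docker exec $dockerid bash /var/app/WEB-INF/classes/setup_pool.sh >> /tmp/post 2>&1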

Passwords, etc., come in from the AWS environment, arriving as the PARAM1-PARAM4 environment properties used above.

The only thing I cannot do at this point is have the AWS environment variables available on the first deployment. I have no idea why, but I only seem to be able to set them after the environment is up and running. This means I have to deploy twice: a dummy deploy, followed by an edit of the environment, followed by a real deploy.
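
The "edit of the environment" step is just setting the environment properties; that can be done in the console or, if you use the EB CLI, with something along these lines (values are placeholders):

eb setenv PARAM1=USERNAME PARAM2=PASSWORD PARAM3=DBHOST PARAM4=DBNAME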

So, summing up,

  • at deployment
    • a .config file generates an ec2-post-deploy-hook file,
    • the AWS system deploys the docker-post-deploy-hook as part of the .war that is deployed to GlassFish
  • at post deployment,
    • the elastic beanstalk system runs the ec2-post-deploy-hook
    • the ec2-post-deploy-hook runs the docker-post-deploy-hook
    • the docker-post-deploy-hook runs asadmin to set up the appropriate connection pools
  • at run time, the Java code in the web app makes use of the connection pools

And it all works. It's kind of ugly to behold, but, you know, so am I.



Answer 2:

After struggling with this myself for some time, I think I have finally found an acceptable workaround (at least for me), as follows:

Create a Dockerfile and package it directly inside the WAR (at the top level, not in any folder). Dockerfile -

# Use the AWS Elastic Beanstalk Glassfish image
FROM        amazon/aws-eb-glassfish:4.1-jdk8-onbuild-3.5.1

# Expose the HTTP, admin, and HTTPS ports
EXPOSE      8080 4848 8181

# Install Datasource dependencies
RUN         curl -L -o /tmp/connectorj.zip https://server/path/connectorj.zip && \
            unzip /tmp/connectorj.zip -d /tmp && \
            cp /tmp/connectorj/mysql-connector-java-5.1.36-bin.jar /usr/local/glassfish4/glassfish/domains/domain1/lib/ && \
            mv /var/app/WEB-INF/classes/domain.xml /usr/local/glassfish4/glassfish/domains/domain1/config/

Now when this WAR is deployed (I am using 'eb deploy'), this Dockerfile is executed.

In the simple example above, the MySQL JDBC driver is first downloaded and installed into GlassFish's lib directory. Next, the domain.xml that I packaged inside the WAR itself (with all the resources, etc., already set up) is moved to GlassFish's domain config folder, so it is loaded when GlassFish starts.
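
For anyone wondering what the pre-baked domain.xml buys you: the relevant part is just the pool and resource definitions, essentially the XML equivalent of the asadmin commands in the other answer. A trimmed-down sketch of that fragment, with placeholder names and values, looks roughly like this (a matching <resource-ref ref="jdbc/dev"/> under the <server> element is also needed, which asadmin normally adds for you):

<jdbc-connection-pool name="my-connection-pool"
    datasource-classname="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource"
    res-type="javax.sql.ConnectionPoolDataSource">
  <property name="user" value="USERNAME"/>
  <property name="password" value="PASSWORD"/>
  <property name="serverName" value="DBHOST"/>
  <property name="portNumber" value="3306"/>
  <property name="databaseName" value="DBNAME"/>
</jdbc-connection-pool>
<jdbc-resource jndi-name="jdbc/dev" pool-name="my-connection-pool"/>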