I'm trying to run a number of commands, such as asadmin create-custom-resource. However, I have multiple domains, and I wish to specify which domain should be affected.
How do I run asadmin commands and specify which domain they affect?
I'm using Glassfish 3.1.2.2
You can't specify a domain directly for most asadmin commands.
But you can specify the Domain Administration Server (DAS) that controls the domain, which identifies the domain indirectly. To do this, just use the --host and --port parameters:
asadmin --host localhost --port 4848 create-custom-resource ...
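For example, if you run two domains whose DAS instances listen on different admin ports (4848 is GlassFish's default; 5858 here is purely an assumed port for a second domain), each command targets one domain:

asadmin --host localhost --port 4848 create-custom-resource ...
asadmin --host localhost --port 5858 create-custom-resource ...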
See also:
Glassfish 3.1.1 - How to enable secure admin for different domains?
How to specify domain name while creating jdbc resource/connection pool in glassfish 3
I have multiple domains that point to a single IP, but since I don't want to expose my IP, I want to use Argo Tunnel and achieve the same functionality (point all domains to the same server).
But the problem is that with Argo Tunnel I am unable to add multiple domains. I can't create multiple tunnels with different domains to the same machine, because one machine has one certificate installed, and to initiate a new Argo Tunnel the previous certificate needs to be deleted.
How can I create tunnels for abc.com, xyz.com and qrs.com with a single server?
I have done this on my Ubuntu cloud server. Follow these steps.
Step 1:
I moved ~/.cloudflared/cert.pem to ~/.cloudflared/cert.pem.abc.com
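In shell form:

mv ~/.cloudflared/cert.pem ~/.cloudflared/cert.pem.abc.com

The destination filename is just a convention for remembering which domain the certificate was issued for.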
Step 2 (authenticate the new domain xyz.com):
Run in the terminal: cloudflared login
Once authenticated, run the following command to start the new tunnel:
sudo cloudflared tunnel --hostname xyz.com --url http://127.0.0.1
You can also put this command in the background to keep it running.
This will do the work you need but it has a problem.
The problem is that whenever you restart or create any tunnel, you will need to copy that domain's cert.pem to ~/.cloudflared/cert.pem before you can start the tunnel. Once the tunnel is running, the file is no longer required.
So this process requires replacing the cert.pem file every time you start a new tunnel or restart an existing one.
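A minimal helper sketch for that swap, assuming the certificates were saved as ~/.cloudflared/cert.pem.<domain> as in Step 1 (the script itself and the backgrounding with & are my own additions, not part of cloudflared):

#!/bin/sh
# start-tunnel.sh <domain>: copy the domain's saved certificate into place,
# then start its tunnel in the background.
DOMAIN="$1"
cp ~/.cloudflared/cert.pem."$DOMAIN" ~/.cloudflared/cert.pem
sudo cloudflared tunnel --hostname "$DOMAIN" --url http://127.0.0.1 &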
This is the only way to support multiple tunnels at the same time. Alternatively, you can use the CNAME Setup feature of Cloudflare, but that requires a Business plan or higher.
I have DevStack running on my machine and created an instance of Alpine Linux which runs a Rails API (IP 10.0.0.6) on port 3000 (I also tried 80 and 8080). Then I created a simple CirrOS client instance (IP 10.0.0.4) to access the /test endpoint of the API. I find that I can run:
ping 10.0.0.6
from the CirrOS instance and receive replies. However, when I try:
curl -XGET http://10.0.0.6:3000/test
I receive the error:
curl: (7) couldn't connect to host
Both instances belong to the same private network, and the security group policy allows all ingress and egress traffic for any protocol.
The /test endpoint works locally on the API instance.
I also tested that I'm able to make an ssh connection from one instance to another.
What configuration could I be missing? Thanks!
Found the solution.
It wasn't a misconfiguration on the OpenStack side.
I needed to run Rails with the flag -b 0.0.0.0 to bind to all interfaces; by default, Rails only serves on localhost.
rails s -b 0.0.0.0
You could always try telnetting to the particular port the server is running on, to determine whether it's a networking issue or some other configuration issue.
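For example, from the CirrOS instance, using the API instance's address and port from the question:

telnet 10.0.0.6 3000

If ping works but the telnet connection is refused, the network path is fine and the service simply isn't listening on that interface, which matches the -b 0.0.0.0 fix above.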
I'm using Ansible to spin up a new Amazon EC2 instance, and then I install Java and Tomcat (via the yum module). After placing the war for the sample project from the Apache website in the webapps directory, I run the command (below) and nothing happens. It returns no response and no error. I've checked both the IP and port 8080, and Tomcat is not running.
[centos@sonar-test webapps]$ sudo systemctl start tomcat
[centos@sonar-test webapps]$ sudo systemctl start tomcat
[centos@sonar-test webapps]$
For reference, I was following this tutorial as well:
https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-7-on-centos-7-via-yum
From your comment on my question, after running curl inside your EC2 instance:
When I curl I get a large html document with various apache-esque things on it
It means Tomcat is installed and running.
If you can't access it from outside, it's because of your security group rules.
In your EC2 console, select the Security Groups option. Edit the rules associated with your EC2 instance (the one running Tomcat) so they permit inbound connections to port 8080 (so you can make requests to your Tomcat server), and to port 80 if you're running Apache (or nginx/another web server). If you're not sure about security, you can restrict the inbound traffic to come only from your own IP, so you can test but no one else can make requests.
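If you prefer the CLI, the same rule can be added with the AWS CLI; the security group ID and source address below are placeholders to replace with your own:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 203.0.113.10/32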
I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up a local, a remote and a virtual Docker repository, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repository.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory", and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP, without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only subdomains one level down, but not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case, what you should do is use ports for mapping repositories instead of subdomains, so the Docker repository will be accessible under, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find the explanation of this change, and how to do it using the Artifactory Proxy settings generator, in the User Guide.
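With such a port-based mapping in place, the pull from the question would look like this (5001 being the example port from above):

docker pull my.company.com:5001/ubuntu:16.04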
If you are prepared to live with the certificate-name mismatch for now, and you understand the security implications of ignoring the mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.
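After the restart, the pull from the question should no longer be rejected for the certificate name mismatch:

docker pull docker.my.company.com/ubuntu:16.04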
I have a WebLogic Docker container. The WLS admin port is configured at 7001. When I run the container, I use --hostname=[host's hostname] and expose the 7001 port at a different host port, using -p 8001:7001 for example. The reason I do the port mapping is that I want to run multiple WLS containers on the same host.
I have some applications that I deploy on this WebLogic. These applications use an external SDK (which I don't control) to get the application url using JMX (getURL operation of RuntimeServiceMBean).
This is where it goes wrong. The URL comes out as http://[container's IP]:7001. I would want it to be http://[host's hostname]:8001, i.e. the hostname I used to start the container, and the port at which 7001 is mapped, i.e. 8001.
Is there a way this could be done?
When the container is started, you should start WebLogic only after adjusting the External Listen Address of your AdminServer. You can use WLST Offline for that from within a shell script: pass parameters with docker run -e KEY=VALUE, then read them from inside the WLST script. Modify your AdminServer External Listen Address, exit(), and then start the AdminServer.
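A minimal sketch of the docker run side, assuming the image is built so that an entrypoint script reads these variables and hands them to the WLST Offline script before starting the server (EXTERNAL_HOST, EXTERNAL_PORT and my-weblogic-image are all assumed names, not part of WebLogic):

docker run -d --hostname="$(hostname)" -p 8001:7001 -e EXTERNAL_HOST="$(hostname)" -e EXTERNAL_PORT=8001 my-weblogic-image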
Here's an example of how to create the extra Network Channel with the proper External Listen Address.