cloudfoundry - vmc register error with external uri - authentication

I installed Cloud Foundry with the -D option to change the default domain. Cloud Foundry installs fine and starts, but when I try to vmc in I get an error:
swampfox@swampcf:~$ vmc target api.mydomain.com
Successfully targeted to [http://api.mydomain.com]
swampfox@swampcf:~$ vmc register --email emailid@gmail.com --passwd mypass
Creating New User: OK
Attempting login to [http://api.mydomain.com]
Problem with login to 'http://api.mydomain.com', target refused connection (getaddrinfo: Name or service not known), try again or register for an account.
swampfox@swampcf:~$ vmc register --email emailid@gmail.com --passwd mypass
Creating New User: Error 100: Bad request
Can someone help? I need to use an external URI, or this is useless for me.
This works fine if I take the default api.vcap.me, but then it only works on that VM and is not accessible from other infrastructure, which is pretty useless.

I have found the issue. There is a bug in vmc 0.3.21. I backed it down to vmc 0.3.18 and everything works now.
Whoof! How do I open a bug against vmc?
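If anyone needs the workaround, pinning vmc back to 0.3.18 is a plain RubyGems operation. A minimal sketch (the guard just keeps it harmless on a machine without Ruby; the version numbers come from the post above):

```shell
# Workaround sketch: pin vmc back to the known-good 0.3.18 release.
# Guarded so the script does nothing on a machine without RubyGems.
if command -v gem >/dev/null 2>&1; then
  # Remove any newer vmc versions, then install the known-good one
  gem uninstall -a -x vmc || true
  gem install vmc --version 0.3.18 || echo "install failed (no network?)"
  # Confirm which version ended up on the PATH
  command -v vmc >/dev/null 2>&1 && vmc -v || true
else
  echo "rubygems not installed"
fi
```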

When you tried api.vcap.me, did you do it by just changing the endpoint address in config/cloud_controller.yml? If so, it may be worth checking whether the setup set the endpoint correctly in all the other configuration files, uaa.yml especially in this case, since you are having issues with login.
I have always used the standard configuration (api.vcap.me) and then manually changed the endpoint in all the configuration files using sed, for example, from the config directory:
sed -i 's/\.vcap\.me/.newdomain.com/g' *.yml
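To illustrate, here is the same substitution run against a throwaway config file, followed by a check that no old-domain references survive. The file contents are invented for the demo; the real files live under cloudfoundry/.deployments/&lt;name&gt;/config:

```shell
# Demo of the domain rewrite on a throwaway config file.
tmpdir=$(mktemp -d)
cat > "$tmpdir/cloud_controller.yml" <<'EOF'
external_uri: api.vcap.me
uaa_uri: uaa.vcap.me
EOF

# The substitution from the answer, applied to every yml in the directory
sed -i 's/\.vcap\.me/.newdomain.com/g' "$tmpdir"/*.yml

# Verify nothing still points at the old domain
if grep -q 'vcap\.me' "$tmpdir"/*.yml; then
  echo "old domain still present"
else
  echo "all references rewritten"
fi
```

Running the same grep over the real config directory afterwards is a quick way to confirm every file (uaa.yml included) was actually updated.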

Actually I initially installed with the default api.vcap.me. Then I wiped out the guest and completely reinstalled with -D mydomain.com. I have subsequently installed another CF on a different guest with api.vcap.me for comparison.
I checked the config /home/cfadmin/cloudfoundry/.deployments/devbox/config/uaa.yml and there is no reference to api.swampnet.com or api.vcap.me.
Just a quick note: I can successfully log in from an external domain like emc.com, but I cannot log in on the local machine or a machine in the same subnet. Whoof!
I noticed that the controller had external uris set to false, so I set them to true, but that made no difference. If I set them to true on the api.vcap.me instance, will that allow me to push an app with an external URI?

Related

How to overwrite the api proxy deployment using apigeetool

I am using the below command in Jenkins to deploy the API proxies to Apigee Edge.
apigeetool deployproxy -u abc -o nonprod -e dev -n poc-jenkins1 -p xyz
But I am getting the below error.
Error: Path /poc-deployment-automation conflicts with existing deployment path for revision 1 of the APIProxy poc-deploy-automation in organization nonprod, environment dev
Here is my requirement; please tell me what command to use.
If the API doesn't exist in the target environment, create the API in the new environment with version 1.
If the API already exists in the target environment, create the API in the new environment with a new version (previous version + 1).
So what command should we use to fix the above error, and what should we use to do the above two tasks?
Help appreciated.
The apigeetool deployproxy command supports your requirements by default: it deploys revision 1 if no proxy with that name exists, and increments the revision if it already exists.
However, based on the error you mentioned, it seems that you have a path conflict between two proxies. You are trying to deploy a proxy to a /poc-deployment-automation basepath, but there is another proxy called poc-deploy-automation which is listening on the same basepath. It is not possible, even if the proxy name is different, because the basepath is what apigee uses to redirect traffic to your proxy.
Check the xml file at the root of your proxy and change the basepath attribute.
Also, the basepath of an API proxy can be anything, but it cannot be used at the same time by two proxies; only one of them can be deployed at a time. The revision numbers are irrelevant in this situation.
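To make the fix concrete, here is a sketch that locates the basepath inside a proxy bundle and rewrites it. The directory layout mimics a typical apiproxy bundle, but the file contents and basepath values are invented for the demo; adjust them to your proxy:

```shell
# Sketch: find and change the conflicting basepath inside a proxy bundle.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/apiproxy/proxies"
cat > "$tmpdir/apiproxy/proxies/default.xml" <<'EOF'
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/poc-deployment-automation</BasePath>
  </HTTPProxyConnection>
</ProxyEndpoint>
EOF

# List every basepath declared in the bundle
grep -rn '<BasePath>' "$tmpdir/apiproxy"

# Move the proxy onto a basepath no other deployed proxy is using
sed -i 's|<BasePath>/poc-deployment-automation</BasePath>|<BasePath>/poc-jenkins1</BasePath>|' \
  "$tmpdir/apiproxy/proxies/default.xml"
grep '<BasePath>' "$tmpdir/apiproxy/proxies/default.xml"
```

After the edit, redeploying with apigeetool deployproxy should no longer hit the path conflict, since the basepath no longer collides with the other proxy.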

ActiveMQ 5.15 HTTP ERROR: 503

Run environment: Linux (CentOS 7), JDK 1.8, and ActiveMQ 5.15
I started ActiveMQ and then visited the management page with Chrome. When I try to log in with the default username and password I get the following error:
HTTP ERROR: 503
Problem accessing /admin/. Reason:
Service Unavailable Powered by Jetty://
How can I resolve this problem?
I was getting this same error. It turns out that I had run it as root user originally, then later I stopped it and ran it as a non-root user. Certain data files that had been created and owned by the original root instance were not accessible to the non-root user.
Check the ownership of the files, and change them if necessary to match the user that the broker is running as.
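A small sketch of that check, as a shell function you can point at your broker's data directory. The demo runs it against a throwaway directory; the chown line in the comment is the actual fix and needs root:

```shell
# check_ownership DIR USER prints every file under DIR not owned by USER.
check_ownership() {
  find "$1" ! -user "$2"
}

# Demo against a throwaway directory; point it at your broker's data/ dir.
tmpdir=$(mktemp -d)
touch "$tmpdir/kahadb.lock"
check_ownership "$tmpdir" "$(whoami)"   # prints nothing: ownership matches

# If files are listed, re-own them to the broker user (needs root), e.g.:
# sudo chown -R "$(whoami)": /opt/apache-activemq-5.15.0/data
```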
Had the same issue.
Maybe something went wrong with the extraction of the package.
I downloaded this:
wget https://archive.apache.org/dist/activemq/5.15.0/apache-activemq-5.15.0-bin.tar.gz
and extracted it with:
sudo tar -zxvf apache-activemq-5.15.0-bin.tar.gz -C /opt
then it worked for me.
My two cents:
I started with the activemq package in the Ubuntu repo, but later changed to the binary package from the official website.
In my case, the repo version left behind an /etc/default/activemq config file, which runs activemq as the user "activemq". It turned out that in earlier experiments I had not killed the old processes running as "activemq" when I started activemq under my own user name. There were two activemq processes running under different user names, and when I connected to the admin console I got a 503.
I deleted the /etc/default/activemq file, killed all activemq processes running as "activemq", then restarted activemq under my user name, and the 503 was gone.
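A quick way to spot this situation is to list every broker process together with the user it runs as. A sketch (the pkill line in the comment is the cleanup step and needs root):

```shell
# List broker processes with their owning user; the bracket trick keeps
# grep from matching its own process in the listing.
ps -eo user,pid,args | grep '[a]ctivemq' || echo "no activemq processes found"

# If two different users show up, kill the service-user leftovers before
# restarting under your own account (needs root):
# sudo pkill -u activemq -f activemq
```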

Expose service in OpenShift Origin Server - router is not working

Our team decided to try using OpenShift Origin server to deploy services.
We have a separate VM with OpenShift Origin server installed and working fine. I was able to deploy our local docker images and those services are running fine as well - Pods are up and running, get their own IP and I can reach services endpoints from VM.
The issue is that I can't get the services exposed outside the machine. I read about routers, which are supposed to be the right way of exposing services, but I just can't get it to work. Now some details.
Let's say my VM is 10.48.1.1. The Pod with the docker container running one of my services is reachable at 172.30.67.15:
~$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-svc 172.30.67.15 <none> 8182/TCP 4h
The service is simple Spring Boot app with REST endpoint exposed at port 8182.
When I call it from the VM hosting it, it works just fine:
$ curl -H "Content-Type: application/json" http://172.30.67.15:8182/home
{"valid":true}
Now I wanted to expose it outside, so I created a router:
oc adm router my-svc --ports='8182'
I followed the steps from OpenShift dev doc both from CLI and Console UI. The router gets created fine, but then when I want to check its status, I get this:
$ oc status
In project sample on server https://10.48.3.161:8443
...
Errors:
* route/my-svc is routing traffic to svc/my-svc, but either the administrator has not installed a router or the router is not selecting this route.
I couldn't find anything about this error that could help me solve the issue. Has anyone had a similar issue? Is there any other (better/proper?) way of exposing a service endpoint? I am new to OpenShift, so any suggestions would be appreciated.
If anyone interested, I finally found the "solution".
The issue was that no "router" service had been created; I didn't know it had to be created.
Step by step: in order to create this service I followed the instructions from the OpenShift doc page, which were pretty easy, but I couldn't log in using the admin account.
I used default admin account
$ oc login -u system:admin
But instead of using the available certificate, it kept asking me for a password, which it shouldn't have. What was wrong? My env variables had been reset, and I had to set them again:
$ export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
$ export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
$ sudo chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
This was one of the first steps described in the OpenShift docs. After that the certificate is picked up correctly and login works as expected. As admin I created the router service (1st link) and the route started working; no more errors.
So in the end it turned out to be pretty simple, but given that I had no experience with OpenShift, it was hard to find out what was going on. Hope this helps if someone hits the same issue.
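For reference, the router creation steps can be sketched as below. The service-account name and flags follow the Origin-era documentation, but check the docs for your version; the guard keeps the script a no-op on a machine without the oc CLI:

```shell
# Sketch of creating the default router, run as system:admin on the master.
# Names and flags follow Origin-era docs; verify against your version.
if command -v oc >/dev/null 2>&1; then
  oc login -u system:admin
  # The router runs under its own service account with host-network access
  oc create serviceaccount router -n default
  oc adm policy add-scc-to-user hostnetwork -z router -n default
  oc adm router router --replicas=1 --service-account=router -n default
  # The earlier "not selecting this route" error should now be gone
  oc status
else
  echo "oc CLI not found; run these steps on the OpenShift master"
fi
```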

Unable to access glassfish served content when using localhost

I created this simple dynamic web project (glassfish 4.1.1 latest atm) using eclipse java ee Mars.2 that I installed 2 days ago.
Checking in the admin console, the app is deployed and running fine. I could not access the web app using the localhost:8080 URL, but it works when I use <computername>:8080.
I could access the admin using localhost:4848.
I tried disabling the firewall but the problem persists. What could be the problem?
The error is:
404 Not Found
No context found for request
In eclipse I see a log in the console that says: Automatic timeout occurred
As I pointed out in the comments, you can configure listeners in Configuration -> <needed configuration> -> Network Config -> Network Listeners. However, it is still rather strange that localhost doesn't work with the 0.0.0.0 address, since that is a special address meaning "listen on all available IPs on the given port". Perhaps your network is somehow misconfigured.
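Two things worth checking from the command line: that localhost actually resolves, and what address the HTTP listener is bound to. A sketch, assuming the default server-config and the listener name http-listener-1 (the asadmin part is guarded so it only runs where GlassFish is installed):

```shell
# Rule out name resolution first: localhost should map to 127.0.0.1 or ::1
getent hosts localhost

# Then inspect/fix the HTTP listener binding; the dotted names assume the
# default server-config and listener http-listener-1.
if command -v asadmin >/dev/null 2>&1; then
  asadmin list-network-listeners
  # Bind the listener to all interfaces so localhost works too
  asadmin set server-config.network-config.network-listeners.network-listener.http-listener-1.address=0.0.0.0
  asadmin restart-domain
else
  echo "asadmin not on PATH; run it from GLASSFISH_HOME/bin"
fi
```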

Hawtio executable jar always "Failed to log in, Forbidden"

I'm trying to put hawtio-1.4.11 to work, but failing. I'm using the simplest configuration.
On the same host I have activemq-5.9.0 (clean, no configs), and I just run java -jar hawtio-app-1.4.11.jar.
I've tested the logins to activemq (both old console and hawtio) and it was working ok.
Then I accessed hawtio
http://my_ip_address:8080/hawtio
and filled in the form, and I was redirected to the login page. Then when I click login, I get "Failed to log in, Forbidden".
I could not see any log messages that would give me a hint.
Thanks in advance for any help.
Update:
I did the following test:
wget --user admin --password admin --auth-no-challenge http://localhost:8161/hawtio/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/TotalConsumerCount
And it worked (HTTP 200); with a wrong password I get HTTP 403. So it seems to be working as it is supposed to, but I still cannot get standalone hawtio to connect.
When I try to login using hawtio, the only log messages are those (regardless if I used the correct password or not):
2014-07-17 19:08:47,342 | DEBUG | Handling request for path /auth/login | io.hawt.web.AuthenticationFilter | qtp962581073-40
2014-07-17 19:08:47,342 | DEBUG | Doing authentication and authorization for path /auth/login | io.hawt.web.AuthenticationFilter | qtp962581073-40
BTW, I've tried a lot of different setups, including war version in jboss-4.2.3 but all failed too.
See this blog entry on how to install hawtio in ActiveMQ as the web console:
http://sensatic.net/activemq/activemq-and-hawtio.html
It also explains how to set up the security part, which is likely your problem.
I was able to log in to the activemq console, but not to hawtio.
In my case I found that:
the activemq console credentials are read from conf/jetty-realm.properties
the hawtio credentials are read from conf/users.properties and conf/groups.properties
In users.properties the password cannot contain certain characters, in my case the euro sign €
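For illustration, here is the shape of the two files hawtio reads on the ActiveMQ side, written to a throwaway directory with an ASCII-only password. The user and password are invented; the real files live in the broker's conf/ directory:

```shell
# Sketch of users.properties and groups.properties with an ASCII-only
# password (the answer found characters like the euro sign break login).
tmpdir=$(mktemp -d)
cat > "$tmpdir/users.properties" <<'EOF'
# username=password  (avoid non-ASCII characters such as the euro sign)
admin=secretPass1
EOF
cat > "$tmpdir/groups.properties" <<'EOF'
# group=user1,user2
admins=admin
EOF
cat "$tmpdir/users.properties" "$tmpdir/groups.properties"
```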
I have had a similar issue: I always get a hawtio (1.4.45) styled login page when I try to connect remotely to the ActiveMQ jolokia API (ActiveMQ 5.10.1).
The reason was that the URL path configured in hawtio that points to the ActiveMQ jolokia API must end with a /!
For example: /api/jolokia/
On a Unix machine, I fixed this by changing the order of the configuration paths passed to the configuration scripts: I added /bin/env as the first statement in the /bin/activemq script under # CONFIGURATION # For using instances,
since the script ignores all but the first, as stated in the Unix configuration docs.
Happy coding!