WSO2 API Manager: Publish function does not work

Good afternoon! After I changed the IP address of my WSO2 API Manager server, I lost the ability to publish new APIs. This error appears: [500]: Internal server error
Error while updating the API in Gateway cdbe7ae3-1aef-4f03-8a3f-f84f530248af
What I did before: I replaced all localhost values with the IP address of the host, in every parameter that is not commented out. First of all, I changed the value of [server]
hostname = "{hostname}". I did all of this in the /repository/conf/deployment.toml file.
Please tell me how to solve the problem!
I also concluded on my own that localhost should be replaced with the IP address in all parameters of the wso2am-3.2.0\repository\deployment\server\synapse-configs\default\proxy-services\WorkflowCallbackService.xml file.
After that, I rebooted the server, but it didn't help.
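For reference, these are the deployment.toml entries that usually carry the hostname in APIM 3.2.0. The [[apim.gateway.environment]] block below is an assumption based on the default single-node configuration (192.168.1.10 is an example address); a gateway endpoint still pointing at the old address is a common cause of the "Error while updating the API in Gateway" message:

[server]
hostname = "192.168.1.10"

# Assumed default gateway environment block; verify these keys and ports against your own file.
[[apim.gateway.environment]]
name = "Default"
type = "hybrid"
service_url = "https://192.168.1.10:9443/services/"
ws_endpoint = "ws://192.168.1.10:9099"
wss_endpoint = "wss://192.168.1.10:8099"
http_endpoint = "http://192.168.1.10:8280"
https_endpoint = "https://192.168.1.10:8243"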

Related

Cannot create VHost on RabbitMQ Management HTTP API

I've done some research on how to create a vhost from the HTTP API on RabbitMQ. I am brand new to RabbitMQ, so I just need some basic guidance. I've seen that to add a vhost you go to the admin page, where there is an option for virtual hosts. However, when I am in mine, I do not have that option. I don't know if there is something else I need to install or enable, but I can't seem to find anyone else with this type of issue.
You might not have the right permissions to create a vhost. Check your permissions and make sure you are logged in with a user that has the administrator tag according to this: https://www.rabbitmq.com/management.html#:~:text=and%20credential%20management.-,Tag,-Capabilities
The HTTP API documentation has an example of how to create a vhost:
$ curl -i -u USER:PASSWORD -H "content-type:application/json" \
-XPUT http://localhost:15672/api/vhosts/foo
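To verify afterwards, you can list all vhosts through the same API with the same credentials; the new vhost should appear in the JSON response:
$ curl -i -u USER:PASSWORD http://localhost:15672/api/vhosts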

Letsencrypt + Docker + Nginx

I am referring to this link: https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html
to enable SSL on my app, which is running with Docker. The problem is that when I run the command below:
docker run -it --rm \
-v certs:/etc/letsencrypt \
-v certs-data:/data/letsencrypt \
deliverous/certbot \
certonly \
--webroot --webroot-path=/data/letsencrypt \
-d api.mydomain.com
It throws an error:
Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw:
So can anyone please help me and let me know if I am missing something or doing something wrong?
What seems to be missing from that article and possibly from your setup is that the hostname api.mydomain.com needs to have a public DNS record pointing to the IP address of the machine on which the Nginx container is running.
The Let's Encrypt process is trying to access the file api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw. This file is put there by certbot. If the address api.mydomain.com does not resolve to the address of the machine from which you are running certbot, then the process will fail.
You will also need to have ports 80 and 443 open for it to work.
Based on the available info that is my best suggestion on where you can start looking to resolve the issue.
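Two quick checks along those lines (a sketch; the second assumes a shell where the certs-data volume is mounted at /data/letsencrypt, e.g. inside the Nginx container):

# 1. The DNS record should resolve to this machine's public IP
dig +short api.mydomain.com

# 2. Nginx should serve files from the challenge webroot
mkdir -p /data/letsencrypt/.well-known/acme-challenge
echo ok > /data/letsencrypt/.well-known/acme-challenge/test
curl http://api.mydomain.com/.well-known/acme-challenge/test   # expect "ok"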

Docker: how to force graylog web interface over https?

I'm currently struggling to get graylog working over https in a docker environment. I'm using the jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
--link mongo-prod:mongo \
--link elastic-prod:elasticsearch \
-e VIRTUAL_PORT=9000 \
-e VIRTUAL_HOST=test.myserver.com \
-e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
-e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
-e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
-d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message
    Bad request
Original Request
    GET http://test.myserver.com/api/system/sessions
Status code
    undefined
Full error message
    Error: Request has been terminated
    Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get a reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without https.
Nothing is mentioned in the docker log of the Graylog container.
docker ps:
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"   5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running docker with the option -p 9000:9000, everything works fine without https, but as soon as I force it to go over https I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!
Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" ?
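In other words, the same run command with only the endpoint scheme switched to https (a sketch; every other flag as in the question):

# identical to the original command except for the https:// scheme in GRAYLOG_WEB_ENDPOINT_URI
docker run --name=graylog-prod \
--link mongo-prod:mongo \
--link elastic-prod:elasticsearch \
-e VIRTUAL_PORT=9000 \
-e VIRTUAL_HOST=test.myserver.com \
-e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
-e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
-e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
-d graylog2/server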

SSH tunnel not working - empty server response

I am trying to setup an SSH tunnel but I am new to this process. This is my setup:
Machine B has a web service with restricted access. Machine A has been granted access to Machine B's service, based on a firewall IP whitelist.
I can connect to Machine A using an ssh connection. After that I try to access the webservice on Machine B from my localhost, but I cannot.
The webservice endpoint looks like this:
service.test.organization.com:443/org/v1/sendData
So far, I have created an ssh tunnel like this:
ssh -L 1234:service.test.organization.com:443 myuser@machineb.com
My understanding was that using this approach, I could hit localhost:1234 and it would be forwarded to service.test.organization.com:443, through Machine B.
I have confirmed that from Machine B, I can execute a curl command to send a message to the webservice, and I get a response (so that is working). I have tried using Postman in my browser, and curl in the terminal from localhost, but I have been unsuccessful. (curl -X POST -d @test.xml localhost:1234/org/v1/sendData)
Error message: curl: (52) Empty reply from server
There's a lot of material on SSH and I am sifting through it, but if anyone has any pointers, I would really appreciate it!
Try adding a Host HTTP header: curl -H "Host: service.test.organization.com" -X POST -d @test.xml http://localhost:1234/org/v1/sendData
The networking issue was caused by the request format. My request object was built with a destination of 'localhost:1234'. So even though it was reaching the proper machine, the machine ignored it.
To solve this I added a record to my hosts file, like this:
127.0.0.1    service.test.organization.com
Then I was able to send the message. First I opened the tunnel:
ssh -L 443:service.test.organization.com:443 myuser@machineb.com
Then I used this curl command: curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData
The hosts file causes the address to resolve to localhost, and the ssh tunnel then forwards it on.
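As an aside, curl's --resolve option achieves the same pinning without editing the hosts file (a sketch mirroring the command above; --resolve maps that hostname and port to 127.0.0.1 for the one request):

# --resolve HOST:PORT:ADDRESS overrides DNS for this request only
curl --resolve service.test.organization.com:443:127.0.0.1 \
-X POST -d @test.xml service.test.organization.com:443/org/v1/sendData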

setting up pagekite with my own frontend to access ssh

I'm trying to expose my SSH server through my own frontend using Pagekite 0.5.6d on a Linux box.
This is the line for my frontend:
./pagekite.py --clean \
--isfrontend \
--ports=23456 \
--domain=raw:client1.bla.ch:toto
This is the line for my client:
./pagekite.py --clean \
--frontend=nn.nn.nn.nn:23456 \
--service_on=raw/22:client1.bla.ch:localhost:22:toto
When I launch the client, I get rejected with this line:
REJECTED: raw-22:client1.bla.ch (port)
And on my frontend, a line like this appears:
Connecting to front-end x.x.x.x:x ...
What could be wrong in my config?
Thanks to BjarniRunar (one of the Pagekite developers): adding the flag
--rawports=virtual
does the trick. Unfortunately, it seems to be undocumented.
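For completeness, this flag presumably belongs on the frontend invocation (an assumption; it simply merges the fix into the frontend command from the question):

# --rawports=virtual is the undocumented flag mentioned above; its placement on the frontend is assumed
./pagekite.py --clean \
--isfrontend \
--ports=23456 \
--rawports=virtual \
--domain=raw:client1.bla.ch:toto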