Setting up PageKite with my own frontend to access SSH - reverse-proxy

I'm trying to expose my SSH server through my own frontend using PageKite 0.5.6d on a Linux box.
This is the command for my frontend:
./pagekite.py --clean \
--isfrontend \
--ports=23456 \
--domain=raw:client1.bla.ch:toto
This is the command for my client:
./pagekite.py --clean \
--frontend=nn.nn.nn.nn:23456 \
--service_on=raw/22:client1.bla.ch:localhost:22:toto
When I launch the client, it gets rejected with this line:
REJECTED: raw-22:client1.bla.ch (port)
And on my frontend, a line like this appears:
Connecting to front-end x.x.x.x:x ...
What could be wrong in my config?

Thanks to BjarniRunar (one of the PageKite developers), adding the flag:
--rawports=virtual
does the trick. Unfortunately, it seems to be undocumented.
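For completeness, here is what the full frontend command looks like with that flag added (a sketch; the port, domain, and shared secret are the placeholders from the question):

```shell
./pagekite.py --clean \
  --isfrontend \
  --ports=23456 \
  --rawports=virtual \
  --domain=raw:client1.bla.ch:toto
```

The client command stays the same; only the frontend needs to know that raw ports are handled virtually.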

Related

WSO2 API Manager: Publish function does not work

Good afternoon! After I changed the IP address of my WSO2 API Manager service, I lost the ability to publish new APIs. The following error appears: [500]: Internal server error
Error while updating the API in Gateway cdbe7ae3-1aef-4f03-8a3f-f84f530248af
What I did before: I replaced all localhost values with the IP address of the host, in every parameter that is not commented out. First of all, I changed the value of [server]
hostname = "{hostname}". I did all this in the /repository/conf/deployment.toml file.
Please tell me how to solve the problem!
I also came to the conclusion that the IP address should replace localhost in all parameters in the wso2am-3.2.0\repository\deployment\server\synapse-configs\default\proxy-services\WorkflowCallbackService.xml file as well.
After that, I rebooted the server, but it didn't help.
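For reference, the hostname change described above looks roughly like this in repository/conf/deployment.toml (a sketch; 192.168.1.10 is a hypothetical host IP, not from the question):

```toml
[server]
hostname = "192.168.1.10"
```

The server must be restarted after editing deployment.toml for the change to take effect.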

Letsencrypt + Docker + Nginx

I am following this guide: https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html
to enable SSL on my app, which is running with Docker. The problem is that when I run the command below
docker run -it --rm \
-v certs:/etc/letsencrypt \
-v certs-data:/data/letsencrypt \
deliverous/certbot \
certonly \
--webroot --webroot-path=/data/letsencrypt \
-d api.mydomain.com
It throws an error:
Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw:
So can anyone please help me and let me know if I am missing something or doing something wrong?
What seems to be missing from that article, and possibly from your setup, is that the hostname api.mydomain.com needs a public DNS record pointing to the IP address of the machine on which the Nginx container is running.
The Let's Encrypt process is trying to access the file api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw. This file is put there by certbot. If api.mydomain.com does not resolve to the address of the machine on which you are running certbot, the process will fail.
You will also need ports 80 and 443 open for it to work.
Based on the available information, that is my best suggestion for where to start looking.
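As a quick way to confirm the DNS and port requirements above, you can run a couple of checks from any machine with network access (a sketch; api.mydomain.com is the placeholder from the question, and test-file is a hypothetical name):

```shell
# Does the hostname resolve, and to which IP?
dig +short api.mydomain.com

# Is port 80 reachable and is Nginx serving the challenge webroot?
curl -I http://api.mydomain.com/.well-known/acme-challenge/test-file
```

A 404 from the second command is fine here; it proves the server is reachable, and certbot places the real challenge file itself.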

Gitlab API add SSH-key

I'm having problems adding an SSH key to my GitLab server through the API (it works fine through the web page).
GitLab information:
I came across this issue (which was fixed here), which was related to a "wrong" OpenSSH implementation. They fixed it in milestone 7.10. Only thing... my server has OpenSSH 6.6 installed:
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.3, OpenSSL 1.0.1f 6 Jan 2014
Now, I don't know whether that fix is backwards compatible, but it may be good to mention.
Also, the logs show no warnings or errors whatsoever. The /tmp/gitlab_key* files are generated on the server:
The problem I'm facing is that GitLab can't create the fingerprint through the API. This is the response I get:
{
"message": {
"fingerprint": ["cannot be generated"]
}
}
So right now I have no idea what the problem could be. I've been struggling with this for almost a week now, so I really hope this problem can be fixed.
Just for the record, here's the script I'm using to add the SSH key through the API:
#!/bin/bash
jsonFile="jsonResponse"
# Log in and save the JSON session response
curl -s http://gitserver/api/v3/session --data 'login=****&password=****' > "$jsonFile"
# Extract the private token; jq -r strips the surrounding quotes
finalUserToken=$(jq -r '.private_token' "$jsonFile")
echo "user token: $finalUserToken"
# Below key is for testing; will use the output of cat ~/.ssh/id_rsa.pub later on
# sshKey="ssh-rsa AAAAB3N***** ****#***.com"
# curl --data "private_token=$finalUserToken&title=keyName&key=$sshKey" "http://gitserver/api/v3/user/keys"
rm "$jsonFile"
id_rsa.pub is a base64-encoded file, so it contains + characters.
An HTTP POST with application/x-www-form-urlencoded needs its content percent-encoded, to prevent + from being decoded as a space.
Try:
curl --data-urlencode "key=$key_pub" --data-urlencode "title=$hostname" \
"http://gitlabserver/api/v3/user/keys?private_token=$Token"
see: this
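To see concretely why the raw + breaks things, here is a small sketch (the key value is a hypothetical truncated key): percent-encoding the value is exactly what curl's --data-urlencode does internally.

```shell
# A '+' sent raw in application/x-www-form-urlencoded data is decoded as a
# space by the server, corrupting the key. Percent-encode the value instead.
key='ssh-rsa AAAA+B3N test@host'   # hypothetical truncated key
encoded=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" "$key")
echo "$encoded"   # ssh-rsa%20AAAA%2BB3N%20test%40host
```

After encoding, the server decodes %2B back to a literal +, so the fingerprint can be computed from an intact key.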
Improving on @Mathlight's answer, the following snippet uploads a public SSH key to gitlab.com:
curl -X POST -F "private_token=${GITLAB_TOKEN}" -F "title=$(hostname)" -F "key=$(cat ~/.ssh/id_rsa.pub)" "https://gitlab.com/api/v3/user/keys"
OP here.
In the meantime I've updated the server to version 8.8 and changed the curl code a bit, and now it's working like a charm:
curl -X POST -F "private_token=${userToken}" -F "title=${sshName}" -F "key=${sshKey}" "${gitServer}/user/keys"
Just in case anybody needs this in the future...

Zimbra ZCS 8.6 no dkim signature

I set up DKIM as described in the ZCS manual by running:
/opt/zimbra/libexec/zmdkimkeyutil -a -d rush.zone
Tested it:
/opt/zimbra/opendkim/sbin/opendkim-testkey -d rush.zone -s {Domain Selector} -x /opt/zimbra/conf/opendkim.conf
No errors so far. I restarted ZCS, and there is still no DKIM signature header in messages, neither from mail sent via webmail nor via SMTP.
Let me know where I should look to debug this issue.
Thanks!
Add 127.0.0.1 at the end of /opt/zimbra/conf/opendkim-localnets.conf.in and do:
$ su - zimbra
$ zmcontrol restart
This answer by Sergiu Bivol works GREAT.
$ nano /opt/zimbra/conf/opendkim-localnets.conf.in
Your file will look similar to:
%%zimbraMtaMyNetworksPerLine%%
127.0.0.1
Save the file and execute :
$ su zimbra
$ zmcontrol restart
I had the same problem. Two things you should check:
Check the Zimbra log /var/log/zimbra.log for opendkim messages.
Verify the DKIM private key in LDAP:
/opt/zimbra/libexec/zmdkimkeyutil -q -d {domain}

Invalid Registry Endpoint pushing docker image

I built a Docker container with Docker 1.0 and tried to push it to a private Docker registry backed by S3, but it gives me "invalid registry endpoint".
docker push loca.lhost:5000/company/appname
2014/06/20 12:50:07 Error: Invalid Registry endpoint: Get http://loca.lhost:5000/v1/_ping: read tcp 127.0.0.1:5000: connection reset by peer
The registry was started with settings similar to the example (adding the AWS region), and it does respond if I do telnet localhost 5000.
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=my-docker-images \
-e STORAGE_PATH=/registry \
-e AWS_KEY=AAAA \
-e AWS_SECRET=BBBBBBB \
-e AWS_REGION=eu-west-1 \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
registry &
s3 logging for the bucket:
8029384029384092830498 my-docker-images [16/Jun/2014:19:25:56 +0000] 123.123.123.127 arn:aws:iam::1234567890:user/docker-image-manager C9976333A1EFBB7A REST.GET.BUCKET - "GET /?prefix=registry/repositories/&delimiter=/ HTTP/1.1" 200 - 291 - 39 39 "-" "Boto/2.27.0 Python/2.7.6 Linux/3.8.0-42-generic" -
OK, it was due to me specifying AWS_REGION (eu-west-1), which caused the registry service to fail partway through startup.
With that removed, the registry server finishes initializing, starts listening on the port, and a curl request to the /_ping URL returns a response.
https://github.com/dotcloud/docker-registry/issues/400
I was able to retrieve enough console information to debug this by putting the settings in a config.yml file, setting loglevel to debug, and then having Docker run the registry image with that config file instead of passing everything on the command line as I did above.
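A config.yml along those lines might look like this (a sketch for the old v1 docker-registry image; the key names follow its S3 settings flavor, and the values are the placeholders from the question):

```yaml
# config.yml for the old (v1) docker-registry image
prod:
    loglevel: debug
    storage: s3
    s3_access_key: AAAA
    s3_secret_key: BBBBBBB
    s3_bucket: my-docker-images
    storage_path: /registry
    s3_region: eu-west-1
    search_backend: sqlalchemy
```

The file is then mounted into the container and selected via SETTINGS_FLAVOR; the exact mount path depends on the registry image version.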