I am building a RESTful API client using the .NET Core framework. The APIs are the OpenStack APIs; however, because of the network configuration, I cannot access them from my local (development) computer. When accessing OpenStack normally, I have to SSH into a machine that can in turn reach the OpenStack infrastructure.
Bearing this in mind, is it possible to put the API endpoints behind an SSH tunnel and then call them from the implemented Web API client? I have tried to do this, but the call to the endpoint returns error 401 - content length is required.
It is basically possible to call the OpenStack API endpoints through an SSH tunnel without any publicly accessible API endpoints. Because I have no experience with the .NET Core framework, this answer is generic and contains no C# code. I hope it helps you anyway.
IMPORTANT: You can use the following steps only when you have an admin login to the OpenStack deployment, and you should ONLY use this approach when the deployment is a test deployment, where breaking it doesn't affect other users.
1. SSH-tunnel
You can forward ports with the command:
ssh -L 127.0.0.1:<PORT>:<IP_REMOTE>:<PORT> <USER_JUMPHOST>@<IP_JUMPHOST> -fN
<PORT> = port of the OpenStack component you want to access remotely (for example 5000 for Keystone)
<IP_REMOTE> = IP of the host where your OpenStack deployment is running
<USER_JUMPHOST>@<IP_JUMPHOST> = SSH login for the jump host that sits between you and your OpenStack deployment
This has to be done for each OpenStack component. If you don't want the command to run in the background, remove the -fN at the end.
You have to forward Keystone (port 5000) first.
Example: ssh -L 127.0.0.1:5000:192.168.62.1:5000 deployer@192.168.67.1 -fN
You can test the access via curl or a web browser from your local PC:
curl http://127.0.0.1:5000
{"versions": {"values": [{"id": "v3.13", "status": "stable", "updated": "2019-07-19T00:00:00Z", "links": [{"rel": "self", "href": "http://127.0.0.1:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
2. Change the OpenStack endpoints
To be able to log in to the OpenStack deployment through the tunnel as well, you have to change the endpoints so they also listen on localhost on the remote system where your OpenStack deployment runs:
Log in normally to your OpenStack deployment as the admin user.
List all endpoints: openstack endpoint list
Change the public and internal endpoints of Keystone to localhost:
openstack endpoint set --url http://127.0.0.1:5000 <ID_OF_INTERNAL_KEYSTONE_ENDPOINT>
Changing the internal endpoint will break the OpenStack login on the remote system for now, but don't worry.
Now you can log in to OpenStack via the openstack client from your local PC; here you have to authenticate against localhost. If you use an rc file to log in, you have to change the auth URL to: export OS_AUTH_URL=http://127.0.0.1:5000/v3
To run commands like openstack server list through your SSH tunnel, also change the Nova endpoints: on your local PC, run openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID> for both the internal and the public endpoint of Nova (of course you also need an SSH tunnel for port 8774).
3. Authorize against the OpenStack deployment
When you send HTTP requests without the openstack client, you have to manually request an authentication token from the deployment:
Log in normally to your OpenStack deployment.
Make a token request:
curl -v -s -X POST "$OS_AUTH_URL/auth/tokens?nocatalog" -H "Content-Type: application/json" -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name": "'"$OS_PROJECT_NAME"'" } } }}' --stderr - | grep X-Subject-Token
This command can be used unchanged. The value after the key X-Subject-Token is the Keystone token. Copy this value and export it as the environment variable OS_TOKEN, for example:
export OS_TOKEN=gAAAAABZuj0GZ6g05tKJ0hvihAKXNJgzfoT4TSCgR7cgWaKvIvbD66StJK6cS3FqzR2DosmqofnR_N-HztJXUcVhwF04HQsY9CBqQC7pblGnNIDWCXxnJiCH_jc4W-uMPNA6FBK9TT27vE5q5AIa487GcLLkeJxdchXiDJvw6wHty680eJx3kL4
Make requests with the token, for example GET requests with curl:
curl -s -X GET -H "X-Auth-Token: $OS_TOKEN" http://127.0.0.1:5000/v3/users | python -m json.tool
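For completeness, the token request and an authenticated call can also be sketched in Python with just the standard library. This mirrors the curl calls above; the URL, credentials, and domain names below are placeholders you would replace with your own.

```python
import json
import urllib.request

def build_auth_payload(username, password, user_domain,
                       project_name, project_domain):
    """Build the same Keystone v3 password-auth body as the curl example."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "domain": {"name": user_domain},
                        "name": username,
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "domain": {"name": project_domain},
                    "name": project_name,
                }
            },
        }
    }

def request_token(auth_url, payload):
    """POST the payload; Keystone returns the token in the X-Subject-Token header."""
    req = urllib.request.Request(
        f"{auth_url}/auth/tokens?nocatalog",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"]

# Placeholder credentials -- substitute your own.
payload = build_auth_payload("admin", "secret", "Default", "admin", "Default")
# token = request_token("http://127.0.0.1:5000/v3", payload)  # needs the tunnel up
```

The token returned by `request_token` can then be sent in the X-Auth-Token header exactly as in the curl example above.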
Related
I have a JFrog OSS Docker container running behind Nginx, with DNS attached to it. I want to create new (local) repositories using the REST API (curl commands). Since it's the free version, is it possible to create local repositories using curl commands in JFrog?
curl -s -uadmin:password -X PUT -H 'Content-Type: application/json' https://devops.com/artifactory/libs-release-local/path/to/directory/ -d '
[ {
  "key" : "example-repo-local",
  "description" : "artifactory repository",
  "type" : "LOCAL",
  "url" : "https://devops.com/artifactory/example-repo-local",
  "packageType" : "Generic"
}'
Creating repositories via REST API requires Artifactory Pro.
Also, the URL and the JSON configuration seem wrong.
The URL the request is sent to should not include a path, but only a repository.
The "type" key should be "rclass", and the "url" key is only for remote repositories.
See documentation for more details.
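To make those fixes concrete, here is a hedged Python sketch of what a corrected request could look like. The host, credentials, and repository key are placeholders, and creating repositories via this endpoint still requires Artifactory Pro.

```python
import base64
import json
import urllib.request

# Hypothetical values -- replace with your own host and repository key.
BASE_URL = "https://devops.com/artifactory"
REPO_KEY = "example-repo-local"

# Corrected repository configuration: "rclass" instead of "type",
# and no "url" key (that is only for remote repositories).
config = {
    "key": REPO_KEY,
    "rclass": "local",
    "packageType": "generic",
    "description": "artifactory repository",
}

def create_repo(user, password):
    """PUT the config to /api/repositories/<key> (no path inside the repo)."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/repositories/{REPO_KEY}",
        data=json.dumps(config).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic "
            + base64.b64encode(f"{user}:{password}".encode()).decode(),
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling `create_repo("admin", "password")` would issue the corrected PUT request against a Pro instance.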
I have set up parse-server and parse-dashboard on a dedicated instance on Alibaba Cloud with 2 vCPUs and 8 GB of RAM, with ApsaraDB for MongoDB as the database for Parse.
I successfully set up the dashboard and server. When I try to access the server, I get the following error:
"Unable to connect to server."
[Parse Dashboard error screenshot]
I am successfully able to make POST and GET requests like the following:
curl -X POST \
-H "X-Parse-Application-Id: APPLICATION_ID" \
-H "Content-Type: application/json" \
-d '{"score":1337,"playerName":"Sean Plott","cheatMode":false}' \
http://localhost:1337/parse/classes/GameScore
//Response
{
"objectId": "2ntvSpRGIK",
"createdAt": "2016-03-11T23:51:48.050Z"
}
I am able to connect via PuTTY and FTP.
Thanks
While your computer (that is, other applications on it) is reaching the server, the Parse dashboard isn't.
Inside your dashboard config, you can try changing
http://localhost:1337/parse
to
http://[ip-address]:1337/parse
Also go through this thread; you might get some more insight into why it is not working:
https://github.com/parse-community/parse-dashboard/issues/785
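For reference, the relevant part of a parse-dashboard config (commonly parse-dashboard-config.json) would then look something like this sketch; the app ID, keys, and app name are placeholders:

```json
{
  "apps": [
    {
      "serverURL": "http://[ip-address]:1337/parse",
      "appId": "APPLICATION_ID",
      "masterKey": "MASTER_KEY",
      "appName": "MyApp"
    }
  ]
}
```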
I have a problem creating a Telegram bot with a webhook.
So far I have done this:
I have a cloud server with IP address xxx.xxx.xxx.xxx, without a domain name
I created a certificate in a JKS file inside a Java application (I followed the instructions from https://core.telegram.org/bots/self-signed)
My certificate is self-signed, and I use the IP address xxx.xxx.xxx.xxx as my CN
I exported the public key certificate to use later in the 'setWebhook' command parameter
I execute this command: curl -F "url=https://xxx.xxx.xxx.xxx" -F "certificate=@my-pem-file.pem" https://api.telegram.org/botXXX:XXX/setWebhook
I run my bot engine
I try to call the URL in a browser (https://xxx.xxx.xxx.xxx, GET method), and it works fine after the browser 'adds an exception' for my self-signed certificate
'Works fine' means the browser recognizes the public certificate and displays the correct response as I developed it
I try to follow the test script from https://core.telegram.org/bots/webhooks#testing-your-bot-with-updates, for example:
curl --tlsv1 -v -k -X POST -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"update_id":10000,"message":{ "date":1441645532,"chat":{"last_name":"Test Lastname","id":1111111,"first_name":"Test","username":"Test"},"message_id":1365,"from":{"last_name":"Test Lastname","id":1111111,"first_name":"Test","username":"Test"},"text":"/start"}}' "https://xxx.xxx.xxx.xxx"
It works fine.
I am sure the 'url' and 'certificate' parameters of 'setWebhook' are fine, because when I call the 'getWebhookInfo' API, Telegram replies:
{
"ok": true,
"result": {
"url": "https://xxx.xxx.xxx.xxx",
"has_custom_certificate": true,
"pending_update_count": 25,
"last_error_date": 1484557151,
"last_error_message": "Connection timed out",
"max_connections": 40
}
}
I try to send a message to my bot, but nothing shows up in my internal application log, and when I call the 'getWebhookInfo' API it always shows the same "Connection timed out" message.
What should I do about my certificate?
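Before suspecting the certificate, it can help to confirm that the bot engine parses updates at all. The following Python sketch replays the sample update from Telegram's webhook-testing docs against a minimal local stand-in for a webhook handler; the handler is an illustrative assumption, not your actual engine.

```python
import http.server
import json
import threading
import urllib.request

received = {}

class WebhookHandler(http.server.BaseHTTPRequestHandler):
    """Minimal stand-in for a bot engine's webhook endpoint."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.update(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), WebhookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same sample update Telegram's docs use for webhook testing.
update = {
    "update_id": 10000,
    "message": {
        "date": 1441645532,
        "chat": {"last_name": "Test Lastname", "id": 1111111,
                 "first_name": "Test", "username": "Test"},
        "message_id": 1365,
        "from": {"last_name": "Test Lastname", "id": 1111111,
                 "first_name": "Test", "username": "Test"},
        "text": "/start",
    },
}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/",
    data=json.dumps(update).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
status = urllib.request.urlopen(req).status
server.shutdown()
```

If a replay like this reaches your real engine locally but Telegram still reports timeouts, the problem is in the network path or TLS setup rather than in the update handling.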
I had the same problem last week: webhooks worked and then stopped (timeout error). It turned out to be caused by Russia's blocking of Telegram. My server was not in Russia but in Holland; after changing servers, everything worked.
I'm running CoreOS to host some Docker containers, and I want to block SSH access from the containers to the CoreOS host, which I think is a reasonable security measure.
I've tried to restrict this access using the /etc/hosts.deny file and using iptables. The problem with these approaches is that both need the containers' IP ranges to be specified explicitly, and I can't find a guaranteed way to determine them automatically on all new hosts.
As the Docker documentation describes, the default network definition for the bridge interface is:
$ sudo docker network inspect bridge
[
{
"name": "bridge",
"id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
"driver": "bridge",
"containers": {
"bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
"endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
"mac_address": "02:42:ac:11:00:03",
"ipv4_address": "172.17.0.3/16",
"ipv6_address": ""
},
"f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
"endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
"mac_address": "02:42:ac:11:00:02",
"ipv4_address": "172.17.0.2/16",
"ipv6_address": ""
}
}
}
]
The Docker documentation also says it uses 172.17.42.1/16 for the docker0 interface when available (this is also configurable via the -b option). But I see some containers with IPs like 10.1.41.2, which makes it difficult to block them using a hard-coded IP range.
I know about the --icc option, which affects inter-container communication on the daemon; I want to find a similarly clean way to restrict SSH access from containers to their host.
Thank you.
You could use iptables to set a firewall rule preventing this.
iptables -A INPUT -i docker0 -p tcp --destination-port 22 -j DROP
(To be run on the host)
I am trying to set up an SSH tunnel, but I am new to this process. This is my setup:
Machine B has a web service with restricted access. Machine A has been granted access to Machine B's service, based on a firewall IP whitelist.
I can connect to Machine A over an SSH connection. After that, I try to access the web service on Machine B from my localhost, but I cannot.
The webservice endpoint looks like this:
service.test.organization.com:443/org/v1/sendData
So far, I have created an ssh tunnel like this:
ssh -L 1234:service.test.organization.com:443 myuser@machineb.com
My understanding was that with this approach, I could hit localhost:1234 and it would be forwarded to service.test.organization.com:443, through Machine B.
I have confirmed that from Machine B, I can execute a curl command to send a message to the web service and get a response (so that part works). I have tried Postman in my browser and curl in a terminal from localhost, but I have been unsuccessful. (curl -X POST -d @test.xml localhost:1234/org/v1/sendData)
Error message: curl: (52) Empty reply from server
There's a lot of material on SSH and I am sifting through it, but if anyone has any pointers, I would really appreciate it!
Try adding a Host HTTP header: curl -H "Host: service.test.organization.com" -X POST -d @test.xml http://localhost:1234/org/v1/sendData
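Why the Host header matters can be shown with a self-contained Python sketch: a tiny local server that routes on the Host header stands in for a name-based virtual host reached through a tunnel. The server and names here are illustrative assumptions, not the actual service.

```python
import http.server
import threading
import urllib.error
import urllib.request

EXPECTED = "service.test.organization.com"

class VirtualHostHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a server that routes on the Host header,
    as name-based virtual hosts and reverse proxies do."""
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        body = b"ok" if host == EXPECTED else b"unknown host"
        self.send_response(200 if host == EXPECTED else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), VirtualHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}/"

# Plain request through the "tunnel": Host is 127.0.0.1, so it is rejected.
try:
    urllib.request.urlopen(base)
    rejected_code = None
except urllib.error.HTTPError as err:
    rejected_code = err.code

# Same request with the real service name in the Host header succeeds.
req = urllib.request.Request(base, headers={"Host": EXPECTED})
host_body = urllib.request.urlopen(req).read().decode()
server.shutdown()
```

This is exactly what goes wrong with a tunnel: the request reaches the right machine, but the server sees localhost in the Host header and refuses to route it.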
The networking issue was caused by the request format. My request was built with a destination of 'localhost:1234', so even though it reached the proper machine, the machine ignored it.
To solve this I added a record in my host file, like this:
127.0.0.1 service.test.organization.com
Then I was able to send the message. First I opened the tunnel:
ssh -L 443:service.test.organization.com:443 myuser@machineb.com
Then I used this curl command: curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData
The hosts file causes the address to resolve to localhost, and the SSH tunnel then forwards it on.