kubectl replies "connection refused" on Linux while OK on another machine (Mac) - SSL

Update: I just ran the command inside a Docker container on the same Linux machine, and it worked. Therefore it might be an issue related to the Linux distro. I personally suspect something related to SSL certificates.
I set up a Kubernetes cluster in AWS EKS and the whole running environment using my MacBook. However, I found that I cannot set up kubectl correctly on my Linux machine (Arch Linux).
I ran kubectl --v=1000 get svc (some cluster info is masked):
I1020 11:01:44.053581 3266 loader.go:359] Config loaded from file /home/realturner/.kube/config
I1020 11:01:44.054963 3266 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.7 (linux/amd64) kubernetes/1861c59" 'https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s'
I1020 11:01:44.299305 3266 round_trippers.go:438] GET https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s in 244 milliseconds
I1020 11:01:44.299331 3266 round_trippers.go:444] Response Headers:
I1020 11:01:44.299367 3266 cached_discovery.go:121] skipped caching discovery info due to Get https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s: dial tcp: lookup XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com on [::1]:53: read udp [::1]:34122->[::1]:53: read: connection refused
Compared to another machine, the successful one replies with headers and a body:
I1020 11:03:44.358266 1675 loader.go:359] Config loaded from file /Users/realturner/.kube/config-tv
I1020 11:03:44.359417 1675 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.6 (darwin/amd64) kubernetes/96fac5c" 'https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s'
I1020 11:03:46.186432 1675 round_trippers.go:438] GET https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s 200 OK in 1826 milliseconds
I1020 11:03:46.186481 1675 round_trippers.go:444] Response Headers:
I1020 11:03:46.186498 1675 round_trippers.go:447] Content-Length: 149
I1020 11:03:46.186512 1675 round_trippers.go:447] Date: Sun, 20 Oct 2019 03:03:46 GMT
I1020 11:03:46.186525 1675 round_trippers.go:447] Audit-Id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
I1020 11:03:46.186538 1675 round_trippers.go:447] Content-Type: application/json
I1020 11:03:46.262841 1675 request.go:942] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"ip-10-xxx-xxx-xxx.ec2.internal:443"}]}
I'd suspect a network or firewall problem, but a plain curl to that endpoint does get a response, though it lacks permission:
$ curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.7 (linux/amd64) kubernetes/1861c59" 'https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api?timeout=32s'
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying x.xxx.xxx.xx:443...
* TCP_NODELAY set
* Connected to XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com (x.xxx.xxx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=kube-apiserver
* start date: Oct 17 10:29:43 2019 GMT
* expire date: Oct 16 10:29:44 2020 GMT
* issuer: CN=kubernetes
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55cb670e87b0)
> GET /api?timeout=32s HTTP/2
> Host: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com
> Accept: application/json, */*
> User-Agent: kubectl/v1.14.7 (linux/amd64) kubernetes/1861c59
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 403
< audit-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 188
< date: Sun, 20 Oct 2019 14:56:42 GMT
<
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/api\"","reason":"Forbidden","details":{},"code":403}
* Connection #0 to host XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com left intact
Edit - here's my .kube/config, which is the same as the .kube/config-tv used on the Mac (some items masked). It's generated by aws eks update-kubeconfig --name <Cluster>:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Same as the one in `Certificate authority` of EKS>
    server: https://<PREFIX>.<REGION>.eks.amazonaws.com
  name: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
contexts:
- context:
    cluster: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
    user: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
  name: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
current-context: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:<REGION>:<AccountId>:cluster/<Cluster>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - <REGION>
      - eks
      - get-token
      - --cluster-name
      - <Cluster>
      command: aws
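For reference, the pieces this config relies on can be exercised outside of kubectl; a hedged sketch using the masked placeholders from above:
# Confirm the AWS CLI can mint a token for the cluster (this is what the exec plugin above runs)
aws --region <REGION> eks get-token --cluster-name <Cluster>
# Confirm the API endpoint resolves through the system resolver
getent hosts <PREFIX>.<REGION>.eks.amazonaws.com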

Wow, it turns out to be a DNS resolution problem, even though I had been using the freshly installed system for several days without noticing it.
Previously I had only tried getent hosts <DNS> and curl -v <PREFIX>.<REGION>.eks.amazonaws.com to test DNS resolution. Both replied correctly, until I found that my /etc/resolv.conf was actually empty.
I had missed configuring systemd's DNS resolution. As documented here:
Note that if you want to take advantage of automatic DNS configuration from DHCP, you need to enable systemd-resolved and symlink /run/systemd/resolve/resolv.conf to /etc/resolv.conf
After creating the symbolic link, it now just works as expected!
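For reference, a minimal sketch of the fix, assuming systemd-resolved on Arch Linux and the file names from the note quoted above:
# Enable and start systemd-resolved so it manages /run/systemd/resolve/resolv.conf
sudo systemctl enable --now systemd-resolved
# Point /etc/resolv.conf at the resolver file maintained by systemd-resolved
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf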

Related

TLS certificate expired only for some users

I have a k8s cluster with an nginx ingress as a reverse proxy. I am using Let's Encrypt to generate the TLS certificate:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ******
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
Everything worked fine for months. Today,
$ curl -v --verbose https://myurl
returns
* Rebuilt URL to: https://myurl/
* Trying 51.103.58.**...
* TCP_NODELAY set
* Connected to myurl (51.103.58.**) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: certificate has expired
* stopped the pause stream!
* Closing connection 0
curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
HTTPS-proxy has similar options --proxy-cacert and --proxy-insecure.
For two other people on my team the error is the same, and I get the same error (expired certificate) when I use Postman.
But for another teammate, we get no error:
* Trying 51.103.58.**...
* TCP_NODELAY set
* Connected to myurl (51.103.58.**) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=myurl
* start date: Jul 24 07:15:13 2021 GMT
* expire date: Oct 22 07:15:11 2021 GMT
* subjectAltName: host "myurl" matched cert's "myurl"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd9be00d600)
> GET / HTTP/2
> Host: myurl
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< server: nginx/1.19.1
< date: Thu, 30 Sep 2021 16:11:23 GMT
< content-type: application/json; charset=utf-8
< content-length: 56
< vary: Origin, Accept-Encoding
< access-control-allow-credentials: true
< x-xss-protection: 1; mode=block
< x-frame-options: DENY
< strict-transport-security: max-age=15724800; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< etag: W/"38-3eQD3G7Y0vTkrLR+ExD2u5BSsMc"
<
* Connection #0 to host myurl left intact
{"started":"2021-09-30T13:30:30.912Z","uptime":9653.048}* Closing connection 0
When I go to the website in my web browser, everything works fine and the certificate is presented as valid; for now I get no error in the prod or staging environment in the browser (curl gives the same error on staging).
Does anyone have an explanation for this?
Warning! Please plan an OS upgrade path. The advice below should be applied only in an emergency situation, to quickly fix a critical system.
Your team missed an OS update or a ca-certificates package update.
The solution below works on old Debian/Ubuntu systems.
First check whether you have the offending DST Root CA X3 cert present:
# grep X3 /etc/ca-certificates.conf
mozilla/DST_Root_CA_X3.crt
Make sure the client OS has the proper ISRG Root X1 cert present too:
# grep X1 /etc/ca-certificates.conf
mozilla/ISRG_Root_X1.crt
This is going to disable X3:
# sed -i '/^mozilla\/DST_Root_CA_X3/s/^/!/' /etc/ca-certificates.conf && update-ca-certificates -f
Try curl https://yourdomain now; it should pass.
Again, plan an upgrade please.
This is related to the DST Root CA X3 certificate, which expired Sep 30 14:01:15 2021 GMT.
The DST Root CA X3 certificate is part of the "cacert" bundle.
As of today the bundle can be found here: https://curl.se/docs/caextract.html
as https://curl.se/ca/cacert.pem.
The expired certificate is:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
44:af:b0:80:d6:a3:27:ba:89:30:39:86:2e:f8:40:6b
Signature Algorithm: sha1WithRSAEncryption
Issuer: O=Digital Signature Trust Co., CN=DST Root CA X3
Validity
Not Before: Sep 30 21:12:19 2000 GMT
Not After : Sep 30 14:01:15 2021 GMT
Subject: O=Digital Signature Trust Co., CN=DST Root CA X3
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
This root certificate is used to verify the peer in curl calls to websites using Let's Encrypt issued certificates.
Here's a detailed solution to your problem: https://stackoverflow.com/a/69411107/1549092
Let's Encrypt formal address of the issue can be found here: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
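To check which chain a server actually presents (and whether the expired DST Root CA X3 is still part of it), a hedged sketch with openssl; yourdomain is a placeholder:
# Print subject and issuer of every certificate the server sends in its chain
openssl s_client -connect yourdomain:443 -servername yourdomain -showcerts </dev/null 2>/dev/null | grep -E ' s:| i:'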
Even if it is not K8s related, the main explanation is contained in: Sudden OpenSSL Error messages: error:14090086 using file_get_contents. I complete it with the K8s-related part here.
I fixed the same issue by upgrading my certbot and reissuing the certificate with --preferred-chain 'ISRG Root X1'.
You can do the same with options in the YAML of the cert issuer;
see here: https://cert-manager.io/docs/configuration/acme/#use-an-alternative-certificate-chain
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
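If you use certbot directly rather than cert-manager, the equivalent is roughly the following; this is a sketch, the domain and the --nginx plugin are placeholders, and --preferred-chain needs a reasonably recent certbot:
# Reissue the certificate, asking Let's Encrypt for the chain rooted at ISRG Root X1
certbot certonly --nginx -d example.com --preferred-chain "ISRG Root X1" --force-renewal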

Getting "maximum redirect" when upgrading to nginx 3.11.1

I have a Kubernetes cluster running in AWS, and I am working through upgrading various components. Internally, we are using NGINX, and it is currently at v1.1.1 of the nginx-ingress chart (as served from old stable), with the following configuration:
controller:
  publishService:
    enabled: "true"
  replicaCount: 3
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: '*.MY.TOP.LEVEL.DOMAIN'
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: [SNIP]
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    targetPorts:
      http: http
      https: http
My service's ingress resource looks like...
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    [SNIP]
spec:
  rules:
  - host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
    http:
      paths:
      - backend:
          serviceName: MY-SERVICE
          servicePort: 80
        path: /
status:
  loadBalancer:
    ingress:
    - hostname: [SNIP]
This configuration works just fine. However, it breaks when I upgrade to v3.11.1 of the ingress-nginx chart (as served from the k8s museum).
With an unmodified config, curling the HTTPS scheme redirects back to itself:
curl -v https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
* Trying W.X.Y.Z...
* TCP_NODELAY set
* Connected to MY-SERVICE.MY.TOP.LEVEL.DOMAIN (W.X.Y.Z) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.MY.TOP.LEVEL.DOMAIN
* start date: Aug 21 00:00:00 2020 GMT
* expire date: Sep 20 12:00:00 2021 GMT
* subjectAltName: host "MY-SERVICE.MY.TOP.LEVEL.DOMAIN" matched cert's "*.MY.TOP.LEVEL.DOMAIN"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET INTERNAL/ROUTE HTTP/1.1
> Host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Content-Type: text/html
< Date: Wed, 28 Apr 2021 19:07:57 GMT
< Location: https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
< Content-Length: 164
< Connection: keep-alive
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host MY-SERVICE.MY.TOP.LEVEL.DOMAIN left intact
* Closing connection 0
(I wish I had captured more verbose output...)
I tried modifying the NGINX config to append the following:
config:
  use-forwarded-headers: "true"
and then...
config:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
These did not seem to make a difference. It was in the middle of the day, so I wasn't able to dive too far in before rolling back.
What should I look at, and how should I debug this?
Update:
I wish that I had posted a complete copy of the updated config, because I would have noticed that I did not correctly apply the change to add config.compute-full-forwarded-for: "true". It needs to be within the controller block, and I had placed it elsewhere.
Once the compute-full-forwarded-for: "true" config was added, everything started to work immediately.
This is a community wiki answer posted for better visibility. Feel free to expand it.
As confirmed by @object88, the issue was the misplaced config.compute-full-forwarded-for: "true" setting, which was located in the wrong block. Adding it under the controller block solved the issue.
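For reference, a hedged sketch of applying the setting in the right place from the command line; the release name, namespace, and chart reference are assumptions, and only the controller.config nesting is the point:
# Nest the keys under controller.config, which is where the chart reads them from
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --reuse-values \
  --set-string controller.config.use-forwarded-headers=true \
  --set-string controller.config.compute-full-forwarded-for=true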

Curl is not sending client certificate

I am trying to send a simple curl request:
curl -k -i --key ./key.pem --cert ./cert.pem https://target_ip/whatever/
The problem I'm having is that it does not send any client certificate. The local validation clearly passes, as otherwise I would get errors such as the key not matching the cert, but I can see in Wireshark that the certificates are not being sent in the TLS exchange around the Client Hello. Switches like --verbose or --cacert don't change much either.
I am able to send the very same certificates through Postman successfully.
I have tried sending the same curl request from various sources such as my WSL2 Ubuntu, a Debian container in the cloud, a VM, ...
Any tips on why it is not sending the certs?
EDIT I - output from curl -v
* Trying 52.xxx.xxx.xx:443...
* TCP_NODELAY set
* Connected to 52.xxx.xxx.xx (52.xxx.xxx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=NGINXIngressController
* start date: Aug 10 18:08:13 2020 GMT
* expire date: Aug 10 18:08:13 2021 GMT
* issuer: CN=NGINXIngressController
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /whatever/ HTTP/1.1
> Host: custom.localhost.dev
> User-Agent: curl/7.68.0
> Accept: */*
> Authorization: Bearer eyJ0...
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
HTTP/1.1 403 Forbidden
< Server: nginx/1.19.0
Server: nginx/1.19.0
< Date: Mon, 10 Aug 2020 22:23:24 GMT
Date: Mon, 10 Aug 2020 22:23:24 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 153
Content-Length: 153
< Connection: keep-alive
Connection: keep-alive
<
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
* Connection #0 to host 52.xxx.xxx.xx left intact
EDIT II - wireshark captures
It seems to be too much of a hassle to anonymise the pcap, so here are just some snapshots. Hopefully you'll be able to see all you need. I have highlighted the packet where I do (not) see the cert being sent. Note that I am running Postman on my Windows workstation, whereas curl is in WSL2, hence the different source addresses. The other hosts I ran curl from behaved the same, though.
Curl: [screenshot]
Postman: [screenshot]
EDIT III - Client Hellos
Curl: [Client Hello screenshot]
Postman: [Client Hello screenshot]
The ClientHello shows a clear difference: Postman uses the server_name extension (SNI) to provide the expected hostname, while curl does not.
This likely triggers a different part of the configuration in the web server: Postman reaches the specific virtual host given as server_name, while curl probably runs into the default configuration. Assuming that only the specific virtual host enables client certificates, this explains why the CertificateRequest is sent by the server only to Postman and not to curl.
It is unclear what this hostname is, but based on the length it cannot be an IP address. Thus Postman must somehow know the expected hostname of the server, even though it is claimed that the access was done with https://target_ip/ only, i.e. without a given hostname. curl cannot derive the expected hostname from this URL and thus cannot set server_name. To make curl aware of the hostname so it can set server_name, while still accessing a specific IP, use the --resolve option:
curl --resolve hostname:443:target_ip https://hostname/
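Combined with the client certificate options from the question, either of the following variants should reach the virtual host that requests the client cert (assuming that vhost is the one configured for it); hostname and target_ip are placeholders:
# Option 1: map the hostname to the IP so curl sets SNI and the Host header correctly
curl -k --resolve hostname:443:target_ip --key ./key.pem --cert ./cert.pem https://hostname/whatever/
# Option 2: add a hosts entry instead, then use the hostname directly
echo "target_ip hostname" | sudo tee -a /etc/hosts
curl -k --key ./key.pem --cert ./cert.pem https://hostname/whatever/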

Xively REST API with cURL - 403 Forbidden

I'm trying to follow the Xively cURL tutorial. I created a new device in develop mode, copied the default auto-generated API key (with READ, WRITE, CREATE, DELETE permissions) into the example in the tutorial, and got this response:
{"title":"Forbidden","errors":"You do not have the necessary permissions to access this resource"}
I must be missing some obvious step. Do I need to activate the API key somehow before using it in scripts?
The cURL command:
curl --request POST \
--data '{"title":"My feed", "version":"1.0.0"}' \
--header "X-ApiKey: cPHLfGw1WJdMAbU8FzbfsdFyJ8suayHEH3OChRrkpYwQCmrb" \
--verbose \
https://api.xively.com/v2/feeds
full verbose output:
* About to connect() to api.xively.com port 443 (#0)
* Trying 216.52.233.120...
* Connected to api.xively.com (216.52.233.120) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-RC4-SHA
* Server certificate:
* subject: C=US; postalCode=01801; ST=MA; L=Woburn; street=First Floor; street=500 Unicorn Park Drive; O=LogMeIn Inc.; OU=Secure Link SSL Wildcard; CN=*.xively.com
* start date: 2013-05-07 00:00:00 GMT
* expire date: 2014-04-27 23:59:59 GMT
* subjectAltName: api.xively.com matched
* issuer: C=US; O=Network Solutions L.L.C.; CN=Network Solutions Certificate Authority
* SSL certificate verify ok.
> POST /v2/feeds HTTP/1.1
> User-Agent: curl/7.29.0
> Host: api.xively.com
> Accept: */*
> X-ApiKey: cPHLfGw1WJdMAbU8FzbfsdFyJ8suayHEH3OChRrkpYwQCmrb
> Content-Length: 38
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 38 out of 38 bytes
< HTTP/1.1 403 Forbidden
< Date: Sat, 30 Nov 2013 11:03:15 GMT
< Content-Type: application/json; charset=utf-8
< Content-Length: 98
< Connection: keep-alive
< X-Request-Id: 6cbb9676b448a4967187271dd246b423f7da2e39
<
* Connection #0 to host api.xively.com left intact
{"title":"Forbidden","errors":"You do not have the necessary permissions to access this resource"}
It depends what you are trying to do. POST requests to api.xively.com/v2/feeds are no longer supported with any API key. This is because, since the implementation of devices, programmatic creation of feeds directly is no longer supported.
This is an oversight in the Xively tutorial and I will inform the appropriate people to make sure that it gets changed.
In the meantime, since you have already created a device, you are basically ready to start the cURL tutorial at step 3, "Update a Feed". Use the API key and Feed ID from the development device you have already created on the website. Make sure to change the URL and body, and switch from a POST request to a PUT request.
I think you need to add your feed id onto the end of the URL.
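Putting the two answers together, a hedged sketch of the step-3 request; FEED_ID and the API key are placeholders, and the body follows the tutorial's feed format, so the datastream fields may need adjusting:
curl --request PUT \
--data '{"version":"1.0.0","datastreams":[{"id":"example_stream","current_value":"42"}]}' \
--header "X-ApiKey: YOUR_API_KEY" \
--verbose \
https://api.xively.com/v2/feeds/FEED_ID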

GitHub API v3 gives 404 when I try to create a repo

I've executed the following command with the respective response.
However, if I try to get information about /user, it works, which means that my token is valid.
What am I doing wrong?
guto#willie:~/$ curl -v -XPOST -H 'Authorization: token S3CR3T' -H 'Content-Type: application/json; charset=utf-8' https://api.github.com/user/repos -d '{"name":"my-new-repo","description":"my new repo description"}'
Output:
* About to connect() to api.github.com port 443 (#0)
* Trying 207.97.227.243... connected
* Connected to api.github.com (207.97.227.243) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: O=*.github.com; OU=Domain Control Validated; CN=*.github.com
* start date: 2009-12-11 05:02:36 GMT
* expire date: 2014-12-11 05:02:36 GMT
* subjectAltName: api.github.com matched
* issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certificates.godaddy.com/repository; CN=Go Daddy Secure Certification Authority; serialNumber=07969287
* SSL certificate verify ok.
> POST /user/repos HTTP/1.1
> User-Agent: curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: api.github.com
> Accept: */*
> Authorization: token S3CR3T
> Content-Type: application/json; charset=utf-8
> Content-Length: 62
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.0.4
< Date: Tue, 27 Dec 2011 03:45:12 GMT
< Content-Type: application/json; charset=utf-8
< Connection: keep-alive
< Status: 404 Not Found
< X-RateLimit-Limit: 5000
< ETag: "31b00b4920d3470b70611b10e0ba62a7"
< X-OAuth-Scopes: public_repo, user
< X-RateLimit-Remaining: 4976
< X-Accepted-OAuth-Scopes: repo
< Content-Length: 29
<
{
"message": "Not Found"
}
* Connection #0 to host api.github.com left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
guto#willie:~/projetos/apostilas/4linux-helper$
Check the OAuth Scopes GitHub documentation:
$ curl -H "Authorization: bearer TOKEN" https://api.github.com/users/technoweenie -I
HTTP/1.1 200 OK
X-OAuth-Scopes: repo, user
X-Accepted-OAuth-Scopes: user
You need to have the repo scope in order to have the right to create a repo, as illustrated by the SO question "Github v3 API - create a REPO".
repo
DB read/write access, and Git read access to public and private repos.
NOTE: Your application can request the scopes in the initial redirection.
You can specify multiple scopes by separating them by a comma.
https://github.com/login/oauth/authorize?
client_id=...&
scope=user,public_repo
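To see which scopes your own token actually carries, a hedged sketch; TOKEN is a placeholder:
# X-OAuth-Scopes lists the scopes granted to the token, X-Accepted-OAuth-Scopes the ones the endpoint requires
curl -sI -H "Authorization: token TOKEN" https://api.github.com/user | grep -i oauth-scopes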
Try using:
curl -v -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d '{"name":"my-new-repo","description":"my new repo description"}' https://api.github.com/user/repos?access_token=S3CR3T