Not Getting Custom Nameservers Using the GoDaddy API

I used this API call (GetRecords) to fetch the DNS NS records and nameservers for a domain:
GET https://api.godaddy.com/v1/domains/testsd34.com/records/NS
For the default GoDaddy nameservers it returns everything perfectly, but when the domain uses custom nameservers the same call returns an empty array instead of the nameservers.
Does anyone know how to get custom nameservers using this API call?

Finally, I found a way to get and edit the nameservers for a domain.
(For custom nameservers, the NS records are not managed by GoDaddy, so you have to
query the nameserver provider itself for record data.)
Following is the API call for getting nameservers:
HTTP request:
GET https://api.godaddy.com/v1/domains/mydomain.com
HTTP headers:
Authorization -> sso-key my-key:my-secret
Content-Type -> application/json
The response will contain a JSON object with a "nameServers" key listing
the pair of nameservers you have set. Example:
"nameServers": [
"ns1.mynameservers.com",
"ns2.mynameservers.com"
]
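As a quick illustration, a minimal Python sketch of that GET call using the requests library might look like this (the key, secret, and domain are placeholders):

import requests

API_KEY = "my-key"        # placeholder: your GoDaddy API key
API_SECRET = "my-secret"  # placeholder: your GoDaddy API secret
DOMAIN = "mydomain.com"   # placeholder: your domain

# Fetch the domain details, which include the "nameServers" list
resp = requests.get(
    f"https://api.godaddy.com/v1/domains/{DOMAIN}",
    headers={
        "Authorization": f"sso-key {API_KEY}:{API_SECRET}",
        "Content-Type": "application/json",
    },
)
resp.raise_for_status()
print(resp.json().get("nameServers"))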
To edit the nameservers, you can use the following API call:
HTTP request:
PATCH https://api.godaddy.com/v1/domains/mydomain.com
HTTP headers:
Authorization -> sso-key my-key:my-secret
Content-Type -> application/json
HTTP body:
{
    "nameServers": [
        "ns3.mynameservers.com",
        "ns4.mynameservers.com"
    ]
}
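And a corresponding Python sketch of the PATCH call, again with placeholder credentials and nameservers:

import requests

API_KEY = "my-key"        # placeholder
API_SECRET = "my-secret"  # placeholder
DOMAIN = "mydomain.com"   # placeholder

# Replace the domain's nameservers with the ones given in the body
resp = requests.patch(
    f"https://api.godaddy.com/v1/domains/{DOMAIN}",
    headers={
        "Authorization": f"sso-key {API_KEY}:{API_SECRET}",
        "Content-Type": "application/json",
    },
    json={"nameServers": ["ns3.mynameservers.com", "ns4.mynameservers.com"]},
)
resp.raise_for_status()  # a 2xx status means the change was accepted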

Related

Setting an Authorization header after a ForwardAuth in Traefik

I'm moving from Nginx to Traefik as the reverse-proxy of a Docker Swarm.
Currently, each request arriving with a Bearer token is sent to an authentication service (a microservice running in the Swarm), which returns a JWT when authentication succeeds. I then need to put this JWT in the Authorization header so the request can be forwarded to the service it targets.
The current setup with Nginx:
auth_request /auth;
auth_request_set $jwt $upstream_http_jwt;
proxy_set_header "Authorization" "jwt $jwt";
Can this be done with Traefik's ForwardAuth directly, or do I have to add another middleware to create this header once the request has been authenticated?
This is possible if your authentication service can return the JWT in the Authorization header of its response. Set the authResponseHeaders option of the ForwardAuth middleware to Authorization.
The authResponseHeaders option is the list of headers to copy from the authentication server's response onto the forwarded request, replacing any existing conflicting headers.
E.g.
http:
  middlewares:
    auth:
      forwardAuth:
        address: "http://your_auth_server/auth"
        authResponseHeaders:
          - "Authorization"

ERR_SSL_VERSION_OR_CIPHER_MISMATCH from AWS API Gateway into Lambda

I have set up a lambda and attached an API Gateway deployment to it. The tests in the gateway console all work fine. I created an AWS certificate for *.hazeapp.net. I created a custom domain in the API gateway and attached that certificate. In the Route 53 zone, I created the alias record and used the target that came up under API gateway (the only one available). I named the alias rest.hazeapp.net. My client gets the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Curl indicates that the TLS server handshake failed, which agrees with the SSL error. Curl indicates that the certificate CA checks out.
Am I doing something wrong?
I had this problem when my DNS entry pointed directly at the API Gateway deployment rather than at the endpoint backing the custom domain name.
To find the domain name to point to:
aws apigateway get-domain-name --domain-name "<YOUR DOMAIN>"
The response contains the domain name to use. In my case I had a Regional deployment so the result was:
{
    "domainName": "<DOMAIN_NAME>",
    "certificateUploadDate": 1553011117,
    "regionalDomainName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
    "regionalHostedZoneId": "...",
    "regionalCertificateArn": "arn:aws:acm:eu-west-1:<ACCOUNT>:certificate/<CERT_ID>",
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }
}
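If you want to script the Route 53 side as well, a hedged boto3 sketch along the same lines (the hosted zone ID and record name are placeholders, and the regional* fields assume a Regional deployment as in the response above):

import boto3

DOMAIN = "rest.hazeapp.net"        # the custom domain configured in API Gateway
HOSTED_ZONE_ID = "Z0000000000000"  # placeholder: your Route 53 hosted zone ID

# Look up the custom domain name, not the deployment's execute-api URL
info = boto3.client("apigateway").get_domain_name(domainName=DOMAIN)

# Point the alias record at the endpoint backing the custom domain
boto3.client("route53").change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": DOMAIN,
                "Type": "A",
                "AliasTarget": {
                    "DNSName": info["regionalDomainName"],
                    "HostedZoneId": info["regionalHostedZoneId"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)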

Gmail API not a valid origin

I'm using the Gmail API from JavaScript, but I'm getting this error:
Uncaught {
  error: "idpiframe_initialization_failed",
  details: "Not a valid origin for the client: http://localhost has not been whitelisted for client ID 731803464357-pdq2kfb0qg5ahca5gvvht343u2qmbgdk.apps.googleusercontent.com. Please go to https://console.developers.google.com/ and whitelist this origin for your project's client ID."
}
The page is served from http://localhost/b.html.
Did you add the Origin HTTP header before making the HTTP request?
const headers = new HttpHeaders().set('Origin', 'http://localhost')

Cross-Origin Resource Sharing between https and http?

I have a page that is hosted on both HTTP and HTTPS, and it makes an HTTP call with jQuery to a local HTTP server on the client's computer using the following code:
var url = "http://127.0.0.1:1234/Ping";
var ajaxSettings = {
    url: url,
    timeout: 1000
};
return $.ajax(ajaxSettings);
The local client application responds with the following headers:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Access-Control-Allow-Headers: Accept, Origin, Content-type
This works great over HTTP, but over HTTPS I get an error.
Is there any way to solve this? (Generating an SSL certificate and registering it seems a bit overkill.)

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using HAProxy to balance a cluster of servers, and I'm trying to add a maintenance page to the HAProxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. My question is: how can I use a maintenance page hosted remotely in an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. without HAProxy's 'redir' server definition)?
Say I have servers a, b, and c. If they all go down for maintenance, I want all requests to be handled by server definition d (labeled with 'backup'), which points at a static address on S3. Note that I don't want request paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries modify the request or the response only when the fallback path is in use. Two tests are used in the examples below:
# { nbsrv le 1 }             -- the number of usable servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- the server named "s3-fallback" is up ("s3-fallback" is the arbitrary name we gave the server in the config file)
# (together these mean the fallback is the only server still up in this backend)
So, now that we have a backup server in the back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients connect over HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
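Putting the pieces above together, the whole backend might look roughly like this sketch (the primary server addresses and the bucket name are illustrative placeholders):

backend web_servers
    server a 10.0.0.10:80 check
    server b 10.0.0.11:80 check
    server c 10.0.0.12:80 check
    # S3 website endpoint, used only when the servers above are down
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup

    # rewrite the request/response only when the fallback is the lone survivor
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }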
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that even though you want to "transparently proxy a request," I don't think "transparent proxy" is the right term for what you're trying to do. A transparent proxy implies that the client or the server (or both) see each other's IP addresses on the connection and believe they are communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for here.