GoDaddy API to create subdomain returns "The given domain is not registered, or does not have a zone file" - godaddy-api

I'm trying to use GoDaddy's API to create a subdomain using the following http request:
PATCH /v1/domains/domainName.com/records
Host: api.ote-godaddy.com
Authorization: sso-key API_KEY:API_SECRET
Content-Type: application/json
Content-Length: 100

[
  {
    "data": "111.111.111.111",
    "name": "subdomainName",
    "ttl": 6000,
    "type": "A"
  }
]
but I get the following response:
{
  "code": "UNKNOWN_DOMAIN",
  "message": "The given domain is not registered, or does not have a zone file"
}

Please change the host name to https://api.godaddy.com. Your request will only work against the production URL.
Please generate a production-level API key and secret.
Body: (Raw - Json Type)
[
  {
    "data": "YourServerIp",
    "name": "subdomainName",
    "port": 80,
    "priority": 10,
    "protocol": "string",
    "service": "string",
    "ttl": 600,
    "type": "A"
  }
]
Note:
This will only work when the primary domain already exists in your GoDaddy account.

I figured out that these requests only work using production authorization against the production URL; they won't work using OTE authorization against the OTE URL. Maybe the domain has to be set up as an OTE domain and not a production domain. Not sure.
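
For anyone hitting the same wall, here is a minimal sketch of the production call using Python's requests library (the domain, record values, and API credentials are placeholders; the endpoint and sso-key header format are the ones from the request above):

import requests

API_KEY = "PRODUCTION_API_KEY"        # placeholder: production key/secret from
API_SECRET = "PRODUCTION_API_SECRET"  # https://developer.godaddy.com
DOMAIN = "domainName.com"             # placeholder: a domain registered in this account

# PATCH adds the records in the body to the domain's existing zone file.
url = f"https://api.godaddy.com/v1/domains/{DOMAIN}/records"
headers = {
    "Authorization": f"sso-key {API_KEY}:{API_SECRET}",
    "Content-Type": "application/json",
}
records = [
    {
        "data": "111.111.111.111",  # the server IP the subdomain should point to
        "name": "subdomainName",    # the subdomain to create
        "ttl": 600,
        "type": "A",
    }
]

response = requests.patch(url, json=records, headers=headers)
print(response.status_code, response.text)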

Related

Newman loads the pfx certificate but it is not used to connect to the endpoint

I'm having an issue executing a Postman collection from Newman that involves loading a PFX certificate to establish a TLS mutual authentication (mTLS) connection.
From the Postman application, the certificate is loaded correctly (from the settings) and used for the domain https://domain1.com to connect to the mutual-TLS counterpart server.
When I export the JSON collection and environment, there is no mention of the associated domain and certificate.
Checking the JSON schema here, Newman accepts a certificate definition in the request, but applying it does not work. Here is my example:
"request": {
"method": "GET",
"header": [],
"certificate": {
"name": "Dev or Test Server",
"matches": ["https://domain1.com/*"],
"cert": { "src": "./certificate.pfx" }
},
"url": {
"raw": "https://domain1.com/as/authorization.oauth2",
"host": ["https://domain1.com"],
"path": ["as", "authorization.oauth2"],
"query": [
{
I also tried to apply the certificate configuration in an external file cert-list.json with the following content:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "cert": { "src": "./certificate-t.pfx" }
}]
but it does not work either.
Here is the newman command:
newman run domain.postman_collection.json -n 1 --ssl-client-cert-list cert-list.json -e env.postman_environment.json -r cli --verbose
Do you see what I am doing wrong?
Change cert to pfx. Try:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" }
}]
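
If it still fails after that change, it can help to rule out bad file paths or match patterns before rerunning newman. Below is a small stdlib-only Python sanity check of the certificate list; it assumes the cert-list.json from the question and is not part of Newman itself:

import json
from pathlib import Path

# Check cert-list.json: each entry needs 'matches' patterns and either a
# 'pfx' source or a 'cert'/'key' pair, and the referenced files must exist
# relative to the directory newman is run from.
entries = json.loads(Path("cert-list.json").read_text())
for entry in entries:
    name = entry.get("name", "<unnamed>")
    if not entry.get("matches"):
        print(f"{name}: missing 'matches' patterns")
    sources = [entry.get(key, {}).get("src") for key in ("pfx", "cert", "key")]
    sources = [src for src in sources if src]
    if not sources:
        print(f"{name}: no 'pfx' or 'cert'/'key' source configured")
    for src in sources:
        if not Path(src).is_file():
            print(f"{name}: file not found: {src}")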

How to access multiple S3 origins (in the same bucket) from a single cloudfront distribution?

I have a CloudFront distribution that is working fine with an S3 origin.
After adding a second origin, I also added a new cache behaviour, so I would get:
first.domain.com: goes to the first origin (via the default * cache behaviour path)
first.domain.com/elsewhere: goes to the new origin (via a new elsewhere/* cache behaviour path)
I feel something may be wrong or missing, but I can't tell from the docs what it could be.
After reading these answers:
One
Two
I still can't figure out what is not working. I enabled the S3 logs, but they can take hours to update.
Any help is appreciated!
The error I get after hitting the second URL is:
"response": {
"status": 403,
"statusText": "",
"httpVersion": "http/2.0",
"headers": [
{
"name": "status",
"value": "403"
},
{
"name": "content-type",
"value": "application/xml"
},
{
"name": "date",
"value": "Fri, 17 Aug 2018 03:28:54 GMT"
},
{
"name": "server",
"value": "AmazonS3"
},
{
"name": "x-cache",
"value": "Error from cloudfront"
},
{
"name": "via",
"value": "1.1 275132367c30f17c9825826491390fe3.cloudfront.net (CloudFront)"
},
{
"name": "x-amz-cf-id",
"value": "Ag_JzYYNMVJLMlz9Dd8yDgS1qDCRFlihzlCauDXOE0-fojAPQLQNQQ=="
}
It would seem that the distribution has no access, but I used the same origin access identity (OAI) as with the first origin, I checked that the bucket permissions allow that OAI, and the first origin is working fine.
Maybe it's some slow-propagation issue when adding an S3 origin?
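
In the meantime, one quick way to double-check how the behaviours and origins are actually wired up in the deployed distribution is a short boto3 sketch like the one below (the distribution ID is a placeholder and AWS credentials are assumed to be configured); it prints each path pattern, its target origin, and the OAI attached to each S3 origin:

import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder distribution ID; find yours with `aws cloudfront list-distributions`.
config = cloudfront.get_distribution_config(Id="E1234567890ABC")["DistributionConfig"]

default = config["DefaultCacheBehavior"]
print("* (default) ->", default["TargetOriginId"])

for behavior in config.get("CacheBehaviors", {}).get("Items", []):
    print(behavior["PathPattern"], "->", behavior["TargetOriginId"])

for origin in config["Origins"]["Items"]:
    oai = origin.get("S3OriginConfig", {}).get("OriginAccessIdentity", "")
    print("origin:", origin["Id"], origin["DomainName"], "OAI:", oai or "<none>")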

OpenShift Service Account Permissions to Read Pod and Deployment Status

I would like to use the OpenShift REST api to make queries from a separate portal. We first created a service account using the following steps (where my-id is an admin in the project):
C:\openshift>oc login
Authentication required for https://openshift-test.foo.com:8443 (openshift)
Username: my-id
Password:
Login successful.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* datalake-replication-consumers
datalake-replication-demo
Using project "datalake-replication-consumers".
C:\openshift>oc create serviceaccount gmi-registry
serviceaccount "gmi-registry" created
C:\openshift>oc policy add-role-to-user admin system:serviceaccounts:datalake-replication-consumers:gmi-registry
role "admin" added: "system:serviceaccounts:datalake-replication-consumers:gmi-registry"
C:\openshift>oc serviceaccounts get-token gmi-registry
<token here>
I then pasted that token as a bearer token into Postman to make a few api calls. Since I added my service account to the admin role within my project, I assumed this would work, but instead we're getting back a 403.
GET pods:
https://openshift-test.foo.com:8443/api/v1/namespaces/datalake-replication-consumers/pods
Response:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "User \"system:serviceaccount:datalake-replication-consumers:gmi-registry\" cannot list pods in project \"datalake-replication-consumers\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
GET specific deployment:
https://openshift-test.foo.com:8443/oapi/v1/namespaces/datalake-replication-consumers/deploymentconfigs/entity-65869977-9d56-49a5-afa2-4a547df82d5c
Response:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "User \"system:serviceaccount:datalake-replication-consumers:gmi-registry\" cannot get deploymentconfigs in project \"datalake-replication-consumers\"",
"reason": "Forbidden",
"details": {
"name": "entity-65869977-9d56-49a5-afa2-4a547df82d5c",
"kind": "deploymentconfigs"
},
"code": 403
}
What are we missing for service account permissions here?
UPDATE: I should also add that I pulled my own bearer token out of the CLI and used that for both calls. That worked.
Not a very exciting answer, but our problem was solved when we installed a v3.7 instance. My initial tests were on v1.5, which I think corresponds to 3.5 or 3.6 in the enterprise offering?
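
For completeness, the same pod-list call can be reproduced outside Postman with a short Python requests sketch (the host, namespace, and token are the placeholders from the question; verify=False is only acceptable against a test cluster with a self-signed certificate):

import requests

TOKEN = "<token here>"  # output of `oc serviceaccounts get-token gmi-registry`
HOST = "https://openshift-test.foo.com:8443"
NAMESPACE = "datalake-replication-consumers"

# Same call as the Postman request above, authenticated with the service
# account token as a bearer token.
resp = requests.get(
    f"{HOST}/api/v1/namespaces/{NAMESPACE}/pods",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # self-signed test cluster only; never do this in production
)
print(resp.status_code)
print(resp.json())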

Can't Connect to Service via Marathon-lb using DCOS

I recently went through the tutorial for load balancing apps in DCOS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application, a Play (Scala) app. I have the internal marathon-lb set up and can use it successfully for the nginx container, but when I try to use my own Docker image I cannot get it to work.
I start the service with my custom image and can access it fine via the IP and port assigned to it (i.e. if the service is deployed on 10.0.0.0 and available on port 1234, then curl http://10.0.0.0:1234/ works as expected, and I can also make the API calls defined in my application routes). However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port), I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is the JSON file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't bark at the health check from Marathon.
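
Concretely, that means adding the option to the labels block of the app definition. Here is a minimal Python sketch that patches the JSON file shown above before redeploying (my-app.json is a placeholder file name for that definition):

import json

# Add the HAProxy backend health-check options to the app definition's labels;
# the value is the exact string from the answer above.
with open("my-app.json") as f:
    app = json.load(f)

app.setdefault("labels", {})["HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS"] = (
    " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
)

with open("my-app.json", "w") as f:
    json.dump(app, f, indent=2)

After updating the file, redeploy the app (for example with dcos marathon app update my-app < my-app.json) so marathon-lb picks up the new label.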

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage into a Google BigQuery table. I have read/write access in both Google Cloud Storage and Google BigQuery. Also, the uploaded file belongs to the same project as the BigQuery table.
The problem happens when I access the resource behind this URL https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs by means of a POST request. The content of the request to the above-mentioned resource is as follows:
{
  "kind": "bigquery#job",
  "projectId": NUMERIC_ID,
  "configuration": {
    "load": {
      "sourceUris": ["gs://bucket_name/document.json.gz"],
      "schema": {
        "fields": [
          { "name": "id", "type": "INTEGER" },
          { "name": "date", "type": "TIMESTAMP" },
          { "name": "user_agent", "type": "STRING" },
          { "name": "queried_key", "type": "STRING" },
          { "name": "user_country", "type": "STRING" },
          { "name": "duration", "type": "INTEGER" },
          { "name": "target", "type": "STRING" }
        ]
      },
      "destinationTable": {
        "datasetId": "DATASET_NAME",
        "projectId": NUMERIC_ID,
        "tableId": "TABLE_ID"
      }
    }
  }
}
However, I get the following error, which doesn't make any sense to me:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
      }
    ],
    "code": 400,
    "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
  }
}
I know the problem lies neither in the project ID nor in the access token placed in the authentication header, because I have successfully created an empty table before. I also set the Content-Type header to application/json, which I don't think is the issue here, because the body content is JSON-encoded.
Thanks in advance.
Your HTTP request is malformed -- BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request and check the body you send.
Ensure that all of the above (which seems correct) is the body of the POST call. The above JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and body of each MIME part.
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one made by the API.
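
As a point of comparison, below is a minimal sketch of the same load job submitted with Python's requests library against the plain (non-upload) jobs endpoint, which is sufficient when the source data already sits in Cloud Storage; the project ID, dataset, table, and access token are placeholders:

import requests

PROJECT_ID = "NUMERIC_ID"               # placeholder
ACCESS_TOKEN = "<oauth2-access-token>"  # placeholder

job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://bucket_name/document.json.gz"],
            # the source is JSON, so the format must be set explicitly (default is CSV)
            "sourceFormat": "NEWLINE_DELIMITED_JSON",
            "schema": {"fields": [
                {"name": "id", "type": "INTEGER"},
                {"name": "date", "type": "TIMESTAMP"},
                {"name": "user_agent", "type": "STRING"},
                {"name": "queried_key", "type": "STRING"},
                {"name": "user_country", "type": "STRING"},
                {"name": "duration", "type": "INTEGER"},
                {"name": "target", "type": "STRING"},
            ]},
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "DATASET_NAME",
                "tableId": "TABLE_ID",
            },
        }
    }
}

# Because the data is already in GCS, the body is plain JSON and the /upload/
# media endpoint (with its multipart framing) is not needed.
resp = requests.post(
    f"https://www.googleapis.com/bigquery/v2/projects/{PROJECT_ID}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,
)
print(resp.status_code, resp.json())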