While there is a thread about this problem in Google's FAQ, it seems only two of the answers there have satisfied other users. I'm certain there is no proxy on my network, and I'm fairly sure boto is configured correctly, since I can see credentials in the request.
Here's the capture from gsutil:
/// Output sanitized
Creating gs://64/...
DEBUG:boto:path=/64/
DEBUG:boto:auth_path=/64/
DEBUG:boto:Method: PUT
DEBUG:boto:Path: /64/
DEBUG:boto:Data: <CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></CreateBucketConfiguration>
DEBUG:boto:Headers: {'x-goog-api-version': '2'}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout': 70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key dc3
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=dc3 present (cache_file=/tmp/oauth2_client-tokencache.1000.dc3)
DEBUG:oauth2_client:GetAccessToken: token from cache: AccessToken(token=ya29, expiry=2013-07-19 21:05:51.136103Z)
DEBUG:boto:wrapping ssl socket; CA certificate file=.../gsutil/third_party/boto/boto/cacerts/cacerts.txt
DEBUG:boto:validating server certificate: hostname=storage.googleapis.com, certificate hosts=['*.googleusercontent.com', '*.blogspot.com', '*.bp.blogspot.com', '*.commondatastorage.googleapis.com', '*.doubleclickusercontent.com', '*.ggpht.com', '*.googledrive.com', '*.googlesyndication.com', '*.storage.googleapis.com', 'blogspot.com', 'bp.blogspot.com', 'commondatastorage.googleapis.com', 'doubleclickusercontent.com', 'ggpht.com', 'googledrive.com', 'googleusercontent.com', 'static.panoramio.com.storage.googleapis.com', 'storage.googleapis.com']
GSResponseError: status=400, code=MissingSecurityHeader, reason=Bad Request, detail=A non-empty x-goog-project-id header is required for this request.
send: 'PUT /64/ HTTP/1.1\r\nHost: storage.googleapis.com\r\nAccept-Encoding: identity\r\nContent-Length: 98\r\nx-goog-api-version: 2\r\nAuthorization: Bearer ya29\r\nUser-Agent: Boto/2.9.7 (linux2)\r\n\r\n<CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></CreateBucketConfiguration>'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/xml; charset=UTF-8
header: Content-Length: 232
header: Date: Fri, 19 Jul 2013 20:44:24 GMT
header: Server: HTTP Upload Server Built on Jul 12 2013 17:12:36 (1373674356)
It looks like you might not have a default_project_id specified in your .boto file.
It should look something like this:
[GSUtil]
default_project_id = 1234567890
Alternatively, you can pass the -p option to the gsutil mb command to manually specify a project. From the gsutil mb documentation:
-p proj_id Specifies the project ID under which to create the bucket.
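For example, retrying the failing command with an explicit project ID would look like this, using the bucket name from the log above and the placeholder project ID from the config snippet:
gsutil mb -p 1234567890 gs://64/   # 1234567890 is a placeholder; substitute your real project ID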
The question is with reference to https://learn.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python
The section https://learn.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python#simulate-the-device talks about modifying certain parameters. I get the following error when running the Python code.
$ python provisioning_device_client_sample.py -i 0ne0007F9D9 -s X509 -p http
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609]
Provisioning Device Client for Python
Starting the Provisioning Client Python sample...
Scope ID=0ne0007F9D9
Security Device Type X509
Protocol HTTP
Provisioning API Version: 1.2.12
Press Enter to interrupt...
Register status callback:
reg_status = CONNECTED
user_context = None
PUT /0ne0007F9D9/registrations/riot-device-cert/register?api-version=2018-09-01-preview HTTP/1.1
UserAgent: prov_device_client/1.0
Accept: application/json
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Host: global.azure-devices-provisioning.net:443
content-length: 39
len: 39
{ "registrationId":"riot-device-cert" }
HTTP Status: 401
date: Thu, 26 Sep 2019 18:48:49 GMT
content-type: application/json; charset=utf-8
transfer-encoding: chunked
x-ms-request-id: 883b82ee-f696-4e68-9aec-61abc1e4a55b
strict-transport-security: max-age=31536000; includeSubDomains
{"errorCode":401002,"trackingId":"883b82ee-f696-4e68-9aec-61abc1e4a55b","message":"CA certificate not found.","timestampUtc":"2019-09-26T18:48:50.364959Z"}
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_device_ll_client.c Func:prov_transport_process_json_reply Line:323 failure retrieving json auth key value
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_transport_http_client.c Func:prov_transport_http_dowork Line:941 Unable to process registration reply.
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_device_ll_client.c Func:on_transport_registration_data Line:572 Failure retrieving data from the provisioning service
Register device callback:
register_result = PARSING
iothub_uri = None
user_context = None
Device registration failed!
I could not find a location where I should copy the device certificates. Maybe my understanding is wrong; please help me correct it.
Thanks,
Sreeju
Did you use Visual Studio to build the project as mentioned in the previous step? If so, VS is supposed to wire that in for you so that you don't have to copy the cert anywhere on your end; you just use the cert to set up the device on the Azure IoT Hub side.
To troubleshoot why that isn't happening, can you include or link your built provisioning_device_client_sample.py file? It probably shows or points to where the X509SecurityClient class is being instantiated, which will lead to the X509 object, which has an attribute (self._cert_file) that shows the file path. It'd also help if you can run this in a Python IDE so we can pull things up in a console.
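As a quick first check, grepping the built sample for certificate references should surface any hard-coded paths:
grep -n -i -E "cert|x509" provisioning_device_client_sample.py   # adjust the path if the sample lives elsewhere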
If that is inconvenient, I could build the SDK/sample and run through the thing myself, but I haven't opened up Visual Studio on my VM in ages and will probably need to go through some licensing fandango. (I mostly use the IOTHub Device & Service SDKs, the newer versions of which don't need to be built, or the REST api for areas where the SDKs break.) It'll be a little bit before I have some spare time for that.
I tried to publish a message to both the default exchange and also some other exchange via the HTTP Management API but I always get back an authorization error.
curl -i -u myuser:mypw -XPOST -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' https://myinstance.rmq.cloudamqp.com/api/exchanges/vhost/myvhost/publish
HTTP/1.1 401 Unauthorized
Server: nginx/1.14.2
Date: Mon, 01 Apr 2019 05:27:10 GMT
Content-Type: application/json
Content-Length: 53
Connection: keep-alive
content-security-policy: default-src 'self'
vary: accept, accept-encoding, origin
{"error":"not_authorised","reason":"Access refused."}%
I tried it both on a self-hosted RabbitMQ (installed via Helm on k8s) and on our CloudAMQP instance.
But if I log in to the Management Web UI with the very same user, I can publish a message to the exchange and also consume from a queue.
I expect that the Management Web UI just uses the HTTP API to perform these actions, so I am confused why it works via the UI.
Reading all vhosts, on the other hand, also works with the HTTP API.
curl -i -u myuser:mypw https://myinstance.rmq.cloudamqp.com/api/vhosts
HTTP/1.1 200 OK
Can somebody explain to me what's going on here? What puzzles me the most is the fact that it works in the UI using the same user:pw.
I figured out the problem: I used the wrong URL path.
For the vhost / and the default exchange it should be:
http://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish
In my case, using the CloudAMQP free plan, I needed to use my user name as the vhost in the URL:
https://myinstance.rmq.cloudamqp.com/api/exchanges/[myrandomusernamefromfreeplan]/amq.default/publish
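Putting it together, the original publish call with only the URL path corrected (same placeholder credentials, host, and body as above) would look like this; note that with amq.default the routing_key is simply the name of the destination queue:
curl -i -u myuser:mypw -X POST -d'{"properties":{},"routing_key":"my_key","payload":"my body","payload_encoding":"string"}' https://myinstance.rmq.cloudamqp.com/api/exchanges/%2F/amq.default/publish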
How would I go about translating the following curl command to a Jitterbit operation?
curl -i -u username:password -X POST -F file=@/path/to/file.csv https://website.com/api/filepost
Currently I have my Operation structured as follows:
Script:
$jitterbit.target.http.form_data = true;
$jitterbit.target.http.form_data.filename = "file.csv";
$jitterbit.target.http.form_data.name = "file";
Source:
A CSV file without headers, which matches the API's specifications (I sent the same file successfully via curl)
Transformation:
Text to Text - both source and target use the same file format as the Source file
API Endpoint:
Currently I authenticate successfully, but I get a 400/Bad Request error message saying "No file attached".
Full error message:
The operation "2. POST Preapplicants - CSV to API" failed.
Fatal Error
Failed to post to the url 'https://website.com/api/filepost'.
The last (and probably most relevant) error was: The server returned HTTP Status Code : 400 Bad Request Error is: The request could not be processed by the server due to invalid syntax.
Headers sent by the server: HTTP/1.1 400 Bad Request
Server: nginx/1.10.3 (Ubuntu)
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache
Date: Tue, 12 Sep 2017 18:55:38 GMT
The response was: {"message":"No file attached."}
I solved this problem by doing the following:
1. Changing from a transformation operation to an archive operation (using the same source, target, and script)
2. Changing the content-type of my HTTP connection to multipart/form-data (the content type curl sends by default when posting a file with -F)
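For comparison, re-running the known-good curl upload with -v shows the exact multipart content type and file part name the server expects, which is useful for matching the Jitterbit configuration against it (same placeholder credentials, path, and URL as in the question):
curl -v -u username:password -F "file=@/path/to/file.csv" https://website.com/api/filepost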
You can do it that way. What I did instead was use a file list operation, store the result in an array, and then use the Base64EncodeFile() function to upload the file.
Our server accesses the internet through a proxy. When I try to run a pull command such as
sudo docker run -t -i ubuntu:14.04 /bin/bash
I get the below error:
Get https://index.docker.io/v1/repositories/ubuntu/images: tls: failed to parse certificate from server: x509: negative serial number
The wget command wget -S -d -O - https://get.docker.io yields the below output:
Setting --output-document (outputdocument) to -
DEBUG output created by Wget 1.13.4 on linux-gnu.
URI encoding = `UTF-8'
--2014-08-27 17:13:46-- https://get.docker.io/
Connecting to :... connected.
Created socket 3.
Releasing 0x00000000016829f0 (new refcount 0). Deleting unused 0x00000000016829f0.
---request begin---
CONNECT get.docker.io:443 HTTP/1.1
User-Agent: Wget/1.13.4 (linux-gnu)
Proxy-Authorization: Basic Y3RzXDMxMzMwMDpzd2VldGZlbC4yOQ==
---request end---
proxy responded with: [HTTP/1.1 200 Connection established
Date: Wed, 27 Aug 2014 11:49:52 GMT
Age: 0
Via: 1.0 xaahshshhds
]
Initiating SSL handshake.
Handshake successful; connected socket 3 to SSL handle 0x00000000016831c0
certificate:
subject: /emailAddress=aaa@bbbb.com/C=yy/ST=aa/L=xx/O=yy/OU=mycompany/CN=get.docker.io
issuer: /emailAddress=aaa@bbbb.com/C=yy/ST=aa/L=xx/O=yy/OU=mycompany/CN=mycompany
ERROR: cannot verify get.docker.io's certificate, issued by `/emailAddress=aaa@bbbb.com/C=yy/ST=aa/L=xx/O=yy/OU=mycompany/CN=mycompany':
Unable to locally verify the issuer's authority.
To connect to get.docker.io insecurely, use `--no-check-certificate'.
Closed 3/SSL 0x00000000016831c0
Please give me some directions on how I should go about this issue.
EDIT:
I've now disabled the proxy for this IP segment, but I still get the same error.
The command: wget -S -d -O - https://get.docker.io gets the below output now:
Setting --output-document (outputdocument) to -
DEBUG output created by Wget 1.13.4 on linux-gnu.
URI encoding = `UTF-8'
--2014-09-04 11:26:12-- https://get.docker.io/
Resolving get.docker.io (get.docker.io)... 162.242.195.77
Caching get.docker.io => 162.242.195.77
Connecting to get.docker.io (get.docker.io)|162.242.195.77|:443... connected.
Created socket 3.
Releasing 0x00000000022d8fd0 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 3 to SSL handle 0x00000000022dabd0
certificate:
subject: /serialNumber=exkd9EjUozUulWIyUDurQPMEPBLSc2Bq/OU=GT98568428/OU=See www.rapidssl.com/resources/cps (c)13/OU=Domain Control Validated - RapidSSL(R)/CN=*.docker.io
issuer: /C=US/O=GeoTrust, Inc./CN=RapidSSL CA
X509 certificate successfully verified and matches host get.docker.io
---request begin---
GET / HTTP/1.1
User-Agent: Wget/1.13.4 (linux-gnu)
Accept: */*
Host: get.docker.io
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 503 Service Unavailable
Server: nginx/1.7.1
Date: Thu, 04 Sep 2014 06:03:28 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache
---response end---
HTTP/1.1 503 Service Unavailable
Server: nginx/1.7.1
Date: Thu, 04 Sep 2014 06:03:28 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache
Registered socket 3 for persistent reuse.
Skipping 108 bytes of body: [<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
] done.
2014-09-04 11:26:13 ERROR 503: Service Unavailable.
It looks like the proxy in your company uses SSL interception to inspect SSL traffic, which means that you get a certificate signed by your company's proxy CA instead of the original certificate. It also looks like this proxy CA is not trusted by your system, so the verification fails.
I would recommend contacting your firewall administrator about how to deal with the problem. Either they will add an exception for the SSL inspection, or they will tell you which certificate you need to import as trusted on your system.
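If they hand you the proxy CA certificate, importing it on a Debian/Ubuntu system typically looks like the following (proxy-ca.crt is a placeholder name for whatever file you receive; the Docker daemon is restarted so it picks up the updated trust store):
sudo cp proxy-ca.crt /usr/local/share/ca-certificates/proxy-ca.crt   # must have a .crt extension here
sudo update-ca-certificates
sudo service docker restart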
This should be fixed in any Docker binary compiled with Go 1.6+; see: https://github.com/golang/go/commit/a0ea93dea5f5741addc8c96b7ed037d0e359e33f.
When using the HTTP API I am trying to make a call to the aliveness-test for monitoring purposes. At the moment I am testing using curl and the following command:
curl -i http://guest:guest@localhost:55672/api/aliveness-test/
And I get the following response:
HTTP/1.1 404 Object Not Found
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
Date: Mon, 05 Nov 2012 17:18:58 GMT
Content-Type: text/html
Content-Length: 193
<HTML><HEAD><TITLE>404 Not Found</TITLE></HEAD><BODY><H1>Not Found</H1>The requested document was not found on this server.<P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></BODY></HTML>
When making a request just to list the users or vhosts, the request returns successfully:
$ curl -I http://guest:guest@localhost:55672/api/users
HTTP/1.1 200 OK
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
Date: Mon, 05 Nov 2012 17:51:44 GMT
Content-Type: application/json
Content-Length: 11210
Cache-Control: no-cache
I'm using the latest stable version (2.8.7) of RabbitMQ and obviously have the management plugin installed for the API to work with the users call (that response is left out as it contains company data, but it is just regular JSON as expected).
There isn't much on the internet about this call failing, so I am wondering if anyone has seen this before?
Thanks,
Kristian
Turns out that the '/' at the beginning of vhost names is not implicit, even as part of a URL. To get this to work I simply changed my request from:
curl -i http://guest:guest@localhost:55672/api/aliveness-test/
to
curl -i http://guest:guest@localhost:55672/api/aliveness-test/%2F
As %2F is the URL-encoded form of '/', my request now queries the vhost named '/' and returns a 200 response that looks like:
{"status":"ok"}