Authorization header is invalid -- one and only one ' ' (space) required - Amazon S3 - ruby-on-rails-3

Trying to precompile my assets in a Rails app and sync them with Amazon S3 storage, it fails with this message. Any feedback appreciated:
Expected(200) <=> Actual(400 Bad Request)
response => #<Excon::Response:0x00000007c45a98 #data={:body=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Authorization header is invalid -- one and only one ' ' (space) required</Message><ArgumentValue>AWS [\"AKIAINSIQYCZLWYSROWQ\", \"7RAxhY5nLkbACICMqjDlee5pCaEhf4LKgSpJ+R9k\"]:LakbTXVMX6I72MViNie/fe+79qU=</ArgumentValue><ArgumentName>Authorization</ArgumentName><RequestId>250C76936044E6D5</RequestId><HostId>j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet</HostId></Error>", :headers=>{"x-amz-request-id"=>"250C76936044E6D5", "x-amz-id-2"=>"j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Tue, 20 Aug 2013 13:28:36 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, :status=>400, :remote_ip=>"205.251.235.165"}, #body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidArgument</Code><Message>Authorization header is invalid -- one and only one ' ' (space) required</Message><ArgumentValue>AWS [\"AKIAINSIQYCZLWYSROWQ\", \"7RAxhY5nLkbACICMqjDlee5pCaEhf4LKgSpJ+R9k\"]:LakbTXVMX6I72MViNie/fe+79qU=</ArgumentValue><ArgumentName>Authorization</ArgumentName><RequestId>250C76936044E6D5</RequestId><HostId>j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet</HostId></Error>", #headers={"x-amz-request-id"=>"250C76936044E6D5", "x-amz-id-2"=>"j2jK/dv0xTnNddtSFHuVicGv5wWjXl4zXuhOyPcO6+2WWlAYWSkn0CHPwdtnOPet", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Tue, 20 Aug 2013 13:28:36 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, #status=400, #remote_ip="205.251.235.165">

I've had an error with the same message twice now, and both times it was due to pasting an extra space at the end of the access key or secret key in the config file.

Check where you're setting the aws_access_key_id to use with your asset syncer.
This should be something that looks like AKIAINSIQYCZLWYSROWQ, whereas it looks like you've set it to a two-element array containing both your access key ID and the secret access key.
Furthermore, given that you've now placed those credentials in the public domain, you should revoke them immediately.
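For example, if the assets are being synced with the asset_sync gem, each credential should be its own plain string in the initializer. A minimal sketch, assuming asset_sync's usual configuration names and credentials coming from environment variables (adapt it to however your app actually loads its config):

# config/initializers/asset_sync.rb
AssetSync.configure do |config|
  config.fog_provider          = 'AWS'
  config.fog_directory         = ENV['FOG_DIRECTORY']         # bucket name
  # One plain string per setting -- not an array holding both values:
  config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']     # looks like "AKIA..."
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end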

An extra space at the end of the access key is one issue; copying the key from the Amazon IAM UI can include that extra space.
The other issue is when configuration in the ~/.aws/credentials file (or other configuration) conflicts with environment variable values. This happened to me when configuring CircleCI and Docker machines.
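A quick way to catch a stray space is to print the values the app actually ends up with, quoted, and strip them where they are loaded. A small sketch, assuming the keys arrive via the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables (substitute whatever config source your app really reads):

# Inspect the values: a trailing space shows up inside the quotes.
%w[AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY].each do |name|
  value = ENV[name].to_s
  puts "#{name} = #{value.inspect}"
  warn "#{name} has leading/trailing whitespace" if value != value.strip
end

# And strip defensively wherever the credentials are read in:
access_key_id     = ENV['AWS_ACCESS_KEY_ID'].to_s.strip
secret_access_key = ENV['AWS_SECRET_ACCESS_KEY'].to_s.strip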

This error also happens if you haven't enabled the GET/POST methods in CloudFront and you try to send GET/POST requests to an API hosted behind CloudFront.

Error 400 occurs in more than 20 different cases. Here is a PDF that describes all the errors: List of AWS S3 Error Codes

Related

Using a service account and a JSON key which is sent to you to upload data into Google Cloud Storage

I wrote a Python script that uploads files from a local folder into Google Cloud Storage.
I also created a service account with sufficient permissions and tested it on my computer using that service account's JSON key, and it worked.
Now I have sent the code and the JSON key to someone else to run, but the authentication fails on her side.
Are we missing any authentication step through the GCP UI?
import shutil
import subprocess

from google.cloud import storage

def config_gcloud():
    # CREDENTIALS_LOCATION is defined elsewhere in the script; it must be the path
    # to the service-account JSON key file on the machine running this code.
    subprocess.run(
        [
            shutil.which("gcloud"),
            "auth",
            "activate-service-account",
            "--key-file",
            CREDENTIALS_LOCATION,
        ]
    )
    storage_client = storage.Client.from_service_account_json(CREDENTIALS_LOCATION)
    return storage_client

def file_upload(bucket, source, destination):
    storage_client = config_gcloud()
    ...
The error happens in config_gcloud, and it says it is expecting str, path, ... but gets NoneType.
As I said, the code is fine and works on my computer. How can another person use it with the JSON key I sent her? She stored the JSON locally, and the path to the JSON is in the code.
CREDENTIALS_LOCATION is None instead of the correct path, hence the complaint about it being NoneType instead of str | Path.
Also, you don't need that gcloud call; that only matters for gcloud/gsutil commands, not the Python client.
And please post the actual stack trace of the error next time, not just a misspelled interpretation of it.

Weed Filer backup errors in the log

I started a weed filer.backup process to back up all the data to an S3 bucket. A lot of logs are being generated with the error messages below. Do I need to update any config to resolve this, or can these messages be ignored?
s3_write.go:99] [persistent-backup] completeMultipartUpload buckets/persistent/BEV_Processed_Data/2011_09_30/2011_09_30/GT_BEV_Output/0000000168.png: EntityTooSmall: Your proposed upload is smaller than the minimum allowed size
Apr 21 09:20:14 worker-server-004 seaweedfs-filer-backup[3076983]: #011status code: 400, request id: 10N2S6X73QVWK78G, host id: y2dsSnf7YTtMLIQSCW1eqrgvkom3lQ5HZegDjL4MgU8KkjDG/4U83BOr6qdUtHm8S4ScxI5HwZw=
Another message:
malformed xml the xml you provided was not well formed or did not validate against
This issue happens with empty files or files with small content. It looks like the AWS S3 multipart upload does not accept streaming empty files. Is there any setting in SeaweedFS that I am missing?

Amazon Redshift COPY always return S3ServiceException:Access Denied,Status 403

I'm really struggling with how to transfer data from an Amazon S3 bucket to Redshift with the COPY command.
So far, I have created an IAM user and assigned the 'AmazonS3ReadOnlyAccess' policy. But when I call the COPY command like the following, an Access Denied error is always returned.
copy my_table from 's3://s3.ap-northeast-2.amazonaws.com/mybucket/myobject' credentials 'aws_access_key_id=<...>;aws_secret_access_key=<...>' REGION'ap-northeast-2' delimiter '|';
Error:
Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
Details: -----------------------------------------------
error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid EB18FDE35E1E0CAB,ExtRid ,CanRetry 1
code: 8001
context: Listing bucket=s3.ap-northeast-2.amazonaws.com prefix=mybucket/myobject
query: 1311463
location: s3_utility.cpp:542
process: padbmaster [pid=4527]
-----------------------------------------------;
Is there anyone who can give me some clues or advice?
Thanks a lot!
Remove the endpoint s3.ap-northeast-2.amazonaws.com from the S3 path:
COPY my_table
FROM 's3://mybucket/myobject'
CREDENTIALS ''
REGION 'ap-northeast-2'
DELIMITER '|'
;
(See the examples in the documentation.) While the Access Denied error is definitely misleading, the returned message gives some hint as to what went wrong:
bucket=s3.ap-northeast-2.amazonaws.com
prefix=mybucket/myobject
We'd expect to see bucket=mybucket and prefix=myobject, though.
Check the encryption of the bucket.
According to the doc: https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html
The COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
Check the KMS rules on your key/role.
If the files come from EMR, check the security configurations for S3.
Your Redshift cluster role does not have the right to access the S3 bucket. Make sure the role you use for Redshift has access to the bucket, and that the bucket does not have a policy that blocks the access.

SoapUI dataSource illegal character in authority

I am trying to use an external dataSource in SoapUI to send some basic GET HTTP requests to a number of nodes, and I get "Illegal character in authority at index 7".
What I have set up:
1x dataSource (external file > excel):
The nodes setup appears to be correct (it's called "nodes"); the required column is called "node".
Getting the rows from the data file via the dataSource options appears to be working correctly.
1x HTTP request
GET request, URL is: http://${nodes#node}:2040/api/doSometimes
I know I need to add the loop at the end; however, the HTTP request isn't working with the first node yet, so I'll add the loop once the request works.
The error I get when trying to run the HTTP request:
Sun Aug 10 11:20:18 IDT 2014:ERROR:An error occurred [Illegal character in authority at index 7: http://XXX.XXX.XXX.XXX /api/doSomething], see error log for details (where XXX.XXX.XXX.XXX is the first ADDRESS from the nodes#node file). -- Also notice its missing the port.
The error log says: Sun Aug 10 11:29:25 IDT 2014:ERROR:java.lang.NullPointerException
Clarification: we do not have a WSDL available; however, the service does reply to different queries. /api/sendID WILL return the ID. I want to get all IDs from all nodes in the file.
Any ideas what I can do to mend this?
I used a preset REST request which included the required parameters, in a standard test suite, and added the request port directly to the #node value in the dataSource, which appeared to be what was causing the issue.

Adding a file to S3 from Salesforce using the S3 API

I have installed the AWS S3 toolkit on SFDC. I have the following code, where AWS_S3_ExampleController is part of the installed toolkit. When I execute this code, the debug log shows the generic 500 "Internal Server Error".
The returned SOAP fault contains:
Code: ns1:Client.InvalidArgument
Reason (xml:lang="en"): Invalid id
Detail: ArgumentValue = SAmeer Thakur, ArgumentName = CanonicalUser/ID
I do not know how to resolve the "Invalid id" error being returned.
// AWS_S3_ExampleController is part of the installed AWS S3 toolkit.
AWS_S3_ExampleController c = new AWS_S3_ExampleController();
c.constructor();
c.fileName = fileName;
c.OwnerId = 'Sameer Thakur';              // the value the fault flags as an invalid CanonicalUser/ID
c.bucketToUploadObject = bucketName;
c.fileSize = 100000;
c.fileBlob = Blob.valueOf(record);
c.accessTypeSelected = 'public-read-write';
System.debug('Before insert');
c.syncFilesystemDoc();
System.debug('After insert');
Any pointer would be appreciated.
Thank you,
Sameer
The problem was that I needed to define OwnerId with the canonical user ID value.
This value is generated using the access key and secret key.
The URL to generate the canonical user ID value is http://www.bucketexplorer.com/awsutility.aspx
Regards,
Sameer