The following code tries to select some data from a file stored on S3:
let client = S3Client::new(Region::default());
let source = ... object providing bucket and key ...;

let r = SelectObjectContentRequest {
    bucket: source.bucket,
    key: source.key,
    expression: "select id from S3Object[*].id".to_string(),
    expression_type: "SQL".to_string(),
    input_serialization: InputSerialization {
        json: Some(JSONInput { type_: Some("LINES".to_string()) }),
        ..Default::default()
    },
    output_serialization: OutputSerialization {
        json: Some(JSONOutput { record_delimiter: Some("\n".to_string()) }),
        ..Default::default()
    },
    ..Default::default()
};
It causes the following error:
The specified method is not allowed against this resource. (Method: POST)
The example is a 1:1 port of a working Python/boto3 example, so I'm quite sure it should work. I found this issue, which is a few months old, but its status is not clear to me. How do I get this working with Rust?
Unfortunately, S3 Select still doesn't work in the latest rusoto_s3 0.40.0. The issue you linked has all the answers. The problems are twofold.
First, the S3 Select request rusoto currently sends out has a bogus query string. It should be /ObjectName?select&select-type=2, but rusoto encodes it as /ObjectName?select%26select-type=2. That's the error you saw.
To verify, run your project like so:
$ RUST_LOG=rusoto,hyper=debug cargo run
You will see logs from rusoto and hyper. Sure enough, it emits an incorrect URI. You can even dig into the code responsible:
let mut params = Params::new();
params.put("select&select-type", "2");
request.set_params(params);
It is supposed to be:
let mut params = Params::new();
params.put("select-type", "2");
params.put("select", "");
request.set_params(params);
Although the fix seems trivial, remember that this is glue code generated from the AWS botocore service manifests, not written by hand, so incorporating the fix is not that straightforward.
Second, the bigger problem: the AWS S3 Select response uses a custom binary format, and rusoto simply doesn't have a deserializer for it yet.
I'm working on importing CSV files from Google Drive, through Apps Script, into BigQuery.
But when the code gets to the part where it sends the job to BigQuery, it states that the dataset is not found, even though the correct dataset ID is already in the code.
Thank you very much!
If you are using the Google example code, the error you describe is more than a simple copy-and-paste problem. In any case, check that you have the following:
const projectId = 'XXXXXXXX';
const datasetId = 'YYYYYYYY';
const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';
try {
  table = BigQuery.Tables.insert(table, projectId, datasetId);
  Logger.log('Table created: %s', table.id);
} catch (error) {
  Logger.log('unable to create table');
}
according to the documentation at this link:
https://developers.google.com/apps-script/advanced/bigquery
Also make sure that the BigQuery advanced service is enabled under Services.
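For reference, the load-job part of that documentation example looks roughly like the sketch below (projectId, datasetId and csvFileId as in the snippet above; tableId is a placeholder for the table you create or look up first). The "dataset not found" error usually means that the projectId/datasetId pair in these references does not point to an existing dataset in that exact project:

// Build the load job that writes the CSV into a table of the given dataset.
var job = {
  configuration: {
    load: {
      destinationTable: {
        projectId: projectId,
        datasetId: datasetId, // must be the ID of a dataset that already exists in this project
        tableId: tableId
      },
      skipLeadingRows: 1
    }
  }
};

// Read the CSV from Drive and submit it together with the job definition.
var file = DriveApp.getFileById(csvFileId);
var data = file.getBlob().setContentType('application/octet-stream');
job = BigQuery.Jobs.insert(job, projectId, data);
Logger.log('Load job started: %s', job.jobReference.jobId);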
I'm generating an HTML table with data from a GeoJSON file using Leaflet. It works fine, but only if I do not delete the "alert" from the following code. Otherwise the data are displayed without the table. How can I solve this?
$.ajax({url:"wind.geojson"}).done(function(data) {
var data = JSON.parse(data);
L.geoJson(data,
{pointToLayer: MarkerStyle1});
});
alert();
function MarkerStyle1 (feature, latlng) {
    ...
    document.writeln("<td width='40'><div align='center'>", feature.properties.title, "</div></td>\n");
    ...
};
I do not see any way to upload a file here; it's 200 KB. It can be found here: 1. The file works well when I show the objects on a map.
2 shows an older version of the site, where the table is generated with PHP (done by a friend; I do not use PHP).
Perhaps the problem is that I do not have a "map", no "map.addLayer()", in this code?
I now realize that the problem is caused by the asynchronous behavior of AJAX. I changed the code to
$.ajax({url:"wind.geojson", async: false})
Now it works without the "alert()" line!
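In context, the adjusted call looks something like this (a minimal sketch; "wind.geojson" and MarkerStyle1 are the same as in the question):

// Synchronous request: the response is handled while the page is still being
// parsed, so the document.writeln() calls inside MarkerStyle1 end up inside
// the table instead of replacing the already-finished page.
$.ajax({url: "wind.geojson", async: false}).done(function (raw) {
    var data = JSON.parse(raw);                     // the GeoJSON arrives as text
    L.geoJson(data, {pointToLayer: MarkerStyle1});  // writes the table cells per feature
});

Note that a synchronous request blocks the page while it loads; keeping the request asynchronous and building the table inside the done callback (with DOM methods rather than document.writeln) avoids that, at the cost of restructuring the table-building code.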
I'm rather new to both Go and encoding.com, and I'm trying to use the encoding.com API wrapper to transcode a simple video file, but I'm confused about the format to use.
When looking at the tests I can see how to call the AddMedia function (https://github.com/nytimes/encoding-wrapper/blob/master/encodingcom/media_test.go#L9-L39) but unfortunately it doesn't work for me.
package main

import (
	"fmt"
	"log"

	"github.com/NYTimes/encoding-wrapper/encodingcom"
)

func main() {
	client, err := encodingcom.NewClient("https://manage.encoding.com", "123", "key")
	if err != nil {
		log.Fatal(err)
	}
	format := encodingcom.Format{
		Output:       []string{"https://key:secret#bucket.s3.amazonaws.com/aladin.ogg"},
		VideoCodec:   "libtheora",
		AudioCodec:   "libvorbis",
		Bitrate:      "400k",
		AudioBitrate: "64k",
	}
	addMediaResponse, err := client.AddMedia([]string{"https://samples.mplayerhq.hu/h264/Aladin.mpg"},
		[]encodingcom.Format{format}, "us-east-1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addMediaResponse)
}
The error "raised" is
APIError.Errors.Errors0: Output format 'https://key:secret#bucket.s3.amazonaws.com/aladin.aac' is not allowed! (format #0)
APIError.Message:
and I really don't get it. The Output element in the Format looks misplaced; am I reading the test wrong? Using the API builder, the format parameter should receive only the format, for example "ogg", and there's a separate "destination" parameter for S3. It also doesn't specify whether the URL must be URL-encoded, but honestly I don't think so; still, keys and secrets can contain, for example, the character '/'.
Are there any more experienced gophers around?
I am new to S3 and need to use it for image storage. I found a half dozen versions of an S3 wrapper for CF, but it appears that the only one set up for v4 signatures is the one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it in the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error:
So I changed line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and starting to change things is a bad idea, since I have lurked here for years and know Leigh's code is solid.
Does anyone know if there are any examples of how to use this anywhere? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE:
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
    s3 = createObject('component','com.S3v4').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
    bucket = "imgbkt.domain.com";
    obj = "fake.ping";
    region = "s3-us-west-1";
    test = s3.getObject(bucket,obj,region);
    writeDump(test);
    test2 = s3.getObjectLink(bucket,obj,region);
    writeDump(test2);
    writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj, or region, I get:
Just in case, I did go to AWS and get new keys:
Leigh, if you are still around, or anyone who has used one of the S3 wrappers: any suggestions or guidance?
UPDATE #2:
Even after Alex's help, I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never downloads an object. I thought I would try the putObject method
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there is any additional information. I received this:
I did find this article, https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html, but it is pretty old, and since S3 specifically suggests using dots in bucket names, I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't seem to figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns an HTTP URL for the file fake.ping that is found by looking in the bucket imgbkt.domain.com of region s3-us-west-1. This link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests the URL using HTTP GET. The response is then saved to the directory of the S3v4.cfc with the filename fake.ping by default. Finally, the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely when the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary reads the whole file into a buffer. Use fileOpen to stream the content.)
Does that help you?
I spent hours trying to figure out why I cannot use Mango query features. In Fauxton, I can neither add Mango indexes nor run a Mango query. For instance, in Node.js:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));
var db = new PouchDB('http://localhost:5986/books');
db.createIndex({ index: { fields: ['nom'] } })
.then(console.log)
.catch(console.log);
=> { error: 'bad_request',
reason: 'Referer header required.',
name: 'bad_request',
status: 400,
message: 'Referer header required.' }
Any clue welcome! Thanks
It looks like this plugin can only perform the search operation on a local PouchDB database, and not translate it to a remote CouchDB query.
You probably want to set up the local db like this:
var db = new PouchDB('books') (instead of the URL), and then set up replication for your documents as described here in the PouchDB docs. Your index will not be synced, however.
A side benefit is that you can always query your database even if the CouchDB server goes down.
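A minimal sketch of that setup (the database name and the remote URL are taken from the question; the sync options and the example query are only illustrative):

var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));

// Local database: pouchdb-find builds and queries its index here.
var db = new PouchDB('books');

// Keep the local copy in sync with the remote CouchDB.
// Documents are replicated; the Mango index itself is not.
db.sync('http://localhost:5986/books', { live: true, retry: true });

db.createIndex({ index: { fields: ['nom'] } })
  .then(function () {
    // Example query against the local index.
    return db.find({ selector: { nom: { $gte: null } } });
  })
  .then(console.log)
  .catch(console.log);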