Cloud Storage Transfer "Failed"

I've tried repeatedly to use the Google Developers Console tools to Create a Transfer that works, but haven't had any luck. My source is in S3.
I tried with the "S3://" URL, but when trying to accept the transfer settings, I consistently get "source bucket doesn't exist". I test my URL by placing it in a browser, and I do get it to resolve, so I don't know what's up.
Even more puzzling is when I try using a text file of URLs. These URLs are all http:// strings, and each of them properly loads in a browser. I figured this would be even more straightforward as there are no permissions to be dealt with, really, since each file in the S3 bucket already has read permissions.
Instead, all I get in the Transfer history is "Failed", with no other information at all.
At first, I was greedy and included all my files. When I got nowhere with that, I cut it down to a single file. Still no go.
Here is the text file.
Any clues, please?

It looks like your text file doesn't follow the specified format. You should add the header and size/MD5 of each file as described at https://cloud.google.com/storage/transfer/#urls
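For reference, the URL list is a tab-separated file that starts with a TsvHttpData-1.0 header line, followed by one line per file giving its URL, size in bytes, and base64-encoded MD5. A minimal sketch (all values below are made up):

TsvHttpData-1.0
http://your-bucket.s3.amazonaws.com/photo.jpg	544856	BfnRTwvHpofMOn2Pq7EVyQ==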

Related

XPages POI4Xpages download to network location

Using POI4XPages, which is great.
However, I was wondering: at present, when it creates my Word document, it simply downloads like a normal download from the internet, storing it in the Downloads folder in Windows (using Chrome, anyway).
Is there a way, using POI4XPages, to instead dump the file to a specified network location, for example a shared drive?
After that, I would simply build a link to the file using the network location and a filename variable, for example, to pick the correct file.
If that's not possible, is it possible to get a handle on the file before or after it is downloaded, and then save it to a field in the XPage?
In short, I want to avoid the user downloading the file and then having to attach it manually to the XPage.
Thanks
POI4XPages allows you to get a handle on the file via the variable "workbook". You are also able to provide the specific downloadFileName you wish to use. Using the postGenerationProcess property, you should be able to call a Java method that makes the connection to your network drive, where you can use the "workbook" variable and the downloadFileName value to save your document. If this doesn't work, definitely post a question on the project site, because the creator does reply.
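As a rough sketch of the saving step (shown as server-side JavaScript for brevity; the UNC path is a made-up example, and saving relies on Apache POI's write(OutputStream)):

// Save the generated document to a network share instead of downloading it.
var out = new java.io.FileOutputStream("\\\\myserver\\share\\" + downloadFileName);
workbook.write(out);  // "workbook" is the handle POI4XPages exposes
out.close();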

How to implement XML-safe private Amazon S3 URLs?

On my photography website, I am storing photos on Amazon S3. To actually display them on the website, I am using signed URLs. This means that image URLs expire. Only the web application itself is able to generate valid image file URLs.
An example URL would look like this:
http://media.jungledragon.com/images/1849/21346_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1411603210&Signature=9MMO3zEXECtvB0w%2FuMEN8obt1ow%3D
Note that by the time you read this, that URL may have already expired. That's ok, the question is about the format.
Whilst the above URL format works fine on the website, it breaks XML files. The reason for this is the & character, which should be escaped.
For example, I'm trying to implement Windows 8.1 live tiles for the website, which you can link to an RSS feed. My RSS feed is here:
http://www.jungledragon.com/all/rss/promoted
That feed will work in most RSS readers, however, the Windows 8 tile builder (http://www.buildmypinnedsite.com/en) is particularly strict about the XML being valid. Here you can see the error it throws on said feed:
http://notifications.buildmypinnedsite.com/?feed=http://www.jungledragon.com/all/rss/promoted&id=1
Now, my simple thinking was to encode the &s that are part of the signed URLs as &amp; or &#38;. Whilst that may make the XML valid, unfortunately S3 does not accept the & being encoded; when used like that, the image will no longer load.
I'm wondering whether I am in a circular problem that cannot be solved?
I have had many similar problems with RSS feeds. XML documents should always use &amp; (or an equivalent like &#38; or &#x26;). If a reader is not capable of extracting the URL properly, then the reader is the culprit, not you. But I can tell you that reader programmers will disagree with you.
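For example, the signed URL from above would appear in the feed escaped like this (a conforming reader unescapes &amp; back to plain & before requesting the image):

<enclosure type="image/jpeg"
  url="http://media.jungledragon.com/images/1849/21346_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&amp;Expires=1411603210&amp;Signature=9MMO3zEXECtvB0w%2FuMEN8obt1ow%3D" />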
If you are a programmer, you could fix the problem by having a redirect, but that's a bit of work. You'd retrieve the URL from S3, save it in your database, and create a URL on your website such as http://www.jungledragon.com/images/123, linking the S3 URL to your images/123 page. Now when someone goes to the page images/123, you serve up the S3 URL you saved.
Actually, if the URL http://www.jungledragon.com/images/123 is a reference to your image, you can get the S3 URL at that time and do the redirect on the fly!
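As a sketch of that on-the-fly redirect (Node/Express used purely for illustration; getSignedS3Url is a hypothetical helper that signs a fresh URL for the stored S3 key):

var express = require('express');
var app = express();

app.get('/images/:id', function (req, res) {
  // Look up the S3 object for this image id and sign a fresh, short-lived URL.
  var signedUrl = getSignedS3Url(req.params.id); // hypothetical helper
  // 302-redirect, so only the clean /images/123 URL ever appears in your XML.
  res.redirect(signedUrl);
});

app.listen(3000);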

How to use SoundManager2 to stream from SoundCloud, and make visualizations?

SoundManager2 gets a data error and I cannot visualize anything?
or
I cannot access the song, permission denied?
or
It works when I first play it, but if I pause it and play again, I get a data error?
This has recently been fixed, as the problem was partly due to only half of the needed crossdomain files being in place. Now that it is fixed, however, it still might not work right off the bat.
The obvious first step is to use the API to get the track's stream_url, which looks like http://api.soundcloud.com/tracks/69322564/stream?client_id=CLIENT_ID
If you use this as the media URL in SoundManager2, you will find that you press play, and if you have visualizations they will work, and everything is nice. However, if you now pause the track and press play again, you will get a data error: metadata will cease to be accessible and your visualizations will break. This is because api.soundcloud.com has a crossdomain.xml file, and when you access it you get a 3XX redirect to ec-media.soundcloud.com. That site now also has a crossdomain.xml file, but that pesky 3XX redirect ruins both permissions, so you hit an error.
The answer to this is you make the redirect leap first, outside of soundmanager2, so that there is no redirect that will break it. For instance in Python:
import urllib2

# The stream URL from the SoundCloud API (insert your own client_id)
starturl = 'http://api.soundcloud.com/tracks/69322564/stream?client_id=CLIENT_ID'
res = urllib2.urlopen(starturl)  # urllib2 follows the 3XX redirect for us
finalurl = res.geturl()          # the final ec-media.soundcloud.com URL
print finalurl
This could be more elegant, but it will print the URL that the API redirects to. That URL will look something like http://ec-media.soundcloud.com/2j0lNF81jv9m.128.mp3?LONG_STRING&AWSAccessKeyId=ACCESS_KEY&Expires=1355864871&Signature=SIGNATURE
This domain has the crossdomain.xml file, and due to the fact that there is no redirect, things will run smoothly, data will be accessed, all will be well.
"I did this and it worked, but now it says the file is forbidden"
Now we draw your attention to the previous URL, in particular &Expires=1355864871. The file you are redirected to is not permanent, so you need to grab it each time. For me this is easy: I work in Django, so I can simply run the Python above in my view code. You'll have to find a way to implement this in your language of choice (it should work in JavaScript too).
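For instance, a minimal Node.js sketch of the same step (CLIENT_ID again being a placeholder for your own key) can read the redirect target straight from the Location header:

var http = require('http');

var starturl = 'http://api.soundcloud.com/tracks/69322564/stream?client_id=CLIENT_ID';
http.get(starturl, function (res) {
  // For a 3XX response, the final media URL is in the Location header.
  console.log(res.headers.location);
});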
After all this is done, you should be able to pause and play as much as you want, and retrieve the waveform data, the EQ data, and the peak data. With these things, some fun things can be done. Hope this helped.

Meteor File Uploads

I see that this has been asked here before, but nothing since Meteor.http has been available. I'm still grasping the concepts of Meteor and file uploads are totally eluding me.
Here's my question:
So, in what I believe to be the right method,
Meteor.http.call("POST", url, [options], [asyncCallback]), what do you put for the url? With the client/server JavaScript relationship in Meteor, it doesn't seem like it really uses URLs that much.
If anyone has a basic example of a file upload in meteor, that would just be extra awesome.
Well, I've been playing a bit with Meteor and made CollectionFS, a mix of Meteor and GridFS (they could be compatible).
Test it here: http://collectionfs.meteor.com/
It supports quite large files, multiple files, users, etc. I've tested a 50 MB file and it seems OK; if the connection is lost or the browser dies, the user can resume the upload.
It should even be possible to have multiple users upload to the exact same file. I haven't quite found a use case for it, but it's possible.
Accounts, publishing, etc. work as with collections. The test is in autopublish mode, though only metadata is available; chunks of data are served in the background via blobs.
I'll try getting it on GitHub.
Take a look at filepicker.io. They handle the upload, store it in your S3, and return the URL that you can dump into your DB.
Wget the filepicker script into your client folder.
wget https://api.filepicker.io/v0/filepicker.js
Insert a filepicker input tag
<input type="filepicker" id="attachment">
In the startup, initialize it:
Meteor.startup(function() {
  filepicker.setKey("YOUR FILEPICKER API KEY");
  filepicker.constructWidget(document.getElementById('attachment'));
});
Attach an event handler
Template.templateNameHere.events({
  'change #attachment': function(evt) {
    console.log(evt.files);
  }
});
(I had posted on How would one handle a file upload with Meteor? Sorry. I'm new here. Is it kosher to copy the same answer twice? Anyone who knows better can feel free to edit this.)
Check out how to accomplish this using Meteor.methods on the server and the FileReader API on the client:
https://gist.github.com/dariocravero/3922137
After several searches, this looks to me like the easiest (and, for the moment, the most Meteor-style) way to handle a file upload with no extra dependencies.
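A rough sketch of that approach (the template and method names here are illustrative, not taken from the gist):

// Client: read the chosen file and ship its contents to the server over DDP.
Template.uploadForm.events({
  'change input[type=file]': function (evt) {
    var file = evt.target.files[0];
    var reader = new FileReader();
    reader.onload = function () {
      Meteor.call('saveFile', reader.result, file.name);
    };
    reader.readAsBinaryString(file);
  }
});

// Server: write the binary string to disk (the path is a made-up example).
Meteor.methods({
  saveFile: function (blob, name) {
    var fs = Npm.require('fs');
    fs.writeFileSync('/tmp/' + name, blob, 'binary');
  }
});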
Since Meteor includes jQuery by default, you can use a jQuery plugin for that, I presume; something like https://github.com/blueimp/jQuery-File-Upload/wiki/Options could do the trick for you, and it supports both GET and PUT.
Otherwise it would be a pain in the ass to get it to work, but not impossible, since you can access PUT in Meteor.
If you would prefer a more pure JS solution, maybe you can look at http://igstan.ro/posts/2009-01-11-ajax-file-upload-with-pure-javascript.html and adapt it.
There is no ready-made support for file uploads, so share what you come up with; I would be very interested!
Alternatively (if you would rather not use a third-party solution like filepicker), you could use the Meteor Router package, which handles the HTTP requests on the server side.

Hiding/changing the virtual path in classic ASP

We have a website that requires a username and password. Once logged in, the user can select a link to a PDF in the web browser. Once this has happened, they are able to see the full URL path of the PDF, so they could copy and paste the path into a different browser without logging in, or send the address to someone else to look at.
I am asking this for a co-worker, so I am not too sure what is needed, but they want to change it from, say, "documents/customerlist.pdf" to "documents/info.asp" (not sure what the file type should be; maybe just "documents/info"?). I think that is the goal. Is this possible? If someone could point me in the right direction, we might be able to figure it out!
I should think you can do this in ASP. You'll need to deliver the PDF dynamically via an ASP page that detects the user's session and only serves the data if they are suitably authenticated (so copying the URL to a different browser or machine will result in a 404 or access denied, as you wish). You'll need to read the data from the file and binary-write it to the browser, setting HTTP headers for the MIME type, content length, etc.
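A minimal sketch of such a page, in classic ASP with JScript (the session flag, file path, and header values are assumptions to adapt):

<%@ Language="JScript" %>
<%
  // Refuse to serve the file unless the user is authenticated in this session.
  if (!Session("LoggedIn")) {
    Response.Status = "403 Forbidden";
    Response.End();
  }
  // Read the PDF from a folder outside the web root; in practice you would
  // map a ?file= parameter onto a whitelist of allowed documents.
  var stream = Server.CreateObject("ADODB.Stream");
  stream.Type = 1; // adTypeBinary
  stream.Open();
  stream.LoadFromFile("D:\\docs\\customerlist.pdf");
  // Binary-write it to the browser with the right headers.
  Response.ContentType = "application/pdf";
  Response.AddHeader("Content-Disposition", "inline; filename=customerlist.pdf");
  Response.BinaryWrite(stream.Read());
  stream.Close();
%>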
I'd start off with serving it on a pdf.asp?file=customerlist URL, but you can later experiment with changing this to something more readable (docs/customerlist.php). You'll need to look into URL rewriting here.
So, that's the general approach. If you do a web-search around these topics ("ASP serve binary file", "ASP URL rewriting") you are sure to get plenty of examples.