When I attempt to upload a virtual appliance to Bluemix, my session expires and the upload fails. The appliance image is stored on my local machine, and after logging in to Bluemix I follow this process:
From the Dashboard, I select 'Run Virtual Machines', which opens the 'Create a Virtual Machine' page. Then, I select 'Upload image' in the right-hand frame. I select ISO as the image format, then click the 'Browse' button and select the appliance ISO from my local disk.
Then I click 'Upload' and the upload begins. After a period of time, during the upload, the following message appears:
'Your session has expired, click OK to refresh the page and renew it. You might be asked to login if necessary'.
At this point the upload is terminated.
How do I resolve the issue of my session expiring part-way through the upload? Is there a robust method for uploading virtual machine images to Bluemix?
Your ISO image is probably too big to upload at your network speed, so the browser session expires before the upload finishes.
I think there are two options available to you:
Open two different browser tabs/windows: use the first one to upload your image and use the second one to manually keep your session alive (simply by navigating around the Bluemix dashboard). Not really practical, especially for a very long upload...
Instead of uploading your image from your local machine, you can upload it from an external URL (the second value in the 'How to upload' combo box).
If you previously downloaded your ISO, you can simply give the Bluemix dashboard the same URL you used to download it locally; if your image exists only on your local machine, you should first put it somewhere on the web (on shared hosting, for example) and then use the URL pointing to your uploaded ISO so Bluemix can download it.
You can use the glance CLI (a set of Python scripts) to upload images. Please refer to http://docs.openstack.org/user-guide/common/cli_install_openstack_command_line_clients.html for installation instructions on the platform of your choice.
You can also check this tutorial, which gives step-by-step instructions for installing the CLI clients: https://www.mirantis.com/blog/mirantis-openstack-express-installing-openstack-cli-clients/
Though the UI calls the same APIs under the covers, you will at least avoid the overhead of the UI.
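If you would rather script the upload than type the commands by hand, here is a minimal sketch using the python-glanceclient library (the same package that backs the glance command). The auth URL, credentials, project/domain values, and file name below are placeholders for your own OpenStack/Bluemix details:

    # A minimal sketch, assuming python-glanceclient and keystoneauth1 are installed
    # and that the endpoint/credential values are replaced with your own.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from glanceclient import Client

    auth = v3.Password(
        auth_url="https://identity.example.com/v3",  # placeholder Identity endpoint
        username="myuser",
        password="mypassword",
        project_name="myproject",
        user_domain_id="default",
        project_domain_id="default",
    )
    glance = Client("2", session=session.Session(auth=auth))

    # Register the image record, then stream the ISO into it.
    image = glance.images.create(
        name="my-appliance",
        disk_format="iso",
        container_format="bare",
    )
    with open("appliance.iso", "rb") as f:
        glance.images.upload(image.id, f)
    print("Uploaded image id:", image.id)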
I am quite new to the world of remote connections, so I don't really know what is possible and what is not.
I have established a connection to a remote PC over SSH. I need a large file from this remote machine to be uploaded to a file-sender web page. One way is to simply copy the file from the remote machine to my local machine and then upload it from there, but I want to speed this task up. I am wondering if there is a (safe) way to 'browse' through, or select, files located on the remote machine when selecting files on the upload website?
For illustration, think of selecting an image for Google's search-by-image when that image is located on my remote computer. After hitting the 'select a file' button, I want to be able to pick a file from my remote computer and have it uploaded via this button. My question is not how to upload a file to a remote server.
The remote computer does not have any browser or the like installed; it is just a collection of file directories and media disk connections that I can access. (I don't have all the details, but this is all I know.) That's why opening the upload website in a browser on the remote machine, for example through a GUI such as Ubuntu's GNOME, is not an option.
Also, the upload page is not a specific URL I can upload to directly, so a solution like wget does not work either.
I have tried googling the question in my title, but this leads me to solutions like Chrome's Secure Shell. I don't completely understand what I can do with it, but it feels like it does not allow me to do what I want.
FYI, I work on Windows (using Ubuntu occasionally).
I have found the answer at http://makerlab.cs.hku.hk/index.php/en/mapping-network-drive-over-ssh-in-windows .
You need to install WinFsp and SSHFS-Win. Then, in the Windows file browser, map a new network drive with the folder \\sshfs\username#domain . I can now browse the files through the Windows file browser and can therefore select files for upload.
I am a newbie to AWS, and one of my tasks is to figure out how to download MSIs and ISOs stored in S3 through a web browser. I read that I could use the CLI behind the scenes, so if a customer clicks on a download, the app would make a request to S3 using one of the commands and that would download the file, let's say through Google Chrome or IE (please correct me if I'm wrong about the usage of the CLI).
Now, if the download stops for some reason, such as an internet failure, is there a way to resume it? How do I get a download done through a client?
Thanks in advance for helping. Unfortunately the AWS links gave me very little information, so I'm seeking help here!
Files stored in Amazon S3 can be accessed directly via a web browser, just like clicking a link on any website.
If the files are marked as publicly-accessible, anyone with the link can download the file.
If you wish to limit access to the files, your application can generate a pre-signed URL that will work for a limited time period that you specify (e.g. 5 minutes). Users can click that link to download the file within that time period.
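For example, if your application is written in Python, a pre-signed URL can be generated with the boto3 SDK roughly like this (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # URL is valid for 5 minutes (300 seconds); bucket/key are placeholders.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-installer-bucket", "Key": "downloads/tool.iso"},
        ExpiresIn=300,
    )
    print(url)  # hand this link to the user; it stops working after 5 minutes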
You can also download files using the AWS Command-Line Interface (CLI), which has copy and sync commands. This would, however, require installation of the CLI on the user's computer. It is a good fit if they regularly download files or if you wish to automate the downloads (e.g. every hour or daily).
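If the application doing the downloading is itself written in Python, the boto3 SDK (which the AWS CLI is built on) can perform the same copy programmatically; a small sketch with placeholder names:

    import boto3

    s3 = boto3.client("s3")

    # Download one object to local disk. boto3's transfer manager splits large
    # files into parts and retries failed parts, which helps with flaky networks.
    s3.download_file("my-installer-bucket", "downloads/tool.iso", "tool.iso")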
If you wish to explore AWS, sign up for an account and make use of the Free Usage Tier, which lets you try some services for no charge.
I'm developing an application that does (lots of) image processing.
The general overview of the system is:
User uploads photos to the server (raw photos, at full resolution)
Server fetches the new photos and applies image processing to them
Server resizes the images and serves those versions (delete the full-resolution original?)
My current situation is that I have almost no expertise in image hosting or in uploading and managing large amounts of data.
What I plan to do is:
User uploads directly from the browser to Amazon S3 (full image); see the sketch after this list
User notifies my server, which adds the uploaded file to the queue for my workers
When a worker receives a job, it downloads the full image (from Amazon) and processes it, updates the database, and then re-uploads the image to Cloudinary (resize on the server?)
Use the hosted image on Cloudinary from now on.
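For step 1, one common way to let the browser upload straight to S3 (only a sketch, not necessarily how you will end up doing it) is to have your server hand the browser a short-lived pre-signed POST policy, e.g. with boto3; the bucket and key names here are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # The server generates a short-lived POST policy; the browser then POSTs
    # the raw photo directly to S3 using these fields, so the file never
    # touches your own server.
    post = s3.generate_presigned_post(
        Bucket="my-raw-photos",          # placeholder bucket
        Key="uploads/photo-123.jpg",     # placeholder object key
        ExpiresIn=600,                   # policy valid for 10 minutes
    )
    print(post["url"])     # form action for the browser upload
    print(post["fields"])  # hidden form fields to send along with the file

The browser then submits a multipart form containing those fields plus the file, and notifies your server once S3 has accepted it (your step 2).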
My doubts are about processing time. I don't want to upload directly to my server, because that would require a lot of traffic and create a bottleneck, so using Amazon S3 would reduce that. And hosting the images on Amazon would not be that good, since they don't provide specific APIs for dealing with images the way Cloudinary does.
Is it OK to work with a separate service for uploading and only trigger my own server once the browser has finished the upload? Does using Cloudinary to host the images also make sense? And should uploading directly to my own server be avoided in favour of sending to Amazon?
(This is more of a guidance/design question.)
Why wouldn't you prefer uploading directly to Cloudinary?
The image can be uploaded directly from the browser to your Cloudinary account, without any further servers involved. Cloudinary then notifies you about the uploaded image and its details, and you can then perform all the image processing in the cloud via Cloudinary. You can either manipulate the image while keeping the original, or you may choose to replace the original with the manipulated one.
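As a rough illustration of the server-side part with the Cloudinary Python SDK (the credentials, public ID, and transformation below are made up; a direct browser upload would instead use an unsigned upload preset):

    import cloudinary
    import cloudinary.uploader

    # Placeholder credentials from your Cloudinary dashboard.
    cloudinary.config(
        cloud_name="my-cloud",
        api_key="1234567890",
        api_secret="my-secret",
    )

    # Upload from a local file or a remote URL, keep the original, and ask
    # Cloudinary to eagerly pre-generate a resized version.
    result = cloudinary.uploader.upload(
        "raw/photo-123.jpg",                       # local path or remote URL
        public_id="photos/photo-123",              # placeholder public ID
        eager=[{"width": 1024, "crop": "limit"}],  # example resize transformation
    )
    print(result["secure_url"])              # URL of the stored original
    print(result["eager"][0]["secure_url"])  # URL of the resized derivative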
At present, I start up Red5 on the Linux command line with ./red5.sh and it runs the script. Then I go to the demos page at http://localhost:5080 to set up my camera and audio input, and everything works fine when testing the stream both on the demo page and in the SWF on my web page.
The question is, do I need to include some Java and/or ActionScript for the SWF player to bypass the Red5 demo page, so that I can connect my input and stream directly in the code of the page? And so that only logged-in viewers of my web page can connect?
Overall, I am wondering whether there is a way to hide the server stream from anyone not logged in to view it on my site. I understand there is a hosts/IP list somewhere in the webapps folder, but it would be impossible to know the IPs of legitimate viewers as opposed to unwanted viewers or bandwidth stealers.
I am trying to set up a site for poetry readings, so that readers can record live to my server and logged-in viewers can then watch from my website. I am trying to figure out whether I must keep that Red5 page open and whether doing so poses some kind of risk.
I found my own way of doing this, just by removing and renaming files and folders.
If you go to /usr/local/red5/webapps, that is where all the directories served on the default port 5080 live. So I simply installed the applications I needed and then took everything out of there except the applications I wanted and needed to run. I moved everything else into a folder in the /var directory named red5_movedstuff, in case I want access to further applications later on. Then I renamed the applications I am using in the webapps folder and kept the admin folder to access them; importantly, for each application I renamed I also had to make the same name change in its WEB-INF.
Now if someone goes to myip:5080 they get a blank page, and by changing the names of the applications I've hidden my directories beyond that, including the list of streams.
I have imported a website using the Import feature of Expression Web 4. It works great, but the pictures are remote URL (http) links. Is there any way to force them to be downloaded and to have the URL of each img point to the locally saved image?
Or is there a feature where I can right-click on an img and force it to be downloaded locally in Expression Web 4?
I can't seem to figure it out.
I thought Expression Web does this automatically upon saving the HTML file?
Otherwise, try downloading the website completely offline (e.g. with HTTrack) and work with this version from within Expression Web.