How to switch off local storage on ownCloud completely

I am using ownCloud for several users. So far all the data was stored on a machine drive. I want to store all the data on Amazon S3 (and none on the local machine).
How can I switch off local storage and make Amazon S3 folder default?
The search did not return any meaningful results other than instructions on how to simply add S3 as external storage.

The official ownCloud documentation provides the relevant configuration here:
https://doc.owncloud.com/server/9.1/admin_manual/enterprise_external_storage/s3_swift_as_primary_object_store_configuration.html#configuration
Unfortunately, I run the Community Edition, where S3 as primary storage is not supported, so I haven't yet had a chance to test those instructions.
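For future readers: what the linked Enterprise documentation describes is an 'objectstore' entry in config.php, roughly along these lines. This is only a sketch; the exact class name and argument keys vary between ownCloud versions, and the bucket name and credentials below are placeholders:

    'objectstore' => [
        'class' => 'OC\\Files\\ObjectStore\\S3',
        'arguments' => [
            'bucket'     => 'owncloud-data',    // placeholder bucket name
            'autocreate' => true,               // create the bucket if it is missing
            'key'        => 'AWS_ACCESS_KEY',   // placeholder credentials
            'secret'     => 'AWS_SECRET_KEY',
            'hostname'   => 's3.amazonaws.com',
            'use_ssl'    => true,
            'region'     => 'us-east-1',
        ],
    ],

With an object store configured as primary storage, file contents then live in the bucket rather than on the local disk; only metadata stays in the database.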

Related

Shared storage mount on AWS Windows EC2?

I'm asking about options for mounting shared storage on Windows Server 2016 or higher.
I found a lot about AWS EFS, EBS, and S3.
My problem is installing a piece of software's data on a shared volume.
EBS is just for local mounting; can I map it to other servers?
EFS is not for Windows but is my favourite choice; are there solutions to mount it as a volume in Windows, or should I mount it on Linux first?
Is S3 mountable so it can be used like a file system?
What can I use, or what is the best solution for shared data storage using cloud technologies?
EBS is just for local mounting; can I map it to other servers?
Indeed, you can mount an EBS volume only to a single EC2 instance. However, you can expose the mounted EBS volume as an NFS share from that instance.
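A minimal sketch, assuming a Linux instance with the EBS volume mounted at /data (the IP address and network range are placeholders):

    # /etc/exports on the instance holding the EBS volume
    /data 10.0.0.0/16(rw,sync,no_subtree_check)

    # reload the export table
    sudo exportfs -ra

    # on another Linux instance, mount the share
    sudo mount -t nfs 10.0.0.5:/data /mnt/data

Windows instances would need NFS client support, or you could export over Samba instead.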
EFS is not for Windows but is my favourite choice; are there solutions to mount it as a volume in Windows, or should I mount it on Linux first?
I don't think it matters. (If I'm mistaken, please correct me.)
Edit: I stand corrected; it seems mounting EFS on Windows (e.g. Windows Server 2012) doesn't work properly. You could mount the EFS on a Linux server and expose it as NFS.
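For reference, EFS speaks NFSv4.1, so mounting it on a Linux instance is a one-liner (the filesystem ID and region below are placeholders):

    sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

From there you could re-export /mnt/efs over Samba for the Windows machines.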
Is S3 mountable so it can be used like a file system?
S3 is not intended to be mounted as a filesystem, and it's not mountable by default. If needed, there are third-party tools to do that (e.g. s3fs-fuse on Linux), but IMHO it's not the most efficient approach.
There are also solutions from AWS (AWS Storage Gateway) for migrating on-premises content to S3 and back.
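If you do want to experiment with s3fs, a minimal mount looks roughly like this (the bucket name and mount point are placeholders):

    # store credentials as ACCESS_KEY:SECRET_KEY and protect the file
    echo 'AKIAEXAMPLE:SECRETEXAMPLE' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # mount the bucket
    s3fs my-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs

Just keep the caveats below in mind.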
I'd consider mounting S3 as a filesystem only when:
the files are created or read, not updated (S3 doesn't support updating part of an object)
the objects (files) are to be shared with other services or over the internet

How to access Amazon S3 backup without Jungle Disk

I've been backing up my Mac to the Amazon S3 cloud using Jungle Disk. Now that Mac is dead. Fine, my backups are on the cloud. So, I go to my other Mac and download Jungle Disk. It is a workgroup version of the software. When I run it it wants me to verify that I purchased the software. Well, when I first set up the Jungle Disk client some years ago there was a free client. I'd rather not pay for this unless there's no good alternative.
Next, I log in to my Amazon S3 console. I have a bunch of buckets there that are impossible to navigate.
So, I google around for S3 browsers and find Cyberduck. I download and install that. When I run it it wants a server URL. At this point I'm stuck.
Is there a client that knows about the structure of backups in S3 that I can install on this other Mac to get to my backed up data?
After a couple of conversations with Jungle Disk support I was given this (undocumented) url:
https://downloads.jungledisk.com/jungledisk/JungleDiskDesktop3160.dmg
I've downloaded and installed the client, didn't have to pay anything, and I've gotten to my backed up data. Whew!
Sol got his stuff fixed. Sharing additional background for future readers: Jungle Disk uses the WebDAV standard to allow access through our web service layer. Depending on the version of Jungle Disk you're running, we have a few different URLs you'll authenticate to. Ping our team at support.jungledisk.com and we'll get you set up.

Amazon S3 WebDAV access

I would like to access my Amazon S3 buckets without third-party software, but simply through the WebDAV functionality available in most operating systems. Is there a way to do that? It is important to me that no third-party software is required.
There are a number of ways to do this. I'm not sure about your situation, so here they are:
Option 1 (easiest): Use a third-party "cloud gateway" provider, like http://storagemadeeasy.com/CloudDav/
Option 2: Set up your own "cloud gateway" server
Set up a dedicated server or virtual server to act as a gateway. Using Amazon's own EC2 would be a good choice.
Set up software that mounts S3 as a drive. Two I know of on Windows: (1) CloudBerry Drive http://www.cloudberrylab.com/ and (2) WebDrive (http://webdrive.com). For Linux, I have never done it, but you can try: https://github.com/s3fs-fuse/s3fs-fuse
Set up a WebDAV server like CrushFTP. (It comes to mind because it's stable, cheap, and works on any OS.) Another option is IIS, but I personally find it harder to set up securely for WebDAV.
Set up a user in your WebDAV server (i.e. CrushFTP or IIS) with access to the mapped S3 drive.
Possible snag: Assuming you're using Windows, to start your services automatically and have this work, you may need to set up both services to use the same Windows user account (Services->(Your Service)->[right-click]Properties->Log On tab). This is because the S3 mapping software might not map the S3 drive for all Windows users. Alternatively, you can use FireDaemon if you get stuck on this step to start the programs as a service all under the same username.
Other notes: I have experience using WebDrive under pretty heavy loads, and it seems to work well. Under tons of pounding (I'm talking thousands of files per hour being added to a 5 TB WebDrive) it started to crash Windows. But I'm not sure if you are going that far with it. Also, if you're using EC2, you may not have that issue since it was likely caused by a huge transfer queue in memory and EC2 will have faster transit to S3 and keep the queue smaller.
I finally gave up on this idea and today I use Rclone (https://rclone.org) to synchronize my files between AWS S3 and different computers. Rclone has the ability to mount remote storage on a local computer, but I don't use this feature. I simply use the copy and sync commands.
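Typical usage looks like this (the remote name 's3remote' is whatever you configured with rclone config; the bucket and paths are placeholders):

    # copy new/changed files from the local directory to the bucket
    rclone copy ~/Documents s3remote:my-bucket/Documents

    # make the bucket path mirror the local directory (deletes extraneous remote files)
    rclone sync ~/Documents s3remote:my-bucket/Documents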
S3 does not support WebDAV, so you're out of luck!
Also, S3 does not support hierarchical namespaces, so you can't directly map a filesystem onto it.
There is an example Java project for putting a WebDAV server over Amazon S3 here: https://github.com/miltonio/milton-aws

Using Amazon S3 along with Amazon RDS

I'm trying to host a database on Amazon RDS, while the actual content the database stores info about (videos) will be hosted on Amazon S3. I have some questions about this process I was hoping someone could help me with.
Can a database hosted on Amazon RDS interact (Search, update) something on Amazon S3? So if I have a database on Amazon RDS, and run a delete command to remove a specific video, is it possible to have that command remove the video on S3? Also, is there a tutorial on how to make the two mediums interact?
Thanks very much!
You will need an intermediary scripting language to maintain this process. For instance, if you're building a web-based application that stores videos on S3 and their info (including their S3 locations) on RDS, you could write a PHP application (hosted on an EC2 instance, or elsewhere outside of Amazon's cloud) that connects to the MySQL database on RDS, runs the appropriate queries, and then interacts with Amazon S3 to complete the corresponding task there (e.g. delete a video, as you stated).
To do this you would use the AWS SDK; for PHP, see http://aws.amazon.com/php/
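As a rough sketch of the delete flow described above, using the AWS SDK for PHP (v3) and PDO; the table, column, bucket, and host names are made up for illustration:

    <?php
    require 'vendor/autoload.php'; // AWS SDK for PHP, installed via Composer

    use Aws\S3\S3Client;

    $videoId = 42; // example: id of the video to delete

    // 1. Look up the video's S3 key in the RDS (MySQL) database
    $db   = new PDO('mysql:host=mydb.example.rds.amazonaws.com;dbname=videos', 'user', 'pass');
    $stmt = $db->prepare('SELECT s3_key FROM videos WHERE id = ?');
    $stmt->execute([$videoId]);
    $s3Key = $stmt->fetchColumn();

    // 2. Delete the object from S3
    $s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
    $s3->deleteObject(['Bucket' => 'my-video-bucket', 'Key' => $s3Key]);

    // 3. Remove the database row
    $db->prepare('DELETE FROM videos WHERE id = ?')->execute([$videoId]);

The point being: RDS never talks to S3 itself; your application code does both halves of the delete.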
You can use Java, Ruby, Python, .NET/Windows, and mobile SDKs to do these various tasks on S3, as well as control other areas of AWS if you use them.
Alternatively, you can find third-party scripts that do what you want and build an application around them; for example, someone may have written a simpler S3 interaction class you could use instead of writing your own.
For a couple of command-line applications I've built, I have used this handy and free tool: http://s3tools.org/s3cmd, which is basically a command-line tool for interacting with S3. Very useful for bash scripts.
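Typical s3cmd usage, with a placeholder bucket name:

    # upload a file
    s3cmd put video.mp4 s3://my-bucket/videos/video.mp4

    # delete an object
    s3cmd del s3://my-bucket/videos/video.mp4

    # mirror a local directory into the bucket
    s3cmd sync ./output/ s3://my-bucket/output/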
Tyler

Amazon EC2 Windows AMI with shared S3 storage

I've currently got a base Windows 2008 Server AMI that I created on Amazon EC2. I use it to create 20-30 EBS-based EC2 instances at a time for processing large amounts of data into PDFs for a client. However, once the data processing is complete, I have to manually connect to each machine and copy off the files. This takes a lot of time and effort, and so I'm trying to figure out the best way to use S3 as a centralised storage for the outputted PDF files.
I've seen a number of third party (commercial) utilities that can map S3 buckets to drives within Windows, but is there a better, more sensible way to achieve what I want? Having not used S3 before, only EC2, I'm not sure of what options are available, and I've not been able to find anything online addressing the issue of using S3 as centralised storage for multiple EC2 Windows instances.
Update: Thanks for the suggestions of command-line tools for using S3. I was hoping for something a little more integrated and less ad hoc. Seeing as EC2 is closely related to S3 (S3 used to be the default storage mechanism for AMIs, etc.), I thought there might be something neater/easier I could do. Perhaps even around private cloud networks and EC2-backed S3 servers, or something (an area I know nothing about). Any other ideas?
I'd probably look for a command-line tool. A quick search on Google led me to a .NET tool:
http://s3.codeplex.com/
And a Java one:
http://www.beaconhill.com/opensource/s3cp.html
I'm sure there are others out there as well.
You could use an EC2 instance with an EBS volume exported through Samba, which could act as centralized storage that the Windows instances can map.
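A minimal sketch of the smb.conf share on that gateway instance (the path, user, and network range are placeholders):

    [pdfout]
        path = /data/pdfout
        read only = no
        valid users = pdfuser
        hosts allow = 10.0.0.0/16

Each Windows instance could then map \\gateway\pdfout as a network drive and write its PDFs there.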
This sounds very much like a Hadoop/Amazon Elastic MapReduce job to me. Unfortunately, Hadoop is best deployed on Linux:
Hadoop on Windows Server
I assume the software you use for PDF processing is Windows-only?
If this is not the case, I'd seriously consider porting your solution to Linux.