I am trying to move my moodledata folder content into Amazon S3, but I didn't find any documentation (or guide) for configuring this setup.
I am using the Moodle 3.3 STABLE build.
Can anyone help me set this up?
You could use s3fs and mount it on your webserver.
I suggest using local directories (for performance) for:
cache, localcache and sessions
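As a minimal sketch (the bucket name and credentials file path are placeholders), the mount could look something like this, with cache, localcache and sessions kept on local disk rather than on the mounted bucket:

sudo mkdir -p /var/moodledata
sudo s3fs my-moodledata-bucket /var/moodledata -o passwd_file=/etc/passwd-s3fs -o allow_other
# point Moodle's dataroot at /var/moodledata, but keep cache, localcache and
# session directories on a local filesystem, not on the s3fs mount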
I have migrated the Moodle data directory to Amazon S3. Now I am trying to access all the files from the S3 storage using the moodle-tool_objectfs plugin.
I'm attaching a screenshot of my settings. I am trying to serve all the media files from Amazon S3 instead of from the server file system: for example, the site logo, course materials in PDF format, etc.
Thanks for the shout-out, Russell!
It sounds like you have manually migrated the content to S3 rather than relying on this plugin to do the work for you. I'd guess that your manual migration has put the files into a structure/path that the plugin isn't expecting, especially if you have copied your complete moodledata folder into S3 and not just the uploaded user files. (The tool_objectfs plugin does not replace the need for a normal moodledata directory; it just allows the majority of your files to be stored in S3.)
Usually you would have a Moodle site set up with a normal moodledata directory, and then you would install our tool_objectfs plugin, which would migrate files from moodledata to your S3 storage, relying on the plugin to perform the migration for you.
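For reference, the usual flow is roughly the sketch below; the repository URL and the scheduled task class name are assumptions based on how the plugin is commonly installed, so check the plugin's README for the exact names.

# hypothetical install and migration sketch; repo URL, paths and task name are assumptions
cd /var/www/moodle
git clone https://github.com/catalyst/moodle-tool_objectfs admin/tool/objectfs
php admin/cli/upgrade.php --non-interactive
# after configuring the S3 credentials in the plugin settings, the migration is
# normally driven by the plugin's scheduled tasks, e.g.:
php admin/cli/scheduled_task.php --execute='\tool_objectfs\task\push_objects_to_storage'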
As per the official documentation, I have created the eventual directory and the symbolic links (_add pointing to filestore & _pre pointing to _pre) within it. The automatic migration does not happen. I am using the Docker container of Artifactory Pro version 6.23.13. I waited overnight for the migration to happen, but it didn't. Also, Artifactory was serving only 4 artifacts.
Answering my own question: I had initially created the eventual directory and links in the path /var/opt/jfrog/artifactory, which is the home directory for the Docker container. It seems there is another path within the container, /opt/jfrog/artifactory, and creating the directory and links in that path did the trick.
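For anyone else hitting this, the fix boiled down to recreating the same directory and links under the path the container actually uses. A rough sketch is below; the container name and the exact subpaths are assumptions based on my setup, so adjust them to your own layout.

# sketch only: container name and subpaths are assumptions
docker exec -it artifactory bash
mkdir -p /opt/jfrog/artifactory/eventual
ln -s /opt/jfrog/artifactory/data/filestore /opt/jfrog/artifactory/eventual/_add
# create the remaining links the same way, then restart the container
exit
docker restart artifactory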
I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init does.
So is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
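As a rough sketch (the bucket, file names and package manager are placeholders), the shell provisioner script could look something like this, with iam_instance_profile set on the builder to a role that allows reading from the bucket:

# runs on the temporary build instance via a shell provisioner
sudo yum install -y awscli   # or apt-get install -y awscli, depending on the base AMI
aws s3 cp s3://my-setup-bucket/installer.sh /tmp/installer.sh
chmod +x /tmp/installer.sh
sudo /tmp/installer.sh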
I can envisage a case where this is ineffective.
When building the image on AWS you use your local credentials, but while the image is being built the temporary instance runs as a Packer-created user, not as you, so it doesn't have your credentials and can't access the S3 bucket (if it's private).
Option one: https://github.com/enmand/packer-provisioner-s3
Option two: use the shell-local provisioner to pull the S3 files down to your machine with aws s3 cp, then the file provisioner to upload them to the correct folder in the builder image; you can then use a remote shell provisioner to do any other work on the files (see the sketch after this list). I chose this because, although it's more code, it is more universal when I share my build: others have no need to install anything extra.
Option three: wait. There is an enhancement discussed on the Packer GitHub in 2019 to offer an S3 passthrough using local credentials, but it isn't on the official roadmap.
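For option two, roughly (bucket and paths are placeholders): the first command runs on your own machine via the shell-local provisioner, a file provisioner then pushes the result into the builder instance, and a remote shell provisioner finishes the job.

# shell-local provisioner: runs locally with your own credentials
aws s3 cp s3://my-setup-bucket/setup.tar.gz ./setup.tar.gz
# file provisioner: uploads ./setup.tar.gz to /tmp/setup.tar.gz on the builder instance
# remote shell provisioner: runs on the builder instance
tar -xzf /tmp/setup.tar.gz -C /tmp
sudo /tmp/setup/install.sh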
Assuming the awscli is already installed on the EC2 instance, use the sample command below in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp
I'm currently trying to figure out how to back up some data to S3.
We have a local backup system implemented using rsnapshot and that works perfectly. We're trying to use s3cmd's sync command to mimic rsync to transfer the files.
The problem we're having is that symlinks aren't uploaded as symlinks; they seem to be resolved to the physical file, which gets uploaded instead. Does anyone have any suggestions as to why this happens?
Am I missing something obvious? Or is it that S3 just isn't suited to this sort of operation? I could set up an EC2 instance and attach some EBS, but it'd be preferable to use S3.
Amazon S3 itself doesn't have the concept of symlinks, which is why I suspect s3cmd uploads the physical file. It's a limitation of S3, not s3cmd.
I'm assuming that you need the symlink itself copied, though? If that's the case, can you gzip/tar your directory with the symlinks and upload that?
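For example (paths and bucket name are placeholders), tar keeps symlinks as symlinks by default:

tar -czf backup.tar.gz /path/to/backup
s3cmd put backup.tar.gz s3://my-backup-bucket/backup.tar.gz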
There are no symlinks on S3, but what you can use is s3fs (hosted on Google Code), which creates a FUSE-based file system on top of S3. More information here:
https://code.google.com/p/s3fs/wiki/FuseOverAmazon
and here:
http://tjstein.com/articles/mounting-s3-buckets-using-fuse/
I hope it helps.
Try using the -F, --follow-symlinks option when using sync. This worked for me.
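For example (local path and bucket are placeholders):

s3cmd sync --follow-symlinks /path/to/local/dir/ s3://my-backup-bucket/dir/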
I have a problem with viewing video from my bucket on S3.
I'm using an EC2 instance, and the bucket is mounted as a folder via s3fs. When I try to load a big file there is a pause before the download starts. During this pause, I can see the file being downloaded (cached) to the EC2 instance. Once it has been cached, the file starts downloading in the browser.
I tried to configure s3fs to disable the cache, but the option -o use_cache="" doesn't work. I also tried s3fslite, but it too caches files before sending them to the user.
How do I disable caching? Or maybe there is a faster solution that would let me use an S3 bucket like a folder on EC2?
You don't need to download the files; either serve them directly from S3, or use CloudFront.
If you are trying to control access to the files, use signed URLs, which give the user a certain amount of time to access the file before the link expires.
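For example, with the AWS CLI (bucket and key are placeholders), this generates a link valid for one hour:

aws s3 presign s3://my-video-bucket/videos/sample.mp4 --expires-in 3600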