Images uploaded through ActiveStorage disappear after Dokku deployment - ruby-on-rails-5

I successfully deployed my Rails application to my DigitalOcean droplet through Dokku. After deploying it, I started uploading images to my site. After pushing a new version and redeploying the app, the uploaded images disappeared.
Now, I've already read that Dokku uses ephemeral storage. I've tried following a guide to set up persistent storage, but without success.
This is the command that I tried:
dokku storage:mount underlords /var/lib/dokku/data/storage:/storage
After redeployment, it still didn't work.

If you are using persistent storage, note that the second path is an absolute path within your app container. It is not relative to the /app directory, but relative to the root path. This means that you should be saving your files to /storage and not /app/storage.
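A minimal sketch of the full persistent-storage setup, assuming the app is named underlords as in the question (the host directory layout and the 32767 uid/gid follow Dokku's storage plugin conventions; verify against your Dokku version's docs):

```shell
# Create a host directory for this app's uploads and hand ownership
# to the in-container herokuish user (uid/gid 32767)
sudo mkdir -p /var/lib/dokku/data/storage/underlords
sudo chown -R 32767:32767 /var/lib/dokku/data/storage/underlords

# Mount it at /storage inside the container. The container-side path is
# absolute (/storage), NOT relative to /app
dokku storage:mount underlords /var/lib/dokku/data/storage/underlords:/storage

# Mounts only take effect on the next deploy/rebuild
dokku ps:rebuild underlords
```

ActiveStorage's disk service then has to write into the mounted path, e.g. `root: "/storage"` in config/storage.yml instead of the default `Rails.root.join("storage")`; otherwise uploads still land on the ephemeral filesystem.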

Related

Artifactory Automatic Migration to S3 from local Storage does not happen

As per the official documentation, I have created the eventual directory and the symbolic links (_add pointing to filestore & _pre pointing to _pre) within it, but the automatic migration does not happen. I am using the Docker container of Artifactory Pro version 6.23.13. I waited overnight for the migration to happen, but it didn't. Artifactory was also serving only 4 artifacts.
Answering my own question: I had initially created the eventual directory and links under /var/opt/jfrog/artifactory, which is the home path for the Docker container. It turns out there is another path inside the container, /opt/jfrog/artifactory, and creating the directory and links under that path did the trick.

Cloudfront caching react website pages despite using file versioning

To explain the problem: I have a static website hosted on S3 with CloudFront as the CDN. I used create-react-app (CRA) to build the React package for my website. CRA versions the webpack build files by default, and the versioned filenames are visible in the S3 bucket as well.
Still, when I do a deployment, the latest changes don't show up (I have even waited a day hoping they would). I am not sure what is causing this issue. Can anyone please help?
I have added screenshots of my CloudFront Behaviors tab and of the S3 bucket files showing the build versions.
P.S. If this is a case of browser caching, how can I disable it so that my clients always see the latest version of my website?
You have to invalidate the cache in the distribution's settings tab. I usually invalidate everything by passing /*, but you can also specify a folder or a single file to clear the cache for,
for example: /index.html
In your CI/CD pipeline you can also have the deploy agent invalidate the cache by passing the distribution ID and path.
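For reference, the CLI form of the same invalidation, as a deploy script might run it (the distribution ID below is a placeholder):

```shell
# Invalidate every cached object in the distribution.
# A wildcard path like /* counts as a single path for invalidation pricing.
aws cloudfront create-invalidation \
  --distribution-id E1234EXAMPLEID \
  --paths "/*"
```

The command returns an invalidation ID; the invalidation itself completes asynchronously, typically within a few minutes.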

Asp.net core 2.0 site running on LightSail Ubuntu can't access AWS S3

I have a website that I've built with ASP.NET Core 2.0. The website gets a list of files sitting in my AWS S3 bucket and displays them as links for authenticated users. When I run the website locally I have no issues and am able to access S3 to generate pre-signed URLs. When I deploy the web app to Lightsail Ubuntu, I get this incredibly useful error message: AmazonS3Exception: Access Denied.
At first I thought it was a region issue, so I changed my S3 buckets to use the same region as my Lightsail Ubuntu instance (East #2). Then I thought it might be a CORS issue and made sure that my buckets allowed CORS.
I'm kinda stuck at the moment.
I had exactly the same issue, and I solved it by creating environment variables. To add environment variables permanently in Ubuntu, open the environment file with the following command:
sudo vi /etc/environment
then add your credentials like this:
AWS_ACCESS_KEY_ID=YOUR_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
Then save the environment file and restart your ASP.NET Core app.
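One caveat, assuming the site runs as a systemd service rather than from a login shell (the service name mysite below is hypothetical): /etc/environment is applied to login sessions, so a service may need the variables set in its unit instead:

```shell
# Create a drop-in override for the service holding the AWS credentials
sudo systemctl edit mysite
# In the editor that opens, add:
#   [Service]
#   Environment=AWS_ACCESS_KEY_ID=YOUR_KEY_ID
#   Environment=AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY

# Apply the override and restart the app
sudo systemctl daemon-reload
sudo systemctl restart mysite
```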

Google Compute Replacing var/www/html Directory

I've launched Wordpress on Google Compute Engine (via their automated launcher process). It installs quickly and easily, and by visiting the external IP displayed in my Compute Engine VM Instances dashboard I am able to access the fresh installation of Wordpress.
However, when I scp an existing Wordpress installation oldWPsite into /var/www/ and then replace my html directory
mv html htmlFRESH
mv oldWPsite html
my site returns a 'failed to open' error. The user:group directory permissions are identical.
Moreover, when I return the directories to their original configuration
mv html oldWPsite
mv htmlFRESH html
Still, the error persists.
I am familiar with other hosting paradigms where I can easily switch between the publicly served files by simply modifying directory names. Is there something unique about Google Compute Engine? What is the best way to import existing sites, files, etc into the Google Cloud environment?
Replicate
Install Wordpress via Google Launcher on a micro-VM.
Visit public IP of the VM instance.
SCP an existing Wordpress installation to /var/www.
Replace the Google-installed html directory with the newly copied Wordpress directory using mv commands.
Visit public IP of the VM instance.
===
Referenced Questions:
after replacing /var/www/html directory, apache does not work anymore
permission for var/www/html directory - a2enmod command unrecognized on new G-compute VM
The imported .htaccess file contained an HTTPS redirect, which caused the server to fail since HTTPS is not set up in a fresh launch of Wordpress through GCE. Compounding the issue, the browser cache held onto that redirect even after the previous site was moved back to the initial configuration.
Per usual, the solution involved the investigation of user errors.
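The offending redirect can be spotted (and temporarily disabled) from the shell; the path below assumes the imported site now sits at /var/www/html:

```shell
# Show any forced-HTTPS rewrite rules in the imported .htaccess
grep -n -i 'https' /var/www/html/.htaccess

# Comment out the matching RewriteCond/RewriteRule lines until HTTPS is
# configured on the new VM, then reload Apache
sudo systemctl reload apache2
```

Since browsers cache 301 redirects, a hard refresh (or private window) is also needed to confirm the fix.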

Rails 4 images in public folder are not loading on Apache development

I am new to Rails. I am working on a sample application for social networking. I have managed to upload users' profile pictures manually (by copying the uploaded image from /tmp/image to the public folder, public/images/tmp/image) and saved the path to the db as avatar_url.
In the profile view I used
<%= image_tag(@userinfo.avatar_url, :alt => "Avatar image") %>
and the picture shows up when running on the Rails server.
But after that I deployed the app on Apache with Passenger in the development environment by setting RailsEnv development. Since then the images are not loading. I tried going to myip:80/public/images/tmp/image, and it gives a Routing Error.
After searching the web, I found that adding config.serve_static_assets = true in production.rb solves this problem in production, but that's no use to me, because static files are reportedly served in development by default. To confirm the problem, I started the Rails server again and opened localhost:3000/profile: the image is there, but it does not appear at myip:80/profile.
So do I need to add any other configuration, or am I not supposed to do it this way?
Finally, I got the solution for my problem. Just sharing here.
The problem was actually a permission issue. The picture is created in a root-owned temp directory on form submission, and I then copied the image from the temp folder to the public folder, so the copy kept the temp file's restrictive permissions. After deployment, requests for the image returned a 403 Forbidden error.
I used,
FileUtils.chmod 0775, target  # note the leading 0: the mode must be octal, not decimal 775
to set the permission. After that it worked well.
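The same fix can be reproduced from the shell on a scratch file (the 600 starting mode simulates the restrictive permissions the copied upload ended up with):

```shell
# Create an empty file readable only by its owner, as the copied upload was
install -m 600 /dev/null avatar.png

# Open it up so Apache's worker user can read it
chmod 664 avatar.png

# Confirm the resulting mode
stat -c '%a' avatar.png   # prints: 664
```

Directories on the path to the file also need the execute bit (e.g. 755 or 775) so the web server can traverse them.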
The option config.serve_static_assets = true tells rails to serve the static assets for your application, but that job should really be left to Apache.
Your issue sounds more related to your Apache configuration than rails.
I would take a look at a tutorial on how to configure Apache and Passenger to make sure your environment is setup correctly.
Anything in the public folder should be served by the web server. However, myip:80/public/images/tmp/image is not a valid path: public is not part of the URL (files under public/ are served from the site root), and you would also need a filename with an extension at the end.