We have a CMS application written in Ruby on Rails 3 that uses Paperclip to handle file uploads and ImageMagick for image manipulation. It has been working great, and we are very happy with it.
For a new customer we want to make a deployment where the application server is in a public network and the uploaded content plus the database are stored on a secured machine in their internal environment.
We have 2 main tracks right now:
Split the application we have right now into two applications:
A media asset application (to be developed by us) to handle all of the uploaded files through a REST-based API. Images will be represented by a GUID, and we would add functions so that the images can be scaled and cropped.
Another application containing most of the current application server, excluding the image scaling and storage parts. When an uploaded image is requested, it would act as an adapter towards the media asset application to fetch the files in the correct sizes and layouts (a hypothetical adapter sketch follows this option's pros and cons).
Pros:
We have control over what's happening
Might be a cool application in itself
Cons:
A project that might grow and be very complex
Need to make a huge change in our current application
Need to run several Rails applications locally while developing
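To make the adapter idea a bit more concrete, a hypothetical (deliberately minimal) controller in the main application might look something like this; the media service URL, the /assets/:guid route and the width/height parameters are only assumptions about an API we have not designed yet:

```ruby
# Hypothetical sketch of the adapter idea: the CMS stores only a GUID per image and
# forwards size/crop requests to the internal media asset application.
# MEDIA_ASSET_URL, the /assets/:guid route and the width/height parameters are all
# assumptions about an API that does not exist yet.
require 'net/http'

class MediaProxyController < ApplicationController
  def show
    uri = URI("#{ENV['MEDIA_ASSET_URL']}/assets/#{params[:guid]}")
    uri.query = { width: params[:width], height: params[:height] }.to_query

    response = Net::HTTP.get_response(uri)   # ask the media app for the rendered image
    send_data response.body,
              type: response['Content-Type'],
              disposition: 'inline'
  end
end
```

The media asset application would then own the GUID-to-file mapping and all of the ImageMagick scaling and cropping.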
Make an OpenStack installation:
The other option is to set up an OpenStack installation and configure Paperclip to use it. If we have understood correctly, OpenStack's object storage exposes an API similar to Amazon S3, and it should be possible to configure Paperclip against a custom installation (a rough configuration sketch follows the pros and cons below).
Pros:
Paperclip and our setup will not be affected that much
Cons:
Will not be simple to run a local installation
Might be difficult to set up OpenStack
We have very little knowledge of the product if it were to fail
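For this option, a rough sketch of what pointing Paperclip at an OpenStack (Swift) object store through fog's storage backend might look like; the credentials, auth URL and container name are placeholders:

```ruby
# Rough sketch of a Paperclip attachment stored in OpenStack Swift via the fog gem.
# Credentials, the auth URL and the container name are placeholders, not a known setup.
class Image < ActiveRecord::Base
  has_attached_file :file,
    storage: :fog,
    fog_credentials: {
      provider:           'OpenStack',
      openstack_auth_url: 'https://storage.internal.example:5000/v2.0/tokens',
      openstack_username: ENV['OS_USERNAME'],
      openstack_api_key:  ENV['OS_PASSWORD']
    },
    fog_directory: 'cms-uploads'   # the Swift container holding the uploads
end
```

This keeps the model code essentially unchanged compared to local storage, which is the main attraction of this option.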
Any ideas, thoughts, experiences?
I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run, but after a few minutes the Odoo interface shows me a white screen, and the console fills with log entries similar to this:
GET 404 1.04 KB24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, as Cloud Run is stateless and the static web files seem to be stored in the core module, this information is lost once the container is killed. I have spent a month looking for a solution, so before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than with a volume? And, following from that, is it possible to deploy Odoo on Cloud Run?
Here are all the ideas I have tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store the session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated by the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate the assets so that they are stored in the database. Although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimal and standard approach, and therefore the most likely to work, although it would make GitOps harder to implement; containers could be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps and persistent volumes, but its description says it is stateless. Finally, there is deploying to a Kubernetes cluster, which would use containers and allow autoscaling, unlike the Docker-Compose-on-a-VM approach, but it seems more expensive and harder to set up. Regarding cost, the idea is to use small worker nodes so the bill stays low overnight; regarding difficulty, GitOps is desired, so Argo or something similar would have to be added. I have also heard that GKE Autopilot has a good free tier and is easier to deploy to.
Thanks in advance :)
Cloud Run isn't a good solution for this. If the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance each time, and can therefore lose the file even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. GKE itself, however, can handle stateful deployments, which is what you need with Odoo.
My question is essentially: how can I get around the storage quota limits enforced on a PWA? A little background...
I am hoping to create an offline-ready, line-of-business progressive web app that would ideally push about 2 GB of image and video resources onto my users' phones or tablets, well beyond the current storage quota for caches and IndexedDB. What I'd like to be able to do is have my users (we all work at the same company) do a one-time download of a zip file or directory, and have them store that on their phone/tablet's file system in a well-known directory. As the online version of the app treats these files as URLs, the Fetch API would seem ideal, since I could serve from the network when connected or from the service-worker-managed cache when offline. But the quota limits have me stumped. None of the files is larger than 15 MB, but there's no way to know which files are needed before a user goes offline. Can I use something like an HTML input type=file tag to load files into the cache at runtime and then treat them as URLs (a rough sketch of that idea follows below)? Of course I would remove other files to make room. But since these files wouldn't be coming from "the origin" with its secure https address (a PWA requirement, I think), but rather from a local file system, I'm not sure this will work. If it is workable, would my users be forced to browse to the files manually?
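For reference, what I have in mind with the input type=file idea would look roughly like this; the cache name and the /media/ URL scheme are just placeholders, and as far as I can tell the Cache API accepts synthesized responses like this, but the entries would still count against the same origin quota, which is the underlying problem:

```javascript
// Rough sketch: copy files picked via <input type="file"> into the Cache API under
// the URLs the app normally fetches. Cache name and /media/ URL scheme are assumptions,
// and entries added this way still count against the origin's storage quota.
const input = document.querySelector('input[type="file"]');

input.addEventListener('change', async () => {
  const cache = await caches.open('offline-media-v1');
  for (const file of input.files) {
    // Wrap the local file in a Response and store it under the URL the app uses for it,
    // so later fetch()es (online or offline) can be answered from the cache.
    await cache.put(`/media/${file.name}`, new Response(file, {
      headers: { 'Content-Type': file.type }
    }));
  }
});
```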
If it's an option, you can have a native Android service do the caching part to avoid the space constraint, and then serve the data from the native code to the PWA using WebSockets / secure WebSockets.
No pure-PWA solution is possible for now; the File API is limited because it is sandboxed.
I'm going to deploy a web application with multiple Pyramid application servers and nginx as a load balancer.
This application will have a feature for uploading files which should be available for downloading afterwards.
The total size of uploaded files may be very large, so I'd like to deploy a separate web server to serve these static files. (This is one reason why I don't like the rsync solution proposed here.)
What is the best way to handle file upload and synchronization in this case? I was thinking about NFS or something like that, but I'm not sure it is a good way to solve the problem. I suppose there must be some best practices here, or even a tool or library for this purpose.
UPDATE:
I don't want to use cloud services like Dropbox; it would be nicer to find a synchronization solution inside the network segment.
UPDATE2:
I ended up setting up NFS, and for now it works perfectly.
This is not really a Python- or Pyramid-related question, but you should investigate distributed file systems and CDNs, both of which exist for exactly this kind of thing. GridFS is easy enough to get going with (a minimal sketch follows below), but there are plenty of other options; both Amazon and Google offer similar services.
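As a starting point for GridFS, a minimal sketch with pymongo; the Mongo host, database name and file names are placeholders:

```python
# Minimal GridFS sketch with pymongo; host, database and file names are placeholders.
# Every app server behind the load balancer talks to the same MongoDB, so an upload
# made on one server is immediately readable from the others.
import gridfs
from pymongo import MongoClient

db = MongoClient('mongodb://storage-host:27017')['uploads']
fs = gridfs.GridFS(db)

# Store an uploaded file from any app server.
with open('report.pdf', 'rb') as f:
    file_id = fs.put(f, filename='report.pdf')

# Read it back from any other app server, by id or by name.
data = fs.get(file_id).read()   # or: fs.get_last_version('report.pdf').read()
```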
I have a Rails app deployed to Heroku that has image uploads. After deploying to Heroku again, I am unable to see the old images that were uploaded.
Does Heroku reset the images folder when the app is re-deployed? Please tell me the reason behind it.
Background
Heroku uses an 'ephemeral file system', which from an application architecture point of view should be considered read-only: it is discarded as soon as the dyno is stopped or restarted (which, among other occasions, happens after each push), and it is not shared between multiple dynos.
This is fine for executing code, as most application data lives in a database that is independent of the dynos. For file uploads, however, it presents a problem, so uploads should not be stored directly on the dyno filesystem.
Solution
The simplest solution is to use something like Amazon S3 as your file upload storage, and, if you use a gem like Paperclip, this is natively supported within the gem. There is a great overview article in the Heroku Dev Center about using S3 with Heroku (https://devcenter.heroku.com/articles/s3), which leads into an article contributed by Thoughtbot (the developers of Paperclip) on implementation specifics within a Rails app (https://devcenter.heroku.com/articles/paperclip-s3).
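For completeness, a rough sketch of the kind of Paperclip + S3 configuration those articles walk through; the bucket and environment variable names are placeholders:

```ruby
# Sketch of a Paperclip attachment stored on S3 instead of the dyno filesystem
# (requires the aws-sdk gem alongside paperclip). Bucket and variable names are
# placeholders; set them with `heroku config:set ...`.
class Photo < ActiveRecord::Base
  has_attached_file :image,
    storage: :s3,
    s3_credentials: {
      bucket:            ENV['S3_BUCKET_NAME'],
      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    },
    path: ':class/:attachment/:id_partition/:style/:filename'
end
```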
Folks:
I'm creating an app using Node Webkit. The purpose of this app is to display images and pdfs. The app needs to download those files from a central repository, and cache them locally. When the app runs offline, the files should still be available, and displayed.
On the face of it, this sounds like appcache is the answer - and that indeed is where I was heading when this was a pure webapp in a browser. However, now I've discovered node-webkit, and here we are.
node-webkit's GitHub wiki states:
"However, application cache is designed for browser use, for apps using node-webkit, it's less useful than the other two method, read HTML5 Application Cache if you want to use it."
But it doesn't say why.
I've also researched the node.js filesystem API, but that seems like an order of magnitude more complexity than I need.
Can anyone point me in a sensible direction?
Thanks.
It has to do with the nature of App Cache itself.
You specify a manifest file that lists all the static assets required for your app to run offline. You don't have any programmatic access to the cache to add and remove files via JS.
So for a node-webkit app, it makes more sense to fetch these files and store them in the Application Support folder (or AppData, depending on the platform). That's where the node.js part is really useful: the file I/O.
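A minimal sketch of that approach; the repository URL and the cache folder name are placeholders, and it assumes node-webkit's documented App.dataPath:

```javascript
// Rough sketch: download each asset once into the per-user data directory
// (AppData / Application Support) and read it from there when offline.
// The repository URL and the "media-cache" folder name are assumptions.
var fs = require('fs');
var path = require('path');
var https = require('https');
var gui = require('nw.gui');                         // node-webkit GUI module

var cacheDir = path.join(gui.App.dataPath, 'media-cache');
if (!fs.existsSync(cacheDir)) { fs.mkdirSync(cacheDir); }

function getCached(url, name, done) {
  var target = path.join(cacheDir, name);
  if (fs.existsSync(target)) { return done(null, target); }  // already cached: works offline
  https.get(url, function (res) {
    var out = fs.createWriteStream(target);
    res.pipe(out);
    out.on('finish', function () { done(null, target); });
  }).on('error', done);
}

// Usage: point an <img> (or a pdf viewer) at the local copy.
getCached('https://repo.example.com/images/demo.png', 'demo.png', function (err, localPath) {
  if (!err) { document.getElementById('preview').src = 'file://' + localPath; }
});
```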