So we have this basic Rails 3 website deployed with Capistrano 2.5.19 plus the multi-stage extension.
The site is simple, but it has 40,000+ images. So deployments take a long time, both to our QA server and to production. The issue is not usually network load, because Capistrano only downloads what changed in svn. The issue is the time it takes for our servers to back up the old release (40k images) and copy the new release (another 40k images).
Does anyone know of a best-practice approach to this? Is the only way to split this into two SVN folders and two deployment scripts combined with some symlink magic? Or can I tell Capistrano to exclude the images on certain deployments where I know the images have not changed?
Well, we have this issue too. One solution is a library called fast_remote_cache, if you're on Linux.
https://github.com/37signals/fast_remote_cache
The idea is that it hard-links against the cached copy, so the copy step is much faster. Once the site finally gets large enough that even this takes too long, then it is time to consider asset servers.
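If it helps, wiring this up in a Capistrano 2 recipe is just a matter of switching the deploy strategy. A minimal sketch, assuming the gem is installed on the machine running the deploy (the explicit require may be unnecessary depending on how gems are loaded in your setup):

    # config/deploy.rb
    require 'capistrano/recipes/deploy/strategy/fast_remote_cache'

    # Hard-links from shared/cached-copy into the new release instead of copying files.
    set :deploy_via, :fast_remote_cache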
It's probably better not to have all those images in your repository, or at least in a different repository.
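If you do pull the images out of the repository, one common Capistrano 2 pattern is to keep them once in the shared directory and symlink them into every new release (the "symlink magic" from the question). A sketch, where the task name and paths are my assumptions and public/images is presumed to no longer live in the release itself:

    # config/deploy.rb -- images live once in shared/, not in every release
    after "deploy:update_code", "deploy:link_images"

    namespace :deploy do
      task :link_images, :roles => :app do
        run "ln -nfs #{shared_path}/images #{release_path}/public/images"
      end
    end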
You'll want to look into setting up an asset server. They're easy to hook into Rails, as long as you use the XXX_tag helpers. And you could just have the asset server run plain old Apache - no need for anything dynamic on it...
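For reference, pointing the XXX_tag helpers (image_tag and friends) at a separate host in Rails 3 is a one-line setting; the hostname below is a placeholder:

    # config/environments/production.rb
    # image_tag, stylesheet_link_tag, etc. will now emit URLs on the asset host.
    config.action_controller.asset_host = "http://assets.example.com"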
You might also be able to hook a "cloud" file store (I'm thinking Amazon S3, but there are plenty of others) in to serve the same purpose - they'll provide file backup (and version control, in some cases), and you won't even have to worry about running the asset server yourself.
Hope this helps!
I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run. But after a few minutes the Odoo interface shows me a white screen, with many logs in the console similar to this:
GET 404  1.04 KB  24 ms  Chrome 91  https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web assets seem to be stored in the core module, this information is lost once the container is killed. As I've spent a month looking for a solution, before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? In other words, is it possible to deploy Odoo on Cloud Run?
Here are all the ideas I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate assets so that they are stored in the database. Although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard option, and so seems to have the best chance of working, although it would be difficult to implement GitOps; containers could be run on the VM through Docker Compose, which would help with rolling out updates. GKE Anthos seems to support GitOps too and seems to persist volumes, but its description says it is stateless. Finally, there is deploying to a Kubernetes cluster, which uses containers and allows autoscaling, unlike the Docker Compose-on-a-VM approach, but it seems more expensive and harder to set up. Regarding cost, the idea is to use small worker nodes so the bill stays low during the night. Regarding the difficulty of deploying, the goal is GitOps, so Argo or something similar would probably be added. Also, I heard GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)
Cloud Run isn't a good solution for this. Indeed, if the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance on each request, and so can lose the file even in the middle of a session.
The best solution is to use a VM with sticky-session configuration. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. But GKE itself can handle stateful deployments, which is what you need with Odoo.
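To make the "stateful on GKE" point concrete, here is a rough sketch of persisting the Odoo filestore with a PersistentVolumeClaim. All names and sizes are placeholders, and /var/lib/odoo is assumed as the data directory (the default in the official Odoo image):

    # Sketch only: a PVC keeps the filestore across pod restarts.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: odoo-filestore
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: odoo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: odoo
      template:
        metadata:
          labels:
            app: odoo
        spec:
          containers:
            - name: odoo
              image: odoo:14
              volumeMounts:
                - name: filestore
                  mountPath: /var/lib/odoo   # generated assets and attachments survive restarts
          volumes:
            - name: filestore
              persistentVolumeClaim:
                claimName: odoo-filestore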
We're developing a solution which uses Ektron. As part of our solution we all have local IIS instances (localhost) and deploy to this local instance as part of the development life cycle.
The problem is that after a deployment, once the DLLs are replaced, IIS restarts and the app pool is recycled; this means that the Ektron DLLs need to reload themselves.
This process takes an extended amount of time.
Is there any way to improve the loading time of Ektron?
To some extent, this is the nature of a large app running as a website rather than a web application. Removing the workarea from your local environment is one way to get this compile time down, though this will naturally not work depending on your workflow, for example if you are not using a separate dev DB or if you are storing the workarea in source control.
I have seen some attempts to pre-compile the workarea and keep the working code in a separate project (http://dev.ektron.com/forum.aspx?g=posts&t=10996), but this approach will only speed up your builds, not the recompilation of individual pages that will occur after a build as a result of running as a web site.
The last (and least best-practice) solution is to simply avoid making code changes that cause a recompile, like modifying app_code. Apps running as websites are perfectly happy to recompile a single page's codebehind without regenerating DLLs, which is advantageous for productivity but ultimately discourages good practices like reusing code in libraries. Keep in mind that this is terrible advice, but if you have a deadline and are staring at an ektron page loading every 30 minutes it can be useful to know.
Same problem here. I found this: http://brianpereras.blogspot.com/2013/06/ektron-85-86-workarea-is-slow-compared.html
That says that the help documentation was moved to be retrieved from an online source (documentation.ektron.com). We're running Ektron 9, and I just made this change and it seems much faster on first load (after iisreset).
The solution is to set documentation.ektron.com to 127.0.0.1 in your hosts file.
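For reference, the entry (in C:\Windows\System32\drivers\etc\hosts on the Windows boxes Ektron runs on) looks like this:

    127.0.0.1    documentation.ektron.com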
There is not; this is just how IIS works. Instead of running a local instance of Ektron, it's a good idea to point your web.config file at your test database and copy the /workarea folder to your local PC. You can't edit Ektron locally, but you can change the data on your test server and it will show up locally.
I'm going to deploy a web application with multiple Pyramid application servers and nginx as a load balancer.
This application will have a feature for uploading files which should be available for downloading afterwards.
The total size of uploaded files may be very big, so I'd like to deploy a separate file web server to serve these static files. (This is one reason why I don't like the rsync solution proposed here.)
What is the best solution to handle file upload and synchronization in this case? I was thinking about NFS or something like that, but I'm not sure it is a good way to solve the problem. I suppose there must be some best practices here, or even a tool or library for these purposes.
UPDATE:
I don't want to use cloud services like Dropbox; it would be nicer to find a synchronization solution inside the network segment.
UPDATE2:
I ended up setting up NFS, and for now it works perfectly.
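In case it helps anyone following the same route, the moving parts are roughly an export on the file server plus a mount on each Pyramid app server (and on the box nginx serves files from). Hostnames, networks and paths below are placeholders:

    # /etc/exports on the file server
    /srv/uploads    10.0.0.0/24(rw,sync,no_subtree_check)

    # /etc/fstab on each app server
    fileserver:/srv/uploads    /srv/uploads    nfs    defaults,noatime    0 0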
Not really a Python or Pyramid related question, but you should investigate distributed file systems and CDNs, both of which are designed for this kind of thing. GridFS is easy enough to get going with, but there are plenty of other options. Both Amazon and Google have similar services.
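A tiny sketch of the GridFS idea with pymongo, just to show the shape of it; the connection string and file names are placeholders:

    # Store uploads in MongoDB GridFS so every app server sees the same files.
    from pymongo import MongoClient
    import gridfs

    db = MongoClient("mongodb://localhost:27017")["uploads"]
    fs = gridfs.GridFS(db)

    # Save an uploaded file...
    with open("report.pdf", "rb") as f:
        file_id = fs.put(f, filename="report.pdf")

    # ...and stream it back out later (e.g. from the file web server).
    data = fs.get(file_id).read()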
Folks:
I'm creating an app using Node Webkit. The purpose of this app is to display images and pdfs. The app needs to download those files from a central repository, and cache them locally. When the app runs offline, the files should still be available, and displayed.
On the face of it, this sounds like appcache is the answer - and that indeed is where I was heading when this was a pure webapp in a browser. However, now I've discovered node-webkit, and here we are.
node-webkit's GitHub wiki states:
"However, application cache is designed for browser use, for apps using node-webkit, it's less useful than the other two method, read HTML5 Application Cache if you want to use it."
But doesn't say why.
I've also researched the node.js filesystem API, but that seems like an order of magnitude more complexity than what I need.
Can anyone point me in a sensible direction?
Thanks.
It has to do with the nature of App Cache itself.
You specify a manifest file that lists all the static assets required for your app to run offline. You don't have any programmatic access to the cache to add and remove files via JS.
So for a node-webkit app, it'd make more sense to fetch these files and store them in the Application Support folder (or AppData, depending on the platform). That's where the node.js part is really useful: the file I/O stuff.
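A rough sketch of that approach, using node-webkit's App.dataPath as the cache location; the URL and file names are placeholders:

    // Download a file into the app's data folder so it is available offline later.
    var gui   = require('nw.gui');
    var fs    = require('fs');
    var path  = require('path');
    var https = require('https');

    var cacheDir = path.join(gui.App.dataPath, 'asset-cache');
    if (!fs.existsSync(cacheDir)) fs.mkdirSync(cacheDir);

    function cacheFile(url, name, done) {
      var target = path.join(cacheDir, name);
      if (fs.existsSync(target)) return done(null, target); // already cached: works offline

      var out = fs.createWriteStream(target);
      https.get(url, function (res) {
        res.pipe(out);
        out.on('finish', function () { done(null, target); });
      }).on('error', function (err) {
        fs.unlink(target, function () { done(err); });      // don't keep partial downloads
      });
    }

    // Fetch once while online, then point an <img> or <iframe> at the local path.
    cacheFile('https://example.com/repo/brochure.pdf', 'brochure.pdf', function (err, local) {
      if (!err) document.getElementById('viewer').src = local;
    });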
I'll be using RTC in the near future here at work. My question is: where does it put the files the team members will be working on? I understand that each programmer will work on the project's files and push the changes to the main repository. We have a local web server where we test our work (PHP). So, do we have to configure RTC to publish the files to the web server, or must the RTC server be installed on the web server so it can save the files there?
We use Rational Team Concert almost exactly as you describe, and it works brilliantly. My small team of web developers collaborates on website source code and delivers it to two different streams depending on its readiness: production-stream and staging-stream. Then we have defined two builds that check out the source code, move some things around, and push the files to the web servers via SCP. So, with a few clicks we kick off a staging build, watch it finish in about two minutes and everyone can see the changes on the staging server. When the code is ready for prime-time, the change sets are delivered to production-stream and the production build is kicked off, which is configured to copy the files to the production web server.
But even before a staging or production build is run, any of us can simply configure a local web server in RTC using the Eclipse PDE and Web Tools add-ons and see the site running in localhost as we develop.
All our work is done within Rational Team Concert, from planning, to bug tracking, to source control, to builds. It's very well-suited for website management.
Your understanding is correct - you work on files locally, and they get uploaded to the server when you check in. Bear in mind that check-in in RTC terms really means backing up your files to the server; it is the Deliver command that shares the files with others (it is worth a quick look at the articles on jazz.net that explain how SCM works).
One way to publish to your php server is to make that part of a build, or a build in its own right (which RTC also handles, in conjunction with your favourite build tool). The build would copy the files to the php server. The advantage of doing this as a build is that you will know exactly which versions of your files are being copied, and you will be able to reproduce this copy at any point in the future.
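As a concrete (hypothetical) example, once the build engine has fetched the workspace into a local sandbox, the copy step can be as simple as a scripted scp; the host and paths are placeholders:

    # Deployment step in the build definition (sketch)
    scp -r "$SANDBOX_DIR"/website/* deploy@php-server:/var/www/site/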
You do not need to install the RTC server on the php server.
You can also try posting on the forums on http://jazz.net/ if you have questions on RTC.
Hope that helps.
Another alternative would be to use the command line interface to accept all changes into a workspace and run that with a cron job.
To handle discarded change sets, you'd probably want to use something like:
scm workspace replace-components <workspace-name> stream <uuid-of-stream> --all
after you had initially loaded the workspace on your web server.
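As a sketch, the cron side could look like the following; the schedule, paths and any extra scm options you may need (credentials, repository URI) are assumptions to adapt:

    # crontab on the web server: accept incoming changes into the loaded sandbox every 15 minutes
    */15 * * * * cd /var/www/site && scm accept >> /var/log/rtc-sync.log 2>&1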