In production, we have a web infrastructure as follows:
Load Balancer (haproxy)
API Server (PHP + apache)
Frontend Server (Javascript + nginx)
MySql Server
Redis Server
I'd love to start using Vagrant to keep the production and development environments exactly the same, and to make it easy for a new developer to jump in and start working.
The big question is: how should I build the box?
Should I put everything in one box or should I build more boxes? And how many?
It depends on the convention you've agreed on with your developers. Ask yourself one question: what kind of structure do you want to work in, distributed or centralized?
If the answer is "distributed", you can build one box per project. You won't get confused when you need to bring up a project that hasn't been touched for a while, but this approach takes a lot of memory and storage space, and it often doesn't make sense if most of your projects are based on the same production environment.
If the answer is "centralized", a single box shared by all projects built on the same environment is enough. It saves plenty of time, but it's also easy to get confused when you're looking for an old project; in that case you can run one Docker container per project inside your Vagrant box.
Additionally, I'd suggest using Packer to build the box. It's exactly the right instrument for this goal: it can produce a "ready to work" Vagrant box for every major virtualization provider and run shell or configuration-management scripts during the build. Put everything essential for the production environment into the box; developers can later add package dependencies through Vagrant provisioning and share them via the Vagrantfile.
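A minimal command-line sketch of that workflow, assuming a hypothetical Packer template (base-lamp.json) and box name:
# Build the base box from the (hypothetical) Packer template that installs PHP, Apache, MySQL and Redis
packer build base-lamp.json
# Register the resulting .box artifact with Vagrant and bring up a VM from it
vagrant box add lamp-base builds/base-lamp-virtualbox.box
vagrant init lamp-base
vagrant up
From there, each developer only needs the Vagrantfile checked into the project to get an identical environment with vagrant up.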
I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30€/month) + Cloud Run, but after a few minutes the Odoo interface shows a white screen, and the console fills with many log entries similar to this:
GET 404  1.04 KB  24 ms  Chrome 91  https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web assets seem to be stored in the core module, this information is lost once the container is killed. I've spent a month looking for a solution, so before trying yet another way of deploying I'm asking the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? In other words, is it possible to deploy Odoo on Cloud Run?
Here are all the ideas I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store the session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found a link describing the need to regenerate the assets so that they are stored in the database. I did that, but the problem persisted.
Finally, I considered other ways of deploying Odoo. Deploying directly on a VM seems the most minimalistic and standard option, and therefore the most likely to work, although it makes GitOps harder to implement; containers could still be run on the VM through Docker Compose, which would help with deploying updates. GKE Anthos seems to support GitOps and to persist volumes, but its description says it is stateless. Lastly, there is deploying on a Kubernetes cluster, which would use containers and allow autoscaling, unlike the Docker Compose-on-a-VM approach, but it looks more expensive and harder to set up. On cost, small worker nodes could keep the bill low overnight; on difficulty, implementing GitOps would mean adding Argo or something similar. I've also heard that GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)
Cloud Run isn't a good fit for this. If the Werkzeug session is kept in memory, the same client isn't guaranteed to reach the same instance each time, and can therefore lose the files even in the middle of a session.
The best solution is to use a VM with sticky sessions configured. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only 1 cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. Plain GKE, however, can handle stateful deployments, which is what you need for Odoo.
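If you go the VM route, a rough sketch of the minimal setup could look like the following; the machine type, zone, image tags and volume name are assumptions to adapt, and Docker Compose can express the same two containers declaratively.
# Create a small Compute Engine VM (machine type and zone are only examples)
gcloud compute instances create odoo-vm --machine-type=e2-small --zone=europe-west1-b
# On the VM: run PostgreSQL and the official Odoo image, keeping the filestore on a named volume
docker run -d --name db -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres postgres:13
docker run -d --name odoo -p 8069:8069 --link db:db -v odoo-data:/var/lib/odoo odoo:14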
We are new to test automation, and we have finally automated our setup (black-box testing). A brief overview of our setup:
Each setup consists of 2 Linux PCs and 1 Windows PC; the PCs communicate with one another via an embedded board.
Previously we ran test cases manually from the Linux machine, which required typing on the Linux machine as well as some operations on the Windows PC. We have now written a C wrapper from which any number of test cases can be triggered, and thanks to AutoIt the Windows PC operations are automated as well.
Now we have multiple such setups, and I want a central test controller that:
Given a set of test cases (and the corresponding executable to run on the embedded board), can distribute and trigger them in parallel across setups
During overnight tests it can keep track of which test cases have been executed and which ones are pending
Quarantine test cases
Continuous integration - we use CVS
etc etc
Basically, a powerful test harness running on a PC that is connected to all the setups through a router.
Any suggestions for open-source (free) projects for such software? More than having all the features mentioned above, I want something that does most of it; I can code and add the remaining functionality as needed.
I tried browsing online and found a few candidates, but they all seem to be aimed at testing websites, and I'm not sure they would suit my use case. I would really appreciate input on this.
Thanks
OK, I am going to ignore most of what you described, since it looks like you have automated it already. Correct me if I am wrong, but I think what you are looking for is a controller for all these automated jobs.
I would say that Jenkins CI is the ideal solution for you.
In Jenkins, everything is controlled by a master machine, which in turn controls slaves (your other PCs or Linux machines) via Java agents. This gives you an overview of the whole system.
You can then create jobs and restrict where they can run; these jobs can do pretty much anything, including taking parameters for what to run. You can also create matrix configuration jobs, which let one configuration run simultaneously on however many slaves you need.
You can start these jobs on a timer, from a trigger, from an external or internal build request, and so on.
Also, I am pretty sure Jenkins has dedicated plugins for working with CVS, some of which are part of the out-of-the-box setup.
Jenkins is the way to go here.
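As a rough illustration only (the hostname parameter, path and wrapper options below are invented), the shell build step of a parameterized Jenkins job running on a slave could simply call your existing C wrapper over SSH:
# SETUP_HOST and TEST_CASE would be Jenkins job parameters, exposed as environment variables
ssh tester@$SETUP_HOST "/opt/testharness/run_wrapper --case $TEST_CASE --image /tmp/firmware.bin"
A matrix job would run the same step once per entry in a SETUP_HOST axis, which gives you the parallel distribution across setups.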
We're developing a solution that uses Ektron. Each of us has a local IIS instance (localhost) and deploys to it as part of the development life cycle.
The problem is that after a deployment, once the DLLs are replaced, IIS restarts and the app pool is recycled, which means the Ektron DLLs need to reload themselves.
This process takes an extended amount of time.
Is there any way to improve the loading time of Ektron?
To some extent, this is the nature of a large app running as a website rather than a web application. Removing the workarea from your local environment is one way to get this compile time down, though this will naturally not work depending on your workflow, for example if you are not using a separate dev DB or if you are storing the workarea in source control.
I have seen some attempts to pre-compile the workarea and keep the working code in a separate project (http://dev.ektron.com/forum.aspx?g=posts&t=10996), but this approach will only speed up your builds, not the recompilation of individual pages that will occur after a build as a result of running as a web site.
The last (and least best-practice) solution is to simply avoid making code changes that cause a recompile, like modifying app_code. Apps running as websites are perfectly happy to recompile a single page's codebehind without regenerating DLLs, which is advantageous for productivity but ultimately discourages good practices like reusing code in libraries. Keep in mind that this is terrible advice, but if you have a deadline and are staring at an Ektron page loading every 30 minutes it can be useful to know.
Same problem here. I found this: http://brianpereras.blogspot.com/2013/06/ektron-85-86-workarea-is-slow-compared.html
It says the help documentation was moved so that it is retrieved from an online source (documentation.ektron.com). We're running Ektron 9, and I just made this change and it seems much faster on first load (after an iisreset).
The fix is to point documentation.ektron.com at 127.0.0.1 in your hosts file (C:\Windows\System32\drivers\etc\hosts), i.e. add the line 127.0.0.1 documentation.ektron.com.
There is not; this is just how IIS works. Instead of running a local instance of Ektron, it's a good idea to point your web.config at your test database and copy the /workarea folder to your local PC. You can't edit Ektron locally, but you can change the data on your test server and it will show up locally.
I'll be using RTC in the near future here at work. My question is: where does it put the files that team members will be working on? I understand that each programmer works on the project files and pushes changes to the main repository. We have a local web server where we test our work (PHP). So, do we have to configure RTC to publish the files to the web server, or must the RTC server be installed on the web server so it can save the files there?
We use Rational Team Concert almost exactly as you describe, and it works brilliantly. My small team of web developers collaborates on website source code and delivers it to two different streams depending on its readiness: production-stream and staging-stream. Then we have defined two builds that check out the source code, move some things around, and push the files to the web servers via SCP. So, with a few clicks we kick off a staging build, watch it finish in about two minutes and everyone can see the changes on the staging server. When the code is ready for prime-time, the change sets are delivered to production-stream and the production build is kicked off, which is configured to copy the files to the production web server.
But even before a staging or production build is run, any of us can simply configure a local web server in RTC using the Eclipse PDE and Web Tools add-ons and see the site running in localhost as we develop.
All our work is done within Rational Team Concert, from planning, to bug tracking, to source control, to builds. It's very well-suited for website management.
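The copy step such a build performs can be as simple as one SCP call in the build script; the host and paths here are invented:
# Push the checked-out site to the staging web server after the build has prepared it
scp -r build/site/* deploy@staging.example.com:/var/www/html/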
Your understanding is correct: you work on files locally, and they get uploaded to the server when you check in. Bear in mind that a check-in, in RTC terms, really means backing up your files to the server; it is the Deliver command that shares the files with others (it is worth a quick look at the articles on jazz.net that explain how the SCM works).
One way to publish to your PHP server is to make that part of a build, or a build in its own right (which RTC also handles, in conjunction with your favourite build tool). The build would copy the files to the PHP server. The advantage of doing this as a build is that you know exactly which versions of your files are being copied, and you can reproduce the copy at any point in the future.
You do not need to install the RTC server on the PHP server.
You can also try posting on the forums on http://jazz.net/ if you have questions on RTC.
Hope that helps.
Another alternative would be to use the command line interface to accept all changes into a workspace and run that with a cron job.
To handle discarded change sets, you'd probably want to use something like:
scm workspace replace-components <workspace-name> stream <uuid-of-stream> --all
after you had initially loaded the workspace on your web server.
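A sketch of what that cron job might look like, assuming the workspace has already been loaded under /var/www/site and the scm CLI has cached credentials (the path, schedule and log file are invented):
# Every 15 minutes, accept incoming change sets into the loaded workspace
*/15 * * * * cd /var/www/site && scm accept >> /var/log/rtc-accept.log 2>&1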
If you look at the Linux ecosystem (especially the Ubuntu and Alestic EC2 images) there is a common technique where the VMs are pre-configured to look at the EC2 user-data and use it as a boot script. The nice thing about this approach is that you can write a boot script that further provisions your machine, allowing you to avoid making a new image every time your software that runs on the machine changes.
I want to do the same thing for Windows, but given that I'm a Mac and Linux guy, I'm a bit lost on where to start. My requirements are:
This must run on Windows Server 2008
A bootstrap script needs to start when the machine boots up and read the user-data by pulling down the contents of http://169.254.169.254/1.0/user-data
The bootstrap script then needs to run the contents of that file as if it were a script
The script embedded in the user-data needs to run in such a way that it has access to the desktop environment (i.e. it can launch a browser, etc.).
I'm not quite sure how services work in Windows or if I need to enable auto-login, so any advice here would be appreciated. The ultimate goal is to run a Java program that launches some custom software that in turn launches a web browser (IE, Firefox, etc) and is capable of taking screenshots.
The screenshot part is interesting, because in the past when I've tried this the only way I could get something other than a black screen was to have UltraVNC or RealVNC boot up as a service, though I don't know why that helped.
I'm looking for answers to three specific questions, as well as any general advice:
Should I be focussing on a Windows service or auto-login + bat file in the "Startup" folder?
If I use a Windows service, is there anything special that I need to do to make sure desktop access and/or screenshots are available?
Do you recommend any tools for common Linux commands, like curl or wget? Last time I used Windows I used Cygwin a lot, but is there something more appropriate to use here?
I have not tried auto-login on Windows instances in EC2, but here's the support document on how to enable it.
We bootstrap our Windows instances using a custom AMI with a custom Windows 'install' service already installed. The bootstrap installer reads a URL from user-data at startup. The URL points to a ZIP file stored in S3. The installer then downloads, un-zips, and executes the actual application installer -- in our case a simple CMD file.
This setup allows us to have one base AMI and easily overlay 15+ different application configurations without having to rebuild the AMI. If you only have one application configuration, this may be overkill for your situation.
The only trouble we ran into was our installer service starting too early -- changing the service startup mode to "Automatic (Delayed Start)" fixed that issue.
We wrote our boot-strap installer in Java, launched via YAJSW, because we're comfortable with it. If you just want a few simple Unix tools, most are available pre-compiled for Windows, for example wget.
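For the simplest case, a minimal PowerShell sketch of that bootstrap idea (assuming the user-data itself is a PowerShell script; C:\bootstrap is just an arbitrary working folder) uses the .NET WebClient class:
# Pull the user-data from the metadata service and execute it as a PowerShell script
New-Item -ItemType Directory -Force -Path C:\bootstrap | Out-Null
$wc = new-object system.net.webclient
$wc.DownloadString("http://169.254.169.254/1.0/user-data") | Out-File -Encoding ASCII C:\bootstrap\user-data.ps1
powershell -ExecutionPolicy Bypass -File C:\bootstrap\user-data.ps1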
For something completely different, you could try PsExec to configure the instance after it has booted.
You can try using RightScale's free developer account to create plain Powershell scripts and associate them with your Windows instances to run at boot time. The RightScale dashboard solves exactly the problems you are trying to solve above.
DISCLAIMER: I work for RightScale.
As for screen capture, CutyCapt is a simple tool you can point at a URL to generate an image.
UnxUtils is a great solution for those looking for Unix tools on Windows. It has the wget.exe you're looking for; however, using PowerShell to download things is not so bad either:
$wc = new-object system.net.webclient
$wc.DownloadFile("http://stackoverflow.com","test.html")
If you can write a batch file to do your setup, then you can run it at startup of the VM by doing this:
1. Run REGEDT32.EXE.
2. Modify the following value within HKEY_CURRENT_USER:
   Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ParseAutoexec
   1 = autoexec.bat is parsed
   0 = autoexec.bat is not parsed
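If you prefer to script that change instead of clicking through REGEDT32 (for example so it can be baked into the image), the equivalent one-liner should be roughly:
reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v ParseAutoexec /t REG_SZ /d 1 /f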
As an answer to #3, I would say that you can do just about anything you need in a batch file, including downloading from an FTP server (but not from an HTTP server). I am really interested in this stuff, so if you have questions, try asking me.
If you use Elastic Beanstalk you can use this:
Customizing the Software on EC2 Instances Running Windows
It uses YAML formatting standards, e.g.
packages:
  msi:
    mysql: http://dev.mysql.com/get/Downloads/Connector-Net/mysql-connector-net-6.6.5.msi/from/http://cdn.mysql.com/
or
sources:
  "c:/myproject/myapp": http://s3.amazonaws.com/mybucket/myobject.zip
I know this is a little late to help the original poster, but for anyone still reading, one solution is to use the http://cloudinitnet.codeplex.com/ project. The service is easily installed using a PowerShell script and will create a local administrator account to use while running.
The goal for this project was to replace the Cloud-Init project used in Amazon Linux and Ubuntu.