N00b question here, but I am currently deploying my Rails webapp on an EC2 server instance using rubber. I was previously deploying on Heroku (I got frustrated with how slow each startup was) and decided to switch hosts last night.
There are a few environment variables I'd like to set upon deploying (for Stripe). On Heroku, I was able to just set the environment variables at the command line with something like:
heroku config:set ENV_VAR1=xxxxyyyyzzzz ENV_VAR2=xxxxyyyyzzzz
I was wondering if there is either a cap command, some file I could alter, or even a command I could run from the command line to set those environment variables on EC2? I am not super familiar with rubber/cap, as I was just following one of the RailsCasts videos last night.
Thanks everyone.
I have an ASP.NET Core app running on Ubuntu on the server (I develop on Windows). In a block of code, I use Environment.GetEnvironmentVariable("API_KEY"). In dev, the key is retrieved successfully. However, when I publish the code to the server, the key cannot be retrieved for some reason: API_KEY ends up being null.
Note: the ASP.NET Core runtimes are the same version (5.0 preview 7) on dev and prod, and yes, I have the environment variable set up on the server and can printenv it. When I plug the actual key into the app rather than use the environment variable name, my app works as I want it to.
Typically you have to manually put that setting in your application's "configuration" section. When running on your local machine, it will pull from local.settings.json, but on the server running in Release mode, it will pull from the server-side config. That config is located here:
This is done because your local environment should be running different (safe) settings than something running live on a server.
That way, if you accidentally commit your code to a repo and local.settings.json goes along with it, someone who gets into your repo still won't be able to get at critical production data.
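As an aside: if the app runs as a systemd service on the Ubuntu box (an assumption on my part), keep in mind that systemd units do not inherit variables from your login shell, so printenv succeeding in your SSH session doesn't mean the service sees the variable. A minimal sketch of passing it through a unit file, with a hypothetical /etc/systemd/system/myapp.service:
[Service]
# Variables exported in a login shell are NOT inherited by systemd services;
# declare them explicitly in the unit (service name and paths are hypothetical)
Environment=API_KEY=xxxxyyyyzzzz
ExecStart=/usr/bin/dotnet /var/www/myapp/MyApp.dll
After editing the unit, run sudo systemctl daemon-reload && sudo systemctl restart myapp for it to take effect.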
I'm currently developing a simple webapp with separate frontend (Vue) and backend (Quarkus REST API) projects. For now, I've set up an MVP where the frontend displays some simple data fetched from the backend. To get a working MVP I need to set up CORS support. First, however, let me explain my setup:
Setup
I start the development environment of my frontend with npm run serve and of my backend with ./mvnw quarkus:dev. The frontend runs on localhost:8081 and the backend on localhost:8080.
Heroku also lets you run your apps locally with the command heroku local web. There the frontend runs on 0.0.0.0:5001 and the backend on 0.0.0.0:5000.
To support this setup I created two .env files in my frontend project that point to my backend API. If I work in development mode, the file .env.development is loaded:
VUE_APP_ROOT_API=http://localhost:8080
and if I run heroku local web, the file .env.local with
VUE_APP_ROOT_API=0.0.0.0:5000
is loaded.
In my backend I've set
quarkus.http.cors=true
in my application.properties.
Now I want to deploy these two projects to Heroku and use them in production. I therefore set up two Heroku apps and set a config variable in my frontend project with the following value:
VUE_APP_ROOT_API:https://mybackend.herokuapp.com
Calls from my frontend are successfully working!
Question
For the next step, I want to restrict things further and allow only my frontend to call my API. I know I can set something like
quarkus.http.cors.origins=myfrontend.herokuapp.com
However, I don't know how to do this in Quarkus across the different environments (development, local and production). I've found this link but I don't know how to configure Heroku and my backend app correctly. Do I need to set up different profiles that are applied in my different environments? Or is there another solution? Do I need Heroku's config variables?
Thanks for the help so far!
quarkus.http.cors.origins is overridable at runtime so you have several possibilities.
You could use a profile and have everything set up in your application.properties with %prod.quarkus.http.cors.origins=.... Then you either pass -Dquarkus.profile=prod when launching your application or set QUARKUS_PROFILE=prod as an environment variable.
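For illustration, the profile-based setup in application.properties could look something like this (the dev origin below is an assumption based on the ports mentioned in the question):
quarkus.http.cors=true
# %dev is active under ./mvnw quarkus:dev, %prod when running the packaged app
%dev.quarkus.http.cors.origins=http://localhost:8081
%prod.quarkus.http.cors.origins=https://myfrontend.herokuapp.com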
Another option is to use an environment variable for quarkus.http.cors.origins. That would be QUARKUS_HTTP_CORS_ORIGINS=....
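On Heroku that maps naturally onto a config var, e.g. (app name assumed):
heroku config:set QUARKUS_HTTP_CORS_ORIGINS=https://myfrontend.herokuapp.com -a mybackend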
My recommendation would be to use a profile. That way you can safely check that all your configuration is consistent at a glance.
I have a Dockerfile where I build an Apache web server with some custom configurations etc.
From the Dockerfile I build an image that can be used in a deployment YAML file with Kubernetes.
Everything works properly, but after deployment my Apache service is down in every container of every pod.
Obviously I can go into every container and execute /etc/init.d/apache2 start, but this solution is not very smart...
So my question is: how can I make my custom Apache run when the deployment YAML file is applied?
PS: I tried this solution: with the Dockerfile I created a docker container, then I accessed it and started Apache. Then I created a new image from this container (docker commit + a push to gcloud's registry), but when I deploy the application I still find Apache down.
Well, first things first - I would very much recommend just using the official Apache (httpd) image and then making your custom configurations from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
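From there you build and push the image as usual before referencing it in your deployment YAML, e.g. (the registry path is just an example):
docker build -t gcr.io/my-project/my-httpd:2.4 .
docker push gcr.io/my-project/my-httpd:2.4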
However, if you're dead set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a Bash script and then set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you stated, starting Apache from its init service is a bad idea.
You can find the script they're running here; it's very short at 7 lines, so you shouldn't have too much trouble figuring out where to go from here.
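If you do keep your own apache2-based image, the essential change is making Apache the container's foreground process rather than starting it as a background service. A minimal sketch for a Debian/Ubuntu base (base image and config path are assumptions):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y apache2
COPY ./my-site.conf /etc/apache2/sites-available/000-default.conf
# Run Apache in the foreground so it stays PID 1 and keeps the container alive
CMD ["apache2ctl", "-D", "FOREGROUND"]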
Best of luck!
I'm pretty new to the Gcloud environment, but I'm getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances, let's call them live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I created an image of live-1 and used it to set up dev-2. In my earlier experience, this was possible and worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it connects to dev-2 properly. But on my local machine, the aforementioned command connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
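(For reference, a cached host key entry can be removed with something like ssh-keygen -R <instance-ip>, filling the placeholder with the instance's external IP.)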
Edit: I found out just now that the instances are in fact separate, though 'named' the same: if I log in to dev-2, I do see myuser@live-1: in the shell, but it is running on a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host tag in front of every shell line to know where and what I'm actually working on; having two instances with the same hostname but different environments is confusing.
OK, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart your shell session.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
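Note that hostname set this way only lasts until the next reboot; on systemd-based images you can make the change persistent with:
sudo hostnamectl set-hostname dev-2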
I am using Rails 3.2 and Postgres 9.1. I'm hosting on an Amazon EC2 instance, and using capistrano and rubber to deploy.
So my question is, since I don't set database options in database.yml for production, how do I increase the pool size from the default 5?
My guess is in config/rubber/rubber-postgresql.yml, but I don't know what to actually put in there to change the pool size.
The database.yml is generated by rubber. There is a config file to play with: config/rubber/common/database.yml.
In there you can adjust your database settings. Specifically, you will see that the pool size has a default value that you can change.
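For example, the relevant part of the generated database.yml ends up looking something like this (values illustrative; the exact keys in rubber's template may differ):
production:
  adapter: postgresql
  # raise the connection pool from the Rails default of 5
  pool: 15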