What files does Bluepill create on my system? - ruby-on-rails-3

I am working with Rails 3.0.3 and Bluepill 0.0.51. I am trying to troubleshoot a situation where Bluepill sometimes tries to start multiple instances of my Ruby servers, but I'm having trouble knowing where to start, since the only files I have to look at are my Ruby server .rb files and my .pill file in my Rails app root folder. What are the configuration/other files that Bluepill creates on my system? Thanks so much in advance.

When you run a successful pill recipe, Bluepill can create different files depending on your configuration. If you daemonize a process, it will create a .pid file for each process that stores that process's ID so it can shut the process down later. It will also create a log file and sometimes .sock (socket) files. It uses /var/run/bluepill by default, but if you run it in --no-privileged mode it will want you to specify the location yourself, preferably somewhere inside your application's folder.
It doesn't create any configuration files of its own; rather, you configure everything in the pill file. It can be tricky to get everything working, but you have to keep trying. I realize this post is old, so I hope you solved it :)
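For reference, a minimal pill recipe looks roughly like the sketch below; the application name, process name, and paths are placeholders, not taken from your setup. With daemonize enabled, Bluepill forks the process itself and writes the .pid file at the path you give it:

Bluepill.application("my_app", :base_dir => "/path/to/rails_root/tmp/bluepill") do |app|
  # :base_dir is where Bluepill keeps its own .pid/.sock bookkeeping files;
  # it defaults to /var/run/bluepill, which usually requires root unless you run --no-privileged
  app.process("my_server") do |process|
    process.working_dir      = "/path/to/rails_root"                        # placeholder path
    process.start_command    = "ruby lib/my_server.rb"                      # placeholder command
    process.pid_file         = "/path/to/rails_root/tmp/pids/my_server.pid"
    process.daemonize        = true                                         # Bluepill writes the .pid itself
    process.start_grace_time = 10.seconds
  end
end

One thing worth checking for the multiple-instances problem: if the pid_file path in the recipe does not match where the running process's PID actually ends up (for example, when the server daemonizes itself and writes its own pid file), Bluepill may think the process is down and start another copy.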

Related

Why does PyCharm ask me to set up a sync folder every time I add a remote interpreter?

Every time I try to configure a remote interpreter, PyCharm asks me to set a sync folder. In my routine, I often hit the "Cannot find declaration to go to" error, which cannot be solved by invalidating caches, so I have to configure the interpreter again. This leaves redundant folders on my remote machine. Another situation is that I want to create other projects with the same interpreter, where I have to configure the folder mapping for each project to make the interpreter valid.
I do not understand this design. In my opinion, the sync folders should correspond to my local project, and the interpreter should be independent of the projects.
Every time I try to configure a remote interpreter, PyCharm asks me to set a sync folder.
To be able to execute a script on the remote machine, it must exist there. This is by design, but if you already have a project folder deployed, you can change the suggested paths to the ones you need during the interpreter configuration.
See step 7. https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html#ssh
And another situation is that I want to create other projects with the same interpreter, where I have to configure the folder mapping for each project to make the interpreter valid.
Unfortunately, this setup does not work; please vote for
https://youtrack.jetbrains.com/issue/PY-40680/Allow-reusing-a-single-remote-interpreter-in-multiple-project
to increase its priority.

How to Prevent GitLab Runner From Ever Using /home

I built my runner and it works fine. However, when it initializes, it first clones the project to /home/user/builds/xxxx... I never, ever, ever want GitLab to use /home. Never. Not for anything. I was told that it is impossible to change it to a different location; I find that hard to believe.
See in the image below: it gives me a warning about templates not found in some made-up directory, then clones the entire project under the user's home directory. I don't give it that command, so it must be a default. Is there a way to choose ANY other mount point? The project is several hundred gigabytes and the /home directory is 50k. I cannot control that, so to a different mount point it must go.
I can provide the YAML etc., but this is about the core behavior of the runner itself, not anything I created. I'm hoping it is a simple variable I can pass when initializing the runner.
Thank you in advance.

What is the optimal way to store data-files for testing using travis-ci + Docker?

I am trying to set up testing for the repository using travis-ci.org and Docker. However, I couldn't find any documentation about the policy on storage usage.
To perform a set of tests (test.sh) I need a set of input files to run on, which are very big (up to 1 GB, averaging around 500 MB).
One idea is to wget the files directly in the test.sh script, but downloading the input files again for every test run would not be efficient.
The other idea is to create a separate Docker image containing the test files and mount it as a volume, but pushing such a big image to the public registry would not be nice.
Is there a general recommendation for such tests?
Have you considered using Travis File Cache?
You can write your test.sh script so that it only downloads a test file if it is not yet present on the local file system.
In your .travis.yml file, you specify which directories should be cached after a successful build. Travis will automatically restore that directory and the files in it at the beginning of the next build. Since your test.sh script will then notice the file already exists, it will simply skip the download and your build should be a little faster.
Note that the Travis cache works by creating an archive file and putting it on some cloud storage, from which it is downloaded again in later builds. However, the assumption is that this network traffic will likely stay inside that "cloud", potentially even within the same data center. This should still give you some benefits in terms of build time and lower use of resources in your own infrastructure.
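As a sketch of how those two pieces fit together (directory and file names here are placeholders), the .travis.yml declares the cached directory and test.sh only downloads when the file is missing:

# .travis.yml (excerpt)
cache:
  directories:
    - test-data

# test.sh (excerpt)
mkdir -p test-data
if [ ! -f test-data/big-input.dat ]; then
  wget -O test-data/big-input.dat "https://example.org/big-input.dat"
fi

On the first build the download happens and Travis archives test-data at the end; on later builds the archive is restored before test.sh runs, so the if branch is skipped.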

Launching a JAR file using Apache as a background process

I have a data parsing utility in the form of a runnable JAR file. I also have an Apache server (Ubuntu 12.04) to which data files are uploaded. Is there any way that I could launch said JAR file as a background process when a file is uploaded? (FYI: File access by multiple processes isn't a concern here; I've got file locking in place.)
Related idea: if the above isn't possible, I could always launch the aforementioned JAR file from a bash script. However, I'm still not sure how to do that via Apache. I'm quite a novice at using it effectively.
Edit: Just noticed this potential php solution. Apache folks: is this a good idea, or is there a better solution?
Maybe you can use File Alteration Monitor (FAM) to achieve this. It can be configured as a background daemon which performs operations when a new file is spotted. If you want to avoid starting while the file is still being uploaded, wait approximately five minutes after the file's change time before starting your utility.
I use a similar technique for monitoring uploaded files on a Samba share and it works flawlessly.
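If FAM is not convenient to set up, the same pattern can be approximated with a small polling daemon. This is a rough Ruby sketch (the upload directory, JAR path, and log path are placeholders) that follows the "wait five minutes after the change time" suggestion above and then launches the JAR in the background:

#!/usr/bin/env ruby
require 'fileutils'

UPLOAD_DIR = "/var/www/uploads"          # placeholder: the Apache upload target
PROCESSED  = File.join(UPLOAD_DIR, "processed")
JAR        = "/opt/parser/parser.jar"    # placeholder: the parsing utility
FileUtils.mkdir_p(PROCESSED)

loop do
  Dir.glob(File.join(UPLOAD_DIR, "*")).each do |file|
    next unless File.file?(file)
    next if Time.now - File.mtime(file) < 300           # possibly still uploading
    pid = Process.spawn("java", "-jar", JAR, file,
                        [:out, :err] => ["/var/log/parser.log", "a"])
    Process.detach(pid)                                  # run as a background process
    FileUtils.mv(file, PROCESSED)                        # don't pick it up again
  end
  sleep 60
end

Moving processed files into a subdirectory is just one way to avoid re-launching the parser for the same upload; a marker file or a small state database would work equally well.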

Are Files in Public/Assets Required with Asset Pipeline?

I'm in the process of upgrading a 3.0 Rails App to 3.1.4 including the Asset Pipeline.
I'm on Heroku, so I have this in my application.rb
config.assets.initialize_on_precompile = false
I noticed that when I run:
bundle exec rake assets:precompile
it creates files in a public/assets directory (even though my assets are in app/assets already).
For example, it creates files like application-72b2779565ba79101724d7356ce7d2ee, as well as replicating the images I have in app/assets.
My questions are:
(1) should I be uploading these files to my production server?
(2) if I'm supposed to be uploading these, am I supposed to upload each application-xxxxxxxx file or only the latest one?
To your first question: Heroku will not allow you to modify the filesystem, so your assertion is correct: you will need to precompile the asset pipeline before you push to Heroku, so that it can be used in your production environment.
And to the latter: you'll want to make sure you have the latest compilation; any others won't be used. The "xxxxxxx" portion is there to make sure your users have the latest version of your assets. It's a way of versioning what the browser gets, ensuring users aren't stuck with a stale cached copy of your JavaScript even when you set up their caches to hold on to the JS and CSS files for as long as possible instead of constantly fetching them from your web server.
Take my Heroku comments with a slight grain of salt, as I have not deployed to Heroku before. I just know how their system works to some degree.
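For concreteness, here is roughly how the fingerprinting fits together in a Rails 3.1 app; the settings below are the usual defaults rather than anything specific to this question, and the rendered tag is illustrative:

# config/environments/production.rb (typical Rails 3.1 settings)
config.assets.compile = false   # don't compile assets on the fly in production
config.assets.digest  = true    # append the MD5 fingerprint to precompiled filenames

# app/views/layouts/application.html.erb
# <%= stylesheet_link_tag "application" %>
# renders something like:
# <link href="/assets/application-72b2779565ba79101724d7356ce7d2ee.css" media="screen" rel="stylesheet" type="text/css" />

Because the view helpers always resolve to the newest digest recorded in the precompiled manifest, only the latest application-xxxxxxxx files are ever referenced; the older ones are dead weight and can be deleted.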