I'm currently running JIRA 6.3.8 on a Windows Server 2008 virtual machine, which periodically restarts outside of business hours to apply updates.
The result is that after every restart, JIRA breaks, stating that the home directory is locked, and a number of plugins fail to load. This has been a recurring problem for which I have attempted a number of solutions, including increasing JVM memory, deleting bundled plugins, creating exceptions in the firewall and, ultimately, reinstalling JIRA (which doesn't always solve the problem).
Are there any more permanent solutions to this?
EDIT: After some investigation, it seems that this is a common problem with seemingly no concrete solution. According to some users, virtual machines shut down faster than physical servers, so JIRA doesn't shut down properly, which causes these errors.
In response to some comments: I am unable to install JIRA on a Linux/Unix VM, as this is an enterprise environment and I am only allocated a Windows VM, and disabling automatic updates is not an option due to security policies governing the VMs.
I had the same problem several years ago. As I remember, the cause was JIRA's lock file. To solve the problem, delete the lock file before starting the JIRA service again.
The file should be named something like
.jira-home.lock
EDIT
I forgot to mention: this file is normally hidden.
If using an EC2 Linux AMI, do the following (a consolidated shell sketch of these steps follows the list):
1. SSH to the machine.
2. cd /data/jira - this is where the lock file is generated when JIRA is started.
3. ls -la - shows all files, even hidden ones (you can also use ls -al).
4. Remove the .(*).lock file - delete the lock file.
5. Restart JIRA (/opt/jiraXXX/bin/shutdown.sh followed by startup.sh).
6. Verify the link - JIRA server link.
If it fails with a Felix cache issue:
7. cd to /data/jira/plugins/.osgi-plugins/felix.
8. Delete/remove all the folders in that folder (sudo rm -rf *).
9. Repeat steps 5 and 6.
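Roughly the same steps as shell commands, run after SSHing in. This is only a sketch: /data/jira and /opt/jiraXXX are the placeholder paths used in this answer, and the lock file name is the one mentioned above, so adjust both for your installation.

    cd /data/jira                                # JIRA home; this is where the lock file is created
    ls -la                                       # list everything, including hidden files
    rm -f .jira-home.lock                        # delete the stale lock file
    /opt/jiraXXX/bin/shutdown.sh                 # stop JIRA cleanly
    /opt/jiraXXX/bin/startup.sh                  # start it again, then check the JIRA URL in a browser
    # Only if JIRA then fails with a Felix plugin cache error:
    cd /data/jira/plugins/.osgi-plugins/felix
    sudo rm -rf ./*                              # clear the plugin cache; it is rebuilt on the next startup
    # ...then run shutdown.sh and startup.sh again and re-check the URL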
For me, this error was caused by a bad symbolic link in jira-shared-data/data
I am working on a college project along with a group of people. Our goal is to add features to an already existing application that runs on the web. Currently, I'm in the process of getting the source code to run on my machine. This consists of cloning a bunch of repos, installing MySQL and some (very old and outdated :-| ) versions of Python, and running some scripts. The process sounds straightforward but it isn't; there are a lot of dependencies that need to be met for the code to run, which means I need to spend a lot of time looking at error logs trying to figure out which package is missing and needs to be installed or downgraded. But that's not the point of this question.
I'd like to make it easier for people to pick up the project in the future and work on it without having to spend hours just to get the code to compile. I'd like to get the project set up on a Linux VM (something I know how to do using VirtualBox) and then somehow share (?) that VM so that other people can simply set it up and be able to immediately have the code compiling (something that I don't know how to do, or if it is even possible).
Additionally, I'd like to be able to do all the coding on the host OS if possible, and only do the compiling/running on the VM (something I also don't know how to do). I would like some help/pointers with all the "I don't know"s, as I don't know much about VMs other than how to set one up using VirtualBox.
You can use Vagrant to automate the provisioning of the VM, and set up all your tools and dependencies using Docker.
There are many good tutorials and sample Vagrantfiles online to get you started. There is a learning curve involved, but it is well worth the effort. Many companies use Vagrant to quickly provision dev environments.
Vagrant can automatically download a specific distro/version of a VM from the web if one is not already installed locally. It can also provision a Docker container, in which you can install any required dependencies, tools, etc. You can store the Vagrantfile, Dockerfile, scripts, etc. in GitHub for easy access by your colleagues. All they would have to do is install Vagrant and run vagrant up from the command line.
If you want to write code on the host machine and compile/test it on the VM, you will need to set up a shared folder in the VM using Guest Additions (see here). Be VERY careful with line endings if you are working in Windows and running in Linux. You can set up the shared folder with Vagrant as well (see here).
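As a rough sketch of the workflow (the box name and folder paths below are just examples, not something specific to your project):

    # In the project directory, with Vagrant and VirtualBox installed:
    vagrant init hashicorp/bionic64   # example box; pick whatever distro/version your project needs
    # Edit the generated Vagrantfile to add provisioning (shell scripts or the Docker provisioner)
    # and a synced folder, e.g. config.vm.synced_folder "./src", "/home/vagrant/src"
    vagrant up                        # downloads the box if needed, then creates and provisions the VM
    vagrant ssh                       # log in and compile/run the code inside the VM

Commit the Vagrantfile (plus any Dockerfile and provisioning scripts) to the repo, and your teammates only need to install Vagrant/VirtualBox and run vagrant up.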
I'm not very experienced in *nix operating systems and I'm trying to set up an embedded programming environment in WSL, but I'm getting hung up on basic issues. Last time I was working on this project I had downloaded some files (cargo and rustup, but that shouldn't matter), and I confirmed that they were there and working by getting the version number with -V.
After restarting my computer WSL doesn't recognize rustup or cargo as commands, and the folders don't show up with ls, even though they show up when I check for them in Windows Explorer.
The directory I've been working out of is %LOCALAPPDATA%\Packages\TheDebianProject.DebianGNULinux_76v4gfsz19hv4\LocalState\rootfs\home\<user>, which I'm pretty sure is the default. I've verified this by creating a .txt file in WSL and finding it with Windows Explorer.
I'm working on Windows 10 64-bit. I chose Debian for arbitrary reasons and am open to switching.
I'm not too worried about the files themselves; I just want to be able to avoid this in the future.
First, since you are new to WSL, please be aware that the recommendation is to never, under any circumstances, edit or modify Linux files inside your %LOCALAPPDATA% folder using Windows apps or tools, which includes moving files with File Explorer. See this blog post from Microsoft: https://devblogs.microsoft.com/commandline/do-not-change-linux-files-using-windows-apps-and-tools/ If you do, you can see corruption, missing files and crashes.
I have no experience with cargo or rust, but it sounds like you didn't update your .bashrc (startup script) with the details needed to add these tools to your environment on startup.
There are a few things you can do:
Use the history command to look back at what you did when you installed things.
Use sudo find / -name rust to look for the executable on your system.
When using ls, remember that files/folders that begin with a dot are hidden, so you need to use ls -al to see them in the terminal.
I assume you followed this guide for installation (or similar). If you did not, and are still having issues, please detail how you installed things.
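If rustup was installed the standard way (via its official install script), a quick check along these lines may help. The paths below are rustup's defaults, so adjust them if the guide you followed installed somewhere else:

    ls -al ~/.cargo ~/.rustup              # the toolchain normally lives in these hidden directories
    grep -n cargo ~/.bashrc ~/.profile     # check whether the installer added the PATH setup line
    source "$HOME/.cargo/env"              # rustup's env script; puts ~/.cargo/bin on PATH for this shell
    cargo -V && rustup -V                  # confirm the tools are reachable again
    # To make the fix permanent, ensure .bashrc (or .profile) contains:  . "$HOME/.cargo/env"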
I'm getting started with Vagrant and spent some time installing packages, setting up my DB and adding some data to it. Now that I have a base working box for my development environment, I would like to share this image with colleagues so that they can use it as a local VM.
Is this not possible with Vagrant? I just tried vagrant package, then destroyed the VM and did a vagrant up with config.vm.box_url pointed at the packaged box location. To my dismay, none of my installed packages, files or configurations were included in the packaged VM.
Am I misunderstanding what Vagrant is for, or perhaps expecting it to do something it's not designed to do here? If installed packages aren't the purpose of vagrant package, then what use case is it for?
I've read through the docs and not found answers to these questions there.
Of course I can provision everything, and I'll get there too, but it's not what I'm getting at in this question.
I've since come to the conclusion that Vagrant is basically for provisioning. If all I need is a ready-made image I can just use VirtualBox directly, but provisioning is a better approach because system packages, application dependencies, etc. are a moving target.
Provisioning allows future builds to stay up to date and can expose incompatibilities early.
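For reference, the typical box-sharing workflow looks roughly like this (box and file names below are placeholders): the packaged box is added under its own name and used as the base box in a fresh Vagrantfile, rather than reusing the original Vagrantfile with box_url.

    vagrant package --output mydev.box   # export the current VirtualBox VM as a reusable box
    vagrant box add mydev ./mydev.box    # register the box locally under the name "mydev"
    # In a fresh project directory (or hand mydev.box to a colleague to do the same):
    vagrant init mydev                   # generate a Vagrantfile that uses the packaged box
    vagrant up                           # boot a VM from the packaged image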
I'm trying to deploy openMRS v1.9.2 to a local VM running CentOS and Glassfish 2 for work. Unfortunately, I could not get it to work. Normally, I just download the standalone version from SourceForge, double-click the jar, and I'm good to go.
I normally just SSH into the VM, so I first tried doing everything through a terminal. Here are the steps I took:
Using wget, retrieve the .zip
Create a dir (I just called it /openmrs), cd into the new directory, and then expand the .zip.
cd into the directory.
At this point, there are two options to start openMRS.
Run the bash script: ./run-on-linux.sh
Run the .JAR: java -jar [insert_jar_name].jar -commandline
When I run the .JAR, I get a stack trace.
When I try to run the bash script, I get another error.
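In other words, the sequence was roughly the following (the download URL, zip name and jar name are whatever the SourceForge page provides; this is just the commands behind the steps above):

    wget <standalone-zip-url>               # retrieve the standalone .zip
    mkdir /openmrs && cd /openmrs           # create a directory for it (may need sudo at the filesystem root)
    unzip <path-to-downloaded-zip>          # expand the archive
    cd <extracted-directory>
    ./run-on-linux.sh                       # option 1: the bundled script
    java -jar <jar_name>.jar -commandline   # option 2: run the jar directly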
Anyways, I thought I found a potential solution in an openMRS JIRA ticket, but it seems aimed at Glassfish 3, and not Glassfish 2 (which is what I need to use).
I then tried deploying the .WAR via the Glassfish admin UI. I thought it would work, but after going through the steps of selecting a language, whether or not to use demo data, etc. I received this.
Does anyone have experience deploying openMRS to Glassfish 2.1.1? Unfortunately Glassfish 3 doesn't seem to be a realistic option. I would really appreciate any help here. Thanks.
Although it doesn't solve my problem of not being able to deploy openMRS to an instance of Glassfish v2, I did manage to get further by just installing MySQL on the VM. Our work machines are all set up for Postgres, so I should have guessed earlier that the missing MySQL server installation was the problem.
Here is a tutorial I used to install MySQL.
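In case it helps anyone else, on CentOS 6 the installation boils down to something like the following (on CentOS 7 and later the package and service are mariadb-server and mariadb instead):

    sudo yum install mysql-server       # install the MySQL server package
    sudo service mysqld start           # start the server
    sudo chkconfig mysqld on            # start it automatically on boot
    sudo mysql_secure_installation      # set a root password and remove the insecure defaults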
I'm developing several Rails 3.2 applications on Mac OS X Lion. Last night, I updated from 10.7.4 to 10.7.5, and I found this morning that I'm no longer able to connect to my development PostgreSQL databases (while my production environments are working just fine, with the same codebases). This is the case for all applications I'm developing locally which use PostgreSQL.
The error message I'm getting every time I try to connect:
PG::Error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
I've read a few other SO posts about similar problems, but most of them suggest PATH changes in ~/.bash_profile. When I run which psql (as suggested in the other posts), though, /usr/local/bin/psql is returned, which is correct (according to the other posts).
I'm hesitant to uninstall and reinstall PostgreSQL again (even via Homebrew), as I don't want to mess with my existing PostgreSQL databases for all of my applications. (Perhaps that's not a potential problem—I'm not confident enough to say.)
I've uninstalled and reinstalled the pg gem several times, closed and reopened my shell session, restarted my machine (and every combination thereof), all to no avail.
Where can I go from here?
Your issue is that PostgreSQL is not running. First, I highly recommend backing up your PostgreSQL databases before doing OS upgrades, because sometimes things don't go well. If you have a dump, you have a lot more options if things go south. I do agree that recompiling on a new OS raises some questions about your existing databases.
The first thing to do, of course, is to try to start PostgreSQL normally. Maybe it is just missing a startup script. Something like sudo -u postgres pg_ctl -D /path/to/postgresql/data/dir start
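On a Homebrew install you normally run this as your own user rather than via sudo -u postgres, and the data directory is usually Homebrew's default. A sketch, assuming that default location (adjust the path if yours differs):

    pg_ctl -D /usr/local/var/postgres status      # is the server actually running?
    pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
    tail -n 50 /usr/local/var/postgres/server.log # if it refuses to start, the reason is logged here
    psql -l                                       # once it is up, this should list your databases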
If that doesn't work, look at the error message and try to resolve it. Hopefully it starts without problems. If it doesn't, copy your data directory, if you can, to a system running the old version and try to dump it from there. Once you have your copy (important: back up the files first!), you can see whether pg_upgrade works. If that fails, try to compile (via Homebrew) the same major version you were running before.
If that fails, hire an expert.