Jira backup files being automatically created every 3 hours

On my Jira server v7.5.2 (CentOS 7), in /data/atlassian/jira/export, there are a number of zip files created every 3 hours, each around 200 MB in size:
...
2018-Aug-14-0000.zip
2018-Aug-14-0300.zip
2018-Aug-14-0600.zip
2018-Aug-14-0900.zip
2018-Aug-14-1200.zip
2018-Aug-14-1500.zip
...
Apparently they're automated backups. However, there is neither a Scheduled Job in Jira nor a cron job on the host with that timing.
What could be creating these files? Is there any other Jira job scheduling or setting that I should check?

Look under Administration > System > Services. There is a Backup Service running there; you can edit or delete it from that page.
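If you keep the Backup Service running, the export directory can fill up over time. A hedged sketch of a cleanup crontab entry (the 7-day retention and the 04:00 run time are assumptions; the path is taken from the question):

```shell
# Crontab entry (sketch): each night at 04:00, delete export zips older than 7 days.
0 4 * * * find /data/atlassian/jira/export -name '*.zip' -mtime +7 -delete
```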


RavenDB Restore Stuck

We are trying to restore RavenDB from a backup file using Raven Studio. The restore process copied the index files from the backup to the new location, but it gets stuck at the following step:
Esent Restore: Restore Begin
Esent Restore: 18 1001
I couldn't see any other logs or exceptions.
The backup size is around 123 GB.
How do I fix this stuck process?
After a lot of investigation, I found the issue.
The IIS application pool was configured to recycle itself every 20 minutes, so after 20 minutes IIS would kill the RavenDB restore process.
I found this by watching Resource Monitor > CPU > Processes. You should be able to see the Raven process doing a large number of write operations during the restore, and to spot the moment the process gets stopped.
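If pool recycling is the culprit, one hedged fix is to turn off the periodic restart for the pool hosting RavenDB before retrying the restore. A sketch using appcmd, which ships with IIS ("RavenDB" is a hypothetical pool name):

```shell
# Sketch: disable the periodic recycle for the app pool hosting RavenDB.
# Run on the IIS host; adjust the pool name to your setup.
%windir%\system32\inetsrv\appcmd.exe set apppool "RavenDB" /recycling.periodicRestart.time:00:00:00
```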

Which process runs Webmin Scheduled Functions?

Exactly what wakes up and runs the entries in Webmin Scheduled Functions? It doesn't seem to be crond. Is it miniserv.pl?
Looking at the miniserv.pl file tells me that, since miniserv is always running, it effectively manages its own cron jobs.
# Initially read webmin cron functions and last execution times
&read_webmin_crons();
Search the file for the string webmin_crons and you will find all the information you need on this.
As far as I'm aware, we don't use the system crond to run internal scheduled functions.
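As a sketch, that search can be reproduced from a shell. Here the grep runs against the excerpt quoted above; on a real host you would point it at miniserv.pl itself (commonly /usr/libexec/webmin/miniserv.pl or /usr/share/webmin/miniserv.pl, depending on the install):

```shell
# Count the scheduler hook calls in the miniserv.pl excerpt quoted above.
# On a live system: grep -n 'webmin_crons' /usr/libexec/webmin/miniserv.pl
excerpt='# Initially read webmin cron functions and last execution times
&read_webmin_crons();'
printf '%s\n' "$excerpt" | grep -c 'webmin_crons'
```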

Rundeck project and job sync between 2 instances with backend as mysql cluster

I have set up two Rundeck instances on two VMs with a MySQL cluster as the backend: Rundeck #1 on VM #1 connects to MySQL DB #1, and Rundeck #2 on VM #2 connects to MySQL DB #2.
The problem I have now is that whenever I create a project or job in Rundeck #1, I am not able to see it in Rundeck #2. What should I do?
Any help will be appreciated.
I would first try switching the databases, i.e. pointing Rundeck #2 at MySQL DB #1, to see if the jobs become visible.
If they do, then you have a sync issue between the databases.
If the jobs are still not visible, then I assume there is some identification problem between the Rundeck instances.
Just my 2 cents.
The issue can be fixed by setting the default storage engine in my.cnf.
In my case I just modified /etc/my.cnf and added the following option under the [mysqld] header:
default-storage-engine=NDBCLUSTER
Then I restarted MySQL and the tables started to sync.
Delete the Rundeck database before proceeding with any modifications.
Thanks, and I hope this helps everyone facing such issues.
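To check whether the engine change actually took effect, a hedged verification sketch (the schema name rundeck is an assumption; adjust it to your setup):

```shell
# Sketch: list any tables in the rundeck schema NOT using the NDB engine.
# An empty result means every table is cluster-replicated.
mysql -e "SELECT table_name, engine
          FROM information_schema.tables
          WHERE table_schema = 'rundeck' AND engine <> 'ndbcluster';"
```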

RavenDB periodic backup bundle + web admin does not persist changes

I'm using the latest stable version (3.0.3660) on a VM on Windows Azure and would like to enable periodic backup. I have tried to enable both local backup and backup to Azure, but the GUI doesn't seem to persist the changes: the modal dialog says "Saving..." but nothing more happens.
Is there a log for this so that I can troubleshoot what doesn't work?
/Erik
I tried it too, and the database was unresponsive for several minutes (a co-worker waited for tens of minutes), but after a while it actually does something. I configured the Azure backup, and it failed because it couldn't upload a blob of that size. The error was logged and can be found under Studio > Status > Logs.
Running the server standalone (instead of running as a service) doesn't give any additional feedback either.
I managed to get it to work by setting "Raven/AnonymousAccess" to Admin and then saving the changes, though I'm not sure why: I had connected with an API key that should have full access.
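For reference, the same setting can be made directly in Raven.Server.exe.config instead of through the studio. A sketch of the appSettings entry (restart the service afterwards; as far as I recall, valid values include None, Get, All and Admin):

```xml
<appSettings>
  <!-- Sketch: allow anonymous admin access to the server -->
  <add key="Raven/AnonymousAccess" value="Admin" />
</appSettings>
```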

SQL 2012 - SSIS Package not populating Text file when scheduled

I'm working on SQL 2012 Enterprise, and I have a set of SSIS package exports which push data out to text files on a shared network folder. The packages aren't complex, and under most circumstances they work perfectly. The problem I'm facing is that they do not work when scheduled, despite reporting that they have succeeded.
Let me explain the scenarios:
1) When run manually from within BIDS, they work correctly: txt files are created and populated with data.
2) When deployed to the SSISDB and run from the Agent job, they also work as expected: files are created and populated with data.
3) When the Agent job is scheduled to run in the evening, the job runs and reports success. The files are created, but the data is not populated.
I've checked the reports in the Integration Services Catalogs and compared the messages line by line from the OnInformation events. Both runs report that the Flat File Destination wrote xxxx rows.
The data is there, the Agent account has the correct access. I cannot fathom why the job works when started manually, but behaves differently when scheduled.
Has anyone seen anything similar? It feels like a very strange bug.
Kind Regards,
James
Make sure that the account you have set up as the proxy for the SSIS task has read/write access to the file.
In my experience, when you run a SQL Agent job manually, it appears to use the context of the user who initiates it in some way; I always assumed it was a side effect of impersonation. It's only when the job actually runs on its schedule that everything uses the assigned security rights.
Additionally, I think when the user starts the job, the user is impersonating the proxy, but when the job is run via the schedule, the agent's account is impersonating the proxy. Make sure the service account has the right to impersonate the proxy. Take a look at sp_grant_login_to_proxy and sp_enum_login_for_proxy.
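As a hedged sketch of those two procedures (both live in msdb; the proxy name "SsisFileProxy" and the login are hypothetical):

```shell
# Sketch: list which logins may use the proxy, then grant one that is missing.
sqlcmd -d msdb -Q "EXEC dbo.sp_enum_login_for_proxy @proxy_name = N'SsisFileProxy';"
sqlcmd -d msdb -Q "EXEC dbo.sp_grant_login_to_proxy @login_name = N'DOMAIN\SqlAgentSvc', @proxy_name = N'SsisFileProxy';"
```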
Here's a link that roughly goes through the process:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
I also recall this video being useful:
http://msdn.microsoft.com/en-us/library/dd440761(v=SQL.100).aspx
I had the same problem with Excel files; it was a permissions issue.
What worked for me was adding the SERVICE account to the folder's Security tab. After that, the SQL Agent could access the files.