What if I stop paying for Jira: would I lose my whole backlog and the rest of my team's work, or would it just be frozen?
And is there any way to back up all of an account's data in Jira?
Thank you!
You can find backup instructions here:
https://confluence.atlassian.com/cloud/cancelations-744721616.html
I believe you will lose access but the data will still exist for 2 weeks, so you can reactivate:
"Once your site has been deactivated (i.e. your site has been taken offline), you have two weeks to pay your outstanding quote or contact Atlassian to have the site restored before your data will be permanently deleted. Note that data backups for permanently deleted instances can sometimes be retrieved by raising a ticket with our Support team within the first month after your instance has been deleted." - https://confluence.atlassian.com/cloud/billing-and-user-count-744721614.html
We have been running a query on BigQuery for the last couple of weeks, and it has been executing fine. However, as of the morning of February 16, 2016, it would only run at billing tier 2. Did Google change the billing tier definitions internally over the weekend as a Valentine's gift? ...
More seriously, it is important to communicate these changes (well) ahead of time.
Apologies for the surprise!
This change is part of our new high-compute query pricing, which is documented here:
https://cloud.google.com/bigquery/pricing#high-compute
The original announcement of the high-compute query pricing plan was here:
http://googlecloudplatform.blogspot.com/2015/08/Google-BigQuery-adds-UDF-support-for-deeper-cloud-analytics.html
This change was originally intended to go live earlier in the year, but was delayed for several weeks. I've rolled it back once again, but you should expect it to get rolled forward soon (weeks, if not days).
If you have additional questions or concerns, feel free to contact support, post here, or file a bug on our issue tracker:
https://cloud.google.com/bigquery/support
For more info about high-compute queries, see this SO answer:
https://stackoverflow.com/a/32638711/1375400
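For anyone estimating the impact: under this model the billing tier acts as a straight multiplier on the on-demand rate. A quick sketch, assuming the $5/TB on-demand rate published at the time (check the pricing page for current numbers):

```python
ON_DEMAND_RATE_PER_TB = 5.00  # assumed 2016 on-demand rate, USD per TB scanned

def query_cost(bytes_processed, billing_tier=1):
    """Estimated USD cost of one on-demand query at a given billing tier."""
    tb_scanned = bytes_processed / 1024 ** 4
    return tb_scanned * ON_DEMAND_RATE_PER_TB * billing_tier

# The same 2 TB query doubles in cost when it is classified as tier 2:
print(query_cost(2 * 1024 ** 4, billing_tier=1))  # 10.0
print(query_cost(2 * 1024 ** 4, billing_tier=2))  # 20.0
```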
I currently have an Azure S2 database running via the new Azure Portal.
I noticed my billing was higher than it should be, and after investigating further, I found new databases appearing every day and then disappearing.
Basically, something is running a CreateDatabase and DeleteDatabase event every evening, and I'm being charged an extra hour each day.
Microsoft's response is:
"Our Operations Team investigated the issue and found that these databases did indeed exist in a 1 hour windows at midnight PST every day. It looks like you may have some workload which is doing this unknowingly or an application with permissions which is unknowingly creating these databases and then dropping them. "
I haven't set up any scripts to do this, and I have no apps running that could be doing this.
How can I find out what's happening?
Regards
Ben
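One place to start looking, sketched under the assumption that the phantom databases live on a single logical server: the master database's sys.dm_operation_status view records recent database operations with timestamps, which should at least confirm what is being created and dropped, and when. Server, driver, and credentials below are placeholders:

```python
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"          # placeholder driver
    "SERVER=yourserver.database.windows.net;"          # placeholder server
    "DATABASE=master;UID=youradmin;PWD=yourpassword"   # placeholder creds
)

# sys.dm_operation_status only retains recent history, so run this soon
# after one of the midnight create/delete cycles.
rows = conn.execute(
    "SELECT major_resource_id, operation, state_desc, start_time "
    "FROM sys.dm_operation_status ORDER BY start_time DESC"
).fetchall()

for db_name, operation, state, started in rows:
    print(started, operation, db_name, state)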
There are many SQL Server instances hosted on different servers.
All of them use SQL Server Authentication, so the same login is shared by many people in the organization.
How can we trace who deleted records from a particular table?
Do we need additional coding, such as triggers, or is this a built-in feature of SQL Server?
Please help me.
Thank you.
If the deletion has already occurred and you had nothing in place to track or log it, the chances of identifying the culprit are very low - not zero, but not far above it.
If you can use the transaction log to identify the exact deletion and its session, which we already know used the shared login, and you have successful-login security auditing enabled, then in theory you can trace it back to the IP address the deletion came from.
However, that is a pretty slim chance. I suspect the login belongs to the application software itself, and you would have needed that to be running directly on the user's machine, i.e. not a 3-tier or web-based server of any flavor, but a good old thick-client app making direct connections.
That gets you an IP and a time, but not who was logged in on that machine at that time; if the machine is shared in any form, you then have to pull login records from the machine itself, and so on.
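To make the first step concrete, here is a sketch of pulling delete records out of the active transaction log with the undocumented fn_dblog function. Table and connection details are placeholders, and this only works while the relevant log records have not yet been truncated:

```python
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver;DATABASE=YourDb;Trusted_Connection=yes"  # placeholders
)

# Join each LOP_DELETE_ROWS record to its transaction's LOP_BEGIN_XACT
# record to recover the begin time and the login SID behind the delete.
sql = """
SELECT x.[Begin Time],
       SUSER_SNAME(x.[Transaction SID]) AS login_name,
       d.AllocUnitName
FROM fn_dblog(NULL, NULL) AS d
JOIN fn_dblog(NULL, NULL) AS x
  ON x.[Transaction ID] = d.[Transaction ID]
 AND x.Operation = 'LOP_BEGIN_XACT'
WHERE d.Operation = 'LOP_DELETE_ROWS'
  AND d.AllocUnitName LIKE 'dbo.YourTable%'   -- placeholder table name
"""

for begin_time, login_name, alloc_unit in conn.execute(sql):
    # login_name will just be the shared login here; the useful part is the
    # timestamp, which you can match against the successful-login entries in
    # the error log (with their [CLIENT: x.x.x.x] addresses) to get an IP.
    print(begin_time, login_name, alloc_unit)
```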
I have been a lurker for several years and I think I have a question that hasn't been answered here.
We were in the middle of some pretty intense maintenance on our SQL Server last night. Our primary database's MDF file was very badly fragmented. We maintain a test copy of this database for testing and proof-of-concept purposes.
I had set up log shipping on the test database, and without thinking I deleted the test database without removing the log shipping first. I am getting error 14421 - The log shipping secondary database SERVER.database has restore threshold of 45 minutes and is out of sync. No restore was performed for 10310 minutes. Restored latency is 0 minutes. Check agent log and logshipping monitor information.
I have removed everything I could with T-SQL. My research leads me to believe that this error is due to the backup job still trying to operate, but I cannot find that job to remove it. It's really not a big deal, but the error shows up every couple of minutes in the log.
Is there anything I can do?
Thanks in advance!
Log shipping configuration is stored in msdb, not in the database itself. All you need to do is create a new database with the same name as the deleted one, right-click it and choose Properties, go to the Transaction Log Shipping page, and uncheck the box that enables it for this database. When you click OK, the jobs (on both the primary and the secondary) will be removed.
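If you would rather script it than click through the UI, msdb also ships sp_delete_log_shipping_* procedures that drop the same jobs and monitor records. A sketch, assuming the deleted database was the secondary named in the 14421 error (server, database, and driver names are placeholders):

```python
import pyodbc  # pip install pyodbc

# Run on the secondary server that is raising error 14421.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yoursecondary;DATABASE=msdb;Trusted_Connection=yes",  # placeholders
    autocommit=True,
)

# Removes the secondary-side configuration, the copy/restore jobs, and the
# monitor records that the 14421 alert keeps checking.
conn.execute(
    "EXEC sp_delete_log_shipping_secondary_database "
    "@secondary_database = N'YourTestDb'"  # placeholder database name
)

# On the primary, the matching call cleans up its record of this secondary:
# EXEC sp_delete_log_shipping_primary_secondary
#      @primary_database = N'YourSourceDb',
#      @secondary_server = N'yoursecondary',
#      @secondary_database = N'YourTestDb'
```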
We do not use our Azure storage account for anything except standard Azure infrastructure concerns (i.e. no application data). For example, the only tables we have are the WAD (Windows Azure Diagnostics) ones, and our only blob containers are for vsdeploy, iislogfiles, etc. We do not use queues in the app either.
14 cents per gigabyte isn't breaking the bank yet, but after several months of logging WAD info to these tables, the storage account is quickly nearing 100 GB.
We've found that deleting rows from these tables is painful, with continuation tokens and so on, because some contain millions of rows (we have been logging diagnostics info since June 2011).
One idea I have is to "cycle" storage accounts. Since they contain diagnostic data used by MS to help us debug unexpected exceptions and errors, we could log the WAD info to storage account A for a month, then switch to account B for the following month, then C.
By the time we get to the 3rd month, it's a pretty safe bet that we no longer need the diagnostics data from storage account A, and can safely delete it, or delete the tables themselves rather than individual rows.
Has anyone tried an approach like this? How do you keep WAD storage costs under control?
Account rotation would work, if you don't mind the manual work of updating your configuration and redeploying every month. That would probably be the most cost-effective route, as you wouldn't have to pay for all the transactions needed to query and delete the logs.
There are some tools that will purge logs for you. Azure Diagnostics Manager from Cerebrata [which is currently showing me an ad to the right :) ] will do it, though it's a manual process too. I think they have some PowerShell cmdlets to do it as well.
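If you do want to purge rows directly, the mechanics look roughly like this. WAD partition keys encode the timestamp as a .NET tick count, so "everything older than N days" is a single range filter. The sketch below assumes the azure-storage Python package's TableService (the import path and signatures vary across SDK versions) and placeholder account credentials:

```python
from datetime import datetime, timedelta
from azure.storage.table import TableService  # pip install azure-storage (path varies by SDK version)

table = TableService(account_name="yourwadaccount", account_key="yourkey")  # placeholders

# WAD PartitionKeys are "0" + the .NET tick count (100 ns units since
# 0001-01-01), zero-padded to 19 digits, so a cutoff converts directly.
cutoff = datetime.utcnow() - timedelta(days=60)
ticks = int((cutoff - datetime(1, 1, 1)).total_seconds() * 10 ** 7)
cutoff_pk = "0" + str(ticks).zfill(19)

# Newer SDKs page through continuation tokens for you; older ones hand
# them back and you must chase them yourself.
for entity in table.query_entities(
        "WADLogsTable",
        filter="PartitionKey lt '" + cutoff_pk + "'",
        select="PartitionKey,RowKey"):
    table.delete_entity("WADLogsTable", entity.PartitionKey, entity.RowKey)
```

Every one of those deletes is a billable transaction, which is exactly why dropping whole tables or rotating accounts, as suggested above, tends to be the cheaper move.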