MSSQL Automated Jobs

I have recently been reading about configuring jobs within SQL Server to perform specific tasks.
I recently had an issue whereby all the DB indexes were more than 75% fragmented, and I wondered if there was a way to have SQL Server manage itself automatically.
Everything I read about setting up and configuring jobs mentions the SQL Server Agent.
On the DB server I was looking at, the SQL Server Agent was switched off.
This made me think that having a "job" handle the rebuilding/reorganising of indexes may not be great if this agent can simply be disabled...
Is there anything at a DB level which can be configured to do this, or is this still really in the hands of a "DBA"?
So to summarise, my question is, what is the best way to handle rebuilding/reorganising indexes?

A job calling some stored procedures could be your answer.
Whether to automate this task depends on your DB: volume of data, degree of fragmentation, batch updates, etc.
I recommend checking your index fragmentation regularly before applying an automated solution.
Also, you can programmatically check whether SQL Server Agent is running.
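As a starting sketch of both checks (the thresholds below are just the common rules of thumb, and the index/table names in the comments are placeholders):

```sql
-- Fragmentation overview for the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.index_id > 0                          -- skip heaps
  AND ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index (placeholder names):
-- ALTER INDEX IX_Example ON dbo.SomeTable REORGANIZE;  -- roughly 5-30% fragmented
-- ALTER INDEX IX_Example ON dbo.SomeTable REBUILD;     -- above roughly 30%

-- Checking whether SQL Server Agent is running (SQL Server 2008 R2 SP1 and later):
SELECT servicename, status_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';
```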

Related

Trigger Based Replication (Live Sync) OR Transactional Replication in MSSQL

Can someone give me a clear idea about which technique/method is more reliable, less memory-consuming, and faster for replicating data from one database to another in MSSQL (SQL Server 2012), and why? We are in the process of developing a live GPS-based tracking application, and I am confused about which method to proceed with:
Trigger Based Replication (Live Sync)
(OR)
Transactional Replication
Thanks in Advance ☺
I would recommend using standardised solutions whenever possible. Within the choice given to you, transactional replication should be an obvious favourite, because:
It doesn't require any coding and can be deployed using standard tools. This makes it much faster to deploy and maintain - any proper DBA can do it, some of them even blindfolded.
Actual data transfer is done by replication agents which are separate applications external to the SQL Server process and client connections. Any network issues within the publisher-distributor-subscriber(s) chain will lead to delays in copying the data, but they will not affect the performance of the publisher database itself.
With triggers, you have neither of these advantages: you will have to add a lot of code, and sluggish network will make data-changing queries slower, potentially leading to timeouts.
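To illustrate the coding burden, here is a rough sketch of what one such sync trigger might look like; the table, the columns, and the linked server name REPORTS are all hypothetical:

```sql
-- Hypothetical example only: dbo.Positions and the linked server
-- REPORTS are assumptions, not part of the original question.
CREATE TRIGGER trg_Positions_Sync
ON dbo.Positions
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- The remote INSERT runs inside the same transaction as the
    -- original DML (and promotes it to a distributed transaction),
    -- so any network delay slows the source query and can cause
    -- timeouts.
    INSERT INTO REPORTS.TrackingDb.dbo.Positions
        (DeviceId, Latitude, Longitude, RecordedAt)
    SELECT DeviceId, Latitude, Longitude, RecordedAt
    FROM inserted;
END;
```

And you would need similar triggers for UPDATE and DELETE on every replicated table, which is exactly the maintenance burden replication avoids.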
Of course, there are many more ways to move the data between the databases in SQL Server, such as (in no particular order):
AlwaysOn Availability Groups (or their predecessor, database mirroring);
Log shipping;
CDC (Change Data Capture);
Service Broker.
However, given your needs, transactional replication still looks like your best bet, overall.

Is database replication the way to go to keep production and development databases in sync?

I am not a DBA; however, my small company is using SQL Server for a project that we are working on. On the same SQL Server instance there is an MS Great Plains (Dynamics GP) database, and we pass data back and forth between the two databases (mainly a Scribe process getting our data and transferring it into GP).
We are using database replication (snapshot) as a means of syncing our production and development (and soon DR) environments. Right now it's set to replicate every three hours during core business hours, mainly to keep production and development up to date for us while we are working.
1) Is this the correct way of doing such a thing? Is there a better way?
2) Does this stress the server or the SQL Server? Is this a possible cause of GP database issues because they are on the same server and instance?
3) Replication only occurs on the non GP database - this shouldn't affect the GP database at all right?
Our database should stay rather small. It is my understanding that tables get locked while the replication is going on. Do the tables stay locked until the entire replication is done, or is each table released as soon as it has been copied and the process moves on?
There are many ways to sync one SQL Server with another: replication (which you are currently using), log shipping, backup/restore, mirroring, and AlwaysOn, to name a few methods.
The "best" method depends on your requirements. If you're concerned about disaster recovery, snapshot replication is not a great option and I would look into AlwaysOn Availability Groups.
If load on your production system is a concern I would look into nightly restoring a backup of the production system.
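As a sketch of that approach (the database names and file paths here are assumptions):

```sql
-- On the production server (COPY_ONLY so the regular backup chain
-- is not disturbed):
BACKUP DATABASE ProdDb
    TO DISK = N'\\backupshare\ProdDb.bak'
    WITH COPY_ONLY, INIT;

-- On the development server:
RESTORE DATABASE DevDb
    FROM DISK = N'\\backupshare\ProdDb.bak'
    WITH REPLACE,
         MOVE N'ProdDb'     TO N'D:\Data\DevDb.mdf',
         MOVE N'ProdDb_log' TO N'E:\Logs\DevDb_log.ldf';
```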
To answer your specific questions:
1) Is this the correct way of doing such a thing? Is there a better way?
This answer depends on your exact requirements.
2) Does this stress the server or the SQL Server?
Doing something is always more work than doing nothing. Depending on many factors this could affect your production server.
3) Replication only occurs on the non GP database - this shouldn't affect the GP database at all right?
Your server only has a finite amount of hardware resources, so it could affect the performance of queries against the GP database.
We have found that having replication in place also adds complexity when it comes to upgrades and schema changes. If you must have dev and prod in sync (and I would argue about that) Always On or log shipping would be my preferred techniques.
DR is a separate issue. You have to determine your Recovery Point Objective (RPO) and Recovery Time Objective (RTO) and adopt the appropriate technology to satisfy your requirements.

Mimicking the setup of an Azure SQL database locally

We are having significant performance problems on Azure, and various factors have made this difficult to examine precisely on Azure itself. If the problems are in the performance of the code or of the database, I would like to examine them by running locally. However, it appears that the default configuration of a database created on Azure is different from a local one: for example, Azure enables READ_COMMITTED_SNAPSHOT by default, as I understand it, but that is not the default for a database I create in SQL Server. That means the performance characteristics of the two differ.
My question is: how can I find all such discrepancies between the two setups and correct them, so that when I find speed issues locally I will know they represent speed issues on Azure? I am a SQL Server novice. I recognize that I cannot recreate "time to database" and "network time" issues that way, but I don't think those are what are killing us.
You might find my answer to this post useful.
We saw great benefits from implementing telemetry to gather information and analyse it later, in order to find out where and how you are spending your time interacting with SQL and therefore how to improve the query plans. Here is a link to the CAT blog post: http://blogs.msdn.com/b/windowsazure/archive/2013/06/28/telemetry-basics-and-troubleshooting.aspx
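On the configuration side, one way to spot discrepancies such as the READ_COMMITTED_SNAPSHOT default is to run the same settings query on both servers and diff the output. A sketch (the database name is a placeholder, and you may want more columns from sys.databases):

```sql
-- Run against both the Azure database and the local copy, then compare.
SELECT name,
       compatibility_level,
       collation_name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = N'YourDb';   -- placeholder name

-- To match the Azure default locally (requires exclusive access to the DB):
-- ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
```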

SQL Server full copy of database for read operations

Please advise what suits my problem best. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is essentially limited by disk read/write speed. I'm going to get another server and install a second SQL Server there in order to host SSRS. My main criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I make a backup via jobs, copy it to the second server, and restore it there, also via jobs. But that's not the best solution.
All the replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any possibility of creating a read-only replica that is synced periodically without affecting the main DB? I would put all the report load on that replica and have the primary DB used only by the web app.
What solution might suit my problem? I'm not a DBA, so any pointers on where to start investigating are appreciated. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
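For reference, a bare-bones transactional publication setup looks roughly like the following sketch. All the names (database, publication, article, subscriber) are placeholders, and a distributor plus the snapshot agent job must already be configured:

```sql
-- Run in the publisher database. All names are placeholders.
EXEC sp_replicationdboption
     @dbname  = N'WebAppDb',
     @optname = N'publish',
     @value   = N'true';

EXEC sp_addpublication
     @publication = N'ReportingPub',
     @sync_method = N'concurrent',   -- concurrent snapshot: shared locks held only briefly
     @repl_freq   = N'continuous',
     @status      = N'active';

EXEC sp_addarticle
     @publication   = N'ReportingPub',
     @article       = N'Orders',
     @source_owner  = N'dbo',
     @source_object = N'Orders';

EXEC sp_addsubscription
     @publication       = N'ReportingPub',
     @subscriber        = N'REPORTSRV',
     @destination_db    = N'ReportingDb',
     @subscription_type = N'Push';
```

On the subscriber you are then free to add whatever reporting indexes you need.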

Windows Service or SQL Job?

I have an archiving process that basically deletes archived records after a set number of days. Is it better to write a scheduled SQL job or a Windows service to accomplish the deletion? The database is SQL Server 2005.
Update:
To speak to some of the answers below, this question is regarding an in house application and not a distributed product.
It depends on what you want to accomplish.
Do you want to store the deleted archives somewhere? Log the changes? A SQL job should perform better, since it runs directly in the database, but it is easier to give a service access to resources outside the database. So it depends on what you want to do...
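If you do want to keep the deleted rows, T-SQL's OUTPUT clause can archive them in the same statement. A sketch, with hypothetical table and column names:

```sql
-- dbo.Records / dbo.RecordsArchive and their columns are assumptions;
-- the OUTPUT ... INTO clause captures the deleted rows as they go.
DELETE FROM dbo.Records
OUTPUT deleted.Id, deleted.Payload, deleted.CreatedOn
  INTO dbo.RecordsArchive (Id, Payload, CreatedOn)
WHERE CreatedOn < DATEADD(DAY, -90, GETDATE());
```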
I would think a scheduled SQL job would be a safer solution, since if the database is migrated to a new machine, someone doing the migration might forget that there is a Windows service involved and forget to install/start it on the new server.
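If you go the SQL job route, the job step itself could be as simple as the following sketch (the table, column, and the 90-day retention window are assumptions):

```sql
-- Delete in small batches so the transaction log stays manageable
-- and locks are held only briefly.
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(DAY, -90, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.ArchivedRecords
    WHERE ArchivedOn < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;
```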
In the past we've had a number of SQL jobs run. Recently, however, we've been moving to calling those processes from .NET code as a client application run from a Windows scheduled task, for two reasons:
It's easier to implement features like logging this way.
We have other batch jobs that don't run in the database and therefore must be in Windows scheduled tasks. This way all the batch jobs, of any type, are listed in one place.
Please note that regardless of how you do it, for this task you do not want a service. Services run all day and will consume a bit of the server's RAM all day.
Here, you have a task that needs to run once a day, every day. As such, you'd want either a job in SQL Server or, as Joel described, an application (console or WinForms) set up on a schedule to execute and then unload from the server's memory space.
Is this for you/in house, or is it part of a product that you distribute?
If in house, I'd say the SQL job. That's just another service too.
If it's part of a product that you distribute, I would consider how the installation and support will be.
To follow on Corey's point: if this is externally distributed, will you need to support SQL Server Express? If not, I'd go with the SQL job directly. Otherwise, you'll have to get more creative, as SQL Server Express does not have the SQL Agent that comes with the full versions of SQL Server 2005 (the same was true of MSDE). Without the SQL Agent, you'll need another way to fire the job automatically. It could be a Windows service, a scheduled task (calling a .NET app, PowerShell script, VBScript, etc.), or you could try to implement some trigger in SQL Server directly.