I know this topic is a bit general but I do have a specific use case. Does the logging utility in SSIS have any specific advantages, or additional information, over the internal logging of the SSISDB?
My environment contains packages that are executed via the stored procs in the SSISDB. We have a separate table that tracks the execution ID of each package as it is processed and reports the status back to the user through a web application.
I am trying to have some foresight here. So far I have been monitoring the event_messages, operations and executions tables in SSISDB, and they have been completely sufficient for debugging and troubleshooting. SSISDB seems to keep track of the exact same information that SSIS logging does.
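For reference, this is the sort of query I run against the catalog views when a package fails (the views and columns are the documented SSISDB ones; only the filter is specific to my setup):

    -- Messages for failed executions, newest first
    SELECT e.execution_id,
           e.package_name,
           e.status,                    -- 4 = failed, 7 = succeeded
           m.message_time,
           m.message
    FROM   SSISDB.catalog.executions     AS e
    JOIN   SSISDB.catalog.event_messages AS m
           ON m.operation_id = e.execution_id
    WHERE  e.status = 4
    ORDER BY m.message_time DESC;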
Are there any discrepancies or differences, other than the obvious data structure differences, between the SSISDB tables and SSIS package logging? Specifically, does SSIS logging have any additional information that the SSISDB does not record? I would rather not have to come back and update all of my SSIS packages (and sprocs, and application...) later if I can avoid it.
I'm hoping someone else has come across this scenario and has some insight. Thanks!
I started on a project a year or so ago and set up both types of logging, with the same fear you have. I can safely say it was a complete waste of time, and I only ever use the SSISDB execution reports.
I highly recommend you check out custom SSISDB logging options if you haven't already.
https://www.timmitchell.net/post/2016/12/22/ssis-custom-logging-levels/
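As a quick illustration of what you get on SSIS 2016+, you can also dial the logging level up or down per execution when starting the package from the catalog (the folder/project/package names here are placeholders):

    DECLARE @execution_id BIGINT;

    EXEC SSISDB.catalog.create_execution
         @folder_name     = N'MyFolder',        -- placeholder
         @project_name    = N'MyProject',       -- placeholder
         @package_name    = N'MyPackage.dtsx',  -- placeholder
         @use32bitruntime = 0,
         @reference_id    = NULL,
         @execution_id    = @execution_id OUTPUT;

    -- 0 = None, 1 = Basic (default), 2 = Performance, 3 = Verbose
    EXEC SSISDB.catalog.set_execution_parameter_value
         @execution_id,
         @object_type     = 50,                 -- system parameter
         @parameter_name  = N'LOGGING_LEVEL',
         @parameter_value = 3;

    EXEC SSISDB.catalog.start_execution @execution_id;

If I recall correctly, a custom level from the article is applied the same way, via the CUSTOMIZED_LOGGING_LEVEL parameter instead.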
My question is not about a specific piece of code. I am trying to automate a business data governance data flow using a SQL backend. I have put a lot of time into searching the internet and reaching out to people for the right direction, but unfortunately I have not yet found anything promising, so I have a lot of hope that I will find some people here to save me from a big headache.
Assume that we have a flow (semi-static/dynamic) for our business process. Different departments own portions of the data. We need to take different actions during the flow, such as data entry, data validation, data export, approvals, rejections, notes, etc., and also automatically define deadlines, create reports of overdue tasks and the people accountable for them, and so on.
I guess the data management part would not be extremely difficult, but how to write an application (code) to run the flow (a workflow engine) is where I struggle. Should I use triggers, or should I write code that frequently runs queries to push completed steps to the next step? How can I use SQL tables to keep track of the flow? (A sketch of what I mean is below.)
If someone could give me some hints on this matter, it would be greatly appreciated.
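For illustration, here is a minimal sketch of the kind of table-driven flow I have in mind (all names invented):

    -- Static definition of the flow
    CREATE TABLE dbo.WorkflowStep (
        StepId       INT           PRIMARY KEY,
        StepName     NVARCHAR(100) NOT NULL,
        OwnerDept    NVARCHAR(50)  NOT NULL,  -- department accountable for the step
        DeadlineDays INT           NOT NULL   -- SLA used to compute due dates
    );

    -- Which step follows which, per outcome (approved, rejected, ...)
    CREATE TABLE dbo.WorkflowTransition (
        FromStepId INT          NOT NULL REFERENCES dbo.WorkflowStep(StepId),
        Outcome    NVARCHAR(20) NOT NULL,
        ToStepId   INT          NOT NULL REFERENCES dbo.WorkflowStep(StepId),
        PRIMARY KEY (FromStepId, Outcome)
    );

    -- One row per business item moving through the flow
    CREATE TABLE dbo.WorkflowInstance (
        InstanceId    INT IDENTITY PRIMARY KEY,
        CurrentStepId INT          NOT NULL REFERENCES dbo.WorkflowStep(StepId),
        DueDate       DATE         NOT NULL,
        Status        NVARCHAR(20) NOT NULL DEFAULT N'InProgress'
    );

A scheduled job (rather than triggers) could then advance instances: read the completed steps, look up the transition, update CurrentStepId, and recompute DueDate from DeadlineDays. Overdue reports become a simple query on WorkflowInstance.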
I would suggest using SQL Server Integration Services (SSIS). You can easily manage the scripts and workflow based on some lookup selections, and you can also schedule the SSIS package on a timely basis to trigger and do the job.
It's a hard task to implement an application server on SQL Server, and it would be a very vendor-dependent solution. The best way, I think, is to use SQL Server as the data storage and some application server for the business logic over that storage.
I have a client who has been promised that he will get a regular copy of the database behind the application we are hosting for him.
The copy is in fact a backup (DB_EXPORT.BAK) that he downloads through SFTP.
(I know, I did not make that promise.) I do not want to give him the whole database with all the proprietary stored procedures, functions, users and other stuff. I want to give him a slimmed-down version of that database with most tables, only selected sp's, some functions, no users and so on.
As I see there are two ways to do this:
an SSIS job that copies certain objects (using the Import/Export Wizard)
replication (snapshot or transactional)
The thing is: the original (DB1) AND the copy (DB_EXPORT) will be hosted on the same server. So using replication feels a bit awkward (publishing to yourself?), but it does give an easy interface for configuring which articles to replicate. Using an SSIS package feels more logical but is actually harder to change.
What can you say about this? Is there a better way of doing this? I am looking for a way that people who only just understand SQL Server will be able to follow.
Thanks for thinking with me!
Not sure if this is the best answer, but I would go with snapshot replication per our discussion, and your avoidance of TSQL scripting.
Replication is relatively simple to set up but can be a nightmare to troubleshoot (especially transactional). Oftentimes it's easier to completely delete the publication/subscription and rebuild.
On that note, you can fully script replication configurations -- if someone else has to maintain this, it may be as simple as you scripting out the replication removal (pub and sub removal), and scripting out the replication build-out. All they'd have to do is run the drop/build scripts and it's done.
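A rough sketch of what the build script boils down to (publication, article and database names are placeholders, and the server must already be configured as its own Distributor):

    USE DB1;
    EXEC sp_replicationdboption
         @dbname = N'DB1', @optname = N'publish', @value = N'true';

    EXEC sp_addpublication
         @publication = N'DB_EXPORT_pub',
         @repl_freq   = N'snapshot',
         @status      = N'active';

    EXEC sp_addpublication_snapshot
         @publication = N'DB_EXPORT_pub';   -- creates the Snapshot Agent job

    -- One sp_addarticle per table/proc the client should receive
    EXEC sp_addarticle
         @publication   = N'DB_EXPORT_pub',
         @article       = N'Customers',
         @source_owner  = N'dbo',
         @source_object = N'Customers';

    DECLARE @subscriber SYSNAME;
    SET @subscriber = @@SERVERNAME;          -- same server, different database
    EXEC sp_addsubscription
         @publication       = N'DB_EXPORT_pub',
         @subscriber        = @subscriber,
         @destination_db    = N'DB_EXPORT',
         @subscription_type = N'push';

The matching drop script is essentially sp_dropsubscription and sp_droppublication in reverse order.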
You can also alter the scheduled job to run the backup immediately following the snapshot generation.
Is there anything wrong with creating an ALTER script for the entire database in Analysis Services in SSMS on the development server and executing that script in SSMS on the production server, instead of deploying through BIDS?
No, you actually should never use BIDS to deploy to prod. BIDS always overwrites the management settings (security and partitions) of the target server.
The best option is to use the Deployment Wizard (Microsoft.AnalysisServices.Deployment.exe). It enables you to generate an incremental deployment script that updates the cube and dimension structures, and you can customize how roles and partitions are handled. It takes as input the XML output files generated by building the SSAS project in BIDS, and you can run it in several modes:
Silent mode (/s): runs the utility in silent mode, without displaying any dialog boxes.
Answer file mode (/a): does not deploy; only modifies the input files.
Output mode (/o): no user interface is displayed; generates the XMLA script that would be sent to the deployment targets, without actually deploying.
If you want a complete synchronization, you can use the "Synchronize Database Wizard". It pretty much clones a database. When the destination database already exists, it performs metadata synchronization and incremental data synchronization. When the destination database does not exist, a full deployment and data synchronization is done.
I think the main disadvantage of scripting the whole database is that everything may be reprocessed. Also, if another team or team member is responsible for deploying the script it may be a lot harder to review and understand if everything is rebuilt with each update.
I work for Red Gate, and we recently introduced a free tool called SSAS Compare to help manage this scenario. It helps you create a script containing just the changes you want to deploy.
I read it somewhere"
"A separate database should be created
if SSIS logging is required. (Do not
use the Sysdtslog90 table in either
master or msdb. This is not a security
related concern but could be a
performance issue since SSIS can
generate a lot of logging data.
Microsoft recommends creating a
separate database for logging."
Why? Just to keep things separate and be more organized or there is a deeper meaning to this?
Regards
Manjot
"a performance issue since SSIS can generate a lot of logging data"
This is so you can finely control the I/O path of the logging data: which LUN it lands on, how many spindles it gets, and so on.
If it's generating a lot of log information, it may compete with the other database functions if they are sharing the same physical disks. This means both your logging and your database could take a performance hit.
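A sketch of what that looks like in practice, with the drive letters and sizes obviously depending on your storage layout:

    -- Dedicated logging database on its own physical drives, so SSIS log
    -- writes don't compete with the user databases for I/O
    CREATE DATABASE SSISLogging
    ON PRIMARY
        ( NAME = SSISLogging_data,
          FILENAME = N'L:\Data\SSISLogging.mdf',  -- LUN used only for logging
          SIZE = 1GB, FILEGROWTH = 512MB )
    LOG ON
        ( NAME = SSISLogging_log,
          FILENAME = N'M:\Log\SSISLogging.ldf',   -- separate spindles again
          SIZE = 256MB, FILEGROWTH = 128MB );

    ALTER DATABASE SSISLogging SET RECOVERY SIMPLE; -- logging data is disposable

Then point the SSIS SQL Server log provider at this database; it creates and writes to its log table (sysdtslog90 on 2005) there instead of in master or msdb.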
My company at present has the following Setup:
127 SQL Servers (2000 and 2005) and over 700 databases. We are in the process of server consolidation and are planning on a master server / target servers setup to enable centralized administration. As part of this project, I have been given the responsibility of creating a script-based automated backup/maintenance solution.
Thanks to Ola Hallengren's script available here I have made a lot of progress.
Here is what I plan:
I have a database in the master server which has details of SQL instances, databases and backup path details.
I am in the process of modifying Hallengren's script to read from this database and create jobs dynamically.
Users are allowed to define what kind of backup they want, how often and how long the backup needs to be kept.
I am also looking at having the ability to spread out jobs, so that I do not have too many jobs running at the same time.
My thoughts are to create tables that have the data needed to be passed as parameters to sp_add_job, sp_add_jobstep and sp_add_jobschedule.
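For example, for one row of those tables the generated calls would look roughly like this (the job name, schedule, and Hallengren parameters are only an illustration, and I'm assuming his procedures are installed in master):

    DECLARE @job_name SYSNAME;
    SET @job_name = N'Backup - FULL - SalesDB';

    EXEC msdb.dbo.sp_add_job @job_name = @job_name;

    EXEC msdb.dbo.sp_add_jobstep
         @job_name      = @job_name,
         @step_name     = N'Run backup',
         @subsystem     = N'TSQL',
         @database_name = N'master',
         @command       = N'EXECUTE dbo.DatabaseBackup
                                @Databases  = ''SalesDB'',
                                @Directory  = ''\\backupshare\sql'',
                                @BackupType = ''FULL''';

    EXEC msdb.dbo.sp_add_jobschedule
         @job_name          = @job_name,
         @name              = N'Nightly',
         @freq_type         = 4,           -- daily
         @freq_interval     = 1,
         @active_start_time = 010000;      -- 01:00

    EXEC msdb.dbo.sp_add_jobserver
         @job_name = @job_name, @server_name = N'(local)';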
Please share your experiences in the pitfalls and hurdles with this setup. All ideas welcome.
Thanks,
Raj
You might also consider the approach of creating one full backup job and one transaction log backup job on each server, which retrieve the databases from your master database and feed them into a procedure that does the backup. You could run the jobs every 5 minutes, and the procedure would have to work out what time it is and what type of backup is required (rough sketch below).
I say this because things could get messy creating massive amounts of jobs in an automated manner -- but maybe you have it worked out well. Be sure to create a 'primary key' on the job name if you take your original approach.
Spreading out jobs should be easy if you keep record in the database and find available windows to put new jobs in. I've seen scripts for this.
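A rough sketch of that dispatcher procedure on 2005+ (the 01:00 full-backup window, file paths, and names are all assumptions, and real code would add timestamps to the file names):

    CREATE PROCEDURE dbo.usp_BackupDispatcher
    AS
    BEGIN
        DECLARE @db SYSNAME, @cmd NVARCHAR(MAX), @full BIT;

        -- Full backups in the 01:00 window; log backups the rest of the day
        SET @full = CASE WHEN DATEPART(HOUR, GETDATE()) = 1
                          AND DATEPART(MINUTE, GETDATE()) < 5
                         THEN 1 ELSE 0 END;

        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name
            FROM   sys.databases
            WHERE  database_id > 4            -- skip system databases
              AND  state_desc = 'ONLINE'
              AND (@full = 1 OR recovery_model_desc = 'FULL'); -- no log backups in SIMPLE

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @cmd = CASE WHEN @full = 1
                 THEN N'BACKUP DATABASE ' + QUOTENAME(@db)
                    + N' TO DISK = N''\\backupshare\' + @db + N'.bak'''
                 ELSE N'BACKUP LOG ' + QUOTENAME(@db)
                    + N' TO DISK = N''\\backupshare\' + @db + N'.trn''' END;
            EXEC (@cmd);
            FETCH NEXT FROM dbs INTO @db;
        END
        CLOSE dbs; DEALLOCATE dbs;
    END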
You could also do this in an SSIS package that runs from your admin server. It would iterate over the databases and connect to the servers to perform the backups.
Check out the first 3 articles here: http://www.sqlmag.com/Authors/AuthorID/1089/1089.html
Moving to 2005, we were not happy with the Integration Services maintenance plans.
We wrote sp's to do full and tran backups as well as reindexing, aggregate management, archiving, etc.
A daily and an hourly job would step through master..sysdatabases and, in a TRY...CATCH block, apply the necessary maintenance. It would not be hard to read from any table and check user-defined conditions.
Unfortunately there was no way to tie the output to msdb..sysjobhistory, so we logged to a central table. The upside was we had a lot more control over what was logged.
However, it would be simpler to read than going through 700 jobs.
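Something along these lines (table and database names invented for the example):

    CREATE TABLE dbo.MaintenanceLog (
        LogId        INT IDENTITY   PRIMARY KEY,
        DatabaseName SYSNAME        NOT NULL,
        Task         NVARCHAR(50)   NOT NULL,  -- 'FULL', 'TRAN', 'REINDEX', ...
        StartTime    DATETIME       NOT NULL,
        EndTime      DATETIME       NULL,
        ErrorMessage NVARCHAR(4000) NULL       -- filled in from the CATCH block
    );

    -- Inside the job's loop over databases:
    DECLARE @LogId INT;
    BEGIN TRY
        INSERT dbo.MaintenanceLog (DatabaseName, Task, StartTime)
        VALUES (N'SalesDB', N'FULL', GETDATE());
        SET @LogId = SCOPE_IDENTITY();

        BACKUP DATABASE SalesDB TO DISK = N'\\backupshare\SalesDB.bak';

        UPDATE dbo.MaintenanceLog SET EndTime = GETDATE()
        WHERE  LogId = @LogId;
    END TRY
    BEGIN CATCH
        UPDATE dbo.MaintenanceLog
        SET    EndTime = GETDATE(), ErrorMessage = ERROR_MESSAGE()
        WHERE  LogId = @LogId;   -- failure recorded centrally, loop continues
    END CATCH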