Start SQL Server Jobs when field = specific value - sql

I don't know if this is even possible, so I would appreciate any ideas, even those outside of SQL Server 2005, on how this might be accomplished. I have a linked server set up to a remote mainframe and a simple import job that runs overnight. The problem is that the mainframe table the import reads from is just a temporary report file that gets overwritten each time a user runs that report, sometimes with different parameters, so the data is always changing. One request was that the SQL job should run only when a specific user runs the report; that user is stored as a field in the same mainframe report table the import comes from. Setting up a scheduled run on the mainframe is not an option, since we don't control it and having the owners set it up would be costly (don't ask me why).
Any ideas that will keep me from forcing the user to run the mainframe report at a specific time would be helpful.

Well, the only thing you can do from this side is to poll periodically and detect a change. You could set up a job that queries only the report version, timestamp, and author, runs every 5 minutes, and triggers the import job when it detects a change. Not elegant, but it may be good enough; a rough sketch is below.
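As a rough, hedged sketch of what that watcher step could look like in T-SQL (the linked server name MAINFRAME, the report table and its RUN_USER/RUN_TS columns, and the dbo.ReportWatermark tracking table are all placeholder names invented for illustration):

-- SQL Agent job step, scheduled every 5 minutes.
-- All object names below are placeholders; adjust to the real linked server and report table.
DECLARE @runUser varchar(50), @runTs datetime;

-- Read only the "who ran it and when" fields from the mainframe report table.
SELECT TOP (1) @runUser = RUN_USER, @runTs = RUN_TS
FROM MAINFRAME.REPORTDB.REPORTLIB.REPORT_OUT;

-- Start the import only when the expected user produced a newer report
-- than the one we imported last time.
IF @runUser = 'EXPECTED_USER'
   AND @runTs > (SELECT LastImportedTs FROM dbo.ReportWatermark)
BEGIN
    UPDATE dbo.ReportWatermark SET LastImportedTs = @runTs;
    EXEC msdb.dbo.sp_start_job @job_name = N'Mainframe report import';
END

The import job itself stays unchanged; this step only decides whether to kick it off.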

SQL Agent job failure universal handling

I'm in a situation where I have a server running SQL Server 2012 with roughly two hundred scheduled jobs (all SSIS package executions). I'm facing a directive from management that I need to run some custom software to create a bug report ticket whenever a job fails. Right now half the jobs notify an operator on failure, while the other half use a "go to step X - send failure email" action on each step's failure, where "step X" is some SQL that queries the DB and sends out an email saying which job failed at which step.
So what I'm looking for is some universal solution where I can have every job do the same thing when it fails (in this case, run some program that creates a bug tracking ticket). I am trying to avoid the situation where I manually go into every single job and add a new step at the end, with all previous steps changing to "go to step Y on failure" where step Y is this thing that creates the bug report.
My first thought was to create a new job that queries the execution history tables, looks for unhandled failures, and creates the bug reports itself. However, I already made the mistake of presenting this idea to my manager and was told it's not a viable solution because it's "reactive and not proactive" and doesn't create tickets in real time. I should know better than to brainstorm with non-programming management, but it's too late; that option is off the table and I haven't been able to uncover any other methods.
Any suggestions?
I'm proposing this as an answer, though it's not a technical solution. Present the possible solutions and let the manager decide:
Update all the Agent Jobs - This will take a lot of time and every job will need to be tested, which will also take a lot of time. I'd guess 2-8 weeks depending on how it's done.
Create an error handler job that monitors the logs and creates tickets based on those errors. This has two drawbacks: it is not "real-time" (as desired by the manager), and something will need to be put in place to ensure errors are only reported once. It has the upside of being a single change to manage, and it can be made near real-time by running it every minute. A sketch of the kind of watcher query such a job could run follows this list.
A third option, which would be more of a preliminary step, is to create an error report based on the logs. This will help you understand the quantity and types of failures, and may help shape the ultimate solution: do we want all these tickets, can they be broken up into different categories, do we want tickets for errors that are self-healing (e.g. connection errors with built-in retries)?
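For the second option, a minimal sketch of the watcher query, assuming a small tracking table (here dbo.TicketWatermark, an invented name) that stores the highest msdb history row already turned into a ticket; agent_datetime is an undocumented but long-standing helper function in msdb:

-- Failed job steps that have not been ticketed yet.
-- run_status = 0 means 'failed' in msdb.dbo.sysjobhistory.
DECLARE @lastId int = (SELECT LastInstanceId FROM dbo.TicketWatermark);

SELECT  h.instance_id,
        j.name AS job_name,
        h.step_id,
        h.step_name,
        h.message,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS failed_at
FROM    msdb.dbo.sysjobhistory AS h
JOIN    msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE   h.run_status = 0
        AND h.instance_id > @lastId
ORDER BY h.instance_id;

-- After tickets have been created for these rows (e.g. by a PowerShell or
-- CmdExec step calling the bug tracker), advance the watermark:
UPDATE dbo.TicketWatermark
SET    LastInstanceId = (SELECT MAX(instance_id) FROM msdb.dbo.sysjobhistory);

Run on a one-minute schedule this is still polling, but it is a single job to maintain instead of two hundred edits.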

Current session is no longer available due to structural changes in the database - Tabular

We are using a SQL Server Tabular model for self-service BI purposes. On a monthly basis some 90 distinct users work with the model. Recently we encountered some issues/errors in the client tools (Excel and Power BI) that connect to the Tabular model. See screenshots. We did not make any significant changes to the model in the past period.
We noticed that the errors keep showing up after our incremental load, i.e. a full process of a number of partitions. We process these partitions every 15 minutes; the process is kicked off by an SSIS job which is scheduled every 15 minutes and processes 5 partitions in 3 tables.
Edit: After some research I figured out that the problem lies in the perspectives. Every time I do a full process on any object, the error appears. This does not happen on the default model view. I still haven't found a solution, though.
The error occurs when you make a change to the Power BI report or the Excel file, for example when you do a refresh or click a filter. If you press refresh multiple times the connection comes back and everything works as it is supposed to. It seems like the clients lose their connection to the model. After 15 minutes the problem occurs again.
This is very aggravating for the users. Especially when they are in the middle of a presentation.
This is what we tried:
Searched Google for a solution
Checked that we have the latest SQL Server 2016 update (13.0.5149.0)
Tried SSAS builds from both Visual Studio 2015 and 2017
Did no full process on tables, only on partitions
Upgraded the server from 4 to 8 CPU cores
I hope somebody can help us.
You shouldn't get the error you are seeing from just a full process of a partition or even of a full table. We do this every hour for a number of core tables and we do not see any issues like this (and we would notice).
I am starting from the hypothesis that
Your 15 minute process is doing more than just processing the partitions with a refresh command
Something else is happening on the environment (either scheduled or not). Who has permissions to change the schema? Could it be users / developers deliberately or not making changes?
The only things that should cause that kind of error would be Alter, Delete or CreateOrReplace TMSL commands
So, unless that triggers your own ideas on a diagnostic process, I would take the following steps.
Note: I presume that your users also see this issue on your test environment when you run your 15 minute processing routine there. You should do the following on that test environment, where nothing else is running, to eliminate the possibility of someone else interfering with the experiment. If you don't have a representative test environment then you will have to do this on live, but I would do it out of hours or under some kind of change control process, with your 15 minute refresh turned off and admin permissions to the cube heavily locked down, to ensure that nothing can interfere with your experiment.
First, prove that you can reproduce the issue with the 15 minute routine:
Get your sample Power BI report that is known to present the error (I'd prefer Power BI for a repro as it is slightly simpler than Excel)
Refresh your Power BI report and explore the data to prove that the error doesn't occur
Run your 15 minute process
You should now see the problem reported. If you do, great, you have a reproducible issue! If you don't, then it is not quite as you thought it was and you need to find a way of reliably reproducing these errors (perhaps something else is happening that isn't the 15 minute process).
Now that you are sure how to reproduce the issue, you need to isolate whether it is really the processing that is causing the problem:
Refresh your Power BI report and explore the data to prove that the error doesn't occur
Execute (via SSMS) your XMLA/TMSL command that does a full process of the entire database; it should look something like this:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "yourdbname"
      }
    ]
  }
}
Do the thing that your users do when they see the issue.
If you too see the issue, then I would raise it with Microsoft Support, as this shouldn't happen.
If you don't see the issue then you can refine the processing to just the partition for a single table (by naming the table and partition in the refresh's objects array). But as we have processed the entire database above, it shouldn't change the result.
If you still don't see the issue then it isn't the processing that is causing this (which is what I suspect) and it is something else in the 15 minute routine. Look deeper into that process and understand what else it is doing.
Alongside this, checking the logs should show whether there are any other processing tasks or other types of XMLA commands happening; the DMV queries sketched below are a quick way to look.
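For example, these standard SSAS DMV queries, run from an MDX query window in SSMS against the Tabular instance, show what else is executing and who is connected (just a sketch; run each statement separately and filter as needed):

-- Commands currently executing on the instance, including any
-- processing batches or TMSL/XMLA sent by other tools.
SELECT * FROM $SYSTEM.DISCOVER_COMMANDS

-- Open sessions and the last command each session ran; useful for
-- spotting an unexpected schema change or an extra processing job.
SELECT * FROM $SYSTEM.DISCOVER_SESSIONS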
I hope these ideas get you closer to finding the actual activity that is causing this experience for your users. It would be great if you could post back with how you got on and what you found.
I have the same problem here when I install the latest CU on my SQL Server 2017. My production environment is still running CU3 (Jan 2018) because of this problem.
Knowing that, I would suggest reverting your installation to a previous release, maybe 13.0.5026.0 (SP2) or even 13.0.4466.4 (Jan 2018).
I am facing the same issue with SQL Server 2017 CU11 installed.
The issue indeed occurs in the case of a full refresh in combination with the use of a perspective in an existing connection. The workaround of using the default 'Model' view in the connection does indeed 'solve' the issue.

VB.NET: Display SQL Server Table Row Count in Real Time?

I've got an app at work that I support which uses a SQL Server 2008 DB (a vendor-created and vendor-supported app). One of the things this app does is load records into ETL tables in the DB all day, to be moved to a data warehouse.
Unfortunately, the app is having lots of problems with the ETL tables right now and the vendor has no monitoring solution. I have no access to the DB to add a stored procedure or anything, but I can run a COUNT(*) on the ETL tables to see if things are getting out of hand.
I have managed to write a VB.NET app that returns the COUNT of rows in these ETL tables so I can keep an eye on things, but it only returns the counts when I fire a button event.
I've never written an app that runs/updates "in real time" before, and I'm looking for some guidance on how I can create an app that would update these COUNT values in as close to real time as possible.
Any guidance would be greatly appreciated!
You could achieve that by writing a console application, since you seem comfortable with .NET.
The console application runs and you can read the values using Console.WriteLine() and Console.ReadLine() in your program.cs. Or you could write the record counts to a table or send an email.
As for real time: the console application can be scheduled to run, e.g. by creating a task in Task Scheduler or a SQL Agent job, or it can be run by launching the exe. As a rough example, you could send yourself an email every 10 minutes by creating a task that launches the console app every 10 minutes.
If you're using a Windows Forms app, just add a Timer object that fires the SQL query off. As an added bonus, you could include fields on the form to control how often the timer fires, so you get the resolution that's right for you.
You can use the Timer control in console apps too, of course.
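The query the timer fires can stay very simple. A sketch, with made-up table names; the NOLOCK hint is just one way to keep the monitoring read from blocking the vendor's ETL inserts, at the price of reading uncommitted rows:

-- Row counts for the ETL staging tables being watched.
-- dbo.EtlQueueA / dbo.EtlQueueB are placeholders for the vendor's tables.
SELECT 'EtlQueueA' AS table_name, COUNT(*) AS row_count
FROM   dbo.EtlQueueA WITH (NOLOCK)
UNION ALL
SELECT 'EtlQueueB', COUNT(*)
FROM   dbo.EtlQueueB WITH (NOLOCK);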

BigQuery Double Imports

I am using Google BigQuery from App Engine. I have a cron job that runs every 15 minutes to do an export to BigQuery. Randomly, though, the import runs twice; however, the App Engine logs do not reflect this. I maintain a set of blobs that I use to write data to BigQuery, and duplicate data is not being written to them. Has anyone else had BigQuery problems with duplicate imports? Again, my App Engine logs show the imports happening only once, and I'm kind of at a loss how to troubleshoot.
One way to troubleshoot is to look at your import jobs. You can do this with the bq tool by running bq ls -j to list the jobs you've run, and bq show -j <job_id> to show details about a particular job.
We've not heard of any other cases of duplicate loads. One idea to prevent this is to give your import jobs an id ... by default one gets created for you (it will look like job_ followed by a generated suffix). Job ids are enforced to be unique within a project, so if you generate an id for each import you intend to do and a double import is triggered, the second one will fail immediately because the job id already exists.
I am facing the same problem: the jobs seem to have imported twice even though our logs show each was submitted only once.
I also looked into the jobs, and the command above shows the job was successfully processed only once.
Note that since the job was only submitted once, I'm not sure how controlling the job_id will help in this case. It seems to be something internal to BigQuery that might have caused the jobs to duplicate?
Let me know if you need anything from my end to investigate.
Thanks,

Stopping SQL code execution

We have a huge Oracle database and I frequently fetch data using SQL Navigator (v5.5). From time to time I need to stop code execution by clicking the Stop button because I realize that there are missing parts in my code. The problem is that after clicking the Stop button it takes a very long time to complete the stopping process (sometimes it takes hours!). The program says Stopping... in the status bar and I lose a lot of time until it finishes.
What is the rationale behind this? How can I speed up the stopping process? Just in case, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process that is executing the query should check from time to time whether it should cancel. Even when it is doing a long task (a big HASH JOIN for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). So, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose its cancel mechanism works like any other tool's, so I'm guessing you're waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle; they can't be stopped in the middle without rolling back). Most of the time with a DML statement the rollback will take longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (a big DML), you will have to be patient during the rollback; there is not much you can do to speed up the process.
If your query is a simple SELECT and your tool won't let you cancel it, you could have your session killed (this needs admin rights, from another session) -- this should be instantaneous. A sketch of what that involves is below.
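For reference, a rough sketch of what the privileged side of that looks like (the username, SID, and SERIAL# values are placeholders); the last query is one way to watch a big rollback's progress if you do end up waiting on one:

-- Find the session running the query (run as a privileged user).
SELECT sid, serial#, username, status, sql_id
FROM   v$session
WHERE  username = 'YOUR_USER';

-- Kill it, plugging in the SID and SERIAL# found above.
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;

-- If a rollback is in progress, USED_UBLK (undo blocks still to be
-- undone) shrinking toward zero shows how far along it is.
SELECT s.sid, t.used_ublk
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr;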
When you cancel a query, the Oracle client should send OCIBreak(), but this isn't implemented on a Windows server; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.