Can someone help me with this?
I am working on a portal (website) where all the data comes from another application. I load that data into my application's database tables using a SQL job that runs hourly, every day. The real problem occurs while the job is running and the data is being loaded: a user who opens the portal during that window sees mismatched data, slow performance, and so on. Everything returns to normal once the job completes successfully.
The job takes 7 minutes to complete, and the problem only occurs during those 7 minutes.
Please help, and thank you in advance.
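One common way to avoid this (a minimal sketch, not from the thread itself; all table and object names are assumptions) is to load the hourly feed into a staging table, then swap it in with quick metadata-only renames so users never query a half-loaded table:

-- Load the hourly feed into dbo.PortalData_Staging first (names are hypothetical),
-- then swap it in so readers never see a partially loaded table.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.PortalData', 'PortalData_Old';
    EXEC sp_rename 'dbo.PortalData_Staging', 'PortalData';
COMMIT TRANSACTION;
-- dbo.PortalData_Old can then be truncated and reused as next hour's staging table.

If the main pain is blocking rather than half-loaded data, enabling READ_COMMITTED_SNAPSHOT on the database is another option worth looking into.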
Related
I have a view which is populated by a SQL query against a regular dataset. As far as I know, no one is looking at that view (at least not very often). But if I go to the view and click on Project History, there is an IAM service account running the query every 15 seconds. Each query shuffles 2.55 MB and reads 70,000 records, so I'd really prefer that it didn't do this.
The dataset used to create the view shows that its last-modified date was 3 days ago, so the service account is not being triggered by a change in the source. I checked the job scheduler and there is nothing there. So what is triggering it, and how can I tell it to calm down?
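One way to find out what is issuing those jobs (a sketch; the region qualifier and service-account address are placeholders) is to query the INFORMATION_SCHEMA jobs view, whose labels column often reveals the calling tool, such as a dashboard that auto-refreshes:

-- Recent jobs run by the suspect service account (region and email are placeholders)
SELECT
  creation_time,
  user_email,
  job_id,
  labels,                 -- often identifies the calling tool (e.g. a BI dashboard)
  total_bytes_processed,
  query
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
  AND user_email = 'suspect-account@your-project.iam.gserviceaccount.com'
ORDER BY creation_time DESC;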
We are using a SQL Server Tabular model for self-service BI. On a monthly basis, some 90 distinct people use the model. Recently we encountered issues/errors in the client tools (Excel and Power BI) that connect to the Tabular model. See screenshots. We did not make any significant changes to the model in the past period.
We noticed that the errors keep showing up after our incremental load, i.e. a full process of a number of partitions; we process these partitions every 15 minutes. The process is kicked off by an SSIS job which is scheduled every 15 minutes and processes 5 partitions in 3 tables.
Edit: After some research I figured out that the problem lies in the perspectives. Every time I do a full process on any object, the error appears. This does not happen with the default model view. I still haven't found a solution, though.
The error occurs when you make a change to the Power BI report or the Excel file, for example when you do a refresh or click a filter. If you press refresh multiple times, the connection comes back and everything works as it is supposed to. It seems like the clients lose their connection to the model. After 15 minutes the problem occurs again.
This is very aggravating for the users, especially when they are in the middle of a presentation.
This is what we tried:
We tried searching Google for a solution.
Checked that we have the latest SQL Server 2016 update (13.0.5149.0).
SSAS builds from Visual Studio (2015 and 2017).
No full process on tables, only on partitions.
Upgrading the server from 4 to 8 CPU cores.
I hope somebody can help us.
You shouldn't be getting the error you are seeing with just a full process of a partition, or even of the full table. We do this every hour for a number of core tables and we do not see any issues like this (and we would).
I am starting from the hypothesis that either:
Your 15-minute process is doing more than just processing the partitions with a refresh command, or
Something else is happening on the environment (scheduled or not). Who has permissions to change the schema? Could users or developers be making changes, deliberately or not?
The only things that should cause that kind of error are Alter, Delete or CreateOrReplace TMSL commands.
So unless that triggers your own ideas on a diagnostic process, I would take the following steps.
Note: I presume that your users also see this issue on your test environment when you run your 15-minute processing routine there. You should do the following on that test environment, where nothing else is running, to eliminate the possibility of someone else interfering with the experiment. If you don't have a representative test environment then you will have to do this on live, but I would do it out of hours, or under some kind of change-control process, with your 15-minute refresh turned off and admin permissions to the cube heavily locked down to ensure that nothing can interfere with your experiment.
First, prove that you can reproduce this issue with the 15-minute routine:
Get your sample Power BI report that is known to present the error (I'd prefer Power BI for a repro, as it is slightly simpler than Excel).
Refresh your Power BI report and explore the data to prove that the error doesn't occur.
Run your 15-minute process.
You should now see the problem reported. If you do, great: you have a reproducible issue! If you don't, then it is not quite as you thought, and you need to find a way of reliably reproducing these errors (perhaps something else is happening that isn't the 15-minute process).
Now that you are sure how to reproduce the issue, you need to isolate whether it is really the processing that is causing the problem:
Refresh your Power BI report and explore the data to prove that the error doesn't occur.
Execute (via SSMS) a TMSL script that does a full process of the entire database.
It should look something like this:
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "yourdbname"
      }
    ]
  }
}
Do the thing that your users do when they see the issue.
If you too see the issue, then I would raise it with Microsoft Support, as this shouldn't happen.
If you don't see the issue, then you can refine the processing down to just one partition of a single table (a sketch of that script follows below). But as we have just processed the entire database above, it shouldn't change the result.
If you still don't see the issue, then it isn't the processing that is causing it (which is what I suspect) and it is something else in the 15-minute routine. Look deeper into that process and understand what else it is doing.
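For that partition-level refinement, the refresh command can be scoped down; a sketch in the same TMSL form as above (the table and partition names are assumptions):

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "yourdbname",
        "table": "yourtablename",
        "partition": "yourpartitionname"
      }
    ]
  }
}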
Alongside this, checking the logs should show whether there are any other processing tasks or other types of XMLA commands happening.
I hope these ideas get you closer to finding the actual activity that is causing this experience for your users. It would be great if you could post back with how you got on and what you found.
I have the same problem here if I install the latest CU on my SQL Server 2017. My production environment is still running CU3 (Jan/2018) because of this problem.
Knowing that, I would suggest reverting your installation to a previous release, maybe 13.0.5026.0 (SP2) or even 13.0.4466.4 (Jan/2018).
I am facing the same issue with SQL Server 2017 CU11 installed.
The issue indeed occurs in the case of a 'full refresh' combined with the use of a 'perspective' in an existing connection. The workaround of using the default 'Model' in the connection does indeed 'solve' the issue.
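For reference, in an ADOMD/MSOLAP-style connection string the perspective is typically selected via the Cube property, so the workaround amounts to something like this (a sketch, not the exact strings from this thread; server, database, and perspective names are placeholders):

Connection that hits the error after a full process:
    Data Source=yourserver;Initial Catalog=YourTabularDb;Cube=YourPerspective
Workaround, connecting to the default model instead:
    Data Source=yourserver;Initial Catalog=YourTabularDb;Cube=Model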
I've got an app at work that I support which uses a SQL Server 2008 DB (a vendor-created/supported app). One of the things this app does is load records into ETL tables in the DB all day, to be moved to a data warehouse.
Unfortunately, the app is having lots of problems with the ETL tables right now and the vendor has no monitoring solution. I have no access to the DB to add a stored procedure or anything, but I can run a COUNT(*) on the ETL tables to see if things are getting out of hand.
I have managed to write a VB.NET app that returns the COUNT of rows in these ETL tables so I can keep an eye on things, but it will only return the counts when I fire a button event.
I've never written an app that runs/updates "in real time" before, and I'm looking for some guidance on how I can create an app that updates these COUNT values in as close to real time as possible.
Any guidance would be greatly appreciated!
You could achieve that by writing a console application, since you seem used to .NET.
The console application runs and you can read the values by using Console.WriteLine() and Console.ReadLine() in your Program.cs. Or you could write the record counts to a table, or send an email.
As for "real time": the console application can be scheduled to run, e.g. by creating a task in Task Scheduler or SQL Agent, or it can be run by launching the exe. As a rough example, you could send yourself an email every 10 minutes by creating a task that launches the console app every 10 minutes.
If you're using a Windows Forms app, just add a Timer object that fires off the SQL query. As an added bonus, you could include fields on the form to control how often the timer fires, to get the resolution that's right for you.
You can use a Timer in console apps too, of course.
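Whichever way the timer is hosted, the query it fires can stay simple; a sketch of the kind of batch it might run (the table names are hypothetical):

-- One row per monitored ETL table with its current count.
-- NOLOCK trades accuracy for not blocking (or being blocked by)
-- the vendor's all-day loads; counts may be slightly off.
SELECT 'EtlOrders' AS table_name, COUNT(*) AS row_count
FROM dbo.EtlOrders WITH (NOLOCK)
UNION ALL
SELECT 'EtlCustomers', COUNT(*)
FROM dbo.EtlCustomers WITH (NOLOCK);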
I have a SQL job that runs every night and does various inserts/updates/deletes. The job contains 40 steps, which mainly execute stored procedures.
It had been running fine up until a week ago, when suddenly the run time went up from 2.5 hours to over 5 hours, sometimes even 8, 9, 10!
Could one of you please give me some pointers?
First of all, let me recommend a valuable resource on the Simple-Talk site. It is a detailed methodology for troubleshooting performance issues on SQL Server.
Was the insert you mention a huge bulk insert that could affect performance? If it was a huge load, the query execution plans could be different and you may need to re-tune your table structure, indexes, etc.
If the run time suddenly changed and no changes were made to the queries or your database structure, then I would ask myself several questions:
First, is the process still taking this long, or did it run slowly only once? Maybe it is running smoothly now and the issue only arose once. Nevertheless, try to find out what triggered the bad performance; it can happen again and take down your server.
Is the server a dedicated SQL Server? If not, check whether some new tasks unrelated to the SQL engine have been configured; maybe a new task is doing some heavy I/O work and therefore your CRUD operations take longer.
If it is a dedicated server, then check that no new job has been added that could drag down your existing jobs. Check this SO link for details on jobs set up from the SQL Agent.
Maybe low memory due to another process on the same server?
There is a lot more to check, but before going deeper I would verify that no external (non-SQL Server) factor is the reason for the delay in the process execution.
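Since the job has 40 steps, one quick check (a sketch, not from the original answer; the job name is a placeholder) is to compare per-step durations in msdb and see which step started taking longer a week ago:

-- Recent run history per job step; run_duration is an HHMMSS-encoded
-- integer (e.g. 13045 = 1h 30m 45s).
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_start,
       h.run_duration
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
     ON j.job_id = h.job_id
WHERE j.name = N'YourNightlyJob'
ORDER BY run_start DESC, h.step_id;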
I don't know if this is even possible, so I would appreciate any ideas, even those outside of SQL Server 2005, on how this might be accomplished. I have a linked server set up to a remote mainframe, and I have a simple import job that runs overnight. The problem is that the mainframe table the import reads from is just a temporary report file that gets overwritten each time a user runs that report, sometimes with different parameters, so the data is always changing. One request was that the SQL job should run only when a specific user runs the report; the user who ran it is stored as a field in the same mainframe report table that the import comes from. Setting up a scheduled run on the mainframe is not an option, since we don't control it, and having the owners set it up would be costly. Don't ask me why.
Any ideas that would keep me from forcing the user to run the mainframe report at a specific time would be helpful.
Well, the only thing that you could do from this side is to poll periodically and detect a change. You could set up a job that queries only the report version, timestamp and author. The job runs every 5 minutes and triggers the import job when it detects changes. Not elegant, but it may be good enough.
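A sketch of what that polling step might look like (the linked server name, column and object names, and the watermark table are all assumptions; the watermark table is presumed seeded with a single row):

-- Runs every 5 minutes from a SQL Agent job: read the report's
-- timestamp and author over the linked server, and start the import
-- job only when the right user has produced a new version.
DECLARE @stamp DATETIME, @run_by VARCHAR(50);

SELECT @stamp = report_timestamp, @run_by = report_user
FROM OPENQUERY(MAINFRAME,
     'SELECT report_timestamp, report_user FROM report_table');

IF @run_by = 'TARGETUSER'
   AND NOT EXISTS (SELECT 1 FROM dbo.ImportWatermark
                   WHERE last_stamp = @stamp)
BEGIN
    UPDATE dbo.ImportWatermark SET last_stamp = @stamp;
    EXEC msdb.dbo.sp_start_job @job_name = N'MainframeImport';
END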