In my personal query history I can see many queries that run on a regular basis. They are scheduled queries, or they live in Google Sheets or dashboards that are connected to BigQuery, but I can't find the sheets containing these queries in order to stop them.
Is there any way to tell, from the personal history in BigQuery, which sheet a query comes from, so I can track it down and pause it?
Thanks
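One way to investigate this (a sketch, not a confirmed answer for every setup) is to query the `INFORMATION_SCHEMA.JOBS_BY_PROJECT` view and inspect the `labels` column: jobs started from Connected Sheets and from scheduled queries usually carry identifying labels, though the exact label keys vary, so inspect what your own jobs show. The region qualifier (`region-us` here) and the seven-day look-back window are assumptions.

```sql
-- List recent jobs with their labels, so jobs launched from Sheets or
-- from the scheduler can be identified by their label keys.
-- `region-us` is an assumption; use the region your data lives in.
SELECT
  creation_time,
  user_email,
  job_id,
  labels,                            -- inspect these for sheets/scheduler keys
  SUBSTR(query, 1, 120) AS query_preview
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY creation_time DESC;
```

Scheduled queries themselves are also listed, and can be paused, under the "Scheduled queries" page in the BigQuery console.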
I recently created a fairly lengthy SQL query script that takes the information of some base tables like forecast, bill of materials, part information and so on to automatically create a production schedule.
The script itself works well, but whenever something in those base tables changes, the script needs to be rerun (it first drops the tables it previously created, then runs the longer sequence of creating and dropping tables to arrive at the final schedule).
To make things easier on the front end user, my intention was to create a front end through access to allow the users to update the necessary base data.
My question is, is there a way to set something up either through Microsoft SSMS or Access (2016) that would run this script automatically whenever these tables were updated?
My initial search showed a lot of people talking about SQL Server Agent being able to automate queries, but I was not able to find anything regarding running a script when a table is updated, only scheduling things based on time frequency.
Ideally I think the easiest option would be if it were possible on the Access front end to allow the user to run this script by just pushing a button on a form, but I am open to whatever options would achieve the same goal.
Thanks in advance.
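For reference, the "button on a form" option described above can be sketched roughly as follows. All object names are hypothetical; the idea is to wrap the whole rebuild script in a stored procedure so that an Access button (or, later, SQL Server Agent) has a single entry point to call:

```sql
-- Hypothetical names throughout. Wrapping the rebuild script in one
-- procedure gives Access (or Agent) a single thing to call.
CREATE PROCEDURE dbo.RebuildProductionSchedule
AS
BEGIN
    SET NOCOUNT ON;

    -- Step 1: drop the tables the previous run created.
    DROP TABLE IF EXISTS dbo.ProductionSchedule;  -- SQL Server 2016+ syntax

    -- Step 2: ... the existing create/drop steps of the script go here ...
END;
```

In Access, a button's Click event can then run a pass-through query whose SQL is simply `EXEC dbo.RebuildProductionSchedule`. An AFTER INSERT/UPDATE trigger on the base tables could also invoke logic like this, but running a long rebuild inside a trigger holds locks for the duration of every edit, which is usually a reason to prefer the explicit button.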
I am trying to find an efficient/fast way of identifying all tables/columns updated by a specific process. Basically, we want to know all the SQL columns that are updated by a front-end ERP process.
I know of two ways: either enable Change Tracking on every single table, which is not very efficient, or spin up a blank test environment, perform the process, do row counts on all tables, and then go and view the data.
Does anyone else have a better method than the two described above?
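One lighter-weight option (a sketch; it reports table-level write activity, not individual columns, so it narrows the search rather than finishing it) is to note the time, run the ERP process, and then ask `sys.dm_db_index_usage_stats` which tables were written to since that point:

```sql
-- List every table written to since @start (set just before running the
-- ERP process). Gives tables, not columns, and one row per touched index;
-- note the counters reset when the SQL Server instance restarts.
DECLARE @start DATETIME2 = '2024-01-01T10:00:00';  -- hypothetical start time

SELECT
    OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
    OBJECT_NAME(s.object_id)        AS table_name,
    s.last_user_update
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
  AND s.last_user_update >= @start
ORDER BY s.last_user_update DESC;
```

From the shortlist of touched tables, Change Tracking (or a comparison of before/after snapshots) can then be enabled selectively to get down to column level.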
May I ask how you manage your SQL queries in a file system, across all the different database vendors you work with?
I am a data analyst and I write many different SQL queries every day. They are all stand-alone queries and are currently all stored on a file server. Recently I had trouble finding a query I developed a year ago, because the only way to locate it on the file server is by remembering what the query does. I also find that a subquery can sometimes be reused between different queries by simple copy and paste, and of course I have done this a few times across different queries, but my problem is that I don't know how to keep track of that either.
Do you know how to do it?
Thanks in advance!
I'm pretty new here and usually don't resort to forum posts unless I really can't figure things out by searching for a solution on my own, but I'm stuck. I work in an IT department, and I am developing a tool that compares three data pulls to see where we are missing data. All data is supposed to be accounted for in all three databases, so we need to find discrepancies where it does not match. This data is used across all of our car dealerships, and it is pulled from three providers who supply it to us. (For example: our website listings, cars actually on sale in our inventory, and web listings on other sites.)
Unfortunately, whenever we do an export from each site the dealership locations do not match with the exact same syntax. I have all three tables in a sql database that is reuploaded by the user each month. I have case statements written so I can run a query to change each matching dealership in a way that matches syntax across all three tables. For example 'Ford Denham' and 'Denham Ford' are all changed to 'ASFD' which is an acronym we use for that dealership.
Now we have reports which I have created with SQL Report Builder. My problem is that all of my queries are written as if the location is always 'ASFD', so I can match records based on location. When the user uploads data, how can I have my CASE statement run automatically on the new files in the database, without having to trigger the query myself? If I don't run the CASE-statement rename, none of the reports will run correctly, because locations do not match.
Thank you for any help. Let me know if I should have gone about this a different way since I have never really posted here before. I tried to be as descriptive as possible.
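One way to avoid having to run the rename step at all (a sketch; all table and column names here are hypothetical) is to keep the name-to-acronym mapping in a small lookup table and point the reports at a view that applies it, so newly uploaded rows are normalized on read instead of rewritten after each upload:

```sql
-- Hypothetical names throughout. One row per raw spelling of a dealership.
CREATE TABLE dbo.DealershipAlias (
    RawName  VARCHAR(100) PRIMARY KEY,
    Acronym  CHAR(4) NOT NULL
);

INSERT INTO dbo.DealershipAlias (RawName, Acronym)
VALUES ('Ford Denham', 'ASFD'),
       ('Denham Ford', 'ASFD');

-- Reports query this view instead of the raw upload table, so the
-- normalization happens automatically whenever new data is uploaded.
CREATE VIEW dbo.WebsiteListingsNormalized
AS
SELECT
    COALESCE(a.Acronym, l.Location) AS LocationNormalized,  -- falls back to raw name
    l.*                                                     -- remaining upload columns
FROM dbo.WebsiteListings AS l
LEFT JOIN dbo.DealershipAlias AS a
    ON a.RawName = l.Location;
```

Repointing the Report Builder datasets at views like this removes the timing problem entirely, since there is no separate rename step that can be forgotten; new spellings just become new rows in the alias table.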
I am moving an Access application's tables from an Access file to a SQL Server. The original application had a front end file with a data file it linked to. The data file lived on a network drive. I am now linking the front end file to a SQL Server using an ODBC connection.
All the forms appear to work. However, there are two reports that are painfully slow. In the old configuration, the reports would load up in a couple seconds.
However, in the new version they can take minutes. These reports consist of multiple sub-reports that have their own datasets along with the main report's dataset. I have gone through each query and tweaked it so that it takes less than a second for each query to run. However, I still have the delay when I try to run the overall report.
I have worked with linked tables quite a bit; however, reporting on the Access side is fairly new to me. Is there any reason the reports would be slower with linked SQL Server tables compared to a linked Access data file? Is there a good practice concerning sub-reports that might speed up the load time?
This can occur quite often. The reason, of course, is that when Access is used with a JET/ACE (file) back end, the application can make "better" choices about how to join data.
When you use SQL Server, the main report and the sub-reports are treated by Access as separate tables from separate data sources, so Access often does a rather poor job of joining the data together. Add to that the fact that reports often re-query and reload parts of their data many times (more often than one would like), and the shortcomings in how Access pulls data from SQL Server really show up. (Access has a hard time joining the parent tables to the child tables in the sub-reports.)
The simplest fix (requiring the least amount of change) is to convert the queries used by the main report and the sub-reports into views on the SQL Server. You then link to those views from Access (they show up as standard tables in Access) and base the main report on the view instead of the local query; do the same for the sub-reports. This approach is not the best possible, but it should improve things dramatically, especially if the main report is a query that joins multiple tables, and likewise if the sub-reports are based on multi-table queries.
The above is the least amount of work, since filters, WHERE clauses, etc. used on the reports should continue to function without changes to your front-end application.
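As a sketch of that approach (all object names are hypothetical), a multi-table query behind a report moves server-side like this:

```sql
-- Server-side view replacing a multi-table Access query. Because the join
-- now runs on SQL Server, Access pulls one pre-joined rowset instead of
-- joining linked tables itself.
CREATE VIEW dbo.vwOrderReport
AS
SELECT
    o.OrderID,
    o.OrderDate,
    c.CustomerName,
    d.ProductCode,
    d.Quantity
FROM dbo.Orders AS o
JOIN dbo.Customers   AS c ON c.CustomerID = o.CustomerID
JOIN dbo.OrderDetails AS d ON d.OrderID   = o.OrderID;
```

After linking `dbo.vwOrderReport` in Access through the same ODBC connection, set it as the report's record source; because Access sees the view as an ordinary table, the report's existing filters and WHERE conditions continue to apply unchanged.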