SQL Job failing when one of the steps attempts to connect to another server

I have created a job where step 1 deletes data from table1 on server "ucpdapps2". Step 2 pulls data from table5 on server "archive" and places it in table1 on server "ucpdapps2". When I run the job, step 1 works but step 2 fails with an error referencing a user called UCPDAPPS2. I don't even have a user with that name.

I think I figured it out. It kept bothering me that the error kept showing the user as ucpdapps2$ when no such user existed. After some reading, it turns out the server connects as its own machine account. Since the job runs under that server, I figured that machine account must be the login that needs rights on the "archive" server, even though I had linked the two servers to talk to one another. So I just added that user under Security/Logins on the "archive" server.
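For anyone hitting the same thing, the fix amounts to something along these lines on the "archive" server. This is only a sketch; the domain, database, and table names are assumptions, not values from the original job.
CREATE LOGIN [MYDOMAIN\ucpdapps2$] FROM WINDOWS;   -- machine account the SQL Agent job runs under
GO
USE ArchiveDb;   -- assumed database name on the "archive" server
GO
CREATE USER [MYDOMAIN\ucpdapps2$] FOR LOGIN [MYDOMAIN\ucpdapps2$];
GRANT SELECT ON dbo.table5 TO [MYDOMAIN\ucpdapps2$];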


Will BigQuery finish long running jobs with a destination table if my browser crashes / computer turns off?

I frequently run BigQuery jobs in the web GUI that take 30 minutes or more, saving the results into another table to view later.
Since I'm not waiting on the results right away, and they aren't stored in my computer's memory, it would be great if I could start a query, turn off my computer, and come back the next day to look at the results in the destination table.
Will this work?
The same applies if my computer crashes, the browser runs out of memory, or anything else causes me to lose my connection to BigQuery while the job is running.
The simple answer is yes: the processing takes place in the cloud, not in your browser. As long as you set a destination table, the results will be saved there; if not, you can check the query history to see whether any issue prevented them from being produced.
If you don't set a destination table, the results are saved to a temporary table, which may no longer be available if you don't come back in time.
I'm sure someone can give you a much more detailed answer.
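To make that concrete, here is a minimal sketch of writing long-running results into an explicit destination table with standard SQL. The project, dataset, and table names are placeholders, not anything from the question.
-- Results land in a permanent table you can come back to the next day.
CREATE OR REPLACE TABLE `yourproject.yourdataset.long_query_results` AS
SELECT field1, field2, COUNT(*) AS row_count
FROM `yourproject.yourdataset.big_source_table`
GROUP BY field1, field2;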
Even if you have not defined a destination table, you can still access the result of the query by checking the Query History: locate your query in the list, expand the respective item, and note the value of Destination Table.
Note: this is not a regular table but a so-called anonymous table, which stays available for about 24 hours after the query was executed.
Knowing that table, you can use it in whatever way you want; for example, simply query it as shown below:
SELECT *
FROM `yourproject._1e65a8880ba6772f612fbe6ff0eee22c939f1a47.anon9139110fa21b95d8c8729cf0bb6e4bb6452946d4`
Note: the anonymous table is "saved" in a "system" dataset whose name starts with an underscore, so you will not see it in the UI. Also, the table name starts with 'anon', which I believe stands for 'anonymous'.

variable for SQLLOGDIR not found

I am using the Ola Hallengren script for the maintenance solution. When I run just the database backup job for user databases, I get the following error: "Unable to start execution of step 1 (reason: Variable SQLLOGDIR not found). The step failed."
I have checked the directory permissions and there is no issue there. The script creates the job with no problem; I only get the error message when I try to run the job.
I had this same issue just the other day. I run a number of 2017 servers, but the issue appeared when I started running the solution on a 2012 server.
I've dropped Ola a mail to confirm, but as best I can make out, the SQLLOGDIR token specified in the 'Advanced' tab for the step (for logging output) is not compatible with 2012, and maybe with versions below 2017, though I have not tested these.
HTH,
Adam.
You need to replace the job name token in the Advanced tab. For example, replace $(ESCAPE_SQUOTE(JOBNAME)) with CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID)), so that it looks like this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
instead of this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\$(ESCAPE_SQUOTE(JOBNAME))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
Do this for all the other jobs if you don't want to recreate them.
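If you would rather script the change than click through each job's Advanced tab, something like the following should work. This is only a sketch; the job name and step id are assumptions.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'CommandLogCleanup',   -- assumed job name
    @step_id = 1,                       -- assumed step id
    @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';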
I had the same issue on my SQL Server 2012 instance; the error occurred during the database backup using Ola's scripts. As mentioned above, the issue is with the output file. I changed the location and the output file in the SQL Agent job and reran the job successfully (see the attached screenshot for reference).
The error is related to the job output file.
When you create a maintenance job using the Ola script, it automatically assigns an output file to the step, and sometimes that location does not exist on the server.
I faced the same issue. I then ran the integrity script manually on the server and it completed without error, which told me the problem was in the job configuration.
I changed the job output file location and now the job runs fine.
The trick is to build the string for the @output_file_name parameter element by element before calling the stored procedure. If you look into Ola's code, you will see that is exactly what he is doing.
I have tried to describe this in more detail in the post Add SQL Agent job step with tokens in output file name.
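As a rough illustration of that approach (a sketch only; the directory, job name, and command below are made up, not Ola's actual code):
-- Build the full output file path up front instead of relying on the
-- SQLLOGDIR token being resolved by SQL Agent at run time.
DECLARE @output_file nvarchar(512);
SET @output_file = N'D:\SQLLogs\DatabaseBackup_'   -- assumed log directory
    + N'$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',   -- assumed job name
    @step_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'EXECUTE dbo.DatabaseBackup @Databases = ''USER_DATABASES'', @BackupType = ''FULL''',
    @output_file_name = @output_file;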

Azure SQL Server database - Deleting data

I am currently working on a project that is based on:
Azure EventHub1-->Stream Analytics1-->SQL Server DB
Azure EventHub1-->Stream Analytics2-->Document DB
Both SQL Server and DocumentDB have their own Stream Analytics job, but they share the same Event Hub stream.
DocumentDB is an archive sink, and the SQL Server DB is a reporting base that should only house 3 days of data. This is per reporting and query-efficiency requirements.
Daily we receive around 30K messages through EventHub, that are pushed through Stream job (basic SELECT query, no manipulation) to a SQL Server table.
To preserve 3 days of data, we designed a Logic App that calls a SQL stored procedure to delete any data more than 3 days old, based on date. It runs every day at 12 AM.
There is also another business-rule Logic App that reads from the SQL table to perform business logic checks. It runs every 5 minutes.
We noticed that, for some strange reason, the data-deletion Logic App isn't working, and over the months the data has stacked up to 3 million rows. The SP can be run manually, as tested in the dev setup.
The Logic App shows a Succeeded status, but the SP execution step shows an amber check sign which, when expanded, says 3 tries occurred.
I am not sure why the SP doesn't delete the old data. My understanding is that because the Stream job keeps pushing data, the DELETE operation in the SP can't get a delete lock and times out.
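For reference, the cleanup procedure is essentially of this shape; the procedure, table, and column names below are assumptions (the real SP is not shown), and deleting in small batches is just one way to keep each statement short so it does not fight the Stream Analytics inserts for locks.
CREATE OR ALTER PROCEDURE dbo.PurgeOldMessages   -- assumed procedure name
AS
BEGIN
    SET NOCOUNT ON;
    -- Delete rows older than 3 days in small batches so the writer
    -- is not blocked for long and the call is less likely to time out.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (5000)
        FROM dbo.EventMessages                   -- assumed table name
        WHERE ReceivedAt < DATEADD(DAY, -3, SYSUTCDATETIME());

        IF @@ROWCOUNT = 0 BREAK;
    END;
END;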
Try using Azure Automation instead. Create a runbook that runs the stored procedure. Here you will find an example and step-by-step procedure.

How Is Database Query Data Processed Between 3 Computers?

I'm trying to understand how data is really processed and sent between servers, but I am unable to find a step-by-step detailed explanation. To explain using an example:
Assume you have 100M records, 10GB in size, and you need to transfer the data between two database servers over a LAN, using a separate desktop computer to execute the script. In total you have three computers:
Server A has the source database you are selecting the data from.
Server B has the destination database you are writing the data to.
Desktop C is where you are executing your script from, using a client tool like SQL Developer or TOAD.
The SQL would look something like this:
Insert Into ServerBDatabase.MyTable (MyFields)
Select MyFields
From ServerADatabase.MyTable
Can someone explain where the data is processed and how it is sent? Or even point me to a good resource to read up on the subject.
For example, I'm trying to understand if it's something like this:
The client tool on Desktop C sends the query command to the DBMS on Server A. At this point, all that is being sent over the LAN is the text of the command.
The DBMS on Server A receives the query command, interprets it, and then processes the data on Server A. At this point, the full 10GB of data is loaded into memory for processing and nothing is being sent over the LAN.
Once the full 100M records have been processed on Server A, the DBMS on Server A sends both the text of the query command and the full 100M records of data over the LAN to the DBMS on Server B. Given bandwidth constraints, the data is broken up and sent in chunks over the LAN at some number of bits per second and loaded into Server B's memory in those chunk amounts.
As the DBMS on Server B receives the chunks of data over the LAN, it pieces them back together in its memory to return them to their full 100M-record state.
Once the 100M records are fully pieced back together, the DBMS on Server B executes the query command to insert the records from its memory into the destination table one row at a time and writes them to disk.
All of these are assumptions, so I know I could have it all wrong, which is why I'm seeking help. Please correct my errors and/or fill in the blanks.
This is your query:
Insert Into ServerBDatabase.MyTable (MyFields)
Select MyFields
From ServerADatabase.MyTable;
Presumably, you are executing this on a client tool connected to Server A (although the same reasoning applies to Server B). The entire query is sent to the server, and Server A would need to have a remote connection to Server B.
In other words, once the query is sent, the client has nothing to do with it -- other than waiting for it to complete.
As for the details of inserting from one database to another, those depend on the database. I would expect the data to be streamed into the table on Server B, all within a single transaction. The exact details might depend on the specific database software and its configuration.
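On SQL Server, for example, the cross-server half of this would typically go through a linked server defined on Server A; a rough sketch (the linked server, database, and schema names are assumptions):
-- Run on Server A. SERVERB is an assumed linked server name.
EXEC sp_addlinkedserver @server = N'SERVERB', @srvproduct = N'SQL Server';

-- Four-part name: linked server, database, schema, table. The rows flow
-- from Server A to Server B without passing through the desktop client.
INSERT INTO SERVERB.ServerBDatabase.dbo.MyTable (MyFields)
SELECT MyFields
FROM ServerADatabase.dbo.MyTable;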

Deleting sql snapshots in SSIS

I have two different scripts, one created by me and one by my colleague, that use the same snapshots.
16.00 (or 4 PM), coded by me:
Script 1 - deletes the snapshots if they are there, creates new snapshots, then executes the code.
04.00 (or 4 AM), coded by my colleague:
Script 2 - deletes the snapshots if they are there, creates new snapshots, then executes the code.
Both of these scripts are SSIS scripts that are just holders for stored procedures (the SSIS scripts actually don't do much more than execute a bunch of stored procedures in a chain).
Script 2 works without problem.
Script 1 gets 'snapshot cannot be deleted, you don't have access or the snapshot is not present in the database'.
If I run script 1 in SQL Server Management Studio it works perfectly, so I have not spelled anything incorrectly.
Both scripts are running under the same user, both in the SSIS engine and in the jobs engine.
I don't even know where I should start looking for errors on this. Any suggestions?
------------- Edit: Script added ----------------
IF EXISTS(select NULL from sys.databases where name='Citybase_Snapshot')
BEGIN
DROP DATABASE Citybase_Snapshot;
END
CREATE DATABASE CityBase_Snapshot ON
( NAME = FastighetsBok_Data, FILENAME = 'S:\Snapshots\Citybase_Snapshot.ss' )
AS SNAPSHOT OF Citybase;
---------------- Edit: Error message added ----------------------
As far as I know this is a normal error message from SQL Server.
EXEC proc_..." failed with the following error: "Cannot drop the
database 'Citybase_Snapshot', because it does not exist or you do not
have permission.".
The answer was more simple than I imagined.
You set a user for the SSIS script when you create the SQL Agent job that runs it, but that is not the only place a security context applies.
You also need to check the connection you actually establish inside the SSIS script, to make sure that the user you use to connect to the database is allowed to drop the snapshot.
Easy as pie (when you know what's wrong).
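For reference, the permission side of that fix can be as simple as the following on the server hosting the snapshot; the login name here is an assumption.
-- dbcreator covers CREATE DATABASE ... AS SNAPSHOT and DROP DATABASE.
ALTER SERVER ROLE dbcreator ADD MEMBER [DOMAIN\SsisConnectionUser];   -- assumed login used by the SSIS connection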