I am looking for your assistance in understanding how, and with what code, to get a column value changed after 4 days.
Currently, whenever I insert a new row into the database, the value of the flag is "Y" by default, and I capture the creation date and time by storing sysdate by default. Now, I want the flag to change to "N" 4 days after creation.
So, please guide me on how to do the above.
You should look into creating a trigger on INSERT/UPDATE/DELETE if your table is frequently accessed through those operations.
If not, then you would have to schedule a job to run the SQL daily, or however frequently you'd like to check. A sketch follows.
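Since you are storing sysdate, this looks like Oracle, so here is a minimal DBMS_SCHEDULER sketch of the job approach. The table and column names (my_table, flag, created_date) are assumptions; substitute your own.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'expire_flags_job',
    job_type        => 'PLSQL_BLOCK',
    -- Flip any flag still 'Y' once the row is more than 4 days old.
    job_action      => 'BEGIN
                          UPDATE my_table
                          SET flag = ''N''
                          WHERE flag = ''Y''
                            AND created_date <= SYSDATE - 4;
                          COMMIT;
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/
Note that anything reading the flag between job runs may still see 'Y' for a short while; if the 4-day cutoff must be exact at read time, derive the flag in queries (e.g. CASE WHEN created_date <= SYSDATE - 4 THEN 'N' ELSE flag END) instead of relying on the stored value.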
How do I schedule a job for a SQL query to run daily?
At last, BigQuery supports using ; in queries, so I can write more than one query in one "block" if I separate them with semicolons.
If I run the code manually, it works. But I cannot schedule that.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table. If I don't, I cannot save the scheduled query. But all my queries are updates and inserts with different "destination tables", like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE
;
INSERT INTO project.otherdataset.b
SELECT c,d
FROM project.otherdataset.c
So I cannot even set up a schedule in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the scheduling, but it doesn't run when it should. I get this error message by email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to solve it with one big script.
It looks like the scheduled query was originally defined with a destination dataset and an APPEND/TRUNCATE write disposition. When updating the same scheduled query to a DML query, the GUI does not let you clear the dataset and table name back to NULL, so this error comes from the previously set dataset and table name still stored on the scheduled query.
Hence the fix is to delete the scheduled query and recreate it from scratch with the DML query option. It worked for me.
Scripting is now supported in scheduled queries. However, a scripting query, when scheduled, does not support setting a destination table for now; you still need to use DDL/DML to make changes to existing tables.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. It also seems useful for setting/updating any other fields, including renaming scheduled queries.
I have a report which is sent daily with some number of rows, but I want to send a separate report, with a subject line saying it is "critical", when it has n number of rows in it.
How do I schedule this in SSRS?
Thank you!
Create a data-driven subscription that only returns results if your table contains n rows.
A data-driven subscription would be best if you have the Enterprise edition of SQL Server, but if you don't, you'll need to get creative. One method that should work is to create a copy of the existing report (if it's TheNinjaReport, call the copy TheNinjaReport_Critical or something) and alter the query so that it throws an error if there aren't the requisite number of rows. When the query throws an error, the subscription will fail and nothing will go to the end user. Something like:
-- Return the rows only when the error count crosses the critical threshold;
-- otherwise raise an error so the subscription fails and no email goes out.
IF (SELECT COUNT(*) FROM dbo.ErrorLog) > 100
    SELECT *
    FROM dbo.ErrorLog
ELSE
    RAISERROR('Not a critical number of errors', 16, 1)
This is not ideal because now you have two reports to maintain, but it will get you where you need to be.
I have in a table a nullable timestamp that tracks when the entry got called from a client. Sometimes something goes wrong on the client side and I need to set the timestamp back to NULL. I tried to execute the query directly in SQL Server Management Studio:
USE [MyDB]
GO
UPDATE [dbo].[MyTable]
SET [MyTimestamp] = NULL
WHERE ID = SomeInt;
I get the message that one row was altered, but when I refresh my SELECT * on the table, there is no change to the timestamp.
PS: The whole DB runs on an Azure server, but I also cannot get it to work on my test DB on localhost in SQL Server 2014.
I would be grateful for input.
The answer is that you cannot change the timestamp column to NULL. It is like a row version number.
Also:
The timestamp data type is just an incrementing number and does not preserve a date or a time.
There are some workarounds, such as the one used in the related thread, but the timestamp data type is rarely used nowadays.
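If you control the schema, a minimal sketch of one such workaround is to track the call time in a nullable datetime2 column instead, which can be reset freely. The column name CalledAt is an assumption for illustration:
-- Hypothetical column; datetime2 is nullable and freely updatable,
-- unlike timestamp/rowversion.
ALTER TABLE [dbo].[MyTable] ADD [CalledAt] datetime2 NULL;

-- Mark the entry as called:
UPDATE [dbo].[MyTable] SET [CalledAt] = SYSUTCDATETIME() WHERE ID = SomeInt;

-- Reset when something goes wrong on the client side:
UPDATE [dbo].[MyTable] SET [CalledAt] = NULL WHERE ID = SomeInt;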
We have a bunch of T-SQL scripts that depend on today's date and when they run. If one doesn't run in the week it should, we end up temporarily setting the system time to a day before, running the script, then setting it back.
Is there any way to temporarily set the system date for a script without changing the original script, like when you execute it, or only for that session?
You could store the actual date in a table / temp table.
Then retrieve or update that date rather than making a call to GETDATE(). A quick sketch follows.
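A minimal sketch of that idea, assuming a hypothetical one-row table dbo.RunDate and a hypothetical dbo.Orders table for illustration:
-- One-row table holding the "effective" date the scripts should use.
CREATE TABLE dbo.RunDate (EffectiveDate date NOT NULL);
INSERT INTO dbo.RunDate (EffectiveDate) VALUES (CAST(GETDATE() AS date));

-- In the scripts, read the date from the table instead of calling GETDATE():
DECLARE @today date = (SELECT TOP (1) EffectiveDate FROM dbo.RunDate);
SELECT * FROM dbo.Orders WHERE OrderDate = @today;
To rerun a missed week, UPDATE dbo.RunDate to the date the script should have seen, run the script, then set it back, without touching the system clock.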
I found an answer by someone else; I share it here: "The date is tied to the OS date and time. See here: http://msdn.microsoft.com/en-us/library/ms188383.aspx".
You could also refer to this other question: Simulate current date on a SQL Server instance?
I am faced with a peculiar requirement, which is as follows:
A network-intensive operation is triggered on a server by multiple clients through a web interface. However, only one operation is allowed at a time, so an entry (tuple) is made in an SQL table to indicate that the operation is in progress. Once the operation is complete (irrespective of success or failure), the appropriate result is displayed back to the client(s), and the corresponding tuple is removed from the SQL table.
Since the operation is network-intensive, a scenario where the operation needs to be "considered" cancelled after some timeout (10 minutes) has to be introduced.
Is there ANY way the lifetime of a row in SQL can be associated with a timeout value, so that it is deleted after a certain time? My application is primarily written in Java 1.5 and EJB 3.0, using JPA/Hibernate to access an Oracle 10g DB engine.
Thanks in advance.
Regards,
Nagendra U M
I would suggest that you try using a timestamp column containing the start time of the task.
A before trigger can then be made to delete the old row before a new one is inserted, if the task timed out.
If you want to have multiple tasks with different timeouts, you can even add a column with the timeout in seconds; just code your trigger accordingly. A sketch of both ideas follows.
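Here is a minimal sketch in Oracle (the engine in the question), assuming a hypothetical table op_in_progress with columns started_at and timeout_sec. A statement-level BEFORE INSERT trigger is used because it avoids the mutating-table restriction that would hit a row-level one:
CREATE OR REPLACE TRIGGER trg_expire_stale_ops
BEFORE INSERT ON op_in_progress
BEGIN
  -- Clear any entry whose per-row timeout has already elapsed,
  -- so a stale "in progress" row cannot block new operations.
  DELETE FROM op_in_progress
  WHERE started_at < SYSTIMESTAMP - NUMTODSINTERVAL(timeout_sec, 'SECOND');
END;
/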
I don't know whether Oracle has this kind of facility, but I think no DB engine has this.
If you want to do it at the DB level:
1. You must have a datetime column, e.g. 'CreatedDate', in the table. This column will hold the datetime when the record was created.
2. Write a procedure and put it in a scheduled job. This job will run every 10 minutes and remove records that are 10 minutes old. The query will be like this.
T-SQL: Please convert it according to your db engine.
DELETE FROM yourtable WHERE CreatedDate < DATEADD(mi, -10, GETDATE())
This will delete all records older than 10 minutes from the table.
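Since the question mentions Oracle 10g, the same delete would look roughly like this in Oracle (same hypothetical table and column names):
-- Remove rows created more than 10 minutes ago.
DELETE FROM yourtable WHERE CreatedDate < SYSDATE - INTERVAL '10' MINUTE;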
This is just to give you an idea of a scheduled job. It is in SQL Server; I don't know about Oracle.
step_by_step_guide_to_add_a_sql_job_in_sql_server_2005
It sounds like you're implementing a mutex using the database; take a look at this question and see if it helps. Transactional access to a flag table should solve this for you, as long as you catch both success and failure states in your server code.
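A minimal sketch of the flag-table idea in Oracle SQL, assuming a hypothetical table op_flag(op_name, locked) seeded with one row per operation:
-- Acquire: the UPDATE matches a row only if nobody holds the lock.
UPDATE op_flag SET locked = 'Y'
WHERE op_name = 'NETWORK_OP' AND locked = 'N';
-- If the update count is 0, another client holds the lock; back off.
-- Otherwise COMMIT and run the operation.

-- Release (run in both the success and failure paths):
UPDATE op_flag SET locked = 'N' WHERE op_name = 'NETWORK_OP';
COMMIT;
Pairing this with the timeout cleanup discussed above keeps a crashed client from holding the lock forever.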