I can't run inserts in Firebird 2.5, Table Unknown - SQL

I'm trying to create a temporary table to save some codes, but when I try to insert a code it throws me the following error as if the table did not exist:
can't format message 13:796 -- message file C:\Windows\firebird.msg
not found. Dynamic SQL Error. SQL error code = -204. Table unknown.
TEMPCODES. At line 1, column 13.
These are the lines that I try to run:
create global temporary table TEMPCODES
(
codigo varchar(13)
)
on commit delete rows;
insert into TEMPCODES values('20-04422898-0');
Why can't it find the table if I'm creating it before?

In Firebird, you cannot use a database object in the same transaction that created it. You need to commit before you can use the table.
In other words, you should use:
create global temporary table TEMPCODES
(
codigo varchar(13)
)
on commit delete rows;
commit;
insert into TEMPCODES values('20-04422898-0');
Also, it is important to realise that global temporary tables (GTT) are intended as permanent objects. The idea is to create a GTT once, and then use it whenever you need it. The content of a GTT is only visible to the current transaction (on commit delete rows) or to the current connection (on commit preserve rows). Creating a GTT on the fly is not the normal usage pattern for GTTs.
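To make that concrete, here is a minimal sketch of the intended pattern, reusing the TEMPCODES definition from the question: the GTT is created once (and committed) as part of the schema, and later transactions simply use it.
-- run once, as part of the database schema, then commit
create global temporary table TEMPCODES
(
  codigo varchar(13)
)
on commit delete rows;
commit;

-- later, in any transaction that needs scratch space
insert into TEMPCODES values ('20-04422898-0');
select codigo from TEMPCODES;
commit; -- with ON COMMIT DELETE ROWS, the rows (not the table) disappear here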

Related

How to truncate a partitioned external table in hive?

I'm planning to truncate a Hive external table which has one partition. So, I used the following command to truncate the table:
hive> truncate table abc;
But it throws an error stating: Cannot truncate non-managed table abc.
Can anyone please suggest how to handle this?
Make your table MANAGED first:
ALTER TABLE abc SET TBLPROPERTIES('EXTERNAL'='FALSE');
Then truncate:
truncate table abc;
And finally you can make it external again:
ALTER TABLE abc SET TBLPROPERTIES('EXTERNAL'='TRUE');
By default, TRUNCATE TABLE is supported only on managed tables. Attempting to truncate an external table results in the following error:
Error: org.apache.spark.sql.AnalysisException: Operation not allowed: TRUNCATE TABLE on external tables
Action Required
Change applications so they do not attempt to run TRUNCATE TABLE on an external table.
Alternatively, change applications to set the table property external.table.purge to true, which allows truncation of an external table:
ALTER TABLE mytable SET TBLPROPERTIES ('external.table.purge'='true');
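A minimal sketch of that alternative, assuming a Hive version (Hive 3.x) that honours the external.table.purge property and using the table abc from the question:
-- allow TRUNCATE (and data-purging DROP) on the external table
ALTER TABLE abc SET TBLPROPERTIES ('external.table.purge'='true');
TRUNCATE TABLE abc;
-- optionally restore the previous behaviour afterwards
ALTER TABLE abc SET TBLPROPERTIES ('external.table.purge'='false');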
There is an even better solution to this, which is basically a one liner.
insert overwrite table table_xyz select * from table_xyz where 1=2;
This will delete all the data files and leave an empty file in the external table's location, so the table ends up with zero records.
Look at https://issues.apache.org/jira/browse/HIVE-4367 : use
truncate table my_ext_table force;

What is the process during re-naming and re-creating a MS-SQL table using stored procedure?

I have a table called myTable where continuous insertion is happening. I will rename that table to myTable_Date and create a new table, myTable, through a stored procedure.
I want to know what will happen during re-naming and re-creating the table, will it drop any packet?
SQL Server has sp_rename built in if you just want to change the name of a table.
sp_rename myTable, myTable_Date
Would change the name from myTable to myTable_Date
But it only changes the name reference in sys.objects, so make sure any references to the table are altered, and read the documentation about it :)
See the Microsoft documentation for sp_rename.
When you rename myTable to myTable_Date, myTable won't exist anymore, so if someone tries to insert something into myTable it will fail.
When you create a new myTable with the same name and columns, everything will be fine and the insertion process will continue.
I suggest you make a small script that renames the table and creates the new one. Something like this:
sp_rename myTable, myTable_Date
GO
CREATE TABLE myTable(
-- Table definition
)
When you rename the table you will get a warning like this: "Caution: Changing any part of an object name could break scripts and stored procedures." So you should create the new table as quickly as possible.
Another option is to create a table exactly like myTable, insert all the data from myTable into it, and then delete those rows from myTable. No renaming, no dropping, and the insertion process will not be interrupted.
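A rough sketch of that copy-then-delete approach; InsertedAt is a hypothetical timestamp column used here to bound both statements so rows arriving during the copy are not lost:
-- assumes myTable_Date already exists with the same definition as myTable,
-- and that InsertedAt is a (hypothetical) column recording when each row arrived
DECLARE @cutoff datetime2 = SYSDATETIME();

INSERT INTO myTable_Date
SELECT *
FROM myTable
WHERE InsertedAt < @cutoff;

DELETE FROM myTable
WHERE InsertedAt < @cutoff;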
I want to know what will happen during re-naming and re-creating the
table, will it drop any packet?
Inserts attempted after the table is renamed will fail until the table is recreated. You can avoid that by executing both tasks in a transaction. Short-term blocking will happen if an insert is attempted before the transaction is committed, but no rows will be lost. For example:
CREATE PROC dbo.RenameMytableWithDate
AS
DECLARE @NewName sysname = 'mytable_' + CONVERT(nchar(8), SYSDATETIME(), 112);
SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRAN;
    -- rename the live table, then recreate an empty one, inside a single transaction
    EXEC sp_rename N'dbo.mytable', @NewName;
    CREATE TABLE dbo.mytable(
        col1 int
    );
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;
GO
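The procedure can then be executed whenever the rollover is needed, for example:
EXEC dbo.RenameMytableWithDate;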
I don't know your use case for renaming tables like this but it seems table partitioning might be a better approach as @Damien_The_Unbeliever suggested. Although table partitioning previously required Enterprise Edition, the feature is available in Standard Edition beginning with SQL Server 2016 SP1 as well as Azure SQL Database.

Can AWS Redshift drop a table that is wrapped in transaction?

During the ETL we do the following operations:
begin transaction;
drop table if exists target_tmp;
create table target_tmp (like target);
insert into target_tmp select * from source_a inner join source_b on ...;
analyze target_tmp;
drop table target;
alter table target_tmp rename to target;
commit;
The SQL command is performed by AWS Data Pipeline, if this is important.
However, the pipelines sometimes fail with the following error:
ERROR: table 111566 dropped by concurrent transaction
Redshift supports serializable isolation. Does one of the commands break isolation?
Yes that works, but if generating the temp table takes a while you can expect to see that error for other queries while it runs. You could try generating the temp table in a separate transaction (transaction may not be needed unless you worry about updates to the source tables). Then do a quick rotation of the table names so there is much less time for contention:
-- generate target_tmp first then
begin;
alter table target rename to target_old;
alter table target_tmp rename to target;
commit;
drop table target_old;
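Putting that together with the preparation step from the question, the full flow might look roughly like this (the join condition is elided, as in the question):
-- 1. build the new data outside the rename transaction
drop table if exists target_tmp;
create table target_tmp (like target);
insert into target_tmp select * from source_a inner join source_b on ...;
analyze target_tmp;

-- 2. swap the table names in a short transaction to minimise contention
begin;
alter table target rename to target_old;
alter table target_tmp rename to target;
commit;

-- 3. clean up once the swap has committed
drop table target_old;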

Passing temp table from one execution to another

I want to pass a temp table from one execution path to another one nested inside it.
What I have tried is this:
DECLARE @SQLQuery AS NVARCHAR(MAX)
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE('SELECT TOP (100) * FROM ' + tempdb..#tempTable)
EXECUTE sp_executesql @SQLQuery
but it fails with this error message:
Incorrect syntax near 'tempdb'
Is there a another\better way to pass temporary table between execution contexts?
You can create a global temp table using the ##tablename syntax (double hash). The difference is explained on the TechNet site:
There are two types of temporary tables: local and global. They differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from the instance of SQL Server.
For example, if you create the table employees, the table can be used by any person who has the security permissions in the database to use it, until the table is deleted. If a database session creates the local temporary table #employees, only the session can work with the table, and it is deleted when the session disconnects. If you create the global temporary table ##employees, any user in the database can work with this table. If no other user works with this table after you create it, the table is deleted when you disconnect. If another user works with the table after you create it, SQL Server deletes it after you disconnect and after all other sessions are no longer actively using it.
If a temporary table is created with a named constraint and the temporary table is created within the scope of a user-defined transaction, only one user at a time can execute the statement that creates the temp table. For example, if a stored procedure creates a temporary table with a named primary key constraint, the stored procedure cannot be executed simultaneously by multiple users.
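A minimal sketch of that approach for the question's scenario (the table and variable names are illustrative):
-- populate a global temp table in one execution context
create table ##tempTable (SomeValue varchar(10))
insert ##tempTable select 'made it'

-- a separate dynamic SQL execution can reference it by name
declare @SQLQuery nvarchar(max) = N'select top (100) * from ##tempTable'
exec sp_executesql @SQLQuery
-- drop table ##tempTable when finished (otherwise it disappears when all referencing sessions end)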
The next suggestion may be even more helpful:
Many uses of temporary tables can be replaced with variables that have the table data type. For more information about using table variables, see table (Transact-SQL).
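For completeness, a small sketch of the table-variable alternative; note that a table variable is scoped to the batch that declares it, so unlike a temp table it is not visible inside sp_executesql unless it is passed as a table-valued parameter:
declare @tempTable table (SomeValue varchar(10))
insert @tempTable select 'made it'
select top (100) * from @tempTable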
Your temp table will be visible inside the dynamic SQL with no problem. I am not sure if you are creating the temp table inside the dynamic SQL or before.
Here it is with the table created BEFORE the dynamic SQL.
create table #Temp(SomeValue varchar(10))
insert #Temp select 'made it'
exec sp_executesql N'select * from #Temp'
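For contrast, a temp table created inside the dynamic SQL lives only in that inner scope and is dropped when it ends, so the outer batch cannot see it:
exec sp_executesql N'create table #Inner(SomeValue varchar(10)); insert #Inner select ''made it'''
select * from #Inner -- fails: Invalid object name '#Inner'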
The reason for your syntax error is that you are doing an unnecessary EXECUTE inside an EXECUTE, and you didn't escape the nested single-quote. This would be the correct way to write it:
SET @SQLQuery='
--populate #tempTable with values
SELECT TOP 100 * FROM tempdb..#tempTable'
However, I have a feeling that the syntax error is only the beginning of your problems. Impossible to tell what you're ultimately trying to do here, only seeing this much of the code, though.
Your quotations are messed up. Try:
SET @SQLQuery = '
--populate #tempTable with values
EXECUTE(''SELECT TOP 100 * FROM tempdb..#tempTable'')'

SQL: Why does a CREATE TRIGGER need to be preceded by GO

When making a SQL script to create a trigger on a table, I wanted to check that the trigger doesn't already exist before I create it. Otherwise the script cannot be run multiple times.
So I added a statement to first check whether the trigger exists. After adding that statement, the CREATE TRIGGER statement no longer works.
IF NOT EXISTS (SELECT name FROM sysobjects
WHERE name = 'tr_MyTable1_INSERT' AND type = 'TR')
BEGIN
CREATE TRIGGER tr_MyTable1_INSERT
ON MyTable1
AFTER INSERT
AS
BEGIN
...
END
END
GO
This gives:
Msg 156, Level 15, State 1, Line 5
Incorrect syntax near the keyword 'TRIGGER'.
The solution would be to drop the existing trigger and then create the new one:
IF EXISTS (SELECT name FROM sysobjects
WHERE name = 'tr_MyTable1_INSERT' AND type = 'TR')
DROP TRIGGER tr_MyTable1_INSERT
GO
CREATE TRIGGER tr_MyTable1_INSERT
ON MyTable1
AFTER INSERT
AS
BEGIN
...
END
GO
My question is: why is the first example failing? What is so wrong with checking the trigger exists?
Certain statements need to be the first statement in a batch (that is, a group of statements separated by GO).
Quote:
CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE RULE, CREATE SCHEMA, CREATE TRIGGER, and CREATE VIEW statements cannot be combined with other statements in a batch. The CREATE statement must start the batch. All other statements that follow in that batch will be interpreted as part of the definition of the first CREATE statement.
It's simply one of the rules for SQL Server batches (see):
http://msdn.microsoft.com/en-us/library/ms175502.aspx
Otherwise you could change an object, say a table, and then refer to the change in the same batch, before the change was actually made.
Schema changes should always be separate batch calls... I am guessing they do it to guarantee your SELECT will succeed; if you modify the schema in the same batch they may not be able to guarantee that. Just a guess...
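If the goal is simply a script that can be re-run without dropping the trigger first, one common workaround (not part of the answers above) is to put the CREATE TRIGGER in its own batch via dynamic SQL, which satisfies the first-in-batch rule; the trigger body below is only a placeholder:
IF NOT EXISTS (SELECT name FROM sysobjects
WHERE name = 'tr_MyTable1_INSERT' AND type = 'TR')
BEGIN
EXEC('CREATE TRIGGER tr_MyTable1_INSERT
ON MyTable1
AFTER INSERT
AS
BEGIN
-- placeholder trigger body
SET NOCOUNT ON;
END');
END
GO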