I am running a daily full load to fill a table in my DB. I have discovered that using
insert into targettable
select * from sourcetable
stores some values in my target table that were possibly entered at the source with the wrong encoding. For example, one value looks like this:
ΞΏΞΉΞΊΞΏΞ½ΞΏΞΌΞΏΟΞΉΟΞ±Ξ½Ξ½Ξ·Ο
But when I execute:
select * from targettable where name = 'ΞΏΞΉΞΊΞΏΞ½ΞΏΞΌΞΏΟΞΉΟΞ±Ξ½Ξ½Ξ·Ο'
it returns no rows.
The DB, table, and column collation are all set to Greek_CI_AI.
Is there a way to locate and fix these values in my target table?
If not, how would I make sure that such values will not be inserted into my target table? (I can use a SQL statement or an SSIS task.)
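One thing to check: if name is an nvarchar column, write the literal as N'...'; without the N prefix, SQL Server converts the literal to the database code page before comparing, which can by itself explain the zero-row result. To locate suspect rows, a minimal sketch, assuming the garbled values share the alternating 'Ξ'/'Ο' pattern visible in the example (the usual signature of UTF-8 Greek decoded through a single-byte code page):
-- hedged: matches values containing the Ξ<x>Ξ / Ξ<x>Ο mojibake pattern;
-- the binary collation keeps the CI_AI collation from widening the match
select *
from targettable
where name collate Greek_BIN like N'%Ξ_Ξ%'
   or name collate Greek_BIN like N'%Ξ_Ο%';
Fixing the rows reliably usually means re-loading them from the source with the correct encoding; reversing mojibake inside T-SQL is not generally possible.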
I am using bulk copy to insert data from a DataTable (with data taken from an Oracle database) into a SQL table. That part works fine and I have no problem with it. After the data has been inserted correctly, I try to update a field of an Oracle table using the keys of the DataTable above. The shape of my approach is shown below:
update table1 set column1=1 where id in ( all keys of above datatable)
It is not working; Oracle refuses to run it because the string literal is too long.
How can I solve that? I do not want to create a temp table in Oracle because this service runs all the time.
I'd consider using a subquery instead, e.g.
update table1
set column1 = 1
where id in (select key from above_datatable);
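If the keys exist only on the client side (no Oracle table or database link for the subquery to read), another option would be batching; note that Oracle also caps a single IN list at 1,000 items (ORA-01795). A sketch along these lines, where the chunked key lists are placeholders:
-- hedged sketch: split the keys into chunks of at most 1000
-- and OR the IN lists together in one statement
update table1
set column1 = 1
where id in (/* keys 1 .. 1000 */)
   or id in (/* keys 1001 .. 2000 */);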
I am having an issue with a SQL query used in job automation.
The procedure inserts data from a source table (48 columns) into a destination table (49 columns, where the 49th/last column is NOT in the source table). All columns in both tables accept NULL, so that shouldn't be an issue when copying from 48 columns to 49.
It throws this error:
Column name or number of supplied values does not match table definition. [SQLSTATE 21S01] (Error 213). The step failed.
It should just insert NULL into the 49th column, and I have checked that the column names correspond.
Assume that I can't delete the 49th column.
What can I do here?
Accepting NULL doesn't mean you can specify 49 columns and 48 values in the INSERT statement. The number of columns and the number of values must match exactly. Either drop the extra column from the INSERT list or add a 49th value (NULL, I'd guess) to the values list. In both cases, if the column is NULLable, it will be set to NULL.
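For the second option, a minimal sketch with hypothetical column names, eliding the middle of the list:
INSERT INTO dbo.Destination
(
Col1,
...
Col48,
Col49
)
SELECT
Col1,
...
Col48,
NULL -- explicit 49th value for the column the source lacks
FROM dbo.Source;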
First, if you have code that's not working, you should post it so we can tell for sure what's happening. But I'd be pretty willing to bet you're trying to shortcut the process and use something like this:
INSERT tableB
SELECT *
FROM tableA
But the tables don't have the same number of columns, so the SQL Engine doesn't know which source column goes into which destination column. You need to provide an explicit list so it knows which one you intend to ignore:
INSERT tableB
(
col1,
col2,
...
col48
)
SELECT
col1,
col2,
...
col48
FROM tableA;
I want to copy data from one table to another in Vertica using the COPY FROM VERTICA command. I have a table holding a large amount of data, and I want to select a subset of it (where field1 = 'some val', etc.) and copy that to another table.
The source table has columns of type long varchar, and I want to copy these values into another table with different column types, like varchar, date, boolean, etc. What I want is that only valid values are copied into the destination table; bad data should be rejected.
I tried to move the data using an INSERT like the one below, but the problem is that even a single row with invalid data terminates the whole process (and nothing at all is copied into the destination table).
INSERT INTO cb.destTable(field1, field2, field3)
Select cast(field1 as varchar), cast(field2 as varchar), cast(field3 as int)
FROM sourceTable Where Id = 2;
How can this be done?
COPY FROM VERTICA and EXPORT TO VERTICA are intended to copy data between clusters. Even if you did loop back the connection, you would not be able to use rejects, as they are not supported by COPY FROM VERTICA. The mappings are strict, so if a value cannot be coerced, it will fail.
You'll have to do one of the following:
- INSERT ... SELECT ... WHERE <conditions to filter out data that won't coerce> (see the sketch below)
- INSERT ... SELECT <expressions that massage data that won't coerce>
- Export the data to a file using vsql (you can turn off headers/footers, turn off padding, set the delimiter to something that doesn't exist in your data, etc.), then use COPY to load it back in.
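A minimal sketch of the first option, assuming field3 is meant to hold integers (the digits-only check is an assumption about what will coerce in your data):
INSERT INTO cb.destTable (field1, field2, field3)
SELECT field1::VARCHAR, field2::VARCHAR, field3::INT
FROM sourceTable
WHERE Id = 2
  AND REGEXP_LIKE(field3, '^-?[0-9]+$'); -- keep only rows whose field3 will coerce to INT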
Try exporting it into a CSV file:
=> \o output.csv
=> SELECT CAST(field1 AS VARCHAR), CAST(field2 AS VARCHAR), CAST(field3 AS INT) FROM sourceTable WHERE Id = 2;
=> \o
Then use the COPY command to load it back into the desired table:
COPY cb.destTable FROM '(csv_directory)' DELIMITER '(comma or your configured delimiter)' NO ESCAPE NULL '(NULL indicator)' SKIP 1;
Are they both in the same Vertica database? If so, an alternative is:
DROP TABLE IF EXISTS cb.destTable;
CREATE TABLE cb.destTable AS
SELECT field1::VARCHAR, field2::VARCHAR, field3::VARCHAR
FROM sourceTable WHERE Id = 2;
I've created a Stored Procedure that refreshes the data in a table. It first re-loads the entire table. Next, several filters are applied. (Example: the column 'Model' must equal 'W'; all rows with model 'B' are deleted.) This happens after the table has been loaded (and not during) because I want to log how many rows are deleted because of each individual filter. After the filters have been applied, some columns contain the same value in every row (the other values were deleted in the filtering process). These columns are now useless, so I want to delete them.
This seems to be problematic for SQL Server. When given the command to execute the SP, it indicates that the columns it is supposed to remove in its final step do not currently exist, and refuses to run. That is technically correct (the columns don't exist yet), but they will be created by the SP itself.
Some mockup code:
CREATE PROCEDURE dbo.Procedure AS
BEGIN
    DROP TABLE dbo.Table;
    SELECT * INTO dbo.Table FROM dbo.View;
    INSERT INTO dbo.Log VALUES (GETDATE(), (SELECT COUNT(1) FROM dbo.Table));
    DELETE FROM dbo.Table WHERE Model <> 'W';
    INSERT INTO dbo.Log VALUES (GETDATE(), (SELECT COUNT(1) FROM dbo.Table));
    ALTER TABLE dbo.Table DROP COLUMN Model;
END
Error code when executing:
[2016-09-02 12:25:20] [S0001][207] Invalid column name 'Model'.
How do I circumvent this problem and get the SP to run?
If I understand correctly, you can use dynamic SQL. Because the statement inside the string is only compiled when it is executed, the column does not need to exist when the procedure is created:
exec sp_executesql N'ALTER TABLE dbo.Table DROP COLUMN Model';
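To make the step safe to run whether or not the column is still there, a small sketch using COL_LENGTH, which returns NULL when a column does not exist:
-- drop the column only if it is currently present
IF COL_LENGTH('dbo.Table', 'Model') IS NOT NULL
    EXEC sp_executesql N'ALTER TABLE dbo.Table DROP COLUMN Model';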
The syntax to remove a column from a table in SQL Server is:
ALTER TABLE TableName DROP COLUMN ColumnName;
This may be the cause of the issue.
Also check one more time whether the column 'Model' actually exists in the view, because I tried the same scenario and it works for me.
I am using Pentaho DI to insert data into a fact table. The thing is, the table from which I populate my fact table contains 10,000 records and changes frequently. Using database lookups and Insert/Update steps, I am able to load my fact table correctly once. But when new records are added to my source table (say it grows to 15,000) and I insert records into the fact table again, all 15,000 records are added to my fact table again. What I want is to add only the 5,000 new records that do not already exist in the fact table. Please suggest what transformations I need to perform to achieve this.
Try doing an upsert instead of an insert (if the row exists, update it; if not, insert it).
You can use a database feature for this.
In SQL Server 2008, there is a MERGE statement that solves this type of problem.
Here is an example in SQL Server 2008:
MERGE Production.UnitMeasure AS target
USING (SELECT @UnitMeasureCode, @Name) AS source (UnitMeasureCode, Name)
ON (target.UnitMeasureCode = source.UnitMeasureCode)
WHEN MATCHED THEN
UPDATE SET Name = source.Name
WHEN NOT MATCHED THEN
INSERT (UnitMeasureCode, Name)
VALUES (source.UnitMeasureCode, source.Name)
OUTPUT deleted.*, $action, inserted.* INTO #MyTempTable;
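In the fact-table scenario above, the target would be the fact table and the source your frequently changing source table, joined on the business key: rows already present are updated, new rows are inserted, so re-running the load no longer duplicates records.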