I am basing my code on the "Import to mapped columns" question asked by another user.
Here is the code:
CREATE TABLE #TempTable (Name nvarchar(max))
BULK INSERT #TempTable
FROM 'C:\YourFilePath\file.csv'
WITH ( FIELDTERMINATOR = ',',
ROWTERMINATOR = ‘\n’
)
INSERT INTO [YourTable] ([Name], [TypeId])
SELECT Name, '99E05902-1F68-4B1A-BC66-A143BFF19E37' FROM #TempTable
Do I put this code into a stored procedure or a function to run it from my ASP script?
Is the TypeId on the last line a constant?
You can use either a stored procedure or a table-valued function, but since table-valued function support depends on the version of SQL Server you have, it probably makes more sense to use a stored procedure.
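If you go the stored procedure route, a minimal sketch could look like the following (the procedure name dbo.ImportNames and the target table dbo.Clients are placeholders, and the file path must be a literal inside BULK INSERT):
CREATE PROCEDURE dbo.ImportNames
AS
BEGIN
    -- Stage the file into a temp table first.
    CREATE TABLE #TempTable (Name nvarchar(max));

    BULK INSERT #TempTable
    FROM 'C:\YourFilePath\file.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- Copy into the real table, adding the constant TypeId.
    INSERT INTO dbo.Clients ([Name], [TypeId])
    SELECT Name, '99E05902-1F68-4B1A-BC66-A143BFF19E37'
    FROM #TempTable;
END
From your ASP script you would then simply run EXEC dbo.ImportNames over your existing connection.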
Hi, I created a stored procedure that uses OPENJSON to insert data into a table.
The problem is that when I run the stored procedure it throws an error.
I am using SQL Server 2016 (13.0.4446.0). I do not get the same issue on SQL Server 13.0.1742.0.
CREATE PROCEDURE [dbo].Test2 -- '[{"FileId":1,"DataRow":"3000926900"}]'
(
@data varchar(max)
)
AS
BEGIN
create table #Temp
(
FileId bigint,
DataRow nvarchar(max),
DateLoaded DateTime
)
INSERT INTO #Temp
SELECT * FROM OPENJSON(@data)
WITH (FileId bigint,
DataRow nvarchar(max),
DateLoaded DateTime)
select * from #temp
END
Error:
If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
Check your database compatibility level. OPENJSON is new in SQL Server 2016, and if your compatibility level is set to SQL Server 2014 (120) or lower, the OPENJSON function will not be recognized or executed. See the docs at https://learn.microsoft.com/en-us/sql/t-sql/functions/openjson-transact-sql .
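For example, a quick check and fix (replace YourDatabase with your actual database name):
-- 130 corresponds to SQL Server 2016; OPENJSON needs at least 130.
SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();

ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 130;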
Every day a PPE.txt file with client data, semicolon-separated and always in the same layout, is saved to a specific directory.
Every day someone has to update a specific table in our database based on this PPE.txt.
I want to automate this process with a SQL script.
What I thought would be a solution is to import the data from this .txt file into a staging table via a script, and then run the update.
What I have so far is
IF EXISTS (SELECT 1 FROM Sysobjects WHERE name LIKE 'CX_PPEList_TMP%')
DROP TABLE CX_PPEList_TMP
GO
CREATE TABLE CX_PPEList_TMP
(
Type_Registy CHAR(1),
Number_Person INTEGER,
CPF_CNPJ VARCHAR(14),
Type_Person CHAR(1),
Name_Person VARCHAR(80),
Name_Agency VARCHAR(40),
Name_Office VARCHAR(40),
Number_Title_Related INTEGER,
Name_Title_Related VARCHAR(80)
)
UPDATE Table1
SET SN_Policaly_Exposed = 'Y'
FROM Table1
     INNER JOIN CX_PPEList_TMP
         ON Table1.CD_Personal_Number = CX_PPEList_TMP.CPF_CNPJ
WHERE Table1.SN_Policaly_Exposed = 'N'
UPDATE Table1
SET SN_Policaly_Exposed = 'N'
WHERE Table1.CD_Personal_Number NOT IN (SELECT CX_PPEList_TMP.CPF_CNPJ
                                        FROM CX_PPEList_TMP)
AND Table1.SN_Policaly_Exposed = 'Y'
I know I haven't given much, but that's because I don't have much yet.
I want to populate the CX_PPEList_TMP table with the data from the PPE.txt file via a script, so I could just execute that script to update my database. But I don't know of any command I can use for this, nor have I found one in my research.
Thanks in advance!
Using OPENROWSET
You can read text files using the OPENROWSET option (first you have to enable ad hoc distributed queries).
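To enable ad hoc queries (this is a server-wide setting and needs sysadmin rights, so check with your DBA first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;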
Using Microsoft Text Driver
SELECT * FROM OPENROWSET('MSDASQL',
'Driver={Microsoft Text Driver (*.txt; *.csv)};
DefaultDir=C:\Docs\csv\;',
'SELECT * FROM PPE.txt')
Using OLEDB provider
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Text;Database=C:\Docs\csv\;IMEX=1;',
                'SELECT * FROM PPE.txt') t
Using BULK INSERT
You can import text file data to a staging table and update data from it:
BULK INSERT dbo.StagingTable
FROM 'C:\PPE.txt'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n'
)
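Tying this back to your table, the daily script could then be roughly the following (the file path is a placeholder, and CX_PPEList_TMP is assumed to exist already):
-- Clear yesterday's load and import today's file.
TRUNCATE TABLE CX_PPEList_TMP;

BULK INSERT CX_PPEList_TMP
FROM 'C:\YourFolder\PPE.txt'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n');

-- Then run the two UPDATE statements from your question.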
In your case, I recommend using an ETL tool like SSIS. It is much easier to work with, and you can also schedule the package to run at a specific time.
I can't seem to find the answer to this quite trivial question.
I would like to bulk import data from a .csv file (with an unknown number of columns, comma-separated) into a new SQL Server table within an existing database. The BULK INSERT statement works fine if the table is predefined, but since I don't know the number of columns in the .csv file up front, this won't work.
I was trying to use BULK INSERT in combination with OPENROWSET, but I can't get it working.
By the way: SSIS won't be an option in this case, since I would like to run the query from R (sqlQuery) or Python.
Help would be highly appreciated!
I have found a workaround, using R, to solve the problem above. The same logic can be applied in other languages. I advise everyone using this function to keep in mind the useful comments above.
I wrote a small function to capture the steps in R:
SQLSave <- function(dbhandle, data, tablename) {
# Export data to temp path, for example within your SQL Server directory.
write.csv2(data,file = "\\\\pathToSQL\\temp.csv",row.names=FALSE,na="")
# Write first 100 rows to SQL Server, to incorporate the data structure.
sqlSave(dbhandle, head(data,100), tablename = tablename, rownames = FALSE, safer = FALSE)
# SQL Query to remove data in the table, structure remains:
sqlQuery(dbhandle,paste("DELETE FROM [",tablename,"]",sep=""));
# SQL Query to bulk insert all data from temp .csv to SQL Server
sqlQuery(dbhandle,paste("BULK INSERT [",tablename,"]
FROM '\\\\pathToSQL\\temp.csv'
WITH
(
FIELDTERMINATOR = ';',
ROWTERMINATOR = '\\n',
FIRSTROW = 2,
KEEPNULLS
)",sep=""));
# Delete temp file from file directory
file.remove("\\\\pathToSQL\\temp.csv")
}
I am currently struggling with the same problem. I first read the first row (the headers) using BULK INSERT and created the table from it. Then, again using BULK INSERT starting from row 2, I imported the data into the table. You will have to adjust the data types after checking the imported data.
CREATE TABLE #Header (HeadString nvarchar(max))
DECLARE @TableName nvarchar(100) = 'byom.DenormReportingData_100_100'
DECLARE @Columns nvarchar(max) = ''
DECLARE @Query nvarchar(max) = ''
DECLARE @Query2 nvarchar(max) = ''

-- Read only the header row to discover the column names.
BULK INSERT #Header
FROM 'F:/Data/BDL_BI_Test.csv'
WITH (FIRSTROW = 1, LASTROW = 1)

SELECT @Columns = (SELECT QUOTENAME(value) + ' VARCHAR(500)' + ','
                   FROM #Header
                   CROSS APPLY STRING_SPLIT(HeadString, ',')
                   FOR XML PATH(''))

IF ISNULL(@Columns, '') <> ''
BEGIN
    SET @Columns = LEFT(@Columns, LEN(@Columns) - 1)   -- drop trailing comma
    SET @Query = 'CREATE TABLE ' + @TableName + ' (' + @Columns + ')'
    EXEC(@Query)
END

-- Load the data rows into the freshly created table.
SET @Query2 = 'bulk insert ' + @TableName + ' from ''F:/Data/BDL_BI_Test.csv''
with(firstrow=2,FORMAT=''csv'',FIELDTERMINATOR='','',ROWTERMINATOR=''\n'')'
EXEC(@Query2)
I need to read the CSV file record by record: if the customer in the file exists in the Customer table, the row should go into the detail table, otherwise into an error table. So a plain bulk insert into the target table won't work here.
How do I read records one by one from the CSV file, and how do I give the path?
One option is to use an INSTEAD OF INSERT trigger to route each row to the correct table, and then run your normal BULK INSERT with the FIRE_TRIGGERS option.
Something close to:
CREATE TRIGGER bop ON MyTable INSTEAD OF INSERT AS
BEGIN
INSERT INTO MyTable
SELECT inserted.id,inserted.name,inserted.otherfield FROM inserted
WHERE inserted.id IN (SELECT id FROM customerTable);
INSERT INTO ErrorTable
SELECT inserted.id,inserted.name,inserted.otherfield FROM inserted
WHERE inserted.id NOT IN (SELECT id FROM customerTable);
END;
BULK INSERT MyTable FROM 'c:\temp\test.sql'
WITH (FIELDTERMINATOR=',', FIRE_TRIGGERS);
DROP TRIGGER bop;
If you're importing files regularly, you can create a table (ImportTable) with the same schema, set the trigger on that, and do the imports to MyTable through bulk inserts into ImportTable. That way you can keep the trigger in place, and as long as you're importing to ImportTable, you don't need any special setup or procedure for each import.
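A rough sketch of that setup, reusing the same placeholder columns as above (the trigger name and the empty-copy trick are just one way to do it):
-- Clone MyTable's columns into an empty, permanent import table.
SELECT * INTO ImportTable FROM MyTable WHERE 1 = 0;
GO

CREATE TRIGGER bop_import ON ImportTable INSTEAD OF INSERT AS
BEGIN
    INSERT INTO MyTable (id, name, otherfield)
    SELECT id, name, otherfield FROM inserted
    WHERE id IN (SELECT id FROM customerTable);

    INSERT INTO ErrorTable (id, name, otherfield)
    SELECT id, name, otherfield FROM inserted
    WHERE id NOT IN (SELECT id FROM customerTable);
END;
GO

-- Each daily import then only needs:
BULK INSERT ImportTable FROM 'c:\temp\test.csv'
WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);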
CREATE TABLE #ImportData
(
CVECount varchar(MAX),
ContentVulnCVE varchar(MAX),
ContentVulnCheckName varchar(MAX)
)
BULK INSERT #ImportData
FROM 'D:\test.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
select * from #ImportData
-- Here you can write your script to read the data row by row
DROP TABLE #ImportData
Use BULK INSERT to load the file into a staging table and then process it line by line.
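A minimal sketch of that row-by-row step, assuming the file has been staged into a temp table #Staging with CustomerId and Amount columns, and that Customer, CustomerDetail and ImportError tables exist (all of these names are placeholders, adapt them to your layout):
DECLARE @CustomerId varchar(50), @Amount varchar(50);

DECLARE staged CURSOR LOCAL FAST_FORWARD FOR
    SELECT CustomerId, Amount FROM #Staging;

OPEN staged;
FETCH NEXT FROM staged INTO @CustomerId, @Amount;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Route the row depending on whether the customer exists.
    IF EXISTS (SELECT 1 FROM Customer WHERE CustomerId = @CustomerId)
        INSERT INTO CustomerDetail (CustomerId, Amount) VALUES (@CustomerId, @Amount);
    ELSE
        INSERT INTO ImportError (CustomerId, Amount) VALUES (@CustomerId, @Amount);

    FETCH NEXT FROM staged INTO @CustomerId, @Amount;
END

CLOSE staged;
DEALLOCATE staged;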
I am writing a stored procedure that dynamically creates a SQL string, @SQLQuery. After I create this query, I need to execute it and insert the result into a table in the database, adding another column that holds a unique ID for this particular insert. (Context: in this application, multiple groupings of data may be inserted into this table, and I need to be able to differentiate between groupings at a later date.)
This issue is similar to this question, except I am using Microsoft SQL Server 2008 instead of MySQL. I have tried the solution there:
INSERT INTO data_table_name
EXECUTE(@SQLQuery), @SomeID
but MS SQL Server 2008 doesn't like that syntax.
Any ideas on how to do this in SQL Server 2008?
You can store the query result in a table variable, then read from it with the extra column and write to the final table:
DECLARE @temp table (col1 int, col2 varchar(10), ....)
INSERT INTO @temp
EXEC(@SQLQuery)
INSERT INTO data_table_name
SELECT *, @SomeID FROM @temp
You can also append @SomeID to your dynamic SQL string.
Example:
SET @SQLQuery = 'SELECT *,' + @SomeID + ' FROM ' + @tableNameVar
and then do this
INSERT INTO data_table_name
EXECUTE(@SQLQuery)
Since you mentioned you are doing this in a stored procedure, what I would suggest is:
Execute the @SQLQuery first;
Upon successful execution of the @SQLQuery, insert the @SQLQuery into the table with the unique ID.
i.e.
EXEC sp_executesql @SQLQuery, @Param
IF @@ERROR = 0
BEGIN
    INSERT INTO TableA(Query)
    VALUES(@SQLQuery)
END
You're better off designing TableA so that its ID is an IDENTITY column; a unique sequential ID is then generated automatically whenever you insert a record into that table.
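For example (the column list here is illustrative):
CREATE TABLE TableA
(
    Id    int IDENTITY(1,1) PRIMARY KEY,  -- assigned automatically on insert
    Query nvarchar(max)
);

INSERT INTO TableA (Query) VALUES (@SQLQuery);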