Schema.ini requires a blank line at the top - sql

Today I found a problem with schema.ini; here is my example:
Query:
SELECT *
FROM OpenDataSource('Microsoft.ACE.OLEDB.12.0','Data Source="C:\Temp\";User ID=;Password=;Extended properties="Text;HDR=Yes;FMT=Delimited()"')...[ve01#csv]
ve01.csv file content:
Record No.|Sales Target Link
00000000|00000000
00000001|00000000
00000002|00000003
00000003|00000007
00000004|00000008
00000005|00000000
schema.ini file:
---------------------------
[VE01.csv]
ColNameHeader=True
Format=Delimited(|)
TextDelimiter=
Col1=Record_No Text
Col2=Sales_Target_Link Text
---------------------------
The query returns data correctly separated by (|) only if I add a blank line at the top of schema.ini, like below:
---------------------------

[VE01.csv]
ColNameHeader=True
Format=Delimited(|)
TextDelimiter=
Col1=Record_No Text
Col2=Sales_Target_Link Text
---------------------------
Can someone please help?
Thanks

Related

Splunk: extract a value from a string which begins with a particular value

Could you help me extract the file name in table format? The field just before the file name is always constant: "Put File /test/abc/test/test/test to /test/test/test/test/test/test/test/test/test/test destFolderPath: /test/test/test/test/test/test/test/abc/def/hij"
This is an event from Splunk:
2021-04-08T01:03:40.155069+00:00 somedata||someotherdata||..|||Put File /test/abc/test/test/test to /test/test/test/test/test/test/test/test/test/test destFolderPath: /test/test/test/test/test/test/test/abc/def/hij/CHARGEBACK_20210407_060334_customer.csv
The result should be in table format (font/format doesn't matter):
File Name
CHARGEBACK_20210407_060334_customer.csv
Assuming the original event/field ends with the file name, you should use this regular expression:
(?<file_name>[^\/]+)$
This will extract the text between the last "/" and the end of the event/field ("$").
You can test it here: https://regex101.com/r/J6bU3m/1
Now you can use Splunk's rex command to extract fields at search-time:
| makeresults
| eval _raw="2021-04-08T01:03:40.155069+00:00 somedata||someotherdata||..|||Put File /test/abc/test/test/test to /test/test/test/test/test/test/test/test/test/test destFolderPath: /test/test/test/test/test/test/test/abc/def/hij/CHARGEBACK_20210407_060334_customer.csv"
| fields - _time
| rex field=_raw "(?<file_name>[^\/]+)$"
Alternatively, you could also use this regular expression since you mentioned that the file path is always the same:
| rex field=_raw "abc\/def\/hij\/(?<file_name>.+)"
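Either way, appending Splunk's table command (| table file_name) will render the result in the table format you asked for.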

import a txt file with 2 columns into different columns in SQL Server Management Studio

I have a txt file containing numerous items in the following format:
DBSERVER: HKSER
DBREPLICAID: 51376694590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 09.03.2015 09:44:21 AM
READS: 1
Adds: 0
Updates: 0
Deletes: 0
DBSERVER: HKSER
DBREPLICAID: 21425584590
DBPATH: redirect.nsf
DBTITLE: Redirect AP
DATETIME: 08.03.2015 09:50:20 PM
READS: 2
Adds: 0
Updates: 0
Deletes: 0
.
.
.
.
I would like to import the txt file into the following format in SQL
1st column | 2nd column  | 3rd column   | 4th column  | 5th column             | ...
DBSERVER   | DBREPLICAID | DBPATH       | DBTITLE     | DATETIME               | ...
HKSER      | 51376694590 | redirect.nsf | Redirect AP | 09.03.2015 09:44:21 AM
HKSER      | 21425584590 | redirect.nsf | Redirect AP | 08.03.2015 01:08:07 AM
Thanks a lot!
You can dump that file into a temporary table with just a single text column. Once imported, you loop through that table using a cursor, storing the content into variables, and after every complete block of attribute lines (DBSERVER through Deletes, nine lines in your sample) you insert a new row into the real target table.
Not the most elegant solution, but it's simple and it will do the job.
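A minimal T-SQL sketch of that approach; the file path, the staging table, and the dbo.DbStats target table are all assumed names, not from the post:
CREATE TABLE #raw (line VARCHAR(4000));

BULK INSERT #raw
FROM 'C:\Temp\stats.txt'            -- assumed path to the txt file
WITH (ROWTERMINATOR = '\n');

DECLARE @line VARCHAR(4000), @key VARCHAR(50), @val VARCHAR(4000),
        @server VARCHAR(100), @replicaid VARCHAR(100), @path VARCHAR(255),
        @title VARCHAR(255), @datetime VARCHAR(50),  -- kept as text to dodge locale parsing
        @reads INT, @adds INT, @updates INT, @deletes INT;

-- NB: without an ORDER BY (e.g. on an IDENTITY column) row order is not
-- guaranteed; for a one-off load into a fresh heap this is usually fine.
DECLARE c CURSOR LOCAL FAST_FORWARD FOR SELECT line FROM #raw;
OPEN c;
FETCH NEXT FROM c INTO @line;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF CHARINDEX(':', @line) > 0
    BEGIN
        -- Split "KEY: value" on the FIRST colon, so DATETIME values keep theirs.
        SET @key = RTRIM(LEFT(@line, CHARINDEX(':', @line) - 1));
        SET @val = LTRIM(RTRIM(SUBSTRING(@line, CHARINDEX(':', @line) + 1, 4000)));

        IF @key = 'DBSERVER'         SET @server    = @val
        ELSE IF @key = 'DBREPLICAID' SET @replicaid = @val
        ELSE IF @key = 'DBPATH'      SET @path      = @val
        ELSE IF @key = 'DBTITLE'     SET @title     = @val
        ELSE IF @key = 'DATETIME'    SET @datetime  = @val
        ELSE IF @key = 'READS'       SET @reads     = @val
        ELSE IF @key = 'Adds'        SET @adds      = @val
        ELSE IF @key = 'Updates'     SET @updates   = @val
        ELSE IF @key = 'Deletes'     -- last attribute of a record: flush the row
        BEGIN
            SET @deletes = @val;
            INSERT INTO dbo.DbStats (DBSERVER, DBREPLICAID, DBPATH, DBTITLE,
                                     [DATETIME], READS, Adds, Updates, Deletes)
            VALUES (@server, @replicaid, @path, @title, @datetime,
                    @reads, @adds, @updates, @deletes);
        END
    END
    FETCH NEXT FROM c INTO @line;
END;
CLOSE c;
DEALLOCATE c;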
Using BULK INSERT you can load the headers and the data into two different columns, and then, using a dynamic SQL query, you can create a table and insert the data as required.
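A sketch of that idea, again with assumed names: stage each line into a single text column with BULK INSERT, then split on the first colon to get the header/value pairs that the dynamic SQL can pivot into a table:
CREATE TABLE #lines (line VARCHAR(4000));

BULK INSERT #lines
FROM 'C:\Temp\stats.txt'            -- assumed path
WITH (ROWTERMINATOR = '\n');

-- Split on the FIRST colon only, so DATETIME values keep their ':' intact.
SELECT LEFT(line, CHARINDEX(':', line) - 1)                   AS header,
       LTRIM(SUBSTRING(line, CHARINDEX(':', line) + 1, 4000)) AS [value]
FROM #lines
WHERE CHARINDEX(':', line) > 0;     -- ignore lines without a key:value pair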
For something like this I'd probably use SSIS.
The idea is to create a Script Component (as a Transformation).
You'll need to manually define your output columns (e.g. DBSERVER String (100)).
The source is your file (read normally).
The idea is that you build your rows line by line, then add the full row to the output buffer, e.g.:
Output0Buffer.AddRow();
Then write the rows to your destination.
If all files have a common format, you can wrap the whole thing in a Foreach Loop container.

Exporting data containing line feeds as CSV from PostgreSQL

I'm trying to export data from PostgreSQL to CSV.
First I created the query and tried exporting from pgAdmin with File -> Export to CSV. The CSV is wrong; it contains, for example:
The header: Field1;Field2;Field3;Field4
Now, the rows begin well, except that the last field is put on another line. Example:
Data1;Data2;Data3;
Data4;
The problem is I get an error when trying to import the data into another server.
The data is from a view I created.
I also tried
COPY view(field1,field2...) TO 'C:\test.csv' DELIMITER ',' CSV HEADER;
It exports the same file.
I just want to export the data to another server.
Edit:
When trying to import the CSV I get the error:
ERROR: extra data after last expected column
CONTEXT: COPY actions, line 3: "Data1, data2 etc."
So the first line is the header, and the second line is the first data row minus the last field, which sits alone on the third line.
In order to export the file to another server you have two options:
1. Creating a shared folder between the two servers, so that the database also has access to this directory:
COPY (SELECT field1,field2 FROM your_table) TO '[shared directory]' DELIMITER ',' CSV HEADER;
2. Triggering the export from the target server using the STDOUT of COPY. Using psql you can achieve this by running the following command:
psql yourdb -c "COPY (SELECT * FROM your_table) TO STDOUT" > output.csv
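As a variant of option 2, psql's client-side \copy meta-command writes the file on the machine where psql runs and saves you the shell redirection:
psql yourdb -c "\copy (SELECT * FROM your_table) TO 'output.csv' CSV HEADER"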
EDIT: Addressing the issue of fields containing line feeds (\n)
In case you want to get rid of the line feeds, use the REPLACE function.
Example:
SELECT E'foo\nbar';
?column?
----------
foo +
bar
(1 row)
Removing the line feed:
SELECT REPLACE(E'foo\nbaar',E'\n','');
replace
---------
foobaar
(1 row)
So your COPY should look like this:
COPY (SELECT field1,REPLACE(field2,E'\n','') AS field2 FROM your_table) TO '[shared directory]' DELIMITER ',' CSV HEADER;
The export procedure described above is OK, e.g.:
t=# create table so(i int, t text);
CREATE TABLE
t=# insert into so select 1,chr(10)||'aaa';
INSERT 0 1
t=# copy so to stdout csv header;
i,t
1,"
aaa"
t=# create table so1(i int, t text);
CREATE TABLE
t=# copy so1 from stdout csv header;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself, or an EOF signal.
>> i,t
1,"
aaa"
>> >> >> \.
COPY 1
t=# select * from so1;
i | t
---+-----
1 | +
| aaa
(1 row)

Find exact file name from the file name with complete path stored in database

I need to find an exact file name by executing a SQL query on a table containing a file_name column. In the file_name column the complete path of each file is stored, like D:/Workspace/app.js.
I can find app.js with this query:
SELECT *
FROM details
WHERE file_name LIKE '%app.js'
but the problem is if I write the query like
SELECT *
FROM details
WHERE file_name LIKE '%p.js'
it lists the app.js file as well. Could anyone guide me on how to get an exact match for a file name from the database when file names are stored with the complete path?
Thanks in advance.
How about this?
SELECT * FROM details WHERE file_name LIKE '%/app.js' OR file_name LIKE '%\app.js'
The "%" sign is used to define wildcards (missing letters) both before and after the pattern. So you'll never find %app.js because there are no xxxxapps.js.
Thanks to all of you, I got the result I wanted:
sql = "SELECT * FROM details WHERE file_name RLIKE ?";
ps = conn.prepareStatement(sql);
ps.setString(1, "[[:<:]]"+fname+"[[:>:]]");
This matches the exact string that the fname variable contains ([[:<:]] and [[:>:]] are MySQL's word-boundary markers).
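Since RLIKE and those markers suggest MySQL, a more direct sketch is to compare the last path segment itself; this also avoids the quirk that the unescaped "." in app.js acts as a regex wildcard:
-- Normalize '\' to '/', keep the text after the last separator
-- (the bare file name), and compare it for exact equality.
SELECT *
FROM details
WHERE SUBSTRING_INDEX(REPLACE(file_name, '\\', '/'), '/', -1) = 'app.js';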

Complex sub-string replace query SQL

I have the following table containing path info:
I need to replace the DIRECTORY_NAME value in the PATH field with the NEW_DIR_NAME value recursively.
sample table:
PATH                              | DIRECTORY_NAME | NEW_DIR_NAME
...............................................................................................................
\folder1\folder2\2a               | folder2\2a     | folder2/2a
\folder1\folder2\2a\folder3       | folder3        | folder3
\folder1\folder2\2a\folder4       | folder4        | folder4
\folder1\folder2\2a\folder4\2a\2b | 2a\2b          | 2a/2b
...............................................................................................................
The result would look like this (the replaced segments are the ones containing "/"):
NEW_PATH
...............................................................................................................
\folder1\folder2/2a
\folder1\folder2/2a\folder3
\folder1\folder2/2a\folder4
\folder1\folder2/2a\folder4\2a/2b
...............................................................................................................
The database is Oracle.
Using select replace(PATH, DIRECTORY_NAME, NEW_DIR_NAME) will yield the following (not the solution), because each row only applies its own replacement, so rows 2 and 3 keep the old folder2\2a prefix:
\folder1\folder2/2a
\folder1\folder2\2a\folder3
\folder1\folder2\2a\folder4
\folder1\folder2\2a\folder4\2a/2b
Please tell me your field name isn't really STRING. Anyways, here's the code you need, based on the supplied field names.
SELECT REPLACE(STRING,REFERENCE,REPLACE_WITH)
Your problem is your data. Your table posits a one-to-one relationship between PATH and DIRECTORY_NAME, and hence with NEW_DIR_NAME. But according to your required output this is clearly not so: the same DIRECTORY_NAME appears in multiple values of PATH.
So what you need to do is run the replace() for every combination where DIRECTORY_NAME != NEW_DIR_NAME, for example with a PL/SQL loop:
begin
  for lrec in ( select DIRECTORY_NAME, NEW_DIR_NAME
                from your_table
                where DIRECTORY_NAME != NEW_DIR_NAME )
  loop
    -- each iteration rewrites every PATH containing this directory name
    update your_table
    set PATH = replace(PATH, lrec.DIRECTORY_NAME, lrec.NEW_DIR_NAME);
  end loop;
end;
/
This is not a particularly efficient approach but presumably this is a one-off exercise.
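As a sanity check afterwards (a sketch reusing the your_table name from above), you can list any rows whose PATH still contains a directory name that was supposed to be rewritten; the query should return no rows:
select t.PATH, r.DIRECTORY_NAME, r.NEW_DIR_NAME
from your_table t
join your_table r
  on r.DIRECTORY_NAME != r.NEW_DIR_NAME
 and instr(t.PATH, r.DIRECTORY_NAME) > 0;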