End of the line symbol in Liquibase

I'm using Liquibase in a Spring Boot app.
I'm inserting data into the DB (Postgres) with a migration script.
The problem:
I have a long string to insert into the DB, and I use '\n' to break it onto lines.
Like the following:
insert into public.long_string (id, header, content) values
('string1', 'header1',
'Line1\nLine2\nLine3\nLine4\nLine5'\nLine6.')
;
But in the DB I have the following stored:
'Line1\\nLine2\\nLine3\\nLine4\\nLine5'\\nLine6.'
And I get the same in the response to the front-end request.
Please help. Thanks in advance.

I tested using this changeset with Postgres, but I was unable to recreate your issue:
- changeSet:
    id: insert-table-test3
    author: XYZ
    changes:
      - sql:
          sql: insert into test3 (column1) values ('Line1\nLine2\nLine3\nLine4\nLine5\nLine6.');
This inserted 'Line1\nLine2\nLine3\nLine4\nLine5\nLine6.', as expected.
Can you provide your entire changeset so it can be reviewed? Also, I noticed you have an extra single quote after the 5 in your example; I had to remove that to run the changeset.

Solved by adding E before the inserted string containing special symbols (a PostgreSQL escape string constant):
https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE
Like:
insert into public.long_string (id, header, content) values
('string1', 'header1',
E'Line1\nLine2\nLine3\nLine4\nLine5\nLine6.')
;
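For reference, the difference is easy to see directly in psql: a standard string constant keeps \n as two literal characters, while an E'' escape string interprets it as a newline (this sketch assumes standard_conforming_strings = on, the PostgreSQL default since 9.1):

```sql
-- standard string: backslash-n stays as two literal characters
SELECT 'Line1\nLine2';
-- escape string: \n is interpreted as a real newline
SELECT E'Line1\nLine2';
```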

Related

Non-english letters in sql script problem - Oracle Sql Developer

I'm executing a random script against my local database and I have a problem with non-English letters. When I execute the same insert statement directly from SQL Developer, everything is OK. Could somebody explain how I can avoid this problem when using a SQL script?
Example. Everything works okay.
Statement: insert into my_table values ('aaaaała');
Result: 'aaaaała';
Now I'm pasting the same insert statement into my SQL file (script.sql) and I'm writing:
#'D:\script.sql';
'D:\' is the location of that file.
Statement: insert into my_table values ('aaaaała');
Result: 'aaaaała';
The result is wrong:
You must set your NLS_LANG value according to the character set of the script.sql file. Typically you set this in the options of the "Save" dialog.
For example if the .sql file was saved as UTF-8 then you must run:
set NLS_LANG=.AL32UTF8
sqlplus .... #'D:\script.sql';
See also OdbcConnection returning Chinese Characters as "?" for more details.

Liquibase is putting single quotes around all values of a CSV line for the insert SQL statement

This is what I have in the csv file:
CONTACT_TYP_CD,CONTACT_TYP_DESC,CREATE_DATE,CREATE_USER,UPDATE_DATE,UPDATE_USER
"ALL","Contact to be used for all communications","2014-03-14 00:00:00","CS_MAIN",null,null
This is how I load this file through liquibase:
<loadData file="src/main/resources/METAINF/install/seed_data/seed_contact_type.csv"
tableName="CONTACT_TYPE">
</loadData>
This is what liquibase uses to insert the data into oracle:
liquibase.exception.DatabaseException: Error executing SQL INSERT INTO CONTACT_TYPE (CONTACT_TYP_CD,CONTACT_TYP_DESC,CREATE_DATE,CREATE_USER,UPDATE_DATE,UPDATE_USER) VALUES ('"LL","Contact to be used for all communications","2014-03-14 00:00:00","CS_MAIN",null,null'): ORA-00947: not enough values
Can someone tell me what I am doing wrong? Thank you
Try removing the double quotes from your CSV file and define column types; see the following test case as an example:
changeSet source
csv file
Another solution might be to enclose the column titles in quotes as well, as seen here.
You need to add the XML escape for a double quote in the quotchar attribute:
quotchar="&quot;"
Your problem here is the separator, not the quote. Liquibase seems to use semicolons by default, so you have to specify separator=",". Otherwise the whole line is considered a single value.
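Putting these suggestions together, a loadData sketch with an explicit separator and quote character might look like this (the file path and table name are taken from the question; separator and quotchar are documented loadData attributes):

```xml
<loadData file="src/main/resources/METAINF/install/seed_data/seed_contact_type.csv"
          tableName="CONTACT_TYPE"
          separator=","
          quotchar="&quot;">
</loadData>
```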

Hive CLI doesn't support MySQL-style data import to tables

Why can't we import data in the Hive CLI as follows? The hive_test table has user and comments columns.
insert into table hive_test (user, comments)
value ("hello", "this is a test query");
Hive throws following exception in hive CLI
FAILED: ParseException line 1:28 cannot recognize input near '(' 'user' ',' in select clause
I don't want to import the data through a CSV file like the following for testing purposes.
load data local inpath '/home/hduser/test_data.csv' into table hive_test;
It's worth noting that Hive advertises "SQL-like" syntax, rather than actual SQL syntax. There's no particular reason to think that pure SQL queries will actually run on Hive. HiveQL's DML is documented here on the Wiki, and does not support the column specification syntax or the VALUES clause. However, it does support this syntax:
INSERT INTO TABLE tablename1 SELECT ... FROM ...
Extrapolating from these test queries, you might be able to get something like the following to work:
INSERT INTO TABLE hive_test SELECT 'hello', 'this is a test query' FROM src LIMIT 1
However, it does seem that Hive is not really optimized for this small-scale data manipulation. I don't have a Hive instance to test any of this on.
I think it is because user is a reserved keyword.
Try this:
insert into table hive_test ("user", comments)
value ('hello', 'this is a test query');
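As a side note: if I recall correctly, more recent Hive releases (0.14 and later) did add support for a VALUES clause, so something like the following may work there, though the answers above apply to the older versions the question was written against:

```sql
-- Hive 0.14+ accepts an INSERT ... VALUES clause (no column list)
INSERT INTO TABLE hive_test VALUES ('hello', 'this is a test query');
```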

SQL data has unwanted line breaks?

I have a script that is somehow inserting line breaks at the end of the data being inserted into SQL. I don't see anything in the script that is adding a line break.
Is there a way to strip all line breaks inside the INSERT statement? I can't imagine what could be doing this.
Thanks,
Mike
Apply the MySQL TRIM() function to the needed fields.
INSERT INTO table (field) VALUES (TRIM('foobar'))
where foobar is your data.
But better, I would suggest finding out why that script adds those newlines.
Thanks for the quick answers guys. I ended up using the php trim function right before the data insert - that worked.
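One caveat worth adding: MySQL's TRIM() with a single argument strips only spaces, not line breaks. If the unwanted characters really are CR/LF, nested REPLACE() calls will remove them; a sketch, where my_table, field, and the 'foobar' literal are stand-ins for the real script's names and data:

```sql
-- strip carriage returns and line feeds from the value before inserting
INSERT INTO my_table (field)
VALUES (REPLACE(REPLACE('foobar\r\n', '\r', ''), '\n', ''));
```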

Importing csv file to SQL Server Management Studio - No tables available

I am trying to import a CSV file to insert data into an existing table in my database. I go through the wizard, and when it comes to selecting source tables and views for the destination, there are none to choose from. It just thinks I am trying to create a new table.
Any suggestions? Thanks!
Skip the wizard and just use BULK INSERT; here is an example:
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Full example: SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
Cyril's answer is along the right track. However, a true CSV-compliant solution now exists since SQL Server 2017 (14.x) CTP 1.1.
Use this
BULK INSERT destinationtable
FROM 'filepath'
WITH
(
FORMAT = 'CSV'
)
GO
There is an issue, though. If your data uses the literal text NULL to indicate nulls, you'll need to remove it, as SSMS will not accept NULL as a valid NULL value. Do a search/replace of ,NULL with ,.
For example (4 columns):
1,NULL,test,"Escaped text with comma, this works"
must be formatted like this:
1,,test,"Escaped text with comma, this works"
See SQL won't insert null values with BULK INSERT for information on NULL insertion problems.
You can go to https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017 for more information.
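Tying this back to the original problem (an existing table with a header row in the CSV), a fuller sketch might look like the following; the table name and path are placeholders, and FIRSTROW and FIELDQUOTE are documented BULK INSERT options (FIELDQUOTE requires SQL Server 2017+, like FORMAT = 'CSV'):

```sql
-- assumes the destination table already exists and the CSV has a header row
BULK INSERT dbo.CONTACT_TYPE              -- hypothetical existing table
FROM 'C:\data\seed_contact_type.csv'      -- hypothetical file path
WITH
(
    FORMAT = 'CSV',      -- RFC 4180-compliant parsing (SQL Server 2017+)
    FIRSTROW = 2,        -- skip the header row
    FIELDQUOTE = '"'     -- quote character used in the file
);
GO
```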
We have multiple options:
using the DTS wizard
by programming
To learn the DTS path or the C# code, go through
http://sqlcopy.blogspot.in/2012/07/bulk-sql-to-sql-sql-to-csv-csv-to-sql.html (dead link)