escape character on copy csv - sql

I have a SQL script intended to export data from Postgres and spool it into a CSV file. It used to work fine until I added random sampling to the line.
Here is what the code looks like with random():
\COPY (select accountid, to_char(createtime,'YYYY-MM-DD HH24:MI:SS.ms') from accounts random()< 0.01 limit 1000) to '/home/oracle/scripts/accounts_p.csv' WITH DELIMITER ',' NULL AS ' '
ERROR MESSAGE when running this sql script:
psql:accounts_sample_p.sql:1: \copy: ERROR: syntax error at or near ")"
LINE 1: ...from accounts random ( ) < 0.01 l...
^
Apparently it did not like the (). I tried using an escape character with \ before ( and before ), but it did not help.
Can anyone give me an advice on how to overcome this? Thanks.

You seem to be missing a WHERE in your SELECT statement, i.e.:
select accountid, to_char(createtime,'YYYY-MM-DD HH24:MI:SS.ms') from accounts WHERE random() < 0.01 limit 1000
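Putting that back into the full \copy line from the question (same output path and options; note that \copy is a psql meta-command and has to stay on a single line):
\COPY (select accountid, to_char(createtime,'YYYY-MM-DD HH24:MI:SS.ms') from accounts WHERE random() < 0.01 limit 1000) to '/home/oracle/scripts/accounts_p.csv' WITH DELIMITER ',' NULL AS ' '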
Alternative methods for selecting a random set in PostgreSQL can be found here if you're struggling with performance:
Best way to select random rows PostgreSQL
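For example, if you are on PostgreSQL 9.5 or later, TABLESAMPLE can replace the random() filter entirely; a rough sketch (BERNOULLI (1) keeps roughly 1% of the rows):
\COPY (select accountid, to_char(createtime,'YYYY-MM-DD HH24:MI:SS.ms') from accounts TABLESAMPLE BERNOULLI (1) limit 1000) to '/home/oracle/scripts/accounts_p.csv' WITH DELIMITER ',' NULL AS ' '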

Related

unable to execute the following select statement - getting an error

When I execute the following select statement I am getting the error below.
I am trying to execute the statement on Oracle server 9.1; I believe that is why I am getting this error.
I know we can run the query with the single quote doubled, as in '%Blondeau D''Uva%', but I am looking for a query that will pass the special-character value as a parameter.
Kindly let me know how to escape single quotes on an old Oracle server.
ERROR :
ORA-00933: SQL Command not properly ended
Error at line : 1 Column : 50
Query:
Select * FROM TABLEA where UI_FNAME like q'[%Michael%]' and UI_LNAME like q'[ %Blondeau D'Uva%]';
On Oracle the following should work:
Select *
FROM TABLEA
where UI_FNAME like '[%Michael%]'
and UI_LNAME like '[ %Blondeau D''Uva%]';
So no duplicate where.
And you have to double the ' between D and Uva.
And the q before the ' is probably wrong, so I have removed it as well.
OK, I tried it out with the q operator:
Select *
FROM employees
where first_name like q'[%Michael%]'
and last_name like q'[ %Blondeau D'Uva%]';
No errors, no rows ...
Two things about your query:
You have used where twice where there should be just one.
About the second condition: you have used q'[ %Blondeau D'Uva%]' in the like clause. I think the leading space inside the brackets won't give you the result you might be looking for. This has nothing to do with your error, but it would still not hurt to re-check the query.
Try this; it shouldn't run you into any errors:
Select * FROM TABLEA
where UI_FNAME like q'[%Michael%]'
and UI_LNAME like q'[%Blondeau D'Uva%]';
Cheers!
Others have already tried to give you an answer, but I think you probably need to keep the % outside the brackets:
Select *
FROM TABLEA
where UI_FNAME like '%[Michael]%'
and UI_LNAME like '%[ Blondeau D''Uva]%';

Invalid digits on Redshift

I'm trying to load some data from the staging area into the relational environment, and something is happening that I can't figure out.
I'm trying to run the following query:
SELECT
CAST(SPLIT_PART(some_field,'_',2) AS BIGINT) cmt_par
FROM
public.some_table;
The some_field is a column that has data with two numbers joined by an underscore like this:
some_field -> 38972691802309_48937927428392
And I'm trying to get the second part.
That said, here is the error I'm getting:
[Amazon](500310) Invalid operation: Invalid digit, Value '1', Pos 0,
Type: Long
Details:
-----------------------------------------------
error: Invalid digit, Value '1', Pos 0, Type: Long
code: 1207
context:
query: 1097254
location: :0
process: query0_99 [pid=0]
-----------------------------------------------;
Execution time: 2.61s
Statement 1 of 1 finished
1 statement failed.
It's literally saying some numbers are not valid digits. I've already tried to pull the exact data which is throwing the error, and it appears to be a normal field just like I was expecting. It happens even if I throw out NULL fields.
I thought it would be an encoding error, but I've not found any references to solve that.
Anyone has any idea?
Thanks everybody.
I just ran into this problem and did some digging. It seems like the error Value '1' is the misleading part, and the problem is actually that these fields are simply not valid as numerics.
In my case they were empty strings. I found the solution to my problem in this blog post, which is essentially to find any fields that aren't numeric and set them to null before casting.
select cast(colname as integer)
from (
    select
        case when colname ~ '^[0-9]+$' then colname
             else null
        end as colname
    from tablename
) t;  -- the derived table needs an alias in Redshift
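Applied to the query from the question (same table and column names), a sketch of the same guard looks like this:
SELECT
  CASE WHEN SPLIT_PART(some_field, '_', 2) ~ '^[0-9]+$'
       THEN CAST(SPLIT_PART(some_field, '_', 2) AS BIGINT)
  END AS cmt_par   -- non-numeric values fall through to NULL instead of failing the cast
FROM public.some_table;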
Bottom line: this Redshift error is completely confusing and really needs to be fixed.
When you are using a Glue job to upsert data from a data source to Redshift:
Glue will rearrange the data and then copy it, which can cause this issue. This happened to me even after using ApplyMapping.
In my case, the datatype was not an issue at all. In the source the columns were typecast to exactly match the fields in Redshift.
Glue was rearranging the columns into the alphabetical order of the column names and then copying the data into the Redshift table (which will obviously throw an error, because my first column is an ID key, not a string column like the others).
To fix the issue, I used a SQL query within Glue to run a select command with the correct order of the columns in the table.
It's weird that Glue did that even after using ApplyMapping, but the work-around I used helped.
For example: the source table has fields ID|EMAIL|NAME with values 1|abcd@gmail.com|abcd, and the target table has fields ID|EMAIL|NAME. But when Glue is upserting the data, it rearranges the columns by name before writing, so it tries to write abcd@gmail.com|1|abcd into ID|EMAIL|NAME. This throws an error because ID expects an int value and EMAIL expects a string. I added a SQL query transform with the query "SELECT ID, EMAIL, NAME FROM data" to rearrange the columns before writing the data.
Hmmm. I would start by investigating the problem. Are there any non-digit characters?
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '[^0-9]';
Is the value too long for a bigint?
SELECT some_field
FROM public.some_table
WHERE LEN(SPLIT_PART(some_field, '_', 2)) > 18;
A bigint tops out at 19 digits (9,223,372,036,854,775,807), so anything longer than 18 digits is suspect. If you need more digits of precision, consider a decimal rather than a bigint.
If you get an error message like "Invalid digit, Value 'O', Pos 0, Type: Integer", try executing your copy command after eliminating the header row. Use the IGNOREHEADER parameter in your copy command to ignore the first line of the data file.
So the COPY command will look like the one below:
COPY orders FROM 's3://sourcedatainorig/order.txt' credentials 'aws_access_key_id=<your access key id>;aws_secret_access_key=<your secret key>' delimiter '\t' IGNOREHEADER 1;
For my Redshift SQL, I had to wrap my columns with Cast(col As Datatype) to make this error go away.
For example, setting my columns' datatype to Char with a specific length worked:
Cast(COLUMN1 As Char(xx)) = Cast(COLUMN2 As Char(xxx))

stream analytics query gets error "column name doesn't exist", but it does?

When I run my query in Management Studio it works fine, but in a Stream Analytics job it throws an error: Query compilation error: Invalid column name: 'afkorting'. Column with such name does not exist..
I downloaded the input tables to check if something went wrong with the upload, but that file does have the column name (and I double-checked for capital letters, misspellings, etc.), so how can I fix this?
This is my query:
; WITH Check AS
(
SELECT afkorting, *
FROM Reizen RE
LEFT JOIN Gegevens AP
ON RE.ID = AP.code
)
SELECT *
FROM Check CH
JOIN Model VM
ON CH.afkorting = VM.Station
WHERE VM.h_station = VM.v_station
AND DATEPART(hour, CH.MsgReportDate) = VM.start_uur
AND (DATEPART(minute, CH.MsgReportDate) BETWEEN VM.start_minuut AND VM.eind_minuut)
AND DATEPART(weekday, CH.MsgReportDate) = VM.weekdag
Hope someone can help me!
*PROBLEM SOLVED: you need to list all column names, so not SELECT * but SELECT column1, column2, and use the given table prefixes, in my case: AP.column1, RE.column2, etc.*
Just to summarize all the comments above for resolving the issue: I did some testing of the Stream Analytics query language elements WITH, SELECT & JOIN. Here is my result list for the issue.
Without a JOIN, using column names together with the * symbol in the WITH scope executes fine on ASA.
With a JOIN, it's necessary to list all the column names you want, without the * symbol. The reason seems to be to avoid ambiguity from column-name conflicts.
You need to list all column names, so not SELECT * but SELECT column1, column2, and use the given prefixes of the table, for example in my case: AP.column1, RE.column2, etc.
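For the query in the question, a rough sketch of that rewrite (assuming afkorting and code come from Gegevens, ID and MsgReportDate from Reizen, and that Gegevens and Model are reference-data inputs so no DATEDIFF window is needed; any other columns you use would have to be listed the same way):
WITH Check AS
(
    SELECT
        RE.ID,
        RE.MsgReportDate,
        AP.code,
        AP.afkorting
    FROM Reizen RE
    LEFT JOIN Gegevens AP
        ON RE.ID = AP.code
)
SELECT CH.afkorting, CH.MsgReportDate
FROM Check CH
JOIN Model VM
    ON CH.afkorting = VM.Station
WHERE VM.h_station = VM.v_station
AND DATEPART(hour, CH.MsgReportDate) = VM.start_uur
AND (DATEPART(minute, CH.MsgReportDate) BETWEEN VM.start_minuut AND VM.eind_minuut)
AND DATEPART(weekday, CH.MsgReportDate) = VM.weekdag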

Cannot figure out SQL Select statement in Access 2016

I'm having some trouble understanding WHY a select statement isn't working in a query I'm making.
I've got the SELECT and FROM lines functioning. With just those, ALL results from my selected table are displayed - 517 or so.
What I want to do is display results based on a pattern using LIKE. What I have so far:
SELECT *
FROM Tbl_ServiceRequestMatrix
WHERE Tbl_ServiceRequestMatrix.[Application/Form] LIKE 'P%';
This returns 0 results - despite the fact that the column selected DOES have entries that start with 'P'.
I also tried using brackets, to see if that was the issue - it still displays 0 results:
SELECT *
FROM Tbl_ServiceRequestMatrix
WHERE ((Tbl_ServiceRequestMatrix.[Application/Form])='p%');
Can anyone help me understand why my WHERE ... LIKE statement is causing 0 results to be displayed?
The wildcard character in MS Access is (by default) * instead of %:
WHERE Tbl_ServiceRequestMatrix.[Application/Form] LIKE "P*"
The LIKE statement takes different wildcard characters in different SQL dialects.
In MS Access you need * instead of % in a LIKE statement.
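As a side-by-side illustration (table and column names taken from the question):
-- Inside the Access query designer (default ANSI-89 mode), * is the multi-character wildcard:
SELECT *
FROM Tbl_ServiceRequestMatrix
WHERE [Application/Form] LIKE "P*";
-- The % wildcard from the original attempt only applies in ANSI-92 mode,
-- e.g. when the same database is queried through ADO/OLE DB:
SELECT *
FROM Tbl_ServiceRequestMatrix
WHERE [Application/Form] LIKE 'P%';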

SIMPLE SQL Select Where Query

Anyone got any idea why this doesn't work? I'm at a loss.
The following
SELECT * FROM tblCustomerDetails WHERE AccountNo='STO00900'
Returns nothing, however if I run the same query with any other account number it works.
And this account does show up when I run
SELECT TOP 10 * FROM tblCustomerDetails ORDER BY ID desc
Picture explains it better.
Thanks
Try as Notulysses suggested, but I would recommend it a bit differently:
SELECT * FROM tblCustomerDetails WHERE LTRIM(RTRIM(AccountNo)) = 'STO00900'
The LIKE operator will likely match more rows than you need (if the AccountNo column is not unique), so I'd go with trimming the whitespace and then checking for the specific account.
There may be some space at the start or the end of the entry; try trimming both ends of the entry.
Try
SELECT * FROM tblCustomerDetails WHERE AccountNo LIKE '%STO00900%'
As there can be hidden characters.
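If trimming alone doesn't explain it, here is a small diagnostic sketch (T-SQL assumed, since the TOP syntax above suggests SQL Server) for spotting padding or invisible characters:
-- LEN ignores trailing spaces while DATALENGTH counts every byte stored,
-- so a mismatch beyond what the data type explains points at padding or hidden characters.
SELECT ID, AccountNo,
       LEN(AccountNo) AS visible_length,
       DATALENGTH(AccountNo) AS stored_bytes
FROM tblCustomerDetails
WHERE AccountNo LIKE '%STO00900%';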