I have the following table definition:
create table null_test (some_array character varying[]);
And the following SQL file containing data.
copy null_test from stdin;
{A,\N,B}
\.
When unnesting the data (with select unnest(some_array) from null_test), the second value is "N", when I am expecting NULL.
I have tried changing the data to look as follows (to use internal quotes on the array value):
copy null_test from stdin;
{"A",\N,"B"}
\.
The same non-NULL value "N" is still inserted.
Why is this not working and is there a workaround for this?
EDIT
As per the accepted answer, the following worked. However, having two representations of NULL values within a COPY command, depending on whether you're using single values or arrays, is inconsistent.
copy null_test from stdin;
{"A",NULL,"B"}
\.
\N represents NULL as a whole value to COPY, not as part of another value, and \N isn't anything special to PostgreSQL itself. Inside an array, \N is just \N; COPY passes the array literal to the database as-is rather than trying to interpret it using COPY's rules.
You simply need to know how to build an array literal that contains a NULL. From the fine manual:
To set an element of an array constant to NULL, write NULL for the element value. (Any upper- or lower-case variant of NULL will do.) If you want an actual string value "NULL", you must put double quotes around it.
So you could use these:
{A,null,B}
{"A",NULL,"B"}
...
to get NULLs in your arrays.
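Putting that together, a minimal end-to-end sketch against the table from the question:
create table null_test (some_array character varying[]);
copy null_test from stdin;
{"A",NULL,"B"}
\.
select unnest(some_array) from null_test;
-- returns three rows: A, NULL (displayed as blank by default in psql), and B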
I hope I'm able to ask my question as simply as possible. I am very new to working with PowerShell.
Now to my question:
I use Invoke-Sqlcmd to run a query, which puts Data in a variable, let's say $Data.
In this case I query for triggers in an SQL Database.
Then I filter the array to get more specific information:
$Data2 = $Data | Where {$_.table -like 'dbo.sportswear'}
$Data3 = $Data2 | Where {$_.event -match "Delete"}
So in the end I have a variable with these properties (I'm not sure if "indexes" is the right word):
table
trigger_name
activation
event
type
status
definition
Now all I want is to check something in the definition.
So I create $Data4 = $Data3.definition; so far so good.
But now I have one big block of text and I want only the content of 2-3 specific rows.
When I use something like $Data4[1] or $Data4[1..100], I realize that PowerShell treats every character as a line/row.
But when I just write $Data4, it shows me the content nicely formatted with paragraphs, new lines and so on.
Has anyone an idea how I can get specific rows or lines of my variable?
Thank you all :)
It appears $Data4 is a formatted string. Since it is a single string, any indexed element lookups return single characters (of type System.Char). If you want indexes to return longer substrings, you will need to split your string into multiple strings somehow or come up with a more sophisticated search mechanism.
If we assume the rows you are after are actual lines separated by line feed and/or carriage return, you can just split on those newline characters and use indexes to access your lines:
# Array indexing starts at 0 for line 1. So [1] is line 2.
# Outputs lines 2,3,4
($Data4 -split '\r?\n')[1..3]
# Outputs lines 2,7,20
($Data4 -split '\r?\n')[1,6,19]
-split uses regex to match characters and performs a string split on all matches, resulting in an array of substrings. \r matches a carriage return. \n matches a line feed. ? matches zero or one occurrence of the preceding token (here, the carriage return), which is needed in case there are no carriage returns preceding your line feeds.
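If you need lines by content rather than by position, you can also split once and then filter; a small sketch (the 'UPDATE' pattern is only a placeholder for whatever you are looking for in the trigger definition):
# Split the definition into individual lines once and reuse the array
$lines = $Data4 -split '\r?\n'
# Keep only the lines matching a pattern (-match is case-insensitive by default)
$lines | Where-Object { $_ -match 'UPDATE' }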
I was expecting --null_marker to replace the blank strings with NULL, but it did not work. Any suggestions, please?
I tried using --null_marker="null":
$gcloud_dir/bq load $svc_ac --max_bad_records=10 --replace --source_format=CSV --null_marker="null" --field_delimiter=',' table source
The empty strings did not get replaced with NULL.
Google Cloud Support here!
After reading through the documentation, the description for the --null_marker flag states:
Specifies a string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string.
Therefore, setting --null_marker="null" will not replace empty strings with NULL; it will only treat the literal string 'null' as a null value. At this point you should either:
Replace empty strings before uploading the CSV file.
Once you have uploaded the CSV file, make a query that replaces the empty strings (sketched below).
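For the second option, a sketch in BigQuery standard SQL; NULLIF is used here rather than the REPLACE string function because you want a real NULL, not another string (the dataset, table, and column names are placeholders):
-- Turn empty strings in one column into NULLs, keeping every other column as-is
SELECT * REPLACE (NULLIF(some_column, '') AS some_column)
FROM mydataset.mytable;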
I'm trying to assign the value of a column from a query to a variable using the cfset tag. For example, if the value is 'abcd#1244' and I use <cfset a = #trim(queryname.column)#>, it returns only 'abcd', but I need the whole value of the column.
You will need to escape the # symbol. You can get clever and do it all in one swoop (# acts as an escape character when placed next to another #).
For example,
The item## is #variable#.
prints "The item# is 254." when variable is 254.
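A minimal runnable sketch of the same idea (the variable name is just an example):
<cfset itemNumber = 254>
<!--- ## outputs a literal #, while #itemNumber# is evaluated --->
<cfoutput>The item## is #itemNumber#.</cfoutput>
<!--- prints: The item# is 254. --->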
There are plenty of text and string functions at your disposal.
I'd recommend first trying to escape the value as soon as it is drawn from your database.
http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec1a60c-7ffc.html
I have a .txt file with a plain list of words (one word on each line) that I want to copy into a table. The table was created in Rails with one column: t.string "word".
The same file loaded into another database/table worked fine, but in this case I get:
pg_fix_development=# COPY dictionaries FROM '/Users/user/Documents/en.txt' USING DELIMITERS ' ' WITH NULL as '\null';
ERROR: invalid input syntax for integer: "aa"
CONTEXT: COPY dictionaries, line 1, column id: "aa"
I did some googling on this but can't figure out how to fix it. I'm not well versed in SQL. Thanks for your help!
If you created that table in Rails then you almost certainly have two columns, not one. Rails will add an id serial column behind your back unless you tell it not to; this also explains your "input syntax for integer" error: COPY is trying to use the 'aa' string from your text file as a value for the id column.
You can tell COPY which column you're importing so that the default id values will be used:
copy dictionaries(word) from ....
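A minimal sketch reusing the file path from the question; each line of the file becomes a value for word, and id gets its default serial value:
COPY dictionaries (word) FROM '/Users/user/Documents/en.txt';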
I am trying to stream data through an AWK program to a Postgres COPY command. This usually works great. However, recently my data has contained long text strings with '\.' values.
The Postgres documentation, http://www.postgresql.org/docs/9.2/static/sql-copy.html, mentions that this combination of characters represents the end-of-data marker, and I am getting the associated errors when trying to insert with COPY.
My question is, is there a way to turn this off? Perhaps change the end-of-data marker to a different combination of characters? Or do I have to alter/remove these strings before trying to insert using the COPY command?
You can try to filter your data through sed 's:\\:\\\\:g' - this changes every \ in your data to \\, which is the correct escape sequence for a single backslash in COPY data.
But the backslash is probably not the only problem: embedded newlines should be encoded as \n, carriage returns as \r, and tabs as \t (tab is the default field delimiter in COPY).
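Since you are already in AWK, you could do the escaping there instead; a sketch assuming comma-separated input and tab-delimited COPY output (input.csv and my_table are placeholders for your actual stream and table):
awk 'BEGIN { FS = ","; OFS = "\t" }
{
  for (i = 1; i <= NF; i++) {
    gsub(/\\/, "&&", $i)      # double every backslash first ("&" is the matched text),
                              # which also defuses "\." into the literal "\\."
    gsub(/\t/, "\\\\t", $i)   # literal tab inside a field -> \t
    gsub(/\r/, "\\\\r", $i)   # carriage return -> \r
  }
  print                       # fields are re-joined with OFS (a tab)
}' input.csv | psql -c "COPY my_table FROM STDIN"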