I'm trying to import a txt file, but NetLogo keeps throwing the following error:
Illegal number format (line number 2, character 1)
error while observer running FILE-READ
called by procedure HOLIDAY-FILE-IMPORT
called by procedure SETUP
called by procedure GO
called by Button 'go'
The file consists of strings. However, it works fine when I convert the data into integers. Can anyone advise how to read strings from a txt file in NetLogo?
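One likely explanation, for context: file-read parses each token in the file as a NetLogo constant, so a number parses fine but unquoted text raises "Illegal number format", while file-read-line returns each raw line as a string. A minimal sketch of the reading loop, assuming a hypothetical file name holidays.txt:

to holiday-file-import
  file-open "holidays.txt"        ; hypothetical file name
  while [ not file-at-end? ] [
    let holiday file-read-line    ; reads one raw line as a string
    show holiday                  ; placeholder for the real processing
  ]
  file-close
end

Alternatively, wrapping each string in the file in double quotes lets file-read parse it as a string constant.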
On the command line,
convert(varchar,getdate(),120)
gives the error below:
Unknown argument '04:59:42.xml'
I am saving data to an XML file. When I use the command below, the proc works perfectly:
convert(varchar,getdate(),112)
I need the file name to include the time.
You are trying to create a file with a colon in the name, which is not allowed; there is no way around that restriction. However, you can replace the colons with other characters when building the file name, e.g.
replace(convert(varchar,getdate(),120),':','')
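For illustration, a hedged sketch of building such a file name in T-SQL (the variable name and the export_/.xml pieces are assumptions, not from the thread):

declare @filename varchar(64);
-- style 120 formats getdate() as 'yyyy-mm-dd hh:mi:ss'; stripping the
-- colons leaves something like '2013-05-07 045942', which Windows accepts
set @filename = 'export_' + replace(convert(varchar, getdate(), 120), ':', '') + '.xml';
select @filename;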
I have a CSV file containing numbers like "1.456e+07", and I am using the function copy_expert to export the file to the database, but I am getting this error:
psycopg2.DataError: invalid input syntax for integer: "1.5637e+07"
I notice that I can insert 100 as an integer, but when I do "1.5637e+07" with quotes, it doesn't work.
I am using pandas DataFrame's to_csv to generate the CSV files. I'm not sure how to get rid of the quotes only for numbers like "1.5637e+07" (I also have a string column), or whether there is another solution.
I found the solution.
Normally, pandas doesn't put quotes around numbers; however, I had set the float_format parameter, which causes this. I set
quoting=csv.QUOTE_MINIMAL
in the function call and the quotes went away.
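A minimal sketch of that fix, with made-up column and file names:

import csv
import pandas as pd

df = pd.DataFrame({"id": [100], "value": [1.5637e+07], "label": ["foo"]})

# With float_format set, the formatted floats can come out quoted;
# forcing quoting back to csv.QUOTE_MINIMAL leaves them bare, so
# PostgreSQL can parse the column as a number.
df.to_csv("out.csv", index=False,
          float_format="%.0f",
          quoting=csv.QUOTE_MINIMAL)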
I am loading a CSV file using COPY:
COPY cts FROM 'C:\...\cts.csv' using DELIMITERS',';
However, this error comes out:
ERROR: invalid input syntax for type double precision: ""
CONTEXT: COPY testdata, line 7, column latitude: ""
How can I fix it?
Looks like your CSV isn't quite formatted correctly. "" isn't a number, and numbers don't need to be quoted in CSV.
I find it's usually easier in PostgreSQL to create a staging import table with all text columns, import the CSV into that first, and then run a cleanup query to move the data into the real table.
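A hedged sketch of that staging approach; the column names here are guesses based on the error message, not taken from the question:

CREATE TABLE cts_staging (
    id        text,
    latitude  text,
    longitude text
);

COPY cts_staging FROM 'C:\...\cts.csv' WITH (FORMAT csv);

-- NULLIF turns the empty strings into NULLs before the cast
INSERT INTO cts (id, latitude, longitude)
SELECT id,
       NULLIF(latitude, '')::double precision,
       NULLIF(longitude, '')::double precision
FROM cts_staging;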
I've seen variations of this question all over the place yet can't seem to get this to work. I need to be able to bulk insert data from a flat file where some of the text fields will contain carriage returns.
I have set the flat file up to be delimited by the caret ^ symbol. The Row delimiter is a vertical pipe and the column delimiter is a tab. Why does the import still fail when my text field has a carriage return in it?
I was under the impression that if the row/column delimiter was NOT a CR/LF then a delimited text field could contain a CR/LF (or single CR or single LF). How can I get the import to work? Thanks.
PS - the way I've been testing is to just take a table, export it to a flat file with delimiters set as above, insert a newline in a text field, then try to import the data again using the SQL Server Import and Export Wizard in both directions. Here is the error message I see:
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Column 23" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
Error 0xc020902a: Data Flow Task 1: The "Source - IVREJECTHD_txt.Outputs[Flat File Source Output].Columns[Column 23]" failed because truncation occurred, and the truncation row disposition on "Source - IVREJECTHD_txt.Outputs[Flat File Source Output].Columns[Column 23]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\Users\bbauer\Desktop\IVREJECTHD.txt" on data row 2.
Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - IVREJECTHD_txt returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Bulk Insert can import embedded CR/LF pairs in text fields. Something else is going on with the raw data in your source at the specified column (23) on the second row. There are a number of causes for the "text was truncated" error. Some of them are touched on in this thread. One common cause which particularly bites those using the Wizard is not specifying the target column width. It doesn't matter if your target table is set up correctly; if the column width specified in the import isn't big enough, you'll get this error.
You might consider performing a bulk insert using T-SQL and a format file; if you need to repeatedly test your import process and refine it, it's a lot easier to make modifications and re-run.
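For illustration, a hedged sketch of that T-SQL route, reusing the file path from the error output (the table name and format-file path are assumptions):

BULK INSERT dbo.IVREJECTHD
FROM 'C:\Users\bbauer\Desktop\IVREJECTHD.txt'
WITH (
    -- the format file describes the tab column delimiter, the pipe row
    -- delimiter, and, crucially, each column's width
    FORMATFILE = 'C:\Users\bbauer\Desktop\IVREJECTHD.fmt'
);

Re-running a statement like this after each tweak is much faster than stepping back through the wizard.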
Also, as noted in this answer, the embedded CR/LFs will be present even if the tools (e.g. Management Studio) aren't displaying them to you.
I currently want to import my data from a flat file into the database.
The flat file is a txt file; in that txt file, I save a list of URLs. Example:
http://www.mimi.com/Hotels-g303188-Rurrenabaque-Hotels.html
I am using the SQL Server Import and Export Wizard to do it, but at execution time it fails with this error:
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Column 0" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
Can anyone help?
You get this error because the text is too long for the column you've chosen to put it in.
"Text was truncated ..."
You might want to check the size of the database column against your input data. Is the longest URL shorter than the column width?
"... one or more characters had no match in the target code page."
Check whether your input file has any special characters. An easy way to do this is to save your file as ANSI (Notepad > Save As > Encoding = ANSI). Note: you'd still have to select the right code page so that the import interprets your input text correctly.
Here's a very nice link that has some background on what code pages are - http://www.joelonsoftware.com/articles/Unicode.html
Note that you can also change the column data type (to text stream, for example) in the Data Source > Advanced section of the wizard.
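On the column-width point, a hedged sketch of pre-creating the destination table so the wizard has a wide enough target (the table and column names are made up):

CREATE TABLE dbo.urls (
    url nvarchar(2048)  -- comfortably wider than typical URLs
);

If you map onto an existing table, also raise the source column's OutputColumnWidth in the same Advanced section; the 50-character default is what usually triggers the truncation error on long URLs.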