What's "\." in postgresql dump file? - sql

I have a PostgreSQL dump file which I'm trying to restore. I get this error, regarding invalid data I guess:
ERROR: invalid input syntax for integer: "."
and when I checked the file, there is data like this:
469215 2009-10-10 18:16:47.041377 0 1
471217 2009-10-10 18:25:12.536352 0 1
473224 2009-10-17 09:46:43.041604 0 1
473228 2009-10-22 10:58:40.194244 0 1
.
So I was wondering, what does this "." do?
I checked some other working dumps and they ended their data lines with "\." which I guess is the correct syntax!
Please tell me what the correct syntax is and what it does.
Thank you

It seems that it marks the end of the data for a COPY statement.
From the documentation:
End of data can be represented by a single line containing just backslash-period (\.).
An end-of-data marker is not necessary when reading from a file, since the end of file
serves perfectly well; it is needed only when copying data to or from client
applications using pre-3.0 client protocol.
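For illustration, here is roughly what a data section looks like in a plain-text pg_dump file (the table and column names below are made up for this example, and the fields are tab-separated in a real file); the backslash-period line terminates the rows fed to COPY:
COPY public.events (id, created_at, status, flag) FROM stdin;
469215	2009-10-10 18:16:47.041377	0	1
471217	2009-10-10 18:25:12.536352	0	1
\.
If the backslash gets stripped somewhere along the way, leaving a bare "." as in your file, COPY tries to parse that line as a data row, which is exactly what produces the invalid input syntax for integer: "." error.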

Related

Troubleshooting BCP and Format File Errors

First off, sorry for the long post. I wanted to be thorough with my examples/data, and the bulk of this post is just that.
I inherited a Bulk Import Process that uses a format file (.fmt) at my new job. The process was created by the guy who worked here before me, and it is now my job to learn it (and fix it). I have limited knowledge of this stuff, but I have done some research. After a few weeks, I haven't really gotten anywhere. Here is what I am working with...
--BCP Command to import data from C:\Desktop\20180629_2377167_PR_NP.txt to table LA_Temp.dbo.ProvReg
bcp LA_Temp.dbo.ProvReg IN C:\Desktop\20180629_2377167_PR_NP.txt -f C:\Desktop\PROVREG.FMT -T -S SERVERNAME -k -m 1000000
--Table Structure which format file is created from:
SELECT [NPI]
,[D1]
,[EntityType]
,[D2]
,[ReplaceNPI]
,[D3]
,[ProvName]
,[D4]
,[MailAddr1]
,[D5]
,[MailAddr2]
,[D6]
,[MailCity]
,[D7]
,[MailState]
,[D8]
,[MailZip]
,[D9]
,[MailCountry]
,[D10]
,[MailPhone]
,[D11]
,[MailFax]
,[D12]
,[LocAddr1]
,[D13]
,[LocAddr2]
,[D14]
,[LocCity]
,[D15]
,[LocState]
,[D16]
,[LocZip]
,[D17]
,[LocCountry]
,[D18]
,[LocPhone]
,[D19]
,[LocFax]
,[D20]
,[Taxonomy1]
,[D21]
,[Taxonomy2]
,[D22]
,[Taxonomy3]
,[D23]
,[OtherProvID]
,[D24]
,[OtherProvIDType]
,[D25]
,[ProvEnumDate]
,[D26]
,[LastUpdate]
,[D27]
,[DeactivateRC]
,[D28]
,[DeactivateDate]
,[D29]
,[ReactivateDate]
,[D30]
,[Gender]
,[D31]
,[License]
,[D32]
,[LicenseState]
,[D33]
,[AuthorizedContact]
,[D34]
,[ContactTitle]
,[D35]
,[ContactPhone]
,[D36]
,[PanelOpen]
,[D37]
,[Language1]
,[D38]
,[Language2]
,[D39]
,[Language3]
,[D40]
,[Language4]
,[D41]
,[Language5]
,[D42]
,[AgeRestrict]
,[D43]
,[PCPMax]
,[D44]
,[PCPActual]
,[D45]
,[PCPAll]
,[D46]
,[EnrollInd]
,[D47]
,[EnrollDate]
,[D48]
,[FamilyOnly]
,[D49]
,[SubSpec1]
,[D50]
,[SubSpec2]
,[D51]
,[SubSpec3]
,[D52]
,[ContractName]
,[D53]
,[ContractBegin]
,[D54]
,[ContractEnd]
,[D55]
,[Parish1]
,[D56]
,[Parish2]
,[D57]
,[Parish3]
,[D58]
,[Parish4]
,[D59]
,[Parish5]
,[D60]
,[Parish6]
,[D61]
,[Parish7]
,[D62]
,[Parish8]
,[D63]
,[Parish9]
,[D64]
,[Parish10]
,[D65]
,[Parish11]
,[D66]
,[Parish12]
,[D67]
,[Parish13]
,[D68]
,[Parish14]
,[D69]
,[Parish15]
,[D70]
,[PCPInd]
,[D71]
,[DisplayOnline]
,[D72]
,[ExpAgeRestrict]
,[D73]
,[Suffix]
,[D74]
,[Title]
,[D75]
,[PrescriberInd]
,[Spaces]
,[End]
FROM [LA_Temp].[dbo].[ProvReg]
--Example Text File Data (this is one line)
9999999999 ^0^ ^ ^3800 HMA BLVD STE 305 ^ ^METAIRIE ^LA^70006 ^ ^5048729679^ ^3800 HMA BLVD ^ ^METAIRIE ^LA^70006 ^ ^9999999999^ ^207Q00000X^ ^ ^0000000^2001^ ^00000000^ ^00000000^00000000^F^ ^LA^ ^ ^ ^N^1^0^0^0^0^2^00000^00000^00000^ ^ ^ ^ ^ ^ ^000000000000000000000000000000^00000000^00000000^26^00^00^00^00^00^00^00^00^00^00^00^00^00^00^0^0^Accept patients of age 000-000^ ^MD ^ ^
--Format file
11.0
153
1 SQLCHAR 0 40 "\t" 1 NPI SQL_Latin1_General_Pref_CP1_CI_AS
2 SQLCHAR 0 2 "\t" 2 D1 SQL_Latin1_General_Pref_CP1_CI_AS
3 SQLCHAR 0 2 "\t" 3 EntityType
...all the way to...
153 SQLCHAR 0 2 "\r\n" 153 End
I have changed directories, the server name, and some of the text file data to maintain security; however, it is very similar.
Here is the problem I am encountering:
With the "\t" used in the format file I just created from the SQL table, I get the error: [Microsoft][SQL Server Native Client 11.0]Unexpected EOF encountered in BCP data-file.
If I change this to just "" or "^" (as I 'think' it should be, since the text file uses a caret delimiter), the rows begin to copy but fail with the error
[Microsoft][SQL Server Native Client 11.0]String data, right truncation SQLState = 22001, NativeError = 0. BCP copy in failed.
If anyone can please point me in the right direction here for troubleshooting this issue, or if you see anything out of place, please let me know. As I mentioned, I have been at this for some time, and can use any suggestions I can get. Unfortunately, there is no one at my company I can ask about this.
Try adding the -e option to your bcp command. This will give you an error file in which BCP will write some sample lines from the file that it had problems with. It is very helpful for troubleshooting the type of error you are getting now (you are correct to change your delimiter in the format file).
The error you are getting now, "string data" and "truncation", is just as it states. However, this truncation can occur for a number of reasons. The destination table's columns may not be large enough to hold the data contained between the defined field delimiters. Delimiters may appear inside your data, tricking the bcp utility into thinking a column has ended before it was intended to end in the file (this is less likely with the delimiter you are using... but you never know... I always prefer fixed width if possible). And, of course, the source may very well have written you a file that contradicts whatever agreed-upon spec led you to define your destination as you have.
The error is accurate; the trick is finding where. Use the -e option to allow BCP to capture problematic lines:
BCP table_dest IN "C:\FILE.TXT" -S SVR -T -f"C:\FORMAT_FILE.txt" -e"C:\ERROR_FILE.txt"
The "error_file.txt" will include line numbers and will include a sample of lines that it couldn't handle. Just copy and past to find in the file youare trying to load to see for yourself.
Strongly suggest using a more advanced text editing tool. Do not use windows notepad or wordpad. Use something like notepad++ or ultraedit to inspect ascii text files.
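For reference, applying that delimiter change to the format file excerpt above would look something like this (only the terminator strings change; the widths, column numbers, and collations are copied from the excerpt, so treat this as an illustration rather than a verified file):
11.0
153
1 SQLCHAR 0 40 "^" 1 NPI SQL_Latin1_General_Pref_CP1_CI_AS
2 SQLCHAR 0 2 "^" 2 D1 SQL_Latin1_General_Pref_CP1_CI_AS
3 SQLCHAR 0 2 "^" 3 EntityType
...all the way to...
153 SQLCHAR 0 2 "\r\n" 153 End
Then rerun the bcp command with -e added, and inspect the error file for the rows that still fail.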

Getting Internal Server Error on pgSQL

I'm trying to import data from a Windows CSV (comma-delimited) file into the pgSQL faxtest1 table, but I keep getting an error saying "The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
The following is my code:
COPY faxtest1
FROM 'C:\Users\David\Desktop\test3.csv'
WITH DELIMITER AS ',' CSV ;
The CSV file is like:
Status,Fax ID
Fax to Email,2104
Fax to Email,2108
It is a bug in pgAdmin 4; hopefully they will fix it in the future.
In version 14, in the Import/Export Data function, there are two tabs, "Options" and "Columns." Try manually selecting the columns one at a time, separated by commas, and see if this bypasses the error.
It worked for me.
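As a workaround that avoids the pgAdmin dialog entirely, you can also run the import from psql using the client-side \copy meta-command. A minimal sketch, assuming the first row of test3.csv is a header as in the example above:
\copy faxtest1 FROM 'C:\Users\David\Desktop\test3.csv' WITH (FORMAT csv, HEADER)
Unlike server-side COPY, \copy reads the file with the client's permissions, so the path only has to be visible to the machine where psql runs.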

Fortran runtime error: End of file when reading input data

I'm currently running a program and it always ends the same way. I am trying to read an input file and it returns the error:
Fortran runtime error: End of file
In another post they said to put in the iostat specifier, so now my code looks like this:
INTEGER :: m
INTEGER :: st
INTEGER :: i
REAL, ALLOCATABLE :: winkel(:), energie(:)   ! declarations the snippet was missing
Open(Unit = 13, action='read', file='Data_Inp.dat', status='old')
read (13, *, iostat = st) m   ! st /= 0 signals a read problem; -1 is end of file
write (*,*) st
write (*,*) m
ALLOCATE(winkel(m), energie(m))
Do i = 1, m
   read (13,*) winkel(i), energie(i)   ! no comma after read(...) in standard Fortran
End Do
And the input file looks like this:
12
-17.83 -0.019386527878
-15.83 -0.020125057233
-12.83 -0.020653853148
-11.83 -0.020840036028
-9.83 -0.020974157405
-8.83 -0.021056401707
-6.83 -0.021065517811
-5.83 -0.020992571816
-4.83 -0.020867828448
-1.83 -0.02069158012
Now the terminal prints a -1 for iostat and a constantly changing number for m.
If the first read command is causing an error, check for extraneous characters before or after "12" in your input file, especially if you created it on one platform (Windows?) and are using it on another (Linux? Mac?).
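One way to narrow this down (a sketch building on the question's code, not something from the original answer) is to check iostat on every read inside the loop, so the failing data line reports itself:
Do i = 1, m
   read (13, *, iostat = st) winkel(i), energie(i)
   if (st /= 0) then
      ! iostat = -1 means end of file was hit before m lines were read
      write (*,*) 'read failed at data line', i, 'with iostat =', st
      exit
   end if
End Do
If the file contains fewer data lines than the count on its first line claims, this will report end of file at the first missing line.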

Uploading job fails on the same file that was uploaded successfully before

I'm running a regular upload job to load CSV into BigQuery. The job runs every hour. According to the recent failure log, it says:
Error: [REASON] invalid [MESSAGE] Invalid argument: service.geotab.com [LOCATION] File: 0 / Offset:268436098 / Line:218637 / Field:2
Error: [REASON] invalid [MESSAGE] Too many errors encountered. Limit is: 0. [LOCATION]
I went to line 218638 (the original CSV has a header line, so I assume 218638 should be the actual failed line; let me know if I'm wrong) but it seems all right. I checked the corresponding table in BigQuery, and it has that line too, which means I actually uploaded this line successfully before.
Then why has it caused failures recently?
project id: red-road-574
Job ID: Job_Upload-7EDCB180-2A2E-492B-9143-BEFFB36E5BB5
This indicates that there was a problem with the data in your file, where it didn't match the schema.
The error message says it occurred at File: 0 / Offset:268436098 / Line:218637 / Field:2. This means the first file (it looks like you just had one), and then the chunk of the file starting at 268436098 bytes from the beginning of the file, then the 218637th line from that file offset.
The reason for the offset portion is that BigQuery processes large files in parallel using multiple workers. Each file worker starts at an offset from the beginning of the file. The offset that we include is the offset that the worker started from.
From the rest of the error message, it looks like the string service.geotab.com showed up in the second field, but the second field was a number, and service.geotab.com isn't a valid number. Perhaps there was a stray newline?
You can see what the lines looked like around the error by doing:
cat <yourfile> | tail -c +268436098 | tail -n +218636 | head -3
This will print out three lines... the one before the error (since I used -n +218636 instead of +218637), the one that had the error, and the next line as well.
Note that if this is just one line in the file that has a problem, you may be able to work around the issue by specifying maxBadRecords.
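With the bq command-line tool that corresponds to the --max_bad_records flag; a sketch, with placeholder dataset, table, and file names:
bq load --source_format=CSV --skip_leading_rows=1 --max_bad_records=1 mydataset.mytable ./myfile.csv
Be aware that this skips the bad rows rather than fixing them, so it is only a workaround if losing those records is acceptable.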

AMPL:How to print variable output using NEOS Server, when you can't include data and model command in the command file?

I'm doing some optimization using a model whose number of constraints and variables exceeds the cap for the student version of, say, AMPL, so I've found a webpage [http://www.neos-server.org/neos/solvers/milp:Gurobi/AMPL.html] which can solve my type of model.
I've found, however, that when using a solver where you can provide a command file (which I assume is the same as a .run file), the NEOS Server documentation tells you to see the documentation for the input file. I'm using AMPL input, which according to [http://www.neos-guide.org/content/FAQ#ampl_variables] should be able to print the decision variables using a command file that looks like this:
solve;
display _varname, _var;
The problem is that NEOS claims that you cannot add the:
data datafile;
model modelfile;
commands to the .run file, which results in the interpreter being unable to find the variables.
Does anyone know of a way to work around this?
Thanks in advance!
EDIT: If anyone else has this problem (which, based on my Internet searching, I believe many people do): try removing any reset; command from the .run file!
You don't need to specify model or data commands in the script file submitted to NEOS. It loads the model and data files automatically, solves the problem, and then executes the script (command file) you provide. For example, submitting the diet1.mod model, the diet1.dat data, and this trivial command file
display _varname, _var;
produces output which includes:
: _varname _var :=
1 "Buy['Quarter Pounder w/ Cheese']" 0
2 "Buy['McLean Deluxe w/ Cheese']" 0
3 "Buy['Big Mac']" 0
4 "Buy['Filet-O-Fish']" 0
5 "Buy['McGrilled Chicken']" 0
6 "Buy['Fries, small']" 0
7 "Buy['Sausage McMuffin']" 0
8 "Buy['1% Lowfat Milk']" 0
9 "Buy['Orange Juice']" 0
;
As you can see, this is the output from the display command.