I'm trying to export data from a BigQuery table to a compressed file on the command line using bq extract. It generated multiple empty files (containing only the header), along with one file with the correct data. Can someone please tell me why the empty files are generated?
Thanks
This is a BigQuery issue already reported. I suggest starring the issue and asking for an update on it.
I faced the same empty-files issue when using EXPORT DATA.
After doing a bit of R&D I found a solution: put LIMIT xxx in your SELECT SQL and it will do the trick.
You can find the row count and use that as the LIMIT value.
SELECT ....
FROM ...
WHERE ...
LIMIT xxx
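For example, a minimal sketch of the full statement with the LIMIT workaround (the bucket, project, dataset, table, and LIMIT value below are placeholders, not from the question):

EXPORT DATA OPTIONS (
  uri = 'gs://my-bucket/exports/results-*.csv',  -- hypothetical bucket and path
  format = 'CSV',
  header = true,
  compression = 'GZIP',
  overwrite = true
) AS
SELECT *
FROM `my-project.my_dataset.my_table`  -- hypothetical table
LIMIT 100000;  -- set this to the actual row count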
Backstory
I have recently been given the task of maintaining an SSIS process that a former colleague oversaw. I have only minimal experience with BIDS/SSIS/"whatever MS marketing wants to call it now" and have an issue which I can't seem to find a solution to.
Issue
I have a Data Flow that includes reading image metadata from a table as well as reading the image files themselves.
For the image read, an Import Column transformation (hereafter called ICt) is used, and it hangs indefinitely.
The component is handed 2500 rows of image data (name, path, date created, etc.) and, using the 'path' column, the ICt tries to read each file. I've set the correct input column under 'Input and Output Properties' as well as setting the output column. The input column has the output column's ID in its FileDataColumnID property.
When running the process, the transformation just stays yellow and nothing happens. I can access the images in Explorer, and I know they exist (at least some of them).
Tools used
Windows 7
Visual Studio 2008 sp2
SQL-Server 2012
All hints, tips or possible solutions would be appreciated.
I have seen questions on Stack Overflow similar to the one I am asking now; however, I couldn't manage to solve the problem in my situation.
Here is the thing:
I have an Excel spreadsheet (.xlsx) which I converted to a comma-separated values file (.csv), as suggested in some answers.
My Excel file looks something like this:
name   | surname | voteNo | VoteA | VoteB | VoteC
-------+---------+--------+-------+-------+------
john   | smith   | 1001   |    30 |   154 |    25
anothe | person  | 1002   |   430 |    34 |   234
other  | one     | 1003   |    35 |   154 |    24
john   | smith   | 1004   |   123 |   234 |    53
john   | smith   | 1005   |    23 |   233 |   234
In PostgreSQL I created a table named allfields with 6 columns:
the first two as character[] and the last four as integer, with the same names as shown in the Excel table (name, surname, voteno, votea, voteb, votec).
Now I'm doing this:
copy allfields from 'C:\Filepath\filename.csv';
But I'm getting this error:
could not open file "C:\Filepath\filename.csv" for reading: Permission denied
SQL state: 42501
My questions are:
Should I create those columns in the allfields table in PostgreSQL?
Do I have to modify anything else in the Excel file?
And why do I get this 'permission denied' error?
Based on your file, neither of the first two columns needs to be an array type (character[]) - unlike C strings, the "character" type in Postgres is already a string. You might want to make things easier and use varchar as the type of those two columns instead.
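For example, a minimal sketch of the table definition with plain string columns (table and column names taken from the question):

CREATE TABLE allfields (
    name    varchar,
    surname varchar,
    voteno  integer,
    votea   integer,
    voteb   integer,
    votec   integer
);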
I don't think you do.
Check that you don't still have the file open and locked in Excel - if you did a "Save As" to convert from .xlsx to .csv from within Excel, then you'll likely need to close the file in Excel first.
SQL state 42501 in PostgreSQL means you don't have permission to perform that operation in the intended schema. The error code list shows that.
Check that you're pointing to the correct schema and your user has enough privileges.
The documentation also states that you need SELECT privilege on the origin table and INSERT privilege on the destination table.
You must have select privilege on the table whose values are read by
COPY TO, and insert privilege on the table into which values are
inserted by COPY FROM. It is sufficient to have column privileges on
the column(s) listed in the command.
Yes, I think you can. The COPY command has an optional HEADER clause. Check
http://www.postgresql.org/docs/9.2/static/sql-copy.html
I don't think so. With my #1 and #3, it should work.
You need superuser permission for that.
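To make #1 concrete, here is a minimal sketch of the COPY call with the CSV and HEADER options (same path as in the question; it assumes the file's first line is a header row):

COPY allfields FROM 'C:\Filepath\filename.csv' WITH (FORMAT csv, HEADER true);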
1) Should I create those columns in the allfields table in PostgreSQL?
Use text for the character fields. Not an array in any case, as @yieldsfalsehood correctly pointed out.
2) Do I have to modify anything else in the Excel file?
No.
3) And why I get this 'permission denied' error?
The file needs to be accessible to your system user postgres (or whatever user you are running the Postgres server as). Per the documentation:
COPY with a file name instructs the PostgreSQL server to directly read
from or write to a file. The file must be accessible to the server and
the name must be specified from the viewpoint of the server.
The privileges of the database user are not the cause of the problem. However (quoting the same page):
COPY naming a file or command is only allowed to database superusers,
since it allows reading or writing any file that the server has privileges to access.
Regarding the permission problem, if you are using psql to issue the COPY command, try using \copy instead.
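For example, a minimal sketch of the client-side variant (same path as in the question; \copy reads the file with the psql client's permissions rather than the server's, so no superuser rights are needed):

\copy allfields FROM 'C:\Filepath\filename.csv' CSV HEADER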
OK, the problem was that I needed to change the path of the Excel file. I moved it to a public location where all users can access it.
If you face the same problem, move your Excel file to e.g. the C:\Users\Public folder (a public folder without any restrictions); otherwise you have to deal with Windows permission issues.
For those who, for some reason, do not wish to move the files they want to read to a different (public) location, here is a straightforward solution:
Right click the folder holding the file and select properties.
Select the Security tab under properties.
Select Edit
Select Add
In the field "Enter the object names to select", type in Everyone
Click OK in all the dialog boxes (or Apply if it is enabled)
Try reading the file again.
I have a comma-delimited text file containing discrepancies across two different databases, and need to update one of the databases with information from the aforementioned text file. The text file is in the following format:
ID valueFromDb1 valueFromDb2
1 1234 4321
2 2345 5432
... ... ...
I need to update a table by checking the ID value and, where valueFromDb1 exists, replacing it with valueFromDb2. There are around 11,000 rows that need to be updated. Is there a way I can access the information in this text file directly through an SQL query? My other thought was to write a Java program to do this for me, but I'm not convinced that is the easiest solution.
The article below demonstrates one way to read a text file in MS SQL Server by using xp_cmdshell. In order for it to work the file has to be on one of the drives of the server. Once you have the file loaded into a table variable (which is what the code in the article will do) you should be able to do the joins and updates pretty easily. Let us know if you need any other help.
http://www.kodyaz.com/articles/read-text-file-using-xp_cmdshell.aspx
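Once the file is staged in a table, the update itself is a join. Here is a minimal sketch - the target table and its value column are assumptions, since the question doesn't name them, and BULK INSERT is shown as an alternative way to stage the file:

-- Stage the file (path is hypothetical; FIRSTROW = 2 skips the header line)
CREATE TABLE #discrepancies (id int, valueFromDb1 varchar(50), valueFromDb2 varchar(50));

BULK INSERT #discrepancies
FROM 'C:\data\discrepancies.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Replace valueFromDb1 with valueFromDb2 wherever it currently appears
UPDATE t
SET    t.value = d.valueFromDb2
FROM   dbo.targetTable AS t            -- hypothetical table and column names
JOIN   #discrepancies AS d ON d.id = t.id
WHERE  t.value = d.valueFromDb1;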
I've got a .txt file which includes 350,000 lines, and I have to download it and insert it into my SQL Server database. I have written the part that connects to FTP, gets the related file, and downloads it. What I want now is to insert it into my table.
Here's a line:
9996281000L0000000000000000
As you can see, I also need to separate it into specific parts, like
999 628 1000 L 0000000000000000
I need an efficient solution that splits the lines and inserts the data into the related columns.
Does anyone have any ideas how I can achieve this?
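For concreteness, here is how the split positions map out in T-SQL - a sketch only, assuming the raw lines were staged in a single-column table called staging(line):

SELECT SUBSTRING(line, 1, 3)   AS part1,  -- 999
       SUBSTRING(line, 4, 3)   AS part2,  -- 628
       SUBSTRING(line, 7, 4)   AS part3,  -- 1000
       SUBSTRING(line, 11, 1)  AS part4,  -- L
       SUBSTRING(line, 12, 16) AS part5   -- 0000000000000000
FROM   staging;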
Look into the BCP utility and its format files. It's a detailed and somewhat complex process, but it will do the job quickly and efficiently once set up.
You can get similar functionality (with a much better GUI) with SQL Server Integration Services (SSIS). While completely different, it does much the same thing as bcp.
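For the fixed-width layout above, a non-XML bcp format file could look like this sketch (the column names and the version number on the first line are assumptions - adjust them to your table and your SQL Server version; terminators are empty strings because the fields are fixed width, with the last field ending at the line break):

10.0
5
1  SQLCHAR  0  3   ""      1  Part1  ""
2  SQLCHAR  0  3   ""      2  Part2  ""
3  SQLCHAR  0  4   ""      3  Part3  ""
4  SQLCHAR  0  1   ""      4  Part4  ""
5  SQLCHAR  0  16  "\r\n"  5  Part5  ""

You would then invoke it along the lines of bcp MyDb.dbo.MyTable in datafile.txt -f fixedwidth.fmt -S myserver -T (database, table, server, and file names are placeholders).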