Running a SELECT command on a Postgres relational table containing terabytes of data - sql

I have a 3 TB relational table in Postgres and I want to dump its contents to a CSV file. To do so I am following this tutorial: http://www.mkyong.com/database/how-to-export-table-data-to-file-csv-postgresql/
My problem: after I specify the file to export to and the SELECT statement, Postgres just shows "Killed". Is it because the table is 3 TB? If yes, how should I export my data from Postgres to another file (txt, csv, etc.)? If not, how can I figure out the possible cause of the SELECT command getting killed?

Killed suggests you're running on a system where the out-of-memory killer (OOM killer) is enabled because of the memory overcommit settings. Running that way isn't recommended by the manual.
If you disable overcommit you'll get a neater 'out of memory' error reported to the client instead of a SIGKILL and a server restart.
As for the COPY: are you running COPY (SELECT ...) TO ..., or just COPY tablename TO ...? Try a direct copy of the table without a query and see if that helps.
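For reference, a direct table copy might look like the sketch below; the table name bigtable and the paths are placeholders. Note that a server-side COPY writes the file on the database server and typically needs superuser rights, while psql's \copy writes to a file on the client machine:

COPY bigtable TO '/tmp/bigtable.csv' WITH CSV HEADER;
-- or, from psql, writing to a file on the client machine:
\copy bigtable to 'bigtable.csv' with csv header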
When diagnosing faults you should be looking at the PostgreSQL error logs (which would tell you more about this problem) and system logs like the kernel logs or dmesg output.
When asking questions about PostgreSQL on Stack Overflow, always include the exact server version from select version(), the exact command text/code you ran, the exact unedited text of any error messages, etc.
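If it helps, the version and the log location mentioned above can be pulled straight from the server; a minimal sketch to run from psql or any SQL client:

SELECT version();        -- exact server version to include in the question
SHOW data_directory;     -- where the cluster lives on disk
SHOW logging_collector;  -- whether the server is capturing its log output to files
SHOW log_directory;      -- where those log files are written, if the collector is on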

Related

SQL server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator

I have two tables on which no INSERT, SELECT, DELETE or DROP TABLE command will execute; every attempt shows the error below:
The error I'm receiving
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO
A quick search on Google found a similar thread here; I have extracted the possible solution below for easy reference.
Error 829 means there's an I/O subsystem problem, something called a 'hard I/O error': SQL Server asks the OS to read a page and it says no, which means the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot that it uses to get a transactionally-consistent point-in-time view of the database. There are a number of different causes of this:
There may not be any free space on the volume(s) storing the data files for the database
The SQL service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
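If you do need to create your own snapshot first, a minimal sketch follows; the snapshot name, the .ss path and the logical data file name are assumptions (the NAME value must match the logical name of the source database's data file, which you can look up in sys.database_files):

CREATE DATABASE MyDb_Snapshot ON
    (NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Data.ss')
AS SNAPSHOT OF MyDb;

DBCC CHECKDB (MyDb_Snapshot) WITH NO_INFOMSGS, ALL_ERRORMSGS;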
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data-loss. You're also going to have to do some root-cause analysis to figure out what happened to cause the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?
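To check whether page checksums are enabled, a query along these lines should do it (MyDb is a placeholder for the affected database):

SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = 'MyDb';     -- CHECKSUM is the recommended setting

-- enable it if it isn't set; this only protects pages written after the change
ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;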

Merge replication Error: The process could not bulk copy into table

Hi, I am using SQL Server 2005 Service Pack 4 on both the publisher and the distributor. While trying to set up merge replication, I keep getting the error below. Here are the replication details:
I am using a push subscription and the snapshot path is a network path.
The distributor and publisher are on the same server.
I have restored a recent backup on the subscriber and a week-old backup on the publisher.
I am setting up replication for only a few tables, procedures and user-defined functions.
I have verified that both the publisher and the subscriber have the same schema.
Replication initially failed saying it was unable to drop user-defined functions; to resolve this I set the publication property for user-defined functions to 'Keep existing object unchanged'.
Every time, the error appears after synchronization has been running for around 50 to 55 minutes.
My Snapshot Agent is working fine without any issue. The problem is only with the Merge Agent.
I have changed the verbose history value to 3 in the Merge Agent profile, but it is not giving any additional information.
Error messages: The merge process was unable to deliver the snapshot
to the Subscriber. If using Web synchronization, the merge process may
have been unable to create or write to the message file. When
troubleshooting, restart the synchronization with verbose history
logging and specify an output file to which to write. (Source:
MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
The process could not bulk copy into table
'"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number:
MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037
The system cannot find the file specified. (Source: MSSQLServer, Error
number: 0)
Get help: http://help/0
To obtain an error file with details on the errors encountered when
initializing the subscribing table, execute the bcp command that
appears below. Consult the BOL for more information on the bcp
utility and its supported options. (Source: MSSQLServer, Error number:
20253)
Get help: http://help/20253
bcp "greyhound"."dbo"."refund_import_log" in
"\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp"
-e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w (Source: MSSQLServer, Error number: 20253)
The problem occurs with a different table every time.
Is there a bug related to this? If so, where can I get the fix? If it is not a bug, then please let me know how to resolve this problem.
The error message tells you the problem:
The process could not bulk copy into table '"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
It then gives you a perfectly good repro, to see why bulk copy is failing:
bcp "greyhound"."dbo"."refund_import_log" in "\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp" -e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w
Looking at the bcp repro above, please double-check the UNC path that you set for the snapshot folder; it looks incorrect to me. UNC paths should begin with two backslashes, but yours has only one. The UNC path should look like this:
\\usaz-ism-db-02\ghstgrpltest\unc\
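If the bad path really does come from the publication's alternate snapshot folder, a rough sketch of inspecting and correcting it is below; the publication name is a placeholder, and if the path was instead set as the distributor's default snapshot folder you would fix it there rather than on the publication:

-- at the publisher, in the published database
EXEC sp_helpmergepublication @publication = N'YourMergePublication';

EXEC sp_changemergepublication
    @publication = N'YourMergePublication',
    @property = N'alt_snapshot_folder',
    @value = N'\\usaz-ism-db-02\ghstgrpltest\unc\',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;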

Import huge SQL file into SQL Server

I use the sqlcmd utility to import a 7 GB SQL dump file into a remote SQL Server. The command I use is this:
sqlcmd -S IP address -U user -P password -t 0 -d database -i file.sql
After about 20-30 min the server regularly responds with:
Sqlcmd: Error: Scripting error.
Any pointers or advice?
I assume file.sql is just a bunch of INSERT statements. For a large number of rows, I suggest using the BCP command-line utility. This will perform orders of magnitude faster than individual INSERT statements.
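As a rough sketch of the BCP route, assuming the rows are first exported to a flat file rather than an INSERT script (database, table, file and server names are placeholders; -c uses character format and -b sets the batch size):

bcp MyDb.dbo.TargetTable in C:\load\data.dat -S ServerName -U user -P password -c -b 10000 -e C:\load\bcp_errors.txt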
You could also bulk insert the data using the T-SQL BULK INSERT command. In that case, the file path needs to be accessible to the database server (i.e. a UNC path or a file copied to a drive on the server), along with the needed permissions. See http://msdn.microsoft.com/en-us/library/ms188365.aspx.
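A minimal BULK INSERT sketch, assuming a tab-delimited file on a share the SQL Server service account can reach (table name, path and terminators are placeholders to adapt):

BULK INSERT dbo.TargetTable
FROM '\\fileserver\share\data.dat'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 10000,
    TABLOCK
);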
Why not use SSIS? While I have a certification as a DBA, I always try to use the right tool for the job.
Here are some reasons to use SSIS.
1 - You can still use fast load (bulk copy). Make sure you set the batch size.
2 - Error handling is much better.
However, if you are using fast load, either the whole batch commits or it gets tossed.
If you load row by row instead, you can redirect each error row to a separate destination.
3 - You can perform transformations on the source data before loading it into the destination.
In short, Extract Translate Load.
4 - SSIS loves memory and buffers. If you want to get really in depth, read some articles from Matt Mason or Brian Knight.
Last but not least, the LAN/WAN always plays a factor if the job is not running on the target server with the input file on a local disk.
If you are on the same backbone with a good pipe, things go fast.
In summary, yes, you can use BCP. It is great for quick little jobs. Anything complicated that needs robust error handling should be done with SSIS.
Good luck,

Sql server - log is full due to ACTIVE_TRANSACTION [duplicate]

This question already has answers here:
The transaction log for the database is full
(15 answers)
Closed 8 years ago.
I have a very large database (50+ GB). In order to free up space on my hard drive, I tried deleting old records from one of the tables. I ran the command:
delete from Table1 where TheDate<'2004-01-01';
However, SQL Server 2012 said:
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'MyDb' is full due to 'ACTIVE_TRANSACTION'.
and it did not delete a thing. What does that message mean? How can I delete the records?
Here is what I ended up doing to work around the error.
First, I set the database recovery model to SIMPLE. More information here.
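For reference, switching the recovery model is a one-liner (using the database name from the error message); keep in mind that SIMPLE recovery breaks the log backup chain, so take a full backup afterwards if point-in-time restores matter to you:

ALTER DATABASE MyDb SET RECOVERY SIMPLE;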
Then, by deleting some old files I was able to make 5 GB of free space, which gave the log file more room to grow.
I reran the DELETE statement successfully without any warning.
I thought that by running the DELETE statement the database would immediately become smaller, thus freeing space on my hard drive. But that was not true. The space freed by a DELETE statement is not returned to the operating system immediately unless you run the following command:
DBCC SHRINKDATABASE (MyDb, 0);
GO
More information about that command here.
Restarting the SQL Server will clear up the log space used by your database.
If this however is not an option, you can try the following:
* Issue a CHECKPOINT command to free up log space in the log file.
* Check the available log space with DBCC SQLPERF('logspace'). If only a small percentage of your log file is actually being used, you can try a DBCC SHRINKFILE command (sketched below). This can however possibly introduce corruption in your database.
* If you have another drive with space available, you can try to add a log file there in order to get enough space to attempt to resolve the issue.
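A rough sketch of those steps in T-SQL; the logical log file name, the shrink target and the path on the other drive are assumptions you would need to adapt (the logical name can be looked up in sys.database_files):

CHECKPOINT;                        -- lets inactive log records be reused (in SIMPLE recovery)
DBCC SQLPERF('logspace');          -- log size and percent used, per database

-- if only a small fraction of the log is in use, shrink the log file
DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB

-- or add a second log file on a drive that still has free space
ALTER DATABASE MyDb
ADD LOG FILE (NAME = MyDb_log2, FILENAME = 'E:\SQLLogs\MyDb_log2.ldf', SIZE = 4096MB);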
Hope this will help you in finding your solution.

Fastest way to copy table content from one server to another

I'm looking for the fastest way to copy some tables from one Sybase server (ASE 12.5) to another. Currently I'm using the bcp tool, but it takes time to create a proper bcp.fmt file.
The tables have the same structure. There are about 25K rows in every table, and I have to copy about 40 tables.
I tried to use the -c parameter for bcp, but I get errors while importing:
CSLIB Message: - L0/O0/S0/N24/1/0:
cs_convert: cslib user api layer: common library error: The conversion/operation
was stopped due to a syntax error in the source field.
My standard bcp in/out commands:
bcp.exe SPEPL..VSoftSent out VSoftSent.csv -U%user% -P%pass% -S%srv% -c
bcp.exe SPEPL..VSoftSent in VSoftSent.csv -U%user2% -P%pass2% -S%srv2% -e import.err -c
Since you are copying between different servers, BCP is the way to go!
If it were within the same server, it would be a different story.
Are you saying it's from one Sybase ASE host to another Sybase ASE host?
If you don't want to mess with BCP or I/O on the file system, you could create a CIS proxy table in your destination database that references either a stored procedure with a select statement or a physical table in your source database.
Then you could just
insert into destinationtable (col1, col2...)
select
col1, col2...
from proxytablename
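As a rough sketch of what setting up that proxy table could look like on the destination ASE, assuming CIS is enabled and using placeholder names (the source server entry must exist in the interfaces file, and the owner dbo is an assumption):

exec sp_addserver SRCSRV, ASEnterprise, SRCSRV
go
create proxy_table proxytablename
    at "SRCSRV.SPEPL.dbo.VSoftSent"
go

Once the proxy table exists, the insert ... select above pulls the rows across the CIS connection.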
CIS proxy is fairly resource intensive, so I'd be very careful about how much work you're doing here.