SQL Server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator - sql

I have two tables on which no INSERT, SELECT, DELETE, or DROP TABLE command will execute; every attempt shows the error below:
The error I'm receiving
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO

A quick search on Google finds a similar thread here; I have extracted the possible solution below for easy reference.
Error 829 means there's an I/O subsystem problem, something called a 'hard I/O error': SQL Server asks the OS to read a page and the OS says no, meaning the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot that it uses to get a transactionally-consistent point-in-time view of the database. There are a number of different causes of this:
* There may not be any free space on the volume(s) storing the data files for the database
* The SQL service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
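If you do need to create the snapshot yourself, a minimal sketch follows; the snapshot name, the logical data file name, and the path under D:\ are assumptions, so substitute your own (and list every data file of the source database):
-- Hypothetical names/paths: create a snapshot manually, then run CHECKDB against it.
CREATE DATABASE yourdbname_snap
ON (NAME = yourdbname_data, FILENAME = 'D:\Snapshots\yourdbname_data.ss')
AS SNAPSHOT OF yourdbname;
GO
DBCC CHECKDB (yourdbname_snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO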
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data-loss. You're also going to have to do some root-cause analysis to figure out what happened to cause the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?
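For reference, both of those questions can be checked from T-SQL; this is just a quick sketch using the standard catalog view and msdb table:
-- Is page verification set to CHECKSUM for the affected database?
SELECT name, page_verify_option_desc FROM sys.databases;
-- Pages that have already failed I/O or checksum checks are recorded here.
SELECT * FROM msdb.dbo.suspect_pages;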

Related

Is there a size limit on appending ORC data files to Vora tables

I created a Vora table in Vora 1.3 and tried to append data to it from ORC files that I got from the SAP BW archiving process (NLS on Hadoop). I had 20 files containing approximately 50 million records in total.
When I set the "files" option in the APPEND statement to "/path/*", Vora returned this error message after approximately one hour:
com.sap.spark.vora.client.VoraClientException: Could not load table F002_5F: [Vora [eba156.extendtec.com.au:42681.1640438]] java.lang.RuntimeException: Wrong magic number in response, expected: 0x56320170, actual: 0x00000000. An unsuccessful attempt to load a table might lead to an inconsistent table state. Please drop the table and re-create it if necessary. with error code 0, status ERROR_STATUS
The next thing I tried was appending the data from each file using separate APPEND statements. On the 15th append (of 20) I got the same error message.
The error indicates that the Vora engine on node eba156.extendtec.com.au is not available. I suspect it either crashed or ran into an out-of-memory situation.
You can check the log directory for a crash dump. If you find one, please open a customer message for further investigation.
If you do not find a crash dump, it is likely an out-of-memory situation. You should find confirmation either in the engine log file or in /var/log/messages (if the OOM killer ended the process). In that case, the available memory is not sufficient to load the data.

Merge replication Error: The process could not bulk copy into table

Hi, I am using SQL Server 2005 Service Pack 4 on both the publisher and the distributor. While trying to set up merge replication, I am getting the error below continuously. Here are the replication details:
I am using a push subscription and the snapshot path is a network path.
The distributor and publisher are on the same server.
I have restored a recent backup on the subscriber and a week-old backup on the publisher.
I am setting up replication for only a few tables, procedures, and user-defined functions.
I have verified that both the publisher and subscriber have the same schema.
The replication initially failed saying it was unable to drop user-defined functions; to resolve it, I set the publisher property for user-defined functions to "Keep existing object unchanged".
Every time, the error comes after synchronization has been running for around 50 to 55 minutes.
My snapshot agent is working fine without any issue; the problem is only with the merge agent.
I have changed the verbosehistory value to 3 in the merge agent profile, but it is not giving any additional information.
Error messages: The merge process was unable to deliver the snapshot
to the Subscriber. If using Web synchronization, the merge process may
have been unable to create or write to the message file. When
troubleshooting, restart the synchronization with verbose history
logging and specify an output file to which to write. (Source:
MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
The process could not bulk copy into table
'"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number:
MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037
The system cannot find the file specified. (Source: MSSQLServer, Error
number: 0)
Get help: http://help/0
To obtain an error file with details on the errors encountered when
initializing the subscribing table, execute the bcp command that
appears below. Consult the BOL for more information on the bcp
utility and its supported options. (Source: MSSQLServer, Error number:
20253)
Get help: http://help/20253
bcp "greyhound"."dbo"."refund_import_log" in
"\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp"
-e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w (Source: MSSQLServer, Error number: 20253)
Here I am getting the problem with a different table every time.
Is there any bug related to this? If so, where can I get the fix? If it is not a bug, then please let me know how to resolve this problem.
The error message tells you the problem:
The process could not bulk copy into table '"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
It then gives you a perfectly good repro, to see why bulk copy is failing:
bcp "greyhound"."dbo"."refund_import_log" in "\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp" -e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w
Looking at the bcp repro above, can you please double-check the UNC path that you set for the snapshot folder? It looks incorrect to me. UNC paths should begin with two backslashes; yours only has one. The UNC path should look like this:
\\usaz-ism-db-02\ghstgrpltest\unc\
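If the wrong path was entered as an alternate snapshot folder on the publication, you can inspect and correct it with the merge replication procedures; the publication name below is only a placeholder for yours, so this is a sketch rather than your exact fix:
-- Hypothetical publication name: check the current snapshot folder settings...
EXEC sp_helpmergepublication @publication = N'YourMergePublication';
GO
-- ...and point the alternate snapshot folder at a proper \\server\share UNC path.
EXEC sp_changemergepublication
    @publication = N'YourMergePublication',
    @property = N'alt_snapshot_folder',
    @value = N'\\usaz-ism-db-02\ghstgrpltest\unc\',
    @force_invalidate_snapshot = 1;
GO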

SQL Server - log is full due to ACTIVE_TRANSACTION [duplicate]

I have a very large database (50+ GB). In order to free up space on my hard drive, I tried deleting old records from one of the tables. I ran the command:
delete from Table1 where TheDate<'2004-01-01';
However, SQL Server 2012 said:
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'MyDb' is full due to 'ACTIVE_TRANSACTION'.
and it did not delete a thing. What does that message mean? How can I delete the records?
Here is what I ended up doing to work around the error.
First, I set the database recovery model to SIMPLE. More information here.
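For reference, that change is a one-liner; MyDb is the database name from the error message above:
-- Switch to the SIMPLE recovery model so the log can truncate at checkpoints.
-- Note: this breaks the log backup chain, so point-in-time restores are no longer possible.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO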
Then, by deleting some old files, I was able to free 5 GB of disk space, which gave the log file room to grow.
I reran the DELETE statement successfully without any warning.
I thought that by running the DELETE statement the database would immediately become smaller, thus freeing space on my hard drive, but that was not true. The space freed by a DELETE statement is not returned to the operating system unless you run the following command:
DBCC SHRINKDATABASE (MyDb, 0);
GO
More information about that command here.
Restarting the SQL Server will clear up the log space used by your database.
If this however is not an option, you can try the following:
* Issue a CHECKPOINT command to free up log space in the log file.
* Check the available log space with DBCC SQLPERF('logspace'). If only a small percentage of your log file is actually being used, you can try a DBCC SHRINKFILE command (see the sketch after this list). This can however possibly introduce corruption in your database.
* If you have another drive with space available, you can try to add a log file there in order to get enough space to attempt to resolve the issue.
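A rough sketch of those steps follows; the logical log file name MyDb_log, the target size, and the E:\ path are assumptions, not values from your server:
USE MyDb;
GO
-- Free up space inside the log file.
CHECKPOINT;
GO
-- See what percentage of each log is actually in use.
DBCC SQLPERF('logspace');
GO
-- If usage is low, shrink the log file (target size in MB).
DBCC SHRINKFILE (MyDb_log, 1024);
GO
-- Or, if another drive has room, add a second log file there.
ALTER DATABASE MyDb
ADD LOG FILE (NAME = MyDb_log2, FILENAME = 'E:\SQLLogs\MyDb_log2.ldf', SIZE = 1GB);
GO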
Hope this helps you find a solution.

Running a SELECT command on a Postgres relational table containing terabytes of data

I have a 3 TB relational table in Postgres. Now I want to dump its contents to a CSV file. To do so, I am following this tutorial: http://www.mkyong.com/database/how-to-export-table-data-to-file-csv-postgresql/
My problem is that after I specify the output file and the SELECT statement, Postgres shows "Killed". Is this because the table is 3 TB? If yes, how should I export my data from Postgres to another file (txt or csv, etc.)? If not, how should I figure out the possible cause of the SELECT command getting killed?
Killed suggests you're running on a system where the out-of-memory killer (OOM killer) is enabled by memory over-commit settings. This isn't recommended by the manual.
If you disable overcommit you'll get a neater 'out of memory' error sent to the client instead of a SIGKILL and server restart.
As for the COPY: are you running COPY (SELECT ...), or just COPY tablename TO ...? Try a direct copy without a query and see if that helps.
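A direct, query-free copy would look roughly like this; the table name and output path are placeholders, and COPY TO a server-side file needs superuser rights (otherwise use psql's \copy from the client):
-- Server-side copy of the whole table, no wrapping SELECT, streamed row by row.
COPY my_big_table TO '/tmp/my_big_table.csv' WITH (FORMAT csv, HEADER);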
When diagnosing faults you should be looking at the PostgreSQL error logs (which would tell you more about this problem) and system logs like the kernel logs or dmesg output.
When asking questions about PostgreSQL on Stack Overflow always include the exact server version from select version(), the exact command text/code run, the exact unedited text of any error messages, etc.

Prevent MS SQL 2005 master DB from being corrupted

I am trying to work out what caused the following corruption.
2011-06-29 10:47:26.42 spid5s Starting up database 'master'.
2011-06-29 10:47:26.53 spid5s Error: 9003, Severity: 20, State: 1.
2011-06-29 10:47:26.53 spid5s The log scan number (216:72:1) passed to log scan in database 'master' is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.
2011-06-29 10:47:26.53 spid5s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
I can find plenty of threads and information on how to recover databases when the master DB is corrupt, and I can recover them successfully.
HOWEVER, it is not very satisfactory for customers to have to perform these operations. I have been able to examine the event log files from when the corruption occurred. From there I can see the server working fine, then the computer being shut down; a few hours later the computer is switched on and the master DB is corrupted.
Any help greatly appreciated
One of:
* disk corruption: run chkdsk etc. with SQL Server shut down
* someone has been playing with the MDF/LDF files
The master DB starts once when SQL Server starts up: so why did this happen? Patch? BSOD? PEBKAC? Note: the MDF/LDF files won't be locked when SQL Server is shut down...
I can't recall a corrupt master, ever, unless it's one of the 3 reasons above