Merge replication Error: The process could not bulk copy into table - sql

Hi, I am using SQL Server 2005 Service Pack 4 on both the publisher and the distributor. While trying to set up merge replication, I keep getting the error below. Here are the replication details:
I am using a push subscription, and the snapshot path is a network path.
The Distributor and Publisher are on the same server.
I have restored a recent backup on the subscriber and a one-week-old backup on the publisher.
I am setting up replication for only a few tables, stored procedures, and user-defined functions.
I have verified that the publisher and subscriber have the same schema.
Initially the replication failed saying it was unable to drop user-defined functions; to resolve this, I set the article property for user-defined functions to "Keep existing object unchanged".
Every time, the error appears after synchronization has been running for around 50 to 55 minutes.
My Snapshot Agent is working fine without any issue; the problem is only with the Merge Agent.
I have changed the verbose history value to 3 in the Merge Agent profile, but it is not giving any additional information.
Error messages: The merge process was unable to deliver the snapshot
to the Subscriber. If using Web synchronization, the merge process may
have been unable to create or write to the message file. When
troubleshooting, restart the synchronization with verbose history
logging and specify an output file to which to write. (Source:
MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
The process could not bulk copy into table
'"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number:
MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037
The system cannot find the file specified. (Source: MSSQLServer, Error
number: 0)
Get help: http://help/0
To obtain an error file with details on the errors encountered when
initializing the subscribing table, execute the bcp command that
appears below. Consult the BOL for more information on the bcp
utility and its supported options. (Source: MSSQLServer, Error number:
20253)
Get help: http://help/20253
bcp "greyhound"."dbo"."refund_import_log" in
"\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp"
-e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w (Source: MSSQLServer, Error number: 20253)
The problem occurs with a different table every time.
Is there any bug related to this? If so, where can I get the fix? If it is not a bug, then please let me know how to resolve this problem.

The error message tells you the problem:
The process could not bulk copy into table '"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
It then gives you a perfectly good repro, to see why bulk copy is failing:
bcp "greyhound"."dbo"."refund_import_log" in "\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp" -e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w
Looking at the bcp repro above, can you please double-check the UNC path that you set for the snapshot folder? It looks incorrect to me. UNC paths should begin with two backslashes, and yours only has one. The UNC path should look like this:
\\usaz-ism-db-02\ghstgrpltest\unc\
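If the path was configured as the publication's alternate snapshot folder, one way to correct it is with T-SQL on the publisher. This is only a minimal sketch: the publication name below is an assumption (taken from the folder name in the error output), so substitute your actual publication name and regenerate the snapshot afterwards.
-- Run in the published database; the publication name is a hypothetical placeholder.
EXEC sp_changemergepublication
    @publication = N'GREYHOUND-STAGE',
    @property = N'alt_snapshot_folder',
    @value = N'\\usaz-ism-db-02\ghstgrpltest\unc\',
    @force_invalidate_snapshot = 1;
GO
-- Regenerate the snapshot so the Merge Agent picks up the corrected location.
EXEC sp_startpublication_snapshot @publication = N'GREYHOUND-STAGE';
GO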

Related

How do I import a SQL dump into pgAdmin 4 using PostgreSQL?

I have a SQL dump that I need to import into PostgreSQL via pgAdmin 4. However, when I run the command, the schema gets created but none of the data comes with it. I already have the database set up in pgAdmin 4. This is my first time using PostgreSQL and pgAdmin, so I know I must be missing something.
The SQL dump file was sent to me directly; I did not use pg_dump to migrate anything. The file is in my Downloads folder, and I need to load it into pgAdmin.
I need this SQL dump because I need to log into several portals locally for a large project.
On Windows, using PostgreSQL 14, I've tried several approaches from other Stack Overflow answers, first using the command line in both Bash and PowerShell.
This is the command a coworker told me to use; it should add the tables and data for the app, and it worked fine for him.
C:\Program Files\PostgreSQL\14\bin>psql -h localhost -U postgres -d the_database -f PATH_TO_YOUR_DOWNLOADS\data_dump.sql
This command creates the schema in the database, but no data comes with it. (I know the data is missing because I can't use my dummy logins to get into the project.)
Second, I tried using the built-in restore and backup methods in pgAdmin, and both of those end in an error:
Process failed: Restoring backup on the server 'PostgreSQL 14 (localhost:5432)'
Third, I tried using the Query Tool and loading the SQL file that way, but when I hit Execute I get an error there as well.
Using the Query Tool, when I load the downloaded file, I can see the data in the query text, but it is not in the database.
ERROR: syntax error at or near "2"
LINE 3285: 2 Some Test 2020-11-13 07:42:29.356827 2020-11-13 04:32:...
^
SQL state: 42601
Character: 87447
Any advice?
Does the SQL file need to be formatted in any particular way?
I just need the data to be imported into the pgAdmin 4 database WITH my schema.

Issues while using Ora2Pg for Oracle to PostgreSQL - report generation

I am trying to generate the report from an Oracle 19c database with Ora2Pg v23.1.
Command used: ora2pg -t show_report --dump_as_html -l db_report_filename.html -c E:\ora2pg\ora2pg.conf
Error generated in the HTML report:
FATAL: ORA-00604: error occurred at recursive SQL level 1 ORA-08177: can't serialize access for this transaction (DBD ERROR: OCIStmtExecute)
Looking for ideas to resolve this issue.
This issue was fixed by a configuration change in the ora2pg.conf file.
Data is exported in serializable transaction mode to get a consistent snapshot of the data; see the Oracle documentation for which parameter to increase to avoid this issue. Or, if you are sure that no modifications are being made in the Oracle database, you can force Ora2Pg to use a read-only transaction instead; see the TRANSACTION directive in ora2pg.conf.

SQL Server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator

For two of my tables, no INSERT, SELECT, DELETE, or DROP TABLE command will execute; every attempt shows the error below:
The error I'm receiving
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO
A quick search on Google finds a similar thread here. I have extracted the possible solution for easy reference.
Error 829 means there's an I/O subsystem problem, something called a 'hard I/O error': SQL Server asks the OS to read a page and it says no, which means the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot that it uses to get a transactionally-consistent point-in-time view of the database. There are a number of different causes of this:
There may not be any free space on the volume(s) storing the data files for the database
The SQL service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
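For reference, here is a minimal sketch of creating a database snapshot manually and checking it. The snapshot name, logical file name, and file path are placeholders, so adjust them to your database's actual data files (sys.database_files lists the logical names).
-- Hypothetical names and paths; add one sparse-file entry per data file in the source database.
CREATE DATABASE yourdbname_snap
ON ( NAME = yourdbname_data, FILENAME = 'D:\Snapshots\yourdbname_data.ss' )
AS SNAPSHOT OF yourdbname;
GO
DBCC CHECKDB (yourdbname_snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO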
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data-loss. You're also going to have to do some root-cause analysis to figure out what happened to cause the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?
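To check whether page checksums are enabled, you can query the catalog (only the database name below is a placeholder):
-- PAGE_VERIFY should report CHECKSUM; TORN_PAGE_DETECTION or NONE means checksums are not enabled.
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'yourdbname';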

variable for SQLLOGDIR not found

I am using the Ola Hallengren script for my maintenance solution. When I run just the database backup job for user databases, I get the following error: Unable to start execution of step 1 (reason: Variable SQLLOGDIR not found). The step failed.
I have checked the directory permissions and there is no issue there. The script creates the jobs with no problem; I get the error message when I try to run the job.
I had this same issue just the other day. I run a number of 2017 servers, but the issue happened when I started running on a 2012 server.
I've dropped Ola a mail to confirm, but as best I can make out, the SQLLOGDIR token specified in the 'Advanced' tab for the step (for logging output) is not compatible with 2012, and maybe with versions below 2017, though I have not tested these.
HTH,
Adam.
You need to replace this part in the Advanced tab with the job name. For example, replace $(ESCAPE_SQUOTE(JOBNAME)) with CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID)), so that it looks like this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
instead of this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\$(ESCAPE_SQUOTE(JOBNAME))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
Do this for all the other jobs if you don't want to recreate them.
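If you would rather make the change with T-SQL than through each job step's Advanced tab, a sketch like the following should work; the job name and step id are assumptions, so substitute the ones from your own maintenance jobs.
-- Point an existing job step's output file at the corrected token string (names are illustrative).
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'CommandLogCleanup',
    @step_id = 1,
    @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';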
I had the same issue on my SQL Server 2012 instance; the error occurred during the database backup using Ola's scripts. As mentioned above, the issue is with the output file: I changed the location and the output file on the SQL Agent job and reran the job successfully.
The error is related to the job output file.
When you create a maintenance job using the Ola script, it automatically assigns an output file to the step. Sometimes that location does not exist on the server.
I faced the same issue. I ran the integrity script manually on the server and it completed without error, which told me the problem was in the job configuration.
I changed the job output file location and now the job runs fine as well.
The trick is to build the string for the @output_file_name parameter element by element before calling the stored procedure. If you look into Ola's code, you will see that is exactly what he is doing.
I have tried to describe this in more detail in the post Add SQL Agent job step with tokens in output file name.
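As an illustration of that approach, here is a minimal sketch that derives the log directory, builds the output file name piece by piece, and only then passes it to msdb.dbo.sp_add_jobstep. The job name, step name, and command below are assumptions, and the directory derivation is just one common approach, not necessarily what Ola's script does verbatim.
-- Derive the SQL Server error log directory from the error log file path.
DECLARE @LogDirectory nvarchar(max) = CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(max));
SET @LogDirectory = LEFT(@LogDirectory, LEN(@LogDirectory) - CHARINDEX('\', REVERSE(@LogDirectory)));

-- Build the output file name element by element, keeping only tokens that SQL Agent resolves.
DECLARE @OutputFileName nvarchar(max) =
    @LogDirectory + N'\DatabaseBackup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';

-- Hypothetical job and step names; the job itself must already exist.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @step_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'EXECUTE dbo.DatabaseBackup @Databases = ''USER_DATABASES'', @BackupType = ''FULL''',
    @output_file_name = @OutputFileName;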

Running a select command on a Postgres relational table containing terabytes of data

I have a 3 TB relational table in Postgres, and I want to dump its contents to a CSV file. To do so, I am following this tutorial: http://www.mkyong.com/database/how-to-export-table-data-to-file-csv-postgresql/
My problem is that after I specify the output file and the SELECT statement, Postgres shows "Killed". Is it because the table is 3 TB? If yes, how should I export my data from Postgres to a file (txt, csv, etc.)? If not, how should I figure out the possible cause of the SELECT command getting killed?
"Killed" suggests you're running on a system where the out-of-memory killer (OOM killer) is enabled by the memory overcommit settings. This isn't recommended by the manual.
If you disable overcommit you'll get a neater 'out of memory' error reported to the client instead of a SIGKILL and a server restart.
As for the COPY: are you running COPY (SELECT ...) TO ..., or just COPY tablename TO ...? Try a direct copy without a query and see if that helps.
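For example, a direct table copy looks like this (the table name and paths are placeholders); a plain COPY streams rows straight out of the table, while a COPY wrapping a complex SELECT can need far more memory for sorts or joins.
-- Server-side copy of the whole table to CSV; the file is written on the database server.
COPY tablename TO '/tmp/tablename.csv' WITH (FORMAT csv, HEADER);
-- Client-side alternative from psql, writing the file on your own machine:
-- \copy tablename TO 'tablename.csv' WITH (FORMAT csv, HEADER)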
When diagnosing faults you should be looking at the PostgreSQL error logs (which would tell you more about this problem) and system logs like the kernel logs or dmesg output.
When asking questions about PostgreSQL on Stack Overflow always include the exact server version from select version(), the exact command text/code run, the exact unedited text of any error messages, etc.