The source is SQL Server 2016 and the target is Netezza 7.2.
When a source table is being mapped to the target, the following message appears:
ERROR:
An error has occurred while setting the replication method for dbo.CCM [An error occurred while turning on supplemental logging for dbo.CCM.
Failed to get publication ID.]. Check the event log for related events and a possible cause.
SQL Server Replication is enabled with a local distributor database. We have checked the CDC event logs and the same error is logged there, with little additional detail.
Any help on this would be appreciated.
You need to check the trace files. These are located in whichever folder you selected for instance data during the install. If you do not know where that is, look at /conf/userfolder.vmargs - the trace files are under /instance/log.
If you cannot find any useful information there, turn on detailed traces:
1.) In the Management Console, open the Configuration perspective, select the MS SQL Server datastore, then Properties, System Parameters
2.) Add a new parameter, global_trace_hours, and specify a numeric value, say 4
3.) Save
4.) The tracing is enabled dynamically - it will be set on for the number of hours you specify. The value is automatically decremented every minute, and when it reaches 0, tracing is automatically and dynamically disabled
5.) Attempt to change the replication method to mirror again
6.) In the folder /instance/log/on you should find some files with data in them
7.) Copy the trace file to a location with a short path (e.g. C:\TEMP) - or if it has already been zipped, unzip to C:\TEMP
8.) Open a command prompt as administrator
9.) Change directory to /bin
10.) Execute dmdecodetrace C:\TEMP\ | more
Note that the additional trace files are not plain text (to minimize the impact of writing them), so they need to be decoded.
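Putting steps 8.) to 10.) together, an elevated command prompt session might look roughly like this; the installation path and trace file name are placeholders for whatever your environment actually uses:
cd /d "<CDC installation folder>\bin"
dmdecodetrace C:\TEMP\<trace file name> | more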
If you still do not get any pointers, open a support ticket.
One potential cause could be that the table does not have a primary key. SQL Server replication requires a primary key, and because CDC uses SQL replication to ensure that full row images are logged in the transaction log, it is a prerequisite for CDC as well.
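A quick way to verify this is to query the catalog views and, if necessary, add a key; the column name below is only an illustration, so substitute whatever uniquely identifies your rows:
-- List any primary key constraint defined on dbo.CCM
SELECT name FROM sys.key_constraints
WHERE [type] = 'PK' AND parent_object_id = OBJECT_ID('dbo.CCM');
-- If the query returns nothing, add a primary key on a suitable column (CCM_ID is hypothetical)
ALTER TABLE dbo.CCM ADD CONSTRAINT PK_CCM PRIMARY KEY (CCM_ID);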
I am currently evaluating Flyway as a deployment option for our company. We run our database deployments on an Oracle database and currently spool the output from a sqlplus session for logging purposes. We use this to verify feedback such as whether objects were created successfully, whether packages, functions, etc. compiled without errors, how many records were inserted, and so forth.
Is there similar logging functionality in Flyway? Currently the only logging we have found is in the server logs. We can tell from these logs that a script has completed successfully or has triggered an ORA error, but we are curious whether this is the extent of the database logging options or not.
Thank you,
We used the command-line method for running Flyway and turned on debug output (-X). Along with a lot of other output, it also logs more information about the SQL migrations run (e.g. the content of repeatable migrations) and the number of records affected. This is not perfect; however, it helped us a lot in capturing more information about what was applied.
See https://flywaydb.org/documentation/commandline/ - the flag is not documented under each individual command because it applies to Flyway itself.
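For example, we run it like this and redirect the output to a file for later review; the redirection and log file name are just our own convention, not a Flyway feature:
flyway -X migrate > flyway_migrate.log 2>&1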
After a fresh install of HANA SPS12 I get this event:
Delta merge (mergedog) configuration ( ID 10 )
Determines whether or not the 'active' parameter in the 'mergedog'
section of system configuration file(s) is 'yes'. mergedog is the
system process that periodically checks column tables to determine
whether or not a delta merge operation needs to be executed.
If this operation is so important, why is it not a default setting? Why do I have to change this setting?
This operation is important, and the default setting for mergedog 'active' is therefore 'yes'. The system checks the setting and raises an alert like the one reported (see the alert message above).
This check should not be performed for services without their own persistence, such as dpserver. A superfluous check for dpserver is what caused that false alert.
You may ignore the alert for now or change the setting; it will finally go away after the next upgrade.
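If you want to confirm that automatic delta merge is actually enabled, a quick check of the configuration might look like the following; the ALTER SYSTEM statement is only needed if 'active' turns out not to be 'yes':
-- Show the current mergedog 'active' values from the configuration files
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'mergedog' AND KEY = 'active';
-- Re-enable automatic delta merge at the SYSTEM layer if it was switched off
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('mergedog', 'active') = 'yes' WITH RECONFIGURE;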
I am developing a WinForms application in VB.NET.
When I try to add a TableAdapter to an existing dataset, I receive the error:
Failed to open a connection to the database.
"An attempt to attach an auto-named database for file ###Filelocation### failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." Check the connection and try again.
This same dataset has two other TableAdapters using the same data connection (I am selecting the already existing data connection), and those work fine.
This connection uses the application connection strings:
Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\test.mdf;Integrated
server=localhost;user id=root;password=password;database=testuser;persistsecurityinfo=True
This error pops up every time I select the data connection in the first window that pops up.
Why is this happening?
Additional info:
The other two TableAdapters were added to this dataset using a different computer.
This is for a MySQL connection.
I found this post on the MSDN forum:
An attempt to attach an auto-named database ....\aspnetdb.mdf failed
One of the suggestions is, as Mr. DonBoitnott correctly says, to add User Instance=True;. But there's also another suggestion posted by Luke A.:
"For the record, not one single error message given to me during the course of trying to fix this was relevant to the actual problem. Upon first receiving "An attempt to attached an auto-named database..." I looked online for every suggestion I could find: use an absolute path to the MDF, reorder TCP/IP and named pipes in the server configuration, disable/enable UserInstance (depending on where you looked), change security settings, reconfigure authentication, give specific login credentials.
None of these worked. All of these led to different vague/ambiguous error messages, which led to another problem which required a solution which led back to the original error message... an endless loop of problems, completely unrelated to what was actually wrong.
Also, posts about setting correct permissions on the App_Data folder are deceptive, as they imply the default permissions were not sufficient for SQL Express (in fact, they are). The whole point of this VS environment is that you can develop a web application and plop it right onto an IIS/SQLExpress setup and have it work. This makes the applications more portable (within IIS, of course) and secure. Of course, everything configuration-related has been obfuscated enough to make it several orders of magnitude more difficult than it has to be."
So: try prepending Initial Catalog=uniquenamehere to your connection string.
Though he says "... where 'uniquenamehere' is some name for your project.", try replacing uniquenamehere with the actual name of the database where the table exists.
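Applied to the connection string from the question, that would look something like the following; the database name 'test' is only a guess based on the .mdf file name, and the Integrated Security part is assumed from the truncated string above:
Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\test.mdf;Initial Catalog=test;Integrated Security=True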
The file is a .mdf file, so it is an MSSQL file.
My guess would be that, since you stated you are using MySQL, the connection string has to be a tad different, and therefore the dataset can't connect to MySQL.
Try adding User Instance=True; to the connection string.
I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the issues pre-SP2, but I am running service pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and check the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I translate everything into English myself now)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
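For what it's worth, behind the scenes the Maintenance Cleanup Task generates a call to the undocumented xp_delete_file procedure, so running it manually against the exact subfolder is a quick way to test that the path and extension are right. A rough sketch, with the folder, extension and cutoff date as illustrative values only:
-- 0 = backup files; delete .bak files older than the given date; the trailing 1 includes first-level subfolders
EXECUTE master.dbo.xp_delete_file 0, N'H:\Backup\DatabaseName', N'bak', N'2008-01-01T00:00:00', 1;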
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups deeper than the just one level?
Another thought: do you have one single maintenance plan that you run to delete backups of multiple databases? The reason I ask is that, to do that, you would have to point it to a folder one level higher, meaning that "include first-level subfolders" would not reach deep enough.
The way I have mine set up is that the Maintenance Cleanup Task is part of my backup process. So once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This allows me to be more specific about the directory, so I don't run into the directory structure being too deep. Since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted either way.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plans, then it is probably quitting before it starts to delete the outdated backups.
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task runs always, or only when the first one does/does not complete successfully (this changes the color of the arrow; possible colors are red/green/blue). Maybe the backup works, and then the cleanup never runs because it is set to run only when the backup fails?
Running sp_attach_single_file_db gives this error:
The log scan number (10913:125:2) passed to log scan in database 'myDB' is not valid
Isn't it supposed to re-create the log file?
How else would I be able to attach/repair that .mdf file?
It depends on what state your database was in when it was detached; it's possible there are uncommitted/unwritten transactions in that log file that are needed to attach the database, otherwise there would be data loss. Do you know what recovery model you were in when it was detached?
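If the database was shut down cleanly before the detach, one alternative worth trying (SQL Server 2005 and later) is to attach just the .mdf and let the engine rebuild the log; the file path below is a placeholder:
-- Attach the data file and rebuild a new log file (only works if the database was detached cleanly)
CREATE DATABASE myDB
ON (FILENAME = N'C:\Data\myDB.mdf')
FOR ATTACH_REBUILD_LOG;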
It worked when I attached another one (with its .ldf log file), then the one in question, then detached the first one again. Don't ask me why.