SSIS: data flow task fails when sequence container is set to TransactionOption = Required

I have an SSIS package with a sequence container (and a nested sequence container) that works fine when I set the transaction option to Supported. However, when I set it to Required it fails. I suspect it's because my source and destination are on different servers. Is TransactionOption = Required not a possibility when doing a cross-server data flow?

SSIS is compatible with transactions across different data sources; however, as I understand it, they require the MSDTC service. If your data source is not compatible with this, it will fail. If your data source is compatible (i.e. another Windows machine running SQL Server), then check that the MSDTC service is switched on and configured correctly.
You could also set TransactionOption to NotSupported on the specific parts of the sequence container to get around it, although I don't know if that will work for a source.

I've had this in the past. Ensure you have TCP port 135 (RPC) and the MSDTC executable (msdtc.exe) allowed through Windows Firewall on the server. You can test by temporarily disabling Windows Firewall on the server and running your SSIS package; if it runs, re-enable the firewall and add the rules above.
Hope this helps
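As a quick sanity check for the firewall point above, you can confirm whether the RPC endpoint-mapper port (TCP 135) is even reachable from the machine running the package. This is just a sketch; the host name below is a placeholder, and MSDTC also needs its dynamic RPC port range open, which this does not test.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "remote-sql-server" is a placeholder -- substitute the remote machine.
# MSDTC uses TCP 135 (RPC endpoint mapper) plus a dynamic RPC port range.
print(port_open("remote-sql-server", 135))
```

If this prints False while the firewall is up but True with it temporarily disabled, the firewall rules are the likely culprit.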


Source Computer name in custom logs collected by Azure Monitor Agent

I have set up Azure Monitor custom log collection on my Linux VM by following the tutorial, and everything works fine except that the Computer Name column in my custom table does not get populated. This means I have no easy way to distinguish between similar logs coming from multiple VMs.
I could probably hack the hostname into the log file itself and get Azure to parse it into a field, but I don't want to customize the log file if possible; I believe the agent should be capable of propagating this information somehow.
Is there anything that needs to be configured outside of the tutorial, or is it a current limitation of the Azure Monitor Agent?
Fixed by Microsoft in February 2023: https://learn.microsoft.com/en-us/answers/questions/951629/custom-logs-hostname-field-azure-monitoring-agent
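Before that fix, the workaround mentioned in the question — embedding the hostname into each log line and letting Azure parse it into a field — could be sketched like this (the host= key is an arbitrary choice, not anything the Azure Monitor Agent requires):

```python
import logging
import socket

# Workaround sketch: stamp the VM's hostname into every log line so Azure
# Monitor can parse it into a custom field on ingestion.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    f"%(asctime)s host={socket.gethostname()} %(levelname)s %(message)s"))
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("service started")
```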

SSIS package gets fatal error while reading the input stream from the network

Problem
When executing a data-heavy SSIS package that inserts data from a database in EnvironmentA1 into a database in EnvironmentB1, I get the following error:
A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 10060, output error: 0)
Context Information
EnvironmentA1 - virtual machine in local data center, running SQL Server 2017
EnvironmentB1 - virtual machine in Azure, running SQL Server 2017
The package is executed from the SSIS Catalog, scheduled daily by SQL Agent. Very occasionally it succeeds, but it is now generally expected to fail every time it runs, at a different step each time.
What is really baffling to me is that if I run the same package interactively in Visual Studio, using the exact same connection strings with the same security context for both the EnvironmentA1 and EnvironmentB1 connection managers, it succeeds every time without any issues. Visual Studio itself is installed elsewhere, in EnvironmentC1.
This is how example entries in the SQL Error Log on EnvironmentB1 look around the time of failure:
Error messages from SSIS Catalog execution report:
Everything above, and the research I have done, suggests that this is a network-related issue. The common suggestion found was to disable any TCP-offloading-related features, which I did for both environments, but that didn't make any difference.
Additionally, for testing purposes, I disabled the following features in the NIC configuration of each environment:
EnvironmentA1:
Receive-Side Scaling State
Large Send Offload V2 IPv4
Large Send Offload V2 IPv6
TCP Checksum Offload IPv4
TCP Checksum Offload IPv6
EnvironmentB1:
Receive-Side Scaling
Large Send Offload Version 2 IPv4
Large Send Offload Version 2 IPv6
TCP Checksum Offload IPv4
TCP Checksum Offload IPv6
IPSec Offload
Also of note: there are other SSIS packages that interact with both of these environments, and some of them have never produced a similar error, but they are either dealing with an insignificant amount of data or pushing it in the opposite direction (EnvironmentB1 to EnvironmentA1).
As a temporary measure I have also tried deploying the package to the SSIS Catalog of EnvironmentA2 (the development version of EnvironmentA1) and scheduling execution using the production connection strings, but it hits the exact same issue; the only guaranteed way to run the package successfully remains running it via Visual Studio.
If anyone could at least point me in the right direction of diagnosing this issue, that would be greatly appreciated. Please let me know if I can add any other info for the context.
Your third SSIS error states the connection was forcibly closed by the remote host.
That suggests firewall or network-filtering issues. Check with your network team whether that could be the case.
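If the drops turn out to be transient network blips between the data center and Azure, one thing worth trying (an assumption on my part, not something the poster confirmed) is the connection-resiliency keywords ConnectRetryCount / ConnectRetryInterval supported by the Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL) and .NET SqlClient. A small helper to append them to a connection string:

```python
def with_connect_retry(conn_str: str, retry_count: int = 3,
                       retry_interval: int = 10) -> str:
    """Append connection-resiliency keywords to a SQL Server connection string.

    ConnectRetryCount/ConnectRetryInterval are supported by MSOLEDBSQL and
    .NET SqlClient; whether they help depends on the failure being transient.
    """
    base = conn_str.rstrip().rstrip(";")
    return f"{base};ConnectRetryCount={retry_count};ConnectRetryInterval={retry_interval};"

print(with_connect_retry(
    "Data Source=EnvironmentB1;Initial Catalog=Target;Provider=MSOLEDBSQL;"))
```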

Perforce replica server that can write to main server and has build capability

I need to customize the Perforce server to achieve the following requirements:
I need a local replica server which is synced with the main server in a different geographical location. I can have the same time zone settings for the local and main servers.
The client should be able to commit to the replica server.
The replica server will have build capability, as well as a test framework that is run whenever a build is successful.
Once the build and tests are successful, the code should get committed to the main server.
I know that the replica server provided by Perforce is used as a read-only server which can't write to the main server, and the forwarding replica just forwards commands to the main server.
I can't use a proxy server, as the local server should work even when the main server is offline.
Is it possible to do this? Can anyone point me to some articles which would help me set up such a server?
I had asked the same question on the Perforce forum, but the question is still under verification by moderators.
An edge/commit setup may meet your requirements, as an Edge Server handles some local operations associated with workspaces and work in progress.
As well as read-only commands, the following operations can be performed on an Edge Server:
syncing, checking out, merging, resolving, and reverting files
More information about edge/commit architecture is available here:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.distributed.html
You may also want to look at BuildFarm servers:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.replication.html#DB5-72814
Hope this helps,
Jen!
A Build Farm server doesn't allow build workspaces to submit files. If submitting files is required as part of the build process, consider the use of an edge server to support your automated build processes.
With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server.
Edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.
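As a very rough outline (not a tested recipe — all host names, ports, server ids, and paths below are placeholders, and the p4dist manual linked above is the authoritative procedure), standing up an edge server typically involves steps along these lines:

```python
# Rough, hedged outline of typical edge-server setup commands.
EDGE_SETUP_STEPS = [
    # Define server specs on the commit server (Services: commit-server
    # and Services: edge-server respectively).
    "p4 server commit-server",
    "p4 server edge-server",
    # Point the edge at the commit server and give it a replication thread.
    "p4 configure set edge-server#P4TARGET=commit-host:1666",
    "p4 configure set edge-server#startup.1=pull -i 1",
    # Seed the edge from a checkpoint of the commit server, write its
    # server id, and start p4d on the edge machine.
    "p4d -r /p4/edge/root -jr commit.ckp",
    "p4d -r /p4/edge/root -xD edge-server",
    "p4d -r /p4/edge/root -p 1667 -d",
]

for step in EDGE_SETUP_STEPS:
    print(step)
```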

How to swap a server in and Out of cluster during runtime

I am implementing session replication in my application. This is an old application.
I made all the changes and now need to test the server switch and confirm that the objects in the session are properly carried over to the other server in the server list.
I have 1 Admin Server and 2 managed servers, so the cluster is made up of the 2 managed servers.
While testing, I always have to bounce a server and test the flow of my application. This process is very time-consuming, so I am looking for another way to swap a server in and out of the cluster
during runtime. I asked on the Oracle support website, but they said the only way is to bounce the server.
How can I write a script for this?
Is there a parameter in WebLogic or the wlproxy plugin config file that helps with this switch?
Your help is appreciated.
Using the WebLogic Scripting Tool (WLST) in script mode, you can write a script to automate the shutdown/startup of the managed server that you would like to remove temporarily from the cluster.
You create a file with a .py extension which contains the WebLogic commands that you would like to run.
shutdown.py:
connect('username','password','t3://adminIP:port')
shutdown('servername')
disconnect()
startup.py:
connect('username','password','t3://adminIP:port')
start('servername')
disconnect()
To run the script from the command line:
java weblogic.WLST c:\myscripts\shutdown.py
You can put this line in a shell/batch script.
Another way is to write a Java program or an Ant script to invoke the commands using the weblogic.jar file that comes with WebLogic.
If you change the state of a WebLogic managed server from RUNNING to ADMIN mode, you can also test session replication that way.
You can do this from the admin console by selecting the managed server, going to the Control tab, and changing the state of the server to Admin. You can change it back to Running from the same place.
Using WLST, you can use the suspend and resume commands:
http://docs.oracle.com/cd/E11035_01/wls100/server_start/server_life.html
http://docs.oracle.com/cd/E14571_01/web.1111/e13813/quick_ref.htm
Suspending and resuming managed servers is quicker than shutting them down and restarting them.
I have tested this at my end and it works fine, i.e. when I change the state to ADMIN, my request goes to the other managed server and the session is also replicated.
I used the sample WLS cluster replication example available in the WLS installation.
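Following the same pattern as the shutdown/startup scripts above, suspend and resume can be scripted too (same placeholder credentials and names; suspend() and resume() are standard WLST life-cycle commands):
suspend.py:
connect('username','password','t3://adminIP:port')
suspend('servername')
disconnect()
resume.py:
connect('username','password','t3://adminIP:port')
resume('servername')
disconnect()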

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and the applications that use them. My applications uniformly show a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected; I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This applies to versions up through 12c and to later releases.
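For placement, the element above nests inside the ejb section of the descriptor (a sketch based on the WebLogic deployment-descriptor documentation referenced above):
<weblogic-application>
  <ejb>
    <start-mdbs-with-application>false</start-mdbs-with-application>
  </ejb>
</weblogic-application>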
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) JNDI lookups throw errors and the apps are genuinely in a Warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.