I have created a task using Automation Anywhere to run automatically at specified times, and the schedule kicks off fine until I log on to the machine remotely using VPN access. Once I start logging on to the machine over the VPN, my automatic schedules stop working. What could be the cause of this issue, and how do I resolve it? The machine currently runs Windows 7 Enterprise.
Kind Regards,
Reuben Kekana
Given your information, the first thing that comes to mind:
When you log into the environment hosting the bot, you essentially 'steal' the connection from the AA Control Room. When you then disconnect from the environment, neither you nor the Control Room has an active session to it. In effect the environment 'logs off' and thus no longer runs any scheduled tasks.
You would need to go to the Control Room and re-establish this connection.
I have a .NET process that collects a bunch of rates each morning before I get in -- this does a few things (sketched in code after the list):
Checks if bbcomm.exe is running
Starts bbcomm.exe if it's not running
Creates a Session on localhost:8194 then connects to the //blp/refdata service
Pulls some historical and reference data points
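For what it's worth, the logic boils down to roughly the following sketch (assuming the Bloomberglp.Blpapi .NET assembly; the bbcomm path and the example security/field are stand-ins for my actual values):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using Bloomberglp.Blpapi;

class RateCollector
{
    static void Main()
    {
        // 1-2. Check whether bbcomm.exe is running; start it if not.
        if (!Process.GetProcessesByName("bbcomm").Any())
            Process.Start(@"C:\blp\API\bbcomm.exe");   // install path is an assumption

        // 3. Create a Session on localhost:8194 and open //blp/refdata.
        var session = new Session(new SessionOptions { ServerHost = "localhost", ServerPort = 8194 });
        if (!session.Start() || !session.OpenService("//blp/refdata"))
            throw new InvalidOperationException("Could not reach Bloomberg");

        // 4. Pull a reference data point (security and field are placeholders).
        Request request = session.GetService("//blp/refdata").CreateRequest("ReferenceDataRequest");
        request.Append("securities", "USDJPY Curncy");
        request.Append("fields", "PX_LAST");
        session.SendRequest(request, new CorrelationID(1));

        // Drain the event queue until the final response arrives.
        while (true)
        {
            Event ev = session.NextEvent();
            foreach (Message msg in ev)
                Console.WriteLine(msg);
            if (ev.Type == Event.EventType.RESPONSE) break;
        }
    }
}
```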
Works great, except when it doesn't.
Sometimes the connection to Bloomberg persists for several days after I log off Bloomberg Desktop, even after a reboot.
Sometimes it doesn't, and it won't collect rates until I log on to Bloomberg, which starts the connection up right away.
Does BBComm cache my credentials? I see in bbcomm.log that it initiates the connection with an SSL certificate -- is this specific to my PC? Is it generated when I log on to Bloomberg, and does it expire at some interval?
How does it work aside from magic?
I need to profile an application that runs on a remote machine where a GUI is not allowed. I started remote session profiling with JProfiler 8 by running the bin/jpenable agent on the remote host. After the analysis completed successfully, I need to stop that remote jpenable JProfiler 8 agent. How can I do that?
To check whether the previously started agent is still running, I ran the bin/jpenable agent again. Now I don't see the previously bound JVM, so I assume it is still bound to the previous agent.
Unfortunately, it is not possible to unload a JVMTI profiling agent. The JVM only unloads agents when it shuts down, so the only way to get rid of the agent is to restart the profiled JVM.
I would really appreciate another perspective on an issue we have been experiencing.
The environment:
We have a small subset of VMs (5 Windows Server 2008 R2 VMs) hosted on a Windows Server 2012 cluster of 8 physical hosts, which supports hundreds of VMs across various OS versions (2008/2012 etc).
The issue:
Servers within the subset of VMs experience widespread network SERVICE failures. The failure presents itself as a loss of connectivity for a large number of network-related services operating on the VMs (including certain critical network-dependent applications).
The impacts:
Server remains online.
Inability to RDP to the servers via Domain Accounts (Local accounts are fine).
Windows event logs associated with Netlogon failure: Event ID 5719 - This computer was not able to set up a secure session with a domain controller in domain DOWNERGROUP due to the following: The RPC server is unavailable. This may lead to authentication problems.
Windows event logs associated with Group Policy failure: Event ID 1054 - The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name System (DNS) is configured and working correctly.
Widespread agent failure (AV, monitoring, application) - lack of connectivity to centralised management servers.
The resolution(s): stopping an agent service. Strangely, it's not limited to a specific agent: if we stop agent A, the server comes back to life, but equally, if we instead stop agent B, the server comes back to life with agent A still running. Restarting the VM also resolves the issue.
Note that these events do not appear on other VMs hosted on the same host at the time of the outage. Also note that the guest is located on the same host prior to, during and after the outage.
We have investigated the suspicion that there may be issues with dynamic range port allocation, with the server possibly getting into a bottleneck state. We have implemented the "MaxUserPort" and "TcpTimedWaitDelay" registry parameters and set them to 65k and 30 respectively (see the commands below).
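For reference, those two values live under the Tcpip Parameters key and can be applied along these lines (they traditionally require a reboot to take effect):

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```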
Also note that when an outage occurs, it does not always affect the same VMs in the group. Often it is 2, 3, 4 or all of the servers.
I'm really just asking if anyone recognises these symptoms and can relate them to possible causes for our situation.
Any help/discussion would be appreciated.
Well, this turned out to be an interesting resolution.
We discovered that one of our server agents, while not actually showing open ports in netstat, had over 40,000 handles, growing linearly over time.
We had to enable the "Handles" column in Task Manager to be able to see this info.
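If you'd rather script the check than click through Task Manager, a small sketch along these lines (ours was ad hoc, so treat it as illustrative) lists the top handle consumers via System.Diagnostics:

```csharp
// Illustrative: print the ten processes holding the most handles,
// so a linearly growing leak like the one above stands out.
using System;
using System.Diagnostics;
using System.Linq;

class TopHandles
{
    static void Main()
    {
        foreach (var p in Process.GetProcesses()
                                 .OrderByDescending(proc => proc.HandleCount)
                                 .Take(10))
        {
            Console.WriteLine($"{p.ProcessName,-30} PID {p.Id,6}  handles {p.HandleCount:N0}");
        }
    }
}
```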
This was the miracle post...
http://blogs.technet.com/b/kimberj/archive/2012/07/06/sever-quot-hangs-quot-and-ephemeral-port-exhaustion-issues.aspx
Using Vagrant + Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1; then the agent VM is booted and requests data from the server, which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that is only available after the agent has spun up. But is there a way for one VM's provisioning to trigger another VM's? Once the agent is done, I'd like to somehow resume provisioning the server so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* Why do I want to edit the DB directly, you may be wondering? TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to edit the DB directly, because authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test whether the agent is installed and answering, you may add a ruby_block that loops over this test before continuing the recipe execution. This loop should have a sleep and a counter to avoid looping forever; a sketch follows.
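A minimal sketch of that ruby_block, assuming the agent can be probed over HTTP on a port you know (9090 here is made up):

```ruby
# Poll until the (assumed) TeamCity agent endpoint answers, with a
# retry counter and a sleep so the Chef run can't loop forever.
ruby_block 'wait_for_teamcity_agent' do
  block do
    require 'net/http'
    tries = 0
    begin
      Net::HTTP.get_response(URI('http://localhost:9090/'))
    rescue StandardError
      tries += 1
      raise 'TeamCity agent never answered' if tries >= 30
      sleep 10
      retry
    end
  end
end
```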
I've no knowledge of TeamCity, so I can't tell if it's the best way.
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case your best bet is to just set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next run on the server will be able to authorize it.
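If you go that route, the periodic runs are just a couple of lines in the node's client.rb (values here are illustrative):

```ruby
# /etc/chef/client.rb -- run chef-client as a daemon every 15 minutes
interval 900   # seconds between runs
splay    60    # random extra delay so nodes don't all hit the server at once
```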
If you really want to "trigger" the one from the other, you'd need to either do that externally with something like etcd or consul, or set up an SSH key pair between the boxes and add a ruby_block on the client that either does the database modification directly or calls chef-client on the server.
I have a fairly simple WCF role on Azure that I am trying to deploy two instances of. The role is fairly well unit tested, and I've been able to run it successfully in the local emulator with no readily apparent issues.
The role has a couple of startup tasks that run in the background as the role starts, namely the installation of a pair of Windows services that do data processing asynchronously in the background (declared roughly as shown below).
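For context, the tasks are declared in ServiceDefinition.csdef along these lines (the script name is a placeholder for my actual installer):

```xml
<Startup>
  <!-- Elevated background task: installs and starts the two Windows
       services while the role instance continues booting. -->
  <Task commandLine="InstallServices.cmd" executionContext="elevated" taskType="background" />
</Startup>
```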
When I deploy to Azure, the first role instance boots normally and quickly, with the Windows services successfully installed and running (I RDPed in to verify). The second instance hangs permanently in a "Waiting for Host" state. I've tried rebooting that role instance individually, and it doesn't appear to fix the problem.
I've also tried redeploying the entire package to Azure with the same results: the first role instance starts fine, the second hangs.
What can cause this problem? Where should I look to try to fix the issue?
So ultimately I resolved this issue by turning to Windows Azure support. It turns out I was consistently being deployed onto a "bad node", which I didn't think was possible given how the Fabric Controller works.
Nonetheless, if you run into this issue - Azure support is your best bet.