HubConnection::Start().Wait() times out if behind web proxy - signalr.client

Without a web proxy, Start().Wait() works fine. Connection trace:
11:31:15.0221694 - null - ChangeState(Disconnected, Connecting)
11:31:17.1749694 - 054a636a-10dc-4d39-a77b-709639ea4e5f - SSE: GET http://<removed>/signalr/connect?transport=serverSentEvents&connectionToken=TFuti92AamDL%2FsFNOE8LF1N6T10bDcosIqdkmHbLxYpPwNtW9szZNHHDkrLPR1mFa0Pu%2FUgqmU6fkA%2Fh6iuOY9tTMfjfwqwa%2F5vpZk%2B9iuESgPD5OFYZelTG%2FZn16USK&connectionData=[{"Name":"myHub"}]
11:31:17.9549694 - 054a636a-10dc-4d39-a77b-709639ea4e5f - ChangeState(Connecting, Connected)
Behind a web proxy, however, it either times out or, if TransportConnectTimeout is increased, returns only after a long time (4-5 minutes). Connection trace:
05:04:05.4397657 - null - ChangeState(Disconnected, Connecting)
05:04:06.1727657 - 7d8ed176-4ca7-461b-97bb-d32b2e71d950 - SSE: GET http://<removed>/signalr/connect?transport=serverSentEvents&connectionToken=Q0FllYmOPNl0%2BQqI643N%2Bzed2zuNAEvLywMLnqkPV4H6%2BPMaiwlrEYGJsNBvrG8QMWdnEJh%2B0qf5UBDj1rpp9JNktaISXa4vhwpK6KnUo32R6d4vBEgunh9Ju%2FRZTm%2Bu&connectionData=[{"Name":"myHub"}]
05:04:11.1737657 - 7d8ed176-4ca7-461b-97bb-d32b2e71d950 - Auto: Failed to connect to using transport serverSentEvents. System.TimeoutException: Transport timed out trying to connect
05:04:11.1837657 - 7d8ed176-4ca7-461b-97bb-d32b2e71d950 - LP Connect: http://<removed>/signalr/connect
05:04:11.8147657 - 7d8ed176-4ca7-461b-97bb-d32b2e71d950 - ChangeState(Connecting, Connected)
05:04:11.8217657 - 7d8ed176-4ca7-461b-97bb-d32b2e71d950 - LP Poll: http://<removed>/signalr/poll
So, behind the web proxy, SignalR fails to connect with the SSE transport, falls back to long polling, and connects in about 5 seconds, yet Start().Wait() still does not return.
How can I get this working behind a web proxy? I am using SignalR version 2.0.1.

Here's a workaround: instead of waiting on Start(), handle the HubConnection.StateChanged event. That event fires on time even behind the proxy; see the sketch below.
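A minimal sketch of that approach (the URL is a placeholder; the hub name comes from the trace above):

using System;
using Microsoft.AspNet.SignalR.Client;

class Program
{
    static void Main()
    {
        // "http://server-address/" is a placeholder for the real host.
        var connection = new HubConnection("http://server-address/");
        var hubProxy = connection.CreateHubProxy("myHub");

        // React to state changes instead of blocking on Start().Wait().
        connection.StateChanged += change =>
        {
            if (change.NewState == ConnectionState.Connected)
            {
                Console.WriteLine("Connected using " + connection.Transport.Name);
                // Safe to start invoking hub methods from here.
            }
        };

        connection.Start();   // do not block on this behind the proxy
        Console.ReadLine();
    }
}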

Related

Forward to external url on live probe failure

I currently forward traffic to an internal service:
labels:
  - traefik.http.routers.ocean.rule=Host(`ocean.xxx.ch`)
  - traefik.http.routers.ocean.tls=true
  - traefik.http.routers.ocean.tls.certresolver=lets-encrypt
  - traefik.http.services.ocean.loadbalancer.server.port=3000
  - traefik.http.services.ocean.loadbalancer.healthcheck.path=/_actuator/probes/readiness
  - traefik.http.services.ocean.loadbalancer.healthcheck.interval=10s
If the service fails its health check, I would like the traffic to be forwarded to an external URL, whale.yyy.ch, until the primary service comes back online. Is that possible?

Why does DistributedCache SessionHandler throw connection issues?

I have a .NET Core app running on VMs in Azure, with Redis as the implementation for DistributedCache. This way user sessions are stored in Redis and can be shared across the web farm. We only use Redis for storing sessions. We are using Azure Cache for Redis with a normal instance, and both the VMs and Redis are in the same region.
Added in Startup:
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = configuration["RedisCache:ConnectionString"];
});
In the web app we are having intermittent problems with Redis closing connections. All calls to Redis go through session extension methods that use the async APIs, like the one below.
public static async Task<T> Get<T>(this ISession session, string key)
{
    // Make sure the session has been loaded from the distributed cache before reading.
    if (!session.IsAvailable)
        await session.LoadAsync();

    var value = session.GetString(key);
    return value == null ? default(T) : JsonConvert.DeserializeObject<T>(value);
}
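For context, a call site looks roughly like this (the type and key are illustrative):
// Hypothetical usage in a controller action.
var basket = await HttpContext.Session.Get<Basket>("basket");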
The errors we are seeing are:
StackExchange.Redis.RedisConnectionException: No connection is available to service this operation: EVAL; An existing connection was forcibly closed by the remote host.; IOCP: (Busy=0,Free=1000,Min=2,Max=1000), WORKER: (Busy=3,Free=32764,Min=512,Max=32767), Local-CPU: n/a
---> StackExchange.Redis.RedisConnectionException: SocketFailure on myredis.redis.cache.windows.net:6380/Interactive, Idle/Faulted, last: EVAL, origin: ReadFromPipe, outstanding: 1, last-read: 34s ago, last-write: 0s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.593.37019
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
And
StackExchange.Redis.RedisConnectionException: SocketFailure on myredis.redis.cache.windows.net:6380/Interactive, Idle/Faulted, last: EXPIRE, origin: ReadFromPipe, outstanding: 1, last-read: 0s ago, last-write: 0s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 9 of 10 available, in: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.593.37019
---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
We are not experiencing traffic spikes during the timeouts and the Redis instance is not under any heavy load.
I have no idea how to troubleshoot this further. Any ideas?
The connection might be closed by the Redis server because it has been idle for too long.
In your Azure Cache control panel you can find the Redis server configuration; see if there is a timeout setting.
If you can issue commands through the command line, you can also run
CONFIG get timeout
If it returns zero, there is no server-side idle timeout.
In that case the issue is with your Redis client. I'm not familiar with .NET, but whatever client you're using to connect to the Redis server, check its timeout options, or search for "(name of the client) timeout" and see if you can find any useful information.
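For StackExchange.Redis, which AddStackExchangeRedisCache uses under the hood, the timeout-related knobs can be set on the connection options. A minimal sketch reusing the connection string key from the question; the values are illustrative, not recommendations:

// Requires: using StackExchange.Redis;
services.AddStackExchangeRedisCache(options =>
{
    options.ConfigurationOptions = ConfigurationOptions.Parse(
        configuration["RedisCache:ConnectionString"]);

    // Ping more often than the server-side idle timeout so the socket is not dropped,
    // and keep retrying instead of failing hard when the connection is lost.
    options.ConfigurationOptions.KeepAlive = 30;          // seconds between keep-alive pings
    options.ConfigurationOptions.ConnectTimeout = 10000;  // milliseconds
    options.ConfigurationOptions.AbortOnConnectFail = false;
});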

Dcm4chee Connection to ldap://ldap:389 broken - reconnect error

I'm using the dcm4chee Docker stack with LDAP and PostgreSQL and keep getting an intermittent error:
ldap:389; socket closed; remaining name 'cn=Devices,cn=DICOM Configuration,dc=mdw,dc=io'
2018-07-13 06:30:42,089 INFO [org.dcm4che3.conf.ldap.ReconnectDirContext] (Thread-0 (ActiveMQ-client-global-threads)) Connection to ldap://ldap:389 broken - reconnect
All three services are running on the same host. What can I do to avoid that error?
Disable the firewall; if the firewall has to stay enabled, add an exception for all three services.

WildFly Swarm apps using an external ActiveMQ broker

I'm having a very hard time getting two WildFly Swarm apps (based on version 2017.9.5) to communicate with each other over a standalone ActiveMQ 5.14.3 broker. Everything is done with YAML config, as I can't have a main method in my case.
After reading hundreds of outdated examples and inaccurate documentation pages, I settled on the following settings for both the producer and consumer apps:
swarm:
  messaging-activemq:
    servers:
      default:
        jms-topics:
          domain-events: {}
  messaging:
    remote:
      name: remote-mq
      host: localhost
      port: 61616
      jndi-name: java:/jms/remote-mq
      remote: true
At least part of the configuration seems correct, as the apps start, except for the following warning:
2017-09-16 14:20:04,385 WARN [org.jboss.activemq.artemis.wildfly.integration.recovery] (MSC service thread 1-2) AMQ122018: Could not start recovery discovery on XARecoveryConfig [transportConfiguration=[TransportConfiguration(name=, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&localAddress=::&host=localhost], discoveryConfiguration=null, username=null, password=****, JNDI_NAME=java:/jms/remote-mq], we will retry every recovery scan until the server is available
Also, when the producer tries to send messages it just times out, and I get the following exception (just the last part):
Caused by: javax.jms.JMSException: Failed to create session factory
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:727)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:304)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createXAConnection(ActiveMQConnectionFactory.java:300)
at org.apache.activemq.artemis.ra.ActiveMQRAManagedConnection.setup(ActiveMQRAManagedConnection.java:785)
... 127 more
Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119013: Timed out waiting to receive cluster topology. Group:null]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:797)
at org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory.createConnectionInternal(ActiveMQConnectionFactory.java:724)
... 130 more
I suspect the problem is that ActiveMQ has security turned on, but I found no place in the Swarm config to provide a username and password.
The ActiveMQ instance is running in Docker with the following compose file:
version: '2'
services:
  activemq:
    image: webcenter/activemq
    environment:
      - ACTIVEMQ_NAME=amqp-srv1
      - ACTIVEMQ_REMOVE_DEFAULT_ACCOUNT=true
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=your_password
      - ACTIVEMQ_WRITE_LOGIN=producer_login
      - ACTIVEMQ_WRITE_PASSWORD=producer_password
      - ACTIVEMQ_READ_LOGIN=consumer_login
      - ACTIVEMQ_READ_PASSWORD=consumer_password
      - ACTIVEMQ_JMX_LOGIN=jmx_login
      - ACTIVEMQ_JMX_PASSWORD=jmx_password
      - ACTIVEMQ_MIN_MEMORY=1024
      - ACTIVEMQ_MAX_MEMORY=4096
      - ACTIVEMQ_ENABLED_SCHEDULER=true
    ports:
      - "1883:1883"
      - "5672:5672"
      - "8161:8161"
      - "61616:61616"
      - "61613:61613"
      - "61614:61614"
Any idea what's going wrong?
I had a hard time getting this working too. The following YAML solved my problem:
swarm:
  network:
    socket-binding-groups:
      standard-sockets:
        outbound-socket-bindings:
          myapp-socket-binding:
            remote-host: localhost
            remote-port: 61616
  messaging-activemq:
    servers:
      default:
        remote-connectors:
          myapp-connector:
            socket-binding: myapp-socket-binding
        pooled-connection-factories:
          myAppRemote:
            user: username
            password: password
            connectors:
              - myapp-connector
            entries:
              - 'java:/jms/remote-mq'

ActiveMQ Master/Slave on Weblogic - vm transport issue

I am trying to configure an ActiveMQ master/slave setup on a single WebLogic machine. When I start Managed Server1 it successfully connects to the vm transport and everything works perfectly, but when I start Managed Server2 I receive the following errors in the broker logs:
INFO 2016-September-27 10:08:00,227 ActiveMQEndpointWorker:124 - Connection attempt already in progress, ignoring connection exception
INFO 2016-September-27 10:08:01,161 TransportConnector:260 - Connector vm://localhost started
INFO 2016-September-27 10:08:30,228 TransportConnector:291 - Connector vm://localhost stopped
INFO 2016-September-27 10:08:30,229 TransportConnector:260 - Connector vm://localhost started
WARN 2016-September-27 10:08:30,228 ActiveMQManagedConnection:385 - Connection failed: javax.jms.JMSException: peer (vm://localhost#61) stopped.
WARN 2016-September-27 10:08:30,231 TransportConnection:823 - Failed to add Connection ID:ndl-wls-300.mydomain.com-52251-1474966937425-65:1 due to java.lang.NullPointerException
ERROR 2016-September-27 10:08:30,233 ActiveMQEndpointWorker:183 - Failed to connect to broker [vm://localhost?create=false]: java.lang.NullPointerException
javax.jms.JMSException: java.lang.NullPointerException
Please help, I am stuck with this.
I still don't see the reason for running the slave within the same VM. I suggest you reach out to an ActiveMQ expert consultant to validate your architecture.
However, I think I can help you get a little closer to the issue.
There is a fundamental misunderstanding here: the vm URL breaks down like this:
vm://${brokerName}?option=value,etc
The first time you create vm://localhost?create=true, you have created a broker.
The second time you reference vm://localhost?create=false, you have created a client connection to the first broker.
To get two brokers, you'd need two different vm://${brokerName}?create=true URLs.
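For example (the broker names here are hypothetical), two independent embedded brokers would be addressed as
vm://brokerA?create=true
vm://brokerB?create=true
while a client of the first broker would connect with vm://brokerA?create=false.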