Cannot run Redis commands using StackExchange.Redis - asp.net-core

Hello, I am trying to connect to a Redis database from an ASP.NET Core 3.1 application, and I keep getting this error when I issue a command.
> 'No connection is active/available to service this operation: SET a; A
> blocking operation was interrupted by a call to
> WSACancelBlockingCall., mc: 1/1/0, mgr: 10 of 10 available,
> clientName: [ClientName], IOCP: (Busy=2,Free=998,Min=8,Max=1000),
> WORKER:
I think it has something to do with the StackExchange.Redis library, since it worked fine until it randomly stopped working. I have updated to the latest version, restarted the PC, and so on, and nothing helped.
I can connect to my local Redis and issue commands with both redis-cli and telnet 127.0.0.1 6379, so that is why I think the culprit is the library.
ConnectionString
localhost:6379,ssl=True,allowAdmin=True,abortConnect=False,defaultDatabase=0
How I use it:
var con = ConnectionMultiplexer.Connect(connectionString); // passes
con.GetDatabase().StringSet("a", "a"); // throws

If you are just using it for localhost development purposes, you can try disabling SSL: localhost:6379,**ssl=false**,allowAdmin=True,abortConnect=False,defaultDatabase=0
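For local development, a minimal sketch along these lines should work. It assumes a plain redis-server listening on 127.0.0.1:6379 with no TLS (which matches the telnet test above); the key point is that with abortConnect=False, Connect() succeeds even when the TLS handshake fails, so the error only shows up on the first command.

using StackExchange.Redis;

// Parse the original options but turn TLS off, since a default local
// redis-server does not speak TLS on port 6379.
var options = ConfigurationOptions.Parse(
    "localhost:6379,allowAdmin=True,abortConnect=False,defaultDatabase=0");
options.Ssl = false;

var con = ConnectionMultiplexer.Connect(options);
con.GetDatabase().StringSet("a", "a"); // should now succeed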

Related

Kafka-s3-connect killed instantly after start

I want to connect AWS MSK (Kafka) with S3 using the Confluent connector on my EC2 server. I tried to configure everything as in the tutorials. When I run connect-standalone or connect-distributed, everything goes well at first and I don't get any errors in the logs, but right after the messages about the connection starting, my connector dies instantly without any further information. Has anybody had the same problem?
config/connect-standalone.properties
bootstrap.servers=msk-connection-string
plugin.path=/home/ubuntu/connectors/confluentinc-kafka-connect-s3
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets
connector.properties
connector.class=io.confluent.connect.s3.S3SinkConnector
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
flush.size=1
topics=SomeTopic
s3.bucket.name=bucket-name-here
s3.region=us-west-2
s3.part.size=5242880
aws.access.key.id=****
aws.secret.access.key=****
behavior.on.null.values=ignore
storage.class=io.confluent.connect.s3.storage.S3Storage
topics.dir=../topics
store.url=http://bucket-name.s3-website-Region.amazonaws.com
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
logs:
[2021-08-20 06:32:35,954] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-08-20 06:32:35,954] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser:120)
[2021-08-20 06:32:35,954] INFO Kafka startTimeMs: 1629441155953 (org.apache.kafka.common.utils.AppInfoParser:121)
Killed
Please help!
MSK requires a TLS connection.
After adding a few lines of SSL configuration to config/connect-standalone.properties:
producer.security.protocol=SSL
consumer.security.protocol=SSL
security.protocol=SSL
ssl.protocol=TLS
ssl.truststore.location=/your/path/to/truststore/kafka.client.truststore.jks
It starts working properly!
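For reference, the full config/connect-standalone.properties from the question with the SSL lines added would then look roughly like this (the bootstrap string, plugin path, and truststore path are the placeholders used above, not values I can verify):

bootstrap.servers=msk-connection-string
plugin.path=/home/ubuntu/connectors/confluentinc-kafka-connect-s3
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets
security.protocol=SSL
producer.security.protocol=SSL
consumer.security.protocol=SSL
ssl.protocol=TLS
ssl.truststore.location=/your/path/to/truststore/kafka.client.truststore.jks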

Error while running query on Impala with Superset

I'm trying to connect Impala to Superset. When I test the connection it prints "Seems OK!", and when I browse the databases in Impala with the SQL Editor on the left side, it shows all the databases without problems.
Preview of Databases/Tables
But when I write a query and click "Run Query", it gives the error: "Could not start SASL: b'Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Ticket expired)'"
Error running query
I'm running Superset with SSL in production mode (with Gunicorn), and Impala with SSL in a Kerberized Hadoop cluster. My Impala database config is:
Impala Config
And in the extras I put:
{
  "metadata_params": {},
  "engine_params": {
    "connect_args": {
      "port": 21050,
      "use_ssl": "True",
      "ca_cert": "path/to/my/ca_cert.pem",
      "auth_mechanism": "GSSAPI"
    }
  },
  "metadata_cache_timeout": {},
  "schemas_allowed_for_csv_upload": []
}
How can I solve this error? My Superset log only shows:
Triggering query_id: 65
INFO:superset.views.core:Triggering query_id: 65
Query 65: Running query on a Celery worker
INFO:superset.views.core:Query 65: Running query on a Celery worker
Versions: Superset 0.36.0, Impyla 0.16.2
I was able to fix this error with the following steps:
1 - Created a service user for the celery worker, created a Kerberos ticket for it, and set up a crontab entry to renew the ticket.
2 - Ran the celery worker as this service user instead of running it as root.
3 - Killed a celery worker that was running on another machine in my cluster.
4 - Restarted Impala and Superset.
I think this error occurred because some queries, instead of using the celery worker on my Superset machine, were routed to the celery worker on the other machine, which did not have a valid Kerberos ticket. I tracked this down because the celery worker log showed a failed connection to the worker on the other machine while a query was running.

ASP.NET Core SignalR websocket connection limit

I am load testing a SignalR (ASP.NET Core) application hosted on Windows Server 2016 Standard, using Microsoft.AspNetCore.SignalR.Client.
The .NET Core Hosting Bundle 2.1.1 is installed.
I cannot create more than roughly 3000 (2950-3050) connections.
I have already tried the recommendations described here:
How to configure concurrency in .NET Core Web API?
Limiting performance factors of WebSocket in ASP.NET 4.5?
Set limit concurrent connections for websocket on iis 8
Added limits to UseKestrel (this seems to work if I set the values to 100 or 1000):
var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.Limits.MaxConcurrentConnections = 50000;
        options.Limits.MaxConcurrentUpgradedConnections = 50000;
    })
Changed all aspnet.config files by adding this:
<system.web>
<applicationPool maxConcurrentRequestsPerCPU="50000" />
</system.web>
Executed these commands:
cd %windir%\System32\inetsrv
appcmd.exe set config /section:system.webserver/serverRuntime /appConcurrentRequestLimit:50000
I added performance counters for Web Service\Current Connections and Maximum Connections; Maximum Connections increases to 3300 and then stops.
There are no exceptions in the server logs, but it feels like some restriction in the system is being hit.
The IIS server logs contain only this:
GET /messageshub
id=A_3x1sH9kHM1Rc3oPSgP6w
80 - 172.20.192.11 - - 404 0 0 3
The client exceptions are basically the following:
System.Net.Http.HttpRequestException: Error while copying content to a
stream. ---> System.IO.IOException: Unable to read data from the
transport connection: An existing connection was forcibly closed by
the remote host.
On Windows you may have a dynamic port assignment issue.
By default, Windows has 5000 port numbers ready to be assigned to TCP connections, and 1024 of them are reserved for the OS itself, which leaves you with about 3977 ports free to be assigned.
In your case the number is 3300, as you mentioned, but it is possible that 3300 of the connections are ESTABLISHED and another 677 are in TIME_WAIT.
In any case, I recommend running
netstat -an | find /c "ESTABLISHED"
netstat -an | find /c "TIME_WAIT"
netstat -an | find /c "CLOSE_WAIT"
to figure out the number of ESTABLISHED, TIME_WAIT, and CLOSE_WAIT connections at the time you received the IO exception. If the total is close to 5000, add this to your registry, then reboot and test again:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
MaxUserPort = 65534 (default = 5000, max = 65534)
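If you would rather take that snapshot from code instead of netstat, a small C# sketch using System.Net.NetworkInformation (my own addition, not something the original answer includes) can group the machine's active TCP connections by state:

using System;
using System.Linq;
using System.Net.NetworkInformation;

class TcpStateCounter
{
    static void Main()
    {
        // Snapshot of the machine's active TCP connections, grouped by state
        // (roughly what the netstat | find /c commands above count).
        var byState = IPGlobalProperties.GetIPGlobalProperties()
            .GetActiveTcpConnections()
            .GroupBy(c => c.State)
            .OrderByDescending(g => g.Count());

        foreach (var group in byState)
            Console.WriteLine($"{group.Key}: {group.Count()}");
    }
}

If ESTABLISHED plus TIME_WAIT approaches the size of the ephemeral port range, port exhaustion is the likely limiter rather than the Kestrel or IIS settings.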

Dcm4chee Connection to ldap://ldap:389 broken - reconnect error

I'm using the dcm4chee Docker stack with LDAP and PostgreSQL, and I keep getting an intermittent error:
ldap:389; socket closed; remaining name 'cn=Devices,cn=DICOM Configuration,dc=mdw,dc=io'
2018-07-13 06:30:42,089 INFO [org.dcm4che3.conf.ldap.ReconnectDirContext] (Thread-0 (ActiveMQ-client-global-threads)) Connection to ldap://ldap:389 broken - reconnect
All three services are running on the same host. What can I do to avoid that error?
Disable the firewall. If the firewall has to stay enabled, add an exception for all three services.

Apache HTTPD Websocket Tunnel Plugin Error

My WebSocket connection intermittently fails when connecting through the Apache wstunnel proxy; it always works when hitting the app servers directly.
I see the errors below.
Error during WebSocket handshake: Invalid status line
WebSocket connection to 'ws://host' failed: One or more reserved bits are on: reserved1 = 1, reserved2 = 0, reserved3 = 0
and sometimes
WebSocket connection to 'ws://host' failed: Unrecognized frame opcode: 12
and at times
Error during WebSocket handshake: Status line does not end with CRLF ui-toolkit-vendor.js:21965
Infrastructure
Apache HTTPD 2.4.9 with mod_proxy_wstunnel and mod_proxy_balancer modules
The wstunnel module shipped with version 2.4.9 has several bugs that were later fixed in the 2.4.12 build. Please find an excerpt from the SVN log below.
Revision 1587075
Modified Sun Apr 13 18:41:05 2014 UTC by covener
File length: 20119 byte(s)
Diff to previous 1587057
several related mod_proxy_wstunnel changes that are tough to pull apart:
make async websockets tunnel opt-in
add config for how long we block a thread in asynch mode
add config for a cap on the synchronous path
avoid sending error responses down the upgraded tunnel