I have enabled SQL support for my cache, and I am running the query below on a large dataset.
I keep getting warning messages:
WARNING: Query execution is too long [duration=3625ms, type=MAP, distributedJoin=false, enforceJoinOrder=false, lazy=false,
LIMIT 200, node=TcpDiscoveryNode [id=e70cc42c-fae6-420d-87f1-8102e386b27a, consistentId=10.105.143.70, addrs=ArrayList [0:0:0:0:0:0:0:1, 10.105.143.70, 127.0.0.1], sockAddrs=HashSet [/10.105.143.70:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1638195967523, loc=true, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=false], reqId=3, segment=0]
Here is my query:
try (QueryCursor<List<?>> cur = cache2.query(new SqlFieldsQuery("select _key from table"))) {
    for (List<?> r : cur) {
        Long key = (Long) r.get(0);
    }
}
Sometimes it takes as long as 50 seconds to fetch as few as 200 records (LIMIT 200).
Is there any way I can tune this so the query finishes quickly?
By default, the SQL engine tries to load the whole result set into memory before returning it, which can be slow and memory-hungry on a large table; lazy mode streams rows in small batches instead. Try setting it on your SqlFieldsQuery:
try (QueryCursor<List<?>> cur = cache2.query(new SqlFieldsQuery("select _key from table").setLazy(true))) {
    for (List<?> r : cur) {
        Long key = (Long) r.get(0);
    }
}
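If lazy mode alone does not help, shrinking the cursor's page size is another knob worth trying, so the server sends rows to the client in smaller batches. The sketch below combines the two; the page size of 512 is just an illustrative value (the default is 1024):
SqlFieldsQuery qry = new SqlFieldsQuery("select _key from table").setLazy(true);
qry.setPageSize(512); // fetch rows from the server in pages of 512 instead of the default 1024

try (QueryCursor<List<?>> cur = cache2.query(qry)) {
    for (List<?> r : cur) {
        Long key = (Long) r.get(0);
    }
}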
I am running into a strange issue with the Generalized S3 source connector running on Confluent Platform. I am not able to pinpoint where exactly the error is or what the root cause is.
The only error I see in the SSH console is this (related to logging):
[2023-02-11 11:12:45,464] INFO [Worker clientId=connect-1, groupId=connect-cluster-1] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1709)
log4j:ERROR A "io.confluent.log4j.redactor.RedactorAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [PluginClassLoader{pluginLocation=file:/usr/share/java/source-2.5.1/}] whereas object of type
log4j:ERROR "io.confluent.log4j.redactor.RedactorAppender" was loaded by [jdk.internal.loader.ClassLoaders$AppClassLoader#251a69d7].
log4j:ERROR Could not instantiate appender named "redactor".
[2023-02-11 11:13:35,741] INFO Injecting Confluent license properties into connector '<unspecified>' (org.apache.kafka.connect.runtime.WorkerConfigDecorator:412)
[2023-02-11 11:13:44,001] INFO Injecting Confluent license properties into connector 'S3GenConnectorConnector_7' (org.apache.kafka.connect.runtime.WorkerConfigDecorator:412)
[2023-02-11 11:13:44,006] INFO S3SourceConnectorConfig values:
aws.access.key.id = <<ACCESS KEY HERE>>
aws.secret.access.key = [hidden]
behavior.on.error = fail
bucket.listing.max.objects.threshold = -1
confluent.license = [hidden]
confluent.topic = _confluent-command
confluent.topic.bootstrap.servers = [172.27.157.66:9092]
confluent.topic.replication.factor = 3
directory.delim = /
file.discovery.starting.timestamp = 0
filename.regex = (.+)\+(\d+)\+.+$
folders = []
format.bytearray.extension = .bin
format.bytearray.separator =
format.class = class io.confluent.connect.s3.format.string.StringFormat
format.json.schema.enable = false
mode = RESTORE_BACKUP
parse.error.topic.prefix = error
partition.field.name = []
partitioner.class = class io.confluent.connect.storage.partitioner.DefaultPartitioner
path.format =
record.batch.max.size = 200
s3.bucket.name = mytestbucketamtk
s3.credentials.provider.class = class com.amazonaws.auth.DefaultAWSCredentialsProviderChain
s3.http.send.expect.continue = true
s3.part.retries = 3
s3.path.style.access = true
s3.poll.interval.ms = 60000
s3.proxy.password = null
s3.proxy.url =
s3.proxy.username = null
s3.region = us-east-1
s3.retry.backoff.ms = 200
s3.sse.customer.key = null
s3.ssea.name =
s3.wan.mode = false
schema.cache.size = 50
store.url = null
task.batch.size = 10
topic.regex.list = [first_topic:.*]
topics.dir = topics
(io.confluent.connect.s3.source.S3SourceConnectorConfig:376)
[2023-02-11 11:13:44,029] INFO Using configured AWS access key credentials instead of configured credentials provider class. (io.confluent.connect.s3.source.S3Storage:500)
The connector config file is below:
{
  "name": "S3GenConnectorConnector_7",
  "config": {
    "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "mode": "RESTORE_BACKUP",
    "format.class": "io.confluent.connect.s3.format.string.StringFormat",
    "s3.bucket.name": "mytestbucketamtk",
    "s3.region": "us-east-1",
    "aws.access.key.id": <<ACCESS KEY HERE>>,
    "aws.secret.access.key": <<SECRET KEY>>,
    "topic.regex.list": "first_topic:.*"
  }
}
The tasks are not getting created, and there are no other errors in the Connect console. The Connect cluster is running on Confluent Platform. Any pointers in the right direction would be appreciated. Did I miss any required configuration?
I have an application using the Akka.NET cluster feature. The people who wrote the code have left the company.
I am trying to understand the code, and we are planning a deployment.
The cluster has 2 types of nodes:
QueueServicer: supports sharding; only these nodes should participate in sharding.
Lighthouse: these are just seed nodes, nothing else.
Lighthouse: 2 nodes
QueueServicer: 3 nodes
One of the QueueServicer nodes is unable to join the cluster: both Lighthouse nodes refuse its connection, and it constantly retries without ever succeeding. This has been happening for the last 5 days or so, and the node never dies either. Its CPU and memory usage is high, it has no queue processor actors running (based on a filtered search through the log), and garbage collection takes very long. In the log for this node I see the following:
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-1:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-1:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-1:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-1:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-0:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-0:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-0:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-0:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
There are other "Now supervising", "Stopping", and "Started" logs which I am omitting here.
Can you please verify whether the HOCON config is correct for the split brain resolver and sharding?
I think the Lighthouse/seed nodes should not have the sharding configuration specified; I believe that is a mistake.
I also think the split brain resolver configuration in the Lighthouse/seed nodes might be wrong and should not be specified for seed nodes.
I appreciate your help.
Here is the HOCON for QueueServicer (trimmed):
akka {
  loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
  log-config-on-start = on
  loglevel = "DEBUG"
  actor {
    provider = cluster
    serializers {
      hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
    }
    serialization-bindings {
      "System.Object" = hyperion
    }
  }
  remote {
    dot-netty.tcp {
      ….
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
    roles = ["QueueProcessor"]
    sharding {
      role = "QueueProcessor"
      state-store-mode = ddata
      remember-entities = true
      passivate-idle-entity-after = off
    }
    downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
    split-brain-resolver {
      active-strategy = keep-majority
      stable-after = 20s
      keep-majority {
        role = "QueueProcessor"
      }
    }
    down-removal-margin = 20s
  }
  extensions = ["Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools"]
}
Here is the HOCON for Lighthouse:
akka {
  loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
  log-config-on-start = on
  loglevel = "DEBUG"
  actor {
    provider = cluster
    serializers {
      hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
    }
    serialization-bindings {
      "System.Object" = hyperion
    }
  }
  remote {
    dot-netty.tcp {
      …
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
    roles = ["lighthouse"]
    sharding {
      role = "lighthouse"
      state-store-mode = ddata
      remember-entities = true
      passivate-idle-entity-after = off
    }
    downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
    split-brain-resolver {
      active-strategy = keep-oldest
      stable-after = 30s
      keep-oldest {
        down-if-alone = on
        role = "lighthouse"
      }
    }
  }
}
I meant to reply to this sooner.
Here is your problem: you're using two different split brain resolver configurations - one for the QueueServicer and one for Lighthouse. Therefore, how your cluster resolves itself is going to be quite different depending upon who is the leader of each half of the cluster.
I would stick with a simple keep-majority strategy and use it uniformly on all nodes throughout the cluster - we're very likely going to enable this by default in Akka.NET v1.5.
If you have any questions, please feel free to reach out to us: https://petabridge.com/
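For illustration, a uniform split brain resolver block along these lines (a sketch assembled from the settings already shown above, with the role restriction dropped so the majority is computed over the whole cluster) could be shared by every node:
cluster {
  downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
  split-brain-resolver {
    active-strategy = keep-majority
    stable-after = 20s
  }
}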
I want to configure my Zeppelin installation to authenticate against my AD via LDAP. I've configured the following in conf/shiro.ini:
ldapRealmExtern = org.apache.zeppelin.realm.LdapRealm
ldapRealmExtern.contextFactory.url = ldap://authentication.mycompany.com:389
ldapRealmExtern.contextFactory.systemUsername = CN=user,OU=XX_Func,OU=XX_Users,OU=XX_ACC,OU=XX,DC=xx,DC=FR
ldapRealmExtern.contextFactory.systemPassword = ******
ldapRealmExtern.contextFactory.authenticationMechanism = simple
ldapRealmExtern.authorizationEnabled = true
ldapRealmExtern.userSearchBase = dc=xx,dc=FR
#ldapRealmExtern.userSearchFilter = (&(cn={0})(objectclass=organizationalPerson))
ldapRealmExtern.userSearchFilter = cn={0}
ldapRealmExtern.userObjectClass = organizationalPerson
ldapRealmExtern.userSearchAttributeName = cn
ldapRealmExtern.groupObjectClass = group
ldapRealmExtern.memberAttribute = member
ldapRealmExtern.groupSearchBase = dc=xx,dc=FR
ldapRealmExtern.groupSearchFilter = member={0}
ldapRealmExtern.memberAttributeValueTemplate=cn={0},OU=XX_Intern,OU=XX_Users,OU=XX_ACC,OU=XX,DC=xx,DC=FR
When I start Zeppelin I can attempt a login, but the following exception is thrown:
WARN [2020-12-03 06:16:56,887] ({qtp1580893732-92} ModularRealmAuthenticator.java[doMultiRealmAuthentication]:224) - Realm [org.apache.zeppelin.realm.LdapRealm#33f9f341] threw an exception during a multi-realm authentication attempt:
java.lang.IllegalArgumentException: principal argument cannot be null.
at org.apache.shiro.subject.SimplePrincipalCollection.add(SimplePrincipalCollection.java:104)
at org.apache.shiro.subject.SimplePrincipalCollection.<init>(SimplePrincipalCollection.java:59)
at org.apache.shiro.authc.SimpleAuthenticationInfo.<init>(SimpleAuthenticationInfo.java:93)
at org.apache.zeppelin.realm.LdapRealm.createAuthenticationInfo(LdapRealm.java:985)
at org.apache.shiro.realm.ldap.DefaultLdapRealm.queryForAuthenticationInfo(DefaultLdapRealm.java:377)
at org.apache.zeppelin.realm.LdapRealm.queryForAuthenticationInfo(LdapRealm.java:268)
at org.apache.shiro.realm.ldap.DefaultLdapRealm.doGetAuthenticationInfo(DefaultLdapRealm.java:295)
at org.apache.zeppelin.realm.LdapRealm.doGetAuthenticationInfo(LdapRealm.java:217)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doMultiRealmAuthentication(ModularRealmAuthenticator.java:219)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:269)
at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:270)
at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:256)
at org.apache.shiro.web.filter.authc.AuthenticatingFilter.executeLogin(AuthenticatingFilter.java:53)
at org.apache.shiro.web.filter.authc.FormAuthenticationFilter.onAccessDenied(FormAuthenticationFilter.java:154)
at org.apache.shiro.web.filter.AccessControlFilter.onAccessDenied(AccessControlFilter.java:133)
at org.apache.shiro.web.filter.AccessControlFilter.onPreHandle(AccessControlFilter.java:162)
at org.apache.shiro.web.filter.PathMatchingFilter.isFilterChainContinued(PathMatchingFilter.java:203)
at org.apache.shiro.web.filter.PathMatchingFilter.preHandle(PathMatchingFilter.java:178)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:131)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:305)
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.lang.Thread.run(Thread.java:748)
I also get a log entry which says that I don't have any roles:
WARN [2020-12-03 06:16:56,947] ({qtp1580893732-92} LoginRestApi.java[postLogin]:206) - {"status":"OK","message":"","body":{"principal":"myuser","ticket":"cb575d5e-a170-4e5f-9160-8350b3853943","roles":"[]"}}
Do you have any idea what is wrong in this configuration? How can I get the groups from my AD?
Thanks
One solution was to upgrade Apache Zeppelin to 0.9.0-preview2. Then the login against Active Directory worked again.
We also faced the same error. We are still getting the exception, but we managed to get roles populated for the user, and authorization is working. We changed many properties, but the property that makes the difference is:
ldapRealm.groupSearchEnableMatchingRuleInChain = true
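Note that the realm in the question above is registered as ldapRealmExtern rather than ldapRealm, so the property prefix has to match the realm name; assuming that naming, the line would read:
ldapRealmExtern.groupSearchEnableMatchingRuleInChain = true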
After recent investigation and a Stack Overflow question, I realize that cluster sharding is a better option than a cluster-consistent-hash-router. But I am having trouble getting a 2-process cluster going.
One process is the Seed and the other is the Client. The Seed node seems to continuously throw dead letter messages (see the end of this question).
This Seed HOCON follows:
akka {
  loglevel = "INFO"
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    serializers {
      wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
    }
    serialization-bindings {
      "System.Object" = wire
    }
  }
  remote {
    dot-netty.tcp {
      hostname = "127.0.0.1"
      port = 5000
    }
  }
  persistence {
    journal {
      plugin = "akka.persistence.journal.sql-server"
      sql-server {
        class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
        schema-name = dbo
        auto-initialize = on
        connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-timeout = 30s
        table-name = EventJournal
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = Metadata
      }
    }
    sharding {
      connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
      auto-initialize = on
      plugin-dispatcher = "akka.actor.default-dispatcher"
      class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
      connection-timeout = 30s
      schema-name = dbo
      table-name = ShardingJournal
      timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
      metadata-table-name = ShardingMetadata
    }
  }
  snapshot-store {
    sharding {
      class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
      plugin-dispatcher = "akka.actor.default-dispatcher"
      connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
      connection-timeout = 30s
      schema-name = dbo
      table-name = ShardingSnapshotStore
      auto-initialize = on
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://my-cluster-system@127.0.0.1:5000"]
    roles = ["Seed"]
    sharding {
      journal-plugin-id = "akka.persistence.sharding"
      snapshot-plugin-id = "akka.snapshot-store.sharding"
    }
  }
}
I have a method that essentially turns the above into a Config like so:
var config = NodeConfig.Create(/* HOCON above */).WithFallback(ClusterSingletonManager.DefaultConfig());
Without the "WithFallback" I get a null reference exception out of the config generation.
It then generates the system like so:
var system = ActorSystem.Create("my-cluster-system", config);
The client creates its system in the same manner and the HOCON is almost identical aside from:
{
  remote {
    dot-netty.tcp {
      hostname = "127.0.0.1"
      port = 5001
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://my-cluster-system@127.0.0.1:5000"]
    roles = ["Client"]
    role.["Seed"].min-nr-of-members = 1
    sharding {
      journal-plugin-id = "akka.persistence.sharding"
      snapshot-plugin-id = "akka.snapshot-store.sharding"
    }
  }
}
The Seed node creates the sharding like so:
ClusterSharding.Get(system).Start(
    typeName: "company-router",
    entityProps: Props.Create(() => new CompanyDeliveryActor()),
    settings: ClusterShardingSettings.Create(system),
    messageExtractor: new RouteExtractor(100)
);
And the client creates a sharding proxy like so:
ClusterSharding.Get(system).StartProxy(
    typeName: "company-router",
    role: "Seed",
    messageExtractor: new RouteExtractor(100));
The RouteExtractor is:
public class RouteExtractor : HashCodeMessageExtractor
{
    public RouteExtractor(int maxNumberOfShards) : base(maxNumberOfShards)
    {
    }

    public override string EntityId(object message) => (message as IHasRouting)?.Company?.VolumeId.ToString();

    public override object EntityMessage(object message) => message;
}
In this scenario the VolumeId is always the same (just for the experiment's sake).
Both processes come to life but the Seed keeps throwing this error to the log:
[INFO][7/05/2017 9:00:58 AM][Thread 0003][akka://my-cluster-system/user/sharding/company-routerCoordinator/singleton/coordinator] Message Register from akka.tcp://my-cluster-system@127.0.0.1:5000/user/sharding/company-router to akka://my-cluster-system/user/sharding/company-routerCoordinator/singleton/coordinator was not delivered. 4 dead letters encountered.
P.S. I am not using Lighthouse.
From a quick look, you're starting a cluster sharding proxy on your client node and telling it that the sharded nodes are those with the Seed role. This doesn't match the cluster sharding definition on the seed node, where you haven't specified any role.
Since there is no role to limit it, cluster sharding on the seed node will treat all nodes in the cluster as perfectly capable of hosting sharded actors - including the client node, which doesn't have (non-proxy) cluster sharding instantiated on it.
This may not be the only issue, but you could either host cluster sharding on all of your nodes, or use ClusterShardingSettings.Create(system).WithRole("seed") to limit your shards to a specific subset of nodes (those having the seed role) in the cluster.
Thanks Horusiath, that fixed it:
return sharding.Start(
    typeName: "company-router",
    entityProps: Props.Create(() => new CompanyDeliveryActor()),
    settings: ClusterShardingSettings.Create(system).WithRole("Seed"),
    messageExtractor: new RouteExtractor(100)
);
The clustered shard is now communicating between the 2 processes. Thanks very much for that bit.
We have a typical web service which serves JSON data read from a remote database. I was trying out returning Result and AsyncResult, each with the following configuration:
play {
  akka {
    event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 1.0
          parallelism-max = 1
        }
      }
    }
  }
}
and one with
parallelism-factor = 1.0
parallelism-max = 5
The following observations give the time taken to complete 500 requests (average of 5 readings):
1. parallelism-max=1 and parallelism-factor=1.0
   Result: completion time = 291662 ms
   AsyncResult: completion time = 55601 ms
2. parallelism-max=5 and parallelism-factor=1.0
   Result: completion time = 46419 ms
   AsyncResult: completion time = 46977 ms
We can see that with parallelism-max=1, AsyncResult clearly takes much less time than Result. However, with parallelism-max=5, Result and AsyncResult give very similar timings.
Shouldn't the time required decrease as the number of threads increases for AsyncResult as well?
I would appreciate help understanding the reasons behind this observation.