I'm having a problem when I deploy a feature. The feature contains three bundles, and Karaf deploys these bundles without trouble, but once they are deployed ActiveMQ starts having problems.
The deployed bundles are simple. The "complicated" one is a Camel route that exposes a CXF endpoint and calls a mock endpoint. I have attached the .kar, the zip of that kar, and my Fuse log to this thread. The service is running, but the problem with ActiveMQ always happens.
The error is always the same:
2013-05-14 15:19:48,046 | INFO | veMQ Broker: amq | ActiveMQServiceFactory$$anon$1 | ? ? | 106 - org.springframework.context - 3.1.3.RELEASE | Refreshing org.fusesource.mq.fabric.ActiveMQServiceFactory$$anon$1@33c91e: startup date [Tue May 14 15:19:48 ART 2013]; root of context hierarchy
2013-05-14 15:19:48,048 | INFO | veMQ Broker: amq | XBeanXmlBeanDefinitionReader | ? ? | 105 - org.springframework.beans - 3.1.3.RELEASE | Loading XML bean definitions from file [/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/etc/activemq.xml]
2013-05-14 15:19:48,095 | INFO | veMQ Broker: amq | DefaultListableBeanFactory | ? ? | 105 - org.springframework.beans - 3.1.3.RELEASE | Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1885c3a: defining beans [org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,org.apache.activemq.xbean.XBeanBrokerService#0]; root of factory hierarchy
2013-05-14 15:19:48,159 | INFO | veMQ Broker: amq | PListStoreImpl | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | PListStore:[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/amq/tmp_storage] started
2013-05-14 15:19:48,163 | ERROR | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Failed to start Apache ActiveMQ (amq, null). Reason: javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)[:1.6.0_30]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)[:1.6.0_30]
at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:380)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:72)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.startManagementContext(BrokerService.java:2337)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:543)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ClusteredConfiguration$$anon$3.run(ActiveMQServiceFactory.scala:307)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
2013-05-14 15:19:48,164 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) is shutting down
2013-05-14 15:19:48,168 | INFO | veMQ Broker: amq | TransportConnector | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Connector openwire Stopped
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | PListStoreImpl | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | PListStore:[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/amq/tmp_storage] stopped
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopping async queue tasks
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopping async topic tasks
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopped KahaDB
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) uptime 0.010 seconds
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) is shutdown
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | ActiveMQServiceFactory | ? ? | 128 - org.jboss.amq.mq-fabric - 6.0.0.redhat-024 | Broker amq failed to start. Will try again in 10 seconds
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | ActiveMQServiceFactory | ? ? | 128 - org.jboss.amq.mq-fabric - 6.0.0.redhat-024 | Exception on start: javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)[:1.6.0_30]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)[:1.6.0_30]
at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:380)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:72)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.startManagementContext(BrokerService.java:2337)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:543)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ClusteredConfiguration$$anon$3.run(ActiveMQServiceFactory.scala:307)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
Dropbox URL to the Fuse log: https://dl.dropboxusercontent.com/u/225304/fuse.log
Dropbox URL to the .kar file: https://dl.dropboxusercontent.com/u/225304/PruebaFeature-1.0-SNAPSHOT.kar
For this example I used a clean Fuse installation. Any ideas on what is happening? I don't know if the problem is the ActiveMQ configuration or something else.
This is what I receive when I list the broker in Karaf:
JBossFuse:karaf@root> activemq:query --jmxlocal
Name = KahaDBPersistenceAdapter[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/kahadb]
brokerName = amq
Transactions = []
Size = 13411
InstanceName = KahaDBPersistenceAdapter[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/kahadb]
Data = [1]
type = Broker
Service = PersistenceAdapter
brokerName = amq
service = Health
CurrentStatus = Good
type = Broker
brokerName = amq
connector = clientConnectors
type = Broker
StatisticsEnabled = true
connectorName = openwire
destinationName = ActiveMQ.Advisory.MasterBroker
MemoryUsageByteCount = 0
DequeueCount = 0
type = Broker
destinationType = Topic
Name = ActiveMQ.Advisory.MasterBroker
MinEnqueueTime = 0
MaxAuditDepth = 2048
AverageEnqueueTime = 0.0
InFlightCount = 0
MemoryLimit = 67108864
brokerName = amq
EnqueueCount = 1
MaxEnqueueTime = 0
MemoryUsagePortion = 1.0
ProducerCount = 0
UseCache = true
BlockedProducerWarningInterval = 30000
AlwaysRetroactive = false
Options =
MaxProducersToAudit = 64
PrioritizedMessages = false
ConsumerCount = 0
ProducerFlowControl = true
Subscriptions = []
QueueSize = 0
MaxPageSize = 200
DispatchCount = 0
MemoryPercentUsage = 0
ExpiredCount = 0
TopicSubscribers = []
TemporaryQueues = []
Uptime = 1 minute
TemporaryTopicSubscribers = []
MemoryPercentUsage = 0
BrokerVersion = 5.8.0.redhat-60024
StatisticsEnabled = true
TotalDequeueCount = 0
TopicProducers = []
QueueSubscribers = []
Topics = [org.apache.activemq:type=Broker,brokerName=amq,destinationType=Topic,destinationName=ActiveMQ.Advisory.MasterBroker]
TotalMessageCount = 0
SslURL =
TemporaryQueueSubscribers = []
BrokerName = amq
DynamicDestinationProducers = []
Persistent = true
DataDirectory = /home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq
Queues = []
DurableTopicSubscribers = []
TotalConsumerCount = 0
InactiveDurableTopicSubscribers = []
JobSchedulerStoreLimit = 0
TempPercentUsage = 0
MemoryLimit = 67108864
VMURL = vm://amq
OpenWireURL = tcp://fluxit-ntb-43:61616?maximumConnections=1000
JobSchedulerStorePercentUsage = 0
TotalEnqueueCount = 1
TemporaryQueueProducers = []
StompSslURL =
TemporaryTopics = []
StompURL =
Slave = false
BrokerId = ID:fluxit-ntb-43-58596-1368558172573-0:1
TotalProducerCount = 0
StorePercentUsage = 0
brokerName = amq
StoreLimit = 107374182400
TransportConnectors = {openwire=tcp://fluxit-ntb-43:61616?maximumConnections=1000}
TemporaryTopicProducers = []
TempLimit = 53687091200
QueueProducers = []
type = Broker
The features.xml in your kar is incorrect, which causes this error. It has some bundles like
<bundle>mvn:org.apache.felix/org.apache.felix.configadmin/1.2.4</bundle>
<bundle>mvn:org.apache.aries/org.apache.aries.util/1.0.0</bundle>
<bundle>mvn:org.apache.aries.proxy/org.apache.aries.proxy.api/1.0.0</bundle>
<bundle>mvn:org.apache.aries.blueprint/org.apache.aries.blueprint/1.0.1.redhat-60024</bundle>
Those bundles are fundamental to the container and are already installed by the container by default.
They shouldn't be in your features.xml; or, if they are there, you should set resolver="(obr)" on the feature and dependency="true" on those bundles so that the OBR resolver can kick in and avoid installing redundant bundles.
Moreover, the
<bundle>mvn:org.apache.aries.blueprint/org.apache.aries.blueprint/1.0.1.redhat-60024</bundle>
is invalid for aries.blueprint 1.0.x. It should be
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1.redhat-60024</bundle>
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.0.1.redhat-60024</bundle>
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.0.1.redhat-60024</bundle>
instead. Otherwise you will see errors like
ERROR: Bundle org.apache.aries.blueprint [251] EventDispatcher: Error during dispatch. (java.lang.ClassCastException: org.apache.aries.blueprint.ext.impl.ExtNamespaceHandler cannot be cast to org.apache.aries.blueprint.NamespaceHandler)
java.lang.ClassCastException: org.apache.aries.blueprint.ext.impl.ExtNamespaceHandler cannot be cast to org.apache.aries.blueprint.NamespaceHandler
This means you have two conflicting aries.blueprint bundles installed in your container, which messes up almost everything.
In summary, changing the features.xml in your kar to something like
<?xml version="1.0" encoding="UTF-8"?>
<features>
  <feature name='tosMock' version='1.0.0-SNAPSHOT'>
    <bundle>mvn:com.tecplata.esb.services/tosMock/1.0.0-SNAPSHOT</bundle>
  </feature>
  <feature name='esb-entities' version='1.0.0-SNAPSHOT'>
    <bundle>mvn:com.tecplata.esb/esb-entities/1.0.0-SNAPSHOT</bundle>
  </feature>
  <feature name='vesselsService-sei' version='1.0.0-SNAPSHOT'>
    <feature version='1.0.0-SNAPSHOT'>esb-entities</feature>
    <bundle>mvn:com.tecplata.esb.services.sei/vesselsService-sei/1.0.0-SNAPSHOT</bundle>
  </feature>
  <feature name='vesselsVisitorService' version='1.0.0-SNAPSHOT'>
    <bundle>mvn:org.apache.camel/camel-core/2.10.0.redhat-60024</bundle>
    <feature version='1.0.0-SNAPSHOT'>vesselsService-sei</feature>
    <bundle>mvn:com.tecplata.esb.services/vesselsVisitorService/1.0.0-SNAPSHOT</bundle>
  </feature>
</features>
should make it work.
Freeman
The user has cross-posted the same question in many places. When you do this, please tell us, as the conversation gets scattered.
It is being actively discussed here:
http://fusesource.com/forums/thread.jspa?threadID=4797&tstart=0
But also posted here:
https://community.jboss.org/thread/228200
Related
The problem I have is that I have RabbitMQ and Celery running on Fly (versions and configs are below). Both of them deploy normally and without any problems; however, when I send a task to RabbitMQ on Fly using the public dedicated IPv4 address, I get the following error: "Server has closed the connection unexpectedly".
Versions and configurations:
OS: Ubuntu 20.04.4 LTS
RabbitMQ version: 3.8.2
Celery version: 5.2.7
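For context, the task publishing itself is just a Celery app pointed at the broker's public address, roughly like this (the URL, credentials, and task are placeholders, not my real setup):
from celery import Celery

# Broker URL points at the app's public IPv4; address and
# credentials here are placeholders.
app = Celery("tasks", broker="amqp://user:password@203.0.113.10:5672//")

@app.task
def add(x, y):
    return x + y

# Publishing a task; this is the call after which the
# "Server has closed the connection unexpectedly" error appears.
result = add.delay(2, 3)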
The fly.toml file for RabbitMQ:
app = "rabbitmqserver"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
[experimental]
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["tls"]
port = 5672
Can you provide a suitable configuration for RabbitMQ so that I can send tasks to it using its IPv4 address?
I tried multiple other configurations for RabbitMQ on Fly and they also did not work. Furthermore, I made sure that all the needed ports are exposed and that the machine is actually alive (checked using the ping command).
Tried configuration:
app = "rabbitmq-app"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
RABBITMQ_MNESIA_DIR = "/var/lib/rabbitmq/mnesia/data"
[experimental]
allowed_public_ports = []
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
# rabbitmq admin
[[services]]
http_checks = []
internal_port = 15672
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["http", "tls"]
port = "15672"
[[services.tcp_checks]]
grace_period = "1s"
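One shape I have seen suggested (but have not verified) is a raw TCP pass-through for the AMQP port, with no handlers on the public port, along these lines:
[[services]]
internal_port = 5672
protocol = "tcp"
[[services.ports]]
# No handlers: Fly passes the raw TCP stream straight through,
# so the AMQP client is not expected to speak TLS first.
port = 5672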
I'm trying to build a small Akka.NET cluster application, but I'm having problems since I keep getting the following error:
[WARNING][7/14/2022 10:42:06 AM][Thread 0011][akka.tcp://accountsystem@localhost:57959/system/cluster/core/daemon/joinSeedNodeProcess-1] Couldn't join seed nodes after [2] attempts, will try again. seed-nodes=[akka.tcp://accountsystem@localhost:2551]
This is my docker-compose file for my lighthouse service:
version: '3'
services:
  accountsystem.lighthouse:
    image: petabridge/lighthouse:latest
    hostname: accountsystem.lighthouse
    ports:
      - '2551:2551'
    environment:
      ACTORSYSTEM: "accountsystem"
      CLUSTER_PORT: 2551
      CLUSTER_IP: "accountsystem.lighthouse"
      CLUSTER_SEEDS: "akka.tcp://accountsystem@accountsystem.lighthouse:2551"
And this is my Akka.NET configuration:
<![CDATA[
akka {
  actor {
    provider = cluster
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    dot-netty.tcp {
      hostname = "localhost"
      port = 0
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://accountsystem@localhost:2551"]
    #auto-down-unreachable-after = 30s
  }
}
]]>
I am also sharing the code setup for my AkkaService:
public Task StartAsync(CancellationToken cancellationToken)
{
    var akkaConfig = (AkkaConfigurationSection)System.Configuration.ConfigurationManager.GetSection("akka");

    var bootstrap = BootstrapSetup.Create()
        .WithConfig(akkaConfig.AkkaConfig)
        .WithActorRefProvider(ProviderSelection.Cluster.Instance); // launch Akka.Cluster

    // enable DI support inside this ActorSystem, if needed
    var diSetup = DependencyResolverSetup.Create(_serviceProvider);

    // merge this setup (and any others) together into ActorSystemSetup
    var actorSystemSetup = bootstrap.And(diSetup);

    // start ActorSystem
    _actorSystem = ActorSystem.Create("accountsystem", actorSystemSetup);

    var props = DependencyResolver.For(_actorSystem).Props<AccountRouterActor>();
    _actorRef = _actorSystem.ActorOf(props, "account");

    _actorSystem.WhenTerminated.ContinueWith(tr => {
        _applicationLifetime.StopApplication();
    });
    return Task.CompletedTask;
}
I have been reading about actor system and server name mismatches, but I think the name is correct, which would be accountsystem. I am also sharing my console output after starting up my docker-compose file, with all the messages. Maybe that will help.
docker-compose up
WARNING: Found orphan containers (accountpoc_mssql_1, mysql) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting accountpoc_accountsystem.lighthouse_1 ... done
Attaching to accountpoc_accountsystem.lighthouse_1
accountsystem.lighthouse_1 | [Docker-Bootstrap] IP=accountsystem.lighthouse
accountsystem.lighthouse_1 | [Docker-Bootstrap] PORT=2551
accountsystem.lighthouse_1 | [Docker-Bootstrap] SEEDS=["akka.tcp://accountsystem@accountsystem.lighthouse:2551"]
accountsystem.lighthouse_1 | [Lighthouse] ActorSystem: accountsystem; IP: accountsystem.lighthouse; PORT: 2551
accountsystem.lighthouse_1 | [Lighthouse] Performing pre-boot sanity check. Should be able to parse address [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [Lighthouse] Parse successful.
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][remoting (akka://accountsystem)] Starting remoting
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][remoting (akka://accountsystem)] Remoting started; listening on addresses : [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][remoting (akka://accountsystem)] Remoting now listens on addresses: [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][Cluster (akka://accountsystem)] Cluster Node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] - Starting up...
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][Cluster (akka://accountsystem)] Cluster Node [1.6.2] - Node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] is JOINING itself (with roles [lighthouse], version [1.6.2]) and forming a new cluster
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][Cluster (akka://accountsystem)] Cluster Node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] - is the new leader among reachable nodes (more leaders may exist)
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:48][Thread 0001][Cluster (akka://accountsystem)] Cluster Node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] - Leader is moving node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] to [Up]
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:49][Thread 0001][Cluster (akka://accountsystem)] Cluster Node [akka.tcp://accountsystem@accountsystem.lighthouse:2551] - Started up successfully
accountsystem.lighthouse_1 | [INFO][07/14/2022 10:52:49][Thread 0001][akka.tcp://accountsystem@accountsystem.lighthouse:2551/user/petabridge.cmd] petabridge.cmd host bound to [0.0.0.0:9110]
accountsystem.lighthouse_1 | [ERROR][07/14/2022 10:52:58][Thread 0008][akka.tcp://accountsystem@accountsystem.lighthouse:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Faccountsystem%40localhost%3A59968-1/endpointWriter] Dropping message [Akka.Actor.ActorSelectionMessage] for non-local recipient [[akka.tcp://accountsystem@localhost:2551/]] arriving at [akka.tcp://accountsystem@localhost:2551] inbound addresses [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [ERROR][07/14/2022 10:53:02][Thread 0008][akka.tcp://accountsystem@accountsystem.lighthouse:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Faccountsystem%40localhost%3A59968-1/endpointWriter] Dropping message [Akka.Actor.ActorSelectionMessage] for non-local recipient [[akka.tcp://accountsystem@localhost:2551/]] arriving at [akka.tcp://accountsystem@localhost:2551] inbound addresses [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [ERROR][07/14/2022 10:53:08][Thread 0008][akka.tcp://accountsystem@accountsystem.lighthouse:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Faccountsystem%40localhost%3A59968-1/endpointWriter] Dropping message [Akka.Actor.ActorSelectionMessage] for non-local recipient [[akka.tcp://accountsystem@localhost:2551/]] arriving at [akka.tcp://accountsystem@localhost:2551] inbound addresses [akka.tcp://accountsystem@accountsystem.lighthouse:2551]
accountsystem.lighthouse_1 | [WARNING][07/14/2022 10:53:10][Thread 0008][akka.tcp://accountsystem@accountsystem.lighthouse:2551/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Faccountsystem%40localhost%3A59968-1] Association with remote system akka.tcp://accountsystem@localhost:59968 has failed; address is now gated for 5000 ms. Reason is: [Akka.Remote.EndpointDisassociatedException: Disassociated
accountsystem.lighthouse_1 | at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
accountsystem.lighthouse_1 | at Akka.Remote.EndpointWriter.Unhandled(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.UntypedActor.Receive(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.ReceiveMessage(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.ReceivedTerminated(Terminated t)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.Invoke(Envelope envelope)]
accountsystem.lighthouse_1 | [ERROR][07/14/2022 10:53:10][Thread 0008][akka://accountsystem/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Faccountsystem%40localhost%3A59968-1/endpointWriter] Disassociated
accountsystem.lighthouse_1 | Cause: Akka.Remote.EndpointDisassociatedException: Disassociated
accountsystem.lighthouse_1 | at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level, Boolean needToThrow)
accountsystem.lighthouse_1 | at Akka.Remote.EndpointWriter.Unhandled(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.UntypedActor.Receive(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.ReceiveMessage(Object message)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.ReceivedTerminated(Terminated t)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
accountsystem.lighthouse_1 | at Akka.Actor.ActorCell.Invoke(Envelope envelope)
Well, I just had to change the HOCON configuration to the following:
<![CDATA[
akka {
  actor {
    provider = cluster
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    dot-netty.tcp {
      hostname = "localhost"
      port = 2551
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://accountsystem@localhost:2551"]
    #auto-down-unreachable-after = 30s
  }
}
]]>
In order for these environment variables to work:
CLUSTER_PORT: 2551
CLUSTER_IP: "accountsystem.lighthouse"
CLUSTER_SEEDS: "akka.tcp://accountsystem@accountsystem.lighthouse:2551"
You'll need to install https://github.com/petabridge/akkadotnet-bootstrap/tree/dev/src/Akka.Bootstrap.Docker and call BootstrapFromDocker() on your Config object. That's what Lighthouse is doing internally.
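A minimal sketch of that call, assuming the HOCON above has been loaded into a string (the hocon variable name is illustrative):
using Akka.Actor;
using Akka.Bootstrap.Docker; // provides the BootstrapFromDocker() extension method
using Akka.Configuration;

// 'hocon' holds the HOCON text shown above (how you load it is up to you).
var config = ConfigurationFactory.ParseString(hocon)
    .BootstrapFromDocker(); // applies CLUSTER_IP / CLUSTER_PORT / CLUSTER_SEEDS

var system = ActorSystem.Create("accountsystem", config);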
You might also find Akka.Cluster.Hosting an easier way of doing this, since you can now extract those values from a typed Microsoft.Extensions.Configuration section and pass them in programmatically to the WithRemoting and WithClustering methods, e.g.:
builder.Services.AddAkka("MyActorSystem", configurationBuilder =>
{
    configurationBuilder
        .WithRemoting("localhost", 8110)
        .WithClustering(new ClusterOptions
        {
            Roles = new[] { "myRole" },
            SeedNodes = new[] { Address.Parse("akka.tcp://MyActorSystem@localhost:8110") }
        })
        .WithActors((system, registry) =>
        {
            var echo = system.ActorOf(act =>
            {
                act.ReceiveAny((o, context) =>
                {
                    context.Sender.Tell($"{context.Self} rcv {o}");
                });
            }, "echo");
            registry.TryRegister<Echo>(echo); // register for DI
        });
});
I have a Splunk UF and a Splunk Enterprise server, both v8.2.1, running in Docker containers, but I am unable to see any data on the Enterprise server for the new index I created, 'mytest'.
The Enterprise server has the default port 9997 active as a receiving port.
Both containers are connected to the 'splunk' network I created:
"Containers": {
"0f9e44620ce9fba16df21af6d2253c4b02b9714cb3ea126a616f10d06f836eb9": {
"Name": "dspinelli-uf",
"EndpointID": "0e1dd065ee3d815c943a8b52e6107e53a4b57d9e3103b17d1461611543769869",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"3a1a084561eda8013baa8847f4ca30fd68eb74468ff666195bf1c15e0f8a280f": {
"Name": "dspinelli-ent",
"EndpointID": "7159b1a41840f9dfae04b50bb61386f8c3ac2233aee334026b9f1d685cfcf571",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
}
Inputs.conf on the UF:
[splunktcp://9997]
disabled = 0
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
index = mytest
indexes = mytest
outputgroup = tcpout
Outputs.conf on UF:
[indexAndForward]
index = false
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = dspinelli-ent:9997
[tcpout-server://dspinelli-ent:9997]
Communication between the UF and Enterprise Server is established:
netstat -an | grep 9997
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN
tcp 0 0 172.18.0.3:44420 172.18.0.2:9997 ESTABLISHED
./bin/splunk list forward-server
Active forwards:
dspinelli-ent:9997
Configured but inactive forwards:
None
Attempt to curl the UF with some test data shows success:
curl -k https://x.x.x.x:8087/services/collector \
> -H 'Authorization: Splunk 4022d42f-9132-442a-8a79-5d3eea1ad40d' \
> -d '{"sourcetype": "demo", "event":"Hello, I was sent from UF"}'
{"text":"Success","code":0}
However, no data is ever displayed on the index in the Enterprise server.
Does anyone know what could possibly be wrong here or what the next steps would be?
The issue was with inputs.conf. Updated as follows:
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
_TCP_ROUTING = *
index = _internal
After the update and a restart, the messages started to be received on the Enterprise server.
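To confirm the events are arriving, a quick search on the Enterprise side should show them; something like this (the host value is a guess based on the container name above, and the index matches the updated config):
index=_internal host="dspinelli-uf" | head 10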
I have had Odoo 10 working for the last 4 years. The scheduled actions had been working fine until 7th May 2021.
Server specs:
CPU: 4
RAM: 16 GB
OS: Ubuntu
The database name is: kwspl
In the server log, I find the following lines:
File "/opt/odoo/odoo-server/addons/bus/controllers/main.py", line 35, in poll
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:50:54,391 2376 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:50:54] "POST /longpolling/poll HTTP/1.1" 200 -
2021-05-24 15:50:56,701 2381 DEBUG ? odoo.service.server: WorkerCron (2381) polling for jobs
2021-05-24 15:50:56,702 2381 DEBUG ? odoo.service.server: WorkerCron (2381) 'kwspl' time:0.001s mem: 233352k -> 233352k (diff: 0k)
2021-05-24 15:51:03,660 2382 DEBUG ? odoo.service.server: WorkerCron (2382) polling for jobs
2021-05-24 15:51:03,662 2382 DEBUG ? odoo.service.server: WorkerCron (2382) 'kwspl' time:0.002s mem: 233352k -> 233352k (diff: 0k)
2021-05-24 15:51:04,530 2379 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
2021-05-24 15:51:04,532 2379 ERROR kwspl odoo.http: Exception during JSON request handling.
Traceback (most recent call last):
File "/opt/odoo/odoo-server/odoo/http.py", line 640, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/opt/odoo/odoo-server/odoo/http.py", line 677, in dispatch
result = self._call_function(**self.params)
The odoo.conf is as below:
[options]
addons_path = /opt/odoo/odoo-server/addons,/opt/odoo/custom/addons
admin_passwd = ******
csv_internal_sep = ,
data_dir = /opt/odoo/.local/share/Odoo
#db_filter = kwspl
db_host = False
db_maxconn = 64
#db_name = False
db_name = 'kwspl'
db_password = False
db_port = False
db_template = template1
db_user = odoo
dbfilter = ^kwspl$
demo = {}
email_from = False
geoip_database = /usr/share/GeoIP/GeoLiteCity.dat
import_partial =
limit_memory_hard = 4684354560
limit_memory_soft = 4147483648
limit_request = 8192
limit_time_cpu = 420
limit_time_real = 180
limit_time_real_cron = -1
list_db = False
log_db = False
log_db_level = warning
#log_handler = :INFO
log_level = debug
logfile = /var/log/odoo/odoo-server.log
logrotate = False
longpolling_port = 8072
max_cron_threads = 2
osv_memory_age_limit = 1.0
osv_memory_count_limit = False
pg_path = None
pidfile = None
proxy_mode = True
reportgz = False
server_wide_modules = web,web_kanban
smtp_password = False
smtp_port = 25
smtp_server = localhost
smtp_ssl = False
smtp_user = False
syslog = False
test_commit = False
test_enable = False
test_file = False
test_report_directory = False
translate_modules = ['all']
unaccent = False
without_demo = False
workers = 4
xmlrpc = True
#xmlrpc_interface =
xmlrpc_port = 8069
If I change the following parameters in odoo.conf:
db_name = False
dbfilter = ^%d$
The following lines are seen in the log:
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:59:58,457 2574 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:59:58] "POST /longpolling/poll HTTP/1.1" 200 -
2021-05-24 16:00:03,261 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,316 2576 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:03,376 2576 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:03,377 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) kwspl time:0.109s mem: 220928k -> 227084k (diff: 6156k)
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,388 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:03,388 2576 DEBUG ? odoo.service.server: WorkerCron (2576) 'kwspl' time:0.006s mem: 227084k -> 227084k (diff: 0k)
2021-05-24 16:00:04,190 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,244 2577 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:04,264 2577 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:04,264 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:04,264 2577 DEBUG ? odoo.service.server: WorkerCron (2577) kwspl time:0.068s mem: 220928k -> 227172k (diff: 6244k)
2021-05-24 16:00:04,265 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,274 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:04,275 2577 DEBUG ? odoo.service.server: WorkerCron (2577) 'kwspl' time:0.006s mem: 227172k -> 227172k (diff: 0k)
2021-05-24 16:00:05,377 2571 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
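One thing I notice in the lines above: the ConnectionPool messages show connections to "dbname=\\'kwspl\\' user=odoo", i.e. the quotes in db_name = 'kwspl' seem to be passed through as part of the database name. Values in odoo.conf are normally written unquoted, so presumably it should read:
db_name = kwspl
dbfilter = ^kwspl$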
The scheduled actions are no longer running, while the automated tasks are working normally.
Is this the cause? -> "Skipping database kwspl because of modules to install/upgrade/remove."
If this is the issue, how do I check which module is the culprit?
Any guesses?
On further investigation, I found an error message in the logs:
crm_rma_lot_mass_return: module not found
I tried to find this module in the current directories but could not find it.
So I created an Odoo scaffolding with the same name and uploaded it to the server in one of the addons directories mentioned in odoo-server.conf.
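For reference, the scaffolding can be generated with Odoo's built-in scaffold command, roughly like this (the server path and addons directory are taken from the addons_path in the config above; the exact invocation may differ per install):
cd /opt/odoo/odoo-server
# Creates an empty module skeleton named after the missing module
./odoo-bin scaffold crm_rma_lot_mass_return /opt/odoo/custom/addons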
This solved the problem. Odoo is now executing the scheduled actions.
If someone faces a similar problem, I would be happy to help.
Hi all.
I am using logstash-1.4.2 to consume messages stored in my ActiveMQ (with the STOMP plugin).
In my activemq.xml config file, I have the line:
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
When I run Logstash, I get this error:
C:\logstash\logstash-1.4.2\bin>logstash agent -f logstashconfig.conf
+---------------------------------------------------------+
| An unexpected error occurred. This is probably a bug. |
| You can find help with this problem in a few places: |
| |
| * chat: #logstash IRC channel on freenode irc. |
| IRC via the web: http://goo.gl/TI4Ro |
| * email: logstash-users@googlegroups.com |
| * bug system: https://logstash.jira.com/ |
| |
+---------------------------------------------------------+
The error reported is:
Couldn't find any input plugin named 'stomp'. Are you sure this is correct? Trying to load the stomp input plugin resulted in this error: no such file to load -- logstash/inputs/stomp
In my logstashconfig.conf, I have:
input {
  stomp {
    password => "admin"
    user => "admin"
  }
}
output {
  file {
    path => "C:\logstash\logstash-1.4.2\cosumedfromstomp.txt"
  }
}
If I consume from RabbitMQ with the following logstashconfig.conf, it works correctly (here is my RabbitMQ version of the config):
input {
  rabbitmq {
    host => "amqp"
    queue => "elasticsearch"
    key => "elasticsearch"
    exchange => "elasticsearch"
    type => "all"
    durable => true
    auto_delete => false
    exclusive => false
    format => "json_event"
    debug => false
  }
}
output {
  file {
    path => "C:\logstash\logstash-1.4.2\cosumedfromstomp.txt"
  }
}
I don't have trouble with my RabbitMQ version of Logstash; the text file it creates looks correct.
My questions are:
1. Did I configure my stomp input wrong? Since I don't have a "queue" name in my config, it might be wrong.
2. Or did I fail to set up the stomp plugin correctly? If that is the reason, it would not be about Logstash...
Thanks
You need to install the Contributed Plugins. These have been removed from the core download for Logstash. The Stomp plugin is located in the contributed plugins:
Stomp
Milestone: 2
This is a community-contributed plugin! It does not ship with logstash
by default, but it is easy to install! To use this, you must have
installed the contrib plugins package.
Directions here:
http://logstash.net/docs/1.4.2/contrib-plugins
Hosted on GitHub here:
https://github.com/elasticsearch/logstash-contrib
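If I recall the 1.4.x layout correctly, the install itself is a one-liner run from the Logstash directory; on Windows (as in your session above) it would look roughly like this, but defer to the directions page above:
cd C:\logstash\logstash-1.4.2
rem Installs the contrib plugin package, which includes the stomp input
bin\plugin.bat install contrib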