Whenever I run the activemq command, I get the output below, but the server does not start.
Java Runtime: Oracle Corporation 1.8.0_131 C:\Program Files\Java\jdk1.8.0_131\jre
Heap sizes: current=1005056k free=989327k max=1005056k
JVM args: -Dcom.sun.management.jmxremote -Xms1G -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\conf\login.config -Dactivemq.classpath=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\conf;D:\Apache ActiveMq\apache-activemq-5.14.5\bin\../conf;D:\Apache ActiveMq\apache-activemq-5.14.5\bin\../conf; -Dactivemq.home=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\.. -Dactivemq.base=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\.. -Dactivemq.conf=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\conf -Dactivemq.data=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\data -Djava.io.tmpdir=D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\data\tmp
Extensions classpath:
[D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\lib,D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\lib\camel,D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\lib\optional,D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\lib\web,D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\lib\extra]
ACTIVEMQ_HOME: D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..
ACTIVEMQ_BASE: D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..
ACTIVEMQ_CONF: D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\conf
ACTIVEMQ_DATA: D:\Apache ActiveMq\apache-activemq-5.14.5\bin\..\data
Usage: Main [--extdir <dir>] [task] [task-options] [task data]
Tasks:
browse - Display selected messages in a specified destination.
bstat - Performs a predefined query that displays useful statistics regarding the specified broker
consumer - Receives messages from the broker
create - Creates a runnable broker instance in the specified path.
decrypt - Decrypts given text
dstat - Performs a predefined query that displays useful tabular statistics regarding the specified destination type
encrypt - Encrypts given text
export - Exports a stopped brokers data files to an archive file
list - Lists all available brokers in the specified JMX context
producer - Sends messages to the broker
purge - Delete selected destination's messages that matches the message selector
query - Display selected broker component's attributes and statistics.
start - Creates and starts a broker using a configuration file, or a broker URI.
stop - Stops a running broker specified by the broker name.
Task Options (Options specific to each task):
--extdir <dir> - Add the jar files in the directory to the classpath.
--version - Display the version information.
-h,-?,--help - Display this help information. To display task specific help, use Main [task] -h,-?,--help
Task Data:
- Information needed by each specific task.
JMX system property options:
-Dactivemq.jmx.url=<url of the jmx service> (default is: 'service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi')
-Dactivemq.jmx.user=<user name>
-Dactivemq.jmx.password=<password>
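Note that the output above is the launcher's generic usage text, which it prints when it is not given a recognized task, so the broker is most likely being invoked without one. A minimal sketch of the likely fix, assuming the default layout from the paths above, is to pass the start task explicitly:

cd /d "D:\Apache ActiveMq\apache-activemq-5.14.5\bin"
activemq start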
I am setting up a RabbitMQ single-node container built from a Docker image. The image is configured to persist to an NFS-mounted disk.
I ran into an issue when the container is restarted: every time a node restarts it gets a unique name, and the restarted node then searches for the old nodes it reads from the cluster_nodes.config file.
Error dump shows:
Error during startup: {error,
{failed_to_cluster_with,
[rabbit@9c3bfb851ba3],
"Mnesia could not connect to any nodes."}}
How can I configure my image to use the same node name each time it restarts, instead of the arbitrary node name given by the Kubernetes cluster?
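One approach that is commonly suggested (an assumption on my part, not something from the post above) is to give the container a stable hostname, since RabbitMQ derives its node name, rabbit@<hostname>, from it, or to pin the node name outright via the RABBITMQ_NODENAME environment variable:

# Hypothetical names; a stable hostname keeps rabbit@<hostname> constant across restarts
docker run --hostname rabbitmq-0 my-rabbitmq-image
# or pin the node name explicitly
docker run --hostname rabbitmq-0 -e RABBITMQ_NODENAME=rabbit@rabbitmq-0 my-rabbitmq-image

On Kubernetes specifically, running the node as a StatefulSet gives each pod a stable hostname, which achieves the same effect.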
I've got a Fargate service running, and can view its Cloudwatch log streams using the AWS console (navigate to the service, and click on its Logs tab).
I'm looking at the AWS documentation for GetLogEvents and see that you can access the logs using the log group name and log stream name. While I know the log group name for the service, the log stream name is generated dynamically.
How do I obtain the current log stream name for the running Fargate service?
I'm checking the AmazonECSClient documentation; any pointers would be helpful.
EDIT:
I found that the log group is actually specified for the container, not the service. Retrieving the task definition for the service, I can iterate over the container definitions, which have a LogConfiguration section that holds the Options; however, that only provides the log group and a stream prefix, not the log stream name:
- service
- task definition
- container definitions
- LogConfiguration:
LogDriver: awslogs
Options: awslogs-group=/ecs/myservice
awslogs-region=us-east-1
awslogs-stream-prefix=ecs
EDIT 2:
I see from the AWS Console, that the link in the Logs tab does contain the log stream name. See the stream value in this sample URL:
https://us-east-1.console.aws.amazon.com/cloudwatch/home
?region=us-east-1
#logEventViewer:group=/ecs/myservice;stream=ecs/myservice/ad7246dd-bb0e-4eff-b059-767d30d40e69
How does the AWS Console obtain that value?
I finally found the format of the log stream name in the AWS documentation here:
awslogs-stream-prefix
Required: No, unless using the Fargate launch type in which case it is required.
The awslogs-stream-prefix option allows you to associate a log stream
with the specified prefix, the container name, and the ID of the Amazon
ECS task to which the container belongs. If you specify a prefix with
this option, then the log stream takes the following format:
prefix-name/container-name/ecs-task-id
Note that the ecs-task-id is the GUID portion of the task's ARN:
For this sample Task ARN:
arn:aws:ecs:us-east-1:123456789012:task/12373b3b-84c1-4398-850b-4caef9a983fc
the ecs-task-id to use for the log stream name is:
12373b3b-84c1-4398-850b-4caef9a983fc
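Putting the pieces together, here is a minimal sketch of fetching the events, assuming the sample log group, prefix, container name, and task ARN above (all hypothetical values):

TASK_ARN=arn:aws:ecs:us-east-1:123456789012:task/12373b3b-84c1-4398-850b-4caef9a983fc
TASK_ID=${TASK_ARN##*/}   # the GUID portion after the last '/'
# log stream name format: prefix-name/container-name/ecs-task-id
aws logs get-log-events \
  --log-group-name /ecs/myservice \
  --log-stream-name "ecs/myservice/${TASK_ID}"

The running task's ARN itself can be discovered with aws ecs list-tasks against the service.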
From a Linux server with the MQ client installed, we are trying to set up a connection to a secured channel. I am an ETL person and our MQ admin is struggling. Anyway, I will explain what I tried (which obviously hasn't worked yet); please let me know what else needs to be done to set up the connectivity. Thanks :)
tmp/mqmutility/keyrepmodmq> ls
AMQCLCHL.TAB key.kdb key.rdb key.sth MODE_MODELTAP_DEV_keyStLst.txt
export MQSSLKEYR=/tmp/mqmutility/keyrepmodmq/key
export MQCHLLIB=/tmp/mqmutility/keyrepmodmq
export MQCHLTAB=AMQCLCHL.TAB
/opt/mqm/samp/bin> amqsputc <queue_name> <queue_manager_name>
Sample AMQSPUT0 start
MQCONN ended with reason code 2058
Note: I can connect to the same queue manager for a non-SSL channel.
Any help would be great; other approaches you follow for SSL channel connectivity from a client machine would also be helpful.
When using a Client Channel Definition Table (CCDT) file - your AMQCLCHL.TAB file - a return code of 2058 usually means that the queue manager name the application tried to use - your 'queue_manager_name' - was not found in any of the channel entries in the CCDT file.
If you're using MQ V8, you can very easily display the entries in your CCDT file, and the queue manager names they are configured for, using the following command:
runmqsc -n
DISPLAY CHANNEL(*) QMNAME
If none of the channels in your file have the queue manager name you are using when running the amqsputc sample, then this is the cause of your 2058 reason code.
Hopefully it will be clear when you see the entries in the file listed out which queue manager name you should be using, but if not, update your question with some more details (like the contents of said file and the queue manager details) and we can help further.
You must ensure that you have a CLNTCONN channel defined which has the queue manager name you want to use in the QMNAME field, and that you have a matching named SVRCONN channel defined on the queue manager. Since you are using SSL, you must also ensure that these two channels are using the same SSLCIPH.
Please read Creating server-connection and client-connection definitions on the server and its child topics.
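For illustration, a minimal sketch of such a matching pair, defined in runmqsc on the queue manager (channel name, queue manager name, host, port, and cipher are all hypothetical here):

DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) +
       SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)
DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('mqhost.example.com(1414)') QMNAME('QM1') +
       SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)

The CLNTCONN definition is written into the queue manager's AMQCLCHL.TAB, which you then copy to the client; the channel name and SSLCIPH must match on both definitions, and QMNAME must be the queue manager name you pass to amqsputc.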
This question is very closely related to this and this. With the SFTP connector, streaming is on by default and cannot be turned off (version 3.5.2). If I have a flow like the one below:
1) Inbound SFTP connector with a large polling frequency (in hours for example)
2) File outbound to store the stream into a file in local storage
I would expect the SFTP inbound endpoint to delete the source file after the transfer is complete, but this does not happen.
Reading the documentation, I find the below:
Auto Delete (Applies to inbound SFTP endpoint only) . Click this box to delete the file after it has been read. Note: If an error occurs when Mule is writing to the outbound endpoint, the file is not deleted. Auto Delete only works if both the inbound and outbound endpoints use the SFTP transport.
Is there a workaround for this? Basically, how can I delete the source file once I have downloaded it off the SFTP server?
The above is mostly an issue with the documentation; SFTP auto-delete does work even if the outbound endpoint is not an SFTP connector. I have logged a JIRA about it here.
Update:
I have determined the root cause of the issue: when deployed on CloudHub, the SFTP connector returns an instance of java.io.ByteArrayInputStream, whereas when the application is deployed via Anypoint Studio the output is an instance of org.mule.transport.sftp.SftpInputStream.
To reproduce, make sure the file size is less than 2 MB.
This causes the file not to be deleted when deployed on CloudHub.
Logs from CloudHub:
Local deployment logs:
2015-04-28 15:37:50 DEBUG SftpConnectionFactory:118 - Successfully connected to: sftp://decision_1:#####XXXXXXX:22/To_DI_Local
2015-04-28 15:37:50 DEBUG SftpClient:121 - Attempting to cwd to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpConnector:121 - Successfully changed working directory to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routing file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routed file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Polling. Routed all 1 files found at sftp://decision_1:#####XXXXXX:22/To_DI_Local
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
My flow looks like the below.
Update 2:
However, if the file is a big one (I think anything above 10 MB), the return type is com.google.common.io.MultiInputStream and the file does get deleted.
Any ideas why CloudHub would behave like this?
I am new to MuleSoft and I encountered the same issue. What I found was that Mule puts a lock on the file that it is reading. For my particular scenario, I was reading a file and then uploading it to Fusion (cloud). Mule would not auto-delete the file, and when it polled the folder path again, it would see that same file there and try to read it again. What we had to do to force the release of the file was add a Byte Array to String transformer. It seems to have forced Mule to read the file in its entirety and transform the payload, which severed the lock on the file.
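As a sketch of that workaround (Mule 3 XML; the host, credentials, paths, and flow name are hypothetical), the transformer sits between the SFTP inbound endpoint and the file outbound endpoint:

<flow name="sftpToLocalFile">
    <!-- autoDelete removes the remote file once the message has been consumed -->
    <sftp:inbound-endpoint host="sftp.example.com" port="22"
                           user="user" password="secret"
                           path="/To_DI_Local" autoDelete="true"/>
    <!-- forces the streamed payload to be read in full, releasing the remote file -->
    <byte-array-to-string-transformer/>
    <file:outbound-endpoint path="/local/storage"/>
</flow>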
After adding a destination (Queue) to Destination Resources from the Admin Console at Resources/JMS Resources/Destination Resources, no physical destinations are displayed at server (Admin Server)/JMS Physical Destinations. Instead, the following error message is displayed below the heading:
An error has occured
Unable to list JMS Destinations
Also, on trying to add a new Physical Destination of type 'Queue' at server (Admin Server)/JMS Physical Destinations, the following error message is displayed:
An error has occured
Unable to create JMS Destination
On trying to add a Physical Destination using asadmin in command-line as:
asadmin> create-jmsdest -T queue DemoQueue
the following error is displayed:
remote failure: Unable to create JMS Destination.
Command create-jmsdest failed.
Here, GlassFish Server Open Source Edition 3.1-b24 is run on Ubuntu with kernel 2.6.28-11-server.
Any help is appreciated.
I don't think that you should create physical destinations manually. All you need to do to set up JMS resources in GlassFish is define a connection factory and destinations, all under the Resources - JMS Resources branch in the admin interface. When your destinations are used, the physical destinations will be created automatically.
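For illustration, the same setup from the command line with asadmin might look like this (the resource names are hypothetical):

asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/DemoConnectionFactory
asadmin create-jms-resource --restype javax.jms.Queue --property Name=DemoQueue jms/DemoQueue

The Name property is what ties the JNDI resource to the physical destination that the broker creates on first use.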
This confused me to no end the first time, so I sympathise.
For GF v2.1.1 (and I suspect for v3), a physical destination - mq.sys.dmq - is already created and configured, and queues are created there. The messaging server is Sun MQ, and if it is your intention to use this out of the box, then you don't need to create another physical destination.
If you do indeed need to create another physical destination, launch [path-to-glassfish]/imq/bin/imqadmin.exe (or the Ubuntu equivalent) and do it there.
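If you would rather stay on the command line, the broker also ships imqcmd, which can create a physical destination directly (queue name hypothetical; the broker must be running):

[path-to-glassfish]/imq/bin/imqcmd create dst -t q -n DemoQueue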