I have had Odoo 10 running for the last 4 years. The scheduled actions were working fine until 7th May 2021.
Server specs:
CPU: 4 cores
RAM: 16 GB
OS: Ubuntu
The database name is kwspl.
In the server log, I find the following lines:
File "/opt/odoo/odoo-server/addons/bus/controllers/main.py", line 35, in poll
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:50:54,391 2376 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:50:54] "POST /longpolling/poll HTTP/1.1" 200 -
2021-05-24 15:50:56,701 2381 DEBUG ? odoo.service.server: WorkerCron (2381) polling for jobs
2021-05-24 15:50:56,702 2381 DEBUG ? odoo.service.server: WorkerCron (2381) 'kwspl' time:0.001s mem: 233352k -> 233352k (diff: 0k)
2021-05-24 15:51:03,660 2382 DEBUG ? odoo.service.server: WorkerCron (2382) polling for jobs
2021-05-24 15:51:03,662 2382 DEBUG ? odoo.service.server: WorkerCron (2382) 'kwspl' time:0.002s mem: 233352k -> 233352k (diff: 0k)
2021-05-24 15:51:04,530 2379 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
2021-05-24 15:51:04,532 2379 ERROR kwspl odoo.http: Exception during JSON request handling.
Traceback (most recent call last):
File "/opt/odoo/odoo-server/odoo/http.py", line 640, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/opt/odoo/odoo-server/odoo/http.py", line 677, in dispatch
result = self._call_function(**self.params)
My odoo.conf is as below:
[options]
addons_path = /opt/odoo/odoo-server/addons,/opt/odoo/custom/addons
admin_passwd = ******
csv_internal_sep = ,
data_dir = /opt/odoo/.local/share/Odoo
#db_filter = kwspl
db_host = False
db_maxconn = 64
#db_name = False
db_name = 'kwspl'
db_password = False
db_port = False
db_template = template1
db_user = odoo
dbfilter = ^kwspl$
demo = {}
email_from = False
geoip_database = /usr/share/GeoIP/GeoLiteCity.dat
import_partial =
limit_memory_hard = 4684354560
limit_memory_soft = 4147483648
limit_request = 8192
limit_time_cpu = 420
limit_time_real = 180
limit_time_real_cron = -1
list_db = False
log_db = False
log_db_level = warning
#log_handler = :INFO
log_level = debug
logfile = /var/log/odoo/odoo-server.log
logrotate = False
longpolling_port = 8072
max_cron_threads = 2
osv_memory_age_limit = 1.0
osv_memory_count_limit = False
pg_path = None
pidfile = None
proxy_mode = True
reportgz = False
server_wide_modules = web,web_kanban
smtp_password = False
smtp_port = 25
smtp_server = localhost
smtp_ssl = False
smtp_user = False
syslog = False
test_commit = False
test_enable = False
test_file = False
test_report_directory = False
translate_modules = ['all']
unaccent = False
without_demo = False
workers = 4
xmlrpc = True
#xmlrpc_interface =
xmlrpc_port = 8069
If I change the following parameters in odoo.conf:
db_name = False
dbfilter = ^%d$
The following lines are seen in the log:
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:59:58,457 2574 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:59:58] "POST /longpolling/poll HTTP/1.1" 200 -
2021-05-24 16:00:03,261 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,316 2576 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:03,376 2576 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:03,377 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) kwspl time:0.109s mem: 220928k -> 227084k (diff: 6156k)
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,388 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:03,388 2576 DEBUG ? odoo.service.server: WorkerCron (2576) 'kwspl' time:0.006s mem: 227084k -> 227084k (diff: 0k)
2021-05-24 16:00:04,190 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,244 2577 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:04,264 2577 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:04,264 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:04,264 2577 DEBUG ? odoo.service.server: WorkerCron (2577) kwspl time:0.068s mem: 220928k -> 227172k (diff: 6244k)
2021-05-24 16:00:04,265 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,274 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:04,275 2577 DEBUG ? odoo.service.server: WorkerCron (2577) 'kwspl' time:0.006s mem: 227172k -> 227172k (diff: 0k)
2021-05-24 16:00:05,377 2571 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
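A side note on the config: odoo.conf values are read verbatim, so the quotes in db_name = 'kwspl' appear to be taken literally. That would explain the connections to dbname=\'kwspl\' (quotes included) in the log above. The unquoted form would be:
db_name = kwspl
dbfilter = ^kwspl$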
The scheduled actions are no longer running, while the automated tasks work normally.
Is this message the cause? -> Skipping database kwspl because of modules to install/upgrade/remove.
If this is the issue, how do I check which module is the culprit?
Any guesses?
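One way to check, assuming direct psql access to the database, is to list the modules stuck in a pending state, since those are exactly what make ir_cron skip the database:
-- run against the kwspl database; any rows returned here block the cron workers
SELECT name, state
FROM ir_module_module
WHERE state IN ('to install', 'to upgrade', 'to remove');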
On further investigation, I found an error message in the logs:
crm_rma_lot_mass_return: module not found
I tried to find this module in the current directories but could not find it.
So I created an Odoo scaffold module with the same name and uploaded it to the server in one of the addons directories listed in odoo-server.conf.
This solved the problem. Odoo is now executing the scheduled actions.
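For reference, the empty placeholder can be generated with Odoo's scaffold command (the paths below match my setup and may differ on yours):
# create a skeleton module and drop it into a configured addons path
/opt/odoo/odoo-server/odoo-bin scaffold crm_rma_lot_mass_return /opt/odoo/custom/addons
Restart the Odoo service afterwards so the registry picks the module up.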
If someone faces a similar problem, I would be happy to help.
Related
I want to use CouchDB (v2.3.1) with SSL enabled, so I added an [ssl] section to the /opt/couchdb/etc/local.d/docker.ini file as shown below:
[ssl]
port = 6984
enable = true
cert_file = /etc/hyperledger/fabric/tls/server.crt
key_file = /etc/hyperledger/fabric/tls/server.key
cacert_file = /etc/hyperledger/fabric/tls/ca.crt
[daemons]
httpsd = {couch_httpd, start_link, [https]}
[admins]
Admin = ...
[couchdb]
uuid = ...
but I can't access the web UI over HTTPS! I get this error:
This site can’t provide a secure connection
"IP" uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
Unsupported protocol
The client and server don't support a common SSL protocol version or cipher suite.
These are the logs:
[error] 2020-05-17T06:52:18.046389Z nonode#nohost <0.19077.3> -------- SSL: hello: tls_handshake.erl:127:Fatal error: handshake failure - malformed_handshake_data
[error] 2020-05-17T06:52:18.046426Z nonode#nohost <0.18899.3> -------- application: mochiweb, "Accept failed error", "{error,{tls_alert,\"handshake failure\"}}"
[error] 2020-05-17T06:52:18.046508Z nonode#nohost <0.18899.3> -------- CRASH REPORT Process (<0.18899.3>) with 0 neighbors exited with reason: {error,accept_failed} at mochiweb_acceptor:init/4(line:75) <= proc_lib:init_p_do_apply/3(line:247); initial_call: {mochiweb_acceptor,init,['Argument__1','Argument__2',...]}, ancestors: [https,couch_secondary_services,couch_sup,<0.202.0>], messages: [], links: [<0.253.0>], dictionary: [], trap_exit: false, status: running, heap_size: 1598, stack_size: 27, reductions: 954
Can somebody please help me?
I found the solution and wrote a post about it:
https://medium.com/@pouyashojaei85/enabling-ssl-for-docker-couchdb-container-127388eca1a8
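For readers who cannot open the link, one detail worth double-checking (this is based on the CouchDB 2.x documentation, so treat it as a pointer rather than a confirmed fix): in CouchDB 2.x the HTTPS listener is started through chttpd, not the legacy couch_httpd module used in the question's config:
[daemons]
httpsd = {chttpd, start_link, [https]}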
I want to know the cause of a RabbitMQ crash which occurs randomly. Can you let me know what kinds of causes should be considered?
Also, my team has to restart RabbitMQ manually whenever a crash happens, so I want to know if there is a way to restart the RabbitMQ server automatically.
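For the automatic restart, a sketch of one common option, assuming rabbitmq-server runs under systemd (the drop-in path below is an assumption):
# /etc/systemd/system/rabbitmq-server.service.d/restart.conf
[Service]
Restart=on-failure
RestartSec=10
Run systemctl daemon-reload after creating the file so the override takes effect.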
Here is the error report from when the RabbitMQ crash occurs:
=WARNING REPORT==== 6-Dec-2017::07:56:43 ===
closing AMQP connection <0.4387.0> (000000:23070 -> 00000:5672, vhost: '/', user: '00000'):
client unexpectedly closed TCP connection
Also, this is part of the sasl.gsd file:
=SUPERVISOR REPORT==== 7-Dec-2017::10:03:15 ===
Supervisor: {local,sockjs_session_sup}
Context: child_terminated
Reason: {function_clause,
[{gen_server,cast,
[{},sockjs_closed],
[{file,"gen_server.erl"},{line,218}]},
{rabbit_ws_sockjs,service_stomp,3,
[{file,"src/rabbit_ws_sockjs.erl"},{line,150}]},
{sockjs_session,emit,2,
[{file,"src/sockjs_session.erl"},{line,173}]},
{sockjs_session,terminate,2,
[{file,"src/sockjs_session.erl"},{line,311}]},
{gen_server,try_terminate,3,
[{file,"gen_server.erl"},{line,629}]},
{gen_server,terminate,7,
[{file,"gen_server.erl"},{line,795}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
Offender: [{pid,<0.20883.1160>},
{id,undefined},
{mfargs,
{sockjs_session,start_link,
["pd4tvvi0",
{service,"/stomp",
#Fun<rabbit_ws_sockjs.1.47892404>,{},
"//cdn.jsdelivr.net/sockjs/1.0.3/sockjs.min.js",
false,true,5000,25000,131072,
#Fun<rabbit_ws_sockjs.0.47892404>,undefined},
[{peername,{{172,31,6,213},9910}},
{sockname,{{172,31,5,49},15674}},
{path,"/stomp/744/pd4tvvi0/htmlfile"},
{headers,[]},
{socket,#Port<0.12491352>}]]}},
{restart_type,transient},
{shutdown,5000},
{child_type,worker}]
=CRASH REPORT==== 7-Dec-2017::10:03:20 ===
crasher:
initial call: sockjs_session:init/1
pid: <0.25851.1160>
registered_name: []
exception exit: {function_clause,
[{gen_server,cast,
[{},sockjs_closed],
[{file,"gen_server.erl"},{line,218}]},
{rabbit_ws_sockjs,service_stomp,3,
[{file,"src/rabbit_ws_sockjs.erl"},{line,150}]},
{sockjs_session,emit,2,
[{file,"src/sockjs_session.erl"},{line,173}]},
{sockjs_session,terminate,2,
[{file,"src/sockjs_session.erl"},{line,311}]},
{gen_server,try_terminate,3,
[{file,"gen_server.erl"},{line,629}]},
{gen_server,terminate,7,
[{file,"gen_server.erl"},{line,795}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
in function gen_server:terminate/7 (gen_server.erl, line 800)
ancestors: [sockjs_session_sup,<0.177.0>]
messages: []
links: [<0.178.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 175
neighbours:
Please check out the error report I posted above and let me know the possible causes of the RabbitMQ crash and how to restart the RabbitMQ server automatically.
Thanks!!
I want to sink JSON data into Apache Phoenix with Apache Flume. I followed an online guide, http://kalyanbigdatatraining.blogspot.com/2016/10/how-to-stream-json-data-into-phoenix.html, but hit the following error. How can I resolve it? Many thanks!
My environment:
hadoop-2.7.3
hbase-1.3.1
phoenix-4.12.0-HBase-1.3-bin
flume-1.7.0
In Flume, I added the Phoenix sink related jars to $FLUME_HOME/plugins.d/phoenix-sink/lib:
commons-io-2.4.jar
twill-api-0.8.0.jar
twill-discovery-api-0.8.0.jar
json-path-2.2.0.jar
twill-common-0.8.0.jar
twill-discovery-core-0.8.0.jar
phoenix-flume-4.12.0-HBase-1.3.jar
twill-core-0.8.0.jar
twill-zookeeper-0.8.0.jar
2017-11-11 14:49:54,786 (lifecycleSupervisor-1-1) [DEBUG - org.apache.phoenix.jdbc.PhoenixDriver$2.onRemoval(PhoenixDriver.java:159)] Expiring localhost:2181:/hbase because of EXPLICIT
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [INFO - org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.closeZooKeeperWatcher(ConnectionManager.java:1712)] Closing zookeeper sessionid=0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:673)] Closing session: 0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.close(ClientCnxn.java:1306)] Closing client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:818)] Reading reply sessionid:0x15fa8952cea00a6, packet:: clientPath:null serverPath:null finished:false header:: 3,-11 replyHeader:: 3,2620,0 request:: null response:: null
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.disconnect(ClientCnxn.java:1290)] Disconnecting client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1086)] An exception was thrown while closing send thread for session 0x15fa8952cea00a6 : Unable to read additional data from server sessionid 0x15fa8952cea00a6, likely server has closed socket
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [INFO - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:684)] Session: 0x15fa8952cea00a6 closed
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-EventThread) [INFO - org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512)] EventThread shut down
2017-11-11 14:49:54,790 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2d2052a0 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError:
org.apache.twill.zookeeper.ZKClientService.startAndWait()Lcom/google/common/util/concurrent/Service$State;
at org.apache.phoenix.transaction.TephraTransactionContext.setTransactionClient(TephraTransactionContext.java:147)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.initTxServiceClient(ConnectionQueryServicesImpl.java:401)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:415)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:257)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.flume.serializer.BaseEventSerializer.initialize(BaseEventSerializer.java:140)
at org.apache.phoenix.flume.sink.PhoenixSink.start(PhoenixSink.java:119)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:45)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: SINK, name: Phoenix Sink__1 stopped
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:155)] Shutdown Metric for type: SINK, name: Phoenix Sink__1. sink.start.time == 1510382993516
Here is my flume-agent.properties:
agent.sources = exec
agent.channels = mem-channel
agent.sinks = phoenix-sink
agent.sources.exec.type = exec
agent.sources.exec.command = tail -F /Users/chenshuai1/tmp/users.json
agent.sources.exec.channels = mem-channel
agent.sinks.phoenix-sink.type = org.apache.phoenix.flume.sink.PhoenixSink
agent.sinks.phoenix-sink.batchSize = 10
agent.sinks.phoenix-sink.zookeeperQuorum = localhost
agent.sinks.phoenix-sink.table = users2
agent.sinks.phoenix-sink.ddl = CREATE TABLE IF NOT EXISTS users2 (userid BIGINT NOT NULL, username VARCHAR, password VARCHAR, email VARCHAR, country VARCHAR, state VARCHAR, city VARCHAR, dt VARCHAR NOT NULL CONSTRAINT PK PRIMARY KEY (userid, dt))
agent.sinks.phoenix-sink.serializer = json
agent.sinks.phoenix-sink.serializer.columnsMapping = {"userid":"userid", "username":"username", "password":"password", "email":"email", "country":"country", "state":"state", "city":"city", "dt":"dt"}
agent.sinks.phoenix-sink.serializer.partialSchema = true
agent.sinks.phoenix-sink.serializer.columns = userid,username,password,email,country,state,city,dt
agent.sinks.phoenix-sink.channel = mem-channel
agent.channels.mem-channel.type = memory
agent.channels.mem-channel.capacity = 1000
agent.channels.mem-channel.transactionCapacity = 100
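As a note for anyone hitting the same trace: a NoSuchMethodError on com.google.common.util.concurrent.Service usually means two different Guava versions are on the classpath (Service.startAndWait() was removed in newer Guava releases, and Twill 0.8 was compiled against an older one). A quick way to look for duplicates, with paths that are assumptions:
# list every Guava jar visible to Flume and the Phoenix sink plugin
find "$FLUME_HOME/lib" "$FLUME_HOME/plugins.d" -name 'guava-*.jar'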
This is what my "/srv/gitlab/config/gitlab.rb" looks like for email settings:
################################
# GitLab email server settings #
################################
# see https://gitlab.com/gitlab-org/omnibus-gitlab/blob/629def0a7a26e7c2326566f0758d4a27857b52a3/doc/settings/smtp.md#smtp-settings
# Use smtp instead of sendmail/postfix.
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "corp-myorg-com.mail.outlook.com"
gitlab_rails['smtp_port'] = 25
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
# gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
# gitlab_rails['smtp_openssl_verify_mode'] = 'none' # Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert', see http://api.rubyonrails.org/classes/ActionMailer/Base.html
# gitlab_rails['smtp_ca_path'] = "/etc/ssl/certs"
# gitlab_rails['smtp_ca_file'] = "/etc/ssl/certs/ca-certificates.crt"
When I try to sign up from the GitLab web page, I see the following in the logs:
==> /var/log/gitlab/gitlab-rails/production.log <==
Started POST "/users" for 172.32.1.111 at 2015-09-10 04:50:47 +0000
Processing by RegistrationsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"[FILTERED]", "user"=>{"name"=>"Harit Himanshu", "username"=>"harit", "email"=>"harit@myorg.com", "password"=>"[FILTERED]"}}
Completed 200 OK in 364ms (Views: 44.7ms | ActiveRecord: 24.1ms)
==> /var/log/gitlab/nginx/gitlab_access.log <==
172.32.1.111 - - [10/Sep/2015:04:50:48 +0000] "POST /users HTTP/1.1" 200 2223 "http://172.16.205.153:8080/users/sign_in" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.85 Safari/537.36"
==> /var/log/gitlab/sidekiq/current <==
2015-09-10_04:50:56.62636 2015-09-10T04:50:56.626Z 447 TID-elbtk Devise::Async::Backend::Sidekiq JID-24a487b2ee95a069f6783b43 INFO: start
==> /var/log/gitlab/gitlab-rails/production.log <==
Sent mail to harit@myorg.com (331.0ms)
==> /var/log/gitlab/sidekiq/current <==
2015-09-10_04:50:57.01071 2015-09-10T04:50:57.010Z 447 TID-elbtk Devise::Async::Backend::Sidekiq JID-24a487b2ee95a069f6783b43 INFO: fail: 0.384 sec
2015-09-10_04:50:57.01139 2015-09-10T04:50:57.011Z 447 TID-elbtk WARN: {"retry"=>true, "queue"=>"mailer", "class"=>"Devise::Async::Backend::Sidekiq", "args"=>["confirmation_instructions", "User", "4", "H7fU4LytjadCSCMyJv-S", {}], "jid"=>"24a487b2ee95a069f6783b43", "enqueued_at"=>1441859975.09268, "error_message"=>"SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol", "error_class"=>"OpenSSL::SSL::SSLError", "failed_at"=>1441859975.4031353, "retry_count"=>5, "retried_at"=>1441860657.0101695}
2015-09-10_04:50:57.01141 2015-09-10T04:50:57.011Z 447 TID-elbtk WARN: SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol
2015-09-10_04:50:57.01145 2015-09-10T04:50:57.011Z 447 TID-elbtk WARN: /opt/gitlab/embedded/lib/ruby/2.1.0/net/smtp.rb:586:in `connect'
2015-09-10_04:50:57.01146 /opt/gitlab/embedded/lib/ruby/2.1.0/net/smtp.rb:586:in `tlsconnect'
2015-09-10_04:50:57.01146 /opt/gitlab/embedded/lib/ruby/2.1.0/net/smtp.rb:554:in `do_start'
2015-09-10_04:50:57.01147 /opt/gitlab/embedded/lib/ruby/2.1.0/net/smtp.rb:520:in `start'
2015-09-10_04:50:57.01147 /opt/gitlab/embedded/service/gem/ruby/2.1.0/gems/mail-2.6.3/lib/mail/network/delivery_methods/smtp.rb:112:in `deliver!'
What is the issue? I am able to use this relay address in production with other projects.
What am I missing?
Thanks
I was having the same issue with GitLab CE 8.11.4. I found that these settings worked:
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = false
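These two settings matter because smtp_tls = true makes the mailer speak TLS from the very first byte (implicit TLS/SMTPS, usually port 465), while relays listening on port 25 or 587 typically expect a plaintext connection that is then upgraded via STARTTLS; attempting a TLS handshake against a plaintext listener produces exactly the "unknown protocol" error in the logs above. As a sketch, paired with the relay from the question (address and port copied from the question, the comments are my reading):
gitlab_rails['smtp_address'] = "corp-myorg-com.mail.outlook.com"
gitlab_rails['smtp_port'] = 25
gitlab_rails['smtp_enable_starttls_auto'] = true  # upgrade to TLS after connecting
gitlab_rails['smtp_tls'] = false                  # do not force implicit TLS (SMTPS)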
I'm having a problem when I deploy a feature. The feature contains three bundles, and Karaf deploys these bundles fine, but once they are deployed ActiveMQ starts having problems.
The deployed bundles are simple. The "complicated" one is a Camel route that exposes a CXF endpoint and calls a mock endpoint. I have attached to this thread the .kar, the zip of that kar, and my Fuse log. The service is running, but the ActiveMQ problem always happens.
The error is always the same:
2013-05-14 15:19:48,046 | INFO | veMQ Broker: amq | ActiveMQServiceFactory$$anon$1 | ? ? | 106 - org.springframework.context - 3.1.3.RELEASE | Refreshing org.fusesource.mq.fabric.ActiveMQServiceFactory$$anon$1@33c91e: startup date [Tue May 14 15:19:48 ART 2013]; root of context hierarchy
2013-05-14 15:19:48,048 | INFO | veMQ Broker: amq | XBeanXmlBeanDefinitionReader | ? ? | 105 - org.springframework.beans - 3.1.3.RELEASE | Loading XML bean definitions from file [/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/etc/activemq.xml]
2013-05-14 15:19:48,095 | INFO | veMQ Broker: amq | DefaultListableBeanFactory | ? ? | 105 - org.springframework.beans - 3.1.3.RELEASE | Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1885c3a: defining beans [org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,org.apache.activemq.xbean.XBeanBrokerService#0]; root of factory hierarchy
2013-05-14 15:19:48,159 | INFO | veMQ Broker: amq | PListStoreImpl | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | PListStore:[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/amq/tmp_storage] started
2013-05-14 15:19:48,163 | ERROR | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Failed to start Apache ActiveMQ (amq, null). Reason: javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)[:1.6.0_30]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)[:1.6.0_30]
at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:380)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:72)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.startManagementContext(BrokerService.java:2337)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:543)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ClusteredConfiguration$$anon$3.run(ActiveMQServiceFactory.scala:307)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
2013-05-14 15:19:48,164 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) is shutting down
2013-05-14 15:19:48,168 | INFO | veMQ Broker: amq | TransportConnector | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Connector openwire Stopped
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | PListStoreImpl | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | PListStore:[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/amq/tmp_storage] stopped
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopping async queue tasks
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopping async topic tasks
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | KahaDBStore | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Stopped KahaDB
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) uptime 0.010 seconds
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | BrokerService | ? ? | 114 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Apache ActiveMQ 5.8.0.redhat-60024 (amq, null) is shutdown
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | ActiveMQServiceFactory | ? ? | 128 - org.jboss.amq.mq-fabric - 6.0.0.redhat-024 | Broker amq failed to start. Will try again in 10 seconds
2013-05-14 15:19:48,169 | INFO | veMQ Broker: amq | ActiveMQServiceFactory | ? ? | 128 - org.jboss.amq.mq-fabric - 6.0.0.redhat-024 | Exception on start: javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)[:1.6.0_30]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)[:1.6.0_30]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)[:1.6.0_30]
at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:380)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:72)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.startManagementContext(BrokerService.java:2337)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:543)[114:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at org.fusesource.mq.fabric.ActiveMQServiceFactory$ClusteredConfiguration$$anon$3.run(ActiveMQServiceFactory.scala:307)[128:org.jboss.amq.mq-fabric:6.0.0.redhat-024]
Dropbox URL to Fuse LOG https://dl.dropboxusercontent.com/u/225304/fuse.log
Dropbox URL to .kar file https://dl.dropboxusercontent.com/u/225304/PruebaFeature-1.0-SNAPSHOT.kar
For this example I used a clean Fuse installation. Any ideas what is happening? I don't know if the problem is in the configuration of ActiveMQ or something else.
This is what I receive when I list the broker in Karaf:
JBossFuse:karaf#root> activemq:query --jmxlocal
Name = KahaDBPersistenceAdapter[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/kahadb]
brokerName = amq
Transactions = []
Size = 13411
InstanceName = KahaDBPersistenceAdapter[/home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq/kahadb]
Data = [1]
type = Broker
Service = PersistenceAdapter
brokerName = amq
service = Health
CurrentStatus = Good
type = Broker
brokerName = amq
connector = clientConnectors
type = Broker
StatisticsEnabled = true
connectorName = openwire
destinationName = ActiveMQ.Advisory.MasterBroker
MemoryUsageByteCount = 0
DequeueCount = 0
type = Broker
destinationType = Topic
Name = ActiveMQ.Advisory.MasterBroker
MinEnqueueTime = 0
MaxAuditDepth = 2048
AverageEnqueueTime = 0.0
InFlightCount = 0
MemoryLimit = 67108864
brokerName = amq
EnqueueCount = 1
MaxEnqueueTime = 0
MemoryUsagePortion = 1.0
ProducerCount = 0
UseCache = true
BlockedProducerWarningInterval = 30000
AlwaysRetroactive = false
Options =
MaxProducersToAudit = 64
PrioritizedMessages = false
ConsumerCount = 0
ProducerFlowControl = true
Subscriptions = []
QueueSize = 0
MaxPageSize = 200
DispatchCount = 0
MemoryPercentUsage = 0
ExpiredCount = 0
TopicSubscribers = []
TemporaryQueues = []
Uptime = 1 minute
TemporaryTopicSubscribers = []
MemoryPercentUsage = 0
BrokerVersion = 5.8.0.redhat-60024
StatisticsEnabled = true
TotalDequeueCount = 0
TopicProducers = []
QueueSubscribers = []
Topics = [org.apache.activemq:type=Broker,brokerName=amq,destinationType=Topic,destinationName=ActiveMQ.Advisory.MasterBroker]
TotalMessageCount = 0
SslURL =
TemporaryQueueSubscribers = []
BrokerName = amq
DynamicDestinationProducers = []
Persistent = true
DataDirectory = /home/ramiro/tecPlata/jboss-fuse-6.0.0.redhat-024/data/amq
Queues = []
DurableTopicSubscribers = []
TotalConsumerCount = 0
InactiveDurableTopicSubscribers = []
JobSchedulerStoreLimit = 0
TempPercentUsage = 0
MemoryLimit = 67108864
VMURL = vm://amq
OpenWireURL = tcp://fluxit-ntb-43:61616?maximumConnections=1000
JobSchedulerStorePercentUsage = 0
TotalEnqueueCount = 1
TemporaryQueueProducers = []
StompSslURL =
TemporaryTopics = []
StompURL =
Slave = false
BrokerId = ID:fluxit-ntb-43-58596-1368558172573-0:1
TotalProducerCount = 0
StorePercentUsage = 0
brokerName = amq
StoreLimit = 107374182400
TransportConnectors = {openwire=tcp://fluxit-ntb-43:61616?maximumConnections=1000}
TemporaryTopicProducers = []
TempLimit = 53687091200
QueueProducers = []
type = Broker
The features.xml in your kar is incorrect, which causes this error. It has some bundles like:
<bundle>mvn:org.apache.felix/org.apache.felix.configadmin/1.2.4</bundle>
<bundle>mvn:org.apache.aries/org.apache.aries.util/1.0.0</bundle>
<bundle>mvn:org.apache.aries.proxy/org.apache.aries.proxy.api/1.0.0</bundle>
<bundle>mvn:org.apache.aries.blueprint/org.apache.aries.blueprint/1.0.1.redhat-60024</bundle>
Those bundles are fundamental to the container and are already installed by the container by default.
They shouldn't be in your features.xml; or, if they are there, you should set resolver="(obr)" on the feature and dependency="true" on those bundles so that the OBR resolver can kick in and avoid installing redundant bundles, as in the sketch below.
Moreover, the
<bundle>mvn:org.apache.aries.blueprint/org.apache.aries.blueprint/1.0.1.redhat-60024</bundle>
line is invalid for aries.blueprint 1.0.x; it should be
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.api/1.0.1.redhat-60024</bundle>
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.core/1.0.1.redhat-60024</bundle>
<bundle dependency="true" start-level="20">mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.cm/1.0.1.redhat-60024</bundle>
instead. Otherwise you will see errors like
ERROR: Bundle org.apache.aries.blueprint [251] EventDispatcher: Error during dispatch. (java.lang.ClassCastException: org.apache.aries.blueprint.ext.impl.ExtNamespaceHandler cannot be cast to org.apache.aries.blueprint.NamespaceHandler)
java.lang.ClassCastException: org.apache.aries.blueprint.ext.impl.ExtNamespaceHandler cannot be cast to org.apache.aries.blueprint.NamespaceHandler
This means you have two conflicting aries.blueprint bundles installed in your container, which messes up almost everything.
In summary, changing the features.xml in your kar to something like
<?xml version="1.0" encoding="UTF-8"?>
<features>
<feature name='tosMock' version='1.0.0-SNAPSHOT'>
<bundle>mvn:com.tecplata.esb.services/tosMock/1.0.0-SNAPSHOT</bundle>
</feature>
<feature name='esb-entities' version='1.0.0-SNAPSHOT'>
<bundle>mvn:com.tecplata.esb/esb-entities/1.0.0-SNAPSHOT</bundle>
</feature>
<feature name='vesselsService-sei' version='1.0.0-SNAPSHOT'>
<feature version='1.0.0-SNAPSHOT'>esb-entities</feature>
<bundle>mvn:com.tecplata.esb.services.sei/vesselsService-sei/1.0.0-SNAPSHOT</bundle>
</feature>
<feature name='vesselsVisitorService' version='1.0.0-SNAPSHOT'>
<bundle>mvn:org.apache.camel/camel-core/2.10.0.redhat-60024</bundle>
<feature version='1.0.0-SNAPSHOT'>vesselsService-sei</feature>
<bundle>mvn:com.tecplata.esb.services/vesselsVisitorService/1.0.0-SNAPSHOT</bundle>
</feature>
</features>
should make it work.
Freeman
The user has cross-posted the same question in many places. When you do this, please tell us, as the conversation gets scattered.
It is being actively discussed here:
http://fusesource.com/forums/thread.jspa?threadID=4797&tstart=0
But also posted here:
https://community.jboss.org/thread/228200