TomEE disable TransactionManager defaultTransactionTimeout?

Is there a way to disable the timeout without getting rid of the transaction manager in TomEE?
My sample transaction manager from tomee.xml is:
<TransactionManager id="MyTransactionManager" type="TransactionManager">
adler32Checksum = true
bufferSizeKb = 32
checksumEnabled = true
<!--defaultTransactionTimeout = 10 minutes-->
defaultTransactionTimeout = 10000 minutes
flushSleepTime = 50 Milliseconds
logFileDir = txlog
logFileExt = log
logFileName = howl
maxBlocksPerFile = -1
maxBuffers = 0
maxLogFiles = 2
minBuffers = 4
threadsWaitingForceThreshold = -1
txRecovery = false
</TransactionManager>
But sometimes the transaction can be longer than 10000 minutes (~ 7 days).
TomEE version 1.7.4

7 Days? Holy smokes! What on earth are you doing? :D
Unfortunately, if you remove it, it will go back to the default. However, you could simply put the following, which would work just fine:
<TransactionManager id="MyTransactionManager" type="TransactionManager">
...
defaultTransactionTimeout = 10 days
...
</TransactionManager>

Alternatively, you can wrap the invocation in a bean that sets the transaction timeout for that particular case using UserTransaction: http://docs.oracle.com/javaee/6/api/javax/transaction/UserTransaction.html#setTransactionTimeout(int)
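For illustration, a minimal sketch of that approach with a bean-managed transaction (the bean name, method name, and the 10-day timeout are made up for this example):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class LongRunningJobBean {

    @Resource
    private UserTransaction userTransaction;

    public void runLongJob() throws Exception {
        // setTransactionTimeout takes seconds and must be called before begin()
        userTransaction.setTransactionTimeout(10 * 24 * 60 * 60); // 10 days
        userTransaction.begin();
        try {
            // ... the long-running work goes here ...
            userTransaction.commit();
        } catch (Exception e) {
            userTransaction.rollback();
            throw e;
        }
    }
}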

Related

Apache Flume with 2 different interceptors on same source

I am trying to add two different interceptors to the same source and send the intercepted data to two different channels.
I was not able to configure this and couldn't find any documentation about it. I am also having some issues with the channel selectors: I'm not sure how to select a channel based on the different interceptors.
Here is my code so far:
a1.sources = syslog_udp
a1.channels = chan1 chan2
# sink1 and sink2 are two different Kafka sinks
a1.sinks = sink1 sink2
a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.keepFields = true
a1.sources.syslog_udp.interceptors = i1 i2
a1.sources.syslog_udp.interceptors.i1.type = regex_filter
a1.sources.syslog_udp.interceptors.i1.regex = '<regex_string1>'
a1.sources.syslog_udp.interceptors.i1.excludeEvents = false
a1.sources.syslog_udp.interceptors.i2.type = regex_filter
a1.sources.syslog_udp.interceptors.i2.regex = '<regex_string1>'|'<regex_string2>'
a1.sources.syslog_udp.interceptors.i2.excludeEvents = false
a1.sources.syslog_udp.selector.type = multiplexing
a1.sources.syslog_udp.channels = chan1 chan2
a1.channels.chan1.type = memory
a1.channels.chan1.capacity = 200
a1.channels.chan2.type = memory
a1.channels.chan2.capacity = 200
There seems to be no straightforward setup for this.
A workaround for this kind of layout is to put a single, broader interceptor in one agent, pipe its output to an Avro sink, then set up a second agent with an Avro source and configure the second interceptor there.
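A rough sketch of that workaround, reusing the source settings from the question (the agent names, sink names, and the forwarding port 4545 are made up for illustration):

# agent a1: syslog source, first interceptor, everything forwarded over Avro
a1.sources = syslog_udp
a1.channels = chan1
a1.sinks = fwd
a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.channels = chan1
a1.sources.syslog_udp.interceptors = i1
a1.sources.syslog_udp.interceptors.i1.type = regex_filter
a1.sources.syslog_udp.interceptors.i1.regex = <regex_string1>
a1.channels.chan1.type = memory
a1.channels.chan1.capacity = 200
a1.sinks.fwd.type = avro
a1.sinks.fwd.channel = chan1
a1.sinks.fwd.hostname = localhost
a1.sinks.fwd.port = 4545

# agent a2: Avro source, second interceptor, then the second Kafka sink
a2.sources = avro_in
a2.channels = chan2
a2.sinks = sink2
a2.sources.avro_in.type = avro
a2.sources.avro_in.bind = 0.0.0.0
a2.sources.avro_in.port = 4545
a2.sources.avro_in.channels = chan2
a2.sources.avro_in.interceptors = i2
a2.sources.avro_in.interceptors.i2.type = regex_filter
a2.sources.avro_in.interceptors.i2.regex = <regex_string2>
a2.channels.chan2.type = memory
a2.channels.chan2.capacity = 200
a2.sinks.sink2.channel = chan2
# a2.sinks.sink2.type = ... (whichever Kafka sink you already use)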

Configure a DataSource in TomEE in system.properties instead of tomee.conf

I'm able to configure a DataSource resource in TomEE by modifying the "conf/tomee.xml" file. However, it's somewhat awkward to automate this modification, as I have to insert the DataSource definition before the closing "</tomee>" line. A comment on a related SO post of mine suggested that it's easier to append to the "system.properties" file.
So, I tried translating this:
<Resource id="sus2" type="DataSource">
JdbcDriver = oracle.jdbc.driver.OracleDriver
MaxActive = 10
MinIdle = 2
MaxIdle = 2
MaxWait = 10000
JdbcUrl = jdbc:oracle:thin:@${DB_HOST}:${DB_PORT}:${DB_SID}
UserName = ${DB_USER}
Password = ${DB_PASSWORD}
</Resource>
Which works, to the following:
db = new://Resource?type=DataSource
db.id = Resource/sus2
db.JdbcDriver = oracle.jdbc.driver.OracleDriver
db.MaxActive = 10
db.MinIdle = 2
db.MaxIdle = 2
db.MaxWait = 10000
db.JdbcUrl = jdbc:oracle:thin:@${DB_HOST}:${DB_PORT}:${DB_SID}
db.UserName = ${DB_USER}
db.Password = ${DB_PASSWORD}
which does not work. It fails, saying it couldn't find the "Resource/sus2" resource.
The configuration reference can be found at http://tomee.apache.org/ng/admin/configuration/resources.html
You have to understand that the XML attributes become URI query parameters and that the property prefix becomes the resource id; then I think it will work.
In other words:
db = new://Resource?type=DataSource
becomes
sus2 = new://Resource?type=DataSource
and your db.id doesn't do anything - I think it is logged.
In short: replace all of your "db" prefixes with "sus2" and it will work.
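So the working system.properties translation would look something like this (same values as in the question, just using the sus2 prefix and no separate id property):

sus2 = new://Resource?type=DataSource
sus2.JdbcDriver = oracle.jdbc.driver.OracleDriver
sus2.MaxActive = 10
sus2.MinIdle = 2
sus2.MaxIdle = 2
sus2.MaxWait = 10000
sus2.JdbcUrl = jdbc:oracle:thin:@${DB_HOST}:${DB_PORT}:${DB_SID}
sus2.UserName = ${DB_USER}
sus2.Password = ${DB_PASSWORD}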

celery worker not publishing message to the rabbitmq?

I have a setup where celery_result_backend has been configured to 'amqp'. I can see my tasks getting executed by the worker in the logs, but it creates a queue with the task id whose status is expired, and I am not getting the result (result = AsyncResult(taskid); result.get() hangs). I tried all the supported backends:
1) MySQL: it is not putting data into the Celery-created tables.
2) Redis: it is not putting data into the db.
I have two CentOS machines:
1) On the first I call the delay method to send the task to the proper RabbitMQ queue, and the worker is listening to that queue, from where it picks up the task and processes it (I can see the task in the queue and it gets executed by the worker on machine 2, but the result is not put into the backend). Here I call result.get() and it hangs.
2) On the second the worker is running to execute the task. It executes the task, but I think it is not able to store the result.
Settings:
RABBITMQ_BROKER_HOST = '10.213.166.133'
RABBITMQ_BROKER_PORT = dqms_settings.RABBITMQ_BROKER_PORT
RABBITMQ_BROKER_VHOST = dqms_settings.RABBITMQ_BROKER_VHOST
RABBITMQ_BROKER_USERNAME = dqms_settings.RABBITMQ_BROKER_USERNAME
RABBITMQ_BROKER_PASSWORD = dqms_settings.RABBITMQ_BROKER_PASSWORD
BROKER_URL = 'amqp://%s:%s@%s:%s/%s' % (RABBITMQ_BROKER_USERNAME,
RABBITMQ_BROKER_PASSWORD,
RABBITMQ_BROKER_HOST,
RABBITMQ_BROKER_PORT,
RABBITMQ_BROKER_VHOST)
#CELERY_TASK_RESULT_EXPIRES = 18000
#CELERY_IGNORE_RESULT = True
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
#CELERY_RESULT_BACKEND = 'db+mysql://svcacct-dqms:s3cretP#ssw0rd#10.213.166.202:3306/dqms'
#CELERY_RESULT_BACKEND = 'amqp'
#CELERY_AMQP_TASK_RESULT_EXPIRES = 1000
#CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
CELERYD_PREFETCH_MULTIPLIER = dqms_settings.CELERYD_PREFETCH_MULTIPLIER
CELERY_DEFAULT_QUEUE = dqms_settings.CELERY_DEFAULT_QUEUE
CELERY_DEFAULT_EXCHANGE_TYPE = dqms_settings.CELERY_DEFAULT_EXCHANGE_TYPE
CELERY_DEFAULT_ROUTING_KEY = dqms_settings.CELERY_DEFAULT_ROUTING_KEY
CELERY_QUEUES = dqms_settings.CELERY_QUEUES
CELERY_ROUTES = dqms_settings.CELERY_ROUTES
CELERYD_HIJACK_ROOT_LOGGER = dqms_settings.CELERYD_HIJACK_ROOT_LOGGER
CELERY_ACKS_LATE = dqms_settings.CELERY_ACKS_LATE
CELERY_RESULT_BACKEND = 'redis://:s3cretP#ssw0rd#10.213.166.204:6379/5' #'djcelery.backends.database.DatabaseBackend'
#CELERY_REDIS_MAX_CONNECTIONS = 6
#CELERY_ALWAYS_EAGER = False
Can someone help with why it is not putting the result in the queue?
This is an issue that is happening quite commonly now.
Setting CELERY_ALWAYS_EAGER to True will do the trick.
However, this is not the best solution in a production scenario.
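For context, CELERY_ALWAYS_EAGER makes delay() execute the task synchronously in the calling process, bypassing the broker and result backend entirely, so it hides the problem rather than fixing it. A minimal sketch of the two options, using the old-style Celery 3.x setting names from the question (host and expiry values are only examples):

# option 1: development/debugging only - run the task in-process, no broker round-trip
CELERY_ALWAYS_EAGER = True

# option 2: keep the worker setup, but make sure exactly one result backend is
# active and results are neither ignored nor expired before you fetch them
CELERY_RESULT_BACKEND = 'redis://10.213.166.204:6379/5'
CELERY_IGNORE_RESULT = False
CELERY_TASK_RESULT_EXPIRES = 3600  # seconds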

How to use regex_extractor selector and multiplexing interceptor together in flume?

I am testing Flume to load data into HBase and thinking about parallel data loading using Flume's selector and interceptor, because of the speed gap between source and sink.
So, what I want to do with Flume is:
create an Event header with the interceptor's regex_extractor type
multiplex Events by that header to two or more channels with the selector's multiplexing type
all in one source-channel-sink flow.
I tried the configuration below.
agent.sources = tailsrc
agent.channels = mem1 mem2
agent.sinks = std1 std2
agent.sources.tailsrc.type = exec
agent.sources.tailsrc.command = tail -F /home/flumeuser/test/in.txt
agent.sources.tailsrc.batchSize = 1
agent.sources.tailsrc.interceptors = i1
agent.sources.tailsrc.interceptors.i1.type = regex_extractor
agent.sources.tailsrc.interceptors.i1.regex = ^(\\d)
agent.sources.tailsrc.interceptors.i1.serializers = t1
agent.sources.tailsrc.interceptors.i1.serializers.t1.name = type
agent.sources.tailsrc.selector.type = multiplexing
agent.sources.tailsrc.selector.header = type
agent.sources.tailsrc.selector.mapping.1 = mem1
agent.sources.tailsrc.selector.mapping.2 = mem2
agent.sinks.std1.type = file_roll
agent.sinks.std1.channel = mem1
agent.sinks.std1.batchSize = 1
agent.sinks.std1.sink.directory = /var/log/flumeout/1
agent.sinks.std1.rollInterval = 0
agent.sinks.std2.type = file_roll
agent.sinks.std2.channel = mem2
agent.sinks.std2.batchSize = 1
agent.sinks.std2.sink.directory = /var/log/flumeout/2
agent.sinks.std2.rollInterval = 0
agent.channels.mem1.type = memory
agent.channels.mem1.capacity = 100
agent.channels.mem2.type = memory
agent.channels.mem2.capacity = 100
But it doesn't work!
When the selector part is removed, there are some interceptor debugging messages in Flume's log, but when the selector and interceptor are used together, there is nothing.
Is there a wrong expression, or something I missed?
Thanks for reading. :)
I found it.
In the Flume log there was a warning message as below:
2013-10-10 16:34:20,514 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:571)] Removed tailsrc due to Failed to configure component!
So I added the line below:
agent.sources.tailsrc.channels = mem1 mem2
and then it works!
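For reference, the relevant part of the working source configuration then reads roughly as follows (only the channels line is new; everything else is unchanged from the question):

agent.sources.tailsrc.type = exec
agent.sources.tailsrc.command = tail -F /home/flumeuser/test/in.txt
agent.sources.tailsrc.channels = mem1 mem2
agent.sources.tailsrc.interceptors = i1
agent.sources.tailsrc.interceptors.i1.type = regex_extractor
agent.sources.tailsrc.interceptors.i1.regex = ^(\\d)
agent.sources.tailsrc.interceptors.i1.serializers = t1
agent.sources.tailsrc.interceptors.i1.serializers.t1.name = type
agent.sources.tailsrc.selector.type = multiplexing
agent.sources.tailsrc.selector.header = type
agent.sources.tailsrc.selector.mapping.1 = mem1
agent.sources.tailsrc.selector.mapping.2 = mem2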

Xdebug and Webgrind couldn't get to work

I installed Xdebug and Webgrind on Windows XAMPP 1.7.7 using this link: link. Going to http://localhost/webgrind gives something like the below instead of showing the profiled script. There is no dropdown menu or anything to select.
Select a cachegrind file above
(looking in C:\xampp\htdocs\webgrind\tmp/ for files matching /^cachegrind.out..+..+$/)
But I have 2 files in the tmp folder whose names start with cachegrind.out.
My settings in xampp/php/php.ini are:
zend_extension = "C:\xampp\php\ext\php_xdebug-2.2.2-5.3-vc9.dll"
xdebug.profiler_output_dir = "C:\xampp\htdocs\webgrind\tmp"
xdebug.profiler_enable = 1
xdebug.profiler_enable_trigger = 0
xdebug.profiler_output_name = cachegrind.out.%t.%p
webgrind/config.php settings
static $storageDir = 'C:\xampp\htdocs\webgrind\tmp';
static $profilerDir = 'C:\xampp\htdocs\webgrind\tmp';
I also tried these:
static $storageDir = '';
static $profilerDir = '/tmp';
But no result. How can I get it to work?
I got this working accidentally, playing with values in the xampp/php/php.ini file.
Here is the thing: change the value of xdebug.profiler_append from 0 to 1.
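With the settings from the question, the relevant php.ini block then looks roughly like this (same paths as above; the only change is the append flag):

zend_extension = "C:\xampp\php\ext\php_xdebug-2.2.2-5.3-vc9.dll"
xdebug.profiler_output_dir = "C:\xampp\htdocs\webgrind\tmp"
xdebug.profiler_enable = 1
xdebug.profiler_enable_trigger = 0
xdebug.profiler_output_name = cachegrind.out.%t.%p
xdebug.profiler_append = 1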
I had the same problem; it took me a long day. With XAMPP 1.7.7 on my Windows 7 OS I found that php_xdebug-2.2.2-5.3-vc9.dll was bad (it was even named php_xdebug-2.2.2-5.3-vc9.exe in the first download). After a look at php.ini I found the good default Xdebug section right there, with extra lines, so I deleted the semicolons:
[XDebug]
zend_extension = "C:\Programs\xampp\php\ext\php_xdebug.dll"
; xdebug.auto_trace
; Type: boolean, Default value: 0
; When this setting is set to on, the tracing of function calls will be enabled just before the
; script is run. This makes it possible to trace code in the auto_prepend_file.
;xdebug.auto_trace = 0
; xdebug.collect_includes
; Type: boolean, Default value: 1
Yes, the good php_xdebug.dll comes with XAMPP, so use it and don't look far. :) Maybe that will help you.
It might also be a matter of relative addressing due to portable XAMPP.
You can change your Xdebug paths like this:
before
[XDebug]
zend_extension = "\xampp\php\ext\php_xdebug.dll"
xdebug.profiler_append = 0
xdebug.profiler_enable = 1
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "\xampp\tmp"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
;xdebug.trace_output_dir = "\xampp\tmp"
after
[XDebug]
zend_extension = "D:\xampp\php\ext\php_xdebug.dll"
xdebug.profiler_append = 0
xdebug.profiler_enable = 1
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "D:\xampp\tmp"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "127.0.0.1"
;xdebug.trace_output_dir = "\xampp\tmp"