I have some issues with my TWS dynamic agent. How can I set traces to better understand the problem?
The agent is not able to connect to the server.
You need to edit the file:
<TWA_home>/TWS/ITA/cpa/config/JobManager.ini
and change the value of JobManager.loggerfl.level from 3000 (the default) to 1000.
Example:
JobManager.loggerfl.level = 1000
Then, you have to restart the dynamic agent.
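On UNIX, for example, the restart can be done with the ShutDownLwa and StartUpLwa commands (a sketch; exact paths depend on your installation, and on Windows the agent runs as a service instead):
cd <TWA_home>/TWS
./ShutDownLwa    # stop the dynamic agent
./StartUpLwa     # start it again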
This change makes the agent write more information to the files:
$TWA_HOME/TWS/stdlist/JM/JobManager_message.log
$TWA_HOME/TWS/stdlist/JM/JobManager_trace.log
Have a look at these pages for more information:
https://www.ibm.com/support/knowledgecenter/en/SSGSPN_9.2.0/com.ibm.tivoli.itws.doc_9.2/distr/src_tr/awstrdynagtlogfiles.htm
https://www.ibm.com/support/knowledgecenter/en/SSGSPN_9.2.0/com.ibm.tivoli.itws.doc_9.2/distr/src_ad/awsadconflogmesprops.htm
I also suggest increasing the size and number of the trace files. These are the relevant entries in JobManager.ini:
JobManager.trfl.level = 1000
JobManager.trhd.maxFileBytes = 5024000
JobManager.trhd.maxFiles = 10
These settings set the trace level to DEBUG (1000) and keep 10 trace files of about 5 MB each.
Related
I've recently set up some traces and extended events in SQL Server on our new virtual server to show which databases users access and whether they have logged in recently. I've set the output to be saved as a physical file on the server, rather than written to a SQL table, to save resources. The traces run as jobs at 8am each morning with a 12-hour delay so we can record as much information as possible.
Our IT department would ideally prefer nothing other than the OS on the C: drive of the virtual server, so I'd like to write the trace from my SQL script either to a different partition or to another server altogether.
I have tried putting a direct path to a different server in my code, and I have also tried a partition other than C:, but unless I write the trace/extended-event files to the C: drive I get an error message.
CREATE EVENT SESSION [LoginTraceTest] ON SERVER
ADD EVENT sqlserver.existing_connection(
    SET collect_database_name=(1), collect_options_text=(1)
    ACTION(package0.event_sequence, sqlos.task_time, sqlserver.client_pid,
           sqlserver.database_id, sqlserver.database_name, sqlserver.is_system,
           sqlserver.nt_username, sqlserver.request_id,
           sqlserver.server_principal_sid, sqlserver.session_id,
           sqlserver.session_nt_username, sqlserver.sql_text,
           sqlserver.username)),
ADD EVENT sqlserver.login(
    SET collect_database_name=(1), collect_options_text=(1)
    ACTION(package0.event_sequence, sqlos.task_time, sqlserver.client_pid,
           sqlserver.database_id, sqlserver.database_name, sqlserver.is_system,
           sqlserver.nt_username, sqlserver.request_id,
           sqlserver.server_principal_sid, sqlserver.session_id,
           sqlserver.session_nt_username, sqlserver.sql_text,
           sqlserver.username))
ADD TARGET package0.asynchronous_file_target(
    SET FILENAME = N'\\SERVER1\testFolder\LoginTrace.xel',
        METADATAFILE = N'\\SERVER1\testFolder\LoginTrace.xem');
The error I receive is this:
Msg 25641, Level 16, State 0, Line 6
For target, "package0.asynchronous_file_target", the parameter "filename" passed is invalid. Target parameter at index 0 is invalid
If I change it to another partition rather than a different server:
SET FILENAME = N'D:\Traces\LoginTrace\LoginTrace.xel',
METADATAFILE = N'D:\Traces\LoginTrace\LoginTrace.xem' );
SQL Server states that the command completed successfully, but the file isn't written to the partition.
Any ideas please as to what I can do to write the files to another partition or server?
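Two things worth checking that aren't shown in the question: the path is opened by the SQL Server service account (not your own login), so that account needs write permission to the target folder or share, and the .xel file is only created once the session is actually started. A minimal sketch:
-- Defining the session writes nothing; start it to create the target file.
ALTER EVENT SESSION [LoginTraceTest] ON SERVER STATE = START;
-- Stop it again when you are done collecting.
ALTER EVENT SESSION [LoginTraceTest] ON SERVER STATE = STOP;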
I am using Atomikos for JTA transactions.
I have the following setting for JTA:
import com.atomikos.icatch.jta.UserTransactionImp;

UserTransactionImp userTransactionImp = new UserTransactionImp();
userTransactionImp.setTransactionTimeout(900); // 900 seconds = 15 minutes
but when my code performs a JTA transaction that takes more than 5 minutes (the default value), it throws this exception:
Caused by: com.atomikos.icatch.RollbackException: Prepare: NO vote
at com.atomikos.icatch.imp.ActiveStateHandler.prepare(ActiveStateHandler.java:231)
at com.atomikos.icatch.imp.CoordinatorImp.prepare(CoordinatorImp.java:681)
at com.atomikos.icatch.imp.CoordinatorImp.terminate(CoordinatorImp.java:970)
at com.atomikos.icatch.imp.CompositeTerminatorImp.commit(CompositeTerminatorImp.java:82)
at com.atomikos.icatch.imp.CompositeTransactionImp.commit(CompositeTransactionImp.java:336)
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:190)
... 25 common frames omitted
It looks like it is using the default JTA transaction timeout, even though I am explicitly setting the timeout to 15 minutes (900 seconds).
I tried the following properties in the application.properties file, but it still uses the default timeout value (300 seconds):
spring.jta.atomikos.properties.max-timeout=600000
spring.jta.atomikos.properties.default-jta-timeout=10000
I have also tried the property below, but no luck:
spring.transaction.default-timeout=900
Can anyone suggest whether I need any other setting? I am using the WildFly plugin, Spring Boot, and the Atomikos API for JTA transactions.
From the Atomikos documentation:
com.atomikos.icatch.max_timeout
Specifies the maximum timeout (in milliseconds) that can be allowed for transactions. Defaults to 300000. This means that calls to UserTransaction.setTransactionTimeout() with a value higher than configured here will be max'ed to this value. For 4.x or higher, a value of 0 means no maximum (i.e., unlimited timeouts are allowed).
Indeed, if you take a look at the Atomikos library source code (for both versions 4.0.0M4 and 3.7.0), in the createCC method of class com.atomikos.icatch.imp.TransactionServiceImp (around line 387) you will see:
if ( timeout > maxTimeout_ ) {
    timeout = maxTimeout_;
    //FIXED 20188
    LOGGER.logWarning ( "Attempt to create a transaction with a timeout that exceeds maximum - truncating to: " + maxTimeout_ );
}
So any attempt to specify a longer transaction timeout gets capped to maxTimeout_, which defaults to 300000 ms if nothing else is specified during initialization. In this case, the requested 900 seconds is silently truncated to 300.
You can set the com.atomikos.icatch.max_timeout as a JVM argument with:
-Dcom.atomikos.icatch.max_timeout=900000
or you could use the "Advanced Case" recipe from the "Configuration for Spring" section of the Atomikos documentation.
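For the Spring Boot setup shown in the question, the equivalent would be raising both values (in milliseconds) at least as high as the desired 15-minute timeout; a sketch using the property names already tried above:
# application.properties - raise the cap first, then the default (both in ms)
spring.jta.atomikos.properties.max-timeout=900000
spring.jta.atomikos.properties.default-jta-timeout=900000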
I resolved a similar problem where the configuration in Spring Boot's application.yml (or application.properties) did not get picked up.
There was even a log message about this that I later found mentioned in the official docs.
Instead, I added a transactions.properties file (next to application.yml) where I set my desired properties:
# Atomikos properties
# Service must be defined!
com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
# Override default properties.
com.atomikos.icatch.log_base_dir = ./atomikos
Some properties can be set in transactions.properties and others in a jta.properties file.
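For the timeout problem in this question, the same file can carry the timeout properties; a sketch, assuming the file is on the classpath and read at startup (values in milliseconds):
# transactions.properties (or jta.properties)
com.atomikos.icatch.max_timeout = 900000
com.atomikos.icatch.default_jta_timeout = 900000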
I am trying to fetch more than 1000 records from Dynamics AX using a document service, but I am unable to fetch more than 1000 records.
We are using Mule ESB to fetch the records from Dynamics AX using the Dynamics AX connector. We have also tried using a static query, but we get the following error:
Connection Reset(java.net.SocketException) invoking
https://10.10.1.23:9333/router :connection reset
When we test the connection in Mule, it reports that the test connection is successful.
Is there any possible way to fetch thousands of records from Dynamics AX?
Thanks in advance
You might need to tweak the settings of the Windows Gateway Services instance running at https://10.10.1.23:9333/router.
Specifically, look at the AX connector settings such as the max buffer size and max received message size. These are tweaked in the service's config file, Mule.SelfHost.exe.config, which should reside in the installation directory. After tweaking, remember to restart the service!
See this guide for more details:
https://docs.mulesoft.com/mule-user-guide/v/3.8/windows-gateway-services-guide#configuring-dynamics-crm-and-ax-connector-settings
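I don't have the exact entry names at hand, but they are typically WCF-style binding attributes; a purely hypothetical sketch of what to look for inside Mule.SelfHost.exe.config (check the guide above for the real names):
<!-- Hypothetical sketch: maxBufferSize / maxReceivedMessageSize are standard
     WCF binding attributes; the surrounding element and binding names in the
     actual config file may differ. -->
<binding name="AxConnectorBinding"
         maxBufferSize="2147483647"
         maxReceivedMessageSize="2147483647" />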
I know that a constant delay can be set in settings.py:
DOWNLOAD_DELAY = 2
However, a delay of 2 seconds is not efficient enough. If I set DOWNLOAD_DELAY = 0, the crawler is able to crawl about 10 pages; after that, the target site returns something like "you are requesting too frequently".
What I want to do is keep the download delay at 0 and, once the "requesting too frequently" message is found in the HTML, change the delay to 2 seconds. After a while, switch back to zero.
Is there any module that can do this? Or any better idea to handle such a case?
Update:
I found that there is an extension called AutoThrottle, but is it able to implement custom logic like this?
if (requesting too frequently) is found
increase the DOWNLOAD_DELAY
If, right after you get the anti-spider page, you are able to get a data page within 2 seconds, then what you are asking for probably requires writing a downloader middleware that:
1. checks for the anti-spider page;
2. moves all scheduled requests to a renew-queue;
3. starts a looping call when the spider is idle, taking requests from the renew-queue (the looping interval is your hack for a new download delay);
4. decides when the download delay is no longer necessary (this requires some testing), then stops the loop and reschedules all requests from the renew-queue back to the Scrapy scheduler.
You will need to use a Redis queue in the case of a distributed crawl. A minimal sketch of the detection-and-back-off part is shown below.
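Here is that sketch (the ban-page marker text and the delay values are assumptions, and the per-host slot lookup relies on Scrapy internals that can change between versions):
class AdaptiveDelayMiddleware:
    """Downloader middleware: back off when the anti-spider page appears,
    then creep back toward zero delay. Enable via DOWNLOADER_MIDDLEWARES."""

    BAN_MARKER = b"requesting too frequently"  # assumed marker text

    def process_response(self, request, response, spider):
        downloader = spider.crawler.engine.downloader
        # slots/_get_slot_key are Scrapy internals; names vary by version.
        slot = downloader.slots.get(downloader._get_slot_key(request, spider))
        if slot is None:
            return response
        if self.BAN_MARKER in response.body:
            slot.delay = 2.0                          # back off to 2 s
            return request.replace(dont_filter=True)  # re-fetch banned page
        slot.delay = max(slot.delay - 0.1, 0.0)       # recover gradually
        return response
Returning a Request from process_response makes Scrapy reschedule it, so the banned page is retried at the slower rate instead of being lost.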
With the download delay set to 0, in my experience throughput can easily go above 1000 items/min. If the anti-spider page pops up after only 10 responses, then it is not worth the effort.
Instead, maybe you can try to find out how fast your target server allows you to go: perhaps 1.5 s, 1 s, 0.7 s, 0.5 s, etc. Then redesign your product taking into consideration the throughput your crawler can actually achieve.
You can use the AutoThrottle extension now. It is turned off by default. You can add these parameters to your project's settings.py file to enable it:
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 300
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = True
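If you only want this behaviour for one spider rather than the whole project, the same settings can also go in the spider's custom_settings (a sketch; the spider name is made up):
import scrapy

class ExampleSpider(scrapy.Spider):  # hypothetical spider
    name = "example"

    # Per-spider override of the project-wide settings shown above.
    custom_settings = {
        "AUTOTHROTTLE_ENABLED": True,
        "AUTOTHROTTLE_START_DELAY": 5,
        "AUTOTHROTTLE_MAX_DELAY": 300,
    }
custom_settings is applied when the spider starts, so the project-wide values in settings.py are overridden only for this spider.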
Yes, you can use the time module to add a dynamic delay:
import time

for i in range(10):
    # ... operation 1 ...
    time.sleep(i)  # sleeps i seconds, so the delay grows each iteration
    # ... operation 2 ...
Now there is a delay between operation 1 and operation 2.
Note:
time.sleep() interprets its argument as a number of seconds.
I'm trying to use the asadmin interface to monitor a thread-pool on GlassFish 3.1.1. I'm executing the following command:
asadmin get -m server.network.my-listener.thread-pool.*
and I'm getting data back, but most of it has lastsampletime = -1 (so the related data is zero and therefore worthless).
Note: I've also tried the REST interface, which I believe asadmin delegates to, and the JMX interface. Same problem: much of the data has lastsampletime = -1.
I've already turned monitoring to HIGH for all modules. What am I missing?
It seems that redeploying my application was necessary for the monitoring to actually pick up values. Perhaps I interpreted the manual incorrectly, but it seems to suggest that a restart/redeploy shouldn't be required:
Oracle GlassFish Server 3.1 Administration Guide
Also, it is weird that the following shows there is no monitoring data:
asadmin get -m server.thread-pools.thread-pool.http-thread-pool.*
Instead you must go through a specific network listener like:
asadmin get -m server.network.http-listener-2.thread-pool.*
It also took me by surprise that enabling thread-pool monitoring IS NOT enough to see thread pool statistics. You must also enable http-service monitoring:
asadmin enable-monitoring
asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=HIGH
asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH
That's all you should need to do. In summary:
1. Enable monitoring, set to HIGH, for the http-service module on the DAS, stand-alone instance, or cluster you want to monitor.
2. Deploy an app to the DAS, stand-alone instance, or cluster and make HTTP requests.
3. Run asadmin get -m <instancename>.network.<listener>.thread-pool.*
It looks like you are monitoring the DAS, since you are using asadmin get -m server.network.my-listener.thread-pool.*.
I deployed a simple WAR to the DAS and made a bunch of HTTP requests. I see that corethreads-count and maxthreads-count have a last sample time of -1, while the remaining statistics have actual last sample times.
asadmin get -m "server.network.http-listener-1.thread-pool.*"
server.network.http-listener-1.thread-pool.corethreads-count = 0
server.network.http-listener-1.thread-pool.corethreads-description = Core number of threads in the thread pool
server.network.http-listener-1.thread-pool.corethreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.corethreads-name = CoreThreads
server.network.http-listener-1.thread-pool.corethreads-starttime = 1320764890444
server.network.http-listener-1.thread-pool.corethreads-unit = count
server.network.http-listener-1.thread-pool.currentthreadcount-count = 5
server.network.http-listener-1.thread-pool.currentthreadcount-description = Provides the number of request processing threads currently in the listener thread pool
server.network.http-listener-1.thread-pool.currentthreadcount-lastsampletime = 1320765351708
server.network.http-listener-1.thread-pool.currentthreadcount-name = CurrentThreadCount
server.network.http-listener-1.thread-pool.currentthreadcount-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadcount-unit = count
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 0
server.network.http-listener-1.thread-pool.currentthreadsbusy-description = Provides the number of request processing threads currently in use in the listener thread pool serving requests
server.network.http-listener-1.thread-pool.currentthreadsbusy-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.currentthreadsbusy-name = CurrentThreadsBusy
server.network.http-listener-1.thread-pool.currentthreadsbusy-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadsbusy-unit = count
server.network.http-listener-1.thread-pool.dotted-name = server.network.http-listener-1.thread-pool
server.network.http-listener-1.thread-pool.maxthreads-count = 0
server.network.http-listener-1.thread-pool.maxthreads-description = Maximum number of threads allowed in the thread pool
server.network.http-listener-1.thread-pool.maxthreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.maxthreads-name = MaxThreads
server.network.http-listener-1.thread-pool.maxthreads-starttime = 1320764890443
server.network.http-listener-1.thread-pool.maxthreads-unit = count
server.network.http-listener-1.thread-pool.totalexecutedtasks-count = 31
server.network.http-listener-1.thread-pool.totalexecutedtasks-description = Provides the total number of tasks, which were executed by the thread pool
server.network.http-listener-1.thread-pool.totalexecutedtasks-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.totalexecutedtasks-name = TotalExecutedTasksCount
server.network.http-listener-1.thread-pool.totalexecutedtasks-starttime = 1320764890444
server.network.http-listener-1.thread-pool.totalexecutedtasks-unit = count
Command get executed successfully.
To enable monitoring instantly, without a restart, use the enable-monitoring command:
enable-monitoring
enable-monitoring --modules jvm=LOW
enable-monitoring --modules thread-pool=HIGH
enable-monitoring --modules http-service=HIGH
enable-monitoring --modules jdbc-connection-pool=HIGH
The trick is that both the thread-pool and http-service modules must be set to a high level to get monitoring info.
For more info, refer to https://docs.oracle.com/cd/E26576_01/doc.312/e24928/monitoring.htm#GSADG00558