How to Configure the Web Connector from metrics.log Values - apache

I am reviewing the ColdFusion Web Connector settings in workers.properties to hopefully address a sporadic response time issue.
I've been advised to inspect the output from the metrics.log file (CF Admin > Debugging & Logging > Debug Output Settings > Enable Metric Logging) and use this to inform the adjustments to the settings max_reuse_connections, connection_pool_size and connection_pool_timeout.
My question is: How do I interpret the metrics.log output to inform the choice of setting values? Is there any documentation that can guide me?
Examples from a period of over 120 hours:
95% of entries -
"Information","scheduler-2","06/16/14","08:09:04",,"Max threads: 150 Current thread count: 4 Current thread busy: 0 Max processing time: 83425 Request count: 9072 Error count: 72 Bytes received: 1649 Bytes sent: 22768583 Free memory: 124252584 Total memory: 1055326208 Active Sessions: 1396"
Occurred once -
"Information","scheduler-2","06/13/14","14:20:22",,"Max threads: 150 Current thread count: 10 Current thread busy: 5 Max processing time: 2338 Request count: 21 Error count: 4 Bytes received: 155 Bytes sent: 139798 Free memory: 114920208 Total memory: 1053097984 Active Sessions: 6899"
Environment:
3 x Windows 2008 R2 (hardware load balanced)
ColdFusion 10 (update 12)
Apache 2.2.21

Richard, I realize your question here is from 2014, and perhaps you have since resolved it, but I suspect your problem was that the port set in the CF admin (below the "metrics log" checkbox) was set to 8500, which is your internal web server (used by the CF admin only, typically, if at all). That's why the numbers are not changing. (And for those who don't enable the internal web server at installation of CF, or later, most values in the metrics log are null).
I address this problem in a blog post I happened to do just last week: http://www.carehart.org/blog/client/index.cfm/2016/3/2/cf_metrics_log_part1
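As a rough illustration only (not a recommendation), those settings live in workers.properties under the worker.<name>.* prefix; the port and the numeric values below are placeholders that should be sized from the peak "Current thread busy" and "Request count" figures you actually observe once the metrics port points at the right connector:
worker.list=cfusion
worker.cfusion.type=ajp13
worker.cfusion.host=localhost
worker.cfusion.port=8012
# Placeholder tuning values - derive these from your own observed peak concurrency:
worker.cfusion.max_reuse_connections=250
worker.cfusion.connection_pool_size=500
worker.cfusion.connection_pool_timeout=60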
Hope any of this helps.

Related

concurrent testing of login functionality with 50 users/threads is not working

I have set the thread count = 50
ramp-up period = 0
For 48 threads the test passes; for the remaining 2 threads there is no failure captured in the Selenium log files.
I am expecting a concurrent login of 50 users with a 0 ramp-up period, but I am not able to find the exact reason for the failure. Please suggest fixes to handle this scenario.
Check the jmeter.log file for any suspicious entries.
Add a View Results Tree listener to your test plan - it will allow you to inspect request and response details.
50 real browsers might be too much for a single machine. As per the WebDriver Sampler documentation:
From experience, the number of browser (threads) that the reader creates should be limited by the following formula:
C = N + 1
where
C = Number of Cores of the host running the test
and N = Number of Browser (threads).
As per the Firefox 62.0 system requirements:
512MB of RAM / 2GB of RAM for the 64-bit version
So you would need a machine with 51 cores and 100 GB of RAM to ensure there is no JMeter-side bottleneck. If your machine's hardware specifications are lower, you will have to go for Remote Testing.
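If you do go the Remote Testing route, a minimal sketch of driving two remote JMeter engines from a controller in non-GUI mode looks like this (the test plan name and host addresses are placeholders):
jmeter -n -t login_test.jmx -R 192.168.0.101,192.168.0.102 -l results.jtl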

Asterisk: "TLS clean shutdown alert reading data" after 120s in SIP call

I am using a Secure SIP trunk provided by Twilio to implement an IVR. I have implemented it per Twilio's Asterisk configuration guide, installed SRTP to /usr/local/lib, and applied the configuration in https://wiki.asterisk.org/wiki/display/AST/Secure+Calling+Tutorial.
The problem is that any call longer than 2 minutes cannot be ended cleanly and causes Asterisk to restart.
sip.conf (using chan_sip, not pjsip):
[general]
; other configuration lines removed
tlsenable=yes
tlsbindaddr=0.0.0.0
tlscertfile=/etc/pki/tls/private/pbx.pem
tlscafile=/etc/pki/tls/private/gd_bundle-g2-g1.crt
tlscipher=ALL
tlsclientmethod=tlsv1
tlsdontverifyserver=yes
[twilio-trunk](!)
type=peer
context=from-twilio ;Which dialplan to use for incoming calls
dtmfmode=rfc4733
canreinvite=no
insecure=port,invite
transport=tls
qualify=yes
encryption=yes
media_encryption=sdes
I can make and receive calls just fine, and I have confirmed the calls are encrypted, both via Wireshark and through Twilio's own support queue.
At exactly 120 seconds into every call, this debug pops up:
[Dec 6 13:14:39] DEBUG[30015]: iostream.c:157 iostream_read: TLS clean shutdown alert reading data
[Dec 6 13:14:39] DEBUG[30015]: chan_sip.c:2905 sip_tcptls_read: SIP TCP/TLS server has shut down
The call continues to flow bi-directionally, the caller never knows there is a problem until they hit a hangup in context, i.e. h,1,Hangup(). Then Asterisk is restarted (new PID) and the caller hangs in limbo for another 5 minutes before the call times out with a fast busy. Twilio confirms they see the BYE and return an ACK at the point of the Hangup.
I was on 13.11 and updated to 15.1.3 with the same result: calls longer than 120s produce the TLS message in the debug log and an Asterisk restart.
No Google results out there, and Twilio hasn't been really helpful. Can anyone shed some light on what is happening and where I need to look next?
More logs:
[Dec 8 10:18:48] DEBUG[4993][C-00000001]: channel.c:5551 set_format: Channel SIP/twilio0-00000000 setting write format path: gsm -> ulaw
[Dec 8 10:18:48] DEBUG[4993][C-00000001]: res_rtp_asterisk.c:4017 rtp_raw_write: Difference is 2472, ms is 329
[Dec 8 10:18:48] DEBUG[4993][C-00000001]: channel.c:3192 ast_settimeout_full: Scheduling timer at (50 requested / 50 actual) timer ticks per second
– <SIP/twilio0-00000000> Playing ‘IVR/omnicare_9d_account.gsm’ (language ‘en’)
[Dec 8 10:18:48] DEBUG[4993][C-00000001]: res_rtp_asterisk.c:4928 ast_rtcp_interpret: Got RTCP report of 64 bytes from 34.203.250.7:10475
[Dec 8 10:18:53] DEBUG[4993][C-00000001]: res_rtp_asterisk.c:4928 ast_rtcp_interpret: Got RTCP report of 64 bytes from 34.203.250.7:10475
[Dec 8 10:18:55] DEBUG[4992]: iostream.c:157 iostream_read: TLS clean shutdown alert reading data
[Dec 8 10:18:55] DEBUG[4992]: chan_sip.c:2905 sip_tcptls_read: SIP TCP/TLS server has shut down
[Dec 8 10:18:58] DEBUG[4993][C-00000001]: channel.c:3192 ast_settimeout_full: Scheduling timer at (0 requested / 0 actual) timer ticks per second
[Dec 8 10:18:58] DEBUG[4993][C-00000001]: channel.c:3192 ast_settimeout_full: Scheduling timer at (0 requested / 0 actual) timer ticks per second
[Dec 8 10:18:58] DEBUG[4993][C-00000001]: channel.c:3192 ast_settimeout_full: Scheduling timer at (0 requested / 0 actual) timer ticks per second
[Dec 8 10:18:58] DEBUG[4993][C-00000001]: channel.c:5551 set_format: Channel SIP/twilio0-00000000 setting write format path: ulaw -> ulaw
[Dec 8 10:18:58] DEBUG[4993][C-00000001]: res_rtp_asterisk.c:4928 ast_rtcp_interpret: Got RTCP report of 64 bytes from 34.203.250.7:10475
[Dec 8 10:19:01] DEBUG[4914]: cdr.c:4305 ast_cdr_engine_term: CDR Engine termination request received; waiting on messages…
Asterisk uncleanly ending (0).
Executing last minute cleanups
== Destroying musiconhold processes
[Dec 8 10:19:01] DEBUG[4914]: res_musiconhold.c:1627 moh_class_destructor: Destroying MOH class ‘default’
[Dec 8 10:19:01] DEBUG[4914]: cdr.c:1289 cdr_object_finalize: Finalized CDR for SIP/twilio0-00000000 - start 1512749813.880448 answer 1512749813.881198 end 1512749941.201797 dispo ANSWERED
== Manager unregistered action DBGet
== Manager unregistered action DBPut
== Manager unregistered action DBDel
== Manager unregistered action DBDelTree
[Dec 8 10:19:01] DEBUG[4914]: asterisk.c:2157 really_quit: Asterisk ending (0).
Check your firewall logs. We've had issues with sessions being torn down by firewalls that thought the NAT entries were stale/old.
You can also try configuring Asterisk to send keep-alive packets using the options qualify=yes and nat=yes in your sip.conf entry for that user/trunk, or inside the RTP stream with rtpkeepalive=<secs>. The best docs I could find for sip.conf are the example config on GitHub.
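A minimal sketch of where those options would sit, alongside the sip.conf already posted in the question (the values are illustrative, not tested against this trunk; check the sample sip.conf for your Asterisk version):
[general]
rtpkeepalive=15          ; send RTP keepalives to keep NAT/firewall entries open (seconds)

[twilio-trunk](!)
qualify=yes              ; already present in the question's config
nat=force_rport,comedia  ; newer chan_sip syntax; older releases accept nat=yes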
I dug into the source code for the text "TLS clean shutdown alert reading data", which pointed me to some OpenSSL docs that suggest a clean/normal closure (which I'm guessing was caused by your firewall):
The TLS/SSL connection has been closed. If the protocol version is SSL 3.0 or higher, this result code is returned only if a closure alert has occurred in the protocol, i.e. if the connection has been closed cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not necessarily indicate that the underlying transport has been closed.
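One quick way to test the firewall theory (a sketch only; the interface name and the 5061 signaling port are assumptions based on a typical Twilio secure trunk) is to watch whether the TLS/TCP connection gets torn down right at the 120-second mark:
tcpdump -i eth0 -nn 'tcp port 5061'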

jmeter and apachetop - why do I see different values?

The explanation is probably simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) to another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain to me why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and the target always responds, so I am not dropping any requests here. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, that is highly unlikely.
According to the JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter is able to get a response from the server in 1 second, you will get 100 requests/second. If the response time is 2 seconds, throughput drops to 50 requests/second; at 4 seconds, 25 requests/second; and so on.
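As a rough rule of thumb (ignoring think time and JMeter's own overhead): throughput ≈ active threads / average response time. With 100 threads and the roughly 14-second per-request time worked out below, that gives 100 / 14.2 ≈ 7 requests/second, which matches the apachetop figure.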
Also, JMeter configuration matters. If you don't provide enough loops you may run into a situation where some threads have already finished while others have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming it should generate 100 req/sec as per your plan)
Your actual load = 7 req/sec = 7 × 3600 = 25200 requests/hour
Per-thread throughput = 25200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 = 14.2 secs
This means JMeter is actually sending one request roughly every 14.2 secs per thread, i.e. 100 requests every 14.2 secs.
Now, analyze the transaction timers in your JMeter summary report to find out where the remaining ~13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High Request send time (indicates n/w or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as time to first byte. I am not sure about the machine on which you are running your tests, but if your worker supports curl, use curl to break down the timing components of a single request:
# -d @- reads the POST body piped in from echo
echo 'request payload for POST' \
  | curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s \
    -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
    http://mytest.test.com
If the above output does not indicate any such issues, then the time must be being spent within JMeter itself, and you should tune your JMeter implementation, for example by reviewing your use of Beanshell / JSR223 elements.

Wrong balance between Aerospike instances in cluster

I have an application with a high load for batch read operations. My Aerospike cluster (v 3.7.2) has 14 servers, each one with 7GB RAM and 2 CPUs in Google Cloud.
By looking at the Google Cloud Monitoring graphs, I noticed a very unbalanced load between servers: some servers have almost 100% CPU load, while others are below 50% (image below). Even after hours of operation, the unbalanced pattern doesn't change.
Is there any configuration that I could change to make this cluster more homogeneous? How to optimize node balancing?
Edit 1
All servers in the cluster have an identical aerospike.conf file:
# Aerospike database configuration file.
service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    paxos-recovery-policy auto-reset-master
    pidfile /var/run/aerospike/asd.pid
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 32
    batch-index-threads 32
    proto-fd-max 15000
    batch-max-requests 200000
}

logging {
    # Log file must be an absolute path.
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}

network {
    service {
        #address any
        port 3000
    }

    heartbeat {
        mode mesh
        mesh-seed-address-port 10.240.0.6 3002
        mesh-seed-address-port 10.240.0.5 3002
        port 3002
        interval 150
        timeout 20
    }

    fabric {
        port 3001
    }

    info {
        port 3003
    }
}

namespace test {
    replication-factor 3
    memory-size 5G
    default-ttl 0 # 30 days, use 0 to never expire/evict.
    ldt-enabled true

    storage-engine device {
        file /data/aerospike.dat
        write-block-size 1M
        filesize 180G
    }
}
Edit 2:
$ asinfo
1 : node
BB90600F00A0142
2 : statistics
cluster_size=14;cluster_key=E3C3672DCDD7F51;cluster_integrity=true;objects=3739898;sub-records=0;total-bytes-disk=193273528320;used-bytes-disk=26018492544;free-pct-disk=86;total-bytes-memory=5368709120;used-bytes-memory=239353472;data-used-bytes-memory=0;index-used-bytes-memory=239353472;sindex-used-bytes-memory=0;free-pct-memory=95;stat_read_reqs=2881465329;stat_read_reqs_xdr=0;stat_read_success=2878457632;stat_read_errs_notfound=3007093;stat_read_errs_other=0;stat_write_reqs=551398;stat_write_reqs_xdr=0;stat_write_success=549522;stat_write_errs=90;stat_xdr_pipe_writes=0;stat_xdr_pipe_miss=0;stat_delete_success=4;stat_rw_timeout=1862;udf_read_reqs=0;udf_read_success=0;udf_read_errs_other=0;udf_write_reqs=0;udf_write_success=0;udf_write_err_others=0;udf_delete_reqs=0;udf_delete_success=0;udf_delete_err_others=0;udf_lua_errs=0;udf_scan_rec_reqs=0;udf_query_rec_reqs=0;udf_replica_writes=0;stat_proxy_reqs=7021;stat_proxy_reqs_xdr=0;stat_proxy_success=2121;stat_proxy_errs=4739;stat_ldt_proxy=0;stat_cluster_key_err_ack_dup_trans_reenqueue=607;stat_expired_objects=0;stat_evicted_objects=0;stat_deleted_set_objects=0;stat_evicted_objects_time=0;stat_zero_bin_records=0;stat_nsup_deletes_not_shipped=0;stat_compressed_pkts_received=0;err_tsvc_requests=110;err_tsvc_requests_timeout=0;err_out_of_space=0;err_duplicate_proxy_request=0;err_rw_request_not_found=17;err_rw_pending_limit=19;err_rw_cant_put_unique=0;geo_region_query_count=0;geo_region_query_cells=0;geo_region_query_points=0;geo_region_query_falsepos=0;fabric_msgs_sent=58002818;fabric_msgs_rcvd=57998870;paxos_principal=BB92B00F00A0142;migrate_msgs_sent=55749290;migrate_msgs_recv=55759692;migrate_progress_send=0;migrate_progress_recv=0;migrate_num_incoming_accepted=7228;migrate_num_incoming_refused=0;queue=0;transactions=101978550;reaped_fds=6;scans_active=0;basic_scans_succeeded=0;basic_scans_failed=0;aggr_scans_succeeded=0;aggr_scans_failed=0;udf_bg_scans_succeeded=0;udf_bg_scans_failed=0;batch_index_initiate=40457778;batch_index_queue=0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0;batch_index_complete=40456708;batch_index_timeout=1037;batch_index_errors=33;batch_index_unused_buffers=256;batch_index_huge_buffers=217168717;batch_index_created_buffers=217583519;batch_index_destroyed_buffers=217583263;batch_initiate=0;batch_queue=0;batch_tree_count=0;batch_timeout=0;batch_errors=0;info_queue=0;delete_queue=0;proxy_in_progress=0;proxy_initiate=7021;proxy_action=5519;proxy_retry=0;proxy_retry_q_full=0;proxy_unproxy=0;proxy_retry_same_dest=0;proxy_retry_new_dest=0;write_master=551089;write_prole=1055431;read_dup_prole=14232;rw_err_dup_internal=0;rw_err_dup_cluster_key=1814;rw_err_dup_send=0;rw_err_write_internal=0;rw_err_write_cluster_key=0;rw_err_write_send=0;rw_err_ack_internal=0;rw_err_ack_nomatch=1767;rw_err_ack_badnode=0;client_connections=366;waiting_transactions=0;tree_count=0;record_refs=3739898;record_locks=0;migrate_tx_objs=0;migrate_rx_objs=0;ongoing_write_reqs=0;err_storage_queue_full=0;partition_actual=296;partition_replica=572;partition_desync=0;partition_absent=3228;partition_zombie=0;partition_object_count=3739898;partition_ref_count=4096;system_free_mem_pct=61;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_inactivity_dur=0;sindex_gc_activity_dur=0;sindex_gc_list_creation_time=0;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=0;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;system_swapping=false;err_replica_null_node=0;err_r
eplica_non_null_node=0;err_sync_copy_null_master=0;storage_defrag_corrupt_record=0;err_write_fail_prole_unknown=0;err_write_fail_prole_generation=0;err_write_fail_unknown=0;err_write_fail_key_exists=0;err_write_fail_generation=0;err_write_fail_generation_xdr=0;err_write_fail_bin_exists=0;err_write_fail_parameter=0;err_write_fail_incompatible_type=0;err_write_fail_noxdr=0;err_write_fail_prole_delete=0;err_write_fail_not_found=0;err_write_fail_key_mismatch=0;err_write_fail_record_too_big=90;err_write_fail_bin_name=0;err_write_fail_bin_not_found=0;err_write_fail_forbidden=0;stat_duplicate_operation=53184;uptime=1001388;stat_write_errs_notfound=0;stat_write_errs_other=90;heartbeat_received_self=0;heartbeat_received_foreign=145137042;query_reqs=0;query_success=0;query_fail=0;query_abort=0;query_avg_rec_count=0;query_short_running=0;query_long_running=0;query_short_queue_full=0;query_long_queue_full=0;query_short_reqs=0;query_long_reqs=0;query_agg=0;query_agg_success=0;query_agg_err=0;query_agg_abort=0;query_agg_avg_rec_count=0;query_lookups=0;query_lookup_success=0;query_lookup_err=0;query_lookup_abort=0;query_lookup_avg_rec_count=0
3 : features
cdt-list;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 : cluster-generation
61
5 : partition-generation
11811
6 : edition
Aerospike Community Edition
7 : version
Aerospike Community Edition build 3.7.2
8 : build
3.7.2
9 : services
10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.41:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.23:3000
10 : services-alumni
10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.23:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.41:3000
I have a few comments about your configuration. First, transaction-threads-per-queue should be set to 3 or 4 (don't set it to the number of cores).
The second has to do with your batch-read tuning. You're using the (default) batch-index protocol, and the config params you'll need to tune for batch-read performance are:
batch-max-requests – you have this set very high (200000), which is probably affecting both your CPU load and your memory consumption. It only takes a slight imbalance in the number of keys you're accessing per node for that to show up in the graphs you've shared; at the least, this is possibly the issue. It's better to iterate over smaller batches than to try to fetch 200K records per node at a time.
batch-index-threads – by default its value is 4, and you set it to 32 (out of a max of 64). You should adjust this incrementally, running the same test and benchmarking the performance each time: on each iteration adjust it higher, then back down if performance decreases. For example: test with 32, +8 = 40, +8 = 48, -4 = 44. There's no easy rule of thumb for this setting; you'll need to tune it through iterations on the hardware you'll be using and monitor the performance.
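If batch-index-threads is dynamically settable on your build (worth verifying against the docs for 3.7.2), a sketch of adjusting it between iterations without a restart would look like:
asinfo -v 'set-config:context=service;batch-index-threads=40'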
batch-max-buffer-per-queue – this is more directly linked to the number of concurrent batch-read operations the node can support. Each batch-read request will consume at least one buffer (more if the data cannot fit in 128K). If you do not have enough of these allocated to support the number of concurrent batch-read requests, you will get exceptions with error code 152 BATCH_QUEUES_FULL. Track and log such events clearly, because they mean you need to raise this value. Note that this is the number of buffers per queue: each batch response worker thread has its own queue, so you'll have batch-index-threads x batch-max-buffer-per-queue buffers, each taking 128K of RAM. The batch-max-unused-buffers parameter caps the combined memory usage of all these buffers, destroying unused buffers until their number is reduced. There's an overhead to allocating and destroying these buffers, so you do not want to set it too low compared to the total. Your current cost is 32 x 256 x 128KB = 1GB.
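A rough way to keep an eye on these counters between runs, reusing the asinfo pattern already shown in this thread (the grep filter is just an example):
asadm -e 'asinfo -v "statistics"' | tr ';' '\n' | grep 'batch_index'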
Finally, you're storing your data in a file on a filesystem. That's fine for development instances, but not recommended for production. In GCE you can provision either a SATA SSD or an NVMe SSD for your data storage; those should be initialized and used as block devices. Take a look at the GCE recommendations for more details. I suspect you have warnings in your log about the device not keeping up.
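For illustration, a device-backed namespace stanza would look roughly like this (the device path and write-block-size are assumptions to adapt to the SSD you provision in GCE):
storage-engine device {
    device /dev/sdb        # raw SSD block device instead of a file on a filesystem
    write-block-size 128K
}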
It's likely that one of your nodes is an outlier with regard to the number of partitions it holds (and therefore the number of objects). You can confirm this with asadm -e 'asinfo -v "objects"'. If that's the case, you can terminate that node and bring up a new one, which will force the partitions to be redistributed. This does trigger a migration, which takes quite a bit longer in the CE server than in the EE one.
For anyone interested, Aerospike Enterprise 4.3 introduced 'uniform-balance', which homogeneously balances data partitions. Read more here: https://www.aerospike.com/blog/aerospike-4-3-all-flash-uniform-balance/

Recently the Sybase database reported some errors and I don't know how to solve them

Recently my Sybase database reported some errors. I searched for a lot of information and expanded the configuration, but it is still happening.
My server's memory and CPU usage look normal, so I don't know whether the machine's system resources are enough for it.
This is part of my Sybase error log:
02:00000:00000:2011/12/31 10:13:21.60 kernel nl__read_defer: read failed on socket 36.
02:00000:00000:2011/12/31 10:13:25.92 kernel nl__read_defer: read failed on socket 154.
system resources not enough,Can not complete the requested service
06:00000:00000:2011/12/31 10:13:32.70 kernel nl__read_defer: read failed on socket 164.
system resources not enough,Can not complete the requested service
00:00000:00000:2011/12/31 10:13:34.40 kernel nl__read_defer: read failed on socket 34.
system resources not enough,Can not complete the requested service
....more
00:00000:00014:2012/01/04 10:25:03.59 kernel NT operating system error 1450 in module 'e:\ase1253\porttree\svr\sql\generic\ksource\strmio\n_winsock.c' at line 1987
00:00000:00014:2012/01/04 10:25:03.59 kernel NT operating system error 1450 in module 'e:\ase1253\porttree\svr\sql\generic\ksource\strmio\n_winsock.c' at line 1987
....more
ct_results(): network packet layer: internal net library error: Net-Library operation terminated due to disconnect.
My Sybase product is ASE, and my machine has 4 GB of memory.
Here are the Sybase configuration values that I changed:
[Named Cache:default data cache]
cache size = 1400M
[Meta-Data Caches]
number of open databases = 50
number of open objects = 5000
number of open indexes = 5000
[Disk I/O]
number of devices = 20
[Physical Memory]
max memory = 2000000
[Processors]
max online engines = 8
number of engines at startup = 8
[SQL Server Administration]
procedure cache size = 400000
[User Environment]
number of user connections = 200
[Lock Manager]
number of locks = 150000
[Rep Agent Thread Administration]
enable rep agent threads = 1
The other configuration values are the defaults.
Any help would be greatly appreciated.