I am using Restcomm SMSC Gateway 7.3.135. With both client and server running on the same machine, I am getting only 50 TPS on a single connection.
The documentation I have read says it can reach up to 1000 TPS. Please guide me on how I can achieve this.
Thanks
The community edition of Restcomm projects is not performance tested; only the commercial product goes through those performance tests, as it requires a lot of fine tuning in the project itself, the logging, the OS, and the JVM options, and it also depends on the hardware you're running. You may want to contact TeleStax to get help with that, as it's usually pretty involved.
Related
Currently I use the JMeter Aggregate Report or Summary Report when submitting reports, but the stakeholders expect something more. How can I provide that? Are there any plugins for capturing server resource usage during a load test?
Reporting: since JMeter 3.0 there is an HTML Reporting Dashboard which can be generated during the test run. It contains exhaustive overview information. If you need to find out the reason for a bottleneck or a memory leak, you can consider the extra graphs available via the JMeter Plugins project.
The same JMeter Plugins project provides PerfMon, a client-server application which is able to collect over 70 different metrics and plot them via a JMeter Listener. See the How to Monitor Your Server Health & Performance During a JMeter Load Test guide for detailed setup and usage instructions.
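For example, a minimal sketch (the test plan and output names below are just placeholders) that runs JMeter in non-GUI mode and generates the dashboard at the end of the run:

# Run the test plan in non-GUI mode, log raw samples to results.jtl,
# and (-e) generate the HTML Reporting Dashboard into the ./dashboard folder
$ jmeter -n -t plan.jmx -l results.jtl -e -o dashboard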
There are quite a few plug-ins available that can help you analyze the results better; see https://jmeter-plugins.org/ for details.
The most popularly used ones are:
Response Times Over Time
Response Times Percentiles
Transactions per Second
Response Latencies Over Time
For server resource usage, you can use the following, which come with the JMeter plug-ins:
PerfMon Metrics Collector and Server Agent, or
On Unix-based systems, the sar command that comes with the sysstat package, or vmstat. On Windows-based systems, use Perfmon to capture system utilization data while the test is running. You can then use kSar (https://sourceforge.net/projects/ksar/) to plot graphs from the data collected with sar.
If you have collected data using Perfmon, then plot the graphs using PAL (https://pal.codeplex.com/).
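As a rough sketch (the sampling interval is just an example), you can capture CPU and memory samples on a Unix box for the duration of the test and feed the sar output to kSar afterwards:

# Sample CPU (-u) and memory (-r) every 5 seconds for the length of the test (sysstat package)
$ sar -u -r 5 > sar_output.txt
# Lighter-weight alternative for a quick look:
$ vmstat 5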
In this case, I would suggest using Grafana. It shows real-time results, and the best thing is that it can be configured to fit your needs.
Now, how do you use it? Using it is not that tough.
If you're using a Mac or Linux (any flavour), things are easy. If you're using Windows, I would suggest using a virtual machine, because Windows starts blocking traffic after a certain number of requests, and that causes a lot of headaches.
In my case, I used a virtual machine to set up Ubuntu and then configured Grafana inside it.
For working with Grafana, you need to have these two things installed.
Grafana itself
InfluxDB for the backend
Links for both are below:
https://grafana.com/grafana/download?platform=linux
https://portal.influxdata.com/downloads/
Once both are installed and set up,
you need to use a Backend Listener in JMeter to push results to the Graphite client (installed along with InfluxDB automatically).
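A rough sketch of the wiring, assuming InfluxDB 1.x and its CLI (the database name and URL are just examples):

# Create a database for JMeter to write into
$ influx -execute 'CREATE DATABASE jmeter'
# In the JMeter test plan, add a Backend Listener, select the
# InfluxdbBackendListenerClient implementation, and point influxdbUrl at
# http://<influxdb-host>:8086/write?db=jmeter. Then add that InfluxDB database
# as a data source in Grafana and build your dashboard on top of it.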
I know it is a bit confusing, but once you understand it, you and your client will love the detailed reports.
Remember, Grafana is all about configuration.
Let me know if you have any confusion regarding this.
Happy to help. :)
We are noticing that IBM MobileFirst Server is using high memory via the "Java(TM) Platform SE binary" process; 2-3 days after server start it reaches up to 6 GB, which puts the server in a hung state, and the only solution is a restart.
In the logs we found the message below:
"No buffer space available (maximum connections reached?): connect"
Environment: IBM Worklight Server 7.1 with Java 1.7 64-bit on Windows Server 2012. A hybrid mobile application is running on this server.
It seems that some configuration might be required. Can anyone advise?
Lots of information is missing... this can be caused by any number of things.
Are you in a cluster? If yes, how many servers? How much memory is available to each machine?
How many adapters do you have deployed? What value did you give the serverSessionTimeout property? This, for example, can cause connections to stay open for a longer time, meaning the server will not "clean/remove" connections... and the more connections you have open, the more memory you will require.
All of these and more can contribute to how much memory you may need.
See also: http://www-01.ibm.com/support/docview.wss?uid=swg21690707
It mentions DB2, but the idea is the same: the more connections, the more memory you will need.
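As a quick sanity check (a hedged sketch; the path to worklight.properties depends on your installation), you can confirm what the runtime is actually configured with:

# The value of serverSessionTimeout is in minutes; a large value keeps sessions
# (and their connections) alive longer
$ grep serverSessionTimeout /path/to/your/worklight.properties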
I'm using WebSphere Liberty profile v8.5.5.0 and Worklight 6.2.
The full version of my Worklight server and runtime is:
Server version: 6.2.0.00.20140922-2259
Project WAR version: 6.2.0.00.20140922-2259
I've noticed that sometimes I have trouble getting into the Worklight Console; the server takes far too long to answer and most of the time it just gives me a timeout.
Regarding the JVM heap, it is at 60-70% of the total heap, most likely around 1.5 GB.
In the FFDC logs, I sometimes get an error along the lines of:
FFDC Incident has been created: "javax.naming.ServiceUnavailableException: ldap.example.com:389; socket closed; remaining name 'o=example' com.ibm.ws.wim.adapter.ldap.LdapConnection 1670" at ffdc.log
I have my LDAP connected to this WebSphere via VPN, and I know that WebSphere has historically had trouble dealing with LDAP.
However, I don't see any other errors in the logs; the machine eventually recovers and is able to work correctly, but for some time it is 'down'.
If I enable tracing, the verbosity overwhelms the machine and I can't even start the Worklight Console, nor continue to work with Worklight, e.g. calling an adapter from an application.
One more thing: it seems this happens more frequently after updates to existing application versions or adapters. Does this ring a bell with anyone?
If I ask for a restart when the machine is sluggish, stopping WebSphere takes quite some time, but it eventually stops normally, and when I start it again everything is fine right off the bat.
Before asking for a PMR, I would like to know if there is something else I could do to troubleshoot this problem.
Thanks in advance.
My initial "smell" of the problem is that sometimes your VPN connection with LDAP is very slow or your LDAP server is taking too long to respond.
My suggestion is that you try using WAIT(wait.ibm.com), it's a non-invasive easy to use diagnostic tool, to further investigate. If you find out the call to LDAP is getting hang then I suggest you try tuning Liberty LDAP cache, this should help.
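If you go the WAIT route, it works off a series of javacores/thread dumps. A rough sketch for collecting them on Linux with an IBM JVM (the pgrep pattern, sample count, and interval are just illustrative):

# Find the Liberty server process and trigger a javacore every 30 seconds
$ LIBERTY_PID=$(pgrep -f ws-server.jar)
$ for i in $(seq 1 10); do kill -3 "$LIBERTY_PID"; sleep 30; done
# The IBM JVM writes javacore*.txt files into the server's working directory;
# upload those to WAIT for analysis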
Azure and EC2 are optimized for running servers. Lots and lots of servers. Both platforms attempt to manage tons of things for you -- in Azure's case, it wants to manage even the target operating system.
However, I'd like to use such a service for a different reason: Testing.
I've got a ton of operating systems I need to support. My tests don't actually take that long, but running them on every platform is time consuming. I was going to just use a cloud service for this, thinking that these machines would be running for much less than an hour, and it wouldn't cost all that much.
The problem is that the major cloud services won't run client versions of Windows -- Windows Server only.
Is there a cloud service which would let me run every client and server version, and every service pack level, of Windows released starting with Windows 2000 SP4 to the present day?
Try CloudSigma. You can definitely upload your own ISOs and run any x86 or 64-bit OS you like on it. They have their in-house versions to get started, but you can bring your own OS versions.
They are based in Switzerland, but they also have servers in the US; I'd expect performance to be quite good.
https://www.cloudsigma.com/
There is also a free trial on at the moment:
https://cs.cloudsigma.com/accounts/signup/
The list of Open Virtualization Alliance members may have some candidates for you.
A search on the page for "operating system" suggests the following possibilities (in addition to the already-mentioned CloudSigma):
ElasticHosts
stepping stone GmbH (I'm less sure about this one)
Sublime IP
No, commercial cloud services like Azure and Amazon EC2 are themselves virtual, so you don't get a great deal of control over the operating system.
One option may be to rent a full physical server (colocated or managed) and then use a battery of virtual machines to run the tests. Something like VMware's snapshot feature sounds perfect: spin up a clean virtual machine, deploy the test code, then throw away the changes to the disk once the tests have completed.
Or, indeed, as @Stuart suggests, run the tests locally.
This definitely isn't something Azure offers - I think all of Azure's images are based around Windows Server 2008 R2.
For EC2 you could set up images for Server 2003 through to 2008 R2 - but nothing else. There are also some services out there to assist with this - e.g. VaasNet http://www.vaasnet.com/catalog
For testing the other Windows operating systems, I simply don't think there's a cloud service available that lets you do this. I don't even think there are any cloud services where you can run "Virtual PC" type applications on top of the hosted operating system, as most of the virtualization APIs are disabled in cloud environments (virtualization within virtualization is not supported!).
Sorry to say this, but your best bet may be local test hardware running Virtual PC images.
It appears that the Xen Cloud Platform might do what you're after. This page ends with:
Guest Operating Systems: the XCP binary distribution is delivered with a wide range of Linux and Windows guests. Check out the release notes for a complete list.
And their PDF document Xen Cloud Platform Virtual Machine Installation Guide (Release 0.1, Published October 2009) says that Windows 2000 Server has "No known issues."
(I don't have any affiliation with Xen)
In conjunction with the above, there is also a list of Xen VirtualPrivateServerProviders, several of which say they include Windows.
Buy time on an EC2 instance and use it to host VirtualBox VMs, with a VM set up for each operating system you want to test. Use an RDP client, VNC, or some other means to control the guest OS. This forum post seems to suggest that this is possible. But yes, it is not a cloud service itself, and you would have to do some initial setup and configuration work yourself.
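If you go that route, the throw-away-the-changes workflow is easy to script. A hedged sketch with the VirtualBox CLI (the VM and snapshot names are made up):

# Take a "clean" snapshot once, then for every test run:
$ VBoxManage snapshot "win2000-sp4" take clean
$ VBoxManage startvm "win2000-sp4" --type headless
# ... deploy the test code and run the tests over RDP/VNC ...
$ VBoxManage controlvm "win2000-sp4" poweroff
$ VBoxManage snapshot "win2000-sp4" restore clean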
Many times, I get:
- Frozen; the load goes to 5.0 and I can't use my box.
- It just doesn't work.
Do the following steps:
1. rabbitmq-plugins enable rabbitmq_management
2. service rabbitmq-server restart
3. Browse to http://rabbitmq-server-ip:15672
4. Log in with username: guest, password: guest
Don't forget to change your password later.
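For example (the new password and user name are placeholders):

$ rabbitmqctl change_password guest 'new-strong-password'
# or create a dedicated administrator and remove guest entirely:
$ rabbitmqctl add_user admin 'new-strong-password'
$ rabbitmqctl set_user_tags admin administrator
$ rabbitmqctl delete_user guest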
As sheki notes, rabbitmqctl is your first port of call for diagnostics, and for building monitoring on top of, but being a manual command-line tool it's not suitable for actual monitoring directly.
I've found DataDog very good to monitor both the MQ details, plus the host platform in parallel. e.g. you can watch the queue levels and set alerts on queues backing-up, while also watching the CPU/memory/IO inflicted by these queue levels. It really helps to get ratios of resource usage, and the alerts are good. Having a uniform platform for both infrastructure and application level monitoring is surprisingly rare, but speeds up diagnoses of production issues hugely.
NewRelic is similar and also has a RabbitMQ plugin, although I've not used this plugin specifically, I've used NR for years and found it invaluable in diagnosing operational issues.
AppDynamics is another example. Similarly this allows you to drill down into your app from a high-level dashboard, and visually navigate from problems to causes. It's especially good with visualising the network of a distributed application across various services/servers. I've used this, for example, to find complex problems in .NET applications and SQL Server clusters using 3rd party Web Services (e.g. latency and its consequences to your app over chatty protocols). These things are very difficult to diagnose, especially for developers who are limited to checking their code. Diagnosing operational issues requires a much broader picture.
I gave up trying to even install and configure Nagios. I know it's the 'best' but it's the best of an old breed of self-configured beasts which we don't have time to manage. I didn't even get it going... and eventually turned to the more 'modern' cloud approach. Once you get over the trust factor, it's pretty liberating.
I'm using these APM platforms together* to aggregate data from:
Windows O/S level Event Logs/Services
Linux O/S level
AWS console level
RDS, EC2
Apache
MySQL
App integrations / custom NR plugins I've written
Rabbit MQ
*NewRelic can feed into Datadog! So if you are already using NR you don't need to install DD on those hosts as well.
Being able to view all these levels together gives you a view on the publishers, middleware, MQ servers, workers and front-end app - all in one dashboard.
I would highly recommend an approach like this, because just looking at one server alone leads you to a lot of head-scratching. Seeing an entire stack in one customisable dashboard is just so illuminating it takes most of the guesswork out of it.
Worried about installing these things? I found New Relic to be especially light-weight and unobtrusive. AppDynamics seemed to stress the host a bit more, but mostly that's because you had to run the visualisation tools on the host! (this may have changed). DataDog seems performant, but creates a lot of control panels/icons on the target host (perhaps just a visual impression).
To a four-year-old question - this answer probably wasn't available in 2011, but in 2015 these once 'startup'-style APM services cost just tens or hundreds of dollars a month for an unbelievably rich enterprise-level solution.
There are a bunch of RabbitMQ monitoring plugins available for different monitoring systems like Nagios, Zabbix, etc.
Look at http://www.rabbitmq.com/how.html#management
Using rabbitmqctl is the most straightforward way to check the status of the node:
$ rabbitmqctl status
This should tell you the status of the RabbitMQ node.
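Beyond status, a few other subcommands are handy for a quick health check (a sketch; run them on the node itself):

$ rabbitmqctl list_queues name messages consumers   # queue depths and consumer counts
$ rabbitmqctl list_connections                      # who is connected
$ rabbitmqctl cluster_status                        # cluster membership and partitions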
If you have PRTG (or any probe system with an HTTP sensor check), you can check the server status as described on the following page:
https://blog.cdemi.io/monitoring-rabbitmq-in-prtg/
In particular, you have to:
Enable the Management Plugin
The rabbitmq-management plugin provides an HTTP-based API for management and monitoring of your RabbitMQ server, along with a browser-based UI and a command line tool, rabbitmqadmin. The management plugin is included in the RabbitMQ distribution. To enable it, we need to run rabbitmq-plugins enable rabbitmq_management on the RabbitMQ nodes. For more details on the management plugin, refer to the RabbitMQ documentation.
The web UI is located at http://server-name:15672/ and the HTTP API and its documentation are both located at http://server-name:15672/api/.
Once done, you can check the overview of your server with the API at http://server-name:15672/api/overview, which returns JSON with all the details about the server, active connections, queues, etc.
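For example, a probe (or a quick manual check) can poll it like this, assuming the default guest/guest credentials:

$ curl -s -u guest:guest http://server-name:15672/api/overview
# returns JSON with message rates, node statistics, listeners, etc.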
This command will help you: service rabbitmq-server status
Or try service rabbitmq-server stop and service rabbitmq-server start, then service rabbitmq-server status.