In a web game built on Turbogears v2.1.5, logged-in users POST a 16-byte message periodically. The server CPU reaches 100% when the POST rate is 60 POSTs-per-second. (For testing, we have removed all work such as updating the DB with each post-- the server simply returns an empty response immediately.)
Using wrk to GET a 16-byte static file we see Turbogears reaching rates of ~500 requests-per-second and want to match or get close to that rate with our game's POSTs. We'd really like to be at 1,000 or more POSTs per second.
Setup: Turbogears v2.1.5, AWS c3.large, Windows Server 2008 R2, Intel Xeon E5-2680 v2 @ 2.80GHz.
Question: Are there tg2 settings or other changes that would let us handle 500 or more POSTs-per-second in this scenario?
If you are able to upgrade to TG2.3, the work in the more recent releases greatly improved the framework's performance out of the box ( http://blog.axant.it/archives/452 ).
Also, through the new minimal mode introduced in 2.3 ( http://turbogears.readthedocs.io/en/latest/turbogears/minimal/index.html ) you can easily disable any component you don't need, such as i18n, sessions, etc., for further speed improvements (see the various X.enabled options at http://turbogears.readthedocs.io/en/latest/reference/config-options.html ). Disabling i18n and static file support usually gives a good performance boost.
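As a rough sketch of what that looks like (based on the minimal-mode documentation linked above; the controller and method names are placeholders, and any further components you disable would be toggled through the X.enabled options in the config reference):

from wsgiref.simple_server import make_server
from tg import AppConfig, TGController, expose

class RootController(TGController):
    @expose()
    def ping(self, **kw):
        # the game's 16-byte POST handler would live here; return an empty body
        return ''

# minimal=True starts from a bare configuration, without templating, i18n,
# sessions or other middleware unless you explicitly enable them.
config = AppConfig(minimal=True, root_controller=RootController())
application = config.make_wsgi_app()

if __name__ == '__main__':
    make_server('', 8080, application).serve_forever()

For load testing you would normally put this behind a production WSGI server rather than wsgiref, but the point is how little of the stack a minimal-mode app has to run per request.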
For testing an API, I need to set up some data.
Is it better (cheaper and faster) to import the data through the app like a regular user, using the frontend to set and store the data in the database, so the data is stored permanently until I decide it needs to be changed?
Or is it better to set the data through the API with a POST method?
I assume that programming the API POST method is more expensive and time-consuming than using the regular app. Also, the data is then stored only while the test is running (which means less storage used, though I assume a few MB of DB does not give much of an advantage).
Maybe I am missing something.
Uploading anything through the UI is costlier and slower than uploading through the API, not the other way around.
That's why the inverted pyramid is considered an anti-pattern in testing.
The ideal pattern is the testing pyramid, i.e. the inverted cone.
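For instance, seeding the data for a test directly through the API is usually a one-liner per record (a sketch using the requests library; the /api/users endpoint, payload and auth header are invented for illustration):

import requests

def seed_test_user(base_url, token):
    # One POST replaces a whole click-through of the registration form in the UI.
    payload = {"name": "test-user", "email": "test@example.com"}
    resp = requests.post(
        f"{base_url}/api/users",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["id"]   # keep the id so the test can clean up afterwards

The same call can run in a setup fixture before every test, which is what makes API seeding both faster and easier to repeat than going through the frontend.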
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful Web Services; however, I have not seen much discussion of how multiple REST API versions interact with the same data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive change to a database table at the database level that forces you to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and answer all requests to it with a 301 and the new location.
2. If 1. is impossible, maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for a limited period, and the previous version should eventually be turned off.
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at the same time is only temporary. But as I wrote earlier, the DB model should never be exposed directly to the clients; this can save you a lot of problems.
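One concrete way to keep the DB model hidden is a thin per-version serializer layer, so both API versions read the same tables but present different shapes (a sketch; the user model and the name-splitting change are invented for illustration):

def serialize_user_v1(user):
    # v1 still promises a single "name" field, so rebuild it from the new columns.
    return {"id": user.id, "name": f"{user.first_name} {user.last_name}"}

def serialize_user_v2(user):
    # v2 exposes the split columns directly.
    return {
        "id": user.id,
        "first_name": user.first_name,
        "last_name": user.last_name,
    }

Because clients only ever see these dictionaries, the destructive table change stays invisible to v1 while v2 uses it natively, and both versions keep working against the same data pool.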
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change, it doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit saving a field, throw an unchecked exception, or apply some computational logic that keeps the v1 API compatible with the shared datasource. Then implement a better, cleaner, more idealistic implementation in v2.
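A sketch of that idea on the write path (the "locale" column, the "users" table and db.insert are invented stand-ins for whatever your data-access layer looks like):

def create_user_v1(payload, db):
    # v2 made "locale" a required column; v1 clients never send it,
    # so the v1 code path fills in a default to stay compatible with the shared table.
    record = dict(payload)
    record.setdefault("locale", "en_US")
    return db.insert("users", record)

def create_user_v2(payload, db):
    # v2 clients are expected to send "locale" themselves.
    if "locale" not in payload:
        raise ValueError("locale is required in v2")
    return db.insert("users", dict(payload))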
Whenever you change anything in your API structure that can change the response, you must increase your API version.
For example, say you have this request and response:
request POST: a, b, c, d
res: {a, b, c+d}
and you are going to add 'e', fetched from the database, to your response.
If nothing in the current client versions depends on 'e', you can add it within your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent clients from crashing.
Changes to the request inputs follow the same rule.
Let's say I push new code to the Worklight server for the purposes of a Direct Update. Can I allow users to keep using the application for a set amount of time before they actually have to accept the update, or is the application essentially unavailable to them until they download the new code?
If you are developing your application using Worklight 6.2, then you as a developer can take over the entire Direct Update flow and can essentially decide how to handle a received update from the server.
Note that by taking full control, you own the flow end-to-end; the default Worklight framework handling will not be available and the full responsibility is on the developer to ensure the validity of every step.
You can read more about customizing Direct Update here:
Initial reading, starting slide #14: http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v620/05_06_Using_Direct_Update_to_quickly_update_your_application.pdf
In depth reading: http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.dev.doc/dev/c_customizing_direct_update_ui_android_wp8_ios.html
In your scenario, I think you could take a less extreme approach and just do some tweaks before letting the Worklight framework handle the update from the server. Meaning, you could use the example provided in the training module (slide #18 from the PDF above), where you intercept the update:
wl_directUpdateChallengeHandler.handleDirectUpdate = function(directUpdateData, directUpdateContext) {
    ... // display message or counter
};
Then display a message and start a counter, and when time is up just call directUpdateContext.start() to begin the update.
I'm trying to benchmark / do performance testing of APIs at my work. The client-facing side is REST, while the backend data is retrieved through SOAP messages. My question is: can some of you share your thoughts on how you implement this (if you have done it in the past or are doing it now)? I am basically interested in the average response time it takes for the API to return results to the client.
Please let me know if you need any additional information to answer the question.
Could not say it any better than Mark, really: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules
Maybe you should give JMeter a try.
You can try using ApacheBench (ab). It is simple and quick.
JMeter gives you additional flexibility, such as adding functional cases alongside the performance details. The results will be very similar to those of the ApacheBench tool.
The most detailed option, which gives functional test results, performance counter settings, call response time details, CPU and memory changes, and load/stress results under different bandwidth and browser settings, is Visual Studio Team System.
I used VSTS 2010 for performance testing. GET and POST are straightforward; PUT and DELETE need a coded version of the web test.
If you are trying to test the REST -> SOAP calls, one more thing you can consider is creating stubs for the backend. This way you can performance-test REST -> stub, and then stub -> SOAP, separately. This helps in analyzing the individual components.
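A backend stub does not need to be elaborate; even Python's built-in HTTP server is enough to stand in for the SOAP service (a sketch: the canned envelope, port 8081 and the fixed 50 ms delay are arbitrary choices):

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><StubResult>ok</StubResult></soap:Body>
</soap:Envelope>"""

class SoapStub(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body, wait a fixed "backend" latency, return the canned envelope.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        time.sleep(0.05)
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

if __name__ == "__main__":
    HTTPServer(("", 8081), SoapStub).serve_forever()

Point the REST layer's SOAP endpoint at this stub and you can measure REST -> stub in isolation; the difference from the full REST -> SOAP numbers is the real backend's contribution.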
At work we ran up against the problem of setting server-side cookies - a lot of them. Right now we have a PHP script, the sole purpose of which is to set a cookie on the client for our domain. This happens a lot more than 'normal' requests to the server (which is running an app), so we've discussed moving it to its own server. This would be an Apache server, probably dedicated, with one PHP script 3 lines long, just running over and over again.
Surely there must be a faster, better way of doing this, rather than starting up the whole PHP environment. Basically, I need something super simple that can sit around all day/night doing the following:
Check if a certain cookie is set, and
If that cookie is not set, fill it with a random hash (right now it's a simple md5(microtime))
Any suggestions?
You could create a simple http server yourself to accept requests and return the set-cookie header and empty body. This would allow you to move the cookie generation overhead to wherever you see fit.
I echo the sentiments above, though: unless cookie generation is significantly expensive, I don't think you will gain much by moving away from your current setup.
By way of an example, here is an extremely simple server written with Tornado that simply sets a cookie on GET or HEAD requests to '/'. It includes an async example listening on '/async', which may be of use depending on what you are doing to compute your cookie value.
import time

import tornado.ioloop
import tornado.web


class CookieHandler(tornado.web.RequestHandler):
    def get(self):
        # Synchronous handler: set the cookie and return an empty body.
        cookie_value = str(time.time())
        self.set_cookie('a_nice_cookie', cookie_value, expires_days=10)
        # self.set_secure_cookie('a_double_choc_cookie', cookie_value)
        self.finish()

    def head(self):
        return self.get()


class AsyncCookieHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self._calculate_cookie_value(self._on_create_cookie)

    @tornado.web.asynchronous
    def head(self):
        self._calculate_cookie_value(self._on_create_cookie)

    def _on_create_cookie(self, cookie_value):
        self.set_cookie('double_choc_cookie', cookie_value, expires_days=10)
        self.finish()

    def _calculate_cookie_value(self, callback):
        # meaningless async example... just wastes 2 seconds
        def _fake_expensive_op():
            val = str(time.time())
            callback(val)
        tornado.ioloop.IOLoop.instance().add_timeout(time.time() + 2, _fake_expensive_op)


application = tornado.web.Application([
    (r"/", CookieHandler),
    (r"/async", AsyncCookieHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Launch this process with Supervisord and you'll have a simple, fast, low-overhead server that sets cookies.
You could try using mod_headers (usually available in the default install) to manually construct a Set-Cookie header and emit it -- no programming needed as long as it's the same cookie every time. Something like this could work in an .htaccess file:
Header add Set-Cookie "foo=bar; Path=/; Domain=.foo.com; Expires=Sun, 06 May 2012 00:00:00 GMT"
However, this won't work for you. There's no code here. It's just a stupid header. It can't come up with the new random value you'd want, and it can't adjust the expire date as is standard practice.
This would be an Apache server, probably dedicated, with one PHP script 3 lines long, just running over and over again. [...] Surely there must be a faster, better way of doing this, rather than starting up the whole PHP environment.
Are you using APC or another bytecode cache? If so, there's almost no startup cost. Because you're talking about setting up an entire server just for this, it sounds like you control the server as well. This means that you can turn off apc.stat for even less of a startup hit.
Really though, if all that script is doing is building an md5 hash and setting a cookie, it should already be blisteringly fast, especially if it's mod_php. Do you already know, through benchmarking and testing, that the script isn't performing as well as you'd like? If so, can you share those benchmarks with us?
It would be interesting to know why you think you need an extra server. Do you actually have a bottleneck in generating the cookie, or somewhere else? Is it the log writing, since requests happen a lot? Ajax polling? Client download speed?
At least for starters, I'd look at something more efficient than fetching the time to generate the "random hash". For example, on the Intel i7 laptop I have, generating 999999 md5 hashes from microtime takes roughly 4 seconds, and doing the same thing with random numbers is about a second faster (not taking the seeding of rand into account).
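If you want to repeat that kind of comparison on your own hardware, a micro-benchmark is only a few lines; here is a rough Python equivalent of the test above (the absolute numbers will of course differ from the PHP figures quoted):

import hashlib
import secrets
import time
import timeit

N = 999999

# md5 of the current time, roughly what md5(microtime()) does in PHP
md5_seconds = timeit.timeit(
    lambda: hashlib.md5(str(time.time()).encode()).hexdigest(), number=N)

# a random token instead of a hashed timestamp
token_seconds = timeit.timeit(lambda: secrets.token_hex(16), number=N)

print("md5(time):    %.2f s for %d values" % (md5_seconds, N))
print("random token: %.2f s for %d values" % (token_seconds, N))

Whichever wins, at roughly 4 seconds per million values the per-cookie cost is a few microseconds either way, so the hash itself is unlikely to be the bottleneck.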
Then, if you take the opening and closing of a socket into account, moving the script to its own server (when the script itself is most likely already really fast) may actually end up slowing down the requests. Actually, now that I've re-read your question, it sounds like your cookie-setter script is already a dedicated page. Or do you just "include" it into the real content served by another PHP script? If not, try that approach. This would also be beneficial if you have default logging rules for Apache: if cookies are set on their own page, Apache will log a row for each of those requests, and in a high-load system that adds up in the total I/O time spent by Apache.
Also, consider that testing whether the cookie is set and then setting it might be slower than just setting it unconditionally, whether it already exists or not.
But overall, without knowing more about how you handle the cookies now, I don't think you need to set up a server just to offload cookie generation, unless you are doing something really nasty.
Apache has a module called mod_usertrack which looks like it might do exactly what you want. There's no need for PHP and you could likely create a really optimised lightweight Apache config to serve this with.
If you want something even faster and are happy not to use Apache, you could use lighttpd and its mod_usertrack, or nginx's HttpUserId module.