I am looking to tune Zend Optimizer+ on our Zend Server installation. Is there any tool that can show statistics about Zend Optimizer+'s performance, e.g. the number of cache hits and misses, utilization of shared memory, etc.?
Apart from the Optimizer+ configuration directives, is there anything else we can do within our application code to help the bytecode caching engine do a better job?
You can call the function accelerator_get_status() and inspect the result; it contains information about Optimizer+'s internal state and what is happening inside.
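For example, a quick status dump might look like this (a minimal sketch; the function is only defined when Optimizer+ is loaded, and the exact keys in the returned array vary by version):

```php
<?php
// Minimal sketch: dump Zend Optimizer+ runtime statistics.
// accelerator_get_status() is only defined when Optimizer+ is loaded,
// and the exact array keys vary by version, so inspect the output first.
if (function_exists('accelerator_get_status')) {
    $status = accelerator_get_status();
    // Typically includes shared memory usage and cache hit/miss counters.
    print_r($status);
} else {
    echo "Zend Optimizer+ is not loaded.\n";
}
```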
You may also look into increasing realpath_cache_size; the default is too small for large applications.
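You can check how full the realpath cache actually is with PHP's built-in functions (available since PHP 5.3.2) and compare that against the configured limit:

```php
<?php
// Compare current realpath cache usage against the configured limit.
// If usage sits close to the limit, raise realpath_cache_size in php.ini,
// e.g. realpath_cache_size = 256k
echo 'Configured limit: ' . ini_get('realpath_cache_size') . "\n";
echo 'Current usage:    ' . realpath_cache_size() . " bytes\n";
echo 'Cached entries:   ' . count(realpath_cache_get()) . "\n";
```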
There is not much to tune with Zend Optimizer. Check out:
Zend Optimizer User Guide (pdf)
Zend Optimizer Knowledge Base
Apart from this, you can only tweak what is available through the Zend Server UI.
For Zend Optimizer Plus, there are a number of ini directives you can change, as explained in
http://files.zend.com/help/Zend-Server-Community-Edition/zendoptimizerplus.html
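For instance, the directives most relevant to cache sizing look roughly like this in php.ini (a sketch; verify the exact directive names and defaults against the documentation above for your Zend Server version):

```ini
; Sketch of php.ini settings for Zend Optimizer+ cache sizing
; (verify names and defaults against your Zend Server version's docs)
zend_optimizerplus.memory_consumption=128     ; shared memory size, in MB
zend_optimizerplus.max_accelerated_files=4000 ; max number of cached scripts
zend_optimizerplus.revalidate_freq=2          ; seconds between timestamp checks
```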
I recently heard about 'FastCGI', having used CGI before. I have no idea whether FastCGI really is fast, i.e. whether it actually provides better efficiency and performance. Since there are many alternatives to CGI on the market, which is the best technique: CGI, FastCGI, or something else?
FastCGI is a language-independent extension of CGI that has all the features of CGI, plus additional benefits when it comes to scaling your applications with distributed computing and using multiple nodes to host your application.
You can go through links like the official website, and even though this link is old, it's a good read.
What are the advantages and disadvantages of using nginx + Apache + mod_wsgi versus nginx + uWSGI (virtualenv) in production?
The advantage I see in the first variant is that mod_wsgi has been in development since 2007, has more stable releases, and is easy to administer.
The advantages of the second variant are higher performance (see the Benchmark of Python WSGI Servers) and the ability to run the uWSGI server inside a virtualenv, which is more secure.
The disadvantage of the second variant is that there is still no major release, and you need to create controlling scripts to start a uWSGI server for each virtual host (or use supervisor).
What do you think about it?
When you load your typical large Python web application on top of the most popular WSGI servers, the performance difference isn't actually that much and is usually nothing to get excited about. Hello-world benchmarks like the one you quote are very misleading, as they test a very narrow use case and the configurations used are usually not comparable. You should consider watching my PyCon talk, which discusses bottlenecks in web servers and web applications.
http://pyvideo.org/video/703/web-server-bottlenecks-and-performance-tuning
Given that the WSGI server is not usually the problem, you should just choose whichever you find easiest to manage and has the sorts of features you think you will require. Then use benchmarking and monitoring of that choice to work out how to set it up to perform best for your specific web application. Even then, any increase in performance or gains in user satisfaction are not usually going to come from such tuning.
I am new to Magento and impressed by the MVC framework that powers it, making module development a well-thought-out process. I am a strong CakePHP developer.
I am working on a project that uses a dropshipper for the physical products. As a result, every day at 4am a feed needs to be parsed and the products / categories modified, plus stock information updated. A cron job will be set up to do this.
Additional requirements are:
Upon a successful order, the system must upload a CSV feed of the order details to the dropshipper via FTP for distribution.
Real-time stock checks, either hourly via cron or a lookup on the product page (see the crontab sketch below).
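For reference, the schedule described above might look like this in the crontab (paths and script names are hypothetical placeholders):

```
# Hypothetical crontab entries for the feeds described above
0 4 * * * php /var/www/shop/scripts/import_feed.php   # daily product/category/stock import
0 * * * * php /var/www/shop/scripts/stock_check.php   # hourly stock check
```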
I can think of 2 approaches:
Write everything natively in Magento. As a newbie, this is going to be a big learning curve, but is it the right solution?
Write a simple CakePHP app that runs as a shell. This would use the Magento API to manage all dropshipper processes. This solution would be easier to roll out but introduces an additional system to support.
Does anyone have any advice relating to dropshipping in Magento?
First, with respect to the product import (product, stock data), make sure to do the real data saving inside of Magento. There have been changes to the catalog implementation in the past, and it's likely with a framework like Magento that there will be more. Keeping it inside the framework will reduce the likelihood of it simply no longer operating and you getting a very unpleasant phone call.
Another advantage to this approach is that, in contrast to the API approach, the native code will not try to spin up the entire framework for every request. This is expensive and to be avoided. Depending on how many products there are, you may need to break the script into multiple executions due to memory leaks when saving catalog products.
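As a rough illustration, a native import script bootstraps Magento once and then works through the feed in batches (a sketch against the Magento 1 API; parse_feed() and the field names are hypothetical placeholders you would supply):

```php
<?php
// Sketch of a native Magento 1 import script, run from cron.
// parse_feed() and the field names are hypothetical placeholders.
require_once 'app/Mage.php';
Mage::app('admin'); // bootstrap the framework once, not per product

$rows = parse_feed('/path/to/dropshipper_feed.csv'); // hypothetical helper

foreach ($rows as $i => $row) {
    $product = Mage::getModel('catalog/product')
        ->loadByAttribute('sku', $row['sku']);
    if (!$product) {
        continue; // creating new products is omitted in this sketch
    }
    $product->setPrice($row['price']);
    $product->setStockData(array(
        'qty'         => $row['qty'],
        'is_in_stock' => $row['qty'] > 0 ? 1 : 0,
    ));
    $product->save();

    // Work around memory growth on long runs: stop after a batch and let
    // a wrapper script or cron restart the import where it left off
    // (resumption bookkeeping is omitted here).
    if ($i > 0 && $i % 500 === 0) {
        exit(0);
    }
}
```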
Don't tie the stock checks to a catalog page view. Some web crawler will come eat your lunch.
Finally, there's no easy FTP library built into Magento, but putting that on another cron job and using system calls to perform the actual (S)FTP transfer is possibly your easiest option.
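If you'd rather avoid shelling out, PHP's built-in FTP extension is an alternative for plain FTP (a minimal sketch; host, credentials, and file paths are placeholders):

```php
<?php
// Minimal sketch: upload the generated order CSV to the dropshipper.
// Host, credentials, and paths are placeholders; SFTP would need ssh2/phpseclib.
$conn = ftp_connect('ftp.dropshipper.example.com');
if (!$conn || !ftp_login($conn, 'username', 'password')) {
    die("FTP connection/login failed\n");
}
ftp_pasv($conn, true); // passive mode is friendlier to firewalls
if (!ftp_put($conn, '/incoming/orders.csv', '/var/exports/orders.csv', FTP_ASCII)) {
    fwrite(STDERR, "Upload failed\n");
}
ftp_close($conn);
```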
Hope that helps!
Thanks,
Joe
I think the answer to this question is simple: write it in what you know. The biggest reason is upgrades. With Magento being so high-profile, the possibility of older versions being hacked increases every day, so when new versions are released you are going to want to upgrade. With that in mind, are you going to want to re-add all of your changes into each new version as it is released? Probably not. If this can be written as a separate tool, that is what you should do.
PROS TO BUILDING OUTSIDE OF MAGENTO
No need to re-integrate the upgrades every time a new version of Magento is released.
Code is easier to maintain.
Tool is easier to write in something you are familiar with.
No learning curve.
Integration speed will be much quicker.
More flexibility since you do not have to fit inside Magento code limitations.
So this question came up today and I didn't have a specific or scientific answer.
What are the costs associated with using JSF (or Tomahawk, Facelets, etc.) tags in place of traditional HTML tags? My gut reaction is that you should use JSF tags in situations where you need the additional functionality they provide, and use traditional tags when you don't. I also feel that JSF tags would require more resources than HTML, since the server has to take them and re-render them as HTML anyway. Does anybody know what the cost actually is (in terms of time and memory)? Also useful to know: what is the convention in use, pure JSF or a mixture of the two?
Sure, there is a cost. Whether it is noticeable or negligible depends on the hardware and the load of the server in question. Profile it and upgrade the server if necessary.
You should, however, realize that on the other hand you save time and cost compared to implementing the same thing without the help of a component-based MVC framework. That would mean a lot of boilerplate code gathering the parameters, doing validations and conversions, and updating model values, which is possibly not written as efficiently as in existing and widely used MVC frameworks.
The Sun JSF development team treats performance as a high priority, and Mojarra is already optimized as much as possible.
Our site http://www.skill-guru.com runs on JSF / Tomahawk / RichFaces on a Tomcat server. We do not see any speed issues here.
As Jeff pointed out, it all gets compiled, so there is not much noticeable difference unless you really use too many RichFaces components or other fancy stuff.
JSF does make your life easier.
A JSF page gets compiled upon first request (or pre-compiled if you specify that in the config). Thus, the page does not need to be parsed every time it is requested. I don't have any specific numbers relating to time/CPU/memory cost, but I'm sure it's negligible.
What is NHibernate Interceptor, and what purpose does it serve in an application?
Also, in this article, I learned that using NHibernate makes a desktop application slower at startup, so to avoid this, I need to save the configuration in a file, and later load it from the saved file. How can I do that? I didn't find any examples in that tutorial.
An interceptor allows you to execute additional functionality when an entity is retrieved, deleted, updated, or inserted in the DB.
Interceptors article
Hibernate doc
other useful info
About making your app slower:
I'd suggest that you only look at optimizing start-up time when it really becomes a problem.
When you build a session factory, NHibernate parses all the mappings, and that is a somewhat expensive operation. But as long as you have a limited number of entities, the performance hit isn't that big.
I have never had to optimize NHibernate's initialization because of slow startup times.
I'd suggest that you first concentrate on the core of your application (the problem you're trying to solve) and afterwards have a look at how you could improve startup performance, if you ever have to do it at all.
Interceptors, as the name says, allow you to intercept NHibernate operations (save/update/delete/load/flush/etc.).
A newer, more flexible API to achieve this is the event system.
About serializing the configuration, the code is there; it's in the class Effectus.Infrastructure.BootStrapper, which is called at application startup.
A series of mine dissecting interceptors can be found here:
http://blog.scooletz.com/2011/02/03/nhibernate-interceptor-magic-tricks-pt-1/
Hope it helps.