Redis Issue - INCR by too many

I'm running PHP-FPM with Redis on AWS.
Currently I'm having a really strange issue that I can't seem to figure out.
When I INCR or HINCRBY by 1, the value always increases by around 20 to 30 instead.
I have tried the following:
Commented out all other Redis code (no change).
Set up a single PHP page using the same code outside of the site (this works fine - increments by 1).
In the main site (the one having the issue) I put the code in the header, after the last HTML tag, and in other places, and it behaves the same.
I have an AJAX page within the site which is invoked separately if requested and this works fine. Therefore the issue only occurs during the main site load.
I've tested the commands in redis-cli and they work fine.
I can't seem to find any logs or load metrics to read on the AWS Redis system, so I'm not sure exactly what is occurring here, but it appears the command is running multiple times.
I also read the value back after it's written and the value reports correctly, so the increment itself seems to work. However, when I re-check Redis using a GUI tool I can see it has increased by a much larger number.
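For reference, the write-and-read-back check looks roughly like this (a minimal sketch using the phpredis extension; the key name and connection details are placeholders, not the real ones):

<?php
// Placeholder connection details and key name, for illustration only.
$redis = new Redis();
$redis->connect('my-elasticache-endpoint', 6379);

$before = (int) $redis->get('page:counter');
$after  = $redis->incr('page:counter'); // increment by 1
error_log("counter before=$before, after=$after");

If every request logs a clean +1 step but the stored value still jumps by 20-30 between page loads, that supports the idea that the page (and therefore the command) is simply being executed more often than expected.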
I'm really at a loss for what to try next and was hoping someone might have some advice.
Thank you.

Related

PhantomJS - set time limit on page.open()? Or workaround?

Using PhantomJS and bash, I'm working on a little piece of anti-malware that reads a web page, grabs all the domains that are delivering assets to the browser, then prints each server's country of origin. It works fine except for one site that has a... uh... 'suboptimal' piece of javascript that calls to an external server every 5 seconds. PhantomJS just loads the resource over and over and over, page.open() never finishes, and page.onLoadFinished() is never called.
Is there a way around this? Can I set a time limit on page.open()? I guess, as a workaround, can I set a time limit on the Linux process?
Thanks in advance, and if anyone is interested in a copy of this script let me know and I'll post it somewhere public.
I solved this problem using the solutions given here to set an execution time limit on the phantomjs command and kill it if needed:
Command line command to auto-kill a command after a certain amount of time
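In practice that boils down to something like the following (a sketch; the script name and the 60-second limit are placeholders):

# Kill phantomjs if it has not finished within 60 seconds
timeout 60 phantomjs check-domains.js "$url" || echo "phantomjs timed out or failed"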

inittab respawn of Node.js too fast

So I am trying to keep my Node server running on an embedded computer when it is out in the field. This led me to leveraging inittab's respawn action. Here is the entry I added to inittab:
node:5:respawn:node /path/to/node/files &
I know for a fact that when I start this node application from the command line, it does not get to the bottom of the main body and console.log "done" until a good 2-3 seconds after I issue the command.
So I feel like in that 2-3 second window the OS just keeps firing off respawns of the node app. In fact, I see in the error logs that the kernel ends up killing off a bunch of node processes because it is running out of memory, and I also get the "'node' process respawning too fast, will suspend for 5 minutes" message.
I tried wrapping this in a script; that didn't work. I know I can use crontab, but that only runs every minute... Am I doing something wrong, or should I take a different approach altogether?
Any and all advice is welcome!
TIA
Surely too late for you, but in case someone else runs into this problem: try removing the & from the command invocation.
What happens is that when the command goes to the background (thanks to the &), the parent (init) sees that it exited, and respawns it. Result: a storm of new instantiations of your command.
Worse, you mention embedded, so I guess you are using BusyBox, whose init won't rate-limit the respawning as other implementations would. So the respawning will only end when the system is out of memory.
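With the & removed, the entry from the question becomes simply:

node:5:respawn:node /path/to/node/files

Depending on the init implementation you may also need the full path to the node binary, since init's PATH can be minimal.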
inittab is overkill for this. I found out what I need is a process monitor. I found one that is lightweight and effective; it has some good reports of working great out in the field. http://en.wikipedia.org/wiki/Process_control_daemon
Using this would entail configuring this daemon to start and monitor your Node.js application for you.
That is a solution that works from the OS side.
Another way to do it is as follows. If you are trying to keep Node.js running like I was, there are several modules written to keep other Node.js apps running. To mention a couple, there are forever and respawn. I chose respawn.
This method entails starting a small Node.js app that uses the respawn module to start and monitor the actual Node.js app you want to keep running.
Of course, the downside of this is that if the Node.js engine (V8) goes down altogether, then both your monitoring and monitored processes will go down with it :-(. But it's better than nothing!
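As a rough sketch of such a supervisor app (from memory of the respawn module's basic usage, so treat the exact option and event names as assumptions; the path is the placeholder from the question):

// supervisor.js - keeps the real app running via the respawn module
var respawn = require('respawn');

var monitor = respawn(['node', '/path/to/node/files'], {
    sleep: 1000 // pause between restarts (option name assumed)
});

monitor.on('exit', function (code) {
    console.log('app exited with code ' + code + ', respawning...');
});

monitor.start();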
PCD would be the ideal option. It would probably go down only if the OS goes down, and if the OS goes down then hopefully one has a watchdog in place to reboot the device/hardware.
Niko

Why is the Server.Transfer process slow in VB.NET?

I need your help. I have a problem with Server.Transfer code in VB.NET; it runs very slowly.
My questions:
Why does it run so slowly (it takes 5 minutes to move between web pages (.aspx))?
What should I check to troubleshoot this?
Is it because of the operating system? I use Windows 7; before, when I used Windows XP, there was no problem like this.
Is Server.Transfer related to the database connection (I'm not sure)? I use MySQL (from the XAMPP package).
Or maybe it's because of some other configuration I have missed in Windows 7.
FYI: I tried several web browsers with the same result (loading takes 5 minutes).
Thanks to everyone who answers my question, thank you very much!
One thing I've found on this is that it can have to do with the status code the transferred page returns. If it returns a 500 error, it can make your Server.Transfer run for upwards of five minutes.
One way to test this, if you can, is to run the transferred page in isolation, generating whatever information the transfer would normally hand it on the other side, and see if any errors are generated.
It took me a day to figure this out. Hopefully it helps someone else.
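As a rough illustration of that isolation check, you could request the suspect target page directly and inspect the status code (a hedged sketch; the URL is a placeholder, and you may need to supply whatever query-string or form data the transfer normally provides):

' Request the transferred-to page directly and report its HTTP status code.
Dim request As System.Net.HttpWebRequest = CType(System.Net.WebRequest.Create("http://localhost/TargetPage.aspx"), System.Net.HttpWebRequest)
Try
    Using response As System.Net.HttpWebResponse = CType(request.GetResponse(), System.Net.HttpWebResponse)
        Console.WriteLine("Status: " & CInt(response.StatusCode))
    End Using
Catch ex As System.Net.WebException
    ' A 500 from the target page surfaces here, usually with a response attached.
    Dim errorResponse As System.Net.HttpWebResponse = TryCast(ex.Response, System.Net.HttpWebResponse)
    If errorResponse IsNot Nothing Then
        Console.WriteLine("Status: " & CInt(errorResponse.StatusCode))
    End If
End Try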

AppFabric caching's local cache isn't working for us... What are we doing wrong?

We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when the admin site updates something, the customer-facing site is updated.
It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working.
We have enabled it like this...
bool UseLocalCache = true;
int LocalCacheObjectCount = int.MaxValue;
TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

if (UseLocalCache)
{
    configuration.LocalCacheProperties =
        new DataCacheLocalCacheProperties(
            LocalCacheObjectCount,
            LocalCacheDefaultTimeout,
            LocalCacheInvalidationPolicy
        );
    // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
}
Initially we tried using a timeout invalidation policy (3 minutes) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses).
The cache.GetType().Name returns "LocalCache" - so the factory has made a local cache.
Running "Get-Cache-Statistics MyCache" in PS on my dev environment (asp.net app running local from vs2008, cache cluster running on a seperate w2k8 machine) show a handful of Request Counts. However, on the Production environment, the Request Count increases dramaticaly.
We tried following the method here to se the cache cliebt-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e no loggin either.
I cant find anything in SO or Google.
Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via WebPlatform Installer - I think?
(note: the IIS box running ASp.net isnt in yhe cluster - it is just the client).
Any insights greatfully received!
Which DataCache methods are you using to read from the cache? Several of the DataCache methods will always make a hit against the server regardless of local cache being configured. You pretty much have to make sure you only use Get if you want the local cache to be leveraged.
This is one of my biggest nits with AppFabric Caching. They don't explain any of this to you, and so when you begin to rely on local caching you fall into these little pitfalls: you do not think you're paying a penalty for talking to the service, transferring data over the wire and deserializing objects, but you are.
The worst thing is, I could understand having to talk to the service to make sure the local cache represents the latest data. I can even understand transferring the data back so that multiple calls are not made. What I can not understand for the life of me though is that even if the instance in the local cache is verified to still be the current version that came back from the cache, they still deserialize the object from the wire rather than just returning the instance that's in memory already. If your objects are large/complex this can hurt a lot.
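For what it's worth, a read path that can actually be served from the local cache looks roughly like this (a sketch; "MyCache" is the cache name from the question and "some-key" is a placeholder):

// Assumes the DataCacheFactoryConfiguration shown above, with LocalCacheProperties set.
DataCacheFactory factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetCache("MyCache");

// Get can be answered from the local cache while the item is present and not yet
// invalidated; version-checking reads such as GetIfNewer always go back to the cluster.
object value = cache.Get("some-key");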
After days and days of looking into why we get so many local cache misses, we finally solved it:
There is a bug with the local cache in AppFabric v1.1 that is fixed in CU4; see http://support2.microsoft.com/kb/2800726/en-us
Make sure that the Microsoft.ApplicationServer.Caching.Client.dll used by your application is also updated. We had CU4 installed on the machine but got the Client.dll without CU4 from a NuGet package in our application. In our case a simple NuGet package update made everything work.
After installing CU4 and making sure that the Client.dll was also updated, we reduced our reads against the AppFabric host by a lot, due to local cache hits increasing. Yay!
Have you tried using an NHibernate profiler? http://nhprof.com/
There is also this:
http://mdcadmintool.codeplex.com/
It's a nice way to manage and view the cache.
Both of these may help in debugging the issue.

PHP script stops running arbitrarily with no errors

I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
I've been over every single possible thing I could think of:
set_time_limit (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point that the script is stopped is not consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test, open up the command line and run the code in there. The script should run indefinitely.
Have you considered just running the script from the command line, e.g.:
php script.php
and have the script flush out a message every so often to show that it's still running:
<?php
while (true) {
    doWork();  // placeholder for the actual work
    echo "still alive...\n";
    flush();
}
In such cases, I turn on all the development settings in php.ini - on a development server, of course. This displays many more messages, including deprecation warnings.
In my experience of debugging long-running PHP scripts, the most common cause was a memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...).
I think what you need to find out is the exact time at which it stops (you can record an initial time and keep dumping out the current time minus the initial one). There is something on the server side that is stopping the script. Also, consider doing an ini_get to check that the execution time limit is actually 0. If you want, set the time limit to 30 and then, in EVERY iteration of the loop, set it to 30 again; every time you call set_time_limit, the counter resets, and this might allow you to bypass the actual limits. If this still isn't working, there is something on 1and1's servers that might be killing the script.
Also, did you try ignore_user_abort?
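A minimal sketch of those two suggestions combined, reusing the test loop from above (doWork() is still just a placeholder):

<?php
ignore_user_abort(true); // keep running even if the browser disconnects

while (true) {
    set_time_limit(30);  // reset the execution-time counter on every iteration
    doWork();            // placeholder for the real work
    echo "still alive...\n";
    flush();
}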
I appreciate everyone's comments. Especially James Hartig's - you were very helpful and sent me on the right path.
I still don't know what the problem was. I got it to run on the server over SSH, just by using the exec() command as well as ignore_user_abort(). But it would still time out.
So, I just had to break it into small pieces that will run for only about 2 minutes each, and use session variables/arrays to store where I left off.
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
I think this is caused by some process monitor killing off "zombie processes" in order to free up resources for other users.
Run the exec using "2>&1" to log everything, including stderr (a short example is at the end of this answer).
In my output I managed to catch this:
...
script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace, which is excellent BTW, but I think most shared hosting providers impose this kind of resource/process control mechanism just to keep things sane.
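For reference, "run the exec using 2>&1" might look something like this in PHP (the log file name is a placeholder; script.sh is the wrapper from the output above):

<?php
// Capture stdout and stderr from the wrapper script so external kills show up in the log.
exec('sh script.sh 2>&1', $output, $status);
file_put_contents('script-output.log', implode("\n", $output) . "\n", FILE_APPEND);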