I am working on a development Plone 4.3.3 site with Zope 2.13.22 on Debian, and I noticed when running a backup with collective.recipe.backup that the timestamp in the backup file name is 6 hours ahead of my system time.
Example:
Backup name = 2014-08-06-17-08-15.fsz
System time (and write time according to properties) = 2014-08-06 11:08:15
I have checked multiple areas of Plone and they all match my system time.
My buildout.cfg contains the correct Time Zone information.
Any ideas as to what might be causing this or how to correct it? Thank you in advance.
IIRC, this is how the backup script works. Time values are always rendered in UTC, not server local time. This allows for unambiguous ordering of backup files.
If you look at the source of the repozo script used for the backup, you can see that the date portion of the file name is always generated with time.gmtime(), so this is not something you can change.
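For illustration, this is roughly how a repozo-style file name is produced, and how you can translate one back into local time. This is a simplified sketch, not the actual repozo source:

import calendar
import time

# repozo builds the backup file name from the current UTC time, roughly like this:
stamp = time.gmtime()
name = '%04d-%02d-%02d-%02d-%02d-%02d' % stamp[:6]
print(name + '.fsz')  # e.g. 2014-08-06-17-08-15.fsz

# To see which local time a given backup name corresponds to, parse it as UTC and convert:
utc = time.strptime('2014-08-06-17-08-15', '%Y-%m-%d-%H-%M-%S')
local = time.localtime(calendar.timegm(utc))
print(time.strftime('%Y-%m-%d %H:%M:%S', local))  # 2014-08-06 11:08:15 on a UTC-6 machine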
My app checks a website for some files with a simple NSURLConnection request. Now I want to detect whether one of the files has changed, without downloading the file and comparing it.
I thought about MD5 checksums, but how can I do this without wasting traffic by downloading the whole file?
Do you have any ideas for this?
Check the timestamp on the file. That should be easier than using MD5 checksums. I don't know how your app or server API is implemented, but the idea is pretty straightforward (see the sketch after these steps):
On the server, create an API that lets you query when a file was last modified (keeping track of modification timestamps should already be handled by the OS on the server).
When you download the file on your client, also store the timestamp (i.e. when the server thinks the file was last modified).
When checking whether to update a file, first ask the server for the file's timestamp and compare it with the one in your client app - if the server timestamp is more recent than the one on your client, download the new file; otherwise do nothing.
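Here is the same flow as a quick sketch, in Python just to show the protocol side; in your iOS app you would issue the equivalent HEAD request through NSURLConnection. The URL and the stored timestamp are placeholders:

from email.utils import parsedate_to_datetime
import urllib.request

def remote_last_modified(url):
    # Ask the server only for the headers (HEAD), so no file body is transferred.
    request = urllib.request.Request(url, method='HEAD')
    with urllib.request.urlopen(request) as response:
        return response.headers.get('Last-Modified')

stored = 'Wed, 06 Aug 2014 11:08:15 GMT'  # timestamp remembered from the last download
current = remote_last_modified('http://example.com/somefile.dat')  # placeholder URL

if current and parsedate_to_datetime(current) > parsedate_to_datetime(stored):
    print('Server copy is newer - download the file again.')
else:
    print('File unchanged - nothing to do.')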
In our Rails app, the timezone is set to UTC in our environment file. This doesn't cause any problems on our production or staging servers. However, none of our local development machines have their system clock set to UTC, and this causes some test failures when comparing dates, because Rails uses UTC when we call DateTime.now, whereas our MySQL database uses the system time (CST in my case).
Is there a way to ensure that, in certain cases, DateTime.now does NOT use the UTC timezone? I guess what I'm asking for is a pure-SQL way of updating date fields that bypasses the Rails engine.
Actually, you should be able to tell Rails to use local settings only for development and other environments like test by adding this to the corresponding config/environments file:
config.time_zone = "Eastern Time (US & Canada)" #Change this
config.active_record.time_zone_aware_attributes = false
config.active_record.default_timezone = :local
That worked for me, though I switched everything, including production, to local time. But I don't see why this approach wouldn't work per environment!
I am trying to get XDebug working on my local wamp installation (Uniform Server 8).
However when I put
xdebug.remote_enable=1
in my php.ini, which is required for my IDE to use Xdebug, loading pages gets really slow - as in 5 seconds per page. The debugger works, though.
I haven't used Xdebug before, but I can imagine it normally shouldn't take this long. I suspect it might have something to do with using the Symfony2 framework.
Does anyone have an idea what's causing this?
It may simply be because this is what Xdebug does!
Check the default storage location for Xdebug output (most of the time /tmp/xdebug/something), which on Windows will be somewhere different than on Unix/Linux systems.
Set these in your php.ini if you want the files placed/named somewhere else (example after the descriptions below):
xdebug.profiler_output_dir
Type: string, Default value: /tmp
The directory where the profiler output will be written; make sure that the user PHP runs as has write permissions to that directory. This setting cannot be set in your script with ini_set().
xdebug.profiler_output_name
Type: string, Default value: cachegrind.out.%p
This setting determines the name of the file that is used to dump traces into. The setting specifies the format with format specifiers, very similar to sprintf() and strftime(). There are several format specifiers that can be used to format the file name.
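For example, the relevant php.ini lines might look like this (the Windows path is only a guess for a Uniform Server setup - point it wherever you like, as long as PHP can write there; %t and %p expand to a timestamp and the process id):

xdebug.remote_enable = 1
xdebug.profiler_output_dir = "C:/UniServer/tmp/xdebug"
xdebug.profiler_output_name = "cachegrind.out.%t.%p"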
Generating these files is taxing on your system, but they are what you need to profile your code.
Also, go read http://xdebug.org/docs before you use it again, so that you know exactly what you are trying to do.
As per another answer on SO, you need to set xdebug.remote_autostart = 0 in your php.ini.
I just started playing with the Azure Library for Lucene.NET (http://code.msdn.microsoft.com/AzureDirectory). Until now I was using my own custom code for writing Lucene indexes to the Azure blob: I would copy the blob to the local storage of the Azure web/worker role and read/write docs to the index there, using my own locking mechanism to make sure reads and writes to the blob didn't clash. I am hoping the Azure library will take care of these issues for me.
However, while trying out the test app, I tweaked the code to use the compound-file option, and that created a new file every time I wrote to the index. Now, my question is: if I have to maintain the index - i.e. keep a snapshot of the index file and use it if the main index gets corrupted - how do I go about doing this? Should I keep a backup of all the .cfs files that are created, or is handling only the latest one fine? Are there API calls to clean up the blob so that only the latest file is kept after each write to the index?
Thanks
Kapil
After I answered this, we ended up changing our search infrastructure and used Windows Azure Drive. We had a worker role that would mount a VHD from blob storage and host the Lucene.NET index on it. The code checked that the VHD was mounted first and that the index directory existed. If the worker role fell over, the VHD would automatically dismount after 60 seconds, and a second worker role could pick it up.
We have since changed our infrastructure again and moved to Amazon with a Solr instance for search, but the VHD option worked well during development. It could have worked well in test and production too, but requirements meant we needed to move to EC2.
I am using AzureDirectory for full-text indexing on Azure, and I am getting some odd results too... but hopefully this answer will be of some use to you.
Firstly, the compound-file option: from what I have read and figured out, the compound file is a single large file with all the index data inside. The alternative to this is having lots of smaller files (configured using the SetMaxMergeDocs(int) function of IndexWriter) written to storage. The problem with this is that once you get to lots of files (I foolishly set this to about 5000), it takes an age to download the indexes (on the Azure server it takes about a minute; on my dev box... well, it's been running for 20 minutes now and still hasn't finished...).
As for backing up indexes, I haven't come up against this yet, but given that we currently have about 5 million records, and that number will grow, I am wondering about it too. If you are using a single compound file, maybe downloading the files to a worker role, zipping them and uploading them with today's date would work (see the sketch below)... If you have a smaller set of documents, you might get away with re-indexing the data if something goes wrong... but again, it depends on the number.
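If it helps, the "zip it with today's date" part of that idea is simple enough to sketch. This is generic Python, nothing AzureDirectory-specific; the index path is a placeholder and the upload to blob storage is left to whatever client you already use:

import datetime
import shutil

# Placeholder path to the index files synced to the worker role's local storage.
index_dir = r'C:\LocalStorage\LuceneIndex'

# Name the snapshot with today's date so old snapshots sort chronologically.
snapshot_name = 'index-snapshot-%s' % datetime.date.today().isoformat()
archive_path = shutil.make_archive(snapshot_name, 'zip', index_dir)

print('Created %s - now upload it to blob storage.' % archive_path)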
I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the pre-SP2 issues, but I am running Service Pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and check the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I am translating everything into English here.)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups deeper than just one level?
Another thought: do you have a single maintenance plan that deletes backups for multiple databases? I ask because the only way I can see to do that would be to point it at a folder one level higher, which means "include first-level subfolders" would not reach deep enough.
The way I have mine set up, the Maintenance Cleanup Task is part of my backup process: once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This lets me be more specific about the directory, so I don't run into the directory structure being too deep. And since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted anyway.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plan, it is probably quitting before it gets to deleting the outdated backups.
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task should run always, or only when the first one does/does not complete successfully (this changes the color of the arrow; possible colors are red/green/blue). Maybe the backup works, and then the cleanup never runs because it is set to run only when the backup fails?