Replacing or recreating a file in Windows 8 RT keeps the old DateCreated value

I'm attempting to cache data in a file for a Windows Store app, and using the DateCreated value to determine if it is out of date.
I first tried doing this:
var file = await rootFolder.CreateFileAsync(filename, Windows.Storage.CreationCollisionOption.ReplaceExisting);
await FileIO.WriteTextAsync(file, contents);
but when the file is saved, only the DateModified value changes, even though the documentation for the ReplaceExisting option clearly states that it recreates the file, replacing any existing one.
So I decided to force it to delete the file and recreate it with this:
var file = await rootFolder.CreateFileAsync(filename, Windows.Storage.CreationCollisionOption.ReplaceExisting);
// force delete because windows rt is not doing what it's supposed to in the line above!!
await file.DeleteAsync();
file = await rootFolder.CreateFileAsync(filename);
await FileIO.WriteTextAsync(file, contents);
but amazingly, I still get the same result! The file is deleted and recreated with the OLD CREATION DATE!
Is this a bug, or am I doing something wrong here?

This is by design; the feature is called "file system tunneling". This KB article explains the behavior and the rationale.
The workaround it documents requires editing the registry, which you clearly cannot rely on in a Store application. You'll need a different approach, such as using the last-written timestamp, alternating between two files, or keeping track of the age in a separate file.

Thanks for the comments, everyone. It turns out the Modified date IS available, but you have to get it through the GetBasicPropertiesAsync() method, as shown here: http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.fileproperties.basicproperties.datemodified.aspx
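For anyone else doing this, a minimal sketch of the staleness check built on that property (the one-hour cutoff is just an illustrative threshold):
var file = await rootFolder.GetFileAsync(filename);
var props = await file.GetBasicPropertiesAsync();
// DateModified is updated on every write; DateCreated is what tunneling carries over.
bool isStale = DateTimeOffset.UtcNow - props.DateModified > TimeSpan.FromHours(1);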

Related

Using Leigh's version of S3Wrapper.cfc - can't get past init()

I am new to S3 and need to use it for image storage. I found a half dozen versions of an S3 wrapper for CF, but it appears that the only one set up for v4 is the one modified by Leigh:
https://gist.github.com/Leigh-/26993ed79c956c9309a9dfe40f1fce29
I dropped it into the com directory and created a "test" page that contains the following code:
s3 = createObject('component','com.S3Wrapper').init(application.s3.AccessKeyId,application.s3.SecretAccessKey);
but got the following error:
So I changed line 37 from
variables.Sv4Util = createObject('component', 'Sv4').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
to
variables.Sv4Util = createObject('component', 'Sv4Util').init(arguments.S3AccessKey, arguments.S3SecretAccessKey);
Now I am getting:
I feel like going through Leigh's code and changing things is a bad idea, since I have lurked here for years and know Leigh's code is solid.
Does anyone know if there are examples of how to use this anywhere? If not, what am I doing wrong? If it makes a difference, I am using Lucee 5 and not Adobe's CF engine.
UPDATE:
I followed Leigh's directions and the error is now gone. I added some more code to my test page, which now looks like this:
<cfscript>
    s3 = createObject('component', 'com.S3v4').init(application.s3.AccessKeyId, application.s3.SecretAccessKey);
    bucket = "imgbkt.domain.com";
    obj = "fake.ping";
    region = "s3-us-west-1";
    test = s3.getObject(bucket, obj, region);
    writeDump(test);
    test2 = s3.getObjectLink(bucket, obj, region);
    writeDump(test2);
    writeDump(s3);
</cfscript>
Regardless of what I put in for bucket, obj, or region, I get:
Just in case, I did go to AWS and get new keys:
Leigh, if you are still around, or anyone who has used one of the S3 wrappers: any suggestions or guidance?
UPDATE #2:
Even after Alex's help, I am not able to get this to work. The link I receive from getObjectLink is not valid, and getObject never downloads an object. I thought I would try the putObject method
test3 = s3.putObject(bucketName=bucket,regionName=region,keyName="favicon.ico");
writeDump(test3);
to see if there was any additional information. I received this:
I did find this article https://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html but it is pretty old, and since S3 specifically suggests using dots in bucket names, I don't think it is relevant any longer. There is obviously something I am doing wrong, but I have spent hours trying to resolve this and I can't seem to figure out what it might be.
I will give you a rundown of what the code does:
getObjectLink returns an HTTP URL for the file fake.ping, which is looked up in the bucket imgbkt.domain.com of region s3-us-west-1. The link is temporary and expires after 60 seconds by default.
getObject invokes getObjectLink and immediately requests that URL via HTTP GET. The response is then saved to the directory of the S3v4.cfc, with the filename fake.ping by default. Finally, the function returns the full path of the downloaded file: E:\wwwDevRoot\taa\fake.ping
To save the file in a different location, you would invoke:
downloadPath = 'E:\';
test = s3.getObject(bucket,obj,region,downloadPath);
writeDump(test);
The HTTP request is synchronous, meaning the file will have been downloaded completely when the function returns the file path.
If you want to access the actual content of the file, you can do this:
test = s3.getObject(bucket,obj,region);
contentAsString = fileRead(test); // returns the file content as string
// or
contentAsBinary = fileReadBinary(test); // returns the content as binary (byte array)
writeDump(contentAsString);
writeDump(contentAsBinary);
(You might want to stream the content if the file is large, since fileRead/fileReadBinary reads the whole file into a buffer. Use fileOpen to stream the content instead.)
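A minimal sketch of that chunked approach (the 8 KB chunk size and the per-chunk handling are placeholders):
filePath = s3.getObject(bucket, obj, region);
fh = fileOpen(filePath, "readBinary");
try {
    while (!fileIsEOF(fh)) {
        chunk = fileRead(fh, 8192); // read 8 KB at a time instead of buffering the whole file
        // process or forward the chunk here
    }
}
finally {
    fileClose(fh);
}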
Does that help you?

AdWords API report without download

I'm working with the AdWords API. I can already download reports like: all keywords with impressions, clicks, CTR, conversions, etc.
The problem is, I need to show this report in our web tool when the user sets the date range.
Right now I'm doing this: the user selects a start date of 01/09/2014 and an end date of 15/09/2014, I call the AdWords API, download the CSV, parse it, and then show the results on the screen. But this approach is not optimal, and I would like to know how to call the API and get the results "at the moment", as XML or JSON, without downloading a file.
Is it possible??
The only way I found was calling the CampaignService class, getting all campaigns, then for each campaign calling AdGroupService to get all ad groups, then keywords... It's really impractical.
How can I do it?
Thank you very much.
It appears that in recent versions of the library they added a function, getAsString(), so we can achieve this:
$reportDownloader = new ReportDownloader($session);
$reportDownloadResult = $reportDownloader->downloadReport($reportDefinition);
//Normal way of downloading to file
//$reportDownloadResult->saveToFile($filePath);
//printf("Report with name '%s' was downloaded to '%s'.\n",
// $reportDefinition->getReportName(), $filePath);
//New way by calling getAsString();
$reportAsString = $reportDownloadResult->getAsString();
echo $reportAsString;
Old suggestions about passing null as the $filePath no longer work.
Yes, you can get XML without downloading a file. Just set $path to Null when calling ReportUtils::DownloadReport() and it returns the response without saving it to a file.
The AdWords API library (as of v201506) allows you to set the download_format to one of CSVFOREXCEL, CSV, TSV, XML, GZIPPED_CSV, or GZIPPED_XML. Unfortunately it does not support JSON, even when you retrieve the report data in memory rather than as a file. To get XML and parse it:
$reportAsString = $reportDownloadResult->getAsString();
$xml = simplexml_load_string($reportAsString);
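For getAsString() to hand back XML in the first place, the report definition has to request the XML format. A minimal sketch, assuming a library version where ReportDefinition exposes a setDownloadFormat() setter (older SOAP-era releases use a plain $reportDefinition->downloadFormat property instead):
// Only the format line matters here; the rest mirrors the snippet above.
$reportDefinition->setDownloadFormat('XML');
$reportDownloader = new ReportDownloader($session);
$reportAsString = $reportDownloader->downloadReport($reportDefinition)->getAsString();
// AdWords XML reports nest rows as <report><table><row .../></table></report>;
// the row attribute names depend on the fields you selected.
$xml = simplexml_load_string($reportAsString);
foreach ($xml->table->row as $row) {
    echo $row['impressions'], "\n";
}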

Can't replace mongo document

I am attempting to save documents to a MongoDB cluster (sharded replica sets) and am having a strange issue. I am using pymongo 2.7.2 and TokuMX 1.5 (based on MongoDB 2.4.10).
When I attempt to save (overwrite) existing documents I am getting an exception that looks like the document I am saving is too large:
doc = db.collection.find_one()
db.collection.save(doc)
pymongo.errors.OperationFailure: BSONObj size: 18798961 (0x71D91E01) is invalid. Size must be between 0 and 16793600(16MB) First element: op: "u"
However this works fine:
doc = db.collection.find_one()
db.collection.remove({'_id': doc['_id']})
db.collection.save(doc)
The document in question is about 9 MB, so it looks like when I attempt to replace the document, something is doubling its size and exceeding the 16 MB limit.
Any ideas as to what could cause this behavior?
Apparently this is a known issue with TokuMX. Oplog entries are twice the size of the document, so replacing a 9 MB document will result in an 18 MB oplog entry, which raises the exception.
The solution would be to limit document writes to less than 8 MB so that oplog entries never exceed 16 MB.
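A minimal sketch of such a guard, using the bson package that ships with pymongo (the 8 MB ceiling is the rule of thumb above, not a server constant):
import bson

MAX_SAFE_SIZE = 8 * 1024 * 1024  # keeps the doubled oplog entry under 16 MB

def safe_save(collection, doc):
    # Encode to BSON first so we measure exactly what the server will see.
    if len(bson.BSON.encode(doc)) > MAX_SAFE_SIZE:
        raise ValueError('document too large to replace safely; split it first')
    collection.save(doc)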
I think this is a side effect of how save is implemented in PyMongo.
Under the hood, if the document has an _id, then save(doc) is turned into an update(doc, doc). That is where the doubling comes into play, since the query + update together are 18 MB.
When you removed the _id, you changed the save(doc) into an insert(doc) of a new document with a new _id. I don't think that is what you wanted.
Rather than use save, I would recommend constructing a query with just the _id field from the original document and making the update call manually. I would even go so far as to say you should file a Jira ticket to get PyMongo to do this for you.
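A minimal sketch of that manual update, sticking with the pymongo 2.x API used in the question:
doc = db.collection.find_one()
# Query by _id only, so the full document appears once (on the update side)
# rather than twice as in save()'s query + update pair.
db.collection.update({'_id': doc['_id']}, doc)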
HTH,
Rob.

Why does GetBasicPropertiesAsync() sometimes throw an Exception?

In Windows 8, I'm trying to use GetBasicPropertiesAsync() to get the size of a newly created file. Sometimes, but not always (~25% of the time), this call throws:
"Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))".
The file is created using DotNetZip. I'm adding thousands of files to the archive, which takes a few minutes to run:
using (ZipFile zip = new ZipFile())
{
zip.AddFile(...); // for thousands of files
zip.Save(cr.ArchiveName);
}
var storageFile = await subFolder.GetFileAsync(cr.ArchiveName);
// storageFile is valid at this point
var basicProperties = await storageFile.GetBasicPropertiesAsync(); // BOOM!
A few apparently random things seem to decrease the likelihood of the exception:
Deleting an existing copy of cr.ArchiveName before the start of the loop.
Not viewing the directory using File Explorer
Weird, huh? It smells like a bug related to file system tunneling, or maybe some internal caching that DotNetZip performs, holding onto resources (perhaps renaming the TEMP file) even after the ZipFile is disposed?
Trying to (unsuccessfully) answer my own question.
At first, I thought this was a known issue with DotNetZip holding onto file handles until the next garbage collection. I am using the SL/WP7 port of DotNetZip from http://slsharpziplib.codeplex.com/ which presumably doesn't include the bug fixed by this workitem:
http://dotnetzip.codeplex.com/workitem/12727
But, according to that theory, doing:
GC.Collect();
GC.WaitForPendingFinalizers();
should have provided a workaround, but it didn't.
Next I tried the handle utility, which didn't show any other activity on the failing StorageFile.
So for now, I'm still stumped.

Error 3013 thrown when writing a file in Adobe AIR

I'm trying to write/create a JSON file from an AIR app, without showing a 'Save As' dialog box.
Here's the code I'm using:
var fileDetails:Object = CreativeMakerJSX.getFileDetails();
var fileName:String = String(fileDetails.data.filename);
var path:String = String(fileDetails.data.path);
var f:File = File.userDirectory.resolvePath( path );
var stream:FileStream = new FileStream();
stream.open(f, FileMode.WRITE );
stream.writeUTFBytes( jsonToExport );
stream.close();
The problem I'm having is that I get 'Error 3013. File or directory in use'. The directory/path is gathered from a Creative Suite extension I'm building; the path is the same as that of the FLA being developed in CS that the extension is used with.
So I'm not sure if the problem is that there are already files in the directory I'm writing the JSON file to?
Do I need to add a timer in order to close the stream after a slight delay, giving it some time to write the file?
Can you set up some trace() commands? I would need to know the values of the String variables, and of f.url.
Can you read from the file that you are trying to write to, or does nothing work?
Where is CreativeMakerJSX.getFileDetails() coming from? Is it giving you data about a file that is in use?
And from Googling around, this seems like it may be a bug. Try setting up a listener for when the write has finished, in case you have had the file open previously.
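A minimal sketch of that listener approach using the async API (the handler wiring is illustrative, not taken from the original code):
var stream:FileStream = new FileStream();
// With openAsync, the stream dispatches Event.CLOSE once close() has flushed the write.
stream.addEventListener(Event.CLOSE, onStreamClosed);
stream.openAsync(f, FileMode.WRITE);
stream.writeUTFBytes(jsonToExport);
stream.close();

function onStreamClosed(e:Event):void {
    trace("write finished for " + f.url);
}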
I re-wrote how the file was written, no longer running into this issue.