Get BIM 360/ACC file version upload time via API

Could anyone please let me know whether the current ACC API can return the time a file version was uploaded?
Or how to differentiate (via the API) between a file that was modified because its attributes changed and one that was modified because a new file was uploaded?
I am using the method below, but it seems the upload time is not available.
https://forge.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-versions-version_id-GET/
Hope my query is clear to all.
Kind Regards,
R
GET method to fetch ACC file version details, including the version upload time
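For reference, this is roughly how I am calling it with Python requests (the project ID, version ID, and token below are placeholders); I can see createTime and lastModifiedTime in the response attributes, but I am not sure which of them, if either, reflects the upload time:

import urllib.parse

import requests

PROJECT_ID = "b.xxxxxxxx"  # placeholder
VERSION_ID = "urn:adsk.wipprod:fs.file:vf.xxxx?version=2"  # placeholder
TOKEN = "..."  # 2- or 3-legged access token

# Endpoint from the linked reference page (check there for the exact path).
url = ("https://developer.api.autodesk.com/data/v1/projects/"
       f"{PROJECT_ID}/versions/{urllib.parse.quote(VERSION_ID, safe='')}")
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
attrs = resp.json()["data"]["attributes"]

# createTime and lastModifiedTime both come back, but neither is explicitly
# labelled as the upload time, which is what this question is about.
print(attrs["createTime"], attrs["lastModifiedTime"], attrs["versionNumber"])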

Related

Potential bug in GCP regarding public access settings for a file

I was conversing with someone from GCS support, and they suggested that there may be a bug and that I post what's happening to the support group.
Situation
I'm trying to adapt this TensorFlow demo ...
https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization
... to something I can use with images stored in my GCP account, substituting one of my own images to run through the process.
I have the bucket set for allUsers to have public access, with a role of Storage Object Viewer.
However, the demo still isn't accepting my files stored in GCS.
For example, this file is being rejected:
https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg
That file was downloaded from the examples in the demo, then uploaded to my GCS bucket, and that link was used in the demo. But it's not being accepted. I'm using the URL from the Copy URL link.
Re: publicly accessible data
I've been following the instructions on making data publicly accessible.
https://cloud.google.com/storage/docs/access-control/making-data-public#code-samples_1
I've performed all of those operations from the console, but the console still doesn't indicate public access for the bucket in question, so I'm not sure what's going on there.
Please see the attached screenshot of my bucket permissions settings.
So I'm hoping you can clarify whether those settings look right for making those files publicly accessible.
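For what it's worth, the code sample on that page boils down to something like the following (this is just a sketch of what I understand it to do; I ran the equivalent steps from the console rather than this code, and the bucket name is mine):

from google.cloud import storage

# Grant roles/storage.objectViewer to allUsers on the bucket, which is what
# the "making data public" page describes.
storage_client = storage.Client()
bucket = storage_client.bucket("01_bucket-02")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]}
)
bucket.set_iam_policy(policy)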
Re: Accessing the data from the demo
I'm also following this related article on 'Accessing public data'
https://cloud.google.com/storage/docs/access-public-data#storage-download-public-object-python
There are 2 things I'm not clear on:
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
If I do need to add this to the code from the demo, can you tell me how I can find these two parts of the code (my attempt at filling them in is sketched after these questions):
a. source_blob_name = "storage-object-name"
b. destination_file_name = "local/path/to/file"
I know the path of the file above (01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg), but I don't understand whether that's the storage-object-name or the local/path/to/file.
And if it's one of those, how do I find the other value?
And furthermore, to make a bucket public, why would I need to name an individual file? That's making me think that code isn't necessary.
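For reference, the download sample from that article looks roughly like this; the values in the call at the bottom are only my guess at how my case maps onto it:

from google.cloud import storage

def download_public_file(bucket_name, source_blob_name, destination_file_name):
    # source_blob_name is the object's name inside the bucket (no bucket prefix);
    # destination_file_name is a local path on the machine running the script.
    storage_client = storage.Client.create_anonymous_client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)

download_public_file(
    "01_bucket-02",
    "Green_Sea_Turtle_grazing_seagrass.jpeg",
    "/tmp/Green_Sea_Turtle_grazing_seagrass.jpeg",  # example local path
)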
Thank you for clarifying any issues or helping to resolve my confusion.
Doug
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
No, you don't need to. I actually did some testing and was able to pull images from GCS whether or not they were set to public.
As we discussed in this thread, what's happening in your project is that the image you are trying to pull from GCS has a .jpeg extension but is not actually a .jpeg. The actual image is a .jpg, which causes TensorFlow to fail to load it properly.
See this test, following the demo you mentioned and using the image from your bucket. Note that I used .jpg as the image's extension.
content_urls = dict(
    test_public='https://storage.cloud.google.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpg'
)
I also tested another image from your bucket, and it loaded successfully in TensorFlow.
Most likely the problem is that your turtle image ends in .jpeg while your libraries are looking for .jpg.
The errors you're seeing would be much more helpful for figuring out the problem.
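If it helps, here is a rough sketch for checking what the object actually contains, regardless of its extension (the URL is the one from your question; it must be publicly readable for this to work):

import io
import urllib.request

from PIL import Image

url = "https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg"
data = urllib.request.urlopen(url).read()
image = Image.open(io.BytesIO(data))
# Prints the real format (e.g. JPEG or PNG), independent of the file extension.
print(image.format)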

XPages POI4Xpages download to network location

I am using POI4XPages, which is great: LINK
However, at present, when it creates my Word document, it simply downloads like a normal download from the internet, storing it in the Downloads folder in Windows (using Chrome, anyway).
Is there a way, using POI4XPages, to instead write the file to a specified network location, for example a shared drive?
After that, I would simply build a link to the file using the network location and, for example, a filename variable to pick the correct file.
If that's not possible, is it possible to get a handle on the file before or after it is downloaded, and then save it to a field on the XPage?
In short, I want to avoid the user downloading the file and then having to attach it manually to the XPage.
Thanks
POI4XPages allows you to get a handle to the file using the variable "workbook". You are also able to provide the specific downloadFileName you wish to use. Using the postGenerationProcess property, you should be able to call a Java method that makes the connection to your network drive, where you can use the "workbook" variable and the downloadFileName value to save your document. If this doesn't work, definitely post a question on their project site, because the creator does reply.

Dropbox Webhooks API

I want to fetch the events related to my Dropbox, like I can see here; that is, I want to know when a particular file is added, changed, moved, deleted, or renamed, and by which user.
I have looked into the webhooks docs. They state that the notification sent to the callback URL contains the user IDs, with which I can update the directory listing for each user by calling /delta.
But with that I cannot tell what operation was performed on a file, for example whether a particular file has been renamed or deleted. If I rename a file from abc to xyz and then get a notification, I will look for changes related to the file xyz, which I will not find in my existing database, so logically I will record the events as 'deleted abc' and 'added xyz', whereas the reality is 'renamed abc to xyz'.
I would be really grateful if you could help me with this.
There's no real way to detect a rename (versus a delete and an add) via the Dropbox API. You can use heuristics (like whether the new file has the same contents as the old file and was created around the same time as the old file was deleted), but those are just going to be guesses with varying levels of accuracy.
Also, there's currently no way via the API to see which user modified a file in a shared folder.
UPDATE: The Core API now includes (in beta) the ability to see who last modified a file in a shared folder. See https://www.dropbox.com/developers/blog/101/new-in-beta-shared-folder-metadata.
A file rename is reported as DeletedMetadata followed by a FileMetadata for the file.
What is annoying is that DeletedMetadata does not contain the file's .id attribute, only .name and .path, while FileMetadata includes all attributes (.id is the same as before any deletions/renames, with updated .name and .path).
So you should have a local mapping connecting names/paths and IDs in order to know which file got deleted (when only DeletedMetadata is received) or renamed (when both DeletedMetadata and FileMetadata are received).
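A rough sketch of that idea with the Dropbox Python SDK (not production code; it assumes you persist a path-to-ID mapping between webhook notifications and already have a cursor per user):

import dropbox
from dropbox.files import DeletedMetadata, FileMetadata

dbx = dropbox.Dropbox("ACCESS_TOKEN")  # placeholder token
path_to_id = {}  # persisted mapping: lowercased path -> Dropbox file id

def process_changes(cursor):
    result = dbx.files_list_folder_continue(cursor)
    deleted_paths = [e.path_lower for e in result.entries
                     if isinstance(e, DeletedMetadata)]
    for entry in result.entries:
        if isinstance(entry, FileMetadata):
            old_path = next((p for p, fid in path_to_id.items()
                             if fid == entry.id), None)
            if old_path and old_path in deleted_paths:
                # Same id, old path deleted in the same batch: treat as a rename.
                print("renamed", old_path, "->", entry.path_lower)
                del path_to_id[old_path]
                deleted_paths.remove(old_path)
            elif old_path is None:
                print("added", entry.path_lower)
            else:
                print("modified", entry.path_lower)
            path_to_id[entry.path_lower] = entry.id
    for path in deleted_paths:
        # Only a DeletedMetadata arrived for these: a plain delete.
        path_to_id.pop(path, None)
        print("deleted", path)
    return result.cursor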

FineUploader: Harvest original last modified date when uploading to Amazon S3

I would like to send the last modified date of the uploaded file to the server. I have the JavaScript snippet to get that using the File API ($(this).fineUploaderS3('getFile', id).lastModifiedDate). I would like to send this information when the uploadSuccess endpoint is called, but I cannot find the callback that is right for me at Events | Fine Uploader documentation, and I cannot find a way to inject the data.
These are submitted as POST parameters to my server when the upload to S3 has finished: key, uuid, name, bucket. I would like to inject the last modified date here somehow.
Option 2:
Asking the Amazon S3 service for the last modification date does not help directly, because the uploaded file has the current date, not the file's original date. It would be great if we could inject the information into the FineUploader -> S3 communication in a way that S3 would use it for setting its own last modified date for the uploaded file.
Other perspective I considered:
If I use onSubmit and setParams, then the Amazon S3 server will take it as 'x-amz-meta-lastModified'. The problem is that when I upload larger files (which are uploaded in chunks with another dance), I get a signing error: ...<Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>....
EDIT
The "other perspective I considered" works. The bottleneck was the name of the custom metadata key which I used with setParams. It cannot contain capital letters, otherwise the signing fails. I did not find any reference documentation for this; for one, I checked Object Key and Metadata - Amazon Simple Storage Service. If someone can find me a reference, I will include it here.
The original question (when and how to send last modified date to the server component) remains.
(Server is PHP.)
EDIT2
Option 2 will not work; as far as my research went, the "Last Modified" entry cannot be manually altered in Amazon S3.
If the S3 API does not return the expected last modified date, you can check the value of the lastModifiedDate on the File object associated with the upload (provided the browser supports the file API) and send that value as a parameter to the upload success endpoint. See the documentation for the setUploadSuccessParams API method for more details.

Loading dynamically generated KML into the Google Maps API

I have a bit of an issue with loading a dynamically generated KML file into the Google Maps API.
The KML file is generated by Oracle and is of the form
http://server/oracleservioce.method?parm1=100&parm2=100
If I try to load that URL (encoded or decoded), I always get a KmlLayerStatus of INVALID_DOCUMENT.
If I save the result to a local file with a .kml extension it works fine; otherwise I get errors.
I even tried renaming the file to .xml and .dat (arbitrary names), and they all fail. It seems the Google API needs the file to have a .kml extension, which will not work in the dynamic environment. Can anybody suggest a way forward?
Thanks,
PS: I need to use the Google Maps API; I cannot use OpenLayers or any other solution. The file needs to be loaded into a google.maps.KmlLayer object.
I did this; the extension does not matter, but you have to set the MIME type on the HTTP response: https://developers.google.com/kml/documentation/kml_tut#kml_server
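For illustration only (the original endpoint is generated by Oracle, not Flask), a minimal sketch of serving dynamic KML with the correct MIME type:

from flask import Flask, Response

app = Flask(__name__)

KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark><name>Example</name><Point><coordinates>0,0,0</coordinates></Point></Placemark>
</kml>"""

@app.route("/placemarks")
def placemarks():
    # The URL/extension is irrelevant; what matters is the Content-Type header.
    return Response(KML, mimetype="application/vnd.google-earth.kml+xml")

Note that google.maps.KmlLayer fetches the URL via Google's servers, so the endpoint also has to be reachable from the public internet.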