Thumbnail storage - GeoNetwork

Currently GeoNetwork (at least in my latest version) stores thumbnails on the local file system rather than where the XML metadata is stored.
My question: is there a way to store thumbnails (pictures) in a database, the same as the XML metadata?
thanks
Abel

No, this functionality is not available in GeoNetwork, at least not in version 3.6.0 or the upcoming 3.8.0.

Related

How do I convert a Google Doc for alternate cloud storage, with revision history intact, to an openable document in Dropbox

Is there a way to do this easily, keeping the version history on the document itself, as a .gdoc or whatever format? Or am I resigned to having to download, separately, all the revisions made in the past on said document?
For context: I have a document I've been editing and revising over the years for my own medical history, list of meds, etc., and I have been using Google Docs to do this because it was convenient and I didn't have to pay for Microsoft Office or install a good word processor on my PC. Recently I've purchased Dropbox Personal for my cloud storage needs.
I want to do the following: take the Google Doc, save it as a .gdoc (which isn't an option in the File menu??), and move it over to Dropbox's Vault as an editable copy with its revision history intact.
Otherwise, what I have done (before I even comprehended revision history was a thing) is just copy-paste its current version into a new document in Dropbox Vault.
So, is that possible? And if so, how and as easily (lazily) as I possibly can? Also, is this even the right place to ask for this? Apologies if it isn't. I didn't see much else about this specific issue anywhere... (also lazy)
Thanks!
EDIT:
I am by no means a coder in any sense. I'm a full-time elderly caretaker, just a guy with a specific, niche, technical problem, and I thought this was the first place to ask without having to go through tech support with Google chat, etc. It might also help other people who like seeing how their documents have changed over the years, history fans, etc. At the end of the day it's a programming/coding issue that could be resolved some way, somehow... Right?
If I can add pictures here for context, LMK.
Thanks :)
The .gdoc file format is only accessible via Google Docs, which is web-based. Downloading the file to your local storage means you would have to open it on your device using a local word processor such as Microsoft Office, LibreOffice, etc. (desktop-level word-processing apps), which is why the .gdoc format is not offered when you download. This is also why you won't be able to open one from your Dropbox.
The version/revision history on Google Docs is tied to that specific file with its unique ID. When you download the file, the version history stays with the copy on the web and is not part of the downloaded file. Even when you make a copy of the document, the version history does not get copied, so that won't be an option either.
It looks like you'll have to stick to manually copying or backing up the current version of the file before editing, since version history is only kept for a period of 30 days or the last 100 versions, unless a version is manually set to "Keep forever".
Google Drive version history: https://googledrivepro.com/google-drive-version-history/

NTFS alternate data stream usage

We are looking for a way to identify different versions of a text file on Windows operating systems. There are no standard file attributes that support versioning (e.g. 2.0, 2.1, etc.), but using ADS would allow us to write, for example, version information to an alternate stream within the file. My question: is this a suitable use of ADS, or are there drawbacks or reasons we should not do this? I have been using the information at this link to play around with ADS: https://blog.codefluententities.com/2013/03/14/manipulating-ntfs-alternate-data-streams-in-c-with-the-codefluent-runtime-client/
To me, file versions are metadata rather than another data stream, so I'd use extended file attributes in this case. An alternate data stream is perfectly suitable for, e.g., a file preview.
That said, extended attributes have drawbacks similar to those of ADS (both are usually lost during backup or archiving), but unlike ADS, they are supported on FAT32 volumes.
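To make the comparison concrete, here is a minimal Java sketch (the file name, attribute name, and version string are made up for illustration) using java.nio's UserDefinedFileAttributeView. Conveniently, the JDK implements this view with extended attributes on Linux and with NTFS alternate data streams on Windows, so the same code exercises both mechanisms discussed above:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.UserDefinedFileAttributeView;

public class FileVersionAttribute {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("document.txt"); // hypothetical file

        UserDefinedFileAttributeView view =
                Files.getFileAttributeView(file, UserDefinedFileAttributeView.class);
        if (view == null) {
            throw new UnsupportedOperationException(
                    "this file system has no user-defined attribute support");
        }

        // Write the version string as a user-defined attribute
        // (stored as an NTFS alternate data stream on Windows).
        view.write("version",
                ByteBuffer.wrap("2.1".getBytes(StandardCharsets.UTF_8)));

        // Read it back.
        ByteBuffer buf = ByteBuffer.allocate(view.size("version"));
        view.read("version", buf);
        buf.flip();
        System.out.println("version = " + StandardCharsets.UTF_8.decode(buf));
    }
}
```

As noted above, attributes stored this way are easy to lose: copying the file to a file system without support, or archiving it with a tool that ignores the streams, silently drops the version information.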

The best way to manage images when importing from CSV in PrestaShop

I want to know the best way to handle/manage our product images when we import products from a CSV in PrestaShop 1.6. I mean, does PrestaShop provide space to upload many images, or must we upload them to an external website (and if so, which one)?
Maybe this question is too general, but when I google it I don't get a clear answer. I appreciate your answers.
Newer PrestaShop versions use a new storage architecture for pictures. This system of image placement makes working with images much faster and keeps them in order. Images are stored in the /img/p folder, in subfolders that correspond to the image ID.
Basically, you avoid having 100,000 pictures in the same “/img/p” folder. Instead, the pictures are placed into subfolders within the “/img/p” directory (e.g. “/img/p/1/2/” for the image with ID 12, or “/img/p/7/6/5/4/7/” for the image with ID 76547).
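A small sketch (in Java, purely for illustration; PrestaShop itself does this in PHP) of how the subfolder path is derived from the image ID, one digit per nesting level:

```java
public class PrestashopImagePath {

    // Build the /img/p subfolder path for a given image ID:
    // each decimal digit of the ID becomes one nested folder.
    static String imageFolder(long imageId) {
        StringBuilder path = new StringBuilder("/img/p/");
        for (char digit : Long.toString(imageId).toCharArray()) {
            path.append(digit).append('/');
        }
        return path.toString();
    }

    public static void main(String[] args) {
        System.out.println(imageFolder(12));    // /img/p/1/2/
        System.out.println(imageFolder(76547)); // /img/p/7/6/5/4/7/
    }
}
```

Spreading the digits out this way keeps every directory small, so file-system lookups stay fast even with hundreds of thousands of images.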

Get All Documents from a Couchbase Bucket without a View or N1QL

I am implementing an Express web service using Couchbase as my database. To get all the documents stored in a bucket, I created a view using the web console.
My question is whether there is a way to do the same thing without creating a view or using N1QL.
I was looking at the Couchbase Server REST API, but I didn't find a way.
Thank you
You could design your schema around this. I am thinking specifically of a key pattern that would allow a bulk get of a range of docs.
Beyond that, there is no way without a view or N1QL.
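A minimal sketch of that idea with the 1.x Java client (the order:: key prefix, counter width, and connection details are all hypothetical; adapt them to your own key scheme):

```java
import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class BulkGetByKeyPattern {
    public static void main(String[] args) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://127.0.0.1:8091/pools")),
                "default", "");

        // Keys follow a predictable pattern ("order::000001" ... "order::000100"),
        // so the full key list can be generated instead of queried.
        List<String> keys = new ArrayList<>();
        for (int i = 1; i <= 100; i++) {
            keys.add(String.format("order::%06d", i));
        }

        Map<String, Object> docs = client.getBulk(keys);
        System.out.println("Fetched " + docs.size() + " documents");

        client.shutdown();
    }
}
```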
In Couchbase 3.0 and higher, you can also use DCP to stream all documents from a bucket. Currently the DCP protocol is only implemented in Java; you can see an example here: http://github.com/branor/couchbase-dcp-consumer
Note that there is a problem in the 1.1.0+ versions of the Couchbase core-io library, so you need to use version 1.1.0-dp (developer preview) to open a stream. DCP support in the SDK is still experimental, so I wouldn't use it in production yet.
Create a document that will hold the keys of all your documents.
While inserting a key-value pair into Couchbase, also append the key to that document.
E.g.:
<Key1, Value1>
<Key2, Value2>
.
.
.
<Keyx, Valuex>
<All_Keys, <Key1, Key2, Key3...Keyx>>
To get all the documents, just do a client.get("All_Keys") and then a client.getBulk() operation on the returned keys.
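Here is a rough sketch of that pattern with the 1.x Java client (key names and values are placeholders, and the exact SDK call names may differ slightly between client versions):

```java
import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.Arrays;
import java.util.Map;

public class IndexDocumentPattern {
    public static void main(String[] args) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://127.0.0.1:8091/pools")),
                "default", "");

        // Seed the index document with the first key, then append on every insert.
        client.set("Key1", 0, "Value1").get();
        client.set("All_Keys", 0, "Key1").get();

        client.set("Key2", 0, "Value2").get();
        client.append("All_Keys", ",Key2").get();

        // Fetch the index, split it into keys, and bulk-get every document.
        String index = (String) client.get("All_Keys");
        Map<String, Object> allDocs = client.getBulk(Arrays.asList(index.split(",")));
        System.out.println("Fetched " + allDocs.size() + " documents");

        client.shutdown();
    }
}
```

One caveat: if a write succeeds but the following append fails, the index silently misses a document, so you may want retry logic or a periodic reconciliation pass.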

iPad - how should I distribute offline web content for use by a UIWebView in my application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it, I have a number of options, but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and third options. If the content is created by you (like levels, etc.), then simply store it on the server, download it as a zip, and store it locally. Use Core Data to store an index of the things you've downloaded, like the path of the folder each bundle is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content, but rather structured data. And even if they can, I'd not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow large. If you rarely scan through the data, use the file system directly; it may prove faster and cheaper.
The index in Core Data could store all relevant data. You will have very easy and very quick access. Opening a piece of content will load it from the file system, which is quick, cheap, and doesn't strain the index.
Why would you do that? Redirecting your WebView to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control, then use the same approach as above but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a location common to all bundles. When downloading, fetch and parse the XML first, then download each resource one by one. This spares you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the bundle's folder structure there. This way the content will work online and offline without changes.
With a little effort, you could even keep track of file versions by including a version number in the XML file for each resource: if your content has been partially updated, only the files with changed version numbers have to be downloaded again.
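For example, a manifest per bundle could look like this (the element and attribute names are invented for illustration):

```xml
<bundle name="chapter-1">
  <resource path="index.html" version="3"/>
  <resource path="css/style.css" version="2"/>
  <resource path="img/figure1.png" version="1"/>
</bundle>
```

On an update, the client compares each resource's version against the copy it already has and re-downloads only the entries whose version number changed.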