I am working in Ektron 8.6.
I need to migrate content from Vignette CMS to Ektron. Is there any option in Ektron to automate the content migration process rather than doing it manually (creating content by fetching the HTML from the other CMS)?
You need to use the Ektron API to add content. You will have to write that code yourself. The folders you use to organize your content in Ektron will be different from anyone else's. Also, the way content is organized in Vignette is unique to your site, so the import code has to be custom.
I have found that if you have fewer than 400 pieces of content it is easier to do it manually.
My employer has recently switched its CMS to AEM (Adobe Experience Manager).
We store a large amount of documentation, and our site users need to be able to find the information contained within those documents, some of which are hundreds of pages long.
Adobe is, disappointingly, saying their search tool will not search PDFs. Is there any format for producing or saving PDFs that allows the content to be indexed?
I think you need to configure an external index/search tool like Apache Solr and use a REST endpoint to sync DAM data and fetch results for queries.
Out of the box, AEM supports indexing most binary formats without needing Solr. You only need Solr in advanced scenarios, such as exposing search outside of the authoring environment or handling millions of assets.
When an asset is uploaded to the AEM DAM, it goes through the DAM Asset Update workflow, which includes a metadata processor step. That step extracts content from the asset, so "binary" assets like Word documents, Excel files and PDFs become searchable. As long as you have the DAM Asset Update workflow enabled, you will be fine.
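For reference, here is a minimal sketch of querying that extracted text with a plain JCR full-text query. It assumes you already have a javax.jcr.Session; the search term is a placeholder, and AEM's QueryBuilder API would work equally well:

```java
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public class DamFulltextSearch {

    // Returns the paths of DAM assets whose extracted text matches the term.
    public static void search(Session session, String term) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();

        // CONTAINS() matches the text that was extracted from the binary
        // during the DAM Asset Update workflow.
        String sql2 = "SELECT * FROM [dam:Asset] AS a "
                    + "WHERE ISDESCENDANTNODE(a, '/content/dam') "
                    + "AND CONTAINS(a.*, $term)";

        Query query = qm.createQuery(sql2, Query.JCR_SQL2);
        query.bindValue("term", session.getValueFactory().createValue(term));

        QueryResult result = query.execute();
        for (RowIterator rows = result.getRows(); rows.hasNext(); ) {
            Row row = rows.nextRow();
            System.out.println(row.getPath());
        }
    }
}
```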
This is my first time on Stack Overflow. Thanks to all for providing valuable information and helping one another.
I am currently working on Apache Solr 7. There is a POC I need to complete and I have little time, so I am putting this question here. I have set up Solr on my Windows machine. I have created a core and uploaded a PDF document using /update/extract from the Admin UI. After uploading, I can see the metadata of the file if I query from the Admin UI using the query button. I was wondering if I can get the actual content of the PDF as well. I can see there is one tlog file generated under /data/tlog/tlog000... with the raw PDF data, but not the actual file.
So the questions are:
1. Can I get the PDF content?
2. Does Solr store the actual file somewhere?
   a. If it does, where does it store it?
   b. If it does not, is there a way to store the file?
Regards,
Munish Arora
Solr will not store the actual file anywhere.
Depending on your configuration, it can store the extracted content, though.
Using the extract request handler, Apache Solr relies on Apache Tika [1] to extract the content from the document [2].
So you can search and return the content of the PDF, plus a lot of other metadata, if you like.
[1] https://tika.apache.org/
[2] https://lucene.apache.org/solr/guide/6_6/uploading-data-with-solr-cell-using-apache-tika.html
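To make that concrete, here is a minimal SolrJ sketch of posting a PDF through /update/extract and reading the extracted text back. The core name, document id and the field name extracted_text_txt are assumptions; map the content to whichever indexed and stored text field your schema provides:

```java
import java.io.File;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class PdfExtractExample {
    public static void main(String[] args) throws Exception {
        SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

        // Send the PDF through the extract handler; Tika runs on the Solr side.
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("manual.pdf"), "application/pdf");
        req.setParam("literal.id", "manual-1");
        // Map Tika's extracted body text into a stored field so it can be returned.
        // "extracted_text_txt" is an assumed field name; adjust to your schema.
        req.setParam("fmap.content", "extracted_text_txt");
        req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        solr.request(req);

        // Read the stored text back: the file itself is not kept, only what Tika extracted.
        SolrQuery query = new SolrQuery("id:manual-1");
        query.setFields("id", "extracted_text_txt");
        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("extracted_text_txt"));
        }
        solr.close();
    }
}
```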
What is the best way to read and write Hippo content programmatically? I want to build a migration tool that writes some pages and binary files to Hippo. I am currently using the JCR API to create nodes in the repository; is there a better approach?
Have you tried:
http://import-tool.forge.onehippo.org/
(you can check out the source code and use it as a reference if needed)
Another one you could check is:
https://forge.onehippo.org/svn/restimporter/
(no documentation other than:
https://forge.onehippo.org/svn/restimporter/trunk/README.txt
)
hth
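Since the question already mentions using the JCR API directly, here is a minimal plain-JCR sketch of writing one binary file into the repository. The folder path, method name and parameters are assumptions, and note that Hippo's own document model (hippo:handle, gallery/asset types, etc.) adds requirements on top of plain nt:file nodes:

```java
import java.io.FileInputStream;
import java.util.Calendar;

import javax.jcr.Binary;
import javax.jcr.Node;
import javax.jcr.Session;

public class BinaryImport {

    // Stores a local file as an nt:file node under an existing repository folder.
    public static void importFile(Session session, String folderPath, String fileName,
                                  String localPath, String mimeType) throws Exception {
        Node folder = session.getNode(folderPath);   // e.g. "/content/assets/migrated" (assumed path)
        Node file = folder.addNode(fileName, "nt:file");
        Node resource = file.addNode("jcr:content", "nt:resource");

        try (FileInputStream in = new FileInputStream(localPath)) {
            Binary binary = session.getValueFactory().createBinary(in);
            resource.setProperty("jcr:data", binary);
        }
        resource.setProperty("jcr:mimeType", mimeType);
        resource.setProperty("jcr:lastModified", Calendar.getInstance());

        session.save();   // commit the new nodes to the repository
    }
}
```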
I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, which option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and third options. If the content is created by you (levels, etc.), simply store it on the server, download it as a ZIP and store it locally. Use Core Data to store an index of the things you've downloaded, such as the path of the folder each item is stored in and its name/origin/etc., but not the raw data. Databases are meant to hold structured data, not massive amounts of raw content. And even if they can, I wouldn't do it.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow large. If you only rarely scan through the data, use the file system directly; it may prove faster and cheaper.
The index in Core Data can store all the relevant data, and you will have very easy and very quick access to it. Opening a piece of content loads it from the file system, which is quick, cheap and doesn't strain the index.
Why would you do that? Redirecting your web view to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control over the content, use the same approach as above but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to all bundles. When downloading, download and parse the XML first, then download each resource one by one. This will spare you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the bundle's folder structure there. This way the content will work both online and offline without changes.
With a little effort, you could even keep track of file versions by including version numbers in the XML file for each resource, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again (a sketch of such a manifest is shown below).
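For illustration, such a manifest might look like the following; the element and attribute names are just an example, not a required format. The client compares each resource's version number against its local copy and only re-downloads entries whose number has changed:

```xml
<!-- Hypothetical bundle manifest; element/attribute names are illustrative only. -->
<bundle name="user-guide" version="3">
  <resource path="index.html"        version="3"/>
  <resource path="css/style.css"     version="1"/>
  <resource path="images/diagram.png" version="2"/>
</bundle>
```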
I'm working with a CMS and need to import data into it using typical HTML forms. The data itself is in CSV files, with one page per row. The CMS is such that importing directly into the database isn't possible due to the complexity of the design. It's pretty important that I "fake" normal user interaction, because the CMS does a lot of background work that's crucial for the import.
Basically, for each row in the CSV file, I need to copy a CSV column into an HTML text field, or select a checkbox, or click a certain button. One major issue is mapping the data in the CSV to actions in the CMS. So if one column contains the string 'foobar', it really means "set the firstName dropdown widget to 'foobar'".
Is there a tool to automate this? I've been looking at AutoHotkey, Selenium, Web-Harvester and many other tools, but I'm not convinced they are the right tools. The main problem is being able to interact with the HTML pages in an easy way.
There are a bunch of tools that can do that. Visual Studio Team Test Edition will do this by recording your actions and allowing you to modify the resulting C# program. You can then read from your CSV and replay the actions in a loop.
You can also do this relatively easily, if your interface doesn't change much, using the HTML Agility Pack.
Also, I've written plain C# programs (HttpWebRequest and Regex) to do this, and it's not very difficult either.
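As a point of comparison with the C# options above, here is a rough sketch of the browser-driving approach the question mentions, using Selenium's Java bindings. The URL, form field names and CSV layout are all assumptions; the point is just the per-row loop that fills the form so the CMS still runs its background work:

```java
import java.io.BufferedReader;
import java.io.FileReader;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.Select;

public class CsvFormImport {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try (BufferedReader csv = new BufferedReader(new FileReader("pages.csv"))) {
            String line;
            while ((line = csv.readLine()) != null) {
                // Naive split; use a real CSV parser if your data contains quoted commas.
                String[] cols = line.split(",");

                // Hypothetical CMS edit form and field names.
                driver.get("http://cms.example.com/admin/page/new");
                driver.findElement(By.name("title")).sendKeys(cols[0]);
                driver.findElement(By.name("body")).sendKeys(cols[1]);

                // Map a CSV value onto a dropdown, as described in the question.
                new Select(driver.findElement(By.name("firstName")))
                        .selectByVisibleText(cols[2]);

                // Submitting through the form lets the CMS do its usual processing.
                driver.findElement(By.cssSelector("input[type=submit]")).click();
            }
        } finally {
            driver.quit();
        }
    }
}
```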