IBM Connections - Wikis and Files extraction from database - sql

I need to extract the wiki HTML content for a specific community, and I have access only to the database.
Starting from the wikis.library table and joining it with wikis.media, I'm able to retrieve the data and summary, but not the HTML content.
Where is the HTML content of a wiki page saved?
Thanks.

It's saved on the file share. The file share is configured in the WebSphere environment variable WIKIS_CONTENT_DIR. Read the revision from the MEDIA_REVISION table and derive the file name of the page on the file share from the MEDIA_FILE_ID field.
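For illustration, a rough SQL sketch of that lookup. The join columns (LIBRARY_ID, MEDIA_ID) and the LABEL filter are assumptions for this example; verify them against the actual WIKIS schema of your Connections version.

    -- Sketch only: join columns and the LABEL filter are assumed names.
    SELECT l.LABEL         AS wiki_label,
           m.LABEL         AS page_label,
           r.MEDIA_FILE_ID AS file_on_share
    FROM   WIKIS.LIBRARY        l
    JOIN   WIKIS.MEDIA          m ON m.LIBRARY_ID = l.LIBRARY_ID
    JOIN   WIKIS.MEDIA_REVISION r ON r.MEDIA_ID   = m.MEDIA_ID
    WHERE  l.LABEL = 'my-community-wiki';
    -- The HTML file sits under <WIKIS_CONTENT_DIR>, named after MEDIA_FILE_ID;
    -- the exact directory layout depends on the installation.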
You can probably also use the Wiki API to retrieve the content https://ds_infolib.hcltechsw.com/ldd/appdevwiki.nsf/xpAPIViewer.xsp?lookupName=API+Reference#action=openDocument&res_title=Retrieving_a_wiki_page_ic50&content=apicontent

Azure Data Factory HTTP Connector Data Copy - Bank Of England Statistical Database

I'm trying to use the HTTP connector to read a CSV of data from the BoE statistical database.
Take the SONIA rate for instance.
There is a download button for a CSV extract.
I've converted this to the following URL which downloads a CSV via web browser.
[https://www.bankofengland.co.uk/boeapps/database/_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y][1]
Putting this in the Base URL it connects and pulls the data.
I'm trying to split this out so that I can parameterise some of it.
Base
https://www.bankofengland.co.uk/boeapps/database
Relative
_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y
It won't fetch the data; however, when it's all combined in the base URL, it does.
I've tried to add a "/" at the start of the relative URL as well and that hasn't worked either.
According to the documentation, ADF inserts the "/" for you: "[Base]/[Relative]".
Does anyone know what I'm doing wrong?
Thanks,
Dan
[1]: https://www.bankofengland.co.uk/boeapps/database/_iadb-fromshowcolumns.asp?csv.x=yes&Datefrom=01/Dec/2021&Dateto=01/Dec/2021 &SeriesCodes=IUDSOIA&CSVF=TN&UsingCodes=Y
I don't see a way you could download that data directly as a CSV file. The data seems to be intended to be copied manually from the site, using their Save as option.
They have used read-only blocks and hidden elements, so I doubt there is any easy way or out-of-the-box method within the ADF web activity to help with this.
You can just manually copy-paste the data into a CSV file.

Save pdf file loaded in iFrame to database after edit Oracle APEX

I am trying to save a PDF file that is loaded in an iframe after signing it. I am using PSPDFKit standalone in Oracle APEX (version 190200).
I need to save it to the database instead of downloading the file.
How can I get the file and save it to the database through an AJAX callback?
You can use instance.exportPDF() to get the PDF as an ArrayBuffer. Then you can convert the ArrayBuffer to a Blob and send it to the server. Hopefully, this should solve your issue.
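As a rough sketch (not an official PSPDFKit or APEX recipe): the exported ArrayBuffer can be base64-encoded and posted to an APEX AJAX callback. The process name SAVE_SIGNED_PDF and the chunk size are placeholders.

    // Sketch only: "SAVE_SIGNED_PDF" is a hypothetical AJAX callback process name.
    instance.exportPDF().then(function (buffer) {
      var blob = new Blob([buffer], { type: "application/pdf" });
      var reader = new FileReader();
      reader.onload = function () {
        // reader.result is a data URL; keep only the base64 payload
        var base64 = reader.result.split(",")[1];
        // APEX caps each fNN array element at 32k, so send the data in chunks
        var chunks = base64.match(/.{1,30000}/g);
        apex.server.process("SAVE_SIGNED_PDF", { f01: chunks }, {
          success: function () { console.log("PDF saved"); },
          error: function (xhr, status, err) { console.error(err); }
        });
      };
      reader.readAsDataURL(blob);
    });

On the PL/SQL side, one common approach is for the callback to concatenate apex_application.g_f01 back into a CLOB, convert it with apex_web_service.clobbase642blob, and insert the resulting BLOB into your table.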
I would suggest you reach out to our support directly. We offer blazing-fast assistance, and questions are handled directly by the Web team: https://pspdfkit.com/support/request/.

Apache Folder Index Description Field

In an Apache Index file listing, there is a description field (along with Name, Last Modified and Size). What should or could populate this column of data?
More information:
On an Apache web server, I can enable a setting called "Apache Module mod_autoindex"
When this setting is enabled, if I visit a folder in a browser, and that folder does not have an index.html file, Apache will display the files and folders in that folder. The interface is pretty basic, but provides useful information about the files on a server.
File/folder information is displayed in a table with 4 columns (presumably generated by Apache). These columns are: Name, Last Modified, Size and Description.
Name, Last Modified and Size are self-explanatory. The Description column, however, is always empty. I was curious what could or should show up here. I had a hard time finding documentation on it.
A colleague of mine here found what I needed.
The description column on the Apache File Listing index view is populated using data you can create here: http://httpd.apache.org/docs/current/mod/mod_autoindex.html#adddescription
Edit: I'll also add that this documentation on setting file index formatting and descriptions via the .htaccess file is really helpful too: https://perishablepress.com/better-default-directory-views-with-htaccess/
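For reference, a minimal hypothetical example of the directive in .htaccess (or the vhost config); the file names and descriptions are made up:

    # Requires mod_autoindex; descriptions appear in the Description column.
    Options +Indexes
    IndexOptions FancyIndexing
    AddDescription "Quarterly sales report (PDF)" report-q3.pdf
    AddDescription "Source tarball" *.tar.gz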
Take a look at my website: https://wrcraig.com/ApacheDirectoryDescriptions. It goes beyond the default directory description, providing a spreadsheet to assist in creating detailed descriptions and exporting them in FancyIndex/AddDescription format for inclusion in .htaccess.
It also provides a menu-driven Bash-scripted alternative that uses the FancyIndex descriptive data above (automatically adding A/V durations) to recursively populate index.html while retaining the security features of .htaccess.
The site has examples of the input spreadsheet and both the FancyIndex output and the optional BASH scripted output.

Can Apache Solr store the actual files which are uploaded to it?

This is my first time on Stack Overflow. Thanks to all for providing valuable information and helping one another.
I am currently working on Apache Solr 7. There is a POC I need to complete, and as I am short on time I am putting this question here. I have set up Solr on my Windows machine. I have created a core and uploaded a PDF document using /update/extract from the Admin UI. After uploading, I can see the metadata of the file if I query from the Admin UI using the query button. I was wondering if I can get the actual content of the PDF as well. I can see that one tlog file gets generated under /data/tlog/tlog000... with raw PDF data, but not the actual file.
So the questions are:
1. Can I get the PDF content?
2. Does Solr store the actual file somewhere?
a. If it does, where does it store it?
b. If it does not, is there a way to store the file?
Regards,
Munish Arora
Solr will not store the actual file anywhere.
Depending on your config, it can store the binary content though.
When using the extract request handler, Apache Solr relies on Apache Tika [1] to extract the content from the document [2].
So you can search and return the content of the PDF, and a lot of other metadata if you like (a minimal example follows the links below).
[1] https://tika.apache.org/
[2] https://lucene.apache.org/solr/guide/6_6/uploading-data-with-solr-cell-using-apache-tika.html
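As an illustration (the core name, file path and the content_txt field name are assumptions for this sketch), you can map the extracted text to a field of your own when posting through the extract handler:

    # Post a PDF via the extract handler and map Tika's "content" to content_txt,
    # which must be defined as stored="true" in the schema to come back in results.
    curl "http://localhost:8983/solr/mycore/update/extract?literal.id=doc1&fmap.content=content_txt&commit=true" \
         -F "file=@/path/to/document.pdf"

In Solr 7's default configset the extracted text is typically mapped to the catch-all _text_ field, which is indexed but not stored, which is why a query shows only the metadata.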

Cloud file storage with file tagging and search by tags/filename

My project needs to meet the following requirements.
store a large number of files for a reasonable price
tag individual files with custom tags
have an API method to search files by name (contains) and tags (exact)
do it all via a JS SDK (keep the project serverless)
I did some work with Amazon S3 and it turned out that:
there is no search method in the JS SDK: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjectsV2-property
listObjectsV2 accepts a Prefix parameter (i.e. the file name starts with), so there is no way to search by "contains"
there is no parameter to search by tag at all; I can only get tags for an individual file with getObjectTagging (a sketch of this per-object workaround follows below)
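To make the limitation concrete, here is a sketch of that brute-force workaround with the AWS SDK for JavaScript (v2); the bucket and tag names are placeholders. It has to call getObjectTagging once per object, so it cannot reasonably serve as a search feature:

    // Sketch: scan every object and keep the keys whose tags match.
    const AWS = require("aws-sdk");
    const s3 = new AWS.S3();

    async function findByTag(bucket, tagKey, tagValue) {
      const matches = [];
      let token;
      do {
        const page = await s3.listObjectsV2({ Bucket: bucket, ContinuationToken: token }).promise();
        for (const obj of page.Contents || []) {
          // One extra request per object just to read its tags
          const tags = await s3.getObjectTagging({ Bucket: bucket, Key: obj.Key }).promise();
          if (tags.TagSet.some(t => t.Key === tagKey && t.Value === tagValue)) {
            matches.push(obj.Key);
          }
        }
        token = page.NextContinuationToken;
      } while (token);
      return matches;
    }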
So the question is: what stable service can I use for file storage with the functionality described above?
Azure? Google Cloud? Backblaze B2? Something else?
Thanks!
If you use Azure blob storage, you can use Azure Search blob indexer to index both the metadata and textual content of your blobs. For a walkthrough of setting this up, see Build and query your first Azure Search index in the portal.
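As a rough sketch of the setup (service name, keys, container and index names are placeholders, and the api-version may differ): you create a blob data source and an indexer that writes into an existing index.

    # Create the blob data source
    POST https://<your-service>.search.windows.net/datasources?api-version=2020-06-30
    api-key: <admin-key>
    Content-Type: application/json

    {
      "name": "files-datasource",
      "type": "azureblob",
      "credentials": { "connectionString": "<storage-connection-string>" },
      "container": { "name": "files" }
    }

    # Create the indexer that fills the target index
    POST https://<your-service>.search.windows.net/indexers?api-version=2020-06-30
    api-key: <admin-key>
    Content-Type: application/json

    {
      "name": "files-indexer",
      "dataSourceName": "files-datasource",
      "targetIndexName": "files-index"
    }

The indexer populates fields such as content (the extracted text) and metadata_storage_name in the target index, and custom blob metadata can be mapped to index fields as well, which covers the filename and tag search described above.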