First method of the Advanced Database Crawler called by Sitecore - Lucene

Currently I'm using Sitecore 6.6 with the Advanced Database Crawler (ADC).
My Sitecore environment is one CM server on-premises and one CD server in the cloud.
The ADC is working fine now.
Can I know which line of Sitecore code calls the ADC's methods?
What is the first method that Sitecore calls on the ADC?
And which line of configuration in Sitecore specifies which ADC method is called?

The ADC is a data access library, so Sitecore itself does not natively call it; you need to call it from your own code in your front-end components (Layouts, Sublayouts, WebControls). The ADC configuration defines the name of your search index and what to include in and/or exclude from it.
Here's an overview video of the ADC: http://sitecoreblog.alexshyba.com/2010/11/sitecore-searcher-and-advanced-database.html
The latest code base is no longer the ADC but rather called scSearchContrib: https://github.com/sitecorian/SitecoreSearchContrib
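To make that concrete, here is a minimal, hedged sketch of what "calling it from your own code" looks like on Sitecore 6.x using the classic Sitecore.Search API that the ADC builds on; the index name my_custom_index and the query term are placeholders, and ADC/scSearchContrib adds helper classes on top of this same pattern:

    using Sitecore.Search;

    public class ProductSearchExample
    {
        public void Run()
        {
            // "my_custom_index" is an assumption -- use the index name declared
            // in the <search> configuration section that the ADC extends.
            Index index = SearchManager.GetIndex("my_custom_index");

            using (IndexSearchContext context = index.CreateSearchContext())
            {
                // Nothing in Sitecore triggers this for you; this is the call your
                // sublayout / web control makes when it needs search results.
                SearchHits hits = context.Search("laptop");
                SearchResultCollection results = hits.FetchResults(0, 20);

                foreach (SearchResult result in results)
                {
                    // Map each hit back to a Sitecore item via its stored fields.
                }
            }
        }
    }

The crawler side (what gets written into the index) is driven entirely by the ADC configuration; the code above is only the read side that your own components are responsible for.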

Related

Nifi API - Update parameter context

We've created a parameter context within Nifi which is allocated to several process groups. We would like to update the value of one parameter within the parameter context. Is there any option to do this via the API?
The NiFi CLI from nifi-toolkit has commands for interacting with parameters, including one for set-param:
https://github.com/apache/nifi/tree/master/nifi-toolkit/nifi-toolkit-cli/src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/params
You could use that, or look at the code to see how it uses the API.
Also, anything you can do from the NiFi UI has to go through the REST API. So you can always open Chrome Dev Tools, take some action in the UI like updating a parameter, and then look at which calls are made.
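If you want to hit the REST API directly rather than the CLI, the rough flow is: read the parameter context, then submit an update request and poll it until it completes (the update is asynchronous because affected components have to be stopped and restarted). The sketch below only assumes the endpoint paths and payload shape; verify them against your NiFi version's REST API docs, or by watching the UI calls in dev tools as described above:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class NifiParamUpdateSketch
    {
        static async Task Main()
        {
            // NOTE: endpoint paths and JSON shape below are assumptions -- copy the
            // exact calls the NiFi UI makes (dev tools) for your version.
            var http = new HttpClient { BaseAddress = new Uri("https://nifi.example.com/nifi-api/") };
            string contextId = "your-parameter-context-id"; // placeholder

            // 1. Read the current parameter context (also gives you its current revision).
            string current = await http.GetStringAsync("parameter-contexts/" + contextId);
            Console.WriteLine(current);

            // 2. Submit an asynchronous update request carrying the changed parameter.
            var payload = new StringContent(
                "{\"revision\":{\"version\":1}," +
                "\"component\":{\"id\":\"" + contextId + "\"," +
                "\"parameters\":[{\"parameter\":{\"name\":\"myParam\",\"value\":\"newValue\"}}]}}",
                Encoding.UTF8, "application/json");
            HttpResponseMessage response =
                await http.PostAsync("parameter-contexts/" + contextId + "/update-requests", payload);
            Console.WriteLine(await response.Content.ReadAsStringAsync());

            // 3. Poll the returned update request until it reports completion, then
            //    delete it to clean up (omitted here).
        }
    }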

Apache UIMA: modify dictionary annotators at run-time

Apache UIMA dictionaries* are compile-time objects.
Dictionaries must be pre-defined.
Is there any mechanism to add entries to dictionaries at run-time?
Any pattern or workaround?
*as implemented by IBM Watson Explorer Content Analytics
Thank you
The dictionaries are loaded the first time the engine is used. What I did in a previous project was add a file watcher on the directory that contains all the Ruta logic (Ruta scripts and additional resources). On a change event, I started a new engine in a background thread. After the new engine was loaded I sent a dummy request through it (so everything was initialized), and once that completed I replaced the live engine with the new one.
With that approach we had a system where we could do live updates to the ruleset (scripts, configuration and dictionary entries).
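For what it's worth, here is a minimal sketch of that watch-rebuild-swap pattern. The answer's stack is Java with UIMA Ruta, so this C# version is illustration only, and the RulesEngine type, the rules directory, and the warm-up call are all hypothetical stand-ins:

    using System.IO;
    using System.Threading.Tasks;

    class HotSwappingAnalysisHost
    {
        // The live engine reference; replaced wholesale when the rules change.
        private volatile RulesEngine _live = BuildEngine("rules");

        public void WatchRules()
        {
            var watcher = new FileSystemWatcher("rules") { IncludeSubdirectories = true };
            watcher.Changed += (sender, args) => Task.Run(() =>
            {
                // Build the replacement engine off to the side (slow: scripts, dictionaries).
                RulesEngine fresh = BuildEngine("rules");
                // Dummy request so all lazy initialization happens before it goes live.
                fresh.Analyze("warm-up text");
                // Atomic reference swap; in-flight requests finish on the old engine.
                _live = fresh;
            });
            watcher.EnableRaisingEvents = true;
        }

        public string Analyze(string text) => _live.Analyze(text);

        private static RulesEngine BuildEngine(string rulesDir) => new RulesEngine(rulesDir);
    }

    // Hypothetical stand-in for the real analysis engine (UIMA/Ruta in the answer).
    class RulesEngine
    {
        public RulesEngine(string rulesDir) { /* load scripts, configuration, dictionaries */ }
        public string Analyze(string text) { return text; /* placeholder */ }
    }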

How to save PDF from HTML in Azure Functions

I'm developing an application which will have a web crawler for some sites.
The application will trigger an Azure Function by URL, where the crawler will start its work.
So far, so good, but we'll have to save some evidence that the crawler passed through the site. We're thinking of saving a PDF file with a screenshot of the page the crawler visited, but since Azure Functions doesn't have GDI+, it won't work with Selenium or PhantomJS.
A different approach could be to download the HTML content and somehow save that HTML string (with all its JS and CSS dependencies) as a PDF file.
I'd like a library that works with Azure Functions to take a screenshot of a URL (or HTML string) and save it to PDF.
Thanks.
Unfortunately, the App Service sandbox whose rules Azure Functions live by is going to block most GDI+ API calls. We have had success with one third-party library (ByteScout) for some PDF generation needs, but I think in your case that type of operation is explicitly blocked. You can find more details here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#win32ksys-user32gdi32-restrictions
There is no workaround that I'm aware of, because at the end of the day most of these solutions rely on GDI+ in the underlying OS (directly or indirectly).
Your only real option is to offload that workload to a virtual machine without the restriction on the API. That could take the form of a dedicated VM or something like an Azure Container Instance whose lifecycle you can manage more dynamically as needed. We do something similar today: a VM monitors a message queue, and our Azure Function drops the request into the queue for processing.
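If it helps, here's a minimal sketch of that hand-off, assuming the C# in-process Functions model with the Storage queue output binding; the queue name pdf-requests is a placeholder, and the VM- or container-side worker that runs the headless browser and renders the PDF is not shown:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Extensions.Logging;

    public static class EnqueuePdfRequest
    {
        // HTTP trigger: the crawler calls this URL; the function only queues the work.
        [FunctionName("EnqueuePdfRequest")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
            [Queue("pdf-requests")] IAsyncCollector<string> pdfQueue, // Storage queue output binding
            ILogger log)
        {
            // The body is expected to carry the target URL (or HTML) to render.
            string payload = await new StreamReader(req.Body).ReadToEndAsync();
            await pdfQueue.AddAsync(payload);

            log.LogInformation("Queued PDF request.");
            // 202: accepted for asynchronous processing by the VM / container worker.
            return new AcceptedResult();
        }
    }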

How can I run Zend Framework code alongside legacy (non-ZF) code on the same server on the same HTTP port?

I have a large codebase that I am trying to eventually convert to Zend-Framework-powered stack.
At times I write new modules where I have a choice:
keep writing them using the legacy routing/initialization/etc.
somehow figure out how to use ZF for the new module only, while the rest of the legacy code works "as before"
Is this possible?
How?
To give you an idea, the code I have now uses multiple proprietary routing files, whereas everything in ZF goes through one single router file.
So legacy code is called like this, for example:
http://legacy:80/index.php?route=product
Maybe similar to zend framework 2 in a subdirectory
Zend Middleware approach
I was able to follow https://docs.zendframework.com/zend-mvc/middleware/ and implement an IndexMiddleware class. I can see that the IndexMiddleware::process() method is being called, but I am not certain how to go further and how to get my legacy web application to return data as before.
MiddlewareListener.
Legacy App - index.php
$module = filter($_GET['p']); // "filter" is the app's own sanitizing helper
if (!empty($module)) {
    $inc = "portal/{$module}.php"; // double quotes so {$module} actually interpolates
    require($inc);                 // run the legacy module
}
There are many possible solutions here... It depends on how much new code you have and which addresses you want to handle.
Long story short, you could work at the server level (aliases, rewrites, etc.) or at the PHP code level.
Something you could do is use the index.php from the Zend Skeleton, for instance, and the default URL routing through index.php. Then look at the application lifecycle, especially the route event. I believe that's a good point to add a listener that would dispatch the old application. You can find a number of listeners in the Zend MVC code to base yours on (look at the middleware one, for instance).

Gaining Root Access w/ Elevated Helper & SMJobBless

I'm working on something that needs to install files periodically into a folder in /Library.
I understand that in the past I could have used one of the Authenticate methods but those have since been deprecated in 10.7.
What I've understood from my reading so far:
I should create a helper that somehow gets authenticated and have that helper do all of the moving tasks. I've taken a look at some of the sample code, including some involving XPC and one called Elevator but I'm a bit confused.
A lot of it seems to deal with setting up some sort of client / server model but I'm not sure how this would translate into me actually installing my files into the correct directories. Most of the examples are just passing strings.
My question simply: How can I create my folder in /Library programmatically and periodically write files to it while only prompting the user for a password ONCE and never again? I'm really not sure how to approach this and there doesn't seem to be much documentation.
You are correct that there isn't much documentation for this. You'll basically write another app, the helper app, which gets installed with SMJobBless(). Not surprisingly, the tricky part here is the code signing. The least obvious part for me was that the SMAuthorizedClients and SMPrivilegedExecutables entries in the Info.plist files of each app depend on the identity/certificate you used to sign the apps with. There is also a trick with the compiler/linker to get the Info.plist compiled into the helper tool, which is a single executable file rather than a bundle.
Once you get the helper app up and running then you have to devise a way to communicate with it since these are two different processes. XPC is one option, perhaps the easiest. XPC is typically used with server processes, but what you are using here is the communication side of XPC only. Basically it passes dictionaries back and forth between the two apps. Create a standard format for the dictionary. I used @"action", @"source", and @"destination" with 3 different action values, @"filemove", @"filecopy", and @"makedirectory". Those are the 3 things that my helper app can do and I can easily add more if necessary.
The helper app will basically set up the XPC connection and event handler stuff and wait for a connection and commands. The commands will just be a dictionary, so you check for the appropriate keys/values and do whatever.
I can provide more details and code if you need more help, but this question is 9 months old so I don't want to waste time giving you details you've already figured out.