What is the URL query parameter for EQL in Endeca 5.x?

I have to generate an Endeca URL that includes an EQL (Nrs) parameter, e.g.:
N=200590+82&Nrs=collection()/record[p_MyProperty<=100+or+p_MyOtherProperty>200]
I've tried it on Endeca 6.0 and it works perfectly, but our target system is 5.0, which completely ignores the Nrs parameter: adding or removing it leaves the result set unchanged.
Does 5.x use a different syntax for EQL? Or is it a feature introduced in 6.0? Or could this feature be turned off in our Endeca instance?

The Nrs parameter is available in Endeca 5.1.4.

I have a feature-support document that says EQL was introduced in version 5.1.1 and is specifically not supported in 5.0.
You could try using features that did work in 5.0.
You could also approximate the intended effect with record filters: pre-calculate the ranges you need at index time and select them via a record filter.
Documentation for record filters:
https://docs.oracle.com/cd/E29584_01/webhelp/mdex_basicDev/src/rbdv_urlparams_nr.html
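
As a minimal sketch of that record-filter workaround, assuming an indexing step has already materialized hypothetical bucket properties (p_MyPropertyBucket, p_MyOtherPropertyBucket) whose values encode the pre-calculated ranges, the URL could be built like this in Python (the host and value formats are illustrative only):

from urllib.parse import urlencode

# Hypothetical bucket properties: the pipeline would have to assign each
# record a value such as "0-100" derived from p_MyProperty at index time.
params = {
    "N": "200590 82",  # existing navigation state; urlencode renders the space as '+'
    "Nr": "OR(p_MyPropertyBucket:0-100,p_MyOtherPropertyBucket:200-plus)",
}
print("http://host:port/controller?" + urlencode(params))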

Superset: I don't have a 'parameters' button

I have used Google Translate, so excuse me if something looks weird.
I have Superset on an Ubuntu virtual machine.
I downloaded open-source Superset and I am running version 1.1.0.
I am trying to run SQL queries with parameters so that I can create parameterized charts; however, in the SQL Lab section I do not see the 'parameters' button.
I have tried to create a parameter in the configuration of a dataset, in the 'template parameters' section, with the syntax:
{"parameter": "value"}
and then use the parameter as follows: {{parameter}}
but it throws an error.
I have set "ENABLE_TEMPLATE_PROCESSING" to True in the config.py file.
I have also modified the superset_config.py file as per the official Superset documentation.
Do I have to configure something else?
Could it be that my changes to the configuration files are not being applied?
I have re-run the following commands to see if they would help with the latter:
superset db upgrade
superset init
We just had the same issue.
Yes, this is turned off by default, as mentioned here (11172):
https://github.com/apache/superset/blob/master/UPDATING.md
In our case we were adding ENABLE_TEMPLATE_PROCESSING in the wrong place in the superset_config.py file, which meant the 'parameters' option did not appear.
It has to be added to the FEATURE_FLAGS dictionary for it to work as designed, like this:
FEATURE_FLAGS = {"ALERT_REPORTS": True, "ENABLE_TEMPLATE_PROCESSING": True}
Probably you don't want parameters in SQL Lab, but in the Dashboard you are using to view the results of your SQL Lab query. In that case, build the query in SQL Lab without parameters, save it as a virtual dataset, and then create a Chart and a Filter Box over that virtual dataset and use them in the Dashboard.
But if you really need to parameterize your query, look for information about Jinja templates in the documentation and on the Superset Users channel on YouTube. Beware: the concept of a virtual dataset was introduced quite recently, so information a few months old is a bit different. The same goes for Jinja templates; they work a bit differently now, I think.
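
For reference, the feature is plain Jinja templating underneath; here is a minimal sketch of the idea using the jinja2 library directly (the table and parameter names are hypothetical, and this is not Superset's actual internals):

from jinja2 import Template

# Hypothetical query; in Superset the value would come from the dataset's
# "template parameters" JSON, e.g. {"state": "CA"}.
sql = "SELECT * FROM my_table WHERE state = '{{ state }}'"
print(Template(sql).render(state="CA"))  # SELECT * FROM my_table WHERE state = 'CA'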

How can I get more results from AnzoGraph?

I am using AnzoGraph with SPARQL over HTTP via RDFlib. I do not specify any limit in my query, and still I only receive 1000 solutions. The same seems to happen in the web interface.
If I fire the same query on other triple stores with the same data, I do get all results.
Moreover, if I fire this query using their command-line tool on the same machine as the database, I do get all results (millions). Maybe it is using a different protocol with the local database. If I specify the hostname and port explicitly on the command line, I get 1030 results...
Is there a way to specify that I want all results from AnzoGraph over HTTP?
I have found the service_graph_rowset_limit setting and changed its value to 100000000 in both config/settings_standalone.conf and config/settings.conf (and restarted the database), but to no avail.
Let me start by thanking you for pointing this issue out.
You have identified a regression of a fix that was intended to protect the web UI from freezing on unbounded result sets, but it affected regular SPARQL endpoint users as well.
Our Anzo customers do not see this issue, as they use the internal gRPC API directly.
We have produced a fix that will be in our upcoming AnzoGraph 2.4.0 and in our upcoming 2.3.2 patch-release set of images.
Older releases will receive this fix as well (when we have a shipment vehicle).
If it is urgent for you, I can provide you with a point fix (a root.war file).
What exact image are you using?
Best - Frank
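
In the meantime, one client-side workaround is to page through the results explicitly with LIMIT/OFFSET, staying under the server's cap. A sketch using SPARQLWrapper follows; the endpoint URL and query are hypothetical, and it assumes the cap applies per response rather than per query:

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint; adjust host and port to your AnzoGraph instance.
endpoint = SPARQLWrapper("http://localhost:7070/sparql")
endpoint.setReturnFormat(JSON)

page_size, offset, rows = 1000, 0, []
while True:
    endpoint.setQuery(
        "SELECT ?s ?p ?o WHERE { ?s ?p ?o } "
        "ORDER BY ?s LIMIT %d OFFSET %d" % (page_size, offset)
    )
    page = endpoint.query().convert()["results"]["bindings"]
    rows.extend(page)
    if len(page) < page_size:  # a short page means we have reached the end
        break
    offset += page_size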

Does DataStax DSE 5.1 Search support Solr local parameters as used in facet.pivot?

I understand that DSE 5.1 runs Solr version 6.0.
I am trying to use the facet.pivot feature with a Solr local parameter, but it does not seem to be working.
My data is as follows: four simple fields.
What I need is to group the results by the name field so as to get sum(money) for each year. I believe facet.pivot with a local parameter can solve this, but it is not working with DSE 5.1.
From the Solr documentation, "Combining Stats Component With Pivots":
In addition to some of the general local parameters supported by other types of faceting, a stats local parameter can be used with facet.pivot to refer to stats.field instances (by tag) that you would like to have computed for each Pivot Constraint.
Here is what I want to use.
stats=true&stats.field={!tag=piv1}money&facet=true&facet.pivot={!stats=piv1}name
If you're trying to execute these queries from solr_query within CQL, the stats component is not supported. We keep the faceting to simple parameters, as the purpose is to provide more GROUP BY-type functionality in solr_query, not analytics.
With DSE 5.1 (Solr 6.0.1), and the desire for analytics with Solr, use Solr's HTTP JSON Facet API. It has replaced the stats component and provides what you are looking for in a more robust fashion.
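
As a hedged sketch of such a request, here is what a JSON Facet API call nesting a sum(money) aggregation under name and year buckets might look like (the host, the core name ks.tbl, and the field names are assumptions based on the question):

import requests

# Hypothetical DSE search core (keyspace.table) and host.
url = "http://localhost:8983/solr/ks.tbl/select"
payload = {
    "query": "*:*",
    "limit": 0,  # we only want the facets, not the matching documents
    "facet": {
        "by_name": {
            "type": "terms",
            "field": "name",
            "facet": {
                "by_year": {
                    "type": "terms",
                    "field": "year",
                    "facet": {"total_money": "sum(money)"},
                }
            },
        }
    },
}
print(requests.post(url, json=payload).json())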

Sitefinity and "The database schema version is higher than the running Sitefinity version. Downgrade is not allowed"

Has anyone come across a situation where they are unable to start a Sitefinity site due to this error?
The database schema version (6421) is higher than the running Sitefinity version (6410). Downgrade is not allowed
I've searched for a decent answer but haven't been able to find one yet. Any help would be appreciated.
It means that you ran build 6421 against this database (and possibly upgraded it). Now you are trying to run a lower build, 6410, which is not allowed by default.
You can override this behavior by setting ignoreDowngradeExceptions="true" on the connection string, although you should be careful using this approach in a production environment.
In the case of these two builds, overriding the default behavior will most probably not be a problem, because they both belong to the same Sitefinity version (10.0), so there should be no schema changes between them.
It would be a problem, though, if you tried to run a 9.2 build on a 10.0 database.
I know it's late, but maybe this can still help someone else. Execute this query; it works for me.
UPDATE [DB_NAME].[dbo].[sf_schema_vrsns]
SET version_number = 6410
WHERE version_number = 6421;

UPDATE [DB_NAME].[dbo].[sf_schema_vrsns]
SET [assembly] = REPLACE([assembly], '10.0.6421.0', '10.0.6410.0')
WHERE [assembly] LIKE '%10.0.6421.0%';

MongoDB shell getCollectionNames not working properly

When in the mongo shell 3.x, I found this strange behaviour:
Typing db.getCollectionNames() I get [], but I know that there are collections.
Typing db.myColl.findOne() does in fact return a document, as I expect.
Does anyone know why?
Thanks
You're already on 3.x, so this shouldn't have any impact, I take it?
docs.mongodb.com/manual/reference/method/db.getCollectionNames
As the link says, it returns an empty array for shells lower than 3.x:
For MongoDB 3.0 deployments using the WiredTiger storage engine, if you run db.getCollectionNames() from a version of the mongo shell before 3.0 or a version of the driver prior to the 3.0-compatible version, db.getCollectionNames() will return no data, even if there are existing collections.
For more details, refer to the documentation linked above.
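
If upgrading the shell is not an option, a 3.0-compatible driver can list the collections instead; here is a minimal sketch with pymongo (the connection string and database name are hypothetical):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # hypothetical database name
print(db.list_collection_names())  # requires a reasonably recent pymongo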