Query Document Schema in MarkLogic

I would like to query the schema definition of an index in MarkLogic.
What would the query to do that look like?
I am talking about a schema in the Elasticsearch sense, with field types, analyzers, etc.
Please think of my question as if I were asking how to see the column names and column types in Oracle. How do I do the same in MarkLogic? Any examples?

MarkLogic has a universal index, so there is no requirement to define a schema up front to search on specific elements or properties.
To do datatyped queries on elements or properties, you can use TDE (Template Driven Extraction) in MarkLogic 9 to define how datatyped values are projected from the documents in a collection into the indexes as a view over those documents. To find the list of columns and data types for a view, you can either query the system columns view or retrieve the TDE template from the schemas database.
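As a minimal sketch of the first approach over MarkLogic's SQL interface (the sys.sys_columns system view name and the employees view are assumptions; check which system views your MarkLogic version exposes):

-- list column metadata for a TDE-backed view (view name 'employees' is hypothetical)
SELECT * FROM sys.sys_columns WHERE table_name = 'employees';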
In MarkLogic 8 and before, you would define range indexes on elements, properties, fields, or paths. On the e-node, the Admin API can list the range indexes for any database. On the middle tier, the Management REST API provides the equivalent request.
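For instance, a request along these lines returns the database configuration, including any configured range indexes (assuming the default Management API port 8002 and a database named Documents):

GET http://localhost:8002/manage/v2/databases/Documents/properties?format=json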
Hoping that clarifies,

Related

Can Azure Cognitive Search Index Use A Lookup Table?

I have created an Azure Cognitive Search Service index. All the fields that I want to be able to search, retrieve, filter, sort, and facet on are included within the single table that the index is built from. Some of the data fields in that table are coded, but I have a separate table that serves as a dictionary defining those codes more literally, in plain English. I would like to be able to search on the defined/literal values in my search results from the index without having to add the contents of the large dictionary table to the search index.
Is it possible to configure the index to use a referential table in this way? How?
Or is my only option to denormalize the entire contents of the dictionary into the index table?
Thanks!
Azure Cognitive Search will only return data that it has stored in its search indexes. If you want to "join" that data with some external table, that's something you may need to do client-side.
If you want to search against the terms in your external table, that data needs to be included in the search index.

Recursively dissect SQL schema

I use DBeaver with postgresql. It has a feature that lists a tree view of a db's schemas, including information_schema, pg_catalog, and public. Then, within each schema, there are a set of headings: Tables, Views, Materialized Views, Indexes, Functions, Sequences, Data Types, Aggregate Functions. Within each of these headings there are other entities, and so on to several levels in depth.
I would like to create that tree view independently of DBeaver, using tkinter. I can handle the tkinter part, but I haven't been able to divine the SQL statements that dissect schemas recursively down to leaf nodes. I've only found the topmost statement, which is:
select schema_name from information_schema.schemata
Beyond that, I cannot find anything that enables me to display deeper structure. I have read all the so-called schema tutorials; they are focused only on user-created tables. I've also read the official postgresql docs on schemas; they read like a dictionary and have no tutorial value whatever.
Any help, please.
You'll find all the required information (schemata, tables, views, sequences, etc.) in the information_schema schema, which you can inspect with DBeaver. The PostgreSQL information schema is documented at https://www.postgresql.org/docs/current/information-schema.html. Good luck!
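For example, the next two levels down come from the standard information_schema views (the 'public' schema and 'my_table' names below are placeholders):

-- level 2: tables and views within a schema
select table_name, table_type
from information_schema.tables
where table_schema = 'public';

-- level 3: columns within a table
select column_name, data_type, is_nullable
from information_schema.columns
where table_schema = 'public' and table_name = 'my_table'
order by ordinal_position;

For objects the information schema covers poorly (indexes, for instance), you can fall back to the pg_catalog views, such as pg_indexes.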

Is there any workaround for indexing a list in Apache Ignite and using it in a WHERE clause?

I am not sure how to index a List/array in Apache Ignite. I want to use my list/array in a WHERE clause. I can write a custom function, but it would scan the whole data set; I am looking for a way to index the list/array.
Please help me.
A common way to store lists in a SQL database is to create a table of pairs, representing the one-to-many relation.
The columns of this pairs table can be indexed and used in WHERE clauses after joining with the initial table.
To make the joins fast, you will probably need to make the records of the two tables collocated by affinity.
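A hedged sketch in Ignite SQL (the item/item_tag names are made up; the affinity_key option collocates each pair with its parent row so the join stays node-local):

-- parent table
CREATE TABLE item (
  id BIGINT PRIMARY KEY,
  name VARCHAR
) WITH "template=partitioned";

-- table of pairs: one row per (item, value) relation from the former list
CREATE TABLE item_tag (
  item_id BIGINT,
  tag VARCHAR,
  PRIMARY KEY (item_id, tag)
) WITH "template=partitioned, affinity_key=item_id";

-- index the value column so the WHERE clause is served by the index, not a scan
CREATE INDEX item_tag_tag_idx ON item_tag (tag);

-- find items whose list contains a given value
SELECT i.id, i.name
FROM item i
JOIN item_tag t ON t.item_id = i.id
WHERE t.tag = 'urgent';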

Index SQL table with Solr using facets

I am a Solr newbie, and I am trying to use it to set up a faceted search over a denormalized database view (a table with lots of fields).
At the moment I have created the index in Solr and I can query it via the Solr URL. I will use the Solr facets to generate the search menu: a set of given fields with all possible values and the number of occurrences of each value.
Now the question is: should I use Solr only to build the facets and plain old SQL to query the database, or is it better to use Solr to run the queries as well?
I use facets to create search refinements. If you want to suggest to the user what to look for, you should use the Terms component:
https://cwiki.apache.org/confluence/display/solr/The+Terms+Component
It is always better to query Solr for your search results, because if you query the database the number of results can differ from what you are showing against a facet, as the results in the database may not yet be reflected in Solr.
Another reason is performance: querying different fields spread across multiple tables is expensive compared to querying denormalized documents indexed in a search engine.
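As a hedged sketch, a single Solr request can return both the result page and the facet counts (the products core and the brand/color field names are assumptions):

http://localhost:8983/solr/products/select?q=shoes&rows=10&facet=true&facet.field=brand&facet.field=color

The facet_counts section of the response then drives the refinement menu while the docs section drives the results list, so one round trip serves both.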

Lucene index a large many-to-many relationship

I have a database with two primary tables:
The components table (50M rows),
The assemblies table (100K rows),
and a many-to-many relationship between them (about 100K components per assembly), thus 10G relationships in total.
What's the best way to index the components such that I could query the index for a given assembly? Given the amount of relationships, I don't want to import them into the Lucene index, but am looking instead for a way to "join" with my external table on-the-fly.
Solr supports multi-valued fields. I'm not positive whether Lucene supports them natively or not; it's been a while for me. If only one of the entities is searchable, which you mentioned is components, I would index all components with a field called "assemblies" or "assemblyIds" or something similar, and include whatever metadata you need to identify the assemblies.
Then you can search the components with
assemblyIds:(1 OR 2 OR 3)
to find components in assemblies 1, 2, or 3.
To be brief, you have to process the data and index it before you can search. Therefore, there is no way to just "plug" Lucene into some data source or database; instead, you have to feed the data itself into Lucene (process, parse, analyze, index, and then query).
rustyx: "My data is mostly static. I can even live with a read-only index."
In that case, you might use Lucene itself. You can iterate over the data source to add all the many-to-many relations to the Lucene index. How did you come up with that "100GB" size? People index many millions of documents with Lucene; I don't think the indexing would be a problem for you.
Within a single document, you can add multiple instances of the same field with different values (the "components"), alongside an "assembly" field.
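A hedged sketch of that layout (field and value names are illustrative), with one index document per assembly:

assembly=A1, component=C1, component=C2, component=C3
assembly=A2, component=C2, component=C4

A query such as assembly:A1 then returns the document listing that assembly's components, and component:C2 returns every assembly containing that component.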
rustyx: "I'm looking instead into a way to "join" the Lucene search with my external data source on the fly"
If you need something seamless, you might try the following framework, which acts as a bridge between a relational database and a Lucene index.
Hibernate Search: in its tutorial, search for the "@ManyToMany" keyword to find the exact section that covers this mapping.