Clarification: Neo4j OGM Session API returning 0 instances from countEntitiesOfType for built-in model types - neo4j-ogm

I am using neo4j-ogm-core version 2.1.2 with neo4j-ogm-bolt-driver version 2.1.2 and am able to query user-defined models using the session. However, when executing the following lines of code for the built-in types (NodeModel and RelationshipModel), the API returns 0 instances.
System.out.println("count(NodeModel) : "+session.countEntitiesOfType(NodeModel.class));
System.out.println("count(RelationshipModel) : "+session.countEntitiesOfType(RelationshipModel.class));
Is this the expected result for built-in types, or is some additional configuration required to look up the concrete realizations?

NodeModel and RelationshipModel are internal OGM infrastructure classes. They cannot be used for querying.
If you want a global count of nodes and relationships in Neo4j, you can use custom Cypher queries such as:
MATCH (n) RETURN count(n)
MATCH ()-[r]->() RETURN count(r)
(the directed pattern ensures each relationship is counted only once)
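A minimal sketch of running those counts through the OGM Session API, assuming the same session instance you already use for your entity queries (the empty map is just the parameter map expected by queryForObject):

import java.util.Collections;

// Global counts via plain Cypher; the directed ()-[r]->() pattern counts each relationship once.
Long nodeCount = session.queryForObject(
        Long.class, "MATCH (n) RETURN count(n)", Collections.emptyMap());
Long relCount = session.queryForObject(
        Long.class, "MATCH ()-[r]->() RETURN count(r)", Collections.emptyMap());
System.out.println("count(nodes) : " + nodeCount);
System.out.println("count(relationships) : " + relCount);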

Representing complex data types in XACML using Authzforce

I am new to XACML and would be grateful if you could help me with a problem I have encountered.
I use AuthzForce Core PDP (version 17.1.2).
I am wondering what the correct approach is for representing complex data types in XACML.
Example
Access should be granted if the PIP response contains any person whose name is present in the names array from the request and whose salary is higher than the salary provided in the request.
Request
names = ["Eric", "Kyle"]
salary = 1500
PIP response
[
  {
    "name": "Kyle",
    "salary": 1000
  },
  {
    "name": "Kenny",
    "salary": 2000
  },
  {
    "name": "Eric",
    "salary": 4000
  },
  {
    "name": "Stan",
    "salary": 3000
  }
]
Access will be granted because the PIP response contains a person named Eric whose salary is higher than 1500.
My implementation
To represent the PIP response, I ended up creating a custom type by extending the StringParseableValue class from AuthzForce. For the above-mentioned logic I use an attribute designator in XML and have a corresponding attribute provider (a class extending BaseNamedAttributeProvider) in Java performing the PIP call.
I also wrote two custom functions:
Find people with a higher salary than the one provided as a parameter (returns a filtered list)
Get a person's name (returns a string)
Using those functions together with standard functions, I wrote a policy, and it works.
However, my solution seems overcomplicated. I suppose what I did can be achieved using only standard functions.
Additionally, if I wanted to define a hardcoded bag of people inside another policy, a single element would look like this:
<AttributeValue DataType="person">name=Eric###salary=4000</AttributeValue>
There is always the possibility that parsing such strings might fail.
So my question is: what is a good practice for representing complex types like my PIP response in XACML using AuthzForce? Sometimes I might need to pass more complex data in the request, and I saw an example in the XACML specification showing such data being passed inside a <Content> element.
Creating a new XACML data-type - and consequently new XACML function(s) to handle that new data-type - does seem a bit overkill. Instead, you may improve your PIP (Attribute Provider) a little, so that it returns only the results for the employees named in the Request, and only their salaries (extracted from the JSON using JSONPath), returned as a bag of integers.
Then, assuming this PIP result is assigned to the attribute employee_salaries in your policy (a bag of integers), and min_salary is the salary in the Request, it is just a matter of applying any-of(integer-less-than, min_salary, employee_salaries) in a Condition. (I'm using short names for the functions for convenience; please refer to the XACML 3.0 standard for the full identifiers.)
Tips to improve the PIP:
One issue here is performance (scalability, response time / size...): if you have hundreds or even thousands of employees, it is overkill to fetch the whole list from the REST service over and over, all the more so as you need only a small subset (the names in the Request). Instead, you may have some way to ask the REST service to return only specific employees, using query parameters; an example using RSQL (but this depends on the REST service API):
HTTP GET http://rest-service.example.com/employees?search=names=in=($employee_names)
... where you set the $employee_names variable to (a comma-separated list of) the employee names from the Request (e.g. Eric,Kyle). You can get these in your AttributeProvider implementation from the EvaluationContext argument of the overridden get(...) method (EvaluationContext#getNamedAttributeValue(...)).
Then you can use a JSON path library (as you did) to extract the salaries from the JSON response (so you have only the salaries of the employees named in the Request), using this JSON path for instance (tested with Jayway):
$[*].salary
If the previous option is not possible, i.e. you have no way of filtering employees on the REST API, you can always do this filtering in your AttributeProvider implementation with the JSON path library, using this JSON path for instance (tested with Jayway against your PIP response):
$[?(@.name in [$employee_names])].salary
... where you set the $employee_names variable as in the previous approach, getting the names from the EvaluationContext. So the actual JSONPath after variable replacement would be something like:
$[?(@.name in [Eric,Kyle])].salary
(You may add quotes to each name to be safe.)
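As a rough illustration of this second option, here is a small Java sketch using the Jayway json-path library; the pipJson string and namesFilter value are placeholders for whatever your PIP call and EvaluationContext lookup actually return:

import java.util.List;
import com.jayway.jsonpath.JsonPath;

// Filter the PIP's JSON response down to the salaries of the requested employees.
String pipJson = "[{\"name\":\"Kyle\",\"salary\":1000},{\"name\":\"Eric\",\"salary\":4000}]";
String namesFilter = "'Eric','Kyle'"; // built from the names read from the EvaluationContext

List<Integer> salaries = JsonPath.read(
        pipJson, "$[?(@.name in [" + namesFilter + "])].salary");
// 'salaries' is then what the Attribute Provider returns as the employee_salaries
// bag evaluated by the any-of(integer-less-than, ...) condition.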
All things considered, if you still prefer to go for a new XACML data-type (and functions), and since you seem to have done most of the work (impressive, by the way), I have a suggestion - if doable without too much extra work - to generalize the Person data-type to a more generic JSON object data-type that could be reused in any use case dealing with JSON. Then see whether the extra functions could be replaced with a generic JSONPath evaluation function applied to the new JSON object data-type. This would provide a JSON equivalent to the standard XML/XPath data-type and functions we already have in XACML, and this kind of contribution would benefit the AuthzForce community greatly.
For the JSON object data-type, you can actually use the one in the testutils module as an example: CustomJsonObjectBasedAttributeValue, which has been used to test support for JSON objects in the GeoXACML extension.

Laravel Scout toSearchableArray attribute is not filterable

I've been doing some testing with Laravel Scout and, according to the documentation (https://laravel.com/docs/8.x/scout#configuring-searchable-data), I've mapped my User model as follows:
/**
 * Get the indexable data array for the model.
 *
 * @return array
 */
public function toSearchableArray()
{
    $data = $this->toArray();
    return array_merge($data, [
        'entity' => 'An entity'
    ]);
}
Just for the sake of testing, this is literally what I came down to while debugging.
After importing the User model with this mapping, I can see on the Meilisearch dashboard that it is indeed showing the user data plus entity = 'An entity'.
However, when applying this:
User::search('something')->where('entity', 'An entity')->get()
It produces the following error:
"message": " --> 1:1\n |\n1 | entity=\"An entity\"\n | ^----^\n |\n = attribute `entity` is not filterable, available filterable attributes are: ",
"exception": "MeiliSearch\\Exceptions\\ApiException",
"file": "/var/www/api/vendor/meilisearch/meilisearch-php/src/Http/Client.php",
Tracing back to view the 'filterable attributes', I've come to the conclusion that:
$client = app(\MeiliSearch\Client::class);
dump($client->index('users')->getFilterableAttributes()); // Returns []
$client->index('users')->updateFilterableAttributes(['entity']);
dump($client->index('users')->getFilterableAttributes()); // Returns ['entity']
Forcing updateFilterableAttributes now allows me to complete the search as intended, but I don't feel this should be the regular behaviour. If it's mapped in the searchable array, shouldn't it be searchable? What am I not understanding, and what other approaches are there to achieve this goal?
This is actually not an issue but a requirement of Meilisearch in particular. Under the hood, Scout uses different drivers for indexing - "algolia", "meilisearch", "database", "collection" and even "null" - all of which have different indexing methods; unifying them would be troublesome and inefficient for Scout, I believe.
So filtering, or faceted search as Meilisearch refers to it, requires us to establish the filtering criteria first, and these are empty by default for document (model, in Laravel) fields.
Quoting from the docs:
This step is mandatory and cannot be done at search time. Filters need to be properly processed and prepared by Meilisearch before they can be used.
Updating filterableAttributes requires recreating the entire index. This may take a significant amount of time depending on your dataset size.
For more info, please refer to the official Meilisearch docs: https://docs.meilisearch.com/learn/advanced/filtering_and_faceted_search.html

What is the right way to save the process instance id(s) created?

I am using Camunda as the tool for orchestrating microservices. At a later time, I need the generated process instance id to continue a particular process by using it in messageEventReceived(). Code as follows:
val processid = getProcessID(key1, key2)
val runtimeService = processengine.getRuntimeService
val subscription = runtimeService.createEventSubscriptionQuery
  .eventType("message")
  .eventName(eventname)
  .processInstanceId(processid)
  .singleResult
runtimeService.messageEventReceived(subscription.getEventName, subscription.getExecutionId)
At the moment the process id is saved and then retrieved from the database using the getProcessID(...) function when necessary. Is this proper?
Does Camunda already keep the list of process ids in its own database? If so, how do I retrieve a particular process instance id given only composite key(s)? Is that even possible?
It is a common way to do it. You can also use the public API to get the process instance and its id via the process definition key.
See the following example from the documentation:
runtimeService.createProcessInstanceQuery()
.processDefinitionKey("invoice")
.list();
For your given example there is also a simpler way. It is possible to correlate the message via the runtime service.
See this example from the documentation:
runtimeService.createMessageCorrelation("messageName")
.processInstanceBusinessKey("AB-123")
.setVariable("payment_type", "creditCard")
.correlate();
You can use
runtimeService.createProcessInstanceQuery().list();
The query supports fluent criteria for filtering, for example on the process definition key, variables, or the business key.
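For instance, a small sketch of such a filtered query; the definition key, business key, and variable name below are made-up placeholders:

import java.util.List;
import org.camunda.bpm.engine.runtime.ProcessInstance;

// Narrow the query with fluent criteria instead of persisting ids yourself.
List<ProcessInstance> instances = runtimeService.createProcessInstanceQuery()
        .processDefinitionKey("invoice")          // process definition key
        .processInstanceBusinessKey("AB-123")     // business key
        .variableValueEquals("key1", "someValue") // a process variable
        .list();

String processInstanceId = instances.isEmpty() ? null : instances.get(0).getId();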

ORM for Spring-MongoDB integration with native querying support

I am new to MongoDB and am exploring the frameworks available for migrating from MySQL to MongoDB. So far, from my findings, Spring Data MongoDB (SpringMongo) appears to be the best solution for my requirements.
The only problem is that, instead of using a DSL-based or abstract querying mechanism, I wish the framework allowed me to pass a plain JSON string as an argument to the different methods exposed by the API (find, findOne), so that the query parameters can be written out to an external file (referred to by a key) and passed to the methods by reading and parsing it at run time. The framework should still be capable of mapping the results to the domain objects.
Is there a way in spring-mongo to achieve this, or are there other frameworks along the same lines?
You could use Spring Data to do that; just use the BasicQuery class instead of the Query class. Your code will look like the following:
/* Any arbitrary string that can be parsed into a DBObject */
Query q = new BasicQuery("{ filter : true }");
List<Entity> entities = this.template.find(q, Entity.class);
If you want more details:
http://static.springsource.org/spring-data/data-mongo/docs/current/reference/html/#mongo.query
http://static.springsource.org/spring-data/data-mongodb/docs/current/api/org/springframework/data/mongodb/core/query/BasicQuery.html
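To sketch the asker's goal of keeping query strings in an external file on top of BasicQuery: the queries.properties file, the entity.findActive key, the Entity class, and the injected mongoTemplate bean below are assumptions for illustration, not part of Spring Data itself.

import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.BasicQuery;
import org.springframework.data.mongodb.core.query.Query;

// Load the JSON filter for a named query from an external properties file,
// then let Spring Data map the results onto the domain class.
Properties queries = new Properties();
try (InputStream in = getClass().getResourceAsStream("/queries.properties")) {
    queries.load(in);
}
Query q = new BasicQuery(queries.getProperty("entity.findActive")); // e.g. { filter : true }
List<Entity> entities = mongoTemplate.find(q, Entity.class);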
Well, I found this one in the Spring Data MongoOperations...
String jsonCommand = "{username: 'mickey'}";
MongoOperations mongoOps = // get a MongoOperations implementation
mongoOps.executeCommand(jsonCommand);
It returns an instance of CommandResult that encapsulates the result.

JPA & Ebean ORM: Empty collection is not empty

I've started switching a project over from hand-written JDBC ORM code to Ebean. So far it's been great; Ebean is light and easy to use.
However, I have run into a crippling issue: when retrieving a one-to-many list that should be empty, there is actually one element in it. This element looks to be some kind of proxy object with all null fields, so it breaks code that loops through the collection.
I've included abbreviated definitions here:
@Entity
class Store {
    ...
    @OneToMany(mappedBy="store", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
    List<StoreAlbum> storeAlbums = new LinkedList<StoreAlbum>();
}

@Entity
class StoreAlbum {
    ...
    @ManyToOne(optional=false, fetch=FetchType.EAGER)
    @JoinColumn(name="store_id", nullable=false)
    Store store;
}
The ... are where all the standard getters and setters are. The retrieval code looks like this:
Store s = server.find(Store.class)
    .where()
    .eq("store_id", 4)
    .findUnique();
Assert.assertEquals("Sprint", s.getStoreName());
Assert.assertEquals(0, s.getStoreAlbums().size());
The database is known to contain a 'store' row for "Sprint", and the 'store_album' table does not contain any rows for that store.
The JUnit test fails on the second assertion. It finds a list with 1 element in it, which is some kind of broken StoreAlbum object. The debugger shows the object as being of the type "com.lwm.catalogfeed.domain.StoreAlbum$$EntityBean$test#1a5e68a" with null values for all the fields which are declared as nullable=false (and optional=false).
Am I missing something here?
Thought I'd post an update on this... I ended up giving up on Ebean and instead switched the implementation over to MyBatis. MyBatis is fantastic; the manual is easy to read and thorough. MyBatis does what you expect it to do. I got it up and running in no time.
Ebean didn't appear to detect that the join for the associated collection resulted in a bunch of null ids, but MyBatis handled this scenario cleanly.
I ran into the same issue and was able to solve it by adding an identity column to the secondary table (StoreAlbum). I did not investigate the cause, but I suppose Ebean needs a primary key on the table in this kind of situation.
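For reference, a minimal sketch of what that fix could look like on the mapping side; the column details are assumptions, the point is simply that StoreAlbum gets its own primary key:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
class StoreAlbum {
    // Giving the table its own identity column lets Ebean distinguish
    // "no matching row" from a row of all-null columns.
    @Id
    @GeneratedValue
    Long id;

    // ... existing mappings (store, etc.) unchanged
}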