Artifactory REST API fully qualified class search

Is there any way to use a fully qualified class name to search in Artifactory (similar to the class search in the Artifactory web UI)? Based on this documentation, I know I can use a wildcard (*) and the .class file extension like this:
GET /api/search/archive?name=*Logger.class&repos=third-party-releases-local,repo1-cache
But I am looking for a way to use the fully qualified class name, similar to this:
GET /api/search/archive?name=org.apache.log4j.Logger&repos=third-party-releases-local,repo1-cache
but this does not work.

You can use the Artifactory Query Language (AQL) for this.
For example, a query searching for an archive entry called org/apache/log4j/Logger.class in the jcenter-cache repository would be:
items.find({
    "repo" : "jcenter-cache",
    "archive.entry.name" : {"$eq" : "Logger.class"},
    "archive.entry.path" : {"$eq" : "org/apache/log4j"}
})
The response would be:
{
    "results" : [ {
        "repo" : "jcenter-cache",
        "path" : "org/apache/log4j/com.springsource.org.apache.log4j/1.2.16",
        "name" : "com.springsource.org.apache.log4j-1.2.16.jar",
        "type" : "file",
        "size" : 481202,
        "created" : "2015-12-30T20:57:36.305Z",
        "created_by" : "admin",
        "modified" : "2010-08-04T13:18:06.000Z",
        "modified_by" : "admin",
        "updated" : "2015-12-30T20:57:36.354Z"
    } ],
    "range" : {
        "start_pos" : 0,
        "end_pos" : 1,
        "total" : 1
    }
}
To run such a query using curl, use the following when the query is inside a file named aql.txt:
curl -H "content-type: text/plain" -uuser:password --data @aql.txt http://my-artifactory-host/api/search/aql
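Since AQL takes the class name and its package path as separate fields, you can derive both from a fully qualified class name. A minimal bash sketch (the repository name is the same placeholder as above) that writes the query to aql.txt:

FQCN="org.apache.log4j.Logger"
CLASS_NAME="${FQCN##*.}.class"              # -> Logger.class
CLASS_PATH="$(echo "${FQCN%.*}" | tr . /)"  # -> org/apache/log4j
cat > aql.txt <<EOF
items.find({
    "repo" : "jcenter-cache",
    "archive.entry.name" : {"\$eq" : "$CLASS_NAME"},
    "archive.entry.path" : {"\$eq" : "$CLASS_PATH"}
})
EOF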

Related

Karate - Conditional JSON schema validation

I am just wondering how I can do conditional schema validation. The API response is dynamic based on the customerType key. If customerType is person, person details are included, and if customerType is org, organization details are included in the JSON response. So the response can be in either of the following forms:
{
    "customerType" : "person",
    "person" : {
        "firstName" : "A",
        "lastName" : "B"
    },
    "id" : 1,
    "requestDate" : "2021-11-11"
}
{
    "customerType" : "org",
    "organization" : {
        "orgName" : "A",
        "orgAddress" : "B"
    },
    "id" : 2,
    "requestDate" : "2021-11-11"
}
The schema I created to validate the above 2 scenarios is as follows:
{
    "customerType" : "#string",
    "organization" : "#? response.customerType=='org' ? karate.match(_,personSchema) : karate.match(_,null)",
    "person" : "#? response.customerType=='person' ? karate.match(_,orgSchema) : karate.match(_,null)",
    "id" : "#number",
    "requestDate" : "#string"
}
but the schema fails to match the actual response. What changes should I make in the schema to make it work?
Note: I am planning to reuse the schema in multiple tests, so I will be keeping the schema in separate files, independent of the feature file.
Can you refer to this answer, which I think is the better approach: https://stackoverflow.com/a/47336682/143475
That said, I think you missed that the JS karate.match() API doesn't return a boolean, but a JSON object that contains a pass boolean property.
So you have to do things like this:
* def someVar = karate.match(actual, expected).pass ? {} : {}
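Following that approach, a sketch of how the conditional match could look (the inline schema variables here are assumptions; in your case they would come from your separate schema files):

* def personSchema = { firstName: '#string', lastName: '#string' }
* def orgSchema = { orgName: '#string', orgAddress: '#string' }
# validate the common fields first
* match response contains { customerType: '#string', id: '#number', requestDate: '#string' }
# then pick the nested part and its schema based on customerType
* def isPerson = response.customerType == 'person'
* def nested = isPerson ? response.person : response.organization
* def expected = isPerson ? personSchema : orgSchema
* match nested == expected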

Complex MongoDB query in Mule 4

I am trying to make a MongoDB query in Mule with the $in operator, but Mule says Invalid input '$', expected Namespace or NameIdentifier.
I have a collection that stores user authorizations:
{
    "_id" : ObjectId("584a0dea073d4c3e976140a9"),
    "partnerDataAccess" : [
        {
            "factoryID" : "Fac-1",
            "partnerID" : "Part-1"
        }
    ],
    "userID" : "z12"
}
{
    "_id" : ObjectId("584f5eba073d4c3e976140ab"),
    "partnerDataAccess" : [
        {
            "factoryID" : "Fac-1",
            "partnerID" : "Part-2"
        },
        {
            "factoryID" : "Fac-2",
            "partnerID" : "Part-2"
        }
    ],
    "userID" : "w12"
}
The flow will submit a userID and partnerID and query the database to see if an authorization exists.
When I query from Robo 3T, I write queries like this, e.g. for user w12 and partner Part-2:
db.getCollection('user').find({
    userID: "w12",
    "partnerDataAccess.partnerID": { $in: ["Part-2", "ALL"] }
})
The $in is used because there is an "ALL" setting for admins.
But when I try to put the find part into the MongoDB connector, Mule gives errors both during development and at runtime.
Hardcoded:
<mongo:find-one-document collectionName="user" doc:name="Find one document" doc:id="a03a6689-6b9d-473c-b8a6-3b8d1e989e38" config-ref="MongoDB_Config">
<mongo:find-query ><![CDATA[#[{
userID:"w12",
"partnerDataAccess.partnerID": {$in : ["Part-2", "ALL"]}
}]]]></mongo:find-query>
</mongo:find-one-document>
Parameterized:
<mongo:find-one-document collectionName="user" doc:name="Find one document" doc:id="a03a6689-6b9d-473c-b8a6-3b8d1e989e38" config-ref="MongoDB_Config">
<mongo:find-query ><![CDATA[#[{
userID: payload.User,
"partnerDataAccess.partnerID": {$in : [ payload.partner, "ALL"]}
}]]]></mongo:find-query>
</mongo:find-one-document>
Error during development:
Invalid input '$', expected } or ~ or , (line 3, column 38):
At runtime:
Message : "Script '{
userID:"w12",
"partnerDataAccess.partnerID": {$in : ["Part-2", "ALL"]}
} ' has errors:
Invalid input '$', expected Namespace or NameIdentifier (line 3, column 38):
at 3 : 3" evaluating expression:
I have tried removing the $ or escaping the $ with a backslash, but it does not work.
I know my query is not actually complex; any help is welcome.
It seems I have found the correct way:
><![CDATA[#[{
userID:"w12",
"partnerDataAccess.partnerID": {"\$in" : ["Part-2", "ALL"]}
}]]]>
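With the escape applied, the parameterized version would then look like this (a sketch assuming the same payload fields as above):
<mongo:find-one-document collectionName="user" doc:name="Find one document" config-ref="MongoDB_Config">
<mongo:find-query ><![CDATA[#[{
userID: payload.User,
"partnerDataAccess.partnerID": {"\$in" : [ payload.partner, "ALL"]}
}]]]></mongo:find-query>
</mongo:find-one-document>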

Push from Filebeat to Elasticsearch with custom _type and _id

The problem is to push JSON logs collected by Filebeat to Elasticsearch with a defined _type and _id. The default elastic _type is "log" and the _id is something like "AVryuUKMKNQ7xhVUFxN2".
My log row:
{"unit_id":10001,"node_id":1,"message":"Msg ..."}
Desired record in Elasticsearch:
"hits" : [ {
"_index" : "filebeat",
"_type" : "unit_id",
"_id" : "10001",
...
"_source" : {
"message" : "Msg ...",
"node_id" : 1,
...
}
} ]
I know how to do it with Logstash: just use document_id => "%{unit_id}" and document_type => "unit_id" in the output section. But the goal is to use only Filebeat, because it is a very lightweight solution and no intermediate aggregation is needed here.
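(For illustration, that Logstash output would look something like this; the hosts and index values are placeholders.)

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat"
    document_type => "unit_id"
    document_id => "%{unit_id}"
  }
}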
You can set a custom _type by using the document_type option in Filebeat.
There is no way to set the _id directly in Filebeat as of version 5.x.
filebeat.prospectors:
- paths: ['/var/log/messages']
  document_type: syslog
You could use the Elasticsearch Ingest Node feature to set the _id field. You would need to use a script processor to copy a value from the event into the _id field. Once you have defined your pipeline you would tell Filebeat to send its data to that pipeline using the output.elasticsearch.pipeline config option.
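A sketch of that approach (the pipeline name set-unit-id is an assumption, and the field unit_id comes from the log row above; note that Elasticsearch 5.x used inline where later versions use source for the script body):

PUT _ingest/pipeline/set-unit-id
{
  "processors" : [
    {
      "script" : {
        "lang" : "painless",
        "inline" : "ctx._id = ctx.unit_id.toString()"
      }
    }
  ]
}

and then point Filebeat at it:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: set-unit-id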
You can now set a custom _id: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-deduplication.html

Mongo adding an object to original object

I am not sure if I am asking the correct question, but I assume this is just a basic MongoDB question.
I currently have this:
{
    "_id" : ObjectId("57af98d4d71c4efff5304335"),
    "fullname" : "test",
    "username" : "test",
    "email" : "test@gmail.com",
    "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2"
}
and I would like to add an exercises object like this:
{
    "_id" : ObjectId("57af98d4d71c4efff5304335"),
    "fullname" : "test",
    "username" : "test",
    "email" : "test@gmail.com",
    "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
    "exercises": {
        "benchpress",
        "rows",
        "curls",
    }
I am just unsure how to create exercises as an object without using $push, which just creates an array. I don't want an array, I want an object.
Any help would be greatly appreciated.
An object is a set of key-value pairs. In your representation of the second document, you have a nested document exercises as a key and its value as an object containing only strings. Don't you see something strange there? An object without keys?
It should probably be an array of strings. Note that an array is indeed an object where the key is the numeric index starting from 0 and the value is the string in that position.
(You have an additional comma and a missing curly brace. Let's fix that.)
This is the document we wish to see after updating the document.
{
    "_id" : ObjectId("57af98d4d71c4efff5304335"),
    "fullname" : "test",
    "username" : "test",
    "email" : "test@gmail.com",
    "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
    "exercises": [
        "benchpress",
        "rows",
        "curls"
    ]
}
Now, back to your question: how can we update the existing document with the exercises field? It's pretty simple. MongoDB has an update method which does exactly that. Since we don't want to replace the entire document but just add additional fields, we should use $set to update specific fields. Fire up the mongo shell and switch to your database using use db-name, then execute the following command. I assume you have an existing document with id ObjectId("57af98d4d71c4efff5304335"). Note that ObjectId is a BSON datatype.
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, { $set: {"exercises": ["benchpress", "rows", "curls"] } })
This will update the document as
{
    "_id" : ObjectId("57af98d4d71c4efff5304335"),
    "fullname" : "test",
    "username" : "test",
    "email" : "test@gmail.com",
    "password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
    "exercises" : [
        "benchpress",
        "rows",
        "curls"
    ]
}
Here scratch refers to the collection name. The update method takes 3 parameters:
The query to find the document to update.
The update parameter (the document to update). You can either replace the whole document or update just specific parts of it (using $set).
An optional object which can tell MongoDB to insert the record if the document doesn't exist (upsert) or update multiple documents that match the criteria (multi); see the example below.
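For example, to insert the document when it doesn't exist yet, pass upsert in that third parameter:
db.scratch.update(
    { "_id" : ObjectId("57af98d4d71c4efff5304335") },
    { $set: { "exercises": ["benchpress", "rows", "curls"] } },
    { upsert: true }
)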
EXTRA
Warning: If you execute the following in the mongo shell,
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, {"exercises": ["benchpress", "rows", "curls"] })
the entire document would be replaced except the _id field. So, the record would be something like this:
{
    "_id" : ObjectId("57af98d4d71c4efff5304335"),
    "exercises" : [
        "benchpress",
        "rows",
        "curls"
    ]
}
You should only do this when you are aware of the consequences.
Hope this helps.
For more, see https://docs.mongodb.com/manual/reference/method/db.collection.update/

FIWARE-Orion Context Broker metadata updates trigger notifications

I'm using 3 FIWARE GEs: IDAS+Orion+CEP.
As reported in the Orion documentation (https://github.com/telefonicaid/fiware-orion/blob/develop/doc/manuals/user/metadata.md) "changing the metadata of a given attribute or adding a new metadata element is considered a change even if attribute value itself hasn't changed".
Is there a way to send notifications from Orion only if the value of the attribute specified in the subscription changes?
I've tried the solution proposed in the documentation (delete and re-create the attribute in order to remove the metadata), but since the messages to Orion are produced by IDAS, the metadata are re-created with the next communication.
Thanks.
UPDATE:
GEs Version:
- Orion - 0.26.1-next
- IoTAgent (IDAS) - 1.3.1
The metadata added by IDAS are:
"attributes" : [
{
"name" : "temperature",
"type" : "int",
"value" : "37",
"metadatas" : [
{
"name" : "TimeInstant",
"type" : "ISO8601",
"value" : "2015-12-29T12:46:04.421859"
}
]
}
]
Specifically, from a MongoDB query:
"temperature" : { "value" : "37", "type" : "int", "md" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-12-29T12:46:04.421859" } ], "creDate" : 1450716887, "modDate" : 1451393164 }
As far as I know, the sending of TimeInstant metadata from IDAS/IoTAgent to Orion cannot be disabled for the time being.
A possible workaround could be to have a proxy between IDAS and Orion in order to remove the TimeInstant metadata (or the whole metadata field in the JSON, to prevent some other metadata from causing a similar problem).
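A minimal sketch of such a proxy in Python (assuming IDAS sends NGSIv1 updateContext requests as JSON; the host names and ports are placeholders, untested against a real deployment):

import requests
from flask import Flask, Response, request

app = Flask(__name__)
ORION = "http://my-orion-host:1026"  # placeholder for the real Orion endpoint

@app.route("/<path:path>", methods=["POST"])
def forward(path):
    body = request.get_json(force=True)
    # Drop the metadata block from every attribute so that a
    # metadata-only change never reaches Orion.
    for element in body.get("contextElements", []):
        for attribute in element.get("attributes", []):
            attribute.pop("metadatas", None)
    resp = requests.post("%s/%s" % (ORION, path), json=body)
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=1027)  # point IDAS at this port instead of Orion's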