How to make a dynamic/optional filter (parameters) in a MongoDB query in Jaspersoft Studio - sql

I'm creating a web application and it's working perfectly, but in the end the user needs to create a report from its data.
On the report page I created some text boxes that users type into for filtering. Those text boxes may be empty, in which case I need to return everything from the DB, or some of them may be filled in. Remember that I need to pass the text box contents as parameters to JasperServer, where they will be used in the query.
An example of data input is:
txtName = empty (null),
txtCity = 'Belo Horizonte'
It should generate a report with all records of people who live in Belo Horizonte, no matter the name.
I made it work in SQL without problems. Then I tried to use the same logic in Mongo, but it doesn't work. I have tried $lt, $gt, $lte, $gte, $exists, $ne and a bunch of other operators and aggregation tools, and I was not able to make it work properly.
SQL:
select * from myfirstreports
where ($P{city} is null or cidade =$P{city})
AND ($P{name} is null or nome =$P{name})
Mongo:
{
'collectionName' : 'myfirstreports',
'findFields' :
{
'nome': 1, 'numeros': 1, 'vulgo': 1, 'cidade': 1,
'usuResponsavelCadastro': 1, 'created_at': 1
},
findQuery :
{
$and: [
{$or:[{ $P{city}: {$eq: null}}, {'cidade': $P{city}}]},
{ $or:[{$P{name}: {'$eq': null}}, {'nome': $P{name}}]}
]
}
}

I used the following expressions:
$P{city}.equals(null) ? "{ }" : "{'cidade': '$P!{city}'}" // needs to be defined in a non-prompting parameter
$P{name}.equals(null) ? "{ }" : "{'nome': '$P!{name}'}"
$P!{...} parameters allow me to build the query as a string and pass it to the JasperSoft report.
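For reference, here is a minimal sketch of how the pieces could fit together, assuming the two expressions above are stored in non-prompting parameters named cityFilter and nameFilter (hypothetical names). Each parameter expands to either { } (which matches everything) or a single-field filter, so an empty text box simply filters nothing:
{
'collectionName' : 'myfirstreports',
'findFields' :
{
'nome': 1, 'numeros': 1, 'vulgo': 1, 'cidade': 1,
'usuResponsavelCadastro': 1, 'created_at': 1
},
'findQuery' :
{
$and: [ $P!{cityFilter}, $P!{nameFilter} ]
}
}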

Related

Text search in aggregation using pymongo

I have a collection named users; it has the following attributes:
{
"_id": "937a04d3f516443e87abe8308a1fe83e",
"username": "andy",
"full_name": "andy white",
"image": "https://example.com/xyz.jpg",
... etc
}
I want to run a text search on full_name and username using the aggregation pipeline, so that if a user searches for any 3 letters, the most relevant full_name or username values are returned, sorted by relevance.
I have already created a text index on username and full_name, and then I tried the query from the link below:
https://www.mongodb.com/docs/manual/tutorial/text-search-in-aggregation/#return-results-sorted-by-text-search-score
pipeline_stage = [
    {"$match": {"$text": {"$search": "whit"}}},
    {"$sort": {"score": {"$meta": "textScore"}}},
    {"$project": {"username": 1, "full_name": 1, "image": 1}},
]
stages = [*pipeline_stage]
users = users_db.aggregate(stages)
but I am getting the error below:
pymongo.errors.OperationFailure: FieldPath field names may not start with '$'. Consider using $getField or $setField., full error: {'ok': 0.0, 'errmsg': "FieldPath field names may not start with '$'. Consider using $getField or $setField.", 'code': 16410, 'codeName': 'Location16410', '$clusterTime': {'clusterTime': Timestamp(1657811022, 14), 'signature': {'hash': b'a\xb4rem\x02\xc3\xa2P\x93E\nS\x1e\xa6\xaa\xb0\xb1\x85\xb5', 'keyId': 7062773414158663703}}, 'operationTime': Timestamp(1657811022, 14)}
I also tried the link below (my query is also below), but I am getting full-word text search results; it is not working for partial text search:
https://www.mongodb.com/docs/manual/tutorial/text-search-in-aggregation/#match-on-text-score
pipeline_stage = [
{"$match": {"$text": {"$search": search_key}}},
{"$project": {"full_name": 1, "score": {"$meta": "textScore"}}},
]
Any help will be appreciated.
Note: I want to do a partial text search, with the most relevant records sorted to the top.
Thanks
Your $project stage is incorrect; it should be
pipeline_stage = [
    {"$match": {"$text": {"$search": "and"}}},
    {"$sort": {"score": {"$meta": "textScore"}}},
    {"$project": {"username": "$username", "full_name": "$full_name", "image": "$image"}},
]
Also note that with an English-language text index, stop words like "and" are not indexed.
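Since $text only matches whole (stemmed) words, a common workaround for partial matching is a case-insensitive regex on both fields. Below is a minimal pymongo sketch; the connection string and the users_db collection handle are assumptions, and note that a regex match does not produce a relevance score, so true relevance ranking would need something like Atlas Search:
import re

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection details
users_db = client["mydb"]["users"]                 # collection handle, named as in the question

search_key = "whit"
# Escape the user's input and match it anywhere in either field, case-insensitively.
pattern = re.compile(re.escape(search_key), re.IGNORECASE)

pipeline = [
    {"$match": {"$or": [{"username": pattern}, {"full_name": pattern}]}},
    {"$project": {"username": 1, "full_name": 1, "image": 1}},
    {"$limit": 20},
]
users = list(users_db.aggregate(pipeline))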

FaunaDB: how to fetch a custom column

I'm just learning FaunaDB and FQL and having some trouble (mainly because I come from MySQL). I can successfully query a table (eg: users) and fetch a specific user. This user has a property users.expiry_date which is a faunadb Time() type.
What I would like to do is know if this date has expired by using the function LT(Now(), users.expiry_date), but I don't know how to create this query. Do I have to create an Index first?
So in short, just fetching one of the users documents gets me this:
{
id: 1,
username: 'test',
expiry_date: Time("2022-01-10T16:01:47.394Z")
}
But I would like to get this:
{
id: 1,
username: 'test',
expiry_date: Time("2022-01-10T16:01:47.394Z"),
has_expired: true,
}
I have this FQL query now (ignore oauthInfo):
Query(
Let(
{
oauthInfo: Select(['data'], Get(Ref(Collection('user_oauth_info'), refId))),
user: Select(['data'], Get(Select(['user_id'], Var('oauthInfo'))))
},
Merge({ oauthInfo: Var('oauthInfo') }, { user: Var('user') })
)
)
How would I do the equivalent of the MySQL query SELECT users.*, IF(users.expiry_date < NOW(), 1, 0) as is_expired FROM users in FQL?
Your use of Let and Merge shows that you are thinking about FQL in a good way. These are functions that can go a long way toward making your queries more organized and readable!
I will start with some notes, but they will be relevant to the final answer, so please stick with me.
The Query function
https://docs.fauna.com/fauna/current/api/fql/functions/query
First, you should not need to wrap anything in the Query function, here. Query is necessary for defining functions in FQL that will be run later, for example, in the User-Defined Function body. You will always see it as Query(Lambda(...)).
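For example (a hypothetical UDF, just to illustrate where Query belongs), it wraps a Lambda when you store a function to run later:
CreateFunction({
  name: "double_it",
  body: Query(Lambda("x", Add(Var("x"), Var("x"))))
})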
Fauna IDs
https://docs.fauna.com/fauna/current/learn/understanding/documents
Remember that Fauna assigns unique IDs for every Document for you. When I see fields named id, that is a bit of a red flag, so I want to highlight that. There are plenty of reasons that you might store some business-ID in a Document, but be sure that you need it.
Getting an ID
A Document in Fauna is shaped like:
{
ref: Ref(Collection("users"), "101"), // <-- "id" is 101
ts: 1641508095450000,
data: { /* ... */ }
}
In the JS driver you can use this id by using documentResult.ref.id (other drivers can do this in similar ways)
You can access the ID directly in FQL as well. You use the Select function.
Let(
  {
    user: Get(Select(['user_id'], Var('oauthInfo'))),
    id: Select(["ref", "id"], Var("user"))
  },
  Var("id")
)
More about the Select function.
https://docs.fauna.com/fauna/current/api/fql/functions/select
You are already using Select and that's the function you are looking for. It's what you use to grab any piece of an object or array.
Here's a contrived example that gets the zip code for the 3rd user in the Collection:
Let(
  {
    page: Paginate(Documents(Collection("user"))),
    // Paginate returns a page of refs, so Get the 3rd one before selecting into it
    user: Get(Select(["data", 2], Var("page")))
  },
  Select(["data", "address", "zip"], Var("user"))
)
Bring it together
That said, your Let function is a great start. Let's break things down into smaller steps.
Let(
  {
    oauthInfo_ref: Ref(Collection('user_oauth_info'), refId),
    oauthInfo_doc: Get(Var("oauthInfo_ref")),
    // make sure that user_oauth_info.user_id is a full Ref, not just a number
    user_ref: Select(["data", "user_id"], Var("oauthInfo_doc")),
    user_doc: Get(Var("user_ref")),
    user_id: Select("id", Var("user_ref")),
    // calculate expired: the document has expired if its expiry date is before now
    expiry_date: Select(["data", "expiry_date"], Var("user_doc")),
    has_expired: LT(Var("expiry_date"), Now())
  },
  // if the data does not overlap, Merge is not required.
  // you can build plain objects in FQL
  {
    oauthInfo: Var("oauthInfo_doc"), // entire Document
    user: Var("user_doc"), // entire Document
    has_expired: Var("has_expired") // an extra field
  }
)
If, instead of returning the auth info and user as separate keys, you do want to Merge them and/or add additional fields, then feel free to do that:
// ...
Merge(
Select("data", Var("user_doc")), // just the data
{
user_id: Var("user_id"), // added field
has_expired: Var("has_expired") // added field
}
)
)

Can I update a FaunaDB document without knowing its ID?

FaunaDB's documentation covers how to update a document, but their example assumes that I'll have the id to pass into Ref:
Ref(schema_ref, id)
client.query(
q.Update(
q.Ref(q.Collection('posts'), '192903209792046592'),
{ data: { text: "Example" } },
)
)
However, I'm wondering if it's possible to update a document without knowing its id. For instance, if I have a collection of users, can I find a user by their email, and then update their record? I've tried this, but Fauna returns a 400 (Database Ref expected, String provided):
client
.query(
q.Update(
q.Match(
q.Index("users_by_email", "me#example.com")
),
{ name: "Em" }
)
)
Although Ben's comments are correct (that's the way you do it), I wanted to note that the error you are receiving is because you are missing a bracket here: it should be "users_by_email"), "me#example.com"
The error is logical if you know that Index takes an optional database reference as its second argument.
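In other words, the search term belongs to Match, not to Index:
// incorrect: the e-mail is passed as Index's second (database reference) argument
q.Match(q.Index("users_by_email", "me#example.com"))

// correct: the e-mail is the term passed to Match
q.Match(q.Index("users_by_email"), "me#example.com")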
To clarify what Ben said:
If you do this you'll get another error:
Update(
Match(
Index("accounts_by_email"), "test#test.com"
),
{ data: { email: "test2#test.com"} }
)
This is because Match could potentially return more than one element. It returns a set of references called a SetRef. Think of SetRefs as lists that are not materialized yet. If you are certain there is only one match for that e-mail (e.g. if you set a uniqueness constraint), you can materialize it using Paginate or Get:
Get:
Update(
Select(['ref'], Get(Match(
Index("accounts_by_email"), "test#test.com"
))),
{ data: { email: 'test2#test.com'} }
)
Get returns the complete document, so we need to specify that we only want the ref, with Select(['ref'], ...).
Paginate:
Update(
Select(['data', 0],
Paginate(Match(
Index("accounts_by_email"), "test#test.com"
))
),
{ data: { email: "testchanged#test.com"} }
)
You are very close! Update does require a ref, but you can get one via your index. Assuming your index has the default values setting (i.e. paginating a match returns a page of refs) and you are confident that there is a single match, or that the first match is the one you want, you can do Select(["ref"], Get(Match(Index("users_by_email"), "me#example.com"))) to transform your set ref into a document ref. This can then be passed into Update (or to any other function that wants a document ref, like Delete).
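Putting that together with the driver code from the question, a sketch of the full call might look like this (note that Update expects the changed fields under data):
client.query(
  q.Update(
    q.Select(
      ["ref"],
      q.Get(q.Match(q.Index("users_by_email"), "me#example.com"))
    ),
    { data: { name: "Em" } }
  )
)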

Multiple MySQL queries returning undefined when outputting value

I am running two database queries to retrieve data that I will be outputting in a message embed. The queries return the proper rows when I just dump the entire result to the console. However, whenever I try to output the actual value of one of the rows, it displays as undefined in the message embed.
From what I've found based on examples, rows[0].somevalue should be outputting the correct results.
let mentionedUser = message.mentions.members.first();
let captainUser = client.users.find(user => user.id == `${mentionedUser.id}`);
con.query(`SELECT * FROM captains WHERE id = '${mentionedUser.id}';SELECT * FROM results WHERE captain = '${captainUser.username}'`, [2, 1], (err, rows) => {
if(err) throw err;
console.log(rows);
const infoEmbed = new Discord.RichEmbed()
.setColor("#1b56af")
.setAuthor('Captain Information', client.user.displayAvatarURL)
.setThumbnail('https://i.imgur.com/t3WuKqf.jpg')
.addField('Captain Name', `${mentionedUser}`, true)
.addField('Cap Space', `${rows[0].credits}`, true) // Returns undefined
message.channel.send(infoEmbed);
});
This is the console result
[ [ RowDataPacket {
id: '91580646270439424',
team_name: 'Resistance',
credits: 85,
roster_size: 2 } ],
[ RowDataPacket { id: 'Sniper0270', captain: 'BTW8892', credits: 10 },
RowDataPacket { id: 'Annex Chrispy', captain: 'BTW8892', credits: 5 } ] ]
In the code posted above, rows[0].credits is expected to output 85. No errors are raised; it is just displayed as "undefined" in the message embed.
You are executing two queries inside a single query call. In this scenario the mysql library returns an array of arrays, where the first element is the result of the first query and the second is the result of the second query. This is non-standard. Normally you would either execute each query in its own query call, or you would use a union to join the two queries into a single result set.
This is not a practical way to send a query request. A query call is meant for a single statement (bulk updates aside), so you cannot execute two different queries in a single con.query; execute them separately.
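For illustration, here is a sketch of both suggestions, reusing the connection and tables from the question (the multi-statement form assumes the connection was created with multipleStatements: true):
// Option 1: keep the multi-statement call and index into the array of result sets.
con.query(
  'SELECT * FROM captains WHERE id = ?; SELECT * FROM results WHERE captain = ?',
  [mentionedUser.id, captainUser.username],
  (err, results) => {
    if (err) throw err;
    const captain = results[0][0]; // first row of the first result set
    console.log(captain.credits);  // 85 in the console output above
  }
);

// Option 2: run each query in its own call, so rows[0].credits works as expected.
con.query('SELECT * FROM captains WHERE id = ?', [mentionedUser.id], (err, rows) => {
  if (err) throw err;
  console.log(rows[0].credits);
});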

Pentaho SQL to MongoDb - Array Issue

I need to update elements in an array. When I run the transformation for the first time, the array receives the right number of elements in the PROD array. But if I run it again, the array receives the same elements once more.
Example:
The first time, I get the document below, and it is correct:
{
"_id" : ObjectId("58e2c81f781a75592f69f8a5"),
"DDATA_ORC" : ISODate("2016-08-02T03:00:00.000Z"),
"SNUMORC" : "113239",
"PROD" : [
{
"SPRODUTO" : "TONER HP CE411A CIANO (305A)"
}
]
}
But if I run the transformation again, the PROD array will be updated with the same SPRODUTO:
{
"_id" : ObjectId("58e2c81f781a75592f69f8a5"),
"DDATA_ORC" : ISODate("2016-08-02T03:00:00.000Z"),
"SNUMORC" : "113239",
"PROD" : [
{
"SPRODUTO" : "TONER HP CE411A CIANO (305A)"
},
{
"SPRODUTO" : "TONER HP CE411A CIANO (305A)"
}
]
}
This is a problem because I will get wrong results for queries.
These are my plugin configurations:
Options tab and Document Path tab
I need to update the array only if it gains or loses an item.
Thanks in advance
I solved this issue.
If anyone has this problem, the solution is to create two "MongoDB Output" steps. In the first output, you need to set the array (the array is recreated every time the update query runs successfully). I did it using a dummy field.
First Output Document Fields
In the second "MongoDB Output", you need to execute a push to populate the array.
Second Output Document Fields
In the "Output Options" tab, you have to set Update, Upsert and "Modifier Update".