Reading text from a text file in MongoDB

Do mongo shell commands support retrieving / searching for words inside a text file, PDF, or Excel file?
All my files are stored in MongoDB (fs.files / fs.chunks), and I reference them from a main collection using _id with DBRefs.
I want to search for a few words in the file corresponding to a document in my main collection (db.abc).
Example:
{
"_id" : ObjectId ("5a0257610f32472c6c5740c0"),
"chunkSize" : 261120,
"uploadDate" : ISODate ("2017-11-08T01:01:21.350Z"),
"length" : 196172,
"md5" : "a1fec9f5c652c6d844e2bf35bc08eaae",
"filename" : "test.txt"
}
db.abc.insert ({"_id": "1",
"project": {
"$ref": "fs.files",
"$id": ObjectId("5a0257610f32472c6c5740c0"),
"$db": "dbname"},
"pp": "987-654-222",
"descr": "circuitboard"
}
)
I want to search for words in test.txt from the mongo shell using MongoDB commands.
Is that possible?
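(A note on feasibility: GridFS stores file content as opaque binary chunks in fs.chunks, so the server cannot search inside the file from a query. One workaround is to pull the file out and search it client-side; a minimal sketch using the stock mongofiles tool, where the database name and search term are placeholders:
# fetch the GridFS file to local disk, then search it locally
mongofiles --db dbname get test.txt
grep -i "searchterm" test.txt
)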
Thanks

Related

How to convert Json to CSV and send it to big query or google cloud bucket

I'm new to NiFi and I want to convert a large amount of JSON data to CSV format.
This is what I am doing at the moment, but it is not producing the expected result.
These are the steps:
Processors that create an access_token and send the request body using InvokeHTTP (this part works fine, so I won't name those processors), returning the response body as JSON.
Example of json response:
[
{
"results":[
{
"customer":{
"resourceName":"customers/123456789",
"id":"11111111"
},
"campaign":{
"resourceName":"customers/123456789/campaigns/222456422222",
"name":"asdaasdasdad",
"id":"456456546546"
},
"adGroup":{
"resourceName":"customers/456456456456/adGroups/456456456456",
"id":"456456456546",
"name":"asdasdasdasda"
},
"metrics":{
"clicks":"11",
"costMicros":"43068982",
"impressions":"2079"
},
"segments":{
"device":"DESKTOP",
"date":"2021-11-22"
},
"incomeRangeView":{
"resourceName":"customers/456456456/incomeRangeViews/456456546~456456456"
}
},
{
"customer":{
"resourceName":"customers/123456789",
"id":"11111111"
},
"campaign":{
"resourceName":"customers/123456789/campaigns/222456422222",
"name":"asdasdasdasd",
"id":"456456546546"
},
"adGroup":{
"resourceName":"customers/456456456456/adGroups/456456456456",
"id":"456456456546",
"name":"asdasdasdas"
},
"metrics":{
"clicks":"11",
"costMicros":"43068982",
"impressions":"2079"
},
"segments":{
"device":"DESKTOP",
"date":"2021-11-22"
},
"incomeRangeView":{
"resourceName":"customers/456456456/incomeRangeViews/456456546~456456456"
}
},
....etc....
]
}
]
Now I am using:
===>SplitJson ($[*].results[*])==>JoltTransformJSON with this spec:
[{
"operation": "shift",
"spec": {
"customer": {
"id": "customer_id"
},
"campaign": {
"id": "campaign_id",
"name": "campaign_name"
},
"adGroup": {
"id": "ad_group_id",
"name": "ad_group_name"
},
"metrics": {
"clicks": "clicks",
"costMicros": "cost",
"impressions": "impressions"
},
"segments": {
"device": "device",
"date": "date"
},
"incomeRangeView": {
"resourceName": "keywords_id"
}
}
}]
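For reference, applying this shift spec to the first split record above should produce a flat object like:
{
"customer_id": "11111111",
"campaign_id": "456456546546",
"campaign_name": "asdaasdasdad",
"ad_group_id": "456456456546",
"ad_group_name": "asdasdasdasda",
"clicks": "11",
"cost": "43068982",
"impressions": "2079",
"device": "DESKTOP",
"date": "2021-11-22",
"keywords_id": "customers/456456456/incomeRangeViews/456456546~456456456"
}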
==>> MergeContent (here is the problem, which I don't know how to fix)
Merge Strategy: Defragment
Merge Format: Binary Concatenation
Attribute Strategy: Keep Only Common Attributes
Maximum number of Bins: 5 (I tried 10, same result)
Delimiter Strategy: Text
Header: [
Footer: ]
Demarcator: ,
What result do I get?
I get JSON files that each contain only part of the data.
Example: I want 1 JSON file containing all 50k customer_ids, so I can send this data into a BigQuery table and have all the ids under the same field "customer_id".
MergeContent picks up the split JSON files and combines them, but I still get about 10k customer_ids per file, i.e. I end up with 5 files instead of 1 file with 50k customer_ids.
After the MergeContent I use ==>>ConvertRecord with these settings:
Record Reader: JsonTreeReader (Schema Access Strategy: Infer Schema)
Record Writer: CsvRecordWriter (
Schema Write Strategy: Do Not Write Schema
Schema Access Strategy: Inherit Record Schema
CSV Format: Microsoft Excel
Include Header Line: true
Character Set UTF-8
)
==>> UpdateAttribute (custom property filename: ${filename}.csv) ==>> PutGCSObject (putting the data into the Google bucket; this step works fine, I am able to put files there)
With this approach I am UNABLE to send the data to BigQuery. (After MergeContent I tried using PutBigQueryBatch, and used this command in the bq shell to get the schema I need:
bq show --format=prettyjson some_data_set.some_table_in_that_data_set | jq '.schema.fields'
I filled in all the fields as needed. For Load file type I tried NEWLINE_DELIMITED_JSON, or CSV when I had converted the data to CSV. I get no errors, but no data is uploaded into the table.)
What am I doing wrong? I basically want to map the data in such a way that each field's data ends up under the same field name.
The trick you are missing is using Records.
Instead of using X>SplitJson>JoltTransformJson>Merge>Convert>X, try just X>JoltTransformRecord>X with a JSON Reader and a CSV Writer. This skips a lot of inefficiency.
If you really need to split (and you should avoid splitting and merging unless totally necessary), you can use MergeRecord instead - again with a JSON Reader and CSV Writer. This would make your flow X>Split>Jolt>MergeRecord>X.
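As a rough sketch, the record-based flow could look like the following, reusing the shift spec from the question (treat these as starting-point settings, not exact values):
InvokeHTTP ==>> JoltTransformRecord (
Record Reader: JsonTreeReader (Schema Access Strategy: Infer Schema)
Record Writer: CSVRecordSetWriter (Include Header Line: true, Character Set: UTF-8)
Jolt Specification: the shift spec above
) ==>> UpdateAttribute (filename: ${filename}.csv) ==>> PutGCSObject
Because the reader and writer handle the data record-by-record inside one flowfile, no SplitJson, MergeContent, or ConvertRecord processors are needed.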

Cloudant search document by attributes of nested objects

My documents in Cloudant have the following structure:
{
"_id" : "1234",
"name" : "test",
"objects" : [
{
"type" : "TYPE1",
"time" : "1215"
},
{
"type" : "TYPE2",
"time" : "1115"
}
]
}
Now I need to query my documents by a list of types.
Examples:
1) If I query with TYPE1, all documents containing an object with this type should be returned. (The example doc would be returned.)
2) If I query with TYPE1 and TYPE3, all documents which contain either of them should be returned. (The example doc would be returned.)
3) If I query with TYPE3, TYPE4 and TYPE5, all documents which contain any of them should be returned. (The example doc would not be returned.)
What would the code in the _design document look like, and what would my API request look like?
One option is to use Cloudant Search.
Here is a sample design document named types, which indexes each type property in your objects array:
{
"_id": "_design/types",
"views": {},
"language": "javascript",
"indexes": {
"one-of": {
"analyzer": "standard",
"index": "function (doc) {\n for(var i in doc.objects) {\n index(\"type\", doc.objects[i].type); \n }\n}"
}
}
}
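For readability, the escaped index function above is:
function (doc) {
for (var i in doc.objects) {
index("type", doc.objects[i].type);
}
}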
Query examples:
Search for one key (type=val)
GET https://$HOST/$DATABASE/_design/$DDOC/_search/one-of?q=type%3ATYPE1
Search for multiple keys (type=val1 OR type=val2)
GET https://$HOST/$DATABASE/_design/$DDOC/_search/one-of?q=type%3ATYPE1%20OR%20type%3ATYPE2
Search for multiple keys (type=val1 AND type=val2)
GET https://$HOST/$DATABASE/_design/$DDOC/_search/one-of?q=type%3ATYPE1%20AND%20type%3ATYPE2
To include the documents in the response append &include_docs=true.
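For example, the second case from the question (TYPE1 or TYPE3, returning the matching documents themselves) would be:
GET https://$HOST/$DATABASE/_design/types/_search/one-of?q=type%3ATYPE1%20OR%20type%3ATYPE3&include_docs=true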

Mongo adding an object to original object

I am not sure if I am asking the correct question, but I assume this is a basic MongoDB question.
I currently have this:
{
"_id" : ObjectId("57af98d4d71c4efff5304335"),
"fullname" : "test",
"username" : "test",
"email" : "test#gmail.com",
"password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2"
}
and I would like to add an exercise object like this:
{
"_id" : ObjectId("57af98d4d71c4efff5304335"),
"fullname" : "test",
"username" : "test",
"email" : "test#gmail.com",
"password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
"exercises": {
"benchpress",
"rows",
"curls",
}
I am just unsure how to create exercises with the object without using $push, which just opens up an array. I don't want an array, I want an object.
Any help would be greatly appreciated.
An object is a collection of key-value pairs. In your representation of the second document, you have a nested document under the exercises key whose value is an object containing only strings. Don't you see something strange there? An object without keys?
It should probably be an array of strings. Note that an array is itself an object where the key is the numeric index starting from 0 and the value is the string at that position.
(You have an extra comma and a missing curly brace; let's fix that.)
This is the document we wish to see after updating the document.
{
"_id" : ObjectId("57af98d4d71c4efff5304335"),
"fullname" : "test",
"username" : "test",
"email" : "test#gmail.com",
"password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
"exercises": [
"benchpress",
"rows",
"curls"
]
}
Now, back to your question. How can we update the existing document with the exercises field? It's pretty simple. MongoDB has an update method which does exactly that. Since we don't want to replace the entire document but just add fields to it, we should use $set to update specific fields. Fire up the mongo shell and switch to your database using use db-name. Then execute the following command. I assume you have an existing document with id ObjectId("57af98d4d71c4efff5304335"). Note that ObjectId is a BSON datatype.
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, { $set: {"exercises": ["benchpress", "rows", "curls"] } })
This will update the document to:
{
"_id" : ObjectId("57af98d4d71c4efff5304335"),
"fullname" : "test",
"username" : "test",
"email" : "test#gmail.com",
"password" : "$2a$10$Wl29i6FemBrnOKq/ZErSguxlfvqoayZQkaEDirkmDl5O3GDEQjOV2",
"exercises" : [
"benchpress",
"rows",
"curls"
]
}
Here scratch refers to the collection name. The update method takes 3 parameters:
A query to find the document to update.
The update parameter (the modifications to apply). You can either replace the whole document or update just specific parts of it (using $set).
An optional options object, which can tell MongoDB to insert the record if no document matches (upsert: true) or to update multiple documents matching the criteria (multi: true).
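For illustration, here is the same $set update with the options object written out explicitly (both values shown are the defaults):
db.scratch.update(
{ "_id": ObjectId("57af98d4d71c4efff5304335") },
{ $set: { "exercises": ["benchpress", "rows", "curls"] } },
{ upsert: false, multi: false } // upsert: insert if no match; multi: update all matching docs
)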
EXTRA
Warning: If you execute the following in the mongo shell,
db.scratch.update({ "_id" : ObjectId("57af98d4d71c4efff5304335") }, {"exercises": ["benchpress", "rows", "curls"] })
the entire document would be replaced except the _id field. So, the record would be something like this:
{
"_id" : ObjectId("57af98d4d71c4efff5304335"),
"exercises" : [
"benchpress",
"rows",
"curls"
]
}
You should only do this when you are aware of the consequences.
Hope this helps.
For more, see https://docs.mongodb.com/manual/reference/method/db.collection.update/

artifactory rest api fully qualified class search

Is there any way to use a fully qualified class name to search in Artifactory (similar to the class search in the Artifactory web UI)? Based on this documentation, I know I can use a wildcard (*) and the .class file extension like this:
GET /api/search/archive?name=*Logger.class&repos=third-party-releases-local,repo1-cache
But I am looking for a way to use a fully qualified class name, similar to this:
GET /api/search/archive?name=org.apache.log4j.Logger&repos=third-party-releases-local,repo1-cache
but this does not work.
You can use the Artifactory Query Language (AQL) for this.
For example, a query searching for an archive entry called org/apache/log4j/Logger.class in the jcenter-cache repository would be:
items.find({
"repo" : "jcenter-cache",
"archive.entry.name":{"$eq":"Logger.class "},
"archive.entry.path":{"$eq":"org/apache/log4j"}
})
The response would be
{
"results" : [ {
"repo" : "jcenter-cache",
"path" : "org/apache/log4j/com.springsource.org.apache.log4j/1.2.16",
"name" : "com.springsource.org.apache.log4j-1.2.16.jar",
"type" : "file",
"size" : 481202,
"created" : "2015-12-30T20:57:36.305Z",
"created_by" : "admin",
"modified" : "2010-08-04T13:18:06.000Z",
"modified_by" : "admin",
"updated" : "2015-12-30T20:57:36.354Z"
} ],
"range" : {
"start_pos" : 0,
"end_pos" : 1,
"total" : 1
}
}
To run such a query using curl, use the following when the query is stored in a file named aql.txt:
curl -H "content-type: text/plain" -uuser:password --data @aql.txt http://my-artifactory-host/api/search/aql
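To go from a fully qualified class name such as org.apache.log4j.Logger to an AQL query, split it into a path (org/apache/log4j) and an entry name (Logger.class). If you also want to match subpackages, AQL's $match operator accepts wildcards; a sketch (adjust the repository name to yours):
items.find({
"repo" : "jcenter-cache",
"archive.entry.name" : {"$eq" : "Logger.class"},
"archive.entry.path" : {"$match" : "org/apache/log4j*"}
})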

MultiLevel JSON in PIG

I am new to Pig scripting and to working with JSON. I need to parse multi-level JSON files in Pig. Say,
{
"firstName": "John",
"lastName" : "Smith",
"age" : 25,
"address" :
{
"streetAddress": "21 2nd Street",
"city" : "New York",
"state" : "NY",
"postalCode" : "10021"
},
"phoneNumber":
[
{
"type" : "home",
"number": "212 555-1234"
},
{
"type" : "fax",
"number": "646 555-4567"
}
]
}
I am able to parse a single-level JSON file through JsonLoader(), do joins and other operations, and get the desired results with JsonLoader('name:chararray,field1:int .....');
Is it possible to parse the above JSON file using the built-in JsonLoader() function of Pig 0.10.0? If it is, please explain how it is done and how to access fields of the JSON.
You can handle nested JSON loading with Twitter's Elephant Bird: https://github.com/kevinweil/elephant-bird
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
This will parse the JSON into a map (see http://pig.apache.org/docs/r0.11.1/basic.html#map-schema); a JSON array gets parsed into a DataBag of maps.
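A sketch of accessing fields from that map (the alias json and the relation names are illustrative; values are looked up with the # operator):
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);
-- a top-level value, then a value inside the nested address map
b = FOREACH a GENERATE json#'firstName' AS first_name, json#'address'#'city' AS city;
DUMP b;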
It is also possible by creating your own UDF. A simple UDF example is shown at the link below:
http://pig.apache.org/docs/r0.9.1/udf.html#udf-java
C = load 'path' using JsonLoader('firstName:chararray,lastName:chararray,age:int,address:(streetAddress:chararray,city:chararray,state:chararray,postalCode:chararray),phoneNumber:{(type:chararray,number:chararray)}');
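With the schema declared, nested fields can then be accessed directly (an illustrative sketch; the aliases come from the schema above):
-- project a scalar, a field of the address tuple, and flatten the bag of phone numbers
D = FOREACH C GENERATE firstName, address.city, FLATTEN(phoneNumber);
DUMP D;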