Loading `belongsToMany` relation using the `with` method always returns an empty array

I have two models, Person and PersonType, with a belongsToMany relation.
The problem is that when I try to get all the PersonTypes related to a Person using the query().with() method, it always returns an empty array, even though I have the right data in the three tables: people, person_types, and the intermediate person_person_type.
This is the code that is trying to load the people with their types:
const Person = use('App/Models/Person')
const people = await Person.query().with('types')
This is what I got from the code above:
[
{
"id": 3,
"firstName": "Eleandro",
"lastName": "Duzentos",
"types": []
},
{
"id": 5,
"firstName": "Lorem",
"lastName": "Ipsum",
"types": []
}
]
This is what it should return:
[
{
"id": 3,
"firstName": "Eleandro",
"lastName": "Duzentos",
"types": [
{
name: "Requester",
slug: "requester"
},
{
"name": "Provider",
"slug": "provider"
}
]
},
{
"id": 5,
"firstName": "Lorem",
"lastName": "Ipsum",
"types": [
{
"name": "Requester",
"slug": "requester"
}
]
}
]
Below are the models and their database migrations:
Person:
'use strict'
const Model = use('Model')
class Person extends Model {
static get table () {
return 'people'
}
types() {
return this.belongsToMany('App/Models/PersonType')
.withTimestamps()
}
}
module.exports = Person
'use strict'
/** @type {import('@adonisjs/lucid/src/Schema')} */
const Schema = use('Schema')
class CreatePeopleSchema extends Schema {
up () {
this.create('people', (table) => {
table.increments()
table.string('first_name')
table.string('last_name')
table.date('birth_date')
table.string('address')
table.string('identification_number').nullable()
table.integer('naturality_id').unsigned().index()
table
.foreign('naturality_id')
.references('id')
.inTable('municipalities')
.onDelete('cascade')
table.integer('person_genre_id').unsigned().index()
table
.foreign('person_genre_id')
.references('id')
.inTable('person_genres')
.onDelete('cascade')
table.boolean('is_activated').default(false).notNullable()
table.timestamps()
table.timestamp('deleted_at').nullable()
})
}
down () {
this.drop('people')
}
}
module.exports = CreatePeopleSchema
PersonType:
'use strict'
const Model = use('Model')
class PersonType extends Model {
people() {
return this.belongsToMany('App/Models/Person')
.withTimestamps()
}
}
module.exports = PersonType
'use strict'
/** @type {import('@adonisjs/lucid/src/Schema')} */
const Schema = use('Schema')
class CreatePersonTypesSchema extends Schema {
up() {
this.create('person_types', (table) => {
table.increments()
table.string('name').unique()
table.string('slug').unique().index()
table.text('description').nullable()
table.string('icon').nullable()
table.timestamps()
})
}
down() {
this.drop('person_types')
}
}
module.exports = CreatePersonTypesSchema
And the intermediate table:
'use strict'
/** @type {import('@adonisjs/lucid/src/Schema')} */
const Schema = use('Schema')
class PersonPersonTypeSchema extends Schema {
up () {
this.create('person_person_type', (table) => {
table.increments()
table.integer('person_id').unsigned().index()
table
.foreign('person_id')
.references('id')
.inTable('people')
.onDelete('cascade')
table.integer('person_type_id').unsigned().index()
table
.foreign('person_type_id')
.references('id')
.inTable('person_types')
.onDelete('cascade')
table.timestamps()
})
}
down () {
this.drop('person_person_type')
}
}
module.exports = PersonPersonTypeSchema
This is the query Lucid is performing:
select `person_types`.*, `person_person_type`.`person_type_id` as `pivot_person_type_id`, `person_person_type`.`person_id` as `pivot_person_id` from `person_types` inner join `person_person_type` on `person_types`.`id` = `person_person_type`.`person_type_id` where `person_person_type`.`person_id` in (?, ?)
What am I doing wrong?

Use the pivot table name in the association method, and handle the response format using toJSON():
'use strict'
const Model = use('Model')
class Person extends Model {
static get table () {
return 'people'
}
types() {
return this.belongsToMany('App/Models/PersonType')
.pivotTable('person_type')
}
}
module.exports = Person
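For completeness, a minimal usage sketch under the relation above: in Lucid, fetch() executes the query and toJSON() serializes the result, including the loaded types array:
const Person = use('App/Models/Person')
// fetch() runs the query; toJSON() returns plain objects with the types array
const people = await Person.query().with('types').fetch()
return people.toJSON()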

I found the issue.
Since I was using knexSnakeCaseMappers from Objection.js, as suggested here, it was preventing Adonis from performing some operations, such as loading relations.
Just remove it and use snake_case, or try another solution.
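For anyone unsure what to remove: the mappers are typically spread into the knex connection config. A hypothetical config/database.js sketch (not the asker's exact file):
// config/database.js (sketch)
const { knexSnakeCaseMappers } = require('objection')
module.exports = {
  connection: 'mysql',
  mysql: {
    client: 'mysql',
    connection: { /* host, user, password, database */ },
    ...knexSnakeCaseMappers() // removing this line restores Lucid's relation loading
  }
}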

Related

GraphQL pagination partial response with error array

I have a query like the one below:
query {
heroes {
node {
name
}
endCursor
}
}
I am trying to understand how GraphQL can handle errors and return a partial response. I looked at https://github.com/graphql/dataloader/issues/169 and tried to create a resolver like the one below:
{
Query: {
heroes: async (_) => {
const heroesData = await loadHeroesFromDataWarehouse();
return {
endCursor: heroesData.endCursor,
node: heroesData.map(h => h.name === 'hulk' ? new ApolloError('Hulk is too powerful') : h)
}
}
}
}
I was hoping it would resolve to something like below:
{
"errors": [
{
"message": "Hulk is too powerful",
"path": [
"heroes", "1"
],
}
],
"data": {
"heroes": [
{
"name": "spiderman"
},
null,
{
"name": "ironman"
}
]
}
}
but it fails completely, making heroes itself null, like below:
{
"errors": [
{
"message": "Hulk is too powerful",
"path": [
"heroes"
],
}
],
"data": {
"heroes": null
}
}
How can I make the resolver return the desired partial response?
Found the solution: basically, we need a resolver to resolve the edge model itself:
{
Query: {
heroes: (_) => loadHeroesFromDataWarehouse()
},
HeroesEdge: {
node: async (hero) => hero.name === 'hulk' ? new ApolloError('Hulk is too powerful') : hero
}
}
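Why the edge resolver works: GraphQL null propagation stops at the nearest nullable field, so node must be declared nullable for an error to null only that entry instead of the whole list. A minimal sketch of the SDL this resolver map assumes (type and field names are hypothetical; endCursor omitted for brevity):
const typeDefs = `
  type Hero {
    name: String
  }
  type HeroesEdge {
    # nullable: an error resolving one node nulls only this entry
    node: Hero
  }
  type Query {
    heroes: [HeroesEdge]
  }
`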

Inserting data in vuex-orm database that is already normalized

Assuming the data I receive is already normalized, or at least the relations are, how can this data be inserted into the vuex-orm database?
Example JSON data:
{
"carmodel": [
{
"id": 1,
"title": "M3",
"manufacturer_id": 1
},
{
"id": 2,
"title": "a-class"
"manufacturer_id": 2
}
],
"manufacturer": [
{
"id": 1,
"title": "BMW"
},
{
"id": 2,
"title": "Mercedes"
}
]
}
The Manufacturer and CarModel entities are inserted like this:
Manufacturer.insert({ data: response.data.manufacturer })
CarModel.insert({ data: response.data.carmodel })
This example model will not work:
import { Model } from '@vuex-orm/core'
import Manufacturer from '@/models/Manufacturer'
export default class CarModel extends Model {
static entity = 'carModels'
static fields () {
return {
id: this.attr(null),
title: this.string(''),
manufacturer: this.hasOne(Manufacturer, 'manufacturer_id')
}
}
}
OK, I think I got it. Instead of this.hasOne, I have to use belongsTo with the manufacturer_id from the same model:
import { Model } from '@vuex-orm/core'
import Manufacturer from '@/models/Manufacturer'
export default class CarModel extends Model {
static entity = 'carModels'
static fields () {
return {
id: this.attr(null),
title: this.string(''),
manufacturer_id: this.attr(null),
manufacturer: this.belongsTo(Manufacturer, 'manufacturer_id')
}
}
}
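With the belongsTo field in place, a minimal usage sketch (assuming both models are registered with the vuex-orm database): insert each normalized slice under its own model, then query with the relation:
// Insert the already-normalized payload per entity
Manufacturer.insert({ data: response.data.manufacturer })
CarModel.insert({ data: response.data.carmodel })
// Read car models together with their manufacturer
const cars = CarModel.query().with('manufacturer').get()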

How to configure Typegoose with GraphQL to reference subdocument to part of another document? [duplicate]

I'm pretty new to Mongoose and MongoDB in general, so I'm having a difficult time figuring out whether something like this is possible:
Item = new Schema({
id: Schema.ObjectId,
dateCreated: { type: Date, default: Date.now },
title: { type: String, default: 'No Title' },
description: { type: String, default: 'No Description' },
tags: [ { type: Schema.ObjectId, ref: 'ItemTag' }]
});
ItemTag = new Schema({
id: Schema.ObjectId,
tagId: { type: Schema.ObjectId, ref: 'Tag' },
tagName: { type: String }
});
var query = Models.Item.find({});
query
.desc('dateCreated')
.populate('tags')
.where('tags.tagName').in(['funny', 'politics'])
.run(function(err, docs){
// docs is always empty
});
Is there a better way to do this?
Edit
Apologies for any confusion. What I'm trying to do is get all Items that contain either the funny tag or the politics tag.
Edit
Document without where clause:
[{
_id: 4fe90264e5caa33f04000012,
dislikes: 0,
likes: 0,
source: '/uploads/loldog.jpg',
comments: [],
tags: [{
itemId: 4fe90264e5caa33f04000012,
tagName: 'movies',
tagId: 4fe64219007e20e644000007,
_id: 4fe90270e5caa33f04000015,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
},
{
itemId: 4fe90264e5caa33f04000012,
tagName: 'funny',
tagId: 4fe64219007e20e644000002,
_id: 4fe90270e5caa33f04000017,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
}],
viewCount: 0,
rating: 0,
type: 'image',
description: null,
title: 'dogggg',
dateCreated: Tue, 26 Jun 2012 00:29:24 GMT
}, ... ]
With the where clause, I get an empty array.
With a modern MongoDB (greater than 3.2) you can use $lookup as an alternative to .populate() in most cases. This also has the advantage of actually doing the join "on the server", as opposed to what .populate() does, which is actually "multiple queries" to "emulate" a join.
So .populate() is not really a "join" in the sense of how a relational database does it. The $lookup operator, on the other hand, actually does the work on the server, and is more or less analogous to a "LEFT JOIN":
Item.aggregate(
[
{ "$lookup": {
"from": ItemTags.collection.name,
"localField": "tags",
"foreignField": "_id",
"as": "tags"
}},
{ "$unwind": "$tags" },
{ "$match": { "tags.tagName": { "$in": [ "funny", "politics" ] } } },
{ "$group": {
"_id": "$_id",
"dateCreated": { "$first": "$dateCreated" },
"title": { "$first": "$title" },
"description": { "$first": "$description" },
"tags": { "$push": "$tags" }
}}
],
function(err, result) {
// "tags" is now filtered by condition and "joined"
}
)
N.B. The .collection.name here actually evaluates to the "string" that is the actual name of the MongoDB collection as assigned to the model. Since mongoose "pluralizes" collection names by default and $lookup needs the actual MongoDB collection name as an argument ( since it's a server operation ), then this is a handy trick to use in mongoose code, as opposed to "hard coding" the collection name directly.
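For instance, with the ItemTag model defined in the working example below:
// mongoose lowercases and pluralizes the model name by default
console.log(ItemTag.collection.name) // "itemtags"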
Whilst we could also use $filter on arrays to remove the unwanted items, this is actually the most efficient form due to Aggregation Pipeline Optimization for the special condition of a $lookup followed by both an $unwind and a $match condition.
This actually results in the three pipeline stages being rolled into one:
{ "$lookup" : {
"from" : "itemtags",
"as" : "tags",
"localField" : "tags",
"foreignField" : "_id",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"tagName" : {
"$in" : [
"funny",
"politics"
]
}
}
}}
This is highly optimal as the actual operation "filters the collection to join first", then it returns the results and "unwinds" the array. Both methods are employed so the results do not break the BSON limit of 16MB, which is a constraint that the client does not have.
The only problem is that it seems "counter-intuitive" in some ways, particularly when you want the results in an array, but that is what the $group is for here, as it reconstructs to the original document form.
It's also unfortunate that we simply cannot at this time actually write $lookup in the same eventual syntax the server uses. IMHO, this is an oversight to be corrected. But for now, simply using the sequence will work and is the most viable option with the best performance and scalability.
Addendum - MongoDB 3.6 and upwards
Though the pattern shown here is fairly optimized, due to how the other stages get rolled into the $lookup, it does have one failing: the "LEFT JOIN" that is normally inherent to both $lookup and the actions of populate() is negated by the "optimal" usage of $unwind here, which does not preserve empty arrays. You can add the preserveNullAndEmptyArrays option, but this negates the "optimized" sequence described above and essentially leaves all three stages intact, which would normally be combined in the optimization.
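For reference, the option goes on the $unwind stage; a small sketch of the trade-off just described:
// Preserves items whose "tags" lookup produced an empty array (LEFT JOIN semantics),
// but stops the $lookup + $unwind + $match stages from being coalesced
{ "$unwind": { "path": "$tags", "preserveNullAndEmptyArrays": true } }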
MongoDB 3.6 expands with a "more expressive" form of $lookup allowing a "sub-pipeline" expression. Which not only meets the goal of retaining the "LEFT JOIN" but still allows an optimal query to reduce results returned and with a much simplified syntax:
Item.aggregate([
{ "$lookup": {
"from": ItemTags.collection.name,
"let": { "tags": "$tags" },
"pipeline": [
{ "$match": {
"tags": { "$in": [ "politics", "funny" ] },
"$expr": { "$in": [ "$_id", "$$tags" ] }
}}
]
}}
])
The $expr used in order to match the declared "local" value with the "foreign" value is actually what MongoDB does "internally" now with the original $lookup syntax. By expressing in this form we can tailor the initial $match expression within the "sub-pipeline" ourselves.
In fact, as a true "aggregation pipeline" you can do just about anything you can do with an aggregation pipeline within this "sub-pipeline" expression, including "nesting" the levels of $lookup to other related collections.
Further usage is a bit beyond the scope of what the question here asks, but in relation to even "nested population", the new usage pattern of $lookup allows this to be much the same, and a "lot" more powerful in its full usage.
Working Example
The following gives an example using a static method on the model. Once that static method is implemented the call simply becomes:
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
Or, enhanced to be a bit more modern, it becomes:
let results = await Item.lookup({
path: 'tags',
query: { 'tagName' : { '$in': [ 'funny', 'politics' ] } }
})
Making it very similar to .populate() in structure, but it's actually doing the join on the server instead. For completeness, the usage here casts the returned data back to mongoose document instances, according to both the parent and child cases.
It's fairly trivial and easy to adapt or just use as is for most common cases.
N.B. The use of async here is just for brevity of running the enclosed example. The actual implementation is free of this dependency.
const async = require('async'),
mongoose = require('mongoose'),
Schema = mongoose.Schema;
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
mongoose.connect('mongodb://localhost/looktest');
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt,callback) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
this.aggregate(pipeline,(err,result) => {
if (err) return callback(err);
result = result.map(m => {
m[opt.path] = m[opt.path].map(r => rel(r));
return this(m);
});
callback(err,result);
});
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
function log(body) {
console.log(JSON.stringify(body, undefined, 2))
}
async.series(
[
// Clean data
(callback) => async.each(mongoose.models,(model,callback) =>
model.remove({},callback),callback),
// Create tags and items
(callback) =>
async.waterfall(
[
(callback) =>
ItemTag.create([{ "tagName": "movies" }, { "tagName": "funny" }],
callback),
(tags, callback) =>
Item.create({ "title": "Something","description": "An item",
"tags": tags },callback)
],
callback
),
// Query with our static
(callback) =>
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
],
(err,results) => {
if (err) throw err;
let result = results.pop();
log(result);
mongoose.disconnect();
}
)
Or a little more modern for Node 8.x and above with async/await and no additional dependencies:
const { Schema } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [opt.path]: m[opt.path].map(r => rel(r)) })
));
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.create(
["movies", "funny"].map(tagName =>({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
const result = (await Item.lookup({
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
mongoose.disconnect();
} catch (e) {
console.error(e);
} finally {
process.exit()
}
})()
And from MongoDB 3.6 and upward, even without the $unwind and $group building:
const { Schema, Types: { ObjectId } } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
},{ timestamps: true });
itemSchema.statics.lookup = function({ path, query }) {
let rel =
mongoose.model(this.schema.path(path).caster.options.ref);
// MongoDB 3.6 and up $lookup with sub-pipeline
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": path,
"let": { [path]: `$${path}` },
"pipeline": [
{ "$match": {
...query,
"$expr": { "$in": [ "$_id", `$$${path}` ] }
}}
]
}}
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [path]: m[path].map(r => rel(r)) })
));
};
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.insertMany(
["movies", "funny"].map(tagName => ({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
let result = (await Item.lookup({
path: 'tags',
query: { 'tagName': { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
await mongoose.disconnect();
} catch(e) {
console.error(e)
} finally {
process.exit()
}
})()
What you are asking for isn't directly supported, but it can be achieved by adding another filter step after the query returns.
First, .populate( 'tags', null, { tagName: { $in: ['funny', 'politics'] } } ) is definitely what you need to do to filter the tags documents. Then, after the query returns, you'll need to manually filter out documents that don't have any tags docs that matched the populate criteria. Something like:
query....
.exec(function(err, docs){
docs = docs.filter(function(doc){
return doc.tags.length;
})
// do stuff with docs
});
Try replacing
.populate('tags').where('tags.tagName').in(['funny', 'politics'])
with
.populate( 'tags', null, { tagName: { $in: ['funny', 'politics'] } } )
Update: Please take a look at the comments - this answer does not correctly match the question, but maybe it answers other questions of users who come across it (I think that is why it has upvotes), so I will not delete this "answer":
First: I know this question is really outdated, but I searched for exactly this problem and this SO post was Google entry #1. So I implemented the docs.filter version (accepted answer), but as I read in the mongoose v4.6.0 docs, we can now simply use:
Item.find({}).populate({
path: 'tags',
match: { tagName: { $in: ['funny', 'politics'] }}
}).exec((err, items) => {
console.log(items[0].tags)
// contains only tags where tagName is 'funny' or 'politics'
})
Hope this helps future search engine users.
After having the same problem myself recently, I've come up with the following solution:
First, find all ItemTags where tagName is either 'funny' or 'politics', and return an array of ItemTag _ids.
Then, find Items which contain all of those ItemTag _ids in their tags array:
ItemTag
.find({ tagName : { $in : ['funny','politics'] } })
.lean()
.distinct('_id')
.exec((err, itemTagIds) => {
if (err) { console.error(err); }
Item.find({ tags: { $all: itemTagIds } }, (err, items) => {
console.log(items); // Items filtered by tagName
});
});
@aaronheckmann's answer worked for me, but I had to replace return doc.tags.length; with return doc.tags != null;, because that field contains null if it doesn't match the conditions written inside populate.
So the final code:
query....
.exec(function(err, docs){
docs = docs.filter(function(doc){
return doc.tags != null;
})
// do stuff with docs
});

BigQueryIO.write DynamicDestination withCreateDisposition - Clustering fields

Using BigQueryIO.write().withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED) along with DynamicDestinations, we can write to dynamic tables, and if a table does not exist, it will be created from the TableSchema provided by DynamicDestinations.
I am not able to add clustering fields as part of the TableSchema model, because it does not have such a feature.
How can we use DynamicDestinations with a TableSchema that has clustering fields?
The BigQuery API is one way to add clustering fields to a table.
Using this link you can test the API before writing your code:
function execute() {
return gapi.client.bigquery.jobs.insert({
"resource": {
"configuration": {
"query": {
"clustering": {
"fields": [
"Field1",
"Field2"
]
},
"query": "select 5",
"destinationTable": {
"datasetId": "Id1",
"projectId": "Project1",
"tableId": "T1"
}
}
}
}
})
.then(function(response) {
// Handle the results here (response.result has the parsed body).
console.log("Response", response);
},
function(err) { console.error("Execute error", err); });
}
And this is a JS example of how to manipulate the parameters:
static setConfiguration(params, configuration) {
//To have a destination table we MUST have a tableId
if (params.destinationTable && params.destinationTable.tableId) {
configuration.query.destinationTable = params.destinationTable
}
if (params.clusteringFields) {
configuration.query.clustering = {fields: params.clusteringFields}
}
if (params.timePartitioning) {
configuration.query.timePartitioning = {
type: 'DAY',
field: params.timePartitioning
}
}
if (params.writeDisposition) {
configuration.query.writeDisposition = params.writeDisposition
}
if (params.queryPriority && params.queryPriority.toUpperCase() === "BATCH") {
configuration.query.priority = "BATCH"
}
if (params.useCache === false) {
configuration.query.useQueryCache = params.useCache
}
if (params.maxBillBytes) {
configuration.query.maximumBytesBilled = params.maxBillBytes
}
if (params.maxBillTier) {
configuration.query.maximumBillingTier = params.maxBillTier
}
}
As of version 2.16.0, BigQueryIO does provide an option to add clustering in dynamic destinations.
// Inside a DynamicDestinations implementation; `clustering` is e.g.
// new Clustering().setFields(Arrays.asList("field1", "field2"))
@Override
public TableDestination getTable(String eventName) {
  return new TableDestination(tableSpec,
      tableDescription, timePartitioning, clustering);
}
Notice that the 4th parameter is clustering, which you can use.

Elastic Search when to add dynamic mappings

I've been having trouble with Elastic Search (ES) dynamic mappings. It seems like I'm in a catch-22. https://www.elastic.co/guide/en/elasticsearch/guide/current/custom-dynamic-mapping.html
The main goal is to store everything that comes into ES as a string.
What I've tried:
In ES you can't create a dynamic mapping until the index has been created. Okay, makes sense.
I can't create an empty index, so if the first item sent into the index is not a string, I can't re-assign it... I won't know what type of object the first item in the index will be; it could be any type, due to how the app accepts a variety of objects/events.
So if I can't create the mapping ahead of time, I can't insert an empty index to create the mapping, and I can't change the mapping after the fact, how do I deal with the first item if it's NOT a string?
Here's what I'm currently doing (using the JavaScript client):
createESIndex = function (esClient){
esClient.index({
index: 'timeline-2015-11-21',
type: 'event',
body: event
},function (error, response) {
if (error) {
logger.log(logger.SEVERITY.ERROR, 'acceptEvent elasticsearch create failed with: '+ error + " req:" + JSON.stringify(event));
console.log(logger.SEVERITY.ERROR, 'acceptEvent elasticsearch create failed with: '+ error + " req:" + JSON.stringify(event));
res.status(500).send('Error saving document');
} else {
res.status(200).send('Accepted');
}
});
}
esClientLookup.getClient( function(esClient) {
esClient.indices.putTemplate({
name: "timeline-mapping-template",
body:{
"template": "timeline-*",
"mappings": {
"event": {
"dynamic_templates": [
{ "timestamp-only": {
"match": "#timestamp",
"match_mapping_type": "date",
"mapping": {
"type": "date",
}
}},
{ "all-others": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "string",
}
}
}
]
}
}
}
}).then(function(res){
console.log("put template response: " + JSON.stringify(res));
createESIndex(esClient);
}, function(error){
console.log(error);
res.status(500).send('Error saving document');
});
});
Index templates to the rescue! That's exactly what you need. The idea is to create a template for your index, and as soon as you wish to store a document in that index, ES will create it for you with the mapping you gave (even dynamic ones):
curl -XPUT localhost:9200/_template/my_template -d '{
"template": "index_name_*",
"settings": {
"number_of_shards": 1
},
"mappings": {
"type_name": {
"dynamic_templates": [
{
"strings": {
"match": "*",
"match_mapping_type": "*",
"mapping": {
"type": "string"
}
}
}
],
"properties": {}
}
}
}'
Then when you index anything in an index whose name matches index_name_*, the index will be created with the dynamic mapping above.
For instance:
curl -XPUT localhost:9200/index_name_1/type_name/1 -d '{
"one": 1,
"two": "two",
"three": true
}'
That will create a new index called index_name_1 with a mapping type for type_name where all properties are string. You can verify that with:
curl -XGET localhost:9200/index_name_1/_mapping/type_name
Response:
{
"index_name_1" : {
"mappings" : {
"type_name" : {
"dynamic_templates" : [ {
"strings" : {
"mapping" : {
"type" : "string"
},
"match" : "*",
"match_mapping_type" : "*"
}
} ],
"properties" : {
"one" : {
"type" : "string"
},
"three" : {
"type" : "string"
},
"two" : {
"type" : "string"
}
}
}
}
}
}
Note that if you're willing to do this via the JavaScript API, you can use the indices.putTemplate call.
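A minimal sketch of that call, mirroring the curl template body above (assuming the same legacy elasticsearch JS client as in the question):
esClient.indices.putTemplate({
  name: 'my_template',
  body: {
    template: 'index_name_*',
    settings: { number_of_shards: 1 },
    mappings: {
      type_name: {
        dynamic_templates: [
          { strings: {
            match: '*',
            match_mapping_type: '*',
            mapping: { type: 'string' }
          }}
        ],
        properties: {}
      }
    }
  }
}).then(
  res => console.log('put template response: ' + JSON.stringify(res)),
  err => console.error(err)
)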
Another approach is to manage the mappings strictly in code, as in this TypeScript example:
user.ts
export const user = {
email: {
type: 'text',
},
};
activity.ts
export const activity = {
date: {
type: 'text',
},
};
common.ts
export const common = {
name: {
type: 'text',
},
};
import { Client } from '@elastic/elasticsearch';
import { user } from './user';
import { activity } from './activity';
import { common } from './common';
export class UserDataFactory {
private schema = {
...user,
...activity,
...common,
relation_type: {
type: 'join',
eager_global_ordinals: true,
relations: {
parent: ['activity'],
},
},
};
constructor(private client: Client) {
Object.setPrototypeOf(this, UserDataFactory.prototype);
}
async create() {
const settings = {
settings: {
analysis: {
normalizer: {
useLowercase: {
filter: ['lowercase'],
},
},
},
},
mappings: {
properties: this.schema,
},
};
const { body } = await this.client.indices.exists({
index: ElasticIndex.UserDataFactory,
});
// Create the index only if it doesn't already exist
if (!body) {
  await this.client.indices.create({
    index: ElasticIndex.UserDataFactory,
  });
}
await this.client.indices.close({ index: ElasticIndex.UserDataFactory });
await this.client.indices.putSettings({
index: ElasticIndex.UserDataFactory,
body: settings,
});
await this.client.indices.open({
index: ElasticIndex.UserDataFactory,
});
await this.client.indices.putMapping({
index: ElasticIndex.UserDataFactory,
body: {
dynamic: 'strict',
properties: {
...this.schema,
},
},
});
}
}
wrapper.ts
class ElasticWrapper {
private _client: Client = new Client({
node: process.env.elasticsearch_node,
auth: {
username: 'elastic',
password: process.env.elasticsearch_password || 'changeme',
},
ssl: {
ca: process.env.elasticsearch_certificate,
rejectUnauthorized: false,
},
});
get client() {
return this._client;
}
}
export const elasticWrapper = new ElasticWrapper();
index.ts
new UserDataFactory(elasticWrapper.client).create();