I've created my first GET API call that does a fuzzy search against an Elasticsearch index, and so far everything works well (w00t w00t). I'm stuck on one piece though; I'd like to parse out some of the information returned from Elasticsearch, specifically in the _source section. I've done some recon and discovered that you can use a mapping function to do this, but I can't get the results to display on the page or in the console. So the question is: if I just wanted to pull out the name field from _source, is that possible, and is there a preferred way to do it?
router.get('/suggest/:search', function(req, res, next) {
  const searchTerm = req.params.search;
  client.search({
    index: 'product_pipeline_import',
    type: 'jdbc-demo',
    body: {
      suggest: {
        productSuggest: {
          prefix: searchTerm,
          completion: {
            field: 'name',
            fuzzy: {
              fuzziness: 2
            }
          }
        }
      }
    }
  }).then(function(resp) {
    var hits = resp.hits.hits.map(function(hit) {
      console.log(hit._source);
    });
  }, function(err) {
    console.trace(err.message);
  });
});
What I'm trying to parse from:
{
  "text": "Philips - Brilliance 27\" IPS LED HD Monitor - Text",
  "_index": "test-index",
  "_type": "products",
  "_id": "Philips - Brilliance 27\" IPS LED HD Monitor - Textured Black",
  "_score": 5,
  "_source": {
    "name": "Philips - Brilliance 27\" IPS LED HD Monitor - Textured Black",
    "#version": "1",
    "model": "272P4APJKEB",
    "#timestamp": "2017-11-19T20:56:11.537Z",
    "type": "products",
    "manufacturer": "Philips"
  }
}
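For what it's worth, here is a minimal, untested sketch of the .then handler, meant to run inside the route handler above (so client and res are in scope). It assumes the objects shown above come back under resp.suggest.productSuggest[0].options, which is where a completion suggester places its matches; for an ordinary (non-suggest) query you would map over resp.hits.hits instead.

client.search({ /* same request as above */ }).then(function(resp) {
  // The completion suggester puts its matches under resp.suggest, keyed by
  // the suggester name used in the request body ('productSuggest' here).
  var options = resp.suggest.productSuggest[0].options;
  var names = options.map(function(option) {
    return option._source.name;   // pull only the name field out of _source
  });
  console.log(names);
  res.json(names);                // send just the names back to the caller
}, function(err) {
  console.trace(err.message);
});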
I have a situation where I want to get the full data from the backend as a CSV file. I have already prepared the backend for that, but normally the front-end state (the filters) doesn't reach the backend unless I send a request, so I solved the problem by mimicking the process of showing all data, but via a custom button and a plain GET request (not an ajax request). Note that I am using serverSide: true in DataTables.
I prepared the backend to receive a request like "Show All", but I want that link to be sent by a custom button ("Export All"), not by the show process itself (as in the picture below), because showing all the data on the page is not practical at all.
This is the code for the custom button
{
text: "Export All",
action: function (e, dt, node, config) {
// get the backend file here
},
},
So, how can I send the same request that "Show All" sends, but from a custom button? I have prepared the server to respond with the CSV file; I just need a way to get the same link that "Show All" uses and request it with a plain GET (not ajax).
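For reference, a rough sketch of what the custom button's action could do to trigger a plain (non-ajax) GET download. The /export-all URL is only a placeholder for whatever route you prepared on the backend; dt.ajax.params() re-sends the parameters the table last submitted (filters, ordering, etc.), in case the export should respect them.

{
    text: "Export All",
    action: function (e, dt, node, config) {
        // Build a query string from the table's last ajax parameters, then
        // navigate to the export route so the browser downloads the CSV
        // directly instead of going through ajax.
        var params = $.param(dt.ajax.params() || {});
        window.location.href = '/export-all' + (params ? '?' + params : '');
    },
},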
If you are using serverSide: true, that should mean you have too much data to use the default (serverSide: false) - because the browser/DataTables cannot handle the volume. For this reason I would say you should also not try to use the browser to generate a full export - it's going to be too much data (otherwise, why did you choose serverSide: true?).
Instead, use a server-side export utility - not DataTables.
But if you still want to pursue this approach, you can build a custom button which downloads the entire data set to DataTables (in your browser) and then exports that complete data to Excel.
Full Disclosure:
The following approach is inspired by the following DataTables forum post:
Customizing the data from export buttons
The following approach requires you to have a separate REST endpoint which delivers the entire data set as a JSON response (by contrast, the standard response should only be one page of data for the actual table data display and pagination.)
How you set up this endpoint is up to you (in Laravel, in your case).
Step 1: Create a custom button:
I tested with Excel, but you can do CSV, if you prefer.
buttons: [
{
extend: 'excelHtml5', // or 'csvHtml5'
text: 'All Data to Excel', // or CSV if you prefer
exportOptions: {
customizeData: function (d) {
var exportBody = GetDataToExport();
d.body.length = 0;
d.body.push.apply(d.body, exportBody);
}
}
}
],
Step 2: The export function, used by the above button:
function GetDataToExport() {
var jsonResult = $.ajax({
url: '[your_GET_EVERYTHING_url_goes_here]',
success: function (result) {},
async: false
});
var exportBody = jsonResult.responseJSON.data;
return exportBody.map(function (el) {
return Object.keys(el).map(function (key) {
return el[key]
});
});
}
In the above code, my assumption is that the JSON response has the standard DataTables object structure - so, something like:
{
"data": [
{
"id": "1",
"name": "Tiger Nixon",
"position": "System Architect",
"salary": "$320,800",
"start_date": "2011/04/25",
"office": "Edinburgh",
"extn": "5421"
},
{
"id": "2",
"name": "Garrett Winters",
"position": "Accountant",
"salary": "$170,750",
"start_date": "2011/07/25",
"office": "Tokyo",
"extn": "8422"
},
{
"id": "3",
"name": "Ashton Cox",
"position": "Junior Technical Author",
"salary": "$86,000",
"start_date": "2009/01/12",
"office": "San Francisco",
"extn": "1562"
}
]
}
So, it's an object, containing a data array.
The DataTables customizeData function is what controls writing this complete JSON to the Excel file.
Overall, your DataTables code will look something like this:
$(document).ready(function() {
$('#example').DataTable( {
serverSide: true,
dom: 'Brftip',
buttons: [
{
extend: 'excelHtml5',
text: 'All Data to Excel',
exportOptions: {
customizeData: function (d) {
var exportBody = GetDataToExport();
d.body.length = 0;
d.body.push.apply(d.body, exportBody);
}
}
}
],
ajax: {
url: "[your_SINGLE_PAGE_url_goes_here]"
},
"columns": [
{ "title": "ID", "data": "id" },
{ "title": "Name", "data": "name" },
{ "title": "Position", "data": "position" },
{ "title": "Salary", "data": "salary" },
{ "title": "Start Date", "data": "start_date" },
{ "title": "Office", "data": "office" },
{ "title": "Extn.", "data": "extn" }
]
} );
} );
function GetDataToExport() {
var jsonResult = $.ajax({
url: '[your_GET_EVERYTHING_url_goes_here]',
success: function (result) {},
async: false
});
var exportBody = jsonResult.responseJSON.data;
return exportBody.map(function (el) {
return Object.keys(el).map(function (key) {
return el[key]
});
});
}
Just to repeat my initial warning: this is probably a bad idea if you really needed to use serverSide: true because of the volume of data you have.
Use a server-side export tool instead - I'm sure Laravel/PHP has good support for generating Excel files.
I have installed the strapi-starter-blog locally and I'm trying to understand how I can query an article by ID (or slug). When I open the GraphQL Playground, I can get all the articles using:
query Articles {
articles {
id
title
content
image {
url
}
category {
name
}
}
}
The response is:
{
"data": {
"articles": [
{
"id": "1",
"title": "Thanks for giving this Starter a try!",
"content": "\n# Thanks\n\nWe hope that this starter will make you want to discover Strapi in more details.\n\n## Features\n\n- 2 Content types: Article, Category\n- Permissions set to 'true' for article and category\n- 2 Created Articles\n- 3 Created categories\n- Responsive design using UIkit\n\n## Pages\n\n- \"/\" display every articles\n- \"/article/:id\" display one article\n- \"/category/:id\" display articles depending on the category",
"image": {
"url": "/uploads/blog_header_network_7858ad4701.jpg"
},
"category": {
"name": "news"
}
},
{
"id": "2",
"title": "Enjoy!",
"content": "Have fun!",
"image": {
"url": "/uploads/blog_header_balloon_32675098cf.jpg"
},
"category": {
"name": "trends"
}
}
]
}
}
But when I try to get an article by its ID with a variable, like here in the GitHub repo's code, in the GraphQL Playground with the following
Query:
query Articles($id: ID!) {
articles(id: $id) {
id
title
content
image {
url
}
category {
name
}
}
}
Variables:
{
"id": 1
}
I get an error:
...
"message": "Unknown argument \"id\" on field \"articles\" of type \"Query\"."
...
What is the difference, and why can't I get the data like in the example from the GitHub repo?
Thanks for your help.
It's the difference between the articles and article queries. If you use the singular one, you can pass the ID as an argument.
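For example (untested, assuming the starter's generated schema exposes the singular article(id: ID!) query, as Strapi normally does for each content type):

query Article($id: ID!) {
  article(id: $id) {
    id
    title
    content
    image {
      url
    }
    category {
      name
    }
  }
}

with the same variables as before, e.g. { "id": 1 }.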
I'm trying to read emails returned by the Gmail API.
I have trouble accessing all the "parts", and I don't have a good way to traverse the response. I'm also not sure how many parts can exist, so I can't be certain I'm reading the different email responses properly. I've shortened the response below...
{ "payload": { "mimeType": "multipart/mixed", "filename": "",
], "body": { "size": 0 }, "parts": [ {
"body": {
"size": 0
},
"parts": [
{
"partId": "0.0",
"mimeType": "text/plain",
"filename": "",
"headers": [
{
"name": "Content-Type",
"value": "text/plain; charset=\"us-ascii\""
},
{
"name": "Content-Transfer-Encoding",
"value": "quoted-printable"
}
],
"body": {
"size": 2317,
"data": "RGVhciBNSVQgQ2x1YiBWb2x1bnRlZXJzIGluIEFzaWEsDQoNCkJ5IG5vdyBlYWNoIG9mIHlvdSBzaG91bGQgaGF2ZSByZWNlaXZlZCBpbnZpdGF0aW9ucyB0byB0aGUgcmVjZXB0aW9ucyBpbiBib3RoIFNpbmdhcG9yZSBhbmQgSG9uZyBLb25nIHdpdGggUHJlc2lkZW50IFJlaWYgb24gTm92ZW1iZXIgNyBhbmQgTm92ZW1iZXIg"
}
},
{
"partId": "0.1",
"mimeType": "text/html",
"filename": "",
"headers": [
{
"name": "Content-Type",
"value": "text/html; charset=\"us-ascii\""
},
{
"name": "Content-Transfer-Encoding",
"value": "quoted-printable"
}
],
"body": {
"size": 9116,
"data": "PGh0bWwgeG1sbnM6dj0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTp2bWwiIHhtbG5zOm89InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOm9mZmljZSIgeG1sbnM6dz0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6d29yZCIgeG1sbnM6bT0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9vZmZpY2UvMjA"
}
}
]
},
{
"partId": "1",
"mimeType": "text/plain",
"filename": "",
"body": {
"size": 411,
"data": "X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NClRoYW5rIHlvdSBmb3IgYWxsb3dpbmcgdXMgdG8gcmVhY2ggeW91IGJ5IGVtYWlsLCB0aGUgbW9zdCBpbW1lZGlhdGUgbWVhbnMgZm9yIHNoYXJpbmcgaW5mb3JtYXRpb24gd2l0aCBNSVQgYWx1bW5pLiANCklmIHlvdSB3b3VsZCBsaWtlIHRvIHVuc3Vic2NyaWJlIGZyb20gdGhpcyBtYWlsaW5nIGxpc3Qgc2VuZCBhIGJsYW5rIGVtYWlsIHRvIGxpc3RfdW5zdWJzY3JpYmVAYWx1bS5taXQuZWR1IGFuZCBwdXQgdGhlIGxpc3QgbmFtZSBpbiB0aGUgc3ViamVjdCBsaW5lLg0KRm9yIGV4YW1wbGU6DQpUbzogbGlzdF91bnN1YnNjcmliZUBhbHVtLm1pdC5lZHUNCkNjOg0KU3ViamVjdDogYXNpYW9mZg0K"
}
}
]
}
}
Is there something I'm missing?
A MIME message is not just an array; it's a full-blown tree structure, so you'll have to traverse it to handle it correctly. Luckily, JSON parsers are plentiful and the problem is easily handled with recursion. Many languages also have very useful email-parsing libraries that make accessing the traditional parts (e.g. the text/plain or text/html displayable part, or attachments) not too laborious.
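As an illustration only (the names here are made up, not part of the Gmail client): a small recursive walker that visits every part in the payload tree and collects the decoded text/plain and text/html bodies, however deeply they are nested.

function collectBodies(part, found) {
  found = found || { 'text/plain': [], 'text/html': [] };
  if (part.body && part.body.data && found[part.mimeType]) {
    // Gmail returns base64url-encoded data, so normalise it before decoding.
    var b64 = part.body.data.replace(/-/g, '+').replace(/_/g, '/');
    found[part.mimeType].push(atob(b64));
  }
  // Recurse into any child parts (multipart containers have a parts array).
  (part.parts || []).forEach(function(child) {
    collectBodies(child, found);
  });
  return found;
}

// Usage: var bodies = collectBodies(message.payload);
// bodies['text/plain'] and bodies['text/html'] then hold the decoded bodies.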
You'll have to set up walker functions to traverse the JSON and pick out the bits you are after. Here is part of what I wrote; it may help you jumpstart your code. NOTE: this runs inside WordPress, hence the jQuery(document).ready wrapper. It's not needed if you don't have to use jQuery inside WordPress.
function makeApiCall() {
gapi.client.load('gmail', 'v1', function() {
//console.log('inside call: '+myquery);
var request = gapi.client.gmail.users.messages.list({
'userId': 'me',
'q': myquery
});
request.execute(function(resp) {
jQuery(document).ready(function($) {
//console.log(resp);
//$('.ASAP-emailhouse').height(300);
$.each(resp.messages, function(index, value){
messageId = value.id;
var messagerequest = gapi.client.gmail.users.messages.get({
'userId': 'me',
'id': messageId
});//end var message request
messagerequest.execute(function(messageresp) {
//console.log(messageresp);
$.each(messageresp, responsewalker);
function responsewalker(key, response){
messagedeets={};
$.each(messageresp.payload.headers, headerwalker);
function headerwalker(headerkey, header){
if(header.name =='Date'){
d = new Date(header.value);
var curr_date = d.getDate();
var curr_month = d.getMonth() + 1; //Months are zero based
var curr_year = d.getFullYear();
var formatteddate = curr_month+'/'+curr_date+'/'+curr_year;
messagedeets['date']=formatteddate;
//$('.ASAP-emailhouse').append('<p>'+header.value+'</p>');
}
if(header.name =='Subject'){
//console.log(header.value);
messagedeets.subject=header.value;
}
}
messagedeets.body = {};
$.each(messageresp.payload.parts, walker);
function walker(partskey, value) {
//console.log(value.body);
if (typeof value.body.data !== "undefined") {
//console.log(value.body);
var messagebody = atob(value.body.data);
messagedeets.body.partskey = messagebody;
}
console.log(messagedeets);
$('.ASAP-emailhouse').append('<div class="messagedeets"><p class="message-date">'+messagedeets.date+': <span class="message-subject">'+messagedeets.subject+'</span></p><p>'+messagedeets.body.partskey+'</p></div>');
}//end walker
//$('.ASAP-emailhouse').append('</li>');
}
//$('.ASAP-emailhouse').append('</ul>');
});//end message request
});//end each message id
});//end jquery wrapper for wordpress
});//end request execute list messages
});//end gapi client load gmail
}
The MIME parts you are looking for are in an array. JSON does not tell you up front how many items are in an array. Even MIME itself does not provide a way of knowing how many parts are present without looking at the entire message. You will just have to traverse the entire array to know how many parts are in it, and process each part as you encounter it.
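If you do want a count, here is a small sketch (again, purely illustrative) that walks the same tree and counts the leaf parts:

function countParts(part) {
  if (!part.parts || part.parts.length === 0) {
    return 1; // a leaf part with no children
  }
  return part.parts.reduce(function(total, child) {
    return total + countParts(child);
  }, 0);
}

// For the shortened response above, countParts(message.payload) returns 3:
// the text/plain part 0.0, the text/html part 0.1, and the text/plain part 1.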
To know how many parts exist, you can just use the length property.
Example :
json.payload.parts.length
For your example, this property is 2 because there are 2 parts.
When a search term appears not just once but several times in the document I'm searching, the score goes up. While this might be wanted most of the time, in my case it is not.
The query:
"query": {
"bool": {
"should": {
"nested": {
"path": "editions",
"query": {
"match": {
"title_author": {
"query": "look me up",
"operator": "and",
"boost": 2
}
}
}
}
},
"must": {
"nested": {
"path": "editions",
"query": {
"match": {
"title_author": {
"query": "look me up",
"operator": "and",
"fuzziness": 0.5,
"boost": 1
}
}
}
}
}
}
}
doc_1
{
"editions": [
{
"editionid": 1,
"title_author": "look me up look me up",
},
{
"editionid": 2,
"title_author": "something else",
}
]
}
and doc_2
{
"editions": [
{
"editionid": 3,
"title_author": "look me up",
},
{
"editionid": 4,
"title_author": "something else",
}
]
}
Now, doc_1 would get a higher score because the search term is included twice. I don't want that. How do I turn this behavior off? I want the same score - no matter whether the search term was found once or twice in the matching document.
In addition to what #keety and #Sid1199 talked about, there is another way to do it: a property for fields of type "text" called index_options. By default it is set to "positions", but you can explicitly set it to "docs", so term frequencies will not be placed in the index and Elasticsearch will not know about repetitions while searching.
"title_author": {
"type": "text",
"index_options": "docs"
}
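For context, here is a sketch of where that setting would sit in the full mapping for the documents in this question. The index and type names (books, book) are made up, the nested editions structure just mirrors the query above, and you may need to adjust the syntax for your Elasticsearch version (mapping types were removed in 7.x):

PUT /books
{
  "mappings": {
    "book": {
      "properties": {
        "editions": {
          "type": "nested",
          "properties": {
            "title_author": {
              "type": "text",
              "index_options": "docs"
            }
          }
        }
      }
    }
  }
}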
There is a property in Elasticsearch known as "similarity". There are a lot of types of similarities, but the one that is useful here is "boolean". If you set similarity to "boolean" in your mapping, documents are scored only on whether the terms match, so repeated occurrences no longer raise the score.
"title_author":{"type":"text","similarity":"boolean"}
If you run your query against this mapping, the match contributes its boost only once, regardless of the number of times the word appears. You can read up more on similarities here.
This is only available in ES versions 5.4 and above.
I'm very new to the Facebook API for my website, and I am using the JavaScript SDK. I want to get the user's latest school information, including school name, course, and year of study. This is what I have so far, but it breaks the login script and returns 'response.education.school is undefined'. I'm guessing I'll need some kind of loop to go through the education array, as most users have more than one school listed?
function login() {
FB.login(function(response) {
if(response.authResponse) {
// connected
FB.api('/me', function(response) {
fbLogin(response.id, response.name, response.firstname, response.email,
response.education.school.name, response.education.concentration.name, response.education.year.name);
});
} else {
// cancelled
}
}, {scope: 'email, user_education_history, user_hometown'});
}
response.education.school is undefined
This is because response.education is an array of objects. Here is an example from my own account (actual information removed):
"education": [
{
"school": {
"id": "",
"name": ""
},
"year": {
"id": "",
"name": ""
},
"concentration": [
{
"id": "",
"name": ""
}
],
"type": ""
},
...
]
You need to iterate over it and process each educational entry, e.g.
for(ed in response.education) {
var school = response.education[ed].school;
var schoolName = school.name;
...
}
And so on; you are currently passing an object structure to your fbLogin that it can't handle. If you want the latest school education, you simply pick the entry with the most recent year.name value.
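A rough, untested sketch of that idea, with the field names taken from the example response above (note that the Graph API exposes the first name as first_name, not firstname):

var education = response.education || [];

// Keep only entries that have a school, sort by year (newest first), take the first.
var latest = education
  .filter(function(ed) { return ed.school; })
  .sort(function(a, b) {
    var yearA = a.year ? parseInt(a.year.name, 10) : 0;
    var yearB = b.year ? parseInt(b.year.name, 10) : 0;
    return yearB - yearA;
  })[0];

if (latest) {
  // concentration is an array in the example above, so take the first one if present.
  var concentration = (latest.concentration && latest.concentration[0]) ? latest.concentration[0].name : '';
  var year = latest.year ? latest.year.name : '';
  fbLogin(response.id, response.name, response.first_name, response.email,
          latest.school.name, concentration, year);
}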