How to get new search engine results from the past 24h using a SERP API?

Assume I am in possession of a SERP API which, given a keyword, returns the Google results for that keyword in JSON format (for example: https://serpapi.com/):
{
  "organic_results": [
    {
      "position": 1,
      "title": "Coffee - Wikipedia",
      "link": "https://en.wikipedia.org/wiki/Coffee",
      "displayed_link": "https://en.wikipedia.org › wiki › Coffee",
      "snippet": "Coffee is a brewed drink prepared from roasted coffee beans, the seeds of berries from certain Coffea species. From the coffee fruit, the seeds are ...",
      "sitelinks": {/*snip*/},
      "rich_snippet": {
        "bottom": {
          "extensions": [
            "Region of origin: Horn of Africa and South Ara...",
            "Color: Black, dark brown, light brown, beige",
            "Introduced: 15th century"
          ],
          "detected_extensions": {
            "introduced_th_century": 15
          }
        }
      },
      "about_this_result": {
        "source": {
          "description": "Wikipedia is a free content, multilingual online encyclopedia written and maintained by a community of volunteers through a model of open collaboration, using a wiki-based editing system. Individual contributors, also called editors, are known as Wikipedians.",
          "source_info_link": "https://en.wikipedia.org/wiki/Wikipedia",
          "security": "secure",
          "icon": "https://serpapi.com/searches/6165916694c6c7025deef5ab/images/ed8bda76b255c4dc4634911fb134de53068293b1c92f91967eef45285098b61516f2cf8b6f353fb18774013a1039b1fb.png"
        },
        "keywords": ["coffee"],
        "languages": ["English"],
        "regions": ["the United States"]
      },
      "cached_page_link": "https://webcache.googleusercontent.com/search?q=cache:U6oJMnF-eeUJ:https://en.wikipedia.org/wiki/Coffee+&cd=4&hl=en&ct=clnk&gl=us",
      "related_pages_link": "https://www.google.com/search?q=related:https://en.wikipedia.org/wiki/Coffee+Coffee"
    }
    /* Results 2, 3, 4... */
  ]
}
What is a good way to get new results from the past 24h? I added the &tbs=qdr:d query parameter, which only shows the results from the past day. That's a good first step.
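For reference, a minimal sketch of that first step as a request (assuming SerpApi's JSON endpoint; YOUR_API_KEY is a placeholder, and the snippet assumes a runtime with top-level await):

// Fetch last-24h Google results for a keyword through SerpApi.
// tbs=qdr:d restricts results to the past day.
const url = "https://serpapi.com/search.json?" + new URLSearchParams({
  q: "Alexander Pope",
  tbs: "qdr:d",
  api_key: "YOUR_API_KEY",
});
const data = await (await fetch(url)).json();
console.log(data.organic_results);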
The second step is to filter out only the relevant results. When there are no relevant results, Google shows a message box (screenshot omitted here):
What is their algorithm to show this box?
Idea 1: "grep -i {exact_keywords}"
For example, if I search a keyword like "Alexander Pope", the 24h Google query might return results about the pope, written by a guy called Alexander. That's not super relevant. My naive idea is to grep (case insensitive) the exact keyword "Alexander Pope".
But that might leave out some good results.
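For what it's worth, a rough JavaScript sketch of that idea (field names taken from the JSON above; the relevance test is just a case-insensitive substring check, so it has the same weakness):

// Keep only results whose title or snippet contains the exact
// keyword, case-insensitively (the equivalent of grep -i).
function filterExact(results, keyword) {
  const needle = keyword.toLowerCase();
  return results.filter((r) =>
    (r.title || "").toLowerCase().includes(needle) ||
    (r.snippet || "").toLowerCase().includes(needle)
  );
}

// e.g. filterExact(data.organic_results, "Alexander Pope")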
Any other ideas?

Related

Extract table data from Wikipedia

Is there any way to extract only table data? I am trying to extract a table from the section "Grade One" of this article https://en.wikipedia.org/wiki/List_of_motor_racing_circuits_by_FIA_grade using the API sandbox, but I am getting only the whole content of the page.
This is the URL from the API sandbox, which gives me the whole content:
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=parse&format=json&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade&prop=text
I followed the steps I described in my answer in order to get the data you want.
This is the URL:
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=parse&format=json&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade&prop=sections%7Ctext&section=1&disablelimitreport=1&utf8=1
The output contains the table and the text in the "Grade One" section.
This is the API Sandbox example.
Response:
{
  "parse": {
    "title": "List of motor racing circuits by FIA grade",
    "pageid": 57151782,
    "text": {
      "*": "<div class=\"mw-parser-output\"><h2><span class=\"mw-headline\" id=\"Grade_One\">Grade One</span><span class=\"mw-editsection\"><span class=\"mw-editsection-bracket\">[</span>edit<span class=\"mw-editsection-bracket\">]</span></span></h2>\n<p>There are 40 Grade One circuits for a total of 49 layouts in 27 nations as of December 2021. Circuits holding Grade One certification may host events involving \"Automobiles of Groups D (FIA International Formula) and E (Free Formula) with a weight/power ratio of less than 1 kg/hp.\"<sup id=\"cite_ref-ISC2019_1-0\" class=\"reference\">[1]</sup> As such, a Grade One certification is required to host events involving Formula One cars.<sup id=\"cite_ref-2\" class=\"reference\">[2]</sup><sup id=\"cite_ref-2021_December_list_3-0\" class=\"reference\">[3]</sup>\n</p>\n<table class=\"wikitable sortable\" width=\"75%\" style=\"font-size: 95%;\">\n<tbody><tr>\n<th>Circuit\n</th>\n<th>Location\n</th>\n<th>Country\n</th>\n<th>Layout\n</th>\n<th>Length\n</th>\n<th>Continent\n</th></tr>\n<tr>\n<td>Albert Park Circuit\n</td>\n<td>Melbourne\n</td>\n<td><span class=\"flagicon\"><img alt=\"\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/23px-Flag_of_Australia_%28converted%29.svg.png\" decoding=\"async\" width=\"23\" height=\"12\" class=\"thumbborder\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/35px-Flag_of_Australia_%28converted%29.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/46px-Flag_of_Australia_%28converted%29.svg.png 2x\" data-file-width=\"1280\" data-file-height=\"640\" /> </span>Australia\n</td>\n<td>Grand Prix\n</td>\n<td>5.279 km (3.280 mi)\n</td>\n<td>Australia\n</td></tr>\n<tr>[...the rest of the table is shown here]"
    },
    "sections": [
      {
        "toclevel": 1,
        "level": "2",
        "line": "Grade One",
        "number": "1",
        "index": "1",
        "fromtitle": "List_of_motor_racing_circuits_by_FIA_grade",
        "byteoffset": 0,
        "anchor": "Grade_One"
      }
    ]
  }
}
I can't see a way to return the table only, so you have to extract the table from the API response with a script.
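For example, here is a rough browser-side sketch of that extraction step (it calls the api.php endpoint behind the sandbox URL; origin=* enables anonymous CORS, error handling is omitted, and top-level await assumes a modern browser console):

// Fetch the "Grade One" section HTML, then pull the rows out of its table.
const api = "https://en.wikipedia.org/w/api.php?action=parse&format=json&origin=*" +
  "&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade" +
  "&prop=text&section=1&disablelimitreport=1";
const html = (await (await fetch(api)).json()).parse.text["*"];
const doc = new DOMParser().parseFromString(html, "text/html");
const rows = [...doc.querySelectorAll("table.wikitable tr")].map((tr) =>
  [...tr.cells].map((cell) => cell.textContent.trim())
);
console.log(rows); // [["Circuit", "Location", ...], ["Albert Park Circuit", ...], ...]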

Handling multiple rows returned by IMPORTJSON script on Google Sheets

I am trying to populate a Google Sheet using an API, but the API returns more than one row for a single query. Following is the JSON returned by the API.
# https://api.dictionaryapi.dev/api/v2/entries/en/ABANDON
[
  {
    "word": "abandon",
    "phonetics": [
      {
        "text": "/əˈbændən/",
        "audio": "https://lex-audio.useremarkable.com/mp3/abandon_us_1.mp3"
      }
    ],
    "meanings": [
      {
        "partOfSpeech": "transitive verb",
        "definitions": [
          {
            "definition": "Cease to support or look after (someone); desert.",
            "example": "her natural mother had abandoned her at an early age",
            "synonyms": ["desert", "leave", "leave high and dry", "turn one's back on", "cast aside", "break with", "break up with"]
          },
          {
            "definition": "Give up completely (a course of action, a practice, or a way of thinking)",
            "example": "he had clearly abandoned all pretense of trying to succeed",
            "synonyms": ["renounce", "relinquish", "dispense with", "forswear", "disclaim", "disown", "disavow", "discard", "wash one's hands of"]
          },
          {
            "definition": "Allow oneself to indulge in (a desire or impulse)",
            "example": "they abandoned themselves to despair",
            "synonyms": ["indulge in", "give way to", "give oneself up to", "yield to", "lose oneself in", "lose oneself to"]
          }
        ]
      },
      {
        "partOfSpeech": "noun",
        "definitions": [
          {
            "definition": "Complete lack of inhibition or restraint.",
            "example": "she sings and sways with total abandon",
            "synonyms": ["uninhibitedness", "recklessness", "lack of restraint", "lack of inhibition", "unruliness", "wildness", "impulsiveness", "impetuosity", "immoderation", "wantonness"]
          }
        ]
      }
    ]
  }
]
By using the following calls via IMPORTJSON,
=ImportJSON(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2), "/phonetics/text", "noHeaders")
=ImportJSON(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2), "/meanings/partOfSpeech", "noHeaders")
=ImportJSON(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2), "/meanings/definitions/definition", "noHeaders")
=ImportJSON(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2), "/meanings/definitions/synonyms", "noHeaders")
=ImportJSON(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2), "/meanings/definitions/example", "noHeaders")
I am able to get the following in Google Sheets (screenshot omitted).
Whereas the actual output according to the JSON should be (screenshot omitted):
As you can see, a complete row is being overwritten. How can this be fixed?
EDIT
Following is the link to the sheet, for viewing only.
I believe your goal is as follows.
You want to achieve the layout shown in the bottom image in your question on Google Spreadsheet.
Unfortunately, I couldn't find a method for directly retrieving that layout using ImportJSON. So in this answer, I would like to propose a sample script that retrieves the values you expect using Google Apps Script. I thought that creating a sample script to achieve your goal directly might be simpler than modifying ImportJSON.
Sample script:
function SAMPLE(url) {
  // Fetch the API response; return the error text if the request fails.
  var res = UrlFetchApp.fetch(url, {muteHttpExceptions: true});
  if (res.getResponseCode() != 200) return res.getContentText();
  var obj = JSON.parse(res.getContentText());
  // Flatten meanings -> definitions into rows. Only the first definition of
  // each meaning carries the phonetic text and part of speech; the rest get
  // empty cells in those columns.
  var values = obj[0].meanings.reduce((ar, {partOfSpeech, definitions}, i) => {
    definitions.forEach(({definition, example, synonyms}, j) => {
      var v = [definition, Array.isArray(synonyms) ? synonyms.join(",") : synonyms, example];
      var phonetics = obj[0].phonetics[i];
      ar.push(j == 0 ? [(phonetics ? phonetics.text : ""), partOfSpeech, ...v] : ["", "", ...v]);
    });
    return ar;
  }, []);
  return values;
}
When you use this script, please put =SAMPLE(CONCATENATE("https://api.dictionaryapi.dev/api/v2/entries/en/"&$A2)) into a cell as a custom formula.
Result:
When the above script is used, the following result is obtained (screenshot omitted).
Note:
In this sample script, if the structure of the JSON object changes, the script might no longer work. So please be careful about this.
References:
Class UrlFetchApp
Custom Functions in Google Sheets

How do I get info about a YouTube video's chapters from the API?

Recently, YouTube added the ability to break up their videos in the progress bar into sections called "chapters".
https://support.google.com/youtube/answer/9884579?hl=en
Currently I am able to get info about a video from the YouTube API. However, there doesn't seem to be any info about a video's chapters, and I haven't found anything in the API documentation about chapters. Am I missing something, or is there simply no way to get chapter data yet?
As far as I know, such data is in plain text in the description of the video.
So, you can use the following example:
Video used in this demonstration: Top 10 Monsters with 2500 Attack in YuGiOh
URL Request:
https://www.googleapis.com/youtube/v3/videos?part=snippet&id=NNgYId7b4j0&key=[YOUR_API_KEY]
Response:
{
  "kind": "youtube#videoListResponse",
  "etag": "YpVLmrSx1iP8hAJOnumaTBoKqqQ",
  "items": [
    {
      "kind": "youtube#video",
      "etag": "oIoJq5F3RHvBbtVohafaJ_1SThU",
      "id": "NNgYId7b4j0",
      "snippet": {
        "publishedAt": "2020-09-14T18:37:46Z",
        "channelId": "UC0roOaAn95Rtgoe078RkVXQ",
        "title": "Top 10 Monsters with 2500 Attack in YuGiOh",
        "description": "In this video we'll go over the best monsters that have 2500 attack, and attack threshold for a lot of boss monsters actually.\n\nCheck out my DnD channel #TheD&DLogs \n\n--The List--\nIntro: (0:00)\n10- Blue-Eyes Spirit Dragon: (0:00)\n9- Invoked Mechaba: (2:14)\n8- Number S39: Utopia the Lightning: (3:23)\n7- Earthbound Immortal Aslla Piscu: (4:35)\n6- Eldlich the golden Lord: (6:04)\n5- True King Lithosagym the Disaster: (7:34)\n4- Block Dragon: (8:54)\n3- Astrograph sorcerer: (10:25)\n2- Beatrice lady of the eternal: (12:36)\n1- Firewall Dragon: (14:37)\n- \n-----------------------------------------\n#yugioh #top10 \n\nDuels are all done on EDOpro, its completely free and updated all the time. If you want it, just look for the EDOpro discord and you'll find all you need to download it from there\n\nSome of the Video backgrounds in this video were made by \"Amitai Angor AA VFX\" https://www.youtube.com/dvdangor2011\n\n\nhttps://twitter.com/hirumared\nhttps://twitter.com/TheDuelLogs",
        "thumbnails": {
          "default": { "url": "https://i.ytimg.com/vi/NNgYId7b4j0/default.jpg", "width": 120, "height": 90 },
          "medium": { "url": "https://i.ytimg.com/vi/NNgYId7b4j0/mqdefault.jpg", "width": 320, "height": 180 },
          "high": { "url": "https://i.ytimg.com/vi/NNgYId7b4j0/hqdefault.jpg", "width": 480, "height": 360 },
          "standard": { "url": "https://i.ytimg.com/vi/NNgYId7b4j0/sddefault.jpg", "width": 640, "height": 480 },
          "maxres": { "url": "https://i.ytimg.com/vi/NNgYId7b4j0/maxresdefault.jpg", "width": 1280, "height": 720 }
        },
        "channelTitle": "TheDuelLogs",
        "tags": ["yugioh", "ygo", "dev", "pro", "link", "duels", "auto-matic duels", "online", "current", "ban", "list", "dueling", "network", "theduellogs", "the", "duel", "logs", "loggs", "Yu", "Gi", "Oh!", "YGOpro", "gimmick", "links", "top ten", "2020", "edopro"],
        "categoryId": "20",
        "liveBroadcastContent": "none",
        "localized": {
          "title": "Top 10 Monsters with 2500 Attack in YuGiOh",
          "description": "In this video we'll go over the best monsters that have 2500 attack, and attack threshold for a lot of boss monsters actually.\n\nCheck out my DnD channel #TheD&DLogs \n\n--The List--\nIntro: (0:00)\n10- Blue-Eyes Spirit Dragon: (0:00)\n9- Invoked Mechaba: (2:14)\n8- Number S39: Utopia the Lightning: (3:23)\n7- Earthbound Immortal Aslla Piscu: (4:35)\n6- Eldlich the golden Lord: (6:04)\n5- True King Lithosagym the Disaster: (7:34)\n4- Block Dragon: (8:54)\n3- Astrograph sorcerer: (10:25)\n2- Beatrice lady of the eternal: (12:36)\n1- Firewall Dragon: (14:37)\n- \n-----------------------------------------\n#yugioh #top10 \n\nDuels are all done on EDOpro, its completely free and updated all the time. If you want it, just look for the EDOpro discord and you'll find all you need to download it from there\n\nSome of the Video backgrounds in this video were made by \"Amitai Angor AA VFX\" https://www.youtube.com/dvdangor2011\n\n\nhttps://twitter.com/hirumared\nhttps://twitter.com/TheDuelLogs"
        },
        "defaultAudioLanguage": "en"
      }
    }
  ],
  "pageInfo": {
    "totalResults": 1,
    "resultsPerPage": 1
  }
}
Get the response:
response.items[0].snippet.description
Results:
"In this video we'll go over the best monsters that have 2500 attack, and attack threshold for a lot of boss monsters actually.
Check out my DnD channel #TheD&DLogs
--The List--
Intro: (0:00)
10- Blue-Eyes Spirit Dragon: (0:00)
9- Invoked Mechaba: (2:14)
8- Number S39: Utopia the Lightning: (3:23)
7- Earthbound Immortal Aslla Piscu: (4:35)
6- Eldlich the golden Lord: (6:04)
5- True King Lithosagym the Disaster: (7:34)
4- Block Dragon: (8:54)
3- Astrograph sorcerer: (10:25)
2- Beatrice lady of the eternal: (12:36)
1- Firewall Dragon: (14:37)
-
-----------------------------------------
#yugioh #top10
Duels are all done on EDOpro, its completely free and updated all the time. If you want it, just look for the EDOpro discord and you'll find all you need to download it from there
Some of the Video backgrounds in this video were made by "Amitai Angor AA VFX" https://www.youtube.com/dvdangor2011
https://twitter.com/hirumared
https://twitter.com/TheDuelLogs"
One more time, YouTube Data API v3 doesn't provide a basic feature.
I would suggest using my open-source YouTube operational API: by requesting https://yt.lemnoslife.com/videos?part=chapters&id=VIDEO_ID, you get a JSON with the video chapters (titles and timestamps) you are looking for in item['chapters']['chapters'].
Example of result with YouTube video id NNgYId7b4j0:
{
  "kind": "youtube#videoListResponse",
  "etag": "NotImplemented",
  "items": [
    {
      "kind": "youtube#video",
      "etag": "NotImplemented",
      "id": "NNgYId7b4j0",
      "chapters": {
        "areAutoGenerated": false,
        "chapters": [
          {
            "title": "10- Blue-Eyes Spirit Dragon",
            "time": 0,
            "thumbnails": [
              {
                "url": "https://i.ytimg.com/vi/NNgYId7b4j0/hqdefault_4000.jpg?sqp=-oaymwEiCKgBEF5IWvKriqkDFQgBFQAAAAAYASUAAMhCPQCAokN4AQ==&rs=AOn4CLCoTrvu0Yu-iNxb7o4II-pxi5WVbQ",
                "width": 168,
                "height": 94
              },
              {
                "url": "https://i.ytimg.com/vi/NNgYId7b4j0/hqdefault_4000.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=&rs=AOn4CLCuupNwIgFIf9hXbjMsvpSGThFyhg",
                "width": 336,
                "height": 188
              }
            ]
          },
          {
            "title": "9- Invoked Mechaba",
            "time": 134,
            "thumbnails": [
              {
                "url": "https://i.ytimg.com/vi/NNgYId7b4j0/hqdefault_135933.jpg?sqp=-oaymwEiCKgBEF5IWvKriqkDFQgBFQAAAAAYASUAAMhCPQCAokN4AQ==&rs=AOn4CLBe94BKNpQXvM2dUl75LtcgX0N03w",
                "width": 168,
                "height": 94
              },
              {
                "url": "https://i.ytimg.com/vi/NNgYId7b4j0/hqdefault_135933.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=&rs=AOn4CLBULUhlI1OOjJiW6mpFDUhPzh4Adw",
                "width": 336,
                "height": 188
              }
            ]
          },
          ...
        ]
      }
    }
  ]
}
I am replying with this answer to help people such as myself who ended up on this question wanting a YouTube chapter parser/extractor for text, rather than where to find the chapter data. To add some further information: currently there is no way to get the chapters from the official YouTube API, so the only way to get the chapters from a text-description response (like the YouTube API provides) is to parse it in some way.
My answer is in JavaScript, but it can easily be converted. The idea is to extract the MIN:SEC and HR:MIN:SEC timestamps, then generate the title by removing the word that includes them (this typically also removes whatever people aesthetically wrap them in, like [00:00] or (00:00)).
It's far from perfect, but in my experience it's better than the other solutions I've seen on GitHub/npm at the time of writing. You might also want to trim away leading and trailing spaces and punctuation separators such as -, :, ~, and |.
const parseChapters = (description) => {
  // Extract timestamps (either 00:00:00, 0:00:00, 00:00 or 0:00)
  const lines = description.split("\n")
  const regex = /(\d{0,2}:?\d{1,2}:\d{2})/g
  const chapters = []
  for (const line of lines) {
    // Match the regex and check if the line contains a matched timestamp
    const matches = line.match(regex)
    if (matches) {
      const ts = matches[0]
      // The title is the line minus the word containing the timestamp
      const title = line
        .split(" ")
        .filter((l) => !l.includes(ts))
        .join(" ")
      chapters.push({
        timestamp: ts,
        title: title,
      })
    }
  }
  return chapters
}
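Usage, with the description string from the YouTube API response in the first answer:

const chapters = parseChapters(description);
// [{ timestamp: "0:00", title: "Intro:" },
//  { timestamp: "0:00", title: "10- Blue-Eyes Spirit Dragon:" }, ...]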
Very late answer, but it solved my problem.
You could use the code below. It's written in C#, but it can easily be transcribed into another language. Since you can already get YouTube video data, I assume you also have the description of the video.
// Yields every description line that contains a m:ss timestamp,
// i.e. the chapter lines.
private static IEnumerable<string> GetChaptersFromDescription(string text)
{
    var lines = text.Split("\n");
    var regex = new Regex(@"[0-9]:[0-9][0-9]");
    foreach (var line in lines)
    {
        if (regex.IsMatch(line))
        {
            yield return line;
        }
    }
}

Get data from Amadeus API Flight Low Fare Search

I need the parameters to get data that contains a stops array. I tried about 100 different combinations, and I didn't get any response that returns a stops array in the results.
If anyone knows how to accomplish this, please provide your answer.
Thanks.
Having stops is not that common, and it usually depends on the distance between origin and destination. For example, with London as origin and Sydney as destination:
https://test.api.amadeus.com/v1/shopping/flight-offers?origin=LON&destination=SYD&departureDate=2019-08-01&nonStop=false&returnDate=2019-08-28
You can check in the response that most of the segments contain stops:
"stops": [
{
"iataCode": "HKG",
"duration": "0DT1H0M",
"arrivalAt": "2019-08-28T12:00:00+08:00",
"departureAt": "2019-08-28T13:00:00+08:00"
},
{
"iataCode": "DOH",
"duration": "0DT1H0M",
"arrivalAt": "2019-08-28T14:00:00+03:00",
"departureAt": "2019-08-28T15:00:00+03:00"
},
{
"iataCode": "BAH",
"duration": "0DT1H0M",
"arrivalAt": "2019-08-28T16:00:00+03:00",
"departureAt": "2019-08-28T17:00:00+03:00"
}
]
Here a stop means that the aircraft lands, for refueling for instance, but passengers don't necessarily get off the plane.
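For reference, a minimal sketch of that request (the bearer token comes from Amadeus' OAuth2 token endpoint and is assumed to already be in hand; ACCESS_TOKEN is a placeholder):

// Request LON -> SYD offers with connections allowed, then inspect the
// response for "stops" arrays inside the flight segments.
const url = "https://test.api.amadeus.com/v1/shopping/flight-offers" +
  "?origin=LON&destination=SYD&departureDate=2019-08-01" +
  "&returnDate=2019-08-28&nonStop=false";
const res = await fetch(url, { headers: { Authorization: "Bearer ACCESS_TOKEN" } });
const offers = await res.json();
console.log(JSON.stringify(offers, null, 2)); // look for "stops" in each segment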

Putting exact matches first in an Elasticsearch multi_match query

It sure seems that there's no easy way to do this... How can I ensure that certain fields in my multi_match query are actually going to be boosted correctly, so that exact matches show up at the top?
I honestly seem to have tried this a multitude of ways, but maybe someone knows the answer...
In my movie and music database, I'm trying to search multiple fields at once, but ensure that exact matches make it to the top and that certain fields, such as title and artist name, get more boost.
Here's the main portion of my query...
"query": {
"bool": {
"should": [
{
"multi_match": {
"type": "phrase_prefix",
"query": "brave",
"max_expansions": 10,
"fields": [
"title^3",
"artists.name^2",
"starring.name^2",
"credits.name",
"tracks^0.1"
]
}
}
],
"minimum_number_should_match": 1
}
}
As you see, the query is 'brave'. It just so happens there's a movie called Brave. Perfect, I want it at the top, since not only is it an exact match, but the match is in the title. However, there's a popular song called 'Brave' by Sara Bareilles which ends up on top. Why?
I've tried every analyzer known to man, custom and otherwise, and I've tried changing the 'type' parameter to every other permutation (phrase, best_fields, cross_fields, most_fields), and it just doesn't seem to honor the fact that I'm effectively trying to promote 'title', 'artists.name', and 'starring.name', and DEMOTE 'tracks'.
Is there any way I can ensure all exact matches show up at the top (especially in title, etc.), followed by expansions, etc.?
Any suggestions would be helpful.
EDIT
The analyzer I'm currently using, which seems to work better than others, is a custom one I call 'nameAnalyzer', made up of a 'lowercase' filter and a 'keyword' tokenizer only.
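For reference, such an analyzer would be defined roughly like this in the index settings (reconstructed from the description above, not copied from my actual mapping):

"settings": {
  "analysis": {
    "analyzer": {
      "nameAnalyzer": {
        "type": "custom",
        "tokenizer": "keyword",
        "filter": ["lowercase"]
      }
    }
  }
}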
Here are some example documents, in the order in which they appear in the results:
fields": {
"title": [
"Brave"
],
"credits.name": [
"Kelly MacDonald",
"Emma Thompson",
"Billy Connolly",
"Julie Walters",
"Kevin McKidd",
"Craig Ferguson",
"Robbie Coltrane"
],
"starring.name": [
"Emma Thompson",
"Julie Walters",
"Billy Connolly",
"Kevin Mckidd",
"Kelly Macdonald"
]
,
fields": {
"credits.name": [
"Hilary Weeks",
"Scott Wiley",
"Sarah Sample",
"Debra Fotheringham",
"Dustin Christensen",
"Russ Dixon"
],
"title": [
"Say Love"
],
"artists.name": [
"Hilary Weeks"
],
"tracks": [
"Say Love",
"Another Second Chance",
"It's A Good Day",
"Brave",
"I Found Me",
"Hero",
"Tell Me",
"Where I Am",
"Better Promises",
"Even When"
]
,
fields": {
"title": [
"Brave Little Toaster"
],
"credits.name": [
"Randy Bennett",
"Jim Jackman",
"Randy Cook",
"Judy Toll",
"Jon Lovitz",
"Tim Stack",
"Timothy E. Day",
"Thurl Ravenscroft",
"Deanna Oliver",
"Phil Hartman",
"Jonathon Benair",
"Joe Ranft"
],
"starring.name": [
"Jon Lovitz",
"Thurl Ravenscroft",
"Tim Stack",
"Timothy E. Day",
"Deanna Oliver"
]
},
"fields": {
"title": [
"Braveheart"
],
"credits.name": [
"Bernard Horsfall",
"Martin Dempsey",
"James Robinson",
"Robert Paterson",
"Alan Tall",
"Rupert Vansittart",
"Donal Gibson",
"Malcolm Tierney",
"Sandy Nelson",
"Sean Lawlor"
],
"starring.name": [
"Brendan Gleeson",
"Sophie Marceau",
"Mel Gibson",
"Patrick Mcgoohan",
"Catherine Mccormack"
]
}
Maybe someone knows why the second document (in this case not Sara Bareilles as I said before, but Hilary Weeks, who has a track called 'Brave') comes before the titles 'Braveheart' and 'Brave Little Toaster'?
EDIT AGAIN
To further complicate the situation, what if I had a 'rank' field as part of my document? I'm finding it very difficult to add that to my _score using a script score function...
"functions": [
{
"script_score": {
"script": "_score * 1/ doc['rank'].value"
}
}
]
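For context, that fragment would sit inside a function_score query wrapping the bool query from above, roughly like this (a structural sketch, not a verified fix):

"query": {
  "function_score": {
    "query": { /* the bool/multi_match query from above */ },
    "functions": [
      {
        "script_score": {
          "script": "_score * 1 / doc['rank'].value"
        }
      }
    ],
    "boost_mode": "replace" /* the script already folds _score in */
  }
}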