How to edit / change default datatable pages message - datatables

I have a datatable, and I would like to change the default message:
Showing 21 to 30 of 57 entries
To:
Showing 21 to 30 of 57 items (or whatever)
I found this in the official doc
So basically I have to change:
"info": "Showing _START_ to _END_ of _TOTAL_ entries",
"infoEmpty": "Showing 0 to 0 of 0 entries",
...
To:
"info": "Showing _START_ to _END_ of _TOTAL_ items",
"infoEmpty": "Showing 0 to 0 of 0 items",
...
But I don't know how to hook it all together.
Is there any possibility to do that without using any additional plugins?

Here is an example - no additional plugins are needed:
$('#example').DataTable( {
    "language": {
        "info": "Showing _START_ to _END_ of _TOTAL_ items",
        "infoEmpty": "Showing 0 to 0 of 0 items",
        "infoFiltered": "(filtered from _MAX_ total items)"
    }
} );
In the above example I also added a third change (infoFiltered), in case you want to change every occurrence of "entries" to "items".
Some background notes:
These phrases are all part of the language option that you referenced.
Generally, this option is used to translate all of the phrases into a language other than English - either inline or via a URL.
But you can also use it to modify the default English phrases.
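If you would rather keep the overrides out of your initialisation code, DataTables can also load the language strings from a JSON file via language.url. A minimal sketch, where the file path is just a placeholder for wherever you host the file:
// load all language strings, including the "items" overrides, from a JSON file
$('#example').DataTable( {
    "language": {
        "url": "/i18n/dataTables.items.json"
    }
} );
The JSON file would then contain the same "info", "infoEmpty" and "infoFiltered" keys shown above.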

Related

Extract table data from Wikipedia

Is there any way to extract only table data? I am trying to extract a table from the specific section "Grade One" of this article https://en.wikipedia.org/wiki/List_of_motor_racing_circuits_by_FIA_grade using the API sandbox, but I am getting only the whole content of the page.
This is the URL from the API sandbox which gives me all of the content:
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=parse&format=json&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade&prop=text
I followed the steps I described in my answer in order to get the data you want.
This is the URL:
https://en.wikipedia.org/wiki/Special:ApiSandbox#action=parse&format=json&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade&prop=sections%7Ctext&section=1&disablelimitreport=1&utf8=1
The output contains the table and the text in the "Grade One" section.
This is the API Sandbox example.
Response:
{
    "parse": {
        "title": "List of motor racing circuits by FIA grade",
        "pageid": 57151782,
        "text": {
"*": "<div class=\"mw-parser-output\"><h2><span class=\"mw-headline\" id=\"Grade_One\">Grade One</span><span class=\"mw-editsection\"><span class=\"mw-editsection-bracket\">[</span>edit<span class=\"mw-editsection-bracket\">]</span></span></h2>\n<p>There are 40 Grade One circuits for a total of 49 layouts in 27 nations as of December 2021. Circuits holding Grade One certification may host events involving \"Automobiles of Groups D (FIA International Formula) and E (Free Formula) with a weight/power ratio of less than 1 kg/hp.\"<sup id=\"cite_ref-ISC2019_1-0\" class=\"reference\">[1]</sup> As such, a Grade One certification is required to host events involving Formula One cars.<sup id=\"cite_ref-2\" class=\"reference\">[2]</sup><sup id=\"cite_ref-2021_December_list_3-0\" class=\"reference\">[3]</sup>\n</p>\n<table class=\"wikitable sortable\" width=\"75%\" style=\"font-size: 95%;\">\n<tbody><tr>\n<th>Circuit\n</th>\n<th>Location\n</th>\n<th>Country\n</th>\n<th>Layout\n</th>\n<th>Length\n</th>\n<th>Continent\n</th></tr>\n<tr>\n<td>Albert Park Circuit\n</td>\n<td>Melbourne\n</td>\n<td><span class=\"flagicon\"><img alt=\"\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/23px-Flag_of_Australia_%28converted%29.svg.png\" decoding=\"async\" width=\"23\" height=\"12\" class=\"thumbborder\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/35px-Flag_of_Australia_%28converted%29.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/8/88/Flag_of_Australia_%28converted%29.svg/46px-Flag_of_Australia_%28converted%29.svg.png 2x\" data-file-width=\"1280\" data-file-height=\"640\" /> </span>Australia\n</td>\n<td>Grand Prix\n</td>\n<td>5.279 km (3.280 mi)\n</td>\n<td>Australia\n</td></tr>\n<tr>[...the rest of the table is shown here]"
        },
        "sections": [
            {
                "toclevel": 1,
                "level": "2",
                "line": "Grade One",
                "number": "1",
                "index": "1",
                "fromtitle": "List_of_motor_racing_circuits_by_FIA_grade",
                "byteoffset": 0,
                "anchor": "Grade_One"
            }
        ]
    }
}
I can't see a way to return only the table, so you will have to extract it from the API response yourself (for example with a script).
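As a rough sketch of that extraction step, here is how the table could be pulled out of the parse response in a browser with fetch and DOMParser, using prop=text and section=1 as in the sandbox URL above (origin=* is added to allow an anonymous cross-origin request):
// fetch the "Grade One" section (section=1) and keep only its first wikitable
const api = 'https://en.wikipedia.org/w/api.php'
    + '?action=parse&format=json&origin=*'
    + '&page=List%20of%20motor%20racing%20circuits%20by%20FIA%20grade'
    + '&prop=text&section=1';

fetch(api)
    .then(response => response.json())
    .then(json => {
        const html = json.parse.text['*'];                   // the rendered HTML of the section
        const doc = new DOMParser().parseFromString(html, 'text/html');
        const table = doc.querySelector('table.wikitable');  // the Grade One table
        console.log(table.rows.length + ' rows found');
    });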

How to get new Search Engine results in the past 24h using a SERP API?

Assume I am in possession of a SERP API which, given a keyword, returns the Google results for that keyword in JSON format (for example: https://serpapi.com/):
{
    "organic_results": [
        {
            "position": 1,
            "title": "Coffee - Wikipedia",
            "link": "https://en.wikipedia.org/wiki/Coffee",
            "displayed_link": "https://en.wikipedia.org › wiki › Coffee",
            "snippet": "Coffee is a brewed drink prepared from roasted coffee beans, the seeds of berries from certain Coffea species. From the coffee fruit, the seeds are ...",
            "sitelinks": {/*snip*/},
            "rich_snippet": {
                "bottom": {
                    "extensions": [
                        "Region of origin: Horn of Africa and ‎South Ara...‎",
                        "Color: Black, dark brown, light brown, beige",
                        "Introduced: 15th century"
                    ],
                    "detected_extensions": {
                        "introduced_th_century": 15
                    }
                }
            },
            "about_this_result": {
                "source": {
                    "description": "Wikipedia is a free content, multilingual online encyclopedia written and maintained by a community of volunteers through a model of open collaboration, using a wiki-based editing system. Individual contributors, also called editors, are known as Wikipedians.",
                    "source_info_link": "https://en.wikipedia.org/wiki/Wikipedia",
                    "security": "secure",
                    "icon": "https://serpapi.com/searches/6165916694c6c7025deef5ab/images/ed8bda76b255c4dc4634911fb134de53068293b1c92f91967eef45285098b61516f2cf8b6f353fb18774013a1039b1fb.png"
                },
                "keywords": [
                    "coffee"
                ],
                "languages": [
                    "English"
                ],
                "regions": [
                    "the United States"
                ]
            },
            "cached_page_link": "https://webcache.googleusercontent.com/search?q=cache:U6oJMnF-eeUJ:https://en.wikipedia.org/wiki/Coffee+&cd=4&hl=en&ct=clnk&gl=us",
            "related_pages_link": "https://www.google.com/search?q=related:https://en.wikipedia.org/wiki/Coffee+Coffee"
        }
        /* Results 2,3,4... */
    ]
}
What is a good way to get only new results from the past 24 hours? I added the &tbs=qdr:d query parameter, which only shows the results from the past day. That's a good first step.
The second step is to filter out only the relevant results. When there are no relevant results, Google shows this message box:
What is their algorithm to show this box?
Idea 1: "grep -i {exact_keywords}"
For example, if I search a keyword like "Alexander Pope", the 24h Google query might return results about the pope, written by a guy called Alexander. That's not super relevant. My naive idea is to grep (case insensitive) the exact keyword "Alexander Pope".
But that might leave out some good results.
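To make Idea 1 concrete, here is a sketch of that naive filter against the JSON shape shown above; response is assumed to be the parsed API result:
// keep only results whose title or snippet contains the exact phrase, case-insensitively
function filterExactMatches(results, keyword) {
    const needle = keyword.toLowerCase();
    return results.filter(r =>
        (r.title || '').toLowerCase().includes(needle) ||
        (r.snippet || '').toLowerCase().includes(needle)
    );
}

const relevant = filterExactMatches(response.organic_results, 'Alexander Pope');
console.log(relevant.map(r => r.link));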
Any other ideas?

create order with different extra bag for outbound and inbound

I want to test the order create API by adding extra bags, and I am experiencing a strange problem.
I make a search for a Paris-NYC round trip, then I send the request to the offer price API using the include=detailed-fare-rules,bags parameter.
In the response, I get 2 kinds of extra bag information:
1 bag, 30 EUR
2 bags, 75 EUR
"bags": {
"1": {
"quantity": 1,
"name": "CHECKED_BAG",
"price": {
"amount": "30.00",
"currencyCode": "EUR"
},
"bookableByItinerary": true,
"segmentIds": [
"1",
"3"
],
"travelerIds": [
"1"
]
},
"2": {
"quantity": 2,
"name": "CHECKED_BAG",
"price": {
"amount": "75.00",
"currencyCode": "EUR"
},
"bookableByItinerary": true,
"segmentIds": [
"1",
"3"
],
"travelerIds": [
"1"
]
}
}
Everything goes well if I create the order by:
adding 1 bag for the outbound (Paris to NYC) and 1 bag for the inbound (NYC to Paris)
adding only 1 bag for the outbound (0 extra bags for the inbound)
adding 2 bags for the outbound (Paris to NYC) and 2 bags for the inbound (NYC to Paris)
The problem is with this scenario:
I create the order by adding 1 bag for the outbound and 2 bags for the inbound.
In this case, the order is created with a warning message:
"warnings": [
{
"status": 200,
"code": 0,
"title": "BookingWithPriceMarginWarning",
"detail": "The prices are lower than expected"
}
]
And the created order contains 1 extra bag for the outbound and 1 extra bag for the inbound.
So I have 2 questions about this strange problem:
Is it normal that my order is modified while the order creation is processed?
Is adding a different number of extra bags for different itineraries supported?
Thanks
Is it normal that my order is modified while the order creation is processed?
It depends on whether you are a Self-Service or an Enterprise user:
For Enterprise users, Flight Create Orders offers the possibility of a "best-effort" mode for additional-service booking. If this option is activated, Flight Create Orders gives priority to the reservation of your flight and removes the additional services that cannot be booked. That's why you receive the warning in the response when this happens.
Self-Service users get the default behavior, which rejects the creation of the order if at least one additional service cannot be booked. In this case you will receive the following error:
{
    "errors": [
        {
            "status": 400,
            "code": 38034,
            "title": "ONE OR MORE SERVICES ARE NOT AVAILABLE",
            "detail": "Error booking additional services"
        }
    ]
}
Is adding a different number of extra bags for different itineraries supported?
Yes, that is supported. Be aware that a plane cannot carry an unlimited number of bags, so you could still get an error when adding extra bags if too many bags have already been added by other passengers.
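For what it's worth, here is a rough JavaScript sketch of how different bag quantities per segment could be attached to the priced flight offer before it is sent to Flight Create Orders. The field path (fareDetailsBySegment[].additionalServices.chargeableCheckedBags) is my assumption from the flight offer structure, so double-check it against the Amadeus documentation:
// pricedOffer: the flight offer returned by the offer price call (assumed variable)
// segment "1" = outbound, segment "3" = inbound, matching the segmentIds above
function setCheckedBags(pricedOffer, travelerId, bagsBySegment) {
    const travelerPricing = pricedOffer.travelerPricings
        .find(tp => tp.travelerId === travelerId);
    for (const fareDetails of travelerPricing.fareDetailsBySegment) {
        const quantity = bagsBySegment[fareDetails.segmentId];
        if (quantity !== undefined) {
            // assumed field path - verify against the Flight Create Orders docs
            fareDetails.additionalServices = {
                chargeableCheckedBags: { quantity: quantity }
            };
        }
    }
}

// 1 extra bag on the outbound segment, 2 on the inbound segment
setCheckedBags(pricedOffer, '1', { '1': 1, '3': 2 });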

How to change datatables display string at bottom

I am using the DataTables plugin.
While using it, it shows something like "Showing 1 to 10 of 57 entries" at the bottom.
I want to change this. Can anyone help me with how to change it?
You can override the sInfo and sInfoEmpty strings like this:
$(document).ready( function() {
    $('#example').dataTable( {
        "oLanguage": {
            "sInfo": "Showing _START_ to _END_ of _TOTAL_ entries",
            "sInfoEmpty": "Showing 0 to 0 of 0 entries"
        }
    } );
} );
See media/js/jquery.dataTables.js, lines 9272-9309 in the current version (DataTables-1.9.4)
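Note that oLanguage / sInfo are the legacy (1.9-era) Hungarian-notation names; in DataTables 1.10 and later the equivalent camelCase options are language.info and language.infoEmpty, the same mechanism used in the first answer above. For example:
// DataTables 1.10+ equivalent of the oLanguage/sInfo settings above
$('#example').DataTable( {
    "language": {
        "info": "Showing _START_ to _END_ of _TOTAL_ records",
        "infoEmpty": "Showing 0 to 0 of 0 records"
    }
} );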

Using the Instagram API to get ALL followers

I'm using the Instagram API to get the number of people who follow a given account, as follows:
$follow_info = file_get_contents('https://api.instagram.com/v1/users/477644454/followed-by?access_token=ACCESS_TOKEN&count=-1');
$follow_info = json_decode($follow_info, true);
This returns a set of 50 results. They do have a next_url key in the array, but it becomes time consuming to keep on going to the next page of followers when dealing with tens of thousands.
I read on StackOverflow that setting the count parameter to -1 would return the entire set. But, it doesn't seem to...
Instagram limits the number of results returned in their API for all sorts of endpoints, and they change these limits arbitrarily, without warning, presumably to handle server load.
Several similar threads exist:
Instagram API not fufilling count parameter
Displaying more than 20 photos in instagram API
Instagram API: How to get all user media? (see the comments on the answer too; -1 returns one fewer result)
350 Request Limit for Instagram API
In short, you won't be able to increase the maximum returned rows, and you'll be stuck paginating.
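If you do need the full list, the only option is to walk the pagination until it runs out. A rough JavaScript sketch of that loop, assuming the response shape with a data array and a pagination.next_url field mentioned above (the legacy v1 endpoint is gone, so treat this purely as the pagination pattern):
// accumulate followers by following pagination.next_url until it is absent
async function fetchAllFollowers(firstUrl) {
    const followers = [];
    let url = firstUrl;
    while (url) {
        const page = await (await fetch(url)).json();
        followers.push(...page.data);                        // the ~50 users on this page
        url = page.pagination && page.pagination.next_url;   // undefined on the last page
    }
    return followers;
}

// usage (ACCESS_TOKEN and the user id are placeholders)
fetchAllFollowers('https://api.instagram.com/v1/users/477644454/followed-by?access_token=ACCESS_TOKEN')
    .then(list => console.log(list.length + ' followers'));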
$follow_info = file_get_contents('https://api.instagram.com/v1/users/USER_ID?access_token=ACCESS_TOKEN');
$follow_info = json_decode($follow_info);
print_r($follow_info->data);
And it returns:
{
    "meta": {
        "code": 200
    },
    "data": {
        "username": "i_errorw",
        "bio": "A Casa do Júlio é um espaço para quem gosta da ideia de cuidar da saúde com uma alimentação saudável e saborosa.",
        "website": "",
        "profile_picture": "",
        "full_name": "",
        "counts": {
            "media": 5,
            "followed_by": 10,
            "follows": 120000
        },
        "id": "1066376857"
    }
}
If using the API is optional, you can extract a full list of followers for a designated target with the mobile version of Twitter and a very simple bash script.
The sleep time must be chosen carefully to avoid a temporary IP block.
The script can be executed with:
./scriptname.sh targetusername
Content:
#!/bin/bash
# usage: ./scriptname.sh targetusername
# fetch the first followers page (twitter.cookies must hold a logged-in session)
wget --load-cookies ./twitter.cookies -O - "https://mobile.twitter.com/$1/followers?" > page

while true; do
    # pull the usernames out of the page and append them to the userlist file
    grep -i "#" page | grep -vi "fullname" | grep -vi "$1" | awk -F">" '{print $5}' | awk -F"<" '{print $1}' >> userlist
    # the "cursor" link points to the next page of followers
    nextpage=$(grep -i "cursor" page | awk -F'"' '{print $4}')
    if [ -z "$nextpage" ]; then
        exit 0
    fi
    wget --load-cookies ./twitter.cookies -O - "https://mobile.twitter.com/$nextpage" > page
    # pause between requests to avoid a temporary IP block
    sleep 5
done
It creates a file "userlist" containing all the usernames that follow the designated target, one per line.
PS: a cookies file filled with your credentials is necessary for wget to authenticate the requests.
I personally suggest using Wizboost for Instagram automation. I have used this tool and my experience was good: it got me followers, likes and comments without my having to compete with other Instagram accounts manually, and it can also schedule posts.
$follow_info = file_get_contents('https://api.instagram.com/v1/users/USER_ID?access_token=ACCESS_TOKEN');
$follow_info = json_decode($follow_info);
print_r($follow_info->data);
This returns:
{
    "meta": {
        "code": 200
    },
    "data": {
        "username": "casadojulio",
        "bio": "A Casa do Júlio é um espaço para quem gosta da ideia de cuidar da saúde com uma alimentação saudável e saborosa.",
        "website": "",
        "profile_picture": "",
        "full_name": "",
        "counts": {
            "media": 5,
            "followed_by": 25,
            "follows": 12
        },
        "id": "1066376857"
    }
}