BuddyPress - Limit Blog Excerpt on Activity Stream

I'm using the following code to limit the number of words that appear on my blog pages.
function custom_excerpt_length( $length ) {
    return 32;
}
add_filter( 'excerpt_length', 'custom_excerpt_length', 999 );
I need to limit the number of words that appear on the BuddyPress Activity Stream when someone creates a blog post. Currently when someone creates a new blog post, the excerpt appears on the Activity Stream with about 50+ words. I need to limit this to around 32 words.
Does anyone know how to accomplish this?
Thanks

This is what I'm using. The first part limits the text, the second part ensures the thumbnail is present. Thank you shanebp for pointing me in the right direction.
function limit_activity_content_body( $content, $activity ) {
    $content_length = 42;
    if ( 'new_blog_post' == $activity->type ) {
        $images = extractImageFromContent( $content );
        $_content = ( strlen( $content ) > $content_length ) ? substr( $content, 0, $content_length ) . '...' : $content;
        if ( ! empty( $images ) ) {
            $_content .= implode( '', $images );
        }
        return $_content;
    }
    return $content;
}
add_filter( 'bp_get_activity_content_body', 'limit_activity_content_body', 10, 2 );
This brings the thumbnail back.
function extractImageFromContent( $html ) {
    preg_match_all( '/<img.*?src=[\'"](.*?)[\'"].*?>/i', $html, $matches );
    return ( $matches[0] ) ? $matches[0] : array();
}
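Note that substr() above truncates by characters rather than words. If you specifically want a ~32-word limit to match the excerpt filter, a word-based variant using WordPress's wp_trim_words() might look like this (just a sketch; the function name is mine, and it reuses extractImageFromContent() from above):

function limit_blog_activity_words( $content, $activity ) {
    if ( 'new_blog_post' == $activity->type ) {
        $images = extractImageFromContent( $content );
        // Keep roughly the first 32 words; wp_trim_words() also strips the markup.
        $content = wp_trim_words( $content, 32, '...' );
        if ( ! empty( $images ) ) {
            $content .= implode( '', $images );
        }
    }
    return $content;
}
add_filter( 'bp_get_activity_content_body', 'limit_blog_activity_words', 10, 2 );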

Related

Optimisation of Google API requests to retrieve data from YouTube videos

I'm trying to build a website with a section that displays the last video of a youtube channel and the thumbnails and titles of the next 9 most recent videos. The problem I'm having is that I consume the quota of requests just by testing the website.
To get this information I use Google's API. I retrieve the last 10 videos of the channel as follows:
<?php
include('global_variables.php');

$randomNumber = rand(2, 10);
$randomKey = "api.key$randomNumber";

$data = file_get_contents("https://www.googleapis.com/youtube/v3/search?key=" . $GLOBALS['api_key']
    . "&part=snippet&channelId=" . $GLOBALS['channel_id']
    . "&order=date&maxResults=" . $GLOBALS['videos_in_playlist']);
$json = json_decode($data);
echo $data;
?>
I call this PHP file in these ways:
// TO LOAD PLAYING YOUTUBE VIDEO
function loadVideo(iframe) {
    $.getJSON("load_playing_video.php", function (data) {
        var videoNumber = (iframe.getAttribute('vnum') ? Number(iframe.getAttribute('vnum')) : 0);
        iframe.setAttribute("src", "https://youtube.com/embed/" + data.items[videoNumber].id.videoId + "?controls=1&autoplay=1&color=white");
        // PLACE VIDEO TITLE
        $("#playing-video-title").html(data.items[videoNumber].snippet.title);
        // PLACE VIDEO DESCRIPTION
        $("#playing-video-description").html(data.items[videoNumber].snippet.description);
    });
}
loadVideo(document.querySelector("#playing-video"));

// TO LOAD THUMBNAILS AND TITLES ON SIDEBAR
$.getJSON("load_playing_video.php", function (data) {
    for (var index = 0; index < <?php echo $GLOBALS['videos_in_playlist']; ?>; index++) {
        var videoImage = data.items[index].snippet.thumbnails.default.url;
        var title = data.items[index].snippet.title;
        var imgId = "#playlist-thumbnail" + index;
        var titleId = "#playlist-title" + index;
        document.querySelector(imgId).setAttribute("src", videoImage);
        document.querySelector(titleId).innerHTML = title;
    }
});
I was hoping someone could help me optimise the number of requests I make.
And if someone knows how I could increase the request quota (even if I have to pay), I would really appreciate some guidelines.
Thank you!
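Since both $.getJSON calls above request the same load_playing_video.php, one common way to cut quota usage is to cache the API response on the server and only refresh it every few minutes, so the search call doesn't run on every page view. A minimal sketch (the cache file name and the 10-minute TTL are assumptions):

<?php
include('global_variables.php');

$cacheFile = __DIR__ . '/playlist_cache.json'; // assumed cache location
$cacheTtl  = 600;                              // reuse the cached response for 10 minutes

if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    // Serve the cached copy without spending any API quota
    $data = file_get_contents($cacheFile);
} else {
    $data = file_get_contents("https://www.googleapis.com/youtube/v3/search?key=" . $GLOBALS['api_key']
        . "&part=snippet&channelId=" . $GLOBALS['channel_id']
        . "&order=date&maxResults=" . $GLOBALS['videos_in_playlist']);
    if ($data !== false) {
        file_put_contents($cacheFile, $data);
    }
}

header('Content-Type: application/json');
echo $data;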

WHMCS server-side pagination

I'm working with WHMCS and I've noticed that the list views are not working well.
That's because the client area's list views have thousands of records to display, and DataTables is crashing.
Is there a way to paginate from the server side? I'd appreciate any ideas.
Here is an idea: let's say you are viewing the Domains list page. You can use the ClientAreaPage hook to create a variable holding a "paged" copy of the domains:
add_hook('ClientAreaPage', 1, function ($vars) {
    $myVars = array();
    if (App::getCurrentFilename() == 'clientarea' && isset($_GET['action']) && $_GET['action'] == 'domains') {
        $domains2 = array();
        foreach ($vars['domains'] as $k => $domain) {
            if ($k < 3) { // your code to handle pagination
                $domains2[] = $domain;
            }
        }
        $myVars['domains2'] = $domains2;
        $myVars['currentpage'] = 1;
    }
    return $myVars;
});
In clientareadomains.tpl (template file), you need to change $domains to $domains2:
{foreach key=num item=domain from=$domains2}
Of course, it is not a simple task; you need to handle pagination in both the hook and the .tpl files.
Hope it helps.
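To turn that into real paging, the hook could read a page number from the query string and slice the array instead of hard-coding the first three entries. A sketch (the 'page' parameter, the page size of 25, and the extra template variables are assumptions):

add_hook('ClientAreaPage', 1, function ($vars) {
    $myVars = array();
    if (App::getCurrentFilename() == 'clientarea' && isset($_GET['action']) && $_GET['action'] == 'domains') {
        $perPage = 25;                                               // assumed page size
        $page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
        $all     = $vars['domains'];

        // Only the current page's slice goes to the template
        $myVars['domains2']    = array_slice($all, ($page - 1) * $perPage, $perPage);
        $myVars['currentpage'] = $page;
        $myVars['totalpages']  = (int) ceil(count($all) / $perPage);
    }
    return $myVars;
});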

Fiddler: Programmatically add a word to the query string

Please be kind, I'm new to Fiddler
My purpose: I want to use Fiddler as a Google search filter.
Summary:
I'm tired of manually adding "-dog" every time I use Google. I do not want "dog" appearing in my search results.
For example:
//www.google.com/search?q=cat+-dog
//www.google.com/search?q=baseball+-dog
CODE:
(In the script below, "dog" is replaced with -torrent -watch -download.)
// ==UserScript==
// @name         Tamper with Google Results
// @namespace    http://superuser.com/users/145045/krowe
// @version      0.1
// @description  This just modifies google results to exclude certain things.
// @match        http://*.google.com
// @match        https://*.google.com
// @copyright    2014+, KRowe
// ==/UserScript==

function GM_main() {
    window.onload = function () {
        var targ = window.location;
        if (targ && targ.href && targ.href.match('https?:\/\/www.google.com/.+#q=.+') && targ.href.search("/+-torrent/+-watch/+-download") == -1) {
            targ.href = targ.href + "+-torrent+-watch+-download";
        }
    };
}

//-- This is a standard-ish utility function:
function addJS_Node(text, s_URL, funcToRun, runOnLoad) {
    var D = document, scriptNode = D.createElement('script');
    if (runOnLoad) scriptNode.addEventListener("load", runOnLoad, false);
    scriptNode.type = "text/javascript";
    if (text) scriptNode.textContent = text;
    if (s_URL) scriptNode.src = s_URL;
    if (funcToRun) scriptNode.textContent = '(' + funcToRun.toString() + ')()';
    var targ = D.getElementsByTagName('head')[0] || D.body || D.documentElement;
    targ.appendChild(scriptNode);
}

addJS_Node(null, null, GM_main);
At first I was going to go with Tampermonkey userscripts, because I did not know about Fiddler.
==================================================================================
Now, let's focus on Fiddler.
Before Request:
I want Fiddler to add text at the end of the Google query string.
Someone suggested I use:
static function OnBeforeRequest(oSession: Session) {
    if (oSession.uriContains("targetString")) {
        var sText = "Enter a string to append to a URL";
        oSession.fullUrl = oSession.fullUrl + sText;
    }
}
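Adapted to the Google case described above, that snippet might look roughly like this in Fiddler's CustomRules.js (a sketch; the host check and the guard against appending the terms twice are my assumptions):

static function OnBeforeRequest(oSession: Session) {
    // Only touch Google web searches that don't already carry the exclusion terms
    if (oSession.HostnameIs("www.google.com") &&
        oSession.uriContains("/search?q=") &&
        !oSession.uriContains("-torrent+-watch+-download")) {
        oSession.fullUrl = oSession.fullUrl + "+-torrent+-watch+-download";
    }
}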
Before Response:
This is where my problem lies.
I totally love the HTML response. Now I just want to scrape/hide the word in the search box without changing the search results. How can it be done? Any ideas?
http://i.stack.imgur.com/4mUSt.jpg
Can you please take the above information and help me fix the problem?
Thank you
Based on the goal described above, I believe you can achieve better results with your own free Google Custom Search Engine (CSE) service, in particular because GCSE gives you control over fine-tuning the results returned by regular Google search.
Links:
https://www.google.com/cse/all
https://developers.google.com/custom-search/docs/structured_search

eBay API with description

How do I get the eBay API to return a description?
I have some code that makes an API call as follows:
http://svcs.ebay.com/services/search/FindingService/v1?
callname=findItemsAdvanced&
responseencoding=XML&
appid=appid&
siteid=0&
version=525&
QueryKeywords=keywords;
It returns items, but it's missing the full description text. I'm not seeing the next step to ask for the detailed descriptions.
You have to use the Shopping API, for instance: http://developer.ebay.com/DevZone/shopping/docs/CallRef/GetSingleItem.html#sampledescriptionitemspecifics
I use the following (a very simple function to get item details from eBay):
function eBayGetSingle($ItemID) {
    $URL = 'http://open.api.ebay.com/shopping';

    // change these two lines
    $compatabilityLevel = 967;
    $appID = 'YOUR_APP_ID_HERE';

    // you can also play with these selectors
    $includeSelector = "Details,Description,TextDescription,ShippingCosts,ItemSpecifics,Variations,Compatibility";

    // Construct the GetSingleItem REST call
    $apicall = "$URL?callname=GetSingleItem&version=$compatabilityLevel"
        . "&appid=$appID&ItemID=$ItemID"
        . "&responseencoding=XML"
        . "&IncludeSelector=$includeSelector";

    $xml = simplexml_load_file($apicall);
    if ($xml) {
        $json = json_encode($xml);
        $array = json_decode($json, TRUE);
        return $array;
    }
    return false;
}
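Usage is then a single call; with Description/TextDescription in the IncludeSelector, the description should come back in the decoded array (a sketch; the item ID is made up and the exact array path depends on the XML eBay returns):

$item = eBayGetSingle('123456789012'); // hypothetical item ID
if ($item !== false) {
    // Typically the HTML description sits under Item > Description in the converted array
    echo $item['Item']['Description'];
}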

How to use the Twitter API to get more than 20 list members in a single request?

I want to get more than 20 users using the Twitter API in a single request.
Is there any parameter that specifies it?
I am using this API:
http://api.twitter.com/1/Barelyme/Politics/members.xml?cursor=-1
According to the Twitter List API Docs:
http://apiwiki.twitter.com/Twitter-REST-API-Method:-GET-list-members
You can't get more than 20 in a single request.
If you're using twitteroauth by abraham, you can iterate through the pages of list members (this example assumes $connection is already defined by a functional implementation of twitteroauth):
$user = $connection->get('account/verify_credentials'); // Gets/tests credentials
$listmembers = $connection->get("{$user->screen_name}/LISTNAMEORID/members"); // Gets first page of list members; MUST edit "LISTNAMEORID"
$pagevalue = ""; // Set page value for later use

if ($listmembers->next_cursor == 0) { // There is only one page of list members
    for ($j = 0, $k = count($listmembers->users); $j < $k; $j++) {
        // Your actions here
        //print_r($listmembers); // Displays the page of list members
    }
} else { // There are multiple pages of list members
    while ($pagevalue != $listmembers->next_cursor) {
        for ($j = 0, $k = count($listmembers->users); $j < $k; $j++) {
            // Your actions here
            //print_r($listmembers); // Displays the page of list members
        }
        $pagevalue = $listmembers->next_cursor; // Advance to the 'next page' cursor
        $listmembers = $connection->get("{$user->screen_name}/LISTNAMEORID/members", array('cursor' => $pagevalue)); // Gets next page of list members; MUST edit "LISTNAMEORID"
    }
}
$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET);
$listmembers = $connection->get("MexicoTimes/mexicanpoliticians/members");
$members = array();

while ($listmembers->next_cursor_str != "0") {
    foreach ($listmembers->users as $user) {
        $members[] = $user;
    }
    $cursor = $listmembers->next_cursor_str;
    $listmembers = $connection->get("MexicoTimes/mexicanpoliticians/members", array('cursor' => $cursor));
}
This one worked for me with Abraham's Twitteroauth
Probably not, though you can poll multiple times for more data.
Given that Twitter said you can't do it, you probably can't.