WP_Query() and ordering items - sql

I have to order items from wp_posts by their menu_order (pages only).
I've written this:
$query = new WP_Query();
$all = $query->query(array('post_type' => 'page', 'posts_per_page' => -1, 'orderby' => 'menu_order', 'order' => 'DESC'));
$children = get_page_children($id, $all);
foreach ($children as $child) {
    if ($child->ID == get_the_ID()) {
        echo '<li class="active">' . $id . $child->post_title . '</li>';
    }
}
I see the items but they are not ordered.
Thanks.
FZ

I would say the issue relates to the fact that you are passing the $all result through get_page_children() before using it in a loop.
Why don't you just forget get_page_children and add 'post_parent' to your list of WP_Query arguments?
$all = $query->query(array('post_type' => 'page', 'post_parent' => $id, 'posts_per_page' => -1, 'orderby' => 'menu_order', 'order' => 'DESC'));
A helpful thread on WordPress Stack Exchange:
https://wordpress.stackexchange.com/questions/35123/wp-get-all-the-sub-pages-of-the-parent-using-wp-query
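Putting the pieces together, a minimal sketch of the whole loop (my own assembly of the code from the question and the arguments above; esc_html() added as a precaution):

$query = new WP_Query();
$children = $query->query(array(
    'post_type'      => 'page',
    'post_parent'    => $id,
    'posts_per_page' => -1,
    'orderby'        => 'menu_order',
    'order'          => 'DESC',
));

foreach ($children as $child) {
    // highlight the page currently being viewed
    $class = ($child->ID == get_the_ID()) ? ' class="active"' : '';
    echo '<li' . $class . '>' . esc_html($child->post_title) . '</li>';
}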
EDIT: To expand on the comments. Sometimes there's confusion about menu_order: it does not relate to wp_nav_menu but to the Page Attributes > Order input box.
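(Not part of the original answer, but for completeness: menu_order is an ordinary column on the post, so it can also be set programmatically with wp_update_post() rather than through the edit screen.)

// set the same value as the Page Attributes > Order box, in code
wp_update_post(array(
    'ID'         => $child->ID,
    'menu_order' => 5,
));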

Related

posts_where clause to exclude post attachments from results

I'm having an issue with my custom 'where' clause, as it also includes post attachments in the search results.
Here's the filter function:
function title_filter( $where, $wp_query ) {
    global $wpdb;
    if ( $search_term = $wp_query->get( 'title_filter' ) ) :
        $search_term = $wpdb->esc_like( $search_term );
        $search_term = ' \'%' . $search_term . '%\'';
        $title_filter_relation = ( strtoupper( $wp_query->get( 'title_filter_relation' ) ) == 'OR' ? 'OR' : 'AND' );
        $where .= ' ' . $title_filter_relation . ' ' . $wpdb->posts . '.post_title LIKE ' . $search_term;
    endif;
    return $where;
}
and here are my parameters
$args = array(
    'post_type' => 'bb_destinations',
    'posts_per_page' => -1,
    'suppress_filters' => false,
    'title_filter' => $search,
    'title_filter_relation' => 'OR',
    'meta_query' => $meta_query,
    'tax_query' => $tax_query,
    'meta_key' => 'offer_repeater_%_destination_offer_price',
    'orderby' => 'meta_value',
    'order' => 'ASC',
);
It seems the search also matches attachments uploaded to the post, which is weird. How do I exclude them?
For example, if I search for the term 'Zia Hotel', the results also include the attachments uploaded to that post.
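A likely cause, as far as I can tell (my reading, not confirmed in the original thread): with 'title_filter_relation' => 'OR', the extra clause is appended as a bare OR, which lets rows escape the post_type = 'bb_destinations' condition WordPress already put in the WHERE, so attachments whose titles match can slip in. A sketch of a version that keeps the OR branch in its own parentheses and re-applies the post type:

function title_filter( $where, $wp_query ) {
    global $wpdb;
    if ( $search_term = $wp_query->get( 'title_filter' ) ) {
        $like     = '%' . $wpdb->esc_like( $search_term ) . '%';
        $relation = ( strtoupper( $wp_query->get( 'title_filter_relation' ) ) == 'OR' ) ? 'OR' : 'AND';
        // parenthesise the OR branch and repeat the post type check so that
        // attachments (post_type = 'attachment') cannot leak back into the results
        $where .= $wpdb->prepare(
            " {$relation} ( {$wpdb->posts}.post_title LIKE %s AND {$wpdb->posts}.post_type = %s )",
            $like,
            $wp_query->get( 'post_type' )
        );
    }
    return $where;
}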

Existence of posts from complex WP_Query

I have a comparative set of arguments for WP_Query involving a custom field.
On a page I need to ask, "Are there going to be results?" If so, display a link to another page that displays those results; if not, ignore it. There are between 500 and 1,200 posts of this type, but there could be more in the future. Is there a more efficient or direct way of returning a yes/no to this query?
$args = array(
    'post_type' => 'product',
    'posts_per_page' => -1,
    'meta_query' => array(
        array(
            'key' => 'partner',
            'value' => $partner,
            'compare' => 'LIKE',
        ),
    ),
);
$partner_query = new WP_Query($args);
if ($partner_query->have_posts() ) { [MAKE LINK] }
The link is not built from the returned data; we already have that information.
Perhaps this could be done directly in the database? My SQL is not up to phrasing the query, which in English is: SELECT * FROM wp_posts WHERE post_type = 'product' AND (some JOIN onto) wp_postmeta WHERE meta_key = 'partner' AND post_id matches the first part of the query.
And if I did this, would it be more efficient than the WP_Query method?
Use 'posts_per_page' => 1 and add 'no_found_rows' => true and 'fields' => 'ids'. This will return the ID of a matching post, and at the same time avoid the overhead of counting all the matching posts and fetching the entire post contents. Getting just one matching post id is far less work than counting the matching posts. And it's all you need.
Like this:
$args = array(
    'post_type' => 'product',
    'posts_per_page' => 1,
    'no_found_rows' => true,
    'fields' => 'ids',
    'meta_query' => array(
        array(
            'key' => 'partner',
            'value' => $partner,
            'compare' => 'LIKE',
        ),
    ),
);
$partner_query = new WP_Query($args);
if ($partner_query->have_posts() ) { [MAKE LINK] }
no_found_rows means "don't count the found rows", not "don't return any found rows". It's only in the code, not the documentation. Sigh.
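As for the raw-SQL idea in the question: it would look roughly like the sketch below (my own phrasing, assuming the partner value sits in a single postmeta row and using $wpdb so the table prefix is handled). It is essentially the same query WP_Query builds from the arguments above, so there is little efficiency to gain by hand-writing it.

global $wpdb;
$like = '%' . $wpdb->esc_like( $partner ) . '%';
// returns one matching ID (or NULL), without counting rows or loading full posts
$exists = $wpdb->get_var( $wpdb->prepare(
    "SELECT p.ID
     FROM {$wpdb->posts} p
     INNER JOIN {$wpdb->postmeta} pm ON pm.post_id = p.ID
     WHERE p.post_type = 'product'
       AND p.post_status = 'publish'
       AND pm.meta_key = 'partner'
       AND pm.meta_value LIKE %s
     LIMIT 1",
    $like
) );
if ( $exists ) { /* [MAKE LINK] */ }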

How to integrate MultiModelForm with EchMultiSelect in Yii?

I am running into a little snag combining the "EchMultiSelect" and "MultiModelForm" extensions for the Yii framework.
What I'm trying to do is copy a set of form elements, one of which is an EchMultiSelect widget.
According to the tutorial on the jqRelCopy page, I would need to pass a copy of the element (datePicker in their example) to the 'jsAfterNewId' option:
'jsAfterNewId' => JQRelcopy::afterNewIdDatePicker($datePickerConfig),
So, I tried to modify that to:
'jsAfterNewId' => MultiModelForm::afterNewIdMultiSelect($memberFormConfig['elements']),
Where I also added the following to MultiModelForm.php:
public static function afterNewIdMultiSelect($element)
{
    $options = isset($element['options']) ? $element['options'] : array();
    $jsOptions = CJavaScript::encode($options);
    return "if(this.attr('multiple')=='multiple'){this.multiselect(jQuery.extend({$jsOptions}));};";
}
It is copied and works properly when I use the Add Person link. But if I add/clone three items, for example, and then change the multiselect option of the third item, the change is reflected only in the first multiselect dropdown; the same goes for the others. Also, when I add new items by clicking the Add Person link, it clones the same element into the new row item.
Here is the code for the form configuration variables and the MultiModelForm widget call.
// $userList = array of the userIds from the users table
$memberFormConfig = array(
    'elements' => array(
        'userId' => array(
            'type' => 'ext.EchMultiSelect.EchMultiSelect',
            'model' => $User,
            'dropDownAttribute' => 'userId',
            'data' => $userList,
            'dropDownHtmlOptions' => array(
                'style' => 'width:500px;',
            ),
        ),
        ...
        ...
));
calling the MultiModelForm widget from the same view file
$this->widget('ext.multimodelform.MultiModelForm', array(
    'id' => 'id_member', // the unique widget id
    'formConfig' => $memberFormConfig, // the form configuration array
    'model' => $model, // instance of the form model
    'tableView' => true,
    'validatedItems' => $validatedMembers,
    'data' => Person::model()->findAll('userId=:userId', array(':userId' => $model->id)),
    'addItemText' => 'Add Person',
    'showAddItemOnError' => false, // don't allow adding items when in validation error mode (default = true)
    'fieldsetWrapper' => array(
        'tag' => 'div',
        'htmlOptions' => array('class' => 'view', 'style' => 'position:relative;background:#EFEFEF;'),
    ),
    'removeLinkWrapper' => array(
        'tag' => 'div',
        'htmlOptions' => array('style' => 'position:absolute; top:1em; right:1em;'),
    ),
    'jsAfterNewId' => MultiModelForm::afterNewIdMultiSelect($memberFormConfig['elements']),
));
Can someone help me with this please?
Thanks in advance!
After a lot of searching and googling I found the solution. Just replace the function in your MultiModelForm.php:
public static function afterNewIdMultiSelect($element)
{
    $options = isset($element['options']) ? $element['options'] : array();
    $jsOptions = CJavaScript::encode($options);
    return "if ( this.hasClass('test123456') )
    {
        var mmfComboBoxParent = this.parent();
        // clone the multiselect and select elements (without data and events)
        var mmfComboBoxClone = this.clone();
        var mmfComboSelectClone = this.prev().clone();
        // remove the old combobox
        mmfComboBoxParent.empty();
        // append the newly cloned elements
        mmfComboBoxParent.append(mmfComboSelectClone);
        mmfComboBoxParent.append(mmfComboBoxClone);
        // re-init the multiselect with the passed options
        mmfComboBoxClone.multiselect(jQuery.extend({$jsOptions}));
    }";
}
That's it!
Thanks!
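One detail that isn't spelled out above: the snippet only runs when the cloned element has the 'test123456' class, so the select rendered by EchMultiSelect needs that marker class. How to attach it is an assumption on my part, but one way is through the dropDownHtmlOptions already used in the form configuration:

'userId' => array(
    'type' => 'ext.EchMultiSelect.EchMultiSelect',
    'model' => $User,
    'dropDownAttribute' => 'userId',
    'data' => $userList,
    'dropDownHtmlOptions' => array(
        'style' => 'width:500px;',
        'class' => 'test123456', // marker class checked in afterNewIdMultiSelect()
    ),
),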

How to SORT a SET and GET the full HASH

I'm new to Redis and I have to say I love it so far :)
I'm bumping into an issue that I'm not sure how to solve in the most efficient way.
I have a SET of HASHes. Each HASH describes a post.
Here is the code to create and store the HASH:
// Create the HASH
$key = 'post:'.$post->getId();
$this->redis->hSet($key, 'created', $post->getCreated());
$this->redis->hSet($key, 'author', $post->getAuthor());
$this->redis->hSet($key, 'message', $post->getMessage());
// Store the HASH in the SET
$this->redis->sAdd('posts', $post->getId());
Now, previously I was storing all the post's attributes in a data field of the HASH (json_encoded) and I was fetching the information like this:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array($offset, $limit),
    'get' => 'post:*->data'
));
if (!is_array($data)) {
    return array();
}
foreach ($data as &$post) {
    $post = json_decode($post, true);
}
It was working great; I had all the posts' information :)
But I had conflicts when updating a post in Redis (concurrent updates), so I've decided to keep all the post's attributes in separate fields of the HASH, and that fixed my conflict issue.
Now the problem I have is fetching the HASHes from my SET. Do I have to specify every single field, like this:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array($offset, $limit),
    'get' => array('post:*->created', 'post:*->author', 'post:*->message')
));
Or is there another way to fetch the full HASH directly from the SET?
I've heard about pipelining, but I'm not sure it's what I'm looking for, or whether I can use it with phpredis.
Cheers, Maxime
UPDATE
I'm not sure I explained myself clearly. I have some elements in a SET (post ids).
I want to get the first 10 posts of the SET, which means I want 10 hashes (with all their fields and values) in order to build a post object.
I was previously storing all the object's information in one field of the hash (data); now I have one field per attribute of the object.
before:
myHash:<id> data
now:
myHash:<id> id "1234" created "2010-01-01" author "John"
Before I was using SORT to fetch the top 10 posts (and paginate easily), like this:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array(0, 10),
    'get' => 'post:*->data'
));
Now that I have X members in my hash, I'm wondering what the best solution is.
Is it:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array($offset, $limit),
    'get' => array('post:*->created', 'post:*->author', 'post:*->message')
));
Or maybe:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array($offset, $limit),
    'get' => '#'
));
foreach ($data as $post_id) {
    $posts[] = $this->redis->hGetAll('post:'.$post_id);
}
Or finally:
$key = 'posts';
$data = $this->redis->sort($key, array(
    'by' => 'nosort',
    'limit' => array($offset, $limit),
    'get' => '#'
));
$pipeline = $this->redis->multi();
foreach ($data as $post_id) {
    $pipeline->hGetAll('post:'.$post_id);
}
return $pipeline->exec();
Or something else that I don't know about yet?
What is the best and fastest way to do this?
If you read Redis's source, you'll find that this is not possible. There is a workaround: use a Lua script to combine the SORT and HGETALL commands in a single Redis invocation.
The 'get pattern' is processed by the function lookupKeyByPattern:
https://github.com/antirez/redis/blob/unstable/src/sort.c#L61
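A rough sketch of that Lua workaround through phpredis's eval() (my own assembly, not from the answer; note that it builds key names inside the script, which is fine on a single instance but not cluster-safe, and each post comes back as a flat field/value list rather than an associative array):

$script = <<<'LUA'
local ids = redis.call('SORT', KEYS[1], 'BY', 'nosort', 'LIMIT', ARGV[1], ARGV[2])
local posts = {}
for i, id in ipairs(ids) do
    posts[i] = redis.call('HGETALL', 'post:' .. id)
end
return posts
LUA;

// one key (the set name), then offset and count as ARGV
$data = $this->redis->eval($script, array('posts', $offset, $limit), 1);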
If you start with the redis.io documentation on hashes, you'll find there are commands that allow you to get multiple hash members. In particular, HGETALL pulls all fields and values, and HMGET pulls a given set of fields with their values.
Additionally, for setting them I would recommend doing it in one pass with HMSET.
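In phpredis those commands map to hMSet() / hMGet() / hGetAll(); a small sketch (mine, reusing the keys from the question) of writing the hash in one round trip and reading fields back:

$key = 'post:' . $post->getId();

// one HMSET instead of three separate hSet() calls
$this->redis->hMSet($key, array(
    'created' => $post->getCreated(),
    'author'  => $post->getAuthor(),
    'message' => $post->getMessage(),
));

// everything...
$all = $this->redis->hGetAll($key);
// ...or only the fields you need
$some = $this->redis->hMGet($key, array('created', 'author'));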

Set specific node IDs when importing data into Drupal 7

I'm using the Feeds module to import my existing data into Drupal 7 and it works great, but I have one issue with the nids it generates.
I want these to match my existing site's IDs so I can have a nice clean transition between old and new, keeping even the same URLs.
Two approaches here:
1. Somehow assign these nids as part of the import.
2. Renumber the nids after import.
I can't find any module or other code via Google to do either, so it looks like I will have to hack something together myself... has anyone done this before?
-
Could it be as simple as updating all these?
SELECT table_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE column_name = 'nid'
comment
history
location_instance
node
node_access
node_comment_statistics
node_counter
node_revision
search_node_links
taxonomy_index
edit: and these...
SELECT table_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE column_name = 'entity_id'
feeds_item
field_data_body
field_data_comment_body
field_data_field_address
field_data_field_image
field_data_field_state
field_data_field_tags
field_data_field_type
field_data_field_website
field_revision_body
field_revision_comment_body
field_revision_field_address
field_revision_field_image
field_revision_field_state
field_revision_field_tags
field_revision_field_type
field_revision_field_website
Here's what I did in the end...
It seems to all be working correctly, but please be very careful and make a backup (like I did) before doing anything like this.
header('Content-type: text/plain');

global $database, $tables, $prefix;
$database = // your database
$prefix = 'drupal_';
$tables = array(
    'comment' => 'nid',
    'history' => 'nid',
    'location_instance' => 'nid',
    'node' => 'nid',
    'node_access' => 'nid',
    'node_comment_statistics' => 'nid',
    'node_counter' => 'nid',
    'node_revision' => 'nid',
    'search_node_links' => 'nid',
    'taxonomy_index' => 'nid',
    'feeds_item' => 'entity_id',
    'field_data_body' => 'entity_id',
    'field_data_comment_body' => 'entity_id',
    'field_data_field_address' => 'entity_id',
    'field_data_field_image' => 'entity_id',
    'field_data_field_state' => 'entity_id',
    'field_data_field_tags' => 'entity_id',
    'field_data_field_type' => 'entity_id',
    'field_data_field_website' => 'entity_id',
    'field_revision_body' => 'entity_id',
    'field_revision_comment_body' => 'entity_id',
    'field_revision_field_address' => 'entity_id',
    'field_revision_field_image' => 'entity_id',
    'field_revision_field_state' => 'entity_id',
    'field_revision_field_tags' => 'entity_id',
    'field_revision_field_type' => 'entity_id',
    'field_revision_field_website' => 'entity_id',
);

// Move all nids +10000 (out of the way)
$query = "SELECT nid FROM {$prefix}node WHERE nid < 10000 ORDER BY nid";
echo "$query\n";
$result = $database->query($query);
while ($data = $result->fetchRow()) {
    echo "Processing nid: {$data['nid']}\n";
    changeNodeId($data['nid'], $data['nid'] + 10000);
}

// Move all nids to match guids
// (I originally imported through the feeds module, so used the guids to reorder here,
// but you can use your own logic as required...)
$query = "SELECT guid, entity_id FROM {$prefix}feeds_item WHERE guid <> entity_id ORDER BY ABS(guid)";
echo "$query\n";
$result = $database->query($query);
while ($data = $result->fetchRow()) {
    echo "Processing guid: {$data['guid']} (nid: {$data['entity_id']})\n";
    changeNodeId($data['entity_id'], $data['guid']);
}

function changeNodeId($old, $new)
{
    global $database, $tables, $prefix;
    echo "Updating nid: {$old} -> {$new}\n";

    // Check new doesn't already exist
    $query = "SELECT * FROM {$prefix}node WHERE nid={$new}";
    $result = $database->query($query);
    if ($result->fetchRow()) {
        echo "Error nid: {$new} already exists!\n";
        return;
    }

    foreach ($tables as $table => $column) {
        $query = "UPDATE {$prefix}{$table} SET {$column} = {$new} WHERE {$column} = {$old}";
        echo "$query\n";
        $database->query($query);
    }
}
Notes:
The tables listed above worked for me; it will almost certainly be different for you, depending on which modules you have installed.
This will break your menus and any URL aliases you have set, so you have to go through afterwards and fix these up manually; not a major job though.
It's also a good idea to reset the auto increment ID on the node table: ALTER TABLE node AUTO_INCREMENT = X, where X is 1 greater than the highest nid.
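For that last step, a small sketch using the same $database wrapper and $prefix as the script above (my own addition, not part of the original write-up):

// find the highest nid and point AUTO_INCREMENT just past it
$result = $database->query("SELECT MAX(nid) + 1 AS next_nid FROM {$prefix}node");
$row = $result->fetchRow();
$database->query("ALTER TABLE {$prefix}node AUTO_INCREMENT = {$row['next_nid']}");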
If your source contains the original nids, you can just set a mapping in your feed importer to assign the nid. This way there should be no reason to manipulate the db subsequent to the import process, as each node will simply be assigned the same nid.
Obviously this may break if you have existing nodes in your site with the same nids.