How to make a reverse foreach search in a Velocity script? - velocity

Marketo has a limit of 10 most recent opportunities that are searchable, and unfortunately we have a good number of users with more than 10 opportunities.
It appears the foreach loop starts at the least recently updated opportunity and works its way up the list to the most recently updated one. The issue here is that when they have more than 10, the script can't reach the opportunities that were most recently updated. We could get around this by reversing the order in which the script searches the opportunity list (i.e. by reversing the foreach).
This is the setup we have now (the script looks for a set of conditions within an opportunity; if it doesn't find them, it looks for a different set, and so on).
#set($stip_guid = ${StipList.get(0).stip_opp_guid})
#foreach($opportunity in $OpportunityList)
#if($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_clear_to_close_date)
Display Unique Copy A
#break
#elseif($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_sent_to_underwriting)
Display Unique Copy B
#break
#elseif($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_processing_received)
Display Unique Copy C
#break
#else
Default Copy
#break#end#end

Marketo doesn't seem to provide a tool that would reverse a collection.
But why not loop over indices rather than over the objects themselves?
#set($max = $OpportunityList.size() - 1)
## count down from the last (most recently updated) index to the first
#foreach($i in [$max..0])
#set($opportunity = $OpportunityList.get($i))
...
#end
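Putting the two together, here is an untested sketch that reverses the search while keeping the original conditions (it assumes the same field names and tokens as in the setup above, and that the opportunity list is non-empty):
#set($stip_guid = ${StipList.get(0).stip_opp_guid})
#set($max = $OpportunityList.size() - 1)
## index $max is the most recently updated opportunity, index 0 the oldest
#foreach($i in [$max..0])
#set($opportunity = $OpportunityList.get($i))
#if($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_clear_to_close_date)
Display Unique Copy A
#break
#elseif($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_sent_to_underwriting)
Display Unique Copy B
#break
#elseif($opportunity.o_opportunity_guid == $stip_guid && $opportunity.o_processing_received)
Display Unique Copy C
#break
#else
Default Copy
#break#end#end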

How to get just the most recent version of all documents

In Sanity Studio you get a nice list of the most recent version of all your documents: if there is a draft you get that; if not, you get the published one.
I need the same list for a few filters and scripts. The following GROQ does the job, but it is not very fast and does not work in the new API (v2021-03-25).
*[
_type == $type &&
!defined(*[_id == "drafts." + ^._id])
]._id
A way around the breaking changes in the API is to use length() == 0 in place of !defined(), but that makes an already slow query 10-20× slower.
Does anyone know a way of making filters that consider only the latest version?
Edit: An example of where I need this is when I want to see all documents without any categories. Regardless of whether it is the published document or the draft that has no categories, it shows up in a normal filter. So if you add categories but don't immediately want to publish, the no-categories list becomes confusing. ,'-)
100× improvement on API v2021-03-25 🥳
The only way I was able to solve this with speed was to first make a projection of the sub-query so it doesn't run once for every non-draft. Then I thought: why not project both sets and then figure out the overlap? That was even faster! It runs more than 10× faster than what was possible on API v1 and 100× faster than any other suggestion for the new API.
{
'drafts': *[ _type == $type && _id in path("drafts.**") ]._id,
'published': *[ _type == $type && !(_id in path("drafts.**"))]._id,
}
{
'current': published[ !("drafts." + @ in ^.drafts) ] + drafts
}
First I get both drafts and non-drafts and "store" them in this projection, a bit like a variable 😉
Then I start with my non-drafts: published
And filter out any that have a counterpart in my drafts "variable"
Lastly I add all drafts to my list of filtered non-drafts
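For reference, the two steps can be chained into a single query. This is an untested sketch; chaining the projections like this and reading the result back with a trailing .current are my own assumptions about what the API will accept:
{
'drafts': *[ _type == $type && _id in path("drafts.**") ]._id,
'published': *[ _type == $type && !(_id in path("drafts.**")) ]._id,
}
{
'current': published[ !("drafts." + @ in ^.drafts) ] + drafts
}.current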
Overall I think you're on the right track. Some ideas to help you out:
Drafts are always fresher and newer than published documents, so if a given doc's _id is in path("drafts.**"), it is already the most recently updated version.
Knowing the above allows you to skip the defined(*[_id == ...]) part of the query for drafts, speeding up the execution
As drafts are already included, we can exclude published documents that have a draft (defined(*[_id == "drafts." + ^._id][0]))
Notice I added a [0] to the end of the query to pick only the first element that matches. This will improve performance slightly.
For getting only documents that have no categories, use count(categoriesField) < 1
Order documents with | order(_updatedAt desc) to get the freshest documents first
And paginate your request to reduce the payload and speed things up.
Here's a sample query applying these principles (I haven't run it, so you may have to make some adjustments):
*[
_type == $type &&
// Assuming you only want those without categories:
count(categories) < 1 &&
(
// Is either a draft -> drafts are always fresher
_id in path("drafts.**") ||
// Or a published document with no draft
!defined(*[_id == "drafts." + ^._id][0])
// 👆 with the check above we're ensuring only
// published documents run the expensive defined query
)
]
// Order by last updated
| order(_updatedAt desc)
// Paginate for faster queries
[$paginationStart..$paginationEnd]
// Get only the _id, assuming that's what you want
._id
Hope this helps 🙌

ImageJ batch processing - opening a series of images containing a specific name and doing stuff on them

I have 25K tif files (please don't ask why) that I want to organize into stacks in ImageJ. Basically, for each region of interest (ROI) there are 50 images, which break down into 25 z-planes for two channels. I want everything in a single stack, and I'd like to batch process the whole folder without opening 50 images at a time, 500 times over. I've attached a picture of what the file names look like:
Folder organization
r01c01f01p01-ch1.tif - the first 10 characters are a unique ID for each ROI, then the plane number (p01), then the channel - ch1 or ch2 - then the file extension
Here's what I have so far (which I cobbled together based on other macros, so it may not make sense...). This is using the ImageJ macro language.
//Processing loop to process each file in the folder.
for (i=0; i<list.length; i++) {
showProgress(i+1, list.length);
if (endsWith(list[i], ".tif")) { // skip the subfolder (I create a subfolder earlier in the macros)
print("-- Processing file: " + list[i] + " --");
open(dir+list[i]);
imageTitle= getTitle();
newTitle = substring(imageTitle, 0, lengthOf(imageTitle)-10); // r01c01f01p, cutting off plane number and then the rest to just get the ROI ID
//This is where I'm stuck:
// find all files containing newTitle and open them (which would be 50 at a time), then run the following macros on them
run("Images to Stack", "name=Ch1 title=[] use");
run("Duplicate...", "title=Ch2 duplicate");
selectWindow("Ch1");
run("Slice Remover", "first=1 last=50 increment=2");
selectWindow("Ch2");
run("Slice Remover", "first=2 last=50 increment=2");
run("Merge Channels...", "c1=Ch1 c2=Ch2 create");
saveAs("tiff", dirNew + newTitle + "_Stack.tif");
//Close(All)?
}
}
print("-- Done --");
showStatus("Finished.");
setBatchMode(false); // Exit batch mode
run("Collect Garbage");
Thank you!
You could do something like:
for (plane=1; plane<=25; plane++) { // 25 z-planes per channel
open(dir + newTitle + IJ.pad(plane, 2) + "-ch1.tif"); // IJ.pad() zero-pads the plane number to match "p01".."p25"
open(dir + newTitle + IJ.pad(plane, 2) + "-ch2.tif");
}
That would take care of the opening. I would be inclined to have a loop prior to this which collates the unique "newTitle"s, as your current setup would end up doing something like opening the first item, assembling the combined TIF, and then repeating the process 25K times, if I understand it correctly.
Given that you know the number of unique "r01c01f01p" values, in principle you could do a set of stacked loops akin to:
newTitleArray = newArray();
for (r=1; r<50; r++) {
titleBit = "r0" + toString(r);
for (c=1; c<501; c++) {
titleBit = titleBit + "f0"...
Alternatively, you could set up a loop where you check for unique "r01c01f01p" values and add them to an array (see the sketch below). In either case, you'd replace the for "list" loop with the for "newTitleArray" loop and then continue on to the opener I listed above, instead of your existing one.
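A rough, untested sketch of that second approach (it assumes dir, dirNew and list are set up as earlier in your macro, and reuses the first 10 characters of the file name as the ROI ID):
// collect the unique 10-character ROI IDs from the file list
newTitleArray = newArray(0);
for (i = 0; i < list.length; i++) {
if (endsWith(list[i], "-ch1.tif")) {
roiID = substring(list[i], 0, 10); // e.g. "r01c01f01p"
found = false;
for (j = 0; j < newTitleArray.length; j++) {
if (newTitleArray[j] == roiID) found = true; // already collected
}
if (!found) newTitleArray = Array.concat(newTitleArray, roiID);
}
}
// then loop over the ROI IDs instead of over every file
for (n = 0; n < newTitleArray.length; n++) {
newTitle = newTitleArray[n];
// ...the plane/channel opening loop from above, then Images to Stack,
// Slice Remover, Merge Channels and saveAs as in the original macro...
}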
If I am understanding correctly, it seems like you might do well to stack by channel first, then merge the two. I am not 100% sure, but I think you could potentially use a macro I have already created to do that. It was originally meant to batch process terabytes of 5D data, so it should be very comfortable handling your volume of images. It is not exactly what you are looking for, but it should be super easy to modify (I went a little overboard with the commenting in the code), and I think the only thing it does that you might rather it not is produce max projections from the inputs. I'll throw a link here and look for your reply. If it's of interest, let me know and we can work to make it suit your needs together :-) Otherwise, if you could provide a little more detail about where you're getting stuck and/or where I may have misunderstood, I will do my very best to help!
https://github.com/evanjkiely/FIJIMacros

Using deepstream List for tens of thousands of unique values

I wonder if it's a good or bad idea to use deepstream record.getList for storing a lot of unique values, for example emails or any other unique identifiers. The main purpose is to be able to quickly answer whether we already have, say, a user with a given email (email in use), or another record with a specific unique field.
I made a few experiments today and ran into two problems:
1) when I tried to populate the list with a few thousand values I got
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
and my deepstream server went down. I was able to fix it by adding more memory to the server node process with this flag:
--max-old-space-size=5120
It doesn't look right, but it allowed me to make a list with more than 5000 items.
2) It wasn't enough for my tests, so I pre-created a list with 50000 items and put the data directly into the rethinkdb table, and got another issue when getting the list or modifying it:
RangeError: Maximum call stack size exceeded
I was able to fix it with another flag:
--stack-size=20000
It helps, but I believe it's only a matter of time before one of those errors appears in production once the list reaches a certain size. I don't really know whether it's a nodejs, javascript, deepstream or rethinkdb issue. All of this made me think that I'm trying to use deepstream List the wrong way. Please let me know. Thank you in advance!
Whilst you can use lists to store arrays of strings, they are actually intended as collections of record names - the actual data would be stored in the record itself; the list only manages the order of the records.
Having said that, there are two open GitHub issues to improve performance for very long lists, by sending more efficient deltas and by introducing a pagination option.
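To illustrate the record-per-value approach, here is a minimal sketch; the 'users/...' naming scheme is made up for the example, and it assumes ds is a connected deepstream client as in the snippet further down:
// assuming ds is a connected deepstream client
var email = 'someone@example.com';
var recordName = 'users/' + email.toLowerCase(); // hypothetical naming scheme
// has() answers "is this email already in use?" without loading a huge list
ds.record.has( recordName, function( error, hasRecord ) {
  if( hasRecord ) {
    console.log( 'email already in use' );
  } else {
    // store the user's data in its own record; a list is only needed
    // if you also want an ordered collection of record names
    var userRecord = ds.record.getRecord( recordName );
    userRecord.set({ email: email, createdAt: Date.now() });
  }
});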
Interesting results with regard to memory, though - definitely something that needs to be handled more gracefully. In the meantime you could drastically improve performance by combining updates into one:
var myList = ds.record.getList( 'super-long-list' );
// Sends 10.000 messages
for( var i = 0; i < 10000; i++ ) {
myList.addEntry( 'something-' + i );
}
// Sends 1 message
var entries = [];
for( var i = 0; i < 10000; i++ ) {
entries.push( 'something-' + i );
}
myList.setEntries( entries );

Update multiple records in one query with Active Record in Rails

Is there a better way to update multiple records with different values in one query in Ruby on Rails? I solved it using CASE in SQL, but is there an Active Record solution for that?
Basically I save a new sort order when a new list arrives back from a jQuery AJAX post.
#List of product ids in sorted order. Get from jqueryui sortable plugin.
#product_ids = [3,1,2,4,7,6,5]
# Simple solution which generate a loads of queries. Working but slow.
#product_ids.each_with_index do |id, index|
# Product.where(id: id).update_all(sort_order: index+1)
#end
##CASE syntax example:
##Product.where(id: product_ids).update_all("sort_order = CASE id WHEN 539 THEN 1 WHEN 540 THEN 2 WHEN 542 THEN 3 END")
case_string = "sort_order = CASE id "
product_ids.each_with_index do |id, index|
case_string += "WHEN #{id} THEN #{index+1} "
end
case_string += "END"
Product.where(id: product_ids).update_all(case_string)
This solution is fast and uses only one query, but I'm building a query string like in PHP. :) What would be your suggestion?
You should check out the acts_as_list gem. It does everything you need and it uses 1-3 queries behind the scenes. It's a perfect match for the jQuery sortable plugin. It relies on incrementing/decrementing the position (sort_order) field directly in SQL.
This won't be a good solution if your UI/UX relies on the user saving the order manually (the user sorts things out and then clicks update/save). However, I strongly discourage this kind of interface unless there is a specific reason (for example, you cannot have an intermediate state in the database between the old and new order, because something else depends on that order).
If that's not the case, then by all means just do an asynchronous update after the user moves one element (and acts_as_list will be great to help you accomplish that).
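For reference, a minimal sketch of that setup (assuming your products table already has the sort_order column, and a hypothetical controller action that receives the moved product's id and new position from the sortable callback):
# app/models/product.rb
class Product < ActiveRecord::Base
  acts_as_list column: :sort_order
end

# in the controller action called by the jQuery sortable callback
product = Product.find(params[:id])
product.insert_at(params[:position].to_i) # acts_as_list shifts the other rows in SQL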
Check out:
https://github.com/swanandp/acts_as_list/blob/master/lib/acts_as_list/active_record/acts/list.rb#L324
# This has the effect of moving all the higher items down one.
def increment_positions_on_higher_items
return unless in_list?
acts_as_list_class.unscoped.where(
"#{scope_condition} AND #{position_column} < #{send(position_column).to_i}"
).update_all(
"#{position_column} = (#{position_column} + 1)"
)
end

Adding more information to the Magento packing slip or invoice PDF

How can I add additional information to the Magento packing slip PDF? I am using integrated label paper, so I would like to repeat the customer's delivery address in the footer, along with some details like the total quantity of items in the order and the cost of the items in the order. I am currently modifying local files in Mage/Sales/Model/Order/Pdf/ but I have only managed to change the font.
EDIT:
Okay, I have made good progress and added most of the information I need. However, I have now stumbled across a problem: I would like to get the total weight and quantity of all items in each order.
I am using this code in the shipment.php:
Under the foreach (this is useful in case an order has a split delivery, as you can have one order with multiple "shipments"), which is why I have the code after this:
foreach ($shipment->getAllItems() as $item){
if ($item->getOrderItem()->getParentItem()) {
continue;
...
then I have this further down:
$shippingweight=0;
$shippingweight= $item->getWeight()*$item->getQty();
$page->drawText($shippingweight . Mage::helper('sales'), $x, $y, 'UTF-8');
This is great for row totals, but not for the whole shipment. What I need is for this bit of code to be "added up" for each item in the shipment, to create the total weight of the whole shipment. It is very important that I only total the shipment - not the order - as I will be splitting shipments.
Almost at the end of getPdf(...) in Mage_Sales_Model_Order_Pdf_Shipment you will find the code that inserts new pages (lines 93-94 in 1.5.0.1)
if($this->y<15)
$page = $this->newPage(....)
happening in a for-loop.
You can change the logic here to make it change page earlier to make room for your extra information and then add it before the page shift. You should also place code after the for-block if you want it to appear on the last page as well.
Note: You should not change files in app/code/core/Mage directly. Instead, you should place your changed files under app/code/local/Mage using the same folder structure. That way your changes won't accidentally get overwritten in an upgrade.
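For example, for the shipment PDF model the two paths would look like this (the local copy mirrors the core folder structure):
app/code/core/Mage/Sales/Model/Order/Pdf/Shipment.php (original - leave untouched)
app/code/local/Mage/Sales/Model/Order/Pdf/Shipment.php (your copy - make your edits here)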
This should get you the total number of items in your order (there might be a faster way, but I don't know it off the top of my head):
$quote = Mage::getModel('sales/quote')->load($order->getQuoteId());
$itemsCount = $quote->getItemsSummaryQty();
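For the total shipment weight asked about in the edit, a hedged sketch building on the same getAllItems() loop from the question - accumulate per item inside the loop and draw the total once after it (variable names here are illustrative):
$totalWeight = 0;
foreach ($shipment->getAllItems() as $item) {
    if ($item->getOrderItem()->getParentItem()) {
        continue;
    }
    // add this row's weight to the running total for the whole shipment
    $totalWeight += $item->getWeight() * $item->getQty();
}
// after the loop, draw the accumulated shipment total once
$page->drawText('Total weight: ' . $totalWeight, $x, $y, 'UTF-8');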
For the invoice PDF, at app/code/local/Mage/Sales/Model/Order/Pdf/Items/Invoice/Default.php, add this before $lineBlock:
$product = Mage::getModel('catalog/product')->loadByAttribute('sku', $this->getSku($item), array('weight'));
$lines[][] = array(
'text' => 'Weight: '. $product->getData('weight')*1 .'Kg/ea. Total: ' .$product->getData('weight')*$item->getQty() . 'Kg',
'feed' => 400
);