Piranha CMS: how to find child pages for a page - piranha-cms

How can I find a page's child pages, to get their titles and links, from within a Block? (I mean the way pages are structured in the manager.)
I tried a bit with the Sitemap, but I'm having a little trouble instantiating the object. How do I do it?
It would be a bit of a hassle to loop through all the nodes in the Sitemap to find the correct page. Is there a better way?

To get the sitemap structure you simply call:
var sitemap = api.Sites.GetSitemap();
If you have multiple sites you'll need to specify the site you want, otherwise the sitemap for the default site is returned.
var sitemap = api.Sites.GetSitemap(siteId);
Once you have the sitemap you can get a partial sitemap from your current page and do something fun with the subpages with the following code:
var sitemap = api.Sites.GetSitemap();
var partial = sitemap.GetPartial(myPage.Id);
foreach (var subpage in partial)
{
    // Do your stuff here!
}
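Each of those subpage items already carries the title and link the question asks about. A minimal sketch of the loop body (assuming the SitemapItem properties Title and Permalink of current Piranha versions):
foreach (var subpage in partial)
{
    // Build a link for each child page
    var html = "<a href=\"" + subpage.Permalink + "\">" + subpage.Title + "</a>";
}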
Best regards
Håkan

Related

Indexed_search only the news detail

I'm setting up a TYPO3 v9.5 website with the Indexed_search extension.
When I search for a word using the search box on the frontend, it shows all results: the home page, category pages, and news pages.
Is there a way to index/search only the news item detail pages?
There are multiple ways to achieve this.
In my opinion the simplest (without setting up crawler configurations) would be to limit indexing to only this page.
See https://docs.typo3.org/c/typo3/cms-indexed-search/master/en-us/Configuration/General/Index.html
On your root page you would set page.config.index_enable = 0 in TypoScript setup and on your news detail page page.config.index_enable = 1. Then clear and rebuild the index.
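Concretely, that amounts to one line in the root template's TypoScript setup and one line in an extension template on the news detail page (a sketch of the two snippets; the comments are just labels):
# Root page, TypoScript setup: disable indexing site-wide
page.config.index_enable = 0

# News detail page, extension template setup: re-enable indexing here only
page.config.index_enable = 1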
Another possibility for smaller sites is to filter the shown results in your Fluid template. I would not really suggest that, but it works, too.

Programmatically render template area in Magnolia CMS

I'm looking into rendering specific content in Magnolia, such as components, with the RenderingEngine, and I found this topic on Stack Overflow: Programmatically render template area in Magnolia CMS.
My question is about the structure of the following classes: FilteringAppendableWrapper, and FakeResponse, which is used to put a fake HTTP response into the AppendableFilteringResponseOutputProvider.
Thanks for any help.
What about it do you want to ask? Why he didn't use the real response? Because he was faking the whole (web) request to a page too, then getting the output from his "FakeResponse" and adding it to a JSON array to fill in the body of the response to a request that came in over REST.
Personally, I think such a solution is overkill for the job. If I had to do the same, I would probably simply register a variation template for the component or the whole page, and when requesting the page with e.g. an .ajax extension my variation would kick in and render the whole page (or area, or component) as a JSON array.
HTH,
Jan
There is no need for a fake response. This simple code works well:
Session session = MgnlContext.getJCRSession("website");
Node content = session.getNode("/protoTest/content/01");
// Resolve the template definition assigned to the content node
TemplateDefinition componentDefinition = templateDefinitionAssignment.getAssignedTemplateDefinition(content);
// Render into an in-memory buffer instead of the servlet response
OutputStream outputStream = new ByteArrayOutputStream();
OutputStreamWriter writer = new OutputStreamWriter(outputStream);
AppendableOnlyOutputProvider appendable = new AppendableOnlyOutputProvider(writer);
renderingEngine.render(content, componentDefinition, new HashMap<String, Object>(), appendable);
((Writer) appendable.getAppendable()).flush();
System.out.println(outputStream.toString());
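Here renderingEngine and templateDefinitionAssignment are standard Magnolia components; a sketch of how you might obtain them if they are not injected (assuming Magnolia's Components registry):
// Typically @Inject-ed; otherwise resolvable via the component provider
RenderingEngine renderingEngine = Components.getComponent(RenderingEngine.class);
TemplateDefinitionAssignment templateDefinitionAssignment = Components.getComponent(TemplateDefinitionAssignment.class);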

Accessing global site regions in Piranha CMS

I have added a global site region to my site and filled it with some content. How can I read this content from page view and/or layout?
This feature differs a bit between WebPages & MVC, the reason being that in WebPages (like WebForms) a Layout page has a different model than the actual page being executed. If you use WebPages you simply add the following line first in the Layout page:
@inherits Piranha.WebPages.LayoutPage
This will automatically load the layout page model and all the global regions.
If you're using MVC this can't be done automatically as the Layout doesn't have a model, but you can simply add the following in your Layout page:
@{
    Piranha.Models.PageModel global;
    if (HttpContext.Current.Items["Piranha_CurrentPage"] != null) {
        var current = (Piranha.Models.Page)HttpContext.Current.Items["Piranha_CurrentPage"];
        global = Piranha.Models.PageModel.GetBySite(current.SiteTreeId);
    } else {
        global = Piranha.Models.PageModel.GetBySite(Piranha.Config.SiteTreeId);
    }
}
This snippet loads the global page model as follows:
If a page is being displayed, it loads it from the site tree that the page is contained in.
If a post is being displayed, it loads it from the site tree of the current site.
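You can then read the global regions off that model further down in the Layout. A minimal sketch, assuming a global region named Footer (a placeholder name) and the classic PageModel's dynamic Regions property:
@global.Regions.Footer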
Hope this helps you!
Regards
/Håkan

There is a way in Tumblr to get media images URL in the same domain of the blog name?

Let's say my blog is http://foo.tumblr.com.
All the posts' images are stored on xx.media.tumblr... (for example: https://24.media.tumblr.com/tumblr_kzjlfiTnfe1qz4rgho1_250.jpg) (the first two numbers can be skipped)
But I want the image URLs to be on the same domain as my blog, looking something like this:
http://foo.tumblr.com/tumblr_kzjlfiTnfe1qz4rgho1_250.jpg
(that doesn't exist)
Why do I need that? I am creating a script that generates a canvas and detects whether an image has transparency using getImageData (all the .jpg files are skipped), but since the subdomain is different, I get a cross-domain security error and the canvas is tainted, which prevents the use of getImageData.
So... how can I do that?
I think the Tumblr API could be useful, but how?
Scrape your sitemap for all posts and get their images. You could use the API, or do it with JavaScript in the browser console:
var xmlin = prompt(); // paste in view-source:biochemistri.es/sitemap1.xml
var parser = new DOMParser();
var xmlDoc = parser.parseFromString(xmlin, "text/xml");
xmlDoc.querySelectorAll('loc')[0].remove(); // drop the first <loc> entry
var posts = xmlDoc.querySelectorAll('loc');
var postlist = [];
for (var i = 0; i < posts.length; i++) { postlist.push(posts[i].innerHTML); }
...to generate an array containing all posts, which can be navigated through for photo posts (div.post.photo) and their URLs copied.
Then simply generate a new list of images with a for loop and newImg = document.createElement('img'), setting an origin attribute with newImg.setAttribute('origin', myPhotoList[n]), which can then be used to select an image programmatically:
document.querySelector("img[origin='" + {PhotoURL-HighRes} + "']")
(or {PhotoURL-1280}, {PhotoURL-500}, {PhotoURL-250}, etc.). Once retrieved over an XMLHttpRequest, you could simply switch the post in the DOM. The {PhotoURL-HighRes} in my example above wouldn't work as written; it'd be an attribute from the page. I'm just indicating which part you'd want to get from the theme HTML.
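A rough sketch of that loop (myPhotoList here is a hypothetical array holding the photo URLs collected from the sitemap scrape):
for (var n = 0; n < myPhotoList.length; n++) {
    var newImg = document.createElement('img');
    newImg.setAttribute('origin', myPhotoList[n]); // tag the element with its source URL
    newImg.src = myPhotoList[n];
    newImg.style.display = 'none'; // keep the holding page visually empty
    document.body.appendChild(newImg);
}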
As explained in this post, there is a variable which could be used as a more concise attribute than the full origin URL if you want to be a bit more specific with regular expressions.
This would effectively put all of your images onto your own URL, with addresses like foo.tumblr.com/images/tumblr_kzjlfiTnfe1qz4rgho1_250.jpg, and avoid the cross-domain restrictions. I'm guessing it'd only work if you don't have a ton of posts, as custom pages such as you'd be using to store the images do have a size restriction (though I suppose you could make a second one).
It might also be sensible to include CSS setting display: none in case anyone stumbles upon the page by accident, plus a redirect to the homepage with window.onload or similar.

save phantom js processed page into html file with absolute url

I want to save certain web pages, after the document has loaded, under a specific file name, with all URLs and links converted to absolute ones, the way wget -k does.
// phantomjs
var page = require('webpage').create();
var url = 'http://google.com/';
page.open(url, function (status) {
    var js = page.evaluate(function () {
        return document;
    });
    console.log(js.all[0].outerHTML);
    phantom.exit();
});
For example, a link in my HTML content that looks like this:
<a href="/page">page</a>
must become:
<a href="http://google.com/page">page</a>
This is my sample script, but how can I convert all URLs and links, as wget -k does, using PhantomJS?
You can modify your final HTML so that it has a <base> tag; this will make all relative URLs work. In your case, try putting <base href="http://google.com/"> right after the <head> tag of the page.
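A minimal sketch of that approach, injecting the tag from the PhantomJS side (the URL is just the one from the question; passing arguments to evaluate needs PhantomJS 1.6+):
var page = require('webpage').create();
var url = 'http://google.com/';
page.open(url, function (status) {
    page.evaluate(function (baseUrl) {
        // Insert <base> as the first child of <head>
        var base = document.createElement('base');
        base.href = baseUrl;
        document.head.insertBefore(base, document.head.firstChild);
    }, url);
    console.log(page.content); // serialized HTML, relative URLs now resolve via <base>
    phantom.exit();
});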
It is not really supported, as PhantomJS is more than just an HTTP client. Imagine if there is JavaScript code which pulls random content with images onto the main landing page.
A workaround, which may or may not work for you, is to rewrite all the referred resources in the DOM. This is possible using some CSS3 selectors (href for a, src for img, etc.) and manual path resolution relative to the base URL. If you really need to track and enlist every single resource URL, use the network traffic monitoring feature.
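A sketch of that rewriting step, run inside the page context (it relies on the fact that reading element.href or element.src returns the already-resolved absolute URL):
page.evaluate(function () {
    // Write each resolved absolute URL back into its attribute
    [].forEach.call(document.querySelectorAll('a[href]'), function (a) {
        a.setAttribute('href', a.href);
    });
    [].forEach.call(document.querySelectorAll('img[src], script[src]'), function (el) {
        el.setAttribute('src', el.src);
    });
});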
Last but not least, to get the generated content you can use page.content instead of that complicated dance with evaluate and outerHTML.