I am trying to create an academic document. I use Quarto to be able to include code and formulas. How can I include my university's logo in the top left corner of my document?
For the moment, here is what I have managed to do:
---
title: "test"
header-includes:
  - \usepackage{titling}
  - \pretitle{\begin{center}\LARGE\includegraphics[width=12cm]{nantesuniv.png}\\[\bigskipamount]}
  - \posttitle{\end{center}}
output:
  pdf_document:
    toc: true
format: pdf
documentclass: report
papersize: letter
---
However, I can only control the width and height of my logo, not its position.
You could try the quarto titlepages extension:
---
title: "test"
author: "Example Author"
format:
  titlepage-pdf:
    titlepage-logo: "nantesuniv.png"
    logo-align: left
    logo-size: 1.5
    titlepage-theme:
      elements: ["\\logoblock", "\\vfill", "\\authorblock", "\\titleblock"]
documentclass: report
papersize: letter
---
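If the extension is not already installed, it can be added to the project first; assuming the extension in question is the nmfs-opensci quarto_titlepages extension, that would be something like:

quarto add nmfs-opensci/quarto_titlepages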
A custom date works in R Markdown PDF output, but I noticed it doesn't in Quarto.
How can I use a custom date in Quarto YAML?
---
title: "Some pdf document"
author: "me"
date: "Spring 2022" <- I would like to use this
format: pdf
---
---
title: "Some pdf document"
author: "me"
date: "Last update : `r Sys.Date()`" <- Or, like this
format: pdf
---
Currently, Quarto PDF output generates the date in %m/%d/%Y format only.
You can provide the last-modified keyword (which refers to the last modified date and time of the file containing the date) to date and use date-format to modify how the date is displayed. You can also put additional words between square brackets; they will then be escaped and kept as is.
---
title: "Some pdf document"
author: "me"
date: last-modified
date-format: "[Last Updated on] MMMM, YYYY"
format: pdf
---
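If the date is fixed and the season is already known, the bracket escaping alone is enough; a minimal variant (the date value here is just an arbitrary example):

---
title: "Some pdf document"
author: "me"
date: "2022-04-01"
date-format: "[Spring] YYYY"
format: pdf
---

This renders the date as Spring 2022.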
Now, since there is no built-in format string for the season name, it is possible to make one using a Pandoc Lua filter.
---
title: "Some pdf document"
author: "None"
date: last-modified
date-format: "[Last Updated on] %MMM%, YYYY"
format: pdf
filters:
- custom-date.lua
---
Note: here we have used %MMM%, which will be replaced by the season name.
custom-date.lua
-- replace the %Mon% token in the formatted date with the matching season name
local function replace_mon_with_season(date)
  local season_table = {
    Jan = "Winter", Feb = "Winter", Mar = "Spring",
    Apr = "Spring", May = "Spring", Jun = "Summer",
    Jul = "Summer", Aug = "Summer", Sep = "Autumn",
    Oct = "Autumn", Nov = "Autumn", Dec = "Winter"
  }
  local date = pandoc.utils.stringify(date)
  -- extract the month abbreviation between the % markers, e.g. "Apr" from "%Apr%"
  local mon = date:match("%%(%a+)%%")
  local season = season_table[mon]
  return date:gsub("%%" .. mon .. "%%", season)
end

function Meta(m)
  m.date = replace_mon_with_season(m.date)
  return m
end
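With the YAML above, date-format first produces something like "Last Updated on %Apr%, 2024" for a file last modified in April 2024, and the filter then rewrites it to "Last Updated on Spring, 2024".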
I am using the alexusmai Laravel file manager (LFM) together with TinyMCE. When LFM returns the path of a selected file in the insert/edit image window, it is correct. It must include the public folder because I am on shared hosting: /public/assets....
As soon as the insert/edit image window is closed, the URL inserted in TinyMCE displays the logo/image correctly. But when I look in the code editor, the URL has been converted to ../../../../public/assets....
Even if I edit the path in the code editor and save, TinyMCE changes it back as soon as I return to the WYSIWYG editor. In a way it's OK, since it works, but I would prefer that the path be kept as given by the user or LFM.
Is there any way to stop TinyMCE from doing that?
Here is the JS code I use to initialise the editor:
window.onload = function () {
  tinymce.init({
    selector: '#{{ $field->name }}',
    height: 600,
    plugins: [
      'image paste importcss searchreplace autolink preview',
      'code visualblocks visualchars fullscreen image link media template',
      'table charmap hr insertdatetime lists wordcount textpattern charmap quickbars emoticons help',
    ],
    menubar: 'file edit view insert format tools table help',
    toolbar: [
      'code fullscreen preview | bold italic underline | image media template link | formatselect fontsizeselect | alignleft aligncenter alignright alignjustify | outdent indent | numlist bullist | forecolor backcolor | charmap emoticons | removeformat undo redo'
    ],
    templates: [
      { title: 'Row Ani Img L', description: 'Row with animated image on left', content: rowL },
      { title: 'Row Ani Img R', description: 'Row with animated image on right', content: rowR }
    ],
    content_css: '{{ makeURLabsolute(config('path')['public'].'custom/'.config('site.array_names')['file'].'.css',config('is_web')) }}',
    branding: false,
    toolbar_sticky: true,
    toolbar_mode: 'sliding',
    file_picker_callback (callback, value, meta) {
      let x = window.innerWidth || document.documentElement.clientWidth || document.getElementsByTagName('body')[0].clientWidth
      let y = window.innerHeight || document.documentElement.clientHeight || document.getElementsByTagName('body')[0].clientHeight
      tinymce.activeEditor.windowManager.openUrl({
        url: '/file-manager/tinymce5',
        title: 'Asset Manager',
        width: x * 0.8,
        height: y * 0.8,
        onMessage: (api, message) => {
          callback(message.content, { text: message.text })
        }
      })
    }
  });
};
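For what it's worth, TinyMCE rewrites inserted URLs according to its URL-handling options, so one thing to try is turning that conversion off in the same init call. A minimal sketch of the relevant options (the values shown are assumptions about the behaviour wanted, not taken from the setup above):

tinymce.init({
  selector: '#{{ $field->name }}',
  // keep URLs exactly as entered instead of converting them
  convert_urls: false,
  // if conversion stays on, these control relative vs. absolute results
  relative_urls: false,
  remove_script_host: false,
  // ...rest of the existing configuration...
});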
To describe REST APIs in an application-centric manner, I am trying out the ALPS descriptor and its tooling.
Say I have an object "Artwork"; an artwork has the following properties: headline, artform, authors, etc.
How should I describe the property "authors"? In the OpenAPI spec it is easy: simply say "type: array", then specify the reference under "items". What is the equivalent way of describing a collection in ALPS?
I am using the YAML format to describe the API.
My current descriptors look like this:
descriptors:
  # vocabulary properties
  - id: "identifier"
    type: "semantic"
    text: "Unique identifier within Alpha Org. This is not the serial number of the item. This identifier is automatically generated when a new artwork item is created in the system."
    ref: "https://schema.org/identifier"
  - id: "headline"
    type: "semantic"
    text: "Title of the artwork"
    ref: "https://schema.org/headline"
  - id: "artform"
    type: "semantic"
    text: "Form of the artwork: painting? photograph? sculpture? others?"
    ref: "https://schema.org/artform"
  - id: "authors"
    type: "semantic"
    text: "Authors of this artwork"
    ref: "https://schema.org/author"
  - id: "serialNumber"
    type: "semantic"
    text: "Serial number of the artwork. If the artwork does not have a serial number, such as an original painting from an artist, then this number is simply the exemplar number, usually 1."
    ref: "https://schema.org/serialNumber"
  - id: "description"
    type: "semantic"
    text: "Description of the artwork"
    ref: "https://schema.org/description"
  - id: "obtainDate"
    type: "semantic"
    text: "Date on which Alpha Org obtained the ownership of this artwork."
    ref: "https://schema.org/Date"
  - id: "itemLocation"
    type: "semantic"
    text: "Location of the storage of the artwork item"
    ref: "https://schema.org/itemLocation"
  - id: "images"
    type: "semantic"
    text: "Images of the artwork"
    ref: "https://schema.org/image"
ALPS descriptions don't make distinctions between single items and collections. From the model's point of view anything could be a collection; from the implementation's point of view, each service implementing that ALPS description makes its own decisions.
As a common practice, I typically use the IANA link-rel values when it is important:
rel: item and rel: collection
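As a rough sketch of how that can look in the YAML form used in the question, the item-level properties can be grouped under one descriptor and a collection-level descriptor can point back at it; the artwork/artworkCollection ids, the rel values on descriptors, and the nesting below are illustrative, not prescribed by the ALPS spec:

descriptors:
  # ...existing property descriptors from the question...
  - id: "artwork"
    type: "semantic"
    rel: "item"
    descriptors:
      - href: "#headline"
      - href: "#artform"
      - href: "#authors"
  - id: "artworkCollection"
    type: "semantic"
    rel: "collection"
    descriptors:
      - href: "#artwork"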
There's a small utility (ALPS Unified) that shows how to convert ALPS descriptions into OpenAPI definitions:
https://github.com/mamund/alps-unified
The question is about parsing an HTML stream obtained by load/markup in a way that gives you the constituent parts of the HTML tags, i.e. when you find
<div id="one">my text</div>
you should end up with something like <div id="one">, {my text} and </div> in the same container, something like
[<div id="one"> {my text} </div>]
or even better
[<div> [id {one}] {my text} </div>]
The parsing problem is matching paired HTML tags. In HTML a tag may be an empty tag, possibly with attributes but without content and thus without an ending tag, or a normal tag, possibly with attributes and content and so with an ending tag; but both types of tag are just a tag.
I mean, when you find a sequence like <p>some words</p> you have a P tag, just the same as you get with a sequence like <p />: just a P tag. In the first case you have associated text and an ending tag, and in the latter you don't; that's all.
In other words, HTML attributes and content are properties of the tag element in HTML, so representing this in JSON you would get something like:
tag: { name: "div", attributes: { id: "one" }, content: "my text" }
This means you have to identify the content of a tag in order to assign it to the proper tag, which in terms of Rebol parse means identifying the matching tags (opening tag and ending tag).
In Rebol you can easily parse an HTML sequence like:
<div id="yo">yeah!</div><br/>
with the rule:
[ some [ tag! string! tag! | tag! ]]
but with this rule you will match the HTML
<div id="yo">yeah!</div><br/>
and also
<div id="yo">yeah!</p><br/>
as being the same
so you need a way to match the same opening tag when it appears in the ending position.
Sadly, Rebol tags cannot (AFAIK) be parametrized with the tag name, so you cannot say something like:
[ some [ set t1 tag! set s string! set t2 tag!#t1/1 | tag! ] ]
The t1/1 notation is due to a (bad) feature of Rebol that includes all tag attributes at the same level as the tag name (another bad feature is not recognizing matching tags as being the same tag).
Of course you can achieve the goal using code such as:
tags: copy []
html: {<div id="yo">yeah!</p><br/>}
parse load/markup html [
    some [
        set t1 tag! set s string! set t2 tag! (
            tag: first make block! t1
            if none <> find t2 tag [append/only tags reduce [t1 s]]
        )
        | tag! (append/only tags reduce [t1])
    ]
]
but the idea is to use a more elegant and naive approach using the parse dialect only.
There's a way to parse pairs of items in the Rebol parse dialect, simply by using a word to store the expected pair:
parse ["a" "a"] [some [set s string! s ]]
parse ["a" "a" "b" "b"] [some [set s string! s ]]
But this doesn't work well with tags, because tags carry attributes and the special ending mark (/), and thus it's not easy to derive the ending pair from the initial one:
parse [<p> "some text" </p>] [some [ set t tag! set s string! t ]]
parse [<div id="d1"> "some text" </div>] [some [ set t tag! set s string! t ]]
These don't work, because </p> is not equal to <p> and neither is </div> equal to <div id="d1">.
Again you can fix it with code:
parse load/markup "<p>preug</p>something<br />" [
    some [
        set t tag! (
            ; build the expected closing tag from the opening one:
            ; drop any attributes, then prepend "/"
            b: copy t
            remove/part find b " " tail b
            insert b "/"
        )
        set s string!
        b (print [t s b])
        |
        tag!
        |
        string!
    ]
]
but this is not simple and zen code anymore, so the question's still alive ;-)
I'm trying to get all the content from Wikipedia:Unusual_articles, and I'm able to get the table of contents by calling this endpoint:
https://en.wikipedia.org/w/api.php?action=parse&format=json&prop=sections&page=Wikipedia:Unusual_articles
and the data I got back look something like this:
{
  title: "Wikipedia:Unusual articles",
  pageid: 154126,
  sections: [
    {
      toclevel: 1,
      level: "2",
      line: "Places and infrastructure",
      number: "1",
      index: "T-1",
      fromtitle: "Wikipedia:Unusual_articles/Places_and_infrastructure",
      byteoffset: null,
      anchor: "Places_and_infrastructure"
    },
    {
      toclevel: 2,
      level: "3",
      line: "Americas",
      number: "1.1",
      index: "T-2",
      fromtitle: "Wikipedia:Unusual_articles/Places_and_infrastructure",
      byteoffset: null,
      anchor: "Americas"
    },
    ...
But I'm not able to get the content of a particular section. For example, under Americas there is a table of entries, each with a link and a short description; is there a way to obtain those links and short descriptions from the API?
You can get the content of every page section by using the MediaWiki API with action=parse in two steps. First you have to get all sections from the page with:
https://en.wikipedia.org/w/api.php?action=parse&prop=sections&page=Wikipedia:Unusual_articles
From the response you see that the section Americas has index=T-2 (T means transcluded page) and it comes from fromtitle=Wikipedia:Unusual_articles/Places_and_infrastructure. Now we use this index and fromtitle to get the content of the section with:
https://en.wikipedia.org/w/api.php?action=parse&page=Wikipedia:Unusual_articles/Places_and_infrastructure&section=2&prop=...
where:
prop=wikitext - gives the original section wikitext that was parsed.
prop=text - gives the parsed (rendered HTML) text of the section.
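For example, a complete request for the wikitext of that section (adding format=json, as in the question's original call) would be:

https://en.wikipedia.org/w/api.php?action=parse&format=json&page=Wikipedia:Unusual_articles/Places_and_infrastructure&section=2&prop=wikitext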