Doxygen: xrefitem alias works with '\' but not with '#' in the documentation

Below is the ALIASES configuration in my Doxyfile:
ALIASES = "In=\par Changes in version:" \
"WindowsSpecific=\xrefitem WindowsSpecific \"\" \"\"" \
"LinuxSpecific=\xrefitem LinuxSpecific \"\" \"\""
In my program, if I write the custom tag with a backslash, Doxygen generates the output properly:
/**
* \WindowsSpecific
* #In 1.0.1
*/
int getVersion();
I can see a "WindowsSpecific" item in "Related pages".
However, if I write the custom tag with the # symbol:
/**
* #WindowsSpecific
* #In 1.0.1
*/
int getVersion();
I do not get the desired output, i.e. a related page with "WindowsSpecific". I am using Doxygen v1.8.13.
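For completeness, Doxygen commands can start with either \ or @ (Javadoc style), so the same comment written with the other supported prefix should behave like the backslash form (a sketch, not something I have verified on 1.8.13):
/**
 * @WindowsSpecific
 * @In 1.0.1
 */
int getVersion();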

Why does objcopy remove my section from the binary?

I'm trying to link my tiny bare-metal educational project for ARM. I have one simple assembly source and linker script. There is a special separate section for exception vectors and startup code:
.section STARTUP_SECTION, "x"
_reset:
    b reset_handler    # Reset
    b .                # Undefined instruction
    b .                # SWI
    b .                # Prefetch Abort
    b .                # Data Abort
    b .                # reserved
    b .                # IRQ
    b .                # FIQ
reset_handler:
    # some code here
    b .
# then .text and .data section
And a simple linker script:
ENTRY(_reset)
SECTIONS
{
    . = 0;
    .startup . :
    {
        startup.o (STARTUP_SECTION)
        reset_section_end = .;
    }
    .text . : {*(.text)}
    .data . : {*(.data)}
    .bss . : {*(.bss COMMON)}
}
I see all my sections in the map file produced by the linker, and the .text section lies at a higher address than .startup, as expected. But when I convert it to binary with:
arm-none-eabi-objcopy -O binary startup.elf startup.bin
...I see that it starts with the .text contents, and my startup section is missing. I can still see all the sections in the ELF file when I disassemble it with objdump, but objcopy removes .startup. The section is not marked as NOLOAD or anything like that. Is "NOLOAD" the default type for such a section, and why? And how do I mark it as "LOAD", since there is no such section type according to the linker manual?
What is going on here?
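One guess I have not verified: objcopy -O binary only copies sections that are allocated in memory, and a section declared with the flag "x" alone is not marked allocatable. A sketch of the directive with an explicit allocatable flag:
.section STARTUP_SECTION, "ax"    # "a" = allocatable, "x" = executable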

How to load custom dynamic libs (*.so) in TensorFlow Serving (GPU)?

I wrote my own cudaMalloc wrapper as follows, which I plan to apply in TensorFlow Serving (GPU) to trace the cudaMalloc calls via the LD_PRELOAD mechanism (with proper modification, it could also be used to limit the GPU usage of each TF Serving container).
#define _GNU_SOURCE             /* needed for RTLD_NEXT */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

typedef cudaError_t (*cu_malloc)(void **, size_t);

/* cudaMalloc wrapper function */
cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    cu_malloc real_cu_malloc = NULL;
    char *error;

    dlerror();  /* clear any stale error state before dlsym */
    real_cu_malloc = (cu_malloc)dlsym(RTLD_NEXT, "cudaMalloc");
    if ((error = dlerror()) != NULL) {
        fputs(error, stderr);
        exit(1);
    }
    cudaError_t res = real_cu_malloc(devPtr, size);
    /* dereference devPtr to log the device pointer that was allocated */
    printf("cudaMalloc(%d) = %p\n", (int)size, *devPtr);
    return res;
}
I compile the above code into a dynamic lib file using the following command:
nvcc --compiler-options "-DRUNTIME -shared -fpic" --cudart=shared -o libmycudaMalloc.so mycudaMalloc.cu -ldl
When applied to a vector_add program compiled with the command nvcc -g --cudart=shared -o vector_add_dynamic vector_add.cu, it works well:
root@ubuntu:~# LD_PRELOAD=./libmycudaMalloc.so ./vector_add_dynamic
cudaMalloc(800000) = 0x7ffe22ce1580
cudaMalloc(800000) = 0x7ffe22ce1588
cudaMalloc(800000) = 0x7ffe22ce1590
But when I apply it to TensorFlow Serving using the following command, the cudaMalloc calls do not go through the dynamic lib I wrote.
root@ubuntu:~# LD_PRELOAD=/root/libmycudaMalloc.so ./tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=resnet --model_base_path=/models/resnet
So here are my questions:
Is it because tensorflow-serving is built in a fully static manner, such that tf-serving refers to libcudart_static.a instead of libcudart.so?
If so, how could I build tf-serving to enable dynamic linking?
Is it because tensorflow-serving is built in a fully static manner, such that tf-serving refers to libcudart_static.a instead of libcudart.so?
It probably isn't built fully-static. You can see whether it is or not by running:
readelf -d tensorflow_model_server | grep NEEDED
But it probably is linked with libcudart_static.a. You can see whether it is or not with:
readelf -Ws tensorflow_model_server | grep ' cudaMalloc$'
If you see an unresolved (U) symbol (as you would for the vector_add_dynamic binary), then LD_PRELOAD should work. But you'll probably see a defined (T or t) symbol instead.
If so, how could I build tf-serving to enable dynamic linking?
Sure: it's open-source. All you have to do is figure out how to build it, then how to build it without libcudart_static.a, and then figure out what (if anything) breaks when you do so.

Jupyter: export as PDF when notebook itself has LaTeX included

I have an IPython notebook, and I want to export it as a PDF with LaTeX. This works for notebooks that do not contain LaTeX themselves, but when I use LaTeX directly inside the notebook and then try to export it as a PDF, I get the following error:
nbconvert failed: PDF creating failed, captured latex output:
Failed to run "['xelatex', 'notebook.tex', '-quiet']" command:
This is XeTeX, Version 3.141592653-2.6-0.999993 (TeX Live 2021/Arch Linux) (preloaded format=xelatex)
restricted \write18 enabled.
entering extended mode
(./notebook.tex
LaTeX2e <2020-10-01> patch level 4
L3 programming layer <2021-02-18>
(/usr/share/texmf-dist/tex/latex/base/article.cls
Document Class: article 2020/04/10 v1.4m Standard LaTeX document class
(/usr/share/texmf-dist/tex/latex/base/size11.clo))
... more sty files (I shortened the log)
(/usr/share/texmf-dist/tex/latex/jknapltx/mathrsfs.sty)
No file notebook.aux.
(/usr/share/texmf-dist/tex/latex/base/ts1cmr.fd)
(/usr/share/texmf-dist/tex/latex/caption/ltcaption.sty)
*geometry* driver: auto-detecting
*geometry* detected driver: xetex
*geometry* verbose mode - [ preamble ] result:
* driver: xetex
* paper: <default>
* layout: <same size as paper>
* layoutoffset:(h,v)=(0.0pt,0.0pt)
* modes:
* h-part:(L,W,R)=(72.26999pt, 469.75502pt, 72.26999pt)
* v-part:(T,H,B)=(72.26999pt, 650.43001pt, 72.26999pt)
* \paperwidth=614.295pt
* \paperheight=794.96999pt
* \textwidth=469.75502pt
* \textheight=650.43001pt
* \oddsidemargin=0.0pt
* \evensidemargin=0.0pt
* \topmargin=-37.0pt
* \headheight=12.0pt
* \headsep=25.0pt
* \topskip=11.0pt
* \footskip=30.0pt
* \marginparwidth=59.0pt
* \marginparsep=10.0pt
* \columnsep=10.0pt
* \skip\footins=10.0pt plus 4.0pt minus 2.0pt
* \hoffset=0.0pt
* \voffset=0.0pt
* \mag=1000
* \@twocolumnfalse
* \@twosidefalse
* \@mparswitchfalse
* \@reversemarginfalse
* (1in=72.27pt=25.4mm, 1cm=28.453pt)
(/usr/local/share/texmf/tex/latex/ucs/ucsencs.def)
(/usr/share/texmf-dist/tex/latex/hyperref/nameref.sty
(/usr/share/texmf-dist/tex/latex/refcount/refcount.sty)
(/usr/share/texmf-dist/tex/generic/gettitlestring/gettitlestring.sty))
Package hyperref Warning: Rerun to get /PageLabels entry.
(/usr/share/texmf-dist/tex/latex/amsfonts/umsa.fd)
(/usr/share/texmf-dist/tex/latex/amsfonts/umsb.fd)
(/usr/share/texmf-dist/tex/latex/jknapltx/ursfs.fd)
LaTeX Warning: No \author given.
! Missing $ inserted.
<inserted text>
$
l.393 \begin{matrix}
?
! Emergency stop.
<inserted text>
$
l.393 \begin{matrix}
No pages of output.
Transcript written on notebook.log.
I honestly don't know what I should do with this.
What I also noticed by using the command line is that generating the .tex file works fine, but converting it to a PDF with pdflatex then yields the same error as above.
The notebook uses LaTeX like this:
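A reduced example consistent with the log, as a reconstruction on my part rather than the actual cell: the failure at l.393 \begin{matrix} suggests a matrix environment used outside math mode, e.g.
\begin{matrix} a & b \\ c & d \end{matrix}
Wrapping the environment in math delimiters makes it compile:
$\begin{matrix} a & b \\ c & d \end{matrix}$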

Export COMMON block from DLL with gfortran

I am having trouble correctly accessing a variable in a Fortran DLL from a Fortran EXE when the variable is part of a COMMON block.
I have trivial code in simple.f90, which I compile into a DLL using MSYS64/MinGW-w64 gfortran 9.2 as
x86_64-w64-mingw32-gfortran simple.f90 -o simple.dll -shared
! simple.f90
module m
implicit none
integer :: a, b
!common /numbers/ a, b
end module
subroutine init_vals
use m
implicit none
a = 1
b = 2
end subroutine
This library is used from an even simpler program prog.f90, compiled as
x86_64-w64-mingw32-gfortran prog.f90 -o prog -L. -lsimple
! prog.f90
program p
use m
implicit none
print *, 'Before', a, b
call init_vals
print *, 'After', a, b
end program
When the COMMON block /numbers/ is commented out, the code works and prints the expected result:
Before 0 0
After 1 2
However, when I uncomment the COMMON block, the output becomes
Before 0 0
After 0 0
as if the variables used by the program were suddenly distinct from those used in the library.
Both variants work equally well in a Linux-based OS with gfortran 9.1.
I am aware that "On some systems, procedures and global variables (module variables and COMMON blocks) need special handling to be accessible when they are in a shared library," as mentioned here: https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gfortran/GNU-Fortran-Compiler-Directives.html . However, I was not able to insert a statement of the type
!GCC$ ATTRIBUTES DLLIMPORT :: numbers
or
!GCC$ ATTRIBUTES DLLEXPORT :: numbers
anywhere in the code without being snapped at by the compiler.
As pointed out by M. Chinoune in the comments, current gfortran lacks the ability to import COMMON blocks from DLLs. Even though a patch has existed for some time, it has not yet been merged. In the end, I needed two things to make the above code work:
First, apply the following patch to GCC 9.2 and compile the compiler manually in MSYS2:
--- gcc/fortran/trans-common.c.org 2019-03-11 14:58:44.000000000 +0100
+++ gcc/fortran/trans-common.c 2019-09-26 08:31:16.243405900 +0200
@@ -102,6 +102,7 @@
#include "trans.h"
#include "stringpool.h"
#include "fold-const.h"
+#include "attribs.h"
#include "stor-layout.h"
#include "varasm.h"
#include "trans-types.h"
@@ -423,6 +424,9 @@
/* If there is no backend_decl for the common block, build it. */
if (decl == NULL_TREE)
{
+ unsigned id;
+ tree attribute, attributes;
+
if (com->is_bind_c == 1 && com->binding_label)
decl = build_decl (input_location, VAR_DECL, identifier, union_type);
else
@@ -454,6 +458,23 @@
gfc_set_decl_location (decl, &com->where);
+ /* Add extension attributes to COMMON block declaration. */
+ if (com->head)
+ {
+ attributes = NULL_TREE;
+ for (id = 0; id < EXT_ATTR_NUM; id++)
+ {
+ if (com->head->attr.ext_attr & (1 << id))
+ {
+ attribute = build_tree_list (
+ get_identifier (ext_attr_list[id].middle_end_name),
+ NULL_TREE);
+ attributes = chainon (attributes, attribute);
+ }
+ }
+ decl_attributes (&decl, attributes, 0);
+ }
+
if (com->threadprivate)
set_decl_tls_model (decl, decl_default_tls_model (decl));
Second, only the line
!GCC$ ATTRIBUTES DLLIMPORT :: a, b
was needed in the main program (right after implicit none), but no exports anywhere. This is apparently a different syntactic approach than in Intel Fortran, where one imports the COMMON block rather than its constituents. I also found that I needed to import both a and b even if I only needed b. (When only a was needed, importing a alone was enough.)
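For clarity, this is how the main program looks with the directive in place (a sketch, assuming the patched compiler described above):
! prog.f90
program p
use m
implicit none
!GCC$ ATTRIBUTES DLLIMPORT :: a, b
print *, 'Before', a, b
call init_vals
print *, 'After', a, b
end program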

Python - Use %s in value of config file

I use a config file (.ini) to store my SQL queries, and I fetch each query by its key. All works fine until I create a query with parameters, for example:
;the ini file
product_by_cat = select * from products where cat =%s
I use:
config = configparser.ConfigParser()
args = ('cat1',)  # note the trailing comma: ('cat1') is just a string, not a tuple
config.read(path_to_ini_file)
query = config.get(section_where_are_stored_thequeries, key_of_the_query)
complete_query = query % args
I get the error:
TypeError: not all arguments converted during string formatting
So it tries to format the string when retrieving the value from the ini file. Any suggestions for solving this?
You can use the format function, like this:
ini file
product_by_cat = select * from products where cat ={}
python:
complete_query = query.format(args)
Depending on the version of ConfigParser (Python 2 or Python 3), you may need to double the % like this, or it throws an error:
product_by_cat = select * from products where cat =%%s
Although a better way would be to use the raw version of the config parser, so the % character isn't interpreted:
config = configparser.RawConfigParser()
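Putting it together, a minimal sketch (the file name, section name, and key below are placeholders of my own):
# queries.ini contains:
# [queries]
# product_by_cat = select * from products where cat = %s

import configparser

config = configparser.RawConfigParser()  # raw parser: '%' in values is left alone
config.read('queries.ini')
query = config.get('queries', 'product_by_cat')

args = ('cat1',)                 # one-element tuple needs the trailing comma
complete_query = query % args    # -> select * from products where cat = cat1
That said, rather than interpolating values yourself, it is generally safer to keep the %s placeholder and hand the parameters to the database driver, e.g. cursor.execute(query, args), so the driver does the quoting.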