Google Colab headers don't show prepended number in Table of Contents - google-colaboratory

In Google Colaboratory, I like to create markdown section headers with a prepended number, like so:
# 1. My Notebook
-----
## 1. My section
However, when I look at the Table of Contents pane, the number does not show up.
When I instead use a letter, such as "A", the letter does show up.
How do I fix it so that the prepended numerals (e.g. 1.) show up in the Table of Contents?

You can write it like this, using the non-breaking-space entity `&nbsp;` in place of the regular space after the number:
# 1.&nbsp;Heading
## 1.1&nbsp;Sub-Heading
With `&nbsp;` replacing the space, the numeral is no longer dropped from the Table of Contents.

Related

Match beginning of string with lookbehind and named group

I need help matching the full message with a lookbehind.
Let's say I have the following simplified string:
1 hostname Here is some Text
At the beginning there could be 1 or 2 digits followed by a space, which I want to ignore.
Then I need the first word captured as "host",
and then I would like to look behind to the first space, so that capture group "message" holds everything starting after the leading digits and space, i.e. "hostname Here is some Text".
My regex is:
^[1-9]\d{0,2}\s(?<host>[\w][\w\d\.#-]*)\s(?<message>(?<=\s).*$)
This gives me:
host = "hostname"
message = "Here is some Text"
I can't figure out what my lookbehind needs to look like.
Thanks for your help.
OK, I found it. What needs to be done is to put "message" as the first group and nest everything else, including the other groups, inside it:
^[1-9]\d{0,2}\s(?<message>(?<host>[\w][\w\d\.#-]*)\s.*$)
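The final pattern can be checked quickly in Python; note that Python's `re` module writes named groups as `(?P<name>...)` rather than the `(?<name>...)` form used above, and that `\w` already covers `\d`, so the character class can be simplified without changing what it matches:

```python
import re

# "message" is the outer group and "host" is nested inside it, so
# "message" spans everything after the leading 1-3 digits and space.
pattern = re.compile(r"^[1-9]\d{0,2}\s(?P<message>(?P<host>\w[\w.#-]*)\s.*)$")

m = pattern.match("1 hostname Here is some Text")
print(m.group("host"))     # hostname
print(m.group("message"))  # hostname Here is some Text
```

No lookbehind is needed at all: nesting the groups makes both captures available from a single match.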

How to forward logs with Splunk Forwarder for the files with no header and logs should be in form of key/Value

I have a splunk forwarder setup already on my host.
I have certain files in the folder (/tom/mike/). File names are starting with Back*.
Each file may contain one or more lines. Each line holds several fixed-position values separated by runs of spaces, with no header.
Content (Example: Consider "-" as one space)
Tom---516-----RTYUI------45678
Mik---345-----XYXFF------56789
I need splunk logs for each line.
like:
Key1=Tom Key2=516 Key3=RTYUI Key4=45678
Key1=Mike Key2=345 Key3=XYXFF Key4=56789
I know inputs.conf changes would be like below:
[monitor:///tom/mike/Back*]
index=myIndex
blacklist=\.(gz|zip|bkz|arch|etc)$
sourcetype = BackFileData
Please suggest the changes to make in props.conf. Keep in mind that the delimiter between values on a line is fixed in position but not the same width (e.g., not always two spaces) for all columns, and the files have no headers.
You can use kvdelims if you want a search-time extraction, or you can define a rule in transforms.conf and apply it in props.conf to extract the fields at index time.
Here's a good article covering all those scenarios:
https://www.splunk.com/blog/2008/02/12/delimiter-based-key-value-pair-extraction.html
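As an untested sketch (the stanza name matches the sourcetype set in inputs.conf above, and the Key1-Key4 field names come from the desired output), a search-time extraction in props.conf that tolerates the variable-width runs of spaces could look like this:

```ini
# props.conf -- search-time field extraction for the BackFileData sourcetype.
# EXTRACT-<class> applies a regex with named capture groups at search time;
# \s+ absorbs any number of spaces between the four fixed-position columns.
[BackFileData]
EXTRACT-backfile_fields = ^(?<Key1>\S+)\s+(?<Key2>\S+)\s+(?<Key3>\S+)\s+(?<Key4>\S+)
```

Searches against sourcetype=BackFileData should then carry Key1 through Key4 on every event, one event per line.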

syslog-ng match and filter is not working the way I want

I have following messages
1)"customer1"," 5","0","".....
2)"customer2"," 5","0",""....
3)"customer3"," 5","0",""...
4)""," 5","0",""
5)""," 5","0",""
What I want to achieve: based on the first value in double quotes, I want to create a folder per customer and write each log into its respective folder; whenever the first quoted value is blank, those logs should go to the Others folder. With the following configuration I am able to create folders like customer1, customer2 and customer3. The problem occurs when the first value is blank, as in logs 4 and 5.
syslog-ng.conf
filter c1 {match('(^"")' flags("store-matches") type("pcre") value("MESSAGE") );};
destination d1 {file("/opt/data/cef/other/${DAY}${MONTH}${YEAR}_other.log");};
log {source(s_udp);filter(c1);destination(d1);};
filter c2 {match('(?<=")([\w\s]*)(?=")' flags("store-matches") type("pcre") value("MESSAGE") );};
destination d2 {file("/opt/data/cef/$1/${DAY}${MONTH}${YEAR}_$1.log");};
log {source(s_udp);filter(c2);destination(d2);};
The first filter checks whether the first quoted value is empty (i.e. "") and writes those logs into the Others folder. The problem is with the second filter: it matches everything between double quotes, so it works fine when there is a value but misbehaves when the value is empty. In that case it writes the log to a file named 03_06_2017.log directly in the /opt/data/cef folder, and I am not sure why it creates that separate file.
Please help .
Regards
VG
I think it would be easier to use a csv-parser: https://www.balabit.com/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/csv-parser.html
If the number of columns in the messages varies and you only need the first column for your filter, you can use the greedy flag to collect the remaining columns.
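A minimal, untested sketch of the csv-parser approach (the parser, filter and column names are assumptions; s_udp and the destination paths come from the configuration above):

```
# Parse the first quoted column into ${CUST}; flags(greedy) collects
# however many remaining columns there are into ${REST}.
parser p_csv {
    csv-parser(columns("CUST", "REST")
               delimiters(",")
               quote-pairs('""')
               flags(greedy));
};

# Blank first column -> other folder; flags(final) stops the message
# from also matching the per-customer path below.
filter f_blank { match("^$" value("CUST") type("pcre")); };
destination d_other { file("/opt/data/cef/other/${DAY}${MONTH}${YEAR}_other.log"); };
log { source(s_udp); parser(p_csv); filter(f_blank); destination(d_other); flags(final); };

# Everything else goes to a folder named after the first column.
destination d_cust { file("/opt/data/cef/${CUST}/${DAY}${MONTH}${YEAR}_${CUST}.log"); };
log { source(s_udp); parser(p_csv); destination(d_cust); };
```

Because the parser extracts the column into a named macro, the destination path can reference ${CUST} directly instead of relying on PCRE store-matches and $1.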

.docx file chapter extraction

I would like to extract the content of a .docx file, chapter by chapter.
My .docx document has a table of contents, and every chapter has some content:
1. Intro
some text about Intro, these things, those things
2. Special information
these information are really special
2.1 General information about the environment
environment should be also important
2.2 Further information
and so on and so on
So finally it would be great to receive an Nx3 matrix containing the index number, the index name, and the content:
i_number i_name content
1 Intro some text about Intro, these things, those things
2 Special Information these information are really special
...
Thanks for your help
You could export or copy-paste your .docx into a .txt file and apply this R script:
library(stringr)
library(readr)

# Read the whole document into a single string
doc <- read_file("filename.txt")

# A heading: one or more digits and a dot, then 4 to 100 characters
# (non-greedy) up to the end of the line
pattern_chapter <- regex("(\\d+\\.)(.{4,100}?)(?:\r\n)", dotall = TRUE)

# Full heading lines, and the content split out between them
i_name <- str_match_all(doc, pattern_chapter)[[1]][, 1]
paragraphs <- str_split(doc, pattern_chapter)[[1]]
content <- paragraphs[-which(paragraphs == "")]

result <- data.frame(i_name, content)
result$i_number <- seq.int(nrow(result))
View(result)
It doesn't work if your document contains lines that begin with a number but are not headings (e.g., footnotes or numbered lists).
(please, no mindless downvote : this script works perfectly with the example given)

How to use FILE_MASK parameter in FM EPS2_GET_DIRECTORY_LISTING

I am trying to filter files using FILE_MASK parameter in EPS2_GET_DIRECTORY_LISTING to reduce time searching all files in the folder (has thousands of files).
File mask I tried:
TK5_*20150811*
file name in the folder is;
TK5_Invoic_828243P_20150811111946364.xml.asc
But it exports all files to the DIR_LIST table, so nothing is filtered.
But when I try with;
TK5_Invoic*20150811*
It works!
My impression is that it works only if I give the first 10 characters literally, but I do not always know the first 10 characters.
Can you give me an advice on using FILE_MASK?
Haven't tried it, but this sounds plausible:
https://archive.sap.com/discussions/thread/3470593
The * wildcard may only be used at the end of the search string. It is not specified what '*' matches when it is not the last non-space character in the file mask value.
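If the function module only honors a trailing wildcard, one workaround is to pass a safe prefix mask such as TK5_* to EPS2_GET_DIRECTORY_LISTING and then apply the full pattern to the returned DIR_LIST names yourself. The real call happens in ABAP; the idea is sketched here in Python with hypothetical file names:

```python
from fnmatch import fnmatch

# Hypothetical names as the FM might return them in DIR_LIST after
# filtering server-side with the trailing-wildcard mask "TK5_*".
names = [
    "TK5_Invoic_828243P_20150811111946364.xml.asc",
    "TK5_Order_111111P_20150712000000000.xml.asc",
]

# Client-side: apply the full mask with the embedded wildcard afterwards.
matches = [n for n in names if fnmatch(n, "TK5_*20150811*")]
print(matches)  # only the 2015-08-11 file remains
```

This keeps the server-side listing fast (the prefix still narrows the directory scan) while the embedded-wildcard filtering happens reliably on the caller's side.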