How do I remove specific parameters from a URL in Splunk? - splunk

/app/account-headless#state=FD=reqdoc&PN=952949&accessMode=Internet&accessType=MOBILE
In Splunk, I would like to group the URLs after filtering out parameters by name, for example removing PN=952949 or accessType=MOBILE.

Try using sed to remove the undesired parameters. You'll need one sed command for each parameter.
| rex mode=sed field=URL "s/[?&]PN=[^&]+//"
| rex mode=sed field=URL "s/[?&]accessType=[^&]+//"

@RichG suggested rex in sed mode.
You can also use a pair of eval replace calls:
| eval URL=replace(URL,"\&PN=[^\&]+","")
| eval URL=replace(URL,"\&accessType=[^\&]+","")
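The same substitution can be sanity-checked outside Splunk. Below is a minimal sketch using GNU sed on the sample URL from the question; the regex is the same one passed to rex, so it is a quick way to verify the pattern before putting it in a search.

```shell
# Sample URL from the question
url='/app/account-headless#state=FD=reqdoc&PN=952949&accessMode=Internet&accessType=MOBILE'

# Strip the PN and accessType parameters, exactly as the rex/sed answer does
cleaned=$(printf '%s\n' "$url" | sed -E 's/[?&]PN=[^&]*//; s/[?&]accessType=[^&]*//')
printf '%s\n' "$cleaned"
```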

Related

Get full list of Groups and Projects in Gitlab Cloud

I'm trying to get a full list of Projects and Groups in our GitLab cloud account.
I'm currently using their documentation as reference (bear in mind though I'm no developer) and using Linux command line to do so. Here's the documentation I'm trying to use:
https://docs.gitlab.com/ee/api/projects.html
https://docs.gitlab.com/ee/api/groups.html#list-a-groups-projects
I'm using the following command to get the data and parse in a readable format that I will export to csv or spreadsheet afterwards:
curl --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/api/v4/projects/?owned=yes&per_page=1000&page=1" | python -m json.tool | grep -E "http_url_to_repo|visibility" | awk '!(NR%2){print$0p}{p=$0}' | awk '{print $4,$2}' | sed -E 's/\"|\,//g' > gitlab.txt
My problem is that the command only returns about 100 of the 280 repositories we have. It doesn't seem to fetch them recursively from all the groups and subgroups.
Any ideas on how I can improve this to get everything?
Thank you
The API returns at most 100 results per page (per_page is capped at 100), so you will have to run it multiple times: first with page=1, then page=2, and so on. For the second page onward you will need >> to append to the existing file gitlab.txt:
curl --header "..." "https://...&per_page=100&page=1" | ... > gitlab.txt
curl --header "..." "https://...&per_page=100&page=2" | ... >> gitlab.txt
Alternatively, write a script that first fetches all pages and then sends the combined output through the pipe. A for-loop in bash works well for this.
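The page loop itself can be sketched as below. Here fetch_page is a mock standing in for the real curl call (`curl --header "PRIVATE-TOKEN: $TOKEN" ".../projects?owned=yes&per_page=100&page=$1"`); it returns three pages of fake data and then nothing, which is how the real API signals the end (an empty [] response).

```shell
# Mock of the paginated API: pages 1-3 have data, page 4 is empty.
fetch_page() { [ "$1" -le 3 ] && echo "repo-page-$1"; }

page=1
results=""
while :; do
  chunk=$(fetch_page "$page")
  # Stop as soon as a page comes back empty
  [ -z "$chunk" ] && break
  results="$results$chunk
"
  page=$((page + 1))
done
printf '%s' "$results"
```

With the real API, replace fetch_page with the curl pipeline and redirect the loop's output once (`done > gitlab.txt`) instead of juggling > and >> per page.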

How to read the number of lines in a multiline variable in Ansible

I pass a multiline variable dest_host from Jenkins to Ansible as below
ansible-playbook -i allmwhosts.hosts action.yml -e '{ dest_host: myhost1
myhost2 }' --tags validate
In Ansible I wish to count the number of lines present in dest_host, which in this case is 2.
I can think of using command: "cat {{ dest_host }} | wc -l", registering the output, and then printing it as a solution. However, is there a better way to get this in Ansible rather than shelling out to a Unix command?
That is what the | length filter is for:
- debug:
    msg: '{{ dest_host | length }}'
  vars:
    dest_host: "alpha\nbeta\n"
although be forewarned that your -e does not do what you think it does (about the lines) because of YAML's scalar folding:
ansible -e '{ bob:
alpha
beta
}' -m debug -a var=bob -c local -i localhost, localhost
emits
"bob": "alpha beta"
but the | length can still help you by using | split | length
Do note that not all results may play nicely by just passing | split | length — for example, take a stdout like below:
stdout:
- "this is the first line\nthis is the second line"
If you wanted to count the number of lines, {{ stdout[0] | split | length }} would give you 10, not 2, because a bare split splits on whitespace!
So, in a case like this, you would instead need to use {{ stdout[0].split('\n') | length }} (thanks Python), which would give you 2 as intended/desired.
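The whitespace-versus-newline distinction is the same one plain Unix tools make, which may be a quicker way to see it: wc -w splits on any whitespace (like Jinja2's bare split), while wc -l counts newline-terminated lines (like split('\n')). A small sketch using the two-line sample string from above:

```shell
# Two lines of five words each
s='this is the first line
this is the second line'

words=$(printf '%s\n' "$s" | wc -w)  # split on whitespace: counts words
lines=$(printf '%s\n' "$s" | wc -l)  # split on newlines: counts lines
```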

Get first characters to a variable

I have an output like this.
I need to get the id 65a8fa6 as a variable for a new command.
I'm familiar with grep and use it to get the line I need. But how do I pick only the first 7 characters?
This is where I'm now
vagrant global-status | grep DB1
Output
65a8fa6 default vmware_desktop running /Users/USER/Documents/Vagrant/centos7SN/DB1
1st solution: You could simply do this with awk: look for the string DB1 in each line and, if it's found, print the 1st field of that line. Save the output into the variable val and use it later as needed.
val=$(vagrant global-status | awk '/DB1/{print $1}')
Or, to match either db1 or DB1, try in any awk:
val=$(vagrant global-status | awk '/[dD][bB]1/{print $1}')
2nd solution: If you have GNU awk and you want to use ignorecase then try:
val=$(vagrant global-status | awk -v IGNORECASE="1" '/DB1/{print $1}')
3rd solution: To get first 7 characters try:
But how do I only pick the first 7 characters.
val=$(vagrant global-status | awk '/[dD][bB]1/{print substr($0,1,7)}')
Sed alternative:
val=$(vagrant global-status | sed -rn 's/(^[[:alnum:]]{7})(.*$)/\1/p')
Split the line into two sections using sed with extended regular expressions (-r), substitute the whole line with the first section only, and print it (p).
You can also use the cut command to find instances of what you're after, provided that there's some consistent text near what you want to find:
Say you want to find Hello out of the following line:
Hello here's some text blablablabla
You can find it doing something like:
echo "Hello here's some text blablablabla" | grep text | cut -d " " -f 1
This should output Hello.
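For completeness, bash can do both steps (grab the first field, or the first 7 characters) without spawning any external process at all, using parameter expansion. A sketch against the vagrant output line from the question (note ${var:0:7} is a bashism, not POSIX sh):

```shell
# The line grep DB1 would return
line='65a8fa6 default vmware_desktop running /Users/USER/Documents/Vagrant/centos7SN/DB1'

id=${line%% *}     # strip everything after the first space: the first field
short=${line:0:7}  # first 7 characters of the line
```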

using sed to add a backslash in front of a variable

I have a variable and that variable only needs a '\' in front of it.
I would say that the sed command is the ideal tool for it?
I tried using single quotes, double quotes, multiple variables, combination of variables, ...
I don't get an error returned, but the end result is not showing what I need it to be.
FOLDER=$(echo `cat file.XML | grep "Value" | cut -d \" -f2`)
echo $FOLDER
sed -i "s#"$FOLDER"#"\\$FOLDER"#g" ./file.XML
echo $FOLDER
After execution, I get
$ ./script.sh
b4c17422-1365-4fbe-bccd-04e0d7dbb295
b4c17422-1365-4fbe-bccd-04e0d7dbb295
Eventually I need to have a result like
$ ./script.sh
b4c17422-1365-4fbe-bccd-04e0d7dbb295
\b4c17422-1365-4fbe-bccd-04e0d7dbb295
Fixed thanks to the input of Cyrus and Ed Morton.
FOLDER=$(grep "Value" file.XML | cut -d '"' -f 2)
NEW_FOLDER="\\$FOLDER"
sed -i "s#$FOLDER#\\$NEW_FOLDER#g" ./file.XML
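A minimal reproduction of the quoting involved, on a string instead of file.XML so no -i is needed: inside double quotes the shell collapses \\\\ to \\, and sed's replacement side then collapses \\ to a single literal backslash, which is what ends up in front of the value.

```shell
FOLDER='b4c17422-1365-4fbe-bccd-04e0d7dbb295'

# Shell turns \\\\ into \\ ; sed turns \\ into one literal backslash
prefixed=$(printf '%s\n' "$FOLDER" | sed "s#$FOLDER#\\\\$FOLDER#")
printf '%s\n' "$prefixed"
```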

How to quote a shell variable in a TCL-expect string

I'm using the following awk command in an expect script to get the gateway for a particular destination
route | grep $dest | awk '{print $2}'
However the expect script does not like the $2 in the above statement.
Does anyone know of an alternative to awk to perform the same function as above, i.e. output the 2nd column?
You can use cut:
route | grep $dest | cut -d ' ' -f 2
That uses a space as the field delimiter and pulls out the second field.
To answer your Expect question, single quotes have no special meaning to the Tcl parser. You need to use braces to protect the body of the awk script:
route | grep $dest | awk {{print $2}}
And as awk can do what grep does, you can get away with one less process:
route | awk -v d=$dest {$0 ~ d {print $2}}
Before switching to another utility, check if changing the field separator works. Documentation for field separators in GNU Awk is here.
sed is the best alternative to use. If you don't mind a dependency, Perl should also be sufficient to solve the task.
Depending on the structure of your data, you can use either cut, or use sed to do both filtering and printing the second column.
Alternatively, you could use Perl:
perl -ne 'if(/foo/) { @_ = split(/:/); print $_[1]; }'
This will print the second token of each line containing foo, with : as the token separator.
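The awk -v variant from the accepted answer can be tried against a mocked routing-table line (both dest and the line below are made-up sample data, since running route requires a live system):

```shell
dest='10.1.2.0'
# A fake "route" output line: destination, gateway, mask, flags, iface
line='10.1.2.0        192.168.0.1     255.255.255.0   UG    0 0 0 eth0'

# Match the destination and print the second column (the gateway)
gw=$(printf '%s\n' "$line" | awk -v d="$dest" '$0 ~ d {print $2}')
printf '%s\n' "$gw"
```

Because awk splits on runs of whitespace by default, this keeps working even when the columns are padded with multiple spaces, which is where the cut -d ' ' approach breaks down.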