What is this type of data received in an XHR response (d=1)?

I've been searching for some days to identify this type of data, but I haven't found any clue. I'm thinking about gzip, deflate, or brotli, but it's very hard to find anything when you don't know the origin at all.
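One quick way to test the gzip/deflate/brotli guess is to sniff the first bytes of the raw response body: gzip streams start with 0x1f 0x8b and zlib/deflate streams typically with 0x78, while brotli has no fixed magic number. A minimal sketch, assuming the body is available as bytes:

```python
# Sketch: sniff for the magic bytes of the encodings listed in
# Accept-Encoding. gzip starts with 0x1f 0x8b; zlib streams usually
# start with 0x78; brotli has no reliable magic number.
def sniff_encoding(data: bytes) -> str:
    if data[:2] == b"\x1f\x8b":
        return "gzip"
    if data[:1] == b"\x78" and data[1:2] in (b"\x01", b"\x9c", b"\xda"):
        return "zlib/deflate"
    try:
        data.decode("ascii")
        return "plain text (not compressed)"
    except UnicodeDecodeError:
        return "unknown binary (possibly brotli)"

# The data shown below is readable ASCII:
print(sniff_encoding(b"d=1 10 1 15 2 1 0 1"))  # prints: plain text (not compressed)
```

Since the response shown here decodes as plain ASCII, it is almost certainly a custom plaintext protocol rather than a compressed payload.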
This data is obtained after a POST to a URL, visible in Chrome under DevTools -> Network -> Fetch/XHR -> Response.
In the request headers I have:
POST / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7
Connection: keep-alive
Content-Length: 37
Content-type: text/plain
Host: "
Origin: "
Referer: "
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-site
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36
sec-ch-ua: "Chromium";v="104", " Not A;Brand";v="99", "Google Chrome";v="104"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
And the response data looks like this
d=1 10 1 15 2 1 0 1 1 3 3 6 0 1 6 4 0 5 0 5 6 12 3 1 1 0 5 0 0 0 0
3 0 0 0 0 1
6 0
52 99999850 0 0 // 99999850 is the balance on the casino machine
I think it can be decoded
In the payload I have:
d=4 // this is the spin count; when I throw a spin it goes from 4 to 5
4QcDkBeyHwq!45797 // I think this is my ID
1 10 1 15 1 // 15 is the bet
I know this can be very confusing, but do you have any leads or anything that might help me?
Thank you


sort according to a column value and take first n lines between the same headers in a file

I have an input like this;
header score1 score2
item 1.3 100
item 2.3 170
item 4.0 35
header score1 score2
item 2.9 45
item 1.7 55
header score1 score2
item 0.5 60
header score1 score2
header score1 score2
item 1.4 75
item 2.5 120
item 3.7 200
header score1 score2
I want to consider each run of lines between two 'header' lines individually: sort those lines by the value in the second column (score1) in descending order, take the first two lines with the highest values, and keep the header at the top of each block. It is known that the list starts with "header score1 score2".
So the desired output is this:
header score1 score2
item 4.0 35
item 2.3 170
header score1 score2
item 2.9 45
item 1.7 55
header score1 score2
item 0.5 60
header score1 score2
header score1 score2
item 3.7 200
item 2.5 120
header score1 score2
I am a relatively new awk user, so my best methodology for now is to spell the steps out in words and then research and apply the code stepwise. Building a complete code block is something I cannot do yet.
So first I have to consider every interval between the lines starting with "header" separately:
1.
awk '/header/ {p=1;print;next} /^header/ && p {p=0;print} p' input.txt
This outputs the same file, as expected. What I understand from this is that when there is a 'header' line it prints it, then continues printing the lines below until the next 'header'.
The sorting and taking of the first 2 I am doing with this code:
2.
sort -k2 -nr | head -2 # this should run on the item lines, without the header
I am guessing that I have to insert the second command inside the first one somehow, so I would appreciate any help with this.
Thank you
awk '
/header/ {
    # Closing the pipe when we reach a header flushes the previous
    # block, sorted, to standard output.
    close("sort -r|head -2")
    # Copy the header line to standard output.
    print
    next
}
{
    # All other lines go to the sort pipe.
    print | "sort -r|head -2"
}' input_file | column -t
header score1 score2
item 4.0 35
item 2.3 170
header score1 score2
item 2.9 45
item 1.7 55
header score1 score2
item 0.5 60
header score1 score2
header score1 score2
item 3.7 200
item 2.5 120
header score1 score2
Using the DSU (Decorate/Sort/Undecorate) idiom with any awk+sort+cut:
$ cat tst.sh
#!/usr/bin/env bash
awk -v OFS='\t' '
$1 == "header" {
blockNr++
}
{ print blockNr, ($1 == "header" ? 0 : 1), $0 }
' "$@" |
sort -k1,1n -k2,2n -k4,4rn |
cut -f3- |
awk '
$1 == "header" {
cnt = 0
}
cnt++ < 3
'
$ ./tst.sh file
header score1 score2
item 4.0 35
item 2.3 170
header score1 score2
item 2.9 45
item 1.7 55
header score1 score2
item 0.5 60
header score1 score2
header score1 score2
item 3.7 200
item 2.5 120
header score1 score2
See "How to sort data based on the value of a column for part (multiple lines) of a file?" for more info on how DSU works.
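For comparison, the same top-2-per-block result can be sketched in Python, whose sort key= performs the decorate/sort/undecorate internally; the function name here is illustrative:

```python
# Group lines into blocks at each "header" line, sort each block by
# score1 (second field) descending, and keep the top two lines.
def top2_per_block(lines):
    out, block = [], []

    def flush():
        # key= decorates each line with float(score1), sorts, undecorates.
        block.sort(key=lambda l: float(l.split()[1]), reverse=True)
        out.extend(block[:2])
        block.clear()

    for line in lines:
        if line.startswith("header"):
            flush()           # emit the finished block
            out.append(line)  # then the header itself
        else:
            block.append(line)
    flush()                   # emit the final block
    return out

text = """header score1 score2
item 1.3 100
item 2.3 170
item 4.0 35
header score1 score2
item 0.5 60"""
print("\n".join(top2_per_block(text.splitlines())))
```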
Another awk, with support from external commands and no error handling:
$ awk 'BEGIN {cmd="sort -k2nr | head -2"}
/header/ {close(cmd); print; next}
{print | cmd}' file
header score1 score2
item 4.0 35
item 2.3 170
header score1 score2
item 2.9 45
item 1.7 55
header score1 score2
item 0.5 60
header score1 score2
header score1 score2
item 3.7 200
item 2.5 120
header score1 score2
This uses awk to partition the data into sections at the headers, delegating the sorting and take-first-2 steps to other commands.
Took me long enough. The concept is to create a synthetic array index key for sorting purposes that:
* incorporates the rank ordering from multiple columns,
* can deal with floating point, mixing and matching them with integers,
* maintains numeric rank ordering in ASCII,
* is wide enough that it only starts to overflow for very large or small numbers, in the absence of a big-integer library, and
* minimizes the number of compare ops needed: a single unified sort-key compare instead of having to go into each sub-key field when dealing with tie-breakers.
CODE
BEGIN {
    OFS = "\t"
    PROCINFO["sorted_in"] = "#ind_str_desc"
    __ = "[+-:]+"
    gsub("^|$", "[[:blank:]]", __)
    ____ = (((_*=_+=_^=_<_)^_)^--_)
    _ = _<_
}
($_) !~ __ {
    print
    while (($_) !~ __) {
        split("", ___)
        $_ = ""
        getline
    }
}
{
    do {
        ___[sprintf("x%.12X%.12X", -(____/$(NF-!_)), -(____/$(NF)))] = $_
        getline
    } while (($_) ~ __)
    ______ = (! _) + (! _)
    for (_______ in ___) {
        if (-______ < +______--) {
            print _______, ___[_______]
        }
    }
    split("", ___)
    print
    $_ = ""
}
OUTPUT
header score1 score2
xFFFFFFFFFFC00000FFFFFFFFFFF8AF8B item 4.0 35
xFFFFFFFFFF90B217FFFFFFFFFFFE7E7F item 2.3 170
header score1 score2
xFFFFFFFFFFA7B962FFFFFFFFFFFA4FA5 item 2.9 45
xFFFFFFFFFF69696AFFFFFFFFFFFB5870 item 1.7 55
header score1 score2
xFFFFFFFFFE000000FFFFFFFFFFFBBBBC item 0.5 60
header score1 score2
header score1 score2
xFFFFFFFFFFBACF92FFFFFFFFFFFEB852 item 3.7 200
xFFFFFFFFFF99999AFFFFFFFFFFFDDDDE item 2.5 120
header score1 score2
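De-obfuscated, the trick above is to map each numeric field to a fixed-width hex string whose string ordering matches its numeric ordering, so a single string comparison ranks multiple columns at once. A minimal Python sketch of the same idea (the width and mask here are arbitrary choices, not taken from the original):

```python
# Map a positive number to a fixed-width hex string whose ASCII order
# matches numeric order: WIDE/x shrinks as x grows, so negating and
# masking to 48 bits yields a value that grows with x (mirroring the
# -(____/$NF) keys printed in the output above).
WIDE = 2 ** 24

def synth_key(x):
    return format(-int(WIDE // x) & 0xFFFFFFFFFFFF, "012X")

rows = [("item", 1.3, 100), ("item", 2.3, 170), ("item", 4.0, 35)]
# Concatenate per-column keys; one string compare ranks both columns.
rows.sort(key=lambda r: synth_key(r[1]) + synth_key(r[2]), reverse=True)
print(rows[:2])  # prints: [('item', 4.0, 35), ('item', 2.3, 170)]
```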

How to display table of top 5 URL with their status and percentage on splunk

I need a table showing the top 5 URLs, as given below, in Splunk. Is this possible in Splunk? I tried many ways but I can't get all the statuses of a URL into a single row.
API             200       204  400  401  499  500
/wodetails/ACP  895(50%)  -    -    -    -    1
This is a case where the chart command can be used:
index="main" source="access.log" sourcetype="access_combined"
| chart c(status) by uri, status
uri              200  204  400  499
/basic/status    11   1    1    1
/search/results  3    0    0    0
To add the percentages, you can use eventstats
index="main" source="access.log" sourcetype="access_combined"
| eventstats count as "totalCount" by uri
| eventstats count as "codecount" by uri, status
| eval percent=round((codecount/totalCount)*100)
| eval cell=codecount." (".percent."%)"
| chart values(cell) by uri,status
uri              200       204     400     499
/basic/status    11 (79%)  1 (7%)  1 (7%)  1 (7%)
/search/results  3 (100%)
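Outside Splunk, the eventstats/eval arithmetic above boils down to two group counts and a rounded ratio. A sketch with hypothetical events chosen to reproduce the numbers in the table:

```python
from collections import Counter

# Hypothetical (uri, status) pairs standing in for the indexed events.
events = ([("/basic/status", 200)] * 11
          + [("/basic/status", 204), ("/basic/status", 400), ("/basic/status", 499)]
          + [("/search/results", 200)] * 3)

total = Counter(uri for uri, _ in events)  # eventstats count as totalCount by uri
codes = Counter(events)                    # eventstats count as codecount by uri, status

for (uri, status), n in sorted(codes.items()):
    pct = round(n / total[uri] * 100)      # eval percent=round((codecount/totalCount)*100)
    print(f"{uri} {status}: {n} ({pct}%)")  # e.g. /basic/status 200: 11 (79%)
```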

Multiple rows data into multiple columns in one table

I have a table called Product Variant.
sequence No item
400 1 4.5
500 1 0
501 1 0
502 1 0
503 1 B-DP
504 2 0
400 1 2.5
500 2 0
501 2 0
502 2 0
503 2 B-PP
504 2 0
My Required output is :
sequence No item item1
503 1 B-DP 4.5
503 2 B-PP 2.5
I am trying, but it's not coming out as expected. Can anyone advise me on this, please?
Thanks in advance.
Something like this?
select max(case when item like 'B%' then sequence end) as sequence,
       no,
       max(case when item like 'B%' then item end) as item,
       -- assuming item1 in the output is the numeric value stored in the
       -- item column; non-numeric values convert to NULL and are ignored
       sum(try_convert(numeric(38, 6), item)) as item1
from t
group by no;

How to mark faces of a hole in Tetgen?

Using a .poly file, I created models with holes in them. I would like to mark/identify the faces bounding these holes. Is there a way? Maybe using regions?
You can do it using the [boundary marker] attribute. See the example below.
Example (with arbitrary numbers):
342 3 0 1
- - -
1 0.312500 0.000000 0.000000 1
2 0.304126 0.000000 0.031250 1
. . .
303 -0.004873 -0.259999 -0.017013 2
304 -0.008291 -0.267013 -0.008944 2
. . .
where the format of each point line is:
[index] [x_coordinate] [y_coordinate] [z_coordinate] [boundary marker]
So now you have marked vertices. In the next part of the .poly file, we'll specify the faces:
676 1
- - -
1 0 1
3 15 16 13
1 0 1
3 13 16 17
. . .
1 0 2
3 304 303 335
1 0 2
3 303 309 335
where the format of each facet is:
[# of polygons] [# of holes] [boundary marker]
[# of corners] [corner 1] [corner 2] ... [corner n]
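To then identify the faces bounding a hole programmatically, you can scan the facet section for a given marker. A minimal sketch, assuming the simplified one-polygon-per-facet layout shown above:

```python
# Collect the faces carrying a given boundary marker from the facet
# section of a .poly file, assuming each facet is two lines:
# "[# of polygons] [# of holes] [boundary marker]" then
# "[# of corners] [corner 1] [corner 2] ...".
def faces_with_marker(facet_lines, marker):
    faces = []
    it = iter(facet_lines)
    for header in it:
        n_polys, n_holes, m = map(int, header.split())  # facet header
        corners = next(it).split()                      # polygon line
        if m == marker:
            faces.append([int(c) for c in corners[1:]])  # drop the count
    return faces

section = ["1 0 1", "3 15 16 13",
           "1 0 1", "3 13 16 17",
           "1 0 2", "3 304 303 335",
           "1 0 2", "3 303 309 335"]
print(faces_with_marker(section, 2))  # prints: [[304, 303, 335], [303, 309, 335]]
```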

NFS network traffic with auto_direct

I'm interested in how the NFS network traffic flows when there is a redirect on the server side.
E.g.: the client accesses dir_a, mounted from NFS server_a, but on server_a, /etc/auto_direct contains an entry that redirects dir_a to dir_b on server_b.
In this case, which server will the NFS client communicate with? The most important question is: between which machines will the bulk of the NFS data traffic take place?
All this is on Solaris 10, if that matters.
I've run some tests, and from those it seems that the client somehow knows about the redirect:
user#client $ df dir_a
dir_a(auto_direct ): 0 blocks 0 files
I did some file accesses in dir_a and watched the client's interfaces towards server_a and server_b.
On the client I ran:
cd dir_a; while true; do echo 1111111111111111111111111111 >> t; done
On the client's interface to server_a there was no traffic increase (only in the total traffic); the time when the above loop was running is marked with * below:
nmsadm#atrcxb1951: netstat -I bnxe0 10
input bnxe0 output input (Total) output
packets errs packets errs colls packets errs packets errs colls
8819 0 4476 0 0 8920 0 4494 0 0
8800 0 4451 0 0 8871 0 4466 0 0
8753 0 4371 0 0 27468 0 26777 0 0 *
8704 0 4378 0 0 27772 0 27227 0 0 *
8734 0 4381 0 0 28425 0 28044 0 0 *
8789 0 4453 0 0 13053 0 9317 0 0
8765 0 4407 0 0 8871 0 4420 0 0
While on the client's interface towards server_b there was:
nmsadm#atrcxb1951:~$ netstat -I bnxe4 10
input bnxe4 output input (Total) output
packets errs packets errs colls packets errs packets errs colls
121 0 17 0 0 8942 0 4494 0 0
10467 0 12473 0 0 19264 0 16927 0 0 *
18579 0 22362 0 0 27291 0 26732 0 0 *
21735 0 25978 0 0 30466 0 30364 0 0 *
10971 0 12970 0 0 19760 0 17395 0 0 *
35 0 12 0 0 8782 0 4432 0 0
So in my case it seems that the client handles the redirection itself, and server_a does not proxy the NFS data traffic.
I'd still be curious under what circumstances it works like this: any configuration options, etc.