Success Probability of the FORS few-time signature scheme

I was reading about the FORS and DFORS few-time signature schemes. I understand that the security of the HORS few-time signature scheme equals $(rk/t)^k$, but I could not understand why the security of FORS is $(r/t)^k$, as discussed on page 5 of the paper "Hash-based Signatures Revisited": https://eprint.iacr.org/2020/564.pdf
In the FORS few-time signature scheme, how does an adversary A, who has observed the signatures of $r$ messages, find a message $m_{r+1}$ that is in an $r$-subset-cover relation with the other $r$ messages, $C^{r\text{-}FORS}_{k}(m_1, m_2, \ldots, m_{r+1})$, with success probability $(r/t)^k$?
I could not understand how to derive that the success probability of the FORS few-time signature scheme equals $(r/t)^k$.
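For what it is worth, here is the intuition as I currently read it (my own sketch, not the paper's exact argument): in HORS every signature reveals $k$ values drawn from one common set of $t$ secret values, so after $r$ signatures up to $rk$ of the $t$ values are exposed, and each of the $k$ indices derived from a new message hits an exposed value with probability at most $rk/t$, giving the $(rk/t)^k$ bound. In FORS each of the $k$ index positions has its own independent set of $t$ secret values, so after $r$ signatures at most $r$ values are exposed in each of the $k$ sets. For a message whose $k$ indices are uniformly random,
$$\Pr[m_{r+1} \text{ is } r\text{-subset covered}] \;\le\; \prod_{i=1}^{k} \frac{r}{t} \;=\; \left(\frac{r}{t}\right)^{k},$$
since the $i$-th index must land on one of the at most $r$ exposed values in the $i$-th set, independently for each of the $k$ positions.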

Hashcat: mask attack getting error "separator unmatched"

I am desperately trying to recover a veracrypt password with hashcat.
I dumped the encrypted device with the dd command and then used the sha512sum command to get the hash, so it should be:
c21cd34530e01d4f31f329a9c53643984894e1411ee6400551d7f614d4e3409ec643e3a0c3684238b9656c2793239666aa907f7739055197b094804679026810
I remember a part of the password so I guessed a mask attack with hashcat should be helpful.
But I keep getting "separator unmatched"
I typed the following command:
hashcat --force -m 1800 -a 3 -i --increment-min 20 --increment-max 21 c21cd34530e01d4f31f329a9c53643984894e1411ee6400551d7f614d4e3409ec643e3a0c3684238b9656c2793239666aa907f7739055197b094804679026810 ?u?1?1?s?u?1?1?s?d?d?d?d?s?l?l
Hash 'c21cd34530e01d4f31f329a9c53643984894e1411ee6400551d7f614d4e3409ec643e3a0c3684238b9656c2793239666aa907f7739055197b094804679026810': Separator unmatched
No hashes loaded.
I do not understand my mistake; the mask does come after the hash.
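Not a full answer, but a hedged note on the error itself: in hashcat, -m 1800 is sha512crypt (Unix $6$... strings), so a bare 128-character hex digest does not match the format that mode expects, which is consistent with the "Separator unmatched" error. A raw SHA-512 digest corresponds to mode 1700. A minimal sketch, assuming the hash is saved to hash.txt and using -1 as a placeholder definition for the ?1 positions (a mask cannot use ?1 unless custom charset 1 is defined):
hashcat -m 1700 -a 3 -1 ?l?u hash.txt ?u?1?1?s?u?1?1?s?d?d?d?d?s?l?l
Note also that VeraCrypt volumes are normally attacked with hashcat's dedicated VeraCrypt modes against the raw volume header, not against a sha512sum of the device, but that is a separate question.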

How to use randomTrips.py in SUMO on win8

I'm using randomTrips.py in SUMO on win8 to generate random trips. I have a map.net.xml file and am trying to create a trips.xml file through randomTrips.py. However, a problem occurs and I don't know how to deal with it. The command is as follows:
C:\Program Files (x86)\Eclipse\sumo\tools>randomTrips.py -n map.net.xml -l 200 -e -o map.trips.xml
I don't get the .trips.xml file I want. The output is as follows; it seems that I have missed some options in my command, but I don't know how to correct them. If anyone knows how to solve the problem, please give me some suggestions. Thanks.
The outcome is :
Usage: randomTrips.py [options]
Options:
-h, --help show this help message and exit
-n NETFILE, --net-file=NETFILE
define the net file (mandatory)
-a ADDITIONAL, --additional-files=ADDITIONAL
define additional files to be loaded by the rout
-o TRIPFILE, --output-trip-file=TRIPFILE
define the output trip filename
-r ROUTEFILE, --route-file=ROUTEFILE
generates route file with duarouter
--weights-prefix=WEIGHTSPREFIX
loads probabilities for being source, destinatio
via-edge from the files named .src.xml,
.sink.xml and .via.xml
--weights-output-prefix=WEIGHTS_OUTPREFIX
generates weights files for visualisation
--pedestrians create a person file with pedestrian trips inste
vehicle trips
--persontrips create a person file with person trips instead o
vehicle trips
--persontrip.transfer.car-walk=CARWALKMODE
Where are mode changes from car to walking allow
(possible values: 'ptStops', 'allJunctions' and
combinations)
--persontrip.walkfactor=WALKFACTOR
Use FLOAT as a factor on pedestrian maximum spee
during intermodal routing
--prefix=TRIPPREFIX prefix for the trip ids
-t TRIPATTRS, --trip-attributes=TRIPATTRS
additional trip attributes. When generating
pedestrians, attributes for and
supported.
--fringe-start-attributes=FRINGEATTRS
additional trip attributes when starting on a fr
-b BEGIN, --begin=BEGIN
begin time
-e END, --end=END end time (default 3600)
-p PERIOD, --period=PERIOD
Generate vehicles with equidistant departure tim
period=FLOAT (default 1.0). If option --binomial
used, the expected arrival rate is set to 1/peri
-s SEED, --seed=SEED random seed
-l, --length weight edge probability by length
-L, --lanes weight edge probability by number of lanes
--speed-exponent=SPEED_EXPONENT
weight edge probability by speed^ (defaul
--fringe-factor=FRINGE_FACTOR
multiply weight of fringe edges by (defa
--fringe-threshold=FRINGE_THRESHOLD
only consider edges with speed above as
edges (default 0)
--allow-fringe Allow departing on edges that leave the network
arriving on edges that enter the network (via
turnarounds or as 1-edge trips
--allow-fringe.min-length=ALLOW_FRINGE_MIN_LENGTH
Allow departing on edges that leave the network
arriving on edges that enter the network, if the
at least the given length
--min-distance=MIN_DISTANCE
require start and end edges for each trip to be
least m apart
--max-distance=MAX_DISTANCE
require start and end edges for each trip to be
most m apart (default 0 which disables a
checks)
-i INTERMEDIATE, --intermediate=INTERMEDIATE
generates the given number of intermediate way p
--flows=FLOWS generates INT flows that together output vehicle
the specified period
--maxtries=MAXTRIES number of attemps for finding a trip which meets
distance constraints
--binomial=N If this is set, the number of departures per sec
will be drawn from a binomial distribution with
and p=PERIOD/N where PERIOD is the argument give
option --period. Tnumber of attemps for finding
which meets the distance constraints
-c VCLASS, --vclass=VCLASS, --edge-permission=VCLASS
only from and to edges which permit the given ve
class
--vehicle-class=VEHICLE_CLASS
The vehicle class assigned to the generated trip
(adds a standard vType definition to the output
--validate Whether to produce trip output that is already c
for connectivity
-v, --verbose tell me what you are doing
Probably the file type association for .py files is broken; see Python Command Line Arguments (Windows). Try running the script with python explicitly:
python randomTrips.py -n map.net.xml -l 200 -e -o map.trips.xml
I just tried this last week. Search for randomTrips.py under SUMO's folder to find its location, then open cmd and call python to execute it. You also need to specify the net.xml file.
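If a route file is wanted as well, the -r option from the usage output above can be added. A sketch, assuming the intended end time was 200 seconds (i.e. -e 200 rather than -l 200, since -l takes no argument) and with map.rou.xml as a placeholder output name:
python randomTrips.py -n map.net.xml -e 200 -l -o map.trips.xml -r map.rou.xml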

festival 2.4: why do some voices not work with singing mode?

voice_kal_diphone and voice_ral_diphone work correctly in singing mode (there's vocal output and the pitches are correct for the specified notes).
voice_cmu_us_ahw_cg and the other CMU voices do not work correctly--there's vocal output but the pitch is not changed according to the specified notes.
Is it possible to get correct output with the higher quality CMU voices?
The command line for working (pitch-affected) output is:
text2wave -mode singing -eval "(voice_kal_diphone)" -o song.wav song.xml
The command line for non-working (pitch-unaffected) output is:
text2wave -mode singing -eval "(voice_cmu_us_ahw_cg)" -o song.wav song.xml
Here's song.xml:
<?xml version="1.0"?>
<!DOCTYPE SINGING PUBLIC "-//SINGING//DTD SINGING mark up//EN" "Singing.v0_1.dtd" []>
<SINGING BPM="60">
<PITCH NOTE="A4,C4,C4"><DURATION BEATS="0.3,0.3,0.3">nationwide</DURATION></PITCH>
<PITCH NOTE="C4"><DURATION BEATS="0.3">is</DURATION></PITCH>
<PITCH NOTE="D4"><DURATION BEATS="0.3">on</DURATION></PITCH>
<PITCH NOTE="F4"><DURATION BEATS="0.3">your</DURATION></PITCH>
<PITCH NOTE="F4"><DURATION BEATS="0.3">side</DURATION></PITCH>
</SINGING>
You may also need this patch to singing-mode.scm:
@@ -339,7 +339,9 @@
(defvar singing-max-short-vowel-length 0.11)
(define (singing_do_initial utt token)
- (if (equal? (item.name token) "")
+ (if (and
+ (not (equal? nil token))
+ (equal? (item.name token) ""))
(let ((restlen (car (item.feat token 'rest))))
(if singing-debug
(format t "restlen %l\n" restlen))
To set up my environment I used the festvox fest_build script. You can also download voice_cmu_us_ahw_cg separately.
It seems that the problem is in phone generation.
voice_kal_diphone uses the UniSyn synthesis model, while voice_cmu_us_ahw_cg uses the ClusterGen model. The latter has its own state-based intonation and duration model instead of per-phone intonation/duration: you may have noticed that the durations didn't change either in the generated 'song'.
singing-mode.scm tries to extract each syllable and modify its frequency. With the ClusterGen model, the wave generator simply ignores the syllable frequencies and durations set in the Target because of this different modelling.
As a result we get better voice quality (based on a statistical model), but can't change the frequency directly.
A very good description of the generation pipeline can be found here.
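In practice, given this limitation, a hedged workaround is to stay with the diphone voices for singing mode, e.g. the other voice reported above to work:
text2wave -mode singing -eval "(voice_ral_diphone)" -o song.wav song.xml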

How can I use the value of mp2t.af.pcr as a Tshark field?

I have a wireshark capture that contains an RTP multicast stream (plus some other incidental data).
Using a Tshark command like the following, I can produce a CSV of the RTP timestamp compared with the packet capture time:
tshark.exe -r "capture.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -d udp.port==5000,rtp
This decodes the UDP packets as RTP, and successfully prints out the two fields as expected.
Now, my question: The payload of the RTP stream is an MPEG2 Transport Stream, and I also want to print the PCR value (if there is one) alongside the packet and RTP timestamps.
In Wireshark, I can see the PCR being decoded correctly; however, a command like the following:
tshark.exe -r "HBO HD CZ.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -e mp2t.af.pcr -d udp.port==5000,mp2t
...only prints out a "1" if there is a PCR present, not the actual value. I have also checked the .pcr_flag field to confirm that these two are not swapped, but still I see the same result.
The documentation seems to call mp2t.af.pcr a "Label"; does this mean that Tshark is not able to use it as a field? Is there a way to generate a CSV with these values?
(What part of the documentation calls it a "Label"? That's a somewhat odd description of a named field.)
The problem is that the value Wireshark displays as "base(XXX)*300 + ext(YYY)" is calculated for display only; the field itself isn't given an integral type but instead a type that doesn't carry a value. Arguably, it should be an FT_UINT64 field and should be given a value, so that you can filter on it and can print the value in TShark.
Please file an enhancement request for this on the Wireshark Bugzilla.
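Until such a field exists, one possible (untested) workaround: the computed PCR text that Wireshark shows in the protocol tree is also emitted by PDML output, which dumps every tree node together with its display string, so it should be recoverable from there, e.g.:
tshark.exe -r "capture.pcap" -d udp.port==5000,mp2t -T pdml > capture.pdml
The mp2t.af.pcr nodes in capture.pdml should then carry the displayed "base(XXX)*300 + ext(YYY)" text in their showname attribute, which a script can parse.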

How can I view all comments posted by users in a Bitbucket repository

On the repository home page, I can see comments posted in the recent activity feed at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way?
Comments on pull requests, issues and commits can be retrieved using Bitbucket's REST API.
However, it seems that there is no way to list all of them in one place, so the only way to get them is to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since Bitbucket has seemingly set a limit on the number of API accesses to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results; after that I could only retrieve about one entry per second elapsed since the time of the last rate-limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the forged URL (below referred to as $url) by verifying that repository information is correctly retrieved as JSON data from it: curl -u username $url.
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
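For endpoints where more than the size member is needed, here is a rough sketch of walking the pages (it assumes, like the size extraction in the script below, that the JSON is formatted as "key": value, and that each page carries a next member pointing to the following page, as Bitbucket's 2.0 API does):
url="https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments"
while [ -n "$url" ]; do
    page=$(curl -s -u username:"$pw" "$url")
    printf '%s\n' "$page"    # process this page of comments here
    url=$(printf '%s' "$page" | grep -o '"next": "[^"]*"' | cut -d'"' -f4)
done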
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
# getNComments.sh: print "<number of comments> <commit hash>" for one commit
pw=$1
id=$2
# request only the "size" member of the comments collection
json=$(curl -s -u username:"$pw" \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
# error object returned by the API: report the commit and carry on
printf '%s' "$json" | grep -q '"type": "error"' \
&& printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
# unusable response (e.g. the rate limit was hit): report on stderr and fail
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
&& printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility for the error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch -a first to update the local git repo if needed); check out git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter – so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see security considerations below. The processing is parallelised onto 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the two commands below to update the input and output files. Wait a while for the request limit to be lifted, then re-execute the command from step B, and repeat until all the data is processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits’ comments, but for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ); although it seems that there are systems that let all users access this information (do a ls -l /proc/*/environ to check the environment files’ permissions).
In the script simply replace the lines pw=$1 id=$2 with id=$1, then pass pw="$passwd" before xargs in the command line invocation (see the sketch below). This will make the environment variable pw visible to xargs and all of its descendant processes, that is the script and its children (curl, grep, cut, etc.), which may or may not read the variable. curl does not read the password from the environment, but if its password-hiding trick mentioned above works then it might be good enough.
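A sketch of what that invocation looks like once the script has been edited as just described:
< remaining.id pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh > commits.temp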
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
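Putting those edits together, a sketch of the changed lines of getNComments.sh (the rest of the script stays as above; credentials.txt is a placeholder name for the chmod 600 file):
#!/bin/sh
# the first argument is now the credentials file containing username:password on its first line
IFS= read -r authinfo < "$1"
id=$2
# feed -s and -u to curl through a config file on stdin instead of the command line
json=$(printf -- '-s\n-u "%s"\n' "$authinfo" \
| curl -K - \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
The invocation from step B then becomes:
< remaining.id xargs -P 15 -L 1 ./getNComments.sh credentials.txt > commits.temp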
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.