I am looking for a way to show the results of the file "tcp-variants-comparison.cc" in ns-3 (3.28) on Ubuntu 18.04.
I found an old topic here from 2013, but it doesn't seem to work correctly in my current environment.
P.S.: I am a newbie in ns-3, so I will appreciate any help.
Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files:
-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii
-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data
-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data
-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data
-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data
-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data
-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data
-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data
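To actually look at the results, each .data file is a plain-text trace that you can plot directly. A minimal gnuplot sketch, assuming the traces use the usual two-column time/value layout:
gnuplot -persist <<'EOF'
set xlabel "time (s)"
set ylabel "cwnd"
plot "TcpVariantsComparison-cwnd.data" using 1:2 with lines title "cwnd"
EOF
The same recipe works for the rtt, rto, ssth, inflight, and next-tx/next-rx traces; only the filename and the y-axis label change.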
You can use other command-line arguments to generate the desired output; see the list below.
Program Arguments:
--transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]
--error_p: Packet error rate [0]
--bandwidth: Bottleneck bandwidth [2Mbps]
--delay: Bottleneck delay [0.01ms]
--access_bandwidth: Access link bandwidth [10Mbps]
--access_delay: Access link delay [45ms]
--tracing: Flag to enable/disable tracing [true]
--prefix_name: Prefix of output trace file [TcpVariantsComparison]
--data: Number of Megabytes of data to transmit [0]
--mtu: Size of IP packets to send in bytes [400]
--num_flows: Number of flows [1]
--duration: Time to allow flows to run in seconds [100]
--run: Run index (for setting repeatable seeds) [0]
--flow_monitor: Enable flow monitor [false]
--pcap_tracing: Enable or disable PCAP tracing [false]
--queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]
--sack: Enable or disable SACK option [true]
In ns-3.36.1 I used this command:
./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1
and the output looks like this:
TcpVariantsComparison-ascii
TcpVariantsComparison-cwnd.data
TcpVariantsComparison-inflight.data
TcpVariantsComparison-next-rx.data
TcpVariantsComparison-next-tx.data
TcpVariantsComparison-rto.data
TcpVariantsComparison-rtt.data
TcpVariantsComparison-ssth.data
Before trying to assemble sequence data, I get a file size estimate for my raw READ1/READ2 files by running the command ls -l -h from the directory the files are in. The output looks something like this:
-rwxrwxrwx@ 1 catharus2021 staff 86M Jun 11 15:03 pluvialis-dominica_JJW362-READ1.fastq.gz
-rwxrwxrwx@ 1 catharus2021 staff 84M Jun 11 15:03 pluvialis-dominica_JJW362-READ2.fastq.gz
For a previous run using the identical command, but a different batch of data, the output was as follows:
-rwxr-xr-x 1 catharus2021 staff 44M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ1.fastq.gz
-rwxr-xr-x 1 catharus2021 staff 52M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ2.fastq.gz
It doesn't seem to be affecting any downstream commands, but does anyone know why the strings at the very beginning (-rwxrwxrwx@ vs. -rwxr-xr-x) are different? I assume that they're permissions flags, but Google has been less-than-informative when I try to type those in and search.
Thanks in advance for your time.
The coding describes who can access a file and in which way. It is ordered:
owner - group - world
rwxr-xr-x
user can read, write and execute
group can only read and execute
world can only read and execute
This prevents other people from overwriting your data. If you change it to
rwxrwxrwx
everybody can overwrite your data. (The trailing @ in your first listing is not a permission bit at all; ls on macOS appends @ when a file has extended attributes, which is consistent with the staff group in your output.)
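For example, to take a file back from rwxrwxrwx to the safer rwxr-xr-x (filename taken from the listing above):
chmod u=rwx,go=rx pluvialis-dominica_JJW362-READ1.fastq.gz
# or the octal equivalent:
chmod 755 pluvialis-dominica_JJW362-READ1.fastq.gz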
One of my redis servers is repeatedly going down today without any overt, diagnosable cause. My users all end up getting "Error 111 connecting to unix socket: /var/run/redis/redis2.sock. Connection refused" errors.
Looking into the logs at /var/log/redis, the last few lines capture nothing more nefarious than a scheduled backup:
[8248] 09 Mar 07:48:17.090 * 10 changes in 21600 seconds. Saving...
[8248] 09 Mar 07:48:17.374 * Background saving started by pid 47613
[47613] 09 Mar 07:51:02.257 * DB saved on disk
[47613] 09 Mar 07:51:02.486 * RDB: 526 MB of memory used by copy-on-write
[8248] 09 Mar 07:51:02.920 * Background saving terminated with success
The pid file still exists too, which implies the server wasn't formally shut down and Redis was still daemonized?
I logged into my system and ran sudo service redis-server restart twice to get it up and running. Apart from these logs, how else can I diagnose what might have gone wrong?
Update: I noticed that at the time of the first crash, disk swapping started taking place. This hasn't happened before. Moreover, cat /proc/sys/vm/swappiness confirms swappiness is set to 2.
free -m shows (after normal operation):
             total       used       free     shared    buffers     cached
Mem:         28136      27015       1120        305         80       6586
-/+ buffers/cache:      20349       7787
Swap:         1023        991         32
free -m shows (after the redis server goes down):
             total       used       free     shared    buffers     cached
Mem:         28136       8770      19365        305         60        441
-/+ buffers/cache:       8268      19868
Swap:         1023       1022          1
This sounds like the work of the OS's OOM killer; you can verify or discredit the hypothesis by reviewing /var/log/syslog.
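For example, assuming a Debian/Ubuntu-style syslog location, you can search for the killer's telltale messages around the crash time:
# OOM kills are logged by the kernel; a hit naming redis-server confirms the hypothesis
grep -iE 'out of memory|killed process' /var/log/syslog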
In this case, the persistence job's overhead triggered the killer. You need to provision for that by setting maxmemory and allocating enough RAM to accommodate persistence's requirements, including copy-on-write (COW) overhead.
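For example, you could cap Redis so that a background save's copy-on-write overhead still fits in RAM; the 12gb figure below is an illustrative assumption for this 28 GB box, not a tuned recommendation:
# cap Redis' dataset memory, leaving headroom for fork/COW during RDB saves
redis-cli config set maxmemory 12gb
# to make the cap survive restarts, set the same maxmemory directive in redis.conf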
Note that free isn't useful after the fact; you need to monitor your resources continuously.
As for swap, if you don't care about latency then you can certainly rely on it.
I am using IBM LSF and trying to get usage statistics during a certain period. I found that bhist does the job, but the short form bhist output does not show all of the fields I need.
What I want to know is:
1. Are bhist's output fields customizable? The fields I need are:
<jobid>
<user>
<queue>
<job_name>
<project_name>
<job_description>
<submission_time>
<pending_time>
<run_time>
2. If 1 is not possible: the long form (bhist -l) output shows everything I need, but the format is hard to manipulate. I've pasted an example of the format below.
For example, the number of lines between records is not fixed, and the word wrap within each event may break a line in the middle of a word I'm trying to scan for. How do I parse this format with sed and awk?
JobId <1531>, User <user1>, Project <default>, Command <example200>
Fri Dec 27 13:04:14: Submitted from host <hostA> to Queue <priority>, CWD <$H
OME>, Specified Hosts <hostD>;
Fri Dec 27 13:04:19: Dispatched to <hostD>;
Fri Dec 27 13:04:19: Starting (Pid 8920);
Fri Dec 27 13:04:20: Running with execution home </home/user1>, Execution CWD
</home/user1>, Execution Pid <8920>;
Fri Dec 27 13:05:49: Suspended by the user or administrator;
Fri Dec 27 13:05:56: Suspended: Waiting for re-scheduling after being resumed
by user;
Fri Dec 27 13:05:57: Running;
Fri Dec 27 13:07:52: Done successfully. The CPU time used is 28.3 seconds.
Summary of time in seconds spent in various states by Sat Dec 27 13:07:52 1997
PEND PSUSP RUN USUSP SSUSP UNKWN TOTAL
5 0 205 7 1 0 218
------------------------------------------------------------
.... repeat
I'm adding a second answer because it might help you with your problem without actually having to write your own solution (depending on the usage statistics you're after).
LSF already has a utility called bacct that computes and prints out various usage statistics about historical LSF jobs filtered by various criteria.
For example, to get summary usage statistics about jobs that were dispatched/completed/submitted between time0 and time1, you can use (respectively):
bacct -D time0,time1
bacct -C time0,time1
bacct -S time0,time1
Statistics about jobs submitted by a particular user:
bacct -u <username>
Statistics about jobs submitted to a particular queue:
bacct -q <queuename>
These options can be combined as well, so for example if you wanted statistics about jobs that were submitted and completed within a particular time window for a particular project, you can use:
bacct -S time0,time1 -C time0,time1 -P <projectname>
The output provides some summary information about all jobs that match the provided criteria like so:
$ bacct -u bobbafett -q normal
Accounting information about jobs that are:
- submitted by users bobbafett,
- accounted on all projects.
- completed normally or exited
- executed on all hosts.
- submitted to queues normal,
- accounted on all service classes.
------------------------------------------------------------------------------
SUMMARY: ( time unit: second )
Total number of done jobs: 0 Total number of exited jobs: 32
Total CPU time consumed: 46.8 Average CPU time consumed: 1.5
Maximum CPU time of a job: 9.0 Minimum CPU time of a job: 0.0
Total wait time in queues: 18680.0
Average wait time in queue: 583.8
Maximum wait time in queue: 5507.0 Minimum wait time in queue: 0.0
Average turnaround time: 11568 (seconds/job)
Maximum turnaround time: 43294 Minimum turnaround time: 40
Average hog factor of a job: 0.00 ( cpu time / turnaround time )
Maximum hog factor of a job: 0.02 Minimum hog factor of a job: 0.00
Total Run time consumed: 351504 Average Run time consumed: 10984
Maximum Run time of a job: 1844674 Minimum Run time of a job: 0
Total throughput: 0.24 (jobs/hour) during 160.32 hours
Beginning time: Nov 11 17:55 Ending time: Nov 18 10:14
This command also has a long form output that provides some bhist -l-like information about each job that might be a bit easier to parse (although still not all that easy):
$ bacct -l -u bobbafett -q normal
Accounting information about jobs that are:
- submitted by users bobbafett,
- accounted on all projects.
- completed normally or exited
- executed on all hosts.
- submitted to queues normal,
- accounted on all service classes.
------------------------------------------------------------------------------
Job <101>, User <bobbafett>, Project <default>, Status <EXIT>, Queue <normal>,
Command <sleep 100000000>
Wed Nov 11 17:37:45: Submitted from host <endor>, CWD <$HOME>;
Wed Nov 11 17:55:05: Completed <exit>; TERM_OWNER: job killed by owner.
Accounting information about this job:
CPU_T WAIT TURNAROUND STATUS HOG_FACTOR MEM SWAP
0.00 1040 1040 exit 0.0000 0M 0M
------------------------------------------------------------------------------
...
Long form output is pretty hard to parse. I know bjobs has an option for unformatted output (-UF) in older LSF versions, which makes it a bit easier, and the most recent version of LSF allows you to customize which columns get printed in short form output with -o.
Unfortunately, neither of these options is available with bhist. The only real possibilities for historical information are:
1. Figure out some way to parse bhist -l; impractical and maybe not even possible due to the inconsistent formatting, as you've discovered.
2. Write a C program to do what you want using the LSF API, which exposes the functions that bhist itself uses to parse the lsb.events file. This is the file that stores all the historical information about the LSF cluster, and is what bhist reads to generate its output.
3. If C is not an option for you, you could try writing a script to parse the lsb.events file directly; the format is documented in the configuration reference. This is hard, but not impossible. Here is the relevant document for LSF 9.1.3.
My personal recommendation would be #2 -- the function you're looking for is lsb_geteventrec(). You'd basically read each line in lsb.events one at a time and pull out the information you need.
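That said, if you do want to attack option #1 with sed/awk, the biggest obstacle (the wrapped lines) can be undone in a first pass. A minimal sketch, assuming, as in your sample, that continuation lines are exactly the ones that start with whitespace:
# join wrapped continuation lines so each event sits on one line
awk '
  /^[[:space:]]/ { sub(/^[[:space:]]+/, ""); buf = buf $0; next }  # continuation: append to current event
  { if (buf != "") print buf; buf = $0 }                           # fresh line: flush the previous one
  END { if (buf != "") print buf }
' bhist_l_output.txt
After this pass, fields such as CWD <$HOME> are whole again and records are still delimited by the dashed separator lines, so ordinary awk field matching becomes feasible. The filename bhist_l_output.txt is a placeholder for wherever you saved the bhist -l output.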
I'm using SQLite with a PHP application that runs in the background. I stopped the application with Ctrl-C, and I just noticed that I have database.sqlite and database.sqlite-journal.
How can I remove the -journal file without compromising the database?
Thank you!
P.S. SQLite version 3.7.9
EDIT:
-rw-r--r--. 1 damiano damiano 51M 8 mar 18.15 test.sqlite2
-rw-r--r--. 1 damiano damiano 2,6K 8 mar 18.15 test.sqlite2-journal
[damiano@localhost backup]$ sqlite3 test.sqlite2
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>
[damiano@localhost backup]$ ls -lh
-rw-r--r--. 1 damiano damiano 51M 8 mar 18.15 test.sqlite2
-rw-r--r--. 1 damiano damiano 2,6K 8 mar 18.15 test.sqlite2-journal
[damiano@localhost backup]$
Execute this command:
sqlite3 test.sqlite2 vacuum
It will make your database as small as possible and apply any outstanding transactions or rollbacks in the -journal file (removing it in the process). You can actually execute any other transaction that does something (simply connecting and disconnecting is NOT enough), but vacuum seems like the easiest approach.
Just open the database (with your program or with the sqlite3 command-line tool).
SQLite will then roll back the changes of your interrupted transaction and afterwards remove the journal.
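A minimal sketch from the shell, reusing the files from your EDIT: merely connecting and quitting (as you did) doesn't touch the database, but any statement that actually reads it forces SQLite to process the hot journal first:
sqlite3 test.sqlite2 "PRAGMA integrity_check;"
ls -lh   # test.sqlite2-journal should now be gone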
How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump:
Full backup:
dump -0aLuf /mnt/bkup/backup.dump /
and then for the incremental
dump -1aLuf /mnt/bkup/backup.dump /
What happens if I dump the level 1 to a different file:
dump -1aLuf /mnt/bkup/backup1.dump /
I am trying to understand how dump keeps track of the changes. I am using a ext3 file system.
This is my /etc/dumpdates:
# cat /etc/dumpdates
/dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600
/dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600
My level 0 for this system was around 11 GB; when I ran the level 1 today with the same filename, the size was around 5 GB.
I think I figured out the issue. It looks like dump records information in the dump file header, so it knows when the previous lower-level dump occurred.
Level 0 backup
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
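Note that dump tracks what it has saved per filesystem and level in /etc/dumpdates (that is what the -u flag updates), not by output filename, so dumping level 1 to backup1.dump is fine; in fact, reusing backup.dump overwrites the level 0 archive, which you still need at restore time. A sketch of restoring the two-file variant (the /mnt/restore scratch directory is a hypothetical placeholder):
# restore level 0 first, then replay the level 1 on top of it
cd /mnt/restore
restore -rf /mnt/bkup/backup.dump
restore -rf /mnt/bkup/backup1.dump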