How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump:
Full backup:
dump -0aLuf /mnt/bkup/backup.dump /
and then, for the incremental:
dump -1aLuf /mnt/bkup/backup.dump /
What happens if I dump the level 1 to a different file:
dump -1aLuf /mnt/bkup/backup1.dump /
I am trying to understand how dump keeps track of the changes. I am using an ext3 filesystem.
This is my /etc/dumpdates:
# cat /etc/dumpdates
/dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600
/dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600
My level 0 for this system was around 11 GB. I ran a level 1 today using the same filename, and its size was around 5 GB.
I think I figured it out. It looks like dump records information in the dump file itself, so it knows when the previous dump occurred.
Level 0 backup
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
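To make the bookkeeping concrete, here is a minimal sketch of a full-plus-incremental cycle and the matching restore order (the filenames are hypothetical; the flags are the ones from the commands above):
dump -0aLuf /mnt/bkup/root-0.dump /     # full dump; -u records its date in /etc/dumpdates
dump -1aLuf /mnt/bkup/root-1.dump /     # everything changed since the level 0
restore -rf /mnt/bkup/root-0.dump       # restore replays dumps in ascending level order
restore -rf /mnt/bkup/root-1.dump       # run both from the root of the restored filesystem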
Before trying to assemble sequence data, I get a file size estimate for my raw READ1/READ2 files by running the command ls -l -h from the directory the files are in. The output looks something like this:
-rwxrwxrwx@ 1 catharus2021 staff 86M Jun 11 15:03 pluvialis-dominica_JJW362-READ1.fastq.gz
-rwxrwxrwx@ 1 catharus2021 staff 84M Jun 11 15:03 pluvialis-dominica_JJW362-READ2.fastq.gz
For a previous run using the identical command, but a different batch of data, the output was as such:
-rwxr-xr-x 1 catharus2021 staff 44M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ1.fastq.gz
-rwxr-xr-x 1 catharus2021 staff 52M Mar 16 2018 lagopus_lagopus_alascensis_JJW1970_READ2.fastq.gz
It doesn't seem to be affecting any downstream commands, but does anyone know why the strings at the very beginning (-rwxrwxrwx@ vs. -rwxr-xr-x) are different? I assume they're permissions flags, but Google has been less-than-informative when I try to type those in and search.
Thanks in advance for your time.
The coding describes who can access a file, and in which way. The order is:
owner - group - world
So for
rwxr-xr-x
the owner can read, write and execute,
the group can only read and execute,
the world can only read and execute.
This prevents other people from overwriting your data. If you change it to
rwxrwxrwx
everybody can overwrite your data.
(The trailing @ in your first listing is not a permission bit: on macOS, ls appends @ when a file has extended attributes; ls -l@ lists them.)
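As a quick illustration (the filename here is hypothetical), you can set and inspect these bits with chmod and ls:
touch reads.fastq.gz
chmod 755 reads.fastq.gz   # rwxr-xr-x: owner rwx, group and world r-x
chmod 777 reads.fastq.gz   # rwxrwxrwx: everybody can read, write and execute
ls -l reads.fastq.gz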
I am looking for a way to display the results of "tcp-variants-comparison.cc" under ns3 (3.28) on Ubuntu 18.04.
I found an old topic from 2013 here, but it does not seem to work correctly in my current environment.
P.S.: I am a newbie in ns3, so I will appreciate any help.
Regards,
cedkhader
Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files:
-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii
-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data
-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data
-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data
-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data
-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data
-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data
-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data
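To actually see the results, you can plot any of these .data traces. A minimal sketch with gnuplot (this assumes the file holds whitespace-separated columns with time first and the traced value second):
gnuplot -e "set terminal png; set output 'cwnd.png'; set xlabel 'time (s)'; set ylabel 'cwnd'; plot 'TcpVariantsComparison-cwnd.data' using 1:2 with lines title 'cwnd'"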
You can use other command-line arguments to generate the desired output; see the list below and the example command after it.
Program Arguments:
--transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]
--error_p: Packet error rate [0]
--bandwidth: Bottleneck bandwidth [2Mbps]
--delay: Bottleneck delay [0.01ms]
--access_bandwidth: Access link bandwidth [10Mbps]
--access_delay: Access link delay [45ms]
--tracing: Flag to enable/disable tracing [true]
--prefix_name: Prefix of output trace file [TcpVariantsComparison]
--data: Number of Megabytes of data to transmit [0]
--mtu: Size of IP packets to send in bytes [400]
--num_flows: Number of flows [1]
--duration: Time to allow flows to run in seconds [100]
--run: Run index (for setting repeatable seeds) [0]
--flow_monitor: Enable flow monitor [false]
--pcap_tracing: Enable or disable PCAP tracing [false]
--queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]
--sack: Enable or disable SACK option [true]
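For example, to trace a different TCP variant with a shorter run (the values here are arbitrary picks from the list above):
./waf --run "tcp-variants-comparison --transport_prot=TcpVegas --duration=20 --tracing=1"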
In ns3.36.1 I used this command:
./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1
and the output looks like this:
TcpVariantsComparison-ascii
TcpVariantsComparison-cwnd.data
TcpVariantsComparison-inflight.data
TcpVariantsComparison-next-rx.data
TcpVariantsComparison-next-tx.data
TcpVariantsComparison-rto.data
TcpVariantsComparison-rtt.data
TcpVariantsComparison-ssth.data
I am using redis version 3.0.6. The redis-server process is being run by the redis user.
Starting five days ago, Redis suddenly began failing with "Failed opening .rdb for saving"; it had been working properly before this.
As you can see in the snippet from the logs below, Redis was behaving normally, and then started failing. Power-cycling the server later resolved the issue.
1427:M 24 May 01:09:05.102 * Background saving started by pid 2493
2493:C 24 May 01:09:34.916 * DB saved on disk
2493:C 24 May 01:09:34.917 * RDB: 310 MB of memory used by copy-on-write
1427:M 24 May 01:09:34.950 * Background saving terminated with success
1427:M 24 May 01:14:35.026 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:14:35.036 * Background saving started by pid 2494
2494:C 24 May 01:15:04.329 * DB saved on disk
2494:C 24 May 01:15:04.330 * RDB: 298 MB of memory used by copy-on-write
1427:M 24 May 01:15:04.408 * Background saving terminated with success
1427:M 24 May 01:20:05.008 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:20:05.018 * Background saving started by pid 2499
2499:C 24 May 01:20:33.830 * DB saved on disk
2499:C 24 May 01:20:33.831 * RDB: 330 MB of memory used by copy-on-write
1427:M 24 May 01:20:33.843 * Background saving terminated with success
1427:M 24 May 01:23:46.966 # Failed opening .rdb for saving: Read-only file system
1427:M 24 May 01:25:34.029 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:25:34.038 * Background saving started by pid 2500
2500:C 24 May 01:25:34.038 # Failed opening .rdb for saving: Read-only file system
1427:M 24 May 01:25:34.139 # Background saving error
1427:M 24 May 01:25:40.059 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:25:40.064 * Background saving started by pid 2501
2501:C 24 May 01:25:40.064 # Failed opening .rdb for saving: Read-only file system
1427:M 24 May 01:25:40.165 # Background saving error
1427:M 24 May 01:25:46.080 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:25:46.085 * Background saving started by pid 2502
2502:C 24 May 01:25:46.085 # Failed opening .rdb for saving: Read-only file system
1427:M 24 May 01:25:46.186 # Background saving error
1427:M 24 May 01:25:52.100 * 10 changes in 300 seconds. Saving...
1427:M 24 May 01:25:52.105 * Background saving started by pid 2503
2503:C 24 May 01:25:52.105 # Failed opening .rdb for saving: Read-only file system
1427:M 24 May 01:25:52.206 # Background saving error
So, my question: how could this happen?
Please suggest a proper solution for this.
The "Read-only file system" I think is the key here. It's possible the device it's trying to write to is mounted incorrectly but since it happened randomly, the system may have forced the filesystem into readonly mode. There's a number of conditions that can trigger the operating system to put the filesystem into a read-only mode. This can mean that the filesystem became corrupt or there was some other filesystem consistency issue. If you're hosting on a cloud provider and the disk is network-backed like EBS in AWS, this can be triggered by a temporary network issue. Sometimes the issues are momentary and either force remounting the partition (or power cycling the server) will fix the issue. Other times it's permanent, but since your server came back up just fine, that would appear to not be the case. But the true fix for this would lie in your hardware setup which wasn't detailed.
This answer is related albeit thin on the "why": Failed opening the RDB file ... Read-only file system
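If it happens again, here is a quick way to confirm and recover (a sketch, assuming the data directory lives on the root filesystem):
dmesg | grep -i 'read-only'    # did the kernel remount a filesystem read-only?
findmnt -o TARGET,OPTIONS /    # "ro" in OPTIONS confirms it
mount -o remount,rw /          # recovers only if the underlying issue was transient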
After an upgrade (Ubuntu 14.04 LTS), I had Redis complain of this. The file system was not read-only; it was fine.
kill -9 REDIS-PROCESS   # otherwise it wouldn't die, looping on the error
I deleted the dump.rdb file that already existed, started Redis again, and the problem appeared to go away. (I only just did it, so things may come back.)
It looks like it may have been an upgrade issue.
You can check your redis.conf; in this configuration file you can find the 'dir' and 'dbfilename' settings. Give permission 755 to the 'dir' that contains the dbfilename (on CentOS it is /var/lib/redis) and set its user and group to 'redis'; the files inside the dir should be 644.
Then restart Redis.
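Spelled out as commands (the paths assume the CentOS defaults mentioned above; adjust them to your redis.conf):
grep -E '^(dir|dbfilename)' /etc/redis.conf
chown -R redis:redis /var/lib/redis
chmod 755 /var/lib/redis
chmod 644 /var/lib/redis/dump.rdb
systemctl restart redis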
I'm looking at the redis output console and I'm trying to understand the displayed info:
(I didn't find that info in the quick guide.)
So redis-server.exe outputs this:
/*1*/ [2476] 24 Apr 11:46:28 # Open data file dump.rdb: No such file or directory
/*2*/ [2476] 24 Apr 11:46:28 * The server is now ready to accept connections on port 6379
/*3*/ [2476] 24 Apr 11:42:35 - 1 clients connected (0 slaves), 1188312 bytes in use
/*4*/ [2476] 24 Apr 11:42:40 - DB 0: 1 keys (0 volatile) in 4 slots HT.
Regarding line #1 - what is the dump.rdb file used for? Is it the data itself?
What is the [2476] number? It is not a port, since line #2 says the port is 6379.
What does (0 slaves) mean?
In line #3 - 1188312 bytes in use - but what is the max value, so I'd know about overflows? Is it for all the databases?
Line #4 - what does (0 volatile) mean?
Line #4 - why do I have 4 slots HT? I have no data yet.
[2476] - the process ID
dump.rdb - Redis can persist data by snapshotting; dump.rdb is the default file name: http://redis.io/topics/persistence
0 slaves - Redis can work in master-slave mode; 0 slaves tells you that no slave servers are connected
1188312 bytes in use - total number of bytes allocated by Redis using its allocator
0 volatile - Redis can set keys with an expiration time; this is the count of them
4 slots HT - current hash table size; the initial table size is 4, and the hash table grows as you add more items
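Regarding the "max value" part: memory usage is bounded by the maxmemory setting (0 means no configured limit, so it is bounded only by the OS). You can inspect both figures with redis-cli, for example:
redis-cli INFO memory            # used_memory is the same counter as "bytes in use"
redis-cli CONFIG GET maxmemory   # 0 = no configured limit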
I'm using SQLite with a PHP application that runs in the background. I stopped the application with Ctrl-C, and I just noticed that I have database.sqlite and database.sqlite-journal.
How can I remove the -journal file without compromising the database?
Thank you!
P.S. SQLite version 3.7.9
EDIT:
-rw-r--r--. 1 damiano damiano 51M 8 mar 18.15 test.sqlite2
-rw-r--r--. 1 damiano damiano 2,6K 8 mar 18.15 test.sqlite2-journal
[damiano@localhost backup]$ sqlite3 test.sqlite2
SQLite version 3.7.13 2012-06-11 02:05:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>
[damiano@localhost backup]$ ls -lh
-rw-r--r--. 1 damiano damiano 51M 8 mar 18.15 test.sqlite2
-rw-r--r--. 1 damiano damiano 2,6K 8 mar 18.15 test.sqlite2-journal
[damiano@localhost backup]$
Execute this command:
sqlite3 test.sqlite2 vacuum
It will make your database as small as possible and apply any outstanding transaction or rollback recorded in the -journal file (removing it in the process). You can actually execute any other transaction that does something (simply connecting and disconnecting is NOT enough), but vacuum seems like the easiest approach.
Just open the database (with your program or with the sqlite3 command-line tool).
SQLite will then roll back the changes of your interrupted transaction and afterwards remove the journal.
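For example, using the filenames from above, any real read is enough to trigger the rollback:
sqlite3 test.sqlite2 "PRAGMA integrity_check;"
ls -lh test.sqlite2*   # the -journal file should now be gone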