When I use the Construction Heuristic I always get the same values assigned to my PlanningVariables, which leads to a bad initial solution.
I am quite sure I am making an error here, but I cannot see where.
My Problem Description:
I am trying to assign Resources to WorkOrders. One WorkOrder can have several ResourceAssignments. ResourceAssignment is my main PlanningEntity.
The Resource is the PlanningVariable.
WorkOrders are also PlanningEntities, but that is only relevant here to show that I have more than one PlanningEntity and therefore cannot use the simpler ConstructionHeuristic configurations.
I use a value range provider from the entity so I can return different sets of Resources for different types of ResourceAssignments (e.g. Human, Forklift, Handscanner, ...).
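In sketch form, the entity is wired up roughly like this (simplified; field and method names are illustrative rather than my exact code):
import java.util.List;

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class ResourceAssignment {

    private Long id;
    private WorkOrder workorder;                  // parent work order
    private String type;                          // "human", "forklift", "handscanner", ...
    private List<Resource> typeMatchedResources;  // filled when the problem instance is built

    private Resource resource;                    // the planning variable

    public Long getId() {
        return id;
    }

    @PlanningVariable(valueRangeProviderRefs = {"resourceRange"})
    public Resource getResource() {
        return resource;
    }

    public void setResource(Resource resource) {
        this.resource = resource;
    }

    // Value range provider on the entity: each assignment only offers
    // the resources that match its own type.
    @ValueRangeProvider(id = "resourceRange")
    public List<Resource> getResourceRange() {
        return typeMatchedResources;
    }
}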
My problem is that the Construction Heuristic always seems to assign the same Resource to all ResourceAssignments.
Here is the output from the CH. As you can see, the same Human Resource (id 100) is assigned to both ResourceAssignments, and the same happens for all following PlanningEntities.
2015-07-23 10:57:12,291 [main] INFO Solving started: time spent (185), best score (uninitialized/-9900hard/0soft), environment mode (FAST_ASSERT), random (JDK with seed 0).
2015-07-23 10:57:12,429 [main] DEBUG CH step (0), time spent (325), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=0, workorder=WorkOrder{id=0, name=WO0, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
2015-07-23 10:57:12,523 [main] DEBUG CH step (1), time spent (419), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=1, workorder=WorkOrder{id=0, name=WO0, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
2015-07-23 10:57:12,576 [main] DEBUG CH step (2), time spent (472), score (-9900hard/0soft), selected move count (99), picked move (ResourceAssignment{id=2, workorder=WorkOrder{id=0, name=WO0, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=forklift} => Resource{type=forklift, id=0}).
2015-07-23 10:57:12,667 [main] DEBUG CH step (3), time spent (563), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=3, workorder=WorkOrder{id=1, name=WO1, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
2015-07-23 10:57:12,757 [main] DEBUG CH step (4), time spent (653), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=4, workorder=WorkOrder{id=1, name=WO1, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
2015-07-23 10:57:12,795 [main] DEBUG CH step (5), time spent (691), score (-9900hard/0soft), selected move count (99), picked move (ResourceAssignment{id=5, workorder=WorkOrder{id=1, name=WO1, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=forklift} => Resource{type=forklift, id=0}).
2015-07-23 10:57:12,867 [main] DEBUG CH step (6), time spent (763), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=6, workorder=WorkOrder{id=2, name=WO2, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
2015-07-23 10:57:12,917 [main] DEBUG CH step (7), time spent (813), score (-9900hard/0soft), selected move count (200), picked move (ResourceAssignment{id=7, workorder=WorkOrder{id=2, name=WO2, start=Thu Jul 23 12:57:11 CEST 2015, stop=Thu Jul 23 14:57:11 CEST 2015, offset=0}, resource=null, type=human} => Resource{type=human, id=100}).
It seems to ignore all the other moves, even though I have a rule that penalizes using the same resource at the same time with a hard score.
Here is my solver configuration.
<?xml version="1.0" encoding="UTF-8"?>
<solver>
  <environmentMode>FAST_ASSERT</environmentMode>
  <termination>
    <unimprovedSecondsSpentLimit>10000</unimprovedSecondsSpentLimit>
    <secondsSpentLimit>220000</secondsSpentLimit>
  </termination>
  <!-- Domain model configuration -->
  <solutionClass>com.opal.solver.resource.ResourceAssignmentSolution</solutionClass>
  <entityClass>com.opal.solver.resources.entity.ResourceAssignment</entityClass>
  <entityClass>com.opal.solver.resources.entity.WorkOrder</entityClass>
  <!-- Score configuration -->
  <scoreDirectorFactory>
    <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
    <scoreDrl>resourceAsssignmentRules.drl</scoreDrl>
  </scoreDirectorFactory>
  <constructionHeuristic>
    <queuedEntityPlacer>
      <entitySelector id="placerEntitySelector">
        <cacheType>PHASE</cacheType>
        <entityClass>com.opal.solver.resources.entity.ResourceAssignment</entityClass>
      </entitySelector>
      <changeMoveSelector>
        <entitySelector mimicSelectorRef="placerEntitySelector"/>
        <valueSelector>
          <selectionOrder>ORIGINAL</selectionOrder>
          <variableName>resource</variableName>
        </valueSelector>
      </changeMoveSelector>
    </queuedEntityPlacer>
  </constructionHeuristic>
</solver>
I am sure I am just missing some core concept of OptaPlanner here. I would be grateful for any hint in the right direction.
It looks like you have a verbose configuration of FIRST_FIT right now. Use FIRST_FIT_DECREASING instead, by declaring a difficulty comparator and using that construction heuristic type:
<constructionHeuristic>
  <constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
</constructionHeuristic>
See the docs chapter on First Fit Decreasing. Normally FFD is clearly better than FF (for example, about 4% better on CloudBalancing on average), but follow it with Local Search to improve upon that solution.
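For example, a difficulty comparator can look like the sketch below (the difficulty criterion is only an illustration, and the getResourceRange()/getId() accessors are assumed from the entity sketch in the question; use whatever really makes an assignment hard in your domain). Reference it from the entity with @PlanningEntity(difficultyComparatorClass = ResourceAssignmentDifficultyComparator.class).
import java.util.Comparator;

public class ResourceAssignmentDifficultyComparator implements Comparator<ResourceAssignment> {

    @Override
    public int compare(ResourceAssignment a, ResourceAssignment b) {
        // Illustration: an assignment with fewer candidate resources is treated as more
        // difficult. The comparator must order by ascending difficulty; FIRST_FIT_DECREASING
        // then places the most difficult assignments first.
        int byCandidateCount = Integer.compare(
                b.getResourceRange().size(), a.getResourceRange().size());
        if (byCandidateCount != 0) {
            return byCandidateCount;
        }
        return a.getId().compareTo(b.getId()); // stable tie-break, consistent with equals()
    }
}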
If you're really looking for randomly different CH results (for example, for a GeneticAlgorithm population), use selectionOrder RANDOM instead of ORIGINAL.
In HAProxy, I implemented a simple way to delay clients that send too many requests too fast:
In haproxy.cfg:
acl conn_rate_high sc1_conn_rate(ipsticktable) gt 20
http-request lua.delay_request if conn_rate_high
In the lua file:
-- Lua action: pause the request for one second before letting it continue
function delay_request(txn)
    core.msleep(1000)
end

core.register_action("delay_request", { "http-req" }, delay_request)
This works okay, but what if I want to delay even more when there are even more requests? I could have multiple delay functions with different delays, plus multiple ACLs and conditions, but that gets clunky really fast.
The sensible thing would be to calculate the delay inside the Lua function from the connection rate. My problem is that I cannot pass the connection rate value to the Lua function.
Here is what I tried, which does not work:
In haproxy.cfg:
http-request set-var(txn.sc1_conn_rate) sc1_conn_rate(ipsticktable)
In the lua script:
function delay_request(txn)
    local cr = tostring(txn:get_var("sc1_conn_rate"))
    txn:Alert("conn_rate: ")
    txn:Alert(cr)
    txn:Alert("\n")
    core.msleep(tonumber(cr) * 500)
end
The output in the log is:
Nov 18 14:41:06 haproxy[10988]: conn_rate:
Nov 18 14:41:06 haproxy[10988]: conn_rate:
Nov 18 14:41:06 haproxy[10988]: nil
Nov 18 14:41:06 haproxy[10988]: nil
Nov 18 14:41:06 haproxy[10988]: conn_rate:
Nov 18 14:41:06 haproxy[10988]: conn_rate:
Nov 18 14:41:06 haproxy[10988]: nil
Nov 18 14:41:06 haproxy[10988]: nil
Nov 18 14:41:06 haproxy[10988]: .
Nov 18 14:41:06 haproxy[10988]: .
Nov 18 14:41:06 haproxy[10988]: Lua function 'delay_request': runtime error: /etc/haproxy/delay.lua:6: attempt to perform arithmetic on a nil value from /etc/haproxy/delay.lua:6 C function line 1.
Nov 18 14:41:06 haproxy[10988]: Lua function 'delay_request': runtime error: /etc/haproxy/delay.lua:6: attempt to perform arithmetic on a nil value from /etc/haproxy/delay.lua:6 C function line 1.
So, as far as I understand, get_var returned nil for some reason. :(
My main question is: How do I pass the value of sc1_conn_rate(ipsticktable) to the lua function?
Bonus question: Why are all alerts printed twice?
I am looking for a way to show the results of the file "tcp-variants-comparison.cc" under ns-3 (3.28) on Ubuntu 18.04.
I found an old topic here from 2013, but it does not seem to work correctly in my current environment.
P.S.: I am a newbie in ns-3, so I will appreciate any help.
Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files:
-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii
-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data
-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data
-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data
-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data
-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data
-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data
-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data
You can use other command-line arguments to generate the desired output; see the list below.
Program Arguments:
--transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]
--error_p: Packet error rate [0]
--bandwidth: Bottleneck bandwidth [2Mbps]
--delay: Bottleneck delay [0.01ms]
--access_bandwidth: Access link bandwidth [10Mbps]
--access_delay: Access link delay [45ms]
--tracing: Flag to enable/disable tracing [true]
--prefix_name: Prefix of output trace file [TcpVariantsComparison]
--data: Number of Megabytes of data to transmit [0]
--mtu: Size of IP packets to send in bytes [400]
--num_flows: Number of flows [1]
--duration: Time to allow flows to run in seconds [100]
--run: Run index (for setting repeatable seeds) [0]
--flow_monitor: Enable flow monitor [false]
--pcap_tracing: Enable or disable PCAP tracing [false]
--queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]
--sack: Enable or disable SACK option [true]
In ns-3.36.1 I used this command:
./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1
and the output looks like this:
TcpVariantsComparison-ascii
TcpVariantsComparison-cwnd.data
TcpVariantsComparison-inflight.data
TcpVariantsComparison-next-rx.data
TcpVariantsComparison-next-tx.data
TcpVariantsComparison-rto.data
TcpVariantsComparison-rtt.data
TcpVariantsComparison-ssth.data
I'm having trouble with the BigQuery Data Transfer Service's scheduled export of some Google AdWords reports for a number of large accounts. The transfer was set up correctly, and it works perfectly on small and medium-sized accounts. However, on large accounts, I haven't been able to import a single day in over a month.
Inspecting the error logs, it seems to be a problem with INSUFFICIENT_TOKENS. This is quite weird, though, because I don't have any issue retrieving data from the AdWords API directly.
Below is an example of the logs for one import.
Started Oct 22, 2017, 7:40:00 AM
Ended Oct 22, 2017, 7:17:09 PM
Run Name projects/<project_id>/locations/us/transferConfigs/<another_id>/runs/<run_id>
Log Messages
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 7:03:24 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 6:24:03 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 5:45:24 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 5:09:58 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 4:35:58 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 4:01:31 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 3:27:40 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 2:54:47 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 2:22:26 PM
generic::unavailable: Error while processing report for table 'Ad'. <?xml version="1.0" encoding="UTF-8" standalone="yes"?><reportDownloadError><ApiError><type>ReportDownloadError.ERROR_GETTING_RESPONSE_FROM_BACKEND</type><trigger>Unable to read report data</trigger><fieldPath Oct 22, 2017, 1:50:29 PM
Transfer load date: 20171019 Oct 22, 2017, 1:15:08 PM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 1:05:07 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 12:55:06 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 12:45:06 PM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 12:35:05 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 12:25:04 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 12:15:04 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 12:05:03 PM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 11:55:03 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 11:45:02 AM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 11:35:01 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 11:25:01 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 9:54:05 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 9:44:04 AM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 8:30:03 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 8:20:02 AM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 8:10:02 AM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 8:00:01 AM
Error code 1 : INSUFFICIENT_TOKENS: . Will retry later. Oct 22, 2017, 7:50:00 AM
Error code 1 : TRANSACTION_ROLLBACK: . Will retry later. Oct 22, 2017, 7:40:00 AM
Dispatched run to data source with id 162523770456362 Oct 22, 2017, 7:40:00 AM
Customer Service told me to use this channel since the BigQuery team is monitoring it.
If this is a problem with quota limits, how can I increase the limit so that the Data Transfer works for these large accounts?
Many thanks!
Sorry for the long wait.
The rollout is done. When the error happens, we will now just skip the particular table.
We also added one more option, "Exclude Removed/Disabled Items" (see attachment). If you check it, we won't pull deleted items/stats for you; you should probably try this.
Thank you again for using our service!
Sorry for the late response.
It's a known issue: you probably have too many deleted Ads in the account, which blocks the download.
We are working with the AdWords team to fix this, but that will probably take 2-3 months.
A potential workaround: is it OK to skip the 'Ad' table if it cannot be downloaded? That way you would at least get the other tables.
(The workaround is not there yet, but we could implement it quickly if people can accept it.)
I am using IBM LSF and trying to get usage statistics for a certain period. I found that bhist does the job, but the short-form bhist output does not show all of the fields I need.
What I want to know is:
Are bhist's output fields customizable? The fields I need are:
<jobid>
<user>
<queue>
<job_name>
<project_name>
<job_description>
<submission_time>
<pending_time>
<run_time>
If 1 is not possible, the long-form output (bhist -l) shows everything I need, but the format is hard to manipulate. I've pasted an example of the format below.
For example, the number of lines between records is not fixed, and the word wrap in each event may break a line in the middle of a word I'm trying to scan for. How do I parse this format with sed and awk?
JobId <1531>, User <user1>, Project <default>, Command <example200>
Fri Dec 27 13:04:14: Submitted from host <hostA> to Queue <priority>, CWD <$H
OME>, Specified Hosts <hostD>;
Fri Dec 27 13:04:19: Dispatched to <hostD>;
Fri Dec 27 13:04:19: Starting (Pid 8920);
Fri Dec 27 13:04:20: Running with execution home </home/user1>, Execution CWD
</home/user1>, Execution Pid <8920>;
Fri Dec 27 13:05:49: Suspended by the user or administrator;
Fri Dec 27 13:05:56: Suspended: Waiting for re-scheduling after being resumed
by user;
Fri Dec 27 13:05:57: Running;
Fri Dec 27 13:07:52: Done successfully. The CPU time used is 28.3 seconds.
Summary of time in seconds spent in various states by Sat Dec 27 13:07:52 1997
PEND PSUSP RUN USUSP SSUSP UNKWN TOTAL
5 0 205 7 1 0 218
------------------------------------------------------------
.... repeat
I'm adding a second answer because it might help you with your problem without actually having to write your own solution (depending on the usage statistics you're after).
LSF already has a utility called bacct that computes and prints out various usage statistics about historical LSF jobs filtered by various criteria.
For example, to get summary usage statistics about jobs that were dispatched/completed/submitted between time0 and time1, you can use (respectively):
bacct -D time0,time1
bacct -C time0,time1
bacct -S time0,time1
Statistics about jobs submitted by a particular user:
bacct -u <username>
Statistics about jobs submitted to a particular queue:
bacct -q <queuename>
These options can be combined as well, so for example if you wanted statistics about jobs that were submitted and completed within a particular time window for a particular project, you can use:
bacct -S time0,time1 -C time0,time1 -P <projectname>
The output provides some summary information about all jobs that match the provided criteria like so:
$ bacct -u bobbafett -q normal
Accounting information about jobs that are:
- submitted by users bobbafett,
- accounted on all projects.
- completed normally or exited
- executed on all hosts.
- submitted to queues normal,
- accounted on all service classes.
------------------------------------------------------------------------------
SUMMARY: ( time unit: second )
Total number of done jobs: 0 Total number of exited jobs: 32
Total CPU time consumed: 46.8 Average CPU time consumed: 1.5
Maximum CPU time of a job: 9.0 Minimum CPU time of a job: 0.0
Total wait time in queues: 18680.0
Average wait time in queue: 583.8
Maximum wait time in queue: 5507.0 Minimum wait time in queue: 0.0
Average turnaround time: 11568 (seconds/job)
Maximum turnaround time: 43294 Minimum turnaround time: 40
Average hog factor of a job: 0.00 ( cpu time / turnaround time )
Maximum hog factor of a job: 0.02 Minimum hog factor of a job: 0.00
Total Run time consumed: 351504 Average Run time consumed: 10984
Maximum Run time of a job: 1844674 Minimum Run time of a job: 0
Total throughput: 0.24 (jobs/hour) during 160.32 hours
Beginning time: Nov 11 17:55 Ending time: Nov 18 10:14
This command also has a long-form output that provides bhist -l-like information about each job, which might be a bit easier to parse (although still not all that easy):
$ bacct -l -u bobbafett -q normal
Accounting information about jobs that are:
- submitted by users bobbafett,
- accounted on all projects.
- completed normally or exited
- executed on all hosts.
- submitted to queues normal,
- accounted on all service classes.
------------------------------------------------------------------------------
Job <101>, User <bobbafett>, Project <default>, Status <EXIT>, Queue <normal>,
Command <sleep 100000000>
Wed Nov 11 17:37:45: Submitted from host <endor>, CWD <$HOME>;
Wed Nov 11 17:55:05: Completed <exit>; TERM_OWNER: job killed by owner.
Accounting information about this job:
CPU_T WAIT TURNAROUND STATUS HOG_FACTOR MEM SWAP
0.00 1040 1040 exit 0.0000 0M 0M
------------------------------------------------------------------------------
...
Long-form output is pretty hard to parse. I know bjobs has an option for unformatted output (-UF) in older LSF versions, which makes it a bit easier, and the most recent version of LSF allows you to customize which columns get printed in short-form output with -o.
Unfortunately, neither of these options is available with bhist. The only real possibilities for historical information are:
1. Figure out some way to parse bhist -l -- impractical and maybe not even possible due to the inconsistent formatting, as you've discovered.
2. Write a C program to do what you want using the LSF API, which exposes the functions that bhist itself uses to parse the lsb.events file. This is the file that stores all the historical information about the LSF cluster, and it is what bhist reads to generate its output.
3. If C is not an option for you, you could try writing a script to parse the lsb.events file directly -- the format is documented in the configuration reference. This is hard, but not impossible. Here is the relevant document for LSF 9.1.3.
My personal recommendation would be #2 -- the function you're looking for is lsb_geteventrec(). You'd basically read each line in lsb.events one at a time and pull out the information you need.
How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump:
Full backup:
dump -0aLuf /mnt/bkup/backup.dump /
and then for the incremental
dump -1aLuf /mnt/bkup/backup.dump /
What happens if I dump the level 1 to a different file:
dump -1aLuf /mnt/bkup/backup1.dump /
I am trying to understand how dump keeps track of the changes. I am using an ext3 file system.
This is my /etc/dumpdates:
# cat /etc/dumpdates
/dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600
/dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600
My level 0 dump for this system was around 11 GB; when I ran a level 1 today using the same filename, the size was around 5 GB.
I think I figured out the issue. It looks like dump records the date of the previous lower-level dump in the dump file's header (it gets that date from /etc/dumpdates, which is updated because of the -u flag), so it knows when the previous level occurred; the output filename itself does not matter.
Level 0 backup
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3