Deleted a Perforce Checkpoint - backup

I made my first Perforce checkpoint and deleted the folder by accident. What should I do? Will creating a new checkpoint create a gap in the "chronology"? Can I make a checkpoint that is not reliant on previous checkpoints? Apologies for any ambiguity; I am new to Perforce server management. Thanks

Just take a new checkpoint and you'll be fine. Each checkpoint stands on its own as a snapshot of the database at the point when it was taken. The journal files fill in the time in between checkpoints.
To recover the database from a disaster, all you ever need is the last checkpoint plus the current journal file. If you've lost the last checkpoint somehow but you have an older checkpoint plus the intervening journal files, you can use the journals to catch up:
checkpoint.n + journal.n = checkpoint.n+1
Hence once you take a new checkpoint, everything before it becomes redundant from a recovery perspective.
When you create checkpoint.n, the current journal is rotated and becomes journal.n-1, filling in the operations between checkpoint.n-1 and checkpoint.n. The current journal starts over from scratch recording everything that's happened since checkpoint.n.
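For example, a minimal sketch of the commands involved, assuming /p4/root is your server root and 42 is the number of the checkpoint you just took (both are placeholders):
# Take a fresh checkpoint; this also rotates the current journal
p4d -r /p4/root -jc
# Typical recovery: restore the last checkpoint, then replay the current journal
p4d -r /p4/root -jr checkpoint.42
p4d -r /p4/root -jr journal
# If checkpoint.42 were lost, an older checkpoint plus the intervening journals catches up
p4d -r /p4/root -jr checkpoint.41
p4d -r /p4/root -jr journal.41
p4d -r /p4/root -jr journal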

Related

How do I create a backup for a table which will be used for a full-refresh?

I have an incremental model A where each day is calculated using the previous day's value. Running a full-refresh means that this table needs to be recalculated from the beginning of time, which is very inefficient and takes too long.
I have tried to create a backup table which takes a copy of the table's values each month, and to have model A refer to the backup table during a full-refresh, so that only the values after the backup need to be recalculated and I can arrive at today's value much quicker. However, this gives me an error:
Encountered an error:
Found a cycle: model.model_A --> model.backup --> model.model_A
This is because the backup refers to the model to get the value each month, while model A also refers to the backup to build off in the case of a full-refresh.
Is there a way around this problem, avoiding rebuilding the entire model from the beginning of time every time I do a full-refresh?
You're right that you can't have 'circular loops' or cycles in your build process.
If there is an application that calculates the values for each day, you could perhaps store the new values back in the same source table(s), adding an 'updated_at' column or something similar. If I understand your use case correctly, you could then use this value whenever you need to query only the last day's information.

Explanation of Intellij undo.documentUndoLimit and undo.globalUndoLimit

What do these settings actually do?
What values should a pro use for these, and what impact do they have? For instance, with refactoring undo, it would be GREAT if we could throw the "X file has already changed so you can not undo" message in the garbage using some setting combination.
Based on this question, documentUndoLimit is the maximum undo level for each document (actually the current document), and globalUndoLimit is the maximum undo level for the IDE on the current project.
For example, when you change the code of a file and then decide to undo, the documentUndoLimit value is the number of levels you can go back. But when you add, delete, or rename files, and so on, the globalUndoLimit value is the number of levels you can go back.

How to create a Priority queue schedule in Autosys?

Technologies available: Autosys, Informatica, Unix scripting, Database (available via informatica)
How our batch currently works is with filewatchers looking for a file called "control.txt", which gets deleted when a feed starts processing. It gets recreated once the feed completes, which allows one of the waiting "control" Autosys jobs to pick up the control file and begin processing data feeds one by one.
However, the system has grown large, some feeds have become more important than others, and we're looking at ways to improve our scheduler to prioritize some feeds over others.
With the current design, where a single file decides when the next feed runs, this can't be done, and I haven't been able to come up with a simple solution to make it happen.
Example:
1. Feed A is processing
2. Feed B, Feed C, Feed X, Feed F come in while Feed A is processing
3. Need to ensure that Feed B is processed next, even though C, X, F are ready.
4. C, X, F have a lower priority than A and B, but have the same priority as each other and can be processed in any order
A very interesting question. One thing that I can think of is to have an extra Autosys job with a shell script that copies the files in a certain order. Like:
Create a staging folder, e.g. StageFolder
Let's call your current Autosys input folder "the InputFolder"
Have Autosys monitor the StageFolder and run OrderedFileCopyScript.sh every minute for any file found
OrderedFileCopyScript.sh should copy one file from StageFolder to InputFolder, in the desired order, and only if InputFolder is empty (a rough sketch of the script follows below)
I hope I made myself clear.
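A rough sketch of what OrderedFileCopyScript.sh could look like (the folder paths and the "B feeds first" rule are assumptions to adapt to your naming):
#!/bin/sh
# OrderedFileCopyScript.sh: move the highest-priority staged feed into the
# input folder, but only when the input folder is empty.
STAGE=/feeds/StageFolder
INPUT=/feeds/InputFolder

# Do nothing while a feed is already waiting or being processed
if [ -n "$(ls -A "$INPUT" 2>/dev/null)" ]; then
    exit 0
fi

# Pick the next file in the desired order; here hypothetical B_* feeds outrank the rest
next=$(ls "$STAGE"/B_* 2>/dev/null | head -n 1)
if [ -z "$next" ]; then
    next=$(ls "$STAGE"/* 2>/dev/null | head -n 1)
fi

if [ -n "$next" ]; then
    mv "$next" "$INPUT"/
fi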
I oppose the use of Autosys for this requirement! Wrong tool!
I don't know all the details, but I'm assuming an application with the usual reference tables.
In this case you should make use of a feed reference table that includes relative priorities.
I would suggest creating (or reusing) a table that is loaded by the successor job of the file watcher:
1) The table contains the unprocessed files with their corresponding priorities; use this table to process the files based on priority.
2) Remove/archive the entries once done.
3) Have another job run against this table like a daemon, with start_times/run_window.
This gives the flexibility to deal with change in priorities and keeps overall design simple.
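For illustration only, here is a sketch of that idea using sqlite3 as a stand-in for the real database; in practice the table would live in your existing database and be loaded by the file watcher's successor job, and the table and column names below are assumptions:
#!/bin/sh
# Pick the highest-priority unprocessed feed from the priority table
DB=/feeds/feed_queue.db

next=$(sqlite3 "$DB" "SELECT file_name FROM feed_queue WHERE processed = 0 ORDER BY priority, arrived_at LIMIT 1;")

if [ -n "$next" ]; then
    # Hand the file to the existing control-file mechanism, then mark it done
    cp "/feeds/StageFolder/$next" /feeds/InputFolder/
    sqlite3 "$DB" "UPDATE feed_queue SET processed = 1 WHERE file_name = '$next';"
fi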

Can a mft_reference correspond to two different files at different time?

I am working on parsing USN Journal files now, and what I know is that in a USN Journal log entry there is an mft_reference field, which references the corresponding FileRecord in the MFT table.
After a period of time, the USN Journal files may accumulate quite a lot of file change records, such as file additions, modifications, and deletions.
If I get an mft_reference number (a 64-bit integer) mft_refer_1 at the very beginning of the USN Journal file, and another mft_reference number mft_refer_2 at the end of the USN Journal file, and they are equal in value (mft_refer_1 == mft_refer_2), can I say the two journal records refer to the same file? What I am not quite sure about is whether a later added FileRecord can take the position of a previously deleted FileRecord.
Thank you in advance!
I figured this out by experimenting with the "fsutil usn" tool.
First we should know how mft_refer is composed:
0xAAAABBBBBBBBBBBB, where AAAA stands for the update (sequence) number, and BBBBBBBBBBBB stands for the file record index into the MFT table.
First I created a text document named "daniel.txt" and found that its mft_refer is 0x00050000000c6c3f.
Then I deleted it to the Recycle Bin; its name changed to something like "$R2QW90X.txt", but its mft_refer was still 0x00050000000c6c3f.
Next I deleted it completely from the Recycle Bin and created another document also named "daniel.txt"; the new document's mft_refer is 0x00040000000c6c48.
Then I created several other temporary files, and one of them occupied the 0x00000000000c6c3f-th file record with an updated mft_refer of 0x00060000000c6c3f.
So my conclusion is that file record space is very precious in the MFT: if a previous file has been completely deleted, its file record slot will be reclaimed for a newly created file, but the "update number" field in mft_refer is changed.
For the detailed experiment process, see here
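To make that layout concrete, here is a small bash sketch that splits a 64-bit mft_reference into its two parts (using the value from the first experiment above):
# High 16 bits = update/sequence number, low 48 bits = MFT file record index
ref=0x00050000000c6c3f
seq=$(( (ref >> 48) & 0xFFFF ))
idx=$(( ref & 0xFFFFFFFFFFFF ))
printf 'update number = %d, file record index = %d (0x%x)\n' "$seq" "$idx" "$idx"
# prints: update number = 5, file record index = 814143 (0xc6c3f)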

How to backup tcsh history periodically to a single file in chronological manner?

I use tcsh at work - one of the features I use extensively is command-line history completion at the shell prompt. Currently, I've limited the size of my history file to 2000 (as I don't want to slow down the shell too much). However, at times I need a command I know I used a month or two back, but by now it has been erased. So I want a system wherein:
My history buffer stores 2000 lines only
Instead of older commands getting erased, they should be saved into a "master" history file, ordered chronologically, i.e. if two shells were opened, the commands entered should be sorted by datestamp (not by the order in which the shells were closed)!
It would be perfect if this master history file could be auto-backed up, say on a per-week basis.
I'm sure many avid shell users have faced a situation like this - I'm hoping to get the answer from one such user!!
2000 is pretty low. You could raise that a fair amount without suffering too much.
Next you probably want to store the history on logout, since this is when new commands are added to the .history file.
Create a file called .logout in your $HOME (for bash users, this file is .bash_logout). In this, copy the contents of the history to a permanent store. For example:
cat $HOME/.history >> $HOME/.ancient_history
This will append the history to a file ".ancient_history". For bash users, the file to copy is called .bash_history.
Then create a cron job that creates a backup of this every now and again. For starters, here is one that moves the file to a filename with a date stamp at 5 minutes past midnight every day.
5 0 * * * mv $HOME/.ancient_history $HOME/.ancient_history_`date +%s`
There are probably more things you could do with this, but this is enough to get started. It's a pretty good idea that I hadn't thought of doing before either :-)
I never quite thought of doing this, but the simplest way would be to write a cron job that appends the history file to another file. The problem with this is that you would get duplicates unless you wrote the cron job to clear the history file after it did the dump.
History is stored (as far as I am aware) by line number only, so the numbers would repeat for each dump, but you could add a marker line with the date of the dump.
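A minimal sketch of such a cron-driven dump, assuming the history file lives at $HOME/.history (the paths and the marker format are just illustrative):
#!/bin/sh
# Append the current history to a master file with a dated marker line,
# then truncate the history so the next dump only contains new commands
echo "# ---- dumped $(date) ----" >> "$HOME/.ancient_history"
cat "$HOME/.history" >> "$HOME/.ancient_history"
: > "$HOME/.history"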