In libgit2, what do the git_index_entry->flags_extended mean (and when are they set)?

I am attempting to manage a repository's index vs. its HEAD tree using libgit2 (via Objective-Git, but I'm increasingly finding myself heading down the vanilla libgit2 rabbit hole), and am wondering what exactly the flags_extended field's bitmasks on the git_index_entry struct actually mean. Additionally, when are these flags set? I've been digging through the libgit2 source, but cannot seem to find where flags_extended comes into play.
The reason I ask:
I have a simple test repository with one commit containing some simple test files. The working copy has one tracked file with a minor change and one untracked file, both of which have been staged externally (git add . on the command line). In my application, I need to "unstage" the files, so I fetch their respective git_index_entry structs. I was expecting flags_extended to have GIT_IDXENTRY_UPDATED set for the modified file and GIT_IDXENTRY_ADDED set for the previously untracked file, but in fact both flags_extended fields are empty, which is what prompted this question (the only thing set is GIT_IDXENTRY_NAMEMASK in the flags field).
I can certainly fetch the HEAD tree and compare the entries with the entries in the index, but I was hoping that libgit2 was already providing that info via the flags_extended.

I was expecting the flags_extended to have GIT_IDXENTRY_UPDATED set for the modified file and GIT_IDXENTRY_ADDED set for the previously untracked file.
No, these flags are fundamentally internal to libgit2. They are used to maintain in-memory information about index entries after the index has been loaded from disk, and to prevent and/or detect internal data races; they are not for determining the status of your repository.
If you want to compare the HEAD to the index, load the HEAD tree and then use git_diff_tree_to_index.
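A minimal sketch of that approach in vanilla libgit2 (error checking omitted; repo is assumed to be an already-opened git_repository *):

#include <git2.h>
#include <stdio.h>

/* Peel HEAD down to its tree and grab the current index. */
git_object *head_tree = NULL;
git_index *index = NULL;
git_diff *diff = NULL;
git_revparse_single(&head_tree, repo, "HEAD^{tree}");
git_repository_index(&index, repo);

/* Each delta describes one file that differs between HEAD and the index. */
git_diff_tree_to_index(&diff, repo, (git_tree *)head_tree, index, NULL);
for (size_t i = 0; i < git_diff_num_deltas(diff); i++) {
    const git_diff_delta *d = git_diff_get_delta(diff, i);
    /* d->status is GIT_DELTA_ADDED for the new file, GIT_DELTA_MODIFIED for the edited one */
    printf("%s (status %d)\n", d->new_file.path, d->status);
}

git_diff_free(diff);
git_index_free(index);
git_object_free(head_tree);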

Related

What exactly does "Shelve base revisions of files under distributed version control systems" do in IntelliJ?

This is the official description from the docs:
If this option is enabled, the base revision of files will be saved to
a shelf that will be used during a 3-way merge if applying a shelf
leads to conflicts. If it is disabled, IntelliJ IDEA will look for the
base revision in the project history, which may take a while;
moreover, the revision that the conflicting shelf was based on may be
missing (for example, if the history was changed as a result of the
rebase operation).
What does this setting do in more simple terms and does it have implications on merge conflicts as well without using shelves explicitly?
The reason for this question is mainly an issue we frequently face during merge conflicts, where old changes keep reappearing out of nowhere, deleting lines of code that are still in use.
One of the patch files in the .idea/shelf directory contains these exact changes which makes me assume it is related to what we are witnessing.
Uncommitted_changes_before_Checkout_at_28_12_2021_10_30_[Default_Changelist]

How to keep CMake generated files?

I'm using add_custom_command() to generate some files. ninja clean removes them, as it should. One of the files is intended as a default/example implementation, to be modified by the user. It is only generated if it does not already exist. I would like for ninja clean not to remove this file.
I have tried a number of things but without success:
add_custom_target(): CMake complains about the missing file unless I name it in BYPRODUCTS, but doing this also leads to removal on clean
set_source_files_properties(... GENERATED FALSE) doesn't work because CMake complains about the file missing.
set_directory_properties() failed in a similar way: "folder doesn't exist or not yet processed" (it does exist)
I previously generated the example implementation and just let the user copy it or model their code on it. This works, but isn't entirely satisfactory. Is my use-case so unlikely that CMake doesn't support it?
I am afraid your requirement (conceptually, having make create something which make clean does not remove) is rather unusual. I can think of two potential solutions/workarounds.
One, move the file's generation to CMake time. That is, create it using execute_process() instead of add_custom_command(). This may or may not be possible, based on whether the file-generation process (the current custom command) depends on the rest of the build or not.
Two, totally hide the example file's existence from CMake. That is, have the custom command also generate some other file (maybe just a timestamp file) and have its driving custom target depend on that one instead. Do not list the example file as either the custom command's dependency, output, or byproduct. That way, nothing will depend on it, and neither CMake nor Ninja should care whether it exists, so they will not complain or try to clean it up.
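A rough sketch of that second approach (the my_generator command and the file names are hypothetical, not from the question):

# The stamp file is the only thing the build system knows about.
add_custom_command(
  OUTPUT example.stamp
  COMMAND my_generator -o ${CMAKE_CURRENT_SOURCE_DIR}/example_impl.c  # hypothetical tool
  COMMAND ${CMAKE_COMMAND} -E touch example.stamp
  COMMENT "Generating example_impl.c if it does not already exist"
)
add_custom_target(generate_example ALL DEPENDS example.stamp)

ninja clean will remove example.stamp, but since example_impl.c is never listed as an output, dependency, or byproduct, it is left alone.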
If it is an example for the user, it should not be in your build folder, but in the install folder. I don't see why you would need add_custom_command or the other commands you listed.
Therefore, you have to provide install() instructions.
You can then call make install. Cleaning will not remove those and only installing again will overwrite them if necessary.
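For example (the file and destination paths here are just illustrative):

install(FILES ${CMAKE_CURRENT_BINARY_DIR}/example_impl.c
        DESTINATION share/myproject/examples)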
For those who come here a long time after the original question was asked (like me), I'll write up my solution:
The tool called in add_custom_command generates two files with identical content:
one that is saved in sources, never mentioned anywhere
and one that's marked as a byproduct, and is then depended on
So the first one is the file we wanted in the first place.
And the second one is actually used in the build process, and gets deleted on clean.
In my case, I actually want to keep the generated files in VCS so I can track changes, and this approach gives me what I need.
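In CMake terms, the shape of this is roughly as follows (my_generator and its flags are hypothetical):

add_custom_command(
  OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/example_impl.c          # build copy, cleaned as usual
  COMMAND my_generator
          --out ${CMAKE_CURRENT_BINARY_DIR}/example_impl.c   # this copy drives the build
          --out ${CMAKE_CURRENT_SOURCE_DIR}/example_impl.c   # this copy lives in sources/VCS and is never mentioned to CMake
)
add_custom_target(generate_impl ALL
  DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/example_impl.c)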

Objective-C - Finding directory size without iterating contents

I need to find the size of a directory (and its sub-directories). I can do this by iterating through the directory tree and summing up the file sizes etc. There are many examples on the internet but it's a somewhat tedious and slow process, particularly when looking at exceptionally large directory structures.
I notice that Apple's Finder application can instantly display a directory size for any given directory. This implies that the operating system is maintaining this information in real time. However, I've been unable to determine how to access this information. Does anyone know where this information is stored and if it can be retrieved by an Objective-C application?
IIRC Finder iterates too. In the old days, it used FSGetCatalogInfo (an old File Manager call) to do this quickly. I think there's a newer POSIX call that is the fastest, lowest-level API for this these days, especially if you're not interested in all the other info besides the size and really need blazing speed over easily maintainable code.
That said, if it is cached somewhere in a publicly accessible place, it is probably Spotlight. Have you checked whether the spotlight info for a folder includes its size?
PS - One important thing to remember when determining the size of a file: Mac files can have two "forks", the data fork, and the resource fork (where e.g. Finder keeps the info if you override a particular file to open with another application than the default for its file type, and custom icons assigned to files). So make sure you add up both forks' sizes, or your measurements will be off.
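For completeness, a sketch of the iterate-and-sum approach using the modern Foundation API; NSURLTotalFileAllocatedSizeKey reports on-disk usage, which should cover both forks (error handling omitted):

#import <Foundation/Foundation.h>

// Sum the on-disk size of every regular file under a directory.
static unsigned long long directorySize(NSURL *directoryURL) {
    unsigned long long total = 0;
    NSArray *keys = @[NSURLIsRegularFileKey, NSURLTotalFileAllocatedSizeKey];
    NSDirectoryEnumerator *enumerator =
        [[NSFileManager defaultManager] enumeratorAtURL:directoryURL
                              includingPropertiesForKeys:keys
                                                 options:0
                                            errorHandler:nil];
    for (NSURL *fileURL in enumerator) {
        NSNumber *isRegular = nil, *size = nil;
        [fileURL getResourceValue:&isRegular forKey:NSURLIsRegularFileKey error:NULL];
        if (![isRegular boolValue])
            continue;
        [fileURL getResourceValue:&size forKey:NSURLTotalFileAllocatedSizeKey error:NULL];
        total += size.unsignedLongLongValue;
    }
    return total;
}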

NHibernate to SELECT with FOR SHARE in PostgreSQL

Let me explain my scenario...
Say I have two tables: "Paths" and "Packages".
The Paths table consists of "PathId", "StartNode", "EndNode".
The Packages table consists of "StartNode", "DestinationNode", "Paths".
I am going to write a program which computes the path that routes a package from start to destination, using some shortest-path algorithm.
In order to make sure the path settings do not change while I compute the path, I believe I should use "SELECT * FROM Paths FOR SHARE" to obtain a share lock. That way, until the paths are computed and the transaction is committed, the path start/end nodes cannot change, and at any point the computed paths in the database are always valid.
The question is: I am writing the program in C# with NHibernate, so how do I obtain a share lock?
I have tried:
Session.QueryOver<Path>().Lock().Read.List()
but it does not do the magic.
Is Session.CreateSQLQuery() the only solution?
As far as I know, NHibernate does not currently support SELECT...FOR SHARE, but it does support FOR UPDATE and FOR UPDATE NOWAIT (see lock modes), which are of course stricter, but would work.
Since NHibernate is an open source project you may suggest this enhancement and of course implement it.
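As a hedged sketch, assuming Path is your mapped entity: the stricter lock through QueryOver, with raw SQL as the fallback for a genuine share lock:

using (var tx = session.BeginTransaction())
{
    // Emits SELECT ... FOR UPDATE on PostgreSQL: stricter than FOR SHARE, but supported.
    var paths = session.QueryOver<Path>()
                       .Lock().Upgrade
                       .List();

    // Alternatively, fall back to raw SQL for a true share lock.
    var shared = session.CreateSQLQuery("SELECT * FROM Paths FOR SHARE")
                        .AddEntity(typeof(Path))
                        .List<Path>();

    // ... compute the route here, then commit so the locks are released ...
    tx.Commit();
}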

Software configuration management tool for hundreds of binary files, many are large

Note: I've tried searching; Stack Overflow was near useless. I am not sure what kind of tool I need.
At my organization we need to keep track of the software configuration for many types of computers, including the binary installers and automation scripts. Change is infrequent, but the size of the latest version of the configuration is several gigabytes.
We are trying to use Mercurial to store changes, but it is just too slow, even without many revisions at all. I ran an hg status but killed it after it took 10 minutes without finishing.
We are looking for a way to store the current configuration as well as keeping the old configurations around just in case. I have never done anything like this before and do not know what tools are available or even suitable for such tasks. Can someone point me in the right direction or tell me how they are solving this problem? Thanks
Since hard disk space is cheap and being able to view binary differences isn't very helpful, perhaps the best option you have is to store each configuration in a new directory that is indexed somehow. Example below:
/software/configs/2009-03-15
/software/configs/2009-09-28
/software/configs/2009-09-30
Given the size of your files and the infrequent number of changes, this would allow you to pick a configuration from a given 'tag' without the overhead of revision control.
If you pack your files into a single tar file and generate a SHA-512 hash, then you can be reasonably sure that no one has tampered with your files since they were archived.
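A small, hypothetical sketch of that pack-and-hash step (Python, standard library only):

import hashlib
import tarfile

def archive_config(config_dir, archive_path):
    """Pack one configuration directory and return its SHA-512 digest."""
    with tarfile.open(archive_path, "w") as tar:
        tar.add(config_dir, arcname=".")
    sha = hashlib.sha512()
    with open(archive_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()  # store this alongside the archive for later verification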
While I don't know the specific details of how to implement this strategy in Mercurial, I have been working with git and git-fat, which sets up a general procedure that is likely feasible in Mercurial as well. Basically, the idea is that whenever you add a binary file to the repository, under the hood the repo creates a symlink to the file, which is actually stored in another location as a checksummed object.
This allows large files to be tracked by the repo, without storing the actual data inside. It requires the data to be stored in some other location (perhaps in a binary management system).
It might take some configuration to do it in Mercurial, but I think it's an elegantly simple solution.