How to set checkout strategy in libgit2?

I'm trying to pull from a repo with libgit2.
My steps are these:
I use git_remote_fetch to download the remote origin data, and it returns OK.
Then I use the git_merge API.
The problem is this: I used git_index_remove_bypath to delete a file 'aa.txt' on my local branch 'master', but did not commit it. At the same time I merge the head of the remote branch ('origin/master'); the remote head only modifies 'bb.txt'. But git_merge returns error code -13, and the error message is "1 uncommitted change would be overwritten by merge". I only deleted a file in my local branch.
However, running 'git pull' on the Git command line succeeds.
So I suspect that my strategy is wrong when I execute the checkout. How can I avoid this error?

I just deleted a file in my local branch.
If you only called git_index_remove_bypath, and you did not commit that change, then you have not deleted a file in your local branch. You have an uncommitted change.
That's why you're getting this error:
1 uncommitted change would be overwritten by merge
Commit the change, then do the merge. Or do the merge, then remove the file. But doing the merge in the state you're in is not possible because it would remove uncommitted changes.
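For illustration, here is a rough sketch of the "commit first, then merge" path in libgit2 (error handling omitted; repo, their_heads, merge_opts and checkout_opts are assumed to come from your existing fetch/merge setup, and the signature is a placeholder):
git_index *index = NULL;
git_tree *tree = NULL;
git_commit *parent = NULL;
git_reference *head = NULL;
git_signature *sig = NULL;
git_oid tree_oid, commit_oid;
/* Stage the deletion and persist the index */
git_repository_index(&index, repo);
git_index_remove_bypath(index, "aa.txt");
git_index_write(index);
/* Turn the index into a tree and commit it on top of HEAD */
git_index_write_tree(&tree_oid, index);
git_tree_lookup(&tree, repo, &tree_oid);
git_repository_head(&head, repo);
git_reference_peel((git_object **)&parent, head, GIT_OBJ_COMMIT);
git_signature_now(&sig, "Example User", "user@example.com");
git_commit_create(&commit_oid, repo, "HEAD", sig, sig, NULL,
                  "Remove aa.txt", tree, 1, (const git_commit **)&parent);
/* With the working tree clean, the merge no longer complains about uncommitted changes */
git_merge(repo, their_heads, 1, &merge_opts, &checkout_opts);
Once the merge completes, you can inspect the index for conflicts and create the merge commit as usual.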

Related

How to make the SSIS package status to failure when propagate was set to false for a Sequence container

I have an SSIS package with a for each loop > sequence container. The sequence container reads a file from the for each loop and processes its data. The requirement was not to fail the entire package when an exception happened while processing a file, but to continue processing the next file until all files from the for each loop were processed. For this, I set the Propagate variable for the sequence container to False. I also added an email step on the OnError event of the sequence container. The package runs as expected and is able to process all files even when an exception happens with one of them. But I would like the final status of my SSIS package to be Failure, since one of the files failed. How can I achieve that?
Did you try these options?
(The SSIS version in the screenshot is in Russian, but it is the sequence container.)
View -> Properties Window -> then click on your sequence container and it will show you the properties of the sequence container.
If I were you, I would first try the property "FailPackageOnFailure" - it should cover your question, if I understand it correctly.
P.S. You can also see all the properties of your project when you click on a free area in your project.
UPDATED (after comments and a clearer understanding of the task):
The idea is to set the MaximumErrorCount parameter for the sequence container as high as you want. In this case the package won't stop just because one of the files failed in the sequence container, and the next file will be processed; but the package should still fail after the sequence container finishes its work, because you haven't changed the MaximumErrorCount for the package itself.
Important: a value of zero sets the error count threshold to infinity, and the package or task never gets a Failure status.

Ontotext GraphDB Repository cannot be used for queries

I am getting an error message while trying to run a SPARQL query in a particular repository.
Error :
The currently selected repository cannot be used for queries due to an error:
Page [id=7, ref=1,private=false,deprecated=false] from pso has size of 206 != 820 which is written in the index: PageIndex#244 [OPENED] ref:3 (parent=null freePages=1 privatePages=0 deprecatedPages=0 unusedPages=0)
So I tried to recreate the repository by uploading a new RDF file, but the issue still persists. Any solution? Thanks in advance.
The error indicates an inconsistency between what is written in the index (pso.index) and the actual page (pso). Is there any chance that the binary files were modified, over-written, or partially merged? Under normal operation, you should never get this error.
The only way to hide this error is to start GraphDB with ./graphdb -Dthrow.exception.on.index.inconsistency=false. I would recommend doing this only in order to dump the repository content into an RDF file, then drop the repository and recreate it.

libgit2 - cherry pick multiple commits

I am looking for a method to cherry pick two or more commits.
My goal is to be able to cherry pick multiple commits to allow a user to review those changes before committing them, and not requiring users to commit after each cherry pick.
I've added below a code snippet that accepts a repository path followed by two commits, and tries to cherry-pick them consecutively. However, I'm not certain what options I need to set to allow two commits to be cherry-picked.
As is, the first cherry-pick works, but the second fails with
1 uncommitted change would be overwritten by merge
I had tried using the option GIT_CHECKOUT_ALLOW_CONFLICTS but was not successful. What options are needed to allow for cherry picking multiple commits?
#include <stdio.h>
#include "git2.h"

#define onError(error, errorMsg)\
    if (error){\
        const git_error* lg2err = giterr_last();\
        if (lg2err){\
            printf("%s %s\n", errorMsg, lg2err->message);\
            return 1;\
        }\
    }

int main(int argc, char* argv[])
{
    if (argc != 4) { printf("Provide repo commit1 commit2\n"); return 1; }
    printf("Repository: %s\n Commit1: %s\n Commit2: %s\n", argv[1], argv[2], argv[3]);

    int error;
    git_libgit2_init();

    git_repository *repo;
    git_oid cid1, cid2;
    git_commit *c1 = NULL;
    git_commit *c2 = NULL;

    error = git_repository_open(&repo, argv[1]);
    onError(error, "Repo open failed: ");

    git_cherrypick_options cherry_opts = GIT_CHERRYPICK_OPTIONS_INIT;

    git_oid_fromstr(&cid1, argv[2]);
    git_oid_fromstr(&cid2, argv[3]);

    error = git_commit_lookup(&c1, repo, &cid1);
    onError(error, "commit lookup failed: ");
    error = git_commit_lookup(&c2, repo, &cid2);
    onError(error, "commit2 lookup failed: ");

    error = git_cherrypick(repo, c1, &cherry_opts);
    onError(error, "cherry1 failed: ");
    error = git_cherrypick(repo, c2, &cherry_opts);
    onError(error, "cherry2 failed: ");

    return 0;
}
What's happening is that libgit2 is refusing to overwrite a file on disk that has been modified, but its contents have not actually been stored anywhere by git. This file is "precious", and git and libgit2 will take great pains to avoid overwriting it.
There's no way to overcome this because cherry-picking is not applying the differences in the commit based on your working directory contents. It's applying the differences in the commit to HEAD. That is to say that your only options would be to ignore the changes in this cherry-pick or to overwrite the changes that the previous cherry-pick introduced.
Let me give you a concrete example:
Suppose that you have some file at commit 1:
one
two
three
four
five
And you have some commit based on 1 (let's call it 2), that changes the file to be:
one
2
three
four
five
And you have still another commit in a different branch. It's also based on 1 (let's call it 2'). It changes the file to be:
one
two
three
4
five
What happens if you are on commit 1 and cherry-pick both 2 and 2' without committing? Logically, you might expect it to do a merge! But it will not.
If you're on commit 1 and you git_cherrypick for commit 2 in libgit2 (or git cherry-pick --no-commit on the command line) for the first commit, it will read the file out of HEAD, and apply the changes for commit 2. This is a trivial example, so the contents are, literally, matching the contents of commit 2. That file will be placed on disk.
Now, if you do nothing else - you don't commit this - then you're still on commit 1. And if you again do a git_cherrypick (this time for commit 2') then libgit2 will read the file out of HEAD and apply the changes for commit 2'. And again, in this trivial example, applying the changes in 2' to the file in 1 gives you the contents of the file in commit 2'.
Because what it won't do is read the file out of the working directory.
So now when it goes to try to write those results to the working directory, there's a checkout conflict. Because the contents of the file on disk don't match the value of the file in HEAD or in what we're trying to checkout. So you're blocked.
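If you truly want the second, "overwrite" option - letting the next cherry-pick clobber the uncommitted results of the previous one - the cherry-pick options embed checkout options where a forcing strategy can be set. A sketch, reusing the variables from your snippet, and usually not what you want, since it throws away the first pick's changes:
/* Force the checkout so this cherry-pick overwrites whatever is in the
 * working directory, including the previous pick's uncommitted results. */
cherry_opts.checkout_opts.checkout_strategy = GIT_CHECKOUT_FORCE;
error = git_cherrypick(repo, c2, &cherry_opts);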
What you probably want to do is create a commit at this stage. I know you said that you wanted to avoid "requiring users to commit after each cherry pick". But there's a difference between creating a commit object in libgit2, which is lightweight and can be discarded easily (it will be garbage collected eventually), and doing the moral equivalent of running git commit, which updates a branch pointer.
If you merely create a commit and write it into the object database - without switching to it or checking it out - then you can reuse that data for other steps in your work without ever giving the user the appearance of having done a commit. It's entirely in memory (and a little bit in the object database) without ever hitting the working directory.
What I'd encourage you to do is to cherry-pick each commit that you want into an index, which does its work in-memory and doesn't touch the disk. When you're happy with the results, you can create a commit object. You'll need to use the git_cherrypick_commit API instead of git_cherrypick to produce an index, then turn that into a tree for the commit. For example:
git_reference *head;
git_signature *signature;
git_commit *base1, *result1;
git_index *idx1, *idx2;
git_oid tree_id1, result_id1;

/* Look up the HEAD reference and peel it to the commit it points at */
git_repository_head(&head, repo);
git_reference_peel((git_object **)&base1, head, GIT_OBJ_COMMIT);

/* Pick the first cherry, getting back an index */
git_cherrypick_commit(&idx1, repo, c1, base1, 0, &cherry_opts.merge_opts);

/* Write that index into a tree */
git_index_write_tree(&tree_id1, idx1);

/* And create a commit object for that tree */
git_signature_now(&signature, "My Cherry-Picking System", "foo@example.com");
git_commit_create_from_ids(&result_id1,
                           repo,
                           NULL, /* don't update a reference */
                           signature,
                           signature,
                           NULL,
                           "Transient commit that will be GC'd eventually.",
                           &tree_id1,
                           1,
                           &cid1);
git_commit_lookup(&result1, repo, &result_id1);

/* Now, you can pick the _second_ cherry with the commit you just created as a base... */
git_cherrypick_commit(&idx2, repo, c2, result1, 0, &cherry_opts.merge_opts);
Eventually you'll get your terminal commit and you can just check it out - and I mean that in the libgit2 git_checkout notion of checking out, which just puts those contents in your working directory. Still, don't update any branch pointers. This will give the result where files are only modified in the working directory (and index), but the user has not committed anything and their HEAD has not moved.
git_checkout_tree(repo, (git_object *)final_result_commit, NULL);
(You can pass a git_commit * to git_checkout_tree. It knows what to do.)
I could have made this a lot easier for you by giving you a git_cherrypick_tree API. This would let you cut out the middleman of creating a commit that you don't need. But I didn't think that anybody would want to do this. (Sorry!)
The reason that I didn't think that anybody would want to do this is because what you're describing is more accurately called rebase. Rebase is a sequenced set of patch application or cherry-pick steps. (Interactive rebase is a bit more involved, so let's ignore that for now.)
libgit2 has git_rebase machinery that can work entirely in memory, saving you some of the bookkeeping involved in converting indexes to trees and writing commits to disk. It can be invoked to work completely in-memory (see rebase_commit_inmemory), which may help you here.
In either case, the end result is largely the same, a series of commits that were written into the object database without the user ever knowing about it, and updating their working directory to match at the end.
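For illustration, here is a rough sketch of the in-memory rebase path (assuming repo is open and branch, upstream and onto are git_annotated_commit pointers you have already looked up; error checks omitted):
git_rebase_options rebase_opts = GIT_REBASE_OPTIONS_INIT;
git_rebase *rebase = NULL;
git_rebase_operation *op = NULL;
git_signature *sig = NULL;
git_oid picked_id;

rebase_opts.inmemory = 1;   /* never touches the working directory */

git_signature_now(&sig, "My Cherry-Picking System", "foo@example.com");
git_rebase_init(&rebase, repo, branch, upstream, onto, &rebase_opts);

/* Apply each patch and write the resulting commit into the object database */
while (git_rebase_next(&op, rebase) != GIT_ITEROVER)
    git_rebase_commit(&picked_id, rebase, NULL, sig, NULL, NULL);

git_rebase_finish(rebase, sig);
git_rebase_free(rebase);
/* picked_id now names the last rebased commit; check out its tree with
 * git_checkout_tree() as above, without moving any branch pointer. */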

git revert : Unable to undo an individual commit even in a simple case

In order to try and understand git revert, I made a series of 4 simple commits -- A, B, C, D -- to a text file, foo.txt, with the intention of undoing only commit B later and leaving commits A, C, D intact.
So, in each commit, I added one new line to the file, emulating either a feature added or a bug introduced.
After commit A, contents of foo.txt:
Feature A
After commit B, contents of foo.txt: (Here, I introduce a bug that I'll later try to undo/revert.)
Feature A
Bug
After commit C, contents of foo.txt:
Feature A
Bug
Feature C
After commit D, contents of foo.txt:
Feature A
Bug
Feature C
Feature D
Now, to undo the effects of Commit B (which introduced the bug), I did:
git revert master^^
What I expected to happen was, a new Commit E that removed the Bug line from the file, leaving the file contents as:
Feature A
Feature C
Feature D
However, I got the error:
error: could not revert bb58ed3... Bug introduced
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
with the contents of the file following the unsuccessful git revert being:
Feature A
<<<<<<< HEAD
Bug
Feature C
Feature D
=======
>>>>>>> parent of bb58ed3... Bug introduced
(bb58ed3 is the hash of commit B, and 'Bug introduced' is that commit's message.)
Question:
What is going on here?
If even such a simple, one-line commit cannot be reverted/undone automatically and requires manual resolution from me, then how could I revert a much more complex commit whose original developer may even be someone else?
Is there a special set of cases where git revert is better suited?
git sees each commit as a changelist (I simplify things here) and tries to "unapply" that when you call git revert. Each changelist also includes some context to ensure that the change makes sense. For example, if the change we want to make is "add return after line 10", it's more likely to break things than "add return after line 10, if lines 7-9 contain X, Y, and Z". So, we can describe your second commit as (again, simplifying this a little here):
Assuming that the first line of the file is Feature A.
Assuming that there is no second line.
Make the second line contain Bug.
After you've added a few more lines, the context of the Bug line has changed significantly, so git revert is not sure whether it can simply remove the line. Maybe the newly added lines actually fixed the bug. So it asks you to explicitly resolve the conflict of contexts.
As for your questions 2-3: yes, git revert is usable in cases where you're reverting a piece of a file that has not been changed since. For example, the bug was introduced in the foo function, but only the bar function (located ten lines below) has been modified since then. In that case, git revert is likely to revert the change automatically, because it sees that the context is unchanged.
UPD: here is an example of why context matters even if you're trying to revert your own code:
Commit A (mind the mistype):
int some_vlue = 0;
read_int_into(some_vlue);
some_vlue = some_vlue++;
Commit B (bug introduced):
int some_vlue = 0;
some_vlue = 123;
some_vlue = some_vlue++;
Commit C (name fixed):
int some_value = 0;
some_value = 123;
some_value = some_value++;
Now, in order to revert commit B, one has to have some context, as we cannot simply replace some_value = 123 with the older line read_int_into(some_vlue) - it would be a compilation error.

Avoid hanging while compiling Oracle package

We have a situation where compiling a package takes forever!
If we compile the package under a new name, then it works!
From what I understand, compiling hangs because of locks on the package.
Something like this might help identify the problem:
SELECT s.sid,
       l.lock_type,
       l.mode_held,
       l.mode_requested,
       l.lock_id1
  FROM dba_lock_internal l,
       v$session s
 WHERE s.sid = l.session_id
   AND UPPER(l.lock_id1) LIKE '%PROCEDURE_NAME%'
   AND l.lock_type = 'Body Definition Lock';
And also this:
SELECT x.sid
  FROM v$session x, v$sqltext y
 WHERE x.sql_address = y.address
   AND y.sql_text LIKE '%PROCEDURE_NAME%';
Is it only the 'Body Definition Lock' that prevents compiling?
Are there any other lock types that prevent compiling?
How can I avoid the locks and do the compiling? Only by killing the sessions, or is there something else?
You might want to look into Edition-Based Redefinition, which lets you create a new edition, compile new versions of the package without being blocked by other sessions currently using it, and enable the new edition later on.
Basically, if someone or something else (e.g. another scheduled job) is executing the package, then you won't be able to perform the recompile. To get around this, you need to identify the locking session and kill it. Killing the session is the only option we have; dbms_lock is only useful for locks created by dbms_lock. You cannot just "unlock" an object - the lock is there for an extremely relevant reason.
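For example (a sketch; the actual sid and serial# come from v$session for the blocking session identified by the queries above, and :blocking_sid is a placeholder bind variable):
SELECT s.sid, s.serial#
  FROM v$session s
 WHERE s.sid = :blocking_sid;

ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;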
Another lock you may come across is a dependency lock. Consider:
Procedure-1 from Package A contains a call to Procedure-2 from Package B.
Procedure-1 from Package A is running.
Then you may get a lock while compiling Package B.