Get the current directory status information? - libgit2

How can I get the status of a single directory, rather than the whole repository?
I have looked at the following:
git_status_list_new, which gets the status of the whole repository, but I care only about files located in a single directory.
git_status_file, which gets the status of a single file.
Is there a way that I can get the status of a single directory using libgit2?

git_status_list_new accepts a git_status_options struct, which contains a pathspec member that controls which files will be included in the list. You can use that to limit the returned statuses to files in a single directory.
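For example, here is an untested sketch in C (the directory name "src" and the minimal error handling are assumptions, not from the question). The pathspec is a git_strarray of fnmatch-style patterns, so a single "src/*" entry restricts the status walk to that directory:

#include <git2.h>
#include <stdio.h>

int main(void)
{
    git_libgit2_init();

    git_repository *repo = NULL;
    if (git_repository_open(&repo, ".") < 0)
        return 1;

    /* Restrict the status list to one directory via the pathspec.
       Entries are matched fnmatch-style by default. */
    char *paths[] = { "src/*" };
    git_status_options opts = GIT_STATUS_OPTIONS_INIT;
    opts.show  = GIT_STATUS_SHOW_INDEX_AND_WORKDIR;
    opts.flags = GIT_STATUS_OPT_INCLUDE_UNTRACKED;
    opts.pathspec.strings = paths;
    opts.pathspec.count   = 1;

    git_status_list *statuses = NULL;
    if (git_status_list_new(&statuses, repo, &opts) == 0) {
        size_t n = git_status_list_entrycount(statuses);
        for (size_t i = 0; i < n; i++) {
            const git_status_entry *e = git_status_byindex(statuses, i);
            const git_diff_delta *d = e->index_to_workdir ? e->index_to_workdir
                                                          : e->head_to_index;
            if (d)
                printf("%s (flags 0x%x)\n", d->new_file.path, (unsigned)e->status);
        }
        git_status_list_free(statuses);
    }

    git_repository_free(repo);
    git_libgit2_shutdown();
    return 0;
}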

Related

How to read values dynamically from a file for a property in updateAttribute?

I added some custom properties in the 'UpdateAttribute' processor using the '+' button. For example, I declared a property 'DBConnectionURL' and gave it the value 'jdbc:mysql://localhost:3306/test'. Then, in the 'DBCPConnectionPool' controller service, I simply used the value '${DBConnectionURL}' for the 'Database Connection URL' property. But I gave the value for the 'DBConnectionURL' property manually. I want a way to feed the value dynamically from a file, so that I just need to change the value in the file and the value of 'DBConnectionURL' changes based on what the file contains. Is there a way to do it?
Rishab,
You have to use the NiFi variable registry.
In conf/nifi.properties, you can add the configuration below to update a value in your data flow dynamically.
nifi.variable.registry.properties=./dynamic.properties
You declare your variables in that dynamic.properties file, which should be present in the conf directory.
For example, if dynamic.properties contains the value below:
DBCPURL= jdbc://<host>:<port>
then you can use it in your data flow as ${DBCPURL}.
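Applied to the question's own property, dynamic.properties would contain (value taken from the question):

DBConnectionURL=jdbc:mysql://localhost:3306/test

and the DBCPConnectionPool's 'Database Connection URL' property would stay as ${DBConnectionURL}; changing the line in the file (plus the restart mentioned below) updates the flow.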
Note: You must restart the NiFi service whenever you change configuration in conf/nifi.properties; otherwise your changes will not take effect in the data flow.
Feel free to accept this as the answer if it worked for you.

boost build - sources with the same name

src
|--Manager.cpp
|--Specializations
| |--Manager.cpp
Building this, Boost.Build tries to create
/bin/...
|--Manager.o
|--Manager.o
but fails. How can this be resolved automatically? I read the FAQ item, but I don't like that solution, as I have to fix things manually whenever I have the same class name in different namespaces. Would it be possible to make Boost.Build automatically prefix object file names with the directory?
/bin/...
|--Manager.o
|--Specializations.Manager.o
Or duplicate the source directory tree?
/bin/...
|--Manager.o
|--Specializations
| |--Manager.o
This behavior was changed a long time ago and should now just work: Boost.Build mirrors the source structure, i.e. you get both bin/Manager.o and bin/Specializations/Manager.o.
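For example, a minimal Jamfile sketch (the target name app is an assumption) can simply list both files:

# Boost.Build mirrors the source tree under bin/, so these two
# objects no longer collide.
exe app : Manager.cpp Specializations/Manager.cpp ;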

MsTest, DataSourceAttribute - how to get it working with a runtime generated file?

For one of my tests I need to run a data-driven test with a configuration that is generated via reflection in the ClassInitialize method. I have tried everything, but I just cannot get the data source set up properly.
The test takes a list of classes in a CSV file (one line per class) and then tests that the mappings to the database work (i.e. it tries to get one item from the database for every entity, which throws an exception when the table structure does not match).
The test method is:
[DataSource(
    "Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\EntityMappingsTests.Types.csv",
    "EntityMappingsTests.Types#csv",
    DataAccessMethod.Sequential)]
[TestMethod()]
public void TestMappings() {
    // ...
}
Obviously the file is EntityMappingsTests.Types.csv. It should be in the DataDirectory.
Now, in the Initialize method (marked with ClassInitialize) I put that together and then try to write it.
WHERE should I write it to? WHERE IS THE DataDirectory?
I tried:
File.WriteAllText(context.TestDeploymentDir + "\\EntityMappingsTests.Types.csv", types.ToString());
File.WriteAllText("EntityMappingsTests.Types.csv", types.ToString());
Both result in "the unit test adapter failed to connect to the data source or read the data". More exactly:
Error details: The Microsoft Jet database engine could not find the object 'EntityMappingsTests.Types.csv'. Make sure the object exists and that you spell its name and the path name correctly.
So where should I put that file?
I also tried just writing it to the current directory and taking out the DataDirectory part - same result. Sadly, there is limited debugging support here.
Please use the Process Monitor tool from technet.microsoft.com/en-us/sysinternals/bb896645. Put a filter on MSTest.exe or the associated qtagent32.exe, find out which locations it is trying to load from and at what point in the test loading process, and then provide an update with those details here.
After you add the CSV file to your VS project, you need to open its properties and set "Copy To Output Directory" to "Copy Always". DataDirectory defaults to the location of the compiled executable, which runs from the output directory, so the file will be found there.
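If the file really has to be generated at run time, here is a minimal sketch of the ClassInitialize write under the assumption above that DataDirectory resolves to the output directory (MappingTests, GetEntityTypes, the "TypeName" header, and the System.IO/System.Linq usings are placeholders and assumptions, not confirmed by the question):

[ClassInitialize]
public static void Initialize(TestContext context)
{
    // Write the generated CSV next to the test assembly, which is
    // where |DataDirectory| points when nothing else is deployed.
    // The first line is the column header the data source reads.
    string dir = Path.GetDirectoryName(typeof(MappingTests).Assembly.Location);
    var lines = new[] { "TypeName" }
        .Concat(GetEntityTypes().Select(t => t.FullName));
    File.WriteAllLines(Path.Combine(dir, "EntityMappingsTests.Types.csv"), lines);
}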

SSIS - Skip Missing Files

I have a SSIS 2008 package that calls about 25 other SSIS packages.
Each of those child packages loads a specific file into a table. But sometimes one or more of these input files will be missing.
How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running?
I've tried increasing the maximum error count on the parent package, the tasks in the parent package that call each child, and in the child package itself. None of that seemed to make any difference. I still get this error when I run it with a file missing:
SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Edit:
FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.
I haven't tried this, but this is how I would approach it.
Create variables for the file name and the child package name.
Use a Foreach Loop container and have it go through the location of the files, pulling the file names one at a time. Use the file name to set the child package name variable. In the container, have the task that runs the child package, with the package name set dynamically from the child package name variable.
Then it should only try to run the child packages which have appropriate files.
In the properties of the Execute Package task, you can set FailPackageOnFailure and FailParentOnFailure. I haven't worked with these, but you can probably play with them to get your desired results.
Side note: for simplicity, I'd set these settings on the parent SSIS package.
There is a MaximumErrorCount value at both the Sequence Container and the package level. If you're using this, be sure your values are in sync, because the package-level setting takes precedence.
Another option is ForcedExecutionValue.
To set this up, open the Properties pane for each container and:
1) Set ForceExecutionValue to TRUE. This will cause the container to return whatever value you put in the variable (see step 2), regardless of the outcome of its tasks.
2) Set ForcedExecutionValue to 0. This acts as a return value for that task and sets it to 0 (success; think "return 0" as in C++).
I hope that helps.
I have developed this kind of scenario before. First, plan the execution so that whenever a file arrives, its package is processed, and when it does not, that package is skipped rather than failing the whole run; ultimately the target is to process the packages for all files that do exist. Take a variable for each of the packages. In the parent package, set the variable to "Y" or "N" based on the existence of the file, using a Script Task or the connection string. Then use a precedence constraint on the variable's value to decide whether to execute the child package.
This method gave us the desired result of processing multiple files with different occurrences of source files.
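As a sketch, the existence check inside a Script Task could look like this (the variable names User::FilePath and User::FileFlag are assumptions; map them ReadOnly and ReadWrite respectively):

public void Main()
{
    // Set the flag that the precedence constraint will test.
    string path = Dts.Variables["User::FilePath"].Value.ToString();
    Dts.Variables["User::FileFlag"].Value =
        System.IO.File.Exists(path) ? "Y" : "N";
    Dts.TaskResult = (int)ScriptResults.Success;
}

The precedence constraint leading to the Execute Package task would then use an expression such as @[User::FileFlag] == "Y".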
thanks
prav

Using sub-repo with hgwebdir difficulties in mercurial

Alright, I got myself into a deadlock with Mercurial and sub-repos... Here's what happened:
I had a large Mercurial repo that I served via Apache and hgweb.cgi.
Due to the size of the repo I decided to move to sub-repositories and serve these with hgwebdir.cgi.
Using the convert tool with the filemap option I created several sub-repositories:
/main/foo
/main/bar
Then I created an entry for each sub-repository in .hgsub:
foo = foo
bar = bar
And set hgwebdir.cgi up to show $/** as the root folder.
Now when I went to my site (foo.com/hg) I saw my sub-repositories with one empty repository among them (no name, no content), which I could not download (archive location unknown):
empty_repo http://img707.imageshack.us/img707/8237/emptysubrepo.png
That was all right until I added a new sub-repository.
I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir.
The only method that currently works for me is to switch from hgwebdir to hgweb, commit .hgsubstate, and switch back to hgwebdir.
Does someone have a good setup for such a mess?
On the webserver your main and its subrepos should appear as siblings -- not with the subrepos inside main.
Main
ASCII
AlignDistribute
And the URLs in your .hgsub should look like:
ASCII = ../ASCII
AlignDistribute = ../AlignDistribute
Then you'll be able to push/pull to http://foo.com/hg/Main and when you clone it the clone/update will automatically attach and clone down the separate subrepos.
From what I've read on https://www.mercurial-scm.org/wiki/PublishingRepositories#multiple:
The keys (on the left) and the values (on the right) are both filesystem paths
The keys should be prefixes of the values and are "subtracted" from the values in order to generate the URL paths to each repository
What I'm guessing happened is that in your hgweb(dir) configuration you're specifying the same value for a collection possibly as the key, so during subtraction it ends up with a blank name and no way to get to it.
When I used [collections] to point /a/full/path = /a/full/path directly at a repo, the name ended up blank too: that folder is read as a single repo (because it is one) instead of each sub-directory being treated as an individual repo. After I removed the .hg folder, .hgsub, and everything else from the root of my collection entry, all the subfolders started showing up properly.
I originally used /path/to/my/project = /path/to/my/project in [paths], and since that is a single referenced repository, the key is subtracted from the value, once again leaving ''. Instead I used project = /path/to/my/project, and it came out as 'project'.
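In config form the difference looks like this (paths assumed):

[paths]
# key == value: the key is subtracted from the value, the URL path
# becomes '' and the repo shows up blank in the index
#/path/to/my/project = /path/to/my/project

# distinct key: the repo is served at /project
project = /path/to/my/project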
Hopefully that URL or these descriptions will get you out of your pickle!