How to obfuscate source compiled by TeamCity - MSBuild

I have set up TeamCity to compile the source code on every check-in. Now I want to obfuscate the output using Confuser. How do I trigger it automatically after every compile?

This is a bit old now, but I found a solution that may be useful for someone like me:
Step 1: write an obfuscate.crproj file that obfuscates your DLLs.
Sample for 4 DLLs:
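The original sample project file is not shown here; a sketch of what such a file might look like, assuming a ConfuserEx-style .crproj schema and placeholder DLL names:

<!-- obfuscate.crproj: hypothetical sketch (ConfuserEx-style schema, placeholder DLL names) -->
<project outputDir="Obfuscated" baseDir="." xmlns="http://confuser.codeplex.com">
  <rule pattern="true" preset="normal" />
  <module path="MyApp.Core.dll" />
  <module path="MyApp.Data.dll" />
  <module path="MyApp.Services.dll" />
  <module path="MyApp.UI.dll" />
</project>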
Step 2: add a build step in TeamCity that executes a batch file which runs Confuser with the obfuscate.crproj created in step 1.
Make sure your .crproj file, the DLLs to obfuscate, and the Confuser binaries (Confuser.Core.dll etc.) are all in the same folder (a Dependency folder in the original screenshot).
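A minimal sketch of the batch file for that build step (the console runner's name depends on the Confuser version; ConfuserEx ships Confuser.CLI.exe):

rem hypothetical batch file run by the TeamCity build step
rem assumes the .crproj, the target DLLs, and the Confuser binaries sit in this folder
Confuser.CLI.exe obfuscate.crproj
if errorlevel 1 exit /b 1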


rstan: should sampling() not run in #' @examples?

In package development, each example must run in under 5 s. However, the pair stan_model() and rstan::sampling() takes much longer than 5 s, as shown below:
Examples with CPU or elapsed time > 5s
     user system elapsed
fit  1.25   0.11   32.47
So I put \donttest{} around each rstan::sampling() call in the roxygen #' @examples comments.
Should we simply not run sampling() in #' @examples, or is there another treatment?
I had tried to create my package with rstan_package_skeleton(path = 'BayesianAAA') when you taught me about it (thank you!), but I do not understand many things about it.
Previously, rstan_package_skeleton(path = 'BayesianAAA') raised errors on my computer (but the error no longer occurs).
So I made my package, say BayesianAAA, without rstan_package_skeleton(). In my original layout I put Model_A.stan, Model_B.stan, Model_C.stan, ... in inst/extdata and refer to my Stan files as follows:
scr <- system.file("extdata", "Model_A.stan", package="BayesianAAA")
scr <- rstan::stan_model(scr)
I have several questions about rstan_package_skeleton(path = 'BayesianAAA').
1) The first question is: how do I include my existing Stan files, and how do I refer to my .stan files in rstan::stan_model()?
According to the page linked below, it says that
If we had existing .stan files to include with the package we could use the optional stan_files argument to rstan_package_skeleton to include them.
So I think I should execute something like the following (I am not sure):
`rstan_package_skeleton(path = 'BayesianAAA', stan_files = "Model_A.stan")`
But I do not know how to write the call for several Stan files, say Model_A.stan, Model_B.stan, and Model_C.stan, in my existing package made without rstan_package_skeleton(). Is the following code correct? I also do not know where the files given in stan_files end up in the new project created by rstan_package_skeleton().
`rstan_package_skeleton(path = 'BayesianAAA', stan_files = c("Model_A.stan", "Model_B.stan", "Model_C.stan"))`
Here another question arises, that is,
2) The second question: where do I execute rstan_package_skeleton(path = 'BayesianAAA', stan_files = "Model_A.stan")? I execute it from the RStudio console inside my existing package project. Is that correct? Then a new project is created and it is contained in the old existing project. What should I do?
https://cran.r-project.org/web/packages/rstantools/vignettes/minimal-rstan-package.html
3) I do not know much about the package rstanarm, but I tried to imitate it for my package; however, I cannot find any .stan file in it. Am I wrong?
I am sorry for my poor English and my lack of study about these things.
I would be grateful if you could tell me.
You generally should not write a package that calls stan_model at runtime, unless, like brms or tmbstan, you are generating a Stan program at runtime rather than writing it statically. There are dozens of packages on CRAN that provide compiled Stan programs, basically by following the build process developed for rstanarm, which is facilitated by the rstantools::rstan_package_skeleton function, the step-by-step guide, and the developer guidelines, which directly address your question.
CRAN policy permits long installation times but imposes restrictions on the time consumed by examples and unit tests that are much shorter than the time it takes to compile even a simple Stan program. Thus, it is only possible to test your package adequately if it has pre-compiled Stan programs.
Even then, it can be difficult to sample from a posterior distribution (adequately) in five seconds, so you often have to use small datasets, one chain, a small number of iterations, etc.
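For instance, a roxygen example block along those lines might look like the following sketch (the model name Model_A and the small_data object are placeholders):

#' @examples
#' \donttest{
#' # keep the example cheap: a small dataset, one chain, few iterations
#' fit <- rstan::sampling(stanmodels$Model_A, data = small_data,
#'                        chains = 1, iter = 500, refresh = 0)
#' }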
It is best to pass the names of your Stan programs (which should end in a .stan extension, not contain any other period, and use only ASCII letters, numbers, and underscores) to rstantools::rstan_package_skeleton(). If doing so from RStudio, I would call it while not in an existing project. Then:
During installation, all Stan programs will be compiled and saved in the list stanmodels, which can then be used by the R functions in the package. The rule is that the Stan program compiled from the model code in src/stan_files/foo.stan is stored as the list element stanmodels$foo.
There are dozens of R packages that have Stan programs in their src/stan_files directory (although the location of the Stan programs is going to move to inst/stan in the next rstantools release); for the most part they just followed the vignettes and did not have to do any additional steps except write more R functions.
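As a sketch (using the asker's package and model names; the data argument is hypothetical), the skeleton call and the later use of a compiled model could look like:

# create the package skeleton with the existing Stan programs
rstantools::rstan_package_skeleton(
  path = "BayesianAAA",
  stan_files = c("Model_A.stan", "Model_B.stan", "Model_C.stan")
)

# after installation, an R function in the package can use the compiled model:
fit_model_a <- function(standata, ...) {
  rstan::sampling(stanmodels$Model_A, data = standata, ...)
}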

How to create an op like conv_ops in TensorFlow?

What I'm trying to do
I'm new to C++ and Bazel, and I want to make some changes to the convolution operation in TensorFlow, so I decided that my first step is to create an op just like it.
What I have done
I copied conv_ops.cc from //tensorflow/core/kernels and changed the name of the op registered in my new_conv_ops.cc. I also changed some of the function names in the file to avoid duplication. And here is my BUILD file.
As you can see, I copied the deps attribute of conv_ops from //tensorflow/core/kernels/BUILD. Then I used "bazel build -c opt //tensorflow/core/user_ops:new_conv_ops.so" to build the new op.
What my problem is
Then I got this error.
I tried deleting bounds_check and got the same error for the next dep. Then I realized there is some problem with including header files from //tensorflow/core/kernels in //tensorflow/core/user_ops. So how can I properly create a new op exactly like conv_ops?
Adding a custom operation to TensorFlow is covered in the tutorial here. You can also look at actual code examples.
To address your specific problem, note that the tf_custom_op_library macro adds most of the necessary dependencies to your target. You can simply write the following:
tf_custom_op_library(
    name = "new_conv_ops.so",
    srcs = ["new_conv_ops.cc"],
)
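Once it builds, the library produced by that target can be loaded from Python with tf.load_op_library (the output path below is where bazel typically places it; adjust as needed):

import tensorflow as tf

# load the compiled custom-op library; the registered ops become
# snake_case functions on the returned module object
new_conv_ops = tf.load_op_library(
    'bazel-bin/tensorflow/core/user_ops/new_conv_ops.so')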

Compiling and linking to used modules in an external directory, Compaq Fortran command prompt

I've already asked a similar question, here:
Linking to modules in external directory Compaq Visual Fortran command prompt
And I thought that the first answer was correct (that is, the manual says you can simply specify the path name before the module), but after deleting the temporary files in my library folder, this approach stopped working. Trying the /include[:path] approach, here is my .bat file:
df /include:..\FORTRAN_LIB\ __constants
myIO griddata_mod myfdgen myDiff magneticField /exe:magneticField
And an error is returned saying:
__constants
myIO
griddata_mod
myfdgen
myDiff
magneticField
f90: Severe: No such file or directory
... file is '__constants'
Again, I apologize that this question is VERY specific, but it seems like it should be simple, and yet it does not work at all.
p.s. Originally, I was using:
df ..\FORTRAN_LIB\__constants ..\FORTRAN_LIB\myIO
..\FORTRAN_LIB\griddata_mod ..\FORTRAN_LIB\myfdgen
..\FORTRAN_LIB\myDiff magneticField /exe:magneticField
But, as I've said, it stopped working after I deleted the temporary files in my FORTRAN_LIB folder. Also note that these .bat files use only one line; I've broken them into several lines just for readability. I would prefer the /include[:path] option since that seems like a better solution.
Okay, so I think I figured out a workaround at the very least. I understood that /include[:dir] tells the compiler to search "dir" for include files. From the documentation it seemed that it also searches there for USEd modules, but that does not seem to be the case.
My program now looks like this:
include '..\FORTRAN_LIB\__constants.f90'
include '..\FORTRAN_LIB\computeError.f90'
include '..\FORTRAN_LIB\griddata_mod.f90'
include '..\FORTRAN_LIB\myfdgen.f90'
include '..\FORTRAN_LIB\myDiff.f90'
include '..\FORTRAN_LIB\myIO.f90'
program magneticField
use constants
use computeError_mod
use griddata_mod
use myfdgen_mod
use myDiff_mod
use myIO_mod
implicit none
...
And my DF command like this:
df magneticField /exe:magneticField
And everything seems to work fine. It would be nicer to have the /include[:dir] option work, but as long as I'm able to reach into a separate directory, I'm satisfied. If anyone can find a better solution, I'll switch the checkmark. I hope this helps anyone else who was confused like me.
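For what it's worth, here is a sketch of the /include-based flow that may work once the .mod files actually exist (untested here; the /compile_only switch and the exact file list are assumptions based on the question):

rem 1) compile the library sources so the .mod and .obj files exist in FORTRAN_LIB
cd ..\FORTRAN_LIB
df /compile_only __constants.f90 myIO.f90 griddata_mod.f90 myfdgen.f90 myDiff.f90
rem 2) back in the folder containing magneticField.f90, point /include at FORTRAN_LIB
rem    so USE can find the .mod files, and link the library objects explicitly
df /include:..\FORTRAN_LIB magneticField.f90 ..\FORTRAN_LIB\*.obj /exe:magneticField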

Boost.Build - sources with the same name

src
|--Manager.cpp
|--Specializations
| |--Manager.cpp
Building this, Boost.Build tries to create
/bin/...
|--Manager.o
|--Manager.o
but fails. How can this be resolved automatically? I read the FAQ item, but I don't like that solution, as I have to fix things manually whenever I have the same class name in a different namespace. Would it be possible to make Boost.Build automatically prefix object file names with the directory?
/bin/...
|--Manager.o
|--Specializations.Manager.o
Or duplicate the source directory tree?
/bin/...
|--Manager.o
|--Specializations
| |--Manager.o
This behavior was changed a long time ago and should now just work: Boost.Build mimics the source structure, i.e. you should get both bin/Manager.o and bin/Specializations/Manager.o.
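For reference, a minimal Jamfile along these lines (the target name is made up) now builds both sources without any manual renaming:

# both object files end up under mirrored paths,
# e.g. bin/.../Manager.o and bin/.../Specializations/Manager.o
exe myapp : Manager.cpp Specializations/Manager.cpp ;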

extra-paths not added to python path with zc.recipe.testrunner

I am trying to run tests with a version of tornado downloaded from github.com added to sys.path.
[tests]
recipe = zc.recipe.testrunner
extra-paths = ${buildout:directory}/parts/tornado/
defaults = ['--auto-color', '--auto-progress', '-v']
But when I run bin/tests I get the following error:
ImportError: No module named tornado
Am I not understanding how to use extra-paths?
Martin
Have you tried looking into the generated bin/tests script to see whether it contains your path? That will tell you definitively whether your buildout.cfg is correct, or whether the problem is elsewhere, because your configuration looks OK.
If you happen to regularly include various branches from git/mercurial or elsewhere in a buildout, you might be interested in mr.developer. mr.developer can download a package and add it to develop =, so you won't need to set extra-paths in every section.
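A minimal sketch of that setup (the repository URL is only an example):

[buildout]
extensions = mr.developer
sources = sources
auto-checkout = tornado

[sources]
tornado = git https://github.com/tornadoweb/tornado.git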