Extract code from CGAL

I would like to use the halfedge data structure of CGAL in my project.
Since it is licensed under the LGPL, I would prefer to distribute this small piece together with my software, instead of requiring the installation of this big library on the user's system.
So my question is: is there something like Boost's bcp for CGAL? I started to manually copy the halfedge source files, but it looks fairly complicated.
Thank you in advance.

There is currently no automated way to extract only a subset of CGAL headers, or just a package including its dependencies. You can try to do this manually, though, by following all include dependencies.
Even if you do this, the biggest problem is that CGAL assertions require linking against libCGAL. You can avoid this by disabling the assertions through the CGAL_NDEBUG macro. It is also not easy to see from a CGAL distribution whether a package compiles code into libCGAL, but neither Polyhedron nor Halfedge_DS does, so you should be fine.
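For illustration, here is a minimal sketch (not from the original answer) of what code built against the extracted headers might look like, assuming CGAL_NDEBUG is defined before any CGAL header is included (e.g. via -DCGAL_NDEBUG on the compile line) so that no link against libCGAL is needed:

    // Minimal sketch: exercising the halfedge data structure via Polyhedron_3
    // with assertions disabled, so libCGAL is not required at link time.
    #define CGAL_NDEBUG   // or pass -DCGAL_NDEBUG to the compiler
    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Polyhedron_3.h>

    typedef CGAL::Simple_cartesian<double> Kernel;
    typedef CGAL::Polyhedron_3<Kernel>     Polyhedron;

    int main() {
        Polyhedron P;
        // build a tetrahedron in the underlying halfedge data structure
        P.make_tetrahedron(Kernel::Point_3(0, 0, 0),
                           Kernel::Point_3(1, 0, 0),
                           Kernel::Point_3(0, 1, 0),
                           Kernel::Point_3(0, 0, 1));
        return P.is_valid() ? 0 : 1;
    }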

Related

Is there a way to compare the source code difference between 2 PyPI packages?

I built a PyPI package and stored it on the PyPI server a few days back. Now I want to compare the source code diff between that already-built PyPI package and the code built today. Is there any way to do this?
I want to compare the already-built PyPI package with the newly built code, and only if there is a difference in the source code, create a new package and upload it to the PyPI server.
If you have only Python bytecode, you cannot get the corresponding source code back (that hypothetical transformation is called decompilation, and it is not possible in general; read e.g. about Rice's theorem). This is because any translation (such as the one done by the python program) from source code to bytecode loses some information (e.g. the names of local variables, or comments explaining the intent of the code).
Deciding equality of the behavior of two functions by static analysis of their source code (and the observable behavior of your code is what you really care about) is an undecidable problem. Learn more about the λ-calculus; it is deeply related to that question.
The source code (by definition, the preferred form of code on which developers work) is not only for computers, but mostly for fellow developers: in other words, most of its value and its meaning is a social one (and that is what free software is about). Read more about the semantics of programs.
For example, renaming a variable from i to x may convey the implicit hypothesis that the intended runtime type of that variable's value has changed from an integer to a floating-point number.
Maybe you want some kind of package manager (or some version control system, if you deal with source code, or some build automation tool, if you build and then install software out of it). Python has tools such as pip to manage packages. The scons build automation tool is written in Python, but there are many other build automation tools, GNU make being a common one (you could use it to drive the compilation from .py source files to .pyc bytecode files, and their installation). For version control, I recommend git.
PS. Your question is very unclear and smells like some XY problem.

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position independent (-fPIC) because I'll link them to a dynamic lib of my own later.
I found this answer which suggests the use of a Makefile included in the project.
I successfully used it to replace the libtensorflow_cc.so but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That will avoid modularizing the library and thus remove the dependency on libtensorflow_framework.so (see tools/bazel.rc).
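For concreteness, the invocation might look roughly like this (the target label is only a placeholder for the wrapper target described below):

    # hypothetical target label; substitute the BUILD target of your own wrapper
    bazel build --config=opt --config=monolithic //tensorflow/my_wrapper:dummy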
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow had to be compiled beforehand in order to compile this final dummy program.
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
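As a rough illustration (the path is hypothetical and depends on where your dummy target lives), the archives can be picked out of that file with something like:

    # hypothetical path; each line of the .params file is one linker argument
    grep -E '\.(a|lib)$' bazel-bin/tensorflow/my_wrapper/dummy-2.params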
I copy all of these intermediate static libraries to somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and Nsync now too). I put all of this in a build area I have prepared before.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to eventually statically link these intermediate static libraries. I automated this process for Windows/Linux/macOS and included it in the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

I built WebRTC but am uncertain how to put it into my project

So, the new WebRTC has getCaptureSession, but the CocoaPods version is old and doesn't have it. I really could use that session. So I have a few options, two of which are 1) "wait for CocoaPods" (not gonna happen), or 2) "place the library into the project".
My main problem is that, even though I somehow managed to build the libs for the simulator and the device, I do not know if pasting just libwebrtc.a would work, and even if it did, I can't find the header files that go with it.
I fear my question comes from a lack of understanding of libraries in general more than of this particular library, so I tried to make it generalized towards that. I do understand enough to add, create, and fix bugs when adding most libs.
The sample project only has one .a file (libwebrtc), and Google doesn't use the usual Xcode workflow, so I spend inordinate amounts of time trying to figure out Google's custom tooling. (Try googling how to use Gyp files - I get that they produce Xcode projects, and I can see the specs, but how do you run them?)
Just to reiterate, I have successfully built the libs, but I am uncertain how to paste them in and get the header with the function I'm after, in the file avfoundationvideocapturer.h from Google's WebRTC issue 4070 - so yes, it's in there.
Thanks.
You have to add it to the project as a file. Then include it in the Link Binary With Libraries section in the Build Phases of your target configuration.
Take a look at this answer.
For the header files, have a look at this. Header files are under src/talk/app/webrtc/objc/public/*.h or something like that.

Is it worth it to create static libraries for iOS?

There is code that I want to include in most of my projects. Things like AFNetworking, categories for CoreData and unit testing, etc.
It seems logical to include all of these in a static library, and then use it in each project. I've noticed, though, that many third-party libraries (like AFNetworking and its predecessor ASIHTTP) are included in projects by copying over all of their source files and then manually linking the necessary libraries to the project target.
This seems to me like the easiest way. It took a fair amount of time to figure out how to include an existing static library into a project. Even after I knew how, it still seems like a pain to do it for every new project. Also, the header search paths that you specify are to a local directory with the static library's files. Wouldn't it be easier, and is there a way, to copy the static library's files into the project? This is the same idea as including the class files directly like most libraries seem to do already, but it would be more organized because everything would be lumped into one library project, instead of having class files everywhere and having to include every one of them.
Static libraries feel like they should be the right way to go. Make a library that can be used with all projects that includes classes that every project will need. Makes sense. I am just conflicted because it seems like the right way to go is to leave everything out of a 'formal' library, and just copy over all of the class files instead.
I guess I am just looking for what experienced developers find to be the best option.
I would be among the first to admit that the process of referencing a static library in Xcode is not entirely intuitive. However, using a static library is the best option, without a doubt.
The main reason is maintainability: when you copy source code of a library to many places, you must remember to update all of them to the latest code when you upgrade to the next version of the library. This may be a rather error-prone process, especially when the underlying library source changes significantly (e.g. new files are added, old files are renamed, etc.)
There's a halfway solution - make an Xcode project that builds your static library from source and put that into a shared repository (i.e. a git submodule, etc.) which is included from each project's main repository.
Each of your projects would include this submodule and project. Then they get the latest source code each time they pull that submodule. If you set this up as a build dependency, it will build the static library the first time you build, and then Xcode is smart enough to just include it on each subsequent build, so you get the benefit of fast build times.
You also get the advantage of having the source right there for stepping though / debugging.
If it's in a separate Xcode project and a new version of the library adds or removes a source file, you would only need to change that shared project - all your individual projects wouldn't change at all.
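As a rough sketch (the repository URL and path are placeholders), wiring the shared library project in as a submodule would look something like:

    # placeholder URL and path; run from the root of each app's repository
    git submodule add https://example.com/SharedStaticLib.git Vendor/SharedStaticLib
    git submodule update --init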
What about using CocoaPods? This tool does exactly what you want in a declarative way: you have a file (Podfile) where you declare your dependencies, and the tool downloads all the dependencies and builds a static library that gets added to your project.
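For illustration, a minimal Podfile could look roughly like this (the pod name is just one of the libraries mentioned in the question, and the exact syntax expected varies across CocoaPods versions):

    # minimal sketch of a Podfile; run `pod install` afterwards to fetch and build the dependencies
    platform :ios, '6.0'
    pod 'AFNetworking'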
I would agree that static libraries feel like they might be the correct way to go for a number of reasons, but can also introduce some issues.
The positives would be creating an easy way to add a library to a project. Although not completely intuitive, it is rather trivial to add a static library to a project after one does it a few times. Add the files, add the search path, done. This could also be useful in certain source control situations. Also, updating a library may be easier.
I think the real problem here is for the open source community. By including, say, AFNetworking as a static library, you lose all access to the implementation files. Having that access is a great feature of including the source rather than a library: it lets you change the code as you see fit, and hopefully give back.

Linker chooses "wrong" main with Boost.Test

When using Boost.Test, there is generally no need to define a main() function, since Boost.Test provides one itself.
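For reference, the usual setup that relies on the library-supplied main() is roughly this minimal sketch, with BOOST_TEST_MODULE defined in exactly one translation unit:

    // minimal sketch: defining BOOST_TEST_MODULE lets Boost.Test supply main();
    // when linking statically, link against the unit_test_framework library
    #define BOOST_TEST_MODULE MyTests
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE(sanity_check)
    {
        BOOST_CHECK_EQUAL(2 + 2, 4);
    }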
I recently had to convert my project to use static linking of 3rd party libraries (on VS2010). Naturally, I had to link to multiple .libs so that the build would succeed, and my build ran just fine.
However, when I ran my test project, something really strange happened. It seems that one of the 3rd party .libs (libpng), required by one of my dependent libraries, contained a test file with a main() function defined within (pngtest.c, if you must know).
Since my project did not have a main() function, the linker chose that one as my "test" application. Thus, none of my tests ran.
Does anyone know how I prevent this from happening? How can I tell the linker/compiler to use the Boost.Test main()?
Answering my own question, and clarifying @Tom's answer.
Turns out that the libpng build script I was using was not the original one shipping with libpng, but one created by the OpenCV build system. The file pngtest.c was mistakenly included in the build.
The solution to the problem was to remove pngtest.c from the libpng build script.
The latest OpenCV version does not include this file anymore.
For more details, see my post to the Boost mailing list here and my OpenCV bug report here.
Adi, I had the same problem. Looks like you were already all over this one. Thanks to Google and your efforts, I was able to figure it out.
Here's some info to round out the answer:
discussion:
http://boost.2283326.n4.nabble.com/Boost-Test-Linker-chooses-wrong-main-function-td4634872.html
solution:
http://code.opencv.org/issues/2322
Basically, I just excluded the pngtest.c file from the libpng project, and recompiled OpenCV. Looks like it will be fixed in the next release of OpenCV.
Thanks!