Cocoa ConnectionKit Framework Dependencies

I'd like to use ConnectionKit in a project, but haven't yet been able to compile the framework.
I haven't been able to find a definitive list of the external projects that ConnectionKit depends on. I've tried including the projects that seemed most likely, but nothing has worked so far.
Does anyone know exactly what projects / libraries ConnectionKit requires?

To take my comment and turn it into an answer: it appears that ConnectionKit already includes many of its dependencies by default. However, there are two Git submodules that you're going to have to install for this framework to compile: "libssh2_sftp-Cocoa-wrapper" and "DAVKit".
The easiest way to install them is to cd to the framework directory once you've cloned it, and to run git submodule update --init libssh2_sftp-Cocoa-wrapper DAVKit.
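For reference, the whole sequence might look like this (the clone URL below is a placeholder for wherever you obtained ConnectionKit):
$ git clone https://github.com/example/ConnectionKit.git    # placeholder URL
$ cd ConnectionKit
$ git submodule update --init libssh2_sftp-Cocoa-wrapper DAVKit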

Is it possible to "stuff" one ROS package into another ROS package?

I am working on a ROS package to be deployed on our lab robots. A feature in my package requires a third-party ROS package. I don't think this package has been released yet; at least I couldn't find it on the ROS wiki documentation site. The dependency is called ros_msg_parser, used for subscribing to topics without knowing their message type beforehand. Here is the link to the repo: https://github.com/facontidavide/ros_msg_parser
I should mention that we use Ubuntu 16.04 on all our devices, and we program with ROS and C++.
My intention is to deploy my own ROS package to the robot without worrying about whether the ros_msg_parser package is installed on the device or not.
I know a couple of ways to do it:
Use a .so library file. (We don't think this is the ideal approach for us, since the .so library would be a black box for other colleagues in the lab in the future, with no way to know its version and so on.)
Release the ros_msg_parser repo and use it as a package from the ROS ecosystem, such as std_msgs.
And lastly (not what we want), we could build/install the ros_msg_parser package on all our devices.
I have also looked into ExternalProject_Add, to build/install ros_msg_parser as a third-party library. Then I realized that my dependency is a ROS package, not something that follows the standard configure && build && install workflow. Correct me if I am wrong.
I have the desired package working now, by building the ros_msg_parser package together with mine in my workspace with catkin_make.
I am just wondering whether anyone can suggest an approach, or point me to something I can research on my own, to accomplish this goal.
Thanks in advance.
Fortunately, I got some help from my team to solve this problem. It is simpler than I imagined.
Here are the steps we took to implement it:
git clone the ROS package source and copy only the source files into a folder called your_third_party_folder/, parallel to your_main_work_space/src. Remember to remove the clone's git history (the .git directory and so on); you only need the source files, otherwise your main workspace won't play well with your own repo: the resulting dirty-repo state will prevent you from pushing the third-party project as part of your repo. There may be other ways to solve this, but it is simpler to just copy the source files to a folder where you want them. (A rough shell sketch of this step follows the list.)
Work on your two CMakeLists.txt files; make sure to include the headers, link the libraries, and set up the targets needed for catkin_make to pass.
And don't forget to add add_subdirectory(YOUR_THIRD_PARTY_PACKAGE) to your main workspace's CMakeLists.txt file.
Note that it took me quite some time to get the build working, but in the end the third-party project is built in-tree, with no .so file and no local library installation.
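As a rough sketch of the first step (all paths here are illustrative; adjust them to your own layout):
$ git clone https://github.com/facontidavide/ros_msg_parser.git /tmp/ros_msg_parser
$ rm -rf /tmp/ros_msg_parser/.git                                  # drop the clone's history, keep only the sources
$ mkdir -p ~/your_main_work_space/your_third_party_folder
$ cp -r /tmp/ros_msg_parser ~/your_main_work_space/your_third_party_folder/
# then reference it from your main CMakeLists.txt via add_subdirectory(...)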

git submodule vs npm package?

I'm using git submodules to build and share components between projects. The project is not in production yet, so at this point submodules are serving us well.
But I'm concerned about maintenance and deployment; would it be a good idea to turn it into an npm package?
An npm package will allow fragmentation across different package versions. On the other hand, git submodules have a bit of a learning curve, and the tooling is really not that good. With git submodules, you have all the source in one folder.
If it's at all possible, I'd recommend using a plain monorepo for all projects. You may need to create build-time variables (via babel plugins), and you may need some sort of "live config" served from the backend. I worked with git submodules for a year, and I've recently worked with a project that uses npm to share code.
I would recommend using only one git submodule, for all shared code, instead of several submodules. I would strongly consider using lerna, and use your one git submodule to track lerna's packages directory. And if the team decides they don't like git submodules, you can easily make this repo a sibling git repo, instead of a submodule. However, above all this, I'd recommend using a plain monorepo.
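A rough sketch of the lerna-plus-submodule layout suggested above (the submodule URL is a placeholder):
$ npx lerna init                                                   # set up lerna in the repo root
$ rm -rf packages                                                  # remove the empty packages/ directory if lerna created one
$ git submodule add https://github.com/example/shared-packages.git packages
$ git commit -m "Track the shared packages directory as a submodule"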
Here's a great talk on monorepos from Netflix: https://www.youtube.com/watch?v=VNqmHJtItCs (strong focus on discouraging npm-style packages)
Here's google's infamous monorepo talk: https://www.youtube.com/watch?v=W71BTkUbdqE
This is a great site to read to help you think about good development flows: https://trunkbaseddevelopment.com/ (it primarily advocates for the monorepo approach)
If you are developing software for different clients (different people/companies paying you for similar projects), and have some agreement that the projects should be at least ~80% the same, you may really enjoy using build flags to help split up functionality, but you should very proactively keep the code around the build flags clean and refactor it into reusable components/packages. Give each client some sort of build-flags.json. Build flags should be named for features only, so that in theory they can all be toggled individually. Some code may be totally custom for each client; in that case you may want to consider dynamic imports, but that is generally a pain point I have yet to fully cross, although I have plenty of unrefined ideas around it.
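For what it's worth, the simplest wiring I can think of looks something like this (the file names and the build script are hypothetical):
$ cp config/build-flags.client-a.json build-flags.json             # select the client being built
$ npm run build                                                    # the build reads build-flags.json and toggles features accordingly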
If a monorepo is just not happening, I would actually recommend using npm packages + separate repos over git submodules, assuming you can do good semantic versioning of the package. (And yalc seems to be a good tool for linking packages together, as opposed to the standard npm/yarn link.)
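For example, a local linking flow with yalc looks roughly like this (the package and app directories are placeholders):
$ cd ~/dev/my-shared-package
$ npx yalc publish                                                 # copy the package into yalc's local store
$ cd ~/dev/my-app
$ npx yalc add my-shared-package                                   # use the locally published copy instead of the registry
$ npx yalc update                                                  # pull in changes after the package is re-published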
These are my findings after trying lerna, npm workspaces, and git submodules: I find it is not a case of one vs. the other.
The reason why I say this is because one can have submodules that are part of the monorepo. Doing exactly this made my development experience better as I could clone an existing project and actively develop it within the bigger project (monorepo). I could then contribute back to the cloned project once satisfied with the changes. This is something that you cannot do with npm workspaces alone. Hence my argument that it is not a case of one vs the other. They solve different problems and can therefore complement each other.
Before using npm workspaces I would use npm link all the time. npm workspaces makes this use case of developing with multiple packages more convenient. Even when the team you work with does not use a monorepo, you could use one yourself to develop multiple packages and test them in conjunction. Once satisfied, you can push the individual repos. This is something you cannot do with git alone.
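As a concrete sketch of combining the two (the repository URL and package name are placeholders, and this assumes npm 7+ for workspaces support):
$ git submodule add https://github.com/example/shared-lib.git packages/shared-lib
# list packages/* under "workspaces" in the root package.json, then:
$ npm install                                                      # links packages/shared-lib into node_modules for the other packages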
Maybe you can think of more novel ways of combining the features of npm and git.

Nested ExternalProject_Add with shared dependencies

I'm trying to apply ExternalProject_Add to automate the installation of dependencies of a mid-sized C project. Things were going well until I had to install a library that uses ExternalProject_Add to install one of its dependencies, which also happens to be used by my own project. It would be nice if I could avoid having to rebuild this library and instead use the already installed version.
Is there a good way to accomplish this? Can I tell ExternalProject to download stuff only if some condition, e.g., library already installed, is not met?

Adding Google Framework to Xcode?

I have been using the YouTube v2 APIs for a little while and now have finally gotten around to setting up v3. So I have downloaded their frameworks/libraries from here:
svn checkout http://google-api-objectivec-client.googlecode.com/svn/trunk/ google-api-objectivec-client-read-only
And looked closely at the instructions they give for adding it to your project here:
http://code.google.com/p/google-api-objectivec-client/wiki/BuildingTheLibrary
For the life of me I cannot get my app to compile; there is always some sort of problem. It's either an ARC problem, header files that can't be found, or all sorts of other errors.
I'm hoping someone who has it working can come along and post some simple 1-2-3 steps, as it would make a lot of people's lives easier; I'm sure the Google APIs of all varieties are relevant to a lot of people and will continue to be. Thanks.
I had the same problem recently. The quickest way I could find to get the new client library up and running was to install it with this CocoaPods project, called iOS-GTLYouTube.
CocoaPods is a really simple way to install libraries. If that's new to you, you can check out the instructions at cocoapods.org.
Install CocoaPods if you don't have it, using your system ruby:
$ sudo gem install cocoapods
Create a Podfile in your project directory with the following:
platform :ios, '7.0'
pod 'iOS-GTLYouTube'
In your project directory, using the system ruby:
$ pod install
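After pod install finishes, open the generated workspace rather than the original .xcodeproj (the workspace is named after your project; the name below is a placeholder):
$ open YourApp.xcworkspace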

Third-Party Code and Git

When developing iOS applications, I frequently use third-party code from GitHub and reusable classes I created myself. What I have been doing is cloning the source code into a specific folder somewhere in ~/Documents, where I kept all the library code. Then I would drag the source files into the Xcode project and code away, with a local Git repository keeping track of the changes in my own source code. So far so good, but I recently found a severe problem: I wanted to switch back to an older version of my Xcode project and found that it no longer compiled, because it used an older version of the third-party code, and nowhere had I stored which version that was!
How is this problem usually solved? I have looked briefly into Git submodules, but I'm not sure if it's the right thing. I also briefly read about CocoaPods, but could I also use that for libraries I created myself?
It is indeed solved with git submodule: the idea is to reference an exact commit for each submodule you need, allowing you to go back in the history and find the coherent set of commits your project needed in order to compile at that point.
(More in this answer)
However, that does require a slight change in your working tree structure, since each submodule becomes a sub-directory of the parent repo that represents your project.
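A minimal sketch of that workflow (the library URL and vendor path are placeholders):
$ git submodule add https://github.com/example/ThirdPartyLib.git Vendor/ThirdPartyLib
$ git commit -m "Add ThirdPartyLib as a submodule"
# later, after checking out an older revision of your project:
$ git submodule update --init --recursive                          # restores each submodule to the commit recorded at that revision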
Note also that git submodule is useful for source dependencies.
CocoaPods would be more for building the binaries you depend on (binary dependencies).