Rails 3 and Git: two applications, shared database - ruby-on-rails-3

I have two Rails 3 applications that will share portions of the same database through an internally developed gem. This is an internal project where we will always have full control over both applications. One application is a bare-metal administration tool (dev-facing, potentially unstable) and the other is the content publishing system (user-facing, production). It's neither practical nor desirable to meld the applications.
I've already seen Rails - Shared Database Tables between two apps
My proposed solution is to git submodule and share the /db directory of both applications.
I want to know if this is a valid approach, and if so are there any pitfalls I'm setting myself up for? If this isn't valid what is a good alternative? (The goal here is to remain as simple as possible, no interprocess APIs.)

I have used this approach and it does work. If you are using Capistrano for deployment, enable submodule deployment like this:
set :git_enable_submodules, 1
You have to be careful not to forget to sync the /db folder before you start creating migrations: migrations are timestamped, and an out-of-date folder can leave you with the wrong sequence of migrations.
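The submodule setup proposed in the question can be sketched as follows. All paths are placeholders; a throwaway temp directory stands in for a real central git server so the commands run end to end:

```shell
# Sketch: extract db/ into its own repo and mount it in each app as a submodule.
set -e
work=$(mktemp -d)

# one-time: a shared repo holding only the db/ contents
git init --bare -q "$work/shared_db.git"
git clone -q "$work/shared_db.git" "$work/seed"
cd "$work/seed"
git config user.email dev@example.com && git config user.name dev
mkdir migrate && touch migrate/.keep
git add . && git commit -qm "initial db layout" && git push -q origin HEAD

# in each application: mount the shared repo as db/
mkdir "$work/app" && cd "$work/app"
git init -q
git config user.email dev@example.com && git config user.name dev
git -c protocol.file.allow=always submodule add "$work/shared_db.git" db
git commit -qm "share db/ via submodule"

# before writing a new migration, sync the shared db/ first
git -C db pull -q origin HEAD
# rails generate migration AddFlagToPosts flag:boolean   # then commit and push inside db/
```

The pull-before-generating step is what protects you from the timestamp-ordering pitfall mentioned above.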

Related

How to securely set up continuous delivery?

Setup:
Private master repo and every developer has their own private fork.
Currently using CircleCI, but we'd be happy to switch to satisfy the requirements
Branches on master repo are protected with merge restrictions
Requirements:
Build + test on forked pull requests
Deploy to different environments based on master repo branch updates
Not all developers can be fully trusted with production credentials
Partial Solution:
Enable building and passing secrets on forked pull requests (Reference)
Use CircleCI contexts to set environment variables per branch. This allows different deploy targets.
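The context-per-branch idea might look roughly like this in .circleci/config.yml (workflow, job and context names are examples, and the job definitions are omitted). Note this only illustrates the partial solution, not a fix for the secret-exposure problem below:

```yaml
version: 2.1
workflows:
  build-and-deploy:
    jobs:
      - build                        # runs for every branch, incl. forked PRs
      - deploy-staging:
          context: staging-secrets   # context holds the staging credentials
          requires: [build]
          filters:
            branches:
              only: develop
      - deploy-prod:
          context: prod-secrets      # reached only by master updates
          requires: [build]
          filters:
            branches:
              only: master
```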
Problems:
All repo-specific secrets, as well as all global contexts, are now accessible to anyone who can open a PR.
Even if we disable building on forked pull requests, anyone with write access to at least one repo can access all global contexts.
Question:
This seems to be a very common use case. How do other companies solve it?
Is CircleCI not the right tool for this? - No, it is not (see below).
Should we build a custom solution?
Edit1:
CircleCI got back to me and, surprisingly, this is not a use case they support. Looking into other providers now. The questions above are still unanswered.
Edit2:
I've also contacted Travis CI and Semaphore CI, and it appears that only Travis CI supports building forked PRs without leaking secrets into them (Reference).
Semaphore CI is missing (1) building forked PRs and (2) hiding secrets from the deployment phase in non-master workflows.
CircleCI has restricted contexts, but they would require manually changing workflows. They are definitely not easy to set up, and I don't fully understand how they would work.

How does CakePHP use the 'hash' field/column in User?

I have a CakePHP 2.3 app with a MySQL database.
I'm building a new app (in a different language and framework).
The plan is to completely replace the CakePHP app with the new one. The code is almost ready, so I've just attempted to run it in production for the first time.
User login seems to have broken in production because the field hash in the users table was renamed to old_hash. I did this because the new framework can't have fields with the name hash.
Testing in development, this was not a problem at all.
In production it became a problem.
Development:
App in Vagrant VM and database in my machine's local MySQL
(MySQL 5.7)
Production:
App in AWS EC2 and database in AWS RDS
(MySQL 5.6)
Because all users have the hash column blank, I assumed it didn't matter.
The fact that it worked locally after changing it to old_hash led me to worry even less about it.
I've searched for documentation on this specifically, but did not find anything detailed enough.
What does CakePHP use the hash field/column in the users table for?
Is there a place in the code where I can explicitly tell it to look for something named old_hash instead?
What could be influencing the difference in behaviors between development and production?
Figuring all of this out would be great, because then my two apps could briefly coexist in production, making the transition smoother.
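One hedged way to narrow this down is to locate every place the CakePHP code actually references the column before renaming it. The app/ tree created below is just a stand-in for a real CakePHP 2.3 project so the search has something to hit:

```shell
# stand-in for a real CakePHP app tree (the grep at the end is the useful part)
cd "$(mktemp -d)"
mkdir -p app/Model
cat > app/Model/User.php <<'PHP'
<?php
class User extends AppModel {
    // hypothetical reference to the column being renamed
    public $order = 'User.hash';
}
PHP

# list every reference to the column across models, controllers and views
grep -rn "hash" app/
```

If the search turns up no references outside schema files, that would suggest the column is application-specific rather than something CakePHP itself consumes.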

jhipster 3 Migrate from monolithic to microservices

Currently I have a JHipster 3.3 monolithic application and I would like to migrate to a microservices architecture. I've already created the registry, the gateway and the UAA service. Now I need to migrate the core business of my application into a microservice. Is there a facility to perform this? Can it be done automatically?
You could either convert your monolith into a service, or re-generate it from your entity definitions.
The first approach requires a good understanding of Spring Cloud: you'd start by annotating your app with @EnableEurekaClient, add the missing Spring Cloud dependencies to your pom.xml, add the missing properties to your application*.yml files, and create bootstrap*.yml files. Then you would move the client part to your gateway. This is not easy, especially if you're new to Spring Cloud.
The second approach requires you to generate a microservice app with the same options as your monolith, then copy into it your .jhipster folder, which contains your entity definitions, and re-generate the entities by running yo jhipster:entity <entityName> for each entity in the same order as you created them initially, and then generate them on the gateway as well to produce the client part.
You should also take time to think about why you're migrating: if you turn your monolith app into a single service it might be a bad idea, as you'll only add complexity. It makes sense only if you are planning to add more services and/or split your monolith into several services. There is a good free ebook and video at O'Reilly: "Microservices AntiPatterns and Pitfalls".
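The second approach can be sketched roughly like this. Paths and entity names (Author, Book) are made up, and the generator calls are only echoed here, since yo/generator-jhipster is not actually run in this sketch:

```shell
cd "$(mktemp -d)"
mkdir -p monolith/.jhipster microservice
printf '{ "entityName": "Book" }\n' > monolith/.jhipster/Book.json   # stand-in definition

cp -r monolith/.jhipster microservice/      # carry the entity definitions over
cd microservice
for entity in Author Book; do               # same order as originally created
    echo "yo jhipster:entity $entity"
done
```

Running the same loop once more in the gateway project would regenerate the client part.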
To start, I also want to second the last part of Gaël's answer:
think about why you're migrating?
Personally, I am in the middle of a migration right now. In 2015 I started a JHipster monolith app (at that time that was the only option :) ) which I still develop and add new features to. I decided to migrate the monolith to microservices because we are going to grow the team and want to move to DDD in the future. I must admit that there is some overhead at the beginning and the learning curve is quite steep, but in the end the results are very rewarding, especially if you believe in CI (y)
This is how I migrated my monolith:
be sure that all your sources are committed and synced with your VCS (I use git as a DVCS)
without any changes, just run the jhipster generator and overwrite all the old sources
do a git diff to get an overview of which files are generated by jhipster and which ones you have modified
if you have not changed the format of the files that jhipster generates, it should just be some files in the webapp folder and configuration files
if the only differences are due to formatting, I recommend reviewing the code and then updating the base code of your monolith app
the target is to have as few differences as possible when regenerating the monolith app with the jhipster generator (it is better to have fewer files to check when migrating to microservices)
at this point I assume that you are on a clean workspace (i.e. all your changes are synced with the VCS), so if you run yo jhipster you will have as few files as possible to recheck manually
in the root folder of the app there is a .yo-rc.json file
in that file you should change the applicationType from monolith to gateway and the authenticationType from whatever you have to jwt, e.g.
.yo-rc.json
"jhipsterVersion": "3.5.1",
"serverPort": "8080",
"applicationType": "gateway",
"jhiPrefix": "jhi",
after merging the newly generated files you should now have the gateway of the microservice architecture (you may need to delete some classes, depending on which authenticationType your monolith used to have)
personally, I am now working on moving some of the responsibilities (all the stuff that the old monolith did) out of the gateway into stand-alone microservices
the migration of the services mentioned in 6.1 goes in parallel with adding new features to the app, and those features will be added as new microservices
My recommendation is to go in small steps/increments, and it helps to have CI so that you get feedback about your migration as soon as possible ;)
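The regenerate-and-diff loop described above can be sketched as commands. The generator run itself is left as a comment (it is interactive and needs a jhipster install), and a file write simulates its effect:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo 'my customized build file' > pom.xml
git add . && git commit -qm "monolith baseline"      # clean, committed workspace

# yo jhipster                               # re-run the generator, overwrite old sources
echo 'regenerated build file' > pom.xml     # simulate a file the generator rewrote

git diff --stat                             # overview of what the generator changed
git checkout -- pom.xml                     # keep your own version where the diff is unwanted
```

The same diff-then-restore cycle applies after switching applicationType to gateway in .yo-rc.json and regenerating.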
Good luck.
Cheers, duderoot

IBM Worklight 6.1.0.1 - worklightserverhost attribute and app-builder

How is the worklightserverhost attribute on the app-builder task used? This is important because when deploying a tested application into a production environment, you normally wouldn't do a new build (as this could introduce regression problems). However, given that this is a mandatory property and, in this scenario, contains the test server URL and context, does it force you to do a new build for the production environment?
Yes, a re-build for each environment does seem to be the usual approach. While we might prefer a "build once, promote through the stages" pattern, I think careful use of tagging in your source repository gives you a pretty good defence against regression.
Alternatively, I think with care you could set up your network so that the app is built once directing to, say,
myco.mobile.hostv21
You could then have that hostname resolve to the different stages as appropriate.
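For reference, a hedged sketch of the Ant call in question. Apart from worklightserverhost, the attribute names are recalled from the Worklight 6.1 app-builder documentation and may differ slightly; all values are placeholders:

```xml
<!-- all values are placeholders; worklightserverhost is the attribute under discussion -->
<app-builder applicationFolder="myProject/apps/myApp"
             nativeProjectPrefix="myProject"
             outputFolder="myProject/bin"
             worklightserverhost="https://myco.mobile.hostv21/worklight"/>
```

With the DNS approach above, worklightserverhost stays constant across builds and the hostname is resolved differently per stage.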

Need advice regarding deployment on multiple remote machines

Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my current scenario, I need to build, package and deploy to Dev. After this I need to deploy the same package to the Test and Live servers (which are on a different domain). I understand how we do it, but the problem is that web.config transformation only occurs for the Test and Live configs if we build a separate package for them. This means the package created for Dev cannot be reused, as web.config transformation only occurred for the Dev web.config. I also know that we can change the web.config when un-packaging, but those parameters are very limited. We have a lot of changes, not just connection string or db changes.
Another solution is to add an extra step that builds packages for Test and Live as part of the Dev deployment, but that means a lot of copying to remote servers (once for Test and once for Live), which is very time-consuming because of the different domains.
Can you please advise on the best solution in this scenario, so that I can use TeamCity to publish to Dev, Test and Live using the same package with different web configs in one go?
To configure items at deployment time that are not automatically exposed for you, you can add a file named parameters.xml to your project and declare what you want to make available at deployment time.
Here's some documentation on the approach Using Deployment Parameters for Web.Config File Settings.
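A minimal parameters.xml sketch, where the parameter name, XPath and default value are invented for illustration. It declares a Web.config appSetting that each environment can set at deploy time instead of at package time:

```xml
<parameters>
  <!-- exposes an appSetting so each environment can override it when deploying -->
  <parameter name="Env-LogLevel"
             description="Log level for this environment"
             defaultValue="Info">
    <parameterEntry kind="XmlFile"
                    scope="Web\.config$"
                    match="/configuration/appSettings/add[@key='LogLevel']/@value" />
  </parameter>
</parameters>
```

Each environment then supplies its own value in its SetParameters.xml file (or via -setParam on the msdeploy command line), so one package built on Dev can serve Test and Live as well.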