I have a requirement from a client:
I need to update two files across all of our repositories.
Each repository has three branches (dev, staging, master), and I want to update the two files in all three branches.
It would be great if someone could give a hint, or a script, for doing this.
Many thanks for reading; waiting for some inputs. Thanks.
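One possible approach, sketched as a shell script. The repository URLs, file names, and in-repo paths below are placeholders you would substitute; the push is left commented out so a dry run can be inspected first.

```shell
#!/bin/sh
# Sketch only: update two files on the dev/staging/master branches of one
# already-cloned repository. File names and paths are placeholders.
update_branches() {
    repo_dir=$1; src1=$2; src2=$3
    cd "$repo_dir" || return 1
    for b in dev staging master; do
        git checkout -q "$b" || continue        # skip branches that don't exist
        cp "$src1" file1.txt                    # placeholder in-repo paths
        cp "$src2" file2.txt
        git add file1.txt file2.txt
        git commit -q -m "Update shared files" || true   # no-op if unchanged
        # git push origin "$b"                  # enable once verified
    done
    cd - >/dev/null
}

# Outer loop over all repositories (URLs are hypothetical):
# for url in git@gitlab.example.com:group/repo1.git \
#            git@gitlab.example.com:group/repo2.git; do
#     git clone "$url" && update_branches "$(basename "$url" .git)" new1 new2
# done
```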
I am using GitLab for my project. There are three or four different repositories, and together they take up a huge amount of space, which causes serious performance problems on my computer. Is there a better way to handle the space they use? Right now I have to delete each local repository after finishing work on a branch to free up space, which means I clone the repo again every time I start on a new branch; that sometimes takes 30 minutes, which doesn't help and consumes a lot of time. I also work on all the repositories sequentially, so a single assigned task can mean cloning and deleting four times, which doesn't seem efficient.
Is it possible to keep all four repos locally and still be efficient with disk space and computer performance?
I am using VS Code.
Any suggestions appreciated.
What are the best strategies for keeping local repositories around, efficiently, without deleting them every time?
I've had similar problems in the past with repos that contained many branches, tags, and commits. What helped me was using the --single-branch option of the clone command.
Since you mentioned you're using VS Code and GitLab, I'm also assuming you're using the GitLab Workflow VS Code extension. Sadly, I don't know how to specify the --single-branch option through that extension, but there should be a way.
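For what it's worth, from a terminal the option looks like this; combining it with --depth 1 also skips old history, which saves even more space (the URL and branch name below are placeholders):

```shell
# Fetch only one branch, and only its tip commit.
shallow_clone() {
    # $1 = repository URL, $2 = branch name, $3 = target directory
    git clone --single-branch --branch "$2" --depth 1 "$1" "$3"
}

# Usage (URL is hypothetical):
# shallow_clone git@gitlab.example.com:group/repo1.git dev repo1
```

If you later need another branch in the same clone, `git remote set-branches --add origin other-branch` followed by `git fetch origin other-branch` should bring it in without recloning. Also worth knowing: `git worktree add` lets a single clone check out several branches in separate directories, which avoids recloning per branch entirely.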
I'm working on a (GNU/Linux) system with a ClearCase client installed. On this system, people don't always have views corresponding to every branch, and different files can have different "branch trees".
Now, given that I have no relevant view set as the present working view, or that the PWV is irrelevant to the branches I'm interested in - how do I generate a diff between the files existing on two specific branches?
In this answer:
https://stackoverflow.com/a/2786120/1593077
there are, I believe, assumptions about the PWV being relevant.
One simple way would be to create two snapshot views dedicated for that task:
one with a config spec selecting branch1
one with a config spec selecting branch2
Then a Linux tool like kdiff3 (or even a simple diff -qr dir1 dir2) would display the differences between the file lists and the file contents.
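A rough shell sketch of that recipe follows. The view tags, paths, and branch names are all made up, and the cleartool calls are commented out since they need a ClearCase client; only the config specs and the final compare step are concrete:

```shell
# Config specs selecting each branch, falling back to /main:
cat > cs_branch1 <<'EOF'
element * .../branch1/LATEST
element * /main/LATEST
EOF
cat > cs_branch2 <<'EOF'
element * .../branch2/LATEST
element * /main/LATEST
EOF

# Create the two snapshot views and load them (requires ClearCase):
# cleartool mkview -snapshot -tag diff_b1 /tmp/diff_b1
# cleartool mkview -snapshot -tag diff_b2 /tmp/diff_b2
# cd /tmp/diff_b1 && cleartool setcs /path/to/cs_branch1
# cd /tmp/diff_b2 && cleartool setcs /path/to/cs_branch2

# Then compare the loaded trees: -q reports differing names, -r recurses.
compare_trees() { diff -qr "$1" "$2"; }
# compare_trees /tmp/diff_b1/my_vob /tmp/diff_b2/my_vob
```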
I'm working on a project with several developers, and I need to systematically inform them about the latest changes.
Any solutions?
There's only one solution, no matter how big your project is or how many people are involved.
It's called a VCS (version control system).
I highly recommend using a service like GitHub or Bitbucket so you can access your code repository from anywhere in the world. You can create a private repo and add your friends as collaborators, so only those people have access to the code.
Hope it helps.
This is an open-ended question. I have only a beginner's understanding of databases but am willing to learn whatever is required, though I believe my problem can be solved without learning a lot.
So, here goes the question:
I have a large number of files being generated in my projects (depending on the builds), and I need to archive them and also reproduce them by build number when users request it. I don't expect many such requests; maybe one or two a day.
For example: 16 GB of data per build, every week. Most of the files in the weekly builds are duplicates, and I don't want to archive them again and again; I'd prefer to store each one only once. One caveat is that a file's relative location can change even though its content hasn't.
My approach is as follows: compute a hash of each file, store the hash-to-file mapping as key-value pairs, and record this information in some kind of manifest file for each build. That way I should be able to reconstruct any build with the correct files and paths.
Can two different files ever have the same hash? Could some kind of database help do this efficiently? I'm currently thinking of dumping all the files into one folder.
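The approach described above can be sketched in shell, assuming sha256sum is available. The directory names and the tab-separated manifest format here are invented for the example:

```shell
# Content-addressed archive: each unique file content is stored once under
# its SHA-256 hash; a per-build manifest maps hashes back to relative paths.
store_build() {    # $1 = build dir, $2 = manifest file, $3 = store dir
    mkdir -p "$3"
    : > "$2"
    ( cd "$1" && find . -type f ) | while read -r rel; do
        h=$(sha256sum "$1/$rel" | cut -d' ' -f1)
        [ -e "$3/$h" ] || cp "$1/$rel" "$3/$h"      # store content only once
        printf '%s\t%s\n' "$h" "$rel" >> "$2"
    done
}

restore_build() {  # $1 = manifest file, $2 = output dir, $3 = store dir
    while read -r h rel; do
        mkdir -p "$2/$(dirname "$rel")"
        cp "$3/$h" "$2/$rel"
    done < "$1"
}
```

On the collision question: accidental SHA-256 collisions are not a practical concern (the weaker MD5 or SHA-1 would be poorer choices). At one build a week, a flat folder of hash-named files is likely fine; a database would mainly buy you atomic manifest updates and faster lookups if the file count grows very large.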
Thanks
I'm working on a project which involves aggregating data from a variety of sources so that users can search and mine it from a single front-end interface. The project breaks pretty cleanly into two components:
The cron triggered (Whenever gem) code that pulls data from various sources and POPULATES the database.
The front-end code that CONSUMES the data and presents it to the user.
I want to split the codebase into separate projects with a shared model to encourage clean separation of concerns but am not sure how best to go about that in Rails 3.
I saw this SO question about using a shared folder/submodule in SVN or Git but that doesn't seem very clean to me:
Sharing Models between two Rails Projects - using git submodules?
I come from a Java/Maven background, where you would just create three modules (one shared and two that depend on it) and call it a day. Then with Maven you could invoke a build on the parent project and it would automatically update the shared-code JAR in each dependent project.
Can the same be achieved using Rails Engines, Rake, and RubyGems? Or is there a better "rails way" to do it?
Thanks,
-James
You can keep the models in a gem or plugin. The DB configurations should remain in their respective apps, though.
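One possible layout, sketched in shell; the gem name, paths, and Gemfile lines are all hypothetical:

```shell
# Skeleton of a gem holding the shared ActiveRecord models:
mkdir -p shared_models/lib/shared_models
cat > shared_models/shared_models.gemspec <<'EOF'
Gem::Specification.new do |s|
  s.name    = "shared_models"
  s.version = "0.1.0"
  s.summary = "Models shared by the populator and the front-end app"
  s.files   = Dir["lib/**/*.rb"]
  s.authors = ["you"]
end
EOF

# Each Rails app then points at it from its Gemfile:
cat > Gemfile.snippet <<'EOF'
gem "shared_models", path: "../shared_models"
EOF
```

With `path:` pointing at a sibling checkout, edits to the shared models are picked up by both apps without republishing; a `git:` (or published gem) source makes sense once the shared code stabilizes.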