We have a remote repository (which contains .repo and multiple git projects), and we want to replicate it to a local server.
With the existing replication.config entry:

    url = gerrit2@15.145.25.168:/home/gerrit2/gerrit_testsite/git/${name}.git

only the git projects are replicated to the local server. How do we replicate .repo? We need an exact replica of the code that is present on the remote server.
.repo cannot be replicated; Gerrit replication works only with .git repositories. So make a mirror of the complete repo checkout and then enable replication.
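A minimal sketch of building such a mirror with the repo tool itself, assuming a hypothetical manifest project on the same Gerrit server (the manifest path and SSH port below are placeholders):

    # Mirror every project listed in the manifest as a bare git repository;
    # the manifest URL is an assumed placeholder for your Gerrit server.
    repo init --mirror -u ssh://gerrit2@15.145.25.168:29418/platform/manifest
    repo sync

repo init --mirror creates bare copies of all projects in the manifest, which the replication plugin (or a plain git clone --mirror per project) can then work against.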
As part of an Artifactory migration of one product from a group of products, we have decided to use pull replication, with the new server pulling from the old server.
What is the best way to turn the remote repositories on the new Artifactory instance (which pull from the old server) into local repositories with the same configuration they have on the old server (where they are local)?
Alternatively, we are considering keeping the migrated remote repositories, creating a local repository for each remote, and exposing both under the same URL via a virtual repository that includes the local and the remote. (Would this work well?)
Please let us know the best approach for our case.
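If you go the virtual-repository route, a minimal sketch of creating one through the Artifactory repositories REST API (the hostname, repository keys, and credentials below are all hypothetical):

    # Create a virtual repository that serves the local and remote pair
    # under one URL; every name here is an assumed placeholder.
    curl -u admin:$ADMIN_PASSWORD -X PUT \
      "https://new-art.example.com/artifactory/api/repositories/myproduct" \
      -H "Content-Type: application/json" \
      -d '{"rclass": "virtual", "packageType": "generic", "repositories": ["myproduct-local", "myproduct-remote"]}'

Clients then resolve from the myproduct URL regardless of whether an artifact lives in the local or the remote repository.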
I have my Heroku production database scheduled to make daily backups, and I want to restore the backups onto my staging database daily as well. This way I can keep the staging box in sync with production for testing/debugging purposes and have a daily test of the restoration process run automatically.
I've tried to schedule a bash script on the staging box to perform the restore. The script uses the Heroku CLI to pull the URL of the latest backup and perform the restoration. The problem is authenticating the Heroku CLI: since I can't open a browser on the dyno, I need a safe way to authenticate.
Should I pull a .netrc file from somewhere? Is it even a good idea to give a dyno the CLI? Is there a better way to go about this without standing up another server to run the restorations?
You can put an authorization token in the HEROKU_API_KEY environment variable on your staging environment. Generate the token with:

    heroku auth:token

Then set it on staging:

    heroku config:set HEROKU_API_KEY=<token> -a staging
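With that variable set, the CLI on the dyno authenticates non-interactively, so the restore script could be as short as this sketch (both app names are hypothetical):

    # Fetch the signed URL of the latest production backup and restore it
    # into the staging database; app names are assumed placeholders.
    BACKUP_URL=$(heroku pg:backups:url -a my-production-app)
    heroku pg:backups:restore "$BACKUP_URL" DATABASE_URL \
      -a my-staging-app --confirm my-staging-app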
From a security standpoint, this means your staging environment pretty much has full access to your production environment.
A more secure way is to have a scheduled task, run on the production app or on a new app created just for this purpose, that copies the db backup to an S3 bucket the staging app has access to. The staging app then restores from the backup in the S3 bucket. Staging needs no access to production.
This is a good idea anyway - if you lose access to Heroku you'll still have access to your backups.
There's a buildpack for this - https://github.com/kbaum/heroku-database-backups. I encourage you to read the code in the buildpack - it is a pretty simple process. I would also either fork the buildpack or just write your own code, because it will have full access to your production environment; I would never trust a third-party buildpack with that.
Bonus points if your job scrubs sensitive information from your production database for staging (see the sketch after this list). It could do this by:
Restoring the production backup to a second database
Scrubbing sensitive information from the second database
Backing up the second database
Pushing the second database backup to the S3 bucket.
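A minimal sketch of that job, assuming the second database is attached to the production app under the hypothetical name SCRUB_DATABASE_URL and that an S3 bucket and the aws CLI are available (every name below is a placeholder):

    #!/bin/bash
    set -euo pipefail

    PROD_APP=my-production-app   # assumed app name
    BUCKET=s3://my-db-backups    # assumed bucket

    # 1. Restore the latest production backup into the scrub database.
    heroku pg:backups:restore "$(heroku pg:backups:url -a "$PROD_APP")" \
      SCRUB_DATABASE_URL -a "$PROD_APP" --confirm "$PROD_APP"

    # 2. Scrub sensitive information (example: anonymize user emails).
    SCRUB_DB=$(heroku config:get SCRUB_DATABASE_URL -a "$PROD_APP")
    psql "$SCRUB_DB" -c "UPDATE users SET email = 'user' || id || '@example.com';"

    # 3. Dump the scrubbed database and push it to the S3 bucket.
    pg_dump -Fc "$SCRUB_DB" > scrubbed.dump
    aws s3 cp scrubbed.dump "$BUCKET/scrubbed-$(date +%F).dump"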
I'm sure there's a good number of developers here that use DirectAdmin, and I had a quick question.
I've always used cPanel, and I'm now on a server that is using DirectAdmin instead. Where in DirectAdmin can you generate a full backup of the account at the user level?
Also, do DirectAdmin backups include everything related to the account like cPanel backups do? For example, not only the files and databases but also the cron jobs, DNS zones, email accounts, etc.?
And where are the backups stored by default? Is there an option to send the backups to a remote server via FTP like you can with cPanel?
There are two different backup systems built into DA:
Admin Tools | System Backup. This tool lets you back up configuration data and arbitrary directories, locally or using FTP or SCP.
Admin Tools | Admin Backup/Transfer. This tool is oriented toward backing up data account by account, in one archive per account, in a format that you can use to restore from (in the same tool) on the original or another DA server (i.e. if you want to transfer to a new server). You can back up locally and/or via FTP.
Both options can also be scheduled via cron.
Depending on your level of access, only one of these might be available to you. This page has further info for non-administrators: http://www.site-helper.com/backup.html.
You can improve your DirectAdmin backups with an incremental backup plugin that supports local and remote backup locations; please check the setup guide here.
I am thinking of deploying my Rails app to Engine Yard. I have a MySQL db with all of the data for the site. When I deploy to Engine Yard cloud, will I be able to "push" this database to the server somehow?
Something like this (?):
https://blog.heroku.com/archives/2009/3/18/push_and_pull_databases_to_and_from_heroku/
Or can I somehow put the MySQL database in the git repo so it is pushed to the server?
See: https://support.cloud.engineyard.com/entries/20996676-Restore-or-load-a-database
Use scp to copy a dump of your database to the server over SSH.
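A minimal sketch of that approach (user, host, and database names are all hypothetical):

    # Dump the database locally, copy it over SSH, and load it remotely;
    # every name here is an assumed placeholder.
    mysqldump -u root -p myapp_production > myapp.sql
    scp myapp.sql deploy@your-ey-instance:/tmp/
    ssh deploy@your-ey-instance 'mysql -u root -p myapp_production < /tmp/myapp.sql'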
We have a couple of production servers that are configured to only allow access via RDP. There are no accessible shares. The dev team has no say in changing this setup, but we want to automate code deployments to these machines. Presently we have to set Remote Desktop to share a local drive with the server, then RDP to the server and manually copy the deployment.
Does anyone know of a way to tunnel over RDP and drop files into a given directory on the remote host from the command line? The instructions will need to be included in an MSBuild configuration.
If you can get WS-MAN set up, PowerShell remoting and/or pmodem might be your ticket? https://web.archive.org/web/20180429054125/http://www.nivot.org/blog/2009/11/default