This question already has answers here:
sharpsvn logmessage edit sharpsvn?
(2 answers)
Closed 6 years ago.
I have made a program that can read TortoiseSVN commits and show them in a listbox and a textbox.
Now I need the following: when the publish button is pressed, [PUBLISH] should be appended to the log message of each selected commit.
How do I update a commit's log message and send it back to the repository, in VB using Subversion and SharpSvn?
This is similar to another question in which someone asked about the possibility of changing commit details directly on the repository server through SharpSvn.
Basically, the answer is you can't (sorry), though if you have admin access on the repo, you can do it yourself.
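Concretely, with admin (filesystem) access on the repository server you can change a commit's log message by enabling the pre-revprop-change hook and editing the svn:log revision property. A rough sketch, where the repository path and revision number are placeholders:

```shell
# on the repository server: allow revision-property changes, but only for svn:log
cat > /path/to/repo/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
# hook arguments: REPOS REV USER PROPNAME ACTION
[ "$4" = "svn:log" ] && exit 0
exit 1
EOF
chmod +x /path/to/repo/hooks/pre-revprop-change

# append [PUBLISH] to the log message of revision 123
msg="$(svn propget --revprop -r 123 svn:log file:///path/to/repo)"
svn propset --revprop -r 123 svn:log "$msg [PUBLISH]" file:///path/to/repo
```

SharpSvn exposes the same operation through SvnClient.SetRevisionProperty, but either way the server must permit it via that hook; without the hook the edit is rejected.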
This question already has answers here:
Can't set up the HMR: stuck with "Waiting for update signal from WDS..." in console
(9 answers)
Closed 3 years ago.
Good afternoon,
I have been trying to create a react app by following this tutorial:
https://www.youtube.com/watch?v=Ke90Tje7VS0
The issue is, I can't seem to connect to the dev server. The page comes up and works fine, but my changes are never picked up. When I open the browser console, all I see is "[HMR] Waiting for update signal from WDS..." Does anyone have any idea how I can fix this so I can get on with the tutorial?
Thanks in advance!
There is an ongoing issue about this on the React GitHub repository.
Check some of the solutions from that thread.
This question already has an answer here:
How to prevent users from killing C# Application [closed]
(1 answer)
Closed 9 years ago.
I have a café timer application. I want to prevent users from ending my application's process through Task Manager. Thanks
You need to run it as a Windows service with elevated credentials so that users have insufficient rights to end it.
You could bulletproof this by having your service restart automatically if it is stopped, and by regularly saving some state into a data file so that you can carry on in case your service ever does get terminated.
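A sketch of what that could look like from an elevated command prompt, using the built-in sc.exe tool (the service and binary names here are made up):

```shell
rem install the timer as a service that starts automatically
sc create CafeTimerSvc binPath= "C:\CafeTimer\CafeTimerService.exe" start= auto

rem restart the service automatically if it is ever killed or crashes
sc failure CafeTimerSvc reset= 86400 actions= restart/5000/restart/5000/restart/5000

sc start CafeTimerSvc
```

Note the space after each `=` is required by sc's argument syntax. Non-admin users then cannot stop or kill the service process from Task Manager.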
This question already has answers here:
Don't lock files of application while running
(4 answers)
Closed 9 years ago.
I created a small application which users launch through a desktop shortcut pointing to a network share. Whenever I recompile the app and want to move it to the network share to make it available, it is always locked by many users who are using it, understandably. What I don't get is that I can close all the handles (net file 1234 /close) and the users are unaffected, i.e. they can still work on the app. I then copy the new file and ask them to restart.
Is there a way to programmatically "cut off" users from the network exe file once they have launched it, so that I don't have to close all of the handles manually every time?
They'll be affected but the way the jitter works indeed makes it a bit unlikely that their program will crash. Crashes are likely when you use Reflection in your code or the user's machine is under memory pressure from other processes and the program executes code that has not been jitted. YMMV.
The proper deployment choice here is ClickOnce. That creates a local copy of the program on the user's machine. And it automatically sees your update when they restart.
A possible band-aid is renaming the executable. This works because Windows locks the file content, not the directory entry, which lets you copy the update under the same name. Any user who restarts the app will then get the new version, and you'll eventually be able to delete the renamed file. But do favor ClickOnce first.
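The rename-then-copy trick could look like this from a command prompt on the server (share and file names are placeholders):

```shell
rem rename the in-use executable; Windows allows this because the lock
rem is on the file content, not the directory entry
ren \\server\apps\MyApp.exe MyApp.old.exe

rem copy the new build in under the original name
copy \\buildserver\drop\MyApp.exe \\server\apps\MyApp.exe

rem later, once everyone has restarted the app:
del \\server\apps\MyApp.old.exe
```

Running instances keep executing from the renamed file; anyone who restarts picks up the new copy.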
This question already has answers here:
What is the best practice for dealing with passwords in git repositories?
(9 answers)
Closed 8 years ago.
Throughout the code there can be very sensitive information, such as passwords, Amazon S3 keys, etc., that I don't want sent to git at all.
I'd like those very specific fields to be replaced with "SECRET" or something like that.
Also, would a private git repo solve this?
Since git tracks content, replacing those lines with some other text is just another change as far as git is concerned: the next commit would record the placeholder, but the original sensitive values would remain in the repository's history.
What I usually do in these cases is modularize my code so this info gets isolated in a single file, and then I add a line with that file name to the .gitignore file.
The .gitignore file is a collection of patterns, one per line, of file names to be ignored by git while tracking changes in your repo.
For example, if I'm writing a web system in PHP, I create a file that only stores the credentials for connecting to the database (frameworks tend to do this too, so you can take it as good practice). I write this file once with the test-server credentials (which my collaborators are supposed to know) or with some dummy credentials, commit it and push it to my remote, and then I add the file name to my .gitignore. Note that .gitignore alone does not untrack a file that is already committed; you also need git rm --cached <file> to stop tracking it.
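The workflow above can be sketched as follows; the file and credential names are made up for the example:

```shell
# work in a throwaway repo for the demo
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

# commit the config file once, with dummy/test-server credentials
cat > db_config.php <<'EOF'
<?php
$db_user = 'test_user';
$db_pass = 'test_password';
EOF
git add db_config.php
git commit -qm "add db config with dummy credentials"

# then ignore it; .gitignore alone does not untrack an already-committed
# file, so git rm --cached is needed as well
echo 'db_config.php' >> .gitignore
git rm --cached -q db_config.php
git add .gitignore
git commit -qm "stop tracking db config"

# local edits (the real credentials) are now invisible to git
echo "\$db_pass = 'real_secret';" >> db_config.php
git status --porcelain   # shows nothing: the file is ignored
```

One tradeoff: git rm --cached also deletes the dummy file from the repo for collaborators, so document somewhere (e.g. a db_config.php.example) what the file should contain.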
On the other hand, there is the command git add -p, which lets you interactively skip lines; but that would leave a file without the mentioned lines in your remote repo, and you would have to skip those lines manually every time you add the file...
A good reference for git is Pro Git, highly recommended if you are starting with git. GitHub's help center is also a very good place to look.
I hope this helps. Good luck!
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I've recently started work on developing a site using Magento.
All of my files and the DB are on a Linux-based remote web hosting server.
What I'd like to implement is some sort of system where all of my files and the database are backed up once per day "just in case". I'd also like to be able to use the same system to manually back everything up before making any major changes.
I've explored using a solution like Git or SVN in conjunction with cron MySQL dumps, but they seem to be overkill for my needs.
Any ideas?
Thanks in advance for taking the time to read this and reply.
This article explains how to move Magento to another server
http://www.magentocommerce.com/wiki/groups/227/moving_magento_to_another_server
You could use points 1 and 2 to back up your SQL dump and the important Magento folders with the help of a daily cron job.
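As a rough illustration, a daily job along those lines could be a single crontab entry (paths, database names, and credentials are placeholders; note that % must be escaped as \% inside a crontab command):

```shell
# m h dom mon dow  command
30 2 * * * mysqldump -u magento_user -pSECRET magento_db | gzip > /home/user/backups/magento_db_$(date +\%F).sql.gz && tar -czf /home/user/backups/magento_files_$(date +\%F).tar.gz /var/www/html/app/etc /var/www/html/media
```

This dumps and compresses the database every night at 02:30 and archives the folders that hold your configuration and uploaded media, each stamped with the date.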
Probably not as much of an overkill solution as you think. SVN (or any other source control system) will let you keep perfect records of how your system looked at any previous time, so when you blow up your website with new code (which happens often when developing Magento), you can quickly restore it to any previous state.
You will especially find this useful when it doesn't become apparent that everything exploded until several days afterward. Hope that helps.
Thanks,
Joe
There is a new extension for scheduled Magento DB backups - Magento Autobackup
We use this service to back up Magento: http://magento-backups.com/ They combine version control with database dumps and keep it all offsite. They're running a special for around $130/yr. Setup was super easy, only took about 10 minutes including the Subversion install. And customer service was on point when we had trouble.
This is something that I just posted to another question here. I also use git, but it's nice to just grab a tar of the files, scp or ftp them to a different server and upload it.
The next step would be to make a script that changes the base urls and the payment gateway to "test". Maybe another day!
Magento: Backup Advice
Relevant info:
I prefer nightly backups for Magento. This isn't for record keeping; it's for shit-hits-the-fan scenarios. If something really goes bad, you're better off getting the store up and running ASAP and worrying about open orders and lost sales information once the store is back up.
The backup script is crude, but it makes a gzipped copy of the database and of the file directory in a backup directory of your choosing. It appends the month and day to the file names. You need to make sure the user has the correct permissions to tar the Magento file structure.
#!/bin/bash
m_user='databaseusername'
m_pass='databasepasswd'
db_name='databasename'
od='/home/user/backups/website/' # output directory for the backups
id='/var/www/html/'              # location of the site
name="${od}${db_name}_"
# dump and compress the database, then archive the site files; both names get a month-day suffix
mysqldump --opt -u "$m_user" -p"$m_pass" "$db_name" | gzip -c > "$name$(date +%m-%d).sql.gz"
tar -zcvf "$name$(date +%m-%d).tar.gz" "$id"