I just want to delete the repository on my local PC because it is too big. My idea is to keep all files on the SVN server and delete the local SVN copy, so that all files are backed up on the server. Please help.
Via FTP or Samba you could map a network drive to the repository's location on the server. This would allow you to access the files directly (edit a project in an IDE, etc.) and commit directly to the server without consuming any local hard-drive space.
Be aware, if you are using your repository for code, compiling will most likely be severely limited by your network connection.
While trying to solve another issue with connection problems to our servers, I thought I would fix it by setting MaxConnections and MaxStartups in my sshd_config.
When restarting ssh everything seemed fine, but this morning I found out that our Jenkins server couldn't connect to any of the dev servers. So I tried logging in myself, only to find that I cannot log in to any of our dev servers anymore.
It looks like I made a F##$up in sshd_config and locked myself out of all the dev servers.
When trying to log in I get a "port 22: Connection refused" error.
Is there any other way to get into the systems without having to attach every disk to another server just to fix sshd_config?
There are several options available for recovery in this situation:
Use the interactive serial console. This requires preparation in advance.
Add a startup script to fix the file, and then reboot to trigger the script.
Shut down the instance, attach the disk to a recovery instance, use the recovery instance to mount the disk and fix the file, then detach the disk and recreate the instance using the fixed disk. (You can also do this from a snapshot for added safety.)
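The startup-script route (option 2) can be sketched roughly as below. This is a guess at the root cause: MaxConnections is not a valid sshd directive, and sshd refuses to start when its config contains an unknown option, which would explain "Connection refused". The script runs against a demo file here; on the real instance the path would be /etc/ssh/sshd_config.

```shell
# Demo stand-in for /etc/ssh/sshd_config (illustration only)
CONF=sshd_config.demo
printf 'Port 22\nMaxStartups 1\nMaxConnections 1\n' > "$CONF"

# Comment out the directives that (presumably) caused the lockout.
# MaxConnections in particular is not a real sshd option, and sshd
# fails to start on any unknown directive.
sed -i -e 's/^MaxStartups/#&/' -e 's/^MaxConnections/#&/' "$CONF"

# On the real host you would then validate the config and restart sshd:
#   sshd -t && systemctl restart sshd
cat "$CONF"
```

Running `sshd -t` before restarting is the cheap insurance that would have caught this lockout in the first place.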
I've been studying NFS, and what I don't understand is this: after the client receives the file handle from the server (at the end of the whole NFS/mountd/nfsd communication process), is the file data then written somewhere on the client? Does the client then read/write that local copy and send it back over the network to the server? Or does the client read and write the file on the server over the network? Thanks!
As the name says, NFS (Network File System) means accessing files that reside on the server. So every client NFS READ or WRITE request fetches the data from the server over the network. That said, most NFS client implementations use some file/data caching mechanism: once data has been read from the server, the client can keep it in its own cache (the buffer cache, etc.) for subsequent reads to improve performance. As long as the client's cache is valid, it doesn't need to fetch the data from the server again and again.
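As a concrete illustration (the server name and export path are hypothetical), the file data stays on the server and crosses the network on each uncached access; how long the client trusts its cache is tunable at mount time:

```shell
# Hypothetical mount: files live on nfsserver; reads/writes go over the
# wire, with the client caching data and attributes locally for speed.
mount -t nfs nfsserver:/export/data /mnt/data

# Cache behaviour is tunable via mount options (see nfs(5)):
#   actimeo=N  cache file attributes for N seconds
#   noac       disable attribute caching (more round trips, fresher data)
mount -t nfs -o actimeo=30 nfsserver:/export/data /mnt/data
```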
I am trying to upload a large file from my Linux server (which I access via SSH) to an FTP server. (I only have FTP access to this remote server.)
Here is the command I am trying:
put myfile.zip
Result:
150 Opening BINARY mode data connection for myfile.zip
The transfer starts, but at about 10 GB uploaded I get this error, and the file deletes itself from the FTP server.
451 Transfer aborted. Input/output error
I was wondering if there is an alternative, or if I am doing something wrong. Is there a way I can resume the upload?
The FTP protocol itself has no size limitation, but most servers ship with default upload limits out of the box, and individual servers may also have timeouts that apply.
You said you can SSH to it, didn't you? So what about using scp, or mounting the remote folder with sshfs (i.e. over SFTP)?
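If only FTP is available on the remote end, resuming the interrupted upload is still possible with common clients. A sketch, with a hypothetical host and credentials (the commands are printed here rather than executed, since they need a live server):

```shell
# lftp: "put -c" checks the remote file size and appends only the missing tail
CMD_LFTP="lftp -e 'put -c myfile.zip; quit' ftp://user@ftp.example.com"

# curl: "-C -" asks curl to work out the resume offset from the server itself
CMD_CURL="curl -T myfile.zip -C - ftp://user@ftp.example.com/"

echo "$CMD_LFTP"
echo "$CMD_CURL"
```

Either way, resuming only helps if the server-side limit that aborted the transfer is a timeout rather than a hard size cap.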
I installed Neo4j on a server at a hosting provider. The app that I run on it works fine. However, how would I access the Neo4j shell? As I understand it, I would normally do it through http://www.myapp.com:PORT (if I uncomment "accept non-local connections" in the Neo4j config). But is there a way to access the shell, admin and web interface without uncommenting that external-connections line? Directly from SSH, for example? How?
Thanks!
There's a neo4j-shell command in the bin folder. Running it from your SSH session will let you execute queries against Neo4j.
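A minimal sketch of that, assuming a hypothetical install path of /opt/neo4j (the command is printed rather than executed, since it needs a live Neo4j instance). Because neo4j-shell talks to the locally running instance, nothing has to be exposed to external connections:

```shell
# Run a query against the local Neo4j from an SSH session on the server;
# -c passes a single shell command non-interactively.
CMD="/opt/neo4j/bin/neo4j-shell -c 'MATCH (n) RETURN count(n);'"
echo "$CMD"
```

Omitting -c drops you into the interactive shell instead.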
I am a newbie at working with rdiff-backup.
I am taking backups with rdiff-backup from the client to the server.
Can anyone tell me why we need to install rdiff-backup on the server end as well?
How does it work?
Does rdiff-backup access the server's file system directly, or connect to the rdiff-backup on the server?
The suggested solution with mountable file systems is barely usable, though. And that is exactly why you need rdiff-backup on the server: it performs the delta calculations and optimizes throughput by sending only the information needed. Otherwise, why bother using rdiff-backup at all...
Actually, you don't need rdiff-backup on the remote side if you have SFTP access. Have a look at this article on the rdiff-backup wiki page.
In general you don't need rdiff-backup on the remote side if you can access the remote filesystem some other way, such as NFS or CIFS (although CIFS has been troublesome for some users).
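The two modes the answers describe can be sketched as follows (hosts and paths are hypothetical, and the commands are printed rather than executed since they need real machines):

```shell
# Remote mode: rdiff-backup runs on BOTH ends and exchanges rsync-style
# deltas over SSH, so only changed blocks cross the network.
CMD_REMOTE="rdiff-backup /home/me backupuser@server::/backups/me"

# Local-filesystem mode: the server's storage is mounted (NFS/CIFS/sshfs),
# so only the client needs rdiff-backup -- but whole files may cross the
# network, since the delta trick needs a smart process on the far side.
CMD_LOCAL="rdiff-backup /home/me /mnt/server-backups/me"

echo "$CMD_REMOTE"
echo "$CMD_LOCAL"
```

The `host::path` syntax is what tells rdiff-backup to spawn its remote counterpart over SSH.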