Restoring a .bz2 MySQL Backup?

I've tried doing my research on this and found this website: http://www.lullabot.com/blog/importexport-large-mysql-databases but I'm still confused as to why this isn't working as it should. I'm trying to restore a MySQL .bz2 backup from one server into a database on another server. The command I'm running to do so is:
bunzip2 SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2 | mysql -h 192.168.255.53 -u sobuser -p summaries_criticaltest
I'm running this in a folder of 2 backup files being:
-rw-r--r-- 1 root root 19339638 Jan 5 13:50 SOB-MySQL-backup-summaries_dev-2012-01-05.sql.bz2
-rw-r--r-- 1 root root 453 Jan 10 09:45 SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2
The output I'm getting is just this: bunzip2: Output file SOB-MySQL-backup-summaries_live-2012-01-05.sql already exists.
I'm not trying to dump anything, just restore the compressed backup into the database. I may be doing this all wrong, but any help would be appreciated. Thanks!

The first part of your command will decompress SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2 to the file SOB-MySQL-backup-summaries_live-2012-01-05.sql - and apparently that has already happened once.
From man bunzip2 (on your box, or online, e.g. at http://www.manpagez.com/man/1/bzip2/ ):
You can also compress or decompress files to the standard output by
giving the -c flag.
So, in the part before the |, you're looking for this:
bunzip2 -c SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2 | ...etc...
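Putting it together (a sketch reusing the host, user, and database names from the question), the full restore becomes:
bunzip2 -c SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2 | mysql -h 192.168.255.53 -u sobuser -p summaries_criticaltest
bzcat is shorthand for bunzip2 -c, so bzcat SOB-MySQL-backup-summaries_live-2012-01-05.sql.bz2 piped into the same mysql command works the same way.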

Related

SSH/Fuse mount create file ok but can't delete it

I have a Proxmox server, so it runs Debian, and I want to mount a remote directory from my Synology NAS to make backups.
I normally use SSH mounts without any problem.
But this time I get an error I have never encountered: I can create files, but not delete them.
I find this very strange and I don't see where it can come from.
root@proxmox:/mnt/# sshfs user@192.168.0.1:home/data /mnt/dist-folder/ -o reconnect,
ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,
default_permissions,uid=0,gid=0,umask=007
root@proxmox:/mnt# cd dist-folder/
root@proxmox:/mnt/dist-folder# touch aa.txt
root@proxmox:/mnt/dist-folder# ls -la
total 12
drwxrwx--- 1 root root  114 Mar 13 09:53 .
drwxr-xr-x 7 root root 4096 Mar 13 09:37 ..
-rwxrwx--- 1 root root    0 Mar 13 09:53 aa.txt
root@proxmox:/mnt/dist-folder# rm aa.txt
rm: cannot remove 'aa.txt': Permission denied
With uid=0,gid=0 for the root user and group.
Thanks
This turned out to be a problem specific to Synology.
For the mount, the remote path absolutely must start with
/homes/<user>/home/
So it becomes:
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/
And it works fine!
It's not the first time I've run into an odd configuration with this Synology box... Argh.
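For completeness, the corrected mount with the options from the original command would look like this (a sketch; same user, host, and mount point as above):
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/ -o reconnect,ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,default_permissions,uid=0,gid=0,umask=007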

Cron working partially

I just created a script to get document volumes from the Content Manager OnDemand database. It goes like this:
dateYesterday=`TZ=GMT+20 date +"%Y%m%d"`
fileToday="GrossVolumes_AsOf_"$dateYesterday
touch $fileToday
chmod 770 $fileToday
db2 connect to ARCHIVEPN >/dev/null
db2 -tmf CMOD_Gross_Volumes.sql | tee -a $fileToday
db2 quit
It works just fine when invoked directly from the shell and generates a file like 'GrossVolumes_AsOf_YYYYmmdd' with all the details present:
/home/myprompt> . createMonthlyReport
But when I schedule it via a cron entry, it creates a zero-byte file and the details are nowhere to be seen.
Here is the cron entry:
54 11 * * * /home/myprompt/createMonthlyReport
Solution by OP.
It appears that the cron job needs the fully qualified path of the program (db2 in this case) to get going, because cron runs the script with a minimal environment and PATH. So, I basically added the absolute path in the script (as can be seen below) to make it functional. It might not be the most efficient way, but it works nonetheless.
dateYesterday=`TZ=GMT+20 date +"%Y%m%d"`
fileToday="GrossVolumes_AsOf_"$dateYesterday
touch $fileToday
chmod 770 $fileToday
/path/to/db2_bin_directory/db2 connect to ARCHIVEPN >/dev/null
/path/to/db2_bin_directory/db2 -tmf CMOD_Gross_Volumes.sql | tee -a $fileToday
/path/to/db2_bin_directory/db2 quit
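An alternative (a sketch; /path/to/db2_bin_directory stands in for the actual DB2 install path, as in the snippet above) is to extend PATH once at the top of the script instead of prefixing every call:
# cron starts with a minimal PATH, so add the DB2 binaries to it explicitly
export PATH=/path/to/db2_bin_directory:$PATH
With that in place, the rest of the script can keep calling plain db2 and still resolve the binary under cron.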

FTP Script - Put images in year/date folders

I have the following script that I am running on Centos which should connect to an external FTP and upload to a directory.
The directory is dependent on the date and needs to dynamically fetch the year and week number.
#!/bin/sh
USER3='USERNAME'
PASSWD3='PASSWORD'
YEAR= date "+%G"
WEEK= date "+%V"
ftp -n -i HOST.com <<SCRIPT3
user $USER3 $PASSWD3
binary
cd htdocs/uploads/$YEAR/$WEEK/
bin
mput *.jpg
quit
SCRIPT3
If I run the script I get this as a response:
# bash test.sh
2014
28
So it looks like it is displaying the year and week number but not substituting them into the folder location part of the script.
How do I get the year and week number to expand into the path of the folder?
Like this:
#!/bin/sh
USER3='USERNAME'
PASSWD3='PASSWORD'
YEAR=$(date "+%G")
WEEK=$(date "+%V")
ftp ...
Note that you may need to create the directory before changing into it; see the sketch below...
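Putting the fix together with the heredoc from the question (a sketch; the mkdir lines assume the year/week directories may not exist yet and that the FTP server lets you create them):
#!/bin/sh
USER3='USERNAME'
PASSWD3='PASSWORD'
# command substitution captures the output of date instead of just printing it
YEAR=$(date "+%G")
WEEK=$(date "+%V")
ftp -n -i HOST.com <<SCRIPT3
user $USER3 $PASSWD3
binary
mkdir htdocs/uploads/$YEAR
mkdir htdocs/uploads/$YEAR/$WEEK
cd htdocs/uploads/$YEAR/$WEEK/
mput *.jpg
quit
SCRIPT3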

File Upload / Strange Permissions Issue

I have a site that's been running for a while. All was going well. Until now. Dun duunnn dunnnnn.
I am unable to upload from an attachment field to a particular directory, but I can upload to its parent directory.
Desired directory to upload (does not work):
/sites/default/files/resources/case-studies
4 drwxrwxrwx 2 apache apache 4096 Jul 22 2013 case-studies
Uploading DOES work to the parent directory:
/sites/default/files/resources
4 drwxrwxrwx 10 apache apache 4096 Mar 18 10:15 resources
As far as I can tell they are identically permissioned. Is there something I am missing?
Thanks, hive mind!
steve
For reasons I haven't figured out, this works:
chmod -R a+w files   <----- yay!
This makes sense to me, but I really can't figure out why that would work and not:
chmod -R 777 files   <----- boo!

How can I mount an S3 volume with proper permissions using FUSE

I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem for mounting Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. Currently the project is in a beta state, but it has been running on several high-load fileservers for quite some time.
We are seeking more people to join our project and help with testing. From our side we offer quick bug fixes and will listen to your requests for new features.
Regarding your issue:
If you used RioFS, you could mount a bucket and have write access to it with the following command (assuming you have installed RioFS and have exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for the command-line arguments)
Please note that the project is still in development, so there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope it helps, and we look forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
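Since Apache needs write access in this case, it may also help to combine the question's uid/gid options (33 is www-data on that Ubuntu instance) with the flags above - a sketch, not tested against this exact setup:
sudo s3fs static.example.com /mnt/static.example.com -o allow_other,nosuid,use_cache=/tmp,uid=33,gid=33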
Try this method using s3backer; its mount point contains the following entries:
mountpoint/
    file    # the backing file (e.g., can be used as a virtual loopback)
    stats   # human-readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer