I want to exclude from the cPanel backup any file that has the word "backup" in its name and is in zip or tar.gz format. I do not know exactly what I should enter in cpbackup-exclude.conf.
You can add cpbackup-exclude.conf at the global or the user level.
Global:
/etc/cpbackup-exclude.conf
User level:
/home/username/cpbackup-exclude.conf
The entries would be like this:
*/backup-*tar.gz
*/backup-*zip
*/backup*gz
*/cpmove*
I contacted cPanel and they told me the following works; it has been tested:
/*backup*.sql
/*backup*.tar
/*backup*.tar.bz2
/*backup*.tar.gz
/*backup*.zip
So the above will make sure that any .sql, .tar, etc. file that has the word "backup" in its name is excluded from the backup, in all folders.
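Before relying on the patterns, you can preview roughly which files they would catch with find (a sketch; /home/username is a placeholder, and find's -name globbing is only an approximation of cpbackup's matcher):
# List files the exclude patterns above should skip
find /home/username -type f \( -name '*backup*.sql' \
  -o -name '*backup*.tar' -o -name '*backup*.tar.bz2' \
  -o -name '*backup*.tar.gz' -o -name '*backup*.zip' \)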
I recently installed Splunk Enterprise on my laptop. I have log file with log entry in the following format:
2016-12-06 20:59:58,773 ProductName=XYZ ActivityGUID=bb1637a2-7b82-4878-a5a0-65f02679b7b1 BusinessModel=ABC ProductType=CCB ActivityName=endpoint-v2 ActivityStep=rs TimestampStart=2016-12-06 20:59:58,767 Timestamp=2016-12-06 20:59:58,773 HostLocal=10.186.108.199 HostRemote=10.186.108.5 Username=47b460c4-0a24-4a14-8b81-73b8f2dde43c OperationName=GetProduct SupplierID=v5-e48fb7508bb3484c9aa1f00b39ddb3e5-0-0-32 RAIL=70201 TRACK=0 StatusCode=0 TimestampEnd=2016-12-06 20:59:58,773 Duration=6 DurationN=5
I would like to upload that file to my local Splunk and experiment with different Splunk queries. So, I added the above log entry to a .txt file and uploaded it to Splunk using the file uploader, choosing all the default options. However, I cannot find that data in search to query it. Can you please help?
Follow the standard Add Data steps to add the data, then run a search query.
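If the events still don't appear, check the time range picker first: the sample entry is from 2016-12-06, so the default "Last 24 hours" window will hide it; search over "All time". As a hedged sketch, you can also ingest and query the file from the Splunk CLI (assuming a default install under /opt/splunk; the paths and sourcetype name are placeholders):
# One-time ingest of the file, then a search across all time
/opt/splunk/bin/splunk add oneshot /path/to/mylog.txt -index main -sourcetype mylog
/opt/splunk/bin/splunk search 'index=main sourcetype=mylog earliest=0'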
I have an issue with a backup procedure using rdiff-backup. Let's say that I want to make a backup in the following way:
rdiff-backup foo /media/bar
When I do that, all the contents of "foo" are stored in "/media/bar/", but not "foo" itself. This is a problem for me because I want to back up multiple directories with the --include-globbing-filelist include-list option; if I do this, the contents of all the folders listed in include-list will be mixed together in the destination folder.
With rsync, if I do:
rsync -a foo /media/bar
"foo" and all of his contents will be transferred to "/media/bar" instead of his contents only.
So, is there any way to backup "foo" instead of only his contents?
Method 1
rdiff-backup foo /media/bar/foo
... just saying. ;-)
Method 2
Create an include list like this:
/home/me/foo
/home/me/other-foo
- /**
Then backup like this:
rdiff-backup --include-globbing-filelist include-list / /media/bar
In other words, tell rdiff-backup to backup everything, but then exclude everything you don't explicitly mention with the catch-all - /** rule at the foot of the include file.
My example starts at the root directory, but you could start at any level you like:
/foo
/other-foo
- /**
and
rdiff-backup --include-globbing-filelist include-list /home/me /media/bar
I like to start at root because a) it gives me maximum freedom to include and exclude stuff later, and b) I'm backing up some files from /etc anyway.
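Putting the pieces together as a runnable sketch (using the /home/me paths from the first example):
# Write the include list, then back up from / so full paths are kept under /media/bar
cat > include-list <<'EOF'
/home/me/foo
/home/me/other-foo
- /**
EOF
rdiff-backup --include-globbing-filelist include-list / /media/bar
# foo ends up at /media/bar/home/me/foo rather than losing its top-level directory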
First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/A/runAll.sql
scripts/A/A1.sql
scripts/A/A2.sql
scripts/B/runAll.sql
scripts/B/B1.sql
scripts/B/B2.sql
I would like to have a file scripts/runEverything.sql something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means a path relative to the calling script in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
#'&1/A/runAll.sql' '&1/A'
#'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your default scripts folder. In SQL Developer, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a script folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer, click File > Open and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script".
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
#'&path/A/runAll.sql' '&path/A'
#'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also redefine path, this should work; you just need to choose a unique name if there is a risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past...
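If you want to sanity-check the &path logic outside SQL Developer, the same scripts run in command-line SQL*Plus, where arguments after the script name become &1, &2, and so on (a sketch; the connect string is a placeholder):
# Pass the scripts root as &1 to the master script
sqlplus user/password@db @/path/to/scripts/runEverything.sql /path/to/scripts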
You need to provide the path of the file as a string; put the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql (parameter)
@A2.sql (parameter)
This is not an absolute or relative path issue. It's an SQL interpreter issue: by default it looks for files that have the .sql extension.
Please make sure the file name ends in .sql.
For example, if the workspace has a file named "A", rename it to "A.sql".
Is there any app for Mac to split SQL files, or even a script?
I have a large file which I have to upload to a host that doesn't support files over 8 MB.
(I don't have SSH access.)
You can use this: http://www.ozerov.de/bigdump/
Or use this command to split the SQL file:
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
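One caveat, since split -l cuts purely by line count: a chunk boundary can land in the middle of a multi-line INSERT. A quick sanity check (a sketch, using the output paths from the command above):
# The first line of each chunk should start a new statement
for f in ./mysqldump/dbpart-*; do echo "== $f"; head -n 1 "$f"; done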
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
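If you can edit config.inc.php over FTP, the relevant setting is $cfg['UploadDir']; files placed in that directory are read server-side by the Import tab, bypassing the upload limit. A sketch, with an assumed install path:
# Create the upload directory and enable it in phpMyAdmin's config
mkdir -p /path/to/phpmyadmin/upload
echo "\$cfg['UploadDir'] = './upload';" >> /path/to/phpmyadmin/config.inc.php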
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
This makes up to 99 files named 'db-partNN' from db.sql.
You can use "CREATE TABLE" or "INSERT INTO" instead of "# Dump of ..."
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!
I tried to clone an Oracle database server to another Oracle database server.
After I completed the cloning, when I tried connecting to the database by starting SQL*Plus,
I got the following errors:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/home/oracle/oradata/ccisv2/system01.dbf'
I found that while cloning, the control file of the original database (pointing at the original datafile locations) also got cloned.
On the new server the data files are at a different location, and that is not reflected in the control file, which is the reason for the error.
In short, I need to change the above path
/home/oracle/oradata/ccisv2/
to a new path
/home2/oracle/oradata/ccisv2/
I am not sure how I can change the control file and edit the path of the data file location.
Changing the location of the datafiles themselves is not possible, as I have too little space in
/home/oracle/oradata/..
Can someone help me with this one?
You'll need to mount the database (not open it) and re-create the controlfile, renaming the data files in the process (see the CREATE CONTROLFILE command):
STARTUP MOUNT;
CREATE CONTROLFILE REUSE SET DATABASE "ORCL" RESETLOGS
MAXLOGFILES NN
MAXLOGMEMBERS N
MAXDATAFILES 254
MAXINSTANCES 1
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 '/home2/oracle/oradata/ccisv2/REDO01.LOG' SIZE 56M,
GROUP 2 '/home2/oracle/oradata/ccisv2/REDO02.LOG' SIZE 56M,
GROUP 3 '/home2/oracle/oradata/ccisv2/REDO03.LOG' SIZE 56M
DATAFILE
'/home2/oracle/oradata/ccisv2/SYSTEM.DBF',
'/home2/oracle/oradata/ccisv2/USERS.DBF',
'/home2/oracle/oradata/ccisv2/sysaux.DBF',
'/home2/oracle/oradata/ccisv2/TOOLS.DBF',
etc...
CHARACTER SET WE8ISO8859P1;
ALTER DATABASE OPEN RESETLOGS;
QUIT;
All of your database files need to be re-identified in the controlfile with their new location.
Easiest is to just rename the datafiles to the new locations:
startup mount;
alter database rename file '/home/oracle/oradata/ccisv2/system01.dbf' to '/home2/oracle/oradata/ccisv2/system01.dbf';
and do this for all your files.
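Rather than typing one ALTER per file, you can have the database generate the statements while mounted (a sketch, assuming the only change is the /home -> /home2 prefix; repeat with v$tempfile and v$logfile if those paths moved too):
# Print one RENAME per datafile; review the output, then run it
sqlplus -S "/ as sysdba" <<'EOF'
SET HEADING OFF PAGESIZE 0 FEEDBACK OFF LINESIZE 300
SELECT 'ALTER DATABASE RENAME FILE ''' || name || ''' TO '''
       || REPLACE(name, '/home/oracle/', '/home2/oracle/') || ''';'
FROM v$datafile;
EXIT
EOF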
Normally we would use RMAN DUPLICATE with file name conversion (DB_FILE_NAME_CONVERT) to do this for us.
Re-creating the controlfile is also an option, but renaming the files is easier.
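For reference, the RMAN route remaps paths during the duplicate itself; a hedged sketch with placeholder connect strings (redo logs can be remapped similarly via the LOG_FILE_NAME_CONVERT initialization parameter):
rman TARGET sys/password@source AUXILIARY sys/password@clone <<'EOF'
DUPLICATE TARGET DATABASE TO ccisv2
  DB_FILE_NAME_CONVERT ('/home/oracle/oradata/ccisv2/', '/home2/oracle/oradata/ccisv2/');
EOF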