My current task is to create a .bat file that can manually create an Oracle database, so that the Database Configuration Assistant (DBCA) is no longer necessary.
I am following this guide.
I am stuck at "Creating the database". Upon typing:
SQL> create database ORA10
I do not get the expected output as described in the guide:
SQL>create database ora10
logfile group 1 ('D:\oracle\databases\ora10\redo1.log') size 10M,
group 2 ('D:\oracle\databases\ora10\redo2.log') size 10M,
group 3 ('D:\oracle\databases\ora10\redo3.log') size 10M
character set WE8ISO8859P1
national character set utf8
datafile 'D:\oracle\databases\ora10\system.dbf'
size 50M
autoextend on
next 10M maxsize unlimited
extent management local
sysaux datafile 'D:\oracle\databases\ora10\sysaux.dbf'
size 10M
autoextend on
next 10M
maxsize unlimited
undo tablespace undo
datafile 'D:\oracle\databases\ora10\undo.dbf'
size 10M
default temporary tablespace temp
tempfile 'D:\oracle\databases\ora10\temp.dbf'
size 10M;
Instead I get a bunch of numbered input requests:
SQL> create database ORA10
2 and
3 it
4 doesn't
5 seem
6 to
7 stop
8 asking
9 for
10 inputs
Other sources/guides I've googled look similar to the aforementioned guide. As far as I can tell (I might not be using the right keywords), my output is not supposed to happen, and I am unable to identify what is going on here.
Please note that I don't actually know much about SQL or about using the command prompt. My background is limited to classroom HTML/CSS/Java/Python. I have been dared to complete a number of programming-related tasks within a certain period of time (a week) without any instruction or preparation, though I am allowed to use the internet for assistance. So far so good, until now.
Any assistance will be appreciated, thank you in advance.
All the lines in your listing, from create database ORA10 down to size 10M; (including that final semicolon), are not output; they are all part of one single statement. The semicolon terminates the statement and tells the command interpreter to execute what you've written. That is why it keeps asking for input: it will continue until you tell it you're done, with the semicolon.
If you simply want to create a database without specifying any other options, all you need to add is the semicolon. The following will create a database named "ORA10" in the default configuration:
CREATE DATABASE ORA10;
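For reference, here is a minimal sketch of how that interaction might look (assuming the instance has already been started in NOMOUNT mode with a valid parameter file). The numbered prompts are SQL*Plus continuation lines, not errors; nothing runs until the terminating semicolon (or a / on a line by itself) is entered:
SQL> create database ORA10
  2  ;

Database created.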
Related
Currently testing a cluster, and when using CREATE TABLE AS the resulting managed table ends up being one file of ~1.2 GB, while the base table the query is created from consists of many small files. The SELECT portion runs fast, but then two reducers run to create that one file, which takes 75% of the run time.
Additional testing:
1) If CREATE EXTERNAL TABLE AS is used, the query runs very fast and there is no merge-files step involved.
2) Also, the merging doesn't appear to occur with version HDP 3.0.1.
You can set hive.exec.reducers.bytes.per.reducer=<number> to let Hive decide the number of reducers based on reducer input size (the default value is 1 GB, i.e. 1000000000 bytes). You can refer to the links provided by @leftjoin for more details about this property and how to fine-tune it for your needs.
Another option you can try is to change the following properties:
set mapreduce.job.reduces=<number>
set hive.exec.reducers.max=<number>
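For illustration, a minimal sketch of how these settings might be combined ahead of the CTAS; the table names and the 128 MB figure are placeholders, not recommendations:
-- lower the per-reducer input size so Hive plans more reducers
-- (~128 MB each here, instead of the 1 GB default mentioned above)
set hive.exec.reducers.bytes.per.reducer=134217728;

-- optionally cap how many reducers Hive may schedule
set hive.exec.reducers.max=64;

-- target_table and source_table are hypothetical names
CREATE TABLE target_table AS
SELECT *
FROM source_table;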
The Issue
I've been running a particularly large query, generating millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full, not Express).
The database being used is set to the SIMPLE recovery model, and there is space available (around 100 GB) on the drive that the file is sitting on.
The transaction log file is set for file growth of 250 MB, up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it reaches 4,729 MB.
When the issue first appeared the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to allow this transaction log file to grow further by the same amount as the reduction on the other files.
I've now run out of ideas of how to solve this. If anyone has any suggestion or insight into what to do it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this as follows:
Right-click the DB > Tasks > Shrink > Files.
Change the type to "Log".
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink file" to 0. Moving forward, you can also release unused log space using CHECKPOINT; this may be something to include as a first step before your query runs.
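As a T-SQL sketch of the same ideas (MyDb and MyDb_log are placeholder names; check sys.database_files for your actual logical file names):
-- report used/free percentage for every database's log file
DBCC SQLPERF(LOGSPACE);

-- pre-size the log once instead of relying on 250 MB auto-growth steps
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 20480MB);

-- under SIMPLE recovery, a manual checkpoint lets the inactive
-- portion of the log be reused before the big query starts
USE MyDb;
CHECKPOINT;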
I'd like to write an SQL script to create a database, parametrised so that I can reuse it for future databases. As a base I'd like to use a script from the Oracle documentation:
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY sys_password
USER SYSTEM IDENTIFIED BY system_password
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
GROUP 2 ('/u01/app/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
GROUP 3 ('/u01/app/oracle/oradata/mynewdb/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/app/oracle/oradata/mynewdb/system01.dbf' SIZE 325M REUSE
SYSAUX DATAFILE '/u01/app/oracle/oradata/mynewdb/sysaux01.dbf' SIZE 325M REUSE
DEFAULT TABLESPACE users
DATAFILE '/u01/app/oracle/oradata/mynewdb/users01.dbf'
SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/app/oracle/oradata/mynewdb/temp01.dbf'
SIZE 20M REUSE
UNDO TABLESPACE undotbs
DATAFILE '/u01/app/oracle/oradata/mynewdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
To run Oracle SQL*Plus one has to set some environment variables, like ORACLE_SID.
How can I access this ORACLE_SID from within the script? E.g. I'd like to replace CREATE DATABASE mynewdb with CREATE DATABASE ORACLE_SID.
In my case, '/u01/app/oracle/oradata/mynewdb/redo01.log' corresponds to '/oradata/ORACLE_SID/redo01.log' -> how can I embed this variable in the statement?
I hope my question is clear enough. Any hints appreciated.
Alex has given you the best practical help, but for anyone interested or for reference, see below.
If you ever need to reference a shell environment variable in SQL*Plus, the method I use is to run a script that translates shell variables into SQL*Plus DEFINE statements, e.g.:
cat shell2define.sh
set | grep '=' | sed 's/^/define /' > shell.sql
Then in sqlplus:
SQL> ! ./shell2define.sh
SQL> @shell.sql
SQL> define
Now you can refer to the shell variables as you would any SQL*Plus DEFINEd variable, e.g. &ORACLE_SID. The final define command just lists all the defined variables. Extend the script to remove or handle special variables like $_ and ones containing quotes, or just have it include only the variables you require. Also don't forget the use of $ORACLE_HOME/sqlplus/admin/glogin.sql to invoke this automatically, should it be required every time.
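Applied back to the question, a sketch of the parametrised statement might then look like this (abbreviated, and using the '/oradata/ORACLE_SID/...' layout from the question):
-- SQL*Plus substitutes &ORACLE_SID before the statement reaches the server
CREATE DATABASE &ORACLE_SID
LOGFILE GROUP 1 ('/oradata/&ORACLE_SID/redo01.log') SIZE 100M,
        GROUP 2 ('/oradata/&ORACLE_SID/redo02.log') SIZE 100M,
        GROUP 3 ('/oradata/&ORACLE_SID/redo03.log') SIZE 100M
DATAFILE '/oradata/&ORACLE_SID/system01.dbf' SIZE 325M REUSE;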
I'm facing a strange problem with a staging database used by my ETL (to update rows).
Only the rows to update are stored in the database; then a script is executed to update the destination database. At the end of the process, it truncates the staging database.
That removes all the data; however, the allocated size of my database grows with every execution of my SSIS package. So, is there a way to reduce the allocated size and to limit the maximum allocated size? In SQL Server Management Studio there is a wizard to reduce the data size and the database size.
Is there an equivalent command in T-SQL?
Thanks!
Don't.
If your staging needs a database of size X, then size the database at X and leave it so. Attempting to shrink it is misguided at best. By shrinking it, all you achieve is to invite an opportunity for your ETL to fail tomorrow because it runs out of required disk space. Do not fool yourself with 'I only need space X for the ETL'. You need space X, period.
I'm not even going to go into all the performance problems related to shrink and re-growth.
There is a command in T-SQL.
Look here: http://msdn.microsoft.com/de-de/library/ms189493.aspx
DBCC SHRINKFILE (Transact-SQL): Shrinks the size of the specified data or log file for the current database, or empties a file by moving the data from the specified file to other files in the same filegroup, allowing the file to be removed from the database. You can shrink a file to a size that is less than the size specified when it was created. This resets the minimum file size to the new value.
But take the answer from Remus into consideration.
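For illustration, a hedged sketch of its use (StagingDb and StagingDb_data are placeholder names; query sys.database_files for the real logical names first):
USE StagingDb;

-- list logical file names and current sizes (size is stored in 8 KB pages)
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- shrink the data file to roughly 1 GB (the target size is given in MB)
DBCC SHRINKFILE (StagingDb_data, 1024);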
I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the data transfer is to take the old data offline.
After some time it throws the error "The LOG FILE FOR DATABASE 'tempdb' IS FULL."
My tempdb and templog are placed on a drive (other than the C drive) which has around 200 GB free. Also, my tempdb size is set to 25 GB in the database settings.
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Please let me know about any other factors. I cannot experiment much, as I am working on a production database, so please let me know whether these changes will have some other impact.
Thanks in Advance.
You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution:
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Go ahead.
My guess is that you're trying to move all of the data in a single batch; can you break it up into smaller batches and commit fewer rows per insert? Also, as noted in the comments, you may be able to set your destination database to the SIMPLE or BULK_LOGGED recovery model.
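A rough sketch of that batching idea (the table and column names are hypothetical; tune the batch size to what your log can absorb):
-- move rows in batches of 50,000 so each implicit transaction stays small
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.ArchiveTable (Id, Payload)
    SELECT TOP (50000) s.Id, s.Payload
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ArchiveTable AS a WHERE a.Id = s.Id);

    SET @rows = @@ROWCOUNT;  -- stop once nothing is left to copy
END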
Why are you using the log file at all? Copy your data (data and log file), then set the recovery model to SIMPLE and run the transfer again.
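If you do switch the recovery model, that part is a one-liner (MyDb is a placeholder name):
-- switch the destination database to the SIMPLE recovery model
ALTER DATABASE MyDb SET RECOVERY SIMPLE;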