I have a Solaris machine with Oracle Release 10.2.0.4.0.
I have a problem - I can't start up Oracle.
When I try to start Oracle with the command:
hagrp -online oracle1 -sys machine1a
I get:
oracle1 machine1a Y N PARTIAL|FAULTED
Note: hagrp -clear oracle1 -sys machine1a does not help.
After checking and debugging the problem, I found that Oracle does not start
because of a wrong parameter - shared_pool_size (it was set to 0, while it needs to be 2G).
So I want to set shared_pool_size to 2G, but I can't because Oracle is down!
My question: how can I set shared_pool_size to 2G even though Oracle is down? Is it possible?
su - oracle -c "sqlplus / as sysdba"
SQL*Plus: Release 10.2.0.4.0 - Production on Mon Mar 5 12:10:44 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> alter system set shared_pool_size=2G scope=BOTH;
init.ora (from my machine):
grep shared_pool_size /opt/oracle/v10.2.0/srvm/admin/init.ora
#shared_pool_size = 52428800 # INITIAL
shared_pool_size = 67108864 # datewarehouse, transaction processing
If you were starting from an old-style init.ora parameter file you could just change it in that. The scope in your alter system suggests you're using an spfile. You can startup nomount to make parameter changes, then shutdown again and try to start normally.
Are you sure you're connecting to the right database though? The output from your sqlplus command doesn't say connected to an idle instance, so it looks like whatever you're connected to is already running. Check where you are before running anything, and note any parameters you change first (e.g. with output from show parameters).
Based on conversation in chat...
You can create a temporary pfile, edit that and test starting up with it, and then when happy with the new configuration recreate your spfile, and finally restart from that. The steps are:
create pfile='/tmp/init.ora' from spfile;
-- edit /tmp/init.ora and change parms as needed
startup pfile='/tmp/init.ora';
-- repeat as needed until successful start
create spfile='/path/to/spfile.ora' from pfile='/tmp/init.ora';
shutdown
-- a plain startup will pick up the spfile if it is in the default location
startup
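Since the database is down, the pfile edit itself happens outside SQL*Plus. A minimal sketch of that edit step with sed, assuming a simple key=value pfile - the file contents and paths below are made up for illustration, not taken from the real spfile:

```shell
#!/bin/sh
# Sketch: fix shared_pool_size in a copied pfile before restarting.
pfile=$(mktemp)
cat > "$pfile" <<'EOF'
*.sga_max_size=4G
*.shared_pool_size=0
EOF

# Replace the bad value with 2G, the value wanted in the question
sed -i 's/^\*\.shared_pool_size=.*/*.shared_pool_size=2G/' "$pfile"

# Verify before running: startup pfile='...'
grep shared_pool_size "$pfile"
```

After verifying the value, continue with the startup pfile / create spfile steps above.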
I would like to write an AUTONOMOUS stored procedure (native). I'm using the DB2 database (V11.01)
CREATE PROCEDURE SP_LOG (IN p_field1 char(2)
,IN p_field2 varchar(50)
,IN p_field3 varchar(50)
,IN p_field4 varchar(3926) )
VERSION V1
ISOLATION LEVEL CS
WLM ENVIRONMENT FOR DEBUG MODE WLMENV1
RESULT SETS 0
LANGUAGE SQL
ALLOW DEBUG MODE
AUTONOMOUS
BEGIN
...
...
END
I'm using IBM DATA STUDIO 4.1.1 and I get the following error:
Stored procedure creation returned:
SQLCODE: -104, SQLSTATE: 42601. XXXXX.SP_LOG: 11: ILLEGAL SYMBOL
"". SOME SYMBOLS THAT MIGHT BE LEGAL ARE: FOR.
SQLCODE=-104, SQLSTATE=42601, DRIVER=4.18.60 XXXXX.SP_LOG -
Deployment for debug failed. XXXXX.SP_LOG - Rollback
completed successfully.
If you have any recommendations, I'd love to hear them!
Thanks :)
Good news, I found the issue.
This is a bug in IBM Data Studio (IT26018 - AUTONOMOUS KEYWORD IS NOT CONSIDERED AS A VALID KEYWORD FOR DB2 ZOSV11 AND V12 IN DS.4.1.3).
Here's how I solved it:
I downloaded the new version (IBM DATA STUDIO 4.1.3) - Link
Installed the new version using IBM INSTALLATION MANAGER
I downloaded the FIX - Link
Installed the FIX (instructions are included in the .ZIP file; open the "Hotfix Guide.pdf" file)
I am trying to set up an Oracle WebLogic Datasource for my job. But every time I try to connect to the database, I get this error:
Connection test failed.
Message icon - Error Connection property: format error: Property is 'v$session.osuser' and value is 'Yann (Intern)'
I tried changing my username to "YannIntern" to remove the special characters, but the error is still the same and it also happens when I try to use Oracle SQL Developer. How can I stop the format error?
Java reads this from the user.name property, which defaults to the OS username. However, this can be overridden by setting -Duser.name=Yann
Here's an example in sqlcl:
SQL> select sys_context('userenv', 'os_user') from dual;
SYS_CONTEXT('USERENV','OS_USER')
------------------------------------
klrice
sqlcl has a short command to set Java properties (it is equivalent to passing a -D property):
SQL> set property user.name kris
Setting user.name to kris (klrice)
SQL> #connect-klrice
Connected.
SQL> select sys_context('userenv', 'os_user') from dual;
SYS_CONTEXT('USERENV','OS_USER')
----------------------------------
kris
SQL>
The same happened to me - I have parentheses in my computer's name. I spent a lot of time researching, but finally I have the solution (I created an account just to answer this, haha). To solve this you have to:
-Go to the folder location where you installed SQL Developer.
-Then go to: \ide\bin\ide.conf
-At the end add the following line:
# Custom VM Option
AddVMOption -Duser.name=<YourUsername>
-Save the file, and restart SQL Developer, you will be able to connect and avoid that error message.
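The same edit can be scripted instead of done by hand. A sketch that appends the option to a copy of the config file - the file contents and the username Yann are stand-ins, and you should back up the real ide.conf first:

```shell
#!/bin/sh
# Append the VM option to a scratch copy of ide.conf.
conf=$(mktemp)
printf '%s\n' 'AddVMOption -Xmx800m' > "$conf"   # hypothetical existing content

printf '%s\n' '# Custom VM Option' 'AddVMOption -Duser.name=Yann' >> "$conf"

# Show the option that SQL Developer would now pick up
tail -n 1 "$conf"
```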
I have an assignment to run a command on Tandem at periodic intervals.
I've worked on Windows and Unix before and know those OSes have their own task schedulers, but I cannot find one on Tandem.
I asked HPE support; they mentioned that I must buy a tool named "NetBatch" to create schedules.
For now I've come up with a solution: create a job that runs commands like this:
1. Run command
2. Wait
3. Run command
4. Wait
Does anyone here have experience with scheduled tasks on Tandem? Please advise.
Thanks
You can create a TACL script with a loop; add all the commands you want inside the loop, with #DELAY at the end of the loop so that the script waits until the next iteration.
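For comparison, here is the same run-wait-repeat shape as a POSIX shell sketch (the command, interval, and count are placeholders; on Tandem itself you would express this loop in TACL with #DELAY):

```shell
#!/bin/sh
# Run a command a fixed number of times, sleeping between iterations -
# the same shape as a TACL loop that ends with #DELAY.
run_periodically() {
  cmd=$1; interval=$2; count=$3
  i=0
  while [ "$i" -lt "$count" ]; do
    eval "$cmd"
    i=$((i + 1))
    # Only sleep between runs, not after the last one
    if [ "$i" -lt "$count" ]; then sleep "$interval"; fi
  done
}

run_periodically 'echo tick' 1 3   # prints "tick" three times
```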
To add persistence to your script, you can configure it under Pathway as below:
RESET SERVER
SET SERVER MAXSERVERS 1
SET SERVER NUMSTATIC 1
SET SERVER PROGRAM $SYSTEM.SYS01.TACL
SET SERVER TMF OFF
SET SERVER IN $receive
SET SERVER ASSIGN TACLCSTM, $vol.subvol.taclin ==The script that you want to execute
SET SERVER OUT $vol.subvol.uroutfil ==Output printed from your script
ADD SERVER mytacl
You could switch from Guardian to OSS and use crontab on Tandem OSS.
Yes, you can insert a wait between two TACL (Tandem Advanced Command Language) commands by entering some other command, or by using something from history, like opening files.
dsply pr, prc310
cmprfile -28, today, range
cmprtime 00,23
dsply pr, prc310, diff
NOTE: the last command executes with a time difference of about 3 seconds.
This will delay the commands from being executed simultaneously.
I also faced the same situation and used the hack above to get it sorted out.
How do I populate the Transaction Processing Performance Council's TPC-DS database for SQL Server? I have downloaded the TPC-DS tool but there are few tutorials about how to use it.
In case you are using Windows, you need Visual Studio 2005 or later. Unzip dsgen; in the tools folder there is a dsgen2.sln file. Open it with Visual Studio and build the project - it will generate the tables for you. I've tried that, and I loaded the tables into SQL Server manually.
I've just succeeded in generating these queries.
Here are some tips; they may not be the best, but they are useful.
cp ${...}/query_templates/* ${...}/tools/
add define _END = ""; to each query.tpl
${...}/tools/dsqgen -INPUT templates.lst -OUTPUT_DIR /home/query99/
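The second tip (adding define _END = ""; to each query.tpl) can be scripted rather than edited by hand. A sketch, using a throwaway template directory and file so it is self-contained:

```shell
#!/bin/sh
# Append the _END define to every .tpl that doesn't already have one.
tpl_dir=$(mktemp -d)
printf '%s\n' '[template body]' > "$tpl_dir/query1.tpl"   # stand-in template

for tpl in "$tpl_dir"/*.tpl; do
  grep -q '_END' "$tpl" || printf '%s\n' 'define _END = "";' >> "$tpl"
done

# The define is now the last line of each template
tail -n 1 "$tpl_dir/query1.tpl"
```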
Let's describe the basic steps:
Before going to the next steps, double-check that the required TPC-DS kit has not already been prepared for your DB
Download TPC-DS Tools
Build Tools as described in 'v2.11.0rc2\tools\How_To_Guide-DS-V2.0.0.docx' (I used VS2015)
Create DB
Take the DB schema described in tpcds.sql and tpcds_ri.sql (located in the 'v2.11.0rc2\tools' folder) and adapt it to your DB if required
Generate the data to be stored in the database
# Windows
dsdgen.exe /scale 1 /dir .\tmp /suffix _001.dat
# Linux
dsdgen -scale 1 -dir /tmp -suffix _001.dat
Upload data to DB
# example for ClickHouse
database_name=tpcds
ch_password=12345
for file_fullpath in /tmp/tpc-ds/*.dat; do
filename=$(echo ${file_fullpath##*/})
tablename=$(echo ${filename%_*})
echo " - $(date +"%T"): start processing $file_fullpath (table: $tablename)"
query="INSERT INTO $database_name.$tablename FORMAT CSV"
cat $file_fullpath | clickhouse-client --format_csv_delimiter="|" --query="$query" --password $ch_password
done
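The two parameter expansions in the loop above do the filename-to-table mapping. In isolation, with a made-up file name of the shape dsdgen produces:

```shell
#!/bin/sh
# ${var##*/} strips everything up to the last '/', leaving the file name;
# ${var%_*}  strips the last '_...' suffix, leaving the table name.
file_fullpath=/tmp/tpc-ds/store_sales_001.dat   # hypothetical data file
filename=${file_fullpath##*/}    # store_sales_001.dat
tablename=${filename%_*}         # store_sales
echo "$tablename"
```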
Generate queries
# Windows
set tmpl_lst_path="..\query_templates\templates.lst"
set tmpl_dir="..\query_templates"
set dialect_path="..\..\clickhouse-dialect"
set result_dir="..\queries"
set tmpl_name="query1.tpl"
dsqgen /input %tmpl_lst_path% /directory %tmpl_dir% /dialect %dialect_path% /output_dir %result_dir% /scale 1 /verbose y /template %tmpl_name%
# Linux
# see for example https://github.com/pingcap/tidb-bench/blob/master/tpcds/genquery.sh
To fix the error 'Substitution .. is used before being initialized' follow this fix.
I want to store statistics on the UNDO tablespace in one table every 6 hours.
I've created simple table:
CREATE TABLE SYS.TB_UNDOSTAT (
MAX_UNDOBLKS NUMBER,
MAX_QUERY_LENGTH NUMBER,
MAX_QUERY_ID VARCHAR2(13),
DATE_OF_STAT DATE,
DMY_OF_STAT VARCHAR2(30),
TIME_OF_STAT VARCHAR2(30));
After that I've created Oracle External Job:
BEGIN
DBMS_SCHEDULER.CREATE_JOB
(job_name=>'ACCUMULATE_UNDOSTAT',
repeat_interval =>'FREQ=DAILY; BYHOUR=05,11,17,23',
job_type=>'EXECUTABLE',
job_action=>'/home/oracle/scripts/UNDOSTAT/accumulate_undostat_111.bsh',
enabled =>TRUE,
auto_drop=>FALSE,
comments=>'Take accumulate statistics from V$UNDOSTAT to
SYS.TB_UNDOSTAT one time through 6 hours On 111 Server'
);
END;
Content of the accumulate_undostat_111.bsh file is:
#!/bin/bash
export ORACLE_HOME=/u01/home/oracle/product/11.2.0/db_1
export ORACLE_SID=parustest
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus -s << EOF
/ as sysdba
INSERT INTO FGA_OWNER.TB_UNDOSTAT (MAX_UNDOBLKS, MAX_QUERY_LENGTH,
MAX_QUERY_ID, DATE_OF_STAT, DMY_OF_STAT, TIME_OF_STAT)
SELECT max(undoblks), max(maxquerylen), maxqueryid, sysdate, to_char(sysdate,'DD.MM.YYYY'),
to_char(sysdate,'HH24:MI:SS') FROM SYS.V_$UNDOSTAT GROUP BY maxqueryid;
COMMIT;
exit;
EOF
exit 0
The job was created without any problems, and all the necessary permissions have been granted.
But when I debug my shell script, I have some problems:
[oracle@parustest111 UNDOSTAT]$ bash -o xtrace accumulate_undostat_111.bsh
+ export ORACLE_HOME=/u01/home/oracle/product/11.2.0/db_1
+ ORACLE_HOME=/u01/home/oracle/product/11.2.0/db_1
+ export ORACLE_SID=parustest
+ ORACLE_SID=parustest
+ export PATH=/u01/home/oracle/product/11.2.0/db_1/bin:/u01/home/oracle/product/11.2.0/db_1/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
+ PATH=/u01/home/oracle/product/11.2.0/db_1/bin:/u01/home/oracle/product/11.2.0/db_1/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
+ sqlplus -s
to_char(sysdate,'HH24:MI:SS') FROM V_ GROUP BY maxqueryid
*
ERROR at line 4:
ORA-00942: table or view does not exist
Elapsed: 00:00:00.00
Commit complete.
Elapsed: 00:00:00.00
+ exit 0
[oracle@parustest111 UNDOSTAT]$
Can someone explain and help me?
Thank you!
In a Unix shell, $ is the start of a variable, so in your statement
FROM SYS.V_$UNDOSTAT
the shell interprets $UNDOSTAT as a Unix variable called UNDOSTAT. To prevent this, you have to escape the statement:
FROM SYS.V_\$UNDOSTAT
e.g.:
$ cat foo.bash
#!/bin/bash
sqlplus /<<EOF
select count(*) from v$session;
EOF
$ ./foo.bash
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jan 28 12:56:43 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> select count(*) from v
*
ERROR at line 1:
ORA-00942: table or view does not exist
vs:
$ cat foo2.bash
#!/bin/bash
sqlplus /<<EOF
select count(*) from v\$session;
EOF
$ ./foo2.bash
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jan 28 12:56:49 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL>
COUNT(*)
----------
184
Better still, though, if I were you, I'd keep the SQL in a separate file and just call it:
sqlplus -s << EOF
/ as sysdba
@yoursql.sql
COMMIT;
exit;
EOF
where the yoursql.sql file just has all your SQL. No need to worry about escaping anything then.
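One more alternative to escaping, worth noting: quoting the here-document delimiter (<<'EOF') turns off all shell expansion inside the block, so v$session can be written as-is. That only works when the script needs no shell variables inside the SQL. A sketch, with cat standing in for sqlplus so it runs anywhere:

```shell
#!/bin/sh
# With a quoted delimiter, the shell passes the text through verbatim:
# $session is NOT treated as a shell variable here.
cat <<'EOF'
select count(*) from v$session;
EOF
```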