A colleague gave me some code to run. I need to set the archive log location to a directory inside db_recovery_file_dest. I am using a VirtualBox VM called "Oracle Developer Days".
I'm trying to run the following command:
ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both;
But it's generating this error:
SQL> ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_1" not allowed with SPFILE
SQL>
What is the SPFILE?
Also, could the problem be that I'm using a virtual machine?
The correct syntax is ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/home' SCOPE=both; — the LOG_ARCHIVE_DEST_n parameters need a LOCATION= (or SERVICE=) prefix. See the Oracle documentation on LOG_ARCHIVE_DEST_n for details.
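And since you said the goal is a directory inside db_recovery_file_dest, a sketch of what I'd try instead of a literal path (treat this as an illustration, not a drop-in):
-- Check where the recovery area is
SHOW PARAMETER db_recovery_file_dest
-- Point archiving at the recovery area itself
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=both;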
You shouldn't be setting it to /home. I hope that's just a simplification you've made for posting here.
"What is the SPFILE ?"
You need to understand what you're doing. Please read the documentation and learn some basic concepts about the Oracle database and being a DBA.
Which Oracle version are you using?
SPFILE stands for Server Parameter File (before the 9i release there was only the text-based PFILE). It contains the initialization parameters Oracle uses to set certain variables when the database is brought up.
You can use the following query to check where your server parameter file (SPFILE) is stored:
show parameter spfile
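If that query comes back with an empty value, the instance was started from a plain PFILE; a sketch of creating an SPFILE from it (assuming the default file locations) and restarting so it takes effect:
CREATE SPFILE FROM PFILE;
SHUTDOWN IMMEDIATE
STARTUP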
I am sending a query to Apache Drill from Apache Spark. I am getting the following error:
java.sql.SQLException: Failed to create prepared statement: PARSE
ERROR: Encountered "\"" at line 1, column 23.
When I traced it, I found that I need to write a custom SQL dialect. The problem is that I can't find any examples for PySpark; all the examples are for Scala or Java. Any help is highly appreciated!
Here is the PySpark code:
dataframe_mysql = spark.read.format("jdbc").option("url", "jdbc:drill:zk=ip:2181;schema=dfs").option("driver", "org.apache.drill.jdbc.Driver").option("dbtable", "dfs.`/user/titanic_data/test.csv`").load()
Looks like you have used a double quote in your SQL query (please share your SQL).
By default Drill uses the backtick (`) for quoting identifiers.
But you can change that by setting the system/session option (for example when you are already connected to Drill via JDBC), or you can specify it in the JDBC connection string. You can find more information here:
https://drill.apache.org/docs/lexical-structure/#identifier-quotes
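For example, from a session that is already connected to Drill (a sketch; note that the option name itself is quoted with backticks):
ALTER SESSION SET `planner.parser.quoting_identifiers` = '"';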
I navigated to the Drill Web UI and updated the planner.parser.quoting_identifiers option to ". Then I edited my query as below:
dataframe_mysql = spark.read.format("jdbc").option("url", "jdbc:drill:zk=ip:2181;schema=dfs;").option("driver","org.apache.drill.jdbc.Driver").option("dbtable","dfs.\"/user/titanic_data/test.csv\"").load()
And it worked like a charm!
I need some help, because I can't find any solution for my problems with DBD::Oracle.
First, this is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company.
The Apache web server was set up with a startup script that sets some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG).
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv).
We generally use the Perl module DBI with the driver DBD::Oracle to connect to our main database.
Before we create a new DBI instance we also set some Perl environment variables via %ENV: ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we are using DBI and DBD::Oracle. But now there are some new conditions:
New connection must run in parallel to the existing one
TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME.'/network/admin')
New database contents are provided by stored procedures, which we are fetching with DBD::Oracle and cursors (like explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures return object types and collection types containing attributes of Oracle type DATE
To get these dates in a readable format, we set a new environment variable, $ENV{NLS_DATE_FORMAT}
To ensure the date format, we additionally alter the session with ALTER SESSION SET nls_date_format ... (sketched below)
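For illustration, that statement looks something like this (the format mask here is just an example, not our real one):
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';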
Okay, this works fine, too. But only if we make a new connection on the console: the new TNS location is found by the script, the connection can be established, and fetching data from the procedures by cursor also works. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because the location of our new TNSNAMES.ORA (published via $ENV{TNS_ADMIN}) is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (if I create a dirty workaround for the first one) is that the date format published via $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor SELECT.
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects containing DATE attributes. In the Apache context, the format published by NLS_DATE_FORMAT is not applied to them. If I use a simplified form of the example like this:
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted correctly. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context, not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a mod_perl problem?
I am at my wit's end. Maybe someone out there has a solution ... and excuse my English :-) If you need further explanations, I will try to describe things more precisely.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured: this is because mod_perl assumes you might be running multiple threads, so it doesn't actually change the process-level environment when you change %ENV. External libraries (like the Oracle client) and child processes therefore never see the change.
You can work around it by first using the prefork MPM, so there are no threading issues, and then making changes to the environment using Env::C instead of the %ENV hash.
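A minimal sketch of that workaround (the path and format mask are placeholders, not your real values):
use Env::C;

# Set the variables at the C level so the Oracle client library,
# which reads the real process environment, actually sees them.
Env::C::setenv('TNS_ADMIN',       '/path/to/new/tns/dir', 1);
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS', 1);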
I am trying to connect to an OpenOffice Base database from Java and execute a query, and have not been able to.
These are the steps I followed:
1) Created a Database 'TestDB.odb' in OpenOffice, and a table 'Movies' with columns (ID, Name, Director)
2) Downloaded the hsqldb jar file and included it in the project build path
3) Used the following code to connect to it:
String file_name_prefix = "C:/Documents and Settings/327701/My Documents/TestDB.odb";
Connection con = null;
Class.forName("org.hsqldb.jdbcDriver");
con = DriverManager.getConnection("jdbc:hsqldb:file:" + file_name_prefix, "sa","");
Statement statement = con.createStatement();
String query1 = "SELECT * FROM \"Movies\"";
ResultSet rs = statement.executeQuery(query1);
Although I'm able to connect to the database, it throws the following exception when trying to execute the query:
org.hsqldb.HsqlException: user lacks privilege or object not found: Movies
I tried googling, but have not been able to resolve my problem. I'm stuck, and it would be great if someone could guide me on how to fix this issue.
You cannot connect to an .odb database directly. The database you have connected to is in fact a separate set of files with names such as TestDB.odb.script, etc.
Check http://user.services.openoffice.org/en/forum/viewtopic.php?f=83&t=17567 on how to use an HSQLDB database externally from OOo in server mode. You can connect to such databases with the HSQLDB jar.
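A minimal sketch of the Java side of that server-mode setup (the database alias xdb, host, and credentials are assumptions; adjust them to however you start the server):
import java.sql.*;

public class HsqlServerExample {
    public static void main(String[] args) throws Exception {
        // Assumes an HSQLDB server was started separately against the
        // extracted database files, for example:
        //   java -cp hsqldb.jar org.hsqldb.Server -database.0 file:TestDB -dbname.0 xdb
        Class.forName("org.hsqldb.jdbcDriver");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost/xdb", "sa", "");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM \"Movies\"")) {
            while (rs.next()) {
                System.out.println(rs.getString("Name")); // print one column per row
            }
        }
    }
}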
OLD thread.
I lost 2 days of my life until I changed the property:
spring.jpa.properties.hibernate.globally_quoted_identifiers = false
I was using MySQL before, and then I changed to HSQLDB in order to run some tests. I kinda copied and pasted this property without looking, and then, you know - Murphy's law ...
I hope it helps.
I just started working with an application that I inherited from someone else and I'm having some issues. The application is written in C# and runs in VS2010 against the 3.5 framework. I can't run the application on my machine to debug because it will not recognize the way they referenced their parameters when writing their DB queries.
For instance wherever they have a SQL or DB2 query it is written like this:
using (SqlCommand command = new SqlCommand(
"SELECT Field1 FROM Table1 WHERE FieldID=#FieldID", SQLconnection))
{
command.Parameters.AddWithValue("FieldID", 10000);
SqlDataReader reader = command.ExecuteReader();
...
If you will notice the "parameters.AddWithValue("FieldID", 10000);" statement does not include the "#" symbol from the original command text. When I run it on my machine I get an error message stating that the parameter "FieldID" could not be found.
I change this line:
command.Parameters.AddWithValue("FieldID", 10000);
To this:
command.Parameters.AddWithValue("#FieldID", 10000);
And all is well... until it hits the next SQL call and bombs out with the same error. Obviously this must be a setting within Visual Studio, but I can't find anything about it on the internet. Half the examples for SQL parameter addition are written including the "#" and the other half do not. Most likely I just don't know what to search for.
Last choice is to change every query over to use the "#" at the front of the parameter name, but this is the transportation and operations application used to manage the corporation's shipments, and it literally has thousands of parameters. It's hard to explain the ROI on your project when the answer to the director's question "How's progress?" happens to be "I've been hard at it for a week and I've almost started."
Has anyone run into this problem, or do you know how to turn this setting off so it can resolve the parameter names without the "#"?
Success! System.Data is automatically referenced whenever you create a .NET solution. I removed this reference and added it back to make sure that I had the latest version of this library, and that fixed the issue. I must have had an old version of this library that was originally pulled in... it's the only thing I can figure.
It's handled by the .NET Framework data providers, not Visual Studio.
It depends on the data source. Look here: Working with Parameter Placeholders
You can try working with the System.Data.Odbc provider and using the question mark (?) placeholder. In this case, don't forget to add the parameters in the same order they appear in the query.
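A minimal sketch of that positional style (the connection string is a placeholder):
using System.Data.Odbc;
...
using (OdbcConnection connection = new OdbcConnection(connectionString))
{
    connection.Open();
    using (OdbcCommand command = new OdbcCommand(
        "SELECT Field1 FROM Table1 WHERE FieldID = ?", connection))
    {
        // ODBC binds parameters by position, not by name, so the name
        // passed here is only a label; the order must match the query.
        command.Parameters.AddWithValue("@FieldID", 10000);
        using (OdbcDataReader reader = command.ExecuteReader())
        {
            // read rows as usual...
        }
    }
}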
We've got a really confusing problem. We're trying to test a SQL bulk load using a little app we've written that passes in the data file XML, the schema, and the SQL database connection string.
It's a very straightforward app; here's the main part of the code:
SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class objBL = new SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class();
objBL.ConnectionString = "provider=sqloledb;Data Source=SERVER\\SERVER; Database=Main;User Id=Username;Password=password;";
objBL.BulkLoad = true;
objBL.CheckConstraints = true;
objBL.ErrorLogFile = "error.xml";
objBL.KeepIdentity = false;
objBL.Execute("schema.xml", "data.xml");
As you can see, it's very simple but we're getting the following error from the library we're passing this stuff to: Interop.SQLXMLBULKLOADLib.dll.
The message reads:
Failure: Attempted to read or write protected memory. This is often an indication that other memory has been corrupted
We have no idea what's causing it or what it even means.
Before this we first had an error because SQLXML 4.0 wasn't installed, so that was easy to fix. Then there was an error because it couldn't connect to the database (wrong connection string) - fixed. Now there's this, and we are just baffled.
Thanks for any help. We're really scratching our heads!
I am not familiar with this particular utility (Interop.SQLXMLBULKLOADLib.dll), but have you checked that your XML validates against its schema (.xsd) file? Perhaps the DLL has issues loading the XML data file into memory structures if it is invalid.
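If you want to rule that out quickly, a sketch of validating the data file against the schema in C# (the file names follow your snippet; this is only meaningful to the extent the mapping schema is an ordinary XSD):
using System;
using System.Xml;
using System.Xml.Schema;
...
XmlReaderSettings settings = new XmlReaderSettings();
settings.ValidationType = ValidationType.Schema;
settings.Schemas.Add(null, "schema.xml"); // the mapping schema file
settings.ValidationEventHandler += (sender, e) =>
    Console.WriteLine("{0}: {1}", e.Severity, e.Message);

using (XmlReader reader = XmlReader.Create("data.xml", settings))
{
    while (reader.Read()) { } // any validation errors fire the handler above
}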
I'm trying to understand your problem, but I still have some doubts about it.
If you have time, try the link below; I think it will definitely be useful for you:
link text
I know I did something that raised this error message once, but (as often happens) the problem ended up having nothing to do with the error message. Not much help, alas.
Some troubleshooting ideas: try to determine the actual SQL command being generated and submitted by the application to SQL Server (SQL Profiler should help here), and run it as "close" to the database as possible--from within SSMS, using SQLCMD, a direct BCP call, whatever is appropriate. Detailing all the tests you make and the results you get may help.