Oracle global_names DELETE problem

I'm using a database link to execute a DELETE statement on another DB, but the DB link name doesn't conform to global naming, and this requirement cannot change.
I also have global_names set to false, and that cannot be changed either.
However, when I try to use the link, I receive:
ORA-02069: global_names parameter must be set to TRUE for this operation
Cause: A remote mapping of the statement is required but cannot be achieved because GLOBAL_NAMES should be set to TRUE for it to be achieved.
Action: Issue ALTER SESSION SET GLOBAL_NAMES = TRUE (if possible).
What is the alternative action when setting global_names=true is not possible?
Cheers,
Jean

That parameter can be set at the session level. Could you not set GLOBAL_NAMES to TRUE in your session, execute your delete, and then set it back to FALSE? If not, could you create a new connection just for this delete and set GLOBAL_NAMES to TRUE in that session?
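A minimal sketch of that suggestion (the table, column, and link names are hypothetical):
ALTER SESSION SET GLOBAL_NAMES = TRUE;
DELETE FROM some_table@mylink WHERE id = 1;
COMMIT;
ALTER SESSION SET GLOBAL_NAMES = FALSE;
Note that with GLOBAL_NAMES = TRUE the session expects link names to match the remote GLOBAL_NAME, so this only helps if the existing link resolves under that rule.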

The problem is that this operation requires the GLOBAL_NAMES parameter to be TRUE, and with GLOBAL_NAMES = TRUE the DB link must have the same name as the GLOBAL_NAME of the remote DB.
Here's a link which describes the situation more fully.
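To illustrate the naming requirement, you can ask the remote side what it calls itself (a sketch; the link name mylink is hypothetical):
SELECT global_name FROM global_name@mylink;
With GLOBAL_NAMES = TRUE, the database link would have to be named to match the value returned.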

Related

Can we set author to the db user in liquibase for update?

In Liquibase, "author" is normally hardcoded in the changeset, but we want to set it to the db user against which the changeset is being run, so that the author is dev_schema in dev, prod_schema in prod, and so on. The db users are not known beforehand, so we would like to set the author automatically at runtime from the --username option of the Liquibase connection string.
./liquibase.bat --driver=oracle.jdbc.OracleDriver --changeLogFile="changelog.xml" --url="jdbc:oracle:thin:@localhost:1521:xe" --username=dev_schema ...
In the changeset tag I set the attribute author to ${username} but it is not picked up.
<changeSet author="${username}" ...
I also tried setting an environment variable, which worked, but then you have to set the same username twice. There is also a risk that if someone uses a different username, Liquibase will fail to execute due to a checksum failure.
Is that possible? Alternatively, is there any way around it?
I guess that username is Liquibase's system parameter and is not used for placeholder replacement in the changelog. Try specifying it as a JVM parameter, e.g. -Dusername=<your user>, and see what happens.
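A hedged sketch of that suggestion, reusing the command from the question (whether -D properties reach changelog parameter substitution depends on the Liquibase version, so treat this as something to try rather than a guaranteed fix):
./liquibase.bat --driver=oracle.jdbc.OracleDriver --changeLogFile="changelog.xml" --url="jdbc:oracle:thin:@localhost:1521:xe" --username=dev_schema update -Dusername=dev_schema
If substitution does pick the parameter up, <changeSet author="${username}" ...> would then resolve at runtime; the username still appears twice on the command line, which is the duplication the question mentions.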

How do you update a locked record in Documentum using DQL?

I'm unable to update the record with DQL due to a lock. Is it possible to unlock the record, update it and lock it again?
I'm running the following code in idql64.exe on the content server.
UPDATE dm_document objects SET keywords = 'D' WHERE r_object_id = '90000000000000001'
GO
Error message:
[DM_SYSOBJECT_E_LOCKED]error:
"The operation on sysobject was unsuccessful because it is locked by user
You have to either unlock it via the API or the user interface, or reset the attributes r_lock_owner and r_lock_machine. I would prefer the API or the user interface. The API command is
unlock,c,{object id}
and it can be easily scripted.
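For example, a minimal sketch run on the content server (the repository name and credentials are hypothetical; iapi reads API commands from standard input):
iapi MyRepo -Udmadmin -Pdmadmin <<'EOF'
unlock,c,90000000000000001
EOF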
The issue is caused by a checkout by the user named in the r_lock_owner property.
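To see who actually holds the lock, a quick DQL check (a sketch using the object id from the question; r_lock_owner, r_lock_machine, and r_lock_date are standard dm_sysobject attributes):
SELECT r_object_id, r_lock_owner, r_lock_machine, r_lock_date FROM dm_document WHERE r_object_id = '90000000000000001'
GO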
dqMan from FME is your friend!
Br, Henning
Yes, you need to be a member of the dm_escalated_allow_save_on_lock group; in that case Documentum will do everything automatically.
I was able to achieve this by temporarily clearing the r_immutable_flag attribute.
UPDATE dm_document(all) objects SET r_immutable_flag = 0 WHERE r_object_id = '90000000000000001'
GO
UPDATE dm_document(all) objects SET keywords = 'D' WHERE r_object_id = '90000000000000001'
GO
UPDATE dm_document(all) objects SET r_immutable_flag = 1 WHERE r_object_id = '90000000000000001'
GO

pyodbc hive can't set properties

Has anyone run into issues attempting to set Hive properties through pyodbc, with the properties not taking effect?
I'm able to connect to my Hive server and run queries that indicate a session remains open (e.g. temporary tables work).
However, when I try:
set hive.mapred.mode = nonstrict;
or
set hive.execution.engine = mr;
Neither of these properties gets set.
Thoughts?
Try the following: set hive.hive.mapred.mode = nonstrict;
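One way to verify whether a property actually took effect in the same session: running set with just the property name makes Hive echo the current value, so you can compare before and after (a sketch using the properties from the question):
set hive.mapred.mode;
set hive.execution.engine;
If the echoed values do not change after your set statements, the driver is likely not forwarding them to the session.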

alter session set store.mongo.bson.record.reader doesn't work in Drill

Could you please help me?
I'm trying to run alter session set store.mongo.bson.record.reader = false; in Apache Drill, but the output shows that it's still set to true.
I really need to change it so that I can read the real value of _id in MongoDB.
Any help?
Thanks
Go to http://localhost:8047/options (assuming Drill is running on localhost), change the property to false, and update it.
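It may also be possible to change it from SQL by putting backticks around the option name, since the name contains dots (a sketch; verify the result against sys.options):
alter session set `store.mongo.bson.record.reader` = false;
select * from sys.options where name = 'store.mongo.bson.record.reader';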

Why is my ALTER SYSTEM command failing here?

A colleague gave me some code to run. I need to set the archive log location to a directory inside db_recovery_file_dest. I am using a VirtualBox VM called "Oracle Developer Days".
I'm trying to run the following command:
ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both;
But it's generating this error :
SQL> ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both;
ALTER SYSTEM SET log_archive_dest_1 = '/home' SCOPE=both
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_1" not allowed with SPFILE
SQL>
What is the SPFILE?
Also, could the problem be that I'm using a virtual machine?
The correct syntax, per the docs, is ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/home' SCOPE=both;
You shouldn't be setting it to /home. I hope that's a just a simplification you've made for posting here.
"What is the SPFILE ?"
You need to understand what you're doing. Please read the documentation and learn some basic concepts about the Oracle database and being a DBA.
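Since the stated goal is a directory inside db_recovery_file_dest, it may be worth knowing that Oracle accepts a special value that points archiving directly at the recovery area, avoiding a hardcoded path (a sketch):
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=both;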
Which Oracle version are you using?
SPFILE stands for Server Parameter File (before the 9i release there was only the text-based PFILE). It contains the parameters Oracle uses to initialize the instance when the database is brought up.
You can use the command below to check where your SPFILE is stored:
show parameter spfile
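Illustrative output only; the actual path will differ on your system:
NAME     TYPE     VALUE
-------- -------- ------------------------------------------
spfile   string   /u01/app/oracle/product/dbs/spfileorcl.ora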
Regards
Andy