When running SELECT queries, Ignite cache discovery happens with no errors; however, when attempting to DROP an existing index, I get an exception:
class org.apache.ignite.internal.processors.query.IgniteSQLException: Cache doesn't exist: <cache_name>
The only reference I could find is here, with the resolution being to restart the caches, which is not the desired behaviour.
Why is the cache not found during DDL statements but found for DML?
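For illustration, the pattern looks roughly like this (the table and index names here are placeholders, not the real ones):
SELECT * FROM my_table;       -- query runs fine, the cache is discovered
DROP INDEX my_table_name_idx; -- fails with IgniteSQLException: Cache doesn't exist: <cache_name>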
I've got a postgres database that I'm trying to clean up with drop schema public cascade. The data on it is not that important and I never made any backups. I'm just trying to rebuild it. However, it seems an error I made earlier is causing the drop command to fail.
When I run drop schema public cascade, I get:
ERROR: cache lookup failed for constraint XXXXX
I checked pg_constraint and it doesn't exist. It's probably linked to a table/index that doesn't exist anymore. Is there any way I can get rid of this persistent/non-existent constraint so I can clean up the database?
Perhaps it is enough to remove the dependency:
DELETE FROM pg_depend
WHERE objid = XXXXX;
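Before deleting, it may be worth checking what the dangling row actually points at (a sketch; XXXXX is the constraint OID from the error message):
SELECT classid::regclass, objid, refclassid::regclass, refobjid, deptype
FROM pg_depend
WHERE objid = XXXXX;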
While running an insert command (INSERT INTO TABLE xyz PARTITION(partition_date='2020-02-28') values('A',123, 'C',45)...) or an alter-table-drop-partition command (alter table xyz drop if exists partition(partition_date='2020-02-28');) in Hive, if the Hive services get restarted in between (through Ambari or due to any other unwanted scenario), the exclusive lock acquired on that partition remains after the restart. Sometimes no YARN application ID is generated for such a job; when one is generated, the job even succeeds, yet the exclusive lock stays on the table or partition and we later have to release it manually.
So why do these locks remain on the partition or table, and how can scenarios like this be handled on our end?
Is there any workaround for these kinds of scenarios?
I met a similar problem and resolved it; there are two ways:
(1) The Hive lock information is stored in a MySQL table called hive.hive_locks, so you can delete the relevant rows for your table, or truncate that table (see the sketch at the end of this answer). However, this does not fix the problem permanently.
(2) Add a configuration property in hive-site.xml, like this:
<property>
<name>metastore.task.threads.always</name>
<value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask,org.apache.hadoop.hive.metastore.repl.DumpDirCleanerTask,org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService</value>
</property>
You can also refer to my answer to this question, where I gave a detailed explanation of the second way:
https://stackoverflow.com/a/73771475/9120595
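For the first way, a rough sketch of the metastore cleanup (an assumption-laden example: it presumes the standard metastore schema where locks live in a HIVE_LOCKS table with HL_DB and HL_TABLE columns, and 'your_db'/'xyz' are placeholders):
-- Run against the metastore database in MySQL; inspect first, then delete
-- only the stale rows for the stuck table/partition.
SELECT * FROM HIVE_LOCKS WHERE HL_DB = 'your_db' AND HL_TABLE = 'xyz';
DELETE FROM HIVE_LOCKS WHERE HL_DB = 'your_db' AND HL_TABLE = 'xyz';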
I created a Cloudera cluster for Impala.
Cloudera version: Cloudera Express 5.8.1
Impala version: 2.6.0-cdh5.8.0 RELEASE
If I run the following command via impala-shell:
create table test as select 1;
The following error is returned:
WARNINGS: Failed to open HDFS file for writing: hdfs://[DNhostname]:8020/user/hive/warehouse/test/_impala_insert_staging/[...]/[...].0.
Error(255): Unknown error 255
However, if I run:
create table test (testcol int);
insert into test select 1;
...The table is created without a hitch.
Any ideas on why the first statement might fail while the second set of commands succeeds, and what I could do to fix it? I might have messed something up with directory permissions, either locally or on HDFS; however, I've set dfs.permissions to false to turn off HDFS permissions. I don't know what to check on the local folders to ensure the correct user(s) have the right permissions. In either case, I don't know why the permissions would cause the CREATE TABLE AS SELECT statement to fail but not CREATE, then INSERT.
I should also mention that DNhostname is the hostname of the HDFS datanode/impala daemon that I'm SSHed into, not the hostname of the namenode. This worries me because DNhostname was originally where my namenode was located; I moved it to a different host for reasons outside the scope of this question. Is it possible that CREATE TABLE AS SELECT is still expecting the namenode to be DNhostname for some reason?
You are creating the new table under the default database's path, since you are not specifying a path in your create statement. If you try using another database for this test, you will most likely succeed:
create database newdb;
use newdb;
create table test as select 1;
If that works, it proves the location of the default database is wrong in the metastore. Go to the DBS table in your metastore database, find the ID of your database there, and set its location correctly, like:
update <metastoreDB>.DBS set DB_LOCATION_URI = 'hdfs://NN_URI:8020/user/hive/warehouse' where DB_ID = id_of_your_db;
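To see what the metastore currently has stored before updating it, a quick check (a sketch, again assuming the standard Hive metastore schema where databases live in the DBS table):
-- DB_LOCATION_URI should point at the current namenode, not the old host.
SELECT DB_ID, NAME, DB_LOCATION_URI FROM <metastoreDB>.DBS WHERE NAME = 'default';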
We recently had to fix a bug with our context indexes not properly indexing the contents of any of the Office Open XML file formats that we uploaded to our database. The SQL that we ended up with was something akin to this:
BEGIN
CTX_DDL.CREATE_PREFERENCE('"CX_OBJECT_DST"', 'MULTI_COLUMN_DATASTORE');
-- DESCRIPTION and FILENAME are both VARCHAR2, OBJECT is a BLOB
CTX_DDL.SET_ATTRIBUTE('"CX_OBJECT_DST"', 'COLUMNS', 'DESCRIPTION,FILENAME,OBJECT');
CTX_DDL.SET_ATTRIBUTE('"CX_OBJECT_DST"', 'FILTER', 'N,N,Y');
END;
/
DROP INDEX SCHEMA_NAME.ATTACHMENT_OBJECT_IDX;
CREATE INDEX SCHEMA_NAME.ATTACHMENT_OBJECT_IDX
ON SCHEMA_NAME.ATTACHMENT (OBJECT)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('datastore CX_OBJECT_DST')
NOPARALLEL;
ALTER INDEX SCHEMA_NAME.ATTACHMENT_OBJECT_IDX REBUILD;
This is on an Oracle 11.2.0.4 database.
At first glance, rebuilding an index immediately after it's created seems counterintuitive. But we found that if we omitted the REBUILD, the index didn't pick up the contents of any attachments that we uploaded.
I don't understand why this would be the case (although I will be the first to admit that my knowledge in this area isn't great). What does REBUILD do that CREATE doesn't that causes this to work?
Whenever someone asks why we're doing the rebuild immediately after creation, all we can respond with at the moment is "because it doesn't work if we don't", which isn't a very satisfactory answer to hear (or to give for that matter)...
We have a background job that runs once per minute that calls out to a stored procedure that calls:
CTX_DDL.SYNC_INDEX('ATTACHMENT_OBJECT_IDX');
The procedure itself just includes some exception handling code and this one call - nothing that should impact on this.
We took the job offline while the index was dropped and recreated, then brought it back online after it was finished. We then left the job running for a few minutes to ensure that it wasn't failing (it wasn't), then uploaded our .docx file to the database. We again waited until the job had run and verified that it didn't fail (again, it was fine), then attempted to search for the contents of that uploaded file, which always returned no results.
If we then do a REBUILD on that index, the file is indexed and all new files from then on are also indexed properly. If we don't, it never seems to work (NB: We have also tried leaving the job online while the index was dropped and recreated, but didn't expect that to work - and it didn't).
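For illustration, the search is a standard Oracle Text CONTAINS query along these lines (a sketch; the search term is just a placeholder for a word known to be in the uploaded .docx):
-- Returns no rows until the BLOB contents have actually been indexed.
SELECT FILENAME
FROM SCHEMA_NAME.ATTACHMENT
WHERE CONTAINS(OBJECT, 'placeholder_word') > 0;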
I am running into an issue dropping and recreating unique clustered indexes on some Sybase databases. I have been unable to reproduce the issue in my test environment.
The error that results when the concurrency issue arises is as follows:
Cannot drop or modify partition descriptor database 'xxx' (43), object xx, index xx (0), partition 'xx' as it is in use. Please retry your command later. Total reference count '4'. Task reference count '2'.
I know an exclusive lock on a table or row from an open tran will not cause this, and I don't think anything the end-users could be doing would change the sort order of the data.
The index is clustered, the partitioning is round robin, and the table has a single partition.
Please advise.
Could you use "reorg" instead? That would have the same effect and should not be vulnerable to this. But I'm not sure, because like you I don't see how this happens - building the new clustered index shouldn't start until Sybase gets a table lock (it has to for a clustered index), so why does it appear that something else is accessing the table? (DBCCs, or something system-level holding locks on the system catalogs maybe, so that although the index can build, something about updating the system catalogs fails?)
Before 15.0.3 ESD#4, REORG causes other queries that try to access the table being reorged to fail rather than be blocked, which can be annoying.
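A sketch of what that alternative would look like (the exact syntax and locking-scheme restrictions vary by ASE version, so treat this as an assumption and check the documentation for yours; the table and index names are placeholders):
-- Rebuilds the table's data and indexes in place instead of drop/create.
reorg rebuild your_table
-- Or, on newer ASE versions, target just the one index:
reorg rebuild your_table your_clustered_index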