I have an Apache Ignite cluster running with 8 nodes.
I created one table
CREATE TABLE Sample(....) WITH "template=partitioned,backups=1"
and it has 300 million entries (cache: "SampleCache").
Now I want to change backups to 0. How can I do this?
The documentation for ALTER doesn't mention anything related to this.
I want to avoid dropping the table and creating it again.
I'm afraid there's no support for changing a cache's configuration after it has started. This means you will have to drop the table and create another one.
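A minimal sketch of the recreate path, assuming a hypothetical two-column schema since the original DDL is elided (and note that, as far as I know, Ignite SQL does not support renaming tables, so you either keep the new name or copy twice via a staging table):
CREATE TABLE Sample2 (id INT PRIMARY KEY, val VARCHAR)
  WITH "template=partitioned,backups=0";
-- Copy the 300M rows across (this will take a while; consider
-- batching by key range instead of one monolithic statement).
INSERT INTO Sample2 (id, val) SELECT id, val FROM Sample;
DROP TABLE Sample;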
While running an insert command (INSERT INTO TABLE xyz PARTITION(partition_date='2020-02-28') values('A',123, 'C',45).....) or a drop-partition command (alter table xyz drop if exists partition(partition_date='2020-02-28');) in Hive, if the Hive services are restarted in between (through Ambari, or due to some unwanted scenario), the exclusive lock acquired on that partition remains after the restart. Sometimes no YARN application ID is even generated for such a job, and when one is generated the job succeeds, yet the exclusive lock remains on the table or partition, and we later have to release it manually.
So why do these locks remain on the partition or table, and how can this kind of scenario be handled on our end?
Is there any workaround for this kind of scenario?
I ran into a similar problem and resolved it. There are two ways:
(1) The Hive lock information is stored in a MySQL table called hive.hive_locks, so you can delete the rows relevant to your table, or truncate that table. But this does not fix the problem permanently.
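For example (a hedged sketch run against the metastore's MySQL database; the table is HIVE_LOCKS in the standard metastore schema and the column names below match recent schema versions, so verify them against your installation first):
-- Inspect the stale locks first
SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_PARTITION, HL_LOCK_STATE
FROM hive.HIVE_LOCKS
WHERE HL_TABLE = 'xyz';
-- Then delete only the rows for the stuck table
DELETE FROM hive.HIVE_LOCKS WHERE HL_TABLE = 'xyz';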
(2) Add a configuration property in hive-site.xml, like this:
<property>
  <name>metastore.task.threads.always</name>
  <value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask,org.apache.hadoop.hive.metastore.repl.DumpDirCleanerTask,org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService</value>
</property>
You can also refer to my answer on this question, where I explain the second approach in detail:
https://stackoverflow.com/a/73771475/9120595
Thanks in advance for any help. Here is the scenario that I am trying to recreate in MuleSoft.
1,500,000 records in a table. Here is the current process that we use.
Start a transaction.
Delete all records from the table.
Reload the table from a flat file.
Commit the transaction.
In the end we need the table in a good state, thus the use of the transaction. If there is any failure, the data in the table will be rolled back to the initial valid state.
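For reference, the plain SQL shape of the current process looks like this (table and file names are illustrative, and the BULK INSERT line is SQL Server syntax, shown only as an assumption; substitute your database's loader):
BEGIN TRANSACTION;
DELETE FROM records;
-- reload from the flat file
BULK INSERT records FROM 'C:\data\records.csv'
  WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
COMMIT TRANSACTION;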
I was able to get the speed that we needed (< 10 minutes) by using the Batch element, but it appears that transactions are not supported around the whole batch flow.
Any ideas how I could get this to work in Mulesoft?
Thanks again.
A slightly different workflow, but how about:
Load a temp table from the flat file.
If successful, drop the original table.
Rename the temp table to the original table name.
You can keep your Mule batch processing workflow to load the temp table and forget about rolling back.
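A minimal sketch of the swap step, assuming SQL Server and illustrative table names (other databases have an equivalent such as ALTER TABLE ... RENAME TO); run it only after the temp-table load succeeds:
BEGIN TRANSACTION;
DROP TABLE dbo.records;                           -- drop the original
EXEC sp_rename 'dbo.records_staging', 'records';  -- promote the temp table
COMMIT TRANSACTION;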
For this you might try the following:
Use XA transactions (since more than one connector will be used, regardless of whether or not they share the same transport).
Enlist the resource used in the custom Java code in the transaction.
This can also be applied within the same transport (e.g. JDBC in the Mule configuration and also in the Java component), so it's not restricted to the case demonstrated in the PoC, which is given only as a reference.
Please refer to this article https://dzone.com/articles/passing-java-arrays-in-oracle-stored-procedure-fro
Poll the records from the temp table. You can construct an array with any number of records; with an array size of 100K, 1.5M records will only involve 15 round trips in total.
To determine the error records you can insert them into an error table, but that has to be implemented in the database procedure.
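A hypothetical PL/SQL sketch of such a procedure (the type, table, and column names are all made up for illustration): it bulk-inserts an array in one round trip and diverts failing rows to an error table.
CREATE OR REPLACE TYPE t_id_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE PROCEDURE load_batch (p_ids IN t_id_tab) AS
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- "error(s) in array DML"
BEGIN
  FORALL i IN 1 .. p_ids.COUNT SAVE EXCEPTIONS
    INSERT INTO target_table (id) VALUES (p_ids(i));
EXCEPTION
  WHEN dml_errors THEN
    -- record each failed element in the error table
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      INSERT INTO error_table (bad_value, err_code)
      VALUES (p_ids(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX),
              SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/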
I have a problem altering a table in SQL Server 2012: it takes a long time to add 4 nullable columns to a large table with 340 columns, approximately 166M rows, and 1 non-clustered index.
This problem happens only with a specific table, after restoring the database.
I waited 10 hours for the execution to finish, then had to cancel it for further investigation. It's strange, because the script is really simple, as shown below, and we have run it successfully before:
alter table sample_database.sample_schema.sample_table
add column_001 int null
,column_002 numeric(18,4) null
,column_003 nvarchar(500) null
,column_004 int null;
My questions are:
Why does this strange behavior happen?
How can we solve this? It's part of our deployment package. We have tried the workaround of creating a new table with the new columns and loading the data into it, but that doesn't work for us.
How can we prevent this problem in the future?
Many thanks all,
If it is 2012 (according to tag), this may happen:
http://rusanu.com/2012/02/16/adding-a-nullable-column-can-update-the-entire-table/
If adding a nullable column in SQL Server 2012 has the potential of increasing the row size over the 8060 size then the ALTER performs an offline size-of-data update to every row of the table to ensure it fits in the page. This behavior is new in SQL Server 2012.
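A rough way to check how close the table's defined row size is to that 8060-byte limit (max_length is in bytes, -1 means MAX, and variable-length columns are counted at their maximum here, so treat the result as an upper-bound estimate):
SELECT SUM(CASE WHEN max_length = -1 THEN 24 ELSE max_length END)
       AS approx_defined_row_bytes
FROM sys.columns
WHERE object_id = OBJECT_ID('sample_schema.sample_table');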
The reason behind this is explained by @Anton: page splits.
As a workaround you could follow these steps (sketched after the list):
Create a clone table with no rows (e.g. SELECT * INTO <New table> FROM sample_table WHERE 0=1).
Add the new columns to <New table>.
Copy the data from sample_table to <New table>.
Once step 3 completes, rename sample_table to something else, and rename <New table> to sample_table.
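A minimal T-SQL sketch of those steps (indexes, constraints, permissions, and any identity columns need extra handling and must be scripted separately; names follow the question):
SELECT * INTO sample_table_new FROM sample_table WHERE 0 = 1;  -- empty clone
ALTER TABLE sample_table_new
  ADD column_001 int NULL,
      column_002 numeric(18,4) NULL,
      column_003 nvarchar(500) NULL,
      column_004 int NULL;
-- the new columns are appended, so pad the SELECT with NULLs
INSERT INTO sample_table_new WITH (TABLOCK)
SELECT *, NULL, NULL, NULL, NULL FROM sample_table;
EXEC sp_rename 'sample_table', 'sample_table_old';
EXEC sp_rename 'sample_table_new', 'sample_table';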
You could also try adding the columns as NOT NULL with default values, and once the columns are added, alter them back to NULL accordingly.
This case happened in our UAT environment. After we restored the database again from the latest backup, the problem did not happen again; the ALTER completed within milliseconds.
In my opinion, something has been wrong since that earlier restoration.
Many thanks all,
I had this same issue and a simple server restart worked.
We are using SQL Server 2008. We have an existing database, and we needed to add a new column to one of its tables, which has only 2700 rows, but one of its columns is of type VARCHAR(8000). When I tried to add the new column (CHAR(1) NULL) using the ALTER TABLE command, it took too much time! After 5 minutes the command was still running, so I stopped it.
Below is the command I was using to add the new column:
ALTER TABLE myTable Add ColumnName CHAR(1) NULL
Can someone help me understand how SQL Server handles the ALTER TABLE command? What happens exactly?
Why does it take so much time to add a new column?
EDIT:
What is the effect of table size on the ALTER command?
Altering a table requires a schema lock. Many other operations require the same lock too. After all, it wouldn't make sense to add a column halfway through a SELECT statement.
So a likely explanation is that a process had the table locked for 5 minutes. The ALTER then has to wait until it gets the lock itself.
You can see blocked processes, and the blocking process, from the Activity Monitor in SQL Server Management Studio.
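You can also get the same information from a query window; for example (sys.dm_exec_requests is available from SQL Server 2005 onward):
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;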
Well, one thing to bear in mind is that you were adding a new fixed-length column to the table. The way rows are structured in storage, all fixed-length columns are placed before all of the variable-length columns in each row. So every row would have had to be updated in storage to make this change.
If, in turn, this changed the number of rows that could be stored on each page, a great many new allocations may have been required.
That being said, for the number of rows indicated, I wouldn't have thought it should take 5 minutes, unless, as Andomar indicated, there was some lock contention also involved.
In a database for a forum I mistakenly set the body column to nvarchar(MAX). Well, someone posted the Encyclopaedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there; eventually it times out.
I have tried editing the body of the post as well, but that also hangs. While I let my query run, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fine.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself; topics have a relationship to replies, and deletes on topics cascade to the related replies. This hangs as well.
Option 1. This depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script the current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
Option 2. Partitioning.
Add a new bit column, called, let's say, 'Partition', set to 0 for all rows except the bad one, and set to 1 for the bad one.
Create partitioning function so there would be two partitions 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from original table to the new temp table.
Drop temp table.
Remove partitioning from the source table and remove new column.
The partitioning topic is not simple. There are some examples on the internet, e.g. Partition switching in SQL Server 2005.
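A very rough sketch of the switch, assuming SQL Server 2005+ and illustrative object names (the table and its clustered index must first be rebuilt on the partition scheme, and the target table must match the structure and filegroup exactly):
ALTER TABLE replies ADD [Partition] bit NOT NULL DEFAULT 0;
UPDATE replies SET [Partition] = 1 WHERE replyID = 13461;
CREATE PARTITION FUNCTION pfBadRow (bit) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME psBadRow AS PARTITION pfBadRow ALL TO ([PRIMARY]);
-- rebuild replies (its clustered index) on psBadRow([Partition]), create
-- repliesBad with the identical structure, then switch the bad row out:
ALTER TABLE replies SWITCH PARTITION 2 TO repliesBad;
DROP TABLE repliesBad;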
Start by checking if your transaction is being blocked by another process. To do this, you can run this command..
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like this (change the "body" field name to whatever it is in your table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)