I am using MariaDB (v10.4.11) and executing SQL statements via HeidiSQL (v11.3.0.6295).
I have a few tables set up, and I can edit some of them just fine, using either SQL or the UI provided by HeidiSQL.
But two of the tables are neither editable nor deletable. I tried to edit column names, delete columns, and even drop the entire tables via SQL and the UI, but all attempts failed. HeidiSQL crashes, or I have to stop the SQL execution manually.
The SQL statements I used are:
UPDATE table1
SET col1 = 'a'
WHERE col2 = 'b';
DROP TABLE table1;
When I execute the above statements on other tables, they work just fine. When I execute the same statements on those two tables, HeidiSQL crashes (no response).
I created a backup schema (same tables, same structures), and I could edit all the tables in the backup schema until someone else edited those tables. A co-worker inserted some data into those tables via Python, and after that the same thing happens: I cannot add new columns or drop the tables.
Another co-worker and I inserted into other tables and they are editable.
I looked into the processlist and killed the queries that I had made. If I kill all the queries (except root's), will I be able to edit those tables?
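For reference, an uncommitted transaction left open by another session (for example the Python inserts) can hold a metadata lock that blocks ALTER and DROP on a table until that session commits or is killed. A minimal sketch of how to inspect and kill a suspect connection in MariaDB (the connection id 123 is hypothetical):

-- List every connection and its current statement; look for sessions that are
-- sleeping inside an open transaction, or statements "Waiting for table metadata lock".
SHOW FULL PROCESSLIST;

-- Kill the blocking connection by its Id from the processlist output.
KILL 123;

Once the blocking session is gone, the ALTER and DROP statements should run normally.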
As stated, I need help deleting all data from every table in a test database. There are 3477 tables, and some of the tables were created by a past employee, so I was unable to create a schema of the DB and recreate it empty.
Is there a fast way to delete all of the data and keep all of the tables and their structure? Also, I noticed when deleting data from the DB with DELETE FROM table_name that the data file wasn't decreasing in size. Any reason why? Then I tried to just delete the data file to see what would happen, and it erased everything, so I had to restore the test database. Now I'm back at square one...
Any help or guidance would be appreciated... I've read a lot and everything just says use DELETE or TRUNCATE, but I'd rather not do that by hand for 3477 tables.
The TRUNCATE TABLE command deletes the data inside a table, but not the table itself.
You have a lot of tables (more than 3000...), so take a look at the following link to truncate all tables:
Truncate all tables in a SQL Server database
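If you would rather not follow the link, the idea is to generate one TRUNCATE statement per table from the catalog views and execute the result. A minimal sketch for SQL Server (note that TRUNCATE fails on tables referenced by foreign keys, which would have to be emptied with DELETE or have their constraints dropped first; also, neither DELETE nor TRUNCATE shrinks the data file, which is why its size did not decrease):

-- Build a batch of TRUNCATE statements for every user table in the current database.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'TRUNCATE TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;

-- Inspect the generated script, then run it.
PRINT @sql;
EXEC sp_executesql @sql;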
What happened when I ran:
zcat /mnt/Postgres/restoreFile.gz | psql my_db
on the working database? After doing ALTER TABLE and other standard things, there were problems with duplicate keys. When I stopped it and tried to insert into the database, I got duplicate key errors because of sequences and constraints. It seems like all the data is in, but what about the sequences? What really happened to that database?
A normal Postgres backup consists of table design statements (like CREATE TABLE) and data statements (like INSERT). If you run it twice, most design statements will fail. The INSERT statements would succeed insofar as the data definition allows duplicate rows.
So restoring a database to a production server would typically result in a lot of duplicate rows in tables without a primary key. Some design changes made after the backup (like changing the owner of a table) may be undone.
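As for the sequences: a plain-SQL pg_dump also contains setval() calls, so re-running it resets every sequence to the value it had at backup time, which can then collide with rows inserted since the backup. A minimal sketch of how to push a sequence back past the highest existing key (the table and sequence names my_table and my_table_id_seq are hypothetical):

-- Point the sequence at the current maximum of the column it feeds.
SELECT setval('my_table_id_seq', (SELECT MAX(id) FROM my_table));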
I'm new to SQL and can't figure out why my SQL script isn't working.
I have two databases, and my task is to update a column of a specific table with the content of the same table in the other database if certain conditions are met. The tables and columns of both databases have the same names, just partly different content. I already looked through a lot of similar questions, but couldn't make it work or figure out what I did wrong.
UPDATE TABLE1
SET COLUMN_1 = Database2.TABLE1.COLUMN_1
WHERE Database2.TABLE1.COLUMN_2 LIKE '%DIN276%';
(I'm running the query on the first database.)
PostgreSQL does not support cross-database queries.
You must create, in your Database1, a foreign data wrapper for TABLE1 from Database2; then you can query your TABLE1 in Database1 together with data from TABLE1 of Database2.
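A minimal sketch of that setup with the postgres_fdw extension, assuming both databases are on the same server and that the rows can be matched on a key column (called id here, which is an assumption, since the original UPDATE has no join condition):

-- Run inside Database1.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER db2_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'Database2');

CREATE USER MAPPING FOR CURRENT_USER SERVER db2_srv
    OPTIONS (user 'my_user', password 'my_password');  -- hypothetical credentials

-- Expose Database2's TABLE1 locally under the schema db2.
CREATE SCHEMA db2;
IMPORT FOREIGN SCHEMA public LIMIT TO (table1) FROM SERVER db2_srv INTO db2;

-- The cross-database update then becomes a local join.
UPDATE table1 AS t
SET column_1 = r.column_1
FROM db2.table1 AS r
WHERE t.id = r.id
  AND r.column_2 LIKE '%DIN276%';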
I have a particular SQL file in which I copy all contents from one table in a database to another table in another database.
Traditional INSERT statements are used to perform the operation. However, this table has 8.5 million records and the operation fails. The queries succeed with a smaller database.
Also, when I run a SELECT * query on that particular table, SQL Server Express throws an out-of-memory exception.
In particular, there is one table that has that many records, so I want to copy this table alone from the old DB to the new DB.
What are alternative ways to achieve this?
Is there any quick workaround to avoid this exception and make the queries succeed?
Let me put it this way: why would this operation fail when there are a lot of records?
I don't know if this counts as "traditional INSERT", but have you tried "INSERT INTO"?
http://www.w3schools.com/sql/sql_select_into.asp
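For what it is worth, the linked page actually describes SELECT ... INTO, which creates and fills the target table in one statement. Either way, moving 8.5 million rows in a single statement can exhaust memory and log space, so a common workaround is to copy in batches. A rough sketch, assuming both databases are on the same SQL Server instance and the table has an increasing integer key (the names NewDb, OldDb, BigTable, Id, Col1 and Col2 are all hypothetical):

DECLARE @lastId INT = 0;
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Copy the next 100,000 rows after the highest key already moved.
    INSERT INTO NewDb.dbo.BigTable (Id, Col1, Col2)
    SELECT TOP (100000) Id, Col1, Col2
    FROM OldDb.dbo.BigTable
    WHERE Id > @lastId
    ORDER BY Id;

    SET @rows = @@ROWCOUNT;
    SELECT @lastId = ISNULL(MAX(Id), @lastId) FROM NewDb.dbo.BigTable;
END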
In a database for a forum I mistakenly set the body to nvarchar(MAX). Well, someone posted the Encyclopedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and ran a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there. Eventually it times out.
I have tried editing the body of the post as well, but that also sits and hangs. When I sit and let my query run, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself which has a relationship to replies and deletes on topics cascade to the related replies. This hangs as well.
Option 1. Depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
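A hypothetical illustration of the foreign-key steps above, assuming a child table [u413].[votes] references [u413].[replies] (the table and constraint names are made up):

-- Before dropping the old table:
ALTER TABLE [u413].[votes] DROP CONSTRAINT FK_votes_replies;

-- After renaming tempTable to replies:
ALTER TABLE [u413].[votes] ADD CONSTRAINT FK_votes_replies
    FOREIGN KEY (replyID) REFERENCES [u413].[replies] (replyID);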
Option 2. Partitioning.
Add a new bit column, called, let's say, 'Partition', set to 0 for all rows except the bad one. Set it to 1 for the bad one.
Create a partition function so that there are two partitions, 0 and 1.
Create a temp table with the same structure as the original table.
Switch partition 1 from the original table to the new temp table.
Drop the temp table.
Remove partitioning from the source table and remove the new column.
Partitioning is not a simple topic. There are some examples on the internet, e.g. Partition switching in SQL Server 2005.
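A very rough sketch of Option 2 in T-SQL (the function, scheme and column names are made up; the clustered index of [replies] would first have to be rebuilt on the partition scheme, and tempTable must have an identical structure):

-- Two partitions: values <= 0 (all good rows) and values > 0 (the bad row).
CREATE PARTITION FUNCTION pfBadRow (bit) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME psBadRow AS PARTITION pfBadRow ALL TO ([PRIMARY]);

-- ...rebuild [u413].[replies] on psBadRow([Partition]), create an empty tempTable
-- with the same structure on [PRIMARY], then move the bad row out and drop it:
ALTER TABLE [u413].[replies] SWITCH PARTITION 2 TO tempTable;
DROP TABLE tempTable;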
Start by checking whether your transaction is being blocked by another process. To do this, you can run this command:
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like (changing "body" field name to whatever it is in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)