Altering an attribute's data size in a table in SQL Server - sql

So I'm trying to do something I thought would be straightforward. I have a table in the DB named "Images." Its 'Description' column is of type nvarchar(50). I simply want to make it nvarchar(250). Every time I try, it says it can't save because some tables would have to be dropped and re-created. I can't just delete the table (I think) because there's already data being maintained in it, and I can't lose it.
EDIT:
Exact error message:
"Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."
Should I just disable the 'Prevent saving changes that require table re-creation' option and save it from there?

This KB article explains it.

Do you have any tables referencing the "Description" column? That would prevent you from changing the data type/length.

Were you doing this from the SSMS GUI, or were you running a script using ALTER TABLE to make the change?
If you did it through the designer, I believe it creates another table, drops the original, and renames the new table. If that table is in a PK/FK relationship, it can't drop the table. Never make table changes except by using a script. You also need scripts to properly put these changes in source control.
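A minimal ALTER TABLE sketch for the original question, assuming the table is dbo.Images and that Description currently allows nulls (adjust NULL/NOT NULL to match the existing definition):

-- Widen Description in place; existing data fits because nvarchar(250)
-- can hold everything nvarchar(50) could.
ALTER TABLE dbo.Images
    ALTER COLUMN Description nvarchar(250) NULL;

If other objects depend on the column (constraints, indexed views, computed columns), they may need to be dropped and re-created around the ALTER, which is usually what pushes the designer into its rebuild behaviour.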

Related

Reflect the changes made in one table in SQL automatically in another table

I have created a table in SQL by copying data from another table in a different database. How can I make the changes made in the old table be reflected automatically in the new table?
You can use a trigger to reflect the changes.
Refer to this.
Given that you have access to the first table, a trigger would work fine.
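As a rough sketch (the table and column names here are made up for illustration), an AFTER INSERT trigger on the source table can copy new rows into the second table:

-- Hypothetical tables: dbo.SourceTable is the original,
-- dbo.CopyTable is the table that should mirror new rows.
CREATE TRIGGER trg_SourceTable_Insert
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" is the pseudo-table holding the rows just added.
    INSERT INTO dbo.CopyTable (Id, Name)
    SELECT Id, Name
    FROM inserted;
END;

You would need similar AFTER UPDATE and AFTER DELETE triggers to keep the copy fully in sync, and since the question mentions a different database, the copy would be referenced with a three-part name (e.g. OtherDb.dbo.CopyTable) if it lives on the same server.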

Why does a temp table work but not a permanent table?

I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
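A quick way to check the schema point is a catalog query on the server where the job runs (the table name here is a placeholder):

-- Which schema did the report table actually land in?
SELECT s.name AS schema_name,
       t.name AS table_name,
       t.create_date
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name = 'ReportData';   -- placeholder: use your table's name

If the table shows up under the scheduling account's default schema instead of dbo, any later statements that reference the dbo-qualified name would be hitting a different (or missing) table.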
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why is it not possible to create the permanent table once, rather than creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try recreating the table in a separate batch or query (e.g. put GO in the script, which splits it into separate batches).
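A sketch of that batch split, with placeholder table and column names:

-- Drop and recreate the report table in its own batch...
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;

CREATE TABLE dbo.ReportData
(
    Id       int      NOT NULL PRIMARY KEY,
    Amount   money    NULL,
    LoadDate datetime NOT NULL
);
GO  -- batch separator: the statements below compile after the table exists

-- ...then do the inserts/updates in a separate batch.
INSERT INTO dbo.ReportData (Id, Amount, LoadDate)
SELECT OrderId, TotalDue, GETDATE()
FROM dbo.Orders;   -- placeholder source
GO

Note that GO is a client-side batch separator understood by SSMS and sqlcmd, not a T-SQL statement, so whether this helps depends on whether the reporting tool splits batches the same way.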
Regarding why it happens, I'm thinking of statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. So my speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table, and you could hit some bug in the server this way. This is just speculation; it could be a different kind of problem.
Without seeing code it's a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple angles to consider:
1) Make certain to prefix table names (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, instead adjust your logic to leave the table in place and write code that drops it when you (re)start the operation. It's possible the report is still clinging to the table and you are destroying it prematurely.

Cannot modify table (using Microsoft SQL Server Management Studio 2008)

I created two tables and a third with foreign keys to the other two.
I realized I want to make some changes to table no. 3.
I try to update a field but I get the error "Saving changes is not permitted. The changes you have made require the following table to be dropped and re-created."
I deleted those two relationships, but when I look at dependencies I see my table still depends on those two, and I still cannot make any change to it.
What can I do?
You can also enable saving changes that require dropping tables by going to Tools -> Options -> Designers -> Table and Database Designers and unchecking "Prevent saving changes that require table re-creation".
Be careful with this though, sometimes it'll drop a table without being able to recreate it, which makes you lose all data that was in the table.
The same message occurs when using Microsoft SQL Server Management Studio 2012.
I used the script feature to make my modifications, which can be seen as a rather good workaround if you want to use the designer only within a "safe" mode.
In particular, the GUI for creating a foreign key is not the best in my opinion. Adding an FK with a script (ALTER TABLE) is faster than using that GUI feature.
Adding 'NOT' in front of NULL is not a hard issue either. (Removing 'Allow Nulls' from a column triggers "Saving changes is not permitted" when using the designer.)
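For example, both of those changes can be scripted directly; the table and column names below are made up for illustration:

-- Remove "Allow Nulls" on a column (existing rows must not contain NULLs first).
ALTER TABLE dbo.OrderLines
    ALTER COLUMN OrderId int NOT NULL;

-- Add the foreign key without touching the designer.
ALTER TABLE dbo.OrderLines
    ADD CONSTRAINT FK_OrderLines_Orders
        FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId);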

How can I overwrite the contents of an SQLite file

I've just started using SQLite and I want to write all my application data to a file, not knowing whether the file already exists; with 'normal' files this is straightforward, but with SQLite I can't create a table if it already exists, and I can't insert a row if the primary key already exists.
I basically want to do something like "CREATE TABLE IF NOT EXISTS table....else...DELETE FROM table". There must be a way to do it, I suspect that there are some ways that are more efficient than others. You'd think, for example, that it would be better to use an existing table rather than deleting and recreating, but that depends on what's involved in checking if it exists and deleting its contents.
Alternatively, is there any way to write a database to memory (sqlite3_open(":memory:",db)), but then get hold of its contents - as a byte array or something - to write to a file?
For all database systems it will almost always be more efficient to DROP the table first and then re-CREATE it. Using DELETE requires index updates etc., whereas a simple DROP removes the indexes too and will not involve making transaction log entries. For SQLite, you can use DROP TABLE IF EXISTS to drop the table conditionally.
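A sketch of that reset pattern in SQLite, with a made-up table:

-- Start from a clean slate whether or not the table already exists.
DROP TABLE IF EXISTS app_data;

CREATE TABLE app_data (
    key   TEXT PRIMARY KEY,
    value TEXT
);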
What about having an empty, fully designed database file ready somewhere? Then, if the intended app-data database does not exist, create it by copying the empty db file.
If you're going to read the application data in at some point before writing it back out - user settings for instance - you'll know whether the data exists or not. Try calls to tables wrapped in exception handling and you'll know if they exist - alternatively use the database tables to tell you what exists:
SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name;
If you only ever want to overwrite an existing db file, you can simply delete the db file from disk and then run your creation code - no messing around with drop calls anywhere, and no worries about having changes in the db between versions.

Inconsistent Generate Change Script

I add a column of type tinyint, set to not allow nulls, to a table and generate the change scripts. The table has data in it at this time. The script has code that creates a temp table and inserts the data that is in the current table into it. It then deletes the old table and renames the temp table to the same name as the original table. All fine and good. My question is: why, when I do the same thing to another table (same field, but a different table), does the generated change script not include this table-rebuild and insertion code?
Any tips would be greatly appreciated!
If the table does not contain data, there is no need to rebuild the table. Essentially, Management Studio "plays it safe" behind the scenes by generating the script this way when it thinks it can't make the change simply by modifying the table. In my experience it often does this when it doesn't really need to, though there are exceptions, for example if you add your column somewhere other than the "end" of the table. Rather than making changes in the UI and scripting them, I recommend becoming familiar with the ALTER TABLE command. Rebuilding the table in that manner can be catastrophic on a production system and can usually be avoided.
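For the change described in the question (a non-nullable tinyint added to a table that already has rows), a direct ALTER TABLE sketch would look roughly like this; the table, column, and default value are placeholders:

-- Adding a NOT NULL column needs a DEFAULT so existing rows get a value;
-- SQL Server can then apply the change without rebuilding the table.
ALTER TABLE dbo.Orders
    ADD StatusCode tinyint NOT NULL
        CONSTRAINT DF_Orders_StatusCode DEFAULT (0);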