Renaming column to previously existing column with different type won't let me update - sql

Background: In a Java application I'm working on, I'm refactoring how enum values are stored. Previously they were stored as integers and mapped to enum values through a helper method in the enum. I would like to use the EnumType.STRING capabilities of JPA to make the database more readable.
So, what I'm basically trying to do is change the type (as well as the values) of a column. For example, I had this table definition to begin with:
table Something (
id int,
source int,
[more columns]
)
I wanted to change the source column into a VARCHAR(100) column instead, and here is how I did that:
Introduce a new column, called source_new, with type VARCHAR(100).
Populate the new column with mapped values based on the values of the old column (so each row with value 1 in the source column gets the value 'SomeSource' in source_new, each row with value 2 in source gets 'OtherSource', and so on).
Drop the source column.
Rename the source_new column to source (using sp_rename).
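The four steps above can be sketched end to end. This is a minimal stand-in using Python's sqlite3 rather than SQL Server (SQLite 3.35+ for DROP COLUMN; on SQL Server the rename step would be sp_rename instead of ALTER TABLE ... RENAME COLUMN), with the table and value names taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Something (id INTEGER, source INTEGER)")
cur.executemany("INSERT INTO Something VALUES (?, ?)", [(1, 1), (2, 2)])

# Step 1: introduce the new column
cur.execute("ALTER TABLE Something ADD COLUMN source_new VARCHAR(100)")

# Step 2: populate it with mapped values based on the old column
cur.execute("""
    UPDATE Something SET source_new = CASE source
        WHEN 1 THEN 'SomeSource'
        WHEN 2 THEN 'OtherSource'
    END
""")

# Step 3: drop the old column (SQLite 3.35+)
cur.execute("ALTER TABLE Something DROP COLUMN source")

# Step 4: rename (SQL Server: EXEC sp_rename 'Something.source_new', 'source', 'COLUMN')
cur.execute("ALTER TABLE Something RENAME COLUMN source_new TO source")

print(cur.execute("SELECT id, source FROM Something ORDER BY id").fetchall())
```

After the four steps, the column really is a string column, which is why the error in the question has to be coming from somewhere else.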
My problem is this: once this is done, I can't update the newly defined source column, because SQL Server still insists that it's an int column, not a varchar column!
So a query like this:
update Something set source = 'SomeSource' where id = 1;
fails with this:
Error: Conversion failed when converting the varchar value 'SomeSource' to data type int.
SQLState: 22018
ErrorCode: 245
At the same time, sp_help of the table shows that the column is defined as varchar(100), and not int! Also, the column holds numerous varchar values from the original datamigration (from before the rename).
Is this a bug, or am I doing something wrong by renaming a column to a name that was previously used with another type? (As I type that last question, it sounds absurd to me: when I drop a column I expect it to disappear, not to leave traces behind that stop me from reusing the column name at any time in the future.)
SQLFiddle to illustrate (sp_rename doesn't work with SQLFiddle it seems): http://sqlfiddle.com/#!3/0380f/3

I have found the culprit, and its name is trigger!
Some genius decided to put a check that the updated value is a valid source (checking against another table) in a trigger. So much for trusting your own code...
I spit on the shadow of people who hide functionality in database triggers, pfoy! Go back to the 80's where you belong! :p
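The symptom is easy to reproduce once you know the cause: a leftover trigger still validates new values against the old table of integer codes, so the update fails even though the column type is now perfectly fine. A minimal sketch using Python's sqlite3 as a stand-in for SQL Server (the trigger and lookup-table names here are hypothetical, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ValidSources (code INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO ValidSources VALUES (?)", [(1,), (2,)])
cur.execute("CREATE TABLE Something (id INTEGER, source VARCHAR(100))")
cur.execute("INSERT INTO Something VALUES (1, 'SomeSource')")

# The hidden culprit: a leftover trigger still validating against the
# old integer codes, even though the column itself is now a string
cur.execute("""
    CREATE TRIGGER check_source BEFORE UPDATE ON Something
    WHEN NEW.source NOT IN (SELECT code FROM ValidSources)
    BEGIN
        SELECT RAISE(ABORT, 'not a valid source');
    END
""")

blocked = False
try:
    cur.execute("UPDATE Something SET source = 'OtherSource' WHERE id = 1")
except sqlite3.IntegrityError as exc:
    blocked = True
    print("update rejected by trigger:", exc)
```

The column definition is innocent; the comparison against the integer lookup table is what produces the conversion-style failure.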

Related

How do I update the property of a decimal data type in Microsoft SQL to change the number of decimals displayed?

Recently I have been using Microsoft SQL Server to create databases that are referred to by an Excel document. There have been a number of instances when I needed to make a small change to my tables and ended up DROP-ing all my current tables and re-creating them with an updated query. I understand you can use UPDATE to change the values of records within a table, but I'm looking to change a data type so that the number of decimals in one column of my tables goes from 2 to 3. The code for creating the table looks something like this:
CREATE TABLE WIRE_INDEX
--"Field" "Data Type" "Null or Not"
(...
...
DENSITY decimal(18,2) Not Null);
I don't know if the solution is something obvious, but I have been unable to find anything useful. I'm not sure how to refer to the data type of a field in SQL.
When I populate the database I use numbers like 0.283 and 0.164, but when I SELECT the record I only get the first two decimals. I'd like the first 3 decimals to appear in the way I enter them into the table.
Not sure if I'm supposed to post my own solution, but credit to TEEKAY and Apurav for answering my question. I used the code posted by Apurav, which looks like this:
ALTER TABLE WIRE_INDEX
ALTER COLUMN DENSITY decimal(18,3) Not Null
When I pulled the table using a SELECT statement, the column showed three decimal places, but I had lost the precision of my input and had to re-enter my values using UPDATE. Not sure if this is more effective than just starting over, but it worked for me and now I know.
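The precision loss is expected: a decimal(18,2) column rounds 0.283 to 0.28 at insert time, so the third digit is gone before the ALTER ever runs, and widening the column afterwards can't recover it. A small illustration with Python's decimal module (the `store` helper is just for this sketch; it mimics the rounding a fixed-scale column applies on insert):

```python
from decimal import Decimal, ROUND_HALF_UP

def store(value: str, scale: int) -> Decimal:
    """Round the way a decimal(18, scale) column would on insert."""
    quantum = Decimal(1).scaleb(-scale)  # e.g. Decimal('0.01') for scale 2
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

as_inserted = store("0.283", 2)   # column was decimal(18,2)
print(as_inserted)                # 0.28 -- the third digit is already gone
widened = as_inserted.quantize(Decimal("0.001"))
print(widened)                    # 0.280 -- ALTER COLUMN can't bring it back
```

This is why the values had to be re-entered after the ALTER: the data on disk only ever held two decimals.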

Mapping field purpose in SchemaPropertyTypes table

I am trying Sense/Net Community edition features.
I defined and installed content type called "Vacation Request" successfully.
I want to know what is the purpose of "mapping" field in table: SchemaPropertyTypes
Many thanks,
I really, really hope you are asking this only out of curiosity, and not because you want to change something manually in the db - because it is not recommended :). Please always access the content repository through the API, do not query or modify the db directly.
Property types and values
Simple property values (like int or short text values) are stored in the FlatProperties table. This is a fixed-width table, containing a predefined number of columns dedicated to different types (e.g. x string columns, y int columns - see the column names in the table).
Property definitions are stored in the SchemaPropertyTypes table, as you have found out.
The zero-based Mapping field in the SchemaPropertyTypes table defines the column index in the FlatProperties table for a particular property. E.g. a value of a string property with mapping 6 will be stored in the FlatProperties table's 'nvarchar_7' column (note the index is shifted by one, because the column name index is one-based).
If you take a look at the PropertyInfoView view (not table), it may help clarify this: the last column of the view is a computed column that displays the column name that you can look up in the FlatProperties table.
(there are other useful SQL views there that display data in a more readable way)
Property 'overflow'
It is possible to register more properties of the same type (e.g. int) than can fit in one row in the FlatProperties table. Solution: Sense/Net stores these nodes in multiple rows - this is why there is a Page column there.
Although MS SQL Server has supported a huge number of columns for some time now, this design has been kept for compatibility reasons.
This is why you see mapping values in PropertyInfoView like 249 with column name nvarchar_10: the value is stored on page 3, which means that content occupies 3 records in the FlatProperties table.
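The mapping-to-column arithmetic above can be sketched in a few lines. The per-row column count below is an assumption, chosen purely because it makes both examples in this answer (mapping 6 giving nvarchar_7, mapping 249 giving nvarchar_10 on page 3) line up; check your actual FlatProperties schema for the real figure:

```python
# Hypothetical columns-per-type-per-row count; 80 is chosen only because
# it makes both worked examples in this answer consistent. The real value
# depends on the FlatProperties schema version.
COLUMNS_PER_TYPE = 80

def flat_column(mapping: int, prefix: str = "nvarchar") -> tuple[str, int]:
    """Return (column name, page) for a zero-based Mapping value."""
    page, index = divmod(mapping, COLUMNS_PER_TYPE)
    return f"{prefix}_{index + 1}", page  # column names are one-based

print(flat_column(6))    # ('nvarchar_7', 0)
print(flat_column(249))  # ('nvarchar_10', 3)
```

The one-based column name and the zero-based page both fall out of a single divmod, which is essentially what the computed column in PropertyInfoView does for you.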
'Other' property types
You may have noticed that in case of reference or long text properties there is no mapping. This is because we do not store them in the FlatProperties table, they have their own tables like ReferenceProperties or TextPropertiesNText.

Crystal reports linking issue

I have 2 columns, both holding patient numbers. One is defined as a string
and the other is defined as Float, null, so the link is not working. What must I do to the one defined as a number? It comes from an Excel file. I have changed the cells there to text, and the change shows when I upload to SQL Server, but Crystal Reports still sees it as a number and won't link on the string column.
I thought to add a column to the table then copy in the contents, is that the way to go?
ALTER TABLE [Programmer].[dbo].['Preventive Care-Colon Cancer Sc']
ADD PatNum nvarchar(50)
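The add-a-column-then-copy approach does work. A minimal sketch using Python's sqlite3 as a stand-in for SQL Server (table and column names are simplified stand-ins for the ones in the question); casting through an integer first avoids the float values turning into text like '10234.0':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Stand-in for the imported Excel table: patient numbers landed as floats
cur.execute("CREATE TABLE PreventiveCare (PatNum REAL)")
cur.executemany("INSERT INTO PreventiveCare VALUES (?)", [(10234.0,), (98761.0,)])

# Add the string column, then copy the contents across, going through
# an integer cast so the text doesn't keep a trailing '.0'
cur.execute("ALTER TABLE PreventiveCare ADD COLUMN PatNumText NVARCHAR(50)")
cur.execute("UPDATE PreventiveCare SET PatNumText = CAST(CAST(PatNum AS INTEGER) AS TEXT)")

print(cur.execute("SELECT PatNumText FROM PreventiveCare").fetchall())
```

Once the string column is populated, Crystal Reports can link on it against the other string column.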

SQL Server invalid column name error complaining about column long gone

EDIT: Please read my answer below first, before you decide to read and try to understand the text below. You may not find it's worth it when you see what was going on ;)
I have a weird problem: SQL Server 2008 R2 keeps complaining about an invalid column that is indeed not there anymore, but I'm not using it either!
I can't update any rows in that table anymore from within my own application, where no reference to the column can be found, because I always get this error now.
I then wanted to update straight in SSMS as a test, but when I edit the rows there, I still get this error.
What happened before: I made a column called CertcUL varchar(1), and that worked. After a while it appeared I needed it to be a varchar(30), so I edited the table design and turned it into a varchar(30).
From that moment I saw that I could only update this column when I stored 1 character. When I tried to store more, I got an error warning me about string or binary truncation. So somehow, the previous varchar(1) info was still present in the DB.
When I renamed that column to CertcUL2 or Cert_cUL, the same things kept happening! So changing the column name does not change the underlying cause. Also when just trying to add some characters straight in SSMS.
When I deleted the column and added a new varchar(30) column straight away, called 'test', the same problem remained! The new column still only allowed me to store one character! The column was the second-to-last column; making it the last column did not help either. Only when I created a new column while keeping the old one did I get a column that behaved properly.
So somehow, SQL Server seemed to keep some metadata about a column even after it had been deleted, and to go not by the name but by the order in which the columns were created.
Does anyone have an idea how this can happen, and how I can fix this besides (probably) dropping and recreating the whole table?
Thanks!
Oh my God, I feel so stupid... it's a trigger that still references this column. I only noticed it when I tried the update with an UPDATE statement; only that way did I get a proper error message, so now I know what's going on. So stupid of me not to check the triggers! Sorry about that!
More info: I had an update trigger on this table A that copies all current values to a history table B containing the same columns. I changed the length of the column CertcUL in table A, but forgot about table B. So it was very confusing to see the old column name popping up every time, and to see it complaining about string truncation while my column in table A seemed just fine.
Sorry again :)
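The trap is easy to reconstruct. A sketch using Python's sqlite3 as a stand-in for SQL Server: since SQLite doesn't enforce VARCHAR lengths, a CHECK constraint on the history table stands in for SQL Server's "string or binary data would be truncated" error (table and trigger names are simplified from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE A (id INTEGER, CertcUL VARCHAR(30))")
# History table still has the old one-character limit; the CHECK
# constraint plays the role of SQL Server's truncation error
cur.execute("""
    CREATE TABLE B (id INTEGER,
                    CertcUL VARCHAR(1) CHECK (length(CertcUL) <= 1))
""")
cur.execute("INSERT INTO A VALUES (1, 'Y')")
cur.execute("""
    CREATE TRIGGER A_history AFTER UPDATE ON A
    BEGIN
        INSERT INTO B (id, CertcUL) VALUES (OLD.id, OLD.CertcUL);
    END
""")

cur.execute("UPDATE A SET CertcUL = 'ok'")  # old value 'Y' fits in B: works
blocked = False
try:
    cur.execute("UPDATE A SET CertcUL = 'longer'")  # copying 'ok' into B fails
except sqlite3.IntegrityError as exc:
    blocked = True
    print("the error actually comes from table B:", exc)
```

The update on A looks like the culprit, but the failure is raised while the trigger writes the old value into B, which is exactly why the error seemed to come from a column that was "just fine".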

SSIS external metadata column needs to be removed

I am creating a select statement on the fly because the column names and table name can change, but they all need to go into the same data destination. There are other commonalities that make this viable; if I need to later, I will go into them. So, what it comes down to is this: I am creating the select statement with 16 columns (there will always be sixteen columns, no more, no less), but the column names can change and the table name can change. When I execute the package the select statement gets built just fine, but when the Data Flow tries to execute, I get the following error:
The "external metadata column "ColumnName" (79)" needs to be removed from the external metadata column collection.
The actual SQL Statement being generated is:
select 0 as ColumnName, Column88 as CN1, 0 as CN2, 0 as CN3, 0 as CN4,
0 as CN5, 0 as CN6, 0 as CN7, 0 as CN8, 0 as CN9, 0 as CN10,
0 as CN11, 0 as CN12, 0 as CN13, 0 as CN14, 0 as CN15 from Table3
The column 'Column88' is generated dynamically, and so is the table name. If source columns exist for the other 'as CNx' columns, they will appear the same way (Column88 as CN1, Column89 as CN2, Column90 as CN3, etc.), and the table name will always be in the form Tablex, where x is an integer.
Could anyone please help me out with what is wrong and how to fix it?
You're in kind of deep here. You should just take it as read that you can't change the apparent column names or types. The names and types of the input columns become the names and types of the metadata flowing down from the source. If you change those, then everything that depends on them must fail.
The solution is to arrange for these to be stable, perhaps by using column aliases and casts. For one table:
SELECT COLNV, COLINT FROM TABLE1
for another
SELECT CAST(COLV AS NVARCHAR(50)) AS COLNV, CAST(COLSMALL AS INTEGER) AS COLINT FROM TABLE2
Give that a try and see if it works out for you. You just really can't change the metadata without fixing up the entire remainder of the package.
I had the same issue when I had to remove a column from my stored procedure (which spits out to a temp table) in SQL and add two columns. To resolve it, I had to go through each part of my SSIS package from the beginning (the source, which in my case pulls from a temporary table) all the way through to the destination (in my case a flat file connection to a CSV). I had to re-do all the mappings along the way, and I watched for errors that came up in the GUI data flow tasks in SSIS.
This error came up for me as a red X with a circle around it; I hovered over it and it mentioned the metadata thing. I double-clicked on it and it warned me that one of my columns didn't exist anymore and asked if I wanted to delete it. I did delete it, but I can tell you that this error is really SSIS telling you that your mappings are off, and you need to go through each part of your SSIS package to make sure everything is mapped correctly.
How about using a view in front of the table and calling the view as the SSIS source? That way you can map the columns as necessary, and use ISNULL or COALESCE functions to keep consistent column patterns.
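The view idea can be sketched quickly. Using Python's sqlite3 as a stand-in for SQL Server, the view pins the column names (and fills gaps with constants) so the source metadata never changes, whatever the underlying table happens to be called; the view and table names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table3 (Column88 TEXT, Column89 INTEGER)")
cur.execute("INSERT INTO Table3 VALUES ('abc', NULL)")

# The view exposes the fixed CN1..CNx names the package expects;
# COALESCE supplies a default where the source has no value
cur.execute("""
    CREATE VIEW StableSource AS
    SELECT CAST(Column88 AS TEXT) AS CN1,
           COALESCE(Column89, 0)  AS CN2,
           0                      AS CN3
    FROM Table3
""")

print(cur.execute("SELECT CN1, CN2, CN3 FROM StableSource").fetchall())
```

When the underlying table changes, only the view definition is rebuilt; the SSIS data flow keeps seeing the same sixteen stable columns.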