I have a tab-delimited flat file. One of the columns, called earlydate, has values like:
18-08-2016 08:12:21
Can anyone suggest what the best datatype would be for this column in the table, other than VARCHAR or NVARCHAR? I don't want to treat it as a string.
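If the target is SQL Server, datetime2 would usually be the type to reach for here. As a rough sketch (assuming the dd-mm-yyyy format shown above), the string converts with style 105:

SELECT CONVERT(datetime2, '18-08-2016 08:12:21', 105);

A common pattern is to bulk load the file into a varchar staging column first and then convert into the datetime2 target column with that expression.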
I have a table with a column named Logo, which has a datatype of nvarchar(max), but the content is already in Base64 format. I want to move the data from this column to another column which has a datatype of varbinary(max). If I use the CONVERT function, it converts the character representation of the logo bytes to varbinary rather than the underlying bytes themselves. How can I do this?
For example, in the Logo column, which has the nvarchar(max) data type, I have this -
'iVBORw0KGgoAAAANSUhEUgAAAFoAAABaCAYAAAA4qEECAAAACXBI......'
and I want to move exactly the same value to another column which has a datatype of varbinary(max).
Thanks in advance
I am reading between the lines here, but I think the OP is saying that they have data in a varchar column with values like '0x1A23494947D324B'. As a result, something like SELECT CONVERT(varbinary,'0x1234'); doesn't return what the OP expects (it returns 0x307831323334, the byte values of the characters).
You need to use a style code here, to let SQL Server know that the value is already in a varbinary format:
SELECT CONVERT(varbinary(MAX),'0x1234',1);
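As background: with style 0 (the default) the characters themselves are converted to their byte values, which is the surprising result above; style 1 expects the string to start with 0x, and style 2 expects a hex string without the prefix. A hypothetical application to the OP's table, assuming (as this answer does) that the stored strings really are 0x-prefixed hex; the table name and the LogoBinary target column are made up:

UPDATE dbo.MyTable
SET LogoBinary = CONVERT(varbinary(MAX), Logo, 1);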
I have a table with two varchar columns where the raw data is stored. This raw data contains a value and the field name. The field name looks like my_cursor.attribute1, and this cursor is of the data type my_custompackage.custom_data_type.
Now, I need to get the data type of the attribute present in the custom_data_type.
This is not valid syntax, but to give an idea, it's something like
my_custompackage.custom_data_type.attribute13
But so far, I couldn't achieve anything. I have tried selecting the value into a separate variable, like
select field_name into temp_variable from dual;
Then
select dump(temp_variable) into my_data_type from dual;
but it didn't work and I was getting the string value. So, could you please tell me how to proceed with this?
I am importing data into Redshift using the SQL COPY statement. The data has comma thousands separators in the numeric fields, which the COPY statement rejects.
The COPY statement has a number of options to specify field separators, date and time formats and NULL values. However I do not see anything to specify number formatting.
Do I need to preprocess the data before loading, or is there a way to get Redshift to parse the numbers correctly?
Import the columns as the TEXT data type into a temporary table.
Then insert from the temporary table into your target table, having the SELECT statement for the INSERT replace the commas with empty strings and cast the values to the correct numeric type.
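A minimal sketch of that staging approach; the table names, column names, S3 path, and IAM role are all placeholders, and the COPY options will depend on your file format:

CREATE TEMP TABLE staging_numbers (amount_raw TEXT);

COPY staging_numbers
FROM 's3://my-bucket/data.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
CSV;

INSERT INTO target_table (amount)
SELECT CAST(REPLACE(amount_raw, ',', '') AS DECIMAL(12, 2))
FROM staging_numbers;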
How can I define a table type in SQL Server when one of the columns is an array of decimals?
I'm trying to pass a .NET object to a stored procedure, and one of its fields is an array of decimals.
Thanks
T-SQL does not support arrays.
You do, however, have some options; here are three of them, from best to worst:
Create two table types and have a column in one act as a foreign key to the other (see the sketch after this list).
Create a table type with a varchar(max) column that will hold your decimal values as a comma-delimited string.
Create a table type with an xml data type column.
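A minimal sketch of the first option; the type and column names are invented, and since table types cannot carry real foreign key constraints, the link between them is only by convention:

CREATE TYPE dbo.OrderListType AS TABLE
(
    OrderId      INT NOT NULL PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL
);

CREATE TYPE dbo.OrderAmountListType AS TABLE
(
    OrderId INT NOT NULL,              -- refers to OrderListType.OrderId by convention
    Amount  DECIMAL(18, 4) NOT NULL
);

The stored procedure would then take two table-valued parameters, one of each type, and join them on OrderId to rebuild the array per row.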
I am using bulk insert to insert data from a csv file to a SQL table. One of the columns in the csv file is an "ID" column, i.e. each cell in the column is an "ID number" that may have leading zeros. Example: 00117701, 00235499, etc.
The equivalent column in the SQL table is of varchar(255) type.
When I bulk insert the data into the table, the leading zeros in each element of the "ID" column disappear. In other words, 00117701 becomes 117701, etc.
Is this a column type problem? If not, what's the best way to overcome this problem?
Thanks!
Not sure what is causing it to strip off the leading zeroes, but I had to 'fix' some data in the past and did something like this:
UPDATE <table> SET <field> = RIGHT('00000000'+cast(<field> as varchar(8)),8)
You may need to adjust it a bit for your purposes, but maybe you get the idea from it?
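A hypothetical application of that pattern to the OP's case; the table name, column name, and the fixed width of 8 are assumptions:

-- Re-pad the IDs to 8 characters with leading zeroes
UPDATE dbo.ImportedData
SET Id = RIGHT('00000000' + CAST(Id AS varchar(8)), 8)
WHERE LEN(Id) < 8;

The WHERE clause limits the update to values that actually lost their leading zeroes, so IDs that are already 8 characters long are left untouched.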