Create table and upload data in Teradata - sql

I am trying to upload 3,000 records to a Teradata table and I am getting the following error:
Error reading import file at record 1: Index and length must refer to
a location within the string
I am importing the data from a .txt file and loading it with the following code:
-- Create Table
CT mytable
( col1 VARBYTE(35));
-- Insert data
INSERT INTO mytable VALUES (?);
The text file looks something like this
812619
816625
2B01112
...
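One hedged guess, since this error message comes from the client rather than the database: SQL Assistant imports a VARBYTE column by interpreting each text value as a hexadecimal string, and a value like 2B01112 has an odd number of characters, which can break that conversion. If the values are really character data rather than raw bytes, a VARCHAR column sidesteps the conversion entirely; a minimal sketch:
-- Assumption: the values are text, not binary
CREATE TABLE mytable
( col1 VARCHAR(35));
INSERT INTO mytable VALUES (?);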

Related

ERROR: Import .txt to table reports missing data for column - POSTGRESQL

I am importing a .txt file into a PostgreSQL database table, but I am getting an error:
ERROR:
missing data for column "filename" SQL state: 22P04
My table structure is:
Table name: testdata_table
Column names:
fileid (NOT NULL, PRIMARY KEY)
sno_id (NOT NULL)
file_name (NOT NULL)
The .txt contents are
19876
derek1.txt
19876
derek2.txt
19876 should be saved in sno_id and derek1.txt under file_name, and so on.
I tested using INSERT commands on one or two rows and did not face any issues uploading the contents to the table.
SQL
insert into testdata_table (sno_id, file_name) values (19876, 'derek1.txt');
insert into testdata_table (sno_id, file_name) values (19876, 'derek2.txt');
select * from testdata_table;

fileid  sno_id  file_name
1       19876   derek1.txt
2       19876   derek2.txt
(fileid is autogenerated)
PROBLEM STARTS HERE
When you right-click a table name (in this case testdata_table), there is an option to upload data (using Import/Export).
I chose Import to upload the .txt file into testdata_table, and in the columns category I chose only sno_id and file_name.
Only the data for sno_id and file_name are in the .txt file; fileid is auto-generated.
There is no space after each value (i.e., 19876 or derek1.txt).
When I try to upload the contents into the table, I get the error "missing data for column file_name".
Why am I encountering this error?
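A likely cause, given the file layout shown above: the import (which wraps PostgreSQL's COPY) reads one table row per line of the file, so a line containing only 19876 has no value for file_name, hence "missing data for column". A sketch of one fix, assuming the file can be rewritten with both values on each line (the file path here is hypothetical):
-- data.txt reshaped to one row per line:
--   19876,derek1.txt
--   19876,derek2.txt
COPY testdata_table (sno_id, file_name)
FROM '/path/to/data.txt'  -- hypothetical path
WITH (FORMAT csv, DELIMITER ',');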

How to handle the embedded commas in hive?

For example, if I have a CSV file with three columns:
sno,name,salary
1,latha, 2000
2,Bhavish, Chaturvedi, 3000
How do I load this type of file into Hive? I tried a few of the posts from Stack Overflow, but they didn't work.
I have created an external table:
create external table test(
id int,
name string,
salary int
)
row format delimited fields terminated by '\;'
stored as textfile;
and loaded the data into it.
But when I do a select * from the table, I get all NULLs.
I think the CSV file has a header row with column names, so you have to skip the header to avoid the error. Follow these steps:
Step 1: Create the table, e.g.
CREATE TABLE salary (sno INT, name STRING, salary INT)
row format delimited fields terminated BY ',' stored as textfile
tblproperties("skip.header.line.count"="1");
Step 2: Load the CSV file into the table, e.g.
load data local inpath 'file path' into table salary;
Step 3: Test the records:
select * from salary;
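Skipping the header alone won't handle the embedded comma in Bhavish, Chaturvedi, though. One hedged option, assuming fields containing commas are quoted in the source file, is the OpenCSVSerde that ships with Hive (note it reads every column as STRING, so numeric columns need casting in queries):
CREATE EXTERNAL TABLE test (
sno STRING,
name STRING,
salary STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"")
STORED AS TEXTFILE
TBLPROPERTIES ("skip.header.line.count" = "1");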

Getting Error 10293 while inserting a row into a hive table having an array as one of the fields

I have a hive table created using the following query:
create table arraytbl (id string, model string, cost int, colors array <string>,size array <float>)
row format delimited fields terminated by ',' collection items terminated by '#';
Now, while trying to insert a row:
insert into arraytbl values
("AA","AAA",5600,colors("red","blue","green"),size(5.6,4.3));
I get the following error:
FAILED: SemanticException [Error 10293]: Unable to create temp file for insert values Expression of type TOK_FUNCTION not supported in insert/values
How can I resolve this issue?
The syntax to insert values into a complex datatype is, in my personal opinion, a bit weird.
You need a dummy table to insert values into a Hive table with a complex datatype.
insert into arraytbl
select "AA", "AAA", 5600,
       array("red","blue","green"),
       array(CAST(5.6 AS FLOAT), CAST(4.3 AS FLOAT))
from (select 'a') x;
And this is how it looks after insert.
hive> select * from arraytbl;
OK
AA AAA 5600 ["red","blue","green"] [5.6,4.3]
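To verify individual elements after the insert, arrays can be indexed directly; a small follow-up query (backticks around size since it is also a Hive function name):
select colors[0], `size`[1] from arraytbl;
-- red  4.3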

blob to table mssql

I have a table with a field for text files. These files have multiple lines that I need to copy to a temporary table. I tried using Bulk Import like this:
Bulk Import MyTable From (Select File From FilesTable where Key = 1)
but with no success (Incorrect syntax near "(").
Some people have suggested that I use the path of the file, but that's not an option because the files are stored in a table.
AFAICT this can be done with a simple INSERT INTO ... SELECT statement:
INSERT INTO MyTable (
File
)
SELECT
[File]
FROM
FilesTable
WHERE
[Key]=1;
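That copies the whole file as a single row, though. If the goal is one temp-table row per line of the file, here is a hedged sketch, assuming SQL Server 2016+ (for STRING_SPLIT) and that [File] can be cast to varchar(max):
INSERT INTO MyTable ([File])
SELECT s.value
FROM FilesTable
CROSS APPLY STRING_SPLIT(
    REPLACE(CAST([File] AS varchar(max)), CHAR(13), ''),  -- drop carriage returns
    CHAR(10)                                              -- split on line feeds
) AS s
WHERE [Key] = 1;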

BulkInsert CSV file to table SQL Server

I'm having a hard time inserting my CSV file from a location into a table that will be used for making reports and for data extraction matched against other data.
Create table #PD_ABC (
Column1
Column2 etc etc
)
BULK INSERT #PD_ABC FROM 'F:\BulkInsert\Andrej\UtkastAntal(23)Export20141003.csv'
WITH (FIELDTERMINATOR = ';',CODEPAGE = 'RAW',ROWTERMINATOR = '0x0a')
insert into Maintenance.dbo.PD_ABC_Del1
select * from #PD_ABC
So far I suppose everything should work. I made a similar script for .txt files, but when it comes to CSV somehow I cannot import them correctly.
This is the error message I've been receiving.
Msg 4863, Level 16, State 1, Procedure PD_ABC_SP, Line 49
Bulk load data conversion error (truncation) for row 1, column 3 (Gldnr).
No idea how to move forward from this.
It looks like your third column (Gldnr, per the error message) isn't wide enough for the data. Is it of type char or varchar? If so, you should give it more characters.
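A hedged sketch of the fix, widening the staging column named in the error message (Gldnr is column 3 per the message; the other column names and the length of 255 are assumptions):
CREATE TABLE #PD_ABC (
    Column1 varchar(255),
    Column2 varchar(255),
    Gldnr   varchar(255)  -- widened so row 1's value no longer truncates
);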