Failed to create table: Error while reading data, error message: CSV table references column position 2, but line starting at position:0 contains only 1 columns.
I am working on a project and trying to create a table by importing a CSV file, and I get the error above.
Please let me know what information would help answer this question.
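For what it's worth, the column position in this error is 0-indexed, so "references column position 2" means the load expects at least 3 columns, while the failing line parsed as just one. That usually means the file's delimiter doesn't match what the load job assumes. A minimal sketch with the bq CLI, assuming a semicolon-delimited file and made-up dataset, table, and column names:
# All names below are placeholders. --field_delimiter tells BigQuery how
# to split each line, so a row is not parsed as a single column.
bq load \
  --source_format=CSV \
  --field_delimiter=';' \
  --skip_leading_rows=1 \
  mydataset.mytable \
  gs://mybucket/myfile.csv \
  col1:STRING,col2:STRING,col3:STRING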
Related
I created a table in Hive with below definition:
CREATE TABLE Movie (Rank INT, Name STRING, Actor STRING, Year STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY "/t"
STORED AS TEXTFILE;
After running the above definition, I try to load a tab-separated file that I created myself, from local. Once it is loaded, running
select * from Movie;
returns only NULL values.
Can anyone please help me here? When I try the same thing with CSV files, the query shows all records.
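One thing worth checking first: in HiveQL the tab character is written as '\t', whereas "/t" is a literal slash followed by a t. With a delimiter that never matches, every tab-separated row parses into a single field and the remaining columns come back NULL, which would also fit the CSV version working. A sketch of the corrected definition, assuming the file really is tab-separated:
-- '\t' (backslash-t) is the tab escape; "/t" is a two-character literal
-- delimiter that never occurs in tab-separated data.
CREATE TABLE Movie (Rank INT, Name STRING, Actor STRING, Year STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;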
I'm getting an error while trying to transfer a file from Google Cloud Storage to Google BigQuery. This is the error:
Error while reading data, error message: CSV table references column position 101, but line starting at position:2611 contains only 101 columns
There was a new field that was recently added, so we believe this may be the issue because, out of many loads, only 3 per day are working.
When I read this error, I understand it as the line starting in the incorrect column, but correct me if I am wrong.
Can this be corrected?
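Two details of the message are easy to misread: "position:2611" is a byte offset into the file, not a column, and the column position is 0-indexed, so "references column position 101" means the schema expects at least 102 columns while that line only has 101. That is exactly the pattern you would see if the new field was appended to the schema but some files don't carry it yet. One way to confirm the expected column count, with placeholder dataset and table names:
# Prints the destination table's schema as JSON; the number of entries is
# the column count each CSV row must satisfy.
bq show --schema --format=prettyjson mydataset.mytable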
I have a JSON file. I want to move only selected fields to a Hive table. Below is the statement I used to create a new table importing the data from the JSON file into the Hive table. Creating it doesn't give any error, but when I run select * from JsonFile1 or select count(*) from JsonFile1 I get the error: Failed with exception java.io.IOException: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
I have browsed the internet and have been stuck on this for a few days; I can't find a solution. I checked in HDFS: the table was created and the complete file was imported as-is (not just the fields I selected, but all of it). I only provided sample data; the actual data contains 50+ field names, and creating all the column names is cumbersome. Is that what we need to do? Thank you in advance.
CREATE EXTERNAL TABLE JsonFile1(user STRUCT<id:BIGINT,description:STRING, followers_count:INT>)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION 'link/data';
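For what it's worth, one common cause of this Long-to-Integer complaint is the INT inside the struct: several Hive JSON SerDes deserialize every JSON number as a Java Long, so reading it back into an INT column fails with exactly this cast error. A sketch of the same definition with the wider type, assuming the same SerDe and location:
-- BIGINT matches the Long the SerDe produces, so no Long -> Integer
-- cast is needed at read time.
CREATE EXTERNAL TABLE JsonFile1(user STRUCT<id:BIGINT, description:STRING, followers_count:BIGINT>)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION 'link/data';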
I have data as below:
{filter_level":"low",geo":null,"user":{"id":859264394,"description":"I don’t want it. Building #techteam, #LetsTalk!!! def#abc.com",
"contributors_enabled":false,"profile_sidebar_border_color":"C0DEED","name"krogmi",
"screen_name":"jkrogmi","id_str":"859264394",}}06:20:16 +0000 2012","default_profile_image":false,"followers_count":88,
"profile_sidebar_fill_color":"DDFFCC","screen_name":"abc_abc"}}
Answering my own question.
I deleted the data in HDFS that the LOCATION '...' clause was pointing to, copied the data from local to HDFS again, recreated the table, and it worked.
I am assuming the data was the problem.
I have a CSV file I'm looking to load into a SQL Server table using the "type" command.
Code: type yourfilename
When looking in the command prompt, it's breaking the file's lines into two different rows and inserting them separately into my destination table.
Example:
"Manheim Chicago","Manheim","IL","199004520601","On
Block","2D4FV47V86H126473","2006","DODGE","MAGNUM 4X2 V6"
I want the solution to look like this, where this would be one record in the table:
Solution Pic: https://i.stack.imgur.com/Bkgf6.png
Does anyone know how to format a type command so it displays a full record without line breaks?
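The two-line display is probably not a formatting problem with type at all: the "On Block" value appears to contain a line break inside its quotes, and type prints the file byte-for-byte, so the record genuinely spans two physical lines. A CSV-aware bulk load keeps the quoted field intact instead of splitting on the embedded newline; a sketch using BULK INSERT's CSV mode (available from SQL Server 2017; table name and file path are made up):
-- FORMAT = 'CSV' parses the file per RFC 4180, so a line break inside a
-- quoted field stays part of one record rather than starting a new row.
BULK INSERT dbo.Auctions
FROM 'C:\data\auctions.csv'
WITH (FORMAT = 'CSV', FIELDQUOTE = '"');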
I'm trying to import a CSV to a BigQuery table from the user interface. My import job fails with the message:
Too many errors encountered. (error code: invalid)
gs://foo/bar.csv: CSV table references column position 15, but line starting at position:29350998 contains only 15 columns. (error code: invalid)
I'm assuming this means the importer doesn't like null fields in source data without an actual null string. Is there a way to make the UI allow jagged rows on import? Or, if not, what CLI command should I use to import my CSV file to a table this way?
The UI has an Allow jagged rows checkbox that you can select. Did you try that? It's part of the Options for the Create Table wizard.
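For the CLI half of the question, the same option exists as a flag on bq load; a minimal sketch, with placeholder dataset and table names:
# --allow_jagged_rows loads short rows and fills the missing trailing
# columns with NULLs instead of failing the job.
bq load --source_format=CSV --allow_jagged_rows mydataset.mytable gs://foo/bar.csv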