how to import csv with emojis in it - pandas

I'm a bit new to Python and I'm having trouble importing a csv dataset that has emojis in it.
I've been using:
pd.read_csv(DATA, encoding='ISO-8859-1')
and the emojis are turning into gibberish, for example: 🙌 ---> ð\n\n
I've tried changing the import to 'UTF-8', but that just prevents the import from working at all, with the following error: 'utf-8 codec can't decode bytes in position...'
I'm not really sure why the UTF-8 import doesn't work either. When I open the file with Notepad++ and check the encoding, it shows UTF-8.
My eventual goal is to convert the emojis to meaningful text with demojize from the emoji package. However, I'm pretty sure the emojis need to be in UTF-8 for that to work.
Any ideas?
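For the demojize step itself, here is a minimal sketch, assuming the file can eventually be read with the correct encoding (the file and column names below are made up):

import pandas as pd
import emoji  # pip install emoji

# Hypothetical names; assumes the csv really is UTF-8, as Notepad++ reports.
df = pd.read_csv('data.csv', encoding='utf-8')

# demojize turns each emoji into a readable name, e.g. 🙌 -> ':raising_hands:'
df['text'] = df['text'].apply(emoji.demojize)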

Related

Fix Unicode Decode Error Without Specifying Encoding='UTF-8'

I am getting the following error:
'ascii' codec can't decode byte 0xf4 in position 560: ordinal not in range(128)
I find this very weird given that my .csv file doesn't have special characters. Perhaps it has special characters that specify header rows and so on; I don't know.
But the main problem is that I don't actually have access to the source code that reads in the file, so I cannot simply add the keyword argument encoding='UTF-8'. I need to figure out which encoding is compatible with codecs.ascii_decode(...). I DO have access to the .csv file that I'm trying to read, and I can adjust the encoding to that, but not the source file that reads it.
I have already tried exporting my .csv file into Western (ASCII) and Unicode (UTF-8) formats, but neither of those worked.
Fixed. It had nothing to do with unicode shenanigans; my script was writing a parquet file when my CloudFormation template was expecting a csv file. Thanks for the help.

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 990: invalid continuation byte

I am comparing two csv files to each other to produce a final file with the gathered difference information, and it's giving me this error message. I have re-saved all the files as csv encoded with UTF-8 and tried running again, but it does not work. Can someone help me?
The problem is that your file is not in UTF-8 format. Many tools will refuse to handle data that is claimed to be UTF-8 but isn't. I'd check first if that file is actually UTF-8 or is stored in some different encoding.
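One way to check, as a minimal sketch (the file name is hypothetical): decode the raw bytes as UTF-8 yourself and let the exception pinpoint the first offending byte.

# Read the raw bytes and attempt a strict UTF-8 decode.
with open('filename.csv', 'rb') as f:
    data = f.read()
try:
    data.decode('utf-8')
    print('File is valid UTF-8')
except UnicodeDecodeError as e:
    print('Not UTF-8:', e)  # shows the byte value and its position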

CSV file has "ñ" in headers, I can't read it with pandas

I'm trying to read a csv with pandas; it has a header "año".
This is the unicode error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd1 in position 1: invalid continuation byte
How can I read this csv file? I have a lot of files with this problem.
It is not in UTF-8 format. You need to pass the encoding ISO-8859-1 to pandas.
You should post the pandas code where it's specifying UTF-8
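For reference, a minimal sketch of that (the file name is hypothetical); in ISO-8859-1 accented characters are single bytes, which is consistent with the lone 0xd1 byte in the error:

import pandas as pd

# Latin-1 maps every byte to a character, so the accented header survives intact.
df = pd.read_csv('archivo.csv', encoding='ISO-8859-1')
print(df.columns)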

Encoding Error when Reading .dta Files with Chinese Characters

I am trying to read .dta files with pandas:
import pandas as pd
my_data = pd.read_stata('filename', encoding='utf-8')
the error message is:
ValueError: Unknown encoding. Only latin-1 and ascii supported.
Other encodings didn't work either, such as gb18030 or gb2312 for dealing with Chinese characters. If I remove the encoding parameter, the DataFrame is full of garbage values.
Simply read the original data with the default encoding, then convert it to the expected encoding. Suppose the column with the garbled text is column1:
import pandas as pd

# Read with pandas' default decoding, which yields mis-decoded latin-1 text.
dta = pd.read_stata('filename.dta')

# Turn the garbled string back into its original bytes, then decode as gb18030.
print(dta['column1'][0].encode('latin-1').decode('gb18030'))
The printed result will show normal Chinese characters, and gb2312 works as well.
Looking at the source code of pandas (version 0.22.0), the supported encodings for read_stata are ('ascii', 'us-ascii', 'latin-1', 'latin_1', 'iso-8859-1', 'iso8859-1', '8859', 'cp819', 'latin', 'latin1', 'L1'). So you can only choose from this list.
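Assuming every value in the column was mis-decoded the same way, the same round-trip can be applied to the whole column with pandas' string accessor (a sketch, not specific to your data):

import pandas as pd

dta = pd.read_stata('filename.dta')
# Undo the latin-1 decoding for the entire column, then decode as gb18030.
dta['column1'] = dta['column1'].str.encode('latin-1').str.decode('gb18030')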

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 3: invalid continuation byte

I'm trying to load a csv file using pd.read_csv but I get the following unicode error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 3: invalid continuation byte
Unfortunately, CSV files have no built-in method of signalling character encoding.
read_csv defaults to guessing that the bytes in the CSV file represent text encoded in the UTF-8 encoding. This results in UnicodeDecodeError if the file is using some other encoding that results in bytes that don't happen to be a valid UTF-8 sequence. (If they by luck did also happen to be valid UTF-8, you wouldn't get the error, but you'd still get wrong input for non-ASCII characters, which would be worse really.)
It's up to you to specify what encoding is in play, which requires some knowledge (or guessing) of where it came from. For example if it came from MS Excel on a western install of Windows, it would probably be Windows code page 1252 and you could read it with:
pd.read_csv('../filename.csv', encoding='cp1252')
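If you have no idea where the file came from, a character-set detector can make an educated guess. A minimal sketch using the third-party chardet package (the file name is hypothetical):

import chardet  # pip install chardet

# Sniff a chunk of the raw bytes; detection is a statistical guess, not a guarantee.
with open('filename.csv', 'rb') as f:
    sample = f.read(100_000)
print(chardet.detect(sample))  # e.g. {'encoding': 'Windows-1252', 'confidence': 0.7, ...}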
I got the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 51: invalid continuation byte
This was because I had made changes to the file and its encoding. You could also try changing the file's encoding to UTF-8, either in code or with the Notepadqq (nqq) editor on Ubuntu, which provides an option to change the encoding. If the problem remains, try undoing all the changes made to the file, or change the directory.
Hope this helps
I had this same issue recently. This is what I did:
import pandas as pd
data = pd.read_csv(filename, encoding='unicode_escape')
Copy the code, open a new .py file, enter the code, and save.
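One caveat before relying on this: 'unicode_escape' avoids the error by decoding each byte as Latin-1 (plus backslash escapes), so non-ASCII text still comes out garbled. A tiny sketch illustrating that:

# 'é' encoded as UTF-8 is the two bytes 0xC3 0xA9.
raw = 'é'.encode('utf-8')
print(raw.decode('unicode_escape'))  # prints 'Ã©', not 'é'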