BigQuery JSON special characters UI export issue - google-bigquery

I'm having trouble saving BigQuery table data that contains special characters as JSON with the proper encoding. The issue only occurs via the UI: when I save from the CLI the character encoding is correct, and the data is correctly encoded in the table itself. I'm simply clicking 'Save Result' and saving as a local JSON file. The data was also UTF-8 and properly encoded when it was loaded into the table. Sample special character: Bié
How do I save table data with special characters from the UI correctly? I can use the CLI, but it's easier to use the UI if I can get it to export correctly.

As discussed in the comments, I could not reproduce the problem. Since the results returned by the BigQuery Console and the CLI should be identical, I suggested that the issue might lie in the text editor used to open the exported file, which was confirmed.
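If you want to take the editor out of the equation entirely, here is a minimal sketch of exporting query results yourself, assuming the @google-cloud/bigquery Node.js client and a hypothetical project/dataset/table name:
```typescript
import { BigQuery } from '@google-cloud/bigquery';
import { writeFileSync } from 'fs';

// Hypothetical table reference; replace with your own.
const query = 'SELECT * FROM `my_project.my_dataset.my_table`';

async function exportAsJson(): Promise<void> {
  const bigquery = new BigQuery();
  const [rows] = await bigquery.query({ query });

  // JSON.stringify leaves characters like "Bié" intact; writing with the
  // 'utf8' encoding guarantees the file on disk is UTF-8 regardless of the
  // editor or OS defaults.
  writeFileSync('results.json', JSON.stringify(rows, null, 2), { encoding: 'utf8' });
}

exportAsJson().catch(console.error);
```
Opening the resulting file in an editor that is set to (or detects) UTF-8 should then show the accented characters correctly.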

Related

CKAN: How do I specify the encoding of my resource file?

I'm publishing my CSV files to CKAN using the API. I want to make my data easy to open in Brazilian Excel, so it must have:
columns separated by semicolons (";")
comma (",") as the decimal separator
cp-1252 encoding
I'm using Data Store and Data Pusher.
My problem is that if I upload my data encoded as cp1252, Data Pusher sends it as-is to the Data Store, which expects the data to be UTF-8. The data preview then doesn't display the accents correctly; for example, Março should be displayed, but the accented character comes out garbled.
I want my users to download the data as cp-1252 so it opens easily in Excel, but I also want CKAN to display it correctly, so I need to specify the encoding of the file while uploading it.
I couldn't specify the encoding directly, but looking at the Data Pusher source I saw that it uses the Messy Tables library. Messy Tables obeys the locale environment settings of the host machine, so I configured the locale to pt_BR.UTF8 and my accents worked fine.
Now I can publish my data with commas as the decimal separator and Windows-1252 encoding. The data opens correctly in Excel when downloaded, and it is also displayed correctly in the Data Explorer.

Proper text encoding pipeline from a JavaScript object through CSV and up to Azure SQL

Here is a scenario.
We are programmatically collecting some data into a JavaScript object which contains Polish characters.
Then we convert it to a string with the JSON2CSV library and send it to an Azure blob with the uploadBlockBlob method from the @azure/storage-blob library.
Later, an Azure Function triggered by the blob storage trigger picks it up. We get the "Myblob" property with a string containing the CSV content, use the Papaparse library to convert it back to an object, and finally use the content of the object to update the database through the mssql library.
Somewhere in this process we are losing the Polish characters.
JSON2CSV does not seem to expose an "encoding" property when converting to a string, and neither does uploadBlockBlob. With Papaparse, forcing the encoding to "UTF-8" has no effect on the process (changing it to cp1250 does not help either). The original content is scraped from a web page with software running on a Windows machine.
Any ideas how to preserve the encoding throughout the pipeline?
Closing the question, as it turned out to be my mistake: I was pushing a Varchar parameter into an NVarchar column, which is why the non-Latin characters came out wrong.
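For anyone hitting the same symptom, a minimal sketch of the fix on the database side, assuming the mssql npm package and hypothetical table/column names; the key point is binding the string as NVarChar so it is sent as Unicode into an NVARCHAR column:
```typescript
import * as sql from 'mssql';

// Hypothetical table/column names.
async function updateTitle(pool: sql.ConnectionPool, id: number, title: string): Promise<void> {
  await pool
    .request()
    .input('id', sql.Int, id)
    // Binding as sql.NVarChar sends the value as Unicode, so Polish
    // characters survive; binding as sql.VarChar is where they were
    // getting mangled.
    .input('title', sql.NVarChar(sql.MAX), title)
    .query('UPDATE dbo.Articles SET Title = @title WHERE Id = @id');
}
```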

MySQL Workbench not exporting data with UTF-8

I have a database encoded in UTF-8 and I want to export it to a dump file. The problem is that when I do, the data in the dump file is not encoded in UTF-8. Is there a way to define the encoding when creating the dump file?
Your database may have been created with an encoding other than UTF-8. You may want to refer to this article about how to change the encoding settings; once that has been changed you should be able to export correctly.
https://dev.mysql.com/doc/refman/5.0/en/charset-applications.html
This doc shows how to set the encoding per table, as well as how to change the encoding via the CLI.
Cheers.

BigQuery Backend Errors during upload operation

I want to know what errors can arise from the BigQuery server side during the upload mechanism, even though the .CSV file I'm uploading contains perfect data. Can you list those errors?
Thanks.
Some of the common errors are:
Files must be encoded in UTF-8 format.
Source data must be properly escaped within standard guidelines for CSV and JSON.
The structure of the records and the data within them must match the schema provided.
Individual files must be under the size limits listed on our quota/limits page.
More information about BigQuery source data formats.
Check out our Data Loading cookbook for additional tips.
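As an illustration, here is a minimal sketch of a load job that sets the relevant options explicitly, assuming the @google-cloud/bigquery Node.js client and hypothetical dataset, table, and file names:
```typescript
import { BigQuery } from '@google-cloud/bigquery';

async function loadCsv(): Promise<void> {
  const bigquery = new BigQuery();

  // Hypothetical dataset/table/file names.
  const [job] = await bigquery
    .dataset('my_dataset')
    .table('my_table')
    .load('data.csv', {
      sourceFormat: 'CSV',
      encoding: 'UTF-8',  // files must be UTF-8 (or declared as ISO-8859-1)
      skipLeadingRows: 1,
      autodetect: true,   // or supply a schema that matches the records
    });

  // load() waits for the job to finish; any per-row problems the backend
  // reported show up here.
  console.log(job.status?.errors ?? 'Load completed without errors');
}

loadCsv().catch(console.error);
```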

Where is some GOOD documentation on how to properly databind TinyMCE or FCKEditor in order to store in SQL database?

I have searched high and low and can only find some very bad documentation on how to properly save the data from a rich text editor to a SQL Server database. I am not working with personal profiles, I just want to understand how it is properly done, including how to properly escape said data.
Use parameterized queries and you don't need to escape or encode the data going into or coming out of the DB.
What you should be more concerned about is the composition of the HTML that you're receiving when it's rendered back out from the database. It's not really enough to trust the person submitting the HTML not to be malicious.
Does the HTML contain script? Does the HTML contain XSS attacks? Does any formatting or CSS embedded in the HTML break your page? Does unclosed markup in the HTML break your page?
One simple way would be to HtmlEncode the content of the TinyMCE control when saving and Decode it when retrieving.
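To make this concrete, here is a minimal sketch assuming the mssql npm package and hypothetical table/column names; the editor's HTML goes in as a bound parameter (so no manual SQL escaping), and the optional encode-on-save step from the last suggestion is shown with a small helper:
```typescript
import * as sql from 'mssql';

// Minimal stand-in for an HtmlEncode helper (the original answer assumes a
// platform utility such as .NET's HttpUtility.HtmlEncode).
function htmlEncode(html: string): string {
  return html
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Hypothetical table/column names. The parameter binding handles the SQL
// side; htmlEncode() implements the "encode when saving, decode when
// retrieving" approach.
async function savePost(pool: sql.ConnectionPool, body: string): Promise<void> {
  await pool
    .request()
    .input('body', sql.NVarChar(sql.MAX), htmlEncode(body))
    .query('INSERT INTO dbo.Posts (Body) VALUES (@body)');
}
```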