Repast: Batch run produces corrupted UTF-8 characters - repast-simphony

My model has a DataWriter class that outputs a CSV file containing some UTF-8 characters in the header line. The characters display correctly in the output CSV when I run the model in GUI mode. However, when I run it in batch mode, the UTF-8 characters in the header line of the output file are corrupted.
What's the underlying cause of this issue in batch mode?
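A likely cause (an assumption based on how Java selects charsets, not something confirmed in the post): the GUI launcher sets the JVM's default charset, while the separate JVM processes spawned for batch runs fall back to the platform default, so any writer created without an explicit charset (e.g. `FileWriter`) encodes the header in that default. One workaround is to force UTF-8 on the batch JVM; the entry point, jar name, and properties file below are illustrative placeholders, not taken from the original post:

```shell
# Sketch: force the batch JVM's default charset to UTF-8.
# Class name, jar, and properties file are placeholders.
java -Dfile.encoding=UTF-8 -cp complete_model.jar \
     repast.simphony.batch.LocalDriver local_batch_run.properties
```

A more robust fix is to make the DataWriter independent of the JVM default, e.g. by constructing it with `new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8)` instead of a bare `FileWriter`.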

Related

Dymola converting output files to sdf - doesn't work for large files?

After the simulation finishes, Dymola runs dsres2sdf.exe to convert the results to SDF format (if that option is enabled in the Output tab of the simulation setup).
Usually this runs smoothly, but sometimes it generates an SDF file that is very small (800 bytes) and empty.
Starting dsres2sdf.exe manually from the command line generates the same empty file.
I suspect this happens when the *.mat file is very large (>1 GB).
Does anybody have a clue how to get a proper SDF file?
The SDF Editor and the SDF libraries for Python and MATLAB can read Dymola result files (*.mat) transparently (as if they were SDFs) and allow you to save them as *.sdf.
For example with Python:
import sdf
# load the Dymola result file
data = sdf.load('DoublePendulum.mat')
# re-save as SDF
sdf.save('DoublePendulum.sdf', data)

error: failed to encode '---------_dict.sql' from UTF-8 to Windows-1250

When I clone the repository, I get this error from git:
error: failed to encode '---------_dict.sql' from UTF-8 to Windows-1250.
Then, when I try to commit and push, I get the same error for the same files with the .sql extension. Does anyone have an idea, or has anyone had a similar problem? Could it be related to the .gitattributes file, which contains
*.sql text working-tree-encoding=Windows-1250
This error message means that some part of the conversion failed, most likely because the contents of the file cannot be converted to windows-1250. It's likely that the file contains UTF-8 sequences corresponding to Unicode characters that have no representation in windows-1250.
You should contact the author of the repository and notify them of this problem and ask them to fix it. In your local system, you can add .git/info/attributes which has the following to force the files to UTF-8 instead:
*.sql text working-tree-encoding=UTF-8
Note that if you do this, you must ensure that the files you check in are actually UTF-8 and not windows-1250.

How to resolve CSV To BigQuery Load Error

I'm facing the error below while loading a CSV file into a BigQuery table. We didn't face this problem when loading files that were terabytes in size:
'Error while reading data, error message: The options set for reading CSV prevent BigQuery from splitting files to read in parallel, and at least one of the files is larger than the maximum allowed size when files cannot be split. Size is: 7561850767. Max allowed size is: 4294967296.'
The limit for compressed files is 4 GB.
If your file is not compressed, you should check whether it contains any double-quote characters ("). Unmatched double quotes can produce a single huge field (greater than 4 GB) that cannot be split.
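As a quick heuristic check (a sketch, not a full CSV validation): in well-formed CSV, double quotes come in pairs — both as field delimiters and as escaped quotes ("") — so an odd total count indicates an unmatched quote. The sample file here is made up for illustration:

```shell
# Count double quotes in the file; an odd total suggests an unmatched one.
printf 'a,"b,c\nd,e,f\n' > sample.csv        # contains one unmatched quote
quotes=$(tr -cd '"' < sample.csv | wc -c)
if [ $((quotes % 2)) -ne 0 ]; then
    echo "odd quote count: likely unmatched quote"
fi
```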
You can try loading the file from command line using something like:
bq --project_id <project_id> load --source_format=CSV --autodetect --quote $(echo -en '\000') <dataset.table> <path_to_source>
The idea is to disable the default quote character, which is the double quote (").
Please refer to the bq CLI documentation for the exact command.

UTF-8 encoding for .sql files created or modified in SSMS

The default encoding for files saved by my SSMS (v18) is ASCII, not UTF-8.
I’ve tried the steps below to set the default encoding to UTF-8 (so that I wouldn’t have to remember to set it every time I create/modify a file), but the default encoding remains ASCII.
Outside of SSMS, I can change the encoding in a text editor, but I don’t want to have to do that every time.
Have you encountered this issue and, if so, how did you resolve it?
Steps I tried:
From within SSMS, open the “template” file: C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\SqlWorkbenchProjectItems\Sql\SQLFile.sql.
Re-save using correct encoding:
File => Save As
Click the arrow next to the Save button
Choose the relevant encoding: Unicode (UTF-8 with signature) - Codepage 65001
This is supposed to result in all new query windows having UTF-8 as the default encoding. But, this doesn’t work for me.

gzip several files and pipe them into one input

I have a program that takes one argument, a source file, and parses it. I have several gzipped files that I would like to parse, but since the program takes only one input, I'm wondering whether there is a way to combine them with gzip into one large file and pipe that into the single input.
Use zcat - you can give it multiple input files, and it will decompress them and concatenate them just as cat would. If your parser supports piped input on stdin, you can pipe zcat's output directly into it; otherwise, redirect the output to a file and invoke your parser on that file.
If the program actually expects a gzip'd file, pipe the output from zcat into gzip to recompress the combined data into a single gzip'd file.
http://www.mkssoftware.com/docs/man1/zcat.1.asp
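A minimal sketch of both approaches (the file names are placeholders, and `wc -l` stands in for the parser):

```shell
# Create two small gzipped inputs for demonstration.
printf 'line1\n' | gzip > part1.gz
printf 'line2\n' | gzip > part2.gz

# zcat decompresses and concatenates, like cat for .gz files:
zcat part1.gz part2.gz | wc -l        # counts 2 lines across both files

# If the parser insists on a gzip'd file, recompress the combined stream:
zcat part1.gz part2.gz | gzip > combined.gz
```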