Additional spaces in String after reading a text file into a String using FileInputStream

I'm trying to read a text file into a String variable. The text file has multiple lines.
Having printed the String to test the read-in code, I see an additional space between every character. As I am using the String to generate character bigrams, the spaces are making the sample text useless.
The code is:
try {
    FileInputStream fstream = new FileInputStream(textfile);
    DataInputStream in = new DataInputStream(fstream);
    BufferedReader br = new BufferedReader(new InputStreamReader(in));
    String strLine;
    // Read the corpus file line by line, concatenating each line to the String "corpus"
    while ((strLine = br.readLine()) != null) {
        corpus = corpus.concat(strLine);
    }
    in.close(); // Close the input stream
}
catch (Exception e) { // Catch exception if any
    System.err.println("Error test check: " + e.getMessage());
}
I'd be grateful for any advice.
Thanks.

Your text file is likely to be UTF-16 (Unicode) encoded. UTF-16 takes two or four bytes to represent each character. For most Western text, the second byte of each two-byte character is zero, so when the file is read with an 8-bit encoding those "in-between" bytes are non-printable and will look like spaces.
You can use the second argument of InputStreamReader to specify the encoding.
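For example, if the file turns out to be UTF-16, a minimal sketch of the fix, reusing the textfile and corpus variables from the question (substitute whatever encoding the file really uses):
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Decode the stream explicitly instead of relying on the platform default
try (BufferedReader br = new BufferedReader(
        new InputStreamReader(new FileInputStream(textfile), StandardCharsets.UTF_16))) {
    String strLine;
    while ((strLine = br.readLine()) != null) {
        corpus = corpus.concat(strLine);
    }
}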
Alternatively, convert the text file itself to an 8-bit encoding such as UTF-8 (iconv on Unix, the Save As... dialog in Notepad on Windows).

Related

Converting XDP to PDF using LiveCycle replaces question marks (?) with spaces in multiple places. How can that be fixed?

I have been trying to convert XDP to PDF using Adobe LiveCycle. Most of my forms turn out fine, but while converting some of them I find "???" in place of blank spaces at certain places. Any suggestions on how I can rectify that?
Below is the code snippet that I am using:
public byte[] generatePDF(TextDocument xdpDocument) {
    try {
        Assert.notNull(xdpDocument, "XDP Document must be passed.");
        Assert.hasLength(xdpDocument.getContent(), "XDPDocument content cannot be null");

        // Create a ServiceClientFactory object
        ServiceClientFactory myFactory = ServiceClientFactory.createInstance(createConnectionProperties());

        // Create an OutputClient object
        FormsServiceClient formsClient = new FormsServiceClient(myFactory);
        formsClient.resetCache();

        String text = xdpDocument.getContent();
        String charSet = xdpDocument.getCharsetName();
        if (charSet == null || charSet.trim().length() == 0) {
            charSet = StandardCharsets.UTF_8.name();
        }
        // Use the resolved charset; the parameterless getBytes() would fall back
        // to the platform default encoding and ignore charSet entirely.
        byte[] bytes = text.getBytes(charSet);
        ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
        Document inTemplate = new Document(byteArrayInputStream);

        // Set PDF rendering run-time options
        RenderOptionsSpec renderOptionsSpec = new RenderOptionsSpec();
        renderOptionsSpec.setLinearizedPDF(true);
        renderOptionsSpec.setAcrobatVersion(AcrobatVersion.Acrobat_9);

        PDFFormRenderSpec pdfFormRenderSpec = new PDFFormRenderSpec();
        pdfFormRenderSpec.setGenerateServerAppearance(true);
        pdfFormRenderSpec.setCharset("UTF8");

        FormsResult formOut = formsClient.renderPDFForm2(inTemplate, null, pdfFormRenderSpec, null, null);
        Document xfaPdfOutput = formOut.getOutputContent();

        // If the input file is already a static PDF, the method below will throw an exception - handle it
        OutputClient outClient = new OutputClient(myFactory);
        outClient.resetCache();
        Document staticPdfOutput = outClient.transformPDF(xfaPdfOutput, TransformationFormat.PDF, null, null, null);

        byte[] data = StreamIO.toBytes(staticPdfOutput.getInputStream());
        return data;
    } catch (IllegalArgumentException ex) {
        logger.error("Input validation failed for generatePDF request " + ex.getMessage());
        throw new EformsException(ErrorExceptionCode.INPUT_REQUIRED + " - " + ex.getMessage(), ErrorExceptionCode.INPUT_REQUIRED);
    } catch (Exception e) {
        logger.error("Exception occurred in Adobe Services while generating PDF from xdpDocument..", e);
        throw new EformsException(ErrorExceptionCode.PDF_XDP_CONVERSION_EXCEPTION, e);
    }
}
I suggest trying two things:
Check the font. Switch to something very common like Arial or Times New Roman and see if the characters are still lost.
Check the character encoding. It might not be a simple question-mark character you are using, and if so the character encoding will be important. The easiest way is to make sure your question mark is ASCII character 63 (decimal).
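For the second check, a quick scan like the following (a sketch reusing the xdpDocument accessor from the question's code) will show whether the characters involved really are the plain ASCII question mark, code 63, or some other code point:
// Print every character of the content that is not 7-bit ASCII, so that
// look-alike characters (e.g. U+FFFD or a typographic mark) stand out.
String content = xdpDocument.getContent();
for (int i = 0; i < content.length(); i++) {
    char c = content.charAt(i);
    if (c > 127) {
        System.out.printf("Non-ASCII character U+%04X at index %d%n", (int) c, i);
    }
}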
I hope that helps.

vb.net stream reader reads from a .accdb and .xml file without an error [duplicate]

How can I test whether a file that I'm opening in C# using FileStream is a "text type" file? I would like my program to open any file that is text based, for example, .txt, .html, etc.
But not open such things as .doc or .pdf or .exe, etc.
In general: there is no way to tell.
A text file stored in UTF-16 will likely look like binary if you open it with an 8-bit encoding. Equally, someone could save a text file as a .doc (it is a document).
While you could open the file and look at some of the content, all such heuristics will sometimes fail (e.g. Notepad tries to do this; with a careful selection of a few characters, Notepad will guess wrong and display completely different content).
If you have a specific scenario, rather than being able to open and process anything, you should be able to do much better.
I guess you could just check the first 1000 (an arbitrary number) characters and see if there are unprintable characters, or if they are all ASCII in a certain range. If the latter, assume that it is text?
Whatever you do is going to be a guess.
As others have pointed out, there is no absolute way to be sure. However, to determine if a file is binary (which can be said to be easier than determining if it is text), some implementations check for consecutive NUL characters. Git apparently just checks the first 8000 characters for a NUL and, if it finds one, treats the file as binary. See here for more details.
Here is a similar C# solution I wrote that looks for a given number of required consecutive NULs. If IsBinary returns false, then it is very likely your file is text-based.
public bool IsBinary(string filePath, int requiredConsecutiveNul = 1)
{
    const int charsToCheck = 8000;
    const char nulChar = '\0';

    int nulCount = 0;

    using (var streamReader = new StreamReader(filePath))
    {
        for (var i = 0; i < charsToCheck; i++)
        {
            if (streamReader.EndOfStream)
                return false;

            if ((char)streamReader.Read() == nulChar)
            {
                nulCount++;

                if (nulCount >= requiredConsecutiveNul)
                    return true;
            }
            else
            {
                nulCount = 0;
            }
        }
    }

    return false;
}
To get the real type of a file, you must check its header, which won't change even if the extension is modified. You can get the header list here, and use something like this in your code:
using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
{
    using (var reader = new BinaryReader(stream))
    {
        // Read the first X bytes of the file.
        // In this example I want to check if the file is a BMP,
        // whose header is 0x42 0x4D ("BM") - decimal bytes 66 and 77,
        // which the two ReadByte().ToString() calls concatenate to "6677".
        string code = reader.ReadByte().ToString() + reader.ReadByte().ToString();
        if (code.Equals("6677"))
        {
            // it's a BMP file
        }
    }
}
Below is a solution that works for me. This is a general solution that checks for any type of binary file.
/// <summary>
/// This method checks whether the selected file is a binary file or not.
/// </summary>
public bool CheckForBinary()
{
    Stream objStream = new FileStream("your file path", FileMode.Open, FileAccess.Read);
    bool bFlag = true;

    // Iterate through the stream and check the ASCII value of each byte.
    for (int nPosition = 0; nPosition < objStream.Length; nPosition++)
    {
        int a = objStream.ReadByte();

        if (!(a >= 0 && a <= 127))
        {
            break; // Binary file
        }
        else if (objStream.Position == objStream.Length)
        {
            bFlag = false; // Text file
        }
    }
    objStream.Dispose();
    return bFlag;
}
public bool IsTextFile(string FilePath)
{
    using (StreamReader reader = new StreamReader(FilePath))
    {
        int Character;
        while ((Character = reader.Read()) != -1)
        {
            if ((Character > 0 && Character < 8) || (Character > 13 && Character < 26))
            {
                return false;
            }
        }
    }
    return true;
}

Process a CSV file starting at a predetermined line/row using LumenWorks parser

I am using the awesome LumenWorks CSV reader to process CSV files. Some files have over 1 million records.
What I want is to process the file in sections. E.g. I want to process 100,000 records first, validate the data and then send these records over an Internet connection. Once sent, I then reopen the file and continue from record 100,001, on and on till I finish processing the file. In my application I have already created the logic of keeping track of which record I am currently processing.
Does the LumenWorks parser support processing from a predetermined line in the CSV, or does it always have to start from the top? I see it has a buffer variable. Is there a way to use this buffer variable to achieve my goal?
my_csv = New CsvReader(New StreamReader(file_path), False, ",", buffer_variable)
It seems the LumenWorks CSV Reader needs to start at the top. I needed to ignore the first n lines in a file and attempted to pass a StreamReader positioned at the correct row, but got a "Key already exists" Dictionary error when I attempted to get the FieldCount (there were no duplicates).
However, I have found some success by first reading the pre-trimmed file into a StringBuilder and then into a StringReader, which the CSV Reader can read. Your mileage may vary with huge files, but it does help to trim a file:
using (StreamReader sr = new StreamReader(filePath))
{
    string line = sr.ReadLine();
    StringBuilder sbCsv = new StringBuilder();
    int lineNumber = 0;
    do
    {
        lineNumber++;

        // Ignore the start rows of the CSV file until we reach the header
        if (lineNumber >= Constants.HeaderStartingRow)
        {
            // Place into StringBuilder
            sbCsv.AppendLine(line);
        }
    }
    while ((line = sr.ReadLine()) != null);

    // Use a StringReader to read the trimmed CSV file into a CSV Reader
    using (StringReader str = new StringReader(sbCsv.ToString()))
    {
        using (CsvReader csv = new CsvReader(str, true))
        {
            int fieldCount = csv.FieldCount;
            string[] headers = csv.GetFieldHeaders();
            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < fieldCount; i++)
                {
                    // Do Work
                }
            }
        }
    }
}
You might be able to adapt this solution to reading chunks of a file - e.g. as you read through the StreamReader, assign different "chunks" to a Collection of StringBuilder objects, and also prepend the header row if you want it.
Try using CachedCsvReader instead of CsvReader, with its MoveTo(long recordNumber), MoveToStart(), etc. methods.

Split a "tagged" PDF document into multiple documents, keeping the tagging

In a project I have to split a PDF document into two documents, one containing all blank pages, and one containing all pages with content.
For this job, I use a PdfReader to read the source file, and two PdfCopy objects (one for the blank-pages document, one for the pages-with-content document) to write the files to.
I use GetImportedPage to read a PdfImportedPage, which is then added to one of the PdfCopy writers.
Now, the problem is the following: the source file uses the "tagged PDF format". To preserve this (which is absolutely required), I use the SetTagged() method on both PdfCopy writers, and use the extra third parameter in GetImportedPage(...) to keep the tagged format. However, when calling AddPage(...) on the PdfCopy writer, I get an invalid cast exception:
"Unable to cast object of type 'iTextSharp.text.pdf.PdfDictionary' to type 'iTextSharp.text.pdf.PRIndirectReference'."
Does anyone have any ideas on how to solve this? Any hints?
Also: the project currently references version 5.1.0.0 of the iText libraries. In 5.4.4.0 the third parameter to GetImportedPage does not seem to be there anymore.
Below, you can find a code extract:
iTextSharp.text.Document targetPdf = new iTextSharp.text.Document();
iTextSharp.text.Document blankPdf = new iTextSharp.text.Document();
iTextSharp.text.pdf.PdfReader sourcePdfReader = new iTextSharp.text.pdf.PdfReader(inputFile);
iTextSharp.text.pdf.PdfCopy targetPdfWriter = new iTextSharp.text.pdf.PdfSmartCopy(targetPdf, new FileStream(outputFile, FileMode.Create));
iTextSharp.text.pdf.PdfCopy blankPdfWriter = new iTextSharp.text.pdf.PdfSmartCopy(blankPdf, new FileStream(blanksFile, FileMode.Append));
targetPdfWriter.SetTagged();
blankPdfWriter.SetTagged();

try
{
    iTextSharp.text.pdf.PdfImportedPage page = null;
    int n = sourcePdfReader.NumberOfPages;

    targetPdf.Open();
    blankPdf.Open();
    blankPdf.Add(new iTextSharp.text.Phrase("This document contains the blank pages removed from " + inputFile));
    blankPdf.NewPage();

    for (int i = 1; i <= n; i++)
    {
        // Collect the page's string tokens to decide whether the page is blank
        byte[] pageBytes = sourcePdfReader.GetPageContent(i);
        string pageText = "";
        iTextSharp.text.pdf.PRTokeniser token = new iTextSharp.text.pdf.PRTokeniser(new iTextSharp.text.pdf.RandomAccessFileOrArray(pageBytes));
        while (token.NextToken())
        {
            if (token.TokenType == iTextSharp.text.pdf.PRTokeniser.TokType.STRING)
            {
                pageText += token.StringValue;
            }
        }

        if (pageText.Length >= 15)
        {
            page = targetPdfWriter.GetImportedPage(sourcePdfReader, i, true);
            targetPdfWriter.AddPage(page);
        }
        else
        {
            page = blankPdfWriter.GetImportedPage(sourcePdfReader, i, true);
            blankPdfWriter.AddPage(page);
            blankPageCount++;
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine("Exception at LOC1: " + ex.Message);
}
The error occurs in the call to targetPdfWriter.AddPage(page); near the end of the code sample.
Thank you very much for your help.
Koen.

Append binary data to serialized xml header

I need to append binary data to a file, but before this data there is an XML header. The whole file won't be a proper XML file, but it must have a proper XML header like the following:
<EncryptedFileHeader>
    <Algorithm>name</Algorithm>
    <KeySize>256</KeySize>
    <SubblockLength>64</SubblockLength>
    <CipherMode>ECB</CipherMode>
    <sessionKey>sessionKey</sessionKey>
</EncryptedFileHeader>
*binary data*
The XML header I can do easily with JAXB marshalling, and it would be even easier to encode the binary data as Base64 and store it in a node inside the XML. But here is the catch: I have to store it as binary to avoid the roughly 33% space overhead of Base64.
So the question is how to add this data and, of course, later read it back (serialize/deserialize)?
Another question is how to remove the XML declaration from the first line of the document?
I tried to use:
marshaller.setProperty("com.sun.xml.bind.xmlDeclaration", Boolean.FALSE);
but it throws an exception:
javax.xml.bind.PropertyException: name: com.sun.xml.bind.xmlDeclaration value: false
at javax.xml.bind.helpers.AbstractMarshallerImpl.setProperty(AbstractMarshallerImpl.java:358)
at com.sun.xml.internal.bind.v2.runtime.MarshallerImpl.setProperty(MarshallerImpl.java:527)
Thanks
Actually, I solved this by serializing the XML header with JAXB, then appending the binary data (byte array) to the existing file.
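A minimal sketch of that write side (the EncryptedFileHeader class and the header, cipherBytes and outputPath names are illustrative, not from the original code):
import java.io.FileOutputStream;
import java.io.IOException;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

void writeEncryptedFile(String outputPath, EncryptedFileHeader header, byte[] cipherBytes)
        throws IOException, JAXBException {
    try (FileOutputStream out = new FileOutputStream(outputPath)) {
        Marshaller m = JAXBContext.newInstance(EncryptedFileHeader.class).createMarshaller();
        m.marshal(header, out); // writes the XML header to the start of the file
        out.write(cipherBytes); // appends the binary payload untouched
    }
}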
Reading from the file with a BufferedReader goes as follows:
BufferedReader reader = new BufferedReader(new FileReader("filepath"));
String line, results = "";
while ((line = reader.readLine()) != null) {
    results += line;
}
reader.close();

String[] splited = results.split("</EncryptedFileHeader>");
splited[0] += "</EncryptedFileHeader>";
String s0 = splited[0];
String s1 = new String(splited[1]);
ByteArrayInputStream bais = new ByteArrayInputStream(s0.getBytes());
Now I have a problem with the second split string, s1, which holds the data that came from byteArrayOutputStream.toByteArray(). I have to transfer the data from this string back to a byte array. From:
'��A����g�X���
to something like:
[39, -63, -116, 65, -123, -114, 27, -115, -2, 103, -64, 88, -99, -96, -26, -12]
I tried (on the same machine):
byte[] bytes = s1.getBytes();
but the resulting array is different and contains 34 bytes instead of 16. I have read a lot about encodings but still have no idea.
EDIT:
The problem with the different number of bytes was due to the different representation of newlines by character streams and byte streams.
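Given that, reading the file as raw bytes and splitting at the closing tag sidesteps character decoding altogether. A minimal sketch ("filepath" and the tag name match the code above; the indexOf helper is a naive search written for illustration):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class HeaderSplit {
    public static void main(String[] args) throws IOException {
        byte[] all = Files.readAllBytes(Paths.get("filepath"));
        byte[] marker = "</EncryptedFileHeader>".getBytes(StandardCharsets.US_ASCII);
        int idx = indexOf(all, marker);
        if (idx < 0) throw new IOException("header end tag not found");
        int end = idx + marker.length;
        // The header bytes can safely be decoded as text; the payload stays binary.
        String headerXml = new String(all, 0, end, StandardCharsets.UTF_8);
        byte[] payload = Arrays.copyOfRange(all, end, all.length);
        // Note: payload may begin with the newline that followed the closing tag.
        System.out.println(headerXml);
        System.out.println(payload.length + " payload bytes");
    }

    // Naive byte search; fine for a small header at the start of the file.
    private static int indexOf(byte[] haystack, byte[] needle) {
        outer:
        for (int i = 0; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) continue outer;
            }
            return i;
        }
        return -1;
    }
}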