TeXnicCenter - spell check not working correctly

I have installed the 2.02 Stable 64-bit version of TeXnicCenter and have the following problem with the spell check. In one of my existing LaTeX documents, English text is checked correctly and all typos are underlined, but German text in that file is not recognised, even after I change the language in the spelling options. In another of my existing LaTeX documents, however, the spelling tool does not recognise English text but does recognise German.
One hint: the other LaTeX file may have been created in a German Windows environment, whereas I now run Windows 7 in English. Is it possible that this is connected with the text formatting? Is it possible to change it? Or is there a different cause?
Another hint: when I create a new LaTeX file, the spell check works fine for both English and German. So the problem is only with the existing documents.

Good hint from your side about the text encoding, Phil. The solution is a bit different, though: apparently TeXnicCenter saves .tex files with ANSI encoding by default. As soon as the .tex files are saved with UTF-8 encoding, the spell check works fine. There is no option for this in the program's settings; one has to go through File -> Save As and set the encoding while saving.
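If many existing files are affected, a small script can re-save them in bulk. Here is a minimal sketch, assuming the originals are in the Windows-1252 ("ANSI") code page:

```python
from pathlib import Path

# Re-save every .tex file in the current directory as UTF-8.
# Assumes the originals are Windows-1252 ("ANSI"); adjust
# source_encoding if your files use a different code page.
source_encoding = "cp1252"

for tex_file in Path(".").glob("*.tex"):
    text = tex_file.read_text(encoding=source_encoding)
    tex_file.write_text(text, encoding="utf-8")
    print(f"re-encoded {tex_file} as UTF-8")
```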

I know this is an old topic, but here is what solved my issue: manually change the project language. Go to Project -> Properties and then change the language there.

Related

How to convert unusual unicode characters (UTF-8) to PDF?

I would like to convert a text file containing Unicode characters in UTF-8 to a PDF file. When I cat the file or look at it with vim, everything is great, but when I open the file with LibreOffice, the formatting is off. I have tried various fonts, none of which have worked. Is there a font file somewhere on my Ubuntu 16.04 system which is used for display in a terminal window? It seems that would be the font to tell LibreOffice to use.
I am not attached to LibreOffice. Any app that will convert the text file into a PDF file is fine. I have tried txt2pdf and pandoc without success.
To be more specific about the problem, the file's lines, which display correctly in the terminal, render incorrectly in LibreOffice using the Liberation Mono font (and no mono font does better).
I answered you by mail, but here is the answer. You are using some very specific characters, the most difficult to find being in the Miscellaneous Symbols Unicode block: for instance the SESQUIQUADRATE, which appears on your second line as ⚼.
A quick search led me to the following two candidates (for monospace fonts):
Everson Mono
GNU Unifont
The block is also partially covered by PragmataPro, which is a very good font. I tried with an old version and found all of your characters, but an issue occurred because the Sun character (rendered as ☉) seems to be printed twice as wide as the other characters; my version of this font is rather old, though, and perhaps buggy.
Once you have chosen the font that suits your needs, you should be able to render your documents as PDF with various tools. I ran all my experiments with txt2pdf, which I use daily for many documents.
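Since pandoc was already tried without success, the missing piece may just be a font override. Below is a minimal sketch, assuming a recent pandoc with the xelatex engine (older pandoc versions spell the flag --latex-engine) and GNU Unifont installed; the input file name is made up:

```python
import subprocess

# Render a UTF-8 text file to PDF, forcing a font that covers the
# Miscellaneous Symbols block. "notes.txt" is a placeholder file name;
# substitute whichever font you settled on for "GNU Unifont".
subprocess.run([
    "pandoc", "notes.txt",
    "--pdf-engine=xelatex",
    "-V", "mainfont=GNU Unifont",
    "-o", "notes.pdf",
], check=True)
```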

Headless convert-to PDF: soft hyphen replaced with zero-width whitespace

I'm working on a webapp creating LibreOffice documents that I want to convert to PDFs with unoconv and a headless LibreOffice.
There is just one problem I can't solve: the soft hyphens I include in the .odt are replaced with zero-width whitespaces in the resulting PDF. The problem is not related to unoconv - I tried it directly with a headless LibreOffice (same result), with both v4.1.4.2 and v4.2.5.2.
I tried another font (Ubuntu; I use Arial as the body font), as I expected the Arial font missing on Linux to be causing the problem (I see the problem on the production server with Debian 7 as well as on a VirtualBox with Ubuntu 12.04).
I even installed the Arial font, in the hope that the problem was caused by LibreOffice's inability to calculate where to set the "real" hyphens without the font file at hand.
The strange thing: using LO 4.1.4.2 on my Mac (headless, of course) produces flawless PDFs. So the problem must be related either to Linux or to some missing "graphical" package in my server setup. I installed the hyphen-de package, which results in hyphens based on the dictionary, but the specified soft hyphens are still replaced with zero-width whitespaces.
The problem affects body text as well as the text boxes that are used for annotations.
I'd appreciate any hint very much!
I had a similar problem.
I had to install the hyphenation package matching the document's language.
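As a sketch of that fix on a Debian/Ubuntu server (hyphen-de is just the example for a German document; the file and output paths are made up):

```python
import subprocess

# Install the hyphenation dictionary matching the document's language
# (hyphen-de is the German example; requires root privileges), then
# convert headlessly as before.
subprocess.run(["apt-get", "install", "-y", "hyphen-de"], check=True)
subprocess.run([
    "soffice", "--headless", "--convert-to", "pdf",
    "--outdir", "/tmp/out", "letter.odt",
], check=True)
```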

MS Access VBA code editor character encoding and copy/paste

What is the actual encoding used in Access' VBA editor? I have been searching for a concrete answer for quite a while but with no luck.
I thought it was UTF-8 but I'm not very certain.
My main issue is that when writing a query in VBA, I sometimes need to test it in Access' query editor. When copy-pasting, however, I lose my native characters (Greek in my case) as they turn to gibberish.
I have tried pasting in a text editor and saving it as different encodings but I can never recover the original characters.
Thanks in advance.
Edit
Let me explain this a bit further:
I can write my Greek characters in the VBA editor normally. However, when I copy the first line into Access' query editor, I get gibberish, and the same goes for a simple text editor.
So I am inclined to think that the problem lies in the clipboard, due to the encoding used for the Greek characters. I guess they are not Unicode, as I do indeed have to change the system locale for non-Unicode programs. So how are these characters saved and copied? In what encoding?
Answer
Actually, this problem was solved by switching the keyboard input language to Greek (EL) when copying the actual test string.
I am still not sure however, as to why that happens. If anyone can provide some insight into this, I would love to hear it.
Thanks again
The VBA editor does not support Unicode characters, either for input or display. Instead, it uses the older Windows technology called "code pages" to provide support for non-ASCII characters.
So, the character encoding in the VBA editor corresponds to the code page used by the Windows system locale, as specified in the "Regional and Language Options" control panel. For example, with my system locale set to "Greek (Greece)", I can enter Greek characters into my VBA code. However, if I switch my Windows system locale back to "English (United States)" and re-open my VBA project, the Greek characters change to the corresponding characters in the new code page.
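The remapping is easy to reproduce outside VBA. Here is a small Python sketch of the same code-page effect (the Greek sample string is made up):

```python
# The same byte values mean different characters under different code
# pages. Encoding Greek text as cp1253 and re-reading those bytes as
# cp1252 mimics what a non-Unicode app shows after the locale changes.
greek = "Δοκιμή"                # hypothetical sample string
raw = greek.encode("cp1253")    # bytes as stored under a Greek locale
print(raw.decode("cp1252"))     # how a US locale reinterprets them
print(raw.decode("cp1253"))     # decoded with the right code page
```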
If "Control Panel" -> "Regional and Language Options" -> "System Locale" is set correctly but you still suffer from this problem some times then note that while you're copying your keyboard layout must be switched to the non-English language.
This is applicable to all non-unicode-aware applications not only VBA.
Credit goes to #parakmiakos
Details in this thread: http://www.pcreview.co.uk/forums/use-greek-characters-visual-basic-editor-t2097705.html
It looks like a matter of making sure your OS locale is set properly, plus the font choice inside the VBA editor.
I had a similar problem with Cyrillic characters. Part of the problem is solved when the system locale is set correctly.
However, the VBA editor still does not recognize Cyrillic characters when it has to interpret them from inside itself.
For example, it cannot display the characters from the command:
MsgBox "Здравей"
but if the sheet name is in Cyrillic characters, it displays it well:
MsgBox ActiveSheet.Name
Finally, it turned out that these kinds of problems were solved when I changed to the 32-bit version of MS Office.

docsplit conversion to PDF mangles non-ASCII characters in docx on Linux

My documentation management app involves converting a .docx file containing non-ASCII Unicode characters (Japanese) to PDF with docsplit (via the Ruby gem, if it matters). It works fine on my Mac. On my Ubuntu machine, the resulting PDF has square boxes where the characters should be, whether invoked through Ruby or directly on the command line. The odd thing is, when I open up the .docx file directly in LibreOffice and do a PDF export, it works fine. So it would seem there is some aspect to how docsplit invokes LO that causes the Unicode characters to be handled improperly. I have scoured various parts of the documentation and code for options that I might need to specify, with no luck. Any ideas of why this could be happening?
FWIW, docsplit invokes LO with the following options line in pdf_extractor.rb:
options = "--headless --invisible --norestore --nolockcheck --convert-to pdf --outdir #{escaped_out} #{escaped_doc}"
I notice that the output format can optionally be followed by an output filter, as in pdf:output_filter_name -- is this something I need to think about using?
I have tracked this down to the --headless option which docsplit passes to LibreOffice. That invokes a non-X version of LO, which apparently does not have the necessary Japanese fonts. Unfortunately, there appears to be no way to pass options to docsplit to tell it to omit the --headless option to LO, so I will end up patching or forking the code somehow.
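As a stopgap before patching, the same conversion can be invoked directly, reusing docsplit's option line minus --headless. A rough sketch (the input file and output directory are placeholders):

```python
import subprocess

# Mirror the pdf_extractor.rb invocation shown above, but without
# --headless, so the X-enabled LibreOffice build (and its font support)
# is used. "manual.docx" and "/tmp/pdfs" are placeholder names.
subprocess.run([
    "soffice", "--invisible", "--norestore", "--nolockcheck",
    "--convert-to", "pdf", "--outdir", "/tmp/pdfs", "manual.docx",
], check=True)
```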

Sublime Text 2 prints wrong comment block on shortcut

For some time now, when I hit cmd+/ (Mac) in .php files, it prints the comment markers for HTML and not for PHP:
so it does <!-- --> instead of //.
The format is set to PHP, it shows 'php' at the bottom right of the editor, and all the syntax highlighting is fine.
Any idea?
Thanks :)
Sublime is decent, though I prefer Notepad++ and Brackets for development. For your issue, uninstall and then reinstall Sublime with default settings and see if it works. If it does work and you have custom plugins, add them back one at a time and test to see what is causing the problem. If nothing is causing the problem once everything is set back to how you had it before, I am guessing that some form of data corruption occurred, or that the shortcut for a PHP comment is mixed up with the one for an HTML comment. The text editor shouldn't be treating one language as a completely different one. I hope this helps.
What commenting is done is based on the scoping rules. I'm not a PHP programmer so I might get some of the details wrong, but you should get the general idea. If I understand correctly, PHP files consist of a mix of HTML elements and PHP code blocks. ST allows for languages to be "embedded" within another file type (in this case, embedding HTML in the PHP syntax). If the cursor is in an html region, it will use HTML commenting. If it is a PHP region, it will use PHP commenting. I know there are some issues with edge cases, but try moving the cursor into the actual PHP code block, then using the comment command. You could probably find a modified language file that will just treat everything as PHP if you want.
To check the scopes being applied, you can use the ScopeHunter plugin. Alternatively, you can use ctrl+alt+shift+p on Windows and Linux, or cmd+alt+p on OS X, to display the scope in the status bar.
I hope that helps clarify how commenting works. How you choose to actually "solve" this is up to you though, as it's more of a personal preference thing.