How do you set the character encoding being used by the web server?
Thanks, R.
We found out what was causing the problem. The style sheet had become corrupted, with some erroneous characters written to the front of the file. The server was misinterpreting them, causing it to change the encoding it was using. Deleting those characters fixed it.
Related
I have a problem.
I work with ASP.NET. After I add data (in Arabic) to my form via the browser, it shows up in SQL Server Management Studio like this (???????), and the same thing happens when I retrieve the data entered through the browser. There is no problem when I add Arabic data directly in SQL Server Management Studio.
(There is no problem with English data.)
Somewhere along the way you're converting the character type to something else, probably plain ASCII.
It is impossible to know exactly where, since we don't have any information about that, but I would guess things are getting messed up in your backend. Try commenting out your code and just echoing the values to see whether they are already wrong at that stage.
Maybe change the string type to UTF-16.
That should support any characters you throw at it.
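A minimal T-SQL sketch of that idea (the table and column names are hypothetical, and a default Latin1 collation is assumed): if the column is VARCHAR, or the value does not travel as Unicode, SQL Server converts the Arabic text to the column's code page and it collapses to question marks.

    -- Hypothetical demo table: one non-Unicode and one Unicode column
    CREATE TABLE dbo.ArabicDemo (
        NameVarchar  VARCHAR(50),   -- non-Unicode: Arabic characters are lost
        NameNvarchar NVARCHAR(50)   -- Unicode (UTF-16): Arabic characters survive
    );

    -- Pass the value as a Unicode literal (N'...') in both cases
    INSERT INTO dbo.ArabicDemo (NameVarchar, NameNvarchar)
    VALUES (N'مرحبا', N'مرحبا');

    SELECT NameVarchar, NameNvarchar FROM dbo.ArabicDemo;
    -- NameVarchar  comes back as '?????' (converted to the column's code page)
    -- NameNvarchar comes back as 'مرحبا'

The same rule applies on the ASP.NET side: keep the columns NVARCHAR and send the values as Unicode parameters (SqlDbType.NVarChar) instead of building the SQL string by concatenation.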
I am running the Pentaho community edition, version 7.1, and I am facing a problem I can't solve.
I have a transformation where I use German umlauts like ä, ö, ü, because I read in or define text that contains these letters. In Spoon the transformation runs fine, but if I transfer the transformation/job to the Carte server, which is running on the same PC, I get the error shown in the attachment.
[error screenshot]
If I remove all of these letters, the transformation/job also runs fine on the Carte server.
Does somebody have an idea how to configure the server or environment? I can't simply remove these letters from the German text.
Thanks, Armin
Have a look at Pentaho Jira issue PDI-1101.
This was supposed to have been fixed back in 2008. If it has reappeared, build a one-step transformation that reproduces the problem and reopen the case.
In the meantime, try one of the workarounds proposed in the Jira case.
We have recently upgraded some of our servers running Reporting Services. The servers are now running Windows Server 2016 and SSRS 2014. Previously we were running SSRS 2008.
I'm not sure if my problem is related to the OS upgrade, or the SSRS upgrade.
The problem is that after the upgrade, reports rendered to PDF have started doing some font/text replacement magic on every text block containing a Norwegian character (æ, ø, å).
SSRS is embedding a new font with Identity-H encoding and apparently corrupting the underlying text. The PDF looks good, but text search in Adobe Reader doesn't work on the affected text blocks, and if I copy and paste the text into Notepad, the entire line containing a Norwegian character is garbled.
The affected .rdl uses Arial as its font. Arial supports Norwegian characters and is installed on the server, so I'm not sure why SSRS is doing this.
How can I stop SSRS from doing this Identity-H replacement?
Or if SSRS is correct to do so, how can I make searching and copy-pasting work?
I found a thread with the identical issue on the MSDN forums. They reported it as a bug to Microsoft, who responded like this:
"Posted by Microsoft on 18.04.2016 at 23:58:
We've addressed this issue in SQL Server 2016. Thanks for taking the time to submit the feedback." link
The solution is apparently to upgrade to SQL Server 2016.
I need to be able to save Chinese characters in my active database with ColdFusion 9.
I have already figured out how to do it: set the field types to NVARCHAR and NTEXT.
I have ticked "Enable High ASCII characters and Unicode for data sources configured for non-Latin characters" for the data source under the Data Sources section.
It works great but... here is the question.
Will changing the "Enable High ASCII characters and Unicode for data sources configured for non-Latin characters" option create any other downstream issues in the current application? We will also need to update the database structure, and I am not sure what effect this option will have on my legacy code.
That should not have any effect on your database or your code. It's just a setting for how the application server (ColdFusion) communicates with your database server.
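For the database-structure change mentioned in the question, here is a minimal sketch (hypothetical table and column names) of converting existing columns to Unicode types. Note that NTEXT is deprecated; on SQL Server 2005 and later, NVARCHAR(MAX) is the usual choice for long text.

    -- Hypothetical table/column names; back up the table before altering it
    ALTER TABLE dbo.Articles
        ALTER COLUMN Title NVARCHAR(200) NULL;   -- was VARCHAR(200)

    ALTER TABLE dbo.Articles
        ALTER COLUMN Body NVARCHAR(MAX) NULL;    -- was TEXT; NVARCHAR(MAX) rather than NTEXT where available

    -- Spot-check that existing rows survived the conversion
    SELECT TOP (10) Title, Body FROM dbo.Articles;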
I have a new server with the same Classic ASP code connecting to the same SQL Server 2000 database with the same connection string, yet it seems to be pulling data out of the database differently now. Specifically, there is a custom encryption function that creates special (non-ASCII) characters and stores them in a VARCHAR field. (This is legacy code.) Since nothing has changed except the web server, it has been hard to diagnose this problem.
Is there some setting controlling the database driver that would allow this data to come out of the database correctly? It seems the character set is not treated the same on the new server as it was on the old one. Is there something I can change in the ODBC driver settings?
The server version change is from IIS 6 to IIS 7.5. The new server obviously also has new ODBC driver versions.
Any help is appreciated.
I suspect this has something to do with the locale rather than anything else. However, I don't understand locales. :-(
If it is a stored procedure, a quick fix might be to change the data type of the DB parameter/column to NVARCHAR. With ASP the values will be Unicode BSTRs in the application anyway, so moving the conversion into the database may make it easier to control, if necessary by specifying a collation to use for the conversion.
If you have the ASP code, you could also edit the SELECT to say cast(password as nvarchar(50)) as password, or similar, to achieve the same result.
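A sketch of both suggestions in T-SQL, with hypothetical table and column names and an example collation: applying a collation to the VARCHAR expression before the cast controls which code page is assumed when the stored bytes are widened to Unicode.

    -- Plain cast: the column's own collation decides how the bytes are interpreted
    SELECT CAST(Password AS NVARCHAR(50)) AS Password
    FROM dbo.Users;

    -- Force a specific code page for the conversion by collating the VARCHAR value first
    -- (SQL_Latin1_General_CP1_CI_AS is only an example collation)
    SELECT CAST((Password COLLATE SQL_Latin1_General_CP1_CI_AS) AS NVARCHAR(50)) AS Password
    FROM dbo.Users;

Whether this round-trips the encrypted bytes unchanged depends on which code page the old server was assuming, so test it against a few known values first.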