I have created a number of queries in CDA for use with Pentaho Report Designer. I can link to my CDA queries on the server but when I try to preview some of the queries I get the following error:
Character reference "" is an invalid XML character.
The queries run without a problem on the CDA previewer.
Thanks in advance,
Fergus
That character is indeed an invalid XML character in XML version 1.0; more specifically, it is a control character. The quick explanation as to why control characters are illegal:
[...] a markup language should not have any need to support
transmission and flow control characters and including them would
create a problem for any editors and parsers in binary conversion.
The Pentaho Report Designer parser won't handle these control characters. Therefore, my suggestion is to change these values in your XML to &amp;, which is one of the five predefined entities in the XML specification and renders the character &.
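If the offending values come out of the database through a CDA SQL query, one option is to clean them up in the query itself before CDA serializes the result to XML. A rough sketch (the column and table names are made up, CHAR(2) stands in for whichever control character your data actually contains, and the exact function names vary by database):
-- strip the control character, then escape raw ampersands as the predefined entity
SELECT REPLACE(REPLACE(description, CHAR(2), ''), '&', '&amp;') AS description
FROM some_table;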
We have one column in our table whose name is "House£1000", but after deploying the code from the Azure Build Pipeline we could see that the pound sign got converted to "?" in the Azure Build Artifacts. Can anyone suggest something that can resolve this issue?
The possible cause is the use of non-Unicode data types like char and varchar when defining the columns.
Covering characters from all languages can involve a different number of bytes per character.
Using Unicode data types like nvarchar and nchar stores the values in a Unicode encoding, but the column may still not be allocated enough bytes for that language. Size the column so the symbol fits, e.g. VARCHAR(270), and widen currency columns to DECIMAL(19, 9) to avoid data loss through truncation.
Enable UTF-8 encoding when defining the columns. If that is not possible, you can sometimes work around it with escape characters, e.g. [%] or [^], which stand for a literal % and ^.
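As a rough sketch of both column options (the table and column names are made up, and the UTF-8 collation requires SQL Server 2019 or later):
CREATE TABLE dbo.HousePrices
(
    HouseLabel     NVARCHAR(270),                                          -- Unicode type keeps the £ intact
    HouseLabelUtf8 VARCHAR(270) COLLATE Latin1_General_100_CI_AS_SC_UTF8,  -- UTF-8-enabled VARCHAR
    Price          DECIMAL(19, 9)                                          -- wide enough to avoid truncation
);

INSERT INTO dbo.HousePrices (HouseLabel, HouseLabelUtf8, Price)
VALUES (N'House£1000', 'House£1000', 1000.00);  -- note the N prefix for the Unicode literal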
Please go through this Collation and Unicode support - SQL Server | Microsoft Docs
Reference:
Introducing UTF-8 support for Azure SQL Database | Azure updates | Microsoft Azure
storing-uk-pound-sterling-in-a-database
I am a new developer who just started using datastage (coming from a bit of experience with SSIS). One of the first things that I am doing is working with XML data flow into a database from MQ. I connect to the MQ, use an XML job to map out the tags to each db column, and then insert it into the db. However, I am having an issue with the incoming xml. One of the fields on each xml file that I process contains the same character sequence which is something along the lines of "&$!0" .
When I run my job I get an error saying that that is an illegal xml character and the job fails.
Is there a way within datastage to replace this value as it comes through the xml, or even just remove it? Is there a specific tool I should be using within my job for this?
Obviously the easiest solution would be to fix the data coming in; however, in the meantime while that is getting squared away, I want to be able to do some testing, so an alternate solution would be great for now.
Any advice would be greatly appreciated. I am a new developer so I apologize if this question is a bit ignorant/low level.
Use a text editor like Notepad++ to remove the characters yourself...
To automate it, sed on Linux will do the job, and a Windows build of sed will probably work too!
These are Unicode characters that the XML parser won't accept. You need to remove them before you insert the data into the DB table.
Try the code below:
s = s.replaceAll("\\p{Cntrl}+", "");  // strip control characters, the usual culprits for illegal XML
NOTE: You need to find all such characters and replace them with "" (blank).
You will get more information here
I would like to know what entity is responsible for doing the encoding conversions necessary to execute a SQL command successfully. For example, there are several places from which a SQL command can be issued.
SELECT title from T1 where title='título'
This may be executed from within the database client (which, I assume, reads the database encoding and encodes its commands accordingly), but what happens when this is a string in a programming language whose string encoding is not the same as the database's?
Where does the conversion take place? In the class that connects to the database? Do the database and the connector reach some kind of agreement when they handshake?
I'd love some information about this topic, or a link where I can read about it.
Thanks in advance.
Case Java + MySQL
Internally, Java String text is Unicode encoded.
Java source text should be saved in the same encoding that the java compiler uses; a mismatch between editor and compiler would mess up string literals.
Java thus transfers a Unicode string to the JDBC driver, the database client library.
The MySQL connection string can indicate which encoding the client library should use to communicate with the database server. characterEncoding=UTF-8 (together with useUnicode=true), i.e. Unicode, would be a good international choice.
The database can set a default encoding, as can any table, and even each individual column (say, one for Hindi, one for Chinese); see the sketch below.
Besides the encoding, the collation (the sort order of strings) is language- and encoding-specific, and it has to be considered too.
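A minimal MySQL sketch of those three levels (the database, table, and column names are made up); on the connection side, the Connector/J URL options useUnicode=true&characterEncoding=UTF-8 mentioned above cover the client-to-server link:
CREATE DATABASE shop DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

CREATE TABLE shop.titles
(
    title    VARCHAR(200),                                                   -- inherits the table/database default
    title_hi VARCHAR(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,  -- per-column override
    title_zh VARCHAR(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci   -- a language-specific collation could go here
) DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;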
Background: I have to create a report that will be run regularly to send to an external entity. It calls for a comma delimited text file. Certain fields required for the report contain commas (I can easily parse the commas out of the name fields, but errant commas in the address and certain number fields are trickier). I have no control over the database design or input controls.
I know how to get a comma-delimited text file of query results from SQL Server Management Studio. But the commas in the fields screw everything up. I can change the delimiting character and then get the fields right in Excel, but that's just a workaround - it needs to be able to meet specifications automatically.
This report previously ran on an antiquated DBMS - I have a copy of an old report, and the fields are all enclosed in double quotes ("...."). This would work - though I don't know how the external users parse the fields (not my problem) - but I'm too dumb to figure out how to do it in t-sql.
Thanks!
You can use the Export Data task, but if you must get these results from Management Studio after running a query, go to Tools > Options, find the Results to Grid settings, and check the option to quote strings containing list separators when saving CSV results. The option only takes effect in query windows opened after you change it.
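If you would rather bake the quoting into the query itself, so the output meets the spec regardless of client settings, here is a rough sketch (column and table names are made up). QUOTENAME wraps a value in double quotes and doubles any embedded quotes, but it returns NULL for values longer than 128 characters, so manual concatenation is the fallback for wide columns:
SELECT
    QUOTENAME(LastName, '"')               AS LastName,
    QUOTENAME(Address1, '"')               AS Address1,
    '"' + REPLACE(Notes, '"', '""') + '"'  AS Notes   -- manual quoting for columns wider than 128 characters
FROM dbo.ReportSource;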
A database I report on using SSRS has a column which is stored as a BLOB. I happen to know that the BLOB contains XML (a string).
Is there any way for reporting services to extract this information?
Thanks
Without more details: you should be able to cast the data in T-SQL to the appropriate type, e.g. CAST(MyBlob AS XML), or CAST(MyBlob AS NVARCHAR(MAX)) if the value is stored in an ntext column.
Otherwise you may need to write some conversion code in an SSRS expression to create a calculated field.
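A sketch of what that could look like in the dataset query (MyBlob, dbo.MyTable, and the XPath are placeholders); if the column is image or varbinary rather than ntext, casting through VARBINARY(MAX) first usually works:
SELECT
    CAST(CAST(MyBlob AS VARBINARY(MAX)) AS XML) AS BlobAsXml,   -- whole document as XML
    CAST(CAST(MyBlob AS VARBINARY(MAX)) AS XML).value('(/Order/Title)[1]', 'NVARCHAR(200)') AS Title   -- one element pulled out
FROM dbo.MyTable;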
OK, I found a limited solution - improvements on it would be welcome.
You need to use the function
dbms_lob.substr([fieldname], [number_of_characters], [start_position])
Note that [number_of_characters] appears to have a maximum value of 2000.
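For reference, a usage sketch inside the dataset query (my_table and my_lob are placeholder names):
SELECT dbms_lob.substr(my_lob, 2000, 1) AS lob_text   -- first 2000 characters, starting at position 1
FROM my_table;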