Is there any significant performance difference between the following two?
String json = mapper.writeValueAsString(searchResult);
response.getWriter().write(json);
vs
mapper.writeValue(response.getWriter(), searchResult);
writeValueAsString JavaDoc says:
Method that can be used to serialize any Java value as a String.
Functionally equivalent to calling writeValue(Writer,Object) with
StringWriter and constructing String, but more efficient.
So, in case you want to write JSON to a String, it is better to use this method than writeValue. Both methods use _configAndWriteValue internally.
In your case it is better to write the JSON directly to response.getWriter() than to generate a String object and then write it to response.getWriter().
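For illustration, here is a minimal sketch of the direct-write approach in a servlet; the SearchServlet class, the findResults helper, and the javax.servlet setup are assumptions for the example, not code from the question:
import com.fasterxml.jackson.databind.ObjectMapper;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class SearchServlet extends HttpServlet {
    // Reuse a single ObjectMapper; it is thread-safe once configured
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        Object searchResult = findResults(request); // hypothetical lookup
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        // Serialize straight to the response writer, skipping the intermediate String
        mapper.writeValue(response.getWriter(), searchResult);
    }

    private Object findResults(HttpServletRequest request) {
        return new Object(); // placeholder for the real search logic
    }
}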
In the Bond C# manual, it notes the following:
These following changes will break wire compatibility and are not recommended:
Adding or removing required fields
Incompatible change of field types (any type change not covered above); e.g.: int32 to string, string to wstring
...
But it doesn't explain why. The use case here is that I'm using Bond to connect a C# application with a C++ backend. The field is currently a string, and I want to change it to a wstring. The manual notes that C# strings can handle C++ strings and C++ wstrings. So why can't I just change the field type from string to wstring? Why does this break wire compatibility?
In Bond's binary formats, strings are UTF8 encoded (no BOM) and wstrings are UTF16-LE encoded. If you were to switch a field from string to wstring, the reading side would try to interpret UTF8 data as UTF16-LE data. These two encodings are not compatible with each other, hence a field type change from string to wstring is a breaking change.
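To see why the two encodings cannot be mixed, here is a small illustrative Java sketch (not Bond code) that encodes a string as UTF-8 and then decodes the same bytes as UTF-16LE, which is roughly what a mismatched reader would end up doing:
import java.nio.charset.StandardCharsets;

public class EncodingMismatch {
    public static void main(String[] args) {
        String original = "héllo";
        // On the wire, a Bond string field is UTF-8 encoded (no BOM)
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        // A reader expecting a wstring decodes the same bytes as UTF-16LE
        String misread = new String(utf8Bytes, StandardCharsets.UTF_16LE);
        System.out.println(original); // héllo
        System.out.println(misread);  // garbage: the bytes pair up into different code units
    }
}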
Note that the manual says "For example C# string can represent either Bond type string or wstring." It does not say anything about C++ types. When working with Bond across C# and C++, there are three type systems: Bond's, C#'s, and C++'s.
If, on the C++ side, you want to use something akin to std::wstring to store the field in memory, take a look at using Custom type mapping with the string concept.
What is the difference between these ways of reading properties from the payload? For example, there is a property in the payload named con_id. When I read this property like #[payload.con_id] it comes back as null, whereas #[payload.'con_id'] returns the value.
A few other notations I know of are #[payload['con_id']] and #[json:con_id].
Which one should be used in which scenario? If there are special cases for using a specific notation, please let me know the scenario as well.
Also, what is the recommended notation from a MuleSoft platform support point of view?
In Mule 3 any of those syntaxes is valid, except that the json: evaluator is for querying JSON documents whereas the others are for querying maps/objects. The json: evaluator is also deprecated in Mule 3 in favor of transforming the payload to a map and using the MEL expressions below.
payload.property
payload.'property'
payload['property']
The reason the first one fails in your case is the special character '_'. The underscore forces the field name to be wrapped in quotes.
Typically the . notation is preferred over [''] as it is shorter for accessing map fields; then simply wrap property names in '' for any fields with special characters.
Note that in Mule 4 you don't need to transform to a map/object first. DataWeave expressions replace MEL as the expression language and allow you to query JSON (or any other type of payload) directly without transforming to a map first.
Is there any advantage of using {} instead of string concatenation?
An example from slf4j
logger.debug("Temperature set to {}. Old temperature was {}.", t, oldT);
instead of
logger.debug("Temperature set to"+ t + ". Old temperature was " + oldT);
I think it's about speed optimization, because parameter evaluation (and string concatenation) can be skipped at runtime depending on the logger configuration. But only two parameters are possible, so sometimes there is no choice other than string concatenation. I'm looking for views on this issue.
It is about string concatenation performance. It's potentially significant if you have dense logging statements.
(Prior to SLF4J 1.7) But only two parameters are possible
Because the vast majority of logging statements have 2 or fewer parameters, the SLF4J API up to version 1.6 covers (only) the majority of use cases. The API designers have provided overloaded methods with varargs parameters since API version 1.7.
For those cases where you need more than 2 and you're stuck with pre-1.7 SLF4J, then just use either string concatenation or new Object[] { param1, param2, param3, ... }. There should be few enough of them that the performance is not as important.
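As a quick illustrative sketch (the logger and the x, y, z variables are placeholders, not from the question):
// Pre-1.7 SLF4J: more than two placeholders require an explicit Object[]
logger.debug("x={}, y={}, z={}", new Object[] { x, y, z });

// SLF4J 1.7+: the varargs overload accepts any number of arguments directly
logger.debug("x={}, y={}, z={}", x, y, z);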
Short version: Yes, it is faster, with less code!
String concatenation does a lot of work without knowing whether it is needed (the traditional "is debugging enabled" test known from Log4j), and should be avoided if possible. The {} placeholders allow the toString() calls and string construction to be delayed until after it has been decided whether the event needs capturing. Having the logger format a single string also makes the code cleaner, in my opinion.
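For comparison, a sketch of the traditional guard versus the placeholder form, using the question's variables:
// Traditional guard: avoids the concatenation when DEBUG is off, at the cost of boilerplate
if (logger.isDebugEnabled()) {
    logger.debug("Temperature set to " + t + ". Old temperature was " + oldT);
}

// Placeholder form: formatting and toString() are deferred until the logger
// has decided that the DEBUG event will actually be captured
logger.debug("Temperature set to {}. Old temperature was {}.", t, oldT);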
You can provide any number of arguments. Note that if you use an old version of slf4j (prior to 1.7) and you have more than two arguments for the {} placeholders, you must use the new Object[]{a,b,c,d} syntax to pass an array instead. See e.g. http://slf4j.org/apidocs/org/slf4j/Logger.html#debug(java.lang.String, java.lang.Object[]).
Regarding the speed: Ceki posted a benchmark a while back on one of the lists.
Since String is immutable in Java, the left and right strings have to be copied into a new String for every concatenation. So it is better to go with the placeholder.
Another alternative is String.format(). We are using it in jcabi-log (a static utility wrapper around slf4j).
Logger.debug(this, "some variable = %s", value);
It's much more maintainable and extendable. Besides, it's easy to translate.
I think that from the author's point of view the main reason is to reduce the overhead of string concatenation. Reading the logger's documentation, you can find the following words:
/**
 * <p>This form avoids superfluous string concatenation when the logger
 * is disabled for the DEBUG level. However, this variant incurs the hidden
 * (and relatively small) cost of creating an <code>Object[]</code> before
 * invoking the method, even if this logger is disabled for DEBUG. The variants
 * taking {@link #debug(String, Object) one} and
 * {@link #debug(String, Object, Object) two} arguments exist solely in order
 * to avoid this hidden cost.</p>
 *
 * @param format the format string
 * @param arguments a list of 3 or more arguments
 */
public void debug(String format, Object... arguments);
Concatenation is expensive, so you want it to happen only when needed. By using {}, slf4j performs the concatenation only if the trace is needed. In production, you may configure the log level to INFO, thus ignoring all debug traces.
A trace like this will concatenate the string even if the trace is ignored, which is a waste of time:
logger.debug("Temperature set to"+ t + ". Old temperature was " + oldT);
A trace like this will be ignored at no cost:
logger.debug("Temperature set to {}. Old temperature was {}.", t, oldT);
If you have a lot of debug traces that you ignore in production, using {} is definitely better as it has no impact on performance.
Compliant logging is highly important for application development, as it affects performance.
The non-compliant (concatenation) form results in a redundant toString() invocation on every call, and therefore in redundant temporary memory allocation and CPU processing. This can be seen in a high-scale test execution, where the memory-allocation and method-profiling views show the redundant temporary allocations (screenshots are in the linked post).
Note: I am the author of this blog post, Logging impact on application performance.
I would like to get a list of the VB.NET/C# "wide" functions for Unicode, i.e. AscW, ChrW, MessageBoxW, etc.
Is there a list of these somewhere?
All string methods in .NET are Unicode by default, since System.String itself is Unicode.
From System.String's Documentation:
Represents text as a series of Unicode characters.
Any time you call any method that takes a String as a parameter, you're working in Unicode. There is no need for "wide character" versions of methods in .NET.
In fact, if you want to work with ANSI text, you need to explicitly tell the framework that is what you are doing.
This is typically done via a method on the Marshal class (for interop with other libraries), or via the Encoding class (Encoding.ASCII, or a different character encoding) to convert a series of bytes to or from text.
All .NET strings are already Unicode; there are no ASCII strings to worry about, so the W suffix from the Win32 names was dropped.
Strings in .NET are all Unicode. You don't need specific functions to handle Unicode because it's built in already.